Selected papers of the "Turing Centenary Conference: CiE 2012"

2012. Editors: Arnold Beckmann, Anuj Dawar


1. Tree-Automatic Well-Founded Trees

Martin Huschenbett ; Alexander Kartzow ; Jiamou Liu ; Markus Lohrey.
We investigate tree-automatic well-founded trees. Using Delhommé's decomposition technique for tree-automatic structures, we show that the (ordinal) rank of a tree-automatic well-founded tree is strictly below $\omega^\omega$. Moreover, we take a step towards proving that the ranks of tree-automatic well-founded partial orders are bounded by $\omega^{\omega^\omega}$: we prove this bound for what we call upwards linear partial orders. As an application of our result, we show that the isomorphism problem for tree-automatic well-founded trees is complete for the level $\Delta^0_{\omega^\omega}$ of the hyperarithmetical hierarchy with respect to Turing reductions.
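
As a brief illustration of the notion of ordinal rank (a standard definition, included here for orientation and not quoted from the paper), the rank of a node $v$ in a well-founded tree is
\[
\mathrm{rank}(v) \;=\; \sup\{\, \mathrm{rank}(w)+1 \;:\; w \text{ a child of } v \,\},
\]
so a leaf has rank $0$, the root of a path with $n$ edges has rank $n$, and a root whose children are the roots of paths of every finite length already has rank $\omega$; the paper bounds how far such ranks can climb for tree-automatic presentations.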

2. Admissibility in Finitely Generated Quasivarieties

George Metcalfe ; Christoph Röthlisberger.
Checking the admissibility of quasiequations in a finitely generated (i.e., generated by a finite set of finite algebras) quasivariety Q amounts to checking validity in a suitable finite free algebra of the quasivariety, and is therefore decidable. However, since free algebras may be large even for small sets of small algebras and very few generators, this naive method for checking admissibility in Q is not computationally feasible. In this paper, algorithms are introduced that generate a minimal (with respect to a multiset well-ordering on their cardinalities) finite set of algebras such that the validity of a quasiequation in this set corresponds to admissibility of the quasiequation in Q. In particular, structural completeness (validity and admissibility coincide) and almost structural completeness (validity and admissibility coincide for quasiequations with unifiable premises) can be checked. The algorithms are illustrated with a selection of well-known finitely generated quasivarieties, and are adapted to also handle admissibility of rules in finite-valued logics.
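
For orientation, a standard way to contrast the two notions discussed above (an illustration under common conventions, not quoted from the paper): a quasiequation $s_1 \approx t_1, \dots, s_n \approx t_n \Rightarrow s \approx t$ is \emph{valid} in Q when
\[
Q \models \big( s_1 \approx t_1 \;\&\; \dots \;\&\; s_n \approx t_n \big) \Rightarrow s \approx t,
\]
and it is \emph{admissible} in Q when every substitution $\sigma$ with $Q \models \sigma(s_i) \approx \sigma(t_i)$ for all $i$ also satisfies $Q \models \sigma(s) \approx \sigma(t)$; as the abstract recalls, the latter amounts to validity in a suitable free algebra of Q.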

3. Lower Bound on Weights of Large Degree Threshold Functions

Vladimir V. Podolskii.
An integer polynomial $p$ of $n$ variables is called a \emph{threshold gate} for a Boolean function $f$ of $n$ variables if, for all $x \in \{0,1\}^n$, $f(x)=1$ if and only if $p(x)\geq 0$. The \emph{weight} of a threshold gate is the sum of the absolute values of its coefficients. In this paper we study how large the weight may need to be when we fix a function and a threshold degree. We prove a $2^{\Omega(2^{2n/5})}$ lower bound on this value. The best previous bound was $2^{\Omega(2^{n/8})}$ (Podolskii, 2009). In addition we present a substantially simpler proof of the weaker $2^{\Omega(2^{n/4})}$ lower bound. This proof is conceptually similar to other proofs of bounds on weights of nonlinear threshold gates, but avoids many of the technical details arising in those proofs. We hope that this proof will help to convey the ideas behind the construction used to prove these lower bounds.
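
As a small worked example of these definitions (an illustration only, not taken from the paper): the polynomial
\[
p(x_1, x_2) \;=\; x_1 + x_2 - 1
\]
is a degree-$1$ threshold gate for $\mathrm{OR}(x_1,x_2)$, since $p(x) \geq 0$ exactly when at least one variable equals $1$, and its weight is $|1|+|1|+|-1| = 3$. The question studied here is how large such weights are forced to be once the function and the degree of the gate are fixed.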

4. Automatic functions, linear time and learning

John Case ; Sanjay Jain ; Frank Stephan.
The present work determines the exact nature of {\em linear time computable} notions which characterise automatic functions (those whose graphs are recognised by a finite automaton). The paper also determines which types of linear time notions permit full learnability for learning in the limit of automatic classes (families of languages which are uniformly recognised by a finite automaton). In particular, it is shown that a function is automatic iff there is a one-tape Turing machine with a left end which computes the function in linear time, where the input before the computation and the output after the computation both start at the left end. It is known that learners realised as automatic update functions are restrictive for learning. In the present work it is shown that one can overcome this problem by providing work tapes in addition to a resource-bounded base tape while keeping the update time linear in the length of the longest datum seen so far. In this model, one additional such work tape provides extra learning power over the automatic learner model, and two additional work tapes give full learning power. Furthermore, one can also consider additional queues or additional stacks in place of additional work tapes; for these devices, one queue or two stacks are sufficient for full learning power, while one stack is insufficient.
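
As a hedged reminder of the underlying notion (a standard formulation, not quoted from the paper): a function $f$ is automatic when the convolution of its graph,
\[
\{\, \mathrm{conv}(x, f(x)) \;:\; x \in \mathrm{dom}(f) \,\},
\]
is recognised by a finite automaton, where $\mathrm{conv}(x,y)$ writes $x$ and $y$ symbol by symbol in parallel, padding the shorter word with a special blank symbol.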

5. Reductions to the set of random strings: The resource-bounded case

Eric Allender ; Harry Buhrman ; Luke Friedman ; Bruno Loff.
This paper is motivated by a conjecture that BPP can be characterized in terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random strings. In this paper we show that an approach laid out in [Allender et al.] to settle this conjecture cannot succeed without significant alteration, but that it does bear fruit if we consider time-bounded Kolmogorov complexity instead. We show that if a set A is reducible in polynomial time to the set of time-t-bounded Kolmogorov-random strings (for all large enough time bounds t), then A is in P/poly, and that if in addition such a reduction exists for every universal Turing machine one uses in the definition of Kolmogorov complexity, then A is in PSPACE.
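
For orientation (one common convention, stated here as an assumption rather than quoted from the paper), the time-$t$-bounded Kolmogorov complexity of a string $x$ relative to a universal machine $U$, and the corresponding set of random strings, can be written as
\[
K^t_U(x) = \min\{\, |d| : U(d) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\}, \qquad R_{K^t_U} = \{\, x : K^t_U(x) \geq |x| \,\}.
\]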

6. Finite state verifiers with constant randomness

Cem Say ; Abuzer Yakaryilmaz.
We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information, where the universal player is allowed a constant number of moves, equals $\mathsf{NL}$.

7. On the relative proof complexity of deep inference via atomic flows

Anupam Das.
We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs, we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. As results, we obtain a polynomial simulation of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS.
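
For concreteness (one standard rendering; the paper works with variants of it), the propositional pigeonhole principle $\mathrm{PHP}^{n+1}_{n}$ asserts that $n+1$ pigeons cannot be placed injectively into $n$ holes:
\[
\bigwedge_{i=1}^{n+1} \bigvee_{j=1}^{n} p_{ij} \;\rightarrow\; \bigvee_{1 \le i < i' \le n+1} \bigvee_{j=1}^{n} \big( p_{ij} \wedge p_{i'j} \big).
\]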