2012
Editors: Arnold Beckmann, Anuj Dawar
The seven papers in this special issue arose from the 2012 Turing Centenary Conference CiE 2012: How the World Computes, held at the University of Cambridge in June 2012. CiE 2012 was the eighth meeting in the series of conferences associated with the Association for Computability in Europe.
The main aim of the Association for Computability in Europe is to promote the development, particularly in Europe, of computability-related science, ranging over mathematics, computer science, and applications in various natural and engineering sciences such as physics and biology, including the promotion of the study of philosophy and history of computing as it relates to questions of computability.
CiE 2012 was one of a series of special events, running throughout the Alan Turing Year, celebrating Turing's unique impact on mathematics, computing, computer science, informatics, morphogenesis, artificial intelligence, philosophy and the wider scientific world. Its central theme was the computability-theoretic concerns underlying the broad spectrum of Turing's interests, and the contemporary research areas founded upon and animated by them. In this sense, CiE 2012, held in Cambridge in the week running up to the centenary of Turing's birthday, dealt with the essential core of what made Turing's contribution so influential and long-lasting.
As is usual in computer science, CiE 2012 had a regular pre-proceedings volume published in the book series Lecture Notes in Computer Science:
Barry S. Cooper, Anuj Dawar, and Benedikt Löwe (eds.), How the World Computes, Turing Centenary Conference and 8th Conference on Computability in Europe, CiE 2012, Cambridge, UK, June 18-23, 2012, Proceedings, Heidelberg 2012 [Lecture Notes in Computer Science 7318].
As a follow-up to the conference, the organizers of CiE 2012 have also prepared post-conference publications. Two special issues of journals are being edited with journal versions of talks and presentations at CiE 2012. Our publication policy does not allow duplicate publication of the same research content: to be accepted for a post-proceedings special issue, a journal version of a talk must contain unpublished research content beyond what is printed in the LNCS volume.
This special issue of the journal Logical Methods in Computer Science is one of the post-conference publications. The special issue underwent a thorough and strict refereeing process. After CiE 2012, we invited nine authors to submit a journal version of their paper to this special issue; in the end, we accepted the seven papers that the reader can find in this issue.
The selection procedure owes much to the many referees whose careful work ensured the quality of this special issue. The paper by Eric Allender, Harry Buhrman, Luke Friedman, and Bruno Loff represents the special session on Cryptography, Complexity, and Randomness (organized by Rod Downey and Jack Lutz) at CiE 2012. The remaining six papers are full versions of contributed talks; all of these have short versions published in the LNCS pre-proceedings volume mentioned above.
We would like to thank all our referees for their help in producing this special issue, including the members of the CiE 2012 Programme Committee.
Finally, we would also like to express our gratitude to the many sponsors that made the Turing Centenary Conference possible. They are (in alphabetical order): the Association for Symbolic Logic, Cambridge University Press, Elsevier B.V., the European Association for Computer Science Logic (EACSL), the International Federation for Computational Logic (IFCoLog), IOS Press, the Isaac Newton Institute for Mathematical Sciences, King's College, Cambridge, Microsoft Research Cambridge, Robinson College, Cambridge, Science Magazine/AAAS, Springer-Verlag, and the University of Cambridge.
For the most current information about the conference series CiE-CS, we refer the reader to our webpage.

The abstracts of the seven papers in this issue follow.
We investigate tree-automatic well-founded trees. Using Delhommé's decomposition technique for tree-automatic structures, we show that the (ordinal) rank of a tree-automatic well-founded tree is strictly below $\omega^\omega$. Moreover, we make a step towards proving that the ranks of tree-automatic well-founded partial orders are bounded by $\omega^{\omega^\omega}$: we prove this bound for what we call upwards linear partial orders. As an application of our result, we show that the isomorphism problem for tree-automatic well-founded trees is complete for the level $\Delta^0_{\omega^\omega}$ of the hyperarithmetical hierarchy with respect to Turing reductions.
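For reference, the ordinal rank in question is the standard rank of well-founded trees; this recap is ours, not the authors'. Leaves have rank $0$ (as the supremum of the empty set), an inner node takes the supremum of the successor ranks of its children, and the rank of the tree is the rank of its root:
\[
\operatorname{rank}(v) \;=\; \sup\{\operatorname{rank}(w) + 1 \,:\, w \text{ is a child of } v\}.
\]
For example, a tree whose root has, for every $n$, a child from which a path of length $n$ descends has rank $\omega$.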
Checking the admissibility of quasiequations in a finitely generated (i.e., generated by a finite set of finite algebras) quasivariety Q amounts to checking validity in a suitable finite free algebra of the quasivariety, and is therefore decidable. However, since free algebras may be large even for small sets of small algebras and very few generators, this naive method for checking admissibility in Q is not computationally feasible. In this paper, algorithms are introduced that generate a minimal (with respect to a multiset well-ordering on their cardinalities) finite set of algebras such that the validity of a quasiequation in this set corresponds to admissibility of the quasiequation in Q. In particular, structural completeness (validity and admissibility coincide) and almost structural completeness (validity and admissibility coincide for quasiequations with unifiable premises) can be checked. The algorithms are illustrated with a selection of well-known finitely generated quasivarieties, and are adapted to handle admissibility of rules in finite-valued logics as well.
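To fix terminology (a standard definition, added here for convenience; the paper's exact conventions may differ slightly): a quasiequation
\[
\varphi_1 \approx \psi_1 \;\&\; \cdots \;\&\; \varphi_n \approx \psi_n \;\Rightarrow\; \varphi \approx \psi
\]
is \emph{admissible} in Q if every substitution $\sigma$ with $Q \models \sigma(\varphi_i) \approx \sigma(\psi_i)$ for all $i$ also satisfies $Q \models \sigma(\varphi) \approx \sigma(\psi)$. Every valid quasiequation is admissible, but not conversely; structural completeness asserts exactly that the converse holds as well.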
An integer polynomial $p$ in $n$ variables is called a \emph{threshold gate} for a Boolean function $f$ of $n$ variables if for all $x \in \{0,1\}^n$, $f(x)=1$ if and only if $p(x)\geq 0$. The \emph{weight} of a threshold gate is the sum of the absolute values of its coefficients. In this paper we study how large the weight might need to be when the function and the degree of the threshold gate are fixed. We prove a $2^{\Omega(2^{2n/5})}$ lower bound on this value. The best previous bound was $2^{\Omega(2^{n/8})}$ (Podolskii, 2009). In addition, we present a substantially simpler proof of the weaker $2^{\Omega(2^{n/4})}$ lower bound. This proof is conceptually similar to other proofs of bounds on the weights of nonlinear threshold gates, but avoids many of the technical details that arise in those proofs. We hope that this proof will help to convey the ideas behind the construction used to prove these lower bounds.
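The definitions above are easy to make concrete. The following sketch (ours, not the authors'; the representation and names are illustrative) checks the threshold-gate condition by brute force over $\{0,1\}^n$ and computes the weight:

```python
from itertools import product

# A minimal sketch, not from the paper: brute-force check of the
# threshold-gate condition and the weight. A polynomial is a dict mapping
# a monomial (a tuple of variable indices) to its integer coefficient,
# e.g. {(): -1, (0, 1): 2} encodes -1 + 2*x0*x1.

def evaluate(poly, x):
    """Evaluate the polynomial at a point x in {0,1}^n."""
    return sum(c * all(x[i] for i in mono) for mono, c in poly.items())

def is_threshold_gate(poly, f, n):
    """f(x) = 1 iff poly(x) >= 0, for all x in {0,1}^n."""
    return all((evaluate(poly, x) >= 0) == (f(x) == 1)
               for x in product((0, 1), repeat=n))

def weight(poly):
    """Sum of the absolute values of the coefficients."""
    return sum(abs(c) for c in poly.values())

# x0 + x1 - 1 is a degree-1 threshold gate for OR on two variables.
p = {(0,): 1, (1,): 1, (): -1}
f_or = lambda x: int(x[0] or x[1])
assert is_threshold_gate(p, f_or, 2)
print(weight(p))  # 3
```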
The present work determines the exact nature of {\em linear time computable} notions which characterise automatic functions (those whose graphs are recognised by a finite automaton). The paper also determines which types of linear time notions permit full learnability for learning in the limit of automatic classes (families of languages which are uniformly recognised by a finite automaton). In particular, it is shown that a function is automatic iff there is a one-tape Turing machine with a left end which computes the function in linear time, where both the input before the computation and the output after the computation start at the left end. It is known that learners realised as automatic update functions are restrictive for learning. In the present work it is shown that one can overcome this problem by providing work tapes in addition to a resource-bounded base tape, while keeping the update time linear in the length of the largest datum seen so far. In this model, one additional work tape provides extra learning power over the automatic learner model, and two additional work tapes give full learning power. Furthermore, one can also consider additional queues or additional stacks in place of additional work tapes; for these devices, one queue or two stacks are sufficient for full learning power, while one stack is insufficient.
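To illustrate the notion of an automatic function (our example, not from the paper; we gloss over the choice of canonical representations): binary successor, with numbers written least-significant bit first, is automatic, because a finite automaton can check a pair $(x, x+1)$ symbol by symbol when the two strings are read in parallel, the shorter one padded with a blank symbol. A direct simulation of such an automaton:

```python
# A minimal sketch, not from the paper: the graph of the binary successor
# function (least-significant bit first) is recognised by a finite automaton
# reading the "convolution" of input and output, padded with '#'.
# The automaton's only state is the pending carry, initially 1 (for "+1").

def accepts_successor(x: str, y: str) -> bool:
    n = max(len(x), len(y))
    xp, yp = x.ljust(n, '#'), y.ljust(n, '#')
    carry = 1
    for a, b in zip(xp, yp):
        a_bit = 0 if a == '#' else int(a)
        if b == '#':
            return False          # the output can never be shorter
        total = a_bit + carry
        if int(b) != total % 2:
            return False
        carry = total // 2
    return carry == 0

print(accepts_successor("11", "001"))  # True  (3 + 1 = 4)
print(accepts_successor("01", "11"))   # True  (2 + 1 = 3)
print(accepts_successor("1", "1"))     # False (1 + 1 != 1)
```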
This paper is motivated by a conjecture that BPP can be characterized in terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random strings. In this paper we show that an approach laid out in [Allender et al.] to settle this conjecture cannot succeed without significant alteration, but that it does bear fruit if we consider time-bounded Kolmogorov complexity instead. We show that if a set A is reducible in polynomial time to the set of time-t-bounded Kolmogorov-random strings (for all large enough time bounds t), then A is in P/poly, and that if in addition such a reduction exists for any universal Turing machine one uses in the definition of Kolmogorov complexity, then A is in PSPACE.
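For orientation, the standard definitions run as follows (the paper's precise conventions may differ slightly): fixing a universal Turing machine $U$, the time-$t$-bounded Kolmogorov complexity of a string $x$ is
\[
K^t(x) \;=\; \min\{\, |p| \,:\, U(p) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\},
\]
and the associated set of random strings is $R_{K^t} = \{x : K^t(x) \geq |x|\}$; dropping the time bound yields plain Kolmogorov complexity and the set of Kolmogorov-random strings mentioned in the conjecture.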
We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers, or as public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information where the universal player is allowed a constant number of moves equals $\mathsf{NL}$.
We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. We obtain a polynomial simulation of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS.