Volume 9, Issue 1

2013


1. Type classes for efficient exact real arithmetic in Coq

Robbert Krebbers ; Bas Spitters.
Floating point operations are fast, but require continuous effort on the part of the user in order to ensure that the results are correct. This burden can be shifted away from the user by providing a library of exact analysis in which the computer handles the error estimates. Previously, we [Krebbers/Spitters 2011] provided a fast implementation of the exact real numbers in the Coq proof assistant. Our implementation improved on an earlier implementation by O'Connor by using type classes to describe an abstract specification of the underlying dense set from which the real numbers are built. In particular, we used dyadic rationals built from Coq's machine integers, which already yielded a 100-fold speed-up of the basic operations. This article is a substantially expanded version of [Krebbers/Spitters 2011] in which the implementation is extended in various ways. First, we implement and verify the sine and cosine functions. Second, we create an additional implementation of the dense set based on Coq's fast rational numbers. Third, we extend the hierarchy to capture order on undecidable structures, whereas it was previously limited to decidable structures. This hierarchy, based on type classes, allows us to share theory on the naturals, integers, rationals, dyadics, and reals in a convenient way. Finally, we obtain another dramatic speed-up by avoiding evaluation of termination proofs at runtime.
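
As a rough illustration of the design just described (and only that: the actual development is in Coq and uses its machine integers, whereas the sketch below is hypothetical Haskell with invented names such as ApproxRationals and appDiv), a type class can abstract the dense set of approximating numbers, with dyadic rationals m * 2^e as one instance:

```haskell
-- Hypothetical sketch (not the authors' Coq development) of the idea that the
-- real-number layer is written against an abstract interface for a dense set
-- of approximating numbers; dyadic rationals provide one instance.

-- Dyadic rationals: mant * 2^expo.
data Dyadic = Dyadic { mant :: Integer, expo :: Int }
  deriving Show

-- The abstract interface the reals would be built over.
class ApproxRationals a where
  azero   :: a
  aone    :: a
  aplus   :: a -> a -> a
  amult   :: a -> a -> a
  anegate :: a -> a
  -- appDiv x y k: approximate x / y to within 2^(-k).
  appDiv  :: a -> a -> Int -> a

instance ApproxRationals Dyadic where
  azero = Dyadic 0 0
  aone  = Dyadic 1 0
  aplus (Dyadic m1 e1) (Dyadic m2 e2)
    | e1 <= e2  = Dyadic (m1 + m2 * 2 ^ (e2 - e1)) e1
    | otherwise = aplus (Dyadic m2 e2) (Dyadic m1 e1)
  amult (Dyadic m1 e1) (Dyadic m2 e2) = Dyadic (m1 * m2) (e1 + e2)
  anegate (Dyadic m e) = Dyadic (negate m) e
  appDiv (Dyadic m1 e1) (Dyadic m2 e2) k =
    let s = e1 - e2 + k                      -- extra shift for 2^(-k) precision
        q | s >= 0    = (m1 * 2 ^ s) `div` m2
          | otherwise = m1 `div` (m2 * 2 ^ negate s)
    in Dyadic q (negate k)                   -- result q * 2^(-k), off by < 2^(-k)
```

Division is the only operation that needs an explicit precision argument, since exact division leaves the dyadics; addition, multiplication and negation stay exact.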

2. Asymptotically almost all \lambda-terms are strongly normalizing

René David ; Katarzyna Grygiel ; Jakub Kozik ; Christophe Raffalli ; Guillaume Theyssier ; Marek Zaionc.
We present a quantitative analysis of various (syntactic and behavioral) properties of random \lambda-terms. Our main results are that asymptotically almost all terms are strongly normalizing and that any fixed closed term almost never appears in a random term. Surprisingly, in combinatory logic (the translation of the \lambda-calculus into combinators), the result is exactly the opposite: we show that almost all terms are not strongly normalizing. This is due to the fact that any fixed combinator almost always appears in a random combinator.

3. Counting CTL

François Laroussinie ; Antoine Meyer ; Eudes Petonnet.
This paper presents a range of quantitative extensions for the temporal logic CTL. We enhance temporal modalities with the ability to constrain the number of states satisfying certain sub-formulas along paths. By selecting the combinations of Boolean and arithmetic operations allowed in constraints, one obtains several distinct logics generalizing CTL. We provide a thorough analysis of their expressiveness and succinctness, and of the complexity of their model-checking and satisfiability problems (ranging from P-complete to undecidable). Finally, we present two alternative logics with similar features and provide a comparative study of the properties of both variants.

4. A Rewriting View of Simple Typing

Aaron Stump ; Garrin Kimmell ; Hans Zantema ; Ruba El Haj Omar.
This paper shows how a recently developed view of typing as small-step abstract reduction, due to Kuan, MacQueen, and Findler, can be used to recast the development of simple type theory from a rewriting perspective. We show how standard meta-theoretic results can be proved in a completely new way, using the rewriting view of simple typing. These meta-theoretic results include standard type preservation and progress properties for the simply typed lambda calculus, as well as generalized versions where typing is taken to include both abstract and concrete reduction. We show how automated analysis tools developed in the term-rewriting community can be used to help automate the proofs for this meta-theory. Finally, we show how to adapt a standard proof of normalization of the simply typed lambda calculus to the rewriting approach to typing.
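
As a purely illustrative aside (not the authors' development, which works with genuine small-step abstract reduction and rewriting tools), the "typing as reduction" idea can be condensed into a big-step sketch in which a term is rewritten to a type value by substituting annotated types for bound variables; all names below (Tm, Ty, reduceToType) are hypothetical.

```haskell
-- Hypothetical sketch of typing-as-rewriting for the simply typed lambda
-- calculus: a term is "reduced" to a type value by replacing bound variables
-- with their annotated types. This condenses the small-step abstract
-- reduction into one big-step function for brevity.

data Ty = Base | Arr Ty Ty
  deriving (Eq, Show)

data Tm
  = Var String          -- variable
  | Lam String Ty Tm    -- \x:T. e
  | App Tm Tm           -- e1 e2
  | TyVal Ty            -- a type value, the "result" of abstract reduction
  deriving Show

-- Capture is not an issue: we only ever substitute closed type values.
subst :: String -> Tm -> Tm -> Tm
subst x v (Var y)     = if x == y then v else Var y
subst x v (Lam y t e) = if x == y then Lam y t e else Lam y t (subst x v e)
subst x v (App f a)   = App (subst x v f) (subst x v a)
subst _ _ (TyVal t)   = TyVal t

-- Abstractly reduce a closed term all the way to a type value (its type),
-- or get stuck (Nothing) on ill-typed terms.
reduceToType :: Tm -> Maybe Ty
reduceToType (TyVal t)   = Just t
reduceToType (Var _)     = Nothing                    -- stuck: free variable
reduceToType (Lam x t e) = Arr t <$> reduceToType (subst x (TyVal t) e)
reduceToType (App f a)   = do
  Arr t s <- reduceToType f                           -- function side
  t'      <- reduceToType a                           -- argument side
  if t == t' then Just s else Nothing                 -- stuck: type mismatch
```

For instance, reduceToType (Lam "x" Base (Var "x")) yields Just (Arr Base Base), mirroring the ordinary derivation of |- \x:b. x : b -> b.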

5. Vector Addition System Reversible Reachability Problem

Jérôme Leroux.
The reachability problem for vector addition systems is a central problem of net theory. This problem is known to be decidable, but its complexity is still unknown: whereas the problem is EXPSPACE-hard, no elementary upper bound on its complexity is known. In this paper we consider the reversible reachability problem, which consists in deciding whether two configurations are reachable from each other, or equivalently whether they are in the same strongly connected component of the reachability graph. We show that this problem is EXPSPACE-complete. As an application of the introduced techniques, we characterize the reversibility domains of a vector addition system.

6. Coarse abstractions make Zeno behaviours difficult to detect

Frédéric Herbreteau ; B Srivathsan.
An infinite run of a timed automaton is Zeno if it spans only a finite amount of time. Such runs are considered unfeasible, and hence it is important to detect them or, dually, to find runs that are non-Zeno. Over the years, important improvements have been obtained in checking reachability properties for timed automata. We show that some of these very efficient optimizations make testing for Zeno runs costly. In particular, we show NP-completeness for the LU-extrapolation of Behrmann et al. We analyze the source of this complexity in detail and give general conditions on extrapolation operators that guarantee a (low) polynomial complexity of Zenoness checking. We propose a slight weakening of the LU-extrapolation that satisfies these conditions.

7. On (Subgame Perfect) Secure Equilibrium in Quantitative Reachability Games

Thomas Brihaye ; Véronique Bruyère ; Julie De Pril ; Hugo Gimbert.
We study turn-based quantitative multiplayer non-zero-sum games played on finite graphs with reachability objectives. In such games, each player aims at reaching his own goal set of states as soon as possible. A previous work on this model showed that Nash equilibria (resp. secure equilibria) are guaranteed to exist in the multiplayer (resp. two-player) case. The existence of secure equilibria in the multiplayer case remained, and is still, an open problem. In this paper, we focus our study on the concept of subgame perfect equilibrium, a refinement of Nash equilibrium well suited to the framework of games played on graphs. We also introduce the new concept of subgame perfect secure equilibrium. We prove the existence of subgame perfect equilibria (resp. subgame perfect secure equilibria) in multiplayer (resp. two-player) quantitative reachability games. Moreover, we provide an algorithm deciding the existence of secure equilibria in the multiplayer case.

8. On the Complexity of Equivalence and Minimisation for Q-weighted Automata

Stefan Kiefer ; Andrzej Murawski ; Joel Ouaknine ; Bjoern Wachter ; James Worrell.
This paper is concerned with the computational complexity of equivalence and minimisation for automata with transition weights in the field Q of rational numbers. We use polynomial identity testing and the Isolation Lemma to obtain complexity bounds, focussing on the class NC of problems within P solvable in polylogarithmic parallel time. For finite Q-weighted automata, we give a randomised NC procedure that either outputs that two automata are equivalent or returns a word on which they differ. We also give an NC procedure for deciding whether a given automaton is minimal, as well as a randomised NC procedure that minimises an automaton. We consider probabilistic automata with rewards, similar to Markov Decision Processes. For these automata we consider two notions of equivalence: expectation equivalence and distribution equivalence. The former requires that two automata have the same expected reward on each input word, while the latter requires that each input word induce the same distribution on rewards in each automaton. For both notions we give algorithms for deciding equivalence by reduction to equivalence of Q-weighted automata. Finally we show that the equivalence problem for Q-weighted visibly pushdown automata is logspace equivalent to the polynomial identity testing problem.
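
The paper's contributions are randomised NC procedures built on polynomial identity testing and the Isolation Lemma; as background only, the sketch below shows the classical deterministic polynomial-time equivalence check (in the style of Schützenberger/Tzeng) whose parallel complexity the NC algorithms improve. It is an illustrative Haskell sketch under assumed types (WA, waInit, waTrans, waFinal are ours), not code from the paper.

```haskell
-- Illustrative sketch only, NOT the paper's randomised NC procedure: the
-- classical deterministic equivalence check for Q-weighted automata. Two
-- automata are equivalent iff every vector in the reachable row space
-- span{ alpha_C . M_w } of their difference automaton is orthogonal to the
-- final-weight vector eta_C.
import Data.List (findIndex, transpose)

type Vec = [Rational]
type Mat = [Vec]                      -- row-major

-- A Q-weighted automaton: initial row vector, one matrix per letter, final vector.
data WA a = WA { waInit :: Vec, waTrans :: a -> Mat, waFinal :: Vec }

dot :: Vec -> Vec -> Rational
dot u v = sum (zipWith (*) u v)

vecMat :: Vec -> Mat -> Vec
vecMat v m = map (dot v) (transpose m)

blockDiag :: Mat -> Mat -> Mat
blockDiag m1 m2 =
  [ r ++ replicate n2 0 | r <- m1 ] ++ [ replicate n1 0 ++ r | r <- m2 ]
  where n1 = length (head m1)
        n2 = length (head m2)

-- A fully reduced basis: each element carries a pivot position at which it is
-- the only basis vector with a nonzero entry.
type Basis = [(Int, Vec)]

-- Zero out u at the pivot of the given basis element.
elimWith :: (Int, Vec) -> Vec -> Vec
elimWith (i, b) u = zipWith (\x y -> x - c * y) u b
  where c = (u !! i) / (b !! i)

-- Try to extend the basis with v: Nothing if v is already in the span,
-- otherwise the extended (still fully reduced) basis and the reduced vector.
insertVec :: Basis -> Vec -> Maybe (Basis, Vec)
insertVec basis v = case findIndex (/= 0) v' of
    Nothing -> Nothing
    Just i  -> Just ((i, v') : [ (j, elimWith (i, v') b) | (j, b) <- basis ], v')
  where v' = foldl (flip elimWith) v basis

-- Worklist exploration of the reachable row space of the difference automaton.
equivalent :: [a] -> WA a -> WA a -> Bool
equivalent alphabet wa wb = loop [] [waInit wa ++ waInit wb]
  where
    etaC   = waFinal wa ++ map negate (waFinal wb)
    matC c = blockDiag (waTrans wa c) (waTrans wb c)
    loop _     []         = True
    loop basis (v : rest) = case insertVec basis v of
      Nothing -> loop basis rest               -- v adds nothing new to the span
      Just (basis', v')
        | dot v' etaC /= 0 -> False            -- some word separates the automata
        | otherwise        ->
            loop basis' (rest ++ [ vecMat v' (matC c) | c <- alphabet ])
```

The exploration terminates because each added vector strictly increases the dimension of the maintained subspace, which is bounded by the total number of states.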

9. Generalizing determinization from automata to coalgebras

Alexandra Silva ; Filippo Bonchi ; Marcello Bonsangue ; Jan Rutten.
The powerset construction is a standard method for converting a nondeterministic automaton into a deterministic one recognizing the same language. In this paper, we lift the powerset construction from automata to the more general framework of coalgebras with structured state spaces. Coalgebra is an abstract framework for the uniform study of different kinds of dynamical systems. An endofunctor F determines both the type of systems (F-coalgebras) and a notion of behavioural equivalence (~_F) amongst them. Many types of transition systems and their equivalences can be captured by a functor F. For example, for deterministic automata the derived equivalence is language equivalence, while for non-deterministic automata it is ordinary bisimilarity. We give several examples of applications of our generalized determinization construction, including partial Mealy machines, (structured) Moore automata, Rabin probabilistic automata, and, somewhat surprisingly, even pushdown automata. To further witness the generality of the approach, we show how to characterize coalgebraically several equivalences which have been of interest to the concurrency community, such as failure or ready semantics.
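
For readers unfamiliar with the classical instance being generalized, here is a minimal Haskell sketch of the ordinary powerset construction (the names NFA, DFA and determinize are ours, and this is not the paper's coalgebraic formulation):

```haskell
-- Minimal sketch of the classical powerset construction: a nondeterministic
-- automaton is turned into a deterministic one whose states are *sets* of the
-- original states.
import qualified Data.Set as Set
import           Data.Set (Set)

-- A nondeterministic automaton: acceptance plus a transition relation.
data NFA s a = NFA
  { nAccept :: s -> Bool
  , nStep   :: s -> a -> Set s
  }

-- A deterministic automaton over an arbitrary state space x.
data DFA x a = DFA
  { dAccept :: x -> Bool
  , dStep   :: x -> a -> x
  }

-- Determinization: DFA states are sets of NFA states.
determinize :: Ord s => NFA s a -> DFA (Set s) a
determinize (NFA acc step) = DFA
  { dAccept = any acc . Set.toList
  , dStep   = \xs c -> Set.unions [ step s c | s <- Set.toList xs ]
  }

-- Language acceptance via the determinized automaton.
accepts :: Ord s => NFA s a -> Set s -> [a] -> Bool
accepts nfa initial = dAccept dfa . foldl (dStep dfa) initial
  where dfa = determinize nfa
```

Roughly, the paper's generalization replaces the powerset by other kinds of structured state spaces capturing the branching type, so that the same recipe applies, for instance, to partial Mealy machines or probabilistic automata.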

10. Bisimilarity on Basic Process Algebra is in 2-ExpTime (an explicit proof)

Petr Jancar.
Burkart, Caucal, and Steffen (1995) described a procedure deciding bisimulation equivalence of processes in Basic Process Algebra (BPA), i.e. of sequential processes generated by context-free grammars. They improved the previous decidability result of Christensen, Hüttel, and Stirling (1992), since their procedure obviously has elementary time complexity, and the authors claim that a close analysis would reveal a double exponential upper bound. Here, a self-contained direct proof of membership in 2-ExpTime is given. This is done via a Prover-Refuter game which shows that there is an alternating Turing machine deciding the problem in exponential space. The proof uses similar ingredients (size measures, decompositions, bases) as the previous proofs, but one new simplifying factor is an explicit addition of infinite regular strings to the state space. An auxiliary claim also shows an explicit exponential upper bound on the equivalence level of nonbisimilar normed BPA processes. The importance of clarifying the 2-ExpTime upper bound for BPA bisimilarity has recently increased due to the shift of the known lower bound from PSpace (Srba, 2002) to ExpTime (Kiefer, 2012).

11. L-Recursion and a new Logic for Logarithmic Space

Martin Grohe ; Berit Grußien ; André Hernich ; Bastian Laubner.
We extend first-order logic with counting by a new operator that allows it to formalise a limited form of recursion which can be evaluated in logarithmic space. The resulting logic LREC has a data complexity in LOGSPACE, and it defines LOGSPACE-complete problems like deterministic reachability and Boolean formula evaluation. We prove that LREC is strictly more expressive than deterministic transitive closure logic with counting and incomparable in expressive power with symmetric transitive closure logic STC and transitive closure logic (with or without counting). LREC is strictly contained in fixed-point logic with counting FPC. We also study an extension LREC= of LREC that has nicer closure properties and is more expressive than both LREC and STC, but is still contained in FPC and has a data complexity in LOGSPACE. Our main results are that LREC captures LOGSPACE on the class of directed trees and that LREC= captures LOGSPACE on the class of interval graphs.

12. Collapsible Pushdown Graphs of Level 2 are Tree-Automatic

Alexander Kartzow.
We show that graphs generated by collapsible pushdown systems of level 2 are tree-automatic. Even if we allow epsilon-contractions and reachability predicates (with regular constraints) for pairs of configurations, the structures remain tree-automatic whence their first-order logic theories are decidable. As a corollary we obtain the tree-automaticity of the second level of the Caucal-hierarchy.

13. Unifying Büchi Complementation Constructions

Seth J. Fogarty ; Orna Kupferman ; Thomas Wilke ; Moshe Y. Vardi.
Complementation of Büchi automata, required for checking automata containment, is of major theoretical and practical interest in formal verification. We consider two recent approaches to complementation. The first is the rank-based approach of Kupferman and Vardi, which operates over a DAG that embodies all runs of the automaton. This approach is based on the observation that the vertices of this DAG can be ranked in a certain way, termed an odd ranking, iff all runs are rejecting. The second is the slice-based approach of Kähler and Wilke. This approach tracks levels of "split trees" - run trees in which only essential information about the history of each run is maintained. While the slice-based construction is conceptually simple, the complementing automata it generates are exponentially larger than those of the recent rank-based construction of Schewe, and it suffers from the difficulty of symbolically encoding levels of split trees. In this work we reformulate the slice-based approach in terms of run DAGs and preorders over states. In doing so, we begin to draw parallels between the rank-based and slice-based approaches. Through deeper analysis of the slice-based approach, we strongly restrict the nondeterminism it generates. We are then able to employ the slice-based approach to provide a new odd ranking, called a retrospective ranking, that is different from the one provided by Kupferman and Vardi. This new ranking allows us to construct a […]

14. Call-by-Value and Call-by-Name Dual Calculi with Inductive and Coinductive Types

Daisuke Kimura ; Makoto Tatsuta.
This paper extends the dual calculus with inductive types and coinductive types. It first introduces a non-deterministic dual calculus with inductive and coinductive types. Besides the duality of the original dual calculus, it enjoys the duality of inductive and coinductive types, that is, the duality of terms and coterms for inductive and coinductive types, and the duality of their reduction rules. Its strong normalization is also proved, by translating it into a second-order dual calculus; the strong normalization of the second-order dual calculus is in turn proved by translating it into the second-order symmetric lambda calculus. The paper then introduces a call-by-value system and a call-by-name system of the dual calculus with inductive and coinductive types, and shows the duality of call-by-value and call-by-name, their Church-Rosser properties, and their strong normalization. Their strong normalization is proved by translating them into the non-deterministic dual calculus with inductive and coinductive types.

15. An Algebraic Preservation Theorem for Aleph-Zero Categorical Quantified Constraint Satisfaction

Hubie Chen ; Moritz Müller.
We prove an algebraic preservation theorem for positive Horn definability in aleph-zero categorical structures. In particular, we define and study a construction which we call the periodic power of a structure, and define a periomorphism of a structure to be a homomorphism from the periodic power of the structure to the structure itself. Our preservation theorem states that, over an aleph-zero categorical structure, a relation is positive Horn definable if and only if it is preserved by all periomorphisms of the structure. We give applications of this theorem, including a new proof of the known complexity classification of quantified constraint satisfaction on equality templates.

16. Polylogarithmic Cuts in Models of V^0

Sebastian Müller.
We study initial cuts of models of weak two-sorted Bounded Arithmetics with respect to the strength of their theories and show that these theories are stronger than the original one. More explicitly, we will see that polylogarithmic cuts of models of $\mathbf{V}^0$ are models of $\mathbf{VNC}^1$, by formalizing a proof of Nepomnjascij's Theorem in such cuts. This is a strengthening of a result by Paris and Wilkie. We can then exploit our result in Proof Complexity to observe that Frege proof systems can be subexponentially simulated by bounded-depth Frege proof systems. This result has recently been obtained by Filmus, Pitassi, and Santhanam by a direct proof. As an interesting observation, we also obtain an average-case separation of Resolution from AC0-Frege by applying a recent joint result with Tzameret.