2010
The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics. By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes "under lambdas." We prove that machine evaluation is equivalent to standard-order evaluation. Unlike traditional abstract machines, this one makes significant use of delimited control: in particular, it replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control. To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.
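For orientation, the following is a minimal, hedged sketch in Haskell (not the machine or calculus from the paper) of the conventional store-based formulation of call-by-need, in which a heap of updatable thunks provides sharing. It is exactly this kind of heap manipulation that the paper replaces with disciplined use of the evaluation stack and delimited control.

import qualified Data.Map as M
import Control.Monad.State

data Term  = Var String | Lam String Term | App Term Term
data Value = Closure String Term Env
type Loc   = Int
type Env   = M.Map String Loc
data Cell  = Thunk Term Env | Evaluated Value
type Heap  = M.Map Loc Cell

type Eval a = State (Heap, Loc) a

alloc :: Cell -> Eval Loc
alloc c = do
  (h, next) <- get
  put (M.insert next c h, next + 1)
  return next

eval :: Env -> Term -> Eval Value
eval env (Lam x b) = return (Closure x b env)
eval env (App f a) = do
  fv <- eval env f
  l  <- alloc (Thunk a env)                 -- delay the argument, do not copy it
  case fv of
    Closure x b env' -> eval (M.insert x l env') b
eval env (Var x) = do
  let l = env M.! x
  (h, _) <- get
  case h M.! l of
    Evaluated v  -> return v                -- sharing: reuse the memoised result
    Thunk t env' -> do
      v <- eval env' t
      modify (\(h', n) -> (M.insert l (Evaluated v) h', n))   -- update the heap cell
      return v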
Size-Change Termination (SCT) is a method of proving program termination based on the impossibility of infinite descent. To this end we may use a program abstraction in which transitions are described by monotonicity constraints over (abstract) variables. When only constraints of the form x > y' and x >= y' are allowed, we have size-change graphs. Both theory and practice are now more evolved in this restricted framework than in the general framework of monotonicity constraints. This paper shows that it is possible to extend and adapt some theory from the domain of size-change graphs to the general case, thus complementing previous work on monotonicity constraints. In particular, we present precise decision procedures for termination; and we provide a procedure to construct explicit global ranking functions from monotonicity constraints in singly-exponential time, which is better than what has been published so far even for size-change graphs.
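As background, here is a minimal, hedged Haskell sketch of the classical size-change criterion over size-change graphs (the restricted x > y' / x >= y' setting, simplified to a single recursive function), not the paper's procedures for general monotonicity constraints: build the composition closure of the call graphs and require every idempotent graph to carry a strict self-arc.

import Data.List (nub, sort)

data Label = Strict | NonStrict deriving (Eq, Ord, Show)   -- x > y'  and  x >= y'
type Edge  = (Int, Label, Int)        -- (source variable, label, target variable)
type Graph = [Edge]                   -- one size-change graph per call site

norm :: Graph -> Graph
norm = sort . nub

-- Compose two graphs along an intermediate call: an arc is strict
-- whenever either of the composed arcs is strict.
compose :: Graph -> Graph -> Graph
compose g h = norm
  [ (x, lab l1 l2, z) | (x, l1, y) <- g, (y', l2, z) <- h, y == y' ]
  where lab Strict _ = Strict
        lab _ Strict = Strict
        lab _ _      = NonStrict

-- Closure of the call graphs under composition (a finite set, so this terminates).
closure :: [Graph] -> [Graph]
closure gs = go (map norm gs)
  where go acc =
          let acc' = nub (acc ++ [ compose g h | g <- acc, h <- acc ])
          in if length acc' == length acc then acc else go acc'

-- SCT criterion: every idempotent graph in the closure has a strict self-arc.
sizeChangeTerminating :: [Graph] -> Bool
sizeChangeTerminating gs =
  and [ any (\(x, l, y) -> x == y && l == Strict) g
      | g <- closure gs, compose g g == g ]

-- Ackermann-style descent a(m,n): the two recursive calls yield the
-- graphs {m > m'} and {m >= m', n > n'}; the criterion accepts them.
example :: Bool
example = sizeChangeTerminating
  [ [ (1, Strict, 1) ]
  , [ (1, NonStrict, 1), (2, Strict, 2) ] ]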
We present Classical BI (CBI), a new addition to the family of bunched logics which originates in O'Hearn and Pym's logic of bunched implications BI. CBI differs from existing bunched logics in that its multiplicative connectives behave classically rather than intuitionistically (including in particular a multiplicative version of classical negation). At the semantic level, CBI-formulas have the normal bunched logic reading as declarative statements about resources, but its resource models necessarily feature more structure than those for other bunched logics; principally, they satisfy the requirement that every resource has a unique dual. At the proof-theoretic level, a very natural formalism for CBI is provided by a display calculus à la Belnap, which can be seen as a generalisation of the bunched sequent calculus for BI. In this paper we formulate the aforementioned model theory and proof theory for CBI, and prove some fundamental results about the logic, most notably completeness of the proof theory with respect to the semantics.
We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming. Our algorithms are based on quantifier elimination and symbolic manipulation techniques over linear arithmetic formulas. We also give less general results for nonlinear constraints and nonlinear program constructs.
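As a schematic illustration of the quantifier-elimination step (a hedged example in the spirit of the method, not taken from the paper): for the assignment x := x + y under the template bounds x <= c1 and y <= c2, the strongest postcondition is obtained by eliminating the old value x0 of x, and the best new bound on x follows by maximisation.

\[
  \exists x_0.\;\bigl( x_0 \le c_1 \;\wedge\; y \le c_2 \;\wedge\; x = x_0 + y \bigr)
  \;\equiv\; x - y \le c_1 \;\wedge\; y \le c_2,
\]
\[
  \text{so the optimal post-bound on } x \text{ is } x \le c_1 + c_2,
  \text{ i.e. the abstract transformer maps } (c_1, c_2) \mapsto (c_1 + c_2,\; c_2).
\]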
Previous deforestation and supercompilation algorithms may introduce accidental termination when applied to call-by-value programs. This hides looping bugs from the programmer, and changes the behavior of a program depending on whether it is optimized or not. We present a supercompilation algorithm for a higher-order call-by-value language and prove that the algorithm both terminates and preserves termination properties. This algorithm utilizes strictness information to decide whether to substitute or not and compares favorably with previous call-by-name transformations.
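To make "accidental termination" concrete, here is a hedged toy example (not from the paper); read the definitions below with call-by-value semantics.

loop :: Int
loop = loop                 -- a looping definition

prog :: Int
prog = let x = loop in 42   -- call-by-value: evaluating the binding for x diverges

-- A call-by-name (or call-by-need) transformation may discard the unused
-- binding and residualise simply 42, so the optimised program terminates
-- although the original call-by-value program does not.  Strictness
-- information lets a supercompiler avoid introducing such changes.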
Garbage collectors are notoriously hard to verify, due to their low-level interaction with the underlying system and the general difficulty in reasoning about reachability in graphs. Several papers have presented verified collectors, but either the proofs were hand-written or the collectors were too simplistic to use on practical applications. In this work, we present two mechanically verified garbage collectors, both practical enough to use for real-world C# benchmarks. The collectors and their associated allocators consist of x86 assembly language instructions and macro instructions, annotated with preconditions, postconditions, invariants, and assertions. We used the Boogie verification condition generator and the Z3 automated theorem prover to verify this assembly language code mechanically. We provide measurements comparing the performance of the verified collector with that of the standard Bartok collectors on off-the-shelf C# benchmarks, demonstrating their competitiveness.
We consider quantifier-free spatial logics, designed for qualitative spatial representation and reasoning in AI, and extend them with the means to represent topological connectedness of regions and restrict the number of their connected components. We investigate the computational complexity of these logics and show that the connectedness constraints can increase complexity from NP to PSpace, ExpTime and, if component counting is allowed, to NExpTime.
We present decidability results for termination of classes of term rewriting systems modulo permutative theories. Termination and innermost termination modulo permutative theories are shown to be decidable for term rewrite systems (TRS) whose right-hand side terms are restricted to be shallow (variables occur at depth at most one) and linear (each variable occurs at most once). Innermost termination modulo permutative theories is also shown to be decidable for shallow TRS. We first show that a shallow TRS can be transformed into a flat (only variables and constants occur at depth one) TRS while preserving termination and innermost termination. The decidability results are then proved by showing that (a) for right-flat right-linear (flat) TRS, non-termination (respectively, innermost non-termination) implies non-termination starting from flat terms, and (b) for right-flat TRS, the existence of non-terminating derivations starting from a given term is decidable. On the negative side, we show PSPACE-hardness of termination and innermost termination for shallow right-linear TRS, and undecidability of termination for flat TRS.
Recursive domain equations have natural solutions. In particular there are domains defined by strictly positive induction. The class of countably based domains gives a computability theory for possibly non-countably based topological spaces. A $qcb_0$ space is a topological space characterized by its strong representability over domains. In this paper, we study strictly positive inductive definitions for $qcb_0$ spaces by means of domain representations, i.e. we show that there exists a canonical fixed point of every strictly positive operation on $qcb_0$ spaces.
Weighted automata are nondeterministic automata with numerical weights on transitions. They can define quantitative languages~$L$ that assign to each word~$w$ a real number~$L(w)$. In the case of infinite words, the value of a run is naturally computed as the maximum, limsup, liminf, limit-average, or discounted-sum of the transition weights. The value of a word $w$ is the supremum of the values of the runs over $w$. We study expressiveness and closure questions about these quantitative languages. We first show that the set of words with value greater than a threshold can be non-$\omega$-regular for deterministic limit-average and discounted-sum automata, while this set is always $\omega$-regular when the threshold is isolated (i.e., some neighborhood around the threshold contains no word). In the latter case, we prove that the $\omega$-regular language is robust against small perturbations of the transition weights. We next consider automata with transition weights […]
We present a restriction of the solos calculus which is stable under reduction and expressive enough to contain an encoding of the pi-calculus. As a consequence, it is shown that equalizing names that are already equal is not required by the encoding of the pi-calculus. In particular, the induced solo diagrams bear an acyclicity property that induces a faithful encoding into differential interaction nets. This gives a (new) proof that differential interaction nets are expressive enough to contain an encoding of the pi-calculus. All this is worked out in the case of finitary (replication free) systems without sum, match or mismatch.
We consider the problem of intruder deduction in security protocol analysis: that is, deciding whether a given message M can be deduced from a set of messages Gamma under the theory of blind signatures and arbitrary convergent equational theories modulo associativity and commutativity (AC) of certain binary operators. The traditional formulations of intruder deduction are usually given in natural-deduction-like systems and proving decidability requires significant effort in showing that the rules are "local" in some sense. By using the well-known translation between natural deduction and sequent calculus, we recast the intruder deduction problem as proof search in sequent calculus, in which locality is immediate. Using standard proof theoretic methods, such as permutability of rules and cut elimination, we show that the intruder deduction problem can be reduced, in polynomial time, to the elementary deduction problem, which amounts to solving certain equations in the underlying individual equational theories. We show that this result extends to combinations of disjoint AC-convergent theories whereby the decidability of intruder deduction under the combined theory reduces to the decidability of elementary deduction in each constituent theory. To further demonstrate the utility of the sequent-based approach, we show that, for Dolev-Yao intruders, our sequent-based techniques can be used to solve the more difficult problem of solving deducibility constraints, where the […]
Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative $\mu$-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in long-run average and discounted average behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time O(n^4) for both turn-based games and MDPs, improving on the previously best known O(n^9 \cdot log(n)) time algorithm for MDPs. For a concurrent game G, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is […]
Sampled semantics of timed automata is a finite approximation of their dense time behavior. While the former is closer to the actual software or hardware systems with a fixed granularity of time, the abstract character of the latter makes it appealing for system modeling and verification. We study one aspect of the relation between these two semantics, namely checking whether the system exhibits some qualitative (untimed) behaviors in the dense time which cannot be reproduced by any implementation with a fixed sampling rate. More formally, the \emph{sampling problem} is to decide whether there is a sampling rate such that all qualitative behaviors (the untimed language) accepted by a given timed automaton in dense time semantics can be also accepted in sampled semantics. We show that this problem is decidable.
Terms are a concise representation of tree structures. Since they can be naturally defined by an inductive type, they offer data structures in functional programming and mechanised reasoning with useful principles such as structural induction and structural recursion. However, for graphs or "tree-like" structures - trees involving cycles and sharing - it remains unclear what kind of inductive structures exist and how we can faithfully assign a term representation to them. In this paper we propose a simple term syntax for cyclic sharing structures that admits structural induction and recursion principles. We show that the obtained syntax is directly usable in the functional language Haskell and the proof assistant Agda, as readily as ordinary data structures such as lists and trees. To achieve this goal, we use a categorical approach to initial algebra semantics in a presheaf category, following the line of Fiore, Plotkin and Turi's models of abstract syntax with variable binding.
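For intuition only, here is a hedged, simplified Haskell sketch (the constructor names and the use of de Bruijn-indexed pointers are illustrative assumptions, not the paper's actual syntax) of a finite term representation of trees with cycles and sharing that still supports ordinary structural recursion.

-- Binary trees with cycles/sharing, represented by a finite term syntax.
data CTree
  = Leaf
  | Node CTree CTree
  | Ptr Int              -- de Bruijn-style reference to an enclosing Rec binder
  | Rec CTree            -- binds the subtree so it can be referred back to
  deriving Show

-- Structural recursion over the finite representation
-- (sharing and cycles are not unfolded).
size :: CTree -> Int
size Leaf       = 0
size (Node l r) = 1 + size l + size r
size (Ptr _)    = 0
size (Rec t)    = size t

-- Example: the cyclic tree t satisfying t = Node t Leaf, as a finite term.
cyclic :: CTree
cyclic = Rec (Node (Ptr 0) Leaf)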
We examine a bidirectional propositional dynamic logic (PDL) for finite and infinite message sequence charts (MSCs) extending LTL and TLC-. By this kind of multi-modal logic we can express properties both in the entire future and in the past of an event. Path expressions strengthen the classical until operator of temporal logic. For every formula defining an MSC language, we construct a communicating finite-state machine (CFM) accepting the same language. The CFM obtained has size exponential in the size of the formula. This synthesis problem is solved in full generality, i.e., also for MSCs with unbounded channels. The model checking problem for CFMs and HMSCs turns out to be in PSPACE for existentially bounded MSCs. Finally, we show that, for PDL with intersection, the semantics of a formula cannot be captured by a CFM anymore.
The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
Properties of Term Rewriting Systems are called modular iff they are preserved under (and reflected by) disjoint union, i.e. when combining two Term Rewriting Systems with disjoint signatures. Convergence is the property of Infinitary Term Rewriting Systems that all reduction sequences converge to a limit. Strong Convergence requires in addition that redex positions in a reduction sequence move arbitrarily deep. In this paper it is shown that both Convergence and Strong Convergence are modular properties of non-collapsing Infinitary Term Rewriting Systems, provided (for convergence) that the term metrics are granular. This generalises known modularity results beyond metric infinitary term rewriting.
We apply to the semantics of Arithmetic the idea of ``finite approximation'' used to provide computational interpretations of Herbrand's Theorem, and we interpret classical proofs as constructive proofs (with constructive rules for $\vee, \exists$) over a suitable structure $\StructureN$ for the language of natural numbers and maps of Gödel's system $\SystemT$. We introduce a new Realizability semantics we call ``Interactive learning-based Realizability'', for Heyting Arithmetic plus $\EM_1$ (Excluded middle axiom restricted to $\Sigma^0_1$ formulas). Individuals of $\StructureN$ evolve with time, and realizers may ``interact'' with them, by influencing their evolution. We build our semantics over Avigad's fixed point result, but the same semantics may be defined over different constructive interpretations of classical arithmetic (Berardi and de' Liguoro use continuations). Our notion of realizability extends intuitionistic realizability and differs from it only in the atomic case: we interpret atomic realizers as ``learning agents''.
The characterisation of termination using well-founded monotone algebras has been a milestone on the way to automated termination techniques, of which we have seen an extensive development over the past years. Both the semantic characterisation and most known termination methods are concerned with global termination, uniformly of all the terms of a term rewriting system (TRS). In this paper we consider local termination, of specific sets of terms within a given TRS. The principal goal of this paper is generalising the semantic characterisation of global termination to local termination. This is made possible by admitting the well-founded monotone algebras to be partial. We also extend our approach to local relative termination. The interest in local termination naturally arises in program verification, where one is often interested only in sensible inputs, or just wants to characterise the set of inputs for which a program terminates. Local termination will also be of interest when dealing with a specific class of terms within a TRS that is known to be non-terminating, such as combinatory logic (CL) or a TRS encoding recursive program schemes or Turing machines. We show how some of the well-known techniques for proving global termination, such as stepwise removal of rewrite rules and semantic labelling, can be adapted to the local case. We also describe transformations reducing local to global termination problems. The resulting techniques for proving local termination […]
Streams are infinite sequences over a given data type. A stream specification is a set of equations intended to define a stream. We propose a transformation from such a stream specification to a term rewriting system (TRS) in such a way that termination of the resulting TRS implies that the stream specification is well-defined, that is, admits a unique solution. As a consequence, proving well-definedness of several interesting stream specifications can be done fully automatically using present powerful tools for proving TRS termination. In order to increase the power of this approach, we investigate transformations that preserve semantics and well-definedness. We give examples for which the above-mentioned technique applies for the transformed specification while it fails for the original one.
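For a hedged, concrete illustration of well-definedness (these specifications are illustrative and not taken from the paper), compare the following two stream specifications, the first written as a Haskell stream equation.

-- Well-defined: the equation has the unique solution 0, 1, 2, ...
nats :: [Integer]
nats = 0 : map (+ 1) nats

-- NOT well-defined: the equation  s = tail s  is satisfied by every
-- constant stream, so it admits no unique solution.  The transformation in
-- the paper turns such specifications into TRSs whose termination
-- certifies that a unique solution exists.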
The reachability problem for Vector Addition Systems (VASs) is a central problem of net theory. The general problem is known to be decidable by algorithms exclusively based on the classical Kosaraju-Lambert-Mayr-Sacerdote-Tenney decomposition. This decomposition is used in this paper to prove that the Parikh images of languages recognized by VASs are semi-pseudo-linear, a class that extends the semi-linear sets, a.k.a. the sets definable in Presburger arithmetic. We provide an application of this result: we prove that a final configuration is not reachable from an initial one if and only if there exists a semi-linear inductive invariant that contains the initial configuration but not the final one. Since we can decide whether a Presburger formula denotes an inductive invariant, we deduce that there exist checkable certificates of non-reachability. In particular, there exists a simple algorithm for deciding the general VAS reachability problem based on two semi-algorithms: the first tries to prove reachability by enumerating finite sequences of actions, and the second tries to prove non-reachability by enumerating Presburger formulas.
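The first of the two semi-algorithms admits a very short sketch. The following hedged Haskell fragment (an illustration, not the paper's algorithm) searches breadth-first for a witnessing execution and thus terminates on positive instances, or when the reachable set is finite; the complementary enumeration of Presburger-definable inductive invariants is not shown.

import qualified Data.Set as S

type Conf   = [Integer]     -- a configuration: one counter value per place
type Action = [Integer]     -- an action: the vector added to the configuration

-- One step: apply every action that keeps all counters non-negative.
step :: [Action] -> Conf -> [Conf]
step actions c = [ c' | a <- actions, let c' = zipWith (+) c a, all (>= 0) c' ]

-- Semi-algorithm: breadth-first search for the target configuration.
reachable :: [Action] -> Conf -> Conf -> Bool
reachable actions source target = go (S.singleton source) [source]
  where
    go _    []       = False                        -- reachable set exhausted
    go seen frontier
      | target `elem` frontier = True
      | otherwise =
          let next = S.fromList [ c' | c <- frontier, c' <- step actions c ]
                       `S.difference` seen
          in go (seen `S.union` next) (S.toList next)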
In this paper, we present a systematic way of deriving (1) languages of (generalised) regular expressions, and (2) sound and complete axiomatizations thereof, for a wide variety of systems. This generalizes both the results of Kleene (on regular languages and deterministic finite automata) and Milner (on regular behaviours and finite labelled transition systems), and includes many other systems such as Mealy and Moore machines.
We show that for any type in Martin-Löf Intensional Type Theory, the terms of that type and its higher identity types form a weak omega-category in the sense of Leinster. Precisely, we construct a contractible globular operad of definable composition laws, and give an action of this operad on the terms of any type and its identity types.
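To recall the first level of this structure (a standard illustration, hedged, and not an excerpt from the paper): identity proofs compose by the eliminator J, and the groupoid laws hold only up to terms of the next identity type, which is the source of the weak (rather than strict) omega-category structure.

\[
  p : \mathsf{Id}_A(a,b), \quad q : \mathsf{Id}_A(b,c)
  \;\;\vdash\;\; p \cdot q : \mathsf{Id}_A(a,c)
\]
\[
  \alpha_{p,q,r} : \mathsf{Id}_{\mathsf{Id}_A(a,d)}\bigl((p \cdot q) \cdot r,\; p \cdot (q \cdot r)\bigr)
  \qquad \text{(associativity only up to a higher cell)}
\]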
We study logics defined in terms of second-order monadic monoidal and groupoidal quantifiers. These are generalized quantifiers defined by monoid and groupoid word-problems, equivalently, by regular and context-free languages. We give a computational classification of the expressive power of these logics over strings with varying built-in predicates. In particular, we show that ATIME(n) can be logically characterized in terms of second-order monadic monoidal quantifiers.