2009. Editors: Ralf Treinen, Jean Goubault-Larrecq
We consider the problem of intruder deduction in security protocol analysis: that is, deciding whether a given message M can be deduced from a set of messages Gamma under the theory of blind signatures and arbitrary convergent equational theories modulo associativity and commutativity (AC) of certain binary operators. The traditional formulations of intruder deduction are usually given in natural-deduction-like systems and proving decidability requires significant effort in showing that the rules are "local" in some sense. By using the well-known translation between natural deduction and sequent calculus, we recast the intruder deduction problem as proof search in sequent calculus, in which locality is immediate. Using standard proof theoretic methods, such as permutability of rules and cut elimination, we show that the intruder deduction problem can be reduced, in polynomial time, to the elementary deduction problem, which amounts to solving certain equations in the underlying individual equational theories. We show that this result extends to combinations of disjoint AC-convergent theories whereby the decidability of intruder deduction under the combined theory reduces to the decidability of elementary deduction in each constituent theory. To further demonstrate the utility of the sequent-based approach, we show that, for Dolev-Yao intruders, our sequent-based techniques can be used to solve the more difficult problem of solving deducibility constraints, where the […]
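To make the underlying decision problem concrete, here is a minimal Python sketch of intruder deduction for the plain Dolev-Yao theory only (pairing and symmetric encryption); it does not model the blind-signature or AC-equational reasoning, the sequent calculus, or the reduction to elementary deduction developed in the paper, and the message encoding and function names are our own.

```python
# A minimal sketch of intruder deduction for the plain Dolev-Yao theory
# (pairing and symmetric encryption only); the blind-signature and
# AC-equational aspects handled in the paper are not modelled here.
# Messages are hypothetical tuples ("pair", a, b), ("enc", m, k), or atoms.

def subterms(t):
    """All subterms of a message term."""
    yield t
    if isinstance(t, tuple):
        for s in t[1:]:
            yield from subterms(s)

def deducible(gamma, goal):
    """Decide whether goal is deducible from gamma by saturating a
    knowledge set, restricted (by locality) to subterms of gamma and goal."""
    universe = set()
    for t in list(gamma) + [goal]:
        universe.update(subterms(t))

    known = set(gamma)
    changed = True
    while changed:
        changed = False
        # synthesis: build pairs/encryptions whose parts are already known
        for t in universe - known:
            if isinstance(t, tuple) and t[0] in ("pair", "enc") \
               and t[1] in known and t[2] in known:
                known.add(t)
                changed = True
        # analysis: project pairs, decrypt when the key is known
        for t in list(known):
            if isinstance(t, tuple) and t[0] == "pair":
                for part in (t[1], t[2]):
                    if part not in known:
                        known.add(part)
                        changed = True
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in known \
                    and t[1] not in known:
                known.add(t[1])
                changed = True
    return goal in known

# Example: from { enc(secret, k), k } the intruder can deduce secret.
gamma = {("enc", "secret", "k"), "k"}
print(deducible(gamma, "secret"))                     # True
print(deducible(gamma, ("pair", "secret", "k")))      # True
print(deducible({("enc", "secret", "k")}, "secret"))  # False
```

Restricting synthesis to subterms of Gamma and the goal is precisely the kind of locality property that, according to the abstract, becomes immediate in the sequent-calculus formulation.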
Properties of Term Rewriting Systems are called modular iff they are preserved under (and reflected by) disjoint union, i.e. when combining two Term Rewriting Systems with disjoint signatures. Convergence is the property of Infinitary Term Rewriting Systems that all reduction sequences converge to a limit. Strong Convergence requires in addition that redex positions in a reduction sequence move arbitrarily deep. In this paper it is shown that both Convergence and Strong Convergence are modular properties of non-collapsing Infinitary Term Rewriting Systems, provided (for convergence) that the term metrics are granular. This generalises known modularity results beyond the standard metric setting of infinitary term rewriting.
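For readers less familiar with the terminology, the following is a standard illustration of strong convergence (a textbook example, not taken from the paper), stated for the usual term metric.

```latex
With the single non-collapsing rule $a \to g(a)$ we obtain the reduction
\[
  a \;\to\; g(a) \;\to\; g(g(a)) \;\to\; g(g(g(a))) \;\to\; \cdots
\]
whose $n$-th step contracts a redex at depth $n$.  The redex depths tend to
infinity, so the sequence strongly converges (in the metric
$d(s,t) = 2^{-k}$, with $k$ the least depth at which $s$ and $t$ differ) to
the infinite term $g(g(g(\cdots)))$, usually written $g^{\omega}$.
```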
The characterisation of termination using well-founded monotone algebras has been a milestone on the way to automated termination techniques, of which we have seen an extensive development over the past years. Both the semantic characterisation and most known termination methods are concerned with global termination, uniformly of all the terms of a term rewriting system (TRS). In this paper we consider local termination, of specific sets of terms within a given TRS. The principal goal of this paper is to generalise the semantic characterisation of global termination to local termination. This is made possible by allowing the well-founded monotone algebras to be partial. We also extend our approach to local relative termination. The interest in local termination naturally arises in program verification, where one is typically interested only in sensible inputs, or just wants to characterise the set of inputs for which a program terminates. Local termination will also be of interest when dealing with a specific class of terms within a TRS that is known to be non-terminating, such as combinatory logic (CL) or a TRS encoding recursive program schemes or Turing machines. We show how some of the well-known techniques for proving global termination, such as stepwise removal of rewrite rules and semantic labelling, can be adapted to the local case. We also describe transformations reducing local to global termination problems. The resulting techniques for proving local termination […]
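As a toy illustration of why partiality matters (our example, not the paper's): a TRS can be globally non-terminating yet locally terminating on a set of terms, and a partial algebra can witness this simply by leaving the offending symbol uninterpreted.

```latex
The TRS $R = \{\, a \to a,\; b \to c \,\}$ is globally non-terminating
because of the loop on $a$, yet it terminates on all terms over the
signature $\{b, c\}$.  A partial well-founded monotone algebra over
$(\mathbb{N}, >)$ witnessing this local claim is
\[
  [\![\, b \,]\!] = 1, \qquad [\![\, c \,]\!] = 0, \qquad
  [\![\, a \,]\!] \text{ undefined},
\]
so every rewrite step between terms with a defined interpretation strictly
decreases the interpreted value.
```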
We define two transformations from term rewriting systems (TRSs) to context-sensitive TRSs in such a way that termination of the target system implies outermost termination of the original system. In the transformation based on 'context extension', each outermost rewrite step is modeled by exactly one step in the transformed system. This transformation turns out to be complete for the class of left-linear TRSs. The second transformation is called 'dynamic labeling' and results in smaller context-sensitive TRSs. Here each modeled step is adjoined with a small number of auxiliary steps. As a result, state-of-the-art termination methods for context-sensitive rewriting become available for proving termination of outermost rewriting. Both transformations have been implemented in Jambox, making it the most successful tool in the outermost rewriting category of the last edition of the annual termination competition.
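A toy example (ours, not one of the paper's two transformations) shows why context-sensitive restrictions are a natural target for capturing outermost termination.

```latex
The TRS $R = \{\, a \to f(a),\; f(x) \to b \,\}$ is not terminating:
\[
  a \;\to\; f(a) \;\to\; f(f(a)) \;\to\; \cdots
\]
but it is outermost terminating: after $a \to f(a)$ the only outermost
redex is at the root, giving $f(a) \to b$.  Forbidding reductions below
$f$ (replacement map $\mu(f) = \emptyset$) yields a context-sensitive
system that terminates for essentially the same reason.
```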
The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
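A small example of an EL-unification problem (ours, not from the paper), where X is a concept variable and A, B, B' are concept names:

```latex
Unifying the two $\mathcal{EL}$-concepts
\[
  C \;=\; A \sqcap \exists r.X
  \qquad\text{and}\qquad
  D \;=\; \exists r.(B \sqcap B') \sqcap A
\]
succeeds with the unifier $\sigma = \{\, X \mapsto B \sqcap B' \,\}$,
since $\sigma(C)$ and $\sigma(D)$ are equivalent concepts.
```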
Streams are infinite sequences over a given data type. A stream specification is a set of equations intended to define a stream. We propose a transformation from such a stream specification to a term rewriting system (TRS) in such a way that termination of the resulting TRS implies that the stream specification is well-defined, that is, admits a unique solution. As a consequence, proving well-definedness of several interesting stream specifications can be done fully automatically using current powerful tools for proving TRS termination. In order to increase the power of this approach, we investigate transformations that preserve semantics and well-definedness. We give examples for which the above-mentioned technique applies to the transformed specification while it fails for the original one.
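Two standard toy specifications (not necessarily the paper's examples) illustrate what well-definedness means here.

```latex
\[
  \mathsf{zeros} \;=\; 0 : \mathsf{zeros}
  \qquad\text{versus}\qquad
  \mathsf{s} \;=\; \mathsf{tail}(\mathsf{s})
\]
The first specification has exactly one solution, the constant stream of
zeros, and is therefore well-defined; the second is satisfied by every
constant stream, so it has no unique solution.
```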
Interaction nets are a graphical formalism inspired by Linear Logic proof-nets, often used for studying higher-order rewriting, e.g. beta-reduction. Traditional presentations of interaction nets are based on graph theory and rely on elementary graph-theoretic properties. We give here a more explicit presentation based on notions borrowed from Girard's Geometry of Interaction: interaction nets are presented as partial permutations, and a composition of nets, the gluing, is derived from the execution formula. We then define contexts and reduction as the contextual closure of rules. We prove strong confluence of the reduction within our framework and show how interaction nets can be viewed as the quotient of some generalized proof-nets.
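As a schematic illustration (not the paper's exact formalism), already the wiring of a net can be viewed as a partial permutation on its ports:

```latex
A wiring connecting four ports by the links $1$--$3$ and $2$--$4$ is
recorded as the fixed-point-free partial involution
\[
  \sigma(1) = 3, \quad \sigma(3) = 1, \quad \sigma(2) = 4, \quad \sigma(4) = 2,
\]
and gluing two nets along shared interface ports can then be expressed by
composing such permutations, in the spirit of the Geometry of Interaction
execution formula.
```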