2011
We study the derivational complexity induced by the dependency pair method, enhanced with standard refinements. We obtain upper bounds on the derivational complexity induced by the dependency pair method in terms of the derivational complexity of the base techniques employed. In particular we show that the derivational complexity induced by the dependency pair method based on some direct technique, possibly refined by argument filtering, the usable rules criterion, or dependency graphs, is primitive recursive in the derivational complexity induced by the direct method. This implies that the derivational complexity induced by a standard application of the dependency pair method based on traditional termination orders like KBO, LPO, and MPO is exactly the same as if those orders were applied as the only termination technique.
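As a textbook illustration of the dependency pair construction (a standard example, not taken from the paper): for the rewrite rules defining addition, the recursive call gives rise to a single dependency pair over marked symbols,

    \[ \mathsf{plus}(0,y) \to y, \qquad \mathsf{plus}(\mathsf{s}(x),y) \to \mathsf{s}(\mathsf{plus}(x,y)), \qquad \mathsf{plus}^{\sharp}(\mathsf{s}(x),y) \to \mathsf{plus}^{\sharp}(x,y), \]

and termination reduces to the absence of infinite chains of such pairs, which a base order like LPO can rule out directly.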
The theory of classical realizability is a framework in which we can develop the proof-program correspondence. Using this framework, we show how to transform into programs the proofs in classical analysis with dependent choice and the existence of a well ordering of the real line. The principal tools are: the notion of realizability algebra, a three-sorted variant of the well-known combinatory algebra of Curry; and an adaptation of the method of forcing used in set theory to prove consistency results, used here in another way, to obtain programs associated with a well ordering of R and with the existence of a non-trivial ultrafilter on N.
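For orientation (standard facts of Krivine-style classical realizability, not results of this paper): the combinators of Curry realize the axioms of minimal implicational logic, while the control instruction cc realizes Peirce's law, which is what makes the logic classical,

    \[ \mathsf{K} \Vdash A \to B \to A, \qquad \mathsf{S} \Vdash (A \to B \to C) \to (A \to B) \to A \to C, \qquad \mathsf{cc} \Vdash ((A \to B) \to A) \to A. \]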
The coalgebraic approach to modal logic provides a uniform framework that captures the semantics of a large class of structurally different modal logics, including e.g. graded and probabilistic modal logics and coalition logic. In this paper, we introduce the coalgebraic mu-calculus, an extension of the general (coalgebraic) framework with fixpoint operators. Our main results are completeness of the associated tableau calculus and EXPTIME decidability for guarded formulas. Technically, this is achieved by reducing satisfiability to the existence of non-wellfounded tableaux, which is in turn equivalent to the existence of winning strategies in parity games. Our results are parametric in the underlying class of models and yield, as concrete applications, previously unknown complexity bounds for the probabilistic mu-calculus and for an extension of coalition logic with fixpoints.
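A minimal sketch of the syntax in question, with Λ a set of modal operators interpreted by predicate liftings (notation assumed here, not necessarily the paper's):

    \[ \varphi \;::=\; X \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \heartsuit\varphi \;(\heartsuit \in \Lambda) \mid \mu X.\varphi \mid \nu X.\varphi, \]

where fixpoint variables occur positively; choosing Λ and the coalgebra type recovers, e.g., the modal, graded, or probabilistic mu-calculus.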
Size-Change Termination (SCT) is a method of proving program termination based on the impossibility of infinite descent. To this end we use a program abstraction in which transitions are described by Monotonicity Constraints over (abstract) variables. When only constraints of the form x > y' and x ≥ y' are allowed, we have size-change graphs. In the last decade, both theory and practice have evolved significantly in this restricted framework. The crucial underlying assumption of most of the past work is that the domain of the variables is well-founded. In a recent paper I showed how to extend and adapt some theory from the domain of size-change graphs to general monotonicity constraints, thus complementing previous work, but remaining in the realm of well-founded domains. However, monotonicity constraints are, interestingly, capable of proving termination also in the integer domain, which is not well-founded. The purpose of this paper is to explore the application of monotonicity constraints in this domain. We lay the necessary theoretical foundation, and present precise decision procedures for termination; finally, we provide a procedure to construct explicit global ranking functions from monotonicity constraints in singly-exponential time, and of optimal worst-case size and dimension (ordinal).
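The classical size-change termination test that this line of work builds on can be sketched in a few lines (an illustration of the restricted, well-founded setting, not this paper's integer-domain procedures; the encoding and names are assumptions):

    # Size-change termination for a single recursive function, after
    # Lee, Jones and Ben-Amram. A graph is a frozenset of arcs
    # (x, strict, y), meaning x > y' (strict=True) or x >= y'.

    def compose(g, h):
        """Compose two size-change graphs; a composed arc is strict
        if some contributing arc on the path is strict."""
        arcs = {}
        for (x, s1, y) in g:
            for (y2, s2, z) in h:
                if y == y2:
                    arcs[(x, z)] = arcs.get((x, z), False) or s1 or s2
        return frozenset((x, s, z) for (x, z), s in arcs.items())

    def sct_terminates(graphs):
        """Close under composition; SCT holds iff every idempotent
        graph in the closure has a strictly decreasing self-arc."""
        closure = set(graphs)
        while True:
            new = {compose(g, h) for g in closure for h in closure} - closure
            if not new:
                break
            closure |= new
        return all(any(x == y and s for (x, s, y) in g)
                   for g in closure if compose(g, g) == g)

    # f(x, y) calls f(x - 1, y): one graph with x > x' and y >= y'.
    g = frozenset({("x", True, "x"), ("y", False, "y")})
    print(sct_terminates({g}))  # True

Over the integers this test alone is unsound, since descent need not bottom out; that gap is exactly what the paper's theory addresses.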
We investigate conditions under which a co-computably enumerable set in a computable metric space is computable. Using higher-dimensional chains and spherical chains we prove that in each computable metric space which is locally computable each co-computably enumerable sphere is computable and each co-c.e. cell with co-c.e. boundary sphere is computable.
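Here a closed set S in a computable metric space X is co-computably enumerable when its complement is effectively open (standard terminology, recalled for orientation):

    \[ X \setminus S \;=\; \bigcup_{i \in A} B_i \quad \text{for some computably enumerable set } A, \]

where (B_i) is a fixed effective enumeration of the open rational balls of the space.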
Nominal abstract syntax is a popular first-order technique for encoding, and reasoning about, abstract syntax involving binders. Many of its applications involve constraint solving. The most commonly used constraint solving algorithm over nominal abstract syntax is the Urban-Pitts-Gabbay nominal unification algorithm, which is well-behaved, has a well-developed theory and is applicable in many cases. However, certain problems require a constraint solver which respects the equivariance property of nominal logic, such as Cheney's equivariant unification algorithm. This is more powerful but is more complicated and computationally hard. In this paper we present a novel algorithm for solving constraints over a simple variant of nominal abstract syntax which we call non-permutative. This constraint problem has similar complexity to equivariant unification but without many of the additional complications of the equivariant unification term language. We prove our algorithm correct, paying particular attention to issues of termination, and present an explicit translation of name-name equivariant unification problems into non-permutative constraints.
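To fix intuitions (a generic illustration, not the paper's algorithm): the primitive operation behind both alpha-equivalence and equivariance is the name swapping (a b), which, unlike capture-avoiding substitution, acts uniformly under binders:

    # A minimal sketch of the swapping action of nominal abstract
    # syntax; the term encoding is an assumption made for the example.
    # Terms are names (strings), abstractions ('abs', name, body),
    # or tuples of terms.

    def swap(a, b, t):
        """Apply the transposition (a b) to the term t."""
        if isinstance(t, str):                       # a name
            return b if t == a else a if t == b else t
        if t[0] == 'abs':                            # swapping goes under binders
            return ('abs', swap(a, b, t[1]), swap(a, b, t[2]))
        return tuple(swap(a, b, s) for s in t)       # compound terms

    # (a b) . (abs a. (a, c))  =  abs b. (b, c)
    print(swap('a', 'b', ('abs', 'a', ('a', 'c'))))  # ('abs', 'b', ('b', 'c'))

An equivariant constraint solver must treat problems as stable under all such swappings, which is the source of the extra complexity mentioned above.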
We develop local reasoning techniques for message-passing concurrent programs based on ideas from separation logics and resource usage analysis. We extend processes with permission-resources and define a reduction semantics for this extended language. This provides a foundation for interpreting separation formulas for message-passing concurrency. We also define a sound proof system permitting us to infer satisfaction compositionally using local, separation-based reasoning.
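For reference, the separating conjunction at the heart of such interpretations (standard, not specific to this paper): a resource satisfies P * Q exactly when it splits into two disjoint parts satisfying P and Q respectively,

    \[ h \models P * Q \iff \exists h_1, h_2.\; h = h_1 \uplus h_2 \,\wedge\, h_1 \models P \,\wedge\, h_2 \models Q. \]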
Using coalgebraic methods, we extend Conway's theory of games to possibly non-terminating, i.e. non-wellfounded games (hypergames). We take the view that a play which goes on forever is a draw, and hence rather than focussing on winning strategies, we focus on non-losing strategies. Hypergames are a fruitful metaphor for non-terminating processes, Conway's sum being similar to shuffling. We develop a theory of hypergames, which extends in a non-trivial way Conway's theory; in particular, we generalize Conway's results on game determinacy and characterization of strategies. Hypergames have a rather interesting theory, already in the case of impartial hypergames, for which we give a compositional semantics, in terms of a generalized Grundy-Sprague function and a system of generalized Nim games. Equivalences and congruences on games and hypergames are discussed. We indicate a number of intriguing directions for future work. We briefly compare hypergames with other notions of games used in computer science.
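The classical Grundy-Sprague function that the compositional semantics generalizes assigns to each wellfounded impartial game the minimal excludant (mex) of the values of its options; a minimal sketch for Nim heaps (illustrative only):

    from functools import lru_cache, reduce
    from operator import xor

    def mex(s):
        """Minimal excludant: the least natural number not in s."""
        n = 0
        while n in s:
            n += 1
        return n

    @lru_cache(maxsize=None)
    def grundy(heap):
        """Grundy value of a Nim heap: its options are all smaller heaps."""
        return mex({grundy(k) for k in range(heap)})

    # By the Grundy-Sprague theorem, a sum of heaps is a loss for the
    # player to move iff the XOR of the Grundy values is 0.
    print(reduce(xor, (grundy(h) for h in [3, 5, 6])))  # 0

In the non-wellfounded setting this recursion no longer bottoms out, which is why the paper needs a generalized Grundy-Sprague function and generalized Nim games.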
Relational data exchange is the problem of translating relational data from a source schema into a target schema, according to a specification of the relationship between the source data and the target data. One of the basic issues is how to answer queries that are posed against target data. While consensus has been reached on the definitive semantics for monotonic queries, this issue turned out to be considerably more difficult for non-monotonic queries. Several semantics for non-monotonic queries have been proposed in the past few years. This article proposes a new semantics for non-monotonic queries, called the GCWA*-semantics. It is inspired by semantics from the area of deductive databases. We show that the GCWA*-semantics coincides with the standard open world semantics on monotonic queries, and we further explore the (data) complexity of evaluating non-monotonic queries under the GCWA*-semantics. In particular, we introduce a class of schema mappings for which universal queries can be evaluated under the GCWA*-semantics in polynomial time (data complexity) on the core of the universal solutions.
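As an illustration of the setting (a standard textbook mapping, not one from the article): a schema mapping specified by the source-to-target tgd

    \[ \forall x \forall y \, \bigl( E(x,y) \rightarrow \exists z \, ( F(x,z) \wedge F(z,y) ) \bigr) \]

translates every source edge into a target path of length two; the semantics of a query then determines which answers count as certain over the many possible target instances.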
We are interested in identifying and enforcing the isolation requirements of a concurrent program, i.e., concurrency control that ensures that the program meets its specification. The thesis of this paper is that this can be done systematically starting from a sequential proof, i.e., a proof of correctness of the program in the absence of concurrent interleavings. We illustrate our thesis by presenting a solution to the problem of making a sequential library thread-safe for concurrent clients. We consider a sequential library annotated with assertions along with a proof that these assertions hold in a sequential execution. We show how we can use the proof to derive concurrency control that ensures that any execution of the library methods, when invoked by concurrent clients, satisfies the same assertions. We also present an extension to guarantee that the library methods are linearizable or atomic.
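A toy instance of the problem being solved (a generic illustration; the paper derives such concurrency control systematically from the sequential proof, which this sketch does not show):

    import threading

    # A sequential library whose assertion a concurrent client could
    # break; a lock around the code that the sequential proof treats
    # as one step restores the assertion.

    class Counter:
        def __init__(self):
            self._n = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:                 # inferred critical section
                old = self._n
                self._n = old + 1
                assert self._n == old + 1    # the sequential proof's assertion

Without the lock, two concurrent calls can interleave between the read and the write, invalidating exactly the assertion the sequential proof establishes.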
We study and compare in two degree-theoretic ways (iterated Halting oracles analogous to Kleene's arithmetical hierarchy, and the Borel hierarchy of descriptive set theory) the capabilities and limitations of three models of analytic computation: BSS machines (aka real-RAM) and strongly/weakly analytic machines as introduced by Hotz et al. (1995).
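The first of the two hierarchies is the familiar iteration of the Turing jump (recalled for orientation):

    \[ \emptyset^{(0)} = \emptyset, \qquad \emptyset^{(n+1)} = \bigl( \emptyset^{(n)} \bigr)', \]

where ' denotes the Halting problem relativized to the given oracle.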
We provide a mathematical theory and methodology for synthesising equational logics from algebraic metatheories. We illustrate our methodology by means of two applications: a rational reconstruction of Birkhoff's Equational Logic and a new equational logic for reasoning about algebraic structure with name-binding operators.
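For orientation, the rules of Birkhoff's Equational Logic that the first application reconstructs (in a standard presentation): reflexivity, symmetry, transitivity, congruence, and substitution,

    \[ \frac{}{t \approx t} \qquad \frac{s \approx t}{t \approx s} \qquad \frac{s \approx t \quad t \approx u}{s \approx u} \qquad \frac{s_1 \approx t_1 \;\cdots\; s_n \approx t_n}{f(s_1,\dots,s_n) \approx f(t_1,\dots,t_n)} \qquad \frac{s \approx t}{s\sigma \approx t\sigma}. \]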
It is known that the composition of schema mappings, each specified by source-to-target tgds (st-tgds), can be specified by a second-order tgd (SO tgd). We consider the question of what happens when target constraints are allowed. Specifically, we consider the question of specifying the composition of standard schema mappings (those specified by st-tgds, target egds, and a weakly acyclic set of target tgds). We show that SO tgds, even with the assistance of arbitrary source constraints and target constraints, cannot specify in general the composition of two standard schema mappings. Therefore, we introduce source-to-target second-order dependencies (st-SO dependencies), which are similar to SO tgds, but allow equations in the conclusion. We show that st-SO dependencies (along with target egds and target tgds) are sufficient to express the composition of every finite sequence of standard schema mappings, and further, every st-SO dependency specifies such a composition. In addition to this expressive power, we show that st-SO dependencies enjoy other desirable properties. In particular, they have a polynomial-time chase that generates a universal solution. This universal solution can be used to find the certain answers to unions of conjunctive queries in polynomial time. It is easy to show that the composition of an arbitrary number of standard schema mappings is equivalent to the composition of only two standard schema mappings. We show that, surprisingly, the analogous result […]
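For concreteness (a standard example from the composition literature; the relation names are assumptions): an SO tgd may existentially quantify function symbols, as in

    \[ \exists f \, \forall e \, \bigl( \mathrm{Emp}(e) \rightarrow \mathrm{Mgr}(e, f(e)) \bigr), \]

while an st-SO dependency additionally allows equations between such terms in the conclusion, which is precisely the extra power shown necessary here.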
We study probabilistic complexity classes and questions of derandomisation from a logical point of view. For each logic L we introduce a new logic BPL, bounded error probabilistic L, which is defined from L in a similar way as the complexity class BPP, bounded error probabilistic polynomial time, is defined from PTIME. Our main focus lies on questions of derandomisation, and we prove that there is a query which is definable in BPFO, the probabilistic version of first-order logic, but not in Cinf, finite variable infinitary logic with counting. This implies that many of the standard logics of finite model theory, like transitive closure logic and fixed-point logic, both with and without counting, cannot be derandomised. Similarly, we present a query on ordered structures which is definable in BPFO but not in monadic second-order logic, and a query on additive structures which is definable in BPFO but not in FO. The latter of these queries shows that certain uniform variants of AC0 (bounded-depth, polynomial-size circuits) cannot be derandomised. These results are in contrast to the general belief that most standard complexity classes can be derandomised. Finally, we note that BPIFP+C, the probabilistic version of fixed-point logic with counting, captures the complexity class BPP, even on unordered structures.
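Recall how BPP is obtained from PTIME (standard definition, which the logics above mirror): a randomized polynomial-time machine M decides L with bounded error when

    \[ x \in L \Rightarrow \Pr[M \text{ accepts } x] \geq 2/3, \qquad x \notin L \Rightarrow \Pr[M \text{ accepts } x] \leq 1/3. \]

In BPL the acceptor is, roughly speaking, an L-formula evaluated over the structure extended with random relations, subject to the same two-sided error bound.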
This short note presents a new relation between coherent spaces and finiteness spaces. This takes the form of a functor from COH to FIN commuting with the additive and multiplicative structure of linear logic. What makes this correspondence possible and conceptually interesting is the use of the infinite Ramsey theorem. Along the way, the question of the cardinality of the collection of finiteness spaces on N is answered. Basic knowledge about coherent spaces and finiteness spaces is assumed.
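Recall the two notions being related (standard definitions, stated for orientation): a coherent space is a set of tokens with a reflexive symmetric coherence relation, its cliques being the points; a finiteness space is a pair (|X|, F(X)) with F(X) ⊆ P(|X|) closed under biorthogonality, where

    \[ \mathcal{G}^{\perp} = \{\, u' \subseteq |X| \;:\; u \cap u' \text{ is finite for all } u \in \mathcal{G} \,\}, \qquad \mathcal{F}(X) = \mathcal{F}(X)^{\perp\perp}. \]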
We investigate the connection between measure, capacity and algorithmic randomness for the space of closed sets. For any computable measure m, a computable capacity T may be defined by letting T(Q) be the measure of the family of closed sets K which have nonempty intersection with Q. We prove an effective version of Choquet's capacity theorem by showing that every computable capacity may be obtained from a computable measure in this way. We establish conditions on the measure m that characterize when the capacity of an m-random closed set equals zero. This includes new results in classical probability theory as well as results for algorithmic randomness. For certain computable measures, we construct effectively closed sets with positive capacity and with Lebesgue measure zero. We show that for computable measures, a real q is upper semi-computable if and only if there is an effectively closed set with capacity q.
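In symbols, the capacity induced by the measure m is

    \[ T(Q) \;=\; m\bigl( \{\, K \;:\; K \cap Q \neq \emptyset \,\} \bigr), \]

the probability that an m-random closed set meets Q.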
Language-based information flow methods offer a principled way to enforce strong security properties, but enforcing noninterference is too inflexible for realistic applications. Security-typed languages have therefore introduced declassification mechanisms for relaxing confidentiality policies, and endorsement mechanisms for relaxing integrity policies. However, a continuing challenge has been to define what security is guaranteed when such mechanisms are used. This paper presents a new semantic framework for expressing security policies for declassification and endorsement in a language-based setting. The key insight is that security can be characterized in terms of the influence that declassification and endorsement allow to the attacker. The new framework introduces two notions of security to describe the influence of the attacker. Attacker control defines what the attacker is able to learn from observable effects of this code; attacker impact captures the attacker's influence on trusted locations. This approach yields novel security conditions for checked endorsements and robust integrity. The framework is flexible enough to recover and to improve on the previously introduced notions of robustness and qualified robustness. Further, the new security conditions can be soundly enforced by a security type system. The applicability and enforcement of the new policies is illustrated through various examples, including data sanitization and authentication.
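The data sanitization scenario can be pictured with the textbook password-checking example (a generic illustration with assumed names, not the paper's formal system): only the one-bit outcome of the comparison is deliberately released, and the security conditions bound what the attacker can learn and influence through such escape hatches.

    import hmac

    def declassify(value):
        """Deliberate release of information. In a security-typed
        language this is a typing construct; here it is an identity
        function standing in for one."""
        return value

    def check_password(stored, guess):
        # The secret itself stays confidential; only the result of the
        # comparison is declassified.
        return declassify(hmac.compare_digest(stored, guess))

    print(check_password(b"s3cret", b"guess"))  # False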
It is well-known that simple type theory is complete with respect to non-standard set-valued models. Completeness for standard models only holds with respect to certain extended classes of models, e.g., the class of cartesian closed categories. Similarly, dependent type theory is complete for locally cartesian closed categories. However, it is usually difficult to establish the coherence of interpretations of dependent type theory, i.e., to show that the interpretations of equal expressions are indeed equal. Several classes of models have been used to remedy this problem. We contribute to this investigation by giving a semantics that is standard, coherent, and sufficiently general for completeness while remaining relatively easy to compute with. Our models interpret types of Martin-Löf's extensional dependent type theory as sets indexed over posets or, equivalently, as fibrations over posets. This semantics can be seen as a generalization to dependent type theory of the interpretation of intuitionistic first-order logic in Kripke models. This yields a simple coherent model theory, with respect to which simple and dependent type theory are sound and complete.
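The Kripke-model analogy can be recalled in a single clause (standard, for orientation): over a poset of worlds, implication is forced monotonically,

    \[ w \Vdash A \rightarrow B \iff \text{for all } w' \geq w,\; w' \Vdash A \text{ implies } w' \Vdash B, \]

and the models above generalize this indexing-over-posets pattern from propositions to dependent types.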
We present a formalization of modern SAT solvers and their properties in the form of abstract state transition systems. SAT solving procedures are described as transition relations over states that represent the values of the solver's global variables. Several different SAT solvers are formalized, including both the classical DPLL procedure and its state-of-the-art successors. The formalization is made within the Isabelle/HOL system and the total correctness (soundness, termination, completeness) is shown for each presented system (with respect to a simple notion of satisfiability that can be manually checked). The systems are defined in a general way and cover procedures used in a wide range of modern SAT solvers. Our formalization builds on previous work on state transition systems for SAT, but it gives machine-verifiable proofs, somewhat more general specifications, and weaker assumptions that ensure the key correctness properties. The presented proofs of formal correctness of the transition systems can be used as a key building block in proving correctness of SAT solvers by using other verification approaches.
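To suggest the flavor of the systems being formalized (a minimal sketch, not the Isabelle/HOL development): rules such as Decide, UnitPropagate and Backtrack, here fused into a recursive search.

    # DPLL over CNF clauses given as lists of nonzero integers,
    # negative meaning negated; the encoding is illustrative.

    def simplify(clauses, lit):
        """Assign lit True: drop satisfied clauses, shrink the rest;
        None signals a conflict (an empty clause)."""
        out = []
        for c in clauses:
            if lit in c:
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None
            out.append(reduced)
        return out

    def dpll(clauses, assignment=()):
        if clauses is None:
            return None                  # conflict on this branch
        if not clauses:
            return assignment            # all clauses satisfied
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is not None:             # UnitPropagate
            return dpll(simplify(clauses, unit), assignment + (unit,))
        lit = clauses[0][0]              # Decide; 'or' is Backtrack
        return (dpll(simplify(clauses, lit), assignment + (lit,))
                or dpll(simplify(clauses, -lit), assignment + (-lit,)))

    print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # (1, 2, 3)

The state-of-the-art successors replace chronological backtracking by backjumping, learning and restarts, which the transition systems cover as additional rules.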
We analyse the computational complexity of finding Nash equilibria in turn-based stochastic multiplayer games with omega-regular objectives. We show that restricting the search space to equilibria whose payoffs fall into a certain interval may lead to undecidability. In particular, we prove that the following problem is undecidable: Given a game G, does there exist a Nash equilibrium of G where Player 0 wins with probability 1? Moreover, this problem remains undecidable when restricted to pure strategies or (pure) strategies with finite memory. One way to obtain a decidable variant of the problem is to restrict the strategies to be positional or stationary. For the complexity of these two problems, we obtain a common lower bound of NP-hardness and upper bounds of NP and PSPACE, respectively. Finally, we single out a special case of the general problem that, in many cases, admits an efficient solution. In particular, we prove that deciding the existence of an equilibrium in which each player either wins or loses with probability 1 can be done in polynomial time for games where the objective of each player is given by a parity condition with a bounded number of priorities.
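For reference, the winning condition in the last result (standard definition): a play satisfies a parity condition with priority function Ω if

    \[ \max \{\, \Omega(v) \;:\; v \text{ visited infinitely often} \,\} \ \text{is even}, \]

and the boundedness of the number of priorities is the hypothesis under which the polynomial-time result is stated.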
Separation logic is a Hoare-style logic for reasoning about programs with heap-allocated mutable data structures. As a step toward extending separation logic to high-level languages with ML-style general (higher-order) storage, we investigate the compatibility of nested Hoare triples with several variations of higher-order frame rules. The interaction of nested triples and frame rules can be subtle, and the inclusion of certain frame rules is in fact unsound. A particular combination of rules can be shown consistent by means of a Kripke model where worlds live in a recursively defined ultrametric space. The resulting logic allows us to elegantly prove programs involving stored code. In particular, using recursively defined assertions, it leads to natural specifications and proofs of invariants required for dealing with recursion through the store.
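The first-order rule being generalized is the standard frame rule, which extends a local specification by an invariant R on separate state,

    \[ \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}}, \]

the subtlety here being what happens when such invariant extension is applied to entire nested specifications of stored code.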
Probabilistic Büchi Automata (PBA) are randomized, finite state automata that process input strings of infinite length. Based on the threshold chosen for the acceptance probability, different classes of languages can be defined. In this paper, we present a number of results that clarify the power of such machines and properties of the languages they define. The broad themes we focus on are as follows. We present results on the decidability and precise complexity of the emptiness, universality and language containment problems for such machines, thus answering questions central to the use of these models in formal verification. Next, we characterize the languages recognized by PBAs topologically, demonstrating that though general PBAs can recognize languages that are not regular, topologically the languages are as simple as ω-regular languages. Finally, we introduce Hierarchical PBAs, which are syntactically restricted forms of PBAs that are tractable and capture exactly the class of ω-regular languages.
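Writing P_A(w) for the probability that a run of A on w is accepting, the threshold semantics alluded to above takes the form (standard notation, assumed here)

    \[ L_{>\lambda}(\mathcal{A}) \;=\; \{\, w \in \Sigma^{\omega} \;:\; P_{\mathcal{A}}(w) > \lambda \,\}, \]

with different choices of the threshold λ (e.g. 0, or a positive constant) yielding the different language classes.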
This paper presents a new exponential lower bound for the two most popular deterministic variants of the strategy improvement algorithms for solving parity, mean payoff, discounted payoff and simple stochastic games. The first variant improves every node in each step maximizing the current valuation locally, whereas the second variant computes the globally optimal improvement in each step. We outline families of games on which both variants require exponentially many strategy iterations.
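The improvement loop shared by both variants can be sketched on the one-player (Markov decision process) special case with discounted payoff (an illustration of the generic scheme, not the paper's lower-bound construction; the data is made up):

    # Strategy improvement (policy iteration) for a discounted-payoff
    # MDP. mdp[s][a] = (reward, {successor: probability}).

    DISCOUNT = 0.9

    def evaluate(mdp, policy, sweeps=1000):
        """Value of the current strategy via iterated Bellman backups."""
        v = {s: 0.0 for s in mdp}
        for _ in range(sweeps):
            for s in mdp:
                r, succ = mdp[s][policy[s]]
                v[s] = r + DISCOUNT * sum(p * v[t] for t, p in succ.items())
        return v

    def improve(mdp, v):
        """Switch every node to a locally best action (the first
        variant above)."""
        return {s: max(mdp[s], key=lambda a: mdp[s][a][0] + DISCOUNT *
                       sum(p * v[t] for t, p in mdp[s][a][1].items()))
                for s in mdp}

    mdp = {"s": {"stay": (1.0, {"s": 1.0}), "go": (0.0, {"t": 1.0})},
           "t": {"stay": (2.0, {"t": 1.0})}}
    policy = {"s": "stay", "t": "stay"}
    while True:
        v = evaluate(mdp, policy)
        new = improve(mdp, v)
        if new == policy:
            break
        policy = new
    print(policy)  # {'s': 'go', 't': 'stay'}

The exponential lower bounds concern two-player and stochastic generalizations of this loop, where evaluating and switching strategies interact in much subtler ways.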