Volume 11, Issue 1

2015


1. A Hoare logic for the coinductive trace-based big-step semantics of While

Keiko Nakata ; Tarmo Uustalu.
In search of a foundational framework for reasoning about observable behavior of programs that may not terminate, we have previously devised a trace-based big-step semantics for While. In this semantics, both traces and evaluation (relating initial states of program runs to traces they produce) are defined coinductively. On terminating runs, this semantics agrees with the standard inductive state-based semantics. Here we present a Hoare logic counterpart of our coinductive trace-based semantics and prove it sound and complete. Our logic subsumes the standard partial-correctness state-based Hoare logic as well as the total-correctness variation: they are embeddable. In the converse direction, projections can be constructed: a derivation of a Hoare triple in our trace-based logic can be translated into a derivation in the state-based logic of a translated, weaker Hoare triple. Since we work with a constructive underlying logic, the range of program properties we can reason about has a fine structure; in particular, we can distinguish between termination and nondivergence, e.g., unbounded classically total search fails to be terminating, but is nonetheless nondivergent. Our meta-theory is entirely constructive as well, and we have formalized it in Coq.
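
For orientation, here is a lazy Haskell sketch (our own, with assumed datatypes; the paper's development is in Coq) of the idea of a trace-based big-step semantics: a run of a While program is modelled as the possibly infinite list of states it passes through, with Haskell's laziness standing in for coinduction.

    type State = [(String, Int)]
    type Trace = [State]                       -- possibly infinite

    data Stmt = Skip
              | Assign String (State -> Int)
              | Seq Stmt Stmt
              | While (State -> Bool) Stmt

    -- Evaluation relates an initial state to the trace of states produced.
    exec :: Stmt -> State -> Trace
    exec Skip         s = [s]
    exec (Assign x e) s = [s, (x, e s) : filter ((/= x) . fst) s]
    exec (Seq p q)    s = let t = exec p s in init t ++ exec q (last t)
    exec (While b p)  s
      | b s       = let t = exec p s
                    in s : init t ++ exec (While b p) (last t)
      | otherwise = [s]

The extra state emitted at each guard test keeps the definition productive, so take 3 (exec (While (const True) Skip) []) returns three states rather than diverging; this is, very roughly, how a coinductive semantics can describe nonterminating runs without collapsing them into undefinedness.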

2. Permission-Based Separation Logic for Multithreaded Java Programs

Christian Haack ; Marieke Huisman ; Clément Hurlin ; Afshin Amighi.
This paper presents a program logic for reasoning about multithreaded Java-like programs with dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent separation logic to a multithreaded Java-like language. The program logic associates a unique static access permission with each heap location, ensuring exclusive write accesses and ruling out data races. Concurrent reads are supported through fractional permissions. Permissions can be transferred between threads upon thread starting, thread joining, initial monitor entrancies and final monitor exits. In order to distinguish between initial monitor entrancies and monitor reentrancies, auxiliary variables keep track of multisets of currently held monitors. Data abstraction and behavioral subtyping are facilitated through abstract predicates, which are also used to represent monitor invariants, preconditions for thread starting and postconditions for thread joining. Value-parametrized types make it convenient to capture common strong global invariants, like static object ownership relations. The program logic is presented for a model language with Java-like classes and interfaces, the soundness of the program logic is proven, and a number of illustrative examples are presented.
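
For readers unfamiliar with fractional permissions, the key splitting law can be written, in generic notation that is not necessarily the paper's, as

    $\mathsf{PointsTo}(x.f, 1, v) \;\equiv\; \mathsf{PointsTo}(x.f, \tfrac{1}{2}, v) \,*\, \mathsf{PointsTo}(x.f, \tfrac{1}{2}, v)$

where the full permission 1 is required to write to x.f and any positive fraction suffices to read; two threads may thus each hold half a permission and read concurrently, and the write permission is recovered only by reassembling the fractions.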

3. Monads need not be endofunctors

Thorsten Altenkirch ; James Chapman ; Tarmo Uustalu.
We introduce a generalization of monads, called relative monads, allowing for underlying functors between different categories. Examples include finite-dimensional vector spaces, untyped and typed lambda-calculus syntax and indexed containers. We show that the Kleisli and Eilenberg-Moore constructions carry over to relative monads and are related to relative adjunctions. Under reasonable assumptions, relative monads are monoids in the functor category concerned and extend to monads, giving rise to a coreflection between relative monads and monads. Arrows are also an instance of relative monads.
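
A minimal Haskell rendering may help fix ideas (illustrative names and an ordinary-monad instance of our own; the paper works category-theoretically): a relative monad has its unit and bind typed against a chosen functor j rather than the identity.

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    -- A relative monad over j: the carrier t only needs to act on
    -- objects in the image of j, so unit and bind mention j explicitly.
    class RelMonad j t where
      runit :: j a -> t a
      rbind :: t a -> (j a -> t b) -> t b

    newtype Id a = Id a

    -- Every ordinary monad is a relative monad over the identity functor.
    instance Monad m => RelMonad Id m where
      runit (Id x) = return x
      rbind m k    = m >>= (k . Id)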

4. On the relative proof complexity of deep inference via atomic flows

Anupam Das.
We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. As results, we obtain a polynomial simulation of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS.

5. Program Logics for Homogeneous Generative Run-Time Meta-Programming

Martin Berger ; Laurence Tratt.
This paper provides the first program logic for homogeneous generative run-time meta-programming, using a variant of MiniML by Davies and Pfenning as its underlying meta-programming language. We show the applicability of our approach by reasoning about example meta-programs from the literature. We also demonstrate that our logics are relatively complete in the sense of Cook, enable the inductive derivation of characteristic formulae, and exactly capture the observational properties induced by the operational semantics.

6. Extracting verified decision procedures: DPLL and Resolution

Ulrich Berger ; Andrew Lawrence ; Fredrik Nordvall Forsberg ; Monika Seisenberger.
This article is concerned with the application of the program extraction technique to a new class of problems: the synthesis of decision procedures for the classical satisfiability problem that are correct by construction. To this end, we formalize a completeness proof for the DPLL proof system and extract a SAT solver from it. When applied to a propositional formula in conjunctive normal form, the program produces either a satisfying assignment or a DPLL derivation showing its unsatisfiability. We use non-computational quantifiers to remove redundant computational content from the extracted program and translate it into Haskell to improve performance. We also prove the equivalence between the resolution proof system and the DPLL proof system with a bound on the size of the resulting resolution proof. This demonstrates that it is possible to capture quantitative information about the extracted program on the proof level. The formalization is carried out in the interactive proof assistant Minlog.
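
For readers unfamiliar with DPLL itself, the following plain Haskell sketch of the classical procedure (unit propagation plus splitting) indicates what the extracted solver computes; the names and representation are ours, not those of the extracted Minlog/Haskell program.

    -- Literals are non-zero integers, clauses are lists of literals,
    -- and a formula in conjunctive normal form is a list of clauses.
    type Lit    = Int
    type Clause = [Lit]
    type CNF    = [Clause]

    -- Assigning a literal true removes satisfied clauses and deletes
    -- the complementary literal from the remaining ones.
    assign :: Lit -> CNF -> CNF
    assign l = map (filter (/= negate l)) . filter (l `notElem`)

    -- DPLL: propagate unit clauses, otherwise split on some literal.
    dpll :: CNF -> Maybe [Lit]
    dpll [] = Just []                          -- no clauses left: satisfiable
    dpll cnf
      | any null cnf = Nothing                 -- empty clause: conflict
      | otherwise = case [l | [l] <- cnf] of
          (l:_) -> fmap (l :) (dpll (assign l cnf))
          []    -> let l = head (head cnf)
                   in case dpll (assign l cnf) of
                        Just m  -> Just (l : m)
                        Nothing -> fmap (negate l :) (dpll (assign (negate l) cnf))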

7. Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq

Manfred Schmidt-Schauß ; David Sabel ; Elena Machkasova.
This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on […]

8. The extensional realizability model of continuous functionals and three weakly non-constructive classical theorems

Dag Normann.
We investigate whether three statements in analysis that can be proved classically are realizable in the realizability model of extensional continuous functionals induced by Kleene's second model $K_2$. We prove that a formulation of the Riemann Permutation Theorem as well as the statement that all partially Cauchy sequences are Cauchy cannot be realized in this model, while the statement that the product of two anti-Specker spaces is anti-Specker can be realized.

9. Complexity of Problems of Commutative Grammars

Eryk Kopczynski.
We consider commutative regular and context-free grammars, or, in other words, Parikh images of regular and context-free languages. By using linear algebra and a branching analog of the classic Euler theorem, we show that, under an assumption that the terminal alphabet is fixed, the membership problem for regular grammars (given v in binary and a regular commutative grammar G, does G generate v?) is in P, and that the equivalence problem for context-free grammars (do G_1 and G_2 generate the same language?) is in $\mathrm{\Pi_2^P}$.
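
As a small example of the objects involved (ours, not the paper's): over the terminal alphabet {a, b}, the Parikh image of the context-free language $\{a^n b^n : n \geq 0\}$ is the set of vectors $\{(n, n) : n \geq 0\}$, which coincides with the Parikh image of the regular language $(ab)^*$; commutative grammars generate, and the problems above compare, exactly such sets of letter-count vectors.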

10. Weak Cat-Operads

Kosta Došen ; Zoran Petrić.
An operad (this paper deals with non-symmetric operads) may be conceived as a partial algebra with a family of insertion operations, Gerstenhaber's circle-i products, which satisfy two kinds of associativity, one of them involving commutativity. A Cat-operad is an operad enriched over the category Cat of small categories, as a 2-category with small hom-categories is a category enriched over Cat. The notion of weak Cat-operad is to the notion of Cat-operad what the notion of bicategory is to the notion of 2-category. The equations of operads, like associativity of insertions, are replaced by isomorphisms in a category. The goal of this paper is to formulate conditions concerning these isomorphisms that ensure coherence, in the sense that all diagrams of canonical arrows commute. This is the sense in which the notions of monoidal category and bicategory are coherent. The coherence proof in the paper is much simplified by indexing the insertion operations in a context-independent way, and not in the usual manner. This proof, which is in the style of term rewriting, involves an argument with normal forms that generalizes what is established with the completeness proof for the standard presentation of symmetric groups. This generalization may be of independent interest, and may be related to matters other than those studied in this paper. Some of the coherence conditions for weak Cat-operads lead to the hemiassociahedron, which is a polyhedron related to, but different from, the […]
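
For orientation, the two kinds of associativity are the standard laws for the circle-i products, stated here with the usual context-dependent indexing that the paper deliberately avoids: for operations $f$, $g$, $h$ of arities $n$, $m$, $k$,

    $(f \circ_i g) \circ_{i+j-1} h = f \circ_i (g \circ_j h)$   for $1 \leq i \leq n$ and $1 \leq j \leq m$,
    $(f \circ_i g) \circ_{j+m-1} h = (f \circ_j h) \circ_i g$   for $1 \leq i < j \leq n$.

The second law is the one "involving commutativity"; in a weak Cat-operad both equations are replaced by natural isomorphisms subject to the coherence conditions studied in the paper.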

11. Effective zero-dimensionality for computable metric spaces

Robert Kenny.
We begin to study classical dimension theory from the computable analysis (TTE) point of view. For computable metric spaces, several effectivisations of zero-dimensionality are shown to be equivalent. The part of this characterisation that concerns covering dimension extends to higher dimensions and to closed shrinkings of finite open covers. To deal with zero-dimensional subspaces uniformly, four operations (relative to the space and a class of subspaces) are defined; these correspond to definitions of inductive and covering dimensions and a countable basis condition. Finally, an effective retract characterisation of zero-dimensionality is proven under an effective compactness condition. In one direction this uses a version of the construction of bilocated sets.

12. The Computational Complexity of Propositional Cirquent Calculus

Matthew Steven Bauer.
Introduced in 2006 by Japaridze, cirquent calculus is a refinement of sequent calculus. The advent of cirquent calculus arose from the need for a deductive system with a more explicit ability to reason about resources. Unlike the more traditional proof-theoretic approaches that manipulate tree-like objects (formulas, sequents, etc.), cirquent calculus is based on circuit-style structures called cirquents, in which different "peer" (sibling, cousin, etc.) substructures may share components. It is this resource sharing mechanism to which cirquent calculus owes its novelty (and its virtues). From its inception, cirquent calculus has been paired with an abstract resource semantics. This semantics allows for reasoning about the interaction between a resource provider and a resource user, where resources are understood in their most general and intuitive sense. Interpreting resources in a more restricted computational sense has made cirquent calculus instrumental in axiomatizing various fundamental fragments of Computability Logic, a formal theory of (interactive) computability. The so-called "classical" rules of cirquent calculus, in the absence of the particularly troublesome contraction rule, produce a sound and complete system CL5 for Computability Logic. In this paper, we investigate the computational complexity of CL5, showing it is $\Sigma_2^p$-complete. We also show that CL5 without the duplication rule has polynomial size proofs and is NP-complete.

13. Positive Inductive-Recursive Definitions

Neil Ghani ; Fredrik Nordvall Forsberg ; Lorenzo Malatesta.
A new theory of data types which allows for the definition of types as initial algebras of certain functors Fam(C) -> Fam(C) is presented. This theory, which we call positive inductive-recursive definitions, is a generalisation of Dybjer and Setzer's theory of inductive-recursive definitions within which C had to be discrete -- our work can therefore be seen as lifting this restriction. This is a substantial endeavour as we need to not only introduce a type of codes for such data types (as in Dybjer and Setzer's work), but also a type of morphisms between such codes (which was not needed in Dybjer and Setzer's development). We show how these codes are interpreted as functors on Fam(C) and how these morphisms of codes are interpreted as natural transformations between such functors. We then give an application of positive inductive-recursive definitions to the theory of nested data types and we give concrete examples of recursive functions defined on universes by using their elimination principle. Finally we justify the existence of positive inductive-recursive definitions by adapting Dybjer and Setzer's set-theoretic model to our setting.

14. Bialgebraic Semantics for Logic Programming

Filippo Bonchi ; Fabio Zanasi.
Bialgebrae provide an abstract framework encompassing the semantics of different kinds of computational models. In this paper we propose a bialgebraic approach to the semantics of logic programming. Our methodology is to study logic programs as reactive systems and exploit abstract techniques developed in that setting. First we use saturation to model the operational semantics of logic programs as coalgebrae on presheaves. Then, we make explicit the underlying algebraic structure by using bialgebrae on presheaves. The resulting semantics turns out to be compositional with respect to conjunction and term substitution. Also, it encodes a parallel model of computation, whose soundness is guaranteed by a built-in notion of synchronisation between different threads.

15. High-level Counterexamples for Probabilistic Automata

Ralf Wimmer ; Nils Jansen ; Erika Ábrahám ; Joost-Pieter Katoen.
Providing compact and understandable counterexamples for violated system properties is an essential task in model checking. Existing works on counterexamples for probabilistic systems so far computed either a large set of system runs or a subset of the system's states, both of which are of limited use in manual debugging. Many probabilistic systems are described in a guarded command language like the one used by the popular model checker PRISM. In this paper we describe how a smallest possible subset of the commands can be identified which together make the system erroneous. We additionally show how the selected commands can be further simplified to obtain a well-understandable counterexample.

16. Ranking Templates for Linear Loops

Jan Leike ; Matthias Heizmann.
We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parameterized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. Our approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, nested, piecewise, parallel, and lexicographic ranking functions. These ranking templates can be combined to form more powerful templates. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin's transposition theorem instead of Farkas' lemma to transform the generated $\exists\forall$-constraint into an $\exists$-constraint.
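
As a toy illustration of the constraint shape (our example, not one from the paper): for the loop while (x > 0) { x := x - 1 }, the affine-linear template $f(x) = a x + b$ yields the $\exists\forall$-constraint

    $\exists a, b.\ \forall x, x'.\ (x > 0 \wedge x' = x - 1) \rightarrow (a x + b \geq 0 \wedge a x' + b \leq a x + b - 1)$

which is satisfied, for instance, by $a = 1$ and $b = 0$. Eliminating the inner universal quantifier to obtain an $\exists$-constraint is where Farkas' lemma or, when strict inequalities are present, Motzkin's transposition theorem is applied.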

17. Domain Representations Induced by Dyadic Subbases

Hideki Tsuiki ; Yasuyuki Tsukamoto.
We study domain representations induced by dyadic subbases and show that a proper dyadic subbase S of a second-countable regular space X induces an embedding of X in the set of minimal limit elements of a subdomain D of $\{0,1,\perp\}^\omega$. In particular, if X is compact, then X is a retract of the set of limit elements of D.

18. Modelling MAC-Layer Communications in Wireless Systems

Andrea Cerone ; Matthew Hennessy ; Massimo Merro.
We present a timed process calculus for modelling wireless networks in which individual stations broadcast and receive messages; moreover the broadcasts are subject to collisions. Based on a reduction semantics for the calculus we define a contextual equivalence to compare the external behaviour of such wireless networks. Further, we construct an extensional LTS (labelled transition system) which models the activities of stations that can be directly observed by the external environment. Standard bisimulations in this LTS provide a sound proof method for proving systems contextually equivalent. We illustrate the usefulness of the proof methodology by a series of examples. Finally we show that this proof method is also complete for a large class of systems.

19. Kernelizing MSO Properties of Trees of Fixed Height, and Some Consequences

Jakub Gajarsky ; Petr Hlineny.
Fix an integer h>=1. In the universe of coloured trees of height at most h, we prove that for any graph decision problem defined by an MSO formula with r quantifiers, there exists a set of kernels, each of size bounded by an elementary function of r and the number of colours. This yields two noteworthy consequences. Consider any graph class G having a one-dimensional MSO interpretation in the universe of coloured trees of height h (equivalently, G is a class of graphs of shrub-depth h). First, class G admits an MSO model checking algorithm whose runtime has an elementary dependence on the formula size. Second, on G the expressive powers of FO and MSO coincide (which extends a 2012 result of Elberfeld, Grohe, and Tantau).

20. Aspect-oriented linearizability proofs

Soham Chakraborty ; Thomas A. Henzinger ; Ali Sezgin ; Viktor Vafeiadis.
Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on the identification of the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.