2017
We develop normalisation by evaluation (NBE) for dependent types based on presheaf categories. Our construction is formulated in the metalanguage of type theory using quotient inductive types. We use a typed presentation, so there are no preterms or realizers in our construction, and every construction respects the conversion relation. NBE for simple types uses a logical relation between the syntax and the presheaf interpretation. In our construction, we merge the presheaf interpretation and the logical relation into a proof-relevant logical predicate. We prove normalisation, completeness, stability and decidability of definitional equality. Most of the constructions were formalized in Agda.
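For readers unfamiliar with the technique, the following is a minimal sketch, in Haskell and with illustrative names only, of the basic evaluate-then-quote shape of normalisation by evaluation for raw lambda-terms; it is meant only to convey the general idea and does not reflect the presheaf and quotient-inductive-type machinery of the paper.

data Term = Var Int | Lam Term | App Term Term          -- de Bruijn indices
  deriving (Eq, Show)

data Value = VLam (Value -> Value) | VNeutral Neutral
data Neutral = NVar Int | NApp Neutral Value             -- de Bruijn levels

-- Evaluate a term in an environment of values.
eval :: [Value] -> Term -> Value
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = case eval env f of
  VLam g     -> g (eval env a)
  VNeutral n -> VNeutral (NApp n (eval env a))

-- Read a value back into a beta-normal term; 'lvl' counts binders passed.
quote :: Int -> Value -> Term
quote lvl (VLam f)     = Lam (quote (lvl + 1) (f (VNeutral (NVar lvl))))
quote lvl (VNeutral n) = quoteNe lvl n

quoteNe :: Int -> Neutral -> Term
quoteNe lvl (NVar k)   = Var (lvl - k - 1)   -- convert level to index
quoteNe lvl (NApp n v) = App (quoteNe lvl n) (quote lvl v)

-- Normalise a closed term: nf (App (Lam (Var 0)) (Lam (Var 0))) = Lam (Var 0).
nf :: Term -> Term
nf = quote 0 . eval []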
A compiler is fully-abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally-equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from STLC to ULC, the lack of recursive types in the former prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a […]
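To fix the setting informally, here is a minimal Haskell sketch, with illustrative names and representation choices, of the two calculi and a type-erasing compiler from STLC to ULC; the paper's actual contribution, the approximate back-translation and the step-indexed logical relation, is not shown.

data Ty = TUnit | TArr Ty Ty
  deriving (Eq, Show)

-- Simply typed source terms (de Bruijn indices, type-annotated binders).
data STLC = SVar Int | SLam Ty STLC | SApp STLC STLC | SUnit
  deriving Show

-- Untyped target terms.
data ULC = UVar Int | ULam ULC | UApp ULC ULC
  deriving Show

-- Compilation erases types; the representation of unit in ULC is a choice.
compile :: STLC -> ULC
compile (SVar i)   = UVar i
compile (SLam _ b) = ULam (compile b)
compile (SApp f a) = UApp (compile f) (compile a)
compile SUnit      = ULam (UVar 0)

Full abstraction then asks that two source terms are behaviourally equivalent exactly when their compilations are; the back-translation of target contexts is where the approximation technique enters.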
We define a new class of languages of $\omega$-words, strictly extending $\omega$-regular languages. One way to present this new class is by a type of regular expressions. The new expressions are an extension of $\omega$-regular expressions where two new variants of the Kleene star $L^*$ are added: $L^B$ and $L^S$. These new exponents are used to say that parts of the input word have bounded size, and that parts of the input can have arbitrarily large sizes, respectively. For instance, the expression $(a^Bb)^\omega$ represents the language of infinite words over the letters $a,b$ where there is a common bound on the number of consecutive letters $a$. The expression $(a^Sb)^\omega$ represents a similar language, but this time the distance between consecutive $b$'s is required to tend toward infinity. We develop a theory for these languages, with a focus on decidability and closure properties. We define an equivalent automaton model, extending Büchi automata. The main technical result is a complementation lemma that works for languages where only one type of exponent---either $L^B$ or $L^S$---is used. We use the closure and decidability results to obtain partial decidability results for the logic MSOLB, a logic obtained by extending monadic second-order logic with new quantifiers that speak about the size of sets.
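As a concrete contrast based on the semantics just described: the word $ab\,a^2b\,a^3b\,a^4b\cdots$, in which the blocks of consecutive $a$'s grow without bound, belongs to $(a^Sb)^\omega$ but not to $(a^Bb)^\omega$, whereas the periodic word $(aab)^\omega$ belongs to $(a^Bb)^\omega$ (all blocks have size $2$) but not to $(a^Sb)^\omega$ (the distances between consecutive $b$'s do not tend to infinity).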
The satisfiability and finite satisfiability problems for the two-variable guarded fragment of first-order logic with counting quantifiers, a database, and path-functional dependencies are both ExpTime-complete.
A data tree is a finite tree in which every node carries a label from a finite alphabet and a datum from some infinite domain. We introduce a new model of automata over unranked data trees with a decidable emptiness problem. It is essentially a bottom-up alternating automaton with one register that can store one data value and can be used to perform equality tests with the data values occurring within the subtree of the current node. We show that it captures the expressive power of the vertical fragment of XPath - containing the child, descendant, parent and ancestor axes - thus obtaining a decision procedure for its satisfiability problem.
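As a point of reference, the following Haskell sketch (illustrative names and representation choices only) fixes a type of unranked data trees and the collection of data values occurring in a subtree, which is what the automaton's single register is compared against.

-- Every node carries a label from a finite alphabet and a datum from an
-- infinite domain, here Integer, and any number of children (unranked).
data DataTree label = Node
  { nodeLabel    :: label
  , nodeDatum    :: Integer
  , nodeChildren :: [DataTree label]
  } deriving Show

-- Data values occurring in the subtree rooted at a node.
subtreeData :: DataTree label -> [Integer]
subtreeData (Node _ d ts) = d : concatMap subtreeData ts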
The finite spectrum of a first-order sentence is the set of positive integers that are the sizes of its models. The class of finite spectra is known to be the same as the complexity class NE. We consider the spectra obtained by limiting models to be either planar (in the graph-theoretic sense) or by bounding the degree of elements. We show that the class of such spectra is still surprisingly rich by establishing that significant fragments of NE are included among them. At the same time, we establish non-trivial upper bounds showing that not all sets in NE are obtained as planar or bounded-degree spectra.
We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology rather than that of all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question of whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP dichotomy for ontologies of depth one in the description logic ALCFI, the same dichotomy for ALC- and ALCI-ontologies of unrestricted depth, and the non-existence of such a dichotomy for ALCF-ontologies. For the latter DL, we additionally show that it is undecidable whether a given ontology admits PTime query evaluation. We also consider the connection between PTime query evaluation and rewritability into (monadic) Datalog.
Cyclic data structures, such as cyclic lists, are tricky to handle in functional programming because of their cyclicity. This paper presents an investigation of categorical, algebraic, and computational foundations of cyclic datatypes. Our framework of cyclic datatypes is based on the second-order algebraic theories of Fiore et al., which give a uniform setting for syntax, types, and computation rules for describing and reasoning about cyclic datatypes. We extract the "fold" computation rules from the categorical semantics based on the iteration categories of Bloom and Ésik. Thereby, the rules are correct by construction. We prove strong normalisation using the General Schema criterion for second-order computation rules. Rather than the fixed-point law, we choose Bekić's law for computation, which is key to obtaining strong normalisation. We also prove the property of "Church-Rosser modulo bisimulation" for the computation rules. Combining these results, we obtain decidability of the equational theory of cyclic data and fold.
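To give a flavour of the objects involved, here is a minimal Haskell sketch, with illustrative names, of cyclic lists given by an explicit cycle binder and of a fold that interprets the cycle as a fixed point; it is an untyped, first-order toy and not the second-order algebraic presentation used in the paper.

data CList a
  = Nil
  | Cons a (CList a)
  | Rec (CList a)        -- binds a recursion point ...
  | Back                 -- ... referred to by Back (nearest enclosing Rec)
  deriving Show

-- Interpret a cyclic list into any "algebra" of nil and cons, taking a
-- fixed point at Rec; laziness makes this productive for suitable algebras.
foldC :: b -> (a -> b -> b) -> CList a -> b
foldC nil cons = go []
  where
    go _   Nil         = nil
    go env (Cons x xs) = cons x (go env xs)
    go env (Rec body)  = let v = go (v : env) body in v
    go env Back        = head env

-- A cyclic list denoting 1,2,1,2,...; folding with (:) and [] unrolls it.
oneTwoCycle :: CList Int
oneTwoCycle = Rec (Cons 1 (Cons 2 Back))

firstSix :: [Int]
firstSix = take 6 (foldC [] (:) oneTwoCycle)   -- [1,2,1,2,1,2]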
Following previous work on CCS, we propose a compositional model for the $\pi$-calculus in which processes are interpreted as sheaves on certain simple sites. Such sheaves are a concurrent form of innocent strategies, in the sense of Hyland-Ong/Nickau game semantics. We define an analogue of fair testing equivalence in the model and show that our interpretation is intensionally fully abstract for it. That is, the interpretation preserves and reflects fair testing equivalence; and furthermore, any innocent strategy is fair testing equivalent to the interpretation of some process. The central part of our work is the construction of our sites, relying on a combinatorial presentation of $\pi$-calculus traces in the spirit of string diagrams.
In this paper we study the logical foundations of automated inductive theorem proving. To that end, we first develop a theoretical model that is centered around the difficulty of finding induction axioms which are sufficient for proving a goal. Based on this model, we then analyze the following aspects: the choice of a proof shape, the choice of an induction rule, and the language of the induction formula. In particular, using model-theoretic techniques, we clarify the relationship between the notions of inductiveness that have been considered in the literature on automated inductive theorem proving.
A universal process of a process calculus is one that, given the Gödel index of a process of a certain type, produces a process equivalent to the encoded process. This paper demonstrates how universal processes can be formally defined and how a universal process of the value-passing calculus can be constructed. The existence of such a universal process in a process model can be exploited to implement higher-order communications, security protocols, and programming languages in the process model. A process version of the S-m-n theorem is stated to show how recursion theory can be embedded in a process calculus.
In order to tackle the development of concurrent and distributed systems, the active object programming model provides a high-level abstraction to program concurrent behaviours. A variety of active object frameworks already exist, targeting a wide range of application domains: modelling, verification, and efficient execution. However, among these frameworks, very few consider a multi-threaded execution of active objects. Introducing controlled parallelism within active objects makes it possible to overcome some of their limitations. In this paper, we present a complete framework around the multi-active object programming model. We present it through ProActive, the Java library that offers multi-active objects, and through MultiASP, the programming language that allows the formalisation of our developments. We then show how to compile an active object language with cooperative multi-threading into multi-active objects. This paper also presents several use cases and the accompanying development support to illustrate the practical usability of our language. The formalisation of our work provides the programmer with guarantees on the behaviour of the multi-active object programming model and of the compiler.
We discuss possibilities of applying methods of Numerical Analysis to proving computability, in the sense of the TTE approach, of solution operators of boundary-value problems for systems of PDEs. We prove computability of the solution operator for a symmetric hyperbolic system with computable real coefficients and dissipative boundary conditions, and of the Cauchy problem for the same system (we also prove computable dependence on the coefficients) in a cube $Q\subseteq\mathbb R^m$. Such systems describe a wide variety of physical processes (e.g. elasticity, acoustics, Maxwell equations). Moreover, many boundary-value problems for the wave equation can also be reduced to this case; thus we partially answer a question raised in Weihrauch and Zhong (2002). Compared with most other existing methods of proving computability for PDEs, this method does not require the existence of explicit solution formulas and is thus applicable to a broader class of (systems of) equations.
We propose a general framework to build certified proofs of distributed self-stabilizing algorithms with the proof assistant Coq. We first define in Coq the locally shared memory model with composite atomicity, the most commonly used model in the self-stabilizing area. We then validate our framework by certifying a non-trivial part of an existing silent self-stabilizing algorithm which builds a $k$-clustering of the network. We also certify a quantitative property related to the output of this algorithm. More precisely, we show that the computed $k$-clustering contains at most $\lfloor \frac{n-1}{k+1} \rfloor + 1$ clusterheads, where $n$ is the number of nodes in the network. To obtain these results, we also developed a library which contains general tools related to potential functions and cardinality of sets.
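As a concrete instance of the certified bound: for a network of $n = 10$ nodes and $k = 2$, the computed $k$-clustering contains at most $\lfloor \frac{10-1}{2+1} \rfloor + 1 = 3 + 1 = 4$ clusterheads.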
Weak bisimulations are typically used in process algebras where silent steps serve to abstract from internal behaviours. They facilitate relating implementations to specifications. When an implementation fails to conform to its specification, pinpointing the root cause can be challenging. In this paper we provide a generic characterisation of branching-, delayed-, $\eta$- and weak-bisimulation as a game between Spoiler and Duplicator, offering an operational understanding of the relations. We show how such games can be used to assist in diagnosing non-conformance between implementation and specification. Moreover, we show how these games can be extended to distinguish divergences.
Rewriting is a formalism widely used in computer science and mathematical logic. When using rewriting as a programming or modeling paradigm, the rewrite rules describe the transformations one wants to operate and rewriting strategies are used to control their application. The operational semantics of these strategies is generally accepted, and approaches for analyzing the termination of specific strategies have been studied. We propose in this paper a generic encoding of classic control and traversal strategies used in rewrite-based languages such as Maude, Stratego and Tom into a plain term rewriting system. The encoding is proven sound and complete and, as a direct consequence, established termination methods used for term rewriting systems can be applied to analyze the termination of strategy-controlled term rewriting systems. We show that the encoding of strategies into term rewriting systems can be easily adapted to handle many-sorted signatures and we use a meta-level representation of terms to reduce the size of the encodings. The corresponding implementation in Tom generates term rewriting systems compatible with the syntax of termination tools such as AProVE and TTT2, tools which turned out to be very effective in (dis)proving the termination of the generated term rewriting systems. The approach can also be seen as a generic strategy compiler which can be integrated into languages providing pattern matching primitives; experiments in Tom show that applying our […]
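For orientation, the following Haskell sketch (illustrative names only) is a direct interpreter for the kind of control and traversal strategies the encoding targets, in the style of Stratego/Tom combinators; the paper instead compiles such strategies into a plain term rewriting system so that standard termination tools apply to them.

data Term = App String [Term] deriving (Eq, Show)

type Strategy = Term -> Maybe Term   -- Nothing signals failure

-- Basic control combinators.
idS, failS :: Strategy
idS   = Just
failS = const Nothing

seqS, choiceS :: Strategy -> Strategy -> Strategy
seqS    s1 s2 t = s1 t >>= s2
choiceS s1 s2 t = maybe (s2 t) Just (s1 t)

-- Traversal combinators: apply a strategy to all children / to one child.
allS :: Strategy -> Strategy
allS s (App f ts) = App f <$> mapM s ts

oneS :: Strategy -> Strategy
oneS s (App f ts) = App f <$> go ts
  where
    go []       = Nothing
    go (u : us) = case s u of
      Just u' -> Just (u' : us)
      Nothing -> (u :) <$> go us

-- Derived strategies (repeatS and innermost need not terminate in general).
tryS, repeatS, topDown, innermost :: Strategy -> Strategy
tryS s      = choiceS s idS
repeatS s   = tryS (seqS s (repeatS s))
topDown s   = seqS s (allS (topDown s))
innermost s = seqS (allS (innermost s)) (tryS (seqS s (innermost s)))

-- A single rewrite rule as a strategy, e.g. plus(0, x) -> x; then
-- innermost plusZero normalises plus(0, plus(0, s(0))) to s(0).
plusZero :: Strategy
plusZero (App "plus" [App "0" [], x]) = Just x
plusZero _                            = Nothing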
The paper presents an elaborated and simplified version of the structural result for branching bisimilarity on normed BPA (Basic Process Algebra) processes that was the crux of a conference paper by Czerwinski and Jancar (arXiv 7/2014 and LiCS 2015). That paper focused on the computational complexity, and an NEXPTIME upper bound was derived; the authors built on the ideas of Fu (ICALP 2013) and strengthened his decidability result. Later, He and Huang announced the EXPTIME-completeness of this problem (arXiv 1/2015 and LiCS 2015), giving a technical proof of EXPTIME membership. He and Huang indirectly acknowledge the decomposition ideas of Czerwinski and Jancar on which they also built, but it is difficult to separate their starting point from their new ideas. One aim here is to present the previous decomposition result of Czerwinski and Jancar in a technically new framework, noting that branching bisimulation equivalence on normed BPA processes corresponds to a rational monoid (in the sense of [Sakarovitch, 1987]); in particular, it is shown that the mentioned equivalence can be decided by deterministic finite transducers computing normal forms. Another aim is to provide a complete description, including an informal overview, that should also make clear how Fu's ideas were used, and to give all proofs in a form that should be readable and easily verifiable.
Using the notion of formal ball, we present a few new results in the theory of quasi-metric spaces. In no particular order: every continuous Yoneda-complete quasi-metric space is sober and convergence Choquet-complete, hence Baire, in its $d$-Scott topology; for standard quasi-metric spaces, algebraicity is equivalent to having enough center points; on a standard quasi-metric space, every lower semicontinuous $\bar{\mathbb{R}}_+$-valued function is the supremum of a chain of Lipschitz Yoneda-continuous maps; the continuous Yoneda-complete quasi-metric spaces are exactly the retracts of algebraic Yoneda-complete quasi-metric spaces; every continuous Yoneda-complete quasi-metric space has a so-called quasi-ideal model, generalizing a construction due to K. Martin. The point is that all those results reduce to domain-theoretic constructions on posets of formal balls.
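For reference, the formal balls that all of these constructions go through are, in their usual presentation, the pairs $\mathbf{B}(X,d) = X \times [0,+\infty)$ of a point and a nonnegative radius, ordered by $(x,r) \sqsubseteq (y,s)$ if and only if $d(x,y) \le r - s$; the results listed above are obtained by domain-theoretic arguments on this poset.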
In the simply typed $\lambda$-calculus, Statman investigates the reducibility relation $\leq_{\beta\eta}$ between types: for $A,B \in \mathbb{T}^0$, types freely generated using $\rightarrow$ and a single ground type $0$, define $A \leq_{\beta\eta} B$ if there exists a $\lambda$-definable injection from the closed terms of type $A$ into those of type $B$. Unexpectedly, the induced partial order is a (linear) well-ordering, of order type $\omega + 4$. The proof uses a finer relation $\leq_{h}$, where the above injection is required to be a Böhm transformation, and an (a posteriori) coarser relation $\leq_{h^+}$, requiring a finite family of Böhm transformations that is jointly injective. We present this result in a self-contained, syntactic, constructive and simplified manner. En route, similar results for $\leq_h$ (order type $\omega + 5$) and $\leq_{h^+}$ (order type $8$) are obtained. Five of the equivalence classes of $\leq_{h^+}$ correspond to canonical term models of Statman, one to the trivial term model collapsing all elements of the same type, and one does not even form a model, due to the lack of closed terms of many types.
Given a logic presented in a sequent calculus, a natural question is that of equivalence of proofs: to determine whether two given proofs are equated by any denotational semantics, i.e. any categorical interpretation of the logic compatible with its cut-elimination procedure. This notion can usually be captured syntactically by a set of rule permutations. Very generally, proof nets can be defined as combinatorial objects which provide canonical representatives of equivalence classes of proofs. In particular, the existence of proof nets for a logic provides a solution to the equivalence problem of this logic. In certain fragments of linear logic, it is possible to give a notion of proof net with good computational properties, making it a suitable representation of proofs for studying the cut-elimination procedure, among other things. It has recently been proved that there cannot be such a notion of proof nets for the multiplicative (with units) fragment of linear logic, due to the equivalence problem for this logic being Pspace-complete. We investigate the multiplicative-additive (without units) fragment of linear logic and show it is closely related to binary decision trees: we build a representation of proofs based on binary decision trees, reducing proof equivalence to decision tree equivalence, and give a converse encoding of binary decision trees as proofs. We get as our main result that the complexity of the proof equivalence problem of the studied fragment is […]
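For concreteness, here is a minimal Haskell sketch, with illustrative names, of binary decision trees and of a naive semantic equivalence check, i.e. the combinatorial side of the correspondence; the encodings between proofs of the studied fragment and such trees are not shown.

import Data.List (nub)

data BDT = Leaf Bool
         | Branch Int BDT BDT   -- test a variable; left = false, right = true
  deriving (Eq, Show)

-- Evaluate a tree under an assignment of the variables.
evalBDT :: (Int -> Bool) -> BDT -> Bool
evalBDT _   (Leaf b)       = b
evalBDT rho (Branch x f t) = if rho x then evalBDT rho t else evalBDT rho f

-- Variables occurring in a tree.
vars :: BDT -> [Int]
vars (Leaf _)       = []
vars (Branch x f t) = x : vars f ++ vars t

-- Two trees are equivalent iff they denote the same Boolean function,
-- checked here naively over all assignments to the occurring variables.
equivBDT :: BDT -> BDT -> Bool
equivBDT s t = and [ evalBDT rho s == evalBDT rho t | rho <- assignments ]
  where
    xs               = nub (vars s ++ vars t)
    assignments      = [ \x -> x `elem` trueSet | trueSet <- subsets xs ]
    subsets []       = [[]]
    subsets (y : ys) = concat [ [zs, y : zs] | zs <- subsets ys ]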
This short text summarizes the work in biology proposed in our book, Perspectives on Organisms, where we analyse the unity proper to organisms by looking at it from different viewpoints. We discuss the theoretical roles of biological time, complexity, theoretical symmetries, singularities and critical transitions. We explicitly borrow from the conclusions in some key chapters and introduce them by a reflection on "incompleteness", also proposed in the book. We consider incompleteness to be a fundamental notion for understanding the way in which we construct knowledge. We then introduce an approach to biological dynamics where randomness is central to the theoretical determination: randomness does not oppose biological stability but contributes to it through variability, adaptation, and diversity. In this view, evolutionary and ontogenetic trajectories are continual changes of coherence structures involving symmetry changes within an ever-changing global stability.
We show that a version of Martin-Löf type theory with an extensional identity type former I, a unit type N1, Sigma-types, Pi-types, and a base type is a free category with families (supporting these type formers) both in a 1- and a 2-categorical sense. It follows that the underlying category of contexts is a free locally cartesian closed category in a 2-categorical sense, because of a previously proved biequivalence. We show that equality in this category is undecidable by reduction from convertibility in combinatory logic, which is undecidable. Essentially the same construction also shows a slightly strengthened form of the result that equality in extensional Martin-Löf type theory with one universe is undecidable.
The technique known as Grilliot's trick constitutes a template for explicitly defining the Turing jump functional $(\exists^2)$ in terms of a given effectively discontinuous type two functional. In this paper, we discuss the standard extensionality trick: a technique similar to Grilliot's trick in Nonstandard Analysis. This nonstandard trick proceeds by deriving, from the existence of certain nonstandard discontinuous functionals, the Transfer principle of Nonstandard Analysis limited to $\Pi_1^0$-formulas; from this (generally ineffective) implication, we obtain an effective implication expressing the Turing jump functional in terms of a discontinuous functional (and no longer involving Nonstandard Analysis). The advantage of our nonstandard approach is that one obtains effective content without paying attention to effective content. We also discuss a new class of functionals which all seem to fall outside the established categories. These functionals directly derive from the Standard Part axiom of Nonstandard Analysis.
We study a classical realizability model (in the sense of J.-L. Krivine) arising from a model of untyped lambda calculus in coherence spaces. We show that this model validates countable choice using bar recursion and bar induction.
Timed session types formalise timed communication protocols between two participants at the endpoints of a session. They feature a decidable compliance relation, which generalises to the timed setting the progress-based compliance between untimed session types. We present a sound and complete technique to decide when a timed session type admits a compliant one. Then, we show how to construct the most precise session type compliant with a given one, according to the subtyping preorder induced by compliance. Decidability of subtyping follows from these results.
Characterising tractable fragments of the constraint satisfaction problem (CSP) is an important challenge in theoretical computer science and artificial intelligence. Forbidding patterns (generic sub-instances) provides a means of defining CSP fragments which are neither exclusively language-based nor exclusively structure-based. It is known that the class of binary CSP instances in which the broken-triangle pattern (BTP) does not occur, a class which includes all tree-structured instances, is decided by arc consistency (AC), a ubiquitous reduction operation in constraint solvers. We provide a characterisation of simple partially-ordered forbidden patterns which have this AC-solvability property. It turns out that BTP is just one of five such AC-solvable patterns. The four other patterns allow us to exhibit new tractable classes.
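For readers unfamiliar with the reduction operation involved, the following Haskell sketch (illustrative representation choices: integer variables and values, constraints as predicates on pairs) shows naive arc consistency on a binary CSP: a value is pruned from a variable's domain when some constraint gives it no support, and pruning is iterated to a fixpoint.

import qualified Data.Map as M

type Var        = Int
type Val        = Int
type Domains    = M.Map Var [Val]
type Constraint = ((Var, Var), Val -> Val -> Bool)   -- allowed pairs of values

-- Remove the values of x that have no support in the current domain of y.
revise :: Domains -> Constraint -> Domains
revise doms ((x, y), ok) = M.insert x dx' doms
  where
    dx  = M.findWithDefault [] x doms
    dy  = M.findWithDefault [] y doms
    dx' = [ a | a <- dx, any (ok a) dy ]

-- Naive AC: apply every constraint in both directions until nothing changes.
arcConsistency :: [Constraint] -> Domains -> Domains
arcConsistency cs doms
  | doms' == doms = doms
  | otherwise     = arcConsistency cs doms'
  where
    both  = cs ++ [ ((y, x), flip ok) | ((x, y), ok) <- cs ]
    doms' = foldl revise doms both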
In this paper we propose a formal framework for studying privacy in information systems. The proposal follows a two-axis schema where the first axis considers privacy as a taxonomy of rights and the second axis involves the ways an information system stores and manipulates information. We develop a correspondence between this schema and an associated model of computation. In particular, we propose the \Pcalc, a calculus based on the $\pi$-calculus with groups, extended with constructs for reasoning about private data. The privacy requirements of an information system are captured via a privacy policy language. The correspondence between the privacy model and the \Pcalc semantics is established using a type system for the calculus and a satisfiability definition between types and privacy policies. We deploy a type preservation theorem to show that if the typing of a system satisfies a policy, then the system respects the policy and is safe. We illustrate our methodology via the analysis of two use cases: a privacy-aware scheme for electronic traffic pricing and a privacy-preserving technique for speed-limit enforcement.
This paper presents matching logic, a first-order logic (FOL) variant for specifying and reasoning about structure by means of patterns and pattern matching. Its sentences, the patterns, are constructed using variables, symbols, connectives and quantifiers, but no distinction is made between function and predicate symbols. In models, a pattern evaluates into a power-set domain (the set of values that match it), in contrast to FOL where functions and predicates map into a regular domain. Matching logic uniformly generalizes several logical frameworks important for program analysis, such as: propositional logic, algebraic specification, FOL with equality, modal logic, and separation logic. Patterns can specify separation requirements at any level in any program configuration, not only in the heaps or stores, without any special logical constructs for that: the very nature of pattern matching is that if two structures are matched as part of a pattern, then they can only be spatially separated. Like FOL, matching logic can also be translated into pure predicate logic with equality, at the same time admitting its own sound and complete proof system. A practical aspect of matching logic is that FOL reasoning with equality remains sound, so off-the-shelf provers and SMT solvers can be used for matching logic reasoning. Matching logic is particularly well-suited for reasoning about programs in programming languages that have an operational semantics, but it is not limited to this.
We study an extension of Plotkin's call-by-value lambda-calculus via two commutation rules (sigma-reductions). These commutation rules are sufficient to remove harmful call-by-value normal forms from the calculus, so that it enjoys elegant characterizations of many semantic properties. We prove that this extended calculus is a conservative refinement of Plotkin's one. In particular, the notions of solvability and potential valuability for this calculus coincide with those for Plotkin's call-by-value lambda-calculus. The proof rests on a standardization theorem proved by generalizing Takahashi's parallel-reduction technique to our set of reduction rules. The standardization is weak (i.e. redexes are not fully sequentialized) because of overlapping interferences between reductions.
We provide requirements on effectively enumerable topological spaces which guarantee that the Rice-Shapiro theorem holds for the computable elements of these spaces. We show that relaxing these requirements leads to classes of effectively enumerable topological spaces where the Rice-Shapiro theorem does not hold. We propose two constructions that generate effectively enumerable topological spaces with particular properties from wn-families and computable trees without computable infinite paths. Using them, we construct examples that give a flavor of this class.