Volume 15, Issue 1

2019


1. Natural Transformations as Rewrite Rules and Monad Composition

Dexter Kozen.
Eklund et al. (2002) present a graphical technique aimed at simplifying the verification of various category-theoretic constructions, notably the composition of monads. In this note we take a different approach involving string rewriting. We show that a given tuple $(T,\mu,\eta)$ is a monad if and only if $T$ is a terminal object in a certain category of strings and rewrite rules, and that this fact can be established by proving confluence of the rewrite system. We illustrate the technique on the monad composition problem. We also give a characterization of adjunctions in terms of rewrite categories.
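For orientation, the equations being oriented into rewrite rules are the monad laws; one natural orientation (a sketch, not necessarily the paper's exact system) reads
\[
\mu \circ T\mu \;\to\; \mu \circ \mu T, \qquad \mu \circ T\eta \;\to\; \mathrm{id}_T, \qquad \mu \circ \eta T \;\to\; \mathrm{id}_T,
\]
so that proving the induced string rewriting confluent (and terminating) yields unique normal forms for composites built from $T$, $\mu$, and $\eta$.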

2. Models of Type Theory Based on Moore Paths

Ian Orton ; Andrew M. Pitts.
This paper introduces a new family of models of intensional Martin-Löf type theory. We use constructive ordered algebra in toposes. Identity types in the models are given by a notion of Moore path. By considering a particular gros topos, we show that there is such a model that is non-truncated, i.e. contains non-trivial structure at all dimensions. In other words, in this model a type in a nested sequence of identity types can contain more than one element, no matter how great the degree of nesting. Although inspired by existing non-truncated models of type theory based on simplicial and cubical sets, the notion of model presented here is notable for avoiding any form of Kan filling condition in the semantics of types.

3. Probabilistic call by push value

Thomas Ehrhard ; Christine Tasson.
We introduce a probabilistic extension of Levy's Call-By-Push-Value. This extension consists simply in adding a "flipping coin" boolean closed atomic expression. This language can be understood as a major generalization of Scott's PCF encompassing both call-by-name and call-by-value and featuring recursive (possibly lazy) data types. We interpret the language in the previously introduced denotational model of probabilistic coherence spaces, a categorical model of full classical Linear Logic, interpreting data types as coalgebras for the resource comonad. We prove adequacy and full abstraction, generalizing earlier results to a much more realistic and powerful programming language.
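As a toy illustration of the kind of primitive being added, here is a generic finite-distribution monad in Haskell with a fair coin (a minimal sketch; the paper's actual semantics is probabilistic coherence spaces, not this):

    -- Finitely supported distributions with Rational weights (illustrative only).
    newtype Dist a = Dist { runDist :: [(a, Rational)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f a, p) | (a, p) <- xs ]

    instance Applicative Dist where
      pure a = Dist [(a, 1)]
      Dist fs <*> Dist xs = Dist [ (f a, p * q) | (f, p) <- fs, (a, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a) ]

    -- The "flipping coin" primitive: a fair boolean choice.
    coin :: Dist Bool
    coin = Dist [(True, 1/2), (False, 1/2)]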

4. A Finite-Model-Theoretic View on Propositional Proof Complexity

Erich Grädel ; Martin Grohe ; Benedikt Pago ; Wied Pakusa.
We establish new, and surprisingly tight, connections between propositional proof complexity and finite model theory. Specifically, we show that the power of several propositional proof systems, such as Horn resolution, bounded-width resolution, and the monomial calculus of bounded degree, can be characterised in a precise sense by variants of fixed-point logics that are of fundamental importance in descriptive complexity theory. Our main results are that Horn resolution has the same expressive power as least fixed-point logic, that bounded-width resolution captures existential least fixed-point logic, and that the monomial calculus of bounded degree over the rationals solves precisely the problems definable in fixed-point logic with counting. We also study the bounded-degree polynomial calculus. Over the rationals, it captures fixed-point logic with counting if we restrict the bit-complexity of the coefficients. For unrestricted coefficients, we can only say that the bounded-degree polynomial calculus is at most as powerful as bounded variable infinitary counting logic, but a precise logical characterisation of its power remains an open problem. These connections between logics and proof systems allow us to establish finite-model-theoretic tools for proving lower bounds for the polynomial calculus over the rationals and also over finite fields. This is a corrected version of the paper (arXiv:1802.09377) published originally on January 23, 2019.

5. Extending set functors to generalised metric spaces

Adriana Balan ; Alexander Kurz ; Jiří Velebil.
For a commutative quantale $\mathcal{V}$, the category $\mathcal{V}-cat$ can be perceived as a category of generalised metric spaces and non-expanding maps. We show that any type constructor $T$ (formalised as an endofunctor on sets) can be extended in a canonical way to a type constructor $T_{\mathcal{V}}$ on $\mathcal{V}-cat$. The proof yields methods of explicitly calculating the extension in concrete examples, which cover well-known notions such as the Pompeiu-Hausdorff metric as well as new ones. Conceptually, this allows us to solve the same recursive domain equation $X\cong TX$ in different categories (such as sets and metric spaces) and we study how their solutions (that is, the final coalgebras) are related via change of base. Mathematically, the heart of the matter is to show that, for any commutative quantale $\mathcal{V}$, the `discrete' functor $D:\mathsf{Set}\to \mathcal{V}-cat$ from sets to categories enriched over $\mathcal{V}$ is $\mathcal{V}-cat$-dense and has a density presentation that allows us to compute left-Kan extensions along $D$.
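For reference, the symmetric form of the Pompeiu-Hausdorff distance between subsets $A,B$ of a metric space $(X,d)$ is
\[
d_H(A,B) \;=\; \max\Bigl\{\, \sup_{a \in A} \inf_{b \in B} d(a,b),\; \sup_{b \in B} \inf_{a \in A} d(a,b) \Bigr\};
\]
the quantale-enriched setting works just as naturally with the non-symmetric half $\sup_{a \in A} \inf_{b \in B} d(a,b)$, since $\mathcal{V}$-categories need not be symmetric.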

6. Degrees of extensionality in the theory of Böhm trees and Sallé's conjecture

Benedetto Intrigila ; Giulio Manzonetto ; Andrew Polonsky.
The main observational equivalences of the untyped lambda-calculus have been characterized in terms of extensional equalities between Böhm trees. It is well known that the lambda-theory H*, arising by taking as observables the head normal forms, equates two lambda-terms whenever their Böhm trees are equal up to countably many possibly infinite eta-expansions. Similarly, two lambda-terms are equal in Morris's original observational theory H+, generated by considering as observable the beta-normal forms, whenever their Böhm trees are equal up to countably many finite eta-expansions. The lambda-calculus also possesses a strong notion of extensionality called "the omega-rule", which has been the subject of many investigations. It is a longstanding open problem whether the equivalence B-omega obtained by closing the theory of Böhm trees under the omega-rule is strictly included in H+, as conjectured by Sallé in the seventies. In this paper we demonstrate that the two aforementioned theories actually coincide, thus disproving Sallé's conjecture. The proof technique we develop for proving the latter inclusion is general enough to provide as a byproduct a new characterization, based on bounded eta-expansions, of the least extensional equality between Böhm trees. Together, these results provide a taxonomy of the different degrees of extensionality in the theory of Böhm trees.
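A classical witness of the difference between finite and infinite eta-expansions, recalled here for orientation (with $\Theta$ a fixed-point combinator): the term
\[
\mathsf{J} \;=\; \Theta\,(\lambda j x y.\, x\,(j\,y)), \qquad\text{satisfying}\qquad \mathsf{J}\,x \;=_\beta\; \lambda y.\, x\,(\mathsf{J}\,y),
\]
has as Böhm tree an infinite eta-expansion of the identity $\mathsf{I} = \lambda x.x$, so $\mathsf{J} = \mathsf{I}$ holds in H* but not in H+.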

7. Shrub-depth: Capturing Height of Dense Graphs

Robert Ganian ; Petr Hliněný ; Jaroslav Nešetřil ; Jan Obdržálek ; Patrice Ossona de Mendez.
The recent increase of interest in the graph invariant called tree-depth and in its applications in algorithms and logic on graphs led to a natural question: is there an analogously useful "depth" notion also for dense graphs (say, one which is stable under graph complementation)? To this end, a new notion of shrub-depth was introduced in a 2012 conference paper, such that it is related to the established notion of clique-width in a similar way as tree-depth is related to tree-width. Since then shrub-depth has been successfully used in several research papers. Here we provide an in-depth review of the definition and basic properties of shrub-depth, and we focus on its logical aspects, which have turned out to be most useful. In particular, we use shrub-depth to give a characterization of the lower $\omega$ levels of the MSO1 transduction hierarchy of simple graphs.

8. A Light Modality for Recursion

Paula Severi.
We investigate a modality for controlling the behaviour of recursive functional programs on infinite structures that is completely silent in the syntax. The latter means that programs do not contain "marks" showing the application of the introduction and elimination rules for the modality. This shifts the burden of controlling recursion from the programmer to the compiler. To do this, we introduce a typed lambda calculus à la Curry with a silent modality and guarded recursive types. The typing discipline guarantees normalisation and can be transformed into an algorithm which infers the type of a program.
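For orientation, the guarded-types discipline descends from Nakano's "later" modality $\bullet$; its characteristic ingredients (recalled in generic form, not necessarily the paper's exact rules) are the guarded fixed-point rule and guarded recursive type equations such as
\[
\frac{\Gamma,\, x : \bullet A \,\vdash\, M : A}{\Gamma \,\vdash\, \mathsf{fix}\,x.\,M : A}
\qquad\qquad
\mathsf{Str}_A \;\cong\; A \times \bullet\,\mathsf{Str}_A,
\]
so recursive calls are only available "one step later", which is what makes normalisation provable.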

9. A sequent calculus for a semi-associative law

Noam Zeilberger.
We introduce a sequent calculus with a simple restriction of Lambek's product rules that precisely captures the classical Tamari order, i.e., the partial order on fully-bracketed words (equivalently, binary trees) induced by a semi-associative law (equivalently, right rotation). We establish a focusing property for this sequent calculus (a strengthening of cut-elimination), which yields the following coherence theorem: every valid entailment in the Tamari order has exactly one focused derivation. We then describe two main applications of the coherence theorem, including: 1. A new proof of the lattice property for the Tamari order, and 2. A new proof of the Tutte-Chapoton formula for the number of intervals in the Tamari lattice $Y_n$.
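For reference (standard statements): the Tamari order on fully-bracketed words is generated by applying the semi-associative law in any context, and the Tutte-Chapoton formula counts its intervals:
\[
(p \cdot q) \cdot r \;\le\; p \cdot (q \cdot r),
\qquad
\#\,\mathrm{Int}(Y_n) \;=\; \frac{2\,(4n+1)!}{(n+1)!\,(3n+2)!}.
\]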

10. Relative Entailment Among Probabilistic Implications

Albert Atserias ; José L. Balcázar ; Marie Ely Piceno.
We study a natural variant of the implicational fragment of propositional logic. Its formulas are pairs of conjunctions of positive literals, related by an implication-like connective; the semantics of this sort of implication is defined in terms of a threshold on a conditional probability of the consequent, given the antecedent: we are dealing with what the data analysis community calls confidence of partial implications or association rules. Existing studies of redundancy among these partial implications have so far characterized only entailment from one premise and entailment from two premises, both in the stand-alone case and in the presence of additional classical implications (this is what we call "relative entailment"). By exploiting a previously noted alternative view of the entailment in terms of linear programming duality, we characterize exactly the cases of entailment from arbitrary numbers of premises, again both in the stand-alone case and in the presence of additional classical implications. As a result, we obtain decision algorithms of better complexity; additionally, for each potential case of entailment, we identify a critical confidence threshold and show that it is, actually, intrinsic to each set of premises and antecedent of the conclusion.
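For orientation, in association-rule terminology a partial implication $X \to Y$ holds at confidence threshold $\gamma \in (0,1]$ when
\[
\mathrm{conf}(X \to Y) \;=\; \frac{\mathrm{supp}(X \cup Y)}{\mathrm{supp}(X)} \;\geq\; \gamma,
\]
i.e., when the empirical conditional probability of the consequent given the antecedent is at least $\gamma$.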

11. The Subpower Membership Problem for Finite Algebras with Cube Terms

Andrei Bulatov ; Peter Mayr ; Ágnes Szendrei.
The subalgebra membership problem is the problem of deciding if a given element belongs to an algebra given by a set of generators. This is one of the best established computational problems in algebra. We consider a variant of this problem, which is motivated by recent progress in the Constraint Satisfaction Problem, and is often referred to as the Subpower Membership Problem (SMP). In the SMP we are given a set of tuples in a direct product of algebras from a fixed finite set $\mathcal{K}$ of finite algebras, and are asked whether or not another given tuple belongs to the subalgebra of the direct product generated by that set. Our main result is that the subpower membership problem SMP($\mathcal{K}$) is in P if $\mathcal{K}$ is a finite set of finite algebras with a cube term, provided $\mathcal{K}$ is contained in a residually small variety. We also prove that for any finite set of finite algebras $\mathcal{K}$ in a variety with a cube term, each one of the problems SMP($\mathcal{K}$), SMP($\mathbb{HS} \mathcal{K}$), and finding compact representations for subpowers in $\mathcal{K}$, is polynomial time reducible to any of the others, and the first two lie in NP.

12. Efficient reduction of nondeterministic automata with application to language inclusion testing

Lorenzo Clemente ; Richard Mayr.
We present efficient algorithms to reduce the size of nondeterministic Büchi word automata (NBA) and nondeterministic finite word automata (NFA), while retaining their languages. Additionally, we describe methods to solve PSPACE-complete automata problems like language universality, equivalence, and inclusion for much larger instances than was previously possible ($\ge 1000$ states instead of 10-100). This can be used to scale up applications of automata in formal verification tools and decision procedures for logical theories. The algorithms are based on new techniques for removing transitions (pruning) and adding transitions (saturation), as well as extensions of classic quotienting of the state space. These techniques use criteria based on combinations of backward and forward trace inclusions and simulation relations. Since trace inclusion relations are themselves PSPACE-complete, we introduce lookahead simulations as good polynomial time computable approximations thereof. Extensive experiments show that the average-case time complexity of our algorithms scales slightly above quadratically. (The space complexity is worst-case quadratic.) The size reduction of the automata depends very much on the class of instances, but our algorithm consistently reduces the size far more than all previous techniques. We tested our algorithms on NBA derived from LTL-formulae, NBA derived from mutual exclusion protocols and many classes of random NBA and NFA, and compared their performance […]
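For orientation, the simplest relation in the toolbox is direct forward simulation: a relation $R \subseteq Q \times Q$ is a simulation if $p \mathrel{R} q$ implies that acceptance at $p$ is matched at $q$ and
\[
p \xrightarrow{a} p' \;\Longrightarrow\; \exists q'.\; q \xrightarrow{a} q' \;\wedge\; p' \mathrel{R} q'.
\]
States related by simulations in both directions can be merged; the lookahead simulations of the paper are polynomial-time computable approximations of the (PSPACE-complete) trace inclusions, enabling coarser merging and pruning.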

13. Size, Cost, and Capacity: A Semantic Technique for Hard Random QBFs

Olaf Beyersdorff ; Joshua Blinkhorn ; Luke Hinde.
As a natural extension of the SAT problem, an array of proof systems for quantified Boolean formulas (QBF) have been proposed, many of which extend a propositional proof system to handle universal quantification. By formalising the construction of the QBF proof system obtained from a propositional proof system by adding universal reduction (Beyersdorff, Bonacina & Chew, ITCS '16), we present a new technique for proving proof-size lower bounds in these systems. The technique relies only on two semantic measures: the cost of a QBF, and the capacity of a proof. By examining the capacity of proofs in several QBF systems, we are able to use the technique to obtain lower bounds based on cost alone. As applications of the technique, we first prove exponential lower bounds for a new family of simple QBFs representing equality. The main application is in proving exponential lower bounds with high probability for a class of randomly generated QBFs, the first `genuine' lower bounds of this kind, which apply to the QBF analogues of resolution, Cutting Planes, and Polynomial Calculus. Finally, we employ the technique to give a simple proof of hardness for the prominent formulas of Kleine Büning, Karpinski and Flögel.

14. Algebra, coalgebra, and minimization in polynomial differential equations

Michele Boreale.
We consider reasoning and minimization in systems of polynomial ordinary differential equations (ODEs). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ODE system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ODEs, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed.
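For reference, given a polynomial vector field $\dot{x}_i = f_i(x_1,\dots,x_n)$, the Lie derivative that drives the transition structure is
\[
\mathcal{L}(p) \;=\; \sum_{i=1}^{n} \frac{\partial p}{\partial x_i}\, f_i,
\]
so each polynomial behaviour $p$ steps to its time derivative along the system's flow.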

15. Sahlqvist via Translation

Willem Conradie ; Alessandra Palmigiano ; Zhiguang Zhao.
In recent years, unified correspondence has been developed as a generalized Sahlqvist theory which applies uniformly to all signatures of normal and regular (distributive) lattice expansions. This includes a general definition of the Sahlqvist and inductive formulas and inequalities in every such signature, based on order theory. This definition covers in particular all (bi-)intuitionistic modal logics. The theory of these logics has been intensively studied over the past seventy years in connection with classical polyadic modal logics, using suitable versions of Gödel-McKinsey-Tarski translations as main tools. It is therefore natural to ask (1) whether a general perspective on Gödel-McKinsey-Tarski translations can be attained, also based on order-theoretic principles like those underlying the general definition of Sahlqvist and inductive formulas and inequalities, which accounts for the known Gödel-McKinsey-Tarski translations and applies uniformly to all signatures of normal (distributive) lattice expansions; (2) whether this general perspective can be used to transfer correspondence and canonicity theorems for Sahlqvist and inductive formulas and inequalities in all signatures described above under Gödel-McKinsey-Tarski translations. In the present paper, we set out to answer these questions. We answer (1) in the affirmative; as to (2), we prove the transfer of the correspondence theorem for inductive inequalities of arbitrary signatures of normal distributive […]
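For orientation, the classical Gödel-McKinsey-Tarski translation embeds intuitionistic propositional logic into S4 (standard form, recalled here):
\[
\tau(p) = \Box p, \quad \tau(\varphi \wedge \psi) = \tau(\varphi) \wedge \tau(\psi), \quad \tau(\varphi \vee \psi) = \tau(\varphi) \vee \tau(\psi), \quad \tau(\varphi \to \psi) = \Box(\tau(\varphi) \to \tau(\psi)),
\]
with $\varphi$ intuitionistically provable iff $\tau(\varphi)$ is provable in S4.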

16. Almost Every Simply Typed Lambda-Term Has a Long Beta-Reduction Sequence

Kazuyuki Asada ; Naoki Kobayashi ; Ryoma Sin'ya ; Takeshi Tsukada.
It is well known that the length of a beta-reduction sequence of a simply typed lambda-term of order k can be huge; it is as large as k-fold exponential in the size of the lambda-term in the worst case. We consider the following relevant question about quantitative properties, instead of the worst case: how many simply typed lambda-terms have very long reduction sequences? We provide a partial answer to this question, by showing that asymptotically almost every simply typed lambda-term of order k has a reduction sequence as long as (k-1)-fold exponential in the term size, under the assumption that the arity of functions and the number of variables that may occur in every subterm are bounded above by a constant. To prove it, we have extended the infinite monkey theorem for strings to a parametrized one for regular tree languages, which may be of independent interest. The work has been motivated by quantitative analysis of the complexity of higher-order model checking.
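A classic family illustrating the worst case, recalled for orientation: with the Church numeral $c_2 = \lambda f.\lambda x.\, f\,(f\,x)$, the left-associated application
\[
(\cdots((c_2\, c_2)\, c_2)\cdots)\, c_2 \qquad (k \text{ occurrences})
\]
is simply typable with order growing linearly in $k$, and it normalises to a numeral whose index is a tower of 2's of height roughly $k$, so its reduction sequences are about $(k{-}1)$-fold exponentially long in the size of the term.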

17. Behavioural equivalences for timed systems

Tomasz Brengos ; Marco Peressotti.
Timed transition systems are behavioural models that include an explicit treatment of time flow and are used to formalise the semantics of several foundational process calculi and automata. Despite their relevance, a general mathematical characterisation of timed transition systems and their behavioural theory is still missing. We introduce the first uniform framework for timed behavioural models that encompasses known behavioural equivalences such as timed bisimulations and timed language equivalences, as well as their weak and time-abstract counterparts. All these notions of equivalence are naturally organised by their discriminating power in a spectrum. We prove that this result does not depend on the type of the systems under scrutiny: it holds for any generalisation of timed transition systems. We instantiate our framework to timed transition systems and their quantitative extensions such as timed probabilistic systems.
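For reference, in the strong timed case the framework specialises to the familiar condition: a symmetric relation $R$ is a timed bisimulation when $p \mathrel{R} q$ implies
\[
p \xrightarrow{\ell} p' \;\Longrightarrow\; \exists q'.\; q \xrightarrow{\ell} q' \;\wedge\; p' \mathrel{R} q',
\]
where the label $\ell$ ranges over both actions and time delays $d \in \mathbb{R}_{\geq 0}$.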

18. Thin Games with Symmetry and Concurrent Hyland-Ong Games

Simon Castellan ; Pierre Clairambault ; Glynn Winskel.
We build a cartesian closed category, called Cho, based on event structures. It allows an interpretation of higher-order stateful concurrent programs that is refined and precise: on the one hand it is conservative with respect to standard Hyland-Ong games when interpreting purely functional programs as innocent strategies, while on the other hand it is much more expressive. The interpretation of programs constructs compositionally a representation of their execution that exhibits causal dependencies and remembers the points of non-deterministic branching. The construction is in two stages. First, we build a compact closed category Tcg. It is a variant of Rideau and Winskel's category CG, with the difference that games and strategies in Tcg are equipped with symmetry to express that certain events are essentially the same. This is analogous to the underlying category of AJM games enriching simple games with an equivalence relation on plays. Building on this category, we construct the cartesian closed category Cho as having as objects the standard arenas of Hyland-Ong games, with strategies, represented by certain event structures, playing on games with symmetry obtained as expanded forms of these arenas. To illustrate and shed operational light on these constructions, we interpret (a close variant of) Idealized Parallel Algol in Cho.

19. Shortest paths in one-counter systems

Dmitry Chistikov ; Wojciech Czerwiński ; Piotr Hofman ; Michał Pilipczuk ; Michael Wehar.
We show that any one-counter automaton with $n$ states, if its language is non-empty, accepts some word of length at most $O(n^2)$. This closes the gap between the previously known upper bound of $O(n^3)$ and lower bound of $\Omega(n^2)$. More generally, we prove a tight upper bound on the length of shortest paths between arbitrary configurations in one-counter transition systems (weaker bounds have previously appeared in the literature).

20. Displayed Categories

Benedikt Ahrens ; Peter LeFanu Lumsdaine.
We introduce and develop the notion of *displayed categories*. A displayed category over a category C is equivalent to "a category D and functor F : D --> C", but instead of having a single collection of "objects of D" with a map to the objects of C, the objects are given as a family indexed by objects of C, and similarly for the morphisms. This encapsulates a common way of building categories in practice, by starting with an existing category and adding extra data/properties to the objects and morphisms. The interest of this seemingly trivial reformulation is that various properties of functors are more naturally defined as properties of the corresponding displayed categories. Grothendieck fibrations, for example, when defined as certain functors, use equality on objects in their definition. When defined instead as certain displayed categories, no reference to equality on objects is required. Moreover, almost all examples of fibrations in nature are, in fact, categories whose standard construction can be seen as going via displayed categories. We therefore propose displayed categories as a basis for the development of fibrations in the type-theoretic setting, and similarly for various other notions whose classical definitions involve equality on objects. Besides giving a conceptual clarification of such issues, displayed categories also provide a powerful tool in computer formalisation, unifying and abstracting common constructions and proof […]
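Unfolding the definition for orientation (a paraphrase, not verbatim): a displayed category D over C assigns to every object c of C a family of objects over c, and to every morphism f : c --> c' together with x over c and x' over c' a family of morphisms over f; identities and composition lie over the identities and composition of C and satisfy the usual laws. The total category then has pairs (c, x) as objects, and first projection recovers the functor picture.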

21. On the Incomparability of Cache Algorithms in Terms of Timing Leakage

Pablo Cañones ; Boris Köpf ; Jan Reineke.
Modern computer architectures rely on caches to reduce the latency gap between the CPU and main memory. While indispensable for performance, caches pose a serious threat to security because they leak information about memory access patterns of programs via execution time. In this paper, we present a novel approach for reasoning about the security of cache algorithms with respect to timing leaks. The basis of our approach is the notion of leak competitiveness, which compares the leakage of two cache algorithms on every possible program. Based on this notion, we prove the following two results: First, we show that leak competitiveness is symmetric in the cache algorithms. This implies that no cache algorithm dominates another in terms of leakage via a program's total execution time. This is in contrast to performance, where it is known that such dominance relationships exist. Second, when restricted to caches with finite control, the leak-competitiveness relationship between two cache algorithms is either asymptotically linear or constant. No other shapes are possible.

22. The principle of pointfree continuity

Tatsuji Kawai ; Giovanni Sambin.
In the setting of constructive pointfree topology, we introduce a notion of continuous operation between pointfree topologies and the corresponding principle of pointfree continuity. An operation between points of pointfree topologies is continuous if it is induced by a relation between the bases of the topologies; this gives a rigorous condition for Brouwer's continuity principle to hold. The principle of pointfree continuity for pointfree topologies $\mathcal{S}$ and $\mathcal{T}$ says that any relation which induces a continuous operation between points is a morphism from $\mathcal{S}$ to $\mathcal{T}$. The principle holds under the assumption of bi-spatiality of $\mathcal{S}$. When $\mathcal{S}$ is the formal Baire space or the formal unit interval and $\mathcal{T}$ is the formal topology of natural numbers, the principle is equivalent to spatiality of the formal Baire space and formal unit interval, respectively. Some of the well-known connections between spatiality, bar induction, and compactness of the unit interval are recast in terms of our principle of continuity. We adopt the Minimalist Foundation as our constructive foundation, and positive topology as the notion of pointfree topology. This allows us to distinguish ideal objects from constructive ones, and in particular, to interpret choice sequences as points of the formal Baire space.

23. Web spaces and worldwide web spaces: topological aspects of domain theory

Marcel Erné.
Web spaces, wide web spaces and worldwide web spaces (alias C-spaces) provide useful generalizations of continuous domains. We present new characterizations of such spaces and their patch spaces, obtained by joining the original topology with a second topology having the dual specialization order; these patch spaces possess good convexity and separation properties and determine the original web spaces. The category of C-spaces is concretely isomorphic to the category of fan spaces; these are certain quasi-ordered spaces having neighborhood bases of fans, obtained by deleting a finite number of principal dual ideals from a principal dual ideal. Our approach has useful consequences for domain theory, because the T$_0$ web spaces are exactly the generalized Scott spaces of locally approximating ideal extensions, and the T$_0$ C-spaces are exactly the generalized Scott spaces of globally approximating and interpolating ideal extensions. We extend the characterization of continuous lattices as meet-continuous lattices with T$_2$ Lawson topology and the Fundamental Theorem of Compact Semilattices to non-complete posets. Finally, cardinal invariants like density and weight of the involved objects are investigated.

24. Capturing Polynomial Time using Modular Decomposition

Berit Grußien.
The question of whether there is a logic that captures polynomial time is one of the main open problems in descriptive complexity theory and database theory. In 2010 Grohe showed that fixed point logic with counting captures polynomial time on all classes of graphs with excluded minors. We now consider classes of graphs with excluded induced subgraphs. For such graph classes, an effective graph decomposition, called modular decomposition, was introduced by Gallai in 1976. The graphs that are non-decomposable with respect to modular decomposition are called prime. We present a tool, the Modular Decomposition Theorem, that reduces (definable) canonization of a graph class C to (definable) canonization of the class of prime graphs of C that are colored with binary relations on a linearly ordered set. By an application of the Modular Decomposition Theorem, we show that fixed point logic with counting captures polynomial time on the class of permutation graphs. Within the proof of the Modular Decomposition Theorem, we show that the modular decomposition of a graph is definable in symmetric transitive closure logic with counting. We obtain that the modular decomposition tree is computable in logarithmic space. It follows that cograph recognition and cograph canonization are computable in logarithmic space.
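For orientation, the basic notion is standard: a set $M \subseteq V(G)$ is a module of a graph $G$ if every vertex outside $M$ is adjacent either to all of $M$ or to none of it,
\[
\forall v \in V(G) \setminus M:\quad M \subseteq N(v) \;\text{ or }\; M \cap N(v) = \emptyset,
\]
and modular decomposition recursively factors $G$ through such sets; the graphs admitting only trivial modules are exactly the prime ones.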

25. On the First-Order Complexity of Induced Subgraph Isomorphism

Oleg Verbitsky ; Maksim Zhukovskii.
Given a graph $F$, let $I(F)$ be the class of graphs containing $F$ as an induced subgraph. Let $W[F]$ denote the minimum $k$ such that $I(F)$ is definable in $k$-variable first-order logic. The recognition problem of $I(F)$, known as Induced Subgraph Isomorphism (for the pattern graph $F$), is solvable in time $O(n^{W[F]})$. Motivated by this fact, we are interested in determining or estimating the value of $W[F]$. Using Olariu's characterization of paw-free graphs, we show that $I(K_3+e)$ is definable by a first-order sentence of quantifier depth 3, where $K_3+e$ denotes the paw graph. This provides an example of a graph $F$ with $W[F]$ strictly less than the number of vertices in $F$. On the other hand, we prove that $W[F]=4$ for all $F$ on 4 vertices except the paw graph and its complement. If $F$ is a graph on $t$ vertices, we prove a general lower bound $W[F]>(1/2-o(1))t$, where the function in the little-o notation approaches 0 as $t$ increases. This bound holds true even for a related parameter $W^*[F]\le W[F]$, which is defined as the minimum $k$ such that $I(F)$ is definable in the infinitary logic $L^k_{\infty\omega}$. We show that $W^*[F]$ can be strictly less than $W[F]$. Specifically, $W^*[P_4]=3$ for $P_4$ being the path graph on 4 vertices. Using the lower bound for $W[F]$, we also obtain a succinctness result for existential monadic second-order logic: A usage of just one monadic quantifier sometimes reduces the first-order quantifier depth at a […]

26. Coaxioms: flexible coinductive definitions by inference systems

Francesco Dagnino.
We introduce a generalized notion of inference system to support more flexible interpretations of recursive definitions. Besides axioms and inference rules with the usual meaning, we also allow coaxioms, which are, intuitively, axioms which can only be applied "at infinite depth" in a proof tree. Coaxioms allow us to interpret recursive definitions as fixed points which are not necessarily the least, nor the greatest one, whose existence is guaranteed by a smooth extension of classical results. This notion nicely subsumes standard inference systems and their inductive and coinductive interpretation, thus allowing formal reasoning in cases where the inductive and coinductive interpretation do not provide the intended meaning, but are rather mixed together. This is a corrected version of the paper (arXiv:1808.02943v4) published originally on 12 March 2019.

27. Stone-Type Dualities for Separation Logics

Simon Docherty ; David Pym.
Stone-type duality theorems, which relate algebraic and relational/topological models, are important tools in logic because -- in addition to elegant abstraction -- they strengthen soundness and completeness to a categorical equivalence, yielding a framework through which both algebraic and topological methods can be brought to bear on a logic. We give a systematic treatment of Stone-type duality for the structures that interpret bunched logics, starting with the weakest systems, recovering the familiar BI and Boolean BI (BBI), and extending to both classical and intuitionistic Separation Logic. We demonstrate the uniformity and modularity of this analysis by additionally capturing the bunched logics obtained by extending BI and BBI with modalities and multiplicative connectives corresponding to disjunction, negation and falsum. This includes the logic of separating modalities (LSM), De Morgan BI (DMBI), Classical BI (CBI), and the sub-classical family of logics extending Bi-intuitionistic (B)BI (Bi(B)BI). We additionally obtain as corollaries soundness and completeness theorems for the specific Kripke-style models of these logics as presented in the literature: for DMBI, the sub-classical logics extending BiBI and a new bunched logic, Concurrent Kleene BI (connecting our work to Concurrent Separation Logic), this is the first time soundness and completeness theorems have been proved. We thus obtain a comprehensive semantic account of the multiplicative variants of all […]
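For reference, the multiplicative conjunction at the heart of all these systems is interpreted, in Kripke-style resource models with a (partial, possibly non-deterministic) composition $\circ$ -- disjoint union, in the heap model -- by the clause
\[
s \models P * Q \iff \exists s_1, s_2.\; s = s_1 \circ s_2 \;\wedge\; s_1 \models P \;\wedge\; s_2 \models Q;
\]
it is the algebraic and topological duals of structures supporting this clause that the paper organises.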

28. Initial Semantics for Reduction Rules

Benedikt Ahrens.
We give an algebraic characterization of the syntax and operational semantics of a class of simply-typed languages, such as the language PCF: we characterize simply-typed syntax with variable binding and equipped with reduction rules via a universal property, namely as the initial object of some category of models. For this purpose, we employ techniques developed in two previous works: in the first work we model syntactic translations between languages over different sets of types as initial morphisms in a category of models. In the second work we characterize untyped syntax with reduction rules as initial object in a category of models. In the present work, we combine the techniques used earlier in order to characterize simply-typed syntax with reduction rules as initial object in a category. The universal property yields an operator which allows one to specify translations---that are semantically faithful by construction---between languages over possibly different sets of types. As an example, we upgrade a translation from PCF to the untyped lambda calculus, given in previous work, to account for reduction in the source and target. Specifically, we specify a reduction semantics in the source and target language through suitable rules. By equipping the untyped lambda calculus with the structure of a model of PCF, initiality yields a translation from PCF to the lambda calculus, that is faithful with respect to the reduction semantics specified by the rules. This paper is an […]

29. Topological Scott Convergence Theorem

Hadrian Andradi ; Weng Kin Ho.
Recently, J. D. Lawson encouraged the domain theory community to consider the scientific program of developing domain theory in the wider context of $T_0$ spaces instead of restricting to posets. In this paper, we respond to this call with an attempt to formulate a topological version of the Scott Convergence Theorem, i.e., an order-theoretic characterisation of those posets for which the Scott-convergence $\mathcal{S}$ is topological. To do this, we make use of the $\mathcal{ID}$ replacement principle to create topological analogues of well-known domain-theoretic concepts, e.g., $\mathcal{I}$-continuous spaces correspond to continuous posets, just as $\mathcal{I}$-convergence corresponds to $\mathcal{S}$-convergence. We consider two novel topological concepts, namely, the $\mathcal{I}$-stable spaces and the $\mathcal{DI}$ spaces, and as a result we obtain some necessary (respectively, sufficient) conditions under which the convergence structure $\mathcal{I}$ is topological.

30. Disjunctive bases: normal forms and model theory for modal logics

Sebastian Enqvist ; Yde Venema.
We present the concept of a disjunctive basis as a generic framework for normal forms in modal logic based on coalgebra. Disjunctive bases were defined in previous work on completeness for modal fixpoint logics, where they played a central role in the proof of a generic completeness theorem for coalgebraic mu-calculi. Believing the concept has a much wider significance, here we investigate it more thoroughly in its own right. We show that the presence of a disjunctive basis at the "one-step" level entails a number of good properties for a coalgebraic mu-calculus, in particular, a simulation theorem showing that every alternating automaton can be transformed into an equivalent nondeterministic one. Based on this, we prove a Lyndon theorem for the full fixpoint logic, its fixpoint-free fragment and its one-step fragment, and a Uniform Interpolation result, for both the full mu-calculus and its fixpoint-free fragment. We also raise the questions of when a disjunctive basis exists and how disjunctive bases are related to Moss' coalgebraic "nabla" modalities. Nabla formulas provide disjunctive bases for many coalgebraic modal logics, but there are cases where disjunctive bases give useful normal forms even when nabla formulas fail to do so, our prime example being graded modal logic. We also show that disjunctive bases are preserved by forming sums, products and compositions of coalgebraic modal logics, providing tools for modular construction of modal […]

31. Proving Soundness of Extensional Normal-Form Bisimilarities

Dariusz Biernacki ; Serguei Lenglet ; Piotr Polesiuk.
Normal-form bisimilarity is a simple, easy-to-use behavioral equivalence that relates terms in $\lambda$-calculi by decomposing their normal forms into bisimilar subterms. Moreover, it typically allows for powerful up-to techniques, such as bisimulation up to context, which simplify bisimulation proofs even further. However, proving soundness of these relations becomes complicated in the presence of $\eta$-expansion and usually relies on ad hoc proof methods which depend on the language. In this paper we propose a more systematic proof method to show that an extensional normal-form bisimilarity along with its corresponding up to context technique are sound. We illustrate our technique with three calculi: the call-by-value $\lambda$-calculus, the call-by-value $\lambda$-calculus with the delimited-control operators shift and reset, and the call-by-value $\lambda$-calculus with the abortive control operators call/cc and abort. In the first two cases, there was previously no sound up to context technique validating the $\eta$-law, whereas no theory of normal-form bisimulations for a calculus with call/cc and abort has been presented before. Our results have been fully formalized in the Coq proof assistant.

32. A classical groupoid model for quantum networks

David J. Reutter ; Jamie Vicary.
We give a mathematical analysis of a new type of classical computer network architecture, intended as a model of a new technology that has recently been proposed in industry. Our approach is based on groubits, generalizations of classical bits based on groupoids. This network architecture allows the direct execution of a number of protocols that are usually associated with quantum networks, including teleportation, dense coding and secure key distribution.

33. On Thin Air Reads: Towards an Event Structures Model of Relaxed Memory

Alan Jeffrey ; James Riely.
To model relaxed memory, we propose confusion-free event structures over an alphabet with a justification relation. Executions are modeled by justified configurations, where every read event has a justifying write event. Justification alone is too weak a criterion, since it allows cycles of the kind that result in so-called thin-air reads. Acyclic justification forbids such cycles, but also invalidates event reorderings that result from compiler optimizations and dynamic instruction scheduling. We propose the notion of well-justification, based on a game-like model, which strikes a middle ground. We show that well-justified configurations satisfy the DRF theorem: in any data-race free program, all well-justified configurations are sequentially consistent. We also show that rely-guarantee reasoning is sound for well-justified configurations, but not for justified configurations. For example, well-justified configurations are type-safe. Well-justification allows many, but not all reorderings performed by relaxed memory. In particular, it fails to validate the commutation of independent reads. We discuss variations that may address these shortcomings.
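The canonical out-of-thin-air example, recalled for orientation (with $x = y = 0$ initially):
\[
\begin{array}{l@{\qquad\parallel\qquad}l}
r_1 := x; & r_2 := y; \\
y := r_1; & x := r_2
\end{array}
\]
Justification alone admits the outcome $r_1 = r_2 = 42$, each write being justified by the read in the other thread in a cycle; acyclic justification and well-justification both rule it out.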

34. Computer-Assisted Proving of Combinatorial Conjectures Over Finite Domains: A Case Study of a Chess Conjecture

Predrag Janičić ; Filip Marić ; Marko Maliković.
There are several approaches for using computers in deriving mathematical proofs. For their illustration, we provide an in-depth study of using computer support for proving one complex combinatorial conjecture -- correctness of a strategy for the chess KRK endgame. The final, machine verifiable, result presented in this paper is that there is a winning strategy for white in the KRK endgame generalized to $n \times n$ board (for natural $n$ greater than $3$). We demonstrate that different approaches to computer-based theorem proving work best in synergy, and that the technology currently available is powerful enough to provide significant help to humans deriving complex proofs.

35. Moschovakis Extension of Represented Spaces

Dimiter Skordev.
Given a represented space (in the sense of TTE theory), an appropriate representation is constructed for the Moschovakis extension of its carrier (paying attention to the cases of effective topological spaces and effective metric spaces). Some results are presented about TTE computability in the represented space obtained in this way. For single-valued functions, we prove, roughly speaking, the computability of any function which is absolutely prime computable in some computable functions. A similar result holds for multi-valued functions, but with an analog of absolute prime computability. The formulation of this result makes use of the notion of computability in iterative combinatory spaces, a notion studied by the author in other publications.

36. Abstract Hidden Markov Models: a monadic account of quantitative information flow

Annabelle McIver ; Carroll Morgan ; Tahiry Rabehaja.
Hidden Markov Models, HMM's, are mathematical models of Markov processes with state that is hidden, but from which information can leak. They are typically represented as 3-way joint-probability distributions. We use HMM's as denotations of probabilistic hidden-state sequential programs: for that, we recast them as `abstract' HMM's, computations in the Giry monad $\mathbb{D}$, and we equip them with a partial order of increasing security. However to encode the monadic type with hiding over some state $\mathcal{X}$ we use $\mathbb{D}\mathcal{X}\to \mathbb{D}^2\mathcal{X}$ rather than the conventional $\mathcal{X}{\to}\mathbb{D}\mathcal{X}$ that suffices for Markov models whose state is not hidden. We illustrate the $\mathbb{D}\mathcal{X}\to \mathbb{D}^2\mathcal{X}$ construction with a small Haskell prototype. We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, with characteristic analytic properties for them, and show how the new entropies interact with the order of increasing security. Furthermore, we give a `backwards' uncertainty-transformer semantics for HMM's that is dual to the `forwards' abstract HMM's - it is an analogue of the duality between forwards, relational semantics and backwards, predicate-transformer semantics for imperative programs with demonic choice. Finally, we argue that, from this new denotational-semantic viewpoint, one can see that the Dalenius […]
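To make the typing concrete, here is a minimal sketch in the spirit of the Haskell prototype the authors mention, reusing the Dist monad sketched under entry 3 above (the name AHMM and the observe example are ours, purely illustrative):

    -- An abstract HMM over hidden state x: a prior is mapped to a
    -- distribution over posteriors (a hyper-distribution), D x -> D (D x).
    type AHMM x = Dist x -> Dist (Dist x)

    -- Illustrative channel: leaking whether the hidden state satisfies a
    -- predicate splits the prior into the two conditioned posteriors,
    -- each weighted by the probability of the corresponding observation.
    observe :: (x -> Bool) -> AHMM x
    observe p (Dist xs) =
      let part b  = [ (a, w) | (a, w) <- xs, p a == b ]
          mass ys = sum (map snd ys)
          norm ys = Dist [ (a, w / mass ys) | (a, w) <- ys ]
      in Dist [ (norm ys, mass ys) | b <- [True, False]
                                   , let ys = part b
                                   , not (null ys) ]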