Volume 18, Issue 3

2022


1. Modal meet-implication logic

Jim de Groot ; Dirk Pattinson.
We extend the meet-implication fragment of propositional intuitionistic logic with a meet-preserving modality. We give semantics based on semilattices and a duality result with a suitable notion of descriptive frame. As a consequence we obtain completeness and identify a common (modal) fragment of a large class of modal intuitionistic logics. We recognise this logic as a dialgebraic logic, and thereby obtain expressivity-somewhere-else. Within the dialgebraic framework, we then investigate the extension of the meet-implication fragment of propositional intuitionistic logic with a monotone modality and prove completeness and expressivity-somewhere-else for it.

2. Differentials and distances in probabilistic coherence spaces

Thomas Ehrhard.
In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First we show how derivatives allow us to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next we apply a general notion of "local" differential of morphisms to the proof of a Lipschitz property of these morphisms, which in turn allows us to relate the observational distance on pPCF terms to a distance the model is naturally equipped with. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.

3. Modules over monads and operational semantics (expanded version)

André Hirschowitz ; Tom Hirschowitz ; Ambroise Lafont.
This paper is a contribution to the search for efficient and high-level mathematical tools to specify and reason about (abstract) programming languages or calculi. Generalising the reduction monads of Ahrens et al., we introduce transition monads, thus covering new applications such as lambda-bar-mu-calculus, pi-calculus, Positive GSOS specifications, differential lambda-calculus, and the big-step, simply-typed, call-by-value lambda-calculus. Moreover, we design a suitable notion of signature for transition monads.

4. A Coalgebraic Approach to Dualities for Neighborhood Frames

Guram Bezhanishvili ; Nick Bezhanishvili ; Jim de Groot.
We develop a uniform coalgebraic approach to Jónsson-Tarski and Thomason type dualities for various classes of neighborhood frames and neighborhood algebras. In the first part of the paper we construct an endofunctor on the category of complete and atomic Boolean algebras that is dual to the double powerset functor on $\mathsf{Set}$. This allows us to show that Thomason duality for neighborhood frames can be viewed as an algebra-coalgebra duality. We generalize this approach to any class of algebras for an endofunctor presented by one-step axioms in the language of infinitary modal logic. As a consequence, we obtain a uniform approach to dualities for various classes of neighborhood frames, including monotone neighborhood frames, pretopological spaces, and topological spaces. In the second part of the paper we develop a coalgebraic approach to Jónsson-Tarski duality for neighborhood algebras and descriptive neighborhood frames. We introduce an analogue of the Vietoris endofunctor on the category of Stone spaces and show that descriptive neighborhood frames are isomorphic to coalgebras for this endofunctor. This allows us to obtain a coalgebraic proof of the duality between descriptive neighborhood frames and neighborhood algebras. Using one-step axioms in the language of finitary modal logic, we restrict this duality to other classes of neighborhood algebras studied in the literature, including monotone modal algebras and contingency algebras. We conclude the paper by […]

5. Why Does Propositional Quantification Make Modal and Temporal Logics on Trees Robustly Hard?

Bartosz Bednarczyk ; Stéphane Demri.
Adding propositional quantification to the modal logics K, T or S4 is known to lead to undecidability, but CTL with propositional quantification under the tree semantics (tQCTL) admits a non-elementary Tower-complete satisfiability problem. We investigate the complexity of strict fragments of tQCTL as well as of the modal logic K with propositional quantification under the tree semantics. More specifically, we show that tQCTL restricted to the temporal operator EX is already Tower-hard, which is unexpected as EX can only enforce local properties. When tQCTL restricted to EX is interpreted on N-bounded trees for some N >= 2, we prove that the satisfiability problem is AExpPol-complete; AExpPol-hardness is established by reduction from a recently introduced tiling problem, instrumental for studying the model-checking problem for interval temporal logics. As consequences of our proof method, we prove Tower-hardness of tQCTL restricted to EF or to EXEF and of well-known modal logics such as K, KD, GL, K4 and S4 with propositional quantification under a semantics based on classes of trees.

6. Fusible numbers and Peano Arithmetic

Jeff Erickson ; Gabriel Nivasch ; Junyan Xu.
Inspired by a mathematical riddle involving fuses, we define the "fusible numbers" as follows: $0$ is fusible, and whenever $x,y$ are fusible with $|y-x|<1$, the number $(x+y+1)/2$ is also fusible. We prove that the set of fusible numbers, ordered by the usual order on $\mathbb R$, is well-ordered, with order type $\varepsilon_0$. Furthermore, we prove that the density of the fusible numbers along the real line grows at an incredibly fast rate: Letting $g(n)$ be the largest gap between consecutive fusible numbers in the interval $[n,\infty)$, we have $g(n)^{-1} \ge F_{\varepsilon_0}(n-c)$ for some constant $c$, where $F_\alpha$ denotes the fast-growing hierarchy. Finally, we derive some true statements that can be formulated but not proven in Peano Arithmetic, of a different flavor than previously known such statements: PA cannot prove the true statement "For every natural number $n$ there exists a smallest fusible number larger than $n$." Also, consider the algorithm "$M(x)$: if $x<0$ return $-x$, else return $M(x-M(x-1))/2$." Then $M$ terminates on real inputs, although PA cannot prove the statement "$M$ terminates on all natural inputs."
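The definitions in this abstract are concrete enough to experiment with directly. The following Python sketch (illustrative only, not from the paper; the function names are ours) implements the recursion M quoted above with exact rational arithmetic and naively enumerates a few small fusible numbers. Note that M is feasible only for very small inputs, since its recursion depth grows roughly like the fast-growing hierarchy.

```python
# Illustrative sketch only (not from the paper): exact-arithmetic versions of the
# recursion M from the abstract and a brute-force enumeration of small fusible numbers.
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def M(x: Fraction) -> Fraction:
    """The algorithm from the abstract: M(x) = -x if x < 0, else M(x - M(x - 1)) / 2.
    Terminates on rational inputs, but the recursion already explodes around x = 3."""
    if x < 0:
        return -x
    return M(x - M(x - 1)) / 2

def fusible_up_to(bound: Fraction, rounds: int = 3) -> set:
    """Naive partial closure of {0} under (x, y) -> (x + y + 1)/2 for |x - y| < 1,
    truncated to values <= bound; 'rounds' bounds the number of closure passes."""
    fusible = {Fraction(0)}
    for _ in range(rounds):
        fusible |= {
            (x + y + 1) / 2
            for x in fusible for y in fusible
            if abs(x - y) < 1 and (x + y + 1) / 2 <= bound
        }
    return fusible

if __name__ == "__main__":
    print(M(Fraction(0)), M(Fraction(1)))        # 1/2 and 1/8
    print(sorted(fusible_up_to(Fraction(3, 2))))  # small fusible numbers below 3/2
```

For instance, M(1) = 1/8 matches the fact that the smallest fusible number larger than 1 is 1 + 1/8; the enumeration is deliberately truncated, since there are already infinitely many fusible numbers below 1.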

7. Modularising Verification Of Durable Opacity

Eleni Bila ; John Derrick ; Simon Doherty ; Brijesh Dongol ; Gerhard Schellhorn ; Heike Wehrheim.
Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.

8. Uniform Envelopes

Eike Neumann.
In the author's PhD thesis (2019) universal envelopes were introduced as a tool for studying the continuously obtainable information on discontinuous functions. To any function $f \colon X \to Y$ between $\operatorname{qcb}_0$-spaces one can assign a so-called universal envelope which, in a well-defined sense, encodes all continuously obtainable information on the function. A universal envelope consists of two continuous functions $F \colon X \to L$ and $\xi_L \colon Y \to L$ with values in a $\Sigma$-split injective space $L$. Any continuous function with values in an injective space whose composition with the original function is again continuous factors through the universal envelope. However, it is not possible in general to uniformly compute this factorisation. In this paper we propose the notion of uniform envelopes. A uniform envelope is additionally endowed with a map $u_L \colon L \to \mathcal{O}^2(Y)$ that is compatible with the multiplication of the double powerspace monad $\mathcal{O}^2$ in a certain sense. This yields for every continuous map with values in an injective space a choice of uniformly computable extension. Under a suitable condition which we call uniform universality, this extension yields a uniformly computable solution for the above factorisation problem. Uniform envelopes can be endowed with a composition operation. We establish criteria that ensure that the composition of two uniformly universal envelopes is again uniformly universal. […]

9. Computability of Data-Word Transductions over Different Data Domains

Léo Exibard ; Emmanuel Filiot ; Nathan Lhote ; Pierre-Alain Reynier.
In this paper, we investigate the problem of synthesizing computable functions of infinite words over an infinite alphabet (data $\omega$-words). The notion of computability is defined through Turing machines with infinite inputs which can produce the corresponding infinite outputs in the limit. We use non-deterministic transducers equipped with registers, an extension of register automata with outputs, to describe specifications. Being non-deterministic, such transducers may not define functions but more generally relations of data $\omega$-words. In order to increase the expressive power of these machines, we even allow guessing of arbitrary data values when updating their registers. For functions over data $\omega$-words, we identify a sufficient condition (the possibility of determining the next letter to be output, which we call the next letter problem) under which computability (resp. uniform computability) and continuity (resp. uniform continuity) coincide. We focus on two kinds of data domains: first, the general setting of oligomorphic data, which encompasses any data domain with equality, as well as the setting of rational numbers with linear order; and second, the set of natural numbers equipped with linear order. For both settings, we prove that functionality, i.e. determining whether the relation recognized by the transducer is actually a function, is decidable. We also show that the so-called next letter problem is decidable, yielding equivalence between […]

10. Addressing Machines as models of lambda-calculus

Giuseppe Della Penna ; Benedetto Intrigila ; Giulio Manzonetto.
Turing machines and register machines have been used for decades in theoretical computer science as abstract models of computation. The $\lambda$-calculus has also played a central role in this domain, as it allows one to focus on the notion of functional computation, based on the substitution mechanism, while abstracting away from implementation details. The present article starts from the observation that the equivalence between these formalisms is based on the Church-Turing Thesis rather than an actual encoding of $\lambda$-terms into Turing (or register) machines. The reason is that these machines are not well-suited for modelling $\lambda$-calculus programs. We study a class of abstract machines that we call "addressing machines", since they are only able to manipulate memory addresses of other machines. The operations performed by these machines are very elementary: load an address in a register, apply a machine to another one via their addresses, and call the address of another machine. We endow addressing machines with an operational semantics based on leftmost reduction and study their behaviour. The set of addresses of these machines can be easily turned into a combinatory algebra. In order to obtain a model of the full untyped $\lambda$-calculus, we need to introduce a rule that bears similarities with the $\omega$-rule and the rule $\zeta_\beta$ from combinatory logic.

11. A First-Order Complete Temporal Logic for Structured Context-Free Languages

Michele Chiari ; Dino Mandrioli ; Matteo Pradella.
The problem of model checking procedural programs has fostered much research towards the definition of temporal logics for reasoning on context-free structures. The most notable of such results are temporal logics on Nested Words, such as CaRet and NWTL. Recently, the logic OPTL was introduced, based on the class of Operator Precedence Languages (OPLs), more powerful than Nested Words. We define the new OPL-based logic POTL and prove its FO-completeness. POTL improves on NWTL by enabling the formulation of requirements involving pre/post-conditions, stack inspection, and others in the presence of exception-like constructs. It also improves on OPTL, which instead we show not to be FO-complete, and it makes stack inspection and function-local properties easier to express. In a companion paper we report a model checking procedure for POTL and experimental results based on a prototype tool developed for this purpose. For completeness, a short summary of this complementary result is also provided in this paper.

12. Timed Automata Robustness Analysis via Model Checking

Jaroslav Bendík ; Ahmet Sencan ; Ebru Aydin Gol ; Ivana Černá.
Timed automata (TA) have been widely adopted as a suitable formalism to model time-critical systems. Furthermore, contemporary model-checking tools allow the designer to check whether a TA complies with a system specification. However, the exact timing constants are often uncertain during the design phase. Consequently, the designer is often able to build a TA with a correct structure; however, the timing constants need to be tuned to satisfy the specification. Moreover, even if the TA initially satisfies the specification, it can be the case that just a slight perturbation during the implementation causes a violation of the specification. Unfortunately, model-checking tools are usually not able to provide any reasonable guidance on how to fix the model in such situations. In this paper, we propose several concepts and techniques to cope with the above-mentioned design-phase issues when dealing with reachability and safety specifications.

13. Comparator automata in quantitative verification

Suguman Bansal ; Swarat Chaudhuri ; Moshe Y. Vardi.
The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented by Büchi automata result in comparators that are finite-state and accept by the Büchi condition as well. Such $\omega$-regular comparators further lead to generic algorithms for a number of well-studied problems, including the quantitative inclusion problem and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem. We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is $\omega$-regular iff the discount-factor is an integer. Not every aggregate function, however, has an $\omega$-regular comparator. Specifically, we show that the language of sequence-pairs for which limit-average aggregates exist is neither […]
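To make the compared aggregate values concrete, here is a small Python sketch of our own (not the paper's construction): it evaluates and compares finite-prefix discounted sums directly, whereas an $\omega$-regular comparator performs this comparison online with a Büchi automaton. The convention $\mathrm{DS}(w,d)=\sum_i w_i/d^i$ with integer discount factor $d>1$ is an assumption on our part.

```python
# Toy sketch (not the paper's construction): compare discounted-sum aggregates of two
# weight sequences on a finite prefix. An omega-regular comparator does this online
# with a Buchi automaton; here we simply evaluate the partial sums.
from fractions import Fraction
from itertools import islice, chain, repeat
from typing import Iterable

def discounted_sum(weights: Iterable[int], d: int, prefix_len: int) -> Fraction:
    """Partial discounted sum  sum_i w_i / d^i  over the first prefix_len weights
    (assuming the convention with an integer discount factor d > 1)."""
    return sum(Fraction(w, d ** i) for i, w in enumerate(islice(weights, prefix_len)))

def compare_prefix(a: Iterable[int], b: Iterable[int], d: int = 2, prefix_len: int = 20) -> int:
    """Return -1, 0, or +1 according to the order of the two partial aggregates."""
    da, db = discounted_sum(a, d, prefix_len), discounted_sum(b, d, prefix_len)
    return (da > db) - (da < db)

if __name__ == "__main__":
    a = chain([1, 0], repeat(0))   # 1 + 0/2 + 0/4 + ... = 1
    b = chain([0], repeat(1))      # 0 + 1/2 + 1/4 + ... -> 1 only in the limit
    print(compare_prefix(a, b))    # +1 on any finite prefix
```

The example also hints at why online comparison is subtle: the two sequences have equal aggregates only in the limit, while every finite prefix of the second falls strictly short.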

14. Tameness and the power of programs over monoids in DA

Nathan Grosshans ; Pierre Mckenzie ; Luc Segoufin.
The program-over-monoid model of computation originates with Barrington's proof that the model captures the complexity class $\mathsf{NC^1}$. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as $\mathbf{DA}$ satisfies tameness and hence that the regular languages recognized by programs over monoids in $\mathbf{DA}$ are precisely those recognizable in the classical sense by morphisms from $\mathbf{QDA}$. Third, we show by contrast that the well-studied class of monoids called $\mathbf{J}$ is not tame. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from $\mathbf{DA}$.

15. Point-free Construction of Real Exponentiation

Ming Ng ; Steven Vickers.
We define a point-free construction of real exponentiation and logarithms, i.e. we construct the maps $\exp\colon (0, \infty)\times \mathbb{R} \rightarrow (0,\infty),\, (x, \zeta) \mapsto x^\zeta$ and $\log\colon (1,\infty)\times (0, \infty) \rightarrow\mathbb{R},\, (b, y) \mapsto \log_b(y)$, and we develop familiar algebraic rules for them. The point-free approach is constructive, and defines the points of a space as models of a geometric theory, rather than as elements of a set; in particular, this allows geometric constructions to be applied to points living in toposes other than Set. Our geometric development includes new lifting and gluing techniques in point-free topology, which highlight how properties of $\mathbb{Q}$ determine properties of real exponentiation. This work is motivated by our broader research programme of developing a version of adelic geometry via topos theory. In particular, we wish to construct the classifying topos of places of $\mathbb{Q}$, which will provide a geometric perspective into the subtle relationship between $\mathbb{R}$ and $\mathbb{Q}_p$, a question of longstanding number-theoretic interest.

16. Minimization and Canonization of GFG Transition-Based Automata

Bader Abu Radi ; Orna Kupferman.
While many applications of automata in formal methods can use nondeterministic automata, some applications, most notably synthesis, need deterministic or good-for-games (GFG) automata. The latter are nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past. The minimization problem for deterministic Büchi and co-Büchi word automata is NP-complete. In particular, no canonical minimal deterministic automaton exists, and a language may have different minimal deterministic automata. We describe a polynomial minimization algorithm for GFG co-Büchi word automata with transition-based acceptance. Thus, a run is accepting if it traverses a set $\alpha$ of designated transitions only finitely often. Our algorithm is based on a sequence of transformations we apply to the automaton, on top of which a minimal quotient automaton is defined. We use our minimization algorithm to show canonicity for transition-based GFG co-Büchi word automata: all minimal automata have isomorphic safe components (namely components obtained by restricting the transitions to those not in $\alpha$) and once we saturate the automata with $\alpha$-transitions, we get full isomorphism.

17. One-Clock Priced Timed Games with Negative Weights

Thomas Brihaye ; Gilles Geeraerts ; Axel Haddad ; Engel Lefaucheux ; Benjamin Monmege.
Priced timed games are two-player zero-sum games played on priced timed automata (whose locations and transitions are labeled by weights modelling the cost of spending time in a state and executing an action, respectively). The goals of the players are to minimise and maximise the cost to reach a target location, respectively. We consider priced timed games with one clock and arbitrary integer weights and show that, for an important subclass of them (the so-called simple priced timed games), one can compute, in pseudo-polynomial time, the optimal values that the players can achieve, with their associated optimal strategies. As side results, we also show that one-clock priced timed games are determined and that we can use our result on simple priced timed games to solve the more general class of so-called negative-reset-acyclic priced timed games (with arbitrary integer weights and one clock). The decidability status of the full class of priced timed games with one clock and arbitrary integer weights remains open.

18. Moss' logic for ordered coalgebras

Marta Bílková ; Matěj Dostál.
We present a finitary version of Moss' coalgebraic logic for $T$-coalgebras, where $T$ is a locally monotone endofunctor of the category of posets and monotone maps. The logic uses a single cover modality whose arity is given by the least finitary subfunctor of the dual of the coalgebra functor $T_\omega^\partial$, and the semantics of the modality is given by relation lifting. For the semantics to work, $T$ is required to preserve exact squares. For the finitary setting to work, $T_\omega^\partial$ is required to preserve finite intersections. We develop a notion of a base for subobjects of $T_\omega X$. This in particular allows us to talk about the finite poset of subformulas for a given formula. The notion of a base is introduced generally for a category equipped with a suitable factorisation system. We prove that the resulting logic has the Hennessy-Milner property for the notion of similarity based on the notion of relation lifting. We define a sequent proof system for the logic, and prove its completeness.

19. Deciding All Behavioral Equivalences at Once: A Game for Linear-Time--Branching-Time Spectroscopy

Benjamin Bisping ; David N. Jansen ; Uwe Nestmann.
We introduce a generalization of the bisimulation game that finds distinguishing Hennessy-Milner logic formulas from every finitary, subformula-closed language in van Glabbeek's linear-time--branching-time spectrum between two finite-state processes. We identify the relevant dimensions that measure expressive power to yield formulas belonging to the coarsest distinguishing behavioral preorders and equivalences; the compared processes are equivalent in each coarser behavioral equivalence from the spectrum. We prove that the induced algorithm can determine the best fit of (in)equivalences for a pair of processes.

20. Instance reducibility and Weihrauch degrees

Andrej Bauer.
We identify a notion of reducibility between predicates, called instance reducibility, which commonly appears in reverse constructive mathematics. The notion can be generally used to compare and classify various principles studied in reverse constructive mathematics (formal Church's thesis, Brouwer's Continuity principle and Fan theorem, Excluded middle, Limited principle, Function choice, Markov's principle, etc.). We show that the instance degrees form a frame, i.e., a complete lattice in which finite infima distribute over set-indexed suprema. They turn out to be equivalent to the frame of upper sets of truth values, ordered by the reverse Smyth partial order. We study the overall structure of the lattice: the subobject classifier embeds into the lattice in two different ways, one monotone and the other antimonotone, and the $\lnot\lnot$-dense degrees coincide with those that are reducible to the degree of Excluded middle. We give an explicit formulation of instance degrees in a relative realizability topos, and call these extended Weihrauch degrees, because in Kleene-Vesley realizability the $\lnot\lnot$-dense modest instance degrees correspond precisely to Weihrauch degrees. The extended degrees improve the structure of Weihrauch degrees by equipping them with computable infima and suprema, an implication, the ability to control access to parameters and computation of results, and by generally widening the scope of Weihrauch reducibility.

21. The Third Trick

Hannes Diener ; Matthew Hendtlass.
We prove a result, similar to the ones known as Ishihara's First and Second Trick, for sequences of functions.

22. Typability and Type Inference in Atomic Polymorphism

M. Clarence Protin ; Gilda Ferreira.
It is well-known that typability, type inhabitation and type inference are undecidable in the Girard-Reynolds polymorphic system F. It has recently been proven that type inhabitation remains undecidable even in the predicative fragment of system F in which all universal instantiations have an atomic witness (system Fat). In this paper we analyze typability and type inference in Curry style variants of system Fat and show that typability is decidable and that there is an algorithm for type inference which is capable of dealing with non-redundancy constraints.

23. Strongly-Normalizing Higher-Order Relational Queries

Wilmer Ricciotti ; James Cheney.
Language-integrated query is a powerful programming construct allowing database queries and ordinary program code to interoperate seamlessly and safely. Language-integrated query techniques rely on classical results about the nested relational calculus, stating that its queries can be algorithmically translated to SQL, as long as their result type is a flat relation. Cooper and others advocated higher-order nested relational calculi as a basis for language-integrated queries in functional languages such as Links and F#. However, the translation of higher-order relational queries to SQL relies on a rewrite system for which no strong normalization proof has been published: a previous proof attempt does not deal correctly with rewrite rules that duplicate subterms. This paper fills the gap in the literature, explaining the difficulty with a previous proof attempt, and showing how to extend the $\top\top$-lifting approach of Lindley and Stark to accommodate duplicating rewrites. We also show how to extend the proof to a recently-introduced calculus for heterogeneous queries mixing set and multiset semantics.

24. Limits of real numbers in the binary signed digit representation

Franziskus Wiesnet ; Nils Köpp.
We extract verified algorithms for exact real number computation from constructive proofs. To this end we use a coinductive representation of reals as streams of binary signed digits. The main objective of this paper is the formalisation of a constructive proof that real numbers are closed with respect to limits. All the proofs of the main theorem and the first application are implemented in the Minlog proof system and the extracted terms are further translated into Haskell. We compare two approaches. The first approach is a direct proof. In the second approach we make use of the representation of reals by a Cauchy sequence of rationals. Utilizing translations between the two representations and using the completeness of the Cauchy reals, the proof is very short. In both cases we use Minlog's program extraction mechanism to automatically extract a formally verified program that transforms a converging sequence of reals, i.e. a sequence of streams of binary signed digits together with a modulus of convergence, into the binary signed digit representation of its limit. The correctness of the extracted terms follows directly from the soundness theorem of program extraction. As a first application we use the extracted algorithms together with Heron's method to construct an algorithm that computes square roots with respect to the binary signed digit representation. In a second application we use the convergence theorem to show that the signed digit representation of real […]
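For readers unfamiliar with the representation, the following Python sketch (purely illustrative; the paper's development is carried out in Minlog, with extraction to Haskell) shows how a finite prefix of binary signed digits in {-1, 0, 1} denotes a dyadic rational and pins a real in [-1, 1] down to an interval, and how a dyadic rational can be turned greedily into such digits.

```python
# Illustrative sketch only (the paper works in Minlog, extracting Haskell): a finite
# prefix d_1 d_2 ... d_n of binary signed digits in {-1, 0, 1} denotes the dyadic
# rational  sum_i d_i * 2^(-i)  and pins the real down to an interval of width 2^(1-n).
from fractions import Fraction
from typing import List, Tuple

def value(digits: List[int]) -> Fraction:
    """Dyadic rational denoted by a finite signed-digit prefix."""
    return sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(digits))

def interval(digits: List[int]) -> Tuple[Fraction, Fraction]:
    """Interval of reals consistent with the prefix (the tail contributes at most 2^-n)."""
    v, err = value(digits), Fraction(1, 2 ** len(digits))
    return v - err, v + err

def digits_of(x: Fraction, n: int) -> List[int]:
    """First n signed digits for a rational x in [-1, 1], chosen greedily."""
    assert -1 <= x <= 1
    out = []
    for _ in range(n):
        d = 1 if x >= Fraction(1, 2) else (-1 if x <= Fraction(-1, 2) else 0)
        out.append(d)
        x = 2 * x - d          # the remainder stays in [-1, 1]
    return out

if __name__ == "__main__":
    ds = digits_of(Fraction(5, 8), 5)
    print(ds, value(ds), interval(ds))  # value(ds) == 5/8, since 5/8 is dyadic and 5 digits suffice
```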

25. Small Promise CSPs that reduce to large CSPs

Alexandr Kazda ; Peter Mayr ; Dmitriy Zhuk.
For relational structures A, B of the same signature, the Promise Constraint Satisfaction Problem PCSP(A,B) asks whether a given input structure maps homomorphically to A or does not even map to B. We are promised that the input satisfies exactly one of these two cases. If there exists a structure C with homomorphisms $A\to C\to B$, then PCSP(A,B) reduces naturally to CSP(C). To the best of our knowledge, all known tractable PCSPs reduce to tractable CSPs in this way. However, Barto showed that some PCSPs over finite structures A, B require solving CSPs over infinite C. We show that even when such a reduction to finite C is possible, this structure may become arbitrarily large. For every integer $n>1$ and every prime p we give A, B of size n with a single relation of arity $n^p$ such that PCSP(A, B) reduces via a chain of homomorphisms $ A\to C\to B$ to a tractable CSP over some C of size p but not over any smaller structure. In a second family of examples, for every prime $p\geq 7$ we construct A, B of size $p-1$ with a single ternary relation such that PCSP(A, B) reduces via $A\to C\to B$ to a tractable CSP over some C of size p but not over any smaller structure. In contrast, we show that if A, B are graphs and PCSP(A,B) reduces to tractable CSP(C) for some finite digraph C, then already A or B has a tractable CSP. This extends results and answers a question of Deng et al.
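The sandwich condition $A\to C\to B$ mentioned above is easy to check by brute force on small examples. The sketch below is ours, restricted to digraphs, and only illustrates when the reduction to CSP(C) is available (it says nothing about tractability): it searches for homomorphisms between finite digraphs and verifies the condition for $A = K_3$, $B = K_4$, $C = K_3$.

```python
# Toy illustration (not from the paper): brute-force search for homomorphisms between
# finite digraphs, used to check the sandwich condition A -> C -> B under which
# PCSP(A, B) reduces to CSP(C). Structures here are digraphs only, for simplicity.
from itertools import product

def homomorphism_exists(edges_x, verts_x, edges_y, verts_y) -> bool:
    """Is there h: verts_x -> verts_y with (h(u), h(v)) an edge of Y for every edge (u, v) of X?"""
    verts_x, verts_y = list(verts_x), list(verts_y)
    for image in product(verts_y, repeat=len(verts_x)):
        h = dict(zip(verts_x, image))
        if all((h[u], h[v]) in edges_y for (u, v) in edges_x):
            return True
    return False

def clique(n):
    """Complete digraph K_n as (edges, vertices)."""
    verts = range(n)
    return {(u, v) for u in verts for v in verts if u != v}, set(verts)

if __name__ == "__main__":
    # A = K3, B = K4: the promise problem "3-colourable vs. not even 4-colourable".
    (ea, va), (eb, vb), (ec, vc) = clique(3), clique(4), clique(3)   # try C = K3
    print(homomorphism_exists(ea, va, ec, vc) and homomorphism_exists(ec, vc, eb, vb))  # True
```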

26. Counterexample-Guided Prophecy for Model Checking Modulo the Theory of Arrays

Makai Mann ; Ahmed Irfan ; Alberto Griggio ; Oded Padon ; Clark Barrett.
We develop a framework for model checking infinite-state systems by automatically augmenting them with auxiliary variables, enabling quantifier-free induction proofs for systems that would otherwise require quantified invariants. We combine this mechanism with a counterexample-guided abstraction refinement scheme for the theory of arrays. Our framework can thus, in many cases, reduce inductive reasoning with quantifiers and arrays to quantifier-free and array-free reasoning. We evaluate the approach on a wide set of benchmarks from the literature. The results show that our implementation often outperforms state-of-the-art tools, demonstrating its practical potential.

27. Inference Systems with Corules for Combined Safety and Liveness Properties of Binary Session Types

Luca Ciccone ; Luca Padovani.
Many properties of communication protocols combine safety and liveness aspects. Characterizing such combined properties by means of a single inference system is difficult because of the fundamentally different techniques (coinduction and induction, respectively) usually involved in defining and proving them. In this paper we show that Generalized Inference Systems allow us to obtain sound and complete characterizations of (at least some of) these combined inductive/coinductive properties of binary session types. In particular, we illustrate the role of corules in characterizing fair termination (the property of protocols that can always eventually terminate), fair compliance (the property of interactions that can always be extended to reach client satisfaction) and fair subtyping, a liveness-preserving refinement relation for session types. The characterizations we obtain are simpler compared to the previously available ones and corules provide insight on the liveness properties being ensured or preserved. Moreover, we can conveniently appeal to the bounded coinduction principle to prove the completeness of the provided characterizations.

28. Linear Dependent Type Theory for Quantum Programming Languages

Peng Fu ; Kohei Kishida ; Peter Selinger.
Modern quantum programming languages integrate quantum resources and classical control. They must, on the one hand, be linearly typed to reflect the no-cloning property of quantum resources. On the other hand, high-level and practical languages should also support quantum circuits as first-class citizens, as well as families of circuits that are indexed by some classical parameters. Quantum programming languages thus need linear dependent type theory. This paper defines a general semantic structure for such a type theory via certain fibrations of monoidal categories. The categorical model of the quantum circuit description language Proto-Quipper-M by Rios and Selinger (2017) constitutes an example of such a fibration, which means that the language can readily be integrated with dependent types. We then devise both a general linear dependent type system and a dependently typed extension of Proto-Quipper-M, and provide them with operational semantics as well as a prototype implementation.

29. The Theory of Universal Graphs for Infinite Duration Games

Thomas Colcombet ; Nathanaël Fijalkow ; Paweł Gawrychowski ; Pierre Ohlmann.
We introduce the notion of universal graphs as a tool for constructing algorithms solving games of infinite duration such as parity games and mean payoff games. In the first part we develop the theory of universal graphs, with two goals: showing an equivalence and normalisation result between different recently introduced related models, and constructing generic value iteration algorithms for any positionally determined objective. In the second part we give four applications: to parity games, to mean payoff games, to a disjunction between a parity and a mean payoff objective, and to disjunctions of several mean payoff objectives. For each of these four cases we construct algorithms achieving or improving over the best known time and space complexity.

30. Distributed Asynchronous Games With Causal Memory are Undecidable

Hugo Gimbert.
We show the undecidability of the distributed control problem when the plant is an asynchronous automaton, the controllers use causal memory and the goal of the controllers is to put each process in a local accepting state.

31. Minimality Notions via Factorization Systems and Examples

Thorsten Wißmann.
For the minimization of state-based systems (i.e. the reduction of the number of states while retaining the system's semantics), there are two obvious aspects: removing unnecessary states of the system and merging redundant states in the system. In the present article, we relate the two minimization aspects on coalgebras by defining an abstract notion of minimality. The abstract notions of minimality and minimization live in a general category with a factorization system. We find criteria on the category that ensure uniqueness, existence, and functoriality of the minimization aspects. The proofs of these results instantiate to those for reachability and observability minimization in the standard coalgebra literature. Finally, we see how the two aspects of minimization interact and under which criteria they can be sequenced in any order, as in automata minimization.

32. Quantum Control in the Unitary Sphere: Lambda-S1 and its Categorical Model

Alejandro Díaz-Caro ; Octavio Malherbe.
In a recent paper, a realizability technique has been used to give a semantics of a quantum lambda calculus. Such a technique gives rise to an infinite number of valid typing rules, without giving preference to any subset of those. In this paper, we introduce a valid subset of typing rules, defining an expressive enough quantum calculus. Then, we propose a categorical semantics for it. Such a semantics consists of an adjunction between the category of distributive-action spaces of value distributions (that is, linear combinations of values in the lambda calculus), and the category of sets of value distributions.

33. Relating Functional and Imperative Session Types

Hannes Saffrich ; Peter Thiemann.
Imperative session types provide an imperative interface to session-typed communication. In such an interface, channel references are first-class objects with operations that change the typestate of the channel. Compared to functional session type APIs, the program structure is simpler at the surface, but typestate is required to model the current state of communication throughout. After early work that explored the imperative approach, a significant body of work on session types has neglected it in favour of a functional approach that uses linear types to manage channel references soundly. We demonstrate that the functional approach subsumes the early work on imperative session types by exhibiting a typing and semantics preserving translation into a system of linear functional session types. We further show that the untyped backwards translation from the functional to the imperative calculus is semantics preserving. We restrict the type system of the functional calculus such that the backwards translation becomes type preserving. Thus, we precisely capture the difference in expressiveness of the two calculi and conclude that the lack of expressiveness in the imperative calculus is largely due to restrictions imposed by its type system.

34. A Functional Abstraction of Typed Invocation Contexts

Youyou Cong ; Chiaki Ishio ; Kaho Honda ; Kenichi Asai.
In their paper "A Functional Abstraction of Typed Contexts", Danvy and Filinski show how to derive a monomorphic type system of the shift and reset operators from a CPS semantics. In this paper, we show how this method scales to Felleisen's control and prompt operators. Compared to shift and reset, control and prompt exhibit a more dynamic behavior, in that they can manipulate a trail of contexts surrounding the invocation of previously captured continuations. Our key observation is that, by adopting a functional representation of trails in the CPS semantics, we can derive a type system that encodes all and only constraints imposed by the CPS semantics.

35. On sets of terms having a given intersection type

Andrew Polonsky ; Richard Statman.
Working in a variant of the intersection type assignment system of Coppo, Dezani-Ciancaglini and Venneri [1981], we prove several facts about sets of terms having a given intersection type. Our main result is that every strongly normalizing term M admits a uniqueness typing, which is a pair $(\Gamma,A)$ such that (1) $\Gamma \vdash M : A$, and (2) $\Gamma \vdash N : A \Longrightarrow M =_{\beta\eta} N$. We also discuss several presentations of intersection type algebras, and the corresponding choices of type assignment rules. Moreover, we show that the set of closed terms with a given type is uniformly separable, and, if infinite, forms an adequate numeral system. The proof of this fact uses an internal version of the Böhm-out technique, adapted to terms of a given intersection type.

36. A Lower Bound of the Number of Rewrite Rules Obtained by Homological Methods

Mirai Ikebuchi.
It is well-known that some equational theories such as groups or boolean algebras can be defined by fewer equational axioms than their standard axiomatizations. However, it is not easy to determine whether a given set of axioms is the smallest. Malbos and Mimram investigated a general method to find a lower bound on the cardinality of a set of equational axioms (or rewrite rules) that is equivalent to a given equational theory (or term rewriting system), using homological algebra. Their method is an analog of Squier's homology theory on string rewriting systems. In this paper, we develop the homology theory for term rewriting systems further and provide a better lower bound under a stronger notion of equivalence than theirs. The author also implemented a program to compute the lower bounds, and experimented with 64 complete TRSs.

37. A categorical framework for congruence of applicative bisimilarity in higher-order languages

Tom Hirschowitz ; Ambroise Lafont.
Applicative bisimilarity is a coinductive characterisation of observational equivalence in call-by-name lambda-calculus, introduced by Abramsky (1990). Howe (1996) gave a direct proof that it is a congruence, and generalised the result to all languages complying with a suitable format. We propose a categorical framework for specifying operational semantics, in which we prove that (an abstract analogue of) applicative bisimilarity is automatically a congruence. Example instances include standard applicative bisimilarity in call-by-name, call-by-value, and call-by-name non-deterministic $\lambda$-calculus, and more generally all languages complying with a variant of Howe's format.