Volume 17, Issue 1

2021


1. Axiomatizing Maximal Progress and Discrete Time

Mario Bravetti.
Milner's complete proof system for observational congruence is crucially based on the possibility of equating $\tau$-divergent expressions to non-divergent ones by means of the axiom $recX. (\tau.X + E) = recX. \tau. E$. In the presence of a notion of priority, where, e.g., actions of type $\delta$ have a lower priority than silent $\tau$ actions, this axiom is no longer sound. Such a form of priority is, however, common in timed process algebra, where, due to the interpretation of $\delta$ as a time delay, it naturally arises from the maximal progress assumption. We here present our solution to the long-standing open problem of axiomatizing priority using standard observational congruence, based on the introduction of an auxiliary operator $pri(E)$ defining a "priority scope": we provide a complete axiomatization for a basic process algebra with priority and (unguarded) recursion. We also show that, when the setting is extended with static operators of a discrete-time calculus, an axiomatization that is complete over (a characterization of) finite-state terms can be developed by re-using techniques devised in the context of a cooperation with Prof. Jos Baeten.

2. Tameness in least fixed-point logic and McColm's conjecture

Siddharth Bhaskar ; Alex Kruckman.
We investigate four model-theoretic tameness properties in the context of least fixed-point logic over a family of finite structures. We find that each of these properties depends only on the elementary (i.e., first-order) limit theory, and we completely determine the valid entailments among them. In contrast to the context of first-order logic on arbitrary structures, the order property and independence property are equivalent in this setting. McColm conjectured that least fixed-point definability collapses to first-order definability exactly when proficiency fails. McColm's conjecture is known to be false in general. However, we show that McColm's conjecture is true for any family of finite structures whose limit theory is model-theoretically tame.

3. 2-adjoint equivalences in homotopy type theory

Daniel Carranza ; Jonathan Chang ; Chris Kapulkin ; Ryan Sandford.
We introduce the notion of (half) 2-adjoint equivalences in Homotopy Type Theory and prove their expected properties. We formalized these results in the Lean Theorem Prover.
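For orientation (a standard definition from the Homotopy Type Theory literature, not quoted from the paper), a half adjoint equivalence packages an inverse, two homotopies, and one coherence cell:

$$\mathrm{ishae}(f) :\equiv \sum_{g : B \to A}\; \sum_{\eta : g \circ f \sim \mathrm{id}_A}\; \sum_{\epsilon : f \circ g \sim \mathrm{id}_B}\; \prod_{x : A} \mathsf{ap}_f(\eta\, x) = \epsilon\, (f\, x).$$

To our understanding, the (half) 2-adjoint variants studied in the paper are obtained by adding further coherence data of this kind.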

4. On the Strong Equivalences for LPMLN Programs

Bin Wang ; Jun Shen ; Shutao Zhang ; Zhizheng Zhang.
LPMLN is a powerful knowledge representation and reasoning tool that combines the non-monotonic reasoning ability of Answer Set Programming (ASP) and the probabilistic reasoning ability of Markov Logic Networks (MLN). In this paper, we study strong equivalence for LPMLN programs, an important tool for program rewriting and theoretical investigations in the field of logic programming. First, we present the notion of p-strong equivalence for LPMLN, give a model-theoretic characterization of it, and investigate how p-strong equivalence relates to other existing notions of strong equivalence for LPMLN. We then investigate several properties of p-strong equivalence from four aspects. Firstly, we study two relaxed notions of p-strong equivalence motivated by practical program-rewriting scenarios and present corresponding characterizations. Secondly, we analyze the computational complexity of deciding strong equivalences for LPMLN programs. Thirdly, we investigate the relationships between the strong equivalences of LPMLN and two extensions of ASP: ASP with weak constraints and ordered disjunctions. Finally, we investigate LPMLN program simplification via p-strong equivalence and present syntactic conditions for deciding the p-strong equivalence between a single LPMLN rule and the empty program. The contributions of the paper are as follows. Firstly, all of the results […]

5. Definable decompositions for graphs of bounded linear cliquewidth

Mikołaj Bojańczyk ; Martin Grohe ; Michał Pilipczuk.
We prove that for every positive integer k, there exists an MSO_1-transduction that, given a graph of linear cliquewidth at most k, nondeterministically outputs some cliquewidth decomposition of the graph of width bounded by a function of k. A direct corollary of this result is the equivalence of the notions of CMSO_1-definability and recognizability on graphs of bounded linear cliquewidth.

6. Modal Logics for Nominal Transition Systems

Joachim Parrow ; Johannes Borgström ; Lars-Henrik Eriksson ; Ramūnas Forsberg Gutkovas ; Tjark Weber.
We define a general notion of transition system where states and action labels can be from arbitrary nominal sets, actions may bind names, and state predicates from an arbitrary logic define properties of states. A Hennessy-Milner logic for these systems is introduced, and proved adequate and expressively complete for bisimulation equivalence. A main technical novelty is the use of finitely supported infinite conjunctions. We show how to treat different bisimulation variants such as early, late, open and weak in a systematic way, explore the folklore theorem that state predicates can be replaced by actions, and make substantial comparisons with related work. The main definitions and theorems have been formalised in Nominal Isabelle.

7. Solvability = Typability + Inhabitation

Antonio Bucciarelli ; Delia Kesner ; Simona Ronchi Della Rocca.
We extend the classical notion of solvability to a lambda-calculus equipped with pattern matching. We prove that solvability can be characterized by means of typability and inhabitation in an intersection type system P based on non-idempotent types. We first show that the system P characterizes the set of terms having canonical form, i.e., that a term is typable if and only if it reduces to a canonical form. However, the set of solvable terms is properly contained in the set of terms having canonical form. Thus, typability alone is not sufficient to characterize solvability, in contrast to the case of the ordinary lambda-calculus. We then prove that typability, together with inhabitation, provides a full characterization of solvability, in the sense that a term is solvable if and only if it is typable and the types of all its arguments are inhabited. We complete the picture by providing an algorithm for the inhabitation problem of P.
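For orientation, the classical notion being extended (not restated in the abstract) can be phrased as follows: a term $M$ with free variables $x_1,\dots,x_n$ is solvable exactly when some head context sends it to the identity,

$$\exists N_1,\dots,N_m.\ (\lambda x_1 \dots x_n.\, M)\, N_1 \cdots N_m =_\beta \lambda x.\,x.$$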

8. Correct and Efficient Antichain Algorithms for Refinement Checking

Maurice Laveaux ; Jan Friso Groote ; Tim A. C. Willemse.
The notion of refinement plays an important role in software engineering. It is the basis of a stepwise development methodology in which the correctness of a system can be established by proving, or computing, that a system refines its specification. Wang et al. describe algorithms based on antichains for efficiently deciding trace refinement, stable failures refinement and failures-divergences refinement. We identify several issues pertaining to the soundness and performance of these algorithms and propose new, correct, antichain-based algorithms. Using a number of experiments, we show that our algorithms outperform the original ones in terms of running time and memory usage. Furthermore, we show that additional run-time improvements can be obtained by applying divergence-preserving branching bisimulation minimisation.
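To illustrate the generic antichain idea that such algorithms build on (a minimal sketch, not the authors' algorithms: the dictionary-based LTS encoding, the function name refines_traces, and the initial-state parameters are assumptions made here for illustration), the following Python fragment checks trace inclusion with an on-the-fly subset construction and prunes any exploration pair that is dominated by one already visited:

    from collections import deque

    def refines_traces(impl, impl_init, spec, spec_init):
        """Naive trace-refinement check: every trace of impl is a trace of spec.

        impl and spec map a state to {label: set of successor states}.
        Exploration works on pairs (impl state, frozenset of spec states);
        a pair is pruned when the same impl state was already visited with a
        subset of spec states, so only an antichain of minimal pairs is kept.
        """
        antichain = set()
        work = deque([(impl_init, frozenset([spec_init]))])
        while work:
            s, specs = work.popleft()
            # skip pairs dominated by an already-visited minimal pair
            if any(t == s and other <= specs for (t, other) in antichain):
                continue
            # drop pairs that the new one dominates, then record it
            antichain = {(t, o) for (t, o) in antichain
                         if not (t == s and specs <= o)}
            antichain.add((s, specs))
            for label, succs in impl.get(s, {}).items():
                spec_succs = frozenset(q for p in specs
                                       for q in spec.get(p, {}).get(label, ()))
                if not spec_succs:   # impl offers a trace spec cannot match
                    return False
                for s2 in succs:
                    work.append((s2, spec_succs))
        return True

The pruning is sound because a pair with a smaller set of specification states can reach a violation whenever a pair it dominates can, so only the minimal pairs need to be explored.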

9. Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

Giorgio Bacci ; Giovanni Bacci ; Kim G. Larsen ; Radu Mardare ; Qiyi Tang ; Franck van Breugel.
The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms for computing these distances, with complexity in $\textbf{UP} \cap \textbf{coUP}$ and $\textbf{PPAD}$, have already been proposed in the literature. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary $\omega$-regular properties, expressed, e.g., as LTL formulas.
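For readers unfamiliar with the coupling-style view of the Hausdorff distance alluded to above, one standard presentation (stated here for orientation; that this is precisely the form used in the paper is an assumption) is

$$d_H(A,B) = \inf_{R \in \mathcal{R}(A,B)}\ \sup_{(a,b) \in R} d(a,b),$$

where $\mathcal{R}(A,B)$ is the set of correspondences, i.e., relations $R \subseteq A \times B$ whose projections cover both $A$ and $B$.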

10. Determinacy in Discrete-Bidding Infinite-Duration Games

Milad Aghajohari ; Guy Avni ; Thomas A. Henzinger.
In two-player games on graphs, the players move a token through a graph to produce an infinite path, which determines the winner of the game. Such games are central in formal methods since they model the interaction between a non-terminating system and its environment. In bidding games the players bid for the right to move the token: in each round, the players simultaneously submit bids, and the higher bidder moves the token and pays the other player. Bidding games are known to have a clean and elegant mathematical structure that relies on the ability of the players to submit arbitrarily small bids. Many applications, however, require a fixed granularity for the bids, which can represent, for example, the monetary value expressed in cents. We study, for the first time, the combination of discrete-bidding and infinite-duration games. Our most important result proves that these games form a large determined subclass of concurrent games, where determinacy is the strong property that there always exists exactly one player who can guarantee winning the game. In particular, we show that, in contrast to non-discrete bidding games, the mechanism by which tied bids are resolved plays an important role in discrete-bidding games. We study several natural tie-breaking mechanisms and show that, while some do not admit determinacy, most natural mechanisms imply determinacy for every pair of initial budgets.

11. Axiomatization of betweenness in order-theoretic trees

Bruno Courcelle.
The ternary betweenness relation of a tree, B(x,y,z), expresses that y is on the unique path between x and z. This notion can be extended to order-theoretic trees, defined as partial orders such that the set of nodes larger than any node is linearly ordered. In such generalized trees, the unique "path" between two nodes can have infinitely many nodes. We generalize some results obtained in a previous article for the betweenness of join-trees. Join-trees are order-theoretic trees such that any two nodes have a least upper bound. The motivation was to define conveniently the rank-width of a countable graph. In that article we called the structure based on the betweenness relation of a join-tree a quasi-tree, and we proved that quasi-trees are axiomatized by a first-order sentence. Here, we obtain a monadic second-order axiomatization of betweenness in order-theoretic trees. We also define and compare several induced betweenness relations, i.e., restrictions to sets of nodes of the betweenness relations in generalized trees of different kinds. We prove that induced betweenness in quasi-trees is characterized by a first-order sentence. The proof uses order-theoretic trees.

12. Robustness Against Transactional Causal Consistency

Sidi Mohamed Beillahi ; Ahmed Bouajjani ; Constantin Enea.
Distributed storage systems and databases are widely used by various types of applications. Transactional access to these storage systems is an important abstraction allowing application programmers to consider blocks of actions (i.e., transactions) as executing atomically. For performance reasons, the consistency models implemented by modern databases are weaker than the standard serializability model, which corresponds to the atomicity abstraction of transactions executing over a sequentially consistent memory. Causal consistency for instance is one such model that is widely used in practice. In this paper, we investigate application-specific relationships between several variations of causal consistency and we address the issue of verifying automatically if a given transactional program is robust against causal consistency, i.e., all its behaviors when executed over an arbitrary causally consistent database are serializable. We show that programs without write-write races have the same set of behaviors under all these variations, and we show that checking robustness is polynomial time reducible to a state reachability problem in transactional programs over a sequentially consistent shared memory. A surprising corollary of the latter result is that causal consistency variations which admit incomparable sets of behaviors admit comparable sets of robust programs. This reduction also opens the door to leveraging existing methods and tools for the verification of concurrent […]

13. Convolution Algebras: Relational Convolution, Generalised Modalities and Incidence Algebras

Brijesh Dongol ; Ian J. Hayes ; Georg Struth.
Convolution is a ubiquitous operation in mathematics and computing. The Kripke semantics for substructural and interval logics motivates its study for quantale-valued functions relative to ternary relations. The resulting notion of relational convolution leads to generalised binary and unary modal operators for qualitative and quantitative models, and to more conventional variants, when ternary relations arise from identities over partial semigroups. Convolution-based semantics for fragments of categorial, linear and incidence (segment or interval) logics are provided as qualitative applications. Quantitative examples include algebras of durations and mean values in the duration calculus.
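As a concrete reference point (a standard formulation, included for orientation rather than quoted from the paper), the convolution of quantale-valued functions $f,g : X \to Q$ relative to a ternary relation $R \subseteq X \times X \times X$ takes the form

$$(f \ast g)(x) = \bigvee \{\, f(y) \cdot g(z) \mid R\, x\, y\, z \,\},$$

which specialises to the familiar convolution over a partial semigroup when $R\, x\, y\, z$ holds exactly if $x = y \cdot z$ is defined.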

14. The $\pi$-Calculus is Behaviourally Complete and Orbit-Finitely Executable

Bas Luttik ; Fei Yang.
Reactive Turing machines extend classical Turing machines with a facility to model observable interactive behaviour. We call a behaviour (finitely) executable if, and only if, it is equivalent to the behaviour of a (finite) reactive Turing machine. In this paper, we study the relationship between executable behaviour and behaviour that can be specified in the $\pi$-calculus. We establish that every finitely executable behaviour can be specified in the $\pi$-calculus up to divergence-preserving branching bisimilarity. The converse, however, is not true due to (intended) limitations of the model of reactive Turing machines. That is, the $\pi$-calculus allows the specification of behaviour that is not finitely executable up to divergence-preserving branching bisimilarity. We shall prove, however, that if the finiteness requirement on reactive Turing machines and the associated notion of executability is relaxed to orbit-finiteness, then the $\pi$-calculus is executable up to (divergence-insensitive) branching bisimilarity.

15. Output-sensitive Information flow analysis

Cristian Ene ; Laurent Mounier ; Marie-Laure Potet.
Constant-time programming is a countermeasure against cache-based attacks: programs should not perform memory accesses that depend on secrets. In some cases this policy can be safely relaxed if one can prove that the program does not leak more information than the public outputs of the computation. We propose a novel approach for verifying constant-time programming based on a new information flow property, called output-sensitive noninterference. Noninterference states that a public observer cannot learn anything about the private data. Since real systems need to intentionally declassify some information, this property is too strong in practice. In order to take public outputs into account we proceed as follows: instead of using complex explicit declassification policies, we partition variables into three sets: input, output and leakage variables. We then propose a typing system to statically check that leakage variables do not leak more information about the secret inputs than the public normal output. The novelty of our approach is that we track the dependence of leakage variables not only with respect to the initial values of input variables (as in classical approaches for noninterference), but also with respect to the final values of output variables. We adapted this approach to LLVM IR and developed a prototype to verify LLVM implementations.
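To make the constant-time policy concrete (an illustrative sketch only, not taken from the paper; the names table and secret_index are hypothetical, and Python of course gives no real timing guarantees), the first function below performs a secret-dependent memory access, while the second touches every entry so that its access pattern is independent of the secret:

    def lookup_leaky(table, secret_index):
        # The address that is accessed depends on the secret: not constant-time.
        return table[secret_index]

    def lookup_constant_time(table, secret_index):
        # Scan the whole table and select the wanted entry with a mask, so the
        # sequence of memory accesses does not depend on the secret index.
        result = 0
        for i, value in enumerate(table):
            mask = -int(i == secret_index)   # -1 (all ones) if selected, else 0
            result |= value & mask
        return result

Output-sensitive noninterference additionally tolerates whatever leakage is already determined by the declared public outputs of the computation.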

16. A Formal Proof of the Irrationality of $\zeta(3)$

Assia Mahboubi ; Thomas Sibut-Pinote.
This paper presents a complete formal verification of a proof that the evaluation of the Riemann zeta function at 3 is irrational, using the Coq proof assistant. This result was first presented by Apéry in 1978, and the proof we have formalized essentially follows the path of his original presentation. The crux of this proof is to establish that some sequences satisfy a common recurrence. We formally prove this result by an a posteriori verification of calculations performed by computer algebra algorithms in a Maple session. The rest of the proof combines arithmetical ingredients and asymptotic analysis, which we conduct by extending the Mathematical Components libraries.
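For context, and assuming the common recurrence in question is the classical one from Apéry's argument, both sequences satisfy

$$n^3 u_n = (34n^3 - 51n^2 + 27n - 5)\, u_{n-1} - (n-1)^3\, u_{n-2}, \qquad n \ge 2,$$

one with integer and one with rational initial values; the irrationality of $\zeta(3)$ then follows from the speed at which their quotient converges to it, combined with estimates on the denominators involved.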

17. Datatype defining rewrite systems for naturals and integers

Jan A. Bergstra ; Alban Ponse.
A datatype defining rewrite system (DDRS) is an algebraic (equational) specification intended to specify a datatype. When interpreting the equations from left to right, a DDRS defines a term rewriting system that must be ground-complete. First we define two DDRSs for the ring of integers, each comprising twelve rewrite rules, and prove their ground-completeness. Then we introduce natural number and integer arithmetic specified according to the unary view, that is, arithmetic based on a postfix unary append constructor (a form of tallying); a small illustration follows this abstract. Next we specify arithmetic based on two other views: binary and decimal notation. The binary and decimal views have as their characteristic that each normal form resembles common number notation, that is, either a digit, or a string of digits without a leading zero, or the negated versions of the latter. Integer arithmetic in binary and decimal notation is based on (postfix) digit append functions. For each view we define a DDRS, and in each case the resulting datatype is a canonical term algebra that extends a corresponding canonical term algebra for natural numbers. Then, for each view, we consider an alternative DDRS based on tree constructors that yields comparable normal forms, but that admits expressions that are algorithmically more involved. For all DDRSs considered, ground-completeness is proven.
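As a small illustration of the unary view (written here with an ordinary prefix successor $S$ rather than the paper's postfix append constructor, so the exact rules are a simplification chosen for readability), a ground-complete specification of addition and multiplication on the naturals reads, interpreting the equations left-to-right:

$$x + 0 \to x, \qquad x + S(y) \to S(x + y), \qquad x \cdot 0 \to 0, \qquad x \cdot S(y) \to (x \cdot y) + x.$$

Every closed term over $0$, $S$, $+$ and $\cdot$ then rewrites to a unique numeral $S^n(0)$.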

18. Complete Call-by-Value Calculi of Control Operators II: Strong Termination

Ryu Hasegawa.
We provide a characterization of the strong termination property of the CCV (complete call-by-value) lambda-mu calculus introduced in the first part of this series of papers. The calculus is complete with respect to the standard continuation-passing style (CPS) semantics. The union-intersection type system for the calculus was developed in the previous paper. We characterize the strong normalizability of terms of the calculus in terms of the CPS semantics and typeability.

19. Constructive Domains with Classical Witnesses

Dirk Pattinson ; Mina Mohammadian.
We develop a constructive theory of continuous domains from the perspective of program extraction. Our goal, namely that programs represent (provably correct) computation without witnesses of correctness, is achieved by formulating correctness assertions classically. Technically, we start from a predomain base and construct a completion. We then investigate continuity with respect to the Scott topology, and present a construction of the function space. Finally, we discuss our main motivating example in detail: we instantiate our theory to the real numbers, conceptualised as the total elements of the completion of the predomain of rational intervals, and prove a representation theorem that precisely delineates the class of representable continuous functions.

20. A Sound Algorithm for Asynchronous Session Subtyping and its Implementation

Mario Bravetti ; Marco Carbone ; Julien Lange ; Nobuko Yoshida ; Gianluigi Zavattaro.
Session types, types for structuring communication between endpoints in distributed systems, are currently being integrated into mainstream programming languages. In practice, a very important notion for dealing with such types is that of subtyping, since it allows for typing larger classes of systems, in which a program need not have precisely the expected behaviour but only a similar one. Unfortunately, recent work has shown that subtyping for session types in an asynchronous setting is undecidable. To cope with this negative result, the only approaches we are aware of either restrict the syntax of session types or limit communication (by considering forms of bounded asynchrony). Both approaches are too restrictive in practice, hence we proceed differently by presenting an algorithm for checking subtyping which is sound, but not complete (in some cases it terminates without returning a decisive verdict). The algorithm is based on a tree representation of the coinductive definition of asynchronous subtyping; this tree can be infinite, and the algorithm checks for the presence of finite witnesses of infinite successful subtrees. Furthermore, we provide a tool that implements our algorithm. We use this tool to test our algorithm on many examples that cannot be handled by the previous approaches, and to provide an empirical evaluation of the time and space cost of the algorithm.

21. Stubborn Set Reduction for Two-Player Reachability Games

Frederik Meyer Bønneland ; Peter Gjøl Jensen ; Kim Guldstrand Larsen ; Marco Muñiz ; Jiří Srba.
Partial order reductions have been successfully applied to model checking of concurrent systems, and practical applications of the technique show a nontrivial reduction in the size of the explored state space. We present a theory of partial order reduction based on stubborn sets in the game-theoretical setting of two-player games with reachability objectives. Our stubborn reduction allows us to prune the interleaving behaviour of both players in the game, and we formally prove its correctness on the class of games played on general labelled transition systems. We then instantiate the framework to the class of weighted Petri net games with inhibitor arcs and provide its efficient implementation in the model checker TAPAAL. Finally, we evaluate our stubborn reduction on several case studies and demonstrate its efficiency.

22. Synthesis of Data Word Transducers

Léo Exibard ; Emmanuel Filiot ; Pierre-Alain Reynier.
In reactive synthesis, the goal is to automatically generate an implementation from a specification of the reactive and non-terminating input/output behaviours of a system. Specifications are usually modelled as logical formulae or automata over infinite sequences of signals ($\omega$-words), while implementations are represented as transducers. In the classical setting, the set of signals is assumed to be finite. In this paper, we consider data $\omega$-words instead, i.e., words over an infinite alphabet. In this context, we study specifications and implementations respectively given as automata and transducers extended with a finite set of registers. We consider different instances, depending on whether the specification is nondeterministic, universal or deterministic, and depending on whether the number of registers of the implementation is given or not. In the unbounded setting, we show undecidability for both universal and nondeterministic specifications, while decidability is recovered in the deterministic case. In the bounded setting, undecidability still holds for nondeterministic specifications, but can be recovered by disallowing tests over input data. The generic technique we use to show the latter result allows us to reprove a known result, namely the decidability of bounded synthesis for universal specifications.

23. Reconfiguration and Message Losses in Parameterized Broadcast Networks

Nathalie Bertrand ; Patricia Bouyer ; Anirban Majumdar.
Broadcast networks allow one to model networks of identical nodes communicating through message broadcasts. Their parameterized verification aims at proving that a property holds for any number of nodes, under any communication topology, and on all possible executions. We focus on the coverability problem, which dually asks whether there exists an execution that visits a configuration exhibiting some given state of the broadcast protocol. Coverability is known to be undecidable for static networks, i.e., when the number of nodes and the communication topology are fixed along executions. In contrast, it is decidable in PTIME when the communication topology may change arbitrarily along executions, that is, for reconfigurable networks. Surprisingly, no lower or upper bounds on the minimal number of nodes, or on the minimal length of a covering execution in reconfigurable networks, appear in the literature. In this paper we show tight bounds for the cutoff and the length, which happen to be linear and quadratic, respectively, in the number of states of the protocol. We also introduce an intermediary model with static communication topology and non-deterministic message losses upon sending. We show that the same tight bounds apply to lossy networks, although reconfigurable executions may be linearly more succinct than lossy executions. Finally, we show NP-completeness for the natural optimisation problem associated with the cutoff.