Volume 17, Issue 1


1. Axiomatizing Maximal Progress and Discrete Time

Bravetti, Mario.
Milner's complete proof system for observational congruence relies crucially on the possibility of equating $\tau$-divergent expressions to non-divergent ones by means of the axiom $recX. (\tau.X + E) = recX. \tau. E$. In the presence of a notion of priority, where, e.g., actions of type $\delta$ have lower priority than silent $\tau$ actions, this axiom is no longer sound. Such a form of priority is, however, common in timed process algebra, where, due to the interpretation of $\delta$ as a time delay, it arises naturally from the maximal progress assumption. We present our solution to the long-standing open problem of axiomatizing priority using standard observational congruence: based on an auxiliary operator $pri(E)$ that defines a "priority scope", we provide a complete axiomatization for a basic process algebra with priority and (unguarded) recursion. We also show that, when the setting is extended with the static operators of a discrete-time calculus, an axiomatization that is complete over (a characterization of) finite-state terms can be developed by reusing techniques devised in the context of a cooperation with Prof. Jos Baeten.
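
For illustration (this instance is ours, not necessarily the paper's): under maximal progress the axiom already fails for $E = \delta.b.0$ with $b$ a visible action. In $recX.(\tau.X + \delta.b.0)$ the delay $\delta$ is pre-empted at every unfolding by the enabled $\tau$, so $b$ is never observable, whereas $recX.\tau.\delta.b.0$ can perform $\tau$, let time pass, and then perform $b$; hence the two sides are not observationally congruent once $\delta$ has lower priority than $\tau$.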

2. Tameness in least fixed-point logic and McColm's conjecture

Bhaskar, Siddharth ; Kruckman, Alex.
We investigate four model-theoretic tameness properties in the context of least fixed-point logic over a family of finite structures. We find that each of these properties depends only on the elementary (i.e., first-order) limit theory, and we completely determine the valid entailments among them. In contrast to the context of first-order logic on arbitrary structures, the order property and independence property are equivalent in this setting. McColm conjectured that least fixed-point definability collapses to first-order definability exactly when proficiency fails. McColm's conjecture is known to be false in general. However, we show that McColm's conjecture is true for any family of finite structures whose limit theory is model-theoretically tame.

3. 2-adjoint equivalences in homotopy type theory

Carranza, Daniel ; Chang, Jonathan ; Kapulkin, Chris ; Sandford, Ryan.
We introduce the notion of (half) 2-adjoint equivalences in Homotopy Type Theory and prove their expected properties. We formalized these results in the Lean Theorem Prover.
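
For orientation, the familiar 1-dimensional notion from standard homotopy type theory (recalled here for context, not taken from the paper) is the half adjoint equivalence: a map $f : A \to B$ together with $g : B \to A$, homotopies $\eta : g \circ f \sim \mathrm{id}_A$ and $\epsilon : f \circ g \sim \mathrm{id}_B$, and a coherence $\tau : \prod_{a:A} \mathsf{ap}_f(\eta_a) = \epsilon_{f(a)}$; the notions introduced in the paper live one dimension up from this data.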

4. On the Strong Equivalences for LPMLN Programs

Wang, Bin ; Shen, Jun ; Zhang, Shutao ; Zhang, Zhizheng.
LPMLN is a powerful knowledge representation and reasoning tool that combines the non-monotonic reasoning ability of Answer Set Programming (ASP) and the probabilistic reasoning ability of Markov Logic Networks (MLN). In this paper, we study strong equivalence for LPMLN programs, which is an important tool for program rewriting and theoretical investigations in the field of logic programming. First of all, we present the notion of p-strong equivalence for LPMLN and give a model-theoretic characterization of it. We also investigate the relationships between the p-strong equivalence and other existing notions of strong equivalence for LPMLN. Then, we investigate several properties of the p-strong equivalence from four aspects. Firstly, we study two relaxed notions of the p-strong equivalence motivated by practical scenarios of program rewriting, and present corresponding characterizations for them. Secondly, we analyze the computational complexity of deciding strong equivalences for LPMLN programs. Thirdly, we investigate the relationships between the strong equivalences of LPMLN and those of two extensions of ASP: ASP with weak constraints and ASP with ordered disjunctions. Finally, we investigate LPMLN program simplification via the p-strong equivalence and present some syntactic conditions that decide the p-strong equivalence between a single LPMLN rule and the empty program. The contributions of the paper are as follows. Firstly, all of the […]
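
For orientation, the classical notion for plain ASP programs (recalled here as background; the p-strong equivalence of the paper adapts it to the probabilistic setting of LPMLN) is: $P \equiv_s Q$ if and only if, for every program $R$, the extended programs $P \cup R$ and $Q \cup R$ have the same answer sets.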

5. Definable decompositions for graphs of bounded linear cliquewidth

Bojańczyk, Mikołaj ; Grohe, Martin ; Pilipczuk, Michał.
We prove that for every positive integer k, there exists an MSO_1-transduction that, given a graph of linear cliquewidth at most k, nondeterministically outputs some cliquewidth decomposition of the graph of width bounded by a function of k. A direct corollary of this result is the equivalence of the notions of CMSO_1-definability and recognizability on graphs of bounded linear cliquewidth.

6. Modal Logics for Nominal Transition Systems

Parrow, Joachim ; Borgström, Johannes ; Eriksson, Lars-Henrik ; Gutkovas, Ramūnas Forsberg ; Weber, Tjark.
We define a general notion of transition system where states and action labels can be from arbitrary nominal sets, actions may bind names, and state predicates from an arbitrary logic define properties of states. A Hennessy-Milner logic for these systems is introduced, and proved adequate and expressively complete for bisimulation equivalence. A main technical novelty is the use of finitely supported infinite conjunctions. We show how to treat different bisimulation variants such as early, late, open and weak in a systematic way, explore the folklore theorem that state predicates can be replaced by actions, and make substantial comparisons with related work. The main definitions and theorems have been formalised in Nominal Isabelle.
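
A rough sketch of the shape of such a logic (our paraphrase, not the paper's exact grammar): formulas are generated by $\varphi ::= \bigwedge_{i \in I} \varphi_i \mid \neg\varphi \mid \langle\alpha\rangle\varphi \mid P$, where $P$ ranges over state predicates, $\alpha$ over (possibly name-binding) actions, and the index set $I$ of a conjunction may be infinite provided the family $(\varphi_i)_{i \in I}$ has finite support.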

7. Solvability = Typability + Inhabitation

Bucciarelli, Antonio ; Kesner, Delia ; Della Rocca, Simona Ronchi.
We extend the classical notion of solvability to a lambda-calculus equipped with pattern matching. We prove that solvability can be characterized by means of typability and inhabitation in an intersection type system P based on non-idempotent types. We first show that the system P characterizes the set of terms having canonical form, i.e. that a term is typable if and only if it reduces to a canonical form. The set of solvable terms, however, is properly contained in the set of terms having canonical form. Thus, typability alone is not sufficient to characterize solvability, in contrast to the case of the ordinary lambda-calculus. We then prove that typability, together with inhabitation, provides a full characterization of solvability, in the sense that a term is solvable if and only if it is typable and the types of all its arguments are inhabited. We complete the picture by providing an algorithm for the inhabitation problem of P.
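
Recalled for context, the classical definition in the ordinary lambda-calculus is: a term $M$ whose free variables are among $x_1,\dots,x_n$ is solvable if there exist terms $N_1,\dots,N_k$ such that $(\lambda x_1 \dots x_n.\, M)\, N_1 \cdots N_k =_{\beta} \lambda x.x$, i.e. $M$ can be made to interact meaningfully with its environment.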

8. Correct and Efficient Antichain Algorithms for Refinement Checking

Laveaux, Maurice ; Groote, Jan Friso ; Willemse, Tim A. C..
The notion of refinement plays an important role in software engineering. It is the basis of a stepwise development methodology in which the correctness of a system can be established by proving, or computing, that the system refines its specification. Wang et al. describe algorithms based on antichains for efficiently deciding trace refinement, stable failures refinement and failures-divergences refinement. We identify several issues pertaining to the soundness and performance of these algorithms and propose new, correct, antichain-based algorithms. Using a number of experiments, we show that our algorithms outperform the original ones in terms of running time and memory usage. Furthermore, we show that additional run-time improvements can be obtained by applying divergence-preserving branching bisimulation minimisation.
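
As background, here is a minimal Python sketch of the classical antichain idea for plain trace inclusion between finite labelled transition systems. It is illustrative only: it is not the (corrected) stable-failures or failures-divergences algorithms of the paper, and all names and data representations are ours.

    from collections import deque

    def trace_refines(impl, spec, impl_init, spec_init):
        """Return True iff traces(impl) are included in traces(spec).
        LTSs are dicts: state -> {action: set of successor states}."""

        def post(lts, states, act):
            nxt = set()
            for q in states:
                nxt |= lts.get(q, {}).get(act, set())
            return frozenset(nxt)

        # Antichain of explored pairs: impl state -> minimal spec-state sets.
        # A pair (s, S) is dominated by (s, T) with T <= S, since a smaller
        # spec set can only make a future violation easier to reach.
        antichain = {}

        def dominated(s, S):
            return any(T <= S for T in antichain.get(s, ()))

        def add(s, S):
            antichain[s] = [T for T in antichain.get(s, ()) if not S <= T] + [S]

        work = deque([(impl_init, frozenset({spec_init}))])
        while work:
            s, S = work.popleft()
            if dominated(s, S):
                continue
            add(s, S)
            for act, targets in impl.get(s, {}).items():
                S2 = post(spec, S, act)
                if not S2:  # impl extends the trace, spec cannot match: violation
                    return False
                for t in targets:
                    work.append((t, S2))
        return True

    # Tiny illustrative check:
    impl = {0: {"a": {1}}, 1: {}}
    spec = {0: {"a": {0}, "b": {0}}}
    assert trace_refines(impl, spec, 0, 0)       # impl's traces are included
    assert not trace_refines(spec, impl, 0, 0)   # "b" cannot be matched the other way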

9. Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

Bacci, Giorgio ; Bacci, Giovanni ; Larsen, Kim G. ; Mardare, Radu ; Tang, Qiyi ; van Breugel, Franck.
The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in $\textbf{UP} \cap \textbf{coUP}$ and \textbf{PPAD}; despite their theoretical relevance, however, they are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary $\omega$-regular properties, expressed, e.g., as LTL formulas.
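
Roughly, and glossing over labels, discounting and corner cases (this paraphrase is ours, not the paper's definition), the distance in question is the least fixed point of an operator of the shape $d(s,t) = \max_a \mathcal{H}_{\mathcal{K}(d)}\big(\mathit{der}(s,a), \mathit{der}(t,a)\big)$, where $\mathit{der}(s,a)$ is the set of probability distributions reachable from $s$ by an $a$-labelled transition, $\mathcal{K}(d)$ is the Kantorovich lifting of $d$ to distributions, and $\mathcal{H}$ is the Hausdorff lifting to sets of distributions; it is this Hausdorff layer to which Mémoli's dual presentation applies.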

10. Determinacy in Discrete-Bidding Infinite-Duration Games

Aghajohari, Milad ; Avni, Guy ; Henzinger, Thomas A..
In two-player games on graphs, the players move a token through a graph to produce an infinite path, which determines the winner of the game. Such games are central in formal methods since they model the interaction between a non-terminating system and its environment. In bidding games the players bid for the right to move the token: in each round, the players simultaneously submit bids, and the higher bidder moves the token and pays the other player. Bidding games are known to have a clean and elegant mathematical structure that relies on the ability of the players to submit arbitrarily small bids. Many applications, however, require a fixed granularity for the bids, which can represent, for example, the monetary value expressed in cents. We study, for the first time, the combination of discrete-bidding and infinite-duration games. Our most important result proves that these games form a large determined subclass of concurrent games, where determinacy is the strong property that there always exists exactly one player who can guarantee winning the game. In particular, we show that, in contrast to non-discrete bidding games, the mechanism with which tied bids are resolved plays an important role in discrete-bidding games. We study several natural tie-breaking mechanisms and show that, while some do not admit determinacy, most natural mechanisms imply determinacy for every pair of initial budgets.

11. Axiomatization of betweenness in order-theoretic trees

Courcelle, Bruno.
The ternary betweenness relation of a tree, B(x,y,z), expresses that y is on the unique path between x and z. This notion can be extended to order-theoretic trees, defined as partial orders such that the set of nodes larger than any given node is linearly ordered. In such generalized trees, the unique "path" between two nodes can have infinitely many nodes. We generalize some results obtained in a previous article for the betweenness of join-trees. Join-trees are order-theoretic trees in which any two nodes have a least upper bound. The motivation was to define conveniently the rank-width of a countable graph. In that article we called quasi-tree the structure based on the betweenness relation of a join-tree, and we proved that quasi-trees are axiomatized by a first-order sentence. Here, we obtain a monadic second-order axiomatization of betweenness in order-theoretic trees. We also define and compare several induced betweenness relations, i.e., restrictions to sets of nodes of the betweenness relations in generalized trees of different kinds. We prove that induced betweenness in quasi-trees is characterized by a first-order sentence. The proof uses order-theoretic trees.

12. Robustness Against Transactional Causal Consistency

Beillahi, Sidi Mohamed ; Bouajjani, Ahmed ; Enea, Constantin.
Distributed storage systems and databases are widely used by various types of applications. Transactional access to these storage systems is an important abstraction, allowing application programmers to consider blocks of actions (i.e., transactions) as executing atomically. For performance reasons, the consistency models implemented by modern databases are weaker than the standard serializability model, which corresponds to the atomicity abstraction of transactions executing over a sequentially consistent memory. Causal consistency, for instance, is one such model that is widely used in practice. In this paper, we investigate application-specific relationships between several variations of causal consistency, and we address the issue of automatically verifying whether a given transactional program is robust against causal consistency, i.e., whether all its behaviors when executed over an arbitrary causally consistent database are serializable. We show that programs without write-write races have the same set of behaviors under all these variations, and we show that checking robustness is polynomial-time reducible to a state reachability problem in transactional programs over a sequentially consistent shared memory. A surprising corollary of the latter result is that causal consistency variations which admit incomparable sets of behaviors admit comparable sets of robust programs. This reduction also opens the door to leveraging existing methods and tools for the verification of concurrent […]
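
A standard illustration of a non-robust program (ours, not taken from the paper) is the lost-update pattern, in which two transactions t1 and t2 each read x and then write back the value read plus one, so that they have a write-write race on x. Under causal consistency both transactions may read the initial value 0 and both write 1, an outcome that no serial execution produces; hence the program is not robust against causal consistency.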

13. Convolution Algebras: Relational Convolution, Generalised Modalities and Incidence Algebras

Dongol, Brijesh ; Hayes, Ian J. ; Struth, Georg.
Convolution is a ubiquitous operation in mathematics and computing. The Kripke semantics for substructural and interval logics motivates its study for quantale-valued functions relative to ternary relations. The resulting notion of relational convolution leads to generalised binary and unary modal operators for qualitative and quantitative models, and to more conventional variants, when ternary relations arise from identities over partial semigroups. Convolution-based semantics for fragments of categorial, linear and incidence (segment or interval) logics are provided as qualitative applications. Quantitative examples include algebras of durations and mean values in the duration calculus.
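
In its simplest instance (our paraphrase, recalled here for orientation), the relational convolution of two quantale-valued functions $f,g : X \to Q$ along a ternary relation $R \subseteq X^3$ is $(f * g)(x) = \bigvee \{\, f(y) \cdot g(z) \mid R(x,y,z) \,\}$; when $R(x,y,z)$ holds exactly if $x = y \cdot z$ in a partial semigroup, this specialises to the familiar convolution of functions.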

14. The $\pi$-Calculus is Behaviourally Complete and Orbit-Finitely Executable

Luttik, Bas ; Yang, Fei.
Reactive Turing machines extend classical Turing machines with a facility to model observable interactive behaviour. We call a behaviour (finitely) executable if, and only if, it is equivalent to the behaviour of a (finite) reactive Turing machine. In this paper, we study the relationship between executable behaviour and behaviour that can be specified in the $\pi$-calculus. We establish that every finitely executable behaviour can be specified in the $\pi$-calculus up to divergence-preserving branching bisimilarity. The converse, however, is not true due to (intended) limitations of the model of reactive Turing machines. That is, the $\pi$-calculus allows the specification of behaviour that is not finitely executable up to divergence-preserving branching bisimilarity. We shall prove, however, that if the finiteness requirement on reactive Turing machines and the associated notion of executability is relaxed to orbit-finiteness, then the $\pi$-calculus is executable up to (divergence-insensitive) branching bisimilarity.

15. Output-sensitive Information flow analysis

Ene, Cristian ; Mounier, Laurent ; Potet, Marie-Laure.
Constant-time programming is a countermeasure against cache-based attacks, requiring that programs do not perform memory accesses that depend on secrets. In some cases this policy can be safely relaxed if one can prove that the program does not leak more information than the public outputs of the computation. We propose a novel approach for verifying constant-time programming based on a new information flow property, called output-sensitive noninterference. Noninterference states that a public observer cannot learn anything about the private data. Since real systems need to intentionally declassify some information, this property is too strong in practice. In order to take public outputs into account, we proceed as follows: instead of using complex explicit declassification policies, we partition variables into three sets: input, output and leakage variables. We then propose a typing system to statically check that leakage variables do not leak more information about the secret inputs than the public normal output. The novelty of our approach is that we track the dependence of leakage variables not only on the initial values of input variables (as in classical approaches to noninterference), but also on the final values of output variables. We adapted this approach to LLVM IR and developed a prototype to verify LLVM implementations.
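
As a toy illustration of the intended relaxation (ours, not from the paper): the control flow of the following function, and hence its timing and memory-access leakage, depends on the secret, but only through information that the public return value reveals anyway, so an output-sensitive policy may accept it while a plain constant-time or noninterference policy rejects it.

    def public_length(secret: bytes) -> int:
        """Leaks the length of the secret through its loop trip count,
        but that length is exactly the public output of the function."""
        n = 0
        for _ in secret:   # number of iterations depends on the secret...
            n += 1
        return n           # ...but only via len(secret), which is returned publicly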

16. A Formal Proof of the Irrationality of $\zeta(3)$

Mahboubi, Assia ; Sibut-Pinote, Thomas.
This paper presents a complete formal verification of a proof that the evaluation of the Riemann zeta function at 3 is irrational, using the Coq proof assistant. This result was first presented by Apéry in 1978, and the proof we have formalized essentially follows the path of his original presentation. The crux of this proof is to establish that some sequences satisfy a common recurrence. We formally prove this result by an a posteriori verification of calculations performed by computer algebra algorithms in a Maple session. The rest of the proof combines arithmetical ingredients and asymptotic analysis, which we conduct by extending the Mathematical Components libraries.
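
The recurrence in question is the classical one found by Apéry (recalled here for context from the standard literature, not quoted from the paper): the sequence $a_n = \sum_{k=0}^{n} \binom{n}{k}^{2}\binom{n+k}{k}^{2}$ and a companion rational sequence $b_n$ both satisfy $n^3 u_n = (34n^3 - 51n^2 + 27n - 5)\, u_{n-1} - (n-1)^3\, u_{n-2}$, and the irrationality of $\zeta(3)$ follows from how quickly $a_n \zeta(3) - b_n$ tends to $0$ relative to the denominators involved.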

17. Datatype defining rewrite systems for naturals and integers

Bergstra, Jan A. ; Ponse, Alban.
A datatype defining rewrite system (DDRS) is an algebraic (equational) specification intended to specify a datatype. When its equations are interpreted from left to right, a DDRS defines a term rewriting system that must be ground-complete. First we define two DDRSs for the ring of integers, each comprising twelve rewrite rules, and prove their ground-completeness. Then we introduce natural number and integer arithmetic specified according to the unary view, that is, arithmetic based on a postfix unary append constructor (a form of tallying). Next we specify arithmetic based on two other views: binary and decimal notation. The binary and decimal views have as their characteristic that each normal form resembles common number notation, that is, it is either a digit, a string of digits without a leading zero, or a negated version of one of these. Integer arithmetic in binary and decimal notation is based on (postfix) digit append functions. For each view we define a DDRS, and in each case the resulting datatype is a canonical term algebra that extends a corresponding canonical term algebra for the natural numbers. Then, for each view, we consider an alternative DDRS based on tree constructors; these DDRSs yield comparable normal forms but admit expressions that are algorithmically more involved. For all DDRSs considered, ground-completeness is proven.
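
To give a flavour of what such a DDRS looks like (an illustrative sketch in the spirit of the unary view, not the paper's exact rule set), natural number arithmetic over the constant 0 and a postfix successor-append S can be specified by rules such as

    x + 0     -> x
    x + (y S) -> (x + y) S
    x * 0     -> 0
    x * (y S) -> (x * y) + x

read as left-to-right rewrite rules; every ground term then rewrites to a unique normal form 0, 0S, 0SS, ..., so the rules are ground-complete and the normal forms are the tally numerals.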