Volume 14, Issue 4

2018


1. The Complexity of All-switches Strategy Improvement

John Fearnley ; Rahul Savani.
Strategy improvement is a widely-used and well-studied class of algorithms for solving graph-based infinite games. These algorithms are parameterized by a switching rule, and one of the most natural rules is "all switches" which switches as many edges as possible in each iteration. Continuing a recent line of work, we study all-switches strategy improvement from the perspective of computational complexity. We consider two natural decision problems, both of which have as input a game $G$, a starting strategy $s$, and an edge $e$. The problems are: 1.) The edge switch problem, namely, is the edge $e$ ever switched by all-switches strategy improvement when it is started from $s$ on game $G$? 2.) The optimal strategy problem, namely, is the edge $e$ used in the final strategy that is found by strategy improvement when it is started from $s$ on game $G$? We show $\mathtt{PSPACE}$-completeness of the edge switch problem and optimal strategy problem for the following settings: Parity games with the discrete strategy improvement algorithm of Vöge and Jurdziński; mean-payoff games with the gain-bias algorithm [14,37]; and discounted-payoff games and simple stochastic games with their standard strategy improvement algorithms. We also show $\mathtt{PSPACE}$-completeness of an analogous problem to edge switch for the bottom-antipodal algorithm for finding the sink of an Acyclic Unique Sink Orientation on a cube.
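
To make the switching rule concrete, here is a minimal sketch, not taken from the paper, of all-switches strategy improvement on a one-player discounted-payoff game; the two-player and stochastic settings studied above require genuine game-theoretic valuations. The graph, rewards, discount factor, and all names are illustrative assumptions.

# Minimal sketch (not the paper's two-player algorithms): all-switches strategy
# improvement on a one-player discounted-payoff game.
LAMBDA = 0.9  # discount factor

# edges[u] = list of (successor, reward); every vertex has an outgoing edge and,
# for this toy encoding, no two parallel edges to the same successor
edges = {
    0: [(1, 0.0), (2, 4.0)],
    1: [(0, 1.0), (2, 0.0)],
    2: [(2, 1.0), (0, 0.0)],
}

def valuation(strategy):
    # value of every vertex under a fixed positional strategy:
    # v(u) = r(u, s(u)) + LAMBDA * v(s(u)), approximated by value iteration
    v = {u: 0.0 for u in edges}
    for _ in range(500):
        v = {u: dict(edges[u])[strategy[u]] + LAMBDA * v[strategy[u]] for u in edges}
    return v

def all_switches(strategy):
    # switch every vertex to the successor that looks best under the current valuation
    v = valuation(strategy)
    return {u: max(edges[u], key=lambda e: e[1] + LAMBDA * v[e[0]])[0] for u in edges}

s = {u: edges[u][0][0] for u in edges}   # arbitrary initial strategy
for _ in range(100):                     # safety bound; this instance converges in 3 steps
    improved = all_switches(s)
    if improved == s:
        break
    s = improved

print("final strategy:", s)
print("values:", valuation(s))

In this one-player setting the iteration reaches the optimal strategy; the decision problems above ask, for the two-player algorithms, whether a given edge is ever switched or ends up in the final strategy.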

2. One-way definability of two-way word transducers

Félix Baschenis ; Olivier Gauwin ; Anca Muscholl ; Gabriele Puppis.
Functional transductions realized by two-way transducers (or, equivalently, by streaming transducers or MSO transductions) are the natural and standard notion of "regular" mappings from words to words. It was shown in 2013 that it is decidable whether such a transduction can be implemented by some one-way transducer, but the given algorithm has non-elementary complexity. We provide an algorithm of a different flavor that solves the above question in doubly exponential space. In the special case of sweeping transducers the complexity is one exponential less. We also show how to construct an equivalent one-way transducer, whenever it exists, in doubly or triply exponential time, again depending on whether the input transducer is sweeping or two-way. In the sweeping case our construction is shown to be optimal.

3. Extension by Conservation. Sikorski's Theorem

Davide Rinaldi ; Daniel Wessel.
Constructive meaning is given to the assertion that every finite Boolean algebra is an injective object in the category of distributive lattices. To this end, we employ Scott's notion of entailment relation, in which context we describe Sikorski's extension theorem for finite Boolean algebras and turn it into a syntactical conservation result. As a by-product, we can facilitate proofs of several related classical principles.

4. Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?

Nadia Creignou ; Reinhard Pichler ; Stefan Woltran.
Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when restricting the structure of the formulas in a certain way, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem which asks, given a propositional formula $\phi$ and an atom $x$, whether $x$ is true in some cardinality-minimal model of $\phi$. This problem is easy for the Horn fragment, but, as we will show in this paper, remains $\Theta_2^p$-complete (and thus $\mathrm{NP}$-hard) for the Krom fragment (which is given by formulas in CNF where clauses have at most two literals). We will make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction and show that, while in some cases the restriction to Krom formulas leads to a decrease of complexity, in others it does not. We thus also consider the CardMinSat problem with respect to additional restrictions on Krom formulas towards a better understanding of the tractability frontier of such problems.
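
Purely for illustration (this is not the paper's complexity-theoretic analysis), CardMinSat can be decided by brute force on tiny instances by enumerating models in order of increasing cardinality; the clause encoding and the example formula below are assumptions.

# Hedged illustration (not the paper's algorithm): brute-force CardMinSat for
# tiny CNF formulas. A clause is a list of literals; a positive integer i
# stands for the atom x_i and -i for its negation.
from itertools import combinations

def satisfies(clauses, assignment):
    return all(any((lit > 0) == (abs(lit) in assignment) for lit in clause)
               for clause in clauses)

def card_min_sat(clauses, atoms, x):
    # Is atom x true in some cardinality-minimal model of the formula?
    for size in range(len(atoms) + 1):
        models = [set(true_atoms)
                  for true_atoms in combinations(sorted(atoms), size)
                  if satisfies(clauses, set(true_atoms))]
        if models:                      # 'size' is the minimal cardinality
            return any(x in m for m in models)
    return None                         # formula is unsatisfiable

# Krom (2-CNF) example: (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]
print(card_min_sat(clauses, atoms={1, 2, 3}, x=2))   # True: {x2} is a minimal model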

5. Game Characterization of Probabilistic Bisimilarity, and Applications to Pushdown Automata

Vojtěch Forejt ; Petr Jančar ; Stefan Kiefer ; James Worrell.
We study the bisimilarity problem for probabilistic pushdown automata (pPDA) and subclasses thereof. Our definition of pPDA allows both probabilistic and non-deterministic branching, generalising the classical notion of pushdown automata (without epsilon-transitions). We first show a general characterization of probabilistic bisimilarity in terms of two-player games, which naturally reduces checking bisimilarity of probabilistic labelled transition systems to checking bisimilarity of standard (non-deterministic) labelled transition systems. This reduction can be easily implemented in the framework of pPDA, allowing us to use known results for standard (non-probabilistic) PDA and their subclasses. A direct use of the reduction incurs an exponential increase of complexity, which does not matter in deriving decidability of bisimilarity for pPDA due to the non-elementary complexity of the problem. In the cases of probabilistic one-counter automata (pOCA), of probabilistic visibly pushdown automata (pvPDA), and of probabilistic basic process algebras (i.e., single-state pPDA) we show that an implicit use of the reduction can avoid the complexity increase; we thus get PSPACE, EXPTIME, and 2-EXPTIME upper bounds, respectively, like for the respective non-probabilistic versions. The bisimilarity problems for OCA and vPDA are known to have matching lower bounds (thus being PSPACE-complete and EXPTIME-complete, respectively); we show that these lower bounds also hold for fully […]

6. Codensity Lifting of Monads and its Dual

Shin-ya Katsumata ; Tetsuya Sato ; Tarmo Uustalu.
We introduce a method to lift monads on the base category of a fibration to its total category. This method, which we call codensity lifting, is applicable to various fibrations which were not supported by its precursor, categorical ⊤⊤-lifting. After introducing the codensity lifting, we illustrate some examples of codensity liftings of monads along the fibrations from the category of preorders, topological spaces and extended pseudometric spaces to the category of sets, and also the fibration from the category of binary relations between measurable spaces. We also introduce the dual method called density lifting of comonads. We next study the liftings of algebraic operations to the codensity liftings of monads. We also give a characterisation of the class of liftings of monads along posetal fibrations with fibred small meets as a limit of a certain large diagram.

7. Separating regular languages with two quantifier alternations

Thomas Place.
We investigate a famous decision problem in automata theory: separation. Given a class of languages C, the separation problem for C takes as input two regular languages and asks whether there exists a third one which belongs to C, includes the first one and is disjoint from the second. Typically, obtaining an algorithm for separation yields a deep understanding of the investigated class C. This explains why a lot of effort has been devoted to finding algorithms for the most prominent classes. Here, we are interested in classes within concatenation hierarchies. Such hierarchies are built using a generic construction process: one starts from an initial class called the basis and builds new levels by applying generic operations. The most famous one, the dot-depth hierarchy of Brzozowski and Cohen, classifies the languages definable in first-order logic. Moreover, it was shown by Thomas that it corresponds to the quantifier alternation hierarchy of first-order logic: each level in the dot-depth corresponds to the languages that can be defined with a prescribed number of quantifier blocks. Finding separation algorithms for all levels in this hierarchy is among the most famous open problems in automata theory. Our main theorem is generic: we show that separation is decidable for the level 3/2 of any concatenation hierarchy whose basis is finite. Furthermore, in the special case of the dot-depth, we push this result to the level 5/2. In logical terms, this solves separation for […]

8. Axioms for Modelling Cubical Type Theory in a Topos

Ian Orton ; Andrew M. Pitts.
The homotopical approach to intensional type theory views proofs of equality as paths. We explore what is required of an object $I$ in a topos to give such a path-based model of type theory in which paths are just functions with domain $I$. Cohen, Coquand, Huber and Mörtberg give such a model using a particular category of presheaves. We investigate the extent to which their model construction can be expressed in the internal type theory of any topos and identify a collection of quite weak axioms for this purpose. This clarifies the definition and properties of the notion of uniform Kan filling that lies at the heart of their constructive interpretation of Voevodsky's univalence axiom. (This paper is a revised and expanded version of a paper of the same name that appeared in the proceedings of the 25th EACSL Annual Conference on Computer Science Logic, CSL 2016.)

9. Taylor expansion in linear logic is invertible

Daniel de Carvalho.
Each Multiplicative Exponential Linear Logic (MELL) proof-net can be expanded into a differential net, which is its Taylor expansion. We prove that two different MELL proof-nets have two different Taylor expansions. As a corollary, we prove a completeness result for MELL: We show that the relational model is injective for MELL proof-nets, i.e. the equality between MELL proof-nets in the relational model is exactly axiomatized by cut-elimination.

10. The Complexity of Bisimulation and Simulation on Finite Systems

Moses Ganardi ; Stefan Göller ; Markus Lohrey.
In this paper the computational complexity of the (bi)simulation problem over restricted graph classes is studied. For trees given as pointer structures or terms the (bi)simulation problem is complete for logarithmic space or NC$^1$, respectively. This solves an open problem from Balcázar, Gabarró, and Sántha. Furthermore, if only one of the input graphs is required to be a tree, the bisimulation (simulation) problem is contained in AC$^1$ (LogCFL). In contrast, it is also shown that the simulation problem is P-complete already for graphs of bounded path-width.

11. Subsumption Algorithms for Three-Valued Geometric Resolution

Hans de Nivelle.
In our implementation of geometric resolution, the most costly operation is subsumption testing (or matching): One has to decide for a three-valued, geometric formula, if this formula is false in a given interpretation. The formula contains only atoms with variables, equality, and existential quantifiers. The interpretation contains only atoms with constants. Because the atoms have no term structure, matching for geometric resolution is hard. We translate the matching problem into a generalized constraint satisfaction problem, and discuss several approaches for solving it efficiently, one direct algorithm and two translations to propositional SAT. After that, we study filtering techniques based on local consistency checking. Such filtering techniques can a priori refute a large percentage of generalized constraint satisfaction problems. Finally, we adapt the matching algorithms in such a way that they find solutions that use a minimal subset of the interpretation. The adaptation can be combined with every matching algorithm. The techniques presented in this paper may have applications in constraint solving independent of geometric resolution.
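
As a rough illustration of the matching problem only (not of the algorithms developed in the paper), the sketch below enumerates substitutions of constants for variables and checks whether every query atom lands in the interpretation; equality atoms and existential quantifiers are omitted, and all identifiers are hypothetical.

# Illustrative sketch (not the paper's algorithms): naive matching of a
# conjunction of variable atoms against an interpretation of ground atoms,
# by enumerating substitutions. Variables are written as uppercase strings.
from itertools import product

def matches(query_atoms, interpretation, constants):
    # Is there a substitution of constants for variables that maps every
    # query atom into the interpretation?
    variables = sorted({arg for _, args in query_atoms for arg in args
                        if isinstance(arg, str) and arg.isupper()})
    for values in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        ground = {(p, tuple(theta.get(a, a) for a in args)) for p, args in query_atoms}
        if ground <= interpretation:
            return theta
    return None

# interpretation: ground atoms over constants 0..2; the query uses variables 'X', 'Y'
interpretation = {("edge", (0, 1)), ("edge", (1, 2)), ("red", (1,))}
query = [("edge", ("X", "Y")), ("red", ("Y",))]
print(matches(query, interpretation, constants=[0, 1, 2]))   # {'X': 0, 'Y': 1}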

12. Model Checking Flat Freeze LTL on One-Counter Automata

Antonia Lechner ; Richard Mayr ; Joël Ouaknine ; Amaury Pouly ; James Worrell.
Freeze LTL is a temporal logic with registers that is suitable for specifying properties of data words. In this paper we study the model checking problem for Freeze LTL on one-counter automata. This problem is known to be undecidable in general and PSPACE-complete for the special case of deterministic one-counter automata. Several years ago, Demri and Sangnier investigated the model checking problem for the flat fragment of Freeze LTL on several classes of counter automata and posed the decidability of model checking flat Freeze LTL on one-counter automata as an open problem. In this paper we resolve this problem positively, utilising a known reduction to a reachability problem on one-counter automata with parameterised equality and disequality tests. Our main technical contribution is to show decidability of the latter problem by translation to Presburger arithmetic.

13. Reasoning About Bounds in Weighted Transition Systems

Mikkel Hansen ; Kim Guldstrand Larsen ; Radu Mardare ; Mathias Ruggaard Pedersen.
We propose a way of reasoning about minimal and maximal values of the weights of transitions in a weighted transition system (WTS). This perspective induces a notion of bisimulation that is coarser than the classic bisimulation: it relates states that exhibit transitions to bisimulation classes with weights within the same bounds. We propose a customized modal logic that expresses these numeric bounds for transition weights by means of particular modalities. We prove that our logic is invariant under the proposed notion of bisimulation. We show that the logic enjoys the finite model property and we identify a complete axiomatization for the logic. Last but not least, we use a tableau method to show that the satisfiability problem for the logic is decidable.

14. Inducing syntactic cut-elimination for indexed nested sequents

Revantha Ramanayake.
The key to the proof-theoretic study of a logic is a proof calculus with a subformula property. Many different proof formalisms have been introduced (e.g. sequent, nested sequent, labelled sequent formalisms) in order to provide such calculi for the many logics of interest. The nested sequent formalism was recently generalised to indexed nested sequents in order to yield proof calculi with the subformula property for extensions of the modal logic K by (Lemmon-Scott) Geach axioms. The proofs of completeness and cut-elimination therein were semantic and intricate. Here we show that derivations in the labelled sequent formalism whose sequents are `almost treelike' correspond exactly to indexed nested sequents. This correspondence is exploited to induce syntactic proofs for indexed nested sequent calculi making use of the elegant proofs that exist for the labelled sequent calculi. A larger goal of this work is to demonstrate how specialising existing proof-theoretic transformations alleviates the need for independent proofs in each formalism. Such coercion can also be used to induce new cut-free calculi. We employ this to present the first indexed nested sequent calculi for intermediate logics.

15. Termination in Convex Sets of Distributions

Ana Sokolova ; Harald Woracek.
Convex algebras, also called (semi)convex sets, are at the heart of modelling probabilistic systems including probabilistic automata. Abstractly, they are the Eilenberg-Moore algebras of the finitely supported distribution monad. Concretely, they have been studied for decades within algebra and convex geometry. In this paper we study the problem of extending a convex algebra by a single point. Such extensions enable the modelling of termination in probabilistic systems. We provide a full description of all possible extensions for a particular class of convex algebras: For a fixed convex subset $D$ of a vector space satisfying an additional technical condition, we consider the algebra of convex subsets of $D$. This class contains the convex algebras of convex subsets of distributions, modelling (nondeterministic) probabilistic automata. We also provide a full description of all possible extensions for the class of free convex algebras, modelling fully probabilistic systems. Finally, we show that there is a unique functorial extension, the so-called black-hole extension.

16. A Complete Quantitative Deduction System for the Bisimilarity Distance on Markov Chains

Giorgio Bacci ; Giovanni Bacci ; Kim G. Larsen ; Radu Mardare.
In this paper we propose a complete axiomatization of the bisimilarity distance of Desharnais et al. for the class of finite labelled Markov chains. Our axiomatization is given in the style of a quantitative extension of equational logic recently proposed by Mardare, Panangaden, and Plotkin (LICS 2016) that uses equality relations $t \equiv_\varepsilon s$ indexed by rationals, expressing that `$t$ is approximately equal to $s$ up to an error $\varepsilon$'. Notably, our quantitative deduction system extends in a natural way the equational system for probabilistic bisimilarity given by Stark and Smolka by introducing an axiom for dealing with the Kantorovich distance between probability distributions. The axiomatization is then used to propose a metric extension of a Kleene-style representation theorem for finite labelled Markov chains, that was proposed (in a more general coalgebraic fashion) by Silva et al. (Inf. Comput. 2011).
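
For orientation, the indexed equations live in the general quantitative equational framework of Mardare, Panangaden, and Plotkin, whose core rules are roughly of the following form (paraphrased; the paper's Markov-chain-specific Kantorovich axiom is not reproduced here):

(Refl) $\vdash t \equiv_0 t$
(Symm) $t \equiv_\varepsilon s \vdash s \equiv_\varepsilon t$
(Triang) $t \equiv_\varepsilon s,\ s \equiv_{\varepsilon'} u \vdash t \equiv_{\varepsilon + \varepsilon'} u$
(Max) $t \equiv_\varepsilon s \vdash t \equiv_{\varepsilon'} s$ for every $\varepsilon' \geq \varepsilon$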

17. Affine Sessions

Dimitris Mostrous ; Vasco T. Vasconcelos.
Session types describe the structure of communications implemented by channels. In particular, they prescribe the sequence of communications, whether they are input or output actions, and the type of value exchanged. Crucial to any language with session types is the notion of linearity, which is essential to ensure that channels exhibit the behaviour prescribed by their type without interference in the presence of concurrency. In this work we relax the condition of linearity to that of affinity, by which channels exhibit at most the behaviour prescribed by their types. This more liberal setting allows us to incorporate an elegant error handling mechanism which simplifies and improves related works on exceptions. Moreover, our treatment does not affect the progress properties of the language: sessions never get stuck.

18. Reasoning with Finite Sets and Cardinality Constraints in SMT

Kshitij Bansal ; Clark Barrett ; Andrew Reynolds ; Cesare Tinelli.
We consider the problem of deciding the satisfiability of quantifier-free formulas in the theory of finite sets with cardinality constraints. Sets are a common high-level data structure used in programming; thus, such a theory is useful for modeling program constructs directly. More importantly, sets are a basic construct of mathematics and thus natural to use when formalizing the properties of computational systems. We develop a calculus describing a modular combination of a procedure for reasoning about membership constraints with a procedure for reasoning about cardinality constraints. Cardinality reasoning involves tracking how different sets overlap. For efficiency, we avoid considering Venn regions directly, as done in previous work. Instead, we develop a novel technique wherein potentially overlapping regions are considered incrementally as needed, using a graph to track the interaction among the different regions. The calculus has been designed to facilitate its implementation within SMT solvers based on the DPLL($T$) architecture. Our experimental results demonstrate that the new techniques are competitive with previous techniques and can scale much better on certain classes of problems.
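
To see the blow-up that the incremental technique avoids, here is a toy brute-force procedure (not the paper's calculus) that assigns a size to each of the $2^n$ Venn regions and then checks the cardinality constraints; the bounded region size and the example constraints are assumptions.

# Toy illustration (not the paper's calculus): deciding cardinality constraints
# over set variables by enumerating bounded sizes for all 2^n Venn regions --
# exactly the blow-up that the incremental, graph-based technique avoids.
from itertools import product

def card(region_sizes, pred):
    # cardinality of the set described by pred, a test on membership patterns,
    # e.g. pred = lambda r: r[0] or r[1] for the union of sets 0 and 1
    return sum(size for pattern, size in region_sizes.items() if pred(pattern))

def satisfiable(n_sets, constraints, max_region_size=3):
    patterns = list(product((0, 1), repeat=n_sets))        # the 2^n Venn regions
    for sizes in product(range(max_region_size + 1), repeat=len(patterns)):
        region_sizes = dict(zip(patterns, sizes))
        if all(c(region_sizes) for c in constraints):
            return region_sizes
    return None                                            # no bounded model found

# |A| = 3, |B| = 3 and |A union B| <= 4, so A and B must overlap:
constraints = [
    lambda rs: card(rs, lambda r: r[0]) == 3,
    lambda rs: card(rs, lambda r: r[1]) == 3,
    lambda rs: card(rs, lambda r: r[0] or r[1]) <= 4,
]
print(satisfiable(2, constraints))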

19. Intuitionistic Layered Graph Logic: Semantics and Proof Theory

Simon Docherty ; David Pym.
Models of complex systems are widely used in the physical and social sciences, and the concept of layering, typically building upon graph-theoretic structure, is a common feature. We describe an intuitionistic substructural logic called ILGL that gives an account of layering. The logic is a bunched system, combining the usual intuitionistic connectives with a non-commutative, non-associative conjunction (used to capture layering) and its associated implications. We give soundness and completeness theorems for a labelled tableaux system with respect to a Kripke semantics on graphs. We then give an equivalent relational semantics, itself proven equivalent to an algebraic semantics via a representation theorem. We utilise this result in two ways. First, we prove decidability of the logic by showing the finite embeddability property holds for the algebraic semantics. Second, we prove a Stone-type duality theorem for the logic. By introducing the notions of ILGL hyperdoctrine and indexed layered frame we are able to extend this result to a predicate version of the logic and prove soundness and completeness theorems for an extension of the layered graph semantics. We indicate the utility of predicate ILGL with a resource-labelled bigraph model.

20. Deciding Confluence and Normal Form Properties of Ground Term Rewrite Systems Efficiently

Bertram Felgenhauer.
It is known that the first-order theory of rewriting is decidable for ground term rewrite systems, but the general technique uses tree automata and often takes exponential time. For many properties, including confluence (CR), uniqueness of normal forms with respect to reductions (UNR) and with respect to conversions (UNC), polynomial time decision procedures are known for ground term rewrite systems. However, this is not the case for the normal form property (NFP). In this work, we present a cubic time algorithm for NFP, an almost cubic time algorithm for UNR, and an almost linear time algorithm for UNC, improving previous bounds. We also present a cubic time algorithm for CR.

21. On the algebraic structure of Weihrauch degrees

Vasco Brattka ; Arno Pauly.
We introduce two new operations (compositional products and implication) on Weihrauch degrees, and investigate the overall algebraic structure. The validity of the various distributivity laws is studied and forms the basis for a comparison with similar structures such as residuated lattices and concurrent Kleene algebras. Introducing the notion of an ideal with respect to the compositional product, we can consider suitable quotients of the Weihrauch degrees. We also prove some specific characterizations using the implication. In order to introduce and study compositional products and implications, we introduce and study a function space of multi-valued continuous functions. This space turns out to be particularly well-behaved for effectively traceable spaces that are closely related to admissibly represented spaces.
Section: Computability and logic

22. Local Redundancy in SAT: Generalizations of Blocked Clauses

Benjamin Kiesl ; Martina Seidl ; Hans Tompits ; Armin Biere.
Clause-elimination procedures that simplify formulas in conjunctive normal form play an important role in modern SAT solving. Before or during the actual solving process, such procedures identify and remove clauses that are irrelevant to the solving result. These simplifications usually rely on so-called redundancy properties that characterize cases in which the removal of a clause does not affect the satisfiability status of a formula. One particularly successful redundancy property is that of blocked clauses, because it generalizes several other redundancy properties. To find out whether a clause is blocked---and therefore redundant---one only needs to consider its resolution environment, i.e., the clauses with which it can be resolved. For this reason, we say that the redundancy property of blocked clauses is local. In this paper, we show that there exist local redundancy properties that are even more general than blocked clauses. We present a semantic notion of blocking and prove that it constitutes the most general local redundancy property. We furthermore introduce the syntax-based notions of set-blocking and super-blocking, and show that the latter coincides with our semantic blocking notion. In addition, we show how semantic blocking can be alternatively characterized via Davis and Putnam's rule for eliminating atomic formulas. Finally, we perform a detailed complexity analysis and relate our novel redundancy properties to prominent redundancy properties from the […]
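
For reference, the classical syntactic notion being generalized can be checked directly: a clause $C$ is blocked by a literal $l \in C$ if every resolvent of $C$ on $l$ with a clause containing $\lnot l$ is a tautology. The sketch below is a naive check for tiny CNF formulas, not an implementation of the paper's set-blocking or super-blocking notions; the integer-literal encoding is an assumption.

# Minimal sketch of the classical blocked-clause check that the paper
# generalizes; clauses are frozensets of integer literals (i and -i).
def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def is_blocked(c, formula):
    # Is clause c blocked with respect to the CNF formula (a set of clauses)?
    for lit in c:
        resolvents_ok = all(
            is_tautology((c - {lit}) | (d - {-lit}))
            for d in formula
            if d != c and -lit in d
        )
        if resolvents_ok:
            return True, lit             # blocked on this literal
    return False, None

formula = {frozenset({1, 2}), frozenset({-1, -2}), frozenset({2, 3})}
c = frozenset({1, 2})
print(is_blocked(c, formula))            # (True, 1) or (True, 2): both literals block c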

23. Qualitative and Quantitative Monitoring of Spatio-Temporal Properties with SSTL

L. Nenzi ; L. Bortolussi ; V. Ciancia ; M. Loreti ; M. Massink.
In spatially located, large-scale systems, time and space dynamics interact and drive the behaviour. Examples of such systems can be found in many smart city applications and Cyber-Physical Systems. In this paper we present the Signal Spatio-Temporal Logic (SSTL), a modal logic that can be used to specify spatio-temporal properties of linear time and discrete space models. The logic is equipped with a Boolean and a quantitative semantics for which efficient monitoring algorithms have been developed. As such, it is suitable for real-time verification of both white box and black box complex systems. These algorithms can also be combined with stochastic model checking routines. SSTL combines the until temporal modality with two spatial modalities, one expressing that something is true somewhere nearby and the other capturing the notion of being surrounded by a region that satisfies a given spatio-temporal property. The monitoring algorithms are implemented in an open source Java tool. We illustrate the use of SSTL by analysing the formation of patterns in a Turing Reaction-Diffusion system and spatio-temporal aspects of a large bike-sharing system.
Section: Modal and temporal logics

24. Applicable Mathematics in a Minimal Computational Theory of Sets

Arnon Avron ; Liron Cohen.
In previous papers on this project a general static logical framework for formalizing and mechanizing set theories of different strength was suggested, and the power of some predicatively acceptable theories in that framework was explored. In this work we first improve that framework by enriching it with means for coherently extending by definitions its theories, without destroying its static nature or violating any of the principles on which it is based. Then we turn to investigate within the enriched framework the power of the minimal (predicatively acceptable) theory in it that proves the existence of infinite sets. We show that that theory is a computational theory, in the sense that every element of its minimal transitive model is denoted by some of its closed terms. (That model happens to be the second universe in Jensen's hierarchy.) Then we show that already this minimal theory suffices for developing very large portions (if not all) of scientifically applicable mathematics. This requires treating the collection of real numbers as a proper class, that is: a unary predicate which can be introduced in the theory by the static extension method described in the first part of the paper.