We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and Gödel's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we obtain strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
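To convey the flavour of the definition, the clauses for the logical connectives and quantifiers follow the familiar Nelson-style pattern sketched below (notation ours; the paper's contribution lies in the TCF-specific clauses, in particular for inductively and coinductively defined predicates, which are not shown here):

$$
\begin{aligned}
(A \land B)^{\mathbf{N}} &:= A^{\mathbf{N}} \lor B^{\mathbf{N}}, & (A \lor B)^{\mathbf{N}} &:= A^{\mathbf{N}} \land B^{\mathbf{N}},\\
(A \to B)^{\mathbf{N}} &:= A \land B^{\mathbf{N}}, & (\forall_{x}\, A)^{\mathbf{N}} &:= \exists_{x}\, A^{\mathbf{N}},\\
(\exists_{x}\, A)^{\mathbf{N}} &:= \forall_{x}\, A^{\mathbf{N}}. &&
\end{aligned}
$$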
Safety and liveness stand as fundamental concepts in formal languages, playing a key role in verification. The safety-liveness classification of boolean properties characterizes whether a given property can be falsified by observing a finite prefix of an infinite computation trace (always for safety, never for liveness). Extending this paradigm to the quantitative setting, where properties are arbitrary functions mapping infinite words to partially-ordered domains, we introduce and study the notions of quantitative safety and liveness. First, we formally define quantitative safety and liveness, and prove that our definitions induce conservative quantitative generalizations of both the safety-progress hierarchy and the safety-liveness decomposition of boolean properties. Consequently, like their boolean counterparts, quantitative properties can be min-decomposed into safety and liveness parts, or alternatively, max-decomposed into co-safety and co-liveness parts. We further establish a connection between quantitative safety and topological continuity and provide alternative characterizations of quantitative safety and liveness in terms of their boolean analogs. Second, we instantiate our framework with the specific classes of quantitative properties expressed by automata. These quantitative automata contain finitely many states and rational-valued transition […]
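Concretely, the prefix-falsifiability intuition can be formalized roughly as follows (notation adapted here; the paper's exact definition should be consulted for details): a quantitative property $\Phi : \Sigma^{\omega} \to \mathbb{D}$ is safe iff every wrong hypothesis about a word's value is rejected by some finite prefix,

$$
\forall w \in \Sigma^{\omega}.\ \forall v \in \mathbb{D}.\ \Bigl( \Phi(w) \not\geq v \;\Longrightarrow\; \exists u \prec w.\ \sup_{w' \in \Sigma^{\omega}} \Phi(u w') \not\geq v \Bigr),
$$

while liveness is defined dually in terms of value hypotheses that no finite prefix can reject.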
We introduce and study pawn games, a class of two-player zero-sum turn-based graph games. A turn-based graph game proceeds by placing a token on an initial vertex, and whoever controls the vertex on which the token is located chooses its next location. This leads to a path in the graph, which determines the winner. Traditionally, the control of vertices is predetermined and fixed. The novelty of pawn games is that control of vertices changes dynamically throughout the game as follows. Each vertex of a pawn game is owned by a pawn. In each turn, the pawns are partitioned between the two players, and the player who controls the pawn that owns the vertex on which the token is located chooses the next location of the token. Control of pawns changes dynamically throughout the game according to a fixed mechanism. Specifically, we define several grabbing-based mechanisms in which control of at most one pawn transfers at the end of each turn. We study the complexity of solving pawn games, where we focus on reachability objectives and parameterize the problem by the mechanism being used and by restrictions on pawn ownership of vertices. On the positive side, even though pawn games are exponentially succinct turn-based games, we identify several natural classes that can be solved in PTIME. On the negative side, we identify several EXPTIME-complete classes, where our hardness proofs are based on a new class of games called Lock & Key games, which may be of independent […]
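For orientation, the sketch below shows the classical attractor fixed-point that solves ordinary turn-based reachability games, i.e., the special case in which vertex control is fixed; it is an assumed baseline for this listing, not the paper's algorithm, and it does not model the pawn-grabbing mechanisms.

```python
# Minimal sketch: backward attractor computation for an ordinary turn-based
# reachability game with fixed vertex control.  Assumes every vertex has at
# least one outgoing edge.

def attractor(vertices, edges, owner, targets):
    """Return the set of vertices from which Player 1 can force reaching `targets`.

    vertices: iterable of vertices
    edges:    dict mapping each vertex to its list of successors
    owner:    dict mapping each vertex to 1 or 2 (who chooses the successor there)
    targets:  set of target vertices of Player 1's reachability objective
    """
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            succs = edges[v]
            if owner[v] == 1 and any(s in win for s in succs):
                # Player 1 can move the token into the winning region.
                win.add(v)
                changed = True
            elif owner[v] == 2 and all(s in win for s in succs):
                # Player 2 cannot avoid the winning region.
                win.add(v)
                changed = True
    return win

# Tiny example game: Player 1 wins from v0 iff it can steer the token to v3.
V = ["v0", "v1", "v2", "v3"]
E = {"v0": ["v1", "v2"], "v1": ["v3"], "v2": ["v2"], "v3": ["v3"]}
O = {"v0": 1, "v1": 2, "v2": 2, "v3": 1}
print(attractor(V, E, O, {"v3"}))   # {'v0', 'v1', 'v3'}
```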
In this paper, we investigate the relationship between two elementary operations on derivations in the framework of graph transformation based on adhesive categories: moving a derivation along a derivation based on parallel and sequential independence on the one hand, and restriction of a derivation with respect to a monomorphism into the start object on the other hand. Intuitively, a restriction clips off parts of the start object that are never matched by a rule application throughout the derivation. As the main result, it is shown that moving a derivation preserves its spine, i.e., its minimal restriction.
Session types provide a typing discipline for message-passing systems. However, their theory often assumes an ideal world: one in which everything is reliable and without failures. Yet this is in stark contrast with distributed systems in the real world. To address this limitation, we introduce a new asynchronous multiparty session types (MPST) theory with crash-stop failures, where processes may crash arbitrarily and cease to interact after crashing. We augment asynchronous MPST and processes with crash handling branches, and integrate crash-stop failure semantics into types and processes. Our approach requires no user-level syntax extensions for global types, and features a formalisation of global semantics, which captures complex behaviours induced by crashed and crash-handling processes. Our new theory covers the entire spectrum, ranging from the ideal world of total reliability to entirely unreliable scenarios where any process may crash, using optional reliability assumptions. Under these assumptions, we demonstrate the sound and complete correspondence between global and local type semantics, which guarantees deadlock-freedom, protocol conformance, and liveness of well-typed processes by construction, even in the presence of crashes.
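As a rough illustration (notation assumed here for exposition, not necessarily the paper's exact syntax), a local type in which participant $\mathtt{p}$ expects a message from $\mathtt{q}$ but is also prepared for $\mathtt{q}$ having crashed could carry a dedicated crash-handling branch:

$$
T_{\mathtt{p}} \;=\; \mathtt{q} \,\&\, \bigl\{\ \mathit{data}(\mathsf{int}).\ \mathbf{end},\ \ \mathit{crash}.\ \mathbf{end}\ \bigr\},
$$

where the $\mathit{crash}$ branch is taken when the runtime detects that $\mathtt{q}$ has crashed instead of delivering $\mathit{data}$.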
In this paper we present a quantifier elimination method for conjunctions of linear real arithmetic constraints. Our algorithm is based on the Fourier-Motzkin variable elimination procedure, but by case splitting we are able to reduce the worst-case complexity from doubly to singly exponential. The adaptation of the procedure for SMT solving corresponds closely to the simplex algorithm; therefore, we name it FMplex. Besides the theoretical foundations, we provide an experimental evaluation in the context of SMT solving. This is an extended version of the authors' work previously published at the Fourteenth International Symposium on Games, Automata, Logics, and Formal Verification (GandALF 2023).
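For reference, the sketch below implements one step of the classical Fourier-Motzkin elimination that FMplex refines (plain FM without the case splitting, so it still exhibits the doubly exponential blow-up when iterated); the constraint representation is an assumption made for this example.

```python
from fractions import Fraction

# Minimal sketch of one Fourier-Motzkin elimination step (plain FM, not the
# FMplex case-splitting variant).  A constraint  sum_i c_i * x_i <= b  is
# represented as (coeffs, b), with coeffs a dict from variable name to Fraction.

def eliminate(constraints, var):
    """Eliminate `var` from a conjunction of <=-constraints over the reals."""
    keep, lowers, uppers = [], [], []
    for coeffs, b in constraints:
        c = coeffs.get(var, Fraction(0))
        if c == 0:
            keep.append((coeffs, b))
        elif c > 0:
            uppers.append((coeffs, b, c))   # gives an upper bound on var
        else:
            lowers.append((coeffs, b, c))   # gives a lower bound on var
    # Combine every lower bound with every upper bound; this pairwise product
    # is the source of the doubly exponential worst case of iterated FM.
    for lc, lb, l in lowers:
        for uc, ub, u in uppers:
            coeffs = {}
            for x in set(lc) | set(uc):
                if x == var:
                    continue
                coeffs[x] = lc.get(x, Fraction(0)) / (-l) + uc.get(x, Fraction(0)) / u
            keep.append((coeffs, lb / (-l) + ub / u))
    return keep

# Example: x - y <= 1  and  -x + y <= 1; eliminating x leaves a constraint
# equivalent to 0 <= 2, i.e., the conjunction is satisfiable for every y.
cs = [({"x": Fraction(1), "y": Fraction(-1)}, Fraction(1)),
      ({"x": Fraction(-1), "y": Fraction(1)}, Fraction(1))]
print(eliminate(cs, "x"))
```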
We propose a generic framework for establishing the decidability of a wide range of logical entailment problems (briefly called querying), based on the existence of countermodels that are structurally simple, gauged by certain types of width measures (with treewidth and cliquewidth as popular examples). As an important special case of our framework, we identify logics exhibiting width-finite finitely universal model sets, warranting decidable entailment for a wide range of homomorphism-closed queries, subsuming a diverse set of practically relevant query languages. As a particularly powerful width measure, we propose to employ Blumensath's partitionwidth, which subsumes various other commonly considered width measures and exhibits highly favorable computational and structural properties. Focusing on the formalism of existential rules as a popular showcase, we explain how finite partitionwidth sets of rules subsume other known abstract decidable classes but - leveraging existing notions of stratification - also cover a wide range of new rulesets. We expose natural limitations for fitting the class of finite unification sets into our picture and suggest several options for remedy.
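As a standard illustration of the phenomenon (not an example taken from the paper), consider the existential rule

$$
\forall x\, \bigl( \mathrm{Person}(x) \;\to\; \exists y\, (\mathrm{hasParent}(x,y) \land \mathrm{Person}(y)) \bigr).
$$

Applied to a single fact $\mathrm{Person}(a)$, the chase yields an infinite model $a, y_1, y_2, \ldots$ forming a chain, which is structurally simple (it has treewidth 1); such width-bounded universal models serve as countermodels for non-entailed homomorphism-closed queries, which is the kind of situation the framework exploits to obtain decidability.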
We propose a new step-wise approach to proving observational equivalence, and in particular to reasoning about the fragility of observational equivalence. Our approach is based on what we call local reasoning, which exploits the graphical concept of a neighbourhood and extracts a new, formal concept of robustness as a key sufficient condition for observational equivalence. Moreover, our proof methodology is capable of proving a generalised notion of observational equivalence. The generalised notion can be quantified over syntactically restricted contexts instead of all contexts, and also quantitatively constrained in terms of the number of reduction steps. The operational machinery we use is given by a hypergraph-rewriting abstract machine inspired by Girard's Geometry of Interaction. The behaviour of language features, including function abstraction and application, is provided by hypergraph-rewriting rules. We demonstrate our proof methodology using the call-by-value lambda-calculus equipped with (higher-order) state.
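To recall what observational equivalence quantifies over, the toy sketch below (in Python rather than the paper's hypergraph machine, and with assumed example terms) compares a stateful and a stateless implementation under a handful of sample contexts; testing finitely many contexts only illustrates the quantification, whereas the paper's local-reasoning methodology yields actual proofs over all, or syntactically restricted, contexts.

```python
# Illustration only (not the paper's machinery): observational equivalence asks
# whether two terms behave identically under *every* context.  Below, a stateful
# and a stateless implementation of an "increment" function are indistinguishable
# because the internal reference of the first is never leaked to the context.

def make_inc_stateful():
    cell = {"v": 0}              # higher-order state: a private mutable cell
    def inc(n):
        cell["v"] = n + 1        # the write is unobservable from outside
        return cell["v"]
    return inc

def make_inc_pure():
    return lambda n: n + 1

# A few sample contexts, modelled as functions from a term to an observable.
contexts = [
    lambda inc: inc(41),
    lambda inc: inc(inc(0)),
    lambda inc: [inc(i) for i in range(5)],
]

for C in contexts:
    assert C(make_inc_stateful()) == C(make_inc_pure())
print("indistinguishable on the sampled contexts")
```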