Volume 21, Issue 3

2025


1. A Behavioral Theory for Distributed Systems with Weak Recovery

Giovanni Fabbretti ; Ivan Lanese ; Jean-Bernard Stefani.
Distributed systems can be subject to various kinds of partial failures; building fault-tolerance or failure mitigation mechanisms for distributed systems therefore remains an important domain of research. In this paper, we present a calculus to formally model distributed systems subject to crash failures with recovery. The recovery model considered in the paper is weak, in the sense that it makes no assumption on the exact state in which a failed node resumes its execution; only its identity has to be distinguishable from its past incarnations. Our calculus is inspired in part by the Erlang programming language and in part by the distributed $π$-calculus with nodes and link failures (D$π$F) introduced by Francalanza and Hennessy. In order to reason about distributed systems with failures and recovery, we develop a behavioral theory for our calculus, in the form of a contextual equivalence and of a fully abstract coinductive characterization of this equivalence by means of a labelled transition system semantics and its associated weak bisimilarity. This result is valuable because it provides a compositional proof technique for proving or disproving contextual equivalence between systems.

2. Automata Linear Dynamic Logic on Finite Traces

Kevin W. Smith ; Moshe Y. Vardi.
Temporal logics are widely used by the Formal Methods and AI communities. Linear Temporal Logic (LTL) is a popular temporal logic and is valued for its ease of use as well as its balance between expressiveness and complexity. LTL is equivalent in expressiveness to Monadic First-Order Logic, and satisfiability for LTL is PSPACE-complete. Linear Dynamic Logic (LDL), another temporal logic, is equivalent to Monadic Second-Order Logic, but its method of satisfiability checking cannot be applied to a nontrivial subset of LDL formulas. Here we introduce Automata Linear Dynamic Logic on Finite Traces (ALDL$_f$) and show that satisfiability for ALDL$_f$ formulas is in PSPACE. A variant of Linear Dynamic Logic on Finite Traces (LDL$_f$), ALDL$_f$ combines propositional logic with nondeterministic finite automata (NFA) to express temporal constraints. ALDL$_f$ is equivalent in expressiveness to Monadic Second-Order Logic. This is a gain in expressiveness over LTL at no cost in complexity.

3. The Big-O Problem for Max-Plus Automata is Decidable (PSPACE-Complete)

Laure Daviaud ; David Purser ; Marie Tcheng.
We show that the big-O problem for max-plus automata is decidable and PSPACE-complete. The big-O (or affine domination) problem asks whether, given two max-plus automata computing functions $f$ and $g$, there exists a constant $c$ such that $f \leq cg + c$. This is a relaxation of the containment problem, which asks whether $f \leq g$ and is undecidable. Our decidability result uses Simon's forest factorisation theorem and relies on detecting specific elements, which we call witnesses, in a finite semigroup closed under two special operations: stabilisation and flattening.
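
Spelled out pointwise (a restatement of the relation above with $\Sigma$ the input alphabet, made explicit here for convenience, not additional material from the paper), the big-O problem asks whether
\[
  \exists c \in \mathbb{N}.\ \forall w \in \Sigma^*.\ f(w) \leq c \cdot g(w) + c,
\]
whereas containment asks whether $f(w) \leq g(w)$ holds for every $w$.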

4. A Curry-Howard Correspondence for Linear, Reversible Computation

Kostia Chardonnet ; Alexis Saurin ; Benoît Valiron.
In this paper, we present a linear and reversible programming language with inductive types and recursion. The semantics of the language is based on pattern-matching; we show how ensuring syntactic exhaustivity and non-overlapping of clauses is enough to ensure reversibility. The language can represent any primitive recursive function. We then give a Curry-Howard correspondence with the logic $μ$MALL: linear logic extended with least fixed points, allowing inductive statements. The critical part of our work is to show how primitive recursion yields circular proofs that satisfy the $μ$MALL validity criterion and how the language simulates the cut-elimination procedure of $μ$MALL.

5. A Monoidal View on Fixpoint Checks

Paolo Baldan ; Richard Eggert ; Barbara König ; Timo Matt ; Tommaso Padoan.
Fixpoints are ubiquitous in computer science, as they play a central role in providing a meaning to recursive and cyclic definitions. Bisimilarity, behavioural metrics, termination probabilities for Markov chains and stochastic games are all defined in terms of least or greatest fixpoints. Here we show that our recent work, which proposes a technique for checking whether the fixpoint of a function is the least (or the greatest) one, admits a natural categorical interpretation in terms of gs-monoidal categories. The technique is based on a construction that maps a function to a suitable approximation. We study the compositionality properties of this mapping and show that under some restrictions it can naturally be interpreted as a (lax) gs-monoidal functor. This guides the development of a tool, called UDEfix, that allows us to build functions (and their approximations) like a circuit out of basic building blocks and subsequently perform the fixpoint checks. We also show that a slight generalisation of the theory allows one to treat a new relevant case study: coalgebraic behavioural metrics based on Wasserstein liftings.

6. Timeout Asynchronous Session Types: Safe Asynchronous Mixed-Choice For Timed Interactions

Jonah Pears ; Laura Bocchi ; Maurizio Murgia ; Andy King.
Mixed-choice has long been barred from models of asynchronous communication, since it compromises the decidability of key properties of communicating finite-state machines. Session types inherit this restriction, which precludes them from fully modelling timeouts -- a core feature of web and cloud services. To address this deficiency, we present (binary) Timeout Asynchronous Session Types (TOAST), an extension of (binary) asynchronous timed session types that permits mixed-choice. TOAST deploys timing constraints to regulate the use of mixed-choice so as to preserve communication safety. We provide a new behavioural semantics for TOAST which guarantees progress in the presence of mixed-choice. Building upon TOAST, we provide a calculus featuring process timers which is capable of modelling timeouts using a receive-after pattern, much like Erlang, and we capture the correspondence with TOAST specifications via a type system for which we prove subject reduction.
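
The receive-after pattern mentioned above is the Erlang idiom in which a receive is raced against a timer. The following is a minimal illustrative sketch in Python, not the paper's calculus or type system; the queue-based encoding and the chosen deadline are assumptions of the illustration only.

    import queue
    import threading
    import time

    def receive_after(inbox, deadline_s):
        """Wait for a message, but fall through to a timeout branch after deadline_s."""
        try:
            msg = inbox.get(timeout=deadline_s)   # "receive" branch
            return ("received", msg)
        except queue.Empty:
            return ("timeout", None)              # "after" branch: the choice is resolved by time

    if __name__ == "__main__":
        inbox = queue.Queue()
        # A peer that replies after 1 second; remove it to see the timeout branch fire.
        threading.Thread(target=lambda: (time.sleep(1), inbox.put("pong")), daemon=True).start()
        print(receive_after(inbox, deadline_s=2.0))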

7. Rewriting techniques for relative coherence

Samuel Mimram.
A series of works has established rewriting as an essential tool in order to prove coherence properties of algebraic structures, such as MacLane's coherence theorem for monoidal categories, based on the observation that, under reasonable assumptions, confluence diagrams for critical pairs provide the required coherence axioms. We are interested here in extending this approach simultaneously in two directions. Firstly, we want to take into account situations where coherence is partial, in the sense that it only applies to a subset of the structural morphisms. Secondly, we are interested in structures which are cartesian in the sense that variables can be duplicated or erased. We develop theorems and rewriting techniques in order to achieve this, first in the setting of abstract rewriting systems, and then extend them to term rewriting systems, suitably generalized to take coherence into account. As an illustration of our results, we explain how to recover the coherence theorems for monoidal and symmetric monoidal categories.

8. The Church Synthesis Problem over Continuous Time

Alexander Rabinovich ; Daniel Fattal.
The Church Problem asks for the construction of a procedure which, given a logical specification A(I,O) between input omega-strings I and output omega-strings O, determines whether there exists an operator F that implements the specification in the sense that A(I, F(I)) holds for all inputs I. Büchi and Landweber provided a procedure to solve the Church problem for MSO specifications and operators computable by finite-state automata. We investigate a generalization of the Church synthesis problem to the continuous time domain of the non-negative reals. We show that in the continuous time domain there are phenomena which are very different from those in the canonical discrete time domain of the natural numbers.
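
In symbols, and with the time domain made explicit (a restatement of the abstract; the causality requirement on $F$ is the standard one in this setting and is an assumption here), the synthesis problem asks, given a specification $A(I, O)$, whether
\[
  \exists F \text{ (causal)}.\ \forall I.\ A(I, F(I)).
\]
Büchi and Landweber solve this for MSO specifications over the discrete time domain $(\mathbb{N}, <)$; the present paper studies specifications and signals over the continuous time domain $(\mathbb{R}_{\geq 0}, <)$.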

9. Learning Concepts Definable in First-Order Logic with Counting

Steffen van Bergerem.
We study Boolean classification problems over relational background structures in the logical framework introduced by Grohe and Turán (TOCS 2004). It is known (Grohe and Ritzert, LICS 2017) that classifiers definable in first-order logic over structures of polylogarithmic degree can be learned in sublinear time, where the degree of the structure and the running time are measured in terms of the size of the structure. We generalise the results to first-order logic with counting (FOCN), which was introduced by Kuske and Schweikardt (LICS 2017) as an expressive logic generalising various other counting logics. Specifically, we prove that classifiers definable in FOCN over classes of structures of polylogarithmic degree can be consistently learned in sublinear time. This can be seen as a first step towards extending the learning framework to include numerical aspects of machine learning. We extend the result to agnostic probably approximately correct (PAC) learning for classes of structures of degree at most $(\log \log n)^c$ for some constant $c$. Moreover, we show that bounding the degree is crucial to obtain sublinear-time learning algorithms. That is, we prove that, for structures of unbounded degree, learning is not possible in sublinear time, even for classifiers definable in plain first-order logic.

10. Boolean basis, formula size, and number of modal operators

Christoph Berkholz ; Dietrich Kuske ; Christian Schwarz.
Is it possible to write significantly smaller formulae when using Boolean operators other than those of the De Morgan basis (and, or, not, and the constants)? For propositional logic, a negative answer was given by Pratt: a formula over one set of operators can always be translated into an equivalent formula over any other complete set of operators with only a polynomial increase in size. Surprisingly, for modal logic the picture is different: we show that elimination of bi-implication is only possible at the cost of an exponential number of occurrences of the modal operator $\lozenge$ and therefore of an exponential increase in formula size, i.e., the De Morgan basis and its extension by bi-implication differ in succinctness. Moreover, we prove that any complete set of Boolean operators agrees in succinctness with the De Morgan basis or with its extension by bi-implication. More precisely, these results are shown for the modal logic $\mathrm{T}$ (and therefore for $\mathrm{K}$). We complement them by showing that the modal logic $\mathrm{S5}$ behaves like propositional logic: the choice of Boolean operators has no significant impact on the size of formulae.

11. A Categorical Treatment of Open Linear Systems

Dario Stein ; Richard Samuelson.
An open stochastic system à la Jan Willems is a system affected by two qualitatively different kinds of uncertainty: one is probabilistic fluctuation, and the other is nondeterminism caused by a fundamental lack of information. We present a formalization of open stochastic systems in the language of category theory. Central to this is the notion of copartiality, which models how the lack of information propagates through a system (corresponding to the coarseness of sigma-algebras in Willems' work). As a concrete example, we study extended Gaussian distributions, which combine Gaussian probability with nondeterminism and correspond precisely to Willems' notion of Gaussian linear systems. We describe them both as measure-theoretic and as abstract categorical entities, which enables us to rigorously describe a variety of phenomena like noisy physical laws and uninformative priors in Bayesian statistics. The category of extended Gaussian maps can be seen as a mutual generalization of Gaussian probability and linear relations, which connects the literature on categorical probability with ideas from control theory like signal-flow diagrams.

12. Synthesis with Privacy Against an Observer

Orna Kupferman ; Ofer Leshkowitz ; Namma Shamash Halevy.
We study automatic synthesis of systems that interact with their environment and maintain privacy against an observer of the interaction. The system and the environment interact via sets $I$ and $O$ of input and output signals. The input to the synthesis problem contains, in addition to a specification, also a list of secrets, a function $cost: I\cup O\rightarrow\mathbb{N}$, which maps each signal to the cost of hiding it, and a bound $b\in\mathbb{N}$ on the budget that the system may use for hiding signals. The desired output is an $(I/O)$-transducer $T$ and a set $H\subseteq I\cup O$ of signals that respects the bound on the budget, that is, $\sum_{s\in H} cost(s)\leq b$, such that for every possible interaction of $T$, the generated computation satisfies the specification, yet an observer, from whom the signals in $H$ are hidden, cannot evaluate the secrets. We first show that the problem's complexity is 2EXPTIME-complete for specifications and secrets in LTL, making it no harder than synthesis without privacy requirements. We then analyze the complexity further, isolating the two aspects that do not exist in traditional synthesis: the need to hide secret values and the need to choose the set $H$. We do this by studying settings in which traditional synthesis is solvable in polynomial time -- when the specification formalism is deterministic automata and when the system is closed -- and show that each of these aspects adds an exponential blow-up in complexity. We […]

13. MacroSwarm: A Field-based Compositional Framework for Swarm Programming

Gianluca Aguzzi ; Roberto Casadei ; Mirko Viroli.
Swarm behaviour engineering is an area of research that seeks to investigate methods and techniques for coordinating computation and action within groups of simple agents to achieve complex global goals like pattern formation, collective movement, clustering, and distributed sensing. Despite recent progress in the analysis and engineering of swarms (of drones, robots, vehicles), there is still a need for general design and implementation methods and tools that can be used to define complex swarm behaviour in a principled way. To contribute to this quest, this article proposes a new field-based coordination approach, called MacroSwarm, to design and program swarm behaviour in terms of reusable and fully composable functional blocks embedding collective computation and coordination. Based on the macroprogramming paradigm of aggregate computing, MacroSwarm builds on the idea of expressing each swarm behaviour block as a pure function, mapping sensing fields into actuation goal fields, e.g., movement vectors. In order to demonstrate the expressiveness, compositionality, and practicality of MacroSwarm as a framework for swarm programming, we perform a variety of simulations covering common patterns of flocking, pattern formation, and collective decision-making. We also discuss the implications of the inherent self-stabilisation properties of field-based computations in MacroSwarm, which formally guarantee some resilience properties and which guided the design of the library.

14. Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases

Riccardo Zese ; Evelina Lamma ; Fabrizio Riguzzi.
The necessity to manage inconsistency in Description Logics Knowledge Bases (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together. Classical reasoning algorithms do not handle inconsistent KBs, forcing the debugging of the KB in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and allow queries even in the case of inconsistent KBs. We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach with the repair semantics, one of the most established semantics for DL reasoning tasks.

15. Feasibility of Learning Weighted Automata on a Semiring

Laure Daviaud ; Marianne Johnson.
Since the seminal work by Angluin and the introduction of the L*-algorithm, active learning of automata by membership and equivalence queries has been extensively studied to learn various extensions of automata. For weighted automata, algorithms for restricted cases have been developed in the literature, but so far there was no global approach or understanding of how these algorithms could apply (or not) in the general case. In this paper we chart the boundaries of the Angluin approach. We use a class of hypothesis automata which are constructed, in Angluin's style, by using membership and equivalence queries and solving certain finite systems of linear equations over the semiring, and we show the theoretical limitations of this approach. We classify functions with respect to how guessable they are, corresponding to the existence of hypothesis automata computing a given function, and how such a hypothesis automaton can be found. Of course, from an algorithmic standpoint, knowing that a solution (hypothesis automaton) exists need not translate into an effective algorithm to find one. We relate our work to the existing literature with a discussion of some known properties ensuring algorithmic solutions, illustrating the ideas over several familiar semirings (including the natural numbers).

16. Decomposition Strategies and Multi-shot ASP Solving for Job-shop Scheduling

Mohammed M. S. El-Kholany ; Martin Gebser ; Konstantin Schekotihin.
The Job-shop Scheduling Problem (JSP) is a well-known and challenging combinatorial optimization problem in which tasks sharing a machine are to be arranged in a sequence such that encompassing jobs can be completed as early as possible. In this paper, we investigate problem decomposition into time windows whose operations can be successively scheduled and optimized by means of multi-shot Answer Set Programming (ASP) solving. From a computational perspective, decomposition aims to split highly complex scheduling tasks into more manageable subproblems with a balanced number of operations, such that good-quality or even optimal partial solutions can be reliably found in a small fraction of the runtime. We devise and investigate a variety of decomposition strategies in terms of the number and size of time windows as well as heuristics for choosing their operations. Moreover, we incorporate time window overlapping and compression techniques into the iterative scheduling process to counteract optimization limitations due to the restriction to window-wise partial schedules. Our experiments on different JSP benchmark sets show that successive optimization by multi-shot ASP solving leads to substantially better schedules within tight runtime limits than single-shot optimization on the full problem. In particular, we find that decomposing initial solutions obtained with proficient heuristic methods into time windows leads to improved solution quality.
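
As a rough illustration of the window-wise decomposition idea, here is a hypothetical Python sketch; it is not the authors' ASP encoding or one of their actual heuristics, and the names and the ordering criterion are assumptions of the illustration.

    from typing import List, Tuple

    Operation = Tuple[str, int, int]   # (job, machine, processing_time)

    def split_into_windows(ops: List[Operation], num_windows: int) -> List[List[Operation]]:
        """Partition operations into time windows with a balanced number of operations.

        Operations are ordered by a simple heuristic (job name, then processing time);
        each window can then be scheduled and optimized separately, e.g. by successive
        solver calls on the subproblem it induces.
        """
        ordered = sorted(ops, key=lambda op: (op[0], op[2]))
        size = -(-len(ordered) // num_windows)            # ceiling division for balance
        return [ordered[i:i + size] for i in range(0, len(ordered), size)]

    if __name__ == "__main__":
        ops = [("j1", 0, 3), ("j1", 1, 2), ("j2", 0, 2), ("j2", 1, 4), ("j3", 0, 1), ("j3", 1, 5)]
        for idx, window in enumerate(split_into_windows(ops, num_windows=3)):
            print(f"window {idx}: {window}")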

17. A Fully Abstract Model of PCF Based on Extended Addressing Machines

Benedetto Intrigila ; Giulio Manzonetto ; Nicolas Munnich.
Extended addressing machines (EAMs) have been introduced to represent higher-order sequential computations. Previously, we have shown that they are capable of simulating -- via an easy encoding -- the operational semantics of PCF, extended with explicit substitutions. In this paper we prove that the simulation is actually an equivalence: a PCF program terminates in a numeral exactly when the corresponding EAM terminates in the same numeral. It follows that the model of PCF obtained by quotienting typable EAMs by a suitable logical relation is adequate. From a definability result stating that every EAM in the model can be transformed into a PCF program with the same observational behavior, we conclude that the model is fully abstract for PCF.
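
For reference, full abstraction is the standard requirement that denotational equality coincide with observational equivalence (the general definition, not a detail specific to the EAM model): for all PCF terms $M, N$ of the same type,
\[
  \llbracket M \rrbracket = \llbracket N \rrbracket
  \iff
  \forall C.\ \big( C[M] \Downarrow n \Leftrightarrow C[N] \Downarrow n \big),
\]
where $C$ ranges over program contexts of ground type and $n$ over numerals; adequacy requires this agreement for closed programs of ground type, i.e. a program denotes a numeral exactly when it evaluates to it.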

18. Coinductive Streams in Monoidal Categories

Elena Di Lavore ; Giovanni de Felice ; Mario Román.
We introduce monoidal streams. Monoidal streams are a generalization of causal stream functions, which can be defined in cartesian monoidal categories, to arbitrary symmetric monoidal categories. In the same way that streams provide semantics to dataflow programming with pure functions, monoidal streams provide semantics to dataflow programming with theories of processes represented by a symmetric monoidal category. Monoidal streams also form a feedback monoidal category. In the same way that we can use a coinductive stream calculus to reason about signal flow graphs, we can use coinductive string diagrams to reason about feedback monoidal categories. As an example, we study syntax for a stochastic dataflow language, with semantics in stochastic monoidal streams.

19. Totality for Mixed Inductive and Coinductive Types

Pierre Hyvernat.
This paper introduces \chariot, an ML/Haskell-like programming language with nested inductive and coinductive algebraic datatypes. Functions are defined by arbitrary recursive definitions and can thus lead to non-termination and other ``bad'' behavior. \chariot comes with a totality checker that tags possibly ill-behaved definitions. Such a totality checker is mandatory in the context of proof assistants based on type theory like Agda. Proving correctness of this checker is far from trivial and relies on: (1) an interpretation of types as parity games, (2) an interpretation of correct values as winning strategies for those games, and (3) Lee, Jones and Ben-Amram's size-change principle, used to check that the strategies induced by recursive definitions are winning. This paper develops the first two points, the last step being the subject of an upcoming paper. A prototype has been implemented and can be used to experiment with the resulting totality checker, giving a practical argument in favor of this principle.

20. The Size-Change Principle for Mixed Inductive and Coinductive types

Pierre Hyvernat.
This paper shows how to use Lee, Jones and Ben-Amram's size-change principle to check the correctness of arbitrary recursive definitions in an ML/Haskell-like programming language with inductive and coinductive types. Naively using the size-change principle to check productivity and termination is straightforward but unsound when inductive and coinductive types are nested. We can however adapt the size-change principle to check ``totality'', which corresponds exactly to correctness with respect to the corresponding (co)inductive type.

21. On Bisimilarity for Quasi-discrete Closure Spaces

Vincenzo Ciancia ; Diego Latella ; Mieke Massink ; Erik P. de Vink.
Closure spaces, a generalisation of topological spaces, have been shown to be a convenient theoretical framework for spatial model checking. The closure operator of closure spaces and quasi-discrete closure spaces induces a notion of neighbourhood akin to that of topological spaces, which is based on open sets. For closure models and quasi-discrete closure models, in this paper we present three notions of bisimilarity that are logically characterised by corresponding modal logics with spatial modalities: (i) CM-bisimilarity for closure models (CMs) is shown to generalise topo-bisimilarity for topological models and to be an instantiation of neighbourhood bisimilarity, when CMs are seen as (augmented) neighbourhood models. CM-bisimilarity corresponds to equivalence with respect to the infinitary modal logic IML that includes the modality ${\cal N}$ for ``being near to''. (ii) CMC-bisimilarity, with `CMC' standing for CM-bisimilarity with converse, refines CM-bisimilarity for quasi-discrete closure spaces, carriers of quasi-discrete closure models. Quasi-discrete closure models come equipped with two closure operators, Direct ${\cal C}$ and Converse ${\cal C}$, stemming from the binary relation underlying closure and its converse. CMC-bisimilarity is captured by the infinitary modal logic IMLC including two modalities, Direct ${\cal N}$ and Converse ${\cal N}$, corresponding to the two closure operators. (iii) CoPa-bisimilarity on quasi-discrete closure models, which is […]
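
For readers unfamiliar with the setting, a (Čech) closure space is a set $X$ equipped with an operator ${\cal C} : 2^X \to 2^X$ satisfying the standard axioms (recalled here for convenience, not a contribution of the paper)
\[
  {\cal C}(\emptyset) = \emptyset, \qquad A \subseteq {\cal C}(A), \qquad {\cal C}(A \cup B) = {\cal C}(A) \cup {\cal C}(B),
\]
without requiring idempotence, which is what topological spaces add; it is quasi-discrete when ${\cal C}(A) = \bigcup_{a \in A} {\cal C}(\{a\})$, so that the space is determined by a binary relation, e.g. the adjacency relation of a graph.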

22. Craig Interpolation for Decidable First-Order Fragments

Balder ten Cate ; Jesse Comer.
We show that the guarded-negation fragment is, in a precise sense, the smallest extension of the guarded fragment with Craig interpolation. In contrast, we show that full first-order logic is the smallest extension of both the two-variable fragment and the forward fragment with Craig interpolation. Similarly, we also show that all extensions of the two-variable fragment and of the fluted fragment with Craig interpolation are undecidable.
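
Recall that a logic (or fragment) $L$ has Craig interpolation if, whenever $\varphi \models \psi$ for $\varphi, \psi \in L$, there is an interpolant $\vartheta \in L$ using only non-logical symbols common to $\varphi$ and $\psi$ such that
\[
  \varphi \models \vartheta \quad \text{and} \quad \vartheta \models \psi.
\]
This is the standard definition, stated here only for context; the results above concern which decidable fragments can be extended to fragments enjoying this property.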

23. Aczel-Mendler Bisimulations in a Regular Category

Jeremy Dubut.
Aczel-Mendler bisimulations are a coalgebraic extension of a variety of computational relations between systems. It is usual to assume that the underlying category satisfies some form of the axiom of choice, so that the collection of bisimulations enjoys desirable properties, such as closure under composition. In this paper, we adapt the definition to general regular categories and toposes. We show that this general definition: 1) is closed under composition without using the axiom of choice, 2) coincides with other types of coalgebraic formulations under milder conditions, 3) coincides with the usual definition when the category satisfies the regular axiom of choice. In particular, the case of toposes heavily relies on power-objects, for which we recover some favourable properties along the way. Finally, we describe several examples in Stone spaces, toposes for name-passing, and modules over a ring.

24. Distributed controller synthesis for deadlock avoidance

Hugo Gimbert ; Corto Mascle ; Anca Muscholl ; Igor Walukiewicz.
We consider the distributed control synthesis problem for systems with locks. The goal is to find local controllers so that the global system does not deadlock. With no restriction this problem is undecidable even for three processes each using a fixed number of locks. We propose two restrictions that make distributed control decidable. The first one is to allow each process to use at most two locks. The problem then becomes $Σ_2^P$-complete, and even in PTIME under some additional assumptions. The dining philosophers problem satisfies these assumptions. The second restriction is a nested usage of locks. In this case the synthesis problem is NEXPTIME-complete. The drinking philosophers problem falls in this case.

25. Extensional and Non-extensional Functions as Processes

Ken Sakayori ; Davide Sangiorgi.
Following Milner's seminal paper, the representation of functions as processes has received considerable attention. For pure $λ$-calculus, the process representations yield (at best) non-extensional $λ$-theories (i.e., the $β$ rule holds, whereas $η$ does not). In this paper, we study how to obtain extensional representations, and how to move between extensional and non-extensional representations. Using Internal $π$, $\mathrm{I}π$ (a subset of the $π$-calculus in which all outputs are bound), we develop a refinement of Milner's original encoding of functions as processes that is parametric on certain abstract components called wires. These are, intuitively, processes whose task is to connect two end-point channels. We show that when a few algebraic properties of wires hold, the encoding yields a $λ$-theory. Exploiting the symmetries and dualities of $\mathrm{I}π$, we isolate three main classes of wires. The first two have a sequential behaviour and are dual to each other; the third has a parallel behaviour and is the dual of itself. We show that adopting the parallel wires yields an extensional $λ$-theory; in fact, it yields an equality that coincides with that of Böhm trees with infinite $η$. In contrast, the other two classes of wires yield non-extensional $λ$-theories whose equalities are those of the Lévy-Longo and Böhm trees.

26. On the Unprovability of Circuit Size Bounds in Intuitionistic $\mathsf{S}^1_2$

Lijie Chen ; Jiatu Li ; Igor C. Oliveira.
We show that there is a constant $k$ such that Buss's intuitionistic theory $\mathsf{IS}^1_2$ does not prove that SAT requires co-nondeterministic circuits of size at least $n^k$. To our knowledge, this is the first unconditional unprovability result in bounded arithmetic in the context of worst-case fixed-polynomial size circuit lower bounds. We complement this result by showing that the upper bound $\mathsf{NP} \subseteq \mathsf{coNSIZE}[n^k]$ is unprovable in $\mathsf{IS}^1_2$. In order to establish our main result, we obtain new unconditional lower bounds against refuters that might be of independent interest. In particular, we show that there is no efficient refuter for the lower bound $\mathsf{NP} \nsubseteq \mathsf{i.o.}\text{-}\mathsf{coNP}/\mathsf{poly}$, addressing in part a question raised by Atserias (2006).

27. Enumeration Algorithms for Conjunctive Queries with Projection

Shaleen Deep ; Xiao Hu ; Paraschos Koutris.
We investigate the enumeration of query results for an important subset of CQs with projections, namely star and path queries. The task is to design data structures and algorithms that allow for efficient enumeration with delay guarantees after a preprocessing phase. Our main contribution is a series of results based on the idea of interleaving precomputed output with further join processing to maintain delay guarantees, which may be of independent interest. In particular, for star queries, we design combinatorial algorithms that provide instance-specific delay guarantees in linear preprocessing time. These algorithms improve upon the currently best known results. Further, we show how existing results can be improved upon by using fast matrix multiplication. We also present new results involving a tradeoff between preprocessing time and delay guarantees for the enumeration of path queries that contain projections. Boolean matrix multiplication is an important query that can be expressed as a CQ with projection where the join attribute is projected away. Our results can therefore also be interpreted as sparse, output-sensitive matrix multiplication with delay guarantees.
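
For concreteness, the matrix-multiplication query mentioned above is the conjunctive query with projection (in rule notation; the relation names are illustrative)
\[
  Q(a, c) \leftarrow R(a, b),\ S(b, c),
\]
where $R$ and $S$ encode the two Boolean matrices and the join attribute $b$ is projected away; enumerating the tuples of $Q$ with bounded delay is thus an output-sensitive form of Boolean matrix multiplication.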

28. Rigorous Function Calculi in Ariadne

Pieter Collins ; Luca Geretti ; Sanja Zivanovic Gonzalez ; Davide Bresolin ; Tiziano Villa.
Almost all problems in applied mathematics, including the analysis of dynamical systems, deal with spaces of real-valued functions on Euclidean domains in their formulation and solution. In this paper, we describe the tool Ariadne, which provides a rigorous calculus for working with Euclidean functions. We first introduce the Ariadne framework, which is based on a clean separation of objects as providing exact, effective, validated and approximate information. We then discuss the function calculus as implemented in Ariadne, including polynomial function models, which are the fundamental class for concrete computations. We then consider the solution of some core problems of functional analysis, namely the solution of algebraic equations and differential equations, and briefly discuss their use for the analysis of hybrid systems. We give examples of C++ and Python code for performing the various calculations. Finally, we discuss progress on extensions, including improvements to the function calculus and extensions to more complicated classes of system.

29. Multi-representation associated to the numbering of a subbasis and formal inclusion relations

Emmanuel Rauzy.
We revisit Dieter Spreen's notion of a representation associated to a numbered basis equipped with a strong inclusion relation. We show that by relaxing his requirements, we obtain different classically considered representations as subcases, including representations considered by Grubba, Weihrauch and Schröder. We show that the use of an appropriate strong inclusion relation guarantees that the representation associated to a computable metric space seen as a topological space always coincides with the Cauchy representation. We also show how the use of a formal inclusion relation guarantees that when defining multi-representations on a set and on one of its subsets, the obtained multi-representations will be compatible, i.e. inclusion will be a computable map. The proposed definitions are also more robust under change of equivalent bases.

30. TSO Games -- On the decidability of safety games under the total store order semantics (extended LMCS version with appendix)

Stephan Spengler ; Sanchari Sil.
We extend the classical Total Store Order (TSO) semantics to turn-based 2-player safety games. During her turn, a player can select any of the communicating processes and perform its next transition. We consider different formulations of the safety game problem, depending on whether one player or both of them transfer messages from the process buffers to the shared memory. We give the complete decidability picture for all the possible alternatives.

31. FlexFringe: Modeling Software Behavior by Learning Probabilistic Automata

Sicco Verwer ; Christian Hammerschmidt.
We present efficient implementations of the probabilistic deterministic finite automaton learning methods available in FlexFringe. These implement well-known strategies for state-merging, including several modifications to improve their performance in practice. We show experimentally that these algorithms obtain competitive results and significant improvements over a default implementation. We also demonstrate how to use FlexFringe to learn interpretable models from software logs and use these for anomaly detection. Although less interpretable, we show that learning smaller, more convoluted models improves the performance of FlexFringe on anomaly detection, outperforming an existing solution based on neural nets.

32. Relating homotopy equivalences to conservativity in dependent type theories with computation axioms

Matteo Spadetto.
We prove a conservativity result for extensional type theories over propositional ones, i.e. dependent type theories with propositional computation rules, or computation axioms, using insights from homotopy type theory. The argument exploits a notion of canonical homotopy equivalence between contexts, and uses the notion of a category with attributes to phrase the semantics of theories of dependent types. Informally, our main result asserts that, for judgements essentially concerning h-sets, reasoning with extensional or propositional type theories is equivalent.