2025
We show that the extension of Presburger arithmetic by a quadratic generalised polynomial of a specific form is undecidable.
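For context (a generic illustration, not the specific form treated in the paper): generalised polynomials are built from ordinary polynomials by additionally allowing the integer-part operation, as in

$$ p(x) = \big\lfloor \sqrt{2}\,x\,\lfloor \sqrt{3}\,x \rfloor \big\rfloor . $$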
We describe a framework for maintaining forest algebra representations that are of logarithmic height for unranked trees. Such representations can be computed in O(n) time and updated in O(log(n)) time. The framework is of potential interest for data structures and algorithms for trees whose complexity depends on the depth of the tree (representation). We provide an exemplary application of the framework to the problem of efficiently enumerating answers to MSO-definable queries over trees which are subject to local updates. We exhibit an algorithm that uses an O(n) preprocessing phase and enumerates answers with O(log(n)) delay between them. When the tree is updated, the algorithm can avoid repeating expensive preprocessing and restart the enumeration phase within O(log(n)) time. Our algorithms and complexity results in the paper are presented in terms of node-selecting tree automata representing the MSO queries.
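For orientation, a minimal sketch of the two-sorted algebraic signature in question (following the standard horizontal/vertical presentation of forest algebras; the paper's logarithmic-height representations impose much more structure):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- A minimal two-sorted forest-algebra signature: a horizontal monoid h
-- (forests under concatenation) and a vertical monoid v (contexts under
-- composition) acting on forests. Sketch only.
class (Monoid h, Monoid v) => ForestAlgebra h v where
  act :: v -> h -> h  -- plug a forest into a context
```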
Recent trends like the Internet of Things (IoT) suggest a vision of dense and multi-scale deployments of computing devices in nearly all kinds of environments. A prominent engineering challenge revolves around programming the collective adaptive behaviour of such computational ecosystems. This requires abstractions able to capture concepts like ensembles (dynamic groups of cooperating devices) and collective tasks (joint activities carried out by ensembles). In this work, we consider collections of devices that interact with their neighbours and execute in nearly-synchronised sense-compute-interact rounds, where the computation is given by a single program mapping sensing values and incoming messages to outputs and outgoing messages. To support programming whole computational collectives, we propose the abstraction of a distributed collective process, which can be used to define at once the ensemble formation logic and its collective task. We formalise the abstraction in the eXchange Calculus (XC), a core functional language based on neighbouring values (maps from neighbours to values) where state and interaction are handled through a single primitive, exchange, and provide a corresponding implementation in the FCPP language. Then, we exercise distributed collective processes using two case studies: multi-hop message propagation and distributed monitoring of spatial properties. Finally, we discuss the features of the abstraction and its suitability for different kinds of […]
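To give a flavour of the neighbouring-values style (a hedged Haskell sketch rather than XC or FCPP; `DeviceId`, `NValues`, and `hopGradient` are illustrative names), here is the classic hop-count gradient that underlies multi-hop propagation:

```haskell
import qualified Data.Map as Map

-- A neighbouring value: a map from neighbour identifiers to values.
type DeviceId = Int
type NValues a = Map.Map DeviceId a

-- One round of a hop-count gradient: a source estimates 0; any other
-- device estimates one more than the minimum estimate heard from its
-- neighbours (maxBound if it has heard nothing yet; overflow from
-- maxBound is ignored in this sketch).
hopGradient :: Bool -> NValues Int -> Int
hopGradient isSource nbrs
  | isSource      = 0
  | Map.null nbrs = maxBound
  | otherwise     = 1 + minimum (Map.elems nbrs)
```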
Extensions of Stone-type dualities have a long history in algebraic logic and have also been instrumental in proving results in algebraic language theory. We show how to extend abstract categorical dualities via monoidal adjunctions, subsuming various incarnations of classical extended Stone and Priestley duality as special cases, and providing the foundation for two new concrete dualities: First, we investigate residuation algebras, which are lattices with additional residual operators modeling language derivatives algebraically. We show that the subcategory of derivation algebras is dually equivalent to the category of profinite ordered monoids, restricting to a duality between Boolean residuation algebras and profinite monoids. We further refine this duality to capture relational morphisms of profinite ordered monoids, which dualize to natural morphisms of residuation algebras. Second, we apply the categorical extended duality to the discrete setting of sets and complete atomic Boolean algebras to obtain a concrete description for the dual of the category of all small categories.
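For orientation, the residual operators mentioned above are the two adjoints of multiplication, and language derivatives are the standard instance they model:

$$ b \le a \backslash c \;\iff\; a \cdot b \le c \;\iff\; a \le c / b, \qquad a^{-1}L = \{\, w \mid aw \in L \,\}. $$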
The delay monad provides a way to introduce general recursion in type theory. To write programs that use a wide range of computational effects directly in type theory, we need to combine the delay monad with the monads of these effects. Here we present a first systematic study of such combinations. We study both the coinductive delay monad and its guarded recursive cousin, giving concrete examples of combining these with well-known computational effects. We also provide general theorems stating which algebraic effects distribute over the delay monad, and which do not. Lastly, we salvage some of the impossible cases by considering distributive laws up to weak bisimilarity.
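For concreteness, a minimal Haskell rendering of the coinductive delay monad, together with one well-known distributive law in the paper's territory, exceptions over delay (the guarded recursive variant needs a later modality and is not expressible this directly):

```haskell
-- The (coinductive) delay monad: a value now, or after one more step.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- A distributive law of exceptions over delay: an already-raised
-- exception needs no further computation steps.
distEx :: Either e (Delay a) -> Delay (Either e a)
distEx (Left e)  = Now (Left e)
distEx (Right d) = fmap Right d
```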
None of the first-order modal logics between $\mathsf{K}$ and $\mathsf{S5}$ under the constant domain semantics enjoys Craig interpolation or projective Beth definability, even in the language restricted to a single individual variable. It follows that the existence of a Craig interpolant for a given implication or of an explicit definition for a given predicate cannot be directly reduced to validity as in classical first-order and many other logics. Our concern here is the decidability and computational complexity of the interpolant and definition existence problems. We first consider two decidable fragments of first-order modal logic $\mathsf{S5}$: the one-variable fragment $\mathsf{Q^1S5}$ and its extension $\mathsf{S5}_{\mathcal{ALC}^u}$ that combines $\mathsf{S5}$ and the description logic $\mathcal{ALC}$ with the universal role. We prove that interpolant and definition existence in $\mathsf{Q^1S5}$ and $\mathsf{S5}_{\mathcal{ALC}^u}$ is decidable in coN2ExpTime, being 2ExpTime-hard, while uniform interpolant existence is undecidable. These results transfer to the two-variable fragment $\mathsf{FO^2}$ of classical first-order logic without equality. We also show that interpolant and definition existence in the one-variable fragment $\mathsf{Q^1K}$ of first-order modal logic $\mathsf{K}$ is decidable but not elementary, while uniform interpolant existence is again undecidable.
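Recall the property whose failure drives these problems: a logic has Craig interpolation if every valid implication admits an interpolant over the shared vocabulary (stated here with a generic signature function $\mathrm{sig}$),

$$ \models \varphi \rightarrow \psi \quad\Longrightarrow\quad \exists \chi \colon\; \models \varphi \rightarrow \chi,\;\; \models \chi \rightarrow \psi,\;\; \mathrm{sig}(\chi) \subseteq \mathrm{sig}(\varphi) \cap \mathrm{sig}(\psi). $$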
We study monoidal transducers, transition systems arising as deterministic automata whose transitions also produce outputs in an arbitrary monoid, for instance allowing outputs to commute or to cancel out. We use the categorical framework for minimization and learning of Colcombet, Petrişan and Stabile to recover the notion of minimal transducer recognizing a language, and give necessary and sufficient conditions on the output monoid for this minimal transducer to exist and be unique (up to isomorphism). The categorical framework then provides an abstract algorithm for learning it using membership and equivalence queries, and we discuss practical aspects of this algorithm's implementation.
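A minimal sketch of the objects in question (Haskell, following the usual subsequential-transducer presentation with an initial output prefix and final outputs; the paper's categorical definition may differ in detail):

```haskell
-- A deterministic transducer with outputs in a monoid m.
data Transducer q a m = Transducer
  { initial :: (m, q)           -- initial output prefix and state
  , step    :: q -> a -> (m, q) -- transition producing an output
  , final   :: q -> Maybe m     -- final output, if accepting
  }

run :: Monoid m => Transducer q a m -> [a] -> Maybe m
run t = go (initial t)
  where
    go (out, q) []       = (out <>) <$> final t q
    go (out, q) (x : xs) = let (o, q') = step t q x
                           in go (out <> o, q') xs
```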
Motivated by an application where we try to make proofs for Description Logic inferences smaller by rewriting, we consider the following decision problem, which we call the small term reachability problem: given a term rewriting system $R$, a term $s$, and a natural number $n$, decide whether there is a term $t$ of size $\leq n$ reachable from $s$ using the rules of $R$. We investigate the complexity of this problem depending on how termination of $R$ can be established. We show that the problem is in general NP-complete for length-reducing term rewriting systems. Its complexity increases to N2ExpTime-complete (NExpTime-complete) if termination is proved using a (linear) polynomial order and to PSpace-complete for systems whose termination can be shown using a restricted class of Knuth-Bendix orders. Confluence reduces the complexity to P for the length-reducing case, but has no effect on the worst-case complexity in the other two cases. Finally, we consider the large term reachability problem, a variant of the problem where we are interested in reachability of a term of size $\geq n$. It turns out that this seemingly innocuous modification in some cases changes the complexity of the problem, which may also become dependent on whether the number $n$ is represented in unary or binary encoding, whereas this makes no difference for the complexity of the small term reachability problem.
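As a naive baseline (a sketch only, nowhere near the complexity bounds above; `rewrites` and `size` are assumed helpers), the problem can be decided for a terminating system by exhaustive search:

```haskell
import qualified Data.Set as Set

-- Brute-force baseline: breadth-first search over one-step rewrites.
-- For a terminating, finitely branching system the reachable set is
-- finite, so the search terminates -- but in exponential time.
smallReachable :: Ord t => (t -> [t]) -> (t -> Int) -> t -> Int -> Bool
smallReachable rewrites size s n = go (Set.singleton s) [s]
  where
    go _ [] = False
    go seen (t : ts)
      | size t <= n = True
      | otherwise   =
          let next = [ u | u <- rewrites t, Set.notMember u seen ]
          in go (foldr Set.insert seen next) (ts ++ next)
```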
As shown by Tsukada and Ong, simply-typed, normal and eta-long resource terms correspond to plays in Hyland-Ong games, quotiented by Melliès' homotopy equivalence. The original proof of this inspiring result is indirect, relying on the injectivity of the relational model w.r.t. both sides of the correspondence -- in particular, the dynamics of the resource calculus is taken into account only via the compatibility of the relational model with the composition of normal terms defined by normalization. In the present paper, we revisit and extend these results. Our first contribution is to restate the correspondence by considering causal structures we call augmentations, which are canonical representatives of Hyland-Ong plays up to homotopy. This allows us to give a direct and explicit account of the connection with normal resource terms. As a second contribution, we extend this account to the reduction of resource terms: building on a notion of strategies as weighted sums of augmentations, we provide a denotational model of the resource calculus, invariant under reduction. A key step -- and our third contribution -- is a categorical model we call a resource category, which is to the resource calculus what differential categories are to the differential lambda-calculus.
We present a categorical theory of the composition methods in finite model theory -- a key technique enabling modular reasoning about complex structures by building them out of simpler components. The crucial results required by the composition methods are Feferman--Vaught--Mostowski (FVM) type theorems, which characterize how logical equivalence behaves under composition and transformation of models. Our results are developed by extending the recently introduced game comonad semantics for model comparison games. This level of abstraction allows us to give conditions yielding FVM type results in a uniform way. Our theorems are parametric in the classes of models, logics and operations involved. Furthermore, they naturally account for the existential and positive existential fragments of these logics, as well as their extensions with counting quantifiers. We also reveal surprising connections between FVM type theorems and classical concepts in the theory of monads. We illustrate our methods by recovering many classical theorems of practical interest, including a refinement of a previous result by Dawar, Severini, and Zapata concerning the 3-variable counting logic and cospectrality. To highlight the importance of our techniques being parametric in the logic of interest, we prove a family of FVM theorems for products of structures, uniformly in the logic in question, which cannot be done using specific game arguments. This is an extended version of the LiCS 2023 conference paper of the […]
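The prototypical shape of such a theorem, going back to Feferman and Vaught: logical equivalence (for a fixed logic and resource bound) is a congruence for the composition operation,

$$ \mathfrak{A} \equiv \mathfrak{B} \;\text{ and }\; \mathfrak{A}' \equiv \mathfrak{B}' \quad\Longrightarrow\quad \mathfrak{A} \oplus \mathfrak{A}' \;\equiv\; \mathfrak{B} \oplus \mathfrak{B}' . $$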
Control and state estimation procedures need to be robust against imprecisely known parameters, uncertainty in initial conditions, and external disturbances. Interval methods and other set-based techniques form the basis for the implementation of powerful approaches that can be used to identify parameters of dynamic system models in the presence of the aforementioned types of uncertainty. Moreover, they are applicable to a verified feasibility and stability analysis of controllers and state estimators. In addition to these approaches, which are typically used offline to analyse system models designed with classical floating-point procedures, interval and set-based methods developed in recent years make it possible to solve the associated design tasks directly and to implement reliable techniques that are applicable online, i.e., during system operation. The latter approaches include set-based model predictive control, online parameter adaptation techniques for nonlinear variable-structure and backstepping controllers, interval observers, and fault diagnosis techniques. This paper provides an overview of the methodological background and reviews numerous practical applications for which interval and other set-valued approaches have been employed successfully.
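The core primitive is elementary: arithmetic on intervals that is guaranteed to enclose all point results. A minimal sketch (ignoring outward rounding, which any verified implementation must handle):

```haskell
-- Interval arithmetic encloses all point results of an operation.
data Interval = I { lo :: Double, hi :: Double } deriving Show

iadd, imul :: Interval -> Interval -> Interval
iadd (I a b) (I c d) = I (a + c) (b + d)
imul (I a b) (I c d) = let ps = [a*c, a*d, b*c, b*d]
                       in I (minimum ps) (maximum ps)
```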
Two families of denotational models have emerged from the semantic analysis of linear logic: dynamic models, typically presented as game semantics, and static models, typically based on a category of relations. In this paper we introduce a formal bridge between a dynamic model and a static model: the model of thin concurrent games and strategies, based on event structures, and the model of generalized species of structures, based on distributors. A special focus of this paper is the two-dimensional nature of the dynamic-static relationship, which we formalize with double categories and bicategories. In the first part of the paper, we construct a symmetric monoidal oplax functor from linear concurrent strategies to distributors. We highlight two fundamental differences between the two models: the composition mechanism, and the representation of resource symmetries. In the second part of the paper, we adapt established methods from game semantics (visible strategies, payoff structure) to enforce a tighter connection between the two models. We obtain a cartesian closed pseudofunctor, which we exploit to shed new light on recent results in the theory of the lambda-calculus.
We study three levels in a hierarchy of nondeterminism: A nondeterministic automaton $\mathcal{A}$ is determinizable by pruning (DBP) if we can obtain a deterministic automaton equivalent to $\mathcal{A}$ by removing some of its transitions. Then, $\mathcal{A}$ is history deterministic (HD) if its nondeterministic choices can be resolved in a way that only depends on the past. Finally, $\mathcal{A}$ is semantically deterministic (SD) if different nondeterministic choices in $\mathcal{A}$ lead to equivalent states. Some applications of automata in formal methods require deterministic automata, yet in fact can use automata with some level of nondeterminism. For example, DBP automata are useful in the analysis of online algorithms, and HD automata are useful in synthesis and control. For automata on finite words, the three levels in the hierarchy coincide. We study the hierarchy for Büchi, co-Büchi, and weak automata on infinite words. We show that the hierarchy is strict, study the expressive power of the different levels in it, as well as the complexity of deciding the membership of a language in a given level. Finally, we describe a probability-based analysis of the hierarchy, which relates the level of nondeterminism with the probability that a random run on a word in the language is accepting. We relate the latter to nondeterministic automata that can be used when reasoning about probabilistic systems.
Labelled transition systems can be studied in terms of modal logic and in terms of bisimulation. These two notions are connected by Hennessy-Milner theorems, which show that two states are bisimilar precisely when they satisfy the same modal logic formulas. Recently, apartness has been studied as a dual to bisimulation, which also gives rise to a dual version of the Hennessy-Milner theorem: two states are apart precisely when there is a modal formula that distinguishes them. In this paper, we introduce "directed" versions of Hennessy-Milner theorems that characterize when the theory of one state is included in the other. For this we introduce "positive modal logics" that only allow a limited use of negation. Furthermore, we introduce directed notions of bisimulation and apartness, and then show that, for this positive modal logic, the theory of $s$ is included in the theory of $t$ precisely when $s$ is directed bisimilar to $t$. Or, in terms of apartness, we show that $s$ is directed apart from $t$ precisely when the theory of $s$ is not included in the theory of $t$. From the directed version of the Hennessy-Milner theorem, the original result follows. In particular, we study the case of branching bisimulation and Hennessy-Milner Logic with Until (HMLU) as a modal logic. We introduce "directed branching bisimulation" (and directed branching apartness) and "Positive Hennessy-Milner Logic with Until" (PHMLU) and we show the directed […]
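A small sketch of what "positive" buys (Haskell; a finite LTS given as a successor function): with negation absent, satisfaction is preserved in one direction along simulation-like relations, which is what drives the directed Hennessy-Milner theorem. The Until modality of HMLU is omitted from this sketch.

```haskell
-- Negation-free ("positive") modal formulas and their satisfaction.
data Form a = Tt | Ff
            | And (Form a) (Form a) | Or (Form a) (Form a)
            | Dia a (Form a) | Box a (Form a)

type LTS s a = s -> a -> [s]  -- successors of a state under a label

sat :: LTS s a -> s -> Form a -> Bool
sat _   _ Tt        = True
sat _   _ Ff        = False
sat lts s (And f g) = sat lts s f && sat lts s g
sat lts s (Or  f g) = sat lts s f || sat lts s g
sat lts s (Dia l f) = any (\t -> sat lts t f) (lts s l)
sat lts s (Box l f) = all (\t -> sat lts t f) (lts s l)
```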
Feder and Vardi showed that the class Monotone Monadic SNP without inequality (MMSNP) has a P vs NP-complete dichotomy if and only if such a dichotomy holds for finite-domain Constraint Satisfaction Problems (CSPs). Moreover, they showed that none of the three classes obtained by removing one of the defining properties of MMSNP (monotonicity, monadicity, no inequality) has a dichotomy. The overall objective of this paper is to study the gaps between MMSNP and each of these three superclasses, where the existence of a dichotomy remains unknown. For the gap between MMSNP and Monotone SNP without inequality, we study the class Guarded Monotone SNP without inequality (GMSNP) introduced by Bienvenu, ten Cate, Lutz, and Wolter, and prove that GMSNP has a dichotomy if and only if a dichotomy holds for GMSNP problems over signatures consisting of a unique relation symbol. For the gap between MMSNP and MMSNP with inequality, we introduce a new class, MMSNP with guarded inequality, which lies between MMSNP and MMSNP with inequality, is strictly more expressive than the former, and still has a dichotomy. For the gap between MMSNP and Monadic SNP without inequality, we introduce a logic that extends the class of Matrix Partitions in a way similar to how MMSNP extends finite-domain CSP, and pose an open question about the existence of a dichotomy for this class. Finally, we revisit the theorem of Feder and Vardi, which claims that the class NP embeds into MMSNP with inequality. We give […]
Rathjen proved that Aczel's constructive set theory $\mathbf{CZF}$ extended with inaccessible sets of all transfinite orders can be interpreted in Martin-Löf type theory $\mathbf{MLTT}$ extended with Setzer's Mahlo universe and another universe above it. In this paper we show that this interpretation can be carried out bottom-up without the universe above the Mahlo universe, provided we add an accessibility predicate instead. If we work in Martin-Löf type theory with extensional identity types the accessibility predicate can be defined in terms of $\mathrm{W}$-types. The main part of our interpretation has been formalised in the proof assistant Agda.
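The accessibility predicate in question is the usual inductive one: an element is accessible if all its predecessors are. A Haskell rendering of the type (in MLTT this is an inductive family; with extensional identity types it reduces to $\mathrm{W}$-types, as stated above):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Acc r x: every r-predecessor of x is accessible. Well-founded
-- recursion on r is then structural recursion on this witness.
newtype Acc r x = Acc (forall y. r y x -> Acc r y)
```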
Data partitioning that maximizes or minimizes the Shannon entropy, or more generally the Rényi entropy, is a crucial subroutine in data compression, columnar storage, and cardinality estimation algorithms. These partition algorithms can be accelerated if we have a data structure to compute the entropy of different subsets of data when the algorithm needs to decide what block to construct. Such a data structure will also be useful for data analysts exploring different subsets of data to identify areas of interest. While it is generally known how to compute the Shannon or the Rényi entropy of a discrete distribution in the offline or streaming setting efficiently, we focus on the query setting, where we aim to efficiently derive the entropy of the subset of data satisfying given linear predicates. We solve this problem in a setting typical of real data, where data items are geometric points and each requested area is a query (hyper)rectangle. More specifically, we consider a set $P$ of $n$ weighted and colored points in $\mathbb{R}^d$, where $d$ is a constant. For the range S-entropy (resp. R-entropy) query problem, the goal is to construct a low space data structure, such that given a query (hyper)rectangle $R$, it computes the Shannon (resp. Rényi) entropy based on the colors and the weights of the points in $P\cap R$, in sublinear time. We show conditional lower bounds proving that we cannot hope for data structures with near-linear space and near-constant […]
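As a baseline for what a range S-entropy query must compute (a linear-time sketch; the paper's structures answer this in sublinear time after preprocessing), here is the Shannon entropy of the colour distribution induced by the weighted points inside a query range:

```haskell
import qualified Data.Map as Map

-- Linear scan: aggregate weight per colour inside the query range,
-- then compute Shannon entropy of the normalized distribution.
-- (Assumes the range is nonempty; colours are strings for simplicity.)
shannonInRange :: (pt -> Bool) -> [(pt, String, Double)] -> Double
shannonInRange inRect pts =
  let ws    = Map.fromListWith (+) [ (c, w) | (p, c, w) <- pts, inRect p ]
      total = sum (Map.elems ws)
  in negate (sum [ q * logBase 2 q | w <- Map.elems ws, let q = w / total ])
```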
We study the computational expressivity of proof systems with fixed point operators, within the 'proofs-as-programs' paradigm. We start with a calculus muLJ (due to Clairambault) that extends intuitionistic logic by least and greatest positive fixed points. Based on the sequent calculus, muLJ admits a standard extension to a 'circular' calculus CmuLJ. Our main result is that, perhaps surprisingly, both muLJ and CmuLJ represent the same first-order functions: those provably total in $Π^1_2$-$\mathsf{CA}_0$, a subsystem of second-order arithmetic beyond the 'big five' of reverse mathematics and one of the strongest theories for which we have an ordinal analysis (due to Rathjen). This solves various questions in the literature on the computational strength of (circular) proof systems with fixed points. For the lower bound we give a realisability interpretation from an extension of Peano Arithmetic by fixed points that has been shown to be arithmetically equivalent to $Π^1_2$-$\mathsf{CA}_0$ (due to Möllerfeld). For the upper bound we construct a novel computability model in order to give a totality argument for circular proofs with fixed points. In fact we formalise this argument itself within $Π^1_2$-$\mathsf{CA}_0$ in order to obtain the tight bounds we are after. Along the way we develop some novel reverse mathematics for the Knaster-Tarski fixed point theorem.
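For intuition, the two fixed points that muLJ adds have familiar (co)Church-style encodings at the level of types (a Haskell sketch, not the paper's proof-theoretic development):

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Least fixed point: defined by how it is eliminated (iteration).
newtype Mu f = Mu (forall x. (f x -> x) -> x)

-- Greatest fixed point: defined by how it is introduced (coiteration).
data Nu f = forall s. Nu s (s -> f s)
```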
The class of type-two basic feasible functionals ($\mathtt{BFF}_2$) is the analogue of $\mathtt{FP}$ (polynomial time functions) for type-2 functionals, that is, functionals that can take (first-order) functions as arguments. $\mathtt{BFF}_2$ can be defined through Oracle Turing machines with running time bounded by second-order polynomials. On the other hand, higher-order term rewriting provides an elegant formalism for expressing higher-order computation. We address the problem of characterizing $\mathtt{BFF}_2$ by higher-order term rewriting. Various kinds of interpretations for first-order term rewriting have been introduced in the literature for proving termination and characterizing first-order complexity classes. In this paper, we consider a recently introduced notion of cost-size interpretations for higher-order term rewriting and view second-order rewriting as a way of computing type-2 functionals. We then prove that the class of functionals represented by higher-order terms admitting polynomially bounded cost-size interpretations exactly corresponds to $\mathtt{BFF}_2$.
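The running-time bounds mentioned above are second-order polynomials in the sense of Kapron and Cook: expressions over a number variable $x$ and a function variable $F$, generated by

$$ P \;::=\; c \;\mid\; x \;\mid\; P + P \;\mid\; P \cdot P \;\mid\; F(P). $$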
In this paper, we explore the descriptive complexity theory of finite groups by examining the power of the second Ehrenfeucht--Fraïssé bijective pebble game in Hella's (Ann. Pure Appl. Log., 1989) hierarchy. This is a Spoiler--Duplicator game in which Spoiler can place up to two pebbles each round. While it trivially solves graph isomorphism, it may be nontrivial for finite groups and other ternary relational structures. We first provide a novel generalization of Weisfeiler--Leman (WL) coloring, which we call 2-ary WL. We then show that 2-ary WL is equivalent to the second Ehrenfeucht--Fraïssé bijective pebble game in Hella's hierarchy. Our main result is that, in the pebble game characterization, only $O(1)$ pebbles and $O(1)$ rounds are sufficient to identify all groups without Abelian normal subgroups (a class of groups for which isomorphism testing is known to be in $\mathsf{P}$; Babai, Codenotti, & Qiao, ICALP 2012). We actually show that $7$ pebbles and $7$ rounds suffice. In particular, we show that within the first few rounds, Spoiler can force Duplicator to select an isomorphism between two such groups at each subsequent round. By Hella's results (ibid.), this is equivalent to saying that these groups are identified by formulas in first-order logic with generalized 2-ary quantifiers, using only $7$ variables and quantifier depth $7$.
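For reference, one round of classic (1-dimensional) Weisfeiler-Leman colour refinement on graphs, the baseline that the paper's 2-ary WL generalizes to ternary relational structures such as groups (Haskell sketch; `adj` is an adjacency map):

```haskell
import           Data.List (sort)
import qualified Data.Map  as Map

-- One refinement round: each vertex's new colour is its old colour
-- paired with the sorted multiset of its neighbours' colours.
-- Iterate until the induced partition stabilizes.
refine :: Ord c => Map.Map Int [Int] -> Map.Map Int c -> Map.Map Int (c, [c])
refine adj colours =
  Map.mapWithKey
    (\v c -> (c, sort [ colours Map.! u | u <- Map.findWithDefault [] v adj ]))
    colours
```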
In relational verification, judicious alignment of computational steps facilitates proof of relations between programs using simple relational assertions. Relational Hoare logics (RHL) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (sequential composition), atop a complete Hoare logic, comprises an RHL for $\forall\forall$ properties that is complete in the sense of Cook. The notion of alignment completeness was previously proposed as an additional measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be sound and alignment complete for a new general class of automata. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete. Thus both logics are complete in the sense of Cook. The paper includes discussion of why alignment is not the only important principle for relational reasoning and proposes entailment completeness as further means to evaluate RHLs.
Verifying quantum systems has attracted a lot of interest in the last decades. In this paper, we study the quantitative model-checking of quantum continuous-time Markov chains (quantum CTMCs). The branching-time properties of quantum CTMCs are specified by continuous stochastic logic (CSL), which is well-known for verifying real-time systems, including classical CTMCs. The core of checking the CSL formulas lies in tackling multiphase until formulas. We develop an algebraic method using proper projection, matrix exponentiation, and definite integration to symbolically calculate the probability measures of path formulas. Thus the decidability of CSL is established. For efficiency, numerical methods are incorporated to guarantee that the time complexity is polynomial in the encoding size of the input model and linear in the size of the input formula. A running example of Apollonian networks is further provided to demonstrate our method.
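For classical CTMCs, the backbone computation behind CSL until formulas is transient analysis: with generator matrix $Q$, the distribution at time $t$ is

$$ \pi(t) \;=\; \pi(0)\, e^{Qt}, $$

and time-bounded until probabilities reduce to such transient probabilities on suitably transformed chains; the quantum setting above replaces this with projections and matrix exponentials over density operators.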
It was recently shown by Atserias, Buss and Mueller that the standard complexity-theoretic conjecture NEXP not in P / poly is consistent with the relatively strong bounded arithmetic theory V^0_2, which can prove a substantial part of complexity theory. We observe that their approach can be extended to show that the stronger conjectures NEXP not in EXP / poly and NEXP not in coNEXP are consistent with a stronger theory, which includes every true universal number-sort sentence.
We characterize type isomorphisms in the multiplicative-additive fragment of linear logic (MALL), and thus in *-autonomous categories with finite products, extending a result for the multiplicative fragment by Balat and Di Cosmo. This yields a much richer equational theory involving distributivity and cancellation laws. The unit-free case is obtained by relying on the proof-net syntax introduced by Hughes and Van Glabbeek. We use the sequent calculus to extend our results to full MALL, including all units, thanks to a study of cut-elimination and rule commutations.
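Examples of the distributivity laws that enter the richer equational theory (linear-logic notation; $\parr$ and $\with$ as in the cmll package):

$$ A \otimes (B \oplus C) \;\simeq\; (A \otimes B) \oplus (A \otimes C), \qquad A \parr (B \with C) \;\simeq\; (A \parr B) \with (A \parr C). $$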
Algebraic effect handlers are becoming an increasingly popular way of structuring effectful computations, and their performance is often a concern. One of the proposed approaches towards efficient compilation is tracking effect information through explicit subtyping coercions. However, in the presence of polymorphism, these coercions are compiled into additional arguments of compiled functions, incurring significant overhead. In this paper, we present a polymorphic effectful calculus, identify simplification phases needed to reduce the number of unnecessary constraints, and prove that they preserve semantics. In addition, we implement the simplification algorithm in the Eff language and evaluate its performance on a number of benchmarks. Though we do not prove the optimality of the presented simplifications, the results show that the algorithm eliminates all coercions, resulting in code as efficient as manually monomorphised code.
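A hypothetical sketch of why polymorphism makes coercions costly (Haskell; `Comp` and `Coer` are illustrative names, not Eff's actual intermediate representation): subtyping evidence that cannot be resolved statically is compiled into an extra run-time argument, which the simplification phases aim to eliminate.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Placeholder computation type indexed by effect information e.
newtype Comp e a = Comp { runComp :: a }

-- Reified subtyping coercion e <= e'.
newtype Coer e e' = Coer (forall a. Comp e a -> Comp e' a)

-- A polymorphic function must take and apply the coercion explicitly.
liftVia :: Coer e e' -> Comp e a -> Comp e' a
liftVia (Coer c) = c
```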