2017

We introduce a new class of (dynamical) systems that inherently capture cascading effects (viewed as consequential effects) and are naturally amenable to combinations. We develop an axiomatic general theory around those systems, and guide the endeavor towards an understanding of cascading failure. The theory evolves as an interplay of lattices and fixed points, and its results may be instantiated to commonly studied models of cascade effects. We characterize the systems through their fixed points, and equip them with two operators. We uncover properties of the operators, and express global systems through combinations of local systems. We enhance the theory with a notion of failure, and understand the class of shocks inducing a system to failure. We develop a notion of $\mu$-rank to capture the energy of a system, and understand the minimal amount of effort required to fail a system, termed resilience. We deduce a dual notion of fragility and show that the combination of systems sets a limit on the amount of fragility inherited.

We extend the constructive dependent type theory of the Logical Framework $\mathsf{LF}$ with monadic, dependent type constructors indexed with predicates over judgements, called Locks. These monads capture various possible proof attitudes in establishing the judgement of the object logic encoded by an $\mathsf{LF}$ type. Standard examples are factoring out the verification of a constraint or delegating it to an external oracle, supplying some non-apodictic epistemic evidence, or simply discarding the proof witness of a precondition, deeming it irrelevant. This new framework, called the Lax Logical Framework, $\mathsf{LLF}_{\cal P}$, is a conservative extension of $\mathsf{LF}$, and hence it is the appropriate metalanguage for dealing formally with side-conditions in rules or external evidence in logical systems. $\mathsf{LLF}_{\cal P}$ arises once the monadic nature of the lock type-constructor, ${\cal L}^{\cal P}_{M,\sigma}[\cdot]$, introduced by the authors in a series of papers together with Marina Lenisa, is fully exploited. The nature of the lock monads makes it possible to use the lock destructor itself, ${\cal U}^{\cal P}_{M,\sigma}[\cdot]$, in place of Moggi's monadic $let_T$, thus simplifying the equational theory. The rules for ${\cal U}^{\cal P}_{M,\sigma}[\cdot]$ also permit the removal of the monad once the constraint is satisfied. We derive the meta-theory of $\mathsf{LLF}_{\cal P}$ by a novel indirect method based on the encoding of $\mathsf{LLF}_{\cal P}$ […]

The class of uniformly computable real functions with respect to a small subrecursive class of operators computes the elementary functions of calculus, restricted to compact subsets of their domains. The class of conditionally computable real functions with respect to the same class of operators is a proper extension of the class of uniformly computable real functions and it computes the elementary functions of calculus on their whole domains. The definition of both classes relies on certain transformations of infinitistic names of real numbers. In the present paper, the conditional computability of real functions is characterized in the spirit of Tent and Ziegler, avoiding the use of infinitistic names.

In this paper we investigate important categories lying strictly between the Kleisli category and the Eilenberg-Moore category, for a Kock-Zöberlein monad on an order-enriched category. Firstly, we give a characterisation of free algebras in the spirit of domain theory. Secondly, we study the existence of weighted (co)limits, both on the abstract level and for specific categories of domain theory like the category of algebraic lattices. Finally, we apply these results to give a description of the idempotent split completion of the Kleisli category of the filter monad on the category of topological spaces.

The Internet of Things (IoT) offers the infrastructure of the information society. It hosts smart objects that automatically collect and exchange data of various kinds, directly gathered from sensors or generated by aggregations. Suitable coordination primitives and analysis mechanisms are needed to design and reason about IoT systems, and to anticipate the implied technological shifts. We address these issues from a foundational point of view. To study them, we define IoT-LySa, a process calculus endowed with a static analysis that tracks the provenance and the manipulation of IoT data, and how they flow in the system. The results of the analysis can be used by a designer to check the behaviour of smart objects, in particular to verify non-functional properties, such as security.

We propose new sequent calculus systems for orthologic (also known as minimal quantum logic) which satisfy the cut elimination property. The first one is a simple system relying on the involutive status of negation. The second one incorporates the notion of focusing (coming from linear logic) to add constraints on proofs and to optimise proof search. We demonstrate how to benefit from the new systems in automatic proof search for orthologic.

Quasi-trees generalize trees in that the unique "path" between two nodes may be infinite and have any countable order type. They are used to define the rank-width of a countable graph in such a way that it is equal to the least upper bound of the rank-widths of its finite induced subgraphs. Join-trees are the corresponding directed trees. They are useful to define the modular decomposition of a countable graph. We also consider ordered join-trees, which generalize rooted trees equipped with a linear order on the set of sons of each node. We define algebras with finitely many operations that generate (via infinite terms) these generalized trees. We prove that the associated regular objects (those defined by regular terms) are exactly the ones that are the unique models of monadic second-order sentences. These results use and generalize a similar result by W. Thomas for countable linear orders.

Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K $\geq$ 1, left-linear cons-free systems with type order K characterize E$^K$TIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E$^K$TIME versus EXP$^{K-1}$TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E$^1$TIME $\subsetneq$ E$^2$TIME $\subsetneq$ ... Our work confirms prior results that having full non-determinism (via […]

The algebraic intersection type unification problem is an important component in proof search related to several natural decision problems in intersection type systems. It remains open whether the algebraic intersection type unification problem is decidable. We give the first nontrivial lower bound for the problem by showing (our main result) that it is exponential time hard. Furthermore, we show that this holds even under rank 1 solutions (substitutions whose codomains are restricted to contain rank 1 types). In addition, we provide a fixed-parameter intractability result for intersection type matching (one-sided unification), which is known to be NP-complete. We place the algebraic intersection type unification problem in the context of unification theory. The equational theory of intersection types can be presented as an algebraic theory with an ACI (associative, commutative, and idempotent) operator (intersection type) combined with distributivity properties with respect to a second operator (function type). Although the problem is algebraically natural and interesting, it appears to occupy a hitherto unstudied place in the theory of unification, and our investigation suggests that new methods are required to understand it. Thus, for the lower bound proof, we were not able to reduce from known results in ACI-unification theory and instead use game-theoretic methods based on two-player tiling games.

In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.
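For concreteness, one standard type-theoretic formulation of Markov's principle, for a decidable predicate given by a Boolean-valued function on the naturals, is the following (a sketch; the paper's precise formulation may differ in details):

```latex
\[
\mathsf{MP} \;:\; \prod_{f : \mathbb{N} \to \mathbb{B}}
  \Big( \neg\neg \big(\textstyle\sum_{n : \mathbb{N}} f\,n =_{\mathbb{B}} \mathsf{true}\big)
  \;\longrightarrow\; \textstyle\sum_{n : \mathbb{N}} f\,n =_{\mathbb{B}} \mathsf{true} \Big)
\]
```

Classically $\mathsf{MP}$ is trivial, and it is validated by recursive realizability; the result above shows it is nonetheless independent of the type theory considered.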

Information flow is the branch of security that studies the leakage of information due to correlation between secrets and observables. Since in general such correlation cannot be avoided completely, it is important to quantify the leakage. The most followed approaches to defining appropriate measures are those based on information theory. In particular, one of the most successful approaches is the recently proposed $g$-leakage framework, which encompasses most of the information-theoretic ones. A problem with $g$-leakage, however, is that it is defined in terms of a minimization problem, which, in the case of large systems, can be computationally rather heavy. In this paper we study the case in which the channel associated to the system can be decomposed into simpler channels, which typically happens when the observables consist of multiple components. Our main contribution is the derivation of bounds on the (multiplicative version of) $g$-leakage of the whole system in terms of the $g$-leakages of its components. We also consider the particular cases of min-entropy leakage and of parallel channels, generalizing and systematizing results from the literature. We demonstrate the effectiveness of our method and evaluate the precision of our bounds using examples.
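As a concrete instance of the quantities involved, min-entropy leakage (the special case of $g$-leakage for the identity gain function) can be computed directly for a small channel. The following Python sketch is illustrative only (function and variable names are not from the paper):

```python
def min_entropy_leakage(prior, channel):
    """Multiplicative min-entropy leakage of a channel under a prior.

    prior:   list of probabilities pi[x] over secrets, summing to 1.
    channel: row-stochastic matrix, channel[x][y] = P(observe y | secret x).
    Leakage L = V_posterior / V_prior, where V(pi) = max_x pi[x] is the
    prior vulnerability and V_posterior = sum_y max_x pi[x] * C[x][y].
    """
    v_prior = max(prior)
    n_obs = len(channel[0])
    v_post = sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
                 for y in range(n_obs))
    return v_post / v_prior
```

For two channels run in parallel on the same secret, the joint channel is the row-wise Kronecker product of the two matrices, and bounding the leakage of the joint channel by the leakages of the components is exactly the flavour of result the paper derives.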

For a quantale ${\sf{V}}$, the category $\sf V$-${\bf Top}$ of ${\sf{V}}$-valued topological spaces may be introduced as a full subcategory of those ${\sf{V}}$-valued closure spaces whose closure operation preserves finite joins. In generalization of Barr's characterization of topological spaces as the lax algebras of a lax extension of the ultrafilter monad from maps to relations of sets, for ${\sf{V}}$ completely distributive, ${\sf{V}}$-topological spaces have recently been shown to be characterizable by a lax extension of the ultrafilter monad to ${\sf{V}}$-valued relations. As a consequence, ${\sf{V}}$-$\bf Top$ is seen to be a topological category over $\bf Set$, provided that ${\sf{V}}$ is completely distributive. In this paper we give a choice-free proof that ${\sf{V}}$-$\bf Top$ is a topological category over $\bf Set$ under the considerably milder provision that ${\sf{V}}$ be a spatial coframe. When ${\sf{V}}$ is a continuous lattice, that provision yields complete distributivity of ${\sf{V}}$ in the constructive sense, hence also in the ordinary sense whenever the Axiom of Choice is granted.

In this paper, we define a new realizability semantics for the simply typed lambda-mu-calculus. We show that if a term is typable, then it inhabits the interpretation of its type. We also prove a completeness result of our realizability semantics using a particular term model. This paper corrects some errors in [21] by the first author and Saber.

We prove that connectors are stable under quotients in any (regular) Goursat category. As a consequence, the category $\mathsf{Conn}(\mathbb{C})$ of connectors in $\mathbb{C}$ is a Goursat category whenever $\mathbb C$ is. This implies that Goursat categories can be characterised in terms of a simple property of internal groupoids.

Section:
Algebraic methods

Session contracts are a formalism enabling the investigation of client/server interaction protocols and the interpretation of session types. We extend session contracts in order to represent outputs whose actual sending in an interaction depends on a third party or on a mutual agreement between the partners. Such contracts are hence adaptable, or, as we say, "affectible". In client/server systems, compliance generally stands for the satisfaction of all the client's requests by the server. We define an abstract notion of "affectible compliance" and show that it has a precise three-party game-theoretic interpretation. This in turn is shown to be equivalent to a compliance based on interactions that can undergo a sequence of failures and rollbacks, as well as to a compliance based on interactions which can be mediated by an orchestrator. Moreover, there is a one-to-one effective correspondence between winning strategies and orchestrators. The subcontract relation for affectible contracts is also investigated.

We show that the first order theory of the lattice of open sets in some natural topological spaces is $m$-equivalent to second order arithmetic. We also show that for many natural computable metric spaces and computable domains the first order theory of the lattice of effectively open sets is undecidable. Moreover, for several important spaces (e.g., $\mathbb{R}^n$, $n\geq1$, and the domain $P\omega$) this theory is $m$-equivalent to first order arithmetic.

Normalisation in probability theory turns a subdistribution into a proper distribution. It is a partial operation, since it is undefined for the zero subdistribution. This partiality makes it hard to reason equationally about normalisation. A novel description of normalisation is given as a mathematically well-behaved total function. The output of this `hyper' normalisation operation is a distribution of distributions. It improves reasoning about normalisation. After developing the basics of this theory of (hyper) normalisation, it is put to use in a similarly new description of conditioning, producing a distribution of conditional distributions. This is used to give a clean abstract reformulation of refinement in quantitative information flow.
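The partiality issue and its 'hyper' resolution can be illustrated concretely on finite-support distributions. The following Python sketch is a toy encoding (the names `normalise` and `hyper` are illustrative, not the paper's notation):

```python
def normalise(sub):
    # Ordinary normalisation: partial, since it is undefined on the
    # zero subdistribution (weights summing to 0).
    total = sum(sub.values())
    if total == 0:
        raise ValueError("undefined on the zero subdistribution")
    return {x: p / total for x, p in sub.items()}

def hyper(cells):
    # 'Hyper' normalisation: total. Input is a decomposition of a
    # distribution into subdistributions (e.g. the cells induced by an
    # observation); output is a distribution *of* distributions: each
    # non-zero cell contributes its weight paired with its normalised
    # conditional distribution. Zero cells simply disappear.
    out = []
    for sub in cells:
        w = sum(sub.values())
        if w > 0:
            out.append((w, normalise(sub)))
    return out
```

The output of `hyper` is always well defined, which is what makes equational reasoning about conditioning tractable in this style.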

In the context of protomodular categories, several additional conditions have been considered in order to obtain a closer group-like behavior. Among them are locally algebraic cartesian closedness and algebraic coherence. The recent notion of S-protomodular category, whose main examples are the category of monoids and, more generally, categories of monoids with operations and Jónsson-Tarski varieties, raises a similar question: how to get a description of S-protomodular categories with a strong monoid-like behavior. In this paper we consider relative versions of the conditions mentioned above, in order to exhibit the parallelism with the "absolute" protomodular context and to obtain a hierarchy among S-protomodular categories.

Metric temporal logic (MTL) and timed propositional temporal logic (TPTL) are quantitative extensions of linear temporal logic, which are prominent and widely used in the verification of real-time systems. It was recently shown that the path checking problem for MTL, when evaluated over finite timed words, is in the parallel complexity class NC. In this paper, we derive precise complexity results for the path-checking problem for MTL and TPTL when evaluated over infinite data words over the non-negative integers. Such words may be seen as the behaviours of one-counter machines. For this setting, we give a complete analysis of the complexity of the path-checking problem depending on the number of register variables and the encoding of constraint numbers (unary or binary). As the two main results, we prove that the path-checking problem for MTL is P-complete, whereas the path-checking problem for TPTL is PSPACE-complete. The results yield the precise complexity of model checking deterministic one-counter machines against formulae of MTL and TPTL.

Notions of simulation, among other uses, provide a computationally tractable and sound (but not necessarily complete) proof method for language inclusion. They have been comprehensively studied by Lynch and Vaandrager for nondeterministic and timed systems; for Büchi automata the notion of fair simulation has been introduced by Henzinger, Kupferman and Rajamani. We contribute to a generalization of fair simulation in two different directions: one for nondeterministic tree automata previously studied by Bomhard; and the other for probabilistic word automata with finite state spaces, both under the Büchi acceptance condition. The former nondeterministic definition is formulated in terms of systems of fixed-point equations, hence is readily translated to parity games and is then amenable to Jurdziński's algorithm; the latter probabilistic definition bears a strong ranking-function flavor. These two different-looking definitions are derived from one source, namely our coalgebraic modeling of Büchi automata. Based on these coalgebraic observations, we also prove their soundness: a simulation indeed witnesses language inclusion.

This paper investigates second-order representations in the sense of Kawamura and Cook for spaces of integrable functions that regularly show up in analysis. It builds upon prior work about the space of continuous functions on the unit interval: Kawamura and Cook introduced a representation inducing the right complexity classes and proved that it is the weakest second-order representation such that evaluation is polynomial-time computable. The first part of this paper provides a similar representation for the space of integrable functions on a bounded subset of Euclidean space: the weakest representation rendering integration over boxes polynomial-time computable. In contrast to the representation of continuous functions, however, this representation turns out to be discontinuous with respect to both the norm and the weak topology. The second part modifies the representation to be continuous and generalizes it to Lp-spaces. The arising representations are proven to be computably equivalent to the standard representations of these spaces as metric spaces and to still render integration polynomial-time computable. The family is extended to cover Sobolev spaces on the unit interval, where less basic operations like differentiation and some Sobolev embeddings are shown to be polynomial-time computable. Finally as a further justification quantitative versions of the Arzelà-Ascoli and Fréchet-Kolmogorov Theorems are presented and used to argue that these representations […]

We extend the notion of localic completion of generalised metric spaces by Steven Vickers to the setting of generalised uniform spaces. A generalised uniform space (gus) is a set X equipped with a family of generalised metrics on X, where a generalised metric on X is a map from the product of X to the upper reals satisfying the zero self-distance law and the triangle inequality. For a symmetric generalised uniform space, the localic completion lifts its generalised uniform structure to a point-free generalised uniform structure. This point-free structure induces a complete generalised uniform structure on the set of formal points of the localic completion that gives the standard completion of the original gus with Cauchy filters. We extend the localic completion to a full and faithful functor from the category of locally compact uniform spaces into that of overt locally compact completely regular formal topologies. Moreover, we give an elementary characterisation of the cover of the localic completion of a locally compact uniform space that simplifies the existing characterisation for metric spaces. These results generalise the corresponding results for metric spaces by Erik Palmgren. Furthermore, we show that the localic completion of a symmetric gus is equivalent to the point-free completion of the uniform formal topology associated with the gus. We work in Aczel's constructive set theory CZF with the Regular Extension Axiom. Some of our results also require Countable […]

The edit distance between two words $w_1, w_2$ is the minimal number of word operations (letter insertions, deletions, and substitutions) necessary to transform $w_1$ to $w_2$. The edit distance generalizes to languages $\mathcal{L}_1, \mathcal{L}_2$, where the edit distance from $\mathcal{L}_1$ to $\mathcal{L}_2$ is the minimal number $k$ such that for every word from $\mathcal{L}_1$ there exists a word in $\mathcal{L}_2$ with edit distance at most $k$. We study the edit distance computation problem between pushdown automata and their subclasses. The problem of computing edit distance to a pushdown automaton is undecidable, and in practice, the interesting question is to compute the edit distance from a pushdown automaton (the implementation, a standard model for programs with recursion) to a regular language (the specification). In this work, we present a complete picture of decidability and complexity for the following problems: (1)~deciding whether, for a given threshold $k$, the edit distance from a pushdown automaton to a finite automaton is at most $k$, and (2)~deciding whether the edit distance from a pushdown automaton to a finite automaton is finite.
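The edit distance on words defined above is computed by the classic dynamic programme; a minimal Python sketch for illustration (this is the textbook word-level algorithm, not the paper's automata-theoretic construction):

```python
def edit_distance(w1: str, w2: str) -> int:
    # d[i][j] = edit distance between the prefixes w1[:i] and w2[:j].
    m, n = len(w1), len(w2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                  # i deletions
    for j in range(n + 1):
        d[0][j] = j                  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if w1[i - 1] == w2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]
```

Lifting this word-level notion to languages, as in the abstract, asks for the worst case over all words of $\mathcal{L}_1$ of the best match in $\mathcal{L}_2$.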

The well-quasi-ordering (i.e., a well-founded quasi-ordering in which all antichains are finite) that defines well-structured transition systems (WSTS) is shown not to be the weakest hypothesis that implies decidability of the coverability problem. We show that coverability is decidable for monotone transition systems that only require the absence of infinite antichains, and we call well-behaved transition systems (WBTS) the resulting strict superclass of the class of WSTS. By contrast, we confirm that boundedness and termination are undecidable for WBTS under the usual hypotheses, and show that stronger monotonicity conditions can enforce decidability. The proofs are similar or even identical to existing ones, but the surprising message is that a hypothesis implicitly assumed minimal for twenty years in the theory of WSTS can meaningfully be relaxed, allowing more orderings to be handled in an abstract way.

We give a new order-theoretic characterization of a complete Heyting and co-Heyting algebra $C$. This result provides an unexpected relationship with the field of Nash equilibria, being based on the so-called Veinott ordering relation on subcomplete sublattices of $C$, which is crucially used in Topkis' theorem for studying the order-theoretic structure of Nash equilibria of supermodular games.

The computation of the winning set for parity objectives and for Streett objectives, in graphs as well as in game graphs, is a central problem in computer-aided verification, with applications to the verification of closed systems with strong fairness conditions, the verification of open systems, checking interface compatibility, well-formedness of specifications, and the synthesis of reactive systems. We show how to compute the winning set on $n$ vertices for (1) parity-3 (aka one-pair Streett) objectives in game graphs in time $O(n^{5/2})$ and for (2) $k$-pair Streett objectives in graphs in time $O(n^2 + nk \log n)$. For both problems this gives faster algorithms for dense graphs and represents the first improvement in asymptotic running time in 15 years.

We present sound and complete environmental bisimilarities for a variant of Dybvig et al.'s calculus of multi-prompted delimited-control operators with dynamic prompt generation. The reasoning principles that we obtain generalize and advance the existing techniques for establishing program equivalence in calculi with single-prompted delimited control. The basic theory that we develop is presented using Madiot et al.'s framework that allows for smooth integration and composition of up-to techniques facilitating bisimulation proofs. We also generalize the framework in order to express environmental bisimulations that support equivalence proofs of evaluation contexts representing continuations. This change leads to a novel and powerful up-to technique enhancing bisimulation proofs in the presence of control operators.

Traditionally, formal languages are defined as sets of words. More recently, the alternative coalgebraic or coinductive representation as infinite tries, i.e., prefix trees branching over the alphabet, has been used to obtain compact and elegant proofs of classic results in language theory. In this article, we study this representation in the Isabelle proof assistant. We define regular operations on infinite tries and prove the axioms of Kleene algebra for those operations. Thereby, we exercise corecursion and coinduction and confirm that the coinductive view is profitable in formalizations, as it improves over the set-of-words view with respect to proof automation.

The winning condition of a parity game with costs requires an arbitrary, but fixed bound on the cost incurred between occurrences of odd colors and the next occurrence of a larger even one. Such games quantitatively extend parity games while retaining most of their attractive properties, i.e., determining the winner is in NP and co-NP and one player has positional winning strategies. We show that the characteristics of parity games with costs are vastly different when asking for strategies realizing the minimal such bound: The solution problem becomes PSPACE-complete and exponential memory is both necessary in general and always sufficient. Thus, solving and playing parity games with costs optimally is harder than just winning them. Moreover, we show that the tradeoff between the memory size and the realized bound is gradual in general. All these results hold true for both a unary and binary encoding of costs. Moreover, we investigate Streett games with costs. Here, playing optimally is as hard as winning, both in terms of complexity and memory.

We consider a special case of Dickson's lemma: for any two functions $f,g$ on the natural numbers there are two numbers $i<j$ such that both $f$ and $g$ weakly increase on them, i.e., $f_i\le f_j$ and $g_i \le g_j$. By a combinatorial argument (due to the first author) a simple bound for such $i,j$ is constructed. The combinatorics is based on the finite pigeonhole principle and results in a descent lemma. From the descent lemma one can prove Dickson's lemma, then guess what the bound might be, and verify it by an appropriate proof. We also extract (via realizability) a bound from (a formalization of) our proof of the descent lemma. Keywords: Dickson's lemma, finite pigeonhole principle, program extraction from proofs, non-computational quantifiers.
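The statement is easy to check experimentally on concrete sequences; a brute-force Python sketch (the paper's contribution is of course the explicit bound, which this naive search does not provide):

```python
def dickson_pair(f, g, bound):
    # Naive search of the prefix [0, bound) for indices i < j with
    # f(i) <= f(j) and g(i) <= g(j). Dickson's lemma guarantees that
    # such a pair exists for every pair of infinite sequences, but a
    # finite prefix may of course be too short to contain one.
    for j in range(bound):
        for i in range(j):
            if f(i) <= f(j) and g(i) <= g(j):
                return i, j
    return None
```

For monotone sequences the very first pair of indices already works; the interesting cases are sequences that keep descending in one coordinate while ascending in the other.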

Presentations of categories are a well-known algebraic tool to provide descriptions of categories by means of generators, for objects and morphisms, and relations on morphisms. Here we generalize this notion in order to consider situations where the objects are considered modulo an equivalence relation, which is described by equational generators. When those form a convergent (abstract) rewriting system on objects, there are three very natural constructions that can be used to define the category which is described by the presentation: one consists in turning equational generators into identities (i.e. considering a quotient category), one consists in formally adding inverses to equational generators (i.e. localizing the category), and one consists in restricting to objects which are normal forms. We show that, under suitable coherence conditions on the presentation, the three constructions coincide, thus generalizing celebrated results on presentations of groups, and we extend those conditions to presentations of monoidal categories.

We show that, for a quantale $V$ and a $\mathsf{Set}$-monad $\mathbb{T}$ laxly extended to $V$-$\mathsf{Rel}$, the presheaf monad on the category of $(\mathbb{T},V)$-categories is simple, giving rise to a lax orthogonal factorisation system (LOFS) whose corresponding weak factorisation system has embeddings as left part. In addition, we present presheaf submonads and study the LOFSs they define. This provides a method of constructing weak factorisation systems on some well-known examples of topological categories over $\mathsf{Set}$.

Kleene algebra axioms are complete with respect to both language models and binary relation models. In particular, two regular expressions recognise the same language if and only if they are universally equivalent in the model of binary relations. We consider Kleene allegories, i.e., Kleene algebras with two additional operations and a constant which are natural in binary relation models: intersection, converse, and the full relation. While regular languages are closed under those operations, the above characterisation breaks down. Putting together a few results from the literature, we give a characterisation in terms of languages of directed and labelled graphs. Taking inspiration from Petri nets, we design a finite automata model, Petri automata, which recognises such graphs. We prove a Kleene theorem for this automata model: the sets of graphs recognisable by Petri automata are precisely the sets of graphs definable through the extended regular expressions we consider. Petri automata allow us to obtain decidability of identity-free relational Kleene lattices, i.e., the equational theory generated by binary relations on the signature of regular expressions with intersection, but where one forbids unit. This restriction is used to ensure that the corresponding graphs are acyclic. We actually show that this decision problem is EXPSPACE-complete.

In this paper we give an arithmetical proof of the strong normalization of lambda-Sym-Prop of Berardi and Barbanera [1], which can be considered as a formulae-as-types translation of classical propositional logic in natural deduction style. Then we give a translation between the lambda-Sym-Prop-calculus and the lambda-bar-mu-mu-tilde-star-calculus, which is the implicational part of the lambda-bar-mu-mu-tilde-calculus invented by Curien and Herbelin [3] extended with negation. In this paper we adapt the method of David and Nour [4] for proving strong normalization. The novelty in our proof is the notion of zoom-in sequences of redexes, which leads us directly to the proof of the main theorem.

In 2011, Rideau and Winskel introduced concurrent games and strategies as event structures, generalizing prior work on causal formulations of games. In this paper we give a detailed, self-contained and slightly-updated account of the results of Rideau and Winskel: a notion of pre-strategy based on event structures; a characterisation of those pre-strategies (deemed strategies) which are preserved by composition with a copycat strategy; and the construction of a bicategory of these strategies. Furthermore, we prove that the corresponding category has a compact closed structure, and hence forms the basis for the semantics of concurrent higher-order computation.