2022

We extend the meet-implication fragment of propositional intuitionistic logic with a meet-preserving modality. We give semantics based on semilattices and a duality result with a suitable notion of descriptive frame. As a consequence we obtain completeness and identify a common (modal) fragment of a large class of modal intuitionistic logics. We recognise this logic as a dialgebraic logic, and as a consequence obtain expressivity-somewhere-else. Within the dialgebraic framework, we then investigate the extension of the meet-implication fragment of propositional intuitionistic logic with a monotone modality and prove completeness and expressivity-somewhere-else for it.

In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First we show how derivatives allow us to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next we apply a general notion of "local" differential of morphisms to the proof of a Lipschitz property of these morphisms, allowing us in turn to relate the observational distance on pPCF terms to a distance the model is naturally equipped with. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.

This paper is a contribution to the search for efficient and high-level mathematical tools to specify and reason about (abstract) programming languages or calculi. Generalising the reduction monads of Ahrens et al., we introduce transition monads, thus covering new applications such as lambda-bar-mu-calculus, pi-calculus, Positive GSOS specifications, differential lambda-calculus, and the big-step, simply-typed, call-by-value lambda-calculus. Moreover, we design a suitable notion of signature for transition monads.

We develop a uniform coalgebraic approach to Jónsson-Tarski and Thomason type dualities for various classes of neighborhood frames and neighborhood algebras. In the first part of the paper we construct an endofunctor on the category of complete and atomic Boolean algebras that is dual to the double powerset functor on $\mathsf{Set}$. This allows us to show that Thomason duality for neighborhood frames can be viewed as an algebra-coalgebra duality. We generalize this approach to any class of algebras for an endofunctor presented by one-step axioms in the language of infinitary modal logic. As a consequence, we obtain a uniform approach to dualities for various classes of neighborhood frames, including monotone neighborhood frames, pretopological spaces, and topological spaces. In the second part of the paper we develop a coalgebraic approach to Jónsson-Tarski duality for neighborhood algebras and descriptive neighborhood frames. We introduce an analogue of the Vietoris endofunctor on the category of Stone spaces and show that descriptive neighborhood frames are isomorphic to coalgebras for this endofunctor. This allows us to obtain a coalgebraic proof of the duality between descriptive neighborhood frames and neighborhood algebras. Using one-step axioms in the language of finitary modal logic, we restrict this duality to other classes of neighborhood algebras studied in the literature, including monotone modal algebras and contingency algebras. We conclude the paper by connecting the two […]

Adding propositional quantification to the modal logics K, T or S4 is known to lead to undecidability, but CTL with propositional quantification under the tree semantics (tQCTL) admits a non-elementary Tower-complete satisfiability problem. We investigate the complexity of strict fragments of tQCTL as well as of the modal logic K with propositional quantification under the tree semantics. More specifically, we show that tQCTL restricted to the temporal operator EX is already Tower-hard, which is unexpected as EX can only enforce local properties. When tQCTL restricted to EX is interpreted on N-bounded trees for some N >= 2, we prove that the satisfiability problem is AExpPol-complete; AExpPol-hardness is established by reduction from a recently introduced tiling problem, instrumental for studying the model-checking problem for interval temporal logics. As consequences of our proof method, we prove Tower-hardness of tQCTL restricted to EF or to EXEF and of well-known modal logics such as K, KD, GL, K4 and S4 with propositional quantification under a semantics based on classes of trees.

Inspired by a mathematical riddle involving fuses, we define the "fusible numbers" as follows: $0$ is fusible, and whenever $x,y$ are fusible with $|y-x|<1$, the number $(x+y+1)/2$ is also fusible. We prove that the set of fusible numbers, ordered by the usual order on $\mathbb R$, is well-ordered, with order type $\varepsilon_0$. Furthermore, we prove that the density of the fusible numbers along the real line grows at an incredibly fast rate: letting $g(n)$ be the largest gap between consecutive fusible numbers in the interval $[n,\infty)$, we have $g(n)^{-1} \ge F_{\varepsilon_0}(n-c)$ for some constant $c$, where $F_\alpha$ denotes the fast-growing hierarchy. Finally, we derive some true statements that can be formulated but not proven in Peano Arithmetic, of a different flavor than previously known such statements: PA cannot prove the true statement "For every natural number $n$ there exists a smallest fusible number larger than $n$." Also, consider the algorithm "$M(x)$: if $x<0$ return $-x$, else return $M(x-M(x-1))/2$." Then $M$ terminates on real inputs, although PA cannot prove the statement "$M$ terminates on all natural inputs."
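The algorithm $M$ quoted above is concrete enough to run directly. Below is a small Python sketch (not from the paper) using exact rational arithmetic; as the abstract's growth results suggest, the recursion becomes infeasible very quickly as the input grows, so only tiny inputs are practical.

```python
from fractions import Fraction

def M(x):
    """The algorithm from the abstract:
    M(x) = -x if x < 0, else M(x - M(x - 1)) / 2."""
    x = Fraction(x)
    if x < 0:
        return -x
    return M(x - M(x - 1)) / 2

# Only tiny inputs are feasible in practice.
print(M(-3))  # 3
print(M(0))   # 1/2
print(M(1))   # 1/8
```

For example, $M(0) = M(0 - M(-1))/2 = M(-1)/2 = 1/2$, and $M(1) = M(1 - 1/2)/2 = M(1/2)/2 = 1/8$.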

Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.

In the author's PhD thesis (2019) universal envelopes were introduced as a tool for studying the continuously obtainable information on discontinuous functions. To any function $f \colon X \to Y$ between $\operatorname{qcb}_0$-spaces one can assign a so-called universal envelope which, in a well-defined sense, encodes all continuously obtainable information on the function. A universal envelope consists of two continuous functions $F \colon X \to L$ and $\xi_L \colon Y \to L$ with values in a $\Sigma$-split injective space $L$. Any continuous function with values in an injective space whose composition with the original function is again continuous factors through the universal envelope. However, it is not possible in general to uniformly compute this factorisation. In this paper we propose the notion of uniform envelopes. A uniform envelope is additionally endowed with a map $u_L \colon L \to \mathcal{O}^2(Y)$ that is compatible with the multiplication of the double powerspace monad $\mathcal{O}^2$ in a certain sense. This yields for every continuous map with values in an injective space a choice of uniformly computable extension. Under a suitable condition which we call uniform universality, this extension yields a uniformly computable solution for the above factorisation problem. Uniform envelopes can be endowed with a composition operation. We establish criteria that ensure that the composition of two uniformly universal envelopes is again uniformly universal. These criteria admit a […]

In this paper, we investigate the problem of synthesizing computable functions of infinite words over an infinite alphabet (data $\omega$-words). The notion of computability is defined through Turing machines with infinite inputs which can produce the corresponding infinite outputs in the limit. We use non-deterministic transducers equipped with registers, an extension of register automata with outputs, to describe specifications. Being non-deterministic, such transducers may not define functions but more generally relations of data $\omega$-words. In order to increase the expressive power of these machines, we even allow guessing of arbitrary data values when updating their registers. For functions over data $\omega$-words, we identify a sufficient condition (the possibility of determining the next letter to be outputted, which we call the next letter problem) under which computability (resp. uniform computability) and continuity (resp. uniform continuity) coincide. We focus on two kinds of data domains: first, the general setting of oligomorphic data, which encompasses any data domain with equality, as well as the setting of rational numbers with linear order; and second, the set of natural numbers equipped with linear order. For both settings, we prove that functionality, i.e. determining whether the relation recognized by the transducer is actually a function, is decidable. We also show that the so-called next letter problem is decidable, yielding equivalence between (uniform) continuity and […]

Turing machines and register machines have been used for decades in theoretical computer science as abstract models of computation. The $\lambda$-calculus has also played a central role in this domain, as it allows one to focus on the notion of functional computation, based on the substitution mechanism, while abstracting away from implementation details. The present article starts from the observation that the equivalence between these formalisms is based on the Church-Turing Thesis rather than an actual encoding of $\lambda$-terms into Turing (or register) machines. The reason is that these machines are not well-suited for modelling $\lambda$-calculus programs. We study a class of abstract machines that we call "addressing machines" since they are only able to manipulate memory addresses of other machines. The operations performed by these machines are very elementary: load an address in a register, apply a machine to another one via their addresses, and call the address of another machine. We endow addressing machines with an operational semantics based on leftmost reduction and study their behaviour. The set of addresses of these machines can be easily turned into a combinatory algebra. In order to obtain a model of the full untyped $\lambda$-calculus, we need to introduce a rule that bears similarities with the $\omega$-rule and the rule $\zeta_\beta$ from combinatory logic.

The problem of model checking procedural programs has fostered much research towards the definition of temporal logics for reasoning on context-free structures. The most notable of such results are temporal logics on Nested Words, such as CaRet and NWTL. Recently, the logic OPTL was introduced, based on the class of Operator Precedence Languages (OPLs), more powerful than Nested Words. We define the new OPL-based logic POTL and prove its FO-completeness. POTL improves on NWTL by enabling the formulation of requirements involving pre/post-conditions, stack inspection, and others in the presence of exception-like constructs. It improves on OPTL too, which we instead show not to be FO-complete; it also makes it easier to express stack inspection and function-local properties. In a companion paper we report a model checking procedure for POTL and experimental results based on a prototype tool developed for this purpose. For completeness, a short summary of this complementary result is also provided in this paper.

Timed automata (TA) have been widely adopted as a suitable formalism to model time-critical systems. Furthermore, contemporary model-checking tools allow the designer to check whether a TA complies with a system specification. However, the exact timing constants are often uncertain during the design phase. Consequently, the designer is often able to build a TA with a correct structure, but the timing constants need to be tuned to satisfy the specification. Moreover, even if the TA initially satisfies the specification, it can be the case that just a slight perturbation during the implementation causes a violation of the specification. Unfortunately, model-checking tools are usually not able to provide any reasonable guidance on how to fix the model in such situations. In this paper, we propose several concepts and techniques to cope with the above-mentioned design-phase issues when dealing with reachability and safety specifications.

The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented with a Büchi automaton result in comparators that are finite-state and accept by the Büchi condition as well. Such $\omega$-regular comparators further lead to generic algorithms for a number of well-studied problems, including quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem. We study comparators for two aggregate functions: discounted sum and limit average. We prove that the discounted-sum comparator is $\omega$-regular iff the discount factor is an integer. Not every aggregate function, however, has an $\omega$-regular comparator. Specifically, we show that the language of sequence pairs for which limit-average aggregates exist is neither $\omega$-regular nor […]
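As a rough illustration of the quantity a discounted-sum comparator relates (this is our own sketch of the aggregate function, not the automaton construction from the paper), the discounted sum of a finite weight prefix with an integer discount factor can be computed exactly:

```python
from fractions import Fraction

def discounted_sum(weights, d):
    """Exact discounted sum of a finite weight prefix:
    sum_i weights[i] / d**i, for an integer discount factor d > 1."""
    return sum(Fraction(w) / Fraction(d) ** i for i, w in enumerate(weights))

# A comparator decides, on the fly, which of two infinite weight
# sequences has the larger aggregate; on finite prefixes we can
# simply compare the two sums.
a = discounted_sum([1, 0, 0, 0], 2)  # 1
b = discounted_sum([0, 1, 1, 1], 2)  # 1/2 + 1/4 + 1/8 = 7/8
print(a > b)  # True
```

The point of the paper's comparator is to make this comparison with a finite-state device reading both sequences online, which is possible exactly when the discount factor is an integer.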

The program-over-monoid model of computation originates with Barrington's proof that the model captures the complexity class $\mathsf{NC^1}$. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as $\mathbf{DA}$ satisfies tameness and hence that the regular languages recognized by programs over monoids in $\mathbf{DA}$ are precisely those recognizable in the classical sense by morphisms from $\mathbf{QDA}$. Third, we show by contrast that the well-studied class of monoids called $\mathbf{J}$ is not tame. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from $\mathbf{DA}$.

We define a point-free construction of real exponentiation and logarithms, i.e.\ we construct the maps $\exp\colon (0, \infty)\times \mathbb{R} \rightarrow (0,\infty),\, (x, \zeta) \mapsto x^\zeta$ and $\log\colon (1,\infty)\times (0, \infty) \rightarrow \mathbb{R},\, (b, y) \mapsto \log_b(y)$, and we develop familiar algebraic rules for them. The point-free approach is constructive, and defines the points of a space as models of a geometric theory, rather than as elements of a set; in particular, this allows geometric constructions to be applied to points living in toposes other than Set. Our geometric development includes new lifting and gluing techniques in point-free topology, which highlight how properties of $\mathbb{Q}$ determine properties of real exponentiation. This work is motivated by our broader research programme of developing a version of adelic geometry via topos theory. In particular, we wish to construct the classifying topos of places of $\mathbb{Q}$, which will provide a geometric perspective into the subtle relationship between $\mathbb{R}$ and $\mathbb{Q}_p$, a question of longstanding number-theoretic interest.

While many applications of automata in formal methods can use nondeterministic automata, some applications, most notably synthesis, need deterministic or good-for-games (GFG) automata. The latter are nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past. The minimization problem for deterministic Büchi and co-Büchi word automata is NP-complete. In particular, no canonical minimal deterministic automaton exists, and a language may have different minimal deterministic automata. We describe a polynomial minimization algorithm for GFG co-Büchi word automata with transition-based acceptance. Here, a run is accepting if it traverses a set $\alpha$ of designated transitions only finitely often. Our algorithm is based on a sequence of transformations we apply to the automaton, on top of which a minimal quotient automaton is defined. We use our minimization algorithm to show canonicity for transition-based GFG co-Büchi word automata: all minimal automata have isomorphic safe components (namely, components obtained by restricting the transitions to those not in $\alpha$), and once we saturate the automata with $\alpha$-transitions, we get full isomorphism.

Priced timed games are two-player zero-sum games played on priced timed automata (whose locations and transitions are labeled by weights modelling the cost of spending time in a state and executing an action, respectively). The goals of the players are to minimise and maximise the cost to reach a target location, respectively. We consider priced timed games with one clock and arbitrary integer weights and show that, for an important subclass of them (the so-called simple priced timed games), one can compute, in pseudo-polynomial time, the optimal values that the players can achieve, with their associated optimal strategies. As side results, we also show that one-clock priced timed games are determined and that we can use our result on simple priced timed games to solve the more general class of so-called negative-reset-acyclic priced timed games (with arbitrary integer weights and one clock). The decidability status of the full class of priced timed games with one clock and arbitrary integer weights remains open.

We present a finitary version of Moss' coalgebraic logic for $T$-coalgebras, where $T$ is a locally monotone endofunctor of the category of posets and monotone maps. The logic uses a single cover modality whose arity is given by the least finitary subfunctor of the dual of the coalgebra functor $T_\omega^\partial$, and the semantics of the modality is given by relation lifting. For the semantics to work, $T$ is required to preserve exact squares. For the finitary setting to work, $T_\omega^\partial$ is required to preserve finite intersections. We develop a notion of a base for subobjects of $T_\omega X$. This in particular allows us to talk about the finite poset of subformulas for a given formula. The notion of a base is introduced generally for a category equipped with a suitable factorisation system. We prove that the resulting logic has the Hennessy-Milner property for the notion of similarity based on the notion of relation lifting. We define a sequent proof system for the logic, and prove its completeness.

We introduce a generalization of the bisimulation game that finds distinguishing Hennessy-Milner logic formulas from every finitary, subformula-closed language in van Glabbeek's linear-time--branching-time spectrum between two finite-state processes. We identify the relevant dimensions that measure expressive power to yield formulas belonging to the coarsest distinguishing behavioral preorders and equivalences; the compared processes are equivalent in each coarser behavioral equivalence from the spectrum. We prove that the induced algorithm can determine the best fit of (in)equivalences for a pair of processes.

We identify a notion of reducibility between predicates, called instance reducibility, which commonly appears in reverse constructive mathematics. The notion can be generally used to compare and classify various principles studied in reverse constructive mathematics (formal Church's thesis, Brouwer's Continuity principle and Fan theorem, Excluded middle, Limited principle, Function choice, Markov's principle, etc.). We show that the instance degrees form a frame, i.e., a complete lattice in which finite infima distribute over set-indexed suprema. They turn out to be equivalent to the frame of upper sets of truth values, ordered by the reverse Smyth partial order. We study the overall structure of the lattice: the subobject classifier embeds into the lattice in two different ways, one monotone and the other antimonotone, and the $\lnot\lnot$-dense degrees coincide with those that are reducible to the degree of Excluded middle. We give an explicit formulation of instance degrees in a relative realizability topos, and call these extended Weihrauch degrees, because in Kleene-Vesley realizability the $\lnot\lnot$-dense modest instance degrees correspond precisely to Weihrauch degrees. The extended degrees improve the structure of Weihrauch degrees by equipping them with computable infima and suprema, an implication, the ability to control access to parameters and computation of results, and by generally widening the scope of Weihrauch reducibility.

We prove a result, similar to the ones known as Ishihara's First and SecondTrick, for sequences of functions.

It is well known that typability, type inhabitation and type inference are undecidable in the Girard-Reynolds polymorphic system F. It has recently been proven that type inhabitation remains undecidable even in the predicative fragment of system F in which all universal instantiations have an atomic witness (system Fat). In this paper we analyze typability and type inference in Curry-style variants of system Fat and show that typability is decidable and that there is an algorithm for type inference which is capable of dealing with non-redundancy constraints.

Language-integrated query is a powerful programming construct allowing database queries and ordinary program code to interoperate seamlessly and safely. Language-integrated query techniques rely on classical results about the nested relational calculus, stating that its queries can be algorithmically translated to SQL, as long as their result type is a flat relation. Cooper and others advocated higher-order nested relational calculi as a basis for language-integrated queries in functional languages such as Links and F#. However, the translation of higher-order relational queries to SQL relies on a rewrite system for which no strong normalization proof has been published: a previous proof attempt does not deal correctly with rewrite rules that duplicate subterms. This paper fills the gap in the literature, explaining the difficulty with the previous proof attempt, and showing how to extend the $\top\top$-lifting approach of Lindley and Stark to accommodate duplicating rewrites. We also show how to extend the proof to a recently introduced calculus for heterogeneous queries mixing set and multiset semantics.

We extract verified algorithms for exact real number computation from constructive proofs. To this end we use a coinductive representation of reals as streams of binary signed digits. The main objective of this paper is the formalisation of a constructive proof that real numbers are closed with respect to limits. All the proofs of the main theorem and the first application are implemented in the Minlog proof system and the extracted terms are further translated into Haskell. We compare two approaches. The first approach is a direct proof. In the second approach we make use of the representation of reals by Cauchy sequences of rationals. Utilizing translations between the two representations and using the completeness of the Cauchy reals, the proof becomes very short. In both cases we use Minlog's program extraction mechanism to automatically extract a formally verified program that transforms a converging sequence of reals, i.e. a sequence of streams of binary signed digits together with a modulus of convergence, into the binary signed digit representation of its limit. The correctness of the extracted terms follows directly from the soundness theorem of program extraction. As a first application we use the extracted algorithms together with Heron's method to construct an algorithm that computes square roots with respect to the binary signed digit representation. In a second application we use the convergence theorem to show that the signed digit representation of real numbers is closed […]
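To give an informal flavour of the signed digit representation used above (the function name and the digit-selection thresholds below are our own illustrative choices, not the paper's extracted Minlog/Haskell code), a rational in $[-1,1]$ can be unfolded into a stream of digits from $\{-1,0,1\}$ with $x = \sum_i d_i \cdot 2^{-(i+1)}$:

```python
from fractions import Fraction
from itertools import islice

def signed_digits(x):
    """Yield a binary signed digit stream (digits in {-1, 0, 1})
    for a rational x in [-1, 1], so that x = sum_i d_i * 2**-(i+1)."""
    x = Fraction(x)
    assert -1 <= x <= 1
    while True:
        if x > Fraction(1, 4):
            d = 1
        elif x < Fraction(-1, 4):
            d = -1
        else:
            d = 0
        yield d
        x = 2 * x - d  # the remaining tail again lies in [-1, 1]

digits = list(islice(signed_digits(Fraction(1, 3)), 6))
print(digits)  # [1, -1, 1, -1, 1, -1]
partial = sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(digits))
print(abs(partial - Fraction(1, 3)) <= Fraction(1, 2 ** 6))  # True
```

Redundancy of the digit $-1$ is what makes the representation suitable for computation: a digit can be emitted knowing the input only approximately, which is impossible for ordinary binary expansions.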

For relational structures A, B of the same signature, the Promise Constraint Satisfaction Problem PCSP(A,B) asks whether a given input structure maps homomorphically to A or does not even map to B. We are promised that the input satisfies exactly one of these two cases. If there exists a structure C with homomorphisms $A\to C\to B$, then PCSP(A,B) reduces naturally to CSP(C). To the best of our knowledge, all known tractable PCSPs reduce to tractable CSPs in this way. However, Barto showed that some PCSPs over finite structures A, B require solving CSPs over infinite C. We show that even when such a reduction to finite C is possible, this structure may become arbitrarily large. For every integer $n>1$ and every prime p we give A, B of size n with a single relation of arity $n^p$ such that PCSP(A, B) reduces via a chain of homomorphisms $A\to C\to B$ to a tractable CSP over some C of size p but not over any smaller structure. In a second family of examples, for every prime $p\geq 7$ we construct A, B of size $p-1$ with a single ternary relation such that PCSP(A, B) reduces via $A\to C\to B$ to a tractable CSP over some C of size p but not over any smaller structure. In contrast, we show that if A, B are graphs and PCSP(A,B) reduces to tractable CSP(C) for some finite digraph C, then already A or B has a tractable CSP. This extends results and answers a question of Deng et al.
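As a hypothetical illustration of the homomorphism tests underlying PCSP (the function and naming below are ours, and brute force is of course not how tractable cases are solved), a digraph homomorphism can be checked by exhaustive search; for instance, the 5-cycle maps to the clique $K_3$ (it is 3-colourable) but not to $K_2$ (it is an odd cycle):

```python
from itertools import product

def hom_exists(edges_x, vx, edges_a, va):
    """Brute-force test whether digraph X (vertices 0..vx-1) maps
    homomorphically to A (vertices 0..va-1): every edge of X must
    be sent to an edge of A."""
    ea = set(edges_a)
    for h in product(range(va), repeat=vx):
        if all((h[u], h[v]) in ea for (u, v) in edges_x):
            return True
    return False

# Cliques K2, K3 and the 5-cycle C5, all as symmetric digraphs.
k2 = [(0, 1), (1, 0)]
k3 = [(i, j) for i in range(3) for j in range(3) if i != j]
c5 = ([(i, (i + 1) % 5) for i in range(5)]
      + [((i + 1) % 5, i) for i in range(5)])

print(hom_exists(c5, 5, k2, 2))  # False: C5 is not 2-colourable
print(hom_exists(c5, 5, k3, 3))  # True: C5 is 3-colourable
```

In PCSP(K2, K3), for example, the input is promised to be either 2-colourable or not even 3-colourable, and only that gap must be decided.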

We develop a framework for model checking infinite-state systems by automatically augmenting them with auxiliary variables, enabling quantifier-free induction proofs for systems that would otherwise require quantified invariants. We combine this mechanism with a counterexample-guided abstraction refinement scheme for the theory of arrays. Our framework can thus, in many cases, reduce inductive reasoning with quantifiers and arrays to quantifier-free and array-free reasoning. We evaluate the approach on a wide set of benchmarks from the literature. The results show that our implementation often outperforms state-of-the-art tools, demonstrating its practical potential.

Many properties of communication protocols combine safety and liveness aspects. Characterizing such combined properties by means of a single inference system is difficult because of the fundamentally different techniques (coinduction and induction, respectively) usually involved in defining and proving them. In this paper we show that Generalized Inference Systems allow us to obtain sound and complete characterizations of (at least some of) these combined inductive/coinductive properties of binary session types. In particular, we illustrate the role of corules in characterizing fair termination (the property of protocols that can always eventually terminate), fair compliance (the property of interactions that can always be extended to reach client satisfaction) and fair subtyping, a liveness-preserving refinement relation for session types. The characterizations we obtain are simpler compared to the previously available ones, and corules provide insight on the liveness properties being ensured or preserved. Moreover, we can conveniently appeal to the bounded coinduction principle to prove the completeness of the provided characterizations.

Modern quantum programming languages integrate quantum resources and classical control. They must, on the one hand, be linearly typed to reflect the no-cloning property of quantum resources. On the other hand, high-level and practical languages should also support quantum circuits as first-class citizens, as well as families of circuits that are indexed by some classical parameters. Quantum programming languages thus need linear dependent type theory. This paper defines a general semantic structure for such a type theory via certain fibrations of monoidal categories. The categorical model of the quantum circuit description language Proto-Quipper-M by Rios and Selinger (2017) constitutes an example of such a fibration, which means that the language can readily be integrated with dependent types. We then devise both a general linear dependent type system and a dependently typed extension of Proto-Quipper-M, and provide them with operational semantics as well as a prototype implementation.

We introduce the notion of universal graphs as a tool for constructing algorithms solving games of infinite duration such as parity games and mean payoff games. In the first part we develop the theory of universal graphs, with two goals: showing an equivalence and normalisation result between different recently introduced related models, and constructing generic value iteration algorithms for any positionally determined objective. In the second part we give four applications: to parity games, to mean payoff games, to a disjunction between a parity and a mean payoff objective, and to disjunctions of several mean payoff objectives. For each of these four cases we construct algorithms achieving or improving over the best known time and space complexity.

We show the undecidability of the distributed control problem when the plant is an asynchronous automaton, the controllers use causal memory and the goal of the controllers is to put each process in a local accepting state.

For the minimization of state-based systems (i.e. the reduction of the number of states while retaining the system's semantics), there are two obvious aspects: removing unnecessary states of the system and merging redundant states in the system. In the present article, we relate the two minimization aspects on coalgebras by defining an abstract notion of minimality. The abstract notions of minimality and minimization live in a general category with a factorization system. We find criteria on the category that ensure uniqueness, existence, and functoriality of the minimization aspects. The proofs of these results instantiate to those for reachability and observability minimization in the standard coalgebra literature. Finally, we see how the two aspects of minimization interact and under which criteria they can be sequenced in any order, as in automata minimization.
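To make the two aspects concrete in the classical special case of deterministic finite automata (a sketch under our own naming, not code from the article), removing unreachable states and merging observationally equivalent states can be sequenced as follows:

```python
def minimize(alphabet, delta, start, accepting):
    """Two-step DFA minimization mirroring the two aspects:
    (1) remove states unreachable from `start` (reachability);
    (2) merge observationally equivalent states by Moore-style
    partition refinement (observability).
    delta: dict mapping (state, letter) -> state."""
    # Aspect 1: reachability -- drop unnecessary states.
    reach, frontier = {start}, [start]
    while frontier:
        q = frontier.pop()
        for a in alphabet:
            r = delta[(q, a)]
            if r not in reach:
                reach.add(r)
                frontier.append(r)
    # Aspect 2: observability -- merge indistinguishable states.
    block = {q: (q in accepting) for q in reach}
    while True:
        sig = {q: (block[q],) + tuple(block[delta[(q, a)]] for a in alphabet)
               for q in reach}
        if len(set(sig.values())) == len(set(block.values())):
            return sig  # maps each reachable state to its class label
        block = sig

# A 4-state DFA over {'a'}: state 3 is unreachable, and the
# accepting states 1 and 2 are observationally equivalent.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 1, (3, 'a'): 3}
classes = minimize(['a'], delta, 0, {1, 2})
print(classes[1] == classes[2], 3 not in classes)  # True True
```

The article's point is that both steps, and the fact that they can be sequenced in either order, are instances of one abstract construction in a category with a factorization system.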

In a recent paper, a realizability technique has been used to give a semantics of a quantum lambda calculus. Such a technique gives rise to an infinite number of valid typing rules, without giving preference to any subset of those. In this paper, we introduce a valid subset of typing rules, defining an expressive enough quantum calculus. Then, we propose a categorical semantics for it. Such a semantics consists of an adjunction between the category of distributive-action spaces of value distributions (that is, linear combinations of values in the lambda calculus), and the category of sets of value distributions.

Imperative session types provide an imperative interface to session-typed communication. In such an interface, channel references are first-class objects with operations that change the typestate of the channel. Compared to functional session type APIs, the program structure is simpler at the surface, but typestate is required to model the current state of communication throughout. Following early work that explored the imperative approach, a significant body of work on session types has neglected the imperative approach and opted for a functional approach that uses linear types to manage channel references soundly. We demonstrate that the functional approach subsumes the early work on imperative session types by exhibiting a typing- and semantics-preserving translation into a system of linear functional session types. We further show that the untyped backwards translation from the functional to the imperative calculus is semantics preserving. We restrict the type system of the functional calculus such that the backwards translation becomes type preserving. Thus, we precisely capture the difference in expressiveness of the two calculi and conclude that the lack of expressiveness in the imperative calculus is largely due to restrictions imposed by its type system.

In their paper "A Functional Abstraction of Typed Contexts", Danvy and Filinski show how to derive a monomorphic type system for the shift and reset operators from a CPS semantics. In this paper, we show how this method scales to Felleisen's control and prompt operators. Compared to shift and reset, control and prompt exhibit a more dynamic behavior, in that they can manipulate a trail of contexts surrounding the invocation of previously captured continuations. Our key observation is that, by adopting a functional representation of trails in the CPS semantics, we can derive a type system that encodes all and only the constraints imposed by the CPS semantics.

Working in a variant of the intersection type assignment system of Coppo, Dezani-Ciancaglini and Venneri [1981], we prove several facts about sets of terms having a given intersection type. Our main result is that every strongly normalizing term $M$ admits a *uniqueness typing*, which is a pair $(\Gamma, A)$ such that

1) $\Gamma \vdash M : A$
2) $\Gamma \vdash N : A \Longrightarrow M =_{\beta\eta} N$

We also discuss several presentations of intersection type algebras, and the corresponding choices of type assignment rules.

Moreover, we show that the set of closed terms with a given type is uniformly separable, and, if infinite, forms an adequate numeral system. The proof of this fact uses an internal version of the Böhm-out technique, adapted to terms of a given intersection type.

It is well known that some equational theories such as groups or boolean algebras can be defined by fewer equational axioms than the original axiomatisations. However, it is not easy to determine whether a given set of axioms is the smallest possible. Malbos and Mimram investigated a general method to find a lower bound on the cardinality of a set of equational axioms (or rewrite rules) that is equivalent to a given equational theory (or term rewriting system), using homological algebra. Their method is an analogue of Squier's homology theory of string rewriting systems. In this paper, we develop the homology theory for term rewriting systems further and provide a better lower bound, under a stronger notion of equivalence than theirs. The author also implemented a program to compute the lower bounds, and experimented with 64 complete TRSs.

Applicative bisimilarity is a coinductive characterisation of observational equivalence in call-by-name lambda-calculus, introduced by Abramsky (1990). Howe (1996) gave a direct proof that it is a congruence, and generalised the result to all languages complying with a suitable format. We propose a categorical framework for specifying operational semantics, in which we prove that (an abstract analogue of) applicative bisimilarity is automatically a congruence. Example instances include standard applicative bisimilarity in call-by-name, call-by-value, and call-by-name non-deterministic $\lambda$-calculus, and more generally all languages complying with a variant of Howe's format.