We introduce two-player games which build words over infinite alphabets, and
we study the problem of checking the existence of winning strategies. These
games are played by two players, who take turns in choosing valuations for
variables ranging over an infinite data domain, thus generating
multi-attributed data words. The winner of the game is specified by formulas in
the Logic of Repeating Values, which can reason about repetitions of data
values in infinite data words. We prove that it is undecidable to check if one
of the players has a winning strategy, even in very restrictive settings.
However, we prove that if one of the players is restricted to choose valuations
ranging over the Boolean domain, the games are effectively equivalent to
single-sided games on vector addition systems with states (in which one of the
players can change control states but cannot change counter values), known to
be decidable and effectively equivalent to energy games.
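To give the flavour of the winning conditions (our notation, following the usual presentation of the Logic of Repeating Values, which may differ cosmetically from the paper's), atomic constraints compare data values across positions, e.g.
\[
x \approx \lozenge\, y \qquad \text{(``the current value of $x$ reappears as a future value of $y$''),}
\]
and winning conditions are Boolean and temporal combinations of such constraints.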
Previous works have shown that the satisfiability problem for various
variants of the logic of repeating values is equivalent to the reachability and
coverability problems in vector addition systems. Our results raise this
connection to the level of games, augmenting further the associations between
logics on data words and counter systems.
We demonstrate that the most well-known approach to rewriting graphical
structures, the Double-Pushout (DPO) approach, possesses a notion of sequential
compositions of rules along an overlap that is associative in a natural sense.
Notably, our results hold in the general setting of $\mathcal{M}$-adhesive
categories. This observation complements the classical Concurrency Theorem of
DPO rewriting. We then proceed to define rule algebras in both settings, where
the most general categories permissible are finitary $\mathcal{M}$-adhesive categories (or finitary restrictions of $\mathcal{M}$-adhesive categories) with $\mathcal{M}$-effective unions. If, in addition, a given such category possesses an $\mathcal{M}$-initial
object, the resulting rule algebra is unital (in addition to being
associative). We demonstrate that in this setting a canonical representation of
the rule algebras is obtainable, which opens the possibility of applying the
concept to define and compute the evolution of statistical moments of
observables in stochastic DPO rewriting systems.
We develop a novel method to analyze the dynamics of stochastic rewriting
systems evolving over finitary adhesive, extensive categories. Our formalism is
based on the so-called rule algebra framework and exhibits an intimate
relationship between the combinatorics of the rewriting rules (as encoded in
the rule algebra) and the dynamics which these rules generate on observables
(as encoded in the stochastic mechanics formalism). We introduce the concept of
combinatorial conversion, whereby under certain technical conditions the
evolution equation for (the exponential generating function of) the statistical
moments of observables can be expressed as the action of certain differential
operators on formal power series. This permits us to formulate the novel
concept of moment-bisimulation, whereby two dynamical systems are compared in
terms of their evolution of sets of observables that are in bijection. In
particular, we exhibit non-trivial examples of graphical rewriting systems that
are moment-bisimilar to certain discrete rewriting systems (such as branching
processes or the larger class of stochastic chemical reaction systems). Our
results point towards applications of a vast number of existing
well-established exact and approximate analysis techniques developed for
chemical reaction systems to the far richer class of general stochastic
rewriting systems.
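As a standard illustration of the kind of conversion involved, borrowed from chemical reaction systems rather than from the paper's graphical examples: for the birth--death system $\emptyset \to X$ at rate $\beta$ and $X \to \emptyset$ at rate $\delta$, the probability generating function $P(x,t) = \sum_{n} p_n(t)\,x^n$ of the particle-number distribution evolves as
\[
\partial_t P(x,t) \;=\; \beta\,(x-1)\,P(x,t) \;+\; \delta\,(1-x)\,\partial_x P(x,t),
\]
i.e. the master-equation dynamics acts as a differential operator on a formal power series; combinatorial conversion provides analogous equations for (exponential generating functions of) moments of observables in stochastic rewriting systems.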
A semantics of concurrent programs can be given using precubical sets, in
order to study (higher) commutations between the actions, thus encoding the
"geometry" of the space of possible executions of the program. Here, we study
the particular case of programs using only mutexes, which are the most widely
used synchronization primitive. We show that in this case, the resulting
programs have non-positive curvature, a notion that we introduce and study here for precubical sets and which can be thought of as an algebraic analogue of the well-known one for metric spaces. Using this notion, as well as categorical rewriting techniques, we are then able to show that directed and non-directed
homotopy coincide for directed paths in these precubical sets. Finally, we
study the geometric realization of precubical sets in metric spaces, to show
that our conditions on precubical sets actually coincide with those for metric
spaces. Since the category of metric spaces is not cocomplete, we are led to
work with generalized metric spaces and study some of their properties.
This paper proves the approximate intermediate value theorem, constructively
and from notably weak hypotheses: from pointwise rather than uniform
continuity, without assuming that reals are presented with rational
approximants, and without using countable choice. The theorem is that if a
pointwise continuous function has both a negative and a positive value, then it
has values arbitrarily close to 0. The proof builds on the usual classical
proof by bisection, which repeatedly selects the left or right half of an
interval; the algorithm here selects an interval of half the size in a
continuous way, interpolating between those two possibilities.
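To make the comparison concrete, here is the naive classical bisection in Python (our sketch only: it uses the exact sign tests and the binary choice that the constructive proof deliberately avoids, and it is not the paper's continuous-interpolation construction):
\begin{verbatim}
def approx_ivt(f, a, b, eps):
    """Classical approximate bisection: given f(a) < 0 < f(b) and eps > 0,
    return some x with |f(x)| < eps by repeatedly halving the interval."""
    while True:
        m = (a + b) / 2.0
        fm = f(m)
        if abs(fm) < eps:
            return m
        if fm < 0:
            a = m   # keep the right half: f is still negative at the left end
        else:
            b = m   # keep the left half: f is still positive at the right end

# Example: f(x) = x^3 - 2 on [0, 2]; returns a point close to 2**(1/3).
print(approx_ivt(lambda x: x**3 - 2, 0.0, 2.0, 1e-6))
\end{verbatim}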
The call-by-value lambda calculus can be endowed with permutation rules,
arising from linear logic proof-nets, having the advantage of unblocking some
redexes that otherwise get stuck during the reduction. We show that such an extension makes it possible to define a satisfying notion of Böhm(-like) tree and a
theory of program approximation in the call-by-value setting. We prove that all
lambda terms having the same Böhm tree are observationally equivalent, and
characterize those Böhm-like trees arising as actual Böhm trees of lambda
terms.
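A typical example of the unblocking (our illustration, written with the permutation rule in the form $(\lambda x.M)N\,P \to (\lambda x.MP)N$, which may differ cosmetically from the paper's presentation): taking $\Delta = \lambda x.xx$, the term
\[
(\lambda y.\Delta)(zw)\,\Delta
\]
is a call-by-value normal form, since the argument $zw$ is not a value, yet it is intuitively meaningless; the permutation rewrites it to $(\lambda y.\Delta\Delta)(zw)$, which now contains the diverging redex $\Delta\Delta$.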
We also compare this approach with Ehrhard's theory of program approximation
based on the Taylor expansion of lambda terms, translating each lambda term
into a possibly infinite set of so-called resource terms. We provide necessary and sufficient conditions for a set of resource terms to be the Taylor
expansion of a lambda term. Finally, we show that the normal form of the Taylor
expansion of a lambda term can be computed by performing a normalized Taylor
expansion of its Böhm tree. From this it follows that two lambda terms have
the same Böhm tree if and only if the normal forms of their Taylor expansions
coincide.
Corrado Böhm once observed that if $Y$ is any fixed point combinator (fpc),
then $Y(\lambda yx.x(yx))$ is again an fpc. He thus discovered the first "fpc
generating scheme" -- a generic way to build new fpcs from old. Continuing this
idea, define an $\textit{fpc generator}$ to be any sequence of terms
$G_1,\dots,G_n$ such that \[ Y \in FPC \Rightarrow Y G_1 \cdots G_n \in FPC \]
In this contribution, we take first steps in studying the structure of (weak)
fpc generators. We isolate several robust classes of such generators, by
examining their elementary properties like injectivity and (weak) constancy. We
provide sufficient conditions for existence of fixed points of a given
generator $(G_1,\dots,G_n)$: an fpc $Y$ such that $Y = Y G_1 \cdots G_n$. We
conjecture that weak constancy is a necessary condition for existence of such
(higher-order) fixed points. This statement generalizes Statman's conjecture on
non-existence of "double fpcs": fixed points of the generator $(G) = (\lambda
yx.x(yx))$ discovered by Böhm.
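The computation behind Böhm's observation is short: write $G = \lambda yx.x(yx)$; since $Y$ is an fpc we have $Y G = G\,(Y G)$, hence for every term $f$
\[
(Y G)\, f \;=\; G\,(Y G)\, f \;=\; f\,((Y G)\, f),
\]
so $Y G$ is again an fpc.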
Finally, we define and make a few observations about the monoid of (weak) fpc
generators. This enables us to formulate a new conjecture about their
structure.
We prove the canonicity of inductive inequalities in a constructive
meta-theory, for classes of logics algebraically captured by varieties of
normal and regular lattice expansions. This result encompasses
Ghilardi-Meloni's and Suzuki's constructive canonicity results for Sahlqvist
formulas and inequalities, and is based on an application of the tools of
unified correspondence theory. Specifically, we provide an alternative
interpretation of the language of the algorithm ALBA for lattice expansions:
nominal and conominal variables are respectively interpreted as closed and open
elements of canonical extensions of normal/regular lattice expansions, rather
than as completely join-irreducible and meet-irreducible elements of perfect
normal/regular lattice expansions. We show the correctness of ALBA with respect
to this interpretation. From this fact, the constructive canonicity of the
inequalities on which ALBA succeeds follows by an adaptation of the standard
argument. The claimed result then follows as a consequence of the success of
ALBA on inductive inequalities.
We prove a limitation on a variant of the KPT theorem proposed for
propositional proof systems by Pich and Santhanam (2020), for all proof systems
that prove the disjointness of two NP sets that are hard to distinguish.
We present natural deduction systems and associated modal lambda calculi for
the necessity fragments of the normal modal logics K, T, K4, GL and S4. These
systems are in the dual-context style: they feature two distinct zones of
assumptions, one of which can be thought of as modal, and the other as intuitionistic. We show that these calculi have their roots in sequent
calculi. We then investigate their metatheory, equip them with a confluent and
strongly normalizing notion of reduction, and show that they coincide with the
usual Hilbert systems up to provability. Finally, we investigate a categorical
semantics which interprets the modality as a product-preserving functor.
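For readers unfamiliar with the dual-context format, its flavour is conveyed by the well-known S4-style rules of Pfenning and Davies, shown here purely as an illustration (the systems for K, T, K4 and GL in the paper adjust how the modal zone may be used):
\[
\frac{\Delta\,;\,\cdot \vdash M : A}{\Delta\,;\,\Gamma \vdash \mathsf{box}\, M : \Box A}
\qquad\qquad
\frac{\Delta\,;\,\Gamma \vdash M : \Box A \qquad \Delta, x{:}A\,;\,\Gamma \vdash N : C}{\Delta\,;\,\Gamma \vdash \mathsf{let\;box}\, x = M\ \mathsf{in}\ N : C}
\]
where $\Delta$ is the modal zone and $\Gamma$ the intuitionistic one.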
We show that deterministic collapsible pushdown automata of second order can
recognize a language that is not recognizable by any deterministic higher-order
pushdown automaton (without collapse) of any order. This implies that there
exists a tree generated by a second-order collapsible pushdown system
(equivalently, by a recursion scheme of second order) that is not generated by
any deterministic higher-order pushdown system (without collapse) of any order
(equivalently, by any safe recursion scheme of any order). As a side effect, we
present a pumping lemma for deterministic higher-order pushdown automata, which
potentially can be useful for other applications.
We study the Sierpinski object $\Sigma$ in the realizability topos based on
Scott's graph model of the $\lambda$-calculus. Our starting observation is that
the object of realizers in this topos is the exponential $\Sigma ^N$, where $N$
is the natural numbers object. We define order-discrete objects by
orthogonality to $\Sigma$. We show that the order-discrete objects form a
reflective subcategory of the topos, and that many fundamental objects in
higher-type arithmetic are order-discrete. Building on work by Lietz, we give
some new results regarding the internal logic of the topos. Then we consider
$\Sigma$ as a dominance; we explicitly construct the lift functor and
characterize $\Sigma$-subobjects. Contrary to our expectations, the dominance
$\Sigma$ is not closed under unions. In the last section we build a model for
homotopy theory, where the order-discrete objects are exactly those objects
which only have constant paths.
The study of polarity in computation has revealed that an "ideal" programming
language combines both call-by-value and call-by-name evaluation; the two
calling conventions are each ideal for half the types in a programming
language. But this binary choice leaves out call-by-need which is used in
practice to implement lazy-by-default languages like Haskell. We show how the
notion of polarity can be extended beyond the value/name dichotomy to include
call-by-need by adding a mechanism for sharing which is enough to compile a
Haskell-like functional language with user-defined types. The key to capturing
sharing in this mixed-evaluation setting is to generalize the usual notion of
polarity "shifts:" rather than just two shifts (between positive and negative)
we have a family of four dual shifts.
We expand on this idea of logical duality---"and" is dual to "or;" proof is
dual to refutation---for the purpose of compiling a variety of types. Based on
a general notion of data and codata, we show how classical connectives can be
used to encode a wide range of built-in and user-defined types. In contrast
with an intuitionistic logic corresponding to pure functional programming,
these classical connectives bring more of the pleasant symmetries of classical
logic to the computationally-relevant, constructive setting. In particular, an
involutive pair of negations bridges the gulf between the widespread notions
of parametric polymorphism and […]
We prove that rooted divergence-preserving branching bisimilarity is a
congruence for the process specification language consisting of nil, action
prefix, choice, and the recursion construct.
The main scientific heritage of Corrado Böhm consists of ideas about
computing, concerning concrete algorithms, as well as models of computability.
The following will be presented. 1. A compiler that can compile itself. 2.
Structured programming, eliminating the 'goto' statement. 3. Functional
programming and an early implementation. 4. Separability in $\lambda$-calculus. 5. Compiling combinators without parsing. 6. Self-evaluation in $\lambda$-calculus.
We study two extensions of FO2[<], first-order logic interpreted in finite
words, in which formulas are restricted to use only two variables. We adjoin to
this language two-variable atomic formulas that say, "the letter $a$ appears
between positions $x$ and $y$" and "the factor $u$ appears between positions
$x$ and $y$". These are, in a sense, the simplest properties that are not
expressible using only two variables.
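For instance (our illustrative formula, not taken from the paper), writing $a(x,y)$ for ``an $a$ appears between positions $x$ and $y$'', the property ``every $b$ is eventually followed by a $c$ with no $a$ in between'' can be expressed with just the two variables $x$ and $y$ as
\[
\forall x\,\big( b(x) \rightarrow \exists y\, ( x < y \wedge c(y) \wedge \neg a(x,y) ) \big).
\]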
We present several logics, both first-order and temporal, that have the same
expressive power, and find matching lower and upper bounds for the complexity
of satisfiability for each of these formulations. We give effective conditions,
in terms of the syntactic monoid of a regular language, for a property to be
expressible in these logics. This algebraic analysis allows us to prove, among
other things, that our new logics have strictly less expressive power than full
first-order logic FO[<]. Our proofs required the development of novel
techniques concerning factorizations of words.
We set up a parametrised monadic translation for a class of call-by-value
functional languages, and prove a corresponding soundness theorem. We then
present a series of concrete instantiations of our translation, demonstrating
that a number of fundamental notions concerning higher-order computation,
including termination, continuity and complexity, can all be subsumed into our
framework. Our main goal is to provide a unifying scheme which brings together
several concepts which are often treated separately in the literature. However,
as a by-product, we also obtain (i) a method for extracting moduli of
continuity for closed functionals of type
$(\mathbb{N}\to\mathbb{N})\to\mathbb{N}$ definable in (extensions of) System T,
and (ii) a characterisation of the time complexity of bar recursion.
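To illustrate the kind of by-product meant in (i), here is a minimal effectful sketch in Python (an illustration under our own assumptions: it instruments the oracle with side effects rather than performing the paper's monadic translation, and it presupposes that the functional inspects its argument at finitely many points):
\begin{verbatim}
def modulus(F, alpha):
    """Return a modulus of pointwise continuity of F at alpha: any beta that
    agrees with alpha on all arguments <= the returned value satisfies
    F(beta) == F(alpha), because F only ever queried those arguments."""
    queried = []
    def spy(n):                    # wrap the oracle so every query is logged
        queried.append(n)
        return alpha(n)
    F(spy)                         # run F against the instrumented oracle
    return max(queried, default=0)

# Example: F inspects its argument at 0, at alpha(0) and at 3.
F = lambda a: a(a(0)) + a(3)
print(modulus(F, lambda n: n + 1))   # queries {0, 1, 3}, so the modulus is 3
\end{verbatim}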
Choreographies specify multiparty interactions via message passing. A
realisation of a choreography is a composition of independent processes that
behave as specified by the choreography. Existing relations of
correctness/completeness between choreographies and realisations are based on
models where choices are non-deterministic. Resolving non-deterministic choices
into deterministic choices (e.g., conditional statements) is necessary to
correctly characterise the relationship between choreographies and their
implementations with concrete programming languages. We introduce a notion of
realisability for choreographies - called whole-spectrum implementation - where
choices are still non-deterministic in choreographies, but are deterministic in
their implementations. Our notion of whole-spectrum implementation rules out
deterministic implementations of roles that, no matter which context they are
placed in, will never follow one of the branches of a non-deterministic choice.
We give a type discipline for checking whole-spectrum implementations. As a
case study, we analyse the POP protocol under the lens of whole-spectrum
implementation.
We present a symmetrical protocol to repeatedly negotiate a desired service
level between two parties, where the service levels are taken from some totally
ordered finite domain. The agreed service level is selected from levels
dynamically proposed by both parties and parties can only decrease the desired
service level during a negotiation. The correctness of the protocol is stated
using modal formulas and its behaviour is explained using behavioural
reductions of the external behaviour modulo weak trace equivalence and
divergence-preserving branching bisimulation. Our protocol originates from an
industrial use case and it turned out to be remarkably tricky to design
correctly.
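Purely to convey the shape of the negotiation (a deliberately naive, synchronous sketch under our own simplifying assumptions; the actual protocol is message-based and, as noted above, tricky to design correctly):
\begin{verbatim}
def negotiate(proposals_a, proposals_b):
    """Naive sketch: each party proposes levels from a totally ordered finite
    domain (ints here) and may only lower its proposal; the negotiation ends
    when both propose the same level. Assumes the two non-increasing proposal
    sequences eventually meet."""
    a, b = next(proposals_a), next(proposals_b)
    while a != b:
        if a > b:
            a = next(proposals_a)   # A lowers its desired level towards B's
        else:
            b = next(proposals_b)   # B lowers its desired level towards A's
    return a

print(negotiate(iter([5, 3, 2]), iter([4, 2])))   # agreed service level: 2
\end{verbatim}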