In this paper we extend and prove in detail the Finite Rank Theorem for
connection matrices of graph parameters definable in Monadic Second Order Logic
with counting (CMSOL) due to B. Godlin, T. Kotek and J.A. Makowsky (2008) and
J.A. Makowsky (2009). We demonstrate its wide applicability in simplifying
known and new non-definability results for graph properties and in finding new
non-definability results for graph parameters. We also prove a Feferman-Vaught
Theorem for the logic CFOL, First Order Logic with modular counting
quantifiers.
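For orientation, the central notion can be sketched as follows (notation ours, following the cited papers). Given a graph parameter $f$ and a binary gluing operation $\sqcup_k$ on $k$-labelled graphs, the connection matrix $M(f,\sqcup_k)$ is the infinite matrix, indexed by graphs, with entries
\[
M(f,\sqcup_k)_{G,H} \;=\; f(G \sqcup_k H).
\]
The Finite Rank Theorem asserts that if $f$ is CMSOL-definable then $M(f,\sqcup_k)$ has finite rank for each fixed $k$; contrapositively, exhibiting a connection matrix of infinite rank shows that a parameter is not CMSOL-definable, which is the engine behind the non-definability results mentioned above.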
In previous work with Pous, we defined a semantics for CCS which may both be
viewed as an innocent form of presheaf semantics and as a concurrent form of
game semantics. We define in this setting an analogue of fair testing
equivalence, which we prove fully abstract w.r.t. standard fair testing
equivalence. The proof relies on a new algebraic notion called playground,
which represents the `rule of the game'. From any playground, we derive two
languages equipped with labelled transition systems, as well as a strong,
functional bisimulation between them.
We study a natural hierarchy in first-order logic, namely the quantifier
structure hierarchy, which gives a systematic classification of first-order
formulas based on structural quantifier resources. We define a variant of
Ehrenfeucht-Fraïssé games that characterizes quantifier classes and use it to
prove that this hierarchy is strict over finite structures, using strategy
compositions. Moreover, we prove that this hierarchy is strict even over
ordered finite structures, which is interesting in the context of descriptive
complexity.
We introduce Priority Channel Systems, a new class of channel systems where
messages carry a numeric priority and where higher-priority messages can
supersede lower-priority messages preceding them in the fifo communication
buffers. The decidability of safety and inevitability properties is shown via
the introduction of a priority embedding, a well-quasi-ordering that has not
previously been used in well-structured systems. We then show how Priority
Channel Systems can compute Fast-Growing functions and prove that the
aforementioned verification problems are
$\mathbf{F}_{\varepsilon_{0}}$-complete.
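To make the superseding rule concrete, here is a toy sketch (our own illustration in Python, not the paper's formalism): a channel is a FIFO list of (priority, payload) pairs, and an internal step may erase any message queued ahead of a strictly higher-priority one.

    # Toy model of a priority channel (illustrative only; the encoding
    # is ours, not the paper's). Messages are (priority, payload) pairs
    # kept in FIFO order.
    def supersede_steps(buffer):
        """Yield each buffer reachable by one superseding step: a message
        may be erased if some later message has strictly higher priority."""
        for i in range(len(buffer)):
            if any(buffer[j][0] > buffer[i][0] for j in range(i + 1, len(buffer))):
                yield buffer[:i] + buffer[i + 1:]

    buf = [(1, "a"), (3, "b"), (2, "c")]
    print(list(supersede_steps(buf)))  # [[(3, 'b'), (2, 'c')]]
    # (3, "b") supersedes (1, "a"); (2, "c") erases nothing ahead of it.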
We introduce a nominal actor-based language and study its expressive power.
We identify the presence or absence of fields as a crucial feature: the
dynamic creation of names in combination with fields gives rise to Turing
completeness. On the other hand, restricting to stateless actors gives rise to
systems for which properties such as termination are decidable. This
decidability result still holds for actors with states when the number of
actors is bounded and the state is read-only.
A stochastic timed automaton is a purely stochastic process defined on a
timed automaton, in which both delays and discrete choices are made randomly.
We study the almost-sure model-checking problem for this model, that is, given
a stochastic timed automaton A and a property $\Phi$, we want to decide whether
A satisfies $\Phi$ with probability 1. In this paper, we identify several
classes of automata and of properties for which this can be decided. The proof
relies on the construction of a finite abstraction, called the thick graph,
that we interpret as a finite Markov chain, and for which we can decide the
almost-sure model-checking problem. Correctness of the abstraction holds when
automata are almost-surely fair, which, as we show, is the case for two large
classes of systems: single-clock automata and so-called weak-reactive
automata. The techniques employed in this article combine tools from real-time
verification and probabilistic verification, as well as topological games
played on timed automata.
Lanford has shown that Feigenbaum's functional equation has an analytic
solution. We show that this solution is a polynomial time computable function.
This implies in particular that the so-called first Feigenbaum constant is a
polynomial time computable real number.
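For reference, the equation in question is the Feigenbaum-Cvitanović fixed-point equation, which in one standard normalisation reads
\[
g(x) \;=\; -\alpha\, g\big(g(x/\alpha)\big), \qquad g(0)=1,\quad \alpha = -1/g(1);
\]
the first Feigenbaum constant $\delta \approx 4.6692$ then arises as the relevant eigenvalue of the linearisation of the associated renormalisation operator at this fixed point.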
We examine the relationship between the algebraic lambda-calculus, a fragment
of the differential lambda-calculus and the linear-algebraic lambda-calculus, a
candidate lambda-calculus for quantum computation. Both calculi are algebraic:
each one is equipped with an additive and a scalar-multiplicative structure,
and their set of terms is closed under linear combinations. However, the two
languages were built using different approaches: the former is a call-by-name
language whereas the latter is call-by-value; the former considers algebraic
equalities whereas the latter approaches them through rewrite rules. In this
paper, we analyse how these different approaches relate to one another. To this
end, we propose four canonical languages based on each of the possible choices:
call-by-name versus call-by-value, algebraic equality versus algebraic
rewriting. We show that the various languages simulate one another. Due to the
subtle interaction between beta-reduction and algebraic rewriting, additional
hypotheses such as confluence or normalisation might be required to keep the
languages consistent. We carefully devise the required properties
for each proof, making them general enough to be valid for any sub-language
satisfying the corresponding properties.
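The subtlety at stake can be seen in a classic example from the literature on algebraic λ-calculi (not specific to this paper): with $Y_b = (\lambda x.\, b + (x\,x))\,(\lambda x.\, b + (x\,x))$ one has $Y_b \to_\beta b + Y_b$, so that
\[
0 \;\longleftarrow\; Y_b - Y_b \;\longrightarrow\; (b + Y_b) - Y_b \;\longrightarrow^{*}\; b,
\]
where the left step uses the algebraic rule $t - t \to 0$. Unrestricted beta-reduction combined with unrestricted algebraic rewriting thus equates every term $b$ with $0$; hypotheses such as confluence or normalisation are precisely what rules out this collapse.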
We present an effect system for core Eff, a simplified variant of Eff, which
is an ML-style programming language with first-class algebraic effects and
handlers. We define an expressive effect system and prove the safety of the
operational semantics with respect to it. Then we give a domain-theoretic denotational
semantics of core Eff, using Pitts's theory of minimal invariant relations, and
prove it adequate. We use this fact to develop tools for finding useful
contextual equivalences, including an induction principle. To demonstrate their
usefulness, we use these tools to derive the usual equations for mutable state,
including a general commutativity law for computations using non-interfering
references. We have formalized the effect system, the operational semantics,
and the safety theorem in Twelf.
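As a rough intuition for algebraic effects and handlers (a Python analogue of our own devising; Eff's actual syntax and semantics differ), a computation issues operations and a handler interprets them:

    # A Python analogue (ours) of handling a state effect: the program
    # issues abstract 'get'/'put' operations; the handler interprets
    # them by threading an explicit state value.
    def prog():
        x = yield ("get",)       # operation call, analogous to get () in Eff
        yield ("put", x + 1)
        y = yield ("get",)
        return y

    def handle_state(gen, state):
        try:
            op = next(gen)
            while True:
                if op[0] == "get":
                    op = gen.send(state)
                else:                 # ("put", v)
                    state = op[1]
                    op = gen.send(None)
        except StopIteration as stop:
            return stop.value, state

    print(handle_state(prog(), 41))  # -> (42, 42)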
In game semantics and related approaches to programming language semantics,
programs are modelled by interaction dialogues. Such models have recently been
used in the design of new compilation methods, e.g. for hardware synthesis or
for programming with sublinear space. This paper relates such semantically
motivated non-standard compilation methods to more standard techniques in the
compilation of functional programming languages, namely continuation passing
and defunctionalization. We first show for the linear {\lambda}-calculus that
interpretation in a model of computation by interaction can be described as a
call-by-name CPS-translation followed by a defunctionalization procedure that
takes into account control-flow information. We then establish a relation
between these two compilation methods for the simply-typed {\lambda}-calculus
and end by considering recursion.
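For readers less familiar with the two standard techniques named here, a minimal sketch (ours, in Python; the paper works with the linear λ-calculus, not with Python programs):

    # 1. CPS: every call takes an explicit continuation k (a call-by-name
    #    CPS translation would also suspend arguments; this tiny example
    #    has none to suspend).
    def fact_cps(n, k):
        if n == 0:
            return k(1)
        return fact_cps(n - 1, lambda r: k(n * r))

    # 2. Defunctionalization: each continuation lambda becomes a data
    #    constructor, interpreted by a first-order apply function.
    def fact_defun(n, k):
        if n == 0:
            return apply_k(k, 1)
        return fact_defun(n - 1, ("MUL", n, k))  # was: lambda r: k(n * r)

    def apply_k(k, r):
        if k == ("ID",):
            return r
        _tag, n, k2 = k
        return apply_k(k2, n * r)

    assert fact_cps(5, lambda r: r) == fact_defun(5, ("ID",)) == 120

The paper's point is that composing such a CPS translation with a control-flow-aware defunctionalization recovers the interpretation in a model of computation by interaction.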
Probabilistic automata constitute a versatile and elegant model for
concurrent probabilistic systems. They are equipped with a compositional theory
supporting abstraction, enabled by weak probabilistic bisimulation serving as
the reference notion for summarising the effect of abstraction. This paper
considers probabilistic automata augmented with costs. It extends the notions
of weak transitions in probabilistic automata in such a way that the costs
incurred along a weak transition are captured. This gives rise to
cost-preserving and cost-bounding variations of weak probabilistic
bisimilarity, for which we establish compositionality properties with respect
to parallel composition. Furthermore, polynomial-time decision algorithms are
proposed that can be effectively used to compute reward-bounding abstractions
of Markov decision processes in a compositional manner.
We study Doob's martingale convergence theorem for computable continuous time
martingales on Brownian motion, in the context of algorithmic randomness. A
characterization of the class of sample points for which the theorem holds is
given. Such points are given the name of Doob random points. It is shown that a
point is Doob random if its tail is computably random in a certain sense.
Moreover, Doob randomness is strictly weaker than computable randomness and is
incomparable with Schnorr randomness.
Complementation of Büchi automata has been studied for over five decades
since the formalism was introduced in 1960. Known complementation constructions
can be classified into Ramsey-based, determinization-based, rank-based, and
slice-based approaches. Regarding the performance of these approaches, there
have been several complexity analyses but very few experimental results. What
especially lacks is a comparative experiment on all of the four approaches to
see how they perform in practice. In this paper, we review the four approaches,
propose several optimization heuristics, and perform comparative
experimentation on four representative constructions that are considered the
most efficient in each approach. The experimental results show that (1) the
determinization-based Safra-Piterman construction outperforms the other three
in producing smaller complements and finishing more tasks in the allocated time
and (2) the proposed heuristics substantially improve the Safra-Piterman and
the slice-based constructions.
We consider the quantified constraint satisfaction problem (QCSP) which is to
decide, given a structure and a first-order sentence (not assumed here to be in
prenex form) built from conjunction and quantification, whether or not the
sentence is true on the structure. We present a proof system for certifying the
falsity of QCSP instances and develop its basic theory; for instance, we
provide an algorithmic interpretation of its behavior. Our proof system places
the established Q-resolution proof system in a broader context, and also allows
us to derive QCSP tractability results.
This paper defines a new notion of bounded computable randomness for certain
classes of sub-computable functions which lack a universal machine. In
particular, we define such versions of randomness for primitive recursive
functions and for PSPACE functions. These new notions are robust in that there
are equivalent formulations in terms of (1) Martin-Löf tests, (2) Kolmogorov
complexity, and (3) martingales. We show these notions can be equivalently
defined with prefix-free Kolmogorov complexity. We prove that one direction of
van Lambalgen's theorem holds for relative computability, but the other
direction fails. We discuss statistical properties of these notions of
randomness.
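For orientation, the martingale formulation follows the standard pattern (standard definitions, not specific to this paper): a martingale is a betting strategy $d : 2^{<\omega} \to \mathbb{Q}^{\geq 0}$ satisfying the fairness condition
\[
d(w) \;=\; \frac{d(w0) + d(w1)}{2},
\]
and $d$ succeeds on $X \in 2^{\omega}$ if $\limsup_{n} d(X{\restriction}n) = \infty$. A sequence is then random for a class of betting strategies if no martingale in the class succeeds on it; here the classes are drawn from the sub-computable world (primitive recursive, PSPACE) rather than from all computable functions.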
We analyze the strength of Helly's selection theorem HST, which is the most
important compactness theorem on the space of functions of bounded variation.
For this we utilize a new representation of this space, intermediate between
$L_1$ and the Sobolev space $W^{1,1}$ and compatible with the so-called weak*
topology. We obtain that HST is instance-wise equivalent to the
Bolzano-Weierstraß principle over RCA0. It follows that HST is equivalent to ACA0
over RCA0. A similar classification is obtained in the Weihrauch lattice.
While it was defined long ago, the extension of CTL with quantification over
atomic propositions has never been studied extensively. Considering two
different semantics (depending on whether propositional quantification refers to
the Kripke structure or to its unwinding tree), we study its expressiveness
(showing in particular that QCTL coincides with Monadic Second-Order Logic for
both semantics) and characterise the complexity of its model-checking and
satisfiability problems, depending on the number of nested propositional
quantifiers (showing that the structure semantics populates the polynomial
hierarchy while the tree semantics populates the exponential hierarchy).
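A small example (ours) of how the two semantics diverge: the formula $\exists p.\,(p \wedge \mathbf{AX}\,\mathbf{AG}\,\neg p)$ holds at every state under the tree semantics, since one may label just the root of the unwinding with $p$; under the structure semantics it fails at any state on a cycle through itself (e.g. a self-loop), where $p$ would have to both hold and fail at the same state.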
Inductive and coinductive types are commonly construed as ontological
(Church-style) types, denoting canonical data-sets such as natural numbers,
lists, and streams. For various purposes, notably the study of programs in the
context of global semantics, it is preferable to think of types as semantical
properties (Curry-style). Intrinsic theories were introduced in the late 1990s
to provide a purely logical framework for reasoning about programs and their
semantic types. We extend them here to data given by any combination of
inductive and coinductive definitions. This approach is of interest because it
fits tightly with syntactic, semantic, and proof theoretic fundamentals of
formal logic, with potential applications in implicit computational complexity
as well as extraction of programs from proofs. We prove a Canonicity Theorem,
showing that the global definition of program typing, via the usual (Tarskian)
semantics of first-order logic, agrees with their operational semantics in the
intended model. Finally, we show that every intrinsic theory is interpretable
in a conservative extension of first-order arithmetic. This means that
quantification over infinite data objects does not lead, on its own, to
proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories
are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were
used to characterize major computational complexity classes. Their extensions
described here have similar potential, which has […]
We introduce session automata, an automata model to process data words, i.e.,
words over an infinite alphabet. Session automata support the notion of fresh
data values, which are well suited for modeling protocols in which sessions
using fresh values are of major interest, like in security protocols or ad-hoc
networks. Session automata have an expressiveness partly extending, partly
reducing that of classical register automata. We show that, unlike register
automata and their various extensions, session automata are robust: They (i)
are closed under intersection, union, and (resource-sensitive) complementation,
(ii) admit a symbolic regular representation, (iii) have a decidable inclusion
problem (unlike register automata), and (iv) enjoy logical characterizations.
Using these results, we establish a learning algorithm to infer session
automata through membership and equivalence queries.
This paper proposes a bisimulation theory based on multiparty session types
where a choreography specification governs the behaviour of session typed
processes and their observer. The bisimulation is defined with the observer
cooperating with the observed process in order to form complete global session
scenarios, and it is usable for proving the correctness of optimisations for
globally coordinating threads and processes. The induced bisimulation is strictly
more fine-grained than the standard session bisimulation. The difference between
the governed and standard bisimulations only appears when more than two
interleaved multiparty sessions exist. This distinctive feature makes it possible
to reason about real scenarios in large-scale distributed systems where multiple
choreographic sessions need to be interleaved. The compositionality of the
governed bisimilarity is proved through soundness and completeness with respect
to the governed reduction-based congruence. Finally, its usage is demonstrated by
a thread transformation governed under multiple sessions in a real use case from
a large-scale cyberinfrastructure.
We give an algorithm for solving stochastic parity games with almost-sure
winning conditions on {\it lossy channel systems}, under the constraint that
both players are restricted to finite-memory strategies. First, we describe a
general framework, where we consider the class of 2 1/2-player games with
almost-sure parity winning conditions on possibly infinite game graphs,
assuming that the game contains a {\it finite attractor}. An attractor is a set
of states (not necessarily absorbing) that is almost surely re-visited
regardless of the players' decisions. We present a scheme that characterizes
the set of winning states for each player. Then, we instantiate this scheme to
obtain an algorithm for {\it stochastic game lossy channel systems}.