Field-based coordination has been proposed as a model for coordinating
collective adaptive systems, promoting a view of distributed computations as
functions manipulating data structures spread over space and evolving over
time, called computational fields. The field calculus is a formal foundation
for field computations, providing specific constructs for evolution (time) and
neighbor interaction (space), which are handled by separate operators (called
rep and nbr, respectively). This separation, however, intrinsically limits the
speed of information propagation achievable by the combined use of these
operators. In
this paper, we propose a new field-based coordination operator called share,
which captures the space-time nature of field computations in a single operator
that declaratively achieves: (i) observation of neighbors' values; (ii)
reduction to a single local value; and (iii) update and converse sharing to
neighbors of a local variable. We show that for an important class of
self-stabilising computations, share can replace all occurrences of rep and nbr
constructs. In addition to conceptual economy, use of the share operator also
allows many prior field calculus algorithms to be greatly accelerated, which we
validate empirically with simulations of frequently used network propagation
and collection algorithms.
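As an illustration of the three steps that share combines, the following Python sketch (our own toy construction, not the calculus itself; the graph, function names, and round count are invented for this example) computes a hop-count gradient by repeatedly observing neighbours' shared values, reducing them with min, and sharing the updated value back:

```python
# Illustrative toy, not the field calculus itself: one synchronous round of a
# share-style update computing hop-count distances to a source node.  Each node
# (i) observes the values its neighbours shared last round, (ii) reduces them
# to one local value with min, and (iii) updates and re-shares its own value --
# the three steps that share combines.  Graph and names are invented here.

def share_round(values, neighbours, is_source):
    """One round: every node folds its neighbours' shared values with min."""
    new_values = {}
    for node, nbrs in neighbours.items():
        if is_source(node):
            new_values[node] = 0
        else:
            nearest = min((values[n] for n in nbrs), default=float("inf"))
            new_values[node] = nearest + 1  # one hop farther than best neighbour
    return new_values

def gradient(neighbours, is_source, rounds):
    values = {node: float("inf") for node in neighbours}
    for _ in range(rounds):
        values = share_round(values, neighbours, is_source)
    return values

# Four-node path 0-1-2-3 with source 0: the estimate advances one hop per round.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(gradient(path, lambda v: v == 0, rounds=4))  # -> {0: 0, 1: 1, 2: 2, 3: 3}
```

As we read the paper's motivation, an encoding with separate rep and nbr constructs would share each value only one round after it is updated, adding latency per hop; the point of share is to fold update and sharing into the same round, as the sketch does.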
In the last two decades, there has been much progress on model checking of
both probabilistic systems and higher-order programs. In spite of the emergence
of higher-order probabilistic programming languages, not much has been done to
combine those two approaches. In this paper, we initiate a study on the
probabilistic higher-order model checking problem, by giving some first
theoretical and experimental results. As a first step towards our goal, we
introduce PHORS, a probabilistic extension of higher-order recursion schemes
(HORS), as a model of probabilistic higher-order programs. The model of PHORS
may alternatively be viewed as a higher-order extension of recursive Markov
chains. We then investigate the probabilistic termination problem -- or,
equivalently, the probabilistic reachability problem. We prove that almost sure
termination of order-2 PHORS is undecidable. We also provide a fixpoint
characterization of the termination probability of PHORS, and develop a sound
(but possibly incomplete) procedure for approximately computing the termination
probability. We have implemented the procedure for order-2 PHORSs, and
confirmed through preliminary experiments, reported at the end of the article,
that it works well.
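The flavor of the fixpoint characterization can be seen already in the simplest, order-0 case, which is just a recursive Markov chain: a procedure that returns immediately with probability p and otherwise calls itself twice. Its termination probability is the least solution of t = p + (1-p)t^2, which Kleene iteration from 0 approximates from below. A small Python sketch (the numbers and the helper name are ours, for illustration only):

```python
# Illustrative order-0 instance of the fixpoint characterization (numbers and
# names are ours): a procedure returns with probability p and otherwise calls
# itself twice, as in a recursive Markov chain.  Its termination probability is
# the *least* solution of  t = p + (1 - p) * t**2,  which Kleene iteration
# from 0 approximates from below.

def termination_probability(p, iterations=200):
    t = 0.0
    for _ in range(iterations):
        t = p + (1 - p) * t ** 2  # one application of the fixpoint operator
    return t

# For p = 0.3 the equation has roots 1 and p / (1 - p) = 3/7; iterating from 0
# converges to the least root 3/7 ~ 0.4286, not to the trivial solution 1.
print(termination_probability(0.3))
```

Iterating from below is also why the procedure in the article is sound but possibly incomplete: each iterate is a certified lower bound on the termination probability.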
We introduce a framework for approximate analysis of Markov decision
processes (MDP) with bounded-, unbounded-, and infinite-horizon properties. The
main idea is to identify a "core" of an MDP, i.e., a subsystem where we
provably remain with high probability, and to avoid computation on the less
relevant rest of the state space. Although we identify the core using
simulations and statistical techniques, it allows for rigorous error bounds in
the analysis. Consequently, we obtain efficient analysis algorithms based on
partial exploration for various settings, including the challenging case of
strongly connected systems.
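A toy version of the idea can be sketched in Python (our own illustrative construction, not the paper's algorithm; all names and numbers are invented): simulate runs of a Markov chain, keep as the "core" those states visited in a non-negligible fraction of runs, and restrict later analysis to them.

```python
# Our own toy sketch of the "core" idea, not the paper's algorithm (all names
# and numbers invented): estimate by simulation which states are visited with
# non-negligible probability and keep only those for subsequent analysis.
import random

def simulate(transitions, start, goal, rng, max_steps=100):
    """Return the set of states visited on one random run until the goal."""
    state, visited = start, {start}
    for _ in range(max_steps):
        if state == goal:
            break
        successors, probs = zip(*transitions[state].items())
        state = rng.choices(successors, probs)[0]
        visited.add(state)
    return visited

def estimate_core(transitions, start, goal, runs=1000, threshold=0.05, seed=0):
    rng = random.Random(seed)
    counts = {}
    for _ in range(runs):
        for s in simulate(transitions, start, goal, rng):
            counts[s] = counts.get(s, 0) + 1
    return {s for s, c in counts.items() if c / runs >= threshold}

# State 0 reaches the goal (state 1) directly with probability 0.99; with
# probability 0.01 it takes a long detour through states 2..11.  The detour
# states fall below the 5% visit threshold and are excluded from the core.
chain = {0: {1: 0.99, 2: 0.01}}
chain.update({i: {i + 1: 1.0} for i in range(2, 11)})
chain[11] = {1: 1.0}
print(estimate_core(chain, start=0, goal=1))  # -> {0, 1}
```

Here the detour states are reached with probability only 0.01, so the estimated core omits them; the paper's contribution is to turn such statistically identified cores into analyses with rigorous error bounds.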
The decidability and complexity of reachability problems and model-checking
for flat counter machines have been explored in detail. However, only a few
results are known for flat (lossy) FIFO machines, and only in particular cases
(a single loop or a single bounded expression). We prove, by establishing
reductions between properties and by reducing SAT to a subset of these
properties, that many verification problems, such as reachability,
non-termination, and unboundedness, are NP-complete for flat FIFO machines,
generalizing similar existing results for flat counter machines. We also show
that reachability is
NP-complete for flat lossy FIFO machines and for flat front-lossy FIFO
machines. We construct a trace-flattable system of many counter machines
communicating via rendez-vous that is bisimilar to a given flat FIFO machine,
which allows us to model-check the original flat FIFO machine. Our results lay the
theoretical foundations and open the way to build a verification tool for
(general) FIFO machines based on analysis of flat sub-machines.
Petri nets are a well-known model of concurrency and provide an ideal setting
for the study of fundamental aspects in concurrent systems. Despite their
simplicity, they still lack a satisfactory causally reversible semantics. We
develop such a semantics for Place/Transition Petri nets (P/T nets) based on two
observations. Firstly, a net that explicitly expresses causality and conflict
among events, for example an occurrence net, can be straightforwardly reversed
by adding a reverse transition for each of its forward transitions. Secondly,
given a P/T net the standard unfolding construction associates with it an
occurrence net that preserves all of its computations. Consequently, the
reversible semantics of a P/T net can be obtained as the reversible semantics
of its unfolding. We show that such reversible behaviour can be expressed as a
finite net whose tokens are coloured by causal histories. Colours in our
encoding resemble the causal memories that are typical in reversible process
calculi.
This paper presents a complete axiomatization of Monadic Second-Order Logic
(MSO) over infinite trees. MSO on infinite trees is a rich system, and its
decidability ("Rabin's Tree Theorem") is one of the most powerful known results
concerning the decidability of logics. By a complete axiomatization we mean a
complete deduction system with a polynomial-time recognizable set of axioms. By
naive enumeration of formal derivations, this formally gives a proof of Rabin's
Tree Theorem. The deduction system consists of the usual rules for second-order
logic seen as two-sorted first-order logic, together with a natural adaptation
to infinite trees of the axioms of MSO on infinite words. In addition, it
contains an axiom scheme expressing the (positional)
determinacy of certain parity games. The main difficulty resides in the limited
expressive power of the language of MSO. We actually devise an extension of
MSO, called Functional (Monadic) Second-Order Logic (FSO), which allows us to
uniformly manipulate (hereditarily) finite sets and corresponding labeled
trees, and whose language allows for higher abstraction than that of MSO.
The BPMN 2.0 standard is a widely used semi-formal notation to model
distributed information systems from different perspectives. The standard makes
available a set of diagrams to represent such perspectives. Choreography
diagrams represent global constraints concerning the interactions among system
components without exposing their internal structure. Collaboration diagrams
instead depict the internal behaviour of a component, also referred to as a
process, when integrated with others, so as to represent a possible
implementation of the distributed system.
This paper proposes a design methodology and a formal framework for checking
conformance of choreographies against collaborations. In particular, the paper
presents a direct formal operational semantics for both BPMN choreography and
collaboration diagrams. Conformance is then captured by two relations defined
on top of this semantics. The approach benefits from the availability of a tool
we have developed, named C4, which permits experimenting with the theoretical
framework in practical contexts. The objective here is to make
the exploited formal methods transparent to system designers, thus fostering a
wider adoption by practitioners.
We study multiplayer quantitative reachability games played on a finite
directed graph, where the objective of each player is to reach his target set
of vertices as quickly as possible. Instead of the well-known notion of Nash
equilibrium (NE), we focus on the notion of subgame perfect equilibrium (SPE),
a refinement of NE well-suited in the framework of games played on graphs. It
is known that there always exists an SPE in quantitative reachability games and
that the constrained existence problem is decidable. We here prove that this
problem is PSPACE-complete. To obtain this result, we propose a new algorithm
that iteratively builds a set of constraints characterizing the set of SPE
outcomes in quantitative reachability games. This set of constraints is
obtained by iterating an operator that reinforces the constraints up to
obtaining a fixpoint. With this fixpoint, the set of SPE outcomes can be
represented by a finite graph of size at most exponential. A careful inspection
of the computation allows us to establish PSPACE membership.
In this paper, we extend the notion of Lyndon word to transfinite words. We
prove two main results. We first show that every transfinite word admits a
unique factorization into Lyndon words that are densely non-increasing, a
relaxation of the non-increasing condition used in the case of finite words.
In the appendix, we prove that the factorization of a rational word has a
special form and that it can be computed from a rational expression describing
the word.
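For contrast with the transfinite case: over finite words, the Chen-Fox-Lyndon theorem gives a unique factorization into a non-increasing sequence of Lyndon words, computable in linear time by Duval's algorithm. A Python sketch of that classical finite-word algorithm (standard material, not from this paper):

```python
def lyndon_factorization(word):
    """Duval's algorithm: the unique non-increasing factorization of a finite
    word into Lyndon words (Chen-Fox-Lyndon theorem), in linear time."""
    factors, i, n = [], 0, len(word)
    while i < n:
        j, k = i + 1, i
        while j < n and word[k] <= word[j]:
            k = i if word[k] < word[j] else k + 1
            j += 1
        while i <= k:                      # emit copies of the current factor
            factors.append(word[i:i + j - k])
            i += j - k
    return factors

# "banana" factors as b >= an >= an >= a, each factor a Lyndon word.
print(lyndon_factorization("banana"))  # -> ['b', 'an', 'an', 'a']
```

It is the non-increasing condition on this sequence that the paper relaxes, to dense non-increasingness, in order to make the factorization unique for transfinite words.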
Kegelspitzen are mathematical structures introduced by Keimel and Plotkin to
encompass both the structure of a convex set and the structure of a dcpo. In
this paper, we ask what Kegelspitzen are a model of. We adopt a
categorical viewpoint and show that Kegelspitzen model stochastic matrices onto
a category of domains. Consequently, Kegelspitzen form a denotational model of
pPCF, an abstract functional programming language for probabilistic computing.
We conclude the present work with a discussion of the interpretation of
(probabilistic) recursive types, which are types for entities which might
contain other entities of the same type, such as lists and trees.
In this article we relate a family of methods for automated inductive theorem
proving based on cycle detection in saturation-based provers to well-known
theories of induction. To this end we introduce the notion of clause set cycles
-- a formalism abstracting a certain type of cyclic dependency between clause
sets. We first show that the formalism of clause set cycles is contained in the
theory of $\exists_1$ induction. Secondly, we consider the relation between
clause set cycles and the theory of open induction. By providing a finite
axiomatization of a theory of triangular numbers with open induction we show
that the formalism of clause set cycles is not contained in the theory of open
induction. Furthermore, we conjecture that open induction and clause set cycles
are incomparable. Finally, we transfer these results to a concrete method of
automated inductive theorem proving called the n-clause calculus.
We present the system $\mathtt{d}$, an extended type system with lambda-typed
lambda-expressions. It is related to type systems originating from the Automath
project. $\mathtt{d}$ extends existing lambda-typed systems by an existential
abstraction operator as well as propositional operators. $\beta$-reduction is
extended to also normalize negated expressions using a subset of the laws of
classical negation; hence $\mathtt{d}$ normalizes both proofs and formulas,
which are handled uniformly as functional expressions. $\mathtt{d}$ uses a
reflexive type axiom for a constant $\tau$ to which no function can be typed.
Several properties are shown, including confluence, subject reduction, uniqueness
of types, strong normalization, and consistency. We illustrate how, when using
$\mathtt{d}$, due to its limited logical strength, additional axioms must be
added both for negation and for the mathematical structures whose deductions
are to be formalized.
The window mechanism was introduced by Chatterjee et al. to strengthen
classical game objectives with time bounds. It makes it possible to synthesize
system controllers that exhibit acceptable behaviors within a configurable time
frame, all along their infinite execution, in contrast to traditional
objectives that only require correctness of behaviors in the limit. The window
concept has proved its worth in a variety of two-player zero-sum games, both
because it enables reasoning about such time bounds in system specifications
and because of the increased tractability it usually yields.
In this work, we extend the window framework to stochastic environments by
considering Markov decision processes. A fundamental problem in this context is
the threshold probability problem: given an objective, the aim is to synthesize
strategies that guarantee satisfying runs with at least a given probability. We
solve it
for the usual variants of window objectives, where either the time frame is set
as a parameter, or we ask if such a time frame exists. We develop a generic
approach for window-based objectives and instantiate it for the classical
mean-payoff and parity objectives, already considered in games. Our work paves
the way to a wide use of the window mechanism in stochastic models.
Interpretation methods and their restrictions to polynomials have been widely
used to control the termination and complexity of first-order term rewrite
systems. This paper extends interpretation methods to a pure higher order
functional language. We develop a theory of higher order functions that is
well-suited for the complexity analysis of this programming language. The
interpretation domain is a complete lattice and, consequently, we express
program interpretation in terms of a least fixpoint. As an application, by
bounding interpretations by higher order polynomials, we characterize Basic
Feasible Functions at any order.
Process calculi based on logic, such as $\pi$DILL and CP, provide a foundation
for deadlock-free concurrent programming, but exclude non-determinism and
races. HCP is a reformulation of CP which addresses a fundamental shortcoming:
the parallel composition operator from the $\pi$-calculus does not correspond
to any rule of linear logic, and therefore not to any term construct in CP.
We introduce non-deterministic HCP, which extends HCP with a novel account of
non-determinism. Our approach draws on bounded linear logic to provide a
strongly-typed account of standard process calculus expressions of
non-determinism. We show that our extension is expressive enough to capture
many uses of non-determinism in untyped calculi, such as non-deterministic
choice, while preserving HCP's meta-theoretic properties, including deadlock
freedom.
Psi-calculi is a parametric framework for process calculi similar to popular
pi-calculus extensions such as the explicit fusion calculus, the applied
pi-calculus and the spi calculus. Mechanised proofs of standard algebraic and
congruence properties of bisimilarity apply to all calculi within the
framework. A limitation of psi-calculi is that communication channels must be
symmetric and transitive. In this paper, we give a new operational semantics to
psi-calculi that allows us to lift these restrictions and simplify some of the
proofs. The key technical innovation is to annotate transitions with a
provenance -- a description of the scope and channel they originate from. We
give mechanised proofs that our extension is conservative, and that the
standard algebraic and congruence properties of strong and weak bisimilarity
are maintained. We show correspondence with a reduction semantics and barbed
bisimulation. We show how a pi-calculus with preorders that was previously
beyond the scope of psi-calculi can be captured, and how to encode mixed choice
under very strong quality criteria.
Clocked Type Theory (CloTT) is a type theory for guarded recursion useful for
programming with coinductive types, allowing productivity to be encoded in
types, and for reasoning about advanced programming language features using an
abstract form of step-indexing. CloTT has previously been shown to enjoy a
number of syntactic properties including strong normalisation, canonicity and
decidability of the equational theory. In this paper we present a denotational
semantics for CloTT useful, e.g., for studying future extensions of CloTT with
constructions such as path types.
The main challenge for constructing this model is to model the notion of
ticks on a clock used in CloTT for coinductive reasoning about coinductive
types. We build on a category previously used to model guarded recursion with
multiple clocks. In this category there is an object of clocks but no object of
ticks, and so tick-assumptions in a context cannot be modelled using standard
tools. Instead we model ticks using dependent right adjoint functors, a
generalisation of the category theoretic notion of adjunction to the setting of
categories with families. Dependent right adjoints are known to model
Fitch-style modal types, but in the case of CloTT, the modal operators
constitute a family indexed internally in the type theory by clocks. We model
this family using a dependent right adjoint on the slice category over the
object of clocks. Finally we show how to model the tick constant of CloTT using
a semantic […]
We introduce and study a new class of $T_0$ spaces, called open well-filtered
spaces. The main results we prove include (1) every well-filtered space is an
open well-filtered space; (2) every core-compact open well-filtered space is
sober. As an immediate corollary, we deduce that every core-compact
well-filtered space is sober. This provides a different and relatively more
straightforward method to answer the open problem posed by Jia and Jung:
Is every core-compact well-filtered space sober?
The stabilizer ZX-calculus is a rigorous graphical language for reasoning
about quantum mechanics. The language is sound and complete: one can transform
a stabilizer ZX-diagram into another one using the graphical rewrite rules if
and only if these two diagrams represent the same quantum evolution or quantum
state. We previously showed that the stabilizer ZX-calculus can be simplified
by reducing the number of rewrite rules, without losing the property of
completeness [Backens, Perdrix & Wang, EPTCS 236:1--20, 2017]. Here, we show
that most of the remaining rules of the language are indeed necessary. We do
however leave as an open question the necessity of two rules. These include,
surprisingly, the bialgebra rule, which is an axiomatisation of
complementarity, the cornerstone of the ZX-calculus. Furthermore, we show that
a weaker ambient category -- a braided autonomous category instead of the usual
compact closed category -- is sufficient to recover the meta rule 'only
connectivity matters', even without assuming any symmetries of the generators.