We define and study LNL polycategories, which abstract the judgmental
structure of classical linear logic with exponentials. Many existing structures
can be represented as LNL polycategories, including LNL adjunctions, linear
exponential comonads, LNL multicategories, IL-indexed categories, linearly
distributive categories with storage, commutative and strong monads,
CBPV-structures, models of polarized calculi, Freyd-categories, and skew
multicategories, as well as ordinary cartesian, symmetric, and planar
multicategories and monoidal categories, symmetric polycategories, and linearly
distributive and *-autonomous categories. To study such classes of structures
uniformly, we define a notion of LNL doctrine, such that each of these classes
of structures can be identified with the algebras for some such doctrine. We
show that free algebras for LNL doctrines can be presented by a sequent
calculus, and that every morphism of doctrines induces an adjunction between
their 2-categories of algebras.
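For orientation, the most familiar of these structures, an LNL adjunction in the sense of Benton, is a symmetric monoidal adjunction
\[ F : \mathcal{C} \rightleftarrows \mathcal{L} : G, \qquad F \dashv G, \]
between a cartesian (closed) category $\mathcal{C}$ and a symmetric monoidal (closed) category $\mathcal{L}$; the induced comonad $! = F \circ G$ on $\mathcal{L}$ is then a linear exponential comonad interpreting the $!$ modality of linear logic.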
QBF solvers implementing the QCDCL paradigm are powerful algorithms that
successfully tackle many computationally complex applications. However, our
theoretical understanding of the strength and limitations of these QCDCL
solvers is very limited.
In this paper we propose to formally model QCDCL solvers as proof systems. We
define different policies that can be used for decision heuristics and unit
propagation, which give rise to a number of sound and complete QBF proof systems
(and hence new QCDCL algorithms). With respect to the standard policies used in
practical QCDCL solving, we show that the corresponding QCDCL proof system is
incomparable (via exponential separations) to Q-resolution, the classical QBF
resolution system used in the literature. This is in stark contrast to the
propositional setting where CDCL and resolution are known to be p-equivalent.
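For orientation, Q-resolution extends propositional resolution with a universal reduction rule; in a standard formulation,
\[ \frac{C \vee x \qquad D \vee \neg x}{C \vee D} \ (x \text{ existential}), \qquad\qquad \frac{C \vee u}{C} \ (u \text{ universal, no existential literal of } C \text{ to the right of } u \text{ in the prefix}). \]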
This raises the question of which formulas are hard for standard QCDCL, since
Q-resolution lower bounds do not necessarily apply to QCDCL as we show here. In
answer to this question we prove several lower bounds for QCDCL, including
exponential lower bounds for a large class of random QBFs.
We also introduce a strengthening of the decision heuristic used in classical
QCDCL, which does not necessarily decide variables in order of the prefix, but
still allows learning asserting clauses. We show that with this decision
policy, QCDCL can be exponentially faster on some formulas.
We further exhibit a QCDCL proof system that […]
Describing systems in terms of choices and their resulting costs and rewards
offers the promise of freeing algorithm designers and programmers from
specifying how those choices should be made; in implementations, the choices
can be realized by optimization techniques and, increasingly, by
machine-learning methods. We study this approach from a programming-language
perspective. We define two small languages that support decision-making
abstractions: one with choices and rewards, and the other additionally with
probabilities. We give both operational and denotational semantics.
In the case of the second language we consider three denotational semantics,
with varying degrees of correlation between possible program values and
expected rewards. The operational semantics combine the usual semantics of
standard constructs with optimization over spaces of possible execution
strategies. The denotational semantics, which are compositional, rely on the
selection monad to handle choice, augmented with an auxiliary monad to handle
other effects, such as rewards or probability.
We establish adequacy theorems stating that the two semantics coincide in all cases.
We also prove full abstraction at base types, with varying notions of
observation in the probabilistic case corresponding to the various degrees of
correlation. We present axioms for choice combined with rewards and
probability, establishing completeness at base types for the case of rewards
without probability.
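As a concrete illustration of the denotational ingredient, here is a minimal Haskell sketch of the selection monad, the type $(A \to R) \to A$, which the semantics uses to handle choice; the names and the use of Double as the reward type are ours, for illustration only:

    -- Selection monad: a "chooser" that, given a reward function on outcomes,
    -- picks an outcome (Escardó and Oliva's J_R A = (A -> R) -> A).
    newtype Select r a = Select { runSelect :: (a -> r) -> a }

    instance Functor (Select r) where
      fmap f (Select s) = Select $ \k -> f (s (k . f))

    instance Applicative (Select r) where
      pure x    = Select (const x)
      sf <*> sx = sf >>= \f -> fmap f sx

    instance Monad (Select r) where
      Select s >>= f = Select $ \k ->
        let pick x = runSelect (f x) k   -- continue with reward function k
        in  pick (s (k . pick))          -- choose an 'a' by its eventual reward

    -- a choice that maximises the reward assigned to a Boolean
    maxBool :: Select Double Bool
    maxBool = Select $ \k -> if k True >= k False then True else False

For instance, runSelect maxBool (\b -> if b then 1 else 0) evaluates to True; the languages of the abstract additionally combine this monad with an auxiliary monad for other effects such as rewards or probability.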
In the simplest form of event structure, a prime event structure, an event is
associated with a unique causal history, its prime cause. However, it is quite
common for an event to have disjunctive causes in that it can be enabled by any
one of multiple sets of causes. Sometimes the sets of causes are mutually
exclusive, inconsistent with one another; sometimes they are not, in which case they
coexist consistently and constitute parallel causes of the event. The
established model of general event structures can model parallel causes. On
occasion, however, such a model abstracts too far away from the precise causal
histories of events to be directly useful. For example, sometimes one needs to
associate probabilities with different, possibly coexisting, causal histories
of a common event. Ideally, the causal histories of a general event structure
would correspond to the configurations of its causal unfolding to a prime event
structure; and the causal unfolding would arise as a right adjoint to the
embedding of prime in general event structures. But there is no such
adjunction. However, a slight extension of prime event structures remedies this
defect and provides a causal unfolding as a universal construction. Prime event
structures are extended with an equivalence relation in order to dissociate the
two roles, that of an event and its enabling; in effect, prime causes are
labelled by a disjunctive event, an equivalence class of its prime causes. With
this enrichment a suitable […]
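For reference, a prime event structure is standardly a triple $(E, \le, \#)$, where $\le$ is a partial order of causal dependency on the events $E$ such that each event $e$ has a finite set of causes $\lceil e \rceil = \{ e' \in E \mid e' \le e \}$, and $\#$ is an irreflexive, symmetric conflict relation satisfying $e \mathbin{\#} e' \le e'' \Rightarrow e \mathbin{\#} e''$. The extension described above equips such structures with an equivalence relation on events, so that a disjunctive event may have several prime causes.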
We study the learnability of symbolic finite state automata (SFA), a model
that has proven useful in many applications in software verification. The
state-of-the-art literature on this topic follows the query learning paradigm,
and so far all results obtained have been positive. We provide a necessary condition
for efficient learnability of SFAs in this paradigm, from which we obtain the
first negative result. The main focus of our work lies in the learnability of
SFAs under the paradigm of identification in the limit using polynomial time
and data, and its strengthening, efficient identifiability, which are concerned
with the existence of a systematic set of characteristic samples from which a
learner can correctly infer the target language. We provide a necessary
condition for identification of SFAs in the limit using polynomial time and
data, and a sufficient condition for efficient learnability of SFAs. From these
conditions we derive a positive and a negative result. The performance of a
learning algorithm is typically bounded as a function of the size of the
representation of the target language. Since SFAs, in general, do not have a
canonical form, and there are trade-offs between the complexity of the
predicates on the transitions and the number of transitions, we start by
defining size measures for SFAs. We revisit the complexity of procedures on
SFAs and analyze them according to these measures, paying attention to the
special forms of SFAs: normalized SFAs and neat SFAs, as well as […]
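To fix intuitions, an SFA labels transitions with predicates over a (possibly infinite) alphabet rather than with individual letters; the following is a minimal Haskell sketch of this idea (all names are ours, not the paper's):

    -- A symbolic finite automaton: transitions are guarded by predicates
    -- over the alphabet rather than labelled by single letters.
    data SFA q a = SFA
      { start     :: q
      , accepting :: q -> Bool
      , delta     :: q -> [(a -> Bool, q)]  -- (guard, target) pairs
      }

    -- all successor states whose guard holds for the input symbol
    step :: SFA q a -> q -> a -> [q]
    step m q c = [ q' | (g, q') <- delta m q, g c ]

    -- acceptance, exploring every enabled branch
    accepts :: SFA q a -> [a] -> Bool
    accepts m = go (start m)
      where
        go q []       = accepting m q
        go q (c : cs) = any (\q' -> go q' cs) (step m q c)

The trade-off mentioned above is visible here: a guard may be an arbitrarily complex predicate, so the size of an SFA depends both on the number of transitions and on the complexity of their guards.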
Causal reversibility blends reversibility and causality for concurrent
systems. It indicates that an action can be undone provided that all of its
consequences have been undone already, thus making it possible to bring the
system back to a past consistent state. Time reversibility is instead
considered in the field of stochastic processes, mostly for efficient analysis
purposes. A performance model based on a continuous-time Markov chain is time
reversible if its stochastic behavior remains the same when the direction of
time is reversed. We bridge these two theories of reversibility by showing the
conditions under which causal reversibility and time reversibility are both
ensured by construction. This is done in the setting of a stochastic process
calculus, which is then equipped with a variant of stochastic bisimilarity
accounting for both forward and backward directions.
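Concretely, a CTMC with rates $q(s, s')$ and stationary distribution $\pi$ is time reversible precisely when the detailed balance conditions
\[ \pi(s)\, q(s, s') = \pi(s')\, q(s', s) \qquad \text{for all states } s \neq s' \]
hold, in which case its stochastic behavior is the same in both directions of time.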
In this paper we propose a new approach to realizability interpretations for
nonstandard arithmetic. We deal with nonstandard analysis in the context of
(semi)intuitionistic realizability, focusing on the Lightstone-Robinson
construction of a model for nonstandard analysis through an ultrapower. In
particular, we consider an extension of the $\lambda$-calculus with a memory
cell that contains an integer (the state), in order to indicate in which slice
of the ultrapower $\mathcal{M}^{\mathbb{N}}$ the computation is being done. We pay
attention to the nonstandard principles (and their computational content)
obtainable in this setting. In particular, we give non-trivial realizers to
Idealization and a nonstandard version of the LLPO principle. We then discuss
how to quotient this product to mimic the Lightstone-Robinson construction.
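For orientation, Idealization, in one standard formulation over internal formulas $\varphi$, reads
\[ \forall^{\mathrm{st}} z \,\big(z \text{ finite} \rightarrow \exists x\, \forall y \in z\ \varphi(x, y)\big) \;\rightarrow\; \exists x\, \forall^{\mathrm{st}} y\ \varphi(x, y), \]
where $\forall^{\mathrm{st}}$ quantifies over standard elements only.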
We investigate predicative aspects of constructive univalent foundations. By
predicative and constructive, we respectively mean that we do not assume
Voevodsky's propositional resizing axioms or excluded middle. Our work
complements existing work on predicative mathematics by exploring what cannot
be done predicatively in univalent foundations. Our first main result is that
nontrivial (directed or bounded) complete posets are necessarily large. That
is, if such a nontrivial poset is small, then weak propositional resizing
holds. It is possible to derive full propositional resizing if we strengthen
nontriviality to positivity. The distinction between nontriviality and
positivity is analogous to the distinction between nonemptiness and
inhabitedness. Moreover, we prove that locally small, nontrivial (directed or
bounded) complete posets necessarily lack decidable equality. We prove our
results for a general class of posets, which includes, e.g., directed complete
posets, bounded complete posets, sup-lattices and frames. Secondly, the fact
that these nontrivial posets are necessarily large has the important
consequence that Tarski's theorem (and similar results) cannot be applied in
nontrivial instances. Furthermore, we explain that generalizations of Tarski's
theorem that allow for large structures are provably false by showing that the
ordinal of ordinals in a univalent universe has small suprema in the presence
of set quotients. The latter also leads us to investigate […]
We generalize several propositional preprocessing techniques to higher-order
logic, building on existing first-order generalizations. These techniques
eliminate literals, clauses, or predicate symbols from the problem, with the
aim of making it more amenable to automatic proof search. We also introduce a
new technique, which we call quasipure literal elimination, that strictly
subsumes pure literal elimination. The new techniques are implemented in the
Zipperposition theorem prover. Our evaluation shows that they sometimes help
prove problems originating from Isabelle formalizations and the TPTP library.
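For instance, pure literal elimination deletes every clause containing a literal whose negation occurs nowhere in the problem: in the clause set $\{p \lor q,\ \lnot q \lor r\}$ the literal $p$ is pure, so the clause $p \lor q$ may be deleted without affecting satisfiability.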
We provide time lower bounds for sequential and parallel partition-refinement
algorithms deciding bisimulation on labeled transition systems. For
sequential algorithms the bound is $\Omega((m+n)\log n)$ and for parallel algorithms
it is $\Omega(n)$, where $n$
is the number of states and $m$ is the number of transitions. The lower bounds
are obtained by analysing families of deterministic transition systems,
ultimately with two actions in the sequential case, and one action for parallel
algorithms. For deterministic transition systems with one action, bisimilarity
can be decided sequentially with techniques fundamentally different from
partition refinement. In particular, Paige, Tarjan, and Bonic give a linear-time
algorithm for this specific situation. We show, exploiting the concept of an
oracle, that this approach does not help in developing a faster generic algorithm
for deciding bisimilarity. A similar situation, in which these techniques may
be applied, arises for parallel algorithms.
A linear inference is a valid inequality of Boolean algebra in which each
variable occurs at most once on each side.
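The two best-known linear inferences are switch and medial,
\[ x \land (y \lor z) \;\to\; (x \land y) \lor z, \qquad\qquad (w \land x) \lor (y \land z) \;\to\; (w \lor y) \land (x \lor z), \]
which together generate the linear fragments of many deep-inference proof systems.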
In this work we leverage recently developed graphical representations of
linear formulae to build an implementation that searches more efficiently
for switch-medial-independent inferences. We use it to find four
`minimal' 8-variable independent inferences and also prove that no smaller ones
exist; in contrast, a previous approach based directly on formulae reached
computational limits already at 7 variables. Two of these new inferences derive
some previously found independent linear inferences. The other two (which are
dual) exhibit structure seemingly beyond the scope of previous approaches we
are aware of; in particular, their existence contradicts a conjecture of Das
and Strassburger.
We were also able to identify 10 minimal 9-variable linear inferences
independent of all the aforementioned inferences, comprising 5 dual pairs, and
present applications of our implementation to recent `graph logics'.
We present $\mathcal{L}$, an extension of Parigot's $\lambda\mu$-calculus obtained by
adding negation as a type constructor, together with syntactic constructs that
represent negation introduction and elimination. We define a notion of
reduction that extends $\lambda\mu$'s reduction system with two new reduction
rules, and show that the system satisfies subject reduction. Using Aczel's
generalisation of Tait and Martin-Löf's notion of parallel reduction, we show
that this extended reduction is confluent. Although the notion of type
assignment has its limitations with respect to the representation of proofs in
natural deduction with implication and negation, we show that all
propositions provable in that system have a witness in $\mathcal{L}$. Using
Girard's approach of reducibility candidates, we show that all typeable terms
are strongly normalisable, and conclude the paper by showing that type
assignment for $\mathcal{L}$ enjoys the principal typing property.
Hyperproperties are system properties that relate multiple computation paths
in a system and are commonly used to, e.g., define information-flow policies.
In this paper, we study a novel class of hyperproperties that allow reasoning
about strategic abilities in multi-agent systems. We introduce HyperATL*, an
extension of computation tree logic with path variables and strategy
quantifiers. Our logic supports quantification over paths in a system, as is
possible in hyperlogics such as HyperCTL*, but resolves the paths based on the
strategic choices of a coalition of agents. This allows us to capture many
previously studied (strategic) security notions in a unifying hyperlogic.
Moreover, we show that HyperATL* is particularly useful for specifying
asynchronous hyperproperties, i.e., hyperproperties where the execution speed
on the different computation paths depends on the choices of a scheduler. We
show that finite-state model checking of HyperATL* is decidable and present a
model checking algorithm based on alternating automata. We establish that our
algorithm is asymptotically optimal by proving matching lower bounds. We have
implemented a prototype model checker for a fragment of HyperATL* that can
check various security properties in small finite-state systems.
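As an illustration (ours, in HyperCTL*-style notation), observational determinism for low inputs $I$ and low outputs $O$ can be written as
\[ \forall \pi.\ \forall \pi'.\ \Big( \bigwedge_{a \in I} (a_\pi \leftrightarrow a_{\pi'}) \Big) \rightarrow \square \Big( \bigwedge_{a \in O} (a_\pi \leftrightarrow a_{\pi'}) \Big); \]
in HyperATL*, such path quantifiers are refined so that the paths are resolved by the strategic choices of agent coalitions.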
The concept of bounded expansion provides a robust way to capture sparse
graph classes with interesting algorithmic properties. Most notably, every
problem definable in first-order logic can be solved in linear time on bounded
expansion graph classes. First-order interpretations and transductions of
sparse graph classes lead to more general, dense graph classes that seem to
inherit many of the nice algorithmic properties of their sparse counterparts.
In this paper, we show that one can encode graphs from a class with
structurally bounded expansion via lacon-, shrub- and parity-decompositions
from a class with bounded expansion. These decompositions are useful for
lifting properties from sparse to structurally sparse graph classes.
Knaster-Tarski's theorem, characterising the greatest fixpoint of a monotone
function over a complete lattice as the largest post-fixpoint, naturally leads
to the so-called coinduction proof principle for showing that some element is
below the greatest fixpoint (e.g., for providing bisimilarity witnesses). The
dual principle, used for showing that an element is above the least fixpoint,
is related to inductive invariants. In this paper we provide proof rules which
are similar in spirit but for showing that an element is above the greatest
fixpoint or, dually, below the least fixpoint. The theory is developed for
non-expansive monotone functions on suitable lattices of the form
$\mathbb{M}^Y$, where $Y$ is a finite set and $\mathbb{M}$ an MV-algebra, and
it is based on the construction of (finitary) approximations of the original
functions. We show that our theory applies to a wide range of examples,
including termination probabilities, metric transition systems, behavioural
distances for probabilistic automata and bisimilarity. Moreover, it allows us to
derive original algorithms for solving simple stochastic games.
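In symbols, Knaster-Tarski gives $\nu f = \bigvee \{ x \mid x \le f(x) \}$ for monotone $f$ on a complete lattice, whence the two classical rules
\[ \frac{x \le f(x)}{x \le \nu f} \ (\text{coinduction}) \qquad\qquad \frac{f(x) \le x}{\mu f \le x} \ (\text{induction/invariants}), \]
while the rules developed in the paper instead bound $\nu f$ from above and $\mu f$ from below.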
Correctness-by-Construction (CbC) is an incremental process for
constructing functionally correct programs. The programs are
constructed stepwise along with a specification that is inherently guaranteed
to be satisfied. CbC is complex to use without specialized tool support, since
it needs a set of predefined refinement rules of fixed granularity which are
additional rules on top of the programming language. Each refinement rule
introduces a specific programming statement and developers cannot depart from
these rules to construct programs. CbC allows software to be developed in a
structured and incremental way that ensures correctness, but its limited
flexibility is a disadvantage. In this work, we compare classic CbC with
CbC-Block and TraitCbC. Both approaches are related to
CbC, but they have new language constructs that enable a more flexible software
construction approach. For both approaches we provide a programming guideline
which, similar to CbC, leads to well-structured programs. CbC-Block extends CbC
by adding a refinement rule that allows inserting any block of statements.
TraitCbC implements
correctness-by-construction on the basis of traits with specified methods. We
formally introduce TraitCbC and prove soundness of the construction strategy.
All three development approaches are qualitatively compared regarding their
programming constructs, tool support, and usability to assess […]
Milner (1984) defined an operational semantics for regular expressions as
finite-state processes. In order to axiomatize bisimilarity of regular
expressions under this process semantics, he adapted Salomaa's proof system
that is complete for equality of regular expressions under the language
semantics. In addition to most of the equational axioms, Milner's system Mil inherits from
Salomaa's system a non-algebraic rule for solving single fixed-point equations.
Recognizing distinctive properties of the process semantics that render
Salomaa's proof strategy inapplicable, Milner posed completeness of the system
Mil as an open question.
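In one standard formulation, that fixed-point rule licenses passing from a guarded equation to its starred solution:
\[ \frac{s = t \cdot s + u}{s = t^{*} \cdot u} \qquad (t \text{ does not have the empty-word property}). \]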
As a proof-theoretic approach to this problem we characterize the
derivational power that the fixed-point rule adds to the purely equational part
Mil$^-$ of Mil. We do so by means of a coinductive rule that permits cyclic
derivations that consist of a finite process graph with empty steps that
satisfies the layered loop existence and elimination property LLEE, and two of
its Mil$^{-}$-provable solutions. With this rule as replacement for the
fixed-point rule in Mil, we define the coinductive reformulation cMil as an
extension of Mil$^{-}$. In order to show that cMil and Mil are theorem
equivalent we develop effective proof transformations from Mil to cMil, and
vice versa. Since it is located halfway between bisimulations and proofs in
Milner's system Mil, cMil may become a beachhead for a completeness proof of
Mil.
This article extends our […]