String rewriting systems have proved very useful in the study of monoids. In good
cases, they give finite presentations of monoids, allowing one to compute with
them and to manipulate them by computer. Even better, when the presentation
is confluent and terminating, they provide one with a notion of canonical
representative of the elements of the presented monoid. Polygraphs are a
higher-dimensional generalization of this notion of presentation, from the
setting of monoids to the much more general setting of n-categories. One of the
main purposes of this article is to give a progressive introduction to the
notion of higher-dimensional rewriting system provided by polygraphs, and
describe its links with classical rewriting theory, string and term rewriting
systems in particular. After introducing the general setting, we turn to proving
local confluence for polygraphs presenting 2-categories, and we introduce a
framework in which a finite 3-dimensional rewriting system admits a finite
number of critical pairs.
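As a concrete 1-dimensional warm-up (our illustration, not taken from the
article), consider the presentation of the free commutative monoid on {a, b}
by the single rule ba -> ab, which is confluent and terminating. A minimal
Python sketch, with a hypothetical helper normalise, computes the canonical
representatives:

    # Minimal sketch: normal forms for the presentation <a, b | ba -> ab>
    # of the free commutative monoid (confluent and terminating).
    RULES = [("ba", "ab")]

    def normalise(word: str) -> str:
        """Rewrite to normal form; terminates since each step moves an 'a' left."""
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES:
                if lhs in word:
                    word = word.replace(lhs, rhs, 1)
                    changed = True
        return word

    # Two words denote the same monoid element iff their normal forms agree.
    assert normalise("bbaba") == "aabbb"   # canonical form a^2 b^3

Polygraphs lift this mechanism dimension by dimension: roughly, generators and
rules are replaced by cells in each dimension.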
We construct a symmetric monoidal closed category of polynomial endofunctors
(as objects) and simulation cells (as morphisms). This structure is defined
using universal properties without reference to representing polynomial
diagrams and is reminiscent of Day's convolution on presheaves. We then make
this category into a model for intuitionistic linear logic by defining an
additive and exponential structure.
Evaluating a Boolean conjunctive query Q against a guarded first-order theory
F is equivalent to checking whether "F and not Q" is unsatisfiable. This
problem is relevant to the areas of database theory and description logic.
Since Q may not be guarded, well-known results about the decidability,
complexity, and finite-model property of the guarded fragment do not obviously
carry over to conjunctive query answering over guarded theories, and these
questions had remained open in general. By investigating finite guarded bisimilar covers of
hypergraphs and relational structures, and by substantially generalising
Rosati's finite chase, we prove for guarded theories F and (unions of)
conjunctive queries Q that (i) Q is true in each model of F iff Q is true in
each finite model of F and (ii) determining whether F implies Q is
2EXPTIME-complete. We further show the following results: (iii) the existence
of polynomial-size conformal covers of arbitrary hypergraphs; (iv) a new proof
of the finite model property of the clique-guarded fragment; (v) the small
model property of the guarded fragment with optimal bounds; (vi) a
polynomial-time solution to the canonisation problem modulo guarded
bisimulation, which yields (vii) a capturing result for guarded bisimulation
invariant PTIME.
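To see why Q itself may fall outside the guarded fragment: guarded
quantification has the shape ∃y (G(x, y) ∧ φ(x, y)), where the guard G is a
single atom containing all free variables of φ. The standard example of an
unguardable query is the cyclic conjunctive query

    Q = ∃x ∃y ∃z (R(x,y) ∧ R(y,z) ∧ R(z,x)),

in which no atom mentions all of x, y, z at once; this is what blocks a direct
transfer of the classical guarded-fragment results to query answering.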
As observed by Intrigila, there are hardly any techniques available in the
lambda-calculus for proving that two lambda-terms are not beta-convertible.
Techniques employing the usual Boehm Trees are inadequate when we deal with
terms having the same Boehm Tree (BT). This is the case in particular for fixed
point combinators, as they all have the same BT. Another interesting equation,
whose consideration was suggested by Scott, is BY = BYS: an equation valid in
the classical model P-omega of the lambda-calculus, and hence valid with respect
to BT-equality, although the two terms are not beta-convertible. To prove such
beta-inconvertibilities, we employ 'clocked' BTs, with annotations that convey
information about the tempo in which the data in the BT are produced. Boehm Trees
are thus enriched with an intrinsic clock behaviour, leading to a refined
discrimination method for lambda-terms. The corresponding equality is strictly
intermediate between beta-convertibility and Boehm Tree equality, the equality
in the model P-omega. An analogous approach pertains to Levy-Longo and
Berarducci Trees. Our refined Boehm Trees find in particular an application in
beta-discriminating fixed point combinators (fpc's). It turns out that Scott's
equation BY = BYS is the key to unlocking a plethora of fpc's, generated by a
variety of production schemes of which the simplest was found by Boehm, stating
that new fpc's are obtained by postfixing the term SI, also known as Smullyan's
Owl. We prove that all these […]
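To convey the flavour of Boehm's production scheme (a standard calculation,
included here only as illustration): write O = SI = λxy.y(xy) for Smullyan's
Owl. If Y is an fpc, i.e. Y f equals f (Y f) up to beta-conversion, then

    (Y O) x = O (Y O) x = (λxy.y(xy)) (Y O) x ->> x ((Y O) x),

so the postfixed term Y O is again an fpc. For instance, postfixing Curry's
fpc with the Owl yields a term beta-convertible to Turing's fpc.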
Deep inference is a proof theoretic methodology that generalizes the standard
notion of inference of the sequent calculus, whereby inference rules become
applicable at any depth inside logical expressions. Deep inference provides
more freedom in the design of deductive systems for different logics and a rich
combinatorial analysis of proofs. In particular, it becomes possible to
construct exponentially shorter analytic proofs, albeit at the cost of greater
nondeterminism than in the sequent calculus. In this paper, we show that the
nondeterminism in proof search can be reduced without losing the shorter proofs
and without sacrificing proof theoretic cleanliness. For this, we exploit an
interaction and depth scheme in the logical expressions. We demonstrate our
method on deep inference systems for multiplicative linear logic and classical
logic, discuss its proof complexity and its relation to focusing, and present
implementations.
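To convey the flavour of the methodology (a standard rule, reproduced here
only as illustration), deep inference systems contain rules such as the switch
rule of the calculus of structures,

    S{(A ∨ B) ∧ C}
    --------------- s
    S{A ∨ (B ∧ C)}

where S{ } is an arbitrary context: in contrast with a sequent rule, s may be
applied to a subformula at any depth. This is precisely the extra freedom, and
the source of the extra nondeterminism, discussed above.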
We study an alternative model of infinitary term rewriting. Instead of a
metric on terms, a partial order on partial terms is employed to formalise
convergence of reductions. We consider both a weak and a strong notion of
convergence and show that the metric model of convergence coincides with the
partial order model restricted to total terms. Hence, partial order convergence
constitutes a conservative extension of metric convergence, which additionally
offers a fine-grained distinction between different levels of divergence. In
the second part, we focus our investigation on strong convergence of orthogonal
systems. The main result is that the gap between the metric model and the
partial order model can be bridged by extending the term rewriting system by
additional rules. These extensions are the well-known Böhm extensions. Based
on this result, we are able to establish that -- contrary to the metric setting
-- orthogonal systems are both infinitarily confluent and infinitarily
normalising in the partial order setting. The unique infinitary normal forms
that the partial order model admits are Böhm trees.
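A minimal example of the finer distinctions (ours, not from the paper): for
the single rule C -> C, the constant C is root-active, and the reduction
C -> C -> ... contracts a root redex at every step, so it strongly converges
to no term in the metric model; in the partial order model it strongly
converges to the partial term ⊥, the least element of the order. Accordingly,
the Böhm extension adds the rule C -> ⊥, and ⊥ is the Böhm tree, i.e. the
unique infinitary normal form, of C.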
We present some contributions to the theory of infinitary rewriting for
weakly orthogonal term rewrite systems, in which critical pairs may occur
provided they are trivial. We show that the infinitary unique normal form
property fails, by giving an example of a weakly orthogonal TRS with two
collapsing rules. By translating this example, we show that the property also
fails for the infinitary lambda-beta-eta-calculus. As positive results we
obtain the following: infinitary confluence, and hence the infinitary unique
normal form property, holds for weakly orthogonal TRSs that do not contain
collapsing rules. To this end we refine the compression lemma. Furthermore, we establish
the triangle and diamond properties for infinitary multi-steps (complete
developments) in weakly orthogonal TRSs, by refining an earlier
cluster-analysis for the finite case.
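To illustrate weak orthogonality (the standard example with two collapsing
rules): take the rules P(S(x)) -> x and S(P(x)) -> x. They overlap, but only
trivially: the term P(S(P(y))) contracts at the root to P(y), and contracting
the inner redex S(P(y)) also yields P(y), so the critical pair is trivial. A
weakly orthogonal TRS built from such rules drives the counterexample to the
infinitary unique normal form property mentioned above.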
A semi-computable set S in a computable metric space need not be computable.
However, in some cases, topological properties of S allow us to conclude that
S is computable. It is known that if a semi-computable set S is a compact
manifold with boundary, then computability of the boundary ∂S implies
computability of S. In this paper we examine the case when S is a 1-manifold
with boundary, not necessarily compact. We show that a similar result holds in
this case, under the assumption that S has finitely many components.
Let A be a linear space of operators on a Hilbert space H, x a vector in H,
and Ax the subspace of H comprising all vectors of the form Tx with T in A. We
discuss, within a Bishop-style constructive framework, conditions under which
the projection [Ax] of H on the closure of Ax exists. We derive a general
result that leads directly to both the open mapping theorem and our main
theorem on the existence of [Ax].
We introduce a logical foundation for reasoning about tree structures with
constraints on the number of node occurrences. Related formalisms are limited
to expressing occurrence constraints on particular tree regions, for instance
the children of a given node. By contrast, the logic introduced in the present
work can concisely express numerical bounds on any region, descendants or
ancestors for instance. We prove that the logic is decidable in single
exponential time even if the numerical constraints are in binary form. We also
illustrate the usage of the logic in the description of numerical constraints
on multi-directional path queries on XML documents. Furthermore, numerical
restrictions on regular languages (XML schemas) can also be concisely described
by the logic. This implies a characterization of decidable counting extensions
of XPath queries and XML schemas. Moreover, as the logic is closed under
negation, it can be used as an optimal reasoning framework for testing
emptiness, containment and equivalence.
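For instance (our illustration; the concrete syntax below is XPath, not the
logic of the paper), the logic can concisely express a bound on an ancestor
region, e.g. selecting the nodes with at most 3 ancestors labelled section:

    //*[count(ancestor::section) <= 3]

a constraint out of reach of formalisms restricted to counting among the
children of a node; coding the bound 3 in binary does not affect the single
exponential complexity.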
The design and verification of cryptographic protocols is a notoriously
difficult task, even in symbolic models, which take an abstract view of
cryptography. This is mainly due to the fact that protocols may interact with
an arbitrary attacker, which yields a verification problem with several
sources of unboundedness (size of messages, number of sessions, etc.). In this
paper, we characterize a class of protocols for which security is decidable
for an unbounded number of sessions. More precisely, we present a simple
transformation which maps a protocol that is secure for a bounded number of
protocol sessions (a decidable problem) to a protocol that is secure for an
unbounded number of sessions. The precise number of sessions that need to be
considered is a function of the security property and we show that for several
classical security properties a single session is sufficient. Therefore, in
many cases our results yield a design strategy for security protocols: (i)
design a protocol intended to be secure for a single session; and (ii) apply
our transformation to obtain a protocol which is secure for an unbounded number
of sessions.
Classification problems were introduced by M. Ziegler as a generalization of
promise problems. In this paper we are concerned with
solvability and unsolvability questions with respect to a given set or language
family, especially with cores of unsolvability. We generalize the results about
unsolvability cores in promise problems to classification problems. Our main
results are a characterization of unsolvability cores via cohesiveness and
existence theorems for such cores in unsolvable classification problems. In
contrast to promise problems we have to strengthen the conditions to assert the
existence of such cores. In general, there exist unsolvable classification
problems with more than two components that possess no cores, even if the set
family under consideration satisfies the assumptions needed to prove the
existence of cores in unsolvable promise problems. But if one of the
components is fixed, we can use the results on unsolvability cores in promise
problems to assert the existence of such cores in general. In this case we
speak of conditional classification problems and conditional cores. The
existence of conditional cores can be related to complexity cores. Using this
connection we can prove, for language families that admit a uniform solution
to the word problem, that conditional cores with recursive components exist.
Probability logic has contributed to significant developments in the theory of
belief types in game-theoretic economics. We present a new probability logic for Harsanyi
Type spaces, show its completeness, and prove both a de-nesting property and a
unique extension theorem. We then prove that multi-agent interactive
epistemology has greater complexity than its single-agent counterpart by
showing that if the probability indices of the belief language are restricted
to a finite set of rationals and there are finitely many propositional letters,
then the canonical space for probabilistic beliefs with one agent is finite
while the canonical one with at least two agents has the cardinality of the
continuum. Finally, we generalize the three notions of definability in
multimodal logics to logics of probabilistic belief and knowledge, namely
implicit definability, reducibility, and explicit definability. We find that
S5-knowledge can be implicitly defined by probabilistic belief but not reduced
to it and hence is not explicitly definable by probabilistic belief.
We consider two-player games played on finite graphs equipped with costs on
edges and introduce two winning conditions, cost-parity and cost-Streett, which
require bounds on the cost between requests and their responses. Both
conditions generalize the corresponding classical omega-regular conditions and
the corresponding finitary conditions. For parity games with costs we show that
the first player has positional winning strategies and that determining the
winner lies in NP and coNP. For Streett games with costs we show that the first
player has finite-state winning strategies and that determining the winner is
EXPTIME-complete. The second player might need infinite memory in both games.
Both types of games with costs can be solved by solving linearly many instances
of their classical variants.
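Roughly, and as a paraphrase rather than the paper's verbatim definition: a
play satisfies the cost-parity condition if there is a bound b such that all
but finitely many requests (odd priorities) are answered by a later, larger
even priority incurring cost at most b in between; with all costs zero this
collapses to the classical parity condition, and with all costs one it matches
the finitary parity condition.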
We define a pi-calculus variant with a costed semantics where channels are
treated as resources that must explicitly be allocated before they are used and
can be deallocated when no longer required. We use a substructural type system
tracking permission transfer to construct coinductive proof techniques for
comparing behaviour and resource usage efficiency of concurrent processes. We
establish full abstraction results between our coinductive definitions and a
contextual behavioural preorder describing a notion of process efficiency
w.r.t. its management of resources. We also justify these definitions and
respective proof techniques through numerous examples and a case study
comparing two concurrent implementations of an extensible buffer.
The Stone tautologies are known to have polynomial size resolution
refutations and require exponential size regular refutations. We prove that the
Stone tautologies also have polynomial size proofs in both pool resolution and
the proof system of regular tree-like resolution with input lemmas (regRTI).
Therefore, the Stone tautologies do not separate resolution from DPLL with
clause learning.
Synthesis is the automatic construction of a system from its specification.
In classical synthesis algorithms, it is always assumed that the system is
"constructed from scratch" rather than composed from reusable components. This,
of course, rarely happens in real life, where almost every non-trivial
commercial software system relies heavily on using libraries of reusable
components. Furthermore, other contexts, such as web-service orchestration, can
be modeled as synthesis of a system from a library of components. Recently,
Lustig and Vardi introduced dataflow and control-flow synthesis from libraries
of reusable components. They proved that dataflow synthesis is undecidable,
while control-flow synthesis is decidable. In this work, we consider the
problem of control-flow synthesis from libraries of probabilistic components.
We show that this more general problem is also decidable.