We improve the answer to the following question: which set of instances of excluded middle, for the propositional variables occurring in a formula, suffices to prove the formula in intuitionistic propositional logic whenever it is provable in classical propositional logic.
We prove decidability of the boundedness problem for monadic least
fixed-point recursion based on positive monadic second-order (MSO) formulae
over trees. Given an MSO formula $\varphi(X,x)$ that is positive in $X$, it is decidable whether the fixed-point recursion based on $\varphi$ is spurious over the class of all trees, in the sense that there is a finite bound, uniform across all trees, on the number of iterations $\varphi$ takes to reach its least fixed point. We also identify the exact complexity of this problem. The proof
uses automata-theoretic techniques. This key result extends, by means of
model-theoretic interpretations, to show decidability of the boundedness
problem for MSO and guarded second-order logic (GSO) over the classes of
structures of fixed finite tree-width. Further model-theoretic transfer
arguments allow us to derive major known decidability results for boundedness
for fragments of first-order logic as well as new ones.
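To fix notation (a sketch of the standard definition; the symbols below are assumptions, not quoted from the paper): for $\varphi(X,x)$ positive in $X$ and a tree $T$, the stages of the induction are
$$X^0_T = \emptyset, \qquad X^{n+1}_T = \{\, a : T \models \varphi(X^n_T, a) \,\},$$
and by positivity they increase monotonically to the least fixed point. Boundedness over the class of all trees asks for a single $n$ with $X^{n+1}_T = X^n_T$ for every tree $T$.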
In this paper we study the expressive power of Horn-formulae in dependence
logic and show that they can express NP-complete problems. Therefore we define
an even smaller fragment D-Horn* and show that over finite successor structures
it captures the complexity class P of all sets decidable in polynomial time.
Furthermore, we study the question of which of our results can be generalized to the case of open formulae of D-Horn* and so-called downwards monotone polynomial time properties of teams.
We study languages over infinite alphabets equipped with some structure that
can be tested by recognizing automata. We develop a framework for studying such
alphabets and the ensuing automata theory, where the key role is played by an
automorphism group of the alphabet. In the process, we generalize nominal sets
due to Gabbay and Pitts.
This paper is motivated by a conjecture that BPP can be characterized in
terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random
strings. In this paper we show that an approach laid out in [Allender et al] to
settle this conjecture cannot succeed without significant alteration, but that
it does bear fruit if we consider time-bounded Kolmogorov complexity instead.
We show that if a set A is reducible in polynomial time to the set of
time-t-bounded Kolmogorov random strings (for all large enough time bounds t),
then A is in P/poly, and that if in addition such a reduction exists for any
universal Turing machine one uses in the definition of Kolmogorov complexity,
then A is in PSPACE.
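For orientation (a standard formulation, with notation assumed rather than quoted from the paper): writing $K^t(x)$ for the length of a shortest program that outputs $x$ within $t(|x|)$ steps, the set of time-$t$-bounded Kolmogorov random strings can be taken to be
$$R_{K^t} = \{\, x : K^t(x) \ge |x| \,\},$$
and the first result says that if $A$ reduces in polynomial time to $R_{K^t}$ for all sufficiently large time bounds $t$, then $A \in \mathrm{P/poly}$.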
We give a new characterization of $\mathsf{NL}$ as the class of languages
whose members have certificates that can be verified with small error in
polynomial time by finite state machines that use a constant number of random
bits, as opposed to its conventional description in terms of deterministic
logarithmic-space verifiers. It turns out that allowing two-way interaction
with the prover does not change the class of verifiable languages, and that no
polynomially bounded amount of randomness is useful for constant-memory
computers when used as language recognizers or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information where the universal player is allowed a constant number of moves equals $\mathsf{NL}$.
This article is a fundamental study in computable measure theory. We use the
framework of TTE, the representation approach, where computability on an
abstract set X is defined by representing its elements with concrete "names",
possibly countably infinite, over some alphabet $\Sigma$. As a basic
computability structure we consider a computable measure on a computable
$\sigma$-algebra. We introduce and compare w.r.t. reducibility several natural
representations of measurable sets. They are admissible and generally form four
different equivalence classes. We then compare our representations with those
introduced by Y. Wu and D. Ding in 2005 and 2006 and claim that one of our
representations is the most useful one for studying computability on measurable
functions.
We address the problem of conditional termination, which is that of defining
the set of initial configurations from which a given program always terminates.
First we define the dual set, of initial configurations from which a
non-terminating execution exists, as the greatest fixpoint of the function that
maps a set of states to its pre-image with respect to the transition relation. This definition makes it possible to compute the weakest non-termination
precondition if at least one of the following holds: (i) the transition
relation is deterministic, (ii) the descending Kleene sequence
overapproximating the greatest fixpoint converges in finitely many steps, or
(iii) the transition relation is well founded. We show that this is the case
for two classes of relations, namely octagonal and finite monoid affine
relations. Moreover, since the closed forms of these relations can be defined
in Presburger arithmetic, we obtain the decidability of the termination problem
for such loops.
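As a toy illustration of this fixpoint definition (a minimal sketch over an explicit finite transition relation, not the paper's procedure for octagonal or finite monoid affine relations; all names below are hypothetical):

    # Descending Kleene iteration for the greatest fixpoint of the pre-image
    # operator: the result is the set of states from which some execution
    # never terminates.
    def pre(states, relation, targets):
        """States having at least one successor inside `targets`."""
        return {s for s in states if any((s, t) in relation for t in targets)}

    def nontermination_states(states, relation):
        """Iterate pre() from the full state set until a fixpoint is reached."""
        current = set(states)
        while True:
            nxt = pre(states, relation, current)
            if nxt == current:      # fixpoint reached
                return current
            current = nxt

    # Example: a self-loop on state 0, a step 1 -> 0, and a terminating step 2 -> 3.
    print(nontermination_states({0, 1, 2, 3}, {(0, 0), (1, 0), (2, 3)}))
    # the non-terminating states: 0 and 1

On finite systems the descending sequence always stabilises; conditions (i)-(iii) above serve to guarantee, or substitute for, such convergence in the infinite-state setting considered in the paper.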
Let F be a set of relational trees and let Forbh(F) be the class of all
structures that admit no homomorphism from any tree in F; all this happens over
a fixed finite relational signature $\sigma$. There is a natural way to expand
Forbh(F) by unary relations to an amalgamation class. This expanded class,
enhanced with a linear ordering, has the Ramsey property.
Concurrent pattern calculus (CPC) drives interaction between processes by
comparing data structures, just as sequential pattern calculus drives
computation. By generalising from pattern matching to pattern unification,
interaction becomes symmetrical, with information flowing in both directions.
CPC provides a natural language to express trade where information exchange is
pivotal to interaction. The unification allows some patterns to be more
discriminating than others; hence, the behavioural theory must take this aspect
into account, so that bisimulation becomes subject to compatibility of
patterns. Many popular process calculi can be encoded in CPC; this allows for a
gain in expressiveness, formalised through encodings.
This paper studies a difference operator for stochastic systems whose
specifications are represented by Abstract Probabilistic Automata (APAs). When refinement between two specifications fails, the aim of this operator is to produce a specification APA that represents all witness PAs of this failure. Our contribution is an algorithm that approximates the difference of two APAs with arbitrary precision. Our technique relies on new
quantitative notions of distances between APAs used to assess convergence of
the approximations, as well as on an in-depth inspection of the refinement
relation for APAs. The procedure is effective and not more complex to implement
than refinement checking.
An $\omega$-language is a set of infinite words over a finite alphabet $X$. We consider the class of recursive $\omega$-languages, i.e. the class of $\omega$-languages accepted by Turing machines with a Büchi acceptance condition, which is also the class $\Sigma^1_1$ of (effective) analytic subsets of $X^\omega$ for some finite alphabet $X$. We investigate here the notion of ambiguity for recursive $\omega$-languages with regard to acceptance by Büchi Turing machines. We first present in detail the essentials of the literature on $\omega$-languages accepted by Turing machines. Then we give a complete and broad view of the notion of ambiguity and unambiguity of Büchi Turing machines and of the $\omega$-languages they accept. To obtain our new results,
we make use of results and methods of effective descriptive set theory.
By the Riesz representation theorem using the Riemann-Stieltjes integral,
linear continuous functionals on the set of continuous functions from the unit
interval into the reals can either be characterized by functions of bounded
variation from the unit interval into the reals, or by signed measures on the
Borel subsets. Each of these objects has an (even minimal) Jordan decomposition
into non-negative or non-decreasing objects. Using the representation approach
to computable analysis, a computable version of the Riesz representation
theorem has been proved by Jafarikhah, Lu and Weihrauch. In this article we
extend this result. We study the computable relation between three Banach
spaces, the space of linear continuous functionals with operator norm, the
space of (normalized) functions of bounded variation with total variation norm,
and the space of bounded signed Borel measures with variation norm. We
introduce natural representations for defining computability. We prove that the
canonical linear bijections between these spaces and their inverses are
computable. We also prove that Jordan decomposition is computable on each of
these spaces.
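The classical statement behind this (a sketch for orientation, with notation assumed): every linear continuous functional $F$ on $C[0,1]$ has the form
$$F(f) = \int_0^1 f \, dg = \int_0^1 f \, d\mu,$$
for some function $g$ of bounded variation (Riemann-Stieltjes integral) or, equivalently, some signed Borel measure $\mu$; the Jordan decompositions $g = g^+ - g^-$ and $\mu = \mu^+ - \mu^-$ split these into non-decreasing functions and non-negative measures, respectively, and the results above say that these correspondences and decompositions are computable with respect to suitable representations.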
Abbott, Altenkirch, Ghani and others have taught us that many parameterized
datatypes (set functors) can be usefully analyzed via container representations
in terms of a set of shapes and a set of positions in each shape. This paper
builds on the observation that datatypes often carry additional structure that
containers alone do not account for. We introduce directed containers to
capture the common situation where every position in a data-structure
determines another data-structure, informally, the sub-data-structure rooted by
that position. Some natural examples are non-empty lists and node-labelled
trees, and data-structures with a designated position (zippers). While
containers denote set functors via a fully-faithful functor, directed
containers interpret fully-faithfully into comonads. But more is true: every
comonad whose underlying functor is a container is represented by a directed
container. In fact, directed containers are the same as containers that are
comonads. We also describe some constructions of directed containers. We have
formalized our development in the dependently typed programming language Agda.
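Informally (a sketch of the ingredients, with notation assumed rather than quoted): a container consists of a set of shapes $S$ and a set of positions $P\,s$ for each shape $s$, and it denotes the set functor
$$\llbracket S \lhd P \rrbracket\, X \;=\; \Sigma_{s \in S}\, (P\,s \to X).$$
A directed container additionally assigns to every position $p \in P\,s$ a subshape $s \downarrow p$, gives a root position in every shape, and allows positions in a subshape to be translated back into positions of the original shape, subject to laws mirroring the comonad laws.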
We investigate the truth-table degrees of (co-)c.e.\ sets, in particular,
sets of random strings. It is known that the set of random strings with respect
to any universal prefix-free machine is Turing complete, but that truth-table
completeness depends on the choice of universal machine. We show that, for such sets of random strings, the meet of any finite set of their truth-table degrees is not the degree 0, even within the c.e. truth-table degrees, but the meet over all such truth-table degrees is indeed 0. The
latter result proves a conjecture of Allender, Friedman and Gasarch. We also
show that there are two Turing complete c.e. sets whose truth-table degrees
form a minimal pair.
To provide a categorical semantics for co-intuitionistic logic one has to
face the fact, noted by Tristan Crolard, that the definition of co-exponents as
adjuncts of coproducts does not work in the category Set, where coproducts are
disjoint unions. Following the familiar construction of models of
intuitionistic linear logic with the exponential "!", we build models of
co-intuitionistic logic in symmetric monoidal left-closed categories with
additional structure, using a variant of Crolard's term assignment to
co-intuitionistic logic in the construction of a free category.
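The obstruction can be made explicit (a sketch in one common notation; the paper's conventions may differ): a co-exponent $B_A$ would be characterised by a natural bijection
$$\mathrm{Hom}(B_A, C) \;\cong\; \mathrm{Hom}(B, A + C),$$
dual to the exponential adjunction $\mathrm{Hom}(C \times A, B) \cong \mathrm{Hom}(C, B^A)$. Taking $C = \emptyset$ in Set forces $\mathrm{Hom}(B, A)$ to be empty whenever $B_A$ is non-empty, so co-exponents exist there only in degenerate cases.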
Markov automata (MAs) extend labelled transition systems with random delays
and probabilistic branching. Action-labelled transitions are instantaneous and
yield a distribution over states, whereas timed transitions impose a random
delay governed by an exponential distribution. MAs are thus a nondeterministic
variation of continuous-time Markov chains. MAs are compositional and are used
to provide a semantics for engineering frameworks such as (dynamic) fault
trees, (generalised) stochastic Petri nets, and the Architecture Analysis &
Design Language (AADL). This paper considers the quantitative analysis of MAs.
We consider three objectives: expected time, long-run average, and timed
(interval) reachability. Expected time objectives focus on determining the
minimal (or maximal) expected time to reach a set of states. Long-run
objectives determine the fraction of time spent in a set of states over an infinite time horizon. Timed reachability objectives concern computing the probability of reaching a set of states within a given time
interval. This paper presents the foundations and details of the algorithms and
their correctness proofs. We report on several case studies conducted using a
prototypical tool implementation of the algorithms, driven by the MAPA
modelling language for efficiently generating MAs.
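For the expected-time objective, the underlying characterisation is of Bellman type (a sketch with assumed notation, not quoted from the paper): the minimal expected time to reach a goal set $G$ is the least fixed point of an operator $L$ on value vectors $v$,
$$(Lv)(s) \;=\; \begin{cases} 0 & \text{if } s \in G,\\[2pt] \dfrac{1}{E(s)} + \sum_{s'} \mathbf{P}(s,s')\, v(s') & \text{if } s \notin G \text{ is Markovian with exit rate } E(s),\\[2pt] \min_{\alpha} \sum_{s'} \mu_\alpha(s')\, v(s') & \text{if } s \notin G \text{ is probabilistic with actions } \alpha, \end{cases}$$
which can then be solved by standard means such as value iteration or linear programming.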
In this paper we revise and simplify the notion of observationally induced
algebra introduced by Simpson and Schroeder for the purpose of modelling
computational effects in the particular case where the ambient category is
given by classical domain theory. As examples of the general framework we
consider the various powerdomains. For the particular case of the Plotkin
powerdomain the general recipe leads to a somewhat unexpected result which,
however, makes sense from a computer science perspective. We analyze this "deviation" and show how to reobtain the original Plotkin powerdomain by imposing further conditions previously considered by R. Heckmann and J. Goubault-Larrecq.
An algebra is called corecursive if from every coalgebra a unique
coalgebra-to-algebra homomorphism exists into it. We prove that free
corecursive algebras are obtained as coproducts of the terminal coalgebra
(considered as an algebra) and free algebras. The monad of free corecursive
algebras is proved to be the free corecursive monad, where the concept of
corecursive monad is a generalization of Elgot's iterative monads, analogous to
corecursive algebras generalizing completely iterative algebras. We also
characterize the Eilenberg-Moore algebras for the free corecursive monad and
call them Bloom algebras.
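In symbols (a sketch, notation assumed): for an endofunctor $F$, an algebra $\alpha : FA \to A$ is corecursive when for every coalgebra $e : X \to FX$ there is a unique $h : X \to A$ with
$$h = \alpha \circ Fh \circ e.$$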
In this paper we study the behaviour at infinity of the Fourier transform of
Radon measures supported by the images of fractal sets under an algorithmically
random Brownian motion. We show that, under some computability conditions on
these sets, the Fourier transforms of the associated measures have, relative to
the Hausdorff dimensions of these sets, optimal asymptotic decay at infinity.
The argument relies heavily on a direct characterisation, due to Asarin and
Pokrovskii, of algorithmically random Brownian motion in terms of the prefix-free Kolmogorov complexity of finite binary sequences. The study also
necessitates a closer look at the potential theory over fractals from a
computable point of view.
We present a complete polymorphic effect inference algorithm for an ML-style
language with handlers not only of exceptions, but of any other algebraic
effect such as input & output, mutable references and many others. Our main aim
is to offer the programmer a useful insight into the effectful behaviour of
programs. Handlers help here by cutting down possible effects and the resulting
lengthy output that often plagues precise effect systems. Additionally, we
present a set of methods that further simplify the displayed types, some even
by deliberately hiding inferred information from the programmer.
Polynomial interpretations are a useful technique for proving termination of
term rewrite systems. They come in various flavors: polynomial interpretations
with real, rational and integer coefficients. As to their relative termination proving power, Lucas proved in 2006 that there
are rewrite systems that can be shown polynomially terminating by polynomial
interpretations with real (algebraic) coefficients, but cannot be shown
polynomially terminating using polynomials with rational coefficients only. He
also proved the corresponding statement regarding the use of rational
coefficients versus integer coefficients. In this article we extend these
results, thereby giving the full picture of the relationship between the
aforementioned variants of polynomial interpretations. In particular, we show
that polynomial interpretations with real or rational coefficients do not
subsume polynomial interpretations with integer coefficients. Our results hold
also for incremental termination proofs with polynomial interpretations.
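For orientation, a standard textbook example of the technique, not taken from the paper: the rewrite system $\mathsf{add}(0,y) \to y$, $\mathsf{add}(\mathsf{s}(x),y) \to \mathsf{s}(\mathsf{add}(x,y))$ is polynomially terminating via the interpretation over the naturals
$$[0] = 1, \quad [\mathsf{s}](x) = x + 1, \quad [\mathsf{add}](x,y) = 2x + y,$$
since $[\mathsf{add}(0,y)] = y + 2 > y$ and $[\mathsf{add}(\mathsf{s}(x),y)] = 2x + y + 2 > 2x + y + 1 = [\mathsf{s}(\mathsf{add}(x,y))]$ for all values of the variables.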
In formal proof checking environments such as Mizar it is not merely the
validity of mathematical formulas that is evaluated in the process of adoption
into the body of accepted formalizations, but also the readability of the proofs that witness validity. As in the case of computer programs, such proof scripts may be more or less readable. To better understand the
notion of readability of formal proofs, and to assess and improve their
readability, we propose in this paper a method of improving proof readability
based on Behaghel's First Law of sentence structure. Our method maximizes the
number of local references to the directly preceding statement in a proof
linearisation. It is shown that the underlying optimization problem is NP-complete.
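As a toy illustration of the optimisation (a sketch only; the proof representation and all names below are hypothetical, and the brute force is exponential precisely because the problem is NP-complete): given a proof as a DAG in which each statement refers to earlier statements, choose a linearisation maximising references to the directly preceding statement.

    from itertools import permutations

    def local_references(order, refs):
        """Count references that point to the directly preceding statement."""
        pos = {stmt: i for i, stmt in enumerate(order)}
        return sum(1 for s in order for r in refs[s] if pos[r] == pos[s] - 1)

    def is_topological(order, refs):
        """Each statement must come after everything it refers to."""
        pos = {stmt: i for i, stmt in enumerate(order)}
        return all(pos[r] < pos[s] for s in order for r in refs[s])

    def best_linearisation(refs):
        """Brute force over all admissible linearisations."""
        return max((p for p in permutations(refs) if is_topological(p, refs)),
                   key=lambda p: local_references(p, refs))

    # Toy proof: each statement lists the statements it refers to.
    refs = {'a': [], 'b': [], 'c': ['a'], 'd': ['b', 'c']}
    print(best_linearisation(refs))
    # a linearisation with 2 references to the directly preceding statement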
A separator for two languages is a third language containing the first one
and disjoint from the second one. We investigate the following decision
problem: given two regular input languages, decide whether there exists a
locally testable (resp. a locally threshold testable) separator. In both cases,
we design a decision procedure based on the occurrence of special patterns in
automata accepting the input languages. We prove that the problem is
computationally harder than deciding membership. The correctness proof of the
algorithm yields a stronger result, namely a description of a possible
separator. Finally, we discuss the same problem for context-free input
languages.
In this paper we consider the problem of building rich categories of setoids,
in standard intensional Martin-Löf type theory (MLTT), and in particular how
to handle the problem of equality on objects in this context. Any
(proof-irrelevant) family F of setoids over a setoid A gives rise to a category
C(A, F) of setoids with objects A. We may regard the family F as a setoid of
setoids, and a crucial issue in this article is to construct rich or large
enough such families. Depending on closure conditions of F, the category C(A,
F) has corresponding categorical constructions. We exemplify this with finite
limits. A very large family F may be obtained from Aczel's model construction
of CZF in type theory. It is proved that the category so obtained is isomorphic
to the internal category of sets in this model. Set theory can thus establish
(categorical) properties of C(A, F) which may be used in type theory. We also
show that Aczel's model construction may be extended to include the elements of
any setoid as atoms or urelements. As a byproduct we obtain a natural extension
of CZF, adding atoms. This extension, CZFU, is validated by the extended model.
The main theorems of the paper have been checked in the proof assistant Coq, which is based on MLTT. A possible application of this development is to
integrate set-theoretic and type-theoretic reasoning in proof assistants.