Willem Conradie ; Salih Durhan ; Guido Sciavicco.
There are two natural and well-studied approaches to temporal ontology and
reasoning: point-based and interval-based. Usually, interval-based temporal
reasoning deals with points as a particular case of duration-less intervals. A
recent result by Balbiani, Goranko, and Sciavicco presented an explicit
two-sorted point-interval temporal framework in which time instants (points)
and time periods (intervals) are considered on a par, allowing the perspective
to shift between these within the formal discourse. We consider here two-sorted
first-order languages based on the same principle, and therefore including, as
first studied by Reich, among others, relations between points, between
intervals, and across sorts. We give complete classifications of their
sub-languages in terms of relative expressive power, thus determining how many,
and which, are the intrinsically different extensions of two-sorted first-order
logic with one or more such relations. This approach roots out the classical
problem of whether or not points should be included in an interval-based
semantics. In this Part II, we deal with the case of all dense linearly ordered
sets and the case of all unbounded linearly ordered sets.
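As a hedged illustration of the two-sorted setting (the encoding and the relation names below are ours, not the paper's formal language), points can be modelled as numbers and intervals as ordered pairs, with one sample relation of each kind: point-point, interval-interval, and inter-sort.

```python
# Illustrative sketch only: points are numbers, intervals are pairs (l, r)
# with l < r; relation names are ours, chosen for readability.

def before_pp(p, q):
    """Point-point: p strictly precedes q."""
    return p < q

def before_ii(i, j):
    """Interval-interval: i ends strictly before j begins."""
    return i[1] < j[0]

def begins_pi(p, i):
    """Inter-sort: point p is the left endpoint of interval i."""
    return p == i[0]

print(before_pp(1, 2), before_ii((0, 1), (2, 3)), begins_pi(2, (2, 3)))
# prints: True True True
```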
Section: Logic for knowledge representation
Merlin Carl ; Benjamin Rin ; Philipp Schlicht.
Infinite time Turing machine models with tape length $\alpha$, denoted
$T_\alpha$, strengthen the machines of Hamkins and Kidder [HL00] with tape
length $\omega$. A new phenomenon is that for some countable ordinals $\alpha$,
some cells cannot be halting positions of $T_\alpha$ given trivial input. The
main open question in [Rin14] asks about the size of the least such ordinal
$\delta$.
We answer this by providing various characterizations. For instance, $\delta$
is the least ordinal with any of the following properties: (a) For some
$\xi<\alpha$, there is a $T_\xi$-writable but not $T_\alpha$-writable subset of
$\omega$. (b) There is a gap in the $T_\alpha$-writable ordinals. (c) $\alpha$
is uncountable in $L_{\lambda_\alpha}$. Here $\lambda_\alpha$ denotes the
supremum of $T_\alpha$-writable ordinals, i.e. those with a $T_\alpha$-writable
code of length $\alpha$.
We further use the above characterizations, and an analogue to Welch's
submodel characterization of the ordinals $\lambda$, $\zeta$ and $\Sigma$, to
show that $\delta$ is large in the sense that it is a closure point of the
function $\alpha \mapsto \Sigma_\alpha$, where $\Sigma_\alpha$ denotes the
supremum of the $T_\alpha$-accidentally writable ordinals.
G. Michele Pinna.
Event structures in which causality may explicitly change during a computation
have recently gained attention. In this kind of event structure, the changes in
the set of causes of an event are triggered by modifiers that may add or remove
dependencies, thus making the happening of an event contextual. Still, the
focus is always on the dependencies of the event. In this
paper we promote the idea that the context determined by the modifiers plays a
major role, and the context itself determines not only the causes but also what
causality should be. Modifiers are then used to understand when an event (or a
set of events) can be added to a configuration, together with a set of events
modeling dependencies, which will play a less important role. We show that most
of the notions of Event Structure presented in the literature can be translated
into this new kind of event structure, preserving the main notion, namely the
one of configuration.
Amir M. Ben-Amram ; Geoff Hamilton.
In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple
programming language - representing non-deterministic imperative programs with
bounded loops, and arithmetics limited to addition and multiplication - it is
possible to decide precisely whether a program has certain growth-rate
properties, in particular whether a computed value, or the program's running
time, has a polynomial growth rate.
A natural and intriguing problem was to move from answering the decision
problem to giving a quantitative result, namely, a tight polynomial upper
bound. This paper shows how to obtain asymptotically-tight, multivariate,
disjunctive polynomial bounds for this class of programs. This is a complete
solution: whenever a polynomial bound exists it will be found.
A pleasant surprise is that the algorithm is quite simple; but it relies on
some subtle reasoning. An important ingredient in the proof is the forest
factorization theorem, a strong structural result on homomorphisms into a
finite monoid.
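As a hedged sketch of the analysed program class (a Python rendering of ours, not the paper's core language), consider a program built only from bounded loops, addition and multiplication; an analysis in the paper's spirit would report the tight polynomial bound n * n for the computed value.

```python
# Our illustrative rendering of a program in the analysed class:
# imperative core with bounded loops, and arithmetic limited to
# addition and multiplication.
def example(n):
    y = 0
    for _ in range(n):        # "loop n { ... }" in the core language
        for _ in range(n):
            y = y + 1         # only + and * are permitted
    return y

print(example(5))  # 25, matching the tight bound n * n
```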
Ludwig Staiger.
Using an iterative tree construction we show that for simple computable
subsets of the Cantor space, the Hausdorff, constructive, and computable
dimensions may be incomputable.
Karoliina Lehtinen ; Udi Boker.
The complexity of parity games is a long standing open problem that saw a
major breakthrough in 2017 when two quasi-polynomial algorithms were published.
This article presents a third, independent approach to solving parity games in
quasi-polynomial time, based on the notion of register game, a parameterised
variant of a parity game. The analysis of register games leads to a
quasi-polynomial algorithm for parity games, a polynomial algorithm for
restricted classes of parity games and a novel measure of complexity, the
register index, which aims to capture the combined complexity of the priority
assignment and the underlying game graph.
We further present a translation of alternating parity word automata into
alternating weak automata with only a quasi-polynomial increase in size, based
on register games; this improves on the previous exponential translation.
We also use register games to investigate the parity index hierarchy: while
for words the index hierarchy of alternating parity automata collapses to the
weak level, and for trees it is strict, for structures between trees and words,
it collapses logarithmically, in the sense that any parity tree automaton of
size n is equivalent, on these particular classes of structures, to an
automaton with a number of priorities logarithmic in n.
Ulrik Buchholtz ; Kuen-Bang Hou.
We present a development of cellular cohomology in homotopy type theory.
Cohomology associates to each space a sequence of abelian groups capturing part
of its structure, and has the advantage over homotopy groups that, for many
common spaces, these abelian groups are easier to compute. Cellular cohomology
is a special kind of cohomology designed for cell complexes: these are built in
stages by attaching spheres of progressively higher dimension, and cellular
cohomology defines the groups out of the combinatorial description of how
spheres are attached. Our main result is that for finite cell complexes, a wide
class of cohomology theories (including the ones defined through
Eilenberg-MacLane spaces) can be calculated via cellular cohomology. This
result was formalized in the Agda proof assistant.
Mirai Ikebuchi ; Keisuke Nakano.
$B$-terms are built from the $B$ combinator alone defined by $B\equiv\lambda
fgx. f(g~x)$, which is well known as a function composition operator. This
paper investigates an interesting property of $B$-terms, namely, whether
repeated right applications of a $B$-term cycle or not. We discuss
conditions for $B$-terms to have and not to have the property through a sound
and complete equational axiomatization. Specifically, we give examples of
$B$-terms which have the cyclic property and show that there are infinitely
many $B$-terms which do not have the property. We also introduce another
interesting property concerning a canonical representation of $B$-terms that
is useful for detecting cycles, or equivalently, for proving the cyclic
property, with an efficient algorithm.
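As a hedged illustration (the Python encoding is ours), the $B$ combinator is curried function composition: $B\,f\,g\,x = f\,(g\,x)$.

```python
# B f g x = f (g x): the composition combinator, written as a curried
# Python function.
B = lambda f: lambda g: lambda x: f(g(x))

inc = lambda n: n + 1
dbl = lambda n: 2 * n

print(B(inc)(dbl)(3))  # inc(dbl(3)) = 7
```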
Davide Basile ; Maurice H. ter Beek ; Rosario Pugliese.
We present a number of contributions to bridging the gap between supervisory
control theory and coordination of services in order to explore the frontiers
between coordination and control systems. Firstly, we modify the classical
synthesis algorithm from supervisory control theory for obtaining the so-called
most permissive controller in order to synthesise orchestrations and
choreographies of service contracts formalised as contract automata. The key
ingredient to make this possible is a novel notion of controllability. Then, we
present an abstract parametric synthesis algorithm and show that it generalises
the classical synthesis as well as the orchestration and choreography
syntheses. Finally, through the novel abstract synthesis, we show that the
concrete syntheses are in a refinement order. A running example from the
service domain illustrates our contributions.
Nathanaël Fijalkow ; Stefan Kiefer ; Mahsa Shirmohammadi.
Given two labelled Markov decision processes (MDPs), the trace-refinement
problem asks whether for all strategies of the first MDP there exists a
strategy of the second MDP such that the induced labelled Markov chains are
trace-equivalent. We show that this problem is decidable in polynomial time if
the second MDP is a Markov chain. The algorithm is based on new results on a
particular notion of bisimulation between distributions over the states.
However, we show that the general trace-refinement problem is undecidable, even
if the first MDP is a Markov chain. Decidability of those problems was stated
as open in 2008. We further study the decidability and complexity of the
trace-refinement problem provided that the strategies are restricted to be
memoryless.
Emmanuel Jeandel ; Simon Perdrix ; Renaud Vilmart.
The ZX-Calculus is a graphical language for diagrammatic reasoning in quantum
mechanics and quantum information theory. It comes equipped with an equational
presentation. We focus here on a very important property of the language:
completeness, which roughly ensures that the equational theory captures all of
quantum mechanics. We first improve on the known-to-be-complete presentation
for the so-called Clifford fragment of the language - a restriction that is not
universal - by adding some axioms. Thanks to a system of back-and-forth
translation between the ZX-Calculus and a third-party complete graphical
language, we prove that the provided axiomatisation is complete for the first
approximately universal fragment of the language, namely Clifford+T.
We then prove that the expressive power of this presentation, though aimed at
achieving completeness for the aforementioned restriction, extends beyond
Clifford+T, to a class of diagrams that we call linear with Clifford+T
constants. We use another version of the third-party language - and an adapted
system of back-and-forth translation - to complete the language for the
ZX-Calculus as a whole, that is, with no restriction. We briefly discuss the
added axioms, and finally, we provide a complete axiomatisation for an altered
version of the language which involves an additional generator, making the
presentation simpler.
Jan Bydzovsky ; Jan Krajicek ; Igor C. Oliveira.
Proving that there are problems in $\mathsf{P}^\mathsf{NP}$ that require
boolean circuits of super-linear size is a major frontier in complexity theory.
While such lower bounds are known for larger complexity classes, existing
results only show that the corresponding problems are hard on infinitely many
input lengths. For instance, proving almost-everywhere circuit lower bounds is
open even for problems in $\mathsf{MAEXP}$. Given the notorious difficulty of
proving lower bounds that hold for all large input lengths, we ask the
following question: Can we show that a large set of techniques cannot prove
that $\mathsf{NP}$ is easy infinitely often? Motivated by this and related
questions about the interaction between mathematical proofs and computations,
we investigate circuit complexity from the perspective of logic.
Among other results, we prove that for any parameter $k \geq 1$ it is
consistent with the theory $T$ that the computational class ${\mathcal C} \not
\subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$, where $(T, \mathcal{C})$ is one of
the pairs: $T = \mathsf{T}^1_2$ and ${\mathcal C} = \mathsf{P}^\mathsf{NP}$, $T
= \mathsf{S}^1_2$ and ${\mathcal C} = \mathsf{NP}$, $T = \mathsf{PV}$ and
${\mathcal C} = \mathsf{P}$. In other words, these theories cannot establish
infinitely often circuit upper bounds for the corresponding problems. This is
of interest because the weaker theory $\mathsf{PV}$ already formalizes
sophisticated arguments, such as a proof of the PCP Theorem. These […]
Michael Blondin ; Alain Finkel ; Jean Goubault-Larrecq.
This paper is a sequel of "Forward Analysis for WSTS, Part I: Completions"
[STACS 2009, LZI Intl. Proc. in Informatics 3, 433-444] and "Forward Analysis
for WSTS, Part II: Complete WSTS" [Logical Methods in Computer Science 8(3),
2012]. In these two papers, we provided a framework to conduct forward
reachability analyses of WSTS, using finite representations of downward-closed
sets. We further develop this framework to obtain a generic Karp-Miller
algorithm for the new class of very-WSTS. This allows us to show that
coverability sets of very-WSTS can be computed as their finite ideal
decompositions. Under natural effectiveness assumptions, we also show that LTL
model checking for very-WSTS is decidable. The termination of our procedure
rests on a new notion of acceleration levels, which we study. We characterize
those domains that allow for only finitely many accelerations, based on ordinal
ranks.
Andreas Abel ; Thierry Coquand.
Normalization fails in type theory with an impredicative universe of
propositions and a proof-irrelevant propositional equality. The counterexample
to normalization is adapted from Girard's counterexample against normalization
of System F equipped with a decider for type equality. It refutes Werner's
normalization conjecture [LMCS 2008].