We introduce a new domain for finding precise numerical invariants of
programs by abstract interpretation. This domain, which consists of level sets
of non-linear functions, generalizes the domain of linear "templates"
introduced by Manna, Sankaranarayanan, and Sipma. In the case of quadratic
templates, we use Shor's semi-definite relaxation to derive computable yet
precise abstractions of semantic functionals, and we show that the abstract
fixpoint equation can be solved accurately by coupling policy iteration and
semi-definite programming. We demonstrate the effectiveness of our approach on a
series of examples (filters, integration schemes) including a degenerate one
(symplectic scheme).
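To make the shape of this domain concrete, here is a minimal sketch in our own notation (not necessarily the paper's): fixing a quadratic template p, a set of states is abstracted by the tightest bound on p over that set,

    p(x) = x^\top P x, \qquad \gamma(b) = \{\, x \in \mathbb{R}^n \mid p(x) \le b \,\},

so that solving the abstract fixpoint equation amounts to finding the least bound b with F^\sharp(b) \le b, where each evaluation of the relaxed functional F^\sharp is a semi-definite program and policy iteration drives the succession of such programs.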
This paper describes a formalization of discrete real closed fields in the
Coq proof assistant. This abstract structure captures for instance the theory
of real algebraic numbers, a decidable subset of real numbers with good
algorithmic properties. The theory of real algebraic numbers and more generally
of semi-algebraic varieties is at the core of a number of effective methods in
real analysis, including decision procedures for non-linear arithmetic or
optimization methods for real valued functions. After defining an abstract
structure of discrete real closed field and the elementary theory of real roots
of polynomials, we describe the formalization of an algebraic proof of
quantifier elimination based on pseudo-remainder sequences following the
standard computer algebra literature on the topic. This formalization covers a
large part of the theory which underlies the efficient algorithms implemented
in practice in computer algebra. The success of this work paves the way for
formal certification of these efficient methods.
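For orientation, we recall a standard axiomatization of the abstract structure (the Coq formalization may phrase it differently): a real closed field is an ordered field in which every non-negative element is a square and every odd-degree polynomial has a root,

    \forall x\,(x \ge 0 \to \exists y\; y^2 = x), \qquad
    \forall a_0 \dots a_{2k}\,\exists x\;\big(x^{2k+1} + a_{2k}x^{2k} + \dots + a_0 = 0\big),

and discreteness asks in addition that equality, and hence the order, be decidable.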
This paper defines a sound and complete semantic criterion, based on
reducibility candidates, for strong normalization of theories expressed in
minimal deduction modulo à la Curry. The use of Curry-style proof-terms
allows us to build this criterion on the classic notion of pre-Heyting algebras
and makes the criterion apply to all theories expressed in minimal deduction
modulo. Compared to using Church-style proof-terms, this method provides both a
simpler definition of the criterion and a simpler proof of its completeness.
The Parameterised Model Checking Problem asks whether an implementation
Impl(t) satisfies a specification Spec(t) for all instantiations of parameter
t. In general, t can determine numerous entities: the number of processes used
in a network, the type of data, the capacities of buffers, etc. The main theme
of this paper is the automation of uniform verification for a subclass of the PMCP
where the parameter is of the first kind, i.e. the number of processes in the network.
We use CSP as our formalism. We present a type reduction theory, which, for a
given verification problem, establishes a function \phi that maps all
(sufficiently large) instantiations T of the parameter to some fixed type T^
and allows us to deduce that if Spec(T^) is refined by \phi(Impl(T)), then
(subject to certain assumptions) Spec(T) is refined by Impl(T). The theory can
be used in practice by combining it with a suitable abstraction method that
produces a t-independent process Abstr that is refined by \phi(Impl(T)) for
all sufficiently large T. Then, by testing (with a model checker) if the
abstract model Abstr refines Spec(T^), we can deduce a positive answer to the
original uniform verification problem. The type reduction theory relies on
symbolic representation of process behaviour. We develop a symbolic operational
semantics for CSP processes that satisfy certain normality requirements, and we
provide a set of translation rules that allow us to concretise symbolic
transition graphs. Based on this, we prove […]
We define a new kind of automata recognizing properties of data words or data
trees and prove that the automata capture all queries definable in Regular
XPath. We show that the automata-theoretic approach may be applied to answer
decidability and expressibility questions for XPath.
Theory interpolation has found several successful applications in model
checking. We present a novel method for computing interpolants for ground
formulas in the theory of equality. The method produces interpolants from
colored congruence graphs representing derivations in that theory. These graphs
can be produced by conventional congruence closure algorithms in a
straightforward manner. By working with graphs, rather than at the level of
individual proof steps, we are able to derive interpolants that are pleasingly
simple (conjunctions of Horn clauses) and smaller than those generated by other
tools. Our interpolation method can be seen as a theory-specific implementation
of a cooperative interpolation game between two provers. We present a generic
version of the interpolation game, parametrized by the theory T, and define a
general method to extract runs of the game from proofs in T and then generate
interpolants from these runs.
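A small worked instance (our example, not taken from the paper) shows the flavor of these interpolants. Take

    A \equiv (x = y \wedge f(x) = z), \qquad B \equiv (y = w \wedge f(w) \neq z).

Then A \wedge B is unsatisfiable, and I \equiv (f(y) = z) is an interpolant: A entails I, I is inconsistent with B, and I mentions only the symbols f, y, z shared by the two parts. Here I is a single atom, the simplest case of a conjunction of Horn clauses.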
The Algebraic Dichotomy Conjecture states that the Constraint Satisfaction
Problem over a fixed template is solvable in polynomial time if the algebra of
polymorphisms associated to the template lies in a Taylor variety, and is
NP-complete otherwise. This paper provides two new characterizations of
finitely generated Taylor varieties. The first characterization is using
absorbing subalgebras and the second one cyclic terms. These new conditions
allow us to reprove the conjecture of Bang-Jensen and Hell (proved by the
authors) and the characterization of locally finite Taylor varieties using weak
near-unanimity terms (proved by McKenzie and Maróti) in an elementary and
self-contained way.
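For orientation, a term t of arity n > 1 is cyclic if it satisfies the identity

    t(x_1, x_2, \dots, x_n) \approx t(x_2, \dots, x_n, x_1);

roughly stated, the second characterization says that a finite idempotent algebra lies in a Taylor variety if and only if it has a cyclic term of arity p for every prime p larger than the size of the algebra.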
Nominal abstract syntax is an approach to representing names and binding
pioneered by Gabbay and Pitts. So far nominal techniques have mostly been
studied using classical logic or model theory, not type theory. Nominal
extensions to simple, dependent and ML-like polymorphic languages have been
studied, but decidability and normalization results have only been established
for simple nominal type theories. We present an LF-style dependent type theory
extended with name-abstraction types, prove soundness and decidability of
beta-eta-equivalence checking, discuss adequacy and canonical forms via an
example, and discuss extensions such as dependently-typed recursion and
induction principles.
Analysis of execution traces plays a fundamental role in many program
analysis approaches, such as runtime verification, testing, monitoring, and
specification mining. Execution traces are frequently parametric, i.e., they
contain events with parameter bindings. Each parametric trace usually consists
of many meaningful trace slices merged together, each slice corresponding to
one parameter binding. This paper gives a semantics-based solution to parametric
trace analysis. A general-purpose parametric trace slicing technique is
introduced, which takes each event in the parametric trace and dispatches it to
its corresponding trace slices. This parametric trace slicing technique can be
used in combination with any conventional, non-parametric trace analysis
technique, by applying the latter to each trace slice. As an instance, a
parametric property monitoring technique is then presented. The presented
parametric trace slicing and monitoring techniques have been implemented and
extensively evaluated. Measurements of runtime overhead confirm that the
generality of the discussed techniques does not come at a performance expense
when compared with existing parametric trace monitoring systems.
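As a deliberately naive illustration of the dispatch idea (our simplification in Python; the implemented algorithm is far more carefully engineered), a slice for a binding theta collects exactly the events whose bindings theta subsumes:

    def compatible(b1, b2):
        """Two partial parameter bindings agree on their shared parameters."""
        return all(b1[k] == b2[k] for k in b1.keys() & b2.keys())

    def subsumes(theta, b):
        """theta is at least as informative as the event binding b."""
        return all(theta.get(k) == v for k, v in b.items())

    def slice_all(events):
        """events: list of (name, binding) pairs, binding a dict.
        Returns a map from parameter bindings to their trace slices."""
        # Candidate bindings: close event bindings under compatible join.
        bindings = [dict()]
        for _, b in events:
            for theta in list(bindings):
                if compatible(theta, b) and {**theta, **b} not in bindings:
                    bindings.append({**theta, **b})
        # Dispatch each event to every binding subsuming its own binding.
        return {tuple(sorted(th.items())):
                    [n for n, b in events if subsumes(th, b)]
                for th in bindings}

For instance, two events carrying the bindings {t: 1} and {t: 1, l: a} both land in the slice for the combined binding {t: 1, l: a}.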
Using the proof-program (Curry-Howard) correspondence, we give a new method
to obtain models of ZF and relative consistency results in set theory. We show
the relative consistency of ZF + DC + there exists a sequence of subsets of R
whose cardinalities are strictly decreasing + other similar properties of R.
These results seem not to have been previously obtained by forcing.
The Algebraic lambda-calculus and the Linear-Algebraic lambda-calculus extend
the lambda-calculus with the possibility of making arbitrary linear
combinations of terms. In this paper we provide a fine-grained, System F-like
type system for the linear-algebraic lambda-calculus. We show that this
"scalar" type system enjoys both the subject-reduction property and the
strong-normalisation property, our main technical results. The latter yields a
significant simplification of the linear-algebraic lambda-calculus itself, by
removing the need for some restrictions in its reduction rules. But the more
important, original feature of this scalar type system is that it keeps track
of 'the amount of a type' that is present in each term. As an example of its
use, we show that it can serve as a guarantee that the normal form of a term
is barycentric, i.e. that its scalars sum to one.
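Roughly, and in our notation rather than the paper's: a typing judgement of the form t : \alpha\,U records that t reduces to a linear combination \sum_i \alpha_i b_i of basis terms with \sum_i \alpha_i = \alpha, so establishing \alpha = 1 certifies that the normal form is barycentric.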
We propose a novel, type-elimination-based method for reasoning in the
description logic SHIQbs including DL-safe rules. To this end, we first
establish a knowledge compilation method converting the terminological part of
an ALCIb knowledge base into an ordered binary decision diagram (OBDD) which
represents a canonical model. This OBDD can in turn be transformed into
disjunctive Datalog and merged with the assertional part of the knowledge base
in order to perform combined reasoning. To leverage our technique for
full SHIQbs, we provide a stepwise reduction from SHIQbs to ALCIb that
preserves satisfiability and entailment of positive and negative ground facts.
The proposed technique is shown to be worst-case optimal w.r.t. combined and
data complexity and easily admits extensions with ground conjunctive queries.
We compare tools for complementing nondeterministic Büchi automata with a
recent termination-analysis algorithm. Complementation of Büchi automata is a
key step in program verification. Early constructions using a Ramsey-based
argument have been supplanted by rank-based constructions with exponentially
better bounds. In 2001 Lee et al. presented the size-change termination (SCT)
problem, along with both a reduction to Büchi automata and a Ramsey-based
algorithm. The Ramsey-based algorithm was presented as a more practical
alternative to the automata-theoretic approach, but strongly resembles the
initial complementation constructions for Büchi automata. We prove that the
SCT algorithm is a specialized realization of the Ramsey-based complementation
construction. To do so, we extend the Ramsey-based complementation construction
to provide a containment-testing algorithm. Surprisingly, empirical analysis
suggests that despite the massive gap in worst-case complexity, Ramsey-based
approaches are superior on the domain of SCT problems. Upon further analysis
we discover an interesting property of the problem space that both explains
this result and provides a chance to improve rank-based tools. With these
improvements, we show that theoretical gains in efficiency of the rank-based
approach are mirrored in empirical performance.
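To fix ideas, here is a compact, unoptimized rendering of the Ramsey/closure-based SCT test of Lee et al.; the encoding is ours:

    def normalize(edges):
        """Prefer a strict edge over a non-strict one between the same nodes."""
        strict = {(x, y) for (x, y, s) in edges if s}
        return frozenset((x, y, s) for (x, y, s) in edges
                         if s or (x, y) not in strict)

    def compose(g1, g2):
        """Compose size-change graphs: edge (x, z) exists if some path
        x -> y -> z exists; it is strict if either step is."""
        return normalize({(x, z, s1 or s2)
                          for (x, y1, s1) in g1
                          for (y2, z, s2) in g2 if y1 == y2})

    def sct_terminates(calls):
        """calls: dict (f, g) -> iterable of size-change graphs, each a
        frozenset of (param_of_f, param_of_g, strict) edges.
        Returns True iff the program passes the SCT criterion."""
        closure = {(f, g, normalize(G))
                   for (f, g), gs in calls.items() for G in gs}
        changed = True
        while changed:  # saturate under composition
            changed = False
            for (f, g1, G1) in list(closure):
                for (g2, h, G2) in list(closure):
                    if g1 == g2:
                        item = (f, h, compose(G1, G2))
                        if item not in closure:
                            closure.add(item)
                            changed = True
        # Every idempotent self-loop must strictly decrease some parameter.
        return all(any(x == y and s for (x, y, s) in G)
                   for (f, g, G) in closure
                   if f == g and compose(G, G) == G)

The saturation loop is exactly what resembles the Ramsey-based complementation construction: both enumerate compositions of behaviour graphs and inspect the idempotent ones.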
Is there any Cartesian-closed category of continuous domains that would be
closed under Jones and Plotkin's probabilistic powerdomain construction? This
is a major open problem in the area of denotational semantics of probabilistic
higher-order languages. We relax the question, and look for quasi-continuous
dcpos instead. We introduce a natural class of such quasi-continuous dcpos, the
omega-QRB-domains. We show that they form a category omega-QRB with pleasing
properties: omega-QRB is closed under the probabilistic powerdomain functor,
under finite products, under taking bilimits of expanding sequences, under
retracts, and even under so-called quasi-retracts. But... omega-QRB is not
Cartesian closed. We conclude by showing that the QRB domains are just one half
of an FS-domain, merely lacking control.
It is shown that the finite satisfiability problem for two-variable logic
over structures with one total preorder relation, its induced successor
relation, one linear order relation and some further unary relations is
EXPSPACE-complete. Actually, EXPSPACE-completeness already holds for structures
that do not include the induced successor relation. As a special case, the
EXPSPACE upper bound applies to two-variable logic over structures with two
linear orders. A further consequence is that satisfiability of two-variable
logic over data words with a linear order on positions and a linear order and
successor relation on the data is decidable in EXPSPACE. As a complementary
result, it is shown that over structures with two total preorder relations as
well as over structures with one total preorder and two linear order relations,
the finite satisfiability problem for two-variable logic is undecidable.
We present a reflexive tactic for deciding the equational theory of Kleene
algebras in the Coq proof assistant. This tactic relies on a careful
implementation of efficient finite automata algorithms, so that it solves
casual equations instantaneously and properly scales to larger expressions. The
decision procedure is proved correct and complete: correctness is established
w.r.t. any model by formalising Kozen's initiality theorem; a counter-example
is returned when the given equation does not hold. The correctness proof is
challenging: it involves both a precise analysis of the underlying automata
algorithms and a lot of algebraic reasoning. In particular, we have to
formalise the theory of matrices over a Kleene algebra. We build on the recent
addition of first-class typeclasses in Coq in order to work efficiently with the
involved algebraic structures.
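As an indication of scope, such a tactic settles standard laws like the denesting identity

    (a + b)^* = a^* (b\,a^*)^*,

valid in every Kleene algebra, as readily as it does trivial ones.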
We present a calculus that models a form of process interaction based on
copyless message passing, in the style of Singularity OS. The calculus is
equipped with a type system ensuring that well-typed processes are free from
memory faults, memory leaks, and communication errors. The type system is
essentially linear, but we show that linearity alone is inadequate, because it
leaves room for scenarios where well-typed processes leak significant amounts
of memory. We address these problems by basing the type system on an original
variant of session types.
The paper describes the refinement algorithm for the Calculus of
(Co)Inductive Constructions (CIC) implemented in the interactive theorem prover
Matita. The refinement algorithm is in charge of giving a meaning to the terms,
types and proof terms directly written by the user or generated by using
tactics, decision procedures or general automation. The terms are written in an
"external syntax" meant to be user-friendly, which allows omission of
information, untyped binders and a certain liberal use of user-defined
sub-typing. The refiner modifies the terms to obtain related well-typed terms
in the internal syntax understood by the kernel of the ITP. In particular, it
acts as a type inference algorithm when all the binders are untyped. The
proposed algorithm is bi-directional: given a term in external syntax and a
type expected for the term, it propagates as much typing information as
possible towards the leaves of the term. Traditional mono-directional
algorithms, instead, proceed in a bottom-up way by inferring the type of a
sub-term and comparing (unifying) it with the type expected by its context only
at the end. We propose some novel bi-directional rules for CIC that are
particularly effective. Among the benefits of bi-directionality we have better
error message reporting and better inference of dependent types. Moreover,
thanks to bi-directionality, the coercion system for sub-typing is more
effective and type inference generates simpler unification problems that […]
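A representative bidirectional rule, in standard notation that we adopt for illustration: when a lambda-abstraction is checked against an expected product type, the expected type is pushed into the body instead of being inferred from it,

    \Gamma, x : A \vdash t \Leftarrow B \quad\Longrightarrow\quad \Gamma \vdash \lambda x.\,t \Leftarrow \Pi x\!:\!A.\,B,

which is why the binder x may be left untyped in the external syntax.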
Let T be Gödel's system of primitive recursive functionals of finite type in
the lambda formulation. We define, by constructive means using recursion on
nested multisets, a multivalued function I from the set of terms of T into the
set of natural numbers such that if a term a reduces to a term b and if a
natural number I(a) is assigned to a then a natural number I(b) can be assigned
to b such that I(a) is greater than I(b). The construction of I is based on
Howard's 1970 ordinal assignment for T and Weiermann's 1996 treatment of T in
the combinatory logic version. As a corollary we obtain an optimal derivation
length classification for the lambda formulation of T and its fragments.
Compared with Weiermann's 1996 exposition, this article yields solutions to
several non-trivial problems arising from dealing with lambda terms instead of
combinatory logic terms. It is expected that the methods developed here can be
applied to other higher order rewrite systems resulting in new powerful
termination orderings since T is a paradigm for such systems.
Gödel logic with the projection operator Delta (G_Delta) is an important
many-valued as well as intermediate logic. In contrast to classical logic, the
validity and the satisfiability problems of G_Delta are not directly dual to
each other. We nevertheless provide a uniform, computational treatment of both
problems for prenex formulas by describing appropriate translations into sets
of order clauses that can be subjected to chaining resolution. For validity a
version of Herbrand's Theorem allows us to show the soundness of standard
Skolemization. For satisfiability the translation involves a novel, extended
Skolemization method.
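Concretely, over the truth values [0, 1], the connectives at stake are interpreted by

    v(A \to B) = 1 \text{ if } v(A) \le v(B), \text{ else } v(B); \qquad
    v(\Delta A) = 1 \text{ if } v(A) = 1, \text{ else } 0,

so Delta projects onto full truth; since negation is not involutive in Gödel logic, validity of a formula cannot simply be reduced to unsatisfiability of its negation, which is why the two problems need separate treatments.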
Logics for security protocol analysis require the formalization of an
adversary model that specifies the capabilities of adversaries. A common model
is the Dolev-Yao model, which considers only adversaries that can compose and
replay messages, and decipher them with known keys. The Dolev-Yao model is a
useful abstraction, but it suffers from some drawbacks: it cannot handle the
adversary knowing protocol-specific information, and it cannot handle
probabilistic notions, such as the adversary attempting to guess the keys. We
show how we can analyze security protocols under different adversary models by
using a logic with a notion of algorithmic knowledge. Roughly speaking,
adversaries are assumed to use algorithms to compute their knowledge; adversary
capabilities are captured by suitable restrictions on the algorithms used. We
show how we can model the standard Dolev-Yao adversary in this setting, and how
we can capture more general capabilities including protocol-specific knowledge
and guesses.
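As an illustration of the algorithmic baseline (our toy encoding, not the paper's formalism), the standard Dolev-Yao "analysis" closure can be computed as follows:

    def dolev_yao_closure(knowledge):
        """Saturate a finite message set under unpairing and decryption
        with known keys. Atoms are strings; composite messages are
        ('pair', m1, m2) or ('enc', payload, key). Composition rules are
        omitted: they generate infinitely many messages, so a real
        checker tests derivability of a goal instead of enumerating."""
        know = set(knowledge)
        changed = True
        while changed:
            changed = False
            for m in list(know):
                if not isinstance(m, tuple):
                    continue  # atoms decompose no further
                if m[0] == 'pair':
                    derived = [m[1], m[2]]
                elif m[0] == 'enc' and m[2] in know:  # key is known
                    derived = [m[1]]
                else:
                    derived = []
                for d in derived:
                    if d not in know:
                        know.add(d)
                        changed = True
        return know

For example, from {('pair', 'nA', ('enc', 'secret', 'k')), 'k'} the closure recovers 'secret'; restricting or enriching the adversary amounts to swapping this algorithm for another.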
We study alternating register automata on data words and data trees in
relation to logics. A data word (resp. data tree) is a word (resp. tree) whose
every position carries a label from a finite alphabet and a data value from an
infinite domain. We investigate one-way automata with alternating control over
data words or trees, with one register for storing data and comparing them for
equality. This is a continuation of the study started by Demri, Lazic and
Jurdzinski. From the standpoint of register automata models, this work aims at
two objectives: (1) simplifying the existing decidability proofs for the
emptiness problem for alternating register automata; and (2) exhibiting
decidable extensions for these models. From the logical perspective, we show
that (a) in the case of data words, satisfiability of LTL with one register and
quantification over data values is decidable; and (b) the satisfiability
problem for the so-called forward fragment of XPath on XML documents is
decidable, even in the presence of DTDs and even of key constraints. The
decidability is obtained through a reduction to the automata model introduced.
This fragment contains the child, descendant, next-sibling and
following-sibling axes, as well as data equality and inequality tests.
By considering a counting-type argument on Brownian sample paths, we prove a
result similar to that of Orey and Taylor on the exact Hausdorff dimension of
the rapid points of Brownian motion. Because of the nature of the proof we can
then apply the concepts to so-called complex oscillations (or 'algorithmically
random Brownian motion'), showing that their rapid points have the same
dimension.
Global types are formal specifications that describe communication protocols
in terms of their global interactions. We present a new, streamlined language
of global types equipped with a trace-based semantics and whose features and
restrictions are semantically justified. The multi-party sessions obtained by
projecting our global types enjoy a liveness property in addition to the
traditional progress and are shown to be sound and complete with respect to the
set of traces of the originating global type. Our notion of completeness is
less demanding than the classical ones, allowing a multi-party session to leave
out redundant traces from an underspecified global type. In addition to the
technical content, we discuss some limitations of our language of global types
and provide an extensive comparison with related specification languages
adopted in different communities.
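In a syntax along the following lines (illustrative; the paper's constructs may differ in detail), a global type records who sends what to whom, with choice and recursion:

    G = \mu X.\; p \to q : \langle \mathit{int} \rangle.\; q \to p : \{\mathit{more} : X,\ \mathit{stop} : \mathit{end}\},

and projecting G onto p and q yields the local session types the two participants must follow.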
We introduce two-sorted theories in the style of [CN10] for the complexity
classes \oplusL and DET, whose complete problems include determinants over Z2
and Z, respectively. We then describe interpretations of Soltys' linear algebra
theory LAp over arbitrary integral domains, into each of our new theories. The
result shows that equivalences of standard theorems of linear algebra over Z2 and Z
can be proved in the corresponding theory, but leaves open the interesting
question of whether the theorems themselves can be proved.
We present a static analysis by Abstract Interpretation to check for run-time
errors in parallel and multi-threaded C programs. Following our work on
Astrée, we focus on embedded critical programs without recursion or dynamic
memory allocation, but extend the analysis to a static set of threads
communicating implicitly through a shared memory and explicitly using a finite
set of mutual exclusion locks, and scheduled according to a real-time
scheduling policy and fixed priorities. Our method is thread-modular. It is
based on a slightly modified non-parallel analysis that, when analyzing a
thread, applies and enriches an abstract set of thread interferences. An
iterator then re-analyzes each thread in turn until interferences stabilize. We
prove the soundness of our method with respect to the sequential consistency
semantics, but also with respect to a reasonable weakly consistent memory
semantics. We also show how to take into account mutual exclusion and thread
priorities through a partitioning over an abstraction of the scheduler state.
We present preliminary experimental results analyzing an industrial program
with our prototype, Thésée, and demonstrate the scalability of our
approach.
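Schematically (our sketch, eliding the abstract domains, the scheduler partitioning and the widening the paper relies on), the interference iteration looks as follows:

    def thread_modular_fixpoint(threads, analyze):
        """threads: iterable of thread identifiers.
        analyze(t, env): sequential analysis of thread t under a set env
        of interferences (abstract shared writes by the other threads);
        returns the set of abstract shared writes of t. Termination
        assumes the abstraction keeps interference sets finite."""
        interferences = {t: set() for t in threads}
        stable = False
        while not stable:  # re-analyze each thread until stabilization
            stable = True
            for t in threads:
                env = set().union(
                    *(interferences[u] for u in threads if u != t))
                writes = analyze(t, env)
                if not writes <= interferences[t]:
                    interferences[t] |= writes
                    stable = False
        return interferences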
One of Courcelle's celebrated results states that if C is a class of graphs
of bounded tree-width, then model-checking for monadic second order logic
(MSO_2) is fixed-parameter tractable (fpt) on C by linear time parameterized
algorithms, where the parameter is the tree-width plus the size of the formula.
An immediate question is whether this is best possible or whether the result
can be extended to classes of unbounded tree-width. In this paper we show that
in terms of tree-width, the theorem cannot be extended much further. More
specifically, we show that if C is a class of graphs which is closed under
colourings and satisfies certain constructibility conditions and is such that
the tree-width of C is not bounded by \log^{84} n then MSO_2-model checking is
not fpt unless SAT can be solved in sub-exponential time. If the tree-width of
C is not poly-logarithmically bounded, then MSO_2-model checking is not fpt
unless all problems in the polynomial-time hierarchy can be solved in
sub-exponential time.
Inspired by a recent graphical formalism for lambda-calculus based on linear
logic technology, we introduce an untyped structural lambda-calculus, called
lambda j, which combines actions at a distance with exponential rules
decomposing the substitution by means of weakening, contraction and
dereliction. First, we prove some fundamental properties of lambda j such as
confluence and preservation of beta-strong normalisation. Second, we add a
strong bisimulation to lambda j by means of an equational theory which captures
in particular Regnier's sigma-equivalence. We then complete this bisimulation
with two more equations for (de)composition of substitutions and we prove that
the resulting calculus still preserves beta-strong normalization. Finally, we
discuss some consequences of our results.
Dependently typed programs contain an excessive amount of static terms which
are necessary to please the type checker but irrelevant for computation. To
separate static and dynamic code, several static analyses and type systems have
been put forward. We consider Pfenning's type theory with irrelevant
quantification which is compatible with a type-based notion of equality that
respects eta-laws. We extend Pfenning's theory to universes and large
eliminations and develop its meta-theory. Subject reduction, normalization and
consistency are obtained by a Kripke model over the typed equality judgement.
Finally, a type-directed equality algorithm is described whose completeness is
proven by a second Kripke model.
We propose a synthesis of the two proof styles of interactive theorem
proving: the procedural style (where proofs are scripts of commands, like in
Coq) and the declarative style (where proofs are texts in a controlled natural
language, like in Isabelle/Isar). Our approach combines the advantages of the
declarative style - the possibility to write formal proofs like normal
mathematical text - and the procedural style - strong automation and help with
shaping the proofs, including determining the statements of intermediate steps.
Our approach is new, and differs significantly from the ways in which the
procedural and declarative proof styles have been combined before in the
Isabelle, Ssreflect and Matita systems. Our approach is generic and can be
implemented on top of any procedural interactive theorem prover, regardless of
its architecture and logical foundations. To show the viability of our proposed
approach, we fully implemented it as a proof interface called miz3, on top of
the HOL Light interactive theorem prover. The declarative language that this
interface uses is a slight variant of the language of the Mizar system, and can
be used for any interactive theorem prover regardless of its logical
foundations. The miz3 interface allows easy access to the full set of tactics
and formal libraries of HOL Light, and as such has "industrial strength". Our
approach gives a way to automatically convert any procedural proof to a
declarative counterpart, where the […]
We give a method to prove confluence of term rewriting systems that contain
non-terminating rewrite rules such as commutativity and associativity. Usually,
confluence of term rewriting systems containing such rules is proved by
treating them as equational term rewriting systems and considering E-critical
pairs and/or termination modulo E. In contrast, our method is based solely on
usual critical pairs and it also (partially) works even if the system is not
terminating modulo E. We first present confluence criteria for term rewriting
systems whose rewrite rules can be partitioned into a terminating part and a
possibly non-terminating part. We then give a reduction-preserving completion
procedure so that the applicability of the criteria is enhanced. In contrast to
the well-known Knuth-Bendix completion procedure which preserves the
equivalence relation of the system, our completion procedure preserves the
reduction relation of the system, by which confluence of the original system is
inferred from that of the completed system.
We introduce tree-width for first order formulae \phi, fotw(\phi). We show
that computing fotw is fixed-parameter tractable with parameter fotw. Moreover,
we show that on classes of formulae of bounded fotw, model checking is
fixed-parameter tractable, with parameter the length of the formula. This is done by
translating a formula \phi with fotw(\phi) < k into a formula of the k-variable
fragment L^k of first order logic. For fixed k, the question whether a given
first order formula is equivalent to an L^k formula is undecidable. In
contrast, the classes of first order formulae with bounded fotw are fragments
of first order logic for which the equivalence is decidable.
Our notion of tree-width generalises tree-width of conjunctive queries to
arbitrary formulae of first order logic by taking into account the quantifier
interaction in a formula. Moreover, it is more powerful than the notion of
elimination-width of quantified constraint formulae, defined by Chen and Dalmau
(CSL 2005): for quantified constraint formulae, both bounded elimination-width
and bounded fotw allow for model checking in polynomial time. We prove that
fotw of a quantified constraint formula \phi is bounded by the
elimination-width of \phi, and we exhibit a class of quantified constraint
formulae with bounded fotw that has unbounded elimination-width. A similar
comparison holds for strict tree-width of non-recursive stratified datalog as
defined by Flum, Frick, and Grohe (JACM 49, 2002).
Finally, […]