2014
Stepwise refinement of algebraic specifications is a well-known formal methodology for program development. However, traditional notions of refinement based on signature morphisms are often too rigid to capture a number of relevant transformations in the context of software design, reuse, and adaptation. This paper proposes a new approach to refinement in which signature morphisms are replaced by logical interpretations as a means of witnessing refinements. The approach is first presented in the context of equational logic, and later generalised to deductive systems of arbitrary dimension. This allows, for example, refining sentential specifications into equational ones, and the latter into modal ones.
We study the question of whether, for a given class of finite graphs, one can define, for each graph of the class, a linear ordering in monadic second-order logic, possibly with the help of monadic parameters. We consider two variants of monadic second-order logic: one where we can only quantify over sets of vertices, and one where we can also quantify over sets of edges. For several special cases, we present combinatorial characterisations of when such a linear ordering is definable. In some cases, for instance for graph classes that omit a fixed graph as a minor, the presented conditions are necessary and sufficient; in other cases, they are only necessary. Other graph classes we consider include complete bipartite graphs, split graphs, chordal graphs, and cographs. We prove that orderability is decidable for the so-called HR-equational classes of graphs, which are described by equation systems and generalise the context-free languages.
Recently, A. Polonsky showed that the range property fails for H. Here we give conditions on a closed term that imply that its range is infinite.
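For context (standard terminology, not restated above): the range of a closed $\lambda$-term $F$ is the set of terms $F M$, for $M$ closed, taken up to equality in the theory under consideration; Barendregt's range property asserts that this set is always either a singleton or infinite.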
Regular cost functions have been introduced recently as an extension of the notion of regular languages with counting capabilities, which retains strong closure, equivalence, and decidability properties. The specificity of cost functions is that exact values are not considered, but only estimated. In this paper, we define an extension of Linear Temporal Logic (LTL) over finite words to describe cost functions. We give an explicit translation from this new logic to two dual forms of cost automata, and we show that the natural decision problems for this logic are PSPACE-complete, as is the case in the classical setting. We then algebraically characterize the expressive power of this logic, using a new syntactic congruence for cost functions introduced in this paper.
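To make "estimated" precise, in the standard setting of regular cost functions (a fact from the literature, not spelled out above) a cost function is a map $f : A^* \to \mathbb{N} \cup \{\infty\}$ considered up to the equivalence $f \approx g$, which holds when, for every set of words $X$, $f$ is bounded over $X$ if and only if $g$ is bounded over $X$; only boundedness, not exact values, is preserved.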
We investigate unification problems related to the Cipher Block Chaining (CBC) mode of encryption. We first model chaining in terms of a simple, convergent rewrite system over a signature with two disjoint sorts: list and element. By interpreting a particular symbol of this signature suitably, the rewrite system can model several practical situations of interest. An inference procedure is presented for deciding the unification problem modulo this rewrite system. The procedure is modular in the following sense: any given problem is handled by a system of `list-inferences', and the set of equations thus derived between the element-terms of the problem is then handed over to any (`black-box') procedure which is complete for solving these element-equations. As an example application of this unification procedure, we show how to detect attacks on a Needham-Schroeder-like protocol that employs the CBC encryption mode based on the associative-commutative (AC) operator XOR. The 2-sorted convergent rewrite system is then extended into one that fully captures a block chaining encryption-decryption mode at an abstract level, using no AC symbols; unification modulo this extended system is also shown to be decidable.
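For reference, the chaining shape being modelled is the classical CBC recurrence (a standard fact about the mode, not notation from the paper): \[ c_1 = E_K(p_1 \oplus \mathrm{IV}), \qquad c_{i+1} = E_K(p_{i+1} \oplus c_i), \] where $E_K$ is the block encryption under key $K$, $p_i$ and $c_i$ are plaintext and ciphertext blocks, and $\mathrm{IV}$ is the initialisation vector.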
The computational complexity of the solution $h$ of the ordinary differential equation $h(0)=0$, $h'(t) = g(t, h(t))$ has been investigated under various assumptions on the function $g$. Note that under each of the assumptions below, $g$ is (locally) Lipschitz continuous, so the solution $h$ is unique by the Picard-Lindelöf theorem. Kawamura showed in 2010 that the solution $h$ can be PSPACE-hard even if $g$ is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of $g$ and obtain the following results: the solution $h$ can still be PSPACE-hard if $g$ is assumed to be of class $C^1$; for each $k\ge2$, the solution $h$ can be hard for the counting hierarchy even if $g$ is of class $C^k$.
We prove that the complexity of the uniform first-order theory of ground tree rewrite graphs is in ATIME(2^{2^{poly(n)}}, O(n)), i.e., doubly exponential alternating time with linearly many alternations. Providing a matching lower bound, we show that there is some fixed ground tree rewrite graph whose first-order theory is hard for ATIME(2^{2^{poly(n)}}, poly(n)) with respect to logspace reductions. Finally, we prove that there exists a fixed ground tree rewrite graph together with a single unary predicate in the form of a regular tree language such that the resulting structure has a non-elementary first-order theory.
In this paper we show that $\omega$B- and $\omega$S-regular languages satisfy the following separation-type theorem: if $L_1, L_2$ are disjoint languages of $\omega$-words both recognised by $\omega$B- (resp. $\omega$S-) automata, then there exists an $\omega$-regular language $L_{\mathrm{sep}}$ that contains $L_1$ and whose complement contains $L_2$. In particular, if a language and its complement are recognised by $\omega$B- (resp. $\omega$S-) automata, then the language is $\omega$-regular. The result is especially interesting because, as shown by Boja\'nczyk and Colcombet, $\omega$B-regular languages are complements of $\omega$S-regular languages. Therefore, the above theorem shows that these are two mutually dual classes that both have the separation property. Usually (e.g. in descriptive set theory or recursion theory) exactly one class from a pair $C$, $C^c$ has the separation property. The proof technique reduces the separation property for $\omega$-word languages to profinite languages using Ramsey's theorem and topological methods. After that reduction, the analysis of the separation property in the profinite monoid is relatively simple. The whole construction is technically not complicated; moreover, it seems to be quite extensible. The paper uses a framework for the analysis of B- and S-regular languages in the context of the profinite monoid that was proposed by Toru\'nczyk.
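In symbols, the separation property stated above reads \[ L_1 \subseteq L_{\mathrm{sep}} \quad \text{and} \quad L_{\mathrm{sep}} \cap L_2 = \emptyset \] for some $\omega$-regular language $L_{\mathrm{sep}}$.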
We propose a theory of learning aimed at formalizing some ideas underlying Coquand's game semantics and Krivine's realizability of classical logic. We introduce a notion of knowledge state together with a new topology, capturing finite positive and negative information that guides a learning strategy. We use a leading example to illustrate how non-constructive proofs lead to continuous and effective learning strategies over knowledge spaces, and prove that our learning semantics is sound and complete with respect to classical truth, as is the case for Coquand's and Krivine's approaches.
A nondeterministic discounted-sum automaton (NDA) is a nondeterministic finite automaton with edge weights, valuing a run by the discounted sum of the visited edge weights. More precisely, the weight in the $i$-th position of the run is divided by $\lambda^i$, where the discount factor $\lambda$ is a fixed rational number greater than 1. The value of a word is the minimal value of the automaton's runs on it. Discounted summation is a common and useful measuring scheme, especially for infinite sequences, reflecting the assumption that earlier weights are more important than later weights. Unfortunately, determinization of NDAs, which is often essential in formal verification, is, in general, not possible. We provide positive news, showing that every NDA with an integral discount factor is determinizable. We complete the picture by proving that the integers characterize exactly the discount factors that guarantee determinizability: for every nonintegral rational discount factor $\lambda$, there is a nondeterminizable $\lambda$-NDA. We also prove that the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition, and subtraction, which is the case neither for general NDAs nor for deterministic NDAs. For general NDAs, we look into approximate determinization, which is always possible as the influence of a word's suffix decays. We show that the naive approach, of unfolding the automaton computations up to a sufficient level, is doubly exponential in the […]
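Spelled out from the description above, a run $\rho$ traversing the weights $w_0, w_1, w_2, \ldots$ has value \[ \mathrm{val}(\rho) = \sum_{i \ge 0} \frac{w_i}{\lambda^i}, \] and the value of a word is the minimum of $\mathrm{val}(\rho)$ over the automaton's runs $\rho$ on it.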
This paper describes an automatic termination checker for a generic first-order call-by-value language in ML style. We use the fact that values are built from variants and tuples to keep some information about how the arguments of recursive calls evolve during evaluation. The result is a termination criterion extending the size-change termination principle of Lee, Jones and Ben-Amram that can detect size changes inside subvalues of arguments; a hypothetical illustration is sketched below. Moreover, the corresponding algorithm is easy to implement, making it a good candidate for experimentation.
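As a purely hypothetical illustration (the function and the analysis sketch are ours, not the paper's), consider an ML-style function whose single tuple argument admits no simple decreasing measure as a whole, while each recursive call shrinks one of its subvalues. The Ackermann function, a classical size-change example, has this shape:

    (* Peano naturals as a variant type. *)
    type nat = Z | S of nat

    (* Ackermann: no measure on the whole tuple argument decreases
       (the second component can grow arbitrarily), but each call
       either shrinks the first component, or keeps it unchanged
       while shrinking the second. *)
    let rec ack = function
      | Z, n -> S n
      | S m, Z -> ack (m, S Z)
      | S m, S n -> ack (m, ack (S m, n))

Tracking the sizes of the two tuple components separately, as the checker's subvalue analysis does, reveals a lexicographic descent and hence termination, which a measure on the argument as a single value misses.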
We study the synthesis problem for distributed architectures with a parametric number of finite-state components. Parameterized specifications arise naturally in a synthesis setting, but thus far it has been unclear how to detect realizability and how to perform synthesis in a parameterized setting. Using a classical result from verification, we show that for a class of specifications in indexed LTL\X, parameterized synthesis in token ring networks is equivalent to distributed synthesis in a network consisting of a few copies of a single process. Adapting a well-known result from distributed synthesis, we show that the latter problem is undecidable. We describe a semi-decision procedure for the parameterized synthesis problem in token rings, based on bounded synthesis. We extend the approach to parameterized synthesis in token-passing networks with arbitrary topologies, and show applicability on a simple case study. Finally, we sketch a general framework for parameterized synthesis based on cutoffs and other parameterized verification techniques.
We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with $k$ limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for $\epsilon$-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e. randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for $\epsilon$-approximation, for all $\epsilon>0$. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be $\epsilon$-approximated in time polynomial in the size of the MDP and $1/\epsilon$, and exponential in the number of limit-average functions, for all $\epsilon>0$. Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff […]
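For reference, the limit-average value of the $j$-th reward function $r_j$ along a run $\rho$ with states (or state-action pairs) $\rho_0 \rho_1 \rho_2 \ldots$ is commonly defined as \[ \mathrm{LimAvg}_j(\rho) = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} r_j(\rho_i) \] (some papers use $\limsup$; the abstract does not fix the convention), and the vectors compared above collect these $k$ values.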
We provide a simple proof of Kamp's theorem, which states that the temporal logic with the modalities Until and Since is expressively equivalent to first-order monadic logic of order over Dedekind-complete linear time.
In this work we continue the syntactic study of completeness that began with the works of Immerman and Medina. In particular, we take up a conjecture raised by Medina in his dissertation: if a conjunction of a second-order sentence and a first-order sentence defines an NP-complete problem via fops (first-order projections), then the second-order conjunct alone must also define an NP-complete problem. Although this claim looks very plausible and intuitive, we currently cannot provide a definite answer to it. However, we can answer in the affirmative a weaker claim: all ``consistent'' universal first-order sentences can be safely eliminated without fear of losing completeness. Our methods are quite general and can be applied to complexity classes other than NP (in this paper: to NLSPACE, PTIME, and coNP), provided the class has a complete problem satisfying a certain combinatorial property.
Two of the most studied extensions of trace and testing equivalences to nondeterministic and probabilistic processes induce distinctions that have been questioned, and lack desirable properties. Probabilistic trace-distribution equivalence differentiates systems that can perform the same set of traces with the same probabilities, and is not a congruence for parallel composition. Probabilistic testing equivalence, which relies only on extremal success probabilities, is backward compatible with testing equivalences for restricted classes of processes, such as fully nondeterministic processes or generative/reactive probabilistic processes, only if specific sets of tests are admitted. In this paper, new versions of probabilistic trace and testing equivalences are presented for the general class of nondeterministic and probabilistic processes. The new trace equivalence is coarser because it compares execution probabilities of single traces instead of entire trace distributions, and it turns out to be compositional. The new testing equivalence requires matching all resolutions of nondeterminism on the basis of their success probabilities, rather than comparing only extremal success probabilities, and considers success probabilities in a trace-by-trace fashion, rather than cumulatively on entire resolutions. It is fully backward compatible with testing equivalences for restricted classes of processes; as a consequence, the trace-by-trace approach uniformly captures the standard […]
We investigate the phenomenon that "every monad is a linear state monad". We do this by studying a fully complete state-passing translation from an impure call-by-value language to a new linear type theory: the enriched call-by-value calculus. The results are not specific to store, but can be applied to any computational effect expressible using algebraic operations, even to effects that are not usually thought of as stateful. There is a bijective correspondence between generic effects in the source language and state access operations in the enriched call-by-value calculus. From the perspective of categorical models, the enriched call-by-value calculus suggests a refinement of the traditional Kleisli models of effectful call-by-value languages. The new models can be understood as enriched adjunctions.
A well-known result by Frick and Grohe shows that deciding FO logic on trees involves a parameter dependence that is a tower of exponentials. Though this lower bound is tight for Courcelle's theorem, it has been evaded by a series of recent meta-theorems for other graph classes. Here we provide some additional non-elementary lower bound results, which are in some senses stronger. Our goal is to explain common traits in these recent meta-theorems and identify barriers to further progress. More specifically, first, we show that on the class of threshold graphs, and therefore also on any union- and complement-closed class, there is no model-checking algorithm with elementary parameter dependence even for FO logic. Second, we show that there is no model-checking algorithm with elementary parameter dependence for MSO logic even restricted to paths (or equivalently to unary strings), unless E=NE. As a corollary, we resolve an open problem on the complexity of MSO model-checking on graphs of bounded max-leaf number. Finally, we look at MSO on the class of colored trees of depth d. We show that, assuming the ETH, for every fixed d>=1 at least d+1 levels of exponentiation are necessary for this problem, thus showing that the (d+1)-fold exponential algorithm recently given by Gajarsk\'{y} and Hlin\v{e}n\'{y} is essentially optimal.
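Here "d+1 levels of exponentiation" can be read via the tower function (our notation, for concreteness): \[ \exp^{(0)}(n) = n, \qquad \exp^{(d+1)}(n) = 2^{\exp^{(d)}(n)}, \] so the claim is that a parameter dependence of roughly $\exp^{(d+1)}(\mathrm{poly}(k))$ is unavoidable, while an "elementary" dependence is one bounded by $\exp^{(c)}$ for some fixed $c$.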
All current investigations to analyze the derivational complexity of term rewrite systems are based on a single termination method, possibly preceded by transformations. However, the exclusive use of direct criteria is problematic due to their restricted power. To overcome this limitation, the article introduces a modular framework which allows one to infer (polynomial) upper bounds on the complexity of term rewrite systems by combining different criteria. Since the fundamental idea is based on relative rewriting, we study how matrix interpretations and match-bounds, respectively, can be used and extended to measure the complexity of relative rewriting. The modular framework is proved strictly more powerful than the conventional setting. Furthermore, the results have been implemented, and experiments show significant gains in power.
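For reference, relative rewriting is the standard notion in which the rewrite relation of a system $R$ relative to a system $S$ is \[ \to_{R/S} \;=\; \to_S^* \cdot \to_R \cdot \to_S^*, \] so that derivational complexity counts only $R$-steps while $S$-steps are free; this is what allows different criteria to bound the steps of different parts of a system separately.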