2011
Computability logic (CoL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a recently introduced semantical platform and ambitious program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Its expressions represent interactive computational tasks seen as games played by a machine against the environment, and "truth" is understood as existence of an algorithmic winning strategy. With logical operators standing for operations on games, the formalism of CoL is open-ended, and has already undergone a series of extensions. This article extends the expressive power of CoL in a qualitatively new way, generalizing formulas (to which the earlier languages of CoL were limited) to circuit-style structures termed cirquents. The latter, unlike formulas, are able to account for subgame/subtask sharing between different parts of the overall game/task. Among the many advantages offered by this ability is that it allows us to capture, refine and generalize the well-known independence-friendly logic which, after the present leap forward, naturally becomes a conservative fragment of CoL, just as classical logic has been known to be a conservative fragment of the formula-based version of CoL. Technically, this paper is self-contained, and can be read without any prior familiarity with CoL.
We show how to extract existential witnesses from classical proofs using Krivine's classical realizability---where classical proofs are interpreted as lambda-terms with the call/cc control operator. We first recall the basic framework of classical realizability (in classical second-order arithmetic) and show how to extend it with primitive numerals for faster computations. Then we show how to perform witness extraction in this framework, by discussing several techniques depending on the shape of the existential formula. In particular, we show that in the Sigma^0_1 case, Krivine's witness extraction method reduces to Friedman's through a well-suited negative translation to intuitionistic second-order arithmetic. Finally we discuss the advantages of using call/cc rather than a negative translation, especially from the point of view of an implementation.
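To make the role of the control operator concrete, here is a toy sketch in Haskell's continuation monad (an illustration of the call/cc idea only, not Krivine's realizability construction; the names extract and candidates are ours): a captured continuation lets a search escape with the first witness of a decidable predicate, which is the operational intuition behind classical witness extraction.

import Control.Monad.Cont

-- Toy illustration: callCC captures an escape continuation `exit`; the
-- search aborts and returns the first candidate satisfying the decidable
-- predicate p.  This only mimics the control behaviour exploited in
-- classical witness extraction; it is not the realizability machinery.
extract :: (Int -> Bool) -> [Int] -> Maybe Int
extract p candidates = runCont (callCC search) id
  where
    search exit = do
      mapM_ (\n -> if p n then exit (Just n) else return ()) candidates
      return Nothing

main :: IO ()
main = print (extract (\n -> n * n > 50) [1 ..])   -- prints Just 8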
In a functional language, the dominant control-flow mechanism is function call and return. Most higher-order flow analyses, including k-CFA, do not handle call and return well: they remember only a bounded number of pending calls because they approximate programs with control-flow graphs. Call/return mismatch introduces precision-degrading spurious control-flow paths and increases the analysis time. We describe CFA2, the first flow analysis with precise call/return matching in the presence of higher-order functions and tail calls. We formulate CFA2 as an abstract interpretation of programs in continuation-passing style and describe a sound and complete summarization algorithm for our abstract semantics. A preliminary evaluation shows that CFA2 gives more accurate data-flow information than 0CFA and 1CFA.
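The call/return mismatch described above can be seen on a two-line example (our own illustrative snippet, written in Haskell; it shows the imprecision of a monovariant analysis, not the CFA2 algorithm itself).

-- Two call sites of the same function.  An analysis that approximates the
-- program by a control-flow graph merges both returns of `idf`, so it
-- concludes that the constant 1 may also flow to `b` and 2 to `a`: a
-- spurious call/return path that precise call/return matching rules out.
idf :: Int -> Int
idf x = x

example :: (Int, Int)
example =
  let a = idf 1   -- call site 1
      b = idf 2   -- call site 2
  in (a, b)

main :: IO ()
main = print example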
We define a logical framework with singleton types and one universe of small types. We give the semantics using a PER model; it is used for constructing a normalisation-by-evaluation algorithm. We prove completeness and soundness of the algorithm, and obtain as a corollary the injectivity of type constructors. Then we give the definition of a correct and complete type-checking algorithm for terms in normal form. We extend the results to proof-irrelevant propositions.
Consider a linear ordering equipped with a finite sequence of monadic predicates. If the ordering contains an interval of order type \omega or -\omega, and the monadic second-order theory of the combined structure is decidable, then there exists a non-trivial expansion by a further monadic predicate that is still decidable.
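Spelled out slightly more formally (our notation, with MTh denoting the monadic second-order theory and \omega^{*} the reverse of \omega, written -\omega above):

\text{If } (L,<,P_1,\dots,P_n) \text{ contains an interval of order type } \omega \text{ or } \omega^{*}
\text{ and } \mathrm{MTh}(L,<,P_1,\dots,P_n) \text{ is decidable,}
\text{ then there is a non-trivial } Q \subseteq L \text{ such that } \mathrm{MTh}(L,<,P_1,\dots,P_n,Q) \text{ is also decidable.}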
This paper presents a method for synthesising sound and complete tableau calculi. Given a specification of the formal semantics of a logic, the method generates a set of tableau inference rules that can then be used to reason within the logic. The method guarantees that the generated rules form a calculus which is sound and constructively complete. If the logic can be shown to admit finite filtration with respect to a well-defined first-order semantics then adding a general blocking mechanism provides a terminating tableau calculus. The process of generating tableau rules can be completely automated and produces, together with the blocking mechanism, an automated procedure for generating tableau decision procedures. For illustration we show the workability of the approach for a description logic with transitive roles and propositional intuitionistic logic.
The operational semantics of interactive systems is usually described by labeled transition systems. Abstract semantics, which is defined in terms of bisimilarity, is characterized by the final morphism in some category of coalgebras. Since the behaviour of interactive systems is for many reasons infinite, symbolic semantics were introduced as a means to define smaller, possibly finite, transition systems, by employing symbolic actions and avoiding some sources of infiniteness. Unfortunately, symbolic bisimilarity has a different shape with respect to ordinary bisimilarity, and thus the standard coalgebraic characterization does not work. In this paper, we introduce coalgebraic models of symbolic bisimilarity. We use as motivating examples two asynchronous formalisms: open Petri nets and asynchronous pi-calculus. Indeed, as we have shown in a previous paper, asynchronous bisimilarity can be seen as an instance of symbolic bisimilarity.
We consider two styles of proof calculi for a family of tense logics, presented in a formalism based on nested sequents. A nested sequent can be seen as a tree of traditional single-sided sequents. Our first style of calculi is what we call "shallow calculi", where inference rules are only applied at the root node in a nested sequent. Our shallow calculi are extensions of Kashima's calculus for tense logic and share an essential characteristic with display calculi, namely, the presence of structural rules called "display postulates". Shallow calculi enjoy a simple cut elimination procedure, but are unsuitable for proof search due to the presence of display postulates and other structural rules. The second style of calculi uses deep inference, whereby inference rules can be applied at any node in a nested sequent. We show that, for a range of extensions of tense logic, the two styles of calculi are equivalent, and there is a natural proof-theoretic correspondence between display postulates and deep inference. The deep inference calculi enjoy the subformula property and have no display postulates or other structural rules, making them a better framework for proof search.
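For readers new to the formalism, the usual tree reading can be illustrated on a single-bracket (future-only modal) nested sequent; this rendering is ours, and the tense calculi in the paper use two kinds of brackets, one for each temporal direction:

\Gamma, [\Delta_1], [\Delta_2, [\Sigma]]
\quad\text{is read as}\quad
\bigvee\Gamma \;\lor\; \Box\bigvee\Delta_1 \;\lor\; \Box\bigl(\bigvee\Delta_2 \lor \Box\bigvee\Sigma\bigr),

so every bracketed node is a child sequent of the tree, and a deep-inference rule may be applied at any such node, whereas a shallow rule applies only at the root.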
Algebras axiomatized entirely by rank 1 axioms are algebras for a functor and thus the free algebras can be obtained by a direct limit process. Dually, the final coalgebras can be obtained by an inverse limit process. In order to explore the limits of this method we look at Heyting algebras which have mixed rank 0-1 axiomatizations. We will see that Heyting algebras are special in that they are almost rank 1 axiomatized and can be handled by a slight variant of the rank 1 coalgebraic methods.
Analytic proof calculi are introduced for box and diamond fragments of basic modal fuzzy logics that combine the Kripke semantics of modal logic K with the many-valued semantics of Gödel logic. The calculi are used to establish completeness and complexity results for these fragments.
We introduce the operators "modified limit" and "accumulation" on a Banach space, and we use these to define what we mean by being internally computable over the space. We prove that any externally computable function from a computable metric space to a computable Banach space is internally computable. We motivate the need for internal concepts of computability by observing that the complexity of the set of finite sets of closed balls with a nonempty intersection is not uniformly hyperarithmetical, and thus that approximating an externally computable function is highly complex.
The model checking problem for CTL is known to be P-complete (Clarke, Emerson, and Sistla (1986), see Schnoebelen (2002)). We consider fragments of CTL obtained by restricting the use of temporal modalities or the use of negations---restrictions already studied for LTL by Sistla and Clarke (1985) and Markey (2004). For all these fragments, except for the trivial case without any temporal operator, we systematically prove model checking to be either inherently sequential (P-complete) or very efficiently parallelizable (LOGCFL-complete). For most fragments, however, model checking for CTL is already P-complete. Hence our results indicate that, in cases where the combined complexity is of relevance, approaching CTL model checking by parallelism cannot be expected to result in any significant speedup. We also completely determine the complexity of the model checking problem for all fragments of the extensions ECTL, CTL+, and ECTL+.
Ludics is peculiar in the panorama of game semantics: we first have the definition of interaction-composition, and only then semantical types, given as sets of strategies which "behave well" and react in the same way to a set of tests. The semantical types which are interpretations of logical formulas enjoy a fundamental property, called internal completeness, which characterizes ludics and sets it apart also from realizability. Internal completeness entails standard full completeness as a consequence. A growing body of work has started to explore the potential of this specific interactive approach. However, ludics has some limitations, which are a consequence of the fact that, in the original formulation, strategies are abstractions of MALL proofs. On the one hand, no repetitions are allowed. On the other hand, the proofs tend to rely on the very specific properties of the MALL proof-like strategies, making it difficult to transfer the approach to semantical types into different settings. In this paper, we provide an extension of ludics which allows repetitions and show that one can still have interactive types and internal completeness. From this, we obtain full completeness w.r.t. a polarized version of MELL. In our extension, we use fewer properties than in the original formulation, which we believe is of independent interest. We hope this may open the way to applications of the ludics approach to larger domains and different settings.
To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is particularly effective if the specifications are written in a high-level language that makes it easy for designers to specify their goals. We consider a high-level specification language that results from adding knowledge to a fragment of Nuprl specifically tailored for specifying distributed protocols, called event theory. We then show how high-level knowledge-based programs can be synthesized from the knowledge-based specifications using a proof development system such as Nuprl. Methods of Halpern and Zuck then apply to convert these knowledge-based protocols to ordinary protocols. These methods can be expressed as heuristic transformation tactics in Nuprl.
Working in the Blum-Shub-Smale model of computation on the real numbers, we answer several questions of Meer and Ziegler. First, we show that, for each natural number d, an oracle for the set of algebraic real numbers of degree at most d is insufficient to allow an oracle BSS-machine to decide membership in the set of algebraic numbers of degree d + 1. We add a number of further results on relative computability of these sets and their unions. Then we show that the halting problem for BSS-computation is not decidable below any countable oracle set, and give a more specific condition, related to the cardinalities of the sets, necessary for relative BSS-computability. Most of our results involve the technique of using as input a tuple of real numbers which is algebraically independent over both the parameters and the oracle of the machine.
Appel and McAllester's "step-indexed" logical relations have proven to be a simple and effective technique for reasoning about programs in languages with semantically interesting types, such as general recursive types and general reference types. However, proofs using step-indexed models typically involve tedious, error-prone, and proof-obscuring step-index arithmetic, so it is important to develop clean, high-level, equational proof principles that avoid mention of step indices. In this paper, we show how to reason about binary step-indexed logical relations in an abstract and elegant way. Specifically, we define a logic LSLR, which is inspired by Plotkin and Abadi's logic for parametricity, but also supports recursively defined relations by means of the modal "later" operator from Appel, Melliès, Richards, and Vouillon's "very modal model" paper. We encode in LSLR a logical relation for reasoning relationally about programs in call-by-value System F extended with general recursive types. Using this logical relation, we derive a set of useful rules with which we can prove contextual equivalence and approximation results without counting steps.
Type-based amortised resource analysis following Hofmann and Jost---where resources are associated with individual elements of data structures and doled out to the programmer under a linear typing discipline---has been successful in providing concrete resource bounds for functional programs, with good support for inference. In this work we translate the idea of amortised resource analysis to imperative pointer-manipulating languages by embedding a logic of resources, based on the affine intuitionistic Logic of Bunched Implications, within Separation Logic. The Separation Logic component allows us to assert the presence and shape of mutable data structures on the heap, while the resource component allows us to state the consumable resources associated with each member of the structure. We present the logic on a small imperative language, based on Java bytecode, with procedures and mutable heap. We have formalised the logic and its soundness property within the Coq proof assistant and extracted a certified verification condition generator. We also describe a proof search procedure that allows generated verification conditions to be discharged while using linear programming to infer consumable resource annotations. We demonstrate the logic on some examples, including proving the termination of in-place list reversal on lists with cyclic tails.
Parsing is an important problem in computer science and yet surprisingly little attention has been devoted to its formal verification. In this paper, we present TRX: a parser interpreter formally developed in the proof assistant Coq, capable of producing formally correct parsers. We use parsing expression grammars (PEGs), a formalism essentially representing recursive descent parsing, which we consider an attractive alternative to context-free grammars (CFGs). From this formalization we can extract a parser for an arbitrary PEG grammar with the guarantee of total correctness, i.e., the resulting parser is terminating and correct with respect to its grammar and the semantics of PEGs; both properties are formally proven in Coq.
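To make the "recursive descent" remark concrete, here is a minimal PEG interpreter sketch (in Haskell rather than Coq, with an illustrative expression type of our own; TRX's Coq development additionally handles well-formedness, termination, and semantic actions).

-- A tiny PEG interpreter: prioritized choice, greedy repetition, and
-- negative lookahead, evaluated by plain recursive descent.
data Peg
  = Empty               -- always succeeds, consumes nothing
  | Term Char           -- match a single character
  | Seq Peg Peg         -- e1 followed by e2
  | Choice Peg Peg      -- prioritized choice: try e1, only then e2
  | Star Peg            -- greedy repetition
  | Not Peg             -- negative lookahead, consumes nothing

-- parse returns the remaining input on success.
parse :: Peg -> String -> Maybe String
parse Empty        s = Just s
parse (Term c)     (x:xs)
  | c == x           = Just xs
parse (Term _)     _ = Nothing
parse (Seq a b)    s = parse a s >>= parse b
parse (Choice a b) s = case parse a s of
                         Just s' -> Just s'      -- no backtracking into b
                         Nothing -> parse b s
parse (Star a)     s = case parse a s of
                         Just s' | s' /= s -> parse (Star a) s'
                         _                 -> Just s
parse (Not a)      s = case parse a s of
                         Just _  -> Nothing
                         Nothing -> Just s

-- e.g. parse (Seq (Term 'a') (Star (Term 'b'))) "abbc" == Just "c"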
We introduce a new type of generalized Turing machines (GTMs), which are intended as a tool for the mathematician who studies computability in Analysis. In a single tape cell a GTM can store a symbol, a real number, a continuous real function or a probability measure, for example. The model is based on TTE, the representation approach for computable analysis. As a main result we prove that the functions that are computable via given representations are closed under GTM programming. This generalizes the well known fact that these functions are closed under composition. The theorem allows us to speak about objects themselves instead of names in algorithms and proofs. When GTMs are used to specify algorithms, many proofs become more rigorous and also simpler and more transparent, since the GTM model is very simple and allows one to apply well-known techniques from Turing machine theory. We also show how finite or infinite sequences as names can be replaced by sets (generalized representations) on which computability is already defined via representations. This allows further simplification of proofs. All of this is done for multi-functions, which are essential in Computable Analysis, and multi-representations, which often allow more elegant formulations. As a byproduct we show that the computable functions on finite and infinite sequences of symbols are closed under programming with GTMs. We conclude with examples of application.
We consider the enumeration problem of first-order queries over structures of bounded degree. It was shown that this problem is in the Constant-Delay_lin class. An enumeration problem belongs to Constant-Delay_lin if for an input of size n it can be solved by:
- an O(n) precomputation phase building an index structure,
- followed by a phase enumerating the answers with no repetition and a constant delay between two consecutive outputs.
In this article we give a different proof of this result based on Gaifman's locality theorem for first-order logic. Moreover, the constants we obtain yield a total evaluation time that is triply exponential in the size of the input formula, matching the complexity of the best known evaluation algorithms.
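As a toy illustration of the Constant-Delay_lin contract itself (not of the paper's algorithm), the two phases can be sketched as follows: a linear-time pass builds the "index", after which the enumeration phase emits each answer with constant delay and no repetition. The query here is just a fixed unary predicate over a list-shaped structure.

import Data.List (foldl')

-- Precomputation: one linear pass recording the positions that satisfy the
-- predicate; think of the resulting list as the index structure.
precompute :: (a -> Bool) -> [a] -> [Int]
precompute p xs =
  reverse (foldl' step [] (zip [0 ..] xs))
  where
    step acc (i, x) = if p x then i : acc else acc

-- Enumeration: each answer is emitted by a single pattern match on the
-- index, i.e. with constant delay between consecutive outputs.
enumerate :: [Int] -> IO ()
enumerate = mapM_ print

main :: IO ()
main = enumerate (precompute even [3, 4, 7, 10, 12 :: Int])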