2006

Editor: Natarajan Shankar

Imbuing machines with intelligence is a long-standing intellectual goal. Logical reasoning is one of the hallmarks of human intelligence, yet human beings find long chains of logical inference as challenging as long sequences of calculation. The automation of reasoning is more than an attempt to scratch a scientific itch; it has many important applications including knowledge representation, constraint solving, automated proof checking, database query optimization, and software and hardware verification. Advances in automated reasoning have always combined important theoretical insights in logic and algorithmics with hard-won practical implementation techniques and heuristics. The International Joint Conference on Automated Reasoning (IJCAR) brings together all the different strands of automated reasoning, including logical foundations, automated proof search, interactive proof checking, and applications, under one umbrella. The IJCAR 2006 meeting was held under the auspices of the Federated Logic Conference in Seattle, USA, during August 16-21, 2006. IJCAR 2006 combined several individual conferences including

- CADE (International Conference on Automated Deduction)
- TABLEAUX (International Conference on Automated Reasoning with Analytic Tableaux and Related Methods)
- FTP (International Workshop on First-Order Theorem Proving)
- FroCoS (Workshop on Frontiers of Combining Systems)
- TPHOLs (International Conference on Theorem Proving in Higher-Order Logics).

Following the conference, the best papers were selected for publication in extended form. These papers included a good balance of theory and practice. The more theoretical papers were selected for publication in the current issue of Logical Methods in Computer Science, while the more practical papers were published in the Journal of Automated Reasoning.

Gentzen's sequent calculus has played a central role in automated reasoning. Sequents have the form Γ ⊢ ∆, where Γ and ∆ are sets of formulas. Such a sequent is interpreted as asserting the provability of Γ =⇒ ∆. The proof rules for the sequent calculus include the axiom Γ, A ⊢ A, ∆ and the cut rule, which derives Γ ∪ Γ′ ⊢ ∆ ∪ ∆′ from the premises Γ, A ⊢ ∆ and Γ′ ⊢ A, ∆′, along with rules for introducing each connective or quantifier in the left or the right of the conclusion sequent. The paper Canonical calculi with (n, k)-ary quantifiers by Avron and Zamansky examines a generalization of the sequent calculus with quantifiers that bind k variables over n formulas. The propositional connectives are instances where no variables are bound and hence k = 0, but the definition also covers universal and existential quantification and partially ordered (Henkin) quantifiers. The central result of the paper is that for k ∈ {0, 1}, there is an equivalence between:

1. The coherence of the proof rules for an (n, k)-ary quantifier, a decidable property that verifies the duality of the left and right proof rules;
2. A strong form of semantic soundness and completeness of the proof system relative to a semantics for the quantifiers given in terms of nondeterministic matrices (truth tables); and
3. The strong cut elimination property for the proof system, where derivations using the cut rule can be replaced with ones without cut.
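The backward proof search enabled by cut elimination can be sketched concretely. The following is a minimal cut-free prover for classical propositional sequents Γ ⊢ ∆; the formula encoding and function names are our own illustration, not taken from any of the papers.

```python
# A minimal cut-free prover for classical propositional sequents,
# searching backward from the conclusion sequent. Formulas: variables
# are strings; compound formulas are tuples such as ('not', A),
# ('and', A, B), ('or', A, B), ('imp', A, B).

def provable(gamma, delta):
    """Return True iff the sequent gamma |- delta is derivable."""
    gamma, delta = list(gamma), list(delta)
    # Axiom: some formula occurs on both sides of the turnstile.
    if any(a in delta for a in gamma):
        return True
    # Decompose the first compound formula on the left.
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i+1:]
            if f[0] == 'not':
                return provable(rest, delta + [f[1]])
            if f[0] == 'and':
                return provable(rest + [f[1], f[2]], delta)
            if f[0] == 'or':
                return (provable(rest + [f[1]], delta) and
                        provable(rest + [f[2]], delta))
            if f[0] == 'imp':
                return (provable(rest, delta + [f[1]]) and
                        provable(rest + [f[2]], delta))
    # Decompose the first compound formula on the right.
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i+1:]
            if f[0] == 'not':
                return provable(gamma + [f[1]], rest)
            if f[0] == 'and':
                return (provable(gamma, rest + [f[1]]) and
                        provable(gamma, rest + [f[2]]))
            if f[0] == 'or':
                return provable(gamma, rest + [f[1], f[2]])
            if f[0] == 'imp':
                return provable(gamma + [f[1]], rest + [f[2]])
    return False  # only distinct atoms remain: not derivable

# Peirce's law ((p -> q) -> p) -> p is classically provable without cut.
peirce = ('imp', ('imp', ('imp', 'p', 'q'), 'p'), 'p')
print(provable([], [peirce]))  # True
```

Because each rule replaces a formula by strictly smaller ones, the search terminates; the absence of a cut rule is exactly what makes this goal-directed search possible.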

Benzmueller, Brown, and Kohlhase’s paper on Cut-simulation and Impredicativity shows that the addition of certain comprehension and extensionality axioms to higher-order logic makes it possible to simulate the cut rule. Constructing a proof with cuts for a formula is difficult since the cut formula is not constrained by the goal formula. The eliminability of the cut rule makes it possible to find proofs by systematically constructing a cut-free proof tree starting from the conclusion sequent. However, introducing axioms that can simulate the cut rule gets in the way of proof search. The paper demonstrates how these axioms can be incorporated into the inference rules for Church’s higher-order logic in a manner that is conducive to effective proof search.

Given two formulas A and B such that A =⇒ B, an interpolant is a formula I in the symbols shared between A and B such that A =⇒ I and I =⇒ B. A theory identifies a class of models. Some theories have ground (i.e., quantifier-free) interpolants when A and B are ground. Interpolant generation is a key technique in computer-aided verification for approximating the reachable states of a system and in refining abstractions. Sofronie-Stokkermans’s paper Interpolation in local theory extensions describes a modular construction for generating quantifier-free interpolants for local extensions of a base theory that is convex and has an interpolation procedure.
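As a small concrete instance of these definitions, take A = p ∧ q and B = q ∨ r; the only shared symbol is q, and I = q is an interpolant. The conditions can be checked by brute-force truth-table enumeration (the formulas and helper below are our own illustration):

```python
from itertools import product

# Check the Craig interpolant conditions for propositional formulas:
# given A => B, an interpolant I uses only shared symbols, with
# A => I and I => B. Formulas are functions over a truth assignment.

def valid_implication(vars_, f, g):
    """Check f => g by exhaustive truth-table enumeration."""
    return all(not f(env) or g(env)
               for vals in product([False, True], repeat=len(vars_))
               for env in [dict(zip(vars_, vals))])

A = lambda e: e['p'] and e['q']        # A = p /\ q  (symbols p, q)
B = lambda e: e['q'] or e['r']         # B = q \/ r  (symbols q, r)
I = lambda e: e['q']                   # I = q       (shared symbol only)

vars_ = ['p', 'q', 'r']
print(valid_implication(vars_, A, B))  # True: A => B holds
print(valid_implication(vars_, A, I))  # True: A => I
print(valid_implication(vars_, I, B))  # True: I => B
```

For ground formulas over a theory rather than propositional logic, the enumeration is replaced by theory reasoning, which is where the paper's hierarchical construction comes in.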

Constructive logics have played an important role in automated reasoning. These logics pose significant challenges since proofs are more difficult to find, but such proofs also yield more information in the form of constructive witnesses for existential claims. Constable and Moczydlowski’s paper Extracting programs from constructive HOL proofs via IZF set-theoretic semantics provides a constructive set-theoretic semantics for the constructive fragment of higher-order logic (CHOL). The semantics is defined within the intuitionistic set theory IZF, which is based on an intuitionistic first-order logic with axioms similar to those for Zermelo-Fraenkel (ZF) set theory. The soundness argument also yields a method to extract the constructive content of existence proofs in CHOL.

Benjamin Werner’s paper on Proof irrelevant type theories presents a variant of the Extended Calculus of Constructions where certain proof components can be tagged as irrelevant, in the sense that the meaning of the expression depends only on the existence of a proof and not on the actual proof itself. Proof irrelevance is built into the conversion rules. He shows how this type theory can simulate subset types, which have been used to good effect in other type systems.
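As a point of comparison, Lean 4 builds proof irrelevance for propositions into definitional equality, so the effect Werner describes can be seen directly. Lean is used here only as a familiar proof-irrelevant type theory, not as the paper's calculus:

```lean
-- In Lean 4, any two proofs of the same proposition are definitionally
-- equal, in the spirit of the conversion rule of proof irrelevant theories.
example (p : Prop) (h1 h2 : p) : h1 = h2 := rfl

-- Subset types pair a value with an irrelevant proof, so two elements of
-- {n : Nat // n > 0} are equal whenever their underlying values are.
example (a b : {n : Nat // n > 0}) (h : a.val = b.val) : a = b :=
  Subtype.eq h
```

The second example is exactly the behavior that subset types need: the proof component never blocks an equality between values.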

Daria Walukiewicz-Chrzaszcz and Jacek Chrzaszcz, in their paper on Consistency and completeness of rewriting in the Calculus of Constructions, examine the question of using rewrite rules within a proof system based on the Curry-Howard isomorphism between propositions and types. The paper presents a sound algorithm for checking that a function definition given by terminating and confluent rewrite rules is conservative.
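To make the setting concrete, here is a toy rewriter for the standard definition of addition by rewrite rules (our own illustration, not from the paper). The definition is complete in the sense the paper's algorithm checks: every closed argument is either zero or a successor, so no closed term gets stuck.

```python
# A toy first-order rewriter, sketching what "function defined by
# rewrite rules" means. Terms are 'zero', ('s', t), or ('plus', t1, t2).

def rewrite(t):
    """Normalize t with plus(0,y) -> y and plus(s(x),y) -> s(plus(x,y))."""
    if isinstance(t, tuple) and t[0] == 'plus':
        x, y = rewrite(t[1]), rewrite(t[2])
        if x == 'zero':                            # plus(0, y) -> y
            return y
        if isinstance(x, tuple) and x[0] == 's':   # plus(s(x), y) -> s(plus(x, y))
            return ('s', rewrite(('plus', x[1], y)))
        return ('plus', x, y)  # stuck: only possible with free variables
    if isinstance(t, tuple) and t[0] == 's':
        return ('s', rewrite(t[1]))
    return t

def num(n):
    """Build the numeral s^n(zero)."""
    return 'zero' if n == 0 else ('s', num(n - 1))

print(rewrite(('plus', num(2), num(1))))  # ('s', ('s', ('s', 'zero')))
```

Termination and confluence of this rule set give decidable normalization; completeness of the pattern match over the constructors zero and s is the canonicity property from which the paper derives consistency.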

Collectively, these papers present elegant results that have practical significance for the challenges of supporting mathematical expressiveness with effective automation.

Natarajan Shankar

Guest Editor and IJCAR 2006 Program Chair

Propositional canonical Gentzen-type systems, introduced in 2001 by Avron and Lev, are systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a connective is introduced and no other connective is mentioned. A constructive coherence criterion for the non-triviality of such systems was defined and it was shown that a system of this kind admits cut-elimination iff it is coherent. The semantics of such systems is provided using two-valued non-deterministic matrices (2Nmatrices). In 2005 Zamansky and Avron extended these results to systems with unary quantifiers of a very restricted form. In this paper we substantially extend the characterization of canonical systems to (n, k)-ary quantifiers, which bind k distinct variables and connect n formulas, and show that the coherence criterion remains constructive for such systems. Then we focus on the case of k ∈ {0, 1} and for a canonical calculus G show that it is coherent precisely when it has a strongly characteristic 2Nmatrix, which in turn is equivalent to admitting strong cut-elimination.

Church's Higher Order Logic is a basis for influential proof assistants -- HOL and PVS. Church's logic has a simple set-theoretic semantics, making it trustworthy and extensible. We factor HOL into a constructive core plus axioms of excluded middle and choice. We similarly factor standard set theory, ZFC, into a constructive core, IZF, and axioms of excluded middle and choice. Then we provide the standard set-theoretic semantics in such a way that the constructive core of HOL is mapped into IZF. We use the disjunction, numerical existence and term existence properties of IZF to provide a program extraction capability from proofs in the constructive core. We can implement the disjunction and numerical existence properties in two different ways: one using Rathjen's realizability for IZF and the other using a new direct weak normalization result for IZF by Moczydlowski. The latter can also be used for the term existence property.

Adding rewriting to a proof assistant based on the Curry-Howard isomorphism, such as Coq, may greatly improve usability of the tool. Unfortunately adding an arbitrary set of rewrite rules may render the underlying formal system undecidable and inconsistent. While ways to ensure termination and confluence, and hence decidability of type-checking, have already been studied to some extent, logical consistency has received little attention so far. In this paper we show that consistency is a consequence of canonicity, which in turn follows from the assumption that all functions defined by rewrite rules are complete. We provide a sound and terminating, but necessarily incomplete algorithm to verify this property. The algorithm accepts all definitions that follow dependent pattern matching schemes presented by Coquand and studied by McBride in his PhD thesis. It also accepts many definitions by rewriting, containing rules which depart from standard pattern matching.

We present a type theory with some proof-irrelevance built into the conversion rule. We argue that this feature is useful when type theory is used as the logical formalism underlying a theorem prover. We also show a close relation with the subset types of the theory of PVS. We show that in these theories, because of the additional extensionality, the axiom of choice implies the decidability of equality, that is, almost classical logic. Finally we describe a simple set-theoretic semantics.

In this paper we study interpolation in local extensions of a base theory. We identify situations in which it is possible to obtain interpolants in a hierarchical manner, by using a prover and a procedure for generating interpolants in the base theory as black-boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.

We investigate cut-elimination and cut-simulation in impredicative (higher-order) logics. We illustrate that adding simple axioms such as Leibniz equations to a calculus for an impredicative logic -- in our case a sequent calculus for classical type theory -- is like adding cut. The phenomenon equally applies to prominent axioms like Boolean- and functional extensionality, induction, choice, and description. This calls for the development of calculi where these principles are built-in instead of being treated axiomatically.