On the meaning of logical completeness

Gödel's completeness theorem is concerned with provability, while Girard's theorem in ludics (as well as full completeness theorems in game semantics) is concerned with proofs. Our purpose is to look for a connection between these two disciplines. Following previous work [3], we consider an extension of the original ludics with contraction and universal nondeterminism, which play dual roles, in order to capture a polarized fragment of linear logic and thus a constructive variant of classical propositional logic. We then prove a completeness theorem for proofs in this extended setting: for any behaviour (formula) A and any design (proof attempt) P, either P is a proof of A or there is a model M of the orthogonal of A which defeats P. Compared with proofs of full completeness in game semantics, ours exhibits a striking similarity with proofs of Gödel's completeness, in that it explicitly constructs a countermodel essentially using König's lemma, proceeds by induction on formulas, and implies an analogue of the Löwenheim-Skolem theorem.


Introduction
Gödel's completeness theorem (for first-order classical logic) is one of the most important theorems in logic. It is concerned with a duality (in a naive sense) between proofs and models: for every proposition A, either ∃P (P ⊢ A) or ∃M (M |= ¬A). Here P ranges over the set of proofs, M over the class of models, and P ⊢ A reads "P is a proof of A." One can imagine a debate on a general proposition A, where Player tries to justify A by giving a proof and Opponent tries to refute it by giving a countermodel. The completeness theorem states that exactly one of them wins. Actually, the theorem gives us far more insight than stated. Finite proofs vs infinite models: a very crucial point is that proofs are always finite, while models can be of arbitrary cardinality. Completeness thus implies the compactness and Löwenheim-Skolem theorems, leading to constructions of various nonstandard models.
In this setting, Girard shows a completeness theorem for proofs [21], which roughly claims that any "winning" design in a behaviour is a proof of it. In view of the interactive definition of behaviour, it can be rephrased as follows: for every (logical) behaviour A and every (proof-like) design P, either P ⊢ A or ∃M (M |= A⊥ and M defeats P). Here, "M |= A⊥" means M ∈ A⊥, and "M defeats P" means that P is not orthogonal to M. Hence the right disjunct is equivalent to P ∉ A⊥⊥ = A. Namely, P ∈ A if and only if P ⊢ A, which is a typical full completeness statement. Notice that M |= A⊥ no longer entails absolute unprovability of A (it is rather relativized to each P), and there is a real interaction between proofs and models.
Actually, Girard's original ludics is so limited that it corresponds to a polarized fragment of multiplicative additive linear logic (MALL), which is too weak to be a stand-alone logical system. As a consequence, one does not really observe an opposition between finite proofs and infinite models, since one can always assume that the countermodel M is finite (related to the finite model property for MALL [23]). Indeed, proving the above completeness is easy once internal completeness (a form of completeness which does not refer to any proof system [21]) has been established for each logical connective.
In this paper, we employ a term syntax for designs introduced in [30], and extend Girard's ludics with duplication (contraction) and its dual: universal nondeterminism (see [3] and references therein). Although our term approach disregards some interesting locativity-related phenomena (e.g., normalization as merging of orders and different sorts of tensors [21]), our calculus is easier to manipulate and closer to the tradition of the λ, λµ, λµµ̃, and π-calculi and other more recent syntaxes for focalized classical logic (e.g., [11]). Our resulting framework is as strong as a polarized fragment of linear logic with exponentials ([8]; see also [25]), which is in turn as strong as a constructive version of classical propositional logic.
We then prove the completeness theorem above in this extended setting. Here, universal nondeterminism is needed on the model side to interact well with duplicative designs on the proof side. This is comparable to the need for "noninnocent" (and sometimes even nondeterministic) Opponents to have full completeness with respect to deterministic, but nonlinear, Player strategies. Unlike before, we can no longer assume the finiteness of models, since finite models are not sufficient to refute infinite proof attempts. As a result, our proof is nontrivial, even after the internal completeness theorem has been proved. Indeed, our proof exhibits a striking similarity with Schütte's proof of Gödel's completeness theorem [29]. Given a (proof-like) design P which is not a proof of A, we explicitly construct a countermodel M in A⊥ which defeats P, essentially using König's lemma. Soundness is proved by induction on proofs, while completeness is by induction on types. Thus our theorem gives a matching of two inductions. Finally, it implies an analogue of the Löwenheim-Skolem theorem (and also the finite model property for the linear fragment), which well illustrates the opposition between finite proofs and infinite models of arbitrary cardinality.
In game semantics, one finds a number of similar full completeness results. However, the connection with Gödel's completeness seems less conspicuous than in our case. Typically, innocent strategies in Hyland-Ong games most naturally correspond to Böhm trees, which can be infinite (cf. [9]). Thus, in contrast to our result, one has to impose finiteness/compactness on strategies in an external way in order to have a correspondence with finite λ-terms. Although this is also the case in [3], we show that such a finiteness assumption is not needed in ludics: infinitary proof attempts are always defeated by infinitary models.
The paper is organized as follows. In Section 1 we describe the syntax of (untyped) designs; in Section 2 we move to a typed setting and introduce behaviours (semantic types). In Section 3 we introduce our proof system and prove completeness for proofs. Finally, Section 4 concludes the paper.
1. Designs

1.1. Syntax. In this paper, we employ a process calculus notation for designs, inspired by the close relationship between ludics and the linear π-calculus [17]. Precisely, we extend the syntax introduced by the second author [30] by adding a (universal) nondeterministic choice operator ⋀.
Although [30] mainly deals with linear designs, its syntax is designed to deal with nonlinear ones without any difficulty. However, in order to obtain completeness, we also need to incorporate the dual of nonlinearity, that is, universal nondeterminism [3]. It is reminiscent of differential linear logic [14], which has nondeterministic sum as the dual of contraction; the duality is essential for the separation property [27] (see also [12] for separation of Böhm trees). A similar situation also arises in Hyland-Ong game semantics [22], where nonlinear strategies for Player may contain a play in which Opponent behaves noninnocently; Opponent's noninnocence is again essential for full completeness.
Designs are built over a given signature A = (A, ar), where A is a set of names a, b, c, . . . and ar : A −→ N is a function which assigns to each name a its arity ar(a). Let V be a countable set of variables V = {x, y, z, . . .}.
Over a fixed signature A, a positive action is a with a ∈ A, and a negative action is a(x1, . . ., xn), where the variables x1, . . ., xn are distinct and ar(a) = n. We often abbreviate a sequence of variables x1, . . ., xn by x⃗. In the sequel, we always assume that an expression of the form a(x⃗) stands for a negative action, i.e., ar(a) = n and x⃗ is a sequence consisting of n distinct variables. If a is a nullary name, we simply write a for the negative action on name a.

Definition 1.1 (Designs). For a fixed signature A, the class of positive designs P, Q, . . ., that of predesigns S, T, . . ., and that of negative designs N, M, . . . are coinductively defined as follows:

P ::= Ω | ⋀{Si : i ∈ I}    (positive designs)
S ::= N0|a⟨N1, . . ., Nn⟩    (predesigns)
N ::= x | Σ a(x⃗).Pa    (negative designs)

where:
• ar(a) = n;
• x⃗ = x1, . . ., xn, and the formal sum Σ a(x⃗).Pa has |A|-many components {a(x⃗).Pa}a∈A;
• ⋀{Si : i ∈ I} is built from a set {Si : i ∈ I} of predesigns, with I an arbitrary index set.

We denote arbitrary designs by D, E, . . .. The set of designs, consisting of all positive, negative and predesigns, is denoted by D. Any subterm E of D is equivalently called a subdesign of D.
Notice that designs are coinductively defined objects. In particular, infinitary designs are included in our syntax, just as in the original ludics [21]. This is strictly necessary, since we want to express both proof attempts and countermodels as designs, both of which tend to be infinite.
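Although designs are coinductive and possibly infinite, the finite fragment of the grammar can be rendered directly as data. The following Python sketch is a made-up illustration, not part of the paper's formal development; the class names and the choice of representing Ω by `None` are our own assumptions.

```python
from dataclasses import dataclass
from typing import Tuple, FrozenSet

@dataclass(frozen=True)
class Var:
    """A negative design consisting of a variable: an identity."""
    name: str

@dataclass(frozen=True)
class Abs:
    """A sum of abstractions a(x1..xn).P_a: we list only the components
    whose body is not the divergent design Omega (a partial sum)."""
    components: Tuple[Tuple[str, Tuple[str, ...], object], ...]

@dataclass(frozen=True)
class Pre:
    """A predesign N0 | a<N1, ..., Nn>."""
    head: object            # a Var or an Abs
    name: str
    args: Tuple[object, ...]

@dataclass(frozen=True)
class Conj:
    """A positive design /\{S_i : i in I}; the empty one is the daimon."""
    conjuncts: FrozenSet[Pre]

OMEGA = None                # we represent Omega by None in this sketch
DAIMON = Conj(frozenset())

def is_cut(s: Pre) -> bool:
    # a predesign is a cut exactly when its head is an abstraction
    return isinstance(s.head, Abs)
```

For instance, the head normal form x|a⟨y⟩ is `Pre(Var("x"), "a", (Var("y"),))`, and `is_cut` separates cuts from head normal forms.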
Informally, designs may be regarded as infinitary λ-terms with named applications, named and superimposed abstractions, and a universal nondeterministic choice operator ⋀.
More specifically, a predesign N0|a⟨N1, . . ., Nn⟩ can be thought of as an iterated application of N0 to N1, . . ., Nn, tagged with the name a. In the sequel, we may abbreviate N0|a⟨N1, . . ., Nn⟩ by N0|a⟨N⃗⟩. If a is a nullary name, we simply write N0|a.
On the other hand, a negative design of the form a(x⃗).Pa can be thought of as an iterated abstraction λx⃗.Pa = λx1.(λx2.(· · · (λxn.Pa) · · · )) of an n-ary name a ∈ A. A family {a(x⃗).Pa}a∈A of abstractions indexed by A is then superimposed to form a negative design Σ a(x⃗).Pa. Since Σ a(x⃗).Pa is built from a family indexed by A, there cannot be any overlap of names in the sum. Each a(x⃗).Pa is called an (additive) component.
A predesign is called a cut if it is of the form (Σ a(x⃗).Pa)|b⟨N1, . . ., Nn⟩. Otherwise, it is of the form x|a⟨N1, . . ., Nn⟩ and is called a head normal form.
As we shall see in detail in Subsection 1.2, cuts have substantial computational significance in our setting: in fact a cut (Σ a(x⃗).Pa)|b⟨N⃗⟩ can be reduced to another design Pb[N⃗/y⃗]. Namely, when the application is of name b, one picks up the component b(y⃗).Pb from the family {a(x⃗).Pa}a∈A. Notice that the arities of y⃗ and N⃗ always agree. Then, one applies a simultaneous "β-reduction" (λy⃗.Pb)N⃗ −→ Pb[N⃗/y⃗]. The head variable x in a head normal form x|a⟨N1, . . ., Nn⟩ plays the same role as a pointer in a strategy does in Hyland-Ong games and an address (or locus) in Girard's ludics. On the other hand, a variable x occurring in a bracket as in N0|a⟨N1, . . ., Ni−1, x, Ni+1, . . ., Nn⟩ corresponds neither to a pointer nor to an address. Rather, it corresponds to an identity axiom (initial sequent) in sequent calculus, and for this reason is called an identity. If a negative design N simply consists of a variable x, then N is itself an identity.
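The reduction of a cut described above can be sketched concretely. The first-order encoding below (a dict for an abstraction, a tuple for a head normal form, with every argument a variable so that substitution is a mere lookup) is an assumption made for illustration and deliberately ignores α-renaming.

```python
def reduce_cut(abstraction, name, args):
    """One step: (Sum a(xs).P_a) | name<args>  ->  P_name[args/xs].

    abstraction: dict mapping each name to (bound variables, body);
    a body is "DAI" (the daimon) or ("call", head_var, b, arg_vars),
    standing for the head normal form head_var | b<arg_vars>."""
    if name not in abstraction:
        return "OMEGA"                     # no component: divergence
    xs, body = abstraction[name]
    assert len(xs) == len(args)            # arities always agree
    env = dict(zip(xs, args))
    if body == "DAI":
        return "DAI"
    _, head, b, arg_vars = body
    return ("call", env.get(head, head), b,
            tuple(env.get(v, v) for v in arg_vars))

# (Sum b(t).(t|c<>)) | b<N>  reduces to  N|c<>
step = reduce_cut({"b": (("t",), ("call", "t", "c", ()))}, "b", ("N",))
assert step == ("call", "N", "c", ())
```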
The positive design Ω denotes divergence (or partiality) of the computation, in a sense we will make more precise in the next subsection. We also use Ω to encode partial sums. Given a set α = {a(x⃗), b(y⃗), c(z⃗), . . .} of negative actions with distinct names {a, b, c, . . .} ⊆ A, we write Σα a(x⃗).Pa to denote the negative design Σ a(x⃗).Ra, where Ra = Pa if a(x⃗) ∈ α, and Ra = Ω otherwise. We also use an informal notation a(x⃗).Pa + b(y⃗).Pb + c(z⃗).Pc + · · · to denote Σα a(x⃗).Pa.
So far, the syntax we are describing is essentially the same as the one introduced in [30]. A novelty of this paper is the nondeterministic conjunction operator ⋀, which allows us to build a positive design P = ⋀{Si : i ∈ I} from a set {Si : i ∈ I} of predesigns, with I an arbitrary index set. Each Si is called a conjunct of P. We write ✠ (daimon) for the empty conjunction ⋀∅. This design plays an essential role in ludics, since it is used to define the concept of orthogonality. Although ✠ is usually given as a primitive (see e.g., [21, 8, 30]), we have found it convenient and natural to identify (or rather encode) ✠ with the empty conjunction. As we shall see, its computational meaning exactly corresponds to the usual one: ✠ marks the termination of a computation. Put another way, our nondeterministic conjunction can be seen as a generalization of the termination mark.
A design D may contain free and bound variables. An occurrence of subterm a(x⃗).Pa binds the free variables x⃗ in Pa. Variables which are not under the scope of the binder a(x⃗) are free. We denote by fv(D) the set of free variables occurring in D. As in the λ-calculus, we would like to identify two designs which are α-equivalent, i.e., equal up to renaming of bound variables. But this is more subtle than usual, since we also would like to identify, e.g., ⋀{S, T} with ⋀{S} whenever S and T are α-equivalent. To enforce these requirements simultaneously and hereditarily, we define an equivalence relation by coinduction.
By a renaming we mean a function ρ : V −→ V. We write id for the identity renaming, and ρ[z/x] for the renaming that agrees with ρ except that ρ[z/x](x) = z. The set of renamings is denoted by RN. A design equivalence is a binary relation R on pairs (D, ρ) of a design and a renaming such that (D, ρ) R (E, τ) implies one of the following:
(1) D = Ω = E;
(2) D = ⋀{Si : i ∈ I}, E = ⋀{Tj : j ∈ J}, and we have: (i) for any i ∈ I there is j ∈ J such that (Si, ρ) R (Tj, τ); (ii) for any j ∈ J there is i ∈ I such that (Si, ρ) R (Tj, τ);
(3) D = N0|a⟨N1, . . ., Nn⟩, E = M0|a⟨M1, . . ., Mn⟩, and (Nk, ρ) R (Mk, τ) for every 0 ≤ k ≤ n;
(4) D = x, E = y, and ρ(x) = τ(y);
(5) D = Σ a(x⃗a).Pa, E = Σ a(y⃗a).Qa, and (Pa, ρ[z⃗a/x⃗a]) R (Qa, τ[z⃗a/y⃗a]) for every a ∈ A and some vector z⃗a of fresh variables.
We say that two designs D and E are equivalent if there is a design equivalence R such that (D, id) R (E, id). See [30] for further details.
Henceforth we always identify two designs D and E, and write D = E by abuse of notation, if they are equivalent in the above sense. The following lemma is a straightforward extension of Lemma 2.6 of [30]. It makes it easier to prove the equivalence of two designs (just as the "bisimulation up-to" technique in concurrency theory makes it easier to prove bisimilarity of two processes).

Lemma 1.3. Let R be a binary relation on designs such that if D R E then one of the following holds: (1) D = Ω = E; (2) D = ⋀{Si : i ∈ I}, E = ⋀{Tj : j ∈ J}, and we have: (i) for any i ∈ I there is j ∈ J such that Si R Tj; (ii) for any j ∈ J there is i ∈ I such that Si R Tj; . . . Then D R E implies D = E.

As a notational convention, a unary conjunction ⋀{S} is simply written as S. This allows us to treat a predesign as a positive design. We also write:
• S ∈ P if P is a conjunction and S is a conjunct of P;
• P ≤ Q if either P = Ω, or both P and Q are conjunctions and for all S ∈ Q, S ∈ P.
Thus P ≤ Q indicates that P has more conjuncts than Q, unless P = Ω. We also extend the conjunction operator ∧ to positive designs and abstractions as follows.
(1) As for positive designs, we set P ∧ Q := Ω if P = Ω or Q = Ω; otherwise, P and Q are conjunctions ⋀{Si : i ∈ I} and ⋀{Tj : j ∈ J}, and we set P ∧ Q := ⋀({Si : i ∈ I} ∪ {Tj : j ∈ J}).
(2) As for abstractions, i.e., negative designs of the form Σ a(x⃗).Pa, observe that since we are working up to renaming of bound variables, it is no loss of generality to assume that in any pair Σ a(x⃗).Pa, Σ a(y⃗).Qa, one has x⃗ = y⃗ for every a ∈ A. We set: Σ a(x⃗).Pa ∧ Σ a(x⃗).Qa := Σ a(x⃗).(Pa ∧ Qa).
Observe the following: • The set of positive designs forms a semilattice with respect to ≤ and ∧.
• Ω ≤ P ≤ ✠ for any positive design P.
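These observations can be checked on a direct finite model: represent Ω by `None` and a conjunction by its frozenset of conjuncts (left abstract here), the daimon being the empty set. This is a sketch under those representation assumptions, not part of the formal development.

```python
OMEGA = None
DAIMON = frozenset()

def leq(p, q):
    """P <= Q iff P = Omega, or both are conjunctions and every conjunct
    of Q is also a conjunct of P (P has more conjuncts)."""
    if p is OMEGA:
        return True
    if q is OMEGA:
        return False
    return q <= p              # frozenset inclusion

def meet(p, q):
    """P /\ Q: Omega is absorbing; otherwise take the union of conjuncts."""
    if p is OMEGA or q is OMEGA:
        return OMEGA
    return p | q

p, q = frozenset({"S1"}), frozenset({"S1", "S2"})
assert leq(OMEGA, p) and leq(p, DAIMON)            # Omega <= P <= daimon
assert leq(meet(p, q), p) and leq(meet(p, q), q)   # /\ is a lower bound
```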
The previous definition can be naturally generalized to arbitrary sets as follows:

Definition 1.5 (⋀ operation).
(1) Given a set X of positive designs, we define the positive design ⋀X as follows:
• If X = ∅, we set ⋀X := ✠.
• Otherwise, X is a nonempty set of conjunctions and we set: ⋀X := ⋀{S : S ∈ P for some P ∈ X}.
(2) Given a set X of abstractions, we define the abstraction ⋀X as: ⋀X := Σ a(x⃗).⋀{Pa : Σ a(x⃗).Pa ∈ X}.
In particular, if X = ∅ then ⋀X = Σ a(x⃗).✠.
Notice that ⋀{D} = D, as long as D ranges over positive designs or abstractions. A design D is said to be:
• closed, if D has no occurrence of a free variable;
• linear (or affine, more precisely), if for any subdesign of the form N0|a⟨N1, . . ., Nn⟩, the sets fv(N0), . . ., fv(Nn) are pairwise disjoint;
• deterministic, if in any occurrence of a subdesign ⋀{Si : i ∈ I}, I is either empty (i.e., we have ✠) or a singleton (i.e., we have a predesign);
• cut-free, if it does not contain a cut as a subdesign;
• identity-free, if it does not contain an identity as a subdesign.
We remark that the notion of design introduced in [30] exactly corresponds, in our terminology, to that of deterministic design. Furthermore, considering the specific signature G given below, we can also express in our setting Girard's original notion of design.

Example 1.6 (Girard's syntax). Let us consider the signature G = (Pf(N), | · |), where Pf(N) is the set of finite subsets of N and the arity of a name is its cardinality. Girard's designs correspond to total, linear, deterministic, cut-free and identity-free designs which have a finite number of free variables over the signature G. See [30] for more details.
1.2. Normalization. Ludics is an interactive theory. This means that designs, which subsume both proofs and models, interact together via normalization, and types (behaviours) are defined by the induced orthogonality relation (Section 2). Several ways to normalize designs have been considered in the literature: abstract machines [7, 15, 10, 3], abstract merging of orders [21, 18], and term reduction [10, 30]. Here we extend the last solution [30]. As in the untyped λ-calculus, normalization is not necessarily terminating, but in our setting a new difficulty arises through the presence of the operator ⋀.
We define the normal forms in two steps, first giving a nondeterministic reduction rule ⇀ which finds head normal forms whenever possible, and then expanding it corecursively. As usual, let D[N⃗/x⃗] denote the design obtained by the simultaneous and capture-free substitution of negative designs N⃗ = N1, . . ., Nn for x⃗ = x1, . . ., xn in D. Given two binary relations R1, R2 on designs, we write R1R2 to denote the relation given by their composition, i.e., D R1R2 E if and only if D R1 F and F R2 E for some F. For instance, we write P ⇀*∋ S if there exists Q such that P ⇀* Q and Q ∋ S.
Examples 1.8. We now give examples of reductions and some remarks. (5) Let P = (a(y).(y|b⟨w⟩ ∧ z|c⟨M⃗⟩)) | a⟨b(t).Q⟩. We have the following reduction: (6) The special designs ✠ and Ω do not reduce to anything (as we will see, they are normal forms). (7) By its definition, our reduction is not "closed under context", i.e., if P ⇀ Q and P (resp. Q) occurs as a subdesign of D (resp. E), nothing ensures that D ⇀ E. For instance, a negative design (or a head normal form) having an occurrence of a cut as a subdesign does not reduce to anything. To expand the reduction "under context" we will use Definition 1.9.
Notice that any closed positive design P has one of the following forms: ✠, Ω, or ⋀{Si : i ∈ I}, where the Si are cuts. The conjunction then reduces to another closed positive design. Hence any sequence of reductions starting from P either terminates with ✠ or Ω, or it diverges. By stipulating that the normal form of P in case of divergence is Ω, we obtain a dichotomy between ✠ and Ω: the normal form of a closed positive design is either ✠ or Ω. This leads us to the following definition of normal form:

Definition 1.9 (Normal form). The normal form function ⟦·⟧ : D −→ D is defined by corecursion as follows:
⟦P⟧ = Ω, if there is an infinite reduction sequence, or a reduction sequence ending with Ω, starting from P;
⟦P⟧ = ⋀{x|a⟨⟦N⃗⟧⟩ : P ⇀*∋ x|a⟨N⃗⟩}, otherwise;
⟦Σ a(x⃗).Pa⟧ = Σ a(x⃗).⟦Pa⟧;
⟦x⟧ = x.
We observe that when P is a closed positive design, we have ⟦P⟧ = ✠ precisely when all reduction sequences from P are finite and terminate with ✠; thus our nondeterminism is universal rather than existential. This, however, does not mean that the set {Q : P ⇀* Q} is finite; even when it is infinite, it may happen that ⟦P⟧ = ✠.
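The universal character of ⋀ in the computation of ⟦P⟧ for closed P can be made concrete with a toy evaluator. The encoding below ("DAI" for ✠, "OMEGA" for Ω, and a tuple collecting the one-step reducts of the conjuncts of a conjunction of cuts) and the fuel bound standing in for detection of divergence are our own simplifying assumptions.

```python
def normalize(design, fuel=100):
    """Normal form of a closed positive design in the toy encoding.
    `fuel` bounds the reduction length: running out of fuel is treated as
    divergence, matching the stipulation that diverging designs
    normalize to Omega."""
    if fuel == 0 or design == "OMEGA":
        return "OMEGA"
    if design == "DAI":
        return "DAI"
    # universal nondeterminism: the result is the daimon only if EVERY
    # conjunct converges to it (the empty conjunction trivially does)
    results = [normalize(d, fuel - 1) for d in design]
    return "DAI" if all(r == "DAI" for r in results) else "OMEGA"

assert normalize(()) == "DAI"                  # empty conjunction = daimon
assert normalize((("DAI",), "DAI")) == "DAI"   # all branches reach daimon
assert normalize(("DAI", "OMEGA")) == "OMEGA"  # one diverging conjunct
```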
The following facts are easily observed:

Lemma 1.10. ⟦⋀X⟧ = ⋀{⟦P⟧ : P ∈ X}, for any set X of positive designs.
Notice that the first statement means that the composed relation ≤⇀ is equivalent to ⇀ as far as total designs are concerned.
Example 1.11 (Acceptance of finite trees). In [30], it is illustrated how words and deterministic finite automata are represented by (deterministic) designs in ludics. We may extend the idea to trees and finite tree automata in the presence of nondeterminism. Rather than describing it in full detail, we will only give an example which illustrates the power of nondeterminism to express (top-down) finite tree automata. We consider the set of finite trees labelled with a, b which are at most binary branching. It is defined by the following grammar: t ::= ǫ | a(t1, t2) | b(t1, t2). Here, a(t1, t2) represents a tree with the root labelled by a and with two subtrees t1, t2. In particular, a(ǫ, ǫ) represents a leaf labelled by a. We simply write a in this case.
Suppose that the signature A contains a unary name ↑, binary names a, b and a nullary name ǫ. We write ↓ for the positive action ↑. We abbreviate ↑(x).x|a⟨N⃗⟩ by ↑a⟨N⃗⟩. Each tree is then represented by a deterministic linear negative design. Now consider the positive design Q = Q0[x0] defined by the following equations: This design Q works as an automaton accepting trees of the form a(b, a(b, . . .)). Indeed, given a(b, a(b, b)), it works nondeterministically, and Q "accepts" the tree a(b, a(b, b)).
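The automaton behaviour that the design Q encodes can be illustrated by an ordinary top-down nondeterministic finite tree automaton in Python. The automaton below (its states, transitions and accepted language) is a made-up stand-in, not the Q of the text: it accepts the trees whose internal nodes are labelled a and whose leaves are labelled b.

```python
# Trees: a leaf is its label ("a" or "b"); an internal node is
# (label, left, right). delta maps (state, label) to a set of pairs of
# successor states; leaf_ok lists the accepting (state, label) pairs.

def accepts(delta, leaf_ok, state, tree):
    if isinstance(tree, str):                      # a leaf
        return (state, tree) in leaf_ok
    label, left, right = tree
    # existential choice of a transition, after which BOTH subtrees
    # must be accepted in the chosen successor states
    return any(accepts(delta, leaf_ok, ql, left) and
               accepts(delta, leaf_ok, qr, right)
               for (ql, qr) in delta.get((state, label), ()))

delta = {("q", "a"): {("q", "q")}}
leaf_ok = {("q", "b")}
assert accepts(delta, leaf_ok, "q", ("a", "b", ("a", "b", "b")))
assert not accepts(delta, leaf_ok, "q", ("b", "b", "b"))
```

The choice of transition is existential here; in the ludics encoding the dual, universal side surfaces through ⋀, since a conjunction converges only when all of its branches do.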
1.3. Associativity. In this subsection, we prove one of the fundamental properties of designs, which we will need later:

Theorem 1.12 (Associativity). Let D be a design and N1, . . ., Nn be negative designs. We have:
⟦D[N1/x1, . . ., Nn/xn]⟧ = ⟦⟦D⟧[⟦N1⟧/x1, . . ., ⟦Nn⟧/xn]⟧.

Associativity corresponds to a weak form of the Church-Rosser property: the normal form is the same even if we do not follow the head reduction strategy. In this paper we are not concerned with the full Church-Rosser property, and leave it as an open question.
The proof consists of several stages and it can be skipped at first reading.
To prove associativity, first notice that a simultaneous substitution D[N1/y1, . . ., Nn/yn] can be turned into a sequential one of the form D[N1/z1] · · · [Nn/zn] by renaming y1, . . ., yn to fresh variables z1, . . ., zn. This allows us to work with sequential substitutions rather than simultaneous ones.
We define a binary relation ≫ on designs by: . . .

Lemma 1.13. Suppose that . . .

Proof.
(1) By Lemma 1.10 (2), we have . . . Hence, by letting . . . If (1) is not the case, a cut must be created by the substitution of some Nj for a head variable of P0. Hence P0 must contain a head normal form yj|a⟨M⃗⟩ as a conjunct, for some 1 ≤ j ≤ n, and Nj = Σ a(x⃗).Ra, so that P contains a cut. Now the situation is as follows: Q contains

Proof.
• For the 'if' direction, we distinguish two cases.
− If there is an infinite reduction sequence from Q, then there is also an infinite sequence from P by Lemma 1.14.
− If Q ⇀* Ω, then there is P′ such that P ⇀* P′ and P′ ≫ Ω. Namely, P′ can be written as . . . Our purpose is to build either a finite reduction sequence Q ⇀* Ω or an infinite reduction sequence . . . Two cases arise:
− The reductions take place inside P0 and independently of N1, . . ., Nn. Namely, there is an infinite reduction sequence . . . for 0 ≤ i ≤ m, and Pm contains a head normal form that is responsible for the reduction Pm ⇀ Pm+1. By repeatedly applying Lemma 1.13 (1), we obtain (2). Hence by Lemma 1.10 (1), we obtain . . . In the former case, we are already done. In the latter case, we still have an infinite reduction sequence Pm+1 ⇀ Pm+2 ⇀ · · · and Pm+1 ≫ Q1. Hence we may repeat the same argument to prolong the reduction sequence.
Proof. Suppose that P ⇀* P′ ∋ x|a⟨M1, . . ., Mm⟩. By Lemmas 1.13 and 1.10 (1) (which states that the composed relation . . .) for some K⃗ = K1, . . ., Km, where x ∈ {y1, . . ., yn}, and . . . Hence by letting . . ., Lm. By Lemma 1.14, there is P′ such that P ⇀* P′ and P′ ≫ Q′. The rest is similar to the above.

Lemma 1.17. If M ≫ N, then either M = y = N for a variable y, or M = Σ a(x⃗a).Pa, N = Σ a(x⃗a).Qa and Pa ≫ Qa for every a ∈ A.

Proof. Immediate.
The following lemma completes the proof of Theorem 1.12.
Proof. Define a binary relation R on designs as follows: We now verify that this R satisfies the conditions of Lemma 1.3. First, let P, Q be positive designs such that P R Q, and P0 ≫ Q0 for some P0 and Q0.
Finally, let N, M be negative designs such that N R M, and N0 ≫ M0 for some N0 and M0.
• Otherwise, N must be of the form Σ a(x⃗a).⟦Pa⟧ and N0 = Σ a(x⃗a).Pa. Since N0 ≫ M0, M0 is of the form Σ a(x⃗a).Qa and Pa ≫ Qa for every a ∈ A by Lemma 1.17.

2. Behaviours
This section is concerned with the type structure of ludics. We describe orthogonality and behaviours in 2.1 and logical connectives in 2.2, and finally explain (the failure of) internal completeness of logical connectives in 2.3.
2.1. Orthogonality. In the rest of this paper, we mainly restrict ourselves to a special subclass of designs: we only consider designs which are total, cut-free, and identity-free. Generalizing the terminology in [30], we call them standard designs. In other words:

Definition 2.1 (Standard design). A design D is said to be standard if it satisfies the following two conditions:
(i) Cut-freeness and identity-freeness: D can be coinductively generated by the following restricted version of the grammar given in Definition 1.1:
P ::= Ω | ⋀{Si : i ∈ I}    S ::= x|a⟨N1, . . ., Nn⟩    N ::= Σ a(x⃗).Pa
(ii) Totality: D ≠ Ω.

The totality condition is due to the original work [21]. It has the pleasant consequence that behaviours (see below) are never empty. We also remark that the lack of identities can be somehow compensated by considering their infinitary η expansions, called faxes in [21]. In our setting, the infinitary η expansion of an identity x is expressed by the negative standard design η(x) defined by the equation: η(x) = Σ a(y1, . . ., yn).x|a⟨η(y1), . . ., η(yn)⟩.
We refer to [30] for more details.
We are now ready to define orthogonality and behaviours.
Definition 2.2 (Orthogonality). A positive design P is said to be atomic if it is standard and fv(P) ⊆ {x0} for a certain fixed variable x0. A negative design N is said to be atomic if it is standard and fv(N) = ∅. Two atomic designs P, N of opposite polarities are said to be orthogonal, written P ⊥ N (or equivalently N ⊥ P), when ⟦P[N/x0]⟧ = ✠.
If X is a set of atomic designs of the same polarity, then its orthogonal set, denoted by X⊥, is defined by X⊥ := {E : ∀D ∈ X, D ⊥ E}.
The meaning of ⋀ and the associated partial order ≤ can be clarified in terms of orthogonality. For atomic designs D, E of the same polarity, define D ⊑ E if and only if {D}⊥ ⊆ {E}⊥. D ⊑ E means that E has more chances of convergence than D when interacting with other atomic designs. The following is easy to observe.
Proposition 2.3. (1) ⊑ is a preorder. (2) P ≤ Q implies P ⊑ Q for any pair of atomic positive designs P, Q. (3) Let X and Y be sets of atomic designs of the same polarity. Then X ⊆ Y implies ⋀Y ⊑ ⋀X.

In particular, Ω ⊑ P for any atomic positive design P. This justifies our identification of ✠ with the empty conjunction ⋀∅.
Remark 2.4. Designs in [21] satisfy the separation property: for any designs D, E of the same polarity, we have D = E if and only if {D}⊥ = {E}⊥. But when the constraint of linearity is removed, this property no longer holds, as observed in [26] (see also [10]).
In our setting, separation does not hold, even when D and E are deterministic (atomic) designs. For instance, consider the following two designs [26]:
It is easy to see that in our setting P ⊥ N holds if and only if N has an additive component of the form ↑(z).⋀{z|↓⟨Mi⟩ : i ∈ I} for an arbitrary index set I and arbitrary standard negative designs Mi with fv(Mi) ⊆ {z}.
The same holds for Q, as can be observed from the following reduction sequence (for readability, we only consider the case in which N has a component of the form ↑(z).z|↓⟨M⟩; the general case easily follows): We therefore conclude {P}⊥ = {Q}⊥, even though P ≠ Q.
Although possible, we do not define orthogonality for nonatomic designs. Accordingly, we only consider atomic behaviours, which consist of atomic designs.
Definition 2.5 (Behaviour).A behaviour X is a set of atomic standard designs of the same polarity such that X ⊥⊥ = X.
A behaviour is positive or negative according to the polarity of its designs. We denote positive behaviours by P, Q, R, . . . and negative behaviours by N, M, K, . . .. Orthogonality satisfies the following standard properties:

Proposition 2.6. Let X, Y be sets of atomic designs of the same polarity. We have:
(1) X ⊆ X⊥⊥.
(2) X ⊆ Y implies Y⊥ ⊆ X⊥.
(3) X⊥ = X⊥⊥⊥. In particular, any orthogonal set is a behaviour.
(5) (X ∪ Y)⊥ = X⊥ ∩ Y⊥. In particular, the intersection of two behaviours is a behaviour.
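The closure properties of (·)⊥ in Proposition 2.6 depend only on orthogonality being a binary relation, so they can be checked on a made-up finite relation; the "designs" and the relation below are illustrative stand-ins, not actual designs.

```python
# Two finite sets of stand-in "designs" and an arbitrary orthogonality
# relation between them (purely illustrative data).
POS = {"P1", "P2", "P3"}
NEG = {"N1", "N2"}
REL = {("P1", "N1"), ("P2", "N1"), ("P2", "N2"), ("P3", "N2")}

def orth(x, pos_side):
    """The orthogonal of a set of positive (pos_side=True) or negative
    (pos_side=False) stand-in designs."""
    if pos_side:
        return {n for n in NEG if all((p, n) in REL for p in x)}
    return {p for p in POS if all((p, n) in REL for n in x)}

X, Y = {"P1"}, {"P1", "P2"}
assert X <= orth(orth(X, True), False)                          # X ⊆ X⊥⊥
assert orth(Y, True) <= orth(X, True)                           # X ⊆ Y ⇒ Y⊥ ⊆ X⊥
assert orth(orth(orth(X, True), False), True) == orth(X, True)  # X⊥ = X⊥⊥⊥
```

In particular the triple-orthogonal identity, which makes every orthogonal set a behaviour, holds for any relation whatsoever: it is a property of the induced Galois connection, not of ludics specifically.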
We also observe that D ⊑ E and D ∈ X imply E ∈ X when X is a behaviour. Among all positive (resp. negative) behaviours, there exist the least and the greatest behaviours with respect to set inclusion, where ✠− := Σ a(x⃗).✠ plays the role of the design called the negative daimon in [21]. Notice that behaviours are always nonempty due to the totality condition: any positive (resp. negative) behaviour contains ✠ (resp. ✠−). Now that we have behaviours, we can define contexts of behaviours and then the semantic entailment |= in order to relate designs to contexts of behaviours. These constructs play the role of typing environments in type systems. They correspond to sequents of behaviours, in the terminology of [21].
Clearly, N |= N if and only if N ∈ N, and P |= y : P if and only if P[x0/y] ∈ P. Furthermore, associativity (Theorem 1.12) implies the following quite useful principle:

Lemma 2.8 (Closure principle).
(1) Let P be a standard design with fv(P) ⊆ {x1, . . ., xn, z} and Γ a context x1 : P1, . . ., xn : Pn. First, we claim that ⟦P[M/z]⟧ is a standard design when P |= Γ, z : P and M ∈ P⊥. Indeed, it is obviously cut-free. It is also identity-free, because so are P and M, and neither the substitution P[M/z] nor the normalization ⟦P[M/z]⟧ introduces identities. Totality will be shown below. We also note that fv(⟦P[M/z]⟧) ⊆ {x1, . . ., xn}, since M is an atomic negative design, which is always closed.
Next, we observe that ⟦P[K⃗/x⃗, M/z]⟧ = ⟦⟦P[M/z]⟧[K⃗/x⃗]⟧ for any list K⃗ = K1, . . ., Kn of standard negative designs; this follows by associativity. In particular, ⟦P[K⃗/x⃗, M/z]⟧ = ✠ implies the totality of ⟦P[M/z]⟧.
We are now ready to prove the first claim. Writing K⃗ ∈ Γ⊥ for K1 ∈ P1⊥, . . ., Kn ∈ Pn⊥, we have: (2) and (3) are proven in a similar way. We just mention that the crucial equalities which are needed to show (2) and (3), respectively, can be straightforwardly derived from associativity.
2.2. Logical connectives. We next describe how to build behaviours by means of logical connectives in ludics.
We can intuitively explain the structure of logical connectives in terms of standard connectives of linear logic as follows.
The variables z1, . . ., zn play the role of placeholders for (immediate) subformulas, while α0 determines the logical structure of α. An action a(x1, . . ., xm) ∈ α0 can be seen as a kind of m-ary "tensor product" x1 ⊗ · · · ⊗ xm indexed by the name a. The whole set α0 can be thought of as a k-ary "additive sum" of its elements. In Appendix A we give a more precise correspondence between logical connectives in our sense and connectives of polarized linear logic [25].
Remark 2.12. An ethics is a set of atomic predesigns which are by construction linear in x0. It can be seen as a "generator" of the behaviour defined by a logical connective, in the following sense. For positives, we have by definition α⟨N1, . . ., Nn⟩ = (αeth⟨N1, . . ., Nn⟩)⊥⊥. For negatives, we have by Proposition 2.6 (3): α(P1, . . ., Pn) = (αeth⟨P1⊥, . . ., Pn⊥⟩)⊥.

Example 2.13. Let α be the logical connective given in Example 2.10 and N, M, K, L negative behaviours.

Example 2.14 (Linear logic connectives). Logical connectives ⅋, &, ↑, ⊥, ⊤ can be defined if the signature A contains a nullary name ∗, unary names ↑, π1, π2 and a binary name ℘. We also give notations for their duals, for readability.
where ǫ denotes the empty sequence. We do not have exponentials here, because we are working in a nonlinear setting, so that they are already incorporated into the connectives.
With these logical connectives we can build behaviours corresponding to the usual linear logic types (we use infix notations such as N ⊗ M rather than the prefix one ⊗⟨N, M⟩).
The next theorem illustrates a special feature of behaviours defined by logical connectives. It also suggests that nonlinearity and universal nondeterminism play dual roles.
Theorem 2.15.Let P be an arbitrary positive behaviour.

Proof.
(1) For any N ∈ P⊥, we have ⟦P[N/x1, N/x2]⟧ = ✠. Hence, ⟦P[x0/x1, x0/x2][N/x0]⟧ = ✠, and so P[x0/x1, x0/x2] ∈ P⊥⊥ = P. (2) By Proposition 2.3 (3), we have ⋀X ⊑ ⋀{N} = N for any N ∈ X. Since P⊥ is a behaviour, it is upward closed with respect to ⊑. Hence the claim holds. (4) For the sake of readability, we consider the binary case and show that N, M |= P⊥ implies N ∧ M |= P⊥. The general case can be proven using the same argument. To prove N ∧ M ∈ P⊥, by Remark 2.12, it is sufficient to show that N ∧ M is orthogonal to any x0|a⟨K⃗⟩ ∈ αeth⟨N1, . . ., Nn⟩. Since by construction x0 occurs only once, at the head position of x0|a⟨K⃗⟩, we only have to show that ⟦(N ∧ M)|a⟨K⃗⟩⟧ = ✠.
Let N = Σ a(x).Pa and M = Σ a(x).Qa, so that N ∧ M = Σ a(x).(Pa ∧ Qa). Since (N ∧ M)|a⟨K⟩ is a predesign, we have by Lemma 1.10 (2), (3): Since N, M ∈ P⊥, we have ⟦N|a⟨K⟩⟧ = ✠ and ⟦M|a⟨K⟩⟧ = ✠. Our claim then immediately follows.
(3) … holds for any N, M ∈ P⊥. But we have just proven that N ∧ M ∈ P⊥, and so the claim follows.

Remark 2.16. Theorem 2.15 can be considered as an internal, monistic form of soundness and completeness for the contraction rule: soundness corresponds to point (1), while completeness corresponds to its converse, point (3) (duplicability).
However, in the sequel we only use point (1) (in Theorem 3.5) and point (4) (in Lemma 3.10) of Theorem 2.15.

Internal completeness.
In [21], Girard proposes a purely monistic, local notion of completeness, called internal completeness. It means that we can give a precise and direct description of the elements of behaviours (built by logical connectives) without using orthogonality and without referring to any proof system. It is easy to see that negative logical connectives enjoy internal completeness:

Theorem 2.17 (Internal completeness (negative case)). Let α = (z, α0) be a logical connective with z = z1, . . ., zn and N = Σ a(x).Pa an atomic negative design. We have:

where the indices i1, . . ., im ∈ {1, . . ., n} are determined by the variables x = z_i1, . . ., z_im.
Proof. Let N = Σ a(x).Pa be an atomic negative design and P = x0|a⟨N1, . . ., Nm⟩ ∈ α_eth(P1⊥, . . ., Pn⊥). Since P[N/x0] is a predesign and x0 occurs only at the head position of P, we have by Lemma 1.10 (2): This means that N ∈ (α_eth(P1⊥, . . ., Pn⊥))⊥ = α(P1, . . ., Pn) (see Remark 2.12) if and only if for every a(x) ∈ α0 and for every

Notice that in the above, Pb can be arbitrary when b(y) ∉ α0. Thus our approach is "immaterial" in that we do not consider material designs (see e.g. [21, 8, 30] for the definition of material design). The original "material" version of internal completeness [21] can easily be derived from our immaterial one.
Remark 2.18. A remarkable example of internal completeness for negative behaviours is provided by the logical connective & = (x1, x2, {π1(x1), π2(x2)}):

Above, the irrelevant components of the sum are suppressed by "• • •". Up to materiality (i.e., removal of irrelevant additive components), P & Q, which has been defined by intersection, is isomorphic to the cartesian product of P and Q. This isomorphism is called "the mystery of incarnation" in [21].
As to positive connectives, [21] proves internal completeness theorems for additive and multiplicative ones separately, in the linear and deterministic setting. They are integrated in [30] as follows:

Theorem 2.19 (Internal completeness (linear, positive case)). When the universe of standard designs is restricted to linear and deterministic ones, we have

However, this is no longer true with nonlinear designs. A counterexample is given below.
and Q = x0|↓⟨↑(y).P⟩ of Remark 2.4. By construction, P belongs to P. Since P ⊑ Q, Q also belongs to P. However, Q ∉ ↓_eth⟨↑(0)⟩, since ↑(y).P is not atomic and so cannot belong to ↑(0). This motivates us to prove completeness for proofs directly, rather than deriving it from internal completeness as in the original work [21].
In [3] a weaker form of internal completeness is proved, which is enough to derive a weaker form of full completeness: all finite "winning" designs are interpretations of proofs. While such a finiteness assumption is quite common in game semantics, we will show that it can be avoided in ludics.
We end this section with the following remark.
Remark 2.21. The main linear logic isomorphism, namely the exponential one !A ⊗ !B ≅ !(A & B), can be expressed in our notation. In our setting it is possible to prove that the two behaviours are "morally" isomorphic, in the sense that they are isomorphic if we consider designs equal up to materiality.
We can in fact define a pair of maps (f, g) on designs such that:
• …, and similarly for g;
• for any P ∈ ↑P ⊗ ↑Q, the designs g(f(P)) and P are equal up to materiality in ↑P ⊗ ↑Q, and similarly for the other direction.
We postpone a detailed study of isomorphisms of types and related issues to a subsequent work.

Proof system and completeness for proofs
Having set up the framework, we now address the main problem: an interactive form of Gödel completeness. We first introduce the proof system in 3.1, then examine its soundness in 3.2, and finally prove completeness in 3.3, in a way quite analogous to the proof of Gödel's theorem based on proof search (often attributed to Schütte [29]).
3.1. Proof system. We now introduce a proof system. In our system, logical rules are automatically generated by logical connectives. Since the names which constitute the logical connectives are chosen among the names of a signature A, the set of logical connectives varies with the signature A. Thus, our proof system is parameterized by A.
If one chooses A rich enough, the constant-only fragment of polarized linear logic ([25]; see also [8]) can be embedded, as we will show in Appendix A.
In the sequel, we focus on logical behaviours, which are composed by using logical connectives only.

Definition 3.1 (Logical behaviours). A behaviour is logical if it is inductively built as follows (α denotes an arbitrary logical connective):

Notice that the orthogonal of a logical behaviour is again logical.
As advocated in the introduction, our monistic framework renders both proofs and models as homogeneous objects: designs.

Definition 3.2 (Proofs, Models). A proof is a standard design (Definition 2.1) in which all the conjunctions are unary. In other words, a proof is a total, deterministic and Ω-free design without cuts and identities. A model is a linear standard design (in which conjunctions of arbitrary cardinality may occur).
We will use proofs as proof-terms for syntactic derivations in the proof system to be introduced below. In that perspective, it is reasonable to exclude designs with non-unary conjunctions from proofs, because they do not have natural counterparts in logical reasoning. For instance, the nullary conjunction (daimon) and the binary one would correspond to the following "inference rules" respectively:

with ⊢ Γ an arbitrary sequent. Notice that we have not yet specified what a proof actually proves. Hence it might better be called a "proof attempt," an "untyped proof," or a "para-proof." On the other hand, we restrict models to linear designs just to emphasize the remarkable fact that linear designs suffice for defeating any failed proof attempt, which may itself be nonlinear.
Given a design D, let ac+(D) be the set of occurrences of positive actions in D. The cardinality of D is defined to be the cardinality of ac+(D). For instance, the fax η(x) = Σ a(y1, . . ., yn).x|a⟨η(y1), . . ., η(yn)⟩ (see Section 2.1) is an infinite design in this sense. Also, both proofs and models can be infinite.
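That the fax is infinite in this sense can be illustrated by a small computation. The sketch below is a toy, not the paper's formal machinery: it fixes a hypothetical two-name signature and counts occurrences of positive actions in finite unfoldings of η(x), checking that the count grows without bound as the unfolding deepens.

```python
# Toy illustration: the fax  eta(x) = sum_a a(y1..yn). x|a<eta(y1),...,eta(yn)>
# contributes one positive action x|a<...> per name at each unfolding level,
# plus the actions of the recursive copies, so ac+ of the full fax is infinite.

SIGNATURE = {"a": 2, "b": 1}  # hypothetical names with their arities

def positive_actions_in_fax(depth):
    """Count positive-action occurrences in the fax unfolded to `depth`."""
    if depth == 0:
        return 0
    total = 0
    for _name, arity in SIGNATURE.items():
        # one positive action x|a<...> plus the copies eta(y_1), ..., eta(y_arity)
        total += 1 + arity * positive_actions_in_fax(depth - 1)
    return total

counts = [positive_actions_in_fax(d) for d in range(5)]
assert all(c1 < c2 for c1, c2 in zip(counts, counts[1:]))  # strictly growing
```

Any other signature with at least one name behaves the same way; the point is only that no finite bound on ac+ exists.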
A positive (resp. negative) sequent is a pair of the form P ⊢ Γ (resp. N ⊢ Γ, N) where P is a positive proof (resp. N is a negative proof) and Γ is a positive context of logical behaviours (Definition 2.7 (a)) such that fv(P) ⊆ fv(Γ) (resp. fv(N) ⊆ fv(Γ)).
We write D ⊢ Λ for a generic sequent. Intuitively, a sequent D ⊢ Λ should be understood as the claim that "D is a proof of ⊢ Λ" or "D is of type ⊢ Λ." Our proof system consists of two sorts of inference rules:
• A positive rule (α, a): where α = (z, α0), z = z1, . . ., zn and a(x) ∈ α0, so that the indices i1, . . ., im ∈ {1, . . ., n} are determined by the variables x = z_i1, . . ., z_im.
• A negative rule (α): where, as in the positive rule, the indices i1, . . ., im are determined by the variables x = z_i1, . . ., z_im for each a(x) ∈ α0.
We assume that the variables x are fresh, i.e., that they do not occur in Γ. This does not cause a loss of generality, since variables in α can be renamed (see Definition 2.9).
Notice that a component b(y).Pb of Σ a(x).Pa can be arbitrary when b(y) ∉ α0. Hence we again take an "immaterial" approach (cf. Theorem 2.17).
Observe that the positive rule (α, a) involves implicit uses of the contraction rule on positive behaviours. The weakening rule for positive behaviours is implicit too: in the bottom-up reading of a derivation, unused formulas are always propagated to the premises of any instance of a rule. It should also be noted that proof search in our system is deterministic. In particular, given a positive sequent z|a⟨M1, . . ., Mm⟩ ⊢ Γ, the head variable z and the first positive action a completely determine the next positive rule to be applied bottom-up (if there is any).
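This determinism can be made concrete with a small sketch. The encoding below is hypothetical (not the paper's syntax): a context maps each variable to the action set α0 of its connective, and the pair (head variable, first action) selects at most one positive rule.

```python
# Toy sketch of deterministic bottom-up proof search: for a positive sequent
#   z|a<M1,...,Mm> |- Gamma,
# the head variable z and the first positive action a pick out at most one
# positive rule (alpha, a). Names and encodings here are illustrative only.

def next_positive_rule(head_var, action, context):
    """context: maps a variable to its connective, given as the set alpha_0
    of available actions (name -> arity)."""
    alpha0 = context.get(head_var)
    if alpha0 is None or action not in alpha0:
        return None                      # search is stuck: no rule applies
    return ("rule", head_var, action)    # the unique applicable rule (alpha, a)

ctx = {"z": {"a": 2, "b": 0}}            # z : alpha with alpha_0 = {a(x1,x2), b()}
assert next_positive_rule("z", "a", ctx) == ("rule", "z", "a")
assert next_positive_rule("z", "c", ctx) is None
```

The point is that no choice is ever made during positive steps, which is what makes the proof-search tree (and hence the open-branch argument of Section 3.3) well defined.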
It is also possible to adopt a "material" approach in the proof system, by simply requiring Pb = Ω when b(y) ∉ α0 in the rule (α). Then a proof D is finite (i.e., ac+(D) is a finite set) whenever D ⊢ Λ is derivable for some Λ. Thus, as in ordinary sequent calculi, our proof system accepts only essentially finite proofs for derivable sequents (i.e., finite up to removal of irrelevant parts).
Remark 3.3. To clarify the last point, we observe that for any (possibly infinite) negative proof N with fv(N) ⊆ fv(Γ), the sequent N ⊢ Γ, ⊤ is derivable by the instance of the negative rule with α = ⊤ = (ε, ∅). In fact, this corresponds to the usual top-rule of linear logic (see also Example 3.4):

This means that for a (possibly infinite) negative proof N there is a finite derivation of N ⊢ Γ, ⊤. By contrast, in the "material" approach we only have

where Σ a(x).Ω is the unique negative proof of cardinality 0.
Example 3.4. For linear logic connectives (Example 2.14), the positive and negative rules specialize to the following (taking here the "material" approach):

Thanks to the previous proposition, we can naturally strengthen our proof system as follows.
First, we consider sequents of the form D ⊢ Λ where D is a "proof with cuts" (i.e., a proof in the sense of Definition 3.2 except that the cut-freeness condition is not imposed). Second, we add the following cut rule:

where Ξ is either empty or consists of a negative logical behaviour N.
The soundness theorem can be naturally generalized as follows:

Theorem 3.7 (Soundness (with cut rule)). If D ⊢ Λ is derivable in the proof system with the cut rule above, then D |= Λ. In particular, for any positive logical behaviour P and any proof P, the sequent P ⊢ x0 : P is derivable if and only if P ∈ P. Similarly for the negative case.
Before proving the theorem, let us recall a well-established method for proving Gödel completeness based on proof search (often attributed to Schütte [29]). It proceeds as follows: (1) given an unprovable sequent ⊢ Γ, find an open branch in the cut-free proof search tree; (2) from the open branch, build a countermodel M in which ⊢ Γ is false. The proof below follows the same line of argument. We can naturally adapt (1) to our setting, since bottom-up cut-free proof search in our system is deterministic, in the sense that at most one rule applies at each step. Moreover, it never gets stuck at a negative sequent, since a negative rule is always applicable bottom-up. Adapting (2) is more delicate.
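The overall shape of step (1) can be sketched as follows. Since our proof search is deterministic, each sequent has at most one successor, so a branch is simply iterated expansion; the `expand` function and the sequent encoding below are hypothetical stand-ins, not the paper's system.

```python
# Toy sketch of Schuette-style proof search: run deterministic bottom-up
# expansion; either the branch closes (a derivation) or we exhibit an open
# branch, from which a countermodel is then read off (step (2) of the method).

def search(sequent, expand, fuel=1000):
    """expand(s) -> "close" (the branch closes), None (stuck: open leaf),
    or the unique next sequent (determinism makes the branch unique)."""
    branch = [sequent]
    for _ in range(fuel):
        step = expand(branch[-1])
        if step == "close":
            return ("derivation", branch)
        if step is None:
            return ("open branch", branch)   # finite open branch
        branch.append(step)
    return ("open branch", branch)           # search never closed within fuel

# Sequents encoded as a countdown; 0 closes, the token "stuck" is an open leaf.
expand = lambda s: "close" if s == 0 else (None if s == "stuck" else s - 1)
assert search(3, expand)[0] == "derivation"
assert search("stuck", expand)[0] == "open branch"
```

In the real setting the open branch may be infinite (the `fuel` cutoff stands in for König's lemma on the finitely branching search tree), and the branch itself is the data from which the models M(i), M(x) below are built.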
For simplicity, we assume that the sequent D ⊢ Λ is positive; the argument below can easily be adapted to the negative case. So, suppose that a positive sequent P0 ⊢ Θ0 with Θ0 = x1 : P1, . . ., xn : Pn does not have a derivation. By König's lemma, there exists a branch ob in the cut-free proof search tree, ob = . . . ,
which is either finite, with a topmost sequent P_max ⊢ Θ_max (max ∈ N) to which no rule applies anymore, or infinite. In the latter case, we set max = ∞.
Our goal is to build models M(x1), . . ., M(xn). More generally, we define negative designs
• M(i) for every i ≥ 0 (0 ≤ i ≤ max if max ∈ N);
• M(x) for every variable x occurring in the branch.
We assume that max = ∞. The idea is to chop off the branch ob at height K, where K is an arbitrary natural number, and to define finite approximations M_K(i) and M_K(y). Then M(i) and M(y) arise as the limits when K → ∞.
More concretely, given a natural number K, we define M_K(i) by downward induction from i = K to i = 0 as follows:
• When i = K, the sequent P_K ⊢ Θ_K is of the form z|a⟨M⟩ ⊢ Γ, z : α(N). We let M_K(K) := Σ_{a(x)∈α0} a(x).✠.
• When i < K, we proceed as in case (iii) above. Namely,
M_K(i) := a(x).z_ik|b⟨M_K(y1), . . ., M_K(yl)⟩ + Σ_{c(w)∈α0\{a(x)}} c(w).✠,
M_K(y) := ⋀{M_K(j) : i < j ≤ K and P_j has head variable y},
where the actions a(x), b and the index i_k are determined as before. Now observe that the sequence {M_K(y)}_{K∈N} is "monotone increasing," in the sense that M_K2(y) has more conjuncts than M_K1(y) whenever K1 < K2; the same holds for M_K(i) with i ≤ K. Hence we can naturally obtain the "limits"

This construction yields the same result as the previous recursive one.
Observe that each M(i) and M(x) thus constructed is indeed a model, i.e., an atomic linear design. Theorem 3.8 is a direct consequence of the following two lemmas.
The first lemma crucially rests on induction on logical behaviours, which is an analogue of induction on formulas and lies at the core of logical completeness in many settings.

Lemma 3.10. For P_i ⊢ Θ_i appearing in the branch ob above, suppose that P_i has head variable z and z : R ∈ Θ_i. Then:

Proof. By induction on the construction of R.
The proof of the next lemma suggests a similarity between the construction of our countermodels and the Böhm-out technique (see, e.g., [2]), which constructs a suitable term context in order to visit a specific position in the Böhm tree of a given λ-term.
Recall that the initial sequent of our open branch ob is P0 ⊢ Θ0 with Θ0 = x1 : P1, . . ., xn : Pn, so that fv(P0) ⊆ {x1, . . ., xn}. We have:

Lemma 3.11.

Proof. We first prove that there is a reduction sequence

for any i < max, where v1, . . ., vs and w1, . . ., wt are the free variables of P_i and P_{i+1}, respectively. Suppose that P_i is as in case (iii) above, and so has head variable z ∈ {v1, . . ., vs}.
Our explicit construction of the countermodels yields a by-product:

Corollary 3.12 (Downward Löwenheim-Skolem, Finite model property).
(1) Let P be a proof and P a logical behaviour. If P ∉ P, then there is a countable model M ∈ P⊥ (i.e., ac+(M) is a countable set) such that P ̸⊥ M. (2) Furthermore, when P is linear, there is a finite and deterministic model M ∈ P⊥ such that P ̸⊥ M.
The second statement is due to the observation that when P is linear, the positive rule (α, a) can be replaced with a linear variant:

where Γ1, . . ., Γm are disjoint subsets of Γ. We then immediately see that the proof search tree is always finite, and so is the model M(x). The model is deterministic, since each variable occurs at most once as a head variable in a branch, so that all conjunctions are at most unary.

Conclusion and related work
We have presented a Gödel-like completeness theorem for proofs in the framework of ludics, aiming at linking completeness theorems for provability with those for proofs. We have explicitly constructed a countermodel against any failed proof attempt, following Schütte's idea based on cut-free proof search. Our proof employs König's lemma and reveals a sharp opposition between finite proofs and infinite models, leading to a clear analogy with the Löwenheim-Skolem theorem. Our proof also employs an analogue of the Böhm-out technique [4, 2] (see the proof of Lemma 3.11), though it does not lead to the separation property (Remark 2.4). In Hyland-Ong game semantics, Player's innocent strategies most naturally correspond to possibly infinite Böhm trees (see, e.g., [9]). One could of course impose finiteness (or compactness) on them to obtain a correspondence with finite proofs. But this would not lead to an explicit construction of Opponent's strategies defeating infinite proof attempts. Although finiteness is imposed in [3] too, our current work shows that it is not necessary in ludics.
Our work also highlights the following duality: when proofs admit contraction, models have to be nondeterministic (whereas they do not have to be nonlinear).
A similar situation arises in some variants of λ-calculus and linear logic, when one proves the separation property.
We mention [12], where the authors add a nondeterministic choice operator and a numeral system to the pure λ-calculus in order to internally (interactively) discriminate two pure λ-terms that have different Böhm trees. However, in contrast to our work, the nondeterminism needed for their purpose is of an existential nature: a term converges if at least one of the possible reduction sequences starting from it terminates.
In [27], the separation property for differential interaction nets [14] is proven. A key point is that the exponential modalities in differential interaction nets are more "symmetrical" than in linear logic. In our setting, the symmetry shows up between nonlinearity and nondeterministic conjunctions (i.e., nonuniform elements). It is typically found in Theorem 2.15, which reveals a tight connection between duplicability of positive logical behaviours and closure under nondeterministic conjunctions of negative logical behaviours. Similar nonuniform structures naturally arise in various semantical models based on coherence spaces and games, such as finiteness spaces [13], indexed linear logic and nonuniform coherence spaces [6], nonuniform hypercoherences [5], and asynchronous games [28] (see also [3]).
For future work, we plan to extend our setting by enriching the proof system with propositional variables, second-order quantifiers and nonlogical axioms. By moving to the second-order setting, we hope to give an interactive account of Gödel's incompleteness theorems as well.
A.2. Restriction to strict derivations. We call a sequent of LLP strict if it is of the form ⊢ ?Γ, D, where D is an arbitrary formula. In particular, ⊢ ?Γ is strict. We modify the inference rules as follows:
• Structural rules are made implicit by absorbing weakening and contraction into logical inference rules.
• The rules for positive connectives and the ?-dereliction rule are restricted to strict sequents.
• The cut rule is omitted.
We thus obtain the following inference rules:

We call the resulting proof system LLP_str.
Notice that a derivation of a strict sequent in LLP_str may involve sequents which are not strict. For instance, consider:

The following property can easily be verified by taking into account the invertibility of negative rules and the focalization property of positive rules [1].
Lemma A.1. A strict sequent is provable in LLP if and only if it is provable in LLP_str.
Strict sequents will play a crucial role in the correspondence between LLP and the proof system for ludics (Theorem A.4). The intuition, which we will formalize later, is that a strict sequent ⊢ ?P1, . . ., ?Pn, D can be thought of as a sequent of the proof system of ludics (omitting the information about designs) of the form ⊢ P1, . . ., Pn, D.
The above decomposition motivates us to cluster the logical connectives of the same polarity into synthetic connectives (cf. [20]). Consider the expressions finitely generated by the grammar

p ::= 1 | 0 | !x | p ⊗ p | p ⊕ p,    n ::= ⊥ | ⊤ | ?x | n ⅋ n | n & n,

where x ranges over the set of variables.
We write var(p) (resp. var(n)) for the set of variables occurring in p (resp. n). p is a positive synthetic connective if for every subexpression of p of the form p1 ⊗ p2, var(p1) and var(p2) are disjoint. For instance, !x ⊗ (!y ⊕ !y) is a positive synthetic connective, while !x ⊕ (!y ⊗ !y) is not. Likewise, n is a negative synthetic connective if for every subexpression of n of the form n1 ⅋ n2, var(n1) and var(n2) are disjoint. This condition is needed when we translate synthetic connectives into logical connectives of ludics.
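The side condition is easy to check mechanically. The sketch below encodes expressions as nested tuples (a hypothetical encoding, not the paper's notation, with !x represented by the bare variable) and tests the disjointness of variable sets under every ⊗ or ⅋ node:

```python
# Toy checker for the synthetic-connective condition: in every subexpression
# p1 (x) p2 with (x) a tensor or par, the variable sets of the two sides must
# be disjoint. Constants are "1", "0", "bot", "top"; !x / ?x are bare "x".

def variables(expr):
    """Collect the variables occurring in an expression."""
    if isinstance(expr, str):
        return {expr} if expr not in ("1", "0", "bot", "top") else set()
    _op, *args = expr
    return set().union(*map(variables, args))

def is_synthetic(expr):
    """Check the disjointness condition at every tensor/par node."""
    if isinstance(expr, str):
        return True
    op, *args = expr
    if op in ("tensor", "par"):
        left, right = args
        if variables(left) & variables(right):
            return False
    return all(is_synthetic(a) for a in args)

# The paper's two examples: !x (x) (!y (+) !y) is synthetic, !x (+) (!y (x) !y) is not.
assert is_synthetic(("tensor", "x", ("plus", "y", "y")))
assert not is_synthetic(("plus", "x", ("tensor", "y", "y")))
```

Note that ⊕ and & are exempt from the condition, which is why the repeated y under ⊕ is harmless in the first example.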
We indicate the variables occurring in p by writing p = p(x1, . . ., xn), and similarly for n. Given a negative synthetic connective n, its dual n^d is obtained by replacing ⊤ with 0, ⊥ with 1, ⅋ with ⊗, & with ⊕, and ? with !, at each occurrence of a symbol. p^d is defined similarly.
We thus consider the proof system LLP_syn, which consists of three types of inference rules:

A.4. Relating to the ludics proof system. Let us now move on to the proof system for ludics described in 3.1. We assume that the signature A is rich enough to interpret LLP:
• A contains a nullary name * and a unary name ↑.
• If A contains an n-ary name a and an m-ary name b, it also contains n-ary names π1a, π2a and an (n + m)-ary name a℘b (cf. Example 2.14).
One can annotate derivations in L with designs as in Section 3.1. Therefore the above theorem means that ludics designs can be used as a term syntax for LLP, as far as strict sequents and derivations are concerned (although one has to verify carefully that the translation preserves the reduction relation).
A.5. From ludics to LLP. It is also possible to give a converse translation from the logical behaviours of ludics to the formulas of LLP. To do so, we proceed as follows (cf. Example 2.10):
• to each action a(x1, . . ., xm), we associate the synthetic connective a(x1, . . ., xm)• := ?x1 ⅋ • • • ⅋ ?xm (a()• := ⊥, if a is nullary);
• to each logical connective α = (z, {a1(x1), . . ., ak(xk)}), we associate the synthetic connective α•
It is routine to define an isomorphism between D and D•• (resp. between D and D••) in some natural sense. Moreover, the translation of a sequent of system L always results in a strict sequent of LLP. We therefore conclude by Theorem A.4:

Theorem A.
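The first clause of the translation can be sketched directly. The nested-tuple encoding below is hypothetical; only the mapping a(x1, . . ., xm) ↦ ?x1 ⅋ • • • ⅋ ?xm (and a() ↦ ⊥) comes from the text.

```python
# Toy sketch of the action translation of A.5: an m-ary action a(x1,...,xm)
# is sent to the synthetic connective ?x1 par ... par ?xm, and a nullary
# action to bot. Encoding: ("whynot", x) is ?x, ("par", l, r) is l par r.

def action_to_synthetic(vars_):
    """Translate the variable list of an action a(x1,...,xm)."""
    if not vars_:
        return "bot"                          # a()* := bot
    result = ("whynot", vars_[0])             # ?x1
    for v in vars_[1:]:                       # fold the remaining ?xi with par
        result = ("par", result, ("whynot", v))
    return result

assert action_to_synthetic([]) == "bot"
assert action_to_synthetic(["x1", "x2"]) == \
    ("par", ("whynot", "x1"), ("whynot", "x2"))
```

The translation of a whole logical connective α then combines the translated actions of α0, following the second clause above.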

3.3. Completeness for proofs. Let us finally establish the other direction of Theorem 3.5, namely:

Theorem 3.8 (Completeness for proofs). A sequent D ⊢ Λ is derivable in the proof system if and only if D |= Λ.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.