Ludics with Repetitions (Exponentials, Interactive Types and Completeness)

We prove that it is possible to extend Girard's Ludics so as to have repetitions (hence exponentials), and still have the results on semantical types which characterize Ludics in the panorama of Game Semantics. The results are obtained by using less structure than in the original paper; this is of interest in its own right, and we hope that it will open the way to applying the approach of Ludics to a larger domain.


I. INTRODUCTION
Ludics is a research program introduced by Girard [12] with the aim of providing a foundation for logic based on interaction. It can be seen as a form of Game Semantics where first we have the definition of interaction (composition, normalization), and then we have semantical types, as sets of strategies which "behave well" w.r.t. composition. This role of interaction in the definition of types is where the specificity of Ludics lies in the panorama of Game Semantics.
Strategies are untyped, in the sense that all strategies are given on a universal arena (the arena of all possible moves); strategies can always interact with each other, and the interaction may terminate well (the two strategies "accept each other", and are said to be orthogonal) or not (they deadlock). An interactive type is a set of strategies which "compose well", and react in the same way to a set of tests (see Section IV). A semantical type G is any set of strategies which react well to the same set E of tests, which are themselves strategies (counter-strategies), i.e. G = E ⊥ .
Internal completeness: With Ludics, Girard introduces a new notion of completeness, called internal completeness (see Section V). This is a key, characterizing element of Ludics. We have already mentioned that a semantical type is a set of strategies closed under biorthogonal (G = G ⊥⊥ ). Internal completeness is the property that the constructions on semantical types do not require any closure, i.e. are already closed under biorthogonal. While it is standard in realizability that a semantical type is a set S of terms closed under biorthogonal (S = S ⊥⊥ ), when interpreting types one has to perform some kind of closure, and this operation can introduce new terms. For example, the interpretation of A ⊕ B is (A ∪ B) ⊥⊥ . This set of terms could in general be strictly greater than A ∪ B. We have internal completeness if A ∪ B is proven to be equal to (A ∪ B) ⊥⊥ . Since the closure by biorthogonal does not introduce new terms, we have a complete description of what inhabits the semantical type.
In Girard's paper [12], the semantical types which are interpretation of formulas enjoy internal completeness. This is really the key property (and the one used in [18], [20]). Full completeness (for Multiplicative Additive Linear Logic MALL, in the case of [12]) follows from it.

A. Contributions of the paper
The purpose of this paper is two-fold. On the one hand, we show that it is possible to overcome the main limitation of Ludics, namely the constraint of linearity, hence the lack of exponentials: internal completeness (and from that full completeness) can be obtained also in the presence of repetitions, if one extends the setting of Ludics in a rather natural way. On the other hand, we provide proofs which use less structure than the original ones by Girard. Not only do we believe this improves the understanding of the results, but, more fundamentally, we hope this opens the way to the application of the approach of Ludics to a larger domain. We now give more details on the content of the paper.
Ludics Architecture: A difficulty in [12] is that there is a huge amount of structure. Strategies are an abstraction of MALL proofs, and enjoy many good properties (analytical theorems). In [12], all proofs of the high level structure of Ludics make essential use of these properties. Since the properties are very specific to the particular nature of the objects, this makes it difficult to extend the -very interesting -approach of Ludics to a different setting, or build the interactive types on different computational objects.
By relying on less structure, we show that the high level architecture of Ludics is somehow independent of the low level entities (strategies), and could in fact be built on other computational objects.
In particular, separation is a strong property. This makes it a great property to have, but also one which is uncommon in other settings. However, the fact that computational objects do not enjoy separation does not mean that it is not possible to build the "high level architecture" of Ludics. We show in fact (Section V) that the proofs of internal and full completeness rely on much less structure, namely operational properties of the interaction.
We believe that discriminating between internal completeness and the properties which are specific to the objects is important both to improve understanding of the results, and to make it possible to build the same construction on different entities.
In particular, strategies with repetitions have weaker properties than in the original version. We show that it is still possible to have interactive types, internal completeness, and from this full completeness for polarized MELL (Multiplicative Exponential Linear Logic). The extension to full polarized Linear Logic [15] is straightforward.
Exponentials in Ludics: Exponentials have been the main open problem in Ludics since [12]. Maurel in [16] proposes a first solution based on the use of probabilistic strategies. This solution is limited by its technical complexity, and is therefore not developed as far as a full completeness result. Maurel also explores a simpler solution, but does not pursue it further because of the failure of the Separation Theorem. Our work builds on his simpler solution.

B. Our approach
There are two standard branches of Game Semantics: AJM style Game Semantics [1], which is based on Girard's Geometry of Interaction, and HO style Game Semantics [14], which introduces innocent strategies. Strategies in [8] are a linear form of innocent strategies.
The most natural solution to extend Ludics with exponentials is hence to take as strategies standard HO innocent strategies (on a universal arena). In doing so, there are two kinds of difficulty, which we deal with in this paper.
The first difficulty in extending Ludics with repetitions is that, using HO-style strategies, separation fails. We deal with this by showing that the proofs of internal completeness and full completeness can be given in a direct way, without relying on Separation (Section V).
The second difficulty is that one needs to have enough tests. This problem is analogous to the one which led Girard to the introduction of the daimon rule: in Ludics, one typically opposes to an abstract "proof of A" an abstract "counter-proof of A". To have enough tests (that is, to have both proofs of A and proofs of A ⊥ ) there is a new rule which allows us to justify any premise. Similarly, when we oppose to a proof of ?A a proof of !A ⊥ (= (?A) ⊥ ), we need enough counter-strategies. As we illustrate in Section VI, we need non-uniform counter-strategies. We realize this by introducing a non-deterministic sum of strategies. Motivations and a sketch of the solution are detailed in Section VI-C.
C. Related work

AJM style exponentials: A different solution, which uses AJM style exponentials, is developed by Basaldella in [3]: !A is interpreted as an infinite tensor product of the interpretation of A, where each copy of the interpretation of A receives a different index. However, the approach we use in this paper is considerably simpler, and we hope better suited to more applicative uses of Ludics [9], [18], [20].
Game Semantics: We build on the variant of HO strategies introduced in [15]. Moreover, we are interested in connections with the resource modalities of Games Semantics [17].
Abstract Machines: Curien and Herbelin in [6] have studied composition of strategies as sets of views. In particular they have developed the View Abstract Machine (VAM) which is the device we use in this paper.
Non-deterministic innocent strategies: These were introduced by Harmer in [13], with the purpose of modeling non-determinism (PCF with erratic choice). In this paper we introduce non-uniform strategies, which are realized by means of non-deterministic sums, relying on work developed by Faggian and Piccolo [10]. Our purpose here is not to model non-determinism, but to implement non-uniformity via "formal sums" of strategies, in order to provide enough tests to make the interactive approach of Ludics possible (inside the model), similarly to Girard's introduction of strategies corresponding to "incomplete" proofs. The different purpose is reflected in the composition, which in our setting reduces to deterministic composition. Our strategies can be seen as a "concrete" implementation of Harmer's solution, in a simplified setting (see [4] for further details).

II. CALCULUS
In this section, we introduce a calculus that we call MELLS, a variant of polarized MELL based on synthetic connectives. In Section VIII-B, we prove that our model is fully complete for MELLS.

A. MELL
Formulas of propositional Multiplicative Exponential Linear Logic MELL [11] are finitely generated by the grammar

F ::= X | X ⊥ | 0 | 1 | ⊤ | ⊥ | F ⊗ F | F`F | ?F | !F

where X, X ⊥ are propositional variables (also called atoms). Linear logic distinguishes formulas into: linear formulas: 0, 1, ⊤, ⊥, F ⊗ F, F`F ; exponential formulas: ?F, !F . Linear formulas can only be used once, while the modalities !, ? allow formulas to be repeated. The possibility of repeating formulas is expressed by the contraction rule on ?F formulas, which derives ?F, Γ from the premise ?F, ?F, Γ. Dually, the modality ! allows proofs to be used several times during the cut-elimination procedure, once for each duplication of ?F . Connectives and constants of MELL are also split into two classes, according to their polarity:
Positive: 0, 1, ⊗, ?
Negative: ⊤, ⊥,`, !

Remark II.1 (Modalities). Following [19], we write ! for the negative modality, and ? for the positive one, because these symbols are more familiar. However, in a polarized setting such as in [15], it is more common to write, resp., (negative) and (positive).

B. MELLS
We now introduce the calculus MELLS. Formulas are here built from synthetic connectives [12], i.e. maximal clusters of connectives of the same polarity. The key ingredient that allows for the definition of synthetic connectives is focalization [2]: Andreoli has proven that if a sequent is provable, then it is provable with a focusing proof. With synthetic connectives, formulas are in a canonical form, where immediate subformulas have opposite polarity. Hence in a (cut-free) proof of MELLS there is a positive/negative alternation of rules, which matches the standard Player (positive)/Opponent (negative) alternation of moves in a strategy (see Section III). Formulas of MELLS split into positive (P ) and negative (N ), where X and X ⊥ are propositional variables. We use F as a variable for formulas, and indicate the polarity also by writing F + or F − . We often write F + (N 1 , . . . , N n ) and F − (P 1 , . . . , P n ).
Linear negation ⊥ is defined as usual. A sequent of MELLS is a multi-set of formulas, written F 1 , . . . , F n , such that it contains at most one negative formula. For Γ a multi-set of positive formulas, we have rules in which, in the last rule, Ξ is either empty or negative. Notice that the usual Linear Logic structural rules (weakening, contraction, promotion and dereliction) are always implicit in our calculus (see [4] for more details).

III. HO STYLE GAME SEMANTICS
An innocent strategy [14] can be described either in terms of all possible interactions for the player (strategy as set of plays), or in a compact way, which provides only the minimal information for Player to move (strategy as set of views) [13]. It is standard that the two presentations are equivalent. Here we use the "strategy as set of views" description. We recall the definitions, following Harmer's and Laurent's presentations.
Let Pol := {+, −} be the set of polarities (here, positive and negative). We use ε ∈ Pol as a variable for polarities. We call positive (resp. negative) a strategy on a positive (resp. negative) arena.

Definition III.1 (Arena). An arena (A, ⊢ A , λ A ) is given by: a set A of moves, an enabling relation ⊢ A on moves, and a labeling λ A : A → Pol which gives the polarity of each move.
The polarity of a move in a strategy (i.e. positive/Player or negative/Opponent) is given by the arena. We sometimes make the polarity of a move x explicit by writing x + or x − , but we omit it when clear from the context. In the same way, we annotate strategies with their polarity, writing D + , D − .
Tree notation: Emphasizing the tree structure, we also write a strategy whose first move is a as D = a.D′. More precisely, if D is a positive strategy, we write it as D = a.{D 1 , . . . , D n }.
Composition of strategies: Composition of strategies as sets of views is well studied by Curien and Herbelin, who introduce the View -Abstract -Machine (VAM) [6] by elaborating on Coquand's Debates machine [5].

IV. LUDICS
In this and the next section we give a compact but complete presentation of Ludics, in a language which fits that of Game Semantics.
Let us first stress the peculiarity of Ludics in the panorama of Game Semantics. In Game Semantics, one defines constructions on arenas which correspond to the interpretation of types. A strategy is always "typed", in the sense that it is a strategy on a specific arena. When strategies are opportunely typed, they interact (compose) well.
In the approach of Ludics, strategies are "untyped", in the sense that all strategies are defined on the universal arena. Strategies then interact with each other, and the interaction can terminate well (the two strategies "accept" each other) or not (deadlock).
Daimon: Ludics provides a homogeneous setting in which both proofs and tests live: proofs of A interact with proofs of A ⊥ ; to this end, the notion of proof is generalized. A new rule, called daimon and written †, is introduced: it proves any conclusion Γ, without premises.
In the semantics, the daimon is a special action which acts as a termination signal.

A. Strategies on a Universal Arena
Strategies communicate on names. We can think of names as channels, which can be used to send outputs (if positive) or to receive inputs (if negative). Each strategy D has an interface, which provides the names on which D can communicate with the rest of the world, and the use (input/output) of each name.
A name (called locus in [12]) is a string of natural numbers. We use the variables ξ, σ, τ, . . . to range over names. Two names are disjoint if neither of them is the prefix of the other one.
An interface Γ (called base in [12]) is a finite set of pairwise disjoint names, together with a polarity for each name, such that at most one name is negative. If a name ξ has polarity ε, we write ξ ε ∈ Γ. We say that an interface Γ is negative if it contains a negative name, and positive otherwise.
An action x is either the symbol † (called daimon) or a pair (ξ, I), where ξ is a name, and I is a finite subset of N.
Given an action (ξ, I) on the name ξ, the set I indicates the names {ξi : i ∈ I} which are generated from ξ by this action. The prefix relation (written ξ σ) induces a natural relation of dependency on names, which generates an arena.
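As a small illustration of these definitions, here is a toy encoding of our own (not part of the paper): a name is represented as a tuple of naturals, and we sketch disjointness and the names generated by an action (ξ, I).

```python
def is_prefix(xi, sigma):
    """True if the name xi is a prefix of the name sigma."""
    return sigma[:len(xi)] == xi

def disjoint(xi, sigma):
    """Two names are disjoint if neither is a prefix of the other."""
    return not is_prefix(xi, sigma) and not is_prefix(sigma, xi)

def generated(xi, I):
    """The names {xi.i : i in I} generated from xi by the action (xi, I)."""
    return {xi + (i,) for i in I}

# The name 01 is disjoint from 02, but 0 and 01 are not disjoint.
assert disjoint((0, 1), (0, 2))
assert not disjoint((0,), (0, 1))
# The action ((0,), {1, 3}) generates the names 01 and 03.
assert generated((0,), {1, 3}) == {(0, 1), (0, 3)}
```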
We call roots the action † and any action (ξ, I) such that ξ ∈ Γ. A strategy which plays a key role is the strategy daimon { †}, which (with a slight abuse of notation) we denote also by †.
Dynamics: Composition of strategies is described via the VAM. All we need in this paper is Proposition VI.1 in Section VI-A. Linear strategies are essentially the strategies introduced in [12] (there called designs). The linearity condition expressed there is actually slightly more complex, because it also takes into account the additive structure (additive duplication is allowed). Since in this paper we do not consider additives, for our discussion it is enough to say that in a linear strategy each name is used only once. Linearity has as a consequence that all pointers are trivial (each move has only one possible justifier), and can therefore be forgotten.
Composition of linear strategies (see [12], [7]): We can compose two strategies D 1 , D 2 when they have compatible interfaces, that is, they have a common name with opposite polarity. For example, D 1 : σ + , Γ can communicate with D 2 : σ − , Δ through the name σ. The shared name σ, and all names hereditarily generated from σ, are said to be internal.
The most important case in Ludics is the closed one, when all names are internal (for example, D : ξ + and E : ξ − ). In this case, the normal form can be obtained step by step by applying the following rewriting rules: a) if the first action of the positive strategy is †, normalization terminates, with output †; b) if the first action of the positive strategy is (ξ, I) + , we look for the dual action (ξ, I) − at the root of the negative strategy, and normalization continues on the matching substrategies; c) if none of the cases above applies, we have a deadlock (the output is empty).
Since each action only appears once, the dynamics is extremely simple: we match actions of opposite polarity. Let us give an example of how interaction works.
Let us have D interact with E. D starts by playing the move x + ; E's answer to that move is x 1 . When D receives the input x 1 , its answer is †, which terminates the interaction. Summing up, the interaction produces x.x 1 . †. If we hide the internal communication, † is the output.
If we have E interact with D′, we again match x + with x − . Then E plays x 1 , but D′ has no answer to the action x 1 . Here we have a deadlock.
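The two runs just described can be replayed in a toy model of our own (a deliberate simplification, not the VAM): a strategy is a finite map from the sequence of actions it has received so far to its next action, with 'dai' standing for †.

```python
def normalize(pos, neg):
    """Closed interaction of a positive strategy with a negative one.
    Returns the full trace ending in 'dai' on success, or None on deadlock."""
    trace, seen_pos, seen_neg = [], (), ()
    while True:
        move = pos.get(seen_pos)      # next positive action, if any
        if move is None:
            return None               # deadlock: no positive move
        trace.append(move)
        if move == 'dai':
            return trace              # daimon: termination signal
        seen_neg += (move,)
        answer = neg.get(seen_neg)    # the negative strategy's answer
        if answer is None:
            return None               # deadlock: no answer to the action
        trace.append(answer)
        seen_pos += (answer,)

D  = {(): 'x', ('x1',): 'dai'}   # plays x; answers dai after receiving x1
E  = {('x',): 'x1'}              # answers x1 after receiving x
D2 = {(): 'x'}                   # like D, but with no answer to x1

assert normalize(D, E) == ['x', 'x1', 'dai']   # the interaction x.x1.dai
assert normalize(D2, E) is None                # deadlock
```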

C. Orthogonality and Interactive types
In the closed case, we only have two possible outcomes: either composition fails (deadlock), or it succeeds by reaching the action †, which signals termination. Two strategies are orthogonal if at each step any positive action x + finds its negative dual action x − , and the computation terminates, that is, we eventually meet a † action.
Example IV.6. In Example IV.4, E⊥D, while E and D′ are not orthogonal.
Orthogonality allows the players to agree (or not), without this being guaranteed in advance by the type: D ⊥ is the set of the counter-strategies which are consensual with D.
Remark IV.7. Orthogonality also extends to strategies on an arbitrary interface. Instead of considering a single counter-strategy, one has to consider a family of counter-strategies (e.g., for D : ξ + , σ + , one has to consider a pair E : ξ − , F : σ − ). Details on this generalization are given in [12].

Definition IV.8. A behaviour (or interactive type) on the interface Γ is a set G of strategies D : Γ such that G ⊥⊥ = G (it is closed under biorthogonal).
We say that a behaviour G is positive or negative according to its interface.
We now give constructions on behaviours which will interpret linear formulas.
Conversely, given any strategy D : ξ + such that the root is linear (i.e. the action which labels the root occurs only once in D), we can write it as D = x.{D 1 , . . . , D n }. By the View condition, each subtree D i is a strategy on ξi. Given a strategy D as just described, we write D i for the operation which returns the subtree D i .
Let A 1 , A 2 be negative behaviours, respectively on ξ1 − and ξ2 − . We denote by A 1 • A 2 the set We define: In the sequel, P will always denote a positive behaviour, N a negative one.
The interpretation G of a formula G will be a behaviour, i.e. a set of strategies closed under biorthogonal: D ∈ G if and only if D⊥E for each E ∈ G ⊥ . The interpretation of a sequent G 1 , . . . , G n naturally extends this definition. It is clear that a sequent of behaviours is itself a behaviour, i.e., a set of strategies closed under biorthogonal.

V. COMPLETENESS
In this section we restrict our attention to linear strategies. We introduce internal completeness, as well as full completeness. All these results can be proven without relying on separation.
In [12], the set of strategies which interpret MALL formulas satisfies a remarkable closure property, called internal completeness: the set S of strategies produced by the construction is (essentially) equal to its biorthogonal (S = S ⊥⊥ ). This means that we have a complete description of all strategies in the behaviour.
The best example is the interpretation A 1 ⊗ A 2 := (A 1 • A 2 ) ⊥⊥ of a Tensor formula. One proves that (A 1 • A 2 ) = (A 1 • A 2 ) ⊥⊥ , i.e. we do not add new objects when closing by biorthogonal.
From this, full completeness follows. In fact, because of internal completeness, if D ∈ A 1 ⊗ A 2 we know we can decompose it as D 1 • D 2 , with D 1 ∈ A 1 and D 2 ∈ A 2 . This corresponds to writing the Tensor rule of the calculus. For the rest of this section, we assume that A, B are negative behaviours, respectively on ξ1 − and ξ2 − .

Remark V.2 (Important).
Observe that here we only use two properties of the strategies: the dynamics (normalization), and the fact that the root is linear, i.e. it is the only action on the name ξ (to say that occurrences of ξ1 only appear inside D 1 ).
Internal completeness for the connective Par (`) is immediate, just by spelling out Definition IV.9.
Full completeness for Multiplicative Linear Logic MLL follows from what we have seen in this section, by using the proofs of internal completeness of Tensor and Par, and Corollary V.3.

Corollary V.3. If D ∈ Γ, P if and only if for each
VI. LUDICS WITH REPETITIONS: WHAT, HOW, WHY

From this section on, we abandon the hypothesis of linearity. Here we discuss the difficulties in extending the approach of Ludics to this setting, and introduce our solution, which will be technically developed in Section VII. First, let us introduce some operations which we will use to deal with repeated actions, and describe composition.
Renaming: Given a strategy E : ξ, we indicate by σ(E) the strategy obtained from E by renaming, in all occurrences of actions, the prefix ξ into σ, i.e., each name ξ.τ becomes σ.τ . Obviously, if E : ξ, then σ(E) : σ.
Renaming of the root: Given a positive strategy D : ξ + , let us indicate by σ(D) the strategy obtained by renaming the prefix ξ into σ in the root, and in all actions which are hereditarily justified by the root. If D : ξ + , we obtain a new strategy σ(D) : σ + , ξ + .
We picture this in Figure 1, where we indicate an action on ξ simply with the name ξ.

Copies of a behaviour: To emphasize that A is a set of strategies on ξ, we annotate the name ξ as a subscript: A ξ . If A ξ is a set of strategies on the name ξ, we write A σ for {σ(D) : D ∈ A ξ }. A σ is a copy of A ξ : they are equal up to renaming.
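On a toy tree encoding of our own (a strategy is a pair (name, children), names are tuples of naturals; this is an assumption, not the paper's notation), full renaming σ(E) can be sketched as follows. Renaming of the root is omitted here, since it requires tracking justification pointers.

```python
def sigma_rename(tree, xi, sigma):
    """sigma(E): replace the prefix xi by sigma in every action's name."""
    name, children = tree
    if name[:len(xi)] == xi:
        name = sigma + name[len(xi):]
    return (name, [sigma_rename(c, xi, sigma) for c in children])

# A strategy on the name 0, with actions on 01, 02 and 023.
E = ((0,), [((0, 1), []), ((0, 2), [((0, 2, 3), [])])])
# sigma(E) is the same strategy, played on the name 5 instead of 0.
assert sigma_rename(E, (0,), (5,)) == \
    ((5,), [((5, 1), []), ((5, 2), [((5, 2, 3), [])])])
```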

A. Composition (normalization)
In strategies, actions can be repeated. Composition of strategies as sets of views can be described via the VAM abstract machine introduced in [6]. In this paper, what we use is that composition has a fundamental property, expressed by Proposition VI.1, where we use the operations of renaming and renaming of the root described above:

Proposition VI.1 (Copies). [[D, E]] = [[σ(D), E, σ(E)]].
Let us motivate this property, which actually gives a description of the composition. Let D : ξ + and E : ξ − be two strategies, which we represent in Figure 2 (a) (again, we indicate an action x on ξ simply with the name ξ). The idea behind the abstract machine in [6] is that, when the two strategies D and E interact, every time D plays an action x on ξ, a copy of E is created; i.e., composition works as if we had a copy of E for each occurrence of x in D. It is rather intuitive that the result of normalization is the same if we make this explicit, by renaming one occurrence of x (namely the root), and making an explicit copy of E, as illustrated in Figure 2 (b).

B. What are the difficulties
We are now ready to discuss the difficulties in extending the approach of Ludics to a setting where strategies are non-linear.
Problem 1: Separation: The first problem is the failure of separation (we discuss an example of this in [4]). A main reason why previous attempts to extend Ludics with exponentials were blocked on this point is that all proofs in [12] make essential use of a property built on separation. Our key observation is that, even if separation is an important property, its failure is a relative problem, in the sense that we can still have interactive types and internal completeness.

Problem 2: Enough tests (counter-strategies):
The second problem has to do with having enough tests, i.e. enough counter-strategies. Let us explain this. As in [12], we define an interactive type to be any set of strategies closed by biorthogonal. Assume we have defined how to interpret formulas, and in particular ?A and !A ⊥ .
We would like to associate to each "good" strategy in the interpretation of a formula, for example ?A, a syntactic proof of ?A (full completeness).
If D : ξ + ∈ ?A ξ , we would like to transform it into a strategy D′ ∈ ?A ξ , ?A σ (where distinct names indicate distinct copies). This corresponds to a contraction rule (in its upwards reading).
A natural idea is to rename the root, and all the actions which are hereditarily justified by it. We have already illustrated this operation in Figure 1. From D : ξ + , we obtain a new strategy D′ : ξ + , σ + , where D′ = σ(D). We would like to prove that D′ ∈ ?A ξ , ?A σ . For this, we need (see Definition IV.9) to know that [[σ(D), E, F]] = † for each E ∈ (?A ξ ) ⊥ and each F ∈ (?A σ ) ⊥ ; since (?A σ ) ⊥ is a copy (renamed in σ) of (?A ξ ) ⊥ , the two counter-strategies may be distinct strategies against the two copies. Unfortunately, Proposition VI.1 only gives us that [[σ(D), E, σ(E)]] = †, where we have two copies of the same (up to renaming) strategy E. This corresponds to the fact that in the HO-style setting, strategies in !C are uniform: every time we find a repeated action of "type" ?C ⊥ , Opponent (!C) reacts in the same way.

C. A solution: non-uniform tests
The need for enough tests is similar to the one which led Girard to the introduction of the daimon rule. In our case, this need leads us to enlarge the universe of tests by introducing non-uniform counter-strategies. This is extremely natural to realize in an AJM-style setting [1], [3], where a strategy of type !C is a sort of infinite tensor of strategies on C, each one with its copy index. To have HO-style non-uniform counter-strategies, we introduce a non-deterministic sum of strategies. Let us illustrate the idea, which we will formalize in the next section.
Non-uniform counter-strategies: The idea is to allow a "non-deterministic sum" of negative strategies. Let us, for now, informally write the sum of E and F as τ.E + τ.F. Normalization may have to use this strategy several times, hence entering the strategy several times. Every time it is presented with this choice, normalization will non-deterministically choose one of the two possible continuations. The choice can be different at each repetition.
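The point that the choice may differ at each repetition can be made concrete with a small sketch (a toy model of our own, not the paper's formal definition): over n repeated uses, a sum with branches E and F can exhibit any sequence of branch choices, whereas a uniform strategy only exhibits the constant ones.

```python
import itertools

def outcomes(branches, repetitions):
    """All branch-choice sequences a non-deterministic sum can exhibit
    over n repeated uses: one independent choice per repetition."""
    return set(itertools.product(branches, repeat=repetitions))

# Over two repetitions, tau.E + tau.F can answer with (E, F): a behaviour
# that no uniform strategy (constant choices only) can exhibit.
assert outcomes(('E', 'F'), 2) == {('E', 'E'), ('E', 'F'),
                                   ('F', 'E'), ('F', 'F')}
```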

D. Linearity of the root
Observe that, by construction, in σ(D) the action at the root is positive and it is the only action on the name σ. We can hence apply the argument already given in Section V for the internal completeness of Tensor.
As a consequence, if A = A 1 ⊗ A 2 , given D ∈ ?A, we have that σ(D) actually belongs to A, ?A, and can be decomposed into strategies σ(D) i ∈ A i , ?A.
This allows us to associate to D ∈ ?A, Γ a proof which essentially has this form: . . .

VII. NON-UNIFORM STRATEGIES
In this section, we technically implement the ideas presented in Section VI-C. In particular, we revise the definitions of arena and strategy so as to accommodate neutral actions, which correspond to the τ action we have just seen.
We extend the set of polarities with a neutral polarity; hence we now have three possibilities: positive, negative and neutral. We extend the set of actions with a set T = {τ i : i ∈ N} of indexed tau actions, whose polarity is defined to be neutral. We denote by T also the neutral arena, where the set of moves is T , the enabling relation is empty, and the polarity is neutral. We revise strategies (Definition III.2) by giving the following definitions. From now on, we only consider non-uniform (N.U.) strategies (the usual ones being a special case). Figure 3 below shows an example of an N.U. strategy.

A. Sum of strategies
As anticipated in Section VI-C, our N.U. strategies can be seen as non-deterministic sums of standard strategies. We use N.U. strategies to capture the idea of "non-uniform" tests. Let us make precise what it means for a strategy to be uniform or not.
Definition VII.2 (Uniform actions). Given an N.U. strategy D, we say that a negative occurrence of an action x − is uniform if x − is immediately followed by a positive occurrence of an action (and not by tau actions). If N is a set of negative strategies, we define Unif N as the subset of strategies of N with uniform root.
In Figure 3, F 1 and F 2 have a uniform root, while the root of F does not. F can be seen as a non-deterministic sum of F 1 and F 2 .
It is immediate that a strategy whose root is non-uniform can always be written as a sum of strategies whose root is uniform. We formalize this in the following.
Definition VII.3 (τ -sum). Let {D i : Γ} i∈S be a family of negative N.U. strategies such that all D i = x − .E i have the same uniform root. We define their sum by superposing them under the shared root x − , with each summand guarded by a distinct neutral action τ i .
A τ -sum of strategies can be seen as a superposition of negative strategies in such a way that they do not overlap (except for the first negative action). The following proposition is an immediate consequence of the definitions.
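On the toy tree encoding used earlier (our own assumption, not the paper's formal definition), the τ-sum can be sketched as a superposition of summands sharing the same root, each guarded by an indexed neutral action:

```python
def tau_sum(strategies):
    """Superpose negative strategies D_i = x.E_i sharing the same uniform
    root x: each body E_i is guarded by a distinct neutral action tau_i."""
    roots = {root for (root, _) in strategies}
    assert len(roots) == 1, "summands must share the same root"
    (root,) = roots
    return (root, [(('tau', i), [body])
                   for i, (_, body) in enumerate(strategies)])

F1 = ('x', 'E1')   # placeholder bodies standing for the subtrees E_i
F2 = ('x', 'E2')
assert tau_sum([F1, F2]) == \
    ('x', [(('tau', 0), ['E1']), (('tau', 1), ['E2'])])
```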

B. Orthogonality
We have sketched the definition of composition for N.U. strategies in Section VI-A; the reader does not need the details of composition here, which are given in the full paper [4].

VIII. INTERACTIVE TYPES

We give constructions on behaviours, and prove that they enjoy internal completeness. Since these constructions will be used to interpret formulas, full completeness will be a consequence.
Constant types: We define the positive (resp. negative) constant behaviour on ξ as follows: It is immediate that { †} ⊥⊥ = { †} and that ! is the set of all negative strategies on ξ.
Compound types: In this section, we use the same constructions on strategies and operations on sets of strategies as in Section IV-C (observe that since strategies have repetitions, even when starting from the same set, the closure by bi-orthogonal introduces many more strategies than in the linear case).
Let N 1 , . . . , N n be negative behaviours respectively on ξ1, . . . , ξn. We define a new positive (resp. negative) behaviour on ξ as follows: From now on, we write N for negative and P for positive behaviours given by the constructions above.

A. Internal completeness
We have the following property which characterizes the relation of orthogonality for τ -sums of strategies that belong to negative behaviours.
Proposition VIII.1. Let N = (N 1 • · · · • N n ) ⊥ be a negative behaviour. If {E i } ⊆ Unif N is a non-empty denumerable set of negative strategies with the same uniform root, then their τ -sum also belongs to N.

Together with Corollary VII.6, this gives the following lemma, which expresses the fact that the study of a negative behaviour N can be reduced to the study of Unif N. We will exploit this property both in internal and full completeness.

Proposition VIII.3 (Internal completeness of F − ). If x − .F ∈ Unif F − (P 1 , . . . , P n ) then F ∈ P 1 , . . . , P n .
Proof: The proof is as in the linear case.
Lemma VIII.4. If D ∈ P ξ then σ(D) ∈ P ξ , P σ . Moreover, the only occurrence of action on σ is the root.
Remark VIII.5. Non-uniform strategies allow us to superpose two negative strategies (with the same root). The capability of defining such a superposition is the core of our solution to interpret the contraction rule.
Proof: Let F + = F + (N 1 , . . . , N n ). By Lemma VIII.4, we have that if D ∈ F + ξ , then σ(D) ∈ F + ξ , F + σ . Moreover, the root is an action on the name σ, and it is the only occurrence of action on σ. By using the same argument as in Proposition V.1, we have that σ(D) = D 1 • · · · • D n and D i ∈ N i , F + , i.e. σ(D) i ∈ N i , F + (N 1 , . . . , N n ).

B. Full completeness
In this paper we have chosen to give enough details on internal completeness, because it is the more peculiar, and hence the more interesting, property. Once internal completeness is established, one can also obtain full completeness. Our model is fully complete with respect to (the constant-only fragment of) MELLS (Section II). The details are given in [4].
The interpretation of a proof is a strategy which is uniform and winning according to the definition given in [12] (i.e. daimon-free, finite and material in its behaviour). We have the following results, whose proofs can be found in [4].

Theorem VIII.7 (Interpretation). Let π be a proof of a sequent Γ in MELLS. There exists a winning strategy D ∈ Γ such that D is the interpretation of π.

Theorem VIII.8 (Correctness of the interpretation). If π is a proof of Γ in MELLS which reduces to π′, then if D is the interpretation of π and D′ is the interpretation of π′, we have D = D′.

Theorem VIII.9 (Full Completeness). If D is a winning strategy in a sequent of behaviours Γ, then D is the interpretation of a cut-free proof π of the sequent Γ in MELLS.