Non-idempotent intersection types and strong normalisation

We present a typing system with non-idempotent intersection types, typing a term syntax covering three different calculi: the pure λ-calculus, the calculus with explicit substitutions λS, and the calculus with explicit substitutions, contractions and weakenings λlxr. In each of the three calculi, a term is typable if and only if it is strongly normalising, as is the case in (many) systems with idempotent intersections. Non-idempotency brings extra information into typing trees, such as simple bounds on the longest reduction sequence reducing a term to its normal form. Strong normalisation follows, without requiring reducibility techniques. Using this, we revisit models of the λ-calculus based on filters of intersection types, and extend them to λS and λlxr. Non-idempotency simplifies a methodology, based on such filter models, that produces modular proofs of strong normalisation for well-known typing systems (e.g. System F). We also present a filter model by means of orthogonality techniques, i.e. as an instance of an abstract notion of orthogonality model formalised in this paper and inspired by classical realisability. Compared to other instances based on terms (one of which rephrases a now standard proof of strong normalisation for the λ-calculus), the instance based on filters is shown to be better at proving strong normalisation results for λS and λlxr. Finally, the bounds on the longest reduction sequence, read off our typing trees, are refined into an exact measure, read off a specific typing tree (called principal); in each of the three calculi, a specific reduction sequence of such length is identified. In the case of the λ-calculus, this complexity result is, for longest reduction sequences, the counterpart of de Carvalho's result for linear head-reduction sequences.


Introduction
Intersection types were introduced in [CD78, CDS79], extending the simply-typed λ-calculus with a notion of finite polymorphism. This is achieved by a new construct A ∩ B in the syntax of types and new typing rules such as

  M : A    M : B
  ──────────────
    M : A ∩ B

where M : A denotes that a term M is of type A.
One of the motivations was to characterise strongly normalising (SN) λ-terms, namely the property that a λ-term can be typed if and only if it is strongly normalising. Variants of systems using intersection types have been studied to characterise other evaluation properties of λ-terms and have served as the basis of corresponding semantics [BCDC83, Lei86, Kri90, vB95, Ghi96, AC98, Gal98, DCGL04, DCHM05, ABDC06, CS07].
This paper develops [BL11a, BL11b] with detailed proofs and extends the results to calculi other than the pure λ-calculus, namely the calculus with explicit substitutions λS (a minor variant of the calculi λs of [Kes07] and λes of [Ren11]) and the calculus with explicit substitutions, explicit contractions and explicit weakenings λlxr [KL05, KL07]: it presents a typing system (for a syntax that covers those of λ, λS and λlxr) that uses non-idempotent intersection types (as pioneered by [KW99, NM04]).
Intersections were originally introduced as idempotent, with the equation A ∩ A = A either as an explicit quotient or as a consequence of the system. This corresponds to the understanding of the judgement M : A ∩ B as follows: M can be used as data of type A or as data of type B. But the meaning of M : A ∩ B can be strengthened in that M will be used once as data of type A and once as data of type B. With this understanding, A ∩ A ≠ A, and dropping idempotency of intersections is thus a natural way to study control of resources and complexity. Using this typing system, the contributions of this paper are threefold:

Measuring worst-case complexity.
In each of the three calculi, we refine with quantitative information the property that typability characterises strong normalisation. Since strong normalisation ensures that all reduction sequences are finite, we are naturally interested in identifying the length of the longest one. This can be done because our typing system is very sensitive to the usage of resources when terms are reduced (by any of the three calculi).
Our system actually results from a long line of research inspired by Linear Logic [Gir87]. The usual logical connectives of, say, classical and intuitionistic logic are decomposed therein into finer-grained connectives, separating a linear part from a part that controls how and when the structural rules of contraction and weakening are used in proofs. This can be seen as resource management when hypotheses, or more generally logical formulae, are considered as resources. The Curry-Howard correspondence, which originated in the context of intuitionistic logic [How80], can be adapted to Linear Logic [Abr93, BBdH93], whose resource-awareness translates to a control of resources in the execution of programs (in the usual computational sense). From this has emerged a theory of resource λ-calculus with semantical support (such as the differential λ-calculus) [BL96, BCL99, ER03, BET10, BEM10]. In this line of research, de Carvalho [dC05, dC09] obtained interesting measures of reduction lengths in the λ-calculus by means of non-idempotent intersection types: he showed a correspondence between the size of a typing derivation tree and the number of steps taken by a Krivine machine to reduce the typed λ-term, which relates to the length of linear head-reductions. But if we remain in the realm of intersection systems that characterise strong normalisation, then the more interesting measure is the length of the longest reduction sequence.
In this paper we get a result similar to de Carvalho's, but with the measure corresponding to strong normalisation: the length of the longest β-reduction sequence starting from any strongly normalising λ-term can be read off its typing tree in our system. Moreover, the idea of controlling resource usage by intersection types naturally leads to the investigation of calculi that handle resources more explicitly than the pure λ-calculus. While the resource calculi along the lines of [BEM10] are well-suited to de Carvalho's study of head reductions, our interest in longest reduction sequences (no matter where the redexes are) led us to explicit substitution calculi along the lines of [KL05, KL07, Kes07, Ren11]. Hence the extension of our complexity results (already presented in [BL11a] for λ) to λS and λlxr.

Filter models and strong normalisation.
Intersection types were also used to build filter models of the λ-calculus as early as [BCDC83]. In particular, [CS07] shows how filters of intersection types can be used to produce models of various type theories; this in turn provides a modular proof that the λ-terms that are typable in some (dependent) type theory (the source system) are typable in a unique strongly normalising system of intersection types (the target system), and are therefore strongly normalising.
Following [BL11b], we show here an improvement on this methodology, changing the target system from idempotent intersection types to our system of non-idempotent intersection types. The benefit of that move is that the strong normalisation of this new target system follows from the fact that typing trees get strictly smaller with every β-reduction. This is significantly simpler than the strong normalisation of the simply-typed λ-calculus and, even more so, of its extension with idempotent intersection types (for which [CS07] involves reducibility techniques [Gir72, Tai75]).
Strangely enough, there is no price to pay for this simplification: the construction and correctness of the filter models with respect to a source system is not made harder by non-idempotency.
While this improvement concerns any of the source systems treated in [CS07], we choose to illustrate the methodology with a concrete source system that includes the impredicative features of System F [Gir72], as suggested in the conclusion of [CS07].
Moreover, extending our improved methodology [BL11b] to the explicit substitution calculi λS and λlxr is a new contribution that addresses problems that are reputedly difficult: as illustrated by Melliès [Mel95], strong normalisation can be hard to satisfy for an explicit substitution calculus. When it is satisfied, proof techniques often reduce the problem to the strong normalisation of pure λ-terms via a property known as Preservation of Strong Normalisation (PSN) [BBLRD96], while direct proofs (e.g. by reducibility [Gir72, Tai75]) become hugely intricate [DL03, LLD+04] even in the simplest explicit substitution calculus λx [BR95]. Here we have direct proofs of strong normalisation for λS and λlxr, when they are typed with simple types, idempotent intersection types, or System F types. These are, to our knowledge, the first direct proofs for those systems (i.e. proofs that do not rely on the strong normalisation of pure λ-terms).
The third contribution of this paper is to show how the above methodology can be formalised in the framework of orthogonality. Orthogonality underlies Linear Logic and its models [Gir87] as well as classical realisability [DK00, Kri01, MM09], and is used to prove properties of proofs or programs [Par97, MV05, LM08].
We formalise here a parametric model construction by introducing an abstract notion of orthogonality model, which we illustrate with three different instances:
• one instance is a model made of strongly normalising terms (which, in the case of the pure λ-calculus, captures the traditional use of orthogonality to prove strong normalisation [Par97, LM08]);
• one instance is a model made of terms that are typable with intersection types;
• one instance is a model made of filters of intersection types.
To our knowledge, this is the first time that some filter models are shown to be captured by orthogonality techniques. Also, the systematic and modular approach offered by the abstract notion of orthogonality model facilitates the comparison of different proof techniques: as already shown in [BL11b], all three orthogonality models provide proofs of strong normalisation for the pure λ-calculus. But here we also show that, in the case of λS and λlxr, the term models fail to easily provide such direct proofs: one has to either infer that a term is strongly normalising from some normalisation (resp. typing) properties of its projection as a pure λ-term (as in the PSN property), or prove complex normalisation (resp. typing) properties within λS and λlxr themselves (as in the IE property identified in [Kes09]). On the contrary, the filter model provides strong normalisation results for λS and λlxr as smoothly as for the pure λ-calculus.
Structure of the paper.
This paper aims at factorising as much material as possible between the three calculi, and presents the material specific to each of them in a systematic way.
Section 2 presents the generic syntax that covers those of λ-calculus, λS and λlxr; it presents the (non-idempotent) intersection types, the typing system used in the rest of this paper and its basic properties.
Section 3 proves Subject Reduction for each of the three calculi, showing that typing trees get smaller with every reduction, from which strong normalisation is inferred (Soundness).
Section 4 presents the filter structure of our intersection types, and the construction of a filter model for a very general source typing system, which is thus proved strongly normalising in each of the three calculi; the abstract notion of orthogonality model is defined with sufficient conditions for the Adequacy Lemma to hold (being typed implies having a semantics in the model); three instances of orthogonality models are defined and compared in the view of proving strong normalisation results for the three calculi.
Section 5 proves Subject Expansion for each of the three calculi, from which typing derivations are shown to exist for every strongly normalising term (Completeness); such derivations are proved to satisfy some specific properties called optimality and principality.
Section 6 takes advantage of the optimality and principality properties to refine the upper bound on longest reduction sequences into an exact measure that is reached by some specific reduction sequence; this is done in each of the three calculi.
Section 7 discusses alternative measures for the explicit substitution calculi λS and λlxr, and Section 8 concludes. An appendix details the proofs of the theorems that would otherwise overload the paper with technicalities.

The calculus
The intersection type system we define here was first designed for the pure λ-calculus. However, it can easily be extended to other calculi, such as the explicit substitution calculus λS, or the explicit substitution calculus λlxr where weakenings and contractions are also explicit.
The theories of those three calculi share a lot of material, which is why we present them in parallel, factorising what can be factorised: for instance, the syntaxes of the three calculi are fragments of a common grammar, for which a generic intersection type system specifies a notion of typing for each fragment. However, the calculi do not share the same reduction rules.
In this section, we first present the common grammar, then we define our generic intersection type system for it.

Terms
The syntaxes of the three calculi are subsets of a common grammar defined as follows:

M, N ::= x | λx.M | M N | M [x := N ] | W_x(M) | C_x^{y,z}(M)

The free variables fv(M) of a term M are defined by the rules of Figure 1.
Figure 1: Free variables of a term

We consider terms up to α-equivalence and use Barendregt's convention [Bar84] to avoid variable capture.
In this paper we consider in particular the three following fragments of the above syntax.

Definition 2 (Fragments)
• Pure λ-calculus: a λ-term is a term M that does not contain M [x := N ], W_x(M) or C_x^{y,z}(M).
• λS-calculus: a λS-term is a term that does not contain W_x(M) or C_x^{y,z}(M).
• λlxr-calculus: a λlxr-term is a term M that is linear: every free and bound variable appears once and only once in the term (see [KL07]).
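As a concrete reading of Definition 2 and Figure 1, here is a small Python sketch (our own encoding; the constructor names are ours, not the paper's) of the common grammar, the free-variable function fv, and the fragment test for pure λ-terms:

```python
# Hypothetical encoding of the common grammar: Var, Lam, App,
# Sub (explicit substitution M[x := N]), Weak (W_x(M)) and
# Contr (C_x^{y,z}(M), contracting y and z into x).
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Sub:
    body: object
    var: str
    repl: object

@dataclass(frozen=True)
class Weak:
    var: str
    body: object

@dataclass(frozen=True)
class Contr:
    var: str
    y: str
    z: str
    body: object

def fv(t):
    """Free variables of a term, following the shape of Figure 1."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return fv(t.body) - {t.var}
    if isinstance(t, App):
        return fv(t.fun) | fv(t.arg)
    if isinstance(t, Sub):                 # M[x := N]
        return (fv(t.body) - {t.var}) | fv(t.repl)
    if isinstance(t, Weak):                # W_x(M) introduces x as free
        return fv(t.body) | {t.var}
    if isinstance(t, Contr):               # C_x^{y,z}(M) merges y and z into x
        return (fv(t.body) - {t.y, t.z}) | {t.var}
    raise TypeError(t)

def is_lambda_term(t):
    """Pure λ-term: contains no Sub, Weak or Contr (Definition 2)."""
    if isinstance(t, Var):
        return True
    if isinstance(t, Lam):
        return is_lambda_term(t.body)
    if isinstance(t, App):
        return is_lambda_term(t.fun) and is_lambda_term(t.arg)
    return False
```

A λS-term test would only rule out Weak and Contr, and the linearity check for λlxr would additionally count variable occurrences; both follow the same recursive pattern.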

Types
To define the intersection type system, we first define the intersection types, which are the same for the three calculi.

Definition 3 (Types)
We consider a countably infinite set of elements called atomic types and use the type variable τ to range over it.
Intersection types are defined by the following syntax:

F, G ::= τ | U → F   (F-types)
A, B ::= F | A ∩ B   (A-types)
U, V ::= A | ω       (U-types)

F-types are types that are not intersections, A-types are types that are not empty, and U-types are types that can be empty (ω standing for the empty intersection).
We extend the intersection construct as an operation on U-types as follows: U ∩ ω = U and ω ∩ V = V (the other cases are already given by the syntax). Note that we do not assume any implicit equivalence between intersection types (such as idempotency, associativity, commutativity).
F-types are similar to the strict types defined in [vB92]. However, in order to prove theorems such as subject reduction we will need associativity and commutativity of the intersection ∩. So we define an equivalence relation on types.

Definition 4 (≈)
We inductively define U ≈ V by the rules of Fig. 2. The intersection types that we use here differ from those of [BL11a], in that the associativity and commutativity (AC) of the intersection ∩ are only featured "on the surface" of types, and not underneath functional arrows →. This will make the typing rules much more syntax-directed, simplifying the proofs of soundness and completeness of typing with respect to the strong normalisation property. More to the point, this approach reduces the use of the AC properties to the only places where they are needed.
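Since AC only holds at the surface, the relation ≈ can be decided by flattening the top-level ∩ into a multiset of conjuncts and comparing; the Python sketch below (ours, with a hypothetical tuple encoding of types) assumes, as the text suggests, that types occurring under arrows must match syntactically. Multisets, rather than sets, are exactly what makes the comparison non-idempotent:

```python
from collections import Counter

# Hypothetical encoding (ours): an atomic type is a string, an arrow
# U -> F is ("->", U, F), an intersection A ∩ B is ("&", A, B), and
# "w" stands for ω.

def surface(t):
    """Multiset of top-level conjuncts: flattens ∩ at the surface only."""
    if isinstance(t, tuple) and t[0] == "&":
        return surface(t[1]) + surface(t[2])
    return Counter([t])

def equiv(u, v):
    """U ≈ V: AC of ∩ at the surface, syntactic equality under arrows."""
    return surface(u) == surface(v)
```

For example, with A = ("->", "w", "tau") and B = "tau", the types (A ∩ B) ∩ A and A ∩ (B ∩ A) are ≈-equivalent, while B ∩ B and B are not: idempotency fails because the multiset records two copies of B.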
We equip intersection types with a notion of subtyping ⊆ with the following properties: 1. ⊆ is a pre-order on intersection types, and U ≈ U′ if and only if U ⊆ U′ and U′ ⊆ U.

Typing contexts
We now lift those concepts to typing contexts before presenting the typing rules.
By () we denote the context mapping every variable to ω, and by x : U the context mapping x to U and every other variable to ω.
Proof: The proofs of these properties are straightforward with the use of Lemma 2 and Lemma 3.

Typing judgements
Now that we have defined types and contexts, we can define typing derivations. Instead of defining three typing systems for the three calculi, we define one typing system for the common grammar.
Definition 7 (Typability in System λ⊆∩) The judgement Γ ⊢∩⊆ M : U denotes derivability with the rules of Fig. 3. We write Γ ⊢ⁿ∩⊆ M : U if there exists a derivation with n uses of the (App) rule. ※

Figure 3: System λ⊆∩

Note that the rule deriving ⊢∩⊆ M : ω does not interfere with the rest of the system, as ω is not an A-type. It is only here for convenience, to synthetically express some statements and proofs that would otherwise need a verbose case analysis (e.g. Lemma 8).
Examples of how λ-terms are typed are given in the next section. Also, note that the introduction rule for the intersection is directed by the syntax of the types: if Γ ∩ ∆ ⊢∩⊆ M : A ∩ B, then the last rule of the derivation is necessarily (Inter), and its premises are necessarily Γ ⊢∩⊆ M : A and ∆ ⊢∩⊆ M : B. We are not aware of any other intersection type system featuring this property, which is here a consequence of dropping the implicit AC properties of intersections, and a clear advantage over the system in [BL11a]. Similar properties can however be found in systems such as those of [GILL11], which avoid having to explicitly type a term with an intersection type.
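To illustrate how contexts combine through ∩, here is a sketch (ours, not taken from the paper) of a typing derivation for the self-application λx. x x; the precise premise shapes and the rule name (Abs) are our guesses about Fig. 3, except (App), which the text names:

```latex
\[
\dfrac{\dfrac{x : F \to G \,\vdash\, x : F \to G
              \qquad
              x : F \,\vdash\, x : F}
             {x : (F \to G) \cap F \,\vdash\, x\,x : G}\;(\mathrm{App})}
      {\vdash\, \lambda x.\,x\,x \,:\, \big((F \to G) \cap F\big) \to G}\;(\mathrm{Abs})
\]
```

Note how the two uses of x are recorded as the non-idempotent intersection (F → G) ∩ F in the context: each use of the variable contributes one component, and no typing sub-tree is duplicated.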

Lemma 5 (Basic properties of λ⊆∩)
4. If Γ ⊢ⁿ∩⊆ M : U and U ⊆ V, then there exist m and ∆ such that m ≤ n, Γ ⊆ ∆, and ∆ ⊢ᵐ∩⊆ M : V.
Proof: The first point generalises the previous remark to U-types. The second point is by induction on the typing tree. The third point is by induction on the derivation of U ≈ U′, and the fourth one combines the previous points.
The following lemma is used to prove Subject Reduction in both λS and λlxr:

Lemma 6 (Typing of explicit substitution) Assume Γ, x :

Remark 7 The previous lemma is not true if we replace A by ω: if B is an intersection, we would need to duplicate the typing tree of N.

Soundness
In this section we prove Subject Reduction and Soundness: respectively, the property that typing is preserved by reduction, and the property that if a term is typable, then it is strongly normalising for the reduction relation. This is true for the three calculi, but specific to each of them because the reduction relation is itself specific to each calculus. Therefore in this section we work separately on each calculus: each time, we define the reduction rules and prove the Subject Reduction property, which leads to Soundness.

Pure λ-calculus
Remember that a pure λ-term is a term M that does not contain any explicit substitutions M [x := N ], weakenings W_x(M) or contractions C_x^{y,z}(M). As we will see, only strongly normalising terms can be assigned an A-type by the system (Theorem 10). In fact, all of them can (Theorem 53); see for instance how the example below correctly uses the abstraction rule (A ⊆ ω).
Definition 8 (Reduction in λ-calculus) If M and N are pure λ-terms, we denote by M{x := N} the result of the (implicit) substitution (as defined in e.g. [Bar84]).
The reduction rule is β-reduction: (λx.M) N −→ M{x := N}. The congruent closure of this rule is denoted −→β. SNλ denotes the set of strongly normalising λ-terms (for β-reduction). ※

Lemma 8 (Typing of implicit substitutions)

The above theorem and its proof are standard but for the quantitative information in the typability properties. This is where non-idempotent intersections provide a real advantage over idempotent ones, as every β-reduction strictly reduces the number of application rules in the typing trees: no sub-tree is duplicated in the process. This is specific to non-idempotent intersection types, and obviously false for simple types or idempotent intersection types.
As a direct corollary we obtain: the converse is also true (strongly normalising terms can be typed in λ⊆∩), see Theorem 53 and, more generally, Section 5.3 (with Subject Expansion, etc.).
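To make the discussion of longest reduction sequences concrete, here is a small Python sketch (ours, not from the paper) that enumerates the one-step β-reducts of a term and computes the length of its longest β-reduction sequence by exhaustive search; `longest` terminates exactly on strongly normalising terms:

```python
# Terms are encoded as tuples (an assumption of this sketch):
# ("var", x), ("lam", x, M), ("app", M, N).
import itertools

fresh = itertools.count()

def subst(m, x, n):
    """Capture-avoiding substitution m{x := n} (bound variables renamed)."""
    tag = m[0]
    if tag == "var":
        return n if m[1] == x else m
    if tag == "app":
        return ("app", subst(m[1], x, n), subst(m[2], x, n))
    # tag == "lam": rename the bound variable with a fresh name
    y = f"v{next(fresh)}"
    body = subst(m[2], m[1], ("var", y))
    return ("lam", y, subst(body, x, n))

def reducts(m):
    """All one-step β-reducts of m (congruent closure of β)."""
    tag = m[0]
    if tag == "app":
        f, a = m[1], m[2]
        if f[0] == "lam":
            yield subst(f[2], f[1], a)          # β-redex at the root
        for f2 in reducts(f):
            yield ("app", f2, a)
        for a2 in reducts(a):
            yield ("app", f, a2)
    elif tag == "lam":
        for b2 in reducts(m[2]):
            yield ("lam", m[1], b2)

def longest(m):
    """Length of the longest β-reduction sequence from m (m must be SN)."""
    return max((1 + longest(r) for r in reducts(m)), default=0)
```

For instance, (λx. x x) (λx. x) reaches its normal form λx. x in exactly two steps, whichever redex is chosen; the paper's point is that such bounds can be read off the typing tree instead of being computed by this brute-force search.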

λS
Remember that terms of λS are terms that do not contain any weakenings W_x(M) or contractions C_x^{y,z}(M). In other words, we consider the extension of the pure λ-calculus with explicit substitutions M [x := N ]. This is the same syntax as that of λx [BR95] but, unlike in λx, the reduction rules only duplicate substitutions when needed, as in the following rule: … In the other cases, the explicit substitution will only go one way. The rules are chosen with the proof of Subject Reduction in mind, which is a simple adaptation of the proof in the pure λ-calculus. This leads to soundness (typable implies strongly normalising), and therefore Melliès's counter-example [Mel95] to strong normalisation is naturally avoided.

Definition 9 (Reduction in λS)
The reduction and equivalence rules of λS are presented in Fig. 4.
For a set of rules E ⊆ {B, S, W } from Figure 4, −→ E denotes the congruent closure of the rules in E modulo the ≡ rule.
SNλS denotes the set of strongly normalising λS-terms for −→B,S,W. ※

We call this calculus λS because it is a variant of the calculi λs of [Kes07] and λes of [Ren11]. That of [Ren11] is more general than that of [Kes07] in the sense that it allows the reductions (1) and (2). Rule (2) is problematic in our approach since, even though the Subject Reduction property would still hold, it would not hold with the quantitative information from which Strong Normalisation can be proved: in the typing tree, the type of M1 is not an intersection (it is an F-type) but the type of M2 can be one. So we cannot directly type the reduct: in some cases we can use Lemma 6, otherwise we have to duplicate the typing tree of N.
We therefore exclude (2) from the calculus, but keep (1) as one of our rules, since it is perfectly compatible with our approach. It is also needed to simulate (in several steps) the general garbage collection rule, which is present in both [Kes07] and [Ren11], and which we decide to restrict, for simplicity, to the case where M is a variable different from x. All of our results would still hold with the general garbage collection rule.

Figure 4: Reduction and equivalence rules of λS
Proof: By a polynomial argument. More precisely, see Appendix C.
Proof: We have Γ ⊢ⁿ∩⊆ M : A for some n, and strong normalisation is provided by a lexicographic argument as follows: • −→S,W terminates on its own.

λlxr
Remember that λlxr-terms are terms that are linear. In particular, the typing rules for abstraction, explicit substitution, weakening and contraction degenerate into simpler rules when linearity is assumed.

Remark 14 Had we defined the (Weakening) and (Contraction) rules in this degenerate form from the start (cf. Fig. 3), we would have had to add extra conditions for Theorems 21.5 and 21.6 to hold. This would amount to assuming some of the consequences of linearity. We prefer to keep Theorem 21 and the whole of Section 4 calculus-independent, by giving a more general definition of the two rules.
Definition 10 (Reduction in λlxr) The reduction and equivalence rules of λlxr are presented in Fig. 5.
For a set of rules E from Fig. 5, −→E denotes the congruent closure of the rules in E modulo the equivalence rules.
SN λlxr denotes the set of strongly normalising λlxr-terms for the entire reduction relation.
Proof: Similarly to λS, we have Γ ⊢ⁿ∩⊆ M : A for some n, and strong normalisation is provided by a lexicographic argument as follows:
• −→B strictly decreases n;
• any other reduction −→E decreases n or leaves it unchanged;
• the reduction system without the B rule terminates on its own [KL07].

Denotational semantics for strong normalisation
In this section we show how to use non-idempotent intersection types to simplify the methodology of [CS07], which we briefly review here: the goal is to produce modular proofs of strong normalisation for various source typing systems. The problem is reduced to the strong normalisation of a unique target system of intersection types, chosen once and for all. This is done by interpreting each term t as the set ⟦t⟧ of the intersection types that can be assigned to t in the target system. Two facts then remain to be proved:
1. if t can be typed in the source system, then ⟦t⟧ is not empty;
2. the target system is strongly normalising.
The first point is the only part that is specific to the source typing system: it amounts to turning the interpretation of terms into a filter model of the source typing system. The second point depends on the chosen target system: as [CS07] uses a system of idempotent intersection types (extending the simply-typed λ-calculus), their proof involves the usual reducibility technique [Gir72, Tai75]. But this is somewhat redundant with point 1, which uses similar techniques to prove the correctness of the filter model with respect to the source system.

In this paper we propose to use non-idempotent intersection types for the target system, so that point 2 can be proved with simpler techniques than in [CS07], while point 1 is not impacted by the move. In practice we propose λ⊆∩ as the target system (that of [BL11a] would work just as well). The present section shows the details of this alternative.
Notice that λ⊆∩ is not an extension of the simply-typed λ-calculus: a typing tree in the system of simple types is not a valid typing tree in system λ⊆∩, which uses non-idempotent intersections (while it is a valid typing tree in the system of [CS07], which uses idempotent intersections). But a nice application of our proposed methodology is that, by taking the simply-typed λ-calculus as the source system, we can produce a typing tree in λ⊆∩ from a typing tree with simple types. We do not know of any more direct encoding.

I-filters
The following filter constructions only involve the syntax of types and are independent from the chosen target system.
Definition 11 (I-filter)
• An I-filter is a set v of A-types such that:
  for all A and B in v, we have A ∩ B ∈ v;
  for all A and B, if A ∈ v and A ⊆ B, then B ∈ v.
• In particular, the empty set and the set of all A-types are I-filters, and we write them ⊥ and ⊤ respectively.
• Let D be the set of all non-empty I-filters; we call such I-filters values.
• Let E be the set of all I-filters (E = D ∪ {⊥}). ※

While our intersection types differ from those in [CS07] (in that idempotency is dropped), the stability of a filter under type intersections makes it validate idempotency (it contains A if and only if it contains A ∩ A, etc.). This makes our filters very similar to those in [CS07], so we can plug in the rest of the methodology with minimal change.
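Note that closure under ∩ makes every non-empty I-filter infinite (non-idempotency keeps adding A ∩ A, (A ∩ A) ∩ A, …), so a direct executable check has to be relativised to a finite universe. The Python sketch below (ours; the tuple encoding and the finite `sub` table for ⊆ are assumptions of this sketch) verifies the two conditions of Definition 11 restricted to such a universe:

```python
# A-types are encoded as strings (F-types) and tuples ("&", A, B)
# for intersections; "sub" is a finite set of pairs (A, B) with A ⊆ B.

def inter(a, b):
    """Build the intersection A ∩ B."""
    return ("&", a, b)

def is_ifilter(v, universe, sub):
    """Check the I-filter conditions for v ⊆ universe, relative to universe:
       - A, B ∈ v and A ∩ B ∈ universe imply A ∩ B ∈ v;
       - A ∈ v, (A, B) ∈ sub and B ∈ universe imply B ∈ v."""
    for a in v:
        for b in v:
            ab = inter(a, b)
            if ab in universe and ab not in v:
                return False
    for (a, b) in sub:
        if a in v and b in universe and b not in v:
            return False
    return True
```

In particular, the empty set ⊥ passes the check vacuously, matching the definition; a set containing A but missing some B with A ⊆ B fails upward closure.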
Remark 17 (Basic properties of I-filters)
5. If u and v are sets of F-types such that ⟨u⟩ = ⟨v⟩, then u = v.
Hence, in order to prove that two I-filters are equal we just have to prove that they contain the same F -types.

Definition 13 (Environments and contexts)
An environment is a map from term variables x, y, … to I-filters. If ρ is an environment and Γ is a context, we say that Γ ∈ ρ, or that Γ is compatible with ρ, if for all x, Γ(x) = ω or Γ(x) ∈ ρ(x).
Assume ρ is an environment, u is an I-filter and x is a variable. Then the environment ρ, x ↦ u is defined as follows: …

Remark 19 (Environments are I-filters of contexts) Let ρ be an environment.

Semantics of terms as I-filters
The remaining ingredients now involve the target system; we treat here λ⊆∩.

Definition 14 (Interpretation of terms)
If M is a term and ρ is an environment, we define the interpretation ⟦M⟧ρ.

Theorem 21 (Inductive characterisation of the interpretation)
Proof: See Appendix C.
This theorem makes λ⊆∩ a suitable alternative as a target system: the filter models of the source systems treated in [CS07] can be built with a system of non-idempotent intersection types. While we could develop those constructions, we prefer to cover a new range of source systems: those with second-order quantifiers, such as System F. Typing contexts, denoted G, H, …, are partial maps from term variables to types, and (x : A) denotes the map from x to A.
Let S be the typing system consisting of the rules in Fig. 6. Typability in System S will be expressed by judgements of the form G ⊢S M : A.

Remark 22 Note that the typing rules in System S do not necessarily follow the philosophy of the λlxr-calculus and the λS-calculus. For example, we would expect a typing system for λS or λlxr to be such that the domain of the typing context is exactly the set of free variables of the typed term (this leads to interesting properties and better encodings into proof-nets; see e.g. [KL07]).
However here, we are only interested in strong normalisation, and we therefore consider a typing system as general as possible (hence the accumulation of rules in Fig. 6), i.e. a typing system such that the terms that are typed in an appropriate typing system (such as that of [KL07] for λlxr) can be typed here.This is the case of System S. Alternatively, we could also adapt and tailor the proof of strong normalisation below to the specific typing system in which we are interested.

An intuitionistic realisability model
We now build the model M^i_F as follows:

Definition 16 (Realisability Predicate) A realisability predicate is a subset X of D containing ⊤. We define TP(D) as the set of realisability predicates. ※

Lemma 23 (Shape of realisability predicates)
The only subtle point is the second one: first, for all v ∈ X, v ≠ ⊥, and thus …

We can now interpret types:

Definition 17 (Interpretation of types) Valuations are mappings from type variables to elements of TP(D).
Given such a valuation σ, the interpretation ⟦A⟧σ of types is defined as follows: … The interpretation ⟦G⟧σ of typing contexts is defined as follows: …

Finally we get Adequacy:

Lemma 24 (Adequacy Lemma) If G ⊢S M : A, then for all valuations σ and for all mappings ρ ∈ ⟦G⟧σ we have ⟦M⟧ρ ∈ ⟦A⟧σ.
Proof: By induction on the derivation of G ⊢S M : A, using Theorem 21 (and the fact that …, which is proved by induction on A).
Corollary 25 (Strong normalisation of S) If G ⊢S M : A, then M ∈ SN.
Proof: Applying the previous lemma with σ mapping every type variable to {⊤} and ρ mapping all term variables to ⊤, we get ⟦M⟧ρ ∈ ⟦A⟧σ, so ⟦M⟧ρ ≠ ⊥. Hence M can be typed in λ⊆∩, so M ∈ SN.
The advantage of non-idempotent intersection types (over idempotent ones) lies in the very last step of the above proof: here the typing trees of λ⊆∩ get smaller with every β-reduction (proof of Theorem 10), while a reducibility technique as in [CS07] combines yet again an induction on types with an induction on typing trees, similar to that in the Adequacy Lemma.

Orthogonality models
In this section we show how the above methodology can be integrated into the theory of orthogonality, i.e. how this kind of filter model construction can be captured by orthogonality techniques [Gir87, DK00, Kri01, MM09]. These techniques are particularly suitable to prove that typed terms satisfy some property [Par97, MV05, LM08], the best known of which is Strong Normalisation.
For this we define an abstract notion of orthogonality model for the system S defined in Fig. 6. In particular, our definition also applies to sub-systems such as the simply-typed λ-calculus, the idempotent intersection type system, System F, etc. We could also adapt it with no difficulty to accommodate System Fω.
Orthogonality techniques and the filter model construction from Section 4.1 (with the sets D and E) inspire the notion of orthogonality model below. First we need the following notations: …

• the following axioms are satisfied:
(A1) For all ρ, v, x, …
(A5) For all ρ, v, x, M (such that x ∉ fv(M)) and for all values u, …

In fact, D and ⊥⊥ are already sufficient to interpret any type A as a set ⟦A⟧ of values: if types are seen as logical formulae, we can see this construction as a way of building some of their realisability / set-theoretical models.
There is no notion of computation pertaining to values, but the interplay between the interpretation of terms and the orthogonality relation is imposed by the axioms so that the Adequacy Lemma (which relates typing to semantics) holds: if ⊢S M : A, then ⟦M⟧ ∈ ⟦A⟧.

Semantics of types and Adequacy Lemma
Definition 20 (Orthogonal)

Given such a valuation σ, the interpretation A σ of types and the interpretation G σ of typing contexts are defined accordingly. An orthogonality model is a structure rich enough for Adequacy to hold:

Lemma 28 (Adequacy Lemma) If G ⊢S M : A, then for all valuations σ and for all mappings ρ ∈ G σ we have M ρ ∈ A σ.

The special case of applicative structures
In the next section we present instances of orthogonality models. They will have in common that E is an applicative structure, as we have seen with I-filters. This motivates the following notion:

Definition 23 (Applicative orthogonality model) An applicative orthogonality model is a 4-tuple (E, D, @, ⟦·⟧) where E is a set, D is a subset of E, @ is a (total) function from E × E to E, and ⟦·⟧ is a function (parametrised by an environment) from λ-terms to the support.

Instances of orthogonality models
We now give instances of (applicative) orthogonality models with well-chosen sets of values, applications, and interpretations of terms, with the aim of deriving the strong normalisation of a term M of type A in S from the property M ∈ A . The first two instances are term models: terms are interpreted as pure λ-terms (see Definition 24), so the support of those term models is the set of all pure λ-terms, seen as an applicative structure using term application.

Definition 24 (Interpretation of terms in a term model)

In the first instance, values are those pure λ-terms that are strongly normalising (for β). If we concentrate on the interpretation (as themselves) of the pure λ-terms that are typed in S, we have an orthogonality model that rephrases standard proofs of strong normalisation by orthogonality or reducibility candidates [Par97, LM08].
In the second instance, values are those pure λ-terms that can be typed with intersection types, for instance in system λ ⊆ ∩ .
Theorem 31 The structures M ⊥⊥SN and M ⊥⊥∩ (where Λλ∩ is the set of pure λ-terms typable in λ ⊆ ∩) are applicative orthogonality models.

Lemma 32 (Expansion)

Admittedly, once λ ⊆ ∩ has been proved to characterise SNλ (Theorems 10 and 53), the two points are identical and so are the two models M ⊥⊥SN and M ⊥⊥∩. But invoking this characterisation goes well beyond what is needed for either point: point 1 is a known fact of the literature; point 2 is a simple instance of Subject Expansion (Theorem 51 in the next section), not requiring Subject Reduction (Theorem 9), while both are involved at some point in the more advanced property SNλ = Λλ∩. In brief, as we are interested in comparing proof techniques for the strong normalisation of System S, the question of which properties are used, and in which order, matters.

Now using the Adequacy Lemma (Lemma 28), we finally get the corresponding corollary; for M ⊥⊥∩ we conclude of course by using Theorem 10. Notice that, if M is a pure λ-term, this entails M ∈ SNλ by choosing, in both models, σ to map every type variable to the empty set, and ρ to map every term variable to itself. Indeed we have:

Remark 34 In both structures M ⊥⊥SN and M ⊥⊥∩ we can check that, for all lists N of values and any term variable x, x ⊥⊥ N. Hence, for all valuations σ and all types A, x ∈ A σ.

However, if M is not a pure λ-term, it is not obvious how to derive an interesting normalisation result for M, given that the explicit substitutions / weakenings / contractions in M have disappeared in M term ρ (and, in the case of M ⊥⊥SN, relating SNλ to SNλS or SNλlxr is a further task). An idea would be to tweak the interpretation of terms so that every term is interpreted as itself, even if it contains explicit substitutions / weakenings / contractions. But proving axioms (A1) to (A6) then becomes much more difficult. This is however the direction taken by [Kes09] for the explicit substitution calculus λex, where the methodological cornerstone is a property called IE, which is nothing else but axiom (A4) in M ⊥⊥SN. For M ⊥⊥∩, it might be possible to prove the axioms by inspecting typing derivations and/or using Subject Expansion (Theorem 51 in the next section).
A quicker way is to depart from term models and turn the filter model M i F from Section 4.4 into an orthogonality model: a term is interpreted as the filter of the intersection types that it can be assigned (e.g. in λ ⊆ ∩ , see Definition 14), and orthogonality is defined in terms of filters being non-empty.
Strong Normalisation will then follow, in a very uniform way for the three calculi λ, λS, and λlxr, from the fact that terms typable with intersection types are themselves strongly normalising (Theorems 10, 13, 16 for λ ⊆ ∩ ).

Theorem 35
The structure M ⊥ ⊥ F := (E, D, @, ) (with the four components as defined in Section 4.1) is an applicative orthogonality model.
Remark 36 For M ⊥⊥F we now have: for all lists of values v, ⊤ ⊥⊥ v. Hence, for all valuations σ and all types A, ⊤ ∈ A σ. Now using the Adequacy Lemma (Lemma 28), we finally get:

Corollary 37 If G ⊢S M : A, then: for all valuations σ and all mappings ρ ∈ G σ, we have M ρ ≠ ⊥. Hence, there exist Γ and A such that Γ ⊢∩⊆ M : A. Finally, M ∈ SNλ (resp. M ∈ SNλS, M ∈ SNλlxr, according to the calculus considered).
Proof: The first statement holds because ⊥ ∉ D and A σ ⊆ D. To prove the second, we need to exhibit such a σ and such a ρ: take σ to map every type variable to the empty set and ρ to map every term variable to ⊤. The final result comes from Theorem 10 (resp. Theorem 13, Theorem 16).

Completeness
In Section 3 we have shown that, for the three calculi, if a term is typable with intersection types then it is strongly normalising. We have briefly mentioned that the converse is true; in this section we give a proof for the three calculi. Moreover, the typing trees obtained by these completeness theorems satisfy some interesting properties that will be used in the next sections (for the complexity results): they are optimal and principal.
The proof of completeness for λS is simpler than the one for the pure λ-calculus. This is why, in this section, we treat the λS calculus first.

Two properties of typing trees: Optimality and Principality
In the next sub-section we will notice that the typing trees produced by the proof of completeness all satisfy particular properties. In this section we define these properties; the first of them is optimality.
This property involves the following notions:

Definition 25 (Subsumption and forgotten types) • If π is a typing tree, we say that π does not use subsumption if, in every occurrence of the abstraction rule, the side-condition A ⊆ U is an instance of either A ≈ U or A ⊆ ω.
• We say that a type A is forgotten in an instance of rule (Abs) or rule (Subst) if in the side-condition of the rule we have U = ω.
• If a typing tree π uses no subsumption, we collect the list of its forgotten types, written forg(π), by a standard prefix, depth-first traversal of the typing tree π. ※

The optimality property also involves refining the grammar of types:

Definition 26 (Refined intersection types) The classes A+, A−, A−− and U−− are defined by a grammar separating output types from input types. We say that Γ is of the form Γ−− if, for all x, Γ(x) is of the form U−−. The degree of a type of the form A+ is the number of arrows occurring in negative positions.

We can finally define the optimality property:

Definition 27 (Optimal typing) A typing tree π concluding Γ ⊢∩⊆ M : A is optimal if
• there is no subsumption in π;
• every forgotten type B in π is of the form B+.
We write Γ ⊢opt M : A+ if such a π exists.
The degree of such a typing tree is defined from the degrees of the types involved. In this definition, A+ is an output type, A− is a basic input type (i.e. for a variable used once), and A−− is the type of a variable that can be used several times. The intuition behind this asymmetric grammar can be found in linear logic:

Remark 39 Intersection in a typing tree means duplication of resources, so intersections can be compared to the exponentials of linear logic [Gir87]. Having an optimal typing tree means that duplications are not needed in certain parts of the tree. In the same way, in linear logic, we do not need exponentials everywhere: a simple type T can be translated as a type T* of linear logic, but there are more refined translations T+ and T−, for which, in linear logic, T− ⊢ T* and T* ⊢ T+. The translation T+ is thus sound and uses fewer exponentials than the usual, naive translation; in some sense, it is more "optimal". The main drawback is that proofs of optimal translations do not compose easily.
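The degree from Definition 26 — arrows in negative positions, where the left-hand side of an arrow flips polarity — can be sketched concretely. The tuple encoding of types below is our own, and the paper's precise refined grammar of A+ is not reproduced; this is only the standard polarity-counting reading.

```python
def degree(t, negative=False):
    """Count the arrows of t that occur in negative position (sketch)."""
    tag = t[0]
    if tag == 'var':                     # ('var', a): no arrows
        return 0
    if tag == 'arr':                     # ('arr', A, B): A -> B
        here = 1 if negative else 0      # this arrow counts iff position is negative
        # the left of an arrow flips polarity, the right keeps it
        return here + degree(t[1], not negative) + degree(t[2], negative)
    if tag == 'cap':                     # ('cap', A, B): A ∩ B keeps polarity
        return degree(t[1], negative) + degree(t[2], negative)
    raise ValueError(f'unknown type constructor {tag!r}')

# a -> b has no negative arrow; (a -> b) -> c has exactly one.
assert degree(('arr', ('var', 'a'), ('var', 'b'))) == 0
assert degree(('arr', ('arr', ('var', 'a'), ('var', 'b')), ('var', 'c'))) == 1
```

For ((a → b) → c) → d the innermost arrow is back in positive position (two polarity flips), so the degree is again 1.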
We now introduce the second of these properties: the notion of principal typing.

Definition 28 (Principal typing)
A typing tree π of M is principal if it is optimal and of minimal degree: for every optimal typing tree π′ of M, δ(π) ≤ δ(π′). ※

λS
In order to prove the completeness of the typing system with respect to SNλS, we first show that terms in normal form (for some adequate notion of normal form) can be typed, and then we prove Subject Expansion for a notion of reduction that can reduce any term in SNλS to a normal form (which we know to be typed). In λS, Subject Expansion is true only for −→B,S (not for −→W).
We will prove that this is enough for completeness. The main reason is that −→W can be postponed w.r.t. −→B,S:

Lemma 40 (Erasure postponement)

The first two points are proved by inspection of the rules: a substitution never blocks computation.
The third point: Let L be an S,W-reduction sequence from M to M′. If L is not of the form −→*S −→*W, then L contains an adjacent pair −→W −→S, and by the second point we can replace it by −→+S −→+W to obtain a reduction sequence L′. This defines a non-deterministic rewriting on reduction sequences.
Each rewriting step either increases or preserves the length of L. By Lemma 11, M is strongly normalising for S,W, so the lengths of the rewritten sequences are bounded; hence, after a certain number of steps, the length no longer changes. From that point on, the rewriting only replaces −→W −→S by −→S −→W, and this terminates. Hence the whole rewriting terminates.
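The terminating phase of this argument can be simulated on reduction sequences seen as words over {S, W}. The sketch below is only illustrative and length-preserving: it performs just the final kind of step, swapping an adjacent −→W −→S into −→S −→W (not the length-increasing replacement of the lemma). It terminates because each swap removes one W-before-S inversion.

```python
def postpone(seq):
    """Bubble every W past the S steps that follow it.

    seq is a word over {'S', 'W'}; returns the reordered word
    and the number of swaps performed.
    """
    s, swaps = list(seq), 0
    while True:
        for i in range(len(s) - 1):
            if s[i] == 'W' and s[i + 1] == 'S':
                s[i], s[i + 1] = 'S', 'W'   # replace W.S by S.W
                swaps += 1
                break
        else:                               # no adjacent W.S left: done
            return ''.join(s), swaps

# Every sequence reaches the shape S*W*:
assert postpone("WSWS") == ("SSWW", 3)
assert postpone("SSWW") == ("SSWW", 0)
```

The number of swaps is exactly the number of (W, S) pairs out of order, which is why the simplified rewriting cannot loop.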
By taking a normal form of this rewriting, we get M −→*S −→*W M′. The normal forms for −→B,S are therefore "normal enough" to be easily typed:

Lemma 41 (Typability of B,S-normal terms) If M cannot be reduced by −→B,S, then there exist Γ and A such that Γ ⊢opt M : A.
Proof: By induction on M. We use the fact that if M cannot be reduced by −→B,S, then M must have one of a few specific forms, each of which can easily be typed by a principal typing tree using the induction hypothesis.
Remark 42 The algorithm given by the previous proof gives us a principal typing tree.
Proof: First by induction on −→ B,S and ≈, then by induction on A.
We adapt the proof of Subject Reduction.The optimality property, the degree, and the principality property are preserved in both directions (Subject Expansion and Subject Reduction): indeed, since we are considering −→ B,S and not −→ W , the interface (typing context, type of the term and forgotten types) is not changed and we do not add any subsumption.
Theorem 44 (Completeness) If M ∈ SN λS , then there exist Γ and A such that Γ ⊢ ⊢ ⊢opt M : A.
Proof: By induction on the length of the longest reduction sequence from M. If M can be reduced by −→B,S, we use the induction hypothesis; otherwise, M is typable by Lemma 41.
Remark 45 The algorithm given by the previous proof gives us a principal typing tree.

Pure λ-calculus
As in the case of λS, proving completeness of the typing system with respect to SN λ relies on the typability of (some notion of) normal forms and on some property of Subject Expansion.
For the λ-calculus, the normal forms that we consider are simply the β-normal forms. Typing them with the optimal typing trees of System λ ⊆ ∩ is therefore very reminiscent of [DCHM05], which applies a similar technique to type β-normal forms with the principal types of a system with idempotent intersections.
As we have seen for λS, proving completeness relies on the Subject Expansion property for a notion of reduction that can reduce any term in SN λ to a normal form (which we know to be typed).
As in λS, it is erasure that breaks Subject Expansion ((λx.M)N −→ M with x ∉ fv(M)). The problem here is that we cannot simply study Subject Expansion for those β-reductions that do not erase, because forbidding erasure can block a reduction sequence: in (λx.(λy.y))ab, for example, the only redex erases its argument.
So we have to define a restricted version of β-reduction that satisfies Subject Expansion, but that is still general enough to reach β-normal forms (which can be easily typed).
If M and N are λ-terms and E is a finite set of variables, then we define the judgements M E N and M ⇒E N by the rules of Fig. 7. These may not seem natural from a syntactic point of view; however, they are quite intuitive once one notices that they satisfy the following lemma. Moreover, if the typing of M′ is optimal, then the typing of M can be required to be optimal.
Proof: First by induction on M E M′ and M ⇒E M′, then by induction on A. We adapt the proof of Subject Reduction (Theorem 9).
Lemma 52 (Safe execution of a term) If M can be reduced by −→β, then there exist M′ and E such that M ⇒E M′ and, if M is not an abstraction, M E M′.
Proof: By induction on M .
• M cannot be a variable.
• If M is of the form λx.N, then N can be reduced by −→β. By induction hypothesis there exist N′ and E such that N ⇒E N′. Therefore λx.N ⇒E−{x} λx.N′.
• If M is of the form M1 M2, we are in one of the following cases:
– M1 is of the form λx.M3: then we can reduce the head redex (λx.M3)M2.
– M1 is not of the form λx.M3 and M1 can be reduced by −→β: by induction hypothesis, there exist M′1 and E such that M1 E M′1.
– M1 is not of the form λx.M3 and M1 is a β-normal form: then there exists x such that Acc(M1, x), and M2 can be reduced by −→β; by induction hypothesis, there exist M′2 and E such that M2 E M′2.

Theorem 53 (Completeness) If M ∈ SNλ, then there exist Γ and A such that Γ ⊢opt M : A.
Proof: By induction on the longest reduction sequence of M .
We can notice that, to prove the completeness of the pure λ-calculus, we only need a fragment of E and ⇒E; but by dealing with E and ⇒E in full we obtain, without extra difficulty, a further result. This result is purely syntactic; however, it is very hard to prove without intersection types (if we consider all of E and ⇒E and not just the head-reduction fragment).

λlxr
In this section we provide the guidelines for obtaining a similar completeness theorem for λlxr, leaving the details for further work. The methodology is similar to that of the λ-calculus and λS: we identify a notion of reduction for which Subject Expansion holds, and whose notion of normal forms can be easily typed.
As in the λ-calculus, some of the rules do not satisfy Subject Expansion, for instance x (W y (M)) −→ M{z := x}. Subject Expansion for the other reduction rules should be straightforward.
The fact that the last 3 rules above do not satisfy Subject Expansion is not problematic for the completeness theorem: like rule W in λS, we should prove that they can be postponed after the other rules, and that removing them from the system defines a new notion of normal forms that can still be typed.
On the other hand, the first rule above is more problematic: if we forbid it, reduction can be blocked (just as forbidding erasure can block a reduction in the pure λ-calculus). So a Subject Expansion result without that rule is not enough to prove the completeness of λlxr. Hence, we have two possibilities to achieve that goal: • We adapt the proof of the pure λ-calculus: we have to define E and ⇒E in λlxr.
• We adapt the proof of λS. We cannot do this in the usual λlxr: if we forbid erasure, reduction can be blocked. So we have to add the rules that move weakenings. These added rules respect Subject Reduction, so we still have soundness. Both approaches would provide Completeness (strong normalisation implies typability). Moreover, as in the pure λ-calculus and λS, the proof would provide an algorithm that constructs an optimal typing tree from a strongly normalising term of λlxr.

Complexity results
With the Subject Reduction theorems of the different calculi, we have proved that with every β- (or B-) reduction step, the measure of the typing tree strictly decreases. Hence, beyond strong normalisation, this gives us a bound on the number of β- (or B-) reduction steps in a reduction sequence. So we have a complexity result in the form of an inequality; we would like to refine it into an equality. The main idea is to perform only β- (or B-) reductions that decrease the measure of the typing tree by exactly one. Given a term M and an arbitrary typing tree for it, it is not always possible to find such a reduction step; but it is always possible provided the typing tree is optimal. Fortunately, every typable term M is typable with an optimal typing tree: by soundness, M is strongly normalising, and then, by completeness, M is typable with an optimal typing tree. This is the main reason for introducing the notion of optimality. As in Section 5, the case of λS is simpler than that of the pure λ-calculus, so we deal with it first.

λS
In λS, we take advantage of the fact that −→W can be postponed w.r.t. −→B,S steps. This allows us to concentrate on −→B,S and its normal forms.
Moreover, the degree of the typing tree does not change, and the principality property is preserved.
Proof: We simply check that, in the proof of Subject Reduction (Theorem 12), the optimality property, the degree and the principality property are preserved, as already mentioned in the proof of Subject Expansion (Theorem 43).
Proof: Again, we adapt the proof of Subject Reduction (Theorem 12); more precisely, see Appendix C.

Lemma 57 (Resources of a normal term)
If Γ ⊢ⁿopt M : A, and M cannot be reduced by −→B,S, then n is the number of applications in M.
Moreover, if the typing tree is principal, then n is the degree of the typing tree.
Theorem 58 (Complexity result) If Γ ⊢ⁿopt M : A, then n = n1 + n2, where:
• n1 is the maximum number of −→B steps in a B,S-reduction sequence from M;
• n2 is the number of applications in the B,S-normal form of M.
Moreover, if the typing tree is principal, then n2 is the degree of the typing tree.
Proof: The previous lemmas give us a B,S-reduction sequence from M to the normal form of M with n − n2 B-steps. In this reduction sequence, every B-reduction decreases the measure of the typing tree by exactly one.
Assume we have a B,S-reduction sequence from M to the normal form of M (λS is confluent) with m B-steps. By Subject Reduction (Theorem 12), the measure of the derivation typing the normal form of M is at most n − m, but it is also n2. Hence m ≤ n − n2.
Assume we have a B,S-reduction sequence from M to an arbitrary term M1. It can be completed into a B,S-reduction sequence, with at least as many B-steps, from M to the normal form of M1.

Lemma 63 (Relating λ and λS)

We proceed as follows: • Given two pure λ-terms M and N, note that M{x := N} is a pure λ-term, and it is easy to show, by induction on M and using erasure postponement (Lemma 40), the corresponding simulation property. The result is then a direct corollary, obtained by induction on n and using Lemma 40. Contrary to λS, we cannot have a complexity result under the weaker assumption of optimality, relating the measure to the number of applications in the normal form: this was possible in λS because we considered normal forms for a system that never erases, while here we cannot forbid β-reduction from erasing terms.

Other measures of complexity
In the pure λ-calculus we measure the (maximal) number of β-steps. The equivalent results for λS and λlxr naturally count the number of −→B-steps if we do not change the measure on the typing trees. But there are many other reduction rules in λS and λlxr, for which we may want similar complexity results. For some of these rules we can obtain such results without changing the typing system, by changing what we count in the typing trees.

Number of replacements
To get the number of replacements, we count in the typing tree the number of uses of the variable rule.
Theorem 65 (Complexity result on the number of replacements) The reduction sequence from M that is longest when counting −→B steps is also the one that is longest when counting −→SR steps (head-reduction strategy).
Moreover, the number of −→SR steps in this sequence, plus the number of variables in the normal form (without weakenings), is equal to the number of uses of the variable rule in an optimal typing tree.

Number of duplications
By counting the number of uses of the Intersection rule in the typing tree, we get a bound on the number of duplications (the number of uses of rules that duplicate a term).
However, contrary to the other measures, we cannot have an equality result. Here is a counterexample: (λx.x x)(λy.a y y). If we reduce this term to its normal form, we perform two duplications; however, any typing of this term uses the intersection rule at least 3 times.
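The two duplications can be checked mechanically. The sketch below is not from the paper: it is a de Bruijn-indexed β-reducer (all names are ours) that counts leftmost-outermost β-steps and, among them, the duplicating ones — those whose bound variable occurs at least twice in the body. On (λx.x x)(λy.a y y) it finds exactly two steps, both duplicating.

```python
from dataclasses import dataclass

# De Bruijn representation of pure lambda-terms (illustrative encoding).
@dataclass(frozen=True)
class Var:
    i: int

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def shift(t, d, c=0):
    """Shift the free indices >= c of t by d."""
    if isinstance(t, Var):
        return Var(t.i + d) if t.i >= c else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, c + 1))
    return App(shift(t.fun, d, c), shift(t.arg, d, c))

def subst(t, j, s):
    """Capture-avoiding substitution of s for index j in t."""
    if isinstance(t, Var):
        return s if t.i == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fun, j, s), subst(t.arg, j, s))

def occurrences(t, j=0):
    """Number of occurrences of index j in t."""
    if isinstance(t, Var):
        return 1 if t.i == j else 0
    if isinstance(t, Lam):
        return occurrences(t.body, j + 1)
    return occurrences(t.fun, j) + occurrences(t.arg, j)

def step(t):
    """One leftmost-outermost beta step: (reduct, duplicating?) or None."""
    if isinstance(t, App) and isinstance(t.fun, Lam):
        dup = occurrences(t.fun.body) >= 2
        return shift(subst(t.fun.body, 0, shift(t.arg, 1)), -1), dup
    if isinstance(t, App):
        for sub, rebuild in ((t.fun, lambda u: App(u, t.arg)),
                             (t.arg, lambda u: App(t.fun, u))):
            r = step(sub)
            if r is not None:
                return rebuild(r[0]), r[1]
        return None
    if isinstance(t, Lam):
        r = step(t.body)
        return (Lam(r[0]), r[1]) if r is not None else None
    return None

def count(t):
    """(total beta steps, duplicating beta steps) to normal form."""
    total = dups = 0
    while (r := step(t)) is not None:
        t, d = r
        total, dups = total + 1, dups + (1 if d else 0)
    return total, dups

# (lambda x. x x)(lambda y. a y y), with a the free index 0 at top level:
dup_body = Lam(App(App(Var(1), Var(0)), Var(0)))      # lambda y. a y y
M = App(Lam(App(Var(0), Var(0))), dup_body)
assert count(M) == (2, 2)   # two beta steps, both duplicating
```

The normal form is a N N with N = λy.a y y, reached in two β-steps; the gap with the (at least) three intersection uses is exactly the inequality discussed above.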

The other measures
• If we count the number of uses of the Abstraction rule, we again get a result on the maximum number of −→B steps in a reduction sequence; we just have to change the definition of the degree of a principal typing tree.
• Uses of the explicit substitution rule can be created or destroyed by Subject Reduction, so we cannot use them to obtain a complexity result.

Conclusion
We have defined a typing system with non-idempotent intersection types. We have shown that it characterises strongly normalising terms, in the pure λ-calculus as well as in the explicit substitution calculi λS and λlxr. This characterisation has been achieved in each case by strong versions of Subject Reduction and Subject Expansion, enriched with quantitative information: • By identifying a measure on typing derivations that is decreased by Subject Reduction, we have obtained a simple proof of strong normalisation that also provides upper bounds on longest reduction sequences.
• By either proving postponement of erasures (λS) or identifying appropriate sub-reduction relations (λ), we have shown how Subject Expansion guarantees the existence of typing derivations satisfying extra properties (optimality and principality), where the bounds are refined into an exact measure of longest reduction sequences. In the case of the λ-calculus, obtaining this exact equality departs from the issues addressed in e.g. [KW99, NM04], whose technology is similar to ours (as we found out a posteriori). Indeed, one of the concerns of this line of research is how the process of type inference compares to that of normalisation, in terms of complexity classes (these two problems being parametrised by the size of terms and a notion of rank for types). Here we have shown how such a technology can actually provide an exact equality specific to each λ-term and its typing tree. Of course, this only emphasises the fact that type inference is as hard as normalisation, but type inference as a process is not a concern of this paper.
Moreover, we have extended those results to λS and λlxr, and the technology can be adapted to other calculi featuring e.g. combinators, or algebraic constructors and destructors (to handle integers, products, sums, . . . ).
We have seen how the use of non-idempotent intersection types simplifies the methodology from [CS07] by cutting out a second use of reducibility techniques in proofs of strong normalisation for standard systems (here illustrated by the examples of simple types, System F, and idempotent intersections). We extended the methodology to prove strong normalisation results for λS and λlxr, providing the first direct proofs that we are aware of.
We have seen how the corresponding filter model construction can be done by orthogonality techniques; for this we have defined an abstract notion of orthogonality model which we have not seen formalised in the literature. As illustrated in Section 4.5.3, this notion allows a lot of work (e.g. proving the Adequacy Lemma) to be factorised while building models like M ⊥⊥SN, M ⊥⊥∩ and M ⊥⊥F. Comparing such instances of orthogonality models, we have seen the superiority of M ⊥⊥F for proving the strong normalisation results of λS and λlxr. Note that, while M ⊥⊥F and MiF share the same ingredients E, D, @ and ⟦·⟧, they differ in the way types are interpreted; see the discussion in Appendix A.
In [BL11b] we also compared the models in the way they enlighten the transformation of infinite polymorphism into finite polymorphism. We leave this aspect for another paper, as more examples should be computed to illustrate (and better understand) the theoretical result; in particular, we need to understand how and why the transformation of polymorphism does not require reducing terms to their normal forms. An objective could be to identify (and eliminate), in the interpretation of a type from System F, those filters that are not the interpretation of any term of that type. What could help here is to force filters to be stable under type instantiation, with the view that interpretations of terms are generated by a single F-type, i.e. a principal type.
Another aspect of this future work is to use the filter models to try to lift the complexity results that we have in the target system back into the source system, and to see to what extent the quantitative information can already be read off the typing trees of the source system. One hope would be to recover, for instance, results that are known for the simply-typed calculus [Sch82, Bec01], but with our methodology, which can be adapted to other source systems such as System F.
Finally, the appropriate sub-reduction relation for the λ-calculus, which we have used to prove Subject Expansion as generally as possible, also helps in understanding how and when the semantics of terms is preserved; see Appendix B. This is similar to [ABDC06], and future work should adapt their methodology to accommodate our non-idempotent intersections.

A Filter models: classical vs. intuitionistic realisability
The orthogonality method comes from the denotational and operational semantics of symmetric calculi, such as proof-term calculi for classical or linear logic.
In some sense, orthogonality only builds semantics in continuation-passing style, and (as we have seen) this still makes sense for typed λ-calculi that are purely intuitionistic. While this is sufficient to prove important properties of typed λ-terms such as strong normalisation, the models are unable to reflect some of their purely intuitionistic features. This phenomenon can be seen in the presence of a "positive" type (i.e. a datatype) P, for which P is not closed under bi-orthogonal and [P] is defined from P .

Proof (of Theorem 66): 1. Corollary of Subject Reduction (Theorem 9).
2. One inclusion is the previous case and the other is a corollary of Subject Expansion (Theorem 51).

Figure 2: Equivalence on intersection types. (Fresh sets of variables in bijection with X; in rule Merge, R z w (M) is the renaming of z by w in M.)

Figure 5: Reduction and equivalence rules of λlxr

4.3 An example: System F and the likes

Definition 15 (Types and Typing System) Types are built by the following grammar: A, B, . . . ::= α | A→B | A ∩ B | ∀αA, where α denotes a type variable, ∀αA binds α in A, types are considered modulo α-conversion, and ftv(A) denotes the free (type) variables of A.
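As a small executable reading of this grammar (the tuple encoding of types is our own, not the paper's), the free-type-variable function ftv can be sketched as:

```python
def ftv(t):
    """Free type variables of a type; 'all' binds its variable."""
    tag = t[0]
    if tag == 'var':            # ('var', a): a type variable
        return {t[1]}
    if tag in ('arr', 'cap'):   # ('arr', A, B): A -> B; ('cap', A, B): A ∩ B
        return ftv(t[1]) | ftv(t[2])
    if tag == 'all':            # ('all', a, A): ∀a.A, binding a in A
        return ftv(t[2]) - {t[1]}
    raise ValueError(f'unknown type constructor {tag!r}')

# ∀a.(a -> b) has exactly b free:
assert ftv(('all', 'a', ('arr', ('var', 'a'), ('var', 'b')))) == {'b'}
```

Working modulo α-conversion would additionally require a renaming function, which we do not sketch here.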

Notation 18

Given a set D, let D* be the set of lists of elements of D, with [] representing the empty list and u :: v representing the list with head u and tail v.

Definition 19 (Orthogonality model) An orthogonality model is a 4-tuple (E, D, ⊥⊥, ⟦·⟧) where:
• E is a set, called the support;
• D ⊆ E is a set of elements called values;
• ⊥⊥ ⊆ D × D* is called the orthogonality relation;
• ⟦·⟧ is a function mapping every term M (typed or untyped) to an element M ρ of the support, where ρ is a parameter, called environment, mapping term variables to values.
For any X, X ⊆ X⊥⊥ and X⊥⊥⊥ = X⊥.

Definition 21 (Lists and Cons construct) If X ⊆ D and Y ⊆ D*, then define X :: Y := {u :: v | u ∈ X, v ∈ Y}. ※

Definition 22 (Interpretation of types) Mappings from type variables to subsets of D* are called valuations.
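The closure properties of bi-orthogonality depend on ⊥⊥ alone (they hold for any relation), so they can be checked mechanically on a small finite instance. The sketch below uses an arbitrary relation on a three-element support; all names are illustrative.

```python
from itertools import combinations, product

D = [0, 1, 2]                                   # finite support of values
LISTS = [()] + [(u,) for u in D] + [tuple(p) for p in product(D, repeat=2)]

def orth(u, v):
    """An arbitrary orthogonality relation between a value and a list."""
    return (u + len(v) + sum(v)) % 2 == 0

def bot_lists(X):
    """Orthogonal of X ⊆ D: the lists orthogonal to every value of X."""
    return {v for v in LISTS if all(orth(u, v) for u in X)}

def bot_vals(Y):
    """Orthogonal of Y ⊆ D*: the values orthogonal to every list of Y."""
    return {u for u in D if all(orth(u, v) for v in Y)}

# Check X ⊆ X-bi-orthogonal and triple orthogonal = single orthogonal,
# for every subset X of D.
subsets = [set(c) for r in range(len(D) + 1) for c in combinations(D, r)]
for X in subsets:
    assert X <= bot_vals(bot_lists(X))
    assert bot_lists(bot_vals(bot_lists(X))) == bot_lists(X)
```

Both assertions hold whatever relation `orth` is chosen, which is the usual Galois-connection argument behind these identities.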

If G ⊢S M : A, then:
M ⊥⊥SN: for all valuations σ and all mappings ρ ∈ G σ, we have M term ρ ∈ SNλ.
M ⊥⊥∩: for all valuations σ and all mappings ρ ∈ G σ, there exist Γ and A such that Γ ⊢∩⊆ M term ρ : A, and therefore M term ρ ∈ SNλ.

Lemma 56 (Most inefficient reduction)
Assume Γ ⊢ⁿopt M : A. If M can be reduced by −→B and not by −→S, then there exist M′ and Γ

Theorem 64 (Complexity result)
If Γ ⊢ⁿopt M : A with a principal typing tree of degree d, then the length of the longest β-reduction sequence from M is n − d.

Proof: The two previous lemmas give us a β-reduction sequence of length n − d. Let L be another β-reduction sequence from M, of length m, so there exists M1 such that M −→mβ M1. By the previous lemma, there exists M2 such that M (−→B −→*S)m M2 and M2 −→*W M1. From the complexity result for λS we have m ≤ n − d.
interpretation
P→P = ( P :: [P])⊥
    = {u ∈ D | ∀v ∈ P , ∀v′ ∈ [P], u ⊥⊥ v :: v′}
    = {u ∈ D | ∀v ∈ P , ∀v′ ∈ [P], u@v ⊥⊥ v′}
    = {u ∈ D | ∀v ∈ P , u@v ∈ P ⊥⊥}
while model MiF would provide P→P = {u ∈ D | ∀v ∈ P , u@v ∈ P }.

B Preservation of semantics by reduction

When models are built for a typed λ-calculus, it is sometimes expected that the interpretation of terms is preserved under β-reduction (or even β-equivalence). This is not always necessary for the purpose of the model construction (here: proving normalisation properties), and it is clearly not the case for the term models M ⊥⊥SN and M ⊥⊥∩, where terms are interpreted as themselves (at least in the case of the pure λ-calculus). The case of the filter models M ⊥⊥F and MiF (which heavily rely on Theorem 21) is less obvious. Still, we can prove the following:

Theorem 66
1. If M −→β M′, then for all ρ, M ρ ⊆ M′ ρ.
2. If M E M′, then for all ρ, M ρ = M′ ρ.
3. If M −→B,S M′, then for all ρ, M ρ = M′ ρ.