Bounding linear head reduction and visible interaction through skeletons

In this paper, we study the complexity of execution in higher-order programming languages. Our study has two facets: on the one hand we give an upper bound to the length of interactions between bounded P-visible strategies in Hyland-Ong game semantics. This result covers models of programming languages with access to computational effects like non-determinism, state or control operators, but its semantic formulation causes a loose connection to syntax. On the other hand we give a syntactic counterpart of our semantic study: a non-elementary upper bound to the length of the linear head reduction sequence (a low-level notion of reduction, close to the actual implementation of the reduction of higher-order programs by abstract machines) of simply-typed lambda-terms. In both cases our upper bounds are proved optimal by giving matching lower bounds. These two results, although different in scope, are proved using the same method: we introduce a simple reduction on finite trees of natural numbers, hereby called interaction skeletons. We study this reduction and give upper bounds to its complexity. We then apply this study by giving two simulation results: a semantic one measuring progress in game-theoretic interaction via interaction skeletons, and a syntactic one establishing a correspondence between linear head reduction of terms satisfying a locality condition called local scope and the reduction of interaction skeletons. This result is then generalized to arbitrary terms by a local scopization transformation.


Introduction
In the last two decades there has been significant interest in the study of quantitative or intensional aspects of higher-order programs; in particular, the study of their complexity has generated a lot of effort. In the context of the λ-calculus, the first result that comes to mind is the work by Schwichtenberg [19], later improved by Beckmann [4], establishing upper bounds on the length of β-reduction sequences of simply-typed λ-terms. In the related line of work on implicit complexity, type systems have been developed to characterize extensionally certain classes of functions, such as polynomial [13] or elementary [11] time, but these require information on the terms whose extraction is in general as costly to obtain as actual execution. The present contribution belongs to the first family. However, unlike Beckmann and Schwichtenberg, our core tools are syntax-independent. Moreover, we focus on linear head reduction, the notion of reduction implemented by several call-by-name abstract machines [10], which is closer to the actual execution of functional programming languages.
Outline. In Section 2, we introduce a few basic notions and notations used in the rest of the paper. In Section 3 we present the game semantics framework from which interaction skeletons were originally extracted, and prove our semantic simulation result. Section 4 is a largely standalone section in which we study interaction skeletons and their reduction, and prove our main complexity result. Finally, Section 5 focuses on the syntactic implications of our study of skeletons and details their relationship with linear head reduction.
This paper is organized around the notion of interaction skeletons and their reduction, with two largely independent applications: to game semantics and to the complexity of linear head reduction. We chose to present the game-theoretic development first, since it motivates the definition of interaction skeletons. However, our intention is that the paper should be accessible to semantically-minded as well as more syntactically-minded readers. In particular, readers not interested in game semantics should be able to skip Section 3 and still have everything needed to understand Sections 4 and 5.


Preliminaries

Terms are subject to the usual typing rules defining the typing relation Γ ⊢ M : A (for definiteness, contexts Γ are taken to be sets of pairs x : A, where x is a variable name and A is a simple type). All terms in this paper are assumed well-typed, although we will not always make this explicit. Note that we work with the simply-typed λ-calculus à la Church, i.e. variables are explicitly annotated with types (although we often omit the annotations for the sake of readability). For each type A, there is a constant *_A : A of type A; we will often omit the index and simply write *. As usual, we write fv(M) for the set of free variables of a term M, i.e. the variables appearing in the term but not bound by a λ. If A is a type, we write id_A for the term λx^A.x : A → A. Terms are assumed to obey Barendregt's convention, and are considered up to α-equivalence. Note that our design choices (only one atom, every type inhabited) merely simplify the presentation and are not strictly required for our results to hold.

If Γ, x : A ⊢ M : B and Γ ⊢ N : A, we write M[N/x] for the substitution of x by N in M, i.e. M where all occurrences of x have been replaced by N. Although this paper focuses on linear head reduction (to be defined later), we will occasionally need β-reduction. It is the usual notion of reduction in the λ-calculus, defined as the context closure of (λx.M)N →_β M[N/x]. Likewise, we take η-expansion to be the context closure of M →_η λx^A.Mx, valid whenever M has type A → B for some types A, B and x ∉ fv(M).
The level of a type is defined by lv(o) = 0 and lv(A → B) = max(lv(A) + 1, lv(B)). Likewise, the level lv(M) of a term M is the level of its type. Finally, the order ord(M) of a term M is the maximal lv(N) over all subterms N of M. Within a term Γ ⊢ M : A such that (x : B) ∈ Γ, we write lv_M(x) = lv(B). The term M will generally be clear from the context, so we will simply write lv(x).
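As a concrete illustration of the level function, here is a minimal sketch in Python; the encoding of the atom o as the string "o" and of an arrow type A → B as the pair (A, B) is ours, purely for illustration.

```python
def lv(ty):
    """Level of a simple type: lv(o) = 0, lv(A -> B) = max(lv(A) + 1, lv(B))."""
    if ty == "o":          # the single atom
        return 0
    a, b = ty              # an arrow type A -> B encoded as the pair (A, B)
    return max(lv(a) + 1, lv(b))

# (o -> o) -> (o -> o), the type of Church integers over o:
church = (("o", "o"), ("o", "o"))
print(lv(church))  # max(lv(o -> o) + 1, lv(o -> o)) = max(2, 1) = 2
```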

Visible pointer structures and interaction skeletons
As we said before, the general purpose of this paper is to develop syntax-independent tools to reason about termination and complexity of programming languages. Game semantics provide such a framework: in this setting, programs are identified with the set of their possible interactions with the execution environment, presented as a strategy. All the syntactic information is forgotten, but at the same time no dynamic information is lost, since the strategy exactly describes the behaviour of the term within any possible evaluation context.
More importantly, game semantics can be seen both as a denotational semantics and as an operational semantics, in the sense that the interaction process at the heart of games is operationally informative and strongly related to the actual evaluation process as implemented, for instance, by abstract machines. This important intuition was made formal for the first time, to the author's knowledge, by Danos, Herbelin and Regnier in [9]. There, they showed that given a simply-typed λ-term M N_1 ... N_p where M and the N_i's are all η-long and β-normal, there is a step-by-step correspondence between:
• the linear head reduction sequence of M N_1 ... N_p,
• the game-theoretic interaction between the strategies ⟦M⟧ and Π_{1≤i≤p} ⟦N_i⟧.

In this paper, we will refer to simply-typed λ-terms of that particular shape as game situations, because it is in those situations that the connection between game semantics and execution of programs is the most direct. Indeed, (innocent) game semantics can be seen as a reduction-free way of composing Böhm trees, i.e. of computing game situations.
Game situations also provide the starting point of our present contributions. Indeed, the connection above reduces the termination and complexity analysis of the execution of a game situation M N_1 ... N_p to the syntax-independent analysis of the game-theoretic witness of this execution, i.e. the game-theoretic interaction between the strategies ⟦M⟧ and Π_{1≤i≤p} ⟦N_i⟧. More precisely, it turns out that from this interaction one just has to keep the structure of pointers in order to get a precise estimate of the complexity of execution.
In this section, we start by recalling a few basic definitions of Hyland-Ong game semantics. We show that the mechanism of composition gives rise to structures called visible pointer structures. We then show the main result of this section: that visible pointer structures can be simulated by a simple rewriting system called interaction skeletons.

3.1. Brief reminder of Hyland-Ong games. We start this section by recalling some of the basic definitions of Hyland-Ong games. The presentation of game semantics will be intentionally brief and informal, firstly because it is only there to provide context and is not a prerequisite for understanding the paper, and secondly because good introductions can easily be found in the literature, see e.g. [14].
We are interested in games with two participants: Opponent (O, the environment) and Player (P, the program). They play on directed graphs called arenas, which are semantic versions of types. Formally, an arena is a structure A = (M_A, λ_A, I_A, ⊢_A) where:

• M_A is a set of moves,
• λ_A : M_A → {O, P} is a polarity function indicating whether a move is an Opponent or Player move (O-move or P-move),
• I_A ⊆ M_A is a set of initial moves,
• ⊢_A ⊆ M_A × M_A is an enabling relation.
In our complexity analysis we will use notions of depth. A move m ∈ M_A has depth d ∈ N if there is an enabling sequence m_0 ⊢_A m_1 ⊢_A ... ⊢_A m_d = m where m_0 ∈ I_A, and no shorter enabling sequence exists; for instance, any initial move has depth 0. Likewise, an arena A has depth d if d is the largest depth of any move in M_A.
We now define plays as justified sequences over A: these are sequences s of moves of A, each non-initial move m in s being equipped with a pointer to an earlier move n in s satisfying n ⊢_A m. In other words, a justified sequence s over A is such that each reversed pointer chain s_{i_0} ← s_{i_1} ← ... ← s_{i_n} is a path on A (viewed as a graph). The role of pointers is to allow reopenings or backtracking in plays. When writing justified sequences, we will often omit the justification information if this does not cause any ambiguity. The symbol ⊑ will denote the prefix ordering on justified sequences, and s_1 ⊑_P s_2 (resp. s_1 ⊑_O s_2) will mean that s_1 is a P-ending (resp. O-ending) prefix of s_2. If s is a justified sequence on A, |s| will denote its length. If s is a justified sequence over A and Σ ⊆ M_A, then the restriction s↾Σ comprises the moves of s in Σ; pointers in s↾Σ are those obtained by following chains of pointers in s. If s = s_0 ... s_n is a justified sequence and i ≤ n, we write s_{≤i} for its prefix s_0 ... s_i. The legal plays over A are the justified sequences s on A satisfying the alternation condition, i.e. if tmn ⊑ s, then λ_A(m) ≠ λ_A(n). The set of legal plays on A is denoted by L_A.
Given a justified sequence s on A, it has two subsequences of particular interest: the P-view and the O-view. The view for P (resp. O) may be understood as the subsequence of the play where P (resp. O) only sees his own duplications. Practically, the P-view ⌜s⌝ of s is computed recursively by forgetting everything under Opponent's pointers, as follows:

⌜sm⌝ = ⌜s⌝ m        if m is a P-move,
⌜sm⌝ = m            if m is initial,
⌜s m t n⌝ = ⌜s⌝ m n  if n is an O-move and n points to m.
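The recursive computation of the P-view operates purely on pointers, so it can be phrased directly on pointer structures. A rough sketch in Python, under an encoding of ours: a play is a list ptr where ptr[i] is the index of the justifier of move i (None for the initial move), even indices being O-moves and odd indices P-moves.

```python
def p_view(ptr, i):
    """Indices of the moves in the P-view of the prefix ending at move i."""
    view, j = [], i
    while True:
        view.append(j)
        if j % 2 == 1:            # P-move: keep the immediately preceding move
            j -= 1
        elif ptr[j] is None:      # initial O-move: the view is complete
            break
        else:                     # non-initial O-move: jump to its justifier,
            j = ptr[j]            # forgetting everything in between
    return view[::-1]

# Move 4 is an O-move backtracking to move 1: everything between is forgotten.
print(p_view([None, 0, 1, 2, 1], 4))  # [0, 1, 4]
```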

P. CLAIRAMBAULT
The O-view ⌞s⌟ of s is defined dually, without the special treatment of initial moves.
In this subsection, we present several classes of strategies on arena games that are of interest in the present paper. A strategy σ on A is a set of even-length legal plays on A, closed under even-length prefix. A strategy from A to B is a strategy σ : A ⇒ B, where A ⇒ B is the usual arrow arena, with moves M_A + M_B, polarity λ̄_A on A and λ_B on B (where λ̄_A means λ_A with the O/P polarity reversed), initial moves I_B, and enabling relation ⊢_A + ⊢_B + I_B × I_A.
3.1.1. Composition. We define composition of strategies by the usual parallel interaction plus hiding mechanism. If A, B and C are arenas, we define the set of interactions I(A, B, C) as the set of justified sequences u over A, B and C such that u↾A,B ∈ L_{A⇒B}, u↾B,C ∈ L_{B⇒C} and u↾A,C ∈ L_{A⇒C}. If σ : A ⇒ B and τ : B ⇒ C, their parallel interaction is σ∥τ = {u ∈ I(A, B, C) | u↾A,B ∈ σ ∧ u↾B,C ∈ τ}, and their composition is σ;τ = {u↾A,C | u ∈ σ∥τ}. Composition is associative and admits copycat strategies as identities.
3.1.2. P-visible strategies. A strategy σ is P-visible if each of its moves points inside the current P-view. Formally, for all sab ∈ σ, b points inside ⌜sa⌝. P-visible strategies are stable under composition, and correspond to functional programs with ground type references [1].
3.1.3. Innocent strategies. The class of innocent strategies is central in game semantics, because of their correspondence with purely functional programs (or λ-terms) and their useful definability properties. A strategy σ is innocent if it is P-visible and if, for all sab ∈ σ and t ∈ σ such that ta ∈ L_A and ⌜ta⌝ = ⌜sa⌝, we have tab ∈ σ. Intuitively, an innocent strategy only takes its P-view into account to determine its next move. Indeed, any innocent strategy is characterized by a set of P-views. This observation is very important since P-views can be seen as abstract representations of branches of η-expanded Böhm trees (a.k.a. Nakajima trees [18]): this is the key to the definability process on innocent strategies. Arenas and innocent strategies form a cartesian closed category and are therefore a model of Λ; let us add for completeness that o is interpreted as the singleton arena and *_A is interpreted as the singleton strategy on A containing only the empty play.
3.1.4. Bounded strategies. A strategy σ is bounded if it is P-visible and if the length of its P-views is bounded: formally, there exists N ∈ N such that for all s ∈ σ, |⌜s⌝| ≤ N. Bounded strategies only have finite interactions [8]; this result corresponds loosely to the normalisation result for the simply-typed λ-calculus. Syntactically, bounded strategies include the interpretations of all terms of a higher-order programming language with ground type references, arbitrary non-determinism and control operators, but without recursion. This remark is important since it implies that our results hold for any program written with these constructs, as long as it does not use recursion or a fixed point operator.
Our complexity results, in the game-theoretic part of this paper, will be expressed as functions of the size of the involved bounded strategies, given by the maximal length of their P-views: |σ| = max_{s∈σ} |⌜s⌝|. When σ is an innocent strategy coming from a Böhm tree, |σ| is proportional to the height of this Böhm tree, as defined in the syntactic part of this paper.

3.2. Introduction of visible pointer structures.
We note here that the notion of P-view (and of its size) only takes into account the structure of pointers within plays, and completely ignores the actual labels of the moves. In fact, the underlying pointer structure of a play will be all we need to study the asymptotic complexity of execution.

Definition 3.1. The pure arena I_∞ is defined by M_{I_∞} = N, with λ_{I_∞}(n) = O if n is even and P otherwise, I_{I_∞} = {0}, and n ⊢_{I_∞} n + 1. A pointer structure is a legal play s ∈ L_{I_∞} with at most one initial move.

Note that for any arena A, a legal play s ∈ L_A with only one initial move can always be mapped to its pointer structure in L_{I_∞} by sending each move s_i to its depth, i.e. the number of pointers to be followed in s before reaching the initial move.
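The collapse from plays to pointer structures can be made concrete: each move is sent to its depth, obtained by following pointers back to the initial move. A small sketch (the list-of-justifiers encoding, where ptr[i] is the index of the justifier of move i and None marks the initial move, is ours):

```python
def depths(ptr):
    """Send each move of a play with a single initial move to its depth,
    i.e. the number of pointers followed before reaching the initial move."""
    out = []
    for i in range(len(ptr)):
        d, j = 0, i
        while ptr[j] is not None:   # walk the pointer chain back to the root
            d, j = d + 1, ptr[j]
        out.append(d)
    return out

# A play whose move 4 backtracks to move 1:
print(depths([None, 0, 1, 2, 1]))  # [0, 1, 2, 3, 2]
```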
The depth of a pointer structure s is the largest depth of a move appearing in s. Pointer structures retain some information about the control flow of the execution, but on the other hand forget most of the typing information. Our results will rely on the crucial assumption that the strategies involved act in a visible way. This is necessary, because non-visible strategies are able to express programs with general (higher-order) references, within which a fixpoint operator is definable, so termination is lost.

Definition 3.2. A visible pointer structure s is a pointer structure s ∈ L_{I_∞} such that:
• It is P-visible: for any s'a ⊑_P s, a points within ⌜s'a⌝.
• It is O-visible: for any s'a ⊑_O s, a points within ⌞s'a⌟.

We write V_d for the set of all visible pointer structures of depth at most d.
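Definition 3.2 can be checked mechanically. In a list-of-justifiers encoding of ours (ptr[i] is the justifier of move i, None for the initial move, even indices O-moves and odd indices P-moves), a sketch of a visibility checker:

```python
def view(ptr, i, player):
    """P-view (player=1) or O-view (player=0) of the prefix ending at move i."""
    out, j = [], i
    while True:
        out.append(j)
        if j % 2 == player:        # own move: keep the preceding move
            if j == 0:             # start of the play: the view is complete
                break
            j -= 1
        else:                      # other player's move: jump to its justifier
            if ptr[j] is None:     # initial move: stop
                break
            j = ptr[j]
    return out[::-1]

def is_visible(ptr):
    """Check both P-visibility and O-visibility of a pointer structure."""
    for i in range(1, len(ptr)):
        pol = i % 2                # 1 for a P-move, 0 for an O-move
        if ptr[i] not in view(ptr, i - 1, pol):
            return False
    return True

print(is_visible([None, 0, 1, 2, 1]))     # True
print(is_visible([None, 0, 1, 2, 1, 2]))  # False: move 5 leaves its P-view
```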
We are interested in the maximal length of a visible pointer structure resulting from the interaction between two bounded strategies. Since the collapse of plays to visible pointer structures forgets the identity of moves, all that remains of bounded strategies in this framework is their size. Therefore, for n, p ∈ N, we define the set n ⋈_d p of all visible pointer structures possibly resulting from an interaction of strategies of respective sizes n and p, in an ambient arena of depth d. In [8], we already examined the termination problem for visible pointer structures: we proved that any interaction between bounded strategies is necessarily finite. Therefore, since n ⋈_d p, regarded as a tree, is finitely branching, it follows that it is finite. So there is an upper bound N_d(n, p) on the length of any visible pointer structure in n ⋈_d p.

3.2.2. The visible pointer structure of an interaction. We take here the time to detail more formally our claim that an estimation of the length of visible pointer structures in n ⋈_d p is informative of the complexity of interaction between bounded strategies.
Let σ : A ⇒ B and τ : B ⇒ C be bounded strategies, of respective sizes n and p. We are interested in the possible length of interactions in σ∥τ. Of course, arbitrary such interactions are not bounded in size, since σ and τ both interact with an external opponent in A and C, whose behaviour is not restricted by any size condition. Therefore we restrict our interest to passive interactions, i.e. interactions u ∈ σ∥τ such that the only Opponent move in u↾A,C is a unique initial question in C. When σ and τ are both innocent and correspond to λ-terms, this corresponds by the results of [9] to a linear head reduction sequence from a game situation. In particular, the passivity condition ensures that the interaction stops if a free variable ever arrives in head position.

Proof. Since u is passive, it consists of a play •_C u' with •_C initial in C and u' ∈ L_B, possibly followed by a trailing move • in A or C. Take any strict prefix •_C u'' ⊑ u. Then u'' is either in σ or an immediate prefix of a play in σ. Writing ⌊u⌋ for the pointer structure of a play u (i.e. the play of I_∞ obtained by sending each move to its depth), we then have ⌊•_C u'⌋ ∈ n ⋈_d p. Indeed, the size of P-views and O-views in a play only depends on pointers, so it is unchanged by forgetting arena labels. Moreover, •_C u' ∈ L_{B⇒{•_C}} (where {•_C} is the singleton arena), which has depth d.
From the above, we deduce that the length of any passive interaction in σ∥τ is bounded in terms of N_d(n, p). So the study of visible pointer structures is sufficient to bound the length of interactions in terms of the sizes of the strategies involved.
3.2.3. Interaction skeletons and simulation of visible pointer structures. We now introduce interaction skeletons (or just skeletons for short), the main tool used in this paper to study the complexity of execution.
As we mentioned repeatedly, game-theoretic interaction corresponds to linear head reduction, which is itself efficiently implemented by machines with environments such as the Krivine Abstract Machine (KAM). In such machines, game situations produce by reduction situations where the interacting terms are no longer plain closed terms but rather terms-in-environments, also known as closures. Following this phenomenon, whereas the measure of the first move s_0 of a visible pointer structure is given by the sizes of the strategies involved (so by a pair of natural numbers), the measure of a later move s_i will be given by a finite tree of natural numbers reminiscent of the structure of closures.
We will call a pointed visible pointer structure a pair (s, i) where s is a visible pointer structure and i ≤ |s| − 1 is an arbitrary "starting" move. We adapt the notions of size and depth to them, and introduce a notion of context.

Definition 3.4. Let (s, i) be a pointed visible pointer structure. The residual size of s at i, written rsize(s, i), is defined as rsize(s, i) = max { |⌜s_{≤j}⌝| : s_i ∈ ⌜s_{≤j}⌝ }, where s_i ∈ ⌜s_{≤j}⌝ means that the computation of ⌜s_{≤j}⌝ reaches s_i. Dually, we have the notion of residual co-size of s at i, written rcosize(s, i), defined as rcosize(s, i) = max { |⌞s_{≤j}⌟| : s_i ∈ ⌞s_{≤j}⌟ }. The residual depth of s at i is the maximal length of a pointer chain in s starting from s_i.

Definition 3.5. Let s be a visible pointer structure. We define the context of (s, i) as:
• If s_i is an O-move, the set {s_{n_1}, ..., s_{n_p}} of O-moves appearing in s_{<i},
• If s_i is a P-move, the set {s_{n_1}, ..., s_{n_p}} of P-moves appearing in s_{<i}.

In other words, it is the set of moves to which s_{i+1} can point whilst abiding by the visibility condition, except s_i. We also need the dual notion of co-context, which contains the moves the other player can point to. The co-context of (s, i) is:
• If s_i is an O-move, the set {s_{n_1}, ..., s_{n_p}} of P-moves appearing in s_{<i},
• If s_i is a P-move, the set {s_{n_1}, ..., s_{n_p}} of O-moves appearing in s_{<i}.

Definition 3.6. A skeleton is a finite tree whose nodes and edges are both labeled by natural numbers. If a_1, ..., a_p are skeletons and n, d_1, ..., d_p are natural numbers, we write n[{d_1}a_1, ..., {d_p}a_p] for the skeleton with root labeled n and subtrees a_1, ..., a_p, the edge leading to a_i carrying label d_i. We now define what it means for (s, i) to respect a skeleton a.
Definition 3.7 (Trace, co-trace, interaction). The two notions Tr and coTr are defined by mutual recursion, as follows: let a = n[{d_1}a_1, ..., {d_p}a_p] be a skeleton. We say that (s, i) is a trace (resp. a co-trace) of a, denoted (s, i) ∈ Tr(a) (resp. (s, i) ∈ coTr(a)), if the following conditions are satisfied: rsize(s, i) ≤ 2n (resp. rcosize(s, i) ≤ 2n + 1), and if {s_{n_1}, ..., s_{n_p}} is the context of (s, i) (resp. co-context), then for each k ∈ {1, ..., p} we have (s, n_k) ∈ coTr(a_k) and the residual depth of s at n_k is less than d_k. Then, we define an interaction of two skeletons a and b at depth d as a pair (s, i) ∈ Tr(a) ∩ coTr(b) where the residual depth of s at i is less than d, which we write (s, i) ∈ a ⋈_d b.
Notice that we use the same notation both for natural numbers and skeletons. This should not generate any confusion, since the definitions above coincide with the previous ones in the special case of "atomic" skeletons: if n and p are natural numbers, then obviously s ∈ n ⋈_d p (according to the former definition) if and only if (s, 0) ∈ n[] ⋈_d p[] (according to the latter). In fact, we will sometimes write n for the atomic skeleton with n at the root and no children, i.e. n[].

Simulation of visible pointer structures.
We now introduce our main tool, a reduction on skeletons which simulates visible pointer structures: if a = n[{d_1}a_1, ..., {d_p}a_p] and b are skeletons (n > 0), we define the non-deterministic reduction relation on triples (a, d, b), where d is a depth (a natural number) and a and b are skeletons, by the following two cases:

(a, d, b) → (a_i, d_i − 1, (n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b])
(a, d, b) → (b, d − 1, (n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b])

where i ∈ {1, ..., p}, d_i > 0 in the first case and d > 0 in the second case.
In order to prove our simulation result, we will make use of the following lemma.
Lemma 3.8. Let (s, i) be a pointed visible pointer structure and a = n[{d_1}a_1, ..., {d_p}a_p] a skeleton such that (s, i) ∈ coTr(a). Then if s_j → s_i, we have (s, j) ∈ Tr(a).
Proof. Let us suppose without loss of generality that s_i is an Opponent move; the other case is obtained by switching Player/Opponent and P-views/O-views everywhere. Then s_j being a Player move, we have to check first that rsize(s, j) ≤ 2n, and the inequality is obvious. We now need to examine the context of (s, j). Since s_j is a Player move, it is defined as the set {s_{n_1}, ..., s_{n_p}} of Player moves appearing in s_{<j}, which is also the set of Player moves appearing in s_{<i} and therefore the co-context of (s, i). But (s, i) ∈ coTr(a), hence for all k ∈ {1, ..., p} we have (s, n_k) ∈ coTr(a_k), which is exactly what we needed.

Proposition 3.9 (Simulation). Let (s, i) ∈ a ⋈_d b. Then if s_{i+1} is defined, there exists a triple (a', d', b') with (a, d, b) → (a', d', b') such that (s, i + 1) ∈ a' ⋈_{d'} b'.

Proof. Suppose a = n[{d_1}a_1, ..., {d_p}a_p], and let {s_{n_1}, ..., s_{n_p}} be the context of (s, i). By visibility, s_{i+1} must either point to s_i or to an element of the context. Two cases:
• If s_{i+1} → s_i, we claim that (s, i + 1) ∈ b ⋈_{d−1} (n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b]: that is, (s, i + 1) ∈ Tr(b), (s, i + 1) ∈ coTr((n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b]), and the depth of s relative to i + 1 is at most d − 1. For the first part, we use that (s, i) ∈ a ⋈_d b: in particular, (s, i) ∈ coTr(b), and since s_{i+1} → s_i this implies by Lemma 3.8 that (s, i + 1) ∈ Tr(b). For the second part, we must first check that rcosize(s, i + 1) ≤ 2(n − 1) + 1; let us suppose without loss of generality that s_i is an Opponent move, all the reasoning below being adaptable by switching Player/Opponent and P-views/O-views everywhere. For the third part, we have to prove that the depth of s relative to i + 1 is at most d − 1, but this is obvious since the depth relative to i is at most d and s_{i+1} → s_i.
• Otherwise, we have s_{i+1} → s_{n_j} for some j ∈ {1, ..., p}. Then, we claim that (s, i + 1) ∈ a_j ⋈_{d_j−1} (n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b]. We do have (s, i + 1) ∈ Tr(a_j): because (s, i) ∈ Tr(n[{d_1}a_1, ..., {d_p}a_p]), we have (s, n_j) ∈ coTr(a_j), and thus (s, i + 1) ∈ Tr(a_j) by Lemma 3.8. It remains to show that (s, i + 1) ∈ coTr((n − 1)[{d_1}a_1, ..., {d_p}a_p, {d}b]) and that the depth of s relative to i + 1 is at most d_j − 1, but the proofs are exactly the same as in the previous case.
Before going on to the study of skeletons, let us give a last simplification. If a = n[{d_1}t_1, ..., {d_q}t_q] and b are skeletons, then a •_d b will denote the skeleton obtained by appending b as a new child of the root of a, with edge label d, i.e. n[{d_1}t_1, ..., {d_q}t_q, {d}b].
Consider the following non-deterministic rewriting rule on skeletons:

n[{d_1}a_1, ..., {d_p}a_p] → a_i •_{d_i−1} (n − 1)[{d_1}a_1, ..., {d_p}a_p]    (n ≥ 1, d_i ≥ 1)

Both rewriting rules on triples (a, d, b) are actually instances of this reduction, via the isomorphism (a, d, b) ↦ a •_d b. We leave the easy verification to the reader. Taking this into account, all that remains to study is this reduction on skeletons, illustrated in Figures 1 and 2 and analyzed in the next section.
If N(a) denotes the length of the longest reduction sequence starting from a skeleton a, we have the following property.

Proposition 3.10. If (s, 0) ∈ a ⋈_d b, then |s| ≤ N(a •_d b) + 1.

Proof. Obvious from Proposition 3.9, adding 1 for the initial move which is not accounted for by the reduction on skeletons.
We postpone the analysis of the reduction of skeletons to the next section. However, with the results proved there, we get the following result.

Proof. Upper bound. Follows from Proposition 3.10 and Theorem 4.17.
Lower bound. To construct the example providing the lower bound, it is convenient to consider the extension Λ^× of Λ with finite product types Π_{1≤i≤n} A_i. Tupling of terms Γ ⊢ M_i : A_i is written ⟨M_1, ..., M_n⟩, and π_i : Π_{1≤i≤n} A_i → A_i is the corresponding projection. Products are interpreted in the games model following standard lines [15]. In Λ^× (and Λ), we define higher types A_p for Church integers. For n, p ∈ N, we write n_p for the Church integer for n of type A_p. For A a type, if M : A → A and N : A, we write M^n(N) for the n-th iteration of M applied to N. Then for n, p, d ∈ N we define our two terms; by elementary calculations on λ-terms, we know that their application is β-equivalent to 2_p n_{d+1} 0. So, taking a maximal (necessarily passive) interaction, we know that u must have length at least 2_p n_{d+1}. Inspecting these strategies, we see that the left-hand one has size p + d + 2 and the right-hand one has size n + d + 3, and that they interact in an arena Π_{i=0}^{d+2} A_{d+1−i} of depth d + 3. It follows that the underlying pointer structure of u has length at least 2_p n_{d+1}, providing the lower bound. Although this example only proves the lower bound for d ≥ 4, it also holds for d = 3: this is proved using the same reasoning on the maximal interaction between the strategies ⟦p_0⟧ and ⟦λx.x^n(id_o)⟧, which has length at least p^n.

The strength of this result is that, being a theorem about interactions of strategies in game semantics, its scope includes any programming language whose terms can be interpreted as bounded strategies. Its weakness, however, is that it only applies syntactically to game situations. In order to increase the generality of our study and give exact bounds for the linear head reduction of arbitrary simply-typed λ-terms, we will in Section 5 detail a direct connection between linear head reduction and the reduction of skeletons. Before that, as announced, Section 4 focuses on the analysis of skeletons.

Skeletons and their complexity analysis
In the previous section we proved a simulation result: plays in the sense of Hyland-Ong games can be simulated by a reduction on simple combinatorial objects, interaction skeletons. In the present section we investigate the properties of this reduction independently of game semantics or syntax, proving among other things Theorem 4.17, used in the previous section.
As announced in the introduction, the rest of this paper (starting from here) is essentially self-contained. We start this section by recalling the definition of interaction skeletons and investigating their basic properties. Then, we will prove our main result about the length of their reduction.

Figure 1: Rewriting rule on skeletons
4.1.1. Skeletons and their dynamics. Interaction skeletons, or skeletons for short, are finite trees whose nodes and edges are labeled by natural numbers. To denote these finite trees, we use the notation n[{d_1}a_1, ..., {d_p}a_p] for the skeleton with root labeled n and subtrees a_1, ..., a_p, the edge leading to a_i carrying label d_i. Each natural number n can be seen as an atomic skeleton n[] without subtrees, still denoted by n; that should never cause any confusion. Given a skeleton a, we define:
• its order ord(a), the maximal edge label in a,
• its maximum max(a), the maximal node label in a,
• its depth depth(a), the maximal depth of a node in a, the root having depth 1.

We will also use the notation a •_d b for the skeleton a with a new child b added to the root of a, with edge label d: formally, if a = n[{d_1}a_1, ..., {d_p}a_p], then a •_d b = n[{d_1}a_1, ..., {d_p}a_p, {d}b]. With that in place, we define the reduction on skeletons by:

n[{d_1}a_1, ..., {d_p}a_p] → a_i •_{d_i−1} (n − 1)[{d_1}a_1, ..., {d_p}a_p]

which is allowed whenever n, d_i ≥ 1, so that no index ever becomes negative. The reduction is illustrated in Figure 1, and performed on an example in Figure 2 (where the node selected for the next reduction is highlighted). It is important to insist that this reduction is only ever performed in head position: n needs to be the actual root of the tree for the reduction to be allowed. We do not know which properties of this reduction are preserved in the generalized system where reduction can occur everywhere. Let us write N(a) for the norm of a skeleton a, i.e. the length of its longest reduction sequence. We will show later that it is always finite; in the meantime, for definiteness, we define it as a member of N ∪ {+∞}.
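The head reduction on skeletons and the norm N(a) are easy to prototype. In the sketch below, a skeleton n[{d_1}a_1, ..., {d_p}a_p] is encoded as a pair (n, [(d_1, a_1), ...]); the encoding and function names are ours, for illustration only.

```python
def step(sk, i):
    """One head reduction step choosing the i-th child of the root:
    n[{d1}a1, ..., {dp}ap] -> a_i o_{d_i - 1} (n-1)[{d1}a1, ..., {dp}ap],
    allowed when n >= 1 and d_i >= 1."""
    n, children = sk
    d_i, a_i = children[i]
    assert n >= 1 and d_i >= 1
    copy = (n - 1, list(children))               # (n-1)[{d1}a1, ..., {dp}ap]
    m, sub = a_i
    return (m, list(sub) + [(d_i - 1, copy)])    # append copy under edge d_i - 1

def norm(sk):
    """N(sk): length of the longest head reduction sequence from sk
    (finite for every skeleton, as shown in this section)."""
    n, children = sk
    return max(
        (1 + norm(step(sk, i))
         for i, (d, _) in enumerate(children) if n >= 1 and d >= 1),
        default=0,
    )

atom = lambda n: (n, [])
print(norm((1, [(1, atom(1))])))  # 1[{1}1] reduces once, then is stuck: 1
print(norm((1, [(2, atom(1))])))  # raising the edge label allows 2 steps: 2
```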
From this it follows that permuting subtrees in a skeleton does not affect the possible reductions in any way. Perhaps more surprisingly, it shows that two identical subtrees can be merged without any effect on the possible reductions: the number of copies of identical subtrees does not matter. Following this idea, we are going to show that any skeleton embeds into a simple thread-like one, and that this only increases the length of possible reductions.
so, they either take the maximum or the sum of the roots, and simply append all the subtrees of the a_i's. In the binary case, we write as usual + for the sum.

Lemma 4.4. We have the following embeddings:
• If (a_i)_{1≤i≤n} and (b_i)_{1≤i≤n} are finite families of skeletons such that for all 1 ≤ i ≤ n, we have
• If a, b, c are skeletons and d ∈ N, then:

Proof. Direct from the definitions.
4.2. Upper bounds. We calculate upper bounds on the length of possible reductions of skeletons. This is done by adapting a technique used by Schwichtenberg [19] and Beckmann [4] to bound the length of possible β-reduction sequences of simply-typed λ-terms.
The idea of the proof is to define an inductive predicate ⊢_ρ^α on terms/skeletons whose proofs/inhabitants combine aspects of a syntax tree and of a reduction tree; witnesses of this predicate are called expanded reduction trees by Beckmann [4]. Their mixed nature will allow us to define a transformation gradually eliminating their syntactic (or static) nodes, yielding an alternative expanded reduction tree for the term/skeleton under study, whose height is more easily controlled.
Definition 4.5. The predicate ⊢_ρ^α (where ρ and α range over natural numbers) is defined on skeletons in the following inductive way.

Definition 4.6. A context-skeleton a() is a finite tree whose edges are labeled by natural numbers, and whose nodes are labeled either by natural numbers or by the variable x, with the constraint that all edges leading to x must be labeled by the same number d; d is called the type of x in a(). We denote by a(b) the result of substituting all occurrences of x in a() by b. We denote by a(∅) the skeleton obtained by deleting from a all occurrences of x, along with the edges leading to them.

Proof. Straightforward by induction on the derivation of ⊢_ρ^α a.
Lemma 4.8 (Permutation lemma). If a′ is obtained from a by permuting some subtrees in a, then for all ρ, α, we have ρ α a iff ρ α a′.
Proof. Straightforward by induction on the derivation for ρ α a.
Lemma 4.9 (Null substitution lemma). If ρ α a(∅) and the type of x in a is 0, then for all b we still have ρ α a(b). Moreover, the witness includes as many Cut rules as that for ρ α a(∅).
Proof. We prove by induction on derivations ρ α a that the property above holds for all context-skeletons a′() such that the type of x in a′ is 0 and a′(∅) = a.
• Base. The root of a is 0, hence the result is trivial.
• Red. Suppose a′ has the form n[{d 1 }a′ 1 , . . ., {d p }a′ p , {0}x], where a′ 1 , . . ., a′ p possibly include occurrences of x (the case where x appears as a son of the root encompasses the others) and a i = a′ i (∅). The premises of Red are then that for 1 The IH on these premises gives witnesses for the two following properties: This covers all the possible reductions of a′(b), thus by Red we have
Proof. We prove by induction on derivations ρ α a that the property above holds for all context-skeletons a′() such that the type of x in a′ is d ≤ ρ + 1, and such that a = a′(∅).
• Base. The root of a is 0, hence the result is trivial.
• Red. Suppose a′ has the form n[{d 1 }a′ 1 , . . ., {d p }a′ p , {d}x], where a 1 = a′ 1 (∅), . . ., a p = a′ p (∅) (the case where x appears as a son of the root encompasses the others). The premises of Red are that for 1 and ρ α−1 (n − 1)[{d 1 }a 1 , . . ., {d p }a p ]. The IH on these premises gives witnesses for the two following properties: Which is what was required, up to permutation.
The following lemma is the core of the proof, allowing us to eliminate instances of the Cut rule in the expanded reduction tree.
a 1 (∅), hence by the substitution lemma (since a thanks to Lemma 4.7 (since it is always true that 2^(α+β−1) ≥ 2^(α−1) (2^(β−1) + 1)). If α = 0 then by IH we have ρ 0 a 1 and ρ β a 2 . We then use the substitution lemma (since which is stronger than what was required, whatever the value of β. The last remaining case is when α = 1 and β = 0: then by IH ρ 1 a 1 and ρ 0 a 2 , thus by the substitution lemma we have ρ 1 (a 1 • d a 2 ) as required.
The lemma above allows us to transform any expanded reduction tree into a purely dynamic one (using only rules Base and Red2). Now, we show how an expanded reduction tree can be automatically inferred for any skeleton.
Proof. First, let us show that the following rule Base' is admissible, for any α and ρ:
ρ α+n n
If n = 0 this is exactly Base. Otherwise we apply Red. There is no possible reduction, so the only thing we have to prove is which is by IH.
The lemma follows by applying Lemma 4.10 once for each node. Now, we show how to deduce, from a purely dynamic expanded reduction tree, a bound on the length of possible reductions.
Lemma 4.13 (Bound lemma). Let a be a skeleton; then if 0 α a, N (a) ≤ α.
Proof. First of all, we prove that if there is a witness for 0 α a, then it can be supposed Cut-free. We reason by induction on 0 α a.
• Base. The rule has no premise, so the witness tree for 0 α a is already Cut-free.
• Red. By IH, Cut can be eliminated in the premises of 0 α+1 a. Therefore by Red, there is a Cut-free witness for 0 α+1 a. Then, we prove the lemma by induction on the Cut-free witness tree for 0 α a:
• Base. Necessarily, the root of a is 0, thus N (a) = 0; there is nothing to prove.
• Red. The premises of 0 α a include in particular that for all a′ such that a reduces to a′, we have 0 α−1 a′. By IH, this means that for all such a′ we have From all this, it is possible to give a first upper bound by using the recomposition lemma, then iterating the cut elimination lemma. However, we will first prove here a refined version of the cut elimination lemma for ρ = 1, which will allow us to decrease by one the height of the tower of exponentials. First, we need the following adaptation of the substitution lemma:
Proof. We prove by induction on derivations 0 α a(∅) that the property above holds for all context-skeletons a′() such that the type of x in a′ is 1 and a = a′(∅).
• Base. The root of a is 0, hence the result is trivial.
• Red. Suppose a′ has the form n[{d 1 }a′ 1 , . . ., {d p }a′ p , {1}x], with a i = a′ i (∅) (the case where x appears as a son of the root encompasses the others). The premises of Red are then that for 1
• Cut. Suppose that a′ has the form (a′ 1 • 0 a′ 2 ) • 1 x, with a i = a′ i (∅); again the case where x is a child of the root encompasses the others. The premises of Cut are then 0 α a 1 and 0 γ a 2 .
We also have (a
Proof. By induction on the witness tree for 1 α a.
• Base. Trivial. From all of this put together, we deduce the main theorem of this section. This upper bound is optimal, since it yields bounds on linear head reduction that we will prove optimal in the next section.
4.3. On game situations. Before we conclude this section, let us mention a specialized form of our result, of particular importance for the previous section.
We also have And finally 1 + Σ_{k=1}^n 2p^k = 2(p^(n+1) − 1)/(p − 1) − 1, yielding the announced result. This result is particularly relevant in game situations: when studying the reduction length of one η-expanded Böhm tree applied to another. It also provides the answer to the question raised in the previous section about the possible length of bounded visible pointer structures, and hence of interactions between bounded strategies.
Remark 4.18. We finish this section with a few remarks on the above result:
• For d = 3, experiments with an implementation of skeletons and their reductions suggest that, for n ≥ 0 and p ≥ 2, N (n[{3}p]) = 2(p^n − 1)/(p − 1). This quantity is Θ(2^((n−1) log(p))) whereas our general bound predicts Θ(2^(n log(p))). They differ but do match up to an exponential, being both of the form 2^(Θ(n log(p))).
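The geometric-series computation closing the proof can be checked mechanically. A small Python sketch (the function names are ours) verifying, in exact integer arithmetic, that 1 + Σ_{k=1}^n 2p^k = 2(p^(n+1) − 1)/(p − 1) − 1:

```python
def series_lhs(n, p):
    """Left-hand side: 1 + sum_{k=1}^{n} 2*p^k."""
    return 1 + sum(2 * p**k for k in range(1, n + 1))

def series_rhs(n, p):
    """Right-hand side: 2*(p^(n+1) - 1)/(p - 1) - 1; the division is exact."""
    return 2 * (p**(n + 1) - 1) // (p - 1) - 1

# The two sides agree for every n >= 0 and p >= 2:
assert all(series_lhs(n, p) == series_rhs(n, p)
           for n in range(10) for p in range(2, 10))
```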

In fact for any d ≥ 3 we have N . The upper bound is our theorem above, and the lower bound is provided by the reduction on skeletons corresponding to the visible pointer structures used in the proof of Theorem 3.11. So, in this sense our result is optimal in game situations, just as the upper bound of Theorem 4.16 will later be shown optimal in the general situation.

Skeletons and linear head reduction
Although it is generally understood that game-theoretic interaction (underlying interaction skeletons) has a strong operational content, the game-theoretic toolbox lacks results making this formal. One notable exception is the result of Danos, Herbelin and Regnier [9] already mentioned, which describes a step-by-step correspondence between the linear head reduction sequence of a game situation M N 1 . . . N n (where M, N 1 , . . ., N n are β-normal and η-long) and the interaction of the corresponding strategies. Along with Theorem 4.17, this connection suffices to immediately deduce an (optimal) upper bound on the length of reduction sequences on game situations. However, this reasoning has two drawbacks. Firstly, it is rather indirect: the link it provides between linear head reduction and interaction skeletons, two relatively simple combinatorial objects, is obscured by the variety of mathematical notions involved. Indeed this connection requires elaborate semantic notions such as visible strategies and pointer structures, and third-party results such as the (very technical) result of [9]. Secondly, it only covers game situations, and it is not clear how to obtain from it general results on arbitrary terms.
In this final section we address these two points and proceed to analyse the direct connection between interaction skeletons and syntactic reduction. This study culminates in optimal upper bounds on the length of linear head reduction sequences on arbitrary simply-typed λ-terms. This requires us, on the one hand, to construct a generalization of game situations whose reduction follows the combinatorics of interaction skeletons and, on the other hand, to show that one can compile arbitrary terms into these generalized situations in a way that allows us to obtain our upper bounds.
In Subsection 5.1 we give the definition of linear head reduction and prove some basic properties. In Subsection 5.2 we define and study generalized game situations, and in Subsection 5.3 we prove the technical core of this section: the fact that lhr on generalized game situations can be simulated within interaction skeletons. Finally, Subsections 5.4 and 5.5 are devoted to dealing respectively with η-expansion and with λ-lifting, in order to compile arbitrary terms to generalized game situations and deduce our results.
5.1. Linear head reduction. We start by recalling the definition of linear head reduction and proving some basic properties that are folklore but, to our knowledge, unpublished under this formulation. Our notion of linear head reduction follows [9]. We use it rather than the more elegant approach of Accattoli [2] because we believe it yields a more direct relationship with games. Indeed the multiplicative reductions of Accattoli's calculus have no counterpart in games/skeletons, which only take into account the variable substitutions.
5.1.1. Definition of linear head reduction. This work focuses strongly on linear substitution, for which only one variable occurrence is substituted at a time. In this situation, it is convenient to have a distinguished notation for particular occurrences of variables. We will use the notations x 0 , x 1 , . . . to denote particular occurrences of the same variable x in a term M . When in need of additional variable identifiers, we will use x 1 , x 2 , . . . . Sometimes, we will still denote occurrences of x by just x when their index is not relevant. If x 0 is a specific occurrence of x, we will write M [N/x 0 ] for the substitution of x 0 by N , leaving all other occurrences of x unchanged.
Intuitively, lhr proceeds as follows. We first locate the head variable occurrence, i.e. the leftmost variable occurrence in the term M . Then we locate the abstraction, if any, that binds this variable. Then we locate (again if it exists) the subterm N of M in argument position for that abstraction, and we substitute the head occurrence by N . We touch neither the other occurrences of x nor the redex. It is worth noting that locating the argument subterm can be delicate, as it is not necessarily part of a β-redex. For instance in (λy A .(λx B .x 0 M ))N 1 N 2 , we want to replace x 0 by N 2 , even though N 2 is not directly applied to λx B .x 0 M . Therefore, the notion of redex will be generalized.
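This search for the substituting argument can be sketched concretely. The Python fragment below is our own illustration, not the paper's formalism: terms are nested tuples, binder names are assumed pairwise distinct (Barendregt's convention), and no α-renaming is performed. It walks the head spine, pairs each abstraction with the matching pending argument as in the example above, and substitutes only the head occurrence:

```python
def lhr_step(t):
    """One linear head reduction step, or None if the head occurrence is free
    (head-normal). Terms: ('var', x), ('lam', x, body), ('app', f, a)."""
    def go(t, args, env):
        if t[0] == 'app':                      # stack the argument, descend left
            f2 = go(t[1], args + [t[2]], env)
            return None if f2 is None else ('app', f2, t[2])
        if t[0] == 'lam':                      # pair with the most recent pending
            x, body, env2 = t[1], t[2], dict(env)  # argument: a redex (λx, N)
            if args:
                env2[x] = args[-1]
                args = args[:-1]
            b2 = go(body, args, env2)
            return None if b2 is None else ('lam', x, b2)
        if t[0] == 'var':                      # head occurrence: substitute only it
            return env.get(t[1])
        return None                            # constant in head position

    return go(t, [], {})

# (λy.(λx. x m)) n1 n2: the head occurrence of x is replaced by n2,
# while both surrounding redexes are left in place.
term = ('app', ('app', ('lam', 'y', ('lam', 'x',
        ('app', ('var', 'x'), ('var', 'm')))), ('var', 'n1')), ('var', 'n2'))
```

Running `lhr_step` on `term` substitutes just the head occurrence, matching the example: x is bound by the second abstraction on the spine, so it receives the second applied argument n2.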
Note that a term is necessarily of the form * M 1 . . . M n , x 0 M 1 . . . M n , λx.M or (λx.M ) M 1 . . . M n . This will be used quite extensively to define and reason on lhr. The length of a term M is the number of characters in M .
Definition 5.1. Given a term M , we define its set of prime redexes. They are written as pairs (λx, N ) where N is a subterm of M , and λx is used to denote the (if it exists, necessarily unique by Barendregt's convention) subterm of M of the form λx.N′. We define the prime redexes of M by induction on its length, distinguishing several cases depending on the form of M .
The head occurrence of a term M is the leftmost occurrence of a variable or constant in M . If (λx, N ) is a prime redex of M whose head occurrence is an occurrence x 0 of the variable x, then the linear head reduct of M is M [N/x 0 ]. Given a term M , we overload the notation N and write N (M ) for the length of the lhr sequence of M . It is straightforward to see that lhr is compatible with β-reduction, in the sense that if M → lhr M′ we have M ≡ β M′. Since redexes for lhr are not necessarily β-redexes, it will be necessary to consider the following generalization of redexes:
5.2.2. Local scope. Now that we have a notion of η-long term stable under composition, let us consider the syntactic counterpart of the second aspect of skeletons: that their reduction is local. It is not clear at first what local means in this context: just as a game situation consists in two η-long normal forms interacting, a generalized game situation will consist in a "tree" of η-long normal forms. Let us start with this slightly naive definition:
Definition 5.19. A term M is strongly locally scoped (abbreviated sls) iff for any generalized redex (λx, N ) in M , N is closed.
Unfortunately, sls terms are not quite fit as a syntactic counterpart of skeletons: they are not preserved by lhr. It is easy to find a counter-example, for instance: Here, a new generalized redex (λz, y) is formed where y is obviously not closed. Therefore, we must make skeletons correspond instead with a generalization of sls terms that is preserved by lhr. This generalization comes from the observation that in the right hand side term above, the violation of strong local scope is mitigated by the fact that the violating variable y is part of a generalized redex, so its value is somehow already provided by an environment. Hence the following definition:
Definition 5.20. A variable x in M is active iff it is a free variable or if there is a generalized redex (λx, N ) in M . It is passive otherwise. A term M is locally scoped (abbreviated ls) if for any generalized redex (λx, N ) in M all the free variables in N are active in M .
Local scope will be sufficient to ensure that the interpretation to skeletons is a simulation, but the correspondence between terms and skeletons will be tighter for sls terms: the tree structure of the skeleton will match the tree structure of nested generalized redexes.
Of course, we now need to prove that locally scoped terms are preserved by lhr. However, this is still not true! Indeed, consider the following reduction: λy.(λx.x y) (λz.z) → lhr λy.(λx.(λz.z) y) (λz.z) The left hand side term is (strongly) locally scoped, but the right hand side term is not, because y is passive but appears in the argument of a generalized redex. However, the problem disappears if we apply the two terms above to a constant * . In general, we will show that closed locally scoped terms of ground type are preserved by lhr. This may seem like a big restriction but it is not: an arbitrary term can be made closed and of ground type without changing its possible reduction sequences significantly, by replacing its free variables with constants and applying it to as many constants as required.
To prove stability of local scope under lhr, we start with stability under substitution.
Lemma 5.21. If M is a locally scoped term of ground type with head occurrence x 0 of a variable x, and N is a locally scoped term, then M [N/x 0 ] is locally scoped.
Proof. By induction on the length of M (of ground type, so not an abstraction).
Proof. By induction on the length of M , writing (λx, S) for the prime redex fired in M → lhr M′. We only detail the non-trivial cases.
. . . M n is ls as well, and from this it follows that M [S/x 0 ] is ls.
where x 0 is an occurrence of x. Then, necessarily N M 2 . . . M n is ls as well. Likewise, M 1 is ls, otherwise M could not be. Therefore by Lemma 5.21, N [M 1 /x 0 ] M 2 . . . M n is ls, and from this it follows that M′ is ls.
We say that a term M is a generalized game situation if it is closed, of ground type, and both η-long and locally scoped. By Lemmas 5.18 and 5.22, we know that generalized game situations are preserved by linear head reduction.
5.3. Simulation of generalized game situations. We start this subsection by showing how one can associate a skeleton with any generalized game situation; in fact with any term, although this connection will only yield a simulation for generalized game situations.
Definition 5.23. Let Γ M : A be a term with a bs-environment ρ, a bs-environment being a partial function mapping each variable x of Γ on which it is defined to a skeleton ρ(x).
Then the skeleton M ρ is defined by induction on the length of M , as follows: We write M for M ∅ .
5.3.1. Simulation of generalized game situations. We now prove that − is a simulation. Because − over-approximates terms it will not directly relate → lhr and . Rather, we will have that if M → lhr M′, then there is a skeleton a such that M a ← M′ . This relaxed simulation will suffice for our purposes since by Lemma 4.1 it implies that N ( M ) ≤ N (a). We show in Figure 3 the skeletons corresponding to all lhr-reducts of the term (λf o→o .λx o .f (f x)) (λy o .y) * o from Example 5.2, with explicit typing.
We now aim to prove our simulation result for generalized game situations.
Proof. Straightforward by induction on the length of M .

5.3.2. Relating generalized game situations and their interpretation. To estimate lhr on generalized game situations, we need to define measures on terms that reflect the geometry of the corresponding skeletons. So instead of the quantities traditionally used to evaluate the complexity of λ-terms (such as height or length), we use two alternative quantities.
Definition 5.32. The depth depth(M ) of M is defined by induction on the length of M : Likewise, the local height lh(M ) of a term M is defined by: We now aim to prove that these indeed reflect quantities on the corresponding skeletons. Lemma 5.33 deals with depth, Lemma 5.34 with local height and Lemma 5.35 with order.
Lemma 5.33. If ρ is a bs-environment, we set depth(ρ) = max x∈dom(ρ) depth(ρ(x)). Then, for each strongly locally scoped term M with a bs-environment ρ, we have: In particular, depth( M ) ≤ depth(M ).
Proof. By induction on the length of M , detailing the non-trivial cases.
• If M = x 0 M 1 . . . M n and ρ is not defined on x, we have depth(M ) = max 1≤i≤n depth(M i ).
On the other hand, x 0 M 1 . . . M n ρ = (1 + n i=1 M i ρ ). But then we have: where the first equality is by definition of +, depth, and , the inequality is by IH, and the last equality is by definition of maximum and depth.
• If M = x 0 M 1 . . . M n with ρ(x) defined, we still have depth(M ) = max 1≤i≤n depth(M i ). On the other hand, ≤ max( max 1≤i≤n depth( M i ρ ), depth(ρ) + 1) where the first equality is by definition of •, the first inequality is by definition of + and depth(ρ), and the second inequality is by IH and the definition of max.
• If M = λx.M′, then it follows directly from the IH.
We calculate: where the first equality is by definition of the interpretation, the second line is by IH, the third line is by definition of depth on environments, and the fourth line uses that, since M is strongly locally scoped, M 1 must be closed, therefore ρ M 1 ∅ hence by Lemma 5.25 we have M 1 ρ = M 1 ∅ . The fifth line is by IH on M 1 , and the last two lines are by easy manipulations on maximums (using that depth is always greater than one) and the definition of depth.
Proof. By induction on the length of M , omitting basic manipulations of expressions.
Proof. By induction on the length of M , omitting some basic manipulations. (lv(y) + 1)) We can now summarize the results of this subsection with the following proposition.
Proposition 5.36. If M is a strongly locally scoped, η-long term of ground type, then:
Proof. There is a subterm N such that N → η λy.N y 0 . But lv(λy.N y 0 ) = lv(N ) and lv(N y 0 ) ≤ lv(N ), so the new subterms have lower level than the original ones.
Finally, it remains to note that η-expansion preserves strong local scope.
Lemma 5.48. If M is strongly locally scoped and M → ηr M′, then M′ is sls.
Proof. By straightforward induction on the length of M .
Proposition 5.49. If M is a term, then there is an η-long term M′ such that: Moreover if M was strongly locally scoped, so is M′.
Proof. By Lemma 5.44, there is M′ such that M → * ηr M′ and there is no further restricted η-expansion. By definition, M′ is η-long. Moreover, the preservation of depth, order, local height and norm follows respectively from Lemmas 5.46, 5.47, 5.45 and Proposition 5.42. The construction preserves strong local scope by Lemma 5.48.
5.4.3. Bounds for strongly locally scoped terms. Putting everything together, we estimate:
Proposition 5.50. Suppose M is a sls term of order at least one. Then,
Proof. If M : A 1 → . . . → A n → o is a sls term, we first make it of ground type by forming Γ M * A 1 . . . * A n : o; its norm can only increase, the other quantities stay unchanged and the term is still sls. By Proposition 5.49, there is M′ η-long, of ground type, and sls such that lh(M′) ≤ lh(M ) + ord(M ), depth(M′) = depth(M ), ord(M′) = ord(M ) and N (M′) ≥ N (M ). We conclude by Proposition 5.36 and Theorem 4.16.
We now prove the optimality of this upper bound by exhibiting a family of terms whose reduction length asymptotically reaches it. This family of terms is closely related to the example used in Section 3 for game situations. For n, k, p ≥ 0 and M : A p , we define: M ) : A p and that for all q ∈ N, [n] k p (q p ) → * β q n k p . Exploiting this construction we set, for n, k, p ≥ 0: for which it is immediate to check that for all n, k, p ≥ 0 we have S n,k,p → * β 2 2 n k p 0 . Moreover, by construction of S n,k,p , for n ≥ 2 and p, k ≥ 1 we have lh(S n,k,p ) = n + 1, depth(S n,k,p ) = k + 1 and ord(S n,k,p ) = p + 3, and S n,k,p is sls. To deduce a lower bound from this, we use: In particular, reduction length for sls second-order terms of fixed depth is bounded by a polynomial of degree less than the depth.
5.5. Generalization to arbitrary terms. In this final subsection, we deduce from the study of lhr of ls terms a bound on the length of lhr of arbitrary terms. The key observation is that any λ-term can be transformed into a locally scoped form through λ-lifting [16].
5.5.1. Lambda-lifting to sls terms. Take a term M = λx A .(λy A .y) x. Obviously, M is not sls: indeed there is a prime redex (λy, x) and the subterm x has x free. In order to make the variable x "local", we modify the abstraction subterm λy.y to forward explicitly the variable x. We get the term M′ = λx A .(λy A→A .y x)(λx A .x). The type of y has changed, but not the type of the overall term. Note that the terms M and M′ are still β-equivalent, although we are not going to use that explicitly. More importantly, the norm has increased, the order has increased by one, and the other quantities are essentially unchanged. We formalize this construction by the λ-lifting expansion → λl , defined in Figure 4.
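The worked λ-lifting example above can be sanity-checked denotationally. In the following sketch (ours, with Python closures standing in for λ-terms and the variable names chosen for readability), both the original term and its lifted form compute the same function, witnessing their β-equivalence on sample inputs:

```python
# M  = λx. (λy. y) x            -- prime redex (λy, x) with x free: not sls
M = lambda x: (lambda y: y)(x)

# M' = λx. (λy. y x) (λx'. x')  -- the abstraction now receives x explicitly;
# y's type changed from A to A -> A, the overall type did not.
M_lifted = lambda x: (lambda y: y(x))(lambda x2: x2)

# Both denote the identity on their argument:
assert all(M(v) == M_lifted(v) == v for v in (0, 1, "a"))
```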
First, we prove that → λl indeed allows us to convert any term M into a sls term M′. This is done by showing that → λl terminates, and that its normal forms are sls.
Proof. By induction on N (M ). By Lemma 5.57, there are terms S, T such that: By Proposition 5.12, we have N (S) ≤ N (T ) = N (M ) − 1. Moreover, we have a chain: But since N (S) < N (M ), it follows by immediate induction that N (N ) ≤ N (S). Therefore, we have
5.5.3. How λ-lifting preserves other quantities on terms. Finally, we examine how λ-lifting affects the other quantities on terms. We first notice that it preserves the depth.
Proof. First, we prove by induction on M that for any x free in M and variable y, we have depth(M ) = depth(M [x y/x]). The lemma follows by induction on the definition of → λl .
Unfortunately, it does affect the order and the local height; however, we will show that multiple applications of → λl can only change them by one. To prove this, we will construct weighted variants of order and local height that give different weight to variables according to their behaviour with respect to → λl . By design they will be preserved by → λl , but will remain within small bounds of the original level and local height. We start with the order.
Say that a variable x in a term M is local in M iff for any generalized redex (λy, N ) of M , x does not appear free in N . If x is a variable of M , define the weighted level lv′(x) as lv(x) + 1 if x is local and lv(x) + 2 otherwise. Then, define the weighted order ord′(M ) as the maximum over all lv′(x) for variables x in M , and all lv(A) for constants * A in M . First, we prove that ord′(M ) remains closely related to ord(M ).
Proof. We prove first that ord(M ) ≤ ord′(M ). If M admits as a subterm a constant * A with ord(M ) = lv(A), then ord′(M ) ≥ lv(A) ≥ ord(M ). Otherwise, M has a subterm λx A .M′ such that ord(M ) = lv(A) + 1; indeed if a subterm of M of maximal level has the form M 1 M 2 then M 1 has higher level, and if it has the form λx A .M′ with lv(M′) > lv(A) + 1 then M′ still has maximal level, so the property follows by induction. So in particular M has a variable x with ord(M ) = lv(x) + 1, which is at most ord′(M ).
We now prove that ord′(M ) ≤ ord(M ) + 1. Firstly, if there is a constant * A in M such that ord′(M ) = lv(A), then ord(M ) + 1 ≥ ord(M ) ≥ lv(A) as well. Secondly, if there is a local variable x A such that ord′(M ) = lv(A) + 1, then since M is closed it has a subterm of the form λx A .M′, so ord(M ) + 1 ≥ lv(λx A .M′) + 1 ≥ lv(A) + 2 > lv(A) + 1. Finally, if there is a non-local variable x A such that ord′(M ) = lv(A) + 2, the same reasoning applies.
Since y is free in M l it is not a carrier variable, so applying the property above it is direct that lh′(M l ) = lh′(M r ).
We do this by applying the tools developed earlier to get an upper bound on the length of reduction, and then prove a matching lower bound by providing terms whose length of reduction asymptotically reaches the upper bound.
Proof. Start with a term M , call it M 0 . Without loss of generality we can consider M closed; otherwise we replace occurrences of free variables by occurrences of constants, without changing the norm and only reducing the local height, depth and order. Then, we apply the following transformations.
First we expand variables: by Lemma 5.65, there is a closed term M 1 with: Then, we make it strongly locally scoped. By Lemma 5.64, there is M 2 sls such that: By Lemma 5.51 it follows that N (B p k id o ) ≥ 2 k p+2 . It is direct to check that ord(B p k ) = p + 2 and h(B p k ) = k + 3 (for k ≥ 1), concluding the proof.
For a term M of height h and order n, Beckmann's results [4] predict that any β-reduction chain of M terminates in less than 2 Θ(h) n+1 steps. It might seem counter-intuitive that our bound (with lhr) is smaller than Beckmann's (with β-reduction), since we substitute only one occurrence at a time, which is obviously longer. However, Beckmann considers arbitrary β-reduction, not head β-reduction. The possibility of reducing in arbitrary locations of the term unlocks much longer reductions, since higher-order free variables or constants can isolate sections of the term that will never arrive in head position but can still be affected by arbitrary β-reduction. The fact that the length of lhr has the same order of magnitude as head β-reduction is not surprising in the light of Accattoli and Dal Lago's recent result [3] that a similar notion of lhr is quadratically related to head reduction.
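The tower notation 2 Θ(h) n+1 in Beckmann's bound denotes an iterated exponential. A minimal sketch of the standard convention (the helper name is ours):

```python
def tower(k, n):
    """Iterated exponential 2_k(n): 2_0(n) = n and 2_{k+1}(n) = 2 ** 2_k(n).
    A bound of shape 2_{n+1}^{Θ(h)} is tower(n + 1, c * h) for some constant c."""
    for _ in range(k):
        n = 2 ** n
    return n
```

For instance tower(3, 1) evaluates as 1 → 2 → 4 → 16, which is why the height of the tower, not its top argument, dominates the growth.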

2.1. Syntax and dynamics of the λ-calculus. In this paper, we consider the simply-typed λ-calculus Λ built from one base type o. Its types and terms are: A, B ::= o | A → B M, N ::= λx A .M | M N | x | * A
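The level lv(A) of a type, used throughout to define the order of terms, can be sketched executably; we assume here the standard definition lv(o) = 0 and lv(A → B) = max(lv(A) + 1, lv(B)) (the encoding of types as nested pairs is ours):

```python
def lv(ty):
    """Level of a simple type: lv(o) = 0, lv(A -> B) = max(lv(A) + 1, lv(B)).
    Types are the string 'o' or pairs (A, B) encoding A -> B."""
    if ty == 'o':
        return 0
    a, b = ty
    return max(lv(a) + 1, lv(b))

# (o -> o) -> o has level 2, while o -> (o -> o) only has level 1:
assert lv((('o', 'o'), 'o')) == 2
assert lv(('o', ('o', 'o'))) == 1
```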

Proposition 3.3. Let σ : A ⇒ B be a bounded strategy of size p, let τ : B ⇒ C be a bounded strategy of size n, and suppose that B has depth d − 1, for d ≥ 2. Then for all passive u ∈ σ τ : |u| ≤ N d (n, p) + 1

Figure 2: Example reduction sequence on interaction skeletons

Definition 4.2. Let d, o, m ≥ 1 be natural numbers. The thread-like skeleton T (d, o, m) is:

From the definition, we have that depth(T (d, o, m)) = d, max(T (d, o, m)) = m and ord(T (d, o, m)) = o. We also have:
Proposition 4.3. If a has depth d, order o and maximum m, then N (a) ≤ N (T (d, o, m)).
Proof. It is obvious that a → T (d, o, m), therefore the result follows from Lemma 4.1.
4.1.3. Constructions on skeletons. If (a i ) 1≤i≤n is a finite family of skeletons, then writing

Lemma 4.7 (Monotonicity). If ρ α a, then ρ′ α′ a for all α ≤ α′ and ρ ≤ ρ′, where the witness trees have the same number of occurrences of Cut.

• Cut. Suppose we have 0 α+β a • 0 b by Cut, whose premises are 0 α a and 0 β b. By IH, we can assume the witness trees for 0 α a and 0 β b to be Cut-free. Let us form the context-skeleton a() = a • 0 x. Then by Lemma 4.9 we have 0 α a(b) = a • 0 b, and by Lemma 4.7 that 0 α+β a • 0 b. Since the witness tree for 0 α a(∅) is Cut-free, so are the witness trees for 0 α a • 0 b and 0 α+β a • 0 b.
then the generalized redexes of N M 1 . . . M n are those of N, M 1 , . . ., M n (which are of the form (λy, S) where all free variables in S are active, by definition of local scope), with possibly the addition of generalized redexes of the form (λz, M i ). Free variables in M i are free in N M 1 . . . M n , so they are active. • If M = (λy.M′) M 1 . . . M n , it follows directly from IH.
Lemma 5.22. If M : o is locally scoped and M → lhr M′, then M′ is locally scoped.

Lemma 5.51. If M → * β n 0 , then N (M id o ) ≥ n, where id o = λx o .x.
Proof. By induction on n, exploiting that lhr preserves β-equivalence.