Separating Sessions Smoothly

This paper introduces Hypersequent GV (HGV), a modular and extensible core calculus for functional programming with session types that enjoys deadlock freedom, confluence, and strong normalisation. HGV exploits hyper-environments, which are collections of type environments, to ensure that structural congruence is type preserving. As a consequence we obtain an operational correspondence between HGV and HCP -- a process calculus based on hypersequents and in a propositions-as-types correspondence with classical linear logic (CLL). Our translations from HGV to HCP and vice versa both preserve and reflect reduction. HGV scales smoothly to support Girard's Mix rule, a crucial ingredient for channel forwarding and exceptions.


Introduction
Session types [Hon93,THK94,HVK98] are types used to model and verify communication protocols in concurrent and distributed systems: just as data types rule out dividing an integer by a string, session types rule out sending an unexpected message. Session types originated in process calculi, but there is a gap between process calculi, which model the evolving state of concurrent systems, and the descriptions of these systems in mainstream programming languages. This paper addresses two foundations for session types: (1) a session-typed concurrent lambda calculus called GV [LM15], intended to be a modular and extensible basis for functional programming languages with session types; and (2) a session-typed process calculus called CP [Wad14], with a propositions-as-types correspondence to classical linear logic (CLL) [Gir87].
Processes in CP correspond exactly to proofs in CLL and deadlock freedom follows from cut-elimination for CLL. However, while CP is strongly tied to CLL, at the same time it departs from the π-calculus. Independent π-calculus features can only appear in combination in CP: CP combines name restriction with parallel composition ((νx)(P ∥ Q)), corresponding to CLL's cut rule.

• Section 6 demonstrates the extensibility of HGV through: (1) unconnected processes, (2) a simplified treatment of forwarding, and (3) an improved foundation for exceptions. Section 2 reviews GV and its metatheory, Section 7 discusses why it is difficult to apply hyper-environments to term typing, Section 8 discusses related work, and Section 9 concludes and discusses future work.
This paper is an improved and extended version of a paper published at CONCUR 2021 [FKD+21]. Additional highlights include:
• a more detailed account of process structures;
• a more detailed account of extensions;
• a more detailed account of the metatheory for HCP; and
• a modified formulation of HCP's labelled transition system and the translation of fork in Section 5, fixing errors in the operational correspondence result from the CONCUR 2021 paper.
Proofs of all of the technical results are included in the paper.

The Equivalence Embroglio
GV programs are deadlock free, which GV ensures by restricting process structures to trees. A process structure is an undirected graph where nodes represent processes and edges represent channels shared between the connected nodes. Session-typed programs with an acyclic process structure are deadlock-free by construction. We illustrate this with a session-typed vending machine example written in GV.
Example 2.1. Consider the session type of a vending machine below, which sells chocolate bars and lollipops. If the vending machine is free, the customer can press ① to receive a chocolate bar or ② to receive a lollipop. If the vending machine is busy, the session ends. The customer's session type is dual: where the vending machine sends a ChocolateBar, the customer receives a ChocolateBar, and so forth. Figure 1 shows the vending machine and customer as a GV program with its process structure.
GV establishes the restriction to tree-structured processes by restricting the primitive for spawning processes. In GV, fork has type (S ⊸ end!) ⊸ S̄. It takes a closure of type S ⊸ end! as an argument, creates a channel with endpoints of dual types S and S̄, spawns the closure as a new process by supplying one of the endpoints as an argument, and then returns the other endpoint. In essence, fork is a branching operation on the process structure: it creates a new node connected to the current node by a single edge. Linearity guarantees that the tree structure is preserved, even in the presence of higher-order channels.
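To make the branching behaviour of fork concrete, here is a minimal, untyped Python sketch of the communication primitives, using threads and queue pairs for endpoints. The names (Endpoint, new_channel, the None termination signal) are ours; GV's linearity and session typing are not enforced by this sketch.

```python
import threading
import queue

# A channel is a pair of endpoints; each endpoint sends on one queue
# and receives on the other, mirroring the two dual session endpoints.
class Endpoint:
    def __init__(self, outbox, inbox):
        self.outbox, self.inbox = outbox, inbox

def new_channel():
    a, b = queue.Queue(), queue.Queue()
    return Endpoint(a, b), Endpoint(b, a)

def fork(f):
    """fork : (S -o end!) -o dual(S), read dynamically: spawn f with one
    endpoint of a fresh channel, return the dual endpoint to the caller."""
    child_ep, parent_ep = new_channel()
    threading.Thread(target=f, args=(child_ep,)).start()
    return parent_ep

def send(value, ep):
    ep.outbox.put(value)
    return ep

def recv(ep):
    return ep.inbox.get(), ep

def close(ep):
    ep.outbox.put(None)  # signal termination to a waiting peer

def wait(ep):
    ep.inbox.get()       # block until the peer signals termination

# The child receives a number, sends back its double, and closes.
def child(ep):
    n, ep = recv(ep)
    ep = send(n * 2, ep)
    close(ep)

ep = fork(child)           # one new node, one new edge in the process tree
ep = send(21, ep)
answer, ep = recv(ep)
wait(ep)
print(answer)  # 42
```

Note how fork is the only way to create a process here, so every spawn adds exactly one node and one connecting edge, which is the tree-preservation argument in dynamic form.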
Lindley and Morris [LM15] introduce a semantics for GV, which evaluates programs embedded in process configurations, consisting of embedded programs, flagged as main (•) or child (◦) threads, ν-binders to create new channels, and parallel compositions. They introduce these process configurations together with a standard structural congruence, which allows, amongst other things, the reordering of processes using commutativity (C ∥ C ′ ≡ C ′ ∥ C), associativity (C ∥ (C ′ ∥ C ′′ ) ≡ (C ∥ C ′ ) ∥ C ′′ ), and scope extrusion ((νx)(C ∥ C ′ ) ≡ C ∥ (νx)C ′ when x ̸∈ fv(C)). They guarantee acyclicity by defining an extrinsic type system for configurations. In particular, the type system requires that in every parallel composition C ∥ D, configurations C and D must have exactly one channel in common, and that in a name restriction (νx)C, channel x cannot be used until it is shared across a parallel composition.
These restrictions are sufficient to guarantee deadlock freedom. Unfortunately, they are not preserved by process equivalence. As Lindley and Morris write (noting that their name restrictions bind channels rather than endpoint pairs, and their (νxy) abbreviates (νx)(νy)): "Alas, our notion of typing is not preserved by configuration equivalence. For example, assume that Γ ⊢ (νxy)(C 1 ∥ (C 2 ∥ C 3 )), where x ∈ fv(C 1 ), y ∈ fv(C 2 ), and x, y ∈ fv(C 3 ). We have that Γ ̸⊢ (νxy)((C 1 ∥ C 2 ) ∥ C 3 ), as both x and y must be shared between the processes C 1 ∥ C 2 and C 3 ." As a result, standard notions of progress and preservation are not enough to guarantee deadlock freedom, as reduction sequences could include equivalence steps from well-typed to non-well-typed terms. Instead, they must prove a stronger result: Theorem 3 (Lindley and Morris [LM15]). If Γ ⊢ C, C ≡ C ′ , and C ′ −→ D ′ , then there exists D such that D ≡ D ′ and Γ ⊢ D.
Figure 2. HGV, duality and typing rules for terms.
Session types (S) comprise output (!T.S: send a value of type T , then behave like S), input (?T.S: receive a value of type T , then behave like S), and dual end types (end! and end?). The dual endpoints restrict process structure to trees [Wad14]; conflating them loosens this restriction to forests [ALM16]. We let Γ, ∆ range over type environments. The terms and typing rules are given in Figure 2. The linear λ-calculus rules are standard; communication primitives K are given as constants. Each communication primitive K has a type schema: link takes a pair of compatible endpoints and forwards all messages between them; fork takes a function, which is passed one endpoint (of type S) of a fresh channel yielding a new child thread, and returns the other endpoint (of the dual type S̄); send takes a pair of a value and an endpoint, sends the value over the endpoint, and returns an updated endpoint; recv takes an endpoint, receives a value over the endpoint, and returns the pair of the received value and an updated endpoint; and wait synchronises on a terminated endpoint of type end?. Output is dual to input, and end! is dual to end?. Duality is involutive, i.e., the dual of the dual of S is S itself.
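The session-type grammar and its duality admit a direct functional reading. Below is a small Python sketch with session types as nested tuples; the representation is ours, not the paper's, and payload types are left abstract.

```python
# Session types as nested tuples, following the grammar in the text:
# ("!", T, S) for output, ("?", T, S) for input, "end!" and "end?".

def dual(s):
    """Compute the dual session type: output <-> input, end! <-> end?."""
    if s == "end!":
        return "end?"
    if s == "end?":
        return "end!"
    tag, t, rest = s
    # The payload type T is unchanged; only the session behaviour flips.
    return ("?" if tag == "!" else "!", t, dual(rest))

# A single-sale simplification of the vending machine of Example 2.1:
# send a ChocolateBar, then close.
vending = ("!", "ChocolateBar", "end!")
customer = dual(vending)
print(customer)                        # ('?', 'ChocolateBar', 'end?')
print(dual(dual(vending)) == vending)  # True: duality is involutive
```

The involution check at the end is exactly the equation stated above: dualising twice returns the original session type.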
We write M ; N for let () = M in N , let x = M in N for (λx.N ) M , λ().M for λz.z; M , and λ(x, y).M for λz.let (x, y) = z in M . We write K : T for · ⊢ K : T in typing derivations. The term reduction rules (−→M ) define a standard call-by-value, left-to-right evaluation strategy. A closed term either reduces to a value or is blocked on a communication action.
Thread contexts (F ) extend evaluation contexts to threads. The structural congruence rules are standard apart from SC-LinkComm, which ensures links are undirected, and SC-NewSwap, which swaps names in double binders.
The configuration reduction relation gives a semantics for HGV's communication and concurrency constructs. The first two rules, E-Reify-Fork and E-Reify-Link, create child and link threads, respectively. The next three rules, E-Comm-Link, E-Comm-Send, and E-Comm-Close, perform communication actions. The final four rules enable reduction under name restriction and parallel composition, rewriting by structural congruence, and term reduction in threads. Two rules handle links: E-Reify-Link creates a new link thread x ↔z y, which blocks on z, an endpoint of type end? of a fresh channel. The other endpoint, z ′ of type end!, is placed in the evaluation context of the parent thread. When z ′ is returned by a child thread, E-Comm-Link performs forwarding by substitution.
Remark 3.2. Note that E-Comm-Link does not fire if z ′ is returned by a main thread. In closed configurations, typing ensures that such a configuration cannot arise: intuitively, a main thread can only obtain endpoints by fork or by receiving an endpoint. Endpoints generated to communicate with forked threads (i.e., those passed to a child thread) will always have a session type terminating with end?, and a child thread cannot transmit an endpoint ending in end!, since the endpoint must be returned. Consequently, there is no way for a main thread to obtain endpoints with dual session types as required by the type of link. The case for open configurations is accounted for by our open progress result (see Section 3.1).
Choice. HGV does not include constructs for internal and external choice (as used, for example, in the vending machine example in Section 2). Internal and external choice are instead encoded with sum types and session delegation [Kob03,DGS17]. Prior encodings of choice in GV [LM15] are asynchronous. Instead, to encode synchronous choice we add a 'dummy' synchronisation before exchanging the value of sum type, as follows:

3.1. Metatheory. HGV enjoys type preservation, deadlock freedom, confluence, and strong normalisation.
Preservation. Hyper-environments enable type preservation under structural congruence, which significantly simplifies the metatheory compared to GV.

Theorem 3.3 (Preservation).
(1) If G ⊢ C : R and C ≡ D, then G ⊢ D : R.
(2) If G ⊢ C : R and C −→ D, then G ⊢ D : R.

Proof. By induction on the derivations of C ≡ D and C −→ D. See Appendix A.
Before moving on to progress, we must introduce some technical machinery to allow us to reason about the structure of HGV programs.

Abstract process structures. Unlike in GV, in HGV we cannot rely on the fact that exactly one channel is split over each parallel composition. Instead, we introduce the notion of an abstract process structure (APS). Abstract process structures are a crucial ingredient in showing that HGV configurations can be written in tree canonical form, which helps both with establishing progress results and with the correspondence between HGV and GV. We begin by establishing the intuition behind the notion of an APS, and then describe the formal definitions. An APS is a graph defined over a hyper-environment G and a set of undirected pairs of co-names (a co-name set) N drawn from the names in G.
The nodes of an APS are the type environments in G. Each edge is labelled by a distinct co-name pair {x 1 , x 2 } ∈ N , such that x 1 : S ∈ Γ 1 and x 2 : S̄ ∈ Γ 2 for some session type S.
Let us now discuss the formal definition of an APS. We begin by recalling the definition of an undirected edge-labelled multigraph: an undirected graph that allows multiple edges between vertices.
Definition 3.6 (Undirected multigraph). An undirected multigraph G is a 3-tuple (V, E, r) where:
(1) V is a set of vertices;
(2) E is a set of edge names; and
(3) r is a function r : E → {{v, w} : v, w ∈ V} from edge names to unordered pairs of vertices.
We denote the size of a set by |·|. A path is a sequence of edges connecting two vertices. A multigraph G = (V, E, r) is connected if |V| = 1, or if for every pair of vertices v, w ∈ V there is a path between v and w. A multigraph is acyclic if no path forms a cycle. A leaf is a vertex connected to the remainder of the graph by a single edge.
Definition 3.7 (Leaf). Given an undirected multigraph (V, E, r), a vertex v ∈ V is a leaf if there exists a single e ∈ E such that v ∈ r(e).
Lemma 3.8. In an undirected tree containing at least two vertices, there are at least two leaves.

Proof. If G were an undirected tree with |V| ≥ 2 and fewer than two leaves, it would have to contain a cycle, contradicting acyclicity.
With the graph preliminaries in place, we are now ready to introduce the formal definition of an APS.

Definition 3.9 (Abstract process structure). The abstract process structure of a hyper-environment H with respect to a co-name set N = {{x 1 , y 1 }, . . . , {x n , y n }} is an undirected multigraph (V, E, r) defined as follows:

Example 3.10. The APS described in Example 3.4 is defined formally below. Whereas Example 3.4 is a tree, Example 3.5 contains a cycle. Only configurations typeable under a hyper-environment with a tree structure can be written in tree canonical form.
Definition 3.11 (Tree structure). A hyper-environment H with co-name set N has a tree structure, written Tree(H, N ), if its APS is connected and acyclic.
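Definitions 3.6, 3.7, and 3.11 can be checked mechanically. The sketch below (vertex and edge names are ours) decides whether a multigraph is a tree and computes its leaves, using the standard fact that a finite connected multigraph is a tree exactly when |E| = |V| − 1.

```python
# Undirected multigraph (V, E, r) in the sense of Definition 3.6:
# V is a set of vertices, E a set of edge names, and r maps each edge
# name to the (unordered) set of vertices it connects.

def is_tree(V, E, r):
    """Connected and acyclic, i.e. the APS of a tree-structured
    hyper-environment (Definition 3.11)."""
    if len(V) == 1:
        return len(E) == 0
    # A finite multigraph is a tree iff it is connected and |E| = |V| - 1.
    if len(E) != len(V) - 1:
        return False
    seen, stack = set(), [next(iter(V))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        for e in E:
            if v in r[e]:
                stack.extend(r[e] - {v})
    return seen == V

def leaves(V, E, r):
    """Vertices incident to exactly one edge (Definition 3.7)."""
    return {v for v in V if sum(v in r[e] for e in E) == 1}

# Three type environments joined in a line by two co-name pairs:
# a tree, with the two leaves guaranteed by Lemma 3.8.
V = {"G1", "G2", "G3"}
E = {"xy", "zw"}
r = {"xy": {"G1", "G2"}, "zw": {"G2", "G3"}}
print(is_tree(V, E, r))         # True
print(sorted(leaves(V, E, r)))  # ['G1', 'G3']
```

Adding a third edge between G1 and G3 would close a cycle, and the |E| = |V| − 1 check would reject it, matching the intuition that such a hyper-environment is not tree-structured.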
An HGV program •M has a single type environment, so it is tree-structured; the same goes for child and link threads. A key feature of HGV is a subformula principle, which states that all hyper-environments arising in the derivation of an HGV program are tree-structured. It follows that a configuration resulting from the reduction of an HGV program is also tree-structured. Read bottom-up, TC-New and TC-Par preserve tree structure, as illustrated by the following two pictures.
The following lemma states this intuition formally. By analogy to Kleene equality, we write P ≏⇐⇒ Q to mean that either both P and Q are undefined, or P ⇐⇒ Q.
Proof. By the definition of ≏⇐⇒, we need only consider the cases where both sides of the bi-implication are defined. Both results follow from the observation that adding an edge between two trees results in a tree, and removing an edge from a tree partitions the tree into two subtrees.

Tree canonical form. We now define a canonical form for configurations that captures the tree structure of an APS. Tree canonical form enables a succinct statement of open progress (Lemma 3.17) and a means for embedding HGV in GV (Proposition 4.5).
Definition 3.13 (Tree canonical form). A configuration C is in tree canonical form if it can be written:

Every well-typed HGV configuration typeable under a single type environment can be written in tree canonical form.
Theorem 3.14 (Well-typed configurations in tree canonical forms). If Γ ⊢ C : R, then there exists some D such that C ≡ D and D is in tree canonical form.
Proof. By induction on the number n of ν-binders in C. In the case that n = 0, it must be the case that Γ ⊢ ϕM : R for some thread M , since parallel composition is only typeable under a hyper-environment containing two or more type environments. Therefore, C is in tree canonical form by definition.
In the case that n ≥ 1, by Theorem 3.3, we can rewrite the configuration as: By definition, Γ has a tree structure with respect to an empty co-name set. By repeated applications of TC-New, there exists some G such that G ⊢ •M 1 ∥ · · · ∥ •M n ∥ ϕN : T ; by Lemma 3.12 (clause 1, right-to-left), G has a tree structure. Construct the APS for G using names N ; by Lemma 3.8, there exist Γ 1 , Γ 2 ∈ envs(G) such that Γ 1 and Γ 2 are leaves of the tree and therefore, by the definition of the APS, contain precisely one ν-bound name each. By TC-Par, there must exist two threads C 1 , C 2 such that Γ 1 ⊢ C 1 : R 1 and Γ 2 ⊢ C 2 : R 2 . By runtime type combination, at least one of R 1 , R 2 must be •; without loss of generality assume this is R 1 . Suppose (again without loss of generality) that the ν-bound name contained in Γ 1 is x 1 and that C 1 = •M 1 .
Let D = (νx 2 y 2 ) · · · (νx n y n )(•M 2 ∥ · · · ∥ •M n ∥ ϕN ). By Theorem 3.3 and the fact that x 1 is the only ν-bound variable in M 1 , we have that C ≡ (νx 1 y 1 )(•M 1 ∥ D). By the induction hypothesis, there exists some D ′ such that D ≡ D ′ and D ′ is in canonical form. By construction we have that C ≡ (νx 1 y 1 )(•M 1 ∥ D ′ ), which is in tree canonical form as required.
Proposition 3.15. As hyper-environments capture parallelism, a configuration C typeable under hyper-environment Γ 1 ∥ · · · ∥ Γ n is equivalent to n independent parallel processes.
Proof. By induction on the derivation of Γ 1 ∥ · · · ∥ Γ n ⊢ C : R. The cases for TC-Main, TC-Child, and TC-Link follow immediately. The cases for TC-New and TC-Par follow from the IH and structural congruence rules.
It follows from Theorem 3.14 and Proposition 3.15 that any well-typed HGV configuration can be written as a forest of independent configurations in tree canonical form.
Progress and Deadlock Freedom. With tree canonical forms defined, we can now state a progress result. A thread is blocked on an endpoint x if it is ready to perform a communication action on x.
Definition 3.16 (Blocked thread). We say that thread T is blocked on variable z, written blocked(T , z), if either:

We let Ψ range over type environments containing only session-typed variables, i.e., Ψ ::= · | Ψ, x : S, which lets us reason about configurations that are closed except for runtime names. We thus obtain open progress for configurations with free runtime names.
Lemma 3.17 (Open progress). Suppose Ψ ⊢ C : R, where C is in tree canonical form. Either C −→ D for some D, or:

Proof. Open progress follows as a direct corollary of a slightly more verbose property which holds of HGV processes, proved by induction on the derivation of an inductive definition of tree canonical forms. See Appendix A for details.
Closed configurations enjoy a stronger result: if a closed configuration cannot reduce, then each auxiliary thread must either be a value, or be blocked on its neighbouring endpoint.
Lemma 3.18 (Closed progress). Suppose · ⊢ C : R, where C is in tree canonical form. Either C −→ D for some D, or:

Proof. Since the environment is closed, by Lemma 3.17, each A j must either be a value or be blocked on an endpoint. Note that if two names x, y are co-names, and one thread is blocked on x and another is blocked on y, then due to typing the names must be dual and reduction can occur.
Consider A 1 . Since the environment is closed, A 1 must be blocked on x 1 . Next, consider A 2 ; the thread cannot be blocked on y 1 as reduction would occur. By the definition of tree canonical forms, A 2 must contain x 2 and by the typing rules cannot contain y 2 , so the thread must be blocked on x 2 . The argument extends to the remainder of the configuration.
Finally, for ground configurations, where the main thread does not return a runtime name or capture a runtime name in a closure, we obtain a yet tighter result, global progress, which implies deadlock freedom [CDM14].
Definition 3.19 (Ground configuration). A configuration C is a ground configuration if · ⊢ C : T , C is in canonical form, and T does not contain session types or function types.
Our main progress result states that a ground configuration can reduce, or is a value.

Theorem 3.20 (Global progress). Suppose C is a ground configuration. Either there exists some D such that C −→ D, or C = •V for some value V .
Proof. By Lemma 3.18, either C can reduce, or C can be written: Since C is ground, fv(V ) = ∅. By definition, tree canonical form ensures that no cycles are present amongst threads, so no auxiliary thread can be blocked. It follows that if C ̸ −→, then there cannot be any auxiliary threads and thus C = •V for some value V .
Determinism and Strong Normalisation. HGV enjoys a strong form of determinism known as the diamond property, and due to linearity it enjoys strong normalisation. Unlike with preservation and progress, the addition of hypersequents does not substantially change the arguments from [LM15].

Proof. Similar to that of GV [LM15,Fow19]: −→M is deterministic, and due to linearity, any overlapping reductions are separate and may be performed in either order.
Proof. As with GV [LM15,Fow19], due to linearity, HGV has an elementary strong normalisation proof. Let the size of a configuration be the sum of the sizes of all abstract syntax trees of all terms contained in threads. The size of a configuration is invariant under ≡ and strictly decreases under −→, so no infinite reduction sequences can exist.
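The size argument can be illustrated on plain linear λ-terms. The miniature below (representation ours, covering only a top-level β-step) shows the measure strictly decreasing across a reduction, which is what rules out infinite reduction sequences.

```python
# Terms of a tiny linear lambda-calculus as nested tuples:
# ("var", x), ("lam", x, body), ("app", f, a). The size measure
# counts AST nodes, as in the strong normalisation argument.

def size(term):
    tag = term[0]
    if tag == "var":
        return 1
    if tag == "lam":
        return 1 + size(term[2])
    return 1 + size(term[1]) + size(term[2])

def subst(term, x, v):
    tag = term[0]
    if tag == "var":
        return v if term[1] == x else term
    if tag == "lam":
        return term if term[1] == x else ("lam", term[1], subst(term[2], x, v))
    return ("app", subst(term[1], x, v), subst(term[2], x, v))

def beta(term):
    """One top-level beta step: (lam x. M) V --> M{V/x}."""
    _, f, a = term
    return subst(f[2], f[1], a)

# Linearity: x occurs exactly once in the body, so substitution
# duplicates nothing and the size strictly decreases.
redex = ("app",
         ("lam", "x", ("app", ("var", "x"), ("lam", "y", ("var", "y")))),
         ("lam", "z", ("var", "z")))
before, after = size(redex), size(beta(redex))
print(before, after)  # 8 5
```

With a non-linear body (x used twice), substitution could grow the term, which is exactly why linearity is the load-bearing assumption in the proof above.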

Relation between HGV and GV
In this section, we show that well-typed GV configurations are well-typed HGV configurations, and well-typed HGV configurations with tree structure are well-typed GV configurations.
GV. HGV and GV share a common term language and reduction semantics, and differ only in their runtime typing rules. Figure 5 gives the runtime typing rules for GV. We adapt the rules to use a double-binder formulation to concentrate on the essence of the relationship with HGV, but it is trivial to translate GV with single binders into GV with double binders.
GV uses a pseudo-type S ♯ to type channels. Unlike endpoints, channels cannot appear in terms. Read bottom-up, rule TG-New types a name restriction (νxy)C, adding ⟨x, y⟩ : S ♯ to the type environment, which along with TG-Connect 1 and TG-Connect 2 ensures that a session channel of type S will be split into endpoints x and y over a parallel composition. In turn, this enforces a tree process structure. The remaining typing rules are as in HGV.
A simple embedding of GV into HGV. The simplest embedding of GV in HGV relies on the observation from Section 2 that each parallel composition splits a single channel. Let C ∥ ⟨x,y⟩ D denote two configurations C and D connected by a channel with endpoints x, y. We can write an arbitrary closed GV configuration in the form: where each C i does not contain a further parallel composition, and any main thread is in C n . We can then embed the configuration in HGV as: which is well-typed by construction. As a corollary, every well-typed, closed GV configuration is equivalent to a well-typed, closed HGV configuration.
A structure-preserving embedding of GV into HGV. Though the simple embedding of GV into HGV is sound, it does not respect the intention of GV. In fact, we can provide a stronger result: every well-typed open GV configuration is exactly a well-typed HGV configuration.
Translating HGV to GV. As we saw in §2, unlike in HGV, equivalence in GV is not type-preserving. It follows that HGV types strictly more processes than GV. Let us revisit Lindley and Morris' example from §2 (adapted to use double binders), which is not typeable in GV, since we cannot split both channels over a single parallel composition: However, we can type this process in HGV: Note in particular the shaded hyper-environment, which includes hyper-environment separators to separate endpoints x and x ′ , as well as y and y ′ . It follows that, unlike in GV, both channels can be split over the same parallel composition. Similarly, the hyper-environment separator allows C and D to be composed without sharing any channels. Although HGV types more processes, every well-typed HGV configuration typeable under a singleton hyper-environment Γ is equivalent to a well-typed GV configuration, which we show using tree canonical forms.

Remark 4.6. It is not the case that every HGV configuration typeable under an arbitrary hyper-environment H is equivalent to a well-typed GV configuration. This is because open HGV configurations can form forest process structures, whereas (even open) GV configurations must form a tree process structure.
Since we can write all well-typed HGV configurations in canonical form, and HGV tree canonical forms are typeable in GV, it follows that every well-typed HGV configuration typeable under a single type environment is equivalent to a well-typed GV configuration.

Figure 6. HCP, duality and typing rules for processes.

Corollary 4.7. If Γ ⊢ C : R, then there exists some D such that C ≡ D and Γ ⊢ GV D : R.

Relation between HGV and HCP
In this section, we explore two translations, from HGV to HCP and from HCP to HGV, together with their operational correspondence results.
Hypersequent CP. HCP [MP18,KMP19b] is a session-typed process calculus with a correspondence to CLL, which exploits hypersequents to fix extensibility and modularity issues with CP. Types (A, B) consist of the connectives of linear logic: the multiplicative operators (⊗, ⅋) and units (1, ⊥) and the additive operators (⊕, &) and units (0, ⊤).
Type environments (Γ, ∆) associate names with types. Hyper-environments (G, H) are collections of type environments. The empty type environment and hyper-environment are written · and ∅, respectively. Names in type and hyper-environments must be unique and environments may be combined, written Γ, ∆ and G ∥ H, only if they are disjoint.
Processes (P , Q) are a variant of the π-calculus with forwarding [San96,Bor98], bound output [San96], and double binders [Vas12]. The syntax of processes is given by the typing rules (Figure 6), which are standard for HCP [MP18,KMP19b]: x↔y forwards messages between x and y; (νxy)P creates a channel with endpoints x and y, and continues as P ; P ∥ Q composes P and Q in parallel; 0 is the terminated process; x[y].P creates a new channel, outputs one endpoint over x, binds the other to y, and continues as P ; x(y).P receives a channel endpoint, binds it to y, and continues as P ; x[].P and x().P close x and continue as P ; x ◁ inl.P and x ◁ inr.P make a binary choice; x ▷ {inl : P ; inr : Q} offers a binary choice; and x ▷ {} offers a nullary choice. As HCP is synchronous, the only difference between x[y].P and x(y).P is their typing (and similarly for x[].P and x().P ). We write unbound send as x⟨y⟩.P (short for x[z].(y↔z ∥ P )), and synchronisation as x̄.P (short for x[z].(z[].0 ∥ P )) and x.P (short for x(z).z().P ). Duality is standard and involutive. We define a standard structural congruence (≡) similar to that of HGV, i.e., parallel composition is commutative and associative, we can commute name restrictions, swap the order of endpoints, swap links, and have scope extrusion (similar to Figure 4). Note that since we base our formal developments on an LTS semantics, structural congruence is not required for reduction.

Figure 7. HCP, labelled transition semantics.
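Duality on HCP types swaps each connective and unit for its De Morgan partner (⊗ with ⅋, 1 with ⊥, ⊕ with &, 0 with ⊤). A small sketch, with types as nested tuples (representation ours):

```python
# CLL types as nested tuples: ("⊗", A, B), ("⅋", A, B), ("⊕", A, B),
# ("&", A, B), and the units "1", "⊥", "0", "⊤".

DUAL_UNIT = {"1": "⊥", "⊥": "1", "0": "⊤", "⊤": "0"}
DUAL_OP = {"⊗": "⅋", "⅋": "⊗", "⊕": "&", "&": "⊕"}

def dual(a):
    """Standard CLL duality: swap each connective/unit for its partner,
    recursing into both components."""
    if isinstance(a, str):
        return DUAL_UNIT[a]
    op, left, right = a
    return (DUAL_OP[op], dual(left), dual(right))

a = ("⊗", "1", ("⊕", "0", "⊥"))
print(dual(a))             # ('⅋', '⊥', ('&', '⊤', '1'))
print(dual(dual(a)) == a)  # True: involutive, as stated in the text
```

This is the duality that makes the two endpoints of a name restriction (νxy)P compatible: x and y must carry dual types.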
We define the labelled transition system for HCP as a small refinement of the LTS for the additive-multiplicative fragment of the πLL calculus introduced by Montesi and Peressotti [MP21], in turn inspired by their previous system CT [MP18]. The LTS is identical, save for the fact that we distinguish two types of internal actions. Action labels l represent the actions that a process can fire. Prefixes π are a convenient subset of action labels which can be written as prefixes to processes, i.e., π.P . Transition labels ℓ include action labels and the parallel composition of two action labels, along with internal actions α, β, and τ . The LTS gives rise to two types of internal action: α represents only the evaluation of links as renaming, and β represents only communication. Labels τ arise only due to saturated transition (Definition 5.4) and are not produced by the rules in the LTS.
Metatheory. Transitions preserve typeability. Since internal actions occur only under binders, they are typable under the same hyper-environment. Similarly, our LTS for HCP satisfies progress. Following [KMP19a,MP21], the key intermediate step is to note that for every type environment in a hyper-environment, there is some free name which can be acted upon. Again, the stratification of internal actions does not materially impact the proof.
Theorem 5.2 (Progress). If P ⊢ H and P ̸≡ 0, then there exist some ℓ and Q such that P ℓ −→ Q.

Behavioural Theory. The behavioural theory for HCP follows Kokke et al. [KMP19a], except that we distinguish two subrelations of weak bisimilarity, following the subtypes of internal actions.

Definition 5.3 (Strong bisimulation and strong bisimilarity). A symmetric relation R on processes is a strong bisimulation if, whenever P R Q and P ℓ −→ P ′ , there exists some Q ′ such that Q ℓ −→ Q ′ and P ′ R Q ′ . Strong bisimilarity is the largest relation ∼ that is a strong bisimulation.
Definition 5.5 (Weak bisimulation and weak bisimilarity). A symmetric relation R on processes is an L-bisimulation if: The L-bisimilarity relation is the largest relation ≈ L that is an L-bisimulation. We write ≈ as shorthand for ≈ {α,β} .
Lemma 5.6. Structural congruence, strong bisimilarity and the various forms of weak bisimilarity are related as follows:

Differences with the previous version. The LTS in Figure 7 is similar to that in the previous version of this work [FKD+21], with the exception that we have removed the rules Tau-Alp and Tau-Bet: To see why these rules are problematic, consider processes P = (νxy)(z↔x ∥ y[].0) and Q = z[].0. Following Definition 5.5, P and Q are α-bisimilar, as P only has the α-transition P α −→ Q and Q has no transitions. In the previous version, Tau-Alp gave P the derived τ-transition P τ −→ Q, which meant that P ̸≈ α Q, as Q has no matching weak τ-transition to Q. Therefore Tau-Alp collapses ≈ α to ∼, and Tau-Bet collapses ≈ β to ∼.
The solution we adopted was to remove Tau-Alp and Tau-Bet from the labelled transition relation −→, and instead lift α- and β-transitions to τ-transitions in the definition of saturated transition (Definition 5.4).
Translating HGV to HCP. We factor the translation from HGV to HCP into two translations: (1) a translation into HGV*, a fine-grain call-by-value [LPT03] variant of HGV, which makes control flow explicit; and (2) a translation from HGV* to HCP. In so doing, we can concentrate on the essence of the translations as opposed to concerning ourselves with administrative reductions.

HGV*. We define HGV* as a refinement of HGV in which any non-trivial term must be named by a let-binding before being used. While let is syntactic sugar in HGV, it is part of the core language in HGV*. Correspondingly, the reduction rule for let follows from the encoding in HGV, i.e., let x = V in N −→M N {V /x}.

Remark 5.7. Fine-grain call-by-value λ-calculi typically include an explicit return V construct to embed values into the term language. As there is no difference between the shapes of the value and term typing judgements, we allow ourselves to embed values directly for simplicity.
We can naïvely translate HGV to HGV* by a translation ( · ) that let-binds each subterm in a value position, e.g., inl M = let z = M in inl z.
Standard techniques can be used to avoid administrative redexes [Plo75,DMN07]. We give a full definition of HGV * in Appendix C.
HGV* to HCP. The translation from HGV* to HCP is given in Figure 8. All control flow is encapsulated in values and let-bindings. We define a pair of translations on types, · and · , related by duality: T = T ⊥ . We extend these translations pointwise to type environments and hyper-environments. We define translations on configurations ( · c r ), terms ( · m r ), and values ( · v r ), where r is a fresh name denoting a distinguished output channel. We translate an HGV sequent G ∥ Γ ⊢ C : T as C c r ⊢ G ∥ Γ , r : T ⊥ , where Γ is the type environment corresponding to the main thread. The translation of computations includes synchronisation actions in order to faithfully simulate a call-by-value reduction strategy. The (term) translation of a value V m r immediately pings the output channel r to announce that it is a value. The translation of a let-binding let w = M in N m r first evaluates M to a value, which then pings the internal channel x/x ′ and unblocks the continuation x. N m r . The translations of main and child threads each make use of an internal result channel. The translation of a child thread consumes the yielded unit endpoint once the child thread has terminated. The translation of the main thread forwards the result value along the external output channel once the main thread has terminated.
There are two changes with respect to the translation of our earlier paper [FKD+21]. First, in the earlier work the translation of the main thread output directly to the external output channel, instead of forwarding via an intermediary as in the current translation. This change is purely aesthetic. Second, in the earlier work the translation of fork was not sufficiently concurrent. Correspondingly, there was an error in a case of the operational correspondence proof, which is fixed in the current paper.

(3) If G ∥ Γ ⊢ C : T , where Γ is the type environment for the main thread in C, then C c r ⊢ G ∥ Γ , r : T ⊥ .

Translation on configurations, terms, and values
Lemma 5.10 (Substitution). If M is a well-typed term with w ∈ fv(M), and V is a well-typed value, then:

Theorem 5.11 (Operational Correspondence). Suppose C is a well-typed configuration.
(1) (Preservation of reductions) If C −→ C′, then there exists a P such that ⟦C⟧ᶜᵣ =⇒β+α P and P ≈α ⟦C′⟧ᶜᵣ; and
(2) (Reflection of transitions)
• if ⟦C⟧ᶜᵣ −→α P, then P ≈α ⟦C⟧ᶜᵣ; and
• if ⟦C⟧ᶜᵣ −→β P, then there exist a C′ and a P′ such that C −→ C′, P =⇒β*α P′, and P′ ≈α ⟦C′⟧ᶜᵣ.
Furthermore, C′ is unique up to structural congruence.

The proof is in Appendix C. One might strive for a tighter operational correspondence, but our current translation generates multiple administrative β-transitions. The only term reduction that translates to multiple β-transitions is the one for let-bindings, because we choose to encode synchronisation using two β-transitions. We could adjust the accounting by treating synchronisation as a single β-transition or as its own special kind of administrative transition. Many more administrative reductions arise from the configuration translation; these are due to a combination of synchronisations and the fact that we use constants together with pairs and application for our communication primitives, instead of building in fully applied communication primitives.
Translating HCP to HGV. We cannot translate HCP processes to HGV terms directly: HGV's term language only supports fork (see Section 7 for further discussion), so there is no way to translate an individual name restriction or parallel composition. However, we can still translate HCP into HGV via the composition of known translations.
• HCP into CP: We must first reunite each parallel composition with its corresponding name restriction, i.e., translate to CP using the disentanglement translation shown by Kokke et al. [KMP19b, Lemma 4.7]. The result is a collection of independent CP processes.
• CP into GV: Next, we can translate each CP process into a GV configuration using (a variant of) Lindley and Morris's translation [LM15, Figure 8].
• GV into HGV: Finally, we can use our embedding of GV into HGV (Theorem 4.3) to obtain a collection of well-typed HGV configurations, which can be composed using TC-Par to form a single well-typed HGV configuration.

The translation from HCP into CP and the embedding of GV into HGV preserve and reflect reduction. However, as previously mentioned, Lindley and Morris's original translation from CP to GV preserves but does not reflect reduction, due to an asynchronous encoding of choice. By adapting their translation to use a synchronous encoding of choice (Section 3), we obtain a translation from CP to GV that both preserves and reflects reduction. Composing all three translations, we obtain a translation from HCP to HGV that preserves and reflects reduction.

Extensions
In this section, we outline three extensions to HGV that exploit generalising the tree structure of processes to a forest structure. These extensions are of particular interest because HGV already supports the core of forest structure, which can be fully realised merely by adding a structural rule. In contrast, to extend GV with forest structure one must distinguish two introduction rules for parallel composition [LM15, Fow19]. Other extensions to GV, such as shared channels [LM15], polymorphism [LM17], and recursive session types [LM16], adapt to HGV almost unchanged.
From trees to forests. The TC-Par rule allows two processes to be composed in parallel if they are typeable under separate hyper-environments. In a closed program, hyper-environment separators are introduced by TC-Res, meaning that each pair of composed processes must be connected by a channel.
The following TC-Mix rule allows two type environments Γ1, Γ2 to be split by a hyper-environment separator without a channel connecting them, and is inspired by Girard's Mix rule [Gir87]; in the concurrent setting, Mix can be interpreted as concurrency without communication [LM15, ALM16]. TC-Mix admits a much simpler treatment of link and provides a crucial ingredient for handling exceptional behaviour.
Atkey et al. [ALM16] show that conflating the 1 and ⊥ types of CP (which correspond respectively to the end! and end? types of GV) is logically equivalent to adding the Mix rule and a 0-Mix rule (used to type an empty process). Accordingly, in the presence of TC-Mix we use a self-dual end type; in the GV setting, a self-dual end type decouples closing a channel from process termination. We therefore refine the TC-Child rule and the type schema for fork to ensure that each child thread returns the unit value, and replace the wait constant with a close constant, which eliminates an endpoint of type end.
Given TC-Mix, we might expect a term-level construct spawn : (1 ⊸ 1) ⊸ 1, which spawns a parallel thread without a connecting channel. We can encode such a construct using fork and close (assuming fresh x and y).

By relaxing the tree process structure restriction using TC-Mix, we can obtain a more efficient treatment of link, and can support the treatment of exceptions advocated by Fowler et al. [FLMD19]. The result of link reduction has forest structure, yet well-typed closed programs in both GV and HGV must always maintain tree structure. Different versions of GV reconcile this in various unsatisfactory ways: one is pre-emptive blocking [LM15], which breaks confluence; another is two-stage linking (Figure 4), which defers forwarding via a special link thread [LM16]. Lindley and Morris [LM15] implement link using the following rule (modified here to use a double-binder formulation): the first thread will eventually reduce to •x, at which point the second thread will synchronise to eliminate x and x′ and then evaluate the continuation M with endpoint y substituted for x′. Unfortunately, this formulation of link pre-emptively inhibits reduction in the second thread, since the evaluation rule inserts a blocking wait. The resulting system does not satisfy the diamond property.
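Returning to spawn: its intended behaviour can be modelled executably. In the Python sketch below all names are our toy stand-ins, not HGV syntax; an end endpoint is modelled as an Event, and, following the decoupling of channel closure from termination, close is a non-blocking signal. Closing the child's endpoint before running the body is our own choice of ordering, not fixed by the paper's encoding.

```python
import threading

def fork(f):
    """Toy fork: spawn f with one endpoint of a fresh channel of type
    `end` (modelled as an Event) and return the dual endpoint."""
    ev = threading.Event()
    threading.Thread(target=lambda: f(ev)).start()
    return ev

def close(endpoint):
    # With a self-dual `end`, closing is decoupled from termination,
    # so we model `close` as a non-blocking signal.
    endpoint.set()

def spawn(f):
    """spawn : (1 -o 1) -o 1, encoded from fork and close: the
    connecting channel carries no communication and is immediately
    closed by both sides, so the two threads run independently."""
    x = fork(lambda y: (close(y), f()))
    close(x)
```

For example, `spawn(lambda: print("child"))` runs the thunk in a parallel thread while the caller proceeds without waiting.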
HGV uses the incarnation of link advocated by Lindley and Morris [LM16], where linking is split into two stages: the first generates a fresh pair of endpoints z, z′ and a link thread of the form z′↔y, and returns z to the calling thread. Once the calling thread has evaluated to a value (which, by typing, must be z), the link substitution can take place. This formulation recovers confluence, but we still lose a degree of concurrency: communication on y is blocked until the linking thread has fully evaluated. In an ideal implementation, the behaviour of the linking thread would be irrelevant to the remainder of the configuration. Moreover, the operation requires additional runtime syntax and thus complicates the metatheory.
The above issues are symptomatic of the fact that the process structure after a link takes place is a forest rather than a tree. With TC-Mix, however, we can refine the type schema for link to (S × S̄) ⊸ 1 and use the following rule. This formulation enables immediate substitution, maximising concurrency. A variant of HGV replacing E-Reify-Link and E-Comm-Link with E-Link-Mix retains HGV's metatheory.
Exceptions. In order to support exceptions in the presence of linear endpoints [FLMD19, MV18] we must have a way of cancelling an endpoint. Mostrous and Vasconcelos [MV18] describe a process calculus allowing the explicit cancellation of a channel endpoint, accounting for exceptional scenarios such as a client disconnecting or a thread encountering an unrecoverable error. Attempting to communicate with a cancelled endpoint raises an exception. Fowler et al. [FLMD19] extend these ideas to the functional setting, introducing Exceptional GV (EGV). EGV supports exceptional behaviour by adding three term-level constructs:
• a constant cancel : S ⊸ 1, which allows us to discard an arbitrary session endpoint of type S;
• a construct raise, which raises an exception; and
• an exception handling construct try L as x in M otherwise N, in the style of Benton and Kennedy [BK01], which attempts the possibly-failing computation L, binding the result to x in the success continuation M if successful, and evaluating N if an exception is raised.
Cancellation generates a zapper thread, which severs a tree topology into a forest, as in the following example.
The configuration on the left has a tree process structure. After reduction, however, we obtain the configuration on the right, which is clearly a forest and thus needs TC-Mix to be typeable. We have described a synchronous version of EGV, but extending our treatment to asynchrony, as in the work of Fowler et al. [FLMD19], is a routine adaptation.
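The three EGV constructs above can be given a small executable model. In the Python sketch below, all class and function names are ours, and we model only a crude untyped core: cancelling either endpoint poisons the channel, and subsequent communication raises the toy exception, which the try handler catches.

```python
import queue

class Cancelled(Exception):
    """Toy stand-in for EGV's exception, raised when communicating
    over a channel one of whose endpoints has been cancelled."""

class Chan:
    def __init__(self, state, buf):
        self.state, self.buf = state, buf
    def send(self, v):
        if self.state['cancelled']:
            raise Cancelled()
        self.buf.put(v)
    def recv(self):
        if self.state['cancelled']:
            raise Cancelled()
        return self.buf.get()
    def cancel(self):
        # cancel : S -o 1, modelled by poisoning the shared state.
        self.state['cancelled'] = True

def new_channel():
    state, buf = {'cancelled': False}, queue.Queue()
    return Chan(state, buf), Chan(state, buf)

def try_(l, m, n):
    """`try L as x in M otherwise N` as a higher-order function."""
    try:
        x = l()
    except Cancelled:
        return n()
    return m(x)       # note: M is *outside* the handler's scope
```

Running `m(x)` outside the `try` mirrors the Benton-Kennedy design: an exception raised in the success continuation M is not caught by this handler.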

Can we separate fork?
Hyper-environments allow us to cleanly separate name restriction and parallel composition in process configurations. A natural follow-on question is whether we could use the same technique at the level of terms in order to split fork into separate constructs for creating a channel and spawning a process. As tantalising a prospect as this is, we argue that the disadvantages outweigh the benefits. Suppose we were to extend term typing to allow hyper-environments, G ⊢ M : T, and were to introduce terms let ⟨x, x′⟩ = new in M to create a channel and let ⟨⟩ = spawn M in N to spawn a thread, with the following typing rules. These rather ad-hoc rules mirror hypersequent cut and hypersequent composition: TM-LetNew creates a new channel with endpoints x and x′, and requires them to be used in separate threads in the continuation M; TM-LetSpawn takes a term M, spawns it as a child thread, and continues as N. Using these rules, we can encode fork M as let ⟨x, x′⟩ = new in let ⟨⟩ = spawn (M x) in x′.

Where else can we allow hyper-environments? In HCP, we have two options: (1) if we restrict all logical rules to singleton hypersequents and allow hyper-environments only in the rules for name restriction and parallel composition, we can use a standard sequential semantics [MP18, KMP19b]; but (2) if we allow hyper-environments in any logical rule, we must use a semantics which allows the corresponding actions to be delayed [KMP19a]. This is unlikely to be a property of logical rules as such, but rather due to the fact that the logical rules correspond exactly to the communication actions, which block reduction, and the structural rules to name restriction and parallel composition, which do not. Therefore, we expect the positions where hypersequents can safely occur to follow from the structure of evaluation contexts and whether any blocking term performs a communication action.
Regardless of our choice, we would be left with restrictions on the syntax of terms that seem sensible in a process calculus but are surprising in a λ-calculus. In the strictest variant, where we disallow hyper-environments in all but the above two rules, uses of TM-LetNew and TM-LetSpawn may be interleaved, but no other construct may appear between a TM-LetNew and its corresponding TM-LetSpawn. Consider the following terms, where M uses x and y, and N uses x′. Term (7.1) may be well-typed, but (7.2) is always ill-typed. Note that let ⟨x, x′⟩ = new in M is a single, monolithic term constructor: exactly what hypersequents were meant to prevent! Moreover, if we attempt to decompose these constructors, we find that the pairs and units involved are not the regular product and unit types.
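Although we argue against hyper-environments at the term level, the dynamics of the hypothetical new and spawn constructs are easy to model. The Python sketch below uses invented names and, of course, enforces none of the linearity or separation constraints that the typing rules above are designed to impose; it only shows how fork is recoverable from the two finer-grained constructs.

```python
import queue, threading

def new():
    """Create a channel; return two dual endpoints. Toy encoding:
    an endpoint is an (outgoing, incoming) pair of queues."""
    ab, ba = queue.Queue(), queue.Queue()
    return (ab, ba), (ba, ab)

def send(ep, v): ep[0].put(v)
def recv(ep):    return ep[1].get()

def spawn(thunk):
    """Run thunk in a parallel thread, with no connecting channel."""
    threading.Thread(target=thunk).start()

def fork(f):
    # fork recovered from the two separate constructs, in the spirit
    # of the encoding discussed in the text.
    x, x2 = new()
    spawn(lambda: f(x))
    return x2
```

The typing discipline that this model ignores is precisely the point of Section 7: nothing here stops a thread from using both endpoints of the same channel.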

Related work
Session Types and Functional Languages. Session types were originally introduced in the context of process calculi [Hon93, THK94, HVK98], but they have since been widely integrated into functional calculi, a line of work initiated by Gay and collaborators [VRG04, VGR06, GV10]. This family of calculi builds session types directly into a lambda calculus. Toninho et al. [TCP13] take an alternative approach, stratifying their system into a session-typed process calculus and a separate functional calculus. There are many pragmatic embeddings of session type systems in existing functional programming languages [NT04, PT08, SE08, IYA10, OY16, KD21a]. A detailed survey is given by Orchard and Yoshida [OY17].
Propositions as Sessions. When Girard introduced linear logic [Gir87] he suggested a connection with concurrency. Abramsky [Abr94] and Bellin and Scott [BS94] give embeddings of linear logic proofs in π-calculus, where cut reduction is simulated by π-calculus reduction. Both embeddings interpret tensor as parallel composition. The correspondence with π-calculus is not tight, in that these systems allow independent prefixes to be reordered. Caires and Pfenning [CP10] give a propositions-as-types correspondence between dual intuitionistic linear logic and a session-typed π-calculus called πDILL. They interpret tensor as output. The correspondence with π-calculus is tight, in that independent prefixes may not be reordered. With CP [Wad14], Wadler adapts πDILL to classical linear logic. Aschieri and Genco [AG20] give an interpretation of classical multiplicative linear logic as concurrent functional programs. They interpret ⅋ as parallel composition, and the connection to session types is less direct.
Priority-based Calculi. Systems such as πDILL, CP, and GV (and indeed HCP and HGV) ensure deadlock freedom by exploiting the type system to statically impose a tree structure on the communication topology: there can be at most one communication channel between any two processes. Another line of work explores a more liberal approach to deadlock freedom, enabling some cyclic communication topologies, where deadlock freedom is guaranteed via priorities, which impose an order on actions. Priorities were introduced by Kobayashi and Padovani [Kob06, Pad14] and adopted by Dardha and Gay [DG18] in Priority CP (PCP), and by Kokke and Dardha [KD21b] in Priority GV (PGV). Dezani et al. [DCdY07] and Vieira and Vasconcelos [VV13] use a partial order on channels to guarantee deadlock freedom, following Kobayashi's work [Kob06]. Later, Dezani et al. [DCMYD06] guarantee progress by allowing only one active session at a time. Carbone et al. [CDM14] use catalysers to show that progress is a compositional form of lock freedom for the standard typed π-calculus. The authors describe how this technique can be applied to the session-typed π-calculus via the encoding of session types into linear types [DGS17, Dar14, Dar16]. Dardha and Pérez [DP22] compare the different calculi and techniques for deadlock freedom, using CP and CLL as a yardstick, and show that the class of processes typable in CP is strictly included in the class of processes typed by Kobayashi [Kob06].
Graph-theoretic Approaches. Carbone and Debois [CD10] define a graph-theoretic approach for a session-typed π-calculus. They define an explicit dependency graph inductively on the structure of a process, in contrast to our approach of inducing a graph on type environments given a co-name set. They ensure progress for processes with acyclic graphs using a catalyser, which provides a missing counterpart to a process. Jacobs et al. [JBK22a] also define a graph-theoretic approach to deadlock freedom but, unlike Carbone and Debois, base their work on separation logic. A line of work on many-writer, single-reader process calculi [Pad18, dP18] uses explicit dependency graphs both to ensure resource separation and to guarantee deadlock freedom; however, it is not immediately clear how to apply this approach to functional calculi.

Conclusion and future work
HGV exploits hypersequents to resolve fundamental modularity issues with GV. As a consequence, we have obtained a tight operational correspondence between HGV and HCP. HGV is a modular and extensible core calculus for functional programming with binary session types. In future we intend to apply hypersequents to multiparty versions of CP [CLM+16] and GV [JBK22b] to exhibit a similarly strong operational correspondence.

Figure 9 shows the derived typing rules.
A.2. Preservation Proof. Next, we detail the proof of preservation. We begin with the usual lemmas for manipulating evaluation contexts, whose proofs are by induction on the structure of E, and the usual substitution lemma. Runtime type merging is commutative and associative; we make use of these properties implicitly in the remainder of the proofs.
The first major result is the preservation of configuration typing under structural congruence.