Circular Proofs as Session-Typed Processes: A Local Validity Condition

Proof theory provides a foundation for studying and reasoning about programming languages, most directly based on the well-known Curry-Howard isomorphism between intuitionistic logic and the typed lambda-calculus. More recently, a correspondence between intuitionistic linear logic and the session-typed pi-calculus has been discovered. In this paper, we establish an extension of the latter correspondence for a fragment of substructural logic with least and greatest fixed points. We describe the computational interpretation of the resulting infinitary proof system as session-typed processes, and provide an effectively decidable local criterion to recognize mutually recursive processes corresponding to valid circular proofs as introduced by Fortier and Santocanale. We show that our algorithm imposes a stricter requirement than Fortier and Santocanale's guard condition, but is local and compositional and therefore more suitable as the basis for a programming language.


Introduction
Proof theory provides a solid ground for studying and reasoning about programming languages. This logical foundation is mostly based on the well-known Curry-Howard isomorphism [How69] that establishes a correspondence between natural deduction and the typed λ-calculus by mapping propositions to types, proofs to well-typed programs, and proof reduction to computation. More recently, Caires et al. [CP10,CPT16] introduced a correspondence between intuitionistic linear logic [GL87] and the session-typed π-calculus [Wad12] that relates linear propositions to session types, proofs in the sequent calculus to concurrent processes, and cut reduction to computation. In this paper, we extend the latter correspondence to a fragment of intuitionistic linear logic called subsingleton logic, in which the antecedent of each sequent consists of at most one formula. We consider the sequent calculus of subsingleton logic with least and greatest fixed points and their corresponding rules [DP16,DeY20]. We closely follow Fortier and Santocanale's [FS13] development in singleton logic, where the antecedent consists of exactly one formula.
Fortier and Santocanale [FS13,San02d] extend the sequent calculus for singleton logic with rules for least and greatest fixed points. A naive extension, however, loses the cut elimination property, so they call such derivations pre-proofs. Circular pre-proofs are distinguished as a subset of derivations which are regular in the sense that they can be represented as finite trees with loops. They then impose a validity condition (which we call the FS guard condition) on pre-proofs to single out a class of pre-proofs that satisfy cut elimination. Moreover, they provide a cut elimination algorithm and show that it locally terminates on derivations that satisfy the guard condition. In addition, Santocanale and Fortier [FS13,San02d,San02a,San02b] introduced categorical and game semantics for interpreting cut elimination in singleton logic.
In a related line of research, Baelde et al. [BDS16,Bae12] add least and greatest fixed points to the sequent calculus for the multiplicative additive fragment of linear logic (MALL), which results in the loss of the cut elimination property. They also introduce a validity condition to distinguish circular proofs from arbitrary infinite pre-proofs. Using Büchi automata, Doumane [Dou17] showed that the validity condition for identifying circular proofs in MALL with fixed points is PSPACE decidable. Nollet et al. [NST18] introduced a polynomial time algorithm for locally checking a stricter version of Baelde's condition in MALL with fixed points.
In this paper, we study (mutually) recursive session-typed processes and their correspondence with circular pre-proofs in subsingleton logic with fixed points. We introduce an algorithm to check a stricter version of the FS guard condition. Our algorithm is local in the sense that we check validity of each process definition separately, and it is stricter in the sense that it accepts a proper subset of the proofs recognized by the FS guard condition. We further introduce a synchronous computational semantics of cut reduction in subsingleton logic with fixed points in the context of session types, based on a key step in Fortier and Santocanale's cut elimination algorithm, which is compatible with prior operational interpretations of session-typed programming languages [TCP13]. We show preservation and a strong version of the progress property that ensures that each valid process communicates along its left or right interface in a finite number of steps. A key aspect of our type system is that validity is a compositional property (as we generally expect from type systems), so that the composition of valid programs defined over the same signature is also valid and therefore also satisfies strong progress. In other words, we identify a set of processes such that their corresponding derivations are not only closed under cut elimination, but also closed under cut introduction (i.e. strong progress is preserved when processes are joined by cut).
In the session type system, a singleton judgment A ⊢ B is annotated as x : A ⊢ P :: (y : B), which is understood as: process P uses a service of type A offered along channel x by a process on its left and provides a service of type B along channel y to a process on its right [DP16]. The left and right interfaces of a process in the session type system inherit the symmetry of the left and right rules in the sequent calculus. Each process interacts with other processes along its pair of left and right interfaces, which correspond to the left and right sides of a sequent. For example, two processes P and Q with the typing x : A ⊢ P :: (y : B) and y : B ⊢ Q :: (z : C) can be composed so they interact with each other using channel y. Process P provides a service of type B and offers it along channel y, and process Q uses this service to provide its own service of type C. This interaction along channel y can be of two forms: (i) process P sends a message to the right and process Q receives it from the left, or (ii) process Q sends a message to the left and process P receives it from the right. In the first case, the session type B is a positive type, and in the second case it is a negative type. Least fixed points have a positive polarity while greatest fixed points are negative [LM16]. Since least and greatest fixed points may depend on each other, they are declared in a signature Σ which records some important additional information, namely their relative priority (i ∈ N).
Σ ::= · | Σ, t =^i_µ A | Σ, t =^i_ν A, with the conditions that
• if t =^i_a A ∈ Σ and s =^i_b B ∈ Σ, then a = b, and
• if t =^i_a A ∈ Σ and t =^j_b B ∈ Σ, then i = j and A = B.
For a fixed point t defined as t =^i_a A in Σ, the subscript a is the polarity of t: if a = µ, then t is a fixed point with positive polarity, and if a = ν, then it is of negative polarity. Finitely representable least fixed points (e.g., natural numbers and lists) can be represented in this system as defined propositional variables with positive polarity, while the potentially infinite greatest fixed points (e.g., streams and infinite-depth trees) are represented as those with negative polarity.
The superscript i is the priority of t. Fortier and Santocanale interpreted the priority of fixed points in their system as the order in which the least and greatest fixed point equations are solved in the semantics [FS13]. We use them syntactically as central information to determine local validity of circular proofs. We write p(t) = i for the priority of t. The polarity of a propositional variable t can then be regarded as a function of its priority i; the condition on Σ ensures that this polarity function is well defined.
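As a concrete rendering of these definitions, the following Python sketch (with names of our choosing, and illustrative body strings only) represents a signature and checks the two conditions above; it uses the nat and ctrl fixed points with priorities 1 and 2 in the style of Example 2.2:

```python
from typing import Dict, Tuple

# A signature entry t =^i_a A maps the type variable t to a triple
# (priority i, polarity a in {"mu", "nu"}, body A). The body is kept as
# an opaque string; Sig, well_formed, p are hypothetical names of ours.
Sig = Dict[str, Tuple[int, str, str]]

def well_formed(sig: Sig) -> bool:
    """Check the conditions on Sigma: a dict already rules out two
    definitions of the same variable, so it remains to check that all
    fixed points sharing a priority also share a polarity."""
    polarity_at: Dict[int, str] = {}
    for _t, (i, a, _body) in sig.items():
        if polarity_at.setdefault(i, a) != a:
            return False  # same priority i, but polarities mu and nu clash
    return True

def p(sig: Sig, t: str) -> int:
    """The priority p(t) of a defined type variable t."""
    return sig[t][0]

def polarity_of_priority(sig: Sig, i: int) -> str:
    """Well-formedness makes polarity a function of the priority alone."""
    return next(a for (j, a, _) in sig.values() if j == i)

# A signature in the style of Example 2.2: nat =^1_mu ..., ctrl =^2_nu ...
sig = {"nat": (1, "mu", "+{z : 1, s : nat}"),
       "ctrl": (2, "nu", "&{now : nat, notyet : ctrl}")}
assert well_formed(sig)
assert p(sig, "nat") == 1 and polarity_of_priority(sig, 2) == "nu"
# Two fixed points of the same priority with different polarities are rejected:
assert not well_formed({"s": (1, "mu", "A"), "t": (1, "nu", "B")})
```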
The basic judgment of the subsingleton sequent calculus has the form ω ⊢_Σ γ, where ω and γ are either empty or a single proposition A and Σ is a signature. Since the signature never changes in the rules, we omit it from the turnstile symbol. The rules of subsingleton logic with fixed points are summarized in Figure 1, constituting a slight generalization of Fortier and Santocanale's. When the fixed points in the last row are included, this set of rules must be interpreted as infinitary, meaning that a judgment may have an infinite derivation in this system.
Even a cut-free derivation may be of infinite length since each defined propositional variable may be unfolded infinitely many times. Also, cut elimination no longer holds for the derivations after adding fixed point rules. What the rules define then are the so-called pre-proofs. In particular, we are interested in circular pre-proofs, which are the pre-proofs that can be illustrated as finite trees with loops [Dou17]. Fortier and Santocanale [FS13] introduced a guard condition for identifying a subset of circular proofs among all infinite pre-proofs in singleton logic with fixed points. Their guard condition states that every cycle should be supported by the unfolding of a positive (least) fixed point on the antecedent or a negative (greatest) fixed point on the succedent. Since they allow mutual dependency of least and greatest fixed points, they need to consider the priority of each fixed point as well. The supporting fixed point for each cycle has to be of the highest priority among all fixed points that are unfolded infinitely in the cycle. They proved that the guarded subset of derivations enjoys the cut elimination property; in particular, a cut composing any two guarded derivations can be eliminated effectively.
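To make the shape of the guard condition concrete, the following Python sketch checks a single cycle summarized by the fixed points it unfolds and the side of the sequent on which each is unfolded. All names are ours, and the real condition applies to every cycle of a circular pre-proof rather than to one summary list:

```python
# A cycle is summarized as a list of (side, t) pairs: fixed point t is
# unfolded on side "L" (antecedent) or "R" (succedent) along the cycle.
# `prio` and `pol` are lookup tables built from the signature.

def guarded(cycle, prio, pol):
    """The cycle must be supported by an unfolding of a positive (mu)
    fixed point on the left or a negative (nu) fixed point on the right,
    and the supporting fixed point must have the highest priority among
    those unfolded in the cycle (a lower number means a higher priority,
    as in "nat has higher priority than ctrl" in Example 2.2)."""
    if not cycle:
        return False
    top = min(prio[t] for (_side, t) in cycle)
    return any(prio[t] == top and
               ((pol[t] == "mu" and side == "L") or
                (pol[t] == "nu" and side == "R"))
               for (side, t) in cycle)

prio, pol = {"nat": 1}, {"nat": "mu"}
# Unfolding nat only on the right cannot support a cycle ...
assert not guarded([("R", "nat")], prio, pol)
# ... but unfolding it on the left can.
assert guarded([("L", "nat")], prio, pol)
```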
As an example, the following circular pre-proof defined on the signature nat =^1_µ 1 ⊕ nat depicts an infinite pre-proof that consists of repeated applications of µR followed by ⊕R; it is not guarded and turns out not to be locally valid either. On the other hand, on the signature conat =^1_ν 1 & conat, we can define a circular pre-proof using greatest fixed points that is guarded and locally valid:

Fixed Points in the Context of Session Types
Session types [Hon93,HVK98] describe the communication behavior of interacting processes. Binary session types, where each channel has two endpoints, have been recognized as arising from linear logic (either in its intuitionistic [CP10,CPT16] or classical [Wad12] formulation) by a Curry-Howard interpretation of propositions as types, proofs as programs, and cut reduction as communication. In the context of programming, recursive session types have also been considered [TCP13,LM16], and they seem to fit smoothly, just as recursive types fit well into functional programming languages. However, they come at a price, since we abandon the Curry-Howard correspondence.
In this paper we show that this is not necessarily the case: we can remain on a sound logical footing as long as we (a) refine general recursive session types into least and greatest fixed points, (b) are prepared to accept circular proofs, and (c) impose conditions under which recursively defined processes correspond to valid circular proofs. General (nonlinear) type theory has followed a similar path, isolating inductive and coinductive types with a variety of conditions to ensure validity of proofs. In the setting of subsingleton logic, however, we find many more symmetries than typically present in traditional type theories, which appear to be naturally biased towards least fixed points and inductive reasoning.
Under the Curry-Howard interpretation, a subsingleton judgment A ⊢_Σ B is annotated as x : A ⊢_Σ P :: (y : B), where x and y are two different channels and A and B are their corresponding session types. One can understand this judgment as: process P provides a service of type B along channel y while using channel x of type A, a service that is provided by another process along channel x [DP16]. We can form a chain of processes P_0, P_1, . . . , P_n with the typing · ⊢ P_0 :: (x_0 : A_0), x_0 : A_0 ⊢ P_1 :: (x_1 : A_1), . . . , x_{n−1} : A_{n−1} ⊢ P_n :: (x_n : A_n), which we write as P_0 |_{x_0} P_1 |_{x_1} · · · |_{x_{n−1}} P_n in analogy with the notation P | Q for parallel composition of processes, although here it is not commutative. In such a chain, process P_{i+1} uses a service of type A_i provided by process P_i along the channel x_i, and provides its own service of type A_{i+1} along the channel x_{i+1}. Process P_0 provides a service of type A_0 along channel x_0 without using any services. So a process in the session type system, instead of being reduced to a value as in functional programming, interacts with both its left and right interfaces by sending and receiving messages. Processes P_i and P_{i+1}, for example, communicate with each other along the channel x_i of type A_i: if process P_i sends a message along channel x_i to the right and process P_{i+1} receives it from the left (along the same channel), session type A_i is called a positive type. Conversely, if process P_{i+1} sends a message along channel x_i to the left and process P_i receives it from the right (along the same channel), session type A_i is called a negative type. In Section 4 we show in detail that this symmetric behavior of left and right session types results in a symmetric behavior of least and greatest fixed point types.
In general, in a chain of processes, the leftmost type may not be empty. Also, strictly speaking, the names of the channels are redundant since every process has two distinguished ports: one to the left and one to the right, either one of which may be empty. Because of this, we may sometimes omit the channel name, but in the theory we present in this paper it is convenient to always refer to communication channels by unique names.
For programming examples, it is helpful to allow not just two, but any finite number of alternatives for internal (⊕) and external (&) choice. Such finitary choices can equally well be interpreted as propositions, so this is not a departure from the proofs as programs interpretation.
Definition 2.1. We define session types with the following grammar, where L ranges over finite sets of labels, denoted by ℓ and k.
As a first programming-related example, consider natural numbers in unary form (nat) and a type to demand access to a number if desired (ctrl).
Example 2.2 (Natural numbers on demand). In this example, Σ consists of an inductive and a coinductive type; these are, respectively: (i) the type of natural numbers (nat), built using two constructors for zero (z) and successor (s), and (ii) a type to demand access to a number if desired (ctrl), defined using two destructors: now, to obtain the number, and notyet, to postpone access, possibly indefinitely. Here, the priorities of nat and ctrl are, respectively, 1 and 2, understood as "nat has higher priority than ctrl".
Example 2.3 (Binary numbers in standard form). As another example consider the signature with two types with positive polarity and the same priority: std and pos. Here, std is the type of standard bit strings, i.e., bit strings terminated with $ without any leading 0 bits, and pos is the type of positive standard bit strings, i.e., all standard bit strings except $.
Note that in our representation the least significant bit is sent first. bits =^1_µ ⊕{b0 : bits, b1 : bits} and cobits =^2_ν &{b0 : cobits, b1 : cobits}. In a functional language, the type cobits would be a greatest fixed point (an infinite stream of bits), while bits would be recognized as an empty type. However, in the session type system, we treat them in a symmetric way: bits is an infinite sequence of bits with positive polarity, and its dual type, cobits, is an infinite stream of bits with negative polarity. In Examples 5.1 and 5.2, in Section 5, we further illustrate the symmetry of these types by providing two recursive processes having them as their interfaces. Even though we can, for example, write transducers of type bits ⊢ bits inside the language, we cannot write a valid process of type · ⊢ bits that produces an infinite stream of bits.
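As an aside, the invariant behind std and pos in Example 2.3 can be phrased concretely. The following Python sketch (with hypothetical helper names) checks whether a finite bit string, written least significant bit first and terminated with $, is standard, i.e., has no leading 0 bits:

```python
def is_std(s: str) -> bool:
    """A standard bit string: bits appear least significant first and the
    string is terminated with '$', with no leading (most significant)
    0 bits. Hence either the string is just '$' (the number zero), or
    the bit right before '$' -- the most significant one -- is 1."""
    return (s.endswith("$") and set(s[:-1]) <= {"0", "1"}
            and (s == "$" or s[-2] == "1"))

def is_pos(s: str) -> bool:
    """A positive standard bit string: any standard one except '$'."""
    return is_std(s) and s != "$"

assert is_std("$") and not is_pos("$")  # zero is standard but not positive
assert is_pos("01$")                    # LSB first: 0 then 1, i.e. the number 2
assert not is_std("10$")                # most significant bit is 0: not standard
```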

A Synchronous Operational Semantics
The operational semantics for process expressions under the proofs-as-programs interpretation of linear logic has been treated exhaustively elsewhere [CP10,CPT16,TCP13,Gri16]. We therefore only briefly sketch the operational semantics here. Communication is synchronous, which means both sender and receiver block until they synchronize. Asynchronous communication can be modeled using a process with just one output action followed by forwarding [GV10,DCPT12]. A significant difference from much prior work, however, is that we treat types in an isorecursive way: a message is sent to unfold the definition of a type t. This message is written as µ_t for a least fixed point and ν_t for a greatest fixed point. The language of process expressions and their operational semantics presented in this section is suitable for general isorecursive types, if those are desired in an implementation. The resulting language then satisfies a weaker progress property sometimes called global progress (see, for example, [CP10]).
where X, Y, . . . are process variables, x, y, . . . are channel names, and x̄ (ȳ) is either empty or x (y). In (x ← P_x ; Q_x), the variable x is bound and represents a channel connecting the two processes. Throughout the paper, we may subscript processes with bound variables if they are allowed to occur in them. In the programming examples, we may write y ← X ← x; Q_y instead of (y ← (y ← X ← x); Q_y) when a new process executing X is spawned. The left and right session types a process interacts with are uniquely labelled with channel names: x : A ⊢ P :: (y : B).
We read this as: process P uses channel x of type A and provides a service of type B along channel y. However, since a process might not use any service provided along its left channel, e.g. · ⊢ P :: (y : B), or it might not provide any service along its right channel, e.g. x : A ⊢ Q :: (·), the labelling of processes is generalized to the form x̄ : ω ⊢ P :: (ȳ : γ), where x̄ (ȳ) is either empty or x (y), and ω (γ) is empty given that x̄ (ȳ) is empty.
Process definitions are of the form x̄ : ω ⊢ X = P_{x̄,ȳ} :: (ȳ : γ), representing that the variable X is defined as process P. A program P is defined as a pair ⟨V, S⟩, where V is a finite set of process definitions and S is the main process variable. Figure 2 shows the logical rules annotated with processes in the context of session types. This set of rules inherits the full symmetry of its underlying sequent calculus. They interpret pre-proofs: as can be seen in the rule Def, the typing rules inherit the infinitary nature of deductions from the logical rules in Figure 1 and are therefore not directly useful for type checking. We obtain a finitary system to check circular pre-proofs by removing the first premise from the Def rule and checking each process definition in V separately, under the hypothesis that all process definitions are well-typed. Since the system is entirely syntax-directed, we may sometimes equate (well-typed) programs with their typing derivations. This system rules out communication mismatches without forcing processes to actually communicate along their external channels. In order to also enforce communication, the rules need to track additional information (see the rules in Figures 3 (infinitary) and 4 (finitary) in Sections 9 and 10). As an excerpt of Figure 2, the rules Id, Cut, ⊕R, and ⊕L read:

Id: x : A ⊢ y ← x :: (y : A)
Cut_w: from x̄ : ω ⊢ P_w :: (w : A) and w : A ⊢ Q_w :: (ȳ : γ), derive x̄ : ω ⊢ (w ← P_w ; Q_w) :: (ȳ : γ)
⊕R: from x̄ : ω ⊢ P :: (y : A_k) for some k ∈ L, derive x̄ : ω ⊢ Ry.k; P :: (y : ⊕{ℓ : A_ℓ}_{ℓ∈L})
⊕L: from x : A_ℓ ⊢ P_ℓ :: (ȳ : γ) for all ℓ ∈ L, derive x : ⊕{ℓ : A_ℓ}_{ℓ∈L} ⊢ caseL x (ℓ ⇒ P_ℓ)_{ℓ∈L} :: (ȳ : γ)
The computational semantics is defined on configurations P_0 |_{x_1} · · · |_{x_n} P_n, where | is associative and has unit (·) but is not commutative. The following transitions can be applied anywhere in a configuration: the forward rule removes process y ← x from the configuration and replaces both channels x and y in the rest of the configuration with a fresh channel z. The rule for x ← P_x ; Q_x spawns process P_z and continues as Q_z; to ensure uniqueness of channels, we need z to be a fresh channel. For internal choice, Rx.k; P sends label k along channel x to the process on its right and continues as P. The process on the right, caseL x (ℓ ⇒ Q_ℓ), receives the label k sent from the left along channel x and chooses the branch with label k to continue with Q_k. The last transition rule unfolds the definition of a process variable X while instantiating the left and right channels ū and w̄ in the process definition with the proper channel names, x̄ and ȳ respectively.
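To make the label-passing transitions concrete, here is a minimal Python sketch of two of the reductions described above: a label (or isorecursive unfolding message, which behaves the same way) sent to the right meeting a caseL, and a close meeting a wait. The tuple representation of processes and all names are our own simplifications, not the paper's syntax; forwarding and spawning are omitted:

```python
# Process terms are hypothetical tuples:
#   ("R", k, P)   -- send label/unfolding message k to the right, continue as P
#   ("caseL", B)  -- receive a label k from the left, continue as B[k]
#   ("close",)    -- close the right channel and terminate
#   ("waitL", P)  -- wait for the left channel to close, continue as P

def step(config):
    """Apply one transition to an adjacent pair, if any; else return None."""
    for i in range(len(config) - 1):
        left, right = config[i], config[i + 1]
        if left[0] == "R" and right[0] == "caseL":
            _, k, p = left
            return config[:i] + [p, right[1][k]] + config[i + 2:]
        if left[0] == "close" and right[0] == "waitL":
            return config[:i] + [right[1]] + config[i + 2:]
    return None

def run(config):
    """Iterate transitions until the configuration is stuck or finished."""
    while (nxt := step(config)) is not None:
        config = nxt
    return config

# A producer sends mu_nat, then z, then closes; the consumer receives both
# messages and waits for the close, mirroring Block's z branch.
producer = ("R", "mu_nat", ("R", "z", ("close",)))
consumer = ("caseL", {"mu_nat": ("caseL", {"z": ("waitL", ("close",))})})
assert run([producer, consumer]) == [("close",)]
```

The remaining process attempts to close its (now external) right channel, which is exactly the kind of external communication the progress properties of later sections are about.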

Ensuring communication and local validity
In this section we motivate our algorithm as an effectively decidable compositional and local criterion which ensures that a program always terminates either in an empty configuration or one attempting to communicate along external channels. By defining type variables in the signature and process variables in the program, we can generate (mutually) recursive processes which correspond to circular pre-proofs in the sequent calculus. In Examples 4.1 and 4.2, we provide such recursive processes along with explanations of their computational steps and their corresponding derivations.
Example 4.1. Take the signature Σ_1 := nat =^1_µ ⊕{z : 1, s : nat}. We define a process · ⊢ Loop :: (y : nat). P_1 := ⟨{Loop}, Loop⟩ forms a program over the signature Σ_1: Loop (i) sends a positive fixed point unfolding message to the right, (ii) sends the label s, as another message corresponding to successor, to the right, and (iii) calls itself, looping back to (i). The program runs forever, sending successor labels to the right, without receiving any fixed point unfolding messages from the left or the right. Via the Curry-Howard correspondence, we can obtain the following infinite derivation in the system of Figure 1 from the unique typing derivation of process Loop: P_2 := ⟨{Block}, Block⟩ forms a program over the signature Σ_1: (i) Block waits until it receives a positive fixed point unfolding message from the left, then (ii) waits for another message from the left to determine the path it will continue with: (a) if the message is a z label, (ii-a) the program waits until a closing message is received from the left; upon receiving that message, it closes the left and then the right channel.
(b) If the message is an s label, (ii-b) the program calls itself and loops back to (i). Process Block corresponds to the following infinite derivation: Derivations corresponding to both of these programs are cut-free. Also, no internal loop takes place during their computation, in the sense that they both communicate along their left or right channels after a finite number of steps. For process Loop this communication is restricted to sending infinitely many unfolding and successor messages to the right. Process Block, on the other hand, receives the same type of messages after a finite number of steps, as long as they are provided by a process on its left. Composing these two processes as in x ← Loop ← · |_x y ← Block ← x results in an internal loop: process Loop keeps providing unfolding and successor messages to process Block so that they both can continue the computation and call themselves recursively. Because of this internal loop, the composition is not acceptable: it never communicates with its left (empty channel) or right (channel y). The infinite derivation corresponding to the composition x ← Loop ← · |_x y ← Block ← x therefore should be rejected as invalid. The cut elimination algorithm introduced by Fortier and Santocanale uses a reduction function Treat that may never halt. They proved that Treat is locally terminating on derivations satisfying the guard condition, i.e., it always halts on guarded proofs [FS13]. The above derivation is an example of one that does not satisfy the FS guard condition, and the cut elimination algorithm does not locally terminate on it.
The progress property for a configuration of processes ensures that during its computation it either (i) takes a step, (ii) is empty, or (iii) communicates along its left or right channel. Without (mutually) recursive processes, this property is enough to make sure that computation never gets stuck. With (mutually) recursive processes and fixed points, however, this property is not strong enough to rule out internal loops. The composition x ← Loop ← · |_x y ← Block ← x, for example, satisfies the progress property but never interacts with any other external process. We introduce a stronger form of the progress property that requires one of the conditions (ii) or (iii) to hold after a finite number of computation steps.
Like cut elimination, strong progress is not compositional. Processes Loop and Block both satisfy the strong progress property but their composition x ← Loop ← · |_x y ← Block ← x does not. We will show in Section 12 that FS validity implies strong progress. But, in contrast to strong progress, FS validity is compositional in the sense that the composition of two disjoint valid programs is also valid. However, the FS guard condition is not local. Locality is particularly important from the programming point of view: it is the combination of two properties that are pervasive and often implicit in the study of programming languages. First, the algorithm is syntax-directed, following the structure of the program; second, it checks each process definition separately, requiring only the signature and the types of other processes, but not their definitions. One advantage is asymptotic complexity and, consequently, a practically very efficient implementation: in Remark 11.12 we show that the time complexity of our validity algorithm is linear in the total input, which consists of the signature and the process definitions. Another is precision of error messages: locality implies that there is an exact program location where the condition is violated. Validity is a complex property, so the value of precise error messages cannot be overstated. The final advantage is modularity: all we care about in a process is its interface, not its definition, which means we can revise definitions individually without breaking validity of the rest of the program, as long as we respect their interfaces. Our goal is to construct a locally checkable validity condition that accepts (a subset of) programs satisfying strong progress and is compositional.
In functional programming languages, a program is called terminating if it reduces to a value in a finite number of steps, and is called productive if every piece of its output is generated in a finite number of steps (even if the program potentially runs forever). As in the current work, the theoretical underpinnings for terminating and productive programs are least and greatest fixed points, respectively, but due to the functional nature of computation they take a different and less symmetric form than here (see, for example, [BM13,Gra16]). In our system of session types, least and greatest fixed points correspond to defined type variables with positive and negative polarity, respectively, and their behaviors are quite symmetric: as in Definition 3.1, an unfolding message µ_t for a type variable t with positive polarity is received from the left and sent to the right, while for a variable t with negative polarity, the unfolding message ν_t is received from the right and sent to the left. Going back to Examples 4.1 and 4.2, process Loop seems less acceptable than process Block: process Loop does not receive any least or greatest fixed point unfolding messages, and it is neither a terminating nor a productive process. We want our algorithm to accept process Block rather than Loop, since it cannot accept both. This motivates a definition of reactivity for session-typed processes.
A program defined over a signature Σ is reactive to the left if it can run forever only if, for some positive fixed point t ∈ Σ with priority i, it receives the fixed point unfolding message µ_t from the left infinitely often. A program is reactive to the right if it can run forever only if, for some negative fixed point t ∈ Σ with priority i, it receives the fixed point unfolding message ν_t from the right infinitely often.
A program is called reactive if it is either reactive to the right or reactive to the left. By this definition, process Block is reactive while process Loop is not. Although reactivity is not local, we use it as the motivation behind our algorithm. We construct our local validity condition one step at a time: in each step, we expand the condition to accept one more family of interesting reactive programs, provided that we can still check the condition locally. We first establish a local algorithm for programs with only direct recursion. We then expand the algorithm to support mutual recursion as well, and finally examine a subtlety regarding the cut rule that lets us accept more programs locally. The reader may skip ahead to Section 10, which presents our complete finitary algorithm. Later, in Sections 11 and 12, we prove that our algorithm ensures the FS guard condition and strong progress.
Priorities of type variables in a signature are central to ensure that a process defined based on them satisfies strong progress. Throughout the paper we assume that the priorities are assigned (by a programmer) based on the intuition of why strong progress holds. We conclude this section with an example of a reactive process Copy. This process, similar to Block, receives a natural number from the left but instead of consuming it, sends it over to the right along a channel of type nat.
Example 4.3. With the signature Σ_1 := nat =^1_µ ⊕{z : 1, s : nat} we define the process Copy, x : nat ⊢ Copy :: (y : nat), as

Copy = caseL x (µ_nat ⇒          % receive unfolding message µ_nat from left
  caseL x (z ⇒ Ry.µ_nat;         % send unfolding message µ_nat to right
                Ry.z;             % send label z to right
                waitL x;          % wait for left channel to close
                closeR y          % close right channel
         | s ⇒ Ry.µ_nat;         % send unfolding message µ_nat to right
                Ry.s;             % send label s to right
                y ← Copy ← x))   % recursive call

This is an example of a recursive process, and P_3 := ⟨{Copy}, Copy⟩ forms a left reactive program over the signature Σ_1: (i) it waits until it receives a positive fixed point unfolding message from the left, then (ii) waits for another message from the left to determine the path it will continue with: (a) if the message is a z label, (ii-a) the program sends a positive fixed point unfolding message to the right, followed by the label z, and then waits until a closing message is received from the left; upon receiving that message, it closes the right channel.
(b) If the message is an s label, (ii-b) the program sends a positive fixed point unfolding message to the right, followed by the label s, and then calls itself, looping back to (i). The computational content of Copy is simply to copy a natural number given on the left to the right. Process Copy does not involve spawning (its underlying derivation is cut-free) and satisfies the strong progress property. This property is preserved when it is composed with Block as y ← Copy ← x |_y z ← Block ← y.

Local Validity Algorithm: Naive Version
In this section we develop a first naive version of our local validity algorithm using Examples 5.1-5.2.
Example 5.1. Let the signature be Σ_2 := bits =^1_µ ⊕{b0 : bits, b1 : bits} and define the process BitNegate, x : bits ⊢ BitNegate :: (y : bits), as

BitNegate = caseL x (µ_bits ⇒          % receive unfolding message µ_bits from left
  caseL x (b0 ⇒ Ry.µ_bits;             % send unfolding message µ_bits to right
                 Ry.b1;                 % send label b1 to right
                 y ← BitNegate ← x     % recursive call
         | b1 ⇒ Ry.µ_bits;             % send unfolding message µ_bits to right
                 Ry.b0;                 % send label b0 to right
                 y ← BitNegate ← x))   % recursive call

BitNegate forms a left reactive program over the signature Σ_2, quite similar to Copy. Computationally, BitNegate is a buffer with one bit capacity that receives a bit from the left and stores it until a process on its right asks for it. After that, the bit is negated and sent to the right, and the buffer becomes free to receive another bit.
Example 5.2. Dual to Example 5.1, we can define coBitNegate. Let the signature be Σ_3 := cobits =^1_ν &{b0 : cobits, b1 : cobits} with process x : cobits ⊢ coBitNegate :: (y : cobits), where coBitNegate is defined dually to BitNegate. Computationally, coBitNegate is also a buffer with one bit capacity. In contrast to BitNegate in Example 5.1, its types have negative polarity: it receives a bit from the right and stores it until a process on its left asks for it. After that, the bit is negated and sent to the left, and the buffer becomes free to receive another bit.
Remark 5.3. The property that ensures the reactivity of the previous examples lies in their step (i), in which the program blocks until an unfolding message is received: the program can only continue the computation if it receives a message at step (i), and even after receiving the message it can take only finitely many further steps before the computation ends or another unfolding message is needed.
We first develop a naive version of our algorithm which captures the property explained in Remark 5.3: associate an initial integer value (say 0) with each channel and define the basic step of our algorithm to be decreasing the value associated with a channel by one whenever it receives a fixed point unfolding message. Also, for a reason that is explained later in Remark 6.2, whenever a channel sends a fixed point unfolding message its value is increased by one. Then, at each recursive call, the values of the left and right channels are compared to their initial values.
For instance, in Example 4.3, in step (i), where the process receives a µ nat message via the left channel (x), the value associated with x is decreased by one, while in steps (ii-a) and (ii-b), in which the process sends a µ nat message via the right channel (y), the value associated with y is increased by one. When the recursive call occurs, channel x has the value −1 < 0, meaning that at some point in the computation it received a positive fixed point unfolding message. Note that by the definition of Σ 1 , y never receives a fixed point unfolding message, so its value never decreases, and x never sends a fixed point unfolding message, so its value never increases. The same criterion works for the BitNegate program over the signature Σ 2 defined in Example 5.1, since Σ 2 also contains only one positive fixed point. However, for a program defined over a signature with negative polarity, such as the one defined in Example 5.2, this condition does not work: by the definition of Σ 3 , y only receives unfolding fixed point messages, so its value only decreases; on the other hand, x cannot receive an unfolding fixed point message from the left and thus its value never decreases. In this case, the property in Remark 5.3 is captured by comparing the initial value of the list [y, x], instead of [x, y], with its value just before the recursive call. For a signature with only a single recursive type we can form a list by looking at the polarity of its type, such that the value of the channel that receives the unfolding message comes first, and the value of the other one comes second. Our algorithm ensures that the value of the list just before a recursive call is lexicographically less than the initial value of the list. In this section, we have implemented the algorithm by counting the number of fixed point unfolding messages sent or received along each channel.
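The naive counting criterion just described can be sketched in a few lines of Python. This is our own illustration rather than the paper's implementation; the event encoding and the function names are ours.

```python
# Naive version of the local validity check (a sketch). Each channel carries
# an integer counter: -1 for each fixed point unfolding message received,
# +1 for each one sent. At a recursive call, the list of counters -- ordered
# so that the receiving channel comes first -- must be lexicographically
# below its initial value [0, 0].

def counters(events):
    """events: pairs (channel, direction) for each unfolding message."""
    vals = {"x": 0, "y": 0}
    for ch, direction in events:
        vals[ch] += 1 if direction == "send" else -1
    return vals

def naive_valid(events, polarity):
    """polarity 'mu': compare [x, y]; polarity 'nu': compare [y, x]."""
    v = counters(events)
    lst = [v["x"], v["y"]] if polarity == "mu" else [v["y"], v["x"]]
    return lst < [0, 0]

# Copy over the positive signature Sigma_1: receive mu_nat on x, send on y.
print(naive_valid([("x", "recv"), ("y", "send")], "mu"))   # True

# coBitNegate over the negative signature Sigma_3: receive on y, send on x.
# Comparing [y, x] instead of [x, y] is what makes this call count as valid.
print(naive_valid([("y", "recv"), ("x", "send")], "nu"))   # True
print(naive_valid([("y", "recv"), ("x", "send")], "mu"))   # False
```

Python's built-in list comparison is lexicographic, which matches the order the text describes.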
However, keeping track of the exact number of unfolding messages is more information than necessary. At the end of the next section, we introduce an alternative implementation that establishes relations between channels after sending or receiving a fixed point unfolding message, without tracking the exact number of messages. Moreover, the property explained in Remark 5.3 of the previous section is not strict enough, particularly when the signature has more than one recursive type: in that case, not all programs that wait for a fixed point unfolding message before a recursive call are reactive.
Example 6.1. Consider the signature Σ 4 := ack = 1 µ ⊕{ack : astream}, astream = 2 ν &{head : ack, tail : astream}, nat = 3 µ ⊕{z : 1, s : nat} ack is a type with positive polarity that, upon unfolding, describes a protocol requiring an acknowledgment message to be sent to the right (or be received from the left). astream is a type with negative polarity of a potentially infinite stream where its head is always followed by an acknowledgement while tail is not.
P 6 := ⟨{Ping, Pong, PingPong}, PingPong⟩ forms a program over the signature Σ 4 , with the typings x : nat ⊢ Ping :: (w : astream), w : astream ⊢ Pong :: (y : nat), and x : nat ⊢ PingPong :: (y : nat). (i) Program P 6 , starting from PingPong, spawns a new process Ping and continues as Pong. (ii-Pong) Process Pong sends an astream unfolding and then a head message to the left, and then (iii-Pong) waits for an acknowledgment, i.e., ack, from the left.
(ii-Ping) At the same time, process Ping waits for an astream fixed point unfolding message from the right, which becomes available after step (ii-Pong). Upon receiving the message, it waits to receive either head or tail from the right, which is also available from (ii-Pong) and is in fact a head. So (iii-Ping) it continues with the branch corresponding to head, acknowledges receipt of the previous messages by sending an unfolding message and the label ack to the right, and then calls itself (ii-Ping).
(iv-Pong) Process Pong now receives the two messages sent at (iii-Ping) and thus can continue by sending a nat unfolding message and the label s to the right, and finally calling itself (ii-Pong). Although both recursive processes Ping and Pong at some point wait for a fixed point unfolding message, this program runs infinitely without receiving any messages from the outside, and thus is not reactive.
The back-and-forth exchange of fixed point unfolding messages between two processes in the previous example can arise when at least two mutually recursive types with different polarities are in the signature. To avoid such non-reactive behavior, we need to incorporate priorities of the type variables into the validity checking algorithm and track both sending and receiving of the unfolding messages.
Remark 6.2. In Example 6.1, for instance, waiting to receive an unfolding message ν astream of priority 2 in line (ii-Ping) is not enough to ensure validity of the recursive call, because later, in line (iii-Ping), the process sends an unfolding message of the higher priority 1.
To preclude such a call we form a list for each process. This list records, for each type variable in order of priority, the fixed point unfolding messages that the process receives and sends before a recursive call.
Example 6.3. Consider the signature and program P 6 as defined in Example 6.1. For the process x : nat ⊢ w ← Ping ← x :: (w : astream) we form the list list(x, w) = [(x 1 , w 1 ), (w 2 , x 2 ), (x 3 , w 3 )]. Types with positive polarity, i.e., ack and nat, receive messages from the left channel (x) and send messages to the right channel (w), while those with negative polarity, i.e., astream, receive from the right channel (w) and send to the left one (x); in each pair, the element for the channel that receives the unfolding message comes first, and the element for the channel that sends it comes second. To keep track of the sent/received messages, we start with [0, 0, 0, 0, 0, 0] as the value of the list when the process x : nat ⊢ Ping :: (w : astream) is first spawned. Then, similar to the first version of our algorithm, on the steps in which the process receives a fixed point unfolding message the value of the corresponding element of the list is decreased by one, and on the steps in which it sends a fixed point unfolding message the corresponding value is increased by one. The validity condition as described in Remark 6.2 holds iff the value of the list at the time of a recursive call is lexicographically less than the value the process started with.
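Ping's trace can be replayed mechanically. The following Python sketch (our own encoding; the index table and event names are ours) flattens list(x, w) into six counters and checks the lexicographic condition at the recursive call.

```python
# Flattened value of list(x, w) = [(x1, w1), (w2, x2), (x3, w3)] over
# Sigma_4, with priorities 1 = ack (mu), 2 = astream (nu), 3 = nat (mu).
# For each priority the receiving element comes first, the sending second.
IDX = {
    ("ack", "recv"): 0, ("ack", "send"): 1,
    ("astream", "recv"): 2, ("astream", "send"): 3,
    ("nat", "recv"): 4, ("nat", "send"): 5,
}

def run(events):
    """-1 for each received unfolding message, +1 for each sent one."""
    vals = [0] * 6
    for typ, direction in events:
        vals[IDX[(typ, direction)]] += 1 if direction == "send" else -1
    return vals

# Ping's loop: receive an astream unfolding from the right (ii-Ping), then
# send an ack unfolding to the right (iii-Ping), then call itself.
at_call = run([("astream", "recv"), ("ack", "send")])
print(at_call)              # [0, 1, -1, 0, 0, 0]
print(at_call < [0] * 6)    # False: not lexicographically smaller than the
                            # initial value, so Ping's recursive call fails
```

The +1 at the priority-1 "send" position dominates the comparison, which is exactly the point of Remark 6.2: the received priority-2 unfolding cannot justify the call once a higher-priority unfolding is sent.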
We leave it to the reader to verify that no matter how we assign priorities of the type variables in Σ 4 , our condition rejects PingPong.
The following definition captures the idea of forming lists described above. Rather than directly referring to type variables such as ack or astream we just refer to their priorities, since that is the relevant information.
Definition 6.4. For a process x : ω ⊢ P :: (y : A) over signature Σ, define list(x, y) = [p 1 , . . . , p n ], where p i = (x i , y i ) if the fixed point with priority i is a µ, and p i = (y i , x i ) if it is a ν, and where n is the lowest priority in Σ.
In the remainder of this section we use n to denote the lowest priority in Σ (which is numerically maximal).
Example 6.5. Consider the signature Σ 1 and program P 3 := ⟨{Copy}, Copy⟩ from Example 4.3: Σ 1 := nat = 1 µ ⊕{z : 1, s : nat}, and y ← Copy ← x = case Lx (µ nat ⇒ case Lx ( z ⇒ Ry.µ nat ; Ry.z; wait Lx; close Ry | s ⇒ Ry.µ nat ; Ry.s; y ← Copy ← x)). By Definition 6.4, for process x : nat ⊢ Copy :: (y : nat), we have n = 1 and list(x, y) = [(x 1 , y 1 )], since the fixed point with priority 1 is a µ. Just as for the naive version of the algorithm, we can trace the value of list(x, y). To capture the idea of decreasing/increasing the value of the elements of list(_, _) by one, as depicted in Example 6.3 and Example 6.5, we assume that a channel transforms into a new generation of itself after sending or receiving a fixed point unfolding message.
Example 6.6. Process x : nat ⊢ y ← Copy ← x :: (y : nat) in Example 6.5 starts its computation with the initial generation of its left and right channels: x 0 : nat ⊢ y 0 ← Copy ← x 0 :: (y 0 : nat).
The channels evolve as the process sends or receives a fixed point unfolding message along them, until the process is called recursively with a new generation of variables.
In the inference rules introduced in Section 9, instead of recording the value of each element of list(_, _) as we did in Example 6.3 and Example 6.5, we introduce Ω to track the relation between different generations of a channel, indexed by the priorities of their types.
Remark 6.7. Generally speaking, x α+1 i < x α i is added to Ω when x α receives a fixed point unfolding message for a type with priority i and transforms to x α+1 . This corresponds to the decrease by one in the previous examples.
If a fixed point unfolding message for a type with priority i is sent on x α , which then evolves to x α+1 , then x α i and x α+1 i are considered incomparable in Ω. This corresponds to the increase by one in the previous examples, since for the sake of lexicographically comparing the value of list(_, _) at the first call of a process to its value just before a recursive call, there is no difference whether x α+1 is greater than x α or incomparable to it. When x α receives/sends a fixed point unfolding message of a type with priority i and transforms to x α+1 , for any type with priority j ≠ i, the value of x α j and x α+1 j must remain equal. In these steps, we add x α j = x α+1 j for j ≠ i to Ω.
A process in the formalization of the intuition above is therefore typed as x α : ω ⊢ Ω P :: (y β : A), where x α is the α-th generation of channel x. The syntax and operational semantics of processes with generational channels are the same as the corresponding definitions introduced in Section 3; we simply ignore the generations on the channels to match processes with the previous definitions. We enforce the assumption that channel x α transforms into its next generation x α+1 upon sending/receiving a fixed point unfolding message in the typing rules of Section 9. The relation between the channels, indexed by the priorities of their types, is built step by step in Ω and represented by ≤. The reflexive transitive closure of Ω forms a partial order ≤ Ω . We extend ≤ Ω to lists of channels indexed by the priorities of their types, compared lexicographically. We may omit the subscript Ω from ≤ Ω whenever it is clear from the context. In the next examples, we present the set of relations Ω in the rightmost column.
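The generation-and-Ω bookkeeping of Remark 6.7 can be prototyped directly. The sketch below is our own minimal encoding (the class and method names are ours, not the paper's): atoms are triples (channel, generation, priority), receives add strict facts, sends leave the affected priority incomparable, and ≤ Ω is the reflexive transitive closure, extended lexicographically to lists.

```python
class Omega:
    """Relations between channel generations, as in Remark 6.7 (a sketch)."""
    def __init__(self, priorities):
        self.priorities = priorities
        self.edges = {}                      # atom -> {(atom, strict?)}

    def _add(self, a, b, strict):
        self.edges.setdefault(a, set()).add((b, strict))
        if not strict:                       # equality facts are symmetric
            self.edges.setdefault(b, set()).add((a, False))

    def recv(self, ch, gen, pri):
        # receiving priority pri: strictly smaller at pri, equal elsewhere
        for i in self.priorities:
            self._add((ch, gen + 1, i), (ch, gen, i), strict=(i == pri))

    def send(self, ch, gen, pri):
        # sending priority pri: incomparable at pri, equal elsewhere
        for i in self.priorities:
            if i != pri:
                self._add((ch, gen + 1, i), (ch, gen, i), strict=False)

    def less(self, a, b):
        """a < b in Omega: b reachable from a via at least one strict fact."""
        seen, stack = set(), [(a, False)]
        while stack:
            node, strict = stack.pop()
            if node == b and strict:
                return True
            for nxt, s in self.edges.get(node, ()):
                if (nxt, strict or s) not in seen:
                    seen.add((nxt, strict or s))
                    stack.append((nxt, strict or s))
        return False

    def equal(self, a, b):
        """a = b in Omega: reachable via equality facts only."""
        seen, stack = {a}, [a]
        while stack:
            node = stack.pop()
            if node == b:
                return True
            for nxt, s in self.edges.get(node, ()):
                if not s and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

def lex_less(om, xs, ys):
    """Lexicographic extension of <= Omega; incomparable elements fail."""
    for a, b in zip(xs, ys):
        if om.less(a, b):
            return True
        if not om.equal(a, b):
            return False
    return False

# Copy over Sigma_1 (one priority): x receives mu_nat, y sends mu_nat.
om = Omega([1])
om.recv("x", 0, 1)                     # x^1_1 < x^0_1
om.send("y", 0, 1)                     # y^1_1 incomparable to y^0_1
print(lex_less(om, [("x", 1, 1), ("y", 1, 1)],
                   [("x", 0, 1), ("y", 0, 1)]))        # True: call accepted
```

Replaying Ping's trace (receive priority 2, then send priority 1) with the same machinery leaves the priority-1 atom of the final generation incomparable to the initial one, so the lexicographic comparison fails, matching Example 6.3.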
The reader may refer to Figure 3 for the typing rules of processes enriched with generational channels and the set Ω. The cut rule presented in Figure 3 will be explained in detail later in Section 8. For now, the reader may consider the following simplified version of the cut rule instead:

x α : ω ⊢ Ω P w 0 :: (w 0 : A)    w 0 : A ⊢ Ω Q w 0 :: (y β : C)
────────────────────────────────────────────────────── Cut w
x α : ω ⊢ Ω (w ← P w ; Q w ) :: (y β : C)

Mutual Recursion in the Local Validity Condition
In the examples of previous sections, the recursive calls were not mutual. In general, a process may call any other process variable in the program, and this call can be mutually recursive. In this section, we incorporate mutually recursive calls into our algorithm.
Example 7.1. Consider a program over the signature Σ 4 with processes x : astream ⊢ Producer :: (y : nat) and x : ack ⊢ Idle :: (y : nat), where Producer calls Idle and Idle calls Producer. By analyzing the behavior of this program step by step, we see that it is a reactive program that counts the number of acknowledgments received from the left. The program starts with the process x 0 : astream ⊢ ∅ y 0 ← Producer ← x 0 :: (y 0 : nat). It first sends one message to the left to unfold the negative fixed point type, and its left channel evolves to the next generation. Then another message is sent to the left to request the head of the stream, and after that it calls process y 0 ← Idle ← x 1 . Process x 1 : ack ⊢ y 0 ← Idle ← x 1 :: (y 0 : nat) then waits to receive an acknowledgment from the left via a positive fixed point unfolding message for ack, and its left channel transforms into a new generation upon receiving it. Then it waits for the label ack and, upon receiving it, sends one message to the right to unfold the positive fixed point nat (this time the right channel evolves). Then it sends the label s to the right and calls y 1 ← Producer ← x 2 recursively. Similarly, we can observe that the actual recursive call on Idle, where Idle eventually calls itself, is valid.
To account for this situation, we introduce an order on process variables and trace the last seen variable on the path leading to the recursive call. In this example we define Idle to be less than Producer at position 2 (I ⊂ 2 P). We incorporate the process variables Producer and Idle into the lexicographical order on list(_, _) such that their values are placed exactly before the element in the list corresponding to the sent unfolding messages of the type with priority 2.
We now trace the ordering along the recursive calls as in the previous examples. However, not every relation over the process variables forms a partial order. For instance, having both P ⊂ 2 I and I ⊂ 2 P violates the antisymmetry condition. Introducing the position of process variables into list(_, _) is also a delicate issue: for example, if we have both I ⊂ 1 P and I ⊂ 2 P, it is not determined where to insert the values of Producer and Idle in list(_, _). Definition 7.2 captures the idea of Example 7.1. It defines the relation ⊆, given that the programmer introduces a family of partial orders such that their domains partition the set of process variables V . We again assume that the programmer defines this family based on the intuition of why a program satisfies strong progress. Definition 7.2 ensures that ⊆ is a well-defined partial order and that it is uniquely determined in which position of list(_, _) the process variables shall be inserted. Definition 7.4 gives the lexicographic order on list(_, _) augmented with the ⊆ relation.
Definition 7.2. Consider a program P = ⟨V, S⟩ defined over a signature Σ. Let {⊆ i } 0≤i≤n be a disjoint family of partial orders whose domains partition the set of process variables V . We define ⊆ as the union of the ⊆ i for i ≤ n, i.e., F ⊆ G iff F ⊆ i G for some (unique) i ≤ n. It is straightforward to see that ⊆ is a partial order over the set of process variables V . To integrate the order on process variables (⊂) with the order <, we need a prefix of the list from Definition 6.4. We give the following definition of list(x, y, j) to crop list(x, y) exactly before the element corresponding to a sent fixed point unfolding message for types with priority j.
Definition 7.3. For a process x : A ⊢ P :: (y : B) over signature Σ and 0 ≤ j ≤ n, define list(x, y, j) as the prefix of the list list(x, y) ending just before that element. We use these prefixes in the following definition.
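Under one reading of Definitions 7.2-7.4 (our own hedged reconstruction: we assume the process-variable comparison is inserted at the position determined by ⊂ i and breaks ties on an otherwise unchanged prefix), the augmented check for the Producer/Idle example can be sketched as follows. The integer-list values are those of the naive counting encoding from Section 6; all names here are ours.

```python
# Augmented lexicographic check (a sketch under an assumed reading of
# Definition 7.4): a call to Y inside the body of X is accepted if the list
# prefix before the insertion position strictly decreases, or the prefix is
# unchanged and Y is below X in the process-variable order, or X calls
# itself and the remainder of the list strictly decreases.

def call_ok(X, Y, at_call, initial, pos, proc_lt):
    if at_call[:pos] < initial[:pos]:
        return True
    if at_call[:pos] == initial[:pos]:
        if proc_lt(Y, X):
            return True
        if Y == X:
            return at_call[pos:] < initial[pos:]
    return False

# Producer/Idle over Sigma_4: flattened list [x1, y1, y2, x2, x3, y3];
# Idle is below Producer at position 2, so the process variables are
# inserted before the priority-2 "send" element x2, i.e. at index 3.
def proc_lt(a, b):
    return (a, b) == ("Idle", "Producer")

# Producer sends a nu_astream unfolding to the left (x2 += 1), calls Idle:
print(call_ok("Producer", "Idle", [0, 0, 0, 1, 0, 0], [0] * 6, 3, proc_lt))
# Idle receives a mu_ack unfolding (x1 -= 1), sends a mu_nat unfolding
# (y3 += 1), then calls Producer:
print(call_ok("Idle", "Producer", [-1, 0, 0, 0, 0, 1], [0] * 6, 3, proc_lt))
```

Both calls are accepted: Producer's call to Idle is justified by the variable order alone, while Idle's call back to Producer is justified by the strictly decreased prefix.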

A Modified Rule for Cut
There is a subtle aspect of local validity that we have not discussed yet: we need to relate a fresh channel, created by spawning a new process, to the previously existing channels. Process y α : A ⊢ (x ← P x ; Q x ) :: (z β : B), for example, creates a fresh channel w 0 , spawns process P w 0 providing along channel w 0 , and then continues as Q w 0 . For the sake of our algorithm, we need to identify the relation between w 0 , y α , and z β . Since w 0 is a fresh channel, a naive idea is to make w 0 incomparable to any other channel for any type variable t ∈ Σ. To represent this incomparability in our examples we write "∞" for the value of the fresh channel. While sound, we will see in Example 8.1 that we can improve on this naive approach to cover more valid processes. Example 8.1. Define the signature Σ 5 := ctr = 1 ν &{inc : ctr, val : bin}, bin = 2 µ ⊕{b0 : bin, b1 : bin, $ : 1}, which provides numbers in binary representation as well as an interface to a counter. We explore the following program P 8 = ⟨{BinSucc, Counter, NumBits, BitCount}, BitCount⟩, where x : bin ⊢ y ← BinSucc ← x :: (y : bin), x : bin ⊢ y ← Counter ← x :: (y : ctr), x : bin ⊢ y ← NumBits ← x :: (y : bin), and x : bin ⊢ y ← BitCount ← x :: (y : ctr). We define the relation ⊂ on process variables as BinSucc ⊂ 0 Counter ⊂ 0 BitCount and BinSucc ⊂ 0 NumBits ⊂ 0 BitCount. The process definitions are as follows, shown here already with their termination analysis.
The program starts with process BitCount, which creates a fresh channel w 0 , spawns a new process w 0 ← NumBits ← x α , and continues as y β ← Counter ← w 0 . Process y β ← Counter ← w α , as its name suggests, works as a counter, where w : bin holds the current value. When it receives an increment message inc, it computes the successor of w, accessible through channel z. If it receives a val message, it simply forwards the current value (w) to the client (y). Note that in this process both calls are valid according to the condition developed so far. This is also true for the binary successor process BinSucc, which presents no challenges: its only recursive call represents the "carry" of binary addition, when a number with lowest bit b1 has to be incremented.
The process w β ← NumBits ← x α counts the number of bits in the binary number x and sends the result along w, also in the form of a binary number. It calls itself recursively for every bit received along x and increments the result z to be returned along w. Note that if there are no leading zeros, this computes essentially the integer logarithm of x. The process NumBits is reactive. However, with our naive approach toward spawning a new process, the recursive call has the list value [∞, 0, −1, ∞], which is not lexicographically less than the initial [0, 0, 0, 0], meaning that the local validity condition developed so far fails.
Note that we cannot simply define z 0 1 = w β 1 and z 0 2 = w β 2 , or z 0 1 = z 0 2 = 0. Channel z 0 is fresh, and its relation to future generations depends on how it evolves in the process w β ← BinSucc ← z 0 . But by the definition of type bin, no matter how z 0 : bin evolves to some z η in process BinSucc, it will never be the case that z η : ctr. In other words, the type ctr is not visible from bin, and for any generation η, channel z η does not send or receive a ctr unfolding message. So at this recursive call the value of z η 1 no longer matters, and we may safely put z 0 1 = w β 1 . This improved version of the algorithm recognizes both recursive calls as valid. In the following definition we capture the idea of visibility from a type more formally.
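The visibility idea amounts to plain graph reachability over the signature. The sketch below is our own illustration (the dictionary encoding of Σ 5 is ours): priority i is visible from type t iff a type variable with priority i is reachable from t through the signature's definitions.

```python
# Visibility as reachability in the signature's dependency graph (a sketch).

SIGMA5 = {                   # type variables occurring in each definition
    "ctr": ["ctr", "bin"],   # ctr = nu &{inc : ctr, val : bin}
    "bin": ["bin"],          # bin = mu +{b0 : bin, b1 : bin, $ : 1}
}
PRIORITY = {"ctr": 1, "bin": 2}

def visible(t, sig=SIGMA5):
    """Set of priorities of type variables reachable from t (including t)."""
    seen, stack = set(), [t]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(sig[u])
    return {PRIORITY[u] for u in seen}

print(visible("bin"))   # {2}: ctr (priority 1) is not visible from bin, so
                        # a fresh channel of type bin may inherit its
                        # context's value at priority 1 instead of infinity
print(visible("ctr"))   # {1, 2}
```

For NumBits this is exactly why z 0 1 = w β 1 is safe: priority 1 is outside visible(bin), so no generation of z can ever exchange a ctr unfolding message.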

Typing Rules for Session-Typed Processes with Channel Ordering
In this section we introduce inference rules for session-typed processes corresponding to derivations in subsingleton logic with fixed points. This is a refinement of the inference rules in Figure 2 to account for the channel generations and orderings introduced in previous sections. The judgments are of the form x α : ω ⊢ Ω P :: (y β : A), where P is a process, and x α (the α-th generation of channel x) and y β (the β-th generation of channel y) are its left and right channels of types ω and A, respectively. The order relation between the generations of the left and right channels, indexed by the priorities of their types, is built step by step in Ω when reading the rules from the conclusion to the premises. We only consider judgments in which all variables x α′ occurring in Ω are such that α′ ≤ α and, similarly, for y β′ in Ω we have β′ ≤ β. This presupposition guarantees that if we construct a derivation bottom-up, any future generations of x and y are fresh and not yet constrained by Ω. All our rules, again read bottom-up, preserve this property.
We fix a signature Σ as in Definition 2.1 and a finite set of process definitions V over Σ as in Definition 3.1, and define x α : ω ⊢ Ω P :: (y β : A) with the rules in Figure 3. To preserve freshness of channels and their future generations in Ω, the channel introduced by the Cut rule must be distinct from any variable mentioned in Ω. Similar to its underlying sequent calculus in Section 3, this system is infinitary, i.e., an infinite derivation may be produced for a given program. However, we can remove the first premise from the Def rule and check typing for each process definition in V separately.
Programs derived in this system are all well-typed, but not necessarily valid. It is, however, the basis for our finitary condition in Section 10 and in Section 11 where we prove that local validity is stricter than Fortier and Santocanale's guard condition.

A Local Validity Condition
In Sections 4 to 7 we developed, using several examples, an algorithm for identifying valid programs. We postponed presenting the full algorithm, based on the inference rules of Section 9, to this section. We reserve for the next section our main result that the programs accepted by this algorithm satisfy the guard condition introduced by Fortier and Santocanale [FS13].
The condition checked by our algorithm is local in the sense that we check the validity of each process definition in a program separately. The algorithm works on sequents of the form ū γ , X, v δ ; z̄ α : ω ⊢ Ω,⊂ P :: (w β : C), where ū γ is the left channel of the process the algorithm started with and can be either empty or u γ . Similarly, v δ is the right channel of the process the algorithm started with (which cannot be empty), and X is the last process variable a definition rule has been applied to (reading the rules bottom-up). Again, in this judgment the (in)equalities in Ω can only relate variables z and w from earlier generations, to guarantee freshness of later generations. Generally speaking, when the analysis of the program starts with ū γ : ω ⊢ v δ ← X ← ū γ :: (v δ : B), a snapshot of the channels ū γ and v δ and the process variable X is saved. Whenever the process reaches a call z̄ α : _ ⊢ w β ← Y ← z̄ α :: (w β : _), the algorithm compares X, list(ū γ , v δ ) and Y, list(z̄ α , w β ) using the (⊂, <) order to determine whether the call is valid.
Definition 10.1. A program P = ⟨V, S⟩ over signature Σ, together with a fixed order ⊂ satisfying the properties in Definition 7.2, is locally valid iff for every z̄ : A ⊢ X = P z̄,w :: (w : C) ∈ V there is a derivation for z̄ 0 , X, w 0 ; z̄ 0 : ω ⊢ ∅,⊂ P z̄ 0 ,w 0 :: (w 0 : C) in the rule system of Figure 4. This set of rules, obtained by substituting the Def rule of Figure 3 with the Call rule of Figure 4, is finitary, so it can be directly interpreted as an algorithm. Again, to guarantee freshness of future generations of channels, the channel introduced by the Cut rule must be distinct from the other variables mentioned in Ω.
The starting point of the algorithm can be of the more general form z̄ α , X, w β ; z̄ α : ω ⊢ Ω,⊂ P z̄ α ,w β :: (w β : C), as long as z̄ α+i and w β+i do not occur in Ω for any i > 0. In both the inference rules and the algorithm it is implicitly assumed that the next generation of a channel introduced in the µ/ν-R/L rules does not occur in Ω. With this condition we can convert a proof of z̄ 0 , X, w 0 ; z̄ 0 : ω ⊢ ∅,⊂ P z̄ 0 ,w 0 :: (w 0 : C) into a proof of z̄ α , X, w β ; z̄ α : ω ⊢ Ω,⊂ P z̄ α ,w β :: (w β : C) by rewriting each z̄ γ and w δ in the proof as z̄ γ+α and w δ+β , respectively. This simple proposition is used in the next section, where we prove that every locally valid process accepted by our algorithm is a valid proof according to the FS guard condition.
Proof. By substitution, as explained above.
To show the algorithm in action, we run it over the program P 3 := ⟨{Copy}, Copy⟩ previously defined in Example 4.3.
In this example, following Definition 7.2, the programmer has to define Copy ⊆ 1 Copy, since the only priority in Σ is 1. To verify local validity of this program we run our algorithm over the definition of Copy and show the interesting branch of the constructed derivation. Note that at the meta-level the generations on channel names and the set Ω are both used for bookkeeping purposes. This example shows that, using the rules of Figure 4 as an algorithm, we can annotate the given definition of a process variable with generations and the set Ω.

Local Validity and Guard Conditions
Fortier and Santocanale [FS13] introduced a guard condition for identifying valid circular proofs among all infinite pre-proofs in singleton logic with fixed points. They showed that pre-proofs satisfying this condition, which is based on the definitions of left µ- and right ν-traces, enjoy the cut elimination property. In this section, we translate their guard condition into the context of session-typed concurrency and generalize it to subsingleton logic. It is straightforward to show that the cut elimination property holds for a proof in subsingleton logic if it satisfies the generalized version of the guard condition: the key idea is that cut reductions for individual rules stay untouched in subsingleton logic, and the rules for the new constant 1 only provide more options for the cut reduction algorithm to terminate. We prove that all locally valid programs in the session-typed system, as determined by the algorithm in Section 10, also satisfy the guard condition. We conclude that our algorithm imposes a stricter but local version of validity on the session-typed programs corresponding to circular pre-proofs.
Here we adapt the definitions of left and right traceable paths, of left µ- and right ν-traces, and then of validity to our session type system.
Definition 11.1. Consider a path P in the (infinite) typing derivation of a program Q = ⟨V, S⟩ defined over a signature Σ, from a sequent x γ : ω ⊢ Ω Q′ :: (y δ : C′) up to a sequent z α : ω ⊢ Ω Q″ :: (w β : C). Moreover, P is called a cycle over program Q if, for some X ∈ V , we have Q″ = w β ← X ← z α and Q′ = y δ ← X ← x γ .
Definition 11.2. A path P in the (infinite) typing derivation of a program Q = ⟨V, S⟩ defined over signature Σ is a left µ-trace if (i) it is left-traceable, (ii) a left fixed point rule is applied in it, and (iii) the highest priority i ≤ n among its left fixed point rules is such that the fixed point with priority i is a µ. Dually, P is a right ν-trace if (i) it is right-traceable, (ii) a right fixed point rule is applied in it, and (iii) the highest priority i ≤ n among its right fixed point rules is such that the fixed point with priority i is a ν.
Definition 11.3 (FS guard condition on cycles). A program Q = ⟨V, S⟩ defined over signature Σ satisfies the FS guard condition if every cycle C over Q, from x γ : ω ⊢ Ω y δ ← X ← x γ :: (y δ : C′) to z α : ω ⊢ Ω w β ← X ← z α :: (w β : C), is either a left µ-trace or a right ν-trace. Similarly, we say a single cycle C satisfies the guard condition if it is either a left µ-trace or a right ν-trace.
Definitions 11.1-11.3 are equivalent to Fortier and Santocanale's definitions of the same concepts, restated in our notation. In particular, Definition 11.3 is equivalent to the FS guard condition on cycles. For the intended use of infinite derivations in this paper, in which V is a finite set of process definitions, the FS guard condition on infinite paths is equivalent to their condition on cycles.
Here, we can observe that being a left µ-trace coincides with having the relation x 1 1 < x 0 1 between the left channels, and not being a right ν-trace coincides with not having the relation y 1 1 < y 0 1 for the right channels. We can generalize this observation to every path and every signature with n priorities.
Lemma 11.6. For a path P as in Theorem 11.5, the following hold.
(a) For every i ∈ c(ω′) whose fixed point is a µ, if x γ i ≤ Ω z α i , then x = z and i ∈ c(ω).
(b) For every i < n, if x γ i < Ω z α i , then i ∈ c(ω) and a µL rule with priority i is applied on P.
(c) For every c ≤ n whose fixed point is a ν, if x γ c ≤ Ω z α c , then no νL rule with priority c is applied on P.
Lemma 11.7. Dually, for a path P as in Theorem 11.5, the following hold.
(a) For every i ∈ c(ω′) whose fixed point is a ν, if y δ i ≤ Ω w β i , then y = w and i ∈ c(ω).
(b) If y δ i < Ω w β i , then i ∈ c(ω) and a νR rule with priority i is applied on P.
(c) For every c ≤ n whose fixed point is a µ, if y δ c ≤ Ω w β c , then no µR rule with priority c is applied on P.
Proof. Dual to the proof of Lemma 11.6 given in Appendix A.
To illustrate Theorem 11.5, we present a few additional examples. The reader may skip these examples to move directly to the main results (Lemma 11.10 and Theorem 11.11).
Define a program P 9 := ⟨{Succ, Copy, SuccCopy}, SuccCopy⟩ over the signature Σ 1 , using the process w : nat ⊢ Copy :: (y : nat) and two other processes x : nat ⊢ Succ :: (w : nat) and x : nat ⊢ SuccCopy :: (y : nat). The processes are defined as
w ← Succ ← x = Rw.µ nat ; Rw.s; w ← x
y ← Copy ← w = case Lw (µ nat ⇒ case Lw ( s ⇒ Ry.µ nat ; Ry.s; y ← Copy ← w | z ⇒ Ry.µ nat ; Ry.z; wait Lw; close Ry))
y ← SuccCopy ← x = w ← Succ ← x; y ← Copy ← w
Process SuccCopy spawns a new process Succ and continues as Copy. The Succ process prepends an s label to the beginning of the finite string representing a natural number on its left-hand side and then forwards the string as a whole to the right. Copy receives this finite string representing a natural number and forwards it to the right, label by label.
The only recursive process in this program is Copy. So program P 9 , itself, does not have a further interesting point to discuss. We consider a bogus version of this program in Example 11.8 that provides further intuition for Theorem 11.5.
Example 11.8. Define program P 10 := ⟨{Succ, BogusCopy, SuccCopy}, SuccCopy⟩ over the signature Σ 1 := nat = 1 µ ⊕{z : 1, s : nat}. The processes x : nat ⊢ Succ :: (w : nat), w : nat ⊢ BogusCopy :: (y : nat), and x : nat ⊢ SuccCopy :: (y : nat) are defined as
w ← Succ ← x = Rw.µ nat ; Rw.s; w ← x
y ← BogusCopy ← w = case Lw (µ nat ⇒ case Lw (s ⇒ Ry.µ nat ; Ry.s; y ← SuccCopy ← w | z ⇒ Ry.µ nat ; Ry.z; wait Lw; close Ry))
y ← SuccCopy ← x = w ← Succ ← x; y ← BogusCopy ← w
Program P 10 is a non-reactive bogus program, since BogusCopy, instead of calling itself recursively, calls SuccCopy. At the very beginning, SuccCopy spawns Succ and continues with BogusCopy over a fresh channel w. Succ then sends a fixed point unfolding message and a successor label via w to the right, while BogusCopy receives the two messages just sent by Succ through w and calls SuccCopy recursively again. This loop continues forever, without any messages being received from the outside. The first several steps of the derivation of x 0 : nat ⊢ ∅ SuccCopy :: (y 0 : nat) in our inference system (Section 9) are given below. By Definition 11.1, this path is right traceable but not left traceable, and by Definition 11.2 it is neither a right ν-trace nor a left µ-trace: (1) No negative fixed point unfolding message is received from the right, and y 0 does not evolve to a new generation that has a smaller value at its highest priority than y 0 1 . In other words, the relation y 1 1 < y 0 1 does not hold, since no negative fixed point rule has been applied on the right channel.
(2) The positive fixed point unfolding message received from the left arrives through the channel w 0 , a fresh channel created after SuccCopy spawns the process Succ. Although w 1 1 < w 0 1 , since x 0 1 is incomparable to w 0 1 , the relation w 1 1 < x 0 1 does not hold; this path is not even left-traceable.
As another example, consider the program P 6 = ⟨{Ping, Pong, PingPong}, PingPong⟩ over the signature Σ 4 as defined in Example 6.1. We discussed in Section 6 that this program is not accepted by our algorithm as locally valid.
The first several steps of the proof of x 0 : nat ⊢ ∅ PingPong :: (y 0 : nat) in our inference system (Section 9) are given below (with some abbreviations).

where A = {w 1 1 = w 0 1 , w 1 2 < w 0 2 , w 1 3 = w 0 3 } and B = {w 1 1 = w 0 1 , w 2 2 = w 1 2 < w 0 2 , w 2 3 = w 1 3 = w 0 3 }. The cycle between the processes x 0 : nat ⊢ ∅ w 0 ← Ping ← x 0 :: (w 0 : astream) and x 0 : nat ⊢ B w 2 ← Ping ← x 0 :: (w 2 : astream) is neither a left µ-trace nor a right ν-trace: (1) No fixed point unfolding message is received or sent through the left channels in this path, and thus the cycle is not a left µ-trace. (2) On the right, fixed point unfolding messages are both sent and received: (i) w 0 receives an unfolding message for a negative fixed point with priority 2 and evolves to w 1 , and then later (ii) w 1 sends an unfolding message for a positive fixed point with priority 1 and evolves to w 2 . But the positive fixed point has a higher priority than the negative fixed point, and thus this path is not a right ν-trace either. This reasoning is also reflected in our observation about the list of channels in Theorem 11.5: when w 0 first evolves to w 1 by receiving a message in (i), the relations w 1 1 = w 0 1 , w 1 2 < w 0 2 , and w 1 3 = w 0 3 are recorded; later, when w 1 evolves to w 2 by sending a message in (ii), the relations w 2 2 = w 1 2 and w 2 3 = w 1 3 are added to the set. This means that w 2 1 , the first element of the list [w 2 ], remains incomparable to w 0 1 , and thus the lexicographic comparison fails.
We are now ready to state our main theorem, which proves that the local validity algorithm introduced in Section 10 is stricter than the FS guard condition. Since the guard condition is defined over an infinitary system, we first need to map our local condition into the infinitary calculus given in Section 3.
Proof. This is a special case of Lemma A.4, proved in Appendix A.
Theorem 11.11. A locally valid program satisfies the FS guard condition.
Therefore, there is an i ≤ n such that either (1) the i-th priority is that of a least fixed point (µ), x^γ_i < z^α_i, and x^γ_l = z^α_l for every l < i, in which case the antecedents x̄ = x and z̄ = z are non-empty; or (2) the i-th priority is that of a greatest fixed point (ν), y^δ_i < w^β_i, and y^δ_l = w^β_l for every l < i. In the first case, by part (b) of Lemma 11.6, a µL rule with priority i ∈ c(ω) is applied on C. By part (a) of the same lemma, x = z, and by its part (c), no νL rule with priority c < i is applied on C. Therefore, C is a left µ-trace. In the second case, by part (b) of Lemma 11.7, a νR rule with priority i ∈ c(ω) is applied on C. By part (a) of the same lemma, y = w, and by its part (c), no µR rule with priority c < i is applied on C. Thus, C is a right ν-trace.
Checking a disjunctive condition for each cycle implies that the FS guard condition cannot simply analyze each path from the beginning of a definition to a call site in isolation and then compose the results; instead, it must unfold the definitions and examine every cycle, possibly composed of smaller individual cycles, in the infinitary derivation separately. In other words, the FS guard condition may accept two individual cycles but reject their combination.
In our algorithm, however, we form a transitive condition by merging the lists of left and right channels, e.g. [x^γ] and [y^δ] respectively, into a single list list(x^γ, y^δ). The values in list(x^γ, y^δ) from Definition 6.4 are still recorded in their order of priorities, but for the same priority the value corresponding to receiving a message precedes the one corresponding to sending a message. As described in Definition 7.4, we merge this list with process variables to check all immediate calls, even those that do not form a cycle in the sense of the FS guard condition (that is, when process X calls a different process Y ≠ X).
Transitivity of our validity condition is the key to establishing its locality. Our algorithm only checks the condition for the immediate calls that a process makes. Since the condition is transitive, it then also holds for all possible non-immediate recursive calls, including any combination of the immediate calls. As a result, we do not need to search for every possible cycle in the infinitary derivation.
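The merge order described above (entries sorted by priority, with receives preceding sends at equal priority) can be sketched as follows. The names and the representation of the per-priority entries are hypothetical; this is not the paper's list(x^γ, y^δ) construction itself, only a model of its ordering.

```python
def merged_list(recv, send):
    """Merge the per-priority entries contributed by received messages
    (recv) and by sent messages (send) into one list ordered by
    priority; at equal priority, the receive entry comes first."""
    out = []
    for i in sorted(set(recv) | set(send)):
        if i in recv:   # receiving takes precedence at the same priority
            out.append(('recv', i, recv[i]))
        if i in send:
            out.append(('send', i, send[i]))
    return out

merged = merged_list({1: 'x1', 2: 'x2'}, {2: 'y2', 3: 'y3'})
```

Here `merged` interleaves the two sources at priority 2 with the receive entry first, which is the property the transitivity argument relies on.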
Remark 11.12. We briefly analyze the asymptotic complexity of our algorithm. Let n be the number of priorities and s the size of the signature, where we add in the sizes of all types A appearing in applications of the Cut rule. In time O(n · s) we can compute a table for looking up whether i ∈ c(A) for all priorities i and types A appearing in cuts. Now let m be the size of the program (not counting the signature). We traverse each process definition just once, maintaining a list of relations between the current and original channel pairs for each priority. We update at most 2n entries of the list at each step and compare at most 2n entries at each Call rule. Furthermore, for each Cut rule we perform a constant-time table lookup to determine whether i ∈ c(A) for each priority i. Therefore, analyzing the process definitions takes time O(m · n).
Putting it all together, the time complexity is bounded by O(m · n + n · s) = O(n · (m + s)). In practice the number of priorities n is a small constant, so validity checking is linear in the total input, which consists of the signature and the process definitions. As far as we are aware, the best known upper bound for the complexity of the FS guard condition is PSPACE [Dou17]. It is also interesting to note that the complexity of type-checking itself is bounded below by O(m + s^2) since, in the worst case, we need to compute equality between each pair of types. That is, validity checking is faster than type-checking.
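The O(n · s) precomputation step can be sketched as a simple table build. The names (`priority_table`, `occurs`) and the example signature are hypothetical; `occurs` stands in for a single membership test i ∈ c(A), which the table then makes a constant-time lookup during the O(m · n) traversal.

```python
from itertools import product

def priority_table(priorities, cut_types, occurs):
    """Precompute, for every priority i and every type A appearing in
    an application of the Cut rule, whether i ∈ c(A). Building the
    table costs O(n * s); each later lookup is O(1)."""
    return {(i, A): occurs(i, A) for i, A in product(priorities, cut_types)}

# Hypothetical signature: priority 1 occurs in 'astream', priority 2 in 'nat'.
table = priority_table([1, 2], ['nat', 'astream'],
                       lambda i, A: (i, A) in {(1, 'astream'), (2, 'nat')})
```

During the traversal of a definition, a Cut on type A then costs only the lookups `table[(i, A)]` for each priority i, matching the constant-time claim above.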
Another advantage of locality is that our algorithm checks each process definition independently of the rest of the program: we can safely reuse a previously checked, locally valid process in other programs defined over the same signature and order ⊂ without verifying its local validity again.

Computational Meta-theory
Fortier and Santocanale [FS13] defined a function Treat as part of their cut elimination algorithm. They proved that this function terminates on a list of pre-proofs fused by consecutive cuts if all of them satisfy their guard condition. In our system, the function Treat corresponds to computation on a configuration of processes. In this section we first show that the usual preservation and progress theorems hold even if a program does not satisfy the validity condition. Then we use Fortier and Santocanale's result to prove a stronger compositional progress property for (locally) valid programs.
In Section 3, we introduced process configurations C as a list of processes connected by the associative, non-commutative parallel composition operator |_x with unit (·). The type-checking judgment x̄ : ω ⊨ C :: (y : B) for configurations is defined by the following rules:

  ----------------------
  x : A ⊨ (·) :: (x : A)

  x̄ : ω ⊢ P :: (y : B)
  ---------------------
  x̄ : ω ⊨ P :: (y : B)

  x̄ : ω ⊨ C1 :: (z : A)    z : A ⊨ C2 :: (y : B)
  -----------------------------------------------
  x̄ : ω ⊨ C1 |_z C2 :: (y : B)

A configuration can be read as a list of processes connected by consecutive cuts. Alternatively, considering C1 and C2 as two processes, the configuration C1 |_z C2 can be read as their composition by a cut (z ← C1; C2). In Section 3, we defined an operational semantics on configurations using transition rules. These computational transitions can be interpreted as cut reductions, called "internal operations" by Fortier and Santocanale. The usual preservation theorem ensures that the type of a configuration is preserved during computation [DP16].

Theorem 12.1. (Preservation) If x̄ : ω ⊨ C :: (y : B) and C makes a transition to C′, then x̄ : ω ⊨ C′ :: (y : B).

Proof. This property follows directly from the correctness of the cut reduction steps.
The usual progress property, proved below, ensures that a computation either makes a transition or attempts to communicate with an external process.
Theorem 12.2. (Progress) If x̄ : ω ⊨ C :: (y : A), then either (1) C can make a transition, (2) C = (·) is empty, or (3) C attempts to communicate either to the left or to the right.

In the presence of (mutual) recursion, this progress property is not strong enough to ensure that a program does not get stuck in an infinite internal loop. Since our local validity condition implies the FS guard condition, we can use Fortier and Santocanale's results to obtain a stronger version of the progress theorem for valid programs.
Theorem 12.3. (Strong Progress) A configuration x̄ : ω ⊨ C :: (y : A) of (locally) valid processes satisfies the progress property. Furthermore, after a finite number of steps, either (1) C = (·) is empty, or (2) C attempts to communicate to the left or right.
Proof. There is a correspondence between the internal operations of the Treat function and the computational transitions introduced in Section 3. The only point of difference is the extra computation rule we introduced for the constant 1. Fortier and Santocanale's proof of termination of Treat remains intact after extending its primitive operations with a reduction rule for the constant 1, since this reduction step only introduces a new way of closing a process in the configuration. Under this correspondence, termination of the function Treat on valid proofs implies the strong progress property for valid programs.
As a corollary of Theorem 12.3, the computation of a closed valid program P = ⟨V, S⟩ with · ⊨ S :: (y : 1) always terminates by closing the channel y (which follows by inversion on the typing derivation).
We conclude this section by briefly revisiting sources of invalidity in computation. In Example 4.1 we saw that the process Loop is not valid, even though its proof is cut-free. Its computation satisfies the strong progress property, as it attempts to communicate with its right side in a finite number of steps. However, its communication with the left and right sides of the configuration consists solely of sending messages. Composing Loop with any process y : nat ⊢ P :: (z : 1) results in exchanging an infinite number of messages between them. For instance, for Block, introduced in Example 4.2, the infinite computation of · ⊨ (y ← Loop) |_y (z ← Block ← y) :: (z : 1) without communication along z can be depicted as follows:

  (y ← Loop) |_y (z ← Block ← y)
  → (Ry.µ_nat; Ry.s; y ← Loop) |_y (z ← Block ← y)
  → (Ry.µ_nat; Ry.s; y ← Loop) |_y case Ly (µ_nat ⇒ case Ly (···))
  → (Ry.s; y ← Loop) |_y case Ly (s ⇒ z ← Block ← y | z ⇒ wait Ly; close Rz)
  → (y ← Loop) |_y (z ← Block ← y)
  → ···

In this example, the strong progress property of computation is violated: the configuration never communicates to the left or right, and a never-ending series of internal communications takes place. This internal loop is the result of the infinite number of unfolding messages sent by Loop without any unfolding message of higher priority being received by it. In other words, it is the result of Loop not being valid.
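The divergent composition above can be modeled loosely in Python: Loop as an infinite producer of unfolding messages on the internal channel, with Block absorbing every message and never communicating on its right channel z. This is only an illustrative simulation of the message exchange, not the paper's operational semantics.

```python
from itertools import islice

def loop_process():
    """Loop forever sends the unfolding message mu_nat followed by s
    on its right channel (cf. the trace above)."""
    while True:
        yield 'mu_nat'
        yield 's'

def run_internal(producer, n):
    """Run n internal exchanges on the channel shared with Block; every
    message is absorbed internally and nothing is ever sent on z."""
    return list(islice(producer, n))

msgs = run_internal(loop_process(), 6)
```

Every observed step is an internal communication; truncating at any n shows the same alternating stream, mirroring the absence of external progress.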

Incompleteness of Validity Conditions
In this section we provide a straightforward example of a program with the strong progress property that our algorithm cannot identify as valid. Intuitively, this program preserves the strong progress property even after being composed with other valid programs. We show that this example does not satisfy the FS guard condition either.

Example 13.1. Define the signature

  Σ5 := ctr =^1_ν &{inc : ctr, val : bin},  bin =^2_µ ⊕{b0 : bin, b1 : bin, $ : 1}

and the program P11 = ⟨{Bit0Ctr, Bit1Ctr, Empty}, Empty⟩, where

  x : ctr ⊢ y ← Bit0Ctr ← x :: (y : ctr)
  x : ctr ⊢ y ← Bit1Ctr ← x :: (y : ctr)
  · ⊢ y ← Empty :: (y : ctr)

In this example we implement a counter slightly differently from Example 8.1. The two processes Bit0Ctr and Bit1Ctr each hold one bit (b0 and b1, respectively), and the counter Empty signals the end of the chain of counter processes. The program begins with an empty counter (representing the value 0). If a value is requested, it sends $ to the right; if an increment is requested, it adds the counter Bit1Ctr holding the bit b1 to the chain of counters. If another increment is then requested, Bit1Ctr sends an increment (inc) message to the counter on its left (implementing the carry bit) and calls Bit0Ctr. If Bit0Ctr receives an increment from the right, it calls Bit1Ctr recursively.
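The behavior of the counter chain can be sketched in ordinary Python, modeling the chain as a linked structure with Empty at the left end; the carry on increment travels left, as described above. This is an assumed translation for illustration, not the session-typed source, and the bit order returned by `val` (most significant first) is a modeling choice.

```python
class Empty:
    """End of the chain of counter processes (value 0)."""
    def inc(self):
        return Bit1(Empty())      # add a Bit1 counter to the chain
    def val(self):
        return []                 # '$': end of the binary number

class Bit0:
    def __init__(self, left):
        self.left = left
    def inc(self):
        return Bit1(self.left)    # 0 + 1 = 1, no carry
    def val(self):
        return self.left.val() + ['b0']

class Bit1:
    def __init__(self, left):
        self.left = left
    def inc(self):
        # 1 + 1 = 0 with carry: send inc to the counter on the left
        return Bit0(self.left.inc())
    def val(self):
        return self.left.val() + ['b1']

c = Empty()
for _ in range(3):
    c = c.inc()
bits = c.val()   # three increments: binary 11
```

After two increments the chain is Bit0 with a Bit1 to its left (binary 10), matching the carry behavior of Bit1Ctr calling Bit0Ctr described above.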
All (mutually) recursive calls in this program are recognized as valid by our algorithm, except the one in which Empty calls itself. In this recursive call, y^β ← Empty ← · calls w^0 ← Empty ← ·, where w is the fresh channel it shares with y^{β+1} ← Bit1Ctr ← w^0. The number of increment unfolding messages that Bit1Ctr can send along channel w^0 is always less than or equal to the number of increment unfolding messages it receives along channel y^{β+1}. This implies that the number of messages w^0 ← Empty ← · may receive along channel w^0 is strictly less than the number of messages received by any process along channel y^β. There can be no infinite loop in the program without an unfolding message being received from the right. Indeed, Fortier and Santocanale's cut elimination for the cut corresponding to the composition Empty |_w Bit1Ctr locally terminates. Furthermore, since no valid program defined over the same signature can send infinitely many increment messages to the left, P11 composed with any other valid program satisfies strong progress. This example is also a negative example for the FS guard condition: the path between y^β ← Empty ← · and w^0 ← Empty ← · in the Empty process is neither left traceable nor right traceable, since w ≠ y. By Definition 11.3 it is therefore not a valid cycle.
Example 13.1 shows that neither our algorithm nor the FS guard condition is complete. In fact, using Theorem 12.3 we can prove that no effective procedure, including our algorithm, can recognize a maximal set Ξ of programs with the strong progress property that is closed under composition.
Theorem 13.2. It is undecidable to recognize a maximal set Ξ of session-typed programs in subsingleton logic with the strong progress property that is closed under composition.
Proof. Pfenning and DeYoung showed that any Turing machine can be represented as a process in subsingleton logic with equirecursive fixed points [DP16,Pfe16], which is easily embedded into our setting with isorecursive fixed points. This implies that a Turing machine halts on a given input if and only if the closed process representing it terminates. By the definition of strong progress, a closed process terminates if and only if it satisfies the strong progress property. Using this result, we reduce the halting problem to identifying closed programs P = ⟨V, S⟩ with · ⊨ S :: (x : 1) that satisfy strong progress. Note that a closed program satisfying strong progress is in every maximal set Ξ.

Concluding Remarks
Related work. The main inspiration for this paper is work by Fortier and Santocanale [FS13], who provided a validity condition for pre-proofs in singleton logic with least and greatest fixed points. They showed that valid circular proofs in this system enjoy cut elimination. Circular proofs in singleton logic are interpreted as winning strategies in parity games [San02a]. A winning strategy corresponds to an asynchronous protocol for deadlock-free communication between the players [Joy96]. The cut elimination result for circular proofs provides a basis for reasoning about these communication protocols and the related categorical concept of µ-bicomplete categories [San02c,San02d]. Although session types and game semantics both model the concept of message passing governed by a protocol [CY19], further research is needed to bridge the gap between the semantics of parity games and recursive session-typed processes. Also related is work by Baelde et al. [BDS16,Bae12], who similarly introduced a validity condition on pre-proofs in multiplicative-additive linear logic with fixed points and proved the cut-elimination property for valid derivations. Doumane [Dou17] proved that this condition can be decided in PSPACE by reducing it to the emptiness problem for Büchi automata. Nollet et al. [NST18] introduced a local polynomial-time algorithm for identifying a stricter version of Baelde's condition. At present, it is not clear to us how their algorithm would compare with ours on the subsingleton fragment, due to the differences between classical and intuitionistic sequents [Lau18], different criteria for locality, and the prevailing computational interpretation of cut reduction as communicating processes in our work. Cyclic proofs have also been used for reasoning about imperative programs with recursive procedures [RB17].
While there are similarities (such as the use of cycles to capture recursion), their system extends separation logic and is therefore not based on an interpretation of cut reduction as computation. Reasoning in their logic therefore has a very different character from ours. Hyvernat [Hyv19] introduced a condition for identifying terminating and productive ML- or Haskell-like recursive programs with mixed inductive and coinductive data types. Although their language cannot be reduced to circular proofs, their condition is inspired by the FS guard condition and is shown to be decidable in PSPACE. Also related is a result by Das and Pous [DP18] on cyclic proofs in LKA, where the Kleene star is interpreted as a least fixed point and proofs are interpreted as transformers. They introduced a condition that ensures cut elimination of cyclic proofs and termination of the transformers. DeYoung and Pfenning [DP16] provided a computational interpretation of subsingleton logic with equirecursive fixed points and showed that cut reduction on circular pre-proofs in this system has the computational power of Turing machines. Their result implies the undecidability of recognizing all programs with the strong progress property.

Our contribution. In this paper we have established an extension of the Curry-Howard interpretation of intuitionistic linear logic by Caires et al. [CP10,CPT16] to include least and greatest fixed points that may mutually depend on each other in arbitrary ways, although restricted to the subsingleton fragment. The key is to interpret circular pre-proofs in subsingleton logic as mutually recursive processes, and to develop a locally checkable, compositional validity condition on such processes. We proved that our local condition implies Fortier and Santocanale's guard condition and therefore also implies cut elimination.
Analyzing this result in more detail leads to a computational strong progress property, which means that a valid program always terminates either in an empty configuration or in one attempting to communicate along external channels.

Implementation. We have implemented the algorithm introduced in Section 10 in SML; the implementation is publicly available [DDP22]. Currently, the implementation collects constraints and uses them to construct a suitable priority ordering over type variables and a ⊂ ordering over process variables if they exist, and rejects the program otherwise. However, this precomputation step is not local, and it makes it difficult to produce informative error messages. Our plan is to remove this step and rely on the programmer to supply the priority and ⊂ orderings. The implementation also supports an implicit syntax in which the fixed point unfolding messages are synthesized from the given communication patterns. Our experience with a range of programming examples shows that our local validity condition is surprisingly effective.

Future work. The main path for future work is to extend our results to full ordered or linear logic with fixed points and to address the shortcomings we learned about through the implementation. The main shortcoming of our local validity condition arises when, intuitively, we need to know that a program's output is "smaller" than its input. An interesting direction for future research is to generalize our local validity condition to handle more cuts by introducing a way to capture the relation between input and output sizes. Studying this generalization would also allow us to compare our results with the sized-type approach introduced by Abel and Pientka [AP16]. A first step in this general direction was taken by Sprenger and Dam [SD03], who justify cyclic inductive proofs using inflationary iteration.