Relational Parametricity for Computational Effects

According to Strachey, a polymorphic program is parametric if it applies a uniform algorithm independently of the type instantiations at which it is applied. The notion of relational parametricity, introduced by Reynolds, is one possible mathematical formulation of this idea. Relational parametricity provides a powerful tool for establishing data abstraction properties, proving equivalences of datatypes, and establishing equalities of programs. Such properties have been well studied in a pure functional setting. Many programs, however, exhibit computational effects, and are not accounted for by the standard theory of relational parametricity. In this paper, we develop a foundational framework for extending the notion of relational parametricity to programming languages with effects.


Introduction
The theory of relational parametricity, proposed by Reynolds [32], provides a powerful framework for establishing properties of polymorphic programs and their types. Such properties include the "theorems for free" of Wadler [41], universal properties for datatype encodings, and representation independence properties for abstract datatypes. These results are well established, see e.g. [29], for the pure Girard/Reynolds second-order λ-calculus (a.k.a. System F), which provides a concise yet remarkably powerful calculus of typed total functions.
The generalisation of relational parametricity to richer calculi can be problematic. Even the addition of recursion (hence nontermination) causes difficulties, since the fixed-point property of recursion is incompatible with certain consequences of relational parametricity as usually formulated.¹ This issue led Plotkin [28] to propose using second-order linear type theory as a framework for combining parametricity and recursion, an idea which has since been developed in an operational setting in [3] and in a denotational setting in [4]. One of the many good properties of the resulting theory of linear parametricity is that it supports a rich collection of polymorphic datatype encodings, with the desired universal properties following from relational parametricity.
The addition of recursion is just one possible extension of second-order λ-calculus. In [9], M. Hasegawa develops a syntactic account of relational parametricity for an orthogonal extension obtained by adding control operators (such an extension was first introduced by Parigot [24] for proof-theoretic purposes). An intriguing fact he observes is that, even though the technical frameworks for the two approaches are quite different, there are striking analogies between his "focal" parametricity and Plotkin's linear parametricity. Accordingly, Hasegawa poses the question of whether it is possible to find a unifying framework for relational parametricity that includes both his work and Plotkin's linear parametricity as special cases.
In this paper we provide a general theory of relational parametricity for computational effects, which answers Hasegawa's question in the affirmative. Not only does our approach generalise both Plotkin's and Hasegawa's, but it also applies across the full range of computational effects (e.g., nondeterminism, probabilistic choice, input/output, side effects, exceptions, etc.).
We build on the work of Moggi [22, 23], who proposed incorporating effects into type theory by adding a new type constructor for typing "computations" rather than values. For every type B, one has a new type !B (our non-standard notation is justified in Section 5) whose elements represent computations that (potentially) return values in B, and which (possibly) perform effects along the way. Semantically, ! is interpreted using a computational monad that encapsulates the relevant kinds of effect.
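To fix intuitions, here is a minimal Haskell sketch of this Moggi-style view, instantiated to the exceptions monad. It is our own illustration, not part of the paper's calculus, and the names `Exc`, `raise`, `retExc` and `bindExc` are ours: a computation of type !B is modelled as a value of type `T B` for a monad `T`.

```haskell
-- Computations over the exceptions monad: a computation of type !B
-- either returns a value in B or raises an exception along the way.
type Exc e b = Either e b

-- An effectful primitive: the computation that raises exception e.
raise :: e -> Exc e b
raise = Left

-- Moggi's unit: embed a value into a trivial, effect-free computation.
retExc :: b -> Exc e b
retExc = Right

-- Moggi's bind: sequence two computations, propagating exceptions.
bindExc :: Exc e a -> (a -> Exc e b) -> Exc e b
bindExc (Left e)  _ = Left e
bindExc (Right x) f = f x
```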
In order to obtain an account of relational parametricity for monads, one needs to solve a problem. Basic to relational parametricity is the idea of treating types as relations. Polymorphic functions are required to preserve derived relations under all possible instantiations of relations to type variables. To extend this to computational effects it is necessary to determine how the operation ! determines a relation !R ⊆ !A × !B from any relation R ⊆ A × B. That is, one needs a "relational lifting" of the ! operation. The literature contains two approaches to defining such a relational lifting for ! [8, 14] (although neither is presented in the context of polymorphism). Rather than choosing between these approaches, we instead side-step the issue in a surprising way: we show that, given the right choice of underlying type theory, ! is polymorphically definable in terms of more basic primitives whose relational interpretations are immediately apparent.
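As a concrete illustration of the relational-lifting problem (our own example, with our own names `Rel`, `liftMaybe`, `liftList`), here is how a relation R ⊆ A × B lifts to !R ⊆ !A × !B for two specific monads. The paper's point is that, for a generic monad, such a lifting must be derived in a principled way rather than chosen case by case.

```haskell
-- A binary relation between a and b, as its characteristic map.
type Rel a b = a -> b -> Bool

-- Canonical lifting for the Maybe monad: Nothing relates to Nothing,
-- and Just x relates to Just y exactly when R(x, y).
liftMaybe :: Rel a b -> Rel (Maybe a) (Maybe b)
liftMaybe _ Nothing  Nothing  = True
liftMaybe r (Just x) (Just y) = r x y
liftMaybe _ _        _        = False

-- Canonical lifting for the nondeterminism (list) monad: equal length
-- and pointwise related elements.
liftList :: Rel a b -> Rel [a] [b]
liftList r xs ys = length xs == length ys && and (zipWith r xs ys)
```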
Our type theory, which we call PE, is presented in Section 2. It is closely related to Levy's system of call-by-push-value (CBPV) [15], which subsumes call-by-name and call-by-value calculi with effects. Levy, following the lead of Filinski [5], emphasises the importance of having two general classes of types: value types, which classify "values", and computation types, which classify "computations". The intuitive difference between the two is that "a value is" and "a computation does". Technically, this intuition is supported by the vast range of semantic and operational interpretations of the framework, see [15].
With general computation types at hand, one can give the ! constructor the following polymorphic definition:

!B := ∀X. (B → X) → X ,    (1.1)

where importantly the type variable X ranges over computation types only. As we shall see, the type constructors used in the definition all have natural relational interpretations, and hence the defined ! operation inherits an induced relational lifting.
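The polymorphic definition of !B as ∀X. (B → X) → X, with X ranging over computation types, can be rendered directly in Haskell as a sketch (the names `Bang`, `toBang`, `fromBang` are ours): quantification over computation types becomes quantification over carriers equipped with a T-algebra structure map, passed explicitly. Evaluating at the free algebra recovers an ordinary monadic computation, hinting at the universal property !B ≅ T B proved later as Theorem 5.2.

```haskell
{-# LANGUAGE RankNTypes #-}

-- !B = forall X. (B -> X) -> X, with X restricted to computation types.
-- "X is a computation type" is modelled by passing an algebra structure
-- map alg :: t x -> x alongside the continuation.
newtype Bang t b = Bang { runBang :: forall x. (t x -> x) -> (b -> x) -> x }

-- Every monadic computation determines an element of !B ...
toBang :: Monad t => t b -> Bang t b
toBang c = Bang (\alg k -> alg (fmap k c))

-- ... and evaluating !B at the free algebra (t b, join) goes back.
fromBang :: Monad t => Bang t b -> t b
fromBang (Bang f) = f (>>= id) return   -- (>>= id) is join
```

Composing the two directions round-trips, which is the set-level shadow of the isomorphism the theorem establishes.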
In order to reason about parametricity in PE, we build a relationally parametric model of our calculus. Even in the case of ordinary second-order λ-calculus, the construction of parametric models is a nontrivial task. In our case, the interaction between value and computation types contributes significant additional complexity. To keep things as simple as possible, we work with a set-theoretic model, exploiting the fact that it is consistent to do so if one keeps to intuitionistic reasoning. The details are presented in Sections 3 and 4. As a first application of the model, we prove in Section 5 that the ! operator, as defined by (1.1) above, does indeed enjoy its expected universal property (Theorem 5.2).
In Section 7, we consider how to specialise the generic calculus PE to specific effects of interest. One useful form of specialisation recurs in many examples. It is common for effects to have associated operations that trigger and/or react to "effectful" behaviour. Typically, one would like to give an n-ary such operation the polymorphic type:

∀X. (!X)^n → !X .    (1.2)

For example, a binary nondeterministic choice operation forms a computation by choosing between two possible continuation computations. Also, the "handle" operation for an exception e can be viewed as a binary operation where handle_e(p, q) behaves like p unless p raises exception e, in which case q is executed. Since such operations are computed in a type-independent way, they are "parametric" in the informal sense of Strachey. We show that such operations are also parametric according to our theory of relational parametricity. This involves two technical developments, each of interest in its own right. The first relates to recent work by Plotkin and Power [31], in which they observe that many operations on effects are "algebraic operations" in the sense of universal algebra. As Theorem 7.1, we obtain that n-ary algebraic operations are in one-to-one correspondence with (parametric) elements of type:

∀X. X^n → X ,    (1.3)

where again X ranges over computation types. Thus algebraic operations can be incorporated within PE as constants of the above type (which is more informative than (1.2), since monadic types !B are always computation types).
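The content of the correspondence can be sketched in Haskell (names `GenOp2`, `orGeneric`, `orList` are ours): a binary generic operation of type ∀X. X × X → X, with X over computation types, may use only the algebra structure of X. Instantiating it at a free algebra yields the familiar operation on computations, of the type (1.2); instantiating it at any other algebra applies the same uniform recipe.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A parametric element of forall X. X x X -> X, with X over computation
-- types: it receives the algebra structure map of X explicitly.
type GenOp2 t = forall x. (t x -> x) -> x -> x -> x

-- Binary nondeterministic choice, defined uniformly in the algebra.
orGeneric :: GenOp2 []
orGeneric alg p q = alg [p, q]

-- At the free algebra ([a], concat), this is the usual choice operation
-- on computations, i.e. an inhabitant of the type (1.2).
orList :: [a] -> [a] -> [a]
orList = orGeneric concat
```

The same uniform algorithm also acts at non-free algebras: `orGeneric sum 3 4` applies choice at the list algebra `(Int, sum)`.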
Not all useful operations on effects arise as algebraic operations; e.g., exception handling is a counterexample. However, exception handling can be added to PE using a different strengthening of (1.2) for its type:

∀X. (!X)^n ⊸ !X .

This is indeed a strengthening of (1.2) because the lollipop can be understood as restricting the full function space to a subclass of "linear" (in a sense to be explained in the sequel) functions. The correctness of the above typing is again based on a general result (Theorem 7.2), which characterises the (parametric) elements of the above type in terms of a naturality condition.
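To see concretely why exception handling fails to be algebraic, here is our own counterexample over the exceptions monad `Either String` (names `handleE`, `commutes` are ours): algebraic operations commute with sequencing, op(p, q) >>= f = op(p >>= f, q >>= f), but handle does not.

```haskell
-- Exception handling: behaves like p unless p raises an exception,
-- in which case q is executed.
handleE :: Either String a -> Either String a -> Either String a
handleE (Left _) q = q
handleE p        _ = p

-- Check the algebraicity equation on particular arguments
-- (monomorphised to Int so the operation applies before and after f).
commutes :: (Either String Int -> Either String Int -> Either String Int)
         -> Either String Int -> Either String Int
         -> (Int -> Either String Int) -> Bool
commutes op p q f = (op p q >>= f) == (op (p >>= f) (q >>= f))
```

With p = Right 0, q = Right 1 and a continuation f that raises on 0, the left-hand side raises while the right-hand side recovers via q, so the equation fails.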
In Section 8, we outline the relationship between PE and other approaches to parametricity and effects. Plotkin's linear parametricity arises as a specialisation of PE valid in the special case of "commutative" monads. We also briefly discuss how Hasegawa's account of parametricity and control arises as a specialisation of PE. The details for this appear in a companion paper [20]. Finally, in Section 9, we discuss how the theory established in this paper might be applied to derive operational properties of polymorphic languages with effects.

A polymorphic calculus
We start by defining the type theory PE for polymorphism and effects. As discussed in the introduction, following [15], PE contains both value types A, B, C, . . . and computation types A, B, C, . . . . A central feature of our type theory is that we allow polymorphic type quantification over both value types and computation types. Accordingly, we use X, Y, Z, . . . to range over a countable set of value-type variables, and X, Y, Z, . . . to range over a disjoint countable set of computation-type variables. Value types and computation types are then mutually defined by:

(value types)       B, C ::= X | B → C | A ⊸ B | ∀X. B | ∀X. B | A
(computation types) A ::= X | B → A | ∀X. A | ∀X. A

Note that the computation types form a subcollection of the value types. The intuition here is that any (active) computation has a corresponding (static) value, its "thunk". In contrast to [15], we make this passage from computations to values syntactically invisible.
For semantic intuition, one can think of value types as representing sets, and of computation types as representing Eilenberg-Moore algebras for some computational monad on sets. Then B → C is the set of all functions from B to C. The special case B → A is a computation type because algebras are closed under powers, with the algebra structure defined pointwise. The type A ⊸ B represents the set of all algebra homomorphisms from A to B. In general, there is no natural algebra structure on this set, hence the type A ⊸ B is not a computation type. Finally, ∀X.B and ∀X.B are polymorphic types, with the polymorphism ranging over value types and computation types respectively. In either case, when B is a computation type, the polymorphic type is again a computation type. This is justified by Proposition 4.1 below.
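The closure of algebras under powers, used above to see that B → A is a computation type, can be sketched concretely for the list monad (names `freeAlg`, `powerAlg` are ours): the structure map on a function space is defined pointwise.

```haskell
-- The free algebra on a: carrier [a], structure map concat
-- (the multiplication of the list monad).
freeAlg :: [[a]] -> [a]
freeAlg = concat

-- Powers of algebras: given an algebra structure alg on x, the
-- function space b -> x carries the pointwise structure.
powerAlg :: ([x] -> x) -> ([b -> x] -> (b -> x))
powerAlg alg fs = \b -> alg (map ($ b) fs)
```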
Our types, which are based on function spaces and polymorphism, are not directly comparable with Levy's [15], which include sums and products. Nonetheless, we shall see in Section 8 that we can encode Levy's calculus within ours. Given this, our calculus extends Levy's with polymorphic types (cf. [15, §12.4]) and linear function types. The latter have a particularly nice explanation in terms of Levy's stack-based operational framework, within which a value of type A ⊸ B can be understood as a stack turning a computation of type A into a computation of type B, cf. [16]. In our system, linear function types will be used crucially in the computation-type encodings of Section 8.
Having computation types as special value types allows us to base our type system on a single judgement form: Γ | ∆ ⊢ t : B, where Γ and ∆ are disjoint contexts of variable typings subject to the following conditions: either (i) ∆ is empty, or (ii) B is a computation type and ∆ has the form x : A, where A is also a computation type. Thus the context ∆, which, following [6, 7], we call the stoup of the typing judgement, contains at most one typing assertion. When we want to be explicit about which of (i) or (ii) applies, we write Γ | − ⊢ t : B in the first case, and Γ | x : A ⊢ t : B in the second. In the first case, the intuitive interpretation of t is as an arbitrary function from the product of all types in Γ to the type B. In the second case, the interpretation of t is as a function from Γ × A to B that is an algebra homomorphism in its right-hand argument (i.e., for every fixed set of values for the Γ variables, the induced function from A to B is a homomorphism). From this interpretation, one sees why the stoup is restricted to computation types, and also why, when the stoup is nonempty, the result type is required to be a computation type.
The type system is presented in Figure 1. The side conditions refer to the set ftv(Γ) of free type variables in a context Γ, which is defined in the obvious way. Of course, the type rules are restricted to apply only when the premises satisfy the conditions on judgements imposed above. In such cases, the rule conclusions also satisfy these conditions.
The following simple lemmata state basic properties of the type system.
Lemma 2.1 (Unicity of types). For any Γ, ∆, t there is at most one type B such that Γ | ∆ ⊢ t : B.

Proof. Both statements are proved by induction over the depth of the typing derivation for t.
For example, consider the second statement in the case of an application t = u u′.

It is immediate that the type system for value types extends the standard second-order λ-calculus of Girard and Reynolds. Indeed, the typing rules for the relevant types (X, B → C and ∀X.B), when restricted to the case with empty stoup, are just the usual ones. It is well known that the second-order λ-calculus is powerful enough to encode many type constructors, including products, sums, inductive and coinductive types. We include those definitions we shall need later in Figure 2 (Definable value types). These encodings are all standard apart from the last one, which is existential quantification over computation types. The introduction and elimination constructs for the definable value types are encoded in most cases as in the second-order λ-calculus, but the presence of the stoup in PE means that in some cases a slight variation of these encodings must be used. A more detailed discussion of this issue appears in [21, Sec. 4].

Semantic setting
In the previous section, we appealed to semantic intuition by explaining value types as sets and computation types as algebras for a monad on sets. Unfortunately, this intuition runs into the technical problem that there are no set-theoretic models of polymorphism [33]. However, it was shown by Pitts [25] that set-theoretic models of polymorphism are possible if intuitionistic set theory is used rather than ordinary classical set theory. We shall exploit this by working with such an intuitionistic set-theoretic model. The advantage of this strategy is that the set-theoretic framework allows the development to concentrate entirely on the difficulties inherent in defining a suitable notion of relational parametricity, which are formidable in themselves, rather than on incidental details specific to a particular concrete model. Our approach results in no loss of generality. All denotational models of relational parametricity of which we are aware can be exhibited as full subcategories of models of intuitionistic set theory.
The intuitionistic set theory we use in this paper is Friedman's Intuitionistic Zermelo-Fraenkel set theory (IZF), which is the established intuitionistic counterpart of classical Zermelo-Fraenkel set theory (ZF). The theory IZF is axiomatized over intuitionistic first-order logic with equality. The axioms of IZF are the usual axioms of classical ZF, except that Collection is taken as an axiom schema instead of Replacement, and Foundation is formulated as a principle of transfinite induction over the membership relation. One reason for assuming the Collection schema is that it is strictly stronger than Replacement under intuitionistic logic. The reformulation of Foundation is required because the usual versions of the axiom imply the Law of Excluded Middle (LEM), whence classical logic. (The Axiom of Choice also implies LEM, and so is not considered.) The naturalness of IZF is underlined by the existence of a wide range of Kripke, sheaf and realizability models. For a detailed summary of the axioms and properties of IZF, see Ščedrov's survey article [36].
Henceforth in this paper, we use IZF as our mathematical meta-theory. To keep matters readable, we work informally within IZF, just as in ordinary mathematical practice one works informally in ZF. This approach is deliberately chosen to avoid cluttering the mathematics of the arguments with the formalities of the metatheory. (Nevertheless, when it is particularly helpful to do so, we shall occasionally remark on technical aspects of the formalization.) In fact, to the casual reader, it will not seem that much out of the ordinary is going on. Given the similarity between the axioms of ZF and IZF, reasoning within IZF feels very much like reasoning within classical ZF. Essentially, the only practical difference is that one has to adhere to the discipline of intuitionistic logic. The reader should try to be sensitive to this issue, because our adherence to intuitionistic logic is essential to the consistency of this paper. Nonetheless, since IZF is a subtheory of ZF, readers who are not familiar with the distinctions between intuitionistic and classical reasoning should anyway be able to follow the mathematical development. Such readers will, however, have to place their trust in the authors that the reasoning principles of IZF are never violated. For anyone who wishes to learn more about reasoning in intuitionistic set theory, a good starting place is [1].
As is common in set-theoretic reasoning, we shall sometimes have to work with collections of sets that are too "large" to themselves form a set; that is, with proper classes. When working with IZF (as with classical ZF), classes are accommodated by taking them as being represented by formulas: a formula φ with distinguished free variable x represents the class {x | φ}. In practice, it would be a nuisance always to have to work with concrete formulas φ. Instead, we shall typically say "let X be a class; then . . .", without specifying a particular formula φ that represents X. Such reasoning can be understood schematically as being valid relative to any possible formula instantiating X (and, in practice, there may be several different concrete instantiations that satisfy all assumed properties of X). Alternatively, it is possible to view the development as taking place in an extension of the language of set theory with a new unary predicate for every assumed class. This latter viewpoint is slightly more general, since, in models, it allows classes to be collections other than those specified by formulas in the language of set theory. Such mild added generality is natural if one interprets our reasoning in the categorical models of IZF given by algebraic set theory [13, 38], where the category of classes is the primary category of interest, and class predicates can be interpreted as objects in such a category. Whichever viewpoint one takes, whether or not the language is extended with class predicates, the underlying set theory remains "morally" unchanged, and we shall accordingly continue to refer to it as IZF.
We now begin the technical development within IZF. As discussed above, value types will be modelled as sets. However, it is known that it is not possible to interpret types in the second-order λ-calculus as arbitrary sets [26]. Thus we require a collection of special sets for interpreting types. Such special sets need to be closed under the set-theoretic operations used in the interpretation. Accordingly, we assume that we have a full subcategory C of the category Set of sets that satisfies:

(C1): For any set-indexed family (A_i)_{i∈I} of objects of C, the product Π_{i∈I} A_i is an object of C.
(C2): For any parallel pair of functions f, g : A → B between objects of C, the equalizer {x ∈ A | f(x) = g(x)} is an object of C.

In other words, the category C is small-complete with limits inherited from Set. Since function spaces are powers, for any set A and any B ∈ C, the function space B^A is in C, i.e., C is an exponential ideal of Set. In particular, C is cartesian closed. In addition, we require:

(C3): There exists a set C of objects of C such that every object of C is isomorphic to an element of C.
(C4): Every set isomorphic to an object of C is itself an object of C.

These two properties pull in opposite directions. Property (C3) requires that C enjoys a smallness constraint, which will be used to interpret polymorphism. Explicitly, (C3) says that C is weakly equivalent to its small full subcategory on the set of objects C. It is not, however, a small category itself, since (C4) forces C to have a proper class of objects.
In classical set theory, conditions (C1) and (C3) together imply that every object in C is either the empty set or a singleton set (cf. Freyd's argument that a weakly small category with small products is a preorder, see [17, Proposition V.2.3]). The reason we work in IZF is that this renders it consistent for there to be a nontrivial category satisfying all of (C1)-(C4). Indeed, it is consistent for the natural numbers to be an object of C. This consistency property derives from the work of Hyland et al. on small-complete small categories [10, 12]. However, our perspective is slightly different. Rather than assuming a small category that is complete only in a restricted technical sense [12, 34], our category C is assumed to be genuinely complete, but only weakly equivalent to a small category. This approach, which is taken from [35], offers several conveniences. For example, it allows us to assume (C4), which, as well as being a natural repleteness condition on C, makes it easy to show that sets we have defined explicitly are actually in C.
According to our informal explanation of computation types in Section 2, they can be interpreted as Eilenberg-Moore algebras for a monad T on C. For any such monad T, the category A of algebras comes with a forgetful functor U : A → C, and the following properties are satisfied.

(A1): U "weakly creates limits" in the following sense: for every diagram ∆ in A and limiting cone lim(U(∆)) of U(∆) in C, there exists a specified² limiting cone lim ∆ of ∆ in A such that U(lim ∆) = lim(U(∆)).
(A2): U reflects isomorphisms (i.e., if U f is an isomorphism in C then f is an isomorphism in A).
(A3): For objects A, B of A, the hom-set A(A, B) is an object of C.
(A4): There exists a set A of objects of A such that, for every A ∈ A, there exists B ∈ A with B isomorphic to A.

Proof. Properties (A1) and (A2) are standard; indeed the forgetful functor creates limits, which implies (A1). Property (A3) holds because A(A, B) arises as an equalizer in C of two evident functions (UB)^{UA} → (UB)^{T UA}. For property (A4), define A = {(A, ξ) | A ∈ C and ξ is an Eilenberg-Moore algebra structure on A}.
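Property (A3) describes A(A, B) as an equalizer: a function f : UA → UB is a homomorphism precisely when f ∘ α = β ∘ Tf. For the list monad, this condition can be tested pointwise on sample inputs; the following is our own finite approximation (the name `isHomOn` is ours).

```haskell
-- Test the homomorphism equation f . alpha == beta . map f on the given
-- samples, for list-monad algebras with structure maps alpha and beta.
isHomOn :: Eq b => ([a] -> a) -> ([b] -> b) -> (a -> b) -> [[a]] -> Bool
isHomOn alpha beta f = all (\xs -> f (alpha xs) == beta (map f xs))
```

For instance, doubling is a homomorphism from the algebra (Int, sum) to itself, while adding one is not.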
The reason for identifying (A1)-(A4) is that, in order to interpret the calculus of Section 2, it is sufficient to work with any category A and functor U : A → C satisfying (A1)-(A4) above.³ Henceforth, we assume this situation.
² By a specified limiting cone we mean that we are given a (class) function that maps any diagram ∆ and limiting cone for U(∆) to the required limiting cone in A.
³ In particular, the weakening of limit creation in (A1) is crucial to the application in [20].
It is convenient to maintain algebraic terminology for the category A. Thus we call the objects of A algebras. By (A1) and (A2), the functor U is faithful, thus we can identify the morphisms A(A, B) with special functions from U A to U B, which we call homomorphisms. We write A ⊸ B for the set of homomorphisms from A to B.

In Section 4 we interpret the type theory of Section 2 using U : A → C. In doing so, we formulate relational parametricity using binary relations in the categories C and A. As usual, these are defined as subobjects of products. First, let us review some basic properties of subobjects in C and A.
For every object A of C, we write Sub_C(A) for the set of subobjects of A in the category C. Since the inclusion C ֒→ Set preserves limits and hence monomorphisms, this is explicitly given by: Sub_C(A) = {B ⊆ A | B ∈ C}. Similarly, we write Sub_A(A) for the collection of subobjects of an algebra A in A. Because U preserves limits, every mono B ⊸ A in A is mapped by U to a mono U B → U A in C. Thus, for every A ∈ A, the functor U determines a function Sub_A(A) → Sub_C(U A). The lemma below shows that we can view subobjects of A in A as special subobjects of U A in C.

Lemma 3.2. The function Sub_A(A) → Sub_C(U A) preserves and reflects the ordering.
Proof. We show that it reflects the ordering. Suppose B ⊸ A and C ⊸ A represent subobjects of A such that the subobject represented by U B → U A is smaller than that represented by U C → U A. Then there exists an f such that the square (3.1) below is a pullback.
By (A1) there exists a pullback diagram in A mapped by U to (3.1), and by (A2) the map B′ ⊸ B is an isomorphism, so B ⊸ A represents a smaller subobject than C ⊸ A.
We say that A ⊆ U A carries a subalgebra if it represents a subobject in the image of the map Sub_A(A) → Sub_C(U A) induced by U. Thus Sub_A(A) is given explicitly by the subsets of U A that carry subalgebras.

Lemma 3.3. Axiom (A1) gives a way of picking representatives in A for subalgebras presented by subsets.

Proof. Suppose A ⊆ U A carries a subalgebra. Then the set (3.2) of monos in A mapped by U to subobjects isomorphic, as subobjects, to A ⊆ U A is non-empty. The set (3.2) indexes a diagram in A, and A is a limit in C of U applied to this diagram. Now, (A1) gives the specified mono projecting to A ⊆ U A.
We introduce notation for binary relations. For A ∈ C, we write ∆_A for the diagonal (identity) relation in Sub_C(A × A). Similarly, for A ∈ A, we write ∆_A for the diagonal relation on U A, which is indeed in Sub_A(A × A). For R ∈ Sub_C(A × B), we write R^op for its opposite relation in Sub_C(B × A).

To formulate relational parametricity, we require two specified collections of admissible relations: one on objects of C and one on objects of A. These are required to satisfy closure conditions (R1)-(R4), of which (R1) asserts that diagonal relations are admissible and (R2) that admissible relations are closed under inverse images along morphisms. (R1) and (R2) imply that graphs of functions are admissible, i.e., if f : A → B then the graph {(x, f(x)) | x ∈ A} is an admissible relation. Moreover, if R ⊆ A × B is any subset, then there exists a smallest admissible relation R• ∈ R_C(A, B) containing R, as we may take R• to be the intersection of all admissible relations containing R.
In many concrete models, all relations can be taken to be admissible.

Lemma 3.4. The collections Sub_C(A × B) and Sub_A(A × B) satisfy (R1)-(R4).

Proof. We just show that Sub_A(A × B) is closed under intersections. So suppose we are given a set (Q_i)_{i∈I} of subsets in Sub_A(A × B). We need to show that the subset ∩_i Q_i ⊆ U A × U B carries a subalgebra of A × B. Denote, for each i ∈ I, by q_i : Q′_i ⊸ A × B the mono in A above the inclusion Q_i ⊆ U A × U B, as specified by Lemma 3.3. Then the limit of the diagram given by the q_i, as weakly created by U, is a subalgebra of A × B with underlying subset ∩_i Q_i.

By a parametric model of PE we shall mean any category C satisfying (C1)-(C4), together with a category A and functor U : A → C satisfying (A1)-(A4), and collections R_C and R_A satisfying (R1)-(R4) above. The proposition below shows that every monad on C gives rise to a parametric model of PE. Thus the theory of relational parametricity for PE that we shall develop over such models is applicable to arbitrary computational monads.

Proposition 3.5. For any monad T on C, the Eilenberg-Moore category A with its forgetful functor U : A → C, together with the collections R_C(A, B) = Sub_C(A × B) and R_A(A, B) = Sub_A(A × B), forms a parametric model of PE.

Notice that the assumption, familiar from the literature on computational monads [22, 23], that the monad T is strong does not need to be included in the above result. This is for the simple reason that our set-theoretic setting renders all monads on C strong. For any monad T, one defines the strength t_{A,B} : A × T B → T(A × B) by t_{A,B}(x, c) = T⟨x, −⟩(c), where ⟨x, −⟩ : B → A × B maps y to (x, y). Moreover, this strength is unique because C has enough points [23, Proposition 3.4].
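The canonical strength t_{A,B}(x, c) = T⟨x, −⟩(c) translates directly to Haskell (the name `strength` is ours); as in our set-theoretic setting, it is definable uniformly, here for any functor:

```haskell
-- t_{A,B} : A x T B -> T (A x B), defined by t(x, c) = fmap <x,-> c,
-- where <x,-> maps y to (x, y).
strength :: Functor t => a -> t b -> t (a, b)
strength x = fmap (\y -> (x, y))
```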
Although Proposition 3.5 is a useful general result, we comment that some applications of PE require a different choice of model. For example, the application of PE to control in [20] makes crucial use of the permitted flexibility in the definition of model. Here, we briefly describe the steps taken in op. cit., in order to illustrate some of the variations of model construction available. The construction begins with a category C satisfying (C1)-(C4), together with a chosen object R of C. For technical reasons (see below), the object R is used to isolate the full subcategory C_R of R-replete objects in C, in the sense of [11]. Next, A together with U are obtained by building A as a certain carefully defined category equivalent to (C_R)^op, and U as a functor naturally isomorphic to R^(−). This situation satisfies (A1)-(A4). The interesting cases are: (A1), which holds by the way A and U are constructed; and (A2), which holds because we restricted A to the R-replete objects. Finally, whereas R_C(A, B) is defined to be Sub_C(A × B), it is necessary, for the application to parametricity for control, to define R_A(A, B) to be the subset of Sub_A(A × B) consisting of the ⊤⊤-closed relations, in the sense of Pitts [27] (see also [14]), as induced by the diagonal relation ∆_R on R. For full details of this construction, the reader is referred to [20].
One reason that the model construction outlined above departs from the form of model provided by Proposition 3.5 is that, although there is an underlying continuations monad R^(R^(−)) present, the category A is not in general equivalent to the category of algebras for this monad. The usefulness of such more general situations is already familiar from Levy's work on CBPV [15], where the natural adjunction model of control does not involve the Eilenberg-Moore category. One of the strengths of our axiomatic framework is that it is able to accommodate such models.
One of the drawbacks of our framework is that certain convolutions are sometimes necessary in order to construct a model satisfying the properties we require. For example, in the model of control outlined above (and described fully in [20]), awkward steps are taken in order to satisfy properties (A1) and (A2). An arguably preferable approach would be to work with the more natural model in which A is simply C^op and U is R^(−), as in [15], even though (A1) and (A2) are then violated. This raises the question of whether the awkward properties (A1) and (A2) can be weakened. We shall return to this question in Section 8.

Interpreting the type theory

Given a set of type variables Θ, a Θ-environment is a function γ mapping every value-type variable X ∈ Θ to an object γ(X) of C, and every computation-type variable X ∈ Θ to an object γ(X) of A. A relational Θ-environment is a tuple ρ = (ρ_1, ρ_2, ρ_R), where: ρ_1, ρ_2 are Θ-environments; for every value-type variable X ∈ Θ, ρ_R(X) is an admissible relation in R_C(ρ_1(X), ρ_2(X)); and, for every computation-type variable X ∈ Θ, ρ_R(X) is an admissible relation in R_A(ρ_1(X), ρ_2(X)).

For each value type B(Θ) (i.e., type B with ftv(B) ⊆ Θ) and Θ-environment γ, we define an object C[[B]]_γ of C; and, for each computation type A(Θ) and Θ-environment γ, we define an object A[[A]]_γ of A. Interdependently with the above, for each value type B(Θ) and relational Θ-environment ρ, we define an admissible relation R[[B]]_ρ ∈ R_C(C[[B]]_{ρ_1}, C[[B]]_{ρ_2}); and, for each computation type A(Θ), an admissible relation R[[A]]_ρ ∈ R_A(A[[A]]_{ρ_1}, A[[A]]_{ρ_2}). The definitions are given in Figure 3. In these definitions, the products and powers used in the definition of C[[B]]_γ are the ones in C, and those used in the definition of A[[A]]_γ are those in A, as (weakly) created by U. We write ∆_γ for the relational Θ-environment that maps X (resp. X) to ∆_{γ(X)} (resp. ∆_{γ(X)}). We also use an obvious notation for update of environments.

Proposition 4.1. The above interpretations of types and relations are well defined.

Proof. The proof of well-definedness is by induction over the structure of types. We focus first on showing that the relational interpretation of types defines admissible relations. Notice first that the relation R[[B → C]]_ρ can be rewritten in a form to which the closure conditions on admissible relations apply directly.

We include some basic lemmata about the type interpretation without proof.
(1) If B(Θ, X) and A(Θ) then C[[B[A/X]]]_γ = C[[B]]_{γ[X ↦ C[[A]]_γ]}.

The above lemmata are all easily proved by induction on types.
The interpretations of polymorphic types have been defined by taking products over the sets C, A respectively, but for the interpretation of terms below, it is crucial that we can define projections out of these products for every A in C (respectively B in A), and not just for those objects in the sets C, A. Essentially, we would like to be able to treat these polymorphic types as if they had been defined using products over the classes of objects of C and A, even though set theory does not allow us to define such large products. It is a pleasing fact that restriction to the parametric elements of the products allows us to do just that, as the sequence of results from Proposition 4.5 to Lemma 4.10 below establishes. The idea essentially goes back to [35], and was used in [18] to construct a model of parametric polymorphism in the sense of fibered category theory.
To formulate the first result, we define a morphism from a Θ-environment γ to another γ′ to be a family f of functions indexed by type variables in Θ satisfying: for every value-type variable X ∈ Θ, the function f X is a function from γ(X) to γ′(X); and, for every computation-type variable X ∈ Θ, the function f X is a homomorphism from γ(X) to γ′(X). Morphisms of Θ-environments form a category under pointwise composition, and a Θ-environment isomorphism is just an isomorphism in this category. Given a Θ-environment morphism f from γ to γ′, we write f for the relational Θ-environment with f 1 = γ, f 2 = γ′, f R (X) = f X and f R (X) = f X . Also, given a Θ-environment γ, we write x ∈ γ for a family of elements indexed by type variables in Θ satisfying: for every value-type variable X ∈ Θ, it holds that x X ∈ γ(X); and, for every computation-type variable X ∈ Θ, it holds that x X ∈ U(γ(X)). Given a Θ-environment morphism f : γ → γ′ and x ∈ γ, we write f(x) for the evident pointwise function application, which is an element of γ′. Moreover, given a relational Θ-environment ρ, and elements x 1 ∈ ρ 1 and x 2 ∈ ρ 2 , we write ρ R (x 1 , x 2 ) to mean that: for every X ∈ Θ, it holds that ρ R (X)(x 1X , x 2X ); and, for every X ∈ Θ, it holds that ρ R (X)(x 1X , x 2X ).

Proposition 4.5 (Groupoid action). For any type C(Θ), any two Θ-environments γ, γ′, and any Θ-environment isomorphism i : γ → γ′, there exists a unique isomorphism. Furthermore, given relational Θ-environments ρ, ρ′, and given Θ-environment isomorphisms i 1 :

Proof. By induction on the structure of the type C. We consider two cases.
If C is A → B, then the induction hypothesis gives isomorphisms gpd[[A]](i) and gpd[[B]](i). The object A[[A → B]] γ′ is a power of A[[B]] γ′ taken in A, and each evaluation map ev x , for x ∈ C[[A]] γ′, is a projection. It suffices to show that each composite with gpd[[A → B]](i) is a homomorphism; gpd[[B]](i) is a homomorphism by the induction hypothesis, and evaluation maps are homomorphisms because they are projections out of a product taken in A.
For the second half of the proposition, given isomorphisms i 1 and i 2 , where we have used Lemma 4.3, and similarly for gpd, the induction hypothesis applies under the assumptions stated above, from which we conclude (4.2) by a second application of the induction hypothesis. We define gpd[[∀X.B]](i) by the evident formula; to see that this is well defined, we must show that if A, C ∈ A and R relates them, then ((κ 1 ) A , (κ 2 ) A ) are related for all A, and so the induction hypothesis applies to ((κ 1 ) A , (κ 2 ) A ). The pair (i 1 , i 2 ) maps related pairs to related pairs, and so, by the induction hypothesis, so does the pair of gpd maps; by (4.4) we conclude

Since this holds for all
The type ∀X.B is a computation type exactly when B is, and in this case we must show that gpd[[∀X.B]](i) is a homomorphism. Similarly to the case of function spaces, since A[[∀X.B]] γ′ is constructed as a limit in A, it suffices to show that each composite p A • gpd[[∀X.B]](i) is a homomorphism, where p A is the projection defined as p A (κ) = κ A . This follows by the induction hypothesis.
For the last part of the proposition, suppose the pair (i 1 , i 2 ) maps pairs related in ρ to pairs related in ρ ′ , and suppose R

Since the pair (i
this follows from the induction hypothesis.

Proof. Preservation of identities is Lemma 4.4. For preservation of composition, suppose i : ρ → ρ′ and j : ρ′ → ρ″.

Corollary 4.7. For any type B(Θ, X), relational Θ-environment ρ, any relation R in R C (A, C), and any pair of isomorphisms i and j, the stated equalities hold. Similarly for any type B(Θ, X), relational Θ-environment ρ, any relation R ∈ R A (A, C), and any pair of isomorphisms i and j.

Proof. We just prove the first part. Since the pair (i, j) maps pairs related in (i, j)⁻¹ R to pairs related in R, by Proposition 4.5 we obtain (4.5). Since R = (i⁻¹, j⁻¹)⁻¹ (i, j)⁻¹ R, we can apply the above to the pair (i⁻¹, j⁻¹) and obtain (4.6). The corollary is now the collected statement of (4.5) and (4.6).

Now, for any set A in C, let C ∈ C be such that C ≅ A by way of the isomorphism i : C → A. Using the groupoid action defined above, we can transport along i to define projections at arbitrary A.

Figure 4: Interpretation of Terms

Next, we define the interpretation of terms. Given a context Γ with all free type variables in Θ, a Θ-Γ-environment is a function γ defined on both the type variables in Θ and the term variables in Γ, such that the restriction of γ to Θ is a Θ-environment and, for every type assignment x : B in Γ, it holds that γ(x) ∈ C[[B]] γ.

Proof (sketch). The three statements of the proposition are proved simultaneously by structural induction on t. Most of the cases are standard and we just show a few. We prove the homomorphism property in the case of application of a polymorphic term t : ∀X.B to a value type A. By definition, and so by the induction hypothesis and Lemma 4.
The homomorphism property in the case of function application t(s), for t : B ⊸ C, follows from well-definedness: by the induction hypothesis, the interpretation of t is a homomorphism. Likewise, well-definedness in the case of linear lambda abstraction λ• x : A. t follows from the homomorphism property for t.
We show well-definedness in one of the cases of polymorphic lambda abstraction, ΛX. t : ∀X.B. Here we must show that the defined element is parametric. This follows from the relational invariance property for t, as assumed in the induction hypothesis, since R[[Γ]] ∆γ (γ, γ) holds by the identity extension lemma. Likewise, the relational invariance property in the case of type application of polymorphic terms follows from well-definedness using Lemma 4.9.
To show relational invariance in the case of polymorphic application at computation types t(A), we may use the induction hypothesis; from Lemma 4.9 the desired relation then follows.
Our main application of the model will be to establish semantic equalities between terms. Henceforth, for Γ | ∆ ⊢ s : A and Γ | ∆ ⊢ t : A, we write s = t to mean that [[s]] γ = [[t]] γ for all appropriate γ. For a syntactic equality theory we refer to [21].

Monadic types
In this section, we study the encoding of monadic types !B in our calculus, as defined by equation (1.1) of Section 1. One sees immediately that !B is always a computation type. We show that it enjoys the following derived introduction and elimination rules.
It is the above rules that motivate our notation for the ! type constructor, since these are simply restrictions of the usual rules for the exponential ! of intuitionistic linear logic; for example, as formulated in Plotkin and Barber's DILL [2].
As a first application of relational parametricity for our system, we show that !B has the correct universal property for Moggi's monadic type. To keep the semantic notation bearable, we frequently omit semantic brackets, treating syntactic objects as the semantic elements they define, and we freely mix syntactic expressions with semantic values. For example, given any set A in C, we simply write !A for its interpretation, referring to !A as a set or as an algebra respectively when disambiguation is needed.
Proof. Item 1 is a straightforward consequence of the semantic validity of beta equality. For 2, we must show that y = y(!A)(λx : A. !x) at type ∀X.(A → X) → X. By evident extensionality properties of the model, it suffices to show this componentwise for any algebra B and f : A → U B. Consider the homomorphism g : !A ⊸ B defined by g(s) = s(B)(f). For any x ∈ A, we have (5.1) and (5.2); combining these, we obtain the required instance, i.e., the components at B and f of the two sides of the equation agree.

Lemma 5.1 can be reformulated as the two equality rules for the monadic-type let constructor. It is routine to check that the two rules above are equivalent to the three items of Lemma 5.1, and we leave this as a straightforward exercise.
For any set A in C, define η A : A → !A by η A = λx. !x.

Theorem 5.2. The function η A : A → !A presents !A as the free algebra over A, i.e., for any algebra B and function f : A → U B, there exists a unique homomorphism h : !A ⊸ B such that h • η A = f. Indeed, h is given by λ• y. let !x be y in f(x).
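The universal property can be made concrete in a hypothetical Haskell sketch in which computation types are modelled as computations in an arbitrary monad m, so that the X in !A = ∀X.(A → X) → X ranges over types of the form m x. The names Bang, eta, letBang, toMonad and fromMonad are our own illustration, not part of PE.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-style encoding of !A: a polymorphic function that, given an
-- "algebra map" a -> m x into any computation type m x, produces an m x.
newtype Bang m a = Bang { runBang :: forall x. (a -> m x) -> m x }

-- Derived introduction rule: !t (the unit eta of Theorem 5.2).
eta :: a -> Bang m a
eta a = Bang (\k -> k a)

-- Derived elimination rule: "let !x be s in t".
letBang :: Bang m a -> (a -> m x) -> m x
letBang (Bang g) k = g k

-- By parametricity, Bang m a is isomorphic to m a; the two maps below
-- are mutually inverse (up to the equational theory of the model).
toMonad :: Monad m => Bang m a -> m a
toMonad (Bang g) = g return

fromMonad :: Monad m => m a -> Bang m a
fromMonad ma = Bang (\k -> ma >>= k)
```

For instance, with m = [] (nondeterminism), letBang behaves as monadic bind: letBang (fromMonad [1,2]) (\x -> [x*10]) yields [10,20], and toMonad (eta 3) yields [3].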
For item 3, the "only if" direction is simply because Q is an admissible relation containing all elements of the form (η A (x), η B (y)) for which R(x, y) holds, and so by item 1 must contain !R, proving (!R ⊸ Q)(f, g).

Definable computation types
The monadic type constructor ! is just one example of a type constructor definable using parametric polymorphism. In Figure 2 we have seen a collection of type constructors on value types, and Figure 5 presents a collection of type constructors on computation types. The latter should be viewed as well-chosen variants of Plotkin's polymorphic type encodings in second-order intuitionistic linear type theory, cf. [28, 3, 4]. (For the relation between this calculus and PE, see Section 8.) We briefly discuss the computation type encodings.
Semantically, because U : A → C weakly creates limits, algebras are closed under products in C. Syntactically, however, the types 1 and A × B from Figure 2 are not computation types. Thus the alternative encodings 1• and A ×• B are needed to obtain products of computation types as computation types. The types 0• and A ⊕ B from Figure 5 define respectively an initial object and binary coproduct in the category A. This structure is not preserved by U, and coproducts of algebras behave very differently from coproducts of sets in C. (The latter are implemented by the sum types in Figure 2.) Figure 5 also contains: existential types, ∃• X.A and ∃• X. A, packaged up as computation types; inductive computation types, µ• X. A; and coinductive computation types, ν• X. A. As is standard, the (co)inductive types rely on the functoriality of type expressions in their positive arguments. A special case of the inductive types is an isomorphism valid for all computation types A in which X does not occur free. It is a consequence of relational parametricity that the above types all enjoy the correct universal properties. The arguments are carried out most naturally using a suitable logic for relational parametricity in PE; see [21].

Specialising the calculus to specific effects
The type theory PE is a generic calculus for effects since the type !B can be interpreted as an arbitrary monad, and no further effect-specific features are included. In this regard, PE is analogous to Moggi's computational λ-calculus [22], computational metalanguage [23] and Levy's call-by-push-value [15]. As with those calculi, specific effects can be incorporated by specialising the calculus appropriately. Typically, such specialisation takes place by extending the basic calculus with appropriately typed constants for any desired operations on effects. The addition of such constants takes place within the semantic theory described thus far, and so does not affect the validity of the results we have presented. For example, the universal properties of the defined types, discussed in Sections 5 and 6 (and treated in more detail in [21]), are unaltered.
In this section we consider various specialisations of the basic calculus, emphasising, in particular, the interaction with parametricity.
In a recent programme of research [31], Plotkin and Power have shown that many monads of computational interest can be profitably viewed as free algebra constructions for equational theories. This approach arises naturally from a computational viewpoint: the "algebraic operations" used to specify the theory correspond to programming primitives that cause effects, and the equational theory simply expresses natural behavioural equivalences between such primitives. We begin this section with an analysis of how to specialise PE to the case of such "algebraic effects".

Our approach is justified by a general theorem, which we now present. As one of their central results about algebraic effects, Plotkin and Power establish a one-to-one correspondence between "algebraic operations" and "generic effects" [30]. The theorem below reformulates this correspondence in our setting, and adds a third equivalent induced by our polymorphic description of monadic types. We shall apply this third equivalent to obtain the correct polymorphic typing for algebraic operations in effect-specific specialisations of PE.
Theorem 7.1. For any set A in C, there are one-to-one correspondences between: (1) "algebraic operations of arity A", i.e., natural transformations from the functor (U(−)) A : A → C to U; (2) "generic effects over A", i.e., elements of T A; and (3) "polymorphic computation-type operations of arity A", that is, elements of the type ∀X.(A → X) → X.

The simplifications in the formulation of statement 1 above, compared with [30], are due to our set-theoretic setting, which renders it unnecessary to consider issues relating to enrichment or tensorial strength. Also note that, by statement 2, the other two statements, in spite of appearances, depend only on the monad T on C, not on how it is resolved into an adjunction F ⊣ U : A → C.
Proof. The equivalence of statements 2 and 3 is immediate from (1.1), because T A = !A. So we establish the equivalence of 1 and 3. Suppose that θ is a natural transformation from (U(−)) A to U. We show that the associated family of maps is parametric; by naturality, the two squares below commute.
But this says that, for any f, g with Q(f(x), g(x)) for all x ∈ A, the corresponding results are Q-related, which is what we needed to show. For the converse direction, suppose κ is an element of ∀X.(A → X) → X. Then θ A (f) = κ(A)(f) is the corresponding algebraic operation. Verifying naturality is a routine use of graphs of homomorphisms: if g : B ⊸ C and f : A → B, then by parametricity ((∆ A → ⟨g⟩) → ⟨g⟩)(κ(B), κ(C)); so, since (∆ A → ⟨g⟩)(f, g • f), also ⟨g⟩(κ(B)(f), κ(C)(g • f)), i.e., g(θ B (f)) = θ C (g • f), proving naturality. It is obvious that the two constructions are mutually inverse.
To illustrate how Theorem 7.1 informs the specialisation of PE to algebraic effects, we consider nondeterminism as a typical example. As in [31], nondeterministic choice is naturally formulated using a binary operation "or" satisfying the semilattice equations: x or x = x, x or y = y or x, x or (y or z) = (x or y) or z.

Define the category A nd of "nondeterministic algebras" to have, as objects, structures (A, or A ) where A is a set in C and or A : A × A → A satisfies the semilattice equations, and, as morphisms from (A, or A ) to (B, or B ), functions from A to B that are homomorphisms with respect to the "or" operations. It is easily verified that the obvious forgetful functor U : A nd → C satisfies conditions (A1)-(A4).

Since the morphisms in A nd are homomorphisms, the operation mapping any nondeterministic algebra (A, or A ) to the function or A : A 2 → A is an algebraic operation of arity 2 in the sense of statement 1 of Theorem 7.1. Thus, applying Theorem 7.1 and currying, one obtains a corresponding polymorphic operation: or : ∀X.X → X → X.

Accordingly, nondeterministic choice can be incorporated in PE by adding a constant "or", typed as above, to the type theory. This example illustrates the general pattern for adding algebraic operations as polymorphic constants to our type theory, and it readily adapts to the algebraic operations associated with other algebraic effects.
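The passage from the binary algebraic operation to the polymorphic constant can be sketched in Haskell. The class NDAlg and the list model of the free nondeterministic algebra are illustrative assumptions: the semilattice equations are merely postulated, not enforced by the compiler, and lists model free algebras only up to reordering and duplication.

```haskell
-- Nondeterministic algebras as a type class; instances are assumed
-- to satisfy the semilattice equations for orOp.
class NDAlg x where
  orOp :: x -> x -> x

-- Free nondeterministic algebra on a: finite non-empty choices,
-- modelled (up to the semilattice equations) by lists under (++).
instance NDAlg [a] where
  orOp = (++)

-- The polymorphic constant  or : forall X. X -> X -> X  of the paper,
-- with X restricted to computation types (here: NDAlg instances).
orPoly :: NDAlg x => x -> x -> x
orPoly = orOp

-- Theorem 7.1 (sketch): the generic effect over 2 = Bool corresponding
-- to orPoly is its instantiation at the free algebra on Bool.
genericEffect :: [Bool]
genericEffect = orPoly [True] [False]
```

Here genericEffect is the nondeterministic boolean [True, False], the element of T 2 that Theorem 7.1 pairs with the binary operation.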
A limitation of the notion of algebraic operation is that there exist effect-specific programming primitives that are not algebraic operations. One well-known example of such a primitive is exception handling. Below, we show how exception handling may also be incorporated within our approach as a suitably typed polymorphic constant. The approach is justified by a general theorem, giving another instance of a coincidence between natural transformations and elements of polymorphic type.

Theorem 7.2. For any n ∈ N, there are one-to-one correspondences between: (1) natural transformations from (F(−)) n : C → A to F : C → A, and (2) elements of ∀X.(n → !X) ⊸ !X, where, in statement 2, we write n for the n-fold coproduct type 1 + • • • + 1, as defined in Figure 2.

Proof. An element of ∀X.(n → !X) ⊸ !X gives for each A ∈ C a map (F A) n ⊸ F A, and the naturality square for this family follows from the parametricity condition satisfied by elements of polymorphic type, applied to the graph of a function. The interesting part of this proof is to show that natural transformations satisfy the parametricity condition and thus define elements of ∀X.(n → !X) ⊸ !X.
by Proposition 5.3, as desired.
We now consider exception handling in detail. We assume we have a set E of exceptions with decidable equality (i.e., for all e, e′ ∈ E, either e = e′ or e ≠ e′). We also assume (for simplicity) that C is closed under binary coproduct in Set (this is consistent with the axioms for C). We define the category A exc of "exception algebras" to have, as objects, structures (A, {raise e A } e∈E ) where raise e A ∈ A, and, as morphisms from (A, {raise e A } e∈E ) to (B, {raise e B } e∈E ), functions from A to B that map each raise e A to raise e B . Since the raise e elements are algebraic constants (operations of arity 0), they can be added to PE as constants: raise e : ∀X.X. As is standard, the forgetful functor from A exc to C has as its left adjoint the functor F mapping A to the exception algebra (A + E, {inr(e)} e∈E ). For an exception e ∈ E, the handling operation over A is the function handle e A : (F(A)) 2 → F(A) defined by: handle e A (p, q) = q if p = inr(e), and handle e A (p, q) = p if p ≠ inr(e).

It is easily shown that this specifies a natural transformation from the functor (F(−)) 2 : C → A exc to F : C → A exc . In particular, the component handle e A of the natural transformation does lie in A exc because the interpretation of raise e in the exception algebra (F(A)) 2 is the pair (inr(e), inr(e)). Thus, by Theorem 7.2, exception handling can be incorporated in PE by adding typed constants: handle e : ∀X.(2 → !X) ⊸ !X. The main surprise with this typing is that exception handling is given a "linear" type. From this typing, one of course obtains an associated term of the less informative type ∀X.(2 → !X) → !X, which is isomorphic to the expected type ∀X.!X → !X → !X.
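The exception-algebra construction can be sketched in Haskell. The class ExcAlg, the choice E = String, and the rendering of A + E as Either E a (so that inr(e) becomes Left e, following the usual Haskell convention) are all illustrative assumptions.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A fixed exception set with decidable equality.
type E = String

-- An exception algebra carries a raise_e element for each e in E.
class ExcAlg x where
  raiseOp :: E -> x

-- Free exception algebra on a: F a = a + E, with raise_e = Left e.
instance ExcAlg (Either E a) where
  raiseOp = Left

-- The polymorphic constant raise_e : forall X. X (one per exception).
raisePoly :: ExcAlg x => E -> x
raisePoly = raiseOp

-- handle_e on the free algebra: not algebraic, but natural in a
-- (Theorem 7.2). If the first computation raised e, run the handler;
-- otherwise return the first computation (other exceptions propagate).
handle :: E -> Either E a -> Either E a -> Either E a
handle e (Left e') q | e == e' = q
handle _ p _ = p
```

Note that handle inspects its first argument, which is exactly why it fails to be an algebraic operation: algebraic operations must commute with all homomorphisms, whereas handle distinguishes the exceptional values.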
Paul Levy (personal communication) has pointed out that the above account of exception handling is not robust, in the sense that, in the presence of effects other than exceptions, the linear typing of handle e above is not always correct. In situations in which handling is non-linear, one would expect the non-linear typing ∀X.!X → !X → !X still to be correct. However, Theorem 7.2 is no longer applicable to establish parametricity. It would thus be interesting to find a general argument, valid in the presence of other effects, for the parametricity of handling.
Both Theorems 7.1 and 7.2 relate elements of certain polymorphic types with natural transformations between associated functors. In fact, more generally, for types that determine functors, parametricity implies naturality (cf. [29]). However, the exact correspondences between natural transformations and parametric elements established above depend crucially on the precise forms of types considered there.

The forms of n-ary operation considered in this section by no means exhaust the collection of operations of interest from an effects perspective. Control operators provide a particularly interesting class of examples that do not fit into this format. We briefly discuss how PE can be specialised to control at the end of Section 8.

Relation to other systems
Several computational effects of interest, including nontermination, nondeterminism, and probabilistic choice, give rise to monads on C that are commutative, cf. [23]. The collection of models of PE in which A is the category of algebras for a commutative monad T is of special interest since, for such monads, the set of homomorphisms A ⊸ B between algebras A, B carries a canonical algebra structure, which provides a closed structure on the category A. For such models, it is thus natural to modify our type system by including A ⊸ B as a computation type. Making this adjustment, one obtains second-order intuitionistic linear type theory as the fragment of computation types. Thus we obtain a rich collection of models for the type theory proposed by Plotkin as a foundation for combining polymorphism and recursion [28].

A simple application of the polymorphic encodings in Figures 2 and 5 is to translate Levy's CBPV calculus [15] into PE. For this, coproducts and products of value types are translated using + and × from Figure 2, products of computation types are translated using ×• from Figure 5, Levy's F constructor is translated using !, and U is simply ignored.
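The type part of this translation can be sketched as a small Haskell function. The datatypes VType and CType, and the rendered syntax, are our own illustration of the scheme just described: value connectives go to Figure 2 types, the computation product goes to ×•, F becomes !, and U is erased.

```haskell
-- A fragment of CBPV types: value types and computation types.
data VType = VUnit | VSum VType VType | VProd VType VType | U CType
data CType = F VType | CProd CType CType | Arrow VType CType

-- Type translation into PE, rendered as strings for illustration.
trV :: VType -> String
trV VUnit       = "1"
trV (VSum a b)  = "(" ++ trV a ++ " + " ++ trV b ++ ")"
trV (VProd a b) = "(" ++ trV a ++ " × " ++ trV b ++ ")"
trV (U c)       = trC c                       -- Levy's U is simply ignored

trC :: CType -> String
trC (F a)       = "!" ++ trV a                -- F translated using !
trC (CProd c d) = "(" ++ trC c ++ " ×• " ++ trC d ++ ")"
trC (Arrow a c) = "(" ++ trV a ++ " → " ++ trC c ++ ")"
```

For example, the CBPV type F(1 + 1) translates to !(1 + 1), and U(1 → F 1) translates to (1 → !1).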
One of the properties of Levy's CBPV calculus is that its adjunction models [16] are not required to satisfy any properties analogous to our conditions (A1) and (A2). In Sections 3 and 4, we exploited (A1) to satisfy the requirement that U(A[[A]]) = C[[A]], and (A2) to obtain that relations in A can be viewed as special relations in C (cf. Lemma 3.2), which is crucial in interpreting R[[A]] as an admissible A-relation. We comment, however, that it is possible to generalise our account of relational parametricity to models in which (A1) is weakened to the requirement that A be small-complete and U preserve limits (which always holds in Levy's models since U is a right adjoint), and in which condition (A2) is dropped altogether. For such models, condition (A1) can then be engineered by changing A to an equivalent category, and adjusting U accordingly, as in [20]; or, more naturally, the semantics can be adjusted, rather than the category, so as to obtain a specified isomorphism U(A[[A]]) ≅ C[[A]], instead of an equality. Dropping condition (A2) causes a more significant complication. In its absence, it seems necessary to define a special relational semantics for computation types, rather than inheriting the relational semantics for computation types from that for value types (as done in Section 4). Moreover, while such an approach is natural, it does make the semantic definitions significantly more complicated. In this paper, we have chosen to assume properties (A1) and (A2), since we value the convenience of simplified semantic definitions (which are anyway complicated enough as they are!) over the added generality of having a wider class of models.
Finally, we mention how the interesting case of control operators can be accommodated within PE. This cannot be achieved by following the general methods of Section 7, since the continuations monad R R (−) does not arise naturally as the free algebra for an algebraic theory, and the control primitives associated with continuations are not algebraic operations. Nevertheless, it turns out that PE can be usefully specialised to the case of control by adding a polymorphic constant of type ∀X.((X ⊸ 0 •) → 0 •) ⊸ X (using the defined type 0 • from Figure 5), acting as a pointwise inverse to the canonical element of type ∀X.X ⊸ ((X ⊸ 0 •) → 0 •). The resulting theory is studied in detail in a companion article [20], where it is shown that Hasegawa's results on polymorphic definability in the second-order λµ-calculus [9] fall out as special cases of constructions from Figure 5.

Applicability of results
We have given a semantic account of relational parametricity in the presence of computational effects. From our working perspective within IZF, this is parametrized on being given categories C and A and families of relations R C and R A , satisfying axioms (C1)-(C4), (A1)-(A4) and (R1)-(R4). Moreover, Proposition 3.5 shows that such data can be obtained whenever one has a monad T on a category C satisfying (C1)-(C4).

To conclude the paper, we outline how this theory might actually be applied to prove properties of polymorphic programs with effects. Suppose we have some given polymorphic λ-calculus L with a choice of effect-primitives as the programming language of interest. The basic idea is to formulate both the operational and denotational semantics of L within IZF. The operational semantics is treated in the standard way, for which the use of classical logic is inessential. The denotational semantics is developed using the assumption of a category C satisfying (C1)-(C4). The construction of A and R C and R A will depend upon the effects present in the language. For (a simple) example, if the only effect is nondeterministic choice then T can be defined to be the free-semilattice functor over C, and the entire model is then obtained via Proposition 3.5. For general effects, the construction of the model will be more complex than this, especially in the presence of recursion, cf. [35]. Indeed, there is need for a uniform theory of how to build such models; some hints in this direction appear in [40].
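For the nondeterministic-choice example, the free-semilattice monad can be sketched concretely in Haskell. Representing a finite non-empty subset canonically as a sorted, duplicate-free list is our own illustrative choice; it validates the semilattice equations on the nose.

```haskell
import Data.List (nub, sort)

-- The free-semilattice monad T sends a set A to its finite non-empty
-- subsets; a subset is represented as a sorted duplicate-free list.
newtype P a = P [a] deriving (Eq, Show)

-- Smart constructor enforcing the canonical representation.
pset :: Ord a => [a] -> P a
pset = P . nub . sort

-- Unit of the monad T: the singleton subset.
unitP :: Ord a => a -> P a
unitP a = P [a]

-- Kleisli extension (bind): union of the images.
bindP :: Ord b => P a -> (a -> P b) -> P b
bindP (P xs) f = pset (concat [ ys | x <- xs, let P ys = f x ])

-- The algebraic operation "or" on free algebras is subset union;
-- idempotence, commutativity and associativity hold by canonicity.
orP :: Ord a => P a -> P a -> P a
orP (P xs) (P ys) = pset (xs ++ ys)
```

With this representation, the semilattice equations become literal equalities of values, e.g. orP (unitP 1) (unitP 1) equals unitP 1.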
Once one has both an operational semantics and a model, the next step is to prove, within IZF, a computational adequacy result for the model, implying that the model is sound with respect to operational equivalence. In examples considered hitherto, such proofs have been obtained by standard logical-relations-based methods [37, 39, 35]. They rely only on having some appropriate non-triviality property of C (for example, that the natural numbers is an object of C [37]).
Computational adequacy allows one to transfer equational properties of the denotational semantics to the operational semantics. However, the above development has taken place in IZF, together with the assumption of a category C satisfying (C1)-(C4). We can therefore infer operational properties within this metatheory; but, of course, we want to be sure that such properties are actually true in the real world. The remaining step is to use a transfer property which allows us to conclude exactly this.
The transfer property is based on the existence of realizability models of IZF which possess within them categories C satisfying (C1)-(C4) and containing the natural numbers as an object. As already discussed in Section 3, such models derive from the work of Hyland et al. on small-complete small categories [10, 12]. Now, the relevant realizability models all enjoy the property of being Π⁰₂-absolute, meaning that a Π⁰₂-sentence holds in the model if and only if it is true externally. This implies that properties of operational equivalence that are true in the model are indeed true in reality; see [37, 39, 35] for related arguments.
We have outlined a programme of how one can potentially use the theory of parametricity developed in this paper to derive operational properties of programs. It would be good to have examples of such applications worked out in computationally interesting cases.

There is, of course, a significant drawback with the intuitionistic-set-theory-based approach we have been following. The mathematical overheads are considerable. It seems likely that a more practical theory of parametricity for effects should be achievable using direct operational methods. We leave this as an interesting direction for future research. It is plausible that the denotational approach we have been following in this paper might be useful in informing the development of such an operational theory.

Lemma 3.1. Suppose C satisfies (C1)-(C4) and let T be a monad on C. Then the category A of Eilenberg-Moore algebras for T and the forgetful functor U : A → C satisfy (A1)-(A4).

For any set of admissible C- (respectively A-) relations on the same pair of objects, the intersection is an admissible C- (respectively A-) relation. (R4): R A (A, B) ⊆ R C (U A, U B).

Proposition 3.5. Given C satisfying (C1)-(C4) and a monad T on C, let A be the category of algebras for the monad, U the forgetful functor, and define R C (A, B) = Sub C (A × B) and R A (A, B) = Sub A (A × B). This data defines a parametric model of PE.

Proof. We have already argued above that (A1)-(A4) are satisfied, and (R1)-(R4) are satisfied by Lemma 3.4.

Figure 3: Interpretation of Types. The algebras defined by A[[∀Y.A]] γ and A[[∀X.A]] γ are the canonical algebras carried by the subsets of the product algebras.

Proposition 4.1. C[[B]] γ, A[[A]] γ and R[[B]] ρ are well defined by Figure 3. Further, for every computation type A, it holds that U(A[[A]] γ) = C[[A]] γ.

The evaluation map ev x 1 : C[[B → C]] ρ 1 → C[[C]] ρ 1 is given by evaluation at x 1 , and ev x 2 is defined likewise. For value types B, C, it follows from the induction hypothesis and (R2) and (R3) that R[[B → C]] ρ is an admissible C-relation. If C is a computation type, then B → C is also a computation type and we must check that R[[B → C]] ρ is an admissible A-relation. Since the object A[[B → C]] ρ 1 is defined as a power of A[[C]] ρ 1 indexed by C[[B]] ρ 1 , taken in A, and the evaluation map ev x 1 is the projection, it is a homomorphism. So again R[[B → C]] ρ being admissible follows from the induction hypothesis and (R2), (R3). The proofs of the other induction cases are similar. To prove well-definedness of A[[∀X.A]] γ, notice first that the formula in Figure 3 defines an element of Sub A (∏ A∈C A[[A]] γ[A/X] ), since it can be exhibited as the intersection, over A, B ∈ C and R ∈ R C (A, B), of the subobjects (4.1).

Here p A , p B are the projections from the product ∏ A∈C A[[A]] γ[A/X] . The projections are homomorphisms since the product is taken in the category A; thus, since R[[A]] ∆γ[R/X] is an A-subobject by the induction hypothesis, (4.1) defines an A-subobject. We define A[[∀X.A]] γ to be the specified A-object representing the subset as given by Lemma 3.3, thus defining A[[∀X.A]] γ up to identity and not just up to isomorphism.