Higher Order Automatic Differentiation of Higher Order Functions

We present semantic correctness proofs of automatic differentiation (AD). We consider a forward-mode AD method on a higher order language with algebraic data types, and we characterise it as the unique structure preserving macro given a choice of derivatives for basic operations. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that our characterisation of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Throughout, we show how the analysis extends to AD methods for computing higher order derivatives using a Taylor approximation.


Introduction
Automatic differentiation (AD), loosely speaking, is the process of taking a program describing a function, and constructing the derivative of that function by applying the chain rule across the program code. As gradients play a central role in many aspects of machine learning, so too do automatic differentiation systems such as TensorFlow [AAB + 16], PyTorch [PGC + 17] or Stan [CHB + 15]. Differentiation has a well-developed mathematical theory in terms of differential geometry. The aim of this paper is to formalize this connection between differential geometry and the syntactic operations of AD, particularly for AD methods that calculate higher order derivatives. In this way we achieve two things: (1) a compositional, denotational understanding of differentiable programming and AD; (2) an explanation of the correctness of AD.
This intuitive correspondence (summarized in Fig. 1) is in fact rather complicated. In this paper, we focus on resolving the following problem: higher order functions play a key role in programming, and yet they have no counterpart in traditional differential geometry. Moreover, we resolve this problem while retaining the compositionality of denotational semantics.
1.0.1. Higher order functions and differentiation. A major application of higher order functions is to support disciplined code reuse. The need for code reuse is particularly acute in machine learning. For example, a multi-layer neural network might be built of millions of near-identical neurons, as follows.
  neuronₙ : (realⁿ * (realⁿ * real)) → real
  neuronₙ def= λ⟨x, w, b⟩. ς(w · x + b)

  layerₙ : ((τ₁ * P) → τ₂) → (τ₁ * Pⁿ) → τ₂ⁿ
  layerₙ def= λf. λ⟨x, p₁, …, pₙ⟩. ⟨f⟨x, p₁⟩, …, f⟨x, pₙ⟩⟩

  comp : (((τ₁ * P) → τ₂) * ((τ₂ * Q) → τ₃)) → (τ₁ * (P * Q)) → τ₃
  comp def= λ⟨f, g⟩. λ⟨x, ⟨p, q⟩⟩. g⟨f⟨x, p⟩, q⟩        (1.1)

Our solution is to interpret types as spaces in which:
• R is a space, and the smooth functions R → R are exactly the functions that are infinitely differentiable;
• the set of smooth functions X → Y between spaces again forms a space, so we can interpret function types;
• the disjoint union of a sequence of spaces again forms a space, and this enables us to interpret variant types and inductive types; e.g. lists of reals form the space ∐_{i=0}^∞ Rⁱ.
We emphasise that the most standard formulation of differential geometry, using manifolds, does not support spaces of functions. Diffeological spaces seem to us the simplest notion of space that satisfies these conditions, but there are other candidates [BH11, Sta11]. A diffeological space is in particular a set X equipped with a chosen set of curves C_X ⊆ X^R, and a smooth map f : X → Y must be such that if γ ∈ C_X then γ; f ∈ C_Y. This is reminiscent of the method of logical relations.
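To make (1.1) concrete, here is a minimal Haskell sketch of the three combinators. The Haskell names, the use of lists for n-vectors, and returning a list from layer are our own choices rather than the paper's notation.

  -- A minimal Haskell sketch of the combinators in (1.1); names are ours.
  sigmoid :: Double -> Double
  sigmoid x = 1 / (1 + exp (negate x))

  -- A single neuron: input vector x, weights w, bias b.
  neuron :: ([Double], ([Double], Double)) -> Double
  neuron (x, (w, b)) = sigmoid (sum (zipWith (*) w x) + b)

  -- A layer applies the same unit to one input with n parameter blocks.
  layer :: ((x, p) -> y) -> (x, [p]) -> [y]
  layer f (x, ps) = [ f (x, p) | p <- ps ]

  -- Composition of two parameterized functions, pairing their parameters.
  comp :: ((x, p) -> y, (y, q) -> z) -> (x, (p, q)) -> z
  comp (f, g) (x, (p, q)) = g (f (x, p), q)

  main :: IO ()
  main = do
    print (neuron ([1, 2], ([0.5, -0.5], 0.1)))
    print (layer neuron ([1, 2], [([0.5, -0.5], 0.1), ([1, 1], 0)]))
    print (comp (neuron, \(y, c) -> y * c) ([1, 2], (([0.5, -0.5], 0.1), 2.0)))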
1.0.2. From smoothness to automatic derivatives at higher types. Our denotational semantics in diffeological spaces guarantees that all definable functions are smooth. But we need more than just to know that a definable function happens to have a mathematical derivative: we need to be able to find that derivative.
In this paper we focus on forward mode automatic differentiation methods for computing higher derivatives, which are macro translations on syntax (called − → D in Section 3). We are able to show that they are correct, using our denotational semantics.
Here there is one subtle point that is central to our development. Although differential geometry provides established derivatives for first order functions (such as neuron above), there is no canonical notion of derivative for higher order functions (such as layer and comp) in the theory of diffeological spaces (e.g. [CW14]). We propose a new way to resolve this, by interpreting types as triples (X, X′, S) where, intuitively, X is a space of inhabitants of the type, X′ is a space serving as a chosen bundle of tangents (or jets, in the case of higher order derivatives) over X, and S ⊆ X^R × X′^R is a binary relation between curves, informally relating curves in X with their tangent (resp. jet) curves in X′. This new model gives a denotational semantics for higher order automatic differentiation on a language with higher order functions.
In Section 4 we boil this new approach down to a straightforward and elementary logical relations argument for the correctness of higher order automatic differentiation. The approach is explained in detail in Section 6. We explore some subtleties of non-uniqueness of derivatives of higher order functions in Section 7.
1.0.3. Related work and context. AD has a long history and has many implementations. AD was perhaps first phrased in a functional setting in [PS08], and there are now a number of teams working on AD in the functional setting (e.g. [WWE + 19, SFVPJ19,Ell18]), some providing efficient implementations. Although that work does not involve formal semantics, it is inspired by intuitions from differential geometry and category theory.
This paper adds to a very recent body of work on verified automatic differentiation. In the first order setting, there are recent accounts based on denotational semantics in manifolds [FST19, LYRY20] and based on synthetic differential geometry [CGM19], work making a categorical abstraction [CCG+20] and work connecting operational semantics with denotational semantics [AP20, Plo18], as well as work focussing on how to correctly differentiate programs that operate on tensors [BML+20]. The forward AD macros that we study here compute higher order derivatives and generalize the dual-numbers macro of the conference version of this paper [HSV20a]. We present them in Section 3 for function types, and in Section 5 we extend them to inductive types and variants. The main contributions of this paper are as follows.
• We give a denotational semantics for the language in diffeological spaces, showing that every definable expression is smooth (Section 4).
• We show correctness of the higher order AD macros by a logical relations argument (Th. 4.6).
• We give a categorical analysis of this correctness argument with two parts: a universal property satisfied by the macro in terms of syntactic categories, and a new notion of glued space that abstracts the logical relation (Section 6).
• We then use this analysis to state and prove a correctness argument at all first order types (Th. 6.7).
Relation to previous work. This paper extends and develops the conference paper [HSV20a].

Background: differentiation and Taylor representations

Recall that a differentiable function f : R → R has at each point a ∈ R a derivative ∇f(a) ∈ R, which determines the best linear approximation to f at a. The chain rule for differentiation tells us that we can calculate ∇(f; g)(a) = ∇f(a) · ∇g(f(a)). In that sense, the chain rule tells us how linear approximations to a function transform under post-composition with another function.
To find ∇f in a compositional way, using the chain rule, two generalizations are reasonable:
• We need both f and ∇f when calculating ∇(f; g) of a composition f; g, using the chain rule, so we are really interested in the pair (f, ∇f) : R → R × R;
• In building f we will need to consider functions of multiple arguments, such as + : R² → R, and these functions should propagate derivatives.
Thus we are more generally interested in transforming a function g : Rⁿ → R into a function h : (R × R)ⁿ → R × R in such a way that for any f₁, …, fₙ : R → R,

  ⟨⟨f₁, ∇f₁⟩, …, ⟨fₙ, ∇fₙ⟩⟩; h = ⟨(f₁, …, fₙ); g, ∇((f₁, …, fₙ); g)⟩.    (2.1)

Computing automatically the program representing h, given a program representing g, is the goal of automatic differentiation. An intuition for h is often given in terms of dual numbers. The transformed function operates on pairs of numbers, (x, x′), and it is common to think of such a pair as x + x′ε for an 'infinitesimal' ε. But while this is a helpful intuition, the formalization of infinitesimals can be intricate, and the development in this paper is focussed on the elementary formulation in (2.1).
A function h satisfying (2.1) encodes all the partial derivatives of g. For example, h(x₁, 1, x₂, 0) = (g(x₁, x₂), ∂g(x, x₂)/∂x (x₁)), and similarly h(x₁, 0, x₂, 1) = (g(x₁, x₂), ∂g(x₁, x)/∂x (x₂)). And conversely, if g is differentiable in each argument, then a unique h satisfying (2.1) can be found by taking linear combinations of partial derivatives, for example:

  h(x₁, x′₁, x₂, x′₂) = (g(x₁, x₂), x′₁ · ∂g(x, x₂)/∂x (x₁) + x′₂ · ∂g(x₁, x)/∂x (x₂)).

(Here, recall that the partial derivative ∂g(x, x₂)/∂x (x₁) is a particular notation for the gradient ∇(g(−, x₂))(x₁), i.e. with x₂ fixed.) In summary, the idea of differentiation with dual numbers is to transform a differentiable function g : Rⁿ → R to a function h : R²ⁿ → R² which captures g and all its partial derivatives. We packaged this up in (2.1) as an invariant which is useful for building derivatives of compound functions R → R in a compositional way. The idea of (first order) forward mode automatic differentiation is to perform this transformation at the source code level.
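As an illustration, here is a minimal Haskell sketch of this dual-number transformation for a few operations, together with the recovery of a partial derivative by seeding one input with tangent 1. The names (Dual, partial, and so on) and the choice of operations are our own, not part of the paper's language.

  -- A minimal sketch of (1,1) forward AD with dual numbers; all names are ours.
  -- A dual number (x, x') carries a value together with a tangent.
  type Dual = (Double, Double)

  addD, mulD :: Dual -> Dual -> Dual
  addD (x, x') (y, y') = (x + y, x' + y')            -- sum rule
  mulD (x, x') (y, y') = (x * y, x' * y + x * y')    -- product rule

  sigmoidD :: Dual -> Dual
  sigmoidD (x, x') = (s, x' * s * (1 - s))           -- chain rule for the sigmoid
    where s = 1 / (1 + exp (negate x))

  -- Partial derivative in direction i, by seeding the i-th input with tangent 1,
  -- as in h(x1, 1, x2, 0) above.
  partial :: ([Dual] -> Dual) -> Int -> [Double] -> Double
  partial hf i xs = snd (hf [ (x, if j == i then 1 else 0) | (j, x) <- zip [0 ..] xs ])

  -- Example: g(x1, x2) = sigmoid (x1 * x2 + x1), transformed to operate on duals.
  h :: [Dual] -> Dual
  h [x1, x2] = sigmoidD ((x1 `mulD` x2) `addD` x1)
  h _        = error "expected two inputs"

  main :: IO ()
  main = print (partial h 0 [1.0, 2.0])   -- dg/dx1 at (1, 2)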
We say that a macro for AD is correct if, given a semantic model ⟦−⟧, any program P representing g = ⟦P⟧ is transformed by the macro to a program P′ representing h = ⟦P′⟧. This means in particular that P′ computes correct partial derivatives of (the function represented by) P.

2.1. Smooth functions. In what follows we will often speak of smooth functions R^k → R: functions that are continuous and differentiable, whose derivatives are again continuous and differentiable, and so on.

2.2. Higher order differentiation: the Faà di Bruno formula and Taylor approximations. We now generalize the above in two directions:
• We look for the best local approximations to f with polynomials of some order R, generalizing the above use of linear functions (R = 1).
• We work directly with multivariate functions R^k → R instead of functions of one variable R → R (k = 1).
To make this precise, we recall that, given a smooth function f : R^k → R and a natural number R ≥ 0, the R-th order Taylor approximation of f at a ∈ R^k is defined in terms of the partial derivatives of f:

  (T^R_a f)(x) = ∑_{α ∈ N^k, α₁+⋯+α_k ≤ R} (1 / (α₁! ⋯ α_k!)) · (∂^{α₁+⋯+α_k} f / ∂x₁^{α₁} ⋯ ∂x_k^{α_k})(a) · (x₁ − a₁)^{α₁} ⋯ (x_k − a_k)^{α_k}.

This is an R-th order polynomial. Similarly to the case of first order derivatives, we can recover the partial derivatives of f up to the R-th order from its Taylor approximation by evaluating the series at basis vectors. See Section 2.3 below for an example.
Recall that the ordering of partial derivatives does not matter for smooth functions (Schwarz/Clairaut's theorem). So there will be \binom{R+k−1}{k−1} distinct R-th order partial derivatives, and altogether there are \binom{R+k}{k} summands in the R-th order Taylor approximation. (This can be seen by a 'stars-and-bars' argument.) For example, with k = 2 and R = 2 there are \binom{3}{1} = 3 distinct second order partial derivatives and \binom{4}{2} = 6 Taylor coefficients in total. Since there are \binom{R+k}{k} partial derivatives of order ≤ R of a smooth function R^k → R, we can store them in the Euclidean space R^\binom{R+k}{k}, which can also be regarded as the space of k-variate polynomials of degree ≤ R.
The precise formula is intricate; in it, (α¹₁, …, α¹_k), …, (α^q₁, …, α^q_k) ∈ N^k are an enumeration of all the vectors (α^r₁, …, α^r_k) of k natural numbers such that α^r_j ≤ α_j and α^r₁ + ⋯ + α^r_k > 0, and we write q for the number of such vectors. The details of this formula reflect the complicated combinatorics that arise from repeated applications of the chain and product rules for differentiation that one uses to prove it. Conceptually, however, it is rather straightforward: it tells us that the coefficients of the R-th order Taylor approximation of f; g can be expressed exclusively in terms of those of f and g.
Thus the Faà di Bruno formula uniquely determines the Taylor representation h : (R^\binom{R+k}{k})ⁿ → R^\binom{R+k}{k} in terms of the derivatives of g : Rⁿ → R of order ≤ R, and we can also recover all such derivatives from h.
In the proper Taylor representation we explicitly include the non-mixed second order derivatives as inputs and outputs, leading to a function h : (R⁶)^l → R⁶. Above we have followed a common trick to avoid some unnecessary storage and computation, since these extra inputs and outputs are not required for computing the second order derivatives of g. For instance, if l = 2 then the last component of h((x, 1, 1, 0), (y, 0, 0, 0)) computes ∂²g/∂x² (x, y).
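A minimal Haskell sketch of this reduced (2, 2) representation, with our own names, where each number carries a value, two first order components, and the mixed second order component:

  -- A minimal Haskell sketch of the reduced (2,2) representation; names are ours.
  type Quad = (Double, Double, Double, Double)

  addQ, mulQ :: Quad -> Quad -> Quad
  addQ (x, x1, x2, x12) (y, y1, y2, y12) = (x + y, x1 + y1, x2 + y2, x12 + y12)
  mulQ (x, x1, x2, x12) (y, y1, y2, y12) =
    ( x * y
    , x1 * y + x * y1
    , x2 * y + x * y2
    , x12 * y + x1 * y2 + x2 * y1 + x * y12 )

  -- Example: g(x, y) = x*x*y + y; seed x with both tangents and y with neither,
  -- as in h((x,1,1,0),(y,0,0,0)), so the last output component is d^2g/dx^2.
  g :: Quad -> Quad -> Quad
  g x y = (x `mulQ` x `mulQ` y) `addQ` y

  main :: IO ()
  main = print (g (3, 1, 1, 0) (4, 0, 0, 0))   -- (40.0, 24.0, 24.0, 8.0)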

2.4. Example: a one-dimensional second order Taylor series. As opposed to (2,2)-AD, (1,2)-AD computes the first and second order derivatives in the same direction. For example, if g : R² → R is a smooth function, then h : (R³)² → R³. An intuition for h can be given in terms of triple numbers. The transformed function operates on triples of numbers, (x, x′, x″), and it is common to think of such a triple as x + x′ε + x″ε² for an 'infinitesimal' ε which has the property that ε³ = 0. By seeding the inputs appropriately, we directly obtain the non-mixed second order partial derivatives, but not the mixed ones. We can recover the mixed derivative ∂²g(x, y)/∂x∂y, for instance from the second order derivative of g along the direction (1, 1) together with the non-mixed second order derivatives. More generally, if g : R^l → R, then h : (R³)^l → R³. We can always recover the mixed second order partial derivatives from such an h, but this requires several computations involving h. This is thus different from the (2,2) method, which was more direct.
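Here is a minimal Haskell sketch of (1, 2)-AD with such truncated 'triple' numbers, including the recovery of a mixed second order derivative from several directional computations. The names, and the convention that the third component stores the second order Taylor coefficient (half the second derivative), are our own assumptions.

  -- A minimal sketch of (1,2) forward AD with truncated "triple" numbers.
  -- A triple (x, x1, x2) stands for x + x1*e + x2*e^2 with e^3 = 0.
  type Triple = (Double, Double, Double)

  addT, mulT :: Triple -> Triple -> Triple
  addT (x, x1, x2) (y, y1, y2) = (x + y, x1 + y1, x2 + y2)
  mulT (x, x1, x2) (y, y1, y2) = (x * y, x * y1 + x1 * y, x * y2 + x1 * y1 + x2 * y)

  -- Second order directional derivative of a transformed function along v,
  -- read off from the third component (times 2, under our convention).
  secondAlong :: ([Triple] -> Triple) -> [Double] -> [Double] -> Double
  secondAlong hf v xs = 2 * third (hf [ (x, dx, 0) | (x, dx) <- zip xs v ])
    where third (_, _, c) = c

  -- Example: g(x, y) = x*y*y + x.
  g :: [Triple] -> Triple
  g [x, y] = (x `mulT` (y `mulT` y)) `addT` x
  g _      = error "expected two inputs"

  main :: IO ()
  main = do
    print (secondAlong g [1, 0] [3, 4])   -- d^2g/dx^2 = 0
    print (secondAlong g [0, 1] [3, 4])   -- d^2g/dy^2 = 2x = 6
    -- the mixed derivative d^2g/dxdy = 2y = 8, recovered from three runs:
    print ((secondAlong g [1, 1] [3, 4] - secondAlong g [1, 0] [3, 4] - secondAlong g [0, 1] [3, 4]) / 2)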
2.5. Remark. In the rest of this article, we study forward-mode (k, R)-automatic differentiation for a language with higher-order functions. The reader may like to fix k = R = 1 for a standard automatic differentiation with first-order derivatives, based on dual numbers. This is the approach taken in the conference version of this paper [HSV20b]. But the generalization to higher-order derivatives with arbitrary k and R flows straightforwardly through the whole narrative.

A Higher Order Forward-Mode AD Translation
3.1. A simple language of smooth functions. We consider a standard higher order typed language with a first order type real of real numbers. The types (τ, σ) and terms (t, s) are as follows.
  τ, σ, ρ ::= real | τ₁ * ⋯ * τₙ | τ → σ
  t, s ::= x                                          variables
         | op(t₁, …, tₙ)                              operations (including constants)
         | ⟨t₁, …, tₙ⟩ | case t of ⟨x₁, …, xₙ⟩ → s    tuples/pattern matching
         | λx.t | t s                                 function abstraction/application

The typing rules are in Figure 3. We have included some abstract basic n-ary operations op ∈ Opₙ for every n ∈ N. These are intended to include the usual (smooth) mathematical operations that are used in programs to which automatic differentiation is applied. For example,
• for any real constant c ∈ R, we typically include a constant c ∈ Op₀; we slightly abuse notation and will simply write c for c() in our examples;
• we include some unary operations such as ς ∈ Op₁, which we intend to stand for the usual sigmoid function x ↦ 1/(1 + e^(−x));
• we include some binary operations such as addition and multiplication (+), (*) ∈ Op₂.
We add some simple syntactic sugar, such as t − u def= t + (−1) * u. Similarly, we will frequently denote repeated sums and products using ∑- and ∏-signs, respectively: for example, we write t₁ + ⋯ + tₙ as ∑_{i∈{1,…,n}} tᵢ and t₁ * ⋯ * tₙ as ∏_{i∈{1,…,n}} tᵢ. This is in addition to programming sugar such as let x = t in s for (λx.s) t and λ⟨x₁, …, xₙ⟩.t for λx.case x of ⟨x₁, …, xₙ⟩ → t.

3.2. Syntactic automatic differentiation: a functorial macro. The aim of higher order forward mode AD is to find the (k, R)-Taylor representation of a function by syntactic manipulations, for some choice of (k, R) that we fix. For our simple language, we implement this as an inductively defined macro − → D (k,R) on both types and terms (see also [WWE+19, SFVPJ19]). For the sake of legibility, we simply write − → D (k,R) as − → D and leave the dimension k and order R of the Taylor representation implicit. On types, − → D replaces real by a tuple of \binom{R+k}{k} reals, storing a value together with its derivative components of order ≤ R, and it commutes with products and function types. On terms, it acts homomorphically on the structural constructs, and it replaces each basic operation op by a term computing op together with its partial derivatives of order ≤ R. The definition is stated for general k and R; we treat specific cases in Example 3.1. In the clause for operations, (∂_{β₁⋯βₙ} op)(x₁, …, xₙ) are some chosen terms of type real in the language, with free variables from x₁, …, xₙ, which we think of as implementing the corresponding partial derivatives of op. For our example operations (constants, ς, + and *), we can choose such representations of all derivatives of order ≤ 2 using the usual formulas.

Example 3.1 ((1, 1)- and (2, 2)-AD). Our choices of partial derivatives of the example operations are sufficient to implement (k, R)-Taylor forward AD with R ≤ 2. In the distinctive formulas for the (1, 1)- and (2, 2)-AD methods (which specialize our abstract definition of − → D), we informally write î for the one-hot encoding of i (the sequence of length n consisting exclusively of zeros, except in position i where it has a 1), and we write the two-hot encoding of i and j for the sequence of length n consisting exclusively of zeros, except in positions i and j where it has a 1 if i ≠ j and a 2 if i = j.

As noted in Section 2, it is often unnecessary to include all components of the (2, 2)-algorithm, for example when computing a second order directional derivative. In that case, we may define a restricted (2, 2)-AD algorithm that drops the non-mixed second order derivatives from the definitions above.

We extend − → D to contexts: − → D(x₁ : τ₁, …, xₙ : τₙ) def= x₁ : − → D(τ₁), …, xₙ : − → D(τₙ).

Lemma 3.2 (Type preservation). If Γ ⊢ t : τ, then − → D(Γ) ⊢ − → D(t) : − → D(τ).

Proof. By induction on the structure of typing derivations.
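To convey the flavour of the macro as a source-to-source translation, here is a minimal Haskell sketch of the (1, 1) case for a tiny first order expression language. The data type, the constructor names, and the restriction to constants, + and * are our own simplifications, not the language of this section.

  -- A sketch of the (1,1) macro as a source-to-source translation; names are ours.
  data Expr
    = Var String            -- a real-typed variable (becomes a dual number pair)
    | Lit Double            -- a real constant
    | Add Expr Expr
    | Mul Expr Expr
    | Pair Expr Expr        -- target-language pairs
    | Fst Expr
    | Snd Expr
    deriving (Show)

  -- d translates a source expression of type real into a target expression of
  -- type real * real, assuming each variable now denotes a (value, tangent) pair.
  d :: Expr -> Expr
  d (Var x)   = Var x
  d (Lit c)   = Pair (Lit c) (Lit 0)
  d (Add s t) = Pair (Add (Fst (d s)) (Fst (d t)))
                     (Add (Snd (d s)) (Snd (d t)))          -- sum rule
  d (Mul s t) = Pair (Mul (Fst (d s)) (Fst (d t)))
                     (Add (Mul (Fst (d s)) (Snd (d t)))
                          (Mul (Snd (d s)) (Fst (d t))))    -- product rule
  d _         = error "d is defined on the first order source fragment only"

  main :: IO ()
  main = print (d (Mul (Var "x") (Var "x")))   -- macro-expanded derivative of x*x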

Semantics of differentiation
Consider for a moment the first order fragment of the language in Section 3, with only one type, real, and no λ's or pairs. This has a simple semantics in the category of cartesian spaces and smooth maps. Indeed, a term x₁, …, xₙ : real ⊢ t : real has a natural reading as a function ⟦t⟧ : Rⁿ → R, by interpreting our operation symbols by the well-known functions Rⁿ → R with the corresponding names. In fact, the functions that are definable in this first order fragment are smooth. Let us write CartSp for this category of cartesian spaces (Rⁿ for some n) and smooth functions.
The category CartSp has cartesian products, and so we can also interpret product types, tupling and pattern matching, giving us a useful syntax for constructing functions into and out of products of R. For example, the interpretation of (neuronₙ) in (1.1) becomes the function ⟦neuronₙ⟧ : Rⁿ × (Rⁿ × R) → R with ⟦neuronₙ⟧(x, (w, b)) = ς(w ·ₙ x + b), where ·ₙ, + and ς are the usual inner product, addition and the sigmoid function on R, respectively.
Inside this category, we can straightforwardly study the first order language without λ's, and automatic differentiation. In fact, we can prove the following by plain induction on the syntax: The interpretation of the (syntactic) forward AD − → D (t) of a first order term t equals the usual (semantic) derivative of the interpretation of t as a smooth function.
However, as is well-known, the category CartSp does not support function spaces. To see this, notice that we have polynomial terms x₁, …, x_d : real ⊢ λy. ∑_{n=1}^{d} xₙ · yⁿ : real → real for each d, and so if we could interpret (real → real) as a Euclidean space R^p then, by interpreting these polynomial expressions, we would be able to find continuous injections R^d → R^p for every d, which is topologically impossible for any p, for example as a consequence of the Borsuk-Ulam theorem (see Appx. A).
This lack of function spaces means that we cannot interpret the functions (layer) and (comp) from (1.1) in CartSp, as they are higher order functions, even though they are very useful and innocent building blocks for differential programming! Clearly, we could define neural nets such as (1.1) directly as smooth functions without any higher order subcomponents, though that would quickly become cumbersome for deep networks. A problematic consequence of the lack of a semantics for higher order differential programs is that we have no obvious way of establishing compositional semantic correctness of − → D for the given implementation of (1.1).
We now show that every definable function is smooth, and then in Section 4.2 we show that the − → D macro witnesses its derivatives.

4.1. Smoothness at higher types and diffeologies. The aim of this section is to introduce diffeological spaces as a semantic model for the simple language in Section 3. By way of motivation, we begin with a standard set theoretic semantics, where types are interpreted as sets: ⟦real⟧ = R, ⟦τ₁ * ⋯ * τₙ⟧ = ⟦τ₁⟧ × ⋯ × ⟦τₙ⟧, and ⟦τ → σ⟧ is the set of all functions ⟦τ⟧ → ⟦σ⟧. A term x₁ : τ₁, …, xₙ : τₙ ⊢ t : σ is interpreted as a function ⟦t⟧ : ∏_{i=1}^{n} ⟦τᵢ⟧ → ⟦σ⟧, mapping a valuation of the context to a result. We can show that the interpretation of a term x₁ : real, …, xₙ : real ⊢ t : real is always a smooth function Rⁿ → R, even if it has higher order subterms. We begin with a fairly standard logical relations proof of this, and then move from this to the semantic model of diffeological spaces.
Proposition 4.1. If x₁ : real, …, xₙ : real ⊢ t : real, then the function ⟦t⟧ : Rⁿ → R is smooth.
Proof. For each type τ, define a set Q_τ ⊆ [R^k → ⟦τ⟧] by induction on the structure of types: Q_real is the set of smooth functions R^k → R; Q_{τ₁*⋯*τₙ} is the set of tuplings ⟨g₁, …, gₙ⟩ with each gᵢ ∈ Q_{τᵢ}; and Q_{τ→σ} is the set of functions g : R^k → ⟦τ → σ⟧ such that for every h ∈ Q_τ, the function x ↦ g(x)(h(x)) is in Q_σ. Now we show the fundamental lemma: if x₁ : τ₁, …, xₙ : τₙ ⊢ u : σ and g₁ ∈ Q_{τ₁}, …, gₙ ∈ Q_{τₙ}, then ⟨g₁, …, gₙ⟩; ⟦u⟧ ∈ Q_σ. This is shown by induction on the structure of typing derivations. The only interesting step here is that the basic operations (+, *, ς, etc.) are smooth. We deduce the statement of the proposition by putting u = t, k = n, and letting gᵢ : Rⁿ → R be the projections.
At higher types, the logical relations Q show that we can only define functions that send smooth functions to smooth functions, meaning that we can never use them to build first order functions that are not smooth. For example, (comp) in (1.1) has this property.
This logical relations proof suggests building a semantic model by interpreting types as sets with structure: for each type we have a set X together with a set Q_X ⊆ [R^k → X] of plots.
Definition 4.2. A diffeological space (X, P_X) consists of a set X together with, for each n and each open subset U of Rⁿ, a set P^U_X ⊆ [U → X] of functions, called plots, such that
• all constant functions are plots;
• if f : V → U is a smooth function and p ∈ P^U_X, then f; p ∈ P^V_X;
• if (pᵢ ∈ P^{Uᵢ}_X)_{i∈I} is a compatible family of plots (x ∈ Uᵢ ∩ Uⱼ ⇒ pᵢ(x) = pⱼ(x)) and (Uᵢ)_{i∈I} covers U, then the gluing p : U → X, defined by p(x) = pᵢ(x) for x ∈ Uᵢ, is a plot in P^U_X.
We call a function f : X → Y between diffeological spaces smooth if, for all plots p ∈ P^U_X, we have that p; f ∈ P^U_Y. We write Diff(X, Y) for the set of smooth maps from X to Y. Smooth functions compose, and so we have a category Diff of diffeological spaces and smooth functions.
A diffeological space is thus a set equipped with structure. Many constructions of sets carry over straightforwardly to diffeological spaces. For example, each Euclidean space Rⁿ is a diffeological space whose U-plots are the functions U → Rⁿ that are smooth in the classical sense. In categorical terms, this gives a full embedding of CartSp in Diff.
Example 4.4 (Product diffeologies). Given a family (X i ) i∈I of diffeological spaces, we can equip the product i∈I X i of sets with the product diffeology in which U -plots are precisely the functions of the form (p i ) i∈I for p i ∈ P U X i . This gives us the categorical product in Diff .
Example 4.5 (Functional diffeology). We can equip the set Diff(X, Y) of smooth functions between diffeological spaces with the functional diffeology, in which the U-plots are the functions p : U → Diff(X, Y) such that the map U × X → Y, (u, x) ↦ p(u)(x), is smooth with respect to the product diffeology on U × X. This specifies the categorical function object in Diff.
We can now give a denotational semantics to our language from Section 3 in the category of diffeological spaces. We interpret each type τ as a set ⟦τ⟧ equipped with the relevant diffeology, by induction on the structure of types: ⟦real⟧ is R with its standard diffeology of classically smooth plots; ⟦τ₁ * ⋯ * τₙ⟧ is ⟦τ₁⟧ × ⋯ × ⟦τₙ⟧ with the product diffeology; and ⟦τ → σ⟧ is Diff(⟦τ⟧, ⟦σ⟧) with the functional diffeology. Now well typed terms Γ ⊢ t : τ are interpreted as smooth functions ⟦t⟧ : ⟦Γ⟧ → ⟦τ⟧, giving a meaning for t for every valuation of the context. This is routinely defined by induction on the structure of typing derivations, once we choose a smooth function ⟦op⟧ : Rⁿ → R to interpret each n-ary operation op ∈ Opₙ. For example, constants c : real are interpreted as constant functions; and the first order operations (+, *, ς) are interpreted by composing with the corresponding functions, which are smooth: e.g., ⟦ς(t)⟧(ρ) def= ς(⟦t⟧(ρ)), where ρ ∈ ⟦Γ⟧.
Variables are interpreted as ⟦xᵢ⟧(ρ) def= ρᵢ. The remaining constructs (tupling, pattern matching, abstraction and application) are interpreted by the corresponding set theoretic operations (pairing, projection, currying and evaluation), and it is straightforward to show that smoothness is preserved in each case.
The logical relations proof of Proposition 4.1 is reminiscent of diffeological spaces. We now briefly remark on the suitability of the axioms of diffeological spaces (Def. 4.2) for a semantic model of smooth programs. The first axiom says that we only consider reflexive logical relations. From the perspective of the interpretation, it recognizes in particular that the semantics of an expression of type (real → real) → real is defined by its value on smooth functions rather than arbitrary arguments. That is to say, the set-theoretic interpretation of (real → real) → real from the beginning of this section is different from its diffeological interpretation. The second axiom for diffeological spaces ensures that the smooth maps in Diff(U, X) are exactly the plots in P^U_X. The third axiom ensures that categories of manifolds fully embed into Diff; it will not play a visible role in this paper; in fact, [BCLG20] prove similar results for a simple language like ours by using plain logical relations (over Set) and without demanding the diffeology axioms. However, we expect the third axiom to be crucial for programming with other smooth structures or partiality.

4.2. Correctness of AD. We have shown that a term x₁ : real, …, xₙ : real ⊢ t : real is interpreted as a smooth function ⟦t⟧ : Rⁿ → R, even if t involves higher order functions (like (1.1)). Moreover, its macro differentiation is interpreted as a smooth function ⟦− → D (k,R)(t)⟧ : (R^\binom{R+k}{k})ⁿ → R^\binom{R+k}{k} (Lemma 3.2). This enables us to state a limited version of our main correctness theorem.

Theorem 4.6 (Semantic correctness of − → D (limited)). For any term x₁ : real, …, xₙ : real ⊢ t : real, the function ⟦− → D (k,R)(t)⟧ is the (k, R)-Taylor representation (2.2) of ⟦t⟧. In detail: writing T^(k,R)f : R^k → R^\binom{R+k}{k} for the function sending a point x to the Taylor coefficients of order ≤ R of a smooth function f : R^k → R at x, we have, for any smooth functions f₁, …, fₙ : R^k → R,

  ⟨T^(k,R)f₁, …, T^(k,R)fₙ⟩; ⟦− → D (k,R)(t)⟧ = T^(k,R)((f₁, …, fₙ); ⟦t⟧).

Proof. We prove this by logical relations. A categorical version of this proof is in Section 6.2.

Extending the language: variant and inductive types
In this section, we show that the definition of forward AD and the semantics generalize if we extend the language of Section 3 with variants and inductive types. As an example of inductive types, we consider lists. This specific choice is only for expository purposes and the whole development works at the level of generality of arbitrary algebraic data types generated as initial algebras of (polynomial) type constructors formed by finite products and variants. These types are easily interpreted in the category of diffeological spaces in much the same way. The categorically minded reader may regard this as a consequence of Diff being a concrete Grothendieck quasitopos (e.g. [BH11]), hence complete and cocomplete.
To demonstrate the practical use of expressive type systems for differential programming, we consider the following two examples.
(Recall the typing rule for fold over lists: if Γ ⊢ r : σ, Γ ⊢ s : list(τ), and Γ, x₁ : τ, x₂ : σ ⊢ t : σ, then Γ ⊢ fold (x₁, x₂).t over s from r : σ.)

Example 5.1 (Lists of inputs for neural nets). Usually, we run a neural network on a large data set, the size of which might be determined at runtime. To evaluate a neural network on multiple inputs, in practice, one often sums the outcomes. This can be coded in our extended language as follows. Suppose that we have a network f : (realⁿ * P) → real that operates on single input vectors. We can construct one that operates on lists of inputs as follows:

  λ⟨l, w⟩. fold (x₁, x₂). f⟨x₁, w⟩ + x₂ over l from 0 : (list(realⁿ) * P) → real
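A minimal Haskell sketch of Example 5.1, with our own names, where the fold over the list of inputs is written with Haskell's foldr:

  -- Given a network f on a single input vector (with parameters), sum its
  -- outputs over a list of inputs using a fold.
  sumNet :: (([Double], p) -> Double) -> ([[Double]], p) -> Double
  sumNet f (inputs, w) = foldr (\x acc -> f (x, w) + acc) 0 inputs

  main :: IO ()
  main = print (sumNet (\(x, w) -> w * sum x) ([[1, 2], [3, 4]], 0.5))   -- 5.0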
Example 5.2 (Missing data). In practically every application of statistics and machine learning, we face the problem of missing data: for some observations, only partial information is available. In an expressive typed programming language like the one we consider, we can model missing data conveniently using the data type maybe(τ) = {Nothing ⟨⟩ | Just τ}. In the context of a neural network, one might use it as follows. First, define helper functions: fromMaybe_k, which takes default values in real^k and replaces the missing covariates in a (maybe(real))^k-vector by them, and map, which applies a function to every element of a list (and is definable using fold). Given a neural network f : (list(real^k) * P) → real, we can build a new one that operates on a data set for which some covariates (features) are missing, by passing in default values to replace the missing covariates:

  λ⟨l, m, w⟩. f⟨map (fromMaybe_k m) l, w⟩ : (list((maybe(real))^k) * (real^k * P)) → real

Then, given a data set l with missing covariates, we can perform automatic differentiation on this network to optimize, simultaneously, the ordinary network parameters w and the default values for missing covariates m.
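A minimal Haskell sketch of Example 5.2, with our own names; missing covariates are modelled with Maybe and filled in from a vector of default values:

  import Data.Maybe (fromMaybe)

  -- A data point is a list of possibly-missing covariates; defaults fill in the
  -- missing ones.
  fillMissing :: [Double] -> [Maybe Double] -> [Double]
  fillMissing defaults row = zipWith fromMaybe defaults row

  -- Given a network on complete rows, build one on rows with missing data,
  -- parameterized by the default values as well as the network parameters, so
  -- that both can be optimized together.
  withDefaults :: (([[Double]], p) -> Double) -> (([[Maybe Double]], ([Double], p)) -> Double)
  withDefaults f (rows, (defaults, params)) = f (map (fillMissing defaults) rows, params)

  main :: IO ()
  main = print (withDefaults (\(rows, w) -> w * sum (map sum rows)) ([[Just 1, Nothing], [Nothing, Just 4]], ([0, 0], 0.5)))   -- 2.5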

5.2. Semantics. In Section 4 we gave a denotational semantics for the simple language in diffeological spaces. This extends to the language in this section, as follows. As before, each type τ is interpreted as a diffeological space, which is a set equipped with a family of plots:
• A variant type {ℓ₁ τ₁ | … | ℓₙ τₙ} is interpreted as the disjoint union ⟦τ₁⟧ ⊎ ⋯ ⊎ ⟦τₙ⟧ of the semantic spaces, with the coproduct diffeology.
• A list type list(τ) is interpreted as the disjoint union ∐_{i=0}^∞ ⟦τ⟧ⁱ of the sets of length-i tuples, for all natural numbers i.
The constructors and destructors for variants and lists are interpreted as in the usual set theoretic semantics. It is routine to show inductively that these interpretations are smooth. Thus every term Γ ⊢ t : τ in the extended language is interpreted as a smooth function ⟦t⟧ : ⟦Γ⟧ → ⟦τ⟧ between diffeological spaces. List objects as initial algebras are computed as usual in a cocomplete category (e.g. [JR11]). More generally, the interpretation of algebraic data types follows exactly the usual categorical semantics of variant types and inductive types (e.g. [Pit95]).

Categorical analysis of (higher order) forward AD and its correctness
This section has three parts. First, we give a categorical account of the functoriality of AD (Ex. 6.3). Then we introduce our gluing construction, and relate it to the correctness of AD (dgm. (6.1)). Finally, we state and prove a correctness theorem for all first order types by considering a category of manifolds (Th. 6.7).
6.1. Syntactic categories. The key contribution of this subsection is that the AD macro translation (Section 3.2) has a canonical status as a unique functor between categories with structure. To this end, we build a syntactic category Syn from our language, which has the property of a free category with certain structure. This means that for any category C with this structure, there is a unique structure-preserving functor Syn → C, which is an interpretation of our language in that category. Generally speaking, this is the categorical view of denotational semantics (e.g. [Pit95]). But in this particular setting, the category Syn itself admits alternative forms of this structure, given by the dual numbers interpretation, the triple numbers interpretation etc. of Section 2. This gives canonical functors Syn → Syn translating the language into itself, which are the AD macro translations (Section 3.2). A key point is that Syn is almost entirely determined by universal properties (for example, cartesian closure for the function space); the only freedom is in the choice of interpretation of (1) the real numbers real, which can be taken as the plain type real, or as the dual numbers interpretation real * real etc.; (2) the primitive operations op, which can be taken as the operation op itself, or as the derivative of the operation etc.. In more detail, our language induces a syntactic category as follows.
Definition 6.1. Let Syn be the category whose objects are types, and where a morphism τ → σ is a term in context x : τ t : σ modulo the βη-laws (Fig. 5). Composition is by substitution.
For simplicity, we do not impose identities involving the primitive operations, such as the arithmetic identity x + y = y + x in Syn. As is standard, this category has the following universal property.
Lemma 6.2 (e.g. [Pit95]). For every bicartesian closed category C with list objects, and every choice of an object F (real) ∈ C and morphisms F (op) ∈ C(F (real) n , F (real)) for all op ∈ Op n and n ∈ N, in C, there is a unique functor F : Syn → C respecting the interpretation and preserving the bicartesian closed structure as well as list objects. Proof notes. The functor F : Syn → C is a canonical denotational semantics for the language, interpreting types as objects of C and terms as morphisms. For instance, F (τ → σ) def = (F τ → F σ), the function space in the category C, and F (t s) is the composite (F t, F s); eval .
When C = Diff, the denotational semantics of the language in diffeological spaces (Sections 4 and 5.2) can be understood as the unique structure preserving functor ⟦−⟧ : Syn → Diff satisfying ⟦real⟧ = R, ⟦ς⟧ = ς, and so on. This observation is a categorical counterpart to Lemma 3.2.
6.2. Categorical gluing and logical relations. Gluing is a method for building new categorical models which has been used for many purposes, including logical relations and realizability [MS92]. Our logical relations argument in the proof of Theorem 4.6 can be understood in this setting. (In fact we originally found the proof of Theorem 4.6 in this way.) In this subsection, for the categorically minded, we explain this, and in doing so we quickly recover a correctness result for the more general language in Section 5 and for arbitrary first order types.

The general, established idea of categorical logical relations starts from the observation that logical relations are defined by induction on the structure of types. Types have universal properties in a categorical semantics (e.g. cartesian closure for the function space), and so we can organize the logical relations argument by defining some category C of relations and observing that it has the requisite categorical structure. The interpretation of types as relations can then be understood as coming from a unique structure preserving map Syn → C. In this paper, our logical relations are not quite as simple as a binary relation on sets; rather, they are relations between plots. Nonetheless, this still forms a category with the appropriate structure, which follows because it can still be regarded as arising from a gluing construction, as we now explain.

We define a category Gl_k whose objects are triples (X, X′, S) where X and X′ are diffeological spaces and S ⊆ P^{R^k}_X × P^{R^k}_{X′} is a relation between their k-dimensional plots. A morphism (X, X′, S) → (Y, Y′, T) is a pair of smooth functions f : X → Y and f′ : X′ → Y′ such that if (g, g′) ∈ S then (g; f, g′; f′) ∈ T. The idea is that this is a semantic domain where we can simultaneously interpret the language and its automatic derivatives.
Proposition 6.4. The category Gl k is bicartesian closed, has list objects, and the projection functor proj : Gl k → Diff × Diff preserves this structure.
Proof notes. The category Gl_k is a full subcategory of a comma category built from Diff, and one checks that it is closed under the bicartesian closed structure and list objects of Diff × Diff.

By Lemma 6.2, we obtain an interpretation of the language in Gl_k once we choose an interpretation of real and of the basic operations. We interpret real as the triple (R, R^\binom{R+k}{k}, {(f, T^(k,R)f) | f : R^k → R smooth}), whose relation associates each k-dimensional plot with its (k, R)-Taylor representation, and we interpret each operation op by the pair (⟦op⟧, ⟦− → D(op)⟧).
At this point one checks that these interpretations are indeed morphisms in Gl_k. This is equivalent to the statement that ⟦− → D(op)⟧ is the (k, R)-Taylor representation of ⟦op⟧ (2.2). The remaining constructions of the language are interpreted using the categorical structure of Gl_k, following Lemma 6.2. The resulting interpretations make diagram (6.1) commute. One can check this by hand, or note that it follows from the initiality of Syn (Lemma 6.2): all the functors involved preserve all the structure.
We thus arrive at a restatement of the correctness theorem (Th. 4.6), which holds even for the extended language with variants and lists: for any x₁ : real, …, xₙ : real ⊢ t : real, the pair of interpretations (⟦t⟧, ⟦− → D (k,R)(t)⟧) is the image under the projection Gl_k → Diff × Diff of a morphism in Gl_k, and hence ⟦− → D (k,R)(t)⟧ is a (k, R)-Taylor representation of ⟦t⟧.
6.3. Correctness at all first order types, via manifolds. We now generalize Theorem 4.6 to hold at all first order types, not just the reals.
So far, we have shown that our macro translation (Section 3.2) gives correct derivatives to functions of the real numbers, even if other types are involved in the definitions of the functions (Theorem 4.6 and Section 6.2). We can state this formally because functions of the real numbers have well understood derivatives (Section 2). There are no established mathematical notions of derivatives at higher types, and so we cannot even begin to argue that our syntactic derivatives of functions (real → real) → (real → real) match with some existing mathematical notion (see also Section 7).
However, for functions of first order type, like list(real) → list(real), there are established mathematical notions of derivative, because we can understand list(real) as the manifold of all tuples of reals, and then appeal to the well-known theory of manifolds and jet bundles. We do this now, to achieve a correctness theorem for all first order types (Theorem 6.7). The key high level points are that:
• manifolds support a notion of differentiation, and an interpretation of all first order types, but not an interpretation of higher types;
• diffeological spaces support all types, including higher types, but not an established notion of differentiation in general;
• manifolds and smooth maps embed fully and faithfully in diffeological spaces, preserving the interpretation of first order types, so we can use the two notions together.
We now explain this development in more detail. For our purposes, a smooth manifold M is a second-countable Hausdorff topological space together with a smooth atlas. In more detail, a topological space X is second-countable when there exists a countable collection U := {Uᵢ}_{i∈N} of open subsets of X such that any open subset of X can be written as a union of elements of U. A topological space X is Hausdorff if for any two distinct points x and y there exist disjoint open subsets U, V of X such that x ∈ U and y ∈ V. A smooth atlas of a topological space X is an open cover 𝒰 together with homeomorphisms (φ_U : U → R^{n(U)})_{U∈𝒰} (called charts, or local coordinates) such that φ_U⁻¹; φ_V is smooth on its domain of definition for all U, V ∈ 𝒰. A function f : M → N between manifolds is smooth if φ_U⁻¹; f; ψ_V is smooth for all charts φ_U and ψ_V of M and N, respectively. Let us write Man for this category. This definition of manifolds is a slight generalisation of the more usual one from differential geometry, because different charts in an atlas may have different dimensions.

Each open subset of Rⁿ can be regarded as a manifold. This lets us regard the category of manifolds Man as a full subcategory of the category of diffeological spaces. We consider a manifold (X, {φ_U}_U) as a diffeological space with the same carrier set X and where the plots P^U_X, called the manifold diffeology, are the smooth functions in Man(U, X). A function X → Y is smooth in the sense of manifolds if and only if it is smooth in the sense of diffeological spaces [IZ13]. For the categorically minded reader, this means that we have a full embedding of Man into Diff. Moreover, the natural interpretation of the first order fragment of our language in Man coincides with that in Diff. That is, the embedding of Man into Diff preserves finite products and countable coproducts (hence initial algebras of polynomial endofunctors).
Proposition 6.5. Suppose that a type τ is first order, i.e. it is built just from reals, products, variants, and lists (or, again, arbitrary inductive types), and not function types. Then the diffeological space ⟦τ⟧ is a manifold.
Proof notes. This is proved by induction on the structure of types. In fact, one may show that every such ⟦τ⟧ is isomorphic to a manifold of the form ∐_{i=1}^{n} R^{dᵢ}, where the bound n is either finite or ∞, but this isomorphism is typically not an identity function.
We can understand the jet bundle of a composite space in terms of that of its parts. (Here, for a manifold M, the (k, R)-jet bundle J^(k,R)(M) is the manifold of (k, R)-jets of smooth maps R^k → M: equivalence classes of such maps, where two maps are identified when their partial derivatives of order ≤ R at 0 agree, fibred over M by the value at 0.)
Lemma 6.6. There are canonical isomorphisms J^(k,R)(M × N) ≅ J^(k,R)(M) × J^(k,R)(N) and J^(k,R)(∐ᵢ Mᵢ) ≅ ∐ᵢ J^(k,R)(Mᵢ).

We define a canonical isomorphism φ_τ : ⟦− → D (k,R)(τ)⟧ ≅ J^(k,R)(⟦τ⟧), for each first order type τ, by induction on the structure of types. We let φ_real be the evident isomorphism R^\binom{R+k}{k} ≅ J^(k,R)(R), identifying a tuple of Taylor coefficients with the corresponding jet. For the other types, we use Lemma 6.6. We can now phrase correctness at all first order types.
Theorem 6.7 (Semantic correctness of − → D (k,R) (full)). For any ground (first order) type τ, any first order context Γ and any term Γ ⊢ t : τ, the syntactic translation − → D (k,R) coincides with the (k, R)-jet bundle functor, modulo the canonical isomorphisms φ:

  ⟦− → D (k,R)(t)⟧; φ_τ = φ_Γ; J^(k,R)(⟦t⟧).

Proof notes. For any k-dimensional plot γ ∈ Man(R^k, M), let γ̄ ∈ Man(R^k, J^(k,R)(M)) be the (k, R)-jet curve, given by γ̄(x) = (γ(x), [t ↦ γ(x + t)]). First, we note that a smooth map h : J^(k,R)(M) → J^(k,R)(N) is the (k, R)-Taylor representation of a smooth map f : M → N precisely when γ̄; h is the jet curve of γ; f, for every plot γ; this generalizes (2.2). Second, for any first order type τ, the gluing relation of ⟦τ⟧ in Gl_k relates a plot γ precisely to its jet curve γ̄, modulo the isomorphism φ_τ; this is shown by induction on the structure of types. We conclude the theorem from diagram (6.1), by putting these two observations together.

7. Discussion: What are derivatives of higher order functions?
In our gluing categories Gl_k of Section 6.2, we have avoided the question of what semantic derivatives should be associated with higher order functions. Our syntactic macro − → D provides a specific derivative for every definable function, but in the model Gl_k there is only a relation between plots and their corresponding Taylor representations, and this relation is not necessarily single-valued. Our approach has been rather indifferent about what "the" correct derivative of a higher order function should be. Instead, all we have cared about is that we are using "a" derivative that is correct, in the sense that it can never be used to produce incorrect derivatives for first order functions, where we do have an unambiguous notion of correct derivative.

7.1. Automatic derivatives of higher order functions may not be unique! For a concrete example showing that derivatives of higher order functions might not be unique in our framework, let us consider the case (k, R) = (1, 1) and focus on first derivatives of the evaluation function ev = (r ↦ (f ↦ f(r))) : R → (R ⇒ R) ⇒ R, an element of ⟦real → (real → real) → real⟧. Our macro − → D returns (the interpretation of) λa : R. λf : R × R ⇒ R × R. f⟨a, 1⟩; write ev₁ for this function. In this section we show that the lambda term λa : R. λf : R × R ⇒ R × R. sort f ⟨a, 1⟩, whose interpretation we write ev₂, is also a valid derivative of the evaluation map, where

  sort(f) def= λ⟨r, r′⟩. ⟨π₁(f⟨r, 0⟩), π₂(− → D((−, 0); f; π₁)⟨r, r′⟩)⟩,

that is, sort(f) is the dual-numbers representation of the first order function x ↦ π₁(f⟨x, 0⟩). This map is idempotent and it converts any map R × R → R × R into the dual-numbers representation of its first component. For example, sort(swap) is the constant (0, 0) function, where we write swap for the map ⟨x, y⟩ ↦ ⟨y, x⟩.

According to our gluing semantics, a function g : τ₁ → σ₁ defines a correct (k, R)-Taylor representation of a function f : τ₀ → σ₀ iff (f, g) defines a morphism τ → σ in Gl_k, where τ = (τ₀, τ₁, S) and σ = (σ₀, σ₁, T) are the interpretations of the corresponding types in Gl_k. In particular, there is no guarantee that every f has a unique correct (k, R)-Taylor representation g. (Although such Taylor representations are, in fact, unique when τ, σ are first order types.) The gluing relation for (real → real) → real in Gl₁ relates curves in ⟦(real → real) → real⟧ with curves in ⟦− → D((real → real) → real)⟧. In this relation, the curve ev is related to at least two different tangent curves, namely ev₁ and ev₂. Yet, ev₁ ≠ ev₂, since ev₁(a)(swap) = (1, a) while ev₂(a)(swap) = (0, 0). This shows that ev₁ and ev₂ are both "valid" semantic derivatives of the evaluation function ev in our framework. In particular, it shows that semantic derivatives of higher order functions might not be unique. Our macro − → D will return ev₁, but everything would still work just as well if it instead returned ev₂.
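Under the reading of sort and swap given above, the following small Haskell check illustrates the two candidate derivatives of the evaluation map. The names are ours, and a numerical difference quotient stands in for the syntactic macro inside sort, purely for illustration.

  -- A small check of the two candidate derivatives; names are ours.
  type Dual = (Double, Double)

  -- The macro's derivative of the evaluation map: ev1 a f = f (a, 1).
  ev1 :: Double -> (Dual -> Dual) -> Dual
  ev1 a f = f (a, 1)

  -- sort converts f into the dual-numbers representation of its first component
  -- x |-> fst (f (x, 0)).
  sort :: (Dual -> Dual) -> (Dual -> Dual)
  sort f (r, r') = (g r, r' * dg r)
    where
      g x  = fst (f (x, 0))
      dg x = (g (x + eps) - g (x - eps)) / (2 * eps)   -- stand-in for the macro
      eps  = 1e-6

  ev2 :: Double -> (Dual -> Dual) -> Dual
  ev2 a f = sort f (a, 1)

  swap :: Dual -> Dual
  swap (x, y) = (y, x)

  main :: IO ()
  main = do
    print (ev1 2 swap)   -- (1.0, 2.0)
    print (ev2 2 swap)   -- (0.0, 0.0): different, yet also a "valid" derivative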
7.2. Canonical derivatives of higher order functions? Differential geometers and analysts have long pursued notions of a canonical derivative of various higher order functions arising, for example, in the calculus of variations and in the study of infinite dimensional Lie groups [KM97]. Such an uncontroversial notion of derivative exists on various (infinite dimensional) spaces of functions that form suitable (so-called convenient) vector spaces, or manifolds locally modelled on such vector spaces. At the level of generality of diffeological spaces, however, various natural notions of derivative that coincide on convenient vector spaces start to diverge, and it is no longer clear what the best definition of a derivative is [CW14]. Another, fundamentally different setting that defines canonical derivatives of many higher order functions is given by synthetic differential geometry [Koc06].
While derivatives of higher order functions are of deep interest and have rightly been studied in their own right in differential geometry, we believe the situation is subtly different in computer science: (1) In programming applications, we use higher order programs only to construct the first order functions that we ultimately end up running and calculating derivatives of. Automatic differentiation methods can exploit this freedom: derivatives of higher order functions only matter in so far as they can be used to construct the correct derivatives of first order functions, so we can choose a simple and cheap notion of derivative among the valid options. As such, the fact that our semantics does not commit to a single notion of derivative of higher order functions can be seen as a feature rather than a bug: it models the pragmatics of programming practice.
(2) While function spaces in differential geometry are typically infinite dimensional objects that are unsuitable for representation in the finite memory of a computer, higher order functions as used in programming are much more restricted: all they can do is call a function on finitely many arguments and analyse the function outputs. As such, function types in programming can be thought of as (locally) finite dimensional. In case a canonical notion of automatic derivative of a higher order function is really desired, it may be worth pursuing a more intensional notion of semantics, such as one based on game semantics. Such intensional techniques could capture the computational notion of higher order function better than our current (and other) extensional semantics, which use existing techniques from differential geometry. We hope that an exploration of such techniques might lead to an appropriate notion of computable derivative, even for higher order functions.
8. Discussion and future work

8.1. Summary. We have shown that diffeological spaces provide a denotational semantics for a higher order language with variants and inductive types (Sections 4 and 5). We have used this to show correctness of simple forward-mode AD translations for calculating higher derivatives (Theorem 4.6, Theorem 6.7). The structure of our elementary correctness argument for Theorem 4.6 is a typical logical relations proof over a denotational semantics. As explained in Section 6, this can equivalently be understood as a denotational semantics in a new kind of space obtained by categorical gluing.
Overall, then, there are two logical relations at play. One is in diffeological spaces, which ensures that all definable functions are smooth. The other is in the correctness proof (equivalently in the categorical gluing), which explicitly tracks the derivative of each function, and tracks the syntactic AD even at higher types.
8.2. Connection to the state of the art in AD implementation. As is common in denotational semantics research, we have here focused on an idealized language and simple translations to illustrate the main aspects of the method. There are a number of points where our approach is simplistic compared to the advanced current practice, as we now explain.

8.2.1. Array types. In our examples we have treated n-vectors as tuples of length n. This style of programming does not scale to large n. A better solution would be to use array types, following [SFVPJ19]. As demonstrated by [CJS20], our categorical semantics and correctness proofs straightforwardly extend to cover them, in a similar way to our treatment of lists. In fact, [CJS20] formalizes our correctness arguments in Coq and extends them to apply to the system of [SFVPJ19].
8.2.2. Efficient forward-mode AD. For AD to be useful, it must be fast. The (1, 1)-AD macro − → D (1,1) that we use is the basis of an efficient AD library [SFVPJ19]. Numerous optimizations are needed, ranging from algebraic manipulations, to partial evaluations, to the use of an optimizing C compiler, but the resulting implementation is performant in experiments [SFVPJ19]. The Coq formalization [CJS20] validates some of these manipulations using a similar semantics to ours. We believe the implementation in [SFVPJ19] can be extended to apply to the more general (k, R)-AD methods we described in this paper through minor changes.

8.2.3. Reverse-mode and mixed-mode AD. While forward-mode AD methods are useful, many applications require reverse-mode AD, or even mixed-mode AD, for efficiency. In [HSV20a], we described how our correctness proof applies to a continuation-based AD technique that closely resembles reverse-mode AD, but only has the correct complexity under a non-standard operational semantics [BMP20] (in particular, the linear factoring rule is crucial). It remains to be seen whether this technique and its correctness proof can be adapted to yield genuine reverse AD under a standard operational semantics.
Alternatively, by relying on a variation of our techniques, [Vák21] gives a correctness proof of a rather different (1, 1)-reverse AD algorithm that stores the (primal, adjoint)-vector pair as a struct-of-arrays rather than as an array-of-structs. Future work could explore extending its analysis to (k, R)-reverse AD and to mixed-mode AD for efficiently computing higher order derivatives.

8.2.4. Other language features. The idealized languages that we considered so far do not touch on several useful language constructs. For example: the use of functions that are partial (such as division) or partly-smooth (such as ReLU); phenomena such as iteration, recursion; and probabilities. Recent work by Vákár [Vák20] shows how our analysis of (1, 1)-AD extends to apply to partiality, iteration, and recursion. This development is orthogonal to the one in this paper: its methods combine directly with those in the present paper to analyze (k, R)-forward mode AD of recursive programs. We leave the analysis of AD of probabilistic programs for future work.