Computing a Solution of Feigenbaum's Functional Equation in Polynomial Time

Lanford has shown that Feigenbaum's functional equation has an analytic solution. We show that this solution is a polynomial time computable function. This implies in particular that the so-called first Feigenbaum constant is a polynomial time computable real number.


Introduction
Independently, Feigenbaum [7] and Großmann and Thomae [8] observed that the bifurcation points of certain parameterized classes of dynamical systems on an interval obey universal laws governed by constants which are now called Feigenbaum constants. For detailed presentations of these notions the reader is referred to [5] and to [6]. In particular, the so-called first Feigenbaum constant α = −2.50290787... is the inverse 1/g(1) of the value g(1) at 1 of a solution g of Feigenbaum's functional equation which was explicitly constructed by Lanford [10]. In this note we show that this solution function g is a polynomial time computable function. This implies that the first Feigenbaum constant is a polynomial time computable number.
Which real numbers are computable? This question was one of the motivations for Alan Turing to write his famous papers [15, 16], in which he developed the notion of a Turing machine and gave a definition of computable real numbers. Since then, computable analysis has developed into a research area in which the effective solvability of problems over the real numbers or more general continuous objects, in particular all kinds of numerical problems, is analyzed using mathematically precise notions of effective solvability, based on computability theory and complexity theory; see, e.g., [13, 9, 17, 1]. Among the first questions that one can ask in this theory is whether particular real number constants are computable real numbers or not. For example, it is easy to see and well known that the number π and the Euler number e are computable. In fact, they can be computed quite fast. An exemplary recent result of this kind is the observation by Rettinger [14] that the Bloch constant, a famous real number constant in complex analysis, is computable.
A real number c is called computable if there is an algorithm (a Turing machine) which, given an arbitrary n ∈ N, computes a rational number q_n satisfying |c − q_n| < 2^(−n). A real number c is called polynomial time computable if there are a Turing machine M and a polynomial p with coefficients in N such that M, given the string 1^n for any n ∈ N, computes in at most p(n) steps a binary string a = a_m ... a_0 (where m is an arbitrary natural number) and a binary string b = b_1 ... b_n such that |c − a.b| ≤ 2^(−n). Here, by a.b we mean the dyadic rational number defined by a.b := Σ_{i=0}^{m} a_i · 2^i + Σ_{j=1}^{n} b_j · 2^(−j). Instead of binary strings one might as well consider decimal strings, and instead of the upper bound 2^(−n) one might as well consider 10^(−n). Finally, a sequence (c_k)_{k∈N} of real numbers is called polynomial time computable if there are a Turing machine M and a two-variate polynomial p with coefficients in N such that M, given 1^k 0 1^n for k, n ∈ N, computes in at most p(k, n) steps a binary string a = a_m ... a_0 (where m is an arbitrary natural number) and a binary string b = b_1 ... b_n such that |c_k − a.b| ≤ 2^(−n). In order to formulate our main result precisely we need to introduce some terminology. We closely follow Lanford [10]; in fact, his paper is the basis of our analysis.
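The dyadic rational a.b determined by the two output strings of such a machine can be evaluated exactly; a minimal sketch (the function name and encoding conventions below are ours, chosen for illustration):

```python
from fractions import Fraction

def dyadic(a: str, b: str) -> Fraction:
    """Value a.b of binary strings a = a_m...a_0 (integer part, most
    significant digit first) and b = b_1...b_n (fractional part)."""
    value = Fraction(int(a, 2)) if a else Fraction(0)
    for j, bit in enumerate(b, start=1):
        value += Fraction(int(bit), 2 ** j)   # b_j contributes b_j * 2^(-j)
    return value

# Example: a = "11", b = "01" encodes 3 + 1/4 = 13/4.
print(dyadic("11", "01"))  # 13/4
```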
Let M be the set of all continuously differentiable functions f : [−1, 1] → [−1, 1] satisfying the following conditions: (1) f(0) = 1; (2) x · f′(x) < 0 for x ≠ 0, i.e., f is strictly increasing on [−1, 0] and strictly decreasing on [0, 1]; (3) f(−x) = f(x) for all x, i.e., f is even. Furthermore, let D ⊆ M be the set of all functions in M satisfying additionally certain conditions, the first of which is 0 < λ := −f(1); for the complete list see [10]. It is easy to check that for any function f ∈ D, the function T f, defined by

(T f)(x) := −(1/λ) · f(f(λ · x)),   λ := −f(1),

is an element of M. Lanford [10] showed the following result (Theorem 1.1). The so-called first Feigenbaum constant α is given by α = 1/g(1). We prove the following addition to Lanford's theorem.

Theorem 1.2. There exists a function g that has the properties stated in Theorem 1.1 and additionally the following properties.
(1) The sequence of Taylor coefficients around 0 of this analytic function g is a polynomial time computable sequence of real numbers.

(2) The number α = 1/g(1) is a polynomial time computable real number.
Our proof is based on Lanford's paper [10]; it is given in the following section.

A Polynomial Time Algorithm for Computing Lanford's Solution of Feigenbaum's Functional Equation
Lanford uses a variant of the Newton method in order to define an operator which has the same fixed points as T. Then he gives a computer-assisted proof of a number of estimates showing that this operator is a contraction in a neighborhood of an explicitly defined polynomial ψ_0, with respect to an ℓ^1-type norm on the space of Taylor coefficients of functions closely related to the functions f on which T acts. Furthermore, this operator maps the polynomial ψ_0 not too far away from itself. By the contraction mapping principle it follows that the operator has a unique fixed point g. We show that this construction leads to a polynomial time algorithm.
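The mechanism behind such a Newton-variant can be seen in a one-dimensional toy model (entirely our example, not Lanford's operator): replacing the exact Newton step for F(x) = 0 by a fixed approximate inverse J of the derivative yields a map that still has exactly the roots of F as fixed points and is still a contraction near the root.

```python
def F(x):
    return x * x - 2.0          # root: sqrt(2)

J = 1.0 / (2.0 * 1.4)           # frozen approximate inverse of F'(sqrt(2))

def Phi(x):
    # Phi(x) = x - J * F(x): since J != 0, Phi(x) = x iff F(x) = 0,
    # so Phi has the same fixed points as the exact Newton map.
    return x - J * F(x)

x = 1.5
for _ in range(40):
    x = Phi(x)                  # |Phi'| < 1 near the root => contraction
print(x)                        # converges to sqrt(2) = 1.41421356...
```

The frozen approximate inverse mirrors the role that Lanford's operator J (introduced below) plays in the infinite-dimensional setting.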
The following terminology is copied from [10]. Let Ω ⊆ C be the domain considered by Lanford, and let H be the Banach space of even functions, bounded and analytic on Ω, real on real points, equipped with the supremum norm. We also define H_1 as the set of all f ∈ H with f(0) = 1. Lanford works on a subspace of H_1 equipped with a stronger norm. Let N_+ := {1, 2, 3, ...} be the set of positive integers and ℓ^1 := {ν : N_+ → R : Σ_{i=1}^∞ |ν_i| < ∞}; on R ⊕ ℓ^1 we use the ℓ^1-norm ||(u, ν)|| := |u| + Σ_{i=1}^∞ |ν_i|. Every pair (u, ν) ∈ R ⊕ ℓ^1 determines a function ψ with ψ(0) = 1 via the expansion described in [10]; let A be the set of all functions ψ defined in this way. A is a subset of H_1 and contains any element of H_1 that is analytic on a neighborhood of the closure of Ω. In the following we will identify the elements of A with elements of the space R ⊕ ℓ^1 with the ℓ^1-norm introduced above.

The first step in Lanford's construction is the explicit definition of a polynomial ψ_0 ∈ A of degree 20 of the form ψ_0(z) = 1 + Σ_{i=1}^{10} g_i · z^(2i), choosing as the values (g_1, ..., g_10) "the first ten terms of the series given in Table 1 below"; this table can be found on Page 432 in [10]. Then Lanford continues by stating that for ψ ∈ A with ||ψ − ψ_0|| < 0.01 one has Tψ ∈ A as well. The goal is to compute a fixed point of T as the limit of a sequence of functions, starting with ψ_0, that are computed using a contractive mapping. In order to achieve this, Lanford uses an explicitly given injective linear operator J : R ⊕ ℓ^1 → R ⊕ ℓ^1 and defines, for any ψ ∈ A with ||ψ − ψ_0|| < 0.01,

Φ(ψ) := ψ + J(Tψ − ψ).

This operator Φ is an approximation of the operation iterated in the Newton algorithm applied to the function ψ ↦ Tψ − ψ. Note that since J is injective, Φ has the same fixed points as T. For the proof of the following estimates Lanford uses computer calculations. By DΦ(ψ) in the following lemma we mean the Fréchet derivative of Φ at ψ, which exists and can easily be calculated.

Lemma. We have

(2.1)   ||Φ(ψ_0) − ψ_0|| ≤ 0.0009 and ||DΦ(ψ)|| ≤ 0.9 for all ψ ∈ A with ||ψ − ψ_0|| ≤ 0.009.

This lemma implies that Φ maps the closed ball {ψ ∈ A : ||ψ − ψ_0|| ≤ 0.009} into itself and that Φ is a contraction with Lipschitz constant 0.9 on this ball. By the contraction mapping theorem, the sequence (φ_m)_m of functions defined by φ_0 := ψ_0 and φ_{m+1} := Φ(φ_m) converges to a fixed point g of Φ. It satisfies

(2.2)   ||φ_m − g|| ≤ 0.009 · 0.9^m.

Remember that Φ has the
same fixed points as T. Thus, g is a fixed point of T. Lanford shows that this fixed point of T has all of the properties stated in Theorem 1.1.
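Lanford's scheme is rigorous and computer-assisted. Purely as an informal, non-rigorous illustration that this fixed point is numerically accessible, one can also solve the Feigenbaum–Cvitanović equation g(x) = −(1/λ) · g(g(λx)), λ = −g(1), by a truncated even power-series ansatz and Newton's method. All numerical choices below (truncation degree, collocation points, initial guess) are ours, not Lanford's:

```python
import numpy as np

N = 6  # truncation degree in x^2 (our choice)

def g_eval(c, x):
    # g(x) = 1 + sum_i c[i] * x^(2(i+1)): even polynomial with g(0) = 1
    x2 = x * x
    return 1.0 + sum(ci * x2 ** (i + 1) for i, ci in enumerate(c))

def residual(c):
    # Feigenbaum's equation, lam * g(x) + g(g(lam * x)) = 0 with
    # lam = -g(1), imposed at N collocation points in (0, 1].
    lam = -g_eval(c, 1.0)
    xs = np.linspace(1.0 / N, 1.0, N)
    return np.array([lam * g_eval(c, x) + g_eval(c, g_eval(c, lam * x))
                     for x in xs])

c = np.zeros(N)
c[0] = -1.5                      # initial guess g(x) ~ 1 - 1.5 x^2
h = 1e-8
for _ in range(50):              # Newton with finite-difference Jacobian
    F = residual(c)
    if np.max(np.abs(F)) < 1e-12:
        break
    Jac = np.empty((N, N))
    for j in range(N):
        cp = c.copy()
        cp[j] += h
        Jac[:, j] = (residual(cp) - F) / h
    c -= np.linalg.solve(Jac, F)

alpha = 1.0 / g_eval(c, 1.0)
print(alpha)                     # about -2.50
```

With N = 6 this sanity check already reproduces α ≈ −2.5029 to a few decimal places; raising N improves the match, but only estimates like Lanford's make the result certified.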
From (2.1) it is clear that, starting with the explicitly defined polynomial ψ_0 and applying the contractive operator Φ to it O(n) times, one obtains a polynomial that approximates the fixed point g with precision 10^(−n) (with respect to the norm considered by Lanford and described above). We wish to show that one can approximate the fixed point g with precision 10^(−n) in time polynomial in n. To achieve that, we are going to show that the precision needed in the n-th step is not too high and that the number of coefficients that need to be considered in the n-th step is likewise not too large. In fact, we will show that in the n-th step it is sufficient to consider a polynomial whose degree depends linearly on n.
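A geometric error bound of the form ||φ_m − g|| ≤ C · θ^m with contraction factor θ = 0.9 makes the O(n) iteration count explicit. In the sketch below the constant C = 0.009 is only indicative (any fixed C changes the count by an additive constant):

```python
import math

theta = 0.9      # contraction factor from Lanford's estimates
C = 0.009        # indicative constant in the bound C * theta**m

def iterations_needed(n: int) -> int:
    """Smallest m with C * theta**m <= 10**(-n): solve the inequality
    in logarithms, so m grows linearly in n."""
    m = math.ceil((n + math.log10(C)) / math.log10(1.0 / theta))
    return max(m, 0)

print(iterations_needed(10))   # linear in n
```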
First, we make some observations about the fixed point g of the operators T and Φ. For z_0 ∈ C and r > 0 let B(z_0, r) := {z ∈ C : |z − z_0| < r}. By Theorem 1.1, g is an even analytic function defined on the disc B(0, √8) satisfying g(0) = 1. Therefore its Taylor series around 0 contains only even powers of z, converges in B(0, √8), and is equal to g there. Then the function h defined by h(z^2) := g(z) is an analytic function in the ball B(0, 8), and for all z ∈ B(0, √8) we have g(z) = h(z^2).

The Taylor series of h around 1 converges and is identical with h in the ball B(1, 7): h(z) = Σ_{j=0}^∞ b_j · (z − 1)^j for all z ∈ B(1, 7). The Cauchy integral formula then gives |b_j| ≤ C / 6.5^j with C = max{|h(z)| : z ∈ ∂B(1, 6.5)}, where for z_0 ∈ C and r > 0 we write ∂B(z_0, r) := {z ∈ C : |z − z_0| = r}. We claim that results in [10] imply C ≤ 62/13. Indeed, according to [10, Remark 4], g is approximated closely by the polynomial ψ̃_0(z) = 1 + Σ_{i=1}^{40} g^(0)_i · z^(2i), a polynomial of degree 80 with coefficients g^(0)_i given in Table 1 on Page 432 of [10]. Its Taylor coefficients around 1 (as a function of z^2) for j = 0, ..., 39 can easily be computed from the numbers g^(0)_i for i = 1, ..., 40, and from them we obtain |h(z)| ≤ 62/13 for z ∈ ∂B(1, 6.5). Thus, C ≤ 62/13, and we have |b_j| ≤ (62/13) · 6.5^(−j) for all j ≥ 0. Defining u^(∞) := 10 · b_0 and, for i ∈ N_+, ν_i^(∞) from the b_i in the analogous way prescribed by the identification of A with R ⊕ ℓ^1, we identify g with the pair (u^(∞), ν^(∞)) ∈ R ⊕ ℓ^1. Note that the geometric decay of the b_j implies that for all k ≥ 1 the tail Σ_{i=k}^∞ |ν_i^(∞)| decays geometrically in k. We wish to approximate in time polynomial in n the function g, i.e., the sequence (u^(∞), ν^(∞)) ∈ R ⊕ ℓ^1, with precision 10^(−n) in the norm introduced above. We start with the polynomial ψ_0 chosen by Lanford and define the numbers u^(0), ν_i^(0) analogously. These numbers can easily be computed explicitly and are given in Table 2.
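Both ingredients, the Cauchy coefficient bound and the geometric tail it produces, are elementary to check numerically. The sketch below uses the sample function e^z (our stand-in; it is not the actual h) on a circle of the same radius 6.5 around 1:

```python
import cmath
import math

R = 6.5  # radius of the circle around 1 used in the Cauchy estimate

def circle_max(h, samples=2000):
    """max |h(z)| on the circle |z - 1| = R, estimated on a fine grid."""
    return max(abs(h(1 + R * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

h = cmath.exp
C = circle_max(h)

# Taylor coefficients of exp around 1 are b_j = e / j!; Cauchy's
# estimate guarantees |b_j| <= C / R**j.
for j in range(25):
    b_j = math.e / math.factorial(j)
    assert b_j <= C / R ** j

# Geometric tail: sum_{j>=k} C * r**j = C * r**k / (1 - r) with r = 1/R,
# so precision 10**(-n) needs a cutoff k growing only linearly in n.
r = 1.0 / R

def degree_needed(n: int) -> int:
    return math.ceil((n + math.log10(C / (1 - r))) / math.log10(1.0 / r))

print(degree_needed(10), degree_needed(20))
```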

We compute the first 10 + m + 1 of these numbers with precision 10^(−41−(m+1)); i.e., writing (v, μ) ∈ R ⊕ ℓ^1 for the element corresponding to Φ(ψ_m), we compute decimal fractions u^(m+1), ν_1^(m+1), ..., ν_{9+m+1}^(m+1) with at most 41 + (m + 1) digits after the decimal point such that |u^(m+1) − v| ≤ 10^(−41−(m+1)) and, for i = 1, ..., 9 + m + 1, |ν_i^(m+1) − μ_i| ≤ 10^(−41−(m+1)). That is, we simply forget the coefficients μ_i for i > 9 + m + 1. This ends the description of the Central Step of the algorithm, in which we compute the numbers u^(m+1), ν_1^(m+1), ..., ν_{9+m+1}^(m+1).

Remark 2.1. It is fairly easy to see that no more than O(m^3) elementary arithmetic operations are needed in the Central Step. Note that in order to achieve this it is important that not all of the 2·(10+m)^2 coefficients of the polynomial Φ(ψ_m)(z) are computed but only the first 10 + m + 1 coefficients. By somewhat tedious estimations one can show that there are positive constants a, b with the property that it is sufficient to perform each arithmetic operation with a + b·m digits in total, that is, before or after the decimal point. Let M(m) be a function satisfying O(M(c·m)) = O(M(m)) such that two binary or decimal numbers of length m can be multiplied in time O(M(m)). For example, the Schönhage–Strassen bound m · log m · log log m is such a function. It is well known that one can also add, subtract, or divide numbers of length m within this time [2]. We conclude that the Central Step can be done in time O(m^3 · M(m)).
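An O(m^3) operation count of this kind is typical for composing truncated power series while keeping only the first K coefficients: a naive composition needs O(K) truncated multiplications of O(K^2) operations each. A minimal sketch (our code, with coefficient lists in increasing order of the exponent):

```python
def mul_trunc(p, q, K):
    """First K coefficients of the product of the series p and q."""
    r = [0.0] * K
    for i, pi in enumerate(p[:K]):
        for j, qj in enumerate(q[:K - i]):
            r[i + j] += pi * qj
    return r

def compose_trunc(p, q, K):
    """First K coefficients of p(q(x)); requires q[0] == 0.
    Horner-style: start from the top coefficient of p, then repeatedly
    multiply by q (truncated) and add the next coefficient."""
    assert q[0] == 0
    r = [0.0] * K
    for coeff in reversed(p):
        r = mul_trunc(r, q, K)
        r[0] += coeff
    return r

# Example: p(x) = x + x^2, q(x) = 2x  =>  p(q(x)) = 2x + 4x^2.
print(compose_trunc([0.0, 1.0, 1.0], [0.0, 2.0], 3))  # [0.0, 2.0, 4.0]
```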

Theorem 1.1 ([10, Theorem 1 and Prop. 2]). There exists a function g, analytic and even on the set {z ∈ C : |z| < √8} and with real values on real numbers, whose restriction to [−1, 1] is an element of D and a fixed point of the operator T.
We now describe the Central Step of the algorithm. Let us assume by induction hypothesis that for some m ≥ 0 we have computed 10 + m real numbers u^(m), ν_1^(m), ..., ν_{9+m}^(m) with the following properties:

• Property I: Each of these numbers is a decimal fraction of the form σ a_1 a_0 . b_1 ... b_{41+m}, where σ ∈ {−1, +1} is a sign and a_1, a_0, b_1, ..., b_{41+m} are decimal digits.

• Property II: The polynomial ψ_m of degree 20 + 2m defined by ψ_m(z) := 1 − z^2 · u^(m) − ..., with the ν_i^(m) entering via the identification of R ⊕ ℓ^1 with A, satisfies ||ψ_m − g|| < 0.01 · 0.93^m.

First we observe that the polynomial ψ_0 indeed has these properties for m = 0. The first property can be checked easily by explicitly calculating the numbers u^(0), ν_i^(0); the second property holds for m = 0 since ||ψ_0 − g|| ≤ 0.009 < 0.01. It is clear that this Central Step can be executed in time polynomial in m.
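Property I fixes the data format: signed decimal fractions with 41 + m digits after the decimal point. Working with a prescribed number of digits per step can be modelled with Python's decimal module; the sketch below (digit counts illustrative) rounds a working-precision value to the format of Property I:

```python
from decimal import Decimal, getcontext

def round_to_step(x: Decimal, m: int) -> Decimal:
    """Round x to 41 + m digits after the decimal point."""
    return x.quantize(Decimal(1).scaleb(-(41 + m)))

getcontext().prec = 120          # enough working precision for small m
x = Decimal(1) / Decimal(3)      # stand-in for an exact coefficient
y = round_to_step(x, 0)
print(y)                         # 0.333...3 (41 fractional digits)
```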