$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} % ALTERNATE VERSIONS % \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} % \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} % \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} % \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} % \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\uppersum}[1]{U_{#1}} \newcommand{\lowersum}[1]{L_{#1}} \newcommand{\upperint}[1]{U_{#1}} \newcommand{\lowerint}[1]{L_{#1}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\sampleset}[1]{\mathcal{S}_{#1}} \newcommand{\erf}{\operatorname{erf}} $$

25  Power Series

Highlights of this Chapter: we prove two marvelous results about power series. First, we show that they are differentiable, and derive a formula for their derivative. Second, we prove a formula for approximating a function by polynomials which, in the limit, yields a power series representation of the function in terms of its derivatives at a single point.

25.1 Differentiating Power Series

The goal of this section is to prove that power series are differentiable, and that we can differentiate them term by term. That is, we seek to prove

\[\left(\sum_{k\geq 0}a_kx^k\right)^\prime=\sum_{k\geq 0}(a_k x^k)^\prime = \sum_{k\geq 1}ka_kx^{k-1}\]

Because a derivative is defined as a limit, this process of bringing the derivative inside the sum is really an exchange of limits: and we know the tool for that! Dominated Convergence.

25.1.1 \(\bigstar\) Dominated Convergence

The crux of differentiating a power series is to be able to bring the derivative inside the sum. Because derivatives are limits, we can use dominated convergence to understand when we can switch sums and limits. One crucial step here is the Mean Value Theorem.

Theorem 25.1 Let \(f_k(x)\) be a sequence of functions on a domain \(D\), and suppose that:

  • For each \(k\), \(f_k(x)\) is differentiable at all \(x\in D\).
  • For each \(x\in D\), \(\sum_kf_k(x)\) is convergent.
  • For each \(k\), there is an \(M_k\) with \(|f_k^\prime(x)|<M_k\) for all \(x\in D\).
  • The sum \(\sum M_k\) is convergent.

Then, the sum \(\sum_k f^\prime_k(x)\) is convergent, and \[\left(\sum_k f_k(x)\right)^\prime=\sum_k f^\prime_k(x)\]

Proof. Recall the limit definition of the derivative (Definition 22.1): \[\left(\sum_k f_k(x)\right)^\prime=\lim_{y\to x}\frac{\sum_k f_k(y)-\sum_k f_k(x)}{y-x}\] Writing each sum as the limit of finite sums, we may use the limit theorems (Theorem 9.3, Theorem 9.2) to combine this into a single sum \[\lim_{y\to x}\frac{\lim_N\sum_{k=0}^N f_k(y)-\lim_N\sum_{k=0}^N f_k(x)}{y-x}=\lim_{y\to x}\lim_N\sum_{k=0}^N\frac{f_k(y)-f_k(x)}{y-x}\]

And now, rewriting the limit of partial sums as an infinite sum, we see \[\left(\sum_k f_k(x)\right)^\prime=\lim_{y\to x}\sum_k \frac{f_k(y)-f_k(x)}{y-x}\]

If we are justified in switching the limit and the sum via Theorem 21.2, this becomes

\[\sum_k\lim_{y\to x}\frac{f_k(y)-f_k(x)}{y-x}=\sum_k f^\prime_k(x)\]

which is exactly what we want. Thus, all we need to do is justify that the conditions of Theorem 21.2 are satisfied, for the terms \[g_k(y)=\frac{f_k(y)-f_k(x)}{y-x}\] with \(x\) a fixed constant and \(y\) the variable, as we take the limit \(y\to x\).

Step 1: Show \(\lim_{y\to x}g_k(y)\) exists. We have assumed that \(f_k\) is differentiable at each point of \(D\), which is exactly the assumption that \(\lim_{y\to x}g_k(y)\) exists.

Step 2: Show \(\sum_k g_k(y)\) is convergent. We have assumed that \(\sum_k f_k(t)\) exists for all \(t\in D\). Let \(x\neq y\) be two points in \(D\). Then both \(\sum_k f_k(x)\) and \(\sum_k f_k(y)\) exist, and by the limit theorems, the following series also converges: \[\frac{1}{y-x}\left(\sum_k f_k(y)-\sum_k f_k(x)\right)=\sum_k\frac{f_k(y)-f_k(x)}{y-x}=\sum_k g_k(y)\]

Step 3: Find an \(M_k\) with \(|g_k(y)|<M_k\) for all \(y\neq x\). We are given by assumption that there is such an \(M_k\) bounding the derivative \(f_k^\prime\) on \(D\): we need only show this suffices. If \(x\neq y\) then \(g_k(y)\) measures the slope of the secant line of \(f_k\) between \(x\) and \(y\), so by the Mean Value Theorem (Theorem 24.3) there is some \(c\) between \(x\) and \(y\) with \[|g_k(y)|=\left|\frac{f_k(y)-f_k(x)}{y-x}\right|=\left|f^\prime_k(c)\right|\] Since \(|f^\prime_k(c)|\leq M_k\) by assumption (as \(c\in D\)), \(M_k\) is a bound for \(g_k\) as required.

Step 4: Show \(\sum M_k\) is convergent. This is an assumption, as the \(M_k\)'s are the same as originally given. Thus there's nothing left to show, and dominated convergence applies!
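Before applying this to power series, a quick illustration may help (the particular series \(\sum_{k\geq 1}x^k/k^3\) is chosen purely for this example). On \(D=(-1,1)\), each term \(f_k(x)=x^k/k^3\) is differentiable, the series \(\sum_k x^k/k^3\) converges at every \(x\in D\) by comparison with \(\sum_k 1/k^3\), and the derivatives satisfy \(|f_k^\prime(x)|=|x|^{k-1}/k^2\leq 1/k^2=:M_k\), with \(\sum_k 1/k^2\) convergent. So all four hypotheses hold, and the theorem gives \[\left(\sum_{k\geq 1}\frac{x^k}{k^3}\right)^\prime=\sum_{k\geq 1}\frac{x^{k-1}}{k^2}\qquad\text{for all }x\in(-1,1)\]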

25.1.2 Term-by-Term Differentiation

Now, we will attempt to apply dominated convergence for derivatives to a power series. Should this work, we will find the derivative can be calculated via term-by-term differentiation:

\[\sum_{k\geq 0} a_kx^k \mapsto \sum_{k\geq 1}ka_kx^{k-1}\]

So, let’s begin by investigating this series: can we figure out when it converges?

Proposition 25.1 Let \(f(x)=\sum_{k\geq 0}a_kx^k\) be a power series with radius of convergence \(R\). Then the series of term-wise derivatives also has radius of convergence \(R\): \[g(x)=\sum_{k\geq 1}ka_kx^{k-1}\]

Proof. Suppose the radius of convergence of \(f\) can be computed using the ratio test, meaning (Theorem 20.1) that \(\lim |a_{n+1}/a_n|=1/R\). Now, we wish to apply the ratio test to our new series \(g(x)=\sum_{k\geq 1}ka_kx^{k-1}\). That is, we must compute the limit

\[\lim\Big|\frac{(n+1)a_{n+1}x^n}{na_nx^{n-1}}\Big|\]

Simplifying this fraction and breaking into components gives

\[\lim \left(\frac{n+1}{n}\right)\Big|\frac{a_{n+1}}{a_n}\Big||x|\]

We can compute the limit of the first term here directly, as \(\lim (n+1)/n = \lim (1+1/n)=1\) and we know the limit of the second term is \(1/R\) by our initial assumption. As \(x\) is constant, this is all we need to apply the limit theorems and conclude

\[\lim\left(\frac{n+1}{n}\right)\Big|\frac{a_{n+1}}{a_n}\Big||x| = \frac{|x|}{R}\] and this is \(<1\) so long as \(|x|<R\): that is, our new series converges with radius of convergence \(R\).

(A small note: for all the series we will encounter, the radius of convergence can be computed easily via the ratio test; if it could not be, we would need a more involved argument above to justify that first step.)
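As a concrete check of this proposition (with coefficients chosen just for illustration), take \(a_k=1/2^k\): then \(\lim|a_{n+1}/a_n|=1/2\), so \(f(x)=\sum_{k\geq 0}x^k/2^k\) has radius of convergence \(R=2\). For the series of term-wise derivatives, the computation above gives \[\lim\left(\frac{n+1}{n}\right)\Big|\frac{a_{n+1}}{a_n}\Big||x|=\frac{|x|}{2}\] which is \(<1\) exactly when \(|x|<2\): the derivative series \(\sum_{k\geq 1}\frac{k}{2^k}x^{k-1}\) again has radius of convergence \(2\).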

Now that we know our proposed derivative actually makes sense (converges), it's time to show we are actually justified in exchanging the sum limit and the derivative limit, using Dominated Convergence.

Theorem 25.2 (Differentiation of Power Series) Let \(f=\sum_{k\geq 0}a_kx^k\) be a power series with radius of convergence \(R\). Then for \(x\in(-R,R)\): \[f^\prime(x)= \sum_{k\geq 1} ka_k x^{k-1}\]

Proof. The terms on the right are the term-by-term derivatives of \(f\). That is, we are trying to show \[\left(\sum_ka_kx^k\right)^\prime=\sum_k \left(a_kx^k\right)^\prime\] which is precisely the situation to which dominated convergence for derivatives (Theorem 25.1) is suited. This theorem has several hypotheses we have to verify on the functions \(f_k(x)=a_kx^k\).

To start, let \(x\in(-R,R)\) be arbitrary. Since \(x\) lies strictly within the interval of convergence, we may choose some closed interval \(I\subset (-R,R)\) containing \(x\). For concreteness we take \(I=[-y,y]\) for some \(y\) with \(|x|<y<R\), and use this interval as the domain when applying Theorem 25.1.

Requirement 1: \(f_k\) is differentiable on \(I\). This is immediate, as \(f_k(x)=a_kx^k\) is a polynomial and polynomials are differentiable on the entire real line.

Requirement 2: \(\sum_k a_kx^k\) converges for each \(x\in I\). This is also immediate by definition, since \(I\) is a proper subset of the interval of convergence for \(f\).

Requirement 3: There is an \(M_k\) bounding \(|(a_kx^k)^\prime|\) on \(I\). The absolute value of this derivative is \(k|a_k||x|^{k-1}\), which is a monotone increasing function of \(|x|\). Thus, if \(I=[-y,y]\) we may set \(M_k=k|a_k|y^{k-1}\) and note \[\forall x\in I,\,\, k|a_k||x|^{k-1}\leq M_k\]

Requirement 4: \(\sum_k M_k\) is convergent. Consider the sum: \[\sum_k M_k=\sum_k k|a_k|y^{k-1}\] Because \(y\) is within the radius of convergence of the original function \(f\), we know that \(\sum_k a_ky^k\) is absolutely convergent, and thus that the power series \(\sum_k |a_k|x^k\) converges at \(x=y\). Applying Proposition 25.1 to this power series, its series of term-wise derivatives \(\sum_k k|a_k|x^{k-1}\) is also convergent at \(x=y\). But evaluating at \(y\) gives exactly \(\sum_k M_k\)!

Thus, all the requirements are satisfied, and dominated convergence allows us to switch the order of the sum with differentiation.

Example 25.1 We know the geometric series converges to \(1/(1-x)\) on \((-1,1)\): \[\sum_{k\geq 0}x^k=\frac{1}{1-x}\] Differentiating term by term yields a power series for \(1/(1-x)^2\): \[\begin{align*}\frac{1}{(1-x)^2}&=\left(\frac{1}{1-x}\right)^\prime\\ &=\left(\sum_{k\geq 0}x^k\right)^\prime\\ &=\sum_{k\geq 0}\left(x^k\right)^\prime\\ &= \sum_{k\geq 1}kx^{k-1}\\ &= 1+2x+3x^2+4x^3+\cdots \end{align*}\]
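As a quick sanity check, we can evaluate both sides at a specific point, say \(x=1/2\). The formula predicts \[\sum_{k\geq 1}k\left(\frac{1}{2}\right)^{k-1}=\frac{1}{(1-1/2)^2}=4\] and indeed the partial sums \(1,\ 2,\ 2.75,\ 3.25,\ 3.5625,\ldots\) climb steadily toward \(4\).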

The fact that power series are differentiable on their entire radius of convergence puts a strong constraint on which sort of functions can ever be written as the limit of such a series.

Example 25.2 The absolute value \(|x|\) is not expressible as a power series: any power series \(\sum_k a_kx^k\) with positive radius of convergence is differentiable at \(0\), but \(|x|\) is not.

25.2 Power Series Representations

Definition 25.1 A power series representation of a function \(f\) at a point \(a\) is a power series \(p\) where \(p(x)=f(x)\) on some neighborhood of \(a\).

How could one try to track down a power series representation of a given function? Power series, being limits of polynomials, are actually rather constrained objects: a little thought shows that for a given \(f\) there is only one possible candidate for a power series representation.

Theorem 25.3 (Candidate Series Representation) Let \(f\) be a smooth real valued function whose domain contains a neighborhood of \(0\), and let \(p(x)=\sum_{k\geq 0}a_kx^k\) be a power series which equals \(f\) on some neighborhood of zero. Then, the power series \(p\) is uniquely determined:

\[p(x)=\sum_{k\geq 0}\frac{f^{(k)}(0)}{k!}x^k\]

Proof. Let \(f(x)\) be a smooth function and \(p(x)=\sum_{k\geq 0 }a_kx^k\) be a power series which equals \(f\) on some neighborhood of zero. Then in particular, \(p(0)=f(0)\), so

\[\begin{align*} f(0)&=\lim_N \left(a_0+a_1\cdot 0+a_2\cdot 0^2+\cdots+a_N\cdot 0^N\right)\\ &= \lim_N (a_0+0+0+\cdots +0)\\ &= a_0 \end{align*}\]

Now, we know the first coefficient of \(p\). How can we get the next? Differentiate!

\[p^\prime(x)=\left(\sum_{k\geq 0}a_kx^k\right)^\prime = \sum_{k\geq 0}(a_kx^k)^\prime=\sum_{k\geq 1}ka_kx^{k-1}\]

Since \(f(x)=p(x)\) on some small neighborhood of zero and the derivative is a limit, \(f^\prime(0)=p^\prime(0)\). Evaluating \(p^\prime\) at \(0\) picks out the constant term of this new power series:

\[\begin{align*} f^\prime(0)&=\lim_N \left(a_1+2a_2\cdot 0+3a_3\cdot 0^2+\cdots+Na_N\cdot 0^{N-1}\right)\\ &= \lim_N (a_1+0+0+\cdots +0)\\ &= a_1 \end{align*}\]

Continuing in this way, the second derivative will have a multiple of \(a_2\) as its constant term:

\[p^{\prime\prime}(x)=2a_2 + 3\cdot 2 \cdot a_3 x+4\cdot 3\cdot a_4 x^2+\cdots\]

And evaluating the equality \(f^{\prime\prime}(x)=p^{\prime\prime}(x)\) at zero yields

\[f^{\prime\prime}(0)=2a_2,\hspace{1cm}\mathrm{so}\hspace{1cm} a_2=\frac{f^{\prime\prime}(0)}{2}\]

This pattern continues indefinitely, as \(f\) is infinitely differentiable. The coefficient \(a_n\) reaches the constant term only after \(n\) differentiations (as it was originally the coefficient of \(x^n\)), its term transforming as

\[a_nx^n\mapsto na_nx^{n-1}\mapsto n(n-1)a_nx^{n-2}\mapsto\cdots\mapsto n(n-1)(n-2)\cdots 3\cdot 2\cdot 1 a_n\]

As the constant term of \(p^{(n)}\) this means \(p^{(n)}(0)=n!a_n\), and so using \(f^{(n)}(0)=p^{(n)}(0)\), \[a_n=\frac{f^{(n)}(0)}{n!}\]

In each case there was no choice to be made: so long as \(f=p\) on some small neighborhood of zero, the formula for \(p\) is uniquely determined to be

\[p(x)=\sum_{k\geq 0}\frac{f^{(k)}(0)}{k!}x^k\]

Definition 25.2 (Taylor Series) For any smooth function \(f(x)\) we define the Taylor Polynomial (centered at \(0\)) of degree \(N\) to be \[p_N(x)=\sum_{0\leq k\leq N}\frac{f^{(k)}(0)}{k!}x^k\]

In the limit as \(N\to\infty\), this defines the Taylor Series \(p(x)\) for \(f\).
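For example, take \(f(x)=1/(1-x)\). One checks by induction that \(f^{(k)}(x)=k!/(1-x)^{k+1}\), so \(f^{(k)}(0)=k!\) and every Taylor coefficient is \[\frac{f^{(k)}(0)}{k!}=\frac{k!}{k!}=1\] so the Taylor series of \(1/(1-x)\) at zero is exactly the geometric series \(\sum_{k\geq 0}x^k\).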

Indeed, we have already seen that the geometric series \(\sum_{k\geq 0}x^k\) is a power series representation of the function \(1/(1-x)\) at zero: it actually converges to it on the entire interval \((-1,1)\). There are many reasons one may be interested in finding a power series representation of a function, and the above theorem tells us that if we were to search for one, there is a single natural candidate. If there is any power series representation, it's this one!

So the next natural step is to study this representation: does it actually converge to \(f(x)\)?

25.2.1 Taylor’s Error Formula

Our next goal is to understand how to create power series that converge to specific functions, and more importantly to prove that our series actually do what we want! To do so, we are going to need some tools relating a function's derivatives to its values. Rolle's Theorem and the Mean Value Theorem do this for the first derivative, so we present a generalization here, the polynomial mean value theorem, which does so for \(n^{th}\) derivatives.

Theorem 25.4 (Generalized Rolle’s Theorem) Let \(f\) be a function which is \(n+1\) times differentiable on the interior of an interval \([a,b]\). Assume that \(f(a)=f(b)=0\), and further that the first \(n\) derivatives at \(a\) are zero: \[f(a)=f^\prime(a)=f^{\prime\prime}(a)=\cdots=f^{(n)}(a)=0\] Then, there exists some \(c\in(a,b)\) where \(f^{(n+1)}(c)=0\).

Proof. Because \(f\) is continuous and differentiable, and \(f(a)=f(b)\), the original Rolle's Theorem implies that there exists some \(c_1\in(a,b)\) where \(f^\prime(c_1)=0\). But now, we know that \(f^\prime(a)=f^\prime(c_1)=0\), so we can apply Rolle's theorem to \(f^\prime\) on \([a,c_1]\) to get a point \(c_2\in(a,c_1)\) with \(f^{\prime\prime}(c_2)=0\).

Continuing in this way, we get a \(c_3\in(a,c_2)\) with \(f^{(3)}(c_3)=0\), all the way up to a \(c_n\in(a,c_{n-1})\) where \(f^{(n)}(c_n)=0\). This leaves one more application of Rolle's theorem possible, as we assumed \(f^{(n)}(a)=0\), so we get a \(c\in(a,c_n)\) with \(f^{(n+1)}(c)=0\) as claimed.
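As a small illustration (the polynomial here is chosen just for this purpose), take \(f(x)=x^3-x^2\) on \([0,1]\) with \(n=1\): then \(f(0)=f(1)=0\) and \(f^\prime(0)=0\), so the theorem promises some \(c\in(0,1)\) with \(f^{\prime\prime}(c)=0\). Indeed \(f^{\prime\prime}(x)=6x-2\) vanishes at \(c=1/3\).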

Corollary 25.1 (A polynomial Mean Value Theorem) Let \(f(x)\) be an \(n+1\)-times differentiable function on \([a,b]\) and \(h(x)\) a polynomial which shares its value and first \(n\) derivatives with \(f\) at \(a\): \[f(a)=h(a),\hspace{0.2cm}f^\prime(a)=h^\prime(a),\ldots,\hspace{0.2cm}f^{(n)}(a)=h^{(n)}(a)\] Then, if additionally \(f(b)=h(b)\), there must exist some point \(c\in(a,b)\) where \[f^{(n+1)}(c)=h^{(n+1)}(c)\]

Proof. Define the function \(g(x)=f(x)-h(x)\). Then all the first \(n\) derivatives of \(g\) at \(x=a\) are zero (as \(f\) and \(h\) had the same derivatives), and furthermore \(g(b)=0\) as well, since \(f(b)=h(b)\). This means we can apply the generalized Rolle’s theorem and find a \(c\in(a,b)\) with \[g^{(n+1)}(c)=0\] That is, \(f^{(n+1)}(c)=h^{(n+1)}(c)\).

Theorem 25.5 (Taylor's Error Formula) Let \(f(x)\) be an \(n+1\)-times differentiable function, and let \(p_n(x)=\sum_{0\leq k\leq n}\frac{f^{(k)}(0)}{k!}x^k\) be its degree \(n\) Taylor polynomial.

Then for any fixed \(b\in\RR\), we have \[f(b)=p_n(b)+\frac{f^{(n+1)}(c)}{(n+1)!}b^{n+1}\]

for some \(c\) between \(0\) and \(b\).

Proof. Fix a point \(b\), and consider the functions \(f(x)\) and \(p_n(x)\) on the interval \([0,b]\). These share their value and first \(n\) derivatives at \(0\), but in general \(f(b)\neq p_n(b)\): in fact, it is precisely this error we are trying to quantify.

We need to modify \(p_n\) in some way without affecting its first \(n\) derivatives at zero. One natural way is to add a multiple of \(x^{n+1}\), so define

\[q(x)=p_n(x)+\lambda x^{n+1}\] for some \(\lambda\in\RR\), where we choose \(\lambda\) so that \(f(b)=q(b)\). Adding a multiple of \(x^{n+1}\) does not disturb the first \(n\) derivatives at zero, so \(q^{(k)}(0)=f^{(k)}(0)\) for \(k\leq n\), and we can now apply the polynomial mean value theorem to these two functions, and get some \(c\in(0,b)\) where \[f^{(n+1)}(c)=q^{(n+1)}(c)\]

Since \(p_n\) has degree \(n\), its \((n+1)^{st}\) derivative is zero, and \[q^{(n+1)}(x)=0+\left(\lambda x^{n+1}\right)^{(n+1)}=(n+1)!\lambda\] Putting these last two observations together yields

\[f^{(n+1)}(c)=(n+1)!\lambda \implies \lambda = \frac{f^{(n+1)}(c)}{(n+1)!}\]

As \(q(b)=f(b)\) by construction, this in turn gives what we were after:

\[f(b)=p_n(b)+\frac{f^{(n+1)}(c)}{(n+1)!}b^{n+1}\]
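As an illustration of how this formula gets used (relying on the standard facts that \(\exp^\prime=\exp\), \(\exp(0)=1\), and \(e<3\)), take \(f=\exp\), \(b=1\), and \(n=4\). Every Taylor coefficient of \(\exp\) at zero is \(1/k!\), so the formula reads \[e=\left(1+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{24}\right)+\frac{e^c}{5!}\qquad\text{for some }c\in(0,1)\] The parenthesized sum is \(p_4(1)\approx 2.7083\), and since \(e^c<e<3\) the error is at most \(3/120=0.025\); the true error is about \(0.0099\).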

25.2.2 Series Based at \(a\neq 0\)

All of our discussion (and indeed, everything we will need about power series for our course) dealt with defining a power series based on derivative information at zero. But of course, this was an arbitrary choice: one could do exactly the same thing based at any point \(a\in\RR\).

Theorem 25.6 Let \(f\) be a smooth function, defined in a neighborhood of \(a\in\RR\). Then there is a unique power series which has all the same derivatives as \(f\) at \(a\): \[p(x)=\sum_{k\geq 0}\frac{f^{(k)}(a)}{k!}(x-a)^k\] And, for any \(N\), the error between \(f\) and the \(N^{th}\) partial sum is quantified as \[f(x)-p_N(x)=\frac{f^{(N+1)}(\xi)}{(N+1)!}(x-a)^{N+1}\] for some \(\xi\) between \(a\) and \(x\).

Exercise 25.1 Prove this.
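For a concrete instance of such a recentered series (offered purely as an illustration, not as part of the exercise), take \(f(x)=1/(1-x)\) and a center \(a\neq 1\). Since \(f^{(k)}(x)=k!/(1-x)^{k+1}\), the coefficients are \(f^{(k)}(a)/k!=1/(1-a)^{k+1}\), giving \[p(x)=\sum_{k\geq 0}\frac{(x-a)^k}{(1-a)^{k+1}}\] This is a geometric series in \(\frac{x-a}{1-a}\): it converges exactly when \(|x-a|<|1-a|\), and there it sums to \(\frac{1}{1-a}\cdot\frac{1}{1-\frac{x-a}{1-a}}=\frac{1}{1-x}\), as expected.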

25.3 \(\bigstar\) Smoothness & Analyticity

Theorem 25.2 above is extremely useful for calculational purposes: it tells us how to find the derivative of a power series. But it also provides a window into the special nature of power series themselves. For not only did we learn that a power series is differentiable, we learned that its derivative is another power series (Theorem 25.2) with the same radius of convergence (Proposition 25.1). Since the derivative is itself a power series, we can apply Theorem 25.2 again to find its derivative, which is another power series, and so on.

Thus a power series isn't only differentiable, but can be differentiated over and over again! Recall that such functions are called smooth.

Proposition 25.2 (Power Series are Smooth Functions) Let \(f\) be a power series with radius of convergence \(R\). Then \(f\) is infinitely differentiable on the interval \((-R,R)\).

Proof. Let \(f(x)=\sum_{k\geq 0}a_kx^k\), which converges absolutely on \((-R,R)\) by assumption. Then by Theorem 25.2 and Proposition 25.1, \[f^\prime(x)=\sum_{k\geq 0} (k+1)a_{k+1}x^{k}\] is also a power series which converges on \((-R,R)\) (here I have re-indexed the sum by powers of \(k\) for clarity, whereas the cited theorem has it indexed by powers of \(k-1\)). Since \(f^\prime\) is again a power series with the same radius of convergence, we may apply Theorem 25.2 and Proposition 25.1 to it in turn: its derivative \(f^{\prime\prime}\) is yet another power series on \((-R,R)\). Continuing inductively, \(f^{(n)}\) exists on \((-R,R)\) for every \(n\), so \(f\) is infinitely differentiable there.

Smoothness is a much stronger requirement than mere differentiability, and it gives us a much finer tool for recognizing when a function cannot be written as a power series:

Example 25.3 Consider the function

\[f(x)=\begin{cases} x^m &x\leq 0\\ x^n & x\geq 0 \end{cases}\]

Assuming without loss of generality that \(m<n\), this function is continuous and can be differentiated \(m-1\) times everywhere, but it is not differentiable infinitely many times: at \(x=0\) the \(m^{th}\) left hand derivative is \(m!\) whereas the \(m^{th}\) right hand derivative is \(0\), so the \(m^{th}\) derivative does not exist there. Thus, this function cannot be represented by a power series on any neighborhood of \(0\).
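For instance, with \(m=2\) and \(n=3\) the function is \(x^2\) for \(x\leq 0\) and \(x^3\) for \(x\geq 0\). It is differentiable once everywhere, with \[f^\prime(x)=\begin{cases} 2x & x\leq 0\\ 3x^2 & x> 0\end{cases}\] but \(f^{\prime\prime}(0)\) does not exist: the difference quotients of \(f^\prime\) at zero tend to \(2\) from the left and to \(0\) from the right.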

However, the ability to be represented by a power series is even stricter than being smooth, motivating the definition of analytic functions which pervades much of advanced analysis.

25.3.1 Analytic Functions

Definition 25.3 (Analytic Functions) An analytic function is a function \(f(x)\) which, in a neighborhood of every point \(a\) in its domain, can be written as a power series \(\sum_k c_k (x-a)^k\).

Corollary 25.2 (The exponential is analytic) The function \(\exp(x)\) has a power series which converges for all \(x\in\RR\), and moreover converges to the actual exponential at all points. Thus, it's analytic.

Corollary 25.3 (Sine and Cosine are Analytic) As you’ll prove on the final project, these functions have power series that converge on the entire real line, and further work (in the project, via complex exponentials; or alternatively with the Taylor Error formula) shows the limits equal the sine and cosine at all points. Thus these functions are analytic.

In both of these examples, we needed only a single power series to verify analyticity, as it converged everywhere! But for functions whose power series have limited radii of convergence, one may need many power series to cover the entire domain of the function.

Exercise 25.2 (The function \(1/(1+x^2)\) is analytic) Derive a power series for \(\frac{1}{1+x^2}\) by substitution from the geometric series. Show that this power series has radius of convergence \(1\).

But then show that at every \(a\in\RR\), the power series for \(\frac{1}{1+x^2}\) centered at \(a\) given by Theorem 25.6 has a nonzero radius of convergence, and converges to \(1/(1+x^2)\) within it.

Thus, while we need infinitely many power series to fully cover the graph of \(\frac{1}{1+x^2}\), it's still analytic.

In fact, probably every smooth function you have ever heard of is analytic: it's hard to imagine what could go wrong. Somehow you can take infinitely many derivatives, but in the end the error term does not go to zero?

It's a surprising fact of real analysis, with very wide implications, that there exist smooth but non-analytic functions.

Exercise 25.3 Consider the function

\[s(x)=\begin{cases} e^{-1/x} & x>0\\ 0 & x\leq 0 \end{cases}\]

Show that \(s\) is infinitely differentiable at zero, and that for all \(n\) \[s^{(n)}(0)=0\] This implies that the candidate power series for \(s\) centered at \(a=0\) is the zero function. But \(s\) is not identically zero on any neighborhood of \(0\) (show that \(s(x)>0\) for any \(x>0\)). Thus \(s\) is smooth, but not analytic.

Hint: compute the derivative via right and left hand limits. We know the left hand limit is always zero, so you just need to show the right hand limit is zero for each derivative…
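To build some intuition for why this happens (the specific values here are just illustrative), note how violently \(e^{-1/x}\) flattens out as \(x\to 0^+\): \[s(0.1)=e^{-10}\approx 4.5\times 10^{-5},\qquad s(0.01)=e^{-100}\approx 3.7\times 10^{-44}\] The function vanishes faster than any power of \(x\) near zero, which is why every derivative at \(0\) works out to be \(0\); yet \(s(x)>0\) for every \(x>0\), so the zero series cannot equal \(s\) on any neighborhood of \(0\).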