26 The Exponential
Highlights of this Chapter: we reach a culmination of several topics, drawing in theory from across series and differentiability to come up with a formula for the natural exponential \(\exp(x)\), and an explicit formula for its base \(e\).
26.1 Prior Work
It’s useful to start by summarizing what we already know. We defined the exponential function as a nonconstant solution to the law of exponents \[E(x+y)=E(x)E(y)\]
26.1.1 Properties
Such a definition does not guarantee that any such function exists, but using the functional equation one can readily begin to prove many propositions about exponentials, assuming they exist. For example, some of the first we proved were:
- If \(E(x)\) is an exponential then \(E(x)\) is never zero.
- If \(E(x)\) is an exponential, then \(E(0)=1\).
- If \(E(x)\) is an exponential, then so is \(E(kx)\).
Through the introduction to differentiation, we can prove even more about the exponential, such as
- If \(E(x)\) is an exponential, then \(E(x)\) is differentiable, and \(E^\prime(x)=cE(x)\) for some \(c\neq 0\), and in fact \(c=E^\prime(0)\).
Combining this with previous facts and the chain rule, we can see that if \(E(x)\) is any exponential, then \(E(x/c)\) is an exponential whose derivative at zero is \(1\). We called such a function the natural exponential, and so have proven
- If any exponential exists at all, then there is a natural exponential \(\exp(x)\) satisfying \(\exp^\prime(x)=\exp(x)\).
From here, we can actually learn quite a lot about this function \(\exp\), if it exists. For instance
Example 26.1 If \(\exp\) exists, then it is a strictly increasing function on the entire real line.
To start the proof of this, note since \(\exp(0)=1\) and \(\exp(x)\) is never zero, in fact \(\exp\) is always positive: were it not then there’d be some \(y\) with \(\exp(y)<0\), and since \(0\) lies between \(1=\exp(0)\) and \(\exp(y)\), by the intermediate value theorem there would have to be a \(z\) with \(\exp(z)=0\). But we know no such points exist.
Now, because \(\exp(x)>0\) and \(\exp^\prime(x)=\exp(x)\), we see that the derivative is strictly positive. And, by an argument using the mean value theorem, we know that on any interval where the derivative is positive, the function is increasing. So \(\exp\) is increasing on all of \(\RR\).
26.1.2 Existence
This simplifies things a bit: proving the existence of any exponential at all is enough to get us to the existence of \(\exp\). But no argument starting from the functional equation alone can prove that there are exponential functions at all! For that we needed to do some additional work, which you did over the course of Assignment 7, where you showed:
- We can define \(2^x\) as \(\lim 2^{r_n}\) for \(r_n\) an arbitrary sequence of rational numbers converging to \(x\). That is
- For any \(x\in\RR\), and any sequence \(r_n\to x\), the sequence \(2^{r_n}\) converges
- The value of \(\lim 2^{r_n}\) does not depend on the choice of sequence, so long as that sequence converges to \(x\).
- The function \(2^x\) defined this way is continuous
- The function \(2^x\) defined this way satisfies the law of exponents on the rationals (by definition), and so by continuity, satisfies the law of exponents for all real inputs.
Corollary 26.1 Exponential functions exist.
Thus, at this point we are certain that there is a mysterious real function out there called the natural exponential. We just don’t know anything about how to compute it! We are even ignorant of the most basic question: if we were to write \(\exp(x)=a^x\) in the form above for some base \(a\), what number is \(a\)?
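As an aside, this limiting definition is easy to experiment with numerically. The sketch below is my own illustration (not part of the assignment): it uses decimal truncations of \(\sqrt 2\) as the rational sequence \(r_n\) and watches \(2^{r_n}\) settle down.

```python
import math
from fractions import Fraction

x = math.sqrt(2)  # an irrational exponent to aim for

# r_n: decimal truncations of x, one convenient sequence of rationals with r_n -> x
for n in range(1, 7):
    r = Fraction(int(x * 10**n), 10**n)  # 14/10, 141/100, 1414/1000, ...
    print(r, 2.0 ** float(r))            # the right column stabilizes
```

The printed values approach a single number, which (once the theory above is in place) deserves to be called \(2^{\sqrt 2}\).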
26.2 Finding a Power Series
To work with the natural exponential efficiently, we need to find a formula that lets us compute it. And this is exactly what power series are good at! However, the theory of power series is a little tricky, as we saw in the last chapter. Not every function has a power series representation, but if a function does, there’s only one possibility:
Proposition 26.1 If the natural exponential has a power series representation, then it is \[p(x)=\sum_{k\geq 0}\frac{x^k}{k!}\]
Proof. We know the only candidate series for a function \(f(x)\) is \(\sum_{k\geq 0}\frac{f^{(k)}(0)}{k!}x^k\), so for \(\exp\) this is
\[p(x)=\sum_{k\geq 0}\frac{\exp^{(k)}(0)}{k!}x^k\]
However, we know that \(\exp^\prime=\exp\) and so inductively \(\exp^{(k)}=\exp\), and so \[\exp^{(k)}(0)=\exp(0)=1\] Thus \[p(x)=\sum_{k\geq 0}\frac{1}{k!}x^k\]
So now, while we know \(\exp\) exists, we are back to talking about hypotheticals, because we don’t know if it is representable by a power series! The first step to fixing this is to show that the proposed series at least converges.
Proposition 26.2 The series \(p(x)=\sum_{k\geq 0}\frac{x^k}{k!}\) converges for all \(x\in\RR\).
Proof. This series converges for all \(x\in\RR\) by the Ratio test, as \[\lim\Big|\frac{x^{n+1}/(n+1)!}{x^n/n!}\Big|=\lim \frac{|x|}{n+1}=0<1\]
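To see the ratio test’s conclusion in action, here is a small numerical sketch (the names and the choice \(x=5\) are mine): the term ratios \(|x|/(n+1)\) shrink below \(1\), and consequently the partial sums stop moving.

```python
x = 5.0  # any real input; larger values just take a few more terms

# the ratios |x|/(n+1) from the ratio test eventually drop below 1
print([abs(x) / (n + 1) for n in (0, 4, 9, 49)])  # 5.0, 1.0, 0.5, 0.1

# consequently the partial sums of p(x) = sum x^k/k! settle down
s, term, partials = 0.0, 1.0, []
for k in range(40):
    s += term
    partials.append(s)
    term *= x / (k + 1)  # turns x^k/k! into x^{k+1}/(k+1)!
print(partials[19], partials[39])  # nearly identical
```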
Now, all that remains is to show that \(p(x)=\exp(x)\). Since \(p\) is a power series, this really means that the limit of its partial sums equals \(\exp(x)\), or
\[\forall x\in\RR\,\,\, \exp(x)=\lim_N p_N(x)\]
For any finite partial sum \(p_N\), we know that it is not exactly equal to \(\exp(x)\) (as this finite sum is just a polynomial!). Thus there must be some error term \(R_N = \exp-p_N\), or
\[\exp(x)=p_N(x)+R_N(x)\]
This is helpful, as we know from the previous chapter how to calculate such an error, using the Taylor Error Formula: for each fixed \(x\in\RR\) and each fixed \(N\in\NN\), there is some point \(c_N\) between \(0\) and \(x\) such that
\[R_N(x)=\frac{\exp^{(N+1)}(c_N)}{(N+1)!}x^{N+1}\]
And, to show the power series becomes the natural exponential in the limit, we just need to show this error tends to zero!
Proposition 26.3 As \(N\to\infty\), for any \(x\in\RR\) the Taylor error term for the exponential goes to zero: \[R_N(x)\to 0\]
Proof. Fix some \(x\in\RR\). Then for an arbitrary \(N\), we know \[R_N(x)=\frac{\exp^{(N+1)}(c_N)}{(N+1)!}x^{N+1}\] where \(c_N\) is some number between \(0\) and \(x\) that we don’t have much control over (as it came from an existence proof: Rolle’s theorem in our derivation of the Taylor error). Because we don’t know \(c_N\) explicitly, it’s hard to directly compute the limit, and so instead we use the squeeze theorem:
We know that \(\exp\) is an increasing function: thus, the fact that \(0\leq c_N\leq x\) implies that \(1=\exp(0)\leq \exp(c_N)\leq \exp(x)\), and multiplying this inequality through by \(\frac{x^{N+1}}{(N+1)!}\) yields the inequality
\[\frac{x^{N+1}}{(N+1)!}\leq R_N(x)=\exp(c_N)\frac{x^{N+1}}{(N+1)!}\leq \exp(x)\frac{x^{N+1}}{(N+1)!}\]
(Here I have assumed that \(x\geq 0\): if \(x<0\) then the inequalities reverse for even values of \(N\) as \(x^{N+1}\) is negative and we are multiplying through by a negative number. But this does not affect the fact that the error term \(R_N(x)\) is still sandwiched between the two.)
So now our problem reduces to showing that the upper and lower bounds converge to zero. Since \(\exp(x)\) is a constant (remember, \(N\) is our variable here as we take the limit), taking the limit of both the upper and lower bounds comes down to just finding the limit
\[\lim_N \frac{x^{N+1}}{(N+1)!}\]
But this is just the \(N+1\)st term of the power series \(p(x)=\sum_{n\geq 0}x^n/n!\) we studied above! And since this power series converges, we know that as \(n\to\infty\) its terms must go to zero (the divergence test). Thus
\[\lim_N \frac{x^{N+1}}{(N+1)!}=0\hspace{1cm}\lim_N \exp(x)\frac{x^{N+1}}{(N+1)!}=0\]
and so by the squeeze theorem, \(R_N(x)\) converges and
\[\lim_N R_N(x)=0\]
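The engine of this proof is the fact that factorials beat powers: \(x^{N+1}/(N+1)!\to 0\) no matter how large \(x\) is. A quick numerical sketch (the choice \(x=10\) is arbitrary) shows the bound growing at first and then collapsing:

```python
import math

x = 10.0
for N in (5, 10, 20, 30, 40):
    bound = x ** (N + 1) / math.factorial(N + 1)
    print(N, bound)  # grows while N+1 < x, then collapses toward 0
```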
Now we have all the components together at last: we know that \(\exp\) exists, we have a candidate power series representation, that candidate converges, and the error between it and the exponential goes to zero!
Theorem 26.1 The natural exponential is given by the following power series \[\exp(x)=\sum_{k\geq 0}\frac{x^k}{k!}\]
Proof. Fix an arbitrary \(x\in\RR\). Then for any \(N\) we can write \[\exp(x)=p_N(x)+R_N(x)\] where \(p_N\) is the partial sum of \(p(x)=\sum_{k\geq 0}x^k/k!\) and \(R_N(x)\) is the error. Since we have proven both \(p_N\) and \(R_N\) converge, we can take the limit of both sides using the limit theorems (and, as \(\exp(x)\) is constant in \(N\), clearly \(\lim_N \exp(x)=\exp(x)\)):
\[\begin{align*} \exp(x)&=\lim_N(p_N(x)+R_N(x))\\ &= \lim_N p_N(x)+\lim_N R_N(x)\\ &= p(x)+0\\ &= \sum_{k\geq 0}\frac{x^k}{k!} \end{align*}\]
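With the theorem proved, it is fair to check the series against an independent implementation; in this sketch, Python’s built-in `math.exp` plays that role (the helper name `p_N` is mine):

```python
import math

def p_N(x, N):
    """Partial sum sum_{k=0}^{N} x^k / k! of the exponential series."""
    s, term = 0.0, 1.0
    for k in range(N + 1):
        s += term
        term *= x / (k + 1)
    return s

for x in (-3.0, 0.5, 7.0):
    print(x, p_N(x, 50), math.exp(x))  # the two columns agree closely
```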
It’s incredible in and of itself to have such a simple, explicit formula for the natural exponential. But this is just the beginning: this series actually gives us a means to express all exponentials:
Theorem 26.2 Let \(E(x)\) be an arbitrary exponential function. Then \(E\) has a power series representation on all of \(\RR\) which can be expressed for some real nonzero \(c\) as
\[E(x)=\sum_{n\geq 0} \frac{c^n}{n!}x^n\]
Proof. Because \(E\) is an exponential we know \(E\) is differentiable, and that \(E^\prime(x)=E^\prime(0)E(x)\) for all \(x\). Note that \(E^\prime(0)\) is nonzero; else we would have \(E^\prime(x)=0\) constantly, and so \(E(x)\) would be constant. Set \(c=E^\prime(0)\).
Now, inductively take derivatives at zero: \[E^\prime(0)=c\hspace{1cm}E^{\prime\prime}(0)=c^2\hspace{1cm}E^{(n)}(0)=c^n\]
Thus, if \(E\) has a power series representation it must be \[\sum_{n\geq 0}\frac{c^n}{n!}x^n=\sum_{n\geq 0}\frac{1}{n!}(cx)^n\]
This is just the series for \(\exp\) evaluated at \(cx\), so it converges everywhere. It remains to check that \(E(x)\) really is equal to \(\exp(cx)\). Since \(\exp\) is never zero, the quotient \(g(x)=E(x)/\exp(cx)\) is defined everywhere, and differentiating with the quotient and chain rules gives \[g^\prime(x)=\frac{cE(x)\exp(cx)-E(x)\,c\exp(cx)}{\exp(cx)^2}=0\] so \(g\) is constant, equal to \(g(0)=E(0)/\exp(0)=1\). Thus \(E(x)=\exp(cx)\), which has exactly the power series claimed.
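One can also spot-check numerically that truncations of \(\sum_n (cx)^n/n!\) behave like exponentials; the value \(c=0.7\) and the cutoff of 60 terms below are arbitrary choices of mine:

```python
def E(x, c=0.7, terms=60):
    """Truncation of the series sum_{n>=0} (c x)^n / n!."""
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= c * x / (n + 1)
    return s

# the law of exponents E(x+y) = E(x)E(y) holds to floating-point accuracy
for x, y in [(1.0, 2.0), (-0.5, 3.0)]:
    print(E(x + y), E(x) * E(y))
```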
From this, we can directly get a formula to calculate the base of this exponential, the natural constant \(e\):
Corollary 26.2 (A series for \(e\)) The base of the natural exponential is given by \[e:=\exp(1)=\sum_{k\geq 0}\frac{1}{k!}\]
Since we know a general exponential can be written in terms of powers of its base, \(E(x)=E(1)^x\) (where the power is defined as the limit of rational exponents), this finally gives us our standard-looking exponential function
\[\exp(x)=\exp(1)^x = e^x\]
26.2.1 Estimating \(e\)
We finally found \(e\)! And we have a relatively simple, explicit formula to compute it. As some final practice with our new tools, let’s use what we know here to do some estimation.
Proposition 26.4 The base of the natural exponential is between \(2\) and \(3\).
Proof. The series defining \(e\) has all positive terms, so we see that \(e\) is greater than any partial sum. Thus \[2=1+1=\frac{1}{0!}+\frac{1}{1!}< \sum_{k\geq 0}\frac{1}{k!}=e\] so we have the lower bound. To get the upper bound, we need to come up with a computable upper bound for our series. This turns out to be not that difficult: as the factorial grows so quickly, we can produce many upper bounds by just finding something that grows more slowly than the factorial but whose reciprocals still have a computable sum. For instance, when \(k\geq 2\) \[k(k-1)\leq k!\]
and so,
\[e=\sum_{k\geq 0}\frac{1}{k!}=1+1+\sum_{k\geq 2}\frac{1}{k!}\leq 1+1+\sum_{k\geq 2}\frac{1}{k(k-1)}\]
But this upper bound now is our favorite telescoping series! After a rewrite with partial fractions, we directly see that it sums to \(1\). Plugging this in,
\[e<1+1+1=3\]
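Both bounds are easy to confirm numerically; this sketch checks that the telescoping sum approaches \(1\) (its partial sums are exactly \(1-1/N\)) and that the factorial series lands strictly between \(2\) and \(3\):

```python
import math

# telescoping upper bound: sum_{k=2}^{N} 1/(k(k-1)) = 1 - 1/N
N = 1000
tele = sum(1.0 / (k * (k - 1)) for k in range(2, N + 1))
print(tele)  # equals 1 - 1/N, creeping up toward 1

# the series for e itself, truncated far enough to settle
e_est = sum(1.0 / math.factorial(k) for k in range(20))
print(2 < e_est < 3)  # True
```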
How can we get a better estimate? Since we do have a convergent infinite series just sitting here defining \(e\) for us, the answer seems obvious - why don’t we just sum up more and more terms of the series? And of course - that is part of the correct strategy, but it’s missing one key piece. If you add up the first 10 terms of the series and you get some number, how can you know how accurate this is?
Just because the first two digits are \(2.7\), who is to say that after adding a million more terms (all of which are positive) it won’t eventually become \(2.8\)? To give us any confidence in the value of \(e\) we need a way of measuring how far off any of our partial sums could be.
Our usual approach is to try and produce sequences of upper and lower estimates: nested intervals of error bars to help us out. But here we have only one sequence (and producing even a single upper bound above was a bit of work!) so we need to look elsewhere. It turns out, the correct tool for the job is the Taylor Error formula once more!
Proposition 26.5 Adding up the terms of the series for \(e\) through \(1/N!\) results in an estimate of the true value accurate to within \(3/(N+1)!\).
Proof. The number \(e\) is defined as \(\exp(1)\), and so using \(x=1\) we are just looking at the old equation
\[\exp(1)=p_N(1)+R_N(1)\]
where \(R_N(1)=\exp(c_N)\frac{1^{N+1}}{(N+1)!}\) for some \(c_N\in[0,1]\). Since \(\exp\) is increasing, we can bound \(\exp(c_N)\) below by \(\exp(0)=1\) and above by \(\exp(1)=e\), and \(e\) above by \(3\): thus
\[\frac{1}{(N+1)!}\leq R_N(1)\leq \frac{3}{(N+1)!}\]
And so, the difference \(|e-p_N(1)|=|R_N(1)|\) is bounded above by the upper bound \(3/(N+1)!\).
This gives us a readily computable, explicit estimate. Precisely, adding the terms up through \(N=5\) yields
\[1+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{24}+\frac{1}{120}\approx 2.71666\ldots\]
with the total error between this and \(e\) less than \(\frac{3}{6!}=\frac{1}{240}=0.0041666\ldots\). Thus we can be confident that the first digit after the decimal is a 7, as \(2.7167-0.0042=2.7125\leq e\leq 2.7167+0.0042=2.7209\).
Adding up five more terms, to \(N=10\) gives
\[1+1+\frac{1}{2}+\frac{1}{3!}+\cdots+\frac{1}{10!}=2.71828180114638\ldots\]
now with a maximal error of \(3/11!=0.000000075156\ldots\). This means we are now absolutely confident in the first six digits:
\[e\approx 2.718281\]
Pretty good, for only having to add eleven fractions together! That’s the sort of calculation one could even manage by hand.
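Both estimates above are easy to reproduce in a few lines; Python’s `math.e` serves as an external check (the function name `estimate` is my own):

```python
import math

def estimate(N):
    """Partial sum of the series for e through 1/N!, plus its Taylor error bound."""
    p = sum(1.0 / math.factorial(k) for k in range(N + 1))
    return p, 3.0 / math.factorial(N + 1)

p5, b5 = estimate(5)
print(p5, b5)    # about 2.71666..., with error bound 3/6! = 0.004166...

p10, b10 = estimate(10)
print(p10, b10)  # about 2.7182818011..., with error bound 3/11!
print(abs(math.e - p10) < b10)  # the true e sits inside the error band
```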