26  The Exponential

Highlights of this Chapter: we reach a culmination of several topics, drawing in theory from across series and differentiability to come up with a formula for the natural exponential exp(x), and an explicit formula for its base e.

26.1 Prior Work

It’s useful to start by summarizing what we already know. We defined the exponential function as a nonconstant solution to the law of exponents E(x+y) = E(x)E(y).

26.1.1 Properties

Such a definition does not guarantee that any such function exists, but using the functional equation one can readily begin to prove many propositions about exponentials, assuming they exist. For example, some of the first we proved were:

  • If E(x) is an exponential, then E(x) is never zero.
  • If E(x) is an exponential, then E(0) = 1.
  • If E(x) is an exponential, then so is E(kx).

Through the introduction to differentiation, we can prove even more about the exponential, such as

  • If E(x) is an exponential, then E(x) is differentiable, and E'(x) = cE(x) for some c ≠ 0; in fact c = E'(0).

Combining this with previous facts and the chain rule, we can see that if E(x) is any exponential, then E(x/c) is an exponential whose derivative at zero is 1. We called such a function the natural exponential, and so have proven

  • If any exponential exists at all, then there is a natural exponential exp(x) satisfying exp'(x) = exp(x).

From here, we can actually learn quite a lot about this function exp, if it exists. For instance

Example 26.1 If exp exists, then it is a strictly increasing function on the entire real line.

To start the proof of this, note that since exp(0) = 1 and exp(x) is never zero, exp is in fact always positive: were it not, there would be some y with exp(y) < 0, and since 0 lies between 1 = exp(0) and exp(y), by the intermediate value theorem there would have to be a z with exp(z) = 0. But we know no such point exists.

Now, because exp(x) > 0 and exp'(x) = exp(x), we see that the derivative is strictly positive. And, by an argument using the mean value theorem, we know that on any interval where the derivative is positive, the function is increasing. So exp is increasing on all of R.

26.1.2 Existence

This simplifies things a bit: proving the existence of any exponential at all is enough to get us to the existence of exp. But no argument starting from the functional equation alone can prove that there are exponential functions at all! For that we need to do some additional work, and you did this over the course of Assignment 7, where you showed:

  • We can define 2^x as lim 2^{r_n} for r_n an arbitrary sequence of rational numbers converging to x. That is:
    • For any x ∈ R, and any sequence r_n → x, the sequence 2^{r_n} converges.
    • The value of lim 2^{r_n} does not depend on the choice of sequence, so long as that sequence converges to x.
  • The function 2^x defined this way is continuous.
  • The function 2^x defined this way satisfies the law of exponents on the rationals (by definition), and so by continuity, satisfies the law of exponents for all real inputs.
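The construction above can be illustrated numerically. The sketch below takes rational truncations of the decimal expansion of √2 as the sequence r_n and watches 2^{r_n} settle down. (A caveat: Python's floating-point `**` already implements real exponents, so this is only a sanity check of the convergence claim, not an independent construction; mathematically each 2^{p/q} is the q-th root of 2^p, defined using rational exponents alone.)

```python
from fractions import Fraction

# Rational truncations of the decimal expansion of sqrt(2): r_n -> sqrt(2).
approximations = [Fraction(14, 10), Fraction(141, 100), Fraction(1414, 1000),
                  Fraction(14142, 10000), Fraction(141421, 100000)]

# Each 2^(p/q) is really the q-th root of 2^p; float ** stands in for that here.
for r in approximations:
    print(f"2^{r} = {2.0 ** float(r):.6f}")
```

The printed values increase toward a single limit, which is what the theorem guarantees: the choice of rational sequence converging to √2 does not matter.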

Corollary 26.1 Exponential functions exist.

Thus, at this point we are certain that there is a mysterious real function out there called the natural exponential. We just don’t know anything about how to compute it! We are even ignorant of the most basic question: if we were to write exp(x) = a^x in the form above for some base a, what number is a?

26.2 Finding a Power Series

To work with the natural exponential efficiently, we need to find a formula that lets us compute it. And this is exactly what power series are good at! However, the theory of power series is a little tricky, as we saw in the last chapter. Not every function has a power series representation, but if a function does, there’s only one possibility:

Proposition 26.1 If the natural exponential has a power series representation, then it is p(x) = ∑_{k≥0} x^k/k!.

Proof. We know the only candidate series for a function f(x) is ∑_{k≥0} f^(k)(0)/k! · x^k, so for exp this is

p(x) = ∑_{k≥0} exp^(k)(0)/k! · x^k

However, we know that exp' = exp, so inductively exp^(k) = exp, and thus exp^(k)(0) = exp(0) = 1. Therefore p(x) = ∑_{k≥0} (1/k!) x^k.

So now, while we know exp exists we are back to talking about hypotheticals because we don’t know if it is representable by a power series! The first step to fixing this is to show that the proposed series at least converges.

Proposition 26.2 The series p(x) = ∑_{k≥0} x^k/k! converges for all x ∈ R.

Proof. This series converges for all x ∈ R by the ratio test, as lim | (x^{n+1}/(n+1)!) / (x^n/n!) | = lim |x|/(n+1) = 0 < 1.
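To see the ratio test's conclusion concretely, here is a quick numerical check that the terms x^n/n! shrink to zero even for a fairly large input like x = 10, where the terms first grow before the factorial takes over:

```python
from math import factorial

x = 10.0  # a deliberately large input
for n in [0, 5, 10, 20, 30, 40]:
    # each term is the previous one times x/n, and x/n < 1 once n > x
    print(f"n = {n:2d}   x^n/n! = {x ** n / factorial(n):.3e}")
```

The terms rise into the thousands around n = 10, then collapse: by n = 40 they are below 10^-7, exactly the behavior the ratio |x|/(n+1) → 0 predicts.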

Now, all that remains is to show that p(x)=exp(x). Since p is a power series, this really means that the limit of its partial sums equals exp(x), or

for all x ∈ R:  exp(x) = lim_{N→∞} p_N(x)

For any finite partial sum p_N, we know that it is not exactly equal to exp(x) (as this finite sum is just a polynomial!). Thus there must be some error term R_N = exp − p_N, or

exp(x) = p_N(x) + R_N(x)

This is helpful, as we know from the previous chapter how to calculate such an error, using the Taylor Error Formula: for each fixed x ∈ R and each fixed N ∈ N, there is some point c_N ∈ [0,x] such that

R_N(x) = exp^(N+1)(c_N)/(N+1)! · x^{N+1}

And, to show the power series becomes the natural exponential in the limit, we just need to show this error tends to zero!

Proposition 26.3 As N → ∞, for any x ∈ R the Taylor error term for the exponential goes to zero: R_N(x) → 0.

Proof. Fix some x ∈ R. Then for arbitrary N, we know R_N(x) = exp^(N+1)(c_N)/(N+1)! · x^{N+1}, where c_N ∈ [0,x] is some number we don’t have much control over (as it came from an existence proof: Rolle’s theorem in our derivation of the Taylor error). Because we don’t know c_N explicitly, it’s hard to directly compute the limit, and so instead we use the squeeze theorem:

Since every derivative of exp is exp itself, exp^(N+1)(c_N) = exp(c_N). We also know that exp is an increasing function: thus, the fact that 0 ≤ c_N ≤ x implies that 1 = exp(0) ≤ exp(c_N) ≤ exp(x), and multiplying this inequality through by x^{N+1}/(N+1)! yields the inequality

x^{N+1}/(N+1)! ≤ R_N(x) = exp(c_N) · x^{N+1}/(N+1)! ≤ exp(x) · x^{N+1}/(N+1)!

(Here I have assumed that x ≥ 0: if x < 0 then the inequalities reverse for even values of N, as x^{N+1} is negative and we are multiplying through by a negative number. But this does not affect the fact that the error term R_N(x) is still sandwiched between the two.)

So now our problem reduces to showing that the upper and lower bounds converge to zero. Since exp(x) is a constant (remember, N is our variable here as we take the limit), the limit of both the upper and lower bounds comes down to just finding the limit

lim_{N→∞} x^{N+1}/(N+1)!

But this is just the (N+1)st term of the power series p(x) = ∑_{n≥0} x^n/n! we studied above! And since this power series converges, we know that as n → ∞ its terms must go to zero (the divergence test). Thus

lim_{N→∞} x^{N+1}/(N+1)! = 0, and so lim_{N→∞} exp(x) · x^{N+1}/(N+1)! = 0

and so by the squeeze theorem, R_N(x) converges and

lim_{N→∞} R_N(x) = 0
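The sandwich above can be watched numerically. A small Python check (using the library value of exp only as a stand-in for the true function) compares the actual error R_N(x) against the two bounds from the proof, for x = 2:

```python
from math import exp, factorial

x = 2.0
for N in [2, 5, 10, 15]:
    p_N = sum(x ** k / factorial(k) for k in range(N + 1))
    R_N = exp(x) - p_N                       # actual error term
    lower = x ** (N + 1) / factorial(N + 1)  # the lower bound from the proof
    print(f"N = {N:2d}   {lower:.3e} <= {R_N:.3e} <= {exp(x) * lower:.3e}")
```

At every N the error sits between the two bounds, and all three columns rush to zero together, which is the squeeze in action.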

Now we have all the components together at last: we know that exp exists, we have a candidate power series representation, that candidate converges, and the error between it and the exponential goes to zero!

Theorem 26.1 The natural exponential is given by the following power series: exp(x) = ∑_{k≥0} x^k/k!

Proof. Fix an arbitrary x ∈ R. Then for any N we can write exp(x) = p_N(x) + R_N(x), for p_N the partial sum of p(x) = ∑_{k≥0} x^k/k! and R_N(x) the error. Since we have proven both p_N and R_N converge, we can take the limit of both sides using the limit theorems (and, as exp(x) is constant in N, clearly lim_{N→∞} exp(x) = exp(x)):

exp(x) = lim_{N→∞} (p_N(x) + R_N(x)) = lim_{N→∞} p_N(x) + lim_{N→∞} R_N(x) = p(x) + 0 = ∑_{k≥0} x^k/k!
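With the theorem in hand, the partial sums p_N(x) become a genuine computational tool. A short Python sketch compares them against the library exponential (which here serves only as an independent check):

```python
from math import exp, factorial

def p(x, N):
    """Partial sum p_N(x) = sum_{k=0}^{N} x^k / k!."""
    return sum(x ** k / factorial(k) for k in range(N + 1))

for x in [1.0, -2.5, 5.0]:
    print(f"x = {x:4}:  p_20(x) = {p(x, 20):.10f},  exp(x) = {exp(x):.10f}")
```

Already at N = 20 the partial sums agree with exp to many decimal places, for negative inputs as well as positive ones.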

It’s incredible in and of itself to have such a simple, explicit formula for the natural exponential. But this is just the beginning: this series actually gives us a means to express all exponentials:

Theorem 26.2 Let E(x) be an arbitrary exponential function. Then E has a power series representation on all of R, which can be expressed for some nonzero real c as

E(x) = ∑_{n≥0} (c^n/n!) x^n

Proof. Because E is an exponential, we know E is differentiable, and that E'(x) = E'(0)E(x) for all x. Note that E'(0) is nonzero; else we would have E'(x) = 0 identically, and so E(x) would be constant. Set c = E'(0).

Now, inductively take derivatives at zero: E'(0) = c, E''(0) = cE'(0) = c^2, …, E^(n)(0) = c^n.

Thus, if E has a power series representation it must be ∑_{n≥0} (c^n/n!) x^n = ∑_{n≥0} (1/n!)(cx)^n

This is just the series for exp evaluated at cx: since exp exists and is an exponential, so is the function exp(cx) (as it’s defined just by a substitution). In fact E(x) = exp(cx): the function F(x) = E(x)exp(−cx) satisfies the law of exponents, is differentiable, and has F'(0) = c − c = 0, so F'(x) = F'(0)F(x) = 0 identically; thus F is constant, and since F(0) = 1 we get E(x) = exp(cx). So E is indeed represented by the series above.
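As a numerical illustration of this theorem, take for granted the standard fact (not derived in this chapter) that E(x) = 2^x has c = E'(0) = ln 2; then the series ∑_{n≥0} (c^n/n!) x^n should reproduce 2^x:

```python
from math import factorial, log

def E_series(x, c, N=30):
    """Partial sum of sum_{n>=0} (c^n / n!) x^n = exp(cx)."""
    return sum((c * x) ** n / factorial(n) for n in range(N + 1))

c = log(2)  # standard value of E'(0) for E(x) = 2^x (assumed, not derived here)
for x in [0.5, 1.0, 3.0]:
    print(f"series: {E_series(x, c):.10f}   2^x: {2 ** x:.10f}")
```

The two columns match to the printed precision: one power series, rescaled by c, captures every exponential function.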

From this, we can directly get a formula to calculate the base of this exponential, the natural constant e:

Corollary 26.2 (A series for e) The base of the natural exponential is given by e := exp(1) = ∑_{k≥0} 1/k!

Since we know a general exponential E(x) = E(1)^x can be written as powers of its base (where the power is defined as the limit of rational exponents…), this finally gives us our standard-looking exponential function

exp(x) = exp(1)^x = e^x

26.2.1 Estimating e

We finally found e! And we have a relatively simple, explicit formula to compute it. As some final practice with our new tools, let’s use what we know to do some estimation.

Proposition 26.4 The base of the natural exponential is between 2 and 3.

Proof. The series defining e has all positive terms, so e is greater than any partial sum. Thus 2 = 1 + 1 = 1/0! + 1/1! < ∑_{k≥0} 1/k! = e, so we have the lower bound. To get the upper bound, we need to come up with a computable upper bound for our series. This turns out to be not that difficult: as the factorial grows so quickly, we can produce many upper bounds by finding something that grows more slowly than the factorial and summing its reciprocals instead. For instance, when k ≥ 2, k(k−1) ≤ k!

and so,

e = ∑_{k≥0} 1/k! = 1 + 1 + ∑_{k≥2} 1/k! ≤ 1 + 1 + ∑_{k≥2} 1/(k(k−1))

But this upper bound now is our favorite telescoping series! After a rewrite with partial fractions, we directly see that it sums to 1. Plugging this in,

e<1+1+1=3
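The telescoping bound is easy to check numerically: comparing the tail ∑_{k≥2} 1/k! (which is about e − 2) against the partial sums ∑_{k=2}^{K} 1/(k(k−1)) = 1 − 1/K:

```python
from math import factorial

K = 20
tail = sum(1 / factorial(k) for k in range(2, K + 1))    # close to e - 2
bound = sum(1 / (k * (k - 1)) for k in range(2, K + 1))  # telescopes to 1 - 1/K
print(f"sum of 1/k!       = {tail:.6f}")
print(f"telescoping bound = {bound:.6f}")
```

The tail of the factorial series sits comfortably below the telescoping sum, whose value 1 − 1/K approaches the limit 1 used in the proof.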

How can we get a better estimate? Since we do have a convergent infinite series just sitting here defining e for us, the answer seems obvious - why don’t we just sum up more and more terms of the series? And of course - that is part of the correct strategy, but it’s missing one key piece. If you add up the first 10 terms of the series and you get some number, how can you know how accurate this is?

Just because the first two digits are 2.7, who is to say that after adding a million more terms (all of which are positive) it won’t eventually become 2.8? To give us any confidence in the value of e we need a way of measuring how far off any of our partial sums could be.

Our usual approach is to try and produce sequences of upper and lower estimates: nested intervals of error bars to help us out. But here we have only one sequence (and producing even a single upper bound above was a bit of work!) so we need to look elsewhere. It turns out, the correct tool for the job is the Taylor Error formula once more!

Proposition 26.5 Adding up the terms of the series for e through 1/N! results in an estimate of the true value accurate to within 3/(N+1)!.

Proof. The number e is defined as exp(1), and so using x=1 we are just looking at the old equation

exp(1) = p_N(1) + R_N(1)

where R_N(1) = exp(c_N) · 1^{N+1}/(N+1)! for some c_N ∈ [0,1]. Since exp is increasing, we can bound exp(c_N) below by exp(0) = 1 and above by exp(1) = e, and e above by 3: thus

1/(N+1)! ≤ R_N(1) ≤ 3/(N+1)!

And so, the difference |e − p_N(1)| = |R_N(1)| is bounded above by 3/(N+1)!

This gives us a readily computable, explicit estimate. Precisely, adding up through the N = 5 term of the series yields

1 + 1 + 1/2 + 1/6 + 1/24 + 1/120 ≈ 2.71666…

with the total error between this and e less than 3/6! = 1/240 ≈ 0.0041666. Thus we can be confident that the first digit after the decimal is a 7, as 2.7166 − 0.0042 = 2.7124 ≤ e ≤ 2.7167 + 0.0042 = 2.7209.

Adding up five more terms, to N = 10, gives

1 + 1 + 1/2! + 1/3! + ⋯ + 1/10! ≈ 2.71828180114638

now with a maximal error of 3/11! ≈ 0.000000075156. This means we are now absolutely confident in the first six digits after the decimal point:

e ≈ 2.718281…

Pretty good, for only having to add eleven fractions together! That’s the sort of calculation one could even manage by hand.
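The whole estimation procedure fits in a few lines of Python. The assertion checks that the true value (here the library constant math.e, used only for verification) always lands inside the guaranteed window:

```python
from math import e, factorial

def estimate(N):
    """Partial sum p_N(1) together with the guaranteed error bound 3/(N+1)!."""
    p_N = sum(1 / factorial(k) for k in range(N + 1))
    return p_N, 3 / factorial(N + 1)

for N in [5, 10]:
    p_N, err = estimate(N)
    print(f"N = {N:2d}:  p_N(1) = {p_N:.14f},  error bound = {err:.12f}")
    assert p_N <= e <= p_N + err  # e always lies in the guaranteed window
```

The point of the error formula is exactly this: without it, the printed digits would be a guess; with it, each partial sum comes packaged with a rigorous interval that must contain e.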