10 Monotone Convergence
Highlights of this Chapter: We prove the monotone convergence theorem, which is our first theorem that tells us a sequence converges, without having to first know its limiting value. We show how to use this theorem to find the limit of various recursively defined sequences, including two important examples.
- We prove the infinite sequence of roots \(\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}\) converges to the golden ratio.
- We prove the sequence \(\left(1+\frac{1}{n}\right)^n\) converges to the number \(e=2.71828\ldots\)
- We begin a treatment of irrational exponents, by looking at the limit of sequences with rational exponents.
The motivation for inventing sequences is to work with infinite processes, where we have a precise description of each finite stage but cannot directly grasp the “completed” state “at infinity”. In the first section of this chapter we computed a few specific limits, and in the second we showed how to use algebra to find new, more complicated limits from the values of simpler ones.
But what we haven’t done, since our original motivating discussion with the nested intervals theorem, is actually return to the part of the theory we are most interested in: rigorously assuring that certain sequences converge, without knowing the value of their limit ahead of time! The most useful theorem in this direction is the monotone convergence theorem, which deals with monotone sequences.
Definition 10.1 (Monotone Sequences) A sequence \(s_n\) is monotone increasing (or more precisely, monotone non-decreasing) if \[m\leq n\,\,\,\implies\,\,\, s_m\leq s_n\] A sequence is monotone decreasing (non-increasing) if \[m\leq n\,\,\,\implies\,\,\, s_m\geq s_n\]
Note: constant sequences are monotone: both monotone increasing and monotone decreasing.
The original inspiration for a monotone sequence is the sequence of upper bounds or lower bounds from a collection of nested intervals: as the intervals shrink, the lower bounds monotonically increase and the upper bounds monotonically decrease. The monotone convergence theorem guarantees that such sequences always converge. Its proof is below, but could actually be extracted directly from Theorem 5.2.
Theorem 10.1 (Monotone Convergence) Let \(s_n\) be a monotone bounded sequence. Then \(s_n\) is a convergent sequence.
Proof. Here we consider the case that \(s_n\) is monotone increasing, and leave the decreasing case as an exercise. Let \(S=\{s_n\mid n\in\NN\}\). Then \(S\) is nonempty, and is bounded above (by any upper bound for the sequence \(s_n\), which we assumed is bounded). Thus by completeness, it has a supremum \(s=\sup S\).
We claim that \(s_n\) is actually a convergent sequence, which limits to \(s\). To prove this, choose \(\epsilon>0\), and note that as \(s\) is the least upper bound, \(s-\epsilon\) is not an upper bound for \(S\), so there must be some \(N\) where \(s_N>s-\epsilon\). But \(s_n\) is monotone increasing, so if \(n>N\) it follows that \(s_n\geq s_N\). Recalling that for all \(n\) we know \(s_n\leq s\) (since \(s\) is an upper bound), we have found some \(N\) where for all \(n>N\) we know \(s-\epsilon< s_n \leq s\). This further implies \(|s_n-s|<\epsilon\), which is exactly the definition of convergence! Thus
\[s_n\to s\] So it is a convergent sequence, as claimed.
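To see the mechanics of this proof on a concrete example, here is a minimal numerical sketch (purely illustrative, and not part of the argument): for the monotone bounded sequence \(s_n=1-2^{-n}\), whose supremum is \(1\), it searches for the index \(N\) produced in the proof and checks that the later terms lie within \(\epsilon\) of the supremum.

```python
# Numerical illustration of the monotone convergence theorem (not a proof):
# the monotone increasing, bounded sequence s_n = 1 - 2**(-n) has supremum 1,
# and for any epsilon the terms eventually stay within epsilon of that supremum.

def s(n):
    return 1 - 2 ** (-n)

sup = 1.0          # the least upper bound of {s_n}
epsilon = 1e-3

# find an N with s_N > sup - epsilon, as in the proof
N = 1
while s(N) <= sup - epsilon:
    N += 1

# monotonicity then keeps every later term inside (sup - epsilon, sup]
print(f"N = {N}")
for n in range(N, N + 5):
    print(n, s(n), abs(s(n) - sup) < epsilon)
```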
Though straightforward to prove, this theorem has a great many applications, as it assures us that many of the recursively defined sequences that show up in practice, which are difficult to analyze directly, actually do converge, and thus we may rigorously reason about their limits. We will give several interesting examples below.
10.1 Infinite Recursion
Recall that a recursively defined sequence is one given by iterating some function \(f\) starting from an initial value \(a_0\), so that \(a_{n+1}=f(a_n)\). We saw one such sequence previously, defined by \(s_{n+1}=\sqrt{1+s_n}\) starting from \(s_0=1\):
\[s_0=1\] \[s_1=\sqrt{1+\sqrt{1}}\] \[s_2=\sqrt{1+\sqrt{1+\sqrt{1}}}\] \[\ldots\]
Because such sequences follow a regular pattern, we can use a shorthand notation with an ellipsis for their terms. For example, in the original sequence above, writing the first couple of steps of the pattern followed by an ellipsis
\[\sqrt{1+\sqrt{1+\sqrt{1+\ldots}}}\]
we take to mean the sequence of terms \(s_n\) defined by \(s_{n+1}=\sqrt{1+s_n}\). Thus, writing \(\lim \sqrt{1+\sqrt{1+\cdots}}\) means the limit of the sequence implicitly defined by this infinite expression.
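As an illustrative aside (not part of the text's own development), a few lines of Python make the shorthand concrete by generating the terms that the ellipsis stands for:

```python
from math import sqrt

# Terms of the sequence abbreviated by sqrt(1 + sqrt(1 + sqrt(1 + ...))):
# iterate s_{n+1} = sqrt(1 + s_n), starting from s_0 = 1.
s = 1.0
for n in range(8):
    print(f"s_{n} = {s:.6f}")
    s = sqrt(1 + s)
```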
Exercise 10.1 Here are some other infinite expressions defined by recursive sequences: can you give the recursion relations they satisfy? \[\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}\]
\[\frac{1}{1+\frac{1}{1+\cdots}}\]
\[\cos(\cos(\cos(\cdots\cos(5)\cdots)))\]
In all of these sequences it is not at all clear how to find the limit value from scratch, or how we could possibly apply any of the limit theorems about the field axioms and inequalities. But recursive sequences are set up perfectly for induction, and for monotone convergence! We can build a sort of recipe for dealing with them:
Recursive Sequence Operation Manual:
- Prove it is bounded, by induction.
- Prove it is monotone, by induction.
- Use monotone convergence to conclude it is convergent.
- Use the recursive definition, and the limit theorems, to find an equation satisfied by the limit.
- Solve that equation, to find the limit.
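Before carrying out any proofs, it can be reassuring to run a rough numerical sanity check of the first two steps. The sketch below is a hypothetical helper (illustrative only, and no substitute for the induction arguments): it iterates a recursion and reports whether the computed terms look monotone and bounded.

```python
# Rough numerical sanity check of the first steps of the operation manual:
# iterate a recursion a_{n+1} = f(a_n) and report whether the computed
# terms look monotone and bounded (floating point only).

from math import sqrt

def inspect_recursion(f, a0, num_terms=50):
    terms = [a0]
    for _ in range(num_terms - 1):
        terms.append(f(terms[-1]))
    increasing = all(x <= y for x, y in zip(terms, terms[1:]))
    decreasing = all(x >= y for x, y in zip(terms, terms[1:]))
    print("looks monotone increasing:", increasing)
    print("looks monotone decreasing:", decreasing)
    print("largest term seen:", max(terms))
    print("last term (limit estimate):", terms[-1])

# the nested-radical example from this chapter
inspect_recursion(lambda s: sqrt(1 + s), 1.0)
```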
A beautiful and interesting example of this operation manual is carried out below:
Proposition 10.1 The sequence \(\sqrt{1+\sqrt{1+\cdots}}\) converges to the golden ratio.
Proof. The infinite expression \(\sqrt{1+\sqrt{1+\cdots}}\) defines the recursive sequence \(s_{n+1}=\sqrt{1+s_n}\) with \(s_1=1\).
Step 1: \(s_n\) is monotone increasing, by induction. First we show that \(s_2>s_1\). Using the formula, \(s_2=\sqrt{1+\sqrt{1}}=\sqrt{2}\), which is larger than \(s_1=1\). Next, we assume for induction \(s_n>s_{n-1}\) and we use this to prove that \(s_{n+1}>s_n\). Starting from our induction hypothesis, we add one to both sides yielding \(1+s_n>1+s_{n-1}\) and then we take the square root (which preserves the inequality, by Proposition 4.5) to get \[\sqrt{1+s_n}>\sqrt{1+s_{n-1}}\] But now, we simply note that the term on the left is the definition of \(s_{n+1}\) and the term on the right is the definition of \(s_n\). Thus we have \(s_{n+1}>s_n\) as claimed, and our induction proof works for all \(n\).
Step 2: \(s_n\) is bounded, by induction. It is hard to guess an upper bound for \(s_n\) without doing a little calculation, but plugging the first few terms into a calculator shows them to be less than \(2\), so we might try to prove \(\forall n\, s_n<2\). The base case is immediate as \(s_1=1<2\), so assume for induction \(s_n<2\). Then \(1+s_n<3\) and so \(\sqrt{1+s_n}<\sqrt{1+2}=\sqrt{3}\), and \(\sqrt{3}<2\) (as \(3<2^2=4\)) so our induction has worked, and the entire sequence is bounded above by \(2\).
Conclusion: \(s_n\) converges! We have proven the sequence \(s_n\) is both monotone increasing and bounded above by \(2\). Thus the monotone convergence theorem assures us that there exists some \(L\) with \(s_n\to L\). It only remains to figure out what number this is!
Step 3: The Limit Theorems. Because truncating the beginning of a sequence does not change its limit, we see that \(\lim s_n=\lim s_{n+1}=L\). But applying the limit theorems to \(s_{n+1}=\sqrt{1+s_n}\), we see that as \(s_n\to L\), it follows that \(1+s_n\to 1+L\) and thus that \(\sqrt{1+s_n}\to \sqrt{1+L}\). This gives us an equation that \(L\) must satisfy!
\[\sqrt{1+L}=L\] Simplifying, this becomes \(1+L=L^2\), which has solutions \((1\pm \sqrt{5})/2\). So far this argument only tells us that one of these two numbers must be our limit \(L\); to figure out which, we need to bring in more information. Since all the terms of our sequence are positive, the limit is nonnegative, and only one of the two roots is positive. This singles it out:
\[\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}=\frac{1+\sqrt{5}}{2}\approx 1.618\ldots\]
This number is known as the golden ratio.
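A quick floating-point check (illustrative only) confirms the computation: iterating the recursion drives the terms onto \(\phi=(1+\sqrt{5})/2\), and \(\phi\) is indeed a fixed point of \(L\mapsto\sqrt{1+L}\).

```python
from math import sqrt

# Check numerically (floating point only) that the iterates of s_{n+1} = sqrt(1 + s_n)
# approach (1 + sqrt(5)) / 2, and that this value satisfies L = sqrt(1 + L).
phi = (1 + sqrt(5)) / 2

s = 1.0
for _ in range(40):
    s = sqrt(1 + s)

print(s, phi, abs(s - phi))          # the iterate sits essentially on top of phi
print(abs(sqrt(1 + phi) - phi))      # phi is (numerically) a fixed point
```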
Example 10.1 The final step of the proof above suggests a way one might find a recursive sequence to use as a calculating tool: if we started with the golden ratio \[\phi =\frac{1+\sqrt{5}}{2}\] we could observe that \(\phi\) solves the quadratic equation \(1+\phi=\phi^2\), and hence \(\phi=\sqrt{1+\phi}\). This sets up a recursive sequence, as we can plug the relation into itself over and over:
\[\phi=\sqrt{1+\phi}=\sqrt{1+\sqrt{1+\phi}}=\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}\]
This immediately suggests the recursion \(s_{n+1}=\sqrt{1+s_n}\) as a candidate for generating a sequence converging to a root of the original equation.
Exercise 10.2 Find a recursive sequence whose limit is the positive real root of \(x^2-2x-5\). Then prove that your proposed sequence actually converges to this value.
Exercise 10.3 What number is this? \[\sqrt{1-2\sqrt{1-2\sqrt{1-2\sqrt{\cdots}}}}\]
10.2 \(\bigstar\) The Number \(e\)
In this section we aim to study, and prove the convergence of, the following sequence of numbers \[\left(1+\frac{1}{n}\right)^n\] We will later see that the limit of this sequence is the number \(e\) (indeed, many authors take this limit as the definition of \(e\), as this is perhaps the first natural-looking sequence converging to this special value; we will instead define \(e\) in terms of exponential functions to come, and then later show that its value coincides with this limit).
We begin by proving that \(a_n=\left(1+\frac{1}{n}\right)^n\) is monotone, as a prelude to applying monotone convergence.
Example 10.2 The sequence \(a_n=\left(\frac{n+1}{n}\right)^n\) is monotone increasing.
Proof. To show \(a_n\) is increasing we will show that the ratio \(\frac{a_n}{a_{n-1}}\) is at least \(1\). Simplifying,
\[\frac{a_n}{a_{n-1}}=\frac{\left(\frac{n+1}{n}\right)^n}{\left(\frac{n}{n-1}\right)^{n-1}}=\left(\frac{n+1}{n}\right)^n\left(\frac{n-1}{n}\right)^{n-1}\]
Multiplying by \(\frac{n-1}{n}\) and its inverse we can make the powers on each of these terms the same, and combine them:
\[=\left(\frac{n+1}{n}\right)^n\left(\frac{n-1}{n}\right)^{n}\frac{n}{n-1}=\left(\frac{n^2-1}{n^2}\right)^n\frac{n}{n-1}\] Simplifying what is in parentheses, we notice that we are actually in a perfect situation to apply Bernoulli’s Inequality (Exercise 4.6) to help us estimate this term. Recall this says that if \(r\) is any number such that \(1+r\) is positive, \((1+r)^n\geq 1+nr\). When \(n\geq 2\) we can apply this to \(r=-\frac{1}{n^2}\), yielding \[\frac{a_n}{a_{n-1}}=\left(1-\frac{1}{n^2}\right)^n\frac{n}{n-1}\geq \left(1-\frac{n}{n^2}\right)\frac{n}{n-1}\] \[=\frac{n-1}{n}\frac{n}{n-1}=1\]
Thus \(\frac{a_n}{a_{n-1}}\geq1\), so \(a_n\geq a_{n-1}\) and the sequence is monotone increasing for all \(n\), as claimed.
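As a numerical companion to this proof (illustrative only, and no substitute for it), the sketch below evaluates the ratio \(a_n/a_{n-1}\) for small \(n\) and checks the Bernoulli estimate used above term by term.

```python
# Numerical check (not part of the proof): for a_n = (1 + 1/n)**n the ratio
# a_n / a_{n-1} is at least 1, and the Bernoulli step used above,
# (1 - 1/n**2)**n >= 1 - n * (1/n**2), also holds term by term.

def a(n):
    return (1 + 1 / n) ** n

for n in range(2, 9):
    ratio = a(n) / a(n - 1)
    lhs = (1 - 1 / n**2) ** n
    rhs = 1 - n * (1 / n**2)
    print(n, round(ratio, 6), ratio >= 1, lhs >= rhs)
```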
Next we need to show that \(a_n\) is bounded above. Computing terms numerically, it seems that \(a_n\) is bounded above by 3, but of course no amount of computation can substitute for a proof. And after a bit of trying, it seems hard to prove directly that it actually is bounded above.
So instead, we will employ a bit of an ingenious trick. We will study a second sequence, which appears very similar to the first:
\[b_n=\left(\frac{n+1}{n}\right)^{n+1}\]
Indeed, this is just our sequence \(a_n\) multiplied by one extra factor of \(\frac{n+1}{n}\)! But this extra factor changes its behavior a bit: computing the first few terms, we see that it appears to be decreasing:
\[b_1=\left(1+1\right)^2=4,\,\,b_2=\left(1+\frac{1}{2}\right)^3=\frac{27}{8}= 3.375,\,\,b_3=\left(1+\frac{1}{3}\right)^4\approx 3.1605\]
Indeed, a proof that it is decreasing can be constructed following a strategy identical to the one used for \(a_n\) in Example 10.2.
Exercise 10.4 The sequence \(b_n=\left(\frac{n+1}{n}\right)^{n+1}\) is monotone decreasing.
Now that we understand the behavior of \(b_n\) we can use it to prove that \(a_n\) is bounded above:
Corollary 10.1 The sequence \(a_n=\left(1+\frac{1}{n}\right)^n\) is convergent.
Proof. Note that the sequences \(a_n\) and \(b_n\) are related by \[b_n=\left(\frac{n+1}{n}\right)^{n+1}=a_n\left(\frac{n+1}{n}\right)\] Since \(\frac{n+1}{n}>1\), we see that \(b_n>a_n\) for all \(n\). But \(b_n\) is decreasing, so \(b_n\leq b_1=2^2=4\), and so \(a_n\) is bounded above by \(4\). Since \(a_n\) is also monotone increasing (Example 10.2), the monotone convergence theorem guarantees that it converges.
Note that we can also readily see that \(b_n\) is itself convergent (though we did not actually need that fact for our analysis of \(a_n\)): it is monotone decreasing, and it is a sequence of positive terms, so it is trivially bounded below by zero!
We can also see that \(a_n\) and \(b_n\) have the same limit, using the limit theorems. Since \(\frac{1}{n}\to 0\), we know that \(1+\frac{1}{n}\to 1\), and hence that
\[\begin{align*} \lim b_n &= \lim \left[a_n\left(\frac{n+1}{n}\right) \right]\\ &= (\lim a_n)\cdot\left(\lim \frac{n+1}{n}\right) \\ &=\lim a_n \end{align*}\]
As mentioned previously, we will later see that this limit is the number called \(e\). But believing for a moment that we should be interested in this particular limit, having the two sequences \(a_n\) and \(b_n\) lying around proves quite useful in practice for estimating its value.
Since \(\lim a_n = e = \lim b_n\) and \(a_n<b_n\) for all \(n\), we see that the number \(e\) is contained in each interval \(I_n=[a_n,b_n]\), and hence is the single point in the intersection of these nested intervals:
Corollary 10.2 \[\{e\}=\bigcap_{n\geq 1}\left[\left(1+\frac{1}{n}\right)^n,\left(1+\frac{1}{n}\right)^{n+1}\right]\]
Taking any finite \(n\), this interval gives us both an upper and lower bound for \(e\): for example
\[n=10 \implies 2.59374\leq e\leq 2.85311\] \[n=100 \implies 2.7048\leq e\leq 2.73186\] \[n=1,000\implies 2.71692\leq e\leq 2.71964\] \[n=1,000,000\implies 2.71826\leq e\leq 2.71829\]
Thus we know the decimal expansion of \(e\) begins \(2.7182\ldots\)
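These bounds are easy to recompute; here is a short sketch (ordinary floating point, so the printed digits are only as trustworthy as the arithmetic) reproducing the table above.

```python
# Reproduce the bracketing intervals [a_n, b_n] containing e, where
# a_n = (1 + 1/n)**n and b_n = (1 + 1/n)**(n + 1)  (floating point only).

for n in [10, 100, 1_000, 1_000_000]:
    lower = (1 + 1 / n) ** n
    upper = (1 + 1 / n) ** (n + 1)
    print(f"n = {n:>9,}:  {lower:.5f} <= e <= {upper:.5f}")
```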
10.3 Application: Defining Irrational Powers
We have already defined rational powers of a number in terms of iterated multiplication/division and the extraction of roots: but how does one define a power with an arbitrary real exponent? We can use sequences to do this! To motivate the construction, let's consider the example of defining \(2^\pi\). We can write \(\pi\) as the limit of a sequence of rational numbers, for instance
\[3,\,3.1,\,3.14,\,3.141,\,3.1415\,\ldots\]
And since rational exponents make sense, from this we can produce a sequence of exponentials
\[2^3,\, 2^{\frac{31}{10}},\, 2^{\frac{314}{100}},\,2^{\frac{3141}{1000}},\,2^{\frac{31415}{10000}},\ldots\]
Then we may ask if this sequence has a limit: if it does, it is natural to define \(2^\pi\) to be this limiting value. To make sure this makes sense, we need to address several potential worries:
- Does this sequence converge?
- Does the limit depend on the particular sequence chosen?
For example, if you tried to define \(3^{\sqrt{2}}\) using the Babylonian sequence for \(\sqrt{2}\), and your friend tried to use the sequence coming from its continued fraction expansion, you had better get the same number if this is a reasonable thing to define! Because we are in the section on monotone convergence, we will restrict ourselves for the moment to monotone sequences, though we will see later that this restriction can be dispensed with if desired.
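To make both worries concrete, here is a small illustrative sketch (floating-point exponentiation stands in for the exact rational powers of the text): it approximates \(2^\pi\) using two different monotone sequences of rational exponents, the decimal truncations of \(\pi\) from below and the corresponding round-ups from above, and the two columns visibly close in on the same value.

```python
import math

# Approximate 2**pi with two different monotone sequences of rational exponents:
# decimal truncations of pi from below (monotone increasing) and the corresponding
# round-ups from above (monotone non-increasing).  Floating point only.
for digits in range(1, 8):
    below = math.floor(math.pi * 10**digits) / 10**digits        # e.g. 3.1, 3.14, 3.141, ...
    above = (math.floor(math.pi * 10**digits) + 1) / 10**digits   # e.g. 3.2, 3.15, 3.142, ...
    print(digits, 2**below, 2**above)

print("2**math.pi for comparison:", 2**math.pi)
```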
Proposition 10.2 If \(r_n\) is a monotone sequence of rational numbers converging to \(x\), and \(a>0\), then the sequence \(a^{r_n}\) converges.
Proof. Recall that for a fixed base \(a>1\), exponentiation by rational numbers is monotone increasing, so \(r<s\) implies \(a^r<a^s\). (If \(a=1\) the sequence \(a^{r_n}\) is constant, and if \(0<a<1\) the inequalities below reverse; we treat the case \(a>1\), the others being analogous.)
Thus, given a monotone sequence \(r_n\), the exponentiated sequence \(a^{r_n}\) remains monotone (for \(r_n\) monotone increasing we see \(r_n\leq r_{n+1}\implies a^{r_n}\leq a^{r_{n+1}}\), and the inequalities are reversed if \(r_n\) is monotone decreasing).
Now that we know \(a^{r_n}\) is monotone, we only need to see that it is bounded in order to apply monotone convergence. Again we have two cases, and we deal here with the monotone increasing case. As \(r_n\to x\) and \(x\) is a real number, there must be some natural number \(N>x\). Thus \(N\) is greater than \(r_n\) for all \(n\), and so \(a^N\) is greater than \(a^{r_n}\): our sequence is bounded above by \(a^N\). Thus all the hypotheses of monotone convergence are satisfied, and \(\lim a^{r_n}\) exists.
Now that we know such sequences make sense, we wish to clear up any potential ambiguity, and show that if two different sequences both converge to \(x\), the value we attempt to assign to \(a^x\) as a limit is the same for each. As a lemma in this direction, we look at sequences converging to zero.
Exercise 10.5 Let \(r_n\) be any sequence of rationals converging to zero. Then for any \(a>0\) we have \[\lim a^{r_n}=1\]
Corollary 10.3 If \(r_n,s_n\) are two monotone sequences of rationals each converging to \(x\), then \[\lim a^{r_n}=\lim a^{s_n}\] for any \(a>0\).
Proof. Let \(z_n=r_n-s_n\), so that \(z_n\to 0\). Because \(r_n\) and \(s_n\) are monotone, we know \(\lim a^{r_n}\) and \(\lim a^{s_n}\) exist. And by the exercise above, we have \(a^{z_n}\to 1\). Noting that \(r_n=s_n+z_n\) and that the laws of exponents apply for rational exponents, we have \[a^{r_n}=a^{s_n+z_n}=a^{s_n}a^{z_n}\] But as all quantities in question converge we can use the limit theorems to compute:
\[\begin{align*} \lim a^{r_n}&=\lim a^{s_n+z_n}\\ &=\lim a^{s_n}a^{z_n}\\ &=(\lim a^{s_n})(\lim a^{z_n})\\ &=\lim a^{s_n} \end{align*}\]
Thus, we can unambiguously define the value of \(a^x\) as the limit of \(a^{r_n}\) for any monotone sequence of rationals \(r_n\to x\), without specifying the sequence itself.
Definition 10.2 (Irrational Powers) Let \(a>0\) and \(x\in\RR\). Then we define \(a^x\) as a limit \[a^x=\lim a^{r_n}\] for \(r_n\) any monotone sequence of rational numbers converging to \(x\).
Perhaps upon reading this definition to yourself you wonder, is the restriction to monotone sequences important, or just an artifact of our currently limited toolset?
Once we build more tools we will see the latter is the case; you will show on homework that arbitrary convergent sequences \(r_n\to x\) can be used to unambiguously define \(a^x\).