$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} % ALTERNATE VERSIONS % \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} % \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} % \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} % \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} % \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\uppersum}[1]{U_{#1}} \newcommand{\lowersum}[1]{L_{#1}} \newcommand{\upperint}[1]{U_{#1}} \newcommand{\lowerint}[1]{L_{#1}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\sampleset}[1]{\mathcal{S}_{#1}} \newcommand{\erf}{\operatorname{erf}} $$

9 Calculation

Highlights of this Chapter: We develop techniques for bounding limits by inequalities, and computing limits using the field axioms. We use these techniques to prove two interesting results:

  • The Babylonian sequence approximating \(\sqrt{2}\) truly does converge to this value.
  • Given any real number, there exists a sequence of rational numbers converging to it.

Now that we have a handle on the definition of convergence and divergence, our goal is to develop techniques to avoid using the definition directly, wherever possible (finding values of \(N\) for an arbitrary \(\epsilon\) is difficult, and not very enlightening!)

The natural first set of questions to investigate then are how our new definition interacts with the ordered field axioms: can we learn anything about limits and inequalities, or limits and field operations? We tackle both of these in turn below.

9.1 Limits and Inequalities

Proposition 9.1 (Limits of nonnegative sequences) Let \(a_n\) be a convergent sequence of nonnegative numbers. Then \(\lim a_n\) is nonnegative.

Proof. Assume for the sake of contradiction that \(a_n\to L\) but \(L<0\). Since \(L\) is negative, we can find a small enough epsilon (say, \(\epsilon = |L|/2\)) such that the entire interval \((L-\epsilon,L+\epsilon)\) consists of negative numbers.

The definition of convergence says for this \(\epsilon\), there must be an \(N\) where for all \(n>N\) we know \(a_n\) lies in this interval. Thus, we’ve concluded that for large enough \(n\), \(a_n\) must be negative! This is a contradiction, as \(a_n\) is a nonnegative sequence.

Exercise 9.1 Prove that if \(a_n\) is a convergent sequence with \(a_n\geq L\) for all \(n\), then \(\lim a_n\geq L\). Similarly, prove that if \(a_n\) is a convergent sequence with \(a_n\leq U\) for all \(n\), then \(\lim a_n\leq U\).

This exercise provides the following useful corollary, telling you that if you can bound a sequence, you can bound its limit.

Corollary 9.1 (Inequalities and Convergence) If \(a_n\) is a convergent sequence with \(L\leq a_n\leq U\) for all \(n\), then \[L\leq \lim a_n\leq U\]

In fact, a kind of converse of this is true as well: if a sequence converges, then its limit exists as a real number, and real numbers are finite. This is enough to conclude that the entire sequence is bounded!

Proposition 9.2 (Convergent Sequences are Bounded) Let \(s_n\) be a convergent sequence. Then there exists a \(B\) such that \(|s_n|<B\) for all \(n\in\NN\).

Proof. Let \(s_n\to L\) be a convergent sequence. Then we know for any \(\epsilon>0\) eventually the sequence stays within \(\epsilon\) of \(L\). So for example, choosing \(\epsilon=1\), this means there is some \(N\) where for \(n>N\) we are assured \(|s_n-L|<1\), or equivalently \(-1<s_n-L<1\). Adding \(L\),

\[L-1< s_n<L+1\]

Thus, we have both upper and lower bounds for the sequence after \(N\) and all we are left to worry about is the finitely many terms before this. For an upper bound on these we can just take the max of \(s_1,\ldots, s_N\) and for a lower bound we can take the min.

Thus, to get an overall upper bound, we can take \[M=\max\{s_1,s_2,\ldots, s_N, L+1\}\]

and for an overall lower bound we can take

\[m =\min\{s_1,s_2,\ldots, s_N, L-1\}\]

Then for all \(n\) we have \(m\leq s_n\leq M\); taking \(B=\max\{|m|,|M|\}+1\) gives \(|s_n|<B\) for all \(n\), so the sequence \(s_n\) is bounded.
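The construction in this proof can be carried out concretely. Below is a minimal Python sketch using the illustrative convergent sequence \(s_n = 5 + 2(-0.9)^n\) (not from the text), with \(\epsilon=1\) exactly as in the proof:

```python
# Illustrative convergent sequence: s_n = 5 + 2(-0.9)^n, with limit L = 5.
def s(n):
    return 5 + 2 * (-0.9) ** n

L = 5
# Find N with |s_n - L| < 1 for all n > N (epsilon = 1, as in the proof).
# Since |s_n - L| = 2 * 0.9^n is decreasing, checking the first term suffices.
N = 1
while abs(s(N + 1) - L) >= 1:
    N += 1

# The overall bounds, exactly as constructed in the proof:
M = max([s(n) for n in range(1, N + 1)] + [L + 1])
m = min([s(n) for n in range(1, N + 1)] + [L - 1])
assert all(m <= s(n) <= M for n in range(1, 1000))
print(N, m, M)
```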

Theorem 9.1 (The Squeeze Theorem) Let \(a_n,b_n\) and \(c_n\) be sequences with \(a_n\leq b_n \leq c_n\) for all \(n\). Then if \(a_n\) and \(c_n\) are convergent, with \(\lim a_n=\lim c_n=L\), then \(b_n\) is also convergent, and \[\lim b_n=L\]

Proof. See Theorem 3.23 on page 87 of the textbook.

9.1.1 Example Computations

The squeeze theorem is incredibly useful in practice as it allows us to prove the convergence of complicated looking sequences by replacing them with two (hopefully simpler) sequences, an upper and lower bound. To illustrate, let’s look back at Exercise 8.6, and re-prove its convergence.

Example 9.1 (\(\frac{n}{n^2+1}\) converges to \(0\).) Since we are trying to converge to zero, we want to bound this sequence above and below by sequences that converge to zero. Since \(n\) is always positive, a natural lower bound is the constant sequence \(0,0,0,\ldots\).

One first thought for an upper bound may be \(\frac{n}{n+1}\): it’s easy to prove that \(\frac{n}{n^2+1}<\frac{n}{n+1}\) (as we’ve made the denominator smaller), and so we have bounded our sequence \(0<a_n<\frac{n}{n+1}\). Unfortunately this does not help us, as \(\lim \frac{n}{n+1}=1\) (Exercise 8.5) so the two bounds do not squeeze \(a_n\) to zero!

Another attempt at an upper bound may be \(1/n\): we know this goes to zero (Proposition 8.1) and it is also an upper bound: \[\frac{n}{n^2+1}<\frac{n}{n^2}=\frac{1}{n}\]

Thus since \(\lim 0=0\) and \(\lim \frac{1}{n}=0\), we can conclude via squeezing that \(\lim\frac{n}{n^2+1}=0\) as well.
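The squeeze \(0\leq \frac{n}{n^2+1}\leq \frac{1}{n}\) is also easy to sanity-check numerically (an illustration, not a substitute for the argument above):

```python
# Verify the squeeze 0 <= n/(n^2+1) <= 1/n for the first ten thousand terms.
for n in range(1, 10001):
    a_n = n / (n**2 + 1)
    assert 0 <= a_n <= 1 / n
print(10000 / (10000**2 + 1))  # the term at n = 10000, on the order of 1e-4
```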

This theorem is particularly useful for calculating limits involving functions whose values are difficult to compute. While we haven’t formally introduced the sine function yet in this class, we know (and will later confirm) that \(-1\leq \sin(x)\leq 1\) for all \(x\in\RR\). We can use this to compute many otherwise difficult limits:

Example 9.2 (\(s_n=\frac{\sin n}{n}\) converges to \(0\).) Since \(-1\leq \sin(x)\leq 1\) for all \(x\), dividing through by \(n>0\) gives \[-\frac{1}{n}\leq \frac{\sin n}{n}\leq \frac{1}{n}\]

Since both of these bounding sequences converge to zero, we know the original does as well, by the squeeze theorem.
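A quick numerical check of the two-sided bound \(|\sin n|/n\leq 1/n\) (illustration only):

```python
import math

# Verify -1/n <= sin(n)/n <= 1/n for the first five thousand terms.
for n in range(1, 5001):
    assert -1 / n <= math.sin(n) / n <= 1 / n
print(abs(math.sin(5000) / 5000))  # no bigger than 1/5000 = 2e-4
```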

This sort of estimation can be applied to even quite complicated looking limits:

Example 9.3 Compute the following limit: \[\lim \left(\frac{n^2\sin(n^3-2n+1)}{n^3+n^2+n+1}\right)^n\]

Let’s begin by estimating as much as we can: we know \(|\sin(x)|\leq 1\), so we can see that

\[\left |\frac{n^2\sin(n^3-2n+1)}{n^3+n^2+n+1}\right|\leq\frac{n^2}{n^3+n^2+n+1}\]

Next, we see that by shrinking the denominator we can produce yet another overestimate:

\[\frac{n^2}{n^3+n^2+n+1}<\frac{n^2}{n^3}=\frac{1}{n}\]

Bringing back the \(n^{th}\) power:

\[\left |\frac{n^2\sin(n^3-2n+1)}{n^3+n^2+n+1}\right|^n<\frac{1}{n^n} \]

And, unpacking the definition of absolute value:

\[-\frac{1}{n^n}<\left(\frac{n^2\sin(n^3-2n+1)}{n^3+n^2+n+1}\right)^n<\frac{1}{n^n}\]

It now suffices to prove that \(1/n^n\) converges to zero, as we’ve squeezed our sequence with it. But this is easiest to do with another squeeze: namely, since \(n^n\geq 2^n\) for all \(n\geq 2\) we see \(0<1/n^n\leq 1/2^n\) for such \(n\), and we already proved that \(1/2^n\to 0\), so we’re done!

\[\lim \left(\frac{n^2\sin(n^3-2n+1)}{n^3+n^2+n+1}\right)^n=0\]
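As a numerical sanity check of the squeeze we just carried out (an illustration, not part of the proof):

```python
import math

# Check |n^2 sin(n^3 - 2n + 1) / (n^3 + n^2 + n + 1)| <= 1/n, so the n-th
# powers are trapped between -1/n^n and 1/n^n.
for n in range(1, 21):
    base = n**2 * math.sin(n**3 - 2 * n + 1) / (n**3 + n**2 + n + 1)
    assert abs(base) <= 1 / n
    assert abs(base) ** n <= 1 / n**n
print((1 / 5) ** 5)  # the squeezing bound 1/n^n at n = 5 is already about 3.2e-4
```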

Exercise 9.2 Use the squeeze theorem to prove that \[\lim\left(\frac{n^3-2-\frac{1}{n^3}}{3n^3+5}\right)^{2n+7}=0\]

A nice corollary of the squeeze theorem tells us when a sequence converges by estimating its difference from the proposed limit:

Exercise 9.3 Let \(a_n\) be a sequence, and \(L\) be a real number. If there exists a sequence \(\alpha_n\) where \(|a_n-L|\leq \alpha_n\) for all \(n\), and \(\alpha_n\to 0\), then \(\lim a_n=L\).

This is useful as unpacking the definition of absolute value (Definition 4.5), a sequence \(\alpha_n\) with \[-\alpha_n\leq a_n-L\leq \alpha_n\] can be thought of as giving “error bounds” on the difference of \(a_n\) from \(L\). In this language, the proposition says if we can bound the error between \(a_n\) and \(L\) by a sequence going to zero, then \(a_n\) must actually go to \(L\).

9.2 Limits and Field Operations

Just like inequalities, the field operations themselves play nicely with limits.

Theorem 9.2 (Constant Multiples) Let \(s_n\) be a convergent sequence, and \(k\) a real number. Then the sequence \(ks_n\) is convergent, and \[\lim ks_n=k\lim s_n\]

Proof. We distinguish two cases, depending on \(k\). If \(k=0\), then \(ks_n\) is just the constant sequence \(0,0,0\ldots\) and \(k\lim s_n=0\) as well, so the theorem is true.

If \(k\neq 0\), we proceed as follows. Denote the limit of \(s_n\) by \(L\), and let \(\epsilon>0\). Choose \(N\) such that \(n>N\) implies \(|s_n-L|<\frac{\epsilon}{|k|}\) (we can do so, as \(s_n\to L\)). Now, for this same value of \(N\), choose arbitrary \(n>N\) and consider the difference \(|ks_n-kL|\):

\[|ks_n-kL|=|k(s_n-L)|=|k||s_n-L|< |k|\frac{\epsilon}{|k|}=\epsilon\]

Thus, \(ks_n\to kL\) as claimed!

To do a similar calculation for the sum of sequences requires an \(\epsilon/2\) type argument:

Theorem 9.3 (Limit of a Sum) Let \(s_n,t_n\) be convergent sequences. Then the sequence of term-wise sums \(s_n+t_n\) is convergent, with \[\lim (s_n+t_n)=\lim s_n+\lim t_n\]

Exercise 9.4 (Limit of Sums and Differences) Prove Theorem 9.3, that if \(s_n\) and \(t_n\) converge so does \(s_n+t_n\) and \[\lim (s_n+t_n)=\lim s_n + \lim t_n\]

Use this together with other limit theorems to prove the same holds for differences: \(s_n-t_n\) also converges, and \[\lim (s_n-t_n)=\lim s_n - \lim t_n\]

The case of products is a little more annoying to prove, but the end result is the same: the limit of a product is the product of the limits.

Theorem 9.4 (Limit of a Product) Let \(s_n,t_n\) be convergent sequences. Then the sequence of term-wise products \(s_nt_n\) is convergent, with \[\lim (s_nt_n)=\left(\lim s_n\right)\left(\lim t_n\right)\]

Proof (Sketch). Let \(s_n\to S\) and \(t_n\to T\) be two convergent sequences and choose \(\epsilon>0\). We wish to find an \(N\) beyond which we know \(s_nt_n\) lies within \(\epsilon\) of \(ST\).

To start, we consider the difference \(|s_nt_n-ST|\) and we add zero in a clever way:

\[|s_nt_n-ST|=|s_nt_n-s_nT+s_nT-ST|=|(s_nt_n-s_nT)+(s_nT-ST)|\]

applying the triangle inequality we can break this apart

\[|s_nt_n-ST|\leq |s_nt_n-s_nT|+|s_nT-ST|=|s_n||t_n-T|+|s_n-S||T|\]

The second term here is easy to bound: if \(T=0\) then it’s literally zero, and if \(T\neq 0\) then we can make it as small as we want: we know \(s_n\to S\), so we can make \(|s_n-S|\) smaller than anything we need (like \(\epsilon/|T|\), or even \(\epsilon/2|T|\) if necessary).

For the first term, we see it includes a factor of the form \(|t_n-T|\), which we know we can make as small as we need by choosing sufficiently large \(N\). But it’s being multiplied by \(|s_n|\), so we need to make sure the whole product can be made small: what if \(|s_n|\) gets really big? This isn’t actually a worry: we know \(s_n\) is convergent, so it’s bounded (Proposition 9.2), meaning there is some \(B\) where \(|s_n|<B\) for all \(n\). Now we can make \(|t_n-T|\) as small as we like (say, smaller than \(\epsilon/B\) or \(\epsilon/2B\), or whatever we need).

Since each of these terms can be made small as we need individually, choosing large enough \(n\)’s we can make them both simultaneously small, so the whole difference \(|s_nt_n-ST|\) is small (less than \(\epsilon\)) which proves convergence.
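The key estimate in this sketch can be tested numerically. Below, the sequences \(s_n = 2 + 1/n\) and \(t_n = 3 - 1/n\) and the bound \(B=3\) are illustrative choices, not from the text:

```python
# Numerical check of the sketch's key estimate (illustration only):
# |s_n t_n - S T| <= B|t_n - T| + |T||s_n - S|, where B bounds |s_n|.
S, T = 2, 3
B = 3  # |s_n| <= 3 for all n, the kind of bound guaranteed by Proposition 9.2
for n in range(1, 1001):
    s_n = 2 + 1 / n
    t_n = 3 - 1 / n
    assert abs(s_n * t_n - S * T) <= B * abs(t_n - T) + abs(T) * abs(s_n - S) + 1e-12
```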

Exercise 9.5 Write the sketch of an argument above in the right order, as a formal proof.

Corollary 9.2 If \(p\) is a positive integer then \[\lim \frac{1}{n^p}=0\] Hint: Induction on the power \(p\)

The next natural case to consider after sums and differences and products is quotients. We begin by considering the limit of a reciprocal:

Proposition 9.3 (Limit of a Reciprocal) Let \(s_n\) be a convergent nonzero sequence with a nonzero limit. Then the sequence \(1/s_n\) of reciprocals is convergent, with \[\lim\frac{1}{s_n}=\frac{1}{\lim s_n}\]

Proof (Sketch). Write \(s=\lim s_n\). For any \(\epsilon>0\), we want to show that when \(n\) is very large, we can make \[\left|\frac{1}{s_n}-\frac{1}{s}\right|<\epsilon\]

We can get a common denominator and rewrite this as \[\left|\frac{1}{s_n}-\frac{1}{s}\right|=\frac{|s-s_n|}{|ss_n|}\]

Since \(s_n\) is not converging to zero, we should be able to bound it away from zero: that is, find some \(m\) such that \(|s_n|>m\) for all \(n\in\NN\) (we’ll have to prove we can actually do this). Given such an \(m\) we see the denominator \(|ss_n|>m|s|\), and so \[\left|\frac{1}{s_n}-\frac{1}{s}\right|<\frac{|s_n-s|}{m|s|}\] We want this less than \(\epsilon\) so all we need to do is choose \(N\) big enough that \(|s_n-s|\) is less than \(\epsilon m|s|\) and we’re good.

Exercise 9.6 Turn the sketch argument for \(\lim\frac{1}{s_n}=\frac{1}{\lim s_n}\) in Proposition 9.3 into a formal proof.

From here, it’s quick work to understand the limit of a general quotient.

Theorem 9.5 (Limit of a Quotient) Let \(s_n,t_n\) be convergent sequences, with \(t_n\neq 0\) and \(\lim t_n\neq 0\). Then the sequence \(s_n/t_n\) of quotients is convergent, with \[\lim \frac{s_n}{t_n}=\frac{\lim s_n}{\lim t_n}\]

Proof. Since \(t_n\) converges to a nonzero limit, by Proposition 9.3 we know that \(1/t_n\) converges, with limit \(1/\lim t_n\). Now, we can use Theorem 9.4 for the product \(s_n\cdot \frac{1}{t_n}\):

\[\lim\frac{s_n}{t_n}=\lim\left(s_n\cdot \frac{1}{t_n}\right)=\left(\lim s_n\right)\left(\lim \frac{1}{t_n}\right)\] \[=\left(\lim s_n\right)\frac{1}{\lim t_n}=\frac{\lim s_n}{\lim t_n}\]
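As a quick numerical illustration of Theorem 9.5 (with sequences chosen for the example, not from the text):

```python
# Illustrative check: s_n = (2n+1)/n -> 2 and t_n = (3n-1)/n -> 3,
# so by Theorem 9.5 the quotient s_n/t_n should approach 2/3.
def s(n):
    return (2 * n + 1) / n

def t(n):
    return (3 * n - 1) / n

for n in [10, 100, 10000]:
    print(n, s(n) / t(n))
```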

Finally we look at square roots. We have already proven in Theorem 6.9 that nonnegative numbers have square roots, and so given a nonnegative sequence \(s_n\) we can consider the sequence \(\sqrt{s_n}\) of its roots. Below we see that the limit concept respects roots just as it does the other field operations:

Theorem 9.6 (Root of Convergent Sequence) Let \(s_n> 0\) be a convergent sequence, and \(\sqrt{s_n}\) its sequence of square roots. Then \(\sqrt{s_n}\) is convergent, with \[\lim \sqrt{s_n}=\sqrt{\lim s_n}\]

Proof (Sketch). Assume \(s_n\to s\), and fix \(\epsilon>0\). We seek an \(N\) where \(n>N\) implies \(|\sqrt{s_n}-\sqrt{s}|<\epsilon\). This looks hard at first, because the fact we know is about \(s_n-s\), while the fact we need is about \(\sqrt{s_n}-\sqrt{s}\).
But what if we multiply and divide by \(\sqrt{s_n}+\sqrt{s}\) so we can simplify using the difference of squares?

\[|\sqrt{s_n}-\sqrt{s}|\frac{\sqrt{s_n}+\sqrt{s}}{\sqrt{s_n}+\sqrt{s}}=\frac{|s_n-s|}{\sqrt{s_n}+\sqrt{s}}\]

This has the quantity \(|s_n-s|\) that we know about in it! We know we can make this as small as we like by the assumption \(s_n\to s\), so as long as the denominator does not go to zero, we can make this happen!

Proof (Formal). Let \(s_n\) be a positive sequence with \(s_n\to s\) and assume \(s\neq 0\) (we leave that case for the exercise below). Let \(\epsilon>0\), and choose \(N\) such that if \(n>N\) we have \(|s_n-s|<\epsilon\sqrt{s}\).

Now for any \(n\), rationalizing the numerator we see \[|\sqrt{s_n}-\sqrt{s}|=\frac{|s_n-s|}{\sqrt{s_n}+\sqrt{s}}<\frac{|s_n-s|}{\sqrt{s}}\]

where the last inequality comes from the fact that \(\sqrt{s_n}>0\), so \(\sqrt{s_n}+\sqrt{s}>\sqrt{s}\). When \(n>N\) we can use the hypothesis that \(s_n\to s\) to see \[|\sqrt{s_n}-\sqrt{s}|<\frac{|s_n-s|}{\sqrt{s}}<\frac{\epsilon\sqrt{s}}{\sqrt{s}}=\epsilon\]

And so, \(\sqrt{s_n}\) is convergent, with limit \(\sqrt{s}\).
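The proof’s key estimate \(|\sqrt{s_n}-\sqrt{s}|\leq |s_n-s|/\sqrt{s}\) can be checked numerically; here \(s_n = 2 + 1/n\) is an illustrative choice converging to \(s=2\):

```python
import math

# Check |sqrt(s_n) - sqrt(s)| <= |s_n - s| / sqrt(s) on s_n = 2 + 1/n, s = 2.
s = 2
for n in range(1, 10001):
    s_n = 2 + 1 / n
    assert abs(math.sqrt(s_n) - math.sqrt(s)) <= abs(s_n - s) / math.sqrt(s) + 1e-15
```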

Exercise 9.7 Prove that if \(s_n\to 0\) is a sequence of nonnegative numbers, that the sequence of roots also converges to zero \(\sqrt{s_n}\to 0\).

Hint: you don’t need to rationalize the numerator or do fancy algebra like above.

Together this suite of results provides an effective means of calculating limits from simpler pieces. They are often referred to together as the limit theorems.

Theorem 9.7 (The Limit Theorems) Let \(a_n\) and \(b_n\) be any two convergent sequences, and \(k\in\RR\) a constant. Then \[\lim ka_n=k\lim a_n\] \[\lim (a_n\pm b_n)=(\lim a_n)\pm(\lim b_n)\] \[\lim a_nb_n=(\lim a_n)(\lim b_n)\]

If \(b_n\neq 0\) and \(\lim b_n\neq 0\), \[\lim \frac{a_n}{b_n}=\frac{\lim a_n}{\lim b_n}\] And, if \(a_n\geq 0\), then \(\sqrt{a_n}\) is convergent, with \[\lim \sqrt{a_n}=\sqrt{\lim a_n}\]

9.2.1 \(\bigstar\) Infinity

Given the formal definition of divergence to infinity, as meaning the sequence eventually gets larger than any fixed number, we can formulate analogs of the limit theorems for such divergent sequences. We will not need any of these in the main text, but it is good practice to attempt their proofs:

Exercise 9.8 If \(s_n\to\infty\) and \(k>0\) then \(ks_n\to\infty\).

Exercise 9.9 If \(t_n\) diverges to infinity, and \(s_n\) either converges, or also diverges to infinity, then \(s_n+t_n\to\infty\).

Exercise 9.10 If \(t_n\) diverges to infinity, and \(s_n\) either converges, or also diverges to infinity, then \(s_nt_n\to\infty\).

Note that there is not an analog of the division theorem: if \(s_n\to\infty\) and \(t_n\to\infty\), with only this knowledge we can learn nothing about the quotient \(s_n/t_n\).

Exercise 9.11 Give examples of sequences \(s_n,t_n\to\infty\) where \[\lim\frac{s_n}{t_n}=0\] \[\lim\frac{s_n}{t_n}=2\] \[\lim\frac{s_n}{t_n}=\infty\]

These limit laws are the precise statement behind the “rules” often seen in a calculus course, where students may write \(2+\infty=\infty\), \(\infty+\infty=\infty\), or \(\infty\cdot\infty=\infty\), but they may not write \(\infty/\infty\). (If you are looking at this last case and thinking l’Hospital, we’ll get there later in the text!)

9.2.2 Example Computations

Example 9.4 Compute the limit of the following sequence \(s_n\):

\[s_n=\frac{3n^3+\frac{n^6-2}{n^2+5}}{n^3-n^2+1}\]

Example 9.5 Compute the limit of the sequence \(s_n\) \[s_n=\sqrt{\frac{1}{2^n}+\sqrt{\frac{n^2-1}{n^2-n+1}}}\]
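By the limit theorems, the inner fraction \(\frac{n^2-1}{n^2-n+1}\) tends to \(1\) and \(\frac{1}{2^n}\) tends to \(0\), so applying the root theorem twice suggests the limit \(\sqrt{0+\sqrt{1}}=1\). A quick numerical sanity check in Python (an illustration, not a proof):

```python
# Numerically evaluate s_n = sqrt(1/2^n + sqrt((n^2-1)/(n^2-n+1))) for large n;
# the limit theorems predict convergence to 1.
def s(n):
    inner = (n**2 - 1) / (n**2 - n + 1)
    return (1 / 2**n + inner**0.5) ** 0.5

for n in [10, 100, 1000]:
    print(n, s(n))
```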

9.3 Applications

9.3.1 Babylon and \(\sqrt{2}\)

We know that \(\sqrt{2}\) exists as a real number (Theorem 6.7), and we know that the Babylonian procedure produces excellent rational approximations to this value (Exercise 1.5), in the precise sense that the numerator squares to just one more than twice the square of the denominator.

Now we finally have enough tools to combine these facts, and prove that the Babylonian procedure really does converge to \(\sqrt{2}\).

Theorem 9.8 Let \(s_n=\tfrac{p_n}{q_n}\) be a sequence of rational numbers where both \(p_n,q_n\to\infty\) and for each \(n\) we have \(p_n^2=2q_n^2+1\). Then \(s_n\to\sqrt{2}\).

Proof. We compute the limit of the sequence \(s_n^2\). Using that \(p_n^2=2q_n^2+1\) we can replace the numerator and do algebra to see \[s_n^2=\frac{p_n^2}{q_n^2}=\frac{2q_n^2+1}{q_n^2}=2+\frac{1}{q_n^2}.\]

Now, as by assumption \(q_n\to\infty\), we have that \(q_n^2=q_nq_n\) also diverges to infinity (Exercise 9.10), and so its reciprocal converges to \(0\) (Proposition 8.3). Thus, using the limit theorem for sums, \[\lim \frac{p_n^2}{q_n^2}=\lim \left(2+\frac{1}{q_n^2}\right)=2+\lim\frac{1}{q_n^2}=2\]

That is, the sequence of squares converges to \(2\). Now we apply Theorem 9.6 to this sequence \(s_n^2\), and conclude that

  • \(s_n = \sqrt{s_n^2}\) converges.
  • \(\lim s_n = \lim \sqrt{s_n^2}=\sqrt{\lim s_n^2}=\sqrt{2}\)

This provides a rigorous justification of the Babylonians’ assumption that if you are patient, and compute more and more terms of this sequence, you will always get better and better approximations of the square root of \(2\).
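The hypotheses of Theorem 9.8 can be verified concretely. One step of the Babylonian procedure replaces \(p/q\) with \(\frac{p^2+2q^2}{2pq}\), the average of \(p/q\) and \(2q/p\) (this averaging form is the standard presentation of the procedure; the starting value \(3/2\) below is an illustrative choice). A short Python sketch confirms that the invariant \(p_n^2=2q_n^2+1\) persists and the quotients approach \(\sqrt 2\):

```python
from fractions import Fraction

# One Babylonian step: p/q -> (p^2 + 2q^2)/(2pq), the average of p/q and 2q/p.
# Starting from 3/2, the invariant p^2 = 2q^2 + 1 of Theorem 9.8 persists.
p, q = 3, 2
for _ in range(5):
    assert p * p == 2 * q * q + 1
    p, q = p * p + 2 * q * q, 2 * p * q

approx = Fraction(p, q)
print(float(approx))  # agrees with sqrt(2) to double precision
```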

Exercise 9.12 Build a sequence that converges to \(\sqrt{n}\) by following the babylonian procedure, starting with a rectangle of area \(n\).

9.3.2 Rational and Irrational Sequences

Combining the squeeze theorem and limit theorems with the density of the (ir)rationals allows us to prove the existence of certain sequences that will prove quite useful:

Theorem 9.9 For every \(x\in\RR\) there exists a sequence \(r_n\) of rational numbers with \(r_n\to x\).

Proof. Let \(x\in\RR\) be arbitrary, and consider the sequence \(x+\frac{1}{n}\). Because the constant sequence \(x,x,x,\ldots\) and the sequence \(1/n\) are convergent, by the limit theorem for sums we know \(x+\frac{1}{n}\) is convergent and \[\lim \left(x + \frac{1}{n}\right)= x+\lim \frac{1}{n}=x\]

Now for each \(n\in\NN\), by the density of the rationals we can find a rational number \(r_n\) with \(x<r_n<x+\frac{1}{n}\). This defines a sequence of rational numbers squeezed between \(x\) and \(x+\frac{1}{n}\): thus, by the squeeze theorem we have

\[x<r_n<x+\frac{1}{n}\,\implies \,\lim r_n = x\]
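The proof’s construction can be made explicit. For the illustrative target \(x=\sqrt 2\), the rational \(r_n=\frac{\lfloor nx\rfloor+1}{n}\) lies in the interval \((x, x+\frac{1}{n}]\), one concrete way to realize the density of the rationals (with a non-strict right endpoint, which squeezes just as well):

```python
import math
from fractions import Fraction

# Explicit rationals converging to x = sqrt(2): r_n = (floor(n*x) + 1)/n
# satisfies x < r_n <= x + 1/n, so r_n -> x by the squeeze theorem.
x = Fraction(math.sqrt(2))  # the exact rational value of the closest double to sqrt(2)
for n in [1, 10, 100, 1000]:
    r = Fraction(math.floor(n * x) + 1, n)
    assert x < r <= x + Fraction(1, n)
    print(n, r, float(r - x))
```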

Through a similar argument using Exercise 6.7 we find the existence of a sequence of irrational numbers converging to any real number.

Exercise 9.13 For every \(x\in\RR\) there exists a sequence \(y_n\) of irrationals with \(y_n\to x\).