2 Dubious Computations
We already saw, when visiting infinite processes from antiquity, that it is very easy to get confused and derive a contradiction when working with infinity. But on the other hand, infinite arguments turn out to be so useful that they are irresistible! Certain objects, like $\pi$ and the trigonometric functions, seem to be out of reach without them.
2.1 Convergence, Concern and Contradiction
2.1.1 Madhava, Leibniz & $\pi$
Madhava was an Indian mathematician who discovered many infinite expressions for trigonometric functions in the 1300s, results which today are known as Taylor series after Brook Taylor, who worked with them in 1715. In a particularly important example, Madhava found a formula to calculate the arc length along a circle in terms of the tangent: or, phrased more geometrically, the arc of a circle contained in a triangle, in terms of the triangle's side lengths.
The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, …. The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank.
As an equation, this gives
$$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$$
If we take the arclength to be one eighth of the full circle, so that the tangent equals $1$, this specializes to the famous series
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$$
This result was also derived by Leibniz (one of the founders of modern calculus), using a method close to something you might see in Calculus II these days. It goes as follows: we know (say, from the last chapter) the sum of the geometric series
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$$
Thus, substituting in $-x^2$ for $x$,
$$1 - x^2 + x^4 - x^6 + \cdots = \frac{1}{1+x^2}$$
and the right hand side of this is the derivative of arctangent! So, anti-differentiating both sides of the equation yields
$$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$$
Finally, we take this result and plug in $x = 1$: since $\arctan(1) = \pi/4$, we recover Madhava's series
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$$
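The final series is easy to probe numerically. Here is a short Python sketch (the function name is our own) summing its first terms:

```python
import math

def leibniz_partial_sum(n):
    """Sum the first n terms of 1 - 1/3 + 1/5 - 1/7 + ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

print(4 * leibniz_partial_sum(100_000))   # close to pi, though convergence is very slow
print(math.pi)
```

Note how many terms are needed for just a few digits: the series converges, but extremely slowly.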
This argument is full of steps that should make us worried:
- Why can we substitute a variable into an infinite expression and ensure it remains valid?
- Why is the derivative of arctan a rational function?
- Why can we integrate an infinite expression?
- Why can we switch the order of taking an infinite sum, and integration?
- How do we know which values of $x$ the resulting equation is valid for?
But beyond all of this, we should be even more worried if we try to plot the graphs of the partial sums of this supposed formula for the arctangent.
The infinite series we derived seems to match the arctangent exactly for a while, and then abruptly stop and shoot off to infinity. Where does it stop? *Right at the point we are interested in, $x = 1$!*
And perhaps, before concluding that the eventual answer will simply be that such series always converge right at the endpoints of their interval, it turns out that the series we derive in the next section converges at one of its endpoints but diverges at the other.
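This endpoint behavior is easy to see numerically. Here is a short Python sketch (names are ours) comparing partial sums of the series with the true arctangent, inside and just outside the interval:

```python
import math

def arctan_partial_sum(x, n):
    """Sum the first n terms of x - x^3/3 + x^5/5 - x^7/7 + ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n))

# Inside the interval, the partial sums home in on arctan(x)...
print(arctan_partial_sum(0.5, 30), math.atan(0.5))

# ...but just past the endpoint, adding more terms makes things worse, not better.
print(arctan_partial_sum(1.1, 40), arctan_partial_sum(1.1, 80))
```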
2.1.2 Dirichlet & Rearrangements
In 1827, Dirichlet was studying the sums of infinitely many terms, thinking about the alternating harmonic series
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots$$
Like the previous example, this series naturally emerges from manipulations in calculus: beginning once more with the geometric series and substituting $-x$ in for $x$,
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots$$
and then anti-differentiating both sides,
$$\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots$$
Finally, plugging in $x = 1$:
$$\log(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$$
What happens if we multiply both sides of this equation by $2$? Distributing term by term,
$$2\log(2) = 2 - 1 + \frac{2}{3} - \frac{1}{2} + \frac{2}{5} - \frac{1}{3} + \frac{2}{7} - \frac{1}{4} + \cdots$$
We can simplify this expression a bit, by re-ordering the terms to combine similar ones:
$$2\log(2) = (2 - 1) - \frac{1}{2} + \left(\frac{2}{3} - \frac{1}{3}\right) - \frac{1}{4} + \left(\frac{2}{5} - \frac{1}{5}\right) - \frac{1}{6} + \cdots$$
After simplifying, we've returned to exactly the same series we started with! That is, we've shown
$$2\log(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \log(2),$$
which would force the absurd conclusion $2 = 1$.
What does this tell us? Well, the only difference between the two equations is the order in which we add the terms. And, we get different results! This reveals perhaps the most shocking discovery of all, in our time spent doing dubious computations: infinite addition is not always commutative, even though finite addition always is.
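Rearrangement effects are easy to witness on a computer. The sketch below (our own example, using a different rearrangement than the one above: two positive terms for each negative one) sums exactly the same collection of terms in two different orders and gets two different values:

```python
import math

def alternating_harmonic(n):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... in its usual order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(blocks):
    """Same terms, new order: two positive (odd) terms, then one negative (even)."""
    total, odd, even = 0.0, 1, 2
    for _ in range(blocks):
        total += 1 / odd + 1 / (odd + 2) - 1 / even
        odd += 4
        even += 2
    return total

print(alternating_harmonic(100_000))   # near log(2)       ~ 0.6931
print(rearranged(100_000))             # near 1.5 * log(2)  ~ 1.0397
```

The second ordering converges to a genuinely different number, $\tfrac{3}{2}\log 2$, even though every term of the original series appears in it exactly once.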
Here’s an even more dubious-looking example, where we can prove that $0 = \log 2$. Begin by writing zero as an infinite sum of zeroes:
$$0 = 0 + 0 + 0 + 0 + \cdots$$
Now, rewrite each of the zeroes as a difference that cancels, the $n$th zero as $\frac{1}{n} - \frac{1}{n}$:
$$0 = \left(1 - 1\right) + \left(\frac{1}{2} - \frac{1}{2}\right) + \left(\frac{1}{3} - \frac{1}{3}\right) + \left(\frac{1}{4} - \frac{1}{4}\right) + \cdots$$
Now, do some re-arranging to this:
$$0 = 1 + \left(\frac{1}{2} - 1\right) + \frac{1}{3} + \left(\frac{1}{4} - \frac{1}{2}\right) + \frac{1}{5} + \left(\frac{1}{6} - \frac{1}{3}\right) + \cdots$$
Make sure to convince yourselves that all the same terms appear here after the rearrangement!
Simplifying this a bit shows a pattern:
$$0 = 1 + \left(-\frac{1}{2}\right) + \frac{1}{3} + \left(-\frac{1}{4}\right) + \frac{1}{5} + \left(-\frac{1}{6}\right) + \cdots$$
Which, after removing the parentheses, is the familiar series for $\log 2$:
$$0 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \log(2)$$
2.2 Infinite Expressions for $\sin(x)$
The sine function (along with the other trigonometric, exponential, and logarithmic functions) differs from the common functions of early mathematics (polynomials, rational functions and roots) in that it is defined not by a formula but geometrically.
Such a definition is difficult to work with if one actually wishes to compute: for example, Archimedes, after much trouble, managed to calculate the exact value of the sine at only a handful of special angles.
2.2.1 Infinite Sum of Madhava
Beyond the series for the arctangent, Madhava also found an infinite series for the sine function. The first thing that needs to be proven is that the sine satisfies the integral equation
$$\sin(x) = x - \int_0^x\!\int_0^t \sin(u)\,du\,dt.$$
This equation mentions sine on both sides, which means we can use it as a recurrence relation to find better and better approximations of the sine function.
Definition 2.1 (Integral Recurrence For $\sin$) Given any starting function $f_0(x)$, define a sequence of functions by
$$f_{n+1}(x) = x - \int_0^x\!\int_0^t f_n(u)\,du\,dt.$$
Example 2.1 (The Series for $\sin$) Starting from the simplest possible guess, $f_0(x) = 0$, the recurrence gives
$$f_1(x) = x - \int_0^x\!\int_0^t 0\,du\,dt = x.$$
Now, plugging in $f_1$ gives the second approximation,
$$f_2(x) = x - \int_0^x\!\int_0^t u\,du\,dt = x - \frac{x^3}{6}.$$
Repeating gives the third,
$$f_3(x) = x - \frac{x^3}{6} + \frac{x^5}{120}.$$
Carrying out this process infinitely many times yields a conjectured formula for the sine function as an infinite polynomial:
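The iteration is mechanical enough to hand to a computer. The sketch below (our own coefficient-list representation, assuming the recurrence in the double-integral form $f_{n+1}(x) = x - \int_0^x\!\int_0^t f_n(u)\,du\,dt$) applies it four times starting from the zero function:

```python
from fractions import Fraction

def apply_recurrence(coeffs):
    """One step of f(x) -> x - ∫₀ˣ∫₀ᵗ f(u) du dt, where coeffs[k]
    is the coefficient of x**k in the polynomial f."""
    # integrating x**k twice from 0 gives x**(k+2) / ((k+1)(k+2))
    integrated = [Fraction(0)] * (len(coeffs) + 2)
    for k, c in enumerate(coeffs):
        integrated[k + 2] = c / ((k + 1) * (k + 2))
    result = [-c for c in integrated]
    result[1] += 1          # the leading "x" term
    return result

f = [Fraction(0)]           # start from the guess f0 = 0
for _ in range(4):
    f = apply_recurrence(f)

print(f)  # coefficient list of x - x^3/6 + x^5/120 - x^7/5040
```

Each pass appends one more term of the series, with the factorials $3! = 6$, $5! = 120$, $7! = 5040$ emerging automatically from the repeated integration.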
Proposition 2.1 (Madhava Infinite Sine Series)
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
Exercise 2.1 Find a similar recursive equation for the cosine function, and use it to derive the first four terms of its series expansion.
One big question about this procedure is why in the world should this work? We found a function that satisfies the same integral equation as the sine, but why should iterating the recurrence converge to the sine function itself?
2.2.2 Infinite Product of Euler
Another infinite expression for the sine function arose from thinking about the behavior of polynomials, and the relation of their formulas to their roots. As an example, consider a quartic polynomial $p(x)$ with roots $r_1, r_2, r_3, r_4$ and with $p(0) = 1$: such a polynomial factors completely in terms of its roots as
$$p(x) = \left(1 - \frac{x}{r_1}\right)\left(1 - \frac{x}{r_2}\right)\left(1 - \frac{x}{r_3}\right)\left(1 - \frac{x}{r_4}\right).$$
In 1734, Euler attempted to apply this same reasoning in the infinite case to the trigonometric function $\frac{\sin(x)}{x}$, which takes the value $1$ at $x = 0$.
[Graph of the function $\frac{\sin(x)}{x}$]
Its roots agree with those of $\sin(x)$ away from zero, occurring at $x = \pm\pi, \pm 2\pi, \pm 3\pi, \ldots$, so factoring as if it were a polynomial suggests
$$\frac{\sin(x)}{x} = \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right)\cdots$$
Euler noticed all the factors come in pairs, each of which represented a difference of squares.
Not worrying about the fact that infinite multiplication may not be commutative (a worry we came to appreciate with Dirichlet, but this was after Euler’s time!), we may re-group this product, pairing off terms like this, to yield
$$\frac{\sin(x)}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$
Finally, we may multiply back through by $x$ to get an infinite product expansion of the sine itself:
Proposition 2.2 (Euler)
$$\sin(x) = x\prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right)$$
This incredible identity is actually correct: there’s only one problem - the argument itself is wrong!
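We can at least test the identity numerically. The Python sketch below (function name ours) multiplies out a large partial product and compares against the built-in sine:

```python
import math

def sine_product(x, n):
    """Partial Euler product: x * (1 - x^2/pi^2)(1 - x^2/(2pi)^2) ... (n factors)."""
    out = x
    for k in range(1, n + 1):
        out *= 1 - (x / (k * math.pi)) ** 2
    return out

x = 1.3
print(sine_product(x, 100_000), math.sin(x))   # the two agree to several decimal places
```

Unlike a power series, the product converges for every $x$, though rather slowly: the $k$th factor differs from $1$ by roughly $x^2/(k\pi)^2$.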
Exercise 2.2 In his argument, Euler crucially uses that if we know
- all the zeroes of a function
- the value of that function is 1 at $x = 0$,
then we can factor the function as an infinite polynomial in terms of its zeroes. This implies that a function is completely determined by its value at $x = 0$ together with the locations of its zeroes.
Show that this is a serious flaw in Euler’s reasoning by finding a different function that has all the same zeroes as $\frac{\sin(x)}{x}$ and also takes the value $1$ at $x = 0$.
Exercise 2.3 (The Wallis Product for $\pi$)
Using Euler’s infinite product for $\sin(x)$, evaluated at $x = \pi/2$, derive the Wallis product
$$\frac{\pi}{2} = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdots$$
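Before attempting the derivation, it is reassuring to check the target formula numerically; this sketch (ours, not part of the exercise) multiplies out the first hundred thousand pairs of factors:

```python
import math

def wallis(n):
    """Partial Wallis product: prod over k = 1..n of (2k/(2k-1)) * (2k/(2k+1))."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2 * k) / (2 * k - 1) * (2 * k) / (2 * k + 1)
    return prod

print(2 * wallis(100_000))   # creeps toward pi
print(math.pi)
```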
2.2.3 The Basel Problem
The Italian mathematician Pietro Mengoli proposed the following problem in 1650:
Definition 2.2 (The Basel Problem) Find the exact value of the infinite sum
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$
By directly computing the first several terms of this sum one can get an estimate of the value; for instance, adding up the first 1,000 terms we find
$$\sum_{n=1}^{1000}\frac{1}{n^2} = 1.6439\ldots$$
so we might feel rather confident that the final answer is somewhat close to 1.64. But the interesting math problem isn’t to approximate the answer, but rather to figure out something exact, and knowing the first few decimals here isn’t of much help.
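The estimate quoted above takes one line of Python to reproduce:

```python
import math

def basel_partial(n):
    """Sum the first n terms of 1 + 1/4 + 1/9 + 1/16 + ..."""
    return sum(1 / k ** 2 for k in range(1, n + 1))

print(basel_partial(1_000))   # about 1.6439, as in the text
```

Notice that even a thousand terms pin down only the first couple of decimal places; the tail of the series past $n$ contributes roughly $1/n$.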
This problem was attempted by famous mathematicians across Europe over the next 80 years, but all failed, until a relatively unknown 28-year-old Swiss mathematician named Leonhard Euler published a solution in 1734 and immediately shot to fame. (In fact, this problem is named the Basel problem after Euler’s hometown.)
Proposition 2.3 (Euler)
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$$
Euler’s solution begins with two different expressions for the function $\frac{\sin(x)}{x}$: its infinite series
$$\frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots$$
and his infinite product
$$\frac{\sin(x)}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$
Because two polynomials are the same if and only if the coefficients of all their terms are equal, Euler attempts to generalize this to infinite expressions, and equate the coefficients of $x^2$ on each side. Multiplying out the product, the only way to get an $x^2$ term is to take the $-\frac{x^2}{n^2\pi^2}$ from a single factor and $1$ from all the others, so the product’s $x^2$ coefficient is
$$-\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots\right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.$$
From the series, we can again simply read off the coefficient as $-\frac{1}{3!} = -\frac{1}{6}$, so equating the two gives
$$-\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2} = -\frac{1}{6}.$$
Which quickly leads to a solution to the original problem, after multiplying both sides by $-\pi^2$:
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.$$
Euler had done it! There are of course many dubious steps taken along the way in this argument, but calculating the numerical value $\frac{\pi^2}{6} = 1.64493\ldots$, we find it to be exactly the number the series is heading towards. This gave Euler the confidence to publish, and the rest is history.
But we analysis students should be looking for potential troubles in this argument. What are some that you see?
2.2.4 Viète’s Infinite Trigonometric Identity
Viète was a French mathematician of the mid-1500s who wrote down, for the first time in Europe, an exact expression for $\pi$.
Proposition 2.4 (Viète’s formula for $\pi$)
$$\frac{2}{\pi} = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2+\sqrt{2}}}{2}\cdot\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots$$
How could one derive such an incredible looking expression? One approach uses trigonometric identities…an infinite number of times! Start with the familiar function $\sin(x)$ and apply the double angle identity:
$$\sin(x) = 2\sin\left(\frac{x}{2}\right)\cos\left(\frac{x}{2}\right).$$
Now we may apply the double angle identity once again, to the term $\sin\left(\frac{x}{2}\right)$:
$$\sin(x) = 4\sin\left(\frac{x}{4}\right)\cos\left(\frac{x}{4}\right)\cos\left(\frac{x}{2}\right),$$
and again,
$$\sin(x) = 8\sin\left(\frac{x}{8}\right)\cos\left(\frac{x}{8}\right)\cos\left(\frac{x}{4}\right)\cos\left(\frac{x}{2}\right),$$
and again,
$$\sin(x) = 16\sin\left(\frac{x}{16}\right)\cos\left(\frac{x}{16}\right)\cos\left(\frac{x}{8}\right)\cos\left(\frac{x}{4}\right)\cos\left(\frac{x}{2}\right).$$
And so on…. After the $n$th application,
$$\sin(x) = 2^n\sin\left(\frac{x}{2^n}\right)\prod_{k=1}^{n}\cos\left(\frac{x}{2^k}\right).$$
Viète realized that as $n \to \infty$, the factor $2^n\sin\left(\frac{x}{2^n}\right)$ looks more and more like $2^n\cdot\frac{x}{2^n} = x$, since $\sin(\theta)$ is nearly indistinguishable from $\theta$ for very small angles. Taking this on faith and letting the process run forever yields:
Proposition 2.5 (Viète’s Trigonometric Identity)
$$\sin(x) = x\prod_{k=1}^{\infty}\cos\left(\frac{x}{2^k}\right)$$
An incredible, infinite trigonometric identity! Of course, there’s a huge question about its derivation: are we absolutely sure we are justified in replacing $\sin\left(\frac{x}{2^n}\right)$ by $\frac{x}{2^n}$ in the limit? Setting that worry aside and plugging in $x = \pi/2$, where $\sin(\pi/2) = 1$, gives
$$\frac{2}{\pi} = \cos\left(\frac{\pi}{4}\right)\cos\left(\frac{\pi}{8}\right)\cos\left(\frac{\pi}{16}\right)\cdots$$
Now, we are left just to simplify the right hand side into something computable, using more trigonometric identities! We know $\cos\left(\frac{\pi}{4}\right) = \frac{\sqrt{2}}{2}$, and the half angle identity
$$\cos\left(\frac{\theta}{2}\right) = \sqrt{\frac{1 + \cos(\theta)}{2}}$$
expresses each successive cosine in terms of the previous one: $\cos\left(\frac{\pi}{8}\right) = \frac{\sqrt{2+\sqrt{2}}}{2}$, then $\cos\left(\frac{\pi}{16}\right) = \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}$, and so on.
Substituting these all in gives the original product. And, while this derivation has a rather dubious step in it, the end result seems to be correct! Computing the first ten terms of this product on a computer yields a number agreeing with $\frac{2}{\pi} = 0.63661\ldots$ to better than five decimal places.
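The nested radicals make this check especially pleasant to code, since each factor is built from the previous one. A short Python sketch (names ours):

```python
import math

def viete(n):
    """Product of the first n nested-radical factors in Viète's formula."""
    term, prod = 0.0, 1.0
    for _ in range(n):
        term = math.sqrt(2 + term)   # builds sqrt(2), sqrt(2+sqrt(2)), ...
        prod *= term / 2
    return prod

print(viete(10), 2 / math.pi)   # ten factors already agree to about six decimal places
```

Each extra factor roughly quadruples the accuracy, a dramatic contrast with the slow series for $\pi/4$ from earlier in the chapter.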
2.3 The Infinitesimal Calculus
In trying to formalize many of the above arguments, mathematicians needed to put the calculus steps on a firm footing. And this comes with a whole collection of its own issues. Arguments trying to explain in clear terms what a derivative or integral was really supposed to be often led to nonsensical steps that cast doubt on the entire procedure. Indeed, the history of calculus is itself so full of confusion that it alone is often taken as the motivation to develop a rigorous study of analysis. Because we have already seen so many other troubles that come from the infinite, we will content ourselves with just one example here: what is a derivative?
The derivative is meant to measure the slope of the tangent line to a function. In words, this is not hard to describe. But like the sine function, this does not provide a means of computing, and we are looking for a formula. Approximate formulas are not hard to create: if $h$ is a small nonzero number, then the difference quotient
$$\frac{f(x+h) - f(x)}{h}$$
represents the slope of the secant line to $f$ through the points at $x$ and $x + h$. One natural idea is to take a sequence $h_1, h_2, h_3, \ldots$ of smaller and smaller values, compute the corresponding sequence of secant slopes, and then define the derivative as the infiniteth term in this sequence. But this is just incoherent, taken at face value. If the $h_n$ have truly reached zero at the infiniteth stage, the difference quotient there reads $\frac{0}{0}$, which is meaningless.
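The shrinking-secant idea is easy to experiment with; a small Python sketch (names ours):

```python
def secant_slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

square = lambda t: t * t
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(secant_slope(square, 1.0, h))   # approaches 2, but never equals it

# secant_slope(square, 1.0, 0.0)   # setting h = 0 outright raises ZeroDivisionError
```

Every term of the sequence is a perfectly good number, yet no term is the derivative, and the "last" term does not exist: exactly the incoherence described above.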
So, something else must be going on. One way out of this would be if our sequence of approximates did not actually converge to zero - maybe there were infinitely small nonzero numbers out there waiting to be discovered. Such hypothetical numbers were called infinitesimals.
Definition 2.3 (Infinitesimal) A positive number $\epsilon$ is called *infinitesimal* if it is smaller than every positive rational number: $\epsilon < \frac{1}{n}$ for every positive integer $n$.
This would resolve the problem as follows: if $\epsilon$ is a fixed infinitesimal, then the difference quotient
$$\frac{f(x+\epsilon) - f(x)}{\epsilon}$$
makes perfect sense: the denominator is small, but it is not zero.
But this leads to its own set of difficulties: it’s easy to see that if $\epsilon$ is an infinitesimal, then so are $2\epsilon$, $\epsilon/2$, $\epsilon^2$, and infinitely many others.
Exercise 2.4 Prove this: if $\epsilon$ is infinitesimal, then so is $\epsilon/2$, and consequently there is no such thing as the smallest infinitesimal.
So we can’t just define the derivative by saying “choose some infinitesimal $\epsilon$ and compute the slope of the corresponding secant line,” unless we can be sure the answer does not depend on which infinitesimal we happened to choose. What actually happens?
Let’s attempt to differentiate the simple function $f(x) = x^2$ this way:
$$\frac{(x+\epsilon)^2 - x^2}{\epsilon} = \frac{2x\epsilon + \epsilon^2}{\epsilon} = 2x + \epsilon.$$
Here we see the derivative is not what we expected, but rather is $2x + \epsilon$: off from the anticipated answer by an infinitesimal amount. The traditional way out was to simply discard the leftover infinitesimal at the end of the calculation, declaring the derivative to be $2x$.
But this is not very sensible: when exactly are we allowed to do this? If we can discard an infinitesimal whenever it’s added to a finite number, shouldn’t we already have done so with the $x + \epsilon$ inside $f(x + \epsilon)$ at the very first step? But discarding it there gives $\frac{x^2 - x^2}{\epsilon} = 0$, and every derivative would come out zero!
So, *when* we throw away the infinitesimal matters deeply to the answer we get! This does not seem right. How can we fix this? One approach that was suggested was to say that we cannot throw away infinitesimals, but that the square of an infinitesimal is so small that it is precisely zero: that way, we keep every infinitesimal but discard any higher powers. A number satisfying this property was called *nilpotent*, as *nil* was another word for zero, and *potency* was an old term for powers (so a nilpotent number is one some power of which is zero).
Definition 2.4 A number $\epsilon$ is *nilpotent* if $\epsilon \neq 0$ but $\epsilon^2 = 0$.
If our infinitesimals were nilpotent, that would solve the problem we ran into above. Now, the calculation for the derivative of $f(x) = x^2$ comes out exactly as hoped:
$$\frac{(x+\epsilon)^2 - x^2}{\epsilon} = \frac{2x\epsilon + \epsilon^2}{\epsilon} = \frac{2x\epsilon + 0}{\epsilon} = 2x.$$
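Curiously, nilpotent infinitesimals can be simulated perfectly well on a computer: pairs $a + b\epsilon$ with the rule $\epsilon^2 = 0$ are the "dual numbers" behind modern automatic differentiation. A minimal Python sketch (class name and design our own):

```python
class Dual:
    """Numbers a + b*eps, where eps is nilpotent: eps**2 = 0."""

    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

x = Dual(3.0, 1.0)   # represents 3 + eps
y = x * x            # (3 + eps)^2 = 9 + 6 eps
print(y.a, y.b)      # 9.0 6.0  -- the eps-coefficient is exactly f'(3) = 6
```

No infinitesimal is ever "thrown away": the $\epsilon^2$ term vanishes by the multiplication rule itself, which is precisely the resolution the nilpotency proposal is after.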
But, in trying to justify just this one calculation we’ve had to invent two new types of numbers that had never occurred previously in math: we need positive numbers smaller than any rational, and we also need them (or at least some of those numbers) to square to precisely zero. Do such numbers exist?