27 Exponentials, PDEs, and ODEs
Highlights of this Chapter: to close out our work on derivatives, we take a look at where power series come up in analysis beyond this first course.
- We study the complex exponential, in preparation for a result we will prove in the final project.
- We study the matrix exponential, and see its utility in solving systems of linear differential equations.
- We look to extend exponentiation to even more abstract settings, and consider the meaning of raising $e$ to a linear operator.
- We use this idea of raising $e$ to the power of a differential operator as a window into functional analysis.
27.1 Generalizing the Exponential
Power series are wonderful functions for many reasons, but one of the most powerful is that they are so simple. Like polynomials, to make sense of a power series all you need is addition/subtraction and multiplication, now with one more ingredient: convergence. This makes power series a very natural jumping off point to generalize familiar objects to unfamiliar places. As a first step, we will look at complex numbers:
Definition 27.1 A complex number is a pair $(a, b)$ of real numbers, which we write as $z = a + bi$.
- Addition of complex numbers is defined component-wise: $(a + bi) + (c + di) = (a + c) + (b + d)i$.
- Multiplication of a complex number by a real number can also be computed component-wise: $\lambda(a + bi) = \lambda a + \lambda b\, i$.
- Multiplication of two complex numbers is defined by the field axioms together with the definition $i^2 = -1$: $$(a + bi)(c + di) = (ac - bd) + (ad + bc)i.$$
Limits of sequences of complex numbers are defined using the fact that they are built from pairs of real numbers. A sequence $z_n = a_n + b_n i$ converges to $z = a + bi$ exactly when $a_n \to a$ and $b_n \to b$ as sequences of real numbers.
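If it helps to see these rules in action, here is a minimal sketch in Python (not part of the text; the pair representation and the helper names `add`, `mul`, and `scale` are our own choices) implementing the operations of Definition 27.1 literally:

```python
# A minimal sketch: complex numbers as pairs of reals, per Definition 27.1.

def add(z, w):
    """Component-wise addition: (a,b) + (c,d) = (a+c, b+d)."""
    return (z[0] + w[0], z[1] + w[1])

def scale(r, z):
    """Multiplication by a real number, component-wise."""
    return (r * z[0], r * z[1])

def mul(z, w):
    """Multiplication using i^2 = -1: (a,b)(c,d) = (ac - bd, ad + bc)."""
    return (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])

# Quick check against Python's built-in complex type.
z, w = (1.0, 2.0), (3.0, -1.0)
assert complex(*mul(z, w)) == complex(*z) * complex(*w)
```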
We now have a fully rigorous theory of what it means to exponentiate a real number: but what does it mean to raise something to the power of a complex number?
Definition 27.2 The complex exponential is defined for any $z \in \mathbb{C}$ by the power series $$e^z = \sum_{n \geq 0} \frac{z^n}{n!}.$$
Of course, after making such a bold definition we should ask ourselves, does this make sense? That is, does the series of complex numbers converge? This may sound daunting at first, but in fact the theory of complex power series inherits much from the real theory: complex numbers are built from pairs of real numbers after all!
Theorem 27.1 Let $z_n = a_n + b_n i$ be a sequence of complex numbers, and suppose the real series $\sum_n |a_n|$ and $\sum_n |b_n|$ both converge. Then the series $\sum_n z_n$ converges.
Proof. Let $a_n$ and $b_n$ denote the real and imaginary parts of $z_n$. Since $\sum_n |a_n|$ converges, the real series $\sum_n a_n$ is absolutely convergent, and hence convergent.
A similar argument applies to the sequence of imaginary parts: $\sum_n b_n$ converges as well.
Now, using the definition of convergence for complex numbers, since both real sequences converge the overall sequence does as well, and we can write $$\sum_n z_n = \sum_n a_n + \left(\sum_n b_n\right) i.$$
Thus, $\sum_n z_n$ is a convergent series of complex numbers.
Corollary 27.1 Let $\sum_n z_n$ be an absolutely convergent series of complex numbers, meaning the real series of magnitudes $\sum_n |z_n|$ converges. Then $\sum_n z_n$ converges.
Proof. If we write $z_n = a_n + b_n i$, the hypothesis concerns only the magnitudes $|z_n|$.
Thus, we have assumed that the series of magnitudes $\sum_n |z_n|$ is a convergent series of real numbers.
The complex magnitude satisfies $|a_n| \leq |z_n|$ and $|b_n| \leq |z_n|$, so by comparison both $\sum_n |a_n|$ and $\sum_n |b_n|$ converge, and Theorem 27.1 implies that $\sum_n z_n$ converges.
Corollary 27.2 A complex power series is convergent wherever it is absolutely convergent.
Applying this to the complex exponential, we can confirm this series makes sense for all complex number inputs:
Corollary 27.3 For any $z \in \mathbb{C}$, the series $\sum_{n \geq 0} \frac{z^n}{n!}$ defining $e^z$ converges.
Proof. Let $z \in \mathbb{C}$ and consider the series of magnitudes $\sum_{n \geq 0} \left|\frac{z^n}{n!}\right| = \sum_{n \geq 0} \frac{|z|^n}{n!}$. This is just the real exponential series evaluated at $|z|$, which we already know converges (to $e^{|z|}$). Thus the complex exponential series is absolutely convergent, and hence convergent by Corollary 27.2.
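For the numerically inclined, here is a small sketch in Python (our own; the helper name `exp_partial` and the sample point are arbitrary choices) watching the partial sums converge to the value reported by the built-in `cmath.exp`:

```python
# Partial sums of the complex exponential series converge to cmath.exp(z).
import cmath

def exp_partial(z, N):
    """Sum the first N terms of e^z = sum_{n>=0} z^n / n!."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(N):
        total += term
        term *= z / (n + 1)   # turn z^n/n! into z^{n+1}/(n+1)!
    return total

z = 2 - 3j
for N in (5, 10, 20, 40):
    print(N, abs(exp_partial(z, N) - cmath.exp(z)))
# The error shrinks rapidly as N grows, as absolute convergence predicts.
```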
But we can go much further than this. There are many objects we know how to add/subtract and multiply in mathematics - structures with these operations are called rings. So, in any ring where one can make sense of limits, we can attempt to define an exponential function by this power series! A natural example is the ring of $n \times n$ matrices.
Definition 27.3 Denote by $M_n(\mathbb{R})$ the set of $n \times n$ matrices with real entries, equipped with the following operations:
- Addition: defined entry-wise.
- Scalar Multiplication: defined entry-wise.
- Multiplication: defined by usual matrix multiplication.
We define limits in $M_n(\mathbb{R})$ entry-wise as well: a sequence of matrices $A_k$ converges to $A$ if, for every pair of indices $(i,j)$, the real sequence of entries $(A_k)_{ij}$ converges to $A_{ij}$.
Definition 27.4 The matrix exponential is defined for any $A \in M_n(\mathbb{R})$ by the power series $$e^A = \sum_{k \geq 0} \frac{A^k}{k!},$$ where $A^0 = I$ is the identity matrix.
Again, we are faced with the problem of convergence: for which matrices $A$ does this series converge?
Definition 27.5 Let $A \in M_n(\mathbb{R})$. The magnitude of $A$, written $\|A\|$, is the largest absolute value appearing among its entries: $\|A\| = \max_{i,j} |A_{ij}|$.
So for example $\left\| \begin{pmatrix} 1 & -3 \\ 2 & 0 \end{pmatrix} \right\| = 3$.
Exactly analogous to the complex case, we can prove that absolute convergence implies convergence:
Theorem 27.2 If $A_k$ is a sequence of matrices such that the real series $\sum_k \|A_k\|$ converges, then the series $\sum_k A_k$ converges in $M_n(\mathbb{R})$.
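Here is a short numerical sketch of this convergence (our own, assuming `numpy` and `scipy` are available; the helper name `exp_partial` and the sample matrix are arbitrary choices), comparing partial sums of the matrix exponential series against `scipy.linalg.expm`:

```python
# Partial sums of e^A = sum_k A^k / k!, compared against scipy's expm.
import numpy as np
from scipy.linalg import expm

def exp_partial(A, N):
    """Sum the first N terms of the matrix exponential series."""
    total = np.zeros_like(A)
    term = np.eye(A.shape[0])          # A^0 / 0!
    for k in range(N):
        total += term
        term = term @ A / (k + 1)      # turn A^k/k! into A^{k+1}/(k+1)!
    return total

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # a generator of rotations
for N in (5, 10, 20):
    print(N, np.max(np.abs(exp_partial(A, N) - expm(A))))
# The largest entry-wise error shrinks to zero as N grows.
```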
Exercise 27.1 Prove a version of Theorem 27.1 for matrices: if the real series of entries $\sum_k |(A_k)_{ij}|$ converges for every pair of indices $(i,j)$, then $\sum_k A_k$ converges in $M_n(\mathbb{R})$.
Exercise 27.2 Find the exponential of the matrix
Exercise 27.3 Prove that if
Exercise 27.4 Compute the function
27.1.1 How far can we go?
In linear algebra, one key use of matrices is to represent linear transformations (another is just as arrays to store information, like systems of equations). But linear operators make sense on any vector space $V$, even infinite dimensional ones where there may be no matrix representation at all.
Definition 27.6 If $L$ is a linear operator on a vector space $V$, we define its exponential by the power series $$e^L = \sum_{n \geq 0} \frac{L^n}{n!},$$ where $L^n$ denotes the $n$-fold composition $L \circ L \circ \cdots \circ L$.
For this to make sense, we need a way to take limits of the vectors in $V$, so that the infinite sum has a chance of converging.
The derivative, after all, is a linear map on the vector space of smooth real valued functions, and these functions are things we know how to take limits of (as their inputs and outputs are real numbers!). This might make us wonder: what is the exponential of the derivative?
Definition 27.7 Let $D = \frac{d}{dx}$ denote the derivative operator on smooth functions, and let $a \in \mathbb{R}$. The exponential of $aD$ is the operator $$e^{aD} = \sum_{n \geq 0} \frac{a^n D^n}{n!}.$$
This acts on functions as follows, taking a function $f$ to $$e^{aD} f(x) = \sum_{n \geq 0} \frac{a^n f^{(n)}(x)}{n!} = f(x) + a f'(x) + \frac{a^2}{2} f''(x) + \cdots$$
You might rightly worry about convergence here: when does this expression even make sense?! The general theory of such things is beyond the scope of this course, but for functions which are themselves power series, we can actually come up with a beautifully simple answer:
Exercise 27.5 Prove that for any power series $f(x) = \sum_{k \geq 0} c_k x^k$ converging on all of $\mathbb{R}$, we have $$e^{aD} f(x) = f(x + a).$$
Hint: show this happens for the monomials $x^k$ first (the binomial theorem is useful), and then pass to the infinite sum.
Corollary 27.4 Let $f$ be a power series converging on all of $\mathbb{R}$, and let $a \in \mathbb{R}$. Then $$e^{aD} f(x) = f(x + a):$$ exponentiating the derivative translates the input of $f$ by $a$.
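As a quick illustration of this corollary, here is a sketch using `sympy` (our own; the choice $f = \sin$, the truncation level `N`, and the sample point are arbitrary): truncating the operator series already lands very close to the shifted function.

```python
# Truncating e^{a d/dx} f at N terms approximately recovers f(x + a).
import sympy as sp

x, a = sp.symbols('x a')
f = sp.sin(x)                 # any everywhere-convergent power series will do

N = 12
shifted = sum(a**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(N))

# Compare numerically at a sample point: the truncation error is tiny.
print(shifted.subs({x: 0.7, a: 0.3}).evalf(), sp.sin(0.7 + 0.3).evalf())
```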
27.2 Solving Differential Equations
One interesting application of exponentiation is to solving differential equations. We will not dive deeply into this topic but only take a quick view of some interesting examples, for those who enjoy differential equations.
27.2.1 The Equation $f' = kf$
If $k$ is a constant and $f(x) = Ce^{kx}$ for some constant $C$, then $f'(x) = kCe^{kx} = k f(x)$: exponential functions solve the differential equation $f' = kf$. In fact, they are the only solutions.
Exercise 27.6 Let $f$ be a differentiable function satisfying $f'(x) = k f(x)$ for all $x$. Prove that $f(x) = Ce^{kx}$ for some constant $C$.
Usually, a differential equation is given with an initial condition, specifying the function's behavior at a certain point. This picks out one solution from the many: if we additionally require $f(0) = c$, then the constant is forced to be $C = c$, and $f(x) = c\, e^{kx}$ is the unique solution.
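For a concrete check, here is a small sketch (our own, assuming `numpy` and `scipy`; the constants `k` and `c` are arbitrary choices) comparing the formula $f(t) = c\,e^{kt}$ against a numerical ODE solver:

```python
# The formula c*exp(k*t) agrees with a numerical solution of f' = k f, f(0) = c.
import numpy as np
from scipy.integrate import solve_ivp

k, c = -0.8, 2.0
ts = np.linspace(0.0, 3.0, 7)
sol = solve_ivp(lambda t, y: k * y, (0.0, 3.0), [c], t_eval=ts,
                rtol=1e-8, atol=1e-10)

print(np.max(np.abs(sol.y[0] - c * np.exp(k * ts))))  # tiny: limited by solver tolerance
```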
27.2.2 Linear Systems
A beautiful generalization of the relatively simple idea above allows one to solve essentially all linear systems of differential equations (with constant coefficients). One learns in a differential equations course how to turn any such system into a system of first order equations, so we focus on those here. For specificity, assume we have a system of three first order differential equations for unknown functions $x(t)$, $y(t)$, and $z(t)$, where the derivative of each unknown is a fixed linear combination of $x$, $y$, and $z$.
And suppose further that these are constrained by specific initial conditions: the values $x(0)$, $y(0)$, and $z(0)$ are given.
Because the right hand side of each equation is a linear combination of the unknowns, the entire system can be packaged into a single matrix equation.
In our continued attempt to simplify notation and make this problem more manageable, we define $A$ to be the $3 \times 3$ matrix of coefficients appearing on the right hand sides.
And let $\mathbf{x}(t) = (x(t), y(t), z(t))$ denote the vector of unknown functions, with $\mathbf{x}_0 = (x(0), y(0), z(0))$ its given initial value.
So, we are looking for a vector valued function $\mathbf{x}(t)$ satisfying $$\mathbf{x}'(t) = A\, \mathbf{x}(t), \qquad \mathbf{x}(0) = \mathbf{x}_0.$$ This looks exactly like the one dimensional equation $f' = kf$, with the matrix $A$ playing the role of the constant $k$ and the vector $\mathbf{x}(t)$ playing the role of $f$.
Taking this as a hint, we might attempt to solve this differential equation using the matrix exponential. First, we consider a matrix valued function, and then we will come back to think about the initial conditions.
Proposition 27.1 Let $A$ be an $n \times n$ matrix, and for $t \in \mathbb{R}$ define the matrix valued function $$E(t) = e^{tA} = \sum_{k \geq 0} \frac{t^k A^k}{k!}.$$
Then $E(t)$ converges for every $t$, is differentiable, and satisfies $E'(t) = A\, E(t)$.
Proof. If we fix a pair of indices $(i,j)$, then for $k \geq 1$ each entry of $A^k$ is a sum of $n^{k-1}$ products of $k$ entries of $A$, so $|(A^k)_{ij}| \leq n^{k-1}\|A\|^k \leq (n\|A\|)^k$. The $(i,j)$ entry of the $k$-th term of our series is therefore bounded in absolute value by $\frac{(|t|\, n \|A\|)^k}{k!}$, and these bounds sum to $e^{|t| n \|A\|}$: each entry series converges absolutely, by comparison with the real exponential series.
Thus, by Exercise 27.1 the power series defining $E(t)$ converges in $M_n(\mathbb{R})$, for every value of $t$.
Now we wish to take the derivative. Recalling that limits of matrices are computed entry-wise, each entry of $E(t)$ is a power series in $t$ with infinite radius of convergence, and so may be differentiated term by term: $$\frac{d}{dt} E(t)_{ij} = \sum_{k \geq 1} \frac{k\, t^{k-1} (A^k)_{ij}}{k!},$$
where we know from the above that each of these differentiated entry series still converges for all $t$.
Because we know this equation holds for each entry, we have an equation for the matrices themselves: $$E'(t) = \sum_{k \geq 1} \frac{k\, t^{k-1} A^k}{k!} = \sum_{k \geq 1} \frac{t^{k-1} A^k}{(k-1)!}.$$
Each term on the right side shares a common factor of $A$, which we may pull out of the sum: $$E'(t) = A \sum_{k \geq 1} \frac{t^{k-1} A^{k-1}}{(k-1)!}.$$
As re-indexing the sum (setting $j = k - 1$) shows the remaining series is exactly the one defining $e^{tA}$, we conclude $E'(t) = A e^{tA} = A\, E(t)$.
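As a sanity check of Proposition 27.1, here is a short numerical sketch (our own, assuming `numpy` and `scipy`; the matrix `A`, the time `t`, and the step `h` are arbitrary choices) comparing a difference quotient of $e^{tA}$ against $A e^{tA}$:

```python
# A centered difference quotient of E(t) = e^{tA} is close to A @ E(t).
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -1.0]])
t, h = 0.5, 1e-6

derivative_approx = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
print(np.max(np.abs(derivative_approx - A @ expm(t * A))))  # small: only finite-difference error
```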
Now we utilize this to solve our particular differential equation. We've constructed a function whose derivative is $A$ times itself, but it is matrix valued rather than vector valued, and we still need to account for the initial conditions.
Proposition 27.2 Given a vector $\mathbf{x}_0 \in \mathbb{R}^n$ and an $n \times n$ matrix $A$, the vector valued function $$\mathbf{x}(t) = e^{tA}\, \mathbf{x}_0$$ satisfies the differential equation $\mathbf{x}'(t) = A\, \mathbf{x}(t)$ together with the initial condition $\mathbf{x}(0) = \mathbf{x}_0$.
Proof. Defining $\mathbf{x}(t) = e^{tA}\mathbf{x}_0$, we check the two claims in turn.
First, at $t = 0$ the exponential series collapses to its first term: $e^{0A} = I$, the identity matrix. So $\mathbf{x}(0) = I\mathbf{x}_0 = \mathbf{x}_0$, and the initial condition holds.
Next, we wish to take the derivative of the vector equation $\mathbf{x}(t) = e^{tA}\mathbf{x}_0$. Derivatives of vector valued functions are computed component-wise, and each component of $e^{tA}\mathbf{x}_0$ is a finite linear combination of the entries of $e^{tA}$, with the constant entries of $\mathbf{x}_0$ as coefficients.
For each value of $i$ we have $\mathbf{x}(t)_i = \sum_j (e^{tA})_{ij} (\mathbf{x}_0)_j$, so differentiating this finite sum term by term gives $\mathbf{x}'(t) = \left(\frac{d}{dt} e^{tA}\right) \mathbf{x}_0$.
But we already know the derivative of $e^{tA}$ from Proposition 27.1: it is $A e^{tA}$.
And, putting these together, $$\mathbf{x}'(t) = A e^{tA}\, \mathbf{x}_0 = A\, \mathbf{x}(t),$$ as required.
This gives us an explicit solution to our example system: taking $A$ to be its coefficient matrix and $\mathbf{x}_0$ its vector of initial values, the solution is $\mathbf{x}(t) = e^{tA}\mathbf{x}_0$.
And in fact, this provides a glimpse at just how powerful a tool we've created. The matrix valued function $e^{tA}$ solves every initial value problem for this system at once: whatever the initial vector $\mathbf{x}_0$ may be, multiplying it by $e^{tA}$ produces the solution passing through it.
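To see this in action, here is a sketch (our own, assuming `numpy` and `scipy`; the coefficient matrix, initial vector, and tolerances are arbitrary choices) comparing $e^{tA}\mathbf{x}_0$ against a numerical solution of $\mathbf{x}' = A\mathbf{x}$:

```python
# x(t) = expm(t*A) @ x0 agrees with a numerical solution of x' = A x, x(0) = x0.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])      # some 3x3 coefficient matrix
x0 = np.array([1.0, 0.0, 2.0])          # some initial condition

t_final = 2.0
exact = expm(t_final * A) @ x0
numeric = solve_ivp(lambda t, x: A @ x, (0.0, t_final), x0,
                    rtol=1e-9, atol=1e-12).y[:, -1]
print(np.max(np.abs(exact - numeric)))  # agreement up to solver tolerance
```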
Such a perspective becomes even more important when we turn an eye towards partial differential equations below.
27.2.3 Exponential Operators
A partial differential equation is a differential equation for multivariate functions which involves derivatives with respect to multiple variables. Partial differential equations are a cornerstone of applied mathematics, and of the applications of Analysis to the natural sciences. Some common examples are the heat equation from thermodynamics, $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2},$$
the wave equation from fluids, material science, and electromagnetism, $$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2},$$
the Schrödinger equation of quantum mechanics, $$i\hbar\, \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V \psi,$$
and the Black-Scholes equation from economic theory: $$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - rV = 0.$$
Solving partial differential equations is in general a much more difficult process than solving the ordinary differential equations discussed above, and so we do not attempt a comprehensive or rigorous treatment here. Instead, we content ourselves with exploring a few simple cases where exponentiation can play an important role.
Example 27.1 (The Equation $u_t = u_x$) We seek a function $u(t,x)$ of two variables satisfying $$\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x},$$ together with the initial condition $u(0,x) = f(x)$ for some given function $f$.
One way to think about a function $u(t,x)$ of two variables is as a one-parameter family of functions of $x$: for each fixed time $t$ we get a single-variable function $u_t(x) = u(t,x)$, and as $t$ varies this family of functions evolves.
We can reason about this in analogy with the system of equations we discussed above. Indeed, just as the matrix $A$ acted on the vector of unknowns $\mathbf{x}(t)$, here the differential operator $\frac{\partial}{\partial x}$ acts on the unknown function $u(t,\cdot)$, and our equation says that differentiating in $t$ amounts to applying this operator.
How could we attempt to build a family of operators with this property, that differentiating in $t$ would "bring down a $\frac{\partial}{\partial x}$"? In the matrix case the answer was the exponential $e^{tA}$, so by analogy we are led to consider the operator $e^{t\frac{\partial}{\partial x}}$,
and use this power series to give an explicit definition for how it should act on the initial data $f$: $$u(t,x) = e^{t\frac{\partial}{\partial x}} f(x) = \sum_{n \geq 0} \frac{t^n}{n!} \frac{\partial^n f}{\partial x^n}(x) = \sum_{n \geq 0} \frac{t^n f^{(n)}(x)}{n!}.$$
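For a quick feel for what this formula does, take $f(x) = x^2$: only three terms of the sum are nonzero, and $$e^{t\frac{\partial}{\partial x}} x^2 = x^2 + t \cdot 2x + \frac{t^2}{2} \cdot 2 = (x+t)^2,$$ which is $f(x+t)$, the original function with its input shifted by $t$. For a polynomial the sum is finite; for a general $f$, infinitely many terms survive.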
Upon seeing this formula, one should certainly be thinking about convergence: is this infinite sum going to make sense for all values of $t$ and $x$? For completely arbitrary smooth functions $f$ this is a delicate question, so here we restrict our attention to initial data given by power series.
Proposition 27.3 Let $f(x) = \sum_{k \geq 0} a_k x^k$ be a power series which converges absolutely for every $x \in \mathbb{R}$, and define $$u(t,x) = e^{t\frac{\partial}{\partial x}} f(x) = \sum_{n \geq 0} \frac{t^n f^{(n)}(x)}{n!}.$$
Furthermore assume the following technical condition on $f$: for every $t$ and $x$, the series $\sum_{n \geq 0} \frac{|t|^n |f^{(n)}(x)|}{n!}$ and $\sum_{n \geq 0} \frac{|t|^n |f^{(n+1)}(x)|}{n!}$ both converge.
Then $u(t,x)$ is defined for all $t$ and $x$, satisfies $u(0,x) = f(x)$, and solves the partial differential equation $\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x}$.
Proof. For any fixed $x$, the numbers $\frac{f^{(n)}(x)}{n!}$ are well defined, so $u(t,x)$ is a power series in the single variable $t$.
Because our original power series converged absolutely for all inputs, each term-by-term derivative $f^{(n)}$ is again an everywhere-convergent power series, and our technical condition guarantees that this new series in $t$ converges as well.
Plugging this back into our series, we see that our earlier theory of (real) power series applies: $u$ is differentiable in $t$, and its derivative may be computed term by term, $$\frac{\partial u}{\partial t} = \sum_{n \geq 1} \frac{n\, t^{n-1} f^{(n)}(x)}{n!} = \sum_{n \geq 0} \frac{t^n f^{(n+1)}(x)}{n!}.$$
Now we investigate the right-hand-side further: we would like to recognize it as $\frac{\partial u}{\partial x}$, the result of differentiating our series for $u$ term by term in $x$. For any finite sum we know that the derivative of a sum is the sum of the derivatives,
so all that needs to be justified is that this property remains true in the limit. But this is exactly what dominated convergence is built for, exchanging the limit and sum! Let's check the conditions of dominated convergence apply:
- For each $n$, the term $\frac{t^n f^{(n)}(x)}{n!}$ is differentiable in $x$.
- For each $t$ and $x$, the sum of the term-wise derivatives, $\sum_{n \geq 0} \frac{t^n f^{(n+1)}(x)}{n!}$, is convergent.
These follow immediately from our assumptions on $f$: each $f^{(n)}$ is a power series and hence differentiable, and the convergence of the sum of derivatives is exactly the technical condition.
Performing the ratio test, we find the required convergence indeed holds, so dominated convergence applies and the term-by-term computation is valid: $$\frac{\partial u}{\partial x} = \sum_{n \geq 0} \frac{t^n f^{(n+1)}(x)}{n!}.$$
But the right hand side here is exactly what we found earlier must equal the partial derivative of $u$ with respect to $t$.
So, $$\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x},$$ and since only the $n = 0$ term of the series survives at $t = 0$, the initial condition $u(0,x) = f(x)$ holds as well.
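As a concrete spot-check of the proposition, here is a small symbolic sketch (our own, using `sympy`; the polynomial `f` is an arbitrary choice), where the operator series is a finite sum:

```python
# For a polynomial f, u(t,x) = sum_n t^n f^{(n)}(x)/n! satisfies u_t = u_x exactly.
import sympy as sp

t, x = sp.symbols('t x')
f = x**4 - 3*x**2 + 5*x          # only finitely many derivatives are nonzero

u = sum(t**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(6))
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x)))   # prints 0
print(sp.expand(u - f.subs(x, x + t)))              # also 0: here u(t,x) = f(x+t)
```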
We can rephrase the result above in more abstract language:
Corollary 27.5 Let $f$ be a power series converging on all of $\mathbb{R}$ and satisfying the hypotheses of Proposition 27.3.
Then $u(t,x) = e^{t\frac{\partial}{\partial x}} f(x)$ solves the initial value problem $$\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x}, \qquad u(0,x) = f(x).$$
This is pretty incredible: just by analogy with the matrix case we were able to propose a solution using the power series for the exponential, and then with some real analysis prove this solution works! But we can go even farther, and understand the solution geometrically using what we know about the exponentiated derivative operator. Indeed, in Corollary 27.4 we showed (following an exercise for you to complete) that at least when $f$ is given by an everywhere-convergent power series, exponentiating the derivative simply shifts the input: $e^{a\frac{d}{dx}} f(x) = f(x+a)$.
Thus, after all of this hard work we end up with a ridiculously simple solution:
Corollary 27.6 If $f$ is a power series converging on all of $\mathbb{R}$ (satisfying the hypotheses above), then $$u(t,x) = f(x+t)$$ solves the equation $\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x}$ with initial condition $u(0,x) = f(x)$.
This is trivial to confirm by hand, using the chain rule!
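Indeed, here is that chain-rule check done symbolically (a sketch of our own using `sympy`, with $f$ left as an arbitrary smooth function):

```python
# u(t,x) = f(x+t) satisfies u_t = u_x for an arbitrary (smooth) f: the chain rule.
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')
u = f(x + t)

print(sp.simplify(sp.diff(u, t) - sp.diff(u, x)))  # prints 0
```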
And in retrospect, we could have come up with this solution if we just thought hard enough, instead of diving into calculations! But our ability to write this solution in a geometrically obvious manner is special to this case, and to the differential equation in question being particularly simple. The power of the technique above was that it did not require us to be clever: the exponential comes to the rescue even when - and especially when - our intuition and foresight fail us.