Partial Differential Equations
(1)
y′ = 3,

or

(3)
y‴ + 12y = 0
There are a number of ways of classifying such differential equations. At the least, you should
know that the order of a differential equation refers to the highest order of derivative that appears
in the equation. Thus these first three differential equations are of order 1, 2 and 3 respectively.
Differential equations show up surprisingly often in a number of fields, including physics,
biology, chemistry and economics. Anytime something is known about the rate of change of a
function, or about how several variables impact the rate of change of a function, then it is likely
that there is a differential equation hidden behind the scenes. Many laws of physics take the
form of differential equations, such as the classic force equals mass times acceleration (since
acceleration is the second derivative of position with respect to time). Modeling means studying
a specific situation to understand the nature of the forces or relationships involved, with the goal
of translating the situation into a mathematical relationship. It is quite often the case that such
modeling ends up with a differential equation. One of the main goals of such modeling is to find
solutions to such equations, and then to study these solutions to provide an understanding of the
situation along with giving predictions of behavior.
In biology, for instance, if one studies populations (such as of small one-celled organisms), and
their rates of growth, then it is easy to run across one of the most basic differential equation
models, that of exponential growth. To model the population growth of a group of e-coli cells in
a Petri dish, for example, if we make the assumption that the cells have unlimited resources,
space and food, then the cells will reproduce at a fairly specific measurable rate. The trick to
figuring out how the cell population is growing is to understand that the number of new cells
created over any small time interval is proportional to the number of cells present at that time.
This means that if we look in the Petri dish and count 500 cells at a particular moment, then the
number of new cells being created at that time should be exactly five times the number of new
cells being created if we had looked in the dish and only seen 100 cells (i.e. five times the
population, five times the number of new cells being created). Curiously, this simple
observation led to population studies about humans (by Malthus and others in the 19th century),
based on exactly the same idea of proportional growth. Thus, we have a simple observation that
the rate of change of the population at any particular time is in direct proportion to the number of
cells present at that time. If you now translate this observation into a mathematical statement involving the population function y = P(t), where t stands for time, and P(t) is the function giving the population of cells at time t, then you have become a mathematical modeler.
Answer (yielding another example of a differential equation):

(4)
P′(t) = kP(t)

where k is the positive constant of proportionality.
To solve a differential equation means finding a solution function y = f(t), such that when the corresponding derivatives y′, y″, etc. are computed and substituted into the equation, the equation becomes an identity (such as "3x = 3x"). For instance, in the first example, equation (1) from above, the differential equation y′ = 3 is equivalent to the condition that the derivative of the function y = f(x) is a constant, equal to 3. To solve this means to find a function whose derivative with respect to x is constant, and equal to 3. Many such functions come to mind quickly, such as y = 3x, or y = 3x + 5, or y = 3x − 12. Each of these functions is said to satisfy the original differential equation, in that each one is a specific solution to the equation. In fact, clearly anything of the form y = 3x + c, where c is any constant, will be a solution to the equation. And on the other hand, any function that actually satisfies equation (1) will have to be of the form y = 3x + c.
To separate the idea of a specific solution, such as y = 3x + 5, from a more general set or family of solutions, y = 3x + c, where c is an arbitrary constant, we call an individual solution function, such as y = 3x + 5, a specific or particular solution (no surprise there). Next we call the set of functions, y = 3x + c, which contains an arbitrary constant (or constants), a general solution.
Note that if someone asks you to come up with a solution to the differential equation y′ = 3, then any function of the form y = 3x + c would do as an answer. However, if the same person asked you to solve y′ = 3, and at the same time make sure that the solution also satisfies another condition, such as y(0) = 20, then this extra condition forces the constant c to equal 20. To see this, note that the general solution is y = 3x + c. To find y(0), this just means substituting in x = 0, so that you find that y(0) = c. Then to make the result equal to 20, according to the extra condition, it must be the case that the constant c equals 20, so that the particular solution in this situation is y = 3x + 20, now with no arbitrary constants remaining.
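This little computation is easy to double-check with a computer algebra system. Here is a quick sketch using sympy (assuming Python with sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y' = 3 together with the extra condition y(0) = 20.
particular = sp.dsolve(sp.Eq(y(x).diff(x), 3), y(x), ics={y(0): 20})
print(particular)  # Eq(y(x), 3*x + 20)

# Without the extra condition we get the general solution y = 3x + C1.
general = sp.dsolve(sp.Eq(y(x).diff(x), 3), y(x))
print(general)
```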
An extra condition in addition to a differential equation is often called an initial condition if the condition involves the value of the function when x = 0 (or t = 0, or whatever the independent
variable is labeled for the function in the given situation). Sometimes to identify a specific
solution to an ODE, several initial conditions need to be given, not just about the value of the
function, y = f(x), when x = 0, but also giving the value of the first derivative of y when x = 0, or
even the value of higher order derivatives as well. This will often happen, for instance, if the
differential equation itself involves derivatives of higher order. In fact, typically the order of the
highest derivative that shows up in the equation will equal the number of constants that show up
in the general solution to a differential equation. You can picture why this happens by thinking of the process of solving the differential equation as involving as many integrations as the highest order of derivative, to undo each of the derivatives; each such indefinite integration will bring in a new constant. For instance, note that

∫ y′ dx = y + c₁, and also that ∫ 3 dx = 3x + c₂,

where we represent different constants by writing c₁ and c₂, to
distinguish them from each other. Yes, all these subscripted constants can appear odd, but there will be times when keeping good records of new constants that come along while we're solving differential equations will be especially important. Think of it as keeping track of "+" or "−" in an equation - yes, it can be somewhat annoying at times, but clearly it's critical!
Returning to the population model, equation (4), suppose we try to solve it by simply integrating both sides with respect to t:

(5)
∫ P′(t) dt = ∫ kP(t) dt

Although we can easily do the left side integral, and just end up with P(t) + c, integrating the right-hand side leads us in circles: how can we integrate a function such as P(t) if we don't know what the function is (finding the function was the whole point, after all!)? It appears that we need to take a different approach. Since we don't know what P(t) is, let's try isolating everything involving P(t) and its derivatives on one side of the equation:
(6)
(1/P(t)) P′(t) = k
The reason we're in better shape now is that if we integrate both sides with respect to t,

(7)
∫ (1/P(t)) P′(t) dt = ∫ k dt

then the right-hand side integral is trivial, and the left-hand side integral can be taken care of almost directly, with a quick substitution: if u = P(t) then du = P′(t) dt, and so (7) becomes

(8)
∫ (1/u) du = ∫ k dt

so that

(9)
ln|u| = ln|P(t)| = kt + c₁

where we've gone ahead and combined constants on one side. Now solving for the function P(t),
we find that
(10)
P(t) = e^(kt + c₁) = e^(c₁) e^(kt) = Ce^(kt)
A slicker way to organize this computation, known as separation of variables, is to treat the derivative as a quotient of differentials, dP/dt = kP, and move everything involving P to one side and everything involving t to the other:

(1/P) dP = k dt

Now the integration step looks as if it is happening with respect to P on the left hand side and with respect to t on the right hand side:

(13)
∫ (1/P) dP = ∫ k dt
and after you do these integrations, you're right back to equation (9), above.
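Either route can be confirmed with sympy, whose ODE solver handles separable equations like this one directly (a sketch; the symbol names are ours):

```python
import sympy as sp

t, k = sp.symbols('t k')
P = sp.Function('P')

# Solve the population equation P'(t) = k*P(t).
sol = sp.dsolve(sp.Eq(P(t).diff(t), k * P(t)), P(t))
print(sol)  # Eq(P(t), C1*exp(k*t))

# With a concrete initial population, say P(0) = 500:
sol500 = sp.dsolve(sp.Eq(P(t).diff(t), k * P(t)), P(t), ics={P(0): 500})
print(sol500)  # Eq(P(t), 500*exp(k*t))
```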
When you can use the separation of variables technique, life is good! Unfortunately, as with the integration tricks you learned in single variable calculus, it's not the case that one trick will take care of every possible situation. In any case, it's still a really good trick to have up your sleeve!
Examples
Try to solve the following three differential equations using the separation of variables technique
(where y is a function of x):
(a)
9yy′ + 4x = 0

(b)
y′ = 1 + y²

(c)
y′ = 2xy²
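If you want to check your answers, a computer algebra system can solve all three. A quick sympy sketch (assuming the equations read 9yy′ + 4x = 0, y′ = 1 + y², and y′ = 2xy²):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

eqs = [
    sp.Eq(9 * y(x) * y(x).diff(x) + 4 * x, 0),  # (a)
    sp.Eq(y(x).diff(x), 1 + y(x)**2),           # (b)
    sp.Eq(y(x).diff(x), 2 * x * y(x)**2),       # (c)
]

for eq in eqs:
    # dsolve may return one Eq or a list of Eq solutions
    print(sp.dsolve(eq, y(x)))
```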
Here are some classic examples of PDEs:

(1)
∂²u/∂t² = c² ∂²u/∂x² (the one-dimensional wave equation)

(2)
∂u/∂t = c² ∂²u/∂x² (the one-dimensional heat equation)

(3)
∂²u/∂x² + ∂²u/∂y² = 0 (the Laplace equation)

(4)
∂²u/∂x² + ∂²u/∂y² = f(x, y) (the Poisson equation)
Note that for PDEs one typically uses some other function letter such as u instead of y, which
now quite often shows up as one of the variables involved in the multivariable function.
In general we can use the same terminology to describe PDEs as in the case of ODEs. For
starters, we will call any equation involving one or more partial derivatives of a multivariable
function a partial differential equation. The order of such an equation is the highest order
partial derivative that shows up in the equation. In addition, the equation is called linear if it is
of the first degree in the unknown function u, and its partial derivatives, ux, uxx, uy, etc. (this
means that the highest power of the function, u, and its derivatives is just equal to one in each
term in the equation, and that only one of them appears in each term). If each term in the
equation involves either u, or one of its partial derivatives, then the function is classified as
homogeneous.
Take a look at the list of PDEs above. Try to classify each one using the terminology given
above. Note that the f(x,y) function in the Poisson equation is just a function of the variables x
and y, it has nothing to do with u(x,y).
Answers: all of these PDEs are second order, and are linear. All are also homogeneous except for the fourth one, the Poisson equation, as the f(x, y) term on the right hand side doesn't involve u or any of its derivatives.
The reason for defining the classifications linear and homogeneous for PDEs is to bring up the
principle of superposition. This excellent principle (which also shows up in the study of linear
homogeneous ODEs) is useful exactly whenever one considers solutions to linear homogeneous
PDEs. The idea is that if one has two functions, u₁ and u₂, that satisfy a linear homogeneous differential equation, then since taking the derivative of a sum of functions is the same as taking the sum of their derivatives, as long as the highest powers of the function and its derivatives in the equation are one (i.e., that it's linear), and each term involves the function or one of its derivatives (i.e., that it's homogeneous), it's a straightforward exercise to see that the sum of u₁ and u₂ will also be a solution to the differential equation. In fact, so will any linear combination au₁ + bu₂, where a and b are constants.
For instance, the two functions cos(xy) and sin(xy) are both solutions of the first-order linear homogeneous PDE

(5)
x ∂u/∂x − y ∂u/∂y = 0

It's a simple exercise to check that cos(xy) + sin(xy) and 3cos(xy) + 2sin(xy) are also solutions to the same PDE (as will be any linear combination of cos(xy) and sin(xy)).
This principle is extremely important, as it enables us to build up particular solutions out of
infinite families of solutions through the use of Fourier series. This trick of superposition is examined in great detail at the end of Math 21b, and although we will mention it during the classes on PDEs, we won't have time in 21a to go into any specifics about the use of Fourier series in this way (so come back for more in Math 21b!).
Solving PDEs
Solving PDEs is considerably more difficult in general than solving ODEs, as the level of
complexity involved can be great. For instance the following seemingly completely unrelated
functions are all solutions to the two-dimensional Laplace equation:
(1)
x² − y², …

You should check that these are all in fact solutions to the Laplace equation by doing the same thing you would do for an ODE solution: calculate ∂²u/∂x² and ∂²u/∂y², substitute them into the PDE, and see if the two sides of the equation are identical.
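Such a check is purely mechanical, so it is a natural job for a computer algebra system. A sympy sketch (the extra example functions eˣ sin y and ln(x² + y²) are standard harmonic functions added here for illustration, since the original list is partly lost):

```python
import sympy as sp

x, y = sp.symbols('x y')

def laplacian(u):
    # Left-hand side of the Laplace equation: u_xx + u_yy
    return sp.diff(u, x, 2) + sp.diff(u, y, 2)

candidates = (x**2 - y**2, sp.exp(x) * sp.sin(y), sp.log(x**2 + y**2))
for u in candidates:
    print(u, '->', sp.simplify(laplacian(u)))  # each reduces to 0
```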
Now, there are certain types of PDEs for which finding the solutions is not too hard. For instance, consider the first-order PDE

(2)
∂u/∂x = 3x² + xy²

where u is assumed to be a two-variable function depending on x and y. How could you solve this PDE? Think about it: is there any reason that we couldn't just undo the partial derivative of u with respect to x by integrating with respect to x? No, so try it out! Here, note that we are given information about just one of the partial derivatives, so when we find a solution, there will be an unknown factor that's not necessarily just an arbitrary constant, but in fact is a completely arbitrary function depending on y.
To solve (2), then, integrate both sides of the equation with respect to x, as mentioned. Thus

(3)
∫ (∂u/∂x) dx = ∫ (3x² + xy²) dx

so that u(x, y) = x³ + (1/2)x²y² + F. What is F? Note that it could be any function such that when one takes its partial derivative with respect to x, the result is 0. This means that in the case of PDEs, the arbitrary constants that we ran into during the course of solving ODEs now take the form of whole functions. Here F is in fact any function F(y) of y alone. To check that this is indeed a solution to the original PDE, it is easy enough to take the partial derivative of this u(x, y) function with respect to x and see that it indeed satisfies the PDE in (2).
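A quick symbolic version of that check, with F(y) left as a completely arbitrary function (a sympy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')  # arbitrary function of y alone

u = x**3 + sp.Rational(1, 2) * x**2 * y**2 + F(y)

# Differentiating with respect to x kills F(y) and recovers the PDE's
# right-hand side:
print(sp.diff(u, x))  # 3*x**2 + x*y**2
```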
Here is a second-order example: consider the PDE

(4)
∂²u/∂x∂y = 5x + y²

where u is again a two-variable function depending on x and y. We can solve this PDE by integrating first with respect to x, to get to an intermediate PDE,

(5)
∂u/∂y = (5/2)x² + xy² + F(y)

where F(y) is a function of y alone. Now, integrating both sides with respect to y yields

(6)
u(x, y) = (5/2)x²y + (1/3)xy³ + F(y) + G(x)

where now G(x) is a function of x alone, and F(y) stands for an antiderivative of the arbitrary function from (5), so it is still just an arbitrary function of y. (Note that we could have integrated with respect to y first, then x, and we would have ended up with the same result.) Thus, whereas in the ODE world general solutions typically end up with as many arbitrary constants as the order of the original ODE, here in the PDE world one typically ends up with as many arbitrary functions in the general solution.
To end up with a specific solution, then, we will need to be given extra conditions that indicate
what these arbitrary functions are. Thus the initial conditions for PDEs will typically involve
knowing whole functions, not just constant values. We will also see that the initial conditions
that appeared in specific ODE situations have slightly more involved analogs in the PDE world,
namely there are often so-called boundary conditions as well as initial conditions to take into
consideration.
We begin by looking at the simplest example of a wave PDE, the one-dimensional wave
equation. To get at this PDE, we show how it arises as we try to model a simple vibrating string,
one that is held in place between two secure ends. For instance, consider plucking a guitar string
and watching (and listening) as it vibrates. As is typically the case with modeling, reality is quite
a bit more complex than we can deal with all at once, and so we need to make some simplifying
assumptions in order to get started.
First off, assume that the string is stretched so tightly that the only real force we need to consider is that due to the string's tension. This helps us out as we only have to deal with one force, i.e. we can safely ignore the effects of gravity if the tension force is orders of magnitude greater than that of gravity. Next we assume that the string is as uniform, or homogeneous, as possible, and that it is perfectly elastic. This makes it possible to predict the motion of the string more readily since we don't need to keep track of kinks that might occur if the string wasn't uniform. Finally, we'll assume that the vibrations are pretty minimal in relation to the overall length of the string, i.e. in terms of displacement, the amount that the string bounces up and down is pretty small. The reason this will help us out is that we can concentrate on the simple up and down motion of the string, and not worry about any possible side to side motion that might occur.
Now consider a string of a certain length, l, that's held in place at both ends. First off, what exactly are we trying to do in modeling the string's vibrations? What kind of function do we want to solve for to keep track of the motion of the string? What will it be a function of? Clearly if the string is vibrating, then its motion changes over time, so time is one variable we will want to keep track of. To keep track of the actual motion of the string we will need to have a function that tells us the shape of the string at any particular time. One way we can do this is by looking for a function that tells us the vertical displacement (positive up, negative down) that exists at any point along the string, that is, how far away any particular point on the string is from the undisturbed resting position of the string, which is just a straight line. Thus, we would like to find a function u(x, t) of two variables. The variable x can measure distance along the string, measured away from one chosen end of the string (i.e. x = 0 is one of the tied down endpoints of the string), and t stands for time. The function u(x, t) then gives the vertical displacement of the string at any point, x, along the string, at any particular time t.
As we have seen time and time again in calculus, a good way to start when we would like to
study a surface or a curve or arc is to break it up into a series of very small pieces. At the end of
our study of one little segment of the vibrating string, we will think about what happens as the length of the little segment goes to zero, similar to the type of limiting process we've seen as we progress from Riemann sums to integrals.
Suppose we were to examine a very small length of the vibrating string as shown in figure 1:
Now what? How can we figure out what is happening to the vibrating string? Our best hope is to follow the standard path of modeling physical situations by studying all of the forces involved and then turning to Newton's classic equation F = ma. It's not a surprise that this will help us, as we have already pointed out that this equation is itself a differential equation (acceleration being the second derivative of position with respect to time). Ultimately, all we will be doing is substituting the particulars of our situation into this basic differential equation.
Because of our first assumption, there is only one force to keep track of in our situation, that of the string tension. Because of our second assumption, that the string is perfectly elastic with no kinks, we can assume that the force due to the tension of the string is tangential to the ends of the small string segment, and so we need to keep track of the string tension forces T₁ and T₂ at each end of the string segment. Let α and β be the angles these tangential forces make with the horizontal at the left and right ends of the segment, respectively (see figure 1). Assuming that the string is only vibrating up and down means that the horizontal components of the tension forces on each end of the small segment must perfectly balance each other out. Thus

(1)
T₁ cos α = T₂ cos β = T

where T is a string tension constant associated with the particular set-up (depending, for instance, on how tightly strung the guitar string is). Then keeping track of all of the forces involved means just summing up the vertical components of T₁ and T₂. This is equal to

(2)
T₂ sin β − T₁ sin α

where we keep track of the fact that the forces are in opposite directions in our diagram with the appropriate use of the minus sign. That's it for Force; now on to Mass and Acceleration.
The mass of the string segment is simple: just ρΔx, where ρ is the mass per unit length of the string, and Δx is (approximately) the length of the little segment. Acceleration is the second derivative of position with respect to time. Considering that the position of the string segment at a particular time is just u(x, t), the function we're trying to find, the acceleration for the little segment is ∂²u/∂t² (computed at some point between a and a + Δx). Putting all of this together, we find that:
(3)
T₂ sin β − T₁ sin α = ρΔx ∂²u/∂t²
Now what? It appears that we've got nowhere to go with this; it looks pretty unwieldy as it stands. However, be sneaky: try dividing both sides by the respective equal parts written down in equation (1):

(4)
(T₂ sin β)/(T₂ cos β) − (T₁ sin α)/(T₁ cos α) = (ρΔx/T) ∂²u/∂t²
or more simply:

(5)
tan β − tan α = (ρΔx/T) ∂²u/∂t²
Now, finally, note that tan α is equal to the slope at the left-hand end of the string segment, which is just ∂u/∂x evaluated at a, i.e. (∂u/∂x)(a, t), and similarly tan β equals (∂u/∂x)(a + Δx, t), so (5) becomes

(6)
(∂u/∂x)(a + Δx, t) − (∂u/∂x)(a, t) = (ρΔx/T) ∂²u/∂t²

or, dividing both sides by Δx,

(7)
(1/Δx) [(∂u/∂x)(a + Δx, t) − (∂u/∂x)(a, t)] = (ρ/T) ∂²u/∂t²
Now we're ready for the final push. Let's go back to the original idea: start by breaking up the vibrating string into little segments, examine each such segment using Newton's F = ma equation, and finally figure out what happens as we let the length of the little string segment dwindle to zero, i.e. examine the result as Δx goes to 0. Do you see any limit definitions of derivatives kicking around in equation (7)? As Δx goes to 0, the left-hand side of the equation is in fact just equal to ∂/∂x(∂u/∂x) = ∂²u/∂x², so the whole thing boils down to:
(8)
∂²u/∂x² = (ρ/T) ∂²u/∂t²
Rewriting this with the constant c² = T/ρ gives

(9)
∂²u/∂t² = c² ∂²u/∂x²

(where c is a constant).
This equation, which governs the motion of the vibrating string over time, is called the one-dimensional wave equation. It is clearly a second order PDE, and it's linear and homogeneous.
Since the string is tied down at both ends, we also know that u(0, t) = 0 and u(l, t) = 0 for all values of t. These are the boundary conditions for our wave equation. These will be key when we later on need to sort through possible solution functions for functions that satisfy our particular vibrating string set-up.
You might also note that we probably need to specify what the shape of the string is right when time t = 0, and you're right: to come up with a particular solution function, we would need to know u(x, 0). In fact we would also need to know the initial velocity of the string, which is just u_t(x, 0). These two requirements are called the initial conditions for the wave equation, and are also necessary to specify a particular vibrating string solution. For instance, as the simplest example of initial conditions, if no one is plucking the string, and it's perfectly flat to start with, then the initial conditions would just be u(x, 0) = 0 (a perfectly flat string) with initial velocity u_t(x, 0) = 0.
To start the separation of variables technique we make the key assumption that the solution function, whatever it is, can be written as the product of two independent functions, each one of which depends on just one of the two variables, x or t. Thus, imagine that the solution function u(x, t) can be written as

(2)
u(x, t) = F(x)G(t)

where F and G are single variable functions of x and t respectively. Differentiating this equation for u(x, t) twice with respect to each variable yields
(3)
∂²u/∂t² = F(x)G″(t) and ∂²u/∂x² = F″(x)G(t)
Thus when we substitute these two equations back into the original wave equation, which is
(4)
∂²u/∂t² = c² ∂²u/∂x²

then we get

(5)
F(x)G″(t) = c² F″(x)G(t)
Here's where our separation of variables assumption pays off, because now if we separate the equation above so that the terms involving F and its second derivative are on one side, and likewise the terms involving G and its derivatives are on the other, then we get

(6)
G″(t) / (c²G(t)) = F″(x) / F(x)
Now we have an equality where the left-hand side just depends on the variable t, and the right-hand side just depends on x. Here comes the critical observation - how can two functions, one just depending on t, and one just on x, be equal for all possible values of t and x? The answer is that they must each be constant, for otherwise the equality could not possibly hold for all possible combinations of t and x. Aha! Thus we have
(7)
G″(t) / (c²G(t)) = F″(x) / F(x) = k

for some constant k.
Case One: k = 0
Suppose k equals 0. Then the equations in (7) can be rewritten as
(8)
G″(t) = 0 · c²G(t) = 0 and F″(x) = 0 · F(x) = 0

yielding with very little effort two solution functions for F and G:

(9)
G(t) = at + b and F(x) = px + r
where a, b, p and r are constants (note how easy it is to solve such simple ODEs versus trying to deal with two variables at once; hence the power of the separation of variables approach).
Putting these back together to form u( x, t ) F ( x)G(t ) , then the next thing we need to do is to
note what the boundary conditions from equation (1) force upon us, namely that
(10)
u(0, t) = F(0)G(t) = 0 and u(l, t) = F(l)G(t) = 0 for all values of t

Unless G(t) = 0 (which would then mean that u(x, t) = 0, giving us the very dull solution equivalent to a flat, unplucked string), this implies that

(11)
F(0) = F(l) = 0.
But how can a linear function have two roots? Only by being identically equal to 0, so it must be the case that F(x) = 0. Sigh; then we still get u(x, t) = 0, and we end up with the dull solution again, the only possible solution if we start with k = 0.
Case Two: k > 0

So, let's see what happens if k is positive. Then the equations in (7) can be rewritten as

(12)
G″(t) = kc²G(t)

and

(13)
F″(x) = kF(x)

Try to solve these two ordinary differential equations. You are looking for functions whose second derivatives give back the original function, multiplied by a positive constant. Possible candidate solutions to consider include the exponential and sine and cosine functions. Of course, only the exponentials will actually work here; for instance
F(x) = Ae^(λx)

where λ² = k and A is a constant. Since λ could be positive or negative, and since solutions to (13) can be added together to form more solutions (note (13) is an example of a second order linear homogeneous ODE, so that the superposition principle holds), the general solution for (13) is

(14)
F(x) = Ae^(λx) + Be^(−λx)

where now A and B are constants and λ = √k. Knowing that F(0) = F(l) = 0, unfortunately the only possible values of A and B that work are A = B = 0, i.e. F(x) = 0. Thus, once again we end up with u(x, t) = F(x)G(t) = 0 · G(t) = 0, i.e. the dull solution once more. Now we place all of our hope on the third and final possibility for k, namely
k < 0. The equations from (7) are again

(12)
G″(t) = kc²G(t)

and

(13)
F″(x) = kF(x)

but now with k negative.
Exponential functions won't satisfy these two ODEs, but now the sine and cosine functions will. The general solution function for (13) is now

(15)
F(x) = A cos(λx) + B sin(λx)

where again A and B are constants and now we have λ² = −k. Again, we consider the boundary conditions that specified that F(0) = F(l) = 0. Substituting in 0 for x in (15) leads to F(0) = A = 0, so that F(x) = B sin(λx). Then the second condition, F(l) = B sin(λl) = 0, can hold with B ≠ 0 only if

(16)
λl = nπ, or λ = nπ/l (where n is an integer)
This means that there is an infinite set of solutions to consider (letting the constant B be equal to 1 for now), one for each possible integer n:

(18)
F(x) = sin(nπx/l)
Well, we would be done at this point, except that the solution function is u(x, t) = F(x)G(t), and we've neglected to figure out what the other function, G(t), equals. So, we return to the ODE in (12):

(12)
G″(t) = kc²G(t)

where, again, we are working with k a negative number. From the solution for F(x) we have determined that the only possible values of k that end up leading to non-trivial solutions are k = −(nπ/l)² for n some integer. Again, we get an infinite set of solutions for (12), which can be written in the form

(19)
G(t) = C cos(ωₙt) + D sin(ωₙt)

where ωₙ = cnπ/l, C and D are constants, and n is the same integer that showed up in the solution for F(x) in (18) (we're labeling ω with a subscript n to identify which value of n is used).
Now we really are done, for all we have to do is to drop our solutions for F(x) and G(t) into u(x, t) = F(x)G(t), and the result is

(20)
uₙ(x, t) = F(x)G(t) = (C cos(ωₙt) + D sin(ωₙt)) sin(nπx/l)

where the integer n that was used is identified by the subscript in uₙ(x, t) and ωₙ, and C and D are arbitrary constants.
At this point you should be in the habit of immediately checking solutions to differential equations. Is (20) really a solution for the original wave equation

∂²u/∂t² = c² ∂²u/∂x²

and does it actually satisfy the boundary conditions u(0, t) = 0 and u(l, t) = 0 for all values of t? Check this now; really, don't read any more until you're completely sure that this general solution works!
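If you would like a machine to double-check alongside you, here is a sympy sketch of the same verification (the symbol names are ours, and uₙ is written with ωₙ = cnπ/l):

```python
import sympy as sp

x, t = sp.symbols('x t')
l, c = sp.symbols('l c', positive=True)
C, D = sp.symbols('C D')
n = sp.symbols('n', integer=True, positive=True)

omega = c * n * sp.pi / l
u = (C * sp.cos(omega * t) + D * sp.sin(omega * t)) * sp.sin(n * sp.pi * x / l)

# Residual of the wave equation: u_tt - c^2 u_xx should vanish identically.
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))      # 0

# Boundary conditions at the two tied-down ends.
print(u.subs(x, 0))               # 0
print(sp.simplify(u.subs(x, l)))  # 0, since sin(n*pi) = 0 for integer n
```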
Each of these solutions uₙ(x, t) represents one of the normal modes of the vibrating string, which oscillates at ωₙ/(2π) = cn/(2l) cycles per unit time. In music one cycle per second is referred to as one hertz. Middle C on a piano is typically 263 hertz (i.e. when someone presses the middle C key, a piano string is struck that vibrates predominantly at 263 cycles per second), and the A above middle C is 440 hertz.
solution function when n is chosen to equal 1 is called the fundamental mode (for a particular
length string under a specific tension). The other normal modes are represented by different
values of n. For instance one gets the 2nd and 3rd normal modes when n is selected to equal 2 and
3, respectively. The fundamental mode, when n equals 1 represents the simplest possible
oscillation pattern of the string, when the whole string swings back and forth in one wide swing.
In this fundamental mode the widest vibration displacement occurs in the center of the string (see
the figures below).
It is this series of fundamental mode tones that gives the basis for much of the tonal scale used in
Western music, which is based on the premise that the lower the fundamental mode differences,
down to octaves and fifths, the more pleasing the relative sounds. Think about that the next time
you listen to some Dave Matthews!
Finally note that in real life, any time a guitar or violin string is caused to vibrate, the result is
typically a combination of normal modes, so that the vibrating string produces sounds from
many different overtones. The particular combination resulting from a particular set-up, the type
of string used, the way the string is plucked or bowed, produces the characteristic tonal quality
associated with that instrument. The way in which these different modes are combined makes it
possible to produce solutions to the wave equation with different initial shapes and initial
velocities of the string. This process of combination involves Fourier series, which will be covered at the end of Math 21b (come back to see it in action!).
Finally, finally, note that the solutions to the wave equations also show up when one considers
acoustic waves associated with columns of air vibrating inside pipes, such as in organ pipes,
trombones, saxophones or any other wind instruments (including, although you might not have
thought of it in this way, your own voice, which basically consists of a vibrating wind-pipe, i.e.
your throat!). Thus the same considerations in terms of fundamental tones, overtones and the
characteristic tonal quality of an instrument resulting from solutions to the wave equation also
occur for any of these instruments as well. So, the wave equation gets around quite a bit
musically!
A second classic way to solve the one-dimensional wave equation

(1)
∂²u/∂t² = c² ∂²u/∂x²

is to make a clever change of variables:

(2)
v = x + ct and z = x − ct
This then means that u, originally a function of x and t, now becomes a function of v and z instead. How does this work? Note that we can solve for x and t in (2), so that

(3)
x = (1/2)(v + z) and t = (1/(2c))(v − z)
Now using the chain rule for multivariable functions, you know that

(4)
∂u/∂t = (∂u/∂v)(∂v/∂t) + (∂u/∂z)(∂z/∂t) = c(∂u/∂v) − c(∂u/∂z)

since ∂v/∂t = c and ∂z/∂t = −c, and similarly
(5)
∂u/∂x = (∂u/∂v)(∂v/∂x) + (∂u/∂z)(∂z/∂x) = ∂u/∂v + ∂u/∂z

since ∂v/∂x = 1 and ∂z/∂x = 1. Working up to second derivatives, another, more involved application of the chain rule yields
∂²u/∂t² = ∂/∂t [c(∂u/∂v) − c(∂u/∂z)] = c[(∂²u/∂v²)(∂v/∂t) + (∂²u/∂z∂v)(∂z/∂t)] − c[(∂²u/∂v∂z)(∂v/∂t) + (∂²u/∂z²)(∂z/∂t)]

so that

(6)
∂²u/∂t² = c²(∂²u/∂v²) − 2c²(∂²u/∂z∂v) + c²(∂²u/∂z²)
Another almost identical computation using the chain rule results in the fact that

(7)
∂²u/∂x² = ∂²u/∂v² + 2(∂²u/∂z∂v) + ∂²u/∂z²
Now substitute these expressions back into the wave equation

(8)
∂²u/∂t² = c² ∂²u/∂x²

writing both ∂²u/∂t² and ∂²u/∂x² in terms of ∂²u/∂v², ∂²u/∂z∂v and ∂²u/∂z².
Doing this gives the following equation, ripe with cancellations:

(9)
c²(∂²u/∂v²) − 2c²(∂²u/∂z∂v) + c²(∂²u/∂z²) = c²(∂²u/∂v²) + 2c²(∂²u/∂z∂v) + c²(∂²u/∂z²)

Dividing by c² and canceling the terms involving ∂²u/∂v² and ∂²u/∂z² reduces this series of equations to
(10)
−2(∂²u/∂z∂v) = 2(∂²u/∂z∂v)

i.e.

(11)
∂²u/∂z∂v = 0
So what, you might well ask, after all, we still have a second order PDE, and there are still
several variables involved. But wait, think about what (11) implies. Picture (11) as it gives you
information about the partial derivative of a partial derivative:
(12)
∂/∂z (∂u/∂v) = 0

This means that ∂u/∂v, considered as a function of z and v, is constant in terms of the variable z, so ∂u/∂v can only depend on v, i.e.

(13)
∂u/∂v = M(v)

for some single variable function M. Integrating with respect to v gives

(14)
u(v, z) = ∫ M(v) dv

This, as an indefinite integral, results in a constant of integration, which in this case is just constant from the standpoint of the variable v. Thus it can be any arbitrary function of z alone, so that actually

(15)
u(v, z) = P(v) + N(z)

where P(v) is a function of v alone, and N(z) is a function of z alone, as the notation indicates.
Substituting back the original change of variable equations for v and z in (2) yields

(16)
u(x, t) = P(x − ct) + N(x + ct)
where P and N are arbitrary single variable functions. This is called d'Alembert's solution to the wave equation. Except for the somewhat annoying but easy enough chain rule computations, this was a pretty straightforward solution technique. The reason it worked so well in this case was that the change of variables in (2) was carefully selected to turn the original PDE into one in which the variables basically had no interaction, so that the original second order PDE could be solved by a series of two single variable integrations, which was easy to do.
Check that d'Alembert's solution really works. According to this solution, you can pick any functions for P and N, such as P(v) = v² and N(v) = v². Then

(17)
u(x, t) = (x − ct)² + (x + ct)² = 2x² + 2c²t²

It is then easy to compute that

(18)
∂²u/∂t² = 4c²

and that

(19)
∂²u/∂x² = 4

so that indeed

(20)
∂²u/∂t² = c² ∂²u/∂x²
You can try out the same technique on the similar PDE

(21)
∂²u/∂x∂y + ∂²u/∂y² = 0

using the change of variables

(22)
v = x and z = x − y

(Try it out! You should get that u(x, y) = P(x) + N(x − y) with arbitrary functions P and N.)
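Since sign slips are easy to make in these chain rule computations, a quick numerical spot-check of d'Alembert's solution can be reassuring. The sketch below is illustrative only: the choices of P, N, c and the sample points are arbitrary, and the second derivatives are approximated by central differences.

```python
# Numerically spot-check that u(x,t) = P(x - ct) + N(x + ct) satisfies
# u_tt = c^2 u_xx for arbitrary twice-differentiable P and N.
import math

c = 2.0                      # illustrative wave speed
P = lambda s: math.sin(s)    # any twice-differentiable function
N = lambda s: s ** 3         # any twice-differentiable function

def u(x, t):
    return P(x - c * t) + N(x + c * t)

h = 1e-4  # step size for central second differences

def u_tt(x, t):
    return (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2

def u_xx(x, t):
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2

for x, t in [(0.3, 0.7), (-1.2, 2.5), (4.0, 0.1)]:
    # the residual should vanish up to discretization error
    assert abs(u_tt(x, t) - c ** 2 * u_xx(x, t)) < 1e-3
```

Swapping in any other smooth P and N should leave the assertions passing, which is exactly the point of (16): the wave equation places no restriction on the two profile functions beyond smoothness.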
Note that in our solution (16) to the wave equation, nothing has been specified yet about the initial and boundary conditions, and we said we would take care of these this time around. So now we take a look at what these conditions imply for our choices of the two functions P and N.

If we are given an initial displacement function u(x, 0) = f(x) along with an initial velocity function u_t(x, 0) = g(x), then we can match up these conditions with our solution by simply substituting t = 0 into (16) and following along. We start first with a simplified set-up, where we assume that we are given the initial displacement function u(x, 0) = f(x), and that the initial velocity function g(x) is equal to 0 (i.e. as if someone stretched the string and simply released it, without imparting any extra velocity over the string tension alone).
Now the first initial condition implies that

(23)
u(x, 0) = P(x − c·0) + N(x + c·0) = P(x) + N(x) = f(x)
We next figure out what the second initial condition implies. Working with the initial condition u_t(x, 0) = g(x) = 0, we see by using the chain rule again on the functions P and N that

(24)
u_t(x, t) = ∂/∂t (P(x − ct) + N(x + ct)) = −cP′(x − ct) + cN′(x + ct)

(remember that P and N are just single variable functions, so the derivative indicated is just a simple single variable derivative with respect to their input). Thus in the case where u_t(x, 0) = g(x) = 0, setting t = 0 gives

(25)
−cP′(x) + cN′(x) = 0, so that P′(x) = N′(x)

and so

(26)
P(x) = k + N(x)

for some constant k. Combining this with the fact that P(x) + N(x) = f(x) means that 2P(x) − k = f(x), so that P(x) = (f(x) + k)/2 and likewise N(x) = (f(x) − k)/2. Combining these leads to the solution
(27)
u(x, t) = P(x − ct) + N(x + ct) = (1/2)(f(x − ct) + f(x + ct))
Next we check our solution against the boundary conditions

(28)
u(0, t) = 0 and u(l, t) = 0

The first of these implies that

(29)
u(0, t) = (1/2)(f(−ct) + f(ct)) = 0

or

(30)
f(−ct) = −f(ct)

so that to meet this condition, the initial condition function f must be selected to be an odd
function. The second boundary condition, that u(l, t) = 0, implies

(31)
u(l, t) = (1/2)(f(l − ct) + f(l + ct)) = 0

so that f(l + ct) = −f(l − ct). Next, since we've seen that f has to be an odd function, −f(l − ct) = f(ct − l). Putting this all together, this means that

(32)
f(ct + l) = f(ct − l)

which means that f must have period 2l, since the inputs vary by that amount. Remember that this just means the function repeats itself every time 2l is added to the input, the same way that the sine and cosine functions have period 2π.
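To see what the odd, 2l-periodic requirement looks like in practice, here is a small sketch; the tent-shaped f is just an illustrative choice, and the extension recipe works for any f on [0, l] that vanishes at both ends.

```python
# Extend an initial displacement f, given on [0, l], to the odd 2l-periodic
# function required by the boundary conditions u(0,t) = u(l,t) = 0.
l = 1.0

def f(x):                    # illustrative "plucked string" shape on [0, l]
    return min(x, l - x)

def f_ext(x):
    # reduce x modulo the period 2l into [-l, l), then use oddness
    x = (x + l) % (2 * l) - l
    return f(x) if x >= 0 else -f(-x)

assert abs(f_ext(-0.3) + f_ext(0.3)) < 1e-12         # odd
assert abs(f_ext(0.3 + 2 * l) - f_ext(0.3)) < 1e-12  # period 2l
```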
What happens if the initial velocity isn't equal to 0? Thus suppose u_t(x, 0) = g(x) ≠ 0. Tracing through the same types of arguments as above leads to the solution function

(33)
u(x, t) = (1/2)(f(x − ct) + f(x + ct)) + (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds
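Formula (33) can also be sanity-checked numerically. In the sketch below, f, g and c are illustrative choices, and the integral of g is approximated with a simple trapezoid rule; the checks confirm that the formula reproduces the initial displacement f and the initial velocity g.

```python
# Evaluate d'Alembert's formula with a nonzero initial velocity g and
# verify the two initial conditions numerically.
import math

c = 1.0
f = lambda x: math.sin(x)    # illustrative initial displacement
g = lambda x: math.cos(x)    # illustrative initial velocity

def integral(a, b, n=2000):  # trapezoid rule for the integral of g
    step = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * step) for i in range(1, n))
    return s * step

def u(x, t):
    return (0.5 * (f(x - c * t) + f(x + c * t))
            + integral(x - c * t, x + c * t) / (2 * c))

# u(x, 0) reproduces f(x) ...
assert abs(u(0.4, 0.0) - f(0.4)) < 1e-9
# ... and u_t(x, 0) reproduces g(x) (central difference in t)
h = 1e-5
assert abs((u(0.4, h) - u(0.4, -h)) / (2 * h) - g(0.4)) < 1e-4
```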
In the next installment of this introduction to PDEs we will turn to the Heat Equation.
If you leave a drink outside on a freezing cold day, then after ten minutes the drink will be a lot colder than if you'd kept the drink inside in a warm room; this seems pretty obvious!
If the function u(x, y, z, t) gives the temperature at time t at any point (x, y, z) in an object, then in mathematical terms the direction of fastest decreasing temperature away from a specific point (x, y, z) is just the negative of the gradient of u (calculated at the point (x, y, z) and a particular time t). Note that here we are considering the gradient of u as being taken just with respect to the spatial coordinates x, y and z, so that we write

(1)
grad(u) = ∇u = (∂u/∂x) i + (∂u/∂y) j + (∂u/∂z) k

Thus the rate at which heat flows away from (or toward) the point is proportional to this gradient, so that if F is the vector field that gives the velocity of the heat flow, then

(2)
F = −k (grad(u))
The total amount of heat leaving a region E per unit time can be measured by a flux integral over the surface of E of the heat flow vector field, F. Recall that F is the vector field that gives the velocity of the heat flow; it's the one we wrote down as F = −k∇u in the previous section. Thus the amount of heat leaving E per unit time is just

(1)
∬_S F · dS
where S is the surface of E. But wait, we have the highly convenient divergence theorem that
tells us that
(2)
∬_S F · dS = ∭_E div(F) dV

Since F = −k grad(u), this means that

(3)
div(F) = −k div(grad(u))

and writing out div(grad(u)) in coordinates gives

(4)
div(grad(u)) = ∇·∇u = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²

Incidentally, this combination of divergence and gradient is used so often that it's given a name, the Laplacian. The notation div(grad(u)) = ∇·∇u is usually shortened up to simply ∇²u. So we could rewrite (1), the heat leaving region E per unit time, as

(5)
∬_S F · dS = −k ∭_E (∇²u) dV
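To see the divergence theorem step concretely, here is a numerical sketch with the illustrative choice u = x² + y² + z² on the unit cube, for which ∇²u = 6, so both sides of the theorem should come out to −6k; the surface integral is approximated with a midpoint rule on each face.

```python
# Compare the flux of F = -k grad(u) through the unit cube's surface with
# the volume integral of div(F) = -k * laplacian(u), here the constant -6k.
k = 2.0
grad_u = lambda x, y, z: (2 * x, 2 * y, 2 * z)    # u = x^2 + y^2 + z^2
volume_integral = -6 * k                          # integrand -6k over unit volume

n = 50
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]           # midpoints for each face grid

flux = 0.0
for a in pts:
    for b in pts:
        flux += -k * grad_u(1.0, a, b)[0] * h * h   # face x = 1, n = (+1, 0, 0)
        flux -= -k * grad_u(0.0, a, b)[0] * h * h   # face x = 0, n = (-1, 0, 0)
        flux += -k * grad_u(a, 1.0, b)[1] * h * h   # face y = 1
        flux -= -k * grad_u(a, 0.0, b)[1] * h * h   # face y = 0
        flux += -k * grad_u(a, b, 1.0)[2] * h * h   # face z = 1
        flux -= -k * grad_u(a, b, 0.0)[2] * h * h   # face z = 0

assert abs(flux - volume_integral) < 1e-9
```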
On the other hand, we can calculate the total amount of heat, H, in the region E at a particular time t by computing the triple integral over E:

(6)
H = ∭_E σρ u(x, y, z, t) dV

where ρ is the density of the material and the constant σ is the specific heat of the material (don't worry about all these extra constants for now; we will lump them all together in one place in the end). How does this relate to the earlier integral? On one hand, (5) gives the rate of heat leaving E per unit time. This is just the same as −∂H/∂t, where H gives the total amount of heat in E. This means we actually have two ways to calculate the same thing, because we can also calculate ∂H/∂t by differentiating equation (6) giving H, i.e.
(7)
∂H/∂t = ∭_E σρ (∂u/∂t) dV

Now since (5) and −∂H/∂t from (7) both give the rate of heat leaving E per unit time, they must equal each other; after canceling the minus signs on both sides, this says

(8)
∭_E σρ (∂u/∂t) dV = k ∭_E (∇²u) dV
For these two integrals to be equal, their two integrands must equal each other (since the equality holds over any arbitrary region E in the object being studied), so

(9)
σρ (∂u/∂t) = k (∇²u)

or, if we let c² = k/(σρ) and write out the Laplacian ∇²u, then this works out simply as

(10)
∂u/∂t = c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)
This, then, is the PDE that models the diffusion of heat in an object, i.e. the Heat Equation! This
particular version (10) is the three-dimensional heat equation.
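One quick way to build confidence in (10) is to test it on a known solution. The function below is the classic separable example u = e^(−3c²t) sin(x) sin(y) sin(z): each second spatial derivative multiplies u by −1, so ∇²u = −3u and u_t = −3c²u, and the equation balances. The value of c and the sample point are arbitrary illustrative choices.

```python
# Check the 3-D heat equation u_t = c^2 (u_xx + u_yy + u_zz) numerically
# on the known solution u = exp(-3 c^2 t) sin(x) sin(y) sin(z).
import math

c = 0.5

def u(x, y, z, t):
    return math.exp(-3 * c ** 2 * t) * math.sin(x) * math.sin(y) * math.sin(z)

h = 1e-4
x, y, z, t = 0.4, 1.1, 0.7, 0.2

u_t  = (u(x, y, z, t + h) - u(x, y, z, t - h)) / (2 * h)
u_xx = (u(x + h, y, z, t) - 2 * u(x, y, z, t) + u(x - h, y, z, t)) / h ** 2
u_yy = (u(x, y + h, z, t) - 2 * u(x, y, z, t) + u(x, y - h, z, t)) / h ** 2
u_zz = (u(x, y, z + h, t) - 2 * u(x, y, z, t) + u(x, y, z - h, t)) / h ** 2

assert abs(u_t - c ** 2 * (u_xx + u_yy + u_zz)) < 1e-6
```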
For a thin metal bar lying along the x-axis, the temperature u depends just on x and t, and the heat equation reduces to the one-dimensional heat equation

(1)
∂u/∂t = c² ∂²u/∂x²
One of the interesting things to note at this point is how similar this PDE appears to the wave
equation PDE. However, the resulting solution functions are remarkably different in nature.
Remember that the solutions to the wave equation had to do with oscillations, dealing with
vibrating strings and all that. Here the solutions to the heat equation deal with temperature flow,
not oscillation, so that means the solution functions will likely look quite different. If you're familiar with the solution to Newton's heating and cooling differential equations, then you might expect to see some type of exponential decay function as part of the solution function.
Before we start to solve this equation, let's mention a few more conditions that we will need to know to nail down a specific solution. If the metal bar that we're studying has a specific length, l, then we need to know the temperatures at the ends of the bar. These temperatures will give us boundary conditions similar to the ones we worked with for the wave equation. To make life a bit simpler for us as we solve the heat equation, let's start with the case when the ends of the bar, at x = 0 and x = l, both have temperature equal to 0 for all time (you can picture this situation as a metal bar with the ends stuck against blocks of ice, or some other cooling apparatus keeping the ends exactly at 0 degrees). Thus we will be working with the same boundary conditions as before, namely

(2)
u(0, t) = 0 and u(l, t) = 0 for all t
Finally, to pick out a particular solution, we also need to know the initial starting temperature of the entire bar, namely the function u(x, 0). Interestingly, that's all we need for an initial condition this time around (recall that to specify a particular solution of the wave equation we needed to know two initial conditions, u(x, 0) and u_t(x, 0)).
The nice thing now is that since we have already solved a PDE, then we can try following the
same basic approach as the one we used to solve the last PDE, namely separation of variables.
With any luck, we will end up solving this new PDE. So, remembering back to what we did in that case, let's start by writing
(3)
u(x, t) = F(x)G(t)

where F and G are single variable functions. Differentiating this equation for u(x, t) with respect to each variable yields

(4)
∂u/∂t = F(x)G′(t) and ∂²u/∂x² = F″(x)G(t)
When we substitute these two equations back into the original heat equation

(5)
∂u/∂t = c² ∂²u/∂x²

we get

(6)
F(x)G′(t) = c² F″(x)G(t)
If we now separate the two functions F and G by dividing through both sides, then we get

(7)
G′(t)/(c² G(t)) = F″(x)/F(x)

Just as before, the left-hand side only depends on the variable t, and the right-hand side just depends on x. As a result, to have these two be equal can only mean one thing, that they are both equal to the same constant, k:

(8)
G′(t)/(c² G(t)) = F″(x)/F(x) = k
As before, let's first take a look at the implications for F(x), as the boundary conditions will again limit the possible solution functions. From (8) we get that F(x) has to satisfy

(9)
F″(x) − kF(x) = 0
Just as before, one can consider the various cases with k being positive, zero, or negative. Just as before, to meet the boundary conditions, it turns out that k must in fact be negative (otherwise F(x) ends up being identically equal to 0, and we end up with the trivial solution u(x, t) = 0). So, skipping ahead a bit, let's assume we have figured out that k must be negative (you should check the other two cases just as before to see that what we've just written is true!). To indicate this, we write, as before, k = −λ², so that we now need to look for solutions to

(10)
F″(x) + λ²F(x) = 0

These solutions are just the same as before, namely the general solution is

(11)
F(x) = A cos(λx) + B sin(λx)
where again A and B are constants, and now λ = √(−k). Next, let's consider the boundary conditions u(0, t) = 0 and u(l, t) = 0. These are equivalent to stating that F(0) = F(l) = 0. Substituting in 0 for x in (11) leads to

(12)
F(0) = A = 0

so that F(x) = B sin(λx), and the condition F(l) = 0 then forces

(13)
λl = nπ, or λ = nπ/l (where n is an integer)

and so, absorbing the constant B into the constant C below,

(14)
F(x) = sin(nπx/l)
where n is an integer. Next we solve for G(t), using equation (8) again. So, rewriting (8), we see that this time

(15)
G′(t) + λ_n² G(t) = 0

where λ_n = cnπ/l, since we had originally written k = −λ², and we just determined that λ = nπ/l during the solution for F(x). The general solution to this first order differential equation is just

(16)
G(t) = Ce^(−λ_n² t)
Multiplying these two solutions together gives the set of solutions

(17)
u(x, t) = F(x)G(t) = C sin(nπx/l) e^(−λ_n² t)

where λ_n = cnπ/l. As is always the case, given a supposed solution to a differential equation, you should check to see that this indeed is a solution to the original heat equation, and that it satisfies the two boundary conditions we started with.
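Here is one way to carry out that check numerically; C, c, l and n below are arbitrary illustrative values, and the derivatives are approximated by finite differences.

```python
# Verify u(x,t) = C sin(n pi x / l) exp(-lam^2 t), lam = c n pi / l,
# against the heat equation and the boundary conditions u(0,t) = u(l,t) = 0.
import math

C, c, l, n = 2.0, 0.3, 1.5, 3
lam = c * n * math.pi / l

def u(x, t):
    return C * math.sin(n * math.pi * x / l) * math.exp(-lam ** 2 * t)

h = 1e-4
x, t = 0.6, 0.25

u_t  = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2

assert abs(u_t - c ** 2 * u_xx) < 1e-5                    # heat equation holds
assert abs(u(0.0, t)) < 1e-12 and abs(u(l, t)) < 1e-12    # ends stay at 0
```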
Now we have to see how to go from this set of solutions

u(x, t) = C sin(nπx/l) e^(−λ_n² t)

that we found in the last section, to a specific solution for a particular situation. How can one
figure out which values of n and C are needed for a specific problem? The answer lies not in
choosing one such solution function, but more typically it requires setting up an infinite series of
such solutions. Such an infinite series, because of the principle of superposition, will still be a
solution function to the equation, because the original heat equation PDE was linear and
homogeneous. Using the superposition principle, and by summing together various solutions
with carefully chosen values of C, then it is possible to create a specific solution function that
will match any (reasonable) given starting temperature function u (x,0) . The way in which we
add together solutions involves Fourier series, again, a topic that is too broad to go into at this
point, but one which you will see if you continue on with Math 21b!
In any case, you can see one clear feature in the solution functions, which is the presence of the exponential decay term e^(−λ_n² t) involving the time variable. For instance, if the temperature at any point on the bar started hotter than the two ends of the bar (which were kept at a steady 0 degrees throughout, according to our boundary conditions), then the exponential decay term shows that as time passes by, the temperature at that point of the bar will exponentially decay down to 0.
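The decay rate's dependence on n can also be seen directly: since λ_n = cnπ/l, the factor e^(−λ_n² t) shrinks far faster for the higher modes. A tiny illustration, with made-up values of c and l:

```python
# Each mode sin(n pi x / l) decays like exp(-(c n pi / l)^2 t); high-frequency
# components of the starting temperature die out fastest.
import math

c, l = 0.2, 1.0

def mode_amplitude(n, t):
    lam = c * n * math.pi / l
    return math.exp(-lam ** 2 * t)   # relative size of mode n at time t

a1 = mode_amplitude(1, 5.0)
a5 = mode_amplitude(5, 5.0)

assert a1 < 1.0   # every mode decays toward 0 ...
assert a5 < a1    # ... and mode 5 decays far faster than mode 1
```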
To see a particular example in action, you will get a chance to do another Mathematica
assignment covering aspects of differential equations. In this lab you will be able to enter in
different starting functions for the temperature along a metal bar, and see how they decay as t,
the time variable increases (look in the section marked Heat Equation for details).
Finally, the same sorts of ideas carry over to higher dimensions. The two-dimensional wave equation

(1)
∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²)

can be used to model waves on the surface of drumheads, or on the surface of liquids, and the three-dimensional heat equation

(2)
∂u/∂t = c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)

models heat flow in solid objects. Both can be attacked with the same basic techniques, with infinite series of separated solutions combining to give solutions to the original PDEs, in much the same way that we saw in the two separation of variables solutions above.
If you would like to learn more about PDEs (and who wouldn't?), then you might want to take a look at one of the classic texts on differential equations, by William Boyce and Richard DiPrima (Elementary Differential Equations and Boundary Value Problems). Another good reference book is Advanced Engineering Mathematics by Erwin Kreyszig (much of this handout was modeled on the approach taken by Kreyszig in his textbook).
PDE Problems
(1)
Determine which of the following functions are solutions to the two-dimensional Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0

(a) u(x, y) = x³ − 3xy²
(b) u(x, y) = x⁴ − 6x²y² + y⁴ + 12
(c) u(x, y) = cos(x + y) e^(x − y)
(d) u(x, y) = arctan(y/x)
(e) u(x, y) = 2002xy² + 1999x²y
(f)
(2)
Determine which of the following functions are solutions to the one-dimensional wave equation (for a suitable value of the constant c). Also determine what c must equal in each case.

∂²u/∂t² = c² ∂²u/∂x²

(a) u(x, t) = sin(x − 2t) + cos(x + 2t)
(b) u(x, t) = ln(xt)
(c) u(x, t) = 4x³ + 12xt² + 24
(d) u(x, t) = sin(100x) sin(100t)
(e) u(x, t) = 2002xt² + 1001x²t
(f) u(x, t) = x² + 4t²
(3)
Solve the following four PDEs, where u is a function of two variables, x and y. Note that your answer might have undetermined functions of either x or y, the same way an ODE might have undetermined constants. Note that you can solve these PDEs without having to use the separation of variables technique.

(a) ∂²u/∂x² + 9u = 0
(b) ∂u/∂y + 2yu = 0
(c) ∂u/∂x = 2xyu
(d) ∂²u/∂x∂y = 0
(4)
Solve the following systems of PDEs, where u is a function of two variables, x and y. Note that once again your answer might have undetermined functions of either x or y.

(a) ∂²u/∂x² = 0 and ∂²u/∂y² = 0
(b) ∂u/∂x = 0 and ∂u/∂y = 0
(c) ∂²u/∂x∂y = 0 and ∂²u/∂x² = 0
(d) ∂²u/∂y∂x = 0, ∂²u/∂x² = 0 and ∂²u/∂y² = 0
(5)
Determine specific solutions to the one-dimensional wave equation for each of the following sets of initial conditions. Suppose that each one is modeling a vibrating string of length π with fixed ends, and with constants such that c² = 1 in the wave equation PDE.

(a)
(b)
(c)
(d)
(6)
Find solutions u(x, y) to each of the following PDEs by using the separation of variables technique.

(a) ∂u/∂x + ∂u/∂y = 0
(b) ∂u/∂x + y ∂u/∂y = 0
(c) ∂u/∂x + ∂u/∂y = 2(x + y)u
(d) ∂²u/∂x∂y − u = 0
(e) y ∂u/∂x + ∂u/∂y = 0
(f) x ∂u/∂x + ∂u/∂y = 0