
Harvard University, Partial Differential equations

Introduction to Partial Differential Equations - Math 21a


If you took Math 1b here at Harvard, then you have already been introduced to the idea of a
differential equation. Up until now, however, if you have already worked with differential
equations, then they've probably all been ordinary differential equations (ODEs), involving
ordinary derivatives of a function of a single variable. Now that you have worked with
functions of several variables in Math 21a, you are ready to explore a new area of differential
equations, one that involves partial derivatives. These equations are aptly named partial
differential equations (PDEs). During this short section of Math 21a, you will get a chance to
see some of the most important PDEs, all of which are examples of linear second-order PDEs
(the terminology will be explained shortly).
First, however, in case you haven't worked with too many differential equations at this point, let's back up a bit and review some of the issues behind ordinary differential equations.

Ordinary Differential Equations


A differential equation, simply put, is an equation involving one or more derivatives of a
function y = f(x). These equations can be as straightforward as
(1)    y' = 3,

or more complicated, such as

(2)    y'' + 12y = 0

or

(3)    (x^2 y'')' + e^x y' - 3xy = (x^3 + x).

There are a number of ways of classifying such differential equations. At the least, you should
know that the order of a differential equation refers to the highest order of derivative that appears
in the equation. Thus these first three differential equations are of order 1, 2 and 3 respectively.
Differential equations show up surprisingly often in a number of fields, including physics,
biology, chemistry and economics. Anytime something is known about the rate of change of a
function, or about how several variables impact the rate of change of a function, then it is likely
that there is a differential equation hidden behind the scenes. Many laws of physics take the
form of differential equations, such as the classic force equals mass times acceleration (since
acceleration is the second derivative of position with respect to time). Modeling means studying
a specific situation to understand the nature of the forces or relationships involved, with the goal
of translating the situation into a mathematical relationship. It is quite often the case that such
modeling ends up with a differential equation. One of the main goals of such modeling is to find
solutions to such equations, and then to study these solutions to provide an understanding of the
situation along with giving predictions of behavior.

By Professor Kinyunyu, M. +255758763579


In biology, for instance, if one studies populations (such as of small one-celled organisms), and
their rates of growth, then it is easy to run across one of the most basic differential equation
models, that of exponential growth. To model the population growth of a group of E. coli cells in
a Petri dish, for example, if we make the assumption that the cells have unlimited resources,
space and food, then the cells will reproduce at a fairly specific measurable rate. The trick to
figuring out how the cell population is growing is to understand that the number of new cells
created over any small time interval is proportional to the number of cells present at that time.
This means that if we look in the Petri dish and count 500 cells at a particular moment, then the
number of new cells being created at that time should be exactly five times the number of new
cells being created if we had looked in the dish and only seen 100 cells (i.e. five times the
population, five times the number of new cells being created). Curiously, this simple
observation led to population studies about humans (by Malthus and others in the 19th century),
based on exactly the same idea of proportional growth. Thus, we have a simple observation that
the rate of change of the population at any particular time is in direct proportion to the number of
cells present at that time. If you now translate this observation into a mathematical statement
involving the population function y = P(t), where t stands for time, and P(t) is the function giving
the population of cells at time t, then you have become a mathematical modeler.
Answer (yielding another example of a differential equation):
(4)    P'(t) = k\,P(t),   or, equivalently,   y' = ky

To solve a differential equation means finding a solution function y = f(t), such that when the corresponding derivatives, y', y'', etc. are computed and substituted into the equation, the equation becomes an identity (such as "3x = 3x"). For instance, in the first example, equation (1) from above, the differential equation y' = 3 is equivalent to the condition that the derivative of the function y = f(x) is a constant, equal to 3. To solve this means to find a function whose derivative with respect to x is constant, and equal to 3. Many such functions come to mind quickly, such as y = 3x, or y = 3x + 5, or y = 3x - 12. Each of these functions is said to satisfy the original differential equation, in that each one is a specific solution to the equation. In fact, clearly anything of the form y = 3x + c, where c is any constant, will be a solution to the equation. And on the other hand, any function that actually satisfies equation (1) will have to be of the form y = 3x + c.
To separate the idea of a specific solution, such as y = 3x + 5, from a more general set or family of solutions, y = 3x + c, where c is an arbitrary constant, we call an individual solution function, such as y = 3x + 5, a specific or particular solution (no surprise there). Next we call the set of functions, y = 3x + c, which contain an arbitrary constant (or constants), a general solution.
Note that if someone asks you to come up with a solution to the differential equation y' = 3, then any function of the form y = 3x + c would do as an answer. However, if the same person asked you to solve y' = 3, and at the same time make sure that the solution also satisfies another condition, such as y(0) = 20, then this extra condition forces the constant c to equal 20. To see this, note that the general solution is y = 3x + c. To find y(0), this just means substituting in x = 0, so that you find that y(0) = c. Then to make the result equal to 20, according to the extra
condition, it must be the case that the constant c equals 20, so that the particular solution in this
situation is y = 3x + 20, now with no arbitrary constants remaining.
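
As a quick aside, here is a minimal sketch of this computation in Python using the SymPy library (a tool of my choosing, not something the notes themselves use); it solves y' = 3 with the initial condition y(0) = 20.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # The ODE y'(x) = 3 together with the initial condition y(0) = 20
    ode = sp.Eq(y(x).diff(x), 3)
    solution = sp.dsolve(ode, y(x), ics={y(0): 20})

    print(solution)   # Eq(y(x), 3*x + 20), i.e. the particular solution y = 3x + 20
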
An extra condition in addition to a differential equation, is often called an initial condition, if the
condition involves the value of the function when x = 0 (or t = 0, or whatever the independent
variable is labeled for the function in the given situation). Sometimes to identify a specific
solution to an ODE, several initial conditions need to be given, not just about the value of the
function, y = f(x), when x = 0, but also giving the value of the first derivative of y when x = 0, or
even the value of higher order derivatives as well. This will often happen, for instance, if the
differential equation itself involves derivatives of higher order. In fact, typically the order of the
highest derivative that shows up in the equation will equal the number of constants that show up
in the general solution to a differential equation. You can picture why this happens by thinking of the process of solving the differential equation as involving as many integrations as the highest order of derivative, one to "undo" each of the derivatives; each such indefinite integration will bring in a new constant.

Solving Differential Equations


Solving differential equations is an art and a science. There are so many different varieties of differential equations that there is no one sure-fire method that can solve all differential equations exactly (i.e. coming up with a closed form for a solution function, such as y = 3x + 5). There are, however, a number of numerical techniques that can give approximate solutions to any desired degree of accuracy. Using such a technique is sometimes necessary to answer a specific question, but often it is knowledge of an exact solution that leads to better understanding of the situation being described by the differential equation. In our Math 21a classes, we will concentrate on solving ODEs exactly, and will not consider such numerical techniques. However, if you are interested in seeing some numeric techniques in action, then you might consider trying to solve some differential equations using the Mathematica program.
Note that to solve example (1), y' = 3, we could have simply integrated both sides. This follows from the very basic idea that anytime we are given two things that are equal, then as long as we do the same thing to one side of an equation as to the other, equality still holds. For an obvious example of this principle in action, if x = 2, then x + 4 = 2 + 4 and 3x = 6. So solving the differential equation y' = 3 is pretty straightforward; we just have to integrate both sides:

\int y'\,dx = \int 3\,dx

The fundamental theorem of calculus then tells us that the integral of the derivative of a function is just the function itself up to a constant, i.e. that \int y'\,dx = y + c_1, and also that \int 3\,dx = 3x + c_2, where we represent different constants by writing c_1 and c_2 to distinguish them from each other. Yes, all these subscripted constants can appear odd, but there will be times when keeping good records of new constants that come along while we're solving differential equations will be especially important. Think of it as keeping track of "+" or "-" signs in an equation - yes, it can be somewhat annoying at times, but clearly it's critical!



Finally, then, we have y + c_1 = 3x + c_2, i.e. y = 3x + (c_2 - c_1), which we simplify as y = 3x + c, since until an initial condition is given, we don't actually know the value of any of these constants, so we might as well lump them together under one name.

Separation of Variables Technique


Can we follow the same approach for the other differential equations? Well, usually no, we can't. The reason this worked so nicely in our first example was that the two sides of the equation were neatly separated for us to do each of the integrations. Suppose we tried the same thing for the differential equation in (4), i.e. P'(t) = k\,P(t). Here we could try to integrate both sides directly, so that we write
(5)    \int P'(t)\,dt = \int k\,P(t)\,dt

Although we can easily do the left side integral, and just end up with P(t) + c, integrating the right-hand side leads us in circles: how can we integrate a function such as P(t) if we don't know what the function is (finding the function was the whole point, after all!)? It appears that we need to take a different approach. Since we don't know what P(t) is, let's try isolating everything involving P(t) and its derivatives on one side of the equation:
(6)    \frac{1}{P(t)}\,P'(t) = k

Now let's try to integrate both sides with respect to t again:


(7)    \int \frac{1}{P(t)}\,P'(t)\,dt = \int k\,dt

The reason we're in better shape now is that the right-hand side integral is trivial, and the left-hand side integral can be taken care of almost directly with a quick substitution: if u = P(t), then du = P'(t)\,dt, and so (7) becomes

(8)    \int \frac{1}{u}\,du = \int k\,dt

so that

(9)    \ln|u| = \ln|P(t)| = kt + c_1

where we've gone ahead and combined constants on one side. Now solving for the function P(t), we find that

(10)    P(t) = e^{kt + c_1} = e^{c_1}\,e^{kt} = Ce^{kt}


where we've simply lumped all the unknown constants together as C.


Does this really satisfy the original differential equation (4)? Check and see that it does. If someone had added an initial condition such as P(0) = 1000, then this extra information would allow us to solve for the constant and find C = 1000 (remember that the other constant, k, would necessarily be known before we started, as it shows up in the original differential equation).
This technique of splitting up the differential equation by writing everything involving the
function and its derivatives on one side, everything else on the other, and then finally integrating
is called separation of variables. It's a really useful trick!
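
A minimal SymPy sketch of this check (SymPy is simply one convenient tool, not something the notes prescribe): it solves P'(t) = kP(t) with P(0) = 1000 and confirms the result really satisfies the ODE.

    import sympy as sp

    t, k = sp.symbols('t k', positive=True)
    P = sp.Function('P')

    # Separable ODE P'(t) = k*P(t) with initial condition P(0) = 1000
    ode = sp.Eq(P(t).diff(t), k * P(t))
    solution = sp.dsolve(ode, P(t), ics={P(0): 1000})

    print(solution)                       # Eq(P(t), 1000*exp(k*t))
    print(sp.checkodesol(ode, solution))  # (True, 0): it satisfies the ODE
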
One notational shortcut you can use as you go through the separation of variables technique is to write P'(t) as \frac{dP}{dt} in the original differential equation (4), which then becomes

(11)    \frac{dP}{dt} = kP

Next, when you separate the variables, treat \frac{dP}{dt} as if it were a fraction (you've probably seen this type of thing done before; remember, this is only a notational shortcut - the derivative \frac{dP}{dt} is one whole unit, of course, and not a fraction!). Thus to separate the variables in (11) we get

(12)    \frac{1}{P}\,dP = k\,dt

Now the integration step looks as if it is happening with respect to P on the left-hand side and with respect to t on the right-hand side:

(13)    \int \frac{1}{P}\,dP = \int k\,dt

and after you do these integrations, you're right back to equation (9), above.
When you can use the separation of variables technique, life is good! Unfortunately, as with the integration tricks you learned in single variable calculus, it's not the case that one trick will take care of every possible situation. In any case, it's still a really good trick to have up your sleeve!
Examples
Try to solve the following three differential equations using the separation of variables technique (where y is a function of x):

(a)    9yy' + 4x = 0

(b)    y' = 1 + y^2

(c)    y' = 2xy

Answers: you should get \frac{x^2}{9} + \frac{y^2}{4} = c,   y = \tan(x + c)   and   y = ce^{x^2}, respectively.
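
If you would like to check your work by machine, the short SymPy sketch below solves the same three equations; SymPy writes its arbitrary constant as C1 rather than c, but the solution families are the same ones given above.

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # (a) 9*y*y' + 4*x = 0,  (b) y' = 1 + y**2,  (c) y' = 2*x*y
    odes = [
        sp.Eq(9 * y(x) * y(x).diff(x) + 4 * x, 0),
        sp.Eq(y(x).diff(x), 1 + y(x) ** 2),
        sp.Eq(y(x).diff(x), 2 * x * y(x)),
    ]
    for ode in odes:
        print(sp.dsolve(ode, y(x)))   # prints the general solution(s) for each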

Introduction to Partial Differential Equations


Now we enter new territory. Having spent the semester studying functions of several variables,
and having worked through the concept of a partial derivative, we are in position to generalize
the concept of a differential equation to include equations that involve partial derivatives, not just
ordinary ones. Solutions to such equations will involve functions not just of one variable, but of
several variables. Such equations arise naturally, for example, when one is working with
situations that involve positions in space that vary over time. To model such a situation, one
needs to use functions that have several variables to keep track of the spatial dimensions, and an
additional variable for time.

Examples of some important PDEs:


(1)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}        One-dimensional wave equation

(2)    \frac{\partial u}{\partial t} = c^2\,\frac{\partial^2 u}{\partial x^2}        One-dimensional heat equation

(3)    \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0        Two-dimensional Laplace equation

(4)    \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x, y)        Two-dimensional Poisson equation

Note that for PDEs one typically uses some other function letter such as u instead of y, which
now quite often shows up as one of the variables involved in the multivariable function.
In general we can use the same terminology to describe PDEs as in the case of ODEs. For
starters, we will call any equation involving one or more partial derivatives of a multivariable
function a partial differential equation. The order of such an equation is the highest order
partial derivative that shows up in the equation. In addition, the equation is called linear if it is
of the first degree in the unknown function u and its partial derivatives, u_x, u_xx, u_y, etc. (this means that the highest power of the function, u, and its derivatives is just equal to one in each term in the equation, and that only one of them appears in each term). If each term in the equation involves either u or one of its partial derivatives, then the equation is classified as homogeneous.
Take a look at the list of PDEs above. Try to classify each one using the terminology given
above. Note that the f(x,y) function in the Poisson equation is just a function of the variables x
and y, it has nothing to do with u(x,y).
Answers: all of these PDEs are second order, and are linear. All are also homogeneous except
for the fourth one, the Poisson equation, as the f(x, y) term on the right-hand side doesn't involve u or any of its derivatives.
The reason for defining the classifications linear and homogeneous for PDEs is to bring up the principle of superposition. This excellent principle (which also shows up in the study of linear homogeneous ODEs) is useful exactly whenever one considers solutions to linear homogeneous PDEs. The idea is that if one has two functions, u_1 and u_2, that satisfy a linear homogeneous differential equation, then since taking the derivative of a sum of functions is the same as taking the sum of their derivatives, as long as the highest powers of the derivatives involved in the equation are one (i.e., that it's linear), and each term involves the function or one of its derivatives (i.e. that it's homogeneous), it's a straightforward exercise to see that the sum of u_1 and u_2 will also be a solution to the differential equation. In fact, so will any linear combination, au_1 + bu_2, where a and b are constants.
For instance, the two functions \cos(xy) and \sin(xy) are both solutions of the first-order linear homogeneous PDE:

(5)    x\,\frac{\partial u}{\partial x} - y\,\frac{\partial u}{\partial y} = 0

It's a simple exercise to check that \cos(xy) + \sin(xy) and 3\cos(xy) - 2\sin(xy) are also solutions to the same PDE (as will be any linear combination of \cos(xy) and \sin(xy)).
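
Here is a small SymPy sketch of exactly this check (the helper name pde below is just an illustrative label for the left-hand side of (5)); it confirms that cos(xy), sin(xy), and an arbitrary linear combination of them all satisfy the PDE.

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')

    def pde(u):
        # Left-hand side of (5): x*u_x - y*u_y
        return x * sp.diff(u, x) - y * sp.diff(u, y)

    u1 = sp.cos(x * y)
    u2 = sp.sin(x * y)

    print(sp.simplify(pde(u1)))               # 0
    print(sp.simplify(pde(u2)))               # 0
    print(sp.simplify(pde(a * u1 + b * u2)))  # 0: any linear combination works
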
This principle is extremely important, as it enables us to build up particular solutions out of infinite families of solutions through the use of Fourier series. This trick of superposition is examined in great detail at the end of Math 21b, and although we will mention it during the classes on PDEs, we won't have time in 21a to go into any specifics about the use of Fourier series in this way (so come back for more in Math 21b!).

Solving PDEs


Solving PDEs is considerably more difficult in general than solving ODEs, as the level of
complexity involved can be great. For instance the following seemingly completely unrelated
functions are all solutions to the two-dimensional Laplace equation:
(1)    x^2 - y^2,    e^x\cos(y)    and    \ln(x^2 + y^2)

You should check to see that these are all in fact solutions to the Laplace equation by doing the same thing you would do for an ODE solution, namely, calculate \frac{\partial^2 u}{\partial x^2} and \frac{\partial^2 u}{\partial y^2}, substitute them into the PDE and see if the two sides of the equation are identical.
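
A quick SymPy version of that check, if you would rather not differentiate by hand (a sketch only; the choice of library is mine, not the notes'):

    import sympy as sp

    x, y = sp.symbols('x y')

    # Three candidate solutions of the two-dimensional Laplace equation
    candidates = [x**2 - y**2, sp.exp(x) * sp.cos(y), sp.log(x**2 + y**2)]

    for u in candidates:
        laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
        print(u, '->', sp.simplify(laplacian))   # each prints 0
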
Now, there are certain types of PDEs for which finding the solutions is not too hard. For
instance, consider the first-order PDE
(2)    \frac{\partial u}{\partial x} = 3x^2 + xy^2

where u is assumed to be a two-variable function depending on x and y. How could you solve this PDE? Think about it: is there any reason that we couldn't just undo the partial derivative of u with respect to x by integrating with respect to x? No, so try it out! Here, note that we are given information about just one of the partial derivatives, so when we find a solution, there will be an unknown factor that's not necessarily just an arbitrary constant, but in fact is a completely arbitrary function depending on y.
To solve (2), then, integrate both sides of the equation with respect to x, as mentioned. Thus

(3)    \int \frac{\partial u}{\partial x}\,dx = \int (3x^2 + xy^2)\,dx

so that u(x, y) = x^3 + \frac{1}{2}x^2y^2 + F. What is F? Note that it could be any function such that when one takes its partial derivative with respect to x, the result is 0. This means that in the case of PDEs, the arbitrary constants that we ran into during the course of solving ODEs now take the form of whole functions. Here F is in fact any function, F(y), of y alone. To check that this is indeed a solution to the original PDE, it is easy enough to take the partial derivative of this u(x, y) function with respect to x and see that it indeed satisfies the PDE in (2).
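
The same verification in SymPy, with the arbitrary function represented by an unspecified F(y) (a sketch, assuming the general solution found above):

    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.Function('F')

    # General solution found above: u = x**3 + (1/2)*x**2*y**2 + F(y)
    u = x**3 + sp.Rational(1, 2) * x**2 * y**2 + F(y)

    # Its x-partial should reproduce the right-hand side of (2)
    print(sp.simplify(sp.diff(u, x) - (3 * x**2 + x * y**2)))   # 0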

Now consider a second-order PDE such as


(4)    \frac{\partial^2 u}{\partial x\,\partial y} = 5x + y^2

where u is again a two-variable function depending on x and y. We can solve this PDE by
integrating first with respect to x, to get to an intermediate PDE,

(5)    \frac{\partial u}{\partial y} = \frac{5}{2}x^2 + xy^2 + F(y)

where F(y) is a function of y alone. Now, integrating both sides with respect to y yields

(6)    u(x, y) = \frac{5}{2}x^2 y + \frac{1}{3}xy^3 + F(y) + G(x)

where now G(x) is a function of x alone, and the antiderivative of the arbitrary function of y has simply been relabeled F(y) again. (Note that we could have integrated with respect to y first, then x, and we would have ended up with the same result.) Thus, whereas in the ODE world, general solutions typically end up with as many arbitrary constants as the order of the original ODE, here in the PDE world one typically ends up with as many arbitrary functions in the general solution.
To end up with a specific solution, then, we will need to be given extra conditions that indicate
what these arbitrary functions are. Thus the initial conditions for PDEs will typically involve
knowing whole functions, not just constant values. We will also see that the initial conditions
that appeared in specific ODE situations have slightly more involved analogs in the PDE world,
namely there are often so-called boundary conditions as well as initial conditions to take into
consideration.

Examples of PDEs: the Wave Equation


For the rest of this introduction to PDEs we will explore PDEs representing some of the basic
types of linear second order PDEs: heat conduction and wave propagation. These represent two
entirely different physical processes: the process of diffusion, and the process of oscillation,
respectively. The field of PDEs is extremely large, and there is still a considerable amount of
undiscovered territory in it, but these two basic types of PDEs represent the ones that are in some
sense, the best understood and most developed of all of the PDEs. Although there is no one way
to solve all PDEs explicitly, the main technique that we will use to solve these various PDEs
represents one of the most important techniques used in the field of PDEs, namely separation of
variables (which we saw in a different form while studying ODEs). The essential manner of
using separation of variables is to try to break up a differential equation involving several partial
derivatives into a series of simpler, ordinary differential equations.
We start with the wave equation. This PDE governs a number of similarly related phenomena,
all involving oscillations. Situations described by the wave equation include acoustic waves,
such as vibrating guitar or violin strings, the vibrations of drums, waves in fluids, as well as
waves generated by electromagnetic fields, or any other physical situations involving
oscillations, such as vibrating power lines, or even suspension bridges in certain circumstances.
In short, this one type of PDE covers a lot of ground.


We begin by looking at the simplest example of a wave PDE, the one-dimensional wave
equation. To get at this PDE, we show how it arises as we try to model a simple vibrating string,
one that is held in place between two secure ends. For instance, consider plucking a guitar string
and watching (and listening) as it vibrates. As is typically the case with modeling, reality is quite
a bit more complex than we can deal with all at once, and so we need to make some simplifying
assumptions in order to get started.
First off, assume that the string is stretched so tightly that the only real force we need to consider is that due to the string's tension. This helps us out as we only have to deal with one force, i.e. we can safely ignore the effects of gravity if the tension force is orders of magnitude greater than that of gravity. Next we assume that the string is as uniform, or homogeneous, as possible, and that it is perfectly elastic. This makes it possible to predict the motion of the string more readily since we don't need to keep track of kinks that might occur if the string weren't uniform. Finally, we'll assume that the vibrations are pretty minimal in relation to the overall length of the string, i.e. in terms of displacement, the amount that the string bounces up and down is pretty small. The reason this will help us out is that we can concentrate on the simple up and down motion of the string, and not worry about any possible side to side motion that might occur.
Now consider a string of a certain length, l, that's held in place at both ends. First off, what exactly are we trying to do in modeling the string's vibrations? What kind of function do we want to solve for to keep track of the motion of the string? What will it be a function of? Clearly if the string is vibrating, then its motion changes over time, so time is one variable we will want to keep track of. To keep track of the actual motion of the string we will need to have a function that tells us the shape of the string at any particular time. One way we can do this is by looking for a function that tells us the vertical displacement (positive up, negative down) that exists at any point along the string, i.e. how far away any particular point on the string is from the undisturbed resting position of the string, which is just a straight line. Thus, we would like to find a function u(x, t) of two variables. The variable x can measure distance along the string, measured away from one chosen end of the string (i.e. x = 0 is one of the tied-down endpoints of the string), and t stands for time. The function u(x, t) then gives the vertical displacement of the string at any point, x, along the string, at any particular time t.
As we have seen time and time again in calculus, a good way to start when we would like to
study a surface or a curve or arc is to break it up into a series of very small pieces. At the end of
our study of one little segment of the vibrating string, we will think about what happens as the
length of the little segment goes to zero, similar to the type of limiting process we've seen as we
progress from Riemann Sums to integrals.
Suppose we were to examine a very small length of the vibrating string as shown in figure 1:


Now what? How can we figure out what is happening to the vibrating string? Our best hope is to follow the standard path of modeling physical situations by studying all of the forces involved and then turning to Newton's classic equation F = ma. It's not a surprise that this will help us, as we have already pointed out that this equation is itself a differential equation (acceleration being the second derivative of position with respect to time). Ultimately, all we will be doing is substituting the particulars of our situation into this basic differential equation.
Because of our first assumption, there is only one force to keep track of in our situation, that of the string tension. Because of our second assumption, that the string is perfectly elastic with no kinks, we can assume that the force due to the tension of the string is tangential to the ends of the small string segment, and so we need to keep track of the string tension forces T_1 and T_2 at each end of the string segment. Assuming that the string is only vibrating up and down means that the horizontal components of the tension forces on each end of the small segment must perfectly balance each other out. Thus

(1)    T_1\cos\alpha = T_2\cos\beta = T

where T is a string tension constant associated with the particular set-up (depending, for instance, on how tightly strung the guitar string is), and \alpha and \beta are the angles the string makes with the horizontal at the two ends of the segment. Then to keep track of all of the forces involved means just summing up the vertical components of T_1 and T_2. This is equal to

(2)    T_2\sin\beta - T_1\sin\alpha

where we keep track of the fact that the forces are in opposite directions in our diagram with the appropriate use of the minus sign. That's it for Force; now on to Mass and Acceleration.
The mass of the string segment is simple: it is just \rho\,\Delta x, where \rho is the mass per unit length of the string, and \Delta x is (approximately) the length of the little segment. Acceleration is the second derivative of position with respect to time. Considering that the position of the string segment at a particular time is just u(x, t), the function we're trying to find, then the acceleration for the little segment is \frac{\partial^2 u}{\partial t^2} (computed at some point between a and a + \Delta x). Putting all of this together, we find that:


(3)    T_2\sin\beta - T_1\sin\alpha = \rho\,\Delta x\,\frac{\partial^2 u}{\partial t^2}

Now what? It appears that we've got nowhere to go with this; it looks pretty unwieldy as it stands. However, be sneaky: try dividing both sides by the various respective equal parts written down in equation (1):

(4)    \frac{T_2\sin\beta}{T_2\cos\beta} - \frac{T_1\sin\alpha}{T_1\cos\alpha} = \frac{\rho\,\Delta x}{T}\,\frac{\partial^2 u}{\partial t^2}

or more simply:

(5)    \tan\beta - \tan\alpha = \frac{\rho\,\Delta x}{T}\,\frac{\partial^2 u}{\partial t^2}

Now, finally, note that \tan\alpha is equal to the slope at the left-hand end of the string segment, which is just \frac{\partial u}{\partial x} evaluated at a, i.e. \frac{\partial u}{\partial x}(a, t), and similarly \tan\beta equals \frac{\partial u}{\partial x}(a + \Delta x, t), so (5) becomes

(6)    \frac{\partial u}{\partial x}(a + \Delta x, t) - \frac{\partial u}{\partial x}(a, t) = \frac{\rho\,\Delta x}{T}\,\frac{\partial^2 u}{\partial t^2}

or better yet, dividing both sides by \Delta x:


(7)    \frac{1}{\Delta x}\left[\frac{\partial u}{\partial x}(a + \Delta x, t) - \frac{\partial u}{\partial x}(a, t)\right] = \frac{\rho}{T}\,\frac{\partial^2 u}{\partial t^2}

Now we're ready for the final push. Let's go back to the original idea: start by breaking up the vibrating string into little segments, examine each such segment using Newton's F = ma equation, and finally figure out what happens as we let the length of the little string segment dwindle to zero, i.e. examine the result as \Delta x goes to 0. Do you see any limit definitions of derivatives kicking around in equation (7)? As \Delta x goes to 0, the left-hand side of the equation is in fact just equal to \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\right) = \frac{\partial^2 u}{\partial x^2}, so the whole thing boils down to:

(8)    \frac{\partial^2 u}{\partial x^2} = \frac{\rho}{T}\,\frac{\partial^2 u}{\partial t^2}

which is often written as


(9)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

by bringing in a new constant c^2 = \frac{T}{\rho} (typically written as c^2, to show that it's a positive constant).
This equation, which governs the motion of the vibrating string over time, is called the one-dimensional wave equation. It is clearly a second order PDE, and it's linear and homogeneous.

Solution of the Wave Equation by Separation of Variables


There are several approaches to solving the wave equation. The first one we will work with, using a technique called separation of variables, again demonstrates one of the most widely used solution techniques for PDEs. The idea behind it is to split up the original PDE into a series of simpler ODEs, each of which we should be able to solve readily using tricks already learned. The second technique, which we will see in the next section, uses a transformation trick that also reduces the complexity of the original PDE, but in a very different manner. This second solution is due to Jean Le Rond D'Alembert (an 18th century French mathematician), and is called D'Alembert's solution as a result.
First, note that for a specific wave equation situation, in addition to the actual PDE, we will also have boundary conditions arising from the fact that the endpoints of the string are attached solidly: at the left end of the string, where x = 0, and at the other end of the string, which we suppose has overall length l. Let's start the process of solving the PDE by first figuring out what these boundary conditions imply for the solution function, u(x, t).
Answer: for all values of t, the time variable, it must be the case that the vertical displacement at
the endpoints is 0, since they don't move up and down at all, so that
(1)    u(0, t) = 0   and   u(l, t) = 0   for all values of t

are the boundary conditions for our wave equation. These will be key when we later on need to
sort through possible solution functions for functions that satisfy our particular vibrating string
set-up.
You might also note that we probably need to specify what the shape of the string is right when time t = 0, and you're right - to come up with a particular solution function, we would need to know u(x, 0). In fact we would also need to know the initial velocity of the string, which is just u_t(x, 0). These two requirements are called the initial conditions for the wave equation, and are also necessary to specify a particular vibrating string solution. For instance, as the simplest example of initial conditions, if no one is plucking the string, and it's perfectly flat to start with, then the initial conditions would just be u(x, 0) = 0 (a perfectly flat string) with initial velocity u_t(x, 0) = 0. Here, then, the solution function is pretty unenlightening - it's just u(x, t) = 0, i.e. no movement of the string through time.

To start the separation of variables technique we make the key assumption that whatever the
solution function is, that it can be written as the product of two independent functions, each one
of which depends on just one of the two variables, x or t. Thus, imagine that the solution
function, u ( x, t ) can be written as
(2)    u(x, t) = F(x)\,G(t)

where F and G are single-variable functions of x and t respectively. Differentiating this equation for u(x, t) twice with respect to each variable yields

(3)    \frac{\partial^2 u}{\partial t^2} = F(x)\,G''(t)   and   \frac{\partial^2 u}{\partial x^2} = F''(x)\,G(t)

Thus when we substitute these two equations back into the original wave equation, which is

(4)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

then we get

(5)    F(x)\,G''(t) = c^2\,F''(x)\,G(t)

Here's where our separation of variables assumption pays off, because now if we separate the equation above so that the terms involving F and its second derivative are on one side, and likewise the terms involving G and its derivatives are on the other, then we get

(6)    \frac{G''(t)}{c^2\,G(t)} = \frac{F''(x)}{F(x)}

Now we have an equality where the left-hand side just depends on the variable t, and the right-hand side just depends on x. Here comes the critical observation: how can two functions, one just depending on t, and one just on x, be equal for all possible values of t and x? The answer is that they must each be constant, for otherwise the equality could not possibly hold for all possible combinations of t and x. Aha! Thus we have

(7)    \frac{G''(t)}{c^2\,G(t)} = \frac{F''(x)}{F(x)} = k

where k is a constant. First let's examine the possible cases for k.



Case One: k = 0
Suppose k equals 0. Then the equations in (7) can be rewritten as
(8)    G''(t) = 0\cdot c^2\,G(t) = 0   and   F''(x) = 0\cdot F(x) = 0

yielding with very little effort two solution functions for F and G:

(9)    G(t) = at + b   and   F(x) = px + r

where a, b, p and r are constants (note how easy it is to solve such simple ODEs versus trying to
deal with two variables at once, hence the power of the separation of variables approach).
Putting these back together to form u(x, t) = F(x)G(t), the next thing we need to do is to note what the boundary conditions from equation (1) force upon us, namely that

(10)    u(0, t) = F(0)G(t) = 0   and   u(l, t) = F(l)G(t) = 0   for all values of t

Unless G(t) = 0 (which would then mean that u(x, t) = 0, giving us the very dull solution equivalent to a flat, unplucked string), this implies that

(11)    F(0) = F(l) = 0.

But how can a linear function have two roots? Only by being identically equal to 0, thus it must be the case that F(x) = 0. Sigh, then we still get that u(x, t) = 0, and we end up with the dull solution again, the only possible solution if we start with k = 0.
So, let's see what happens if

Case Two: k > 0


So now if k is positive, then from equation (7) we again start with

(12)    G''(t) = kc^2\,G(t)

and

(13)    F''(x) = k\,F(x)

Try to solve these two ordinary differential equations. You are looking for functions whose second derivatives give back the original function, multiplied by a positive constant. Possible candidate solutions to consider include the exponential and sine and cosine functions. Of course, the sine and cosine functions don't work here, as their second derivatives are the negatives of the original functions, so we are left with the exponential functions.
Let's take a look at (13) more closely first, as we already know that the boundary conditions imply conditions specifically for F(x), i.e. the conditions in (11). Solutions for F(x) include anything of the form

(14)    F(x) = Ae^{\lambda x}

where \lambda^2 = k and A is a constant. Since \lambda could be positive or negative, and since solutions to (13) can be added together to form more solutions (note (13) is an example of a second order linear homogeneous ODE, so that the superposition principle holds), the general solution for (13) is

(14)    F(x) = Ae^{\lambda x} + Be^{-\lambda x}

where now A and B are constants and \lambda = \sqrt{k}. Knowing that F(0) = F(l) = 0, unfortunately the only possible values of A and B that work are A = B = 0, i.e. F(x) = 0. Thus, once again we end up with u(x, t) = F(x)G(t) = 0\cdot G(t) = 0, i.e. the dull solution once more. Now we place all of our hope on the third and final possibility for k, namely

Case Three: k < 0


So now we go back to equations (12) and (13) again, but now working with k as a negative constant. So, again we have

(12)    G''(t) = kc^2\,G(t)

and

(13)    F''(x) = k\,F(x)

Exponential functions won't satisfy these two ODEs, but now the sine and cosine functions will. The general solution function for (13) is now

(15)    F(x) = A\cos(\omega x) + B\sin(\omega x)

where again A and B are constants and now we have \omega^2 = -k. Again, we consider the boundary conditions that specified that F(0) = F(l) = 0. Substituting in 0 for x in (15) leads to

(16)    F(0) = A\cos(0) + B\sin(0) = A = 0

so that F(x) = B\sin(\omega x). Next, consider F(l) = B\sin(\omega l) = 0. We can assume that B isn't equal to 0, otherwise F(x) = 0, which would mean that u(x, t) = F(x)G(t) = 0\cdot G(t) = 0 - again,
the trivial unplucked string solution. With B \ne 0, it must be the case that \sin(\omega l) = 0 in order to have B\sin(\omega l) = 0. The only way that this can happen is for \omega l to be a multiple of \pi. This means that

(17)    \omega l = n\pi   or   \omega = \frac{n\pi}{l}   (where n is an integer)

This means that there is an infinite set of solutions to consider (letting the constant B be equal to 1 for now), one for each possible integer n:

(18)    F(x) = \sin\!\left(\frac{n\pi}{l}x\right)

Well, we would be done at this point, except that the solution function is u(x, t) = F(x)G(t), and we've neglected to figure out what the other function, G(t), equals. So, we return to the ODE in (12):

(12)    G''(t) = kc^2\,G(t)

where, again, we are working with k, a negative number. From the solution for F(x) we have determined that the only possible values of k that end up leading to non-trivial solutions are k = -\left(\frac{n\pi}{l}\right)^2 for n some integer. Again, we get an infinite set of solutions for (12) that can be written in the form

(19)    G(t) = C\cos(\lambda_n t) + D\sin(\lambda_n t)

where C and D are constants and \lambda_n = c\sqrt{-k} = \frac{cn\pi}{l}, where n is the same integer that showed up in the solution for F(x) in (18) (we're labeling \lambda with a subscript n to identify which value of n is used).

Now we really are done, for all we have to do is to drop our solutions for F(x) and G(t) into u(x, t) = F(x)G(t), and the result is

(20)    u_n(x, t) = F(x)G(t) = \left[C\cos(\lambda_n t) + D\sin(\lambda_n t)\right]\sin\!\left(\frac{n\pi}{l}x\right)

where the integer n that was used is identified by the subscript in u_n(x, t) and \lambda_n, and C and D are arbitrary constants.


At this point you should be in the habit of immediately checking solutions to differential equations. Is (20) really a solution of the original wave equation

\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

and does it actually satisfy the boundary conditions u(0, t) = 0 and u(l, t) = 0 for all values of t? Check this now - really, don't read any more until you're completely sure that this general solution works!
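
If you would like a machine to do the differentiating, here is a SymPy sketch of that check (n is treated as an arbitrary positive integer and lam stands for \lambda_n = cn\pi/l):

    import sympy as sp

    x, t, c, l, C, D = sp.symbols('x t c l C D', positive=True)
    n = sp.symbols('n', integer=True, positive=True)

    lam = c * n * sp.pi / l
    u = (C * sp.cos(lam * t) + D * sp.sin(lam * t)) * sp.sin(n * sp.pi * x / l)

    # Wave equation residual u_tt - c**2 * u_xx should vanish identically
    print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))   # 0

    # Boundary conditions u(0, t) = 0 and u(l, t) = 0
    print(u.subs(x, 0), sp.simplify(u.subs(x, l)))                   # 0 0
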

Okay, now what does it mean?


The solution given in the last section really does satisfy the one-dimensional wave equation. To think about what the solutions look like, you could graph a particular solution function for varying values of time, t, and then examine how the string vibrates over time for solution functions with different values of n and constants C and D. However, as the functions involved are fairly simple, it's possible to make sense of the solution functions u_n(x, t) with just a little more effort.
For instance, over time, we can see that the G(t) = C\cos(\lambda_n t) + D\sin(\lambda_n t) part of the function is periodic with period equal to \frac{2\pi}{\lambda_n}. This means that it has a frequency equal to \frac{\lambda_n}{2\pi} cycles per unit time. In music one cycle per second is referred to as one hertz. Middle C on a piano is
typically 263 hertz (i.e. when someone presses the middle C key, a piano string is struck that
vibrates predominantly at 263 cycles per second), and the A above middle C is 440 hertz. The
solution function when n is chosen to equal 1 is called the fundamental mode (for a particular
length string under a specific tension). The other normal modes are represented by different
values of n. For instance one gets the 2nd and 3rd normal modes when n is selected to equal 2 and
3, respectively. The fundamental mode, when n equals 1 represents the simplest possible
oscillation pattern of the string, when the whole string swings back and forth in one wide swing.
In this fundamental mode the widest vibration displacement occurs in the center of the string (see
the figures below).



Thus suppose a string of length l and string mass per unit length \rho is tightened so that the value of T, the string tension, along with the other constants, makes the value of \frac{\lambda_1}{2\pi} = \frac{1}{2l}\sqrt{\frac{T}{\rho}} equal to 440. Then if the string is made to vibrate by striking or plucking it, its fundamental (lowest) tone would be the A above middle C.
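
As a small numerical illustration (the values of T, rho and l below are made up purely for the example), the normal-mode frequencies f_n = \frac{n}{2l}\sqrt{T/\rho} can be computed directly:

    import math

    T = 70.0       # string tension in newtons (illustrative value)
    rho = 0.0004   # mass per unit length in kg/m (illustrative value)
    l = 0.475      # string length in metres (illustrative value)

    c = math.sqrt(T / rho)           # wave speed
    for n in range(1, 5):
        f_n = n * c / (2 * l)        # frequency of the n-th normal mode, in hertz
        print(n, round(f_n, 1))      # n = 1 gives roughly 440 Hz, then 880, 1320, 1760
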
Now think about how different values of n affect the other part of u_n(x, t) = F(x)G(t), namely F(x) = \sin\!\left(\frac{n\pi}{l}x\right). Since the \sin\!\left(\frac{n\pi}{l}x\right) function vanishes whenever x equals a multiple of \frac{l}{n}, selecting different values of n higher than 1 has the effect of identifying which parts of the vibrating string do not move. This has the effect musically of producing overtones, which are musically pleasing higher tones relative to the fundamental mode tone. For instance picking n = 2 produces a vibrating string that appears to have two separate vibrating sections, with the middle of the string standing still. This mode produces a tone exactly an octave above the fundamental mode. Choosing n = 3 produces the 3rd normal mode that sounds like an octave and a fifth above the original fundamental mode tone, the 4th normal mode sounds an octave plus a fifth plus a major third above the fundamental tone, and so on.

It is this series of fundamental mode tones that gives the basis for much of the tonal scale used in
Western music, which is based on the premise that the lower the fundamental mode differences,
down to octaves and fifths, the more pleasing the relative sounds. Think about that the next time
you listen to some Dave Matthews!
Finally note that in real life, any time a guitar or violin string is caused to vibrate, the result is
typically a combination of normal modes, so that the vibrating string produces sounds from
many different overtones. The particular combination resulting from a particular set-up, the type
of string used, the way the string is plucked or bowed, produces the characteristic tonal quality
associated with that instrument. The way in which these different modes are combined makes it
possible to produce solutions to the wave equation with different initial shapes and initial
velocities of the string. This process of combination involves Fourier Series which will be
covered at the end of Math 21b (come back to see it in action!).
Finally, finally, note that the solutions to the wave equations also show up when one considers
acoustic waves associated with columns of air vibrating inside pipes, such as in organ pipes,
trombones, saxophones or any other wind instruments (including, although you might not have
thought of it in this way, your own voice, which basically consists of a vibrating wind-pipe, i.e.
your throat!). Thus the same considerations in terms of fundamental tones, overtones and the
characteristic tonal quality of an instrument resulting from solutions to the wave equation also
occur for any of these instruments as well. So, the wave equation gets around quite a bit
musically!


D'Alembert's Solution of the Wave Equation


As was mentioned previously, there is another way to solve the wave equation, found by Jean Le Rond D'Alembert in the 18th century. In the last section on the solution to the wave equation using the separation of variables technique, you probably noticed that although we made use of the boundary conditions in finding the solutions to the PDE, we glossed over the issue of the initial conditions, until the very end when we claimed that one could make use of something called Fourier series to build up combinations of solutions. If you recall, being given specific initial conditions meant being given both the shape of the string at time t = 0, i.e. the function u(x, 0) = f(x), as well as the initial velocity, u_t(x, 0) = g(x) (note that these two initial condition functions are functions of x alone, as t is set equal to 0). In the separation of variables solution, we ended up with an infinite set, or family, of solutions, u_n(x, t), that we said could be combined in such a way as to satisfy any reasonable initial conditions.
In using D'Alembert's approach to solving the same wave equation, we don't need to use Fourier series to build up the solution from the initial conditions. Instead, we are able to explicitly construct solutions to the wave equation for any (reasonable) given initial condition functions u(x, 0) = f(x) and u_t(x, 0) = g(x).
The technique involves changing the original PDE into one that can be solved by a series of two simple single-variable integrations, by using a special transformation of variables. Suppose that instead of thinking of the original PDE

(1)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

in terms of the variables x and t, we rewrite it to reflect two new variables

(2)    v = x + ct   and   z = x - ct

This then means that u, originally a function of x and t, now becomes a function of v and z instead. How does this work? Note that we can solve for x and t in (2), so that

(3)    x = \frac{1}{2}(v + z)   and   t = \frac{1}{2c}(v - z)

Now using the chain rule for multivariable functions, you know that

(4)    \frac{\partial u}{\partial t} = \frac{\partial u}{\partial v}\frac{\partial v}{\partial t} + \frac{\partial u}{\partial z}\frac{\partial z}{\partial t} = c\,\frac{\partial u}{\partial v} - c\,\frac{\partial u}{\partial z}

since \frac{\partial v}{\partial t} = c and \frac{\partial z}{\partial t} = -c, and that similarly

(5)    \frac{\partial u}{\partial x} = \frac{\partial u}{\partial v}\frac{\partial v}{\partial x} + \frac{\partial u}{\partial z}\frac{\partial z}{\partial x} = \frac{\partial u}{\partial v} + \frac{\partial u}{\partial z}

since \frac{\partial v}{\partial x} = 1 and \frac{\partial z}{\partial x} = 1. Working up to second derivatives, another, more involved application of the chain rule yields that

(6)    \frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial t}\left(c\,\frac{\partial u}{\partial v} - c\,\frac{\partial u}{\partial z}\right) = c\left(\frac{\partial^2 u}{\partial v^2}\frac{\partial v}{\partial t} + \frac{\partial^2 u}{\partial z\,\partial v}\frac{\partial z}{\partial t}\right) - c\left(\frac{\partial^2 u}{\partial v\,\partial z}\frac{\partial v}{\partial t} + \frac{\partial^2 u}{\partial z^2}\frac{\partial z}{\partial t}\right) = c^2\,\frac{\partial^2 u}{\partial v^2} - 2c^2\,\frac{\partial^2 u}{\partial z\,\partial v} + c^2\,\frac{\partial^2 u}{\partial z^2}

Another almost identical computation using the chain rule results in the fact that

(7)    \frac{\partial^2 u}{\partial x^2} = \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial v} + \frac{\partial u}{\partial z}\right) = \frac{\partial^2 u}{\partial v^2} + 2\,\frac{\partial^2 u}{\partial z\,\partial v} + \frac{\partial^2 u}{\partial z^2}

Now we revisit the original wave equation

(8)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

and substitute in what we have calculated for \frac{\partial^2 u}{\partial t^2} and \frac{\partial^2 u}{\partial x^2} in terms of \frac{\partial^2 u}{\partial v^2}, \frac{\partial^2 u}{\partial z\,\partial v} and \frac{\partial^2 u}{\partial z^2}. Doing this gives the following equation, ripe with cancellations:

(9)    c^2\left(\frac{\partial^2 u}{\partial v^2} - 2\,\frac{\partial^2 u}{\partial z\,\partial v} + \frac{\partial^2 u}{\partial z^2}\right) = c^2\left(\frac{\partial^2 u}{\partial v^2} + 2\,\frac{\partial^2 u}{\partial z\,\partial v} + \frac{\partial^2 u}{\partial z^2}\right)

Dividing by c^2 and canceling the terms involving \frac{\partial^2 u}{\partial v^2} and \frac{\partial^2 u}{\partial z^2} reduces this series of equations to

(10)    -2\,\frac{\partial^2 u}{\partial z\,\partial v} = 2\,\frac{\partial^2 u}{\partial z\,\partial v}

which means that

(11)    \frac{\partial^2 u}{\partial z\,\partial v} = 0

So what, you might well ask - after all, we still have a second order PDE, and there are still several variables involved. But wait, think about what (11) implies. Picture (11) as giving you information about the partial derivative of a partial derivative:

(12)    \frac{\partial}{\partial z}\left(\frac{\partial u}{\partial v}\right) = 0

In this form, this implies that \frac{\partial u}{\partial v}, considered as a function of z and v, is constant in terms of the variable z, so that \frac{\partial u}{\partial v} can only depend on v, i.e.

(13)    \frac{\partial u}{\partial v} = M(v)

Now, integrating this equation with respect to v yields that

(14)    u(v, z) = \int M(v)\,dv

This, as an indefinite integral, results in a constant of integration, which in this case is just constant from the standpoint of the variable v. Thus, it can be any arbitrary function of z alone, so that actually

(15)    u(v, z) = \int M(v)\,dv + N(z) = P(v) + N(z)

where P(v) is a function of v alone, and N(z) is a function of z alone, as the notation indicates. Substituting back the original change of variable equations for v and z in (2) yields that

(16)    u(x, t) = P(x + ct) + N(x - ct)

where P and N are arbitrary single-variable functions. This is called D'Alembert's solution to the wave equation. Except for the somewhat annoying but easy enough chain rule computations, this was a pretty straightforward solution technique. The reason it worked so well in this case
was that the change of variables used in (2) was carefully selected so as to turn the original PDE into one in which the variables basically had no interaction, so that the original second order PDE could be solved by a series of two single-variable integrations, which was easy to do.
Check out that D'Alembert's solution really works. According to this solution, you can pick any functions for P and N, such as P(v) = v^2 and N(v) = v^2. Then

(17)    u(x, t) = (x + ct)^2 + (x - ct)^2 = 2x^2 + 2c^2t^2

Now check that

(18)    \frac{\partial^2 u}{\partial t^2} = 4c^2

and that

(19)    \frac{\partial^2 u}{\partial x^2} = 4

so that indeed

(20)    \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}

and so this is in fact a solution of the original wave equation.
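
In fact SymPy will happily confirm this for completely arbitrary (twice-differentiable) P and N, not just the v^2 example (a sketch, with SymPy being my choice of tool):

    import sympy as sp

    x, t, c = sp.symbols('x t c')
    P, N = sp.Function('P'), sp.Function('N')

    # D'Alembert's general form u(x, t) = P(x + c*t) + N(x - c*t)
    u = P(x + c * t) + N(x - c * t)

    residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
    print(sp.simplify(residual))   # 0 for any P and N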


This same transformation trick can be used to solve a fairly wide range of PDEs. For instance, one can solve the equation

(21)    \frac{\partial^2 u}{\partial x\,\partial y} = \frac{\partial^2 u}{\partial y^2}

by using the transformation of variables

(22)    v = x   and   z = x + y

(Try it out! You should get that u(x, y) = P(x) + N(x + y), with arbitrary functions P and N.)
Note that in our solution (16) to the wave equation, nothing has been specified about the initial and boundary conditions yet, and we said we would take care of them this time around. So now we take a look at what these conditions imply for our choices for the two functions P and N.
If we were given an initial shape function u(x, 0) = f(x) along with an initial velocity function u_t(x, 0) = g(x), then we can match up these conditions with our solution by simply substituting t = 0 into (16) and following along. We start first with a simplified set-up, where we assume that we are given the initial displacement function u(x, 0) = f(x), and that the initial velocity function g(x) is equal to 0 (i.e. as if someone stretched the string and simply released it without imparting any extra velocity over the string tension alone).
Now the first initial condition implies that

(23)    u(x, 0) = P(x + c\cdot 0) + N(x - c\cdot 0) = P(x) + N(x) = f(x)

We next figure out what choosing the second initial condition implies. By working with an initial condition that u_t(x, 0) = g(x) = 0, we see that by using the chain rule again on the functions P and N,

(24)    u_t(x, t) = \frac{\partial}{\partial t}\left[P(x + ct) + N(x - ct)\right] = cP'(x + ct) - cN'(x - ct)

(remember that P and N are just single-variable functions, so the derivative indicated is just a simple single-variable derivative with respect to their input). Thus in the case where u_t(x, 0) = g(x) = 0, then

(25)    cP'(x + ct) - cN'(x - ct) = 0

Dividing out the constant factor c and substituting in t = 0 gives

(26)    P'(x) = N'(x)

and so P(x) = k + N(x) for some constant k. Combining this with the fact that P(x) + N(x) = f(x) means that 2P(x) - k = f(x), so that P(x) = \frac{f(x) + k}{2} and likewise N(x) = \frac{f(x) - k}{2}. Combining these leads to the solution
(27)    u(x, t) = P(x + ct) + N(x - ct) = \frac{1}{2}\left[f(x + ct) + f(x - ct)\right]

To make sure that the boundary conditions are met, we need

(28)    u(0, t) = 0   and   u(l, t) = 0   for all values of t

The first boundary condition implies that

(29)    u(0, t) = \frac{1}{2}\left[f(ct) + f(-ct)\right] = 0

or

(30)    f(-ct) = -f(ct)

so that to meet this condition, the initial condition function f must be selected to be an odd function. The second boundary condition, that u(l, t) = 0, implies

(31)    u(l, t) = \frac{1}{2}\left[f(l + ct) + f(l - ct)\right] = 0

so that f(l + ct) = -f(l - ct). Next, since we've seen that f has to be an odd function, -f(l - ct) = f(ct - l). Putting this all together, this means that

(32)    f(ct + l) = f(ct - l)   for all values of t

which means that f must have period 2l, since the inputs vary by that amount. Remember that this just means the function repeats itself every time 2l is added to the input, the same way that the sine and cosine functions have period 2\pi.
What happens if the initial velocity isn't equal to 0? Thus suppose u_t(x, 0) = g(x) \ne 0. Tracing through the same types of arguments as the above leads to the solution function

(33)    u(x, t) = \frac{1}{2}\left[f(x + ct) + f(x - ct)\right] + \frac{1}{2c}\int_{x - ct}^{x + ct} g(s)\,ds

In the next installment of this introduction to PDEs we will turn to the Heat Equation.

Introduction to the Heat Equation


For this next PDE, we create a mathematical model of how heat spreads, or diffuses, through an object, such as a metal rod or a body of water. To do this we take advantage of our knowledge of vector calculus and the divergence theorem to set up a PDE that models such a situation. Knowledge of this particular PDE can be used to model situations involving many sorts of diffusion processes, not just heat. For instance, the PDE that we will derive can be used to model the spread of a drug in an organism, or the diffusion of pollutants in a water supply.
The key to this approach will be the observation that heat tends to flow in the direction of
decreasing temperature. The bigger the difference in temperature, the faster the heat flow, or
heat loss (remember Newton's heating and cooling differential equation). Thus if you leave a hot
drink outside on a freezing cold day, then after ten minutes the drink will be a lot colder than if
you'd kept the drink inside in a warm room - this seems pretty obvious!
If the function u(x, y, z, t) gives the temperature at time t at any point (x, y, z) in an object, then in mathematical terms the direction of fastest decreasing temperature away from a specific point (x, y, z) is just the negative of the gradient of u (calculated at the point (x, y, z) and a particular time t). Note that here we are considering the gradient of u as just being with respect to the spatial coordinates x, y and z, so that we write
(1)    \mathrm{grad}(u) = \nabla u = \frac{\partial u}{\partial x}\,\mathbf{i} + \frac{\partial u}{\partial y}\,\mathbf{j} + \frac{\partial u}{\partial z}\,\mathbf{k}

Thus the rate at which heat flows away from (or toward) the point is proportional to this gradient, so that if \mathbf{F} is the vector field that gives the velocity of the heat flow, then

(2)    \mathbf{F} = -k\,\mathrm{grad}(u)

(negative as the flow is in the direction of fastest decreasing temperature).


The constant, k, is called the thermal conductivity of the object, and it determines the rate at
which heat is passed through the material that the object is made of. Some metals, for instance,
conduct heat quite rapidly, and so have high values for k, while other materials act more like
insulators, with a much lower value of k as a result.
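To connect (2) to something computable: given samples of a temperature function u on a grid, the heat-flow field can be approximated numerically. The following Python sketch is only an illustration (the temperature function and the value of k are made up), using finite differences via numpy's gradient.

import numpy as np

k = 0.5                                            # sample thermal conductivity

# sample a temperature function u(x, y, z) on a small grid
xs = np.linspace(0.0, 1.0, 21)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
U = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2 + (Z - 0.5) ** 2))  # hot spot at the center

# grad(u) approximated by central differences; F = -k * grad(u)
dU_dx, dU_dy, dU_dz = np.gradient(U, xs, xs, xs)
F = (-k * dU_dx, -k * dU_dy, -k * dU_dz)

# at a point to the right of the hot spot, heat should flow in the +x direction
i, j, m = 15, 10, 10
print(F[0][i, j, m] > 0)                           # True: flux points away from the hot spot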
Now suppose we know the temperature function, u( x, y, z, t ) , for an object, but just at an initial
time, when t = 0, i.e. we just know u( x, y, z,0) . Suppose we also know the thermal conductivity
of the material. What we would like to do is to figure out how the temperature of the object,
u( x, y, z, t ) , changes over time. The goal is to use the observation about the rate of heat flow to
set up a PDE involving the function u( x, y, z, t ) (i.e. the Heat Equation), and then solve the PDE
to find u( x, y, z, t ) .

Deriving the Heat Equation using the Divergence Theorem


To get to a PDE, the easiest route to take is to invoke something called the Divergence Theorem.
As this is a multivariable calculus topic that we haven't even gotten to at this point in the
semester, don't worry! (It will be covered in the vector calculus section at the end of the course,
in Chapter 13 of Stewart.) It's such a neat application of the Divergence Theorem,
however, that at this point you should just skip to the end of this short section and take it on faith
that we will get a PDE in this situation (i.e. skip to equation (10) below). Then be sure to come
back and read through this section once you've learned about the divergence theorem.
First notice that if E is a region in the body of interest (the metal bar, the pool of water, etc.), then the
amount of heat that leaves E per unit time is simply a surface integral. More exactly, it is the
flux integral over the surface of E of the heat flow vector field, F. Recall that F is the vector
field that gives the velocity of the heat flow - it's the one we wrote down as F = -k∇u in the
previous section. Thus the amount of heat leaving E per unit time is just

(1)   ∬_S F · dS

where S is the surface of E. But wait, we have the highly convenient divergence theorem that
tells us that

(2)   ∬_S F · dS = -k ∭_E div(grad(u)) dV

Okay, now what is div(grad(u))? Given that

(3)   grad(u) = ∇u = (∂u/∂x) i + (∂u/∂y) j + (∂u/∂z) k

then div(grad(u)) is just equal to

(4)   div(grad(u)) = ∇ · ∇u = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²

Incidentally, this combination of divergence and gradient is used so often that it's given a name,
the Laplacian. The notation div(grad(u)) = ∇ · ∇u is usually shortened to simply ∇²u. So
we could rewrite (2), the heat leaving region E per unit time, as

(5)   ∬_S F · dS = -k ∭_E (∇²u) dV
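If you want to see (3)-(4) verified symbolically for a concrete function, here is a short sympy sketch (an illustration only; the sample u is an arbitrary choice, not one from the handout). It computes div(grad(u)) component by component and compares it with the sum of the three second partial derivatives.

import sympy as sp

x, y, z = sp.symbols("x y z")
u = x**2 * sp.sin(y) + sp.exp(z) * y          # an arbitrary sample temperature function

grad_u = [sp.diff(u, v) for v in (x, y, z)]                          # grad(u)
div_grad_u = sum(sp.diff(g, v) for g, v in zip(grad_u, (x, y, z)))   # div(grad(u))
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)   # sum of second partials

print(sp.simplify(div_grad_u - laplacian))    # 0, so div(grad(u)) really is the Laplacian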

On the other hand, we can calculate the total amount of heat, H, in the region E at a particular
time, t, by computing the triple integral over E:

(6)   H = ∭_E σρ u(x, y, z, t) dV

where ρ is the density of the material and the constant σ is the specific heat of the material (don't
worry about all these extra constants for now - we will lump them all together in one place in the
end). How does this relate to the earlier integral? On one hand, (5) gives the rate of heat leaving
E per unit time, which is just the negative of ∂H/∂t, where H gives the total amount of heat in E.
This means we actually have two ways to calculate the same thing, because we can calculate
∂H/∂t by differentiating equation (6) for H, i.e.
(7)   ∂H/∂t = ∭_E σρ (∂u/∂t) dV

Now, (5) gives the rate of heat leaving E per unit time, and this must equal the negative of (7), the
rate of change of the total heat in E. Setting -∂H/∂t from (7) equal to (5) and cancelling the minus
sign on each side, the two volume integrals must equal each other:

(8)   ∭_E σρ (∂u/∂t) dV = k ∭_E (∇²u) dV

For these two integrals to be equal means that their two integrands must equal each other (since
this equality holds over any arbitrary region E in the object being studied), so

(9)   σρ (∂u/∂t) = k (∇²u)

or, if we let c² = k/(σρ), and write out the Laplacian, ∇²u, then this works out simply as

(10)   ∂u/∂t = c²(∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)

This, then, is the PDE that models the diffusion of heat in an object, i.e. the Heat Equation! This
particular version (10) is the three-dimensional heat equation.

Solving the Heat Equation in the one-dimensional case


We simplify our heat diffusion modeling by considering the specific case of heat flowing in a
long thin bar or wire, where the cross-section is very small, and constant, and insulated in such a
way that the heat flow is just along the length of the bar or wire. In this slightly contrived
situation, we can model the heat flow by keeping track of the temperature at any point along the
bar using just one spatial dimension, measuring the position along the bar.
This means that the function, u, that keeps track of the temperature, just depends on x, the
position along the bar, and t, time, and so the heat equation from the previous section becomes
the so-called one-dimensional heat equation:
(1)   ∂u/∂t = c² ∂²u/∂x²

One of the interesting things to note at this point is how similar this PDE appears to the wave
equation PDE. However, the resulting solution functions are remarkably different in nature.
Remember that the solutions to the wave equation had to do with oscillations, dealing with
vibrating strings and all that. Here the solutions to the heat equation deal with temperature flow,
not oscillation, so the solution functions will likely look quite different. If you're
familiar with the solution to Newton's heating and cooling differential equation, then you might
expect to see some type of exponential decay function as part of the solution function.
Before we start to solve this equation, let's mention a few more conditions that we will need to
know to nail down a specific solution. If the metal bar that we're studying has a specific length,
l, then we need to know the temperatures at the ends of the bar. These temperatures will give us
boundary conditions similar to the ones we worked with for the wave equation. To make life a
bit simpler for us as we solve the heat equation, let's start with the case when the ends of the bar,
at x = 0 and x = l, both have temperature equal to 0 for all time (you can picture this situation as
a metal bar with the ends stuck against blocks of ice, or some other cooling apparatus keeping
the ends at exactly 0 degrees). Thus we will be working with the same boundary conditions as
before, namely
(2)   u(0, t) = 0 and u(l, t) = 0 for all values of t

Finally, to pick out a particular solution, we also need to know the initial starting temperature of
the entire bar, namely the function u(x, 0). Interestingly, that's all we need for an initial
condition this time around (recall that to specify a particular solution of the wave equation we
needed to know two initial conditions, u(x, 0) and u_t(x, 0)).
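Before carrying out the separation-of-variables solution below, it may help to see what this problem (the PDE, the boundary conditions, and the initial temperature) means concretely. The following Python sketch approximates the one-dimensional heat equation with a simple explicit finite-difference scheme; it is a rough numerical illustration, not the method used in this handout, and the values of c, l and the starting temperature are made-up samples.

import numpy as np

c, l = 1.0, 1.0                         # sample diffusivity constant and bar length
nx = 51
x = np.linspace(0.0, l, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / c**2                 # small enough for the explicit scheme to be stable

u = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)   # sample initial temperature u(x, 0)
u[0] = u[-1] = 0.0                      # boundary conditions: ends held at 0 degrees

for step in range(5000):
    # u_t = c^2 u_xx, approximated by central differences in x and a forward step in t
    u[1:-1] += dt * c**2 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # the end values are never updated, so u(0, t) = u(l, t) = 0 throughout

print(round(float(np.max(np.abs(u))), 4))   # close to 0: the bar has cooled toward 0 degrees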
The nice thing now is that since we have already solved a PDE, we can try following the
same basic approach as the one we used to solve the last PDE, namely separation of variables.
With any luck, we will end up solving this new PDE. So, remembering back to what we did in
that case, let's start by writing

(3)   u(x, t) = F(x)G(t)

where F and G are single variable functions. Differentiating this equation for u(x, t) with
respect to each variable yields

(4)   ∂u/∂t = F(x)G'(t)   and   ∂²u/∂x² = F''(x)G(t)

When we substitute these two equations back into the original heat equation

(5)   ∂u/∂t = c² ∂²u/∂x²

we get

(6)   F(x)G'(t) = c² F''(x)G(t)

If we now separate the two functions F and G by dividing both sides through by c²F(x)G(t), then we get

(7)   G'(t)/(c²G(t)) = F''(x)/F(x)

Just as before, the left-hand side only depends on the variable t, and the right-hand side just
depends on x. As a result, the only way these two can be equal is for both of them to equal the
same constant, k:

(8)   G'(t)/(c²G(t)) = F''(x)/F(x) = k

As before, let's first take a look at the implications for F(x), since the boundary conditions will
again limit the possible solution functions. From (8) we get that F(x) has to satisfy

(9)   F''(x) - kF(x) = 0

Just as before, one can consider the various cases with k being positive, zero, or negative. Just as
before, to meet the boundary conditions, it turns out that k must in fact be negative (otherwise
F(x) ends up being identically equal to 0, and we end up with the trivial solution u(x, t) = 0).
So, skipping ahead a bit, let's assume we have figured out that k must be negative (you should
check the other two cases just as before to see that what we've just written is true!). To indicate
this, we write, as before, k = -λ², so that we now need to look for solutions to

(10)   F''(x) + λ²F(x) = 0

These solutions are just the same as before; namely, the general solution is

(11)   F(x) = A cos(λx) + B sin(λx)

where again A and B are constants and now λ = √(-k). Next, let's consider the
boundary conditions u(0, t) = 0 and u(l, t) = 0. These are equivalent to stating that
F(0) = F(l) = 0. Substituting 0 in for x in (11) leads to

(12)   F(0) = A cos(0) + B sin(0) = A = 0

so that F(x) = B sin(λx). Next, consider F(l) = B sin(λl) = 0. As before, we check that B can't
equal 0, since otherwise F(x) ≡ 0, which would then mean that u(x, t) = F(x)G(t) = 0·G(t) = 0, the
trivial solution, again. With B ≠ 0, it must be the case that sin(λl) = 0 in order to have
B sin(λl) = 0. Again, the only way that this can happen is for λl to be a multiple of π. This
means that once again
(13)   λl = nπ, or λ = nπ/l   (where n is an integer)

and so

(14)   F(x) = sin(nπx/l)

where n is an integer. Next we solve for G(t), using equation (8) again. Rewriting (8), we
see that this time G(t) must satisfy

(15)   G'(t) + λ_n²G(t) = 0,   where λ_n = cnπ/l

since we had originally written k = -λ², and we just determined that λ = nπ/l during the
solution for F(x). The general solution to this first order differential equation is just

(16)   G(t) = Ce^(-λ_n² t)
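If you would like to see (16) come out of a computer algebra system, here is a tiny sympy sketch (an illustration only; the symbol name lambda_n is just a stand-in for λ_n) that solves the first order ODE (15) directly.

import sympy as sp

t = sp.symbols("t", positive=True)
lam_n = sp.symbols("lambda_n", positive=True)
G = sp.Function("G")

# first order ODE (15): G'(t) + lambda_n^2 G(t) = 0
sol = sp.dsolve(sp.Eq(G(t).diff(t) + lam_n**2 * G(t), 0), G(t))
print(sol)    # G(t) = C1*exp(-lambda_n**2*t), matching (16)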

So, now we can put it all together to find out that

(17)   u(x, t) = F(x)G(t) = C sin(nπx/l) e^(-λ_n² t)

where n is an integer, C is an arbitrary constant, and λ_n = cnπ/l. As is always the case, given a
supposed solution to a differential equation, you should check to see that this indeed is a solution
to the original heat equation, and that it satisfies the two boundary conditions we started with.
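Following that suggestion, here is a short sympy sketch (an illustration, carried out with symbolic n, c, l and C) that checks that (17) satisfies the heat equation ∂u/∂t = c² ∂²u/∂x² and the two boundary conditions.

import sympy as sp

x, t, C, c, l = sp.symbols("x t C c l", positive=True)
n = sp.symbols("n", integer=True, positive=True)

lam_n = c * n * sp.pi / l
u = C * sp.sin(n * sp.pi * x / l) * sp.exp(-lam_n**2 * t)   # candidate solution (17)

pde_residual = sp.diff(u, t) - c**2 * sp.diff(u, x, 2)      # should be identically 0
print(sp.simplify(pde_residual))            # 0
print(sp.simplify(u.subs(x, 0)))            # 0  (boundary condition at x = 0)
print(sp.simplify(u.subs(x, l)))            # 0  (boundary condition at x = l, since sin(n*pi) = 0)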

What does it mean?


The next question is how to get from the general solution to the heat equation

(1)   u(x, t) = C sin(nπx/l) e^(-λ_n² t)

that we found in the last section, to a specific solution for a particular situation. How can one
figure out which values of n and C are needed for a specific problem? The answer lies not in
choosing one such solution function, but more typically it requires setting up an infinite series of
such solutions. Such an infinite series, because of the principle of superposition, will still be a
solution function to the equation, because the original heat equation PDE was linear and
homogeneous. Using the superposition principle and summing together various solutions
with carefully chosen values of C, it is possible to create a specific solution function that
will match any (reasonable) given starting temperature function u(x, 0). The way in which we
add together solutions involves Fourier series - again, a topic that is too broad to go into at this
point, but one which you will see if you continue on with Math 21b!
In any case, you can see one clear feature in the solution functions, which is the presence of the
exponential decay term e^(-λ_n² t) involving the time variable. For instance, if the
temperature at any point on the bar started out hotter than the two ends of the bar (which are kept
at a steady 0 degrees throughout, according to our boundary conditions), then the exponential
decay term shows that as time passes, the temperature at that point of the bar will decay
exponentially down to 0.
To see a particular example in action, you will get a chance to do another Mathematica
assignment covering aspects of differential equations. In this lab you will be able to enter
different starting functions for the temperature along a metal bar and see how they decay as t,
the time variable, increases (look in the section marked Heat Equation for details).
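If you would rather experiment outside Mathematica, the following Python sketch (not the course lab, and with made-up values of c, l and the starting temperature) builds a truncated sum of solutions of the form (17), with coefficients chosen by the standard Fourier sine-coefficient integral, and prints how the maximum temperature decays as t increases.

import numpy as np

c, l = 1.0, 1.0                                   # sample constants
x = np.linspace(0.0, l, 401)
dx = x[1] - x[0]

def f0(x):
    return x * (l - x)                            # sample starting temperature u(x, 0)

# Fourier sine coefficients b_n = (2/l) * integral of f0(x) sin(n pi x / l) dx,
# computed here with a simple numerical sum (the integrand vanishes at both ends)
N = 40
b = [2.0 / l * np.sum(f0(x) * np.sin(n * np.pi * x / l)) * dx for n in range(1, N + 1)]

def u(t):
    # truncated superposition of solutions (17): sum of b_n sin(n pi x / l) e^(-(c n pi / l)^2 t)
    total = np.zeros_like(x)
    for n, bn in enumerate(b, start=1):
        lam_n = c * n * np.pi / l
        total += bn * np.sin(n * np.pi * x / l) * np.exp(-lam_n**2 * t)
    return total

for t in (0.0, 0.05, 0.2, 1.0):
    print(t, round(float(np.max(u(t))), 5))       # maximum temperature decays toward 0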

Final comments on PDEs


From what you have seen so far in this short introduction to PDEs, it should be clear that
knowledge of PDEs is an important part of the mathematical modeling done in many different
scientific fields. What you have seen so far is just a small sampling of the vast world of PDEs.
In each of the cases we solved, we worked with just the one-dimensional cases, but with a little
effort, it is possible to set up and solve similar PDEs for higher-dimensional situations as well.
For instance, the two-dimensional wave equation
(1)   ∂²u/∂t² = c²(∂²u/∂x² + ∂²u/∂y²)

can be used to model waves on the surface of drumheads, or on the surface of liquids, and the
three-dimensional heat equation

(2)   ∂u/∂t = c²(∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)

can be used to study temperature diffusion in three-dimensional objects. To solve such


equations, one can still use the separation of variables technique that we saw used for the
solutions to the one-dimensional cases. However, with more variables involved, one typically
has to invoke the technique several times in a row, splitting PDEs involving functions of more
than two variables into a sequence of PDEs each involving just one-variable functions. Solutions
are then found for each of the one-variable differential equations, and combined to yield
solutions to the original PDEs, in much the same way that we saw in the two separation of
variables solutions above.
If you would like to learn more about PDEs (and who wouldn't?), then you might want to take a
look at one of the classic texts on differential equations, by William Boyce and Richard DiPrima
(Elementary Differential Equations and Boundary Value Problems). Another good reference
book is Advanced Engineering Mathematics by Erwin Kreyszig (much of this handout was
modeled on the approach taken by Kreyszig in his textbook).

PDE Problems
(1)  Determine which of the following functions are solutions to the two-dimensional Laplace
equation

      ∂²u/∂x² + ∂²u/∂y² = 0

(a)  u(x, y) = x³ - 3xy²
(b)  u(x, y) = x⁴ - 6x²y² + y⁴ + 12
(c)  u(x, y) = cos(x + y)e^(x - y)
(d)  u(x, y) = arctan(y/x)
(e)  u(x, y) = 2002xy² - 1999x²y
(f)  u(x, y) = e^x(sin(y) + 2cos(y))

(2)  Determine which of the following functions are solutions to the one-dimensional wave
equation (for a suitable value of the constant c). Also determine what c must equal in each case.

      ∂²u/∂t² = c² ∂²u/∂x²

(a)  u(x, t) = sin(x + 2t) + cos(x - 2t)
(b)  u(x, t) = ln(xt)
(c)  u(x, t) = 4x³ + 12xt² + 24
(d)  u(x, t) = sin(100x) sin(100t)
(e)  u(x, t) = 2002xt² + 1001x²t
(f)  u(x, t) = x² + 4t²

(3)  Solve the following four PDEs, where u is a function of two variables, x and y. Note that
your answer might have undetermined functions of either x or y, the same way an ODE might
have undetermined constants. Note that you can solve these PDEs without having to use the
separation of variables technique.

(a)  ∂²u/∂x² + 9u = 0
(b)  ∂u/∂y + 2yu = 0
(c)  ∂u/∂x = 2xyu
(d)  ∂²u/∂x∂y = 0

(4)  Solve the following systems of PDEs, where u is a function of two variables, x and y.
Note that once again your answer might have undetermined functions of either x or y.

(a)  ∂²u/∂x² = 0 and ∂²u/∂y² = 0
(b)  ∂u/∂x = 0 and ∂u/∂y = 0
(c)  ∂²u/∂x∂y = 0 and ∂²u/∂x² = 0
(d)  ∂²u/∂y∂x = 0, ∂²u/∂x² = 0 and ∂²u/∂y² = 0

(5)  Determine specific solutions to the one-dimensional wave equation for each of the
following sets of initial conditions. Suppose that each one is modeling a vibrating string of
length π with fixed ends, and with constants such that c² = 1 in the wave equation PDE.

(a)  u(x, 0) = 0 and u_t(x, 0) = sin(4x)
(b)  u(x, 0) = 0 and u_t(x, 0) = 0.1 sin(2x)
(c)  u(x, 0) = 0.1 sin(x) and u_t(x, 0) = 0.2 sin(x)
(d)  u(x, 0) = 0.3 sin(6x) and u_t(x, 0) = 0.2 sin(6x)

(6)  Find solutions u(x, y) to each of the following PDEs by using the separation of variables
technique.

(a)  ∂u/∂x + ∂u/∂y = 0
(b)  ∂u/∂x + y ∂u/∂y = 0
(c)  ∂u/∂x + ∂u/∂y = 2(x + y)u
(d)  ∂²u/∂x∂y - u = 0
(e)  y ∂u/∂x + ∂u/∂y = 0
(f)  x ∂u/∂x + ∂u/∂y = 0