
Lecture Notes in Mathematics

Arkansas Tech University


Department of Mathematics

A First Course in Quasi-Linear Partial


Differential Equations
for Physical Sciences and Engineering

Marcel B. Finan
Arkansas Tech University
© All Rights Reserved

October 3, 2019
Preface

Partial differential equations are often used to construct models of the most
basic theories underlying physics and engineering. The goal of this book is to
develop the most basic ideas from the theory of partial differential equations,
and apply them to the simplest models arising from the above mentioned
fields.
It is not easy to master the theory of partial differential equations. Unlike
the theory of ordinary differential equations, which relies on the fundamental
existence and uniqueness theorem, there is no single theorem which is central
to the subject. Instead, there are separate theories used for each of the major
types of partial differential equations that commonly arise.
It is worth pointing out that the preponderance of differential equations aris-
ing in applications, in science, in engineering, and within mathematics itself,
are of either first or second order, with the latter being by far the most preva-
lent. We will mainly cover these two classes of PDEs.
This book is intended for a first course in partial differential equations at
the advanced undergraduate level for students in engineering and physical
sciences. It is assumed that the student has had the standard three semester
calculus sequence, and a course in ordinary differential equations.

Marcel B. Finan
August 2009

Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

The Basics of the Theory of Partial Differential Equations . . . . . . . . 3
1 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Solutions to PDEs/PDEs with Constraints . . . . . . . . . . . . . . . . 13

Review of Some ODE Results . . . . . . . . . . . . . . . . . . . . . . . . 25
3 The Method of Integrating Factor . . . . . . . . . . . . . . . . . . . . 26
4 The Method of Separation of Variables for ODEs . . . . . . . . . . . . . 31

First Order Partial Differential Equations . . . . . . . . . . . . . . . . 35
5 Classification of First Order PDEs . . . . . . . . . . . . . . . . . . . 36
6 A Review of Multivariable Calculus . . . . . . . . . . . . . . . . . . . 41
  6.1 Multiplication of Vectors: The Scalar or Dot Product . . . . . . . . 41
  6.2 Directional Derivatives and the Gradient Vector . . . . . . . . . . . 50
7 Solvability of Semi-linear First Order PDEs . . . . . . . . . . . . . . 59
8 Linear First Order PDE: The One Dimensional Spatial Transport Equations . 65
9 Solving Quasi-Linear First Order PDE via the Method of Characteristics . 71
10 The Cauchy Problem for First Order Quasilinear Equations . . . . . . . 75

Second Order Linear Partial Differential Equations . . . . . . . . . . . . 87
11 Second Order PDEs in Two Variables . . . . . . . . . . . . . . . . . . 88
12 Hyperbolic Type: The Wave Equation . . . . . . . . . . . . . . . . . . 93
13 Parabolic Type: The Heat Equation in One-Dimensional Space . . . . . . 100
14 Sequences of Functions: Pointwise and Uniform Convergence . . . . . . . 109
15 An Introduction to Fourier Series . . . . . . . . . . . . . . . . . . . 120
16 Fourier Sine Series and Fourier Cosine Series . . . . . . . . . . . . . 133
17 Separation of Variables for PDEs . . . . . . . . . . . . . . . . . . . 140
  17.1 Second Order Linear Homogeneous ODE with Constant Coefficients . . . 140
  17.2 The Method of Separation of Variables for PDEs . . . . . . . . . . . 141
18 Solutions of the Heat Equation by the Separation of Variables Method . . 147
19 Elliptic Type: Laplace's Equations in Rectangular Domains . . . . . . . 154
20 Laplace's Equations in Circular Regions . . . . . . . . . . . . . . . . 165

The Laplace Transform Solutions for PDEs . . . . . . . . . . . . . . . . . 177
21 Essentials of the Laplace Transform . . . . . . . . . . . . . . . . . . 178
22 Solving PDEs Using Laplace Transform . . . . . . . . . . . . . . . . . 191

The Fourier Transform Solutions for PDEs . . . . . . . . . . . . . . . . . 199
23 Complex Version of Fourier Series . . . . . . . . . . . . . . . . . . . 200
24 An Introduction to Fourier Transforms . . . . . . . . . . . . . . . . . 206
25 Applications of Fourier Transforms to PDEs . . . . . . . . . . . . . . 214

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Appendix A: The Method of Undetermined Coefficients . . . . . . . . . . . 222
Appendix B: The Method of Variation of Parameters . . . . . . . . . . . . 229

Answers and Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 233

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
The Basics of the Theory of Partial Differential Equations

Many fields in engineering and the physical sciences require the study of
ODEs and PDEs. Some of these fields include acoustics, aerodynamics, elas-
ticity, electrodynamics, fluid dynamics, geophysics (seismic wave propaga-
tion), heat transfer, meteorology, oceanography, optics, petroleum engineer-
ing, plasma physics (ionized liquids and gases), and quantum mechanics.
So the study of partial differential equations is of great importance to the
above mentioned fields. The purpose of this chapter is to introduce the reader
to the basic terms of partial differential equations.


1 Basic Concepts
The goal of this section is to introduce the reader to the basic concepts and
notations that will be used in the remainder of this book.
We start this section by reviewing the concept of partial derivatives and the
chain rule of functions in two variables.
Let u(x, y) be a function of the independent variables x and y. The first
derivative of u with respect to x is defined by

u_x(x, y) = \frac{\partial u}{\partial x}(x, y) = \lim_{h \to 0} \frac{u(x + h, y) - u(x, y)}{h}

provided that the limit exists.
Likewise, the first derivative of u with respect to y is defined by

u_y(x, y) = \frac{\partial u}{\partial y}(x, y) = \lim_{h \to 0} \frac{u(x, y + h) - u(x, y)}{h}

provided that the limit exists.
We can define higher order derivatives such as

u_{xx}(x, y) = \frac{\partial^2 u}{\partial x^2}(x, y) = \lim_{h \to 0} \frac{u_x(x + h, y) - u_x(x, y)}{h}

u_{yy}(x, y) = \frac{\partial^2 u}{\partial y^2}(x, y) = \lim_{h \to 0} \frac{u_y(x, y + h) - u_y(x, y)}{h}

u_{xy}(x, y) = \frac{\partial^2 u}{\partial x \partial y}(x, y) = \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial y}\right) = \lim_{h \to 0} \frac{u_y(x + h, y) - u_y(x, y)}{h}

and

u_{yx}(x, y) = \frac{\partial^2 u}{\partial y \partial x}(x, y) = \lim_{h \to 0} \frac{u_x(x, y + h) - u_x(x, y)}{h}

provided that the limits exist.(1)
An important formula of differentiation is the so-called chain rule. If
u = u(x, y), where x = x(s, t) and y = y(s, t), then

\frac{\partial u}{\partial s} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial s}.

Likewise,

\frac{\partial u}{\partial t} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial t}.

(1) If u_{xy} and u_{yx} are continuous then u_{xy}(x, y) = u_{yx}(x, y).

Example 1.1
Compute the partial derivatives indicated:
(a) \frac{\partial}{\partial y}(y^2 \sin xy)
(b) \frac{\partial^2}{\partial x^2}[e^{x+y}]^2

Solution.
(a) We have \frac{\partial}{\partial y}(y^2 \sin xy) = \sin xy \frac{\partial}{\partial y}(y^2) + y^2 \frac{\partial}{\partial y}(\sin xy) = 2y \sin xy + xy^2 \cos xy.
(b) We have \frac{\partial}{\partial x}[e^{x+y}]^2 = \frac{\partial}{\partial x} e^{2(x+y)} = 2e^{2(x+y)}. Thus, \frac{\partial^2}{\partial x^2}[e^{x+y}]^2 = \frac{\partial}{\partial x} 2e^{2(x+y)} = 4e^{2(x+y)}
Example 1.2
Suppose u(x, y) = \sin(x^2 + y^2), where x = te^s and y = s + t. Find u_s(s, t)
and u_t(s, t).

Solution.
We have

u_s(s, t) = u_x x_s + u_y y_s = 2x\cos(x^2 + y^2)\,te^s + 2y\cos(x^2 + y^2)
          = [2t^2 e^{2s} + 2(s + t)]\cos[t^2 e^{2s} + (s + t)^2].

Likewise,

u_t(s, t) = u_x x_t + u_y y_t = 2x\cos(x^2 + y^2)\,e^s + 2y\cos(x^2 + y^2)
          = [2te^{2s} + 2(s + t)]\cos[t^2 e^{2s} + (s + t)^2]
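Computations like the one in Example 1.2 are easy to cross-check with a computer algebra system. The following short Python sketch is an illustrative addition to these notes (the variable names are ours, and it assumes the sympy library); it differentiates u(s, t) directly, which should agree with the expressions above up to algebraic rearrangement.

    # Illustrative check of Example 1.2 with sympy (not part of the original notes).
    import sympy as sp

    s, t = sp.symbols('s t')
    x = t*sp.exp(s)          # x = t e^s
    y = s + t                # y = s + t
    u = sp.sin(x**2 + y**2)  # u = sin(x^2 + y^2)

    # Differentiating the composite expression makes sympy apply the chain rule.
    print(sp.simplify(sp.diff(u, s)))  # should match u_s above up to rearrangement
    print(sp.simplify(sp.diff(u, t)))  # should match u_t above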
A differential equation is an equation that involves an unknown scalar
function (the dependent variable) and one or more of its derivatives. For
example,
\frac{d^2 y}{dx^2} - 5\frac{dy}{dx} + 3y = -3   (1.1)

or

\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} + u = 0.   (1.2)
If the unknown function is a function of a single variable, then the differential
equation is called an ordinary differential equation, abbreviated ODE. An
example of an ordinary differential equation is Equation (1.1). In contrast, when
the unknown function is a function of two or more independent variables, the
differential equation is called a partial differential equation, in short PDE.
Equation (1.2) is an example of a partial differential equation. In this book we
will be focusing on partial differential equations.

Example 1.3
Identify which variables are dependent and which are independent for the
following differential equations.
(a) \frac{d^4 y}{dx^4} - x^2 + y = 0.
(b) u_{tt} + xu_{tx} = 0.
(c) x\frac{dx}{dt} = 4.
(d) \frac{\partial y}{\partial u} - 4\frac{\partial y}{\partial v} = u + 3y.

Solution.
(a) Independent variable is x and the dependent variable is y.
(b) Independent variables are x and t and the dependent variable is u.
(c) Independent variable is t and the dependent variable is x.
(d) Independent variables are u and v and the dependent variable is y

Example 1.4
Classify the following as either ODE or PDE.
(a) ut = c2 uxx .
(b) y'' - 4y' + 5y = 0.
(c) zt + czx = 5.

Solution.
(a) A PDE with dependent variable u and independent variables t and x.
(b) An ODE with dependent variable y and independent variable x.
(c) A PDE with dependent variable z and independent variables t and x

The order of a differential equation is the highest order derivative occurring
in the equation. Thus, (1.1) and (1.2) are second order differential equations.

Example 1.5
Find the order of each of the following partial differential equations:
(a) xux + yuy = x2 + y 2
(b) uux + uy = 2
(c) utt − c2 uxx = f (x, t)
(d) ut + uux + uxxx = 0
(e) utt + uxxxx = 0.

Solution.
(a) First order (b) First order (c) Second order (d) Third order (e) Fourth order

A first order partial differential equation is called quasi-linear if it can
be written in the form

a(x, y, u)u_x + b(x, y, u)u_y = c(x, y, u).   (1.3)

If a(x, y, u) = α(x, y) and b(x, y, u) = β(x, y), then (1.3) is called semi-linear.
If, furthermore, c(x, y, u) = γ(x, y)u + δ(x, y), then (1.3) is called linear.
In a similar way, a second order quasi-linear PDE has the form

a(x, y, u, u_x, u_y)u_{xx} + b(x, y, u, u_x, u_y)u_{xy} + c(x, y, u, u_x, u_y)u_{yy} = d(x, y, u, u_x, u_y).

The semi-linear case has the form

a(x, y)u_{xx} + b(x, y)u_{xy} + c(x, y)u_{yy} = d(x, y, u, u_x, u_y),

and the linear case is

a(x, y)u_{xx} + b(x, y)u_{xy} + c(x, y)u_{yy} + d(x, y)u_x + e(x, y)u_y + f(x, y)u = g(x, y).

Note that linear and semi-linear partial differential equations are special cases
of quasi-linear equations. However, a quasi-linear PDE need not be linear:
a partial differential equation that is not linear is called non-linear. For
example, u_x^2 + 2u_{xy} = 0 is non-linear. Note that this equation is quasi-linear
and semi-linear.
As for ODEs, linear PDEs are usually simpler to analyze and solve than non-linear PDEs.

Example 1.6
Determine whether the given PDE is linear, quasi-linear, semi-linear, or non-
linear:
(a) xux + yuy = x2 + y 2 .
(b) uux + uy = 2.
(c) utt − c2 uxx = f (x, t).
(d) ut + uux + uxxx = 0.
(e) u_{tt}^2 + u_{xx} = 0.

Solution.
(a) Linear, quasi-linear, semi-linear.
(b) Quasi-linear, non-linear.
(c) Linear, quasi-linear, semi-linear.

(d) Quasi-linear, semi-linear, non-linear.


(e) Non-linear

A more precise definition of a linear differential equation begins with the concept
of a linear differential operator L. The operator L is assembled by
summing the basic partial derivative operators, with coefficients depending
on the independent variables only. The operator acts on sufficiently smooth
functions(2) depending on the relevant independent variables. Linearity
imposes two key requirements:

L[u + v] = L[u] + L[v] and L[αu] = αL[u],

for any two (sufficiently smooth) functions u, v and any constant (a scalar) α.

Example 1.7
Define a linear differential operator for the PDE

ut = c2 uxx .

Solution.
Let L[u] = ut − c2 uxx . Then one can easily check that L[u + v] = L[u] + L[v]
and L[αu] = αL[u]
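As an illustrative aside (not part of the original notes), the two linearity requirements can be checked symbolically for the operator of Example 1.7 with Python's sympy library; both printed results should be 0.

    import sympy as sp

    x, t, c, alpha = sp.symbols('x t c alpha')
    u = sp.Function('u')(x, t)
    v = sp.Function('v')(x, t)

    # L[w] = w_t - c^2 w_xx, the operator from Example 1.7
    L = lambda w: sp.diff(w, t) - c**2 * sp.diff(w, x, 2)

    print(sp.simplify(L(u + v) - (L(u) + L(v))))   # expected output: 0
    print(sp.simplify(L(alpha*u) - alpha*L(u)))    # expected output: 0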

A linear partial differential equation is called homogeneous if the dependent
variable and/or its derivatives appear in terms with degree exactly one.
A linear partial differential equation that is not homogeneous is called
non-homogeneous. In this case, there is a term in the equation that involves
only the independent variables.
A homogeneous linear partial differential equation has the form

L[u] = 0

where L is a linear differential operator, and the non-homogeneous case has the form

L[u] = f(x, y, \cdots).

(2) Smooth functions are functions that are continuously differentiable up to a certain order.

Example 1.8
Determine whether the equation is homogeneous or non-homogeneous:
(a) xux + yuy = x2 + y 2 .
(b) utt = c2 uxx .
(c) y 2 uxx + xuyy = 0.

Solution.
(a) Non-homogeneous because of x2 + y 2 .
(b) Homogeneous.
(c) Homogeneous

Practice Problems
Problem 1.1
Classify the following equations as either ODE or PDE.
(a) (y''')^4 + \frac{t^2}{(y')^2 + 4} = 0.
(b) \frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} = \frac{y - x}{y + x}.
(c) y'' - 4y = 0.

Problem 1.2
Write the equation
uxx + 2uxy + uyy = 0
in the coordinates s = x, t = x − y.

Problem 1.3
Write the equation
uxx − 2uxy + 5uyy = 0
in the coordinates s = x + y, t = 2x.

Problem 1.4
For each of the following PDEs, state its order and whether it is linear
or non-linear. If it is linear, also state whether it is homogeneous or non-
homogeneous:
(a) uu_x + x^2 u_{yyy} + \sin x = 0.
(b) u_x + e^{x^2} u_y = 0.
(c) u_{tt} + (\sin y)u_{yy} - e^t \cos y = 0.

Problem 1.5
For each of the following PDEs, determine its order and whether it is linear
or not. For linear PDEs, state also whether the equation is homogeneous or
not. For non-linear PDEs, circle all term(s) that are not linear.
(a) x2 uxx + ex u = xuxyy .
(b) ey uxxx + ex u = − sin y + 10xuy .
(c) y 2 uxx + ex uux = 2xuy + u.
(d) ux uxxy + ex uuy = 5x2 ux .
(e) ut = k 2 (uxx + uyy ) + f (x, y, t).

Problem 1.6
Which of the following PDEs are linear?
(a) Laplace’s equation: uxx + uyy = 0.
(b) Convection (transport) equation: ut + cux = 0.
(c) Minimal surface equation: (1 + Zy2 )Zxx − 2Zx Zy Zxy + (1 + Zx2 )Zyy = 0.
(d) Korteweg-de Vries equation: ut + 6uux = uxxx .

Problem 1.7
Classify the following differential equations as ODEs or PDEs, linear or
non-linear, and determine their order. For the linear equations, determine
whether or not they are homogeneous.
(a) The diffusion equation for u(x, t) :

ut = kuxx .

(b) The wave equation for w(x, t) :

wtt = c2 wxx .

(c) The thin film equation for h(x, t) :

ht = −(hhxxx )x .

(d) The forced harmonic oscillator for y(t) :

ytt + ω 2 y = F cos (ωt).

(e) The Poisson Equation for the electric potential Φ(x, y, z) :

Φxx + Φyy + Φzz = 4πρ(x, y, z)

where ρ(x, y, z) is a known charge density.


(f) Burgers' equation for h(x, t) :

ht + hhx = νhxx .

Problem 1.8
Write down the general form of a linear second order differential equation of
a function in three variables.

Problem 1.9
Give the orders of the following PDEs, and classify them as linear or non-
linear. If the PDE is linear, specify whether it is homogeneous or non-
homogeneous.
(a) x2 uxxy + y 2 uyy − log (1 + y 2 )u = 0.
(b) ux + u3 = 1.
(c) uxxyy + ex ux = y.
(d) uuxx + uyy − u = 0.
(e) uxx + ut = 3u.
Problem 1.10
Consider the second-order PDE
uxx + 4uxy + 4uyy = 0.
Use the change of variables v(x, y) = y − 2x and w(x, y) = x to show that
uww = 0.
Problem 1.11
Write the one dimensional wave equation utt = c2 uxx in the coordinates
v = x + ct and w = x − ct.
Problem 1.12
Write the PDE
uxx + 2uxy − 3uyy = 0
in the coordinates v(x, y) = y − 3x and w(x, y) = x + y.
Problem 1.13
Write the PDE
aux + buy = 0,  a ≠ 0
in the coordinates s(x, y) = bx − ay and t(x, y) = x.
Problem 1.14
Write the PDE
ux + uy = 1
in the coordinates s = x − y and t = x.
Problem 1.15
Write the PDE
aut + bux = u,  b ≠ 0
in the coordinates v = ax − bt and w = x.

2 Solutions to PDEs/PDEs with constraints


By a classical solution or strong solution to a partial differential equation
we mean a function that satisfies the equation. A PDE might have many
classical solutions. To solve a PDE is to find all its classical solutions. In the
case of only two independent variables x and y, a classical solution u(x, y)
is visualized geometrically as a surface, called a solution surface or an
integral surface(3) of the PDE in the (x, y, u) space.
A formula that expresses all the solutions of a PDE is called the general
solution of the equation.

Example 2.1
Show that u(x, t) = e^{-\lambda^2 \alpha^2 t}(\cos \lambda x - \sin \lambda x) is a solution to the equation
u_t - \alpha^2 u_{xx} = 0.

Solution.
Since

u_t - \alpha^2 u_{xx} = -\lambda^2 \alpha^2 e^{-\lambda^2 \alpha^2 t}(\cos \lambda x - \sin \lambda x)
                       - \alpha^2 e^{-\lambda^2 \alpha^2 t}(-\lambda^2 \cos \lambda x + \lambda^2 \sin \lambda x) = 0,

the given function is a classical solution to the given equation
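Verifications of this kind are easy to automate. The sketch below is an illustrative addition (not from the original notes, and it assumes the sympy library); it recomputes the residual u_t − α²u_xx for the function of Example 2.1 and should print 0.

    import sympy as sp

    x, t, lam, alpha = sp.symbols('x t lambda alpha', real=True)
    u = sp.exp(-lam**2 * alpha**2 * t) * (sp.cos(lam*x) - sp.sin(lam*x))

    residual = sp.diff(u, t) - alpha**2 * sp.diff(u, x, 2)
    print(sp.simplify(residual))   # expected output: 0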

Example 2.2
The function u(x, y) = x2 − y 2 is a solution to Laplace’s equation

uxx + uyy = 0.

Represent this solution graphically.

Solution.
The given integral surface is the hyperbolic paraboloid shown in Figure 2.1.
(3) The idea behind the name is due to the fact that integration is being used to find the solution.

Figure 2.1

Example 2.3
Find the general solution of uxy = 0.

Solution.
Integrating first with respect to y, we find u_x(x, y) = f(x), where f is an
arbitrary differentiable function. Integrating u_x with respect to x, we find
u(x, y) = \int f(x)\,dx + g(y), where g is an arbitrary differentiable function
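As a quick symbolic sanity check (an illustrative addition, not part of the original notes), one can confirm with sympy that u = F(x) + g(y) satisfies u_{xy} = 0 for arbitrary twice differentiable F and g:

    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.Function('F')
    g = sp.Function('g')
    u = F(x) + g(y)

    print(sp.diff(u, x, y))   # expected output: 0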

Note that the general solution in the previous example involves two arbitrary
functions. In general, the general solution of a partial differential equation
is an expression that involves arbitrary functions. This is in contrast to the
general solution of an ordinary differential equation which involves arbitrary
constants.
Usually, a classical solution enjoys properties such as smoothness (i.e.,
differentiability) and continuity. However, in the theory of non-linear PDEs,
there are solutions that do not require the smoothness property. Such solutions
are called weak solutions or generalized solutions. For example,
u(x) = x is a classical solution to the differential equation uu' = x. In contrast,
u(x) = |x| is a generalized solution since it is not differentiable at 0.
In this book, the word solution will refer to a classical solution.

Example 2.4
Show that u(x, t) = t + \frac{1}{2}x^2 is a classical solution to the PDE

u_t = u_{xx}.   (2.1)

Solution.
Assume that the domain of definition of u is D ⊂ R2 . Since u, ut , ux , utx , uxx
exist and are continuous in D(i.e., u is smooth in D) and u satisfies equation
(2.1), we conclude that u is a classical solution to the given PDE

We next consider the structure of solutions to linear partial differential
equations. To this end, consider the linear differential operator L as defined in
the previous section. The defining properties of linearity immediately imply
the key facts concerning homogeneous linear differential equations.

Theorem 2.1
The sum of two solutions to a homogeneous linear differential equation is
again a solution, as is the product of a solution by any constant.

Proof.
Let u1 , u2 be solutions, meaning that L[u1 ] = 0 and L[u2 ] = 0. Then, thanks
to linearity,
L[u1 + u2 ] = L[u1 ] + L[u2 ] = 0,
and hence their sum u1 + u2 is a solution. Similarly, if α is any constant, and
u any solution, then
L[αu] = αL[u] = α0 = 0,
and so the scalar multiple αu is also a solution

The following result is known as the superposition principle for
homogeneous linear equations. It states that from given solutions to the equation
one can create many more solutions.

Theorem 2.2
If u1 , · · · , un are solutions to a common homogeneous linear partial differen-
tial equation L[u] = 0, then the linear combination u = c1 u1 + · · · + cn un is
a solution for any choice of constants c1 , · · · , cn .

Proof.
The key fact is that, thanks to the linearity of L, for any sufficiently smooth
functions u1 , · · · , un and any constants c1 , · · · , cn ,

L[u] =L[c1 u1 + · · · + cn un ] = L[c1 u1 + · · · + cn−1 un−1 ] + L[cn un ]


= · · · = L[c1 u1 ] + · · · + L[cn un ] = c1 L[u1 ] + · · · + cn L[un ].

In particular, if the functions are solutions, so L[u1 ] = 0, · · · , L[un ] = 0, then
the right hand side of the above equation vanishes, proving that u is also a
solution to the homogeneous equation L[u] = 0

In physical applications, homogeneous linear equations model unforced systems
that are subject to their own internal constraints. External forcing
is represented by an additional term that does not involve the dependent
variable. This results in the non-homogeneous equation

L[u] = f

where L is a linear partial differential operator, u is the dependent variable,
and f is a given non-zero function of the independent variables alone.
You already learned the basic philosophy for solving non-homogeneous
linear equations in your study of elementary ordinary differential equations.
Step one is to determine the general solution to the homogeneous equation.
Step two is to find a particular solution to the non-homogeneous version.
The general solution to the non-homogeneous equation is then obtained by
adding the two together. Here is the general version of this procedure:

Theorem 2.3
Let up be a particular solution to the non-homogeneous linear equation
L[u] = f. Then the general solution to L[u] = f is given by u = up + uh ,
where uh is the general solution to the corresponding homogeneous equation
L[u] = 0.

Proof.
Let us first show that u = up + uh is also a solution to L[u] = f. By linearity,

L[u] = L[up + uh ] = L[up ] + L[uh ] = f + 0 = f.

To show that every solution to the non-homogeneous equation can be
expressed in this manner, suppose u satisfies L[u] = f. Set w = u − up . Then,
by linearity,
L[w] = L[u − up ] = L[u] − L[up ] = 0,
and hence w is a solution to the homogeneous differential equation. Thus,
u = up + w

As you have noticed from the above discussion, one solution of a linear PDE
leads to the creation of lots of solutions. In contrast, non-linear equations
are much tougher to deal with; for example, knowledge of several solutions
does not necessarily help in constructing others. Indeed, even finding one
solution to a non-linear partial differential equation can be quite a challenge.

PDEs with Constraints


Also, as observed above, a linear partial differential equation has infinitely
many solutions described by the general solution. In most applications, this
general solution is of little use since it has to satisfy other supplementary
conditions, usually called initial or boundary conditions. These conditions
determine the unique solution of interest.
A boundary value problem is a partial differential equation where either
the unknown function or its derivatives have values assigned on the physical
boundary of the domain in which the problem is specified. These conditions
are called boundary conditions. For example, the domain of the following
problem is the square [0, 1] × [0, 1] with boundaries defined by x = 0, x = 1
for all 0 ≤ y ≤ 1 and y = 0, y = 1 for all 0 ≤ x ≤ 1.

u_{xx} + u_{yy} = 0          if 0 < x, y < 1
u(x, 0) = u(x, 1) = 0        if 0 < x < 1
u_x(0, y) = u_x(1, y) = 0    if 0 < y < 1.

There are three types of boundary conditions which arise frequently in for-
mulating physical problems:

1. Dirichlet Boundary Conditions: In this case, the dependent function
u is prescribed on the boundary of the bounded domain. For example, if
the bounded domain is the rectangular plate 0 < x < L1 and 0 < y < L2 , the
boundary conditions u(0, y), u(L1 , y), u(x, 0), and u(x, L2 ) are prescribed.
The boundary conditions are called homogeneous if the dependent variable
is zero at any point on the boundary, otherwise the boundary conditions are
called nonhomogeneous.

2. Neumann Boundary Conditions: In this case, first partial derivatives
are prescribed on the boundary of the bounded domain. For example,
the Neumann boundary conditions for a rod of length L, where 0 < x < L,
are of the form ux (0, t) = α and ux (L, t) = β, where α and β are constants.

3. Robin or mixed Boundary Conditions: This occurs when the dependent
variable and its first partial derivatives are prescribed on the boundary
of the bounded domain.

An initial value problem (or Cauchy problem) is a partial differential
equation together with a set of additional conditions on the unknown function
or its derivatives at a point in the given domain of the solution. These
conditions are called initial value conditions. For example, the transport
equation

u_t(x, t) + cu_x(x, t) = 0
u(x, 0) = f(x).
It can be shown that initial conditions for a linear PDE are necessary and
sufficient for the existence of a unique solution.

We say that an initial and/or boundary value problem associated with a
PDE is well-posed if it has a solution which is unique and depends con-
tinuously on the data given in the problem. The last condition, namely the
continuous dependence is important in physical problems. This condition
means that the solution changes by a small amount when the conditions
change a little. Such solutions are said to be stable.
Example 2.5
For x ∈ R and t > 0 we consider the initial value problem

u_{tt} - u_{xx} = 0
u(x, 0) = u_t(x, 0) = 0.

Clearly, u(x, t) = 0 is a solution to this problem.
(a) Let 0 < \epsilon \ll 1 be a very small number. Show that the function
u_\epsilon(x, t) = \epsilon^2 \sin\frac{x}{\epsilon}\sin\frac{t}{\epsilon} is a solution to the initial value problem

u_{tt} - u_{xx} = 0
u(x, 0) = 0
u_t(x, 0) = \epsilon\sin\frac{x}{\epsilon}.

(b) Show that \sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R, t > 0\} = \epsilon^2. Thus, a small
change in the initial data leads to a small change in the solution. Hence, the
initial value problem is well-posed.

Solution.
(a) We have

\frac{\partial u_\epsilon}{\partial t} = \epsilon \sin\frac{x}{\epsilon}\cos\frac{t}{\epsilon}
\frac{\partial^2 u_\epsilon}{\partial t^2} = -\sin\frac{x}{\epsilon}\sin\frac{t}{\epsilon}
\frac{\partial u_\epsilon}{\partial x} = \epsilon \cos\frac{x}{\epsilon}\sin\frac{t}{\epsilon}
\frac{\partial^2 u_\epsilon}{\partial x^2} = -\sin\frac{x}{\epsilon}\sin\frac{t}{\epsilon}.

Thus, \frac{\partial^2 u_\epsilon}{\partial t^2} - \frac{\partial^2 u_\epsilon}{\partial x^2} = 0. Moreover, u_\epsilon(x, 0) = 0 and \frac{\partial u_\epsilon}{\partial t}(x, 0) = \epsilon\sin\frac{x}{\epsilon}.
(b) We have

\sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R, t > 0\} = \sup\left\{\epsilon^2 \sin\frac{x}{\epsilon}\sin\frac{t}{\epsilon} : x ∈ R, t > 0\right\} = \epsilon^2
A problem that is not well-posed is referred to as an ill-posed problem. We
illustrate this concept in the next example.

Example 2.6
For x ∈ R and t > 0 we consider the initial value problem

u_{tt} + u_{xx} = 0
u(x, 0) = u_t(x, 0) = 0.

Clearly, u(x, t) = 0 is a solution to this problem.
(a) Let 0 < \epsilon \ll 1 be a very small number. Show that the function
u_\epsilon(x, t) = \epsilon^2 \sin\frac{x}{\epsilon}\sinh\frac{t}{\epsilon}, where

\sinh x = \frac{e^x - e^{-x}}{2},

is a solution to the problem

u_{tt} + u_{xx} = 0
u(x, 0) = 0
u_t(x, 0) = \epsilon\sin\frac{x}{\epsilon}.

(b) Show that \sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R\} = \epsilon^2 \sinh\frac{t}{\epsilon}.
(c) Find \lim_{t\to\infty} \sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R\}.

Solution.
(a) We have

\frac{\partial u_\epsilon}{\partial t} = \epsilon \sin\frac{x}{\epsilon}\cosh\frac{t}{\epsilon}
\frac{\partial^2 u_\epsilon}{\partial t^2} = \sin\frac{x}{\epsilon}\sinh\frac{t}{\epsilon}
\frac{\partial u_\epsilon}{\partial x} = \epsilon \cos\frac{x}{\epsilon}\sinh\frac{t}{\epsilon}
\frac{\partial^2 u_\epsilon}{\partial x^2} = -\sin\frac{x}{\epsilon}\sinh\frac{t}{\epsilon}.

Thus, \frac{\partial^2 u_\epsilon}{\partial t^2} + \frac{\partial^2 u_\epsilon}{\partial x^2} = 0. Moreover, u_\epsilon(x, 0) = 0 and \frac{\partial u_\epsilon}{\partial t}(x, 0) = \epsilon\sin\frac{x}{\epsilon}.
(b) We have

\sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R\} = \sup\left\{\epsilon^2 \sinh\frac{t}{\epsilon}\sin\frac{x}{\epsilon} : x ∈ R\right\} = \epsilon^2 \sinh\frac{t}{\epsilon}.

(c) We have

\lim_{t\to\infty} \sup\{|u_\epsilon(x, t) - u(x, t)| : x ∈ R\} = \lim_{t\to\infty} \epsilon^2 \sinh\frac{t}{\epsilon} = \infty.

Thus, a small change in the initial data leads to a catastrophic change in
the solution. Hence, the given problem is ill-posed

Practice Problems

Problem 2.1
Determine a and b so that u(x, y) = eax+by is a solution to the equation

uxxxx + uyyyy + 2uxxyy = 0.

Problem 2.2
Consider the following differential equation

tuxx − ut = 0.

Suppose u(t, x) = X(x)T (t). Show that there is a constant λ such that
X'' = λX and T' = λtT.

Problem 2.3
Consider the initial value problem

xux + (x + 1)yuy = 0, x, y > 1

u(1, 1) = e.
Show that u(x, y) = \frac{xe^x}{y} is the solution to this problem.

Problem 2.4
Show that u(x, y) = e^{-2y}\sin(x - y) is the solution to the initial value problem

u_x + u_y + 2u = 0  for x, y > 1
u(x, 0) = \sin x.

Problem 2.5
Solve each of the following differential equations:
(a) \frac{du}{dx} = 0 where u = u(x).
(b) \frac{\partial u}{\partial x} = 0 where u = u(x, y).

Problem 2.6
Solve each of the following differential equations:
(a) \frac{d^2 u}{dx^2} = 0 where u = u(x).
(b) \frac{\partial^2 u}{\partial x \partial y} = 0 where u = u(x, y).

Problem 2.7
Show that u(x, y) = f (y + 2x) + xg(y + 2x), where f and g are two arbitrary
twice differentiable functions, satisfy the equation
uxx − 4uxy + 4uyy = 0.

Problem 2.8
Find the differential equation whose general solution is given by u(x, t) =
f (x−ct)+g(x+ct), where f and g are arbitrary twice differentiable functions
in one variable.

Problem 2.9
Let p : R → R be a differentiable function in one variable. Prove that
ut = p(u)ux
has a solution satisfying u(x, t) = f (x + p(u)t), where f is an arbitrary
differentiable function. Then find the general solution to ut = (sin u)ux .

Problem 2.10
Find the general solution to the pde
uxx + 2uxy + uyy = 0.
Hint: See Problem 1.2.

Problem 2.11
Let u(x, t) be a function such that uxx exists and u(0, t) = u(L, t) = 0 for all
t ∈ R. Prove that

\int_0^L u_{xx}(x, t)u(x, t)\,dx \le 0.

Problem 2.12
Consider the initial value problem
ut + uxx = 0, x ∈ R, t > 0
u(x, 0) = 1.
(a) Show that u(x, t) ≡ 1 is a solution to this problem.
(b) Show that u_n(x, t) = 1 + \frac{e^{n^2 t}}{n}\sin nx is a solution to the initial value
problem

u_t + u_{xx} = 0,  x ∈ R, t > 0
u(x, 0) = 1 + \frac{\sin nx}{n}.
(c) Find sup{|un (x, 0) − 1| : x ∈ R}.
(d) Find sup{|un (x, t) − 1| : x ∈ R}.
(e) Show that the problem is ill-posed.

Problem 2.13
Find the general solution of each of the following PDEs by means of direct
integration.
(a) ux = 3x2 + y 2 , u = u(x, y).
(b) uxy = x2 y, u = u(x, y).
(c) uxyz = 0, u = u(x, y, z).
(d) uxtt = e2x+3t , u = u(x, t).

Problem 2.14
Consider the second-order PDE

uxx + 4uxy + 4uyy = 0.

(a) Use the change of variables v(x, y) = y − 2x and w(x, y) = x to show


that uww = 0.
(b) Find the general solution to the given PDE.

Problem 2.15
Derive the general solution to the PDE

utt = c2 uxx

by using the change of variables v = x + ct and w = x − ct.


Review of Some ODE Results

Later on in this book, we will encounter problems where a given partial
differential equation is reduced to an ordinary differential equation by means
of a given change of variables. Then techniques from the theory of ODE are
required in solving the transformed ODE. In this chapter, we include some
of the results from ODE theory that will be needed in our future discussions.


3 The Method of Integrating Factor


In this section, we discuss a technique for solving the first order linear
non-homogeneous equation

y' + p(t)y = g(t)   (3.1)

where p(t) and g(t) are continuous on the open interval a < t < b.
Since p(t) is continuous, it has an antiderivative, namely \int p(t)dt. Let
\mu(t) = e^{\int p(t)dt}. Multiply Equation (3.1) by \mu(t) and notice that the left hand side of
the resulting equation is the derivative of a product. Indeed,

\frac{d}{dt}(\mu(t)y) = \mu(t)g(t).

Integrate both sides of the last equation with respect to t to obtain

\mu(t)y = \int \mu(t)g(t)\,dt + C.

Hence,

y(t) = \frac{1}{\mu(t)}\int \mu(t)g(t)\,dt + \frac{C}{\mu(t)}

or

y(t) = e^{-\int p(t)dt}\int e^{\int p(t)dt}g(t)\,dt + Ce^{-\int p(t)dt}.

Notice that the second term of the previous expression is just the general
solution of the homogeneous equation

y' + p(t)y = 0

whereas the first term is a solution to the non-homogeneous equation. That
is, the general solution to Equation (3.1) is the sum of a particular solution of
the non-homogeneous equation and the general solution of the homogeneous
equation.
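The recipe above is easy to carry out with a computer algebra system. The following Python sketch is an illustrative addition to these notes (it assumes the sympy library, and the variable names are ours); it applies the integrating factor to the data of Example 3.2 below and cross-checks the answer with sympy's ODE solver.

    import sympy as sp

    t = sp.symbols('t', positive=True)
    C = sp.symbols('C')
    y = sp.Function('y')

    p = 2/t          # Example 3.2 below: y' + (2/t) y = ln t, t > 0
    g = sp.log(t)

    mu = sp.simplify(sp.exp(sp.integrate(p, t)))               # integrating factor, here t^2
    y_general = sp.simplify((sp.integrate(mu*g, t) + C) / mu)  # general solution
    print(y_general)                                           # equivalent to t*ln(t)/3 - t/9 + C/t^2

    # Cross-check with sympy's built-in ODE solver.
    print(sp.dsolve(sp.Eq(y(t).diff(t) + p*y(t), g), y(t)))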

Example 3.1
Solve the initial value problem

y' - \frac{y}{t} = 4t,  y(1) = 5.

Solution.
We have p(t) = -\frac{1}{t} so that \mu(t) = \frac{1}{t}. Multiplying the given equation by the
integrating factor and using the product rule we notice that

\left(\frac{1}{t}y\right)' = 4.

Integrating with respect to t and then solving for y we find that the general
solution is given by

y(t) = t\int 4\,dt + Ct = 4t^2 + Ct.

Since y(1) = 5, we find C = 1 and hence the unique solution to the IVP is
y(t) = 4t^2 + t,  0 < t < \infty

Example 3.2
Find the general solution to the equation

y' + \frac{2}{t}y = \ln t,  t > 0.

Solution.
The integrating factor is \mu(t) = e^{\int \frac{2}{t}dt} = t^2. Multiplying the given equation
by t^2 we obtain

(t^2 y)' = t^2 \ln t.

Integrating with respect to t we find

t^2 y = \int t^2 \ln t\,dt + C.

The integral on the right-hand side is evaluated using integration by parts
with u = \ln t, dv = t^2 dt, du = \frac{dt}{t}, v = \frac{t^3}{3}, obtaining

t^2 y = \frac{t^3}{3}\ln t - \frac{t^3}{9} + C.

Thus,

y = \frac{t}{3}\ln t - \frac{t}{9} + \frac{C}{t^2}

Example 3.3
Solve

au_x + bu_y + cu = 0

by using the change of variables s = ax + by and t = bx - ay.

Solution.
By the Chain Rule for functions of two variables, we have

u_x = u_s s_x + u_t t_x = au_s + bu_t
u_y = u_s s_y + u_t t_y = bu_s - au_t.

Substituting into the given equation, we find

u_s + \frac{c}{a^2 + b^2}u = 0.

Solving this equation using the integrating factor method we find

u(s, t) = f(t)e^{-\frac{cs}{a^2 + b^2}}

where f is an arbitrary differentiable function of one variable. Switching back to x and
y we obtain

u(x, y) = f(bx - ay)e^{-\frac{c}{a^2 + b^2}(ax + by)}
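The general solution just obtained can be verified symbolically for an arbitrary differentiable f. The following sympy sketch is an illustrative addition (not part of the original notes) and should print 0.

    import sympy as sp

    x, y, a, b, c = sp.symbols('x y a b c')
    f = sp.Function('f')
    u = f(b*x - a*y) * sp.exp(-c*(a*x + b*y)/(a**2 + b**2))

    residual = a*sp.diff(u, x) + b*sp.diff(u, y) + c*u
    print(sp.simplify(residual))   # expected output: 0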

Practice Problems
Problem 3.1
Solve the IVP: y' + 2ty = t,  y(0) = 0.

Problem 3.2
Find the general solution: y' + 3y = t + e^{-2t}.

Problem 3.3
Find the general solution: y' + \frac{1}{t}y = 3\cos t,  t > 0.

Problem 3.4
Find the general solution: y' + 2y = \cos(3t).

Problem 3.5
Find the general solution: y' + (\cos t)y = -3\cos t.

Problem 3.6
Given that the solution to the IVP ty' + 4y = αt^2,  y(1) = -\frac{1}{3} exists on the
interval -\infty < t < \infty. What is the value of the constant α?

Problem 3.7
Suppose that y(t) = Ce^{-2t} + t + 1 is the general solution to the equation
y' + p(t)y = g(t). Determine the functions p(t) and g(t).

Problem 3.8
Suppose that y(t) = -2e^{-t} + e^t + \sin t is the unique solution to the IVP
y' + y = g(t),  y(0) = y_0. Determine the constant y_0 and the function g(t).

Problem 3.9
Find the value (if any) of the unique solution to the IVP y' + (1 + \cos t)y =
1 + \cos t,  y(0) = 3 in the long run?

Problem 3.10
Solve the initial value problem ty' = y + t,  y(1) = 7.

Problem 3.11
Show that if a and λ are positive constants, and b is any real number, then
every solution of the equation

y' + ay = be^{-λt}

has the property that y → 0 as t → ∞. Hint: Consider the cases a = λ and
a ≠ λ separately.

Problem 3.12
Solve the initial-value problem y' + y = e^t y^2,  y(0) = 1 using the substitution
u(t) = \frac{1}{y(t)}.

Problem 3.13
Solve the initial-value problem ty' + 2y = t^2 - t + 1,  y(1) = \frac{1}{2}.

Problem 3.14
Solve y' - \frac{1}{t}y = \sin t,  y(1) = 3. Express your answer in terms of the sine
integral, Si(t) = \int_0^t \frac{\sin s}{s}\,ds.

4 The Method of Separation of Variables for ODEs
The method of separation of variables that you have seen in the theory of
ordinary differential equations has an analogue in the theory of partial
differential equations (Section 17). In this section, we review the method for
ordinary differential equations.
A first order differential equation is separable if it can be written with one
variable only on the left and the other variable only on the right:

f(y)y' = g(t).

To solve this equation, we proceed as follows. Let F(t) be an antiderivative
of f(t) and G(t) be an antiderivative of g(t). Then by the Chain Rule

\frac{d}{dt}F(y) = \frac{dF}{dy}\frac{dy}{dt} = f(y)y'.

Thus,

f(y)y' - g(t) = \frac{d}{dt}F(y) - \frac{d}{dt}G(t) = \frac{d}{dt}[F(y) - G(t)] = 0.

It follows that

F(y) - G(t) = C

which is equivalent to

\int f(y)y'\,dt = \int g(t)\,dt + C.

As you can see, the result is generally an implicit equation involving a function
of y and a function of t. It may or may not be possible to solve this to
get y explicitly as a function of t. For an initial value problem, substitute the
values of t and y by t_0 and y_0 to get the value of C.
Remark 4.1
If F is a differentiable function of y and y is a differentiable function of t and
both F and y are given, then the chain rule allows us to find \frac{dF}{dt}, given by

\frac{dF}{dt} = \frac{dF}{dy} \cdot \frac{dy}{dt}.

For separable equations, we are given f(y)y' = \frac{dF}{dt} and we are asked to find
F(y). This process is referred to as "reversing the chain rule."

Example 4.1
Solve the initial value problem y' = 6ty^2,  y(1) = \frac{1}{25}.

Solution.
Separating the variables and integrating both sides we obtain

\int \frac{y'}{y^2}\,dt = \int 6t\,dt

or

\int \frac{d}{dt}\left(-\frac{1}{y}\right)dt = \int 6t\,dt.

Thus,

-\frac{1}{y(t)} = 3t^2 + C.

Since y(1) = \frac{1}{25}, we find C = -28. The unique solution to the IVP is then
given explicitly by

y(t) = \frac{1}{28 - 3t^2}
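For comparison, sympy's ODE solver handles this separable IVP directly. The snippet below is an illustrative addition (not from the original notes) and should reproduce the solution y = 1/(28 - 3t^2).

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    sol = sp.dsolve(sp.Eq(y(t).diff(t), 6*t*y(t)**2), y(t),
                    ics={y(1): sp.Rational(1, 25)})
    print(sol)   # expected: Eq(y(t), -1/(3*t**2 - 28)), i.e. y = 1/(28 - 3t^2)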
Example 4.2
Solve the IVP yy' = 4\sin(2t),  y(0) = 1.

Solution.
This is a separable differential equation. Integrating both sides we find

\int \frac{d}{dt}\left(\frac{y^2}{2}\right)dt = \int 4\sin(2t)\,dt.

Thus,

y^2 = -4\cos(2t) + C.

Since y(0) = 1, we find C = 5. Now, solving explicitly for y(t) we find

y(t) = \pm\sqrt{5 - 4\cos(2t)}.

Since y(0) = 1, we have y(t) = \sqrt{5 - 4\cos(2t)}. The interval of existence of
the solution is the interval -\infty < t < \infty

Practice Problems
Problem 4.1
Solve the (separable) differential equation

y' = te^{t^2 - \ln y^2}.

Problem 4.2
Solve the (separable) differential equation

y' = \frac{t^2 y - 4y}{t + 2}.

Problem 4.3
Solve the (separable) differential equation

ty' = 2(y - 4).

Problem 4.4
Solve the (separable) differential equation

y' = 2y(2 - y).

Problem 4.5
Solve the IVP

y' = \frac{4\sin(2t)}{y},  y(0) = 1.

Problem 4.6
Solve the IVP:

yy' = \sin t,  y\left(\frac{\pi}{2}\right) = -2.

Problem 4.7
Solve the IVP:

y' + y + 1 = 0,  y(1) = 0.

Problem 4.8
Solve the IVP:

y' - ty^3 = 0,  y(0) = 2.

Problem 4.9
Solve the IVP:

y' = 1 + y^2,  y\left(\frac{\pi}{4}\right) = -1.

Problem 4.10
Solve the IVP:

y' = t - ty^2,  y(0) = \frac{1}{2}.

Problem 4.11
Solve the equation 3u_y + u_{xy} = 0 by using the substitution v = u_y.

Problem 4.12
Solve the IVP

(2y - \sin y)y' = \sin t - t,  y(0) = 0.

Problem 4.13
State an initial value problem, with initial condition imposed at t_0 = 2,
having implicit solution y^3 + t^2 + \sin y = 4.

Problem 4.14
Can the differential equation

\frac{dy}{dx} = x^2 - xy

be solved by the method of separation of variables? Explain.
First Order Partial Differential
Equations

Many problems in the mathematical, physical, and engineering sciences deal
with the formulation and the solution of first order partial differential equa-
tions. Our first task is to understand simple first order equations. In ap-
plications, first order partial differential equations are most commonly used
to describe dynamical processes, and so time, t, is one of the independent
variables. Most of our discussion will focus on dynamical models in a single
space dimension, bearing in mind that most of the methods can be readily
extended to higher dimensional situations. First order partial differential
equations and systems model a wide variety of wave phenomena, including
transport of solvents in fluids, flood waves, acoustics, gas dynamics, glacier
motion, traffic flow, and also a variety of biological and ecological systems.
From a mathematical point of view, first order partial differential equations
have the advantage of providing conceptual basis that can be utilized in the
study of higher order partial differential equations.
In this chapter we introduce the basic definitions of first order partial dif-
ferential equations. We then derive the one dimensional spatial transport
eqution and discuss some methods of solutions. One general method of solv-
ability for quasilinear first order partial differential equation, known as the
method of characteristics, is analyzed.


5 Classification of First Order PDEs


In this section, we present the basic definitions pertained to first order PDE.
By a first order partial differential equation in two variables x and y
we mean any equation of the form

F (x, y, u, ux , uy ) = 0. (5.1)

In what follows the functions a, b, and c are assumed to be continuously
differentiable functions. If Equation (5.1) can be written in the form

a(x, y, u)ux + b(x, y, u)uy = c(x, y, u) (5.2)

then we say that the equation is quasi-linear. The following are examples
of quasi-linear equations:

uux + uy + cu2 = 0

x(y 2 + u)ux − y(x2 + u)uy = (x2 − y 2 )u.


If Equation (5.1) can be written in the form

a(x, y)ux + b(x, y)uy = c(x, y, u) (5.3)

then we say that the equation is semi-linear. The following are examples
of semi-linear equations:

xux + yuy = u2 + x2

(x + 1)2 ux + (y − 1)2 uy = (x + y)u2 .


If Equation (5.1) can be written in the form

a(x, y)ux + b(x, y)uy + c(x, y)u = d(x, y) (5.4)

then we say that the equation is linear. Examples of linear equations are:

xux + yuy = cu

(y − z)ux + (z − x)uy + (x − y)uz = 0.


A first order pde that is not linear is said to be non-linear. Examples of
non-linear equations are:
u_x + cu_y^2 = xy
u_x^2 + u_y^2 = c.
First order partial differential equations are classified as either linear or non-
linear. Clearly, linear equations are a special kind of quasi-linear equation
(5.2) if a and b are functions of x and y only and c is a linear function of u.
Likewise, semi-linear equations are quasilinear equations if a and b are func-
tions of x and y only. Also, the semi-linear equation (5.3) reduces to a linear
equation if c is linear in u.
A linear first order partial differential equation is called homogeneous if
d(x, y) ≡ 0 and non-homogeneous if d(x, y) 6= 0. Examples of linear ho-
mogeneous equations are:
xux + yuy = cu
(y − z)ux + (z − x)uy + (x − y)uz = 0.
Examples of non-homogeneous equations are:
ux + (x + y)uy − u = ex
yux + xuy = xy.
Recall that for an ordinary linear differential equation, the general solution
depends mainly on arbitrary constants. Unlike ODEs, in linear partial dif-
ferential equations, the general solution depends on arbitrary functions.
Example 5.1
Solve the equation ut (x, t) = 0.
Solution.
The general solution is given by u(x, t) = f (x) where f is an arbitrary dif-
ferentiable function of x
Example 5.2
Consider the transport equation
aut (x, t) + bux (x, t) = 0
where a and b are constants. Show that u(x, t) = f (bt − ax) is a solution
to the given equation, where f is an arbitrary differentiable function in one
variable.
Solution.
Let v(x, t) = bt − ax. Using the chain rule we see that ut (x, t) = bfv (v) and
ux (x, t) = −afv (v). Hence, aut (x, t) + bux (x, t) = abfv (v) − abfv (v) = 0
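The same verification can be done symbolically for an arbitrary differentiable f; the short sympy sketch below is an illustrative addition (not part of the original notes) and should print 0.

    import sympy as sp

    x, t, a, b = sp.symbols('x t a b')
    f = sp.Function('f')
    u = f(b*t - a*x)          # the candidate solution u(x, t) = f(bt - ax)

    print(sp.simplify(a*sp.diff(u, t) + b*sp.diff(u, x)))   # expected output: 0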

Practice Problems
Problem 5.1
Classify each of the following PDE as linear, quasi-linear, semi-linear, or non-
linear.
(a) xux + yuy = sin (xy).
(b) ut + uux = 0
(c) u_x^2 + u^3 u_y^4 = 0.
(d) (x + 3)ux + xy 2 uy = u3 .

Problem 5.2
Show that u(x, y) = ex f (2x − y), where f is a differentiable function of one
variable, is a solution to the equation

ux + 2uy − u = 0.

Problem 5.3

Show that u(x, y) = x\sqrt{xy} satisfies the equation

xux − yuy = u

subject to
u(y, y) = y 2 , y ≥ 0.

Problem 5.4
Show that u(x, y) = cos (x2 + y 2 ) satisfies the equation

−yux + xuy = 0

subject to
u(0, y) = cos y 2 .

Problem 5.5
Show that u(x, y) = y - \frac{1}{2}(x^2 - y^2) satisfies the equation

\frac{1}{x}u_x + \frac{1}{y}u_y = \frac{1}{y}

subject to u(x, 1) = \frac{1}{2}(3 - x^2).

Problem 5.6
Find a relationship between a and b if u(x, y) = f (ax+by) is a solution to the
equation 3ux − 7uy = 0 for any differentiable function f such that f'(x) ≠ 0
for all x.

Problem 5.7
Reduce the partial differential equation

aux + buy + cu = 0

to a first order ODE by introducing the change of variables s = bx − ay and t = x.

Problem 5.8
Solve the partial differential equation

ux + uy = 1

by introducing the change of variables s = x − y and t = x.

Problem 5.9
Show that u(x, y) = e−4x f (2x − 3y) is a solution to the first-order PDE

3ux + 2uy + 12u = 0.

Problem 5.10
Derive the general solution of the PDE

aut + bux = u,  b ≠ 0

by using the change of variables v = ax − bt and w = x.

Problem 5.11
Derive the general solution of the PDE

aux + buy = 0,  a ≠ 0

by using the change of variables s = bx − ay and t = x.



Problem 5.12
Write the equation

ut + cux + λu = f (x, t),  c ≠ 0

in the coordinates v = x − ct, w = x.

Problem 5.13
Suppose that u(x, t) = w(x − ct) is a solution to the PDE

xux + tut = Au

where A and c are constants. Let v = x − ct. Write the differential equation
with unknown function w(v).

6 A Review of Multivariable Calculus


In this section, we recall some concepts from vector calculus that we en-
counter later in the book.

6.1 Multiplication of Vectors: The Scalar or Dot Product
Is there such a thing as multiplying a vector by another vector? The answer
is yes. As a matter of fact there are two types of vector multiplication. The
first one is known as the scalar or dot product(4) and produces a scalar; the
second is known as the vector or cross product and produces a vector. In
this section we will discuss the former one, leaving the latter one for the next
section.
One motivation for using the dot product is the physical situation to
which it applies, namely that of computing the work done on an object by a
given force over a given distance, as shown in Figure 6.1.1.

Figure 6.1.1

Indeed, the work W is given by the expression

W = \|\vec{F}\|\,\|\vec{PQ}\|\cos θ

where \|\vec{F}\|\cos θ is the component of \vec{F} in the direction of \vec{PQ}.
Thus, we define the dot product of two vectors ~u and ~v to be the number

~u · ~v = ||~u|| ||~v|| cos θ,   0 ≤ θ ≤ π,

(4) Also called the inner product.

where θ is the angle between the two vectors as shown in Figure 6.1.2.

Figure 6.1.2
The above definition is the geometric definition of the dot product. We
next provide an algebraic way for computing the dot product. Indeed, let
~u = u_1~i + u_2~j + u_3~k and ~v = v_1~i + v_2~j + v_3~k. Then
~v − ~u = (v_1 − u_1)~i + (v_2 − u_2)~j + (v_3 − u_3)~k. Moreover, we have
||~u||^2 = u_1^2 + u_2^2 + u_3^2, ||~v||^2 = v_1^2 + v_2^2 + v_3^2, and

||~v − ~u||^2 = (v_1 − u_1)^2 + (v_2 − u_2)^2 + (v_3 − u_3)^2
             = v_1^2 − 2v_1 u_1 + u_1^2 + v_2^2 − 2v_2 u_2 + u_2^2 + v_3^2 − 2v_3 u_3 + u_3^2.

Now, applying the Law of Cosines to Figure 6.1.3 we can write

||~v − ~u||^2 = ||~u||^2 + ||~v||^2 − 2||~u|| ||~v|| cos θ.

Thus, by substitution we obtain

v_1^2 − 2v_1 u_1 + u_1^2 + v_2^2 − 2v_2 u_2 + u_2^2 + v_3^2 − 2v_3 u_3 + u_3^2 = u_1^2 + u_2^2 + u_3^2 + v_1^2 + v_2^2 + v_3^2 − 2||~u|| ||~v|| cos θ

or

||~u|| ||~v|| cos θ = u_1 v_1 + u_2 v_2 + u_3 v_3,

so that we can define the dot product algebraically by

~u · ~v = u_1 v_1 + u_2 v_2 + u_3 v_3.
Figure 6.1.3

Example 6.1.1
Compute the dot product of \vec{u} = \frac{1}{\sqrt{2}}\vec{i} + \frac{1}{\sqrt{2}}\vec{j} + \frac{1}{\sqrt{2}}\vec{k} and \vec{v} = \frac{1}{2}\vec{i} + \frac{1}{2}\vec{j} + \vec{k}, and
the angle between these vectors.

Solution.
We have

\vec{u} \cdot \vec{v} = \frac{1}{\sqrt{2}} \cdot \frac{1}{2} + \frac{1}{\sqrt{2}} \cdot \frac{1}{2} + \frac{1}{\sqrt{2}} \cdot 1 = \frac{1}{2\sqrt{2}} + \frac{1}{2\sqrt{2}} + \frac{1}{\sqrt{2}} = \sqrt{2}.

We also have

||\vec{u}||^2 = \left(\frac{1}{\sqrt{2}}\right)^2 + \left(\frac{1}{\sqrt{2}}\right)^2 + \left(\frac{1}{\sqrt{2}}\right)^2 = \frac{3}{2}

||\vec{v}||^2 = \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^2 + 1 = \frac{3}{2}.

Thus,

\cos θ = \frac{\vec{u} \cdot \vec{v}}{||\vec{u}||\,||\vec{v}||} = \frac{2\sqrt{2}}{3}.

Hence,

θ = \cos^{-1}\left(\frac{2\sqrt{2}}{3}\right) ≈ 0.34 rad ≈ 19.5°
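A quick numerical check of Example 6.1.1 (an illustrative addition to the notes, assuming the numpy library):

    import numpy as np

    u = np.array([1/np.sqrt(2), 1/np.sqrt(2), 1/np.sqrt(2)])
    v = np.array([0.5, 0.5, 1.0])

    dot = u @ v
    theta = np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v)))
    print(dot)                        # ~1.4142, i.e. sqrt(2)
    print(theta, np.degrees(theta))   # ~0.3398 rad, ~19.47 degrees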

Remark 6.1.1
The algebraic definition of the dot product extends to vectors with any num-
ber of components.

Next, we discuss a few properties of the dot product.

Theorem 6.1.1
For any vectors ~u, ~v , and w ~ and any scalar λ we have
(i) Commutative law: ~u · ~v = ~v · ~u.
(ii) Distributive law: (~u + ~v ) · w ~ = ~u · w
~ + ~v · w.
~
(iii) ~u · (λ~v ) = (λ~u) · ~v = λ(~u · ~v ).
(iv) Magnitude: ||~u||2 = ~u · ~u.
(v) Two nonzero vectors ~u and ~v are orthogonal or perpendicular if and
only if ~u · ~v = 0.
(vi) Two nonzero vectors ~u and ~v are parallel if and only if ~u · ~v = ±||~u|| ||~v ||.
(vii) ~0 · ~v = 0.

Proof.
Write ~u = u1~i + u2~j + u3~k, ~v = v1~i + v2~j + v3~k, and w ~ = w1~i + w2~j + w3~k.
Then
(i) ~u · ~v = u_1 v_1 + u_2 v_2 + u_3 v_3 = v_1 u_1 + v_2 u_2 + v_3 u_3 = ~v · ~u since the product of
numbers is commutative.
(ii) (~u + ~v) · ~w = ((u_1 + v_1)~i + (u_2 + v_2)~j + (u_3 + v_3)~k) · (w_1~i + w_2~j + w_3~k)
= (u_1 + v_1)w_1 + (u_2 + v_2)w_2 + (u_3 + v_3)w_3 = u_1 w_1 + u_2 w_2 + u_3 w_3 + v_1 w_1 + v_2 w_2 + v_3 w_3 = ~u · ~w + ~v · ~w.
(iii) ~u ·(λ~v ) = (u1~i+u2~j +u3~k)·(λv1~i+λv2~j +λv3~k) = λu1 v1 +λu2 v2 +λu3 v3 =
λ(u1 v1 + u2 v2 + u3 v3 ) = λ(~u · ~v ).
(iv) ||~u||2 = ~u · ~u cos 0 = ~u · ~u.
(v) If ~u and ~v are perpendicular then the cosine of their angle is zero and so
the dot product is zero. Conversely, if the dot product of the two vectors is
zero then the cosine of their angle is zero and this happens only when the
two vectors are perpendicular.
(vi) If ~u and ~v are parallel then the cosine of their angle is either 1 or −1.
That is, ~u · ~v = ±||~u|| ||~v ||. Conversely, if ~u · ~v = ±||~u|| ||~v || then cos θ = ±1
and this implies that either θ = 0 or θ = π. In either case, the two vectors
are parallel.
(vii) In 3-D, ~0 = < 0, 0, 0 > and ~v = < a, b, c > so that ~0 · ~v = (0)(a) + (0)(b) + (0)(c) = 0

Remark 6.1.2
Note that the unit vectors ~i, ~j, ~k associated with the coordinate axes satisfy
the equalities
~i · ~i = ~j · ~j = ~k · ~k = 1 and ~i · ~j = ~j · ~k = ~i · ~k = 0.

Example 6.1.2
(a) Show that the vectors ~u = 3~i − 2~j and ~v = 2~i + 3~j are perpendicular.
(b) Show that the vectors ~u = 2~i + 6~j − 4~k and ~v = −3~i − 9~j + 6~k are parallel.

Solution.
(a) We have: ~u · ~v = 3(2) − 2(3) = 0. Hence ~u is perpendicular to ~v .
(b) We have:

\cos θ = \frac{\vec{u}\cdot\vec{v}}{\|\vec{u}\|\,\|\vec{v}\|} = \frac{2(-3) + (6)(-9) - 4(6)}{\sqrt{2^2 + 6^2 + (-4)^2}\,\sqrt{(-3)^2 + (-9)^2 + 6^2}} = -1.

Hence, θ = π so that the two vectors are parallel. Another way to see that
the vectors are parallel is to notice that ~u = -\frac{2}{3}~v

Projection of a vector onto a line


The orthogonal projection of a vector along a line is obtained by taking a
vector with same length and direction as the given vector but with its tail on
the line and then dropping a perpendicular onto the line from the tip of the
vector. The resulting vector on the line is the vector’s orthogonal projection
or simply its projection. See Figure 6.1.4.

Figure 6.1.4

Now, if ~u is a unit vector along the line of projection and if ~vparallel is the
vector projection of ~v onto ~u then

~vparallel = (||~v || cos θ)~u = (~v · ~u)~u.

See Figure 6.1.5. Also, the component perpendicular to ~u is given by

~vperpendicular = ~v − ~vparallel .

Figure 6.1.5

We call Comp~u~v = ~v · ~u the scalar projection of ~v onto ~u. We call the
vector Proj~u~v = ~vparallel the vector projection of ~v onto ~u.
It follows that the vector ~v can be written in terms of ~vparallel and ~vperpendicular
~v = ~vparallel + ~vperpendicular .
Example 6.1.3
Write the vector ~v = 3~i + 2~j − 6~k as the sum of two vectors, one parallel,
and one perpendicular to ~w = 2~i − 4~j + ~k.

Solution.
Let \vec{u} = \frac{\vec{w}}{\|\vec{w}\|} = \frac{2}{\sqrt{21}}\vec{i} - \frac{4}{\sqrt{21}}\vec{j} + \frac{1}{\sqrt{21}}\vec{k}. Then,

\vec{v}_{parallel} = (\vec{v}\cdot\vec{u})\vec{u} = \left(\frac{6}{\sqrt{21}} - \frac{8}{\sqrt{21}} - \frac{6}{\sqrt{21}}\right)\vec{u} = -\frac{16}{21}\vec{i} + \frac{32}{21}\vec{j} - \frac{8}{21}\vec{k}.

Also,

\vec{v}_{perpendicular} = \vec{v} - \vec{v}_{parallel} = \left(3 + \frac{16}{21}\right)\vec{i} + \left(2 - \frac{32}{21}\right)\vec{j} + \left(-6 + \frac{8}{21}\right)\vec{k}
= \frac{79}{21}\vec{i} + \frac{10}{21}\vec{j} - \frac{118}{21}\vec{k}.

Hence,

\vec{v} = \vec{v}_{parallel} + \vec{v}_{perpendicular}
Example 6.1.4
Find the scalar projection and vector projection of ~u = < 1, 1, 2 > onto
~v = < −2, 3, 1 >.

Solution.
We have

comp_{\vec{v}}\vec{u} = \frac{\vec{u}\cdot\vec{v}}{\|\vec{v}\|} = \frac{1(-2) + (1)(3) + 2(1)}{\sqrt{(-2)^2 + 3^2 + 1^2}} = \frac{3}{\sqrt{14}}

Proj_{\vec{v}}\vec{u} = \frac{\vec{u}\cdot\vec{v}}{\|\vec{v}\|}\frac{\vec{v}}{\|\vec{v}\|} = \frac{3}{14}\vec{v} = -\frac{3}{7}\vec{i} + \frac{9}{14}\vec{j} + \frac{3}{14}\vec{k}
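The projection formulas are a two-line computation in numpy. The sketch below is an illustrative addition (not from the original notes) reproducing Example 6.1.4 numerically.

    import numpy as np

    u = np.array([1.0, 1.0, 2.0])
    v = np.array([-2.0, 3.0, 1.0])

    comp = (u @ v) / np.linalg.norm(v)   # scalar projection of u onto v
    proj = (u @ v) / (v @ v) * v         # vector projection of u onto v
    print(comp)   # 3/sqrt(14) ~ 0.8018
    print(proj)   # [-3/7, 9/14, 3/14] ~ [-0.4286, 0.6429, 0.2143]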

Applications
As pointed out earlier in the section, scalar products are used in Physics.
For instance, in finding the work done by a force applied on an object.

Example 6.1.5
A wagon is pulled a distance of 100 m along a horizontal path by a constant
force of 70 N. The handle of the wagon is held at an angle of 35◦ above the
horizontal. Find the work done by the force.

Solution.
The work done is

W = F · d cos 35◦ = 70(100) cos 35◦ ≈ 5734 J



Practice Problems

Problem 6.1.1
Find ~a · ~b where ~a = < 4, 1, \frac{1}{4} > and ~b = < 6, −3, −8 >.

Problem 6.1.2
Find ~a · ~b where ||~a|| = 6, ||~b|| = 5 and the angle between the two vectors is
120◦ .

Problem 6.1.3
If ~u is a unit vector, find ~u · ~v and ~u · w
~ using the figure below.

Problem 6.1.4
Find the angle between the vectors ~a =< 4, 3 > and ~b =< 2, −1 > .

Problem 6.1.5
Find the angle between the vectors ~a =< 4, −3, 1 > and ~b =< 2, 0, −1 > .

Problem 6.1.6
Determine whether the given vectors are orthogonal, parallel, or neither.
(a) ~a =< −5, 3, 7 > and ~b =< 6, −8, 2 > .
(b) ~a =< 4, 6 > and ~b =< −3, 2 > .
(c) ~a = −~i + 2~j + ~k and ~b = 3~i + 4~j − ~k.
(d) ~a = 2~i + 6~j − 4~k and ~b = −3~i − 9~j + 6~k.

Problem 6.1.7
Use vectors to decide whether the triangle with vertices P (1, −3, −2), Q(2, 0, −4),
and R(6, −2, −5) is right-angled.

Problem 6.1.8
Find a unit vector that is orthogonal to both ~i + ~j and ~i + ~k.

Problem 6.1.9
Find the acute angle between the lines 2x − y = 3 and 3x + y = 7.

Problem 6.1.10
Find the scalar and vector projections of the vector ~b =< 1, 2, 3 > onto
~a =< 3, 6, −2 > .

Problem 6.1.11
If ~a =< 3, 0, −1 >, find a vector ~b such that comp~a~b = 2.

Problem 6.1.12
Find the work done by a force F~ = 8~i − 6~j + 9~k that moves an object from
the point (0, 10, 8) to the point (6, 12, 20) along a straight line. The distance
is measured in meters and the force in newtons.

Problem 6.1.13
A sled is pulled along a level path through snow by a rope. A 30-lb force
acting at an angle of 40◦ above the horizontal moves the sled 80 ft. Find the
work done by the force.

6.2 Directional Derivatives and the Gradient Vector


Given a function z = f (x, y) and let (x0 , y0 ) be in the domain of f. We wish
to find the rate of change of f at (x0 , y0 ) in the direction of a unit vector
~u =< a, b > . To do this, we consider the vertical plane to the graph S of f
that passes through the point P (x0 , y0 , z0 ) in the direction of ~u. This plane
inersects the graph S in a curve C. (See Figure 6.2.1.)

Figure 6.2.1

The slope of the tangent line T to C at the point P is the rate of change
of z in the direction of ~u. Let Q(x, y, z) be an arbitrary point on C and let
P 0 (x0 , y0 , 0) and Q0 (x, y, 0) be the orthogonal projection of P and Q respec-
−−→
tively onto the xy−plane. Then the vectors P 0 Q0 =< x−x0 , y−y0 , 0 > is par-
−−→
allel to ~u so that P 0 Q0 = h~u for some scalar h. Hence, x = x0 +ha, y = y0 +hb
and
f (x, y) − f (x0 , y0 ) f (x0 + ha, y0 + hb) − f (x0 , y0 )
= .
h h
If we take the limit of the above average rate as h → 0, we obtain the rate
of change of z(with respect to distance) in the direction of ~u, which is called
the directional derivative of f at (x0 , y0 ) in the direction of ~u. We write

f (x0 + ah, y0 + bh) − f (x0 , y0 )


f~u (x0 , y0 ) = lim .
h→0 h

Notice that if ~u = ~i then a = 1 and b = 0, so that f~u (x0 , y0 ) = fx (x0 , y0 ).
That is, fx is the rate of change of f in the x-direction. Likewise, if ~u = ~j
then a = 0 and b = 1 so that f~u (x0 , y0 ) = fy (x0 , y0 ).
The following theorem provides a formula for computing the directional
derivative.

Theorem 6.2.1
If f is a differentiable function of x and y, then f has a directional derivative
in the direction of any unit vector ~u =< a, b > and

f~u (x, y) = fx (x, y)a + fy (x, y)b.

Proof.
Fix a point (x0 , y0 ) in the domain of f and consider the single variable function g(h) = f (x0 + ha, y0 + hb). Then

g′(0) = lim_(h→0) [g(h) − g(0)]/h = lim_(h→0) [f (x0 + ah, y0 + bh) − f (x0 , y0 )]/h = f~u (x0 , y0 ).
Let x = x0 + ah and y = y0 + bh. Using the Chain Rule, we find
g′(h) = (∂f/∂x)(dx/dh) + (∂f/∂y)(dy/dh) = fx (x, y)a + fy (x, y)b.
Letting h = 0 in the above expression, we find

f~u (x0 , y0 ) = g 0 (0) = fx (x0 , y0 )a + fy (x0 , y0 )b (6.2.1)

Example 6.2.1
Find u~v (4, 0) if u(x, y) = x + y^2 and ~v = < 1/2, √3/2 >.

Solution.
We have ux (x, y) = 1 and uy (x, y) = 2y. Hence,

u~v (4, 0) = ux (4, 0)(1/2) + uy (4, 0)(√3/2) = 1/2
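
The computation above can be checked with a computer algebra system. The following short sketch (not part of the original notes; it assumes the Python library SymPy is available) evaluates ∇u · ~v at (4, 0) for the function and unit vector of Example 6.2.1.

```python
import sympy as sp

x, y = sp.symbols("x y")
u = x + y**2                                        # the function of Example 6.2.1
v = sp.Matrix([sp.Rational(1, 2), sp.sqrt(3)/2])    # the unit vector ~v

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])  # gradient <u_x, u_y>
directional = grad_u.dot(v)                         # u_v(x, y) = grad u . v

print(directional.subs({x: 4, y: 0}))               # expected output: 1/2
```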

The Gradient Vector


The gradient is a generalization of the usual concept of derivative of a
function of one variable to functions of several variables. For a function
u(x, y) or u(x, y, z), the gradients are, respectively,

∇u(x, y) = ux~i + uy~j and ∇u(x, y, z) = ux~i + uy~j + uz~k.

Example 6.2.2
Let F (x, y, z) = u(x, y) − z. Find ∇F (x, y, z).

Solution.
We have
∇F (x, y, z) = ux~i + uy~j − ~k

Example 6.2.3
Find the gradient vector of f (x, y, z) = (2x − 3y + 5z)5 .

Solution.
We have

fx (x, y, z) = 10(2x − 3y + 5z)^4
fy (x, y, z) = −15(2x − 3y + 5z)^4
fz (x, y, z) = 25(2x − 3y + 5z)^4.

Thus,
∇f (x, y, z) = 5(2x − 3y + 5z)4 [2~i − 3~j + 5~k]
With the notation for the gradient vector, we can rewrite the expression
(6.2.1) for the directional derivative as

f~u (x0 , y0 ) = ∇f (x0 , y0 ) · ~u.

This expresses the directional derivative in the direction of ~u as the scalar


projection of the gradient vector onto ~u.

Maximizing the Directional Derivative


Suppose we have a function of two or three variables and we consider all
possible directional derivatives of f at a given point. These give the rates
of change of f in all possible directions. We can then ask the questions: In
which of these directions does f change fastest and what is the maximum
rate of change? The answers are provided by the following theorem.

Theorem 6.2.2
The maximum value of the directional derivative of a function f (x, y) or
f (x, y, z) at a point (x, y) or (x, y, z) is ||∇f || and it occurs in the direction
of the gradient of f at that point.

Proof.
We have
f~u (x, y) = ∇f · ~u = ||∇f ||||~u|| cos θ = ||∇f || cos θ,

where θ is the angle between ∇f and ~u. The maximum value of cos θ is 1 and
this occurs when θ = 0. Therefore the maximum value of f~u is ||∇f || and it
occurs when θ = 0, that is, when ~u has the same direction as ∇f

Example 6.2.4
Find the maximum rate of change of the function u(x, y) = 50 − x2 − 2y 2 at
the point (1, −1).

Solution.
The maximum rate of change occurs in the direction of the gradient vector:

∇u(1, −1) = ux (1, −1)~i + uy (1, −1)~j = −2~i + 4~j.

The maximum rate of change at (1, −1) is

||∇u(1, −1)|| = √((−2)^2 + 4^2) = 2√5
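
As a quick numerical check (a sketch added to these notes, not from the original text, assuming SymPy), one can compute the gradient of u at (1, −1) and its norm:

```python
import sympy as sp

x, y = sp.symbols("x y")
u = 50 - x**2 - 2*y**2

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)]).subs({x: 1, y: -1})
print(grad_u)          # Matrix([[-2], [4]]), the direction of fastest increase
print(grad_u.norm())   # 2*sqrt(5), the maximum rate of change
```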

Significance of the Gradient Vector


Suppose that a curve in 3-D is defined parametrically by the equations x =
x(t), y = y(t), z = z(t), where t is a parameter. This curve can be described
by the vector function

~r(t) = x(t)~i + y(t)~j + z(t)~k.

Its derivative is the tangent vector to the curve (See Figure 6.2.2) and is
given by
d/dt(~r(t)) = (dx/dt)~i + (dy/dt)~j + (dz/dt)~k.

Figure 6.2.2
Now, for a function of two variables u(x, y), the equation u(x, y) = C is
called a level curve of u (for a function of three variables, u(x, y, z) = C is
called a level surface of u). The level curves u(x, y) = C are just the traces
of the graph of u(x, y) in the horizontal plane z = C projected down to the xy−plane.
An important property of the gradient of u is that it is normal to a level
surface of u at every point. To see this, let S be the level surface f (x, y, z) = k
and P0 (x0 , y0 , z0 ) be a point on S. Let C be any curve on S that passes
through P0 . We can describe C in parametric form x = x(t), y = y(t), and
z = z(t). Any point on C satisfies f (x(t), y(t), z(t)) = k. Differentiating both
sides of this equation with respect to t we find by means of the Chain Rule
fx (x, y, z)x0 (t) + fy (x, y, z)y 0 (t) + fz (x, y, z)z 0 (t) = 0

which can be written as ∇f · ~r′(t) = 0. This means that the gradient is normal
to a level surface (respectively a level curve). See Figure 6.2.3.

Figure 6.2.3

If we consider a topographical map of a hill and let f (x, y) represent the


height above sea level at a point with coordinates (x, y), then a curve of
steepest ascent can be drawn as in Figure 6.2.4 by making it perpendicular
to all of the contour lines.

Figure 6.2.4

Vector Fields and Integral Curves

In vector calculus, a vector field is a function F~ (x, y) (or F~ (x, y, z) in 3-D


space) that assigns a vector to each point of its domain as shown in Figure
6.2.5.

Figure 6.2.5

Creating vector fields manually is very tedious. Thus, vector fields are gen-
erally generated using computer software such as Mathematica, Maple, or
MATLAB.

Example 6.2.5
The gradient vector of a function is an example of a vector field called the
gradient vector field. Sketch the gradient vector field of the function

u(x, y) = x2 + y 2 .

Describe the level curves of u(x, y).

Solution.
The gradient vector field of the given function is

∇u(x, y) = 2x~i + 2y~j.

A level curve is defined by the equation

x2 + y 2 = C, C ≥ 0.

Thus, level curves are circles centered at the origin. Figure 6.2.6 shows the
gradient vector field as well as some of the level curves.

Figure 6.2.6

For example, at the point (1, 2), the corresponding vector in the vector field
is ∇u(1, 2) = 2~i + 4~j, drawn with its tail at (1, 2) and its tip at (3, 6)
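
For readers who want to reproduce a picture like Figure 6.2.6, the following sketch (an illustration added to these notes, assuming NumPy and Matplotlib are installed) plots the gradient field ∇u = 2x~i + 2y~j together with a few circular level curves:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 15)
y = np.linspace(-2, 2, 15)
X, Y = np.meshgrid(x, y)

U, V = 2*X, 2*Y                      # gradient of u(x, y) = x^2 + y^2

fig, ax = plt.subplots()
ax.quiver(X, Y, U, V, color="steelblue")                               # the vector field
ax.contour(X, Y, X**2 + Y**2, levels=[0.5, 1, 2, 3], colors="gray")    # some level curves
ax.set_aspect("equal")
ax.set_title("Gradient field of u = x^2 + y^2")
plt.show()
```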

An integral curve of a vector field is a smooth curve⁵ Γ such that, at each
point of Γ, the vector F~ (x, y) assigned by the field is tangent to Γ. For example, the integral curves
of the vector field F~ (x, y) = y~i − x~j are circles centered at the origin. See
Figure 6.2.7.

Figure 6.2.7

⁵ If ~r(t) is a parametrization of Γ then ~r′(t) is continuous and ~r′(t) ≠ ~0.

Practice Problems
Problem 6.2.1
Find the gradient of the function

F (x, y, z) = exyz + sin (xy).

Problem 6.2.2
Find the gradient of the function
F (x, y, z) = x cos(y/z).
Problem 6.2.3
Describe the level surfaces of the function f (x, y, z) = (x − 2)2 + (y − 3)2 +
(z + 5)2 .

Problem 6.2.4
Find the directional derivative of u(x, y) = 4x2 + y 2 in the direction of ~a =
~i + 2~j at the point (1, 1).

Problem 6.2.5
Find the directional derivative of u(x, y, z) = x2 z +y 3 z 2 −xyz in the direction
of ~a = −~i + 3~k at the point (x, y, z).

Problem 6.2.6
Find the maximum rate of change of the function u(x, y) = yexy at the point
(0, 2) and the direction in which this maximum occurs.

Problem 6.2.7
Find the gradient vector field for the function u(x, y, z) = ez − ln (x2 + y 2 ).

7 Solvability of Semi-linear First Order PDEs


In this section we discuss the solvability of the semi-linear first order PDE

a(x, y)ux + b(x, y)uy = f (x, y, u) (7.1)

via the method of characteristics.


To solve (7.1), we proceed as follows. Suppose we have found a solution
u(x, y) to (7.1). This solution may be interpreted geometrically as a surface
in (x, y, z) space called the integral surface where z = u(x, y). This integral
surface can be viewed as the level surface of the function

F (x, y, z) = u(x, y) − z = 0.

Then equation (7.1) can be written as the dot product

~v · ~n = 0 (7.2)

where ~v = < a, b, f > is the characteristic direction and ~n = ∇F (x, y, z) = < ux , uy , −1 >. Note that ~n is normal to the surface F (x, y, z) = 0 and is point-
ing downward. Hence, ~n is normal to ~v and this implies that ~v is tangent
to the surface F = 0 at (x, y, z). So our task of finding a solution to (7.1) is
equivalent to finding a surface S such that at every point on the surface the
vector
~v = a~i + b~j + f (x, y, u)~k
is tangent to the surface. How do we construct such a surface? The idea
is to find the integral curves of the vector field ~v (see Section 6.2) and then
patch all these curves together to obtain the desired surface.
To this end, we start first by constructing a curve Γ parametrized by t such
that at each point of Γ the vector ~v is tangent to Γ. A parametrization of
this curve is given by the vector function

~r(t) = x(t)~i + y(t)~j + u(t)~k.

Then the tangent vector is

~r′(t) = d/dt(~r(t)) = (dx/dt)~i + (dy/dt)~j + (du/dt)~k.

Hence, the vectors r~0 (t) and ~v are parallel so these two vectors are propor-
tional and this leads to the ODE system
(dx/dt)/a = (dy/dt)/b = (du/dt)/f (x, y, u)                   (7.3)

or in differential form

dx/a = dy/b = du/f (x, y, u).                   (7.4)
By solving the system (7.3) or (7.4), we are assured that the vector ~v is
tangent to the curve Γ which in turn lies in the solution surface S. In our
context, integral curves are called characteristic curves or simply char-
acteristics of the PDE (7.1). We call (7.3) the characteristic equations.
The projection of Γ into the xy−plane is called the projected character-
istic curve.
Once we have found the characteristic curves, the surface S is the union of
these characteristic curves. In summary, by introducing these characteristic
equations, we have reduced our partial differential equation to a system of
ordinary differential equations. We can use ODE theory to solve the charac-
teristic equations, then piece together these characteristic curves to form a
surface. Such a surface will provide us with a solution to our PDE.
Remark 7.1
Solving dy/dx = b/a one obtains the general solution h(x, y) = k1 where k1
is a constant. Likewise, solving du/dx = f/a one obtains the general solution
j(x, y, u) = k2 where k2 is a constant. The constant k2 is a function of
k1. For the sake of discussion, suppose that h(x, y) = k1 can be expressed as
y = g(x, k1). Then the y in du/dx = f/a is replaced by g(x, k1) so that the
constant in j(x, y, u) = k2 will depend on k1.
Example 7.1
Find the general solution to aux + buy = 0 where a and b are constants with a ≠ 0.

Solution.
From (7.3) we can write dy/dx = b/a which yields bx − ay = k1 for some arbitrary
constant k1. From du/dx = 0 we find u(x, y) = k2 where k2 is a constant. That
is, u(x, y) is constant on Γ. Since (0, −k1/a, k2) is on Γ, we have

u(x, y) = u(0, −k1/a) = k2

which shows that k2 is a function of k1 . Hence,

u(x, y) = f (k1 ) = f (bx − ay)

where f is a differentiable function in one variable

In the next example, we show how the initial value problem for the PDE
determines the function f.

Example 7.2
Find the unique solution to aux + buy = 0, where a and b are constants with
a 6= 0, with the initial condition u(x, 0) = g(x).

Solution.
From the previous example, we found u(x, y) = f (bx − ay) for some differen-
tiable function f. Since u(x, 0) = g(x), we find g(x) = f (bx) or f (x) = g(x/b),
assuming that b ≠ 0. Thus,

u(x, y) = g(x − (a/b)y)

Example 7.3
Find the solution to −3ux + uy = 0, u(x, 0) = e^(−x^2).

Solution.
We have a = −3, b = 1 and g(x) = e^(−x^2). The unique solution is given by

u(x, y) = e^(−(x+3y)^2)
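
The answer can be verified symbolically. The sketch below (not part of the original notes; it assumes SymPy) substitutes u(x, y) = e^(−(x+3y)^2) into the PDE and the initial condition:

```python
import sympy as sp

x, y = sp.symbols("x y")
u = sp.exp(-(x + 3*y)**2)

print(sp.simplify(-3*sp.diff(u, x) + sp.diff(u, y)))   # 0, so -3u_x + u_y = 0
print(u.subs(y, 0))                                     # exp(-x**2), the initial condition
```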

Example 7.4
Find the general solution of the equation

xux + yuy = xe−u , x > 0.

Solution.
We have a(x, y) = x, b(x, y) = y, and f (x, y, u) = xe^(−u). So we have to solve the system

dy/dx = y/x,    du/dx = e^(−u).

From the first equation, we can use the separation of variables method to find
y = k1 x for some constant k1 . Solving the second equation by the method of
separation of variables, we find

e^u − x = k2.

But k2 = g(k1) so that

e^u − x = g(k1) = g(y/x)
where g is a differentiable function of one variable

Example 7.5
Find the general solution of the equation

ux + uy − u = y.

Solution.
The characteristic equations are

dx/1 = dy/1 = du/(u + y) = d(u + y + 1)/(u + y + 1).

Solving the equation dy/dx = 1 we find y − x = k1. Solving the equation
dx = d(u + y + 1)/(u + y + 1), we find u + y + 1 = k2 e^x = f (y − x)e^x, where f is a differentiable
function of one variable. Hence,

u = −(1 + y) + f (y − x)e^x
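
One can check this general solution directly, treating f as an arbitrary differentiable function. The following sketch (added here, not from the original notes, assuming SymPy) confirms that u satisfies ux + uy − u = y:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Function("f")                      # arbitrary differentiable function
u = -(1 + y) + f(y - x)*sp.exp(x)

lhs = sp.diff(u, x) + sp.diff(u, y) - u
print(sp.simplify(lhs))                   # y, as required by the PDE
```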

Example 7.6
Find the general solution to x2 ux + y 2 uy = (x + y)u.

Solution.
Using properties of proportions⁶ we have

dx/x^2 = dy/y^2 = du/[(x + y)u] = (dx − dy)/(x^2 − y^2).

⁶ If a/b = c/d then (a ± b)/b = (c ± d)/d. Also, a/b = c/d = e/f = (αa + βc + γe)/(αb + βd + γf).

Solving dy/dx = y^2/x^2 by the method of separation of variables we find 1/x − 1/y = k1.
From the equation du/[(x + y)u] = d(x − y)/(x^2 − y^2) we find

du/u = d(x − y)/(x − y)

which implies

u = k2 (x − y) = f (1/x − 1/y)(x − y)

Example 7.7
Find the solution satisfying yux + xuy = x2 + y 2 subject to the conditions
u(x, 0) = 1 + x2 and u(0, y) = 1 + y 2 .

Solution.
Solving the equation dy/dx = x/y we find x^2 − y^2 = k1. On the other hand, we have

du = y^(−1)(x^2 + y^2)dx
   = y dx + x^2 y^(−1) dx
   = y dx + x^2 y^(−1)(y/x) dy
   = y dx + x dy = d(xy).

Hence,
u(x, y) = xy + f (x2 − y 2 ).
From u(x, 0) = 1 + x2 we find f (x) = 1 + x, x ≥ 0. From u(0, y) = 1 + y 2 we
find f (y) = 1 − y, y ≤ 0. Hence, f (x) = 1 + |x| and

u(x, y) = xy + |x2 − y 2 |
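
Before the initial data are imposed, the general form u = xy + f (x^2 − y^2) can be checked symbolically. The sketch below (not from the original notes; it assumes SymPy) verifies that it satisfies yux + xuy = x^2 + y^2 for any differentiable f:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Function("f")                      # arbitrary differentiable function
u = x*y + f(x**2 - y**2)

lhs = y*sp.diff(u, x) + x*sp.diff(u, y)
print(sp.simplify(lhs - (x**2 + y**2)))   # 0
```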

Remark 7.2
The method of characteristics discussed in this section applies as well to any
quasi-linear first order PDE. See Chapter 9.

Practice Problems
Problem 7.1
Solve ux + yuy = y 2 with the initial condition u(0, y) = sin y.

Problem 7.2
Solve ux + yuy = u2 with the initial condition u(0, y) = sin y.

Problem 7.3
Find the general solution of yux − xuy = 2xyu.

Problem 7.4
Find the integral surface of the IVP: xux + yuy = u, u(x, 1) = 2 + e−|x| .

Problem 7.5
Find the unique solution to 4ux + uy = u^2, u(x, 0) = 1/(1 + x^2).

Problem 7.6
Find the unique solution to e^(2y) ux + xuy = xu^2, u(x, 0) = e^(x^2).

Problem 7.7
Find the unique solution to xux + uy = 3x − u, u(x, 0) = tan−1 x.

Problem 7.8
Solve: xux − yuy = 0, u(x, x) = x4 .

Problem 7.9
Find the general solution of yux − 3x2 yuy = 3x2 u.

Problem 7.10
Find u(x, y) that satisfies yux + xuy = 4xy 3 subject to the boundary condi-
tions u(x, 0) = −x4 and u(0, y) = 0.

8 Linear First Order PDE: The One Dimensional Spatial Transport Equations
Modeling is the process of writing a differential equation to describe a physi-
cal situation. In this section we derive the one-dimensional spatial transport
equation and use the method of characteristics to solve it.

Linear Transport Equation for Fluid Flows


We shall describe the transport of a dissolved chemical by water that is trav-
eling with uniform velocity c through a long thin tube G with uniform cross
section A. (The very same discussion applies to the description of the trans-
port of gas by air moving through a pipe.) We assume the velocity c > 0
is in the (rightward) positive direction of the x−axis. We will also assume
that the concentration of the chemical is constant across the cross section
A located at x so that the chemical changes only in the x−direction and
thus the term one-dimensional spatial equation. This condition says that
the quantity of the chemical in a given portion of the fluid stays the same as
that portion travels along the tube with time.
Let u(x, t) be a continuously differentiable function denoting the concentra-
tion of the chemical (i.e. amount of chemical per unit volume) at position x
and time t. Then at time t0 , the amount of chemical stored in a section of
the tube between positions a and x0 (see Figure 8.1) is given by the definite
integral
∫_a^(x0) Au(s, t0)ds.

Figure 8.1

Since the water is flowing at a constant speed c, at time t0 + h the same
quantity of chemical will exist in the portion of the tube between a + ch and
x0 + ch. That is,

∫_a^(x0) Au(s, t0)ds = ∫_(a+ch)^(x0+ch) Au(s, t0 + h)ds.

Taking the derivative of both sides with respect to x0 and using the Funda-
mental Theorem of Calculus, we find

u(x0 , t0 ) = u(x0 + ch, t0 + h).

Now, taking the derivative of this last equation with respect to h and using
the chain rule, with x = x0 + ch, t = t0 + h, we find

0 = ut (x0 + ch, t0 + h) + cux (x0 + ch, t0 + h).

Taking the limit of this last equation as h approaches 0 and using the fact
that ut and ux are continuous, we find

ut (x0 , t0 ) + cux (x0 , t0 ) = 0. (8.1)

Since x0 and t0 are arbitrary, Equation (8.1) is true for all (x, t). This equation
is called the transport equation in one-dimensional space. It is a linear,
homogeneous first order partial differential equation.
Note that (8.1) can be written in the form

< 1, c > · < ut , ux >= 0

so that the left-hand side of (8.1) is the directional derivative of u(t, x) at


(t, x) in the direction of the vector < 1, c > .

Solvability via the method of characteristics


We will use the method of characteristics discussed in Chapter 7 to solve
(8.1). The characteristic equations are

dt = dx/c = du/0.

Thus, to solve (8.1), we solve the system of ODEs

dt/dx = 1/c,    du/dx = 0.

Solving the first equation, we find x − ct = k1 . Solving the second equation


we find
u(x, t) = k2 = f (k1 ) = f (x − ct).
One can check that this is indeed a solution to (8.1). Indeed, by using the
chain rule one finds

ut = −cf 0 (x − ct) and ux = f 0 (x − ct).

Hence, by substituting these results into the equation one finds

ut + cux = −cf 0 (x − ct) + cf 0 (x − ct) = 0.

The solution u(x, t) = f (x − ct) is called the right traveling wave, since
the graph of the function f (x − ct) at a given time t is the graph of f (x)
shifted to the right by the value ct. Thus, with growing time, the function
f (x) is moving without changes to the right at the speed c.

An initial value condition determines a unique solution to the transport equa-


tion as shown in the next example.

Example 8.1
Find the solution to ut − 3ux = 0, u(x, 0) = e^(−x^2).

Solution.
The characteristic equations lead to the ODEs

dt/dx = −1/3,    du/dx = 0.

Solving the first equation, we find 3t + x = k1. From the second equation, we
find u(x, t) = k2 = f (k1) = f (3t + x). From the initial condition, u(x, 0) = f (x) = e^(−x^2). Hence,

u(x, t) = e^(−(3t+x)^2)
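
As a check (a sketch added to these notes, not from the original text, assuming SymPy), one can substitute the answer back into the transport equation and the initial condition:

```python
import sympy as sp

x, t = sp.symbols("x t")
u = sp.exp(-(3*t + x)**2)

print(sp.simplify(sp.diff(u, t) - 3*sp.diff(u, x)))   # 0, so u_t - 3u_x = 0
print(u.subs(t, 0))                                    # exp(-x**2)
```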

Transport Equation with Decay


Recall from ODE that a function u is an exponential decay function if it
satisfies the equation

du/dt = λu,   λ < 0.

A transport equation with decay is an equation given by

ut + cux + λu = f (x, t) (8.2)

where λ > 0 and c are constants and f is a given function representing


external resources. Note that the decay is characterized by the term λu.
Note that (8.2) is a first order linear partial differential equation that can be
solved by the method of characteristics by solving the characteristic equations

dx/c = dt/1 = du/(f (x, t) − λu).

Example 8.2
Find the general solution of the transport equation

ut + ux + u = t.

Solution.
The characteristic equations are

dx/1 = dt/1 = du/(t − u).

From the equation dx = dt we find x − t = k1. Using a property of proportions
we can write

dt/1 = du/(t − u) = (dt − du)/(1 − t + u) = −d(1 − t + u)/(1 − t + u).

Thus, 1 − t + u = k2 e^(−t) = f (x − t)e^(−t) or u(x, t) = t − 1 + f (x − t)e^(−t) where
f is a differentiable function of one variable
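
Again the general solution can be verified for an arbitrary differentiable f. The following sketch (not part of the original text; it assumes SymPy) substitutes u = t − 1 + f (x − t)e^(−t) into ut + ux + u:

```python
import sympy as sp

x, t = sp.symbols("x t")
f = sp.Function("f")                        # arbitrary differentiable function
u = t - 1 + f(x - t)*sp.exp(-t)

lhs = sp.diff(u, t) + sp.diff(u, x) + u
print(sp.simplify(lhs))                     # t, as required
```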

Practice Problems
Problem 8.1
Find the solution to ut + 3ux = 0, u(x, 0) = sin x.

Problem 8.2
Solve the equation aux + buy + cu = 0.

Problem 8.3
Solve the equation ux +2uy = cos (y − 2x) with the initial condition u(0, y) =
f (y), where f : R → R is a given function.

Problem 8.4
Show that the initial value problem ut + ux = x, u(x, x) = 1 has no solution.

Problem 8.5
Solve the transport equation ut + 2ux = −3u with initial condition u(x, 0) = 1/(1 + x^2).

Problem 8.6
Solve ut + ux − 3u = t with initial condition u(x, 0) = x2 .

Problem 8.7
Show that the decay term λu in the transport equation with decay

ut + cux + λu = 0

can be eliminated by the substitution w = ueλt .

Problem 8.8 (Well-Posed)


Let u be the unique solution to the IVP

ut + cux = 0

u(x, 0) = f (x)
and v be the unique solution to the IVP

ut + cux = 0

u(x, 0) = g(x)

where f and g are continuously differentiable functions.


(a) Show that w(x, t) = u(x, t) − v(x, t) is the unique solution to the IVP

ut + cux = 0

u(x, 0) = f (x) − g(x)


(b) Write an explicit formula for w in terms of f and g.
(c) Use (b) to conclude that the transport problem is well-posed. That is, a
small change in the initial data leads to a small change in the solution.

Problem 8.9
Solve the initial boundary value problem

ut + cux = −λu, x > 0, t > 0

u(x, 0) = 0, u(0, t) = g(t), t > 0.

Problem 8.10
Solve the first-order equation 2ut +3ux = 0 with the initial condition u(x, 0) =
sin x.

Problem 8.11
Solve the PDE ux + uy = 1.

9 Solving Quasi-Linear First Order PDE via the Method of Characteristics
In this section we develop a method for finding the general solution of a
quasi-linear first order partial differential equation

a(x, y, u)ux + b(x, y, u)uy = c(x, y, u). (9.1)

This method is called the method of characteristics or Lagrange’s method.


This method consists of transforming the PDE to a system of ODEs which
can be solved and the found solution is transformed into a solution for the
original PDE.
The method of characteristics relies on a geometrical argument. A visual-
ization of a solution is an integral surface with equation z = u(x, y). An
alternative representation of this integral surface is

F (x, y, z) = u(x, y) − z = 0.

That is, an integral surface is a level surface of the function F (x, y, z).
Now, recall from vector calculus that the gradient vector to a level surface
at the point (x, y, z) is a normal vector to the surface at that point. That
is, the gradient is a vector normal to the tangent plane to the surface at the
point (x, y, z). Thus, the normal vector to the surface F (x, y, z) = 0 is given
by
~n = ∇F = Fx~i + Fy~j + Fz~k = ux~i + uy~j − ~k.
Because of the negative z− component, the vector ~n is pointing downward.
Now, equation (9.1) can be written as the dot product

(a(x, y, u), b(x, y, u), c(x, y, u)) · (ux , uy , −1) = 0

or
~v · ~n = 0
where ~v = a(x, y, u)~i + b(x, y, u)~j + c(x, y, u)~k. Thus, ~n is normal to ~v . Since
~n is normal to the surface F (x, y, z) = 0, the vector ~v must be tangential
to the surface F (x, y, z) = 0 and hence must lie in the tangent plane to the
surface at every point. Thus, to find a solution to (9.1) we need to find an
integral surface such that the surface is tangent to the vector ~v at each of its
point.

The required surface can be found as the union of integral curves, that is,
curves that are tangent to ~v at every point on the curve. If an integral curve
has a parametrization

~r(t) = x(t)~i + y(t)~j + u(t)~k

then the integral curve (i.e. the characteristic) is a solution to the ODE
system
dx/dt = a(x, y, u),   dy/dt = b(x, y, u),   du/dt = c(x, y, u)                   (9.2)

or in differential form

dx/a(x, y, u) = dy/b(x, y, u) = du/c(x, y, u).                   (9.3)

Equations (9.2) or (9.3) are called characteristic equations. Note that


u(t) = u(x(t), y(t)) gives the values of u along a characteristic. Thus, along
a characteristic, the PDE (9.1) degenerates to an ODE.

Example 9.1
Find the general solution of the PDE yuux + xuuy = xy.

Solution.
The characteristic equations are dx/(yu) = dy/(xu) = du/(xy). Using the first two fractions
we find x^2 − y^2 = k1. Using the last two fractions we find u^2 − y^2 = f (x^2 − y^2).
Hence, the general solution can be written as u^2 = y^2 + f (x^2 − y^2), where f
is an arbitrary differentiable single variable function
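
Since the general solution is given implicitly, it can be checked by implicit differentiation of F (x, y, u) = u^2 − y^2 − f (x^2 − y^2) = 0, using ux = −Fx/Fu and uy = −Fy/Fu. The sketch below (added to these notes, not from the original text, assuming SymPy) carries this out:

```python
import sympy as sp

x, y, u = sp.symbols("x y u")
f = sp.Function("f")                       # arbitrary differentiable function

F = u**2 - y**2 - f(x**2 - y**2)           # implicit form of the general solution

ux = -sp.diff(F, x)/sp.diff(F, u)          # implicit differentiation
uy = -sp.diff(F, y)/sp.diff(F, u)

print(sp.simplify(y*u*ux + x*u*uy - x*y))  # 0, so  yu u_x + xu u_y = xy
```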

Example 9.2
Find the general solution of the PDE x(y^2 − u^2)ux − y(u^2 + x^2)uy = (x^2 + y^2)u.

Solution.
The characteristic equations are dx/[x(y^2 − u^2)] = dy/[−y(u^2 + x^2)] = du/[(x^2 + y^2)u]. Using a
property of proportions we can write

(x dx + y dy + u du)/[x^2(y^2 − u^2) − y^2(u^2 + x^2) + u^2(x^2 + y^2)] = du/[(x^2 + y^2)u].

That is

(x dx + y dy + u du)/0 = du/[(x^2 + y^2)u]

or

x dx + y dy + u du = 0.

Hence, we find x^2 + y^2 + u^2 = k1. Also,

(dx/x − dy/y)/(y^2 − u^2 + u^2 + x^2) = du/[(x^2 + y^2)u]

or

dx/x − dy/y = du/u.

Hence, we find yu/x = k2. The general solution is given by

u(x, y) = (x/y) f (x^2 + y^2 + u^2)
where f is an arbitrary differentiable single variable function

Practice Problems
Problem 9.1
Find the general solution of the PDE ln (y + u)ux + uy = −1.

Problem 9.2
Find the general solution of the PDE x(y − u)ux + y(u − x)uy = u(x − y).

Problem 9.3
Find the general solution of the PDE u(u2 + xy)(xux − yuy ) = x4 .

Problem 9.4
Find the general solution of the PDE (y + xu)ux − (x + yu)uy = x2 − y 2 .

Problem 9.5
Find the general solution of the PDE (y 2 + u2 )ux − xyuy + xu = 0.

Problem 9.6
Find the general solution of the PDE ut + uux = x.

Problem 9.7
Find the general solution of the PDE (y − u)ux + (u − x)uy = x − y.

Problem 9.8
Solve
x(y 2 + u)ux − y(x2 + u)uy = (x2 − y 2 )u.

Problem 9.9
Solve
√(1 − x^2) ux + uy = 0.

Problem 9.10
Solve
u(x + y)ux + u(x − y)uy = x2 + y 2 .

10 The Cauchy Problem for First Order Quasilinear Equations
When solving a partial differential equation, it is seldom the case that one
tries to study the properties of the general solution of such equations. In
general, one deals with those partial differential equations whose solutions
satisfy certain supplementary conditions. In the case of a first order partial
differential equation, we determine the particular solution by formulating an
initial value problem also known as a Cauchy problem.
In this section, we discuss the Cauchy problem for the first order quasilinear
partial differential equation

a(x, y, u)ux + b(x, y, u)uy = c(x, y, u). (10.1)

Recall that the initial value problem of a first order ordinary differential
equation asks for a solution of the equation which has a given value at a
given point in R. The Cauchy problem for the PDE (10.1) asks for a solution
of (10.1) which has given values on a given curve in R2 . A precise statement
of the problem is given next.

Initial Value Problem or Cauchy Problem


Let C be a given curve in R2 defined parametrically by the equations

x = x0 (t), y = y0 (t)

where x0 , y0 are continuously differentiable functions on some interval I. Let


u0 (t) be a given continuously differentiable function on I. The Cauchy prob-
lem for (10.1) asks for a continuously differentiable function u = u(x, y)
defined in a domain Ω ⊂ R2 containing the curve C and such that:
(1) u = u(x, y) is a solution of (10.1) in Ω.
(2) On the curve C, u equals the given function u0 (t), i.e.

u(x0 (t), y0 (t)) = u0 (t), t ∈ I. (10.2)

We call C the initial curve of the problem, u0 (t) the initial data, and
(10.2) the initial condition or Cauchy data of the problem. See Figure
10.1.

Figure 10.1

If we view a solution u = u(x, y) of (10.1) as an integral surface of (10.1),


we can give a simple geometrical statement of the problem: Find a solu-
tion surface of (10.1) containing the curve Γ described parametrically by the
equations
Γ : x = x0 (t), y = y0 (t), u = u0 (t), t ∈ I.
Note that the projection of this curve in the xy−plane is just the curve C.
The following theorem asserts that under certain conditions the Cauchy prob-
lem (10.1) - (10.2) has a unique solution.

Theorem 10.1
Suppose that x0 (t), y0 (t), and u0 (t) are continuously differentiable functions
of t in an interval I, and that a, b, and c are functions of x, y, and u with
continuous first order partial derivatives with respect to their argument in
some domain D of (x, y, u)−space containing the initial curve

Γ : x = x0 (t), y = y0 (t), u = u0 (t)

where t ∈ I. If (x0 (t), y0 (t), u0 (t)) is a point on Γ that satisfies the condition

a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) ≠ 0                   (10.3)

then by continuity this relation holds in a neighborhood U of (x0 (t), y0 (t), u0 (t))
so that Γ is nowhere characteristic in U. In this case, there exists a unique
solution u = u(x, y) of (10.1) in U such that the initial condition (10.2) is
satisfied for every point on C contained in U. See Figure 10.2. That is, there
is a unique integral surface of (10.1) that contains Γ in a neighborhood of
(x0 (t), y0 (t), u0 (t)).

Figure 10.2

It follows that the Cauchy problem has a unique solution if C is nowhere


characteristic.
The unique solution is found as follows: We solve the PDE by the method
of characteristics to obtain the general solution that involves an unknown
function. This unknown function is determined using the initial condition.
We illustrate this process in the next example.

Example 10.1
Solve the Cauchy problem

ux + uy =1
u(x, 0) =f (x).

Solution.
The initial curve in R^3 can be given parametrically as

Γ : x0 (t) = t, y0 (t) = 0, u0 (t) = f (t).

We have

a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) = −1 ≠ 0

so by the above theorem the given Cauchy problem has a unique solution.
Next we apply the results of the previous section to find the unique solution.
If we solve the characteristic equations in non-parametric form

dx/1 = dy/1 = du/1

we find x − y = c1 and u − x = c2. Thus, the general solution of the PDE
is given by u = x + F (x − y). Using the Cauchy data u(x, 0) = f (x) we find
f (x) = x + F (x) which implies that F (x) = f (x) − x. Hence, the unique
solution is given by

u(x, y) = x + f (x − y) − (x − y) = y + f (x − y)
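
The sketch below (not part of the original notes; it assumes SymPy) verifies both the PDE and the Cauchy data for this solution, with f kept arbitrary:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Function("f")                   # the prescribed initial data
u = y + f(x - y)

print(sp.simplify(sp.diff(u, x) + sp.diff(u, y)))   # 1, so u_x + u_y = 1
print(u.subs(y, 0))                                  # f(x), the Cauchy data
```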
Next, if condition (10.3) is not satisfied and Γ is a characteristic curve, i.e.,

a(x0 , y0 , u0 ) dy0/dt = b(x0 , y0 , u0 ) dx0/dt
a(x0 , y0 , u0 ) du0/dt = c(x0 , y0 , u0 ) dx0/dt
c(x0 , y0 , u0 ) dy0/dt = b(x0 , y0 , u0 ) du0/dt

for all points on Γ, then the problem has infinitely many solutions. To see
this, pick an arbitrary point P0 = (x0 , y0 , u0 ) on Γ. Pick a new initial curve
Γ0 passing through P0 which is nowhere characteristic in a neighborhood of
P0 . In this case, condition (10.3) is satisfied and the new Cauchy problem has
a unique solution. Since there are infinitely many ways of selecting Γ0 , we
obtain infinitely many solutions. We illustrate this case in the next example.
Example 10.2
Solve the Cauchy problem
ux + uy =1
u(x, x) =x.

Solution.
The initial curve in R3 can be given parametrically as

Γ : x0 (t) = t, y0 (t) = t, u0 (t) = t.

We have

a(x0 , y0 , u0 ) dy0/dt = b(x0 , y0 , u0 ) dx0/dt = 1
a(x0 , y0 , u0 ) du0/dt = c(x0 , y0 , u0 ) dx0/dt = 1
c(x0 , y0 , u0 ) dy0/dt = b(x0 , y0 , u0 ) du0/dt = 1
so that Γ is a characteristic curve. As in Example 10.1, the general solution
of the PDE is u(x, y) = y + f (x − y) where f is an arbitrary differentiable
function. Using the Cauchy data u(x, x) = x we find f (0) = 0. Thus, the
solution is given by
u(x, y) = y + f (x − y)
where f is an arbitrary function such that f (0) = 0. There are infinitely
many choices for f. Hence, the problem has infinitely many solutions

If condition (10.3) is not satisfied and if

c(x0 , y0 , u0 ) dx0/dt ≠ a(x0 , y0 , u0 ) du0/dt
for some points on Γ then the Cauchy problem has no solutions. We illustrate
this case next.

Example 10.3
Solve the Cauchy problem

ux + uy =1
u(x, x) =1.

Solution.
The initial curve in R3 can be given parametrically as

Γ : x0 (t) = t, y0 (t) = t, u0 (t) = 1.



We have

a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) = 0

and

1 = c(x0 , y0 , u0 ) dx0/dt ≠ a(x0 , y0 , u0 ) du0/dt = 0.
dt dt
As in Example 10.1, the general solution to the PDE is given by u = y +
f (x − y). Using the Cauchy data u(x, x) = 1 we find f (0) = 1 − x, which is
not possible since the LHS is a fixed number whereas the RHS is a variable
expression. Hence, the problem has no solutions

Example 10.4
Solve the Cauchy problem
ux − uy =1
u(x, 0) =x2 .

Solution.
The initial curve is given parametrically by

Γ : x0 (t) = t, y0 (t) = 0, u0 (t) = t^2.

We have

a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) = 1 ≠ 0
so the Cauchy problem has a unique solution.
The characteristic equations in non-parametric form are
dx/1 = dy/(−1) = du/1.
Using the first two fractions we find x + y = c1 . Using the first and the third
fractions we find u − x = c2 . Thus, the general solution can be represented
by
u = x + f (x + y)
where f is an arbitrary differentiable function. Using the Cauchy data
u(x, 0) = x2 we find x2 − x = f (x). Hence, the unique solution is given
by
u = x + (x + y)2 − (x + y) = (x + y)2 − y

Example 10.5
Solve the initial value problem

ut + uux = x, u(x, 0) = 1.

Solution.
The initial curve is given parametrically by

Γ : x0 (t) = t, y0 (t) = 0, u0 (t) = 1.

We have

a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) = −1 ≠ 0

so the Cauchy problem has a unique solution.
The characteristic equations in non-parametric form are

dt/1 = dx/u = du/x.

Since

dt/1 = d(x + u)/(x + u)

we find that (x + u)e^(−t) = c1. Now, using the last two fractions we find

u^2 − x^2 = k2 = f ((x + u)e^(−t)).

Using the Cauchy data u(x, 0) = 1, we find 1 − x^2 = f (1 + x) or f (1 + x) =
(1 + x)^2 − 2x(1 + x). Thus, f (x) = x^2 − 2x(x − 1). The unique solution is
given by

u^2 − x^2 = (x + u)^2 e^(−2t) − 2(x + u)e^(−t) [(x + u)e^(−t) − 1]

or

u − x = (x + u)e^(−2t) − 2e^(−t) [(x + u)e^(−t) − 1] = 2e^(−t) − (x + u)e^(−2t).

This can be reduced further as follows: u + ue^(−2t) = x + 2e^(−t) − xe^(−2t) =
2e^(−t) + x(1 − e^(−2t)) =⇒ u = 2e^(−t)/(1 + e^(−2t)) + x(1 − e^(−2t))/(1 + e^(−2t)) = sech(t) + x tanh(t)
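
The closed-form answer can be verified directly. The following sketch (added to these notes, not from the original text, assuming SymPy) substitutes u = sech(t) + x tanh(t) into ut + uux = x and checks the initial condition:

```python
import sympy as sp

x, t = sp.symbols("x t")
u = sp.sech(t) + x*sp.tanh(t)

lhs = sp.diff(u, t) + u*sp.diff(u, x) - x
print(sp.simplify(lhs.rewrite(sp.exp)))   # 0, so u_t + u u_x = x
print(u.subs(t, 0))                       # 1, the initial condition
```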

Example 10.6
Solve the initial value problem
uux + uy = 1
where u(x, y) = 0 on the curve y 2 = 2x.

Solution.
A parametrization of Γ is

Γ : x0 (t) = 2t2 , y0 (t) = 2t, u0 (t) = 0, t > 0.

We have
a(x0 (t), y0 (t), u0 (t)) dy0/dt (t) − b(x0 (t), y0 (t), u0 (t)) dx0/dt (t) = −4t ≠ 0,  t > 0
so the Cauchy problem has a unique solution.
The characteristic equations in non-parametric form are
dx/u = dy/1 = du/1.
Using the last two fractions, we find u − y = k1 . Using the first and the last
fractions, we find u2 − 2x = k2 = f (k1 ) = f (u − y).
Using the initial condition, we find f (x) = −x2 . Hence,

u2 − 2x = −(u − y)2

or equivalently
(u − y)2 + u2 = 2x.
Solving this quadratic equation in u, we find

2u = y ± (4x − y^2)^(1/2).

The solution surface satisfying u = 0 on y^2 = 2x is given by

2u = y − (4x − y^2)^(1/2).

This represents a solution surface only when y 2 < 4x. The solution does not
exist for y 2 > 4x

Practice Problems
Problem 10.1
Solve
(y − u)ux + (u − x)uy = x − y
with the condition u(x, 1/x) = 0, x ≠ 0, 1.


Problem 10.2
Solve the linear equation
yux + xuy = u
with the Cauchy data u(x, 0) = x3 , x > 0.

Problem 10.3
Solve
x(y 2 + u)ux − y(x2 + u)uy = (x2 − y 2 )u
with the Cauchy data u(x, −x) = 1, x > 0

Problem 10.4
Solve
xux + yuy = xe−u
with the Cauchy data u(x, x2 ) = 0, x > 0.

Problem 10.5
Solve the initial value problem

xux + uy = 0, u(x, 0) = f (x).

Problem 10.6
Solve the initial value problem

ut + aux = 0, u(x, 0) = f (x).

Problem 10.7
Solve the initial value problem

aux + uy = u2 , u(x, 0) = cos x



Problem 10.8
Solve the initial value problem

ux + xuy = u, u(1, y) = h(y).

Problem 10.9
Solve the initial value problem

uux + uy = 0, u(x, 0) = f (x)

where f is an invertible function.

Problem 10.10
Solve the initial value problem

√(1 − x^2) ux + uy = 0, u(0, y) = y.

Problem 10.11
Consider
xux + 2yuy = 0.

(i) Find and sketch the characteristics.


(ii) Find the solution with u(1, y) = ey .
(iii) What happens if you try to find the solution satisfying either u(0, y) =
g(y) or u(x, 0) = h(x) for given functions g and h?

Problem 10.12
Solve the equation ux + uy = u subject to the condition u(x, 0) = cos x.

Problem 10.13
(a) Find the general solution of the equation

ux + yuy = u.

(b) Find the solution satisfying the Cauchy data u(x, 3ex ) = 2.
(c) Find the solution satisfying the Cauchy data u(x, ex ) = ex .

Problem 10.14
Solve the Cauchy problem

ux + 4uy = x(u + 1), u(x, 5x) = 1.



Problem 10.15
Solve the Cauchy problem
ux − uy = u, u(x, −x) = sin x, x ≠ π/4.
Problem 10.16
(a) Find the characteristics of the equation

yux + xuy = 0.

(b) Sketch some of the characteristics.


(c) Find the solution satisfying the boundary condition u(0, y) = e^(−y^2).

Problem 10.17
Consider the equation ux + yuy = 0. Is there a solution satisfying the extra
condition
(a) u(x, 0) = 1
(b) u(x, 0) = x?
If yes, give a formula; if no, explain why.
Second Order Linear Partial
Differential Equations

In this chapter we consider the three fundamental second order linear partial
differential equations of parabolic, hyperbolic, and elliptic type. These types
arise in many applications such as the wave equation, the heat equation
and Laplace's equation. We will study the solvability of each of these
equations.


11 Second Order PDEs in Two Variables


In this section we will briefly review second order partial differential equa-
tions.
A second order partial differential equation in the variables x and y
is an equation of the form

F (x, y, u, ux , uy , uxx , uyy , uxy ) = 0. (11.1)


If Equation (11.1) can be written in the form
A(x, y, u, ux , uy )uxx +B(x, y, u, ux , uy )uxy +C(x, y, u, ux , uy )uyy = D(x, y, u, ux , uy )
(11.2)

then we say that the equation is quasi-linear.


If Equation (11.1) can be written in the form

A(x, y)uxx + B(x, y)uxy + C(x, y)uyy = D(x, y, u, ux , uy ) (11.3)

then we say that the equation is semi-linear.


If Equation (11.1) can be written in the form

A(x, y)uxx +B(x, y)uxy +C(x, y)uyy +D(x, y)ux +E(x, y)uy +F (x, y)u = G(x, y)
(11.4)
then we say that the equation is linear.

A linear equation is said to be homogeneous when G(x, y) ≡ 0 and non-


homogeneous otherwise.
Equation (11.4) resembles the general equation of a conic section

Ax2 + Bxy + Cy 2 + Dx + Ey + F = 0

which is classified as either parabolic, hyperbolic, or elliptic based on the sign


of the discriminant B 2 − 4AC. We do the same for a second order linear
partial differential equation:
• Hyperbolic: This occurs if B 2 − 4AC > 0 at a given point in the domain
of u.
• Parabolic: This occurs if B 2 − 4AC = 0 at a given point in the domain
of u.
• Elliptic: This occurs if B 2 − 4AC < 0 at a given point in the domain of
u.

Example 11.1
Determine whether the equation uxx + xuyy = 0 is hyperbolic, parabolic or
elliptic.

Solution.
Here we are given A = 1, B = 0, and C = x. Since B 2 − 4AC = −4x, the
given equation is hyperbolic if x < 0, parabolic if x = 0 and elliptic if x > 0
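
The classification amounts to checking the sign of the discriminant B^2 − 4AC. A small helper of this kind (an illustration added here, not from the original notes; it assumes SymPy) makes the bookkeeping explicit:

```python
import sympy as sp

x, y = sp.symbols("x y")

def discriminant(A, B, C):
    """Return B**2 - 4*A*C; its sign classifies A u_xx + B u_xy + C u_yy + ... ."""
    return sp.simplify(B**2 - 4*A*C)

# Example 11.1:  u_xx + x u_yy = 0  has  A = 1, B = 0, C = x
print(discriminant(1, 0, x))   # -4*x: hyperbolic for x < 0, parabolic for x = 0, elliptic for x > 0
```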

Second order partial differential equations arise in many areas of scientific


applications. In what follows we list some of the well-known models that are
of great interest:
1. The heat equation in one-dimensional space is given by

ut = kuxx

where k is a constant.
2. The wave equation in one-dimensional space is given by

utt = c2 uxx

where c is a constant.
3. The Laplace equation is given by

∆u = uxx + uyy = 0.

Practice Problems
Problem 11.1
Classify each of the following equation as hyperbolic, parabolic, or elliptic:
(a) Wave propagation: utt = c2 uxx , c > 0.
(b) Heat conduction: ut = cuxx , c > 0.
(c) Laplace’s equation: ∆u = uxx + uyy = 0.

Problem 11.2
Classify the following linear scalar PDE with constant coefficents as hyper-
bolic, parabolic or elliptic.
(a) uxx + 4uxy + 5uyy + ux + 2uy = 0.
(b) uxx − 4uxy + 4uyy + 3ux + 4u = 0.
(c) uxx + 2uxy − 3uyy + 2ux + 6uy = 0.

Problem 11.3
Find the region(s) in the xy−plane where the equation

(1 + x)uxx + 2xyuxy − y 2 uyy = 0

is elliptic, hyperbolic, or parabolic. Sketch these regions.

Problem 11.4
Show that u(x, t) = cos x sin t is a solution to the problem

utt = uxx
u(x, 0) = 0
ut (x, 0) = cos x
ux (0, t) = 0

for all x, t > 0.

Problem 11.5
Classify each of the following PDE as linear, quasilinear, semi-linear, or non-
linear.
(a) ut + uux = uuxx
(b) xutt + tuyy + u3 u2x = t + 1
(c) utt = c2 uxx
(d) u2tt + ux = 0.

Problem 11.6
Show that, for all (x, y) ≠ (0, 0), u(x, y) = ln (x^2 + y^2) is a solution of

uxx + uyy = 0,

and that, for all (x, y, z) ≠ (0, 0, 0), u(x, y, z) = 1/√(x^2 + y^2 + z^2) is a solution of

uxx + uyy + uzz = 0.


Problem 11.7
Consider the eigenvalue problem
uxx = λu, 0 < x < L
ux (0) = k0 u(0)
ux (L) = −kL u(L)
with Robin boundary conditions, where k0 and kL are given positive numbers
and u = u(x). Can this system have a nontrivial solution u 6≡ 0 for λ > 0?
Hint: Multiply the first equation by u and integrate over x ∈ [0, L].
Problem 11.8
Show that u(x, y) = f (x)g(y), where f and g are arbitrary differentiable
functions, is a solution to the PDE
uuxy = ux uy .
Problem 11.9
Show that for any n ∈ N, the function un (x, y) = sin nx sinh ny is a solution
to the Laplace equation
∆u = uxx + uyy = 0.
Problem 11.10
Solve
uxy = xy.
Problem 11.11
Classify each of the following second-oder PDEs according to whether they
are hyperbolic, parabolic, or elliptic:
(a) 2uxx − 4uxy + 7uyy − u = 0.
(b) uxx − 2 cos xuxy − sin2 xuyy = 0.
(c) yuxx + 2(x − 1)uxy − (y + 2)uyy = 0.

Problem 11.12
Let c > 0. By computing ux , uxx , ut , and utt show that

u(x, t) = (1/2)(f (x + ct) + f (x − ct)) + (1/(2c)) ∫_(x−ct)^(x+ct) g(s)ds

is a solution to the PDE


utt = c2 uxx
where f is twice differentiable function and g is a differentiable function.
Then compute and simplify u(x, 0) and ut (x, 0).

Problem 11.13
Consider the second-order PDE

yuxx + uxy − x2 uyy − ux − u = 0.

Determine the region D in R2 , if such a region exists, that makes this PDE:
(a) hyperbolic, (b) parabolic, (c) elliptic.

Problem 11.14
Consider the second-order hyperbolic PDE

uxx + 2uxy − 3uyy = 0.

Use the change of variables v(x, y) = y − 3x and w(x, y) = x + y to solve the


given equation.

Problem 11.15
Solve the Cauchy problem

uxx + 2uxy − 3uyy = 0.

u(x, 2x) = 1, ux (x, 2x) = x.



12 Hyperbolic Type: The Wave equation


The wave equation has many physical applications from sound waves in air
to magnetic waves in the Sun’s atmosphere. However, the simplest systems
to visualize and describe are waves on a stretched flexible string.
A flexible homogeneous string of length L and constant mass density ρ (i.e.,
mass per unit length) is stretched horizontally along the x−axis with its left
end placed at x = 0 and its right end placed at x = L. From the left end (and
at time t = 0) we slightly shake the string and we notice small vibrations
propagate through the string. We make the following physical assumptions:
(a) the string does not furnish any resistance to bending (i.e., perfectly elas-
tic);
(b) the (pulling) tension force on the string is the dominant force and all
other forces acting on the string are negligible (no external forces are applied
to the string, the damping forces (resistance) and gravitational forces are
negligible);
(c) clearly a point on the string moves up and down along a curve but since
the horizontal displacement is small compared to the vertical displacement,
we will assume that each point of the string moves only vertically. Thus, the
horizontal component of the tension force must be constant.
We denote the vertical displacement from the x−axis of the string by u(x, t)
which is a function of position x and time t. That is, u(x, t) is the vertical
displacement from the equilibrium at position x and time t. Our aim is to
find an equation that is satisfied by u(x, t).
A displacement of a tiny piece of the string between points P and Q is shown
in Figure 12.1,

Figure 12.1

where
(i) θ(x, t) is the angle between T~ (x, t) and ~i at x and time t; for small vibra-
tions, we have θ ≈ 0;
(ii) T~ (x, t) is the (pulling) tension force in the string at position x and time
t and T~ (x + ∆x, t) the tension force at position x + ∆x and t.
By (c) above, we have

||T~ (x, t)|| cos [θ(x, t)] = ||T~ (x + ∆x, t)|| cos [θ(x + ∆x, t)] = T.

Now, at P the vertical component of the tension force is −||T~ (x, t)|| sin [θ(x, t)]
(the minus sign occurs due to the component at P is pointing downward)
whereas at Q the vertical component is ||T~ (x + ∆x, t)|| sin [θ(x + ∆x, t)].
Then Newton’s Law of motion

mass × acceleration = net applied forces

gives

ρ∆x ∂^2u/∂t^2 = ||T~ (x + ∆x, t)|| sin [θ(x + ∆x, t)] − ||T~ (x, t)|| sin [θ(x, t)].
Next, dividing through by T, we obtain

(ρ/T)∆x ∂^2u/∂t^2 = ||T~ (x + ∆x, t)|| sin [θ(x + ∆x, t)] / (||T~ (x + ∆x, t)|| cos [θ(x + ∆x, t)]) − ||T~ (x, t)|| sin [θ(x, t)] / (||T~ (x, t)|| cos [θ(x, t)])
                  = tan [θ(x + ∆x, t)] − tan [θ(x, t)]
                  = ux (x + ∆x, t) − ux (x, t).

Dividing by ∆x and letting ∆x → 0 we obtain

(ρ/T) ∂^2u/∂t^2 = uxx (x, t).
which can be written as

utt (x, t) = c2 uxx (x, t) (12.1)

where c^2 = T/ρ. Note that the units of T are mass × length/time^2 and the
units of ρ are mass/length so that the units of c are length/time. We call
c the wave speed.

General Solution of (12.1): D’Alembert Approach


By using the change of variables v = x + ct and w = x − ct, we find
ut = cuv − cuw
utt = c^2 uvv − 2c^2 uvw + c^2 uww
ux = uv + uw
uxx = uvv + 2uvw + uww
Substituting into Equation (12.1), we find uvw = 0. Solving this equation
we find uv = F (v) and u(v, w) = f (v) + g(w) where f (v) = ∫ F (v)dv.
Finally, using the fact that v = x + ct and w = x − ct; we get d’Alembert’s
solution to the one-dimensional wave equation:
u(x, t) = f (x + ct) + g(x − ct) (12.2)
where f and g are arbitrary differentiable functions.
The function f (x + ct) represents waves that are moving to the left at a
constant speed c and the function g(x − ct) represents waves that are moving
to the right at the same speed c.
The function u(x, t) in (12.2) involves two arbitrary functions that are deter-
mined (normally) by two initial conditions.
Example 12.1
Find the solution to the Cauchy problem
utt =c2 uxx
u(x, 0) =v(x)
ut (x, 0) =w(x).
Solution.
The condition u(x, 0) is the initial position whereas ut (x, 0) is the initial
velocity. We have
u(x, 0) = f (x) + g(x) = v(x)
and

ut (x, 0) = cf ′(x) − cg ′(x) = w(x)

which implies that

f (x) − g(x) = (1/c)W (x) = (1/c) ∫_0^x w(s)ds.

Therefore,

f (x) = (1/2)(v(x) + (1/c)W (x))

and

g(x) = (1/2)(v(x) − (1/c)W (x)).

Finally,

u(x, t) = (1/2)[v(x − ct) + v(x + ct) + (1/c)(W (x + ct) − W (x − ct))]
        = (1/2)[v(x − ct) + v(x + ct) + (1/c) ∫_(x−ct)^(x+ct) w(s)ds]
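
With W an antiderivative of w (so W ′ = w), the formula above can be checked symbolically. The sketch below (not part of the original notes; it assumes SymPy) verifies the wave equation and both initial conditions:

```python
import sympy as sp

x, t, c = sp.symbols("x t c")
v = sp.Function("v")     # initial position
W = sp.Function("W")     # an antiderivative of the initial velocity w, i.e. W' = w

u = sp.Rational(1, 2)*(v(x - c*t) + v(x + c*t) + (W(x + c*t) - W(x - c*t))/c)

print(sp.simplify(sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2)))   # 0, the wave equation
print(sp.simplify(u.subs(t, 0)))                               # v(x)
print(sp.simplify(sp.diff(u, t).subs(t, 0)))                   # W'(x), i.e. w(x)
```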

Practice Problems
Problem 12.1
Show that if v(x, t) and w(x, t) satisfy equation (12.1) then αv + βw is also
a solution to (12.1), where α and β are constants.

Problem 12.2
Show that any linear time independent function u(x, t) = ax + b is a solution
to equation (12.1).

Problem 12.3
Find a solution to (12.1) that satisfies the homogeneous conditions u(x, 0) =
u(0, t) = u(L, t) = 0.

Problem 12.4
Solve the initial value problem

utt =9uxx
u(x, 0) = cos x
ut (x, 0) =0.

Problem 12.5
Solve the initial value problem

utt = uxx
u(x, 0) = 1/(1 + x^2)
ut (x, 0) = 0.

Problem 12.6
Solve the initial value problem

utt =4uxx
u(x, 0) =1
ut (x, 0) = cos (2πx).

Problem 12.7
Solve the initial value problem

utt =25uxx
u(x, 0) =v(x)
ut (x, 0) =0

where v(x) = 1 if x < 0 and v(x) = 0 if x ≥ 0.

Problem 12.8
Solve the initial value problem

utt = c^2 uxx
u(x, 0) = e^(−x^2)
ut (x, 0) = cos^2 x.

Problem 12.9
Prove that the wave equation, utt = c2 uxx satisfies the following properties,
which are known as invariance properties. If u(x, t) is a solution, then
(i) Any translate, u(x − y, t) where y is a fixed constant, is also a solution.
(ii) Any derivative, say ux (x, t), is also a solution.
(iii) Any dilation, u(ax, at), is a solution, for any fixed constant a.

Problem 12.10
Find v(r) if u(r, t) = (v(r)/r) cos nt is a solution to the PDE

urr + (2/r)ur = utt.

Problem 12.11
Find the solution of the wave equation on the real line (−∞ < x < +∞)
with the initial conditions

u(x, 0) = ex , ut (x, 0) = sin x.



Problem 12.12
The total energy of the string (the sum of the kinetic and potential energies)
is defined as
E(t) = (1/2) ∫_0^L (ut^2 + c^2 ux^2)dx.
(a) Using the wave equation derive the equation of conservation of energy
dE(t)/dt = c^2 (ut (L, t)ux (L, t) − ut (0, t)ux (0, t)).
(b) Assuming fixed ends boundary conditions, that is the ends of the string
are fixed so that u(0, t) = u(L, t) = 0, for all t > 0, show that the energy is
constant.
(c) Assuming free ends boundary conditions for both x = 0 and x = L, that
is both u(0, t) and u(L, t) vary with t, show that the energy is constant.

Problem 12.13
For a wave equation with damping

utt − c2 uxx + dut = 0, d > 0, 0 < x < L

with the fixed ends boundary conditions show that the total energy decreases.

Problem 12.14
(a) Verify that for any twice differentiable R(x) the function

u(x, t) = R(x − ct)

is a solution of the wave equation utt = c2 uxx . Such solutions are called
traveling waves.
(b) Show that the potential and kinetic energies (see Exercise 12.12) are
equal for the traveling wave solution in (a).

Problem 12.15
Find the solution of the Cauchy wave equation

utt = 4uxx

u(x, 0) = x2 , ut (x, 0) = sin 2x.


Simplify your answer as much as possible.

13 Parabolic Type: The Heat Equation in One-Dimensional Space

In this section, we will look at a model for describing the distribution of
temperature in a solid material as a function of time and space. More specif-
ically, we will derive the heat equation that models the flow of heat in a rod
that is insulated everywhere except at the two ends.
Before we begin our discussion of the mathematics of the heat equation, we
must first determine what is meant by the term heat. Heat is a type of energy
known as thermal energy. Heat travels in waves like other forms of energy,
and can change the matter it touches. It can heat it up and cause chemical
reactions like burning to occur.
Heat can be released through a chemical reaction (such as the nuclear re-
actions that make the Sun “burn”) or can be trapped for a limited time by
insulators. It is often released along with other kinds of energy such as light
waves or sound waves. For example, a burning candle releases light and heat
waves. On the other hand, an explosion releases light, heat, and sound waves.
The most common units of heat are BTU (British Thermal Unit), Calorie
and Joule.
Consider now a thin rod made of homogeneous heat conducting material of
uniform density ρ and constant cross section A, wrapped along the x−axis
from x = 0 to x = L as shown in Figure 13.1.

Figure 13.1

Assume the heat flows only in the x−direction, with the lateral sides well
insulated, and the only way heat can enter or leave the rod is at either end.
Since our rod is thin, the temperature of the rod can be considered constant
on any cross section and so depends on the horizontal position along the
x−axis and we can hence consider the rod to be a one spatial dimensional
rod. We will also assume that heat energy in any piece of the rod is conserved.
That is, the heat gained at one end is equal to the heat lost at the other end.

Let u(x, t) be the temperature of the cross section at the point x and the
time t. Consider a portion U of the rod from x to x + ∆x of length ∆x as
shown in Figure 13.2.

Figure 13.2

Divide the interval [x, x+∆x] into n sub-intervals each of length ∆s using the
partition points x = s0 < s1 < · · · < sn = x + ∆x. Consider the portion Ui of
U of height ∆s. The portion Ui is assumed to be thin so that the temperature
is constant throughout the volume. From the theory of heat conduction, the
quantity of heat Qi in Ui at time t is given by

Qi = cmi u(si−1 , t) = cρu(si−1 , t)∆Vi

where mi is the mass of Ui , ∆Vi is the volume of Ui and c is the specific


heat, that is, the amount of heat that it takes to raise one unit of mass of
the material by one unit of temperature.
But Ui is a cylinder of height ∆s and area of base A so that ∆Vi = A∆s.
Hence,
Qi = cρAu(si−1 , t)∆s.
The quantity of heat in the portion U is given by
Q(t) = lim_(n→∞) Σ_(i=1)^n Qi = lim_(n→∞) Σ_(i=1)^n cρAu(si−1 , t)∆s = ∫_x^(x+∆x) cρAu(s, t)ds.

By differentiation, the change in heat with respect to time is


dQ/dt = ∫_x^(x+∆x) cρAut (s, t)ds.

Assuming that u is continuously differentiable, we can apply the mean value


theorem for integrals and find x ≤ ξ ≤ x + ∆x such that
∫_x^(x+∆x) ut (s, t)ds = ∆x ut (ξ, t).

Thus, the rate of change of heat in U is given by


dQ/dt = cρA∆x ut (ξ, t).
Now, Fourier law of heat transfer says that the rate of heat transfer through
any cross section is proportional to the area A and the negative gradient of
the temperature normal to the cross section, i.e., −KAux (x, t). Note that
if the temperature increases as x increases (i.e., the temperature is hotter
to the right), ux > 0 so that the heat flows to the left. This explains the
minus sign in the formula for Fourier law. Hence, according to this law heat
is transferred from areas of high temperature to areas of low temperature.
Now, the rate of heat flowing in U through the cross section at x is −KAux (x, t)
and the rate of heat flowing out of U through the cross section at x + ∆x is
−KAux (x + ∆x, t), where K is the thermal conductivity7 of the rod.
Now, the conservation of energy law states

rate of change of heat in U = rate of heat flowing in − rate of heat flowing


out

or mathematically written as,

cρA∆xut (ξ, t) = −KAux (x, t) + KAux (x + ∆x, t)

or
cρA∆xut (ξ, t) = KA[ux (x + ∆x, t) − ux (x, t)].
Dividing this last equation by cAρ∆x and letting ∆x → 0 we obtain

ut (x, t) = kuxx (x, t) (13.1)


where k = K/(cρ) is called the diffusivity constant.
Equation (13.1) is the one dimensional heat equation which is second order, linear, homogeneous, and of parabolic type.

⁷ It is a property of a material to conduct heat. Heat transfer is slow in materials with small thermal conductivity and fast in materials with large thermal conductivity.


The non-homogeneous heat equation

ut = kuxx + f (x)

is known as the heat equation with an external heat source f (x). An ex-
ample of an external heat source is the heat generated from a candle placed
under the bar.
The function

E(t) = ∫_0^L u(x, t)dx

is called the total thermal energy⁸ at time t of the entire rod.

Example 13.1
The two ends of a homogeneous rod of length L are insulated. There is a
constant source of thermal energy q0 6= 0 and the temperature is initially
u(x, 0) = f (x).
(a) Write the equation and the boundary conditions for this model.
(b) Calculate the total thermal energy of the entire rod.

Solution.
(a) The model is given by the PDE

ut (x, t) = kuxx + q0

with boundary conditions

ux (0, t) = ux (L, t) = 0.

(b) First note that

d/dt ∫_0^L u(x, t)dx = ∫_0^L ut (x, t)dx = ∫_0^L kuxx dx + ∫_0^L q0 dx
                    = kux |_0^L + q0 L = q0 L

since ux (0, t) = ux (L, t) = 0. Integrating with respect to t we find

E(t) = q0 Lt + C.

⁸ The total internal energy in the rod generated by the rod’s temperature.
But C = E(0) = ∫_0^L u(x, 0)dx = ∫_0^L f (x)dx. Hence, the total thermal energy
is given by

E(t) = ∫_0^L f (x)dx + q0 Lt
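
As an illustration (a sketch added to these notes, not from the original text; it assumes SymPy), one can test these formulas on a particular temperature profile. The choice u = q0 t + cos(πx/L) e^(−kπ^2 t/L^2) is made up for this check: it satisfies ut = kuxx + q0 with insulated ends, and its initial data is f (x) = cos(πx/L), whose integral over [0, L] is 0.

```python
import sympy as sp

x, t, k, q0, L = sp.symbols("x t k q0 L", positive=True)

# An illustrative choice of u (not from the text) with insulated ends:
u = q0*t + sp.cos(sp.pi*x/L)*sp.exp(-k*sp.pi**2*t/L**2)

print(sp.simplify(sp.diff(u, t) - k*sp.diff(u, x, 2) - q0))   # 0: the PDE holds
print(sp.diff(u, x).subs(x, 0), sp.diff(u, x).subs(x, L))     # 0 0: insulated ends

E = sp.integrate(u, (x, 0, L))                 # total thermal energy
f_int = sp.integrate(u.subs(t, 0), (x, 0, L))  # integral of the initial temperature f
print(sp.simplify(E - (f_int + q0*L*t)))       # 0: E(t) = integral of f + q0*L*t
```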

Initial Boundary Value Problems


In order to solve the heat equation we must give the problem some initial
conditions. If you recall from the theory of ODE, the number of conditions
required for solving initial value problems always matched the highest order
of the derivative in the equation.
In partial differential equations the same idea holds except now we have to
pay attention to the variable we are differentiating with respect to as well.
So, for the heat equation we have got a first order time derivative and so we
will need one initial condition and a second order spatial derivative and so
we will need two boundary conditions.
For the initial condition, we define the temperature of every point along the
rod at time t = 0 by
u(x, 0) = f (x)
where f is a given (prescribed) function of x. This function is known as the
initial temperature distribution.
The boundary conditions will tell us something about what the temperature
is doing at the ends of the bar. The conditions are given by

u(0, t) = T0 and u(L, t) = TL .

and they are called as the Dirichlet conditions. In this case, the general
form of the heat equation initial boundary value problem is to find u(x, t)
satisfying

ut (x, t) =kuxx (x, t), 0 ≤ x ≤ L, t > 0


u(x, 0) =f (x), 0 ≤ x ≤ L
u(0, t) =T0 , u(L, t) = TL , t > 0.

In the case of insulated endpoints, i.e., there is no heat flow out of them, we
use the boundary conditions

ux (0, t) = ux (L, t) = 0.
13 PARABOLIC TYPE: THE HEAT EQUATION IN ONE-DIMENSIONAL SPACE105

These conditions are examples of what is known as Neumann boundary


conditions. In this case, the general form of the heat equation initial bound-
ary value problem is to find u(x, t) satisfying

ut (x, t) =kuxx (x, t), 0 ≤ x ≤ L, t > 0


u(x, 0) =f (x), 0 ≤ x ≤ L
ux (0, t) =ux (L, t) = 0, t > 0.
106SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Practice Problems
Problem 13.1
Show that if u(x, t) and v(x, t) satisfy equation (13.1) then αu + βv is also a
solution to (13.1), where α and β are constants.
Problem 13.2
Show that any linear time independent function u(x, t) = ax + b is a solution
to equation (13.1).
Problem 13.3
Find a linear time independent solution u to (13.1) that satisfies u(0, t) = T0
and u(L, T ) = TL .
Problem 13.4
Show that to solve (13.1) with the boundary conditions u(0, t) = T0 and
u(L, t) = TL it suffices to solve (13.1) with the homogeneous boundary
conditions u(0, t) = u(L, t) = 0.
Problem 13.5
Find a solution to (13.1) that satisfies the conditions u(x, 0) = u(0, t) =
u(L, t) = 0.
Problem 13.6
Let (I) denote equation (13.1) together with intial condition u(x, 0) = f (x),
where f is not the zero function, and the homogeneous boundary conditions
u(0, t) = u(L, t) = 0. Suppose a nontrivial solution to (I) can be written in
the form u(x, t) = X(x)T (t). Show that X and T satisfy the ODE
X 00 − λk X = 0 and T 0 − λT = 0
for some constant λ.
Problem 13.7
Consider again the solution u(x, t) = X(x)T (t). Clearly, T (t) = T (0)eλt .
Suppose that λ > 0. √ √
(a) Show that X(x) = Aex α + Be−x α , where α = λk and A and B are
arbitrary constants. √
(b) √Show that A and B satisfy the two equations A + B = 0 and A(eL α −
e−L α ) = 0.
(c) Show that A = 0 leads to a contradiction.
√ √
(d) Using (b) and (c) show that eL α = e−L α . Show that this equality leads
to a contradiction. We conclude that λ < 0.
13 PARABOLIC TYPE: THE HEAT EQUATION IN ONE-DIMENSIONAL SPACE107

Problem 13.8
Consider the results of the previous exercise. q
−λ
(a) Show that X(x) = c1 cos βx + c2 sin βx where β = k
.
2 2
(b) Show that λ = λn = − knL2π , where n is an integer.

Problem 13.9
ki2 π 2
Show that u(x, t) = ni=1 ui (x, t), where ui (x, t) = ci e− L2 t sin iπ
P 
L
x satis-
fies (13.1) and the homogeneous boundary conditions.

Problem 13.10
Suppose that a wire is stretched between 0 and a. Describe the boundary
conditions for the temperature u(x, t) when
(i) the left end is kept at 0 degrees and the right end is kept at 100 degrees;
and
(ii) when both ends are insulated.

Problem 13.11
Let ut = uxx for 0 < x < π and t > 0 with boundary conditionsR π 2 u(0,2t) =
0 = u(π, t) and initial condition u(x, 0) = sin x. Let E(t) = 0 (ut + ux )dx.
Show that E 0 (t) < 0.

Problem 13.12
Suppose

ut = uxx + 4, ux (0, t) = 5, ux (L, t) = 6, u(x, 0) = f (x).

Calculate the total thermal energy of the one-dimensional rod (as a function
of time).

Problem 13.13
Consider the heat equation
ut = kuxx
for x ∈ (0, 1) and t > 0, with boundary conditions u(0, t) = 2 and u(1, t) = 3
for t > 0 and initial condition u(x, 0) = x for x ∈ (0, 1). A function v(x) that
satisfies the equation v 00 (x) = 0, with conditions v(0) = 2 and v(1) = 3 is
called a steady-state solution. That is, the steady-state solutions of the
heat equation are those solutions that don’t depend on time. Find v(x).
108SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Problem 13.14
Consider the equation for the one-dimensional rod of length L with given
heat energy source:
ut = uxx + q(x).
Assume that the initial temperature distribution is given by u(x, 0) = f (x).
Find the equilibrium (steady state) temperature distribution in the following
cases.
(a) q(x) = 0, u(0) = 0, u(L) = T.
(b) q(x) = 0, ux (0) = 0, u(L) = T.
(c) q(x) = 0, u(0) = T, ux (L) = α.

Problem 13.15
Consider the equation for the one-dimensional rod of length L with insulated
ends:
ut = kuxx , ux (0, t) = ux (L, t) = 0.
(a) Give the expression for the total thermal energy of the rod.
(b) Show using the equation and the boundary conditions that the total
thermal energy is constant.

Problem 13.16
Suppose

ut = uxx + x, u(x, 0) = f (x), ux (0, t) = β, ux (L, t) = 7.

(a) Calculate the total thermal energy of the one-dimensional rod (as a func-
tion of time).
(b) From part (a) find the value of β for which a steady-state solution exist.
(c) For the above value of β find the steady state solution.
14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE109

14 Sequences of Functions: Pointwise and Uni-


form Convergence
In the next section, we will be constructing solutions to PDEs involving
infinite sums of sines and cosines. These infinite sums or series are called
Fourier series. Fourier series are examples of series of functions. Conver-
gence of series of functions is defined in terms of convergence of a sequence of
functions. In this section we study the two types of convergence of sequences
of functions.
Recall that a sequence of numbers {an }∞ n=1 is said to converge to a number
L if and only if for every given  > 0 there is a positive integer N = N ()
such that for all n ≥ N we have|an − L| < .
What is the analogue concept of convergence when the terms of the sequence
are variables? Let D ⊂ R and for each n ∈ N consider a function fn : D → R.
Thus, we obtain a sequence of functions {fn }∞ n=1 . For such a sequence, there
are two types of convergenve that we consider in this section: pointwise con-
vergence and uniform convergence.
We say that {fn }∞ n=1 converges pointwise in D to a function f : D → R if
and only if
lim fn (x) = f (x)
n→∞

for each x ∈ D. Equivalently, for a given x ∈ D and  > 0 there is a positive


integer N = N (x, ) such that if n ≥ N then |fn (x) − f (x)| < . It is
important to note that N is a function of both x and .

Example 14.1
nx ∞
Define fn : [0, ∞) → R by fn (x) = 1+n 2 x2 . Show that the sequence {fn }n=1

converges pointwise to the function f (x) = 0 for all x ≥ 0.

Solution.
For all x ≥ 0,
nx
lim fn (x) = lim = 0 = f (x)
n→∞ n→∞ 1 + n2 x2

Example 14.2
For each positive integer n let fn : (0, ∞) → (0, ∞) be given by fn (x) = nx.
Show that {fn }∞n=1 does not converge pointwise in D = (0, ∞).
110SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Solution.
This follows from the fact that lim nx = ∞ for all x ∈ D
n→∞

One of the weaknesses of this type of convergence is that it does not preserve
some of the properties of the base functions {fn }∞ n=1 . For example, if each fn
is continuous then the pointwise limit function need not be continuous. (See
Problem 14.1) A stronger type of convergence which preserves most of the
properties of the base functions is the uniform convergence which we define
next.
Let D be a subset of R and let {fn }∞ n=1 be a sequence of functions defined on

D. We say that {fn }n=1 converges uniformly on D to a function f : D → R
if and only if for all  > 0 there is a positive integer N = N () such that if
n ≥ N then |fn (x) − f (x)| <  for all x ∈ D.
This definition says that the integer N depends only on the given  (in con-
trast to pointwise convergence where N depends on both x and ) so that
for n ≥ N , the graph of fn (x) is bounded above by the graph of f (x) +  and
below by the graph of f (x) −  as shown in Figure 14.1.

Figure 14.1

Example 14.3
For each positive integer n let fn : [0, 1] → R be given by fn (x) = nx . Show
that {fn }∞
n=1 converges uniformly to the zero function.

Solution.
Let  > 0 be given. Let N be a positive integer such that N > 1 . Then for
14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE111

n ≥ N we have
x |x| 1 1
|fn (x) − f (x)| = − 0 = ≤ ≤ <

n n n N
for all x ∈ [0, 1]

Clearly, uniform convergence implies pointwise convergence to the same limit


function. However, the converse is not true in general. Thus, one way to show
that a sequence of functions does not converge uniformly is to show that it
does not converge pointwise.

Example 14.4
nx
Define fn : [0, ∞) → R by fn (x) = 1+n 2 x2 . By Example 14.1, this sequence

converges pointwise to f (x) = 0. Let  = 31 . Show that there is no positive


integer N with the property n ≥ N implies |fn (x) − f (x)| <  for all x ≥ 0.
Hence, the given sequence does not converge uniformly to f (x).

Solution.
For any positive integer N and for n ≥ N we have
   
1 1 1
fn − f = >
n n 2

Problem 14.1 shows a sequence of continuous functions converging pointwise


to a discontinuous function. That is, pointwise convergence does not pre-
serve the property of continuity. One of the interesting features of uniform
convergence is that it preserves continuity as shown in the next example.

Example 14.5
Suppose that for each n ≥ 1 the function fn : D → R is continuous in D.
Suppose that {fn }∞
n=1 converges uniformly to f. Let a ∈ D.
(a) Let  > 0 be given. Show that there is a positive integer N such that if
n ≥ N then |fn (x) − f (x)| < 3 for all x ∈ D.
(b) Show that there is a δ > 0 such that for all |x − a| < δ we have |fN (x) −
fN (a)| < 3 .
(c) Using (a) and (b) show that for |x − a| < δ we have |f (x) − f (a)| < .
Hence, f is continuous in D since a was arbitrary.
112SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Solution.
(a) This follows from the definition of uniform convergence.
(b) This follows from the fact that fN is continuous at a ∈ D.
(c) For |x − a| < δ we have |f (x) − f (a)| = |f (a) − fN (a) + fN (a) − fN (x) +
fN (x)−f (x)| ≤ |fN (a)−f (a)|+|fN (a)−fN (x)|+|fN (x)−f (x)| < 3 + 3 + 3 =


From this example, we can write

lim lim fn (x) = lim lim fn (x).


x→a n→∞ n→∞ x→a

Indeed,

lim lim fn (x) = lim f (x)


x→a n→∞ x→a
=f (a) = lim fn (a)
n→∞
= lim lim fn (x).
n→∞ x→a

Does pointwise convergence allow the interchange of limits and integration?


The answer is no as shown in the next example.

Example 14.6
x
The sequence of function fn : (0, ∞) → R defined by fn (x) = n
converges
pointwise to the zero function. Show that
Z ∞ Z ∞
lim fn (x)dx 6= lim fn (x)dx.
n→∞ 1 1 n→∞

Solution.
We have ∞

x2
Z
x
dx = = ∞.
1 n 2n 1
Hence, Z ∞
lim fn (x)dx = ∞
n→∞ 1

whereas Z ∞
lim fn (x)dx = 0
1 n→∞
14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE113

Contrary to pointwise convergence, uniform convergence preserves integra-


tion. That is, if {fn }∞
n=1 converges uniformly to f on a closed interval [a, b]
then Z b Z b
lim fn (x)dx = lim fn (x)dx.
n→∞ a a n→∞

Theorem 14.1
Suppose that fn : [a, b] → R is a sequence of continuous functions that
converges uniformly to f : [a, b] → R. Then
Z b Z b Z b
lim fn (x)dx = lim fn (x)dx = f (x)dx.
n→∞ a a n→∞ a

Proof.
From Example 14.5, we have that f is continuous and hence integrable. Let
 > 0 be given. By uniform convergence, we can find a positive integer N

such that |fn (x) − f (x) < b−a for all x in [a, b] and n ≥ N. Thus, for n ≥ N ,
we have
Z b Z b Z b


fn (x)dx − f (x)dx ≤ |fn (x) − f (x)|dx < .
a a a

This completes the proof of the theorem

Now, what about differentiability? Again, pointwise convergence fails in


general to conserve the differentiability property. See Problem 14.1. Does
uniform convergence preserve differentiability? The answer is still no as
shown in the next example.

Example 14.7 q
Consider the family of functions fn : [−1, 1] given by fn (x) = x2 + n1 .
(a) Show that fn is differentiable for each n ≥ 1.
(b) Show that for all x ∈ [−1, 1] we have
1
|fn (x) − f (x)| ≤ √
n
q √
where f (x) = |x|. Hint: Note that x2 + n1 + x2 ≥ √1n .
(c) Let  > 0 be given. Show that there is a positive integer N such that for
n ≥ N we have
114SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

|fn (x) − f (x)| <  for all x ∈ [−1, 1].

Thus, {fn }∞
n=1 converges uniformly to the non-differentiable function f (x) =
|x|.

Solution.
(a) fn is the composition of two differentiable functions so it is differentiable
with derivative
 − 1
0 2 1 2
fn (x) = x x + .
n
(b) We have
r q 2 1 √ 2 q 2 1 √ 2
1 √ ( x + n − x )( x + n + x )
|fn (x) − f (x)| = x2 + − x2 =

n
q √
x2 + n1 + x2


1
n
=q √
x2 + n1 + x2
1
n 1
≤ =√ .
√1 n
n

(c) Let  > 0 be given. Since limn→∞ √1n = 0 we can find a positive integer
N such that for all n ≥ N we have √1n < . Now the answer to the question
follows from this and part (b)

Even when uniform convergence occurs, the process of interchanging lim-


its and differentiation may fail as shown in the next example.

Example 14.8
Consider the functions fn : R → R defined by fn (x) = sinnnx .
(a) Show that {fn }∞
n=1 converges uniformly to the function f (x) = 0.
(b) Note that {fn }∞
n=1 and f are differentiable functions. Show that
h i0
lim fn0 (x) 6= f 0 (x) = lim fn (x) .
n→∞ n→∞

That is, one cannot, in general, interchange limits and derivatives.


14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE115

Solution.
(a) Let  > 0 be given. Let N be a positive integer such that N > 1 . Then
for n ≥ N we have

sin nx 1
|fn (x) − f (x)| = ≤ <
n n

and this is true for all x ∈ R. Hence, {fn }∞


n=1 converges uniformly to the
function f (x) = 0.
(b) We have limn→∞ fn0 (π) = limn→∞ cos nπ = limn→∞ (−1)n which does not
converge. However, f 0 (π) = 0

Pointwise convergence was not enough to preserve differentiability, and nei-


ther was uniform convergence by itself. Even with uniform convergence the
process of interchanging limits with derivatives is not true in general. How-
ever, if we combine pointwise convergence with uniform convergence we can
indeed preserve differentiability and also switch the limit process with the
process of differentiation.

Theorem 14.2
Let {fn }∞n=1 be a sequence of differentiable functions on [a, b] that converges
pointwise to some function f defined on [a, b]. If {fn0 }∞
n=1 converges uniformly
on [a, b] to a function g, then the function f is differentiable with derivative
equals to g. Thus,
h i0
0 0
lim fn (x) = g(x) = f (x) = lim fn (x) .
n→∞ n→∞

Proof.
First, note that the function g is continuous in [a, b] since uniform convergence
preserves continuity. Let c be an arbitrary point in [a, b]. Then
Z x
fn0 (t)dt = fn (x) − fn (c), x ∈ [a, b].
c

Taking the limit of both sides and using the facts that fn0 converges uniformly
to g and fn converges pointwise to f , we can write
Z x
g(t)dt = f (x) − f (c).
c
116SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Taking the derivative of both sides of the last equation yields g(x) = f 0 (x)

Finally, we conclude this section with the following important result that
is useful in testing uniform convergence.

Theorem 14.3
Consider a sequence fn : D → R. Then this sequence converges uniformly to
f : D → R if and only if

lim sup{|fn (x) − f (x)| : x ∈ D} = 0.


n→∞

Proof.
Suppose that fn converges uniformly to f. Let  > 0 be given. Then there
is a positive integer N such that |fn (x) − f (x)| < 2 for all n ≥ N and all
x ∈ D. Thus, for n ≥ N, we have

sup{|fn (x) − f (x)| : x ∈ D} ≤ < .
2
This shows that

lim sup{|fn (x) − f (x)| : x ∈ D} = 0.


n→∞

Conversely, suppose that

lim sup{|fn (x) − f (x)| : x ∈ D} = 0.


n→∞

Let  > 0 be given. Then there is a positive interger N such that

sup{|fn (x) − f (x)| : x ∈ D} < 

for all n ≥ N. But this implies that

|fn (x) − f (x)| < 

for all x ∈ D. Hence, fn converges uniformly to f in D

Example 14.9
cos x
Show that the sequence defined by fn (x) = n
converges uniformly to the
zero function.
14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE117

Solution.
We have
cos x 1
0 ≤ sup{|
| : x ∈ R} ≤ .
n n
9
Now apply the squeeze rule for sequences we find that
cos x
lim sup{| | : x ∈ R} = 0
n→∞ n
which implies that the given sequence converges uniformly to the zero func-
tion on R

9
If an ≤ bn ≤ cn for all n ≥ N and if limn→∞ an = limn→∞ cn = L then limn→∞ bn =
L.
118SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Practice Problems

Problem 14.1
Define fn : [0, 1] → R by fn (x) = xn . Define f : [0, 1] → R by

0 if 0 ≤ x < 1
f (x) =
1 if x = 1.

(a) Show that the sequence {fn }∞


n=1 converges pointwise to f.
(b) Show that the sequence {fn }∞
n=1 does not converge uniformly to f. Hint:
Suppose otherwise. Let  = 0.5 and get a contradiction by using a point
1
(0.5) N < x < 1.

Problem 14.2
Consider the sequence of functions

nx + x2
fn (x) =
n2
defined for all x in R. Show that this sequence converges pointwise to a
function f to be determined.

Problem 14.3
Consider the sequence of functions

sin (nx + 3)
fn (x) = √
n+1

defined for all x in R. Show that this sequence converges pointwise to a


function f to be determined.

Problem 14.4
Consider the sequence of functions defined by fn (x) = n2 xn for all 0 ≤ x ≤ 1.
Show that this sequence does not converge pointwise to any function.

Problem 14.5
Consider the sequence of functions defined by fn (x) = (cos x)n for all − π2 ≤
x ≤ π2 . Show that this sequence converges pointwise to a noncontinuous
function to be determined.
14 SEQUENCES OF FUNCTIONS: POINTWISE AND UNIFORM CONVERGENCE119

Problem 14.6
n
Consider the sequence of functions fn (x) = x − xn defined on [0, 1).
(a) Does {fn }∞n=1 converge to some limit function? If so, find the limit func-
tion and show whether the convergence is pointwise or uniform.
(b) Does {fn0 }∞
n=1 converge to some limit function? If so, find the limit func-
tion and show whether the convergence is pointwise or uniform.

Problem 14.7
xn
Let fn (x) = 1+x n for x ∈ [0, 2].

(a) Find the pointwise limit f (x) = limn→∞ fn (x) on [0, 2].
(b) Does fn → f uniformly on [0, 2]?

Problem 14.8
n+cos x
For each n ∈ N define fn : R → R by fn (x) = 2n+sin2 x
.
(a) Show that fn → 21 uniformly.
R7
(b) Find limn→∞ 2 fn (x)dx.

Problem 14.9
Show that the sequence defined by fn (x) = (cos x)n does not converge uni-
formly on [− π2 , π2 ].

Problem 14.10
Let {fn }∞
n=1 be a sequence of functions such that

2n
sup{|fn (x)| : 2 ≤ x ≤ 5} ≤ .
1 + 4n
(a) Show that this sequence converges uniformly
R5 to a function f to be found.
(b) What is the value of the limit limn→∞ 2 fn (x)dx?
120SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

15 An Introduction to Fourier Series


In this and the next section we will have a brief look to the subject of Fourier
series. The point here is to do just enough to allow us to do some basic so-
lutions to partial differential equations later in the book.
Motivation: In Calculus we have seen that certain functions may be repre-
sented as power series by means of the Taylor expansions. These functions
must have infinitely many derivatives, and the series provide a good approx-
imation only in some (often small) vicinity of a reference point.
Fourier series constructed of trigonometric rather than power functions, and
can be used for functions not only not differentiable, but even discontinuous
at some points. The main limitation of Fourier series is that the underlying
function should be periodic.
Recall from calculus that a function series is a series where the summands
are functions. Examples of function series include power series, Laurent se-
ries, Fourier series, etc.
Unlike series of numbers, there exist many types of convergence of series of
functions,
P∞ namely, pointwise, uniform, etc. We say that a series of functions
n=1 fn (x) converges pointwise to a function f if and only if the sequence
of partial sums
Sn (x) = f1 (x) + f2 (x) + · · · + fn (x)
converges pointwise to f. We write

X
fn (x) = lim Sn (x) = f (x).
n→∞
n=1

Example 15.1
Show that ∞ n
P
n=0 x converges pointwise to a function to be determined for
all −1 < x < 1.

Solution.
The nth term of the sequence of partial sums is given by

1 − xn+1
Sn (x) = 1 + x + x2 + · · · + xn = .
1−x
Since
lim xn+1 = 0, − 1 < x < 1,
n→∞
15 AN INTRODUCTION TO FOURIER SERIES 121

1
the partial sums converge pointwise to the function 1−x
. Thus,

X 1
xn =
n=0
1−x

Likewise, we say that a series of functions ∞


P
n=1 fn (x) converges uniformly
to a function f if and only if the sequence of partial sums {Sn }∞
n=1 converges
uniformly to f.
The following theorem provide a tool for uniform convergence of series of
functions.

Theorem 15.1 (Weierstrass M-test)


Suppose that for each x in an interval I the series ∞
P
n=1 fn (x) is well-defined.
Suppose further that
|fn (x)| ≤ Mn , ∀x ∈ I.
If n=1 Mn (a scalar series) is convergent then the series ∞
P∞ P
n=1 fn (x) is uni-
formly convergent.

Example 15.2
Show that ∞ sin (nx)
P
n=1 n2
is uniformly convergent.

Solution.
For all x ∈ R, we have

sin (nx) | sin (nx)| 1
n2 ≤ ≤ 2.

n 2 n

The series ∞ 1
P
n=1 n2 is convergent being a p−series with p = 2 > 1. Thus, by
Weierstrass M-test the given series is uniformly convergent

In this section we introduce a type of series of functions known as Fourier


series. They are given by

a0 X h  nπ   nπ i
+ an cos x + bn sin x , −L≤x≤L (15.1)
2 n=1
L L

where an and bn are called the Fourier coefficients. Note that we begin
the series with a20 as opposed to simply a0 to simplify the coefficient formula
122SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

for an that we will derive later in this section.


The main questions we want to consider next are the questions of determin-
ing which functions can be represented by Fourier series and if so how to
compute the coefficients an and bn .
Before answering these questions, we look at some of the properties of Fourier
series.

Periodicity Property
Recall that a function f is said to be periodic with period T > 0 if
f (x + T ) = f (x) for all x, x + T in the domain of f. The smallest value
of T for which f is periodic is called the fundamental period. A graph of
a T −periodic function is shown in Figure 15.1.

Figure 15.1

For a T −periodic function we have

f (x) = f (x + T ) = f (x + 2T ) = · · · .

Note that the definite integral of a T −periodic function is the same over any
interval of length T. By Problem 15.1 below, if f and g are two periodic func-
tions with common period T, then the product f g and an arbitrary linear
combination c1 f + c2 g are also periodic with period T. It is an easy exercise
to show that the Fourier series (15.1) is periodic with fundamental period 2L.

Orthogonality Property
Recall from Calculus that for each pair of vectors ~u and ~v we associate a
scalar quantity ~u · ~v called the dot product of ~u and ~v . We say that ~u and ~v
are orthogonal if and only if ~u · ~v = 0. We want to define a similar concept
for functions.
Let f and g be two functions with domain the closed interval [a, b]. We define
15 AN INTRODUCTION TO FOURIER SERIES 123

a function that takes a pair of functions to a scalar. Symbolically, we write


Z b
< f, g >= f (x)g(x)dx.
a

We call < f, g > the inner product of f and g. We say that f and g are
orthogonal if and only if < f, g >= 0. A set of functions is said to be mu-
tually orthogonal if each distinct pair of functions in the set is orthogonal.
Before we proceed any further into computations, we would like to remind
the reader of the following two facts from calculus: RL
• If f (x) is an odd function defined on the interval [−L, L] then −L f (x)dx =
0. RL
• If f (x) is an even function defined on the interval [−L, L] then −L f (x)dx =
RL
2 0 f (x)dx.

Example 15.3 
Show that the set 1, cos nπ nπ
 
L
x , sin L
x : n ∈ N , where m 6= n, is mutu-
ally orthogonal in [−L, L].

Solution.
Since the cosine function is even, we have
Z L  nπ  Z L  nπ  2L h  nπ iL
1 · cos x dx = 2 cos x dx = sin x = 0.
−L L 0 L nπ L 0

Since the sine function is odd, we have


Z L  nπ 
1 · sin x dx = 0.
−L L

Now, for n 6= m we have


Z L
1 L
Z     
 mπ   nπ  (m + n)π (m − n)π
cos x cos x dx = cos x + cos x dx
−L L L 2 −L L L
  
1 L (m + n)π
= sin x
2 (m + n)π L
 L
L (m − n)π
+ sin x =0
(m − n)π L −L
124SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

where we used the trigonometric identity


1
cos a cos b = [cos (a + b) + cos (a − b)].
2
We can also show (see Problem 15.2):
Z L  mπ   nπ 
sin x sin x dx = 0
−L L L

and Z L  mπ   nπ 
cos x sin x dx = 0
−L L L

The reason we care about these functions being orthogonal is because we will
exploit this fact to develop a formula for the coefficients in our Fourier series.

Now, in order to answer the first question mentioned earlier, that is, which
functions can be expressed as a Fourier series expansion, we need to intro-
duce some mathematical concepts.
A function f (x) is said to be piecewise continuous on [a, b] if it is contin-
uous in [a, b] except possibly at finitely many points of discontinuity within
the interval [a, b], and at each point of discontinuity, the right- and left-
handed limits of f exist. An example of a piecewise continuous function is
the function 
x 0≤x<1
f (x) = 2
x − x 1 ≤ x ≤ 2.
We will say that f is piecewise smooth in [a, b] if and only if f (x) as well
as its derivatives are piecewise continuous.
The following theorem, proven in more advanced books, ensures that a
Fourier decomposition can be found for any function which is piecewise
smooth.

Theorem 15.2
Let f be a 2L-periodic function. If f is a piecewise smooth on [−L, L] then
for all points of discontinuity x ∈ [−L, L] we have

f (x− ) + f (x+ ) a0 X h  nπ   nπ i
= + an cos x + bn sin x
2 2 n=1
L L
15 AN INTRODUCTION TO FOURIER SERIES 125

where as for points of continuity x ∈ [−L, L] we have



a0 X h  nπ   nπ i
f (x) = + an cos x + bn sin x .
2 n=1
L L
Remark 15.1
(1) Almost all functions occurring in practice are piecewise smooth functions.
(2) Given a piecewise smooth function f on [−L, L]. The above theorem
applies to the periodic extension F of f where F (x + 2nL) = f (x) (n ∈ Z)
and F (x) = f (x) on (−L, L). Note that if f (−L) = f (L) then F (x) = f (x)
on [−L, L]. Otherwise, the end points of f (x) may be jump discontinuities
of F (x).
Convergence Results of Fourier Series
We list few of the results regarding the convergence of Fourier series:
(1) The type of convergence in the above theorem is pointwise convergence.
(2) The convergence is uniform for a continuous function f on [−L, L] such
that f (−L) = f (L).
(3) The convergence is uniform whenever ∞ 2 2
P
n=1 (|an | + |bn | ) is convergent.
(4) If f (x) is periodic, continuous, and has a piecewise continuous derivative,
then the Fourier Series corresponding to f converges uniformly to f (x) for
the entire real line.
(5) The convergence is uniform on any closed interval that does not contain
a point of discontinuity.

Euler-Fourier Formulas
Next, we will answer the second question mentioned earlier, that is, the ques-
tion of finding formulas for the coefficients an and bn . These formulas for an
and bn are called Euler-Fourier formulas which we derive next. We will as-
sume that the RHS in (15.1) converges uniformly to f (x) on the interval
[−L, L]. Integrating both sides of (15.1) we obtain
Z L Z L Z LX ∞ h
a0  nπ   nπ i
f (x)dx = dx + an cos x + bn sin x dx.
−L −L 2 −L n=1 L L
Since the trigonometric series is assumed to be uniformly convergent, from
Theorem 14.2, we can interchange the order of integration and summation
to obtain
Z L Z L ∞ Z L h
a0 X  nπ   nπ i
f (x)dx = dx + an cos x + bn sin x dx.
−L −L 2 n=1 −L
L L
126SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

But Z L  nπ  L  nπ iL
cos x dx = sin x =0
−L L nπ L −L

and likewise
Z L  nπ  L  nπ iL
sin x dx = − cos x = 0.
−L L nπ L −L

Thus,
1 L
Z
a0 = f (x)dx.
L −L
To find the other Fourier coefficients, we recall the results of Problems 15.2
- 15.3 below.
Z L  nπ   mπ  
L if m = n
cos x cos x dx =
−L L L 0 if m 6= n
Z L  nπ   mπ  
L if m = n
sin x sin x dx =
−L L L 0 if m 6= n
Z L  nπ   mπ 
sin x cos x dx = 0, ∀m, n.
−L L L
Now, to find the formula for the Fourier coefficients am for m > 0, we multiply
both sides of (15.1) by cos mπ

L
x and integrate from −L to L to otbain
Z L  mπ  Z L a  mπ  ∞  Z L  nπ   mπ 
0
X
f (x) cos x = cos x dx + an cos x cos x dx
−L L −L 2 L n=1 −L L L
Z L  nπ   mπ  
+ bn sin x cos x dx .
−L L L
Hence, Z L  mπ 
f (x) cos x dx = am L
−L L
and therefore Z L
1  mπ 
am = f (x) cos x dx.
L −L L
Likewise, we can show that
Z L
1  mπ 
bm = f (x) sin x dx.
L −L L
15 AN INTRODUCTION TO FOURIER SERIES 127

Example 15.4
Find the Fourier series expansion of

0, x ≤ 0
f (x) =
x, x > 0

on the interval [−π, π].

Solution.
We have
1 π 1 π
Z Z
π
a0 = f (x)dx = xdx =
π −π π 0 2
Z π π
(−1)n − 1

1 1 x sin nx cos nx
an = x cos nxdx = + =
π 0 π n n2 0 πn2
Z π π
(−1)n+1

1 1 x cos nx sin nx
bn = x sin nxdx = − + = .
π 0 π n n2 0 n

Hence,
∞ 
π X (−1)n − 1 (−1)n+1

f (x) = + cos (nx) + sin (nx) − π < x < π
4 n=1 πn2 n

Example 15.5
Apply Theorem 15.2 to the function in Example 15.4.

Solution.
Let F be a periodic extension of f of period 2π. See Figure 152.

Figure 152
128SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Thus, f (x) = F (x) on the interval (−π, π). Note that for x = π, the Fourier
series coverges to
F (π − ) + F (π + ) π
= .
2 2
Similar result for x = −π. Clearly, F is a piecewise smooth function so that
by the previous thereom we can write

∞  n n+1
  π, if x = −π
π X (−1) − 1 (−1) 2
+ cos (nx) + sin (nx) = f (x), if −π <x<π
4 n=1 πn2 n  π
2
, if x = π.

Taking x = π we have the identity



π X (−1)n − 1 π
+ 2
(−1)n =
4 n=1 πn 2

which can be simplified to



X 1 π2
= .
n=1
(2n − 1)2 8

This provides a method for computing an approximate value of π

Remark 15.2
An example of a function that does not have a Fourier series representation
is the function f (x) = x12 on [−L, L]. For example, the coefficient a0 for this
function does not exist. Thus, not every function can be written as a Fourier
series expansion.

The final topic of discussion here is the topic of differentiation and integration
of Fourier series. In particular we want to know if we can differentiate a
Fourier series term by term and have the result be the Fourier series of the
derivative of the function. Likewise we want to know if we can integrate a
Fourier series term by term and arrive at the Fourier series of the integral of
the function. Answers to these questions are provided next.

Theorem 15.3
A Fourier series of a piecewise smooth function f can always be integrated
term by term
R L and the result is a convergent infinite series that always con-
verges to −L f (x)dx even if the original series has jumps.
15 AN INTRODUCTION TO FOURIER SERIES 129

Theorem 15.4
A Fourier series of a continuous function f (x) can be differentiated term by
term if f 0 (x) is piecewise smooth. The result of the differentiation is the
Fourier series of f 0 (x).
130SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Practice Problems
Problem 15.1
Let f and g be two functions with common domain D and common period
T. Show that
(a) f g is periodic of period T.
(b) c1 f + c2 g is periodic of period T, where c1 and c2 are real numbers.

Problem 15.2
R Lthat for m6= n we
Show have
(a) −L sin mπ nπ

L
x sin L
x dx = 0 and
RL
(b) −L cos mπ x sin nπ
 
L L
x dx = 0.

Problem 15.3
Compute
RL the following integrals:
2 nπ

(a) −L cos L x dx.
RL
(b) −L sin2 nπ

L
x dx.
RL
(c) −L cos nπ nπ
 
L
x sin L
x dx.

Problem 15.4
Find the Fourier coefficients of

 −π, −π ≤ x < 0
f (x) = π, 0<x<π
0, x = 0, π

on the interval [−π, π].

Problem 15.5
1
Find the Fourier series of f (x) = x2 − 2
on the interval [−1, 1].

Problem 15.6
Find the Fourier series of the function

 −1, −2π < x < −π
f (x) = 0, −π < x < π
1, π < x < 2π.

15 AN INTRODUCTION TO FOURIER SERIES 131

Problem 15.7
Find the Fourier series of the function

1 + x, −2 ≤ x ≤ 0
f (x) =
1 − x, 0 < x ≤ 2.

Problem 15.8
1
Show that f (x) = x
is not piecewise continuous on [−1, 1].

Problem 15.9
Assume that f (x) is continuous and has period 2L. Prove that
Z L Z L+a
f (x)dx = f (x)dx
−L −L+a

is independent of a ∈ R. In particular, it does not matter over which interval


the Fourier coefficients are computed as long as the interval length is 2L.
[Remark: This result is also true for piecewise continuous functions].

Problem 15.10
Consider the function f (x) defined by

1 0≤x<1
f (x) =
2 1≤x<3

and extended periodically with period 3 to R so that f (x + 3) = f (x) for all


x.
(i) Find the Fourier series of f (x).
(ii) Discuss its limit: In particular, does the Fourier series converge pointwise
or uniformly to its limit, and what is this limit?
(iii) Plot the graphs of f (x) and its extension F (x) on the interval [0, 3].

Problem 15.11
For the following functions f (x) on the interval −L < x < L, determine the
coefficients an , n = 0, 1, 2, · · · and bn , n ∈ N of the Fourier series expansion.
(a) f (x) = 1.
2 + sin πx

(b) f (x) =  L
.
1 x≤0
(c) f (x) =
0 x > 0.
(d) f (x) = x.
132SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Problem 15.12
Let f (t) be the function with period 2π defined as

2 if 0 ≤ x ≤ π2

f (t) =
0 if π2 < x ≤ 2π

f (t) has a Fourier series and that series is equal to



a0 X
+ (an cos nt + bn sin nt), 0 < x < 2π.
2 n=1

a3
Find b3
.

Problem 15.13
Let f (x) = x3 on [−π, π], extended periodically to all of R. Find the Fourier
coefficients an , n = 1, 2, 3, · · · .

Problem 15.14
Let f (x) be the square wave function

−π −π ≤ x < 0
f (x) =
π 0≤x≤π

extended periodically to all of R. To what value does the Fourier series of


f (x) converge when x = 0?

Problem 15.15
(a) Find the Fourier series of

1 −π ≤ x < 0
f (x) =
2 0≤x≤π

extended periodically to all of R. Simplify your coefficients as much as pos-


sible.
(−1)n+1
(b) Use (a) to evaluate the series ∞
P
n=1 (2n−1) . Hint: Evaluate the Fourier
series at x = π2 .
16 FOURIER SINES SERIES AND FOURIER COSINES SERIES 133

16 Fourier Sines Series and Fourier Cosines Se-


ries
In this section we discuss some important properties of Fourier series when
the underlying function f is either even or odd.
A function f is odd if it satisfies f (−x) = −f (x) for all x in the domain of
f whereas f is even if it satisfies f (−x) = f (x) for all x in the domain of f.
Example 16.1
Show the following
(a) If f and g are either both even or both odd then f g is even.
(b) If f is odd and g is even then f g is odd.
Solution.
(a) Suppose that both f and g are even. Then (f g)(−x) = f (−x)g(−x) =
f (x)g(x) = (f g)(x). That is, f g is even. Now, suppose that both f and g
are odd. Then (f g)(−x) = f (−x)g(−x) = [−f (x)][−g(x)] = (f g)(x). That
is, f g is even.
(b) f is odd and g is even. Then (f g)(−x) = f (−x)g(−x) = −f (x)g(x) =
−(f g)(x). That is, f g is odd
Example 16.2
(a) Show that for any even function f (x) defined on the interval [−L, L], we
have Z L Z L
f (x)dx = 2 f (x)dx.
−L 0
(b) Show that for any odd function f (x) defined on the interval [−L, L], we
have Z L
f (x)dx = 0.
−L

Solution.
(a) Since f (x) is even, we have f (−x) = f (x) for all x in [−L, L]. Thus,
Z 0 Z 0 Z 0 Z L
f (x)dx = f (−x)dx = − f (u)du = f (x)dx.
−L −L L 0

For this, it follows that


Z L Z 0 Z L Z L
f (x)dx = f (x)dx + f (x)dx = 2 f (x)dx.
−L −L 0 0
134SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

(b) Since f (x) is odd, we have f (−x) = −f (x) for all x in [−L, L]. Thus,
Z 0 Z 0 Z 0 Z L
f (x)dx = [−f (−x)]dx = f (u)du = − f (x)dx.
−L −L L 0

Hence, Z 0 Z L Z L
0= f (x)dx + f (x)dx = f (x)dx
−L 0 −L

Example 16.3 RL nπ

(a) Find the value of the integral −L f (x) sin L
x dx when f is even.
RL nπ

(b) Find the value of the integral −L f (x) cos L
x dx when f is odd.
Solution.
(a) Since the function sin nπ x is odd and f is even, we have that f (x) sin nπ
 
L L
x
is odd so that Z L  nπ 
f (x) sin x dx = 0
−L L
(b) Since the function cos nπ nπ
 
L
x is even and f is odd, we have that f (x) cos L
x
is odd so that Z L  nπ 
f (x) cos x dx = 0
−L L
Even and Odd Extensions
Let f : (0, L) → R be a piecewise smooth function. We define the odd
extension of this function on the interval −L < x < L by

f (x) 0<x<L
fodd (x) =
−f (−x) −L < x < 0.
This function will be odd on the interval (−L, L), and will be equal to f (x)
on the interval (0, L). We can then further extend this function to the entire
real line by defining it to be 2L periodic. Let f odd denote this extension. We
note that f odd is an odd function and piecewise smooth so that by Theorem
15.2 it possesses a Fourier series expansion, and from the fact that it is odd
all of the a0n s are zero. Moreover, in the interval (0, L) we have

X  nπ 
f (x) = bn sin x . (16.1)
n=1
L
16 FOURIER SINES SERIES AND FOURIER COSINES SERIES 135

We call (16.1) the Fourier sine series of f.


The coefficients bn are given by the formula

1 L 2 L
Z  nπ  Z  nπ 
bn = f sin x dx = f sin x dx
L −L odd L L 0 odd L
2 L
Z  nπ 
= f (x) sin x dx
L 0 L

since f odd sin nπ



L
x is an even function.
Likewise, we can define the even extension of f on the interval −L < x < L
by 
f (x) 0<x<L
feven (x) =
f (−x) −L < x < 0.
We can then further extend this function to the entire real line by defining
it to be 2L periodic. Let f even denote this extension. Again, we note that
f even is equal to the original function f (x) on the interval upon which f (x)
is defined. Since f even is piecewise smooth, by Theorem 15.2 it possesses a
Fourier series expansion, and from the fact that it is even all of the b0n s are
zero. Moreover, in the interval (0, L) we have

a0 X  nπ 
f (x) = + an cos x . (16.2)
2 n=1
L

We call (16.2) the Fourier cosine series of f. The coefficients an are given
by
2 L
Z  nπ 
an = f (x) cos x dx, n = 0, 1, 2, · · · .
L 0 L

Example 16.4
Graph the odd and even extensions of the function f (x) = x, 0 ≤ x ≤ 1.

Solution.
We have fodd (x) = x for −1 ≤ x ≤ 1. The odd extension of f is shown in
Figure 16.1(a). Likewise,

x 0≤x≤1
feven (x) =
−x −1 ≤ x < 0.
136SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

The even extension is shown in Figure 16.1(b)

Figure 16.1

Example 16.5
Find the Fourier sine series of the function

0 ≤ x ≤ π2

x,
f (x) = π
π − x, 2
≤ x ≤ π.

Solution.
We have
"Z π
#
Z π
2 2
bn = x sin nxdx + (π − x) sin nxdx .
π 0 π
2

Using integration by parts we find

π Z π
Z
2
h x i π2 1 2
x sin nxdx = − cos nx + cos nxdx
0 n 0 n 0
π cos (nπ/2) 1 π
=− + 2 [sin nx]02
2n n
π cos (nπ/2) sin (nπ/2)
=− +
2n n2
16 FOURIER SINES SERIES AND FOURIER COSINES SERIES 137

while
π  π π
(π − x)
Z Z
1
(π − x) sin nxdx = − cos nx − cos nxdx
π
2
n π n π
2
2

π cos (nπ/2) 1
= − 2 [sin nx]ππ
2n n 2

π cos (nπ/2) sin (nπ/2)


= +
2n n2
Thus,
4 sin (nπ/2)
bn = ,
πn2
and the Fourier sine series of f (x) is
∞ ∞
X 4 sin (nπ/2) X 4(−1)n+1
f (x) = sin nx = sin (2n − 1)x
n=1
πn2 n=1
π(2n − 1)2
138SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Practice Problems
Problem 16.1
Give an example of a function that is both even and odd.

Problem 16.2
Graph the odd and even extensions of the function f (x) = 1, 0 ≤ x ≤ 1.

Problem 16.3
Graph the odd and even extensions of the function f (x) = L − x for 0 ≤ x ≤
L.

Problem 16.4
Graph the odd and even extensions of the function f (x) = 1 + x2 for 0 ≤
x ≤ L.

Problem 16.5
Find the Fourier cosine series of the function
0 ≤ x ≤ π2

x,
f (x) =
π − x, π2 ≤ x ≤ π.

Problem 16.6
Find the Fourier cosine series of f (x) = x on the interval [0, π].

Problem 16.7
Find the Fourier sine series of f (x) = 1 on the interval [0, π].

Problem 16.8
Find the Fourier sine series of f (x) = cos x on the interval [0, π].

Problem 16.9
Find the Fourier cosine series of f (x) = e2x on the interval [0, 1].

Problem 16.10
For the following functions on the interval [0, L], find the coefficients bn of
the Fourier sine expansion.
(a) f (x) = sin 2π

L
x .
(b) f (x) = 1
(c) f (x) = cos Lπ x .

16 FOURIER SINES SERIES AND FOURIER COSINES SERIES 139

Problem 16.11
For the following functions on the interval [0, L], find the coefficients an of
the Fourier cosine expansion.
(a) f (x) = 5 + cos Lπ x .


(b) f (x) = x
(c)
1 0 < x ≤ L2

f (x) =
0 L2 < x ≤ L.
Problem 16.12
Consider a function f (x), defined on 0 ≤ x ≤ L, which is even (symmetric)
around x = L2 . Show that the even coefficients (n even) of the Fourier sine
series are zero.
Problem 16.13
Consider a function f (x), defined on 0 ≤ x ≤ L, which is odd around x = L2 .
Show that the even coefficients (n even) of the Fourier cosine series are zero.
Problem 16.14
πx

The Fourier sine series of f (x) = cos L
for 0 ≤ x ≤ L is given by
 πx  ∞
X  nπx 
cos = bn sin , n∈N
L n=1
L

where
2n
b1 = 0, bn = [1 + (−1)n ].
(n2 − 1)π
πx

Using term-by-term integration, find the Fourier cosine series of sin L
.
Problem 16.15
Consider the function

1 0≤x<1
f (x) =
2 1 ≤ x < 2.
(a) Sketch the even extension of f.
(b) Find a0 in the Fourier series for the even extension of f.
(c) Find an (n = 1, 2, · · · ) in the Fourier series for the even extension of f.
(d) Find bn in the Fourier series for the even extension of f.
(e) Write the Fourier series for the even extension of f.
140SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

17 Separation of Variables for PDEs


Finding analytic solutions to PDEs is essentially impossible. Most of the
PDE techniques involve a mixture of analytic, qualitative and numeric ap-
proaches. Of course, there are some easy PDEs too. If you are lucky your
PDE has a solution with separable variables. In this chapter we discuss the
application of the method of separation of variables in the solution of PDEs.

17.1 Second Order Linear Homogenous ODE with Con-


stant Coefficients
In this section, we review the basics of finding the general solution to the
ODE
ay 00 + by 0 + cy = 0 (17.1)
where a, b, and c are constants. The process starts by solving the charac-
teristic equation
ar2 + br + c = 0
which is a quadratic equation with roots

−b ± b2 − 4ac
r1,2 = .
2a
We consider the following three cases:
• If b2 − 4ac > 0 then the general solution to (17.1) is given by
 √   √ 
−b− b2 −4ac −b+ b2 −4ac
2a
t 2a
t
y(t) = Ae + Be .
• If b2 − 4ac = 0 then the general solution to (17.1) is given by
b b
y(t) = Ae− 2a t + Bte− 2a t .
• If b2 − 4ac < 0 then

b 4ac − b2
r1,2 = − ± i
2a 2a
and the general solution to (17.1) is given by
√  √ 
b
− 2a t 4ac − b2 b
− 2a t 4ac − b2
y(t) = Ae cos t + Be sin t.
2a 2a
17 SEPARATION OF VARIABLES FOR PDES 141

17.2 The Method of Separation of Variables for PDEs


In developing a solution to a partial differential equation by separation of
variables, one assumes that it is possible to separate the contributions of
the independent variables into separate functions that each involve only one
independent variable. Thus, the method consists of the following steps
1. Factorize the (unknown) dependent variable of the PDE into a product of
functions, each of the factors being a function of one independent variable.
That is,
u(x, y) = X(x)Y (y).
2. Substitute into the PDE, and divide the resulting equation by X(x)Y (y).
3. Then the problem turns into a set of separated ODEs (one for X(x) and
one for Y (y).)
4. The general solution of the ODEs is found, and boundary initial condi-
tions are imposed.
5. u(x, y) is formed by multiplying together X(x) and Y (y).

We illustrate these steps in the next three examples.

Example 17.1
Find all the solutions of the form u(x, t) = X(x)T (t) of the equation

uxx − ux = ut .

Solution.
It is very easy to find the derivatives of a separable function:

ux = X 0 (x)T (t), ut = X(x)T 0 (t) and uxx = X 00 (x)T (t),

which is basically a consequence of the fact that differentiation with respect


to x sees t as a constant, and vice versa. Now, the equation uxx − ux = ut
becomes
X 00 (x)T (t) − X 0 (x)T (t) = X(x)T 0 (t).
We can separate variables further. Division by X(x)T (t) gives

X 00 (x) − X 0 (x) T 0 (t)


= .
X(x) T (t)
142SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

The expression on the LHS is a function of x whereas the one on the RHS is
a function of t only. They both have to be constant. That is,
X 00 (x) − X 0 (x) T 0 (t)
= = λ.
X(x) T (t)
Thus, we have the following ODEs:
X 00 − X 0 − λX = 0 and T 0 = λT.
The second equation is easy to solve: T (t) = Ceλt . The first equation is
solved via the characteristic equation ω 2 − ω − λ = 0, whose solutions are

1 ± 1 + 4λ
ω= .
2
If λ > − 14 then √ √
1+ 1+4λ 1− 1+4λ
x x
X(x) = Ae 2 + Be 2 .
In this case, √ √
1+ 1+4λ 1− 1+4λ
x λt x
u(x, t) = De 2 e + Ee 2 eλt .
If λ = − 41 then
x x
X(x) = Ae 2 + Bxe 2
and in this case
x t
u(x, t) = (D + Ex)e 2 − 4 .
If λ < − 14 then
p ! p !
x −(1 + 4λ) x −(1 + 4λ)
X(x) = Ae cos 2 x + Be 2 sin x .
2 2

In this case,
p ! p !
x −(1 + 4λ) x −(1 + 4λ)
u(x, t) = D0 e 2
+λt
cos x + B 0 e 2 +λt sin x
2 2

Example 17.2
Solve Laplace’s equation using the separation of variables method

∆u = uxx + uyy = 0.
17 SEPARATION OF VARIABLES FOR PDES 143

Solution.
We look for a solution of the form u(x, y) = X(x)Y (y). Substituting in the
Laplace’s equation, we obtain

X 00 (x)Y (y) + X(x)Y 00 (y) = 0.


Y 00 (y)
Assuming X(x)Y (y) is nonzero, dividing for X(x)Y (y) and subtracting Y (y)
from both sides, we find:
X 00 (x) Y 00 (y)
=− .
X(x) Y (y)
The left hand side is a function of x while the right hand side is a function
of y. This says that they must equal to a constant. That is,
X 00 (x) Y 00 (y)
=− =λ
X(x) Y (y)
where λ is a constant. This results in the following two ODEs
X 00 − λX = 0 and Y 00 + λY = 0.
The solutions of these equations depend on the sign of λ.
• If λ > 0 then the solutions are given
√ √
X(x) =Ae + Be− λx
λx
√ √
Y (y) =C cos λy + D sin λy

where A, B, C, and D are constants. In this case,


√ √ √ √
u(x, t) =k1 e λx cos λy + k2 e λx sin λy
√ √ √ √
+k3 e− λx cos λy + k4 e− λx sin λy.

• If λ = 0 then

X(x) =Ax + B
Y (y) =Cy + D

where A, B, and C are arbitrary constants. In this case,

u(x, y) = k1 xy + k2 x + k3 y + k4 .
144SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

• If λ < 0 then
√ √
X(x) =A cos −λx + B sin −λx
√ √
−λy
Y (y) =Ce + De− −λy

where A, B, C, and D are arbitrary constants. In this case,


√ √ √ √
u(x, y) =k1 cos −λxe −λy + k2 cos −λxe− −λy
√ √ √ √
+k3 sin −λxe −λy + k4 sin −λxe− −λy

Example 17.3
Solve using the separation of variables method.

yux − xuy = 0.

Solution.
Substitute u(x, t) = X(x)Y (y) into the given equation we find

yX 0 Y − xXY 0 = 0.

This can be separated into


X0 Y0
= .
xX yY
The left hand side is a function of x while the right hand side is a function
of y. This says that they must equal to a constant. That is,

X0 Y0
= =λ
xX yY
where λ is a constant. This results in the following two ODEs

X 0 − λxX = 0 and Y 0 − λyY = 0.

Solving these equations using the method of separation of variable for ODEs
λx2 λy 2
we find X(x) = Ae 2 and Y (y) = Be 2 . Thus, the general solution is given
by
λ(x2 +y 2 )
u(x, y) = Ce 2
17 SEPARATION OF VARIABLES FOR PDES 145

Practice Problems
Problem 17.1
Solve using the separation of variables method

∆u + λu = 0.

Problem 17.2
Solve using the separation of variables method

ut = kuxx .

Problem 17.3
Derive the system of ordinary differential equations for R(r) and Θ(θ) that
is satisfied by solutions to
1 1
urr + ur + 2 uθθ = 0.
r r
Problem 17.4
Derive the system of ordinary differential equations and boundary conditions
for X(x) and T (t) that is satisfied by solutions to

utt = uxx − 2u, 0 < x < 1, t > 0

u(0, t) = 0 = ux (1, t) t > 0


of the form u(x, t) = X(x)T (t). (Note: you do not need to solve for X and
T .)

Problem 17.5
Derive the system of ordinary differential equations and boundary conditions
for X(x) and T (t) that is satisfied by solutions to

ut = kuxx , 0 < x < L, t > 0

u(x, 0) = f (x), u(0, t) = 0 = ux (L, t) t > 0


of the form u(x, t) = X(x)T (t). (Note: you do not need to solve for X and
T .)

Problem 17.6
Find all product solutions of the PDE ux + ut = 0.
146SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Problem 17.7
Derive the system of ordinary differential equations for X(x) and Y (y) that
is satisfied by solutions to

3uyy − 5uxxxy + 7uxxy = 0.

of the form u(x, y) = X(x)Y (y).

Problem 17.8
Find the general solution by the method of separation of variables.

uxy + u = 0.

Problem 17.9
Find the general solution by the method of separation of variables.

ux − yuy = 0.

Problem 17.10
Find the general solution by the method of separation of variables.

utt − uxx = 0.

Problem 17.11
For the following PDEs find the ODEs implied by the method of separation
of variables.
(a) ut = kr(rur )r
(b) ut = kuxx − αu
(c) ut = kuxx − aux
(d) uxx + uyy = 0
(e) ut = kuxxxx .

Problem 17.12
Find all solutions to the following partial differential equation that can be
obtained via the separation of variables.

ux − uy = 0.

Problem 17.13
Separate the PDE uxx − uy + uyy = u into two ODEs with a parameter. You
do not need to solve the ODEs.
18 SOLUTIONS OF THE HEAT EQUATION BY THE SEPARATION OF VARIABLES METHOD147

18 Solutions of the Heat Equation by the Sep-


aration of Variables Method
In this section we apply the method of separation of variables in solving the
one spatial dimension of the heat equation.

The Heat Equation with Dirichlet Boundary Conditions


Consider the problem of finding all nontrivial solutions to the heat equation
ut = kuxx that satisfies the initial time condition u(x, 0) = f (x) and the
Dirichlet boundary conditions u(0, t) = u(L, t) = 0 (that is, the endpoints
are assumed to be at zero temperature) with u not the trivial solution. Let’s
assume that the solution can be written in the form u(x, t) = X(x)T (t).
Substituting into the heat equation we obtain

X 00 T0
= .
X kT
Since the LHS only depends on x and the RHS only depends on t, there must
be a constant λ such that
X 00 T0
X
= λ and kT
= λ.

This gives the two ordinary differential equations

X 00 − λX = 0 and T 0 − kλT = 0.

As far as the boundary conditions, we have

u(0, t) = 0 = X(0)T (t) =⇒ X(0) = 0

and
u(L, t) = 0 = X(L)T (t) =⇒ X(L) = 0.
Note that T is not the zero function for otherwise u ≡ 0 and this contradicts
our assumption that u is the non-trivial solution.
Next, we consider the three cases of the sign of λ.
148SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Case 1: λ = 0
In this case, X 00 = 0. Solving this equation we find X(x) = ax + b. Since
X(0) = 0 we find b = 0. Since X(L) = 0 we find a = 0. Hence, X ≡ 0 and
u(x, t) ≡ 0. That is, u is the trivial solution.

Case 2: λ > 0 √ √
In this case, X(x) = Ae λx + Be− λx . Again, the conditions X(0) = X(L) =
0 imply A = B = 0 and hence the solution is the trivial solution.

Case 3: λ < 0 √ √
In this case, X(x) = A cos −λx + B sin −λx. The√condition X(0) = 0
implies A = 0. The condition X(L) = 0 implies B sin −λL = 0. We must
have B 6= 0 otherwise√X(x) = 0 and√this leads to the trivial solution. Since
B 6= 0, we obtain sin −λL = 0 or −λL = nπ where n ∈ N. Solving for λ
2 2
we find λ = − nLπ2 . Thus, we obtain infinitely many solutions given by

Xn (x) = An sin x, n = 1, 2, · · · .
L
Now, solving the equation
T 0 − λkT = 0
by the method of separation of variables we obtain
n2 π 2
Tn (t) = Bn e− L2
kt
, n = 1, 2, · · ·
Hence, the functions
 nπ  n2 π2
un (x, t) = Cn sin x e− L2 kt , n = 1, 2, · · ·
L
satisfy ut = kuxx and the boundary conditions u(0, t) = u(L, t) = 0.
Now, in order for these solutions to satisfy the initial value condition u(x, 0) =
f (x), we invoke the superposition principle of linear PDE to write

X  nπ  n2 π2
u(x, t) = Cn sin x e− L2 kt . (18.1)
n=1
L

To determine the unknown constants Cn we use the initial condition u(x, 0) =


f (x) in (18.1) to obtain

X  nπ 
f (x) = Cn sin x .
n=1
L
18 SOLUTIONS OF THE HEAT EQUATION BY THE SEPARATION OF VARIABLES METHOD149

Since the right-hand side is the Fourier sine series of f on the interval [0, L],
the coefficients Cn are given by
Z L
2  nπ 
Cn = f (x) sin x dx. (18.2)
L 0 L

Thus, the solution to the heat equation is given by (18.1) with the Cn0 s cal-
culated from (18.2).

The Heat Equation with Neumann Boundary Conditions


When both ends of the bar are insulated, that is, there is no heat flow out
of them, we use the boundary conditions

ux (0, t) = ux (L, t) = 0.

In this case, the general form of the heat equation initial boundary value
problem is to find u(x, t) satisfying

ut (x, t) =kuxx (x, t), 0 ≤ x ≤ L, t > 0


u(x, 0) =f (x), 0 ≤ x ≤ L
ux (0, t) =ux (L, t) = 0, t > 0.

Since 0 = ux (0, t) = X 0 (0)T (t) we obtain X 0 (0) = 0. Likewise, 0 = ux (L, t) =


X 0 (L)T (t) implies X 0 (L) = 0. We again consider the following three cases:
• If λ = 0 then X(x) = A + Bx. Since X 0 (0) = 0, we find B = 0. Thus,
X(x) = A and T (t) = constant so that u(x, t) = constant which is impossible
if f (x) is not the constant function.
• If λ > 0 then a simple calculation shows that u(x, t) is the trivial solution.
Again, because of the condition √ u(x, 0) = f (x),
√ this solution is discarded.
• If λ < 0 then X(x) = A cos −λx + B sin −λx and upon differentiation
with respect to x we find
√ √ √ √
X 0 (x) = − −λA sin −λx + −λB cos −λx.
√ √ √
The conditions X 0 (0) = X 0 (L) = 0 imply −λB = 0 and −λA sin −λL =
2 2
0. Hence, B = 0 and λ = − nLπ2 and
 nπ 
Xn (x) = An cos x , n = 1, 2, · · ·
L
150SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

and  nπ  n2 π2
un (x, t) = Cn cos x e− L2 kt .
L
By the superposition principle, the required solution to the heat equation
with Neumann boundary conditions is given by

X  nπ  n2 π2
u(x, t) = Cn cos x e− L2 kt .
n=1
L

In order to satisfy the initial condition u(x, 0) = f (x), we let



C0 X  nπ  n2 π2
u(x, t) = + Cn cos x e− L2 kt
2 n=1
L

where Z L
2  nπ 
Cn = f (x) cos x dx, n = 0, 1, 2, · · · .
L 0 L
18 SOLUTIONS OF THE HEAT EQUATION BY THE SEPARATION OF VARIABLES METHOD151

Practice Problems
Problem 18.1
Find the temperature in a bar of length 2 whose ends are kept at zero
surface insulated if the initial temperature is f (x) = sin π2 x +

and lateral
3 sin 5π

2
x .
Problem 18.2
Find the temperature in a homogeneous bar of heat conducting material of
length L with its end points kept at zero and initial temperature distribution
given by f (x) = Lxd2 (L − x), 0 ≤ x ≤ L.
Problem 18.3
Find the temperature in a thin metal rod of length L, with both ends insu-
lated (so that there is no passage of heat
 through the ends) and with initial
π
temperature in the rod f (x) = sin L x .
Problem 18.4
Solve the following heat equation with Dirichlet boundary conditions
ut = kuxx
u(0, t) = u(L, t) = 0
1 0 ≤ x < L2

u(x, 0) =
2 L2 ≤ x ≤ L.
Problem 18.5
Solve
ut = kuxx
u(0, t) = u(L, t) = 0
 

u(x, 0) = 6 sin x .
L
Problem 18.6
Solve
ut = kuxx
subject to
ux (0, t) = ux (L, t) = 0
0 0 ≤ x < L2

u(x, 0) =
1 L2 ≤ x ≤ L.
152SECOND ORDER LINEAR PARTIAL DIFFERENTIAL EQUATIONS

Problem 18.7
Solve
ut = kuxx
subject to
ux (0, t) = ux (L, t) = 0
 

u(x, 0) = 6 + 4 cos x .
L
Problem 18.8
Solve
ut = kuxx
subject to
ux (0, t) = ux (L, t) = 0
 

u(x, 0) = −3 cos x .
L
Problem 18.9
Find the general solution u(x, t) of
ut = uxx − u, 0 < x < L, t > 0
ux (0, t) = 0 = ux (L, t), t > 0.
Briefly describe its behavior as t → ∞.
Problem 18.10 (Energy method)
Let u1 and u2 be two solutions to the Neumann boundary value problem
ut = uxx − u, 0 < x < 1, t > 0
ux (0, t) = ux (1, t) = 0, t > 0
u(x, 0) = g(x), 0 < x < 1
Define w(x, t) = u1 (x, t) − u2 (x, t).
(a) Show that w satisfies the initial value problem
wt = wxx − w, 0 < x < 1, t > 0
wx (0, t) = wx (1, t) = w(x, 0) = 0, 0 < x < 1, t > 0
R1
(b) Define E(t) = 0 w2 (x, t)dx ≥ 0 for all t ≥ 0. Show that E 0 (t) ≤ 0.
Hence, 0 ≤ E(t) ≤ E(0) for all t > 0.
(c) Show that E(t) = 0, w(x, t) = 0. Hence, conclude that u1 = u2 .
18 SOLUTIONS OF THE HEAT EQUATION BY THE SEPARATION OF VARIABLES METHOD153

Problem 18.11
Consider heat conduction in a bar where the left end temperature is maintained at 0, and the right end is perfectly insulated. We assume k = 1 and L = 1.
(a) Derive the boundary conditions of the temperature at the endpoints.
(b) Following the separation of variables approach, derive the ODEs for X and T.
(c) Consider the equation in X(x). What are the values of X(0) and X′(1)? Show that solutions of the form X(x) = sin(√(−λ) x) satisfy the ODE and one of the boundary conditions. Can you choose a value of λ so that the other boundary condition is also satisfied?

Problem 18.12
Using the method of separation of variables find the solution of the heat
equation
ut = kuxx
satisfying the following boundary and initial conditions:
(a) u(0, t) = u(L, t) = 0, u(x, 0) = 6 sin(9πx/L).
(b) u(0, t) = u(L, t) = 0, u(x, 0) = 3 sin(πx/L) − sin(3πx/L).
Problem 18.13
Using the method of separation of variables find the solution of the heat
equation
ut = kuxx
satisfying the following boundary and initial conditions: 
(a) ux(0, t) = ux(L, t) = 0, u(x, 0) = cos(πx/L) + 4 cos(5πx/L).
(b) ux(0, t) = ux(L, t) = 0, u(x, 0) = 5.

Problem 18.14
Find the solution of the following heat conduction partial differential equation

ut = 8uxx , 0 < x < 4π, t > 0

u(0, t) = u(4π, t) = 0, t > 0


u(x, 0) = 6 sin x, 0 < x < 4π.

19 Elliptic Type: Laplace's Equations in Rectangular Domains
Boundary value problems are of great importance in physical applications.
Mathematically, a boundary-value problem consists of finding a function
which satisfies a given partial differential equation and particular bound-
ary conditions. Physically speaking, the problem is independent of time,
involving only space coordinates.
Just as initial-value problems are associated with hyperbolic PDE, bound-
ary value problems are associated with PDE of elliptic type. In contrast to
initial-value problems, boundary-value problems are considerably more diffi-
cult to solve.
The main model example of an elliptic type PDE is the Laplace equation

∆u = uxx + uyy = 0 (19.1)

where the symbol ∆ is referred to as the Laplacian. Solutions of this equa-


tion are called harmonic functions.

Example 19.1
Show that, for all (x, y) ≠ (0, 0), u(x, y) = ax² − ay² + cx + dy + e is a harmonic function, where a, c, d, and e are constants.

Solution.
We have

ux =2ax + c
uxx =2a
uy = − 2ay + d
uyy = − 2a.

Plugging these expressions into the equation we find uxx + uyy = 0. Hence,
u(x, y) is harmonic
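The computation in Example 19.1 is easy to reproduce symbolically. The following sympy sketch (not part of the original notes) simply recomputes uxx + uyy for the given u.

```python
import sympy as sp

x, y, a, c, d, e = sp.symbols('x y a c d e')
u = a*x**2 - a*y**2 + c*x + d*y + e

# Laplacian of u; printing 0 confirms that u is harmonic
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))
```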

The Laplace equation is arguably the most important differential equation in


all of applied mathematics. It arises in an astonishing variety of mathemati-
cal and physical systems, ranging through fluid mechanics, electromagnetism,
potential theory, solid mechanics, heat conduction, geometry, probability,

number theory, and on and on.


There are two main modifications of the Laplace equation: the Poisson
equation (a non-homogeneous Laplace equation):

∆u = f (x, y)

and the eigenvalue problem (the Helmholtz equation):

∆u = λu, λ ∈ R.

Solving Laplace’s Equation (19.1)


Note first that both independent variables are spatial variables and each occurs in a second order derivative, so we will need two boundary conditions for each variable, for a total of four boundary conditions.
Consider (19.1) in the rectangle

Ω = {(x, y) : 0 ≤ x ≤ a, 0 ≤ y ≤ b}

with the Dirichlet boundary conditions

u(0, y) = f1 (y), u(a, y) = f2 (y), u(x, 0) = g1 (x), u(x, b) = g2 (x)

where 0 ≤ x ≤ a and 0 ≤ y ≤ b.
The separation of variables method is most successful when the boundary
conditions are homogeneous. Thus, solving the Laplace’s equation in Ω re-
quires solving four initial boundary conditions problems, where in each prob-
lem three of the four conditions are homogeneous. The four problems to be
solved are

(I) uxx + uyy = 0, u(0, y) = f1(y), u(a, y) = u(x, 0) = u(x, b) = 0;
(II) uxx + uyy = 0, u(a, y) = f2(y), u(0, y) = u(x, 0) = u(x, b) = 0;
(III) uxx + uyy = 0, u(x, 0) = g1(x), u(0, y) = u(a, y) = u(x, b) = 0;
(IV) uxx + uyy = 0, u(x, b) = g2(x), u(0, y) = u(a, y) = u(x, 0) = 0.

If we let ui (x, y), i = 1, 2, 3, 4, denote the solution of each of the above


problems, then the solution to our original system will be

u(x, y) = u1 (x, y) + u2 (x, y) + u3 (x, y) + u4 (x, y).



In each of the above problems, we will apply separation of variables to (19.1)


and find a product solution that will satisfy the differential equation and the
three homogeneous boundary conditions. Using the Principle of Superposi-
tion we will find a solution to the problem and then apply the final boundary
condition to determine the value of the constant(s) that are left in the prob-
lem. The process is nearly identical in many ways to what we did when we
were solving the heat equation.
We will illustrate how to find u(x, y) = u4 (x, y). So let’s assume that the so-
lution can be written in the form u(x, y) = X(x)Y (y). Substituting in (19.1),
we obtain
X 00 (x)Y (y) + X(x)Y 00 (y) = 0.
Assuming X(x)Y(y) is nonzero (that is, u is a non-trivial solution), dividing by X(x)Y(y) and moving Y″(y)/Y(y) to the right-hand side, we find

X″(x)/X(x) = −Y″(y)/Y(y).

The left hand side is a function of x while the right hand side is a function of y. This says that they must both equal a constant. That is,

X″(x)/X(x) = −Y″(y)/Y(y) = λ

where λ is a constant. This results in the following two ODEs

X″ − λX = 0 and Y″ + λY = 0.

As far as the boundary conditions, we have for all 0 ≤ x ≤ a and 0 ≤ y ≤ b

u(0, y) = 0 = X(0)Y (y) =⇒ X(0) = 0

u(a, y) = 0 = X(a)Y (y) =⇒ X(a) = 0


u(x, 0) = 0 = X(x)Y (0) =⇒ Y (0) = 0
u(x, b) = g2 (x) = X(x)Y (b).
Note that X and Y are not the zero functions for otherwise u ≡ 0 and this
contradicts our assumption that u is the non-trivial solution.
Consider the first equation: since X″ − λX = 0 the solution depends on the sign of λ. If λ = 0 then X(x) = Ax + B. Now, the conditions X(0) = X(a) = 0 imply A = B = 0 and so u ≡ 0. So assume that λ ≠ 0. If λ > 0 then X(x) = A e^(√λ x) + B e^(−√λ x). Now, the conditions X(0) = X(a) = 0 and λ ≠ 0 imply A = B = 0 and hence the solution is the trivial solution. Hence, in order to have a nontrivial solution we must have λ < 0. In this case,

X(x) = A cos(√(−λ) x) + B sin(√(−λ) x).

The condition X(0) = 0 implies A = 0. The condition X(a) = 0 implies B sin(√(−λ) a) = 0. We must have B ≠ 0, otherwise X(x) = 0 and this leads to the trivial solution. Since B ≠ 0, we obtain sin(√(−λ) a) = 0 or √(−λ) a = nπ where n ∈ N. Solving for λ we find λn = −n²π²/a². Thus, we obtain infinitely many solutions given by

Xn(x) = sin(nπx/a), n ∈ N.
Now, solving the equation

Y″ + λY = 0

we obtain

Yn(y) = an e^(√(−λn) y) + bn e^(−√(−λn) y) = An cosh(nπy/a) + Bn sinh(nπy/a), n ∈ N.

Using the boundary condition Y(0) = 0 we obtain An = 0 for all n ∈ N. Hence, the functions

un(x, y) = Bn sin(nπx/a) sinh(nπy/a), n ∈ N

satisfy (19.1) and the boundary conditions u(0, y) = u(a, y) = u(x, 0) = 0. Now, in order for these solutions to satisfy the boundary value condition u(x, b) = g2(x), we invoke the superposition principle of linear PDE to write

u(x, y) = Σ_{n=1}^∞ Bn sin(nπx/a) sinh(nπy/a).     (19.2)

To determine the unknown constants Bn we use the boundary condition u(x, b) = g2(x) in (19.2) to obtain

g2(x) = Σ_{n=1}^∞ Bn sinh(nπb/a) sin(nπx/a).

Since the right-hand side is the Fourier sine series of g2 on the interval [0, a], the coefficients Bn are given by

Bn = [(2/a) ∫_0^a g2(x) sin(nπx/a) dx] [sinh(nπb/a)]^(−1).     (19.3)

Thus, the solution to Laplace's equation is given by (19.2) with the Bn's calculated from (19.3).
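For a concrete boundary function the coefficients (19.3) can be computed numerically. The Python sketch below is an illustration only; the rectangle dimensions a and b, the boundary data g2, and the truncation level N are assumptions, not values from the text.

```python
import numpy as np

# Illustrative choices (assumptions, not from the text): a, b, g2, and N
a, b, N = 2.0, 1.0, 30
g2 = lambda x: x * (a - x)

xq = np.linspace(0, a, 2001)

def B(n):
    # (19.3): B_n = [(2/a) * integral_0^a g2(x) sin(n*pi*x/a) dx] / sinh(n*pi*b/a)
    num = 2.0 / a * np.trapz(g2(xq) * np.sin(n * np.pi * xq / a), xq)
    return num / np.sinh(n * np.pi * b / a)

def u(x, y):
    # truncated series (19.2)
    return sum(B(n) * np.sin(n * np.pi * x / a) * np.sinh(n * np.pi * y / a)
               for n in range(1, N + 1))

print(abs(u(1.3, b) - g2(1.3)) < 1e-3)   # the top-edge condition u(x, b) = g2(x) is recovered
```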

Example 19.2
Solve 
 uxx + uyy = 0
u(0, y) = f1 (y),
u(a, y) = u(x, 0) = u(x, b) = 0.

Solution.
Assume that the solution can be written in the form u(x, y) = X(x)Y (y).
Substituting in (19.1), we obtain

X 00 (x)Y (y) + X(x)Y 00 (y) = 0.


Assuming X(x)Y(y) is nonzero, dividing by X(x)Y(y) and moving Y″(y)/Y(y) to the right-hand side, we find

X″(x)/X(x) = −Y″(y)/Y(y).

The left hand side is a function of x while the right hand side is a function of y. This says that they must both equal a constant. That is,

X″(x)/X(x) = −Y″(y)/Y(y) = λ

where λ is a constant. This results in the following two ODEs

X″ − λX = 0 and Y″ + λY = 0.
As far as the boundary conditions, we have for all 0 ≤ x ≤ a and 0 ≤ y ≤ b

u(0, y) = f1(y) = X(0)Y(y)
u(a, y) = 0 = X(a)Y(y) =⇒ X(a) = 0
u(x, 0) = 0 = X(x)Y(0) =⇒ Y(0) = 0
u(x, b) = 0 = X(x)Y(b) =⇒ Y(b) = 0.
Note that X and Y are not the zero functions for otherwise u ≡ 0 and this
contradicts our assumption that u is the non-trivial solution.
Consider the second equation: since Y″ + λY = 0 the solution depends on the sign of λ. If λ = 0 then Y(y) = Ay + B. Now, the conditions Y(0) = Y(b) = 0 imply A = B = 0 and so u ≡ 0. So assume that λ ≠ 0. If λ < 0 then Y(y) = A e^(√(−λ) y) + B e^(−√(−λ) y). Now, the conditions Y(0) = Y(b) = 0 imply A = B = 0 and hence the solution is the trivial solution. Hence, in order to have a nontrivial solution we must have λ > 0. In this case,

Y(y) = A cos(√λ y) + B sin(√λ y).

The condition Y(0) = 0 implies A = 0. The condition Y(b) = 0 implies B sin(√λ b) = 0. We must have B ≠ 0, otherwise Y(y) = 0 and this leads to the trivial solution. Since B ≠ 0, we obtain sin(√λ b) = 0 or √λ b = nπ where n ∈ N. Solving for λ we find λn = n²π²/b². Thus, we obtain infinitely many solutions given by

Yn(y) = sin(nπy/b), n ∈ N.
Now, solving the equation

X″ − λX = 0, λ > 0

we obtain

Xn(x) = an e^(√(λn) x) + bn e^(−√(λn) x) = An cosh(nπx/b) + Bn sinh(nπx/b), n ∈ N.

However, this is not really suited for dealing with the boundary condition X(a) = 0. So, let's also notice that the following is also a solution:

Xn(x) = An cosh(nπ(x − a)/b) + Bn sinh(nπ(x − a)/b), n ∈ N.

Now, using the boundary condition X(a) = 0 we obtain An = 0 for all n ∈ N. Hence, the functions

un(x, y) = Bn sin(nπy/b) sinh(nπ(x − a)/b), n ∈ N

satisfy (19.1) and the boundary conditions u(a, y) = u(x, 0) = u(x, b) = 0. Now, in order for these solutions to satisfy the boundary value condition u(0, y) = f1(y), we invoke the superposition principle of linear PDE to write

u(x, y) = Σ_{n=1}^∞ Bn sin(nπy/b) sinh(nπ(x − a)/b).     (19.4)

To determine the unknown constants Bn we use the boundary condition u(0, y) = f1(y) in (19.4) to obtain

f1(y) = Σ_{n=1}^∞ Bn sinh(−nπa/b) sin(nπy/b).

Since the right-hand side is the Fourier sine series of f1 on the interval [0, b], the coefficients Bn are given by

Bn = [(2/b) ∫_0^b f1(y) sin(nπy/b) dy] [sinh(−nπa/b)]^(−1).     (19.5)

Thus, the solution to Laplace's equation is given by (19.4) with the Bn's calculated from (19.5)

Example 19.3
Solve
uxx + uyy = 0, 0 < x < L, 0 < y < H
u(0, y) = u(L, y) = 0, 0 < y < H
u(x, 0) = uy (x, 0), u(x, H) = f (x), 0 < x < L.

Solution.
Using separation of variables we find

X″/X = −Y″/Y = λ.

We first solve

X″ − λX = 0, 0 < x < L, X(0) = X(L) = 0.

We find λn = −n²π²/L² and

Xn(x) = sin(nπx/L), n ∈ N.
Next we need to solve

Y″ + λY = 0, 0 < y < H, Y(0) − Y′(0) = 0.

The solution of the ODE is

Yn(y) = An cosh(nπy/L) + Bn sinh(nπy/L), n ∈ N.

The boundary condition Y(0) − Y′(0) = 0 implies

An − (nπ/L)Bn = 0.

Hence,

Yn = (nπ/L)Bn cosh(nπy/L) + Bn sinh(nπy/L), n ∈ N.

Using the superposition principle and the results above we have

u(x, y) = Σ_{n=1}^∞ Bn sin(nπx/L) [(nπ/L) cosh(nπy/L) + sinh(nπy/L)].

Substituting in the condition u(x, H) = f(x) we find

f(x) = Σ_{n=1}^∞ Bn sin(nπx/L) [(nπ/L) cosh(nπH/L) + sinh(nπH/L)].

Recall the Fourier sine series of f on [0, L] given by

f(x) = Σ_{n=1}^∞ An sin(nπx/L)

where

An = (2/L) ∫_0^L f(x) sin(nπx/L) dx.

Thus, the general solution is given by

u(x, y) = Σ_{n=1}^∞ Bn sin(nπx/L) [(nπ/L) cosh(nπy/L) + sinh(nπy/L)]

with the Bn satisfying

Bn [(nπ/L) cosh(nπH/L) + sinh(nπH/L)] = (2/L) ∫_0^L f(x) sin(nπx/L) dx.

Practice Problems
Problem 19.1
Solve 
 uxx + uyy = 0
u(a, y) = f2 (y),
u(0, y) = u(x, 0) = u(x, b) = 0.

Problem 19.2
Solve 
 uxx + uyy = 0
u(x, 0) = g1 (x),
u(0, y) = u(a, y) = u(x, b) = 0.

Problem 19.3
Solve 
 uxx + uyy = 0
u(x, 0) = u(0, y) = 0,
u(1, y) = 2y, u(x, 1) = 3 sin πx + 2x

where 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. Hint: Define U (x, y) = u(x, y) − 2xy.


Problem 19.4
Show that u(x, y) = x2 − y 2 and u(x, y) = 2xy are harmonic functions.
Problem 19.5
Solve
H H
uxx + uyy = 0, 0 ≤ x ≤ L, − ≤y≤
2 2
subject to
H H
u(0, y) = u(L, y) = 0, − <y<
2 2
H H
u(x, − ) = f1 (x), u(x, ) = f2 (x), 0 ≤ x ≤ L.
2 2
Problem 19.6 √
Consider a complex valued function f (z) = u(x, y) + iv(x, y) where i = −1.
We say that f is holomorphic or analytic if and only if f can be expressed
as a power series in z, i.e.

X
u(x, y) + iv(x, y) = an z n .
n=0

(a) By differentiating with respect to x and y show that



ux = vy and uy = −vx

These are known as the Cauchy-Riemann equations.


(b) Show that ∆u = 0 and ∆v = 0.

Problem 19.7
Show that Laplace’s equation in polar coordinates is given by

1 1
urr + ur + 2 uθθ = 0.
r r

Problem 19.8
Solve
uxx + uyy = 0, 0 ≤ x ≤ 2, 0 ≤ y ≤ 3
subject to
x
u(x, 0) = 0, u(x, 3) =
2
 

u(0, y) = sin y , u(2, y) = 7.
3

Problem 19.9
Solve
uxx + uyy = 0, 0 ≤ x ≤ L, 0 ≤ y ≤ H
subject to
uy (x, 0) = 0, u(x, H) = 0
 πy 
u(0, y) = u(L, y) = 4 cos .
2H

Problem 19.10
Solve
uxx + uyy = 0, x > 0, 0 ≤ y ≤ H
subject to
u(0, y) = f (y), |u(x, 0)| < ∞

uy (x, 0) = uy (x, H) = 0.

Problem 19.11
Consider Laplace’s equation inside a rectangle

uxx + uyy = 0, 0 ≤ x ≤ L, 0 ≤ y ≤ H

subject to the boundary conditions


 πx   
3πx
u(0, y) = 0, u(L, y) = 0, u(x, 0)−uy (x, 0) = 0, u(x, H) = 20 sin −5 sin .
L L

Find the solution u(x, y).

Problem 19.12
Solve Laplace’e equation uxx + uyy = 0 in the rectangle 0 < x, y < 1 subject
to the conditions
u(0, y) = u(1, y) = 0, 0 < y < 1
u(x, 0) = sin (2πx), uy (x, 0) = −2π sin (2πx), 0 < x < 1.

Problem 19.13
Find the solution to Laplace’s equation on the rectangle 0 < x < 1, 0 < y < 1
with boundary conditions

u(x, 0) = 0, u(x, 1) = 1

ux (0, y) = ux (1, y) = 0.

Problem 19.14
Solve Laplace’s equation on the rectangle 0 < x < a, 0 < y < b with the
boundary conditions

ux (0, y) = −a, ux (a, y) = 0

uy (x, 0) = b, uy (x, b) = 0.

Problem 19.15
Solve Laplace’s equation on the rectangle 0 < x < π, 0 < y < 2 with the
boundary conditions
u(0, y) = u(π, y) = 0
uy (x, 0) = 0, uy (x, 2) = 2 sin 3x − 5 sin 10x.

20 Laplace’s Equations in Circular Regions


In the previous section we solved the Dirichlet problem for Laplace’s equation
on a rectangular region. However, if the domain of the solution is a disc,
an annulus, or a circular wedge, it is useful to study the two-dimensional
Laplace’s equation in polar coordinates.
It is well known in calculus that the cartesian coordinates (x, y) and the polar
coordinates (r, θ) of a point are related by the formulas
x = r cos θ and y = r sin θ
where r = (x² + y²)^(1/2) and tan θ = y/x. Using the chain rule we obtain

ux = ur rx + uθ θx = cos θ ur − (sin θ/r) uθ

uxx = uxr rx + uxθ θx
    = cos θ [cos θ urr + (sin θ/r²) uθ − (sin θ/r) urθ]
    − (sin θ/r) [− sin θ ur + cos θ urθ − (cos θ/r) uθ − (sin θ/r) uθθ]

uy = ur ry + uθ θy = sin θ ur + (cos θ/r) uθ

uyy = uyr ry + uyθ θy
    = sin θ [sin θ urr − (cos θ/r²) uθ + (cos θ/r) urθ]
    + (cos θ/r) [cos θ ur + sin θ urθ − (sin θ/r) uθ + (cos θ/r) uθθ].

Substituting these equations into ∆u = 0 we obtain

urr + (1/r) ur + (1/r²) uθθ = 0.     (20.1)
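A quick symbolic check of (20.1) can be reassuring, since the chain-rule computation above is easy to get wrong by hand. The sympy sketch below (not part of the notes) verifies that one known harmonic function, r³ cos 3θ, satisfies the polar form of Laplace's equation.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
u = r**3 * sp.cos(3*theta)             # a known harmonic function (see Problem 20.14)

polar_laplacian = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, theta, 2)/r**2
print(sp.simplify(polar_laplacian))    # 0
```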
Example 20.1
Find the solution to
∆u = 0, x2 + y 2 < a2
subject to
(i) Boundary condition: u(a, θ) = f (θ), 0 ≤ θ ≤ 2π.
(ii) Boundedness at the origin: |u(0, θ)| < ∞.
(iii) Periodicity: u(r, θ + 2π) = u(r, θ), 0 ≤ θ ≤ 2π.

Solution.
First, note that (iii) implies that u(r, 0) = u(r, 2π) and uθ (r, 0) = uθ (r, 2π).
Next, we will apply the method of separation of variables to (20.1). Suppose
that a solution u(r, θ) of (20.1) can be written in the form u(r, θ) = R(r)Θ(θ).
Substituting in (20.1) we obtain
R″(r)Θ(θ) + (1/r) R′(r)Θ(θ) + (1/r²) R(r)Θ″(θ) = 0.

Dividing by RΘ (under the assumption that RΘ ≠ 0) we obtain

Θ″(θ)/Θ(θ) = −r² R″(r)/R(r) − r R′(r)/R(r).

The left-hand side is independent of r whereas the right-hand side is independent of θ so that there is a constant λ such that

−Θ″(θ)/Θ(θ) = r² R″(r)/R(r) + r R′(r)/R(r) = λ.
This results in the following ODEs

Θ00 (θ) + λΘ(θ) = 0 (20.2)

and
r2 R00 (r) + rR0 (r) − λR(r) = 0. (20.3)
The second equation is known as Euler’s equation. Both of these equations
are easily solvable. To solve (20.2), We only have to add the appropriate
boundary conditions. We have Θ(0) = Θ(2π) and Θ0 (0) = Θ0 (2π). The
periodicity of Θ implies that λ = n2 and Θ must be of the form

Θn (θ) = A0n cos nθ + Bn0 sin nθ, n = 0, 1, 2 · · ·

The equation in R is of Euler type and its solution must be of the form
R(r) = rα . Since λ = n2 , the corresponding characteristic equation is

α(α − 1)rα + αrα − n2 rα = 0.

Solving this equation we find α = ±n. Hence, we let

Rn (r) = Cn rn + Dn r−n , n ∈ N.

For n = 0, R = 1 is a solution. To find a second solution, we solve the


equation
r2 R00 + rR0 = 0.
This can be done by dividing through by r and using the substitution S = R′ to obtain rS′ + S = 0. Solving this by noting that the left-hand side is just (rS)′ we find S = c/r. Hence, R′ = c/r and this implies R(r) = C ln r. Thus, R = 1 and R = ln r form a fundamental set of solutions of (20.3) and so a general solution is given by

R0(r) = C0 + D0 ln r.

By assumption (ii), u(r, θ) must be bounded at r = 0, and so must Rn. Since r^(−n) and ln r are unbounded at r = 0, we must set D0 = Dn = 0. In this case, the solutions to Euler's equation are given by

Rn(r) = Cn r^n, n = 0, 1, 2, ··· .

Using the superposition principle, and combining the results obtained above,
we find

u(r, θ) = C0 + Σ_{n=1}^∞ r^n (An cos nθ + Bn sin nθ).

Now, using the boundary condition u(a, θ) = f(θ) we can write

f(θ) = C0 + Σ_{n=1}^∞ (a^n An cos nθ + a^n Bn sin nθ)

which is usually written in a more convenient equivalent form by

f(θ) = a0/2 + Σ_{n=1}^∞ (an cos nθ + bn sin nθ).

It is obvious that an and bn are the Fourier coefficients, and therefore can be determined by the formulas

an = (1/π) ∫_0^(2π) f(θ) cos nθ dθ, n = 0, 1, ···

and

bn = (1/π) ∫_0^(2π) f(θ) sin nθ dθ, n = 1, 2, ··· .
Finally, the general solution to our problem is given by

u(r, θ) = C0 + Σ_{n=1}^∞ r^n (An cos nθ + Bn sin nθ)

where

C0 = a0/2 = (1/(2π)) ∫_0^(2π) f(θ) dθ
An = an/a^n = (1/(a^n π)) ∫_0^(2π) f(θ) cos nθ dθ, n = 1, 2, ···
Bn = bn/a^n = (1/(a^n π)) ∫_0^(2π) f(θ) sin nθ dθ, n = 1, 2, ···
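The coefficient formulas above translate directly into a short numerical routine. In the Python sketch below (an illustration, not from the notes) the radius a, the boundary data f, and the truncation level N are assumed for the sake of the example.

```python
import numpy as np

# Illustrative choices (assumptions, not from the text): a, f, and N
a, N = 1.0, 20
f = lambda th: np.where(np.cos(th) > 0, 1.0, 0.0)   # sample boundary data on r = a

th = np.linspace(0, 2*np.pi, 4001)

def fourier(n, kind):
    w = np.cos(n*th) if kind == 'cos' else np.sin(n*th)
    return np.trapz(f(th) * w, th) / np.pi           # a_n or b_n

def u(r, theta):
    s = fourier(0, 'cos') / 2
    for n in range(1, N + 1):
        s += (r/a)**n * (fourier(n, 'cos')*np.cos(n*theta) +
                         fourier(n, 'sin')*np.sin(n*theta))
    return s

print(u(0.0, 0.3))   # value at the center equals the average of f over the boundary (0.5 here)
```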
Example 20.2
Solve
∆u = 0, 0 ≤ θ < 2π, 1 ≤ r ≤ 2
subject to
u(1, θ) = u(2, θ) = sin θ, 0 ≤ θ < 2π.

Solution.
Use separation of variables. First, solving for Θ(θ), we see that in order to
ensure that the solution is 2π−periodic in θ, the eigenvalues are λ = n2 .
When solving the equation for R(r), we do NOT need to throw out solutions
which are not bounded as r → 0. This is because we are working in the
annulus where r is bounded away from 0 and ∞. Therefore, we obtain the
general solution

u(r, θ) = (C0 + C1 ln r) + Σ_{n=1}^∞ [(Cn r^n + Dn r^(−n)) cos nθ + (An r^n + Bn r^(−n)) sin nθ].

But

C0 + Σ_{n=1}^∞ [(Cn + Dn) cos nθ + (An + Bn) sin nθ] = sin θ

and

C0 + C1 ln 2 + Σ_{n=1}^∞ [(Cn 2^n + Dn 2^(−n)) cos nθ + (An 2^n + Bn 2^(−n)) sin nθ] = sin θ.

Hence, comparing coefficients we must have

C0 = 0
C0 + C1 ln 2 = 0
Cn + Dn = 0
An + Bn = 0 (n ≠ 1)
A1 + B1 = 1
Cn 2^n + Dn 2^(−n) = 0
An 2^n + Bn 2^(−n) = 0 (n ≠ 1)
2A1 + 2^(−1) B1 = 1.

Solving these equations we find C0 = C1 = Cn = Dn = 0, A1 = 1/3, B1 = 2/3, and An = Bn = 0 for n ≠ 1. Hence, the solution to the problem is

u(r, θ) = (1/3)(r + 2/r) sin θ

Example 20.3
Solve Laplace’s equation inside a 60◦ wedge of radius a subject to the bound-
ary conditions
uθ(r, 0) = 0, uθ(r, π/3) = 0, u(a, θ) = (1/3) cos 9θ − (1/9) cos 3θ.
You may assume that the solution remains bounded as r → 0.

Solution.
Separating the variables we obtain the eigenvalue problem

Θ″(θ) + λΘ(θ) = 0
Θ′(0) = Θ′(π/3) = 0.

As above, because of periodicity we expect the solution to be of the form

Θ(θ) = A cos(√λ θ) + B sin(√λ θ).

The condition Θ′(0) = 0 implies B = 0. The condition Θ′(π/3) = 0 implies λn = (3n)², n = 0, 1, 2, ··· . Thus, the angular solution is

Θn(θ) = An cos 3nθ, n = 0, 1, 2, ···

The corresponding solutions of the radial problem are

Rn(r) = Bn r^(3n) + Cn r^(−3n), n = 0, 1, ··· .

To obtain a solution that remains bounded as r → 0 we take Cn = 0. Hence,

u(r, θ) = Σ_{n=0}^∞ Dn r^(3n) cos 3nθ.

Using the boundary condition

u(a, θ) = (1/3) cos 9θ − (1/9) cos 3θ

we obtain D1 a³ = −1/9 and D3 a⁹ = 1/3, and Dn = 0 otherwise. Thus,

u(r, θ) = (1/3)(r/a)⁹ cos 9θ − (1/9)(r/a)³ cos 3θ

Practice Problems
Problem 20.1
Solve the Laplace’s equation as in Example 20.1 in the unit disk with u(1, θ) =
3 sin 5θ.

Problem 20.2
Solve the Laplace’s equation in the upper half of the unit disk with u(1, θ) =
π − θ.

Problem 20.3
Solve the Laplace’s equation in the unit disk with ur (1, θ) = 2 cos 2θ.

Problem 20.4
Consider ∞
X
u(r, θ) = C0 + rn (An cos nθ + Bn sin nθ)
n=1

with
Z 2π
a0 1
C0 = = f (φ)dφ
2 2π 0
Z 2π
an 1
An = n = n f (φ) cos nφdφ, n = 1, 2, · · ·
a a π 0
Z 2π
bn 1
Bn = n = n f (φ) sin nφdφ, n = 1, 2, · · ·
a a π 0
Using the trigonometric identity

cos a cos b + sin a sin b = cos (a − b)

show that
" ∞  
#
Z 2π
1 X r n
u(r, θ) = f (φ) 1 + 2 cos n(θ − φ) dφ.
2π 0 n=1
a

Problem 20.5
(a) Using Euler’s formula from complex analysis eit = cos t + i sin t show that
cos t = (1/2)(e^(it) + e^(−it)),

where i = √(−1).
(b) Show that
∞   ∞   ∞  
X r n X r n X r n
1+2 cos n(θ − φ) = 1 + ein(θ−φ)
+ e−in(θ−φ) .
n=1
a n=1
a n=1
a

(c) Let q1 = ar ei(θ−φ) and q2 = ar e−i(θ−φ) . It is defined in complex analysis that


1
the absolute value of a complex number z = x+iy is given by |z| = (x2 +y 2 ) 2 .
Using these concepts, show that |q1 | < 1 and |q2 | < 1.

Problem 20.6
(a)Show that
∞  
X r n rei(θ−φ)
ein(θ−φ) =
n=1
a a − rei(θ−φ)
and ∞  
X r n re−i(θ−φ)
e−in(θ−φ) =
n=1
a a − re−i(θ−φ)
Hint: Each sum is a geoemtric series with a ratio less than 1 in absolute
value so that these series converges.
(b) Show that
∞  
X r n a2 − r 2
1+2 cos n(θ − φ) = .
n=1
a a2 − 2ar cos (θ − φ) + r2

Problem 20.7
Show that

a2 − r 2
Z
f (φ)
u(r, θ) = dφ.
2π 0 a2 − 2ar cos (θ − φ) + r2
This is known as the Poisson formula in polar coordinates.

Problem 20.8
Solve
uxx + uyy = 0, x2 + y 2 < 1
subject to
u(1, θ) = θ, − π ≤ θ ≤ π.

Problem 20.9
The vibrations of a symmetric circular membrane where the displacement
u(r, t) depends on r and t only can be describe by the one-dimensional wave
equation in polar coordinates
1
utt = c2 (urr + ur ), 0 < r < a, t > 0
r
with initial condition
u(a, t) = 0, t > 0
and boundary conditions

u(r, 0) = f (r), ut (r, 0) = g(r), 0 < r < a.

(a) Show that the assumption u(r, t) = R(r)T (t) leads to the equation
1 T 00 1 00 1 R0
= R + = λ.
c2 T R rR
(b) Show that λ < 0.

Problem 20.10
Cartesian coordinates and cylindrical coordinates are shown in Figure 20.1
below.

Figure 20.1

(a) Show that x = r cos θ, y = r sin θ, z = z.


(b) Show that
1 1
uxx + uyy + uzz = urr + ur + 2 uθθ + uzz .
r r
Problem 20.11
An important result about harmonic functions is the so-called the maximum
principle which states: Any harmonic function u(x, y) defined in a domain
Ω satisfies the inequality

min u(x, y) ≤ u(x, y) ≤ max u(x, y), ∀(x, y) ∈ Ω ∪ ∂Ω


(x,y)∈∂Ω (x,y)∈∂Ω

where ∂Ω denotes the boundary of Ω.


Let u be harmonic in Ω = {(x, y) : x2 + y 2 < 1} and satisfies u(x, y) = 2 − x
for all (x, y) ∈ ∂Ω. Show that u(x, y) > 0 for all (x, y) ∈ Ω.

Problem 20.12
Let u be harmonic in Ω = {(x, y) : x2 + y 2 < 1} and satisfies u(x, y) = 1 + 3x
for all (x, y) ∈ ∂Ω. Determine
(i) max(x,y)∈Ω u(x, y)
(ii) min(x,y)∈Ω u(x, y)
without solving ∆u = 0.

Problem 20.13
Let u1 (x, y) and u2 (x, y) be harmonic functions on a smooth domain Ω such
that

u1|∂Ω = g1(x, y) and u2|∂Ω = g2(x, y)

where g1 and g2 are continuous functions satisfying

max_{(x,y)∈∂Ω} g1(x, y) < min_{(x,y)∈∂Ω} g2(x, y).

Prove that u1(x, y) < u2(x, y) for all (x, y) ∈ Ω ∪ ∂Ω.

Problem 20.14
Show that rn cos (nθ) and rn sin (nθ) satisfy Laplace’s equation in polar co-
ordinates.

Problem 20.15
Solve the Dirichlet problem

∆u = 0, 0 ≤ r < a, − π ≤ θ ≤ π

u(a, θ) = sin2 θ.

Problem 20.16
Solve Laplace’s equation
uxx + uyy = 0
outside a circular disk (r ≥ a) subject to the boundary condition

u(a, θ) = ln 2 + 4 cos 3θ.

You may assume that the solution remains bounded as r → ∞.


The Laplace Transform
Solutions for PDEs

If in a partial differential equation the time t is one of the independent vari-


ables of the searched-for function, we say that the PDE is an evolution
equation. Examples of evolution equations are the heat equation and the
wave equation. In contrast, when the equation involves only spatial indepen-
dent variables then the equation is called a stationary equation. Examples
of stationary equations are the Laplace’s equations and Poisson equations.
There are classes of methods that can be used for solving the initial value or
initial boundary problems for evolution equations. We refer to these meth-
ods as the methods of integral transforms. The fundamental ones are the
Laplace and the Fourier transforms. In this chapter we will just consider the
Laplace transform.


21 Essentials of the Laplace Transform


Laplace transform has been introduced in an ODE course, and is used espe-
cially to solve linear ODEs with constant coefficients, where the equations
are transformed to algebraic equations. This idea can be easily extended
to PDEs, where the transformation leads to the decrease of the number of
independent variables. PDEs in two variables are thus reduced to ODEs. In
this section we review the Laplace transform and its properties.
Laplace transform is yet another operational tool for solving constant coeffi-
cients linear differential equations. The process of solution consists of three
main steps:
• The given “hard” problem is transformed into a “simple” equation.
• This simple equation is solved by purely algebraic manipulations.
• The solution of the simple equation is transformed back to obtain the so-
lution of the given problem.
In this way the Laplace transformation reduces the problem of solving a dif-
ferential equation to an algebraic problem. The third step is made easier by
tables, whose role is similar to that of integral tables in integration.
The above procedure can be summarized by Figure 21.1

Figure 21.1

In this section we introduce the concept of Laplace transform and discuss


some of its properties.
The Laplace transform is defined in the following way. Let f (t) be defined
for t ≥ 0. Then the Laplace transform of f, which is denoted by L[f (t)]
or by F (s), is defined by the following equation
L[f(t)] = F(s) = lim_{T→∞} ∫_0^T f(t) e^(−st) dt = ∫_0^∞ f(t) e^(−st) dt

The integral which defines a Laplace transform is an improper integral. An


improper integral may converge or diverge, depending on the integrand.
When the improper integral is convergent then we say that the function f (t)
possesses a Laplace transform. So what types of functions possess Laplace

transforms, that is, what type of functions guarantees a convergent improper


integral.

Example 21.1
Find the Laplace transform, if it exists, of each of the following functions
2
(a) f (t) = eat (b) f (t) = 1 (c) f (t) = t (d) f (t) = et

Solution.
(a) Using the definition of Laplace transform we see that

L[e^(at)] = ∫_0^∞ e^(−(s−a)t) dt = lim_{T→∞} ∫_0^T e^(−(s−a)t) dt.

But

∫_0^T e^(−(s−a)t) dt = T if s = a, and ∫_0^T e^(−(s−a)t) dt = (1 − e^(−(s−a)T))/(s − a) if s ≠ a.

For the improper integral to converge we need s > a. In this case,

L[e^(at)] = F(s) = 1/(s − a), s > a.

(b) In a similar way to what was done in part (a), we find

L[1] = ∫_0^∞ e^(−st) dt = lim_{T→∞} ∫_0^T e^(−st) dt = 1/s, s > 0.

(c) We have

L[t] = ∫_0^∞ t e^(−st) dt = [−t e^(−st)/s − e^(−st)/s²]_0^∞ = 1/s², s > 0.

(d) Again using the definition of Laplace transform we find

L[e^(t²)] = ∫_0^∞ e^(t²−st) dt.

If s ≤ 0 then t² − st ≥ 0 so that e^(t²−st) ≥ 1 and this implies that ∫_0^∞ e^(t²−st) dt ≥ ∫_0^∞ dt. Since the integral on the right is divergent, by the comparison theorem of improper integrals, the integral on the left is also divergent. Now, if s > 0 then ∫_0^∞ e^(t(t−s)) dt ≥ ∫_s^∞ dt. By the same reasoning the integral on the left is divergent. This shows that the function f(t) = e^(t²) does not possess a Laplace transform
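The transforms computed in parts (a)-(c) can be cross-checked with a computer algebra system. The following sympy sketch is a side check, not part of the notes; it assumes sympy's laplace_transform with the noconds=True option.

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Cross-check of Example 21.1 (a)-(c); noconds=True drops the convergence conditions
print(sp.laplace_transform(sp.exp(a*t), t, s, noconds=True))   # 1/(s - a)
print(sp.laplace_transform(sp.S(1), t, s, noconds=True))       # 1/s
print(sp.laplace_transform(t, t, s, noconds=True))             # s**(-2)
```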

The above example raises the question of what class or classes of functions
possess a Laplace transform. To answer this question we introduce a few mathematical concepts.
A function f that satisfies

|f (t)| ≤ M eat , t≥C (21.1)

is said to be a function with an exponential order at infinity. A function


f is called piecewise continuous on an interval if the interval can be bro-
ken into a finite number of subintervals on which the function is continuous
on each open subinterval (i.e. the subinterval without its endpoints) and
has a finite limit at the endpoints (jump discontinuities and no vertical
asymptotes) of each subinterval. Below is a sketch of a piecewise continuous
function.

Note that a piecewise continuous function is a function that has a finite


number of breaks in it and doesn’t blow up to infinity anywhere. A func-
tion defined for t ≥ 0 is said to be piecewise continuous on the infinite
interval if it is piecewise continuous on 0 ≤ t ≤ T for all T > 0.

Example 21.2
Show that the following functions are piecewise continuous and of exponential
order at infinity for t ≥ 0
(a) f (t) = tn (b) f (t) = tn sin at

Solution.
(a) Since e^t = Σ_{n=0}^∞ t^n/n! ≥ t^n/n!, we have t^n ≤ n! e^t. Hence, t^n is piecewise continuous and of exponential order at infinity.
(b) Since |t^n sin at| ≤ n! e^t, t^n sin at is piecewise continuous and of exponential order at infinity

The following is an existence result of Laplace transform.

Theorem 21.1
Suppose that f (t) is piecewise continuous on t ≥ 0 and has an exponential
order at infinity with |f (t)| ≤ M eat for t ≥ C. Then the Laplace transform
Z ∞
F (s) = f (t)e−st dt
0

exists as long as s > a. Note that the two conditions above are sufficient, but
not necessary, for F (s) to exist.

In what follows, we will denote the class of all piecewise continuous functions
with exponential order at infinity by PE. The next theorem shows that any
linear combination of functions in PE is also in PE. The same is true for the
product of two functions in PE.

Theorem 21.2
Suppose that f (t) and g(t) are two elements of PE with
|f(t)| ≤ M1 e^(a1 t), t ≥ C1 and |g(t)| ≤ M2 e^(a2 t), t ≥ C2.
(i) For any constants α and β the function αf (t) + βg(t) is also a member of
PE. Moreover

L[αf (t) + βg(t)] = αL[f (t)] + βL[g(t)].

(ii) The function h(t) = f (t)g(t) is an element of PE.

We next discuss the problem of how to determine the function f (t) if F (s)
is given. That is, how do we invert the transform. The following result on
uniqueness provides a possible answer. This result establishes a one-to-one
correspondence between the set PE and its Laplace transforms. Alterna-
tively, the following theorem asserts that the Laplace transform of a member
in PE is unique.

Theorem 21.3
Let f (t) and g(t) be two elements in PE with Laplace transforms F (s) and
G(s) such that F (s) = G(s) for some s > a. Then f (t) = g(t) for all t ≥ 0
where both functions are continuous.

With the above theorem, we can now officially define the inverse Laplace
transform as follows: For a piecewise continuous function f of exponential
order at infinity whose Laplace transform is F, we call f the inverse Laplace
transform of F and write f = L−1 [F (s)]. Symbolically

f (t) = L−1 [F (s)] ⇐⇒ F (s) = L[f (t)].

Example 21.3
Find L^(−1)[1/(s − 1)], s > 1.

Solution.
From Example 21.1(a), we have that L[e^(at)] = 1/(s − a), s > a. In particular, for a = 1 we find that L[e^t] = 1/(s − 1), s > 1. Hence, L^(−1)[1/(s − 1)] = e^t, t ≥ 0.

The above theorem states that if f (t) is continuous and has a Laplace trans-
form F (s), then there is no other function that has the same Laplace trans-
form. To find L−1 [F (s)], we can inspect tables of Laplace transforms of
known functions to find a particular f (t) that yields the given F (s).
When the function f (t) is not continuous, the uniqueness of the inverse
Laplace transform is not assured. The following example addresses the
uniqueness issue.

Example 21.4
Consider the two functions f (t) = H(t)H(3 − t) and g(t) = H(t) − H(t − 3),
where H is the Heaviside function defined by

1, t ≥ 0
H(t) =
0, t < 0

(a) Are the two functions identical?


(b) Show that L[f(t)] = L[g(t)].

Solution.
(a) We have 
1, 0 ≤ t ≤ 3
f (t) =
0, t>3
and 
1, 0 ≤ t < 3
g(t) =
0, t≥3

Since f (3) = 1 and g(3) = 0, f and g are not identical.


(b) We have

L[f(t)] = L[g(t)] = ∫_0^3 e^(−st) dt = (1 − e^(−3s))/s, s > 0.
Thus, both functions f (t) and g(t) have the same Laplace transform even
though they are not identical. However, they are equal on the interval(s)
where they are both continuous

The inverse Laplace transform possesses a linear property as indicated in


the following result.

Theorem 21.4
Given two Laplace transforms F (s) and G(s) then
L−1 [aF (s) + bG(s)] = aL−1 [F (s)] + bL−1 [G(s)]
for any constants a and b.

Convolution integrals are useful when finding the inverse Laplace transform
of products. They are defined as follows: The convolution of two scalar
piecewise continuous functions f (t) and g(t) defined for t ≥ 0 is the integral
Z t
(f ∗ g)(t) = f (t − s)g(s)ds.
0

Example 21.5
Find f ∗ g where f (t) = e−t and g(t) = sin t.

Solution.
Using integration by parts twice we arrive at

(f ∗ g)(t) = ∫_0^t e^(−(t−s)) sin s ds
           = (1/2) [e^(−(t−s)) (sin s − cos s)]_0^t
           = e^(−t)/2 + (1/2)(sin t − cos t).     (21.2)
Next, we state several properties of convolution product, which resemble
those of ordinary product.

Theorem 21.5
Let f (t), g(t), and k(t) be three piecewise continuous scalar functions defined
for t ≥ 0 and c1 and c2 are arbitrary constants. Then
(i) f ∗ g = g ∗ f (Commutative Law)
(ii) (f ∗ g) ∗ k = f ∗ (g ∗ k) (Associative Law)
(iii) f ∗ (c1 g + c2 k) = c1 f ∗ g + c2 f ∗ k (Distributive Law)
Example 21.6
Express the solution to the initial value problem y 0 + αy = g(t), y(0) = y0
in terms of a convolution integral.
Solution.
Solving this initial value problem by the method of integrating factor we find
y(t) = e^(−αt) y0 + ∫_0^t e^(−α(t−s)) g(s) ds = e^(−αt) y0 + (e^(−αt) ∗ g)(t)
The following theorem, known as the Convolution Theorem, provides a way
for finding the Laplace transform of a convolution integral and also finding
the inverse Laplace transform of a product.
Theorem 21.6
If f (t) and g(t) are piecewise continuous for t ≥ 0, and of exponential order
at infinity then
L[(f ∗ g)(t)] = L[f (t)]L[g(t)] = F (s)G(s).
Thus, (f ∗ g)(t) = L−1 [F (s)G(s)].
Example 21.7
Use the convolution theorem to find the inverse Laplace transform of
P(s) = 1/(s² + a²)².

Solution.
Note that

P(s) = [1/(s² + a²)] [1/(s² + a²)].

So, in this case we have F(s) = G(s) = 1/(s² + a²) so that f(t) = g(t) = (1/a) sin(at). Thus,

(f ∗ g)(t) = (1/a²) ∫_0^t sin(at − as) sin(as) ds = (1/(2a³)) (sin(at) − at cos(at))
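One way to double-check this inverse transform is to transform the answer forward again. The sympy sketch below (not from the notes) verifies that the Laplace transform of (sin(at) − at cos(at))/(2a³) is indeed 1/(s² + a²)².

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

f = (sp.sin(a*t) - a*t*sp.cos(a*t)) / (2*a**3)
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - 1/(s**2 + a**2)**2))   # 0, confirming the inverse transform
```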

Example 21.8
Solve the initial value problem

4y 00 + y = g(t), y(0) = 3, y 0 (0) = −7

Solution.
Take the Laplace transform of all the terms and plug in the initial conditions
to obtain
4(s2 Y (s) − 3s + 7) + Y (s) = G(s)
or
(4s2 + 1)Y (s) − 12s + 28 = G(s).
Solving for Y(s) we find

Y(s) = (12s − 28)/(4(s² + 1/4)) + G(s)/(4(s² + 1/4))
     = 3 · s/(s² + (1/2)²) − 14 · (1/2)/(s² + (1/2)²) + (1/2) · (1/2)/(s² + (1/2)²) · G(s).

Hence,

y(t) = 3 cos(t/2) − 14 sin(t/2) + (1/2) ∫_0^t sin(s/2) g(t − s) ds.

So, once we decide on a g(t) all we need to do is to evaluate the integral and
we’ll have the solution

We conclude this section with the following table of Laplace transform pairs
where H is the Heaviside function defined by H(t) = 1 for t ≥ 0 and 0
otherwise.

f(t) → F(s)
H(t) (= 1 for t ≥ 0, 0 for t < 0) → 1/s, s > 0
t^n, n = 1, 2, ··· → n!/s^(n+1), s > 0
e^(αt) → 1/(s − α), s > α
sin(ωt) → ω/(s² + ω²), s > 0
cos(ωt) → s/(s² + ω²), s > 0
sinh(ωt) → ω/(s² − ω²), s > |ω|
cosh(ωt) → s/(s² − ω²), s > |ω|
e^(αt) f(t), with |f(t)| ≤ M e^(at) → F(s − α), s > α + a
e^(αt) H(t) → 1/(s − α), s > α
e^(αt) t^n, n = 1, 2, ··· → n!/(s − α)^(n+1), s > α
e^(αt) sin(ωt) → ω/((s − α)² + ω²), s > α
e^(αt) cos(ωt) → (s − α)/((s − α)² + ω²), s > α
f(t − α)H(t − α), α ≥ 0, with |f(t)| ≤ M e^(at) → e^(−αs) F(s), s > a
H(t − α), α ≥ 0 → e^(−αs)/s, s > 0
t f(t) → −F′(s)
(t/(2ω)) sin(ωt) → s/(s² + ω²)², s > 0
(1/(2ω³))[sin(ωt) − ωt cos(ωt)] → 1/(s² + ω²)², s > 0
f′(t), with f continuous and |f′(t)| ≤ M e^(at) → sF(s) − f(0), s > max{a, 0} + 1
f″(t), with f′ continuous and |f″(t)| ≤ M e^(at) → s²F(s) − sf(0) − f′(0), s > max{a, 0} + 1
f^(n)(t), with f^(n−1) continuous and |f^(n)(t)| ≤ M e^(at) → s^n F(s) − s^(n−1) f(0) − ··· − s f^(n−2)(0) − f^(n−1)(0), s > max{a, 0} + 1
(2/√π) ∫_(α/(2√t))^∞ e^(−u²) du → e^(−α√s)/s
∫_0^t f(u) du, with |f(t)| ≤ M e^(at) → F(s)/s, s > max{a, 0} + 1

Table L

Practice Problems
Problem 21.1 R∞ 1
Determine whether the integral 0 1+t2
dt converges. If the integral con-
verges, give its value.

Problem 21.2 R∞ t
Determine whether the integral 0 1+t2
dt converges. If the integral con-
verges, give its value.

Problem 21.3 R∞
Determine whether the integral 0 e−t cos (e−t )dt converges. If the integral
converges, give its value.

Problem 21.4
Using the definition, find L[e3t ], if it exists. If the Laplace transform exists
then find the domain of F (s).

Problem 21.5
Using the definition, find L[t − 5], if it exists. If the Laplace transform exists
then find the domain of F (s).

Problem 21.6
2
Using the definition, find L[e(t−1) ], if it exists. If the Laplace transform
exists then find the domain of F (s).

Problem 21.7
Using the definition, find L[(t − 2)2 ], if it exists. If the Laplace transform
exists then find the domain of F (s).

Problem 21.8
Using the definition, find L[f (t)], if it exists. If the Laplace transform exists
then find the domain of F (s).

0, 0≤t<1
f (t) =
t − 1, t≥1

Problem 21.9
Using the definition, find L[f (t)], if it exists. If the Laplace transform exists
then find the domain of F (s).

 0, 0≤t<1
f (t) = t − 1, 1 ≤ t < 2
0, t ≥ 2.

Problem 21.10
Let n be a positive integer. Using integration by parts establish the reduction
formula
tn e−st n
Z Z
n −st
t e dt = − + tn−1 e−st dt, s > 0.
s s
Problem 21.11
For s > 0 and n a positive integer evaluate the limits

(a) limt→0 tn e−st (b) limt→∞ tn e−st

Problem 21.12
Use the linearity property of Laplace transform to find L[5e−7t + t + 2e2t ].
Find the domain of F (s).

Problem 21.13
Find L^(−1)[3/(s − 2)].

Problem 21.14
Find L^(−1)[−2/s² + 1/(s + 1)].

Problem 21.15
Find L^(−1)[2/(s + 2) + 2/(s − 2)].

Problem 21.16
Use Table L to find L[2et + 5].

Problem 21.17
Use Table L to find L[e3t−3 H(t − 1)].

Problem 21.18
Use Table L to find L[sin2 ωt].

Problem 21.19
Use Table L to find L[sin 3t cos 3t].

Problem 21.20
Use Table L to find L[e2t cos 3t].

Problem 21.21
Use Table L to find L[e4t (t2 + 3t + 5)].

Problem 21.22
Use Table L to find L^(−1)[10/(s² + 25) + 4/(s − 3)].

Problem 21.23
Use Table L to find L^(−1)[5/(s − 3)^4].

Problem 21.24
Use Table L to find L^(−1)[e^(−2s)/(s − 9)].

Problem 21.25
Using the partial fraction decomposition find L^(−1)[12/((s − 3)(s + 1))].

Problem 21.26
Using the partial fraction decomposition find L^(−1)[24 e^(−5s)/(s² − 9)].

Problem 21.27
Use Laplace transform technique to solve the initial value problem

y 0 + 4y = g(t), y(0) = 2

where 
 0, 0 ≤ t < 1
g(t) = 12, 1 ≤ t < 3
0, t≥3

Problem 21.28
Use Laplace transform technique to solve the initial value problem

y 00 − 4y = e3t , y(0) = 0, y 0 (0) = 0.



Problem 21.29
Consider the functions f (t) = et and g(t) = e−2t , t ≥ 0. Compute f ∗ g in
two different ways.
(a) By directly evaluating the integral.
(b) By computing L−1 [F (s)G(s)] where F (s) = L[f (t)] and G(s) = L[g(t)].

Problem 21.30
Consider the functions f (t) = sin t and g(t) = cos t, t ≥ 0. Compute f ∗ g in
two different ways.
(a) By directly evaluating the integral.
(b) By computing L−1 [F (s)G(s)] where F (s) = L[f (t)] and G(s) = L[g(t)].

Problem 21.31
Compute t ∗ t ∗ t.

Problem 21.32
Compute H(t) ∗ e−t ∗ e−2t .

Problem 21.33
Compute t ∗ e−t ∗ et .

22 Solving PDEs Using Laplace Transform


The same idea for solving linear ODEs using Laplace transform can be ex-
ploited when solving PDEs for functions in two variables u = u(x, t). The
transformation will be done with respect to the time variable t ≥ 0, the spa-
tial variable x will be treated as a parameter unaffected by this transform.
In particular we define the Laplace transform of u(x, t) by the formula
L(u(x, t)) = U(x, s) = ∫_0^∞ u(x, t) e^(−st) dt.

The time derivatives are transformed in the same way as in the case of functions in one variable, that is, for example

L(ut)(x, s) = sU(x, s) − u(x, 0)

and

L(utt)(x, s) = s²U(x, s) − s u(x, 0) − ut(x, 0).

The spatial derivatives remain unchanged, for example,

L(ux)(x, t) = ∫_0^∞ ux(x, τ) e^(−sτ) dτ = (∂/∂x) ∫_0^∞ u(x, τ) e^(−sτ) dτ = Ux(x, s).

Likewise, we have

L(uxx)(x, t) = Uxx(x, s).
Thus, applying the Laplace transform to a PDE in two variables x and t we
obtain an ODE in the variable x and with the parameter s.

Example 22.1
Let u(x, t) be the concentration of a chemical contaminant dissolved in a
liquid on a half-infinte domain x > 0. Let us assume that at time t = 0 the
concentration is 0 and on the boundary x = 0, constant unit concentration of
the contaminant is kept for t > 0. The behaviour of this problem is described
by the following mathematical model


 ut − uxx = 0 , x > 0, t > 0
u(x, 0) = 0,


 u(0, t) = 1,
|u(x, t)| < ∞.

Find u(x, t).



Solution.
Applying Laplace transform to both sides of the equation we obtain

sU (x, s) − u(x, 0) − Uxx (x, s) = 0

or
Uxx (x, s) − sU (x, s) = 0.
This is a second order linear ODE in the variable x and positive parameter s. Its general solution is

U(x, s) = A(s) e^(−√s x) + B(s) e^(√s x).

Since U(x, s) is bounded in both variables, we must have B(s) = 0 and in this case we obtain

U(x, s) = A(s) e^(−√s x).

Next, we apply Laplace transform to the boundary condition obtaining

U(0, s) = L(1) = 1/s.

This leads to A(s) = 1/s and the transformed solution becomes

U(x, s) = (1/s) e^(−√s x).

Thus,

u(x, t) = L^(−1)[(1/s) e^(−√s x)] = (2/√π) ∫_(x/(2√t))^∞ e^(−w²) dw
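The integral above is, up to notation, the complementary error function, so the solution can be checked numerically. The Python sketch below (not part of the notes) uses scipy and a finite-difference test at one illustrative interior point.

```python
import numpy as np
from scipy.special import erfc

# u(x, t) = erfc(x / (2 sqrt(t))), which is the integral found above
u = lambda x, t: erfc(x / (2 * np.sqrt(t)))

x, t, h = 1.0, 0.5, 1e-4               # one illustrative interior point
u_t  = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
print(abs(u_t - u_xx) < 1e-4)          # the PDE u_t = u_xx holds numerically
print(np.isclose(u(0.0, t), 1.0))      # boundary condition u(0, t) = 1
```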

Example 22.2
Solve the following initial boundary value problem


 ut − uxx = 0 , x > 0, t > 0
u(x, 0) = 0,


 u(0, t) = f (t),
|u(x, t)| < ∞.

Solution.
Following the argument of the previous example we find

U(x, s) = F(s) e^(−√s x), F(s) = L(f(t)).

Thus, using Theorem 21.6 we can write

u(x, t) = L^(−1)[F(s) e^(−√s x)] = f ∗ L^(−1)(e^(−√s x)).

It can be shown that

L^(−1)(e^(−√s x)) = (x/√(4πt³)) e^(−x²/(4t)).

Hence,

u(x, t) = ∫_0^t (x/√(4π(t − s)³)) e^(−x²/(4(t−s))) f(s) ds

Example 22.3
Solve the wave equation


 utt − c2 uxx = 0 , x > 0, t > 0
u(x, 0) = ut (x, 0) = 0,


 u(0, t) = f (t),
|u(x, t)| < ∞.

Solution.
Applying Laplace transform to both sides of the equation we obtain

s2 U (x, s) − su(x, 0) − ut (x, 0) − c2 Uxx (x, s) = 0

or
c2 Uxx (x, s) − s2 U (x, s) = 0.
This is a second order linear ODE in the variable x and positive parameter
s. Its general solution is
s s
U (x, s) = A(s)e− c x + B(s)e c x .

Since U (x, s) is bounded, we must have B(s) = 0 and in this case we obtain
s
U (x, s) = A(s)e− c x .

Next, we apply Laplace transform to the boundary condition obtaining

U (0, s) = L(f (t)) = F (s).


194 THE LAPLACE TRANSFORM SOLUTIONS FOR PDES

This leads to A(s) = F (s) and the transformed solution becomes


s
U (x, s) = F (s)e− c x .

Thus,
x 
 x  x
u(x, t) = L−1 F (s)e− c s = H t − f t−
c c
Remark 22.1
Laplace transforms are useful in solving parabolic and some hyperbolic PDEs.
They are not in general useful in solving elliptic PDEs.

Practice Problems
Problem 22.1
Solve by Laplace transform

 ut + ux = 0 , x > 0, t > 0
u(x, 0) = sin x,
u(0, t) = 0

Hint: Method of integrating factor of ODEs.

Problem 22.2
Solve by Laplace transform

 ut + ux = −u , x > 0, t > 0
u(x, 0) = sin x,
u(0, t) = 0

Problem 22.3
Solve
ut = 4uxx
u(0, t) = u(1, t) = 0
u(x, 0) = 2 sin πx + 6 sin 2πx.
Hint: A particular solution of a second order ODE must be found using the
method of variation of parameters.

Problem 22.4
Solve by Laplace transform

 ut − ux = u , x > 0, t > 0
u(x, 0) = e−5x ,
|u(x, t)| < ∞

Problem 22.5
Solve by Laplace transform

 ut + ux = t , x > 0, t > 0
u(x, 0) = 0,
u(0, t) = t2


Problem 22.6
Solve by Laplace transform

 xut + ux = 0 , x > 0, t > 0
u(x, 0) = 0,
u(0, t) = t

Problem 22.7
Solve by Laplace transform


 utt − c2 uxx = 0 , x > 0, t > 0
u(x, 0) = ut (x, 0) = 0,


 u(0, t) = sin t,
|u(x, t)| < ∞

Problem 22.8
Solve by Laplace transform

utt − 9uxx = 0, 0 ≤ x ≤ π, t > 0

u(0, t) = u(π, t) = 0,

ut (x, 0) = 0, u(x, 0) = 2 sin x.

Problem 22.9
Solve by Laplace transform

 uxy = 1 , x > 0, y > 0
u(x, 0) = 1,
u(0, y) = y + 1.

Problem 22.10
Solve by Laplace transform


 utt = c2 uxx , x > 0, t > 0
u(x, 0) = ut (x, 0) = 0,


 ux (0, t) = f (t),
|u(x, t)| < ∞.


Problem 22.11
Solve by Laplace transform

 ut + ux = u , x > 0, t > 0
u(x, 0) = sin x,
u(0, t) = 0

Problem 22.12
Solve by Laplace transform


 ut − c2 uxx = 0 , x > 0, t > 0
u(x, 0) = T,


 u(0, t) = 0,
|u(x, t)| < ∞

Problem 22.13
Solve by Laplace transform

ut − 3uxx = 0, 0 ≤ x ≤ 2, t > 0

u(0, t) = u(2, t) = 0,
u(x, 0) = 5 sin (πx)

Problem 22.14
Solve by Laplace transform

ut − 4uxx = 0, 0 ≤ x ≤ π, t > 0

ux (0, t) = u(π, t) = 0,
x
u(x, 0) = 40 cos
2
Problem 22.15
Solve by Laplace transform

utt − 4uxx = 0, 0 ≤ x ≤ 2, t > 0

u(0, t) = u(2, t) = 0,
ut (x, 0) = 0, u(x, 0) = 3 sin πx.
The Fourier Transform
Solutions for PDEs

In the previous chapter we discussed one class of integral transform meth-


ods, the Laplace transfom. In this chapter, we consider a second fundamental
class of integral transform methods, the so-called Fourier transform.
Fourier series are designed to solve boundary value problems on bounded
intervals. The extension of Fourier methods to the entire real line leads nat-
urally to the Fourier transform, an extremely powerful mathematical tool for
the analysis of non-periodic functions. The Fourier transform is of fundamen-
tal importance in a broad range of applications, including both ordinary and
partial differential equations, quantum mechanics, signal processing, control
theory, and probability, to name but a few.


23 Complex Version of Fourier Series


We have seen in Section 15 that a 2L−periodic function f : R → R that is
piecewise smooth on [−L, L] can be expanded in a Fourier series

f(x) = a0/2 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)]

at all points of continuity of f. In the context of Fourier analysis, this is


referred to as the real form of the Fourier series. It is often convenient to
recast this series in complex form by means of Euler formula

eix = cos x + i sin x.

It follows from this formula that

e^(ix) + e^(−ix) = 2 cos x and e^(ix) − e^(−ix) = 2i sin x

or

cos x = (e^(ix) + e^(−ix))/2 and sin x = (e^(ix) − e^(−ix))/(2i).

Hence the Fourier expansion of f can be rewritten as

f(x) = a0/2 + Σ_{n=1}^∞ [an (e^(inπx/L) + e^(−inπx/L))/2 + bn (e^(inπx/L) − e^(−inπx/L))/(2i)]

so that

f(x) = Σ_{n=−∞}^∞ cn e^(inπx/L)     (23.1)

where c0 = a0/2 and for n ∈ N we have

cn = (an − i bn)/2
c−n = (an + i bn)/2.

It follows that if n ∈ N then

an = cn + c−n and bn = i(cn − c−n ). (23.2)

That is, an and bn can be easily found once we have formulas for cn . In order
to find these formulas, we need to evaluate the following integral
∫_(−L)^L e^(inπx/L) e^(−imπx/L) dx = ∫_(−L)^L e^(i(n−m)πx/L) dx
= [L/(i(n − m)π)] e^(i(n−m)πx/L) |_(−L)^L
= −(iL/((n − m)π)) [cos((n − m)π) + i sin((n − m)π) − cos(−(n − m)π) − i sin(−(n − m)π)]
= 0

if n ≠ m. If n = m then

∫_(−L)^L e^(inπx/L) e^(−inπx/L) dx = 2L.

Now, if we multiply (23.1) by e^(−inπx/L) and integrate from −L to L and apply the last result we find

∫_(−L)^L f(x) e^(−inπx/L) dx = 2L cn

which yields the formula for coefficients of the complex form of the Fourier series:

cn = (1/(2L)) ∫_(−L)^L f(x) e^(−inπx/L) dx, n = 0, ±1, ±2, ··· .

Example 23.1
Find the complex Fourier coefficients of the function

f (x) = x, −π ≤x≤π

extended to be periodic of period 2π.



Solution.
Using integration by parts and the fact that e^(iπ) = e^(−iπ) = −1 we find

cn = (1/(2π)) ∫_(−π)^π x e^(−inx) dx
   = (1/(2π)) { [(ix/n) e^(−inx)]_(−π)^π − ∫_(−π)^π (i/n) e^(−inx) dx }
   = (1/(2π)) [(iπ/n) e^(−inπ) + (iπ/n) e^(inπ)] + (1/(2π)) [(1/n²) e^(−inπ) − (1/n²) e^(inπ)]
   = (1/(2π)) [2i (π/n) (−1)^n + 0] = ((−1)^n i)/n

for n ∈ N and for n = 0, we have

c0 = (1/(2π)) ∫_(−π)^π x dx = 0

Remark 23.1
It is often the case that the complex form of the Fourier series is far simpler
to calculate than the real form. One can then use (23.2) to find the real form
of the Fourier series. For example, the Fourier coefficients of the real form of
the previous function are given by

an = cn + c−n = 0 and bn = i(cn − c−n) = (2/n)(−1)^(n+1), n ∈ N



Practice Problems
Problem 23.1
Find the complex Fourier coefficients of the function

f (x) = x, −1≤x≤1

extended to be periodic of period 2.

Problem 23.2
Let

f(x) = 0, −π < x < −π/2; 1, −π/2 < x < π/2; 0, π/2 < x < π

be 2π−periodic. Find its complex series representation.

Problem 23.3
Find the complex Fourier series of the 2π−periodic function f (x) = eax over
the interval (−π, π).

Problem 23.4
Find the complex Fourier series of the 2π−periodic function f (x) = sin x
over the interval (−π, π).

Problem 23.5
Find the complex Fourier series of the 2π−periodic function defined

1 0<x<T
f (x) =
0 T < x < 2π

Problem 23.6
Let f (x) = x2 , − π < x < π, be 2π−periodic.
(a) Calculate the complex Fourier series representation of f.
(b) Using the complex Fourier series found in (a), recover the real Fourier
series representation of f.

Problem 23.7
Let f (x) = sin πx, − 21 < x < 21 , be of period 1.
(a) Calculate the coefficients an , bn and cn .
(b) Find the complex Fourier series representation of f.

Problem 23.8
Let f (x) = 2 − x, − 2 < x < 2, be of period 4.
(a) Calculate the coefficients an , bn and cn .
(b) Find the complex Fourier series representation of f.
Problem 23.9
Suppose that the coefficients cn of the complex Fourier series are given by
 2
iπn
if |n| is odd
cn =
0 if |n| is even.
Find an , n = 0, 1, 2, · · · and bn , n = 1, 2, · · · .
Problem 23.10
Recall that any complex number z can be written as z = Re(z) + iIm(z)
where Re(z) is called the real part of z and Im(z) is called the imaginary
part. The complex conjugate of z is the complex number z = Re(z) −
iIm(z). Using these definitions show that an = 2Re(cn ) and bn = −2Im(cn ).
Problem 23.11
Suppose that
i
[e−inT

2πn
− 1] if n 6= 0
cn = T

if n = 0.
Find an and bn .
Problem 23.12
Find the complex Fourier series of the function f (x) = ex on [−2, 2].
Problem 23.13
Consider the wave form

(a) Write f (x) explicitly. What is the period of f.


(b) Determine a0 and an for n ∈ N.
(c) Determine bn for n ∈ N.
(d) Determine c0 and cn for n ∈ N.

Problem 23.14
If z is a complex number we define sin z = (1/(2i))(e^(iz) − e^(−iz)). Find the complex
form of the Fourier series for sin 3x without evaluating any integrals.

24 An introduction to Fourier Transforms


One of the problems with the theory of Fourier series discussed so far is that
it applies only to periodic functions. There are many times when one would
like to divide a function which is not periodic into a superposition of sines
and cosines. The Fourier transform is the tool often used for this purpose.
Like the Laplace transform, the Fourier transform is often an effective tool
in finding explicit solutions to partial differential equations.
We will introduce the Fourier transform of f (x) as a limiting case of a Fourier
series. This requires a tedious discussion which we omit and rather explain
the underlying ideas. More specifically, the approach we introduce is to
construct Fourier series of f (x) on progressively longer and longer intervals,
and then take the limit as their lengths go to infinity. This limiting process
converts the Fourier sums into integrals, and the resulting representation of
a function is renamed the Fourier transform.
To start with, let f : R → R be a piecewise continuous function with the properties lim_(x→±∞) f(x) = 0 and ∫_0^∞ |f(x)| dx < ∞.
which is equal to f in an interval of the form [−πL, πL] and vanishes outside
this interval. Note that f (x) = limL→∞ fL (x). This function can be extended
to a periodic function, denoted by fe , of period 2πT with T > L and where
fe (x) = f (x) for |x| ≤ πL and 0 for −πT ≤ x ≤ −πL and πL ≤ x ≤ πT.
Note that f (x) = limL→∞ fL (x) = limL→∞ fe (x). From the previous section
we can find the complex Fourier series of fe to be


fe(x) = Σ_{n=−∞}^∞ cn e^(inx/T)     (24.1)

where

cn = (1/(2πT)) ∫_(−πT)^(πT) fe(x) e^(−inx/T) dx.

Let ξ ∈ R. Multiply both sides of (24.1) by e−iξx and then integrate both sides
from −πT to πT. Assuming integration and summation can be interchanged
we find
∫_(−πT)^(πT) fe(x) e^(−iξx) dx = Σ_{n=−∞}^∞ cn ∫_(−πT)^(πT) e^(−iξx) e^(inx/T) dx.

It can be shown that the RHS converges, say to f̂(ξ), as L → ∞ (and T → ∞). Hence, we find

f̂(ξ) = ∫_(−∞)^∞ f(x) e^(−iξx) dx.     (24.2)

The function fˆ is called the Fourier transform of f. We will use the notation
F[f (x)] = fˆ(ξ).
Next, it can be shown that

f̂(n/T) = 2πT cn

so that

fe(x) = (1/(2πT)) Σ_{n=−∞}^∞ f̂(n/T) e^(inx/T).

It can be shown that as L → ∞, we have

lim_{T→∞} (1/T) Σ_{n=−∞}^∞ f̂(n/T) e^(inx/T) = ∫_(−∞)^∞ f̂(ξ) e^(iξx) dξ

so that

f(x) = (1/(2π)) ∫_(−∞)^∞ f̂(ξ) e^(iξx) dξ     (24.3)

Equation (24.3) is called the Fourier inversion formula and we use the
notation F −1 [fˆ(ξ)]. Now, if we make use of Euler’s formula, we can write the
Fourier inversion formula in terms of sines and cosines,
f(x) = (1/(2π)) ∫_(−∞)^∞ f̂(ξ) cos(ξx) dξ + (i/(2π)) ∫_(−∞)^∞ f̂(ξ) sin(ξx) dξ

a superposition of sines and cosines of various frequencies.

Example 24.1
Find the Fourier transform of the function f (x) defined by
 −ax
e if x ≥ 0
f (x) =
0 if x < 0

for some a > 0.



Solution.
We have
f̂(ξ) = ∫_(−∞)^∞ f(x) e^(−iξx) dx = ∫_0^∞ e^(−ax) e^(−iξx) dx
      = ∫_0^∞ e^(−(a+iξ)x) dx = [e^(−(a+iξ)x)/(−(a + iξ))]_0^∞
      = 1/(a + iξ)
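This transform is easy to confirm by numerical quadrature. The Python sketch below is an illustration only; the values of a and ξ are arbitrary choices.

```python
import numpy as np

a, xi = 2.0, 1.5                       # arbitrary illustrative values
t = np.linspace(0, 60, 600001)         # e^{-at} is negligible beyond t = 60

# Numerical version of (24.2) for f(x) = e^{-ax} H(x)
F_num = np.trapz(np.exp(-a * t) * np.exp(-1j * xi * t), t)
print(np.allclose(F_num, 1 / (a + 1j * xi)))   # True
```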
The following theorem lists the basic properties of the Fourier transform

Theorem 24.1
Let f, g, be piecewise continuous functions. Then we have the following
properties:
(1) Linearity: F[αf(x) + βg(x)] = αF[f(x)] + βF[g(x)], where α and β are arbitrary numbers.
(2) Shifting: F[f(x − α)] = e^(−iαξ) F[f(x)].
(3) Scaling: F[f(x/α)] = α f̂(αξ).
(4) Continuity: If ∫_(−∞)^∞ |f(x)| dx < ∞ then f̂ is continuous in ξ.
(5) Differentiation: F[f^(n)(x)] = (iξ)^n F[f(x)].
(6) Integration: F[∫_0^x f(s) ds] = −(i/ξ) F[f(x)].
(7) Parseval's Relation: ∫_(−∞)^∞ |f(x)|² dx = (1/(2π)) ∫_(−∞)^∞ |f̂(ξ)|² dξ.
(8) Duality: F[F[f(x)]] = 2πf(−x).
(9) Multiplication by x^n: F[x^n f(x)] = i^n f̂^(n)(ξ).
(10) Gaussians: F[e^(−αx²)] = √(π/α) e^(−ξ²/(4α)).
(11) Product: F[f(x)g(x)] = (1/(2π)) F[f(x)] ∗ F[g(x)].
(12) Convolution: F[(f ∗ g)(x)] = F[f(x)] · F[g(x)].

Example 24.2
2
Determine the Fourier transform of the Gaussian u(x) = e−αx , α > 0.

Solution.
We have
\[
\hat{u}(\xi) = \int_{-\infty}^{\infty} e^{-\alpha x^2} e^{-i\xi x}\,dx.
\]
If we differentiate this relation with respect to the variable $\xi$ and then integrate
by parts we obtain
\begin{align*}
\hat{u}'(\xi) &= -i\int_{-\infty}^{\infty} x e^{-\alpha x^2} e^{-i\xi x}\,dx\\
&= \frac{i}{2\alpha}\int_{-\infty}^{\infty}\frac{d}{dx}\left(e^{-\alpha x^2}\right) e^{-i\xi x}\,dx\\
&= \frac{i}{2\alpha}\left[\left. e^{-\alpha x^2} e^{-i\xi x}\right|_{-\infty}^{\infty} + i\xi\int_{-\infty}^{\infty} e^{-\alpha x^2} e^{-i\xi x}\,dx\right]\\
&= \frac{i^2\xi}{2\alpha}\int_{-\infty}^{\infty} e^{-\alpha x^2} e^{-i\xi x}\,dx = -\frac{\xi}{2\alpha}\hat{u}(\xi).
\end{align*}
Thus, we have arrived at the ODE $\hat{u}'(\xi) = -\frac{\xi}{2\alpha}\hat{u}(\xi)$ whose general solution
has the form
\[
\hat{u}(\xi) = Ce^{-\frac{\xi^2}{4\alpha}}.
\]
Using a result from real analysis which states that
\[
\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi},
\]
we can write
\[
\hat{u}(0) = \int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx = \sqrt{\frac{\pi}{\alpha}} = C,
\]
and therefore
\[
\hat{u}(\xi) = \sqrt{\frac{\pi}{\alpha}}\, e^{-\frac{\xi^2}{4\alpha}}
\]
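If a computer algebra system is available, the same result can be confirmed directly. A minimal SymPy sketch (not part of the original text):

# Symbolic confirmation of Example 24.2 (sketch assuming SymPy is available).
import sympy as sp

x, xi = sp.symbols('x xi', real=True)
alpha = sp.symbols('alpha', positive=True)

u_hat = sp.integrate(sp.exp(-alpha*x**2)*sp.exp(-sp.I*xi*x), (x, -sp.oo, sp.oo))
print(sp.simplify(u_hat))    # expect sqrt(pi/alpha)*exp(-xi**2/(4*alpha))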
Example 24.3
Prove
F[f (−x)] = fˆ(−ξ).

Solution.
Using a change of variables we find
\[
\mathcal{F}[f(-x)] = \int_{-\infty}^{\infty} f(-x)\, e^{-i\xi x}\,dx = \int_{-\infty}^{\infty} f(x)\, e^{i\xi x}\,dx = \hat{f}(-\xi)
\]

Example 24.4
Prove
F[F[f (x)]] = 2πf (−x).

Solution.
We have
\[
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\xi)\, e^{i\xi x}\,d\xi.
\]
Thus,
\[
2\pi f(-x) = \int_{-\infty}^{\infty}\hat{f}(\xi)\, e^{-i\xi x}\,d\xi = \mathcal{F}[\hat{f}(\xi)] = \mathcal{F}[\mathcal{F}[f(x)]]
\]

The following theorem lists the properties of the inverse Fourier transform.

Theorem 24.2
Let f and g be piecewise continuous functions.
(1′) Linearity: $\mathcal{F}^{-1}[\alpha\hat{f}(\xi) + \beta\hat{g}(\xi)] = \alpha\mathcal{F}^{-1}[\hat{f}(\xi)] + \beta\mathcal{F}^{-1}[\hat{g}(\xi)].$
(2′) Derivatives: $\mathcal{F}^{-1}[\hat{f}^{(n)}(\xi)] = (-ix)^n f(x).$
(3′) Multiplication by $\xi^n$: $\mathcal{F}^{-1}[\xi^n\hat{f}(\xi)] = (-i)^n f^{(n)}(x).$
(4′) Multiplication by $e^{-i\xi\alpha}$: $\mathcal{F}^{-1}[e^{-i\xi\alpha}\hat{f}(\xi)] = f(x-\alpha).$
(5′) Gaussians: $\mathcal{F}^{-1}[e^{-\alpha\xi^2}] = \frac{1}{\sqrt{4\pi\alpha}}\, e^{-\frac{x^2}{4\alpha}}.$
(6′) Product: $\mathcal{F}^{-1}[\hat{f}(\xi)\hat{g}(\xi)] = f(x)*g(x).$
(7′) Convolution: $\mathcal{F}^{-1}[(\hat{f}*\hat{g})(\xi)] = 2\pi f(x)g(x).$

Remark 24.1
It is important to mention that there exists no established convention of how
to define the Fourier transform. In the literature, one encounters equivalent
definitions of (24.2) and (24.3) with the constant $\frac{1}{\sqrt{2\pi}}$ or $\frac{1}{2\pi}$ in front of the integral.
There also exist definitions with a positive sign in the exponent. The reader
should keep this fact in mind while working with various sources or using
transformation tables.
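The same caution applies to numerical work: discrete FFT routines use their own sign and normalization conventions, so factors of $2\pi$ and the grid spacing must be tracked when comparing with (24.2). The sketch below (assuming NumPy; the scaling shown should be checked against the library's documentation and is not part of the original text) approximates $\hat{f}$ for the Gaussian of Example 24.2 with $\alpha = 1.$

# Approximating (24.2) with a discrete FFT (illustrative sketch, not from the text).
import numpy as np

n, box = 2**12, 40.0
x = -box/2 + box*np.arange(n)/n               # uniform grid with spacing dx
dx = box/n
f = np.exp(-x**2)                             # sample function (alpha = 1)

xi = 2*np.pi*np.fft.fftfreq(n, d=dx)          # angular frequencies
fhat = dx*np.exp(-1j*xi*x[0])*np.fft.fft(f)   # approximates the integral in (24.2)

print(np.max(np.abs(fhat - np.sqrt(np.pi)*np.exp(-xi**2/4))))   # small error expected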

Practice Problems
Problem 24.1
Find the Fourier transform of the function
\[
f(x) = \begin{cases} 1 & \text{if } -1 \le x \le 1\\ 0 & \text{otherwise.}\end{cases}
\]

Problem 24.2
Obtain the transformed problem when applying the Fourier transform with
respect to the spatial variable to the equation and initial condition

ut + cux = 0

u(x, 0) = f (x).

Problem 24.3
Obtain the transformed problem when applying the Fourier transform with
respect to the spatial variable to the equation and both initial conditions

utt = c2 uxx , x ∈ R, t > 0

u(x, 0) = f (x)
ut (x, 0) = g(x).

Problem 24.4
Obtain the transformed problem when applying the Fourier transform with
respect to the spatial variable to the equation and both initial conditions

\[
\Delta u = u_{xx} + u_{yy} = 0, \quad x \in \mathbb{R},\ 0 < y < L
\]
\[
u(x, 0) = 0, \qquad u(x, L) = \begin{cases} 1 & \text{if } -a < x < a\\ 0 & \text{otherwise.}\end{cases}
\]

Problem 24.5
Find the Fourier transform of $f(x) = e^{-\alpha|x|},$ where $\alpha > 0.$

Problem 24.6
Prove that
\[
\mathcal{F}[e^{-x}H(x)] = \frac{1}{1+i\xi}
\]
where
\[
H(x) = \begin{cases} 1 & \text{if } x \ge 0\\ 0 & \text{otherwise.}\end{cases}
\]
Problem 24.7
Prove that
\[
\mathcal{F}\left[\frac{1}{1+ix}\right] = 2\pi e^{\xi}H(-\xi).
\]
Problem 24.8
Prove
F[f (x − α)] = e−iξα fˆ(ξ).
Problem 24.9
Prove
\[
\mathcal{F}[e^{i\alpha x}f(x)] = \hat{f}(\xi-\alpha).
\]
Problem 24.10
Prove the following:
\[
\mathcal{F}[\cos(\alpha x)f(x)] = \frac{1}{2}\left[\hat{f}(\xi+\alpha) + \hat{f}(\xi-\alpha)\right]
\]
\[
\mathcal{F}[\sin(\alpha x)f(x)] = \frac{1}{2i}\left[\hat{f}(\xi-\alpha) - \hat{f}(\xi+\alpha)\right]
\]
Problem 24.11
Prove
\[
\mathcal{F}[f'(x)] = (i\xi)\hat{f}(\xi).
\]
Problem 24.12
Find the Fourier transform of f (x) = 1 − |x| for −1 ≤ x ≤ 1 and 0 otherwise.
Problem 24.13
Find, using the definition, the Fourier transform of
\[
f(x) = \begin{cases} -1 & -a < x < 0\\ 1 & 0 < x < a\\ 0 & \text{otherwise.}\end{cases}
\]


Problem 24.14
Find the inverse Fourier transform of $\hat{f}(\xi) = e^{-\frac{\xi^2}{2}}.$

Problem 24.15
Find $\mathcal{F}^{-1}\left[\frac{1}{a+i\xi}\right].$

25 Applications of Fourier Transforms to PDEs


The Fourier transform is a useful tool for solving differential equations. In this
section, we apply Fourier transforms to solve various PDE problems. In contrast
to the Laplace transform, which is usually applied in the time variable, the Fourier
transform is applied to the spatial variable on the whole real line.
The Fourier transform will be applied to the spatial variable x while the variable
t remains fixed. The PDE in the two variables x and t passes under the
Fourier transform to an ODE in the t-variable. We solve this ODE to obtain
the transformed solution û, which can be converted to the original solution u
by means of the inverse Fourier transform. We illustrate these ideas in the
examples below.

First Order Transport Equation


Consider the initial value problem

ut + cux = 0

u(x, 0) = f (x).
Let û(ξ, t) be the Fourier transform of u in x. Performing the Fourier trans-
form on both the PDE and the initial condition, we reduce the PDE into an
ODE in t
∂ û
+ iξcû = 0
∂t
û(ξ, 0) = fˆ(ξ).
Solution of the ODE gives

û(ξ, t) = fˆ(ξ)e−iξct .

Thus,
\[
u(x, t) = \mathcal{F}^{-1}[\hat{u}(\xi, t)] = f(x - ct),
\]
which is exactly the same solution as we obtained by using the method of
characteristics (Section 8).
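The discrete analogue of this computation is easy to carry out: transform the initial data, multiply by $e^{-i\xi ct},$ and invert. The following sketch (assuming NumPy; the parameters are sample values, not from the text) recovers $f(x - ct)$ up to the periodic wrap-around of the FFT.

# Spectral solution of u_t + c u_x = 0 on a periodic grid (illustrative sketch).
import numpy as np

n, box, c, t = 2**10, 40.0, 2.0, 3.0          # assumed sample parameters
x = -box/2 + box*np.arange(n)/n
xi = 2*np.pi*np.fft.fftfreq(n, d=box/n)

f = lambda z: np.exp(-z**2)                   # sample initial condition
u_hat = np.fft.fft(f(x))*np.exp(-1j*xi*c*t)   # u-hat(xi, t) = f-hat(xi) e^{-i xi c t}
u = np.real(np.fft.ifft(u_hat))

print(np.max(np.abs(u - f(x - c*t))))         # near machine precision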

Second Order Wave Equation


Consider the one-dimensional wave equation
\[
u_{tt} = c^2 u_{xx}, \quad x \in \mathbb{R},\ t > 0
\]
\[
u(x, 0) = f(x), \qquad u_t(x, 0) = g(x).
\]
Again, by performing the Fourier transform of u in x, we reduce the PDE
problem into an ODE problem in the variable t:
\[
\frac{\partial^2 \hat{u}}{\partial t^2} = -c^2\xi^2\hat{u}
\]
\[
\hat{u}(\xi, 0) = \hat{f}(\xi), \qquad \hat{u}_t(\xi, 0) = \hat{g}(\xi).
\]
The general solution to the ODE is
\[
\hat{u}(\xi, t) = \Phi(\xi)e^{-i\xi ct} + \Psi(\xi)e^{i\xi ct}
\]
where $\Phi$ and $\Psi$ are two arbitrary functions of $\xi.$ Performing the inverse
transformation and making use of the translation (shifting) property, we get the general
solution
\[
u(x, t) = \phi(x - ct) + \psi(x + ct)
\]
where $\mathcal{F}[\phi] = \Phi$ and $\mathcal{F}[\psi] = \Psi.$ The initial conditions give
\[
\Phi(\xi) = \frac{1}{2}\left[\hat{f}(\xi) - \frac{1}{i\xi c}\hat{g}(\xi)\right], \qquad
\Psi(\xi) = \frac{1}{2}\left[\hat{f}(\xi) + \frac{1}{i\xi c}\hat{g}(\xi)\right].
\]
By using the integration property, we find the inverse transforms of $\Phi$ and $\Psi$:
\[
\phi(x) = \frac{1}{2}\left[f(x) - \frac{1}{c}\int_0^x g(s)\,ds\right], \qquad
\psi(x) = \frac{1}{2}\left[f(x) + \frac{1}{c}\int_0^x g(s)\,ds\right].
\]
Application of the translation property then yields directly d'Alembert's
solution
\[
u(x, t) = \frac{1}{2}\left[f(x - ct) + f(x + ct)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds.
\]
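For readers who want to double-check this formula, the following SymPy sketch (not part of the original text) verifies, for the sample data $f(x) = e^{-x^2}$ and $g(x) = \sin x,$ that the d'Alembert formula satisfies the wave equation and both initial conditions.

# Symbolic check of the d'Alembert formula for sample data (SymPy sketch).
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)
c = sp.symbols('c', positive=True)
f = lambda z: sp.exp(-z**2)
g = lambda z: sp.sin(z)

u = (f(x - c*t) + f(x + c*t))/2 + sp.integrate(g(s), (s, x - c*t, x + c*t))/(2*c)

print(sp.simplify(sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2)))   # expect 0
print(sp.simplify(u.subs(t, 0) - f(x)))                        # expect 0
print(sp.simplify(sp.diff(u, t).subs(t, 0) - g(x)))            # expect 0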

Second Order Heat Equation


Next, we consider the heat equation

ut = kuxx , x ∈ R, t > 0

u(x, 0) = f (x).
Performing the Fourier transform in x for the PDE and the initial condition, we
obtain
\[
\frac{\partial\hat{u}}{\partial t} = -k\xi^2\hat{u}, \qquad \hat{u}(\xi, 0) = \hat{f}(\xi).
\]
Treating $\xi$ as a parameter, we obtain the solution to the above ODE problem
\[
\hat{u}(\xi, t) = \hat{f}(\xi)e^{-k\xi^2 t}.
\]
Application of the convolution theorem yields
\begin{align*}
u(x, t) &= f(x) * \mathcal{F}^{-1}\left[e^{-k\xi^2 t}\right]\\
&= f(x) * \frac{1}{\sqrt{4\pi kt}}e^{-\frac{x^2}{4kt}}\\
&= \frac{1}{\sqrt{4\pi kt}}\int_{-\infty}^{\infty} f(s)e^{-\frac{(x-s)^2}{4kt}}\,ds.
\end{align*}
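Numerically, the heat-kernel formula above can be compared with an FFT-based solution of the transformed problem. The sketch below (assuming NumPy/SciPy; k, t, and the initial data are sample choices, not from the text) evaluates both at a single point.

# Heat equation on the line: FFT solution vs. heat-kernel convolution (sketch).
import numpy as np
from scipy.integrate import quad

k, t = 0.5, 1.0                                # sample parameters (assumed)
f = lambda z: np.exp(-z**2)                    # sample initial condition

# spectral solution on a wide periodic grid
n, box = 2**12, 60.0
x = -box/2 + box*np.arange(n)/n
xi = 2*np.pi*np.fft.fftfreq(n, d=box/n)
u_fft = np.real(np.fft.ifft(np.fft.fft(f(x))*np.exp(-k*xi**2*t)))

# heat-kernel formula at one grid point
i0 = np.argmin(np.abs(x - 1.0))
x0 = x[i0]
kernel = lambda s: f(s)*np.exp(-(x0 - s)**2/(4*k*t))/np.sqrt(4*np.pi*k*t)
u_exact, _ = quad(kernel, -np.inf, np.inf)

print(u_fft[i0], u_exact)                      # the two values agree closely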

Laplace’s Equation in 2D
Consider the problem

∆u = uxx + uyy = 0, x ∈ R, 0 < y < L

u(x, 0) = 0

1 if −a < x < a
u(x, L) =
0 otherwise.
Performing Fourier Transform in x for the PDE we obtain the second order
ODE in y
ûyy = ξ 2 û.
The general solution is given by

û(ξ, y) = A(ξ) sinh (ξy) + B(ξ) cosh (ξy).



Using the boundary condition $\hat{u}(\xi, 0) = 0$ we find $B(\xi) = 0.$ Using the second
boundary condition we find
\begin{align*}
\hat{u}(\xi, L) &= \int_{-\infty}^{\infty} u(x, L)e^{-i\xi x}\,dx\\
&= \int_{-a}^{a} e^{-i\xi x}\,dx = \int_{-a}^{a}\cos(\xi x)\,dx\\
&= \frac{2\sin(\xi a)}{\xi}.
\end{align*}
Hence,
\[
A(\xi)\sinh(\xi L) = \frac{2\sin(\xi a)}{\xi}
\]
and this implies
\[
A(\xi) = \frac{2\sin(\xi a)}{\xi\sinh(\xi L)}.
\]
Thus,
\[
\hat{u}(\xi, y) = \frac{2\sin(\xi a)}{\xi\sinh(\xi L)}\sinh(\xi y).
\]
Taking the inverse Fourier transform we find
\[
u(x, y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{2\sin(\xi a)}{\xi\sinh(\xi L)}\sinh(\xi y)e^{i\xi x}\,d\xi.
\]
Using Euler's formula, and the fact that
\[
\frac{2\sin(\xi a)}{\xi\sinh(\xi L)}\sinh(\xi y)\sin(\xi x)
\]
is odd in $\xi,$ we arrive at
\[
u(x, y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{2\sin(\xi a)}{\xi\sinh(\xi L)}\sinh(\xi y)\cos(\xi x)\,d\xi.
\]
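Since the integrand is even in $\xi,$ the last integral equals $\frac{1}{\pi}\int_0^{\infty}$ of the same expression, which is convenient for numerical evaluation at interior points of the strip. A short sketch (assuming SciPy; a = 1 and L = 2 are sample values, not from the text):

# Numerical evaluation of the solution of the strip problem (illustrative sketch).
import numpy as np
from scipy.integrate import quad

a, L = 1.0, 2.0                                # sample values (assumed)

def u(x, y):
    integrand = lambda xi: (2*np.sin(xi*a)/(xi*np.sinh(xi*L)))*np.sinh(xi*y)*np.cos(xi*x)
    val, _ = quad(integrand, 1e-8, 200, limit=400)   # integrand is even in xi
    return val/np.pi

# near y = L the solution should be close to 1 for |x| < a and close to 0 for |x| > a
print(u(0.0, 1.9), u(3.0, 1.9))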

Practice Problems
Problem 25.1
Solve, by using the Fourier transform:
\[
u_t + cu_x = 0, \qquad u(x, 0) = e^{-\frac{x^2}{4}}.
\]
Problem 25.2
Solve, by using the Fourier transform:
\[
u_t = ku_{xx} - \alpha u, \quad x \in \mathbb{R}, \qquad u(x, 0) = e^{-\frac{x^2}{\gamma}}.
\]
Problem 25.3
Solve the heat equation
ut = kuxx
subject to the initial condition
\[
u(x, 0) = \begin{cases} 1 & \text{if } x \ge 0\\ 0 & \text{otherwise.}\end{cases}
\]
Problem 25.4
Use the Fourier transform to solve the heat equation
\[
u_t = u_{xx} + u, \quad -\infty < x < \infty,\ t > 0, \qquad u(x, 0) = f(x).
\]
Problem 25.5
Prove that
\[
\int_{-\infty}^{\infty} e^{-|\xi|y}e^{i\xi x}\,d\xi = \frac{2y}{x^2+y^2}.
\]
Problem 25.6
Solve the Laplace’s equation in the half plane
uxx + uyy = 0, − ∞ < x < ∞, 0 < y < ∞
subject to the boundary condition
u(x, 0) = f (x), |u(x, y)| < ∞.

Problem 25.7
Use Fourier transform to find the transformed equation of
utt + (α + β)ut + αβu = c2 uxx
where α, β > 0.
Problem 25.8
Solve the initial value problem
ut + 3ux = 0
u(x, 0) = e−x
using the Fourier transform.
Problem 25.9
Solve the initial value problem
ut = kuxx
u(x, 0) = e−x
using the Fourier transform.
Problem 25.10
Solve the initial value problem
ut = kuxx
2
u(x, 0) = e−x
using the Fourier transform.
Problem 25.11
Solve the initial value problem
ut + cux = 0
u(x, 0) = x2
using the Fourier transform.
Problem 25.12
Solve, by using the Fourier transform:
\[
\Delta u = 0, \qquad u_y(x, 0) = f(x), \qquad \lim_{x^2+y^2\to\infty} u(x, y) = 0.
\]
Appendix


Appendix A: The Method of Undetermined Coefficients
The general solution to the nonhomogeneous differential equation
\[
y'' + p(t)y' + q(t)y = g(t), \quad a < t < b \tag{26.1}
\]
has the structure
\[
y(t) = c_1y_1(t) + c_2y_2(t) + y_p(t),
\]
where $y_1$ and $y_2$ form a fundamental set of solutions to the associated homogeneous
equation and $y_p(t)$ is a particular solution to the nonhomogeneous equation. We
will write $y(t) = y_h(t) + y_p(t)$ where $y_h(t) = c_1y_1(t) + c_2y_2(t).$
In this and the next section we discuss methods for determining $y_p(t).$ The
technique we discuss in this section is known as the method of undetermined
coefficients.
This method requires that we make an initial assumption about the form of
the particular solution yp (t), but with the coefficients left unspecified, thus
the name of the method. We then substitute the assumed expression into
equation (26.1) and attempt to determine the coefficients as to satisfy that
equation.
The main advantage of this method is that it is straightforward to execute
once the assumption is made as to the form of yp (t). Its major limitation is
that it is useful only for equations with constant coefficients and the nonho-
mogeneous term g(t) is restricted to a very small class of functions, namely
functions of the form eαt Pn (t) cos βt or eαt Pn (t) sin βt where Pn (t) is a poly-
nomial of degree n.
We next illustrate the method of undetermined coefficients by several simple
examples.
Example 26.1
Find the general solution of the nonhomogeneous equation
y 00 − 2y 0 − 3y = 36e5t .
Solution.
We seek a function where the combination yp00 − 2yp0 − 3yp is equal to 36e5t .
Since the exponential function reproduces itself through differentiation, the
most plausible guessing function will be yp (t) = Ae5t where A is a constant
to be determined. Inserting this into the given equation we arrive at
25Ae5t − 10Ae5t − 3Ae5t = 36e5t .

Simplifying this last equation we find 12Ae5t = 36e5t . Solving for A we find
A = 3. Thus, yp (t) = 3e5t is a particular solution to the differential equation.
Next, the characteristic equation r2 − 2r − 3 = 0 has the roots r1 = −1
and r2 = 3. Hence, the general solution to the differential equation is y(t) =
c1 e−t + c2 e3t + 3e5t
Example 26.2
Find the general solution of
y 00 − y 0 + y = 2 sin 3t.
Solution.
The combination $y_p'' - y_p' + y_p$ must be equal to $2\sin 3t.$ Let's try the
guess $y_p(t) = A\sin 3t.$ Inserting this into the given differential equation leads
to
\[
(2 + 8A)\sin 3t + 3A\cos 3t = 0 \tag{26.2}
\]
and this must be valid for all t. Letting $t = 0$ we find $3A = 0$ or $A = 0.$ Letting
$t = \frac{\pi}{6}$ we find $2 + 8A = 0$ or $2 = 0,$ which is impossible. This means
there is no choice of A that makes equation (26.2) true. Hence, our choice is
inadequate. The appearance of both sine and cosine in equation (26.2) suggests a
guess of the form $y_p(t) = A\cos 3t + B\sin 3t.$ Inserting this into the given
differential equation leads to
\[
(-8A - 3B)\cos 3t + (3A - 8B)\sin 3t = 2\sin 3t.
\]
Setting $-8A - 3B = 0$ and $3A - 8B = 2$ and solving for A and B we find
$A = \frac{6}{73}$ and $B = -\frac{16}{73}.$ Thus, a particular solution is
\[
y_p(t) = \frac{6}{73}\cos 3t - \frac{16}{73}\sin 3t.
\]
Next, the characteristic equation $r^2 - r + 1 = 0$ has roots $r_1 = \frac{1}{2} - i\frac{\sqrt{3}}{2}$ and
$r_2 = \frac{1}{2} + i\frac{\sqrt{3}}{2}.$ Thus, the general solution to the homogeneous equation is
\[
y_h(t) = e^{\frac{t}{2}}\left(c_1\cos\frac{\sqrt{3}}{2}t + c_2\sin\frac{\sqrt{3}}{2}t\right).
\]
The general solution to the differential equation is
\[
y(t) = y_h(t) + y_p(t) = e^{\frac{t}{2}}\left(c_1\cos\frac{\sqrt{3}}{2}t + c_2\sin\frac{\sqrt{3}}{2}t\right) + \frac{6}{73}\cos 3t - \frac{16}{73}\sin 3t
\]

Example 26.3
Find the general solution of

y 00 + 4y 0 − 2y = 2t2 − 3t + 6.

Solution.
We see from the previous two examples that the trial function has usually
the appearance of the nonhomogeneous term g(t). Since g(t) is a quadratic
function, we are going to try $y_p(t) = At^2 + Bt + C.$ Inserting this into the
differential equation leads to
\[
-2At^2 + (8A - 2B)t + (2A + 4B - 2C) = 2t^2 - 3t + 6.
\]
Equating coefficients of like powers of t we find $A = -1,$ $B = -\frac{5}{2},$ and
$C = -9.$ Thus, a particular solution is
\[
y_p(t) = -t^2 - \frac{5}{2}t - 9.
\]
We next solve the homogeneous equation. The characteristic equation $r^2 + 4r - 2 = 0$
has the roots $r_1 = -2 - \sqrt{6}$ and $r_2 = -2 + \sqrt{6}.$ Thus,
\[
y_h(t) = c_1e^{(-2-\sqrt{6})t} + c_2e^{(-2+\sqrt{6})t}.
\]
The general solution of the given equation is
\[
y(t) = y_h(t) + y_p(t) = c_1e^{(-2-\sqrt{6})t} + c_2e^{(-2+\sqrt{6})t} - t^2 - \frac{5}{2}t - 9
\]

Remark 26.1
The same principle used in the previous three examples extends to the case
where g(t) is a product of any two or all three of the three types of functions
discussed above, as the next example illustrates.

Example 26.4
Find a particular solution of

y 00 − 3y 0 − 4y = −8et cos 2t.



Solution.
We are going to try yp (t) = Aet cos 2t + Bet sin 2t. Inserting into the differ-
ential equation we find

(−10A − 2B)et cos 2t + (2A − 10B)et sin 2t = −8et cos 2t.

Thus, A and B satisfy the equations $10A + 2B = 8$ and $2A - 10B = 0.$
Solving, we find $A = \frac{10}{13}$ and $B = \frac{2}{13}.$ Therefore, a particular solution is given
by
\[
y_p(t) = \frac{10}{13}e^t\cos 2t + \frac{2}{13}e^t\sin 2t
\]
The following example illustrates the use of Theorem 15.2.

Example 26.5
Find the general solution of

y 00 − 2y 0 − 3y = 4t − 5 + 6te2t .

Solution.
The characteristic equation of the homogeneous equation is r2 − 2r − 3 = 0
with roots r1 = −1 and r2 = 3. Thus,

yh (t) = c1 e−t + c2 e3t .

By Theorem 15.2, a guess for the particular solution is $y_p(t) = At + B + Cte^{2t} + De^{2t}.$
Inserting this into the differential equation leads to
\[
-3At - 2A - 3B - 3Cte^{2t} + (2C - 3D)e^{2t} = 4t - 5 + 6te^{2t}.
\]
From this identity we obtain $-3A = 4$ so that $A = -\frac{4}{3}.$ Also, $-2A - 3B = -5$
so that $B = \frac{23}{9}.$ Since $-3C = 6$ we find $C = -2.$ From $2C - 3D = 0$ we find
$D = -\frac{4}{3}.$ It follows that
\[
y(t) = c_1e^{-t} + c_2e^{3t} - \frac{4}{3}t + \frac{23}{9} + \left(-2t - \frac{4}{3}\right)e^{2t}
\]

Although the method of undetermined coefficients provides a nice general


method for finding a particular solution, some difficulty arise as illustrated
in the following example.

Example 26.6
Find the general solution of the nonhomogeneous equation
y 00 − y 0 − 2y = 4e−t .
Solution.
Let’s try with yp (t) = Ae−t . Substituting this into the differential equation
leads to 0Ae−t = 4e−t . Thus, A does not exist. Why did the procedure of the
previous examples fail here? The reason is that the function e−t that appears
in yp is a solution to the homogeneous equation and so cannot possibly be
a solution to the nonhomogeneous equation at the same time. Then comes
the question of how to find a correct form for the particular solution.
We will try to solve a simpler equation with the same difficulty and to use
its general solution to suggest how to proceed with our given equation. The
simpler equation we consider is y 0 + y = 4e−t . By the method of integrating
factor we find the general solution y(t) = 4te−t +ce−t . The second term is the
solution to the homogeneous equation whereas the first one is the solution
to the nonhomogeneous equation. We conclude from this discussion that a
good guess for the original equation would be yp (t) = Ate−t . If we insert
this into the differential equation we end up with $-3Ae^{-t} = 4e^{-t}.$ Solving
for A we find $A = -\frac{4}{3}.$ Thus, $y_p(t) = -\frac{4}{3}te^{-t}$ and the general solution to the
differential equation is $y(t) = c_1e^{-t} + c_2e^{2t} - \frac{4}{3}te^{-t}$
Example 26.7
Find the general solution of the nonhomogeneous equation
y 00 + 2y 0 + y = 2e−t .
Solution.
The characteristic equation is r2 +2r +1 = 0 with double roots r1 = r2 = −1.
Thus, yh (t) = c1 e−t + c2 te−t . Our trial function can not contain either e−t or
te−t since both are solutions to the homogeneous equation. Thus, a proper
guess is yp (t) = At2 e−t . Finding derivatives up to order 2 we find yp0 (t) =
2Ate−t − At2 e−t and yp00 (t) = 2Ae−t − 4Ate−t + At2 e−t . Substituting this in
the original equation and collecting like terms we find
2Ae−t = 2e−t .
Solving for A we find A = 1 so that yp (t) = t2 e−t . Hence, the general solution
is given by
y(t) = c1 e−t + c2 te−t + t2 e−t

In the following table we list examples of g(t) along with the corresponding
form of the particular solution.

  Form of g(t)                                              Form of y_p(t)
  P_n(t) = a_n t^n + a_{n-1} t^{n-1} + ... + a_0            t^r [A_n t^n + A_{n-1} t^{n-1} + ... + A_1 t + A_0]
  P_n(t) e^{alpha t}                                        t^r [A_n t^n + A_{n-1} t^{n-1} + ... + A_1 t + A_0] e^{alpha t}
  P_n(t) e^{alpha t} cos(beta t) or P_n(t) e^{alpha t} sin(beta t)
                                                            t^r e^{alpha t} [(A_n t^n + ... + A_1 t + A_0) cos(beta t) + (B_n t^n + ... + B_1 t + B_0) sin(beta t)]

The number r is chosen to be the smallest nonnegative integer such that


no term in the assumed form is a solution of the homogeneous equation
ay 00 + by 0 + cy = 0. The value of r will be 0, 1, or 2.
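These guesses can be checked with a computer algebra system. The following SymPy sketch (not part of the original text) reproduces the general solution of Example 26.1; the same call works for the other examples in this appendix.

# Checking a particular solution by computer algebra (sketch assuming SymPy).
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Example 26.1: y'' - 2y' - 3y = 36 e^{5t}
ode = sp.Eq(y(t).diff(t, 2) - 2*y(t).diff(t) - 3*y(t), 36*sp.exp(5*t))
print(sp.dsolve(ode, y(t)))    # expect C1*exp(-t) + C2*exp(3*t) + 3*exp(5*t)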

Example 26.8
Find the general solution of y 00 − y = t + tet .

Solution.
The characteristic equation r2 − 1 = 0 has roots r = ±1. Thus, the homo-
geneous solution is yh (t) = c1 e−t + c2 et . A trial function for the particular
solution is A0 + A1 t + t(B0 + B1 t)et since et is a solution of the homogeneous
equation. Inserting into the differential equation we find

\[
2B_1e^t + 2(B_0 + 2B_1t)e^t + (tB_0 + t^2B_1)e^t - A_0 - A_1t - t(B_0 + B_1t)e^t = t + te^t
\]
or
\[
-A_0 - A_1t + (2B_1 + 2B_0 + 4B_1t)e^t = t + te^t.
\]
From this we obtain $A_0 = 0,$ $A_1 = -1,$ $B_1 + B_0 = 0,$ $4B_1 = 1.$ Hence,
$A_0 = 0,$ $A_1 = -1,$ $B_0 = -\frac{1}{4},$ $B_1 = \frac{1}{4}.$ So
\[
y_p(t) = -t + \frac{1}{4}t(t - 1)e^t
\]
and the general solution is
\[
y(t) = c_1e^{-t} + c_2e^t - t + \frac{1}{4}t(t - 1)e^t
\]
Example 26.9
Solve using the method of undetermined coefficients:

y 00 + y = et + t3 , y(0) = 2, y 0 (0) = 0.

Solution.
First, the characteristic equation is r2 + 1 = 0, with roots r = ±i, so the
homogeneous solution is yh (t) = c1 sin t + c2 cos t. The trial function for the
particular solution is yp (t) = Aet + Bt3 + Ct2 + Dt + E. Plugging into the
differential equation, we see

Aet + 6Bt + 2C + Aet + Bt3 + Ct2 + Dt + E = et + t3 .

Matching coefficients, we see:

2A = 1, B = 1, C = 0, 6B + D = 0, E = 0

The particular solution is
\[
y_p(t) = \frac{1}{2}e^t + t^3 - 6t,
\]
and so the general solution is
\[
y(t) = c_1\sin t + c_2\cos t + \frac{1}{2}e^t + t^3 - 6t.
\]
When $t = 0,$ this is $y(0) = c_2 + \frac{1}{2} = 2,$ so $c_2 = \frac{3}{2}.$ The first derivative of the
general solution is $y'(t) = c_1\cos t - \frac{3}{2}\sin t + \frac{1}{2}e^t + 3t^2 - 6.$ At $t = 0,$
$y'(0) = c_1 + \frac{1}{2} - 6 = 0,$ so $c_1 = \frac{11}{2}.$ We thus have the solution
$y(t) = \frac{11}{2}\sin t + \frac{3}{2}\cos t + \frac{1}{2}e^t + t^3 - 6t$

Appendix B: The Method of Variation of Parameters
In this section, we discuss a second method for finding a particular solution
to a nonhomogeneous differential equation

y 00 + p(t)y 0 + q(t)y = g(t), a < t < b. (27.1)

This method has no prior conditions to be satisfied by either p(t), q(t), or


g(t). Therefore, it may sound more general than the method of undetermined
coefficients. We will see that this method depends on integration while the
previous one is purely algebraic which, for some at least, is an advantage.
To use this method, we first find the general solution to the homogeneous
equation
y(t) = c1 y1 (t) + c2 y2 (t).
Then we replace the parameters c1 and c2 by two functions u1 (t) and u2 (t)
to be determined. From this the method got its name. Thus, obtaining

yp (t) = u1 (t)y1 (t) + u2 (t)y2 (t).

Observe that if u1 and u2 are constant functions then the above y is just the
homogeneous solution to the differential equation.
In order to determine the two functions one has to impose two constraints.
Finding the derivative of yp we obtain

yp0 = (y10 u1 + y20 u2 ) + (y1 u01 + y2 u02 ).

Finding the second derivative to obtain

yp00 = y100 u1 + y10 u01 + y200 u2 + y20 u02 + (y1 u01 + y2 u02 )0 .

Since it is up to us to choose u1 and u2 we decide to do that in such a way to


make our computation simple. One way to achieving that is to impose the
condition

y1 u01 + y2 u02 = 0. (27.2)

Under such a constraint yp0 and yp00 are simplified to

yp0 = y10 u1 + y20 u2



and
yp00 = y100 u1 + y10 u01 + y200 u2 + y20 u02 .
In particular, yp00 does not involve u001 and u002 .
Inserting yp , yp0 , and yp00 into equation (27.1) to obtain

[y100 u1 + y10 u01 + y200 u2 + y20 u02 ] + p(t)(y10 u1 + y20 u2 ) + q(t)(u1 y1 + u2 y2 ) = g(t).

Rearranging terms,

[y100 + p(t)y10 + q(t)y1 ]u1 + [y200 + p(t)y20 + q(t)y2 ]u2 + [u01 y10 + u02 y20 ] = g(t).

Since y1 and y2 are solutions to the homogeneous equation, the previous


equation yields our second constraint

u01 y10 + u02 y20 = g(t). (27.3)

Combining equations (27.2) and (27.3), we find the system of two equations
in the unknowns $u_1'$ and $u_2'$:
\[
y_1u_1' + y_2u_2' = 0, \qquad u_1'y_1' + u_2'y_2' = g(t).
\]
Since $\{y_1, y_2\}$ is a fundamental set, the Wronskian $W(t) = y_1y_2' - y_1'y_2$ is
nonzero, so that one can find unique $u_1'$ and $u_2'.$ Using the method of elimination,
these functions are given by
\[
u_1'(t) = -\frac{y_2(t)g(t)}{W(t)} \qquad \text{and} \qquad u_2'(t) = \frac{y_1(t)g(t)}{W(t)}.
\]
Computing antiderivatives, we obtain
\[
u_1(t) = -\int\frac{y_2(t)g(t)}{W(t)}\,dt \qquad \text{and} \qquad u_2(t) = \int\frac{y_1(t)g(t)}{W(t)}\,dt.
\]
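The two antiderivatives can also be computed symbolically. The SymPy sketch below (not part of the original text) carries out these steps for the data of Example 27.1, which follows.

# Variation of parameters carried out symbolically (sketch assuming SymPy).
import sympy as sp

t = sp.symbols('t')
y1, y2, g = sp.exp(-t), sp.exp(2*t), 2*sp.exp(-t)        # data of Example 27.1 below

W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)    # Wronskian, here 3 e^t
u1 = sp.integrate(-y2*g/W, t)
u2 = sp.integrate(y1*g/W, t)
yp = sp.simplify(u1*y1 + u2*y2)
print(yp)    # a particular solution: -(2/3) t e^{-t} - (2/9) e^{-t}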

Example 27.1
Find the general solution of

y 00 − y 0 − 2y = 2e−t

using the method of variation of parameters.



Solution.
The characteristic equation $r^2 - r - 2 = 0$ has roots $r_1 = -1$ and $r_2 = 2.$
Thus, $y_1(t) = e^{-t},$ $y_2(t) = e^{2t}$ and $W(t) = 3e^t.$ Hence,
\[
u_1(t) = -\int\frac{e^{2t}\cdot 2e^{-t}}{3e^t}\,dt = -\frac{2}{3}t
\]
and
\[
u_2(t) = \int\frac{e^{-t}\cdot 2e^{-t}}{3e^t}\,dt = -\frac{2}{9}e^{-3t}.
\]
The particular solution is
\[
y_p(t) = -\frac{2}{3}te^{-t} - \frac{2}{9}e^{-t}.
\]
The general solution is then given by
\[
y(t) = c_1e^{-t} + c_2e^{2t} - \frac{2}{3}te^{-t} - \frac{2}{9}e^{-t}
\]
Example 27.2
Find the general solution to (2t − 1)y 00 − 4ty 0 + 4y = (2t − 1)2 e−t if y1 (t) = t
and y2 (t) = e2t form a fundamental set of solutions to the equation.
Solution.
First we rewrite the equation in standard form
\[
y'' - \frac{4t}{2t-1}y' + \frac{4}{2t-1}y = (2t-1)e^{-t}.
\]
Since $W(t) = (2t-1)e^{2t}$ we find
\[
u_1(t) = -\int\frac{e^{2t}\cdot(2t-1)e^{-t}}{(2t-1)e^{2t}}\,dt = e^{-t}
\]
and
\[
u_2(t) = \int\frac{t\cdot(2t-1)e^{-t}}{(2t-1)e^{2t}}\,dt = -\frac{1}{3}te^{-3t} - \frac{1}{9}e^{-3t}.
\]
Thus,
\[
y_p(t) = te^{-t} - \frac{1}{3}te^{-t} - \frac{1}{9}e^{-t} = \frac{2}{3}te^{-t} - \frac{1}{9}e^{-t}.
\]
The general solution is
\[
y(t) = c_1t + c_2e^{2t} + \frac{2}{3}te^{-t} - \frac{1}{9}e^{-t}
\]

Example 27.3
Find the general solution to the differential equation y 00 + y 0 = ln t, t > 0.
Solution.
The characteristic equation $r^2 + r = 0$ has roots $r_1 = 0$ and $r_2 = -1$ so that
$y_1(t) = 1,$ $y_2(t) = e^{-t},$ and $W(t) = -e^{-t}.$ Hence,
\[
u_1(t) = -\int\frac{e^{-t}\ln t}{-e^{-t}}\,dt = \int\ln t\,dt = t\ln t - t
\]
\[
u_2(t) = \int\frac{\ln t}{-e^{-t}}\,dt = -\int e^t\ln t\,dt = -e^t\ln t + \int\frac{e^t}{t}\,dt.
\]
Thus,
\[
y_p(t) = t\ln t - t - \ln t + e^{-t}\int\frac{e^t}{t}\,dt
\]
and
\[
y(t) = c_1 + c_2e^{-t} + t\ln t - t - \ln t + e^{-t}\int\frac{e^t}{t}\,dt
\]
Example 27.4
Find the general solution of
1
y 00 + y = .
2 + sin t
Solution.
Since the characteristic equation $r^2 + 1 = 0$ has roots $r = \pm i,$ the general
solution of the corresponding homogeneous equation $y'' + y = 0$ is given by
\[
y_h(t) = c_1\cos t + c_2\sin t.
\]
Since $W(t) = 1$ we find
\[
u_1(t) = -\int\frac{\sin t}{2+\sin t}\,dt = -t + \int\frac{2}{2+\sin t}\,dt
\]
\[
u_2(t) = \int\frac{\cos t}{2+\sin t}\,dt = \ln(2+\sin t).
\]
Hence, the particular solution is
\[
y_p(t) = \sin t\,\ln(2+\sin t) + \cos t\left(\int\frac{2}{2+\sin t}\,dt - t\right)
\]
and the general solution is
\[
y(t) = c_1\cos t + c_2\sin t + y_p(t)
\]
Answers and Solutions

Section 1

1.1 (a) ODE (b) PDE (c) ODE.

1.2 uss = 0.

1.3 uss + utt = 0.

1.4 (a) Order 3, nonlinear (b) Order 1, linear, homogeneous (c) Order 2,
linear, nonhomogeneous.

1.5 (a) Linear, homogeneous, order 3.


(b) Linear, non-homogeneous, order 3. The inhomogeneity is − sin y.
(c) Nonlinear, order 2. The non-linear term is ex uux .
(d) Nonlinear, order 3. The non-linear terms are ux uxxy and ex uuy .
(e) Linear, non-homogeneous, order 2. The inhomogeneity is f (x, y, t).

1.6 (a) Linear. (b) Linear. (c) Nonlinear. (d) Nonlinear.

1.7 (a) PDE, linear, second order, homogeneous.


(b) PDE, linear, second order, homogeneous.
(c) PDE, nonlinear, fourth order.
(d) ODE, linear, second order, nonhomogeneous.
(e) PDE, linear, second order, nonhomogeneous.
(f) PDE, quasilinear, second order.

1.8 A(x, y, z)uxx +B(x, y, z)uxy +C(x, y, z)uyy +E(x, y, z)uxz +F (x, y, z)uyz +
G(x, y, z)uzz +H(x, y, z)ux +I(x, y, z)uy +J(x, y, z)uz +K(x, y, z)u = L(x, y, z).


1.9 (a) Order 3, linear, homogeneous.


(b) Order 1, nonlinear.
(c) Order 4, linear, nonhomogeneous
(d) Order 2, nonlinear.
(e) Order 2, linear, homogeneous.

1.10 uww = 0.

1.11 uvw = 0.

1.12 uvw = 0.

1.13 ut = 0.

1.14 ut = 1.

1.15 uw = 1b u.

Section 2

2.1 a = b = 0.

2.2 Substituting into the differential equation we find

tX 00 T − XT 0 = 0

or
X 00 T0
= .
X tT
The LHS is a function of x only whereas the RHS is a function of t only.
This is true only when both sides are constant. That is, there is λ such that

X 00 T0
= =λ
X tT
and this leads to the two ODEs X 00 = λX and T 0 = λtT.
 x

x x
2.3 We have xux + (x + 1)yuy = y
(e x
+ xe ) + (x + 1)y − xe
y2
= 0 and

u(1, 1) = e.

2.4 We have ux +uy +2u = e−2y cos (x − y)−2e−2y sin (x − y)−e−2y cos (x − y)+
2e−2y sin (x − y) = 0 and u(x, 0) = sin x.

2.5 (a) The general solution to this equation is u(x) = C where C is an


arbitrary constant.
(b) The general solution is u(x, y) = f (y) where f is an arbitrary diferen-
tiable function of y.

2.6 (a) The general solution to this equation is u(x) = C1 x + C2 where


C1 and C2 are arbitrary constants.
(b)
R We have uy = f (y) where f is an arbitrary function of y. Hence, u(x, y) =
f (y)dy + g(x).

2.7 Let v(x, y) = y + 2x. Then


ux =2fv (v) + g(v) + 2xgv (v)
uxx =4fvv (v) + 4gv (v) + 4xgvv (v)
uy =fv (v) + xgv (v)
uyy =fvv (v) + xgvv (v)
uxy =2fvv (v) + gv (v) + 2xgvv (v)
Hence,
uxx − 4uxy + 4uyy =4fvv (v) + 4gv (v) + 4xgvv (v)
−8fvv (v) − 4gv (v) − 8xgvv (v)
+4fvv (v) + 4xgvv (v) = 0.
2.8 utt = c2 uxx .

2.9 Let v = x + p(u)t. Using the chain rule we find


ut = fv · vt = fv · (p(u) + pu ut t).
Thus
(1 − tfv pu )ut = fv p.
If 1 − tfv pu ≡ 0 on any t−interval I then fv p ≡ 0 on I which implies that
fv ≡ 0 or p ≡ 0 on I. But either condition will imply that tfv pu ≡ 0 and

this will imply that 1 = 1 − tfv pu = 0, a contradiction. Hence, we must have


1 − tfv pu 6= 0. In this case,
fv p
ut = .
1 − tfv pu
Likewise,
ux = fv · (1 + pu ux t)
or
fv
ux = .
1 − tfv pu
It follows that ut = p(u)ux .
If ut = (sin u)ux then p(u) = sin u so that the general solution is given by

u(x, t) = f (x + t sin u)

where f is an arbitrary differentiable function in one variable.

2.10 u(x, y) = xf (x − y) + g(x − y).

2.11 Using integration by parts, we compute


Z L Z L
L
uxx (x, t)u(x, t)dx = ux (x, t)u(x, t)|x=0 − u2x (x, t)dx
0 0
Z L
=ux (L, t)u(L, t) − ux (0, t)u(0, t) − u2x (x, t)dx
0
Z L
=− u2x (x, t)dx ≤ 0
0

Note that we have used the boundary conditions u(0, t) = u(L, t) = 0 and
the fact that u2x (x, t) ≥ 0 for all x ∈ [0, L].

2.12 (a) This can be done by plugging in the equations.


(b) Plug in.
(c) We have sup{|un (x, 0) − 1| : x ∈ R} = n1 sup{| sin nx| : x ∈ R} = n1 .
2
en t
(d) We have sup{|un (x, t) − 1| : x ∈ R} = n
.
2
en t
(e) We have limt→∞ sup{|un (x, t) − 1| : x ∈ R, t > 0} = limt→∞ n
= ∞.
Hence, the solution is unstable and thus the problem is ill-posed.

2.13 (a) u(x, y) = x3 + xy 2 + f (y), where f is an arbitrary function.


3 2
(b) u(x, y) = x 6y + F (x)R+ g(y), where
R
F (x) = f (x)dx.
1 2x+3t
R
(c) u(x, t) = 18 e + t f1 (x)dx + f2 (x)dx + g(t).

2.14 (b) u(x, y) = xf (y − 2x) + g(y − 2x).

2.15 We have

ut =cuv − cuw
utt =c2 uvv − 2c2 uwv + c2 uww
ux =uv + uw
uxx =uvv + 2uvw + uww

Substituting we find uvw = 0 and solving Rthis equation we find uv = f (v)


and u(v, w) = F (v) + G(w) where F (v) = f (v)dv.
Finally, using the fact that v = x + ct and w = x − ct; we get d’Alembert’s
solution to the one-dimensional wave equation:

u(x, t) = F (x + ct) + G(x − ct)

where F and G are arbitrary differentiable functions.

Section 3
3.1 $y = \frac{1}{2}(1 - e^{-t^2}).$

3.2 $y(t) = \frac{3t-1}{9} + e^{-2t} + Ce^{-3t}.$

3.3 $y(t) = 3\sin t + \frac{3\cos t}{t} + \frac{C}{t}.$

3.4 $y(t) = \frac{1}{13}(3\sin(3t) + 2\cos(3t)) + Ce^{-2t}.$

3.5 $y(t) = Ce^{-\sin t} - 3.$

3.6 α = −2.

3.7 p(t) = 2 and g(t) = 2t + 3.



3.8 y0 = y(0) = −1 and g(t) = 2et + cos t + sin t.

3.9 1.

3.10 y(t) = t ln |t| + 7t.

3.11 Since $p(t) = a$ we find $\mu(t) = e^{at}.$ Suppose first that $a = \lambda.$ Then
\[
y' + ay = be^{-at}
\]
and the corresponding general solution is
\[
y(t) = bte^{-at} + Ce^{-at}.
\]
Thus,
\[
\lim_{t\to\infty} y(t) = \lim_{t\to\infty}\left(\frac{bt}{e^{at}} + \frac{C}{e^{at}}\right) = \lim_{t\to\infty}\frac{b}{ae^{at}} = 0.
\]
Now, suppose that $a \ne \lambda.$ Then
\[
y(t) = \frac{b}{a-\lambda}e^{-\lambda t} + Ce^{-at}.
\]
Thus,
\[
\lim_{t\to\infty} y(t) = 0.
\]

3.12 y(t) = (−tet + et )−1 .

t2
3.13 y(t) = 4
− 3t + 12 + 1
12t2
.

3.14 y(t) = tSi(t) + (3 − Si(1))t.

Section 4
4.1 $y(t) = \left(\frac{3}{2}e^{t^2} + C\right)^{\frac{1}{3}}.$

4.2 $y(t) = Ce^{\frac{t^2}{2} - 2t}.$

4.3 $y(t) = Ct^2 + 4.$

4.4 $y(t) = \frac{2Ce^{4t}}{1+Ce^{4t}}.$

4.5 $y(t) = \sqrt{5 - 4\cos(2t)}.$

4.6 $y(t) = -\sqrt{-2\cos t + 4}.$

4.7 $y(t) = e^{1-t} - 1.$

4.8 $y(t) = \frac{2}{\sqrt{-4t^2+1}}.$

4.9 $y(t) = \tan\left(t + \frac{\pi}{2}\right) = -\cot t.$

4.10 $y(t) = \frac{3-e^{-t^2}}{3+e^{-t^2}}.$

4.11 u(x, y) = F (y)e−3x + G(x) where F (y) =


R
f (y)dy.

t2
4.12 y 2 + cos y + cos t + 2
= 2.

4.13 3y 2 y 0 + cos y + 2t = 0, y(2) = 0.

4.14 The ODE is not separable.

Section 5

5.1 (a) Linear (b) Quasi-linear, nonlinear (c) Nonlinear (d) Semi-linear, non-
linear.

5.2 Let w = 2x − y. Then ux + 2uy − u = ex f (w) + 2ex fw (w) − 2ex fw (w) −


ex f (w) = 0.
 1 1  3 1 √
5.3 We have xux − yuy = x 32 x 2 y 2 − y 12 x 2 y − 2 = x xy = u. Also,
u(y, y) = y 2 .

5.4 We have −yux + xuy = −2xy sin (x2 + y 2 ) + 2xy sin (x2 + y 2 ) = 0. More-
over, u(0, y) = cos y 2 .

5.5 We have x1 ux + y1 uy = x1 (−x)+ y1 (1+y) = y1 . Moreover, u(x, 1) = 12 (3−x2 ).

5.6 3a − 7b = 0.

5.7 aut + cu = 0.

5.8 u(x, y) = x + f (x − y).

5.9 We have

ux = − 4e−4x f (2x − 3y) + 2e−4x f 0 (2x − 3y)


uy = − 3e−4x f 0 (2x − 3y)

Thus,

3ux + 2uy + 12u = − 12e−4x f (2x − 3y) + 6e−4x f 0 (2x − 3y)


−6e−4x f 0 (2x − 3y) + 12e−4x f (2x − 3y) = 0.
x
5.10 u(x, t) = f (ax − bt)e b .

5.11 u(x, y) = f (bx − ay).

5.12 cuw + λu(v, w) = f w, w−v



c
.

5.13 vwv (v) = Aw(v).

Section 6.1

6.1.1 19.

6.1.2 −15.

6.1.3 $\vec{u}\cdot\vec{v} = \frac{1}{2}$ and $\vec{u}\cdot\vec{w} = -\frac{1}{2}.$

6.1.4 63◦ .

6.1.5 52◦ .

6.1.6 (a) Neither (b) Orthogonal (c) Orthogonal (d) Parallel.

6.1.7 P QR is a right triangle at Q.

6.1.8 $\vec{u} = \left\langle -\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right\rangle.$

6.1.9 45◦ .

6.1.10 $\text{Comp}_{\vec{a}}\vec{b} = \frac{9}{7}$ and $\text{Proj}_{\vec{a}}\vec{b} = \left\langle \frac{27}{49}, \frac{54}{49}, -\frac{18}{49}\right\rangle.$

6.1.11 ~b =< x, y, 3x − 2 10 > where x and y are arbitrary numbers.

6.1.12 144 J.

6.1.13 1839 ft − lb = 1839 slug.

Section 6.2

6.2.1 ∇F (x, y, z) = (yzexyz + y cos (xy))~i + (xzexyz + x cos (xy))~j + xyexyz~k.


6.2.2 $\nabla F(x, y, z) = \cos\left(\frac{y}{z}\right)\vec{i} - \frac{x}{z}\sin\left(\frac{y}{z}\right)\vec{j} + \frac{xy}{z^2}\sin\left(\frac{y}{z}\right)\vec{k}.$

6.2.3
√ The level surfaces are spheres centered at (2, 3, −5) and with radius
C, C ≥ 0.

6.2.4 $\frac{12}{\sqrt{5}}.$

6.2.5 $\frac{1}{\sqrt{10}}(3x^2 + 6y^3z - 3xy - 2xz + yz).$

6.2.6 The maximum rate of change is $\sqrt{17}$ and the maximum occurs in
the direction of
\[
\frac{\nabla u(0, 2)}{\|\nabla u(0, 2)\|} = \frac{4}{\sqrt{17}}\vec{i} + \frac{1}{\sqrt{17}}\vec{j}.
\]

6.2.7 $\nabla u(x, y, z) = -\frac{2x}{x^2+y^2}\vec{i} - \frac{2y}{x^2+y^2}\vec{j} + e^z\vec{k}.$

Section 7

7.1 $u(x, y) = \frac{1}{2}y^2 - \frac{1}{2}y^2e^{-2x} + \sin(ye^{-x}).$

7.2 $u(x, y) = \frac{1}{\csc(ye^{-x}) - x}.$
2
7.3 u(x, y) = ex f (x2 + y 2 ).

7.4 $u(x, y) = y\left(2 + e^{-\left|\frac{x}{y}\right|}\right).$

7.5 $u(x, y) = \frac{1}{(x-4y)^2 + 1 - y}.$

1
7.6 u(x, y) = 2y
e−x2 +e −1 −y
.

7.7 $u(x, y) = \frac{3}{2}x - \frac{3}{2}xe^{-2y} + e^{-y}\tan^{-1}(xe^{-y}).$

7.8 u(x, y) = x2 y 2 , xy ≥ 0.

7.9 uy = k2 = f (k1 ) = f (y + x3 ).

7.10 u(x, y) = y 4 − (y 2 − x2 )2 = 2x2 y 2 − x4 .

Section 8

8.1 u(x, t) = sin (x − 3t).


c c
8.2 u(x, y) = k2 e− a x = f (bx − ay)e− a x .

8.3 u(x, y) = x cos (y − 2x) + f (y − 2x).


dy
8.4 Solving the equation dx = 1 we find x − y = k1 . Solving the equation
du 1 2
dx
= x we find u(x, y) = 2
x + f (x − y) where f is a differentiable function
2
of one variable. Since u(x, x) = 1 we find 1 = 21 x2 + f (0) or f (0) = 1 − x2
which is impossible since f (0) is a constant. Hence, the given initial value
problem has no solution.

8.5 $u(x, t) = \frac{e^{-3t}}{1+(x-2t)^2}.$

8.6 u(x, t) = e3t (x − t)2 + 19 − 13 t − 91 .


 

8.7 Using the chain rule we find wt = ut eλt + λueλt and wx = ux eλt . Substi-
tuting these equations into the original equation we find

wt e−λt − λu + cwx e−λt + λu = 0

or
wt + cwx = 0
8.8 (a) w(x, t) is a solution to the equation follows from the principle of
superposition. Moreover, w(x, 0) = u(x, 0) − v(x, 0) = f (x) − g(x).
(b) w(x, t) = f (x − ct) − g(x − ct).
(c) From (b) we see that

sup{|u(x, t) − v(x, t)|} = sup{|f (x) − g(x)|}.


x,t x

Thus, small changes in the initial data produces small changes in the solu-
tion. Hence, the problem is a well-posed problem.

8.9
\[
u(x, t) = \begin{cases} e^{-\frac{\lambda}{c}x}\, g\left(t - \frac{x}{c}\right) & \text{if } x < ct\\ 0 & \text{if } x \ge ct.\end{cases}
\]
8.10 $u(x, t) = \sin\left(\frac{2x-3t}{2}\right).$

8.11 u(x, y) = x + f (x − y).

Section 9

9.1 u = −y+f (y ln (y + u)−x) where f is an arbitrary differentiable function.


f (x+y+u)
9.2 u = xy
where f is an arbitrary differentiable function.

9.3 x4 − u4 − 2xyu2 = f (xy) where f is an arbitrary differentiable func-


tion.

9.4 xy + u = f (x2 + y 2 − u2 ) where f is an arbitrary differentiable func-


tion.

y

9.5 x2 + y 2 + u2 = f u
where f is an arbitrary differentiable function.

9.6 x+u = k2 et = et f (u2 −x2 ) where f is an arbitrary differentiable function.

9.7 x2 + y 2 + u2 = f (x + y + u) where f is an arbitrary differentiable function.

9.8 x2 + y 2 − 2u = f (xyu) where f is an arbitrary differentiable function.

9.9 y = sin−1 x + f (u) where f is a differentiable function.

u2
9.10 21 (x2 − y 2 − u2 ) = f (xy − 2
where f is a differentiable function.

Section 10
1−xy
10.1 u(x, y) = x+y
, x + y 6= 0.

10.2 u(x, y) = (x + y)(x2 − y 2 ).

10.3 2xyu + x2 + y 2 − 2u + 2 = 0.
y

10.4 u(x, y) = ln x + 1 − x
.

10.5 u(x, y) = f (xe−y ).

10.6 u(t, x) = f (x − at).

1
10.7 u(x, y) = sec (x−ay)−y
.
 
(x2 1
10.8 u(x, y) = h y − 2
+ 2
ex−1 .

10.9 u(x, y) = f (x − uy).

10.10 u(x, y) = y − sin−1 x.

10.11 (i) $y = Cx^2.$ The characteristics are parabolas in the plane centered
at the origin. (The figure is omitted here.)

(ii) $u(x, y) = e^{yx^{-2}}.$
(iii) In the first case, we cannot substitute x = 0 into $yx^{-2}$ (the argument
of the function f, above) because $x^{-2}$ is not defined at 0. Similarly, in the
second case, we’d need to find a function f so that f (0) = h(x). If h is not
constant, it is not possible to satisfy this condition for all x ∈ R.

10.12 u(x, y) = ey cos (x − y).

10.13 (a) u = ex f (ye−x ) where f is an arbitrary differential function.


(b) We want 2 = u(x, 3x) = ex f (3ex e−x ) = ex f (3). This equation is impossi-
ble so this Cauchy problem has no solutions.
(c) We want ex = ex f (ex e−x ) =⇒ f (1) = 1. In this case, there are infinitely
many solutions to this Cauchy problem, namely, u(x, y) = ex f (ye−x ) where
f is an arbitrary function satisfying f (1) = 1.
x2 (4x−y)2
10.14 u(x, y) = −1 + 2e 2 e− 2 .

10.15 The Cauchy problem has no solutions.


dy
10.16 (a) The characteristics satisfy the ODE dx = xy . Solving this equa-
tion we find x2 − y 2 = C. Thus, the characteristics are hyperbolas.
(b)

(c) The general solution to the PDE is u(x, y) = f (x2 − y 2 ) where f is


2
an arbitrary differentiable function. Since u(0, y) = e−y we find f (y) = ey .
2 2
Hence, u(x, y) = ex −y .

10.17 (a) infinitely many (b) no solutions.

Section 11

11.1 (a) Hyperbolic (b) Parabolic (c) Elliptic.

11.2 (a) Ellitpic (b) Parabolic (c) Hyperbolic.

11.3 • The PDE is of hyperbolic type if 4y 2 (x2 + x + 1) > 0. This is true for
all y 6= 0. Graphically, this is the xy−plane with the x−axis removed,
• The PDE is of parabolic type if 4y 2 (x2 + x + 1) = 0. Since x2 + x + 1 > 0
for all x ∈ R, we must have y = 0. Graphically, this is x−axis.
• The PDE is of elliptic type if 4y 2 (x2 + x + 1) < 0 which can not happen.

11.4 We have

ux (x, t) = − sin x sin t,


uxx (x, t) = − cos x sin t,
ut (x, t) = cos x cos t,
utt (x, t) = − cos x sin t.

Thus,

uxx (x, t) = − cos x sin t = utt (x, t),


u(x, 0) = cos x sin 0 = 0,
ut (x, 0) = cos x cos 0 = cos x,
ux (0, t) = − sin 0 sin t = 0.

11.5 (a) Quasi-linear (b) Semi-linear (c) Linear (d) Nonlinear.

11.6 We have
2x
ux =
x2 + y 2
2y 2 − 2x2
uxx = 2
(x + y 2 )2
2y
uy = 2
x + y2
2x2 − 2y 2
uyy = 2
(x + y 2 )2
Plugging these expressions into the equation we find uxx + uyy = 0. Similar
argument holds for the second part of the problem.

11.7 Multiplying the equation by u and integrating, we obtain


Z L Z L
2
λ u (x)dx = uuxx (x)dx
0 0
Z L
=[u(L)ux (L) − u(0)ux (0)] − u2x (x)dx
0
 Z L 
2 2 2
= − kL u(L) + k0 u(0) + ux (x)dx
0

For λ > 0, because k0 , kL > 0, the right-hand side is nonpositive and the
left-hand side is nonnegative. Therefore, both sides must be zero, and there
can be no solution other than u ≡ 0, which is the trivial solution.

11.8 Substitute u(x, y) = f (x)g(y) into the left side of the equation to obtain
f (x)g(y)(f (x)g(y))xy = f (x)g(y)f 0 (x)g 0 (y). Now, substitute the same thing
into the right side to obtain (f (x)g(y))x (f (x)g(y))y = f 0 (x)g(y)f (x)g 0 (y) =
f (x)g(y)f 0 (x)g 0 (y). So the sides are equal, which means f (x)g(y) is a solution.

11.9 We have

(un )xx = −n2 sin nx sinh ny and (un )yy = n2 sin nx sinh ny

Hence, ∆un = 0.
x2 y 2
R
11.10 u(x, y) = 4
+ F (x) + G(y), where F (x) = f (x)dx.

11.11 (a) We have A = 2, B = −4, C = 7 so B 2 −4AC = 16−56 = −40 < 0.


So this equation is elliptic everywhere in R2 .
(b) We have A = 1, B = −2 cos x, C = − sin2 x so B 2 − 4AC = 4 cos2 x +
4 sin2 x = 4 > 0. So this equation is hyperbolic everywhere in R2 .
(c) We have A = y, B = 2(x − 1), C = −(y + 2) so B 2 − 4AC =
4(x − 1)2 + 4y(y + 2) = 4[(x − 1)2 + (y + 1)2 − 1]. The equation is parabolic if
(x − 1)2 + (y + 1)2 = 1. It is hyperbolic if (x − 1)2 + (y + 1)2 > 1 and elliptic
if (x − 1)2 + (y + 1)2 < 1.

11.12 Using the chain rule we find

1 1
ut (x, t) = (cf 0 (x + ct) − cf 0 (x − ct)) + [g(x + ct)(c) − g(x − ct)(−c))
2 2c
c 0 0 1
= (f (x + ct) − f (x − ct)) + (g(x + ct) + g(x − ct))
2 2
c2 00 c
utt = (f (x + ct) + f 00 (x − ct)) + (g 0 (x + ct) − g 0 (c − xt))
2 2
1 0 1
ux (x, t) = (f (x + ct) + f 0 (x − ct)) + [g(x + ct) − g(x − ct)]
2 2c
1 00 1
uxx (x, t) = (f (x + ct) + f 00 (x − ct)) + [g 0 (x + ct) − g 0 (x − ct)]
2 2c

By substitutition we see that c2 uxx = utt . Moreover,


1 x
Z
1
u(x, 0) = (f (x) + f (x)) + g(s)ds = f (x)
2 2c x
and
ut (x, 0) = g(x).
11.13 (a) 1 + 4x2 y > 0, (b) 1 + 4x2 y = 0, (c) 1 + 4x2 y < 0.

11.14 u(x, y) = f (y − 3x) + g(x + y).


10x2 +y 2 −7xy+6
11.15 u(x, y) = f (y − 3x) + g(x + y) = 6
.

Section 12

12.1 Let z(x, t) = αv(x, t) + βw(x, t). Then we have

c2 zxx =c2 αvxx + c2 βwxx


=αvtt + βvtt
=ztt .

12.2 Indeed we have c2 uxx (x, t) = 0 = utt (x, t).

12.3 u(x, t) = 0.

12.4 $u(x, t) = \frac{1}{2}(\cos(x - 3t) + \cos(x + 3t)).$

12.5 $u(x, t) = \frac{1}{2}\left[\frac{1}{1+(x+t)^2} + \frac{1}{1+(x-t)^2}\right].$

12.6 $u(x, t) = 1 + \frac{1}{8\pi}[\sin(2\pi x + 4\pi t) - \sin(2\pi x - 4\pi t)].$

12.7 

 1 if x − 5t < 0 and x + 5t < 0
 1
2
if x − 5t < 0 and x + 5t > 0
u(x, t) = 1
 2 if x − 5t > 0 and x + 5t < 0

0 if x − 5t > 0 and x + 5t > 0

2 2
12.8 u(x, t) = 12 [e−(x+ct) + e−(x−ct) ] + 2t + 1
4c
cos (2x) sin (2ct).

12.9 Just plug the translated/differentiated/dialated solution into the wave


equation and check that it is a solution.

12.10 v(r) = A cos (nr) + B sin (nr).

12.11 u(x, t) = 12 [ex−ct + ex+ct + 1c (cos (x − ct) − cos (x + ct))].

12.12 (a) We have


Z L Z L
dE
(t) = ut utt dx + c2 ux uxt dx
dt 0 0
Z L Z L
2 2 2
= ut utt dx + c ut (L, t)ux (L, t) − c ut (0, t)ux (0, t) − c ut uxx dx
0 0
Z L
2 2
=c ut (L, t)ux (L, t) − c ut (0, t)ux (0, t) + ut (utt − c2 uxx )dx
0
2
=c (ut (L, t)ux (L, t) − ut (0, t)ux (0, t))

since utt − c2 uxx = 0.


(b) Since the ends are fixed, we have ut (0, t) = ut (L, t) = 0. From (a) we
have
dE
(t) = c2 (ut (L, t)ux (L, t) − ut (0, t)ux (0, t)) = 0.
dt
(c) Assuming free ends boundary conditions, that is ux (0, t) = ux (L, t) = 0,
we find dE
dt
(t) = 0.

12.13 Using the previous exercise, we find


Z L
dE
(t) = −d (ut )2 dx.
dt 0

The right-hand side is nonpositive, so the energy either decreases or is con-


stant. The latter case can occur only if ut (x, t) is identically zero, which
means that the string is at rest.

12.14 (a) By the chain rule we have ut (x, t) = −cR0 (x − ct) and utt (x, t) =
c2 R00 (x − ct). Likewise, ux (x, t) = R0 (x − ct) and uxx = R00 (x − ct). Thus,
utt = c2 uxx .

(b) We have
L L L
c2 0 c2
Z Z Z
1 2
(ut ) dx = [R (x − ct)]2 dx = (ux )2 dx.
2 0 0 2 0 2

12.15 u(x, t) = x2 + 4t2 + 14 sin 2x sin 4t.

Section 13

13.1 Let z(x, t) = αu(x, t) + βv(x, t). Then we have

kzxx =kαuxx + kβvxx


=αut + βvt
=zt .

13.2 Indeed we have kuxx (x, t) = 0 = ut (x, t).


13.3 $u(x, t) = T_0 + \frac{T_L - T_0}{L}x.$

13.4 Let u be the solution to (13.1) that satisfies u(0, t) = u(L, t) = 0. Let
w(x, t) be the time independent solution to (13.1) that satisfies w(0, t) = T0
and w(L, t) = TL . That is, w(x, t) = T0 + TLL−T0 x. From Exercise 13.1,
the function u(x, t) = u(x, t) + w(x, t) is a solution to (13.1) that satis-
fies u(0, t) = T0 and u(L, t) = TL .

13.5 u(x, t) = 0.

13.6 Substituting u(x, t) = X(x)T (t) into (13.1) we obtain

X 00 T0
k = .
X T
Since X only depends on x and T only depends on t, we must have that
there is a constant λ such that
00 T0
k XX = λ and T
= λ.

This gives the two ordinary differential equations

X 00 − λk X = 0 and T 0 − λT = 0.

13.7 (a) Letting α = λk > 0 we obtain



the ODE

X 00 − αX = 0 whose general
solution is given by X(x) = Aex α + Be−x α for some constants A and B.
(b) The condition u(0, t) = 0 implies that X(0) = 0 which √
in turn √implies
A + B = 0. Likewise,
√ √
the condition u(L, t) = 0 implies Ae L α
+ Be−L α = 0.
Hence, A(eL α − e−L α ) = 0.
(c) If A = 0 then B = 0 and u(x, t) is the trivial solution which contradicts
the assumption that u is non-trivial.√Hence, we √
must have√
A 6= 0.
L α −L α 2L α
(d) Using (b) and (c) we obtain
√ e = e or e = 1. This equa-
tion is impossible
√ since 2L α >
√ 0. Hence, we must have λ < 0 so that
X(x) = A cos (x −α) + B sin (x −α).
q
13.8 (a)Now, write β = − λk . Then we obtain the equation X 00 + β 2 X = 0
whose general solution is given by

X(x) = c1 cos βx + c2 sin βx.

(b) Using X(0) = 0 we obtain c1 = 0. Since c2 6= 0 we must have sin βL = 0


2 2
which implies βL = n]pi where n is an integer. Thus, λ = − knL2π , where n
is an integer.
ki2 π 2
13.9 For each integer i ≥ 0 we have ui (x, t) = ci e− L2 t sin iπ

L
x is a solution
to (13.1). By superposition, u(x, t) is also a solution to (13.1). Moreover,
u(0, t) = u(L, t) = 0 since ui (0, t) = ui (L, t) = 0 for i = 1, · · · , n.

13.10 (i) u(0, t) = 0 and u(a, t) = 100 for t > 0.


(ii) ux (0, t) = ux (a, t) = 0 for t > 0.

13.11 Solving this problem we find u(x, t) = e−t sin x. We have


Z π Z π
−2t −2t
E(t) = 2 2
[e sin x + e cos x]dx = e−2t dx = πe−2t .
0 0

Thus, E 0 (t) = −2πe−2t < 0 for all t > 0.


RL
13.12 E(t) = 0
f (x)dx + (1 + 4L)t.

13.13 v(x) = x + 2.

13.14 (a) v(x) = TL x.


(b) v(x) = T.
(c) v(x) = αx + T.
RL
13.15 (a) E(t) = 0 u(x, t)dx.
(b) We integrate the equation in x from 0 to L :
Z L Z L
ut (x, t)dx = kuxx dx = kux (x, t)|L0 = 0,
0 0

since ux (0, t) = ux (L, t) = 0. The left-hand side can also be written as


Z L
d
u(x, t)dx = E 0 (t).
dt 0

Thus, we have shown that E 0 (t) = 0 so that E(t) is constant.

13.16 (a) The total thermal energy is


Z L
E(t) = u(x, t)dx.
0

We have
L L
L2
Z Z
dE
= ut (x, t)dx = ux |L0 + xdx = (7 − β) + .
dt 0 0 2

Hence,
L
L2
Z  
E(t) = f (x)dx + (7 − β) + t.
0 2
(b) The steady solution (equilibrium) is possible if the right-hand side van-
ishes:
L2
(7 − β) + =0
2
2
Solving this equation for β we find β = 7 + L2 .
(c) By integrating the equation uxx + x = 0 we find the steady solution

x3
u(x) = − + C1 x + C2
6

From the condition ux (0) = β we find C1 = β. The steady solution should


also have the same value of the total energy as the initial condition. This
means Z L 3  Z L
x
− + βx + C2 dx = f (x)dx = E(0).
0 6 0
Performing the integration and then solving for C2 we find

1 L L3
Z
L
C2 = f (x)dx + −β .
L 0 24 2
Therefore, the steady-state solution is

1 L L3 x3
Z
L
u(x) = f (x)dx + − β + βx − .
L 0 24 2 6

Section 14

14.1 (a) For all 0 ≤ x < 1 we have limn→∞ fn (x) = limn→∞ xn = 0. Also,
limn→∞ fn (1) = 1. Hence, the sequence {fn }∞n=1 converges pointwise to f.
(b) Suppose the contrary. Let  = 21 . Then there exists a positive integer N
such that for all n ≥ N we have
1
|fn (x) − f (x)| <
2
for all x ∈ [0, 1]. In particular, we have
1
|fN (x) − f (x)| <
2
1
for all x ∈ [0, 1]. Choose (0.5) N < x < 1. Then |fN (x)−f (x)| = xN > 0.5 = 
which is a contradiction. Hence, the given sequence does not converge uni-
formly.

14.2 For every real number x, we have


nx + x2 x x2
lim fn (x) = lim = lim + lim =0
n→∞ n→∞ n2 n→∞ n n→∞ n2

Thus, {fn }∞
n=1 converges pointwise to the zero function on R.

14.3 For every real number x, we have


1 1
−√ ≤ fn (x) ≤ √ .
n+1 n+1
Moreover,
1
lim √ = 0.
n→∞ n+1
Applying the squeeze rule for sequences, we obtain

lim fn (x) = 0
n→∞

for all x in R. Thus, {fn }∞


n=1 converges pointwise to the zero function on R.

14.4 First of all, observe that fn (0) = 0 for every n in N. So the sequence
{fn (0)}∞
n=1 is constant and converges to zero. Now suppose 0 < x < 1 then
n2 xn = n2 en ln x . But ln x < 0 when 0 < x < 1, it follows that

limn→∞ fn (x) = 0 for 0 < x < 1

Finally, fn (1) = n2 for all n. So,

lim fn (1) = ∞.
n→∞

Therefore, {fn }∞
n=1 is not pointwise convergent on [0, 1].

14.5 For − π2 ≤ x < 0 and 0 < x ≤ π


2
we have

lim (cos x)n = 0.


n→∞

For x = 0 we have fn (0) = 1 for all n in N. Therefore, {fn }∞


n=1 converges
pointwise to

0 if − π2 ≤ x < 0 and 0 < x ≤ π2



f (x) =
1 if x = 0.

14.6 (a) Let  > 0 be given. Let N be a positive integer such that N > 1 .
Then for n ≥ N
n |x|n


x − x 1 1
− x = < ≤ < .
n n n N

Thus, the given sequence converges uniformly (and pointwise) to the function
f (x) = x.
(b) Since limn→∞ fn0 (x) = 1 for all x ∈ [0, 1), the sequence {fn0 }∞
n=1 converges
pointwise to f 0 (x) = 1. However, the convergence is not uniform. To see
this, let  = 12 and suppose that the convergence is uniform. Then there is a
positive integer N such that for n ≥ N we have

1
|1 − xn−1 − 1| = |x|n−1 < .
2

In particular, if we let n = N + 1 we must have xN < 21 for all x ∈ [0, 1).


1
But x = 12 N ∈ [0, 1) and xN = 12 which contradicts xN < 12 . Hence, the
convergence is not uniform.

14.7 (a) The pointwise limit is



 0 if 0 ≤ x < 1
1
f (x) = if x = 1
 2
1 if 1 < x ≤ 2

(b) The convergence cannot be uniform because if it were f would have to


be continuous.

14.8 (a) Let  > 0 be given. Note that

2 cos x − sin2 x

1 ≤ 3 .
|fn (x) − | =

2
2 2(2n + sin x) 4n
3
Since limn→∞ 4n = 0 we can find a positive integer N such that if n ≥ N
3
then 4n < . Thus, for n ≥ N and all x ∈ R we have

1 3
|fn (x) − | ≤ < .
2 4n
1
This shows that fn → 2
uniformly on R and also on [2, 7].
(b) We have
Z 7 Z 7 Z 7
1 5
lim fn xdx = lim fn xdx = dx = .
n→∞ 2 2 n→∞ 2 2 2

14.9 We have proved earlier that this sequence converges pointwise to the
discontinuous function

0 if − π2 ≤ x < 0 and 0 < x ≤ π2



f (x) =
1 if x = 0

Therefore, uniform convergence cannot occur for this given sequence.

14.10 (a) Using the squeeze rule we find

lim sup{|fn (x)| : 2 ≤ x ≤ 5} = 0.


n→∞

Thus, {fn }∞
n=1 converges uniformly to the zero function.
(b) We have
Z 5 Z 5
lim fn (x)dx = 0dx = 0.
n→∞ 2 2

Section 15.

15.1 (a) We have (f g)(x + T ) = f (x + T )g(x + T ) = f (x)g(x) = (f g)(x).


(b) We have (c1 f +c2 g)(x+T ) = c1 f (x+T )+c2 g(x+T ) = c1 f (x)+c2 g(x) =
(c1 f + c2 g)(x).

15.2 (a) For n 6= m we have

L
1 L
Z     
(m − n)π
Z  mπ   nπ  (m + n)π
sin x sin x dx = − cos x − cos x dx
−L L L 2 −L L L
  
1 L (m + n)π
=− sin x
2 (m + n)π L
 L
L (m − n)π
− sin x
(m − n)π L −L
=0

where we used the trigonometric identiy

1
sin a sin b = [− cos (a + b) + cos (a − b)].
2

(b) For n 6= m we have


Z L
1 L
Z     
 mπ   nπ  (m + n)π (m − n)π
cos x sin x dx = sin x − sin x dx
−L L L 2 −L L L
  
1 L (m + n)π
= − cos x
2 (m + n)π L
 L
L (m − n)π
+ cos x
(m − n)π L −L
=0

where we used the trigonometric identiy


1
cos a sin b = [sin (a + b) − sin (a − b)].
2
15.3 (a) L (b) L (c) 0.

15.4
1 π
Z
a0 = f (x)dx = 0
π −π
1 π
Z
an = f (x) cos nxdx
π −π
Z 0 Z π
=− cos nxdx + cos nxdx = 0
−π 0
1 π
Z
bn = f (x) sin nxdx
π −π
Z 0 Z π
=− sin nxdx + sin nxdx
−π 0
2
= [1 − (−1)n ]
n
15.5 $f(x) = -\frac{1}{6} + \sum_{n=1}^{\infty}\frac{4}{(n\pi)^2}(-1)^n\cos(n\pi x).$

15.6 $f(x) = \sum_{n=1}^{\infty}\frac{2}{n\pi}\left[\cos\left(\frac{n\pi}{2}\right) - (-1)^n\right]\sin\left(\frac{nx}{2}\right).$

15.7 $f(x) = \sum_{n=1}^{\infty}\frac{4}{(n\pi)^2}\left[1 - (-1)^n\right]\cos\left(\frac{n\pi}{2}x\right).$

15.8 Since the sided limits at the point of discontinuity x = 0 do not exist,
the function is not piecewise continuous in [−1, 1].

15.9 Define the function


Z L+a
g(a) = f (x)dx.
−L+a

Using the fundamental theorem of calculus, we have


Z L+a
dg d
= f (x)dx
da da −L+a
=f (L + a) − f (−L + a) = f (−L + a + 2L) − f (−L + a)
=f (−L + a) − f (−L + a) = 0

Hence, g is a constant function, and in particular we can write g(a) = g(0)


for all a ∈ R which gives the desired result.
P∞  1
15.10 (i) f (x) = 10 2nπ 2nπx 1 2nπ 2nπx
    
3
+ n=1 − nπ
sin 3
cos 3
− nπ
− cos 3
+ 1 sin 3
.
(ii) Using the theorem discussed in class, because this function and its deriva-
tive are piecewise continuous, the Fourier series will converge to the function
at each point of continuity. At any point of discontinuity, the Fourier series
will converge to the average of the left and right limits.
(iii)

15.11 (a) a0 = 2, an = bn = 0 for n ∈ N.


(b) a0 = 4, an = 0, b1 = 1, and bn = 0.
1
(c) a0 = 1, an = 0, bn = πn [(−1)n − 1], n ∈ N.
2L
(d) a0 = an = 0, bn = πn (−1)n+1 , n ∈ N.

15.12 −1

15.13 an = 0 for all n ∈ N.


f (0− )+f (0+ ) −π+π
15.14 2
= 2
= 0.
P∞ sin (2n−1)x
15.15 (a) f (x) = 32 + π2 n=1 2n−1
.
(−1)n+1
(b) ∞ π
P
n=1 2n−1 = 4 .

Section 16

16.1 f (x) = 0.

16.2

16.3

16.4

π
P∞ 2
16.5 f (x) = 4
+ n=1 πn2 [2 cos (nπ/2) − 1 − (−1)n ] cos nx.

π
P∞ 2 n
16.6 f (x) = 2
+ n=1 n2 π [(−1) − 1] cos nx.
P∞ 2
16.7 f (x) = n=1 nπ [1 − (−1)n ] sin nx.

 
2
P∞ 1+(−1)n
16.8 f (x) = π n=1 n n2 −1
sin nx.
P∞ 4[(−1)n e2 −1]
16.9 f (x) = 21 (e2 − 1) + n=1 4+n2 π 2
cos (nπx).



16.10 (a) If f (x) = sin L
x then bn = 0 if n 6= 2 and b2 = 1.
(b) If f (x) = 1 then
2 L
Z  nπ  2
bn = sin x dx = [1 − (−1)n ].
L 0 L nπ
(c) If f (x) = cos Lπ x then


2 L
Z π  π 
b1 = cos x sin x dx = 0
L 0 L L
and for n 6= 1 we have
2 L
Z π   nπ 
bn = cos x sin x dx
L 0 L L
1 2 L h  πx 
Z  πx  i
= sin (1 + n) − sin (1 − n) dx
2L 0 L L
 L
1 L  πx  L  πx 
= − cos (1 + n) + cos (1 − n)
L (1 + n)π L (1 − n)π L 0
2n
= 2 [1 + (−1)n ].
(n − 1)π

16.11 (a) a0 = 10 and a1 = 1, and an = 0 for n 6= 1.


2L n
(b) a0 = L and an = (πn) 2 [(−1) − 1], n ∈ N.
2 πn

(c) a0 = 1 and an = πn sin 2 , n ∈ N.

16.12 By definition of Fourier sine coefficients,


2 L
Z  nπ 
bn = f (x) sin x dx
L 0 L
The symmetry around x = L2 can be written as
   
L L
f +x =f −x
2 2

for all x ∈ R. To use this symmetry it is convenient to make the change of


variable x − L2 = u in the above integral to obtain

Z L     
2 L nπ L
bn = f + u sin + u du.
−L
2
2 L 2

Since f L2 + u is even in u and for n even sin nπ L nπu


   
L 2
+ u = sin L
is
odd in u, the integrand of the above integral is odd in u for n even. Since
the intergral is from − L2 to L2 we must have b2n = 0 for n = 0, 1, 2, · · ·

16.13 By definition of Fourier cosine coefficients,

Z L
2  nπ 
an = f (x) cos x dx
L 0 L

L
The anti-symmetry around x = 2
can be written as

   
L L
f −y = −f +y
2 2

for all y ∈ R. To use this symmetry it is convenient to make the change of


variable x = L2 + y in the above integral to obtain

Z L     
2 L nπ L
an = f + y cos + y dy.
−L
2
2 L 2

Since f L2 + y is odd in y and for n even cos nπ L


+ y = ± cos nπy
   
L 2 L
is
even in y, the integrand of the above integral is odd in y for n even. Since
the intergral is from − L2 to L2 we must have a2n = 0 for all n = 0, 1, 2, · · · .

πx 2 2
P∞ 1+(−1)n nπx
 
16.14 sin L
= π
− π n=2 n2 −1 cos L
.

16.15 (a)

R2
(b) a0 = 22 0 f (x)dx = 3.
(c) We have

Z 2
2  nπx 
an = f (x) cos dx
2 0 2
Z 1  nπx  Z 2
 nπx 
= cos dx + 2 cos dx
0 2 1 2
2  nπx  1 2  nπx  2
= sin +2 sin
nπ 2 0 nπ 2 1
2  nπ 
=− sin .
nπ 2

nπx

(d) bn = 0 since f (x) sin 2
is odd in−2 ≤ x ≤ 2.
(e)
∞   nπ 
3 X 2  nπx 
f (x) = + − sin cos .
2 n=1 nπ 2 2

Section 17

17.1 We look for a solution of the form u(x, y) = X(x)Y (y). Substituting in
the given equation, we obtain

X 00 Y + XY 00 + λXY = 0.

Assuming X(x)Y (y) is nonzero, dividing for X(x)Y (y) and subtract both
00 (x)
sides for XX(x) , we find:

X 00 (x) Y 00 (y)
− = + λ.
X(x) Y (y)
The left hand side is a function of x while the right hand side is a function
of y. This says that they must equal to a constant. That is,
X 00 (x) Y 00 (y)
− = + λ = δ.
X(x) Y (y)
where δ is a constant. This results in the following two ODEs
X 00 + δX = 0 and Y 00 + (λ − δ)Y = 0.
• If δ > 0 and λ − δ > 0 then
√ √
X(x) =A cos δx + B sin δx
p p
Y (y) =C cos (λ − δ)y + D sin (λ − δ)y

• If δ > 0 and λ − δ < 0 then


√ √
X(x) =A cos δx + B sin δx
√ √
Y (y) =Ce− −(λ−δ)y + De −(λ−δ)y

• If δ = λ > 0 then
√ √
X(x) =A cos δx + B sin δx
Y (y) =Cy + D

• If δ = λ < 0 then
√ √
X(x) =Ae− −δx + Be −δx

Y (y) =Cy + D

• If δ < 0 and λ − δ > 0 then


√ √
X(x) =Ae− −δx
+ Be −δx
p p
Y (y) =C cos (λ − δ)y + D sin (λ − δ)y

• If δ < 0 and λ − δ < 0 then


√ √
X(x) =Ae− −δx
+ Be −δx
√ √
Y (y) =Ce− −(λ−δ)y
+ De −(λ−δ)y
.

17.2 Let’s assume that the solution can be written in the form u(x, t) =
X(x)T (t). Substituting into the heat equation we obtain

X 00 T0
= .
X kT
Since X only depends on x and T only depends on t, we must have that
there is a constant λ such that
X 00 T0
X
= λ and kT
= λ.

This gives the two ordinary differential equations

X 00 − λX = 0 and T 0 − kλT = 0.

Next, we consider the three cases of the sign of λ.


Case 1: λ = 0
In this case, X 00 = 0 and T 0 = 0. Solving these equations we find X(x) =
ax + b and T (t) = c.

Case 2: λ > 0 √ √
In this case, X(x) = Ae λx + Be− λx and T (t) = Cekλt .

Case 3: λ < 0 √ √
In this case, X(x) = A cos −λx + B sin −λx and and T (t) = Cekλt .

17.3 r2 R00 (r) + rR0 (r) − λR(r) = 0 and Θ00 (θ) + λΘ(θ) = 0.

17.4 X 00 = (2 + λ)X, T 00 = λT, X(0) = 0, X 0 (1) = 0.

17.5 X 00 − λX = 0, T 0 = kλT, X(0) = 0 = X 0 (L).

17.6 u(x, t) = Ceλ(x−t) .

17.7 5X 000 − 7X 00 − λX = 0 and 3Y 00 − λY 0 = 0.



y
17.8 u(x, y) = Ceλx− λ .

17.9 u(x, y) = Ceλx y λ .

17.10 We look for a solution of the form u(x, y) = X(x)T (t). Substitut-
ing in the wave equation, we obtain

X 00 (x)T (t) − X(x)T 00 (t) = 0.

Assuming X(x)T (t) is nonzero, dividing for X(x)T (t) we find:

X 00 (x) T 00 (t)
= .
X(x) T (t)

The left hand side is a function of x while the right hand side is a function
of t. This says that they must equal to a constant. That is,

X 00 (x) T 00 (t)
= =λ
X(x) T (t)

where λ is a constant. This results in the following two ODEs

X 00 − λX = 0 and T 00 − λT = 0.

The solutions of these equations depend on the sign of λ.


• If λ > 0 then the solutions are given
√ √
X(x) =Ae λx
+ Be− λx
√ √
T (t) =Ce λt
+ De− λt

where A, B, C, and D are constants. In this case,


√ √ √ √
u(x, t) = k1 e λ(x+t)
+ k2 e λ(x−t)
+ k3 e− λ(x+t)
+ k4 e− λ(x−t)
.

• If λ = 0 then

X(x) =Ax + B
T (t) =Ct + D

where A, B, and C are arbitrary constants. In this case,

u(x, t) = k1 xt + k2 x + k3 t + k4 .

• If λ < 0 then
√ √
X(x) =A cos −λx + B sin −λx
√ √
T (t) =A cos −λt + B sin −λt

where A, B, C, and D are arbitrary constants. In this case,


√ √ √ √
u(x, t) =k1 cos −λx cos −λt + k2 cos −λx sin −λt
√ √ √ √
+k3 sin −λx cos −λt + k4 sin −λx sin −λt.

17.11 (a) u(r, t) = R(r)T (t), T 0 (t) = kλT, r(rR0 )0 = λR.


(b) u(x, t) = X(x)T (t), T 0 = λT, kX 00 − (α + λ)X = 0.
(c) u(x, t) = X(x)T (t), T 0 = λT, kX 00 − aX 0 = λX.
(d) u(x, t) = X(x)Y (y), X 00 = λX, Y 00 = −λY.
(e) u(x, t) = X(x)T (t), T 0 = kλT, X 0000 = λX.

17.12 u(x, y) = Ceλ(x+y) .

17.13 X 00 = λX, Y 0 − Y 00 + Y = λY.

Section 18
18.1 $u(x, t) = \sin\left(\frac{\pi}{2}x\right)e^{-\frac{\pi^2k}{4}t} + 3\sin\left(\frac{5\pi}{2}x\right)e^{-\frac{25\pi^2k}{4}t}.$

18.2 $u(x, t) = \frac{8d}{\pi^3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^3}\sin\left(\frac{(2n-1)\pi}{L}x\right)e^{-\frac{k(2n-1)^2\pi^2}{L^2}t}.$

18.3 $u(x, t) = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{4n^2-1}\cos\left(\frac{2n\pi}{L}x\right)e^{-\frac{4kn^2\pi^2}{L^2}t}.$
P∞ nπ
 − kn2 π2 t
18.4 u(x, t) = n=1 Cn sin L
x e L2 where
 4
 − nπ n = 2, 6, 10, · · ·
Cn = 0 n = 4, 8, 12, · · ·
 6

n is odd.
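As a spot check of 18.1, the two modes decay at exactly the rates required by the heat equation u_t = k u_xx assumed throughout this section; a minimal sympy sketch:

    import sympy as sp

    x, t, k = sp.symbols('x t k', positive=True)
    u = sp.sin(sp.pi*x/2)*sp.exp(-sp.pi**2*k*t/4) + 3*sp.sin(5*sp.pi*x/2)*sp.exp(-25*sp.pi**2*k*t/4)
    print(sp.simplify(sp.diff(u, t) - k*sp.diff(u, x, 2)))   # 0: the heat equation is satisfied
    print(u.subs(t, 0))                                      # the initial profile sin(pi*x/2) + 3*sin(5*pi*x/2)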
18.5 u(x, t) = 6 sin(9πx/L) e^{−81kπ²t/L²}.

18.6 u(x, t) = 1/2 + Σ_{n=1}^∞ Cn cos(nπx/L) e^{−kn²π²t/L²} where
Cn = −2/(nπ) for n = 1, 5, 9, · · ·
Cn = 2/(nπ) for n = 3, 7, 11, · · ·
Cn = 0 for n even.

18.7 u(x, t) = 6 + 4 cos(3πx/L) e^{−9kπ²t/L²}.

18.8 u(x, t) = −3 cos(8πx/L) e^{−64kπ²t/L²}.

18.9
u(x, t) = Σ_{n=0}^∞ an cos(nπx/L) e^{−(1 + n²π²/L²)t}.

As t → ∞, e^{−(1 + n²π²/L²)t} → 0 for each n ≥ 0. Hence, u(x, t) → 0.

18.10 (b) We have


E'(t) = 2 ∫_0^1 w(x, t) wt(x, t) dx
      = 2 ∫_0^1 w(x, t)[wxx(x, t) − w(x, t)] dx
      = 2 w(x, t) wx(x, t)|_0^1 − 2[∫_0^1 wx²(x, t) dx + ∫_0^1 w²(x, t) dx]
      = −2[∫_0^1 wx²(x, t) dx + ∫_0^1 w²(x, t) dx] ≤ 0.

Hence, E is decreasing, and 0 ≤ E(t) ≤ E(0) for all t > 0.


(c) Since w(x, 0) = 0, we must have E(0) = 0. Hence, E(t) = 0 for all t ≥ 0.
This implies that w(x, t) = 0 for all t > 0 and all 0 < x < 1. Therefore
u1 (x, t) = u2 (x, t). This means that the given problem has a unique solution.

18.11 (a) u(0, t) = 0 and ux (1, t) = 0.



(b) Let’s assume that the solution can be written in the form u(x, t) = X(x)T(t). Substituting into the heat equation we obtain

X''/X = T'/T.

Since X only depends on x and T only depends on t, there must be a constant λ such that

X''/X = λ and T'/T = λ.

This gives the two ordinary differential equations

X'' − λX = 0 and T' − λT = 0.

As far as the boundary conditions, we have

u(0, t) = 0 = X(0)T(t) =⇒ X(0) = 0

and

ux(1, t) = 0 = X'(1)T(t) =⇒ X'(1) = 0.

Note that T is not the zero function, for otherwise u ≡ 0, contradicting our assumption that u is a nontrivial solution.
(c) We have X' = √(−λ) cos(√(−λ) x) and X'' = λ sin(√(−λ) x). Thus, X'' − λX = 0. Moreover, X(0) = 0. Now, X'(1) = 0 implies cos(√(−λ)) = 0, that is, √(−λ) = (n − 1/2)π, n ∈ N. Hence, λ = −(n − 1/2)²π².


18.12 (a) Let’s assume that the solution can be written in the form u(x, t) = X(x)T(t). Substituting into the heat equation we obtain

X''/X = T'/(kT).

Since the LHS only depends on x and the RHS only depends on t, there must be a constant λ such that

X''/X = λ and T'/(kT) = λ.

This gives the two ordinary differential equations

X'' − λX = 0 and T' − kλT = 0.

As far as the boundary conditions, we have

u(0, t) = 0 = X(0)T(t) =⇒ X(0) = 0

and

u(L, t) = 0 = X(L)T(t) =⇒ X(L) = 0.

Note that T is not the zero function, for otherwise u ≡ 0, contradicting our assumption that u is a nontrivial solution.
Next, we consider the three cases for the sign of λ.

Case 1: λ = 0
In this case, X'' = 0. Solving this equation we find X(x) = ax + b. Since X(0) = 0 we find b = 0. Since X(L) = 0 we find a = 0. Hence, X ≡ 0 and u(x, t) ≡ 0. That is, u is the trivial solution.

Case 2: λ > 0
In this case, X(x) = Ae^{√λ x} + Be^{−√λ x}. Again, the conditions X(0) = X(L) = 0 imply A = B = 0 and hence the solution is the trivial solution.

Case 3: λ < 0
In this case, X(x) = A cos(√(−λ) x) + B sin(√(−λ) x). The condition X(0) = 0 implies A = 0. The condition X(L) = 0 implies B sin(√(−λ) L) = 0. We must have B ≠ 0, otherwise X(x) = 0 and this leads to the trivial solution. Since B ≠ 0, we obtain sin(√(−λ) L) = 0 or √(−λ) L = nπ where n ∈ N. Solving for λ we find λ = −n²π²/L². Thus, we obtain infinitely many solutions given by

Xn(x) = An sin(nπx/L), n ∈ N.

Now, solving the equation

T' − λkT = 0

by the method of separation of variables we obtain

Tn(t) = Bn e^{−n²π²kt/L²}, n ∈ N.

Hence, the functions

un(x, t) = Cn sin(nπx/L) e^{−n²π²kt/L²}, n ∈ N

satisfy ut = kuxx and the boundary conditions u(0, t) = u(L, t) = 0.

Now, in order for these solutions to satisfy the initial value condition u(x, 0) = 6 sin(9πx/L), we invoke the superposition principle of linear PDEs to write

u(x, t) = Σ_{n=1}^∞ Cn sin(nπx/L) e^{−n²π²kt/L²}.   (13.4)

To determine the unknown constants Cn we use the initial condition u(x, 0) = 6 sin(9πx/L) in (13.4) to obtain

6 sin(9πx/L) = Σ_{n=1}^∞ Cn sin(nπx/L).

By equating coefficients we find C9 = 6 and Cn = 0 if n ≠ 9. Hence, the solution to the problem is given by

u(x, t) = 6 sin(9πx/L) e^{−81π²kt/L²}.

(b) Similar to (a), we find

u(x, t) = 3 sin(πx/L) e^{−π²kt/L²} − sin(3πx/L) e^{−9π²kt/L²}.

18.13 (a) u(x, t) = cos(πx/L) e^{−π²kt/L²} + 4 cos(5πx/L) e^{−25π²kt/L²}.
(b) u(x, t) = 5.

18.14 u(x, t) = 6 sin x e^{−8t}.
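The solution obtained in 18.12(a) can be verified symbolically; a minimal sympy sketch:

    import sympy as sp

    x, t, k, L = sp.symbols('x t k L', positive=True)
    u = 6*sp.sin(9*sp.pi*x/L)*sp.exp(-81*sp.pi**2*k*t/L**2)
    print(sp.simplify(sp.diff(u, t) - k*sp.diff(u, x, 2)))   # 0: u_t = k u_xx
    print(u.subs(x, 0), u.subs(x, L))                        # 0 0: the boundary conditions hold
    print(u.subs(t, 0))                                      # 6*sin(9*pi*x/L): the initial condition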

Section 19
19.1 u(x, y) = Σ_{n=1}^∞ Bn sin(nπy/b) sinh(nπx/b) where

Bn = [ (2/b) ∫_0^b f2(y) sin(nπy/b) dy ] [ sinh(nπa/b) ]^{−1}.

19.2 u(x, y) = Σ_{n=1}^∞ Bn sin(nπx/a) sinh(nπ(y − b)/a) where

Bn = [ (2/a) ∫_0^a g1(x) sin(nπx/a) dx ] [ sinh(−nπb/a) ]^{−1}.

19.3 u(x, y) = 2xy + (3/sinh π) sin(πx) sinh(πy).

19.4 If u(x, y) = x² − y² then uxx = 2 and uyy = −2 so that ∆u = 0.
If u(x, y) = 2xy then uxx = uyy = 0 so that ∆u = 0.
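The same computation can be delegated to sympy (an illustrative sketch):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    for u in (x**2 - y**2, 2*x*y):
        print(sp.diff(u, x, 2) + sp.diff(u, y, 2))   # prints 0 twice: both functions are harmonic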

19.5
u(x, y) = Σ_{n=1}^∞ [An cosh(nπy/L) + Bn sinh(nπy/L)] sin(nπx/L),
where

An = [ (2/L) ∫_0^L (f1(x) + f2(x)) sin(nπx/L) dx ] [ cosh(nπH/(2L)) ]^{−1}

and

Bn = [ (2/L) ∫_0^L (f2(x) − f1(x)) sin(nπx/L) dx ] [ sinh(nπH/(2L)) ]^{−1}.

19.6 (a) Differentiating term by term with respect to x we find

ux + ivx = Σ_{n=0}^∞ n an (x + iy)^{n−1}.

Likewise, differentiating term by term with respect to y we find

uy + ivy = Σ_{n=0}^∞ n an i(x + iy)^{n−1}.

Multiplying this equation by −i we find

−iuy + vy = Σ_{n=0}^∞ n an (x + iy)^{n−1}.

Hence, ux + ivx = vy − iuy, which implies ux = vy and vx = −uy.

(b) We have uxx = (vy)x = (vx)y = −uyy so that ∆u = 0. A similar argument gives ∆v = 0.

19.7 Polar and Cartesian coordinates are related by the expressions x = r cos θ and y = r sin θ, where r = (x² + y²)^{1/2} and tan θ = y/x. Using the chain rule we obtain

ux = ur rx + uθ θx = cos θ ur − (sin θ/r) uθ
uxx = uxr rx + uxθ θx
    = cos θ [cos θ urr + (sin θ/r²) uθ − (sin θ/r) urθ]
      − (sin θ/r)[−sin θ ur + cos θ urθ − (cos θ/r) uθ − (sin θ/r) uθθ]
uy = ur ry + uθ θy = sin θ ur + (cos θ/r) uθ
uyy = uyr ry + uyθ θy
    = sin θ [sin θ urr − (cos θ/r²) uθ + (cos θ/r) urθ]
      + (cos θ/r)[cos θ ur + sin θ urθ − (sin θ/r) uθ + (cos θ/r) uθθ]

Substituting these equations into (21.1) we obtain the desired equation.
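The resulting polar form urr + (1/r)ur + (1/r²)uθθ can be checked against the Cartesian Laplacian for any smooth test function; a small sympy sketch (the test function below is arbitrary):

    import sympy as sp

    x, y, r, th = sp.symbols('x y r theta', positive=True)
    v = x**3*y - x*y**2                               # an arbitrary smooth test function
    lap_cart = sp.diff(v, x, 2) + sp.diff(v, y, 2)    # Cartesian Laplacian
    w = v.subs({x: r*sp.cos(th), y: r*sp.sin(th)})    # the same function in polar coordinates
    lap_polar = sp.diff(w, r, 2) + sp.diff(w, r)/r + sp.diff(w, th, 2)/r**2
    print(sp.simplify(lap_polar - lap_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)})))   # 0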

19.8 u(x, y) = u1(x, y) + u2(x, y) + u3(x, y) + u4(x, y) where

u1(x, y) = 0
u2(x, y) = Σ_{n=1}^∞ [−2(−1)^n/(nπ sinh(3nπ/2))] sin(nπx/2) sinh(nπy/2)
u3(x, y) = [1/sinh(8π/3)] sinh(4π(x − 2)/3) sin(4πy/3)
u4(x, y) = Σ_{n=1}^∞ [14(1 − (−1)^n)/(nπ sinh(2nπ/3))] sin(nπy/3) sinh(nπx/3).

19.9 u(x, y) = [4/sinh(πL/(2H))] [sinh(πx/(2H)) − sinh(π(x − L)/(2H))] cos(πy/(2H)).

19.10 u(x, y) = A0/2 + Σ_{n=1}^∞ An e^{−√(λn) x} cos(√(λn) y) where

A0 = (2/H) ∫_0^H f(y) dy
An = (2/H) ∫_0^H f(y) cos(nπy/H) dy.

19.11
u(x, y) = [20/((π/L) cosh(πH/L) + sinh(πH/L))] sin(πx/L) − [5/((3π/L) cosh(3πH/L) + sinh(3πH/L))] sin(3πx/L).

19.12 u(x, y) = sin(2πx) e^{−2πy}.

19.13 u(x, y) = y.

19.14 u(x, y) = (1/2)x² − (1/2)y² − ax + by + C where C is an arbitrary constant.

19.15 u(x, y) = (2 cosh 3y sin 3x)/(3 sinh 6) − (5 cosh 10y sin 10x)/(10 sinh 20).

Section 20

20.1 u(r, θ) = 3r5 sin 5θ.


20.2 u(r, θ) = π/4 + Σ_{n=1}^∞ r^n [ (1 − (−1)^n)/(n²π) cos nθ + (sin nθ)/n ].

20.3 u(r, θ) = C0 + r2 cos 2θ.

20.4 Substituting C0, An, and Bn into the right-hand side of u(r, θ) we find

u(r, θ) = (1/2π) ∫_0^{2π} f(φ) dφ + Σ_{n=1}^∞ (r^n/(πa^n)) ∫_0^{2π} f(φ)[cos nφ cos nθ + sin nφ sin nθ] dφ
        = (1/2π) ∫_0^{2π} f(φ) [1 + 2 Σ_{n=1}^∞ (r/a)^n cos n(θ − φ)] dφ.

20.5 (a) We have e^{it} = cos t + i sin t and e^{−it} = cos t − i sin t. The result follows by adding these two equalities and dividing by 2.
(b) This follows from the fact that

cos n(θ − φ) = (1/2)(e^{in(θ−φ)} + e^{−in(θ−φ)}).

(c) We have |q1| = (r/a)√(cos²(θ − φ) + sin²(θ − φ)) = r/a < 1 since 0 < r < a. A similar argument shows that |q2| < 1.

20.6 (a) The first sum is a convergent geometric series with ratio q1 and sum

Σ_{n=1}^∞ (r/a)^n e^{in(θ−φ)} = (r/a)e^{i(θ−φ)}/(1 − q1) = re^{i(θ−φ)}/(a − re^{i(θ−φ)}).

A similar argument applies to the second sum.
(b) We have

1 + 2 Σ_{n=1}^∞ (r/a)^n cos n(θ − φ)
= 1 + re^{i(θ−φ)}/(a − re^{i(θ−φ)}) + re^{−i(θ−φ)}/(a − re^{−i(θ−φ)})
= 1 + r/(ae^{−i(θ−φ)} − r) + r/(ae^{i(θ−φ)} − r)
= 1 + r/(a cos(θ − φ) − r − ai sin(θ − φ)) + r/(a cos(θ − φ) − r + ai sin(θ − φ))
= 1 + r[a cos(θ − φ) − r + ai sin(θ − φ)]/(a² − 2ar cos(θ − φ) + r²) + r[a cos(θ − φ) − r − ai sin(θ − φ)]/(a² − 2ar cos(θ − φ) + r²)
= (a² − r²)/(a² − 2ar cos(θ − φ) + r²).

20.7 We have

u(r, θ) = (1/2π) ∫_0^{2π} f(φ) [1 + 2 Σ_{n=1}^∞ (r/a)^n cos n(θ − φ)] dφ
        = (1/2π) ∫_0^{2π} f(φ) (a² − r²)/(a² − 2ar cos(θ − φ) + r²) dφ
        = (a² − r²)/(2π) ∫_0^{2π} f(φ)/(a² − 2ar cos(θ − φ) + r²) dφ.

20.8 u(r, θ) = 2 Σ_{n=1}^∞ (−1)^{n+1} r^n (sin nθ)/n.
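The Poisson integral formula obtained in 20.7 is easy to spot-check numerically; the sketch below assumes the unit disk with boundary data f(φ) = cos φ, whose harmonic extension is (r/a) cos θ:

    import numpy as np
    from scipy.integrate import quad

    a = 1.0                        # disk radius (assumed)
    f = np.cos                     # assumed boundary data f(phi) = cos(phi)

    def poisson(r, theta):
        # (a^2 - r^2)/(2*pi) times the integral of f(phi)/(a^2 - 2*a*r*cos(theta - phi) + r^2)
        integrand = lambda phi: f(phi) / (a**2 - 2*a*r*np.cos(theta - phi) + r**2)
        return (a**2 - r**2) / (2*np.pi) * quad(integrand, 0.0, 2*np.pi)[0]

    r, theta = 0.4, 1.1
    print(poisson(r, theta), (r/a)*np.cos(theta))   # the two printed values agree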

20.9 (a) Differentiating u(r, t) = R(r)T(t) with respect to r and t we find utt = RT'', ur = R'T and urr = R''T. Substituting these into the given PDE we find

RT'' = c²(R''T + (1/r)R'T).

Dividing both sides by c²RT we find

(1/c²)(T''/T) = R''/R + (1/r)(R'/R).

Since the RHS of the above equation depends on r only, and the LHS depends on t only, they must equal a constant λ.
(b) The given boundary conditions imply

u(a, t) = 0 = R(a)T(t) =⇒ R(a) = 0
u(r, 0) = f(r) = R(r)T(0)
ut(r, 0) = g(r) = R(r)T'(0).

If λ = 0 then R'' + (1/r)R' = 0 and this implies R(r) = C ln r. Using the condition R(a) = 0 we find C = 0, so that R(r) = 0 and hence u ≡ 0. If λ > 0 then T'' − λc²T = 0. This equation has the solution

T(t) = A cos(c√λ t) + B sin(c√λ t).

The condition u(r, 0) = f(r) implies that A = f(r), which is not possible. Hence, λ < 0.

20.10 (a) Follows from the figure and the definitions of trigonometric func-
tions in a right triangle.
(b) The result follows from equation (20.1).

20.11 By the maximum principle we have

min_{(x,y)∈∂Ω} u(x, y) ≤ u(x, y) ≤ max_{(x,y)∈∂Ω} u(x, y) for all (x, y) ∈ Ω.

But min_{(x,y)∈∂Ω} u(x, y) = u(1, 0) = 1 and max_{(x,y)∈∂Ω} u(x, y) = u(−1, 0) = 3. Hence,

1 ≤ u(x, y) ≤ 3

and this implies that u(x, y) > 0 for all (x, y) ∈ Ω.

20.12 (i) u(1, 0) = 4 (ii) u(−1, 0) = −2.

20.13 Using the maximum principle and the hypothesis on g1 and g2, for all (x, y) ∈ Ω ∪ ∂Ω we have

min_{(x,y)∈∂Ω} u1(x, y) = min_{(x,y)∈∂Ω} g1(x, y) ≤ u1(x, y) ≤ max_{(x,y)∈∂Ω} u1(x, y) = max_{(x,y)∈∂Ω} g1(x, y)
< min_{(x,y)∈∂Ω} g2(x, y) = min_{(x,y)∈∂Ω} u2(x, y) ≤ u2(x, y) ≤ max_{(x,y)∈∂Ω} u2(x, y) = max_{(x,y)∈∂Ω} g2(x, y).

20.14 We have

∆(r^n cos(nθ)) = ∂²/∂r²(r^n cos(nθ)) + (1/r) ∂/∂r(r^n cos(nθ)) + (1/r²) ∂²/∂θ²(r^n cos(nθ))
              = n(n − 1)r^{n−2} cos(nθ) + n r^{n−2} cos(nθ) − n² r^{n−2} cos(nθ) = 0.

Likewise, ∆(r^n sin(nθ)) = 0.
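A quick symbolic confirmation of 20.14 using the polar Laplacian (illustrative sympy sketch):

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    n = sp.Symbol('n', integer=True, positive=True)
    u = r**n * sp.cos(n*th)
    lap = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, th, 2)/r**2   # polar Laplacian
    print(sp.simplify(lap))   # 0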



20.15 u(r, θ) = 1/2 − (r²/(2a²)) cos 2θ.

20.16 u(r, θ) = ln 2 + (a³/(4r³)) cos 3θ.

Section 21

21.1 Convergent.

21.2 Divergent.

21.3 Convergent.

21.4 1/(s − 3), s > 3.

21.5 1/s² − 5/s, s > 0.

21.6 f(t) = e^{(t−1)²} does not have a Laplace transform.

21.7 4/s − 4/s² + 2/s³, s > 0.
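The exercise statements are not reproduced here, but an answer like 21.7 is easy to cross-check: 4/s − 4/s² + 2/s³ is the transform of f(t) = 4 − 4t + t² (a hypothetical original function), which sympy confirms:

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)
    f = 4 - 4*t + t**2                                   # hypothetical original function
    print(sp.laplace_transform(f, t, s, noconds=True))   # 2/s**3 - 4/s**2 + 4/s, i.e. the stated answer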

21.8 e^{−s}/s², s > 0.

21.9 −e^{−2s}/s + (1/s²)(e^{−s} − e^{−2s}), s ≠ 0.

21.10 −t^n e^{−st}/s + (n/s) ∫ t^{n−1} e^{−st} dt, s > 0.

21.11 (a) 0 (b) 0.

21.12 5/(s + 7) + 1/s² + 2/(s − 2), s > 2.

21.13 3e^{2t}, t ≥ 0.

21.14 −2t + e^{−t}, t ≥ 0.

21.15 2(e^{−2t} + e^{2t}), t ≥ 0.

21.16 2/(s − 1) + 5/s, s > 1.

21.17 e^{−s}/(s − 3), s > 3.

21.18 (1/2)[1/s − s/(s² + 4ω²)], s > 0.

21.19 3/(s² + 36), s > 0.

21.20 (s − 2)/((s − 2)² + 9), s > 3.

21.21 2/(s − 4)³ + 3/(s − 4)² + 5/(s − 4), s > 4.

21.22 2 sin 5t + 4e^{3t}, t ≥ 0.

21.23 (5/6)e^{3t} t³, t ≥ 0.

21.24 0 for 0 ≤ t < 2 and e^{9(t−2)} for t ≥ 2.

21.25 3e^{3t} − 3e^{−t}, t ≥ 0.

21.26 4[e^{3(t−5)} − e^{−3(t−5)}]H(t − 5), t ≥ 0.

21.27 y(t) = 2e^{−4t} + 3[H(t − 1) − H(t − 3)] − 3[e^{−4(t−1)}H(t − 1) − e^{−4(t−3)}H(t − 3)], t ≥ 0.

21.28 (1/5)e^{3t} + (1/20)e^{−2t} − (1/4)e^{2t}, t ≥ 0.

21.29 (e^t − e^{−2t})/3.

21.30 (t/2) sin t.

21.31 t⁵/120.

21.32 1/2 − e^{−t} + (1/2)e^{−2t}.

21.33 −t + e^t/2 − e^{−t}/2.
.

Section 22

22.1 u(x, t) = sin (x − t) − H(t − x) sin (x − t).

22.2 u(x, t) = [sin (x − t) − H(t − x) sin (x − t)]e−t .


22.3 u(x, t) = 2e^{−4π²t} sin πx + 6e^{−16π²t} sin 2πx.
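The decay rates 4π² and 16π² in 22.3 are consistent with a diffusivity of k = 4; under that assumption (the problem statement is not reproduced here) the answer can be checked with sympy:

    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    u = 2*sp.exp(-4*sp.pi**2*t)*sp.sin(sp.pi*x) + 6*sp.exp(-16*sp.pi**2*t)*sp.sin(2*sp.pi*x)
    print(sp.simplify(sp.diff(u, t) - 4*sp.diff(u, x, 2)))   # 0, assuming u_t = 4 u_xx
    print(u.subs(t, 0))                                      # 2*sin(pi*x) + 6*sin(2*pi*x)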

22.4 u(x, t) = [sin (x − t) − H(t − x) sin (x − t)]et .

22.5 u(x, t) = (1/2)t² + (1/2)H(t − x)(t − x)².

22.6 u(x, t) = (t − (1/2)x²) H(t − (1/2)x²).

22.7 u(x, t) = L^{−1}[e^{−sx/c}/(s² + 1)] = H(t − x/c) sin(t − x/c).

22.8 u(x, t) = 2 sin x cos 3t.

22.9 u(x, y) = y(x + 1) + 1.


22.10 u(x, t) = −c ∫_0^t f(t − τ) H(τ − x/c) dτ.

22.11 u(x, t) = e−5x e−4t .


 √ 
s
22.12 u(x, t) = L^{−1}[−(T/s) e^{−(√s/c)x} + T/s].

22.13 u(x, t) = 5e^{−3π²t} sin(πx).

22.14 u(x, t) = 40e^{−t} cos(x/2).

22.15 u(x, t) = 3 sin πx cos 2πt.

Section 23
23.1 (−1)^n i/(nπ).

23.2 f(x) = 1/2 − Σ_{n=1}^∞ (1/(nπ)) sin(nπ/2) (e^{inx} + e^{−inx}).

23.3 f(x) = (sinh aπ/π) Σ_{n=−∞}^∞ [(−1)^n (a + in)/(a² + n²)] e^{inx}.

23.4 f(x) = (e^{ix} − e^{−ix})/(2i).

23.5 f(x) = (1/2π){T + Σ_{n=−∞}^{−1} (i/n)[e^{−inT} − 1] e^{inx} + Σ_{n=1}^∞ (i/n)[e^{−inT} − 1] e^{inx}}.

23.6 (a) f(x) = π²/3 + Σ_{n=−∞}^{−1} (2/n²)(−1)^n e^{inx} + Σ_{n=1}^∞ (2/n²)(−1)^n e^{inx}.
(b) f(x) = π²/3 + Σ_{n=1}^∞ (4/n²)(−1)^n cos nx.

23.7 (a)

a0 = 2 ∫_{−1/2}^{1/2} sin πx dx = −(2/π)[cos(π/2) − cos(−π/2)] = 0
an = 2 ∫_{−1/2}^{1/2} sin πx cos 2nπx dx = 0
bn = 2 ∫_{−1/2}^{1/2} sin πx sin 2nπx dx = 8n/(π − 4n²π)
c0 = 0
cn = 4(−1)^n n/(i(π − 4n²π)) = −4i(−1)^n n/(π − 4n²π).

(b) f(x) = (4/π) Σ_{n=−∞}^∞ [(−1)^n n/(i(1 − 4n²))] e^{2nπix}.
23.8 (a)

a0 = (1/2) ∫_{−2}^{2} (2 − x) dx = 4
an = (1/2) ∫_{−2}^{2} (2 − x) cos(nπx/2) dx = 0
bn = (1/2) ∫_{−2}^{2} (2 − x) sin(nπx/2) dx = 4(−1)^n/(nπ)

(b) f(x) = 2 − Σ_{n=1}^∞ [2(−1)^{n+1} i/(nπ)] e^{−i(nπ/2)x} + Σ_{n=1}^∞ [2(−1)^{n+1} i/(nπ)] e^{i(nπ/2)x}.

23.9 an = cn + c−n = 0. For |n| odd, bn = i · 4/(inπ) = 4/(nπ), and for |n| even, bn = 0.

23.10 Note that for any complex number z we have z + z̄ = 2Re(z) and z̄ − z = −2i Im(z). Thus,

cn + c̄n = an,

which means that an = 2Re(cn). Likewise, we have

c̄n − cn = i bn.

That is, i bn = −2i Im(cn). Hence, bn = −2Im(cn).
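These relations are easy to confirm numerically for a sample function (illustrative numpy/scipy sketch; the test function and the mode n are arbitrary):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.exp(np.sin(x))      # an arbitrary real 2*pi-periodic test function
    n = 3

    an = quad(lambda x: f(x)*np.cos(n*x), -np.pi, np.pi)[0] / np.pi
    bn = quad(lambda x: f(x)*np.sin(n*x), -np.pi, np.pi)[0] / np.pi
    # cn = (1/2pi) * integral of f(x) e^{-inx} dx, split into real and imaginary parts
    re_cn = quad(lambda x: f(x)*np.cos(n*x), -np.pi, np.pi)[0] / (2*np.pi)
    im_cn = quad(lambda x: -f(x)*np.sin(n*x), -np.pi, np.pi)[0] / (2*np.pi)
    print(an, 2*re_cn)      # an  =  2 Re(cn)
    print(bn, -2*im_cn)     # bn  = -2 Im(cn)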

23.11 an = 2Re(cn) = (1/(πn)) sin(nT) and bn = (1 − cos(nT))/(nπ).

23.12 f(x) = i Σ_{n=−∞}^∞ [sin(2 − inπ)/(2 − inπ)] e^{inπx/2}.

23.13 (a) We have f(t) = 1 for 0 < t < 1, f(t) = 0 for 1 < t < 2, and f(t + 2) = f(t) for all t ∈ R.
(b) We have

a0 = (2/L) ∫_0^L f(x) dx = ∫_0^2 f(x) dx = ∫_0^1 dx = 1
an = ∫_0^1 cos(nπx) dx = sin(nπ)/(nπ) = 0.

(c) We have

bn = ∫_0^1 sin(nπx) dx = (1 − cos nπ)/(nπ) = (1 − (−1)^n)/(nπ).

Hence, bn = 2/(nπ) if n is odd and bn = 0 if n is even.
(d) We have c0 = a0/2 = 1/2 and for n ∈ N

cn = (an − i bn)/2 = −i/(nπ) if n is odd and cn = 0 if n is even.

23.14 sin 3x = (1/(2i))(e^{3ix} − e^{−3ix}).

Section 24

24.1 f̂(ξ) = 2 sin ξ/ξ if ξ ≠ 0 and f̂(ξ) = 2 if ξ = 0.

24.2 ∂û/∂t + iξc û = 0, û(ξ, 0) = f̂(ξ).

24.3 ∂²û/∂t² = −c²ξ² û, û(ξ, 0) = f̂(ξ), ût(ξ, 0) = ĝ(ξ).

24.4 ûyy = ξ² û, û(ξ, 0) = 0, û(ξ, L) = 2 sin(ξa)/ξ.

24.5 1/(α − iξ) + 1/(α + iξ) = 2α/(α² + ξ²).

24.6 We have

F[e^{−x}H(x)] = ∫_{−∞}^∞ e^{−x}H(x) e^{−iξx} dx = ∫_0^∞ e^{−x(1+iξ)} dx = [−e^{−x(1+iξ)}/(1 + iξ)]_0^∞ = 1/(1 + iξ).

24.7 Using the duality property, we have

F[1/(1 + ix)] = F[F[e^{−ξ}H(ξ)]] = 2πe^ξ H(−ξ).
285

24.8 We have

F[f(x − α)] = ∫_{−∞}^∞ f(x − α) e^{−iξx} dx = e^{−iξα} ∫_{−∞}^∞ f(u) e^{−iξu} du = e^{−iξα} f̂(ξ),

where u = x − α.

24.9 We have

F[e^{iαx} f(x)] = ∫_{−∞}^∞ e^{iαx} f(x) e^{−iξx} dx = ∫_{−∞}^∞ f(x) e^{−i(ξ−α)x} dx = f̂(ξ − α).

24.10 We will just prove the first one. We have

F[cos(αx) f(x)] = F[f(x) e^{iαx}/2 + f(x) e^{−iαx}/2]
               = (1/2)[F[f(x) e^{iαx}] + F[f(x) e^{−iαx}]]
               = (1/2)[f̂(ξ − α) + f̂(ξ + α)].

24.11 Using the definition and integration by parts we find

F[f'(x)] = ∫_{−∞}^∞ f'(x) e^{−iξx} dx = [f(x) e^{−iξx}]_{−∞}^∞ + (iξ) ∫_{−∞}^∞ f(x) e^{−iξx} dx = (iξ) f̂(ξ),

where we used the fact that lim_{x→±∞} f(x) = 0.
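Closed forms such as the one in 24.6 can be spot-checked by numerical quadrature at a sample frequency (illustrative scipy sketch; the value of ξ is arbitrary):

    import numpy as np
    from scipy.integrate import quad

    xi = 1.7   # arbitrary sample frequency

    # real and imaginary parts of the integral of e^{-x} e^{-i xi x} over (0, infinity)
    re = quad(lambda x: np.exp(-x)*np.cos(xi*x), 0, np.inf)[0]
    im = quad(lambda x: -np.exp(-x)*np.sin(xi*x), 0, np.inf)[0]
    print(complex(re, im), 1/(1 + 1j*xi))   # the two values agree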

24.12 (2/ξ²)(1 − cos ξ).

24.13 (2/(iξ))(1 − cos ξa).

24.14 F^{−1}[f̂(ξ)] = (1/√(2π)) e^{−x²/2}.

24.15 F^{−1}[1/(a + iξ)] = e^{−ax}, x ≥ 0.

Section 25
25.1 u(x, t) = F^{−1}[û(ξ, t)] = e^{−(x−ct)²/4}.
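The answer to 25.1 is a profile translating with speed c; assuming the underlying problem is the transport equation u_t + c u_x = 0 (as in 24.2), this can be confirmed with sympy:

    import sympy as sp

    x, t, c = sp.symbols('x t c', real=True)
    u = sp.exp(-(x - c*t)**2/4)
    print(sp.simplify(sp.diff(u, t) + c*sp.diff(u, x)))   # 0: u_t + c u_x = 0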

25.2
u(x, t) = √γ e^{−αt} F^{−1}[e^{−ξ²(kt + γ/4)}]
        = √(γ/(4π)) e^{−αt} · √(π/(kt + γ/4)) · e^{−x²/(4(kt + γ/4))}
        = √(γ/(4kt + γ)) e^{−x²/(4kt+γ)} e^{−αt}.

25.3 u(x, t) = (1/√(4πkt)) ∫_0^∞ e^{−(x−s)²/(4kt)} ds.

25.4
u(x, t) = e^t F^{−1}[e^{−ξ²t}] = e^{−αt} (1/√(4πt)) e^{−x²/(4t)}.

25.5 We have

∫_{−∞}^∞ e^{−|ξ|y} e^{iξx} dξ = ∫_{−∞}^0 e^{ξy} e^{iξx} dξ + ∫_0^∞ e^{−ξy} e^{iξx} dξ
= [1/(y + ix)] e^{ξ(y+ix)} |_{−∞}^0 − [1/(y − ix)] e^{ξ(−y+ix)} |_0^∞
= 1/(y + ix) + 1/(y − ix) = 2y/(x² + y²).

25.6
u(x, y) = (1/2π) ∫_{−∞}^∞ f̂(ξ) e^{−|ξ|y} e^{iξx} dξ
        = (1/2π) [f(x) ∗ 2y/(x² + y²)]
        = (1/2π) ∫_{−∞}^∞ f(ξ) 2y/((x − ξ)² + y²) dξ.

25.7 ûtt + (α + β)ût + αβ û = −c2 ξ 2 û.

25.8 u(x, t) = e−(x−3t) .

25.9 u(x, t) = e−(x−kt) .


25.10 u(x, t) = (1/√(4πkt)) ∫_{−∞}^∞ e^{−s²} e^{−(x−s)²/(4kt)} ds.

25.11 u(x, t) = (x − ct)2 .

25.12 u(x, t) = f(x) ∗ F^{−1}[−(1/|ξ|) e^{−|ξ|y}].
Index

Boundary value problem, 17 First derivative, 4


Burger’s equation, 11 First order PDE, 36
Forced harmonic oscillator, 11
Cauchy data, 75 Fourier coefficients, 121
Cauchy problem, 75 Fourier cosine series, 135
Characteristic curve, 60 Fourier inversion formula, 207
Characteristic direction, 59 Fourier law, 102
Characteristic equation, 140 Fourier series, 109, 121
Characteristic equations, 60, 72 Fourier sine series, 135
Characteristics, 60 Fourier transform, 207
Classical solution, 13 Function series, 120
Convection (transport) equation, 11 Fundamental period, 122
Convolution, 183
General solution, 13
Descriminant, 88 Generalized solution, 14
Differential equation, 5 Gradient, 51
Diffusion equation, 11 Gradient vector field, 56
Diffusivity constant, 102
Directional derivative, 50 Harmonic function, 154
Dirichlet boundary conditions, 17 Heat equation, 89, 100
Dirichlet conditions, 104 Heat source, 103
Dot product, 41 Helmholtz equation, 155
Homogeneous, 37, 88
Eigenvalue problem, 155 Homogeneous linear PDE, 8
Elliptic, 88 Hyperbolic, 88
Euler equation, 166
Euler-Fourier Formulas, 125 Ill-posed, 19
Even extension, 135 Initial curve, 75
Even function, 133 Initial data, 75
Evolution equation, 177 Initial temperature distribution, 104
Exponential order at infinity, 180 Initial value conditions, 18


Initial value problem, 18 Order, 6


Inner product, 123 Ordinary differential equation, 5
integral curve, 57 Orthogonal, 123
Integral surface, 13, 59 Orthogonal projection, 45
integral transforms, 177
Integrating factor, 26 Parabolic, 88
Inverse Laplace transform, 182 Partial differential equation, 5
Piecewise continuous, 124, 180
Korteweg-Vries equation, 11 Piecewise smooth, 124
Pointwise convergence, 109, 120
Lagrange’s method, 71 Poisson Equation, 11
Laplace equation, 89, 154 Poisson equation, 155
Laplace transform, 178 Projected characteristic curve, 60
Laplace’s equation, 11
Laplacian, 154 Quasi-linear, 7, 36, 88
Level curve, 54 Right traveling wave, 67
Level surface, 54
Linear, 7, 36 Scalar projection, 46
linear, 88 Semi-linear, 7, 36, 88
Linear differential operator, 8 Separable, 31
Linear operator, 8 Separation of variables, 31
Smooth functions, 8
Method of characteristics, 59, 66, 71 Solution surface, 13
Method of undetermined coefficients, Specific heat, 101
222 Squeeze rule, 117
Method of Variation of Parameters, Stable solution, 18
229 stationary equation, 177
Minimal surface equation, 11 Strong solution, 13
Mixed boundary condition, 18 Superposition principle, 15
Mutually orthogonal, 123
Thermal conductivity, 102
Neumann boundary conditions, 17, 105 Thermal energy, 100
Non-homogeneous, 37, 88 Thin film equation, 11
Non-homogeneous PDE, 8 Total thermal energy, 103
Non-linear, 7, 36 Transport equation, 18
Nowhere characteristic, 77 Transport equation with decay, 68
Transport equationin 1-D space, 66
Odd extension, 134
Odd function, 133 Uniform convergence, 110, 121

Vector field, 55
Vector function, 53

Wave equation, 89, 93


wave equation, 11
Weak solution, 14
Weierstrass M-test, 121
Well-posed, 18
