Transform Methods
II B. Tech IV semester
CIVIL-R16
Ms.B Praveena
Assistant Professor
FRESHMAN ENGINEERING
INSTITUTE OF AERONAUTICAL ENGINEERING
(Autonomous)
Dundigal, Hyderabad - 500 043
UNIT-I
FOURIER SERIES
Definition of periodic function
Determination of Fourier coefficients
Fourier expansion of periodic function in a given interval of length 2π
Fourier series of even and odd functions
Fourier series in an arbitrary interval
Half-range Fourier sine and cosine expansions
INTRODUCTION:
The Fourier series is named after the French mathematician Jean-Baptiste Joseph Fourier (1768-1830). A Fourier series is an infinite series representation of a periodic function in terms of the trigonometric sine and cosine functions. It is a very powerful method for solving ordinary and partial differential equations, particularly those with periodic functions appearing as non-homogeneous terms. Taylor's series expansion is valid only for functions which are continuous and differentiable, whereas a Fourier series is possible not only for continuous functions but also for functions which are discontinuous in their values and derivatives; and because of the periodic nature, a Fourier series constructed for one period is valid for all x. Fourier series is an important tool in solving problems in many fields, such as current and voltage in alternating-current circuits, conduction of heat in solids, and electrodynamics.
Periodic function
A function f : R → R is said to be periodic if there exists a positive number T such that f(x + T) = f(x) for all x ∈ R. T is called a period of f(x).
If a function f(x) has a smallest positive period T (> 0), then T is called the fundamental period (or primitive period) of f(x).
EXAMPLE
sin x, cos x are periodic functions with primitive period 2π.
sin nx, cos nx are periodic functions with primitive period 2π/n.
tan x is a periodic function with primitive period π.
tan nx is a periodic function with primitive period π/n.
f(x) = constant is a periodic function, but it has no primitive period.
NOTE
Any integral multiple of T is also a period, i.e. if f(x) is periodic with period T then f(x + nT) = f(x), where n ∈ Z.
If f1 and f2 are periodic functions having the same period T, then f(x) = c1 f1(x) + c2 f2(x) (c1, c2 constants) is also a periodic function of period T.
If T is a period of f(x), then f(c1x + c2), c1 ≠ 0, is periodic in x with period T/|c1| (c1, c2 constants).
If f(x) is a periodic function of x of period T, then
(1) f(ax), a ≠ 0, is a periodic function of x of period T/a;
(2) f(x/b), b ≠ 0, is a periodic function of x of period Tb.
EVEN FUNCTION:
A function f(x) is an even function if f(−x) = f(x).
Ex: f(x) = cos x, x²
ODD FUNCTION:
A function f(x) is an odd function if f(−x) = −f(x).
Ex: f(x) = sin x, x³
NOTE
There may be some functions which are neither even nor odd.
Ex: f(x) = 4 sin x + 3 tan x − eˣ
The product of two even functions is even.
The product of two odd functions is even.
The product of an even and an odd function is odd.
Consider the trigonometric series
f(x) = a₀ + a₁ cos x + a₂ cos 2x + a₃ cos 3x + … + aₙ cos nx + … + b₁ sin x + b₂ sin 2x + b₃ sin 3x + … + bₙ sin nx + …,
i.e. f(x) = a₀ + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx),
where a₀, a₁, a₂, …, aₙ and b₁, b₂, …, bₙ are the coefficients of the series. Since each term of the trigonometric series is a function of period 2π, if the series is convergent then its sum is also a function of period 2π.
A function f(x) defined in [0, 2π] has a valid Fourier series expansion of the form
f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx),
where a₀, aₙ, bₙ are constants, provided
(i) f(x) is well defined and single-valued, except possibly at a finite number of points in the interval [0, 2π];
(ii) f(x) has a finite number of finite discontinuities in the interval [0, 2π];
(iii) f(x) has a finite number of maxima and minima.
Note: The above conditions also apply to a function defined in the intervals [−π, π], [0, 2l], [−l, l].
Consider the set {1, cos x, cos 2x, …, cos nx, …, sin x, sin 2x, …, sin nx, …}. All of these have a common period 2π (here 1 = cos 0x).
Similarly, the set {1, cos(πx/l), cos(2πx/l), …, cos(nπx/l), …, sin(πx/l), sin(2πx/l), …, sin(nπx/l), …} has a common period 2l.
These are called complete sets of orthogonal functions.
The Fourier series converges to f(x) at all points where f(x) is continuous. At each point of discontinuity of f(x), the series converges to the average of the left-hand and right-hand limits of f(x).
Example
sin⁻¹x cannot be expanded as a Fourier series since it is not single-valued.
tan x cannot be expanded as a Fourier series in (0, 2π) since tan x is infinite at x = π/2 and x = 3π/2.
EULER'S FORMULAE
The Fourier series for the function f(x) in the interval c ≤ x ≤ c + 2π is given by
f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx),
where
a₀ = (1/π) ∫_c^{c+2π} f(x) dx,
aₙ = (1/π) ∫_c^{c+2π} f(x) cos nx dx,
bₙ = (1/π) ∫_c^{c+2π} f(x) sin nx dx.
These values are known as Euler's formulae.
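As a quick numerical illustration of Euler's formulae (this sketch is not part of the original notes), the coefficients can be approximated by quadrature; numpy and scipy are assumed, and f(x) = x² on (0, 2π) from Problem 1 below is taken as a test case.

```python
import numpy as np
from scipy.integrate import quad

def fourier_coefficients(f, c, n_max):
    """Approximate a0, a_n, b_n of f on [c, c+2*pi] via Euler's formulae."""
    a0 = quad(f, c, c + 2 * np.pi)[0] / np.pi
    a = [quad(lambda x, n=n: f(x) * np.cos(n * x), c, c + 2 * np.pi)[0] / np.pi
         for n in range(1, n_max + 1)]
    b = [quad(lambda x, n=n: f(x) * np.sin(n * x), c, c + 2 * np.pi)[0] / np.pi
         for n in range(1, n_max + 1)]
    return a0, a, b

# Test case f(x) = x^2 on (0, 2*pi); expected a0 = 8*pi^2/3, a_n = 4/n^2, b_n = -4*pi/n
a0, a, b = fourier_coefficients(lambda x: x**2, 0.0, 3)
print(a0, 8 * np.pi**2 / 3)
print(a, [4 / n**2 for n in (1, 2, 3)])
print(b, [-4 * np.pi / n for n in (1, 2, 3)])
```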
Proof: Consider f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx)  ……(1)
Integrating (1) with respect to x from x = c to x = c + 2π on both sides,
∫_c^{c+2π} f(x) dx = (a₀/2) ∫_c^{c+2π} dx + Σₙ aₙ ∫_c^{c+2π} cos nx dx + Σₙ bₙ ∫_c^{c+2π} sin nx dx.
The integrals of cos nx and sin nx over a full period vanish, so
∫_c^{c+2π} f(x) dx = (a₀/2)[(c + 2π) − c] = a₀π,  hence  a₀ = (1/π) ∫_c^{c+2π} f(x) dx.
Multiplying (1) by cos nx and integrating with respect to x from x = c to x = c + 2π,
∫_c^{c+2π} f(x) cos nx dx = (a₀/2) ∫ cos nx dx + Σₘ aₘ ∫ cos mx cos nx dx + Σₘ bₘ ∫ sin mx cos nx dx.
By orthogonality only the term m = n survives, and ∫_c^{c+2π} cos² nx dx = π, so
∫_c^{c+2π} f(x) cos nx dx = aₙπ,  hence  aₙ = (1/π) ∫_c^{c+2π} f(x) cos nx dx.
Multiplying (1) by sin nx and integrating with respect to x from x = c to x = c + 2π, the same orthogonality argument (with ∫_c^{c+2π} sin² nx dx = π) gives
∫_c^{c+2π} f(x) sin nx dx = bₙπ,  hence  bₙ = (1/π) ∫_c^{c+2π} f(x) sin nx dx.
In particular, taking c = 0 (the interval [0, 2π]),
aₙ = (1/π) ∫₀^{2π} f(x) cos nx dx,  bₙ = (1/π) ∫₀^{2π} f(x) sin nx dx.
Let f(x) be a function defined in [−π, π] with f(x + 2π) = f(x). Then the Fourier series of f(x) is given by
f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx),
where
a₀ = (1/π) ∫₋π^π f(x) dx,  aₙ = (1/π) ∫₋π^π f(x) cos nx dx,  bₙ = (1/π) ∫₋π^π f(x) sin nx dx.
Let f(x) be a function defined in [0, 2l] with f(x + 2l) = f(x). Then the Fourier series of f(x) is given by
f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/l) + bₙ sin(nπx/l)],
where
a₀ = (1/l) ∫₀^{2l} f(x) dx,  aₙ = (1/l) ∫₀^{2l} f(x) cos(nπx/l) dx,  bₙ = (1/l) ∫₀^{2l} f(x) sin(nπx/l) dx.
These values a₀, aₙ, bₙ are called the Fourier coefficients of f(x) in [0, 2l].
Let f(x) be a function defined in [−l, l] with f(x + 2l) = f(x). Then the Fourier series of f(x) is given by
f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/l) + bₙ sin(nπx/l)],
where
a₀ = (1/l) ∫₋l^l f(x) dx,  aₙ = (1/l) ∫₋l^l f(x) cos(nπx/l) dx,  bₙ = (1/l) ∫₋l^l f(x) sin(nπx/l) dx.
FOURIER SERIES OF EVEN AND ODD FUNCTIONS
If f(x) is even in [−π, π], then bₙ = (1/π) ∫₋π^π f(x) sin nx dx = 0 and
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos nx,  where  a₀ = (2/π) ∫₀^π f(x) dx,  aₙ = (2/π) ∫₀^π f(x) cos nx dx.
If f(x) is even in [−l, l], then bₙ = 0 and
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos(nπx/l),  where  a₀ = (2/l) ∫₀^l f(x) dx,  aₙ = (2/l) ∫₀^l f(x) cos(nπx/l) dx.
If f(x) is odd in [−π, π], then a₀ = aₙ = 0 and
f(x) = Σₙ₌₁^∞ bₙ sin nx,  where  bₙ = (2/π) ∫₀^π f(x) sin nx dx.
If f(x) is odd in [−l, l], then a₀ = aₙ = 0 and
f(x) = Σₙ₌₁^∞ bₙ sin(nπx/l),  where  bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx.
FOURIER SERIES FOR DISCONTINUOUS FUNCTIONS
Let f(x) be defined by
f(x) = f₁(x), c < x < x₀;  f(x) = f₂(x), x₀ < x < c + 2π.
Then
a₀ = (1/π) [∫_c^{x₀} f₁(x) dx + ∫_{x₀}^{c+2π} f₂(x) dx],
aₙ = (1/π) [∫_c^{x₀} f₁(x) cos nx dx + ∫_{x₀}^{c+2π} f₂(x) cos nx dx],
bₙ = (1/π) [∫_c^{x₀} f₁(x) sin nx dx + ∫_{x₀}^{c+2π} f₂(x) sin nx dx].
If x is a point of discontinuity of f(x), the Fourier series converges to [f(x + 0) + f(x − 0)]/2.
PROBLEMS
1  Find the Fourier series expansion of f(x) = x², 0 < x < 2π. Hence deduce that
(i) 1/1² + 1/2² + 1/3² + … = π²/6
(ii) 1/1² − 1/2² + 1/3² − … = π²/12
(iii) 1/1² + 1/3² + 1/5² + … = π²/8
Sol  The Fourier series is f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx), where
a₀ = (1/π) ∫₀^{2π} x² dx = (1/π)[x³/3]₀^{2π} = 8π²/3,
aₙ = (1/π) ∫₀^{2π} x² cos nx dx = (1/π)[x²(sin nx/n) + 2x(cos nx/n²) − 2(sin nx/n³)]₀^{2π} = 4/n²,
bₙ = (1/π) ∫₀^{2π} x² sin nx dx = (1/π)[−x²(cos nx/n) + 2x(sin nx/n²) + 2(cos nx/n³)]₀^{2π} = −4π/n.
Therefore
f(x) = 4π²/3 + 4[cos x/1² + cos 2x/2² + cos 3x/3² + …] − 4π[sin x/1 + sin 2x/2 + sin 3x/3 + …].
(Deductions (i) and (ii) follow by putting x = 0, a point of discontinuity where the series converges to [f(0⁺) + f(2π⁻)]/2 = 2π², and x = π respectively; adding the two results gives (iii).)
2  Expand f(x) = x sin x, 0 < x < 2π, in a Fourier series and deduce that
1/(1·3) − 1/(3·5) + 1/(5·7) − … = (π − 2)/4.
Sol  The Fourier series is f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx).
a₀ = (1/π) ∫₀^{2π} x sin x dx = (1/π)[−x cos x + sin x]₀^{2π} = (1/π)(−2π) = −2.
For n ≠ 1,
aₙ = (1/π) ∫₀^{2π} x sin x cos nx dx = (1/2π) ∫₀^{2π} x [sin(n + 1)x − sin(n − 1)x] dx
   = (1/2π) [−2π/(n + 1) + 2π/(n − 1)] = 1/(n − 1) − 1/(n + 1) = 2/(n² − 1).
When n = 1,
a₁ = (1/π) ∫₀^{2π} x sin x cos x dx = (1/2π) ∫₀^{2π} x sin 2x dx = (1/2π)(−π) = −1/2.
For n ≠ 1,
bₙ = (1/π) ∫₀^{2π} x sin x sin nx dx = (1/2π) ∫₀^{2π} x [cos(n − 1)x − cos(n + 1)x] dx = 0.
When n = 1,
b₁ = (1/π) ∫₀^{2π} x sin² x dx = (1/2π) ∫₀^{2π} x (1 − cos 2x) dx = (1/2π)(2π²) = π.
Hence
f(x) = −1 − (1/2) cos x + π sin x + 2 Σₙ₌₂^∞ cos nx / [(n − 1)(n + 1)].
Putting x = π/2 (a point of continuity), f(π/2) = π/2 and cos(nπ/2) vanishes for odd n, giving
π/2 = −1 + π − 2 [1/(1·3) − 1/(3·5) + 1/(5·7) − …],
so that 1/(1·3) − 1/(3·5) + 1/(5·7) − … = (π − 2)/4.
sinh
2 sinh 1 (1) n
1=
2 n 2 n 2 1
2 sinh (1) n
1=
n 2 n2 1
4  Find the Fourier series for the function f(x) = 1 + x + x² in (−π, π). Deduce that
1/1² + 1/2² + 1/3² + … = π²/6.
Sol  Here
a₀ = (1/π) ∫₋π^π (1 + x + x²) dx = (1/π)[x + x²/2 + x³/3]₋π^π = 2 + 2π²/3,
aₙ = (1/π) ∫₋π^π (1 + x + x²) cos nx dx = (1/π)[(1 + x + x²)(sin nx/n) + (1 + 2x)(cos nx/n²) − 2(sin nx/n³)]₋π^π
   = (1/π)[(1 + 2π)(−1)ⁿ/n² − (1 − 2π)(−1)ⁿ/n²] = 4(−1)ⁿ/n²,
bₙ = (1/π) ∫₋π^π (1 + x + x²) sin nx dx = (1/π)[−(1 + x + x²)(cos nx/n) + (1 + 2x)(sin nx/n²) + 2(cos nx/n³)]₋π^π
   = −2(−1)ⁿ/n = 2(−1)ⁿ⁺¹/n.
Hence
f(x) = 1 + π²/3 + Σₙ₌₁^∞ [4(−1)ⁿ/n² cos nx + 2(−1)ⁿ⁺¹/n sin nx]
     = 1 + π²/3 − 4[cos x/1² − cos 2x/2² + cos 3x/3² − …] + 2[sin x/1 − sin 2x/2 + sin 3x/3 − …].
Put x = π in the above series. Since x = π is a point of discontinuity, the series converges to
[f(π⁻) + f(−π⁺)]/2 = [(1 + π + π²) + (1 − π + π²)]/2 = 1 + π², so
1 + π² = 1 + π²/3 + 4[1/1² + 1/2² + 1/3² + …],
which gives 1/1² + 1/2² + 1/3² + … = π²/6.
5  Find the Fourier series of f(x) = (π + x)² in (−π, π).
Sol  Here
a₀ = (1/π) ∫₋π^π (π + x)² dx = (1/π)[(π + x)³/3]₋π^π = 8π²/3,
aₙ = (1/π) ∫₋π^π (π + x)² cos nx dx = (1/π)[(π + x)²(sin nx/n) + 2(π + x)(cos nx/n²) − 2(sin nx/n³)]₋π^π = 4(−1)ⁿ/n²,
bₙ = (1/π) ∫₋π^π (π + x)² sin nx dx = (1/π)[−(π + x)²(cos nx/n) + 2(π + x)(sin nx/n²) + 2(cos nx/n³)]₋π^π = −4π(−1)ⁿ/n.
Hence
f(x) = 4π²/3 + Σₙ₌₁^∞ [4(−1)ⁿ/n² cos nx − 4π(−1)ⁿ/n sin nx]
     = 4π²/3 − 4[cos x/1² − cos 2x/2² + cos 3x/3² − …] + 4π[sin x/1 − sin 2x/2 + sin 3x/3 − …].
6  Obtain the Fourier series for f(x) = 2x − x² in 0 < x < 3.
Sol  Here 2l = 3, so l = 3/2 and the series is f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(2nπx/3) + bₙ sin(2nπx/3)].
a₀ = (2/3) ∫₀³ (2x − x²) dx = (2/3)[x² − x³/3]₀³ = (2/3)(9 − 9) = 0,
aₙ = (2/3) ∫₀³ (2x − x²) cos(2nπx/3) dx = −9/(n²π²),
bₙ = (2/3) ∫₀³ (2x − x²) sin(2nπx/3) dx = 3/(nπ).
Hence
f(x) = Σₙ₌₁^∞ [−9/(n²π²) cos(2nπx/3) + 3/(nπ) sin(2nπx/3)]
     = −(9/π²)[cos(2πx/3)/1² + cos(4πx/3)/2² + cos(6πx/3)/3² + …] + (3/π)[sin(2πx/3)/1 + sin(4πx/3)/2 + sin(6πx/3)/3 + …].
7  Expand f(x) = x − x² as a Fourier series in −l < x < l and using this series find the root mean square value of f(x) in the interval.
Sol  The series is f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/l) + bₙ sin(nπx/l)], where
a₀ = (1/l) ∫₋l^l (x − x²) dx = (1/l)[x²/2 − x³/3]₋l^l = −2l²/3,
aₙ = (1/l) ∫₋l^l (x − x²) cos(nπx/l) dx = (1/l)[(x − x²)(l/nπ) sin(nπx/l) + (1 − 2x)(l²/n²π²) cos(nπx/l) − 2(l³/n³π³) sin(nπx/l)]₋l^l = 4l²(−1)ⁿ⁺¹/(n²π²),
bₙ = (1/l) ∫₋l^l (x − x²) sin(nπx/l) dx = 2l(−1)ⁿ⁺¹/(nπ).
Hence
f(x) = −l²/3 + (4l²/π²)[cos(πx/l)/1² − cos(2πx/l)/2² + cos(3πx/l)/3² − …] + (2l/π)[sin(πx/l)/1 − sin(2πx/l)/2 + sin(3πx/l)/3 − …].
8  Obtain the Fourier series of f(x) = 1 − x² over the interval (−1, 1).
Sol  The given function is even, as f(−x) = f(x). The period of f(x) is 1 − (−1) = 2, so l = 1.
Here
a₀ = ∫₋₁¹ f(x) dx = 2 ∫₀¹ (1 − x²) dx = 2[x − x³/3]₀¹ = 4/3,
aₙ = ∫₋₁¹ f(x) cos(nπx) dx = 2 ∫₀¹ (1 − x²) cos(nπx) dx.
Integrating by parts, we get
aₙ = 2[(1 − x²)(sin nπx/(nπ)) − 2x cos nπx/(nπ)² + 2 sin nπx/(nπ)³]₀¹ = 4(−1)ⁿ⁺¹/(n²π²),
bₙ = ∫₋₁¹ f(x) sin(nπx) dx = 0, since f(x) sin(nπx) is odd.
Hence
f(x) = 2/3 + (4/π²) Σₙ₌₁^∞ (−1)ⁿ⁺¹ cos(nπx)/n².
9  Find the Fourier series for the function f(x) = 1 + x, −2 < x < 0; f(x) = 1 − x, 0 < x < 2.
Deduce that Σₙ₌₁^∞ 1/(2n − 1)² = π²/8.
Sol  Here 2l = 4, l = 2, and f(−x) = f(x), so f is even and bₙ = 0.
a₀ = (2/2) ∫₀² (1 − x) dx = [x − x²/2]₀² = 0,
aₙ = ∫₀² (1 − x) cos(nπx/2) dx = [(1 − x)(2/nπ) sin(nπx/2) − (4/n²π²) cos(nπx/2)]₀²
   = −(4/n²π²)(−1)ⁿ + 4/(n²π²) = (4/n²π²)[1 − (−1)ⁿ].
Hence
f(x) = Σₙ₌₁^∞ (4/n²π²)[1 − (−1)ⁿ] cos(nπx/2) = (8/π²)[cos(πx/2)/1² + cos(3πx/2)/3² + cos(5πx/2)/5² + …].
Put x = 0 in the above series. Since f is continuous there with f(0) = 1,
1 = (8/π²)[1/1² + 1/3² + 1/5² + …],
so π²/8 = 1/1² + 1/3² + 1/5² + …, i.e. Σₙ₌₁^∞ 1/(2n − 1)² = π²/8.
10  Obtain the sine series for f(x) = x in 0 ≤ x ≤ l/2; f(x) = l − x in l/2 ≤ x ≤ l.
Sol  The Fourier sine series is f(x) = Σₙ₌₁^∞ bₙ sin(nπx/l), where
bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx = (2/l) [∫₀^{l/2} x sin(nπx/l) dx + ∫_{l/2}^l (l − x) sin(nπx/l) dx]
   = (2/l) {[−x(l/nπ) cos(nπx/l) + (l²/n²π²) sin(nπx/l)]₀^{l/2} + [−(l − x)(l/nπ) cos(nπx/l) − (l²/n²π²) sin(nπx/l)]_{l/2}^l}
   = (2/l)(2l²/n²π²) sin(nπ/2) = (4l/n²π²) sin(nπ/2).
Hence
f(x) = Σₙ₌₁^∞ (4l/n²π²) sin(nπ/2) sin(nπx/l) = (4l/π²)[sin(πx/l)/1² − sin(3πx/l)/3² + sin(5πx/l)/5² − …].
11  Find the Fourier series of f(x) = 1, 0 < x < π; f(x) = 2, π < x < 2π.
Sol  The Fourier series is f(x) = a₀/2 + Σₙ₌₁^∞ (aₙ cos nx + bₙ sin nx).
a₀ = (1/π)[∫₀^π (1) dx + ∫_π^{2π} (2) dx] = (1/π)[π + 2π] = 3,
aₙ = (1/π)[∫₀^π cos nx dx + ∫_π^{2π} 2 cos nx dx] = (1/π)[(sin nx/n)₀^π + 2(sin nx/n)_π^{2π}] = 0,
bₙ = (1/π)[∫₀^π sin nx dx + ∫_π^{2π} 2 sin nx dx] = (1/π)[(1 − (−1)ⁿ)/n + 2((−1)ⁿ − 1)/n] = [(−1)ⁿ − 1]/(nπ).
Hence
f(x) = 3/2 + Σₙ₌₁^∞ [(−1)ⁿ − 1]/(nπ) sin nx = 3/2 − (2/π)[sin x/1 + sin 3x/3 + sin 5x/5 + …].
12  Find the Fourier series expansion of f(x) = x, 0 < x < l/2; f(x) = l − x, l/2 < x < l.
Sol  Let 2L = l, i.e. L = l/2; then the given function becomes f(x) = x, 0 < x < L; f(x) = 2L − x, L < x < 2L.
The Fourier series is f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/L) + bₙ sin(nπx/L)].
a₀ = (1/L)[∫₀^L x dx + ∫_L^{2L} (2L − x) dx] = (1/L)[L²/2 + L²/2] = L,
aₙ = (1/L)[∫₀^L x cos(nπx/L) dx + ∫_L^{2L} (2L − x) cos(nπx/L) dx] = (2L/n²π²)[(−1)ⁿ − 1],
bₙ = (1/L)[∫₀^L x sin(nπx/L) dx + ∫_L^{2L} (2L − x) sin(nπx/L) dx] = 0.
Hence
f(x) = L/2 + Σₙ₌₁^∞ (2L/n²π²)[(−1)ⁿ − 1] cos(nπx/L) = L/2 − (4L/π²)[cos(πx/L)/1² + cos(3πx/L)/3² + cos(5πx/L)/5² + …],
i.e. f(x) = l/4 − (2l/π²)[cos(2πx/l)/1² + cos(6πx/l)/3² + cos(10πx/l)/5² + …].
13  Find the Fourier series expansion of f(x) = l − x, 0 < x < l; f(x) = 0, l < x < 2l.
Hence deduce the value of the series (i) 1 − 1/3 + 1/5 − 1/7 + … and (ii) 1/1² + 1/3² + 1/5² + …
Sol  The Fourier series is f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/l) + bₙ sin(nπx/l)].
a₀ = (1/l) ∫₀^l (l − x) dx = (1/l)[−(l − x)²/2]₀^l = l/2,
aₙ = (1/l) ∫₀^l (l − x) cos(nπx/l) dx = (1/l)[(l − x)(l/nπ) sin(nπx/l) − (l²/n²π²) cos(nπx/l)]₀^l = (l/n²π²)[1 − (−1)ⁿ],
bₙ = (1/l) ∫₀^l (l − x) sin(nπx/l) dx = (1/l)[−(l − x)(l/nπ) cos(nπx/l) − (l²/n²π²) sin(nπx/l)]₀^l = l/(nπ).
Hence
f(x) = l/4 + (2l/π²)[cos(πx/l)/1² + cos(3πx/l)/3² + cos(5πx/l)/5² + …] + (l/π)[sin(πx/l)/1 + sin(2πx/l)/2 + sin(3πx/l)/3 + …].   (1)
Put x = l/2 (a point of continuity) in (1): the cosine terms vanish, and
l/2 = l/4 + (l/π)[1 − 1/3 + 1/5 − 1/7 + …],
so π/4 = 1 − 1/3 + 1/5 − 1/7 + …
Put x = l in (1). Since x = l is a point of discontinuity, the series converges to [f(l⁻) + f(l⁺)]/2 = 0, and the sine terms vanish, so
0 = l/4 − (2l/π²)[1/1² + 1/3² + 1/5² + …],
which gives π²/8 = 1/1² + 1/3² + 1/5² + …
HALF RANGE FOURIER SERIES
Half-range Fourier sine series in (0, π):
f(x) = Σₙ₌₁^∞ bₙ sin nx, where bₙ = (2/π) ∫₀^π f(x) sin nx dx.
This is similar to the Fourier series of an odd function in (−π, π).
Half-range Fourier sine series in (0, l):
f(x) = Σₙ₌₁^∞ bₙ sin(nπx/l), where bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx.
This is similar to the Fourier series of an odd function in (−l, l).
Half-range Fourier cosine series in (0, π):
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos nx, where a₀ = (2/π) ∫₀^π f(x) dx, aₙ = (2/π) ∫₀^π f(x) cos nx dx.
This is similar to the Fourier series of an even function in (−π, π).
Half-range Fourier cosine series in (0, l):
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos(nπx/l), where a₀ = (2/l) ∫₀^l f(x) dx, aₙ = (2/l) ∫₀^l f(x) cos(nπx/l) dx.
This is similar to the Fourier series of an even function in (−l, l).
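A brief numerical sketch of the half-range sine formulae (not part of the original notes; numpy and scipy assumed), using f(x) = 2 from the first problem below as a test case:

```python
import numpy as np
from scipy.integrate import quad

# Half-range sine series on (0, pi): b_n = (2/pi) * integral_0^pi f(x) sin(nx) dx.
f = lambda x: 2.0
b = [2 / np.pi * quad(lambda x, n=n: f(x) * np.sin(n * x), 0, np.pi)[0]
     for n in range(1, 8)]
print(np.round(b, 4))                 # ~ [8/pi, 0, 8/(3*pi), 0, 8/(5*pi), 0, 8/(7*pi)]

x = 1.2                               # interior point of (0, pi)
N = np.arange(1, 20001)
bn = 4 / (np.pi * N) * (1 - (-1.0)**N)
print(np.sum(bn * np.sin(N * x)))     # approaches f(x) = 2
```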
Problems
1  Find the half-range sine series for f(x) = 2 in 0 < x < π.
Sol  f(x) = Σₙ₌₁^∞ bₙ sin nx, where
bₙ = (2/π) ∫₀^π f(x) sin nx dx = (2/π) ∫₀^π 2 sin nx dx = (4/π)[−cos nx/n]₀^π = (4/nπ)[1 − (−1)ⁿ].
The half-range sine series is
f(x) = Σₙ₌₁^∞ (4/nπ)[1 − (−1)ⁿ] sin nx = (4/π)[2 sin x/1 + 2 sin 3x/3 + 2 sin 5x/5 + …] = (8/π)[sin x/1 + sin 3x/3 + sin 5x/5 + …].
2  Find the half-range sine series for f(x) = cos x in 0 < x < π.
Sol  f(x) = Σₙ₌₁^∞ bₙ sin nx, where, for n ≠ 1,
bₙ = (2/π) ∫₀^π cos x sin nx dx = (1/π) ∫₀^π [sin(n + 1)x + sin(n − 1)x] dx
   = (1/π)[−cos(n + 1)x/(n + 1) − cos(n − 1)x/(n − 1)]₀^π
   = (1/π)[1 + (−1)ⁿ][1/(n + 1) + 1/(n − 1)] = 2n[1 + (−1)ⁿ]/[π(n² − 1)].
When n = 1,
b₁ = (2/π) ∫₀^π cos x sin x dx = (1/π) ∫₀^π sin 2x dx = (1/2π)[−cos 2x]₀^π = 0.
Hence
f(x) = Σₙ₌₂^∞ 2n[1 + (−1)ⁿ]/[π(n² − 1)] sin nx = (4/π)[2 sin 2x/3 + 4 sin 4x/15 + 6 sin 6x/35 + …]
     = (8/π)[sin 2x/3 + 2 sin 4x/15 + 3 sin 6x/35 + …].
3  Find the half-range cosine series for the function f(x) = x(π − x) in 0 < x < π.
Sol  The half-range Fourier cosine series is f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos nx, where
a₀ = (2/π) ∫₀^π x(π − x) dx = (2/π)[πx²/2 − x³/3]₀^π = (2/π)(π³/6) = π²/3,
aₙ = (2/π) ∫₀^π (πx − x²) cos nx dx = (2/π)[(πx − x²)(sin nx/n) + (π − 2x)(cos nx/n²) + 2(sin nx/n³)]₀^π
   = (2/π)[−π(−1)ⁿ/n² − π/n²] = −(2/n²)[1 + (−1)ⁿ].
Hence
f(x) = π²/6 − Σₙ₌₁^∞ (2/n²)[1 + (−1)ⁿ] cos nx = π²/6 − 4[cos 2x/2² + cos 4x/4² + cos 6x/6² + …]
     = π²/6 − [cos 2x/1² + cos 4x/2² + cos 6x/3² + …].
4  Find the half-range cosine series for the function f(x) = x in 0 < x < l.
Sol  The half-range Fourier cosine series is f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos(nπx/l), where
a₀ = (2/l) ∫₀^l x dx = (2/l)(l²/2) = l,
aₙ = (2/l) ∫₀^l x cos(nπx/l) dx = (2/l)[x(l/nπ) sin(nπx/l) + (l²/n²π²) cos(nπx/l)]₀^l = (2l/n²π²)[(−1)ⁿ − 1].
Hence
f(x) = l/2 + Σₙ₌₁^∞ (2l/n²π²)[(−1)ⁿ − 1] cos(nπx/l) = l/2 − (4l/π²)[cos(πx/l)/1² + cos(3πx/l)/3² + cos(5πx/l)/5² + …].
5  Find the half-range sine series of f(x) = x cos x in (0, π).
Sol  The Fourier sine series is f(x) = Σₙ₌₁^∞ bₙ sin nx, where, for n ≠ 1,
bₙ = (2/π) ∫₀^π x cos x sin nx dx = (1/π) ∫₀^π x [sin(n + 1)x + sin(n − 1)x] dx
   = (1/π)[π(−1)ⁿ/(n + 1) + π(−1)ⁿ/(n − 1)]   (using ∫₀^π x sin kx dx = π(−1)ᵏ⁺¹/k)
   = (−1)ⁿ[1/(n + 1) + 1/(n − 1)] = 2n(−1)ⁿ/(n² − 1),  n ≠ 1.
When n = 1,
b₁ = (2/π) ∫₀^π x cos x sin x dx = (1/π) ∫₀^π x sin 2x dx = (1/π)(−π/2) = −1/2.
Hence
f(x) = −(1/2) sin x + Σₙ₌₂^∞ [2n(−1)ⁿ/(n² − 1)] sin nx = −(1/2) sin x + 2[2 sin 2x/3 − 3 sin 3x/8 + 4 sin 4x/15 − …].
6  Obtain the half-range cosine series for f(x) = (x − 2)² in the interval 0 < x < 2. Deduce that Σₙ₌₁^∞ 1/(2n − 1)² = π²/8.
Sol  Here l = 2, a₀ = ∫₀² (x − 2)² dx = 8/3 and aₙ = ∫₀² (x − 2)² cos(nπx/2) dx = 16/(n²π²), so
f(x) = 4/3 + (16/π²) Σₙ₌₁^∞ cos(nπx/2)/n².
Putting x = 0 (f(0) = 4) and x = 2 (f(2) = 0) in this series gives
π²/6 = 1/1² + 1/2² + 1/3² + …  and  π²/12 = 1/1² − 1/2² + 1/3² − …
Adding these two results, we get
π²/6 + π²/12 = 2[1/1² + 1/3² + 1/5² + …],
i.e. 3π²/12 = 2[1/1² + 1/3² + 1/5² + …], so
π²/8 = 1/1² + 1/3² + 1/5² + …,  i.e.  Σₙ₌₁^∞ 1/(2n − 1)² = π²/8.
UNIT-II
FOURIER TRANSFORMS
Integral Transform
The integral transform of a function f(x) is given by
I[f(x)] or F(s) = ∫ₐᵇ f(x) k(s, x) dx,
where k(s, x) is a known function called the kernel of the transform,
s is called the parameter of the transform, and
f(x) is called the inverse transform of F(s).
Fourier transform: k(s, x) = e^{isx},  F[f(x)] = F(s) = ∫_{−∞}^∞ f(x) e^{isx} dx.
Laplace transform: k(s, x) = e^{−sx},  L[f(x)] = F(s) = ∫₀^∞ f(x) e^{−sx} dx.
Hankel transform: k(s, x) = x Jₙ(sx),  H[f(x)] = H(s) = ∫₀^∞ f(x) x Jₙ(sx) dx.
Mellin transform: k(s, x) = x^{s−1},  M[f(x)] = M(s) = ∫₀^∞ f(x) x^{s−1} dx.
DIRICHLET’S CONDITION
A function f(x) is said to satisfy Dirichlet’s conditions in the interval (a,b) if
1. f(x) is defined and single-valued, except possibly at a finite number of points in the interval (a, b);
2. f(x) and f′(x) are piecewise continuous in (a, b).
Fourier integral theorem
If f(x) is a given function defined in (−l, l) and satisfies the Dirichlet conditions, then
f(x) = (1/π) ∫₀^∞ ∫_{−∞}^∞ f(t) cos λ(t − x) dt dλ.
Proof (outline): Start from the Fourier series of f(x) in (−L, L),
f(x) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nπx/L) + bₙ sin(nπx/L)],
where
a₀ = (1/L) ∫₋L^L f(t) dt,  aₙ = (1/L) ∫₋L^L f(t) cos(nπt/L) dt,  bₙ = (1/L) ∫₋L^L f(t) sin(nπt/L) dt.
Letting L → ∞, the sum over n passes into an integral over λ, which gives the stated result.
1  Using the Fourier integral representation, show that
∫₀^∞ (sin λ cos λx)/λ dλ = π/2, |x| < 1;  = 0, |x| > 1.
At x = ±1 the integral equals π/4, and at x = 0,
∫₀^∞ (sin λ)/λ dλ = π/2.
2  Using the Fourier integral, show that
e^{−x} cos x = (2/π) ∫₀^∞ [(λ² + 2)/(λ⁴ + 4)] cos λx dλ,
i.e. f(x) = (2/π) ∫₀^∞ [(λ² + 2)/(λ⁴ + 4)] cos λx dλ.
FOURIER TRANSFORMS
The complex form of the Fourier integral of a function f(x) is
f(x) = (1/2π) ∫_{−∞}^∞ e^{−iλx} [∫_{−∞}^∞ f(t) e^{iλt} dt] dλ.
Replacing λ by s,
f(x) = (1/2π) ∫_{−∞}^∞ e^{−isx} [∫_{−∞}^∞ f(t) e^{ist} dt] ds.
Let F(s) = ∫_{−∞}^∞ f(t) e^{ist} dt; then
f(x) = (1/2π) ∫_{−∞}^∞ F(s) e^{−isx} ds.
Here F(s) is called the Fourier transform of f(x), and f(x) is called the inverse Fourier transform of F(s).
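The definition above can be checked numerically; the following sketch (not part of the original notes, scipy assumed) computes F(s) by quadrature for the rectangle function used in Problem 1 below, whose transform is 2 sin s / s.

```python
import numpy as np
from scipy.integrate import quad

# F(s) = integral f(t) e^{i s t} dt, split into real and imaginary parts.
def fourier_transform(f, s, a=-2.0, b=2.0):
    # integration limits chosen to cover the support of f in this test
    re = quad(lambda t: f(t) * np.cos(s * t), a, b)[0]
    im = quad(lambda t: f(t) * np.sin(s * t), a, b)[0]
    return re + 1j * im

rect = lambda t: 1.0 if abs(t) < 1 else 0.0
for s in (0.5, 1.0, 2.0):
    print(fourier_transform(rect, s), 2 * np.sin(s) / s)   # should agree
```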
Alternative Definitions
F[f(t)] = F(s) = (1/√(2π)) ∫_{−∞}^∞ f(t) e^{ist} dt,  f(x) = (1/√(2π)) ∫_{−∞}^∞ F(s) e^{−isx} ds;
or
F(s) = ∫_{−∞}^∞ f(x) e^{−isx} dx,  f(x) = (1/2π) ∫_{−∞}^∞ F(s) e^{isx} ds.
Finite Fourier cosine and sine transforms on (0, l):
F_C[f(t)] = F_C(n) = ∫₀^l f(t) cos(nπt/l) dt,  with  f(x) = (1/l) F_C(0) + (2/l) Σₙ₌₁^∞ F_C(n) cos(nπx/l);
F_S[f(t)] = F_S(n) = ∫₀^l f(t) sin(nπt/l) dt,  with  f(x) = (2/l) Σₙ₌₁^∞ F_S(n) sin(nπx/l).
Alternative Definitions (infinite cosine and sine transforms):
1. F_C(s) = ∫₀^∞ f(x) cos sx dx,  f(x) = (2/π) ∫₀^∞ F_C(s) cos sx ds;
2. F_S(s) = ∫₀^∞ f(x) sin sx dx,  f(x) = (2/π) ∫₀^∞ F_S(s) sin sx ds.
Properties
1  Shifting: F[f(x − a)] = e^{ias} F(s).
Proof: F[f(x − a)] = (1/√(2π)) ∫ f(t − a) e^{ist} dt. Put t − a = z, dt = dz:
F[f(x − a)] = (1/√(2π)) ∫ f(z) e^{isz} e^{ias} dz = e^{ias} F(s).
2  F[e^{iax} f(x)] = F(s + a).
3  Change of scale property: F[f(ax)] = (1/a) F(s/a), a > 0.
Proof: F[f(ax)] = (1/√(2π)) ∫ f(at) e^{ist} dt. Put at = z, dt = dz/a:
F[f(ax)] = (1/a)(1/√(2π)) ∫ f(z) e^{i(s/a)z} dz = (1/a) F(s/a).
4  Multiplication property: F[xⁿ f(x)] = (−i)ⁿ dⁿF/dsⁿ.
Proof: F[f(x)] = F(s) = (1/√(2π)) ∫ f(t) e^{ist} dt, so
dF/ds = (i/√(2π)) ∫ t f(t) e^{ist} dt,  d²F/ds² = (i²/√(2π)) ∫ t² f(t) e^{ist} dt,
and continuing, dⁿF/dsⁿ = (iⁿ/√(2π)) ∫ tⁿ f(t) e^{ist} dt; hence F[xⁿ f(x)] = (−i)ⁿ dⁿF/dsⁿ.
5  Modulation theorem: F[f(x) cos ax] = ½[F(s + a) + F(s − a)], where F(s) = F[f(x)].
Proof: F[f(x) cos ax] = (1/√(2π)) ∫ f(t) cos at e^{ist} dt = (1/√(2π)) ∫ f(t) [(e^{iat} + e^{−iat})/2] e^{ist} dt
= ½[(1/√(2π)) ∫ f(t) e^{i(s+a)t} dt + (1/√(2π)) ∫ f(t) e^{i(s−a)t} dt] = ½[F(s + a) + F(s − a)].
Problems
1  Find the Fourier transform of f(x) = 1, |x| < 1; 0, |x| > 1. Hence evaluate ∫₀^∞ (sin x)/x dx.
Sol:  F[f(x)] = ∫_{−∞}^∞ f(x) e^{isx} dx = ∫_{−1}^1 e^{isx} dx = [e^{isx}/(is)]_{−1}^1 = (e^{is} − e^{−is})/(is) = 2 sin s / s.
By the inversion formula,
f(x) = (1/2π) ∫_{−∞}^∞ F(s) e^{−isx} ds = (1/2π) ∫_{−∞}^∞ (2 sin s / s) e^{−isx} ds,
so
(1/π) ∫_{−∞}^∞ (sin s / s) e^{−isx} ds = 1, |x| < 1;  0, |x| > 1.
Putting x = 0,
∫_{−∞}^∞ (sin s)/s ds = π,  hence  ∫₀^∞ (sin s)/s ds = π/2.
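A quick numerical check of the deduction ∫₀^∞ (sin x)/x dx = π/2 (an added sketch; scipy assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

# sici returns (Si(t), Ci(t)); Si(t) -> pi/2 as t -> infinity.
print(sici(1e6)[0], np.pi / 2)

# The same value via truncated, piecewise quadrature of the oscillatory integrand:
val = sum(quad(lambda x: np.sin(x) / x, k * np.pi, (k + 1) * np.pi)[0]
          for k in range(500))
print(val, np.pi / 2)
```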
2  Find the Fourier transform of f(x) = 1 − x², |x| ≤ 1; 0, |x| > 1.
Hence evaluate ∫₀^∞ [(x cos x − sin x)/x³] cos(x/2) dx.
Sol:  F[f(x)] = ∫_{−1}^1 (1 − x²) e^{isx} dx = [(1 − x²) e^{isx}/(is) + 2x e^{isx}/(is)² − 2 e^{isx}/(is)³]_{−1}^1
= (4/s³)(sin s − s cos s).
By the inversion formula,
(1/2π) ∫_{−∞}^∞ (4/s³)(sin s − s cos s) e^{−isx} ds = 1 − x², |x| ≤ 1;  0, |x| > 1.
Putting x = 1/2 and equating real parts,
(2/π) ∫_{−∞}^∞ [(sin s − s cos s)/s³] cos(s/2) ds = 3/4,
so ∫₀^∞ [(sin s − s cos s)/s³] cos(s/2) ds = 3π/16, i.e. ∫₀^∞ [(s cos s − sin s)/s³] cos(s/2) ds = −3π/16.
3  Find the Fourier transform of f(x) = e^{−a²x²}, a > 0. Hence show that e^{−x²/2} is self-reciprocal with respect to the Fourier transform.
Sol:  F[f(x)] = ∫_{−∞}^∞ e^{−a²x²} e^{isx} dx = ∫_{−∞}^∞ e^{−a²(x² − isx/a²)} dx = e^{−s²/4a²} ∫_{−∞}^∞ e^{−a²(x − is/2a²)²} dx.
Putting t = a(x − is/2a²), dx = dt/a,
F[f(x)] = (e^{−s²/4a²}/a) ∫_{−∞}^∞ e^{−t²} dt = (√π/a) e^{−s²/4a²}.
Taking a² = 1/2,
F[e^{−x²/2}] = √(2π) e^{−s²/2}.
Hence e^{−x²/2} is self-reciprocal in respect of the Fourier transform.
4  Show that ∫₀^∞ e^{−x²} cos sx dx = (√π/2) e^{−s²/4}.
Sol:  Let I(s) = ∫₀^∞ e^{−x²} cos sx dx. Differentiating under the integral sign and integrating by parts,
dI/ds = −∫₀^∞ x e^{−x²} sin sx dx = −(s/2) I,  i.e.  dI/I = −(s/2) ds.
Integrating on both sides,
log I = −s²/4 + log c,  so  I = c e^{−s²/4}.
Putting s = 0, c = I(0) = ∫₀^∞ e^{−x²} dx = √π/2. Hence
∫₀^∞ e^{−x²} cos sx dx = (√π/2) e^{−s²/4}.
5  Find the Fourier sine transform of e^{−x}. Hence show that
∫₀^∞ [x sin mx/(1 + x²)] dx = (π/2) e^{−m},  m > 0.
Sol:  F_S(s) = ∫₀^∞ e^{−x} sin sx dx = s/(1 + s²).
By the inversion formula for the sine transform,
e^{−x} = (2/π) ∫₀^∞ [s/(1 + s²)] sin sx ds,
and putting x = m gives ∫₀^∞ [x sin mx/(1 + x²)] dx = (π/2) e^{−m}.
6  Find the Fourier cosine transform of f(x) = x, 0 < x < 1;  2 − x, 1 < x < 2;  0, x > 2.
Sol:  F_c(f(x)) = ∫₀^∞ f(x) cos sx dx = ∫₀¹ x cos sx dx + ∫₁² (2 − x) cos sx dx + ∫₂^∞ 0 · cos sx dx
= (2 cos s − cos 2s − 1)/s².
7  If the Fourier sine transform of f(x) is F_s(n) = (1 − cos nπ)/(n²π²), find f(x).
Sol:  By the inversion formula for the finite sine transform on (0, π),
f(x) = (2/π) Σₙ₌₁^∞ F_s(n) sin nx = (2/π) Σₙ₌₁^∞ [(1 − cos nπ)/(n²π²)] sin nx = (2/π³) Σₙ₌₁^∞ [(1 − (−1)ⁿ)/n²] sin nx.
UNIT-III
LAPLACE TRANSFORM
Definition of Laplace transform
Properties of Laplace transform
Laplace transforms of derivatives and integrals
Inverse Laplace transform
Properties of Inverse Laplace transform
Convolution theorem and applications
Introduction
In mathematics the Laplace transform is an integral transform named after its
discoverer Pierre-Simon Laplace . It takes a function of a positive real variable t (often time)
to a function of a complex variable s (frequency).The Laplace transform is very similar to
the Fourier transform. While the Fourier transform of a function is a complex function of
a real variable (frequency), the Laplace transform of a function is a complex function of
a complex variable. Laplace transforms are usually restricted to functions of t with t > 0. A
consequence of this restriction is that the Laplace transform of a function is a holomorphic
function of the variable s. Unlike the Fourier transform, the Laplace transform of
a distribution is generally a well-behaved function. Also techniques of complex variables can
be used directly to study Laplace transforms. As a holomorphic function, the Laplace
transform has a power series representation. This power series expresses a function as a linear
superposition of moments of the function. This perspective has applications in probability
theory.
Definition
Let f(t) be a given function defined for all positive values of t. If
F(s) = ∫₀^∞ e^{−st} f(t) dt
exists, then F(s) is called the Laplace transform of f(t) and is denoted by
L{f(t)} = F(s) = ∫₀^∞ e^{−st} f(t) dt.
The inverse transform, or inverse of L{f(t)} or F(s), is
f(t) = L⁻¹{F(s)},
where s is a real or complex parameter.
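The definition can be checked numerically by quadrature. The following sketch (not part of the original notes; numpy and scipy assumed, with an illustrative truncation of the infinite integral) compares against standard results derived below.

```python
import numpy as np
from scipy.integrate import quad

# Numerical Laplace transform F(s) = integral_0^inf e^{-s t} f(t) dt (truncated).
def laplace_transform(f, s, upper=50.0):
    return quad(lambda t: np.exp(-s * t) * f(t), 0, upper)[0]

s, a = 3.0, 2.0
print(laplace_transform(lambda t: np.exp(a * t), s), 1 / (s - a))        # L[e^{at}]
print(laplace_transform(lambda t: np.sin(a * t), s), a / (s**2 + a**2))  # L[sin at]
```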
1. L[1] = ∫₀^∞ e^{−st} dt = 1/s,  s > 0.
2. L[tᵃ] = ∫₀^∞ tᵃ e^{−st} dt = (1/sᵃ⁺¹) ∫₀^∞ uᵃ e^{−u} du = Γ(a + 1)/sᵃ⁺¹  (putting u = st).
3. L[eᵃᵗ] = ∫₀^∞ eᵃᵗ e^{−st} dt = [−e^{−(s−a)t}/(s − a)]₀^∞ = 1/(s − a),  s > a.
4. L[e^{iat}] = L[cos at + i sin at] = 1/(s − ia) = s/(s² + a²) + i a/(s² + a²),
   so L[cos at] = s/(s² + a²) and L[sin at] = a/(s² + a²).
5. L[sinh at] = L[(eᵃᵗ − e⁻ᵃᵗ)/2] = ½[1/(s − a) − 1/(s + a)] = a/(s² − a²),
   L[cosh at] = L[(eᵃᵗ + e⁻ᵃᵗ)/2] = ½[1/(s − a) + 1/(s + a)] = s/(s² − a²).
1. Linearity
L[a f(t) + b g(t)] = ∫₀^∞ [a f(t) + b g(t)] e^{−st} dt = a ∫₀^∞ f(t) e^{−st} dt + b ∫₀^∞ g(t) e^{−st} dt = a F(s) + b G(s).
EX: Find the Laplace transform of cos² t.
Solution: L[cos² t] = L[(1 + cos 2t)/2] = ½ [1/s + s/(s² + 4)] = (s² + 2)/[s(s² + 4)].
2. Shifting
(a) L[f(t − a) u(t − a)] = ∫₀^∞ f(t − a) u(t − a) e^{−st} dt = ∫ₐ^∞ f(t − a) e^{−st} dt.
Let τ = t − a; then
L[f(t − a) u(t − a)] = ∫₀^∞ f(τ) e^{−s(τ+a)} dτ = e^{−sa} ∫₀^∞ f(τ) e^{−sτ} dτ = e^{−sa} F(s).
(b) F(s − a) = ∫₀^∞ f(t) e^{−(s−a)t} dt = ∫₀^∞ [eᵃᵗ f(t)] e^{−st} dt = L[eᵃᵗ f(t)].
EX: What is the Laplace transform of the function f(t) = 0, t < 4; 2t³, t > 4?
Solution: f(t) = 2t³ u(t − 4) = 2[(t − 4)³ + 12(t − 4)² + 48(t − 4) + 64] u(t − 4), so
L[f(t)] = 2e^{−4s}[3!/s⁴ + 12·2!/s³ + 48/s² + 64/s] = 4e^{−4s}[3/s⁴ + 12/s³ + 24/s² + 32/s].
3. Scaling
L[f(at)] = ∫₀^∞ f(at) e^{−st} dt. Let τ = at; then
L[f(at)] = (1/a) ∫₀^∞ f(τ) e^{−(s/a)τ} dτ = (1/a) F(s/a).
EX: Find the Laplace transform of cos 2t.
Solution: L[cos t] = s/(s² + 1), so
L[cos 2t] = ½ · (s/2)/[(s/2)² + 1] = s/(s² + 4).
4. Derivative
(a) Derivative of the original function:
L[f′(t)] = ∫₀^∞ f′(t) e^{−st} dt = [f(t) e^{−st}]₀^∞ + s ∫₀^∞ f(t) e^{−st} dt = s F(s) − f(0),
provided f is continuous. If f has a jump at t = a, the formula becomes
L[f′(t)] = s F(s) − f(0) − e^{−sa}[f(a⁺) − f(a⁻)],
and similarly, if f(t) is not continuous at t = a₁, a₂, …, aₙ,
L[f′(t)] = s F(s) − f(0) − Σᵢ₌₁ⁿ e^{−saᵢ}[f(aᵢ⁺) − f(aᵢ⁻)].
If f, f′, …, f⁽ⁿ⁻¹⁾ are continuous, f⁽ⁿ⁾ is piecewise continuous, and all of them are of exponential order, then
L[f⁽ⁿ⁾(t)] = sⁿ F(s) − Σᵢ₌₁ⁿ sⁿ⁻ⁱ f⁽ⁱ⁻¹⁾(0).
(b) Derivative of the transform: L[(−t)ⁿ f(t)] = dⁿF(s)/dsⁿ.
EX: f(t) = t², 0 ≤ t < 1; 0, t > 1. Find L[f(t)].
Solution: f(t) = t²[u(t) − u(t − 1)], so
L[f(t)] = L[t² u(t)] − L[t² u(t − 1)] = 2/s³ − L{[(t − 1)² + 2(t − 1) + 1] u(t − 1)}
        = 2/s³ − e^{−s}(2/s³ + 2/s² + 1/s).
As a check with the jump formula,
L[f′(t)] = s F(s) − f(0) − e^{−s}[f(1⁺) − f(1⁻)] = [2/s² − e^{−s}(2/s² + 2/s + 1)] − 0 + e^{−s} = 2/s² − e^{−s}(2/s² + 2/s).
5. Integration
L[∫₀ᵗ f(τ) dτ] = ∫₀^∞ [∫₀ᵗ f(τ) dτ] e^{−st} dt = [−(1/s) e^{−st} ∫₀ᵗ f(τ) dτ]₀^∞ + (1/s) ∫₀^∞ f(t) e^{−st} dt = F(s)/s.
More generally, for the n-fold repeated integral,
L[∫₀ᵗ ⋯ ∫₀ᵗ f(t) dt ⋯ dt] = F(s)/sⁿ.
s s 1
0 ln ln
s 1 s
1 e t s 1 s 1 1 1
(b) L [ 2 ] ln ds s ln s( )ds
t s s s s
s s 1 s
s 1 1 s 1
s ln ds s ln ln( s 1)
s s
s s 1 s s
( s 1) ln( s 1) s ln s s s ln s ( s 1) ln( s 1)
EX: Find (a) ∫₀^∞ (sin kt) e^{−st}/t dt,  (b) ∫_{−∞}^∞ (sin x)/x dx.
Solution: (a) ∫₀^∞ (sin kt) e^{−st}/t dt = L[(sin kt)/t].
Since L[sin kt] = k/(s² + k²), the division-by-t property L[f(t)/t] = ∫ₛ^∞ F(u) du gives
L[(sin kt)/t] = ∫ₛ^∞ k/(u² + k²) du = [tan⁻¹(u/k)]ₛ^∞ = π/2 − tan⁻¹(s/k) = tan⁻¹(k/s).
(b) ∫_{−∞}^∞ (sin x)/x dx = 2 ∫₀^∞ (sin x)/x dx = 2 lim_{k→1, s→0} ∫₀^∞ (sin kt) e^{−st}/t dt = 2 lim_{s→0} [π/2 − tan⁻¹ s] = π.
6. Convolution theorem
L[∫₀ᵗ f(τ) g(t − τ) dτ] = ∫₀^∞ [∫₀ᵗ f(τ) g(t − τ) dτ] e^{−st} dt = ∫₀^∞ ∫_τ^∞ f(τ) g(t − τ) e^{−st} dt dτ.
Let u = t − τ, du = dt; then
L[∫₀ᵗ f(τ) g(t − τ) dτ] = ∫₀^∞ ∫₀^∞ f(τ) g(u) e^{−s(τ+u)} du dτ = ∫₀^∞ f(τ) e^{−sτ} dτ · ∫₀^∞ g(u) e^{−su} du = F(s) G(s).
EX: Find the Laplace transform of ∫₀ᵗ e^{−τ} sin 2(t − τ) dτ.
Solution: L[e⁻ᵗ] = 1/(s + 1), L[sin 2t] = 2/(s² + 4), so
L[∫₀ᵗ e^{−τ} sin 2(t − τ) dτ] = L[e⁻ᵗ * sin 2t] = L[e⁻ᵗ] L[sin 2t] = 2/[(s + 1)(s² + 4)].
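The convolution theorem for this example can be verified numerically; the sketch below (not from the notes, scipy assumed) compares the transform of the convolution with the product of transforms at one value of s.

```python
import numpy as np
from scipy.integrate import quad

def laplace(h, s, upper=60.0):
    return quad(lambda t: np.exp(-s * t) * h(t), 0, upper)[0]

def convolution(t):
    # (e^{-t} * sin 2t)(t) = integral_0^t e^{-tau} sin 2(t - tau) dtau
    return quad(lambda tau: np.exp(-tau) * np.sin(2 * (t - tau)), 0, t)[0]

s = 2.0
lhs = laplace(convolution, s)
rhs = 2.0 / ((s + 1) * (s**2 + 4))     # = L[e^{-t}] * L[sin 2t]
print(lhs, rhs)                         # both approximately 1/12
```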
7. Periodic functions
If f(t + T) = f(t), then
L[f(t)] = ∫₀^T f(t) e^{−st} dt + ∫_T^{2T} f(t) e^{−st} dt + ⋯,
and ∫_T^{2T} f(t) e^{−st} dt = ∫₀^T f(u + T) e^{−s(u+T)} du = e^{−sT} ∫₀^T f(u) e^{−su} du.
Similarly, ∫_{2T}^{3T} f(t) e^{−st} dt = e^{−2sT} ∫₀^T f(u) e^{−su} du, and so on. Hence
L[f(t)] = (1 + e^{−sT} + e^{−2sT} + ⋯) ∫₀^T f(t) e^{−st} dt = [1/(1 − e^{−sT})] ∫₀^T f(t) e^{−st} dt.
EX: Find the Laplace transform of f(t) = (k/p) t, 0 < t < p, f(t + p) = f(t).
Solution: L[f(t)] = [1/(1 − e^{−ps})] ∫₀^p (k/p) t e^{−st} dt
= [k/(p(1 − e^{−ps}))] [−(t/s) e^{−st} − (1/s²) e^{−st}]₀^p
= [k/(p s(1 − e^{−ps}))] [(1 − e^{−ps})/s − p e^{−ps}] = k/(p s²) − k e^{−ps}/[s(1 − e^{−ps})].
8. Initial Value Theorem: lim_{t→0} f(t) = lim_{s→∞} s F(s).
General initial value theorem: lim_{t→0} f(t)/g(t) = lim_{s→∞} F(s)/G(s).
9. Final Value Theorem:
L[f′(t)] = s F(s) − f(0), and lim_{s→0} ∫₀^∞ f′(t) e^{−st} dt = f(∞) − f(0) = lim_{s→0} s F(s) − f(0),
so lim_{t→∞} f(t) = lim_{s→0} s F(s).
General final value theorem: lim_{t→∞} f(t)/g(t) = lim_{s→0} F(s)/G(s).
EX: Find L[∫₀ᵗ (sin x)/x dx].
Solution: Let f(t) = ∫₀ᵗ (sin x)/x dx, so f′(t) = (sin t)/t and f(0) = 0.
L[t f′(t)] = L[sin t] = 1/(s² + 1),
and L[t f′(t)] = −(d/ds) L[f′(t)] = −(d/ds)[s F(s) − f(0)], so
(d/ds)[s F(s)] = −1/(s² + 1),  giving  s F(s) = −tan⁻¹ s + C.
From the initial value theorem, lim_{t→0} f(t) = lim_{s→∞} s F(s), so 0 = −π/2 + C and C = π/2. Hence
s F(s) = π/2 − tan⁻¹ s = tan⁻¹(1/s),  i.e.  F(s) = (1/s) tan⁻¹(1/s).
EX: Find L[∫ₜ^∞ (e⁻ˣ/x) dx].
Solution: Let f(t) = ∫ₜ^∞ (e⁻ˣ/x) dx, so f′(t) = −e⁻ᵗ/t and lim_{t→∞} f(t) = 0.
L[t f′(t)] = L[−e⁻ᵗ] = −1/(s + 1),
and L[t f′(t)] = −(d/ds)[s F(s) − f(0)], so (d/ds)[s F(s)] = 1/(s + 1) and s F(s) = ln(s + 1) + C.
From the final value theorem, lim_{t→∞} f(t) = lim_{s→0} s F(s) = 0, so C = 0 and
F(s) = ln(s + 1)/s.
Note: ∫₀ᵗ (sin x)/x dx and ∫ₜ^∞ (e⁻ˣ/x) dx are called the sine integral and the exponential integral function, respectively.
I. Inversion from Basic Properties
1. Linearity
Ex. 1.
1 2s 1 4( s 1) 1
(a ) L [ ] ( b) L]. [
s2 4 s 2 16
2s 1 s 1 2 1
Solution : (a ) L 1[ 2 ] L 1[2 2 ] 2 cos 2t sin 2t
s 4 s 2 2
2 s 2
2 2
2
4( s 1) s 4
(b) L 1[ 2 ] L 1[4 2 2 ] 4 cosh 4t sinh 4t
s 16 s 4 2
s 42
2. Shifting
Ex. 2.
1 e s 2s 3 1
(a ) L [ ] ( b) L ]. [
s 2 2s 2 s 3s 2 2
e s e s
Solution : ( a ) L 1[ 2 ] L 1[ ]
s 2s 2 ( s 1) 2 1
1
L 1[ ] e t sin t
( s 1) 1
2
3
2( s )
2s 3
3
t t
1
( b) L [ 2 ] L [ 1 2 ] 2e cosh
2
s 3s 2 3 1
(s )2 ( )2 2
2 2
3. Scaling
Ex. 3.
1 4s
L [ ].
16s 2 4
1 4s 1 4s 1 1 1 t
Solution : L [ ] L [ ] cosh 2 t cosh
16s 2 4 (4s) 2
2 2
4 4 4 2
4. Derivative
Ex. 4.
1 1 1 sa
(a ) L [ ] ( b) L [ln ].
( s 2 ) 2
2
sb
d 2s
solution : ( a ) L [sin t ] L [t sin t ] ( 2 ) 2
s 22
ds s 2
( s 2 ) 2
2s
Let F (t ) t sin t L [ F ' (t )] s 2 F ( 0)
( s 2 ) 2
s2 ( s 2 2 ) 2 1 2
L [ F ' (t )] 2 2 [ ] 2 [ ]
( s 2 2 ) 2 ( s 2 2 ) 2 s 2 2 ( s 2 2 ) 2
2 3
2 L [sin t ]
( s 2 2 ) 2
1 1
L [2 sin t F ' (t )]
(s )
2 2 2
2 3
1 1 1
L 1[ 2 ] [2 sin t F ' (t )] (sin t t cos t )
(s ) 2 2
2 3
2 3
sa
(b) Let L [ f (t )] ln ln( s a ) ln( s b)
sb
d 1 1
L [tf (t )] [ln( s a ) ln( s b)] L [e bt e at ]
ds sb sa
bt at
e e
f (t )
t
5. Integration
Ex. 5.
1 1 s 1 sa 1
(a ) L [ ( )] ( b) L]. [ln
s2 s 1 sb
1 s 1 1 1 t t t
Solution : ( a ) L 1[ 2 ( )] L 1[ 2 ] e t dt e t dtdt
s s 1 s( s 1) s ( s 1) 0 0 0
t
( e t 1) ( e t 1)dt ( e t 1) ( e t 1) t 2 2e t t
0
1 1
(b) L [e bt e at ]
sb sa
e bt e at 1 1 sb sa
L [ ] ( )ds ln ln
t s sb sa sa s sb
bt at
1 sa e e
L [ln ]
sb t
6. Convolution
Ex. 6.
1 1 1 s
(a ) L [ ] ( b) L [ ].
( s 2 ) 2
2
( s 2 ) 2
2
1 1
Solution : ( a ) L [sin t ] L [ sin t ] 2
s 2
2
s 2
1 1 t
L 1[ 2 ] 2 sin sin (t )d
(s )
2 2
0
1 t1
2 [cos( t ) cos( t )]d
02
t
1 t 1 1
2 2 0 [cos(2 t ) cos t ]d 22 2 sin(2 t ) cos t
0
1 1 1
{[ (sin t sin( t )] t cos t} (sin t t cos t )
2 2
2
23
1 1 s
(b) L [ sin t ] 2 L [cos t ] 2
s 2
s 2
s 1 t
L 1[ 2 ] sin cos (t )d
(s )2 2
0
1 t1
[sin( t ) sin( t )]d
02
1
t
1 t 1
2 0
[sin t sin(2 t )]d
2
sin t
2
cos(2 t )
0
1 1 t
{t sin t [cos t cos( t )]} sin t
2 2 2
P( s)
If F(s) , where deg[P(s)]<deg[Q(s)]
Q( s)
1 P(ak )
P( a k ) lim
s ak Q ' ( s ) Q ' (ak )
P( s ) P( a1 ) / Q ' ( a1 ) P( a 2 ) / Q ' ( a 2 ) P(an ) / Q ' (an )
Q( s) s a1 s a2 s an
1 P( s ) P( a1 ) a1t P( a 2 ) a2t P ( a n ) an t
L [ ] e e e
Q ( s ) Q ' ( a1 ) Q ' (a2 ) Q ' (an )
Ex. 7.
1 s 1
L [ ].
s s 2 6s
3
s 1 s 1 A A A
Solution : 1 2 3
s s 6s s( s 2)( s 3) s s 2 s 3
3 2
s 1 1
A1 lim
s 0 ( s 2)( s 3) 6
s 1 3
A2 lim
s 2 s ( s 3) 10
s 1 2
A3 lim
s 3 s ( s 2) 15
1 3 2
s 1 1 3 2
L 1[ 3 ] 6 10 15 e 2 t e 3t
s s 6s
2
s s2 s3 6 10 15
d P( s )
Cm 1 lim { [ ( s a k ) m ]}
s ak ds Q ( s )
d 2 P( s ) 1
Cm 2 lim {[ ( s a k ) m ]}
s ak ds 2 Q ( s ) 2!
d m 1 P( s ) 1
C1 lim { m 1
[ ( s a k ) m ]}
s ak ds Q( s) ( m 1)!
1 P( s ) t m 1 t
L [ ] e ak t [Cm C m 1 m 2 C 2 t C1 ]
Q( s) ( m 1)! ( m 2)!
Ex. 8.
1 s 4 7s 3 13s 2 4s 12
L [ ].
s 2 ( s 1)( s 2)( s 3)
s 4 7 s 3 13s 2 4 s 12 C 2 C1 A A A
Solution : 2 1 2 3
s ( s 1)( s 2)( s 3)
2
s s s 1 s 2 s 3
s 4 7 s 3 13s 2 4 s 12 12
C 2 lim 2
s 0 ( s 1)( s 2)( s 3) 6
d s 4 7 s 3 13s 2 4 s 12
C1 lim [ ]
s 0 ds ( s 1)( s 2)( s 3)
4( 1)( 2)( 3) ( 12)[( 2)( 3) ( 1)( 3) ( 1)( 2)] 24 12 11
3
[( 1)( 2)( 3)]2 62
s 4 7 s 3 13s 2 4 s 12 1
A1 lim
s 1 s 2 ( s 2)( s 3) 2
s 4 7 s 3 13s 2 4 s 12 8
A2 lim 2
s 2 s ( s 1)( s 3)
2
4
s 4 7 s 3 13s 2 4 s 12 9 1
A3 lim
s 3 s 2 ( s 1)( s 2) 18 2
1 s 4 7 s 3 13s 2 4 s 12 1 1
L [ ] 2t 3 e t 2 e 2 t e 3 t
s 2 ( s 1)( s 2)( s 3) 2 2
R iI ( A ) iA
P( s )
where R and I are the real and imaginary parts of lim { [( s ) 2 2 ]}, respectively
s i Q( s)
A B R
then, , where we can get A and B, and
A I
1 P( s ) 1 A( s ) ( A B ) A B
L [ ] L [ ] e t A cos t sin t
Q( s) ( s )
2 2
Ex. 9.
1 s2
L [ ].
s4 4
s2 s2 s2
Solution :
s 4 4 ( s 2 ) 2 2 s 2 2 2 2 2 s 2 2 ( s 2 2) 2 ( 2 s ) 2
s2 AsB A sB
1 2 1 2 2 2
( s 2 s 2)( s 2 s 2) ( s 1) 1 ( s 1) 1
2 2
s2 2i
lim A1 ( 1 i ) B1 ( A1 B1 ) iA1
s 1i ( s 1) 1
2
4 4i
8 8i 1
( A1 B1 ) iA1 A1 , B1 0
32 4
2
s 2i
lim A2 (1 i ) B2 ( A2 B2 ) iA2
s 1i ( s 1) 1
2
4 4i
8 8i 1
( A2 B2 ) iA2 A2 , B2 0
32 4
1 1 1 1
2 ( s 1) ( s 1)
s
L 1[ 4 ] L 1[ 4 44 4]
s 4 ( s 1) 2 1 ( s 1) 2 1
e t et
( cos t sin t ) (cos t sin t )
4 4
A B R1
R1 iI 1 ( A B ) iA { , where A and B can be obtained
A I 1
d P( s ) d
lim { [( s ) 2 2 ]2 } A [C ( i) D ] lim [( s ) 2 2 ]
s i ds Q ( s ) s i ds
2i ( A B ) iA A 2, B 2
d 3 d
lim ( s 3s 2 6s 4) A [c(1 i ) D ] lim [( s 1) 2 1]
s 1i ds s 1i ds
0 A ( c ic D )2i ( A 2c ) 2i ( c D )
c 1, D 1
1 s 3 3s 2 6s 4 2( s 1) s 1
L [ ] L 1{ } L 1
[ ]
( s 2 s 2)
2 2
[( s 1) 2 1]2 ( s 1) 2 1
t
e t ( 2 sin t cos t ) e t (t sin t cos t )
2
Ex. 11.
1 1
L [ ].
( s 2 ) 2
2
d 1 2 d 1 2
Solution : ( 2 ) 2 L 1[ ( 2 )] L 1[ 2 ]
d s 2
(s )2 2
d s 2
( s 2 ) 2
1 d 1 d 1 1 t
2L 1[ 2 ] L 1[ 2 ] ( sin t ) 2 sin t cos t
(s )2 2
d s 2
d
1 1
L 1[ 2 ] (sin t t cos t )
(s )
2 2
23
Ex. 12.
1
L [e s ] .
e s e s e s
Solution : y e s
y , y
2 s 4s 4 s3
d 2
we get the equation 4 sy 2 y y 0 4 L [ (t y )] 2 L [ ty ] L [ y ] 0
dt
d dy 6t 1
4 (t 2 y ) 2ty y 0 4t 2 y '(6t 1) y 0 dt 0
dt y 4t 2
3 1
3 1
ln y ln t c1 y ct 2 e 4 t
2 4t
1
( )
1 1 1
L [t ] 1
2 2 , and L [ty ] L [ct e ]
2 4t
2
s
s
e s e s
1 1
while L [ty ] y L [ct 2 e 4 t ]
2 s 2 s
1 1 e s
2 4t
ct e 1
Apply general final value theorem lim lim 2 s c
2
t 1 s 0
2
t
s
1
1
y e 4t
2 t 3 / 2
Ex. 1.
Solve y″ + y′ + y = g(x), y(0) = 1, y′(0) = 0, where g(x) = 1, 0 ≤ x < 3; 3, x ≥ 3.
Solution: g(x) = u(x) + 2u(x − 3). Taking Laplace transforms,
[s²Y − s y(0) − y′(0)] + [sY − y(0)] + Y = 1/s + 2e^{−3s}/s,
(s² + s + 1) Y = s + 1 + 1/s + 2e^{−3s}/s,
Y = (s + 1)/(s² + s + 1) + 1/[s(s² + s + 1)] + 2e^{−3s}/[s(s² + s + 1)]
  = (s + 1)/(s² + s + 1) + [1/s − (s + 1)/(s² + s + 1)] + 2e^{−3s}[1/s − (s + 1)/(s² + s + 1)].
Since
L⁻¹[(s + 1)/(s² + s + 1)] = L⁻¹[((s + ½) + ½)/((s + ½)² + ¾)] = e^{−x/2}[cos(√3 x/2) + (1/√3) sin(√3 x/2)],
we obtain
y(x) = u(x) + 2u(x − 3){1 − e^{−(x−3)/2}[cos(√3(x − 3)/2) + (1/√3) sin(√3(x − 3)/2)]}.
Ex. 2.
y ' ' ' (t ) 2 y ' ' (t ) 5 y ' (t ) 0, y (0) 0, y ' (0) 1, y ( ) 1 .
8
Solution : [ s Y s y (0) sy' (0) y ' ' (0)] 2[ s Y sy(0) y ' (0)] 5[ sY y (0)] 0
3 2 2
y ' ' ( 0) c
sc2 A Ps Q
Y
s( s 2 s 5) s ( s 1) 2 2 2
2
sc2 c2
A lim 2
s 0 s 2 s 5 5
s c 2 1 c 2i c 3 4 2c
P(1 2i ) Q lim i
s 1 2 i s 1 2i 5 5
2c 2c 1
P , Q
5 5
c2 2c c3
y (t ) et ( cos 2t sin 2t )
5 5 10
c2 2c 1 c3 1
y( ) 1 1 e8 ( )c7
8 5 5 2 10 2
y (t ) 1 e t ( cos 2t sin 2t )
Ex. 3.
ty+(12t)y2y0, y(0)1, y(0)2.
d 2 d
Solution : [ s Y sy(0) y ' (0)] {[sY y (0)] 2 [ sY y (0)]} 2Y 0
ds ds
( s Y '2 sY 1) [( sY 1) 2( sY 'Y )] 2Y 0
2
( s 2 2 s )Y '( 2 s s 2 2)Y 0
dY ds
( s 2)Y ' Y ln Y ln( s 2) c1
Y s2
c
Y y (t ) ce 2 t
s2
y (0) 1, 1 c, y (t ) e 2 t
Ex. 4.
dx
dt 2 x y 2e
5t
dy , x ( 0) y ( 0) 0 .
x 2 y 3e 2 t
dt
2 2
sX x (0) 2 X Y s 5 ( s 2) X Y s 5
Solution :
3 3
sY y (0) X 2Y X ( s 2)Y
s2 s2
2 3
( s 2)
s5 s2 2 s 2 5s 7
X
( s 2) 2 1 ( s 1)( s 2)( s 3)( s 5)
2 3
( s 2)
3s 13
Y s5 s2
( s 2) 1
2
( s 1)( s 3)( s 5)
5/ 4 3 1 3/ 4 5 3
X x (t ) e t 3e 2 t e 3t e 5t
s 1 s 2 s 3 s 5 4 4
5/ 4 1 1/ 4 5 1
Y y ( t ) e t e 3t e 5 t
s 1 s 3 s 5 4 4
UNIT-IV
Z TRANSFORM
Definition of Z-transforms
Elementary properties
Inverse Z-transform
Convolution theorem
Formation and solution of difference equations.
Introduction
The z-transform is useful for the manipulation of discrete data sequences and has acquired a
new significance in the formulation and analysis of discrete-time systems. It is used
extensively today in the areas of applied mathematics, digital signal processing, control
theory, population science and economics. These discrete models are solved with difference
equations in a manner that is analogous to solving continuous models with differential
equations. The role played by the z-transform in the solution of difference equations
corresponds to that played by the Laplace transforms in the solution of differential equations.
Definition
If the function uₙ is defined for discrete values n = 0, 1, 2, … and uₙ = 0 for n < 0, then the Z-transform of uₙ is defined to be
Z(uₙ) = U(z) = Σₙ₌₀^∞ uₙ z⁻ⁿ.
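The definition can be illustrated with partial sums of the series; the sketch below (not part of the notes, numpy assumed) compares them with closed forms used in the problems later in this unit.

```python
import numpy as np

# Partial sums of U(z) = sum_{n>=0} u_n z^{-n}
def z_transform(u, z, N=200):
    n = np.arange(N)
    return np.sum(u(n) * float(z) ** (-n))

z = 5.0   # chosen inside the region of convergence of both examples below
print(z_transform(lambda n: n, z),        z / (z - 1) ** 2)   # Z{n}   = z/(z-1)^2
print(z_transform(lambda n: 3.0 ** n, z), z / (z - 3))        # Z{3^n} = z/(z-3)
```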
Time Shifting
If f[n] ↔ F(z), then f[n − n₀] ↔ z^{−n₀} F(z).
The ROC of Y(z) is the same as that of F(z), except that there are possible pole additions or deletions at z = 0 or z = ∞.
Proof: Let y[n] = f[n − n₀]. Then
Y(z) = Σₙ f[n − n₀] z⁻ⁿ.
Putting k = n − n₀, i.e. n = k + n₀, and substituting in the above equation,
Y(z) = Σₖ f[k] z^{−(k+n₀)} = z^{−n₀} F(z).
Multiplication by z₀ⁿ
If x[n] ↔ X(z), then z₀ⁿ x[n] ↔ X(z/z₀).
Proof: Y(z) = Σₙ z₀ⁿ x[n] z⁻ⁿ = Σₙ x[n] (z/z₀)⁻ⁿ = X(z/z₀).
The consequence is that pole and zero locations are scaled by z₀: if the ROC of X(z) is r_R < |z| < r_L, then the ROC of Y(z) is r_R < |z/z₀| < r_L, i.e. |z₀| r_R < |z| < |z₀| r_L.
Differentiation of X(z)
If f[n] ↔ F(z), then n f[n] ↔ −z dF(z)/dz, with ROC = R_f.
Proof:
F(z) = Σₙ f[n] z⁻ⁿ,
so dF(z)/dz = Σₙ (−n) f[n] z^{−n−1},  hence  −z dF(z)/dz = Σₙ n f[n] z⁻ⁿ = Z{n f[n]}.
Problems
1  Find the z-transform of 3n + 2 × 3ⁿ.
Sol  From the linearity property,
Z{3n + 2 × 3ⁿ} = 3 Z{n} + 2 Z{3ⁿ},
and from Table 1, Z{n} = z/(z − 1)² and Z{3ⁿ} = z/(z − 3) (the pair Z{rⁿ} = z/(z − r) with r = 3). Therefore
Z{3n + 2 × 3ⁿ} = 3z/(z − 1)² + 2z/(z − 3).
2  Find the z-transform of each of the following sequences:
(a) x(n) = 2ⁿ u(n) + 3(½)ⁿ u(n)
(b) x(n) = cos(nω₀) u(n).
Sol  (a) Because x(n) is a sum of two sequences of the form aⁿ u(n), using the linearity property of the z-transform and referring to Table 1 for the pair aⁿ u(n) ↔ 1/(1 − a z⁻¹),
X(z) = 1/(1 − 2z⁻¹) + 3/(1 − ½ z⁻¹) = [4 − (13/2) z⁻¹]/[(1 − 2z⁻¹)(1 − ½ z⁻¹)],  |z| > 2.
(b) Similarly, from the pair for cos(nω₀) u(n) in Table 1,
X(z) = (1 − z⁻¹ cos ω₀)/(1 − 2z⁻¹ cos ω₀ + z⁻²),  |z| > 1.
3  Find the inverse z-transform of F(z) = 2z/[(z − 2)(z − 1)²] (i) by long division, (ii) by partial fractions.
Sol  (i) Dividing out in powers of z⁻¹,
F(z) = Σₙ₌₀^∞ f(n) z⁻ⁿ = 2z⁻² + 8z⁻³ + 22z⁻⁴ + 52z⁻⁵ + 114z⁻⁶ + ⋯
(ii) Write
F(z) = 2z/[(z − 2)(z − 1)²] = k₁ z/(z − 2) + k₂ z/(z − 1) + k₃ z/(z − 1)².   (A)
To find k₁, multiply both sides of the equation by (z − 2), divide by z, and let z → 2: k₁ = 2/(z − 1)²|_{z=2} = 2.
Similarly, to find k₃, multiply both sides by (z − 1)², divide by z, and let z → 1: k₃ = 2/(z − 2)|_{z=1} = −2.
Finding k₂ requires going back to equation (A) and taking the derivative of both sides after multiplying by (z − 1)²/z; letting z → 1 gives k₂ = d/dz [2/(z − 2)]|_{z=1} = −2.
Hence
F(z) = 2z/(z − 2) − 2z/(z − 1) − 2z/(z − 1)²,
so f(n) = 2 · 2ⁿ − 2 − 2n, which reproduces the coefficients found by long division.
Convolution theorem
If uₙ = Z⁻¹[U(z)] and vₙ = Z⁻¹[V(z)], then Z⁻¹[U(z)·V(z)] = Σₘ₌₀ⁿ uₘ vₙ₋ₘ = uₙ * vₙ,
where the symbol * denotes the convolution operation.
Proof  We have uₙ = Z⁻¹[U(z)] and vₙ = Z⁻¹[V(z)], so
U(z)·V(z) = (u₀ + u₁z⁻¹ + u₂z⁻² + ⋯ + uₙz⁻ⁿ + ⋯)(v₀ + v₁z⁻¹ + v₂z⁻² + ⋯ + vₙz⁻ⁿ + ⋯)
          = Σₙ₌₀^∞ (u₀vₙ + u₁vₙ₋₁ + u₂vₙ₋₂ + ⋯ + uₙv₀) z⁻ⁿ,
and the coefficient of z⁻ⁿ is the required convolution sum.
EX  Use the convolution theorem to evaluate Z⁻¹[z²/((z − a)(z − b))].
Sol  Z⁻¹[z/(z − a)] = aⁿ and Z⁻¹[z/(z − b)] = bⁿ, so
Z⁻¹[z²/((z − a)(z − b))] = Z⁻¹[z/(z − a) · z/(z − b)] = aⁿ * bⁿ = Σₘ₌₀ⁿ aᵐ bⁿ⁻ᵐ
 = bⁿ [(a/b)ⁿ⁺¹ − 1]/[(a/b) − 1] = (aⁿ⁺¹ − bⁿ⁺¹)/(a − b).
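A short numerical check of this convolution result (an added sketch, numpy assumed):

```python
import numpy as np

# Z^{-1}[z^2/((z-a)(z-b))] = a^n * b^n = (a^{n+1} - b^{n+1})/(a - b)
a, b, N = 0.5, 0.8, 10
n = np.arange(N)
conv = np.convolve(a ** n, b ** n)[:N]            # sum_{m=0}^{n} a^m b^{n-m}
closed = (a ** (n + 1) - b ** (n + 1)) / (a - b)
print(np.allclose(conv, closed))                  # True
```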
(1) ∂²u/∂t² = c² ∂²u/∂x²   One-dimensional wave equation
(2) ∂u/∂t = c² ∂²u/∂x²   One-dimensional heat equation
(3) ∂²u/∂x² + ∂²u/∂y² = 0   Two-dimensional Laplace equation
(4) ∂²u/∂x² + ∂²u/∂y² = f(x, y)   Two-dimensional Poisson equation
Partial differential equations: An equation involving partial derivatives of one dependent variable with respect to more than one independent variable.
Notation: p = ∂z/∂x, q = ∂z/∂y, r = ∂²z/∂x², s = ∂²z/∂x∂y, t = ∂²z/∂y².
Problems
1  Form a partial differential equation by eliminating a, b, c from
x²/a² + y²/b² + z²/c² = 1.
Sol  Given x²/a² + y²/b² + z²/c² = 1.
Differentiating partially w.r.t. x and y, we have
2x/a² + (2z/c²) ∂z/∂x = 0,  i.e.  x/a² + (z/c²) p = 0  ……(1)
2y/b² + (2z/c²) ∂z/∂y = 0,  i.e.  y/b² + (z/c²) q = 0  ……(2)
Differentiating (1) partially w.r.t. x, we have
1/a² + (p/c²) ∂z/∂x + (z/c²) ∂p/∂x = 0,  i.e.  1/a² + p²/c² + (z/c²) r = 0  ……(3)
Multiplying (3) by x and then subtracting (1) from it,
(1/c²)(xzr + xp² − pz) = 0,  i.e.  xzr + xp² − pz = 0.
4  Find the differential equation of all spheres whose centres lie on the z-axis, with a given radius r.
Sol  The equation of the family of spheres having their centres on the z-axis and radius r is
x² + y² + (z − c)² = r²,
where c and r are arbitrary constants.
Differentiating this equation partially w.r.t. x and y, we get
2x + 2(z − c) ∂z/∂x = 0 ⇒ x + (z − c)p = 0  ……(1)
2y + 2(z − c) ∂z/∂y = 0 ⇒ y + (z − c)q = 0  ……(2)
From (1), z − c = −x/p  ……(3);  from (2), z − c = −y/q  ……(4)
From (3) and (4) we get −x/p = −y/q,
i.e. xq − yp = 0.
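The eliminated equation can be verified symbolically; the sketch below (not part of the notes) uses sympy, treating z as an implicit function of x and y.

```python
import sympy as sp

# Verify that z defined implicitly by x^2 + y^2 + (z - c)^2 = r^2
# satisfies x*q - y*p = 0, with p = dz/dx and q = dz/dy.
x, y, c, r = sp.symbols('x y c r')
z = sp.Function('z')(x, y)
eq = x**2 + y**2 + (z - c)**2 - r**2

# Implicit differentiation w.r.t. x and y, then solve for the derivatives
p = sp.solve(sp.diff(eq, x), sp.diff(z, x))[0]
q = sp.solve(sp.diff(eq, y), sp.diff(z, y))[0]
print(sp.simplify(x * q - y * p))   # 0
```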
Complete Integral : A solution in which the number of arbitrary constants is equal to the
number of independent variables is called complete integral or complete solution of the given
equation.
Particular Integral: A solution obtained by giving particular values to the arbitrary constants
in the complete integral is called a particular integral.
There are six types of non-linear partial differential equations of first order, as given below.
1. f(p, q) = 0
2. f(z, p, q) = 0
3. f₁(x, p) = f₂(y, q)
4. z = px + qy + f(p, q)
5. f(xᵐp, yⁿq) = 0 and f(xᵐp, yⁿq, z) = 0
6. f(pzᵐ, qzᵐ) = 0 and f₁(x, pzᵐ) = f₂(y, qzᵐ)
Charpit's Method:
We present here a general method for solving non-linear partial differential equations of first order, known as Charpit's method.
Let F(x, y, u, p, q) = 0 be a general non-linear partial differential equation of first order. Since u depends on x and y, we have
du = uₓ dx + u_y dy = p dx + q dy,  where p = uₓ = ∂u/∂x, q = u_y = ∂u/∂y.
If we can find another relation f(x, y, u, p, q) = 0 between x, y, u, p, q, then we can solve for p and q and substitute them in du = p dx + q dy. This will give the solution, provided the resulting equation is integrable.
To determine f, differentiate F = 0 and f = 0 with respect to x and y:
∂F/∂x + p ∂F/∂u + ∂F/∂p ∂p/∂x + ∂F/∂q ∂q/∂x = 0,
∂f/∂x + p ∂f/∂u + ∂f/∂p ∂p/∂x + ∂f/∂q ∂q/∂x = 0,
∂F/∂y + q ∂F/∂u + ∂F/∂p ∂p/∂y + ∂F/∂q ∂q/∂y = 0,
∂f/∂y + q ∂f/∂u + ∂f/∂p ∂p/∂y + ∂f/∂q ∂q/∂y = 0.
Eliminating ∂p/∂x from the first pair and ∂q/∂y from the second pair, adding the results and using ∂q/∂x = ∂²u/∂x∂y = ∂p/∂y, the remaining cross terms cancel and we obtain a linear first-order equation for f whose auxiliary system of equations is
dx/(−∂F/∂p) = dy/(−∂F/∂q) = du/(−p ∂F/∂p − q ∂F/∂q) = dp/(∂F/∂x + p ∂F/∂u) = dq/(∂F/∂y + q ∂F/∂u) = df/0.
An integral of these equations, involving p or q or both, can be taken as the required relation f = 0.
Problems
1  Solve (x² − y² − yz) p + (x² − y² − zx) q = z(x − y).
Sol  Here P = x² − y² − yz, Q = x² − y² − zx, R = z(x − y).
The subsidiary equations are dx/(x² − y² − yz) = dy/(x² − y² − zx) = dz/[z(x − y)].
Using 1, −1, 0 and x, −y, 0 as multipliers, we have
dz/[z(x − y)] = (dx − dy)/[z(x − y)] = (x dx − y dy)/[(x² − y²)(x − y)]  ……(2)
From the first two ratios of (2), dz = dx − dy; integrating, z = x − y − c₁, i.e. x − y − z = c₁.
Now taking the first and last ratios in (2),
dz/z = (x dx − y dy)/(x² − y²),  i.e.  2dz/z = (2x dx − 2y dy)/(x² − y²).
Integrating, 2 log z = log(x² − y²) − log c₂, so (x² − y²)/z² = c₂.
The required general solution is f(x − y − z, (x² − y²)/z²) = 0.
2  Solve (mz − ny) p + (nx − lz) q = ly − mx.
Sol  Here P = mz − ny, Q = nx − lz, R = ly − mx.
The auxiliary equations are dx/P = dy/Q = dz/R, i.e.
dx/(mz − ny) = dy/(nx − lz) = dz/(ly − mx).
Choosing x, y, z as multipliers, each fraction = (x dx + y dy + z dz)/0, which gives x dx + y dy + z dz = 0; integrating, x² + y² + z² = a.
Again choosing l, m, n as multipliers, each fraction = (l dx + m dy + n dz)/0, which gives l dx + m dy + n dz = 0; integrating, lx + my + nz = b.
Hence the solution is f(x² + y² + z², lx + my + nz) = 0.
5  Find the general solution of the first-order linear partial differential equation with constant coefficients: 4uₓ + u_y = x²y.
Sol  The auxiliary system of equations is
dx/4 = dy/1 = du/(x²y).
From dx/4 = dy/1 we get dx − 4dy = 0; integrating both sides, x − 4y = c.
Also dx/4 = du/(x²y) gives x²y dx = 4du. Replacing y by (x − c)/4,
x²[(x − c)/4] dx = 4du,  i.e.  (1/16)(x³ − cx²) dx = du.
Integrating both sides,
u = c₁ + (3x⁴ − 4cx³)/192 = f(c) + (3x⁴ − 4cx³)/192.
After replacing c by x − 4y, we get the general solution
u = f(x − 4y) + [3x⁴ − 4(x − 4y)x³]/192 = f(x − 4y) − x⁴/192 + x³y/12.
6  Find the general solution of the partial differential equation y²u p + x²u q = y²x.
Sol  The auxiliary system of equations is
dx/(y²u) = dy/(x²u) = du/(y²x).
Taking the first two members we have x² dx = y² dy, which on integration gives x³ − y³ = c₁. Again taking the first and third members, we have x dx = u du, which on integration gives x² − u² = c₂.
Hence the general solution is F(x³ − y³, x² − u²) = 0.
x y - u 0
x y
Sol u u
: Let p = ,q=
x y
The auxiliary system of equations is
dx dy du dp dq
2px 2qy 2(p x q y ) p - p
2 2 2
q - q2
which we obtain from putting values of
F F F F F
2px , 2qy, p2 , - 1, q2
p q x u y
and multiplying by -1 throughout the auxiliary system. From first and 4 th expression
in (11.38) we get
p 2 dx 2pxdp
dx = . From second and 5th expression
py
q2 dy 2qydq
dy=
qy
Using these values of dx and dy we get
p 2 dx 2pxdp q2 dy 2qydq
=
p2 x q2 y
dx 2 dy 2dq
or dp
x p y q
Taking integral of all terms we get
ln|x| + 2ln|p| = ln|y|+2ln|q|+lnc
or ln|x| p2 = ln|y|q2c
or p2x=cq2y, where c is an arbitrary constant.
Solving for p and q we get cq2y+q2 y -u=0
(c+1)q2y=u
1
u 2
q=
(c 1)y
1
cu 2
p=
(c 1)x
1 1
cu 2 u 2
du= dx dy
(c 1)x (c 1)y
1 1 1
1 c 2 c 2 i 2
or du dx dy
u x y
1 1 1
By integrating this equation we obtain ((1 c )u) 2
(cx) 2
( y) 2
c1
This is a complete solution.
8 Solve p2+q2=1
Sol The auxiliary system of equation is
dx dy du dp dq
- 2 2
- 2p 2q - 2p - 2q 0 0
dx dy du dp dq
or 2
p q p q 2
0 0
Using dp =0, we get p=c and q= 1- c 2 , and these two combined with du
=pdx+qdy yield
u=cx+y 1- c 2 + c1 which is a complete solution.
dx dx
Using = p , we get du = where p= c
du c
x
Integrating the equation we get u = + c1
c
dy
Also du = , where q = 1- p 2 1- c 2
q
dy 1
or du = . Integrating this equation we get u = y +c2
1- c 2 1- c 2
This cu = x+cc1 and u 1- c 2 = y + c2 1- c 2
Replacing cc1 and c2 1- c 2 by - and - respectively, and eliminating c, we
get
u2 = (x-)2 + (y-)2
9 Solve u2+pq – 4 = 0
Sol The auxiliary system of equations is
dx dy du dp dq
= = = =
q p 2pq - 2up - 2uq
The last two equations yield p = a2q.
Substituting in u2+pq – 4 = 0 gives
1
q= 4 - u 2 and p = + a 4 - u 2
a
Then du = pdx+qdy yields
1
du = + 4 - u2 adx dy
a
du 1
or = + adx dy
4 - u2 a
u 1
Integrating we get sin--1 = + adx y c
2 a
1
or u = + 2 sin ax y c
a
10 Solve p2(1-x2)-q2(4-y2) = 0
Sol Let p2(1-x2) = q2 (4-y2) = a2
a a
This gives p = and q =
1- x 2 4 - y2
(neglecting the negative sign).
Substituting in du = pdx + q dy we have
a a
du = dx + dy
1- x 2 4 - y2
y
Integration gives u = a sin'x sin' + c.
2
Wave Equation
For the rest of this introduction to PDEs we will explore PDEs representing some of the basic
types of linear second order PDEs: heat conduction and wave propagation. These represent
two entirely different physical processes: the process of diffusion, and the process of
oscillation, respectively. The field of PDEs is extremely large, and there is still a
considerable amount of undiscovered territory in it, but these two basic types of PDEs
represent the ones that are in some sense, the best understood and most developed of all of
the PDEs. Although there is no one way to solve all PDEs explicitly, the main technique that
we will use to solve these various PDEs represents one of the most important techniques used
in the field of PDEs, namely separation of variables (which we saw in a different form while
studying ODEs). The essential manner of using separation of variables is to try to break up a
differential equation involving several partial derivatives into a series of simpler, ordinary
differential equations.
We start with the wave equation. This PDE governs a number of similarly related
phenomena, all involving oscillations. Situations described by the wave equation include
acoustic waves, such as vibrating guitar or violin strings, the vibrations of drums, waves in
fluids, as well as waves generated by electromagnetic fields, or any other physical situations
involving oscillations, such as vibrating power lines, or even suspension bridges in certain
circumstances. In short, this one type of PDE covers a lot of ground.
We begin by looking at the simplest example of a wave PDE, the one-dimensional wave
equation. To get at this PDE, we show how it arises as we try to model a simple vibrating
string, one that is held in place between two secure ends. For instance, consider plucking a
guitar string and watching (and listening) as it vibrates. As is typically the case with
modeling, reality is quite a bit more complex than we can deal with all at once, and so we
need to make some simplifying assumptions in order to get started.
First off, assume that the string is stretched so tightly that the only real force we need to
consider is that due to the string’s tension. This helps us out as we only have to deal with one
force, i.e. we can safely ignore the effects of gravity if the tension force is orders of
magnitude greater than that of gravity. Next we assume that the string is as uniform, or
homogeneous, as possible, and that it is perfectly elastic. This makes it possible to predict
the motion of the string more readily since we don’t need to keep track of kinks that might
occur if the string wasn’t uniform. Finally, we’ll assume that the vibrations are pretty
minimal in relation to the overall length of the string, i.e. in terms of displacement, the
amount that the string bounces up and down is pretty small. The reason this will help us out
is that we can concentrate on the simple up and down motion of the string, and not worry
about any possible side to side motion that might occur.
Now consider a string of a certain length, l, that’s held in place at both ends. First off, what
exactly are we trying to do in “modeling the string’s vibrations”? What kind of function do
we want to solve for to keep track of the motion of string? What will it be a function of?
Clearly if the string is vibrating, then its motion changes over time, so time is one variable we
will want to keep track of. To keep track of the actual motion of the string we will need to
have a function that tells us the shape of the string at any particular time. One way we can do
this is by looking for a function that tells us the vertical displacement (positive up, negative
down) that exists at any point along the string – how far away any particular point on the
string is from the undisturbed resting position of the string, which is just a straight line. Thus,
we would like to find a function u ( x, t ) of two variables. The variable x can measure distance
along the string, measured away from one chosen end of the string (i.e. x = 0 is one of the tied
down endpoints of the string), and t stands for time. The function u ( x, t ) then gives the
vertical displacement of the string at any point, x, along the string, at any particular time t.
As we have seen time and time again in calculus, a good way to start when we would like to
study a surface or a curve or arc is to break it up into a series of very small pieces. At the end
of our study of one little segment of the vibrating string, we will think about what happens as
the length of the little segment goes to zero, similar to the type of limiting process we’ve seen
as we progress from Riemann Sums to integrals.
Suppose we were to examine a very small length of the vibrating string as shown in figure 1:
Now what? How can we figure out what is happening to the vibrating string? Our best hope
is to follow the standard path of modeling physical situations by studying all of the forces
involved and then turning to Newton's classic equation F = ma. It's not a surprise that this
will help us, as we have already pointed out that this equation is itself a differential equation
(acceleration being the second derivative of position with respect to time). Ultimately, all we
will be doing is substituting in the particulars of our situation into this basic differential
equation.
Because of our first assumption, there is only one force to keep track of in our situation, that
of the string tension. Because of our second assumption, that the string is perfectly elastic
with no kinks, we can assume that the force due to the tension of the string is tangential to the
ends of the small string segment, and so we need to keep track of the string tension forces T1
and T2 at each end of the string segment. Assuming that the string is only vibrating up and
down means that the horizontal components of the tension forces on each end of the small
segment must perfectly balance each other out. Thus
(1) T1 cos T2 cos T
whereT is a string tension constant associated with the particular set-up (depending, for
instance, on how tightly strung the guitar string is). Then to keep track of all of the forces
involved means just summing up the vertical components of T1 and T2 . This is equal to
(2) T2·sin β − T1·sin α
where we keep track of the fact that the forces point in opposite directions in our diagram with
the appropriate use of the minus sign. That’s it for “Force,” now on to “Mass” and
“Acceleration.” The mass of the string segment is simple, just ρ·Δx, where ρ is the mass per unit
length of the string, and Δx is (approximately) the length of the little segment. Acceleration
is the second derivative of position with respect to time. Considering that the position of the
string segment at a particular time is just u ( x, t ) , the function we’re trying to find, then the
acceleration for the little segment is ∂²u/∂t² (computed at some point between a and a + Δx).
Putting all of this together, we find that:
(3) T2·sin β − T1·sin α = ρ·Δx·(∂²u/∂t²)
Now what? It appears that we’ve got nowhere to go with this – this looks pretty unwieldy as
it stands. However, be sneaky… try dividing both sides by the various respective equal parts
written down in equation (1):
(4) (T2·sin β)/(T2·cos β) − (T1·sin α)/(T1·cos α) = (ρ·Δx/T)·(∂²u/∂t²)
or more simply:
(5) tan β − tan α = (ρ·Δx/T)·(∂²u/∂t²)
Now, finally, note that tan α is equal to the slope at the left-hand end of the string segment,
which is just ∂u/∂x evaluated at a, i.e. (∂u/∂x)(a, t), and similarly tan β equals (∂u/∂x)(a + Δx, t), so (5)
becomes…
(6) (∂u/∂x)(a + Δx, t) − (∂u/∂x)(a, t) = (ρ·Δx/T)·(∂²u/∂t²)
or, dividing both sides by Δx,
(7) (1/Δx)·[(∂u/∂x)(a + Δx, t) − (∂u/∂x)(a, t)] = (ρ/T)·(∂²u/∂t²)
Now we’re ready for the final push. Let’s go back to the original idea – start by breaking up
the vibrating string into little segments, examine each such segment using Newton’s F = ma
equation, and finally figure out what happens as we let the length of the little string segment
dwindle to zero, i.e. examine the result as Δx goes to 0. Do you see any limit definitions of
derivatives kicking around in equation (7)? As Δx goes to 0, the left-hand side of the
equation is in fact just equal to ∂/∂x(∂u/∂x) = ∂²u/∂x², so the whole thing boils down to:
(8) ∂²u/∂x² = (ρ/T)·(∂²u/∂t²)
or, rearranging,
(9) ∂²u/∂t² = c²·(∂²u/∂x²)
by bringing in a new constant c² = T/ρ (the constant is typically written as c², rather than simply c, to show
that it’s a positive constant).
This equation, which governs the motion of the vibrating string over time, is called the one-
dimensional wave equation. It is clearly a second order PDE, and it’s linear and
homogeneous.
There are several approaches to solving the wave equation. The first one we will work with,
using a technique called separation of variables, again, demonstrates one of the most widely
used solution techniques for PDEs. The idea behind it is to split up the original PDE into a
series of simpler ODEs, each of which we should be able to solve readily using tricks already
learned. The second technique, which we will see in the next section, uses a transformation
trick that also reduces the complexity of the original PDE, but in a very different manner.
This second solution is due to Jean Le Rond D’Alembert (an 18th century French
mathematician), and is called D’Alembert’s solution, as a result.
First, note that for a specific wave equation situation, in addition to the actual PDE, we will
also have boundary conditions arising from the fact that the endpoints of the string are
attached solidly, at the left end of the string, when x = 0 and at the other end of the string,
which we suppose has overall length l. Let’s start the process of solving the PDE by first
figuring out what these boundary conditions imply for the solution function, u ( x, t ) .
Answer: for all values of t, the time variable, it must be the case that the vertical displacement
at the endpoints is 0, since they don’t move up and down at all, so that
(1) u(0, t) = 0 and u(l, t) = 0 for all t
are the boundary conditions for our wave equation. These will be key when we later on need
to sort through possible solution functions for functions that satisfy our particular vibrating
string set-up.
You might also note that we probably need to specify what the shape of the string is right
when time t = 0, and you’re right - to come up with a particular solution function, we would
need to know u(x, 0). In fact we would also need to know the initial velocity of the string,
which is just u_t(x, 0). These two requirements are called the initial conditions for the wave
equation, and are also necessary to specify a particular vibrating string solution. For instance,
as the simplest example of initial conditions, if no one is plucking the string, and it’s perfectly
flat to start with, then the initial conditions would just be u(x, 0) = 0 (a perfectly flat string)
with initial velocity u_t(x, 0) = 0. Here, then, the solution function is pretty unenlightening –
it’s just u(x, t) = 0, i.e. no movement of the string through time.
To start the separation of variables technique we make the key assumption that whatever the
solution function is, that it can be written as the product of two independent functions, each
one of which depends on just one of the two variables, x or t. Thus, imagine that the solution
function, u ( x, t ) can be written as
(2) u(x, t) = F(x)·G(t)
where F and G are single variable functions of x and t respectively. Differentiating this
equation for u ( x, t ) twice with respect to each variable yields
(3) ∂²u/∂x² = F″(x)·G(t) and ∂²u/∂t² = F(x)·G″(t)
Thus when we substitute these two equations back into the original wave equation, which is
(4) ∂²u/∂t² = c²·(∂²u/∂x²)
then we get
(5) F(x)·G″(t) = c²·F″(x)·G(t)
Here’s where our separation of variables assumption pays off, because now if we separate the
equation above so that the terms involving F and its second derivative are on one side, and
likewise the terms involving G and its derivative are on the other, then we get
(6) G″(t)/(c²·G(t)) = F″(x)/F(x)
Now we have an equality where the left-hand side just depends on the variable t, and the
right-hand side just depends on x. Here comes the critical observation - how can two
functions, one just depending on t, and one just on x, be equal for all possible values of t and
x? The answer is that they must each be constant, for otherwise the equality could not
possibly hold for all possible combinations of t and x. Aha! Thus we have
(7) G″(t)/(c²·G(t)) = F″(x)/F(x) = k
for some constant k.
Case One: k = 0
In this case equation (7) reduces to F″(x) = 0 and G″(t) = 0,
yielding with very little effort two solution functions for F and G:
(8) F(x) = ax + b and (9) G(t) = pt + r
where a, b, p and r are constants (note how easy it is to solve such simple ODEs versus trying
to deal with two variables at once, hence the power of the separation of variables approach).
Putting these back together to form u(x, t) = F(x)·G(t), then the next thing we need to do is to
note what the boundary conditions from equation (1) force upon us, namely that
(10) u(0, t) = F(0)·G(t) = 0 and u(l, t) = F(l)·G(t) = 0 for all t.
Unless G(t) = 0 (which would then mean that u(x, t) = 0, giving us the very dull solution
equivalent to a flat, unplucked string) then this implies that
(11) F(0) = F(l) = 0.
But how can a linear function have two roots? Only by being identically equal to 0, thus it
must be the case that F(x) = 0. Sigh, then we still get that u(x, t) = 0, and we end up with
the dull solution again, the only possible solution if we start with k = 0.
Case Two: k > 0
Now equation (7) separates into the two ODEs
(12) G″(t) − c²·k·G(t) = 0 and (13) F″(x) − k·F(x) = 0
Try to solve these two ordinary differential equations. You are looking for functions whose
second derivatives give back the original function, multiplied by a positive constant. Possible
candidate solutions to consider include the exponential and sine and cosine functions. Of
course, the sine and cosine functions don’t work here, as their second derivatives give back a
negative multiple of the original function, so we are left with the exponential functions.
Let’s take a look at (13) more closely first, as we already know that the boundary conditions
imply conditions specifically for F(x), i.e. the conditions in (11). Solutions for F(x)
include anything of the form
(14) F(x) = A·e^(ωx)
where ω² = k and A is a constant. Since ω could be positive or negative, and since solutions
to (13) can be added together to form more solutions (note (13) is an example of a second
order linear homogeneous ODE, so that the superposition principle holds), then the general
solution for (13) is
(14) F(x) = A·e^(ωx) + B·e^(−ωx)
where now A and B are constants and ω = √k. Knowing that F(0) = F(l) = 0, then
unfortunately the only possible values of A and B that work are A = B = 0, i.e. that
F(x) = 0. Thus, once again we end up with u(x, t) = F(x)·G(t) = 0·G(t) = 0, i.e. the dull
solution once more. Now we place all of our hope on the third and final possibility for k,
namely…
Case Three: k < 0
So now we go back to equations (12) and (13) again, but now working with k as a negative
constant. So, again we have the two ODEs
(12) G″(t) − c²·k·G(t) = 0 and (13) F″(x) − k·F(x) = 0, now with k < 0.
Exponential functions won’t satisfy these two ODEs, but now the sine and cosine functions
will. The general solution function for (13) is now
(15) F(x) = A·cos(ωx) + B·sin(ωx)
where again A and B are constants and now we have ω² = −k. Again, we consider the
boundary conditions that specified that F(0) = F(l) = 0. Substituting in 0 for x in (15) leads
to
(16) F(0) = A·cos(0) + B·sin(0) = A = 0
so A must be 0, and the other boundary condition F(l) = B·sin(ωl) = 0 then forces (if we want B ≠ 0)
sin(ωl) = 0, i.e.
(17) ω·l = nπ or ω = nπ/l (where n is an integer)
This means that there is an infinite set of solutions to consider (letting the constant B be equal
to 1 for now), one for each possible integer n.
(18) F(x) = sin(nπx/l)
Well, we would be done at this point, except that the solution function u(x, t) = F(x)·G(t) and
we’ve neglected to figure out what the other function, G(t), equals. So, we return to the
ODE in (12):
(12) G″(t) − c²·k·G(t) = 0
where, again, we are working with k, a negative number. From the solution for F(x) we
have determined that the only possible values of k that end up leading to non-trivial solutions are
k = −(nπ/l)²
for n some integer. Again, we get an infinite set of solutions for (12) that
can be written in the form
(19) G(t) = C·cos(λ_n·t) + D·sin(λ_n·t)
where C and D are constants and λ_n = c·ω = c·√(−k) = cnπ/l, where n is the same integer that
showed up in the solution for F(x) in (18) (we’re labeling λ with a subscript “n” to identify
which value of n is used).
Now we really are done, for all we have to do is to drop our solutions for F(x) and G(t) into
u(x, t) = F(x)·G(t), and the result is
(20) u_n(x, t) = F(x)·G(t) = [C·cos(λ_n·t) + D·sin(λ_n·t)]·sin(nπx/l)
where the integer n that was used is identified by the subscripts in u_n(x, t) and λ_n, and C and
D are arbitrary constants.
At this point you should be in the habit of immediately checking solutions to differential
equations. Is (20) really a solution for the original wave equation
∂²u/∂t² = c²·(∂²u/∂x²)
and does it actually satisfy the boundary conditions u(0, t) = 0 and u(l, t) = 0 for all values of
t?
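As a quick sanity check (an illustrative sketch, not part of the original derivation), the following SymPy snippet verifies symbolically that u_n(x, t) = [C·cos(λ_n·t) + D·sin(λ_n·t)]·sin(nπx/l), with λ_n = cnπ/l, satisfies both the wave equation and the boundary conditions.

import sympy as sp

x, t, c, l, C, D = sp.symbols('x t c l C D', positive=True)
n = sp.symbols('n', integer=True, positive=True)

lam = c * n * sp.pi / l                      # lambda_n = c*n*pi/l
u = (C * sp.cos(lam * t) + D * sp.sin(lam * t)) * sp.sin(n * sp.pi * x / l)

# Wave equation: u_tt - c^2 * u_xx should simplify to 0
print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))    # prints 0

# Boundary conditions: u(0, t) and u(l, t) should both be 0 for all t
print(u.subs(x, 0), sp.simplify(u.subs(x, l)))                    # prints 0 0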
The solution given in the last section really does satisfy the one-dimensional wave equation.
To think about what the solutions look like, you could graph a particular solution function for
varying values of time, t, and then examine how the string vibrates over time for solution
functions with different values of n and constants C and D. However, as the functions
involved are fairly simple, it’s possible to make sense of the solution u n ( x, t ) functions with
just a little more effort.
For instance, over time, we can see that the G(t) = C·cos(λ_n·t) + D·sin(λ_n·t) part of the
function is periodic with period equal to 2π/λ_n. This means that it has a frequency equal to
λ_n/(2π) cycles per unit time. In music one cycle per second is referred to as one hertz. Middle C
on a piano is typically 263 hertz (i.e. when someone presses the middle C key, a piano string
is struck that vibrates predominantly at 263 cycles per second), and the A above middle C is
440 hertz. The solution function when n is chosen to equal 1 is called the fundamental mode
(for a particular length string under a specific tension). The other normal modes are
represented by different values of n. For instance one gets the 2nd and 3rd normal modes
when n is selected to equal 2 and 3, respectively. The fundamental mode, when n equals 1
represents the simplest possible oscillation pattern of the string, when the whole string swings
back and forth in one wide swing. In this fundamental mode the widest vibration
displacement occurs in the center of the string (see the figures below).
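As a rough stand-in for those figures, here is a small matplotlib sketch (illustrative only; the string length l = 1 is an arbitrary choice) plotting the mode shapes sin(nπx/l) for n = 1, 2, 3, showing the single wide swing of the fundamental mode and the extra stationary points of the higher modes.

import numpy as np
import matplotlib.pyplot as plt

l = 1.0                          # arbitrary string length for the sketch
x = np.linspace(0, l, 200)

for n in (1, 2, 3):
    plt.plot(x, np.sin(n * np.pi * x / l), label=f"n = {n}")

plt.axhline(0, color="gray", linewidth=0.5)   # resting position of the string
plt.legend()
plt.title("Mode shapes sin(n*pi*x/l) of the vibrating string")
plt.show()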
Thus suppose a string of length l, and string mass per unit length ρ, is tightened so that the
value of T, the string tension, along with the other constants, makes the value of λ_1/(2π) = (1/(2l))·√(T/ρ)
equal to 440. Then if the string is made to vibrate by striking or plucking it, then its fundamental
(lowest) tone would be the A above middle C.
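To make this concrete, here is a tiny Python sketch of that frequency formula; the values of l, T and ρ below are made-up illustrative numbers, not values taken from these notes.

import math

l = 0.65      # string length in meters (illustrative value)
T = 120.0     # string tension in newtons (illustrative value)
rho = 0.0007  # mass per unit length in kg/m (illustrative value)

c = math.sqrt(T / rho)    # wave speed c = sqrt(T/rho)
f1 = c / (2 * l)          # fundamental frequency (1/(2l)) * sqrt(T/rho)
print(f"fundamental frequency f1 = {f1:.1f} Hz")

# The n-th normal mode vibrates at n * f1
for n in range(2, 5):
    print(f"mode n = {n}: frequency = {n * f1:.1f} Hz")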
Now think about how different values of n affect the other part of u_n(x, t) = F(x)·G(t),
namely F(x) = sin(nπx/l). Since the sin(nπx/l) function vanishes whenever x equals a multiple
of l/n, then selecting different values of n higher than 1 has the effect of identifying which
parts of the vibrating string do not move. This has the effect musically of producing
overtones, which are musically pleasing higher tones relative to the fundamental mode tone.
For instance picking n = 2 produces a vibrating string that appears to have two separate
vibrating sections, with the middle of the string standing still. This mode produces a tone
exactly an octave above the fundamental mode. Choosing n = 3 produces the 3rd normal
mode that sounds like an octave and a fifth above the original fundamental mode tone, the
4th normal mode sounds two octaves above the fundamental tone (and the 5th adds a major third on top of that),
and so on.
It is this series of overtones that gives the basis for much of the tonal scale
used in Western music, which is based on the premise that the simpler the frequency ratio between two
tones, down to octaves and fifths, the more pleasing the relative sounds. Think about
that the next time you listen to some Dave Matthews!
Finally note that in real life, any time a guitar or violin string is caused to vibrate, the result is
typically a combination of normal modes, so that the vibrating string produces sounds from
many different overtones. The particular combination resulting from a particular set-up, the
type of string used, the way the string is plucked or bowed, produces the characteristic tonal
quality associated with that instrument. The way in which these different modes are
combined makes it possible to produce solutions to the wave equation with different initial
shapes and initial velocities of the string. This process of combination involves Fourier
Series which will be covered at the end of Math 21b (come back to see it in action!)
Finally, finally, note that the solutions to the wave equations also show up when one
considers acoustic waves associated with columns of air vibrating inside pipes, such as in
organ pipes, trombones, saxophones or any other wind instruments (including, although you
might not have thought of it in this way, your own voice, which basically consists of a
vibrating wind-pipe, i.e. your throat!). Thus the same considerations in terms of fundamental
tones, overtones and the characteristic tonal quality of an instrument resulting from solutions
to the wave equation also occur for any of these instruments as well. So, the wave equation
gets around quite a bit musically!
Instead of working with the wave equation
(1) ∂²u/∂t² = c²·(∂²u/∂x²)
in terms of the variables x and t, we rewrite it to reflect two new variables
(2) v = x + ct and z = x − ct
This then means that u, originally a function of x, and t, now becomes a function of v and z,
instead. How does this work? Note that we can solve for x and t in (2), so that
(3) x = (v + z)/2 and t = (v − z)/(2c)
Now using the chain rule for multivariable functions, you know that
(4) ∂u/∂t = (∂u/∂v)·(∂v/∂t) + (∂u/∂z)·(∂z/∂t) = c·(∂u/∂v) − c·(∂u/∂z)
since ∂v/∂t = c and ∂z/∂t = −c, and that similarly
(5) ∂u/∂x = (∂u/∂v)·(∂v/∂x) + (∂u/∂z)·(∂z/∂x) = ∂u/∂v + ∂u/∂z
since ∂v/∂x = 1 and ∂z/∂x = 1. Working up to second derivatives, another, more involved
application of the chain rule yields that
(6) ∂²u/∂t² = ∂/∂t[c·(∂u/∂v) − c·(∂u/∂z)]
        = c·[(∂²u/∂v²)·(∂v/∂t) + (∂²u/∂z∂v)·(∂z/∂t)] − c·[(∂²u/∂z²)·(∂z/∂t) + (∂²u/∂v∂z)·(∂v/∂t)]
        = c²·(∂²u/∂v²) − 2c²·(∂²u/∂z∂v) + c²·(∂²u/∂z²)
Another almost identical computation using the chain rule results in the fact that
(7) ∂²u/∂x² = ∂/∂x[(∂u/∂v) + (∂u/∂z)]
        = (∂²u/∂v²)·(∂v/∂x) + (∂²u/∂z∂v)·(∂z/∂x) + (∂²u/∂z²)·(∂z/∂x) + (∂²u/∂v∂z)·(∂v/∂x)
        = ∂²u/∂v² + 2·(∂²u/∂z∂v) + ∂²u/∂z²
We now take the original wave equation
(8) ∂²u/∂t² = c²·(∂²u/∂x²)
and substitute in what we have calculated for ∂²u/∂t² and ∂²u/∂x² in terms of ∂²u/∂v², ∂²u/∂z² and ∂²u/∂z∂v.
Doing this gives the following equation, ripe with cancellations:
(9) c²·(∂²u/∂v²) − 2c²·(∂²u/∂z∂v) + c²·(∂²u/∂z²) = c²·[∂²u/∂v² + 2·(∂²u/∂z∂v) + ∂²u/∂z²]
Dividing by c² and canceling the terms involving ∂²u/∂v² and ∂²u/∂z² reduces this series of
equations to
(10) −2·(∂²u/∂z∂v) = 2·(∂²u/∂z∂v)
or simply
(11) ∂²u/∂z∂v = 0
So what, you might well ask, after all, we still have a second order PDE, and there are still
several variables involved. But wait, think about what (11) implies. Picture (11) as it gives
you information about the partial derivative of a partial derivative:
(12) ∂/∂z(∂u/∂v) = 0
u
In this form, this implies that considered as a function of z and v is a constant in terms of
v
u
the variable z, so that can only depend on v, i.e.
v
u
(13) M (v)
v
Integrating (13) with respect to v then gives
(14) u = ∫ M(v) dv = P(v) + (constant of integration)
This, as an indefinite integral, results in a constant of integration, which in this case is just
constant from the standpoint of the variable v. Thus, it can be any arbitrary function N(z) of z
alone, so that actually
(15) u(v, z) = P(v) + N(z)
Substituting back the original change of variable equations for v and z in (2) yields that
(16) u(x, t) = P(x + ct) + N(x − ct)
where P and N are arbitrary single variable functions. This is called D’Alembert’s solution to
the wave equation. Except for the somewhat annoying but easy enough chain rule
computations, this was a pretty straightforward solution technique. The reason it worked so
well in this case was the fact that the change of variables used in (2) were carefully selected
so as to turn the original PDE into one in which the variables basically had no interaction, so
that the original second order PDE could be solved by a series of two single variable
integrations, which was easy to do.
Check out that D’Alembert’s solution really works. According to this solution, you can pick
any functions for P and N, such as P(v) = v² and N(v) = v². Then
(17) u(x, t) = (x + ct)² + (x − ct)² = 2x² + 2c²t²
so that
(18) ∂²u/∂t² = 4c²
and that
(19) ∂²u/∂x² = 4
so that indeed
(20) ∂²u/∂t² = c²·(∂²u/∂x²)
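The same check can be carried out symbolically for arbitrary (sufficiently smooth) P and N rather than the specific squares above; the short SymPy sketch below is illustrative and not part of the original notes.

import sympy as sp

x, t, c = sp.symbols('x t c')
P = sp.Function('P')    # arbitrary twice-differentiable single variable functions
N = sp.Function('N')

u = P(x + c * t) + N(x - c * t)    # D'Alembert's form of the solution

# u_tt - c^2 * u_xx should be identically zero
print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))   # prints 0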
This same transformation trick can be used to solve a fairly wide range of PDEs. For
instance one can solve the equation
(21) ∂²u/∂x∂y = ∂²u/∂y²
by using the change of variables
(22) v = x and z = x + y
(Try it out! You should get that u(x, y) = P(x) + N(x + y) with arbitrary functions P and N.)
Note that in our solution (16) to the wave equation, nothing has been specified about the
initial and boundary conditions yet, and we said we would take care of that this time around. So
now we take a look at what these conditions imply for our choices for the two functions P
and N.
If we were given an initial displacement function u(x, 0) = f(x) along with an initial velocity function
u_t(x, 0) = g(x), then we can match up these conditions with our solution by simply
substituting t = 0 into (16) and following along. We start first with a simplified set-up, where
we assume that we are given the initial displacement function u(x, 0) = f(x), and that the
initial velocity function g(x) is equal to 0 (i.e. as if someone stretched the string and simply
released it without imparting any extra velocity over the string tension alone).
(23) u(x, 0) = P(x + c·0) + N(x − c·0) = P(x) + N(x) = f(x)
We next figure out what choosing the second initial condition implies. By working with an
initial condition that u t ( x,0) g ( x) 0 , we see that by using the chain rule again on the
functions P and N
(24) u_t(x, t) = ∂/∂t[P(x + ct) + N(x − ct)] = c·P′(x + ct) − c·N′(x − ct)
(remember that P and N are just single variable functions, so the derivative indicated is just a
simple single variable derivative with respect to their input). Thus in the case where
u_t(x, 0) = g(x) = 0, then
(25) c·P′(x + ct) − c·N′(x − ct) = 0
(26) P′(x) = N′(x)
and so P(x) = k + N(x) for some constant k. Combining this with the fact that
P(x) + N(x) = f(x) means that 2·P(x) = f(x) + k, so that P(x) = (f(x) + k)/2 and likewise
N(x) = (f(x) − k)/2. Combining these leads to the solution
(27) u(x, t) = P(x + ct) + N(x − ct) = (1/2)·[f(x + ct) + f(x − ct)]
We also need to check the boundary conditions
(28) u(0, t) = 0 and u(l, t) = 0 for all t.
The first of these implies
(29) u(0, t) = (1/2)·[f(ct) + f(−ct)] = 0
or
(30) f(−ct) = −f(ct)
so that to meet this condition, the initial condition function f must be selected to be an
odd function. The second boundary condition, that u(l, t) = 0, implies
(31) u(l, t) = (1/2)·[f(l + ct) + f(l − ct)] = 0
so that f(l + ct) = −f(l − ct). Next, since we’ve seen that f has to be an odd function, then
−f(l − ct) = f(ct − l). Putting this all together this means that
(32) f(ct + l) = f(ct − l)
which means that f must have period 2l, since the inputs vary by that amount. Remember that
this just means the function repeats itself every time 2l is added to the input, the same way
that the sine and cosine functions have period 2 .
What happens if the initial velocity isn’t equal to 0? Thus suppose u_t(x, 0) = g(x) ≠ 0.
Tracing through the same types of arguments as the above leads to the solution function
(33) u(x, t) = (1/2)·[f(x + ct) + f(x − ct)] + (1/(2c))·∫ g(s) ds, where the integral runs from s = x − ct to s = x + ct.
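As an illustration of formula (33) (a sketch only, with arbitrary sample choices of f, g and c that are not taken from these notes), the following Python snippet evaluates D’Alembert’s solution numerically, using quadrature for the integral term.

import numpy as np
from scipy.integrate import quad

c = 1.0                                   # wave speed (arbitrary choice)
f = lambda x: np.exp(-x**2)               # sample initial displacement u(x, 0)
g = lambda x: 0.5 * np.exp(-x**2)         # sample initial velocity u_t(x, 0)

def dalembert(x, t):
    # Equation (33): average of traveling copies of f plus the integral of g
    average_term = 0.5 * (f(x + c * t) + f(x - c * t))
    integral_term, _ = quad(g, x - c * t, x + c * t)
    return average_term + integral_term / (2.0 * c)

# Example: displacement at x = 0.3, t = 1.2 (for an infinite string; for the
# finite string of length l, f and g would first be extended to odd,
# 2l-periodic functions as discussed above).
print(dalembert(0.3, 1.2))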
In the next installment of this introduction to PDEs we will turn to the Heat Equation.
Heat Equation
For this next PDE, we create a mathematical model of how heat spreads, or diffuses through
an object, such as a metal rod, or a body of water. To do this we take advantage of our
knowledge of vector calculus and the divergence theorem to set up a PDE that models such a
situation. Knowledge of this particular PDE can be used to model situations involving many
sorts of diffusion processes, not just heat. For instance the PDE that we will derive can be
used to model the spread of a drug in an organism, or the diffusion of pollutants in a water
supply.
The key to this approach will be the observation that heat tends to flow in the direction of
decreasing temperature. The bigger the difference in temperature, the faster the heat flow, or
heat loss (remember Newton's heating and cooling differential equation). Thus if you leave a
hot drink outside on a freezing cold day, then after ten minutes the drink will be a lot colder
than if you'd kept the drink inside in a warm room - this seems pretty obvious!
If the function u( x, y, z, t ) gives the temperature at time t at any point (x, y, z) in an object,
then in mathematical terms the direction of fastest decreasing temperature away from a
specific point (x, y, z), is just the gradient of u (calculated at the point (x, y, z) and a particular
time t). Note that here we are considering the gradient of u as just being with respect to the
spatial coordinates x, y and z, so that we write
(1) grad(u) = ∇u = (∂u/∂x)·i + (∂u/∂y)·j + (∂u/∂z)·k
Thus the rate at which heat flows away (or toward) the point is proportional to this gradient,
so that if F is the vector field that gives the velocity of the heat flow, then
(2) F = −k·grad(u) = −k·∇u
where the constant k is the thermal conductivity of the material.
Now suppose we know the temperature function, u( x, y, z, t ) , for an object, but just at an
initial time, when t = 0, i.e. we just know u( x, y, z,0) . Suppose we also know the thermal
conductivity of the material. What we would like to do is to figure out how the temperature
of the object, u( x, y, z, t ) , changes over time. The goal is to use the observation about the
rate of heat flow to set up a PDE involving the function u( x, y, z, t ) (i.e. the Heat Equation),
and then solve the PDE to find u( x, y, z, t ) .
To get to a PDE, the easiest route to take is to invoke something called the Divergence
Theorem. As this is a multivariable calculus topic that we haven’t even gotten to at this point
in the semester, don’t worry! (It will be covered in the vector calculus section at the end of
the course in Chapter 13 of Stewart). It's such a neat application of the use of the Divergence
Theorem, however, that at this point you should just skip to the end of this short section and
take it on faith that we will get a PDE in this situation (i.e. skip to equation (10) below). Then
be sure to come back and read through this section once you’ve learned about the divergence
theorem.
First notice if E is a region in the body of interest (the metal bar, the pool of water, etc.) then
the amount of heat that leaves E per unit time is simply a surface integral. More exactly, it is
the flux integral over the surface of E of the heat flow vector field, F. Recall that F is the
vector field that gives the velocity of the heat flow - it's the one we wrote down as F ku
in the previous section. Thus the amount of heat leaving E per unit time is just
(1) ∬_S F · dS
where S is the surface of E. But wait, we have the highly convenient divergence theorem that
tells us that
(2) ∬_S F · dS = ∭_E div(F) dV = −k·∭_E div(grad(u)) dV
using the fact that F = −k·grad(u). Recall that
(3) grad(u) = ∇u = (∂u/∂x)·i + (∂u/∂y)·j + (∂u/∂z)·k
so that
(4) div(grad(u)) = ∇·∇u = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²
Incidentally, this combination of divergence and gradient is used so often that it's given a
name, the Laplacian. The notation div(grad(u)) = ∇·∇u is usually shortened up to simply
∇²u. So we could rewrite (2), the heat leaving region E per unit time, as
(5) ∬_S F · dS = −k·∭_E (∇²u) dV
On the other hand, we can calculate the total amount of heat, H, in the region, E, at a
particular time, t, by computing the triple integral over E:
(6) H = ∭_E σ·ρ·u dV
where ρ is the density of the material and the constant σ is the specific heat of the material
(don't worry about all these extra constants for now - we will lump them all together in one
place in the end). How does this relate to the earlier integral? On one hand (5) gives the rate
of heat leaving E per unit time. This is just the same as −∂H/∂t, where H gives the total
amount of heat in E. This means we actually have two ways to calculate the same thing,
because we can calculate ∂H/∂t by differentiating equation (6) giving H, i.e.
(7) ∂H/∂t = ∭_E σ·ρ·(∂u/∂t) dV
Now, (5) gives the rate of heat leaving E per unit time, which we just saw is the same as −∂H/∂t,
so these two expressions must equal each other:
(8) −∂H/∂t = −∭_E σ·ρ·(∂u/∂t) dV = −k·∭_E (∇²u) dV
For these two integrals to be equal means that their two integrands must equal each other
(since this equality holds over any arbitrary region E in the object being studied), so…
(9) σ·ρ·(∂u/∂t) = k·(∇²u)
or, if we let c² = k/(σ·ρ), and write out the Laplacian, ∇²u, then this works out simply as
(10) ∂u/∂t = c²·(∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)
This, then, is the PDE that models the diffusion of heat in an object, i.e. the Heat Equation!
This particular version (10) is the three-dimensional heat equation.
We simplify our heat diffusion modeling by considering the specific case of heat flowing in a
long thin bar or wire, where the cross-section is very small, and constant, and insulated in
such a way that the heat flow is just along the length of the bar or wire. In this slightly
contrived situation, we can model the heat flow by keeping track of the temperature at any
point along the bar using just one spatial dimension, measuring the position along the bar.
This means that the function, u, that keeps track of the temperature, just depends on x, the
position along the bar, and t, time, and so the heat equation from the previous section
becomes the so-called one-dimensional heat equation:
(1) ∂u/∂t = c²·(∂²u/∂x²)
One of the interesting things to note at this point is how similar this PDE appears to the wave
equation PDE. However, the resulting solution functions are remarkably different in nature.
Remember that the solutions to the wave equation had to do with oscillations, dealing with
vibrating strings and all that. Here the solutions to the heat equation deal with temperature
flow, not oscillation, so that means the solution functions will likely look quite different. If
you’re familiar with the solution to Newton’s heating and cooling differential equations, then
you might expect to see some type of exponential decay function as part of the solution
function.
Before we start to solve this equation, let’s mention a few more conditions that we will need
to know to nail down a specific solution. If the metal bar that we’re studying has a specific
length, l, then we need to know the temperatures at the ends of the bars. These temperatures
will give us boundary conditions similar to the ones we worked with for the wave equation.
To make life a bit simpler for us as we solve the heat equation, let’s start with the case when
the ends of the bar, at x 0 and x l both have temperature equal to 0 for all time (you can
picture this situation as a metal bar with the ends stuck against blocks of ice, or some other
cooling apparatus keeping the ends exactly at 0 degrees). Thus we will be working with the
same boundary conditions as before, namely
(2) u(0, t) = 0 and u(l, t) = 0 for all t.
Finally, to pick out a particular solution, we also need to know the initial starting temperature
of the entire bar, namely we need to know the function u(x, 0). Interestingly, that’s all we
would need for an initial condition this time around (recall that to specify a particular solution
in the wave equation we needed to know two initial conditions, u(x, 0) and u_t(x, 0)).
The nice thing now is that since we have already solved a PDE, then we can try following the
same basic approach as the one we used to solve the last PDE, namely separation of
variables. With any luck, we will end up solving this new PDE. So, remembering back to
what we did in that case, let’s start by writing
(3) u(x, t) = F(x)·G(t)
where F and G are single variable functions. Differentiating this equation for u(x, t) with
respect to each variable yields
(4) ∂²u/∂x² = F″(x)·G(t) and ∂u/∂t = F(x)·G′(t)
When we substitute these two equations back into the original heat equation
(5) ∂u/∂t = c²·(∂²u/∂x²)
we get
(6) F(x)·G′(t) = c²·F″(x)·G(t)
If we now separate the two functions F and G by dividing through both sides, then we get
(7) G′(t)/(c²·G(t)) = F″(x)/F(x)
Just as before, the left-hand side only depends on the variable t, and the right-hand side just
depends on x. As a result, to have these two be equal can only mean one thing, that they are
both equal to the same constant, k:
(8) G′(t)/(c²·G(t)) = F″(x)/F(x) = k
As before, let’s first take a look at the implications for F(x), as the boundary conditions will
again limit the possible solution functions. From (8) we get that F(x) has to satisfy
(9) F″(x) − k·F(x) = 0
Just as before, one can consider the various cases with k being positive, zero, or negative.
Just as before, to meet the boundary conditions, it turns out that k must in fact be negative
(otherwise F(x) ends up being identically equal to 0, and we end up with the trivial solution
u(x, t) = 0). So skipping ahead a bit, let’s assume we have figured out that k must be
negative (you should check the other two cases just as before to see that what we’ve just
written is true!). To indicate this, we write, as before, that k = −ω², so that we now need to
look for solutions to
(10) F″(x) + ω²·F(x) = 0
These solutions are just the same as before, namely the general solution is:
(11) F(x) = A·cos(ωx) + B·sin(ωx)
where again A and B are constants and now we have ω = √(−k). Next, let’s consider the
boundary conditions u(0, t) = 0 and u(l, t) = 0. These are equivalent to stating that
F(0) = F(l) = 0. Substituting in 0 for x in (11) leads to
(12) F(0) = A·cos(0) + B·sin(0) = A = 0
and then, just as before, the second boundary condition F(l) = B·sin(ωl) = 0 forces
(13) sin(ωl) = 0, so that (14) ω = nπ/l
where n is an integer. Next we solve for G(t), using equation (8) again. So, rewriting (8), we
see that this time
(15) G′(t) + λ_n²·G(t) = 0
where λ_n = cnπ/l, since we had originally written k = −ω², and we just determined that
ω = nπ/l during the solution for F(x). The general solution to this first order differential
equation is just
(16) G(t) = C·e^(−λ_n²·t)
so that, putting the two pieces back together,
(17) u(x, t) = F(x)·G(t) = C·sin(nπx/l)·e^(−λ_n²·t)
where n is an integer, C is an arbitrary constant, and λ_n = cnπ/l. As is always the case, given
a supposed solution to a differential equation, you should check to see that this indeed is a
solution to the original heat equation, and that it satisfies the two boundary conditions we
started with.
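In the same spirit as the earlier check, here is a short SymPy sketch (illustrative only, not part of the original notes) confirming that u(x, t) = C·sin(nπx/l)·e^(−λ_n²·t), with λ_n = cnπ/l, satisfies the one-dimensional heat equation and vanishes at x = 0 and x = l.

import sympy as sp

x, t, c, l, C = sp.symbols('x t c l C', positive=True)
n = sp.symbols('n', integer=True, positive=True)

lam = c * n * sp.pi / l                                   # lambda_n = c*n*pi/l
u = C * sp.sin(n * sp.pi * x / l) * sp.exp(-lam**2 * t)   # candidate solution (17)

# Heat equation: u_t - c^2 * u_xx should simplify to 0
print(sp.simplify(sp.diff(u, t) - c**2 * sp.diff(u, x, 2)))   # prints 0

# Boundary conditions at x = 0 and x = l
print(u.subs(x, 0), sp.simplify(u.subs(x, l)))                # prints 0 0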
The next question is how to get from the general solution to the heat equation
(1) u(x, t) = C·sin(nπx/l)·e^(−λ_n²·t)
that we found in the last section, to a specific solution for a particular situation. How can one
figure out which values of n and C are needed for a specific problem? The answer lies not in
choosing one such solution function, but more typically it requires setting up an infinite series
of such solutions. Such an infinite series, because of the principle of superposition, will still
be a solution function to the equation, because the original heat equation PDE was linear and
homogeneous. Using the superposition principle, and by summing together various solutions
with carefully chosen values of C, then it is possible to create a specific solution function that
will match any (reasonable) given starting temperature function u(x, 0).
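As a sketch of that superposition idea, the snippet below builds a truncated series solution u(x, t) ≈ Σ b_n·sin(nπx/l)·e^(−(cnπ/l)²·t) matching a sample starting temperature f(x). The coefficient formula used here, b_n = (2/l)·∫ from 0 to l of f(x)·sin(nπx/l) dx, is the standard half-range Fourier sine expansion and is assumed rather than derived in this section; the choices of f, l and c are arbitrary illustrations.

import numpy as np
from scipy.integrate import quad

l, c = 1.0, 1.0                      # bar length and diffusion constant (arbitrary)
f = lambda x: x * (l - x)            # sample initial temperature u(x, 0)

def sine_coefficient(n):
    # b_n = (2/l) * integral of f(x)*sin(n*pi*x/l) over [0, l]
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / l), 0, l)
    return 2.0 * val / l

def u(x, t, terms=25):
    # Truncated superposition of the separated solutions found above
    total = 0.0
    for n in range(1, terms + 1):
        lam = c * n * np.pi / l
        total += sine_coefficient(n) * np.sin(n * np.pi * x / l) * np.exp(-lam**2 * t)
    return total

print(u(0.5, 0.0))   # should be close to f(0.5) = 0.25
print(u(0.5, 0.1))   # midpoint temperature after the bar has cooled for a while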