Calculus of a single variable

4 lectures, MT 2020 Prof. Ton van den Bremer (with thanks to Prof. David Murray)
Rev October 5, 2020 [email protected]
Course Page: Canvas

Introduction
Engineering involves the study of natural physical and biological systems, with the aim
of turning them to the use of mankind.
Systems have inputs and outputs. What happens in between is often captured by a
differential equation, which describes how the outputs vary not only with the input
values but also with their rates of change and higher derivatives. The calculus
is exactly the mathematical machinery you need to understand and describe these
variations.
There are four courses in P1 that will advance your understanding of physical systems
and their description using calculus.

• Calculus 1: (4 lectures) Calculus involving a single independent variable.

• Ordinary Differential Equations 1: (8 lectures) First- and second-order ordinary


differential equations.

• Ordinary Differential Equations 2: (4 lectures) Elementary simultaneous equations,


frequency response and Fourier series.

• Calculus 2: (4 lectures) Calculus involving multiple independent variables.

• Calculus 3: (4 lectures) Vectors and fields.

Some of the material in the four lectures of Calculus 1 will be revision. But the
emphasis here is not on how to crank out that particular derivative or that integral,
but on understanding the fundamentals. Later, it will help you make sense of calculus
in many variables.

Syllabus
The function concept. Definition and simple properties of elementary functions. Differentiation of a product, quotient and function of a function.
Elementary integration, including substitutions, integration by parts, partial fractions,
tan half-angle, recursive formulae.
Elementary series: sum to n terms of linear and geometric series. The concept of a
limit. McLaurin and Taylor expansions in one variable and their use for linearization of
elementary functions. The error term. De l’Hôpital’s theorem.
At the end of the course students should be able to:

1. understand the concept of a function and the definition of differentiation


2. differentiate products and quotients of functions and use Leibnitz’s rule.
3. differentiate functions of the elementary functions.
4. perform integration by parts, use substitution, partial fractions and recursion.
5. analyze expressions combining integration and differentiation.
6. understand the notion of convergence in simple series.
7. expand functions using McLaurin and Taylor series and understand linearization.
8. derive limits using, for example, de l’Hopital’s theorem.

Lecture Content
Somewhat inconveniently, the material seems to split into three topics to be covered
over the four lecture slots.

1. Functions, Limits and Derivatives


2. Integration
3. Series and Series Expansions

Tutorial Sheets
The tutorial sheet directly associated with this course is

• 1P1A Calculus 1

But performing just the examples in 1P1A is not really enough to secure your knowledge.
Each of the texts below contains many worked and unworked problems to practise on.

Some suggestions for reading


Advanced texts such as Kreyszig’s Advanced Engineering Mathematics have material
on series, but assume knowledge of basic differentiation and integration. You will
therefore need to look for intermediate level “mathematical methods for scientists”
books for wider coverage. Visit your college library!

• Riley, Hobson, and Bence Mathematical Methods for Physics and Engineering: A
Comprehensive Guide CUP. 3rd ed (2006)
High quality stuff throughout, and useful for 1st and 2nd year material.
• Stephenson, G. (1973) Mathematical Methods for Science Students, Longman
Scientific & Technical, 2nd Ed.,
It is rather dated and is splendidly divorced from applications, but good for its
compressed presentation and for problem bashing.
• James, G. Modern Engineering Mathematics, Pearson/Prentice-Hall.
Chapters 7 and 6 cover most of the material here, and give some engineering
context. You may become impatient with the long-winded explanations.

Do not forget our very own data book ...

• Howatson, Lund and Todd, Engineering Tables and Data, but always referred to
as HLT. Written by members of the Department, HLT was updated in 2009 by
Profs McFadden and Smith, and in 2018 by Prof Blakeborough.
HLT is a mine of information, and, importantly, it is available during examinations.
You need to have HLT under your pillow, and, even better, know what’s in there.

Course WWW Pages


Pdf copies of these notes (in colour), copies of the lecture slides, the tutorial sheets,
corrections, answers to FAQs etc, will be accessible from Canvas.

Some of the cast

Cauchy D’Alembert De l’Hopital

Leibnitz Taylor MacLaurin

Newton
Lecture 1

Functions, Limits and the Derivative

1.1 Introduction
The Calculus 1 course is concerned with the calculus of a single independent variable.
We also restrict ourselves to real variables. Whether dealing with a single variable or,
later, multiple variables, calculus is based (of course) on the concept of number, but
also on two further outstandingly important concepts. First is the concept of the
function, and the second is the concept of the limit.
Our aim in this first lecture is to revise these ideas and to use them to define the
derivative. We shall first skip through some definitions and jargon associated with
functions, touching on types of function, and then revise limits and continuity. That
will allow us to define the derivative. With that definition we can derive simple rules
like the Product and Quotient Rules. We will see how Leibnitz’s Rule helps to
find higher order derivatives of products. We then revise the Chain Rule associated
with functions of a function. Then, for completeness, we apply the definition of the
derivative to work out derivatives for the different basic types of function mentioned at
the start of the lecture.
Last in this lecture we give an overview of hyperbolic functions, which may be unfamiliar.

1.2 Functions of a single variable


The concept of a function is already well-known to you. Given an input, a function
defines a recipe for evaluating the corresponding output. One might also call the
function recipe a mapping, or a transformation, or an operator. Much of engineering is
concerned with inputs, outputs, and the functions that relate the two. These transfer
functions effectively describe how natural and engineered systems work.


When defining functions, we should take care to define the nature of the inputs and
outputs. Mathematicians describe the input’s domain and the output’s codomain,
which might be the set of real numbers, integers, complex numbers, trees, graphs, etc.
Here we are concerned with scalar real variables.
We must also take care to define the interval in which the function definition is valid.
If there is a y which is completely determined by x in some interval a ≤ x ≤ b, then we
say that y is a function of x, y = f (x), in that interval. x is the independent variable,
and y the dependent variable.
This might seem over-fussy. But when programming you will have to take similar care
with the types of variables and intervals of validity when writing functions in computer
code. Otherwise, programs might crash or will certainly return bogus results.
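For instance, here is a minimal Matlab sketch of that kind of care (the function name safe_sqrt and its chosen domain are illustrative assumptions, not anything from the course). Saved as its own file, it refuses inputs outside the interval of validity rather than silently returning bogus values.

function y = safe_sqrt(x)
% A hypothetical example of guarding the interval of validity:
% only defined for real, non-negative x.
if ~isreal(x) || any(x(:) < 0)
    error('safe_sqrt:domain', 'input must be real and non-negative');
end
y = sqrt(x);
end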

1.2.1 Some descriptions applied to functions


We now introduce some quite general descriptions applied to functions.
(a) Single-valued and multi-valued functions.
For a single value of the independent variable x, a function might deliver one or more
values of y , as illustrated in Figure 1.1.

Figure 1.1: A single-, a two-, and a multi-valued function.

(b) Continuous/discontinuous functions.


Intuitively, a continuous function is one where the pen does not have to be lifted to
plot the function, as shown in Figure 1.2. Somewhat more precisely, for a continuous
function defined within an interval, any change in y = f(x) can be kept within any
arbitrarily small bound by making the change in x suitably small. We shall return to this
later.
(c) Smooth functions.
As evident in Figure 1.2(a), a continuous function does not have to be smooth. Formally, a smooth function f is one where all its derivatives (ie, f′, f″, up to f^(∞)) are

Figure 1.2: Two continuous functions, and a discontinuous function.

continuous. Some functions might appear acceptably “smooth to the eye” because a
good number of higher derivatives are continuous. A function f is said to be of class
C^k smoothness when derivatives up to f^(k) are continuous. (See piecewise functions
below.)
(d) Monotonic increasing/decreasing functions.
If increasing x causes y = f (x) always to increase (or decrease), then the function is
monotonic increasing (or decreasing). Figure 1.3 shows a monotonic increasing function, y = log x. The first derivative or gradient always has the same sign.

Figure 1.3: A monotonic increasing function.

(e) Inverse functions. The function y = f(x) defines a mapping between x and y, and hence also one between y and x as x = φ(y). Functions f and φ are inverses, written f = φ⁻¹. The inverse might be easy to write down explicitly, eg
y = ax + b ⇒ x = y/a − b/a   (1.1)
and
y = exp x ⇒ x = ln(y)   (1.2)
or may be only implicit, eg
y = eˣ + ln(x) ⇒ x = ?   (1.3)

This is interesting. For every x-value we can evaluate y , and hence are perfectly able
to plot x against y — but there is no function descriptor for the variation.
(f) Parametric functions.
It is sometimes convenient to define variables parametrically. Instead of considering
y = f(x), we are given y = y(p) and x = x(p). For example, instead of writing
y = ±√(1 − x²) to define a circle, one might write
x = cos(p) ,  y = sin(p) .   (1.4)
Obviously, p is the parameter.
(g) Odd and even functions.
If
f(−x) = −f(x)  the function is ODD
f(−x) = +f(x)  the function is EVEN   (1.5)
A function can, of course, be neither even nor odd.

Figure 1.4: An (a) even function and (b) odd function. Note that a continuous odd function must have f(0) = 0.

(h) Periodic functions.


A periodic function is one whose output is the same if the input is increased by a period, T. For example
f(x) = cos ωx is periodic with period T = 2π/ω, so that for integer n
f(x + nT) = f(x + n·2π/ω) = f(x) .   (1.6)
Cosine, sine, etc, are periodic of course. In a later course, you will learn about Fourier
series which provide a method of describing most periodic functions using sums of
cosines and sines. (Note too that periodicity is not restricted to time. If x was a length
one might describe T as the spatial period.)
(i) Piecewise functions.

You will come across functions described as piecewise linear, piecewise quadratic, etc,
in your engineering courses. The idea is very simple. A piecewise function is one made
by joining together different functions defined over neighbouring intervals. Notice that
the piecewise linear function in Figure 1.5 is continuous, but is only C⁰ smooth. A piecewise quadratic function can be made C¹ smooth, meaning that the gradients are continuous, and so on. Bézier curves, devised by de Casteljau and used by French engineer Pierre Bézier in the early 1960s to design beautiful French car bodies, are piecewise cubic curves which are C², making the curvature continuous. Très élégant.

Figure 1.5: A piecewise linear function.

1.2.2 Function types


While the previous descriptions can be applied quite generally, we turn now to consider
the quite small set of basic functions.
Functions seem to be divided into three classes, Algebraic, Transcendental, and Spe-
cial.
Algebraic function examples are (a) Rational functions (ratios of polynomials) and (b)
functions involving n-th roots. Transcendental functions include (c) logs and exponentials, (d) hyperbolic functions, (e) trigonometrical functions and (f) other periodic
functions.
(a) Algebraic: Rational functions. A polynomial function y = a₀ + a₁x + a₂x² + ... is a rational integral function. If one takes ratios of such functions, one ends up with so-called general rational functions
y = (a₀ + a₁x + a₂x² + ...)/(b₀ + b₁x + b₂x² + ...)   (1.7)

(b) Algebraic: Roots of functions. Simple examples include roots of general rational functions, eg √(x² + 1).
(c) Transcendental: Exponential and logarithmic functions. Functions eˣ, ln x are the most obvious, but this class includes other exponentiated functions like aˣ.

Note that
ln aˣ = x ln a  →  aˣ = e^(x ln a) .   (1.8)

(d) Transcendental: Hyperbolic functions. These are made from exponentials. See
Section 1.9.
(e) Transcendental: Trigonometric functions. These are the functions involving sin, cos, tan, cot, sec and cosec. They are also called circular functions. Other periodic functions are included.
(f) Specials. You will come across many specials throughout your course — sinc functions, Error functions, Gamma functions, Bessel functions, and so on, and two important discontinuous functions: the Heaviside step function u(x) and the Dirac delta function δ(x).

1.3 Limits and continuity


Consider the function
f(x) = sin x / x .   (1.9)
It is straightforward to evaluate, except when x = 0 — then the result 0/0 might be anything. Indeed, at x = 0, f(x) is undefined. However, if you consider decreasing values of x = ε = 10⁻¹, 10⁻², 10⁻³, ... and calculate sin ε/ε you will find that the difference between the function and unity gets progressively smaller:
1 − sin ε/ε = 1.7 × 10⁻³, 1.7 × 10⁻⁵, 1.7 × 10⁻⁷, ...

The same happens when x increases towards zero from the negative side. Empirically then, we can write
lim_{x→0} (sin x)/x = 1 ,
even though sin 0/0 remains undefined.
We will discover a nice way of finding 0/0 limits in Lecture Topic 3. For now, what we
need to worry about is how to define the concept of a limit in general terms.

1.3.1 Tending to a limit


Writing lim_{x→a} f(x) = ℓ means that as x gets closer and closer to a, the function value gets closer and closer to ℓ. For any arbitrarily small number ε > 0, it is possible to find

an η such that
|f(x) − ℓ| < ε  for all  0 < |x − a| < η .   (1.10)
Why 0 < |x − a| and not 0 ≤ |x − a|? The notation x → a means that x keeps
approaching, but never reaches, the value of a, so that there is no need to consider
x = a. Indeed, the function does not even need to be defined for x = a (as, of course,
we saw with sin(x)/x and x → 0.)

Figure 1.6: However small you make ε, it is possible to find an η that satisfies the condition.

It is therefore possible for different limits to exist either side of a discontinuity, as shown
in Figure 1.7. If the discontinuity is at x = a, the limits from below and above would
be written
lim_{x→a−} f(x) = p ,   lim_{x→a+} f(x) = q ,   (1.11)
where a− indicates values infinitesimally smaller than a, and a+ values infinitesimally bigger.

Figure 1.7: A function can tend to different limits either side of a discontinuity.

1.3.2 A clarifying example


To clarify the point further, consider the three functions
f(x) = sin(x)/x
g(x) = sin(x)/x for x ≠ 0 ,  g(x) = 1 for x = 0
h(x) = sin(x)/x for x ≠ 0 ,  h(x) = 42 for x = 0

The limits of all three functions as x → 0 are unity. But f (0) is undefined, while
g(0) = 1 and h(0) = 42. You’ll see that f(x) has an undefined “gap” in it, g(x) is the
only continuous function, while h(x) is defined but discontinuous.

1.3.3 A refined definition of continuity


Using the limit concept, we can revisit the definition of continuity. A single-valued
function g(x) is continuous at x = a if

• The function is defined at x = a, AND

• The limit limx→a g(x) exists, AND

• limx→a g(x) = g(a)

Notice how this fits in with the example of g(x) given above, and how f (x) and h(x)
violate the conditions.

1.3.4 Three useful theorems


Here are three useful theorems on limits, which are obvious enough. (f(x) and g(x) are arbitrary functions here.)

lim_{x→a} {f(x) + g(x)} = lim_{x→a} f(x) + lim_{x→a} g(x)   (1.12)

lim_{x→a} {f(x)/g(x)} = lim_{x→a} f(x) / lim_{x→a} g(x)   (1.13)

lim_{x→a} {f(x) g(x)} = lim_{x→a} f(x) · lim_{x→a} g(x)   (1.14)

The theorems on limits allow us to say the following

• The sum and difference of two continuous functions is continuous.

• The quotient of two continuous functions is continuous at every point where the
denominator is non-zero.

• The product of two continuous functions is continuous.



1.4 The Derivative


With functions and limits defined, we can formally define the derivative. The derivative
of f (x) at x is found as the limit
The Derivative defined
df/dx = lim_{δx→0} (δf/δx) = lim_{δx→0} [f(x + δx) − f(x)]/δx .   (1.15)
Notes:

• f (x) is differentiable at x if this limit exists at x, and exists independently of how


δx → 0.
• A differentiable function is continuous, but
• A continuous function is not necessarily differentiable. (Eg, f (x) = |x| at x = 0.)

And note too the different meanings of df and δf, and of dx and δx. The δf is a small but
finite difference, resulting from a small but finite difference δx in x. In the limit as δx
tends to 0, the changes become infinitesimally small (that is, no longer finite) and
become dx and df . (This distinction may seem trivial, but it is not, and it will help you
greatly if you keep it clear in your mind.)
♣ Example:
Q: Find dy/dx when y = f(x) = x².
A:
df/dx = lim_{δx→0} [(x + δx)² − x²]/δx = lim_{δx→0} [x² + 2xδx + (δx)² − x²]/δx   (1.16)
The x² terms cancel, and we can neglect terms in (δx)² in comparison with terms in δx, the more so in the limit as δx → 0. Hence
df/dx = lim_{δx→0} [2xδx/δx] = 2x .   (1.17)

♣ Example: For f(x) = xⁿ we need the binomial expansion for (x + δx)ⁿ:
df/dx = lim_{δx→0} [(x + δx)ⁿ − xⁿ]/δx = lim_{δx→0} [xⁿ + n xⁿ⁻¹ δx + (n(n−1)/2!) xⁿ⁻² (δx)² + ... − xⁿ]/δx
      = lim_{δx→0} [n xⁿ⁻¹ δx + O((δx)²)]/δx = n xⁿ⁻¹ .

This result allows us to handle any polynomial. If we had already proven the Quotient
Rule, we would be able to differentiate any general rational function.
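As an aside, the limit definition can be explored numerically. Here is a minimal Matlab sketch (the choice f(x) = x³ and the point x = 2 are arbitrary illustrations, not from the notes): as δx shrinks, the finite-difference slope settles on nxⁿ⁻¹ = 12.

f  = @(x) x.^3;          % function to differentiate
x0 = 2;                  % point at which to estimate df/dx
for dx = 10.^(-1:-1:-6)  % shrinking finite differences
    slope = (f(x0 + dx) - f(x0)) / dx;   % [f(x+dx) - f(x)]/dx
    fprintf('dx = %8.1e   slope = %10.6f\n', dx, slope);
end
fprintf('exact n*x^(n-1) = %10.6f\n', 3*x0^2);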

1.5 The Product Rule and Leibnitz’s Rule


1.5.1 First derivative of a product
Consider the product of two functions f(x)g(x). Its derivative is determined from
The Product Rule
d/dx (fg) = f′g + fg′ .   (1.18)
We can prove this from first principles by writing
d/dx (fg) = lim_{δx→0} [f(x + δx)g(x + δx) − f(x)g(x)]/δx .   (1.19)
We can use the theorems given earlier to write this as
d/dx (fg) = { [lim_{δx→0} f(x + δx)] [lim_{δx→0} g(x + δx)] − f(x)g(x) } / lim_{δx→0} δx .   (1.20)
But we have already shown in the definition of the derivative that
lim_{δx→0} f(x + δx) = f(x) + (df/dx) lim_{δx→0} δx = f + f′ lim_{δx→0} δx   (1.21)
so that we have
d/dx (fg) = lim_{δx→0} [fg + f′g δx + fg′ δx + f′g′(δx)² − fg]/δx = f′g + fg′ .   (1.22)

1.5.2 Higher derivatives of a product: Leibnitz


What happens if we differentiate again? Well, f′(x) and g′(x) are still functions of x, so that
d²/dx² (fg) = f″g + f′g′ + f′g′ + fg″ = f″g + 2f′g′ + fg″ ,   (1.23)
d³/dx³ (fg) = GRIND AWAY = f‴g + 3f″g′ + 3f′g″ + fg‴ .   (1.24)
You see the binomial coefficients emerging, and can guess what happens next. So did
Leibnitz ...

Leibnitz’s Rule
dⁿ/dxⁿ (fg) = Σ_{i=0}^{n} (n choose n−i) f^(n−i) g^(i)   (1.25)
Often the use of Σ makes things harder to “see”. Unpacking this and the binomial coefficients gives
Perhaps the most immediately useful form of Leibnitz’s Rule
dⁿ/dxⁿ (fg) = f^(n) g + (n/1!) f^(n−1) g^(1) + (n(n−1)/2!) f^(n−2) g^(2) + ...   (1.26)
Note that we have started writing out at a particular end of the expression. This has
repercussions as we explore now.
♣ Example:
Q: Find the nth derivative of x³e²ˣ using Leibnitz
dⁿ/dxⁿ (fg) = f^(n) g + (n/1!) f^(n−1) g^(1) + (n(n−1)/2!) f^(n−2) g^(2) + ...
A: First decide which function to choose as f and which as g. In absolute terms it does
not matter, but, having written the series out starting from a particular end, it is much
more convenient if we choose g to disappear quickly when we start differentiating it.
Clearly in this case it is most convenient to set g(x) = x³ and f(x) = e²ˣ. Now we
need to think about the individual derivatives. A moment’s thought gives
f^(n) = 2ⁿ e²ˣ ;   g = x³ ,  g^(1) = 3x² ,  g^(2) = 6x ,  g^(3) = 6 .
So, pulling out the e²ˣ as a common factor:
dⁿ/dxⁿ (x³e²ˣ) = e²ˣ [ 2ⁿ x³ + n 2ⁿ⁻¹ 3x² + (n(n−1)/2) 2ⁿ⁻² 6x + (n(n−1)(n−2)/6) 2ⁿ⁻³ 6 ]
              = e²ˣ 2ⁿ [ x³ + (3n/2) x² + (3n(n−1)/4) x + n(n−1)(n−2)/8 ] .
So suppose we wanted the n = 8th derivative. We would find it as
256 e²ˣ ( x³ + 12x² + 42x + 42 ) .   (1.27)
You might want to try this using just the product rule — reserve an evening for it.
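If you have Matlab’s Symbolic Math Toolbox available, the n = 8 case can also be checked directly; a possible sketch (assuming the toolbox is installed):

syms x
d8 = diff(x^3*exp(2*x), x, 8);                        % differentiate 8 times
leibnitz = 256*exp(2*x)*(x^3 + 12*x^2 + 42*x + 42);   % result (1.27)
simplify(d8 - leibnitz)                               % should display 0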

♣ Example:
This is an example of things becoming messy when taking the nth derivative.
Q: Find the nth derivative of x cos(x).
A: You want g to disappear quickly, so a good choice is of course
g(x) = x  and  f(x) = cos(x) .   (1.28)
What is the mth derivative of cos(x)? It is worth writing a few down to find the pattern:
f^(0) = cos(x) ,  f^(1) = − sin(x) ,  f^(2) = − cos(x) ,  f^(3) = sin(x) ,  f^(4) = cos(x) .
Evidently, for m even it is (−1)^(m/2) cos(x), but for m odd it is (−1)^((m+1)/2) sin(x).
Now we have different overall results for n odd and even. Notice too that if n is even,
the second term f^(n−1) requires you to set m = (n−1) in the ODD derivative expression
(and if there was a third term, m = (n − 2) in the EVEN derivative, and so on).
For n even [check this after the lecture!]
dⁿ/dxⁿ [x cos(x)] = (−1)^(n/2) cos(x)[x] + n(−1)^((n−1+1)/2) sin(x)[1]   (1.29)
                  = (−1)^(n/2) (x cos(x) + n sin(x)) ,
but for n odd
dⁿ/dxⁿ [x cos(x)] = (−1)^((n+1)/2) sin(x)[x] + n(−1)^((n−1)/2) cos(x)[1]   (1.30)
                  = (−1)^((n+1)/2) (x sin(x) − n cos(x)) .

♣ Example:
Q: Find the nth derivative of (x³ + 2x) dy/dx.
A: Choose
f = dy/dx ,  g = (x³ + 2x)   (1.31)
and notice that f^(n) = y^(n+1). Then
dⁿ/dxⁿ (fg) = y^(n+1)(x³ + 2x) + n y^(n)(3x² + 2) + (n(n−1)/2) y^(n−1)(6x) + (n(n−1)(n−2)/6) y^(n−2)(6) .   (1.32)
This could be tidied up. But the main point is that this example points to the possibility
of taking the nth derivative of an ordinary differential equation.

1.6 The Quotient rule


1.6.1 Proof using the basics directly:
We can prove the quotient rule using just the limit theorems given earlier, and exploiting
the product rule.
Use the basic definition ...
d/dx (1/v) = lim_{δx→0} (1/δx) [ 1/v(x + δx) − 1/v(x) ]
           = lim_{δx→0} (1/δx) [ (v(x) − v(x + δx)) / (v(x + δx)v(x)) ]
           = lim_{δx→0} (1/δx) [ −v′(x)δx / ((v(x) + v′δx)v(x)) ]   (the numerator simplified using the Taylor expansion f(a + h) = f(a) + hf′(a) + ...)
           = −v′(x)/v² .
Then use the product rule
d/dx (u/v) = u′ (1/v) + u (−v′/v²)
           = (u′v − uv′)/v² .

1.7 Function of a function: the chain rule


Suppose f = f (g) and g = g(x). If you are told x, you can work out g, and knowing
g you can find f . This must mean that f is also a function of x, and df /dx exists.
The chain rule tells us that
Chain rule
If f = f (g) and g = g(x), then
df/dx = (df/dg)(dg/dx)   (1.33)

This requires very little proof. When δx → 0 and becomes dx, the corresponding
infinitesimal dg in the numerator is exactly the same as that in the denominator giving
rise to df. We are allowed to treat d(whatever) as a number, and hence divide
through by it.

♣ Example:
Q: Differentiate y = exp(x²) with respect to x.
A:
y = exp(g) , g = x² ;   dy/dg = exp(g) , dg/dx = 2x   ⇒   dy/dx = 2x exp(x²) .   (1.34)

♣ Example:
Q: Differentiate y = exp(cos(x²)) with respect to x.
A: This is a triple decker ...
y = exp(g) , g = cos(h) , h = x² ;
dy/dg = exp(g) , dg/dh = − sin(h) , dh/dx = 2x   (1.35)
⇒ dy/dx = (dy/dg)(dg/dh)(dh/dx) = exp(g)(− sin(h))2x = exp(cos(x²))(− sin(x²))2x .   (1.36)
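A quick numerical sanity check of (1.36) is easy to set up in Matlab; in this sketch the evaluation point x = 0.7 and the step size are arbitrary illustrative choices:

y    = @(x) exp(cos(x.^2));
dydx = @(x) exp(cos(x.^2)).*(-sin(x.^2)).*2.*x;    % chain-rule result (1.36)
x0 = 0.7;  dx = 1e-6;
fd = (y(x0 + dx) - y(x0 - dx)) / (2*dx);           % central finite difference
fprintf('finite difference %.8f, chain rule %.8f\n', fd, dydx(x0));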

1.8 Derivatives of the elementary functions


For completeness, we should apply the basic definition of differentiation to the set of
simple functions.
(a) General rational functions. We have already proved that the derivative of xⁿ is nxⁿ⁻¹, and hence we can deal with any polynomial. Using the quotient rule we can then handle any general rational function!
(b) n-roots. To prove this we need to know that the n-th root of “1 plus a small thing” can be approximated as follows:
(1 + h)^(1/n) ≈ 1 + h/n .   (1.37)
Then to find the derivative of y = x^(1/n) write
dy/dx = lim_{δx→0} [(x + δx)^(1/n) − x^(1/n)]/δx = lim_{δx→0} (1/δx) [ x^(1/n)(1 + δx/x)^(1/n) − x^(1/n) ] .   (1.38)
Using the approximation
dy/dx = lim_{δx→0} (1/δx) [ x^(1/n) δx/(nx) ] = (1/n) x^((1/n)−1) .   (1.39)

(c) Exponentials - a tutorial exercise. To do this we need to know that when h is small, e^h ≈ 1 + h. Then ...

(d) Trig functions - also a DIY exercise


Assume that cos(h) ≈ 1 and sin(h) ≈ h when h is small, and use trig relationships for
the sin of a sum to replace sin(x + δx) ...

1.9 Hyperbolic functions


The hyperbolic functions are defined in terms of exponentials, but have properties
similar to, but subtly different from, trigonometric functions.
Why hyperbolic? They can be generated in a manner similar to trigonometric functions
— but instead of being based on a circle x² + y² = 1, they are based on a hyperbola
x² − y² = 1. As illustrated in Figure 1.8, in parametric form

x = cos(t), y = sin(t), −π <t ≤ π (1.40)


x = cosh(t), y = sinh(t), −∞ ≤t ≤ ∞ . (1.41)

Figure 1.8: The circular functions contrasted with the hyperbolic functions. The values of sinh and cosh are found as projections of points on the hyperbola x² − y² = 1 onto the y and x axes. This is similar to the projection of points on a circle onto the axes to obtain sin and cos.

But the explicit definitions of the hyperbolic cosine and sine, cosh and sinh, are
cosh x = (eˣ + e⁻ˣ)/2 ,   sinh x = (eˣ − e⁻ˣ)/2   ⇒   tanh x = sinh x / cosh x = (eˣ − e⁻ˣ)/(eˣ + e⁻ˣ) .   (1.42)
These are defined for −∞ ≤ x ≤ ∞, and are plotted in Figure 1.9. The other hyperbolic functions like sech, cosech, etc, are defined as expected in terms of sinh and cosh.

1.9.1 Properties

The parametric definition of sinh and cosh indicates that
cosh²x − sinh²x = 1 ,   (1.43)

and this is easy to show from the explicit definitions.

Figure 1.9: Plots of (a) cosh(x), (b) sinh(x), (c) tanh(x) and (d) a comparison for x > 0 of cosh, sinh and 0.5 exp(x).

Their derivative properties, all shown by differentiating the definitions, are

d/dx cosh x = sinh x ,   d/dx sinh x = cosh x ,   d/dx tanh x = sech²x   (1.44)

Other properties are (most are in HLT)

cosh(−a) = cosh(a)
sinh(−a) = − sinh(a)
cosh(a + b) = sinh(a) sinh(b) + cosh(a) cosh(b)
sinh(a + b) = cosh(a) sinh(b) + sinh(a) cosh(b)
cosh(a) + sinh(a) = ea
cosh(a) − sinh(a) = e−a

But why is there such interest in these functions? Two related reasons are that, first,
if you give cosh and sinh an imaginary argument, sin and cos pop out. For example,

sinh(iy ) = i sin(y ) . (1.45)

Second, cos t and sin t and cosh t and sinh t are solutions to the differential equation
a d²y/dt² + b dy/dt + cy = 0 .
So while cos t and sin t handle oscillations in a system, cosh t and sinh t can be used to
describe time-decaying solutions.
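Both (1.43) and (1.45) are easy to check numerically in Matlab, which evaluates cosh and sinh for complex arguments; the sample points below are arbitrary illustrative choices:

y = linspace(-2, 2, 5);
max(abs(cosh(y).^2 - sinh(y).^2 - 1))   % identity (1.43): of order 1e-16
max(abs(sinh(1i*y) - 1i*sin(y)))        % identity (1.45): of order 1e-16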

1.10 [*] Stationary points – maxima, minima, etc


The following table summarizes properties of stationary and inflexion points.
Minimum:   f′(x) = 0,  f″(x) > 0      Saddle:     f′(x) = 0,  f″(x) = 0
Maximum:   f′(x) = 0,  f″(x) < 0      Inflexion:  f′(x) ≠ 0,  f″(x) = 0
Figure 1.10 shows examples from f(x) = x² + x³ and f(x) = x³.

1.11 Sketching functions


As mentioned earlier, engineers use functions as descriptors of how systems behave,
and it is important to become skilled at sketching functions.
♣ Example: Sketch the function
f(ω) = 1/√(1 + α²ω²) .   (1.46)

This is a function that will be of interest in electronics — it describes the gain of a


low-pass filter. The “way in” to making a sketch of this is to spot that f (ω) ≈ 1 when

Figure 1.10: Maximum, minimum, inflexion and saddle points for functions of a single variable.

αω ≪ 1, and f(ω) ≈ 1/(αω) when αω ≫ 1. Now if you sketch on a log-log plot nice
things happen. On one side log f = 0, and on the other log f = − log α − log ω or
“y = − log α − x”, a graph of slope −1 on a log-log plot. The crossover point is when
ω = 1/α. The function has no stationary points.
The sketch is in Figure 1.11(a).

Figure 1.11: Sketches of the two functions given as examples.

♣ Example: Sketch the function
f(t) = t cos(1/t) .   (1.47)
We know that cos(ωt) oscillates with angular frequency ω. Actually, given cos(φ(t)),
where φ(t) is phase, the instantaneous angular frequency (iaf) is |dφ/dt|. So if φ = ωt,
the iaf is ω. So in our case the iaf is 1/t 2 – it oscillates with infinite frequency at t = 0
then slows down. As t → ±∞, cos(1/t) → 1. We multiply all that by t, a straight
line through the origin. There are stationary points, and you may wish to locate these.
The sketch is in Figure 1.11(b).

(You are probably still a bit worried about what happens at t = 0. Oscillating with
infinite frequency sounds awkward, but the amplitude is zero, so no energy is involved.)
Note that the sketches (i) contain informative labels and (ii) do not appear to have
been drawn by a spider on drugs.

1.12 [*] Plotting functions


You will learn Matlab this year as part of your P5 laboratory work. Matlab, which is
available to load onto your own machines, is an interactive computing language and
it contains some powerful graphing tools. For example, some base-level code to plot
y = x exp(−x) is
x = linspace(-0.5, 4.0, 200);   % 200 x values between -0.5 and 4
y = x .* exp(-x);               % compute the y values
plot(x, y);                     % plot y against x
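As a possible follow-up, the same tools will sketch the low-pass gain of equation (1.46) on log-log axes (here α = 1 is an arbitrary illustrative choice):

alpha = 1;
w = logspace(-2, 2, 400);            % 400 log-spaced frequencies
f = 1 ./ sqrt(1 + alpha^2 * w.^2);   % the gain function (1.46)
loglog(w, f); grid on;
xlabel('\omega'); ylabel('f(\omega)');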

1.13 Summary
In this lecture we have

• Reviewed aspects of functions of a single variable


• Reviewed the limit concept
• Defined the derivative
• Derived the Product Rule and Leibnitz’s Rule
• Derived the Quotient rule
• Derived the Chain rule
• Applied our definitions to find from first principles the derivatives of basic functions
• Spent some time introducing hyperbolic functions
Lecture 2

Integration

This lecture develops integration of functions of a single variable. Much of the material
will be familiar, though aspects of derivatives of integrals may be new to you. There
are many, many examples in the recommended texts, and you should practise any type
of integral and technique of integration that causes you difficulty.

2.1 Definite Integrals


You will be familiar with the description of integration as a summation of the area under
a curve between two fixed values of the independent variable, as shown in Figure 2.1.

δAᵢ = f(xᵢ)δx   ⇒   A = lim_{δx→0} Σᵢ f(xᵢ)δx = ∫ₐᵇ f(x)dx .   (2.1)

The function being integrated, f (x), is called the integrand.

2.2 Indefinite Integrals


An indefinite integral results from integrating a function f (p) from some fixed value
p = a to a variable limit p = x. This results in a function of x
Φ(x) = ∫ₐˣ f(p)dp ,   (2.2)
from which it follows that
dΦ(x)/dx = f(x) .   (2.3)

Figure 2.1: The area under f(x) between x = a and x = b, built up from strips of width δx and height f(xᵢ).

We can derive this result in two ways. First, simply by noting that, if F is an antiderivative of f,
Φ(x) = F(x) − F(a)   (2.4)
⇒ dΦ(x)/dx = F′(x) − 0 = f(x) .
Or, second, by using the definition of differentiation:
dΦ(x)/dx = lim_{δx→0} { [ ∫ₐ^(x+δx) f(p)dp − ∫ₐˣ f(p)dp ] / δx }   (2.5)
         = lim_{δx→0} { [ ∫ₓ^(x+δx) f(p)dp ] / δx }   (2.6)
         = f(x)dx / dx = f(x) .   (2.7)

Figure 2.2: The limit of ∫ₓ^(x+δx) f(p)dp as δx tends to zero is f(x)dx.

2.3 Integration by parts


We can put the last result to use straightaway to derive the method of integration by
parts.
Because we have proved the product rule
d/dx (uv) = uv′ + u′v   (2.8)
it must be that
uv = ∫ uv′ dx + ∫ u′v dx   (2.9)
⇒ ∫ uv′ dx = uv − ∫ u′v dx .   (2.10)

So if we have an awkward product (uv′) to integrate, it may be that by choosing u
to be something that is “reduced” by differentiation and v′ as something that is easy to
integrate, we can arrive at a u′v that is easier to integrate.
♣ Example:
Q: Integrate x cos(x).
A: u = x reduces on differentiation, and v′ = cos(x) is easy to integrate.
u = x ,  v′ = cos(x) ;   u′ = 1 ,  v = sin(x)
⇒ ∫ x cos(x)dx = x sin(x) − ∫ 1·sin(x)dx   (2.11)
              = x sin(x) + cos(x) + K ,   (2.12)
which should be checked by differentiating.
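One quick way to do that check in Matlab is numerically, comparing the antiderivative in (2.12) with a numerical definite integral over an arbitrary interval (here [0, 2]):

F = @(x) x.*sin(x) + cos(x);                % antiderivative from (2.12)
numeric  = integral(@(x) x.*cos(x), 0, 2);  % numerical definite integral
analytic = F(2) - F(0);
fprintf('numeric %.10f  analytic %.10f\n', numeric, analytic);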



If we had tried x 2 cos(x) we would have used integration by parts twice. These sorts of
integral appear often when working out coefficients in Fourier Series, which you come
to later in the course. You need to be organized not to make a slip.
♣ Example:
Q: Integrate x² e^(x²).
A: Don’t immediately rush for the obvious — u = x², v′ = e^(x²) — because you will struggle to integrate v′. Instead
u = x ,  v′ = x e^(x²) ;   u′ = 1 ,  v = ½ e^(x²)   ⇒ DIY!   (2.13)

2.4 Recurrence formulae


Integration by parts can often be useful even when there does not seem to be an obvious
product. The technique allows one to “chip away” at the integral. Here is an example
that gives rise to a recurrence formula.
♣ Example:
Q: Find a general expression for ∫ cosⁿ(x)dx, and use it to find the particular result for n = 6.
A: Using the abbreviation c(x) = cos(x), s(x) = sin(x), etc:
u = cⁿ⁻¹(x) ,  v′ = c(x) ;   u′ = −(n − 1)cⁿ⁻²(x)s(x) ,  v = s(x)   (2.14)
⇒ ∫ cⁿ(x)dx = cⁿ⁻¹(x)s(x) + (n − 1) ∫ cⁿ⁻²(x)s²(x)dx   (2.15)
            = cⁿ⁻¹(x)s(x) + (n − 1) ∫ cⁿ⁻²(x)(1 − c²(x))dx   (2.16)
            = cⁿ⁻¹(x)s(x) + (n − 1) [ ∫ cⁿ⁻²(x)dx − ∫ cⁿ(x)dx ]   (2.17)
Thus, writing the original integral as Iₙ,
Iₙ = cⁿ⁻¹(x)s(x) + (n − 1)Iₙ₋₂ − (n − 1)Iₙ   (2.18)
⇒ nIₙ = cⁿ⁻¹(x)s(x) + (n − 1)Iₙ₋₂   (2.19)
⇒ Iₙ = (1/n)[ cⁿ⁻¹(x)s(x) + (n − 1)Iₙ₋₂ ] .   (2.20)
The particular example is
I₆ = (1/6)[ cos⁵(x) sin(x) + 5I₄ ]   (2.21)
   = (1/6)[ cos⁵(x) sin(x) + (5/4)( cos³(x) sin(x) + 3I₂ ) ]   (2.22)
   = (1/6)[ cos⁵(x) sin(x) + (5/4)( cos³(x) sin(x) + (3/2)[ cos(x) sin(x) + I₀ ] ) ]   (2.23)
   = (1/6)[ cos⁵(x) sin(x) + (5/4)( cos³(x) sin(x) + (3/2)[ cos(x) sin(x) + x ] ) ]   (2.24)
Watch out for I₀ = x. It is easy to be careless and write I₀ = cos⁰(x) = 1. But of
course I₀ = ∫ cos⁰(x)dx = ∫ 1 dx = x.
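The recurrence (2.20) also translates naturally into a short recursive Matlab function. The sketch below is an illustration only: the file name In_cos is a hypothetical choice, and n is assumed even so that the recursion bottoms out at I₀ = x.

function I = In_cos(n, x)
% Indefinite integral of cos^n via the recurrence (2.20), n assumed even.
if n == 0
    I = x;                                                 % I0 = int 1 dx = x
else
    I = (cos(x)^(n-1)*sin(x) + (n-1)*In_cos(n-2, x)) / n;  % equation (2.20)
end
end

% Usage sketch: compare the definite integral of cos^6 over [0, 1]:
%   In_cos(6, 1) - In_cos(6, 0)          % from the recurrence
%   integral(@(x) cos(x).^6, 0, 1)       % numerical check, should agree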

2.5 Substitution
Another very useful technique for simplifying integration is to change variables by re-
placing a function of the current variable with the new variable. Suppose the current
variable is x and the new variable is v where x = φ(v ) or, using the inverse function,
v = φ−1 (x).
We all know how this works, but it is worth formalizing the steps for a single variable,
so that comparisons can be drawn when dealing with multiple integration.
There are three steps:
1. Change the integrand to a function of v . If the integrand is f (x), the new function
of v is F (v ) = f (φ(v )).
   
2. Replace dx by (dx/dv)dv. Note that dx/dv = φ′(v) = 1/(φ⁻¹)′.
3. Replace the limits expressed in terms of x by limits in terms of v . For example,
vlower = φ−1 (xlower ) and so on.
♣ Example:
Q: Integrate ∫ₓ₌₁ᵉ (ln x)/x dx
A: Substitute u = ln x. Then
du/dx = 1/x ⇒ dx = x du, and the limits are u = ln 1 = 0 and u = ln e = 1.
The integral becomes
∫₀¹ u du = [u²/2]₀¹ = 1/2   ⇒   ∫ (ln x)/x dx = (ln x)²/2 ,  and  ∫₁ᵉ (ln x)/x dx = 1/2 .
2.5.1 Trigonometric and Hyperbolic substitutions
♣ Example:
∫ 1/√(a² − x²) dx    Subst: x = a sin(u) with dx = a cos(u)du   (2.25)
Hence
∫ 1/√(a² − x²) dx = ∫ a cos(u) / (a√(1 − sin²(u))) du   (2.26)
                  = ∫ du = u   (2.27)
                  = sin⁻¹(x/a) .   (2.28)



♣ Example:
∫ 1/√(a² + x²) dx    Subst: x = a sinh(u) with dx = a cosh(u)du   (2.29)
Hence
∫ 1/√(a² + x²) dx = ∫ a cosh(u) / (a√(1 + sinh²(u))) du   (2.30)
                  = ∫ du = u   (2.31)
                  = sinh⁻¹(x/a) .   (2.32)

♣ Example:
∫ 1/(a² + x²) dx    Subst: x = a tan(u) with dx = a sec²(u)du   (2.33)
Hence
∫ 1/(a² + x²) dx = ∫ a sec²(u) / (a²(1 + tan²(u))) du   (2.34)
                 = ∫ (1/a) du = u/a   (2.35)
                 = (1/a) tan⁻¹(x/a) .   (2.36)

♣ Example:
∫ 1/sin x dx    Subst: u = tan(x/2) with du = ½ sec²(x/2)dx ⇒ dx = 2 cos²(x/2)du   (2.37)
Don’t panic too soon that the RHS of dx is not entirely a function of u. Just carry on
to see whether things clear up.
⇒ ∫ 1/sin x dx = ∫ 2 cos²(x/2) / (2 sin(x/2) cos(x/2)) du   (2.38)
              = ∫ cot(x/2) du   (2.39)
              = ∫ du/u   (2.40)
              = ln u = ln tan(x/2)   (2.41)

2.6 Substitution p = x and p = −x


If we write the integral
∫ₐᵇ f(x)dx
one is perhaps tempted to think that the result is either a function of x, or that x is
special in some way. Neither is true: the result is a constant, and x is just a dummy
variable.
What happens if we substitute p = −x?
∫ₐᵇ f(x)dx = ∫_{−a}^{−b} f(−p)(−dp) = ∫_{−b}^{−a} f(−p)dp
So, for example, substituting x = −x gives
∫₁² eˣ dx = ∫_{−2}^{−1} e⁻ˣ dx .
Confused? Do it in two stages: p = −x then x = p.

2.7 Partial fractions


A result from elementary algebra: any real polynomial in x can be factored as

P (x) = (x − a1 )m1 (x − a2 )m2 · · · (x 2 + b1 x + c1 )n1 (x 2 + b2 x + c2 )n2 · · · (2.42)

This product consists only of linear terms and quadratic terms all raised to various
powers.
This looks remarkable, but it is surprisingly easy to see why.
Writing the roots (ie solutions) of the polynomial as r1 , r2 , etc, it must be that P (x) =
(x − r1 )(x − r2 ) · · · (x − ri ). Now to allow some of the roots to be repeated we could
write, more generally, P(x) = (x − r₁)^m₁ (x − r₂)^m₂ · · · (x − rᵢ)^mᵢ.
But if the coefficients of the polynomial are real, then either a root is real, or two roots
must be conjugate pairs 1 .
A real root is guaranteed to be delivered from a linear term (with r1 , etc, real), and
conjugate pairs can be guaranteed by the quadratics.
¹ The conjugate pair −b/(2a) ± i√(4ac − b²)/(2a) arises from the roots −b/(2a) ± √(b² − 4ac)/(2a) of the quadratic ax² + bx + c = 0 when b² < 4ac.

Hence our general rational function can be written as
P(x)/Q(x) = [ (x − a₁)^j₁ (x − a₂)^j₂ · · · (x² + b₁x + c₁)^k₁ (x² + b₂x + c₂)^k₂ · · · ] / [ (x − d₁)^m₁ (x − d₂)^m₂ · · · (x² + e₁x + f₁)^n₁ (x² + e₂x + f₂)^n₂ · · · ]   (2.43)
By analysing how ratios of linear and quadratic polynomials behave, it is straightforward
(but tedious) to determine that rational functions can be expressed as a sum of partial
fractions.
By analysing how ratios of linear and quadratic polynomials behave, it is straightforward
(but tedious) to determine that rational functions can be expressed as a sum of partial
fractions.
Various patterns emerge. The largest power of x in the numerator of a fraction is
always one less than that in the denominator. Excess powers in the numerator are pulled
outside, etc.
For example, in the first there is an excess of two, in the second an excess of 1, and in
the third no excess:
(x² + 2x + 1)/(x − 1) = Ax + B + C/(x − 1)   (2.44)
(x² + 2x + 1)/((x − 1)(x − 2)) = A + B/(x − 1) + C/(x − 2)   (2.45)
(x² + 2x + 1)/((x − 1)(x − 2)(x − 3)) = A/(x − 1) + B/(x − 2) + C/(x − 3)   (2.46)
(x² + 2x + 1)/((x − 1)(x − 2)²) = A/(x − 1) + (Bx + C)/(x − 2)²   (2.47)
(x − 1)³/(x − 2)² = Ax + B + (Cx + D)/(x − 2)²   (2.48)
1/((x − 1)(x − 2)) = A/(x − 1) + B/(x − 2)   (2.49)

2.7.1 Determining the coefficients - Method 1


To find the coefficients, one puts the RHS over the same common denominator as the
LHS then matches coefficients of x n in the numerator.
♣ Example:
Q: Use partial fractions to find ∫ (x − 1)³/(x − 2)² dx
A:
(x − 1)³/(x − 2)² = Ax + B + (Cx + D)/(x − 2)²   (2.50)
⇒ x³ − 3x² + 3x − 1 = A(x³ − 4x² + 4x) + B(x² − 4x + 4) + Cx + D
⇒ A = 1, B = 1, C = 3, D = −5

∫ (x − 1)³/(x − 2)² dx = ∫ [ x + 1 + (3x − 5)/(x − 2)² ] dx   (2.51)
                       = x²/2 + x + ∫ [ (3/2)·2(x − 2)/(x − 2)² + 1/(x − 2)² ] dx   (2.52)
                       = x²/2 + x + 3 ln(x − 2) − 1/(x − 2) + K   (2.53)

Note that the rearrangement between the first and second line is nothing to do with
partial fractions. It is done to make the numerator the derivative of the denominator.
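Matlab can also produce partial-fraction coefficients numerically with its residue function; a small sketch for the proper fraction (3x − 5)/(x − 2)² used above:

b = [3 -5];               % numerator coefficients of 3x - 5
a = conv([1 -2],[1 -2]);  % denominator (x - 2)^2 expanded to [1 -4 4]
[r, p, k] = residue(b, a)
% Expect r = [3; 1], p = [2; 2], k = [], i.e. 3/(x-2) + 1/(x-2)^2 as in (2.52).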

2.7.2 Determining the coefficients - Method 2


If the fractions are separated, a convenient method of finding coefficients is the cover
up rule. Take the third example in the list
(x² + 2x + 1)/((x − 1)(x − 2)(x − 3)) = A/(x − 1) + B/(x − 2) + C/(x − 3) .
Setting x = 1 causes the A/(x − 1) term on the RHS to blow up to ∞, making the
remaining terms negligible. That is
lim_{x→1} [ (x² + 2x + 1)/((x − 1)(x − 2)(x − 3)) ] = A/(x − 1)
The (x − 1) factor can be cancelled left and right, so that
lim_{x→1} [ (x² + 2x + 1)/((x − 2)(x − 3)) ] = A
or
A = 4/((1 − 2)(1 − 3)) = 2 .
You should work out B and C and then check against the coefficient matching method.
Although this “cover up” method is fast, its limited applicability to general expressions
might lead one to prefer matching coefficients.

2.8 Differentiation of Integrals


Earlier we noted that ∫ₐᵇ f(x)dx was a constant, and not a function of x. Hence
d/dx ∫ₐᵇ f(x)dx = 0 .
But suppose we were asked the following. What is
d/dx ∫ₐˣ f(x)dx ?
You may find it clearer to replace the dummy variable x in the integral with p: d/dx ∫ₐˣ f(p)dp.
If Φ(p) is the indefinite integral then
d/dx ∫ₐˣ f(p)dp = d/dx (Φ(x) − Φ(a)) = f(x) − 0 = f(x) .
But what if the limit is a function of x — g(x), say? This involves the Chain Rule:
d/dx ∫ₐ^(g(x)) f(p)dp = d/dx (Φ(g(x)) − Φ(a)) = g′(x)f(g(x)) .
And when both limits are functions of x:
d/dx ∫_{h(x)}^{g(x)} f(p)dp = g′(x)f(g(x)) − h′(x)f(h(x)) .

2.8.1 Differentiation “under the integral sign”


Suppose the function f(x) being integrated contains a parameter. For example f(x) = e^(−γx). The integral
∫ₐᵇ e^(−γx) dx
must be a function of γ — indeed, by doing the integral, that function is
F(γ) = (−1/γ)( e^(−γb) − e^(−γa) )
So it is perfectly proper to ask what is
d/dγ ∫ₐᵇ e^(−γx) dx ?

Work out the integral then differentiate ...
d/dγ ∫ₐᵇ e^(−γx) dx = d/dγ [ (−1/γ)( e^(−γb) − e^(−γa) ) ]   (2.54)
                    = (1/γ²)( e^(−γb) − e^(−γa) ) − (1/γ)( −b e^(−γb) + a e^(−γa) ) .   (2.55)
But we can do this another way. Because γ is nothing to do with the variable being
used for integration, and nothing to do with the limits, we can move the differentiation INSIDE the integral sign.
d/dγ ∫ₐᵇ e^(−γx) dx = ∫ₐᵇ d/dγ e^(−γx) dx = ∫ₐᵇ −x e^(−γx) dx
Look carefully! During the differentiation wrt γ, x is a constant.
To finish off, integrate by parts with
u = −x ,  v′ = e^(−γx) ;   u′ = −1 ,  v = (−1/γ)e^(−γx)
⇒ ∫ₐᵇ −x e^(−γx) dx = [ (x/γ) e^(−γx) ]ₐᵇ − (1/γ) ∫ₐᵇ e^(−γx) dx   (2.56)
                    = (1/γ)( b e^(−γb) − a e^(−γa) ) + (1/γ²)( e^(−γb) − e^(−γa) )
which is identical to our earlier result.



2.9 Examples using Integration


Applying integration to physical modelling is made much more straightforward when
you follow a routine.

1. Draw a diagram
2. Determine how the quantity Q of interest depends on the small change in the
system
dQ = f (x)dx

3. Write the integral with corresponding limits


∫_{Q₁}^{Q₂} dQ = ∫_{x₁}^{x₂} f(x)dx

4. Do it ...

♣ Example:
Q: A spring has spring-constant k. What is the work done extending it from a position
with extension x = a to x = b?
A: Consider extension to a general position x. The force required is F (x) = kx (Figure
2.3.) The infinitesimal element of work required to move it to x + dx is

Figure 2.3: Spring of stiffness k extended by x, and then to x + dx.

dW = F dx = kxdx (2.57)
Hence the total work done moving from extension x = a to x = b is
W = W_b − W_a = ∫ₐᵇ kx dx = (k/2)(b² − a²) .   (2.58)

♣ Example:
Q: A piston is at position x = a in a cylinder of length L that contains an ideal gas.
How much work is required to move the piston to position x = b if the process is
isothermal?
A: Isothermal means at a fixed temperature. The internal energy U of the ideal gas
depends only on temperature, so there is no change in U in an isothermal process. This
means that we only need to do work against pressure. The pressure is related to the

Figure 2.4: Piston at position x in a cylinder of length L and cross-section A; the gas at pressure p(x) occupies volume V(x).

volume by p(x)V (x) = nRT . At a general position x, the volume is V (x) = (L − x)A,
so p(x) = nRT/((L − x)A) and the element of work done pushing in the piston by dx is
dW = p(x)A dx = nRT dx/(L − x)   (2.59)
Hence the total work is
W = nRT ∫ₐᵇ dx/(L − x)   (2.60)
  = nRT [ −ln(L − x) ]ₐᵇ   (2.61)
  = nRT ln( (L − a)/(L − b) )   (2.62)

Note we could write this as W = nRT ln(Va /Vb ) where Va and Vb are the initial and final
volumes. As the process is isothermal, the work done on the gas must be extracted as
heat.

♣ Example:
Q: A straight rod of uniform cross section A and length L has volume density ρ(x) =
α(L − x/2). Determine (i) the rod’s mass and (ii) the location of its centre of mass.

Figure 2.5: A rod of length L, with an element of thickness dx at position x.

A: An infinitesimal section of length dx at x has mass

dM = ρ(x)dV = ρ(x)Adx (2.63)

Hence the total mass is


M = ∫₀ᴸ ρ(x)A dx = ∫₀ᴸ α(L − x/2)A dx = αA [ Lx − x²/4 ]₀ᴸ = αA (3/4)L² .   (2.64)

The centre of mass is defined as the position at which placing the total mass gives the
same torque as dealing “properly” with the distributed mass.

Figure 2.6: The element of mass dM between x and x + dx.

Consider the element of mass between x and x + dx. The element of torque about the
x = 0 point is

dΓ = xg(dM) = xgρ(x)Adx (2.65)

Hence the total torque is


Γ = g ∫₀ᴸ x ρ(x)A dx = gαA ∫₀ᴸ x(L − x/2) dx = gαA [ x²L/2 − x³/6 ]₀ᴸ = gαA L³/3 .   (2.66)

So
x_COM M g = gαA L³/3   ⇒   x_COM αA (3/4)L² = αA L³/3   ⇒   x_COM = (4/9) L   (2.67)

♣ Example:
Q: Find the average value of e⁻ˣ in the range x = 0 to x = 2.
A: The average value of any quantity over a region is the integral of the quantity over
the region, divided by the size of the region. Here the region is just an interval on the
x-axis.
Hence
Average = ∫_R f(x)dx / ∫_R dx = ∫₀² e⁻ˣ dx / ∫₀² dx   (2.68)
        = −(e⁻² − e⁰)/(2 − 0) = (1 − e⁻²)/2 .   (2.69)

♣ Example:
Q: Show that the root-mean-squared value (RMS value) of the alternating voltage V₀ sin ωt is V₀/√2.
A: To find the RMS value one finds the mean (or average) of the squared value, and
then takes its square root.
The function V₀ sin ωt is periodic in time with period T = 2π/ω. To find the mean (or
average) of the square:
MS = (1/T) ∫₀ᵀ V₀² sin² ωt dt = (1/T) ∫₀ᵀ V₀² (1/2)(1 − cos 2ωt) dt = V₀²/2
Taking the square root,
RMS = V₀/√2 .
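A quick Matlab check of this result, with V₀ = 1 and ω = 2π chosen purely for illustration:

V0 = 1; w = 2*pi; T = 2*pi/w;
ms = integral(@(t) (V0*sin(w*t)).^2, 0, T) / T;   % mean square over one period
fprintf('RMS = %.6f, V0/sqrt(2) = %.6f\n', sqrt(ms), V0/sqrt(2));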

2.10 Summary
In this lecture we

1. defined the definite and indefinite integral;


2. looked at some techniques in integration, such as integration by parts, recurrence
formulae, and substitution;
3. looked at how partial fractions allow us to integrate general rational functions;
4. saw how to differentiate integrals, which was a matter of deciding whether or not
the integral was a function of the variable under consideration; and
5. considered some examples of setting out integration applied to physical problems
Lecture 3

Series: Elementary, then Taylor & MacLaurin

This lecture on series starts by visiting arithmetic and geometric series, and introducing
the notion of the convergence of a series. There are a number of tests for convergence
— which to deploy is rather hit or miss¹.
Much, much more useful will be Taylor’s and MacLaurin’s series. Using the derivatives
at a point, these allow you to approximate functions in the neighbourhood of that
point, as a polynomial or power series in the small displacement. Every engineer on
the planet uses these.

3.1 Series and convergence


Consider an ordered sequence of items a1 , a2 , etc, where each is related to its neighbours
by some pattern. Adding the terms together forms a series. If there are an infinite
number of items the sum is an infinite series and the sum is written

S∞ = Σ_{i=1}^{∞} aᵢ .

If one adds up just a finite number of terms, the sum to n terms Sn is called a partial
sum.
An infinite series can either converge, which requires
lim_{n→∞} aₙ → 0   AND   lim_{n→∞} Sₙ → finite value ,   (3.1)
or it diverges. Some divergent series have partial sums that tend either to +∞ or
−∞; in others successive partial sums oscillate between ±∞.
¹ and so from 2018 they have been removed from the P1 syllabus.


A convergent series with positive and negative terms in it might be conditionally con-
vergent or absolutely convergent. If the sum
S∞ = a1 + a2 + a3 + . . . (3.2)
contains positive and negative terms they must cancel each other somewhat. We could
therefore ask a stronger question: does the following converge?
S′∞ = |a₁| + |a₂| + |a₃| + . . .   (3.3)
If it does, the original series is absolutely convergent.
We now examine two common series, and then as an aside look at a more general example.
1. The arithmetic progression
An arithmetic series or progression is a sequence of n numbers successive members of
which differ by the same amount d. That is, {a1 , (a1 +d), (a1 +2d), . . . , (a1 +(n−1)d)}.
The i -th term is ai = a1 + (i − 1)d.
Suppose we wanted the sum to the n-th term, Sₙ = Σ_{i=1}^{n} aᵢ. Write the sequence in
the forward and reverse directions, then add:
Sₙ = a₁ + (a₁ + d) + (a₁ + 2d) + . . . + (a₁ + (n − 1)d)
   = (a₁ + (n − 1)d) + (a₁ + (n − 2)d) + (a₁ + (n − 3)d) + . . . + a₁   (3.4)
The sum of corresponding terms is always 2a₁ + (n − 1)d and there are n of them, so
the sum is
Sₙ = ½ n(2a₁ + (n − 1)d) = ½ n(a₁ + aₙ) .   (3.5)
Convergence. The sum of this series always diverges as more terms are added. The
only series that remains finite is a string of zeros!
2. The geometric progression
A geometric series or progression is a sequence of n numbers successive members of
which are multiplied by a common factor r. That is, {a₁, ra₁, r²a₁, . . . , rⁿ⁻¹a₁}. The
i-th term is aᵢ = rⁱ⁻¹a₁. The sum to n terms is
Sₙ = a₁ + ra₁ + r²a₁ + . . . + rⁿ⁻¹a₁
rSₙ =      ra₁ + r²a₁ + . . . + rⁿ⁻¹a₁ + rⁿa₁   (3.6)
So
Sₙ(1 − r) = a₁(1 − rⁿ)   ⇒   Sₙ = a₁(1 − rⁿ)/(1 − r) .   (3.7)

Convergence. The sum of an infinite number of terms will diverge to ±∞ unless


|r| < 1. If this condition is satisfied, rⁿ → 0 and
S∞ = a₁/(1 − r) .   (3.8)
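A short Matlab sketch makes the convergence visible by plotting the partial sums (3.7) against the limit (3.8); a₁ = 1 and r = 0.8 are arbitrary illustrative choices:

a1 = 1; r = 0.8;
n  = 1:40;
Sn = a1*(1 - r.^n)/(1 - r);    % partial sums from (3.7)
plot(n, Sn, 'o', n, a1/(1-r)*ones(size(n)), '--');
xlabel('n'); ylabel('S_n');    % S_n flattens onto a_1/(1-r) = 5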

3. An example series where you need more advanced tests


Consider
S∞ = 1 + x + x²/2! + x³/3! + x⁴/4! + . . .   (3.9)
One might easily imagine that this series would converge only for, say, |x| < 1.
It would be helpful if there was just one test that would indicate whether a series
was convergent or not. Unfortunately, we are forced to consider a set of slightly
complicated tests, any one of which may indicate convergence or divergence.
In this example, by applying a ratio test to the moduli of the terms (D’Alembert’s ratio
test) one can show that the series is absolutely convergent for any x.

3.2 Some tests for convergence or not [*For interest only*]


** The detail is no longer in the P1 syllabus. This is for interest only. **

Below, we will refer to series sums as Σ_{n=1}^{∞} aₙ and Σ_{n=1}^{∞} bₙ as required.

Test 1: Simple test (for all series)


If lim_{n→∞} aₙ ≠ 0 the series DIVERGES.

Explanation: Limits are limits, and must not change if ∞ is increased by 1.

Test 2: Comparison Test (for a series of all positive terms)



If Σ_{n=1}^{∞} bₙ converges, and all aₙ ≤ bₙ for sufficiently large n, then Σ_{n=1}^{∞} aₙ CONVERGES.
If Σ_{n=1}^{∞} bₙ diverges, and all aₙ ≥ bₙ for sufficiently large n, then Σ_{n=1}^{∞} aₙ DIVERGES.

Comment: This test is obvious enough as the terms are all positive. But what does the “sufficiently
large n” mean? This caveat allows a finite number of terms at the start of the series to disobey the
condition. A finite number of terms can only contribute a finite sum, so this can be discounted. (For
example, suppose you found a convergent series that satisfied aₙ ≤ bₙ for all terms. Now just stick 47
as the first term of Series A and 0 as the first term of Series B. We now have a₁ > b₁, but Series A
must still converge.) For convenience we could just chop off the finite number of “misbehaving” terms,
and deal only with the rest of the series.

Test 3: Ratio Comparison Test (for a series of all positive terms)


If Σ_{n=1}^{∞} bₙ converges, and all aₙ₊₁/aₙ ≤ bₙ₊₁/bₙ for sufficiently large n, then Σ_{n=1}^{∞} aₙ CONVERGES.
If Σ_{n=1}^{∞} bₙ diverges, and all aₙ₊₁/aₙ ≥ bₙ₊₁/bₙ for sufficiently large n, then Σ_{n=1}^{∞} aₙ DIVERGES.

Explanation: Test 2 involved comparisons between series, which can be awkward. This test however
involves ratios within each series, and hence removes the character of the two individual series when
making the comparison.
Write
aₙ = (aₙ/aₙ₋₁)(aₙ₋₁/aₙ₋₂)···(a₂/a₁) a₁   which must be   ≤ (bₙ/bₙ₋₁)(bₙ₋₁/bₙ₋₂)···(b₂/b₁) a₁ .   (3.10)
Note the a₁ at the end of the RHS. So, dividing out,
aₙ ≤ (bₙ/b₁) a₁   ⇒   aₙ/a₁ ≤ bₙ/b₁ .   (3.11)
Now this is true for any n. It says that the aₙ are decreasing more rapidly relative to a₁ than the bₙ
are decreasing relative to b₁. But if Series B converges, so must Series A.
Test 4: D’Alembert’s Ratio Test (for a series of all positive terms)
If, for Σ_{n=1}^{∞} aₙ,
L∞ = lim_{n→∞} (aₙ₊₁/aₙ)   { < 1 : the series sum CONVERGES ;  > 1 : the series sum DIVERGES ;  = 1 : the series sum could do EITHER! }   (3.12)
Explanation: Note that this test does not involve comparison.
To prove the result, consider a value L of a ratio some way into the series, where L∞ < L < 1. We can
then choose ratios later in the series and be confident in writing
a_{t+1}/a_t < L ;   a_{t+2}/a_{t+1} < L ;   a_{t+3}/a_{t+2} < L ;   a_{t+4}/a_{t+3} < L ;   . . .   (3.13)
As all numbers are positive, we can therefore write
a_{t+1} < L a_t ;   a_{t+2} < L² a_t ;   a_{t+3} < L³ a_t ;   a_{t+4} < L⁴ a_t ;   . . .   (3.14)
and adding up we have
a_{t+1} + a_{t+2} + a_{t+3} + . . . < (L + L² + L³ + . . .) a_t = [L/(1 − L)] a_t ,   (3.15)
where the equality is from the sum of a gp. So we see that the infinite sum omitting a finite number of
initial terms converges, and hence the entire series converges. (The divergence outcome is obvious, and
you should read a textbook to find out why L∞ = 1 is uncertain.)

Test 5: Cauchy Integral Test (for a series of positive terms)
If Σ_{n=1}^{∞} aₙ consists of positive monotonically decreasing terms, and there exists a monotonically
decreasing function f(x) such that f(1) = a₁, f(2) = a₂, and so on, then if ∫₁^∞ f(x)dx
converges so does Σ_{n=1}^{∞} aₙ, and if the integral diverges, so does Σ_{n=1}^{∞} aₙ.

Explanation: The key to understanding this result is Figure 3.1.

Figure 3.1: Background to the Cauchy Integral Test.

The vertical bars at x = 1, 2, 3, . . . represent the values an at n = 1, 2, 3, . . ., and the curve is f (x).
X∞
Because the bars are separated by ∆x = 1, an is the area given by the sum of the rectangles. It is
n=1
evident that
N
X Z N+1
an > f (x)dx , (3.16)
r =1 1

which seems to offer no information about convergence. However, if you copy each excess area over
into the a1 rectangle, you see immediately that the excess will be less that a1 × 1 = f (1). Hence
X∞ Z ∞
0< an − f (x)dx < f (1) . (3.17)
n=1 1


Thus, if the integral converges, so must $\sum_{n=1}^{\infty} a_n$, and similarly for divergence.
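As a quick numerical aside (not part of the original notes), the sketch below checks inequality (3.17) in Python for the illustrative choice $a_n = 1/n^2$ and $f(x) = 1/x^2$, for which $\int_1^\infty f(x)\,dx = 1$.

# Check of 0 < sum(a_n) - integral(f) < f(1) for a_n = 1/n^2, f(x) = 1/x^2.
# Here the integral from 1 to infinity of x^-2 is exactly 1, and f(1) = 1;
# the true sum of the series is pi^2/6, roughly 1.6449.

N = 100000                      # truncate the infinite sum at a large N
partial = sum(1.0 / n**2 for n in range(1, N + 1))
integral = 1.0                  # exact value of the integral from 1 to infinity
excess = partial - integral
print(f"partial sum ~ {partial:.4f}, sum - integral ~ {excess:.4f}, f(1) = 1")
# excess is about 0.6449, which indeed lies between 0 and f(1) = 1.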

Obvious observation: (for a series of all negative terms)


A series of all negative terms can be turned into one of all positive terms by multiplying, er,
by (−1). Go back to Test 2 ...

Test 6: Leibnitz test (for a series of alternating terms)

If $\sum_{n=1}^{\infty} a_n$ consists of terms that alternate in sign and whose magnitude decreases monotonically and $\lim_{n\to\infty} a_n = 0$, then the series CONVERGES.
** End of the “for interest only” section **

3.3 Taylor’s series expansion


Of more immediate interest in engineering are series expansions, which allow us to
estimate values of a function from incomplete knowledge of the function. They are
very important!
Suppose we know the value $f(a)$ of a function $f(x)$ at $x = a$, and suppose we know at least some of its derivatives $f'(a)$, $f''(a)$, etc, all at $x = a$. Can we estimate the value of the function at $x = a + h$, where $h$ is some small offset?
The answer is yes, using

Taylor’s expansion to O(hn )

h2 00 h3 hn
f (a+h) = f (a)+hf 0 (a)+ f (a)+ f 000 (a)+. . .+ f (n) (a)+Rn+1 . (3.18)
2! 3! n!
where the remainder
hn+1 (n+1)
Rn+1 = f (ζ), a ≤ζ ≤a+h . (3.19)
(n + 1)!

You might ask, why not simply put $a + h$ into the function and compute the result? Two reasons: first, you might not actually know what $f(x)$ is. Second, the expansion provides an approximation to a function that is polynomial in the small quantity $h$. This is very useful — indeed we have already used the expansion several times in Lecture 1.
Let’s first think about the expansion, and then worry about the remainder term.

3.3.1 Understanding the expansion [**For amusement only**]


The zeroth order approximation would be to argue that $(a + h)$ is not far from $a$, so
$$f(a+h) \approx f(a) \qquad (3.20)$$
as illustrated in Figure 3.2(a).


But if we know the first derivative at $x = a$, we can make a 1st order correction of $C_1 = hf'(a)$, so that, as sketched in Figure 3.2(b),
$$f(a+h) \approx f(a) + hf'(a)\,. \qquad (3.21)$$

But suppose we also know $f''(a)$. This should allow us to estimate the extra correction arising from the extra gradient between $a$ and $a + h$.

Figure 3.2: (a) The zeroth order approximation $f(a+h) \approx f(a)$; (b) adding the 1st order correction.

The “extra” gradient at $x = a$ is zero, of course. What is the extra gradient $\hat{f}'$ at an intermediate point $p$, where $p$ varies from 0 to $h$? It is
$$\hat{f}'(p) = p\,f''(a)\,. \qquad (3.22)$$
So, between $p$ and $p + dp$ this will introduce an infinitesimal extra correction to the function
$$d\hat{f} = \hat{f}'(p)\,dp = p\,f''(a)\,dp\,. \qquad (3.23)$$
Integrating between 0 and $h$, the total extra correction arising from the rate of change of gradient is
$$C_2 = \int_0^h d\hat{f} = \int_0^h p\,f''(a)\,dp = f''(a)\int_0^h p\,dp = \frac{h^2}{2}f''(a)\,. \qquad (3.24)$$
To summarize so far:
$$C_1 = \int_0^h f'(a)\,dp = f'(a)\int_0^h dp = f'(a)\,h \qquad (3.25)$$
$$C_2 = \int_0^h \hat{f}'(p)\,dp = \int_0^h p\,f''(a)\,dp = f''(a)\int_0^h p\,dp = f''(a)\frac{h^2}{2!} \qquad (3.26)$$

The 3rd order correction requires us to integrate the “extra extra gradient” due to $f'''(a)$:
$$C_3 = \int_0^h \hat{\hat{f}}'(p)\,dp\,. \qquad (3.28)$$

To find $\hat{\hat{f}}'(p)$ at $p$ we have to integrate $\hat{f}''$ from 0 to $p$. That is,
$$\hat{\hat{f}}'(p) = \int_0^p \hat{f}''(q)\,dq\,. \qquad (3.29)$$
But the extra in $f''$ at $q$ is
$$\hat{f}''(q) = f'''(a)\,q\,. \qquad (3.30)$$
Sticking this all together,
$$C_3 = \int_0^h \left(\int_0^p f'''(a)\,q\,dq\right)dp = f'''(a)\int_0^h \frac{p^2}{2}\,dp = f'''(a)\frac{h^3}{3!}\,. \qquad (3.31)$$
We are on a roll now ...
$$\begin{aligned}
C_4 &= \int_0^h\!\int_0^p\left(\int_0^q f^{(4)}(a)\,r\,dr\right)dq\,dp = f^{(4)}(a)\int_0^h\!\int_0^p \frac{q^2}{2}\,dq\,dp \qquad (3.32)\\
&= f^{(4)}(a)\int_0^h \frac{p^3}{3!}\,dp = f^{(4)}(a)\frac{h^4}{4!} \qquad (3.33)
\end{aligned}$$
and so on and on! Adding all the corrections to $f(a)$ gives us the result.

Figure 3.3: Building up the higher-order corrections by integrating the extra gradient between $p$ and $p + dp$, for $p$ from 0 to $h$.
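Purely as a sanity check, and not part of the original derivation, the Python sketch below evaluates $f(a) + C_1 + C_2 + C_3$ for the illustrative choice $f(x) = \sin(x)$, $a = 1$, $h = 0.2$, and compares the result with the true $f(a + h)$.

import math

# Check that f(a) + C1 + C2 + C3 approximates f(a + h) for f(x) = sin(x).
a, h = 1.0, 0.2
f0 = math.sin(a)
C1 = h * math.cos(a)                 # h f'(a)
C2 = (h**2 / 2.0) * (-math.sin(a))   # (h^2/2!) f''(a)
C3 = (h**3 / 6.0) * (-math.cos(a))   # (h^3/3!) f'''(a)

approx = f0 + C1 + C2 + C3
exact = math.sin(a + h)
print(f"approx = {approx:.6f}, exact = {exact:.6f}, error = {abs(approx - exact):.2e}")
# The residual error is O(h^4): roughly (h^4/4!)|f''''(a)|, about 6e-5 here.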

3.4 Examples
Fortunately, Taylor’s expansion is easier to use than to prove. A reminder (leaving out
the remainder for now)
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!}f''(a) + \frac{h^3}{3!}f'''(a) + \ldots \qquad (3.34)$$
2! 3!

♣ Example:
Q: Find the series expansion of $e^x$ about $x = 1$.
A: This means $a = 1$ and our small quantity can be $h$.
Build a table of derivatives ...
$f(x) = e^x$, so $f(1) = e$
$f'(x) = e^x$, so $f'(1) = e$
$f''(x) = e^x$, so $f''(1) = e$
$f'''(x) = e^x$, so $f'''(1) = e$
So the series is
$$e^{(1+h)} = e\left(1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + \ldots\right) \qquad (3.35)$$
Suppose $h = 0.8$ and we use 5 terms. The series says
$$e^{1.8} \approx e\,[1 + 0.8 + 0.32 + 0.0853 + 0.0171\ldots] = 6.0411 \qquad (3.36)$$
and the exact value is
$$e^{1.8} = 6.0496\,, \qquad (3.37)$$
an error of 0.14%. If we include more terms, the error reduces further.
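The numbers above are easy to reproduce; the following short Python check (an addition to the notes, not part of them) sums the first five terms of the series.

import math

# Taylor expansion of e^x about x = 1, evaluated at x = 1.8 (h = 0.8), 5 terms.
h = 0.8
terms = [h**n / math.factorial(n) for n in range(5)]   # 1, h, h^2/2!, h^3/3!, h^4/4!
approx = math.e * sum(terms)
exact = math.exp(1.8)
print(f"5-term estimate = {approx:.4f}, exact = {exact:.4f}, "
      f"error = {100 * abs(approx - exact) / exact:.2f}%")
# Prints roughly 6.0411 versus 6.0496, an error of about 0.14%.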


♣ Example:
Q: Find the series expansion of $1/(1 + x)$ about $x = a$.
A: The fixed point is $a$ and the small thing is $h$. The table of derivatives is ...
$f(x) = (1 + x)^{-1}$
$f'(x) = -1\,(1 + x)^{-2}$
$f''(x) = -1\cdot -2\,(1 + x)^{-3}$
$f'''(x) = -1\cdot -2\cdot -3\,(1 + x)^{-4}$
In general, we can write
$$f^{(n)}(x) = (-1)^n\, n!\, \frac{1}{(1+x)^{n+1}}\,. \qquad (3.38)$$

Now multiply each by $h^n/n!$, and the series at $x = a$ becomes
$$\begin{aligned}
f(a+h) = \frac{1}{1+a+h} &= \frac{1}{(1+a)} - \frac{h}{(1+a)^2} + \frac{h^2}{(1+a)^3} - \frac{h^3}{(1+a)^4} + \ldots \qquad (3.39)\\
&= \frac{1}{(1+a)}\left[1 - \left(\frac{h}{1+a}\right) + \left(\frac{h}{1+a}\right)^2 - \left(\frac{h}{1+a}\right)^3 + \ldots\right].
\end{aligned}$$
3/10 LECTURE 3. SERIES: ELEMENTARY, THEN TAYLOR & MACLAURIN

To test with some numbers, let $a = 0.5$ and $h = 0.2$, so that $h/(1 + a) = 0.1333$. The series is
$$f(0.7) = \frac{2}{3}\left[1 - \left(\frac{0.2}{1.5}\right) + \left(\frac{0.2}{1.5}\right)^2 - \left(\frac{0.2}{1.5}\right)^3 + \left(\frac{0.2}{1.5}\right)^4 - \ldots\right] = 0.5883\,, \qquad (3.40)$$
while the exact value is $1/1.7 = 0.5882$.
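Again, a short numerical check in Python (an addition, not part of the notes):

# Taylor expansion of 1/(1 + x) about a = 0.5, evaluated at a + h with h = 0.2.
a, h = 0.5, 0.2
r = h / (1 + a)                                            # the ratio h/(1+a) = 0.1333...
approx = (1 / (1 + a)) * sum((-r)**n for n in range(5))    # five terms of the GP
exact = 1 / (1 + a + h)                                    # = 1/1.7
print(f"5-term estimate = {approx:.4f}, exact = {exact:.4f}")
# Prints roughly 0.5883 versus 0.5882.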

3.5 A source of confusion, cleared up


Different books appear to express Taylor’s expansion differently. Sometimes you’ll see
f (a + h), other times f (x + h), and elsewhere it will be f (a + x).
This is merely a different choice of symbols for the fixed point and small offset. The
second one is choosing x as the fixed point and h as the small thing, so that
$$f(x+h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \ldots \qquad (3.41)$$
whereas the third is choosing $a$ as the fixed point and $x$ as the small offset, so that
$$f(a+x) = f(a) + xf'(a) + \frac{x^2}{2!}f''(a) + \ldots \qquad (3.42)$$
3.6 The remainder term explained
You might care to read how the remainder term is arrived at. Here we only need to
explain what it means.
First, the remainder is not the “first neglected term” (that would be described as the
leading error term), but it is an estimate of the sum of ALL the neglected terms.
Second, notice that it is easy to remember. It looks almost exactly like “the next term”, but $f^{(n+1)}$ is not evaluated at $x = a$ but at $x = \zeta$, where $a \le \zeta \le (a + h)$.
But what value does $\zeta$ have exactly? The way to work it out is actually very straightforward. As $h$ is specified,
$$R_{n+1}(\zeta) = \frac{h^{n+1}}{(n+1)!}f^{(n+1)}(\zeta) \qquad (3.43)$$
is a function of $\zeta$ alone. So you can plot its value as $\zeta$ varies between $a$ and $a + h$, and use the largest magnitude value as the most pessimistic estimate of the remainder.
♣ Example:
Consider the previous example, with $a = 0.5$ and $h = 0.2$. We used terms to $n = 4$, so that
$$R_{n+1} = (-1)^{n+1}h^{n+1}\frac{1}{(1+\zeta)^{n+2}} \quad\Rightarrow\quad R_5 = (-1)^5\,0.2^5\,\frac{1}{(1+\zeta)^6}\,. \qquad (3.44)$$

Figure 3.4: $\zeta$ is somewhere between $a$ and $a + h$.

Figure 3.5: Plot $f^{(n+1)}(\zeta)$ between $\zeta = a$ and $\zeta = (a + h)$; some possible plots.

Now, by checking $dR_5/d\zeta$ we can see that there is no turning point. So the largest value must be when $\zeta$ takes its smallest value in this case: that is, at $\zeta = a = 0.5$.
$$\Rightarrow\quad \max R_5 = (-1)^5\,(0.2)^5\,\frac{1}{(1+0.5)^6}\,. \qquad (3.45)$$

Figure 3.6 plots the absolute value of the Remainder term and the absolute value of the
Error as a function of the number of terms used in the series. Of course, there are only
discrete values at n = 1, 2, 3, ..., but there appears to be a straight line dependence on
the log-linear axes. The important thing is that the actual Error is always smaller than
our estimate of the remainder. It is a conservative estimate.

Figure 3.6: abs(Remainder) and abs(Error) versus the number of terms used, on log-linear axes.
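The Python sketch below (an addition to the notes) reproduces the spirit of Figure 3.6 for this example: for each order $n$ it compares the worst-case remainder estimate, taken at $\zeta = a$, with the actual error of the expansion of $1/(1+x)$ about $a = 0.5$ with $h = 0.2$.

# Compare the remainder bound |R_{n+1}| (evaluated at zeta = a, the worst case)
# with the actual error of the Taylor expansion of f(x) = 1/(1+x) about a = 0.5.
a, h = 0.5, 0.2
exact = 1 / (1 + a + h)

for n in range(1, 9):
    # n-th order expansion: sum over k = 0..n of (-1)^k h^k / (1+a)^(k+1)
    approx = sum((-1)**k * h**k / (1 + a)**(k + 1) for k in range(n + 1))
    actual_error = abs(exact - approx)
    remainder_bound = h**(n + 1) / (1 + a)**(n + 2)   # |R_{n+1}| at zeta = a
    print(f"n={n}  |error|={actual_error:.2e}  |R_{n+1}| bound={remainder_bound:.2e}")
# The actual error is always smaller than the remainder bound, as claimed.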

3.7 MacLaurin’s Series

There is no further thought required to write down MacLaurin’s Series. It is simply a


Taylor Series about fixed point a = 0.
Because the fixed point is zero, this series is most often written using x as the small
quantity, rather than h.

MacLaurin’s Series to O(x n )

x 2 00 x3 xn
f (x) = f (0) + xf 0 (0) + f (0) + f 000 (0) + . . . + f (n) (0) + Rn+1 . (3.46)
2! 3! n!
where the remainder
x n+1 (n+1)
Rn+1 = f (ζ), 0≤ζ≤x . (3.47)
(n + 1)!

♣ Example:
Q: Determine the first four non-zero terms in the MacLaurin’s series for ln(1 + x), and
write down the general term.
A: The table of derivatives is:

$n$     $f^{(n)}(x)$                       $f^{(n)}(0)$
0       $\ln(1+x)$                         0
1       $1/(1+x)$                          1
2       $-1/(1+x)^2$                       $-1$
3       $2/(1+x)^3$                        2
4       $-3!/(1+x)^4$                      $-6$
$n$     $(-1)^{n+1}(n-1)!/(1+x)^n$         $(-1)^{n+1}(n-1)!$
Hence
$$\begin{aligned}
\ln(1+x) &= x - \frac{1}{2!}x^2 + \frac{2}{3!}x^3 - \frac{6}{4!}x^4 + \ldots + (-1)^{n+1}\frac{(n-1)!}{n!}x^n + \ldots \qquad (3.48)\\
&= x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \frac{1}{4}x^4 + \ldots + (-1)^{n+1}\frac{1}{n}x^n + \ldots \qquad (3.49)
\end{aligned}$$

This series expansion converges only for $-1 < x \le 1$. If $|x|$ is small just a few terms are required to reach a good approximation, and vice versa.
For example, $\ln(1.25) = 0.2231$ to 4 s.f., and the expansion will return that result with 6 terms; $\ln(1.75) = 0.5596$ to 4 s.f., but that needs 22 terms.
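The term counts quoted above can be checked numerically; the Python sketch below (an addition to the notes) keeps adding terms of the MacLaurin series until the partial sum first agrees with the true logarithm to 4 significant figures.

import math

def round_sig(v, sig=4):
    # Round v to `sig` significant figures.
    return round(v, sig - int(math.floor(math.log10(abs(v)))) - 1)

def terms_needed(x, sig=4):
    # Number of MacLaurin terms of ln(1+x) before the partial sum first agrees
    # with the true value to `sig` significant figures.
    target = round_sig(math.log(1 + x), sig)
    partial = 0.0
    for n in range(1, 200):
        partial += (-1)**(n + 1) * x**n / n      # nth term of the series
        if round_sig(partial, sig) == target:
            return n
    return None

for x in (0.25, 0.75):
    print(f"ln({1 + x}) = {round_sig(math.log(1 + x)):.4f}: {terms_needed(x)} terms for 4 s.f.")
# Expected: about 6 terms for ln(1.25) and about 22 for ln(1.75).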

3.8 De l’Hopital’s Rule for limits


You will recall that
$$\lim_{x\to 0}\frac{\sin(x)}{x} = 1 \qquad (3.50)$$
but that this was awaiting proof.
The underlying problem is that both numerator and denominator tend to zero, leaving an apparently indeterminate ratio.
Let's consider the more general problem of finding
$$\lim_{x\to a}\frac{f(x)}{g(x)} \qquad (3.51)$$
where $f(a) = 0$ and $g(a) = 0$.


We have exactly the tool to handle this in Taylor's expansion:
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{h\to 0}\frac{f(a) + hf'(a) + (h^2/2!)f''(a) + \ldots}{g(a) + hg'(a) + (h^2/2!)g''(a) + \ldots}\,. \qquad (3.52)$$

But both $f(a)$ and $g(a)$ are zero. Hence
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{h\to 0}\frac{f'(a) + (h/2!)f''(a) + \ldots}{g'(a) + (h/2!)g''(a) + \ldots} = \frac{f'(a)}{g'(a)}\,. \qquad (3.53)$$
This is
De l'Hopital's rule
If $f(a) = g(a) = 0$,
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \frac{f'(a)}{g'(a)}\,. \qquad (3.54)$$
Notice that the two derivatives are taken separately — you do NOT use the quotient
rule!
♣ Example:
Q: Find
$$\lim_{x\to 0}\frac{\sin(x)}{x}\,. \qquad (3.55)$$
A: Both numerator and denominator are zero, hence using De l'Hopital's rule
$$\lim_{x\to 0}\frac{\sin(x)}{x} = \frac{f'(0)}{g'(0)} = \frac{\cos(0)}{1} = 1\,. \qquad (3.56)$$

♣ Example:
Q: Find
$$\lim_{x\to 0}\frac{\sin(x^2)}{x^2}\,. \qquad (3.57)$$
A #1: If you are awake ... Substitute $u = x^2$ and take the limit as $u \to 0$. Obviously the answer is the same as before, unity.
A #2: However, suppose you hadn't spotted that and just used De l'Hopital's rule
$$\lim_{x\to 0}\frac{\sin(x^2)}{x^2} = \frac{f'(0)}{g'(0)} = \left.\frac{2x\cos(x^2)}{2x}\right|_{x=0} = \cos(0)\,. \qquad (3.58)$$

Now you are allowed to divide through by 2x and hence the answer is cos(0) = 1.
A #3: However however, suppose you had not spotted even that, and just said that it is $0/0$. Going back to the series expansion
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{h\to 0}\frac{f(a) + hf'(a) + (h^2/2!)f''(a) + \ldots}{g(a) + hg'(a) + (h^2/2!)g''(a) + \ldots} \qquad (3.59)$$
you'll see that if $f(a) = g(a) = 0$ and $f'(a) = g'(a) = 0$ then
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \frac{f''(a)}{g''(a)}\,. \qquad (3.60)$$
This allows us to write a supercharged version ...
De l'Hopital's rule (fuller statement)
If $f(a) = g(a) = 0$ and all $f^{(n)}(a) = g^{(n)}(a) = 0$ up to some number $n = j$, then
$$\lim_{x\to a}\frac{f(x)}{g(x)} = \frac{f^{(j+1)}(a)}{g^{(j+1)}(a)}\,. \qquad (3.61)$$
So in our example we can keep going!
$$\lim_{x\to 0}\frac{\sin(x^2)}{x^2} = \frac{f''(0)}{g''(0)} = \left.\frac{2\cos(x^2) - 4x^2\sin(x^2)}{2}\right|_{x=0} = 1\,. \qquad (3.62)$$

Notice that the limit you eventually reach does not have to be finite.
♣ Example:
Q: Find
$$\lim_{x\to 0}\frac{\ln(x+1)}{x^2}\,. \qquad (3.63)$$

A: Apply De l'Hopital's rule
$$\lim_{x\to 0}\frac{\ln(1+x)}{x^2} = \left.\frac{1/(x+1)}{2x}\right|_{x=0} \to \infty\,. \qquad (3.64)$$

De L’Hopital’s rule really just involves series expansions, which means that an equivalent
method of solving such problems is simple to write down the series expansions. As the
limit in the last example is taken about x = 0, we could use the MacLaurin expansion
for ln(1 + x) where x is small. We are interested in

ln(1 + x) x − 21 x 2 + 13 x 3 − . . . 1 1 1 1 2
= = − + x − x ... (3.65)
x2 x2 x 2 3 4
which obviously shoots off to infinity.
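A crude numerical confirmation, added here as an aside: evaluating $\ln(1+x)/x^2$ in Python for a sequence of shrinking $x$ shows it growing roughly like $1/x$.

import math

# Evaluate ln(1+x)/x^2 for decreasing x; the values grow roughly like 1/x,
# consistent with the limit as x -> 0 being infinite.
for x in (0.1, 0.01, 0.001, 0.0001):
    value = math.log(1 + x) / x**2
    print(f"x = {x:7}:  ln(1+x)/x^2 = {value:12.2f},  1/x = {1/x:10.1f}")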

3.9 Using Taylor’s to estimate derivatives numerically


Another very useful application of Taylor's expansion is in the estimation of the derivatives of an unknown function for which one only has information at discrete values of the independent variable, as shown in Figure 3.7.
This is very useful computationally, as measurements are often obtained at discrete
points or at discrete times.

Figure 3.7: The function value is known only at discrete values of $x$.

We know $f(a - h)$, $f(a)$, $f(a + h)$, and so on, but we want to estimate the derivatives. Using Taylor's expansion
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!}f''(a) + \frac{h^3}{3!}f'''(a) + \ldots \qquad (3.66)$$
So we could estimate
$$f(a+h) - f(a) \approx hf'(a) + \frac{h^2}{2!}f''(a) \qquad (3.67)$$

or
$$f'(a) \approx \frac{f(a+h) - f(a)}{h} \qquad (3.68)$$
with a leading error term $\frac{h}{2!}f''(a)$ that is $O(h)$ (order $h$, or proportional to $h$). This means that if you make $h$ smaller by a factor of 10, the error in the estimate is expected to reduce by a factor of 10.
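The $O(h)$ behaviour is easy to see numerically. The Python sketch below (an addition to the notes) applies the forward-difference estimate (3.68) to the illustrative choice $f(x) = \sin(x)$ at $a = 1$: each time $h$ shrinks by a factor of 10, so, roughly, does the error.

import math

# Forward-difference estimate of f'(a) for f(x) = sin(x) at a = 1.
# The error should shrink roughly in proportion to h (it is O(h)).
a = 1.0
exact = math.cos(a)
for h in (0.1, 0.01, 0.001, 0.0001):
    estimate = (math.sin(a + h) - math.sin(a)) / h
    print(f"h = {h:7}:  estimate = {estimate:.6f},  error = {abs(estimate - exact):.2e}")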
But we can do better. Again using Taylor's expansion
$$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!}f''(a) + \frac{h^3}{3!}f'''(a) + \ldots \qquad (3.69)$$
$$f(a-h) = f(a) - hf'(a) + \frac{h^2}{2!}f''(a) - \frac{h^3}{3!}f'''(a) + \ldots \qquad (3.70)$$
Now subtract ...

Blank because it is a tutorial exercise

3.10 Summary
In this lecture we have considered how to test the convergence properties of infinite series. We then saw how Taylor's expansion uses the known derivatives of a function $f(x)$ at a point to estimate values of the function at some small displacement from the point. We saw that MacLaurin's series is just Taylor's expansion applied at $x = 0$.
Finally we saw how Taylor's expansion can explain De L'Hopital's theorem for discovering the actual limit of a quotient function which naively has the value 0/0.
