P1 Calculus 1 Notes
4 lectures, MT 2020 Prof. Ton van den Bremer (with thanks to Prof. David Murray)
Rev October 5, 2020 [email protected]
Course Page: Canvas
Introduction
Engineering involves the study of natural physical and biological systems, with the aim
of turning them to the use of mankind.
Systems have inputs and outputs. What happens between is often captured by a
differential equation which describes how the outputs vary not only with the input
values, but also with their rates of change and their higher derivatives too. The calculus
is exactly the mathematical machinery you need to understand and describe these
variations.
There are four courses in P1 that will advance your understanding of physical systems
and their description using calculus.
Some of the material in the four lectures of Calculus 1 will be revision. But the
emphasis here is not on how to crank out that particular derivative or that integral,
but on understanding the fundamentals. Later, it will help you make sense of calculus
in many variables.
Syllabus
The function concept. Definition and simple properties of elementary functions. Dif-
ferentiation of a product, quotient and function of a function.
Elementary integration, including substitutions, integration by parts, partial fractions,
tan half-angle, recursive formulae.
Elementary series: sum to n terms of linear and geometric series. The concept of a
limit. MacLaurin and Taylor expansions in one variable and their use for linearization of
elementary functions. The error term. De l’Hôpital’s theorem.
At the end of the course students should be able to:
Lecture Content
Somewhat inconveniently, the material seems to split into three topics to be covered
over the four lecture slots.
Tutorial Sheets
The tutorial sheet directly associated with this course is
• 1P1A Calculus 1
But performing just the examples in 1P1A is not really enough to secure your knowledge.
Each of the texts below contains many worked and unworked problems to practise on.
• Riley, Hobson, and Bence Mathematical Methods for Physics and Engineering: A
Comprehensive Guide CUP. 3rd ed (2006)
High quality stuff throughout, and useful for 1st and 2nd year material.
• Stephenson, G. (1973) Mathematical Methods for Science Students, Longman
Scientific & Technical, 2nd Ed.,
It is rather dated and is splendidly divorced from applications, but good for its
compressed presentation and for problem bashing.
• James, G. Modern Engineering Mathematics, Pearson/Prentice-Hall.
Chapters 7 and 6 cover most of the material here, and give some engineering
context. You may become impatient with the long-winded explanations.
• Howatson, Lund and Todd, Engineering Tables and Data, but always referred to
as HLT. Written by members of the Department, HLT was updated in 2009 by
Profs McFadden and Smith, and in 2018 by Prof Blakeborough.
HLT is a mine of information, and, importantly, it is available during examinations.
You need to have HLT under your pillow, and, even better, know what’s in there.
Lecture 1
1.1 Introduction
The Calculus 1 course is concerned with the calculus of a single independent variable.
We also restrict ourselves to real variables. Whether dealing with a single variable or,
later, multiple variables, calculus is based (of course) on the concept of number, but
also on two further outstandingly important concepts. First is the concept of the
function, and the second is the concept of the limit.
Our aim in this first lecture is to revise these ideas and to use them to define the
derivative. We shall first skip through some definitions and jargon associated with
functions, touching on types of function, and then revise limits and continuity. That
will allow us to define the derivative. With that definition we can derive simple rules
like the Product and Quotient Rules. We will see how Leibnitz’s Rule helps to
find higher order derivatives of products. We then revise the Chain Rule associated
with functions of a function. Then, for completeness, we apply the definition of the
derivative to work out derivatives for the different basic types of function mentioned at
the start of the lecture.
Last in this lecture we give an overview of hyperbolic functions, which may be unfa-
miliar.
When defining functions, we should take care to define the nature of the inputs and
outputs. Mathematicians describe the input’s domain and the output’s codomain,
which might be the set of real numbers, integers, complex numbers, trees, graphs, etc.
Here we are concerned with scalar real variables.
We must also take care to define the interval in which the function definition is valid.
If there is a y which is completely determined by x in some interval a ≤ x ≤ b, then we
say that y is a function of x, y = f (x), in that interval. x is the independent variable,
and y the dependent variable.
This might seem over-fussy. But when programming you will have to take similar care
with the types of variables and intervals of validity when writing functions in computer
code. Otherwise, programs might crash, or will quietly return bogus results.
Figure 1.2: Two continuous functions, and a discontinuous function.
continuous. Some functions might appear acceptably “smooth to the eye” because a
good number of higher derivatives are continuous. A function f is said to be of class C^k when its derivatives up to f^(k) are continuous. (See piecewise functions below.)
(d) Monotonic increasing/decreasing functions.
If increasing x causes y = f (x) always to increase (or decrease), then the function is
monotonic increasing (or decreasing). Figure 1.3 shows a monotonic increasing func-
tion, y = log x. The first derivative or gradient always has the same sign.
(e) Inverse functions The function y = f (x) defines a mapping between x and y , and
hence also one between y and x as x = φ(y ). Functions f and φ are inverses, written
f = φ^(−1). The inverse might be easy to write down explicitly, eg
y = ax + b ⇒ x = y/a − b/a (1.1)
and
y = exp x ⇒ x = ln(y) (1.2)
or may be only implicit, eg
y = e^x + ln(x) ⇒ x = ? (1.3)
This is interesting. For every x-value we can evaluate y , and hence are perfectly able
to plot x against y — but there is no function descriptor for the variation.
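Even without a closed-form inverse, the mapping can still be inverted numerically. As a rough sketch (not part of the notes), bisection works here because y = e^x + ln(x) is monotonic increasing for x > 0; the bracket and tolerance below are arbitrary illustrative choices.

import math

def f(x):
    # y = exp(x) + ln(x): monotonic increasing for x > 0, so invertible there
    return math.exp(x) + math.log(x)

def inverse_f(y, lo=1e-9, hi=10.0, tol=1e-12):
    # Bisection for the x with f(x) = y; widen the bracket if y is large.
    while f(hi) < y:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1.3
print(x, inverse_f(f(x)))   # recovers x even though no formula for the inverse exists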
(f) Parametric functions.
It is sometimes convenient to define variables parametrically. Instead of considering
y = f(x), we are given y = y(p) and x = x(p). For example, instead of writing
y = ±√(1 − x²) to define a circle, one might write
x = cos(p), y = sin(p). (1.4)
Obviously, p is the parameter.
(g) Odd and even functions.
If
f(−x) = −f(x), the function is ODD;
f(−x) = +f(x), the function is EVEN. (1.5)
A function can, of course, be neither even nor odd.
Figure 1.4: An (a) even function and (b) odd function. Note that a continuous odd function must have f(0) = 0.
You will come across functions described as piecewise linear, piecewise quadratic, etc,
in your engineering courses. The idea is very simple. A piecewise function is one made
by joining together different functions defined over neighbouring intervals. Notice that
the piecewise linear function in Figure 1.5 is continuous, but is only C^0 smooth. A piecewise quadratic function can be made C^1 smooth, meaning that the gradients are continuous, and so on. Bézier curves, devised by de Casteljau and used by French engineer Pierre Bézier in the early 1960s to design beautiful French car bodies, are piecewise cubic curves which are C^2, making the curvature continuous. Très élégant.
(b) Algebraic: Roots of functions. Simple examples include roots of general rational functions, eg √(x² + 1).
(c) Transcendental: Exponential and logarithmic functions. Functions ex , ln x are
the most obvious, but this class includes other exponentiated functions like ax .
Note that
ln a^x = x ln a → a^x = e^(x ln a). (1.8)
(d) Transcendental: Hyperbolic functions. These are made from exponentials. See
Section 1.9.
(e) Transcendental: Trigonometric functions. These are the functions involving sin, cos, tan, cot, sec and cosec. They are also called circular functions. Other periodic functions are included.
(f) Specials. You will come across many specials throughout your course — sinc functions, Error functions, Gamma functions, Bessel functions, and so on, and two important discontinuous functions: the Heaviside step function u(x), and the Dirac delta function, δ(x).
an η such that
|f(x) − ℓ| < ε for all 0 < |x − a| < η. (1.10)
Why 0 < |x − a| and not 0 ≤ |x − a|? The notation x → a means that x keeps
approaching, but never reaches, the value of a, so that there is no need to consider
x = a. Indeed, the function does not even need to be defined for x = a (as, of course,
we saw with sin(x)/x and x → 0.)
Figure 1.6: However small you make ε, it is possible to find an η that satisfies the condition (keep zooming in).
It is therefore possible for different limits to exist either side of a discontinuity, as shown
in Figure 1.7. If the discontinuity is at x = a, the limits from below and above would
be written
lim_{x→a−} f(x) = p,   lim_{x→a+} f(x) = q, (1.11)
where a− indicates a value infinitesimally smaller than a, and a+ one infinitesimally bigger.
Figure 1.7: A function can tend to different limits either side of a discontinuity.
The limits of all three functions as x → 0 are unity. But f (0) is undefined, while
g(0) = 1 and h(0) = 42. You'll see that f(x) has an undefined "gap" in it, g(x) is the only continuous function, while h(x) is defined but discontinuous.
Notice how this fits in with the example of g(x) given above, and how f (x) and h(x)
violate the conditions.
• The quotient of two continuous functions is continuous at every point where the
denominator is non-zero.
And note too the different meanings of df and δf, dx and δx. The δf is a small but finite difference, resulting from a small but finite difference δx in x. In the limit as δx tends to 0, the changes become infinitesimally small (that is, no longer finite) and become dx and df. (This distinction may seem trivial, but it is not, and it will help you greatly if you keep it clear in your mind.)
♣ Example:
Q: Find dy /dx when y = f (x) = x 2 .
A:
df/dx = lim_{δx→0} [(x + δx)² − x²]/δx = lim_{δx→0} [x² + 2xδx + (δx)² − x²]/δx (1.16)
The x² terms cancel, and we can neglect terms in (δx)² in comparison with terms in δx, the more so in the limit as δx → 0. Hence
df/dx = lim_{δx→0} 2xδx/δx = 2x. (1.17)
This result allows us to handle any polynomial. If we had already proven the Quotient Rule, we would be able to differentiate any general rational function.
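The limiting process is easy to watch numerically; a small illustration (mine, not the notes'), evaluating [f(x + δx) − f(x)]/δx for shrinking δx at x = 3, where the answer should approach 2x = 6:

f = lambda x: x**2
x = 3.0
for dx in (1.0, 0.1, 0.01, 1e-4, 1e-6):
    # the finite-difference ratio tends to the derivative 2x = 6 as dx shrinks
    print(dx, (f(x + dx) - f(x)) / dx)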
d/dx (fg) = f'g + fg'. (1.18)
We can prove this from first principles by writing
d/dx (fg) = lim_{δx→0} [f(x + δx)g(x + δx) − f(x)g(x)]/δx. (1.19)
We can use the theorems given earlier to write this as
d/dx (fg) = {[lim_{δx→0} f(x + δx)][lim_{δx→0} g(x + δx)] − f(x)g(x)} / lim_{δx→0} δx. (1.20)
But we have already shown in the definition of the derivative that
lim_{δx→0} f(x + δx) = f(x) + (df/dx) lim_{δx→0} δx = f + f' lim_{δx→0} δx, (1.21)
so that we have
d/dx (fg) = lim_{δx→0} [fg + f'gδx + fg'δx + f'g'(δx)² − fg]/δx = f'g + fg'. (1.22)
d²/dx² (fg) = f''g + f'g' + f'g' + fg'' = f''g + 2f'g' + fg'', (1.23)
d³/dx³ (fg) = GRIND AWAY = f'''g + 3f''g' + 3f'g'' + fg'''. (1.24)
You see the binomial coefficients emerging, and can guess what happens next. So did
Leibnitz ...
Leibnitz’s Rule
d^n/dx^n (fg) = Σ_{i=0}^{n} (n choose n−i) f^(n−i) g^(i). (1.25)
Often the use of Σ makes things harder to "see". Unpacking the sum and the binomial coefficients gives
d^n/dx^n (fg) = f^(n) g + n f^(n−1) g' + [n(n−1)/2!] f^(n−2) g'' + [n(n−1)(n−2)/3!] f^(n−3) g''' + ... + f g^(n).
Perhaps the most immediately useful application of Leibnitz's Rule is to products in which one factor has only a few non-zero derivatives.
♣ Example:
Q: Find the nth derivative of x³e^(2x).
A: First decide which function to choose as f and which as g. In absolute terms it does
not matter, but, having written the series out starting from a particular end, it is much
more convenient if we choose g to disappear quickly when we start differentiating it.
Clearly in this case it is most convenient to set g(x) = x³ and f(x) = e^(2x). Now we need to think about the individual derivatives. A moment's thought gives
d^n/dx^n [x³e^(2x)] = e^(2x) [2^n x³ + n 2^(n−1) 3x² + (n(n−1)/2) 2^(n−2) 6x + (n(n−1)(n−2)/6) 2^(n−3) 6]
= e^(2x) 2^n [x³ + (3n/2) x² + (3n(n−1)/4) x + n(n−1)(n−2)/8].
You might want to try this using just the product rule — reserve an evening for it.
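If you have sympy available, the boxed result can at least be spot-checked for a particular n; a sketch with n = 5 (an arbitrary choice):

import sympy as sp

x = sp.symbols('x')
n = 5
direct = sp.diff(x**3 * sp.exp(2*x), x, n)          # let sympy grind away
formula = sp.exp(2*x) * 2**n * (x**3 + sp.Rational(3, 2)*n*x**2
                                + sp.Rational(3, 4)*n*(n - 1)*x
                                + sp.Rational(1, 8)*n*(n - 1)*(n - 2))
print(sp.simplify(direct - formula))                 # prints 0 if the Leibnitz result is right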
♣ Example:
This is an example of how things become messy when taking the n-th derivative.
Q: Find the nth derivative of x cos(x).
A: You want g to disappear quickly, so a good choice is of course
g(x) = x and f(x) = cos(x). (1.28)
What is the mth derivative of cos(x)? It is worth writing a few down to find the pattern.
f^(0) = cos(x),  f^(1) = −sin(x),  f^(2) = −cos(x),  f^(3) = sin(x),  f^(4) = cos(x)
Evidently, for m even it is (−1)^(m/2) cos(x), but for m odd it is (−1)^((m+1)/2) sin(x).
Now we have different overall results for n odd and even. Notice too that if n is even, the second term f^(n−1) requires you to set m = (n−1) in the ODD derivative expression (and if there were a third term, m = (n−2) in the EVEN derivative, and so on).
For n even [check this after the lecture!]
d^n/dx^n [x cos(x)] = (−1)^(n/2) cos(x)·[x] + n(−1)^((n−1+1)/2) sin(x)·[1] (1.29)
= (−1)^(n/2) (x cos(x) + n sin(x)),
♣ Example:
Q: Find the nth derivative of (x³ + 2x) dy/dx.
A: Choose
f = dy/dx,   g = (x³ + 2x), (1.31)
and notice that f^(n) = y^(n+1). Then
d^n/dx^n (fg) = y^(n+1)(x³ + 2x) + n y^(n)(3x² + 2) + (n(n−1)/2) y^(n−1)(6x) + (n(n−1)(n−2)/6) y^(n−2)(6). (1.32)
This could be tidied up. But the main point is that this example points to the possibility of taking the nth derivative of an ordinary differential equation.
This requires very little proof. When δx → 0 and becomes dx, the corresponding infinitesimal dg in the numerator is exactly the same as that in the denominator giving rise to df. We are allowed to treat d(whatever) as a number, and hence divide through by it.
♣ Example:
Q: Differentiate y = exp(x 2 ) with respect to x.
A:
y = exp(g),  g = x²;  dy/dg = exp(g),  dg/dx = 2x  ⇒  dy/dx = 2x exp(x²). (1.34)
♣ Example:
Q: Differentiate y = exp(cos(x 2 )) with respect to x.
A: This is a triple decker ...
y = exp(g),  g = cos(h),  h = x²;
dy/dg = exp(g),  dg/dh = −sin(h),  dh/dx = 2x (1.35)
⇒ dy/dx = (dy/dg)(dg/dh)(dh/dx) = exp(g)(−sin(h))(2x) = exp(cos(x²))(−sin(x²))2x. (1.36)
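A quick symbolic check of (1.36), assuming sympy is to hand:

import sympy as sp

x = sp.symbols('x')
y = sp.exp(sp.cos(x**2))
dydx = sp.diff(y, x)
print(dydx)                                                          # -2*x*exp(cos(x**2))*sin(x**2)
print(sp.simplify(dydx - sp.exp(sp.cos(x**2))*(-sp.sin(x**2))*2*x))  # 0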
Figure 1.8: The circular functions contrasted with the hyperbolic functions. The values of sinh and cosh are found as projections of points on the hyperbola x² − y² = 1 onto the y and x axes. This is similar to the projection of a point on a circle onto the axes to obtain sin and cos.
But the explicit definitions of the hyperbolic cosine and sine, cosh and sinh, are
cosh x = (e^x + e^(−x))/2,   sinh x = (e^x − e^(−x))/2   ⇒   tanh x = sinh x / cosh x = (e^x − e^(−x))/(e^x + e^(−x)). (1.42)
These are defined for −∞ < x < ∞, and are plotted in Figure 1.9. The other hyperbolic
functions like sech, cosech, etc, are defined as expected in terms of sinh and cosh.
1.9.1 Properties
Figure 1.9: Plots of (a) cosh(x), (b) sinh(x), (c) tanh(x) and (d) a comparison for x > 0 of cosh, sinh and 0.5 exp(x).
d/dx cosh x = sinh x,   d/dx sinh x = cosh x,   d/dx tanh x = sech² x. (1.44)
cosh(−a) = cosh(a)
sinh(−a) = − sinh(a)
cosh(a + b) = sinh(a) sinh(b) + cosh(a) cosh(b)
sinh(a + b) = cosh(a) sinh(b) + sinh(a) cosh(b)
cosh(a) + sinh(a) = ea
cosh(a) − sinh(a) = e−a
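All of these follow from the exponential definitions and are easy to spot-check numerically; a small sketch with arbitrary test values a = 0.7, b = −1.3:

import math

a, b = 0.7, -1.3
checks = [
    ("cosh(-a) - cosh(a)",     math.cosh(-a) - math.cosh(a)),
    ("sinh(-a) + sinh(a)",     math.sinh(-a) + math.sinh(a)),
    ("cosh(a+b) identity",     math.cosh(a+b) - (math.sinh(a)*math.sinh(b) + math.cosh(a)*math.cosh(b))),
    ("sinh(a+b) identity",     math.sinh(a+b) - (math.cosh(a)*math.sinh(b) + math.sinh(a)*math.cosh(b))),
    ("cosh(a)+sinh(a) - e^a",  math.cosh(a) + math.sinh(a) - math.exp(a)),
    ("cosh(a)-sinh(a) - e^-a", math.cosh(a) - math.sinh(a) - math.exp(-a)),
]
for name, err in checks:
    print(name, err)   # every residual should be rounding error only, ~1e-16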
But why is there such interest in these functions? Two related reasons are that, first, if you give cosh and sinh an imaginary argument, sin and cos pop out. For example, cosh(ix) = cos(x) and sinh(ix) = i sin(x).
Second, cos t and sin t and cosh t and sinh t are solutions to the differential equation
a d²y/dt² + b dy/dt + cy = 0.
So while cos t and sin t handle oscillations in a system, cosh t and sinh t can be used to describe time-decaying solutions.
Figure 1.10: Maximum, minimum, inflexion and saddle points for functions of a single variable.
f(ω) ≈ 1 when αω ≪ 1, and f(ω) ≈ 1/(αω) when αω ≫ 1. Now if you sketch this on a log-log plot nice things happen. On one side log f = 0, and on the other log f = −log α − log ω, or "y = −log α − x", a graph of slope −1 on a log-log plot. The crossover point is when ω = 1/α. The function has no stationary points.
The sketch is in Figure 1.11(a).
(You are probably still a bit worried about what happens at t = 0. Oscillating with
infinite frequency sounds awkward, but the amplitude is zero, so no energy is involved.)
Note that the sketches (i) contain informative labels and (ii) do not appear to have
been drawn by a spider on drugs.
1.13 Summary
In this lecture we have
Lecture 2
Integration
This lecture develops integration of functions of a single variable. Much of the material
will be familiar, though aspects of derivatives of integrals may be new to you. There
are many many examples in the recommended texts, and you should practise any type
of integral and technique of integration that causes you difficulty.
δA_i = f(x_i)δx   ⇒   A = lim_{δx→0} Σ_i f(x_i)δx = ∫_a^b f(x)dx. (2.1)
Figure 2.1: The area under the curve between x = a and x = b is approximated by strips of width δx and height f(x_i).
We can derive this result in two ways. First, simply by noting that
Figure 2.2: The limit of ∫_x^(x+δx) f(p)dp as δx tends to zero is f(x)dx.
u = x,  v' = cos(x);  u' = 1,  v = sin(x)
⇒ ∫ x cos(x)dx = x sin(x) − ∫ 1·sin(x)dx (2.11)
= x sin(x) + cos(x) + K. (2.12)
If we had tried x 2 cos(x) we would have used integration by parts twice. These sorts of
integral appear often when working out coefficients in Fourier Series, which you come
to later in the course. You need to be organized not to make a slip.
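For reference, a symbolic check of (2.12), and of the "by parts twice" case, assuming sympy is available (sympy omits the constant K):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * sp.cos(x), x))      # x*sin(x) + cos(x)
print(sp.integrate(x**2 * sp.cos(x), x))   # the integration-by-parts-twice case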
♣ Example:
Q: Integrate x²e^(x²).
A: Don't immediately rush for the obvious — u = x², v' = e^(x²) — because you will struggle to integrate v'. Instead
u = x,  v' = xe^(x²);  u' = 1,  v = ½e^(x²)  ⇒  DIY! (2.13)
2.4 Recurrence Formulae
⇒ ∫ c^n(x)dx = c^(n−1)(x)s(x) + (n−1) ∫ c^(n−2)(x)s²(x)dx (2.15)
= c^(n−1)(x)s(x) + (n−1) ∫ c^(n−2)(x)(1 − c²(x))dx (2.16)
= c^(n−1)(x)s(x) + (n−1) [∫ c^(n−2)(x)dx − ∫ c^n(x)dx], (2.17)
writing c ≡ cos and s ≡ sin for brevity.
Watch out for I_0 = x. It is easy to be careless and write I_0 = cos^0(x) = 1. But of course I_0 = ∫ cos^0(x)dx = ∫ 1 dx = x.
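Rearranging (2.17) gives the recurrence I_n = (1/n)cos^(n−1)(x)sin(x) + ((n−1)/n)I_(n−2), which drops straight into a little recursive routine. The sketch below (my own illustration) evaluates ∫_0^b cos^n x dx this way and compares it with a crude midpoint-rule sum; b = 1.2 is an arbitrary choice.

import math

def I(n, b):
    # integral of cos^n(x) from 0 to b via the recurrence
    # n*I_n = cos^(n-1)(b)*sin(b) + (n-1)*I_(n-2);  I_0 = b (not 1!),  I_1 = sin(b)
    if n == 0:
        return b
    if n == 1:
        return math.sin(b)
    return (math.cos(b)**(n - 1) * math.sin(b) + (n - 1) * I(n - 2, b)) / n

def midpoint(n, b, steps=100000):
    h = b / steps
    return h * sum(math.cos((k + 0.5) * h)**n for k in range(steps))

b = 1.2
for n in (2, 5, 8):
    print(n, I(n, b), midpoint(n, b))   # the two columns should agree to many figures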
2.5 Substitution
Another very useful technique for simplifying integration is to change variables by re-
placing a function of the current variable with the new variable. Suppose the current
variable is x and the new variable is v where x = φ(v ) or, using the inverse function,
v = φ−1 (x).
We all know how this works, but it is worth formalizing the steps for a single variable,
so that comparisons can be drawn when dealing with multiple integration.
There are three steps:
1. Change the integrand to a function of v . If the integrand is f (x), the new function
of v is F (v ) = f (φ(v )).
2. Replace dx by (dx/dv)dv. Note that dx/dv = φ'(v) = 1/(φ^(−1))'.
3. Replace the limits expressed in terms of x by limits in terms of v . For example,
vlower = φ−1 (xlower ) and so on.
♣ Example:
Q: Integrate ∫_{x=1}^{e} (ln x)/x dx.
A: Substitute u = ln x. Then
du/dx = 1/x ⇒ dx = x du, and the limits are u = ln 1 = 0 and u = ln e = 1.
The integral becomes
∫_0^1 u du = [½u²]_0^1   ⇒   ∫ (ln x)/x dx = ½(ln x)², and ∫_1^e (ln x)/x dx = ½.
2.5.1 Trigonometric and Hyperbolic substitutions
♣ Example:
∫ 1/√(a² − x²) dx.   Subst: x = a sin(u) with dx = a cos(u)du (2.25)
Hence
∫ 1/√(a² − x²) dx = ∫ a cos(u) / (a√(1 − sin²(u))) du (2.26)
= ∫ du = u (2.27)
♣ Example:
∫ 1/√(a² + x²) dx.   Subst: x = a sinh(u) with dx = a cosh(u)du (2.29)
Hence
∫ 1/√(a² + x²) dx = ∫ a cosh(u) / (a√(1 + sinh²(u))) du (2.30)
= ∫ du = u (2.31)
♣ Example:
∫ 1/(a² + x²) dx.   Subst: x = a tan(u) with dx = a sec²(u)du (2.33)
Hence
∫ 1/(a² + x²) dx = ∫ a sec²(u) / (a²(1 + tan²(u))) du (2.34)
= ∫ (1/a) du = u/a (2.35)
= (1/a) tan^(−1)(x/a). (2.36)
♣ Example:
∫ 1/sin x dx.   Subst: u = tan(x/2) with du = ½ sec²(x/2)dx ⇒ dx = 2cos²(x/2)du (2.37)
Don’t panic too soon that the RHS of dx is not entirely a function of u. Just carry on
to see whether things clear up.
⇒ ∫ 1/sin x dx = ∫ 2cos²(x/2) / (2 sin(x/2)cos(x/2)) du (2.38)
= ∫ cot(x/2) du (2.39)
= ∫ du/u (2.40)
= ln u = ln tan(x/2) (2.41)
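A sketch checking (2.41) by differentiating the answer back (assuming sympy; the numeric line is a belt-and-braces check at an arbitrary point):

import sympy as sp

x = sp.symbols('x')
deriv = sp.diff(sp.log(sp.tan(x/2)), x)
print(sp.simplify(deriv))                            # should reduce to 1/sin(x)
print((deriv - 1/sp.sin(x)).subs(x, 0.7).evalf())    # ~0 at x = 0.7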
one is perhaps tempted to think that the result is either a function of x, or that x is
special in some way. Neither is true, the result is a constant, and x is just a dummy
variable.
What happens if we substitute p = −x?
∫_a^b f(x)dx = ∫_{−a}^{−b} f(−p)(−dp) = ∫_{−b}^{−a} f(−p)dp.
This product consists only of linear terms and quadratic terms all raised to various
powers.
This looks remarkable, but it is surprisingly easy to see why.
Writing the roots (ie solutions) of the polynomial as r_1, r_2, etc, it must be that P(x) = (x − r_1)(x − r_2)···(x − r_i). Now to allow some of the roots to be repeated we could write, more generally, P(x) = (x − r_1)^(m_1) (x − r_2)^(m_2) ··· (x − r_i)^(m_i).
But if the coefficients of the polynomial are real, then each root is either real or one of a complex-conjugate pair¹.
A real root is guaranteed to be delivered by a linear term (with r_1, etc, real), and conjugate pairs can be guaranteed by the quadratics.
¹ The conjugate pair −b/(2a) ± i√(4ac − b²)/(2a) arises from the roots −b/(2a) ± √(b² − 4ac)/(2a) of the quadratic ax² + bx + c = 0 when b² < 4ac.
∫ (x − 1)³/(x − 2)² dx = ∫ [ x + 1 + (3x − 5)/(x − 2)² ] dx (2.51)
= x²/2 + x + ∫ [ (3/2)·2(x − 2)/(x − 2)² + 1/(x − 2)² ] dx (2.52)
= x²/2 + x + 3 ln(x − 2) − 1/(x − 2) + K (2.53)
Note that the rearrangement between the first and second line is nothing to do with
partial fractions. It is done to make the numerator the derivative of the denominator.
or
A = 4/((1 − 2)(1 − 3)) = 2.
You should work out B and C and then check against the coefficient matching method.
Although this “cover up” method is fast, its limited applicability to general expressions
might lead one to prefer matching coefficients.
2.8 Differentiation of Integrals
You may find it clearer to replace the dummy variable x in the integral with p: d/dx ∫_a^x f(p)dp.
But we can do this another way. Because γ is nothing to do with the variable being
used for integration, and nothing to do with the limits, we can move the differentia-
tion INSIDE the integral sign.
d/dγ ∫_a^b e^(−γx) dx = ∫_a^b d/dγ e^(−γx) dx = ∫_a^b −x e^(−γx) dx.
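A numerical sketch of that interchange (my own illustration, with arbitrary values a = 0, b = 1, γ = 2): the left-hand side is approximated by a centred difference in γ, the right-hand side by direct quadrature.

import math

a, b, gamma = 0.0, 1.0, 2.0

def quad(f, lo, hi, steps=20000):
    # simple midpoint rule
    h = (hi - lo) / steps
    return h * sum(f(lo + (k + 0.5) * h) for k in range(steps))

def F(g):
    # F(gamma) = integral from a to b of exp(-gamma*x) dx
    return quad(lambda x: math.exp(-g * x), a, b)

eps = 1e-5
lhs = (F(gamma + eps) - F(gamma - eps)) / (2 * eps)       # d/dgamma of the integral
rhs = quad(lambda x: -x * math.exp(-gamma * x), a, b)     # integral of the gamma-derivative
print(lhs, rhs)                                           # the two agree to several figures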
1. Draw a diagram
2. Determine how the quantity Q of interest depends on the small change in the
system
dQ = f (x)dx
4. Do it ...
♣ Example:
Q: A spring has spring-constant k. What is the work done extending it from a position with extension x = a to x = b?
A: Consider extension to a general position x. The force required is F (x) = kx (Figure
2.3.) The infinitesimal element of work required to move it to x + dx is
Figure 2.3: Spring of stiffness k extended by x, and then to x + dx.
dW = F dx = kxdx (2.57)
Hence the total work done moving from extension x = a to x = b is
W = W_b − W_a = ∫_a^b kx dx = (k/2)(b² − a²). (2.58)
♣ Example:
Q: A piston is at position x = a in a cylinder of length L that contains an ideal gas. How much work is required to move the piston to position x = b if the process is isothermal?
A: Isothermal means at a fixed temperature. The internal energy U of the ideal gas
depends only on temperature, so there is no change in U in an isothermal process. This
means that we only need to do work against pressure. The pressure is related to the
volume by p(x)V (x) = nRT . At a general position x, the volume is V (x) = (L − x)A,
so p(x) = nRT /(L − x)A and the element of work done pushing in the piston by dx is
dW = p(x)A dx = nRT dx/(L − x). (2.59)
Hence the total work is
W = nRT ∫_a^b dx/(L − x) (2.60)
= nRT [−ln(L − x)]_a^b (2.61)
= nRT ln[(L − a)/(L − b)]. (2.62)
Note we could write this as W = nRT ln(Va /Vb ) where Va and Vb are the initial and final
volumes. As the process is isothermal, the work done on the gas must be extracted as
heat.
♣ Example:
Q: A straight rod of uniform cross section A and length L has volume density ρ(x) =
α(L − x/2). Determine (i) the rod’s mass and (ii) the location of its centre of mass.
The centre of mass is defined as the position at which placing the total mass gives the
same torque as dealing “properly” with the distributed mass.
Figure 2.6: The element of mass dM between x and x + dx contributes an element of torque dΓ.
Consider the element of mass between x and x + dx. The element of torque about the
x = 0 point is
So
x_COM M g = gαA L³/3   ⇒   x_COM αA (3/4)L² = αA L³/3   ⇒   x_COM = (4/9)L. (2.67)
♣ Example:
Q: Find the average value of e−x in the range x = 0 and x = 2.
A: The average value of any quantity over a region is the integral of the quantity over
the region, divided by the size of the region. Here the region is just an interval on the
x-axis.
Hence
Average = ∫_R f(x)dx / ∫_R dx = ∫_0^2 e^(−x)dx / ∫_0^2 dx (2.68)
= −(e^(−2) − e^0)/(2 − 0) = (1/2)(1 − e^(−2)). (2.69)
♣ Example:
Q: Show that the root-mean-squared value (RMS value) of the alternating voltage V_0 sin ωt is V_0/√2.
A: To find the RMS value one finds the mean (or average) of the squared value, and
then takes its square root.
The function V0 sin ωt is periodic in time with period T = 2π/ω. To find the mean (or
average) of the square
MS = (1/T) ∫_0^T V_0² sin²ωt dt = (1/T) ∫_0^T V_0² (1/2)(1 − cos 2ωt) dt = V_0²/2
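A numerical check of the mean-square result; V_0 = 3 and ω = 5 are arbitrary illustrative values.

import math

V0, w = 3.0, 5.0
T = 2 * math.pi / w
steps = 200000
h = T / steps
ms = sum((V0 * math.sin(w * (k + 0.5) * h))**2 for k in range(steps)) * h / T
print(math.sqrt(ms), V0 / math.sqrt(2))   # the RMS value and V0/sqrt(2) agree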
2.10 Summary
In this lecture we
Lecture 3
Series: Elementary, then Taylor & MacLaurin
This lecture on series starts by visiting arithmetic and geometric series, and introducing
the notion of the convergence of a series. There are a number of tests for convergence
— which to deploy is rather hit or miss1 .
Much much more useful will be Taylor’s and MacLaurin’s series. Using the derivatives
at a point, these allow you to approximate functions in the neighbourhood of that
point, as a polynomial or power series in the small displacement. Every engineer on
the planet uses these.
If one adds up just a finite number of terms, the sum to n terms Sn is called a partial
sum.
An infinite series can either converge, which requires the partial sums S_n to tend to a finite limit as n → ∞, or it diverges. Some divergent series have partial sums that tend either to +∞ or −∞; in others successive partial sums oscillate between ±∞.
¹ and so from 2018 they have been removed from the P1 syllabus.
A convergent series with positive and negative terms in it might be conditionally con-
vergent or absolutely convergent. If the sum
S∞ = a1 + a2 + a3 + . . . (3.2)
contains positive and negative terms they must cancel each other somewhat. We could
therefore ask a stronger question: does the following converge?
S'_∞ = |a_1| + |a_2| + |a_3| + ... (3.3)
If it does, the original series is absolutely convergent.
We now examine two common series, and then as an aside look at a more general example.
1. The arithmetic progression
An arithmetic series or progression is a sequence of n numbers successive members of
which differ by the same amount d. That is, {a1 , (a1 +d), (a1 +2d), . . . , (a1 +(n−1)d)}.
The i -th term is ai = a1 + (i − 1)d.
Suppose we wanted the sum to the n-th term, S_n = Σ_{i=1}^{n} a_i. Write the sequence out again in reverse order and add the two term by term: each pair sums to 2a_1 + (n − 1)d, so S_n = (n/2)[2a_1 + (n − 1)d].
Comment: This test is obvious enough as the terms are all positive. But what does the "sufficiently large r" mean? This caveat allows a finite number of terms at the start of the series to disobey the condition. A finite number of terms can only contribute a finite sum, so this can be discounted. (For example, suppose you found a convergent series that satisfied a_n ≤ b_n for all terms. Now just stick 47 as the first term of Series A and 0 as the first term of Series B. We now have a_1 > b_1, but Series A must still converge.) For convenience we could just chop off the finite number of "misbehaving" terms, and deal only with the rest of the series.
Explanation: Test 2 involved comparisons between series, which can be awkward. This test however
involves ratios within each series, and hence removes the character of the two individual series when
making the comparison.
Write
a_n = (a_n/a_{n−1})(a_{n−1}/a_{n−2}) ··· (a_2/a_1) a_1, which must be ≤ (b_n/b_{n−1})(b_{n−1}/b_{n−2}) ··· (b_2/b_1) a_1. (3.10)
a_{t+1} < L a_t;   a_{t+2} < L² a_t;   a_{t+3} < L³ a_t;   a_{t+4} < L⁴ a_t;   ... (3.14)
The vertical bars at x = 1, 2, 3, ... represent the values a_n at n = 1, 2, 3, ..., and the curve is f(x).
Because the bars are separated by Δx = 1, Σ_{n=1}^{∞} a_n is the area given by the sum of the rectangles. It is evident that
Σ_{n=1}^{N} a_n > ∫_1^{N+1} f(x)dx, (3.16)
which seems to offer no information about convergence. However, if you copy each excess area over into the a_1 rectangle, you see immediately that the excess will be less than a_1 × 1 = f(1). Hence
0 < Σ_{n=1}^{∞} a_n − ∫_1^∞ f(x)dx < f(1). (3.17)
Thus, if the integral converges, so must Σ_{n=1}^{∞} a_n, and similarly for divergence.
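The sandwich in (3.16)-(3.17) is easy to see numerically for a_n = 1/n² with f(x) = 1/x² (my own illustration): the partial sum always exceeds ∫_1^{N+1} dx/x², yet exceeds ∫_1^∞ dx/x² = 1 by less than f(1) = 1, so the series converges.

import math

N = 100000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
integral_to_N1 = 1.0 - 1.0 / (N + 1)           # integral of 1/x^2 from 1 to N+1
integral_to_inf = 1.0                          # integral of 1/x^2 from 1 to infinity
print(partial > integral_to_N1)                # True, as in (3.16)
print(0 < partial - integral_to_inf < 1.0)     # True, as in (3.17) with f(1) = 1
print(partial, math.pi**2 / 6)                 # the sum is heading for pi^2/6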
f(a + h) = f(a) + hf'(a) + (h²/2!)f''(a) + (h³/3!)f'''(a) + ... + (h^n/n!)f^(n)(a) + R_(n+1), (3.18)
where the remainder
R_(n+1) = (h^(n+1)/(n+1)!) f^(n+1)(ζ),   a ≤ ζ ≤ a + h. (3.19)
You might ask, why not simply put a + h into the function and compute the result? Two reasons: first, you might not actually know what f(x) is. Second, the expansion
provides an approximation to a function that is polynomial in the small quantity h. This
is very useful — indeed we have already used the expansion several times in Lecture 1.
Let’s first think about the expansion, and then worry about the remainder term.
f (a + h) ≈ f (a) (3.20)
But suppose we also know f 00 (a). This should allow us to estimate the extra correction
arising from the extra gradient between a and a + h.
Figure 3.2: The zeroth-order estimate f(a), and the first-order correction built up between a and a + h.
The "extra" gradient at x = a is zero, of course. What is the extra gradient f̂'(p) at an intermediate point p, where p varies from 0 to h? It is
f̂'(p) = p f''(a). (3.22)
So, between p and p + dp this will introduce an infinitesimal extra correction to the
function
df̂ = f̂'(p)dp = p f''(a) dp. (3.23)
Integrating between 0 and h, the total extra correction arising from the rate of change
of gradient is
C_2 = ∫_0^h df̂ = ∫_0^h p f''(a) dp = f''(a) ∫_0^h p dp = f''(a) h²/2. (3.24)
To summarize so far:
C_1 = ∫_0^h f'(a)dp = f'(a) ∫_0^h dp = f'(a)h, (3.25)
C_2 = ∫_0^h f̂'(p)dp = ∫_0^h p f''(a)dp = f''(a) ∫_0^h p dp = f''(a) h²/2!. (3.26)
The 3rd order correction requires us to integrate the “extra extra gradient” due to
f 000 (a).
C_3 = ∫_0^h f̂̂'(p)dp. (3.28)
Figure 3.3: Building up the correction by integrating over p from 0 to h, between a and a + h.
3.4 Examples
Fortunately, Taylor’s expansion is easier to use than to prove. A reminder (leaving out
the remainder for now)
f(a + h) = f(a) + hf'(a) + (h²/2!)f''(a) + (h³/3!)f'''(a) + ... (3.34)
♣ Example:
Q: Find the series expansion of ex about x = 1.
A: This means a = 1 and our small quantity can be h.
Build a table of derivatives ...
f(x) = e^x,  f(1) = e
f'(x) = e^x,  f'(1) = e
f''(x) = e^x,  f''(1) = e
f'''(x) = e^x,  f'''(1) = e
So the series is
e^(1+h) = e [1 + h + h²/2! + h³/3! + ...]. (3.35)
Suppose h = 0.8 and we use 5 terms. The series gives e(1 + 0.8 + 0.32 + 0.0853 + 0.0171) ≈ 6.041, while the exact value is e^1.8 = 6.0496.
f(a + h) = 1/(1 + a + h) = 1/(1 + a) − h/(1 + a)² + h²/(1 + a)³ − h³/(1 + a)⁴ + ... (3.39)
= [1/(1 + a)] [1 − h/(1 + a) + (h/(1 + a))² − (h/(1 + a))³ + ...].
To test with some numbers, let a = 0.5 and h = 0.2, so h/(1 + a) = 0.1333. The series is
f(0.7) = (2/3) [1 − 0.2/1.5 + (0.2/1.5)² − (0.2/1.5)³ + (0.2/1.5)⁴ − ...] = 0.5883, (3.40)
while the exact value is 1/1.7 = 0.5882.
Figure 3.4: ζ is somewhere between a and a + h.
Figure 3.5: Some possible plots of f^(n+1)(ζ) between ζ = a and ζ = a + h.
Now, by checking dR5 /dζ we can see that there is no turning point. So the largest
value must be when ζ takes its smallest value in this case: that is, at ζ = a = 0.5.
⇒ max R_5 = (−1)⁵ (0.2)⁵ / (1 + 0.5)⁶. (3.45)
Figure 3.6 plots the absolute value of the Remainder term and the absolute value of the
Error as a function of the number of terms used in the series. Of course, there are only
discrete values at n = 1, 2, 3, ..., but there appears to be a straight line dependence on
the log-linear axes. The important thing is that the actual Error is always smaller than
our estimate of the remainder. It is a conservative estimate.
Figure 3.6: The absolute value of the Remainder estimate and of the actual Error, plotted against the number of terms used in the series (log-linear axes).
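The comparison in Figure 3.6 can be reproduced for the 1/(1 + x) example above (a = 0.5, h = 0.2); a sketch, where the bound uses the worst-case ζ = a:

a, h = 0.5, 0.2
exact = 1.0 / (1.0 + a + h)

S = 0.0
for n in range(1, 11):
    k = n - 1
    S += (-1)**k * h**k / (1.0 + a)**(k + 1)    # add the k-th term of the expansion
    error = abs(exact - S)
    bound = h**n / (1.0 + a)**(n + 1)           # remainder estimate with zeta = a
    print(n, error, bound, error <= bound)      # the estimate always exceeds the actual error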
f(x) = f(0) + xf'(0) + (x²/2!)f''(0) + (x³/3!)f'''(0) + ... + (x^n/n!)f^(n)(0) + R_(n+1), (3.46)
where the remainder
R_(n+1) = (x^(n+1)/(n+1)!) f^(n+1)(ζ),   0 ≤ ζ ≤ x. (3.47)
♣ Example:
Q: Determine the first four non-zero terms in the MacLaurin’s series for ln(1 + x), and
write down the general term.
A: The table of derivatives is:
This series expansion will converge only for |x| < 1. If |x| is small just a few terms are
required to reach a good approximation, and vice versa.
For example, ln(1.25) = 0.2231 to 4 s.f., and the expansion will return that result with 6 terms; ln(1.75) = 0.5596 to 4 s.f., but that needs 22 terms.
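Those term counts are easy to reproduce; a sketch that adds terms (−1)^(n+1)x^n/n until the rounded partial sum settles on the 4-significant-figure value:

import math

def terms_needed(x, places=4, max_terms=200):
    target = round(math.log(1.0 + x), places)   # 4 d.p. ~ 4 s.f. for values around 0.2-0.6
    S, needed = 0.0, None
    for n in range(1, max_terms + 1):
        S += (-1)**(n + 1) * x**n / n
        if round(S, places) != target:
            needed = None                       # not matching yet (or has drifted out again)
        elif needed is None:
            needed = n                          # start of the final run of matches
    return needed

print(terms_needed(0.25))   # for ln(1.25)
print(terms_needed(0.75))   # for ln(1.75); compare with the counts quoted above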
♣ Example:
Q: Find
lim_{x→0} sin(x²)/x² (3.57)
A #1: If you are awake ... Substitute u = x 2 and take the limit as u → 0. Obviously
the answer is the same as before, unity.
A #2: However, suppose you hadn’t spotted that and just used De l’Hopital’s rule
lim_{x→0} sin(x²)/x² = f'(x)/g'(x) = [2x cos(x²)/(2x)] at x = 0. (3.58)
Now you are allowed to divide through by 2x and hence the answer is cos(0) = 1.
A #3: However however, suppose you had not spotted even that, and just said that it is 0/0. Going back to the series expansion
Notice that the limit you eventually reach does not have to be finite.
♣ Example:
Q: Find
lim_{x→0} ln(x + 1)/sin(x) (3.63)
De L'Hopital's rule really just involves series expansions, which means that an equivalent method of solving such problems is simply to write down the series expansions. As the limit in the last example is taken about x = 0, we could use the MacLaurin expansion for ln(1 + x) where x is small. We are interested in
ln(1 + x)/x² = (x − x²/2 + x³/3 − ...)/x² = 1/x − 1/2 + x/3 − x²/4 + ... (3.65)
which obviously shoots off to infinity.
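sympy's limit() will confirm these; a quick sketch (the third limit shows the "need not be finite" point):

import sympy as sp

x = sp.symbols('x')
print(sp.limit(sp.sin(x**2) / x**2, x, 0))         # 1
print(sp.limit(sp.log(1 + x) / sp.sin(x), x, 0))   # 1
print(sp.limit(sp.log(1 + x) / x**2, x, 0, '+'))   # oo: the limit is not finite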
Figure: f(x) known at the sample points a − h, a, a + h, a + 2h.
We know f (a − h), f (a), f (a + h), and so on, but we want to estimate the derivatives.
Using Taylor’s expansion
f(a + h) = f(a) + hf'(a) + (h²/2!)f''(a) + (h³/3!)f'''(a) + ... (3.66)
So we could estimate
f(a + h) − f(a) ≈ hf'(a) + (h²/2!)f''(a) (3.67)
or
f'(a) ≈ [f(a + h) − f(a)]/h (3.68)
with a leading error term (h/2!)f''(a) that is O(h) (order h, or proportional to h). This
means that if you make h smaller by a factor of 10, the error in the estimate is expected
to reduce by a factor of 10.
But we can do better. Again using Taylor’s expansion
f(a + h) = f(a) + hf'(a) + (h²/2!)f''(a) + (h³/3!)f'''(a) + ... (3.69)
f(a − h) = f(a) − hf'(a) + (h²/2!)f''(a) − (h³/3!)f'''(a) + ... (3.70)
Now subtract ...
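Subtracting kills the even-order terms, leaving the central-difference estimate f'(a) ≈ [f(a + h) − f(a − h)]/(2h) with an O(h²) leading error. A quick sketch of both error scalings, using f(x) = sin(x) at a = 1 as an arbitrary test case:

import math

f, fprime = math.sin, math.cos
a = 1.0
for h in (0.1, 0.01, 0.001):
    forward = (f(a + h) - f(a)) / h               # error O(h)
    central = (f(a + h) - f(a - h)) / (2 * h)     # error O(h^2)
    print(h, abs(forward - fprime(a)), abs(central - fprime(a)))
# shrinking h by 10x cuts the forward error ~10x but the central error ~100x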
3.10 Summary
In this lecture we have considered how to test the convergence properties of infinite series. We then saw how Taylor's expansion uses the known derivatives of a function
f (x) at a point to estimate values of the function at some small displacement from the
point. We saw that MacLaurin’s series is just Taylor’s expansion applied at x = 0.
Finally we saw how Taylor’s expansion can explain De L’Hopital’s theorem for discov-
ering the actual limit of a quotient function which naively has the value 0/0.