MAT334 - Complex Variables
Contents
3 Integration 52
3.1 Curves in C 52
3.2 Line Integrals 55
3.2.1 Green's Theorem and the Cauchy Integral Theorem 58
3.2.2 Path Independence 60
3.3 A Roadmap 66
3.4 Integrating Around Poles 68
3.4.1 Cauchy's Integral Formula 69
3.4.2 The Deformation Theorem 73
3.4.3 Poles of Higher Order - The Generalized Cauchy Integral Formula 75
3.5 Liouville's Theorem 77
Appendices 110
Index 115
1 | The Complex Numbers
This text is entirely based around the study of C, the set of complex numbers. How do complex functions
behave? Can we differentiate and integrate? How does this differ from working over R? In due time, we will
see all of this. But before we can get to any of that, we need to discuss what C is and how to do algebra in
the complex numbers.
1.1 What is C?
The imaginary unit i is a number such that i² = −1. A complex number is a number of the form a + bi, where a, b ∈ R. The set of complex numbers is C = {a + bi | a, b ∈ R}.
Note. The name "imaginary" is a misnomer. When mathematicians first started thinking about complex
numbers, i was treated at best as a calculation trick and at worst as pure nonsense. The name was originally
chosen as an insult.
Now, we recognize that the concept is real in the same way any other advanced mathematics is. These
actually exist, and we can formally design a system that exhibits this behavior.
Beyond that, however, complex numbers are actually useful in real life. For example, the mathematics behind electromagnetism is based on working with complex numbers.
Example 1.1.1
Definition 1.2.1: Real and Imaginary Parts
Let a + bi ∈ C. Then the real and imaginary parts of a + bi are:
Re(a + bi) = a
Im(a + bi) = b
Example 1.2.1
Consider z = 3 + 4i. The real part of z is 3, and the imaginary part is 4. Notice: 4, not 4i. The
imaginary part of z is still a real number.
Definition 1.2.2: Addition
Let a + bi, c + di ∈ C. Then:
(a + bi) + (c + di) = (a + c) + (b + d)i

So adding z, w ∈ C is done by adding together the real parts of z, w, and adding together the imaginary parts.
Definition 1.2.3: Multiplication
Let a + bi, c + di ∈ C. Then:
(a + bi)(c + di) = (ac − bd) + (ad + bc)i
Why would we choose this definition? Well, we want complex multiplication to satisfy the "distributivity property": (a + b)c = ac + bc and a(b + c) = ab + ac. If we require these to hold, then we are forced to conclude that:
(a + bi)(c + di) = ac + adi + bci + bdi² = (ac − bd) + (ad + bc)i
Complex multiplication and addition satisfy a whole bunch of properties, specifically what are called the
field axioms.
Theorem 1.2.1: The Field Axioms
The complex numbers satisfy the following properties. For all u, w, z ∈ C:
1. w + z = z + w
2. u + (w + z) = (u + w) + z
3. z + 0 = z
4. There exists −z ∈ C such that z + (−z) = 0
5. wz = zw
6. u(wz) = (uw)z
7. 1z = z
8. If z ≠ 0, there exists z⁻¹ ∈ C such that zz⁻¹ = 1
Proof. Many of these properties aren’t hard to check, and follow pretty quickly from facts you know about
real numbers.
We will provide a proof for the existence of multiplicative inverses very shortly, before we discuss division.
Example 1.2.2
Let z = 2 + 7i and w = 4 − 3i. Then:
w² − zw = w(w − z) = (4 − 3i)[(4 − 3i) − (2 + 7i)] = (4 − 3i)(2 − 10i) = (8 − 30) + (−40 − 6)i = −22 − 46i
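Arithmetic like this is easy to check numerically. The sketch below (a quick check using Python's built-in complex type, not part of the notes) compares the rectangular multiplication formula against Python's own arithmetic:

```python
# Complex multiplication in rectangular form, following
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
def mul(a, b, c, d):
    """Multiply a + bi by c + di; returns the pair (real, imag)."""
    return (a * c - b * d, a * d + b * c)

# Compare against Python's built-in complex numbers.
re, im = mul(4, -3, 2, 7)  # (4 - 3i)(2 + 7i)
assert complex(re, im) == complex(4, -3) * complex(2, 7)
```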
What about division? First, what is division? What does 1/z mean?
Definition 1.2.4: Multiplicative Inverse
Let z ∈ C be non-zero. The multiplicative inverse of z is the complex number 1/z satisfying z · (1/z) = 1. We then define w/z = w · (1/z).

So how do we find a multiplicative inverse for z? To do that, we're going to need to introduce two new ideas, the complex conjugate and the modulus:
Definition 1.2.5: Complex Conjugate
Let a + bi ∈ C. Then the complex conjugate of a + bi is:
\overline{a + bi} = a − bi
Definition 1.2.6: Modulus
Let a + bi ∈ C. Then the modulus of a + bi is the real number:
|a + bi| = √(a² + b²)
How does this help us define division? It turns out, these are exactly the building blocks we need:
Lemma 1.2.1. Let z = a + bi ∈ C be non-zero (i.e., at least one of a, b is not 0). Then 1/z = a/(a² + b²) − (b/(a² + b²))i. Another way of writing this is that 1/z = \overline{z}/|z|², where we understand that division by a real number means division on both the real part and imaginary part.
Proof. To show that 1/z is what we claim, we need only check that z · (1/z) = 1. We compute:
(a + bi)(a/(a² + b²) − (b/(a² + b²))i) = (a² − (b)(−b))/(a² + b²) + ((ab + a(−b))/(a² + b²))i
= (a² + b²)/(a² + b²)
= 1
Where did this come from? The intuition comes from the calculation:
1/(a + bi) = (1/(a + bi)) · ((a − bi)/(a − bi)) = (a − bi)/((a − bi)(a + bi)) = (a − bi)/(a² + b²)
We can’t use this as a proof of the result, since it relies on being able to divide by complex numbers (which
we haven’t even defined yet!), as well as being able to manipulate fractions. However, as far as intuition goes,
this is a good way to understand the result.
Note. This calculation also shows that z\overline{z} = |z|². This is a very important fact, and will come up very frequently.
Example 1.2.3
Let z = 4 + 3i and w = 1 + i. Find z/w and w/z.

We know from our formula that z/w = z\overline{w}/|w|². So we compute:
\overline{w} = 1 − i
|w|² = 1² + 1² = 2
So, z/w = (4 + 3i)(1 − i)/2 = 7/2 − (1/2)i.

To compute w/z, we could go through the same process. Or, we could note that w/z = 1/(z/w) = 2/(7 − i). We then get that:
w/z = 2(7 + i)/|7 − i|² = 14/50 + (2/50)i
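The conjugate trick for division is easy to test numerically; the sketch below (Python, not part of the notes) checks the worked example:

```python
# Division via the conjugate: z/w = z * conj(w) / |w|^2.
def divide(z, w):
    """Divide z by a nonzero w using the conjugate formula."""
    return z * w.conjugate() / abs(w) ** 2

z, w = 4 + 3j, 1 + 1j
assert abs(divide(z, w) - (3.5 - 0.5j)) < 1e-12    # z/w = 7/2 - (1/2)i
assert abs(divide(w, z) - (0.28 + 0.04j)) < 1e-12  # w/z = 14/50 + (2/50)i
assert abs(divide(z, w) - z / w) < 1e-12           # matches built-in division
```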
How do complex conjugation and the modulus behave when combined with our algebraic operations?
Theorem 1.2.2: Algebraic Properties of Conjugation and the Modulus
Let z, w ∈ C.
1. \overline{zw} = \overline{z} · \overline{w}
2. \overline{z + w} = \overline{z} + \overline{w}
3. \overline{\overline{z}} = z
5. \overline{z/w} = \overline{z}/\overline{w}
6. |zw| = |z||w|
7. |\overline{z}| = |z|
9. The triangle inequality: |z + w| ≤ |z| + |w|. And |z + w| = |z| + |w| if and only if z = rw for some real r ≥ 0 or w = rz for some real r ≥ 0.
10. |z/w| = |z|/|w|
Proof. We will prove some, but not all of these statements. Many of them are variations on the same idea,
and so will be quick to verify.
Let z = a + bi and w = c + di.
1. We compute both sides. On one hand, using the definition of multiplication:
\overline{zw} = \overline{(ac − bd) + (ad + bc)i} = (ac − bd) − (ad + bc)i
On the other hand:
\overline{z} · \overline{w} = (a − bi)(c − di) = (ac − (−b)(−d)) + (a(−d) + (−b)c)i = (ac − bd) − (ad + bc)i
So we see that \overline{zw} = \overline{z} · \overline{w}.
9. To show this, it is enough to show that |z + w|² ≤ (|z| + |w|)². It will take us a bit of work to get there though.

First, notice that (ad − bc)² ≥ 0. This implies that
2abcd ≤ a²d² + b²c²
Adding a²c² + b²d² to both sides, this says (ac + bd)² ≤ (a² + b²)(c² + d²), and so 2(ac + bd) ≤ 2√((a² + b²)(c² + d²)). Adding a² + b² + c² + d² to both sides:
(a² + 2ac + c²) + (b² + 2bd + d²) ≤ (a² + b²) + 2√((a² + b²)(c² + d²)) + (c² + d²)
(a + c)² + (b + d)² ≤ (√(a² + b²) + √(c² + d²))²
However, (a + c)² + (b + d)² = |z + w|², √(a² + b²) = |z|, and √(c² + d²) = |w|. So we have shown that |z + w|² ≤ (|z| + |w|)². Taking the square root of both sides gives |z + w| ≤ |z| + |w|.

For equality, note that if z = rw or w = rz with r ≥ 0 (and you do need to allow both alternatives, since you could have z = 0 or w = 0), then ad − bc = 0 and ac + bd ≥ 0. In that case:
2abcd = a²d² + b²c²
and rather than having ≤ in our calculations above, equality holds throughout.
Note. Fisher has a shorter proof of this fact in section 1.2. However, he does not discuss equality.
10. To begin, let's show that |1/w| = 1/|w|. Notice that 1/w = c/(c² + d²) − (d/(c² + d²))i. And so:
|1/w| = √(c²/(c² + d²)² + d²/(c² + d²)²) = √(1/(c² + d²)) = 1/√(c² + d²) = 1/|w|
Then, using property 6:
|z/w| = |z · (1/w)| = |z| · |1/w| = |z|/|w|
Note. In general, we can visualize the complex number z = x + iy as the point (x, y) on the plane.
However, this does not mean that C and R2 are the same thing. They are not. We cannot multiply vectors
in R2 . We can’t divide by vectors. Etc.
Definition 1.3.1: Rectangular Coordinates
Writing z = x + iy with x, y ∈ R expresses z in rectangular coordinates, identifying z with the point (x, y) (or the vector ~z) in the plane.
This perspective does allow us to give some fairly nice proofs. For example:
Theorem 1.3.1: The Triangle Inequality
Let z = x + iy. Then |z| is equal to the length of the vector (x, y).
As a consequence, |z + w| ≤ |z| + |w|.
Proof. The first claim is fairly straightforward: |z| = √(x² + y²), and the length of the vector (x, y) is ‖(x, y)‖ = √(x² + y²).
(Figure: the vectors ~z, ~w, and ~z + ~w drawn as a triangle, illustrating the triangle inequality.)
(Figure: z drawn as a vector in the plane, with length |z| and angle θ measured from the positive x-axis.)
We can also express this vector in R2 by giving its length, |z|, and the angle it forms with the positive
x-axis, θ.
Lemma 1.4.1. If z ∈ C with |z| = r, and z forms an angle of θ with the positive x-axis, then:
z = r(cos(θ) + i sin(θ))
Proof. Suppose |z| = r. Then the vector ~z in R² has ‖~z‖ = r. As such, ~z is on a circle of radius r, which we know (from MAT235) is parametrized by the equations x = r cos(θ) and y = r sin(θ), where θ is the angle between ~z and the positive x-axis.
As such, we conclude that z = x + iy = (r cos(θ)) + (r sin(θ))i, as desired.
Example 1.4.1
Find the real and imaginary parts of z = 2(cos(0.2) + i sin(0.2)).
Well, z = 2 cos(0.2) + (2 sin(0.2))i, and so Re(z) = 2 cos(0.2) and Im(z) = 2 sin(0.2).
Note. Be careful! The real part of z is not 2. 2 is its modulus! This is a very common mistake!
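To see the point concretely, here is a quick check with Python's cmath (a sketch, not from the notes): the real part of 2e^{0.2i} is 2 cos(0.2), while 2 is only the modulus.

```python
import cmath
import math

z = 2 * cmath.exp(0.2j)  # z = 2(cos(0.2) + i sin(0.2))
assert abs(z.real - 2 * math.cos(0.2)) < 1e-12  # Re(z) = 2 cos(0.2)
assert abs(z.imag - 2 * math.sin(0.2)) < 1e-12  # Im(z) = 2 sin(0.2)
assert abs(abs(z) - 2) < 1e-12                  # the modulus, not Re(z), is 2
```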
Example 1.4.2
Write z = 4 − 4√3 i in polar form.

Let's start by finding |z|. This is always easier. We compute: |z| = √(16 + (16)(3)) = √64 = 8.
Now, we can see that z forms a right angle triangle with the positive real axis which has hypotenuse 8, and side lengths 4 and 4√3. Since 4 = 8 cos(π/3) and 4√3 = 8 sin(π/3), and z lies below the real axis, we can take θ = −π/3. So z = 8(cos(−π/3) + i sin(−π/3)).
There is another bit of notation, which you may have come across, used to write polar form.
Definition 1.4.2: Euler’s Formula
e^{iθ} = cos(θ) + i sin(θ).
So, instead of writing z = r(cos(θ) + i sin(θ)), we will from now on write z = re^{iθ}. We will discuss why we use this notation (why e?), later on in the course. There is a reason.
Example 1.4.3
Find the polar form for z = −1/√2 + (1/√2)i, and write it as z = re^{iθ}.

We know that r = |z|. So we compute:
|z| = √((−1/√2)² + (1/√2)²) = √1 = 1
As for the angle, we need to be careful here. If we were to take the angle θ = arctan(y/x) = arctan(−1), we would end up with an angle in the fourth quadrant. But our complex number is in the second quadrant. So how do we handle this?

Well, note that our angle θ and arctan(−1) are on directly opposite sides of the unit circle, and so θ = arctan(−1) + π = 3π/4.

Therefore, z = 1e^{i3π/4} = e^{i3π/4}.
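Python's cmath.polar does exactly this kind of angle bookkeeping, returning the angle in (−π, π], so it can be used to double-check the computation (a sketch, not part of the notes):

```python
import cmath
import math

# z = -1/sqrt(2) + (1/sqrt(2))i should have modulus 1 and angle 3*pi/4.
z = complex(-1 / math.sqrt(2), 1 / math.sqrt(2))
r, theta = cmath.polar(z)
assert abs(r - 1) < 1e-12
assert abs(theta - 3 * math.pi / 4) < 1e-12
# Naively using arctan(y/x) lands in the wrong quadrant:
assert math.atan(z.imag / z.real) < 0
```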
Example 1.4.4
True or false: the real part of 3e^{iπ/2} is 3.

False. This is a mistake I have seen quite a lot. You need to be able to find the real and imaginary parts of a complex number written in polar form.
In this case, 3e^{iπ/2} = 3(cos(π/2) + i sin(π/2)) = 3(0 + i) = 3i. So the real part of this complex number is 0!
Polar coordinates are useful in a variety of ways, which are markedly different from the ways in which rectangular form is useful. One way is that working in polar form makes doing multiplication very easy.
Theorem 1.4.1: Multiplication in Polar Form
Let z = r(cos(θ) + i sin(θ)) and w = R(cos(Ψ) + i sin(Ψ)). Then zw = rR(cos(θ + Ψ) + i sin(θ + Ψ)). In other words: moduli multiply, and angles add.

Proof. We go back to rectangular coordinates. z = r(cos(θ) + i sin(θ)) and w = R(cos(Ψ) + i sin(Ψ)). Then:
zw = rR[(cos(θ)cos(Ψ) − sin(θ)sin(Ψ)) + i(sin(θ)cos(Ψ) + cos(θ)sin(Ψ))] = rR(cos(θ + Ψ) + i sin(θ + Ψ))
by the angle sum formulas for sin and cos.

Theorem 1.4.2: Division in Polar Form
Let z = r(cos(θ) + i sin(θ)) and w = R(cos(Ψ) + i sin(Ψ)) with w ≠ 0. Then z/w = (r/R)(cos(θ − Ψ) + i sin(θ − Ψ)).

Proof. The proof is similar to that of theorem 1.4.1, except using the angle difference formulas for sin and cos. Consider working through the proof, mimicking the proof of theorem 1.4.1.
Note. The key ingredients in these proofs are trig identities. Specifically, the angle sum (and angle difference) formulas. Trig is very important to working with complex numbers. You will need to know your trig identities.
Since multiplying in polar form is very quick, taking powers should also be really quick. Intuitively, the
argument above tells us that if θ is an angle for z, then nθ is an angle for z n . After all, if we add n copies of
θ, we get nθ. This intuitive idea is actually a named theorem!
Theorem 1.4.3: De Moivre’s Theorem
Let n ∈ N. Then (cos(θ) + i sin(θ))^n = cos(nθ) + i sin(nθ).
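De Moivre's theorem is easy to spot-check numerically (a sketch in Python, not from the notes):

```python
import math

# (cos θ + i sin θ)^n = cos(nθ) + i sin(nθ)
theta, n = 0.7, 5
lhs = complex(math.cos(theta), math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
assert abs(lhs - rhs) < 1e-12
```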
We end our discussion of complex algebra with one last definition. We will need to talk about the polar
form, and specifically the angle, of a complex number very often.
Definition 1.4.3: The Argument
Let z ∈ C be non-zero. An argument of z is an angle θ ∈ R such that z = |z|e^{iθ}.
The argument of a complex number is not unique. For example, e^{i0} = 1, and e^{i2π} = cos(2π) + i sin(2π) = 1. This means that 0 and 2π are both arguments for 1!
Example 1.4.5
Is 7π/3 an argument for 1 + √3 i?

There are two approaches to this. One way would be to find an argument for 1 + √3 i. We recognize this as appearing on a 30-60-90 special triangle of hypotenuse 2, in the first quadrant. In particular, θ = π/3 is an argument for 1 + √3 i.
Then we quickly check that π/3 and 7π/3 point in the same direction, since they differ by a multiple of 2π.

Another approach would be to see what e^{i7π/3} is. We find that e^{i7π/3} = 1/2 + (√3/2)i, and so 2e^{i7π/3} = 1 + √3 i. So 7π/3 is an argument for 1 + √3 i.
The non-uniqueness of the argument ends up giving the complex numbers a lot of rich theory. For
example, every non-zero number will have n different nth roots. Every complex number will have infinitely
many logarithms. We’ll see this when we talk about the concept of “branches”.
Sometimes, we don’t need all that freedom. Very often, it’s enough to consider arguments within a specific
range. One particular choice is (−π, π).
Definition 1.4.4: The Principal Argument
Let z ∈ C such that z is not zero and not a negative real number. The principal argument of z is the argument Arg(z) ∈ (−π, π).
Note. This is a different convention than the one in some other sources you may encounter. I am specifically excluding negative reals from having a principal argument. I am not doing this arbitrarily however: this will allow us to avoid some ugly continuity issues later on when we define the principal logarithm, or other principal branches of multivalued functions.
We now turn our attention to square and higher roots. What is an nth root, and how do we find them?
Definition 1.5.1: nth Roots
Let n ∈ N. Then we say that z is an nth root of w if z^n = w.
In the real numbers, solving the equation x^n = c is fairly straightforward. If n is even, then it has no solution for c < 0. And for c ≥ 0, the solutions are x = ±ⁿ√c. For n odd, there is always a unique solution: x = ⁿ√c.
We have already seen that this is no longer true for complex numbers. In particular, the equation z² = −1 has a solution: i (and −i as well). This is true in a much broader sense. The equation z^n = c always has a solution, and De Moivre's theorem tells us exactly how to find such solutions.
Let us begin with an example, to see the general tactic in action.
Example 1.5.1
Solve z² = 1 + i. We could try writing z = a + bi, expanding, and solving for a and b, but this is messy and wouldn't work for higher powers.
Instead, let's see what happens if we work in polar form. Let z = re^{iθ}. Then we have:
r²e^{2iθ} = 1 + i = √2 e^{iπ/4}
By comparing moduli on both sides, we find r² = √2, so r = ⁴√2.
Also, by comparing arguments, we see that 2θ is an argument for 1 + i. I.e., 2θ = π/4 + 2kπ for some k ∈ Z.
As such, z = ⁴√2 e^{i(π/8 + kπ)} for some k ∈ Z. Recalling that e^{iθ} is 2π periodic, we see that we only get two distinct solutions: ±⁴√2 e^{iπ/8}.
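Squaring the two candidate roots confirms the computation (a numerical sketch in Python, not part of the notes):

```python
import cmath
import math

# The two square roots of 1 + i found above: ±(2^(1/4)) e^{i*pi/8}.
w = 1 + 1j
z0 = 2 ** 0.25 * cmath.exp(1j * math.pi / 8)
for z in (z0, -z0):
    assert abs(z * z - w) < 1e-12
```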
Does this work generally? Is there some algorithm or formula that gives us nth roots?
Theorem 1.5.1: Existence of nth Roots
Let n ∈ N, and let w = re^{iθ} ∈ C. If w = 0, then z^n = w has a unique solution, z = 0.
If w ≠ 0, then the solutions to z^n = w are:
z_j = ⁿ√r e^{i(θ + 2jπ)/n}, for j ∈ {0, 1, . . . , n − 1}
Proof. In other words, we need to prove that z^n = w if and only if z = z_j for some j. Let z = se^{iΨ} be in polar form. Then z^n = w ⟺ s^n e^{inΨ} = re^{iθ}. This occurs if and only if s^n = r and nΨ = θ + 2kπ for some k ∈ Z, by considering the moduli and arguments of each side.
As such, z^n = w if and only if s = ⁿ√r and Ψ = (θ + 2kπ)/n for some k ∈ Z. I.e., z^n = w if and only if z = z_k for some k ∈ Z.
Lastly, we need to justify why we only need to consider j ∈ {0, 1, . . . , n − 1}. Let k ∈ Z. Then by the division algorithm, we can write k = qn + j for some 0 ≤ j ≤ n − 1. We find that:
z_k = ⁿ√r e^{i(θ + 2(qn + j)π)/n} = ⁿ√r e^{i(θ + 2jπ)/n} (e^{2πi})^q = z_j
so every z_k is one of z_0, . . . , z_{n−1}.
Example 1.5.2
Let's find the 4th roots of i; that is, let's solve z⁴ = i = e^{iπ/2}. Theorem 1.5.1 gives the solutions:
• z_0 = e^{iπ/8}
• z_1 = z_0 e^{i2π/4} = iz_0
• z_2 = z_0 e^{i(2·2π)/4} = −z_0
• z_3 = −iz_0
As an interesting aside, it turns out that e^{iπ/8} = √(2 + √2)/2 + i√(2 − √2)/2. One possible way to show this would be to try to solve the equation z² = 1/√2 + i/√2 by setting z = a + bi.
Notice, in our example, we factored z_j as ⁿ√r e^{iθ/n} e^{i2jπ/n}. These numbers e^{i2jπ/n} are the nth roots of unity.
Definition 1.5.2: Roots of Unity
The nth roots of unity are the solutions to z^n = 1. They are precisely the complex numbers ω_j = e^{i2jπ/n}, for j ∈ {0, 1, . . . , n − 1}.
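The roots of unity are easy to generate and test numerically. A Python sketch (not from the notes; the observation that they sum to 0 for n ≥ 2 is a standard fact about regular n-gons centered at the origin):

```python
import cmath
import math

n = 6
roots = [cmath.exp(2j * math.pi * j / n) for j in range(n)]
for w in roots:
    assert abs(w ** n - 1) < 1e-9  # each really solves z^n = 1
assert abs(sum(roots)) < 1e-9      # vertices of a regular n-gon sum to 0
```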
There are two nice geometric facts about the nth roots of w that we can glean from the results of the previous
section.
• Notice that |z_j| = ⁿ√r for each j. This means that the nth roots of w all lie on the circle of radius ⁿ√r centered at 0.
This circle has the equation |z| = ⁿ√r. More generally, a circle of radius r centered at z_0 has the equation |z − z_0| = r.
• Notice that the angle between z_j and z_{j+1} is exactly 2π/n.
So, for example, the 6th roots of w form a picture like:
(Figure: the 6th roots z_0, z_1, z_2, z_3, z_4, z_5 of w, equally spaced around a circle centered at 0.) The arc between z_j and z_{j+1} has angle π/3. Notice that this is a regular hexagon!
And this is true in general. If w ≠ 0 and n ≥ 3, then the roots of z^n = w form a regular n-gon.
I made a small note in the preceding discussion which I would like to expand upon. Namely, how to describe complex circles.
Theorem 1.6.1
The set of points {z ∈ C | |z − z_0| = r} is a circle of radius r centered at z_0.

Proof. Write z = x + iy and z_0 = x_0 + iy_0. Then the condition |z − z_0| = r reads:
r = |z − z_0| = |(x − x_0) + i(y − y_0)| = √((x − x_0)² + (y − y_0)²)
which is exactly the equation of the circle of radius r centered at (x_0, y_0).
1.7 Functions
Our major goal is to talk about complex calculus: both differentiation and integration. To do so, we need
some notion of a function.
Definition 1.7.1: Function
Let U, W be subsets of C. A function f : U → W is a rule that assigns to each element z ∈ U exactly
one element f (z) ∈ W . U is called the domain of f , and W is called the codomain of f .
The range of f is the set {f (z)|z ∈ U }. The range does not need to be all of W .
So, in essence, functions of a complex variable are defined exactly the same way any other function is.
The difference here is that the domain and codomain both lie in planes, so we won’t be able to draw graphs
to visualize functions. After all, how do you visualize a 4-dimensional picture? Actually, we also aren't going to be able to use contour maps to understand these functions either.
Example 1.7.1
Find the domain and range of the function f(x + iy) = (ix + y)/(x − iy).

When finding the domain of a function given by a formula, take the largest set on which that formula makes sense. So, in this example, the formula gives an output whenever x − iy ≠ 0. I.e., when z ≠ 0. So, the domain of this function is C \ {0}.
For the range, w is in the range if there exists z with f(z) = w. So, we have to solve the equation (ix + y)/(x − iy) = w.
There are two ways to do this, depending on whether you notice something.

Hard way: Write w = a + ib. Multiplying through by x − iy, we need ix + y = w(x − iy) = (ax + by) + (bx − ay)i. So, f(z) = w for some z if there exist x, y making this equation true.
If we look at the real and imaginary parts of this equation, we find that:
y = ax + by
x = bx − ay
This is a linear system in x and y; moving everything to one side, its matrix has determinant (b − 1)² + a². If the determinant is non-zero, we have a unique solution x = y = 0. This would give z = 0, which is not in our domain.
If the determinant is zero, then there is a non-trivial solution. So if (b − 1)² + a² = 0, w = a + ib is in the range. This occurs exactly when a = 0 and b = 1. I.e., when w = i.
Easy way: Notice that ix + y = i(x − iy). So, if z ≠ 0, then f(z) = i. The range of f is {i}.
As this example illustrates, it is generally difficult to find the range of a complex function. But this isn't terribly different from working over the reals. Except for the very simple functions we see in first year calculus, it is generally hard to find the range of a real function as well. Especially functions whose domain is in R². The methods we used in this example do not generalize; finding ranges is always ad hoc.
Next, let’s take a little tour of a few of the basic functions we’re going to encounter. First, polynomials:
Definition 1.7.2: Polynomials
A polynomial p on C is a function of the form:
p(z) = a_n z^n + · · · + a_1 z + a_0 = Σ_{k=0}^{n} a_k z^k
We're going to be spending a decent amount of time talking about polynomials in this course. Keep in mind that these don't behave the same way as real polynomials, and that we can have complex coefficients. For example, p(z) = z² + iz − (1 − i) is a polynomial.
Let’s also talk about root functions. Over the reals, it’s fairly easy to define root functions: either x has
0 nth roots, 1 nth root, or 2 nth roots. If it has no nth roots, then f (x) isn’t defined. If it has 1 nth root,
then that’s f (x). And if it has 2 roots, then one is positive and we choose that to be f (x).
This isn’t possible over C. We have n nth roots, and we don’t have any notion of positivity. (Notice,
we’ve never talked about z < w where z, w are complex numbers. That’s because it’s not possible to define
in a useful way.)
Note. There is no notion of a positive complex number, and it does not mean anything to say that z < w
for complex numbers.
In this text, if we write a < b, it must be understood that a, b are real numbers.
So do we have an nth root function on C? Let’s start by taking a look at a bad way to define such a
function:
Example 1.7.2
Consider the following (clearly false) proof:
Claim. 1 = −1.
Proof. Let z = re^{iθ}. Consider the function f(z) defined by f(z) = √r e^{iθ/2}. Note that f(z)² = z, and so f(z) is a square root function.
As we have already seen, we can write 1 = e^{i0} = e^{i(2π)}. Applying our function gives:
f(1) = f(e^{i0}) = √1 e^{i0/2} = 1e^{i0} = 1
f(1) = f(e^{i(2π)}) = √1 e^{i2π/2} = e^{iπ} = −1
So 1 = f(1) = −1.
Exercise
What’s wrong with this argument?
Well, if you take me at my word that f (z) is a function, nothing. So that’s the problem, f (z) isn’t
a function. Not every formula defines a function, so we need to be careful.
In fact, our argument here really just shows that this formula doesn’t define a function.
So, if we want to define an nth root function, we need to be a lot more careful. The nth root is our first
example of a common theme with complex formulae: it’s a multivalued function.
Definition 1.7.3: Multi-valued Function
A multi-valued function f on C is a rule that assigns to z ∈ C a set of (possibly more than one)
outputs.
The output set of f is still written as f (z).
For example, the formula f(re^{iθ}) = √r e^{iθ/2} is a rule that assigns to z ≠ 0 its two square roots. It therefore defines a multi-valued function. We can similarly look at all nth roots as giving multi-valued functions. The notation for this is:
Definition 1.7.4: z^{1/n}
Let z = re^{iθ}. The formula f(z) = ⁿ√r e^{iθ/n} defines a multi-valued function on C. We denote this function by z^{1/n}.
We need to be careful now. Our goal is to do calculus. That’s going to require us to have functions to
work with. So how can we go from having a multi-valued function to an actual function?
Definition 1.7.5: Branch of a Multi-valued Function
Let f be a multivalued function with domain U . A branch of f is a function g : U → C such that
g(z) ∈ f (z). (Remember that f (z) is a set, so this makes sense.)
So, for each input, we pick one output (out of the possible outputs given by the multi-valued function) to be the output of the branch. Now, without care, you can choose some truly bizarre branches. For example, for the square root, we could say that if z = x + iy, we choose the square root a + bi with a ≥ 0 if x is rational, and if x is irrational we choose the square root with a < 0.
This is a contrived example, but that’s the point. You can cook up some truly weird and unpleasant
branches if you set your mind to it. Is there some way to choose our branches nicely? For some nice functions
yes, and it comes down to the argument of z, actually.
Example 1.7.3
Let z ∈ C. For z ≠ 0, we define arg(z) = {θ ∈ R | z = |z|e^{iθ}} (i.e., arg(z) is the set of all arguments of z). Then this is a multi-valued (in fact, infinitely-valued) function on C.
One way to take branches of arg is to specify a range of angles. So, for example, we could get a branch of the argument arg_0(z) by choosing arg_0(z) = θ where θ is the unique angle in arg(z) ∩ [0, 2π). This rule assigns to each z ≠ 0 a single argument.
Example 1.7.4
Let θ ∈ R. Define arg_0(z) to be the branch of arg(z) with arg_0(z) ∈ [θ, θ + 2π).
Then for z ≠ 0 with r = |z|, we can define z^{1/2} = √r e^{i arg_0(z)/2}. This gives a branch of the square root.
Notice that (z^{1/2})² = (√r)² (e^{i arg_0(z)/2})² = r e^{i arg_0(z)} = z. So this is actually a square root.
It also only gives one output for each input. Each z ∈ C, except z = 0, has a unique argument arg_0(z) between θ and θ + 2π. As such, the formula doesn't depend on the angle we choose for z. Indeed, there is no choice!
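The construction in example 1.7.4 can be sketched in Python (the helper names arg_branch and sqrt_branch are mine, not from the notes; cmath.phase returns the principal angle in (−π, π]):

```python
import cmath
import math

def arg_branch(z, t):
    """The unique argument of z lying in [t, t + 2*pi)."""
    theta = cmath.phase(z)
    while theta < t:
        theta += 2 * math.pi
    while theta >= t + 2 * math.pi:
        theta -= 2 * math.pi
    return theta

def sqrt_branch(z, t=0.0):
    """The branch of the square root determined by arg_branch(., t)."""
    return math.sqrt(abs(z)) * cmath.exp(1j * arg_branch(z, t) / 2)

# Each branch squares back to z, but different branches can pick
# different square roots: for z = -4 we get 2i or -2i.
z = -4 + 0j
assert abs(sqrt_branch(z, 0.0) - 2j) < 1e-12
assert abs(sqrt_branch(z, -math.pi) - (-2j)) < 1e-12
assert abs(sqrt_branch(z, 0.0) ** 2 - z) < 1e-12
```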
This may seem a bit arcane. Frankly, this is one of the more difficult concepts in this course. We’re going
to come back to it once we introduce the complex logarithm (which will also be a multi-valued function). I
wanted to introduce the concept before we run into that.
Notation: Very often, we will write equations involving multi-valued functions. For example, we can write
the quadratic formula as:
z = (−b + (b² − 4ac)^{1/2}) / 2a
It is important to understand that we are saying that z takes on two different values, one for each different
value of the square root.
Definition 1.7.6: The Complex Exponential
Let z = x + iy ∈ C. Then we define:
e^z = e^x e^{iy} = e^x (cos(y) + i sin(y))

Unlike roots, this is a function. Indeed, for each z ∈ C, there is a unique choice of x, y ∈ R such that z = x + iy, and so we don't get multiple values coming from this formula.
Is this a good definition? What might we expect to be true of a complex exponential function? We would like:
• e^z e^w = e^{z+w}
• e^z ≠ 0
• For z = r ∈ R, we would like e^z (the complex exponential) to be equal to e^r (the real exponential).
• If z = iy, we should have that e^z = cos(y) + i sin(y), so that this formula also agrees with Euler's formula.
And these turn out to all be true! The third and fourth are a quick exercise in using the definition. We'll prove the first very quickly. Write z = x + iy and w = a + ib. Then:
e^z e^w = e^x e^{iy} e^a e^{ib}
= e^x e^a e^{i(y+b)} (by theorem 1.4.1)
= e^{x+a} e^{i(y+b)}
= e^{z+w}
Example 1.7.5
Compute e^z and e^{\overline{z}} for a few values of z. Did you notice anything about these two numbers? Give a conjecture for the relationship between \overline{e^z} and e^{\overline{z}}.
Example 1.7.6
A function f is called injective if f (z) = f (w) implies z = w.
Exercise
True or false: f (z) = ez is an injective function.
False. We have already seen that e^0 = e^{i2π}. In fact, for any w ≠ 0, e^z = w has infinitely many solutions!
Example 1.7.7
Let w = 1 + 4i. Solve the equation e^z = w.

Let z = x + iy. Then e^x e^{iy} = 1 + 4i. This tells us that:
e^x = |w| = √17
So we conclude that x = ln(√17).
Further, the expression w = e^x e^{iy} tells us that y is an argument for w! So, we need to find the arguments for w. We see that w = √17 e^{i arctan(4)}.
Therefore, z = ln(√17) + i arctan(4).
Exercise
There is an error in this argument. What is it?
We know that y is AN argument for w. Not that y is this particular argument for w. Instead, we can only conclude that y = arctan(4) + 2kπ for some k ∈ Z, and therefore z = ln(√17) + i(arctan(4) + 2kπ).
Note. ln(√17) + i arctan(4) and ln(√17) + i(arctan(4) + 2π) are different complex numbers! While arctan(4) and arctan(4) + 2π point in the same direction as angles, y is not an angle. y is the vertical component of the complex number z.
This distinction is important. y is not an angle, and so we can't ignore this 2kπ.
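Python's cmath.log returns the logarithm whose imaginary part lies in (−π, π], and the full set of solutions differs from it by multiples of 2πi; a sketch (not part of the notes) checking the example:

```python
import cmath
import math

w = 1 + 4j
principal = cmath.log(w)
assert abs(principal.real - math.log(math.sqrt(17))) < 1e-12
assert abs(principal.imag - math.atan(4)) < 1e-12
# Every shift by 2*k*pi*i is again a solution of e^z = w:
for k in (-2, -1, 0, 1, 2):
    assert abs(cmath.exp(principal + 2j * math.pi * k) - w) < 1e-9
```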
Example 1.7.8
Solve the equation e^{iz} − e^{−iz} = 2i.
Let w = e^{iz}, so that e^{−iz} = 1/w. Multiplying the equation by w, it becomes:
w² − 2iw − 1 = 0
Now, from your homework, we know that we can solve this using the quadratic formula:
w = (2i ± (−4 + 4)^{1/2})/2 = i
So, the solutions z to the original equation satisfy e^{iz} = i. Let z = x + iy. Then e^{iz} = e^{−y+ix} = i. This gives e^{−y} = 1, so y = 0.
Also, we know that x is an argument for i, so x = π/2 + 2kπ for k ∈ Z. Therefore, e^{iz} − e^{−iz} = 2i if and only if z = π/2 + 2kπ for some k ∈ Z.
Example 1.7.9
True or false: For any z ∈ C, e^z > 0.

False. Remember, it is nonsense to say that a > b if a, b are complex numbers. e^z can be any complex number (except for 0), so this is a nonsense statement.
More concretely, e^{iπ} = −1 ≯ 0.
For θ ∈ R, Euler's formula gives e^{iθ} = cos(θ) + i sin(θ) and e^{−iθ} = cos(θ) − i sin(θ). Adding and subtracting these equations, we find:
cos(θ) = (e^{iθ} + e^{−iθ})/2
sin(θ) = (e^{iθ} − e^{−iθ})/2i
Since this is our only connection between trigonometric functions and complex numbers, it seems reason-
able to use this to define complex versions of cos(z) and sin(z).
Definition 1.7.7: Trigonometric Functions
The complex differentiable functions cos(z) and sin(z) are defined by:
cos(z) = (e^{iz} + e^{−iz})/2
sin(z) = (e^{iz} − e^{−iz})/2i
The other trigonometric functions tan(z), sec(z), csc(z), and cot(z) are defined exactly how you
would expect.
Just like for our complex exponential, notice that if z = x ∈ R, then cos(z) is exactly the real cosine
function cos(x), and similarly for sin(z). So this isn’t totally unreasonable.
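One can check numerically that this definition matches both Python's cmath.cos on complex inputs and the real cosine on real inputs (a sketch, not from the notes):

```python
import cmath
import math

def my_cos(z):
    """cos(z) = (e^{iz} + e^{-iz}) / 2."""
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

for z in (0.3, 2.0 + 1.0j, -1.0 - 0.5j):
    assert abs(my_cos(z) - cmath.cos(z)) < 1e-12
assert abs(my_cos(0.3) - math.cos(0.3)) < 1e-12  # restricts to the real cosine
```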
Many of the usual properties of the real trigonometric functions are still satisfied by these complex functions
as well. For example:
Example 1.7.10
We know that the real function cos(x) is 2π periodic. I.e., cos(x) = cos(x + 2π). This is still true over C.
To see this, note that:
cos(z + 2π) = (e^{i(z+2π)} + e^{−i(z+2π)})/2 = (e^{iz}e^{2πi} + e^{−iz}e^{−2πi})/2 = (e^{iz} + e^{−iz})/2 = cos(z)
However, these complex functions can have some wildly different behavior as well.
Example 1.7.11
True or false: −1 ≤ | sin(z)| ≤ 1 for any z ∈ C.

False. As we saw in class, when looking at sin(iy), we see that for y very large, | sin(iy)| is very large. In fact, | sin(iy)| ≈ e^{|y|}/2.
In fact, the range of sin(z) is actually C!
Example 1.7.12
Find all solutions to sin(z) = 1.
We are looking for all z for which sin(z) = (e^{iz} − e^{−iz})/2i = 1. Notice that this is equivalent to solving e^{iz} − e^{−iz} = 2i. As we saw in example 1.7.8, the solutions to this are precisely z = π/2 + 2kπ for k ∈ Z.
So the only solutions to sin(z) = 1 are real solutions.
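Both claims, the solutions of sin(z) = 1 and the unboundedness of sin on C, can be spot-checked with cmath (a sketch, not from the notes):

```python
import cmath
import math

# z = pi/2 + 2*k*pi really solves sin(z) = 1:
for k in (-1, 0, 1, 3):
    z = math.pi / 2 + 2 * math.pi * k
    assert abs(cmath.sin(z) - 1) < 1e-9

# |sin(iy)| grows like e^{|y|}/2, so sin is unbounded on C:
y = 10.0
assert abs(cmath.sin(1j * y)) > 1000
assert abs(abs(cmath.sin(1j * y)) - math.exp(y) / 2) < 1e-3 * math.exp(y) / 2
```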
We've already seen an example of finding a logarithm (we say that w is a logarithm of z if e^w = z). Last class, we showed that the solutions to e^z = 1 + 4i are of the form z = ln(√17) + i(arctan(4) + 2kπ) for k ∈ Z. So, 1 + 4i has many logarithms. These are precisely the z listed above.
Is there a general formula for finding the logarithms of z ∈ C, or do we need to do it by hand each time we wish to solve e^z = w?
Theorem 1.7.1: Calculating Logarithms
Let z = re^{iθ} with z ≠ 0. Then the logarithms of z are the complex numbers ln(r) + i(θ + 2kπ), where k ∈ Z. Put another way, log(z) = ln |z| + i arg(z), remembering that we mean this as multi-valued functions.
If z = 0, then z has no logarithms.

Proof. Suppose e^w = z, and write w = a + ib. Then:
e^a e^{ib} = re^{iθ}
Taking the modulus of both sides, we see that e^a = r, and so a = ln(r), which is defined since r ≠ 0.
Now, we can see that re^{ib} = z, and so b is an argument for z. As we have shown before, this means that b = θ + 2kπ for some k ∈ Z.
As for z = 0, notice that |e^w| = e^a ≠ 0. However, |z| = 0. So, we cannot have e^w = z.
The complex logarithm is the most important example of a multi-valued function. In fact, all of the
examples we are going to see in this course (including the ones we already have seen!) will depend on the
complex logarithm.
Notation. We are often very lazy with our notation for logarithms. If e^w = z, we very often write that w = log(z).
But, as we've seen, it is possible to have w_1 ≠ w_2 with e^{w_1} = e^{w_2} = z. So is w_1 = log(z) or is w_2 = log(z)?
The answer is that log(z) isn't really one number. The complex logarithm is a multi-valued function, and so w_1 and w_2 are two different values of the same multi-valued function. So when we say that w = log(z), we really mean that w is one of the logarithms of z:
log(z) = {w ∈ C | e^w = z}
Example 1.7.13
Suppose we know that log(z) = 1 + 3i. Is it possible that log(z) = 1 + 7i?

No. We would need e^{1+3i} = e^{1+7i}. But these have different angular components. The angles 3 and 7 do not point in the same direction!
Definition 1.7.10: Complex Exponentiation
Let z, a ∈ C with z ≠ 0. Then we define:
z^a = e^{a log(z)}

How do we interpret this? After all, we just discussed that log(z) isn't one number. This formula should be interpreted as saying that z^a is a multi-valued function, and that its values are e^{aw} where w is a logarithm for z.
Example 1.7.14
This definition has some surprising consequences. For example, every value of i^i is real!
Why is that? Well, i^i = e^{i log(i)}. However, since |i| = 1, we see that the logarithms of i are:
log(i) = ln(1) + i(π/2 + 2kπ) = i(π/2 + 2kπ)
As such, i^i = e^{i · i(π/2 + 2kπ)} = e^{−(π/2 + 2kπ)}, which is a real number!
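Python agrees: its built-in power uses the principal logarithm (the k = 0 value), and shifting the logarithm gives the other, still real, values (a sketch, not from the notes):

```python
import cmath
import math

principal = 1j ** 1j
assert abs(principal.imag) < 1e-12                           # real!
assert abs(principal.real - math.exp(-math.pi / 2)) < 1e-12  # e^{-pi/2}

# Other logarithms of i give the other values e^{-(pi/2 + 2k*pi)}:
for k in (-1, 1, 2):
    value = cmath.exp(1j * (cmath.log(1j) + 2j * math.pi * k))
    assert abs(value.imag) < 1e-6 * abs(value)  # still real
```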
Example 1.7.15
Consider the following claim:
Fake Theorem 1.7.1
Every complex number is real.
You should quickly convince yourself that this is false. For example, why is i not real? (What
property defines i?)
So, if this is a false claim, any proof of this claim must have an error. Find the error (or errors, if
there are more than one) in the following proof.
Proof. Let z ∈ C, and write z in polar form as z = re^{iθ}. Then we find that:
z = re^{iθ} = re^{i(2π · θ/(2π))} = r(e^{2πi})^{θ/(2π)} = r · 1^{θ/(2π)} = r · 1 = r
So z is real.
Be careful. Any errors are subtle. If you think you have an easy answer, chances are your answer
is not correct.
Solution: As we discussed in class, the error occurs in two places. When we write:
e^{i(2πθ/(2π))} = (e^{2πi})^{θ/(2π)}
we are choosing a branch f(z) of the multivalued function z^{θ/(2π)} so that f(1) = e^{iθ}.
On the other hand, when we say that 1^{θ/(2π)} = 1, we are choosing a branch g(z) with g(1) = 1. We’re
working with two different branches as if they are the same!
Now that we have talked about how they are multivalued, and how that requires some care, let’s talk
about their branches. Corresponding to the principal Argument Arg(z), there is a principal branch of these
multivalued functions as well:
Definition 1.7.11: Principal Logarithm
The principal logarithm is the branch of log(z) given by:
Log(z) = ln |z| + iArg(z)
defined on the domain C \ (−∞, 0].
Example 1.7.16
Example 1.7.17
Let n ∈ Z. Is z n a single valued function?
Well, z^n = e^{n log(z)}. Let z = re^{iθ}. Then:
z^n = e^{n(ln(r) + i(θ + 2kπ))} = r^n e^{inθ} e^{2πikn} = r^n e^{inθ}
since e^{2πikn} = 1 for any k, n ∈ Z. So this is a single valued function. Regardless of our choice of argument, we get the same result.
Example 1.7.18
Consider the formula f(z) = a^z. Is this a function?
Let z = x + iy. We have a^z = e^{z log(a)} = e^{z(ln |a| + i arg(a))} = e^{(x ln |a| − y arg(a)) + i(x arg(a) + y ln |a|)}. This formula
outputs a single value if and only if y arg(a) doesn’t depend on the choice of argument, so y = 0. Also,
we need that e^{ix arg(a)} doesn’t depend on the choice of argument, so x ∈ Z.
So, this tells us that a^z is a multi-valued function as well! Does that mean that e^z is multi-valued? The
answer to that is no. Our definition of e^z doesn’t depend on the argument of e. Technically, our definition of
e^z is the principal branch of the function:
However, to avoid unnecessary notational baggage (after all, this function is a bit of a mouthful) and to
avoid unnecessary abstraction, e^z will always be understood to be an exception to a^z being a multi-valued
function.
Example 1.7.19
Find the range of sin(z).
Let w ∈ C. We would like to see if there exists some z ∈ C with sin(z) = w. Supposing there is, we
have:
(e^{iz} − e^{−iz}) / (2i) = w
Rearranging gives e^{iz} − 2iw − e^{−iz} = 0. Let u = e^{iz}. Since u ≠ 0, we see that:
u − 2iw − 1/u = 0 ⟺ u² − 2iwu − 1 = 0
Solving with the quadratic formula, u = iw + (1 − w²)^{1/2} (a multi-valued expression), and so z = log(u)/i.
But wait, we’re not done! We need to know that log(u) actually exists for any given w. We assumed
u ≠ 0, but are we guaranteed that it is? After all, it depends on w. How do we know there doesn’t
exist some w such that u = 0?
Suppose u = 0. Then (1 − w²)^{1/2} = iw. Squaring both sides gives 1 − w² = −w², which cannot occur.
So u ≠ 0, and therefore such a z exists for any w.
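The solution above can be turned into a small numerical check. Here `arcsin_value` is a helper name introduced for illustration; it computes one value of z with sin(z) = w using the formula z = log(u)/i derived above:

```python
import cmath

# Solving sin(z) = w: from the quadratic u^2 - 2iwu - 1 = 0 with u = e^{iz},
# one root is u = iw + (1 - w^2)^{1/2}; then z = log(u)/i = -i*log(u).
def arcsin_value(w):
    u = 1j * w + cmath.sqrt(1 - w * w)  # one branch of the square root
    return -1j * cmath.log(u)           # one value of z with sin(z) = w

# Works even for w outside [-1, 1], where the real arcsin does not exist.
for w in [0.3, 5.0, 2 + 3j, -4j]:
    z = arcsin_value(w)
    assert abs(cmath.sin(z) - w) < 1e-9
```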
This example gives us an idea of how to define inverse trig functions as well:
Definition 1.7.13: arcsin
Let z ∈ C. Then:
arcsin(z) = −i log(iz + (1 − z²)^{1/2})
Further, the principal arcsin is given by:
Arcsin(z) = −i Log(iz + (1 − z²)^{1/2})
where (1 − z²)^{1/2} is the principal square root.
2 | Limits and Differentiation
In this chapter, we will continue to develop the theory of complex functions in a way that mirrors your
first introduction to calculus. Now that we know what our functions of interest are, let’s talk about limits
and differentiation.
2.1 Limits
Just like for differentiation over R, our first building block is the limit. There is a complication here though:
in R, when we take a limit we’re looking at two directions: limx→c f(x) depends on what f(x) does as x
approaches c from the left and from the right.
In C, we no longer have two directions. We have an infinite number! More than that, we need to
consider not just travelling along straight lines, but rather any curve toward our point.
To capture this, I’ll present two definitions. The first is a fairly formal one, and we aren’t going to work
with it at all. It may appear in some proofs, but that’s all. The second is the intuitive way to understand
limits.
Definition 2.1.1: δ-ε Definition of a Limit
Let f : U → C, and z0 ∈ C. Further, assume there exists some r > 0 such that {z ∈ C | 0 < |z − z0| <
r} ⊂ U. Then limz→z0 f(z) = L if:
for every ε > 0, there exists δ > 0 such that 0 < |z − z0| < δ implies |f(z) − L| < ε.
Intuitively, this says that for any ε > 0, we can find a circle around z0 so that if z is inside this circle, then
the distance from f (z) to L is less than ε. I.e., f (z) gets as close as we want to L, and stays that close.
There’s another way to understand limits that is more in line with how we visualize limits. It involves
looking at the real and imaginary parts of f (z).
Definition 2.1.2: R2 Definition of a Limit
Let f : U → C and z0 = x0 + iy0 ∈ U . Further, assume there exists some r > 0 such that {z ∈
C||z − z0 | < r} ⊂ U . Let z = x + iy, and write f (x + iy) = u(x, y) + iv(x, y). That is, write f (z) in
terms of its real and imaginary parts, which we will view as functions on R2 .
Then limz→z0 f(z) = L if:
lim(x,y)→(x0,y0) u(x, y) = Re(L) and lim(x,y)→(x0,y0) v(x, y) = Im(L)
So, in essence, complex limits can be viewed as just a pair of limits on R2 . Remember, when looking at
limits on R2 , we need to consider arbitrary paths to z0 . So, for example, when taking a limit to 0, the limit
must exist and be the same along y = 0, x = 0, y = x, y = x4 , etc.
Because of this, a lot of complex limits turn out to be unpleasant. However, unlike working over R2 , a
lot will turn out to be really nice. Many of the techniques for understanding limits that we saw in first year
calculus work.
Example 2.1.1
Find limz→0 z/z̄.
You could try doing this algebraically. However, this is much easier to understand by trying a few
paths out.
If the limit exists, then its value must be given by approaching 0 along the line y = 0. We find:
limz→0 z/z̄ = lim(x,0)→(0,0) (x + 0i)/(x − 0i) = 1
On the other hand, if the limit exists, it must also be given by approaching 0 along the line x = 0.
We find:
limz→0 z/z̄ = lim(0,y)→(0,0) (0 + iy)/(0 − iy) = −1
Since these limits disagree, we find that limz→0 z/z̄ does not exist.
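A quick numerical sketch of the same two directional limits (assuming nothing beyond Python’s built-in complex type):

```python
# Directional limits of f(z) = z / conj(z) near 0, tried along two paths.
def f(z):
    return z / z.conjugate()

for t in [1e-1, 1e-4, 1e-8]:
    assert abs(f(complex(t, 0)) - 1) < 1e-12   # along the real axis: -> 1
    assert abs(f(complex(0, t)) + 1) < 1e-12   # along the imaginary axis: -> -1
```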
Example 2.1.2
Find limz→0 e^z.
As we have seen before, we know that e^z = e^x cos(y) + ie^x sin(y). From our definition, we know that
we need to find the limits:
lim(x,y)→(0,0) e^x cos(y) and lim(x,y)→(0,0) e^x sin(y)
For the first one, we can use the product law for limits to get:
lim(x,y)→(0,0) e^x cos(y) = (lim(x,y)→(0,0) e^x)(lim(x,y)→(0,0) cos(y)) = 1 · 1 = 1
Similarly, the second limit is 1 · 0 = 0. As such:
limz→0 e^z = 1 + 0i = 1
We’ve seen a couple of examples of working out limits by hand. In practice, this is a pain. Even in your
first year course in calculus, you had tools for working with limits. These same tools, namely the limit laws
and continuity, are still applicable in C.
Theorem 2.1.1: The Limit Laws
Let f, g : U → C and z0 ∈ U . If limz→z0 f (z) = A and limz→z0 g(z) = B, then:
• limz→z0 (f + g)(z) = A + B
• limz→z0 (f g)(z) = AB
You have a lot of practice using these already. Their application is identical to their use over R. Also, the
proofs of these facts are identical to the proofs over R, so we will not reproduce them here.
2.2 Continuity
By far the most useful way for finding limits is continuity. If a function is continuous, finding limits for it
becomes immediate. So knowing what continuity gives us, and then building up a repertoire of continuous
functions, is really important.
Definition 2.2.1: Continuity
Let f : U → C and z0 ∈ U. We say that f is continuous at z0 if limz→z0 f(z) = f(z0). We say f is continuous on U if it is continuous at every point of U.
So, if we know a function is continuous, then finding limits turns into just evaluating your function at that
point.
Example 2.2.1
The function f (z) = ez is continuous.
While this seems like it should be true, we still need to show it. However, if we reproduce our
argument for showing that limz→0 e^z = 1, we find:
limz→z0 e^z = e^{x0} cos(y0) + ie^{x0} sin(y0) = e^{z0}
So e^z is continuous at every z0 ∈ C.
Exercise
Prove that f (z) = z is continuous.
We have a few basic functions, but what about combining them? The limit laws tell us that continuous
functions combine very nicely.
Theorem 2.2.1: Properties of Continuous Functions
Suppose f, g are continuous at z0, and h is continuous at f(z0). Then:
• f + g is continuous at z0.
• fg is continuous at z0.
• h ◦ f is continuous at z0.
As a result of this, most of the functions we’ve seen so far are continuous. Constants, polynomials,
exponentials, and our trig functions are continuous.
What about logarithms, or the argument?
Example 2.2.2
Let arg0 (z) be the branch of the argument defined by setting arg0 (z) ∈ [−π, π).
Find limz→−1 arg0 (z).
Since this limit needs to exist and agree regardless of which direction we approach −1 from, we’re
going to approach along two curves: we’re going to follow the unit circle to −1 from above and from
below.
First, let’s discuss what happens if we follow the curve z = e^{iθ} as it approaches −1 from above (i.e., as
we follow the red, dashed curve). Notice that since we are in the second quadrant, arg0(z) ∈ (π/2, π),
and so along this curve limz→−1 arg0(z) = π.
Approaching from below instead, we are in the third quadrant, so arg0(z) ∈ [−π, −π/2). And so, in a similar way to the previous curve, we find that limz→−1 arg0(z) = −π.
Since the limit approaches two different values, it does not exist. Also, as a consequence, we see
that arg0 (z) is not continuous at −1.
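Python’s cmath.phase uses the branch (−π, π], which is nearly the same branch as arg0 above; it exhibits the same jump across the negative real axis:

```python
import cmath
import math

# cmath.phase returns an argument in (-pi, pi], a branch very close to arg_0;
# it exhibits the same jump across the negative real axis.
eps = 1e-9
above = cmath.phase(complex(-1, eps))    # approaching -1 from the second quadrant
below = cmath.phase(complex(-1, -eps))   # approaching -1 from the third quadrant
assert abs(above - math.pi) < 1e-6       # limit from above is pi
assert abs(below + math.pi) < 1e-6       # limit from below is -pi
```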
Note. A similar argument will show that arg0 and log0 are not continuous on (−∞, 0], but that they are
continuous on C \ (−∞, 0].
This is why we defined Arg(z) and Log(z) to have the domain C\(−∞, 0]. We want to work with continuous
(actually, differentiable) functions, and so we define these functions on a set where they are continuous.
The issue here is that arg(z) and log(z) are multivalued functions. As we move around the circle |z| = 1,
the value of arg(z) increases by 2π, and the value of log(z) increases by 2πi. The same thing happens with
other multivalued functions.
Example 2.2.3
What happens to z^{1/2} and z^{1/3} if we travel around the unit circle?
For z^{1/2}, we use the formula √r e^{iθ/2}. So, starting at 1 written as e^{i0}, we have 1^{1/2} = 1. As we move
around the circle once (counterclockwise), we see that θ moves from 0 to 2π, and so z^{1/2} moves from e^{i0} = 1 to
e^{iπ} = −1.
In the same way, as we travel the unit circle once, z^{1/3} goes from 1 to ω1 = e^{i2π/3}, which is another cube
root of unity.
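A small numerical illustration of this, tracking e^{iθ/2} at the start and end of one loop (a sketch, not from the notes):

```python
import cmath
import math

# Track the continuous branch sqrt(z) = e^{i*theta/2} along z = e^{i*theta}
# as theta runs from 0 to 2*pi: z returns to 1, but the square root does not.
start = cmath.exp(1j * 0 / 2)
end = cmath.exp(1j * (2 * math.pi) / 2)
assert abs(start - 1) < 1e-12
assert abs(end + 1) < 1e-12   # ends at -1, the other square root of 1

# Likewise, the cube root ends at e^{2*pi*i/3}, far from its starting value 1.
end3 = cmath.exp(1j * (2 * math.pi) / 3)
assert abs(end3 - 1) > 1
```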
So what’s the solution to this problem? We want to work with continuous functions, if we’re going to talk
about differentiation. We’ve seen an example already: for log(z), we restricted the domain to C \ (−∞, 0],
and we were able to find a branch on that domain which is continuous.
This leads us to a more general phenomenon.
Definition 2.2.2: Branch Cuts
Let f (z) be a branch of a multi-valued function. A branch cut is a curve in C along which f (z) is
discontinuous.
It is called a branch cut because we cut (i.e. remove) the branch cut from the domain to get a continuous
function.
Example 2.2.4
Let log0 (z) be the branch of the logarithm given by arg(z) ∈ [−π, π). As we saw earlier, log0 (z) is
discontinuous along (−∞, 0], and so this is a branch cut. Removing (−∞, 0] from the domain of log0 (z)
results in the function Log(z), which is continuous on its domain.
All of the multi-valued functions we have seen so far are defined in terms of log(z) or arg(z). As such, we
can get branch cuts for them by taking branch cuts for log(z) or arg(z). Let’s start by talking about how to
do that.
Consider the branch log0 (z) of log(z) given by arg(z) ∈ (θ, θ + 2π) for some θ ∈ R. This function is
continuous on its domain, in the same way that Log(z) is continuous on C \ (−∞, 0]. We have removed the
ray {reiθ |r ≥ 0}, pictured below:
By taking this branch cut, we have found a continuous branch. In practice, these are the only types of branch
cuts we will consider.
Now that we have a standard way of taking branch cuts for log(z) or arg(z), how do these choices affect
other multivalued functions?
Example 2.2.5
Consider f(z) = (iz + 1)^{1/2}, where we are working with the principal branch. What is the branch cut of
this function?
Since we are asking about a branch cut, this should serve as a huge clue that this function involves
log(z) somehow. Recall that (iz + 1)^{1/2} = e^{(1/2) Log(iz+1)}.
Now, our branch cut on Log(z) is to remove (−∞, 0] from the domain. So the corresponding branch
cut for f(z) is to remove where iz + 1 ∈ (−∞, 0].
Suppose iz + 1 ∈ (−∞, 0]. Then iz ∈ (−∞, −1]. As such, z ∈ {r/i | r ≤ −1} = {si | s ≥ 1}. So the
branch cut for f(z) is the ray {si | s ≥ 1} on the imaginary axis.
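We can check this branch cut numerically: Python’s cmath.sqrt is the principal square root, so f(z) = sqrt(iz + 1) should jump across points of the ray {si | s ≥ 1} (such as z = 2i) and vary continuously elsewhere (such as z = 2). A quick sketch:

```python
import cmath

# f(z) = (iz + 1)^{1/2} with the principal branch; its cut should be the
# ray {si : s >= 1} on the imaginary axis.
def f(z):
    return cmath.sqrt(1j * z + 1)

eps = 1e-9
jump = abs(f(2j + eps) - f(2j - eps))   # straddling the cut at z = 2i
assert jump > 1.9                       # values jump from about i to about -i
smooth = abs(f(2 + eps) - f(2 - eps))   # near z = 2, away from the cut
assert smooth < 1e-6                    # no jump off the cut
```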
2.3 Infinite Limits and Limits at Infinity
The last thing to consider is infinite limits and limits at infinity. The key concept here is that in C, there is
only one ∞. No matter which direction you go out in, left or right, up or down, you get to the same infinity.
This is tied to the notion of the Riemann sphere, and of Riemann surfaces, which are a geometric abstraction
that makes a lot of complex analysis very nice. We won’t be working in this context in this course; I am
pointing these out in case you are interested in further reading.
So, how do we define infinite limits? Well, f (z) should be going to ∞. But what way? Well, since all
directions give the same ∞, any way!
Definition 2.3.1: Infinite Limits
Let f : U → C and z0 ∈ C, such that {z ∈ C | 0 < |z − z0| < r} ⊂ U for some r > 0. Then we say that
limz→z0 f(z) = ∞ if:
for every M > 0, there exists δ > 0 such that 0 < |z − z0| < δ implies |f(z)| > M.
In layman’s terms, this means that |f(z)| gets arbitrarily large as we get close to z0.
Example 2.3.1
Is limz→0 e^{1/z} infinite or not?
Well, for example, if we consider z → 0 along the positive real axis, we find that e^{1/x} does go to
infinity. After all, limx→0+ 1/x = ∞.
However, consider what occurs as z → 0 along the positive imaginary axis. Writing z = ri with r > 0,
we have e^{1/z} = e^{1/(ri)} = e^{−i/r}. And so |e^{1/z}| = 1. As such, if we approach from this direction, e^{1/z} does not approach ∞.
So this limit is not infinite.
Definition 2.3.2: Limits at Infinity
Let f : U → C such that {z ∈ C | |z| > r} ⊂ U for some r > 0. Then we say that limz→∞ f(z) = L if:
for every ε > 0, there exists R > 0 such that |z| > R implies |f(z) − L| < ε.
Or, in layman’s terms, as |z| gets arbitrarily large, f(z) gets arbitrarily close to L.
Example 2.3.2
Show that limz→∞ z = ∞.
So, we haven’t defined this, but hopefully seeing the two separate definitions allows us to synthesize
the correct definition of an infinite limit at infinity.
In this case, we want that as |z| gets arbitrarily large, that |f(z)| = |z| gets arbitrarily large, which
is obviously the case.
2.4 Differentiation
Definition 2.4.1: The Complex Derivative
Let f : U → C and z0 ∈ U, where U is open. We say that f is differentiable at z0 if:
f′(z0) = limh→0 (f(z0 + h) − f(z0))/h exists
Before we begin to dive into the theory, let’s work out an example.
Example 2.4.1
Because complex limits are really limits in 2 dimensions, we need to be careful. For example, if you were
going to try to find the derivative of e^z by definition, you would run into some very nasty R² limits. We don’t
want that in our lives.
However, the solution to avoiding working with nasty R2 limits is to leverage the 2-dimensional nature of
the derivative!
Theorem 2.4.1
Suppose f(z) = u(x, y) + iv(x, y) is differentiable at z0 = x0 + iy0. Then the following two equations hold:
f′(z0) = ∂u/∂x (x0, y0) + i ∂v/∂x (x0, y0)
f′(z0) = ∂v/∂y (x0, y0) − i ∂u/∂y (x0, y0)
Proof. Since f′(z0) exists, we know that limh→0 (f(z0 + h) − f(z0))/h exists. As such, it must exist from every direction.
We will consider approaching along two lines: h = a + 0i and h = 0 + ib. I.e., along the real and imaginary
axes.
Along the real axis, we get the limit:
f′(z0) = lima→0 (f(z0 + a) − f(z0))/a
= lima→0 (u(x0 + a, y0) + iv(x0 + a, y0) − u(x0, y0) − iv(x0, y0))/a
= lima→0 (u(x0 + a, y0) − u(x0, y0))/a + i lima→0 (v(x0 + a, y0) − v(x0, y0))/a
= ∂u/∂x (x0, y0) + i ∂v/∂x (x0, y0)
Along the imaginary axis, a similar computation gives:
f′(z0) = limb→0 (f(z0 + ib) − f(z0))/(ib) = ∂v/∂y (x0, y0) − i ∂u/∂y (x0, y0)
Notice, this gives two expressions for the derivative! Since the limit has only one value, we can conclude
that these two expressions are actually equal.
Corollary 2.4.1: The Cauchy-Riemann Equations
ux = vy
uy = −vx
Let’s see a couple of examples for how we can use these two results.
Example 2.4.2
The principal logarithm Log(z) is differentiable on C \ (−∞, 0]. We will justify why this is true a bit
later, but for now we can accept it. Prove that Log′(z) = 1/z where Re(z) > 0.
Let z = x + iy. Then Log(z) = ln |z| + iArg(z). So:
u(x, y) = ln √(x² + y²)
v(x, y) = arctan(y/x)
(using Re(z) > 0, so that Arg(z) = arctan(y/x)). We know that f′(z) = ux + ivx. We compute:
ux = x/(x² + y²)
vx = (1/((y/x)² + 1)) · (−y/x²) = −y/(x² + y²)
As such, the derivative is (x − iy)/(x² + y²) = z̄/|z|² = 1/z.
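A difference-quotient check of Log′(z) = 1/z, using cmath.log (the principal logarithm) at a few points in the right half plane; a numerical sanity check, not a proof:

```python
import cmath

# Numerically check Log'(z) = 1/z using difference quotients with
# complex steps h, at points with Re(z) > 0.
def dLog(z, h):
    return (cmath.log(z + h) - cmath.log(z)) / h

for z in [1 + 1j, 3 - 2j, 0.5 + 0.1j]:
    for h in [1e-6, 1e-6j, 1e-6 * (1 + 1j)]:
        assert abs(dLog(z, h) - 1 / z) < 1e-5
```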
Example 2.4.3
Find where f (z) = cos(x) is differentiable.
We have that u(x, y) = cos(x) and v(x, y) = 0. If this were differentiable, it would satisfy the
Cauchy-Riemann equations. We would have:
−sin(x) = ux = vy = 0
0 = uy = −vx = 0
So, this function does not satisfy the Cauchy-Riemann equations when sin(x) ≠ 0. I.e., when x ≠ kπ.
What happens when x = kπ? Write h = a + ib and consider:
|(cos(kπ + a) − (−1)^k)/(a + ib)| ≤ |cos(kπ + a) − (−1)^k|/|a| → 0
by L’Hopital (in R). As such, the squeeze theorem tells us that limh→0 (cos(kπ + a) − (−1)^k)/(a + ib) = 0.
So cos(x) is differentiable at exactly the points z with Re(z) = kπ, with derivative 0 there.
In this example, we saw a function that was differentiable at exactly the places where it
satisfied the Cauchy-Riemann equations. Is that generally true? That would make life very nice for us.
Example 2.4.4
Consider f(z) = √|xy|. Prove that this satisfies the Cauchy-Riemann equations at 0, but is not
differentiable there.
Since √|xy| is real, u(x, y) = √|xy| and v(x, y) = 0. Now, computing the partial derivatives using
differentiation rules will not work here. We need to use the definition:
ux(0, 0) = lima→0 (√|a · 0| − √|0 · 0|)/a = 0
Similarly, uy(0, 0) = 0, and vx = vy = 0 everywhere. So the Cauchy-Riemann equations hold at (0, 0).
Now, suppose f′(0) were to exist. That means the defining limit must exist along every direction! We have
shown that it is 0 along the real and imaginary axes. Consider the direction where x = y, and x > 0. We get:
lim(a,a)→(0+,0+) √|a²|/(a + ia) = lima→0+ a/(a + ia) = 1/(1 + i) ≠ 0
Since we get different limits along different directions, f′(0) does not exist. Therefore, f satisfies
the Cauchy-Riemann equations at z = 0, but is not differentiable there!
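The two directional difference quotients above can be reproduced numerically (a sketch; `quotient` is a name introduced here):

```python
import math

# f(z) = sqrt(|xy|): difference quotients of f at 0 along two directions.
def quotient(h):
    x, y = h.real, h.imag
    return math.sqrt(abs(x * y)) / h  # (f(h) - f(0)) / h, since f(0) = 0

for a in [1e-2, 1e-5, 1e-8]:
    assert abs(quotient(complex(a, 0))) < 1e-12              # real axis: -> 0
    target = 1 / (1 + 1j)
    assert abs(quotient(complex(a, a)) - target) < 1e-12     # y = x: -> 1/(1+i)
```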
We would like to have an easy to check, general condition to see if a function is differentiable. The
Cauchy-Riemann equations seemed like a good bet, but we’ve seen that they aren’t sufficient to guarantee
that a function is differentiable. Can we fix this?
The answer is yes. There is an easy condition we can add, which will salvage the usefulness of the
Cauchy-Riemann equations. However, to properly discuss it we will need to take a brief detour into topology.
Definition 2.4.2: Open Ball
Let z0 ∈ C and r > 0. The open ball of radius r centered at z0 is Br(z0) = {z ∈ C | |z − z0| < r}.
So they’re just filled in circles missing their boundaries:
Open balls are the basic building blocks of open sets. There are two equivalent characterisations:
Definition 2.4.3: Open Set
We say that a subset U ⊂ C is open if for any z0 ∈ U , there exists a ball Br (z0 ) which is contained in
U.
Alternatively, an open set is an arbitrary union of open balls.
Notice that around the point z0 , we have found a ball Br (z0 ) contained entirely in U , which is the shaded
region. We could do the same for any point in the shaded region, if we desired.
How do we recognize open sets in practice? If we have a picture, it’s easy to do so. The set shouldn’t
contain any “edge" or boundary. This is an intuitive notion which is easy to see visually, so we won’t go through
the effort of describing it formally. There is a formal definition, but it isn’t very helpful for developing a visual
heuristic.
Algebraically, it’s much easier to determine open sets. Open sets are generally described by conditions
involving inequalities of some variety. For example: {z ∈ C| Re(z) > 1} is an open set. Open balls are open
sets. C and the empty set are open. And there are many other examples.
In addition to open sets, we need one more topological notion: connectedness. The idea behind a set being
"connected" is that it is in one piece. To describe this formally, we need to talk about paths.
Definition 2.4.4: path
A path γ in C is a function γ : [0, 1] → C such that Re(γ) and Im(γ) are continuous functions.
In a set that is in one piece, it should be possible to draw a path between two points in the set without
leaving the set. And this is precisely the definition of connectedness.
Definition 2.4.5: Connected Set
A set U ⊂ C is called connected if for any two z0 , z1 ∈ U there exists a path γ such that γ(0) = z0 ,
γ(1) = z1 , and γ(t) ∈ U for all t.
On the other hand, these are sets that are not connected:
The first of these is hopefully self-explanatory. It’s in two pieces. The second of these, which consists of
the triangle and the circle, is also in two discrete pieces.
In the third picture, we have open balls which are tangent to one another. However, since they are missing
their boundaries, these two sets actually have a gap between them!
Putting these two notions together, we have the basic topological object we will be dealing with in this
course:
Definition 2.4.6: Domain
A set U ⊂ C is called a domain if it is open and connected.
Note. This is not the same thing as the domain of a function! You need to be able to tell the difference from
context. If we are not discussing a function, then domain will be referring to this definition. In contexts where
a function is also being discussed, you will need to be vigilant. There is a difference between the language
“the domain U" and “the domain U of f."
Note. The last note, while an important technical issue, doesn’t quite give the full picture.
In the context of this course, domains will almost always occur as the domain of a differentiable function.
(In fact, of what’s called an “analytic function", which we will see soon.)
Definition 2.4.7: Holomorphic Function
Let U be an open set. A function f : U → C is holomorphic on U if it is differentiable at every point of U.
Note. The textbook does not use the terminology “holomorphic" to describe these functions; it calls them
analytic. However, to be precise, analytic really means something different. An analytic function is a function
that is equal to a power series.
However, it is one of the crowning achievements of complex analysis that every holomorphic function is
analytic. (I.e., if f is differentiable on an open set, then it can be described by power series on that open set.)
As such, holomorphic and analytic are the same condition on C. On R, though, differentiable and analytic are
not the same, so it is worthwhile to make this distinction.
Plus, I just like the sound of holomorphic more.
So what was the point of introducing all these definitions? Do we get something out of this?
Theorem 2.4.2
Suppose f = u + iv is defined on an open set U . If u, v, ux , uy , vx , vy are defined and continuous
everywhere in U and u, v satisfy the Cauchy-Riemann equations, then f is holomorphic on U .
Proof. Since the partial derivatives exist and are continuous, u is differentiable in the R² sense, and we can write:
u(x + a, y + b) = u(x, y) + aux + buy + εu(a, b)
where εu(a, b)/|a + ib| → 0 as (a, b) → (0, 0). And similarly:
v(x + a, y + b) = v(x, y) + avx + bvy + εv(a, b)
where εv is defined similarly for v.
As such:
lima+ib→0 (f(z + h) − f(z))/h = lima+ib→0 (u(x + a, y + b) + iv(x + a, y + b) − (u(x, y) + iv(x, y)))/(a + ib)
= lima+ib→0 (aux + buy + i(avx + bvy) + εu(a, b) + iεv(a, b))/(a + ib)
= lima+ib→0 (aux − bvx + i(avx + bux))/(a + ib) + lima+ib→0 (εu(a, b) + iεv(a, b))/(a + ib)
where we used the Cauchy-Riemann equations to replace uy by −vx and vy by ux.
Now, aux − bvx + i(avx + bux) = (a + ib)ux + (ia − b)vx = (a + ib)ux + i(a + ib)vx. As such, the first
limit is ux + ivx.
So we need only consider this last limit. However, note that by the triangle inequality:
|(εu(a, b) + iεv(a, b))/(a + ib)| ≤ |εu(a, b)|/|a + ib| + |εv(a, b)|/|a + ib| → 0
So by the squeeze theorem, lima+ib→0 (εu(a, b) + iεv(a, b))/(a + ib) = 0 as well.
As such, f′(z) = ux + ivx, and f is differentiable at z. Since this applies for any z ∈ U, we see that f is
holomorphic on U.
Example 2.4.5
Show that e^z is analytic on C.
We need to write e^z as u + iv, and show that u, v, ux, uy, vx, vy are continuous. We also need to show
that u, v satisfy the Cauchy-Riemann equations.
To start, e^z = e^x e^{iy} = e^x cos(y) + ie^x sin(y). As such, u(x, y) = e^x cos(y) and v(x, y) = e^x sin(y).
These are continuous.
We compute the partials:
ux = e^x cos(y)
uy = −e^x sin(y)
vx = e^x sin(y)
vy = e^x cos(y)
Notice that these are all continuous. And further, that:
ux = e^x cos(y) = vy
uy = −e^x sin(y) = −vx
So u, v satisfy the Cauchy-Riemann equations, and by our theorem e^z is holomorphic (hence analytic) on C.
Functions which are analytic on C are special. They have some very nice properties. For example, the
range of a non-constant such function is always either C or C \ {z0} for some z0 ∈ C. This is called Picard’s
Little Theorem. We won’t be using that in this course, but it’s a good example of why these functions are
special. As such, they have a name:
Definition 2.4.8: Entire Function
A function which is analytic on C is called entire.
We’ll talk more about entire functions as the course moves on.
How do we compute derivatives in practice? After all, computing partials and then turning them into a
function of z can be quite nasty. Is there a better way?
Theorem 2.4.3: The Derivative Rules
Suppose f, g are differentiable at z0 and h is differentiable at f(z0). Then:
• (f + g)′(z0) = f′(z0) + g′(z0)
• (fg)′(z0) = f′(z0)g(z0) + f(z0)g′(z0)
• (h ◦ f)′(z0) = h′(f(z0))f′(z0)
Combining this with the basic derivatives we know: z, e^z, Log(z), etc. allows us to figure out the derivatives
of basically any function we want.
Example 2.4.6
Find the derivative of z n .
Well, we could use the definition of the derivative in combination with the binomial theorem.
Instead, let’s use the product rule and induction. We claim that if f(z) = z^n, then f′(z) = nz^{n−1}
for integers n ≥ 0.
Base case: When n = 0, we know that f(z) = 1, so f′(z) = 0. After all, (f(z + h) − f(z))/h = (1 − 1)/h = 0 for
h ≠ 0. So the formula holds when n = 0.
For n = 1, we have limh→0 ((z + h) − z)/h = limh→0 h/h = 1. So the formula also holds for n = 1.
Induction step: Consider z^{n+1}. Let f(z) = z^n and g(z) = z. Then, by the product rule:
d/dz z^{n+1} = f′(z)g(z) + f(z)g′(z) = (nz^{n−1})z + z^n(1) = nz^n + z^n = (n + 1)z^n = (n + 1)z^{(n+1)−1}
As such, the formula holds for n + 1 as well. By induction, our claim holds for all integers n ≥ 0.
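A quick numerical spot check of the power rule at a complex point, using a difference quotient with a complex step:

```python
# Difference-quotient check of d/dz z^n = n*z^(n-1) at a complex point.
z0 = 1.3 - 0.7j
h = 1e-7 * (1 + 1j)  # small step in a "diagonal" complex direction
for n in range(6):
    q = ((z0 + h) ** n - z0 ** n) / h
    assert abs(q - n * z0 ** (n - 1)) < 1e-4
```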
What about non-integer powers? First, we need to find the derivative of any branch of log(z).
Example 2.4.7
Suppose log0 (z) is a continuous branch of log(z), given by taking arg(z) ∈ (θ, θ + 2π). We have already
shown that Log(z) is differentiable. A similar argument will work here. So we’ll move ahead with
finding the derivative.
We know that e^{log0(z)} = z. By the chain rule:
1 = d/dz e^{log0(z)} = e^{log0(z)} · (log0(z))′ = z · (log0(z))′
So, 1 = z (log0(z))′, and the derivative of log0(z) is 1/z.
Example 2.4.8
Find the derivative of any branch of z α where α ∈ C.
By definition, f(z) = z^α = e^{α log0(z)}, where log0(z) is the branch of the logarithm corresponding to
the branch of z^α.
By the chain rule, f′(z) = e^{α log0(z)} (α log0(z))′ = α e^{α log0(z)}/z. Can we simplify this at all?
Since 1/z = e^{−log0(z)}, we get f′(z) = α e^{α log0(z)} e^{−log0(z)} = α e^{(α−1) log0(z)} = α z^{α−1}, taken in the corresponding branch.
So, the formula we expect works not just for integers, but for any complex exponent.
2.5 Harmonic Functions
Before we move on, we have one more fact about analytic functions that we’re going to discuss. It turns out,
they’re very closely related to harmonic functions.
Definition 2.5.1: Harmonic Function
Let U be an open set in C. A function u : U → R is harmonic if it satisfies that:
uxx + uyy = 0
Now, it is a very nice fact about holomorphic functions f = u + iv that u and v are actually twice
differentiable, and uxy, uyx, vxy, and vyx are all continuous. In fact, we will see later that if f is holomorphic,
then f is infinitely differentiable. For now, we will take this for granted.
Theorem 2.5.1
Suppose f is an analytic function on U . Then u and v are harmonic functions on U .
Proof. By our discussion above, we know that we can apply Clairaut’s theorem to get:
uxx = ∂ux/∂x
= ∂vy/∂x (by C-R)
= vyx = vxy (by Clairaut)
= ∂vx/∂y
= ∂(−uy)/∂y (by C-R)
= −uyy
So uxx + uyy = −uyy + uyy = 0, and u is harmonic. A similar argument shows that v is also harmonic.
Example 2.5.1
Is u(x, y) = x² harmonic? We compute:
uxx = 2
uyy = 0
So uxx + uyy = 2 ≠ 0, and u is not harmonic.
Alright, so the real (and imaginary) parts of an analytic function are harmonic. Is the converse true? I.e.,
does every harmonic function appear as the real or imaginary part of an analytic function? It turns out that
the answer is yes! Let’s see an example.
the answer is yes! Let’s see an example.
Example 2.5.2
Let u(x, y) = 3x2 y − y 3 . Find an analytic function f (z) whose real part is u(x, y).
First, notice that u is harmonic. We won’t use this explicitly anywhere, but our goal is to demon-
strate that harmonic functions do appear as the real parts of an analytic function.
If f (z) is such a function, it must satisfy the Cauchy-Riemann equations. Let f (z) = u + iv. Then:
vy = ux = 6xy
As such, we can see that v(x, y) = ∫ ux dy = 3xy² + g(x) for some function g(x). Why do we get this
g(x)? Well, we’re integrating in terms of y. That only recovers y information. In particular, it misses
any parts of v that depend only on x! So we need to assume there is some part of v that depends only
on x, and then try to determine what that is.
To do so, let’s look at vx. We have vx = 3y² + g′(x). However, by Cauchy-Riemann, we know that:
vx = −uy = −(3x² − 3y²) = 3y² − 3x²
So g′(x) = −3x², and we can take g(x) = −x³. Then v(x, y) = 3xy² − x³, and
f(z) = (3x²y − y³) + i(3xy² − x³) = −iz³ is an analytic function with real part u.
This example turns out to be archetypical. Given u(x, y), finding v(x, y) so that u + iv is analytic boils
down to solving a system of partial differential equations. However, the procedure isn’t unreasonable. Such
u and v are called harmonic conjugates:
Definition 2.5.2: Harmonic Conjugates
Let u(x, y) be a harmonic function. We say that v is a harmonic conjugate for u if f (z) = u + iv is
holomorphic.
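Carrying the computation in Example 2.5.2 through gives the harmonic conjugate v(x, y) = 3xy² − x³ (up to a constant), with u + iv = −iz³. A numerical spot check (not from the notes):

```python
# For u(x, y) = 3x^2*y - y^3, the Cauchy-Riemann computation yields the
# harmonic conjugate v(x, y) = 3x*y^2 - x^3, and u + iv = -i*z^3.
def u(x, y):
    return 3 * x**2 * y - y**3

def v(x, y):
    return 3 * x * y**2 - x**3

for x, y in [(1.0, 2.0), (-0.5, 3.0), (2.0, -1.0)]:
    f = complex(u(x, y), v(x, y))
    z = complex(x, y)
    assert abs(f - (-1j) * z**3) < 1e-9
```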
Example 2.5.3
Why do we need this notion? Isn’t it true that if u, v are harmonic, then u + iv is holomorphic?
Well, no. Consider u(x, y) = x and v(x, y) = −y. Then these are both harmonic. However,
f(z) = x − iy = z̄, which we know is nowhere differentiable.
So, it is not true that if you take any two arbitrary harmonic functions u, v, that u+iv is holomorphic.
This is something very special.
In light of this example, it’s natural to ask whether or not harmonic functions even have a harmonic
conjugate. The answer is, luckily, yes.
Theorem 2.5.2
If u is harmonic on an open set U, then there exists a harmonic conjugate v for u.
Proof idea. We need to find v satisfying:
vy = ux
vx = −uy
Since then u, v satisfy Cauchy-Riemann and f = u + iv will be holomorphic. (We are guaranteed that u, v
and their partials are continuous since they are harmonic.) We know ux and −uy, so we just need to know
how to solve a system of the form vx = F and vy = G. The theoretical idea is to integrate, as in the example
above. We will eschew giving a precise proof; our topic isn’t PDEs after all.
3 | Integration
We are now ready to begin talking about the integration side of complex calculus. Since complex functions
can be thought of as functions of two variables, we have a couple of options for integrating: surface integrals,
such as ∫∫_U f(z) dx dy, where we integrate over the open set U; and line integrals ∫_γ f(z) dz, where we integrate
over a curve γ in C.
It turns out that surface integrals hold little interest. Line integrals are where the magic happens. Here
are some facts we’re going to be able to show using complex line integrals:
• Every holomorphic function is analytic, meaning it has a power series expansion around any point in
the domain.
Liouville’s theorem is a neat theoretical tool, which will give us some very powerful theorems. It’s going to
allow us to prove the Fundamental Theorem of Algebra (every non-constant complex polynomial has roots).
Why might the last fact be interesting? Well, it’s not the fact by itself that’s interesting. After all, it’s a
seemingly random integral. (Indeed, it only holds interest to me because it appeared on my comprehensive
exam for complex analysis!) What’s interesting is that we can use complex line integrals to find real improper
integrals.
3.1 Curves in C
Before we can begin discussing line integrals, we should give ourselves a quick refresher on curves. After all,
we need to integrate over something.
Definition 3.1.1: Curves
A curve in C is a function γ : [a, b] → C, for some real a < b, such that Re(γ) and Im(γ) are continuous.
Further, such a curve is called smooth if Re(γ) and Im(γ) are differentiable. It is called piecewise
smooth if they are differentiable except at finitely many places. In this case, γ 0 = [Re(γ)]0 + [Im(γ)]0 .
Paths are a special case of curves, where a = 0 and b = 1.
The point γ(a) is called the start of the curve, and γ(b) is the end of the curve. γ is called closed
if it starts and ends at the same point.
We are going to be using a lot of different paths throughout this course. We present a few common ones:
Example 3.1.1
Let z0, z1 ∈ C. The line segment from z0 to z1 can be described by the curve γ : [0, 1] → C with:
γ(t) = z0 + t(z1 − z0)
So, for example, the line segment from 1 − i to 5 + 7i is given by the curve γ(t) = 1 − i + t(4 + 8i) =
(1 + 4t) + (−1 + 8t)i.
Example 3.1.2
Let z0 ∈ C and r ≥ 0. Fix an angle θ. Then travelling along the circle of radius r centered at z0,
starting at angle θ and travelling once counterclockwise, is done by the curve γ : [0, 2π] → C with:
γ(t) = z0 + re^{i(θ+t)}
There are a few interesting ways to manipulate and combine curves. These will pop up, so let’s give a
brief description:
Definition 3.1.2: −γ
Let γ : [a, b] → C be a curve in C. Then we can define the curve −γ which traverses γ backwards at
the same speed. We define −γ : [a, b] → C by:
(−γ)(t) = γ(a + b − t)
Why a + b − t? When looking at γ, the variable t travels along the interval [a, b] from a to b linearly.
Going backwards, we want a linear function s : [a, b] → [a, b] so that s(a) = b and s(b) = a. I.e., go from b to
a linearly. The function s(t) = a + b − t does this. It’s linear, s(a) = a + b − a = b and s(b) = a + b − b = a.
We can also combine curves, by following one and then the other.
Definition 3.1.3: γ1 + γ2
Suppose γ1 : [a, b] → C and γ2 : [c, d] → C are two curves such that γ1 (b) = γ2 (c). I.e., the second curve
starts where the first one ends.
Then γ1 +γ2 is the curve that first follows γ1 and then follows γ2 . We define γ1 +γ2 : [a, b+(d−c)] → C
by:
(γ1 + γ2)(t) = γ1(t) if t ∈ [a, b], and (γ1 + γ2)(t) = γ2(c − b + t) if t ∈ [b, b + (d − c)]
The first part simply says to follow γ1 until the end. The second part, however, looks confusing. Why
c − b + t? Well, keep in mind that when we are done travelling γ1 , we are at t = b. To start γ2 , we need the
input of γ2 to be c. The formula c − b + t gives (γ1 + γ2 )(b) = γ2 (c − b + b) = γ2 (c), like we want.
And then we end at b + (d − c), since (γ1 + γ2)(b + (d − c)) = γ2(c − b + (b + (d − c))) = γ2(d), also
as desired.
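These formulas translate directly into code. A quick sketch checking the endpoint bookkeeping above — the specific curves used here are my own illustrative choices, not from the notes:

```python
def reverse(gamma, a, b):
    # (-gamma)(t) = gamma(a + b - t): traverse gamma backwards on the same interval [a, b].
    return lambda t: gamma(a + b - t)

def concatenate(gamma1, a, b, gamma2, c, d):
    # (gamma1 + gamma2) on [a, b + (d - c)], assuming gamma1(b) == gamma2(c).
    return lambda t: gamma1(t) if t <= b else gamma2(c - b + t)

# The segment from 1 - i to 5 + 7i (Example 3.1.1), on [0, 1].
gamma = lambda t: (1 - 1j) + t * (4 + 8j)
rev = reverse(gamma, 0, 1)
assert rev(0) == gamma(1) and rev(1) == gamma(0)  # start and end swap

# A second curve starting where gamma ends: the segment from 5 + 7i to 0, on [2, 3].
gamma2 = lambda t: (5 + 7j) * (3 - t)
joined = concatenate(gamma, 0, 1, gamma2, 2, 3)  # defined on [0, 1 + (3 - 2)] = [0, 2]
assert joined(0) == gamma(0) and joined(2) == gamma2(3)
```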
These are mostly academic formulas. They’ll let us prove some things, and that’s their only real use. In
fact, we are very quickly going to prove a couple of theorems that render these formulas totally useless.
Lastly, we need to quickly talk a bit about closed curves. As it turns out, there is some notion of "direction" when you travel along a closed curve, provided that it doesn't self-intersect.
Definition 3.1.4: Simple Closed Curve
A closed curve is called simple if it does not self-intersect. This means that γ(t) = γ(s) occurs only
when t = s or at the start and end.
Any simple, closed curve divides the plane into two pieces. This is the content of the Jordan curve theorem: if γ is a simple, closed curve, then C \ γ consists of exactly two connected open regions, one bounded (the inside, in(γ)) and one unbounded (the outside, out(γ)).
Just to be clear, when we say that there is a path from z to ∞, we mean there is a continuous function
σ : [a, ∞) → C so that σ(a) = z and limt→∞ |σ(t)| = ∞. I.e., the path gets arbitrarily far from the origin.
So, with that understanding, intuitively the outside is the set of all points that can "escape to ∞" without
passing γ. The inside is all points that cannot escape to ∞ and which are not on γ.
The Jordan curve theorem is notoriously difficult to prove. For such a simple statement, you end up
needing a lot of heavy machinery. (The first place I saw a proof was in graduate differential topology, to give
you an idea of how hard it is to prove.) So we’re naturally going to run far, far away.
However, this idea of inside and outside will let us define our notion of direction.
Definition 3.1.5: Orientation of a Simple Closed Curve
Let γ be a simple, closed curve. Then γ is positively oriented if the inside of γ is on your left as you
traverse γ. If the inside of γ remains on your right, the curve is negatively oriented.
This is a visual definition. Orientation is something you need to be able to understand from a picture.
There is a more formal definition, but it isn’t terribly useful for us.
Example 3.1.3
The curve which follows the triangle from 0 to 1 to i and back to 0 is positively oriented.
Circles travelled counterclockwise are positively oriented. Circles travelled clockwise are negatively
oriented.
Example 3.1.4
If γ is positively oriented, then −γ is negatively oriented. If you travel the curve backwards, the inside
of γ now appears on the opposite side.
3.2 Line Integrals

Definition 3.2.1: Line Integrals

Let γ : [a, b] → C be a smooth curve in C and f = u + iv be a complex function whose domain contains γ. Let γ(t) = a(t) + ib(t). Then we define the line integral of f over γ by:

∫_γ f(z) dz = ∫_a^b f(γ(t)) γ′(t) dt
            = ∫_a^b (u(γ(t)) + iv(γ(t)))(a′(t) + ib′(t)) dt
            = ∫_a^b u(a(t), b(t))a′(t) − v(a(t), b(t))b′(t) dt + i ∫_a^b v(a(t), b(t))a′(t) + u(a(t), b(t))b′(t) dt
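Since this reduces a line integral to an ordinary integral over [a, b], we can sanity-check computations numerically. A minimal sketch — the helper `contour_integral` and the test curve are my own illustrative choices, not part of the notes:

```python
def contour_integral(f, gamma, a, b, n=20000):
    # Approximate the line integral of f over gamma by a midpoint Riemann-Stieltjes sum:
    # sum of f(gamma(midpoint)) * (gamma(t_{k+1}) - gamma(t_k)) over n subintervals of [a, b].
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Integrate f(z) = z^2 along the segment from 0 to 1 + i.
# Since z^3/3 differentiates to z^2, the exact value is (1 + i)^3 / 3 = (-2 + 2i)/3.
val = contour_integral(lambda z: z * z, lambda t: t * (1 + 1j), 0.0, 1.0)
assert abs(val - (-2 + 2j) / 3) < 1e-6
```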
Before we develop all sorts of neat techniques, let’s get our hands dirty and actually compute a line integral.
Example 3.2.1
Let γ be the line segment from 2 − i to −3 + 2i. Find ∫_γ e^z dz.

We have γ(t) = (2 − i) + t((−3 + 2i) − (2 − i)) = (2 − i) + t(−5 + 3i) = (2 − 5t) + i(−1 + 3t), where t goes from 0 to 1. As such, γ′(t) = −5 + 3i.

∫_γ e^z dz = ∫_0^1 e^{(2−5t)+i(−1+3t)} (−5 + 3i) dt
           = ∫_0^1 (e^{2−5t} cos(3t − 1) + ie^{2−5t} sin(3t − 1))(−5 + 3i) dt
           = ∫_0^1 −5e^{2−5t} cos(3t − 1) − 3e^{2−5t} sin(3t − 1) dt + i ∫_0^1 3e^{2−5t} cos(3t − 1) − 5e^{2−5t} sin(3t − 1) dt
And this has now become a particularly nasty first year calculus integral. Integrating by parts a
couple of times gives an answer. But there must be a better way?
Rather than continuing to bash our heads against this, let's try to find a smarter way to handle this integral. After all, we would expect integrating a simple function like e^z to be simple. If we were to integrate ∫_a^b e^x dx, we know by the Fundamental Theorem of Calculus that this is e^b − e^a. Is there a complex version of this?
Theorem 3.2.1: The Complex Fundamental Theorem of Calculus
Suppose F(z) is an analytic function on an open set U, and γ : [a, b] → C is a smooth curve contained in U. If F′(z) = f(z), then:

∫_γ f(z) dz = F(γ(b)) − F(γ(a))
Proof. The key tool here is the chain rule: F′(γ(t))γ′(t) = (F ◦ γ)′(t). This is a slightly different chain rule than we've seen so far, so let's give a quick proof:
Let F (z) = u(x, y) + iv(x, y) and γ(t) = a(t) + ib(t). Then F (γ(t)) = u(a(t), b(t)) + iv(a(t), b(t)). Now,
F (γ(t)) is a curve in C, so we know that its derivative is:
(F ◦ γ)′(t) = ux(a(t), b(t))a′(t) + uy(a(t), b(t))b′(t) + i(vx(a(t), b(t))a′(t) + vy(a(t), b(t))b′(t))
On the other hand, we know that F′ = ux + ivx, and that ux = vy and uy = −vx by the Cauchy-Riemann equations. As such:

F′(γ(t))γ′(t) = (ux(a(t), b(t)) + ivx(a(t), b(t)))(a′(t) + ib′(t))
             = ux(a(t), b(t))a′(t) − vx(a(t), b(t))b′(t) + i(vx(a(t), b(t))a′(t) + ux(a(t), b(t))b′(t))
             = ux(a(t), b(t))a′(t) + uy(a(t), b(t))b′(t) + i(vx(a(t), b(t))a′(t) + vy(a(t), b(t))b′(t))
             = (F ◦ γ)′(t)
With this intermediate result in hand, we can now tackle what we wish to prove. We have:

∫_γ f(z) dz = ∫_a^b F′(γ(t))γ′(t) dt = ∫_a^b (F ◦ γ)′(t) dt
Let (F ◦ γ)(t) = x(t) + iy(t). So then (F ◦ γ)′(t) = x′(t) + iy′(t). As such:

∫_γ f(z) dz = ∫_a^b x′(t) dt + i ∫_a^b y′(t) dt
However, since x and y are real functions, we can now use the Fundamental Theorem of Calculus over R
to get:
∫_γ f(z) dz = x(b) − x(a) + i(y(b) − y(a))
            = (x + iy)(b) − (x + iy)(a)
            = (F ◦ γ)(b) − (F ◦ γ)(a)
as desired.
Example 3.2.2
Find ∫_γ e^z dz where γ is the line segment from 2 − i to −3 + 2i.

Well, we know that if f(z) = e^z, then f′(z) = e^z = f(z), so e^z is its own primitive. So, we have that:

∫_γ e^z dz = e^{end of γ} − e^{start of γ} = e^{−3+2i} − e^{2−i}
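We can check this value against the definition numerically. A quick sketch — the midpoint-sum helper is my own, not part of the notes:

```python
import cmath

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# The segment from 2 - i to -3 + 2i, parametrized as in Example 3.2.1.
gamma = lambda t: (2 - 1j) + t * (-5 + 3j)
val = contour_integral(cmath.exp, gamma, 0.0, 1.0)
exact = cmath.exp(-3 + 2j) - cmath.exp(2 - 1j)  # e^(end) - e^(start), by the CFTC
assert abs(val - exact) < 1e-5
```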
Unlike over R, we don't call a function F(z) with F′(z) = f(z) an antiderivative. The terminology in C is a primitive.
Definition 3.2.2: Primitive
Let f(z) be a function defined on an open set U ⊂ C. We say that an analytic function F(z) is a primitive for f(z) on U if F′(z) = f(z) for all z ∈ U.
At this point, we could reduce this to a course in finding primitives. However, as you've seen in first year, that's a wholly unsatisfying endeavour. It also ignores the uniquely complex characteristics of our setting.
3.2.1 Green’s Theorem and the Cauchy Integral Theorem
As a quick refresher from multivariable calculus, let’s remind ourselves what Green’s theorem says.
Theorem 3.2.2: Green’s Theorem
Let γ be a positively oriented, piecewise smooth, simple, closed curve in R^2. Suppose that f, g : U → R where U is an open set containing both γ and the inside in(γ). If f and g have continuous partials, then:

∫_γ f dx + g dy = ∫∫_{in(γ)} (g_x − f_y) dxdy
We will not be proving this. We will, however, gladly use it. Before we use it, let’s unpack the line integral.
Let γ : [a, b] → R^2 be given by γ(t) = (x(t), y(t)). Then:

∫_γ f dx + g dy = ∫_a^b f(x(t), y(t))x′(t) + g(x(t), y(t))y′(t) dt
We can use this to prove a very useful theorem in C. However, it is going to require us to make an
assumption. We are going to assume for the remainder of the course that if f (z) is holomorphic on an open
set U , then ux , uy , vx , and vy are all continuous on U . (It is my intent to revisit this later and provide a proof,
as an appendix.)
With this assumption, we can prove:
Theorem 3.2.3: Cauchy’s Integral Theorem (Version 1)
Suppose f is holomorphic on an open set U , and γ is a piecewise smooth, positively oriented, simple,
closed curve in U , such that in(γ) ⊂ U as well. Then:
∫_γ f(z) dz = 0

Proof. Write f = u + iv and γ(t) = a(t) + ib(t). Notice that ∫_a^b u(a(t), b(t))a′(t) − v(a(t), b(t))b′(t) dt = ∫_γ u dx − v dy. So by Green's theorem, this becomes:

∫_a^b u(a(t), b(t))a′(t) − v(a(t), b(t))b′(t) dt = ∫_γ u dx − v dy = ∫∫_{in(γ)} (−v_x) − u_y dxdy = ∫∫_{in(γ)} −v_x + v_x dxdy = 0

using the Cauchy-Riemann equation u_y = −v_x.
Similarly, we also find that:

∫_a^b v(a(t), b(t))a′(t) + u(a(t), b(t))b′(t) dt = ∫∫_{in(γ)} u_x − v_y dxdy = ∫∫_{in(γ)} u_x − u_x dxdy = 0

As such, ∫_γ f(z) dz = 0 + i0 = 0.
Why go through all this effort to prove this? Shouldn’t this just be a quick application of the CFTC?
Example 3.2.3
Consider this argument:
Proof. Let F be a primitive for f on the open set U . Then by CFTC:
∫_γ f(z) dz = F(γ(b)) − F(γ(a)) = 0

since γ is closed, so γ(a) = γ(b).
What’s wrong with this proof? Why did we have to break out Green’s theorem instead?
Well, this is predicated on us having a primitive F for f! We don't know that analytic functions have primitives on every open set, and it turns out this is not always true.
So when does an analytic function have a primitive? How might we go about finding this function? For
inspiration, we turn to a similar result from first year calculus, the Fundamental Theorem of Calculus.
Recall that the FTC has two parts. The first part we already have an analogue for, which is our CFTC.
The second part says that if f (x) is a continuous function on [a, b], then:
F(x) = ∫_a^x f(t) dt
is a differentiable function on (a, b) with F′(x) = f(x). I.e., F is an antiderivative for f.
If we try to emulate this definition in C, we could try to define a function F (z) on a domain D as follows.
Fix a point z0 in D. For any point z ∈ D, we define
F(z) = ∫_{γ_z} f(w) dw
where γz is any curve from z0 to z in D. Does this definition make sense? Well, we can compute these
integrals for sure. So let’s work out an example.
Example 3.2.4
Let D = C \ {0}, and f(z) = 1/z. This function is analytic on the domain D. We'll fix our point z0 = 1. Let's consider what this formula gives for F(−1). We will compute the integral along two curves: the upper and lower unit semicircles, γ1 and γ2.

The curve γ1 is the upper semicircle of radius 1 centered at 0, travelled from 1 to −1. We can parametrize it as γ1(t) = e^{it} for t ∈ [0, π]. Its derivative is γ1′(t) = ie^{it}. So we compute the integral:

∫_{γ1} (1/z) dz = ∫_0^π (1/e^{it}) ie^{it} dt = ∫_0^π i dt = iπ

The curve γ2 is the lower semicircle of radius 1 centered at 0, travelled from 1 to −1. We can parametrize it as γ2(t) = e^{−it} for t ∈ [0, π]. Its derivative is γ2′(t) = −ie^{−it}. So we compute the integral:

∫_{γ2} (1/z) dz = ∫_0^π (1/e^{−it})(−ie^{−it}) dt = ∫_0^π −i dt = −iπ

We find that −iπ = F(−1) = iπ, which tells us that F isn't a well-defined function!
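Both semicircle values are easy to confirm numerically. A quick check — the midpoint-sum helper is my own, not from the notes:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Upper and lower unit semicircles from 1 to -1, as in Example 3.2.4.
upper = contour_integral(lambda z: 1 / z, lambda t: cmath.exp(1j * t), 0.0, math.pi)
lower = contour_integral(lambda z: 1 / z, lambda t: cmath.exp(-1j * t), 0.0, math.pi)
assert abs(upper - 1j * math.pi) < 1e-6  # integral over gamma_1 is i*pi
assert abs(lower + 1j * math.pi) < 1e-6  # integral over gamma_2 is -i*pi
```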
In this example, we ran afoul of something very unfortunate: complex line integration is not path inde-
pendent in general.
Suppose the integrals of f over two curves γ1 and γ2, each going from one fixed point to another, were equal. Then:

0 = ∫_{γ1} f(z) dz − ∫_{γ2} f(z) dz
  = ∫_{γ1} f(z) dz + ∫_{−γ2} f(z) dz      (from week 5 homework, Q6)
  = ∫_{γ1+(−γ2)} f(z) dz                  (from week 5 homework, Q7)

So, the integrals ∫_{γ1} f(z) dz and ∫_{γ2} f(z) dz are equal if and only if the integral over the closed curve γ1 + (−γ2) is 0. This looks an awful lot like a Cauchy Integral Theorem type result. However, we have no guarantees that γ1 + (−γ2) is smooth or simple. We only know that it is piecewise smooth and closed.
So, for the moment, let's try to generalize the Cauchy Integral Theorem to handle piecewise smooth, closed curves. To do so, we need to overcome two hurdles: we need to handle piecewise smooth curves, and we need to handle closed curves which are not simple.

Now, fortunately, extending CIT to work over piecewise smooth curves takes no work. Green's theorem applies to piecewise smooth, simple, closed curves as well.

To handle the non-simple closed case, the idea is fairly simple. Consider a closed curve which is composed of three different piecewise smooth, simple, closed curves. If f is analytic on a domain containing each of these curves and their insides, then we can use CIT on each of them. Then the total integral will be the sum of each of these integrals, which will be 0.
A complete proof of this is much more technical. So our proof will be somewhat "hand-wavey".
Theorem 3.2.4: Cauchy’s Integral Theorem (version 2)
Let f be a function that is analytic on a domain D. Suppose that γ is a closed, piecewise smooth curve such that D contains γ and all of the regions bounded by γ. Then:

∫_γ f(z) dz = 0

Proof (sketch). We can decompose γ into:

• finitely many piecewise smooth, simple, closed curves γj, each bounding a region contained in D
• countably many curves σj such that γ traverses each σj an equal number of times in both directions

Then:

∫_γ f(z) dz = Σ_{γj a simple, closed summand of γ} ∫_{γj} f(z) dz + Σ_{σj} n_j (∫_{σj} f(z) dz + ∫_{−σj} f(z) dz)

where n_j is the number of times γ traverses σj in each direction. By our original CIT, the first summand is 0. And since ∫_{σj} f(z) dz + ∫_{−σj} f(z) dz = 0, the second summand is 0. Therefore:

∫_γ f(z) dz = 0
The conditions on the curves in our CIT version 2 and in theorem 3.2.5 – namely that the domain contains all regions bounded by the curve – are generally annoying to handle on a case by case basis. And if we want our strategy to find a primitive to work, we need these conditions to hold for all curves from z0 to z in D. So we should expect this to require a condition on D. Fortunately, there's a nice topological condition we can impose that guarantees everything we need.
Definition 3.2.4: Simply Connected Domains
A domain D is called simply connected if for every simple, closed curve γ in D, we have in(γ) ⊂ D.
Intuitively, this means that the set has no "holes". For example:
Example 3.2.5
Domains with "holes" are not simply connected: for example, an annulus, or a disc with a single point removed.
On the other hand, there are plenty of sets we’re used to that are simply connected. C, C \ (−∞, 0],
and any open ball are all simply connected.
This condition lets us state another version of the Cauchy Integral Theorem:
Theorem 3.2.6: Cauchy’s Integral Theorem (version 3)
Suppose f (z) is analytic on a simply connected domain D. If γ is any piecewise smooth, closed curve
in D, then:

∫_γ f(z) dz = 0
Proof. This follows immediately from CIT version 2, since any closed curve in a simply connected domain
satisfies the hypotheses of CIT version 2.
Having a sufficiently general version of the Cauchy Integral Theorem is a key step towards proving results
about primitives. However, before we do that, we need another result. As it turns out, the result guaranteeing
primitives that I would like to prove hinges on the ability to estimate integrals. So, before we can finish talking
about primitives, we need the following result:
Theorem 3.2.7: M-L Estimation of Integrals
Suppose γ : [a, b] → C is a piecewise smooth curve and f(z) is a continuous function whose domain includes γ. Let M = max{|f(γ(t))| : t ∈ [a, b]}, and L = Length(γ). Then:

|∫_γ f(z) dz| ≤ ML
Proof. To begin, we will need to prove a nice fact: for any curve g : [a, b] → C, we have:
|∫_a^b g(t) dt| ≤ ∫_a^b |g(t)| dt

Let ∫_a^b g(t) dt = re^{iθ}. Set h(t) = e^{−iθ}g(t). Then:

∫_a^b h(t) dt = ∫_a^b e^{−iθ}g(t) dt = e^{−iθ} ∫_a^b g(t) dt = r

So ∫_a^b h(t) dt ∈ R. We therefore have that Re(∫_a^b h(t) dt) = ∫_a^b Re(h(t)) dt. So, we have:

|∫_a^b g(t) dt| = r
              = ∫_a^b h(t) dt
              = ∫_a^b Re(h(t)) dt
              ≤ ∫_a^b |Re(h(t))| dt    (from multivariable calc)
              ≤ ∫_a^b |h(t)| dt        (since |Re(h(t))| ≤ |h(t)|)
              = ∫_a^b |g(t)| dt        (since |h(t)| = |e^{−iθ}||g(t)| = |g(t)|)
With this in hand, we can now proceed to prove the result we desire:

|∫_γ f(z) dz| = |∫_a^b f(γ(t))γ′(t) dt|
             ≤ ∫_a^b |f(γ(t))||γ′(t)| dt
             ≤ ∫_a^b M|γ′(t)| dt
             = M ∫_a^b |γ′(t)| dt

However, we know that ∫_a^b |γ′(t)| dt = L, and so we have the desired inequality.
This result is, for now, theoretically interesting. However, much later on in the course, it will become very
useful in practice.
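Here is a numerical illustration in which the M-L bound is actually attained, using the deliberately non-analytic integrand f(z) = conj(z) on the unit circle. The example and helper are my own choices, not from the notes:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# f(z) = conj(z) on the unit circle: |f| = 1 on the circle, so M = 1 and L = 2*pi.
val = contour_integral(lambda z: z.conjugate(), lambda t: cmath.exp(1j * t), 0.0, 2 * math.pi)
# The integral works out to 2*pi*i, so |integral| = 2*pi = M*L: the bound is attained here.
assert abs(val - 2j * math.pi) < 1e-6
assert abs(val) <= 2 * math.pi + 1e-6
```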
Now that we have a better Cauchy’s Integral Theorem and this technical result, we can prove:
Theorem 3.2.8
If f(z) is analytic on a simply connected domain D, then f has a primitive on this domain.

Proof. Fix a point z0 ∈ D, and define:

F(z) = ∫_{γ_z} f(w) dw

where γz is any piecewise smooth curve from z0 to z. By CIT version 3, we know that integration of f in D is path independent, so this function is well defined.

All that remains is to show that F′(z) = f(z). Let γz be any curve from z0 to z. Since D is a domain, there exists a radius r > 0 such that Br(z) ⊂ D, so we assume that |h| < r. Let γh be the straight line from z to z + h. Notice that γh ⊂ Br(z), and that γz + γh is a curve from z0 to z + h in D. As such:

F′(z) = lim_{h→0} (∫_{γz+γh} f(w) dw − ∫_{γz} f(w) dw)/h = lim_{h→0} (1/h) ∫_{γh} f(w) dw

Now, since f is continuous at z, for any ε > 0 there exists r > 0 such that if |w − z| < r, then |f(w) − f(z)| < ε. If |h| < r, then for any w on γh, we also have |w − z| < r. This gives us that M = max{|f(w) − f(z)| : w on γh} < ε. Notice also that γh has length |h|, and that (1/h) ∫_{γh} f(z) dw = f(z). So by M-L estimation, we have:

|F′(z) − f(z)| = lim_{h→0} |(1/h) ∫_{γh} (f(w) − f(z)) dw| ≤ lim_{h→0} (1/|h|) ε|h| = ε

Therefore, for any ε > 0, |F′(z) − f(z)| ≤ ε. So |F′(z) − f(z)| = 0 and F′(z) = f(z).
Alright, so this is fairly technical. However, it tells us that a lot of functions have primitives. Note, though, that this is only one direction. This does not say that if the domain isn't simply connected, then f has no primitive.
Example 3.2.6
We know that d/dz (1/z) = −1/z^2. So, even though C \ {0} is not simply connected, 1/z^2 has a primitive!

On the other hand, this does tell us that some fairly fantastical functions have primitives. For example, sin(sin(sin(e^{z cos(z)}) + z^2)) cos(z^3 − 1) has a primitive on C! Good luck finding it, but it's there.
Alright, so we have a result that tells us when a function has a primitive. Can we figure out when a
function doesn’t have a primitive? Well, remember that if f has a primitive on D, then the integral of f over
any closed curve is automatically 0, by CFTC. This turns out to actually be an if and only if:
Theorem 3.2.9
Let f be analytic on a domain D. Then f has a primitive on D if and only if ∫_γ f(z) dz = 0 for every piecewise smooth, closed curve γ in D.

Proof. If f has a primitive, CFTC gives that ∫_γ f(z) dz = 0 for any closed γ, since γ ends where it starts.

On the other hand, if ∫_γ f(z) dz = 0 for every such γ, then our argument in the simply connected case still applies. The integral definition of F(z) is still well-defined. The rest of the argument doesn't use that D is simply connected, and so the rest of the proof still applies!
In practice, we can use this to conclude when something doesn’t have a primitive on a domain.
Example 3.2.7
f(z) = 1/z does not have a primitive on any domain containing the unit circle. To prove this, note that we have shown that the integral of f(z) from 1 to −1 over the upper unit semicircle is iπ, while over the lower unit semicircle it is −iπ. All together, this tells us that the integral of f(z) over the whole unit circle, travelled once counterclockwise, is iπ − (−iπ) = 2πi.

Since there is a closed curve γ in D with ∫_γ f(z) dz ≠ 0, f cannot have a primitive.
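Numerically, the full counterclockwise circle indeed gives a nonzero answer. A quick sketch of my own, using the same midpoint-sum idea as before:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Integral of 1/z over the unit circle: the closed-curve integral is 2*pi*i != 0,
# so 1/z can have no primitive on any domain containing the circle.
val = contour_integral(lambda z: 1 / z, lambda t: cmath.exp(1j * t), 0.0, 2 * math.pi)
assert abs(val - 2j * math.pi) < 1e-6
```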
3.3 A Roadmap
So far, we have seen how to handle a large class of integrals: integrating analytic functions on simply connected domains is easy. How do we handle the case where we want to integrate over a domain which is not simply connected? For example, how do we find:

∫_{|z|=2} 1/(z^4 + 1) dz   or   ∫_{|z|=1} e^{1/z} dz
Definition 3.3.1
An isolated singularity z0 ∈ C of a function f is a point such that:

• f is discontinuous at z0
• f is analytic on Br(z0) \ {z0} for some r > 0

So, an isolated singularity is a point where the function is not continuous, but is analytic around it.
Example 3.3.1
1/z has an isolated singularity at z = 0.

1/(z^4 − 1) has isolated singularities at z = ±1, ±i.

Log(z) has no isolated singularities: it is discontinuous at every point of (−∞, 0], so none of its discontinuities is isolated.
The strategy for integrating over curves whose inside contains an isolated singularity depends on the type of singularity. There are three types: removable discontinuities, poles, and essential singularities.
Example 3.3.2
(z^2 − 1)/(z − 1) has a removable discontinuity at z = 1. sin(z)/z has a removable discontinuity at z = 0.

So, a removable discontinuity is an isolated singularity which can be filled. It turns out that filling this discontinuity results in an analytic function, and so the Cauchy Integral Theorem still applies in this case!
Definition 3.3.3: Pole of order n
A pole of order n is an isolated singularity z0 such that there exists a function g(z) which is analytic on some Br(z0), with g(z0) ≠ 0, and:

f(z) = g(z)/(z − z0)^n
A pole of order 1 is called a simple pole.
These isolated singularities behave like vertical asymptotes. We will see later on that limz→z0 f (z) = ∞,
and the order tells you how quickly the function tends to ∞.
Example 3.3.3
1/(z^4 − 1) has simple poles at each of ±1, ±i. For example, at z = 1, we can write g(z) = 1/((z + 1)(z^2 + 1)) and 1/(z^4 − 1) = g(z)/(z − 1). The function g(z) is analytic on C \ {−1, i, −i}, and so in particular it is analytic on B1(1).
An essential singularity is an isolated singularity which is neither a removable discontinuity nor a pole. It turns out that this type of singularity behaves terribly. Not only does lim_{z→z0} f(z) not exist, it's also not ∞. Furthermore, it turns out that for any r > 0, {f(w) | w ∈ Br(z0) \ {z0}} is either C or C minus a single point. This result, which we won't prove, is called the Great Picard Theorem. So, f(z) takes on every value (except maybe one value) infinitely often near its essential singularities. This is incredibly bad behavior, akin to a function like sin(1/x) on R.
For each of these types of isolated singularity, we’re going to have a particular method:
• To integrate around removable discontinuities, we will soon see that the Cauchy Integral Theorem is
sufficient.
• To integrate around a pole, we will need to talk about the Cauchy Integral Formula.
• To integrate around essential singularities, we are going to need to talk about power series and Laurent
series. This will lead us to a nice result, called the Residue theorem, which will encapsulate each of
these methods.
3.4 Integrating Around Poles

Consider the integral ∫_{|z|=1} (e^z/z) dz. Since this function isn't analytic inside the curve, the Cauchy Integral Theorem does not apply. We have no reasonable guess as to a primitive (and indeed, this function does not have a primitive on any domain containing the unit circle!). What if we try using the definition of the integral?
Example 3.4.1
∫_γ (e^z/z) dz = ∫_0^{2π} (e^{e^{it}}/e^{it}) ie^{it} dt = i ∫_0^{2π} e^{cos(t)}(cos(sin(t)) + i sin(sin(t))) dt
How exactly are we supposed to evaluate this mess of an integral? It doesn’t appear likely that any
of the techniques from first year calculus will be of much use here.
So, none of our techniques so far work. We need something new. It turns out there is a general theorem
that will let us handle any function with a simple pole.
Theorem: Cauchy's Integral Formula (version 1)

Suppose f(z) is analytic on a domain D such that {z ∈ C : |z − z0| ≤ r} ⊂ D. Let γ be the circle of radius r centered at z0, travelled once counterclockwise. Then:

∫_γ f(z)/(z − z0) dz = 2πi f(z0)
Before we prove this, let’s see how it applies to our previous example.
Example 3.4.2
Let f(z) = e^z. This is entire, so is analytic on C. C is a domain containing {z ∈ C : |z| ≤ 1}. Let |z| = 1 refer to the unit circle travelled once counterclockwise. Then CIF applies to give:

∫_{|z|=1} (e^z/z) dz = ∫_{|z|=1} f(z)/(z − 0) dz = 2πi f(0) = 2πi e^0 = 2πi
Notice: the theorem is very easy to apply. No complicated arithmetic involved. However, we do need to
check the conditions of the theorem before we apply it. This takes some care.
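The CIF prediction for the messy integral from Example 3.4.1 is easy to confirm numerically. A quick sketch of my own, not part of the notes:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Integral of e^z / z over |z| = 1 should equal 2*pi*i * f(0) = 2*pi*i by CIF.
val = contour_integral(lambda z: cmath.exp(z) / z, lambda t: cmath.exp(1j * t), 0.0, 2 * math.pi)
assert abs(val - 2j * math.pi) < 1e-6
```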
Let’s prove this theorem. The proof contains at least one very useful idea.
Proof. Let γs be the circle of radius s centered at z0, where s ∈ (0, r]. We may assume, without loss of generality, that each γs starts at the angle θ = 0 and ends at θ = 2π. We define a function F(s) as follows:

F(s) = ∫_{γs} f(z)/(z − z0) dz

So the integral we are interested in is F(r). We shall prove two facts about F(s):

• F(s) is constant on (0, r].
• lim_{s→0+} F(s) = 2πi f(z0).

For the first fact, fix s ∈ (0, r). Split γr and γs into their upper and lower halves, and connect them with the segment L1 from z0 + s to z0 + r and the segment L2 from z0 − r to z0 − s:

[figure: the circles γr and γs split into γr,upper, γs,upper, γr,lower, γs,lower, joined by the segments L1 and L2]

The curve γr,upper + L2 − γs,upper + L1 is a piecewise smooth, positively oriented, simple, closed curve, and f(z)/(z − z0) is analytic on a domain containing it and its inside. So CIT gives:

∫_{γr,upper + L2 − γs,upper + L1} f(z)/(z − z0) dz = 0

And similarly:

∫_{γr,lower − L1 − γs,lower − L2} f(z)/(z − z0) dz = 0

Adding these and expanding, the segment integrals cancel in pairs, leaving:

∫_{γr,upper} f(z)/(z − z0) dz + ∫_{γr,lower} f(z)/(z − z0) dz − ∫_{γs,upper} f(z)/(z − z0) dz − ∫_{γs,lower} f(z)/(z − z0) dz = 0

As such, F(s) = F(r) for all s ∈ (0, r). So F is constant on (0, r].

So, we see that lim_{s→0+} F(s) = lim_{s→0+} F(r) = F(r).
All that remains is for us to actually compute this limit. To do that, we go to the definition of the integral.
∫_{γs} f(z)/(z − z0) dz = ∫_0^{2π} (f(z0 + se^{it})/(se^{it})) ise^{it} dt = ∫_0^{2π} if(z0 + se^{it}) dt

We claim that lim_{s→0+} ∫_0^{2π} if(z0 + se^{it}) dt = 2πi f(z0). Consider the difference:

lim_{s→0+} |∫_0^{2π} if(z0 + se^{it}) dt − 2πi f(z0)| = lim_{s→0+} |∫_0^{2π} if(z0 + se^{it}) dt − ∫_0^{2π} if(z0) dt|
    ≤ lim_{s→0+} ∫_0^{2π} |f(z0 + se^{it}) − f(z0)| dt
    ≤ lim_{s→0+} 2π max{|f(w) − f(z0)| : |w − z0| ≤ s}

Now, since f is continuous at z0, we see that lim_{s→0+} max{|f(w) − f(z0)| : |w − z0| ≤ s} = 0. So the squeeze theorem gives us that:

lim_{s→0+} |∫_0^{2π} if(z0 + se^{it}) dt − 2πi f(z0)| = 0

As such, lim_{s→0+} ∫_0^{2π} if(z0 + se^{it}) dt = 2πi f(z0), as desired.
Putting it all together, we get that:

∫_{γr} f(z)/(z − z0) dz = F(r) = lim_{s→0+} F(s) = 2πi f(z0)
While technical, this proof has a really important idea. The technique for showing that the integrals over
the two circles are equal is fairly useful. For example:
Example 3.4.3
Let γ be the circle of radius 3 centered at 0, travelled once counterclockwise. Find ∫_γ 1/(z − 1) dz.

As written, CIF doesn't apply. This first version of CIF only applies to circles centered at z0, which in this case is 1.

However, our technique from the proof of CIF gives that:

∫_γ 1/(z − 1) dz = ∫_{|z−1|=1} 1/(z − 1) dz

And now we're in a situation where CIF applies. It gives ∫_γ 1/(z − 1) dz = 2πi.
We continue with another example of how to use the Cauchy Integral Formula:
Example 3.4.4
Let γ be the circle of radius 2 centered at 0, travelled once counterclockwise. Find ∫_γ 1/(z^2 + 1) dz.

The function 1/(z^2 + 1) = 1/((z + i)(z − i)) has two simple poles inside the curve. So CIF doesn't immediately apply. Instead, we need to turn this into a situation where it does. Consider three curves γ1, γ2, γ3, built by cutting the big circle and two small circles around i and −i into arcs and joining them with segments:

[figure: the circle |z| = 2 with small circles around i and −i, cut into the three closed curves γ1, γ2, γ3]

Since none of these three curves enclose ±i, CIT applies to give that:

∫_{γj} 1/(z^2 + 1) dz = 0

Adding these up:

0 = Σ_{j=1}^3 ∫_{γj} 1/(z^2 + 1) dz = ∫_C 1/(z^2 + 1) dz − ∫_{C1} 1/(z^2 + 1) dz − ∫_{C2} 1/(z^2 + 1) dz

where C is the large circle travelled once counterclockwise, C1 is the smaller circle around i travelled once counterclockwise, and C2 is the smaller circle around −i travelled once counterclockwise. This second equality comes from noticing that the green lines in γ1 and γ2 are travelled in opposite directions, so their integrals cancel out. The same is true for the black lines in γ1 and γ3.

So we are left with the red arcs, which together make C, and the blue arcs, which together make −C1 and −C2 (notice that the blue arcs are travelling clockwise!)

All together, this shows that ∫_C 1/(z^2 + 1) dz = ∫_{C1} 1/(z^2 + 1) dz + ∫_{C2} 1/(z^2 + 1) dz. We now calculate these two integrals using CIF. Writing 1/(z^2 + 1) = (1/(z + i))/(z − i), CIF gives:

∫_{C1} 1/(z^2 + 1) dz = 2πi · 1/(i + i) = 2πi/(2i) = π

And similarly, ∫_{C2} 1/(z^2 + 1) dz = −π. So all together, ∫_C 1/(z^2 + 1) dz = π + (−π) = 0.
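All three values in this example can be confirmed numerically. My own sketch, using the same midpoint-sum idea:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

f = lambda z: 1 / (z * z + 1)
big = contour_integral(f, lambda t: 2 * cmath.exp(1j * t), 0.0, 2 * math.pi)         # C
c1 = contour_integral(f, lambda t: 1j + 0.5 * cmath.exp(1j * t), 0.0, 2 * math.pi)   # around i
c2 = contour_integral(f, lambda t: -1j + 0.5 * cmath.exp(1j * t), 0.0, 2 * math.pi)  # around -i
assert abs(c1 - math.pi) < 1e-6   # pi
assert abs(c2 + math.pi) < 1e-6   # -pi
assert abs(big) < 1e-6            # their sum: 0
```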
72
This example suggests a general technique:

Theorem: The Deformation Theorem

Suppose f(z) is analytic on a domain D, except at the points z1, . . . , zn. Let γ be a piecewise smooth, positively oriented, simple, closed curve whose inside is contained in D ∪ {z1, . . . , zn}, with each zj inside γ. For each j, let Cj be the circle of radius rj centered at zj, travelled once counterclockwise, chosen small enough that each Cj and its inside (except zj) lie inside γ and the Cj do not overlap. Then:

∫_γ f(z) dz = Σ_{j=1}^n ∫_{Cj} f(z) dz

Proof. We proceed by induction. Suppose n = 1. Fix θ ∈ (0, π). Consider the rays R+ = {z1 + re^{iθ} : r ≥ r1} and R− = {z1 + re^{−iθ} : r ≥ r1}. We have something like the following picture:

[figure: the curve γ with the circle C1 around z1 inside it, and the rays R+ and R−]

Since z1 is inside γ, there exist r+ and r−, the closest intersections of R+ and R− with γ to the point z1. Let L+ be the line segment from z1 + r1e^{iθ} to z1 + r+e^{iθ} and L− the line segment from z1 + r1e^{−iθ} to z1 + r−e^{−iθ}.

These intersections divide γ into two segments: γ1 and γ2. Specifically, γ1 starts at the intersection of γ with R+ and travels γ in the positive orientation until it hits the intersection with R−. And γ2 starts where γ intersects R− and travels along γ until it hits R+.

Similarly, the circle C1 is divided into two arcs: the arc A1 from the angle −θ to θ, and the arc A2 from the angle θ to 2π − θ.

By our construction and the assumption that C1 is inside γ, z1 is not inside either of γ1 − L− − A2 + L+ or γ2 − L+ − A1 + L−. (Follow the picture to see why these are the curves we want.) These are each simple, closed, positively oriented curves whose insides are in D \ {z1}. As such, CIT gives that:

∫_{γ1 − L− − A2 + L+} f(z) dz = ∫_{γ2 − L+ − A1 + L−} f(z) dz = 0

Adding these and expanding:

∫_{γ1} f(z) dz − ∫_{L−} f(z) dz − ∫_{A2} f(z) dz + ∫_{L+} f(z) dz + ∫_{γ2} f(z) dz − ∫_{L+} f(z) dz − ∫_{A1} f(z) dz + ∫_{L−} f(z) dz = 0

Simplifying gives that ∫_γ f(z) dz − ∫_{C1} f(z) dz = 0, as desired.
Proceeding with the induction, suppose the claim is true for up to n − 1 points. Decompose γ as in the case n = 1, using rays through z1. Now, however, γ1 − L− − A2 + L+ will contain (without loss of generality) the points z2, . . . , zk inside it, for some k ≤ n. The remaining points zk+1, . . . , zn (if there are any) will necessarily be inside γ2 − L+ − A1 + L−. Since each of these curves contains at most n − 1 of the points, the induction hypothesis applies to give:

∫_{γ1 − L− − A2 + L+} f(z) dz = Σ_{j=2}^k ∫_{Cj} f(z) dz

∫_{γ2 − L+ − A1 + L−} f(z) dz = Σ_{j=k+1}^n ∫_{Cj} f(z) dz

Adding these and simplifying as before gives:

∫_γ f(z) dz − ∫_{C1} f(z) dz = Σ_{j=2}^n ∫_{Cj} f(z) dz

which is what we wanted to show.
As an easy consequence of this, we can state a more general version of the Cauchy Integral Formula:
Corollary 3.4.1: Cauchy’s Integral Formula (vII)
Suppose f (z) is analytic on a domain D. Let γ be a piecewise smooth, positively oriented, simple closed
curve in D whose inside is in D. Suppose z0 is inside γ. Then:
∫_γ f(z)/(z − z0) dz = 2πi f(z0)
This follows immediately from deforming γ to a circle and using our original CIF.
3.4.3 Poles of Higher Order - The Generalized Cauchy Integral Formula
Now that we know how to integrate curves surrounding a simple pole, or multiple simple poles, how do we handle integrating around higher order poles? For example, how do we find ∫_{|z|=1} (e^z/z^2) dz? It turns out that we can, with the following generalization:

Theorem: The Generalized Cauchy Integral Formula

Suppose f(z) is analytic on a domain D. Let γ be a piecewise smooth, positively oriented, simple closed curve in D whose inside is in D. Suppose z0 is inside γ. Suppose n > 0. Then:

∫_γ f(z)/(z − z0)^n dz = (2πi/(n − 1)!) f^{(n−1)}(z0)
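This immediately answers the motivating question, and we can sanity-check it numerically. My own sketch:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Integral of e^z / z^2 over |z| = 1: here n = 2 and f(z) = e^z, so the
# generalized CIF predicts (2*pi*i/1!) * f'(0) = 2*pi*i.
val = contour_integral(lambda z: cmath.exp(z) / (z * z),
                       lambda t: cmath.exp(1j * t), 0.0, 2 * math.pi)
assert abs(val - 2j * math.pi) < 1e-6
```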
The key to proving this is a result from multivariable calculus called the Leibniz Integral Rule.
Lemma 3.4.1. Suppose f(w, z) is continuous in both z and w on some region R such that (w0, γ(t)) ∈ R for all t whenever there exists (w0, z) ∈ R. Further, suppose that f_w(w, z) is continuous in both z and w. Then:

(d/dw) ∫_γ f(w, z) dz = ∫_γ (∂/∂w) f(w, z) dz
Proof. We proceed by induction. For n = 1, the claim holds by the regular Cauchy Integral Formula.

Suppose the claim holds for some n. Let g(z0, z) = f(z)/(z − z0)^n. Notice that since z0 is inside γ and z is on γ, g(z0, z) is continuous in z0 and z (we can actually take z close to γ, since z0 has some positive distance between it and γ). Further:

(∂/∂z0) g(z0, z) = nf(z)/(z − z0)^{n+1}

is also continuous on the same region, for the same reason. As such, Leibniz's rule gives:

∫_γ f(z)/(z − z0)^{n+1} dz = ∫_γ (1/n)(∂/∂z0) g(z0, z) dz
                           = (1/n)(d/dz0) ∫_γ g(z0, z) dz
                           = (1/n)(d/dz0) ∫_γ f(z)/(z − z0)^n dz

However, our induction hypothesis gives that ∫_γ f(z)/(z − z0)^n dz = (2πi/(n − 1)!) f^{(n−1)}(z0). So:

∫_γ f(z)/(z − z0)^{n+1} dz = (1/n)(d/dz0) (2πi/(n − 1)!) f^{(n−1)}(z0) = (2πi/n!) f^{(n)}(z0)
Note that at no point did we assume that the derivatives f (n) (z0 ) exist. In fact, the Leibniz rule gives us
that they exist, since they’re equal to these integrals which do exist! As a corollary:
Corollary 3.4.2: Holomorphic Functions are Smooth

If f(z) is analytic on an open set U, then f(z) is infinitely differentiable on U.

Let's see the generalized formula in action: consider ∫_{|z−i|=1} sin(z)/(z^2 + 1)^2 dz. Writing sin(z)/(z^2 + 1)^2 = g(z)/(z − i)^2 with g(z) = sin(z)/(z + i)^2, the generalized CIF with n = 2 gives that the integral is 2πi g′(i). Using cos(i) = (e^{−1} + e)/2 and sin(i) = (e^{−1} − e)/(2i), we calculate:

g′(i) = ((2i)^2 (e^{−1} + e)/2 − 2(2i)(e^{−1} − e)/(2i)) / (2i)^4 = (−4(e^{−1} + e) − 4(e^{−1} − e))/32 = −1/(4e)

And so ∫_{|z−i|=1} sin(z)/(z^2 + 1)^2 dz = 2πi · (−1/(4e)) = −πi/(2e).
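The value −πi/(2e) is easy to confirm numerically. My own sketch, not part of the notes:

```python
import cmath
import math

def contour_integral(f, gamma, a, b, n=20000):
    # Midpoint Riemann-Stieltjes sum approximating the line integral of f over gamma.
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * (gamma(a + (k + 1) * h) - gamma(a + k * h))
               for k in range(n))

# Integral of sin(z)/(z^2 + 1)^2 over the circle |z - i| = 1.
val = contour_integral(lambda z: cmath.sin(z) / (z * z + 1) ** 2,
                       lambda t: 1j + cmath.exp(1j * t), 0.0, 2 * math.pi)
exact = -1j * math.pi / (2 * math.e)
assert abs(val - exact) < 1e-6
```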
For the circle centered at −i, we follow the same procedure with g(z) = sin(z)/(z − i)^2. We find that:
∫_{|z+i|=1} sin(z)/(z^2 + 1)^2 dz = 2πi g′(−i)
And we calculate g′(−i) in the same way.

3.5 Liouville's Theorem

Theorem: Liouville's Theorem

If f(z) is entire and bounded, then f(z) is constant.

Proof idea: To show f(z) is constant, we can show that f′(z) = 0 for all z ∈ C. To connect f′(z) to the fact that f(z) is bounded, we go through Cauchy's integral formula, which connects f′(z) to f(z).
Proof: Suppose f(z) is bounded and entire, but is non-constant. Let M ∈ R be such that |f(z)| ≤ M for all z ∈ C.

Let a ∈ C, and let γR be the circle of radius R centered at a, travelled once counterclockwise. Since f(z) is entire, Cauchy's integral formula tells us that:

2πif′(a) = ∫_{γR} f(z)/(z − a)^2 dz
We would like to show this integral is 0. However, we don’t have any way to calculate it. Instead, let’s
estimate:
77
$$|2\pi i f'(a)| = \left|\int_{\gamma_R} \frac{f(z)}{(z-a)^2}\,dz\right| \le \int_{\gamma_R} \frac{|f(z)|}{|z-a|^2}\,|dz|$$

Now, since z is on γR, which is the circle of radius R centered at a, we see that |z − a| = R. So:

$$|2\pi i f'(a)| \le \int_{\gamma_R} \frac{|f(z)|}{R^2}\,|dz| \le \frac{2\pi R M}{R^2} = \frac{2\pi M}{R}$$

However, notice this is true for all R. So by the squeeze theorem:

$$0 \le |2\pi i f'(a)| \le \lim_{R\to\infty} \frac{2\pi M}{R} = 0$$
As such, 2πif 0 (a) = 0. So f 0 (a) = 0. Since a was arbitrary, we have shown that f 0 (z) = 0 for all z, so that
f (z) is a constant function.
This is a fairly strong statement. It is also a favourite for coming up with interesting proof questions on
tests. Let’s see an example:
Example 3.5.1
Suppose f(z) is entire and f(z) ≠ kz for any k ∈ C. (Meaning the function f(z) is not equal to the function kz.) Then there exists w ∈ C with |f(w)| ≤ |w|.

Suppose instead that |z| < |f(z)| for all z ∈ C. Consider the function g(z) = z/f(z). Since 0 ≤ |z| < |f(z)| for all z ∈ C, we have that f(z) ≠ 0 for all z. This means that g(z) is entire.
However, we also know that |g(z)| = |z|/|f(z)| < 1 for all z ∈ C. So by Liouville, g(z) is constant. In particular, z/f(z) = c for some c ∈ C. Taking z = 1 shows c = 1/f(1) ≠ 0, and so f(z) = (1/c)z. That is, f(z) = kz for k = 1/c, a contradiction. Therefore, |f(z)| ≤ |z| for some z ∈ C.
Theorem: The Fundamental Theorem of Algebra
Every non-constant complex polynomial p(z) has a root in C.

Proof.
Proof Idea: To show p(z) has roots, we look at 1/p(z). We use Liouville to show that if p(z) has no roots, then this new function is actually constant. That contradicts that p(z) is non-constant.
Proof: Let p(z) be a non-constant complex polynomial. We proceed by contradiction. Assume p(z) ≠ 0 for all z.
Consider f(z) = 1/p(z). Since p(z) is entire and non-zero, f(z) is also entire. We claim that f(z) is bounded, so that we can use Liouville's theorem.
We will handle this in two pieces: show f(z) is bounded on some large closed disc, and show it's bounded outside that disc as well.
However, we need to figure out what this disc actually is. To do that, we're going to look at lim_{z→∞} f(z).
Limit: Let p(z) = a_n z^n + · · · + a_1 z + a_0. Then using the triangle inequality, we find that |p(z)| ≥ |a_n||z|^n − (|a_{n−1}||z|^{n−1} + · · · + |a_0|). Suppose |z| = R. Then:

$$|p(z)| \ge |a_n|R^n - \left(|a_{n-1}|R^{n-1} + \cdots + |a_0|\right) \to \infty \quad \text{as } R \to \infty$$

since the |a_n|R^n term dominates. As such, |f(z)| = 1/|p(z)| → 0, i.e. lim_{z→∞} f(z) = 0.
Circle: We can use this limit fact to find a large circle such that f(z) is bounded outside that circle. Remember, lim_{z→∞} f(z) = L means that: for each ε > 0, there exists a radius R such that |z| > R implies |f(z) − L| < ε.
Choose ε = 1. Since lim_{z→∞} f(z) = 0, there exists some radius R such that when |z| > R, we have |f(z) − 0| < 1. I.e., |f(z)| < 1.
So f(z) is bounded outside the disc |z| ≤ R, by definition.
We know that {z : |z| ≤ R} is closed and bounded (by an example in the appendix), and that f(z) is continuous (since it is entire). So by the EVT, f(z) is also bounded on |z| ≤ R. Say |f(z)| ≤ M there.
All Together: So, if z ∈ C, then |z| ≤ R or |z| > R. If |z| ≤ R, then |f(z)| ≤ M. And if |z| > R, then |f(z)| < 1. So |f(z)| ≤ max{1, M}.
As such, f(z) is a bounded function. It is also entire. And so is constant.
But then p(z) = 1/f(z) is also constant, contradicting that p(z) is non-constant. Therefore, p(z) has roots.
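The coefficient bound used in the Limit step can be probed numerically. The polynomial below is an arbitrary sample choice, not one from the text:

```python
import random

coeffs = [5, -2, 0, 1]  # p(z) = z^3 - 2z + 5; coeffs[k] multiplies z^k

def p(z):
    """Evaluate the sample polynomial at z."""
    return sum(c * z**k for k, c in enumerate(coeffs))

def lower_bound(r):
    # |p(z)| >= |a_n| r^n - (|a_{n-1}| r^{n-1} + ... + |a_0|) when |z| = r
    return abs(coeffs[-1]) * r**3 - sum(abs(coeffs[k]) * r**k for k in range(3))

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    assert abs(p(z)) >= lower_bound(abs(z)) - 1e-9

# The bound grows without limit, so 1/|p(z)| eventually drops below any epsilon:
print(lower_bound(10.0), 1 / abs(p(10 + 0j)))
```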
4 | Power and Laurent Series
Last chapter, we laid out an integration road map. Unfortunately, we can’t handle removable or essential
singularities without further investigating the properties of complex functions. The key tool to understanding
these kinds of singularities is the notion of a Laurent series.
Well, if we’re going to be looking at power series, we had best first talk about sequences and series.
A sequence in C is an infinite list of complex numbers a_k, a_{k+1}, . . . . We will write this as (a_n)_{n=k}^∞.
We say that the sequence (a_n)_{n=k}^∞ converges to L, or that lim_{n→∞} a_n = L, or even that a_n → L, if: for each ε > 0, there exists N such that n ≥ N implies |a_n − L| < ε.
We aren't going to spend a lot of time discussing sequences. We only care about them in the context of series.
Definition 4.1.2: Series
Let (a_n)_{n=k}^∞ be a sequence. Define a new sequence (S_n)_{n=k}^∞ by:

$$S_n = \sum_{j=k}^{n} a_j$$

This is called the nth partial sum of the sequence (a_n)_{n=k}^∞. The infinite series Σ_{n=k}^∞ a_n is defined to be Σ_{n=k}^∞ a_n = lim_{n→∞} S_n, if the sequence of partial sums converges. In that case, we say the series converges as well.
Example 4.1.1
Consider the geometric series Σ_{n=0}^∞ z^n. Its nth partial sum is:

$$S_n = 1 + z + z^2 + \cdots + z^n$$

This is a finite geometric series, which is equal to (1 − z^{n+1})/(1 − z). To see this, notice that:

$$(1-z)(1+z+\cdots+z^n) = (1-z) + z(1-z) + z^2(1-z) + \cdots + z^n(1-z) = 1 - z + z - z^2 + z^2 - \cdots - z^{n+1} = 1 - z^{n+1}$$

So for z ≠ 1, S_n = (1 − z^{n+1})/(1 − z). We now take the limit of this sequence: if |z| < 1, then z^{n+1} → 0 and Σ_{n=0}^∞ z^n = 1/(1 − z), while for |z| ≥ 1 the series diverges.
This example is going to be very useful. Make sure you know this formula.
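A quick numerical check of the partial-sum formula and its limit; the sample point z is an arbitrary choice, not from the text:

```python
z = 0.3 + 0.4j       # |z| = 0.5 < 1, an arbitrary sample point
n = 50
partial = sum(z**k for k in range(n + 1))   # S_n = 1 + z + ... + z^n
closed_form = (1 - z**(n + 1)) / (1 - z)    # the finite geometric formula
limit = 1 / (1 - z)                          # value of the infinite series
print(abs(partial - closed_form), abs(partial - limit))
```

With |z| = 1/2, the tail term z^(n+1) is already below machine precision at n = 50, so both differences are essentially zero.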
For a lot of series, actually calculating the limit can be quite hard. For example, how would you show that Σ_{n=0}^∞ 2^n/n! = e², or that Σ_{k=0}^∞ (−1)^k/(2k+1)! = sin(1)?
For the moment, let's instead focus on determining if a series converges at all. We begin by recalling a couple of convergence results that you should have already seen for R.
Theorem 4.1.1: The Divergence Test
If lim_{n→∞} a_n ≠ 0, then Σ_{n=k}^∞ a_n diverges.
The converse is false.

Proof. We prove the contrapositive instead. Suppose Σ_{n=k}^∞ a_n converges.
Notice that a_n = S_n − S_{n−1}. As such, lim_{n→∞} a_n = lim_{n→∞} S_n − lim_{n→∞} S_{n−1}. Notice that the sequences (S_n)_{n=k}^∞ and (S_{n−1})_{n=k+1}^∞ have the same limit. Indeed, these are actually the same sequence, just indexed differently! So:

$$\lim_{n\to\infty} a_n = \sum_{n=k}^{\infty} a_n - \sum_{n=k}^{\infty} a_n = 0$$
For the converse, we claim that Σ_{n=1}^∞ 1/n diverges. The idea is to write the sum as:

$$1 + \frac{1}{2} + \underbrace{\frac{1}{3} + \frac{1}{4}}_{> \frac{1}{4} + \frac{1}{4} = \frac{1}{2}} + \underbrace{\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}}_{> \frac{4}{8} = \frac{1}{2}} + \ldots$$

More generally, notice that:

$$\sum_{n=2^k+1}^{2^{k+1}} \frac{1}{n} > \sum_{n=2^k+1}^{2^{k+1}} \frac{1}{2^{k+1}} = \frac{1}{2}$$

As such:

$$S_{2^{m+1}} = \sum_{n=1}^{2^{m+1}} \frac{1}{n} = 1 + \frac{1}{2} + \sum_{k=1}^{m}\ \sum_{n=2^k+1}^{2^{k+1}} \frac{1}{n} \ge 1 + \frac{1}{2} + \frac{m}{2}$$

Therefore, lim_{n→∞} S_n = lim_{m→∞} (1 + (m+1)/2) = ∞. So the series diverges.
This is called the harmonic series, and is the classic example of a series whose terms go to 0 but which diverges anyway.
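The blocking bound S_{2^(m+1)} ≥ 1 + (m+1)/2 from the proof is easy to verify numerically for small m (a sketch; the range of m is an arbitrary choice):

```python
def partial_sum(N):
    """S_N = 1 + 1/2 + ... + 1/N, the Nth harmonic partial sum."""
    return sum(1.0 / n for n in range(1, N + 1))

for m in range(1, 11):
    S = partial_sum(2 ** (m + 1))
    assert S >= 1 + (m + 1) / 2   # the bound derived in the proof
    print(m, round(S, 4))
```

The printed values grow without bound, but only logarithmically, which is why the divergence is so slow.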
This is not the only example of a series whose terms go to 0 but which diverges. There are plenty of interesting examples of this behavior. For example, Σ_{p prime} 1/p also diverges, although much more slowly. While outside the scope of this course, this is traditionally shown using the Prime Number Theorem, which is equivalent to saying that the nth prime p_n is approximately n ln(n) for n very large.
Another way to tell if a series is convergent is if it is absolutely convergent.
Definition 4.1.3: Absolutely Convergent Series
A series Σ_{n=k}^∞ a_n is called absolutely convergent if Σ_{n=k}^∞ |a_n| converges.

Theorem 4.1.2
Absolutely convergent series converge.

Proof. See A.2.1 in the appendices for a proof. It is a technical proof, using the notion of Cauchy sequences. I have included it for completeness's sake.
With this new notion, we can prove a very powerful convergence test.
Theorem 4.1.3: D'Alembert's Ratio Test

Suppose lim_{n→∞} |a_{n+1}/a_n| = L.
• If L < 1, then Σ_{n=k}^∞ a_n converges absolutely. (And therefore converges.)
• If L > 1, then Σ_{n=k}^∞ a_n diverges.

Proof. Suppose lim_{n→∞} |a_{n+1}/a_n| = L.
Suppose L < 1. Choose ε > 0 so that L < L + ε < 1. Since lim_{n→∞} |a_{n+1}/a_n| = L, there exists N ∈ N with N > k such that if n ≥ N, then |a_{n+1}/a_n| < L + ε.
As such, |a_{N+j}| < (L + ε)|a_{N+j−1}| for all j ≥ 1. We can use this iteratively to conclude that |a_{N+j}| < (L + ε)^j |a_N|.
We see then that Σ_{n=k}^∞ |a_n| < Σ_{n=k}^{N−1} |a_n| + Σ_{j=0}^∞ (L + ε)^j |a_N|. Since L + ε < 1, this geometric series converges.
Now, the Comparison Theorem (from first year calculus) tells us that if 0 ≤ a_n ≤ b_n and Σ_{n=k}^∞ b_n converges, then so does Σ_{n=k}^∞ a_n. So, by the Comparison Theorem, Σ_{n=k}^∞ |a_n| converges.
Therefore, Σ_{n=k}^∞ a_n converges absolutely.
Now, supposing that L > 1, we choose ε so that 1 < L − ε < L. Since lim_{n→∞} |a_{n+1}/a_n| = L, there exists N ∈ N with N > k such that if n ≥ N, then |a_{n+1}/a_n| > L − ε.
In this case, we know that for j ≥ 0, |a_{N+j}| > (L − ε)^j |a_N|. However, as j → ∞, (L − ε)^j → ∞. As such, lim_{n→∞} |a_n| = ∞. By the Divergence Test, Σ_{n=k}^∞ a_n diverges.
This is going to be an extremely useful test for us. In fact, when dealing with power series (as we will be
very shortly), this test is the most useful.
Example 4.1.2
The series Σ_{n=0}^∞ z^n/n! converges for all z ∈ C.
To see this, we apply the ratio test. We need to be careful here. Remember that in the ratio test, a_n represents the nth term of the series. It does not represent the coefficient of z^n.
So, in this case, a_n = z^n/n!. And therefore:

$$\lim_{n\to\infty}\left|\frac{z^{n+1}/(n+1)!}{z^n/n!}\right| = \lim_{n\to\infty}\frac{|z|}{n+1} = 0$$

Since L = 0 < 1 for all z ∈ C, we see that the series converges absolutely for each z ∈ C.
Example 4.1.3
Find an R ≥ 0 such that if |z| < R, then Σ_{n=0}^∞ z^n/2^n converges, and if |z| > R, then the sum diverges.
This turns out to follow right away from the ratio test. We apply:

$$\lim_{n\to\infty}\left|\frac{z^{n+1}/2^{n+1}}{z^n/2^n}\right| = \lim_{n\to\infty}\frac{|z|}{2} = \frac{|z|}{2}$$

Now, if |z|/2 < 1, the ratio test tells us that the series converges. And if |z|/2 > 1, the series diverges. In other words, R = 2 works.
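We can probe the claim R = 2 numerically: partial sums settle down for |z| < 2 while the terms themselves blow up for |z| > 2. The sample radii below are arbitrary choices:

```python
# Inside the radius: at z = 1.5 the series should converge to 1/(1 - z/2) = 4.
z_in = 1.5
partial = sum(z_in**n / 2**n for n in range(500))
print(abs(partial - 4.0))            # tiny

# Outside the radius: at |z| = 2.5 the terms |z|^n / 2^n grow without bound,
# so the series diverges by the divergence test.
term = 2.5**200 / 2**200
print(term)                           # enormous
```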
A power series centered at z0 is a series of the form Σ_{n=0}^∞ a_n(z − z0)^n, viewed as a function of z. The domain of this function is every point where this series converges, and contains at least z0.
For some function g(z), we say that g(z) is represented by the power series Σ_{n=0}^∞ a_n(z − z0)^n on a domain D if g(z) = Σ_{n=0}^∞ a_n(z − z0)^n on D.
One of the most crucial facts moving forward is that holomorphic functions can be represented by power
series. This is a drastic difference between the complex and real theory. There are infinitely differentiable
functions over R which cannot be meaningfully represented by power series. That is not the case over C.
Theorem 4.2.1
Suppose f(z) is holomorphic on a domain D, and z0 ∈ D. Then there exists some R > 0 such that if |z − z0| < R, then:

$$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!}(z-z_0)^n$$

Proof. Let R > 0 be such that BR(z0) ⊆ D, and let w ∈ BR(z0). Choose r ∈ R such that |w − z0| < r < R. This can be done since w ∈ BR(z0) =⇒ |w − z0| < R. Let γr be the circle of radius r centered at z0, travelled once counterclockwise. By the Cauchy Integral Formula:

$$f(w) = \frac{1}{2\pi i}\int_{\gamma_r} \frac{f(z)}{z-w}\,dz$$
For the moment, let us consider 1/(z − w). We can rewrite this as:

$$\frac{1}{z-w} = \frac{1}{(z-z_0)-(w-z_0)} = \frac{1}{z-z_0}\cdot\frac{1}{1-\frac{w-z_0}{z-z_0}}$$

Now, since |w − z0| < r and |z − z0| = r (since z is on γr), we know that |(w − z0)/(z − z0)| < 1. So, our geometric series formula gives that:

$$\frac{1}{z-w} = \frac{1}{z-z_0}\sum_{n=0}^{\infty}\left(\frac{w-z_0}{z-z_0}\right)^n = \sum_{n=0}^{\infty}\frac{(w-z_0)^n}{(z-z_0)^{n+1}}$$

And therefore, f(w) = (1/2πi) ∫_{γr} Σ_{n=0}^∞ (w − z0)^n f(z)/(z − z0)^{n+1} dz. By uniform convergence of the series (see theorem A.2.3 in the appendices for the very technical proof), we have that:

$$\begin{aligned}
f(w) &= \frac{1}{2\pi i}\int_{\gamma_r}\sum_{n=0}^{\infty}(w-z_0)^n\frac{f(z)}{(z-z_0)^{n+1}}\,dz \\
&= \sum_{n=0}^{\infty}\frac{1}{2\pi i}\int_{\gamma_r}(w-z_0)^n\frac{f(z)}{(z-z_0)^{n+1}}\,dz \\
&= \sum_{n=0}^{\infty}\frac{(w-z_0)^n}{2\pi i}\int_{\gamma_r}\frac{f(z)}{(z-z_0)^{n+1}}\,dz \\
&\overset{\text{CIF}}{=} \sum_{n=0}^{\infty}\frac{(w-z_0)^n}{2\pi i}\cdot\frac{2\pi i\,f^{(n)}(z_0)}{n!} \\
&= \sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}(w-z_0)^n
\end{aligned}$$

as desired. Since w was chosen arbitrarily in BR(z0), the power series expansion holds on BR(z0).
This theorem is sometimes stated very simply as “holomorphic functions are analytic”. We have used holomorphic and analytic interchangeably throughout this text. However, f analytic really means that the function can be described as a convergent power series. We have just shown that holomorphic functions actually are analytic: they can be described by power series.
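The integral formula behind this proof can be checked numerically: recovering the nth Taylor coefficient of f(z) = e^z at 0 from the contour integral should give 1/n!. This is a sketch with arbitrary discretization parameters:

```python
import cmath
import math

def taylor_coefficient(f, z0, n, radius=1.0, steps=4096):
    """a_n = (1/(2*pi*i)) * integral of f(z)/(z - z0)^(n+1) over |z - z0| = radius."""
    h = 2 * math.pi / steps
    total = 0
    for k in range(steps):
        w = cmath.exp(1j * h * k)
        z = z0 + radius * w
        total += f(z) / (z - z0) ** (n + 1) * 1j * radius * w * h
    return total / (2j * math.pi)

errors = [abs(taylor_coefficient(cmath.exp, 0, n) - 1 / math.factorial(n))
          for n in range(6)]
print(max(errors))  # all six coefficients match 1/n! to high accuracy
```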
Let's work out an example by calculating a power series.
Example 4.2.2
Consider f(z) = e^z. Find the power series expansion for e^z centered at z0. For which z ∈ C is it valid?
Well, f^(n)(z) = e^z, and so the power series expansion of e^z at z0 is:

$$\sum_{n=0}^{\infty} \frac{e^{z_0}}{n!}(z-z_0)^n$$

Now, the theorem above was valid on BR(z0) where BR(z0) is contained in a domain where f(z) is holomorphic. However, e^z is entire, and so all R satisfy this condition. As such, the power series expansion is valid on BR(z0) for any R. Since for each w ∈ C, there exists R so that w ∈ BR(z0), it follows that this power series expansion is valid on all of C.
Knowing that holomorphic functions can be described by convergent power series raises a couple of other
questions:
Some things to keep in mind here: if we’re looking at power series of a holomorphic function, the radius
of convergence depends on the center of the series. It’s not usually a one size fits all situation. There is one
situation where that’s not the case though:
Example 4.2.3
Suppose f(z) is entire. Then the power series expansion for f(z) at z0 has radius of convergence R = ∞ for any z0.
Indeed, theorem 4.2.1 tells us that if f(z) is analytic on Br(z0), then its power series expansion at z0 converges to the value of the function on Br(z0). As such, R ≥ r. But since f(z) is analytic on Br(z0) for all r, this means that R ≥ r for all r. No real number satisfies this, and so R = ∞.
Alright, this helps us figure out the radius of convergence in some situations. But what if we don’t know
what function the power series gives? Is there some way to find R in that case?
Theorem 4.2.2: Ratio Test for Power Series
Suppose lim_{n→∞} |a_{n+1}/a_n| = L, where L ∈ [0, ∞]. Then:
• If L = 0, the series Σ_{n=0}^∞ a_n(z − z0)^n has radius of convergence R = ∞.
• If L = ∞, the series has radius of convergence 0.
• If L ∈ (0, ∞), the series has radius of convergence R = 1/L.

Proof. We apply the usual ratio test to the series Σ_{n=0}^∞ a_n(z − z0)^n.
We know the series always converges when z = z0, so we only need to consider when z ≠ z0. We compute:

$$\lim_{n\to\infty}\left|\frac{a_{n+1}(z-z_0)^{n+1}}{a_n(z-z_0)^n}\right| = L|z-z_0|$$

Now, if L|z − z0| < 1, this series converges absolutely. And when L|z − z0| > 1, it diverges. So, we consider our cases:
• If L = 0, then L|z − z0| = 0 < 1 for all z ∈ C. As such, the series converges everywhere and R = ∞.
• If L = ∞, we have that lim_{n→∞} |a_{n+1}(z − z0)^{n+1}/(a_n(z − z0)^n)| > 1 for any z ≠ z0. As such, the series diverges on C \ {z0} and R = 0.
• If L ∈ (0, ∞), then the series converges when |z − z0| < 1/L and diverges when |z − z0| > 1/L. As such, B_{1/L}(z0) is the largest open ball on which the series converges, so R = 1/L.
For example, for a power series with coefficients a_n = (−1)^n/(2^n n³), we compute:

$$\frac{1}{R} = \lim_{n\to\infty}\left|\frac{(-1)^{n+1}/\left(2^{n+1}(n+1)^3\right)}{(-1)^n/\left(2^n n^3\right)}\right| = \lim_{n\to\infty}\frac{n^3}{2(n+1)^3} = \frac{1}{2}$$

As such, R = 2.
Consider the series:

$$\cos(z) = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k}}{(2k)!}$$

Now, since cos(z) is entire, we know this has radius of convergence R = ∞. Out of curiosity, is it possible to use the ratio test formula to find this?
Well, we have:

$$\cos(z) = 1 + 0z - \frac{1}{2}z^2 + 0z^3 + \frac{1}{24}z^4 + 0z^5 + \ldots$$

In this case, a_{2n} = (−1)^n/(2n)! and a_{2n+1} = 0. So, the sequence b_n = a_{n+1}/a_n has b_{2n} = 0 and b_{2n+1} undefined! So we can't take lim_{n→∞} |b_n|, since the sequence isn't defined for all n.
How can we fix this? One way is to set w = z². Then:

$$\cos(z) = \sum_{n=0}^{\infty} \frac{(-1)^n w^n}{(2n)!}$$

This series isn't missing any terms, so we can use the ratio test with a_n = (−1)^n/(2n)!. We find that this series converges when |w| < R where:

$$\frac{1}{R} = \lim_{n\to\infty}\left|\frac{(-1)^{n+1}/(2(n+1))!}{(-1)^n/(2n)!}\right| = \lim_{n\to\infty}\frac{1}{(2n+2)(2n+1)} = 0$$

So, this series (in terms of w!) has radius of convergence R = ∞. Since it converges for all w, it also converges for all z.
In this case, it didn’t matter that we had |w| < R vs. |z| < R. But, for example, if we found that the
series in terms of w had radius of convergence 4, we would have |w| < 4. But w = z 2 , so this gives |z 2 | < 4
or |z| < 2. The moral: be careful.
Can we find the radius of convergence of the power series for some function without actually computing
the power series?
Example 4.2.6
Consider f(z) = 1/(z² − 1). This function is analytic on C \ {±1}, so it has a power series expansion at each of those points by theorem 4.2.1. Let's find the radius of convergence of this series at z0 = 3.
Rather than actually compute the power series, notice that the theorem tells us that if f(z) is analytic on Br(z0), then:

$$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!}(z-z_0)^n$$

for any z ∈ Br(z0). As such, if f(z) is analytic on Br(z0), then this power series must have radius of convergence R ≥ r.
In our case, notice that B2(3) is the largest ball centered at 3 on which f(z) is analytic. As such, we have that R ≥ 2.
In fact, R = 2! But to see this, we're going to need to develop some more techniques.
To finish this example, we prove (or state) some theorems that will help us determine the radius of
convergence.
Theorem 4.2.3
Suppose f(z) = Σ_{n=0}^∞ a_n(z − z0)^n. Let w ∈ C.
If the series converges at z = w, then it converges for all z ∈ C with |z − z0| < |w − z0|. And if it diverges at z = w, it diverges for all z ∈ C with |z − z0| > |w − z0|.
Before we prove this, let’s talk about what this means. If this power series has radius of convergence R,
this is saying that the series diverges for all z ∈ C with |z − z0 | > R. Why? Well, if it converges at such a z,
then it converges on B|z−z0 | (z0 ) which is a bigger ball than BR (z0 ). Since BR (z0 ) is the largest ball on which
the series converges, this can’t happen. So we get the useful corollary:
Corollary 4.2.1
If the power series Σ_{n=0}^∞ a_n(z − z0)^n has radius of convergence R, then the series diverges for all z ∈ C with |z − z0| > R.
Proof. We now prove the theorem. The idea is to try to write the series as being close to a geometric series.
Suppose Σ_{n=0}^∞ a_n(z − z0)^n converges at z = w. Suppose |z′ − z0| < |w − z0|. Then:

$$\sum_{n=0}^{\infty} a_n(z'-z_0)^n = \sum_{n=0}^{\infty} a_n(w-z_0)^n\,\frac{(z'-z_0)^n}{(w-z_0)^n}$$

Let ρ = (z′ − z0)/(w − z0). Note that |ρ| < 1 since |z′ − z0| < |w − z0|.
Now, since Σ_{n=0}^∞ a_n(w − z0)^n converges, we know that lim_{n→∞} a_n(w − z0)^n = 0 by the divergence test. As such, there exists M ∈ R so that for all n ∈ N, |a_n(w − z0)^n| < M.
Now, we have that |a_n(z′ − z0)^n| ≤ M|ρ|^n. Since Σ_{n=0}^∞ M|ρ|^n converges, the comparison test for real series tells us that Σ_{n=0}^∞ |a_n(z′ − z0)^n| converges as well.
Therefore, Σ_{n=0}^∞ a_n(z′ − z0)^n converges absolutely, and hence converges.
Next, we state a very important but very technical result. The proof will appear in the appendices.
Theorem 4.2.4: Power Series are Differentiable
Suppose f(z) = Σ_{n=0}^∞ a_n(z − z0)^n has radius of convergence R > 0. Then g(z) = Σ_{n=1}^∞ n a_n(z − z0)^{n−1} also has radius of convergence R, and f′(z) = g(z).
To put this more plainly, the derivative of a power series is the term by term derivative. This also lets us
prove something about primitives of power series.
Theorem 4.2.5: Power Series have Primitives
Suppose f(z) = Σ_{n=0}^∞ a_n(z − z0)^n has radius of convergence R > 0. Then F(z) = C + Σ_{n=0}^∞ (a_n/(n+1))(z − z0)^{n+1} also has radius of convergence R, and F′(z) = f(z).

Proof. To begin, we show that they have the same radius of convergence. We know that f(z) has radius of convergence:

$$R = \lim_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right|$$

if this limit exists. (A more precise version of this argument would use the concept of lim sup, which always exists.)
If F(z) has radius of convergence R_F, then:

$$R_F = \lim_{n\to\infty}\left|\frac{a_n/(n+1)}{a_{n+1}/(n+2)}\right| = \lim_{n\to\infty}\frac{n+2}{n+1}\left|\frac{a_n}{a_{n+1}}\right| = R$$

So they have the same radius of convergence. Then, by theorem 4.2.4, F′(z) = f(z).
Let’s finish off our example from last class before we talk about other ways to use these results.
Example 4.2.7
We know that the power series for 1/(z² − 1) centered at z0 = 3 has radius of convergence R ≥ 2. Suppose R > 2. Then the series converges on BR(3), which contains z = 1, and so defines a continuous function on BR(3) agreeing with f(z) = 1/(z² − 1) near z = 1. But:

$$\lim_{z\to 1} f(z) = \lim_{r\to 1^+} f(r) = \lim_{r\to 1^+} \frac{1}{r^2-1} = \infty$$

This is a contradiction, since the series assigns a finite value at z = 1. So we cannot have R > 2, leaving us with R = 2.
More generally, suppose f(z) is analytic except at isolated singularities z1, . . . , zm, at each of which limz→zj f(z) does not exist. Then the radius of convergence of the power series for f(z) centered at z0 is:

$$R = \min_j\{|z_0 - z_j|\}$$

Proof. The proof proceeds analogously to the previous example. We know that f(z) is analytic on BR(z0), since this ball contains none of the isolated singularities of f(z).
However, since limz→zj f(z) does not exist, we cannot extend the power series to be valid beyond any of the zj. Since BR(z0) is the largest ball centered at z0 containing none of the zj, this is the largest ball on which the series converges.
What other ways can we use these results? They allow us to create new series without much fuss, so let’s
see what this can give us.
Example 4.2.8
Find the radius of convergence for Σ_{n=1}^∞ n z^{n−1} and Σ_{n=0}^∞ z^{n+1}/(n+1). What functions are these?
We recognize that n z^{n−1} is the derivative of z^n. So:

$$\sum_{n=1}^{\infty} n z^{n-1} = \sum_{n=1}^{\infty} \frac{d}{dz} z^n = \frac{d}{dz}\,\frac{1}{1-z} = \frac{1}{(1-z)^2}$$

And theorem 4.2.4 tells us that this series has the same radius of convergence as the power series for 1/(1 − z) centered at z0 = 0, which is R = 1.
Similarly, we recognize that z^{n+1}/(n+1) is a primitive for z^n, and so Σ_{n=0}^∞ z^{n+1}/(n+1) is a primitive for 1/(1 − z). As such, it also has radius of convergence R = 1 and we have:

$$\sum_{n=0}^{\infty} \frac{z^{n+1}}{n+1} = -\log_0(1-z) + C$$

for some logarithm log_0(z). Luckily, evaluating at z = 0 gives 0 = −log_0(1) + C. We can therefore choose log_0(z) = Log(z) and C = 0.
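A numerical spot check that the primitive series agrees with −Log(1 − z); the sample point is an arbitrary choice inside the disc of convergence:

```python
import cmath

z = 0.5                                    # arbitrary point with |z| < 1
partial = sum(z**(n + 1) / (n + 1) for n in range(300))
exact = -cmath.log(1 - z)                  # -Log(1 - z), principal branch
print(abs(partial - exact))                # should be essentially zero
```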
Example 4.2.9
Find Σ_{n=2}^∞ (−1)^n/(2^n(n² − n)).
Well, this isn't a power series. However, it is the power series:

$$\sum_{n=2}^{\infty} \frac{z^n}{n(n-1)}$$

evaluated at z = −1/2. So, we need to investigate this power series.
To begin, we need to figure out how this series was built. The division by n and n − 1 signals to me that this was built by taking primitives. The fact that we have two of them suggests we took the primitive of the primitive. To see this, let's reindex. Set m = n − 2. Then we get:

$$\sum_{n=2}^{\infty} \frac{z^n}{n(n-1)} = \sum_{m=0}^{\infty} \frac{z^{m+2}}{(m+2)(m+1)}$$

Taking the term-by-term derivative gives:

$$\sum_{m=0}^{\infty} \frac{z^{m+1}}{m+1}$$

which the previous example tells us is −Log(1 − z). As such, this series is given by F(z) where F′(z) = −Log(1 − z). If you recall from first year calculus, the primitive for ln(x) is x ln(x) − x. We try something similar:

$$F(z) = (1-z)\mathrm{Log}(1-z) - (1-z) + C$$
$$F'(z) = (1-z)\cdot\frac{-1}{1-z} - \mathrm{Log}(1-z) + 1 = -\mathrm{Log}(1-z)$$

So this function is indeed a primitive for −Log(1 − z). We just need to find C to finish up. Well, F(0) = −1 + C. Also, F(0) = Σ_{n=2}^∞ 0^n/(n(n−1)) = 0. As such, C = 1 and F(z) = (1 − z)Log(1 − z) + z.
And since we have taken primitives from a series with radius of convergence R = 1, this is valid if |z| < 1. In particular at z = −1/2. We find:

$$\sum_{n=2}^{\infty} \frac{(-1)^n}{2^n(n^2-n)} = F\left(-\frac{1}{2}\right) = \frac{3}{2}\,\mathrm{Log}\left(\frac{3}{2}\right) - \frac{1}{2} = \frac{3}{2}\ln\left(\frac{3}{2}\right) - \frac{1}{2}$$
Theorem 4.3.1
Suppose f(z) is analytic on D \ {z0}, has a removable discontinuity at z0 ∈ D, and that limz→z0 f(z) = L. Then the function:

$$\tilde{f}(z) = \begin{cases} f(z), & z \ne z_0 \\ L, & z = z_0 \end{cases}$$

is analytic on D.

Proof. Since f̃(z) = f(z) on D \ {z0}, we know that f̃ is differentiable on D \ {z0}. As such, we only need to show that f̃ is differentiable at z0. However, this is not immediately accessible from the definition of the derivative. We instead consider a new function. Define:

$$k(z) = (z-z_0)\tilde{f}(z)$$

Notice that k(z0) = 0, so:

$$k'(z_0) = \lim_{h\to 0}\frac{(z_0+h-z_0)\tilde{f}(z_0+h)}{h} = \lim_{h\to 0}\tilde{f}(z_0+h) = \lim_{h\to 0} f(z_0+h) = L$$

As such, k(z) is differentiable at z0 as well. Since k is analytic on D and z0 ∈ D, k has a power series expansion valid on Br(z0) for some r > 0. In particular:

$$k(z) = k(z_0) + k'(z_0)(z-z_0) + \sum_{n=2}^{\infty}\frac{k^{(n)}(z_0)}{n!}(z-z_0)^n = L(z-z_0) + \sum_{n=2}^{\infty}\frac{k^{(n)}(z_0)}{n!}(z-z_0)^n$$

Now, for z ≠ z0, f̃(z) = k(z)/(z − z0) = L + Σ_{n=2}^∞ (k^(n)(z0)/n!)(z − z0)^{n−1}. However, note that when we evaluate this power series at z0 we get f̃(z0) = L = L + Σ_{n=2}^∞ (k^(n)(z0)/n!)(z0 − z0)^{n−1}. So f̃ is given by this power series on Br(z0)!
However, we know that power series with positive radii of convergence are analytic. Since f̃ is described by a power series with positive radius of convergence on Br(z0), f̃ is analytic on Br(z0) and hence is differentiable at z0, completing the proof.
Corollary
Suppose f(z) is analytic on D \ {z0} and has a removable discontinuity at z0 ∈ D. Let γ be a piecewise smooth, simple closed curve in D \ {z0} whose inside is contained in D. Then ∫_γ f(z) dz = 0.

Proof. By theorem 4.3.1, there exists a function f̃ analytic on D such that f̃(z) = f(z) for z ≠ z0. Now, since γ is contained in D \ {z0}, f̃ = f on γ. As such:

$$\int_\gamma f(z)\,dz = \int_\gamma \tilde{f}(z)\,dz \overset{\text{CIT}}{=} 0$$

For example, consider sin(z)/z on the circle |z| = 1. Since:

$$\frac{\sin(z)}{z} = 1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \ldots$$

we have limz→0 sin(z)/z = 1. So the singularity is removable. By our previous theorem:

$$\int_{|z|=1} \frac{\sin(z)}{z}\,dz = 0$$
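Since the integrand is only singular in appearance, a discretized contour integral should come out numerically zero. A quick sketch (step count arbitrary):

```python
import cmath
import math

steps = 4096
h = 2 * math.pi / steps
integral = 0
for k in range(steps):
    z = cmath.exp(1j * h * k)     # point on |z| = 1
    dz = 1j * z * h               # dz = i e^{it} dt
    integral += cmath.sin(z) / z * dz
print(abs(integral))  # essentially 0: the singularity is removable
```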
That means we need to be able to compute limz→z0 (z − z0)^n f(z). This is a 0 · ∞ type of limit, which is precisely the setup for L'Hopital's rule. However, to make use of L'Hopital effectively, we first need to discuss different types of zeroes of functions.
Definition 4.4.1: Zero of order n
Suppose f(z) is analytic on a domain D and z0 ∈ D. Then z0 is a zero of order n if f(z0) = f′(z0) = · · · = f^(n−1)(z0) = 0 and f^(n)(z0) ≠ 0.
Theorem 4.4.1
Suppose f(z), g(z) are analytic on a domain D, and z0 ∈ D. Further, suppose that z0 is a zero of order m for f(z) and a zero of order k for g(z). Then:
• If m > k, then limz→z0 f(z)/g(z) = 0.
• If m < k, then limz→z0 f(z)/g(z) = ∞.

Proof. Since f(z), g(z) are analytic on D and z0 ∈ D, we know f(z), g(z) have power series centered at z0 with positive radius of convergence. In particular, there exists R > 0 such that on BR(z0) we have:

$$f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}(z-z_0)^n \qquad g(z) = \sum_{n=0}^{\infty}\frac{g^{(n)}(z_0)}{n!}(z-z_0)^n$$

However, we know that z0 is a zero of order m for f and a zero of order k for g, so the coefficients below order m (respectively k) vanish:

$$f(z) = \sum_{n=m}^{\infty}\frac{f^{(n)}(z_0)}{n!}(z-z_0)^n \qquad g(z) = \sum_{n=k}^{\infty}\frac{g^{(n)}(z_0)}{n!}(z-z_0)^n$$

Case 1. m > k
Divide both series by (z − z0)^k. Notice that the denominator g(z)/(z − z0)^k has a constant term of g^(k)(z0)/k!, which is non-zero. In the numerator f(z)/(z − z0)^k, the powers of (z − z0) that occur are m − k, m − k + 1, etc. Notice that these are all positive powers. As such:

$$\lim_{z\to z_0}\frac{f(z)}{(z-z_0)^k} = 0 \qquad \lim_{z\to z_0}\frac{g(z)}{(z-z_0)^k} = \frac{g^{(k)}(z_0)}{k!}$$

This gives that limz→z0 f(z)/g(z) = 0.

Case 2. m < k
Suppose m < k. This time, divide both series by (z − z0)^m. Notice that the denominator g(z)/(z − z0)^m has no constant term, while the numerator f(z)/(z − z0)^m has a non-zero constant term of f^(m)(z0)/m!. As such, limz→z0 f(z)/g(z) = ∞.
Example 4.4.1
Find ∫_{|z|=4} z/sin(z) dz.
By the deformation theorem:

$$\int_{|z|=4}\frac{z}{\sin(z)}\,dz = \int_{|z|=1}\frac{z}{\sin(z)}\,dz + \int_{|z-\pi|=1}\frac{z}{\sin(z)}\,dz + \int_{|z+\pi|=1}\frac{z}{\sin(z)}\,dz$$

So how do we handle each of these integrals? Well, let's try to see if they're removable or poles.
For z0 = 0, we have that limz→0 z/sin(z) = 1/cos(0) = 1 by L'Hopital. So z0 = 0 is removable and the first integral is 0.
For z0 = π, we see that limz→π z/sin(z) = ∞ since the denominator is approaching 0 but the numerator is not. Therefore, this is not removable.
Is it a pole of order 1? To see this, we check limz→π (z − π)z/sin(z). Using L'Hopital's rule, we find that this is:

$$\lim_{z\to\pi}\frac{(z-\pi)z}{\sin(z)} = \lim_{z\to\pi}\frac{2z-\pi}{\cos(z)} = -\pi$$

So z/sin(z) has a pole of order 1 at π, and g(z) = (z − π)z/sin(z) has a removable discontinuity which can be made analytic by setting g(π) = −π. As such:

$$\int_{|z-\pi|=1}\frac{z}{\sin(z)}\,dz = 2\pi i\,g(\pi) = -2\pi^2 i$$

And for z0 = −π, setting g2(z) = (z + π)z/sin(z) gives:

$$\lim_{z\to-\pi} g_2(z) = \frac{-\pi}{-1} = \pi$$

And so:

$$\int_{|z+\pi|=1}\frac{z}{\sin(z)}\,dz = 2\pi i\,g_2(-\pi) = 2\pi^2 i$$

All together, ∫_{|z|=4} z/sin(z) dz = 0 − 2π²i + 2π²i = 0.
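Both non-zero contributions can be verified with a discretized contour integral; this is a numerical sketch with an arbitrary step count:

```python
import cmath
import math

def contour_integral(f, center, radius, steps=8192):
    """Riemann-sum approximation of the integral of f over |z - center| = radius."""
    h = 2 * math.pi / steps
    total = 0
    for k in range(steps):
        w = cmath.exp(1j * h * k)
        total += f(center + radius * w) * 1j * radius * w * h
    return total

f = lambda z: z / cmath.sin(z)
around_pi = contour_integral(f, math.pi, 1)          # expect -2*pi^2*i
around_minus_pi = contour_integral(f, -math.pi, 1)   # expect +2*pi^2*i
print(abs(around_pi + 2j * math.pi**2), abs(around_minus_pi - 2j * math.pi**2))
```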
This example shows us how to handle simple poles. What about poles of higher order? Well, we could use
this argument. However, once we develop Laurent series, we can give a slightly more straightforward formula.
To end, let’s talk about how to recognize the order of a pole intuitively. You will have a homework problem
asking you to prove the following:
Theorem 4.4.2
Suppose f(z) has a zero of order n at z0 and g(z) has a zero of order m at z0. Then:
• If n ≥ m, then f(z)/g(z) has a removable discontinuity at z0.
• If n < m, then f(z)/g(z) has a pole of order m − n at z0.
Before we get into how this idea is useful, let's start with an example:

Example 4.5.1
Suppose f(z) = 1/(1 − z). We know that on |z| < 1, f(z) = Σ_{n=0}^∞ z^n.
What about for |z| > 1? Well, we can rewrite:

$$f(z) = \frac{1}{1-z} = -\frac{1}{z}\cdot\frac{1}{1-\frac{1}{z}}$$

Well, we know that |z| > 1, so |1/z| < 1. So, we can use the power series valid on |z| < 1 to get that:

$$\frac{1}{1-\frac{1}{z}} = \sum_{n=0}^{\infty}\left(\frac{1}{z}\right)^n$$

And therefore:

$$\frac{1}{1-z} = -\frac{1}{z}\sum_{n=0}^{\infty}\frac{1}{z^n} = -\sum_{n=1}^{\infty}\frac{1}{z^n}$$
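A quick numerical check of the two expansions on their respective regions; both sample points are arbitrary choices:

```python
inside = 0.4 + 0.3j    # |z| < 1: use the power series
outside = 2.0 + 1.0j   # |z| > 1: use the Laurent series in negative powers

power = sum(inside**n for n in range(200))
laurent = -sum(1 / outside**n for n in range(1, 200))

print(abs(power - 1 / (1 - inside)), abs(laurent - 1 / (1 - outside)))
```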
Alright, so let's tackle some theoretical concerns. In particular, we're interested in a few questions:
• Is there a radius of convergence type condition that lets us see when Laurent series converge?
Regarding the first question: we broke Σ_{n=−∞}^∞ a_n(z − z0)^n into two sums. The sum with the positive powers:

$$\sum_{n=0}^{\infty} a_n(z-z_0)^n$$

is a power series. As such, it has a radius of convergence R1 so that it converges on B_{R1}(z0), and diverges when |z − z0| > R1.
To investigate the negative terms, we'll make a change of variables. Let w = 1/(z − z0). Then we have:

$$\sum_{n=1}^{\infty}\frac{a_{-n}}{(z-z_0)^n} = \sum_{n=1}^{\infty} a_{-n}w^n$$

which is a power series in w. As such, it has a radius of convergence R2 so that it converges when |w| < R2 and diverges when |w| > R2. As such, this series converges when |z − z0| > 1/R2 and diverges when |z − z0| < 1/R2.
As such, there are two radii r1 and r2 such that the series converges when r1 < |z − z0| < r2. Be careful: this does not automatically imply that r1 < r2. We need r1 < r2 for the series to converge anywhere, but we can easily find series such that r1 > r2, so that when the positive powers converge, the negative powers diverge.
Regarding analyticity:
Theorem 4.5.1
If f(z) = Σ_{n=−∞}^∞ a_n(z − z0)^n converges when R1 < |z − z0| < R2, where R1 < R2, then on that annulus:

$$f'(z) = \sum_{n=-\infty}^{\infty} n\,a_n(z-z_0)^{n-1}$$
We won’t be proving this. It’s another theoretical result needing uniform convergence, like the fact that
power series are analytic.
Much more interesting is that analytic functions admit Laurent series, and that the coefficients of the
Laurent series are integrals!
Theorem 4.5.2
Suppose f(z) is analytic on D = {z ∈ C : R1 < |z − z0| < R2}. Let r ∈ (R1, R2). Then on D, f(z) is given by the Laurent series:

$$f(z) = \sum_{n=-\infty}^{\infty} a_n(z-z_0)^n \qquad \text{where } a_n = \frac{1}{2\pi i}\int_{|z-z_0|=r}\frac{f(z)}{(z-z_0)^{n+1}}\,dz$$
Proof. Let w ∈ D, and choose r1, r2 ∈ R so that R1 < r1 < |w − z0| < r2 < R2. By problem 3b on practice test 2, we have that:

$$f(w) = \frac{1}{2\pi i}\left(\int_{|z-z_0|=r_2}\frac{f(z)}{z-w}\,dz - \int_{|z-z_0|=r_1}\frac{f(z)}{z-w}\,dz\right)$$

For the circle |z − z0| = r2, we have |w − z0| < |z − z0|, so the geometric series argument from the proof of theorem 4.2.1 gives:

$$\int_{|z-z_0|=r_2}\frac{f(z)}{z-w}\,dz = \sum_{n=0}^{\infty}\left(\int_{|z-z_0|=r_2}\frac{f(z)}{(z-z_0)^{n+1}}\,dz\right)(w-z_0)^n$$

Now, since we're concerned with the integral around the circle |z − z0| = r1, we know that |z − z0| < |w − z0| by our initial assumption. As such, |(w − z0)/(z − z0)| > 1. We may therefore use our Laurent series for 1/(1 − z) on the annulus |z| > 1 to get:

$$\frac{f(z)}{z-w} = -\frac{1}{z-z_0}\sum_{n=1}^{\infty}\frac{f(z)(z-z_0)^n}{(w-z_0)^n}$$

and so:

$$\int_{|z-z_0|=r_1}\frac{f(z)}{z-w}\,dz = -\sum_{n=1}^{\infty}\left(\int_{|z-z_0|=r_1} f(z)(z-z_0)^{n-1}\,dz\right)\frac{1}{(w-z_0)^n}$$

This isn't in quite the form we want it in. So I'll make a change of variable on the index. Let k = −n. Then we have:

$$\int_{|z-z_0|=r_1}\frac{f(z)}{z-w}\,dz = -\sum_{k=-\infty}^{-1}\left(\int_{|z-z_0|=r_1}\frac{f(z)}{(z-z_0)^{k+1}}\,dz\right)(w-z_0)^k$$

Putting the two pieces together:

$$f(w) = \frac{1}{2\pi i}\left(\sum_{k=0}^{\infty}\left(\int_{|z-z_0|=r_2}\frac{f(z)}{(z-z_0)^{k+1}}\,dz\right)(w-z_0)^k + \sum_{k=-\infty}^{-1}\left(\int_{|z-z_0|=r_1}\frac{f(z)}{(z-z_0)^{k+1}}\,dz\right)(w-z_0)^k\right)$$

This is close to what we want, but not quite! Indeed, our final formula should involve integrals over |z − z0| = r, not r1 or r2. However, since f(z)/(z − z0)^{k+1} is analytic on D, the deformation theorem tells us that:

$$\int_{|z-z_0|=r}\frac{f(z)}{(z-z_0)^{k+1}}\,dz = \int_{|z-z_0|=r_1}\frac{f(z)}{(z-z_0)^{k+1}}\,dz = \int_{|z-z_0|=r_2}\frac{f(z)}{(z-z_0)^{k+1}}\,dz$$

This gives us the desired formula.
So how does this help us to integrate? Well, what happens when we set n = −1? We get:

$$a_{-1} = \frac{1}{2\pi i}\int_{|z-z_0|=r} f(z)\,dz$$
So if we have a Laurent series, integrating becomes super simple! Let’s see an example:
Example 4.5.2
Find ∫_{|z|=1} z e^{1/z²} dz.
With some effort, we can verify that the singularity at 0 is neither removable nor a pole. This is an essential singularity. So we can't use CIT or CIF here. The only other options we have are to integrate by definition (terrible idea!) or to find a Laurent series. Mercifully, we know that e^z = Σ_{n=0}^∞ z^n/n! for any z ∈ C. For z ≠ 0, 1/z² ∈ C, and so:

$$z e^{1/z^2} = z\sum_{n=0}^{\infty}\frac{1}{n!\,z^{2n}} = \sum_{n=0}^{\infty}\frac{1}{n!\,z^{2n-1}} = z + \frac{1}{z} + \frac{1}{2!\,z^3} + \frac{1}{3!\,z^5} + \ldots$$

Now, to integrate this we only need to find that a_{−1} term. This is specifically the coefficient attached to the power z^{−1}. In this case, we have a z^{−1} term of 1/z, which gives us a_{−1} = 1.
So, ∫_{|z|=1} z e^{1/z²} dz = 2πi a_{−1} = 2πi.
This is really useful. If you have a Laurent series valid on your curve, you can find the integral without
any work.
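Even though no antiderivative machinery applies here, a direct numerical contour integral agrees with 2πi. A sketch with an arbitrary step count:

```python
import cmath
import math

steps = 4096
h = 2 * math.pi / steps
integral = 0
for k in range(steps):
    z = cmath.exp(1j * h * k)              # point on |z| = 1
    dz = 1j * z * h
    integral += z * cmath.exp(1 / z**2) * dz
print(abs(integral - 2j * math.pi))  # should be very small
```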
Definition 4.6.1: Residue
Suppose f(z) = Σ_{n=−∞}^∞ a_n(z − z0)^n converges on the annulus {z ∈ C : 0 < |z − z0| < R} for some R > 0. Then the residue of f(z) at z0 is:

$$\mathrm{Res}(f; z_0) = a_{-1}$$

Warning 4.6.1
In a Laurent series, a_{−1} is called the residue ONLY if the inner radius of the annulus is 0. Otherwise it is not the residue.
In particular, you can easily run into situations where the function has a Laurent series defined on 0 < |z − z0| < R1 and another on r2 < |z − z0| < R2, with different a_{−1} coefficients occurring. The residue only corresponds to 0 < |z − z0| < R1. However, if you are integrating over a curve in the region r2 < |z − z0| < R2, you will need the a_{−1} coefficient that is valid in that region.
Example 4.6.2
Find Res(1/(1 − z); 0).
Well, we know that 1/(1 − z) is analytic inside B1(0). As such, it has a power series there. A power series is just a Laurent series with a_n = 0 for n < 0. So a_{−1} = 0 and Res(1/(1 − z); 0) = 0.
Proof. We’ve already discussed what happens when f(z) is analytic on B_R(z_0).

If f has a removable singularity at z_0, then we have previously shown (in the proof of 4.3.1) that f(z) is given by a power series on B_R(z_0) \ {z_0} for some R > 0. This is an annulus whose inner radius is 0, and so the a_{−1} term of this power series is the residue. As such, the residue is 0.
We’ve used this principle to compute one integral (our first example, with z e^{1/z²}). Does this apply more generally? It turns out to be fairly flexible:
Theorem 4.6.2: The Residue Theorem
Let D be a domain and z1 , . . . , zn ∈ D. Suppose f (z) is analytic on D \ {z1 , . . . , zn }. Let γ be a
piecewise smooth, positively oriented, simple closed curve in D such that the inside of γ is in D and
z1 , . . . , zn are inside γ. Then:
∫_γ f(z) dz = 2πi ∑_{k=1}^n Res(f; z_k)
What this really says is that the integral is 2πi times the sum of the residues at the function’s singularities inside the curve.
Proof. With the given conditions on γ and f, the deformation theorem applies to give:

∫_γ f(z) dz = ∑_{k=1}^n ∫_{|z−z_k|=r_k/2} f(z) dz

where each r_k > 0 is chosen so that B_{r_k}(z_k) ⊂ D and B_{r_k}(z_k) contains no singularity other than z_k. Since f(z) is analytic on B_{r_k}(z_k) \ {z_k}, the circle |z − z_k| = r_k/2 is contained in an annulus with inner radius 0 on which f(z) is analytic. As such:

∫_{|z−z_k|=r_k/2} f(z) dz = 2πi Res(f; z_k)
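The theorem is easy to check numerically. As an illustration (with a function of our own choosing, not one from the notes), take f(z) = (z+1)/(z(z−2)), which has simple poles at 0 and 2 with residues −1/2 and 3/2. A sketch assuming numpy:

```python
import numpy as np

# f has simple poles at z = 0 (residue -1/2) and z = 2 (residue 3/2), both
# inside |z| = 3, so the Residue Theorem predicts the integral is
# 2*pi*i * (-1/2 + 3/2) = 2*pi*i.
N = 4096
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = 3.0 * np.exp(1j * t)
f = (z + 1) / (z * (z - 2))
integral = np.sum(f * 3j * np.exp(1j * t)) * (2.0 * np.pi / N)

print(integral)  # approximately 2*pi*i
```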
All this business of integrating using Laurent series and residues takes a bit of care.
Example 4.6.3
We know, by the Cauchy Integral Formula, that ∫_{|z|=2} 1/(1−z) dz = −2πi. So what’s wrong with the following argument?

"We know that ∫_{|z|=2} 1/(1−z) dz = 2πi a_{−1} for a Laurent series of 1/(1−z). However, a_{−1} = Res(f; 0). Since 1/(1−z) is analytic on B_1(0), we have that Res(1/(1−z); 0) = 0 and so ∫_{|z|=2} 1/(1−z) dz = 0."

The key thing here is that we’re using the wrong Laurent series! We want a_{−1} from a Laurent series of 1/(1−z) which is valid on |z| = 2. However, we know that the power series for 1/(1−z) is only valid on |z| < 1, which does not contain the curve. The correct Laurent series to use is:

1/(1−z) = −∑_{n=1}^∞ 1/z^n

which is valid on |z| > 1 and gives a_{−1} = −1, agreeing with our calculation using CIF.

Another way to look at this is: we used the wrong residue! We tried to say that ∫_{|z|=2} 1/(1−z) dz = 2πi Res(1/(1−z); 0). But that’s not what the Residue Theorem says to do. We need Res(1/(1−z); 1). Since 1/(1−z) = −1/(z−1) is already written as a Laurent series centered at z_0 = 1, we can quickly read off the residue as −1.
And when trying to use the Residue Theorem, plug the right points in! The whole idea is that you look
at the residues at the singularities!
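As a check on the answer above, one can integrate 1/(1−z) around |z| = 2 numerically. A sketch assuming numpy:

```python
import numpy as np

# Integrate 1/(1 - z) around |z| = 2. The only singularity, z = 1, lies
# inside the curve, and its residue is -1, so we expect -2*pi*i.
N = 4096
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = 2.0 * np.exp(1j * t)
integral = np.sum((1.0 / (1.0 - z)) * 2j * np.exp(1j * t)) * (2.0 * np.pi / N)

print(integral)  # approximately -2*pi*i
```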
The Residue Theorem gives us an overarching theory for integrating functions which are analytic except
for some number of isolated singularities. However, this simplicity is really just an organizational tool. We’ve
moved the hard part from knowing a bunch of different theorems to knowing how to calculate residues. We’ve
handled removable singularities. That leaves poles and essential singularities.
If f(z) has a pole of order n at z_0, then:

Res(f; z_0) = (1/(n−1)!) lim_{z→z_0} d^{n−1}/dz^{n−1} [(z − z_0)^n f(z)]
Proof. To start, let’s look at (z − z_0)^n f(z). We write f(z) = ∑_{k=−n}^∞ a_k(z − z_0)^k. Then:

(z − z_0)^n f(z) = a_{−n} + a_{−n+1}(z − z_0) + ... + a_{−1}(z − z_0)^{n−1} + a_0(z − z_0)^n + ...

We need to isolate a_{−1}. Let’s see what differentiating does:

d/dz [(z − z_0)^n f(z)] = a_{−n+1} + 2a_{−n+2}(z − z_0) + 3a_{−n+3}(z − z_0)² + ...

d²/dz² [(z − z_0)^n f(z)] = 2a_{−n+2} + (3)(2)a_{−n+3}(z − z_0) + ...

Inductively, we can show that:

d^{n−1}/dz^{n−1} [(z − z_0)^n f(z)] = (n−1)! a_{−1} + n! a_0(z − z_0) + ...

And so lim_{z→z_0} d^{n−1}/dz^{n−1} [(z − z_0)^n f(z)] = (n−1)! a_{−1} = (n−1)! Res(f; z_0). (Why can’t we just substitute in z = z_0 at this point?)

Dividing by (n − 1)! gives the desired result.
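This derivative formula is mechanical enough to implement directly. A small sketch with sympy (the helper name residue_at_pole is ours), applied to cos(z)/z³, which has a pole of order 3 at 0:

```python
from sympy import Rational, cos, diff, factorial, limit, residue, symbols

z = symbols('z')

def residue_at_pole(f, z0, n):
    """Res(f; z0) = (1/(n-1)!) lim_{z->z0} d^{n-1}/dz^{n-1} [(z - z0)^n f(z)]."""
    g = diff((z - z0)**n * f, z, n - 1)
    return limit(g, z, z0) / factorial(n - 1)

f = cos(z) / z**3  # Laurent series: 1/z^3 - 1/(2z) + z/24 - ..., so a_{-1} = -1/2
r = residue_at_pole(f, 0, 3)

print(r)                 # -1/2
print(residue(f, z, 0))  # sympy's built-in residue agrees
```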
Well, 1/sin²(z) is analytic on {z ∈ C | 0 < |z| < π}, so the Residue Theorem applies to give:

∫_{|z|=1} 1/sin²(z) dz = 2πi Res(1/sin²(z); 0)

so we only need to find this residue. We need to identify what type of singularity this is first. We have already seen that sin(z) has a simple zero at z_0 = 0, so 1/sin(z) has a simple pole there. We expect that 1/sin²(z) has a double pole there. Indeed, L’Hopital’s rule quickly gives us that:

lim_{z→0} z²/sin²(z) = 1

so this is a double pole. Unfortunately, this is not the limit we need to compute to find the residue. This limit only tells us what type of pole we have. Instead, we have:

Res(1/sin²(z); 0) = lim_{z→0} d/dz [z²/sin²(z)] = lim_{z→0} (2z sin(z) − 2z² cos(z))/sin³(z)

From here, we use L’Hopital’s rule twice to get that this limit is 0. So the residue is 0, and ∫_{|z|=1} 1/sin²(z) dz = 0.
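Numerically integrating 1/sin²(z) around the unit circle confirms that the residue at 0, and hence the integral, is 0. A sketch assuming numpy:

```python
import numpy as np

# Integrate 1/sin(z)^2 around |z| = 1. The only singularity inside is the
# double pole at 0, whose residue is 0, so the integral should vanish.
N = 4096
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = np.exp(1j * t)
integral = np.sum((1.0 / np.sin(z)**2) * 1j * z) * (2.0 * np.pi / N)

print(abs(integral))  # approximately 0
```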
Generally, for a pole of order n you should expect to use L’Hopital’s rule n times to compute this limit.
So, if you have a Laurent series that extends infinitely in the negative direction and has inner radius of
convergence 0, the singularity is essential.
The Residue Theorem also lets us compute improper real integrals for a surprising array of functions. Today, we’ll see how to do this for rational functions.
To begin, let’s remind ourselves what this improper integral means:
Definition 4.7.1: Improper Integrals
Suppose f(x) is continuous on R. Then the improper integral ∫_{−∞}^∞ f(x) dx is defined to be:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_a^r f(x) dx + lim_{s→−∞} ∫_s^a f(x) dx
for any a ∈ R, and whenever both limits exist. If both limits converge, we say the improper integral
converges.
We won’t be working with two limits. For our purposes, we need one limit: lim_{r→∞} ∫_{−r}^r f(x) dx, called the principal value of the integral. Thankfully, if the integral converges then:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{−r}^r f(x) dx

So, to compute the improper integrals we’re interested in, we’re going to first show the integral exists and then calculate the principal value.
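The order of these steps matters: the symmetric limit can exist even when the improper integral diverges. A standard cautionary example, worth keeping in mind:

```latex
\lim_{r \to \infty} \int_{-r}^{r} x \, dx = \lim_{r \to \infty} 0 = 0,
\qquad \text{but} \qquad
\lim_{r \to \infty} \int_{0}^{r} x \, dx = \lim_{r \to \infty} \frac{r^2}{2} = \infty.
```

So ∫_{−∞}^∞ x dx diverges even though its symmetric limit is 0. This is exactly why we verify convergence before computing the principal value.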
We’re interested in rational functions. So we’ll handle the general case:
Theorem 4.7.1
Suppose p(z), q(z) are polynomials. Then:

• If q(x) has roots in R which are not roots of p(x), then ∫_{−∞}^∞ p(x)/q(x) dx diverges.

• If deg p(x) ≥ deg q(x) − 1, then ∫_{−∞}^∞ p(x)/q(x) dx diverges.

• If deg p(x) ≤ deg q(x) − 2 and q(x) ≠ 0 for x ∈ R, then ∫_{−∞}^∞ p(x)/q(x) dx converges.
Proof. In the first two cases, comparing the function to 1/x gives the desired divergence.

For the final case, suppose p(x) has real roots r_1 < ... < r_n. Since p(x)/q(x) is continuous on [r_1, r_n], the integral ∫_{r_1}^{r_n} p(x)/q(x) dx exists. So we only need to handle the two tails.

By the intermediate value theorem, p(x)/q(x) is either always positive or always negative on each of the intervals (−∞, r_1] and [r_n, ∞). So we can apply the limit comparison test. In particular, lim_{x→±∞} [p(x)/q(x)] / [1/(x²+1)] exists. And since ∫_{−∞}^∞ 1/(x²+1) dx = π, the limit comparison test gives us that ∫_{−∞}^{r_1} p(x)/q(x) dx and ∫_{r_n}^∞ p(x)/q(x) dx also exist.
Now that we know the integrals we care about exist, how do we actually compute them?
Example 4.7.1
Find ∫_{−∞}^∞ (x+1)/(x⁴+2x²+1) dx.

By our theorem, this integral exists. We need to somehow turn this real integral into a complex one.

We’re interested in ∫_{−r}^r (x+1)/(x⁴+2x²+1) dx. To get a closed curve in C, we look at:

[Figure: the closed semicircular contour consisting of the segment L_r from −r to r along the real axis, together with the semicircular arc C_r of radius r in the upper half-plane.]

If f(z) = (z+1)/(z⁴+2z²+1), then the integral we care about is:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{−r}^r f(x) dx = lim_{r→∞} ∫_{L_r} f(z) dz
Our plan has three steps:

1. Show that lim_{r→∞} ∫_{C_r} f(z) dz = 0.

2. Show that ∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{L_r+C_r} f(z) dz.

3. Compute lim_{r→∞} ∫_{L_r+C_r} f(z) dz.
Step 1. To do this, we’ll use M-L estimation (3.2.7). Since C_r is a semicircle of radius r, it has length L = πr.

So we need to estimate the maximum of |f(z)| on C_r. Since f(z) = (z+1)/(z⁴+2z²+1), we can get an upper bound for |f(z)| by getting an upper bound for |z+1| and a lower bound for |z⁴+2z²+1|. On C_r we have |z| = r, so |z+1| ≤ r+1 and |z⁴+2z²+1| ≥ r⁴ − 2r² − 1.

Putting these two bounds together, we see that |f(z)| ≤ (r+1)/(r⁴−2r²−1). And so, M-L estimation gives us that:

|∫_{C_r} f(z) dz| ≤ πr(r+1)/(r⁴−2r²−1)

So, in the limit:

0 ≤ lim_{r→∞} |∫_{C_r} f(z) dz| ≤ lim_{r→∞} πr(r+1)/(r⁴−2r²−1) = 0

And so lim_{r→∞} ∫_{C_r} f(z) dz = 0.
Step 2: We know that:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{L_r} f(z) dz

But since lim_{r→∞} ∫_{C_r} f(z) dz = 0, we have that:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{L_r} f(z) dz + lim_{r→∞} ∫_{C_r} f(z) dz = lim_{r→∞} ∫_{L_r+C_r} f(z) dz
Step 3: We compute this integral using the residue theorem. Since z⁴+2z²+1 = (z²+1)², we see that f(z) has two double poles: ±i. Of these, only i is inside the curve (when r > 1). As such, when r > 1 the residue theorem gives:

∫_{L_r+C_r} f(z) dz = 2πi Res((z+1)/(z⁴+2z²+1); i)

Since i is a double pole:

Res((z+1)/(z⁴+2z²+1); i) = lim_{z→i} d/dz [(z−i)²(z+1)/(z²+1)²] = lim_{z→i} d/dz [(z+1)/(z+i)²] = lim_{z→i} (i − 2 − z)/(z+i)³ = −2/(2i)³ = −i/4

Therefore:

∫_{−∞}^∞ f(x) dx = lim_{r→∞} ∫_{L_r+C_r} f(z) dz = 2πi(−i/4) = π/2
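A direct numerical check of this answer, using scipy’s quadrature (an assumption on our part: scipy is not part of these notes):

```python
import numpy as np
from scipy.integrate import quad

# Integrate (x + 1)/(x^4 + 2x^2 + 1) over the whole real line and compare
# with the contour-integration answer pi/2.
val, est_err = quad(lambda x: (x + 1) / (x**4 + 2 * x**2 + 1), -np.inf, np.inf)

print(val)  # approximately 1.5707963..., i.e. pi/2
```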
Appendices
A | Technical Proofs
So z ∉ K. As such, B_ε(w) ⊂ K^c.

Therefore, we have shown that for any w ∈ K^c, there is a ball B_ε(w) which is contained in K^c. So K^c is open, and therefore K is closed.
It is a neat fact, derived from the fact that C is topologically the same as R2 , that C is complete: every
Cauchy sequence in C converges.
Theorem A.1.1: Extreme Value Theorem
Suppose f (z) is continuous on C. Let K ⊂ C be closed and bounded. Then there exists M ∈ R such
that |f (z)| ≤ M for all z ∈ K. I.e., f (z) is bounded on K.
Proof. Suppose f(z) is not bounded on K. I.e., for any N ∈ R, there exists z ∈ K such that |f(z)| ≥ N.

Define K_n = {z ∈ K | |f(z)| > 2^n}. We know that K_n is non-empty for each n. For each n, choose z_n ∈ K_n.

Since K is bounded, there exist some a < b and c < d so that K ⊂ D = {z ∈ C | a ≤ Re(z) ≤ b, c ≤ Im(z) ≤ d}.

Now, the z_n take infinitely many distinct values. (Since for any z ∈ C there exists m with |f(z)| < 2^m, for any n there exists m such that z_n ≠ z_j for all j > m.)
We cut D into four regions:
{z ∈ C | a ≤ Re(z) ≤ (a+b)/2, c ≤ Im(z) ≤ (c+d)/2}
{z ∈ C | a ≤ Re(z) ≤ (a+b)/2, (c+d)/2 ≤ Im(z) ≤ d}
{z ∈ C | (a+b)/2 ≤ Re(z) ≤ b, c ≤ Im(z) ≤ (c+d)/2}
{z ∈ C | (a+b)/2 ≤ Re(z) ≤ b, (c+d)/2 ≤ Im(z) ≤ d}
Since there are an infinite number of zn and only four regions, one of these four rectangles contains an
infinite number of the zn . Set D1 to be that region.
Repeat this procedure with D_1 to find another set D_2 ⊂ D_1 containing an infinite number of the z_n. Repeat to find D_j such that D_j ⊂ D_{j−1} and D_j contains an infinite number of the z_n.

From each D_j, choose one of the z_n in D_j ∩ K_j, which we will call w_j. This intersection is non-empty since D_j contains an infinite number of the z_n, and only finitely many of the z_n are not in K_j.
I claim that the wj converge. To prove this, we will need to use a fact about C: C is a complete metric
space. This means that every Cauchy sequence in C converges. So, if we can prove that the sequence
w1 , w2 , . . . is Cauchy, then we know it converges.
This turns out to be easy. Let ε > 0. In a rectangle with side lengths r, s, the greatest distance between any two points in the rectangle is given by considering the distance between two opposite vertices. This is √(r² + s²). By our construction, the rectangle D_j has side lengths 2^{−j}(b−a) and 2^{−j}(d−c). Therefore, choose N such that 2^{−N}√((b−a)² + (d−c)²) < ε.

Then for any n, m > N, we have that w_n ∈ D_n ⊂ D_N and w_m ∈ D_m ⊂ D_N. As such:

|w_n − w_m| ≤ 2^{−N}√((b−a)² + (d−c)²) < ε
So the sequence is Cauchy, and hence converges to some w. Furthermore, since K is closed, we know that
w ∈ K.
To finish up, we use that f(z) is continuous. This tells us that lim_{j→∞} f(w_j) = f(w), and so |f(w)| = lim_{j→∞} |f(w_j)|.

However, we know that for any N ∈ N, if j > N then w_j ∈ D_N ∩ K_N, and so |f(w_j)| > 2^N. As such, lim_{j→∞} |f(w_j)| = ∞. But this is impossible, since the limit must equal the finite number |f(w)|. Contradiction.
Therefore, f is bounded on K.
A.2 Series
Theorem A.2.1
Absolutely convergent series converge.
Proof. Suppose ∑_{n=k}^∞ a_n converges absolutely. To show that it converges, we need to show that the sequence of partial sums (S_n)_{n=k}^∞ converges. As mentioned in subsection A.1 of these appendices, this is equivalent to showing that the sequence is Cauchy.

I.e., we need to show that we can force |S_n − S_m| = |∑_{j=m+1}^n a_j| to be very small (here n > m).

Let T_n = ∑_{j=k}^n |a_j| denote the partial sums of the absolute series. Since the series converges absolutely, (T_n) converges and is therefore Cauchy: for any ε > 0 there exists N such that n, m > N implies |T_n − T_m| < ε. By the triangle inequality:

|∑_{j=m+1}^n a_j| ≤ ∑_{j=m+1}^n |a_j|

And so |S_n − S_m| ≤ |T_n − T_m|. Therefore, if n, m > N, then |S_n − S_m| ≤ |T_n − T_m| < ε. This proves that the sequence (S_n)_{n=k}^∞ is Cauchy, and therefore converges as well.
For some of our theorems on analytic functions, we need to discuss the notion of uniformly convergent series.
Definition A.2.1: Uniform Convergence
Suppose (f_n)_{n=k}^∞ is a sequence of functions, defined on some set S. We say that f_n converges uniformly to f on S if:

∀ε > 0, ∃N ∈ N such that for all n > N and all z ∈ S, |f_n(z) − f(z)| < ε

Notice the order of the quantifiers here. This is saying that N doesn’t depend on z. So not only does the sequence converge to f at each point, but it converges at roughly the same speed everywhere. More precisely, there’s some lower bound to how fast the sequence converges at each point.

Why is this important? Well, it lets us get away with swapping some integrals and limits. In particular, it lets us do the following:
Theorem A.2.2
Suppose f_n are all continuous on the piecewise smooth curve γ and f_n → f uniformly on γ. Then:

lim_{n→∞} ∫_γ f_n(z) dz = ∫_γ f(z) dz
Proof. Since |∫_γ f_n(z) dz − ∫_γ f(z) dz| ≤ ∫_γ |f_n(z) − f(z)| dz, it suffices to prove that:

lim_{n→∞} ∫_γ |f_n(z) − f(z)| dz = 0

Let ε > 0. By uniform convergence, there exists N ∈ N such that n ≥ N implies |f_n(z) − f(z)| ≤ ε/L(γ) for every z on γ, where L(γ) is the length of γ.

Now, by M-L estimation, ∫_γ |f_n(z) − f(z)| dz ≤ (ε/L(γ)) · L(γ) = ε.

So, we have shown that ∀ε > 0 there exists some N ∈ N such that n > N implies |∫_γ |f_n(z) − f(z)| dz − 0| ≤ ε. As such, lim_{n→∞} ∫_γ |f_n(z) − f(z)| dz = 0.

By the squeeze theorem, lim_{n→∞} |∫_γ f_n(z) dz − ∫_γ f(z) dz| = 0, and so lim_{n→∞} ∫_γ f_n(z) dz = ∫_γ f(z) dz.
All that remains for our purposes is to show that ∑_{n=0}^∞ z^n converges uniformly on |z| ≤ R for any R < 1.
Theorem A.2.3
Let 0 ≤ R < 1. Then ∑_{n=0}^∞ z^n converges uniformly to 1/(1−z) on |z| ≤ R.
Proof. Let D = {z ∈ C | |z| ≤ R}. Let S_n(z) = ∑_{j=0}^n z^j. Recall that S_n(z) = (1 − z^{n+1})/(1 − z). So, to show that the sum converges uniformly, we need to show that (1 − z^{n+1})/(1 − z) converges uniformly to 1/(1 − z).

Notice that |S_n(z) − 1/(1−z)| = |z^{n+1}/(1−z)|. Now, since |z| ≤ R < 1, we have that |1 − z| ≥ 1 − R > 0. As such, |z^{n+1}/(1−z)| ≤ R^{n+1}/(1−R).

Let ε > 0. Since lim_{n→∞} R^{n+1}/(1−R) = 0, there exists N ∈ N such that if n > N, then R^{n+1}/(1−R) < ε.

However, this also gives that |S_n(z) − 1/(1−z)| ≤ R^{n+1}/(1−R) < ε for all z ∈ D. As such, the series converges uniformly to 1/(1−z) on D.
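The uniform bound R^{n+1}/(1−R) in this proof can be observed numerically: sample points in the disk |z| ≤ R and compare the worst partial-sum error with the bound. A sketch assuming numpy:

```python
import numpy as np

# Compare the sup of |S_n(z) - 1/(1-z)| over sampled points with |z| <= R
# against the uniform bound R^(n+1)/(1 - R) used in the proof.
R, n = 0.8, 25
rng = np.random.default_rng(0)
pts = rng.uniform(-R, R, 4000) + 1j * rng.uniform(-R, R, 4000)
pts = pts[np.abs(pts) <= R]  # keep points in the closed disk of radius R

S_n = sum(pts**j for j in range(n + 1))  # partial sums of the geometric series
worst_err = np.max(np.abs(S_n - 1.0 / (1.0 - pts)))
bound = R**(n + 1) / (1.0 - R)

print(worst_err <= bound)  # True
```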
In actuality, we need that ∑_{n=0}^∞ f(z)(w − z_0)^n/(z − z_0)^{n+1}, where |w − z_0| < |z − z_0|, converges uniformly to f(z)/(z − w) for z on a simple, closed curve γ. Since f(z) is bounded on the curve, this is a simple modification of the above argument.
Index
L’Hopital’s Rule, 95
Laurent Series, 98
Limit
    R2, 31
    at infinity, 38
    formal definition, 31
    infinite, 38
    laws, 33
Line Integral, 55
Liouville’s Theorem, 77
Logarithm, 26
    principal, 29
M-L estimate, 64
path, 43
Pole, 67
    simple, 67
Polynomial, 20
    degree, 20
Positive number, 20
Power series, 84
    radius of convergence, 86
    ratio test, 86
Primitive, 57
Removable Discontinuity, 67
Residue, 102
Residue Theorem, 103
Roots, 16
    geometry, 17
    of unity, 17
Sequence, 80
Series
    absolutely convergent, 82
    divergence test, 81
    geometric, 80
    Laurent series, 98
    power series, 84
    ratio test, 82
Simple Pole, 67
Topology
    closed set, 111
    connected set, 44
    open ball, 42
    open set, 43
    path, 43
Zero of order n, 95