Matrices, Linearization, and the Jacobi matrix


$$\begin{cases} dx/dt = f(x, y) \\ dy/dt = g(x, y) \end{cases} \qquad J = \begin{pmatrix} \partial_x f & \partial_y f \\ \partial_x g & \partial_y g \end{pmatrix} \qquad \lambda_{1,2} = \frac{\mathrm{tr} \pm \sqrt{\mathrm{tr}^2 - 4\det}}{2}$$

Alexander V. Panfilov, Kirsten H.W.J. ten Tusscher & Rob J. de Boer

Theoretical Biology, Utrecht University, 2021



© Utrecht University, 2021

Ebook publicly available at: [Link]/rdb/books/[Link]


Contents

1 Preface

2 Vectors and Matrices
   2.1 Scalars, Vectors and Matrices
   2.2 Matrices and systems of equations
   2.3 Forest succession
   2.4 Eigenvalues and eigenvectors
   2.5 Exercises

3 Systems of two differential equations
   3.1 Solutions of Linear 2D Systems
   3.2 Exercises

4 Linear approximation of non-linear 2D systems
   4.1 Partial derivatives
   4.2 Linearization of a system of ODEs: Jacobian
   4.3 Example
   4.4 Exercises

5 Efficient analysis of 2D ODEs
   5.1 Determinant-trace method
   5.2 Graphical Jacobian
   5.3 Plan of qualitative analysis
   5.4 Exercises

6 Lotka Volterra model
   6.1 Exercises

7 Complex numbers
   7.1 Complex valued eigenvalues of ODEs
   7.2 Example: the Lotka Volterra model
   7.3 Exercises

8 Answers for exercises
   Exercises Chapter 2
   Exercises Chapter 3
   Exercises Chapter 4
   Exercises Chapter 5
   Exercises Chapter 6
   Exercises Chapter 7
Chapter 1

Preface

This reader provides an introduction to the analysis of systems of non-linear differential equations. It starts by introducing the concept of a matrix and its eigenvalues and eigenvectors. Next we show that a linear system of differential equations can be written in matrix notation, and that its solution can be written as a linear combination of the eigenvalues and eigenvectors of that matrix. Subsequently, we show that systems of non-linear ordinary differential equations (ODEs) can be linearized around steady states by taking partial derivatives. Writing these in the form of a matrix, the eigenvalues can be used to determine the stability of these equilibrium points.

Finally, we explain an approach to determine these partial derivatives graphically from the vector
field in the phase portrait, and provide an introduction to complex numbers (as we regularly
encounter complex eigenvalues in systems of ODEs).

This reader was largely compiled from an earlier and much more extensive reader called “Qualitative analysis of differential equations” that was written by Alexander Panfilov (2010) ([Link]) at the time he was teaching mathematical biology to biology students at Utrecht University. That reader was later adapted by Kirsten ten Tusscher and by Levien van Zon. I have shortened and simplified the text, made the notation consistent with the reader “Biological Modeling of Populations”, added a few examples, and the full linearization of the stable spiral point of the Lotka Volterra model (Chapters 6 and 7). The Grind scripts referred to in the exercises can be found on [Link]/rdb/bm/models.

January 5, 2021

Rob J. de Boer,
Theoretical Biology, Utrecht University.
Chapter 2

Vectors and Matrices

2.1 Scalars, Vectors and Matrices

Variables that have only one property can be described by a single number, also called a scalar. Examples are the number of individuals in a population, N, or the concentration of a chemical compound in a vessel, C. If a variable has several properties, one can use a vector to describe it (Fig. 2.1). Vectors are used to describe the forces acting on an object, or the speed and direction with which an object moves, or the number of predators and prey in some area. Mathematically, vectors are written either as a row, or a column, of numbers between brackets, and are then referred to as row or column vectors. For example, in a two-dimensional plane in which the force acting on an object has an x-component $V_x = 2$ and a y-component $V_y = 1$, this force can be represented as:

$$\vec V = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \quad \text{or} \quad \vec V = (2\ \ 1) .$$

Figure 2.1: Vectors in a 2-dimensional plane: the scaling of a 2D vector, the addition of two 2D vectors, and finally the rotation of a 2D vector by multiplying it with a transformation matrix T.

The length of the force vector is given by $|\vec V| = \sqrt{2^2 + 1^2}$, whereas the direction of the force
vector is 2 steps (in a certain unit) in the positive x-direction (to the right) and 1 step in the
positive y-direction (upward). The simplest operation that can be performed on a vector is
multiplication by a scalar. As the word scalar implies, this simply results in the scaling of the
size of the vector, without changing its direction (Fig. 2.1):
     
$$0.5\vec V = 0.5 \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 0.5 \times 2 \\ 0.5 \times 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0.5 \end{pmatrix} .$$

An example would be a car that keeps driving in the same direction, but halves its speed.

Another operation is adding two or more vectors, for example to determine the net resultant
force from the sum of all forces acting on an object. Vector addition is achieved by adding up
the corresponding elements of the different vectors (Fig. 2.1). Addition can only be performed
on vectors that are of the same size (have same number of elements):
       
$$\vec V + \vec W = \begin{pmatrix} 2 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 2+1 \\ 1+3 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \end{pmatrix} .$$

A more complex operation is the rotation of a vector. Such a rotation can be obtained by multiplying the vector by a so-called matrix: $A\vec V = \vec W$, where $\vec V$ is the original vector, $\vec W$ is the new resulting vector, and A is the matrix performing the rotation (Fig. 2.1).

Before explaining this in more detail, let us first introduce the concept of a matrix. Mathemati-
cally speaking, matrices are written as a block of n rows and m columns of numbers, all between
brackets:

$$A = \begin{pmatrix} 1 & 4 & 5 \\ 2 & 6 & 10 \end{pmatrix} .$$
This particular matrix A has two rows and three columns, i.e., it has a size of 2 × 3. Matrices
can be used to store and represent data sets. An example would be an experiment in which the
expression of a large set of genes is measured over a range of different conditions. By using the
rows to represent the different genes and the columns to represent the different conditions, each
matrix element would reflect the expression level of a single gene under a particular condition.

As for vectors, the simplest operation that can be performed on a matrix is the multiplication
by a scalar. This is done by multiplying each individual element of the matrix with the scalar.
For a general 2 × 2 matrix this can be written as:
   
$$\lambda \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \lambda a & \lambda b \\ \lambda c & \lambda d \end{pmatrix} .$$

A simple example would be the rescaling of experimentally measured fluorescence. A matrix with fluorescence values should then be multiplied by a factor that translates all fluorescence
levels into gene expression levels. Like vectors, two matrices A and B can be added up into a
new matrix C only if they are of the same size. Both the number of rows and the number of
columns should be equal. Matrix addition can then be performed by adding up the corresponding
elements of the two matrices:
       
$$\begin{pmatrix} 1 & 4 & 5 \\ 2 & 6 & 10 \end{pmatrix} + \begin{pmatrix} 2 & 1 & 4 \\ 1 & 3 & 5 \end{pmatrix} = \begin{pmatrix} 1+2 & 4+1 & 5+4 \\ 2+1 & 6+3 & 10+5 \end{pmatrix} = \begin{pmatrix} 3 & 5 & 9 \\ 3 & 9 & 15 \end{pmatrix} .$$

Figure 2.2: Scaling and shearing by matrix transformations of vectors. Left panel: complex scaling of a vector, $\vec w = T\vec v = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.25 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0.5a \\ 0.25b \end{pmatrix}$. Right panel: shearing of a vector parallel to the x-axis, $\vec w = T\vec v = \begin{pmatrix} 1 & 0.2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a + 0.2b \\ b \end{pmatrix}$.

For two general 2 × 2 matrices, this can be written as:


     
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} a+x & b+y \\ c+z & d+w \end{pmatrix} .$$

For the elements in a matrix C = A + B, this can be written as $C_{ij} = A_{ij} + B_{ij}$, where $C_{ij}$ is the value in matrix C at row i and column j.

Finally, one can multiply a matrix A with a matrix B to obtain a new matrix C (if the number
of columns in the first matrix is equal to the number of rows in the second matrix). Matrix
multiplication is defined as the products of the rows of the first matrix with the columns of the
second matrix. Thus, to find the element in row i and column j of the final matrix one needs
to multiply the ith row of the first matrix by the jth column of the second matrix¹:

$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} . \tag{2.1}$$

For a product of two 2 × 2 matrices this gives:


    
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} ax+bz & ay+bw \\ cx+dz & cy+dw \end{pmatrix} , \tag{2.2}$$

and from this it follows that multiplication of a matrix by a column vector (which is a matrix with only one column) is given by:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax+by \\ cx+dy \end{pmatrix} . \tag{2.3}$$
Note that in Eq. (2.2) the multiplication with the first matrix A produces a transformation of the second matrix. In other words, the first matrix is the transformation matrix, and the second matrix (or vector) is the one being transformed. Matrix multiplications are not commutative, i.e., $A \times B \neq B \times A$. This means that if transformations are applied in a different order, a different outcome will be produced.
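Since these rules are easy to mistype, a minimal sketch in R (the language of the scripts used in this reader) may help; the matrices here are arbitrary examples, and note that R uses %*% for matrix multiplication whereas * multiplies element-wise:

```r
A <- matrix(c(1, 3, 2, 4), nrow = 2)  # R fills by column: rows (1 2) and (3 4)
B <- matrix(c(0, 1, 1, 0), nrow = 2)  # the row-swapping matrix (0 1; 1 0)
2 * A        # scalar multiplication scales every element
A + B        # element-wise addition of equally sized matrices
A %*% B      # matrix product, Eq. (2.1)
B %*% A      # differs from A %*% B: multiplication is not commutative
```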

A vector can be rotated and/or scaled by a matrix. Scaling a vector by a different amount in
the x and y directions can also be performed by a transformation matrix (see Fig. 2.2):
    
$$\begin{pmatrix} w_x \\ w_y \end{pmatrix} = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} .$$
¹The notation in Eq. (2.1) may require some explanation. The sum-sign Σ basically says: add up a number of expressions, whereby k will be 1 in the first expression, 2 in the second, . . . , up to k = n. In this case, n is the number of columns in the first matrix, and rows in the second matrix. To obtain the value in matrix C, row i, column j, one multiplies what is in matrix A, row i, column 1 with the value in matrix B, row 1, column j. One then adds to that the product of what is in matrix A, row i, column 2 with the value in matrix B, row 2, column j, and so forth, until one has added the values for all columns and rows.

Figure 2.3: On the left the original pictures from “On Growth and Form” by D’Arcy Wentworth Thompson (1942) (which was first published in 1917). On the right pictures from a computer program of the School of Mathematics and Statistics of the University of St. Andrews in Scotland, performing similar shape transformations ([Link]). The transformation matrices used apply rotation, scaling and shearing transformations.
Similarly, a matrix can be used to apply shearing to a vector. A shear force could stretch a
vector in one direction, e.g., parallel to the x-axis (see Fig. 2.2),
    
$$\begin{pmatrix} w_x \\ w_y \end{pmatrix} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} ,$$

which results in $w_x = x + ky$ and $w_y = y$.

Note that as matrices can be multiplied with one another to obtain a new matrix, the mul-
tiplication of rotation, scaling and shearing matrices can result in a single matrix performing
a complex transformation in one go. Also note that any point (x, y) on a flat object can be
considered as a vector $\vec v = (x\ \ y)$. Thus, one can apply complex transformation matrices to
objects (basically a collection of points), to change them into objects with different orientations,
shapes and sizes. This can for example be applied to simulate or deduce the changes in shape
and size that occur during development, evolution or growth in animals and plants. Indeed, the
famous mathematician and biologist D’Arcy Wentworth Thompson used transformation matri-
ces to show how one could go from the shape of one fish species to that of another fish species,
or from the shape of a human skull to the shape of a chimpanzee skull. He called this the theory
of transformation, which he described in his 1917 book “On Growth and Form” (see Fig. 2.3
and the re-edited version of the book (Thompson, 1942)).
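As a small illustration of such shape transformations, the hedged R sketch below applies the shear matrix of Fig. 2.2 (with k = 0.2) to the corners of a unit square; the matrix and points are our own example values:

```r
# Shearing a set of points parallel to the x-axis, as in Fig. 2.2.
Tshear <- matrix(c(1, 0, 0.2, 1), nrow = 2)   # the matrix (1 0.2; 0 1)
P <- rbind(c(0, 1, 1, 0),                     # x-coordinates of a square
           c(0, 0, 1, 1))                     # y-coordinates of a square
Tshear %*% P   # each column is a sheared corner: x becomes x + 0.2y
```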

2.2 Matrices and systems of equations

In this course we will use matrices to write down systems of linear equations. For instance,
consider

$$\begin{cases} x - 2y = -5 \\ 2x + y = 10 \end{cases} , \tag{2.4}$$

and write the coefficients in front of x and y in the left hand side as a square matrix:

$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix} .$$

We also have two numbers in the right hand side which one can write as a vector, i.e., $\vec V = (-5\ \ 10)$. Now if one writes x and y as a vector $\vec X = (x\ \ y)$, one can represent the system of Eq. (2.4), using the definition of matrix multiplication in Eq. (2.3), as

$$A\vec X = \vec V \quad\text{or}\quad \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -5 \\ 10 \end{pmatrix} .$$

To find the solution of this system one solves x − 2y = −5 to obtain x = 2y − 5. Substituting this into 2x + y = 10 gives 2(2y − 5) + y = 10, 4y − 10 + y = 10, 5y = 20, and finally y = 4. Substituting this into x = 2y − 5 gives x = 2 × 4 − 5 = 3. Thus, the solution is $(x\ \ y) = (3\ \ 4)$.
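For readers who want to verify this numerically, a small R sketch using the built-in solve() (which returns the vector X with A %*% X equal to V):

```r
A <- matrix(c(1, 2, -2, 1), nrow = 2)   # the matrix (1 -2; 2 1) of Eq. (2.4)
V <- c(-5, 10)
solve(A, V)    # returns (3, 4), i.e., x = 3 and y = 4
det(A)         # 1*1 - (-2)*2 = 5, non-zero, so a unique solution exists
```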

Let us define the trace and the determinant as two important properties of square matrices. For
the 2 × 2 matrices that we consider in this course these properties are defined as:
 
$$\det[A] = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \quad\text{and}\quad \mathrm{tr}[A] = a + d . \tag{2.5}$$

Determinants were invented to study whether a system of linear equations can be solved. It can be shown that solutions like the one obtained above are only possible when $\det[A] \neq 0$. To see why this is the case consider the general linear system

$$\begin{cases} ax + by = p \\ cx + dy = q \end{cases} .$$

Start by solving x from the first equation, i.e., x = p/a − by/a. Use this solution to solve y from
the second equation, i.e., substitution gives cp/a − cby/a + dy = q, or

$$cp - cby + ady = qa \quad\text{or}\quad y(ad - cb) = qa - cp \quad\text{or}\quad y = \frac{qa - cp}{ad - cb} ,$$
which only has a finite solution when the denominator ad − cb is not equal to zero. This
denominator indeed corresponds to the determinant ad − bc of the matrix defining this linear
system. Thus, there is no solution when the determinant equals zero. Note that one can also
calculate the determinant of 3 × 3 matrices, or even larger square matrices (see Wikipedia or any book on linear algebra), but this lies outside the scope of this course. Finally, typing
determinant {{a,b},{c,d}} (or even just {{a,b},{c,d}}) into WolframAlpha gives you the
determinant of the matrix.

2.3 Forest succession

A simple example of a matrix model in ecology is the model developed by Henry Horn, who
studied ecological succession in forests in the USA (Horn, 1975). He recorded the different
species of saplings that were present under each species of tree, and assumed that each tree
would be replaced by another tree at a rate proportional to these sapling densities. Taking time
steps of 50 years he also estimated the specific rate of survival of each tree species. This resulted
in the following table:

                 Gray Birch   Blackgum   Red Maple   Beech
   Gray Birch      0.05         0.01       0           0
   Blackgum        0.36         0.57       0.14        0.01
   Red Maple       0.5          0.25       0.55        0.03
   Beech           0.09         0.17       0.31        0.96

with columns summing up to one. Each diagonal element gives the probability that after 50
years a tree is replaced by a tree of the same species (which is the sum of its survival rate (still
standing), and the rate of replacement by itself). Each off-diagonal element in this table gives
the probability that a particular species is replaced by another species. For example, the fraction
of Red Maple trees after 50 years would be 0.5 times the fraction of Gray Birch trees, plus 0.25
times the fraction of Blackgum trees, plus 0.55 times the fraction of Red Maples, plus 0.03 times
the fraction of Beech trees. He actually measured several more species, but we give the major
four species here for simplicity.

This data can be written as a square matrix:


 
$$A = \begin{pmatrix} 0.05 & 0.01 & 0 & 0 \\ 0.36 & 0.57 & 0.14 & 0.01 \\ 0.5 & 0.25 & 0.55 & 0.03 \\ 0.09 & 0.17 & 0.31 & 0.96 \end{pmatrix} ,$$

and the current state of the forest as a column vector. For instance, $\vec V_0 = (1\ \ 0\ \ 0\ \ 0)$, which would be a monoculture of just Gray Birch trees. After 50 years the next state of the forest is defined by the multiplication of the initial vector by the matrix:

$$\vec V_{50} = A\vec V_0 = (0.05\ \ 0.36\ \ 0.5\ \ 0.09) , \tag{2.6}$$

which is a forest with 5% Gray Birch, 36% Blackgum, 50% Red Maple, and 9% Beech trees. Check for yourself that we indeed obey the normal rule of matrix multiplication, and that we obtain the first column of the matrix A because we start with a forest that is just composed of Birch trees. The next state of the forest is

$$\vec V_{100} = A\vec V_{50} = (0.0061\ \ 0.2941\ \ 0.3927\ \ 0.3071) , \tag{2.7}$$
and so on. Fig. 2.4a shows a time course that is obtained by applying this transformation
25 times, which reveals that this model describes the succession of these types of forest quite
realistically (Horn, 1975).

What would we now predict for the ultimate state (climax state) of this forest, and would that depend on the initial state? Actually, we can already see from the last two equations that $\vec V_{100} = A\vec V_{50} = A^2\vec V_0$, and hence that after 5000 years, i.e., 100 intervals of 50 years, the state of the forest is given by $\vec V_{5000} = A^{100}\vec V_0$, where

$$A^{100} = \begin{pmatrix} 0.005 & 0.005 & 0.005 & 0.005 \\ 0.048 & 0.048 & 0.048 & 0.048 \\ 0.085 & 0.085 & 0.085 & 0.085 \\ 0.866 & 0.866 & 0.866 & 0.866 \end{pmatrix} .$$


Now consider an arbitrary vector $\vec V = (x\ \ y\ \ z\ \ w)$, where w = 1 − x − y − z, and notice that

$$A^{100} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 0.005(x+y+z+w) \\ 0.048(x+y+z+w) \\ 0.085(x+y+z+w) \\ 0.866(x+y+z+w) \end{pmatrix} = \begin{pmatrix} 0.005 \\ 0.048 \\ 0.085 \\ 0.866 \end{pmatrix} ,$$
Figure 2.4: Repeated transformation of a vector by a matrix produces a vector with a direction closer and closer to the dominant eigenvector of that matrix (in this case the eigenvector corresponding to the largest eigenvalue). Panel (a) depicts the vectors $\vec V$ representing the frequencies of four trees in a forest as a time plot. Starting with $\vec V_0 = (1\ 0\ 0\ 0)$ the climax $\vec V_\infty = (0.005\ 0.048\ 0.085\ 0.866)$ is approached after about 800 years. This climax corresponds to the dominant eigenvector of the transition matrix observed by Horn (1975). Panel (b) depicts a few iterations with the transformation matrix $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ with eigenvalues λ1 = 3 and λ2 = −1, and corresponding eigenvectors $\vec V_1 = \binom{1}{1}$ and $\vec V_2 = \binom{-1}{1}$. Starting with the red vector $\vec v_0 = \binom{0}{1}$ we obtain the green $\vec v_1 = A\vec v_0 = \binom{2}{1}$. Applying the same transformation again, we obtain the blue $\vec v_2 = A\vec v_1 = \binom{4}{5}$, and next the orange $\vec v_3 = A\vec v_2 = \binom{14}{13}$. Note that the direction of these vectors approaches the eigenvector $\vec V_1 = \binom{1}{1}$. Panel (a) was made with the R script horn.R and Panel (b) with eigen.R. The latter script allows one to do more than just three iterations and to rescale the length of the vectors to one.

meaning that the succession converges into the climax state. Since the columns of the matrix $A^{100}$ are almost identical, we obtain that the state of the forest after 5000 years hardly depends on the initial vector. Next we will show that this climax vector is an eigenvector of the matrix A.
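The iteration itself is easy to reproduce; the reader's own script horn.R does this more completely, but a minimal R sketch is:

```r
# Repeatedly apply Horn's transition matrix to a Gray Birch monoculture.
A <- matrix(c(0.05, 0.36, 0.50, 0.09,    # column 1: fate of Gray Birch
              0.01, 0.57, 0.25, 0.17,    # column 2: fate of Blackgum
              0,    0.14, 0.55, 0.31,    # column 3: fate of Red Maple
              0,    0.01, 0.03, 0.96),   # column 4: fate of Beech
            nrow = 4)
V <- c(1, 0, 0, 0)                       # start: 100% Gray Birch
for (i in 1:100) V <- A %*% V            # 100 steps of 50 years each
round(V, 3)                              # approaches (0.005 0.048 0.085 0.866)
```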

2.4 Eigenvalues and eigenvectors

We have learned above that using matrices one can transform vectors, and change both their
length and their direction. It turns out that for each particular matrix there exists a set of
special vectors called its “eigenvectors”. Applying the matrix to an eigenvector will only change
its length, and not its direction. Note that this is the same effect as multiplying the vector with a
scalar, i.e., the transformation matrix in fact behaves as a scalar when applied to an eigenvector.
The factor by which the eigenvector changes size when the matrix is applied to it, i.e., the scaling
factor, is called the corresponding “eigenvalue”. Each eigenvector will have its own eigenvalue,
and an n-dimensional matrix can maximally have n eigenvectors and eigenvalues.

Formally, one can write this as follows

Av = λv . (2.8)

which says that for a certain vector v, application of the transformation matrix A results in the
scaling of this vector by an amount λ. Thus, v is an eigenvector and λ is the corresponding
eigenvalue of transformation matrix A. Note that eigenvectors are not unique, in the sense that
one can always multiply them by an arbitrary constant k to obtain another eigenvector, i.e.,

$$kAv = k\lambda v \quad\text{or}\quad A(kv) = \lambda(kv) , \tag{2.9}$$

and therefore one can say that kv is also an eigenvector of the matrix A in Eq. (2.8), corresponding to eigenvalue λ.

What is the use of knowing eigenvalues and eigenvectors of a matrix? We have seen in the forest-
succession example that eigenvectors give the principal directions of change imposed by a matrix.
Eigenvalues give the amount of change in each of these directions. Knowing the eigenvectors,
one can to a large extent predict the effect of a matrix on a vector, and hence predict the
behavior of the system, as it will be rotated into the direction of the dominant eigenvector (see
Fig. 2.4). Technically, the dominant eigenvector is the eigenvector associated with the eigenvalue that is largest when the eigenvalues are ranked by their absolute values; below we will use the term largest eigenvalue to denote the eigenvalue that is largest by its real value. Finding eigenvalues
and eigenvectors is one of the most important problems in applied mathematics. It arises in
many biological applications, such as population dynamics, biostatistics, bioinformatics, image
processing and many other fields. In this course we will only use it for the solution of systems
of differential equations, which is considered in Chapter 3.

Let us now explain the general approach for solving the eigenvalues λ and eigenvectors v of an
arbitrary 2 × 2 matrix. Thus, consider the problem
    
$$Av = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix} . \tag{2.10}$$
First rewrite this into a system of two equations with three unknowns λ, x, y:

$$\begin{cases} ax + by = \lambda x \\ cx + dy = \lambda y \end{cases} , \tag{2.11}$$

which can be further rewritten as:

$$\begin{cases} (a - \lambda)x + by = 0 \\ cx + (d - \lambda)y = 0 \end{cases} \quad\text{or in matrix form}\quad \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} . \tag{2.12}$$
This system always has a solution x = y = 0. However, eigenvectors are defined to be non-zero
vectors, and hence the (0 0) solution does not correspond to an eigenvector. In order to find
non-zero solutions, let us first cancel y from the system, by multiplying the first equation by
d − λ, the second equation by b, and then subtract them. Multiplication gives:
 
$$\begin{cases} (d - \lambda)[(a - \lambda)x + by] = 0 \\ b[cx + (d - \lambda)y] = 0 \end{cases} \quad\text{or}\quad \begin{cases} (d - \lambda)(a - \lambda)x + (d - \lambda)by = 0 \\ bcx + b(d - \lambda)y = 0 \end{cases} .$$
Subtracting the second equation from the first then gives:

[(d − λ)(a − λ) − bc]x = 0 ,

and given that $x \neq 0$ one obtains

$$(d - \lambda)(a - \lambda) - bc = \lambda^2 - (a + d)\lambda + (ad - cb) = \lambda^2 - \mathrm{tr}\,\lambda + \det = 0 , \tag{2.13}$$

which is a quadratic equation, with two possible solutions λ1 and λ2 found using the classical
‘abc’-formula,

$$\lambda_{1,2} = \frac{\mathrm{tr} \pm \sqrt{\mathrm{tr}^2 - 4\det}}{2} . \tag{2.14}$$

Eq. (2.13), or the equivalent Eq. (2.14), is called the “characteristic equation”. In general, for
an n × n matrix there are maximally n solutions for λ.

For example, use this approach to find the eigenvalues of the following matrix
 
$$\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} ,$$

with tr = 2 and det = 1 − 4 = −3. Using the characteristic equation one writes

$$\lambda_{1,2} = \frac{2 \pm \sqrt{4 + 12}}{2} = \frac{2 \pm \sqrt{16}}{2} ,$$
giving λ1 = 3 and λ2 = −1. As a next step we have to find the eigenvectors belonging to these
two eigenvalues. One can do this by substituting the eigenvalues into the original equations and
solving the equations for x and y. For the eigenvector corresponding to the eigenvalue λ1 = 3
one obtains:
  
$$\begin{cases} (1 - 3)x + 2y = 0 \\ 2x + (1 - 3)y = 0 \end{cases} \quad\text{or}\quad \begin{cases} -2x + 2y = 0 \\ 2x - 2y = 0 \end{cases} \quad\text{or}\quad \begin{cases} -2x = -2y \\ 2x = 2y \end{cases} . \tag{2.15}$$

The two equations give us the same solution: x = y. This means that v1 = (1 1) is an eigenvector
corresponding to the eigenvalue λ1 = 3. However, we can use any other value for x and hence y
as long as x = y is satisfied, which we can write as
   
$$\begin{pmatrix} x \\ y \end{pmatrix} = k \begin{pmatrix} 1 \\ 1 \end{pmatrix} , \tag{2.16}$$

where k is an arbitrary number. Eq. (2.16) thus gives us all possible solutions of Eq. (2.15). It also
illustrates a general property of eigenvectors which we have already proven in Eq. (2.9), namely
that if we multiply an eigenvector by an arbitrary number k, we will get another eigenvector
of our matrix. Using this matrix to repeatedly transform an initial vector indeed turns these
vectors into the direction of the dominant eigenvector (Fig. 2.4b).

Similarly we can find the eigenvector corresponding to the other eigenvalue λ2 = −1:
  
$$\begin{cases} (1 - (-1))x + 2y = 0 \\ 2x + (1 - (-1))y = 0 \end{cases} \quad\text{or}\quad \begin{cases} 2x + 2y = 0 \\ 2x + 2y = 0 \end{cases} \quad\text{or}\quad \begin{cases} 2x = -2y \\ 2x = -2y \end{cases} . \tag{2.17}$$

Hence the relation between x and y obeys x = −y, and for the eigenvector we could use v2 =
(−1 1).

Note, that in both cases we could have used just the first equation to find the eigenvectors. In
both cases the second equation did not provide any new information. Therefore we introduce
a simpler method for finding the eigenvectors of a general system (see Eq. (2.11)). Consider
that we found eigenvalues λ1 and λ2 from the characteristic equation, Eq. (2.13). To find the
corresponding eigenvectors, one needs to substitute these eigenvalues into the matrix and solve
the following system of linear equations (see Eq. (2.12)):

$$\begin{cases} (a - \lambda_1)x + by = 0 \\ cx + (d - \lambda_1)y = 0 \end{cases} \tag{2.18}$$

It is easy to check that the values x = −b and y = a − λ1 give the solution of the first equation

(a − λ1 )x + by = (a − λ1 )(−b) + b(a − λ1 ) = 0 ,

and substituting these expressions into the second equation provides

cx + (d − λ1 )y = −cb + (d − λ1 )(a − λ1 ) = 0 ,

which is zero because (d−λ1 )(a−λ1 )−cb = 0, in accordance with the characteristic equation, Eq.
(2.13). Therefore x = −b and y = a − λ1 give a solution of Eq. (2.18), which is an eigenvector
corresponding to the eigenvalue λ1 . Similarly we find the eigenvector corresponding to the
eigenvalue λ2 . Note that this approach will fail when both b = 0 and a − λ = 0 in Eq. (2.12).
In such cases one can use the second equation cx + (d − λ1 )y = 0, to find an eigenvector as
x = d − λ1 and y = −c. Summarizing, the final formulas are:
       
$$v_1 = \begin{pmatrix} -b \\ a - \lambda_1 \end{pmatrix} \ \&\ v_2 = \begin{pmatrix} -b \\ a - \lambda_2 \end{pmatrix} \quad\text{or}\quad v_1 = \begin{pmatrix} d - \lambda_1 \\ -c \end{pmatrix} \ \&\ v_2 = \begin{pmatrix} d - \lambda_2 \\ -c \end{pmatrix} . \tag{2.19}$$

Applying this to the example used above, i.e.,

$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \quad\text{with eigenvalues}\quad \lambda_1 = 3 \ \text{and}\ \lambda_2 = -1 ,$$

the eigenvectors can be found from Eq. (2.19) as:

$$v_1 = \begin{pmatrix} -b \\ a - \lambda_1 \end{pmatrix} = \begin{pmatrix} -2 \\ 1 - 3 \end{pmatrix} = \begin{pmatrix} -2 \\ -2 \end{pmatrix} \quad\text{and}\quad v_2 = \begin{pmatrix} -2 \\ 1 - (-1) \end{pmatrix} = \begin{pmatrix} -2 \\ 2 \end{pmatrix} ,$$

which are indeed equivalent to the eigenvectors $v_1 = \binom{1}{1}$ and $v_2 = \binom{-1}{1}$ that were obtained above.
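These results can be checked with R's eigen() function (a quick sketch; note that eigen() rescales each eigenvector to length one, which Eq. (2.9) allows):

```r
A <- matrix(c(1, 2, 2, 1), nrow = 2)
eigen(A)
# $values gives 3 and -1; the columns of $vectors are proportional
# to (1 1) and (-1 1), the eigenvectors found above.
```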

2.5 Exercises

1. We have seen that the determinant ad − cb of a general 2-dimensional linear system,

$$\begin{cases} ax + by = p \\ cx + dy = q \end{cases} ,$$

tells us whether or not the system has a solution for any p or q. Note that the two rows
define two lines, i.e., y = (p − ax)/b and y = (q − cx)/d, and that the solution corresponds
to the intersection point of these lines.
a. What is the slope of these two lines?
b. What would be a condition for them to not intersect?
c. Does this depend on p and q?

2. Find eigenvalues and eigenvectors of the following matrices:
a. $\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$
b. $\begin{pmatrix} 1 & 4 \\ 1 & 1 \end{pmatrix}$
c. $\begin{pmatrix} a & b \\ 0 & d \end{pmatrix}$

3. Study the model of Horn (1975) with the R script horn.R. Note the notation %*% for matrix multiplication!
a. Will succession in this model approach the same climax state for any initial condition?

b. Is that biologically reasonable?


4. Write the following linear system in matrix form $A\vec X = \vec V$, and compute the determinant of the matrix A to check whether or not the system has a solution.
a. $\begin{cases} 2x - 4y = 3 \\ x + y = 1 \end{cases}$
Chapter 3

Systems of two differential equations

The general form of a system of two differential equations is:

$$\begin{cases} dx/dt = f(x, y) \\ dy/dt = g(x, y) \end{cases} , \tag{3.1}$$

where x(t) and y(t) are unknown functions of time t, and f and g are functions of both x and
y. A linear example of such a system is

$$\begin{cases} dx/dt = ax + by \\ dy/dt = cx + dy \end{cases} . \tag{3.2}$$

This is called a linear system of differential equations because the equations only contain linear
terms. Solving x and y from ax + by = 0 and cx + dy = 0 reveals that a linear system always has only one steady state, $(\bar x, \bar y) = (0, 0)$ (provided the determinant is non-zero). Depending on the values and signs of the parameters
a, b, c, d these equations can describe a range of different processes. For example, consider the
specific case a = −2, b = 1, c = 1, d = −2:

$$\begin{cases} dx/dt = -2x + y \\ dy/dt = x - 2y \end{cases} \tag{3.3}$$

where x and y decay at a rate of 1 per unit of time, and are converted into one another at a rate of 1 (and, hence, each population has a total loss rate of 2 per unit of time).

3.1 Solutions of Linear 2D Systems

The analytical solution for the linear two-dimensional systems of Eq. (3.2) is known. Rather
than deriving this solution, we will simply provide it to illustrate its analogy with the solution
of linear one-dimensional systems. For one-dimensional linear systems of the form
$$\frac{dx}{dt} = ax , \quad\text{we know that}\quad x(t) = Ce^{at} , \tag{3.4}$$

is the general solution, where C is an unknown constant depending on the initial value of x (in this case C = x(0)). From this equation it follows that for a > 0, x approaches infinity over time, which means that x = 0 is an unstable equilibrium. For a < 0, x will approach zero, meaning that x = 0 is a stable equilibrium (or attractor) of this equation.

In an analogous manner, a two-dimensional system of the form

$$\begin{cases} dx/dt = ax + by \\ dy/dt = cx + dy \end{cases} ,$$

which in matrix notation can be written as

$$\begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} ,$$

has as a general solution

$$x(t) = C_1 x_1 e^{\lambda_1 t} + C_2 x_2 e^{\lambda_2 t} \quad\text{and}\quad y(t) = C_1 y_1 e^{\lambda_1 t} + C_2 y_2 e^{\lambda_2 t} ,$$

which in vector notation can be written as:

$$\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = C_1 \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} e^{\lambda_1 t} + C_2 \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} e^{\lambda_2 t} , \tag{3.5}$$

where λ1, λ2 are the eigenvalues, and $v_1 = \binom{x_1}{y_1}$ and $v_2 = \binom{x_2}{y_2}$ the corresponding eigenvectors

of the matrix A. As we saw in Chapter 2, the eigenvectors indicate the major directions of
change of the system described by the matrix, and apparently all solutions can be written as
a linear combination of the change along the two eigenvectors. Similar to the single unknown
C depending on x(0) in the one-dimensional solution of Eq. (3.4), we here have two unknowns
C1 and C2 that are defined by the initial values of x and y, i.e., x(0) = C1 x1 + C2 x2 and
y(0) = C1 y1 + C2 y2 .

Similar to the single exponent a in the solution of one dimensional linear systems, the signs
of the two eigenvalues determine the stability of the equilibrium point (0, 0). Note that the
equilibrium point will only be stable when both exponentials in the solution converge to zero,
which implies that both eigenvalues need to be smaller than zero. In case of complex-valued
eigenvalues (see Chapter 7), which occur for spiral (and center) points, the real part of the two
eigenvalues needs to be smaller than zero.

Since x(t) and y(t) grow when λ1,2 > 0 we obtain that the steady state (0, 0) is
• a stable node when both λ1,2 < 0;
• an unstable node when both λ1,2 > 0;
• an (unstable) saddle point when λ1 > 0 and λ2 < 0 (or vice versa).
When λ1,2 are complex, i.e., λ1,2 = α ± iβ, we obtain that (0, 0) is
• a stable spiral when the real part α < 0;
• an unstable spiral when the real part α > 0;
• a neutrally stable center point when the real part α = 0.

For example, derive the general solution of the system in Eq. (3.3) for the matrix

$$A = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} .$$

Since tr = −4 and det = 4 − 1 = 3 we obtain:

$$\lambda_{1,2} = \frac{-4 \pm \sqrt{16 - 12}}{2} = -2 \pm 1 \quad\text{such that}\quad \lambda_1 = -1 \ \text{and}\ \lambda_2 = -3 .$$

Hence solutions tend to zero and (x, y) = (0, 0) is a stable node. To find the eigenvector v1 we can now write

$$v_1 = \begin{pmatrix} -b \\ a - \lambda_1 \end{pmatrix} = \begin{pmatrix} -1 \\ -1 \end{pmatrix} \quad\text{or}\quad v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} ,$$

Figure 3.1: The eigenvectors and a phase portrait of the model defined by Eq. (3.3). The red line is defined by the eigenvector $v_1 = \binom{1}{1}$ and the blue line by $v_2 = \binom{-1}{1}$. The bullets are starting points of trajectories (that are depicted as black lines). Note that all trajectories approach an eigenvector and then the origin. Initial conditions starting on an eigenvector form trajectories on that eigenvector. This figure was made with the script linear.R.

and for v2 we can write

$$v_2 = \begin{pmatrix} -b \\ a - \lambda_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} ,$$

(see Fig. 3.1). We write the general solution as

$$\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = C_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + C_2 \begin{pmatrix} -1 \\ 1 \end{pmatrix} e^{-3t} \quad\text{or}\quad \begin{cases} x(t) = C_1 e^{-t} - C_2 e^{-3t} \\ y(t) = C_1 e^{-t} + C_2 e^{-3t} \end{cases} .$$
Note that the integration constants C1 and C2 can subsequently be solved from the initial condition, i.e., x(0) = C1 − C2 and y(0) = C1 + C2. One can check this solution by substituting it into Eq. (3.3) and comparing it to the derivative of the solution (see the slides and the video).
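Such a check can also be done numerically. The sketch below assumes the deSolve package is installed, and compares the numerical solution for x(0) = 1, y(0) = 0 (so that C1 = 0.5 and C2 = −0.5) with the analytical one:

```r
library(deSolve)
model <- function(t, state, parms)
  with(as.list(state), list(c(-2*x + y, x - 2*y)))   # Eq. (3.3)
out <- ode(y = c(x = 1, y = 0), times = seq(0, 5, 0.1),
           func = model, parms = NULL)
t <- out[, "time"]
# analytical solution with C1 = 0.5 and C2 = -0.5:
max(abs(out[, "x"] - (0.5*exp(-t) + 0.5*exp(-3*t))))  # ~0, up to solver error
max(abs(out[, "y"] - (0.5*exp(-t) - 0.5*exp(-3*t))))  # ~0 as well
```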

3.2 Exercises
1. Find the solution for the following initial value problem:
        
$$\begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 5 & 8 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad\text{with}\quad \begin{pmatrix} x(0) \\ y(0) \end{pmatrix} = \begin{pmatrix} 3 \\ -3 \end{pmatrix}$$
Hint: Proceed by first finding the general solution. After that, substitute t = 0 and
x(0) = 3, y(0) = −3 to find the values of constants C1 and C2 .
2. Two different concentrations of a solution are separated by a membrane through which the
solute can diffuse. The rate at which the solute diffuses is proportional to the difference
in concentrations between two solutions. The differential equations governing the process
are

$$\begin{cases} dA/dt = -\frac{k}{V_1}(A - B) \\ dB/dt = \frac{k}{V_2}(A - B) \end{cases} ,$$

where A and B are the two concentrations, V1 and V2 are the volumes of the respec-
tive compartments, and k is the rate at which the chemical exchanges between the two
compartments. Let V1 = 20 liters, V2 = 5 liters, and k = 0.2 liters/min. Start with
A(0) = 3 moles/liter and B(0) = 0, and find A(t) and B(t) as functions of time. Hint:
this exercise is similar to the previous one!
a. Find the solution of this system.
b. What is the steady state, and does this match the initial condition?
c. Is this a stable steady state?
Chapter 4

Linear approximation of non-linear 2D systems

For non-linear systems we typically do not have an analytical solution. In this chapter we will
discuss that such a system can be linearized around its steady state. One can then establish the
stability of that steady state by solving the linearized system. The analytical solution of the
approximate linear system approaches the behavior of the original system closely as long as we
remain close to an equilibrium point.

We will linearize functions, f (x), by approximating them by their local derivative for a particular
x-value, x̄ (see Fig. 4.1a). Since the derivative of a function f (x) at point x̄ can be written as

$$f'(\bar x) = \lim_{x \to \bar x} \frac{f(x) - f(\bar x)}{x - \bar x} \quad\text{or}\quad f'(\bar x) = \lim_{h \to 0} \frac{f(\bar x + h) - f(\bar x)}{h} ,$$

we can use this expression to write a linear approximation of f(x) for x close to x̄, i.e.,

$$f(x) \simeq f(\bar x) + f'(\bar x)\,(x - \bar x) \quad\text{or}\quad f(x) \simeq f(\bar x) + f'(\bar x)\,h ,$$

where h → 0. Indeed, a two-dimensional function $f(x) = ax^2 + b$ can be represented as a line in a two-dimensional plot, with the value of x on the x-axis and the value of f(x) on the y-axis (see the curved red line in Fig. 4.1a). By taking the derivative of f in a particular point x̄, i.e., $f'(\bar x)$, we obtain the slope, or tangent line, of the graph of f(x) in point x̄ (see the straight blue line in Fig. 4.1a). To explicitly write that we are taking the derivative with respect to x we can also write $f'(\bar x)$ as $\partial_x f(\bar x)$, i.e., for $f(x) = ax^2 + b$ we obtain that $\partial_x f(x) = 2ax$. The derivative can be used to approximate the curved f(x) around a particular value x̄. From Fig. 4.1a we can read that

$$f(x) \simeq f(\bar x) + \partial_x f(\bar x)\,(x - \bar x) ,$$

where x − x̄ is a small step in the x-direction that we multiply with the local slope, $\partial_x f(\bar x)$, to approximate the required change in the vertical direction. For example, with a = 2 and b = 1 and x = 3 we obtain that f(3) = 2 × 9 + 1 = 19, and that $\partial_x f(3) = 2 \times 2 \times 3 = 12$. One would estimate the value of f(3.1) by writing $f(3.1) \simeq f(3) + \partial_x f(3) \times 0.1 = 19 + 12 \times 0.1 = 20.2$, while the true value is f(3.1) = 20.22. When h → 0 this linear approximation becomes extremely good.
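This arithmetic is easy to verify in R (a trivial sketch with the same a = 2, b = 1):

```r
f  <- function(x) 2*x^2 + 1
df <- function(x) 4*x            # the derivative 2*a*x with a = 2
f(3) + df(3) * 0.1               # 20.2, the linear estimate of f(3.1)
f(3.1)                           # 20.22, the true value
```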

Figure 4.1: On the left we have the function $f(x) = ax^2 + b$ (curved red line) with its local derivative in the point x̄ depicted as $\partial_x f(\bar x) = 2a\bar x$ (straight blue line), illustrating the approximation $f(x) \simeq f(\bar x) + \partial_x f(\bar x)(x - \bar x)$. On the right we depict the function $f(x, y) = 3x - x^2 - 2xy$, with in the point $(\bar x, \bar y) = (1, 1)$ its partial derivatives $\partial_x f(x, y) = 3 - 2x - 2y = -1$ and $\partial_y f(x, y) = -2x = -2$, depicted by the heavy red and blue lines, respectively. We approximate the value of f(1.25, 1.25) by these partial derivatives (see the colored small plane), i.e., $f(1.25, 1.25) \simeq f(1, 1) + \partial_x f(1, 1) \times 0.25 + \partial_y f(1, 1) \times 0.25 = -1 \times 0.25 - 2 \times 0.25 = -0.75$. Note that the true value of f(1.25, 1.25) = −0.9375, and that the short vertical heavy purple line depicts the distance between the true and the approximated value.

4.1 Partial derivatives

Now consider the three-dimensional function $f(x, y) = 3x - x^2 - 2xy$ plotted in Fig. 4.1b with x on the x-axis, y on the y-axis, and the value of the function z = f(x, y) on the upward axis. The function value at the point $(\bar x, \bar y) = (1, 1)$ is zero, i.e., f(1, 1) = 3 − 1 − 2 = 0. We can now
linearize the function by differentiating it with respect to x and y, respectively, i.e.,

∂x f (x, y) = 3 − 2x − 2y and ∂y f (x, y) = −2x ,

because y is treated as a constant when one differentiates with respect to x, and x is taken as a
constant when we take the partial derivative with respect to y. To make this explicit one speaks
of partial derivatives of the function f (x, y).

These partial derivatives again define the local tangents of the curved function f (x, y). For
instance, in the point (x̄ = 1, ȳ = 1) the slope in the x-direction is ∂x f (1, 1) = 3−2×1−2×1 = −1
(see the heavy red line in Fig. 4.1b), and the slope in the y-direction is ∂y f (1, 1) = −2 × 1 = −2
(see the heavy blue line in Fig. 4.1b). We can use these two tangent lines to approximate f (x, y)
close to the point (x̄, ȳ), i.e.,

$$f(x, y) \simeq f(\bar x, \bar y) + \partial_x f(\bar x, \bar y)\,(x - \bar x) + \partial_y f(\bar x, \bar y)\,(y - \bar y) . \tag{4.1}$$

Because f (x̄, ȳ) = f (1, 1) = 0 the approximation would in this case simplify to

$$f(x, y) \simeq \partial_x f(\bar x, \bar y)\,(x - \bar x) + \partial_y f(\bar x, \bar y)\,(y - \bar y) , \tag{4.2}$$



where again we could write hx = x − x̄ and hy = y − ȳ to define the step sizes in the x-
direction and y-direction, respectively. For very small step sizes this should become a very good
approximation. For instance, taking a step size $h_x = h_y = 0.25$ we obtain that

$$f(x, y) \simeq -1\,h_x - 2\,h_y = -0.25 - 0.5 = -0.75$$

(see the small orange plane in Fig. 4.1). This is close to the true function value f(1.25, 1.25) = −0.9375 (the error is depicted by the short vertical purple line in Fig. 4.1).

4.2 Linearization of a system of ODEs: Jacobian

Consider a general system of two differential equations:

$$\begin{cases} dx/dt = f(x, y) \\ dy/dt = g(x, y) \end{cases} , \tag{4.3}$$

with an equilibrium point at $(\bar x, \bar y)$, i.e., $f(\bar x, \bar y) = 0$ and $g(\bar x, \bar y) = 0$. Using Eq. (4.1) we find a linear approximation of f(x, y) close to the equilibrium

$$f(x, y) \simeq 0 + \partial_x f(\bar x, \bar y)\,(x - \bar x) + \partial_y f(\bar x, \bar y)\,(y - \bar y) = \partial_x f\,(x - \bar x) + \partial_y f\,(y - \bar y) , \tag{4.4}$$

where $\partial_x f = \partial_x f(\bar x, \bar y)$ is an abbreviation for the partial derivative of f(x, y) at the steady state $(\bar x, \bar y)$. A similar approach for g(x, y) yields:

$$g(x, y) \simeq \partial_x g\,(x - \bar x) + \partial_y g\,(y - \bar y) , \tag{4.5}$$

where $\partial_x g = \partial_x g(\bar x, \bar y)$ is an abbreviation for the partial derivative of g(x, y) at the steady state $(\bar x, \bar y)$. If we now replace the right hand sides of Eq. (4.3) by their local approximations around $(\bar x, \bar y)$, i.e., Eq. (4.4) and Eq. (4.5), we obtain

$$\begin{cases} dx/dt \simeq \partial_x f\,(x - \bar x) + \partial_y f\,(y - \bar y) \\ dy/dt \simeq \partial_x g\,(x - \bar x) + \partial_y g\,(y - \bar y) \end{cases} \tag{4.6}$$

The system of Eq. (4.6) is simpler than the original system defined by Eq. (4.3), because the partial derivatives in Eq. (4.6) are constants simply representing a slope at the equilibrium point $(\bar x, \bar y)$. We therefore rewrite Eq. (4.6) into

$$\begin{cases} dx/dt = a(x - \bar x) + b(y - \bar y) \\ dy/dt = c(x - \bar x) + d(y - \bar y) \end{cases} , \tag{4.7}$$

where $a = \partial_x f$, $b = \partial_y f$, $c = \partial_x g$ and $d = \partial_y g$. As x̄ and ȳ are also constants, and hence their time derivatives dx̄/dt and dȳ/dt are zero, we can apply a trick and write

$$\frac{dx}{dt} = \frac{dx}{dt} - \frac{d\bar x}{dt} = \frac{d(x - \bar x)}{dt} \quad\text{and}\quad \frac{dy}{dt} = \frac{dy}{dt} - \frac{d\bar y}{dt} = \frac{d(y - \bar y)}{dt} ,$$

giving

$$\begin{cases} d(x - \bar x)/dt = a(x - \bar x) + b(y - \bar y) \\ d(y - \bar y)/dt = c(x - \bar x) + d(y - \bar y) \end{cases} . \tag{4.8}$$

Because x − x̄ and y − ȳ define the distances to the steady state $(\bar x, \bar y)$, we can change variables and rewrite this into the distances, $h_x = x - \bar x$ and $h_y = y - \bar y$, i.e.,

$$\begin{cases} dh_x/dt = a h_x + b h_y \\ dh_y/dt = c h_x + d h_y \end{cases} . \tag{4.9}$$

Since this has the form of a general linear system, we know the solution

$$\begin{pmatrix} h_x(t) \\ h_y(t) \end{pmatrix} = C_1 \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} e^{\lambda_1 t} + C_2 \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} e^{\lambda_2 t} ,$$

where λ1 and λ2 are the eigenvalues of the matrix defined by the four constants in Eq. (4.9), and $v_1 = \binom{x_1}{y_1}$ and $v_2 = \binom{x_2}{y_2}$ are the corresponding eigenvectors. In Chapter 3 we learned that this means that the distances to the steady state decline when λ1,2 < 0. In all other cases small disturbances around the equilibrium will grow.

Having this solution one can see that the rate at which perturbations die out is determined by
the largest eigenvalue. Hence, the return time is defined by
$$T_R = \frac{-1}{\max(\lambda_1, \lambda_2)}$$

when λ1,2 < 0 (otherwise the return time is negative and not defined).

Summarizing, to determine the behavior of a general 2D system, Eq. (4.3), around a steady state, we need to determine the values of the partial derivatives in the equilibrium point, which together constitute the matrix defining the linearized system:

$$J = \begin{pmatrix} \partial_x f & \partial_y f \\ \partial_x g & \partial_y g \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} . \tag{4.10}$$

This matrix is called the Jacobian of system Eq. (4.3) in the point $(\bar x, \bar y)$. This Jacobi matrix allows us to determine the eigenvalues and hence establish the type of the equilibrium of the original non-linear system. The approach we developed here for 2D systems is equally valid for systems composed of more than two ODEs. One just obtains a larger Jacobi matrix and computes the eigenvalues of that matrix to establish the stability, and/or type, of the equilibrium point.
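The partial derivatives in Eq. (4.10) can also be approximated numerically with finite differences. The sketch below does this in R for the function f(x, y) of Section 4.1, paired with a hypothetical g(x, y) = x − y purely for illustration:

```r
f <- function(x, y) 3*x - x^2 - 2*x*y      # the function of Fig. 4.1b
g <- function(x, y) x - y                  # hypothetical second function
numJ <- function(f, g, x, y, h = 1e-6)
  matrix(c((f(x+h, y) - f(x, y))/h, (f(x, y+h) - f(x, y))/h,
           (g(x+h, y) - g(x, y))/h, (g(x, y+h) - g(x, y))/h),
         nrow = 2, byrow = TRUE)
numJ(f, g, 1, 1)    # first row approaches (-1, -2), cf. Section 4.1
```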

4.3 Example

Let us finish with a 2-dimensional example to illustrate how this is actually performed. Consider two (biological) populations, x ≥ 0 and y ≥ 0, with a source for x, death for both, and a mass-action interaction term,

$$\frac{dx}{dt} = f(x, y) = a - bx - cxy \quad\text{and}\quad \frac{dy}{dt} = g(x, y) = dxy - ey ,$$

with a steady state $\bar x = \frac{a}{b}$ when y = 0, and a non-trivial steady state $\bar x = \frac{e}{d}$ and $\bar y = \frac{ad}{ce} - \frac{b}{c}$. To find the Jacobi matrix in these steady states, we need to take the partial derivatives of f(x, y) and g(x, y) with respect to x and y, respectively, i.e.,

$$J = \begin{pmatrix} \partial_x f & \partial_y f \\ \partial_x g & \partial_y g \end{pmatrix} = \begin{pmatrix} -b - c\bar y & -c\bar x \\ d\bar y & d\bar x - e \end{pmatrix} .$$

First consider the trivial steady state and fill in $\bar x = \frac{a}{b}$ and $\bar y = 0$:

$$J_1 = \begin{pmatrix} -b & -\frac{ca}{b} \\ 0 & \frac{da}{b} - e \end{pmatrix} .$$

Since this matrix is in a triangular form we know that the diagonal elements provide the eigenvalues, i.e., $\lambda_1 = -b$ and $\lambda_2 = \frac{da}{b} - e$. Since λ1 < 0, the trivial steady state will be stable whenever λ2 < 0, i.e., whenever $\frac{a}{b} < \frac{e}{d}$. Note this parameter condition also determines whether or not $\bar y = \frac{ad}{ce} - \frac{b}{c}$ in the non-trivial steady state is positive, i.e., the trivial steady state is stable only when ȳ < 0. Next we consider the Jacobian of the (positive) non-trivial steady state, and let us first fill in $\bar x = \frac{e}{d}$, i.e.,

$$J_2 = \begin{pmatrix} -b - c\bar y & -\frac{ce}{d} \\ d\bar y & 0 \end{pmatrix} .$$

When ȳ > 0 the signs of this matrix are given by

$$J_3 = \begin{pmatrix} -\alpha & -\beta \\ \gamma & 0 \end{pmatrix} \quad\text{with}\quad \mathrm{tr}\,J_3 = -\alpha < 0 \ \text{and}\ \det J_3 = \beta\gamma > 0 ,$$

such that

$$\lambda_{1,2} = \frac{\mathrm{tr} \pm \sqrt{\mathrm{tr}^2 - 4\det}}{2} = \frac{-\alpha \pm \sqrt{\alpha^2 - 4\beta\gamma}}{2} ,$$

implying that both eigenvalues are negative, or have a negative real part (because $\sqrt{\alpha^2 - 4\beta\gamma} < \alpha$ when the root is real), and hence for ȳ > 0 the non-trivial steady state is stable. Summarizing, we find that for $\frac{a}{b} < \frac{e}{d}$ the steady state $(\bar x, \bar y) = (a/b,\ 0)$ is stable and the non-trivial steady state is non-existent (i.e., located at negative ȳ), and that for $\frac{a}{b} > \frac{e}{d}$ the (a/b, 0) state is unstable and the $(\bar x, \bar y) = \left(\frac{e}{d},\ \frac{ad}{ce} - \frac{b}{c}\right)$ steady state is stable.
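As a numerical cross-check, one can fill in arbitrary example parameters obeying a/b > e/d and ask R for the eigenvalues of the Jacobi matrix (a sketch; the parameter values are our own choice):

```r
a <- 1; b <- 0.1; cc <- 1; d <- 1; e <- 0.5   # cc avoids masking R's c()
xbar <- e/d;  ybar <- a*d/(cc*e) - b/cc       # non-trivial steady state
J <- matrix(c(-b - cc*ybar, -cc*xbar,
              d*ybar,        d*xbar - e), nrow = 2, byrow = TRUE)
eigen(J)$values    # both negative: the non-trivial state is stable
```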

4.4 Exercises

1. Find partial derivatives of these functions. After finding derivatives evaluate their value at the given point.
a. $\partial_x z$ and $\partial_y z$ for $z(x, y) = x^2 + y^2 - 4$ at x = 1; y = 2
b. $\partial_x z$ for $z(x, y) = x(25 - x^2 - y^2)$ at x = 3; y = 4

2. Find a linear approximation for the function $f(x, y) = x^2 + y^2$ at x = 1, y = 1.

3. Find equilibria of the following non-linear systems, dx/dt = f(x, y) and dy/dt = g(x, y), and find the partial derivatives, $\partial_x f$, $\partial_y f$, $\partial_x g$ and $\partial_y g$, at each equilibrium point
a. $\begin{cases} dx/dt = -4y \\ dy/dt = 4x - x^2 - 0.5y \end{cases}$
b. $\begin{cases} dx/dt = 9x + y^2 \\ dy/dt = x - y \end{cases}$
c. $\begin{cases} dx/dt = 2x - xy \\ dy/dt = -y + y^2 x \end{cases}$

4. Find the Jacobian matrix of the non-trivial steady state of the Lotka-Volterra model, $dR/dt = aR - bR^2 - cRN$ and $dN/dt = dRN - eN$.
Chapter 5

Efficient analysis of 2D ODEs

In the previous chapters we learned how to determine the type and stability of an equilibrium
from the interaction matrix of a linear system, or by determining the Jacobian matrix of a
non-linear system in the equilibrium point. From these matrices we computed the eigenvalues
to find the stability and the type of equilibrium. Here we will demonstrate an efficient method
for determining the signs and types of the eigenvalues, and hence the type of equilibrium, from
the coefficients of the matrix without actually computing the eigenvalues.

5.1 Determinant-trace method

We have learned in the previous chapters that we can linearize the non-linear functions of any system of differential equations, e.g.,

$$\begin{cases} dx/dt = f(x, y) \\ dy/dt = g(x, y) \end{cases} ,$$

around a steady state $(\bar x, \bar y)$ into a Jacobian matrix

$$J = \begin{pmatrix} \partial_x f & \partial_y f \\ \partial_x g & \partial_y g \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} ,$$

and that the eigenvalues of this matrix are given by

$$\lambda_{1,2} = \frac{\mathrm{tr} \pm \sqrt{D}}{2} \quad\text{where}\quad D = \mathrm{tr}^2 - 4\det , \tag{5.1}$$

where tr = a + d, det = ad − bc, and D is the discriminant.

Thus, although the original linearized system depends on four parameters, a, b, c, d, the characteristic equation of Eq. (5.1) depends only on two parameters, tr[J] and det[J], and if we know the determinant and the trace of the Jacobian, we can find the eigenvalues, and hence the type of the equilibrium $(\bar x, \bar y)$. The solutions of Eq. (5.1) are the roots of a quadratic equation, and one can prove that they obey the following expressions

$$\lambda_1 + \lambda_2 = \mathrm{tr}[J] \quad\text{and}\quad \lambda_1 \times \lambda_2 = \det[J] . \tag{5.2}$$



Figure 5.1: The trace and the determinant determine the type of steady state. Plotting the trace, tr, along the horizontal axis and the determinant, det, along the vertical axis, we can plot the parabola where the discriminant D = 0. Saddle points (case 1 in the text) correspond to the lower half of the plane. Stable points (cases 3 and 5 in the text) are located in the upper left quadrant, and unstable points (cases 2 and 4) in the upper right section. The discriminant depicted by the parabola separates the real from the complex roots. For reasons of completeness we indicate “center points” along the positive part of the vertical axis where tr[J] = 0. Such steady states are neither stable nor unstable, i.e., they are said to be “neutrally stable”, and occur as bifurcation points (in proper models).

The former is true because $\lambda_1 + \lambda_2 = (\mathrm{tr} + \sqrt D + \mathrm{tr} - \sqrt D)/2 = \mathrm{tr}$, and the latter can be checked by writing

$$\frac{1}{2}(\mathrm{tr} + \sqrt D)\ \frac{1}{2}(\mathrm{tr} - \sqrt D) = \frac{1}{4}(\mathrm{tr}^2 - D) = \frac{1}{4}(\mathrm{tr}^2 - \mathrm{tr}^2 + 4\det) = \det .$$
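A quick numerical sanity check of Eq. (5.2) in R (a sketch with a random matrix; the differences should vanish up to rounding):

```r
J <- matrix(rnorm(4), nrow = 2)   # an arbitrary 2x2 matrix
ev <- eigen(J)$values             # possibly a complex pair
sum(ev) - sum(diag(J))            # ~0: eigenvalues sum to the trace
prod(ev) - det(J)                 # ~0: eigenvalues multiply to the det
```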
Remember that the steady state is only stable when both eigenvalues are negative. When det > 0
one knows that either both eigenvalues are negative, or that they are both positive (because
λ1 × λ2 = det[J]). Having det > 0 and tr < 0 one knows that they cannot be positive (because
λ1 + λ2 = tr[J]), and therefore that they are both negative. Hence the steady state has to be
stable. Summarizing a quick test for stability is tr[J] < 0 and det[J] > 0.

Although it is typically sufficient to know whether or not a steady state is stable, we can
elaborate this somewhat because the signs of the trace, determinant, and discriminant also
provide information on the type of the equilibrium (see Fig. 5.1):
1. if det < 0 then D > 0, both eigenvalues are real, with λ1,2 having unequal signs: saddle point.
2. if det > 0, tr > 0 and D > 0 the eigenvalues are real, with λ1,2 > 0: unstable node.
3. if det > 0, tr < 0 and D > 0 the eigenvalues are real, with λ1,2 < 0: stable node. The return time is defined as $T_R = -1/\lambda_{\max}$.
4. if det > 0, tr > 0 and D < 0 the eigenvalues form a complex pair (see Chapter 7),

$$\lambda_{1,2} = \frac{\mathrm{tr}}{2} \pm i\,\frac{\sqrt{-D}}{2} ,$$

and having a positive trace means that the steady state is an unstable spiral, because the real part of the eigenvalues, tr/2, is positive.
5. if det > 0, tr < 0 and D < 0 the eigenvalues form a similar complex pair, but since the real part of the eigenvalues, tr/2, now is negative, the steady state is a stable spiral point (see Chapter 7). The return time is now defined as $T_R = -1/\lambda_{\mathrm{Re}} = -2/\mathrm{tr}$.
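These five cases translate directly into a small classification function; a hedged R sketch (the function name and labels are our own, not from the reader's scripts):

```r
classify <- function(J) {
  tr <- sum(diag(J)); dt <- det(J); D <- tr^2 - 4*dt
  if (dt < 0) return("saddle point")                            # case 1
  if (D >= 0) return(if (tr < 0) "stable node" else "unstable node")
  if (tr < 0) "stable spiral" else
    if (tr > 0) "unstable spiral" else "center point"
}
classify(matrix(c(-2, 1, 1, -2), nrow = 2))   # "stable node", cf. Eq. (3.3)
```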

5.2 Graphical Jacobian

Since the stability of the steady states just depends on the signs of the determinant and the trace of the Jacobi matrix, it is often sufficient to just know the signs of the partial derivatives that make up the Jacobian. Fortuitously, the signs of the partial derivatives (+, −, 0) in the equilibrium can be obtained from the vector field around the steady state.

The main idea can be seen in Fig. 5.2, where we consider an equilibrium point (x̄, ȳ), at the
intersection of the solid dx/dt = f (x, y) = 0 nullcline, and the dashed dy/dt = g(x, y) = 0
nullcline. Using the definition of a derivative, the partial derivative ∂x f (x, y), in the equilibrium
point (x̄, ȳ), can be approximated by

$$\partial_x f(\bar x, \bar y) \simeq \frac{f(x, \bar y) - f(\bar x, \bar y)}{x - \bar x} = \frac{f(\bar x + h, \bar y)}{h} ,$$
because f (x̄, ȳ) = 0, and where h = x − x̄ is a small increase in x. This is the difference in the
f (x, y) value between the original point (x̄, ȳ), where f (x̄, ȳ) = 0, and a nearby point with a
slightly higher x value (x̄+h, ȳ), divided by the distance h = x− x̄ between these two points. The
sign of f (x̄ + h, ȳ) can be read from the vector field close to the original point (x̄, ȳ). Similarly,
since ∂y f (x̄, ȳ) ' f (x̄, ȳ + h)/h, the change in f (x, y) as a function of an increase in y is the
difference in f (x, y) value between the original point (x̄, ȳ) and a nearby point with a slightly
higher y value (x̄, ȳ + h), divided by the distance h between these two points.

In other words

$$J = \begin{pmatrix} \partial_x f \simeq \dfrac{f(\bar x + h, \bar y)}{h} & \partial_y f \simeq \dfrac{f(\bar x, \bar y + h)}{h} \\[6pt] \partial_x g \simeq \dfrac{g(\bar x + h, \bar y)}{h} & \partial_y g \simeq \dfrac{g(\bar x, \bar y + h)}{h} \end{pmatrix} = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} , \tag{5.3}$$
with tr[J] = α + δ and det[J] = αδ − βγ. Obviously, this approximation will be best if the points $(\bar x + h, \bar y)$ and $(\bar x, \bar y + h)$ are close to the equilibrium point, i.e., if h is small. Since we typically only need to know the signs of the trace and the determinant, it is often sufficient to just obtain the signs of the four elements of the Jacobian.

Summarizing, for the steady state (x̄, ȳ) we can use the point (x̄ + h, ȳ) (slightly to the right)
and the point (x̄, ȳ + h) (slightly upward) to determine the signs of the partial derivatives (see
Fig. 5.2):
• the horizontal vector field, → or ←, gives the sign of f(x, y) in those points, i.e., the horizontal arrow in the point $(\bar x + h, \bar y)$ determines the sign of $\partial_x f(\bar x, \bar y)$, and the horizontal arrow in the point $(\bar x, \bar y + h)$ determines the sign of $\partial_y f(\bar x, \bar y)$.
• the vertical direction vector, ↑ or ↓, provides the sign of g(x, y), i.e., the vertical arrow in the point $(\bar x + h, \bar y)$ determines the sign of $\partial_x g(\bar x, \bar y)$, and the vertical arrow in the point $(\bar x, \bar y + h)$ determines the sign of $\partial_y g(\bar x, \bar y)$.
In Fig. 5.2a and c we see that the leftward horizontal arrow (←) in a point to the right of the equilibrium, $(\bar x + h, \bar y)$, tells us that $\partial_x f(\bar x, \bar y) = \alpha < 0$, and that the horizontal leftward arrow (←) in a point above the steady state tells us that $\partial_y f(\bar x, \bar y) = \beta < 0$. Similarly, the vertical

Figure 5.2: The graphical Jacobian method. Panel (a) shows the nullclines, vector field, and the location of the equilibrium $(\bar x, \bar y)$. The solid line corresponds to dx/dt = 0 and the dashed line to the dy/dt = 0 nullcline. Panel (b) shows two reference points, one located slightly to the right $(\bar x + h, \bar y)$, and one located just above $(\bar x, \bar y + h)$ the equilibrium. These are used to compute the partial derivatives: Panel (c) shows the vector field in these two reference points, i.e., (←, ↑) and (←, ↓), respectively.

upward arrow (↑) in the point $(\bar x + h, \bar y)$ tells us that $\partial_x g(\bar x, \bar y) = \gamma > 0$, whereas the downward vertical arrow (↓) in the point $(\bar x, \bar y + h)$ tells us that $\partial_y g(\bar x, \bar y) = \delta < 0$.

Note that a point to the right of the equilibrium would lie on a nullcline if that nullcline is perfectly horizontal, and hence that $\partial_x f = 0$ (if this were the dx/dt = f(x, y) = 0 nullcline) or that $\partial_x g = 0$ (if this were the dy/dt = g(x, y) = 0 nullcline). Similarly, if a nullcline is exactly vertical we obtain that either $\partial_y f = 0$ or that $\partial_y g = 0$ (depending on which nullcline the point lands on).

5.3 Plan of qualitative analysis

We summarize all of the above by formulating a plan to qualitatively study systems of two
differential equations:

$$\begin{cases} dx/dt = f(x, y) \\ dy/dt = g(x, y) \end{cases} . \tag{5.4}$$
The main aim is to plot the phase portrait and determine the stability (and possibly the type)
of the equilibrium points, such that we can predict the dynamics of the system.

Start with sketching nullclines and the vector field (see the online tutorial at
[Link]/rdb/bm/clips/nullclines):
1. Decide which variables can most easily be solved from the f (x, y) = 0 and g(x, y) = 0
expressions, and plot that variable on the vertical axis (and the other on the horizontal axis).
Sketch the f (x, y) and g(x, y) = 0 nullclines in this phase space.
2. Choose a point in an “extreme” region (e.g., both variables big, both small, or an asym-
metric point) on the x, y plane, and find the local horizontal arrow from f (x, y). Plot the
corresponding arrow, i.e., → if f (x, y) > 0 and ← if f (x, y) < 0.
3. Check all regions of the phase space and swap the horizontal arrow when crossing the dx/dt = 0 nullcline.
4. Do the same to find the direction of the vertical arrows, i.e., take an extreme point to find the
local vertical arrow from g(x, y), and swap this arrow when crossing the dy/dt = 0 nullcline.
5. Study the vector field in the four regions surrounding each equilibrium point, and see if this provides enough information on the stability of the equilibrium. This can be done when the steady state is a stable node, an unstable node, or a saddle point.

Should the vector field be insufficient to determine the stability of an equilibrium, which probably
means it is a spiral point, we determine the graphical Jacobian of the equilibrium point:
1. For each equilibrium point (x̄, ȳ) choose two points. One located slightly to the right, (x̄+h, ȳ),
and one slightly above, (x̄, ȳ + h), the equilibrium. Find the signs of the Jacobian from the
local vector field at these points.
2. Compute the sign of the trace and determinant of this Jacobi matrix, check whether tr < 0
and det > 0, and see if this identifies the type of equilibrium (see Fig. 5.1).

Finally, if the stability of the equilibrium cannot be determined from the graphical Jacobian, we need to determine the full Jacobian by taking the partial derivatives of f(x, y) and g(x, y) at the steady state (x̄, ȳ),

J = (∂x f  ∂y f ; ∂x g  ∂y g) = (a  b ; c  d) , (5.5)

and solve the eigenvalues of this matrix, i.e.,

λ1,2 = (tr ± √D)/2  where  D = tr² − 4 det , (5.6)

and tr = a + d and det = ad − bc. When both λ1 < 0 and λ2 < 0 the steady state (x̄, ȳ) is stable.

Independent of how we determined the type and stability of the equilibria, we use this knowledge, together with the local vector field and the nullclines of each equilibrium, to draw a local phase portrait with trajectories around that equilibrium point. Finally, connecting the different local phase portraits into a global phase portrait gives an idea of the separatrices and the basins of attraction of the attractors.
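The bookkeeping of this plan is easily automated. Below is a minimal R helper (our own sketch, not part of the Grind scripts) that classifies an equilibrium from the trace, determinant, and discriminant of its Jacobian, following Fig. 5.1 and Eq. (5.6):

    classify <- function(J) {
      trJ  <- J[1, 1] + J[2, 2]                   # trace: a + d
      detJ <- J[1, 1]*J[2, 2] - J[1, 2]*J[2, 1]   # determinant: ad - bc
      D    <- trJ^2 - 4*detJ                      # discriminant
      if (detJ < 0) return("saddle point")
      stability <- if (trJ < 0) "stable" else if (trJ > 0) "unstable" else "neutral"
      type <- if (D < 0) "spiral" else "node"
      paste(stability, type)
    }

    classify(matrix(c(-0.5, -0.5, 0.5, 0), 2, 2, byrow = TRUE))   # "stable spiral"

The example matrix is the Jacobian of the non-trivial Lotka-Volterra steady state analyzed in Chapters 6 and 7.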

5.4 Exercises

1. Find the type and stability of the equilibria of the following linear (or linearized) systems
using the determinant-trace method:
  
a. dx/dt = 3x + y , dy/dt = −20x + 6y
b. dx/dt = 2x + y , dy/dt = 2x − 10y
c. dx/dt = 2x + y , dy/dt = 5x − 2y

2. Consider the following model

dx/dt = 2x(1 − y)  and  dy/dt = 2 − y − x² ,

assuming x ≥ 0 and y ≥ 0.
a. Find all equilibria of the system.
b. Find the general expression for the Jacobian of this system.
c. Determine the type of each equilibrium using the “determinant-trace” method.
d. Sketch the qualitative local phase portraits around each equilibrium point by computing eigenvalues and eigenvectors, and try to connect these into a global phase portrait.

3. Study the model of the previous question again using the graphical Jacobian approach:
a. Sketch the vector field for the system using nullclines.
b. Find type and stability of equilibria using the graphical Jacobian.
c. Compare your results to the previous question.

4. On the webpage we provide the Grind script linear.R which encodes Eq. (3.2).
a. Find values of a, b, c and d such that the origin is a stable node, an unstable node, a saddle point, and a spiral point. Simplify the problem by first working out the required trace, determinant, and discriminant of this matrix.
b. Draw the nullclines with a phase portrait for each situation (plane(portrait=TRUE)).
Chapter 6

Lotka Volterra model

Using the famous Lotka Volterra model as an example we review these methods for analyzing
systems of non-linear differential equations. The Lotka-Volterra predator prey model can be
written as:
dR/dt = aR − bR² − cRN  and  dN/dt = dRN − eN , (6.1)

where a, b, c, d, and e are positive constant parameters, and R and N are the prey and predator densities. In the online tutorial [Link]/rdb/bm/clips/nullclines/[Link] we show that the nullclines can intersect in three (Fig. 6.1a) or two (Fig. 6.1b) steady states, and that these steady states are given by

(R̄, N̄) = (0, 0) ,  (R̄, N̄) = (a/b, 0)  and  (R̄, N̄) = (e/d, (da − eb)/(dc)) ,

which confirms that the non-trivial steady state only exists when da > eb. The same tutorial explains how to obtain the vector field, and shows that this is sufficient to see that the origin is a saddle point, and that the steady state (R̄, N̄) = (a/b, 0) is a saddle point in Fig. 6.1a and a stable node in Fig. 6.1b.


Figure 6.1: The two qualitatively different phase spaces of the Lotka-Volterra model, with the vector
field indicated by arrows.
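These dynamics are also easy to simulate directly. The fragment below is our own sketch using the deSolve package for R (any ODE integrator would do equally well), with arbitrary illustrative parameter values satisfying da > eb, so that the non-trivial steady state of Fig. 6.1a exists:

    library(deSolve)

    lotka <- function(t, state, parms) {     # Eq. (6.1)
      with(as.list(c(state, parms)), {
        dR <- a*R - b*R^2 - c*R*N
        dN <- d*R*N - e*N
        list(c(dR, dN))
      })
    }

    p <- c(a = 1, b = 1, c = 1, d = 1, e = 0.5)              # da > eb
    out <- ode(y = c(R = 0.55, N = 0.5), times = seq(0, 50, by = 0.1),
               func = lotka, parms = p)
    matplot(out[, "time"], out[, c("R", "N")], type = "l",
            xlab = "time", ylab = "density")   # damped oscillations into (0.5, 0.5)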

To establish the stability of the non-trivial steady state in Fig. 6.1a we next apply our method
of linearization. Thus, we define two functions
dR/dt = aR − bR² − cRN = f(R, N)  and  dN/dt = dRN − eN = g(R, N) , (6.2)

and rewrite the model into two new variables, hR = R − R̄ and hN = N − N̄, that define the distance to the steady state (R̄, N̄). Adopting Eq. (4.1),

d(R̄ + hR)/dt = dhR/dt = f(R̄ + hR, N̄ + hN) ≈ f(R̄, N̄) + ∂R f(R̄, N̄) hR + ∂N f(R̄, N̄) hN , (6.3)

and

d(N̄ + hN)/dt = dhN/dt = g(R̄ + hR, N̄ + hN) ≈ g(R̄, N̄) + ∂R g(R̄, N̄) hR + ∂N g(R̄, N̄) hN . (6.4)

Because (R̄, N̄) is an equilibrium point, i.e., f(R̄, N̄) = 0 and g(R̄, N̄) = 0, this simplifies into

dhR/dt = ∂R f(R̄, N̄) hR + ∂N f(R̄, N̄) hN , (6.5)

and

dhN/dt = ∂R g(R̄, N̄) hR + ∂N g(R̄, N̄) hN . (6.6)

This linearized system describes the growth of a small disturbance (hR , hN ) around the steady
state (R̄, N̄ ). If the solutions of this system approach (hR , hN ) = (0, 0) the steady state is locally
stable. The four partial derivatives in the steady state form the so-called Jacobi matrix,

J = (a − 2bR̄ − cN̄  −cR̄ ; dN̄  dR̄ − e) , (6.7)

and the general solution of the linear system has the form

(hR(t), hN(t)) = c1 (R1, N1) e^(λ1 t) + c2 (R2, N2) e^(λ2 t) , (6.8)

where λ1,2 are the eigenvalues, and v1 = (R1, N1) and v2 = (R2, N2) the corresponding eigenvectors of
the Jacobian. One can see that the steady state is stable if, and only if, λ1,2 < 0. Whenever
both eigenvalues are negative small disturbances will die out.

To determine the stability of the three steady states in Fig. 6.1a, one therefore only needs to
know the eigenvalues of the Jacobian (or its trace and determinant). For (R̄, N̄) = (0, 0) one finds

J = (a  0 ; 0  −e) . (6.9)

Because the matrix is in diagonal form, one can immediately see that the eigenvalues are λ1 = a and λ2 = −e. Because λ1 > 0 the steady state is unstable, i.e., a saddle point. For (R̄, N̄) = (a/b, 0) one finds

J = (−a  −ac/b ; 0  (da − eb)/b) . (6.10)
The eigenvalues are λ1 = −a and λ2 = (da − eb)/b. Because of the requirement da > eb one knows λ2 > 0, and hence that the point is not stable. (Note that (R̄, N̄) = (a/b, 0) will be stable when da < eb: to see what happens sketch the nullclines for that situation.) For (R̄, N̄) = (e/d, (da − eb)/(dc)) one obtains

J = (−be/d  −ce/d ; (da − eb)/c  0) = (−bR̄  −cR̄ ; dN̄  0) . (6.11)

One finds

tr J = −bR̄ < 0  and  det J = cdR̄N̄ > 0 , (6.12)
which tells us that the steady state is stable. Note that we never filled in numerical values for
the parameters in this analysis.
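For any concrete choice of the parameters this conclusion can be spot-checked numerically. For instance, the following R fragment (with illustrative values obeying da > eb) builds the Jacobian of Eq. (6.11) and confirms that its trace is negative and its determinant positive:

    a <- 1; b <- 1; c <- 1; d <- 1; e <- 0.5   # illustrative values with da > eb
    R <- e/d                                   # non-trivial steady state of Eq. (6.1)
    N <- (d*a - e*b)/(d*c)
    J <- matrix(c(-b*R, -c*R,
                   d*N,  0), 2, 2, byrow = TRUE)
    sum(diag(J))      # trace = -0.5 < 0
    det(J)            # determinant = 0.25 > 0
    eigen(J)$values   # -0.25 +/- 0.433i: stable (see Chapter 7)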

Graphical Jacobian

Consider the vector field around the steady state of some system dx/dt = f (x, y) and dy/dt =
g(x, y). Around the steady state (x̄, ȳ) in the phase space (x, y) the sign of dx/dt is given by
the horizontal arrows, i.e., the horizontal component of the vector field. The sign of ∂x f can
therefore be determined by making a small step to the right, i.e., in the x direction, and reading
the sign of dx/dt from the vector field. Similarly, a small step upwards gives the effect of y
on dx/dt, i.e., gives ∂y f , and the sign can be read from the vertical arrow of the vector field.
Repeating this for ∂x g and ∂y g, while replacing x, y with R, N, one finds around the steady state (0, 0) in Fig. 6.1a:

J = (∂x f  ∂y f ; ∂x g  ∂y g) = (α  0 ; 0  −β) , (6.13)

where α and β are positive constants. Because det(J) = −αβ < 0 the steady state is a saddle point (see Fig. 5.1). For the steady state without predators one finds in Fig. 6.1a

J = (−α  −β ; 0  γ) . (6.14)

Because det(J) = −αγ < 0 the equilibrium is a saddle point. For the non-trivial steady state in Fig. 6.1a one finds

J = (−α  −β ; γ  0) , (6.15)
and because tr(J) = −α < 0 and det(J) = βγ > 0 the equilibrium is stable. This graphical
method is also explained in the book of Hastings (1997).

6.1 Exercises

Determine the stability of the non-trivial equilibrium point of this Lotka Volterra model after setting b = 0, i.e., after removing the carrying capacity of the prey.
Chapter 7

Complex numbers

Consider a general quadratic equation

aλ² + bλ + c = 0 , (7.1)

with roots given by the ‘abc’-formula

λ1,2 = (−b ± √(b² − 4ac))/(2a) = (−b ± √D)/(2a)  where  D = b² − 4ac .

The value D is called the discriminant. What happens with this equation if D < 0? Can it still have roots in this case?

You have probably learned that a quadratic equation cannot be solved when D < 0. It is indeed true that the equation has no real solutions in this case, because the square root of a negative number does not exist in any real sense. However, we shall see that even if the solutions are not real numbers, one can still perform calculations with them. In order to do this, so-called complex numbers have been invented, which allow for a solution of Eq. (7.1) even if D < 0. Let us define the basic complex number i as:

i² = −1  or equivalently  i = √(−1) . (7.2)

To see how this works, consider the equation λ² = −3, which does not have a real solution. Given that i² = −1 we can rewrite this into

λ² = −1 × 3 = i² × 3  or  λ1,2 = ±i√3 . (7.3)

Here i is the basic complex number, which is similar to ‘1’ for real numbers. In general, the equation λ² = −a² has solutions λ1,2 = ±ai. As calculations with complex numbers may in the end deliver an outcome composed of real numbers only, the definition i² = −1 can be very useful in real life applications. A general complex number z can be written as z = α + iβ, where α is called the real part and iβ is called the imaginary part of the complex number z.
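Complex numbers are natively supported by most scientific software. In R, for example (shown here purely as an illustration), a complex number is written with the suffix i, and square roots of negative numbers exist once we work in the complex domain:

    z <- -1 + 3i           # real part -1, imaginary part 3
    Re(z); Im(z)           # -1 and 3
    sqrt(-3)               # NaN (with a warning): no real square root
    sqrt(as.complex(-3))   # 0 + 1.732051i, i.e., i*sqrt(3) as in Eq. (7.3)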

Now we can solve Eq. (7.1) for the case D < 0. If D is negative, then −D must be positive, and we can write √D = √(−1) × √(−D) = i√(−D), and

λ1,2 = (−b ± i√(−D))/(2a) . (7.4)


Figure 7.1: Panel (a): complex numbers represented as points or vectors on a complex plane. This is called an Argand diagram. Panel (b): the Mandelbrot set created by the series z_i = z_{i−1}² + c, where z_0 = 0 and c is a point in the Argand diagram (defined by the horizontal and vertical axes of Panel (b)). The color indicates the size of z_n after a fixed number of iterations i = 1, 2, . . . , n. This image was taken from Wikipedia (which also provides further explanation).

For example, solve the equation λ² + 2λ + 10 = 0:

λ1,2 = (−2 ± √(4 − 4 × 10))/2 = (−2 ± √(−36))/2 = (−2 ± 6i)/2 . (7.5)
In other words, λ1 = −1 + 3i and λ2 = −1 − 3i. We see that the solutions of this equation form a complex pair with real part −1 and imaginary parts ±3. Complex numbers that have identical real parts and imaginary parts with opposite signs are called complex conjugates. The number z2 = a − ib is called the complex conjugate of the number z1 = a + ib. Roots of a quadratic equation with a negative discriminant (D < 0) are always complex conjugates of each other (see Eq. (7.4)).

Complex numbers can be plotted as vectors in a complex plane. This is a graph in which the
horizontal axis is used for the real part, and the vertical axis is used for the imaginary part of
the complex number (see Fig. 7.1a). Note that this is very similar to the depiction of a vector
(x y), when x and y are real valued numbers on a real plane. In other words, you can think
of a complex number z = α + iβ as a vector (α β) on a complex plane. Indeed it turns out
that scaling, adding and multiplication of complex numbers follow the same rules as those for
vectors.

Adding two complex numbers is a simple matter of adding their real parts together, and then
adding their imaginary parts together. For example, with z1 = 3 + 10i and z2 = −5 + 4i,

z1 + z2 = (3 + 10i) + (−5 + 4i) = 3 − 5 + 10i + 4i = −2 + 14i .

Note that this is the same as adding two expressions containing a variable (e.g. (3+10x)+(−5+
4x)). Moreover, if you think of the complex numbers as vectors on a complex plane, addition
works the same as it would for normal vectors: (3 10) + (−5 4) = (−2 14). As was already
stated, scaling, adding and multiplication of complex numbers follow the same rules as defined
for vectors. We only need to remember that i² = −1. Thus, multiplication by a real number
(a scalar) results in the multiplication of both the real and imaginary parts by this number. For
example, if z1 = 3 + 10i then 10z1 = 10(3 + 10i) = 30 + 100i.

Multiplication of two complex numbers is the same as multiplying two expressions that contain
a variable (e.g. (a + bx)(c + dx)). In the case of complex numbers, the real and imaginary part
of the first number should both be multiplied by both the real and imaginary part of the second
number. Consider the same examples as before, z1 = 3 + 10i and z2 = −5 + 4i

z1 × z2 = (3 + 10i)(−5 + 4i) = 3(−5) + 3 × 4i + 10i(−5) + 10i × 4i = −15 + 12i − 50i + 40i²
        = −15 − 38i + 40i² = −15 − 38i − 40 = −55 − 38i .

Similarly, one can check that (z1)² = (3 + 10i)² = −91 + 60i. Now that we can do addition and multiplication with complex numbers, we can check that λ1 = −1 + 3i is indeed a solution of the equation in example (7.5). It is just a simple matter of filling in (substituting) λ1 into the equation

λ² + 2λ + 10 = (−1 + 3i)² + 2(−1 + 3i) + 10 = 1 − 6i − 9 − 2 + 6i + 10 = 0 .
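All of these hand calculations can be double-checked in R, whose complex arithmetic follows exactly the rules described above:

    z1 <- 3 + 10i; z2 <- -5 + 4i
    z1 + z2            # -2 + 14i
    z1 * z2            # -55 - 38i
    z1^2               # -91 + 60i
    l1 <- -1 + 3i
    l1^2 + 2*l1 + 10   # 0 + 0i: l1 is indeed a root of lambda^2 + 2*lambda + 10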

Now that you know how to add and multiply complex numbers, you may be interested to explore the beautiful fractal world of the Mandelbrot set (see Fig. 7.1b and Wikipedia), which contains amazing shapes created just by taking the complex numbers z_0 = x + yi at all positions in a particular area of an Argand diagram, squaring z_0, and adding the result to the original number, i.e., z_1 = z_0² + z_0, after which z_1 is squared again and added to the original number, and so on, i.e., z_i = z_{i−1}² + z_0 (for i = 1, 2, . . . , n).
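A bare-bones version of this iteration fits in a dozen lines of R. The sketch below is our own (grid size, number of iterations, and escape radius are arbitrary choices) and follows the form used in the caption of Fig. 7.1, i.e., z_i = z_{i−1}² + c with z_0 = 0:

    x <- seq(-2, 1, length.out = 300)
    y <- seq(-1.5, 1.5, length.out = 300)
    cgrid <- outer(x, 1i*y, `+`)              # grid of complex numbers c = x + iy
    z     <- matrix(0+0i, length(x), length(y))
    count <- matrix(0, length(x), length(y))
    for (k in 1:50) {
      inside <- Mod(z) <= 2                   # points that have not escaped yet
      z[inside] <- z[inside]^2 + cgrid[inside]
      count <- count + inside                 # iterations survived before escaping
    }
    image(x, y, count, asp = 1, xlab = "real axis", ylab = "imaginary axis")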

Dividing two complex numbers is somewhat more difficult. We need a little trick to remove the imaginary value from the denominator. Remember that one can always multiply both the numerator and the denominator of a fraction with the same expression. The fraction of two complex numbers z1/z2 can therefore be multiplied with z̄2/z̄2 = 1,

z1/z2 = (z1 z̄2)/(z2 z̄2) , (7.6)
which allows us to eliminate the i from the denominator. An example would be

(1 + 3i)/(1 − 4i) = (1 + 3i)/(1 − 4i) × (1 + 4i)/(1 + 4i) = ((1 + 3i)(1 + 4i))/(1² + 4²) = (1 + 3i + 4i + 12i²)/17 = (−11 + 7i)/17 = −11/17 + (7/17) i .
To see why this works, we can use |z|, the absolute value or modulus of a complex number z. If z = a + ib, then |z| = √(a² + b²), which is real, and equal to the length of the vector (a b) on the complex plane. Check yourself that (a + ib)(a − ib) = a² + b², which means that z z̄ = |z|². A complex division can therefore also be written as

z1/z2 = (z1 z̄2)/(z2 z̄2) = (z1 z̄2)/|z2|² = ((a1 + b1 i)(a2 − b2 i))/(a2² + b2²) .
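In R the complex conjugate and the modulus are available as Conj and Mod, so the division trick can be verified directly:

    z1 <- 1 + 3i; z2 <- 1 - 4i
    z1 / z2                     # -0.6470588 + 0.4117647i, i.e., -11/17 + (7/17)i
    z1 * Conj(z2) / Mod(z2)^2   # the same result, computed via Eq. (7.6)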

7.1 Complex valued eigenvalues of ODEs

Consider the general linear system of ODEs:

dx/dt = ax + by  and  dy/dt = cx + dy ,  with eigenvalues  λ1,2 = (tr ± √D)/2 ,

where D = tr² − 4 det, tr = a + d and det = ad − bc. When the discriminant D is negative we can rewrite the square root as i√(−D), and hence

λ1,2 = (tr ± i√(−D))/2  or  λ1,2 = α ± iβ , (7.7)
2

where α and β are the real and the imaginary part of this complex conjugate pair. The corresponding eigenvectors are also complex,

v1 = k (−b, a − λ1) = k (−b, a − (α + iβ)) = k (−b, a − α) − ik (0, β) = k w1 − ik w2 , (7.8)

where k is an arbitrary real constant, and w1 = (−b, a − α) and w2 = (0, β) correspond to the real and imaginary parts of the eigenvector v1. Similarly

v2 = k (−b, a − λ2) = k (−b, a − (α − iβ)) = k w1 + ik w2 . (7.9)

We can substitute these complex eigenvectors and eigenvalues into the equation for the general solution, i.e.,

(x(t), y(t)) = C1 (w1 − iw2) e^((α+iβ)t) + C2 (w1 + iw2) e^((α−iβ)t) , (7.10)

where the constants k are absorbed into C1 and C2.

It remains quite unclear what this means for the behavior of the solutions. To get an idea about this we need to introduce a new mathematical relationship. Just as we have equations relating trigonometric functions (e.g., sin²x + cos²x = 1), or exponential functions (e.g., e^(a+b) = e^a × e^b), there also is a special formula relating trigonometric and exponential functions via complex numbers:

e^(ix) = cos x + i sin x  or  e^(−ix) = cos x − i sin x . (7.11)

This famous equation is called Euler’s formula. With this formula we rewrite

e^(α+iβ) = e^α e^(iβ) = e^α (cos β + i sin β) ,  e^(α−iβ) = e^α (cos β − i sin β) , (7.12)

and hence

(x(t), y(t)) = C1 (w1 − iw2) e^(αt) (cos βt + i sin βt) + C2 (w1 + iw2) e^(αt) (cos βt − i sin βt)
             = e^(αt) [C1 (w1 − iw2)(cos βt + i sin βt) + C2 (w1 + iw2)(cos βt − i sin βt)] .

Note that at this point the general solution would give us both real valued and complex valued x and y values, which is impossible for biological variables. Nevertheless, we can already see that the solutions would tend to zero whenever α = tr/2 < 0, demonstrating that a negative trace remains a requirement for stability (see Fig. 5.1). We can learn more about this equation by also considering the initial time point, where t = 0, e^(αt) = 1, cos βt = 1 and i sin βt = 0, and where we have the initial condition (x(0) y(0)), corresponding to two real numbers, i.e.,

(x(0), y(0)) = C1 (w1 − iw2) + C2 (w1 + iw2) = w1 (C1 + C2) + iw2 (C2 − C1) ,

or

x(0) = −b(C1 + C2)  and  y(0) = (a − α)(C1 + C2) + iβ(C2 − C1) , (7.13)

from which one can solve the complex pair C1 and C2 satisfying this equation (note that C1 + C2 should be real, whereas C2 − C1 should be an imaginary number to cancel the imaginary term in the expression for y(0)).

These complex conjugates C1 and C2 should also cancel the imaginary parts of the full solution, i.e., the w1 i sin βt and the iw2 cos βt terms, such that the full solution (x(t) y(t)) remains real. The remaining w1 cos βt and w2 sin βt terms define oscillatory behavior of real x(t) and y(t) values, and we see that β determines the frequency of these oscillations (their period is 2π/β). The real parts of the constants C1 and C2 determine their amplitude, and the α parameter in the leading exponential function determines the growth rate of this amplitude. As argued above the oscillations will grow in amplitude when α > 0, and will be dampened when α = tr/2 < 0. For complex eigenvalues stability is therefore guaranteed when the trace is negative (see Fig. 5.1).

Figure 7.2: The stable spiral point of the Lotka Volterra model. Panel (a) depicts the nullclines with a nontrivial steady state at R̄ = 0.5 and N̄ = 0.5. The trajectory starting at R(0) = 0.55 and N(0) = 0.5 spirals into the steady state. Panel (b) depicts for this trajectory the distances hR = R − 0.5 (red) and hN = N − 0.5 (green) as a function of time. Panel (c) depicts the very similar solution of the linearized model of Eq. (7.14) (red) and Eq. (7.15) (green). Parameters: a = b = c = d = 1 and e = 0.5.

7.2 Example: the Lotka Volterra model

In Chapter 6 the Lotka-Volterra predator prey model was written as

dR/dt = aR − bR² − cRN ,  dN/dt = dRN − eN ,  with  (R̄, N̄) = (e/d, (da − eb)/(dc))

as the nontrivial steady state, with the Jacobian

J = (−be/d  −ce/d ; (da − eb)/c  0) = (−bR̄  −cR̄ ; dN̄  0) .

For a = b = c = d = 1 and e = 0.5, the nontrivial steady state is at R̄ = 0.5 and N̄ = 0.5, and the Jacobian becomes

J = (−0.5  −0.5 ; 0.5  0)  with  tr = −0.5 ,  det = 0.25 ,  and  D = −0.75 ,

implying that

λ1,2 = (tr ± i√(−D))/2 = (−0.5 ± i√0.75)/2 = −0.25 ± 0.43i .

Hence α = −0.25 and β = 0.43, the nontrivial state is stable, has a return time of −1/α = 4, and oscillates with a period of 2π/β ≈ 14.5 time units. This is sufficient to classify the steady state as a stable

spiral, which is confirmed in Fig. 7.2 where we start a trajectory at R(0) = 0.55 and N(0) = 0.5, and observe it spiraling into the steady state (0.5, 0.5).

To illustrate how the solutions become real even if we have complex eigenvalues and eigenvectors, we proceed by studying the linearized solution of this system, starting at the point R(0) = 0.55 and N(0) = 0.5. The corresponding eigenvectors are

v1 = (0.5, −0.5 − λ1) = (0.5, −0.25 − 0.43i)  and  v2 = (0.5, −0.5 − λ2) = (0.5, −0.25 + 0.43i) .

Hence the general solution for the distances x = R(t) − R̄ and y = N(t) − N̄ is

(x(t), y(t)) = e^(−0.25t) [C1 v1 (cos 0.43t + i sin 0.43t) + C2 v2 (cos 0.43t − i sin 0.43t)] ,

or

x(t) = e^(−0.25t) [C1 (0.5)(cos 0.43t + i sin 0.43t) + C2 (0.5)(cos 0.43t − i sin 0.43t)]
     = e^(−0.25t) 0.5 [(C1 + C2) cos 0.43t + (C1 − C2) i sin 0.43t] . (7.14)

Similarly, using i² = −1, we obtain

y(t) = e^(−0.25t) [C1 (−0.25 − 0.43i)(cos 0.43t + i sin 0.43t) + C2 (−0.25 + 0.43i)(cos 0.43t − i sin 0.43t)]
     = e^(−0.25t) [0.43(C1 + C2) sin 0.43t − 0.25(C1 + C2) cos 0.43t + 0.25i(C2 − C1) sin 0.43t + 0.43i(C2 − C1) cos 0.43t] . (7.15)

For the initial condition, where t = 0, e^(−0.25t) = 1, cos 0.43t = 1, and sin 0.43t = 0, the linearized solution x(t) simplifies into x(0) = 0.05 = 0.5(C1 + C2), or C1 + C2 = 0.1, and y(t) simplifies into y(0) = 0 = 0.43i(C2 − C1) − 0.25(C1 + C2). Together, this delivers C1 = 0.05 + 0.029i and C2 = 0.05 − 0.029i. Substituting these two constants into Eq. (7.14) and Eq. (7.15) gives

x(t) = e^(−0.25t) [0.05 cos 0.433t − 0.0289 sin 0.433t]  and  y(t) = e^(−0.25t) 0.0577 sin 0.433t , (7.16)

which is perfectly real, and closely resembles the true approach to the steady state (compare
Fig. 7.2b with c).
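The whole calculation is easily verified in a few lines of R (our own sketch, independent of the Grind scripts): eigen reproduces the complex eigenvalues of the Jacobian, and evaluating Eq. (7.16) recovers the initial disturbance (0.05, 0) and the damped oscillations of Fig. 7.2c:

    J <- matrix(c(-0.5, -0.5,
                   0.5,  0.0), 2, 2, byrow = TRUE)
    eigen(J)$values   # -0.25 + 0.4330127i and its complex conjugate

    x <- function(t) exp(-0.25*t)*(0.05*cos(0.433*t) - 0.0289*sin(0.433*t))
    y <- function(t) exp(-0.25*t)*0.0577*sin(0.433*t)
    c(x(0), y(0))     # 0.05 and 0: the initial disturbance
    t <- seq(0, 20, by = 0.1)
    matplot(t, cbind(x(t), y(t)), type = "l",
            xlab = "t", ylab = "hR, hN")   # compare with Fig. 7.2c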

7.3 Exercises

Find the eigenvalues and eigenvectors of the matrix (−1  5 ; −1  3).
Chapter 8

Answers for exercises

Exercises Chapter 2

1. a. −a/b and −c/d respectively.


b. When the lines are parallel to each other they will never intersect, i.e., when a/b = c/d
or ad = bc, there is no solution.
c. p and q are absent from the slopes.

2. a. The matrix A = (−2  1 ; 1  −2) has tr(A) = −4 and det(A) = 3, such that λ² − tr λ + det = λ² + 4λ + 3 = (λ + 3)(λ + 1) = 0, giving λ1 = −1 with v1 = k (−1, −2 − (−1)) = k (−1, −1), and λ2 = −3 with v2 = k (−1, −2 − (−3)) = k (−1, 1), where k is an arbitrary constant.
b. The matrix A = (1  4 ; 1  1) has tr(A) = 2 and det(A) = −3, such that

λ1,2 = (2 ± √(2² + 4 × 3))/2 = (2 ± √16)/2 = (2 ± 4)/2 = 1 ± 2 .

For λ1 = −1 one obtains v1 = k (−4, 1 − (−1)) = k (−4, 2) or v1 = k (−2, 1), and for λ2 = 3 one obtains v2 = k (−4, 1 − 3) = k (−4, −2) or v2 = k (−2, −1), where k is an arbitrary constant.
c. The matrix A = (a  b ; 0  d) is in triangular form and hence the eigenvalues correspond to its diagonal elements (see also Chapter 5). This can be checked by seeing that the characteristic equation simplifies into (a − λ)(d − λ) − 0 = 0, immediately giving the two solutions, λ1 = a and λ2 = d, which are indeed the diagonal elements of the matrix. For the eigenvectors we fill in v1 = k (−b, a − λ1) = k (−b, 0) and v2 = k (−b, a − λ2) = k (−b, a − d).

3. a. Yes, the forest will approach the eigenvector corresponding to the dominant eigenvalue for every initial condition except V⃗0 = (0 0 0 0).
b. The latter illustrates that a forest not containing a particular species should also not have that species in its climax state. Thus, the model assumes that all species are present, which here is a natural assumption because all species were counted as saplings in the original data matrix.

    
4. a. The system can be written as (2  −4 ; 1  1) (x, y) = (3, 1), with det = 2 × 1 − (−4) × 1 = 2 + 4 = 6 ≠ 0. So this has a solution.

Exercises Chapter 3
1. The eigenvalues of the matrix of this system of ODEs are solved from λ² − tr λ + det = λ² − 9λ + 18 = (λ − 6)(λ − 3) = 0, so λ1 = 3 and λ2 = 6. Thus, the eigenvectors are

v1 = (2, 1 − 3) = (2, −2)  and  v2 = (2, 1 − 6) = (2, −5) .

The general solution is

(x(t), y(t)) = C1 (2, −2) e^(3t) + C2 (2, −5) e^(6t) .

Using the initial condition one substitutes x(0) = 3 and y(0) = −3 and obtains

(3, −3) = C1 (2, −2) e^(3×0) + C2 (2, −5) e^(6×0) = C1 (2, −2) + C2 (2, −5) .

Writing 3 = C1 × 2 + C2 × 2 and −3 = C1 × −2 + C2 × −5, one finds C1 = 1.5 − C2 from the first equation. Substituting this into the second equation gives

−3 = −2(1.5 − C2) − 5C2 = −3 + 2C2 − 5C2 = −3 − 3C2 , so C2 = 0 and C1 = 1.5 .

Finally we write the solution of the initial value problem as:

(x(t), y(t)) = 1.5 (2, −2) e^(3t) ,

from which we see that the system moves along the first eigenvector (which is due to
the fact that we started on the first eigenvector with the initial condition x(0) = 3 and
y(0) = −3). The origin is unstable and from this initial condition the system moves to
infinitely large values of x and infinitely negative values of y.

2. a. We are dealing again with an initial value problem of a linear system here. First write it in the more familiar form

(dA/dt, dB/dt) = (−0.01  0.01 ; 0.04  −0.04) (A, B)  with  tr = −0.05 and det = 0 .

Finding the eigenvalues from λ² − tr λ + det = λ² + 0.05λ = λ(λ + 0.05) = 0 gives λ1 = 0 and λ2 = −0.05. The fact that the largest eigenvalue is zero immediately reveals that the steady state has neutral stability. The eigenvectors are

v1 = k (−0.01, −0.01 − 0) = k (−0.01, −0.01)  or  v1 = k (1, 1) ,

and

v2 = k (−0.01, −0.01 − (−0.05)) = k (−0.01, 0.04)  or  v2 = k (−1, 4) ,

where k is an arbitrary constant. The general solution is

(A(t), B(t)) = C1 (1, 1) e^(0t) + C2 (−1, 4) e^(−0.05t) = C1 (1, 1) + C2 (−1, 4) e^(−0.05t) .

Using the initial condition by substituting A(0) = 3 and B(0) = 0 one obtains

(3, 0) = C1 (1, 1) + C2 (−1, 4) e^(−0.05×0) = C1 (1, 1) + C2 (−1, 4) ,

and writes 3 = C1 − C2 or C1 = 3 + C2, and 0 = C1 + 4C2. Substituting the first into the second equation gives 0 = 3 + 5C2, meaning that C2 = −3/5 = −0.6 and hence that C1 = 2.4. Thus, the solution of the initial value problem is

(A(t), B(t)) = 2.4 (1, 1) − 0.6 (−1, 4) e^(−0.05t) .

b. Because ultimately e^(−0.05t) → 0, the final state is given by A = B = 2.4 moles/liter, which makes sense because the diffusion should lead to equal concentrations. The total amount in the final state, (20 + 5) × 2.4 = 60 moles, is equal to that of the initial condition, 20 × 3 = 60 moles.
c. No, since the largest eigenvalue is zero, the system is not stable to perturbations.
Instead, every other initial condition leads to another steady state.
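As a quick check of this answer, R's built-in eigen function returns the same eigenvalues and (normalized) eigenvectors:

    A <- matrix(c(-0.01,  0.01,
                   0.04, -0.04), 2, 2, byrow = TRUE)
    eigen(A)   # values 0 and -0.05; vectors proportional to (1, 1) and (-1, 4)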

Exercises Chapter 4
1. a. ∂x z = 2x and ∂y z = 2y. At (x, y) = (1, 2) we obtain ∂x z = 2 and ∂y z = 4.
b. ∂x z = 25 − 3x2 − y 2 , which at (3, 4) is -18.
2. Note that f (1, 1) = 2, that ∂x f = 2x, which in (1, 1) also equals 2, and that ∂y f = 2y
which in (1, 1) also equals 2. Hence f (x, y) ' 2 + 2(x − 1) + 2(y − 1) = 2 + 2x − 2 + 2y − 2 =
−2 + 2x + 2y.
3. a. Solving dx/dt = −4y = 0 gives y = 0, and substituting this into dy/dt gives 4x − x² = x(4 − x) = 0, having two solutions, x = 0 and x = 4. Thus, there are two equilibria, (x̄, ȳ) = (0, 0) and (4, 0). For the Jacobian we first write the partial derivatives, J = (∂x f = 0  ∂y f = −4 ; ∂x g = 4 − 2x̄  ∂y g = −0.5). For (x̄, ȳ) = (0, 0) one obtains J1 = (0  −4 ; 4  −0.5), and for (x̄, ȳ) = (4, 0) one finds J2 = (0  −4 ; −4  −0.5). You may like to draw the nullclines and compute the eigenvalues with the first model in the Grind script exercise4.3.R.
b. Solving dy/dt = x − y = 0 gives x = y, and substituting this into dx/dt = 0 gives 9x + x² = x(9 + x) = 0, having two solutions, x = 0 and x = −9. Thus, there are two equilibria, (0, 0) and (−9, −9). For the Jacobian we first write the partial derivatives, J = (∂x f = 9  ∂y f = 2ȳ ; ∂x g = 1  ∂y g = −1). Hence, for the steady state (x̄, ȳ) = (0, 0) one obtains J = (9  0 ; 1  −1), and for (x̄, ȳ) = (−9, −9) one finds J = (9  −18 ; 1  −1). You may like to draw the nullclines and compute the eigenvalues with the second model in the Grind script exercise4.3.R.
c. Solving dx/dt = 2x − xy = x(2 − y) = 0 gives x = 0 or y = 2. Substituting x = 0 into dy/dt = 0 gives −y = 0 or y = 0. Substituting y = 2 into dy/dt = 0 gives −2 + 4x = 0, so x = 0.5. Thus, the equilibria are (0, 0) and (0.5, 2). For the Jacobian we first write the partial derivatives, J = (∂x f = 2 − ȳ  ∂y f = −x̄ ; ∂x g = ȳ²  ∂y g = −1 + 2x̄ȳ), meaning that in (x̄, ȳ) = (0, 0) the Jacobian is J = (2  0 ; 0  −1), whereas in the other steady state, (x̄, ȳ) = (0.5, 2), one finds J = (0  −0.5 ; 4  1). You may like to draw the nullclines and compute the eigenvalues with the third model in the Grind script exercise4.3.R.
4. See Eq. (6.11) in Chapter 6.

Exercises Chapter 5

1. Because these are linear systems, (x̄, ȳ) = (0, 0) is the steady state.
a. Here (0,0) is not a saddle point because det = 3 × 6 − 1 × −20 = 38 > 0; it is unstable because tr = 3 + 6 = 9 > 0, and because D = tr² − 4 det = 9² − 4 × 38 = 81 − 152 = −71 < 0, the eigenvalues are complex numbers, and this has to be an unstable spiral.
b. Here (0,0) is a saddle point because det = 2 × −10 − 1 × 2 = −20 − 2 = −22 < 0.
c. Here (0,0) is also a saddle point because det = 2 × −2 − 1 × 5 = −4 − 5 = −9 < 0.

2. a. Solving dx/dt = 2x(1 − y) = 0 gives x = 0 or y = 1. First substitute x = 0 into dy/dt to find that dy/dt = 2 − y = 0, which gives y = 2. Next substitute y = 1 into dy/dt to find that dy/dt = 2 − 1 − x² = 1 − x² = 0, which gives x² = 1, and hence x = 1 or x = −1. Thus, the equilibria are (0, 2) and (1, 1), and note that the equilibrium (−1, 1) is not valid as we required x ≥ 0.
b. Defining f(x, y) = 2x(1 − y) and g(x, y) = 2 − y − x², the partial derivatives are ∂x f = 2(1 − y), ∂y f = −2x, ∂x g = −2x and ∂y g = −1, such that J = (2(1 − ȳ)  −2x̄ ; −2x̄  −1).
 
c. For the steady state (x̄, ȳ) = (0, 2) one obtains J = (−2  0 ; 0  −1) with det = (−2 × −1) − 0 = 2 > 0 and tr = −2 − 1 = −3 < 0. This point is stable, and because D = tr² − 4 det = 9 − 4 × 2 = 9 − 8 = 1 > 0 it is a stable node. For the equilibrium (1, 1) we obtain J = (0  −2 ; −2  −1) with tr = −1 and det = (0 × −1) − (−2 × −2) = −4 < 0, implying that it is a saddle point (which is always unstable).
d. To sketch the phase portrait we compute for the steady state (0, 2) that λ1 = −2 and λ2 = −1, with corresponding eigenvectors (1, 0) and (0, 1), respectively. Thus, the two stable directions are horizontal and vertical, respectively. For the second steady state (1, 1) one obtains the unstable largest eigenvalue λ1 = (−1 + √17)/2 ≈ 1.56 and the stable smallest eigenvalue λ2 = (−1 − √17)/2 ≈ −2.56. The corresponding eigenvectors, v1 = ((−1 − √17)/2, 2) or v1 = ((−1 − √17)/4, 1) ≈ (−1.28, 1), and v2 = ((−1 + √17)/2, 2) or v2 = ((−1 + √17)/4, 1) ≈ (0.78, 1), reveal that the unstable direction is a line with a negative slope (of about −0.78), and that the stable direction is a line with a slope of 1.28. Sketching these eigenvectors as red arrows around the two steady states reveals a phase portrait (figure not reproduced here) in which the blue arrows are trajectories; this picture can also be made with the Grind model exercise5.2.R.

3. a. dx/dt = 2x(1 − y) = 0 gives x = 0 or y = 1 as the dx/dt = 0 nullclines, and dy/dt = 2 − y − x² = 0 gives y = 2 − x² as the dy/dt = 0 nullcline. Having no free parameters we can simply fill in numbers here. Let us fill in x = 2 and y = 2. This gives dx/dt = 2 × 2(1 − 2) = 4 × −1 = −4, so the vector field points leftward (←), and dy/dt = 2 − 2 − 2² = −4, so the vector field points downward (↓). From this we construct the remainder of the vector field (figure not reproduced here).
b. For (0, 2) we obtain J = (−α  0 ; 0  −γ) (note that the zero in the second row occurs because a small step sideways from the horizontal maximum of the parabola still (almost) lands on the dy/dt = 0 nullcline). Hence tr = −α − γ = −(α + γ) < 0 and det = (−α × −γ) − (0 × 0) = α × γ > 0, and the point is stable. We cannot tell whether it is a spiral or a node. For (1, 1) we obtain J = (0  −β ; −δ  −γ) with det = (0 × −γ) − (−β × −δ) = −β × δ < 0, implying that this is a saddle (and therefore unstable).
c. As stated in the text, we find the same answers, except that for the stable equilibrium we cannot determine whether it is a node or a spiral.

4. The model linear.R illustrates all cases by making a figure with six panels, (a)–(f) (not reproduced here):

a. To make a stable node we need tr < 0, det > 0, and a positive discriminant, D > 0. The first is guaranteed if both diagonal elements, a and d, are negative. Since we require ad − bc > 0 we make the absolute values of a and d relatively large, which also helps the D = tr² − 4 det > 0 condition. This is all satisfied when a = −2, b = 1, c = 1 and d = −2 (see Panel (a)). The script provides a function algebra that calculates the eigenvalues and eigenvectors.
b. To make an unstable node we need tr > 0, det > 0, and D > 0, and we just reverse the sign of the diagonal elements, i.e., a = 2, b = 1, c = 1 and d = 2 (see Panel (b)).
c. To make a saddle point we need det < 0, so we need sufficiently small diagonal elements, e.g., a = −1, b = −2, c = −2, and d = −1 (see Panel (c)).
d. To make a stable spiral point we need tr < 0, det > 0, and D < 0. This can be achieved by having a small trace due to small diagonal elements and a large determinant by setting c = −2, i.e., a = −1, b = 2, c = −2 and d = −1 (see Panel (d)).

e. To make an unstable spiral point we need tr > 0, det > 0, and D < 0, which can be achieved by reversing the sign of the diagonal elements, i.e., a = 1, b = 2, c = −2 and d = 1 (see Panel (e)).
f. A neutrally stable center point is obtained when tr = 0, det > 0, and D < 0, which is achieved by setting the diagonal elements to zero, i.e., a = 0, b = 2, c = −2 and d = 0 (see Panel (f)).

Exercises Chapter 6

The model dR/dt = aR − cRN and dN/dt = dRN − eN has the non-trivial steady state (R̄, N̄) = (e/d, a/c). The Jacobian of this steady state is

J = (a − cN̄  −cR̄ ; dN̄  dR̄ − e) = (0  −ce/d ; da/c  0) ,

with tr = 0, det = ae, and D = 0 − 4 det = −4ae. Because the trace is zero the steady state has a “neutral” stability. The eigenvalues of this matrix are

λ± = ±√(−4ae)/2 = ±i√(ae) .
Because the eigenvalues have no real part the system is not structurally stable: any small change of the system will either make the equilibrium stable or unstable. The behavior of the model consists of cycles of neutral stability: any perturbation of the predator or prey densities leads to a new cycle. Because the model is so sensitive to any small change, it is “structurally unstable” and should not be used in a biological context.

Exercises Chapter 7

With tr = 2 and det = 2 we obtain

λ1,2 = (2 ± √(2² − 4 × 2))/2 = (2 ± i√4)/2 = 1 ± i .

For λ1 = 1 + i we obtain

v1 = k (−b, a − λ1) = k (−5, −1 − (1 + i)) = k (−5, −2 − i) = k (−5, −2) − ik (0, 1) ,

where k is an arbitrary real number. And for λ2 = 1 − i we obtain

v2 = k (−5, −1 − (1 − i)) = k (−5, −2) + ik (0, 1) ,

where k is an arbitrary real number.
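This answer can likewise be checked with R's eigen function, which returns the same directions up to (complex) scaling and normalization:

    eigen(matrix(c(-1, 5,
                   -1, 3), 2, 2, byrow = TRUE))
    # $values : 1+1i and 1-1i
    # $vectors: complex multiples of (-5, -2-1i) and (-5, -2+1i)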


Bibliography

Hastings, A. (1997). Population biology: concepts and models. New York: Springer.

Horn, H. S. (1975). Markovian properties of forest succession. In: Ecology and evolution of communities. Harvard University Press.

Panfilov, A. V. (2010). Qualitative analysis of differential equations. [Link]

Thompson, D. W. (1942). On Growth and Form. Cambridge: Cambridge University Press, new edn.
