
MATH202 – Linear systems 2 (notes)

This week we will look at the geometry behind solutions to the matrix form of a linear DE: $\frac{dx}{dt} = A\,x$. We will seek to understand the behaviour of solutions without messy computations (although sometimes we will do the messy work to remind ourselves of what we are able to skip).

Checkpoint: We have seen that the linear system $x' = A\,x$ has solution
$$x = \sum_{k=1}^{m} c_k\, e^{\lambda_k t}\, v_k$$
where the $\{v_k\}$ are eigenvectors in the eigenspaces $E_{\lambda_k}$. This can be written as $x = \Phi(t)\,c$ where
$$\Phi(t) = \begin{bmatrix} e^{\lambda_1 t} v_1 & e^{\lambda_2 t} v_2 & \cdots & e^{\lambda_n t} v_n \end{bmatrix}.$$

10 Geometry of linear systems


Example 10.1 Consider
$$\begin{bmatrix} dx_1/dt \\ dx_2/dt \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
The eigenpairs for this system are $\left(4;\ \begin{bmatrix} 2 \\ 3 \end{bmatrix}\right)$ and $\left(-1;\ \begin{bmatrix} 1 \\ -1 \end{bmatrix}\right)$. The first solutions are of the form
$$c_1\, e^{4t} \begin{bmatrix} 2 \\ 3 \end{bmatrix}.$$
The solutions can be graphed: $x_1(t) = 2\,c_1 e^{4t}$ and $x_2(t) = 3\,c_1 e^{4t}$. Note that both solutions grow exponentially, and the ratio of $x_1$ to $x_2$ is fixed. This is because the solutions are confined to the line generated by the eigenvector $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$; this is the eigenspace $E_4 = \operatorname{span}\begin{bmatrix} 2 \\ 3 \end{bmatrix}$.

[Figure: left, "Graphing $x_1$ and $x_2$ individually": $x_1(t) = 2c_1 e^{4t}$ and $x_2(t) = 3c_1 e^{4t}$ plotted against $t$, growing from initial data $2c_1$ and $3c_1$; right, "Graphing $x_1$ against $x_2$": the trajectory lies on $E_4$ along the eigenvector $[2\ 3]^T$, starting from initial data $c_1 [2\ 3]^T$ and moving away from the origin with increasing $t$.]

Similar pictures can be made for the second solution, $c_2\, e^{-t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, where the solutions are confined to the eigenspace $E_{-1} = \operatorname{span}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

[Figure: left, "Graphing $x_1$ and $x_2$ individually": both components decay to 0 from initial data $c_2$ and $-c_2$; right, "Graphing $x_1$ against $x_2$": the trajectory lies on $E_{-1}$ along the eigenvector $[1\ {-1}]^T$, approaching the origin with increasing $t$.]

Combining the pictures together we can see that a typical solution has components in both
eigenspaces.

[Figure: phase portrait; typical trajectories are hyperbola-like curves with the eigenspaces $E_4$ and $E_{-1}$ as asymptotes.]

The solutions here have a distinctive hyperbolic shape, and the eigenspaces are like asymptotes. This system is said to be of saddle type. □

Checkpoint: The eigenspaces are preserved by solutions of the system: the behaviour in each $E_\lambda$ is simply multiplication by $e^{\lambda t}$, and the general solution is obtained as a direct sum of these "eigen-solutions" when there is an eigenvector basis.
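To make this concrete, here is a minimal numerical sketch (assuming NumPy is available; the variable names are our own, not from the notes) that recovers the eigenpairs of Example 10.1 and assembles the general solution $x(t) = \Phi(t)\,c$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])

# Eigen-decomposition: columns of V are the eigenvectors v_k.
lam, V = np.linalg.eig(A)          # lam contains 4 and -1 (order may vary)

def Phi(t):
    """Fundamental matrix: column k is e^{lambda_k t} v_k."""
    return V * np.exp(lam * t)     # broadcasting scales column k

# General solution x(t) = Phi(t) c, with c fixed by x(0) = x0.
x0 = np.array([1.0, 1.0])
c = np.linalg.solve(Phi(0.0), x0)  # Phi(0) = V, so c = V^{-1} x0

x = Phi(0.5) @ c                   # the solution evaluated at t = 0.5
```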

Example 10.2 Consider
$$\begin{bmatrix} dx_1/dt \\ dx_2/dt \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
We seek a solution in terms of eigenvectors. The first step is to calculate the characteristic polynomial and set it to 0: $\lambda(\lambda + 2) = 0$. The eigenvalues are therefore $\lambda = 0, -2$. Then
$$E_0 = \operatorname{span}\begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad\text{and}\quad E_{-2} = \operatorname{span}\begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

Notice that for any point on the eigenspace $E_0$,
$$x = c_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
and theory tells us that the corresponding solution to the DE is
$$x_1(t) = c_1\, e^{0t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} c_1 \\ c_1 \end{bmatrix}.$$

In fact, it is easy to check that
$$A\,x_1 = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_1 \end{bmatrix} = \begin{bmatrix} -c_1 + c_1 \\ c_1 - c_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
This makes sense, since the derivative of a constant ought to be zero! In fact, all of $E_0$ is a line of fixed points. General solutions are written as
$$x(t) = c_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2\, e^{-2t} \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$
The latter term decays to zero as $t \to \infty$, so the overall picture is as below.

[Figure: phase portrait; trajectories move parallel to $E_{-2}$ and decay onto the line of fixed points $E_0$.]

Example 10.3 Solve the following problem via eigenvalues and eigenvectors and comment on the long term solution:
$$\frac{dx}{dt} = \begin{bmatrix} 4 & 0 & 0 \\ -1 & 3 & 1 \\ 1 & 1 & 3 \end{bmatrix} x, \qquad x_0 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$$

Solution: First,
$$\det\left(\begin{bmatrix} 4 & 0 & 0 \\ -1 & 3 & 1 \\ 1 & 1 & 3 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\right) = (4-\lambda)\left((3-\lambda)^2 - 1\right) = (4-\lambda)^2 (2-\lambda).$$
For $\lambda = 2$, the eigenvector is $v_1 = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$ and the corresponding solution to the DE is
$$x(t) = e^{2t} \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}.$$
For $\lambda = 4$, first form
$$\begin{bmatrix} 4 & 0 & 0 \\ -1 & 3 & 1 \\ 1 & 1 & 3 \end{bmatrix} - 4\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ -1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix}.$$
Eigenvectors can be found via the augmented matrix and row operations:
$$\left[\begin{array}{ccc|c} 0 & 0 & 0 & 0 \\ -1 & -1 & 1 & 0 \\ 1 & 1 & -1 & 0 \end{array}\right] \xrightarrow{\ R_1 \leftrightarrow R_3,\ R_2 \leftarrow R_2 + R_1\ } \left[\begin{array}{ccc|c} 1 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].$$
The system of equations has one pivot and two free variables. Putting $v_2 = s$ and $v_3 = t$, the first row of the matrix reads $v_1 + s - t = 0$. Then
$$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} -s+t \\ s \\ t \end{bmatrix} = s\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \qquad s, t \in \mathbb{R}.$$
Thus, the eigenspace $E_4$ is spanned by two independent eigenvectors, $\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$. This gives two additional solutions to the system:
$$e^{4t}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \quad\text{and}\quad e^{4t}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.$$
Organising all three solutions in a matrix,
$$\Phi(t) = \begin{bmatrix} 0 & -e^{4t} & e^{4t} \\ -e^{2t} & e^{4t} & 0 \\ e^{2t} & 0 & e^{4t} \end{bmatrix}.$$

We can check that this is a fundamental matrix by calculating its determinant (the Wronskian):
$$W(t) = 0 - (-e^{4t})(-e^{2t} e^{4t} - 0) + e^{4t}(0 - e^{4t} e^{2t}) = -2\,e^{10t}.$$
This quantity is never zero, so the matrix is a fundamental matrix. Finally, using the formula for solutions,
$$x(t) = \Phi(t)\,\Phi(0)^{-1} x_0 = \begin{bmatrix} 0 & -e^{4t} & e^{4t} \\ -e^{2t} & e^{4t} & 0 \\ e^{2t} & 0 & e^{4t} \end{bmatrix} \underbrace{\begin{bmatrix} 0 & -1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}^{-1}}_{\text{putting } t = 0 \text{ in } \Phi(t)} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$$

With further algebra, it is possible to calculate
$$\begin{bmatrix} 0 & -1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} -1/2 & -1/2 & 1/2 \\ -1/2 & 1/2 & 1/2 \\ 1/2 & 1/2 & 1/2 \end{bmatrix}$$
and hence
$$\begin{bmatrix} 0 & -1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1/2 \\ 1/2 \\ 3/2 \end{bmatrix}.$$
The solution to the DE is then
$$x(t) = \begin{bmatrix} e^{4t} \\ (e^{2t} + e^{4t})/2 \\ (3e^{4t} - e^{2t})/2 \end{bmatrix}.$$

Looking at this solution, we see $e^{4t}$ and $e^{2t}$; these functions diverge very rapidly as $t \to \infty$. Conversely, as $t \to -\infty$ these functions approach 0. Indeed, every nonzero solution of this DE shares these same properties: in "backwards time" the solution approaches the origin, and in forward time it diverges to infinity (at an exponential rate). Some solutions (the ones whose initial conditions are aligned with the eigenvector $v_1$) diverge like $e^{2t}$; all other solutions are dominated by an $e^{4t}$ term.
Note: the eigenvalues alone reveal the long term behaviour of solutions; we do not need the exact formula! □
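As a sanity check, a short numerical sketch (ours, assuming NumPy; the names are not from the notes) reproduces the closed form for $x(t)$ from the fundamental matrix:

```python
import numpy as np

A = np.array([[ 4.0, 0.0, 0.0],
              [-1.0, 3.0, 1.0],
              [ 1.0, 1.0, 3.0]])

# Columns of Phi(0): the eigenvectors found above, with matching eigenvalues.
V = np.array([[ 0.0, -1.0, 1.0],
              [-1.0,  1.0, 0.0],
              [ 1.0,  0.0, 1.0]])
lam = np.array([2.0, 4.0, 4.0])

x0 = np.array([1.0, 1.0, 1.0])
c = np.linalg.solve(V, x0)        # c = Phi(0)^{-1} x0 = (-1/2, 1/2, 3/2)

t = 1.0
x_t = (V * np.exp(lam * t)) @ c   # Phi(t) c

# Compare with the closed form derived above.
expected = np.array([np.exp(4*t),
                     (np.exp(2*t) + np.exp(4*t)) / 2,
                     (3*np.exp(4*t) - np.exp(2*t)) / 2])
assert np.allclose(x_t, expected)
```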

Checkpoint: The eigenvalues $\lambda$ reveal long term behaviour because the solutions behave as $e^{\lambda t}$. With positive eigenvalues solutions diverge to $\infty$; convergence to 0 occurs only along eigenvectors with $\lambda < 0$.

Connection with diagonalisation
In the last example, the fundamental matrix was
$$\Phi(t) = \begin{bmatrix} 0 & -e^{4t} & e^{4t} \\ -e^{2t} & e^{4t} & 0 \\ e^{2t} & 0 & e^{4t} \end{bmatrix} = \begin{bmatrix} 0 & -1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} e^{2t} & 0 & 0 \\ 0 & e^{4t} & 0 \\ 0 & 0 & e^{4t} \end{bmatrix}.$$
The observation that $\Phi(t)$ (whose columns are always $v_k\, e^{\lambda_k t}$) can be written as a product of a constant matrix and a matrix with exponentials on the diagonal is true generally. In fact,
$$\Phi(t) = \Phi(0)\,\operatorname{diag}(e^{\lambda_k t}).$$
The matrix $\Phi(0)$ has the eigenvectors of $A$ as its columns, and the overall solution to the DE is
$$x(t) = \Phi(t)\,\Phi(0)^{-1} x_0 = \underbrace{\Phi(0)\,\operatorname{diag}(e^{\lambda_k t})\,\Phi(0)^{-1}}_{\text{similarity transformation}}\, x_0.$$
This method is really diagonalisation in disguise! If we put $y = \Phi(0)^{-1} x$ then a simple calculation shows that
$$\frac{dy_k}{dt} = \lambda_k\, y_k,$$
and the system in $y$ coordinates is completely decoupled, with solutions $y_k(t) = e^{\lambda_k t}\, y_k(0)$. This is the same diagonalisation as seen in other courses, with $P^{-1} A P = D$, but here $D = \operatorname{diag}(\lambda_k)$ and $P = \Phi(0)$.
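In code, this observation says that $\Phi(0)\operatorname{diag}(e^{\lambda_k t})\Phi(0)^{-1}$ equals the matrix exponential $e^{At}$. A minimal sketch (ours, assuming NumPy/SciPy) checking this for the matrix of Example 10.3:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 4.0, 0.0, 0.0],
              [-1.0, 3.0, 1.0],
              [ 1.0, 1.0, 3.0]])

lam, P = np.linalg.eig(A)   # P plays the role of Phi(0)

t = 0.7
# Similarity transformation P diag(e^{lam t}) P^{-1} ...
M = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)
# ... agrees with the matrix exponential e^{At} (A is diagonalisable here).
assert np.allclose(M, expm(A * t))
```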

11 Linear systems of DEs with complex eigenvalues
Real matrices can have complex conjugate pairs of eigenvalues. We'll now look at the implications of this for linear systems. It is handy to recall Euler's formula, $e^{i\omega} = \cos\omega + i\sin\omega$; it will be used below. It provides a link between complex exponential solutions and rotational behaviour.

Lemma 11.1. Suppose that $A$ is a square real matrix and $\lambda = \mu + i\,\omega$ is an eigenvalue with eigenvector $u + i\,v$. Then $\mu - i\,\omega$ is an eigenvalue with eigenvector $u - i\,v$.

Proof. By direct multiplication, using $i^2 = -1$,
$$A(u + i\,v) = (\mu + i\,\omega)(u + i\,v) = (\mu u - \omega v) + i\,(\mu v + \omega u).$$
Pulling out the real and imaginary parts of this equation,
$$\text{Re:}\quad A u = \mu u - \omega v \qquad\qquad \text{Im:}\quad A v = \mu v + \omega u.$$
Then
$$A(u - i\,v) = (\mu u - \omega v) - i\,(\mu v + \omega u) = \cdots = (\mu - i\,\omega)(u - i\,v).$$
This completes the proof. □

Note: The proof of Lemma 11.1 can be used to show that the plane $P := \operatorname{span}(u, v)$ is preserved under multiplication by $A$. This means that if $x \in P$ then $A x \in P$, and hence $\frac{dx}{dt}$ is parallel to $P$. Hence, solutions that start in $P$ must stay in $P$. The next theorem makes this explicit. □

Theorem 11.2. Suppose that $\{\mu \pm i\,\omega;\ u \pm i\,v\}$ are eigenpairs for a real matrix $A$. Then the system of DEs $x' = A x$ has the following solutions:
$$x_1 = e^{\mu t}\left(\cos(\omega t)\,u - \sin(\omega t)\,v\right) \quad\text{and}\quad x_2 = e^{\mu t}\left(\sin(\omega t)\,u + \cos(\omega t)\,v\right).$$

Proof. Formally,
$$e^{(\mu \pm i\omega)t}(u \pm i\,v) = e^{\mu t}\left(\cos(\omega t) \pm i\sin(\omega t)\right)(u \pm i\,v) = e^{\mu t}\left(\cos(\omega t)\,u - \sin(\omega t)\,v\right) \pm i\,e^{\mu t}\left(\sin(\omega t)\,u + \cos(\omega t)\,v\right)$$
are solutions to the DE. Call these solutions $x_+$ and $x_-$. Other solutions can be written as
$$x = c_1 x_+ + c_2 x_-;$$
all solutions to the real IVP must be real functions. It can be shown that $c_2 = \overline{c_1}$ is both necessary and sufficient for $x$ to be real. Taking $c_1 = c_2 = \frac{1}{2}$ gives $x_1$, and taking $c_1 = -\frac{i}{2}$, $c_2 = \frac{i}{2}$ gives $x_2$. □

Example 11.3 Solve
$$x' = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x.$$

Solution: The characteristic equation for this matrix is $\lambda^2 + 1 = 0$; this has solutions $\lambda = \pm i$. Seeking the eigenvectors,
$$[A - i I \mid 0] = \left[\begin{array}{cc|c} -i & 1 & 0 \\ -1 & -i & 0 \end{array}\right].$$
The row operation $R_2 \leftarrow R_2 + i\,R_1$ reduces this to the form
$$\left[\begin{array}{cc|c} -i & 1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$
The eigenvector is therefore $\begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. It is easy to check that
$$A\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} - i \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) = -i\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} - i \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right).$$
Using $u = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $\mu = 0$ and $\omega = 1$ in Theorem 11.2 gives
$$x_1 = \cos t \begin{bmatrix} 1 \\ 0 \end{bmatrix} - \sin t \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos t \\ -\sin t \end{bmatrix} \quad\text{and}\quad x_2 = \sin t \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \cos t \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}$$
as two independent solutions. □

Example 11.4 Solve
$$x' = \begin{bmatrix} 2 & 8 \\ -1 & -2 \end{bmatrix} x.$$

Solution: The characteristic equation for this matrix is $\lambda^2 + 4 = 0$; this has solutions $\lambda = \pm 2i$. Seeking the eigenvectors,
$$[A - 2i I \mid 0] = \left[\begin{array}{cc|c} 2-2i & 8 & 0 \\ -1 & -2-2i & 0 \end{array}\right].$$
The row operation $R_2 \leftarrow R_2 + \frac{2+2i}{8} R_1$ followed by $R_1 \leftarrow \frac{2+2i}{8} R_1$ reduces this to the form
$$\left[\begin{array}{cc|c} 1 & 2+2i & 0 \\ 0 & 0 & 0 \end{array}\right].$$
The eigenvector is therefore $\begin{bmatrix} 2+2i \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \end{bmatrix} + i \begin{bmatrix} 2 \\ 0 \end{bmatrix}$. Using $u = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$, $v = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$, $\mu = 0$ and $\omega = 2$ in Theorem 11.2 gives
$$x_1 = \begin{bmatrix} 2\cos 2t - 2\sin 2t \\ -\cos 2t \end{bmatrix} \quad\text{and}\quad x_2 = \begin{bmatrix} 2\sin 2t + 2\cos 2t \\ -\sin 2t \end{bmatrix}$$
as two independent solutions. □
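Since solutions of $x' = Ax$ are given by the matrix exponential, a quick check of Example 11.4 (a sketch of ours, assuming NumPy/SciPy) is to compare $x_1(t)$ against $e^{At} x_1(0)$, which must agree by uniqueness of solutions:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 2.0,  8.0],
              [-1.0, -2.0]])

def x1(t):
    """Real solution from Theorem 11.2 with u = (2,-1), v = (2,0), omega = 2."""
    return np.array([2*np.cos(2*t) - 2*np.sin(2*t),
                     -np.cos(2*t)])

# x1(0) = u, so x1(t) should equal e^{At} u for every t.
for t in (0.0, 0.3, 1.7):
    assert np.allclose(x1(t), expm(A * t) @ x1(0.0))
```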

Example 11.5 Solve $y''' + 2y'' + y' + 2y = 0$ and interpret the solutions.

Solution: First, this higher order equation can be reduced to a first order system by adding additional variables for the first and second derivatives. Doing this,
$$y' = z, \qquad z' = w, \qquad w' = (z')' = y''' = -2y - y' - 2y'' = -2y - z - 2w.$$
Writing this in matrix form,
$$\begin{bmatrix} y' \\ z' \\ w' \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -1 & -2 \end{bmatrix} \begin{bmatrix} y \\ z \\ w \end{bmatrix}.$$
The characteristic polynomial is $-\lambda(\lambda^2 + 2\lambda + 1) - 2 = -(\lambda^2 + 1)(\lambda + 2)$. The eigenvalues are therefore $\lambda = -2$ and $\lambda = \pm i$. By the usual method, for $\lambda = -2$ the eigenvector is $\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$. For $\lambda = \pm i$ the eigenvector is $\begin{bmatrix} 1 \\ \pm i \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \pm i \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$. The solutions are thus
$$e^{-2t}\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}, \qquad \cos t\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} - \sin t\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \quad\text{and}\quad \sin t\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} + \cos t\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.$$
In particular, $y = c_1 e^{-2t} + c_2 \sin t + c_3 \cos t$. □
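For comparison with the closed form, here is a small sketch (ours, assuming SciPy; analogous to the ode45 computation used later in these notes) that integrates the companion system numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Companion system for y''' + 2y'' + y' + 2y = 0, state u = (y, z, w).
def rhs(t, u):
    y, z, w = u
    return [z, w, -2*y - z - 2*w]

# Initial condition chosen so that c1 = 1, c2 = 0, c3 = 1:
# y = e^{-2t} + cos t gives (y, y', y'')(0) = (2, -2, 3).
sol = solve_ivp(rhs, (0.0, 5.0), [2.0, -2.0, 3.0], rtol=1e-8, atol=1e-10)

y_exact = np.exp(-2*sol.t) + np.cos(sol.t)
assert np.allclose(sol.y[0], y_exact, atol=1e-5)
```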

Geometric interpretation of systems of DEs with complex eigenvalues

The solutions in the last example had two basic types: exponential decay along the eigenspace $E_{-2}$, and periodic behaviour in the plane spanned by $\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$. [To see the periodicity, note that if $t$ is increased by $2\pi$ then both sine and cosine repeat.] These solutions are indicated in the left picture below. A general solution of this system is a combination of the rotational motion in the plane and decay along the other eigenspace, as depicted on the right.

[Figure: left, decay along $E_{-2}$ and rotation in the plane $\operatorname{span}\left([1\ 0\ {-1}]^T, [0\ 1\ 0]^T\right)$; right, a general solution combining the rotation in the plane with decay along $E_{-2}$.]

12 Stability in linear systems
In this section we pick up the emerging geometric theme. To get started, let's recap a few types of behaviour that we have seen so far. From Theorem 11.2, when a linear system has eigenvalues $\mu \pm i\,\omega$ the solutions are linear combinations of
$$e^{\mu t}\cos(\omega t) \quad\text{and}\quad e^{\mu t}\sin(\omega t)$$
(multiplied by certain vectors to preserve an invariant plane). The value of $\omega$ gives an angular frequency, and $\mu$ determines whether solutions decay to zero ($\mu < 0$), follow elliptical orbits ($\mu = 0$) or grow exponentially ($\mu > 0$).

Possible solutions with complex conjugate eigenvalues: spiral in (µ < 0, left), elliptical
(µ = 0, centre), spiral out (µ > 0, right).

Hyperbolic saddles
The behaviour in Example 10.1 is a hyperbolic saddle, with most solutions diverging asymptotically along the eigenspace associated with the positive eigenvalue. Only those solutions with an initial condition on the eigenspace of the negative eigenvalue can approach 0.

[Figure: the saddle phase portrait from Example 10.1, with eigenspaces $E_4$ and $E_{-1}$ as asymptotes.]

It is natural to wonder about other possibilities.

Stable and unstable nodes

Example 12.1 The DE $x' = \begin{bmatrix} -4 & 4 \\ 1 & -4 \end{bmatrix} x$ can be solved as
$$x = c_1\, e^{-2t}\begin{bmatrix} 2 \\ 1 \end{bmatrix} + c_2\, e^{-6t}\begin{bmatrix} 2 \\ -1 \end{bmatrix}.$$
Both solutions converge exponentially to 0.

[Figure: phase portrait for Example 12.1; all trajectories converge to the origin, with the eigenspaces $E_{-2}$ and $E_{-6}$ shown.]

Checkpoint: If all eigenvalues are real and negative then every solution
converges exponentially to 0. The origin is a fixed point and is called a
stable node or attractor. If all the eigenvalues are real and positive then
the origin is called an unstable node or repellor.

Centres and spirals

Example 12.2 The DE $y'' + \omega^2 y = 0$ can be written in systems form as
$$x' = \begin{bmatrix} 0 & 1 \\ -\omega^2 & 0 \end{bmatrix} x.$$
The eigenvalues are $\pm i\,\omega$ and the eigenvectors are $\begin{bmatrix} 1 \\ 0 \end{bmatrix} \pm i \begin{bmatrix} 0 \\ \omega \end{bmatrix}$. By Theorem 11.2, the solutions are
$$x = c_1 \begin{bmatrix} \cos\omega t \\ -\omega\sin\omega t \end{bmatrix} + c_2 \begin{bmatrix} \sin\omega t \\ \omega\cos\omega t \end{bmatrix}.$$
All solutions are periodic (with period $\frac{2\pi}{\omega}$), and sweep out elliptical orbits.

Checkpoint: With complex conjugate eigenvalues $\mu \pm i\,\omega$, the origin is either a stable spiral (attracting) or an unstable spiral (repelling), according to whether $\mu < 0$ or $\mu > 0$. If $\mu = 0$ then the eigenvalues are purely imaginary and all solutions are periodic; the origin is a centre. Such situations are sometimes called elliptical.

Nonhyperbolicity
Definition 12.3 A fixed point is called degenerate if some of the eigenvalues are 0. A system is called hyperbolic if all eigenvalues have non-zero real part.

Example 10.2 is degenerate, and hence non-hyperbolic. Centres are also nonhyperbolic. The stability of the origin is defined in terms of what solutions near $(0, 0)$ do. If all solutions which start near 0 stay near 0, then the origin is generally considered to be "stable". If some solutions which start near 0 diverge from 0, then the origin is considered "unstable".

Classification exercise
To consolidate the various stability options for 2 × 2 systems of linear differential equations, use eigenvectors and eigenvalues to classify the following systems. State the stability and behaviour near the fixed point at $x^* = 0$. (A small numerical checker is sketched after the list.)

1. $x' = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix} x$

2. $x' = \begin{bmatrix} -5 & 1 \\ 1 & -5 \end{bmatrix} x$

3. $x' = \begin{bmatrix} 0 & 1 \\ -2 & -2 \end{bmatrix} x$

4. $x' = \begin{bmatrix} 2 & 8 \\ -1 & -2 \end{bmatrix} x$

5. $x' = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} x$

6. $x' = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix} x$

7. $x' = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} x$

8. $x' = \begin{bmatrix} -1/2 & -1/2 \\ -1/2 & -5/2 \end{bmatrix} x$

9. $x' = \begin{bmatrix} 0.4 & 0 \\ -0.6 & -3 \end{bmatrix} x$
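The checker mentioned above: a short script of ours (assuming NumPy; the category names follow this section) that classifies a 2 × 2 linear system from its eigenvalues, useful for verifying answers.

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the fixed point at 0 for x' = Ax, with A a real 2x2 matrix."""
    lam = np.linalg.eigvals(A)
    re = lam.real
    if np.any(np.abs(re) < tol):
        return "non-hyperbolic (centre if purely imaginary, degenerate if 0)"
    if np.all(np.abs(lam.imag) < tol):   # real eigenvalues
        if np.all(re < 0):
            return "stable node"
        if np.all(re > 0):
            return "unstable node"
        return "saddle"
    # Complex conjugate pair: both share the same real part.
    return "stable spiral" if np.all(re < 0) else "unstable spiral"

print(classify(np.array([[1.0, 2.0], [3.0, 2.0]])))    # saddle (Example 10.1)
print(classify(np.array([[-5.0, 1.0], [1.0, -5.0]])))  # stable node
```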

Linear systems of DEs with repeated eigenvalues
Example 12.4 The DE $y'' + 2y' + y = 0$ can be converted to systems form by writing
$$\begin{bmatrix} y' \\ z' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} \begin{bmatrix} y \\ z \end{bmatrix}.$$
The matrix has characteristic polynomial $\lambda^2 + 2\lambda + 1$, which has $\lambda = -1$ as a repeated root. The corresponding eigenvectors are found by solving:
$$[A - (-1)I \mid 0] = \left[\begin{array}{cc|c} 1 & 1 & 0 \\ -1 & -1 & 0 \end{array}\right] \xrightarrow{\ R_2 \leftarrow R_2 + R_1\ } \left[\begin{array}{cc|c} 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$
The matrix has only a one-dimensional solution space, so $E_{-1} = \operatorname{span}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, with the corresponding solution
$$\begin{bmatrix} y \\ z \end{bmatrix} = e^{-t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$
The eigenvector method has not produced a second independent solution for this equation.

Guessing a resolution
In the auxiliary equation method, when a repeated root occurs, two solutions are obtained as $e^{\lambda t}$ and $t\,e^{\lambda t}$. In the systems version, we have $x_1 = e^{\lambda t} v$; it is worth guessing that $t\,e^{\lambda t} v$ might work. Testing this:
$$\frac{d}{dt}\left(t\,e^{\lambda t} v\right) = (1 + \lambda t)\,e^{\lambda t} v \qquad\text{but}\qquad A\left(t\,e^{\lambda t} v\right) = t\,e^{\lambda t} A v = \lambda t\,e^{\lambda t} v.$$
Unfortunately, the two expressions are not the same, so this guess does not work. However, as a final attempt at a rescue, one could try
$$x_2 = t\,e^{\lambda t} v + e^{\lambda t} u$$
for some choice of $u$. Then one has
$$\frac{d}{dt} x_2 = (1 + \lambda t)\,e^{\lambda t} v + \lambda\,e^{\lambda t} u \qquad\text{and}\qquad A\,x_2 = \lambda t\,e^{\lambda t} v + e^{\lambda t} A u.$$
Comparing the terms on the right, this method appears promising if $u$ can be found to satisfy
$$v + \lambda u = A u.$$
Rewritten, a second independent solution can be found if $(A - \lambda I)\,u = v$ has a solution.




Theory from linear algebra and solution for defective matrices
Fact! If $\lambda$ is an $m$-times repeated root of the characteristic polynomial then there are $m$ linearly independent solutions to
$$(A - \lambda I)^m u = 0.$$
Not all of these need be eigenvectors.
A matrix is called defective if it does not have enough linearly independent eigenvectors. The theory of this situation gives the Jordan Normal Form of a matrix $A$.

Theorem 12.5. Suppose that $\lambda$ is an eigenvalue of $A$ of multiplicity 2, but $E_\lambda = \operatorname{span}(v)$ is only a one-dimensional eigenspace. Then there is a vector $u$ such that $(A - \lambda I)\,u = v$ and $x = t\,e^{\lambda t} v + e^{\lambda t} u$ is a solution to $\frac{dx}{dt} = A x$.

Proof. By the remark preceding the statement of the theorem, there are two linearly independent solutions to $(A - \lambda I)^2 u = 0$. First, since $v$ is an eigenvector, $(A - \lambda I)\,v = 0$. Hence
$$(A - \lambda I)^2 v = (A - \lambda I)\underbrace{(A - \lambda I)\,v}_{=\,0} = (A - \lambda I)\,0 = 0$$

too. But there must be another linearly independent solution
$$(A - \lambda I)^2 u_2 = 0.$$
However, $u_2$ is not an eigenvector, so $u_1 := (A - \lambda I)\,u_2 \neq 0$. Indeed,
$$(A - \lambda I)\,u_1 = (A - \lambda I)(A - \lambda I)\,u_2 = (A - \lambda I)^2 u_2 = 0.$$
That is, $u_1$ is an eigenvector! We can therefore solve $(A - \lambda I)\,u = v$ (by taking $u$ to be a suitable multiple of $u_2$). Note that $A u = \lambda u + v$.
Returning to the DE, the given $x$ can be written as
$$x = \begin{bmatrix} v & u \end{bmatrix} \begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix}.$$
Then
$$A x = A\begin{bmatrix} v & u \end{bmatrix}\begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix} = \begin{bmatrix} A v & A u \end{bmatrix}\begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix} = \begin{bmatrix} \lambda v & \lambda u + v \end{bmatrix}\begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix}.$$
On the other hand,
$$\frac{dx}{dt} = \frac{d}{dt}\left(\begin{bmatrix} v & u \end{bmatrix}\begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix}\right) = \begin{bmatrix} v & u \end{bmatrix}\begin{bmatrix} (1 + \lambda t)\,e^{\lambda t} \\ \lambda\,e^{\lambda t} \end{bmatrix} = \begin{bmatrix} v & u \end{bmatrix}\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}\begin{bmatrix} t\,e^{\lambda t} \\ e^{\lambda t} \end{bmatrix}.$$
Comparing the two expressions on the right shows that $x$ solves the DE. □

Clearly, something similar can happen for the case of general $m$. For now, it suffices to consider the case $m = 2$. If $v$ is the eigenvector of the defective matrix, then
$$x_1 = e^{\lambda t} v$$

is the first solution. By Theorem 12.5,
$$x_2 = t\,e^{\lambda t} v + e^{\lambda t} u$$
is also a solution. To check for linear independence, suppose that
$$0 = c_1 x_1 + c_2 x_2 = \left((c_1 + t\,c_2)\,v + c_2 u\right) e^{\lambda t}.$$
Because $\{u, v\}$ are linearly independent, this can happen only if $c_1 = c_2 = 0$.


Returning to Example 12.4: the DE $y'' + 2y' + y = 0$ corresponds to the matrix system
$$x' = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} x.$$
The characteristic polynomial is $(\lambda + 1)^2 = 0$. The repeated eigenvalue is $\lambda = -1$. Then
$$[A - \lambda I \mid 0] = [A - (-1)I \mid 0] = \left[\begin{array}{cc|c} 1 & 1 & 0 \\ -1 & -1 & 0 \end{array}\right].$$
After row reduction, there is only a one-dimensional eigenspace, $E_{-1} = \operatorname{span}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. The first solution to the system is therefore $x_1 = e^{-t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. For a second solution, solve $(A - \lambda I)\,u = v$. Since $v = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$, the augmented matrix for this system is
$$\left[\begin{array}{cc|c} 1 & 1 & 1 \\ -1 & -1 & -1 \end{array}\right] \xrightarrow{\ R_2 \leftarrow R_2 + R_1\ } \left[\begin{array}{cc|c} 1 & 1 & 1 \\ 0 & 0 & 0 \end{array}\right].$$
Solving, $u = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + s\begin{bmatrix} -1 \\ 1 \end{bmatrix}$ for arbitrary $s \in \mathbb{R}$. The vector associated with the free parameter spans the eigenspace (and will not give any useful additional solution). Taking $s = 1$ (an essentially arbitrary choice), we obtain $u = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and
$$x_2 = t\,e^{\lambda t} v + e^{\lambda t} u = \begin{bmatrix} t\,e^{-t} \\ (1 - t)\,e^{-t} \end{bmatrix}.$$
This leads to the solution $x = c_1 x_1 + c_2 x_2$ and hence
$$y = c_1 e^{-t} + c_2\, t\,e^{-t} \qquad \text{(while } y' = -c_1 e^{-t} + c_2 (1 - t)\,e^{-t}\text{)}. \ \square$$
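A quick numerical confirmation (our sketch, assuming NumPy/SciPy) that the generalized-eigenvector solution $x_2$ really solves the system:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0,  1.0],
              [-1.0, -2.0]])
lam = -1.0
v = np.array([1.0, -1.0])   # eigenvector
u = np.array([0.0,  1.0])   # generalized eigenvector: (A - lam I) u = v

assert np.allclose((A - lam*np.eye(2)) @ u, v)

def x2(t):
    return t*np.exp(lam*t)*v + np.exp(lam*t)*u

# By uniqueness of solutions, x2(t) must agree with e^{At} x2(0) = e^{At} u.
for t in (0.0, 0.5, 2.0):
    assert np.allclose(x2(t), expm(A*t) @ u)
```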

An example from chemical kinetics
A simple chemical reaction scheme
$$W \xrightarrow{k_1} X \xrightarrow{k_2} Y \xrightarrow{k_3} Z$$
leads to the system of DEs:
$$\frac{dw}{dt} = -k_1 w, \qquad \frac{dx}{dt} = k_1 w - k_2 x, \qquad \frac{dy}{dt} = k_2 x - k_3 y, \qquad \frac{dz}{dt} = k_3 y,$$
where the variables are concentrations of the chemical species W, X, Y, Z and the constants $k_1, k_2, k_3$ are reaction rates. When $k_1 = 0.002$, $k_2 = 0.08$, $k_3 = 1$ and the initial condition is $(w_0, x_0, y_0, z_0) = (500, 0, 0, 0)$, this can be written as
$$\begin{bmatrix} w' \\ x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} -0.002 & 0 & 0 & 0 \\ 0.002 & -0.08 & 0 & 0 \\ 0 & 0.08 & -1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix}.$$
The eigenvalues are $\{-0.002, -0.08, -1, 0\}$. The eigenvectors are the columns of the matrix $\Phi(0)$,
$$\Phi(0) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0256 & 1 & 0 & 0 \\ 0.0021 & 0.0870 & 1 & 0 \\ -1.0277 & -1.087 & -1 & 1 \end{bmatrix}.$$
The solution to the system is thus
$$\begin{bmatrix} w(t) \\ x(t) \\ y(t) \\ z(t) \end{bmatrix} = c_1 e^{-0.002t}\begin{bmatrix} 1 \\ 0.0256 \\ 0.0021 \\ -1.0277 \end{bmatrix} + c_2 e^{-0.08t}\begin{bmatrix} 0 \\ 1 \\ 0.087 \\ -1.087 \end{bmatrix} + c_3 e^{-t}\begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} + c_4 \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
(note that $e^{0t} = e^0 = 1$ for every $t$). The last step is to calculate the constants $c_1, \ldots, c_4$.
But these are obtained (to 4 d.p.) as
$$\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \Phi(0)^{-1} x_0 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0256 & 1 & 0 & 0 \\ 0.0021 & 0.0870 & 1 & 0 \\ -1.0277 & -1.087 & -1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 500 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 500 \\ -12.8205 \\ 0.0871 \\ 500 \end{bmatrix}.$$
The solution to the system is thus
$$\begin{bmatrix} w(t) \\ x(t) \\ y(t) \\ z(t) \end{bmatrix} = 500\, e^{-0.002t}\begin{bmatrix} 1 \\ 0.0256 \\ 0.0021 \\ -1.0277 \end{bmatrix} - 12.8205\, e^{-0.08t}\begin{bmatrix} 0 \\ 1 \\ 0.087 \\ -1.087 \end{bmatrix} + 0.0871\, e^{-t}\begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} + 500\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.$$

From this formula we can see that $\lim_{t\to\infty}(w(t), x(t), y(t), z(t)) = (0, 0, 0, 500)$: all the concentration is eventually chemical Z.
The solutions can be plotted too.

[Figure: solutions computed using ode45 in Matlab.]
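An equivalent computation in Python (our sketch, assuming SciPy; solve_ivp with the default RK45 method plays the role of Matlab's ode45):

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.002, 0.08, 1.0

def kinetics(t, c):
    w, x, y, z = c
    return [-k1*w,
            k1*w - k2*x,
            k2*x - k3*y,
            k3*y]

sol = solve_ivp(kinetics, (0.0, 3000.0), [500.0, 0.0, 0.0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-10)

# Long-term behaviour: the state approaches (0, 0, 0, 500) as t grows.
print(sol.y[:, -1])
```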

By viewing the solutions in phase space, the connection with the DE (defined by the direction
field) is clear.

Finally, a more interesting model includes an autocatalytic reaction:
$$X + 2Y \xrightarrow{k_4} 3Y.$$
This leads to the revised system of DEs:
$$w' = -k_1 w, \qquad x' = k_1 w - k_2 x - k_4 x y^2, \qquad y' = k_2 x - k_3 y + k_4 x y^2, \qquad z' = k_3 y.$$
Now there is a nonlinear term due to the autocatalytic reaction; its strength is controlled by the parameter $k_4$. We have no chance of solving this equation analytically, but we can handle it numerically; a sketch of such a computation is given below. In fact, with the same $k_1, k_2, k_3$, there are dramatic changes when the nonlinearity is turned on.
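A minimal way to experiment with this numerically (our sketch, assuming SciPy; the value of $k_4$ is an illustrative guess, not taken from the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.002, 0.08, 1.0
k4 = 1e-4   # illustrative nonlinearity strength; not specified in the notes

def autocatalytic(t, c):
    w, x, y, z = c
    r = k4 * x * y**2          # rate of the autocatalytic step X + 2Y -> 3Y
    return [-k1*w,
            k1*w - k2*x - r,
            k2*x - k3*y + r,
            k3*y]

sol = solve_ivp(autocatalytic, (0.0, 3000.0), [500.0, 0.0, 0.0, 0.0],
                rtol=1e-8, atol=1e-10)
```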

