Section 4 Eigenvalues and Eigenvectors Lecture
Definition 4.1. The number λ ∈ ℂ is an eigenvalue of a matrix A ∈ M(n, n) if there exists a nonzero vector X ∈ ℂⁿ, called a corresponding eigenvector, such that A ∙ X = λ ∙ X.
Using the matrix representation for vectors in ℂⁿ, let X = [x₁, x₂, …, xₙ]ᵀ be an n × 1 matrix.
The matrix equation, 𝐴 ∙ 𝑋 = 𝜆 ∙ 𝑋, may be written in the form: (𝐴 − 𝜆𝐼) ∙ 𝑋 = 𝟎.
The eigenvalue equation A ∙ X = λ ∙ X yields a homogeneous system of equations, which has a nontrivial solution X ≠ 0 if and only if det(A − λI) = 0.
By expansion we get the characteristic polynomial of A, a polynomial of degree n in λ:

det(A − λI) = det \begin{bmatrix} a_{11} − λ & a_{12} & ⋯ & a_{1n} \\ a_{21} & a_{22} − λ & ⋯ & a_{2n} \\ ⋮ & ⋮ & ⋱ & ⋮ \\ a_{n1} & a_{n2} & ⋯ & a_{nn} − λ \end{bmatrix} = P_n(λ).

The equation det(A − λI) = 0 is called the characteristic equation of A; its roots are the eigenvalues of A.
Beata Ciałowicz ~ 28 ~
Mathematical Analysis and Linear Algebra – Section 4 – Eigenvalues and eigenvectors
Remarks.
1. For a 2×2 matrix A the characteristic polynomial has the form P₂(λ) = λ² − S₁λ + S₂ = 0, where:
S₁ = a₁₁ + a₂₂ is the sum of the main diagonal elements,
S₂ = det A.
2. For a 3×3 matrix A the characteristic polynomial has the form P₃(λ) = λ³ − S₁λ² + S₂λ − S₃ = 0, where:
S₁ = a₁₁ + a₂₂ + a₃₃ is the sum of the main diagonal elements,
S₂ = M₁₁ + M₂₂ + M₃₃ is the sum of the minors of the main diagonal elements,
S₃ = det A.
Example 4.1. Determine the eigenvalues and eigenvectors of the matrices:

A = \begin{bmatrix} 2 & 4 \\ 5 & 3 \end{bmatrix},  B = \begin{bmatrix} 2 & −3 & 1 \\ 3 & 1 & 3 \\ −5 & 2 & −4 \end{bmatrix}.
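For a 2×2 matrix the characteristic equation from Remark 1 above can be solved directly. A minimal Python sketch for matrix A of this example (it assumes real eigenvalues, i.e. a nonnegative discriminant):

```python
import math

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix via P2(lam) = lam^2 - S1*lam + S2 = 0,
    where S1 is the trace and S2 the determinant (real roots assumed)."""
    s1 = a11 + a22                      # sum of main-diagonal elements
    s2 = a11 * a22 - a12 * a21          # det A
    disc = math.sqrt(s1 * s1 - 4 * s2)  # discriminant of P2
    return (s1 + disc) / 2, (s1 - disc) / 2

lam1, lam2 = eig2(2, 4, 5, 3)           # matrix A above
# lam1 = 7.0, lam2 = -2.0

# an eigenvector for lam solves (a11 - lam)*x + a12*y = 0,
# so (a12, lam - a11) works whenever a12 != 0:
v1 = (4, 7 - 2)                         # (4, 5)  for lam1 = 7
v2 = (4, -2 - 2)                        # (4, -4) ~ (1, -1) for lam2 = -2
```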
Properties of eigenvalues and eigenvectors:
1. If 𝐴 is triangular, the eigenvalues 𝜆1 , 𝜆2 , … , 𝜆𝑛 are the diagonal entries 𝑎11 , 𝑎22 , … , 𝑎𝑛𝑛 .
2. If 𝐴 is a symmetric matrix, the eigenvalues 𝜆1 , 𝜆2 , … , 𝜆𝑛 ∈ ℝ are real numbers.
3. A matrix 𝐴 is invertible if and only if 𝜆1 ∙ 𝜆2 ∙ … ∙ 𝜆𝑛 ≠ 0 (every eigenvalue is nonzero).
In other words, det A = 0 if and only if λᵢ = 0 for some i.
4. If A is an orthogonal matrix (A⁻¹ = Aᵀ, so Aᵀ ∙ A = I), then every eigenvalue satisfies |λ| = 1; in particular, every real eigenvalue is 1 or −1.
5. If 𝜆1 , 𝜆2 , … , 𝜆𝑛 are eigenvalues of a matrix 𝐴, with corresponding eigenvectors 𝑋1 , 𝑋2 , … , 𝑋𝑛 , then:
a) its transpose 𝐴𝑇 has the same eigenvalues 𝜆1 , 𝜆2 , … , 𝜆𝑛 ,
b) the eigenvalues of the multiplicative inverse A⁻¹ of a matrix A are the reciprocals of the eigenvalues of the matrix itself: 1/λ₁, 1/λ₂, …, 1/λₙ; the eigenvectors are the same X₁, X₂, …, Xₙ,
c) a matrix 𝐵 = 𝑘 ∙ 𝐴, where 𝑘 ∈ ℝ\{0} has eigenvalues 𝑘 ∙ 𝜆1 , 𝑘 ∙ 𝜆2 , … , 𝑘 ∙ 𝜆𝑛 and the same
eigenvectors 𝑋1 , 𝑋2 , … , 𝑋𝑛 ,
d) a matrix 𝐵 = 𝐴 − 𝐼 has eigenvalues 𝜆1 − 1, 𝜆2 − 1, … , 𝜆𝑛 − 1,
e) a matrix B = Aᵏ, k ∈ ℕ, has eigenvalues (λ₁)ᵏ, (λ₂)ᵏ, …, (λₙ)ᵏ and the same eigenvectors X₁, X₂, …, Xₙ.
Remarks.
1. The sum of the eigenvalues of a square matrix A equals the trace of this matrix, i.e. the sum of the elements on the main diagonal:
λ₁ + λ₂ + ⋯ + λₙ = a₁₁ + a₂₂ + ⋯ + aₙₙ = tr(A).
2. The product of all eigenvalues of a square matrix A equals its determinant:
λ₁ ∙ λ₂ ∙ … ∙ λₙ = det A.
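Both remarks can be checked numerically on matrix A of Example 4.1, whose eigenvalues are 7 and −2 (the roots of λ² − 5λ − 14 = 0) — a small sketch:

```python
A = [[2, 4], [5, 3]]                # matrix A from Example 4.1
eigs = [7, -2]                      # roots of lam^2 - 5*lam - 14 = 0

trace = A[0][0] + A[1][1]           # tr(A): sum of main-diagonal elements
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

assert sum(eigs) == trace           # 7 + (-2) =  5  = tr(A)
assert eigs[0] * eigs[1] == det     # 7 * (-2) = -14 = det(A)
```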
Theorem 4.1 (Cayley-Hamilton) Every matrix 𝐴𝑛×𝑛 satisfies its own characteristic equation, that is
𝑃𝑛 (𝐴) = 𝟎, where 𝑃𝑛 (𝜆) is the characteristic polynomial of 𝐴.
Example 4.2. Given the matrix A = \begin{bmatrix} 2 & 2 \\ 1 & 3 \end{bmatrix}, find A⁴ and A⁻¹ using the Cayley–Hamilton theorem.
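A sketch of the idea in Python: the characteristic polynomial of this A is P₂(λ) = λ² − 5λ + 4 (trace 5, determinant 4), so by Cayley–Hamilton A² = 5A − 4I, which lets A⁴ and A⁻¹ be expressed through A and I alone. This is a numerical check, not a substitute for the hand derivation:

```python
def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 2], [1, 3]]
I = [[1, 0], [0, 1]]

# P2(lam) = lam^2 - 5*lam + 4 = 0, hence A^2 = 5A - 4I
A2 = [[5 * A[i][j] - 4 * I[i][j] for j in range(2)] for i in range(2)]
A4 = matmul(A2, A2)                 # A^4 = (A^2)^2

# From A^2 - 5A + 4I = 0:  A * (5I - A) / 4 = I,  so A^{-1} = (5I - A) / 4
Ainv = [[(5 * I[i][j] - A[i][j]) / 4 for j in range(2)] for i in range(2)]
```

Multiplying out confirms A⁴ = [[86, 170], [85, 171]] and A ∙ A⁻¹ = I.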
4.2. Matrix diagonalization
Matrix diagonalization is the process of taking a square matrix and converting it into a diagonal matrix.
Definition 4.2. A square matrix 𝐴 ∈ 𝑀(𝑛, 𝑛) can be diagonalized if there exists an invertible matrix
𝑄 ∈ 𝑀(𝑛, 𝑛) and a diagonal matrix 𝐷 ∈ 𝑀(𝑛, 𝑛) such that 𝑄 −1 ∙ 𝐴 ∙ 𝑄 = 𝐷 (𝐴 = 𝑄 ∙ 𝐷 ∙ 𝑄 −1 ).
Remarks.
If a matrix A ∈ M(n, n) can be diagonalized, it is called diagonalizable, and A and D are called similar (A~D).
If 𝐴 and 𝐷 are similar (𝐴~𝐷) then 𝑑𝑒𝑡𝐴 = 𝑑𝑒𝑡𝐷, 𝑟(𝐴) = 𝑟(𝐷) and they have the same eigenvalues.
If a matrix is not diagonalizable, it is said to be defective.
If 𝑄 −1 ∙ 𝐴 ∙ 𝑄 = 𝐷 then 𝐴 = 𝑄 ∙ 𝐷 ∙ 𝑄 −1 and 𝐴𝑛 = 𝑄 ∙ 𝐷𝑛 ∙ 𝑄 −1 .
Theorem 4.2. Let 𝐴 ∈ 𝑀(𝑛, 𝑛). Then:
1) Matrix A is diagonalizable if and only if it has n linearly independent eigenvectors X₁, X₂, …, Xₙ; then Q = [X₁ X₂ … Xₙ] (eigenvectors as columns) is invertible, and the corresponding eigenvalues are the diagonal entries of the diagonal matrix D.
2) If 𝐴 is a symmetric matrix then the eigenvalues of 𝐴 are real and this matrix is diagonalizable.
3) If 𝐴 has 𝑛 different eigenvalues then 𝐴 is diagonalizable.
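Using A = [2 4; 5 3] from Example 4.1 (eigenvalues 7 and −2, with eigenvectors (4, 5) and (1, −1)), part 1) and the remark Aⁿ = Q ∙ Dⁿ ∙ Q⁻¹ can be checked in a short Python sketch; the eigen-data comes from that example, the rest is illustrative:

```python
def matmul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 4], [5, 3]]
Q = [[4, 1], [5, -1]]        # columns: eigenvectors for lam = 7 and lam = -2
D = [[7, 0], [0, -2]]        # corresponding eigenvalues on the diagonal

detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]        # -9, so Q is invertible
Qinv = [[ Q[1][1] / detQ, -Q[0][1] / detQ],
        [-Q[1][0] / detQ,  Q[0][0] / detQ]]

QDQinv = matmul(matmul(Q, D), Qinv)  # reproduces A (up to rounding)

# A^3 via Q * D^3 * Q^{-1}: only the diagonal entries need to be powered
D3 = [[7 ** 3, 0], [0, (-2) ** 3]]
A3 = matmul(matmul(Q, D3), Qinv)     # approx [[148, 156], [195, 187]]
```

Computing A³ by three matrix products gives the same [[148, 156], [195, 187]], but the diagonalized form needs only scalar powers of the eigenvalues.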
4.3. Markov matrix
In mathematics, a Markov matrix (stochastic matrix, probability matrix, transition matrix) is a square matrix used to describe the transitions of a Markov chain in a finite state space. Each of its entries is a nonnegative real number representing the probability of moving from one state to another, and the entries of each column sum to 1 (the column convention used in this section).
Example 4.3. Determine if the given matrix is a Markov matrix and find its steady state (if possible):

I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},  A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},  B = \begin{bmatrix} 0.6 & 0.2 \\ 0.4 & 0.8 \end{bmatrix},  C = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix},  D = \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{bmatrix},  E = \begin{bmatrix} 0.8 & 1.3 \\ 0.2 & −0.3 \end{bmatrix}.
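Both checks in this example can be sketched in Python, assuming the column convention above (nonnegative entries, each column summing to 1). The steady state is approximated here by repeatedly applying the matrix to a start vector, anticipating the difference equations of Section 4.4:

```python
def is_markov(M):
    """Column-stochastic check: nonnegative entries, each column sums to 1."""
    n = len(M)
    nonneg = all(M[i][j] >= 0 for i in range(n) for j in range(n))
    cols = all(abs(sum(M[i][j] for i in range(n)) - 1) < 1e-12
               for j in range(n))
    return nonneg and cols

def steady_state(M, steps=500):
    """Approximate a steady state by iterating y <- M*y from a uniform start."""
    n = len(M)
    y = [1.0 / n] * n
    for _ in range(steps):
        y = [sum(M[i][j] * y[j] for j in range(n)) for i in range(n)]
    return y

B = [[0.6, 0.2], [0.4, 0.8]]
E = [[0.8, 1.3], [0.2, -0.3]]
# is_markov(B) is True; is_markov(E) is False (negative entry)
# steady_state(B) approaches (1/3, 2/3)
```

For B the iteration converges because its second eigenvalue is 0.4; for the permutation matrix C above, a non-uniform start vector would cycle instead of converging.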
4.4. Difference equations
In many cases, it is of interest to model the evolution of some systems over time. There are two distinct cases.
One can think of time as a continuous variable, or one can think of time as a discrete variable. The first case
often leads to differential equations. The second leads to difference equations.
Notation:
k — time (a time period can have any length),
A ∈ M(n, n),
y₀ = [y₁₀, y₂₀, …, yₙ₀]ᵀ ∈ ℝⁿ — the initial state of y (the initial state of the given system),
yₖ = [y₁ₖ, y₂ₖ, …, yₙₖ]ᵀ ∈ ℝⁿ — the value of y in time period k.
For k = 1: y₁ = A ∙ y₀,
for k = 2: y₂ = A ∙ y₁ = A² ∙ y₀,
and in general: yₖ = A ∙ yₖ₋₁ = Aᵏ ∙ y₀, which is called a difference equation (or recursion equation).
Definition 4.4.
1. The state y∞ is called a limit state of the difference equation if y∞ = lim(k→∞) Aᵏ ∙ y₀.
2. The state y* is called an equilibrium state (point) of the difference equation if A ∙ y* = y*.
Remarks.
1) An equilibrium state is a stationary point (a fixed point) at which the system no longer changes.
2) An equilibrium state is an eigenvector of the matrix A associated with the eigenvalue λ = 1.
Theorem 4.5. If A is a regular Markov matrix (some power of A has all entries positive), then:
1) the difference equation has an equilibrium state,
2) the equilibrium state is the limit state.
Example 4.5. Each year, 1000 salmon are stocked in a creek, and each salmon has a 30% chance of surviving and returning to the creek the next year. How many salmon will be in the creek each year, and what will the population be in the far future?
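One way to read this example (an interpretation, since the statement leaves the timing open): count the population right after each yearly stocking, giving the difference equation pₖ₊₁ = 0.3 ∙ pₖ + 1000 with p₁ = 1000 — a scalar recursion with a constant term rather than the pure yₖ = Aᵏ ∙ y₀ form. A quick Python iteration shows the long-run behavior:

```python
p = 1000.0                  # year 1: the first 1000 stocked salmon
for _ in range(50):         # iterate p_{k+1} = 0.3 * p_k + 1000
    p = 0.3 * p + 1000      # 30% survive, plus the new stocking
# p converges to the fixed point p* = 1000 / (1 - 0.3) = 10000/7 ~ 1428.57
```

The fixed point solves p* = 0.3 ∙ p* + 1000, and the iteration approaches it geometrically since |0.3| < 1.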
Example 4.6. Consider a student who, in each semester, is in one of two states:
𝑠1 : semester grade point average (GPA) < 3.0 or 𝑠2 : semester grade point average (GPA) ≥ 3.0
Observation shows that if the student is in state s₁ in one semester, then she/he will work harder the next semester and achieve state s₂ with probability 0.8 (remaining in state s₁ with the lower probability 0.2). But if the student is in state s₂ in one semester, then she/he relaxes the next semester and falls below 3.0 with probability 0.3, staying at or above 3.0 with probability 0.7. This is an example of a Markov chain with observation times at the end of each semester.
a) Write the Markov matrix and the difference equation which represents this observation.
b) Suppose that the student achieves a 2.5 grade point average (GPA) in the first semester. What is the probability that this student will have a GPA above 3.0 after the fourth semester?
c) Find the equilibrium state (if it exists).
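A Python sketch of parts a) and b), under the column convention (column j holds the probabilities of leaving state sⱼ) and reading "GPA 2.5 in the first semester" as starting in state s₁; treating "after the fourth semester" as three further transitions is also an assumption:

```python
# columns: current state s1, s2; rows: next state s1, s2
M = [[0.2, 0.3],
     [0.8, 0.7]]
y = [1.0, 0.0]              # semester 1: GPA 2.5 < 3.0, so state s1

for _ in range(3):          # transitions into semesters 2, 3, 4
    y = [M[0][0] * y[0] + M[0][1] * y[1],
         M[1][0] * y[0] + M[1][1] * y[1]]
# y[1] = P(state s2 in semester 4) = 0.728

# equilibrium: solve M*y = y with y[0] + y[1] = 1, giving (3/11, 8/11)
```

Note that y₄ ≈ (0.272, 0.728) is already close to the equilibrium (3/11, 8/11) ≈ (0.273, 0.727), as Theorem 4.5 predicts.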