Department of Mathematics

LINEAR ALGEBRA - II

Topic Learning Objectives:


Upon completion of this unit, students will be able to:
• Study the orthogonal and orthonormal properties of vectors.
• Use the Gram-Schmidt process to factorize a given matrix as a product of an orthogonal
matrix (Q) and an upper triangular invertible matrix (R).
• Diagonalize symmetric matrices using eigenvalues and eigenvectors.
• Decompose a given matrix into a product of an orthogonal matrix (U), a diagonal matrix
(Σ) and an orthogonal matrix (V^T).
Introduction:
This section deals with the study of orthogonal and orthonormal vectors, which form the
basis for the construction of an orthogonal basis for a vector space. The Gram-Schmidt
process is applied to construct an orthogonal basis for the column space of a given matrix
and further to decompose a given matrix to the form A = QR, where Q has orthonormal
column vectors and R is an upper triangular invertible matrix with positive entries along
the diagonal. This section also deals with finding the Eigen values and Eigen vectors of a
square matrix, which are applied to diagonalize a square matrix as D = P^(−1)AP. Further,
the singular value decomposition is studied, wherein a given matrix is resolved as a product of
an orthogonal matrix (U), a diagonal matrix (Σ) and an orthogonal matrix (V^T).

Orthogonal Vectors:
Two vectors u and v in R^n are orthogonal to each other if u.v = 0.
u = (1, 2) and v = (6, −3) are orthogonal in R^2 as u.v = (1, 2).(6, −3) = 6 − 6 = 0.

Orthogonal Sets:
A set of vectors {u1, u2, ..., up} in R^n is said to be an orthogonal set if each pair of distinct
vectors from the set is orthogonal, that is, if ui.uj = 0 whenever i ≠ j.
ex. {u1, u2, u3} such that u1 = (3, 1, 1), u2 = (−1, 2, 1), u3 = (−1/2, −2, 7/2).
u1.u2 = (3, 1, 1).(−1, 2, 1) = −3 + 2 + 1 = 0
u1.u3 = (3, 1, 1).(−1/2, −2, 7/2) = −3/2 − 2 + 7/2 = 0
u2.u3 = (−1, 2, 1).(−1/2, −2, 7/2) = 1/2 − 4 + 7/2 = 0
Each pair of distinct vectors is orthogonal and so {u1, u2, u3} is an orthogonal set.
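These pairwise dot products can also be checked numerically; a minimal sketch in Python with numpy (not part of the notes, just a verification aid):

import numpy as np

# the vectors u1, u2, u3 from the example above
u1 = np.array([3.0, 1.0, 1.0])
u2 = np.array([-1.0, 2.0, 1.0])
u3 = np.array([-0.5, -2.0, 3.5])

# every pair of distinct vectors must have dot product zero
for a, b in [(u1, u2), (u1, u3), (u2, u3)]:
    print(np.dot(a, b))        # prints 0.0 three times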

Orthonormal Sets:
A set {u1 , u2 , ..., up } is an orthonormal set if it is an orthogonal set of unit vectors.
{e1, e2, ..., en}, the standard basis for R^n, is an orthonormal set.
Any non-empty subset of {e1 , e2 , ..., en } is orthonormal.

Orthogonal Basis:
An orthogonal basis for a subspace W of R^n is a basis for W that is also an orthogonal set.
ex. S = {u1, u2, u3}, u1 = (3, 1, 1), u2 = (−1, 2, 1), u3 = (−1/2, −2, 7/2) is an orthogonal basis
for R^3 as (i) S is an orthogonal set and (ii) S forms a basis of R^3:
|   3    1    1  |
|  −1    2    1  |  = 3(7 + 2) − 1(−7/2 + 1/2) + 1(2 + 1) = 27 + 3 + 3 = 33 ≠ 0
| −1/2  −2   7/2 |
Orthonormal Basis:
An orthonormal basis for a subspace W of R^n is a basis for W that is also an orthonormal set.


Example:
1. Show that {v1, v2, v3} is an orthonormal basis of R^3, where
v1 = (3/√11, 1/√11, 1/√11), v2 = (−1/√6, 2/√6, 1/√6), v3 = (−1/√66, −4/√66, 7/√66).
Solution:
v1.v2 = −3/√66 + 2/√66 + 1/√66 = 0,
v1.v3 = −3/√726 − 4/√726 + 7/√726 = 0,
v2.v3 = 1/√396 − 8/√396 + 7/√396 = 0.
Thus {v1, v2, v3} is an orthogonal set.
v1.v1 = 9/11 + 1/11 + 1/11 = 1, v2.v2 = 1/6 + 4/6 + 1/6 = 1, v3.v3 = 1/66 + 16/66 + 49/66 = 1, which shows that
v1, v2, v3 are unit vectors.
Thus {v1, v2, v3} is an orthonormal set.
Since the set is linearly independent, its three vectors form a basis for R^3.

Orthogonal Matrix:
A square matrix A with real entries and satisfying the condition A^(−1) = A^T is called an
orthogonal matrix.
ex. Let P = [ cos θ   −sin θ ]     Then P^(−1) = [  cos θ   sin θ ]  and  P^T = [  cos θ   sin θ ]
            [ sin θ    cos θ ].                  [ −sin θ   cos θ ]             [ −sin θ   cos θ ]
Clearly P^(−1) = P^T.
∴ P is an orthogonal matrix.
ex. The matrix
     [ 1/3  −2/3   2/3 ]
A =  [ 2/3  −1/3  −2/3 ]
     [ 2/3   2/3   1/3 ]
is orthogonal, since
         [  1/3   2/3   2/3 ] [ 1/3  −2/3   2/3 ]   [ 1  0  0 ]
A^T A =  [ −2/3  −1/3   2/3 ] [ 2/3  −1/3  −2/3 ] = [ 0  1  0 ]
         [  2/3  −2/3   1/3 ] [ 2/3   2/3   1/3 ]   [ 0  0  1 ]
The row vectors of A, namely (1/3, −2/3, 2/3), (2/3, −1/3, −2/3) and (2/3, 2/3, 1/3), are orthonormal.
So are the column vectors of A.

Note:
Suppose that A is an n × n matrix with real entries. Then
(a) A is orthogonal iff the row vectors of A form an orthonormal basis of R^n.
(b) A is orthogonal iff the column vectors of A form an orthonormal basis of R^n.
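The condition A^(−1) = A^T is easy to verify numerically; a minimal numpy sketch for the 3 × 3 example above:

import numpy as np

# the 3 x 3 matrix A from the example above
A = np.array([[1.0, -2.0,  2.0],
              [2.0, -1.0, -2.0],
              [2.0,  2.0,  1.0]]) / 3.0

print(np.allclose(A.T @ A, np.eye(3)))       # True: the columns are orthonormal
print(np.allclose(np.linalg.inv(A), A.T))    # True: the inverse equals the transpose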

Orthogonal Projections:
Given a non-zero vector u in R^n, consider the problem of decomposing a vector y in R^n
into the sum of two vectors, one a multiple of u and the other orthogonal to u. We wish to
write y = ŷ + z    (1), where ŷ = αu for some scalar α and z is some vector orthogonal to u.
Given any scalar α, let z = y − αu, so that (1) is satisfied.
Then y − ŷ is orthogonal to u iff 0 = (y − αu).u = y.u − (αu).u = y.u − α(u.u).
That is, (1) is satisfied with z orthogonal to u iff α = (y.u)/(u.u), and then ŷ = ((y.u)/(u.u)) u.
The vector ŷ is called the orthogonal projection of y onto u, and the vector z is called
the component of y orthogonal to u.
ex. Let y = (7, 6) and u = (4, 2).
The orthogonal projection of y onto u is given by
ŷ = ((y.u)/(u.u)) u = (40/20) u = 2u = 2(4, 2) = (8, 4).


Note:
The orthogonal projection of y onto a subspace W spanned by orthogonal vectors {u1, u2} is
given by ŷ = ((y.u1)/(u1.u1)) u1 + ((y.u2)/(u2.u2)) u2.
The distance from a point y in R^n to a subspace W is defined as the distance from y to
the nearest point in W.
ex. The distance from y to W = Span{u1, u2}, where y = (−1, −5, 10), u1 = (5, −2, 1), u2 = (1, 2, −1), is found as follows:
ŷ = ((−1, −5, 10).(5, −2, 1))/((5, −2, 1).(5, −2, 1)) (5, −2, 1) + ((−1, −5, 10).(1, 2, −1))/((1, 2, −1).(1, 2, −1)) (1, 2, −1) = (−1, −8, 4)
y − ŷ = (−1, −5, 10) − (−1, −8, 4) = (0, 3, 6)
The distance from y to W is √(0 + 3^2 + 6^2) = √45 = 3√5.
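The same projection and distance can be computed numerically; a minimal numpy sketch (it assumes, as above, that u1 and u2 are orthogonal):

import numpy as np

y  = np.array([-1.0, -5.0, 10.0])
u1 = np.array([ 5.0, -2.0,  1.0])
u2 = np.array([ 1.0,  2.0, -1.0])

# orthogonal projection of y onto W = Span{u1, u2} (u1 and u2 are orthogonal)
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
print(y_hat)                         # [-1. -8.  4.]
print(np.linalg.norm(y - y_hat))     # 6.708... = 3*sqrt(5), the distance from y to W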

Exercise:
1. Determine which sets of vectors are orthogonal.
(i) u1 = (−1, 4, −3), u2 = (5, 2, 1), u3 = (3, −4, −7),
(ii) u1 = (5, −4, 0, 3), u2 = (−4, 1, −3, 8), u3 = (3, 3, 5, −1).
2. Show that {(2, −3), (6, 4)} forms an orthogonal basis of R^2.
3. Show that {(1, 0, 1), (−1, 4, 1), (2, 1, −2)} forms an orthogonal basis of R^3.
4. Show that the matrix
     [ 3/√11   −1/√6   −1/√66 ]
U =  [ 1/√11    2/√6   −4/√66 ]
     [ 1/√11    1/√6    7/√66 ]
is an orthogonal matrix.
5. Find the orthogonal projection of y = (2, 6) onto u = (7, 1).
6. Let u1 = (2, 5, −1), u2 = (−2, 1, 1) and y = (1, 2, 3). Find the orthogonal projection of y
onto W = Span{u1, u2}.

Answers:
1. (i) The orthogonal pairs are u1, u2 and u2, u3; (ii) the orthogonal pairs are u1, u2 and u1, u3.
5. (14/5, 2/5)
6. (−2/5, 2, 1/5)

Gram-Schmidt Orthogonalization
The Gram-Schmidt process is a simple algorithm for producing an orthogonal or orthonormal
basis for any non-zero subspace of R^n.
The construction converts a skewed set of axes into a perpendicular set.

Gram-Schmidt process
Given a basis {x1, x2, ..., xp} for a subspace W of R^n, define
v1 = x1
v2 = x2 − ((x2.v1)/(v1.v1)) v1
v3 = x3 − ((x3.v1)/(v1.v1)) v1 − ((x3.v2)/(v2.v2)) v2
.
.
vp = xp − ((xp.v1)/(v1.v1)) v1 − ((xp.v2)/(v2.v2)) v2 − ... − ((xp.vp−1)/(vp−1.vp−1)) vp−1
Then {v1, v2, ..., vp} is an orthogonal basis for W.
In addition, Span{v1, v2, ..., vk} = Span{x1, x2, ..., xk} for 1 ≤ k ≤ p.
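The formulas above translate directly into code; a minimal numpy sketch of the process (classical Gram-Schmidt, not optimized for numerical stability):

import numpy as np

def gram_schmidt(xs):
    """Turn a list of linearly independent vectors into an orthogonal basis."""
    basis = []
    for x in xs:
        v = x.astype(float)
        # subtract the projection of x onto each previously constructed vector
        for u in basis:
            v = v - (x @ u) / (u @ u) * u
        basis.append(v)
    return basis

# Example 1 below: W = Span{(3, 6, 0), (1, 2, 2)}
print(gram_schmidt([np.array([3, 6, 0]), np.array([1, 2, 2])]))
# [array([3., 6., 0.]), array([0., 0., 2.])]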


Examples:
1. Let W = Span{x1 , x2 } where x1 = (3, 6, 0) and x2 = (1, 2, 2). Construct an orthogonal
basis {v1 , v2 } for W .
Solution:
Let v1 = x1 and v2 = x2 − ((x2.x1)/(x1.x1)) x1 = (1, 2, 2) − ((1, 2, 2).(3, 6, 0))/((3, 6, 0).(3, 6, 0)) (3, 6, 0)
= (1, 2, 2) − (15/45)(3, 6, 0) = (0, 0, 2).
Then {v1, v2} is an orthogonal set of non-zero vectors in W. Since dim W = 2, the set {v1, v2}
is a basis for W.

2. Let W = Span{v1 , v2 , v3 }, where v1 = (0, 1, 2), v2 = (1, 1, 2), v3 = (1, 0, 1). Construct
an orthogonal basis {u1 , u2 , u3 } for W .
Solution:
Set u1 = v1 and u2 = v2 − ((v2.u1)/(u1.u1)) u1 = (1, 1, 2) − ((1, 1, 2).(0, 1, 2))/((0, 1, 2).(0, 1, 2)) (0, 1, 2) = (1, 0, 0)
and u3 = v3 − ((v3.u1)/(u1.u1)) u1 − ((v3.u2)/(u2.u2)) u2
= (1, 0, 1) − ((1, 0, 1).(0, 1, 2))/((0, 1, 2).(0, 1, 2)) (0, 1, 2) − ((1, 0, 1).(1, 0, 0))/((1, 0, 0).(1, 0, 0)) (1, 0, 0)
= (1, 0, 1) − (2/5)(0, 1, 2) − (1, 0, 0) = (0, −2/5, 1/5).

QR Factorization:
If A is an m × n matrix with linearly independent columns, then A can be factored as
A = QR, where Q is an m × n matrix whose columns form an orthonormal basis for col A
and R is an n × n upper triangular invertible matrix with positive entries on its diagonal.
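Such a factorization can also be obtained with a library routine; a minimal numpy sketch (note that np.linalg.qr may return Q and R with some columns and rows negated relative to the positive-diagonal convention used here):

import numpy as np

# the matrix of Example 1 below
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])

Q, R = np.linalg.qr(A)                     # reduced QR: Q is 4 x 3, R is 3 x 3 upper triangular
print(np.allclose(Q @ R, A))               # True
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: the columns of Q are orthonormal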

Examples:
1. Find a QR factorization of
     [ 1  0  0 ]
A =  [ 1  1  0 ]
     [ 1  1  1 ]
     [ 1  1  1 ]
Solution:
Construct an orthonormal basis for Col A.
The columns of A are the vectors {x1, x2, x3}.
Let v1 = x1 = (1, 1, 1, 1)
v2 = x2 − ((x2.v1)/(v1.v1)) v1 = (0, 1, 1, 1) − ((0, 1, 1, 1).(1, 1, 1, 1))/((1, 1, 1, 1).(1, 1, 1, 1)) (1, 1, 1, 1)
= (0, 1, 1, 1) − (3/4)(1, 1, 1, 1) = (−3/4, 1/4, 1/4, 1/4)
v3 = x3 − ((x3.v1)/(v1.v1)) v1 − ((x3.v2)/(v2.v2)) v2
= (0, 0, 1, 1) − (2/4)(1, 1, 1, 1) − (2/3)(−3/4, 1/4, 1/4, 1/4) = (0, −2/3, 1/3, 1/3)
∴ {v1, v2, v3} forms an orthogonal basis of Col A.
{(1/2, 1/2, 1/2, 1/2), (−3/√12, 1/√12, 1/√12, 1/√12), (0, −2/√6, 1/√6, 1/√6)} forms an orthonormal basis of Col A.
      [ 1/2   −3/√12     0    ]
∴ Q = [ 1/2    1/√12   −2/√6  ]
      [ 1/2    1/√12    1/√6  ]
      [ 1/2    1/√12    1/√6  ]
To construct the upper triangular invertible matrix R:
We have A = QR =⇒ Q^T A = Q^T QR =⇒ Q^T A = IR =⇒ Q^T A = R, i.e., R = Q^T A.
              [ 2    3/2      1     ]
∴ R = Q^T A = [ 0   3/√12   2/√12   ]
              [ 0    0      2/√6    ]

2. Find a QR factorization of
     [  1   2   5 ]
     [ −1   1  −4 ]
A =  [ −1   4  −3 ]
     [  1  −4   7 ]
     [  1   2   1 ]
Solution:
{x1, x2, x3} are the columns of the matrix A.
Let v1 = x1 = (1, −1, −1, 1, 1)
v2 = x2 − ((x2.v1)/(v1.v1)) v1 = (2, 1, 4, −4, 2) − ((2, 1, 4, −4, 2).(1, −1, −1, 1, 1))/((1, −1, −1, 1, 1).(1, −1, −1, 1, 1)) (1, −1, −1, 1, 1)
= (2, 1, 4, −4, 2) − (−5/5)(1, −1, −1, 1, 1) = (3, 0, 3, −3, 3)
v3 = x3 − ((x3.v1)/(v1.v1)) v1 − ((x3.v2)/(v2.v2)) v2
= (5, −4, −3, 7, 1) − (20/5)(1, −1, −1, 1, 1) − (−12/36)(3, 0, 3, −3, 3) = (2, 0, 2, 2, −2).
∴ {(1, −1, −1, 1, 1), (3, 0, 3, −3, 3), (2, 0, 2, 2, −2)} forms an orthogonal basis of Col A.
{(1/√5, −1/√5, −1/√5, 1/√5, 1/√5), (1/2, 0, 1/2, −1/2, 1/2), (1/2, 0, 1/2, 1/2, −1/2)}
forms an orthonormal basis of Col A.
      [  1/√5   1/2    1/2  ]
      [ −1/√5    0      0   ]
∴ Q = [ −1/√5   1/2    1/2  ]
      [  1/√5  −1/2    1/2  ]
      [  1/√5   1/2   −1/2  ]
              [ √5   −√5   4√5 ]
∴ R = Q^T A = [  0     6    −2 ]
              [  0     0     4 ]


 
3. Find an orthogonal basis for the column space of the matrix
[  3  −5   1 ]
[  1   1   1 ]
[ −1   5  −2 ]
[  3  −7   8 ]
Solution:
The columns of A are the vectors {x1, x2, x3},
where x1 = (3, 1, −1, 3), x2 = (−5, 1, 5, −7), x3 = (1, 1, −2, 8).
Let v1 = (3, 1, −1, 3)
v2 = x2 − ((x2.v1)/(v1.v1)) v1 = (−5, 1, 5, −7) − ((−5, 1, 5, −7).(3, 1, −1, 3))/((3, 1, −1, 3).(3, 1, −1, 3)) (3, 1, −1, 3)
= (−5, 1, 5, −7) − (−40/20)(3, 1, −1, 3) = (1, 3, 3, −1)
v3 = x3 − ((x3.v1)/(v1.v1)) v1 − ((x3.v2)/(v2.v2)) v2
= (1, 1, −2, 8) − ((1, 1, −2, 8).(3, 1, −1, 3))/((3, 1, −1, 3).(3, 1, −1, 3)) (3, 1, −1, 3) − ((1, 1, −2, 8).(1, 3, 3, −1))/((1, 3, 3, −1).(1, 3, 3, −1)) (1, 3, 3, −1)
= (1, 1, −2, 8) − (30/20)(3, 1, −1, 3) − (−10/20)(1, 3, 3, −1) = (−3, 1, 1, 3)
{(3, 1, −1, 3), (1, 3, 3, −1), (−3, 1, 1, 3)} is an orthogonal basis for the column space of the
given matrix.
4. Find an orthogonal basis for the column space of the matrix
[ −1   6   6 ]
[  3  −8   3 ]
[  1  −2   6 ]
[  1  −4  −3 ]
Solution:
The columns of A are the vectors {x1, x2, x3},
where x1 = (−1, 3, 1, 1), x2 = (6, −8, −2, −4), x3 = (6, 3, 6, −3).
Let v1 = (−1, 3, 1, 1)
v2 = x2 − ((x2.v1)/(v1.v1)) v1 = (6, −8, −2, −4) − ((6, −8, −2, −4).(−1, 3, 1, 1))/((−1, 3, 1, 1).(−1, 3, 1, 1)) (−1, 3, 1, 1)
= (6, −8, −2, −4) − (−36/12)(−1, 3, 1, 1) = (3, 1, 1, −1)
v3 = x3 − ((x3.v1)/(v1.v1)) v1 − ((x3.v2)/(v2.v2)) v2
= (6, 3, 6, −3) − ((6, 3, 6, −3).(−1, 3, 1, 1))/((−1, 3, 1, 1).(−1, 3, 1, 1)) (−1, 3, 1, 1) − ((6, 3, 6, −3).(3, 1, 1, −1))/((3, 1, 1, −1).(3, 1, 1, −1)) (3, 1, 1, −1)
= (6, 3, 6, −3) − (6/12)(−1, 3, 1, 1) − (30/12)(3, 1, 1, −1) = (−1, −1, 3, −1)
{(−1, 3, 1, 1), (3, 1, 1, −1), (−1, −1, 3, −1)} is an orthogonal basis for the column space of the
given matrix.
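A quick numerical check that the basis obtained in Example 4 is indeed orthogonal (a minimal numpy sketch):

import numpy as np

# the basis found in Example 4 above, one vector per row
V = np.array([[-1.0,  3.0, 1.0,  1.0],
              [ 3.0,  1.0, 1.0, -1.0],
              [-1.0, -1.0, 3.0, -1.0]])

G = V @ V.T          # Gram matrix: off-diagonal entries are the pairwise dot products
print(np.allclose(G, np.diag(np.diag(G))))   # True: the three vectors are mutually orthogonal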

Exercise:
1. Let W = Span{v1 , v2 }, where v1 = (1, 1) and v2 = (2, −1). Construct an orthogonal basis
{u1 , u2 } for W .
2. Find an orthonormal basis of the subspace spanned by the vectors u1 = (1, −4, 0, 1), u2 = (7, −7, −4, 1).
3. Find the QR factorization of the matrix
     [  1   3   5 ]
     [ −1  −3   1 ]
A =  [  0   2   3 ]
     [  1   5   2 ]
     [  1   5   8 ]
Answer:
1. v1 = (1, 1), v2 = (3/2, −3/2)
2. v1 = (1, −4, 0, 1), v2 = (5, 1, −4, −1)


Eigen Values and Eigen Vectors:


If A is a square matrix of order n, we can find the matrix A − λI, where I is the n-th order
unit matrix. The determinant of this matrix equated to zero, i.e.,
            | a11 − λ    a12    ...    a1n   |
            |   a21    a22 − λ  ...    a2n   |
|A − λI| =  |    .        .     ...     .    | = 0
            |    .        .     ...     .    |
            |   an1      an2    ...  ann − λ |
is called the characteristic equation of A.
On expanding the determinant, the characteristic equation takes the form
(−1)^n λ^n + k1 λ^(n−1) + k2 λ^(n−2) + ... + kn = 0,
where the k's are expressible in terms of the elements aij. The roots of this equation are called
the characteristic roots or latent roots or eigen values of the matrix A.
If x = (x1, x2, ..., xn)^T is a column vector and
     [ a11  a12  ...  a1n ]
     [ a21  a22  ...  a2n ]
A =  [  .    .   ...   .  ]
     [ an1  an2  ...  ann ]
then the linear transformation y = Ax    (1) carries the column vector x into the column
vector y by means of the square matrix A.
In practice it is often required to find such vectors which transform into themselves or to a
scalar multiple of themselves.
Let x be such a vector which transforms into λx by means of the transformation (1).
Then λx = Ax, or Ax − λIx = 0, or [A − λI]x = 0    (2)
The matrix equation represents n homogeneous linear equations,
(a11 − λ)x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + (a22 − λ)x2 + ... + a2n xn = 0
......
an1 x1 + an2 x2 + ... + (ann − λ)xn = 0    (3)
which will have a non-trivial solution only if the coefficient matrix is singular,
i.e., if |A − λI| = 0.
This is called the characteristic equation of the transformation and is the same as the
characteristic equation of the matrix A.
It has n roots, and corresponding to each root the equation (2) (or equation (3)) will have a
non-zero solution x = (x1, x2, ..., xn)^T, which is known as the eigen vector or latent vector.
Observation 1:
Corresponding to n distinct eigen values, we get n independent eigen vectors. But when two
or more eigen values are equal, it may or may not be possible to get linearly independent
eigen vectors corresponding to the repeated roots.
Observation 2:
If xi is a solution for an eigen value λi, then it follows from (2) that cxi is also a solution,
where c is an arbitrary constant. Thus the eigen vector corresponding to an eigen value is
not unique, but may be any one of the vectors cxi.
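Numerically, eigen values and eigen vectors can be obtained with np.linalg.eig; a minimal sketch using the matrix of Example 1 below:

import numpy as np

# the matrix of Example 1 below
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])

vals, vecs = np.linalg.eig(A)     # eigen values, and a matrix whose columns are eigen vectors
print(vals)                       # [1. 0.]  (the order may vary)

# check A x = lambda x for each eigen pair
for lam, x in zip(vals, vecs.T):
    print(np.allclose(A @ x, lam * x))    # True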


Examples:
1. Find the Eigen Values and Eigen vectors of the matrix A = [[1/2, 1/2], [1/2, 1/2]]
Solution:
|A − λI| = 0 =⇒ (1/2 − λ)^2 − 1/4 = 0 =⇒ λ^2 − λ = 0 =⇒ λ = 0, λ = 1.
with λ = 1, (A − λI)x = 0 =⇒ [[−1/2, 1/2], [1/2, −1/2]]x = 0 =⇒ −x1 + x2 = 0 =⇒ x2 = x1
Letting x1 = 1 =⇒ x2 = 1, ∴ x = (1, 1)
with λ = 0, (A − λI)x = 0 =⇒ [[1/2, 1/2], [1/2, 1/2]]x = 0 =⇒ x1 + x2 = 0 =⇒ x2 = −x1
Letting x1 = 1 =⇒ x2 = −1, ∴ x = (1, −1)

2. Find the Eigen Values and Eigen vectors of the matrix A = [[0, −1], [1, 0]]
Solution:
|A − λI| = 0 =⇒ λ^2 + 1 = 0 =⇒ λ = +i, λ = −i.
with λ = i, (A − λI)x = 0 =⇒ [[−i, −1], [1, −i]]x = 0 =⇒ −i x1 − x2 = 0 =⇒ x2 = −i x1
Letting x1 = 1 =⇒ x2 = −i, ∴ x = (1, −i)
with λ = −i, (A − λI)x = 0 =⇒ [[i, −1], [1, i]]x = 0 =⇒ i x1 − x2 = 0 =⇒ x2 = i x1
Letting x1 = 1 =⇒ x2 = i, ∴ x = (1, i)

3. Find the Eigen Values and Eigen vectors of the matrix
     [ 3  4  2 ]
A =  [ 3  5  4 ]
     [ 0  1  2 ]
Solution:
                | 3 − λ    4      2   |
|A − λI| = 0 =⇒ |   3    5 − λ    4   | = 0 =⇒ λ^3 − 10λ^2 + 15λ = 0 =⇒ λ = 5 + √10, 5 − √10, 0
                |   0      1    2 − λ |
with λ = 5 + √10, (A − λI)x = 0 =⇒
[ −2 − √10     4         2      ] [ x1 ]   [ 0 ]
[     3      −√10        4      ] [ x2 ] = [ 0 ]
[     0        1     −3 − √10   ] [ x3 ]   [ 0 ]
=⇒ x1/(3√10 + 6) = −x2/(−9 − 3√10) = x3/3 =⇒ x1/(2 + √10) = x2/(3 + √10) = x3/1
∴ x = (2 + √10, 3 + √10, 1)
with λ = 5 − √10, (A − λI)x = 0 =⇒
[ −2 + √10     4         2      ] [ x1 ]   [ 0 ]
[     3       √10        4      ] [ x2 ] = [ 0 ]
[     0        1     −3 + √10   ] [ x3 ]   [ 0 ]
=⇒ x1/(−3√10 + 6) = −x2/(−9 + 3√10) = x3/3 =⇒ x1/(2 − √10) = x2/(3 − √10) = x3/1
∴ x = (2 − √10, 3 − √10, 1)
with λ = 0, (A − λI)x = 0 =⇒
[ 3  4  2 ] [ x1 ]   [ 0 ]
[ 3  5  4 ] [ x2 ] = [ 0 ]
[ 0  1  2 ] [ x3 ]   [ 0 ]
=⇒ x1/6 = −x2/6 = x3/3 =⇒ x1/2 = x2/(−2) = x3/1
∴ x = (2, −2, 1)

4. Find the Eigen Values and Eigen vectors of the matrix
     [  1   3   3 ]
A =  [ −3  −5  −3 ]
     [  3   3   1 ]
Solution:
                | 1 − λ     3       3   |
|A − λI| = 0 =⇒ |  −3    −5 − λ    −3   | = 0 =⇒ λ^3 + 3λ^2 − 4 = 0 =⇒ λ = 1, −2, −2
                |   3       3     1 − λ |
with λ = 1, (A − λI)x = 0 =⇒
[  0   3   3 ] [ x1 ]   [ 0 ]
[ −3  −6  −3 ] [ x2 ] = [ 0 ]
[  3   3   0 ] [ x3 ]   [ 0 ]
=⇒ x1/9 = −x2/9 = x3/9 =⇒ x1/1 = x2/(−1) = x3/1, ∴ x = (1, −1, 1)
with λ = −2, (A − λI)x = 0 =⇒
[  3   3   3 ] [ x1 ]   [ 0 ]
[ −3  −3  −3 ] [ x2 ] = [ 0 ]
[  3   3   3 ] [ x3 ]   [ 0 ]
=⇒ 3x1 + 3x2 + 3x3 = 0 =⇒ x1 = −x2 − x3
Letting x2 = k1, x3 = k2 =⇒ x1 = −k1 − k2, ∴ x = (−k1 − k2, k1, k2),
or x = (−1, 0, 1), x = (−1, 1, 0) are the linearly independent eigen vectors corresponding to λ = −2.

Diagonalization of a Matrix:
Suppose the n by n matrix A has n linearly independent eigen vectors. If these eigen vectors
are the columns of a matrix P, then P^(−1)AP is a diagonal matrix D. The eigen values of A
are on the diagonal of D:
            [ λ1                  ]
            [     λ2              ]
P^(−1)AP =  [         .           ]
            [           .         ]
            [               λn    ]


Note:
1. Any matrix with distinct eigen values can be diagonalized.
2. The diagonalizing matrix P is not unique.
3. Not all matrices possess n linearly independent eigen vectors, so not all matrices are diagonalizable.
4. Diagonalizability of A depends on having enough eigen vectors.
5. Diagonalizability can fail only if there are repeated eigen values.
6. The eigen values of A^k are λ1^k, λ2^k, ..., λn^k, and each eigen vector of A is still an eigen vector of A^k.
[D^k = D.D....D (k times) = (P^(−1)AP)(P^(−1)AP)...(P^(−1)AP) = P^(−1)A^k P].
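The relation D = P^(−1)AP can be reproduced numerically; a minimal numpy sketch using the matrix of Problem 1 below (the eigen vector columns returned by np.linalg.eig serve as P):

import numpy as np

# the matrix of Problem 1 below
A = np.array([[7.0, 2.0],
              [2.0, 4.0]])

vals, P = np.linalg.eig(A)            # columns of P are eigen vectors of A
D = np.linalg.inv(P) @ A @ P          # the similarity transform diagonalizes A
print(np.allclose(D, np.diag(vals)))  # True: D is diagonal with the eigen values on its diagonal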

Problems:
1. Diagonalize the matrix A = [[7, 2], [2, 4]].
Solution:
|A − λI| = 0 =⇒ (7 − λ)(4 − λ) − 4 = 0 =⇒ λ^2 − 11λ + 24 = 0 =⇒ (λ − 3)(λ − 8) = 0 =⇒ λ = 3, λ = 8.
With λ = 3, (A − 3I)x = 0 =⇒ [[4, 2], [2, 1]]x = 0 =⇒ 2x1 + x2 = 0.
Letting x1 = 1 =⇒ x2 = −2. Hence x1 = (1, −2).
With λ = 8, (A − 8I)x = 0 =⇒ [[−1, 2], [2, −4]]x = 0 =⇒ −x1 + 2x2 = 0.
Letting x2 = 1 =⇒ x1 = 2. Hence x2 = (2, 1).
=⇒ P = [[1, 2], [−2, 1]], D = [[3, 0], [0, 8]], P^(−1) = [[1/5, −2/5], [2/5, 1/5]].

2. Diagonalize the matrix A = [[1, 0], [0, −3]].
Solution:
|A − λI| = 0 =⇒ (1 − λ)(−3 − λ) = 0 =⇒ (λ − 1)(λ + 3) = 0 =⇒ λ = −3, λ = 1.
With λ = −3, (A + 3I)x = 0 =⇒ [[4, 0], [0, 0]]x = 0 =⇒ 4x1 = 0 =⇒ x1 = 0. Let x2 = 1.
Hence x1 = (0, 1).
With λ = 1, (A − I)x = 0 =⇒ [[0, 0], [0, −4]]x = 0 =⇒ −4x2 = 0 =⇒ x2 = 0. Let x1 = 1.
Hence x2 = (1, 0).
=⇒ P = [[0, 1], [1, 0]], D = [[−3, 0], [0, 1]], P^(−1) = [[0, 1], [1, 0]].

3. Diagonalize the matrix
     [  6  −2  −1 ]
A =  [ −2   6  −1 ]
     [ −1  −1   5 ]
Solution:
                | 6 − λ   −2     −1   |
|A − λI| = 0 =⇒ |  −2    6 − λ   −1   | = 0 =⇒ λ^3 − 17λ^2 + 90λ − 144 = 0 =⇒ λ = 3, 6, 8
                |  −1     −1    5 − λ |
with λ = 3, (A − λI)x = 0 =⇒
[  3  −2  −1 ] [ x1 ]   [ 0 ]
[ −2   3  −1 ] [ x2 ] = [ 0 ]
[ −1  −1   2 ] [ x3 ]   [ 0 ]
=⇒ x1/5 = −x2/(−5) = x3/5 =⇒ x1/1 = x2/1 = x3/1, ∴ x = (1, 1, 1)
with λ = 6, (A − λI)x = 0 =⇒
[  0  −2  −1 ] [ x1 ]   [ 0 ]
[ −2   0  −1 ] [ x2 ] = [ 0 ]
[ −1  −1  −1 ] [ x3 ]   [ 0 ]
=⇒ x1/(−1) = −x2/1 = x3/2 =⇒ x1/(−1) = x2/(−1) = x3/2, ∴ x = (−1, −1, 2)
with λ = 8, (A − λI)x = 0 =⇒
[ −2  −2  −1 ] [ x1 ]   [ 0 ]
[ −2  −2  −1 ] [ x2 ] = [ 0 ]
[ −1  −1  −3 ] [ x3 ]   [ 0 ]
=⇒ x1/5 = −x2/5 = x3/0 =⇒ x1/1 = x2/(−1) = x3/0, ∴ x = (1, −1, 0)
Hence
     [ 1  −1   1 ]        [ 3  0  0 ]             [  1/3    1/3   1/3 ]
P =  [ 1  −1  −1 ],  D =  [ 0  6  0 ],  P^(−1) =  [ −1/6   −1/6   1/3 ]
     [ 1   2   0 ]        [ 0  0  8 ]             [  1/2   −1/2    0  ]

4. Diagonalize the matrix
     [  3  −2   4 ]
A =  [ −2   6   2 ]
     [  4   2   3 ]
Solution:
                | 3 − λ   −2      4   |
|A − λI| = 0 =⇒ |  −2    6 − λ    2   | = 0 =⇒ λ^3 − 12λ^2 + 21λ + 98 = 0 =⇒ λ = −2, 7, 7
                |   4      2    3 − λ |
with λ = −2, (A − λI)x = 0 =⇒
[  5  −2   4 ] [ x1 ]   [ 0 ]
[ −2   8   2 ] [ x2 ] = [ 0 ]
[  4   2   5 ] [ x3 ]   [ 0 ]
=⇒ x1/36 = −x2/(−18) = x3/(−36) =⇒ x1/2 = x2/1 = x3/(−2), ∴ x1 = (2, 1, −2)
with λ = 7, (A − λI)x = 0 =⇒
[ −4  −2   4 ] [ x1 ]   [ 0 ]
[ −2  −1   2 ] [ x2 ] = [ 0 ]
[  4   2  −4 ] [ x3 ]   [ 0 ]
As the second and third rows are dependent on the first row, we get only one equation
in three unknowns, i.e., −4x1 − 2x2 + 4x3 = 0. Taking x1 and x3 as arbitrary gives
x2 = −2x1 + 2x3. With x1 = 1, x3 = 2 we get x2 = 2. With x1 = 2, x3 = 1 we get x2 = −2.
∴ x2 = (1, 2, 2), x3 = (2, −2, 1)


Exercise:
1. Diagonalize the matrices (i) [[3, 1], [1, 3]], (ii) [[1, 5], [5, 1]].
2. Diagonalize the matrices
     [ −2   −36   0 ]         [  7  −4   4 ]
(i)  [ −36  −23   0 ],  (ii)  [ −4   5   0 ].
     [  0    0    3 ]         [  4   0   9 ]

Singular Value Decomposition:

Any m × n matrix A can be factored into A = U Σ V^T = (orthogonal)(diagonal)(orthogonal).

The columns of U (m by m) are eigen vectors of AA^T, and the columns of V (n by n) are eigen
vectors of A^T A. The r singular values on the diagonal of Σ (m by n) are the square roots of
the non-zero eigen values of both AA^T and A^T A.

Note:
The diagonal (but rectangular) matrix Σ contains the square roots of the eigen values of A^T A. These
positive entries (also called sigmas) will be σ1, σ2, ..., σr, such that σ1 ≥ σ2 ≥ ... ≥ σr > 0. They are
the singular values of A.
When A multiplies a column vj of V, it produces σj times a column of U (A = U Σ V^T =⇒ AV = U Σ).
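Numerically the factorization is computed with np.linalg.svd; a minimal sketch using the matrix of Example 1 below (the routine returns V^T directly and the singular values as a 1-D array):

import numpy as np

# the 3 x 1 matrix of Example 1 below
A = np.array([[-1.0],
              [ 2.0],
              [ 2.0]])

U, s, Vt = np.linalg.svd(A)                   # s holds the singular values, largest first
print(s)                                      # [3.]
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)          # rebuild the rectangular Sigma
print(np.allclose(U @ Sigma @ Vt, A))         # True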

Examples:
1. Decompose A = [ −1 ]
                 [  2 ]  as U Σ V^T, where U and V are orthogonal matrices.
                 [  2 ]
Solution:
        [ −1 ]                 [  1  −2  −2 ]
AA^T =  [  2 ] [ −1  2  2 ]  = [ −2   4   4 ]
        [  2 ]                 [ −2   4   4 ]
                     | 1 − λ   −2     −2   |
|AA^T − λI| = 0 =⇒   |  −2    4 − λ    4   | = 0 =⇒ λ^3 − 9λ^2 = 0 =⇒ λ1 = 0, λ2 = 0, λ3 = 9
                     |  −2     4     4 − λ |
with λ = 9, [AA^T − λI]x = 0 =⇒
[ −8  −2  −2 ] [ x1 ]   [ 0 ]
[ −2  −5   4 ] [ x2 ] = [ 0 ]  =⇒ −8x1 − 2x2 − 2x3 = 0, −18x2 + 18x3 = 0
[ −2   4  −5 ] [ x3 ]   [ 0 ]
=⇒ x1 = −(1/2)x3, x2 = x3 =⇒ x = (−1, 2, 2)
with λ = 0, [AA^T − λI]x = 0 =⇒
[  1  −2  −2 ] [ x1 ]   [ 0 ]
[ −2   4   4 ] [ x2 ] = [ 0 ]  =⇒ x1 = 2x2 + 2x3 =⇒ x = (2, −1, 2) and x = (2, 2, −1)
[ −2   4   4 ] [ x3 ]   [ 0 ]
Hence
     [ −1/3   2/3   2/3 ]
U =  [  2/3  −1/3   2/3 ]
     [  2/3   2/3  −1/3 ]
A^T A = [ −1  2  2 ] [ −1  2  2 ]^T = [9]
|A^T A − λI| = 0 =⇒ |9 − λ| = 0 =⇒ λ = 9
Then (A^T A − λI)x = 0 =⇒ [0] x1 = 0.
Let x1 = 1. ∴ x = (1)
Hence V = [1] or V^T = [1]
9 is an eigen value of both AA^T and A^T A,
and the rank of A = (−1, 2, 2)^T is r = 1.
                                 [ 3 ]
∴ Σ has only σ1 = √9 = 3. ∴ Σ =  [ 0 ]
                                 [ 0 ]
                       [ −1 ]   [ −1/3   2/3   2/3 ] [ 3 ]
∴ the SVD of A is A =  [  2 ] = [  2/3  −1/3   2/3 ] [ 0 ] [ 1 ]
                       [  2 ]   [  2/3   2/3  −1/3 ] [ 0 ]

2. Obtain the SVD of A = [[1, 1], [1, 0]]
Solution:
AA^T = [[1, 1], [1, 0]] [[1, 1], [1, 0]] = [[2, 1], [1, 1]]
                     | 2 − λ    1   |
|AA^T − λI| = 0 =⇒   |   1    1 − λ | = 0 =⇒ λ^2 − 3λ + 1 = 0 =⇒ λ1 = (3 − √5)/2, λ2 = (3 + √5)/2
with λ = (3 − √5)/2, (AA^T − λI)x = 0 =⇒
[ (1 + √5)/2        1       ] [ x1 ]   [ 0 ]
[      1       (−1 + √5)/2  ] [ x2 ] = [ 0 ]  =⇒ ((1 + √5)/2) x1 + x2 = 0.
Letting x1 = −1, then x2 = (1 + √5)/2. ∴ x = (−1, α), where α = (1 + √5)/2.
with λ = (3 + √5)/2, (AA^T − λI)x = 0 =⇒
[ (1 − √5)/2        1       ] [ x1 ]   [ 0 ]
[      1       (−1 − √5)/2  ] [ x2 ] = [ 0 ]  =⇒ ((1 − √5)/2) x1 + x2 = 0.
Letting x1 = −1, then x2 = (1 − √5)/2. ∴ x = (−1, β), where β = (1 − √5)/2.
Hence
     [ −1/√(1 + α^2)   −1/√(1 + β^2) ]
U =  [  α/√(1 + α^2)    β/√(1 + β^2) ]
As A^T A = AA^T,
       [ −1/√(1 + α^2)   α/√(1 + α^2) ]             [ √λ1    0  ]
V^T =  [ −1/√(1 + β^2)   β/√(1 + β^2) ]   and  Σ =  [  0   √λ2 ].

3. Obtain the SVD of A = [[−1, 1, 0], [0, −1, 1]]
Solution:
AA^T = [ −1   1   0 ] [ −1   0 ]   [  2  −1 ]
       [  0  −1   1 ] [  1  −1 ] = [ −1   2 ]
                      [  0   1 ]
                     | 2 − λ   −1   |
|AA^T − λI| = 0 =⇒   |  −1    2 − λ | = 0 =⇒ λ^2 − 4λ + 3 = 0 =⇒ λ1 = 1, λ2 = 3
with λ = 3, (AA^T − λI)x = 0 =⇒ [[−1, −1], [−1, −1]]x = 0 =⇒ x1 + x2 = 0 =⇒ x1 = −x2
Letting x2 = 1 =⇒ x1 = −1, ∴ x = (−1, 1)
with λ = 1, (AA^T − λI)x = 0 =⇒ [[1, −1], [−1, 1]]x = 0 =⇒ x1 − x2 = 0 =⇒ x1 = x2
Letting x2 = 1 =⇒ x1 = 1, ∴ x = (1, 1)
        [ −1   0 ]                  [  1  −1   0 ]
A^T A = [  1  −1 ] [ −1   1   0 ] = [ −1   2  −1 ]
        [  0   1 ] [  0  −1   1 ]   [  0  −1   1 ]
                      | 1 − λ   −1      0   |
|A^T A − λI| = 0 =⇒   |  −1    2 − λ   −1   | = 0 =⇒ λ^3 − 4λ^2 + 3λ = 0 =⇒ λ1 = 0, λ2 = 1, λ3 = 3
                      |   0     −1    1 − λ |
with λ = 0, (A^T A − λI)x = 0 =⇒
[  1  −1   0 ] [ x1 ]   [ 0 ]
[ −1   2  −1 ] [ x2 ] = [ 0 ]  =⇒ x1 − x2 = 0, x2 − x3 = 0 =⇒ x1 = x2 = x3, ∴ x = (1, 1, 1)
[  0  −1   1 ] [ x3 ]   [ 0 ]
with λ = 1, (A^T A − λI)x = 0 =⇒
[  0  −1   0 ] [ x1 ]   [ 0 ]
[ −1   1  −1 ] [ x2 ] = [ 0 ]  =⇒ −x1 + x2 − x3 = 0, x2 = 0 =⇒ x1 = −x3, x2 = 0, ∴ x = (−1, 0, 1)
[  0  −1   0 ] [ x3 ]   [ 0 ]
with λ = 3, (A^T A − λI)x = 0 =⇒
[ −2  −1   0 ] [ x1 ]   [ 0 ]
[ −1  −1  −1 ] [ x2 ] = [ 0 ]  =⇒ −2x1 − x2 = 0, x2 + 2x3 = 0 =⇒ x2 = −2x1, x2 = −2x3, ∴ x = (1, −2, 1)
[  0  −1  −2 ] [ x3 ]   [ 0 ]
Hence
     [ −1/√2   1/√2 ]        [ √3   0   0 ]        [  1/√6   −1/√2   1/√3 ]
U =  [  1/√2   1/√2 ],  Σ =  [  0   1   0 ],  V =  [ −2/√6     0     1/√3 ]
                                                   [  1/√6    1/√2   1/√3 ]
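The singular values found above can be cross-checked with np.linalg.svd (a minimal sketch; the columns of U and V returned by the routine may differ in sign from the hand computation):

import numpy as np

# the matrix of Example 3 above
A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
print(s)      # [1.7320508..., 1.0], i.e. sqrt(3) and 1, matching the Sigma obtained above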
Exercise:
1. Find the SVD of
(i) [[2, −1], [2, 2]],   (ii) [[1, 1], [0, 1], [−1, 1]],   (iii) [[3, 2, 2], [2, 3, −2]].
