Chapter 5
LINEAR TRANSFORMATIONS
Theorem: A function T from a vector space V into a vector space W over the same field K is a linear
transformation iff T(α1v1 + α2v2) = α1T(v1) + α2T(v2) for all α1, α2 ∈ K and for all v1, v2 ∈ V.
Consequently T: V → W is a linear transformation.
Observe that by induction we also have the more general relation
T(α1v1 + α2v2 + … + αnvn) = α1T(v1) + α2T(v2) + … + αnT(vn),
i.e. T(∑ αivi) = ∑ αiT(vi) (the sums running from i = 1 to n), for any n vectors v1, v2, …, vn in V and
any n scalars α1, α2, α3, …, αn in K, whenever T: V → W is a linear transformation.
Example: Let V be a vector space over the field K. Then the mapping
I: V → V given by I(v) = v for all v ∈ V is a linear transformation. To prove this, let
u, v ∈ V and α ∈ K. Then u + v ∈ V and αu ∈ V, as V is a vector space. Since I(x)
= x for all x ∈ V, we have
i) I(u + v) = u + v and I(u) + I(v) = u + v.
Thus I(u + v) = I(u) + I(v).
ii) I(αu) = αu and αI(u) = αu.
Thus I(αu) = αI(u).
Therefore I is a linear transformation. We call I the identity transformation.
Example: Let T be a mapping from a vector space V over a field K into itself
given by T(v) = Ov for all v ∈ V, where Ov is the zero vector in V.
Then T is a linear transformation. (Verify!) We call this linear transformation the
zero transformation.
Example: Let V be the vector space of all differentiable real-valued functions
of a real variable on an open interval (a, b). Then the mapping D: V → V
given by D(f) = f′, the derivative of f, is a linear transformation.
Solution: Let (a, b), (c, d) ∈ ℝ² and α ∈ ℝ.
Then (i) T((a, b) + (c, d)) = T(a + c, b + d)
= (a + c + b + d, b + d, a + c)
= (a + b, b, a) + (c + d, d, c)
= T(a, b) + T(c, d)
N((0, 1, 2) + (3, 0, 1)) = N(3, 1, 3) = ‖(3, 1, 3)‖ = √(3² + 1² + 3²) = √19, while
N(0, 1, 2) + N(3, 0, 1) = ‖(0, 1, 2)‖ + ‖(3, 0, 1)‖
= √(0² + 1² + 2²) + √(3² + 0² + 1²)
= √5 + √10.
Since √19 ≠ √5 + √10, N((0, 1, 2) + (3, 0, 1)) ≠ N(0, 1, 2) + N(3, 0, 1).
Thus N is not a linear transformation, as N(u + v) = N(u) + N(v) must hold
for all vectors u, v in ℝ³.
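The failure of additivity for the norm map N(v) = ‖v‖ on ℝ³ can be checked numerically. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# Numerical check that the norm map N(v) = ||v|| on R^3 fails additivity,
# using the same vectors as in the text, so N is not linear.
u = np.array([0.0, 1.0, 2.0])
v = np.array([3.0, 0.0, 1.0])

lhs = np.linalg.norm(u + v)                   # ||(3,1,3)|| = sqrt(19) ~ 4.359
rhs = np.linalg.norm(u) + np.linalg.norm(v)   # sqrt(5) + sqrt(10) ~ 5.398

print(lhs, rhs)   # the two values differ, so N(u+v) != N(u) + N(v)
```

Note that lhs < rhs here; this is just the triangle inequality, which holds with equality only for parallel vectors.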
Example: Let T be a linear transformation from a vector space V into W over the same field K.
Prove that the vectors v1, v2, v3, …, vn ∈ V are linearly independent if T(v1), T(v2),
T(v3), …, T(vn) are linearly independent vectors in W.
Solution: Suppose T(v1), T(v2), T(v3), …, T(vn) are linearly independent vectors in W, where v1,
v2, v3, …, vn are vectors in V and T: V → W is a linear transformation. We want to prove
that v1, v2, v3, …, vn are linearly independent.
Let α1, α2, α3, …, αn ∈ K such that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov.
Then T(α1v1 + α2v2 + α3v3 + … + αnvn) = T(Ov).
So α1T(v1) + α2T(v2) + α3T(v3) + … + αnT(vn) = Ow.
But T(v1), T(v2), T(v3), …, T(vn) are linearly independent.
Hence α1 = α2 = α3 = … = αn = 0. Thus we have shown that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov implies
α1 = α2 = α3 = … = αn = 0.
Consequently v1, v2, v3, …, vn are linearly independent.
Theorem: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be
a basis of V. If {w1, w2, w3, …, wn} is a set of arbitrary vectors in W, then
there exists a unique linear transformation F: V → W such that
F(vj) = wj for j = 1, 2, …, n.
To prove the theorem, we need to
a) define a function F from V into W such that F(vi) = wi for all i = 1, 2, 3, …, n;
b) show F is a linear transformation;
c) show that F is unique.
Proof: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be a basis
of V and {w1, w2, w3, …, wn} be any set of n vectors in W.
Since {v1, v2, v3, …, vn} is a basis of V, for any v ∈ V there exist unique scalars
a1, a2, a3, …, an ∈ K such that
v = a1v1 + a2v2 + … + anvn.
a) Define F: V → W by
F(v) = a1w1 + a2w2 + … + anwn.
Every element v of V is mapped to exactly one element of W, as the scalars ai are unique.
As W is a vector space, every linear combination of vectors in W is also in W, so
a1w1 + a2w2 + … + anwn ∈ W. Thus F is a function from V into W.
Moreover, since vi = 0·v1 + 0·v2 + … + 0·vi-1 + 1·vi + 0·vi+1 + … + 0·vn
for any i = 1, 2, 3, …, n, we have
F(vi) = 0·w1 + 0·w2 + … + 0·wi-1 + 1·wi + 0·wi+1 + … + 0·wn = wi.
So F(vi) = wi for each i = 1, 2, 3, …, n.
b) We show that F is a linear transformation,
i.e. F(u + v) = F(u) + F(v) and F(αv) = αF(v) for all α ∈ K and u, v ∈ V.
To do this, let x, y ∈ V and α ∈ K.
Then x = x1v1 + x2v2 + … + xnvn and y = y1v1 + y2v2 + … + ynvn for some unique scalars
x1, x2, x3, …, xn, y1, y2, y3, …, yn in K.
(i) F(x + y) = F((x1v1 + … + xnvn) + (y1v1 + … + ynvn))
= F((x1 + y1)v1 + … + (xn + yn)vn)
= (x1 + y1)w1 + … + (xn + yn)wn   by definition of F
= (x1w1 + … + xnwn) + (y1w1 + … + ynwn)
= F(x1v1 + … + xnvn) + F(y1v1 + … + ynvn)   by definition of F
= F(x) + F(y)
So F(x + y) = F(x) + F(y) for all x, y ∈ V.
(ii) F(αx) = F(α(x1v1 + … + xnvn))
= F((αx1)v1 + … + (αxn)vn)
= (αx1)w1 + … + (αxn)wn   by definition of F
= α(x1w1 + … + xnwn)
= αF(x1v1 + … + xnvn)   by definition of F
= αF(x)
So F(αx) = αF(x) for all α ∈ K and x ∈ V.
Therefore from (i) and (ii) we conclude that F is a linear transformation from V into W.
c) In (a) and (b) we have shown the existence of a linear transformation F: V → W such
that F(vi) = wi for all i = 1, 2, 3, …, n.
To prove that F is unique, suppose that G: V → W is a linear transformation such that G(vi) = wi
for all i ∈ {1, 2, …, n}.
Let x be any vector in V. Then x = x1v1 + x2v2 + … + xnvn for some unique scalars
x1, x2, x3, …, xn in K.
Thus G(x) = G(x1v1 + x2v2 + … + xnvn)
= x1G(v1) + x2G(v2) + … + xnG(vn)   as G is a linear transformation
= x1w1 + x2w2 + … + xnwn   as G(vi) = wi for each i = 1, 2, 3, …, n
= F(x1v1 + x2v2 + … + xnvn)   by definition of F
= F(x).
Since G(x) = F(x) for any x ∈ V, we conclude that G = F.
This proves that F is unique, which completes the proof of the theorem.
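The construction in the proof can be sketched numerically. A minimal sketch, assuming numpy, using the basis {(1, 2), (0, 1)} of ℝ² with prescribed images (3, -1, 5) and (2, 1, -1) in ℝ³ (the data of a later example in this chapter): F sends v = a1v1 + a2v2 to a1w1 + a2w2, where the ai are the coordinates of v in the basis.

```python
import numpy as np

# Sketch of the theorem's construction: F(v) = a1*w1 + a2*w2, where (a1, a2)
# are the coordinates of v in the basis {(1,2), (0,1)} of R^2.
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])           # columns: v1 = (1,2), v2 = (0,1)
W = np.array([[3.0, 2.0],
              [-1.0, 1.0],
              [5.0, -1.0]])          # columns: w1 = (3,-1,5), w2 = (2,1,-1)

def F(v):
    a = np.linalg.solve(B, np.asarray(v, float))  # coordinates of v in the basis
    return W @ a                                  # F(v) = a1*w1 + a2*w2

print(F([1, 2]))   # the first basis vector goes to w1 = (3, -1, 5)
print(F([0, 1]))   # the second basis vector goes to w2 = (2, 1, -1)
```

Because the coordinates a are unique and depend linearly on v, this F is automatically well defined and linear, which mirrors parts (a) and (b) of the proof.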
Example: Describe explicitly the linear transformation T: ℝ² → ℝ² such that
T(2, 3) = (4, 5) and T(1, 0) = (0, 0).
Solution: Let (x, y) ∈ ℝ². Then (x, y) can be expressed as a linear combination of
(2, 3) and (1, 0), as {(2, 3), (1, 0)} forms a basis of ℝ².
So (x, y) = a(2, 3) + b(1, 0) for some a, b ∈ ℝ, i.e. (x, y) = (2a + b, 3a).
Hence we have x = 2a + b and y = 3a, which in turn implies a = y/3 and b = x - 2y/3.
Thus (x, y) = (y/3)(2, 3) + (x - 2y/3)(1, 0), so
T(x, y) = T((y/3)(2, 3) + (x - 2y/3)(1, 0))
= (y/3)T(2, 3) + (x - 2y/3)T(1, 0)
= (y/3)(4, 5) + (x - 2y/3)(0, 0)
= (4y/3, 5y/3) + (0, 0).
Therefore T(x, y) = (4y/3, 5y/3).
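A quick sanity check of the derived formula T(x, y) = (4y/3, 5y/3) against the two prescribed values; the helper function below is illustrative, not from the text:

```python
# Check that the formula T(x, y) = (4y/3, 5y/3) reproduces the prescribed
# values T(2, 3) = (4, 5) and T(1, 0) = (0, 0).
def T(x, y):
    return (4 * y / 3, 5 * y / 3)

print(T(2, 3))   # (4.0, 5.0)
print(T(1, 0))   # (0.0, 0.0)
```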
Example: Describe explicitly the linear transformation T: ℝ² → ℝ³ such that
T(1, 2) = (3, -1, 5) and T(0, 1) = (2, 1, -1).
Solution: β = {(1, 2), (0, 1)} is a basis of ℝ². (Verify!)
Thus by the preceding theorem there exists a unique linear transformation T: ℝ² → ℝ³
such that T(1, 2) = (3, -1, 5) and T(0, 1) = (2, 1, -1).
To describe this unique linear transformation explicitly, suppose (x, y) ∈ ℝ².
Then (x, y) = t1(1, 2) + t2(0, 1) for some t1, t2 ∈ ℝ, as β is a basis of ℝ²,
so (x, y) = (t1, 2t1 + t2).
Thus we have x = t1 and y = 2t1 + t2, which gives t1 = x and t2 = y - 2x.
Therefore T(x, y) = xT(1, 2) + (y - 2x)T(0, 1)
= x(3, -1, 5) + (y - 2x)(2, 1, -1)
= (-x + 2y, -3x + y, 7x - y).
Notation: We shall denote the kernel and image of a linear transformation T: V → W by Ker T
and Im T respectively. So Ker T = {u ∈ V | T(u) = Ow} and
Im T = {w ∈ W | w = T(v) for some v ∈ V}.
Theorem: Let T be a linear transformation from a vector space V into W over the same field K.
Then (a) Ker T is a subspace of V;
(b) Im T is a subspace of W.
Proof: a) This was already proved.
b) Clearly Im T is a subset of W.
i) Since T(Ov) = Ow, Ow ∈ Im T.
ii) Let w1, w2 ∈ Im T. Then there exist u1, u2 ∈ V such that
T(u1) = w1 and T(u2) = w2. Since V is a vector space, u1 + u2 ∈ V. Moreover
T(u1 + u2) = T(u1) + T(u2) = w1 + w2. Thus w1 + w2 ∈ Im T, as there exists a
vector v ∈ V (namely v = u1 + u2) such that T(v) = w1 + w2.
So we have w1 + w2 ∈ Im T for all w1, w2 ∈ Im T.
iii) Let α ∈ K and w ∈ Im T. Then there exists v ∈ V such that T(v) = w,
as w ∈ Im T. Since V is a vector space over K, αv ∈ V. Moreover
T(αv) = αT(v) = αw. Hence αw ∈ Im T. From (i), (ii) and (iii) it
follows that Im T is a subspace of W.
Example: Let G: ℝ³ → ℝ² be defined by G(x, y, z) = (x + y, y - z).
a) Show that G is a linear transformation.
b) Find Ker G and Im G.
Solution: a) Let (x, y, z), (u, v, w) ∈ ℝ³ and α ∈ ℝ. Then
G((x, y, z) + (u, v, w)) = G(x + u, y + v, z + w)
= (x + u + y + v, y + v - z - w)
= (x + y, y - z) + (u + v, v - w)
= G(x, y, z) + G(u, v, w)
and G(α(x, y, z)) = G(αx, αy, αz)
= (αx + αy, αy - αz)
= α(x + y, y - z)
= αG(x, y, z).
Therefore G is a linear transformation.
b) Ker G = {(p, q, r) ∈ ℝ³ | G(p, q, r) = (0, 0)}
= {(p, q, r) ∈ ℝ³ | p + q = 0 and q - r = 0}
= {(p, q, r) ∈ ℝ³ | -p = q = r}
= {(p, -p, -p) | p ∈ ℝ} = {p(1, -1, -1) | p ∈ ℝ}.
So Ker G is the subspace of ℝ³ generated by (1, -1, -1).
Im G = {G(x, y, z) | x, y, z ∈ ℝ}
= {(x + y, y - z) | x, y, z ∈ ℝ}
= {(x + y, 0) + (0, y - z) | x, y, z ∈ ℝ}
= {t1(1, 0) + t2(0, 1) | t1 = x + y ∈ ℝ and t2 = y - z ∈ ℝ}.
Thus Im G is the subspace of ℝ² generated by (1, 0) and (0, 1).
That is, Im G = ℝ². (Why?) Observe that dim(Ker G) = 1 and dim(Im G) = 2.
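Since G(x, y, z) = (x + y, y - z) is multiplication by a 2×3 matrix, Ker G and Im G can be checked numerically. A minimal sketch, assuming numpy:

```python
import numpy as np

# G(x,y,z) = (x+y, y-z) is multiplication by the matrix A below, so
# ker G is the null space of A and Im G is its column space.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0]])

print(np.linalg.matrix_rank(A))          # 2, so dim(Im G) = 2 and Im G = R^2

# The direction (1,-1,-1) found above does span the kernel:
print(A @ np.array([1.0, -1.0, -1.0]))   # the zero vector of R^2
```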
T(x1) + T(-x2) = Ow. Thus we have T(x1 + (-x2)) = Ow, which implies
x1 + (-x2) ∈ Ker T. Since Ker T contains only the zero vector by assumption,
x1 + (-x2) = Ov and hence x1 = x2. Consequently T is one-to-one.
ii) Since Ker T = {(0, 0)} (it contains only the zero vector of ℝ²), T is
one-to-one by the theorem above. But it is not onto. Why?
Notice that whenever Ker T ≠ {Ov}, we can conclude that T is not one-to-one.
Theorem (Rank-Nullity Theorem): Let V and W be vector spaces over the same field K. Let L:
V → W be a linear transformation. If V is a finite dimensional vector space, then
dim V = nullity of L + rank of L,
i.e. dim V = dim(Ker L) + dim(Im L).
Proof: Since V is a finite dimensional vector space, Ker L and
Im L = L(V) are finite dimensional. Moreover dim(Ker L), dim(Im L) ≤ dim V.
(Verify.)
Let {u1, u2, …, up} and {w1, w2, …, wq} be bases of Ker L and Im L respectively
(p, q ≤ dim V). Then there exist v1, v2, …, vq ∈ V such that L(vi) = wi for i = 1, 2,
3, …, q, as wi ∈ Im L.
Claim: β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
We show that
i) β generates V;
ii) β is linearly independent.
i) Let v ∈ V. Then L(v) ∈ Im L and hence there exist unique scalars b1, b2, …, bq in K such
that L(v) = b1w1 + b2w2 + … + bqwq.
So L(v) = b1L(v1) + b2L(v2) + … + bqL(vq), as L(vi) = wi for i = 1, 2, …, q
= L(b1v1 + b2v2 + … + bqvq).
Thus L(v - b1v1 - b2v2 - … - bqvq) = Ow, which implies v - b1v1 - b2v2 - … - bqvq ∈ Ker L.
Hence v - b1v1 - b2v2 - … - bqvq = a1u1 + a2u2 + … + apup for some unique
scalars a1, a2, …, ap in K. Therefore v = a1u1 + a2u2 + … + apup + b1v1 + b2v2 + … + bqvq.
From this we conclude that β generates V.
ii) Suppose α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov ……… (1)
where α1, α2, …, αp, r1, r2, …, rq ∈ K.
Then L(α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq) = L(Ov) = Ow.
So we have α1L(u1) + α2L(u2) + … + αpL(up) + r1L(v1) + r2L(v2) + … + rqL(vq) = Ow
⟹ r1w1 + r2w2 + … + rqwq = Ow, since L(uj) = Ow and
L(vi) = wi for j = 1, 2, …, p and i = 1, 2, …, q
⟹ r1 = r2 = … = rq = 0, since {w1, w2, …, wq} is a basis of Im L
⟹ α1u1 + α2u2 + … + αpup = Ov (replace r1, r2, …, rq by 0 in (1))
⟹ α1 = α2 = … = αp = 0, since {u1, u2, …, up} is a basis of Ker L.
Thus we have shown that
α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov implies
α1 = α2 = … = αp = r1 = r2 = … = rq = 0.
That is, β is a linearly independent set in V. From (i) and (ii) it follows that
β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
Hence dim V = p + q = dim(Ker L) + dim(Im L).
Therefore dim V = nullity of L + rank of L.
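The theorem can be illustrated numerically for the earlier map G(x, y, z) = (x + y, y - z). A sketch, assuming numpy, using the SVD to obtain the rank and a basis of the kernel independently:

```python
import numpy as np

# Rank-nullity check for L = G(x,y,z) = (x+y, y-z), i.e. multiplication by A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0]])
n = A.shape[1]                       # dim V = 3

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))        # dim(Im L) = number of nonzero singular values
kernel_basis = Vt[rank:]             # remaining right-singular vectors span Ker L
nullity = kernel_basis.shape[0]      # dim(Ker L)

print(n, rank, nullity)              # 3 2 1, and indeed 3 = 2 + 1
```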
Example: One can get a linear transformation T: ℝ³ → ℝ⁴ such that
T(1, 0, 0) = (1, 2, 0, -4), T(0, 1, 0) = (2, 0, -1, -3) and T(0, 0, 1) = (0, 0, 0, 0).
If (x, y, z) ∈ ℝ³, then T(x, y, z) = T(x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1))
= xT(1, 0, 0) + yT(0, 1, 0) + zT(0, 0, 1)
= (x, 2x, 0, -4x) + (2y, 0, -y, -3y) + z(0, 0, 0, 0).
Therefore T(x, y, z) = (x + 2y, 2x, -y, -4x - 3y) is the required linear
transformation.
Theorem: Let V and W be vector spaces over the field K. Let T and S be linear transformations
from V into W and α ∈ K. Then
i) the function T + S is a linear transformation from V into W;
ii) the function αT is a linear transformation from V into W;
iii) L(V, W), the set of all linear transformations from V into W with
respect to the operations of vector addition and scalar multiplication
defined by (T + S)(v) = T(v) + S(v) and (αT)(v) = αT(v) for all v ∈ V,
is a vector space over K.
= α(T(v1) + S(v1))
= α((T + S)(v1))
Therefore T + S is a linear transformation.
ii) Let T be a linear transformation from V into W and α ∈ K.
Then for any v1, v2 ∈ V and β ∈ K
we have (αT)(v1 + v2) = α(T(v1 + v2))
= α(T(v1) + T(v2))
= α(T(v1)) + α(T(v2))
= (αT)(v1) + (αT)(v2)
and (αT)(βv1) = α(T(βv1))
= α(βT(v1))
= (αβ)(T(v1))
= (βα)(T(v1))
= β(α(T(v1)))
= β((αT)(v1)).
Therefore αT is a linear transformation.
iii) Here we need to verify that all the conditions in the definition of a vector
space are satisfied. We leave this to the students.
Theorem: Let V, W and Z be vector spaces over the field K. Let T and S be linear
transformations from V into W and from W into Z respectively. Then the
composite function S∘T defined by (S∘T)(v) = S(T(v)) for all v ∈ V is a linear
transformation.
Proof: (Exercise.)
Theorem: Let V and W be vector spaces over the field F and let T be a linear transformation
from V into W. If T is invertible, then the inverse function T⁻¹ is a linear
transformation.
Proof: Suppose T: V → W is invertible. Then there exists a unique function
T⁻¹: W → V such that T⁻¹(w) = v ⟺ T(v) = w for all w ∈ W. Moreover
T is one-to-one and onto. We need to show that T⁻¹ is linear. Let w1, w2 ∈ W and α ∈ F.
Then there exist unique vectors v1, v2 ∈ V such that T(v1) = w1 and T(v2) = w2,
as T is one-to-one and onto. So T⁻¹(w1) = v1 and T⁻¹(w2) = v2.
T(v1 + v2) = T(v1) + T(v2), because T is linear
= w1 + w2.
Thus T⁻¹(w1 + w2) = v1 + v2 = T⁻¹(w1) + T⁻¹(w2) for any w1, w2 ∈ W.
Since T(αv1) = αT(v1) = αw1, T⁻¹(αw1) = αv1 = αT⁻¹(w1). Therefore T⁻¹ is a linear
transformation.
Ker T = {(a, b, c) ∈ ℝ³ | (a, 3b, c) = (0, 0, 0)}
= {(0, 0, 0)}.
Therefore T is one-to-one.
Moreover, from the rank-nullity theorem, dim(ℝ³) = dim(Ker T) + dim(Im T), so
3 = 0 + dim(Im T) and dim(Im T) = 3. Since Im T is a subspace of ℝ³
and dim(Im T) = 3, we have Im T = ℝ³. Hence T is onto, as Im T = ℝ³.
Therefore T is invertible, as it is one-to-one and onto.
To find T⁻¹, let T(x, y, z) = (u, v, w).
Then (x, 3y, z) = (u, v, w), so x = u, y = v/3, z = w.
But T(x, y, z) = (u, v, w) ⟺ T⁻¹(u, v, w) = (x, y, z).
Therefore T⁻¹(u, v, w) = (u, v/3, w).
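Here T(x, y, z) = (x, 3y, z) is multiplication by the diagonal matrix diag(1, 3, 1), so T⁻¹ is multiplication by its inverse. A minimal numerical sketch, assuming numpy:

```python
import numpy as np

# T(x,y,z) = (x, 3y, z) is multiplication by diag(1, 3, 1); its inverse is
# multiplication by diag(1, 1/3, 1), matching T^{-1}(u,v,w) = (u, v/3, w).
A = np.diag([1.0, 3.0, 1.0])
A_inv = np.linalg.inv(A)

print(A_inv @ A)                             # the 3x3 identity matrix
print(A_inv @ np.array([2.0, 6.0, 5.0]))     # [2. 2. 5.], i.e. T^{-1}(2,6,5) = (2,2,5)
```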
Linear transformation associated with a given matrix
Definition: For any matrix A = (aij)m×n over a field K, the mapping TA: Kⁿ → Kᵐ defined
by TA(X) = AX, where X is a column vector in Kⁿ, is called the linear transformation
associated with the matrix A.
Example: a) Let
A = [ 1  0
      0  1 ].
Then TA: ℝ² → ℝ² is the identity linear transformation.
b) Let
B = [ 1  0
      2  1
      3  0 ].
Then TB(x, y) = B [x; y] = [x; 2x + y; 3x].
Thus TB: ℝ² → ℝ³ given by TB(x, y) = (x, 2x + y, 3x) is the linear
transformation associated with the matrix B.
c) Let
A = [ 1  3
      0  1 ],   u = [0; 2],   v = [2; 0],   w = [2; 2],
and let TA be the linear transformation associated with the matrix A.
i) Find TA(u), TA(v) and TA(w).
TA(u) = Au = [ 1  3
               0  1 ] [0; 2] = [6; 2].
ii) TA deforms the given square as if the top of the square were
pushed to the right while the base is held fixed (a shear; see the figure below).
Example: Let
A = [  1  0   3
      -2  1  -3 ],   b = [-4; 9],
and let TA be the linear transformation associated with the matrix A. Find Ker TA.
Solution: TA: ℝ³ → ℝ² is given by
TA(x1, x2, x3) = A [x1; x2; x3] = (x1 + 3x3, -2x1 + x2 - 3x3).
Then Ker TA = {X ∈ ℝ³ | TA(X) = 0}, so
(a, b, c) ∈ Ker TA ⟺ (a + 3c, -2a + b - 3c) = (0, 0)
⟺ a + 3c = 0 and -2a + b - 3c = 0
⟺ a = -3c and b = 2a + 3c = -3c, i.e. a = -3c = b.
Thus
Ker TA = {(-3c, -3c, c) | c ∈ ℝ} = {c(-3, -3, 1) | c ∈ ℝ},
the subspace of ℝ³ generated by (-3, -3, 1).
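The kernel computation can be checked numerically. A minimal sketch, assuming numpy:

```python
import numpy as np

# Check that (-3, -3, 1) lies in the null space of A = [[1,0,3],[-2,1,-3]]
# and that the kernel is exactly one-dimensional.
A = np.array([[1.0, 0.0, 3.0],
              [-2.0, 1.0, -3.0]])

print(A @ np.array([-3.0, -3.0, 1.0]))   # the zero vector of R^2
print(np.linalg.matrix_rank(A))          # 2, so dim(Ker T_A) = 3 - 2 = 1
```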
In the previous subsection we have seen that associated to any given m×n matrix A there is a
linear transformation TA: ℝⁿ → ℝᵐ defined by TA(X) = AX. In this subsection we shall see
the reverse process, that is, how to find a matrix associated to a given linear transformation from a
finite dimensional vector space V into a finite dimensional vector space W over the same field K.
Before going further, let us recall the coordinates of an element v of a finite dimensional
vector space V with respect to a given ordered basis β of V. What do we mean by an ordered
basis? Suppose β = {b1, b2, …, bn} is an ordered basis for a finite dimensional vector space V
over a field K and v is in V. The coordinates of v relative to the basis β (or the β-coordinates
of v) are the unique scalars c1, c2, …, cn in K such that v = c1b1 + c2b2 + … + cnbn.
If c1, c2, …, cn are the β-coordinates of v, then the vector in Kⁿ
[v]β = [c1; c2; …; cn]
is called the coordinate vector of v relative to β.
For example, the coordinate vector of X = [1; 6] relative to β = {b1, b2} is
[X]β = [-2; 3],
since X = (-2)b1 + 3b2.
Let V be an n-dimensional vector space over the field K and let W be an m-dimensional vector
space over K. Let β = {v1, v2, …, vn} and β' = {w1, w2, …, wm} be ordered bases of V and W
respectively. Suppose T: V → W is a linear transformation.
Then for any x in V, T(x) ∈ W and T(x) can be expressed as a linear combination of elements
of the basis β'. So we have
T(v1) = a11w1 + a21w2 + … + am1wm
T(v2) = a12w1 + a22w2 + … + am2wm
⋮                                      (1)
T(vj) = a1jw1 + a2jw2 + … + amjwm
⋮
T(vn) = a1nw1 + a2nw2 + … + amnwm
where the aij are scalars in K.
Writing the coordinates of T(v1), T(v2), …, T(vn) successively as the columns of a matrix, we get
M = [ a11  a12  …  a1n
      a21  a22  …  a2n
       ⋮    ⋮        ⋮
      am1  am2  …  amn ]               (2)
which is called the matrix of T relative to the
bases β and β'. If β and β' are the standard bases of V and W respectively, we call the matrix
M in (2) the standard matrix for the linear transformation T.
Example: Define T: ℝ³ → ℝ² by
T(x1, x2, x3) = (3x1 - 2x2 + x3, -x1 + x2 + 5x3).
a) Find the standard matrix for T.
b) Find the matrix of T relative to the ordered bases
β = {(1, 0, 1), (0, 1, 1), (0, 0, 1)} and β' = {(1, 0), (1, 1)} of ℝ³ and ℝ²
respectively.
Solution: a) Here we use the standard bases of ℝ³ and ℝ².
T(1, 0, 0) = (3, -1) = 3(1, 0) + (-1)(0, 1)
T(0, 1, 0) = (-2, 1) = -2(1, 0) + 1(0, 1)
T(0, 0, 1) = (1, 5) = 1(1, 0) + 5(0, 1)
Writing the coordinates of T(1, 0, 0), T(0, 1, 0), T(0, 0, 1) as the first, second and
third columns of a matrix, we get the standard matrix M for T:
M = [  3  -2  1
      -1   1  5 ].
b) T(1, 0, 1) = (4, 4) = 0(1, 0) + 4(1, 1)
T(0, 1, 1) = (-1, 6) = -7(1, 0) + 6(1, 1)
T(0, 0, 1) = (1, 5) = -4(1, 0) + 5(1, 1)
So the matrix of T relative to β and β' is
[ 0  -7  -4
  4   6   5 ].
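Computing the matrix of T relative to β and β' can be sketched numerically: each column is the β'-coordinate vector of T applied to a β-basis vector, obtained here by solving a small linear system. A sketch, assuming numpy:

```python
import numpy as np

# Matrix of T(x1,x2,x3) = (3x1-2x2+x3, -x1+x2+5x3) relative to
# beta = {(1,0,1),(0,1,1),(0,0,1)} and beta' = {(1,0),(1,1)}.
def T(v):
    x1, x2, x3 = v
    return np.array([3*x1 - 2*x2 + x3, -x1 + x2 + 5*x3])

beta = [np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0]),
        np.array([0.0, 0.0, 1.0])]
Bp = np.array([[1.0, 1.0],
               [0.0, 1.0]])         # columns of beta'

# Column j of M solves Bp @ c = T(beta[j]), i.e. c = [T(v_j)]_{beta'}.
M = np.column_stack([np.linalg.solve(Bp, T(v)) for v in beta])
print(M)   # columns (0,4), (-7,6), (-4,5), matching the computation above
```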
Now let x be any vector in V with coordinate vector
[x]β = [x1; x2; …; xn],
so that x = x1v1 + x2v2 + … + xnvn and
T(x) = T(x1v1 + x2v2 + … + xnvn) = x1T(v1) + x2T(v2) + … + xnT(vn). …… (3)
Using the basis β' in W, we can rewrite (3) in terms of coordinate vectors relative to β' as
[T(x)]β' = M[x]β. …… (5)
Thus if [x]β is the coordinate vector of x relative to β, then the equation in (5) shows that
M[x]β is the coordinate vector of the vector T(x) relative to β'.
Note: In the case when W is the same as V and the basis β' is the same as β, the matrix M in (2)
is called the matrix of T relative to the basis β, written [T]β.
Example: Let T: ℝ³ → ℝ² be the linear transformation defined by
T(x, y, z) = (3x + 2y - 4z, x - 5y + 3z). Find the matrix of T relative to the bases
B1 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} of ℝ³ and B2 = {(1, 3), (2, 5)} of ℝ².
Solution: T(1, 1, 1) = (1, -1) = -7(1, 3) + 4(2, 5)
T(1, 1, 0) = (5, -4) = -33(1, 3) + 19(2, 5)
T(1, 0, 0) = (3, 1) = -13(1, 3) + 8(2, 5)
The matrix M of T relative to the bases B1 and B2 is
M = [ -7  -33  -13
       4   19    8 ].
Example: Let β = {b1, b2, b3} be a basis for a vector space V over the set of real numbers. Find
T(3b1 - 4b2), where T is a linear transformation from V into V whose
matrix relative to β is
[T]β = [ 0  -6   1
         0   5  -1
         1  -2   7 ].
Solution: Let x = 3b1 - 4b2. Then the coordinate vector of x relative to β is
[x]β = [3; -4; 0],
and
[T(x)]β = [T]β[x]β = [24; -20; 11].
Hence T(x) = 24b1 - 20b2 + 11b3.
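The coordinate computation [T(x)]β = [T]β[x]β is a single matrix-vector product. A minimal sketch, assuming numpy:

```python
import numpy as np

# With [T]_beta below and x = 3b1 - 4b2, the product [T]_beta [x]_beta
# gives the beta-coordinates of T(x).
T_beta = np.array([[0.0, -6.0, 1.0],
                   [0.0, 5.0, -1.0],
                   [1.0, -2.0, 7.0]])
x_beta = np.array([3.0, -4.0, 0.0])

print(T_beta @ x_beta)   # [24. -20. 11.], i.e. T(x) = 24b1 - 20b2 + 11b3
```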
Definition:
Let T: V → V be a linear operator on a vector space V over a field K. An eigenvalue of T is a
scalar λ in K such that there is a non-zero vector v in V with T(v) = λv.
If λ is an eigenvalue of T, then
a) any non-zero vector v in V such that T(v) = λv is called an eigenvector of T associated with the
eigenvalue λ;
b) the collection of all vectors v of V with T(v) = λv is called the eigenspace associated with λ.
Note:
1. One of the meanings of the word “eigen” in German is “proper”. Thus eigenvalues
are also called proper values, characteristic values or latent roots.
2. If v and w are eigenvectors associated with the eigenvalue λ, then:
i) v + w is also an eigenvector with eigenvalue λ (provided v + w ≠ Ov), because
T(v + w) = T(v) + T(w) = λv + λw = λ(v + w);
ii) each scalar multiple kv, k ≠ 0, is also an eigenvector with eigenvalue λ,
because T(kv) = kT(v) = k(λv) = λ(kv).
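For a linear operator on ℝⁿ given by a matrix, eigenvalues and eigenvectors can be computed numerically. An illustrative sketch, assuming numpy; the matrix is made up, not from the text:

```python
import numpy as np

# Eigenvalues and eigenvectors of the operator T(v) = Av on R^2 for an
# example matrix A (upper triangular, so its eigenvalues are 2 and 3).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                      # the eigenvalues 2 and 3 (in some order)

# Each column of vecs is an eigenvector: A v = lambda v
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))   # True for each pair
```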