
Chapter Five

LINEAR TRANSFORMATIONS

5.1. Definition of linear Transformations and examples


Definition: Let V and W be vector spaces over the same field K. A function
T: V → W is called a linear transformation (or a linear mapping) of V into
W if it satisfies the following conditions:
i) T(u + v) = T(u) + T(v) for all u, v ∈ V
ii) T(αu) = αT(u) for all α ∈ K and u ∈ V

Theorem: A function T from a vector space V into W over the same field K is a linear
transformation iff T(α1v1 + α2v2) = α1T(v1) + α2T(v2) for all α1, α2 ∈ K and for all v1, v2 ∈ V.

Proof: First suppose T: V → W is a linear transformation.

Since α1v1, α2v2 ∈ V for all α1, α2 ∈ K and v1, v2 ∈ V,
T(α1v1 + α2v2) = T(α1v1) + T(α2v2) by condition (i) of the definition
= α1T(v1) + α2T(v2) by condition (ii) of the definition
Thus T(α1v1 + α2v2) = α1T(v1) + α2T(v2) for all α1, α2 ∈ K and v1, v2 ∈ V.
Next suppose that
T(α1v1 + α2v2) = α1T(v1) + α2T(v2) for all α1, α2 ∈ K and v1, v2 ∈ V.   (*)
We have to prove that T is a linear transformation.
T(v1 + v2) = T(1·v1 + 1·v2), where 1 is the multiplicative identity in K
= 1·T(v1) + 1·T(v2) from (*)
= T(v1) + T(v2).
T(α1v1) = T(α1v1 + 0·v2), where 0 is the zero element of K
= α1T(v1) + 0·T(v2) from (*)
= α1T(v1) + Ow (since 0·T(v2) = Ow, the zero vector in W)
= α1T(v1)
Therefore T(v1 + v2) = T(v1) + T(v2) and T(α1v1) = α1T(v1)
for all α1 ∈ K and v1, v2 ∈ V.

Consequently T: V → W is a linear transformation.
Observe that by induction we also have the more general relation
T(α1v1 + α2v2 + … + αnvn) = α1T(v1) + α2T(v2) + … + αnT(vn), i.e.

T(∑_{i=1}^{n} αi vi) = ∑_{i=1}^{n} αi T(vi)

for any n vectors v1, v2, …, vn in V and any n scalars α1, α2, α3, …, αn in K,
whenever T: V → W is a linear transformation.

Example: Let V be a vector space over the field K. Then the mapping
I: V → V given by I(v) = v for all v ∈ V is a linear transformation. To prove this, let
u, v ∈ V and α ∈ K. Then u + v ∈ V and αu ∈ V as V is a vector space. Since I(x)
= x for all x ∈ V, we have
i) I(u + v) = u + v and I(u) + I(v) = u + v
Thus I(u + v) = I(u) + I(v)
ii) I(αu) = αu and αI(u) = αu
Thus I(αu) = αI(u)
Therefore I is a linear transformation. We call I the identity transformation.

Example: Let T be a mapping from a vector space V over a field K into itself
given by T(v) = Ov for all v ∈ V, where Ov is the zero vector in V.
Then T is a linear transformation. (Verify!) We call this linear transformation the
zero transformation.

Example: Let V be the vector space of all differentiable real-valued functions
of a real variable on an open interval (a, b). Then the mapping D: V → V
given by D(f) = f′ (where f′ is the derivative of f) is a linear
transformation. This can easily be verified using the properties of the
derivative.

Example: Let the function L: ℝ² → ℝ³ be given by L(x, y) = (x + y, y, x).

Is L a linear transformation?

Solution: Let (a, b), (c, d) ∈ ℝ² and α ∈ ℝ.
Then (i) L((a, b) + (c, d)) = L(a + c, b + d)
= (a + c + b + d, b + d, a + c)
= (a + b, b, a) + (c + d, d, c)
= L(a, b) + L(c, d)
ii) L(α(a, b)) = L(αa, αb)
= (αa + αb, αb, αa)
= α(a + b, b, a)
= αL(a, b)
Thus L is a linear transformation.
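The two defining conditions can also be checked numerically. The following is a minimal Python sketch (the helper names `add` and `scale` and the sample vectors are our own choices), verifying both conditions for L on a few inputs:

```python
# Numerical check that L(x, y) = (x + y, y, x) satisfies both linearity conditions.
def L(x, y):
    return (x + y, y, x)

def add(u, v):
    # componentwise vector addition
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    # scalar multiplication
    return tuple(c * a for a in u)

u, v, alpha = (1.0, 2.0), (3.0, -4.0), 2.5

# Condition (i): L(u + v) = L(u) + L(v)
assert L(*add(u, v)) == add(L(*u), L(*v))
# Condition (ii): L(alpha * u) = alpha * L(u)
assert L(*scale(alpha, u)) == scale(alpha, L(*u))
```

A few sample points of course do not constitute a proof; the algebraic argument above is what establishes linearity for all inputs.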

Example: Is the mapping N: ℝ³ → ℝ defined by N(x, y, z) = ‖(x, y, z)‖
a linear transformation?
Solution: No. Take (0, 1, 2), (3, 0, 1) ∈ ℝ³.
N((0, 1, 2) + (3, 0, 1)) = N(3, 1, 3)
= ‖(3, 1, 3)‖ = √(3² + 1² + 3²)
= √19
N(0, 1, 2) + N(3, 0, 1) = ‖(0, 1, 2)‖ + ‖(3, 0, 1)‖
= √(0² + 1² + 2²) + √(3² + 0² + 1²)
= √5 + √10
Since √19 ≠ √5 + √10, N((0, 1, 2) + (3, 0, 1)) ≠ N(0, 1, 2) + N(3, 0, 1).
Thus N is not a linear transformation, as N(u + v) = N(u) + N(v) must hold
for all vectors u, v in ℝ³.
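The failure of additivity can be confirmed numerically as well; this small sketch uses Python's `math.sqrt` and the same two counterexample vectors:

```python
import math

# The norm map N(x, y, z) = ||(x, y, z)|| fails additivity.
def N(x, y, z):
    return math.sqrt(x**2 + y**2 + z**2)

u, v = (0, 1, 2), (3, 0, 1)
s = tuple(a + b for a, b in zip(u, v))   # u + v = (3, 1, 3)

assert N(*s) == math.sqrt(19)            # N(u + v) = sqrt(19) ~ 4.359
assert N(*u) + N(*v) == math.sqrt(5) + math.sqrt(10)  # ~ 5.398
assert N(*s) != N(*u) + N(*v)            # so N is not additive
```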

Example: Let T be a linear transformation from a vector space V into W over the same field K.
Prove that the vectors v1, v2, v3, …, vn ∈ V are linearly independent if T(v1), T(v2),
T(v3), …, T(vn) are linearly independent vectors in W.
Solution: Suppose T(v1), T(v2), T(v3), …, T(vn) are linearly independent vectors in W, where v1,
v2, v3, …, vn are vectors in V and T: V → W is a linear transformation. We want to prove
that v1, v2, v3, …, vn are linearly independent.
Let α1, α2, α3, …, αn ∈ K such that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov
Then T(α1v1 + α2v2 + α3v3 + … + αnvn) = T(Ov)
So α1T(v1) + α2T(v2) + α3T(v3) + … + αnT(vn) = Ow
But T(v1), T(v2), T(v3), …, T(vn) are linearly independent.
Hence α1 = α2 = α3 = … = αn = 0. Thus we have shown that
α1v1 + α2v2 + α3v3 + … + αnvn = Ov implies
α1 = α2 = α3 = … = αn = 0.
Consequently v1, v2, v3, …, vn are linearly independent.

Properties of Linear transformation

Theorem: Let T be a linear transformation from a vector space V into W over the
same field K. Then i) T(−v) = −T(v) for all v ∈ V
ii) T(v1 − v2) = T(v1) − T(v2) for all v1, v2 ∈ V

Theorem: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be
a basis of V. If {w1, w2, w3, …, wn} is a set of arbitrary vectors in W, then
there exists a unique linear transformation F: V → W such that
F(vj) = wj for j = 1, 2, …, n.
To prove the theorem, we need to
a) define a function F from V into W such that F(vi) = wi for all i = 1, 2, 3, …, n;
b) show F is a linear transformation;
c) show that F is unique.
Proof: Let V and W be vector spaces over the field K. Let {v1, v2, v3, …, vn} be a basis
of V and {w1, w2, w3, …, wn} be any set of n vectors in W.

Since {v1, v2, v3, …, vn} is a basis of V, for any v ∈ V there exist unique scalars
a1, a2, a3, …, an ∈ K such that

v = ∑_{i=1}^{n} ai vi

a) Define F: V → W by F(v) = a1w1 + a2w2 + a3w3 + … + anwn,

i.e. F(v) = ∑_{i=1}^{n} ai wi where v = ∑_{i=1}^{n} ai vi

Every element v of V is mapped to only one element of W, as the scalars ai are unique.
As W is a vector space, every linear combination of vectors in W is also in W. So
∑_{i=1}^{n} ai wi ∈ W. Thus F is a function from V into W.
Moreover, since vi = 0·v1 + 0·v2 + … + 0·vi−1 + 1·vi + 0·vi+1 + … + 0·vn
for any i = 1, 2, 3, …, n, we have
F(vi) = 0·w1 + 0·w2 + … + 0·wi−1 + 1·wi + 0·wi+1 + … + 0·wn = wi
So F(vi) = wi for each i = 1, 2, 3, …, n.
b) We show that F is a linear transformation,
i.e. F(u + v) = F(u) + F(v) and F(αv) = αF(v) for all α ∈ K and u, v ∈ V.
To do this, let x, y ∈ V and α ∈ K.
Then x = ∑_{i=1}^{n} xi vi and y = ∑_{i=1}^{n} yi vi for some unique scalars
x1, x2, x3, …, xn, y1, y2, y3, …, yn in K.
(i) F(x + y) = F(∑_{i=1}^{n} xi vi + ∑_{i=1}^{n} yi vi)
= F(∑_{i=1}^{n} (xi + yi) vi)
= ∑_{i=1}^{n} (xi + yi) wi by definition of F
= ∑_{i=1}^{n} xi wi + ∑_{i=1}^{n} yi wi
= F(∑_{i=1}^{n} xi vi) + F(∑_{i=1}^{n} yi vi) by definition of F
= F(x) + F(y)
So F(x + y) = F(x) + F(y) for all x, y ∈ V.
(ii) F(αx) = F(α ∑_{i=1}^{n} xi vi)
= F(∑_{i=1}^{n} (αxi) vi)
= ∑_{i=1}^{n} (αxi) wi by definition of F
= α ∑_{i=1}^{n} xi wi
= α F(∑_{i=1}^{n} xi vi) by definition of F
= α F(x)
So F(αx) = αF(x) for all α ∈ K and x ∈ V.
Therefore from (i) and (ii) we conclude that F is a linear transformation from V into W.
c) In (a) and (b) we have shown the existence of a linear transformation F: V → W such
that F(vi) = wi for all i = 1, 2, 3, …, n.
To prove that F is unique, suppose that G: V → W is a linear transformation such that G(vi) = wi
for all i ∈ {1, 2, …, n}.
Let x be any vector in V. Then x = ∑_{i=1}^{n} xi vi for some unique scalars x1, x2, x3, …, xn in K.
Thus G(x) = G(∑_{i=1}^{n} xi vi)
= ∑_{i=1}^{n} xi G(vi) as G is a linear transformation
= ∑_{i=1}^{n} xi wi as G(vi) = wi for each i = 1, 2, 3, …, n
= F(∑_{i=1}^{n} xi vi) by definition of F
= F(x)
Since G(x) = F(x) for any x ∈ V, we conclude that G = F.
This proves that F is unique. With this we complete the proof of the theorem.
Example: Describe explicitly the linear transformation T: ℝ² → ℝ² such that
T(2, 3) = (4, 5) and T(1, 0) = (0, 0).
Solution: Let (x, y) ∈ ℝ². Then (x, y) can be expressed as a linear combination of
(2, 3) and (1, 0), as {(2, 3), (1, 0)} forms a basis of ℝ².
So (x, y) = a(2, 3) + b(1, 0) for some a, b ∈ ℝ, i.e. (x, y) = (2a + b, 3a).
Hence we have x = 2a + b and y = 3a, which in turn implies a = y/3 and
b = x − 2y/3.
Thus (x, y) = (y/3)(2, 3) + (x − 2y/3)(1, 0), so
T(x, y) = (y/3) T(2, 3) + (x − 2y/3) T(1, 0)
= (y/3)(4, 5) + (x − 2y/3)(0, 0)
= (4y/3, 5y/3) + (0, 0)
Therefore T(x, y) = (4y/3, 5y/3) for all (x, y) ∈ ℝ².
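A quick Python check (the function name `T` is ours) confirms that the derived rule reproduces the two prescribed values:

```python
# The explicit rule derived above: T(x, y) = (4y/3, 5y/3).
def T(x, y):
    return (4 * y / 3, 5 * y / 3)

# It must agree with the given data on the basis {(2, 3), (1, 0)}.
assert T(2, 3) == (4.0, 5.0)
assert T(1, 0) == (0.0, 0.0)
```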

Example: Find a linear transformation T: ℝ² → ℝ³ such that
T(1, 2) = (3, −1, 5) and T(0, 1) = (2, 1, −1).
Solution: β = {(1, 2), (0, 1)} is a basis of ℝ². (Verify.)
Thus there exists a unique linear transformation (by the theorem above) T: ℝ² → ℝ³
such that T(1, 2) = (3, −1, 5) and T(0, 1) = (2, 1, −1).
To describe this unique linear transformation explicitly, suppose (x, y) ∈ ℝ².
Then (x, y) = t1(1, 2) + t2(0, 1) for some t1, t2 ∈ ℝ, as β is a basis of ℝ², so
(x, y) = (t1, 2t1 + t2).
Thus we have x = t1 and y = 2t1 + t2,
which in turn implies t1 = x and t2 = y − 2x. So (x, y) = x(1, 2) + (y − 2x)(0, 1)
and T(x, y) = x T(1, 2) + (y − 2x) T(0, 1)
= x(3, −1, 5) + (y − 2x)(2, 1, −1)
= (3x, −x, 5x) + (2y − 4x, y − 2x, 2x − y)
= (2y − x, y − 3x, 7x − y)
Therefore the required linear transformation is given by
T(x, y) = (2y − x, y − 3x, 7x − y).
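As before, the result can be sanity-checked in Python against the two given values (function name `T` is ours):

```python
# The derived rule: T(x, y) = (2y - x, y - 3x, 7x - y).
def T(x, y):
    return (2 * y - x, y - 3 * x, 7 * x - y)

# It reproduces the prescribed images of the basis vectors.
assert T(1, 2) == (3, -1, 5)
assert T(0, 1) == (2, 1, -1)
```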

5.2. The Rank and Nullity of a linear transformation and examples

Definition: Let V, W be vector spaces over K and T: V → W be a linear transformation.

i) The set of elements v ∈ V such that T(v) = Ow is called the kernel of T or the
null space of T.
ii) The set of elements w in W for which there exists an element v of V with T(v)
= w is called the image or the range of T.

Notation: We shall denote the kernel and image of a linear transformation T: V → W by ker T
and Im T respectively. So ker T = {u ∈ V | T(u) = Ow} and
Im T = {w ∈ W | w = T(v) for some v ∈ V}.

Theorem: Let T be a linear transformation from a vector space V into W over the same field K.
Then (a) ker T is a subspace of V
(b) Im T is a subspace of W.
Proof: a) This was proved earlier.
b) Clearly Im T is a subset of W.
i) Since T(Ov) = Ow, Ow ∈ Im T.
ii) Let w1, w2 ∈ Im T. Then there exist u1, u2 ∈ V such that
T(u1) = w1 and T(u2) = w2. Since V is a vector space, u1 + u2 ∈ V. Moreover
T(u1 + u2) = T(u1) + T(u2) = w1 + w2. Thus w1 + w2 ∈ Im T, as there exists a
vector v ∈ V such that T(v) = w1 + w2 (namely v = u1 + u2).
So we have w1 + w2 ∈ Im T for all w1, w2 ∈ Im T.
iii) Let α ∈ K and w ∈ Im T. Then there exists v ∈ V such that T(v) = w,
as w ∈ Im T. Since V is a vector space over K, αv ∈ V. Moreover
T(αv) = αT(v) = αw. Hence αw ∈ Im T. From (i), (ii) and (iii) it
follows that Im T is a subspace of W.

Example: Let G: ℝ³ → ℝ² be defined by G(x, y, z) = (x + y, y − z).
a) Show that G is a linear transformation.
b) Find ker G and Im G.
Solution: a) Let (x, y, z), (u, v, w) ∈ ℝ³ and α ∈ ℝ.
Then G((x, y, z) + (u, v, w)) = G(x + u, y + v, z + w)
= (x + u + y + v, y + v − z − w)
= (x + y, y − z) + (u + v, v − w)
= G(x, y, z) + G(u, v, w)
and G(α(x, y, z)) = G(αx, αy, αz)
= (αx + αy, αy − αz)
= α(x + y, y − z)
= αG(x, y, z)
Therefore G is a linear transformation.
b) ker G = {(p, q, r) ∈ ℝ³ | G(p, q, r) = (0, 0)}
= {(p, q, r) ∈ ℝ³ | (p + q, q − r) = (0, 0)}
= {(p, q, r) ∈ ℝ³ | p + q = 0 and q − r = 0}
= {(p, q, r) ∈ ℝ³ | −p = q = r}
= {(p, −p, −p) | p ∈ ℝ} = {p(1, −1, −1) | p ∈ ℝ}
So ker G is the subspace of ℝ³ generated by (1, −1, −1).
Im G = {(d, e) ∈ ℝ² | G(x, y, z) = (d, e) for some (x, y, z) ∈ ℝ³}
= {(d, e) ∈ ℝ² | (x + y, y − z) = (d, e) for some (x, y, z) ∈ ℝ³}
= {(d, e) ∈ ℝ² | x + y = d and y − z = e for some x, y, z ∈ ℝ}
= {(x + y, y − z) | x, y, z ∈ ℝ}
= {(x + y, 0) + (0, y − z) | x, y, z ∈ ℝ}
= {(x + y)(1, 0) + (y − z)(0, 1) | x, y, z ∈ ℝ}
= {t1(1, 0) + t2(0, 1) | t1 = x + y ∈ ℝ and t2 = y − z ∈ ℝ}
Thus Im G is the subspace of ℝ² generated by (1, 0) and (0, 1).
That is, Im G = ℝ² (why?). Observe that dim(ker G) = 1 and dim(Im G) = 2.
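Both descriptions can be spot-checked numerically; this minimal sketch (sample values are arbitrary) verifies that multiples of (1, −1, −1) lie in ker G and that any (d, e) is attained:

```python
# G(x, y, z) = (x + y, y - z); ker G is spanned by (1, -1, -1), Im G = R^2.
def G(x, y, z):
    return (x + y, y - z)

# Every multiple of (1, -1, -1) lies in ker G ...
for p in (-2.0, 0.0, 1.5):
    assert G(p, -p, -p) == (0.0, 0.0)

# ... and Im G is all of R^2: (d, e) = G(d, 0, -e) for any d, e.
d, e = 4.0, -7.0
assert G(d, 0.0, -e) == (d, e)
```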

Theorem: Let T: V → W be a linear transformation.

Then T is one-to-one if and only if ker T = {Ov}.
Proof: Suppose T is one-to-one. We need to show that ker T contains only the zero
vector. Let x ∈ ker T. Then T(x) = Ow.
But T(Ov) = Ow. So we have T(x) = T(Ov), which implies x = Ov as T is
one-to-one. Therefore ker T = {Ov}.
Conversely, suppose ker T = {Ov}. We wish to show that T is one-to-one.
Now let T(x1) = T(x2); x1, x2 ∈ V. Then T(x1) − T(x2) = Ow, or
T(x1) + T(−x2) = Ow. Thus we have T(x1 + (−x2)) = Ow, which implies
x1 + (−x2) ∈ ker T. Since ker T contains only the zero vector by assumption,
x1 + (−x2) = Ov and hence x1 = x2. Consequently T is one-to-one.

Example: Let T: ℝ² → ℝ³ be a linear transformation given by
T(a, b) = (a + b, a − b, b).
i) Find ker T and Im T.
ii) Is T one-to-one? Why?
Solution: (i) ker T = {(x, y) ∈ ℝ² | T(x, y) = (0, 0, 0)}
= {(x, y) ∈ ℝ² | (x + y, x − y, y) = (0, 0, 0)}
= {(x, y) ∈ ℝ² | x + y = 0, x − y = 0 and y = 0}
= {(x, y) ∈ ℝ² | x = 0 and y = 0} = {(0, 0)}
From the definition of Im T, it follows that
(u, v, w) ∈ Im T ⇔ (u, v, w) = T(a, b) for some (a, b) ∈ ℝ²
⇔ (u, v, w) = (a + b, a − b, b) for some a, b ∈ ℝ
⇔ u = a + b, v = a − b and w = b for some a, b ∈ ℝ
⇔ u − v = 2b and w = b for some a, b ∈ ℝ
⇔ u − v = 2w where u, v, w ∈ ℝ
⇔ u = v + 2w where u, v, w ∈ ℝ
Thus Im T = {(u, v, w) | u, v, w ∈ ℝ and u = v + 2w}
= {(v + 2w, v, w) | v, w ∈ ℝ}
= {(v, v, 0) + (2w, 0, w) | v, w ∈ ℝ}
= {v(1, 1, 0) + w(2, 0, 1) | v, w ∈ ℝ}
Therefore Im T is the subspace of ℝ³ generated by (1, 1, 0) and (2, 0, 1).
What is dim(ker T)? What is dim(Im T)?
ii) Since ker T = {(0, 0)} (it contains only the zero vector of ℝ²), T is
one-to-one by the theorem above. But it is not onto. Why?
Notice that whenever ker T ≠ {Ov}, we can conclude that T is not one-to-one.
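A brute-force check over a small integer grid (the grid bounds are an arbitrary choice of ours) illustrates both the trivial kernel and the resulting injectivity:

```python
# T(a, b) = (a + b, a - b, b) has trivial kernel, hence is one-to-one.
def T(a, b):
    return (a + b, a - b, b)

# Only (0, 0) maps to the zero vector on this grid ...
kernel = [(a, b) for a in range(-3, 4) for b in range(-3, 4)
          if T(a, b) == (0, 0, 0)]
assert kernel == [(0, 0)]

# ... and distinct inputs give distinct outputs (injectivity on the grid).
images = [T(a, b) for a in range(-3, 4) for b in range(-3, 4)]
assert len(images) == len(set(images))
```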

Theorem: Let T: V → W be a linear transformation and v1, v2, v3, …, vn be linearly independent
vectors in V. If ker T = {Ov} then T(v1), T(v2), T(v3), …, T(vn) are linearly
independent vectors in W.
Proof: Let x1, x2, x3, …, xn be scalars such that
x1T(v1) + x2T(v2) + x3T(v3) + … + xnT(vn) = Ow
Then by linearity of T we get T(x1v1 + x2v2 + x3v3 + … + xnvn) = Ow.
Hence x1v1 + x2v2 + x3v3 + … + xnvn ∈ ker T. But ker T = {Ov}. Thus
x1v1 + x2v2 + x3v3 + … + xnvn = Ov. Since v1, v2, v3, …, vn are linearly independent
vectors in V, it follows that x1 = x2 = x3 = … = xn = 0.
Definition: Let L be a linear transformation from a vector space V into W over the field K.
(a) The dimension of the kernel (the null space) of L is called the nullity of L.
(b) The dimension of the image (the range) of L is called the rank of L.

Theorem: (Rank-nullity theorem) Let V and W be vector spaces over the same field K. Let L:
V → W be a linear transformation. If V is a finite dimensional vector space then
dim V = nullity of L + rank of L,
i.e. dim V = dim(ker L) + dim(Im L)
Proof: Since V is a finite dimensional vector space, it is obvious that ker L and
Im L = L(V) are finite dimensional. Moreover dim(ker L), dim(Im L) ≤ dim V.
(Verify.)
Let {u1, u2, …, up} and {w1, w2, …, wq} be bases of ker L and Im L respectively
(p, q ≤ dim V). Then there exist v1, v2, …, vq ∈ V such that L(vi) = wi for i = 1, 2,
3, …, q, as wi ∈ Im L.
Claim: β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
Now we show that
i) β generates V
ii) β is linearly independent.
i) Let v ∈ V. Then L(v) ∈ Im L and hence there exist unique scalars b1, b2, …, bq in K such
that L(v) = b1w1 + b2w2 + … + bqwq.
So L(v) = b1L(v1) + b2L(v2) + … + bqL(vq) as L(vi) = wi for i = 1, 2, …, q
= L(b1v1 + b2v2 + … + bqvq)
Thus L(v − b1v1 − b2v2 − … − bqvq) = Ow, which implies v − b1v1 − b2v2 − … − bqvq ∈ ker L.
Hence v − b1v1 − b2v2 − … − bqvq = a1u1 + a2u2 + … + apup for some unique
scalars a1, a2, …, ap in K. Therefore v = a1u1 + a2u2 + … + apup + b1v1 + b2v2 + … + bqvq.
From this we conclude that β generates V.
ii) Suppose α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov ……… (1)
where α1, α2, …, αp, r1, r2, …, rq ∈ K.
Then L(α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq) = L(Ov) = Ow
So we have α1L(u1) + α2L(u2) + … + αpL(up) + r1L(v1) + r2L(v2) + … + rqL(vq) = Ow
⇒ r1w1 + r2w2 + … + rqwq = Ow, since L(uj) = Ow and
L(vi) = wi for j = 1, 2, …, p and i = 1, 2, …, q
⇒ r1 = r2 = … = rq = 0, since {w1, w2, …, wq} is a basis of Im L
⇒ α1u1 + α2u2 + … + αpup = Ov (replace r1, r2, …, rq by 0 in (1))
⇒ α1 = α2 = … = αp = 0, since {u1, u2, …, up} is a basis of ker L.
Thus we have shown that
α1u1 + α2u2 + … + αpup + r1v1 + r2v2 + … + rqvq = Ov
⇒ α1 = α2 = … = αp = r1 = r2 = … = rq = 0.
That is, β is a linearly independent set in V. From (i) and (ii) it follows that
β = {u1, u2, …, up, v1, v2, …, vq} is a basis of V.
Hence dim V = p + q = dim(ker L) + dim(Im L).
Therefore dim V = nullity of L + rank of L.

Example: Find a linear transformation T: ℝ³ → ℝ⁴ whose range (image) is spanned
by (1, 2, 0, −4) and (2, 0, −1, −3).
Solution: We are given that Im T is spanned by {(1, 2, 0, −4), (2, 0, −1, −3)}.
Let us include the vector (0, 0, 0, 0) in this set, which does not affect the given
spanning property. Now, using the standard basis of ℝ³ and the theorem of section 5.1
on defining a linear transformation by its values on a basis,
one can get a linear transformation T: ℝ³ → ℝ⁴ such that
T(1, 0, 0) = (1, 2, 0, −4), T(0, 1, 0) = (2, 0, −1, −3) and T(0, 0, 1) = (0, 0, 0, 0).
If (x, y, z) ∈ ℝ³, T(x, y, z) = T(x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1))
= x T(1, 0, 0) + y T(0, 1, 0) + z T(0, 0, 1)
= (x, 2x, 0, −4x) + (2y, 0, −y, −3y) + z(0, 0, 0, 0)
Therefore T(x, y, z) = (x + 2y, 2x, −y, −4x − 3y) is the required linear
transformation.
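The construction can be verified directly (a minimal sketch; the sample checks are ours):

```python
# T(x, y, z) = (x + 2y, 2x, -y, -4x - 3y): image spanned by the two
# given vectors, kernel spanned by (0, 0, 1).
def T(x, y, z):
    return (x + 2 * y, 2 * x, -y, -4 * x - 3 * y)

assert T(1, 0, 0) == (1, 2, 0, -4)
assert T(0, 1, 0) == (2, 0, -1, -3)
assert T(0, 0, 1) == (0, 0, 0, 0)   # (0, 0, 1) lies in ker T
# so nullity(T) = 1 and rank(T) = 2, consistent with dim R^3 = 1 + 2.
```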

5.3. Algebra of Linear transformations

Theorem: Let V and W be vector spaces over the field K. Let T and S be linear transformations
from V into W and α ∈ K. Then
i) the function T + S is a linear transformation from V into W;
ii) the function αT is a linear transformation from V into W;
iii) L(V, W), the set of all linear transformations from V into W with
respect to the operations of vector addition and scalar multiplication
defined by (T + S)(v) = T(v) + S(v) and (αT)(v) = αT(v) for all v ∈ V,
is a vector space over K.

Proof: i) We need to show that T + S is a linear transformation from V into W for
any linear transformations T and S from V into W. For this let
v1, v2 ∈ V and α ∈ K.
Then (T + S)(v1 + v2) = T(v1 + v2) + S(v1 + v2)
= T(v1) + T(v2) + S(v1) + S(v2)
= T(v1) + S(v1) + T(v2) + S(v2)
= (T + S)(v1) + (T + S)(v2)
Moreover (T + S)(αv1) = T(αv1) + S(αv1)
= αT(v1) + αS(v1)
= α(T(v1) + S(v1))
= α(T + S)(v1)
Therefore T + S is a linear transformation.
ii) Let T be a linear transformation from V into W and λ ∈ K.
Then for any v1, v2 ∈ V and α ∈ K
we have (λT)(v1 + v2) = λ(T(v1 + v2))
= λ(T(v1) + T(v2))
= λ(T(v1)) + λ(T(v2))
= (λT)(v1) + (λT)(v2)
and (λT)(αv1) = λ(T(αv1))
= λ(αT(v1))
= (λα)(T(v1))
= (αλ)(T(v1))
= α(λ(T(v1)))
= α(λT)(v1)
Therefore λT is a linear transformation.
iii) Here we need to verify that all the conditions in the definition of a vector
space are satisfied. We leave this to the students.

Theorem: Let V, W and Z be vector spaces over the field K. Let T and S be linear
transformations from V into W and from W into Z respectively. Then the
composite function S∘T defined by (S∘T)(v) = S(T(v)) for all v ∈ V is a linear
transformation.
Proof: (exercise).

Theorem: Let V and W be vector spaces over the field F and let T be a linear transformation
from V into W. If T is invertible then the inverse function T⁻¹ is a linear
transformation.
Proof: Suppose T: V → W is invertible. Then there exists a unique function
T⁻¹: W → V such that T⁻¹(w) = v ⇔ T(v) = w for all w ∈ W. Moreover
T is one-to-one and onto. We need to show that T⁻¹ is linear. Let w1, w2 ∈ W and α ∈ F.
Then there exist unique vectors v1, v2 ∈ V such that T(v1) = w1 and T(v2) = w2,
as T is one-to-one and onto. So T⁻¹(w1) = v1 and T⁻¹(w2) = v2.
T(v1 + v2) = T(v1) + T(v2) because T is linear
= w1 + w2
Thus T⁻¹(w1 + w2) = v1 + v2 = T⁻¹(w1) + T⁻¹(w2) for any w1, w2 ∈ W.
Since T(αv1) = αw1, T⁻¹(αw1) = αv1 = αT⁻¹(w1). Therefore T⁻¹ is a linear
transformation.

Example: Let T: ℝ³ → ℝ³ be a linear transformation given by T(x, y, z) = (x, 3y, z). Is T
invertible? If so, find a rule for T⁻¹ like the one which defines T.

Solution: We know that if ker T = {Ov} then T is one-to-one.
ker T = {(a, b, c) ∈ ℝ³ | T(a, b, c) = (0, 0, 0)}
= {(a, b, c) ∈ ℝ³ | (a, 3b, c) = (0, 0, 0)}
= {(a, b, c) ∈ ℝ³ | a = 0, 3b = 0 and c = 0}
= {(0, 0, 0)}.
Therefore T is one-to-one.
Moreover, from the rank-nullity theorem, dim(ℝ³) = dim(ker T) + dim(Im T), so
3 = 0 + dim(Im T) and dim(Im T) = 3. Since Im T is a subspace of ℝ³
and dim(Im T) = 3, we have Im T = ℝ³. Hence T is onto, as Im T = ℝ³.
Therefore T is invertible, as it is one-to-one and onto.
To find T⁻¹, let T(x, y, z) = (u, v, w).
Then (x, 3y, z) = (u, v, w). So x = u, y = v/3, z = w.
But T(x, y, z) = (u, v, w) ⇔ T⁻¹(u, v, w) = (x, y, z).
Thus T⁻¹(u, v, w) = (u, v/3, w).
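The inverse rule can be checked by composing the two maps (a small sketch; the sample points are ours, with second coordinates chosen so the division by 3 is exact in floating point):

```python
# T(x, y, z) = (x, 3y, z) and its inverse T_inv(u, v, w) = (u, v/3, w).
def T(x, y, z):
    return (x, 3 * y, z)

def T_inv(u, v, w):
    return (u, v / 3, w)

# Both compositions give the identity on sample points.
for p in [(1.0, 6.0, 3.0), (-4.0, 0.0, 2.5)]:
    assert T_inv(*T(*p)) == p
    assert T(*T_inv(*p)) == p
```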

5.4. Matrix representation of a linear transformation

Linear transformation associated with a given matrix
Definition: For any matrix A = (aij)m×n over a field K, the mapping TA: Kⁿ → Kᵐ defined
by TA(X) = AX for all X ∈ Kⁿ is called the linear transformation
associated with the matrix A.
Note: In this definition, each element X of Kⁿ should be considered as a column vector;
otherwise AX is not defined.

Example: a) Let
A = ( 1 0
      0 1 )
Then TA: ℝ² → ℝ² is the identity linear transformation.
b) Let
B = ( 1 0
      2 1
      3 0 )
Then TB(x, y) = B(x, y)ᵀ = (x, 2x + y, 3x)ᵀ.
Thus TB: ℝ² → ℝ³ given by TB(x, y) = (x, 2x + y, 3x) is the linear
transformation associated with the matrix B.

c) Let
A = ( 1 3
      0 1 ),
u = (0, 2)ᵀ, v = (2, 0)ᵀ, w = (2, 2)ᵀ, and let TA be the linear
transformation associated with the matrix A.
Find i) TA(u), TA(v) and TA(w)
ii) the image of the square with vertices (0, 0), (2, 0), (2, 2) and (0, 2).
Solution:
i) TA(u) = Au = ( 1 3 ) ( 0 ) = ( 6 )
                ( 0 1 ) ( 2 )   ( 2 )
Similarly, TA(v) = Av = (2, 0)ᵀ and TA(w) = Aw = (8, 2)ᵀ.
ii) TA deforms the given square as if the top of the square were
pushed to the right while the base is held fixed.
We call such a transformation a shear transformation.

Example: Let
A = ( 1 0 3
      −2 1 −3 ),
b = ( −4
      9 ),
and let TA be the linear transformation associated with the matrix A. Then find ker TA.
Solution: TA: ℝ³ → ℝ² is given by
TA(x1, x2, x3) = A(x1, x2, x3)ᵀ = (x1 + 3x3, −2x1 + x2 − 3x3)ᵀ.
Then ker TA = {X ∈ ℝ³ | TA(X) = 0} and
(a, b, c) ∈ ker TA ⇔ (a + 3c, −2a + b − 3c) = (0, 0)
⇔ a + 3c = 0 and −2a + b − 3c = 0
⇔ a = −3c and b = 2a + 3c = −3c
So (a, b, c) ∈ ker TA ⇔ a = b = −3c.
Thus
ker TA = {(−3c, −3c, c) | c ∈ ℝ} = {c(−3, −3, 1) | c ∈ ℝ}
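The kernel description can be spot-checked with a small matrix-vector helper (the function name `matvec` and the sample scalars are ours):

```python
# T_A(X) = AX for A = [[1, 0, 3], [-2, 1, -3]]; ker T_A is spanned by (-3, -3, 1).
A = [[1, 0, 3],
     [-2, 1, -3]]

def matvec(M, x):
    # rows of M dotted with the column vector x
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in M)

# Every multiple of (-3, -3, 1) is sent to the zero vector of R^2.
for c in (-1, 0, 2):
    assert matvec(A, (-3 * c, -3 * c, c)) == (0, 0)
```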

The matrix of a linear transformation

In the previous subsection we saw that associated to any given m×n matrix A there is a
linear transformation TA: ℝⁿ → ℝᵐ defined by TA(X) = AX. In this subsection we shall see
the reverse process, that is, how to find a matrix associated to a given linear transformation from a
finite dimensional vector space V into a finite dimensional vector space W over the same field K.
Before going further, let us recall the coordinates of an element v of a finite dimensional
vector space V with respect to a given ordered basis β of V. What do we mean by an ordered
basis? Suppose β = {b1, b2, …, bn} is an ordered basis for a finite dimensional vector space V
over a field K and v is in V. The coordinates of v relative to the basis β (or the β-coordinates
of v) are the scalars c1, c2, …, cn in K such that v = c1b1 + c2b2 + … + cnbn.
If c1, c2, …, cn are the β-coordinates of v, then the vector in Kⁿ,
[v]β = (c1, c2, …, cn)ᵀ,
is the coordinate vector of v relative to β, or the β-coordinate vector of v.

Example: (i) The coordinates of (x, y, z) relative to the standard basis
{(1, 0, 0), (0, 1, 0), (0, 0, 1)} of ℝ³ are simply x, y and z, since
(x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1).
ii) Consider the basis β = {b1, b2} of ℝ², where b1 = (1, 0)ᵀ and b2 = (1, 2)ᵀ. The
coordinate vector of X = (1, 6)ᵀ relative to β is [X]β = (−2, 3)ᵀ,
since X = (−2)b1 + 3b2.
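The β-coordinates in part (ii) can be verified by recombining the basis vectors (a minimal sketch):

```python
# Coordinates of X = (1, 6) relative to beta = {(1, 0), (1, 2)}.
b1, b2 = (1, 0), (1, 2)
c1, c2 = -2, 3          # the claimed beta-coordinates of X

# Recombine: X = c1*b1 + c2*b2, componentwise.
X = tuple(c1 * p + c2 * q for p, q in zip(b1, b2))
assert X == (1, 6)
```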

Let V be an n-dimensional vector space over the field K and let W be an m-dimensional vector
space over K. Let β = {v1, v2, …, vn} and β′ = {w1, w2, …, wm} be ordered bases of V and W
respectively. Suppose T: V → W is a linear transformation.

Then for any x in V, T(x) ∈ W, and T(x) can be expressed as a linear combination of elements
of the basis β′. So we have
T(v1) = a11w1 + a21w2 + … + am1wm
T(v2) = a12w1 + a22w2 + … + am2wm
⋮                                           (1)
T(vj) = a1jw1 + a2jw2 + … + amjwm
⋮
T(vn) = a1nw1 + a2nw2 + … + amnwm
where the aij's are scalars in K.

Writing the coordinates of T(v1), T(v2), …, T(vn) successively as the columns of a matrix, we get

M = ( a11 a12 … a1j … a1n
      a21 a22 … a2j … a2n
      ⋮   ⋮     ⋮     ⋮
      ai1 ai2 … aij … ain
      ⋮   ⋮     ⋮     ⋮
      am1 am2 … amj … amn )m×n        …. (2)

That is, M = [ [T(v1)]β′ [T(v2)]β′ … [T(vj)]β′ … [T(vn)]β′ ].
The matrix M in (2) is called a matrix representation of T, or the matrix for T relative to the
bases β and β′. If β and β′ are standard bases of V and W respectively, we call the matrix
M in (2) the standard matrix for the linear transformation T.

Example: Define T: ℝ³ → ℝ² by
T(x1, x2, x3) = (3x1 − 2x2 + x3, −x1 + x2 + 5x3)
a) Find the standard matrix for T.
b) Find the matrix of T relative to the ordered bases
β = {(1, 0, 1), (0, 1, 1), (0, 0, 1)} and β′ = {(1, 0), (1, 1)} of ℝ³ and ℝ²
respectively.
Solution: a) Here we use the standard bases of ℝ³ and ℝ².
T(1, 0, 0) = (3, −1) = 3(1, 0) + −1(0, 1)
T(0, 1, 0) = (−2, 1) = −2(1, 0) + 1(0, 1)
T(0, 0, 1) = (1, 5) = 1(1, 0) + 5(0, 1)
Writing the coordinates of T(1, 0, 0), T(0, 1, 0), T(0, 0, 1) as the first, second and
third columns of a matrix, we get the standard matrix M for T:
M = ( 3 −2 1
      −1 1 5 )
b) T(1, 0, 1) = (4, 4) = 0(1, 0) + 4(1, 1)
T(0, 1, 1) = (−1, 6) = −7(1, 0) + 6(1, 1)
T(0, 0, 1) = (1, 5) = −4(1, 0) + 5(1, 1)
The matrix of T relative to β and β′ is
M = ( 0 −7 −4
      4 6 5 )
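For part (a), multiplying the standard matrix by a coordinate vector should reproduce T; this minimal sketch checks that (`matvec` is our helper, and the sample points are arbitrary):

```python
# The standard matrix M recovers T: M [x] = [T(x)] in standard coordinates.
def T(x1, x2, x3):
    return (3 * x1 - 2 * x2 + x3, -x1 + x2 + 5 * x3)

M = [[3, -2, 1],
     [-1, 1, 5]]

def matvec(M, x):
    # rows of M dotted with the column vector x
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in M)

for x in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 2, 3)]:
    assert matvec(M, x) == T(*x)
```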
Our next task is to examine how the matrix M in (2) determines the linear transformation T.
If x = x1v1 + x2v2 + … + xnvn is a vector in V, then the coordinate vector of x relative to
β is [x]β = (x1, x2, …, xn)ᵀ,
and T(x) = T(x1v1 + x2v2 + … + xnvn) = x1T(v1) + x2T(v2) + … + xnT(vn) ….. (3)
Using the basis β′ in W, we can rewrite (3) in terms of coordinate vectors relative to β′ as
[T(x)]β′ = x1[T(v1)]β′ + x2[T(v2)]β′ + … + xn[T(vn)]β′ … (4)
Further, the vector equation (4) can be written as the matrix equation
[T(x)]β′ = M [x]β …. (5)
Thus if [x]β is the coordinate vector of x relative to β, then equation (5) shows that
M[x]β is the coordinate vector of the vector T(x) relative to β′.
Note: In the case when W is the same as V and the basis β′ is the same as β, the matrix M in (2)
is called the matrix for T relative to β and is denoted by [T]β.
Activity 5.4.5: Using equation (3), verify equations (4) and (5).

Example: Let T: ℝ³ → ℝ² be the linear transformation defined by
T(x, y, z) = (3x + 2y − 4z, x − 5y + 3z). Find the matrix of T relative to the bases
B1 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} of ℝ³ and B2 = {(1, 3), (2, 5)} of ℝ².
Solution: T(1, 1, 1) = (1, −1) = −7(1, 3) + 4(2, 5)
T(1, 1, 0) = (5, −4) = −33(1, 3) + 19(2, 5)
T(1, 0, 0) = (3, 1) = −13(1, 3) + 8(2, 5)
The matrix M of T relative to the bases B1 and B2 is:
M = ( −7 −33 −13
      4 19 8 )

Example: Let β = {b1, b2, b3} be a basis for a vector space V over the set of real numbers. Find
T(3b1 − 4b2), where T is a linear transformation from V into V whose
matrix relative to β is
[T]β = ( 0 −6 1
         0 5 −1
         1 −2 7 )
Solution: Let x = 3b1 − 4b2. Then the coordinate vector of x relative to β is
[x]β = (3, −4, 0)ᵀ, and the
coordinate vector of T(x) relative to β is
[T(x)]β = [T]β [x]β = ( 0 −6 1 ) ( 3 )    ( 24 )
                      ( 0 5 −1 ) ( −4 ) = ( −20 )
                      ( 1 −2 7 ) ( 0 )    ( 11 )
Hence T(x) = 24b1 − 20b2 + 11b3.
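The coordinate computation can be replayed in Python (a minimal sketch of the matrix-vector product; the variable names are ours):

```python
# [T(x)]_beta = [T]_beta [x]_beta for x = 3b1 - 4b2.
T_beta = [[0, -6, 1],
          [0, 5, -1],
          [1, -2, 7]]
x_beta = (3, -4, 0)

# Row-by-row dot products give the beta-coordinates of T(x).
coords = tuple(sum(a * xi for a, xi in zip(row, x_beta)) for row in T_beta)
assert coords == (24, -20, 11)   # so T(x) = 24b1 - 20b2 + 11b3
```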

5.5. Eigenvalues and Eigenvectors of a linear Transformation

Definition:
Let T: V → V be a linear operator on a vector space V over a field K. An eigenvalue of T is a
scalar λ in K such that there is a non-zero vector v in V with T(v) = λv.
If λ is an eigenvalue of T, then
a) any vector v in V such that T(v) = λv is called an eigenvector of T associated with the
eigenvalue λ;
b) the collection of all vectors v of V with T(v) = λv is called the eigenspace associated with λ.
Note:
1. One of the meanings of the word "eigen" in German is "proper". Thus eigenvalues
are also called proper values, characteristic values or latent roots.
2. If v and w are eigenvectors associated with the eigenvalue λ, then:
i) v + w is also an eigenvector with eigenvalue λ, because
T(v + w) = T(v) + T(w) = λv + λw = λ(v + w);
ii) each scalar multiple kv, k ≠ 0, is also an eigenvector with eigenvalue λ,
because T(kv) = kT(v) = k(λv) = λ(kv).
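These two closure properties can be illustrated on a concrete operator (a hypothetical diagonal operator of our own choosing, not one from the text):

```python
# A hypothetical operator T(x, y) = (2x, 3y) on R^2: e1 and e2 are
# eigenvectors with eigenvalues 2 and 3 respectively.
def T(x, y):
    return (2 * x, 3 * y)

def is_eigenvector(v, lam):
    # nonzero v with T(v) = lam * v
    return v != (0, 0) and T(*v) == tuple(lam * a for a in v)

assert is_eigenvector((1, 0), 2)
assert is_eigenvector((0, 1), 3)

# Sums and nonzero scalar multiples of eigenvectors for the same
# eigenvalue are again eigenvectors (notes (i) and (ii) above).
assert is_eigenvector((5, 0), 2)      # 5 * (1, 0)
assert is_eigenvector((1 + 4, 0), 2)  # (1, 0) + (4, 0)
```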

5.6. Eigenspace of a linear transformation


Let T: V → V be a linear operator over a field K. Let λ ∈ K. Let Vλ = the set of all eigenvectors
of T with eigenvalue λ.
Claim: Vλ is a subspace of V.
Proof:
i) T(0) = 0 = λ·0, showing that 0 ∈ Vλ.
ii) v1, v2 ∈ Vλ ⇒ T(v1 + v2) = T(v1) + T(v2)
= λv1 + λv2
= λ(v1 + v2)
⇒ v1 + v2 ∈ Vλ
iii) v ∈ Vλ, α ∈ K ⇒ T(αv) = αT(v) = α(λv) = λ(αv)
⇒ αv ∈ Vλ
Thus we have proved that the eigenspace associated with λ is a subspace of V.

