MTL101 Lecture 11 and 12
(Sum & direct sum of subspaces, their dimensions, linear transformations, rank & nullity)
(39) Suppose W1 , W2 are subspaces of a vector space V over F. Then define
W1 + W2 := {w1 + w2 : w1 ∈ W1 , w2 ∈ W2 }.
This is a subspace of V and it is called the sum of W1 and W2. Students must verify that W1 + W2
is a subspace of V (use the criterion for a subspace).
Examples:
(a) Let V = R2 , W1 = {(x, x) : x ∈ R} and W2 = {(x, −x) : x ∈ R}. Then W1 + W2 = R2 .
Indeed, (x, y) = ((x + y)/2, (x + y)/2) + ((x − y)/2, −(x − y)/2) (a symbolic check of this decomposition appears after example (b)).
(b) Next, let V = R4 , W1 = {(x, y, z, w) : x + y + z = 0, x + 2y − z = 0}, W2 = {(s, 2s, 3s, t) :
s, t ∈ R}. How to describe W1 + W2 (e.g., find a basis)?
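The decomposition in example (a) can be verified symbolically. A minimal sketch, assuming Python with sympy is available (the variable names are only illustrative):

```python
import sympy as sp

x, y = sp.symbols('x y')
w1 = sp.Matrix([(x + y) / 2, (x + y) / 2])    # of the form (t, t), so it lies in W1
w2 = sp.Matrix([(x - y) / 2, -(x - y) / 2])   # of the form (t, -t), so it lies in W2

# Their sum simplifies to (x, y), so every vector of R^2 is in W1 + W2.
diff = (w1 + w2 - sp.Matrix([x, y])).applyfunc(sp.simplify)
assert diff == sp.zeros(2, 1)
```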
The following theorem tells us the dimension of W1 + W2, and its proof suggests how to write down a basis for it.
Theorem: If W1 , W2 are subspaces of a vector space V , then
dim(W1 + W2 ) = dimW1 + dimW2 − dim(W1 ∩ W2 ).
Proof: Let S be a basis of W1 ∩ W2 (if W1 ∩ W2 is the zero space, then S = ∅). For each i = 1, 2,
extend S to a basis Bi of Wi . Let S = {u1 , u2 , . . . , ur }, B1 = {u1 , u2 , . . . , ur , v1 , v2 , . . . , vs } and
B2 = {u1 , u2 , . . . , ur , w1 , w2 , . . . , wt }. Then dim(W1 ∩ W2 ) = r, dimW1 = r + s, dimW2 = r + t.
Let B = {u1 , u2 , . . . , ur , v1 , v2 , . . . , vs , w1 , w2 , . . . , wt }. It is enough to show that B is a basis
of W1 + W2 because then we have dim(W1 + W2 ) = r + s + t = (r + s) + (r + t) − r =
dimW1 + dimW2 − dim(W1 ∩ W2 ).
To show that B is linearly independent let
a1 u1 + a2 u2 + · · · + ar ur + b1 v1 + · · · + bs vs + c1 w1 + · · · + ct wt = 0.
Then
a1 u1 + · · · + ar ur + b1 v1 + · · · + bs vs = −(c1 w1 + · · · + ct wt).
Now the LHS is in W1 and the RHS is in W2, so this element is in W1 ∩ W2. Thus −(c1 w1 + · · · + ct wt) = d1 u1 + · · · + dr ur for some d1, . . . , dr ∈ F, so that d1 u1 + · · · + dr ur + c1 w1 + · · · + ct wt = 0, which implies di = 0 and ck = 0 for each i and k (since B2 is linearly independent). Therefore, a1 u1 + · · · + ar ur + b1 v1 + · · · + bs vs = 0, which implies ai = 0 and bj = 0 for each i and each j (since B1 is linearly independent). Thus, B is linearly independent. Next we show that B spans W1 + W2. Let w ∈ W1 + W2, say w = x1 + x2 with x1 ∈ W1 and x2 ∈ W2. Then x1 = p1 u1 + · · · + pr ur + q1 v1 + · · · + qs vs and x2 = g1 u1 + · · · + gr ur + h1 w1 + · · · + ht wt for pi, qj, gi, hk ∈ F. Now w = (p1 + g1)u1 + · · · + (pr + gr)ur + q1 v1 + · · · + qs vs + h1 w1 + · · · + ht wt, which is in span(B). Hence B spans W1 + W2, and so B is a basis of W1 + W2.
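For example (b) in (39), the theorem can be checked concretely. Solving the defining equations of W1 (x + y + z = 0 and x + 2y − z = 0 give y = 2z and x = −3z) shows W1 = span{(−3, 2, 1, 0), (0, 0, 0, 1)}, while W2 = span{(1, 2, 3, 0), (0, 0, 0, 1)} directly from its definition. With these spanning sets, a minimal sketch, assuming Python with sympy is available, computes the relevant dimensions as matrix ranks:

```python
import sympy as sp

# Spanning vectors read off from the definitions of W1 and W2 in example 39(b).
B1 = sp.Matrix([[-3, 2, 1, 0],
                [ 0, 0, 0, 1]])          # rows span W1
B2 = sp.Matrix([[ 1, 2, 3, 0],
                [ 0, 0, 0, 1]])          # rows span W2

M = sp.Matrix.vstack(B1, B2)             # rows span W1 + W2
dim_W1, dim_W2, dim_sum = B1.rank(), B2.rank(), M.rank()
print(dim_W1, dim_W2, dim_sum)           # 2 2 3

# Nonzero rows of the row reduced echelon form give a basis of W1 + W2.
R, pivots = M.rref()
basis = [tuple(R.row(i)) for i in range(len(pivots))]
print(basis)
```

The theorem then gives dim(W1 ∩ W2) = 2 + 2 − 3 = 1, and the printed rows form a basis of W1 + W2 (compare with the exercise below).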
(40) The sum W1 + W2 is called direct if W1 ∩ W2 = {0}. In particular, a vector space V is said to
be the direct sum of two subspaces W1 and W2 if V = W1 + W2 and W1 ∩ W2 = {0}. When V
is a direct sum of W1 and W2 we write V = W1 ⊕ W2 .
Theorem: Suppose W1 and W2 are subspaces of a vector space V so that V = W1 + W2 . Then
V = W1 ⊕ W2 if and only if every vector in V can be written in a unique way as w1 + w2 where
w i ∈ Wi .
Proof: Since V = W1 + W2 , every vector in V is a sum of a vector in W1 and a vector in
W2 . Suppose that for every v ∈ V , there is only one pair (w1 , w2 ) with wi ∈ Wi such that
v = w1 + w2 . If W1 ∩ W2 is nonzero, pick a nonzero vector u ∈ W1 ∩ W2 . Then u = u + 0
with u ∈ W1, 0 ∈ W2 and u = 0 + u with 0 ∈ W1, u ∈ W2. Since u ≠ 0, these are two different decompositions of u, contradicting our uniqueness assumption. Hence W1 ∩ W2 = {0} and V = W1 ⊕ W2.
Conversely, suppose V = W1 ⊕ W2 . Then V = W1 + W2 and W1 ∩ W2 = {0}. If for v ∈ V we
have v = w1 + w2 = w1′ + w2′ for w1, w1′ ∈ W1 and w2, w2′ ∈ W2, then w1 − w1′ = w2′ − w2. The
LHS is in W1 and the RHS in W2; therefore, this vector is in W1 ∩ W2. Since by assumption
W1 ∩ W2 = {0}, we have w1 − w1′ = 0 and w2′ − w2 = 0, so that w1 = w1′ and w2 = w2′.
Examples:
(a) V = R2 , W1 = {(x, 2x) : x ∈ R}, W2 = {(x, 3x) : x ∈ R}. Then V = W1 ⊕ W2 .
Indeed, (x, y) = (3x − y, 6x − 2y) + (y − 2x, 3y − 6x) (Hint: find a, b ∈ F such that
(x, y) = (a, 2a) + (b, 3b); see the sketch after example (d)), and W1 ∩ W2 = {0} (Hint: if
(x, y) ∈ W1 ∩ W2, then (x, y) = (a, 2a) = (b, 3b) for some a, b ∈ F; comparing the first
components gives a = b, and comparing the second components then gives a = 0).
(b) Suppose n ≥ 2, V = Rn, W1 = {(a1, a2, . . . , an) ∈ Rn : a1 + a2 + · · · + an = 0} and
W2 = {(a1, a2, . . . , an) ∈ Rn : −a1 + a2 − · · · + (−1)n an = 0}. Show that V = W1 + W2.
Further show that when n = 2, V = W1 ⊕ W2 and when n > 2 the sum is not direct.
(c) V = Mn (R), W1 is the subspace of all the upper triangular matrices and W2 is the subspace
of all the lower triangular matrices over R (this sum is not direct).
(d) V = Mn (R), W1 is the subspace of all the symmetric n × n matrices over R and W2 is the
subspace of all the skew-symmetric n × n matrices over R (in this, the sum is direct).
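For example (a) above, the hint amounts to solving a small linear system for a and b. A minimal sketch of that computation, assuming Python with sympy is available:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
# (x, y) = (a, 2a) + (b, 3b) means a + b = x and 2a + 3b = y.
sol = sp.solve([sp.Eq(a + b, x), sp.Eq(2*a + 3*b, y)], [a, b])
print(sol[a], sol[b])   # a = 3x - y and b = y - 2x, matching the decomposition above
```

For example (d), the decomposition behind the direct sum is explicit: A = (A + A^T)/2 + (A − A^T)/2, with a symmetric first summand and a skew-symmetric second summand. A minimal numeric check, assuming numpy is available (the random 4 × 4 matrix is only an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # an arbitrary real 4 x 4 matrix

S = (A + A.T) / 2                    # symmetric part, lies in W1
K = (A - A.T) / 2                    # skew-symmetric part, lies in W2
assert np.allclose(S, S.T) and np.allclose(K, -K.T)
assert np.allclose(S + K, A)         # A = S + K, so A is in W1 + W2

# The sum is direct: a matrix that is both symmetric and skew-symmetric
# satisfies M = M.T = -M, hence 2M = 0 and M = 0.
```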
Exercise: Write down a basis of W1 + W2 in example 39(b).
(41) Suppose V and W are vector spaces over the same field F. A map T : V → W is called a
linear transformation if T (au + bv) = aT (u) + bT (v) for any a, b ∈ F and any u, v ∈ V .
Examples:
(a) Let V = W = R2, F = R. For any a, b, c, d ∈ R, T1 (x, y) = (ax + by, cx + dy) is a
linear transformation (verify); see also the sketch after this list of examples. But
T2 (x, y) = (x + y + 1, x − y) is not a linear
transformation (why?). If T : R2 → R2 is a linear transformation then T = T1 for
some choices of a, b, c, d ∈ R. Indeed, T (x, y) = T (xe1 + ye2 ) = xT (e1 ) + yT (e2 ). Since
T (e1 ), T (e2 ) ∈ R2 , we have T (e1 ) = (p, q) and T (e2 ) = (r, s) for some p, q, r, s ∈ R. Thus,
T (x, y) = x(p, q) + y(r, s) = (px + ry, qx + sy).
(b) For any vector spaces V, W over F, the map from V to W defined by v ↦ 0 for all v ∈ V is a
linear transformation; it is called the zero map (or zero transformation).
(c) IdV : V → V defined by v ↦ v for all v ∈ V is a linear transformation; it is called the
identity operator.
(d) For each 1 ≤ i ≤ n, pi : Rn → R defined by pi (a1 , a2 , . . . , an ) = ai is a linear transformation
and it is called the i-th projection.
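The computation in example (a) says that a linear map T : R2 → R2 is completely determined by T(e1) and T(e2); equivalently, T acts as multiplication by the 2 × 2 matrix whose columns are T(e1) and T(e2). A minimal numeric sketch, assuming numpy is available (the values p, q, r, s below are only illustrative):

```python
import numpy as np

p, q, r, s = 2.0, -1.0, 0.5, 3.0       # illustrative: T(e1) = (p, q), T(e2) = (r, s)
M = np.array([[p, r],
              [q, s]])                 # columns are T(e1) and T(e2)

def T(v):
    # T(x, y) = (p*x + r*y, q*x + s*y), i.e. multiplication by M
    return M @ v

x, y = 4.0, -7.0
assert np.allclose(T(np.array([x, y])),
                   np.array([p * x + r * y, q * x + s * y]))
```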
Remarks: If T : V → W is a linear transformation, then T(0) = 0, i.e., the image of the zero
vector of V is the zero vector of W. Indeed, T(0) = T(0 · 0) = 0 · T(0) = 0 (in the expression
0 · 0, the first zero is the scalar zero and the second is the zero vector in the domain space
V of T). The map T : R2 → R2 given by T(x, y) = (x + y + 1, x − y) is not linear because
T(0, 0) = (1, 0), which is not the zero vector of R2.
(42) Suppose T : V → W is a linear transformation. Then define
ker(T ) := {v ∈ V : T (v) = 0} and T (V ) := {T (v) : v ∈ V }.
Show that ker(T) < V and T(V) < W for any linear transformation T. The spaces ker(T)
and T(V) are called, respectively, the null space (or kernel) and the image space (or range space)
of T. The dimension of ker(T) is called the nullity of T and the dimension of T(V) is called the
rank of T.
Theorem: A linear transformation is injective if and only if its null space is the zero space.
Proof: Suppose T : V → W is a linear transformation. Suppose first that the null space is the
zero space. If T(u) = T(v) for u, v ∈ V, then T(u − v) = 0, so that u − v ∈ ker(T) = {0},
which implies u − v = 0, i.e., u = v. Conversely, assume that T is injective. If u ∈ ker(T), then
T(u) = 0 = T(0), which implies u = 0 (since T is injective).
Theorem: (Rank-nullity Theorem) Suppose T : V → W is a linear transformation. Then
rank(T) + Nullity(T) = dim(V).
Proof: Suppose {u1, u2, . . . , um} is a basis of the null space ker(T). Extend this basis to a
basis of V. Let {u1, u2, . . . , um, v1, v2, . . . , vr} be a basis of V, so that dim(V) = m + r. We
will show that B = {T(v1), T(v2), . . . , T(vr)} is a basis of the range space T(V) (which proves that
dim(T(V)) = r). By definition B ⊂ T(V). Suppose a1 T(v1) + · · · + ar T(vr) = 0. Then
T(a1 v1 + · · · + ar vr) = 0 (since T is linear). Hence a1 v1 + · · · + ar vr ∈ ker(T), so that
a1 v1 + · · · + ar vr = b1 u1 + · · · + bm um, or (−b1)u1 + · · · + (−bm)um + a1 v1 + · · · + ar vr = 0. Since
{u1, u2, . . . , um, v1, v2, . . . , vr} is linearly independent, the coefficients are all zero. In particular,
ai = 0 for each 1 ≤ i ≤ r. Thus, B is linearly independent. If w ∈ T(V), there exists v ∈ V such
that T(v) = w. Since {u1, u2, . . . , um, v1, v2, . . . , vr} is a basis of V, v = p1 u1 + · · · + pm um + q1 v1 + · · · + qr vr
where pi, qj ∈ F. Since T(ui) = 0 for each i, applying T to both sides gives
w = T(v) = q1 T(v1) + · · · + qr T(vr) ∈ span(B). Hence B spans T(V) and is a basis of T(V), so
rank(T) = r and rank(T) + Nullity(T) = r + m = dim(V).
Example: Let T : R3 → R3 be defined by T(x, y, z) = (x + y − z, x − y + z, y − z). The null space of
T is {(x, y, z) : x + y − z = 0, x − y + z = 0, y − z = 0}, which is the solution space of a
homogeneous system of linear equations. We know how to solve a system of homogeneous linear
equations. Show that ker(T) = {(x, y, z) : x = 0, y = z} = {(0, t, t) : t ∈ R}. Then {(0, 1, 1)} is a
basis of ker(T), so that Nullity(T) = 1. By the rank-nullity theorem, rank(T) = dim(R3) − 1 =
3 − 1 = 2.
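This kernel computation can also be checked mechanically. A minimal sketch, assuming Python with sympy is available, which builds the matrix of T with respect to the standard basis and asks for its null space:

```python
import sympy as sp

# Columns are T(e1) = (1, 1, 0), T(e2) = (1, -1, 1), T(e3) = (-1, 1, -1).
A = sp.Matrix([[1,  1, -1],
               [1, -1,  1],
               [0,  1, -1]])

null_basis = A.nullspace()
print(null_basis)          # one basis vector, (0, 1, 1), so ker(T) = span{(0, 1, 1)}
print(len(null_basis))     # 1 = Nullity(T)
```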
How to compute rank(T) directly by displaying a basis of T(V): first pick a basis of V,
for instance {e1, e2, e3}. Then T(V) is generated by Y = {T(e1), T(e2), T(e3)} =
{(1, 1, 0), (1, −1, 1), (−1, 1, −1)}. We have to pick a basis of T(V) using Y. Consider the matrix

    A = [ 1   1   0 ]
        [ 1  −1   1 ]
        [−1   1  −1 ]

which is obtained from Y by converting the elements of Y into rows. Then a basis of the row
space of A is a basis of the range space. To find a basis of the row space, find the row reduced
echelon form of A, which is

    [ 1   0   1/2 ]
    [ 0   1  −1/2 ]
    [ 0   0    0  ]

Hence {(1, 0, 1/2), (0, 1, −1/2)} is a basis of the range space. Thus rank(T) = 2, and we have
verified the rank-nullity theorem for the given linear transformation.
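The row reduction above can likewise be checked mechanically. A minimal sketch, again assuming Python with sympy is available:

```python
import sympy as sp

Y = sp.Matrix([[ 1,  1,  0],     # T(e1)
               [ 1, -1,  1],     # T(e2)
               [-1,  1, -1]])    # T(e3)

R, pivots = Y.rref()
print(R)             # rows (1, 0, 1/2), (0, 1, -1/2) and a zero row
print(Y.rank())      # 2 = rank(T); with Nullity(T) = 1 this gives 2 + 1 = 3 = dim(R^3)
```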
Applications of the rank-nullity theorem: Since for any linear transformation T we have
rank(T) ≥ 0 and Nullity(T) ≥ 0, it follows that rank(T) ≤ dim(V) and Nullity(T) ≤ dim(V). A linear
transformation T : V → W is injective if and only if Nullity(T) = 0, and T is surjective if and only if
rank(T) = dim(W). Using the rank-nullity theorem we have the following statements.
(a) There is no injective linear transformation from Rm to Rn if m > n.
(b) There is no surjective linear transformation from Rm to Rn if n > m.
(c) There is an isomorphism (a bijective linear transformation) from Rm to Rn if and only if
m = n.
Proof: (a) If T : Rm → Rn is injective, then Nullity(T) = 0, so rank(T) + 0 = dim(Rm) = m
(by the rank-nullity theorem). Since T(Rm) < Rn, rank(T) ≤ n. Therefore, m ≤ n. Equivalently,
if m > n, there is no injective linear transformation from Rm to Rn.
(b) If T : Rm → Rn is surjective (i.e., T(Rm) = Rn), then rank(T) = n. By the rank-nullity theorem,
n + Nullity(T) = m. So m − n ≥ 0, i.e., m ≥ n (since Nullity(T) ≥ 0).
(c) Suppose T : Rm → Rn is an isomorphism (a bijective linear transformation). Since T is
injective, by part (a), m ≤ n, and since T is surjective, by part (b), m ≥ n. Therefore, m = n.
Conversely, suppose m = n. Then the identity map is a bijective linear transformation from Rm to Rn = Rm, so it is an isomorphism.
Remark: This completes the syllabus for Minor Test I. Go through the questions with answers (QWA) posted on the course page; more QWA will appear soon.