MTH204 LECTURE NOTES: SELF-READING GUIDE
LINEAR ALGEBRA 1
DEPARTMENT OF MATHEMATICS
FACULTY OF NATURAL SCIENCES
UNIVERSITY OF JOS
NIGERIA
Contents
3 LINEAR MAPPING
3.1 Definition, Examples and Theorems
3.2 Image and Kernel
3.3 Rank and Nullity
3.4 Singular and Non-singular Mappings
3.5 Invertible Operators
3.6 Matrix Representation of a Linear Operator
Chapter 1
VECTOR SPACES
1.1 Definition, Examples and Theorems
Definition 1.1.1. (Vector Space)
Let K be a field and let V be a non-empty set with operations of vector addition u + v and scalar multiplication ku (for u, v ∈ V and k ∈ K). Then V is called a vector space over K if, for all u, v, w ∈ V and a, b ∈ K:
(i) (u + v) + w = u + (v + w);
(ii) there is a vector 0 ∈ V, called the zero vector, with u + 0 = u;
(iii) each u ∈ V has a negative −u ∈ V with u + (−u) = 0;
(iv) u + v = v + u;
(v) a(u + v) = au + av;
(vi) (a + b)u = au + bu;
(vii) (ab)u = a(bu);
(viii) 1u = u, where 1 is the unity of K.
The elements of V are called vectors while those of K are called scalars.
Theorem 1.1.2.
Let V be a vector space over a field K. Then:
(i) 0u = 0 for any u ∈ V;
(ii) k0 = 0 for any k ∈ K;
(iii) if ku = 0, where k ∈ K and u ∈ V, then k = 0 or u = 0;
(iv) (−k)u = k(−u) = −(ku) for any k ∈ K and u ∈ V.
Proof
(i) Since 0 + 0 = 0 in K,
0u = (0 + 0)u = 0u + 0u.
Adding −(0u) to both sides gives 0u = 0.
(ii) Similarly, k0 = k(0 + 0) = k0 + k0, and adding −(k0) to both sides gives k0 = 0.
(iii) Suppose that ku = 0 and k ≠ 0. Then there exists a scalar k⁻¹ such that k⁻¹k = 1; hence
u = 1u = (k⁻¹k)u = k⁻¹(ku) = k⁻¹0 = 0.
(iv) Using (i),
0 = 0u = (k + (−k))u = ku + (−k)u,
so (−k)u = −(ku). Similarly, k(−u) = −(ku).
Examples 1.1.3.
1. Let V be the set of all functions from a non-empty set X into a field K. For any functions f, g ∈ V and any scalar k ∈ K, let f + g and kf be the functions in V defined as follows:
(f + g)(x) = f(x) + g(x) ∀ x ∈ X
and
(kf)(x) = kf(x) ∀ x ∈ X.
Then V is a vector space over K.
2. Let K be an arbitrary field. The set of all n-tuples of elements of K with vector addition and scalar multiplication defined by:
(a1, a2, ..., an) + (b1, b2, ..., bn) = (a1 + b1, a2 + b2, ..., an + bn)
and
k(a1, a2, ..., an) = (ka1, ka2, ..., kan),
where ai, bi ∈ K, is a vector space over K. We denote this space by Kⁿ.
3. Let V be the set of all m × n matrices with entries from an arbitrary field K. Then V
is a vector space over K with respect to the operations of matrix addition and scalar
multiplication.
4. Let V be the set of all polynomials of the form
a0 + a1t + a2t² + ... + antⁿ
with coefficients ai from a field K. Then V is a vector space over K with respect to the usual operations of addition of polynomials and multiplication by a constant.
1.2 Subspace
Definition 1.2.1. (Subspace)
Let W be a subset of a vector space V over a field K. W is called a subspace of V if W is itself a vector space over K with respect to the operations of vector addition and scalar multiplication on V.
Examples 1.2.2.
1. Let V be any vector space. Then the set {0} consisting of the zero vector alone and also
the entire space V are subspaces of V . Note that the subspace V is a vector space itself.
2. Let V be the vector space R³. Then the set W consisting of those vectors whose third
component is zero is a subspace of V . That is,
W = {(a, b, 0) : a, b ∈ R}
is a subspace of V .
3. Let V be the space of all square n × n matrices. Then the set W consisting of those
matrices that are symmetric is a subspace of V .
4. Let V be the space of polynomials. Then the set W consisting of polynomials with degree
≤ n, for a fixed n, is a subspace of V .
5. Let V be the space of all functions from a non-empty set X into the real field R. Then
the set W consisting of all bounded functions in V is a subspace of V .
Theorem 1.2.3.
Let W be a subset of a vector space V over K. Then W is a subspace of V if and only if:
(i) W ≠ ∅ (W is non-empty);
(ii) u + v ∈ W for all u, v ∈ W
(iii) ku ∈ W for every k ∈ K and u ∈ W .
Corollary 1.2.4.
W is a subspace of V if and only if:
(i) 0 ∈ W;
(ii) u, v ∈ W =⇒ au + bv ∈ W ∀ a, b ∈ K.
Proof
Suppose that W satisfies (i) and (ii). Then, by (i) W is non-empty. Furthermore, if u, v ∈ W
then by (ii), u + v = 1u + 1v ∈ W, and if v ∈ W and k ∈ K then, by (ii), kv = kv + 0v ∈ W.
Thus by Theorem 1.2.3, W is a subspace of V .
Conversely, if W is a subspace of V then clearly (i) and (ii) hold in W .
Examples 1.2.5.
1. Let V = R³. Show that
W = {(a, b, c) : a + b + c = 0}
is a subspace of V.
Solution:
0 = (0, 0, 0) ∈ W since 0 + 0 + 0 = 0.
Suppose that u = (a, b, c) and v = (e, f, g) belong to W .
Then a + b + c = 0 and e + f + g = 0. Let k and p be any scalars. Then
ku + pv = k(a, b, c) + p(e, f, g)
= (ka, kb, kc) + (pe, pf, pg)
= (ka + pe, kb + pf, kc + pg)
and furthermore,
(ka + pe) + (kb + pf) + (kc + pg) = k(a + b + c) + p(e + f + g)
= k0 + p0
= 0.
Thus ku + pv ∈ W and so W is a subspace of V .
2. Let V = R³. Show that
W = {(a, b, c) : a² + b² + c² ≤ 1}
is not a subspace of V .
Solution:
v = (1, 0, 0) ∈ W and u = (0, 1, 0) ∈ W .
But,
v + u = (1, 0, 0) + (0, 1, 0)
= (1, 1, 0) ∉ W
since
1² + 1² + 0² = 2 > 1.
Hence W is not a subspace of V .
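These closure checks can also be illustrated numerically. Below is a minimal Python sketch (numpy assumed; the helper names in_plane and in_ball are ours, not part of the notes) that tests a linear combination for the plane W of Example 1 and reproduces the counterexample for the unit ball:

import numpy as np

# Membership tests for W = {(a, b, c) : a + b + c = 0} and for the unit ball.
def in_plane(v):
    return abs(v.sum()) < 1e-12

def in_ball(v):
    return v @ v <= 1.0

u = np.array([1.0, 2.0, -3.0])    # in W: 1 + 2 - 3 = 0
v = np.array([4.0, -1.0, -3.0])   # in W: 4 - 1 - 3 = 0
k, p = 2.5, -7.0                  # arbitrary scalars
print(in_plane(k * u + p * v))    # True: ku + pv stays in W

e1 = np.array([1.0, 0.0, 0.0])    # in the unit ball
e2 = np.array([0.0, 1.0, 0.0])    # in the unit ball
print(in_ball(e1 + e2))           # False: e1 + e2 escapes the ball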
Theorem 1.2.6.
Let U and W be subspaces of a vector space V . Then U ∩ W is also a subspace of V .
Proof
Clearly, 0 ∈ U and 0 ∈ W since U and W are subspaces. Hence 0 ∈ U ∩ W .
Now suppose that u, v ∈ U ∩ W then u, v ∈ U and u, v ∈ W . Since U and W are subspaces, we
have that: au + bv ∈ U and au + bv ∈ W for any scalars a, b ∈ K. Accordingly, au + bv ∈ U ∩ W
and U ∩ W is a subspace of V .
Corollary 1.2.7.
The intersection of any number of subspaces of a vector space V is a subspace of V .
Example
Write the vector v = (3, 9, −4, −2) as a linear combination of the vectors u1 = (1, −2, 0, 3), u2 = (2, 3, 0, −1) and u3 = (2, −1, 2, 1).
Solution
Set v = xu1 + yu2 + zu3. That is,
(3, 9, −4, −2) = x(1, −2, 0, 3) + y(2, 3, 0, −1) + z(2, −1, 2, 1)
= (x + 2y + 2z, −2x + 3y − z, 2z, 3x − y + z)
Thus,
x + 2y + 2z = 3
−2x + 3y − z = 9
2z = −4
3x − y + z = −2
Solving, we obtain x = 1, y = 3, z = −2.
Thus v = u1 + 3u2 − 2u3 and v is a linear combination of u1 , u2 and u3 .
Note that if v is not a linear combination of the given vectors, then the corresponding system of equations has no solution. For example, the vector v = (−1, 7, 2) is not a linear combination of the vectors
u1 = (1, 3, 5), u2 = (2, −1, 3), u3 = (−3, 2, −4).
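Both computations can be reproduced in Python; a short numpy sketch (the variable names are ours) solves the first system and confirms that the second is inconsistent:

import numpy as np

# Columns of A are u1, u2, u3; solving A x = v finds coefficients x, y, z
# (if any exist) with v = x*u1 + y*u2 + z*u3.
A = np.column_stack(([1, -2, 0, 3], [2, 3, 0, -1], [2, -1, 2, 1]))
v = np.array([3, 9, -4, -2])
x = np.linalg.lstsq(A, v, rcond=None)[0]
print(np.round(x, 10))        # [ 1.  3. -2.]  ->  v = u1 + 3u2 - 2u3
print(np.allclose(A @ x, v))  # True: the system is consistent

# Second system: w is NOT a linear combination of the columns of B.
B = np.column_stack(([1, 3, 5], [2, -1, 3], [-3, 2, -4]))
w = np.array([-1, 7, 2])
y = np.linalg.lstsq(B, w, rcond=None)[0]
print(np.allclose(B @ y, w))  # False: inconsistent, no such combination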
1.5 Sum of Subspaces
Let U and W be subspaces of a vector space V. The sum of U and W, written U + W, consists of all sums u + w where u ∈ U and w ∈ W:
U + W = {u + w : u ∈ U, w ∈ W}.
Theorem 1.5.1.
The sum U + W of subspaces U and W of V is also a subspace of V.
Proof
U + W is non-empty, since 0 = 0 + 0 ∈ U + W as 0 ∈ U and 0 ∈ W .
Furthermore, suppose that u + w and u1 + w1 belong to U + W with u, u1 ∈ U and w, w1 ∈ W .
Then
(u + w) + (u1 + w1 ) = (u + u1 ) + (w + w1 ) ∈ U + W
and for any scalar k,
k(u + w) = ku + kw ∈ U + W
Thus U + W is a subspace of V .
Theorem 1.5.2.
Let U and W be subspaces of a vector space V . Then;
(i) U and W are contained in U + W ;
(ii) U + W is the smallest subspace of V containing U and W, i.e. U + W is the linear span of U and W: U + W = L(U, W).
Proof
(i) Let u ∈ U. Since 0 ∈ W, we have u = u + 0 ∈ U + W; hence U ⊂ U + W. Similarly, W ⊂ U + W.
(ii) Since U + W is a subspace of V containing both U and W, it must also contain the linear span of U and W, i.e. L(U, W) ⊂ U + W.
On the other hand, if v ∈ U + W then v = u + w = 1u + 1w where u ∈ U and w ∈ W; hence v is a linear combination of elements of U and W and so belongs to L(U, W). Thus U + W ⊂ L(U, W). The two inclusion relations give the required result.
Theorem 1.5.3.
Suppose that U and W are subspaces of a vector space V and that {ui } generates U and {wj }
generates W , then {ui , wj }, i.e {ui } ∪ {wj }, generates U + W .
Proof
Let v ∈ U + W . Then v = u + w where u ∈ U and w ∈ W . Since {ui } generates U , then u is a
linear combination of the ui's. Also, since {wj} generates W, w is a linear combination of the wj's; i.e.
u = a1u1 + a2u2 + ... + anun, ai ∈ K, ui ∈ {ui}
and
w = b1w1 + b2w2 + ... + bmwm, bj ∈ K, wj ∈ {wj}.
Thus,
v = u + w = a1u1 + a2u2 + ... + anun + b1w1 + b2w2 + ... + bmwm
and so {ui , wj } generates U + W .
1.6 Direct Sum
Definition 1.6.1. (Direct Sum)
The vector space V is said to be the direct sum of its subspaces U and W, written V = U ⊕ W, if every vector v ∈ V can be written in one and only one way as v = u + w where u ∈ U and w ∈ W.
Examples 1.6.2.
1. In the vector space R³, let U be the xy plane and let W be the yz plane. Thus;
U = {(a, b, 0) : a, b ∈ R} and
W = {(0, b, c) : b, c ∈ R}.
Then R³ = U + W, but R³ is not the direct sum of U and W, since such sums are not unique; for example,
(3, 5, 7) = (3, 5, 0) + (0, 0, 7)
and also
(3, 5, 7) = (3, −4, 0) + (0, 9, 7).
2. In R³, let U be the xy plane and let W be the z axis. Thus;
U = {(a, b, 0) : a, b ∈ R} and
W = {(0, 0, c) : c ∈ R}.
Now any vector (a, b, c) ∈ R³ can be written as the sum of a vector in U and a vector in W in one and only one way:
(a, b, c) = (a, b, 0) + (0, 0, c).
Hence R³ = U ⊕ W.
Theorem 1.6.3.
The vector space V is the direct sum of its subspaces U and W if and only if:
(i) V = U + W and
(ii) U ∩ W = {0}
Proof
Suppose V = U ⊕ W. Then any v ∈ V can be uniquely written in the form v = u + w where u ∈ U and w ∈ W. Thus, in particular, V = U + W.
Now suppose that v ∈ U ∩ W. Then:
(a) v = v + 0 where v ∈ U, 0 ∈ W;
(b) v = 0 + v where 0 ∈ U, v ∈ W.
Since such a sum is unique, v = 0; hence U ∩ W = {0}.
Conversely, suppose that V = U + W and U ∩ W = {0}. Let v ∈ V have two representations v = u + w and v = u′ + w′, where u, u′ ∈ U and w, w′ ∈ W. Then u − u′ = w′ − w; but u − u′ ∈ U and w′ − w ∈ W, hence u − u′ = w′ − w ∈ U ∩ W = {0}. Thus u = u′ and w = w′, so the sum is unique and V = U ⊕ W.
Example 1.6.4
Let V be the vector space of n-square matrices over the real field R. Let U and W be the subspaces
of symmetric and skew-symmetric matrices respectively. Then V = U ⊕ W .
Solution:
We first show that V = U + W. Let A be an arbitrary n-square matrix. Note that
A = (1/2)(A + Aᵗ) + (1/2)(A − Aᵗ).
Recall that (1/2)(A + Aᵗ) is symmetric and (1/2)(A − Aᵗ) is skew-symmetric, i.e.
(1/2)(A + Aᵗ) ∈ U and (1/2)(A − Aᵗ) ∈ W,
so V = U + W. Next we show that U ∩ W = {0}. Suppose that A ∈ U ∩ W. Then A = Aᵗ and A = −Aᵗ, so A = −A and hence A = 0. Thus U ∩ W = {0}, and by Theorem 1.6.3, V = U ⊕ W.
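This decomposition is easy to verify numerically; a minimal numpy sketch (the random 4 × 4 matrix is just a test case of ours):

import numpy as np

# Every square matrix A splits as a symmetric part plus a skew part.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = (A + A.T) / 2                # symmetric part:       S.T ==  S
K = (A - A.T) / 2                # skew-symmetric part:  K.T == -K
print(np.allclose(S, S.T))       # True
print(np.allclose(K, -K.T))      # True
print(np.allclose(S + K, A))     # True: A = S + K
# U ∩ W = {0}: a matrix that is both symmetric and skew-symmetric equals
# its own negative, hence is the zero matrix.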
Chapter 2
BASIS AND DIMENSION
2.1 Linear Dependence
Definition 2.1.1. (Linear Dependence)
Let V be a vector space over a field K. The vectors v1, ..., vm ∈ V are said to be linearly dependent over K, or simply dependent, if there exist scalars a1, ..., am ∈ K, not all of them 0, such that
a1v1 + a2v2 + ... + amvm = 0.     (2.1)
Otherwise, the vectors are said to be linearly independent over K, or simply independent.
Note
1. The relation (2.1) will always hold if the ai's are all zero. If this relation holds only in this case, that is,
a1v1 + a2v2 + ... + amvm = 0 only if a1 = 0, ..., am = 0,
then the vectors are linearly independent.
2. If the relation (2.1) holds when one of the ai's is not 0, then the vectors are linearly dependent.
3. If 0 is one of the vectors v1, ..., vm, say vi = 0, then the vectors must be dependent, for
1·vi + 0·v1 + ... + 0·vm = 1·0 + 0 + ... + 0 = 0
and the coefficient of vi is not 0.
Example 2.1.2
The vectors v1 = (1, −1, 0), v2 = (1, 3, −1) and v3 = (5, 3, −2) are dependent since 3v1 + 2v2 − v3 = 0. Whereas the vectors v4 = (6, 2, 3, 4), v5 = (0, 5, −3, 1) and v6 = (0, 0, 7, −2) are independent since a4v4 + a5v5 + a6v6 = 0 implies that a4 = a5 = a6 = 0.
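In practice, dependence can be detected from the rank of the matrix whose rows are the given vectors: the vectors are independent exactly when the rank equals their number. A numpy sketch of this check:

import numpy as np

dep = np.array([[1, -1, 0], [1, 3, -1], [5, 3, -2]])
print(np.linalg.matrix_rank(dep))    # 2 < 3  ->  v1, v2, v3 are dependent

indep = np.array([[6, 2, 3, 4], [0, 5, -3, 1], [0, 0, 7, -2]])
print(np.linalg.matrix_rank(indep))  # 3 = 3  ->  v4, v5, v6 are independent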
Theorem 2.1.3.
The vectors v1 , ..., vm are linearly dependent if and only if one of them is a linear combination
of the others.
Proof
Suppose that vi is a linear combination of the others, i.e.
vi = a1v1 + ... + ai−1vi−1 + ai+1vi+1 + ... + amvm.
Then
a1v1 + ... + ai−1vi−1 − vi + ai+1vi+1 + ... + amvm = 0,
where the coefficient of vi is −1, which is not 0; hence the vectors are linearly dependent.
Conversely, suppose that the vectors are linearly dependent, say,
b1v1 + ... + bmvm = 0, where bj ≠ 0.
Then,
vj = −(1/bj)(b1v1 + ... + bj−1vj−1 + bj+1vj+1 + ... + bmvm)
and so vj is a linear combination of the other vectors.
Theorem 2.1.4.
The non-zero vectors v1, ..., vm are linearly dependent if and only if one of them, say vi, is a linear combination of the preceding vectors; i.e.
vi = a1v1 + ... + ai−1vi−1.
Proof
Suppose that
vi = a1 v1 + ... + ai−1 vi−1
then
a1v1 + ... + ai−1vi−1 − vi + 0·vi+1 + ... + 0·vm = 0
and the coefficient of vi is not 0. Hence the vi's are linearly dependent.
Conversely, suppose that the vi are linearly dependent. Then there exist scalars a1, ..., am, not all 0, such that a1v1 + ... + amvm = 0. Let k be the largest integer such that ak ≠ 0. Then
a1v1 + ... + akvk = 0.
Suppose that k = 1; then a1v1 = 0 with a1 ≠ 0, and so v1 = 0. But the vi are non-zero vectors; hence k > 1 and
vk = −(1/ak)(a1v1 + ... + ak−1vk−1)
That is, vk is a linear combination of the preceding vectors.
Note 2.1.5.
1. The set {v1 , ..., vm } is called a dependent set or an independent set according as the
vectors v1 , ..., vm are dependent or independent. We also define the empty set ∅ to be
independent.
2. If any two vectors of {v1, ..., vm} are equal, say v1 = v2, then the vectors are dependent, for
v1 − v2 + 0·v3 + ... + 0·vm = 0
and the coefficient of v1 is not 0.
3. Two vectors v1 and v2 are dependent if and only if one of them is a multiple of the other.
4. A set which contains a dependent subset is itself dependent. Hence any subset of an
independent set is independent.
5. If the set {v1 , ..., vm } is independent, then any arrangement of the vectors, such as
{vi1 , vi2 , ..., vim } is also independent.
6. In the real space R³, dependence has a geometric meaning:
• Any two vectors u and v are dependent if and only if they lie on the same line
through the origin.
• Any three vectors u, v and w are dependent if and only if they lie on the same plane
through the origin.
2.2 Basis and Dimension
Definition. A vector space V is said to be of finite dimension n, written
dim V = n,
if there exist n linearly independent vectors e1, e2, ..., en which span V. The sequence {e1, e2, ..., en} is then called a basis of V.
When a vector space is not of finite dimension, it is said to be of infinite dimension.
Examples 2.2.1.
1. Let K be any field. Consider the vector space Kⁿ which consists of n-tuples of elements of K. The vectors
e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1)
form a basis, called the usual basis, of Kⁿ. Thus dim Kⁿ = n.
2. Let V be the vector space of all 2 × 3 matrices over a field K. Then the matrices
[1 0 0]  [0 1 0]  [0 0 1]  [0 0 0]  [0 0 0]  [0 0 0]
[0 0 0], [0 0 0], [0 0 0], [1 0 0], [0 1 0], [0 0 1]
form a basis of V. Thus dim V = 6.
More generally, let V be the vector space of all m × n matrices over K and let Eij ∈ V
be the matrix with ij-entry 1 and 0 elsewhere. Then the set {Eij } is a basis, called the
usual basis of V. Consequently, dim V = mn.
3. Let W be the vector space of polynomials in t of degree ≤ n. The set {1, t, t², ..., tⁿ} is linearly independent and generates W. Thus, it is a basis of W and so dim W = n + 1.
We observe that the vector space V of all polynomials is not finite dimensional since no
finite set of polynomials generates V .
4. The vector space {0} is defined to have dimension zero. Note that {0} is generated by ∅
since ∅ is an independent set.
Theorem 2.2.3.
Suppose that {v1, ..., vm} generates a vector space V.
(i) If w ∈ V, then {w, v1, ..., vm} is linearly dependent and generates V.
(ii) If vi is a linear combination of the preceding vectors, then {v1, ..., vi−1, vi+1, ..., vm} generates V.
Proof
(i) If w ∈ V, then w is a linear combination of the vi's, since {vi} generates V. Accordingly, {w, v1, ..., vm} is linearly dependent. Clearly, w together with the vi's generates V, since the vi's by themselves generate V. That is, {w, v1, ..., vm} generates V.
(ii) Suppose that vi = k1 v1 + ... + ki−1 vi−1 . Let u ∈ V . Then since {vi } generates V , u is a
linear combination of the vi s, say: u = a1 v1 + ... + am vm .
Substituting for vi , we obtain
u = a1 v1 + ... + ai−1 vi−1 + ai (k1 v1 + ... + ki−1 vi−1 ) + ai+1 vi+1 + ... + am vm
= (a1 + aik1)v1 + ... + (ai−1 + aiki−1)vi−1 + ai+1vi+1 + ... + amvm.
Thus {v1, ..., vi−1, vi+1, ..., vm} generates V. In other words, we can delete vi from the generating set and still retain a generating set.
Theorem 2.2.4.
Suppose that {v1, ..., vn} generates a vector space V. If {w1, ..., wm} is linearly independent, then m ≤ n and V is generated by a set of the form {w1, ..., wm, vi1, ..., vin−m}. Thus, in particular, any n + 1 or more vectors in V are linearly dependent.
Theorem 2.2.5.
Let V be a finite dimensional vector space. Then any basis of V has the same number of vectors.
Proof
Let {e1, e2, ..., en} and {f1, f2, ..., fm} be bases of V. Since {ei} generates V and {fj} is independent, Theorem 2.2.4 gives m ≤ n. Similarly, since {fj} generates V and {ei} is independent, n ≤ m. Combining, we have m = n. Thus any two bases of V contain the same number of vectors, which proves the theorem.
Note 2.2.6.
Let V be of finite dimension n. Then:
(ii) Any linearly independent set is part of a basis and can be augmented to a basis.
(iv) Let {v1 , ..., vm } be independent but {v1 , ..., vm , w} be dependent. Then w is a linear
combination of the vi s.
Theorem 2.2.7.
Let W be a subspace of an n−dimensional vector space V . Then dim W ≤ n. In particular, if
dim W = n, then W = V .
Proof
Since V is of dimension n, any n + 1 or more vectors are linearly dependent. Furthermore, since a basis of W consists of linearly independent vectors, it cannot contain more than n elements. Hence dim W ≤ n.
In particular, if {w1 , ..., wn } is a basis of W , then since it is an independent set with n elements
it is also a basis of V . Thus W = V when dim W = n.
Theorem 2.2.8.
Let U and W be finite-dimensional subspaces of a vector space V. Then U + W has finite
dimension and:
dim (U + W ) = dim U + dim W − dim (U ∩ W ).
Example 2.2.9.
Suppose that U and W are the xy-plane and yz-plane respectively in R³. Then U = {(a, b, 0)} and W = {(0, b, c)}. Since R³ = U + W, we have dim(U + W) = 3. Also dim U = 2 and dim W = 2. By
Theorem 2.2.8,
3 = 2 + 2 − dim (U ∩ W ) =⇒ dim (U ∩ W ) = 1
Observe that this agrees with the fact that U ∩ W is the y-axis, i.e. U ∩ W = {(0, b, 0)}, and
so has dimension 1.
If V is the direct sum of U and W , i.e., V = U ⊕ W , then dim V = dim U + dim W ; since
U ∩ W = {0} and dim (U ∩ W ) = dim{0} = 0.
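The dimension count of Example 2.2.9 can be reproduced numerically by using matrix ranks as dimensions; a numpy sketch (the spanning sets below are one convenient choice):

import numpy as np

U = np.array([[1, 0, 0], [0, 1, 0]])               # rows span the xy-plane
W = np.array([[0, 1, 0], [0, 0, 1]])               # rows span the yz-plane
dim_U = np.linalg.matrix_rank(U)                   # 2
dim_W = np.linalg.matrix_rank(W)                   # 2
dim_sum = np.linalg.matrix_rank(np.vstack((U, W))) # 3 = dim(U + W)
print(dim_U + dim_W - dim_sum)                     # 1 = dim(U ∩ W), the y-axis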
2.3 Co-ordinates
Let V be an n-dimensional vector space over K with basis {e1, ..., en}. Then any vector u ∈ V can be written as
u = a1e1 + a2e2 + ... + anen, ai ∈ K.
We call the scalars a1, a2, ..., an the co-ordinates of u in {ei}, and we call the n-tuple (a1, ..., an) the co-ordinate vector of u relative to {ei}, denoted by [u]e; i.e.
[u]e = (a1, a2, ..., an).
Note
1. The representation
u = a1e1 + a2e2 + ... + anen, ai ∈ K,
is unique. That is, the n scalars a1, ..., an are completely determined by the vector u and the basis {ei}.
Examples 2.3.2.
1. Let V be the vector space of polynomials in t over R of degree ≤ 2, i.e.
V = {at² + bt + c : a, b, c ∈ R}.
The polynomials e1 = 1, e2 = t − 1 and e3 = (t − 1)² form a basis of V. Let u = 2t² − 5t + 6.
Find [u]e .
Solution:
Set u as a linear combination of the ei using the unknowns x, y and z:
u = xe1 + ye2 + ze3 = x·1 + y(t − 1) + z(t − 1)² = zt² + (y − 2z)t + (x − y + z).
We now set the coefficients of the same powers of t equal to each other:
x − y + z = 6
y − 2z = −5
z = 2
Solving, we obtain z = 2, y = −1 and x = 3. Thus [u]e = (3, −1, 2).
2. Consider the real space R³. Find the co-ordinate vector of u = (3, 1, −4) relative to the
basis f1 = (1, 1, 1), f2 = (0, 1, 1), f3 = (0, 0, 1).
Solution:
Set u as a linear combination of the fi using the unknowns x, y and z:
(3, 1, −4) = x(1, 1, 1) + y(0, 1, 1) + z(0, 0, 1) = (x, x + y, x + y + z).
Then set the corresponding components equal to each other to obtain the equivalent system of equations:
x = 3
x + y = 1
x + y + z = −4
Solving, we obtain x = 3, y = −2 and z = −5. Thus [u]f = (3, −2, −5).
Note: Relative to the usual basis e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1), the
co-ordinate vector of u is identical to u itself, i.e. [u]e = (3, 1, −4) = u.
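Finding co-ordinates amounts to solving a linear system whose coefficient columns are the basis vectors. A numpy sketch for the second example (the helper variable names are ours):

import numpy as np

# Columns of P are the basis vectors f1, f2, f3.
P = np.column_stack(([1, 1, 1], [0, 1, 1], [0, 0, 1]))
u = np.array([3, 1, -4])
x = np.linalg.solve(P, u)   # co-ordinates of u relative to {f1, f2, f3}
print(x)                    # [ 3. -2. -5.]  ->  [u]_f = (3, -2, -5)
print(P @ x)                # [ 3.  1. -4.]  reconstructs u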
Chapter 3
LINEAR MAPPING
3.1 Definition, Examples and Theorems
Definition 3.1.1. (Linear Mapping)
Let V and U be vector spaces over the same field K. A mapping F : V → U is called a linear mapping if:
(i) F(v + w) = F(v) + F(w) for any v, w ∈ V;
(ii) F(kv) = kF(v) for any k ∈ K and any v ∈ V.
Note:
(a) F : V → U is linear if it preserves the two basic operations of a vector space, those of vector addition and scalar multiplication.
(b) Substituting k = 0 in (ii), we obtain F (0) = 0. That is, every linear mapping takes the
zero vector into the zero vector.
(c) For any scalars a, b ∈ K and any vectors v, w ∈ V, we obtain (by applying both conditions of linearity)
F(av + bw) = F(av) + F(bw) = aF(v) + bF(w).
The condition F(av + bw) = aF(v) + bF(w) completely characterizes linear mappings and is sometimes used as their definition.
More generally, for any scalars ai ∈ K and any vectors vi ∈ V, we obtain the basic property of linear mappings:
F(a1v1 + a2v2 + ... + anvn) = a1F(v1) + a2F(v2) + ... + anF(vn).
Examples 3.1.2.
1 Let V be the vector space of polynomials in the variable t over the real field R. Then the differential mapping D : V → V and the integral mapping S : V → R are linear. For if we let D = d/dt and u, v ∈ V, k ∈ K, we have
D(u + v) = d(u + v)/dt = du/dt + dv/dt = D(u) + D(v)
D(kv) = d(kv)/dt = k dv/dt = kD(v)
Therefore, D is linear.
Also, if we let S(u) = ∫₀¹ u(t) dt (so that S maps V into R), then
S(u + v) = ∫₀¹ (u(t) + v(t)) dt = ∫₀¹ u(t) dt + ∫₀¹ v(t) dt = S(u) + S(v)
S(kv) = ∫₀¹ kv(t) dt = k ∫₀¹ v(t) dt = kS(v)
Therefore, S is linear.
2 The identity mapping I : V → V which maps each v ∈ V into itself is linear. Note that
I(v) = v for all v ∈ V.
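The linearity of D can be spot-checked with numpy's polynomial class (the particular polynomials below are arbitrary test data, not from the notes):

import numpy as np

u = np.polynomial.Polynomial([6, -5, 2])    # 6 - 5t + 2t^2
v = np.polynomial.Polynomial([1, 0, 0, 4])  # 1 + 4t^3
k = 3.0
print((u + v).deriv() == u.deriv() + v.deriv())  # True: D(u+v) = D(u) + D(v)
print((k * v).deriv() == k * v.deriv())          # True: D(kv) = kD(v)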
Theorem 3.1.4.
Let F : V → U be linear and suppose that v1 , ...vn ∈ V have the property that their images
F (v1 ), ..., F (vn ) are linearly independent. Then the vectors v1 , ..., vn are also linearly indepen-
dent.
Proof.
Suppose that for scalars a1 , ..., an
a1 v1 + a2 v2 + ... + an vn = 0.
Then
0 = F(0) = F(a1v1 + a2v2 + ... + anvn) = a1F(v1) + a2F(v2) + ... + anF(vn).
Since the F(vi) are linearly independent, all the ai = 0. Thus the vectors v1, ..., vn are linearly
independent.
Theorem 3.1.5.
Suppose that the linear mapping F : V → U is one-to-one and onto. Then the inverse mapping
F −1 : U → V is also linear.
Proof.
Let u1, u2 ∈ U. Since F is one-to-one and onto, there exist unique vectors v1, v2 ∈ V for which
F(v1) = u1 and F(v2) = u2. Since F is linear, we have
F(v1 + v2) = F(v1) + F(v2) = u1 + u2
and
F(kvi) = kF(vi) = kui.
By definition of the inverse mapping,
F⁻¹u1 = v1, F⁻¹u2 = v2, F⁻¹(u1 + u2) = v1 + v2 and F⁻¹(kui) = kvi.
Hence
F⁻¹(u1 + u2) = v1 + v2 = F⁻¹u1 + F⁻¹u2 and F⁻¹(kui) = kvi = kF⁻¹ui,
and so F⁻¹ is linear.
3.2 Image and Kernel
Definition 3.2.1. (Image)
Let F : V → U be a linear mapping. The image of F, written Im F, is the set of image points in U, that is
Im F = {u ∈ U : F(v) = u for some v ∈ V}.
Definition 3.2.2. (Kernel)
The kernel of F , written ker F , is the set of points in V which map onto 0 ∈ U , that is
ker F = {v ∈ V : F (v) = 0}.
Theorem 3.2.3.
Let F : V → U be a linear mapping. Then:
(i) the image of F is a subspace of U;
(ii) the kernel of F is a subspace of V.
Proof.
(i) Since F (0) = 0, 0 ∈ Im F. Now let u1 , u2 ∈ Im F and a, b ∈ K. Since u1 and u2 belong
to the image of F , there exist vectors v1 , v2 ∈ V such that F (v1 ) = u1 and F (v2 ) = u2 .
Then
au1 + bu2 = aF(v1) + bF(v2) = F(av1 + bv2) ∈ Im F.
Thus the image of F is a subspace of U.
(ii) Since F(0) = 0, 0 ∈ ker F. Now let v, w ∈ ker F and a, b ∈ K. Since v and w belong to the kernel of F, we have that F(v) = 0 and F(w) = 0. Thus
F(av + bw) = aF(v) + bF(w) = a0 + b0 = 0,
so av + bw ∈ ker F and the kernel of F is a subspace of V.
Theorem 3.2.5.
Suppose that the vectors v1 , ..., vn generate V and that F : V → U is linear. Then the vectors
F (v1 ), ..., F (vn ) ∈ U generate Im F .
Proof.
Let u ∈ Im F; then F(v) = u for some v ∈ V. Since the vi generate V and v ∈ V, there exist scalars a1, ..., an for which v = a1v1 + a2v2 + ... + anvn.
Accordingly,
u = F(v) = F(a1v1 + a2v2 + ... + anvn) = a1F(v1) + a2F(v2) + ... + anF(vn).
Thus the vectors F(v1), ..., F(vn) generate Im F.
Theorem 3.2.6.
Let V be of finite dimension and let F : V → U be a linear mapping. Then:
dim V = dim(ker F) + dim(Im F).
Example 3.2.7.
From Example 3.2.4, dim(Im F) = 2, dim(ker F) = 1 and dim V = 3, since
3 = 1 + 2.
3.3 Rank and Nullity
Definition. The rank of a linear mapping F : V → U is the dimension of its image, and the nullity of F is the dimension of its kernel:
rank(F) = dim(Im F) and nullity(F) = dim(ker F).
Note:
From Theorem 3.2.6, we have that if V is of finite dimension then dim V = rank(F) + nullity(F).
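Rank and nullity can be read off the singular values of any matrix representing F. A numpy sketch, using the hypothetical mapping F(x, y, z) = (x + y, y + z) purely for illustration:

import numpy as np

A = np.array([[1.0, 1.0, 0.0],     # matrix of F(x, y, z) = (x + y, y + z)
              [0.0, 1.0, 1.0]])
U_, s, Vh = np.linalg.svd(A)
rank = int((s > 1e-12).sum())      # dim(Im F): count of nonzero singular values
kernel = Vh[rank:]                 # remaining rows of Vh span ker F
nullity = kernel.shape[0]          # dim(ker F)
print(rank, nullity, rank + nullity)   # 2 1 3 = dim V, as the theorem predicts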
3.4 Singular and Non-singular Mappings
Definition. A linear mapping F : V → U is said to be singular if the image of some non-zero vector under F is 0, i.e. if there exists v ∈ V with v ≠ 0 and F(v) = 0. Thus F is non-singular if only the zero vector of V maps into 0 ∈ U, i.e. if ker F = {0}.
Theorem 3.4.3.
Let F : V → U be linear, where V is of finite dimension. Then V and the image of F have the same dimension if and only if F is non-singular.
Proof.
By Theorem 3.2.6, dim V = dim(Im F ) + dim(ker F ). Hence V and Im F have the same dimen-
sion if and only if dim(ker F ) = 0 or ker F = {0}, i.e. if and only if F is non-singular.
Theorem 3.4.4.
A linear mapping F : V → U is non-singular if and only if the image of any independent set is
independent.
Proof.
Let F be non-singular and let {v1, ..., vn} be an independent subset of V. We claim that the vectors F(v1), ..., F(vn) are independent. Suppose that
a1F(v1) + a2F(v2) + ... + anF(vn) = 0.
Since F is linear,
F(a1v1 + a2v2 + ... + anvn) = 0;
hence a1v1 + a2v2 + ... + anvn ∈ ker F = {0}, i.e. a1v1 + ... + anvn = 0. Since the vi are independent, all the ai = 0; thus the F(vi) are independent.
Conversely, suppose that F is singular, so that F(v) = 0 for some v ≠ 0. Then the set {v} is independent, but its image {F(v)} = {0} is dependent. Hence if the image of every independent set is independent, F must be non-singular.
3.5 Invertible Operators
Definition. A linear operator F : V → V is said to be invertible if it has an inverse, i.e. if there exists F⁻¹ : V → V such that FF⁻¹ = F⁻¹F = I.
Theorem 3.5.2.
A linear operator F : V → V on a vector space of finite dimension is invertible if and only if it
is non-singular.
Proof.
Let F be invertible. We know that F is invertible if and only if it is one-to-one and onto. Thus, in particular, if F is invertible then only 0 ∈ V can map into 0, i.e. F is non-singular.
Conversely, let F be non-singular, i.e. ker F = {0}. Then F is one-to-one. Moreover, assuming
V has finite dimension, we have by Theorem 3.2.6,
dim V = dim(ker F) + dim(Im F) = dim({0}) + dim(Im F) = dim(Im F).
Thus Im F = V, i.e. the image of F is V and as such F is onto. Hence F is both one-to-one
and onto and so is invertible.
Example 3.5.3.
Let F be the operator on R² defined by F(x, y) = (y, 2x − y). The kernel of F is {(0, 0)}; hence
F is non-singular and by Theorem 3.5.2, it is invertible.
We now find a formula for F⁻¹. Let (s, t) be the image of (x, y) under F; hence (x, y) is the image of (s, t) under F⁻¹. Thus s = y and t = 2x − y, so that x = (s + t)/2 and y = s. That is, F⁻¹ is given by the formula
F⁻¹(s, t) = ((s + t)/2, s).
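In matrix terms, F corresponds to an invertible 2 × 2 matrix, and the formula for F⁻¹ can be confirmed numerically; a small numpy sketch:

import numpy as np

M = np.array([[0.0, 1.0],          # matrix of F(x, y) = (y, 2x - y)
              [2.0, -1.0]])        # in the usual basis of R^2
print(np.linalg.det(M) != 0)       # True: F is non-singular, hence invertible
Minv = np.linalg.inv(M)
s, t = 3.0, 5.0
print(Minv @ np.array([s, t]))     # [4. 3.] = ((s + t)/2, s), matching F^{-1}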
3.6 Matrix Representation of a Linear Operator
Let {e1, ..., en} be a basis of a vector space V over K and let F be a linear operator on V. Then each F(ei) is a linear combination of the basis vectors, say
F(e1) = a11e1 + a12e2 + ... + a1nen
F(e2) = a21e1 + a22e2 + ... + a2nen
..................................
F(en) = an1e1 + an2e2 + ... + annen.
The transpose of the matrix of coefficients, denoted by [F]e, is called the matrix representation of F in the basis {ei}, i.e.

[F]e = [a11 a21 ... an1]
       [a12 a22 ... an2]
       [..............]
       [a1n a2n ... ann]
Examples 3.6.2.
1 Let V be the vector space of polynomials in t over R of degree ≤ 3 and let D : V → V be the differential operator defined by D(p(t)) = d(p(t))/dt. Compute the matrix of D in the basis {1, t, t², t³}.
Solution
D(1) = 0 = 0·1 + 0t + 0t² + 0t³
D(t) = 1 = 1·1 + 0t + 0t² + 0t³
D(t²) = 2t = 0·1 + 2t + 0t² + 0t³
D(t³) = 3t² = 0·1 + 0t + 3t² + 0t³
In matrix form,

[D(1) ]   [0 0 0 0] [1 ]
[D(t) ]   [1 0 0 0] [t ]
[D(t²)] = [0 2 0 0] [t²]
[D(t³)]   [0 0 3 0] [t³]

Accordingly,

      [0 1 0 0]
[D] = [0 0 2 0]
      [0 0 0 3]
      [0 0 0 0]
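The same matrix can be produced programmatically by differentiating each basis vector and recording its coordinates; a numpy sketch (the helper D_coords is ours):

import numpy as np

def D_coords(coeffs):
    # Coordinates of the derivative in the basis {1, t, t^2, t^3},
    # padded back to length 4 (coefficient order: 1, t, t^2, t^3).
    der = np.polynomial.polynomial.polyder(coeffs)
    return np.pad(der, (0, 4 - len(der)))

basis = np.eye(4)   # coordinate vectors of 1, t, t^2, t^3
Dmat = np.column_stack([D_coords(e) for e in basis])
print(Dmat)         # rows: [0 1 0 0], [0 0 2 0], [0 0 0 3], [0 0 0 0]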
2 Let F be the linear operator on R² defined by F(x, y) = (4x − 2y, 2x + y). Compute the matrix of F in the basis {f1 = (1, 1), f2 = (−1, 0)}.
Solution
F (f1 ) = F (1, 1)
= (2, 3)
= 3(1, 1) + (−1, 0)
= 3f1 + f2 .
F (f2 ) = F (−1, 0)
= (−4, −2)
= −2(1, 1) + 2(−1, 0)
= −2f1 + 2f2 .
In matrix form,

[F(f1)]   [ 3 1] [f1]
[F(f2)] = [−2 2] [f2]

Thus

[F]f = [3 −2]
       [1  2]
Theorem 3.6.3
Let {e1 , ..., en } be a basis of V and F a linear operator on V . Then for any v ∈ V ,
[F ]e [v]e = [F (v)]e .
Examples 3.6.4.
(1) Consider the differential operator D : V → V of Example 3.6.2 (no. 1). Let p(t) = a + bt + ct² + dt³; then D(p(t)) = b + 2ct + 3dt², so, relative to the basis {1, t, t², t³},
[p(t)] = (a, b, c, d)ᵗ and [D(p(t))] = (b, 2c, 3d, 0)ᵗ.
Then

             [0 1 0 0] [a]   [b ]
[D][p(t)] =  [0 0 2 0] [b] = [2c] = [D(p(t))]
             [0 0 0 3] [c]   [3d]
             [0 0 0 0] [d]   [0 ]

which verifies the validity of Theorem 3.6.3.
(2) Consider the linear operator F(x, y) = (4x − 2y, 2x + y) of Example 3.6.2 (no. 2) and the vector v = (5, 7). Then v = 7f1 + 2f2, where f1 = (1, 1) and f2 = (−1, 0). Hence, relative to the basis {f1, f2},
[v]f = (7, 2)ᵗ
and, since F(v) = F(5, 7) = (6, 17) = 17f1 + 11f2,
[F(v)]f = (17, 11)ᵗ.
But

[F]f [v]f = [3 −2] [7] = [17] = [F(v)]f
            [1  2] [2]   [11]

which again verifies Theorem 3.6.3.
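This verification is easy to automate; a numpy sketch for the same operator and vector (the helper coords is ours):

import numpy as np

P = np.column_stack(([1, 1], [-1, 0]))   # columns are f1, f2

def coords(w):                           # coordinates relative to {f1, f2}
    return np.linalg.solve(P, w)

F = lambda w: np.array([4*w[0] - 2*w[1], 2*w[0] + w[1]])
v = np.array([5, 7])
Ff = np.array([[3, -2], [1, 2]])         # [F]_f computed above
print(coords(v))                         # [7. 2.]
print(Ff @ coords(v))                    # [17. 11.]
print(coords(F(v)))                      # [17. 11.]  -> theorem verified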
Similarly, let F : V → U be a linear mapping, let {e1, ..., en} be a basis of V and let {f1, ..., fm} be a basis of U. Then each F(ei) is a linear combination of the fj, say
F(ei) = ai1f1 + ai2f2 + ... + aimfm, i = 1, ..., n.
The transpose of this matrix of coefficients, denoted by [F]fe, is called the matrix representation of F relative to the bases {ei} and {fj}, or the matrix of F in the bases {ei} and {fj}:

[F]fe = [a11 a21 ... an1]
        [a12 a22 ... an2]
        [..............]
        [a1m a2m ... anm]
The following theorem applies.
Theorem 3.6.6.
For any vector v ∈ V , [F ]fe [v]e = [F (v)]f .
Example 3.6.7
Let F : R³ → R² be the linear mapping defined by F(x, y, z) = (3x + 2y − 4z, x − 5y + 3z).
(ii) Verify that the action of F is preserved by its matrix representation, that is, for any
v ∈ R3 ,
[F ]gf [v]f = [F (v)]g .
Solution.
(ii) Exercise.
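As a sanity check, one can at least carry out the analogous verification with the usual bases of R³ and R² (the bases {fi} and {gj} of part (i) are not reproduced in these notes, so the bases below are an assumption for illustration only):

import numpy as np

F = lambda v: np.array([3*v[0] + 2*v[1] - 4*v[2], v[0] - 5*v[1] + 3*v[2]])
A = np.column_stack([F(e) for e in np.eye(3)])   # columns are F(e_i)
print(A)                          # [[ 3.  2. -4.], [ 1. -5.  3.]]
v = np.array([2.0, -1.0, 3.0])    # an arbitrary test vector
print(np.allclose(A @ v, F(v)))   # True: the matrix reproduces F's action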