
MTH 204 LECTURE NOTES

LINEAR ALGEBRA 1

DEPARTMENT OF MATHEMATICS
FACULTY OF NATURAL SCIENCES
UNIVERSITY OF JOS
NIGERIA


Contents

1 VECTOR SPACES AND SUBSPACES
1.1 Vector Space
1.2 Subspace
1.3 Linear Combination
1.4 Generators and Linear Span
1.5 Sum of Subspaces
1.6 Direct Sum

2 BASIS AND DIMENSION
2.1 Linear Dependence
2.2 Basis and Dimension
2.3 Co-ordinates, Co-ordinate Vector

3 LINEAR MAPPING
3.1 Definition, Examples and Theorems
3.2 Image and Kernel
3.3 Rank and Nullity
3.4 Singular and Non-singular Mapping
3.5 Invertible Operators
3.6 Matrix representation of a linear operator


Chapter 1

VECTOR SPACES AND SUBSPACES

1.1 Vector Space


Definition 1.1.1 Vector space.
Let K be a field. A non-empty set V with operations of addition and scalar multiplication
is called a vector space (or a linear space) over K if the following axioms hold:

A1: u + v ∈ V for all u, v ∈ V


A2: (u + v) + w = u + (v + w) for all u, v, w ∈ V
A3: There exists 0 ∈ V such that u + 0 = 0 + u = u for all u ∈ V
A4: For any u ∈ V , there exists −u ∈ V such that u + (−u) = (−u) + u = 0
A5: u + v = v + u for all u, v ∈ V
M1: ku ∈ V for all k ∈ K, u ∈ V
M2: k(u + v) = ku + kv for all k ∈ K, u, v ∈ V
M3: (a + b)u = au + bu ∀ a, b ∈ K, u ∈ V
M4: (ab)u = a(bu) ∀ a, b ∈ K, u ∈ V
M5: For the unit scalar 1 ∈ K, we have 1u = u for any u ∈ V

The elements of V are called vectors while those of K are called scalars.

Theorem 1.1.2.
Let V be a vector space over a field K. Then:

(i) For any k ∈ K and 0 ∈ V , k0 = 0.


(ii) For the scalar 0 ∈ K and any u ∈ V , 0u = 0.
(iii) If ku = 0, where k ∈ K and u ∈ V , then either k = 0 or u = 0.
(iv) For any k ∈ K and any u ∈ V , (−k)u = k(−u) = −ku.

Proof

(i) By A3 with u = 0, we have 0 + 0 = 0. Hence by M2, k0 = k(0 + 0) = k0 + k0.


Adding −k0 to both sides gives the required result.
(ii) By a property of K, 0 + 0 = 0. Hence by M3,

0u = (0 + 0)u = 0u + 0u

Adding −0u to both sides yields the required result.


(iii) Suppose that ku = 0 and k ≠ 0. Then there exists a scalar k⁻¹ such that k⁻¹ k = 1; hence

u = 1u = (k⁻¹ k)u = k⁻¹ (ku) = k⁻¹ 0 = 0

(iv) Using u + (−u) = 0, we obtain

0 = k0 = k(u + (−u)) = ku + k(−u)

Adding −ku to both sides gives −ku = k(−u).


Using k + (−k) = 0, we obtain

0 = 0u = (k + (−k))u = ku + (−k)u.

Adding −ku to both sides yields −ku = (−k)u.


Thus (−k)u = k(−u) = −ku.

Examples of Vector Spaces

1. Let V be the set of all functions from a non-empty set X into a field K. For any functions
f, g ∈ V and any scalar k ∈ K, let f + g and kf be the functions in V defined as follows:

(f + g)(x) = f (x) + g(x)

and
(kf )(x) = kf (x) ∀x ∈ X
Then V is a vector space over K.

2. Let K be an arbitrary field. The set of all n-tuples of elements of K with vector addition
and scalar multiplication defined by:

(a1 , a2 , ..., an ) + (b1 , b2 , ..., bn ) = (a1 + b1 , a2 + b2 , ..., an + bn )

and
k(a1 , a2 , ..., an ) = (ka1 , ka2 , ..., kan ),
where ai , bi ∈ K, is a vector space over K. We denote this space as K n .

3. Let V be the set of all m × n matrices with entries from an arbitrary field K. Then V
is a vector space over K with respect to the operations of matrix addition and scalar
multiplication.

4. Let V be the set of all polynomials

a0 + a1 t + a2 t2 + ... + an tn

with coefficients ai from a field K. Then V is a vector space over K with respect to the
usual operations of addition of polynomials and multiplication by a constant.
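
Axioms like these can be spot-checked mechanically once the operations are written down. Below is a minimal Python sketch (an addition to these notes, not part of the original text; vec_add and scal_mul are names chosen here) that tests a few of the axioms for R3 on random integer triples:

```python
import random

def vec_add(u, v):
    # A1: componentwise addition of n-tuples
    return tuple(a + b for a, b in zip(u, v))

def scal_mul(k, u):
    # M1: componentwise scalar multiplication
    return tuple(k * a for a in u)

random.seed(0)
u, v, w = [tuple(random.randint(-5, 5) for _ in range(3)) for _ in range(3)]
a, b = 2, -3

assert vec_add(vec_add(u, v), w) == vec_add(u, vec_add(v, w))                 # A2
assert vec_add(u, v) == vec_add(v, u)                                         # A5
assert scal_mul(a, vec_add(u, v)) == vec_add(scal_mul(a, u), scal_mul(a, v))  # M2
assert scal_mul(a + b, u) == vec_add(scal_mul(a, u), scal_mul(b, u))          # M3
print("axioms A2, A5, M2, M3 hold on this sample")
```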


1.2 Subspace
Definition 1.2.1. (Subspace)

Let W be a subset of a vector space V over a field K. Then W is called a subspace of V if


W is itself a vector space over K with respect to the operations of vector addition and scalar
multiplication defined on V .

Examples of Subspace 1.2.2.

1. Let V be any vector space. Then the set {0} consisting of the zero vector alone and also
the entire space V are subspaces of V . Note that the subspace V is a vector space itself.

2. Let V be the vector space R3 . Then the set W consisting of those vectors whose third
component is zero is a subspace of V . That is,

W = {(a, b, 0) : a, b ∈ R}

is a subspace of V .

3. Let V be the space of all square n × n matrices. Then the set W consisting of those
matrices that are symmetric is a subspace of V .

4. Let V be the space of polynomials. Then the set W consisting of polynomials with degree
≤ n, for a fixed n, is a subspace of V .

5. Let V be the space of all functions from a non-empty set X into the real field R. Then
the set W consisting of all bounded functions in V is a subspace of V .

Theorem 1.2.3.
W is a subspace of V if and only if

(i) W ≠ ∅ (non-empty)
(ii) u + v ∈ W for all u, v ∈ W
(iii) ku ∈ W for every k ∈ K and u ∈ W .

Corollary 1.2.4.
W is a subspace of V if and only if

(i) 0 ∈ W

(ii) u, v ∈ W =⇒ au + bv ∈ W ∀ a, b ∈ K

Proof
Suppose that W satisfies (i) and (ii). Then, by (i), W is non-empty. Furthermore, if u, v ∈ W
then by (ii), u + v = 1u + 1v ∈ W , and if v ∈ W and k ∈ K then, by (ii), kv = kv + 0v ∈ W .
Thus by Theorem 1.2.3, W is a subspace of V .
Conversely, if W is a subspace of V then clearly (i) and (ii) hold in W .


Examples of Subspaces 1.2.5.


1. Let V = R3 . Show that
W = {(a, b, c) : a + b + c = 0}
is a subspace of V .

Solution:
0 = (0, 0, 0) ∈ W since 0 + 0 + 0 = 0.
Suppose that u = (a, b, c) and v = (e, f, g) belong to W .
Then a + b + c = 0 and e + f + g = 0. Let k and p be any scalars. Then
ku + pv = k(a, b, c) + p(e, f, g)
= (ka, kb, kc) + (pe, pf, pg)
= (ka + pe, kb + pf, kc + pg)
and furthermore,
(ka + pe) + (kb + pf ) + (kc + pg) = k(a + b + c) + p(e + f + g)
= k0 + p0
=0
Thus ku + pv ∈ W and so W is a subspace of V .
2. Let V = R3 . Show that
W = {(a, b, c) : a² + b² + c² ≤ 1}
is not a subspace of V .
Solution:
v = (1, 0, 0) ∈ W and u = (0, 1, 0) ∈ W .
But,
v + u = (1, 0, 0) + (0, 1, 0)
= (1, 1, 0) ∉ W
since
1² + 1² + 0² = 2 > 1.
Hence W is not a subspace of V .
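
Both examples can be tested numerically with the criterion of Corollary 1.2.4. The following Python sketch is an addition to these notes (in_plane and in_ball are hypothetical helper names):

```python
def in_plane(v):
    # W1 = {(a, b, c) : a + b + c = 0}
    return sum(v) == 0

def in_ball(v):
    # W2 = {(a, b, c) : a^2 + b^2 + c^2 <= 1}
    return sum(x * x for x in v) <= 1

u, v = (1, -2, 1), (3, 0, -3)     # both lie in W1
k, p = 4, -7
comb = tuple(k * a + p * b for a, b in zip(u, v))
print(in_plane(comb))             # True: W1 is closed under linear combinations

x, y = (1, 0, 0), (0, 1, 0)       # both lie in W2
s = tuple(a + b for a, b in zip(x, y))
print(in_ball(s))                 # False: (1, 1, 0) escapes W2, so W2 is not a subspace
```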
Theorem 1.2.6.
Let U and W be subspaces of a vector space V . Then U ∩ W is also a subspace of V .

Proof
Clearly, 0 ∈ U and 0 ∈ W since U and W are subspaces. Hence 0 ∈ U ∩ W .
Now suppose that u, v ∈ U ∩ W then u, v ∈ U and u, v ∈ W . Since U and W are subspaces, we
have that: au + bv ∈ U and au + bv ∈ W for any scalars a, b ∈ K. Accordingly, au + bv ∈ U ∩ W
and U ∩ W is a subspace of V .

Corollary 1.2.7.
The intersection of any number of subspaces of a vector space V is a subspace of V .


1.3 Linear Combination


Definition 1.3.1 (Linear Combination).
Let V be a vector space over a field K and let v1 , v2 , ..., vm ∈ V . Any vector in V of the form
a1 v1 + a2 v2 + ... + am vm
where the ai ∈ K, is called a linear combination of v1 , v2 , ..., vm .

Examples of Linear Combination 1.3.2.


1. Any vector (a, b, c) ∈ R3 is a linear combination of
e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1),
since
(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1)
= ae1 + be2 + ce3 .
2. Any polynomial in t in the vector space of polynomials is a linear combination of 1, t, t2 , t3 , ...
since any polynomial is a linear combination of 1 and powers of t.
3. Determine whether or not the vector v = (3, 9, −4, −2) is a linear combination of the
vectors u1 = (1, −2, 0, 3), u2 = (2, 3, 0, −1) and u3 = (2, −1, 2, 1).

Solution
Let v = xu1 + yu2 + zu3 . That is,
(3, 9, −4, −2) = x(1, −2, 0, 3) + y(2, 3, 0, −1) + z(2, −1, 2, 1)
= (x + 2y + 2z, − 2x + 3y − z, 2z, 3x − y + z)
Thus,
x + 2y + 2z = 3
−2x + 3y − z = 9
2z = −4
3x − y + z = −2
Solving, we obtain x = 1, y = 3, z = −2.
Thus v = u1 + 3u2 − 2u3 and v is a linear combination of u1 , u2 and u3 .
Note that if v is not a linear combination, then the system of equations above will not have
a solution. For example, the vector v = (−1, 7, 2) is not a linear combination of the vectors
u1 = (1, 3, 5), u2 = (2, −1, 3), u3 = (−3, 2, −4)
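
In practice, such questions reduce to solving the linear system above. A sketch using numpy (an addition to these notes, assuming numpy is available):

```python
import numpy as np

# Columns of A are u1, u2, u3; we ask whether v lies in their span.
A = np.array([[ 1,  2,  2],
              [-2,  3, -1],
              [ 0,  0,  2],
              [ 3, -1,  1]], dtype=float)
v = np.array([3, 9, -4, -2], dtype=float)

coeffs, _, _, _ = np.linalg.lstsq(A, v, rcond=None)   # least-squares solve
if np.allclose(A @ coeffs, v):
    print("v =", coeffs, "in (u1, u2, u3)-coefficients")   # [ 1.  3. -2.]
else:
    print("v is not a linear combination of u1, u2, u3")
```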

1.4 Generators and Linear Span


Definition 1.4.1. (Generators and Linear Span)
1. The vectors v1 , v2 , ..., vn ∈ V are said to generate V if every v ∈ V can be written as a
linear combination of the vi ’s.
2. Let S be a non-empty subset of V . The set of all linear combinations of vectors in S is
a subspace of V containing S and it is denoted by L(S) and called the linear span of S.
For convenience L(∅) is taken as {0}.


1.5 Sum of Subspaces


Let U and W be subspaces of a vector space V . The sum of U and W , written U + W , consists
of all sums u + w where u ∈ U and w ∈ W ; i.e.

U + W = {u + w : u ∈ U, w ∈ W }.

Theorem 1.5.1.
The sum U + W of subspaces U and W of V is also a subspace of V .

Proof
U + W is non-empty, since 0 = 0 + 0 ∈ U + W as 0 ∈ U and 0 ∈ W .
Furthermore, suppose that u + w and u1 + w1 belong to U + W with u, u1 ∈ U and w, w1 ∈ W .
Then
(u + w) + (u1 + w1 ) = (u + u1 ) + (w + w1 ) ∈ U + W
and for any scalar k,
k(u + w) = ku + kw ∈ U + W
Thus U + W is a subspace of V .

Theorem 1.5.2.
Let U and W be subspaces of a vector space V . Then;
(i) U and W are contained in U + W ;

(ii) U + W is the smallest subspace of V containing U and W i.e U + W is the linear span of
U and W or U + W = L(U, W ).
Proof

(i) Let u ∈ U . By hypothesis, W is a subspace of V and so 0 ∈ W . Thus u = u + 0 ∈ U + W .


Hence, U is contained in U + W .
Similarly, W is contained in U + W .

(ii) Since U + W is a subspace of V containing both U and W , it must also contain the linear
span of U and W , i.e L(U, W ) ⊂ U + W .
On the other hand, if v ∈ U + W then v = u + w = 1u + 1w where u ∈ U and w ∈ W ;
hence v is a linear combination of elements of U and W and so belongs to L(U, W ). Thus
U + W ⊂ L(U, W ). The two inclusion relations give us the required result.

Theorem 1.5.3.
Suppose that U and W are subspaces of a vector space V and that {ui } generates U and {wj }
generates W , then {ui , wj }, i.e {ui } ∪ {wj }, generates U + W .

Proof
Let v ∈ U + W . Then v = u + w where u ∈ U and w ∈ W . Since {ui } generates U , then u is a
linear combination of the ui s. Also, since {wj } generates W , w is a linear combination of the wj s;
i.e
u = a1 u1 + a2 u2 + ... + an un , ai ∈ K, ui ∈ {ui }


w = b1 w1 + b2 w2 + ... + bm wm , bj ∈ K, wj ∈ {wj }

Thus,
v = u + w = a1 u1 + a2 u2 + ... + an un + b1 w1 + b2 w2 + ... + bm wm
and so {ui , wj } generates U + W .

1.6 Direct Sum


Definition 1.6.1 (Direct Sum).
The vector space V is said to be the direct sum of its subspaces U and W , written V = U ⊕ W ,
if and only if every vector v ∈ V can be written in one and only one way as v = u + w where
u ∈ U and w ∈ W .

Examples 1.6.2.

1. In the vector space R3 , let U be the xy plane and let W be the yz plane. Thus;

U = {(a, b, 0) : a, b ∈ R} and
W = {(0, b, c) : b, c ∈ R}

Then R3 = U + W since every vector in R3 is the sum of a vector in U and a vector in W .


However, R3 is not the direct sum of U and W since sums are not unique. For instance

(3, 5, 7) = (3, 1, 0) + (0, 4, 7)

and also
(3, 5, 7) = (3, −4, 0) + (0, 9, 7)

2. In R3 , let U be the xy plane and let W be the z axis. That is,

U = {(a, b, 0) : a, b ∈ R} and
W = {(0, 0, c) : c ∈ R}

Now any vector (a, b, c) ∈ R3 can be written as the sum of a vector in U and a vector in
W in one and only one way:

(a, b, c) = (a, b, 0) + (0, 0, c)

Hence, R3 is the direct sum of U and W , that is, R3 = U ⊕ W .

Theorem 1.6.3.
The vector space V is the direct sum of its subspaces U and W if and only if:

(i) V = U + W and

(ii) U ∩ W = {0}


Proof
Suppose V = U ⊕ W . Then any v ∈ V can be uniquely written in the form v = u + w where
u ∈ U and w ∈ W . Thus, in particular, V = U + W .
Now suppose that v ∈ U ∩ W . Then:

(a) v = v + 0 where v ∈ U, 0 ∈ W and

(b) v = 0 + v where 0 ∈ U, v ∈ W .

Since such a sum for v must be unique, v = 0. Hence U ∩ W = {0}.


On the other hand, suppose that V = U + W and U ∩ W = {0}. Let v ∈ V . Since V = U + W ,
there exist u ∈ U and w ∈ W such that v = u + w. We need to show that such a sum is
unique. Suppose also that v = u1 + w1 where u1 ∈ U and w1 ∈ W . Then u + w = u1 + w1 and
so u − u1 = w1 − w. But u − u1 ∈ U and w1 − w ∈ W ; hence by U ∩ W = {0}, u − u1 = 0
and w1 − w = 0. Thus, u = u1 and w = w1 and such a sum for v ∈ V is unique and V = U ⊕ W .

Example 1.6.4
Let V be the vector space of n-square matrices over the real field R. Let U and W be the subspaces
of symmetric and skew-symmetric matrices respectively. Then V = U ⊕ W .

Solution:
We first show that V = U + W . Let A be an arbitrary n-square matrix. Note that

A = (1/2)(A + Aᵗ) + (1/2)(A − Aᵗ).

Recall that (1/2)(A + Aᵗ) ∈ U and (1/2)(A − Aᵗ) ∈ W .
We next show that U ∩ W = {0}. Suppose M ∈ U ∩ W . Then M = Mᵗ and Mᵗ = −M , which
implies that M = −M and so M = 0.
Thus U ∩ W = {0} and hence, V = U ⊕ W .
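
The decomposition used in this proof is easy to compute explicitly. A short numpy sketch (added here for illustration, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-4, 5, size=(3, 3)).astype(float)   # arbitrary square matrix

S = (A + A.T) / 2       # symmetric part: S = S^t, so S lies in U
K = (A - A.T) / 2       # skew-symmetric part: K^t = -K, so K lies in W

assert np.allclose(S, S.T)
assert np.allclose(K, -K.T)
assert np.allclose(S + K, A)   # the unique direct-sum splitting A = S + K
```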


Chapter 2

BASIS AND DIMENSION

2.1 Linear Dependence


Definition 2.1.1 (Linear Dependence).
Let V be a vector space over a field K. The vectors v1 , v2 , ..., vm ∈ V are said to be linearly
dependent over K, if there exist scalars a1 , a2 , ..., am ∈ K, not all of them zero, such that

a1 v1 + a2 v2 + ... + am vm = 0 (2.1)

Otherwise the vectors are said to be linearly independent over K.

Note
1. The relation (2.1) will always hold if the ai ’s are all zero. If this relation holds only in
this case, that is

a1 v1 + a2 v2 + ... + am vm = 0 only if a1 = 0, ..., am = 0

then the vectors are linearly independent.

2. If the relation (2.1) holds when one of the ai ’s is not 0, then the vectors are linearly
dependent.

3. If 0 is one of the vectors v1 ...vm , say vi = 0, then the vectors must be dependent, for

0v1 + 0v2 + ... + 5vi + ... + 0vm = 0 + 0 + ... + 5 · 0 + ... + 0 = 0

and the coefficient of vi is not 0.

4. Any non zero vector v is by itself independent, for kv = 0, v ≠ 0 implies that k = 0.

Example 2.1.2
The vectors v1 = (1, −1, 0), v2 = (1, 3, −1) and v3 = (5, 3, −2) are dependent since 3v1 +
2v2 − v3 = 0, whereas the vectors v4 = (6, 2, 3, 4), v5 = (0, 5, −3, 1) and v6 = (0, 0, 7, −2) are
independent, since a4 v4 + a5 v5 + a6 v6 = 0 implies that a4 = a5 = a6 = 0.
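
A convenient computational test (a numpy sketch added to these notes): m vectors are dependent exactly when the matrix having them as rows has rank less than m.

```python
import numpy as np

def dependent(vectors):
    # Rank below the number of rows means some non-trivial relation
    # a1 v1 + ... + am vm = 0 exists.
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) < len(vectors)

print(dependent([(1, -1, 0), (1, 3, -1), (5, 3, -2)]))          # True
print(dependent([(6, 2, 3, 4), (0, 5, -3, 1), (0, 0, 7, -2)]))  # False
```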


Theorem 2.1.3.
The vectors v1 , ..., vm are linearly dependent if and only if one of them is a linear combination
of the others.

Proof
Suppose that vi is a linear combination of the others, i.e.

vi = a1 v1 + ... + ai−1 vi−1 + ai+1 vi+1 + ... + am vm

Then by adding −vi to both sides, we obtain

a1 v1 + ... + ai−1 vi−1 − vi + ai+1 vi+1 + ... + am vm = 0

where the coefficient of vi is not 0; hence the vectors are linearly dependent.
Conversely, suppose that the vectors are linearly dependent, say,

b1 v1 + ... + bj vj + ... + bm vm = 0 where bj ≠ 0

Then,

vj = −(1/bj )(b1 v1 + ... + bj−1 vj−1 + bj+1 vj+1 + ... + bm vm )

and so vj is a linear combination of the other vectors.

Theorem 2.1.4.
The non zero vectors v1 , ..., vm are linearly dependent if and only if one of them, say, vi , is a
linear combination of the preceding vectors; i.e.

vi = a1 v1 + ... + ai−1 vi−1 .

Proof
Suppose that
vi = a1 v1 + ... + ai−1 vi−1
then
a1 v1 + ... + ai−1 vi−1 − vi + 0 · vi+1 + ... + 0 · vm = 0
and the coefficient of vi is not 0. Hence the vi s are linearly dependent.
Conversely, suppose that the vi are linearly dependent. Then there exist scalars a1 , ..., am not
all 0, such that a1 v1 + ... + am vm = 0. Let k be the largest integer such that ak ≠ 0. Then

a1 v1 + ... + ak vk + 0 · vk+1 + ... + 0 · vm = 0, or

a1 v1 + ... + ak vk = 0.
Suppose that k = 1, then a1 v1 = 0, a1 ≠ 0 and so v1 = 0. But the vi are non zero vectors,
hence k > 1 and
vk = −(1/ak )(a1 v1 + ... + ak−1 vk−1 )
That is, vk is a linear combination of the preceding vectors.


Note 2.1.5.

1. The set {v1 , ..., vm } is called a dependent set or an independent set according as the
vectors v1 , ..., vm are dependent or independent. We also define the empty set ∅ to be
independent.

2. If any two of the vectors v1 , ..., vm are equal, say v1 = v2 , then the vectors are dependent. For,

v1 − v2 + 0v3 + ... + 0vm = 0

and the coefficient of v1 is not 0.

3. Two vectors v1 and v2 are dependent if and only if one of them is a multiple of the other.

4. A set which contains a dependent subset is itself dependent. Hence any subset of an
independent set is independent.

5. If the set {v1 , ..., vm } is independent, then any arrangement of the vectors, such as
{vi1 , vi2 , ..., vim } is also independent.

6. In the real space R3 , dependence of vectors can be described geometrically as follows:

• Any two vectors u and v are dependent if and only if they lie on the same line
through the origin.
• Any three vectors u, v and w are dependent if and only if they lie on the same plane
through the origin.

2.2 Basis and Dimension


Definition 2.2.1 (Basis and Dimension).
A vector space V is said to be of finite dimension n or to be n−dimensional, written

dim V = n,

if there exist n linearly independent vectors e1 , e2 , ..., en which span V . The sequence {e1 , e2 , ..., en }
is then called a basis of V .
When a vector space is not of finite dimension, it is said to be of infinite dimension.
Examples 2.2.2.

1. Let K be any field. Consider the vector space K n which consists of n-tuples of elements
of K. The vectors:

e1 = (1, 0, 0, ..., 0, 0) , e2 = (0, 1, 0, ..., 0, 0) , ..., en = (0, 0, 0, ..., 0, 1)

form a basis, called the usual basis of K n . Thus K n has dimension n.


2. Let V be the vector space of all 2 × 3 matrices over a field K. Then the matrices

( 1 0 0 )  ( 0 1 0 )  ( 0 0 1 )
( 0 0 0 ), ( 0 0 0 ), ( 0 0 0 ),

( 0 0 0 )  ( 0 0 0 )  ( 0 0 0 )
( 1 0 0 ), ( 0 1 0 ), ( 0 0 1 )

form a basis of V . Thus dim V = 6.

More generally, let V be the vector space of all m × n matrices over K and let Eij ∈ V
be the matrix with ij-entry 1 and 0 elsewhere. Then the set {Eij } is a basis, called the
usual basis of V . Consequently, dimV = mn.
3. Let W be the vector space of polynomials in t of degree ≤ n. The set {1, t, t2 , ..., tn } is
linearly independent and generates W . Thus, it is a basis of W and so dimW = n + 1.

We observe that the vector space V of all polynomials is not finite dimensional since no
finite set of polynomials generates V .
4. The vector space {0} is defined to have dimension zero. Note that {0} is generated by ∅
since ∅ is an independent set.
Theorem 2.2.3.
Suppose that {v1 , ..., vn } generates a vector space V .
(i) If w ∈ V , then {w, v1 , ..., vn } is linearly dependent and generates V .
(ii) If vi is a linear combination of the preceding vectors, then {v1 , ..., vi−1 , vi+1 , ..., vn } gen-
erates V .
Proof
(i) If w ∈ V , then w is a linear combination of the vi s, since {vi } generates V . Accordingly,
{w, v1 , ..., vn } is linearly dependent. Clearly, w with the vi s generate V since vi s by
themselves generate V . That is {w, v1 , ..., vn } generates V .
(ii) Suppose that vi = k1 v1 + ... + ki−1 vi−1 . Let u ∈ V . Then, since {vi } generates V , u is a
linear combination of the vi s, say u = a1 v1 + ... + an vn .
Substituting for vi , we obtain
u = a1 v1 + ... + ai−1 vi−1 + ai (k1 v1 + ... + ki−1 vi−1 ) + ai+1 vi+1 + ... + an vn
= (a1 + ai k1 )v1 + ... + (ai−1 + ai ki−1 )vi−1 + ai+1 vi+1 + ... + an vn .
Thus {v1 , ..., vi−1 , vi+1 , ..., vn } generates V . In other words, we can delete vi from the
generating set and still retain a generating set.
Theorem 2.2.4.
Suppose that {v1 , ..., vn } generates a vector space V . If {w1 , ..., wm } is linearly independent, then
m ≤ n and V is generated by a set of the form {w1 , ..., wm , vi1 , ..., vin−m }. Thus, in particular,
n + 1 or more vectors in V are linearly dependent.

Theorem 2.2.5.
Let V be a finite dimensional vector space. Then any basis of V has the same number of vectors.


Proof
Let {e1 , e2 , ..., en } and {f1 , f2 , ..., fm } be bases of V . Since {ei } generates V and {f1 , f2 , ..., fm }
is independent, Theorem 2.2.4 gives m ≤ n. Similarly, since {fj } generates V and {e1 , e2 , ..., en }
is independent, n ≤ m. Combining, we have that m = n. Thus any two bases contain the same
number of vectors, and the theorem holds.

Note 2.2.6.
Let V be of finite dimension n. Then:

(i) Any set of n + 1 or more vectors is linearly dependent.

(ii) Any linearly independent set is part of a basis and can be augmented to a basis.

(iii) A linearly independent set with n elements is a basis.

(iv) Let {v1 , ..., vm } be independent but {v1 , ..., vm , w} be dependent. Then w is a linear
combination of the vi s.

Theorem 2.2.7.
Let W be a subspace of an n−dimensional vector space V . Then dim W ≤ n. In particular, if
dim W = n, then W = V .

Proof
Since V is of dimension n, any n + 1 or more vectors are linearly dependent. Furthermore, since
a basis of W consists of linearly independent vectors, it cannot contain more than n elements.
Hence dim W ≤ n.
In particular, if {w1 , ..., wn } is a basis of W , then since it is an independent set with n elements
it is also a basis of V . Thus W = V when dim W = n.

Theorem 2.2.8.
Let U and W be finite-dimensional subspaces of a vector space V. Then U + W has finite
dimension and:
dim (U + W ) = dim U + dim W − dim (U ∩ W ).
Example 2.2.9.
Suppose that U and W are the xy-plane and yz-plane respectively in R3 ; that is, U = {(a, b, 0)}
and W = {(0, b, c)}. Since R3 = U + W , dim (U + W ) = 3. Also dim U = 2 and dim W = 2. By
Theorem 2.2.8,
3 = 2 + 2 − dim (U ∩ W ) =⇒ dim (U ∩ W ) = 1
Observe that this agrees with the fact that U ∩ W is the y-axis, i.e. U ∩ W = {(0, b, 0)}, and
so has dimension 1.

If V is the direct sum of U and W , i.e., V = U ⊕ W , then dim V = dim U + dim W ; since
U ∩ W = {0} and dim (U ∩ W ) = dim{0} = 0.
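
Theorem 2.2.8 can be checked numerically by comparing ranks of spanning sets. The sketch below (added to these notes, assuming numpy) redoes Example 2.2.9:

```python
import numpy as np

U = np.array([(1, 0, 0), (0, 1, 0)], dtype=float)   # spans the xy-plane
W = np.array([(0, 1, 0), (0, 0, 1)], dtype=float)   # spans the yz-plane

dim_U = np.linalg.matrix_rank(U)                    # 2
dim_W = np.linalg.matrix_rank(W)                    # 2
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))  # dim(U + W) = 3

# Theorem 2.2.8 rearranged: dim(U ∩ W) = dim U + dim W − dim(U + W)
print(dim_U + dim_W - dim_sum)                      # 1, the y-axis
```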


2.3 Co-ordinates, Co-ordinate Vector


Definition 2.3.1.(Co-ordinates, Co-ordinate Vector)
Let {e1 , ..., en } be a basis of an n−dimensional vector space V over a field K and let u be any
vector in V . Since {ei } generates V , u is a linear combination of the ei s;

u = a1 e1 + a2 e2 + ... + an en , ai ∈ K.

We call the scalars a1 , a2 , ..., an the co-ordinates of u in {ei } and we call the n−tuple (a1 , ..., an )
the co-ordinate vector of u relative to {ei } and denote it by [u]e i.e.

[u]e = (a1 , a2 , ..., an ).

Note

(i) Since the ei are independent, the representation

u = a1 e1 + a2 e2 + ... + an en , ai ∈ K

is unique. That is, the n scalars a1 , ..., an are completely determined by the vector u and
the basis {ei }.

(ii) At times [u]e may be written as a column vector.

Examples 2.3.2

1. Let V be the vector space of polynomials with degree ≤ 2; i.e.

V = {at2 + bt + c : a, b, c ∈ R}.

The polynomials e1 = 1, e2 = t − 1 and e3 = (t − 1)2 form a basis of V . Let u = 2t2 − 5t + 6.
Find [u]e .

Solution:
Set u as a linear combination of the ei using the unknowns x, y and z.

u = xe1 + ye2 + ze3

i.e. 2t2 − 5t + 6 = x(1) + y(t − 1) + z(t − 1)2
= zt2 + (y − 2z)t + (x − y + z)

We now set the coefficients of the same powers of t equal to each other:

x−y+z =6
y − 2z = −5
z=2

The solution of the above system is x = 3, y = −1, z = 2.


Then,
u = 3e1 − e2 + 2e3 and [u]e = (3, −1, 2).


2. Consider the real space R3 . Find the co-ordinate vector of u = (3, 1, −4) relative to the
basis f1 = (1, 1, 1), f2 = (0, 1, 1), f3 = (0, 0, 1).

Solution:
Set u as a linear combination of the fi using the unknowns x, y, and z:

(3, 1, −4) = x(1, 1, 1) + y(0, 1, 1) + z(0, 0, 1) = (x, x + y, x + y + z)

Then set the corresponding components equal to each other to obtain the equivalent
system of equations:

x=3
x+y =1
x + y + z = −4

having solution x = 3, y = −2, z = −5. Thus [u]f = (3, −2, −5).

Note: Relative to the usual basis e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1), the
co-ordinate vector of u is identical to u itself, i.e. [u]e = (3, 1, −4) = u.
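
Finding [u]f amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. A numpy sketch (added to these notes) for the example above:

```python
import numpy as np

# Columns of B are the basis vectors f1, f2, f3.
B = np.array([(1, 1, 1), (0, 1, 1), (0, 0, 1)], dtype=float).T
u = np.array([3, 1, -4], dtype=float)

coords = np.linalg.solve(B, u)   # solves x f1 + y f2 + z f3 = u
print(coords)                    # [ 3. -2. -5.] = [u]_f
```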


Chapter 3

LINEAR MAPPING

3.1 Definition, Examples and Theorems


Definition 3.1.1 (Linear Mapping)
Let V and U be vector spaces over the same field K. A mapping F : V → U is called a linear
mapping if it satisfies the following two conditions:

(i) For any v, w ∈ V, F (v + w) = F (v) + F (w).

(ii) For any k ∈ K and any v ∈ V , F (kv) = kF (v).

Note:

(a) F : V → U is linear if it preserves the two basic operations of a vector space; that of
vector addition and scalar multiplication.

(b) Substituting k = 0 in (ii), we obtain F (0) = 0. That is, every linear mapping takes the
zero vector into the zero vector.

(c) For any scalars a, b ∈ K and any vectors v, w ∈ V , we obtain (by applying both conditions
of linearity)

F (av + bw) = F (av) + F (bw)


= aF (v) + bF (w).

The condition F (av + bw) = aF (v) + bF (w) completely characterizes linear mappings and
is sometimes used as its definition.

More generally, for any scalars ai ∈ K and any vectors vi ∈ V , we obtain the basic
property of linear mappings:

F (a1 v1 + a2 v2 + ... + an vn ) = a1 F (v1 ) + a2 F (v2 ) + ... + an F (vn ).

(d) Other phrases for linear mapping are:

(i) vector space homomorphism and


(ii) linear operator or linear transformation when V = U .


Examples 3.1.2.
1 Let V be the vector space of polynomials in the variable t over the real field R. Then the
differential mapping D : V → V and the integral mapping S : V → R are linear. For if
we let D = d/dt and u, v ∈ V , k ∈ K, we have

D(u + v) = d(u + v)/dt = du/dt + dv/dt = D(u) + D(v)
D(kv) = d(kv)/dt = k dv/dt = kD(v)

Therefore, D is linear.
Also, if we let S(u) = ∫₀¹ u(t) dt, then

S(u + v) = ∫₀¹ (u + v) dt = ∫₀¹ u dt + ∫₀¹ v dt = S(u) + S(v)
S(kv) = ∫₀¹ kv dt = k ∫₀¹ v dt = kS(v)

Therefore, S is linear.

2 The identity mapping I : V → V which maps each v ∈ V into itself is linear. Note that
I(v) = v for all v ∈ V.

3 Let F : V → U be the mapping which assigns 0 ∈ U to every vector v ∈ V . That is


F (v) = 0 for every v ∈ V . Then F is linear. Here F is called the zero mapping and is
usually denoted by O.
Theorem 3.1.3.
Let V and U be vector spaces over a field K. Let {v1 , v2 , ..., vn } be a basis of V and
let u1 , u2 , ..., un be any arbitrary vectors in U . Then there exists a unique linear mapping
F : V → U such that F (v1 ) = u1 , F (v2 ) = u2 , ..., F (vn ) = un .

Theorem 3.1.4.
Let F : V → U be linear and suppose that v1 , ..., vn ∈ V have the property that their images
F (v1 ), ..., F (vn ) are linearly independent. Then the vectors v1 , ..., vn are also linearly indepen-
dent.

Proof.
Suppose that for scalars a1 , ..., an

a1 v1 + a2 v2 + ... + an vn = 0.

Then

0 = F (0) = F (a1 v1 + a2 v2 + ... + an vn )


= a1 F (v1 ) + a2 F (v2 ) + ... + an F (vn )

Since the F (vi ) are linearly independent, all the ai = 0. Thus the vectors v1 , ..., vn are linearly
independent.


Theorem 3.1.5.
Suppose that the linear mapping F : V → U is one-to-one and onto. Then the inverse mapping
F −1 : U → V is also linear.

Proof.
Let u1 , u2 ∈ U. Since F is one-to-one and onto, there exist unique vectors v1 , v2 ∈ V for which
F (v1 ) = u1 and F (v2 ) = u2 . Since F is linear, we have

F (v1 + v2 ) = F (v1 ) + F (v2 ) = u1 + u2

and
F (kv1 ) = kF (v1 ) = ku1 .
By definition of the inverse mapping,

F −1 (u1 ) = v1 , F −1 (u2 ) = v2 , F −1 (u1 + u2 ) = v1 + v2

and F −1 (ku1 ) = kv1 .


Then
F −1 (u1 + u2 ) = v1 + v2 = F −1 (u1 ) + F −1 (u2 )
and
F −1 (ku1 ) = kv1 = kF −1 (u1 ).
Thus F −1 is linear.

3.2 Image and Kernel


Definition 3.2.1. (Image)
Let F : V → U be a linear mapping. The image of F , written Im F , is the set of image points
in U , that is Im F = {u ∈ U : F (v) = u for some v ∈ V }.

Definition 3.2.2.(Kernel)
The kernel of F , written ker F , is the set of points in V which map onto 0 ∈ U , that is
ker F = {v ∈ V : F (v) = 0}.

Theorem 3.2.3.
Let F : V → U be a linear mapping. Then

(i) the image of F is a subspace of U and

(ii) the kernel of F is a subspace of V.


Proof.
(i) Since F (0) = 0, 0 ∈ Im F. Now let u1 , u2 ∈ Im F and a, b ∈ K. Since u1 and u2 belong
to the image of F , there exist vectors v1 , v2 ∈ V such that F (v1 ) = u1 and F (v2 ) = u2 .
Then

F (av1 + bv2 ) = aF (v1 ) + bF (v2 )


= au1 + bu2 ∈ Im F.

Thus the image of F is a subspace of U .

(ii) Since F (0) = 0, then 0 ∈ ker F . Now let v, w ∈ ker F and a, b ∈ K. Since v and w belong
to the kernel of F , we have that F (v) = 0 and F (w) = 0. Thus

F (av + bw) = aF (v) + bF (w)


= a · 0 + b · 0
= 0,

and so av + bw ∈ ker F. Thus the kernel of F is a subspace of V .


Example 3.2.4.
Let F : R3 → R3 be the projection mapping into the xy-plane, that is F (x, y, z) = (x, y, 0).
The image of F is the entire xy-plane, i.e. Im F = {(a, b, 0) : a, b ∈ R}. The kernel of F is the
z-axis, i.e. ker F = {(0, 0, c) : c ∈ R}.

Theorem 3.2.5.
Suppose that the vectors v1 , ..., vn generate V and that F : V → U is linear. Then the vectors
F (v1 ), ..., F (vn ) ∈ U generate Im F .

Proof.
Let u ∈ Im F . Then F (v) = u for some v ∈ V . Since the vi generate V and v ∈ V , there
exist scalars a1 , ..., an for which v = a1 v1 + a2 v2 + ... + an vn .
Accordingly,

u = F (v)
= F (a1 v1 + a2 v2 + ... + an vn )
= a1 F (v1 ) + a2 F (v2 ) + ... + an F (vn )

and hence the vectors F (v1 ), ..., F (vn ) generate Im F.

Theorem 3.2.6.
Let V be of finite dimension and let F : V → U be a linear mapping. Then:

dim V = dim (ker F ) + dim (Im F ).

Example 3.2.7.
From Example 3.2.4, dim (Im F ) = 2, dim (ker F ) = 1 and dim V = 3, in agreement with
Theorem 3.2.6:

dim V = dim (ker F ) + dim (Im F ), i.e. 3 = 1 + 2.
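
For a mapping given by a matrix, rank and nullity can be computed directly. The numpy sketch below (added here) checks Theorem 3.2.6 for the projection of Example 3.2.4:

```python
import numpy as np

# Matrix of the projection F(x, y, z) = (x, y, 0) in the usual basis.
F = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=float)

rank = np.linalg.matrix_rank(F)       # dim(Im F) = 2
nullity = F.shape[1] - rank           # dim(ker F) = 3 - 2 = 1
print(rank + nullity == F.shape[1])   # True: dim V = dim(ker F) + dim(Im F)
```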


3.3 Rank and Nullity


Let F : V → U be a linear mapping. Then the rank of F is defined to be the dimension of its
image and the nullity of F is defined to be the dimension of its kernel; i.e.

rank(F) = dim (Im F ) and nullity (F) = dim (ker F ).

Note:
From Theorem 3.2.6, we have that if V is of finite dimension then dim V = rank (F ) + nullity (F ).

3.4 Singular and Non-singular Mapping


Definition 3.4.1 (Singular Mapping).
A linear mapping F : V → U is said to be singular if the image of some non zero vector under
F is 0, i.e. if there exists v ∈ V for which v ≠ 0 but F (v) = 0.

Definition 3.4.2 (Non-singular Mapping).


F : V → U is non-singular if only 0 ∈ V maps into 0 ∈ U or, equivalently, if its kernel consists
only of the zero vector, i.e. ker F = {0}.

Theorem 3.4.3.
Let F : V → U be linear and suppose that V is of finite dimension. Then V and the image of
F have the same dimension if and only if F is non-singular.
Proof.
By Theorem 3.2.6, dim V = dim(Im F ) + dim(ker F ). Hence V and Im F have the same dimen-
sion if and only if dim(ker F ) = 0 or ker F = {0}, i.e. if and only if F is non-singular.

Theorem 3.4.4.
A linear mapping F : V → U is non-singular if and only if the image of any independent set is
independent.
Proof.
Let F be non-singular and {v1 , ..., vn } be an independent subset of V . We claim that the vectors
F (v1 ), ..., F (vn ) are independent. Let

a1 F (v1 ) + a2 F (v2 ) + ... + an F (vn ) = 0,

where ai ∈ K. Since F is linear,

F (a1 v1 + a2 v2 + ... + an vn ) = 0;

hence a1 v1 + a2 v2 + ... + an vn ∈ ker F. But F is non-singular, i.e. ker F = {0}; hence
a1 v1 + a2 v2 + ... + an vn = 0. Since the vi are linearly independent, all the ai are 0. Accordingly, the
F (vi ) are linearly independent. In other words, the image of the independent set {v1 , v2 , ..., vn }
is independent.
On the other hand, let the image of any independent set be independent. If v ∈ V is non zero,
then {v} is independent. Then {F (v)} is independent and so F (v) 6= 0. Thus F is non-singular.


3.5 Invertible Operators


Definition 3.5.1 (Invertible Operators).
A linear operator F : V → V is said to be invertible if it has an inverse, i.e. if there exists F −1
such that F F −1 = F −1 F = I, where I is the identity mapping.

Theorem 3.5.2.
A linear operator F : V → V on a vector space of finite dimension is invertible if and only if it
is non-singular.
Proof.
Let F be invertible. We know that F is invertible if and only if it is one-to-one and onto. Thus,
in particular, if F is invertible then only 0 ∈ V can map to 0, i.e. F is non-singular.
Conversely, let F be non-singular, i.e. ker F = {0}. Then F is one-to-one. Moreover, assuming
V has finite dimension, we have by Theorem 3.2.6,

dim V = dim(Im F ) + dim(ker F )


= dim(Im F ) + dim({0})
= dim(Im F ) + 0
= dim(Im F ).

Thus Im F = V, i.e. the image of F is V and as such F is onto. Hence F is both one-to-one
and onto and so is invertible.

Example 3.5.3.
Let F be the operator on R2 defined by F (x, y) = (y, 2x − y). The kernel of F is {(0, 0)}; hence
F is non-singular and by Theorem 3.5.2, it is invertible.
We now find a formula for F −1 . Let (s, t) be in the image of (x, y) under F ; hence (x, y) is the
image of (s, t) under F −1 ;

F (x, y) = (s, t) and F −1 (s, t) = (x, y)


we have
F (x, y) = (y, 2x − y) = (s, t)
and so
y = s, 2x − y = t.
Solving for x and y in terms of s and t, we obtain x = (1/2)s + (1/2)t, y = s. Thus F −1 is given by
the formula F −1 (s, t) = ((1/2)s + (1/2)t, s).
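
Since F acts on R2 through a matrix in the usual basis, F −1 can equally be found by matrix inversion. A numpy sketch (added to these notes) reproducing the formula above:

```python
import numpy as np

# Matrix of F(x, y) = (y, 2x - y) in the usual basis of R^2.
A = np.array([[0,  1],
              [2, -1]], dtype=float)

A_inv = np.linalg.inv(A)           # exists because F is non-singular
s, t = 3.0, 5.0
print(A_inv @ np.array([s, t]))    # [4. 3.], i.e. F^(-1)(s, t) = (s/2 + t/2, s)
```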

3.6 Matrix representation of a linear operator


Definition 3.6.1 (Matrix representation of a linear operator)
Let F be a linear operator on a vector space V over a field K, and suppose that {e1 , ..., en } is a basis of
V . Now F (e1 ), ..., F (en ) are vectors in V and so each is a linear combination of the elements


of the basis {ei }.


F (e1 ) = a11 e1 + a12 e2 + ... + a1n en
F (e2 ) = a21 e1 + a22 e2 + ... + a2n en
F (e3 ) = a31 e1 + a32 e2 + ... + a3n en
...
F (en ) = an1 e1 + an2 e2 + ... + ann en
In matrix form, we have

( F (e1 ) )   ( a11 a12 ... a1n ) ( e1 )
( F (e2 ) ) = ( a21 a22 ... a2n ) ( e2 )
(   ...   )   ( ................ ) ( ... )
( F (en ) )   ( an1 an2 ... ann ) ( en )

i.e. F = AE.

The transpose of the above matrix of coefficients, denoted by [F ]e or [F ] (which is equal to Aᵗ),
is called the matrix representation of F relative to the basis {ei } or simply the matrix of F in
the basis {ei }; i.e.

        ( a11 a21 ... an1 )
[F ]e = ( a12 a22 ... an2 )
        ( ................ )
        ( a1n a2n ... ann )

Examples 3.6.2.
1 Let V be the vector space of polynomials in t over R of degree ≤ 3 and let D : V → V
be the differential operator defined by D(p(t)) = d(p(t))/dt. Compute the matrix of D in
the basis {1, t, t2 , t3 }.

Solution
D(1) = 0 = 0 · 1 + 0t + 0t2 + 0t3
D(t) = 1 = 1 · 1 + 0t + 0t2 + 0t3
D(t2 ) = 2t = 0 · 1 + 2t + 0t2 + 0t3
D(t3 ) = 3t2 = 0 · 1 + 0t + 3t2 + 0t3
In matrix form

( D(1)  )   ( 0 0 0 0 ) ( 1  )
( D(t)  ) = ( 1 0 0 0 ) ( t  )
( D(t2) )   ( 0 2 0 0 ) ( t2 )
( D(t3) )   ( 0 0 3 0 ) ( t3 )
Accordingly

      ( 0 1 0 0 )
[D] = ( 0 0 2 0 )
      ( 0 0 0 3 )
      ( 0 0 0 0 )


2 Let F be the linear operator on R2 defined by F (x, y) = (4x − 2y, 2x + y). Compute the
matrix of F in the basis {f1 = (1, 1), f2 = (−1, 0)}.

Solution

F (f1 ) = F (1, 1)
= (2, 3)
= 3(1, 1) + (−1, 0)
= 3f1 + f2 .
F (f2 ) = F (−1, 0)
= (−4, −2)
= −2(1, 1) + 2(−1, 0)
= −2f1 + 2f2 .

In matrix form

( F (f1 ) )   (  3  1 ) ( f1 )
( F (f2 ) ) = ( −2  2 ) ( f2 )

Thus

        ( 3 −2 )
[F ]f = ( 1  2 )
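
Both computations follow one recipe: express each F (fi ) in the given basis by solving a linear system, and use the resulting coordinates as columns. A numpy sketch (added to these notes; matrix_in_basis is a name chosen here) reproducing [F ]f :

```python
import numpy as np

def matrix_in_basis(F, basis):
    # Solve B x = F(f_i) for each basis vector f_i; the solutions are
    # the columns of [F] in that basis (the transpose convention above).
    B = np.array(basis, dtype=float).T            # basis vectors as columns
    return np.column_stack([np.linalg.solve(B, F(f)) for f in basis])

F = lambda v: np.array([4 * v[0] - 2 * v[1], 2 * v[0] + v[1]], dtype=float)
print(matrix_in_basis(F, [(1, 1), (-1, 0)]))
# [[ 3. -2.]
#  [ 1.  2.]]
```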

Theorem 3.6.3
Let {e1 , ..., en } be a basis of V and F a linear operator on V . Then for any v ∈ V ,

[F ]e [v]e = [F (v)]e .

Examples 3.6.4.

(1) Consider the differential operator D : V → V of Example 3.6.2, (no 1). Let

p(t) = a + bt + ct2 + dt3 ,

then

D(p(t)) = b + 2ct + 3dt2 .


Hence, relative to the basis {1, t, t2 , t3 },

         ( a )                     ( b  )
[p(t)] = ( b )   and [D(p(t))] =   ( 2c )
         ( c )                     ( 3d )
         ( d )                     ( 0  )


    
[D] [p(t)] = ( 0 1 0 0 ) ( a )   ( b  )
             ( 0 0 2 0 ) ( b ) = ( 2c ) = [D(p(t))]
             ( 0 0 0 3 ) ( c )   ( 3d )
             ( 0 0 0 0 ) ( d )   ( 0  )

which verifies the validity of Theorem 3.6.3.

(2) Consider the linear operator F : R2 → R2 of Example 3.6.2 (no 2) i.e.

F (x, y) = (4x − 2y, 2x + y).

Let v = (5, 7). Then

v = (5, 7) = 7(1, 1) + 2(−1, 0) = 7f1 + 2f2


F (v) = (6, 17) = 17(1, 1) + 11(−1, 0) = 17f1 + 11f2

where f1 = (1, 1) and f2 = (−1, 0). Hence, relative to the basis {f1 , f2 },

        ( 7 )                    ( 17 )
[v]f =  ( 2 )   and [F (v)]f =   ( 11 )

But

[F ]f [v]f = ( 3 −2 ) ( 7 ) = ( 17 ) = [F (v)]f
             ( 1  2 ) ( 2 )   ( 11 )

which verifies Theorem 3.6.3.

Theorem 3.6.5 (3.6.1 and 3.6.3 for linear mappings).


We now consider the general case of linear mappings from one space into another. Let V and
U be vector spaces over the same field K and say, dim V = m and dim U = n. Furthermore,
let {e1 , ..., em } and {f1 , f2 , ..., fn } be arbitrary but fixed bases of V and U respectively. Suppose
that F : V → U is a linear mapping. Then the vectors F (e1 ), ..., F (em ) belong to U and so
each is a linear combination of the fj .

F (e1 ) = a11 f1 + a12 f2 + ... + a1n fn
F (e2 ) = a21 f1 + a22 f2 + ... + a2n fn
...
F (em ) = am1 f1 + am2 f2 + ... + amn fn .

In matrix form, we have

( F (e1 ) )   ( a11 a12 ... a1n ) ( f1 )
( F (e2 ) ) = ( a21 a22 ... a2n ) ( f2 )
(   ...   )   ( ................ ) ( ... )
( F (em ) )   ( am1 am2 ... amn ) ( fn )


The transpose of the above matrix of coefficients, denoted by [F ]fe , is called the matrix rep-
resentation of F relative to the bases {ei } and {fj }, or the matrix of F in the bases {ei } and
{fj };

         ( a11 a21 ... am1 )
[F ]fe = ( a12 a22 ... am2 )
         ( ................ )
         ( a1n a2n ... amn )
The following theorem applies.

Theorem 3.6.6.
For any vector v ∈ V , [F ]fe [v]e = [F (v)]f .

Example 3.6.7
Let F : R3 → R2 be the linear mapping defined by F (x, y, z) = (3x + 2y − 4z, x − 5y + 3z).

(i) Find the matrix of F in the following bases of R3 and R2 :

{f1 = (1, 1, 1), f2 = (1, 1, 0), f3 = (1, 0, 0)}


{g1 = (1, 3), g2 = (2, 5)}

(ii) Verify that the action of F is preserved by its matrix representation, that is, for any
v ∈ R3 ,
[F ]gf [v]f = [F (v)]g .
Solution.

(i) Write (a, b) = (2b − 5a)g1 + (3a − b)g2 . Hence

F (f1 ) = F (1, 1, 1) = (1, −1) = −7g1 + 4g2
F (f2 ) = F (1, 1, 0) = (5, −4) = −33g1 + 19g2
F (f3 ) = F (1, 0, 0) = (3, 1) = −13g1 + 8g2 .

Thus

         ( −7 −33 −13 )
[F ]gf = (  4  19   8 )

(ii) Exercise.
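
As a numerical cross-check of part (i), and a route to part (ii), the sketch below (added to these notes, assuming numpy) computes [F ]gf column by column and tests the identity of Theorem 3.6.6 on one vector:

```python
import numpy as np

F = lambda v: np.array([3 * v[0] + 2 * v[1] - 4 * v[2],
                        v[0] - 5 * v[1] + 3 * v[2]], dtype=float)

fs = [np.array(f, dtype=float) for f in [(1, 1, 1), (1, 1, 0), (1, 0, 0)]]
G = np.array([(1, 3), (2, 5)], dtype=float).T    # g1, g2 as columns

# Column i of [F]_f^g holds the g-coordinates of F(f_i).
M = np.column_stack([np.linalg.solve(G, F(f)) for f in fs])
print(M)                                         # [[ -7. -33. -13.], [ 4. 19. 8.]]

# Theorem 3.6.6: [F]_f^g [v]_f = [F(v)]_g, tried on v = f1 + f2 + f3 = (3, 2, 1).
v_f = np.array([1.0, 1.0, 1.0])
v = sum(c * f for c, f in zip(v_f, fs))
assert np.allclose(M @ v_f, np.linalg.solve(G, F(v)))
```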
