MTH 311: Advanced Linear Algebra
Semester 1, 2020-2021
Dr. Prahlad Vaidyanathan

Contents

I. Preliminaries
1. Fields
2. Matrices and Elementary Row Operations
3. Matrix Multiplication
4. Invertible Matrices

II. Vector Spaces
1. Definition and Examples
2. Subspaces
3. Bases and Dimension
4. Coordinates
5. Summary of Row Equivalence

III. Linear Transformations
1. Linear Transformations
2. The Algebra of Linear Transformations
3. Isomorphism
4. Representation of Transformations by Matrices
5. Linear Functionals
6. The Double Dual
7. The Transpose of a Linear Transformation

IV. Polynomials
1. Algebras
2. Algebra of Polynomials
3. Lagrange Interpolation
4. Polynomial Ideals
5. Prime Factorization of a Polynomial

V. Determinants
1. Commutative Rings
2. Determinant Functions
3. Permutations and Uniqueness of Determinants
4. Additional Properties of Determinants

VI. Elementary Canonical Forms
1. Introduction
2. Characteristic Values
3. Annihilating Polynomials
4. Invariant Subspaces
5. Direct-Sum Decomposition
6. Invariant Direct Sums
7. Simultaneous Triangulation; Simultaneous Diagonalization
8. The Primary Decomposition Theorem

VII. The Rational and Jordan Forms
1. Cyclic Subspaces and Annihilators
2. Cyclic Decompositions and the Rational Form
3. The Jordan Form

VIII. Inner Product Spaces
1. Inner Products
2. Inner Product Spaces
3. Linear Functionals and Adjoints
4. Unitary Operators
5. Normal Operators

IX. Instructor Notes
I. Preliminaries

1. Fields
Throughout this course, we will be talking about "vector spaces" and "fields". The definition of a vector space depends on that of a field, so we begin with that.

Example 1.1. Consider F = R, the set of all real numbers. It comes equipped with two operations, addition and multiplication, which have the following properties:
(i) Addition is commutative:
x + y = y + x
for all x, y ∈ F.
(ii) Addition is associative:
x + (y + z) = (x + y) + z
for all x, y, z ∈ F.
(iii) There is an additive identity, 0 (zero), with the property that
x + 0 = 0 + x = x
for all x ∈ F.
(iv) For each x ∈ F, there is an additive inverse (−x) ∈ F which satisfies
x + (−x) = (−x) + x = 0
(v) Multiplication is commutative:
xy = yx
for all x, y ∈ F.
(vi) Multiplication is associative:
x(yz) = (xy)z
for all x, y, z ∈ F.
(vii) There is a multiplicative identity, 1 (one), with the property that
x·1 = 1·x = x
for all x ∈ F.
(viii) For each non-zero x ∈ F, there is a multiplicative inverse x⁻¹ ∈ F which satisfies
x·x⁻¹ = x⁻¹·x = 1
(ix) Finally, multiplication distributes over addition:
x(y + z) = xy + xz
for all x, y, z ∈ F.
Definition 1.2. A field is a set F together with two operations
Addition : (x, y) ↦ x + y
Multiplication : (x, y) ↦ xy
which satisfy all the conditions (i)-(ix) above. Elements of a field will be termed scalars.
Example 1.3. (i) F = R is a field.
(ii) F = C is a field with the usual operations
Addition : (a + ib) + (c + id) := (a + c) + i(b + d), and
Multiplication : (a + ib)(c + id) := (ac − bd) + i(ad + bc)
(iii) F = Q, the set of all rational numbers, is also a field. In fact, Q is a subfield of R (in the sense that it is a subset of R which also inherits the operations of addition and multiplication from R). Also, R is a subfield of C.
(iv) F = Z is not a field, because 2 ∈ Z does not have a multiplicative inverse.

Standing Assumption: For the rest of this course, all fields will be denoted by F, and F will either be R or C, unless stated otherwise.
2. Matrices and Elementary Row Operations
Definition 2.1. Let F be a field and n, m ∈ N be fixed integers. Given m scalars (y₁, y₂, ..., yₘ) ∈ Fᵐ and mn elements {aᵢⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} ⊆ F, ...
But R has n rows, so r ≤ n.

(ii) ⇒ (iii): If A is row-equivalent to the identity matrix, then R = I in the above equation. Thus, A = P⁻¹. But the inverse of an elementary matrix is again an elementary matrix. Thus, by Theorem 4.3, P⁻¹ is also a product of elementary matrices.
(iii) ⇒ (i): This follows from Theorem 4.4 and Theorem 4.3.
The next corollary follows from Theorem 4.6 and Corollary 3.7.

Corollary 4.7. Let A and B be m × n matrices. Then B is row-equivalent to A if and only if B = PA for some invertible matrix P.
Theorem 4.8. For an n × n matrix A, the following are equivalent:
(i) A is invertible.
(ii) The homogeneous system AX = 0 has only the trivial solution X = 0.
(iii) For every vector Y ∈ Fⁿ, the system of equations AX = Y has a solution.
Proof. Once again, we prove (i) ⇒ (ii) ⇒ (i), and (i) ⇒ (iii) ⇒ (i).
(i) ⇒ (ii): Let B = A⁻¹ and X be a solution to the homogeneous system AX = 0; then
X = IX = (BA)X = B(AX) = B(0) = 0
Hence, X = 0 is the only solution.
(ii) ⇒ (i): Suppose AX = 0 has only the trivial solution; then A is row-equivalent to the identity matrix by Theorem 2.11. Hence, A is invertible by Theorem 4.6.
(i) ⇒ (iii): Given a vector Y, consider X := A⁻¹Y; then AX = Y by associativity of matrix multiplication.
(iii) ⇒ (i): Let R be a row-reduced echelon matrix that is row-equivalent to A. By Theorem 4.6, it suffices to show that R = I. Since R is a row-reduced echelon matrix, it suffices to show that the n-th row of R is non-zero. So set
Y = (0, 0, ..., 1)
Then the equation RX = Y has a solution, which must necessarily be non-zero (since Y ≠ 0). Thus, the last row of R cannot be zero. Hence, R = I, whence A is invertible. □

Corollary 4.9. A square matrix which is either left or right invertible is invertible.
Proof. Suppose A is left-invertible; then there exists a matrix B so that BA = I. If X is a vector so that AX = 0, then X = (BA)X = B(AX) = 0. Hence, the equation AX = 0 has only the trivial solution. By Theorem 4.8, A is invertible.
Now suppose A is right-invertible; then there exists a matrix B so that AB = I. If Y is any vector, then X := BY has the property that AX = Y. Hence, by Theorem 4.8, A is invertible. □
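Corollary 4.9 is easy to check numerically. The following sketch (Python with NumPy; not part of the original notes) constructs a left inverse of a square matrix by solving BA = I column by column, and observes that it is automatically a right inverse as well:

```python
import numpy as np

# A 2x2 matrix over R (floats standing in for field elements).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Build a LEFT inverse B by solving B A = I, i.e. A^T B^T = I,
# without calling a ready-made matrix-inverse routine.
B = np.linalg.solve(A.T, np.eye(2)).T
assert np.allclose(B @ A, np.eye(2))   # B is a left inverse

# Corollary 4.9: for a square matrix, the left inverse is
# automatically a right inverse too.
assert np.allclose(A @ B, np.eye(2))
```

If A were not invertible, `np.linalg.solve` would raise an error, which matches Theorem 4.8: a singular square matrix admits no one-sided inverse either.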
Corollary 4.10. Let A = A₁A₂⋯Aₖ, where the Aᵢ are n × n matrices. Then A is invertible if and only if each Aᵢ is invertible.

Proof. If each Aᵢ is invertible, then A is invertible by Theorem 4.3. Conversely, suppose A is invertible and X is a vector such that AₖX = 0; then
AX = (A₁A₂⋯Aₖ₋₁)AₖX = 0
Since A is invertible, this forces X = 0. Hence, the only solution to the equation AₖX = 0 is the trivial solution. By Theorem 4.8, it follows that Aₖ is invertible. Hence,
A₁A₂⋯Aₖ₋₁ = AAₖ⁻¹
is invertible. Now, by induction on k, each Aᵢ is invertible for 1 ≤ i ≤ k. □

II. Vector Spaces

1. Definition and Examples

Definition 1.1. A vector space over a field F consists of a set V together with two operations
(Addition): V × V → V given by (α, β) ↦ α + β
(Scalar Multiplication): F × V → V given by (c, α) ↦ cα
with the following properties:
(i) Addition is commutative:
α + β = β + α
for all α, β ∈ V.
(ii) Addition is associative:
α + (β + γ) = (α + β) + γ
for all α, β, γ ∈ V.
(iii) There is a unique zero vector 0 ∈ V which satisfies the equation
α + 0 = 0 + α = α
for all α ∈ V.
(iv) For each vector α ∈ V, there is a unique vector (−α) ∈ V such that
α + (−α) = (−α) + α = 0
(v) For each α ∈ V,
1·α = α
(vi) For every c₁, c₂ ∈ F and α ∈ V,
(c₁c₂)α = c₁(c₂α)
(vii) For every c ∈ F and α, β ∈ V,
c(α + β) = cα + cβ
(viii) For every c₁, c₂ ∈ F and α ∈ V,
(c₁ + c₂)α = c₁α + c₂α
An element of the set V is called a vector, while an element of F is called a scalar. Technically, a vector space is a tuple (V, F, +, ·), but usually we simply say that V is a vector space over F, when the operations + and · are implicit.
Example 1.2. (i) The n-tuple space Fⁿ: Let F be any field and V be the set of all n-tuples α = (x₁, x₂, ..., xₙ) whose entries xᵢ are in F. If β = (y₁, y₂, ..., yₙ) ∈ V and c ∈ F, we define addition by
α + β := (x₁ + y₁, x₂ + y₂, ..., xₙ + yₙ)
and scalar multiplication by
cα := (cx₁, cx₂, ..., cxₙ)
One can then verify that V = Fⁿ satisfies all the conditions of Definition 1.1.
(ii) The space of m × n matrices F^{m×n}: Let F be a field and m, n ∈ N be positive integers. Let F^{m×n} be the set of all m × n matrices with entries in F. For matrices A, B ∈ F^{m×n}, we define addition by
(A + B)ᵢⱼ := Aᵢⱼ + Bᵢⱼ
and scalar multiplication by
(cA)ᵢⱼ := cAᵢⱼ
for any c ∈ F. [Observe that F^{1×n} = Fⁿ from the previous example.]
(iii) The space of functions from a set to a field: Let F be a field and S a non-empty set. Let V denote the set of all functions from S taking values in F. For f, g ∈ V, define
(f + g)(s) := f(s) + g(s)
where the addition on the right-hand side is the addition in F. Similarly, scalar multiplication is defined pointwise by
(cf)(s) := cf(s)
where the multiplication on the right-hand side is that of F. Once again, it is easy to verify the axioms (note that the zero vector here is the zero function).
• If S = {1, 2, ..., n}, then a function f : S → F may be identified with the n-tuple (f(1), f(2), ..., f(n)). Conversely, any n-tuple (x₁, x₂, ..., xₙ) may be thought of as a function. This identification shows that the first example is a special case of this example.
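The pointwise operations above are easy to exhibit in code. The following sketch (plain Python; illustrative, not from the notes) forms cf + g pointwise and checks the identification of a function on S = {1, 2, 3} with a 3-tuple:

```python
# Vectors in V = {functions S -> F}, with F = R and S = {1, 2, 3}.
S = [1, 2, 3]

def add(f, g):
    """Pointwise addition: (f + g)(s) = f(s) + g(s)."""
    return lambda s: f(s) + g(s)

def scale(c, f):
    """Pointwise scalar multiplication: (c f)(s) = c * f(s)."""
    return lambda s: c * f(s)

f = lambda s: s * s       # f(s) = s^2
g = lambda s: 2 * s + 1   # g(s) = 2s + 1

h = add(scale(3, f), g)   # h = 3f + g
assert [h(s) for s in S] == [3 * s * s + 2 * s + 1 for s in S]

# Identification with tuples: f corresponds to (f(1), f(2), f(3)).
assert tuple(f(s) for s in S) == (1, 4, 9)
```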
• Similarly, if S = {(i, j) : 1 ≤ i ≤ m, 1 ≤ j ≤ n}, then a function f : S → F may be identified with a matrix A ∈ F^{m×n} where Aᵢⱼ := f(i, j). This identification is a bijection between the set of functions from S → F and the space F^{m×n}. Thus, the second example is also a special case of this one.
(iv) The space of polynomial functions over a field: Let F be a field, and V be the set of all functions f : F → F which are of the form
f(x) = c₀ + c₁x + ... + cₙxⁿ
for some scalars c₀, c₁, ..., cₙ ∈ F. Such a function is called a polynomial function. With addition and scalar multiplication defined exactly as in the previous example, V forms a vector space.
(v) Let C denote the set of all complex numbers and F = R. Then C may be thought of as a vector space over R. In fact, C may be identified with R².
Lemma 1.3. (i) For any c ∈ F,
c·0 = 0
where 0 ∈ V denotes the zero vector.
(ii) If c ∈ F is a non-zero scalar and α ∈ V is such that
cα = 0
then α = 0.
(iii) For any α ∈ V,
(−1)·α = −α

Proof. (i) For any c ∈ F,
c0 + 0 = c0 = c(0 + 0) = c0 + c0
Adding −(c0) to both sides, we get c0 = 0.
(ii) If c ∈ F is non-zero and α ∈ V is such that cα = 0, then
c⁻¹(cα) = c⁻¹·0 = 0
But
c⁻¹(cα) = (c⁻¹c)α = 1·α = α
Hence, α = 0.
(iii) For any α ∈ V,
α + (−1)α = 1·α + (−1)α = (1 + (−1))α = 0·α = 0
(here 0·α = 0 by an argument similar to (i)). But α + (−α) = 0 and (−α) is the unique vector with this property. Hence, (−1)α = (−α). □
Remark 1.4. Since vector space addition is associative, for any vectors α₁, α₂, α₃, α₄ ∈ V, the expression
α₁ + (α₂ + (α₃ + α₄))
can be written in many different ways by moving the parentheses around. For instance,
(α₁ + α₂) + (α₃ + α₄)
denotes the same vector. Hence, we simply drop all parentheses, and write this vector as
α₁ + α₂ + α₃ + α₄
The same is true for any finite number of vectors α₁, α₂, ..., αₙ ∈ V, so the expression
α₁ + α₂ + ... + αₙ
denotes the common vector associated to all possible re-arrangements of parentheses.
The next definition is the most fundamental operation in a vector space, and is the reason for defining our axioms the way we have done.

Definition 1.5. Let V be a vector space over a field F, and α₁, α₂, ..., αₙ ∈ V. A vector β ∈ V is said to be a linear combination of α₁, α₂, ..., αₙ if there exist scalars c₁, c₂, ..., cₙ ∈ F such that
β = c₁α₁ + c₂α₂ + ... + cₙαₙ
When this happens, we write
β = ∑ᵢ₌₁ⁿ cᵢαᵢ
Note that, by the distributivity properties (Properties (vii) and (viii) of Definition 1.1), we have
∑ᵢ₌₁ⁿ cᵢαᵢ + ∑ᵢ₌₁ⁿ dᵢαᵢ = ∑ᵢ₌₁ⁿ (cᵢ + dᵢ)αᵢ
and
c(∑ᵢ₌₁ⁿ dᵢαᵢ) = ∑ᵢ₌₁ⁿ (cdᵢ)αᵢ

Exercise: Read the end of [Hoffman-Kunze, Section 2.1] concerning the geometric interpretation of vector spaces, addition, and scalar multiplication.
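The two identities above can be verified numerically for concrete vectors. The following NumPy sketch (illustrative, not from the notes) checks them for a pair of sample vectors in R³:

```python
import numpy as np

# Two vectors alpha_1, alpha_2 in R^3 and two lists of scalars.
alphas = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, -1.0])]
c = [2.0, -3.0]
d = [1.0, 4.0]

# sum c_i a_i + sum d_i a_i == sum (c_i + d_i) a_i
lhs = sum(ci * a for ci, a in zip(c, alphas)) + \
      sum(di * a for di, a in zip(d, alphas))
rhs = sum((ci + di) * a for ci, di, a in zip(c, d, alphas))
assert np.allclose(lhs, rhs)

# k * sum d_i a_i == sum (k d_i) a_i
k = 5.0
lhs2 = k * sum(di * a for di, a in zip(d, alphas))
rhs2 = sum((k * di) * a for di, a in zip(d, alphas))
assert np.allclose(lhs2, rhs2)
```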
2. Subspaces

Definition 2.1. Let V be a vector space over a field F. A subspace of V is a subset W ⊆ V which is itself a vector space with the addition and scalar multiplication operations inherited from V.
Remark 2.2. What this definition means is that W ⊆ V should have the following properties:
(i) If α, β ∈ W, then (α + β) must be in W.
(ii) If α ∈ W and c ∈ F, then cα must be in W.
We say that W is closed under the operations of addition and scalar multiplication.
Theorem 2.3. Let V be a vector space over a field F and W ⊆ V be a non-empty set. Then W is a subspace of V if and only if, for any α, β ∈ W and c ∈ F, the vector (cα + β) lies in W.
Proof. Suppose W is a subspace of V; then W is closed under the operations of scalar multiplication and addition as mentioned above. Hence, if α, β ∈ W and c ∈ F, then cα ∈ W, so (cα + β) ∈ W as well.
Conversely, suppose W satisfies this condition, and we wish to show that W is a subspace. In other words, we wish to show that W satisfies the conditions of Definition 1.1. By hypothesis, the addition map + maps W × W → W, and the scalar multiplication map maps F × W to W.
(i) Addition is commutative because it is commutative in V.
(ii) Addition is associative because it is associative in V.
(iii) V has a zero element 0 ∈ V. To see that this vector lies in W, observe that W is non-empty, so it contains some vector α ∈ W. Then 0·α = 0 ∈ W by Lemma 1.3.
(iv) If α ∈ W, then α ∈ V, so there is a unique vector (−α) ∈ V so that
α + (−α) = 0
But we know that 0 ∈ W, so by Lemma 1.3 once again,
(−α) = 0 − α = 0 + (−1)α ∈ W
(v) For each α ∈ W, we have 1·α = α in V. But the scalar multiplication on W is the same as that of V, so the same property holds in W as well.
(vi) The remaining three properties of a vector space are satisfied in W because they are satisfied in V. (Check!) □
Example 2.4. (i) Let V be any vector space; then W := {0} is a subspace of V. Similarly, W := V is a subspace of V. These are both called the trivial subspaces of V.
(ii) Let V = Fⁿ as in Example 1.2. Let
W := {(x₁, x₂, ..., xₙ) ∈ V : x₁ = 0}
Note that if α = (x₁, x₂, ..., xₙ), β = (y₁, y₂, ..., yₙ) ∈ W and c ∈ F, then
x₁ = y₁ = 0 ⇒ cx₁ + y₁ = 0
Hence, (cα + β) ∈ W. Thus W is a subspace by Theorem 2.3.
Note: If F = R and n = 2, then W defines a line passing through the origin.
(iii) Let V = Fⁿ as before, and let
W := {(x₁, x₂, ..., xₙ) ∈ V : x₁ = 1}
Then W is not a subspace.
Note: If F = R and n = 2, then W defines a line that does not pass through the origin.
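The one-step test of Theorem 2.3 distinguishes these two examples immediately; the NumPy sketch below (illustrative, not from the notes) applies it to sample vectors in R³:

```python
import numpy as np

c = 4.0

# W = {x in R^3 : x_1 = 0} passes the test of Theorem 2.3:
alpha = np.array([0.0, 2.0, 5.0])
beta  = np.array([0.0, -1.0, 3.0])
assert (c * alpha + beta)[0] == 0.0      # c*alpha + beta stays in W

# W' = {x in R^3 : x_1 = 1} fails it, so W' is not a subspace:
alpha2 = np.array([1.0, 2.0, 5.0])
beta2  = np.array([1.0, -1.0, 3.0])
assert (c * alpha2 + beta2)[0] != 1.0    # c*alpha2 + beta2 leaves W'
```

Note that a single failing pair (α, β, c) is enough to rule out being a subspace, while the passing check above is, of course, only a spot check of the general argument in the text.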
(iv) Let V denote the set of all functions from F to F, and let W denote the set of all polynomial functions from F to F. Then W is a subspace of V.
(v) Let V = F^{n×n} denote the set of all n × n matrices over a field F. A matrix A ∈ V is said to be symmetric if
Aᵢⱼ = Aⱼᵢ
for all 1 ≤ i, j ≤ n.

Thus, cα + β is also of the form in Equation II.7, and so cα + β ∈ W. So, by Theorem 2.3, W is a subspace of V. □

(ii) If L is any other subspace of V containing S, then W ⊆ L.
Proof. If β ∈ W, then there exist cᵢ ∈ F and αᵢ ∈ S such that
β = ∑ᵢ cᵢαᵢ
Since L is a subspace containing S, αᵢ ∈ L for all i.

For n ≥ 0, define fₙ ∈ V by
fₙ(x) = xⁿ
Then, W is the subspace spanned by the set {f₀, f₁, f₂, ...}.
Definition 2.10. Let S₁, S₂, ..., Sₖ be k subsets of a vector space V. Define
S₁ + S₂ + ... + Sₖ
to be the set consisting of all vectors of the form
α₁ + α₂ + ... + αₖ
where αᵢ ∈ Sᵢ for all 1 ≤ i ≤ k.
For n ≥ 0, define fₙ ∈ V by
fₙ(x) = xⁿ
Then, as we saw in Example 2.9, S := {f₀, f₁, f₂, ...} is a spanning set. Also, if c₀, c₁, ..., cₖ ∈ F are scalars such that
∑ᵢ₌₀ᵏ cᵢfᵢ = 0
then it follows that the polynomial
c₀ + c₁x + c₂x² + ... + cₖxᵏ
is the zero polynomial. Since a non-zero polynomial can only have finitely many roots, it follows that cᵢ = 0 for all 0 ≤ i ≤ k.

Since {β₁, β₂, ..., βₘ} is a spanning set, there exist scalars expressing each basis vector in terms of it. But the {αᵢ} are a basis, so we conclude that
∑ᵢ PₖᵢQᵢⱼ = 1 if k = j, and 0 if k ≠ j
Thus, if Q = (Qᵢⱼ), then we conclude that
PQ = I
Hence the matrix P chosen above is invertible and Q = P⁻¹. The following theorem is the conclusion of this discussion.
Theorem 4.5. Let V be an n-dimensional vector space and B and B' be two ordered bases of V. Then, there is a unique n × n invertible matrix P such that, for any α in V, we have
[α]_B = P[α]_{B'}
and
[α]_{B'} = P⁻¹[α]_B
Furthermore, the j-th column of P is given by
Pⱼ = [βⱼ]_B
where βⱼ denotes the j-th vector of B'.

Definition 4.6. The matrix P constructed in the above theorem is called a change of basis matrix.
(End of Week 2)
The next theorem is a converse to Theorem 4.5.
Theorem 4.7. Let P be an n × n invertible matrix over F. Let V be an n-dimensional vector space over F and let B be an ordered basis of V. Then there is a unique ordered basis B' of V such that, for any vector α ∈ V, we have
[α]_B = P[α]_{B'}
and
[α]_{B'} = P⁻¹[α]_B
Proof. We write B = {α₁, α₂, ..., αₙ} and set P = (Pᵢⱼ). We define
βⱼ := ∑ᵢ₌₁ⁿ Pᵢⱼαᵢ    (II.6)
Then we claim that B' = {β₁, β₂, ..., βₙ} is a basis for V.
(i) B' is linearly independent: If we have scalars cⱼ ∈ F such that
∑ⱼ₌₁ⁿ cⱼβⱼ = 0
then we get
∑ⱼ₌₁ⁿ cⱼ ∑ᵢ₌₁ⁿ Pᵢⱼαᵢ = ∑ᵢ₌₁ⁿ (∑ⱼ₌₁ⁿ Pᵢⱼcⱼ) αᵢ = 0
Rewriting the above expression, and using the linear independence of B, we conclude that
∑ⱼ₌₁ⁿ Pᵢⱼcⱼ = 0
for each 1 ≤ i ≤ n. Since P is invertible, this forces cⱼ = 0 for all j.
(ii) B' spans V: If Q := P⁻¹ = (Qᵢⱼ), then we have PQ = I, so
∑ⱼ PᵢⱼQⱼₖ = 1 if i = k, and 0 if i ≠ k
Thus, from Equation II.6, we get
∑ⱼ₌₁ⁿ Qⱼₖβⱼ = ∑ⱼ₌₁ⁿ Qⱼₖ ∑ᵢ₌₁ⁿ Pᵢⱼαᵢ = ∑ᵢ₌₁ⁿ (∑ⱼ₌₁ⁿ PᵢⱼQⱼₖ) αᵢ = αₖ    (II.7)
Hence, if α = ∑ₖ₌₁ⁿ xₖαₖ ∈ V, we have
α = ∑ₖ₌₁ⁿ xₖαₖ = ∑ₖ₌₁ⁿ xₖ ∑ⱼ₌₁ⁿ Qⱼₖβⱼ = ∑ⱼ₌₁ⁿ (∑ₖ₌₁ⁿ Qⱼₖxₖ) βⱼ    (II.8)
Thus, every vector α ∈ V is in the subspace spanned by B', whence B' is a basis for V.

Finally, let α ∈ V and suppose
[α]_B = (x₁, x₂, ..., xₙ)ᵗ and [α]_{B'} = (y₁, y₂, ..., yₙ)ᵗ
so that α = ∑ₖ xₖαₖ = ∑ⱼ yⱼβⱼ. Then, by Equation II.8 and the uniqueness of coordinates with respect to B', we get
yⱼ = ∑ₖ₌₁ⁿ Qⱼₖxₖ
Hence,
[α]_{B'} = Q[α]_B = P⁻¹[α]_B
By symmetry, it follows that
[α]_B = P[α]_{B'}
This completes the proof. □

Example 4.8. Let F = R, V = R², and B = {e₁, e₂} the standard ordered basis. For a fixed θ ∈ R, let
P = | cos(θ)  −sin(θ) |
    | sin(θ)   cos(θ) |
Then P is an invertible matrix and
P⁻¹ = |  cos(θ)  sin(θ) |
      | −sin(θ)  cos(θ) |
Then B' = {(cos(θ), sin(θ)), (−sin(θ), cos(θ))} is a basis for V. It is, geometrically, the usual pair of axes rotated by an angle θ. In this basis, for α = (x₁, x₂) ∈ V, we have
[α]_{B'} = |  x₁ cos(θ) + x₂ sin(θ) |
           | −x₁ sin(θ) + x₂ cos(θ) |
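The coordinate formulas of Example 4.8 can be checked numerically for a sample angle. The NumPy sketch below (illustrative, not from the notes) computes [α]_{B'} by solving P[α]_{B'} = [α]_B and compares it with the closed form:

```python
import numpy as np

theta = 0.3
# Change of basis matrix whose columns are the rotated basis vectors.
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

alpha = np.array([1.0, 2.0])              # [alpha]_B in the standard basis

# [alpha]_{B'} = P^{-1} [alpha]_B, computed by solving P y = alpha.
coords_Bprime = np.linalg.solve(P, alpha)

# Closed form from Example 4.8.
expected = np.array([alpha[0] * np.cos(theta) + alpha[1] * np.sin(theta),
                     -alpha[0] * np.sin(theta) + alpha[1] * np.cos(theta)])
assert np.allclose(coords_Bprime, expected)

# Reconstructing alpha: [alpha]_B = P [alpha]_{B'}.
assert np.allclose(P @ coords_Bprime, alpha)
```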
5. Summary of Row Equivalence
Definition 5.1. Let A be an m × n matrix over a field F, and write its rows as vectors
{α₁, α₂, ..., αₘ} ⊆ Fⁿ
(i) The row space of A is the subspace of Fⁿ spanned by this set.
(ii) The row rank of A is the dimension of the row space of A.
Theorem 5.2. Row-equivalent matrices have the same row space.

Proof. If A and B are two row-equivalent m × n matrices, then there is an invertible matrix P such that
B = PA
If {α₁, α₂, ..., αₘ} are the row vectors of A and {β₁, β₂, ..., βₘ} are the row vectors of B, then
βᵢ = ∑ⱼ₌₁ᵐ Pᵢⱼαⱼ
If W_A and W_B are the row spaces of A and B respectively, then we see that
{β₁, β₂, ..., βₘ} ⊆ W_A
Since W_B is the smallest subspace containing this set, we conclude that
W_B ⊆ W_A
Since row equivalence is an equivalence relation, we have W_A ⊆ W_B as well. □
Theorem 5.3. Let R be a row-reduced echelon matrix; then the non-zero rows of R form a basis for the row space of R.

Proof. Let ρ₁, ρ₂, ..., ρᵣ be the non-zero rows of R and write
ρᵢ = (Rᵢ₁, Rᵢ₂, ..., Rᵢₙ)
By definition, the set {ρ₁, ρ₂, ..., ρᵣ} spans the row space W_R of R. Therefore, it suffices to check that this set is linearly independent. Since R is a row-reduced echelon matrix, there are positive integers k₁ < k₂ < ... < kᵣ such that, for all i ≤ r, ...

Hence, η₁ = ρ₁. Proceeding in this way, we may conclude that ηᵢ = ρᵢ for all i, whence S = R. □
Corollary 5.5. Every m × n matrix A is row-equivalent to one and only one row-reduced echelon matrix.

Proof. We know that A is row-equivalent to some row-reduced echelon matrix from Theorem I.2.9. If A is row-equivalent to two row-reduced echelon matrices R and S, then by Theorem 5.2, both R and S have the same row space. By Theorem 5.4, R = S. □
Corollary 5.6. Let A and B be two m × n matrices over a field F. Then A is row-equivalent to B if and only if they have the same row space.

Proof. We know from Theorem 5.2 that if A and B are row-equivalent, then they have the same row space.
Conversely, suppose A and B have the same row space. By Theorem I.2.9, A and B are both row-equivalent to row-reduced echelon matrices R and S respectively. By Theorem 5.2, R and S have the same row space. By Theorem 5.4, R = S. Hence, A and B are row-equivalent to each other. □
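The uniqueness statement of Corollary 5.5 can be watched in action. In the sketch below (Python/NumPy; `rref` is a small helper written for illustration, not a library routine), a matrix and an explicit row-equivalent copy B = PA reduce to the same row-reduced echelon form:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Row-reduce M to row-reduced echelon form by elementary row ops."""
    R = M.astype(float).copy()
    rows, cols = R.shape
    pivot_row = 0
    for j in range(cols):
        # Find a pivot in column j at or below pivot_row.
        candidates = np.where(np.abs(R[pivot_row:, j]) > tol)[0]
        if candidates.size == 0:
            continue
        i = pivot_row + candidates[0]
        R[[pivot_row, i]] = R[[i, pivot_row]]    # swap rows
        R[pivot_row] /= R[pivot_row, j]          # scale pivot to 1
        for k in range(rows):                    # clear the pivot column
            if k != pivot_row:
                R[k] -= R[k, j] * R[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return R

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0]])
P = np.array([[1.0, 1.0],        # an invertible matrix
              [0.0, 1.0]])
B = P @ A                        # B is row-equivalent to A

# Same row space, hence (Corollary 5.5) the same echelon form.
assert np.allclose(rref(A), rref(B))
```

Exact arithmetic over Q would avoid the floating-point tolerance used here; the structure of the algorithm is the same.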
III. Linear Transformations

1. Linear Transformations

Definition 1.1. Let V and W be two vector spaces over a common field F. A function T : V → W is called a linear transformation if, for any two vectors α, β ∈ V and any scalar c ∈ F, we have
T(cα + β) = cT(α) + T(β)
Example 1.2.
(i) Let V be any vector space and I : V → V be the identity map. Then I is linear.
(ii) Similarly, the zero map 0 : V → V is a linear map.
(iii) Let V = Fⁿ and W = Fᵐ, and A ∈ F^{m×n} be an m × n matrix with entries in F. Then T : V → W given by
T(X) := AX
is a linear transformation by Lemma II.2.5.
(iv) Let V be the space of all polynomials over F. Define D : V → V to be the 'derivative' map, defined by the rule: If
f(x) = c₀ + c₁x + c₂x² + ... + cₙxⁿ
then
(Df)(x) = c₁ + 2c₂x + ... + ncₙxⁿ⁻¹
(v) Let F = R and V be the space of all functions f : R → R that are continuous. (Note that V is, indeed, a vector space with the point-wise operations as in Example II.1.2.) Define T : V → V by
T(f)(x) := ∫₀ˣ f(t) dt
(vi) With V as in the previous example and W = R, we may also define T : V → W by
T(f) := ∫₀¹ f(t) dt
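The derivative map of Example 1.2(iv) acts on the coefficient list of a polynomial, and its linearity can be checked directly. The sketch below (plain Python; the coefficient-list representation is an illustrative choice, not from the notes) verifies D(cf + g) = cD(f) + D(g) for sample polynomials:

```python
# A polynomial c0 + c1 x + ... + cn x^n is stored as [c0, c1, ..., cn];
# D maps it to [c1, 2*c2, ..., n*cn].

def D(coeffs):
    """Derivative map on polynomial coefficient lists."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def add(p, q):
    """Coefficient-wise addition (pad the shorter list with zeros)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

# Linearity check: D(c*f + g) == c*D(f) + D(g).
f = [1, 0, 3]        # 1 + 3x^2
g = [0, 2, 0, 5]     # 2x + 5x^3
c = 4
assert D(add(scale(c, f), g)) == add(scale(c, D(f)), D(g))
```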
Remark 1.3. If T : V → W is a linear transformation, then:
(i) T(0) = 0, because if α := T(0), then
2α = α + α = T(0) + T(0) = T(0 + 0) = T(0) = α
Hence, α = 0 by Lemma II.1.3.
(ii) If β is a linear combination of vectors {α₁, α₂, ..., αₙ}, then we may write
β = ∑ᵢ₌₁ⁿ cᵢαᵢ
for some scalars c₁, c₂, ..., cₙ ∈ F. Then it follows that
T(β) = ∑ᵢ₌₁ⁿ cᵢT(αᵢ)
Theorem 1.4. Let V be a finite-dimensional vector space over a field F and let {α₁, α₂, ..., αₙ} be an ordered basis of V. Let W be another vector space over F and {β₁, β₂, ..., βₙ} be any set of n vectors in W. Then, there is a unique linear transformation T : V → W such that
T(αᵢ) = βᵢ for all 1 ≤ i ≤ n

Proof. (i) Existence: Given α ∈ V, there exist unique scalars x₁, x₂, ..., xₙ ∈ F such that α = ∑ᵢ₌₁ⁿ xᵢαᵢ. Define T : V → W by
T(α) := ∑ᵢ₌₁ⁿ xᵢβᵢ
Since the above expression is uniquely associated to α, this map is well-defined.
Now we check linearity: If
β = ∑ᵢ₌₁ⁿ dᵢαᵢ
and c ∈ F is a scalar, then we have
cα + β = ∑ᵢ₌₁ⁿ (cxᵢ + dᵢ)αᵢ
So by definition,
T(cα + β) = ∑ᵢ₌₁ⁿ (cxᵢ + dᵢ)βᵢ
Now consider
cT(α) + T(β) = c ∑ᵢ₌₁ⁿ xᵢβᵢ + ∑ᵢ₌₁ⁿ dᵢβᵢ = ∑ᵢ₌₁ⁿ (cxᵢ + dᵢ)βᵢ = T(cα + β)
Hence, T is linear as required.
(ii) Uniqueness: If S : V → W is another linear transformation such that
S(αᵢ) = βᵢ for all 1 ≤ i ≤ n
then S and T agree on a basis of V, so by linearity S(α) = T(α) for every α ∈ V, whence S = T. □

Example 1.5. (i) Let α₁ = (1, 2) and α₂ = (3, 4). There is a unique linear transformation T : R² → R³ such that
T(α₁) = (3, 2, 1) and T(α₂) = (6, 5, 4)
We find T(e₁): To do that, we write
e₁ = c₁α₁ + c₂α₂ = (c₁ + 3c₂, 2c₁ + 4c₂) = (1, 0)
Hence,
c₁ = −2, c₂ = 1
so that
T(e₁) = −2T(α₁) + T(α₂) = −2(3, 2, 1) + (6, 5, 4) = (0, 1, 2)
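The same computation can be carried out numerically: express e₁ in the basis {α₁, α₂} by solving a linear system, then apply linearity. The NumPy sketch below (illustrative, not from the notes) reproduces the answer T(e₁) = (0, 1, 2):

```python
import numpy as np

# T : R^2 -> R^3 is determined (Theorem 1.4) by its values on the
# basis {alpha1, alpha2}.
alpha1, alpha2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
T_alpha1 = np.array([3.0, 2.0, 1.0])
T_alpha2 = np.array([6.0, 5.0, 4.0])

# Express e1 in the basis: solve c1*alpha1 + c2*alpha2 = e1.
M = np.column_stack([alpha1, alpha2])
c = np.linalg.solve(M, np.array([1.0, 0.0]))
assert np.allclose(c, [-2.0, 1.0])

# By linearity, T(e1) = c1*T(alpha1) + c2*T(alpha2).
T_e1 = c[0] * T_alpha1 + c[1] * T_alpha2
assert np.allclose(T_e1, [0.0, 1.0, 2.0])
```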
(ii) If T : Fⁿ → Fᵐ is a linear transformation, then T is uniquely determined by the vectors
βᵢ = T(eᵢ), 1 ≤ i ≤ n

T(∑ cᵢαᵢ) = 0 ⇒ ∑ cᵢαᵢ ∈ ker(T)
Hence, there exist scalars d₁, d₂, ..., dₖ ∈ F such that
∑ᵢ cᵢαᵢ = ∑ⱼ₌₁ᵏ dⱼαⱼ
Since the set B is linearly independent, we conclude that
cᵢ = 0 = dⱼ
for all i and j.
Lemma 2.1. Let T, U : V → W be two linear transformations, and c ∈ F a scalar.
(i) Define (T + U) : V → W by
(T + U)(α) := T(α) + U(α)
(ii) Define (cT) : V → W by
(cT)(α) := cT(α)
Then (T + U) and cT are both linear transformations.
Proof. We prove that (T + U) is a linear transformation. The proof for (cT) is similar. Fix α, β ∈ V and d ∈ F a scalar, and consider
(T + U)(dα + β) = T(dα + β) + U(dα + β)
= dT(α) + T(β) + dU(α) + U(β)
= d(T(α) + U(α)) + (T(β) + U(β))
= d(T + U)(α) + (T + U)(β)
Hence, (T + U) is linear. □
Definition 2.2. Let V and W be two vector spaces over a common field F. Let L(V, W) denote the space of all linear transformations from V to W.

Theorem 2.3. Under the operations defined in Lemma 2.1, L(V, W) is a vector space.

Proof. By Lemma 2.1, the operations
+ : L(V, W) × L(V, W) → L(V, W)
and
· : F × L(V, W) → L(V, W)
are well-defined operations. We now need to verify all the axioms of Definition II.1.1. For convenience, we simply verify a few of them, and leave the rest for you.
(i) Addition is commutative: If T, U ∈ L(V, W), we need to check that (T + U) = (U + T). Hence, we need to check that, for any α ∈ V,
(U + T)(α) = (T + U)(α)
But this follows from the fact that addition in W is commutative, and so
(T + U)(α) = T(α) + U(α) = U(α) + T(α) = (U + T)(α)
(ii) Observe that the zero linear transformation 0 : V → W is the zero element in L(V, W).
(iii) Let d ∈ F and T, U ∈ L(V, W); then we verify that
d(T + U) = dT + dU
So fix α ∈ V; then
[d(T + U)](α) = d((T + U)(α))
= d(T(α) + U(α))
= dT(α) + dU(α)
= (dT + dU)(α)
This is true for every α ∈ V, so d(T + U) = dT + dU.
Theorem 2.4. Let V and W be two finite-dimensional vector spaces over F. Then L(V, W) is finite-dimensional, and
dim(L(V, W)) = dim(V) dim(W)

Proof. Let
B = {α₁, α₂, ..., αₙ} and B' = {β₁, β₂, ..., βₘ}
be bases of V and W respectively. Then, we wish to show that
dim(L(V, W)) = mn
For each 1 ≤ p ≤ m and 1 ≤ q ≤ n, ...

Let T : V → V be the linear transformation
T(f)(x) := xf(x)
Then, if fₙ(x) = xⁿ and n ≥ 1, then
(DT − TD)(fₙ)(x) = DT(fₙ)(x) − TD(fₙ)(x)
= D(xfₙ(x)) − T(nxⁿ⁻¹)
= D(xⁿ⁺¹) − nxⁿ
= (n + 1)xⁿ − nxⁿ = xⁿ = fₙ(x)
By Theorem 1.4,
DT − TD = I
In particular, DT ≠ TD. Hence, composition of operators is not necessarily a commutative operation.
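The identity DT − TD = I can be tested on coefficient lists, where D is the derivative map and T is multiplication by x. The sketch below (plain Python; the representation is an illustrative choice, not from the notes) checks the commutator on the monomials fₙ = xⁿ:

```python
# Polynomials as coefficient lists [c0, c1, ...]; f_n = x^n.

def D(p):
    """Derivative map on coefficient lists."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def T(p):
    """Multiplication by x: shifts all coefficients up one degree."""
    return [0] + p

def commutator(p):
    """(DT - TD)(p), with the shorter list zero-padded."""
    dt, td = D(T(p)), T(D(p))
    n = max(len(dt), len(td))
    dt = dt + [0] * (n - len(dt))
    td = td + [0] * (n - len(td))
    return [a - b for a, b in zip(dt, td)]

# (DT - TD) f_n = f_n for each monomial, so DT - TD = I and DT != TD.
for n in range(5):
    f = [0] * n + [1]                    # f_n(x) = x^n
    result = commutator(f)
    while len(result) > len(f) and result[-1] == 0:
        result.pop()                     # trim trailing zeros
    assert result == f
```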
(ii) Let B = {α₁, α₂, ..., αₙ} be an ordered basis of a vector space V. For 1 ≤ p, q ≤ n, ...
(End of Week 3)
Definition 2.9. A linear transformation T : V → W is said to be invertible if there is a linear transformation S : W → V such that
ST = I_V and TS = I_W
Definition 2.10. A function f : S → T between two sets is said to be
(i) injective if f is one-to-one. In other words, if x, y ∈ S and f(x) = f(y), then x = y.
(ii) surjective if f is onto. In other words, for any z ∈ T, there exists x ∈ S such that f(x) = z.
(iii) bijective if f is both injective and surjective.
Theorem 2.11. Let T : V → W be a linear transformation. Then T is invertible if and only if T is bijective.

Proof. (i) If T is invertible, then there is a linear transformation S : W → V as above.
(a) T is injective: If α, β ∈ V are such that T(α) = T(β), then
ST(α) = ST(β)
But ST = I_V, so α = β.