Linear algebra and geometry

Jakob Scholbach
Contents

0 Preface
2 Vector spaces
  2.1 $\mathbf{R}^2$, $\mathbf{R}^3$ and $\mathbf{R}^n$
  2.2 Solution sets of homogeneous linear systems
  2.3 Intersection of subspaces
  2.4 Further examples of vector spaces
  2.5 Linear combinations
  2.6 Linear independence
  2.7 Bases
  2.8 The dimension of a vector space
  2.9 Exercises
3 Linear maps
  3.1 Definition and first examples
  3.2 Multiplication of a matrix with a vector
  3.3 Outlook: current research
  3.4 Kernel and image of a linear map
  3.5 Revisiting linear systems
  3.6 Linear maps defined on basis vectors
  3.7 Matrices associated to linear maps
  3.8 Composing linear maps and multiplying matrices
  3.9 Inverses
  3.10 Transposition of matrices
  3.11 Exercises
4 Determinants
  4.1 Determinants of $2 \times 2$-matrices
  4.2 Determinants of larger matrices
  4.3 Invertibility and determinants
  4.4 Further properties of determinants
  4.5 Exercises
References
Chapter 0
Preface
These are growing notes for a lecture on Linear algebra and geometry, offered in Spring 2023 at the University of Padova to an audience of engineering students.
Chapter 1

Systems of linear equations
Equations such as
\[ x^2 + 4y^3 = 5 \]
\[ \log(x) - 4\sin(x) = 0 \]
are not primarily studied in linear algebra. For such more complicated equations, linear algebra is still useful, however. This is accomplished by replacing such equations by linear approximations. The first idea in that direction is the derivative of a function $f$, which serves as a best linear approximation of a differentiable function. Such linearization techniques are beyond the scope of this lecture.
\[ \begin{aligned} x + y &= 4 \qquad (1.9) \\ x - y &= 1. \end{aligned} \]
Solving the first equation for $x$ gives $x = -y + 4$; substituting this into the second equation gives
\[ (-y + 4) - y = 1, \]
or
\[ -2y + 4 = 1 \]
or
\[ -2y = -3 \]
or finally
\[ y = \frac{3}{2}. \]
Inserting this back above gives
\[ x = -\frac{3}{2} + 4 = \frac{5}{2}. \]
Note that again each equation holds (for given values of $x$ and $y$) precisely if the preceding one holds. Thus, the original system has the same solution set as the last two equations (together). This system of equations therefore has a unique solution, namely
\[ \left(x = \frac{5}{2},\ y = \frac{3}{2}\right). \]
To say the same using different symbols: the solution set of the system (1.9) is a set consisting of a single element:
\[ \left\{\left(\frac{5}{2}, \frac{3}{2}\right)\right\}. \]
It is very useful to also understand this process geometrically,
which we do by plotting the two lines that are the solutions of the
individual equations:
[Figure: the lines $x + y = 4$ and $x - y = 1$, intersecting in the single point $(5/2, 3/2)$.]
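As a quick numerical cross-check (not part of the original notes, and assuming NumPy is available), the same small system can be solved directly:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system (1.9):
#   x + y = 4
#   x - y = 1
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([4.0, 1.0])

solution = np.linalg.solve(A, b)
print(solution)  # [2.5 1.5], i.e. x = 5/2, y = 3/2
```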
The above linear system (1.9) had exactly one solution. This
need not always be the case, as the following examples show:
Example 1.10. The system
\[ \begin{aligned} x + y &= 4 \\ x + y &= 1 \end{aligned} \]
has no solution. This can be seen algebraically (!) and also geomet-
rically:
[Figure: the parallel lines $x + y = 4$ and $x + y = 1$.]
The system has no solution, which is paralleled by the fact that two
parallel, but distinct lines in the plane do not intersect.
Example 1.11. The system
\[ \begin{aligned} x + y &= 4 \\ -2x - 2y &= -8 \end{aligned} \]
has infinitely many solutions: every pair of the form
\[ (x,\ y = 4 - x), \quad x \in \mathbf{R}, \]
is a solution.
[Figure: the lines $x + y = 4$ and $-2x - 2y = -8$, which coincide.]
In other words, even though there are two equations above, they
both have the same solution set. Thus, in some sense one of the
equations is redundant, i.e., the solution set of the entire system
equals the solution set of either of the equations individually.
\[ ax + by = c \]
\[ \begin{aligned} ax + by &= c \\ dx + ey &= f \end{aligned} \]
can take three forms:
\[ \begin{aligned} x + 2z &= -1 \\ -2x - 3z &= 1 \\ 2y &= -2. \end{aligned} \]
We add 2 times the first equation to the second (elementary operation (3)):
\[ \begin{aligned} x + 2z &= -1 \\ z &= -1 \\ 2y &= -2. \end{aligned} \]
We swap the second and the third equation (elementary operation (1)):
\[ \begin{aligned} x + 2z &= -1 \\ 2y &= -2 \\ z &= -1. \end{aligned} \]
We multiply the second equation by $\frac{1}{2}$ (in other words, we divide it by 2; elementary operation (2)):
\[ \begin{aligned} x + 2z &= -1 \\ y &= -1 \\ z &= -1. \end{aligned} \]
We add $(-2)$ times the third equation to the first (elementary operation (3)):
\[ \begin{aligned} x &= 1 \\ y &= -1 \\ z &= -1. \end{aligned} \]
1.4 Matrices

It is time to use some better tools to do the bookkeeping needed to solve linear systems. Matrices help with that. Later on (§3), we will use matrices in a much more profound way.

Definition 1.20. A matrix is a rectangular array of numbers. We speak of an $m \times n$-matrix (or $m$-by-$n$ matrix) if it has $m$ rows and $n$ columns, respectively. If $m = n$, we also call it a square matrix. A $1 \times n$-matrix (i.e., $m = 1$ and $n$ arbitrary) is called a row vector. Similarly, an $m \times 1$-matrix is called a column vector.
Example 1.21. It is customary to denote matrices by capital letters. For example,
\[ A = \begin{pmatrix} 3 & 4 \\ 0 & -7 \end{pmatrix} \]
is a square ($2 \times 2$) matrix; $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ is a column vector (or a $2 \times 1$-matrix) whose entries are two variables; $(x_1\ x_2)$ is a row vector (or a $1 \times 2$-matrix).
Notation 1.22. A matrix whose entries are unspecified numbers is denoted like so:
\[ A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{pmatrix}. \]
Thus, the number $a_{ij}$ is the entry in the $i$-th row and the $j$-th column. A more compressed notation expressing the same is
\[ A = (a_{ij})_{i = 1, \dots, m,\ j = 1, \dots, n} \]
or even just
\[ A = (a_{ij}). \]
Definition 1.23. Let
\[ \begin{aligned} a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n &= b_2 \\ &\ \,\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n &= b_m \end{aligned} \qquad (1.24) \]
\[ \begin{pmatrix} 0 & 1 & * & * & * & * & * \\ 0 & 0 & 0 & 1 & * & * & * \\ 0 & 0 & 0 & 0 & 1 & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
The first three steps don't change the matrix (since the top-left entry is already 1). Step (4): add $-2$ times the first row to the second row, and add $-5$ times the first row to the third, which gives
\[ \begin{pmatrix} 1 & 2 & 5 & 7 \\ 0 & -3 & -6 & -12 \\ 0 & -6 & -12 & -24 \end{pmatrix}. \]
The remaining steps only affect the second and third row. Step (2) picks the second row, with $a = -3$. It is already in the top position (the first row being discarded for the remainder of the algorithm), so Step (2) does not change the matrix. Step (3) gives the matrix
\[ \begin{pmatrix} 1 & 2 & 5 & 7 \\ 0 & 1 & 2 & 4 \\ 0 & -6 & -12 & -24 \end{pmatrix}. \]
Step (4) adds 6 times the second row to the third, which gives
\[ \begin{pmatrix} 1 & 2 & 5 & 7 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]
At this point also the second row is discarded, which leaves only the last row, which consists of zeros. By Step (1), the algorithm stops at this point.
This matrix is in row-echelon form, but not yet reduced. To reduce it, add $-2$ times the second row to the first, which gives
\[ \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]
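Not part of the original notes: the following sketch (assuming NumPy) performs Gaussian elimination numerically and reproduces the reduced row-echelon form above. It uses partial pivoting, so the intermediate matrices may differ from the hand computation, but the final result is the same.

```python
import numpy as np

def rref(matrix, tol=1e-12):
    """Bring a matrix into reduced row-echelon form by Gaussian elimination."""
    A = np.array(matrix, dtype=float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # pick a row at or below pivot_row with a usable entry in this column
        pivot = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[pivot, col]) < tol:
            continue
        A[[pivot_row, pivot]] = A[[pivot, pivot_row]]   # swap rows (operation (1))
        A[pivot_row] /= A[pivot_row, col]               # scale to get a leading 1 (operation (2))
        for r in range(rows):                           # clear the rest of the column (operation (3))
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# The matrix reached after the first elimination stage in the text:
A = [[1, 2, 5, 7],
     [0, -3, -6, -12],
     [0, -6, -12, -24]]
print(rref(A))
# [[ 1.  0.  1. -1.]
#  [ 0.  1.  2.  4.]
#  [ 0.  0.  0.  0.]]
```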
1.6 Exercises
Exercise 1.1. Describe all the solutions of the equation
\[ x + y = 3. \]
Draw a picture of that solution set. Is it a homogeneous equation?

Exercise 1.2. Consider the equation
\[ x = 3. \]
What is its solution set?
Consider the same equation, but now with two variables $x$ and $y$ being present (so we could rewrite the equation as $x + 0 \cdot y = 3$ in order to emphasize the presence of $y$). What is the solution set this time?

Exercise 1.3. Consider the system
\[ \begin{aligned} 2x_1 - x_2 + x_3 + x_4 &= 1 \\ 5x_2 - 3x_3 - 5x_4 &= -3 \\ 3x_1 - 4x_2 + 3x_3 + 4x_4 &= 3. \end{aligned} \]
What is the matrix associated to that system? Using Method 1.32, find all solutions of that system.
Exercise 1.4. Consider the (augmented) matrix
\[ A := \begin{pmatrix} 1 & 0 & 3 & 0 & 0 & 0 & 1 \\ 0 & 1 & 2 & 4 & 1 & 0 & 0 \\ 0 & 0 & 0 & 2 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
What type of matrix is that? (I.e., what $m \times n$-matrix.) If $A = (a_{ij})$, what is $a_{13}$ and what is $a_{31}$? What is the linear system associated to that matrix? (Hint: one equation reads ``$\dots = 3$''. For consistency, call the variables $x_1, x_2, \dots, x_6$.)
Is the matrix in row-echelon form? Is it in reduced row-echelon form? If not, use the Gaussian algorithm (Method 1.30) in order to transform it into reduced row-echelon form. Name the columns which contain a leading 1 (Hint: there are 4 of them). Which variables are free, which variables are not free? Use Method 1.32 and solve the linear system associated to that augmented matrix.
Exercise 1.5. Using Method 1.32, find all solutions of the following systems
\[ \begin{aligned} x + y - z &= 1 \\ 3x - y + 2z &= 5 \\ 4x + z &= 6 \end{aligned} \]
and
\[ \begin{aligned} x + y - z &= 1 \\ 3x - y + 2z &= 0 \\ x + y - 2z &= 2. \end{aligned} \]

Exercise 1.6. (Solution at p. 201) Let
\[ ax + by = c \]
be a linear equation. For which values of $a$, $b$ and $c$ does this equation have no solution? For which values of $a$, $b$ and $c$ does it have infinitely many solutions?
Exercise 1.7. Compute the reduced row-echelon form of the matrices associated to the linear systems in (1.9), Example 1.10 and Example 1.11.

Exercise 1.8. Consider the system
\[ \begin{aligned} x + y &= 1 \\ x - y &= b, \end{aligned} \]
where $b$ is a real number. What is its solution set? Illustrate the system geometrically for $b = 0$ and for $b = 1$.

Exercise 1.9. Consider the system
\[ \begin{aligned} ax + by &= 1 \\ x - y &= 2. \end{aligned} \]
Here $x$ and $y$ are the variables and $a$ and $b$ are the coefficients.
(1) For which values of $a$ and $b$ does the system above have no solution?
(2) For which values does it have exactly one solution?
\[ \begin{aligned} x_1 - x_3 + 2x_4 &= 0 \\ x_2 + 2x_3 - 2x_4 &= 0 \\ x_1 + x_2 + x_3 &= 0. \end{aligned} \]
\[ \begin{aligned} x_1 + x_2 + x_3 &= 1 \\ x_1 - x_3 &= 0. \end{aligned} \]
Exercise 1.18. Consider the following linear system (in the unknowns $x_1, x_2, x_3$):
\[ \begin{aligned} x_1 - x_2 + 3x_3 &= 0 \\ x_1 - x_2 &= 1. \end{aligned} \]
For which values of $t$ and $q$ does the vector
\[ (x_1, x_2, x_3) = (1 + t,\ t + q,\ -t + 2q + 1) \]
satisfy
\[ 3x_1 + 2x_2 - x_3 = 5? \]
\[ p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \]
\[ \begin{aligned} 2x - y + z &= 1 \\ (\alpha + 2)x - 2y + \alpha z &= -\alpha. \end{aligned} \]
[Figure: a road network with junctions $A$, $B$, $C$, $D$ connected by the streets $U$, $V$, $W$, $X$, $Y$ and $Z$.]
At the point labelled A, 500 cars per hour drive into the city, and
at B, 400 cars exit the city, while at C 100 cars exit the city per
hour.
Describe the possible scenarios regarding the numbers of cars
driving through the streets U , V , W , X, Y and Z.
Chapter 2
Vector spaces
2.1 $\mathbf{R}^2$, $\mathbf{R}^3$ and $\mathbf{R}^n$

Definition 2.1. For $n \ge 1$, an ordered $n$-tuple of real numbers is a collection of $n$ real numbers in a fixed order. For $n = 2$, an ordered 2-tuple is usually called an ordered pair, and an ordered 3-tuple is called an ordered triple. If these numbers are $r_1, r_2, \dots, r_n$, then the ordered $n$-tuple consisting of these numbers is denoted
\[ (r_1, r_2, \dots, r_n). \]
For example, $(2, 3)$ is an (ordered) pair. This pair is different from the (ordered) pair $(3, 2)$. It makes good sense to insist on the ordering, e.g., if a pair consists of the information
(``weight of a parcel (in kg)'', ``price (in €)''),
then $(3, 10)$ is of course different from $(10, 3)$. $(\frac{4}{3}, \sqrt{2}, -7)$ and $(0, 0, 0)$ are examples of (ordered) 3-tuples. An ordered 1-tuple is simply a single real number.

Definition 2.2. For $n \ge 1$, the set $\mathbf{R}^n$ is the set of all ordered $n$-tuples of real numbers. Thus (see §A for general mathematical notation)
\[ \mathbf{R}^n = \{(r_1, r_2, \dots, r_n) \mid r_1, r_2, \dots, r_n \in \mathbf{R}\}. \]
Thus, $\mathbf{R}^1 = \mathbf{R}$ is just the set of real numbers. Next, $\mathbf{R}^2$ is the set of ordered pairs of real numbers:
\[ \mathbf{R}^2 = \{(r_1, r_2) \mid r_1, r_2 \in \mathbf{R}\}. \]
\[ \mathbf{R}^2 = \{(x_1, x_2) \mid x_1, x_2 \in \mathbf{R}\} = \{(x, y) \mid x, y \in \mathbf{R}\}. \]
$x \in \mathbf{R}^n$
[Figure: the points $(3, 2)$, $(-3, 0)$ and $(3, -4)$ in the coordinate plane.]
[Figure: the point $(-4, -4, 6)$ in three-dimensional coordinate space.]
\[ (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n). \]
Example 2.4. What is the sum of $(1, 1)$ and $(-2, 1)$? Visualize that sum graphically!

Remark 2.5. The sum of two vectors is only defined if they belong to the same $\mathbf{R}^n$: a sum such as $(1, 2) + (3, 4, 5)$ is undefined, i.e., it is a meaningless expression.
These identities are easy to prove since they quickly boil down
to similar identities for the sum of real numbers. Here is a visual
intuition for the commutativity of addition, which is also called the
parallelogram law .
[Figure: parallelogram law; $x = (3, 1)$, $y = (2, 4)$ and $x + y = y + x = (5, 5)$.]
\[ r \cdot x := (r \cdot x_1, \dots, r \cdot x_n). \]
(7) $1x = x$,
(8) $0x = 0$ (at the left, 0 denotes the real number zero; at the right, it denotes the zero vector)
\[ + : V \times V \to V, \quad (v, w) \mapsto v + w \]
\[ \cdot : \mathbf{R} \times V \to V, \quad (r, v) \mapsto rv \ (\text{or } r \cdot v) \]
\[ 3x_1 + 2x_2 = 3. \]
\[ 3x + 4y - 2z = 0 \]
\[ \left\{ \left(x,\ y,\ \frac{3x + 4y}{2}\right) \;\middle|\; x, y \in \mathbf{R} \right\}. \]
[Figure: the plane $3x + 4y - 2z = 0$ in $\mathbf{R}^3$, containing the points $(0, y, 2y)$ and $(x, 0, \frac{3}{2}x)$.]
[Figure: the point $(1, 3, 0)$.]
(3) for all $v \in V$ and all real numbers $r \in \mathbf{R}$, the scalar multiple $r \cdot v$ is required to be an element of $V$.
More generally, a subset $V$ of another vector space $W$ is a subspace if $V$ satisfies the three preceding conditions.
Proof. Let us call $S$ the solution set of the system, i.e., an element $x = (x_1, \dots, x_n)$ belongs to $S$ precisely if it is a solution of the linear system (2.14).
We check the three conditions in Definition 2.17:
• In a similar manner, one shows (do it!) that for any $r \in \mathbf{R}$ and $v = (v_1, \dots, v_n) \in S$ the scalar multiple $rv = (rv_1, \dots, rv_n)$ is again in $S$.
\[ A \cap B := \{v \in V \mid v \in A \text{ and } v \in B\} \]
\[ A_1 \cap \dots \cap A_n = \{v \in V \mid v \in A_1,\ v \in A_2,\ \dots,\ v \in A_n\}. \]
[Figure: two subspaces $A$ and $B$ and their intersection $A \cap B$; for $x, y \in A \cap B$, the sum $x + y$ again lies in $A \cap B$.]
Note that this need not be the case: if $A = B$ is the same plane, for example, then certainly $A \cap B = A$ is also 2-dimensional. This relates to the discussion about the intersections of lines in $\mathbf{R}^2$ in Summary 1.12: if $A, B \subset \mathbf{R}^2$ are ``1-dimensional'' (i.e., lines), their intersection may still be a line, namely if $A = B$. If the ambient vector space $V$ is even larger, for example $V = \mathbf{R}^4$ (which has ``dimension 4''), then it is no longer reasonable to write down all possible constellations of how $A$ and $B$ lie in $V$.
[Figure: graphs of $f(x) = -2x - 1$, $g(x) = x^3 + 2x^2 + 1$ and their sum $(f + g)(x) = x^3 + 2x^2 - 2x$.]
\[ \mathbf{R}[x] := \{f : \mathbf{R} \to \mathbf{R} \mid f \text{ is a polynomial}\} \]
The set
\[ \mathbf{R}[x]_{\le d} := \left\{ \sum_{i=0}^{d} a_i x^i \;\middle|\; a_0, \dots, a_d \in \mathbf{R} \right\} \ (\subset \mathbf{R}[x]) \]
How to define the sum and scalar multiplication on that set $V/L$? Given $L_1, L_2 \in V/L$, take any $x_1 \in L_1$ and any $x_2 \in L_2$. (These are both vectors in $V = \mathbf{R}^2$.) Form the unique line that passes through $x_1 + x_2$ and is parallel to $L$. Call this line $L_1 + L_2$. Similarly, the scalar multiple $r \cdot L_1$ is the line passing through $r \cdot x_1$ and parallel to $L$. What is remarkable is that this makes sense, i.e., that the resulting lines do not depend on the choices of $x_1, x_2$ above. In the illustration below, we indicate two choices for $x_1$ (the second one being denoted $x_1'$). The sum $x_1 + x_2$ is clearly different from $x_1' + x_2$, but they do lie on the same line (that is parallel to $L$). This holds since $x_1' - x_1$ lies in $L$. Thus
\[ (x_1' + x_2) - (x_1 + x_2) = x_1' - x_1 \]
also lies in $L$, and therefore $x_1' + x_2$ and $x_1 + x_2$ lie on the same line that is parallel to $L$.

[Figure: two representatives $x_1$ and $x_1'$ of $L_1$, a representative $x_2$ of $L_2$, and the line parallel to $L$ through $x_1 + x_2$ and $x_1' + x_2$.]

With this settled, one can show (without much headache) that $V/L$ is indeed a vector space. (What is the zero vector in $V/L$?)
A conceptually important insight is that there is no natural way in which this $V/L$ is a subspace of $\mathbf{R}^2$. E.g., one may assign to an element $L_1 \in V/L$, say, the $y$-coordinate of the intersection of $L_1$ with the $y$-axis. But this idea is ad hoc and problem-laden (why not take the $x$-axis instead, and, what is worse, what happens if $L$ is in fact the $y$-axis...).
\[ a_1 v_1 + \dots + a_m v_m, \]
since the third components of these two vectors are always different. In fact, the linear combinations of $v_1$ and $v_2$ are precisely the vectors $(x, y, z)$ that satisfy $z = 0$.
[Figure: the vectors $v_1 = (1, 0, 0)$, $v_2 = (0, 1, 0)$ and the linear combination $\frac{1}{2}v_1 + \frac{1}{2}v_2$ in the $xy$-plane.]
\[ L(v_1, \dots, v_m) := \{a_1 v_1 + \dots + a_m v_m \mid a_1, \dots, a_m \in \mathbf{R}\} \]
\[ A + B := \{v + w \mid v \in A,\ w \in B\}. \]
\[ A_1 + \dots + A_n := \{v_1 + \dots + v_n \mid v_1 \in A_1, \dots, v_n \in A_n\}. \]
The proof of this is very similar to the one of Lemma 2.32 and will be omitted.

Remark 2.36. Given some vectors $v_1, \dots, v_m \in V$, we have
\[ 0 \cdot v_1 + \dots + 0 \cdot v_m = 0 \cdot (v_1 + \dots + v_m) = 0. \]
This follows from the distributive law and the scalar multiplication of any vector with 0, cf. (4) and (8) in Definition 2.10. So, there is always a ``trivial'' way to obtain the zero vector from $v_1, \dots, v_m$. We can ask if there are other ways of achieving the zero vector.
Definition 2.46. We say $v_1, \dots, v_m$ are linearly dependent if there is a non-trivial linear combination of these that gives the zero vector, i.e., if there are $a_1, \dots, a_m \in \mathbf{R}$, of which at least one is non-zero, such that
\[ a_1 v_1 + \dots + a_m v_m = 0. \qquad (2.47) \]
If this is not the case, then we say the vectors are linearly independent.
\[ a_1 e_1 + a_2 e_2 + a_3 e_3 = (0, 0, 0) \]
This reduced row-echelon matrix has only 2 leading ones, so the vec-
tors are not linearly independent, i.e., they are linearly dependent.
\[ v = a_1 v_1 + \dots + a_m v_m \quad \text{and} \quad v = b_1 v_1 + \dots + b_m v_m \]
Proof. Subtracting these two equations from one another (and using the commutativity of addition, and the law of distributivity, cf. Definition 2.10), we obtain
\[ 0 = v - v = (a_1 - b_1) v_1 + \dots + (a_m - b_m) v_m. \]
2.7 Bases

Definition 2.58. A collection of vectors in a vector space
\[ v_1, \dots, v_m \in V \]
is called a basis if they span $V$ and if they are linearly independent.

Example 2.59. The vectors
\[ e_1 = (1, 0, \dots, 0),\ e_2 = (0, 1, 0, \dots, 0),\ \dots,\ e_n = (0, \dots, 0, 1) \in \mathbf{R}^n \]
are a basis, called the standard basis. Indeed, we have observed in Example 2.41 and Example 2.48 that they span $\mathbf{R}^n$ and that they are linearly independent.
We try and modify this basis a little bit and see what happens. If we omit one of the vectors and only consider, say,
\[ e_2 = (0, 1, 0, \dots, 0),\ \dots,\ e_n = (0, \dots, 0, 1) \in \mathbf{R}^n, \]
these do not form a basis: while they are still linearly independent, they do not span $\mathbf{R}^n$.
On the other hand, we now consider
\[ e_1, \dots, e_n, v \]
for an arbitrary vector $v \in \mathbf{R}^n$. These do not form a basis: while they span $\mathbf{R}^n$ (even without the $v$), they are not linearly independent. Indeed, since $e_1, \dots, e_n$ span $\mathbf{R}^n$, this means that
\[ v = a_1 e_1 + \dots + a_n e_n \]
for appropriate $a_1, \dots, a_n \in \mathbf{R}$. According to Lemma 2.51, this means that $e_1, \dots, e_n, v$ are linearly dependent.
Example 2.60. The vectors
\[ v_1 = (0, 2, 1),\quad v_2 = (1, 0, 2),\quad v_3 = (-1, 1, 1) \]
form a basis of $\mathbf{R}^3$. To see this, we apply Method 2.53 and Method 2.44:
\[ \begin{pmatrix} 0 & 2 & 1 \\ 1 & 0 & 2 \\ -1 & 1 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 2 \\ 0 & 2 & 1 \\ -1 & 1 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 2 \\ 0 & 2 & 1 \\ 0 & 0 & \frac{5}{2} \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & 1 \end{pmatrix}. \]
This matrix has three leading ones, so the vectors are linearly independent and span $\mathbf{R}^3$, so they form a basis.
Note that this is a different basis than $e_1, e_2, e_3$ considered above.
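Not part of the original notes: a quick numerical cross-check (assuming NumPy) that these three vectors form a basis, by verifying that the matrix with rows $v_1, v_2, v_3$ has rank 3, equivalently a non-zero determinant.

```python
import numpy as np

V = np.array([[0, 2, 1],
              [1, 0, 2],
              [-1, 1, 1]], dtype=float)

print(np.linalg.matrix_rank(V))  # 3: the rows are linearly independent
print(np.linalg.det(V))          # -5.0, non-zero, so v1, v2, v3 form a basis of R^3
```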
the direct sum $V \oplus W$ is given by $(v_1, 0), \dots, (v_n, 0), (0, w_1), \dots, (0, w_m)$. These are $n + m$ vectors, so that
\[ \dim(V \oplus W) = \dim V + \dim W. \]

Remark 2.65. It can be shown that every vector space has a basis. In this course, we only consider vector spaces with a basis consisting of finitely many vectors, as in Definition 2.58. We call such vector spaces finite-dimensional.
An example of a vector space not having a finite basis (i.e., an infinite-dimensional vector space) is $\mathbf{R}[x]$ (for which a basis is given by the polynomials $1, x, x^2, x^3, \dots$).
\[ \dim V \ge n. \]
Proof. This follows from the theorem above. For example, suppose they span $V$. If they are not linearly independent, then some $v_i$ lies in the span of the remaining vectors. Thus $V$ is the span of all vectors but $v_i$, so that $n - 1 \ge \dim V$ by Theorem 2.66(1). This is a contradiction to our assumption.
The converse implication is proved similarly.
Example 2.69. Let $a \in \mathbf{R}$ be a fixed real number. Consider the vector space $\mathbf{R}[x]_{\le d}$. The polynomials
\[ v_0(x) = (x - a)^0 = 1,\quad v_1(x) = (x - a),\quad \dots,\quad v_d(x) = (x - a)^d \]
are linearly independent. To see this, suppose
\[ 0 = a_0 v_0 + a_1 v_1 + \dots + a_d v_d. \]
Note that $v_d$ has degree $d$, while all the remaining ones have degree $\le d - 1$. Thus, looking at the coefficient of $x^d$, we see $a_d = 0$. Continuing this, we note that
\[ 0 = a_0 v_0 + a_1 v_1 + \dots + a_{d-1} v_{d-1} \]
forces $a_{d-1} = 0$ (by looking at the coefficient of $x^{d-1}$). Repeating this argument, one sees that $a_0 = \dots = a_d = 0$.
We know $\dim \mathbf{R}[x]_{\le d} = d + 1$ (Example 2.64). Thus, by Corollary 2.68, these polynomials $v_0, \dots, v_d$ form a basis. According to Proposition 2.61, any polynomial $f(x)$ of degree $\le d$ can therefore be uniquely written as
\[ f(x) = a_0 + a_1(x - a) + \dots + a_d(x - a)^d. \]
Colloquially, every polynomial can be expressed as a sum of powers of $x - a$. (By definition of a polynomial, it can certainly be expressed as a sum of powers of $x - 0 = x$.)
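Not part of the original notes: a small numerical illustration (assuming NumPy) of rewriting a polynomial in powers of $x - a$. The coefficient of $(x - a)^k$ equals $f^{(k)}(a)/k!$, which for polynomials is an exact identity (the finite Taylor expansion).

```python
import numpy as np
from math import factorial

# Example: rewrite f(x) = 2 + 3x + x^2 in powers of (x - a) with a = 1.
f = np.polynomial.Polynomial([2.0, 3.0, 1.0])   # coefficients in increasing powers of x
a = 1.0
d = f.degree()

coeffs = [f(a)] + [f.deriv(k)(a) / factorial(k) for k in range(1, d + 1)]
print(coeffs)  # [6.0, 5.0, 1.0], i.e. f(x) = 6 + 5(x - 1) + (x - 1)^2
```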
[Figure: two lines $A$ and $B$ in the plane, in the two cases $A = B$ and $A \ne B$.]
2.9 Exercises

Exercise 2.1. Let $V = \{(x, y, z) \mid x, y, z \in \mathbf{R}\}$. (Thus, $V = \mathbf{R}^3$.) We use the regular addition of vectors. However, in contrast to the regular scalar multiplication (Definition 2.7), we now use the following. Decide in each case whether this turns $V$ into a vector space:
• $r \cdot (x, y, z) = (rx, y, rz)$,
• $r \cdot (x, y, z) = (0, 0, 0)$,
• $r \cdot (x, y, z) = (2rx, 2ry, 2rz)$.

Exercise 2.2. Let $V \subset \mathbf{R}^2$ be a subspace. Which of the following statements are correct?
(1) $V$ contains at least one element.
(2) $V$ contains at least two elements.
(3) $V$ contains the zero vector $(0, 0)$.
(4) If $v, w \in V$, then also $v - w \in V$.

Exercise 2.3. Using basic properties of differentiable functions from your calculus class, show that the space
\[ \{f : \mathbf{R} \to \mathbf{R} \mid f \text{ is differentiable}\} \]
is a vector space (with the sum and scalar multiple defined as in (2.22)).
Hint: structure your thinking as in Definition and Lemma 2.22.

Exercise 2.4. Give an example of two subspaces $V, W \subset \mathbf{R}^2$ such that their union
\[ V \cup W = \{x = (x_1, x_2) \in \mathbf{R}^2 \mid x \in V \text{ or } x \in W\} \]
is not a subspace.
Hint: Example 2.13.
Also give an example of two subspaces $V, W \subset \mathbf{R}^2$ where the union $V \cup W$ is a subspace.
Hint: be very lazy and minimalistic. What is the smallest subspace you can come up with?
(2) $\{x \cdot f \mid f \in \mathbf{R}[x]_{\le 2}\}$,
(3) $\{x \cdot f + (1 - x) g \mid f, g \in \mathbf{R}[x]_{\le 2}\}$,
(4) $\{f \mid f \in \mathbf{R}[x]_{\le 3},\ f(0) = 0\}$.
\[ S = \{(x, y, z, t) \mid x + y + z + t = 0\}. \]
\[ \begin{aligned} 2x_1 - x_2 - 3x_4 &= 0 \\ 2x_1 + x_3 + x_4 &= 0. \end{aligned} \]
Determine $S \cap T$.
\[ \begin{aligned} x_1 - x_2 &= 0 \\ x_1 + x_2 + x_3 &= 0. \end{aligned} \]
Determine $T \cap W$.
Chapter 3

Linear maps
Thus, for a linear map, the zero vector of V is mapped to the zero
vector in W .
Checking (3.3) is similarly simple. The linearity of the map can also
be visualized geometrically:
[Figure: vectors $v = (x, y)$ and $v'$ together with their images $f(v) = (x, -y)$, $f(v')$ and $f(v + v') = f(v) + f(v')$ under the reflection.]
For $f = \sum_{n=0}^{d} a_n x^n$ and $g = \sum_{n=0}^{d} b_n x^n$, we check (3.2), say:
\[ (f + g)'(x) = \left( \sum_{n=0}^{d} (a_n + b_n) x^n \right)' = \sum_{n=1}^{d} n (a_n + b_n) x^{n-1} = \sum_{n=1}^{d} n a_n x^{n-1} + \sum_{n=1}^{d} n b_n x^{n-1} = f'(x) + g'(x). \]
\[ f(x + y) = x + y + 1 \ne (x + 1) + (y + 1) = f(x) + f(y). \]
Reflections

Example 3.13. We consider $A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. According to the above we have
\[ Av = \begin{pmatrix} x \\ -y \end{pmatrix}. \]
We plot a few points $v$ and the corresponding $Av$:

[Figure: the points $v = (3, -4)$ and $w$ together with their reflections $Av = (3, 4)$ and $Aw$ across the $x$-axis.]
Rescalings

Example 3.14. The matrix $A = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}$ describes the map that compresses everything in the $x$-direction by the factor $\frac{1}{2}$ and leaves the $y$-direction untouched.

[Figure: the points $v = (3, -4)$ and $w$ together with their images $Av = (\frac{3}{2}, -4)$ and $Aw$; the $x$-coordinates are halved.]
Shearing

Rotations

[Figure: the points $v = (3, -4)$ and $w$ together with their images $Av = (4, 3)$ and $Aw$ under a rotation.]
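The rotation matrix itself is not visible in this excerpt; recall that a counterclockwise rotation by an angle $\theta$ is given by the standard matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. A small NumPy check (not part of the original notes) that a $90°$ rotation sends $(3, -4)$ to $(4, 3)$:

```python
import numpy as np

theta = np.pi / 2  # 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, -4.0])
print(R @ v)  # approximately [4. 3.]
```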
We regard a vector $v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$ as an element of $\mathbf{R}^n$. (Thus, instead of using the notation $(v_1, \dots, v_n)$ for an ordered tuple, as in Definition 2.1, we write the $n$ numbers underneath each other in a column.) Fix an $m \times n$-matrix $A$. Then the product $Av$, which is a column vector with $m$ entries, is an element of $\mathbf{R}^m$. We now regard this matrix $A$ as fixed, and consider the vector $v$ as a variable. In other words, we consider the function (or map)
\[ \mathbf{R}^n \to \mathbf{R}^m, \quad v \mapsto Av. \]
(1) $f$ is injective,
(2) $\ker f = \{0_V\}$.
Proof. Suppose $f$ is injective; we prove $\ker f = \{0\}$. Since $f(0) = 0$ by linearity (Remark 3.4), we have $0 \in \ker f$. If $v \in \ker f$, then $f(v) = 0_W$, so both $v$ and $0_V$ are in the preimage of $0_W$. By the injectivity of $f$, this forces $v = 0$.
Conversely, suppose $\ker f = \{0\}$. Suppose two vectors $v, v' \in V$ are in the preimage of some $w \in W$, i.e., $f(v) = f(v') = w$. Then, by linearity of $f$,
\[ f(v - v') = f(v + (-1)v') = f(v) + (-1)f(v') = f(v) - f(v') = 0. \]
Thus, $v - v' \in \ker f$, which means by assumption that $v - v' = 0$. That is: $v = v'$. Therefore $f$ is injective.
Theorem 3.26 (Rank-nullity theorem). Let $f : V \to W$ be a linear map between (finite-dimensional) vector spaces. Then
\[ \dim(\ker f) + \dim(\operatorname{im} f) = \dim V. \]
The rank of $f$ is defined to be
\[ \operatorname{rk} f := \dim(\operatorname{im} f), \]
while the nullity of $f$ is defined to be $\dim(\ker f)$.
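Not part of the original notes: a quick numerical illustration of the rank-nullity theorem, assuming NumPy; the matrix below is a made-up example, not one from the text.

```python
import numpy as np

# A 3x4 matrix, viewed as a linear map f : R^4 -> R^3, v |-> Av.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # third row = first + second, so the rank drops

rank = np.linalg.matrix_rank(A)        # dim(im f)
nullity = A.shape[1] - rank            # dim(ker f), by the rank-nullity theorem
print(rank, nullity, A.shape[1])       # 2 2 4, and indeed 2 + 2 = 4
```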
A proof of this theorem appears in any linear algebra textbook, e.g. [Nic95, Theorem 7.2.4]. As a remark on the proof, we note that one can prove the following fact, which is very useful in its own right.

Theorem 3.27. Let $f : V \to W$ be a linear map. Let
\[ v_1, \dots, v_r, v_{r+1}, \dots, v_n \]
be a basis of $V$ such that
\[ v_1, \dots, v_r \]
is a basis of $\ker f$. Then $f(v_{r+1}), \dots, f(v_n)$ is a basis of $\operatorname{im} f$.
The following facts are immediate consequences of the rank-nullity theorem.
etc. Then we have
\[ f(e_i) = A e_i = \begin{pmatrix} a_{11} \cdot 0 + \dots + a_{1i} \cdot 1 + \dots + a_{1n} \cdot 0 \\ \vdots \\ a_{m1} \cdot 0 + \dots + a_{mi} \cdot 1 + \dots + a_{mn} \cdot 0 \end{pmatrix} = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{mi} \end{pmatrix}. \qquad (3.29) \]
In other words, the product $A e_i$ is precisely the $i$-th column of the matrix $A$!
\[ f^{-1}(b) = \{r \in \mathbf{R}^n \mid Ar = b\} \]
\[ f(s - r) = f(s) - f(r) \]
\[ b = A0 = 0. \]
Instead, the solution set of the system with a non-zero vector $b$, i.e., $f^{-1}(b)$, is a translation of $\ker f$, as illustrated below:

[Figure: the parallel lines $\ker f = f^{-1}(0)$ and $f^{-1}(b)$.]
\[ f(v_i) = w_i. \qquad (3.39) \]
i.e., we can express $v$ in such a form and the real numbers $b_i$ are uniquely determined by $v$. Moreover, we can think of these numbers $b_1, \dots, b_n$ as the coordinates of $v$ (with respect to our coordinate system given by the basis). Namely, given another vector $v' = \sum_{i=1}^{n} b_i' v_i$ and some $a \in \mathbf{R}$, we have
\[ v + v' = \sum_{i=1}^{n} (b_i + b_i') v_i, \qquad a v = \sum_{i=1}^{n} (a b_i) v_i. \]
So, the map defined in (3.40) is the only linear map satisfying (3.39).

Example 3.41. We consider $V = \mathbf{R}^3$, with the basis
\[ v_1 = e_1 = (1, 0, 0),\quad v_2 = e_2 = (0, 1, 0),\quad v_3 = (0, 1, -1). \]
(Note that $e_1, e_2$ are part of the standard basis of $\mathbf{R}^3$.) According to Proposition 3.38, there is a unique linear map $f : \mathbf{R}^3 \to \mathbf{R}^3$ such that
\[ f(v_1) = (2, -1, 0),\quad f(v_2) = (1, -1, 1),\quad f(v_3) = (0, 2, 2). \]
We determine $f(e_3)$, where $e_3 = (0, 0, 1)$ is the third standard basis vector. We have
\[ e_3 = v_2 - v_3. \]
Thus
\[ f(e_3) = f(v_2 - v_3) = f(v_2) - f(v_3) = (1, -1, 1) - (0, 2, 2) = (1, -3, -1). \]
Thus, with respect to the standard basis $e_1, e_2, e_3$ (which is distinct from the basis above!), the matrix of $f$ is given by
\[ A = \begin{pmatrix} 2 & 1 & 1 \\ -1 & -1 & -3 \\ 0 & 1 & -1 \end{pmatrix}. \]
That is, $f$ agrees with the map
\[ f : \mathbf{R}^3 \to \mathbf{R}^3, \quad v \mapsto Av. \]
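Not part of the original notes: a quick NumPy sanity check that this matrix reproduces the prescribed values on the basis $v_1, v_2, v_3$.

```python
import numpy as np

A = np.array([[2, 1, 1],
              [-1, -1, -3],
              [0, 1, -1]], dtype=float)

v1, v2, v3 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 1, -1])
print(A @ v1)  # [ 2. -1.  0.]
print(A @ v2)  # [ 1. -1.  1.]
print(A @ v3)  # [ 0.  2.  2.]
```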
which is true.
If, by contrast, we consider the standard basis $e_1, e_2, e_3$ of $V = \mathbf{R}^3$ (and still $w_1, w_2, w_3$ in $W = \mathbf{R}^3$), then the matrix reads
\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & -1 \end{pmatrix}. \]
For example, the third line of this matrix expresses the identity

\[ g \circ f : U \to W, \quad u \mapsto g(f(u)). \]

In other ``words'':
\[ AB := \left( \sum_{e=1}^{n} a_{ie} b_{ej} \right). \]
I.e., one picks the $i$-th row of $A$ and the $j$-th column of $B$; one traverses these, multiplies the corresponding entries together one by one, and finally adds up these products.

Example 3.47.
\[ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} -1 & 0 \\ 6 & -2 \end{pmatrix} = \begin{pmatrix} 1 \cdot (-1) + 2 \cdot 6 & 1 \cdot 0 + 2 \cdot (-2) \\ 3 \cdot (-1) + 4 \cdot 6 & 3 \cdot 0 + 4 \cdot (-2) \end{pmatrix} = \begin{pmatrix} 11 & -4 \\ 21 & -8 \end{pmatrix}, \]
\[ \begin{pmatrix} 1 & -1 & 2 \\ 1 & 3 & -2 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 2 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 3 & 5 \\ -1 & 1 \end{pmatrix}, \]
\[ \begin{pmatrix} 0 & 1 \\ 1 & 2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 1 & -1 & 2 \\ 1 & 3 & -2 \end{pmatrix} = \begin{pmatrix} 1 & 3 & -2 \\ 3 & 5 & -2 \\ 5 & 7 & -2 \end{pmatrix}, \]
\[ \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & y \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & x + y \\ 0 & 1 \end{pmatrix}. \]
Note that the second product is a $2 \times 2$-matrix, while the product of the same matrices in the other order is a $3 \times 3$-matrix!
The product $AB$ is only defined if the number of columns of $A$ is the same as the number of rows of $B$. For example,
\[ \begin{pmatrix} 0 \\ 2 \end{pmatrix} \begin{pmatrix} 1 & 1 & 3 & 4 \\ 2 & 3 & 5 & 6 \end{pmatrix} \]
is not defined, i.e., it is a meaningless expression.
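Not part of the original notes: the same products can be checked with NumPy, which also illustrates that reversing the order changes the shape of the result.

```python
import numpy as np

A = np.array([[1, -1, 2],
              [1, 3, -2]])
B = np.array([[0, 1],
              [1, 2],
              [2, 3]])

print(A @ B)        # 2x2 result: [[ 3  5] [-1  1]]
print(B @ A)        # 3x3 result: [[ 1  3 -2] [ 3  5 -2] [ 5  7 -2]]
print((A @ B).shape, (B @ A).shape)  # (2, 2) (3, 3)
```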
Remark 3.48. In the case when $B$ is a column vector with $n$ entries, we can regard it as an $n \times 1$-matrix. In this case the product $AB$ defined in Definition 3.46 is an $m \times 1$-matrix, which agrees with the column vector $AB$ as defined in Definition 3.9, so the product considered now is a generalization of that previous construction. In general, if $B$ is an $n \times k$-matrix, we can write it as
\[ B = (b_1\ b_2\ \dots\ b_k), \]
Similarly,
\[ f(e_i) = A e_i = \sum_{s=1}^{m} a_{si} e_s \]
and
\[ g(e_i) = B e_i = \sum_{r=1}^{n} b_{ri} e_r. \]
So that
\[ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \ne \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}! \]
\[ g \circ f \ne f \circ g. \]
[Figure: a point $w$ together with $f(w)$, $g(w)$, $f(g(w))$ and $g(f(w))$, illustrating that the order of composition matters.]
(2) Let $A'$ be the matrix obtained by multiplying the $i$-th row with a real number $r$. Then
\[ A' = \underbrace{\begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & & r & & \\ & & & & 1 & \\ & & & & & \ddots \\ & & & & & & 1 \end{pmatrix}}_{E^{(2)}_{i,r}} A. \]

• For $s = i$, the only coefficients $b_{se}$ that are non-zero are $b_{ss} = 1$ and $b_{sj} = r$. Thus, the sum above consists of two terms, and therefore
\[ c_{st} = b_{ss} a_{st} + b_{sj} a_{jt} = a_{st} + r a_{jt}. \]
3.9 Inverses

Given a linear map $f : V \to W$, it is a natural question whether the process of applying $f$ can be undone. For example, if $f$ encodes a counterclockwise rotation in the plane by $60°$, it can be undone by rotating clockwise by $60°$. On the other hand, the linear map
\[ \mathbf{R}^2 \to \mathbf{R}^2, \quad (x, y) \mapsto (x, 0) \]

(By definition of the composition (see also §A) this means $g(f(v)) = v$ for all $v \in V$, and $f \circ g = \operatorname{id}_W$, i.e., $f(g(w)) = w$ for all $w \in W$.)
• $\dim V = \dim W$.
\[ f(g'(w)) = f(g(w)). \]
\[ AB = \operatorname{id} \quad \text{and} \quad BA = \operatorname{id}. \]
and also that the two columns of $A$ are linearly dependent. We will later prove that either of these two conditions is equivalent to $A$ not being invertible (Corollary 3.86).
Example 3.64. We revisit the reflection, rescaling, rotation and
shearing matrices (Example 3.13 onwards) and compute their in-
verses:
where $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ is a vector consisting of $n$ unknowns and $b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$ is a vector. This linear system has a unique solution, which is given by
\[ x = A^{-1} b, \]
i.e., the product of the inverse of $A$ with the vector $b$.
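Not part of the original notes: numerically one rarely forms $A^{-1}$ explicitly; the snippet below (assuming NumPy) shows on a made-up example that multiplying by the inverse and using `np.linalg.solve` give the same solution.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x_via_inverse = np.linalg.inv(A) @ b
x_via_solve = np.linalg.solve(A, b)   # preferred in practice: no explicit inverse
print(x_via_inverse, x_via_solve)     # both [0.8 1.4]
```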
\[ \begin{aligned} a_{11} x_1 + \dots + a_{1n} x_n &= b_1 \\ &\ \,\vdots \\ a_{m1} x_1 + \dots + a_{mn} x_n &= b_m. \end{aligned} \]
\[ z := A^{-1} b - y \]
\[ z = A^{-1} A z = A^{-1} 0 = 0. \]
If $A$ and $B$ are invertible, then $AB$ is invertible:
\[ (AB)^{-1} = B^{-1} A^{-1}. \qquad (3.70) \]
(Compare: $\frac{1}{ab} = \frac{1}{b} \cdot \frac{1}{a}$.)
If $A_1, \dots, A_k$ are invertible, then their product $A_1 A_2 \dots A_k$ is also invertible:
\[ (A_1 A_2 \dots A_k)^{-1} = A_k^{-1} \dots A_1^{-1}. \]
(Compare: $\frac{1}{a_1 \cdots a_k} = \frac{1}{a_k} \cdots \frac{1}{a_1}$.)
Proof. To illustrate this, we check this for the last one, where for simplicity of notation we just treat the case of $2 \times 2$-matrices. I.e., we prove
\[ \begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -r \\ 0 & 1 \end{pmatrix}. \]
To do this, we compute the product
\[ \begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -r \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \cdot (-r) + r \cdot 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
\[ A \rightsquigarrow B, \]
then
\[ B = UA \]
for an invertible $m \times m$-matrix $U$. (In particular, if $A \rightsquigarrow \operatorname{id}$, then $\operatorname{id} = UA$.)
• $r + 1 \ge 1$
• $r \ge 0$
• $r - 4 \ge -4$
are equivalent. By contrast, the three statements
• $r + 1 \ge 1$
• $r \ge 0$
• $r^2 \ge 0$
are not equivalent, since the third does not imply, say, the second: for $r = -1$, the third statement holds, but the second does not. A convenient way to show that three statements are equivalent is to show ``X'' $\Rightarrow$ ``Y'', then ``Y'' $\Rightarrow$ ``Z'', and then ``Z'' $\Rightarrow$ ``X''. Of course, this also works similarly for more than three statements.
Theorem 3.78. The following conditions on a square matrix $A \in \mathrm{Mat}_{n \times n}$ are equivalent:
(1) $A$ is invertible.
(2) For any $b \in \mathbf{R}^n$ (regarded as a column vector with $n$ rows), the equation $Ax = b$ (for $x \in \mathbf{R}^n$ a column vector consisting of $n$ unknowns $x_1, \dots, x_n$) has exactly one solution.
(3) For any $b \in \mathbf{R}^n$, the equation $Ax = b$ has at most one solution.
(4) The system $Ax = 0$ ($0$ being the zero vector consisting of $n$ zeros) has only the trivial solution $x = 0$ (cf. Remark 1.14).
(5) Using the Gaussian algorithm (Method 1.30), $A$ can be transformed to the identity matrix $\operatorname{id}_n$.
(6) $A$ is a product of (appropriate) elementary matrices.
(7) There is a matrix $B \in \mathrm{Mat}_{n \times n}$ such that $AB = \operatorname{id}$.
If these conditions are satisfied, the inverse of $A$ can be computed as follows: write the identity $n \times n$-matrix to the right of $A$ (this gives an $n \times (2n)$-matrix):
\[ B := (A \mid \operatorname{id}_n) = \begin{pmatrix} a_{11} & \dots & a_{1n} & 1 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nn} & 0 & \dots & 1 \end{pmatrix}. \]
(The bar in the middle is just there for visual purposes; it has no deeper meaning.) Apply Gaussian elimination in order to bring the matrix $B$ to reduced row-echelon form, which according to the above gives a matrix of the form
\[ (\operatorname{id}_n \mid E). \]
Then $E = A^{-1}$, i.e., $E$ is the inverse of $A$.
Proof. (1) $\Rightarrow$ (2): This is just the content of Theorem 3.66.
The implications (2) $\Rightarrow$ (3) and (3) $\Rightarrow$ (4) are clear.
(4) $\Rightarrow$ (5): we can bring $A$ into reduced row-echelon form, say, $A \rightsquigarrow R$. We need to show that $R = \operatorname{id}$. If this is not the case, then $R$ contains a zero row (since $R$ is a square matrix). Method 1.32 then tells us that the system $Rx = 0$ has (at least) one free parameter, and therefore the system has not only the zero vector as a solution. The original system $Ax = 0$, which by Corollary 3.75 has the same solutions as $Rx = 0$, then also has a non-trivial solution. This is a contradiction to our assumption (4); hence $R = \operatorname{id}$.
(5) $\Rightarrow$ (6): by Lemma 3.74, we have $UA = \operatorname{id}$ for $U$ a product of elementary matrices, say $U = U_1 \dots U_n$. Then, using (3.71), we have
\[ A = U^{-1} U A = U^{-1} = U_n^{-1} \dots U_1^{-1}, \]
and this is also a product of elementary matrices.
(6) $\Rightarrow$ (7): if $A = U_1 \dots U_n$ for some elementary matrices, then
\[ A U_n^{-1} \dots U_1^{-1} = U_1 \dots U_n U_n^{-1} \dots U_1^{-1} = \operatorname{id}. \]
(7) $\Rightarrow$ (1): suppose $B$ is such that $AB = \operatorname{id}$. We observe that then the only vector $x \in \mathbf{R}^n$ such that $Bx = 0$ is the zero vector:
\[ x = \operatorname{id}_n x = ABx = A0 = 0. \]
Applying the implication (4) $\Rightarrow$ (7) (which was already proved) to $B$, we obtain a matrix $C$ such that $BC = \operatorname{id}$. Therefore
\[ A = A \operatorname{id}_n = A(BC) = (AB)C = \operatorname{id} C = C. \]
This means that $BA = \operatorname{id}$.
This finishes the proof that all the given statements are equivalent. The statement about the computation of $A^{-1}$ holds since the row operations that bring $A \rightsquigarrow \operatorname{id}$ also bring the augmented matrix $(A \mid \operatorname{id})$ to $(UA \mid U\operatorname{id}) = (\operatorname{id} \mid U)$.
Example 3.79. We apply this to $A = \begin{pmatrix} 1 & 0 & -1 \\ 3 & 1 & -3 \\ 1 & 2 & -2 \end{pmatrix}$:
\[ B = \begin{pmatrix} 1 & 0 & -1 & 1 & 0 & 0 \\ 3 & 1 & -3 & 0 & 1 & 0 \\ 1 & 2 & -2 & 0 & 0 & 1 \end{pmatrix}. \]
We subtract the first row 3 times, resp. once, from the other rows, which gives
\[ \begin{pmatrix} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & -3 & 1 & 0 \\ 0 & 2 & -1 & -1 & 0 & 1 \end{pmatrix}. \]
We subtract 2 times the second row from the third:
\[ \begin{pmatrix} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & -3 & 1 & 0 \\ 0 & 0 & -1 & 5 & -2 & 1 \end{pmatrix}. \]
We bring the matrix into row-echelon form by multiplying the last row with $-1$, which yields
\[ \begin{pmatrix} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & -3 & 1 & 0 \\ 0 & 0 & 1 & -5 & 2 & -1 \end{pmatrix}. \]
Finally, to bring it into reduced row-echelon form, we add the third row to the first, which gives
\[ \begin{pmatrix} 1 & 0 & 0 & -4 & 2 & -1 \\ 0 & 1 & 0 & -3 & 1 & 0 \\ 0 & 0 & 1 & -5 & 2 & -1 \end{pmatrix}. \]
Thus, according to Theorem 3.78, $A$ is indeed invertible, and its inverse is
\[ A^{-1} = \begin{pmatrix} -4 & 2 & -1 \\ -3 & 1 & 0 \\ -5 & 2 & -1 \end{pmatrix}. \]
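Not part of the original notes: a numerical cross-check of this example, assuming NumPy.

```python
import numpy as np

A = np.array([[1, 0, -1],
              [3, 1, -3],
              [1, 2, -2]], dtype=float)

A_inv = np.linalg.inv(A)
print(np.round(A_inv))                    # [[-4.  2. -1.] [-3.  1.  0.] [-5.  2. -1.]]
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```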
Corollary 3.80. If $A$ is a square matrix such that for some other square matrix $B$ we have $AB = \operatorname{id}$, then we also have $BA = \operatorname{id}$.
Proof. We use the theorem to see that $A$ is invertible, and then
\[ B = \operatorname{id} B = A^{-1} A B = A^{-1}, \]
and we have seen in (3.69) above that $A^{-1} A = \operatorname{id}$.
\[ A^T := (a_{ji}). \]
Example 3.82. For $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}$,
\[ A^T = \begin{pmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{pmatrix}. \]
\[ (AB)^T = B^T A^T. \qquad (3.84) \]
Proof. The first two rules are quite immediate to check (and hardly surprising). The first one can also be seen by noting that reflecting the entries along the main diagonal twice gives back the original matrix.
Similarly,
Thus the product of $A^T$ and $(A^{-1})^T$ (in the two possible orders) equals $\operatorname{id}$, so they are inverse to each other.
\[ Ax = x_1 c_1 + \dots + x_n c_n, \]
(2) $\Leftrightarrow$ (3): The rank is, by definition, the dimension of the column space, i.e., the subspace of $\mathbf{R}^n$ generated by the columns $c_1, \dots, c_n$. In order to show that these vectors span $\mathbf{R}^n$, let $b \in \mathbf{R}^n$. By the invertibility of $A$, we know that the system $Ax = b$ has a (unique) solution $x$. Therefore $Ax = \sum_{k=1}^{n} x_k c_k = b$.
(1) $\Leftrightarrow$ (4): $A$ is invertible if and only if the transpose $A^T$ is invertible. Now use that the rows of $A^T$ are the columns of $A$, and apply the (already proved) equivalence (1) $\Leftrightarrow$ (2).
3.11 Exercises

Exercise 3.1. Determine, in each of the following cases, the $2 \times 2$-matrix $A$ such that the function
\[ f : \mathbf{R}^2 \to \mathbf{R}^2, \quad v \mapsto Av \]
has the stated behaviour:
• $f(v)$ is the point $v$ reflected along the $y$-axis,
• $f(v)$ is the same point as $v$,
• $f(v)$ is the origin $(0, 0)$,
• $f(v)$ is the point $v$ reflected along the line $\{(x, x) \mid x \in \mathbf{R}\}$ (i.e., the ``southwest-northeast diagonal''),
• $f(v)$ is the point $v$ rotated counterclockwise, resp. clockwise, by $60°$.

Exercise 3.2. Determine the matrix $A$ such that $Av = \begin{pmatrix} y \\ x \end{pmatrix}$. Describe the behaviour of the function $v \mapsto Av$ geometrically.
Exercise 3.3. Write down the matrix $A$ such that the function $f : \mathbf{R}^4 \to \mathbf{R}^3$, $v \mapsto Av$, satisfies
\[ f : \mathbf{R}^3 \to \mathbf{R}^2 \]
given by
\[ f(x, y, z) = (2x - z,\ x + y + z). \]
(1) Determine the matrix of $f$ with respect to the standard basis in $\mathbf{R}^3$ and the standard basis in $\mathbf{R}^2$.
(2) Determine $\ker f$ and $\operatorname{im} f$.
(3) Determine the preimage $f^{-1}((0, 1))$. Write down the linear system whose solution set is this preimage. Is it a subspace of $\mathbf{R}^3$?
(4) Show that the vectors $v_1 = (0, 1, 2)$, $v_2 = (0, -1, 1)$ and $v_3 = (1, 1, 1)$ are a basis of $\mathbf{R}^3$. Determine the matrix of $f$ with respect to this basis of $\mathbf{R}^3$ and the standard basis in the codomain $\mathbf{R}^2$.
\[ v_4 = a_1 v_1 + a_2 v_2 + a_3 v_3. \]
\[ f : \mathbf{R}^4 \to \mathbf{R}^3, \quad \begin{pmatrix} x \\ y \\ z \\ t \end{pmatrix} \mapsto \begin{pmatrix} -x + z \\ -y + t \\ x - y \end{pmatrix} \]
\[ f_2 : \mathbf{R}^3 \to \mathbf{R}^2, \quad \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \mapsto \begin{pmatrix} x_3 + x_1 \\ 3x_2 + 4x_1 + 1 \end{pmatrix} \]
\[ f_3 : \mathbf{R}^3 \to \mathbf{R}^2, \quad \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \mapsto \begin{pmatrix} x_3^2 \\ x_2 + x_1 \end{pmatrix} \]
\[ f_4 : \mathbf{R}^3 \to \mathbf{R}^3, \quad \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \mapsto \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \]
\[ f_5 : \mathbf{R}^3 \to \mathbf{R}^3, \quad \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \mapsto \begin{pmatrix} x_1 + x_2 \\ x_2 + x_3 \\ x_3 + x_1 \end{pmatrix} \]
\[ \operatorname{id} : \mathbf{R}^3 \to \mathbf{R}^3 \]
and the basis $v_1 = (1, 0, -1)$, $v_2 = (2, 1, 1)$, $v_3 = (-1, -1, 7)$ on the domain and the standard basis on the codomain. Compute the base change matrix with respect to these bases.
Exercise 3.29. Find the base change matrix from the standard basis $e_1, e_2, e_3$ in $\mathbf{R}^3$ to the basis $v_1 = (1, 1, 2)$, $v_2 = (1, 1, 3)$, $v_3 = (7, -1, 0)$.
\[ f : \mathbf{R}^2 \to \mathbf{R}^3, \quad v \mapsto Av, \]
where $A = \begin{pmatrix} 2 & 1 \\ 0 & 1 \\ -3 & 1 \end{pmatrix}$. Compute the matrix $B$ of $f$ with respect to the basis $v = \{v_1 = (1, -1),\ v_2 = (3, -1)\}$ in $\mathbf{R}^2$, and the basis $t = \{t_1 = (1, 0, 1),\ t_2 = (2, 1, 1),\ t_3 = (-1, -1, -1)\}$ in $\mathbf{R}^3$.
Hint: We may consider the following diagram:
\[ \mathbf{R}^2_v \xrightarrow{\ \operatorname{id}\ } \mathbf{R}^2_e \xrightarrow{\ f\ } \mathbf{R}^3_e \xrightarrow{\ \operatorname{id}\ } \mathbf{R}^3_t, \]
with matrices $H$, $A$ and $K$, respectively.
\[ f : \mathbf{R}^3 \to \mathbf{R}^3 \]
which in the standard basis (on both the domain and the codomain) is given by
\[ A = \begin{pmatrix} 2 & 0 & 0 \\ 1 & 2 & 1 \\ -1 & 0 & 1 \end{pmatrix}. \]
Compute the matrix of $f$ with respect to the basis
\[ f : \mathbf{R}^2 \to \mathbf{R}^2, \]
which is given by the matrix $A = \begin{pmatrix} 6 & -1 \\ 2 & 3 \end{pmatrix}$ with respect to the standard basis in the domain and the codomain.
Find its matrix with respect to the basis $v_1 = (1, 1)$, $v_2 = (1, 2)$, both in the domain and the codomain.
Exercise 3.33. (Solution at p. 222) Let $A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$. Determine the vectors $x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$ such that
\[ Ax = x. \]

Exercise 3.34. (Solution at p. 223) Find, if possible, the vectors $x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$ such that
\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 2 \\ 0 & 2 & 1 \end{pmatrix} x = 5x. \]
Exercise 3.35. (Solution at p. 224) Consider the matrix $A = \begin{pmatrix} 3 & 0 \\ 8 & -1 \end{pmatrix}$, which represents $f : \mathbf{R}^2 \to \mathbf{R}^2$ with respect to the standard basis. Find the matrix of $f$ with respect to the basis
\[ v = \{v_1 = (2, 1),\ v_2 = (0, 1)\}. \]

Exercise 3.36. For $A$ and $f$ as in Exercise 3.35, consider now the basis
\[ v = \{v_1 = (1, 2),\ v_2 = (0, 1)\}. \]
Compute the matrix of $f$ with respect to that basis.

Exercise 3.37. (Solution at p. 224) Let $f : \mathbf{R}^3 \to \mathbf{R}^3$ be the map whose matrix with respect to the standard basis is
\[ \begin{pmatrix} -1 & 1 & 0 \\ 0 & 2 & 0 \\ 1 & -1 & -2 \end{pmatrix}. \]
The following two exercises are both concerned with linear systems of the form
\[ Ax = \lambda x, \]
where $A$ is a certain square matrix, $x$ is a vector and $\lambda \in \mathbf{R}$ a real number. We will study these systems systematically in §5.

Exercise 3.41. (Solution at p. 227) Find the solutions of the linear system
\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 2 \\ 0 & 2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 5 \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}. \]
Chapter 4

Determinants
where the right-hand side denotes the area of the parallelogram spanned by the two vectors $v_1, v_2$.

[Figure: the parallelogram spanned by $v = (x, y)$ and $v' = (x', y')$.]
Lemma 4.3 does not give any information about the sign of the determinant. Regarding that, we observe the following:

Lemma 4.4. Let $A = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}$ and let
\[ A^1 := \begin{pmatrix} x_2 & x_1 \\ y_2 & y_1 \end{pmatrix}, \qquad A^2 := \begin{pmatrix} y_1 & y_2 \\ x_1 & x_2 \end{pmatrix} \]
be the matrices obtained from $A$ by swapping the two columns, resp. the two rows. Then
\[ \det A^2 = \det A^1 = -\det A. \]
In other words, swapping two rows or two columns will change the sign of the determinant.
Proof. This is directly clear from the definition. For example,
\[ \det A^1 = x_2 y_1 - y_2 x_1 = -(x_1 y_2 - x_2 y_1) = -\det A. \]
Thus, the determinant (as opposed to only its absolute value) records the area of the parallelogram spanned by the vectors and also the orientation.
\[ \det : \mathrm{Mat}_{n \times n} \to \mathbf{R} \]
\[ \det A = 0. \qquad (4.8) \]
\[ \det A = 0 \]
\[ \det A = -\det A \]
\[ \det \begin{pmatrix} v_1 \\ \vdots \\ v_i + r v_j \\ \vdots \\ v_n \end{pmatrix} = \det \begin{pmatrix} v_1 \\ \vdots \\ v_i \\ \vdots \\ v_n \end{pmatrix} + r \det \begin{pmatrix} v_1 \\ \vdots \\ v_j \\ \vdots \\ v_n \end{pmatrix} \quad \text{(where $v_j$ is in the $i$-th row!)} \]
\[ = \det \begin{pmatrix} v_1 \\ \vdots \\ v_i \\ \vdots \\ v_n \end{pmatrix} + r \det \begin{pmatrix} \vdots \\ v_j \\ \vdots \\ v_j \\ \vdots \end{pmatrix} = \det \begin{pmatrix} v_1 \\ \vdots \\ v_i \\ \vdots \\ v_n \end{pmatrix} \quad \text{by the above remark.} \]
\[ A = \begin{pmatrix} -2 & 1 & 8 \\ 1 & 3 & 5 \\ 0 & 2 & 4 \end{pmatrix}. \]
\[ \begin{pmatrix} -2 & 1 & 8 \\ 1 & 3 & 5 \\ 0 & 2 & 4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 7 & 18 \\ 1 & 3 & 5 \\ 0 & 2 & 4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & 6 \\ 1 & 3 & 5 \\ 0 & 2 & 4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & 6 \\ 1 & 3 & 5 \\ 0 & 0 & -8 \end{pmatrix} \]
\[ \rightsquigarrow \begin{pmatrix} 0 & 1 & 6 \\ 1 & 3 & 5 \\ 0 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & 0 \\ 1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \operatorname{id}_3. \]
Thus, the definition of det for general matrices agrees with the
one in Definition 4.1.
(3) For a $3 \times 3$-matrix one can show that the determinant is given by the so-called Sarrus' rule:
\[ \det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh - ceg - bdi - afh. \qquad (4.11) \]
A way to remember this formula is to write
\[ \begin{pmatrix} a & b & c & a & b \\ d & e & f & d & e \\ g & h & i & g & h \end{pmatrix} \]
and to take the products along the three diagonals going down to the right with a $+$ sign, and the products along the three diagonals going up to the right with a $-$ sign.
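Not part of the original notes: a quick check of Sarrus' rule against NumPy's determinant on a made-up $3 \times 3$ matrix.

```python
import numpy as np

def sarrus(M):
    """Determinant of a 3x3 matrix via Sarrus' rule (4.11)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

M = np.array([[2.0, 1.0, 3.0],
              [0.0, -1.0, 4.0],
              [5.0, 2.0, 1.0]])
print(sarrus(M), np.linalg.det(M))  # both 17.0 (up to rounding)
```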
\[ \det(AB) = \det(BA), \]
e.g.
\[ \det \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 4 \ne 1 + 1 = \det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
Proof. If one of the entries on the main diagonal, i.e., $a_{11}, \dots, a_{nn}$, is zero, then the columns of $A$ are linearly dependent, so that $A$ is not invertible and $\det A = 0$. If instead all $a_{ii} \ne 0$, we can divide the $i$-th row by $a_{ii}$ and assume the entries on the main diagonal are all 1. Then, adding appropriate multiples of the rows to the rows above (resp. below in the case of a lower triangular matrix), which does not affect the determinant, gives $A \rightsquigarrow \operatorname{id}$, so that $\det A = 1$, so the claim holds in this case.
\[ \det A = \det(A^T), \]
i.e., the determinant does not change when passing from $A$ to its transpose (Definition 3.81).
Proof. For small matrices (of size at most $3 \times 3$), this can be proved directly from the formulae in §4.2.1.
In general, one may argue like this: if $A$ is not invertible, then $A^T$ is not invertible either (by Lemma 3.83). In this case, both sides of the equation are zero. If $A$ is invertible, it is a product of elementary matrices: $A = U_1 \dots U_n$. We then have $A^T = U_n^T \dots U_1^T$. By the product formula (Proposition 4.15), we may therefore assume that $A$ is an elementary matrix. In this case, one checks the claim by inspection:
• For $A = \begin{pmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 0 & \cdots & 1 & & \\ & & \vdots & \ddots & \vdots & & \\ & & 1 & \cdots & 0 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{pmatrix}$ (the identity matrix with two rows swapped), we have $A^T = A$, so the claim clearly holds.
• Likewise, for $A = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & r & & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix}$ (the identity matrix with one diagonal entry replaced by $r$), we have $A^T = A$, so again the claim holds obviously.
• The matrix $\begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & r & 1 & \\ & & & & \ddots \end{pmatrix}$ (the identity matrix with a single additional entry $r$ below the diagonal) is a lower triangular matrix, and its transpose is an upper triangular matrix. Both have determinant 1 according to Proposition 4.17.
The choice of the second row (as opposed to the others) is arbitrary, and the result is the same. However, the presence of the entry $a_{22} = 0$ simplifies the computation.
4.5 Exercises

Exercise 4.1. For which values of $a, b \in \mathbf{R}$ is the following matrix invertible? In this event, what is its inverse?
\[ A = \begin{pmatrix} a & b & 3 \\ 2 & 1 & -1 \\ 1 & -1 & 4 \end{pmatrix}. \]

Exercise 4.5. (Solution at p. 227) Compute the determinant of $\begin{pmatrix} 3 & 0 & 0 \\ 1 & 4 & 0 \\ 2 & -3 & 5 \end{pmatrix}$ in three ways:
• by using Theorem 4.5,
• by using Sarrus' rule (4.11),
• by using Proposition 4.17.

Exercise 4.6. (Solution at p. 228) Compute the determinants of
\[ \begin{pmatrix} 1 & 5 & 8 \\ 40 & -9 & 1 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 5 & 8 \\ 40 & -9 & 1 \\ 1 & 5 & 8 \end{pmatrix}. \]
Chapter 5

Eigenvalues and eigenvectors

Matrices of the form
\[ \begin{pmatrix} a_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} \end{pmatrix} \]
(i.e., the only non-zero entries are on the main diagonal; such matrices are called diagonal matrices) are particularly simple to comprehend and to use in computations. For example, products of them can be computed easily:
\[ \begin{pmatrix} a_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} \end{pmatrix} \begin{pmatrix} b_{11} & & 0 \\ & \ddots & \\ 0 & & b_{nn} \end{pmatrix} = \begin{pmatrix} a_{11} b_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} b_{nn} \end{pmatrix}, \]
i.e., the (diagonal) entries can just be multiplied one by one, something that clearly goes wrong for products of general matrices. Eigenvalues and eigenvectors can, in certain cases, be used to reduce computations for general matrices to those for diagonal matrices.
5.1 Definitions

Definition 5.1. Let $A$ be a square matrix. A real number $\lambda$ is called an eigenvalue of $A$ if there exists a nonzero vector $v \in \mathbf{R}^n$ that satisfies the equation
\[ Av = \lambda v. \]
Such a vector $v$ is called an eigenvector for the eigenvalue $\lambda$. In other words, multiplying the matrix $A$ by the vector $v$ results in a scaled version of $v$, where the scaling factor is the eigenvalue $\lambda$.
Likewise, for a linear map $f : V \to V$, $\lambda$ is an eigenvalue if there is $v \in V$, $v \ne 0$, such that
\[ f(v) = \lambda v. \qquad (5.2) \]
We consider some of the linear maps $f : \mathbf{R}^2 \to \mathbf{R}^2$ of §3.2.1.

Example 5.3. If $f$ is the reflection along, say, the $x$-axis, i.e., $f(x, y) = (x, -y)$, then (5.2) reads
\[ (x, -y) = (\lambda x, \lambda y), \]
i.e., $x = \lambda x$ and $-y = \lambda y$. The first equation holds if $x = 0$ and $\lambda$ is arbitrary, or if $\lambda = 1$ and $x$ is arbitrary. Similarly, the second forces $y = 0$ or $\lambda = -1$. Since, by definition, we have $(x, y) \ne (0, 0)$, we cannot have both $x = 0$ and $y = 0$. If $x \ne 0$, then $\lambda = 1$, which forces $y = 0$. If $x = 0$, then $y \ne 0$ and therefore $\lambda = -1$. Thus, there are two eigenvalues, and the eigenvectors are as follows:
• $\lambda = 1$, with eigenvectors $(x, 0)$ for arbitrary $x \ne 0$,
• $\lambda = -1$, with eigenvectors $(0, y)$ for arbitrary $y \ne 0$.
Before going on, we make the following observation, for any $n \times n$-matrix $A$. The equation
\[ Av = \lambda v \]
can be rewritten as
\[ 0 = Av - \lambda v = Av - (\lambda \operatorname{id}) v = (A - \lambda \operatorname{id}) v. \]
Here we have used standard properties of matrix multiplication (Lemma 3.57). We seek a non-zero vector $v$ satisfying this condition. Such a vector exists if and only if $A - \lambda \operatorname{id}$ is not invertible.
Example 5.4. We consider the shearing matrix $A = \begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}$, where we assume $r \ne 0$ (otherwise $A = \operatorname{id}$). We check the invertibility of the matrix
\[ A - \lambda \operatorname{id} = \begin{pmatrix} 1 - \lambda & r \\ 0 & 1 - \lambda \end{pmatrix}. \]
It suffices to compute the determinant (Theorem 4.13):
\[ \det \begin{pmatrix} 1 - \lambda & r \\ 0 & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2 - r \cdot 0 = (\lambda - 1)^2. \]
This is zero precisely if $\lambda = 1$, i.e., $\lambda = 1$ is the only eigenvalue of $A$. Eigenvectors for this eigenvalue are those vectors such that
\[ (A - 1 \cdot \operatorname{id}) \begin{pmatrix} x \\ y \end{pmatrix} = 0, \]
i.e., $\begin{pmatrix} 0 & r \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ry \\ 0 \end{pmatrix} = 0$. Since we have assumed $r \ne 0$, this implies $y = 0$, and $x$ is arbitrary. Thus, the eigenvectors for $\lambda = 1$ are of the form $\begin{pmatrix} x \\ 0 \end{pmatrix}$ for $x \in \mathbf{R}$.
\[ p(t) = a_n t^n + \dots + a_0, \]
5.3 Eigenspaces

In the above examples, the set of all eigenvectors for a given eigenvalue has a particularly nice shape. This is a general phenomenon:

Definition and Lemma 5.11. Let $A \in \mathrm{Mat}_{n \times n}$ be a square matrix and $\lambda \in \mathbf{R}$ a fixed real number. The set
\[ E_\lambda := \{v \in \mathbf{R}^n \mid Av = \lambda v\} \]
Example 5.13. We compute the eigenspaces of the matrix $A = \begin{pmatrix} 0 & -1 \\ -2 & 0 \end{pmatrix}$. Its characteristic polynomial is
\[ \chi_A(t) = \det \begin{pmatrix} -t & -1 \\ -2 & -t \end{pmatrix} = t^2 - 2. \]
Its zeros, i.e., the eigenvalues of $A$, are $\lambda_{1/2} = \pm\sqrt{2}$. The eigenspace for $\sqrt{2}$ is the solution space of the homogeneous system
\[ \underbrace{\begin{pmatrix} -\sqrt{2} & -1 \\ -2 & -\sqrt{2} \end{pmatrix}}_{= A - \sqrt{2} \cdot \operatorname{id} =: B} \begin{pmatrix} x \\ y \end{pmatrix} = 0. \]
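Not part of the original notes: the same eigenvalues and eigenvectors can be obtained numerically with NumPy.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [-2.0, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)         # approximately [ 1.41421356 -1.41421356], i.e. +-sqrt(2)
print(eigenvectors[:, 0])  # an eigenvector for sqrt(2), a multiple of the one found above
```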
\[ P A P^{-1} = D, \]
where $D = \begin{pmatrix} d_{11} & & 0 \\ & \ddots & \\ 0 & & d_{nn} \end{pmatrix}$ is a diagonal matrix.
\[ \exp A = 1 + A + \frac{A^2}{2} + \frac{A^3}{6} + \frac{A^4}{24} + \dots \]
Here $A^3 = A \cdot A \cdot A$, etc. Instead of computing all these powers of $A$ one after another, one can use the above definition: if $A$ is diagonalizable, i.e., $P A P^{-1} = D$, then $A = (P^{-1} P) A (P^{-1} P) = P^{-1} (P A P^{-1}) P = P^{-1} D P$. Then
\[ A^2 = P^{-1} D P \cdot P^{-1} D P = P^{-1} D^2 P, \qquad A^3 = P^{-1} D^3 P, \]
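Not part of the original notes: a small NumPy illustration of this trick, computing a matrix power through a diagonalization. Note that `np.linalg.eig` uses the convention $A = Q D Q^{-1}$ with eigenvectors as columns of $Q$; the matrix below is a made-up diagonalizable example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a diagonalizable (symmetric) example matrix

eigvals, Q = np.linalg.eig(A)          # A = Q @ diag(eigvals) @ inv(Q)

A_pow_10 = Q @ np.diag(eigvals**10) @ np.linalg.inv(Q)       # only the diagonal is powered
print(np.allclose(A_pow_10, np.linalg.matrix_power(A, 10)))  # True
```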
\[ \underbrace{\lambda_1, \dots, \lambda_1}_{\dim E_{\lambda_1} \text{ times}}, \ \dots, \ \underbrace{\lambda_k, \dots, \lambda_k}_{\dim E_{\lambda_k} \text{ times}}. \]
One can show that above one always has $k \le n$. One does this by proving that the sum of the subspaces $E_{\lambda_1}, \dots, E_{\lambda_k}$ is a direct sum, so that
Example 5.21. We continue the discussion of the rotation matrix $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Its (complex) eigenvalues are $\lambda_1 = i$, $\lambda_2 = -i$. According to Corollary 5.16, $A$ is diagonalizable. We compute the eigenspaces, where we regard $A$ as a complex matrix:
\[ E_i = \{v \in \mathbf{C}^2 \mid (A - i \cdot \operatorname{id}) v = 0\}. \]
If $v = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}$, then
\[ (A - i \cdot \operatorname{id}) v = \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} -i z_1 - z_2 \\ z_1 - i z_2 \end{pmatrix} \overset{!}{=} \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
This means $z_1 = i z_2$ from the second equation; the first is then also satisfied, since $-i z_1 - z_2 = -i(i z_2) - z_2 = z_2 - z_2 = 0$. Thus
\[ E_i = \{(iz, z) \mid z \in \mathbf{C}\}, \]
5.5 Exercises

Exercise 5.1. Let $A = \begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & 0 \\ 1 & -1 & 2 \end{pmatrix}$. Following Method 5.15, decide whether $A$ is diagonalizable.
Help: you will find that the eigenvalues of $A$ are among the numbers 0, 1, 2, 3. You will be able to choose basis vectors of the eigenspaces all of whose coordinates are $-1$, 0, 1.

Exercise 5.2. Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Show that:
• $\chi_A(t) = t^2 - \operatorname{tr}(A)\, t + \det A = t^2 - (a + d)t + (ad - bc)$. Here $\operatorname{tr}(A)$ is the trace of $A$, cf. Exercise 3.24.
• The eigenvalues of $A$ are
\[ \lambda_{1/2} = \frac{a + d}{2} \pm \sqrt{\frac{(a - d)^2}{4} + bc}. \]

Exercise 5.3. For each of the following matrices, compute $\chi_A(t)$, the eigenvalues of $A$, and the eigenspaces for these eigenvalues. Also decide whether $A$ is diagonalizable and compute an eigenbasis if one exists.
(1) $A = \begin{pmatrix} 3 & 5 \\ 1 & -1 \end{pmatrix}$
(2) $A = \begin{pmatrix} 0 & 1 & 0 \\ 3 & 0 & 1 \\ 2 & 0 & 0 \end{pmatrix}$
(3) $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & \frac{3}{2} \\ 0 & 0 & 1 \end{pmatrix}$
(4) $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$

Exercise 5.4. Consider the matrix $A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix}$. Compute its characteristic polynomial, its eigenvalues and its eigenspaces. Is $A$ diagonalizable? If so, find a basis of $\mathbf{R}^3$ such that the associated matrix is a diagonal matrix, as in Definition 5.14.
Let $f : \mathbf{R}^3 \to \mathbf{R}^3$ be the linear map such that $f(1, 0, 1) = (2, 0, 2)$, $\ker f = L((1, 1, 1))$ and $f(2, 0, -3) = (-2, 0, 3)$. Compute the matrix of $f$ with respect to the standard basis.
Is $f$ diagonalizable?
Chapter 6

Euclidean spaces
Lemma 6.4. The norm $\|v\|$ is the length of the line segment from the origin to $v$. For $v, w \in \mathbf{R}^2$, there holds
Proof. The formula for the norm follows from repeatedly applying the Pythagorean theorem. Illustrating this for $n = 3$, we see that the line segment (shown dotted below) from the origin $O = (0, 0, 0)$ to the point $(v_1, v_2, 0)$ has length $\sqrt{v_1^2 + v_2^2}$. Therefore the length of the segment from $O$ to $v$ is
\[ \sqrt{\left(\sqrt{v_1^2 + v_2^2}\right)^2 + v_3^2} = \sqrt{v_1^2 + v_2^2 + v_3^2}. \]

[Figure: the point $v = (v_1, v_2, v_3)$, its projection $(v_1, v_2, 0)$ onto the $xy$-plane, and the dotted segments from the origin.]
[Figure: vectors $v$ and $w$, the difference $v - w$, their norms $\|v\|$, $\|w\|$ and the angle between them.]

which is positive definite. By contrast, for $g(x, y) = x^2 - y^2$, it is $\begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}$, which is indefinite. One proves in analysis that the positive definiteness of the Hesse matrix implies that there is a local minimum at a given point $(x, y)$, provided that $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0$ at this point. Thus, $f$ has a local minimum at the point $(0, 0)$, but $g$ does not.
\[ v \perp w. \]
We call
\[ \|v\| := \sqrt{\langle v, v \rangle} \ (\in \mathbf{R}_{\ge 0}) \]
the norm of the vector $v$. For $v, w \in V$, the distance between $v$ and $w$ is defined as
\[ d(v, w) := \|v - w\|. \]

Proposition 6.17. Let $(V, \langle -, - \rangle)$ be a Euclidean space. For each $v, w \in V$, there holds:
(1) $\|v\| \ge 0$,
(2) $\|v\| = 0$ if and only if $v = 0$,
(3) $\|rv\| = |r| \, \|v\|$ for $r \in \mathbf{R}$,
Proof. The first and third statements are immediate. The second holds since $\langle -, - \rangle$ is (by definition) positive definite.
[Figure: a line $W$ through the origin and its orthogonal complement $W^{\perp}$ in the plane.]
Example 6.22. Consider the subspace $W = L\!\left( \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \right) \subset \mathbf{R}^3$ (with its standard scalar product). We compute $W^{\perp}$. A vector $x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \in \mathbf{R}^3$ will be orthogonal to $W$ if and only if it is orthogonal to $v_1 = \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$. This follows from the linearity of $\langle x, - \rangle$. We make the conditions $x \perp v_1$ and $x \perp v_2$ explicit:
\[ x \perp v_1 \ \Rightarrow\ x_1 + 2x_2 + 4x_3 = 0, \qquad x \perp v_2 \ \Rightarrow\ x_1 + x_2 = 0. \]
We solve this homogeneous system:
\[ \begin{pmatrix} 1 & 2 & 4 \\ 1 & 1 & 0 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 2 & 4 \\ 0 & -1 & -4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 2 & 4 \\ 0 & 1 & 4 \end{pmatrix}, \]
which shows that $x_3$ is a free variable, and that the solution space of the system, i.e., $W^{\perp}$, is the subspace
\[ W^{\perp} = L\!\left( \begin{pmatrix} 4 \\ -4 \\ 1 \end{pmatrix} \right). \]
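Not part of the original notes: numerically, $W^{\perp}$ is the null space of the matrix whose rows are $v_1$ and $v_2$; assuming SciPy is available, an orthonormal basis of it can be computed directly.

```python
import numpy as np
from scipy.linalg import null_space

V = np.array([[1.0, 2.0, 4.0],
              [1.0, 1.0, 0.0]])   # rows are v1 and v2

basis = null_space(V)             # orthonormal basis of W^perp, as columns
print(basis.ravel())              # a multiple of (4, -4, 1) / ||(4, -4, 1)||
print(V @ basis)                  # approximately zero: orthogonal to both v1 and v2
```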
In this sum, all terms except the one with $k = l$ are zero, since $u_k \perp u_l$ for $k \ne l$. We also have $\langle u_l, u_l \rangle = 1$, which shows that $a_l = 0$, and therefore the linear independence of the given vectors.

Example 6.28. The standard basis $e_1, \dots, e_n$ of $\mathbf{R}^n$ is an orthonormal basis. For $v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$, we have $\langle e_i, v \rangle = v_i$, and the representation in (6.27) is the usual expansion of $v$:
\[ v = v_1 e_1 + \dots + v_n e_n. \]
In general, the identity (6.27) is a convenient way to compute the coordinates of a given vector in terms of an (orthonormal) basis.
Proof. In each step, the vector $w_r'$ is constructed in such a way that $w_r'$ is orthogonal to the preceding vectors $w_1, \dots, w_{r-1}$, cf. (6.25). The division by the norms of the vectors $w_r'$ ensures that $\|w_r\| = 1$. Note that this is possible, because $\|w_r'\| > 0$: indeed $w_r' \ne 0$ and $\langle -, - \rangle$ is positive definite.

[Figure: the vectors $v_1$, $v_2$ and the orthonormal vectors $w_1$, $w_2$ produced by the Gram–Schmidt process.]
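Not part of the original notes: a minimal sketch of the Gram–Schmidt procedure in NumPy, following the recipe described above (subtract the projections onto the previously constructed vectors, then normalize).

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list."""
    ortho = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in ortho:
            w = w - np.dot(u, w) * u   # remove the component along u
        ortho.append(w / np.linalg.norm(w))
    return ortho

w1, w2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 2.0, 4.0]])
print(np.dot(w1, w2))                          # approximately 0
print(np.linalg.norm(w1), np.linalg.norm(w2))  # both 1.0
```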
For a proof of this, see, e.g., [Nic95, Theorem 8.2.2]. The vectors of an orthonormal eigenbasis are also called the principal axes of $A$. The theorem is sometimes called the principal axes theorem. We only point out that the difficult direction is to show that (3) $\Rightarrow$ (1). One does this by proving that a symmetric real matrix has only real eigenvalues (as opposed to complex ones). For $2 \times 2$-matrices, one can see this by direct computation (see also Exercise 5.2): the characteristic polynomial of a symmetric $2 \times 2$-matrix $A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$ is
[Figure: a vector $v_0$ decomposed as $v_0 = w + v$ with $w = p_W(v_0)$ and $v = p_{W^{\perp}}(v_0)$; the hyperplane $H = v_0 + W$.]
(2) $a \in H^{\perp}$,
(3) $H = \{x \in \mathbf{R}^n \mid \langle x, a \rangle = d\}$.
This vector can be computed as
\[ a = \frac{v}{\|v\|}, \]
where $v$ is the unique element in $H \cap W^{\perp}$ or (equivalently) the point in $H$ that is closest to the origin.
The equation $\langle x, a \rangle = d$ (which is a linear equation in the unknowns $x_1, \dots, x_n$) is called the Cartesian equation of the hyperplane.
Example 6.45. We continue the example in Example 6.22:
\[ W = L\!\left( \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \right), \qquad W^{\perp} = L\!\left( \begin{pmatrix} 4 \\ -4 \\ 1 \end{pmatrix} \right), \]
and consider the hyperplane $H = \begin{pmatrix} 11 \\ 11 \\ 11 \end{pmatrix} + W$. We compute $H \cap W^{\perp}$.
Thus, $c = \frac{1}{3}$ above, so that
\[ v = \frac{1}{3} \begin{pmatrix} 4 \\ -4 \\ 1 \end{pmatrix}. \]
\[ w = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \perp \begin{pmatrix} 0 \\ \lambda \\ \lambda_1 \end{pmatrix}, \]
[Figure: two lines $X = v + W$ and $X' = W'$, the subspace $Z = W + W'$ and its orthogonal complement $Z^{\perp}$, together with the projections $p_Z(v)$ and $m = p_{Z^{\perp}}(v)$.]
The two subspaces $W$ and $W'$ are spanned by $(1, -1, 2)$ and $(-1, 1, 2)$, respectively. These two vectors are linearly independent, so that the lines are not parallel. We determine whether they have an intersection point by solving the system
\[ (1 + s,\ -3 - s,\ 5 + 2s) = (4 - t,\ -3 + t,\ 6 + 2t). \]
6.7 Exercises

Exercise 6.1. Let $V = P_{\le 2} = \{at^2 + bt + c \mid a, b, c \in \mathbf{R}\}$ be the vector space of (real) polynomials of degree $\le 2$. We consider the scalar product in Example 6.16(3), i.e.,
\[ \langle p, q \rangle = \int_{-1}^{1} p(x) q(x)\, dx. \]
\[ x_1 - x_2 + x_3 + 2x_4 = 0. \]
(1) Determine whether $L$ and $M$ are the same line, parallel, or skew.
(2) Compute the Cartesian equation of the plane that contains the line $M$ and that is parallel to $L$. (Recall that a Cartesian equation is of the form $\langle x, a \rangle = d$ for an appropriate vector $a$ and an appropriate $d \in \mathbf{R}$.)
(3) Given the point $l = (0, 1, -1) \in L$, compute a point $m \in M$ such that the line passing through $l$ and $m$ is parallel to the plane defined by the equation $3x - z = 0$.
(4) Consider the family of planes $\pi_\alpha : z = \alpha$, for a parameter $\alpha \in \mathbf{R}$. Let $r_\alpha = L \cap \pi_\alpha$ and $s_\alpha = M \cap \pi_\alpha$, and let $m_\alpha$ be the midpoint of the segment with endpoints $r_\alpha$ and $s_\alpha$. Verify that the points $m_\alpha$ all lie on the same line. Moreover, determine the parametric equation of that line.
Appendix A

Sets

• $\{\ \dots\ \}$ (a set): the elements of the set are written inside the braces. Example: $\{1, 2, 3\}$ denotes the set consisting of the numbers 1, 2 and 3.
• $\{\ \dots \mid \dots\ \}$ (the set of all ... satisfying the condition ...): this denotes the set consisting of all objects satisfying a certain condition. Example: {all vegetables V | I eat V regularly} consists of all the vegetables that I eat regularly.
• $\in$ (is an element of): if $M$ is a set, the expression $x \in M$ means that $x$ is a member of $M$. Example: ♢ $\in$ {♢, ♡, ♠, ♣}.
• $\notin$ (is not an element of): if $M$ is a set, the expression $x \notin M$ means that $x$ is not a member of $M$. Example: ♢ $\notin$ {♡, ♠, ♣}.
Logic

Symbol: ⇒ — reads "implies". If A and B are two (mathematical) statements, then "A ⇒ B" means that if A holds then B also holds. Example: x ≥ 1 ⇒ x² ≥ 1.

Symbol: ⇔ — reads "is equivalent to". If A and B are two mathematical statements, then "A ⇔ B" is an abbreviation for A ⇒ B and (at the same time) B ⇒ A. Example: x ≥ 0 ⇔ x + 1 ≥ 1.

Symbol: := — reads "is defined to be". x := 2 means that we define the variable x to take the value 2.
Appendix B

Trigonometric functions

angle (in degrees)   radian (no unit)
180°                 π
90°                  π/2
α                    α · π/180
180 · r / π          r
[Figure: the point on the unit circle at angle α has coordinates (cos(α), sin(α)).]
Given any radian r, the ray that has an angle r between itself
and the positive x-axis meets the circle with radius 1 and mid-point
p0, 0q in exactly one point p. The trigonometric functions sin and
cos are defined to be the coordinates of that point:
\[ p = (\cos(r), \sin(r)). \]
[Figure: the graphs of sin(x) and cos(x).]
Appendix C
Solutions of selected exercises
We then add p´2q times the second line to the fourth (equivalently,
subtract 2 times the second line from the fourth):
\[ \begin{pmatrix} 1 & -1 & 1 & 0 & -2 \\ 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
This matrix is in row-echelon form, with leading 1's in the first and third columns. We finally bring it into reduced row-echelon form
by subtracting the second from the first line, which gives
\[ \begin{pmatrix} 1 & -1 & 0 & 1 & -3 \\ 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
The matrix has no row of the form (0 . . . 0 | 1), so the system does
have a solution. The first column of the matrix corresponds to the
variable x1 etc., so that the free variables are x2 and x4 . We let
x2 “ α, x4 “ β, where α and β are arbitrary real numbers. The
non-free variables x1 and x3 are uniquely determined by α and β.
To compute them, we use the equations obtained from the matrix:
\[ x_3 - \beta = 1, \qquad x_1 - \alpha + \beta = -3, \]
which we solve as $x_3 = 1 + \beta$ and $x_1 = \alpha - \beta - 3$. Thus, the solution set is
\[ \{(\alpha - \beta - 3,\ \alpha,\ 1 + \beta,\ \beta) \mid \alpha, \beta \in \mathbb{R}\}. \]
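One can double-check this solution set with sympy: row-reducing the matrix above reproduces the reduced row-echelon form, and any choice of α, β gives a solution. This is only a verification sketch, assuming sympy is available:

```python
from sympy import Matrix

M = Matrix([[1, -1, 1,  0, -2],     # augmented matrix from the row-echelon step above
            [0,  0, 1, -1,  1],
            [0,  0, 0,  0,  0],
            [0,  0, 0,  0,  0]])

R, pivots = M.rref()
print(R)        # the reduced row-echelon form computed above
print(pivots)   # (0, 2): leading ones for x1 and x3; x2 and x4 are free

alpha, beta = 1, 2                                     # an arbitrary choice
x = Matrix([alpha - beta - 3, alpha, 1 + beta, beta])
print(M[:, :4] * x == M[:, 4])                         # True: x solves the system
```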
This simplifies to
\[ 4t + 2 = 5, \]
which has the solution $t = \frac{3}{4}$. Since the variable $q$ does not appear in that equation, it is a free variable. Thus, for all $q \in \mathbb{R}$, the vector
\[ \left(x_1 = 1 + \tfrac{3}{4},\ x_2 = \tfrac{3}{4} + q,\ x_3 = \tfrac{1}{4} + 2q\right) = \left(\tfrac{7}{4},\ \tfrac{3}{4} + q,\ \tfrac{1}{4} + 2q\right) \]
satisfies the requested conditions. Note that these are infinitely
many solutions.
\begin{align*} p(1) &= a_0 + a_1 + a_2 + a_3 = 0 \\ p(2) &= a_0 + a_1 \cdot 2 + a_2 \cdot 2^2 + a_3 \cdot 2^3 = 3. \end{align*}
That is,
\begin{align*} a_0 + a_1 + a_2 + a_3 &= 0 \\ a_0 + 2a_1 + 4a_2 + 8a_3 &= 3. \end{align*}
Subtracting the first equation from the second, and then the new second row from the first, yields a reduced row echelon matrix:
\[ \begin{pmatrix} 1 & 0 & -2 & -6 & -3 \\ 0 & 1 & 3 & 7 & 3 \end{pmatrix}. \]
a0 “ ´3, a1 “ 3,
so that
ppxq “ ´3 ` 3x
is a solution to the problem. Another solution would be a2 “ a3 “ 1,
which gives a1 “ ´7 and a0 “ 5, i.e.,
ppxq “ 5 ´ 7x ` x2 ` x3 .
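Both candidate polynomials can be checked directly; a small Python verification (nothing beyond plain arithmetic is assumed):

```python
def p1(x):
    return -3 + 3 * x                  # first solution

def p2(x):
    return 5 - 7 * x + x**2 + x**3     # second solution (a2 = a3 = 1)

for p in (p1, p2):
    print(p(1), p(2))                  # prints "0 3" twice: both satisfy p(1) = 0, p(2) = 3
```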
Comparing the entries of the matrix, this gives the linear system
\begin{align*} \alpha + 3\beta &= -1 \\ \alpha + 2\beta &= 0 \\ 2\alpha + 3\beta &= 2 \\ 2\alpha + 5\beta &= 4. \end{align*}
The second gives α “ ´2β, inserting into the first gives ´2β ` 3β “
´1, which means β “ ´1. However, inserting into the third equation
gives ´4β ` 3β “ 2, so that β “ ´2, contradicting the previous
equation. Thus, there is no solution, so C is not a linear combination
of A and B.
p1 1 1 1q.
This matrix is already in reduced row echelon form: the leading one
is for the variable x, the variables y, z, t are free variables. Thus,
S “ tp´α ´ β ´ γ, α, β, γq | α, β, γ P Ru.
We have
x1 “ a ` 2b
x2 “ ´a ` b
x3 “ ´2b ` c
x4 “a`c
such that
3b ´ 3c “ 0
3a ` 2b ` 2c “ 0.
\begin{align*} S \cap T &= \left\{ -\tfrac{4}{3}c\,(1, -1, 0, 1) + c\,(2, 1, -2, 0) + c\,(0, 0, 1, 1) \;\middle|\; c \in \mathbb{R} \right\} \\ &= \left\{ c\left[ \left(-\tfrac{4}{3}, \tfrac{4}{3}, 0, -\tfrac{4}{3}\right) + (2, 1, -2, 0) + (0, 0, 1, 1) \right] \;\middle|\; c \in \mathbb{R} \right\} \\ &= \left\{ c\left(\tfrac{2}{3}, \tfrac{7}{3}, -1, -\tfrac{1}{3}\right) \;\middle|\; c \in \mathbb{R} \right\} \\ &= L\left(\left(\tfrac{2}{3}, \tfrac{7}{3}, -1, -\tfrac{1}{3}\right)\right). \end{align*}
\begin{align*} a + 2b &= -\alpha \\ b &= -\alpha + 3\beta \\ a &= \alpha. \end{align*}
We solve this system: the last equation gives $a = \alpha$ and, from the first equation, $b = -\alpha$. The second equation then implies $\beta = 0$. There is no condition on $\alpha$, thus $\alpha = r$ for an arbitrary real number $r \in \mathbb{R}$.
Instead of solving the above system by hand, we may also use
Gaussian elimination to solve this linear system. The matrix is the
The three leading ones are for the variables a, b, β, and α is a free
variable, so let α “ r, where r P R is an arbitrary real number. This
gives again β “ 0, b ` r ´ 3β “ 0, so that b “ ´r and a “ r.
Thus the intersection W1 X W2 consists of the vectors
Thus,
W1 X W2 “ Lpp1, ´1, 1qq,
so a basis of W1 X W2 consists of (the single vector) p1, ´1, 1q, and
in particular
dim W1 X W2 “ 1.
We now consider W1 ` W2 . According to Definition 2.34,
W1 ` W2 “ tw1 ` w2 | w1 P W1 , w2 P W2 u,
these are, we apply Method 2.53 and Method 2.44. The matrix
built out of the four vectors is
\[ \begin{pmatrix} v_1 \\ v_2 \\ w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \\ 0 & 3 & 0 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & -1 & 2 \\ 0 & 3 & 0 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 6 \end{pmatrix}. \]
This matrix has three leading ones, so the vectors are linearly inde-
pendent as claimed.
We “guess” v “ p1, 2, 3, 4q and check that these vectors v1 , v2 , v3 , v
are linearly independent. By Lemma 2.51, this will then imply that
v is not a linear combination of the other vectors, so that $W \subsetneq \mathbb{R}^4 = L(v_1, v_2, v_3, v)$. The row reduction
\[ \begin{pmatrix} 1 & 0 & 1 & 0 \\ 2 & 0 & 1 & 1 \\ 0 & 0 & 1 & 3 \\ 1 & 2 & 3 & 4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & 3 \\ 0 & 2 & 2 & 4 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & 3 \\ 0 & 1 & 1 & 2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 1 & 3 \\ 0 & 1 & 1 & 2 \end{pmatrix} \]
shows this: after reordering the rows, the matrix is in row-echelon form with a leading entry in every column, so the four vectors $v_1, v_2, v_3, v$ are indeed linearly independent.
This has two columns not having a leading one, namely the last two.
These are the free variables, say x3 “ a, x4 “ b for a, b P R. To
determine a basis of W , we therefore have to consider the system
x1 ` x2 ` a “ 0
x2 ` a ` 3b “ 0
\begin{align*} a + b + (-a + 2b + c) &= 0 \\ a - 3(2a - 2b - c) &= 0, \end{align*}
which simplifies to
\begin{align*} 3b + c &= 0 \\ -5a + 6b + 3c &= 0. \end{align*}
\begin{align*} 2x_1 - x_2 + x_3 + x_4 &= 0 \\ 5x_2 - 3x_3 - 5x_4 &= 0 \\ 3x_1 - 4x_2 + 3x_3 + 4x_4 &= 0. \end{align*}
For the second task, one has to solve the non-homogeneous sys-
tem
\begin{align*} 2x_1 - x_2 + x_3 + x_4 &= 1 \\ 5x_2 - 3x_3 - 5x_4 &= -3 \\ 3x_1 - 4x_2 + 3x_3 + 4x_4 &= 3. \end{align*}
This solution set is not a subspace, since the zero vector $\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ is not a solution of the system: the left-hand side of all three equations is 0, while the right-hand sides are not.
If $t \neq -4$, then we can further divide the last row by $t + 4$ (which is $\neq 0$), and the rank is then 3. For $t = -4$, the rank is 2.
\begin{align*} x_1 + 3x_2 - x_3 + 2x_4 &= 1 \\ x_1 + 5x_2 + x_3 + x_4 &= \alpha \\ 2x_1 + 4x_2 - 4x_3 + 5x_4 &= 0. \end{align*}
This means that, for all $t$, the kernel of $f$ does not consist only of the zero vector; hence the answer to the question is no.
the equations
\begin{align*} x_1 + x_2 - \tfrac{1}{2}a + \tfrac{1}{2}b &= 0 \\ x_2 + \tfrac{1}{2}a &= 0. \end{align*}
This gives $x_2 = -\tfrac{1}{2}a$ and $x_1 = a - \tfrac{1}{2}b$. Therefore,
\begin{align*} \ker f &= \left\{ \left(a - \tfrac{1}{2}b,\ -\tfrac{1}{2}a,\ a,\ b\right) \;\middle|\; a, b \in \mathbb{R} \right\} \\ &= \left\{ a\left(1, -\tfrac{1}{2}, 1, 0\right) + b\left(-\tfrac{1}{2}, 0, 0, 1\right) \;\middle|\; a, b \in \mathbb{R} \right\} \\ &= L\left(\left(1, -\tfrac{1}{2}, 1, 0\right), \left(-\tfrac{1}{2}, 0, 0, 1\right)\right). \end{align*}
These two vectors form a basis of ker f .
In order to determine im f X ker f , we need to consider elements
of
im f “ tap2, ´1, 1, 0q ` bp´1, 0, 1, 2q | a, b P Ru
that also belong to the kernel, i.e., the vector p2a ´ b, ´a, a ` b, 2bq
must lie in ker f . This means that
\[ A \begin{pmatrix} 2a - b \\ -a \\ a + b \\ 2b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}. \]
This simplifies to
\begin{align*} \tfrac{a}{2} - \tfrac{b}{2} &= 0 \\ -\tfrac{a}{2} + \tfrac{b}{2} &= 0. \end{align*}
This is equivalent to the condition a “ b. Therefore,
ker f X im f “ tap2, ´1, 1, 0q ` ap´1, 0, 1, 2q | a P Ru
“ tap1, ´1, 2, 2q | a P Ru
“ Lpp1, ´1, 2, 2qq.
That is, the vector p1, ´1, 2, 2q is a basis of ker f X im f .
This matrix has three leading ones, so that the vectors do form a
basis. We compute
\[ v_4 = f(v_3) = \begin{pmatrix} 2 & -1 & 0 \\ 1 & 0 & 2 \\ 0 & 2 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 5 \\ 0 \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \\ 10 \end{pmatrix}. \]
The equation $v_4 = a_1 v_1 + a_2 v_2 + a_3 v_3$ is the linear system
\begin{align*} a_1 + a_2 + a_3 &= -3 \\ a_1 + a_2 + 5a_3 &= 1 \\ 2a_2 &= 10. \end{align*}
We solve this: the last equation gives a2 “ 5, which leads to
a1 ` 5 ` a3 “ ´3
a1 ` 5 ` 5a3 “ 1.
Therefore
a1 ` a3 “ ´8
a1 ` 5a3 “ ´4.
This can be solved to $a_3 = 1$ and $a_1 = -9$. Thus, $(a_1, a_2, a_3) = (-9, 5, 1)$ are the coordinates of $v_4$ in the basis $v_1, v_2, v_3$.
We now determine the matrix of f with respect to the basis
v1 , v2 , v3 (both in the domain and the codomain of f ). We therefore
write each f pvi q as a linear combination of these three vectors:
\begin{align*} f(v_1) &= v_2 = 0 v_1 + 1 v_2 + 0 v_3 \\ f(v_2) &= v_3 = 0 v_1 + 0 v_2 + 1 v_3 \\ f(v_3) &= v_4 = (-9) v_1 + 5 v_2 + 1 v_3. \end{align*}
According to Proposition 3.42, the matrix of $f$ with respect to $v_1, v_2, v_3$ is
\[ \begin{pmatrix} 0 & 0 & -9 \\ 1 & 0 & 5 \\ 0 & 1 & 1 \end{pmatrix}. \]
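Numerically, the same matrix can be obtained as $B^{-1}AB$, where the columns of $B$ are the vectors $v_1, v_2, v_3$ (read off from the linear system above: its $k$-th equation collects the $k$-th coordinates of the three vectors). A cross-check sketch, assuming numpy:

```python
import numpy as np

A = np.array([[2, -1, 0],
              [1,  0, 2],
              [0,  2, -1]], dtype=float)   # matrix of f in the standard basis

B = np.array([[1, 1, 1],
              [1, 1, 5],
              [0, 2, 0]], dtype=float)     # columns: v1, v2, v3

M = np.linalg.inv(B) @ A @ B               # matrix of f with respect to v1, v2, v3
print(np.round(M))
# [[ 0.  0. -9.]
#  [ 1.  0.  5.]
#  [ 0.  1.  1.]]
```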
After multiplying the first two rows with ´1, we get a matrix with
three leading ones. Therefore dim im f “ 3, which implies that
$\operatorname{im} f = \mathbb{R}^3$. This tells us that $\dim \ker f = 1$, so $\ker f$ is generated by any non-zero vector in it. An example of such a vector is $\begin{pmatrix} 1 \\ 1 \\ -1 \\ -1 \end{pmatrix}$, which therefore constitutes a basis of $\ker f$.
\begin{align*} v_1 &\mapsto v_1 = (1, -1) = e_1 - e_2, \\ v_2 &\mapsto v_2 = (3, -1) = 3e_1 - e_2. \end{align*}
Thus $H = \begin{pmatrix} 1 & 3 \\ -1 & -1 \end{pmatrix}$.
We now compute the base change matrix K from the standard
basis to the basis v “ tv1 , v2 , v3 u:
\[ e_1 = (1, 0, 0) \mapsto (1, 0, 0) = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3. \]
\begin{align*} 1 &= \alpha_1 + 2\alpha_2 - \alpha_3 \\ 0 &= \alpha_2 - \alpha_3 \\ 0 &= \alpha_1 + \alpha_2 - \alpha_3. \end{align*}
The second equation holds for all x, and the first is equivalent to
the third. Therefore, the system is equivalent to the one consisting
of the single equation
x1 ´ x3 “ 0.
This corresponds to the system
Bx “ x,
This simplifies to
\[ \underbrace{\begin{pmatrix} -4 & 0 & 0 \\ 0 & -1 & 2 \\ 0 & 2 & -4 \end{pmatrix}}_{=:B} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \]
whose solution set is $L((0, 2, 1))$.
We have
\[ H = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix}. \]
We compute the inverse using Theorem 3.78:
\[ \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ -2 & 1 & 0 & 0 & 1 & 0 \\ -3 & 0 & 1 & 0 & 0 & 1 \end{array} \right) \rightsquigarrow \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 2 & 1 & 0 \\ 0 & 0 & 1 & 3 & 0 & 1 \end{array} \right) = (\mathrm{id} \mid H^{-1}). \]
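The same computation can be reproduced with sympy by row-reducing the augmented matrix (H | id); this is just a verification sketch, assuming sympy is available:

```python
from sympy import Matrix, eye

H = Matrix([[ 1, 0, 0],
            [-2, 1, 0],
            [-3, 0, 1]])

R, _ = H.row_join(eye(3)).rref()   # row-reduce (H | id)
H_inv = R[:, 3:]                   # the right-hand block is H^{-1}

print(H_inv)               # Matrix([[1, 0, 0], [2, 1, 0], [3, 0, 1]])
print(H_inv == H.inv())    # True
```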
C.4 Determinants
The roots of this polynomial, i.e., the eigenvalues, are 2 and 0 (regardless of the value of a). The exponent of $t - 2$ in the above polynomial is 2, the one for $t$ is 1. This implies that
Thus, the equation p1, 0, 0q “ av1 ` bv2 ` cv3 amounts to the linear
system
\begin{align*} 1 &= a + b + c \\ 0 &= b + c \\ 0 &= a + b + 2c. \end{align*}
One solves this: a “ 1, b “ 1, c “ ´1. Similarly, one solves the
linear system p0, 1, 0q “ av1 ` bv2 ` cv3 . Its solution is a “ ´1,
b “ 1, c “ 0. Finally, for p0, 0, 1q “ av1 ` bv2 ` cv3 one gets the
solution a “ 0, b “ ´1, c “ 1.
Thus, since f is linear (!), we have
f p1, 0, 0q “ f pv1 ` v2 ´ v3 q
“ f pv1 q ` f pv2 q ´ f pv3 q
“ p0, 0, 0q ` p1, 0, 3q ´ 4p1, 1, 2q
“ p´3, ´4, ´5q.
Likewise
f p0, 1, 0q “ f p´v1 ` v2 q “ ´f pv1 q ` f pv2 q “ p1, 0, 3q
f p0, 0, 1q “ f p´v2 ` v3 q “ ´f pv2 q ` f pv3 q “ p3, 4, 5q
\begin{align*} &= (-3, 0, 1) - \left(\frac{-3}{\sqrt{2}}\right) \cdot \frac{1}{\sqrt{2}}\,(1, 1, 0) \\ &= (-3, 0, 1) + \left(\tfrac{3}{2}, \tfrac{3}{2}, 0\right) \\ &= \left(-\tfrac{3}{2}, \tfrac{3}{2}, 1\right). \end{align*}
We finally normalize this vector: $\|w_2'\| = \sqrt{\tfrac{9}{4} + \tfrac{9}{4} + 1} = \sqrt{\tfrac{11}{2}}$. Therefore
\[ w_2 = \sqrt{\tfrac{2}{11}} \left(-\tfrac{3}{2}, \tfrac{3}{2}, 1\right). \]
According to Theorem 6.24, the orthogonal projection is given by
\[ \langle t, w_1 \rangle w_1 + \langle t, w_2 \rangle w_2 = \left[ (0, 1, 5) \cdot \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix} \right] w_1 + \left[ (0, 1, 5) \cdot \sqrt{\tfrac{2}{11}} \begin{pmatrix} -3/2 \\ 3/2 \\ 1 \end{pmatrix} \right] w_2 = \frac{1}{\sqrt{2}}\, w_1 + \sqrt{\tfrac{2}{11}}\, \frac{13}{2}\, w_2. \]
An additional observation is the following: since $t = t_U + t^\perp$ is the unique decomposition, we can compute $t_U = t - t^\perp$. By the positive definiteness of $\langle -, - \rangle$ we have $(U^\perp)^\perp = U$, so we can also apply Gram–Schmidt to $U^\perp$. This is somewhat simpler since $U^\perp$ has dimension 1. Applying Gram–Schmidt to $v = (1, -1, 3)$, we have $\|v\| = \sqrt{11}$, so that $w = \frac{1}{\sqrt{11}}(1, -1, 3)$ is an orthonormal basis of $U^\perp$. Then, again by Theorem 6.24,
\begin{align*} t^\perp &= \langle t, w \rangle w \\ &= \left[ (0, 1, 5) \cdot \frac{1}{\sqrt{11}}(1, -1, 3) \right] w \\ &= \frac{14}{\sqrt{11}} \cdot \frac{1}{\sqrt{11}}\,(1, -1, 3) \\ &= \left(\tfrac{14}{11}, -\tfrac{14}{11}, \tfrac{42}{11}\right). \end{align*}
Therefore
\[ t_U = t - t^\perp = (0, 1, 5) - \left(\tfrac{14}{11}, -\tfrac{14}{11}, \tfrac{42}{11}\right) = \left(-\tfrac{14}{11}, \tfrac{25}{11}, \tfrac{13}{11}\right). \]
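The decomposition can be verified numerically; the sketch below (assuming numpy) projects t onto $U^\perp = L((1, -1, 3))$ and checks that the two pieces add up to t:

```python
import numpy as np

t = np.array([0.0, 1.0, 5.0])
v = np.array([1.0, -1.0, 3.0])                # spans U^perp

t_perp = (np.dot(t, v) / np.dot(v, v)) * v    # projection onto U^perp: (14/11)(1, -1, 3)
t_U = t - t_perp                              # projection onto U

print(t_perp)                             # [ 1.2727... -1.2727...  3.8181...]
print(t_U)                                # [-1.2727...  2.2727...  1.1818...]
print(np.isclose(np.dot(t_U, v), 0.0))    # True: t_U is orthogonal to U^perp, i.e. lies in U
print(np.allclose(t_U + t_perp, t))       # True: t = t_U + t_perp
```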
This is quickly solved as $\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}$, so that the projection of $t$ onto $U$ is $a \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}$.
p1, 1, 0q “ p1, 5, 6q ` tK
Solution of Exercise 6.9: The two lines have underlying vector spaces $W = L\left(\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\right)$ and $W' = L\left(\begin{pmatrix} 1 \\ -1 \\ 1/2 \end{pmatrix}\right)$, respectively. These two one-dimensional subspaces are not contained in each other: the two vectors are linearly independent.
Let $d = v_0 - v_0' = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} - \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}$. We compute the orthogonal complement of
\[ Z = W + W' = L\left( \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1/2 \end{pmatrix} \right) = L\left( \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \right). \]
A vector $(x, y, z)$ lies in $Z^\perp$ precisely if
\begin{align*} x + y + z &= 0 \\ 2x - 2y + z &= 0, \end{align*}
which can be solved as $L\left(\begin{pmatrix} 3 \\ 1 \\ -4 \end{pmatrix}\right)$. This vector has norm $\sqrt{26}$, and is renormalized to norm 1 as $w^\perp = \frac{1}{\sqrt{26}} \begin{pmatrix} 3 \\ 1 \\ -4 \end{pmatrix}$. We compute the orthogonal projection of $d$ onto $Z^\perp$ as
\[ p_{Z^\perp}(d) = \langle d, w^\perp \rangle w^\perp = \frac{-6 + 1 + 0}{26} \begin{pmatrix} 3 \\ 1 \\ -4 \end{pmatrix} = -\frac{5}{26} \begin{pmatrix} 3 \\ 1 \\ -4 \end{pmatrix}. \]
This vector has norm $\frac{5}{\sqrt{26}}$.
In particular, this distance is positive. This, together with the
above observation that W ‰ W 1 means that the lines are skew.
(If one is only required to show that the lines are skew, without
computing their distance, one can also solve the linear system given
by the intersection of L and L1 ; one finds out that this system has
no solution, so the lines are skew.)
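The distance (and hence the skewness) can be cross-checked with the usual cross-product formula; a numerical sketch, assuming numpy:

```python
import numpy as np

v0  = np.array([0.0, 1.0, 0.0])    # point on the first line
w   = np.array([1.0, 1.0, 1.0])    # its direction
v0p = np.array([2.0, 0.0, 0.0])    # point on the second line
wp  = np.array([1.0, -1.0, 0.5])   # its direction

d = v0 - v0p
n = np.cross(w, wp)                # spans Z^perp = (W + W')^perp

distance = abs(np.dot(d, n)) / np.linalg.norm(n)
print(distance)                    # 5/sqrt(26) ~ 0.98 > 0, so the lines are skew
```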
while for L two such points are p0, 5, 0q, p0, 0, 5q. This leads to the
following sketch:
[Figure: sketch of the plane P and the line L, with the coordinate axes x, y, z and the intercept points indicated.]
The equation can be rewritten as
\[ \left\langle \begin{pmatrix} 4 \\ 5 \\ 10 \end{pmatrix}, \begin{pmatrix} x \\ y \\ z \end{pmatrix} \right\rangle = 20. \]
The left hand side equals $r(16 + 25 + 100) = 141r$, so that $r = \frac{20}{141}$, and therefore the point in $P$ that is as close as possible to the origin is
\[ \frac{20}{141} \begin{pmatrix} 4 \\ 5 \\ 10 \end{pmatrix}. \]
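A quick numerical check of this point (assuming numpy):

```python
import numpy as np

a = np.array([4.0, 5.0, 10.0])        # <a, x> = 20 is the equation of P
r = 20 / np.dot(a, a)                 # r = 20/141
closest = r * a                       # the point of P closest to the origin

print(closest)
print(np.isclose(np.dot(a, closest), 20.0))                         # True: the point lies on P
print(np.isclose(np.linalg.norm(closest), 20 / np.linalg.norm(a)))  # True: its norm is the distance of P to the origin
```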
This equates to $2t + 1 = 0$, i.e., $t = -\tfrac{1}{2}$.
We now compute the pair(s) of points realizing the minimal distance between $P$ and $L := L_{-1/2}$. We will use that these are exactly the points $(p, l)$ such that $p - l \perp W$ and $p - l \perp W'$.
The point $l \in L = L_{-1/2}$ is of the form
\[ l = (1, 0, 0, 1) + r\left(-\tfrac{1}{2}, 1, 0, -1\right) = (1 - s,\ 2s,\ 0,\ 1 - 2s) \quad \text{with } s = \tfrac{r}{2}. \]
On the other hand a point p “ px1 , . . . , x4 q P P satisfies
\[ 2x_1 + x_3 - x_4 = 4, \]
i.e., $p$ is of the form $p = (a, b, c, 2a + c - 4)$ for some $a, b, c \in \mathbb{R}$. Therefore
\[ p - l = (a + s - 1,\ b - 2s,\ c,\ 2a + c + 2s - 5). \]
The condition that $p - l$ is orthogonal to the direction space of $P$ means that $p - l$ is a multiple $\lambda (2, 0, 1, -1)$ of the normal vector of $P$, i.e.,
\begin{align*} a + s - 1 &= 2\lambda \\ b - 2s &= 0 \\ c &= \lambda \\ 2a + c + 2s - 5 &= -\lambda. \end{align*}
Together with the condition $p - l \perp W$, i.e., $\langle p - l, (-\tfrac{1}{2}, 1, 0, -1) \rangle = 0$, this gives the linear system
\begin{align*} a + s - 2\lambda &= 1 \\ b - 2s &= 0 \\ c - \lambda &= 0 \\ 2a + c + 2s + \lambda &= 5 \\ -5a + 2b - 2c - 9s &= -11. \end{align*}
Its solutions are
\[ \left(a,\ b = 4 - 2a,\ c = \tfrac{1}{2},\ \lambda = \tfrac{1}{2},\ s = 2 - a\right). \]
2 2
Thus
1 7
p “ pa, 4 ´ 2a, , 2a ´ q
2 2
l “ p1 ´ p2 ´ aq, 2p2 ´ aq, 0, 1 ´ 2p2 ´ aqq “ pa ´ 1, 4 ´ 2a, 0, ´3 ` 2aq.
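The system can also be solved symbolically, which confirms the one-parameter family of pairs (p, l) and yields the minimal distance; a sketch, assuming sympy:

```python
from sympy import symbols, Rational, Matrix, solve

a, b, c, lam, s = symbols('a b c lambda s')

p = Matrix([a, b, c, 2*a + c - 4])        # general point of P
l = Matrix([1 - s, 2*s, 0, 1 - 2*s])      # general point of L
u = p - l

w = Matrix([-Rational(1, 2), 1, 0, -1])   # direction of L
n = Matrix([2, 0, 1, -1])                 # normal direction of P

# p - l must be a multiple of n (orthogonal to P) and orthogonal to w
equations = list(u - lam * n) + [u.dot(w)]
sol = solve(equations, [b, c, lam, s], dict=True)[0]

print(sol)                 # {b: 4 - 2*a, c: 1/2, lambda: 1/2, s: 2 - a}
print(u.subs(sol))         # Matrix([[1], [0], [1/2], [-1/2]]), independent of a
print(u.subs(sol).norm())  # sqrt(6)/2, the minimal distance between P and L
```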
\begin{align*} 3 - 3t &= -3 \\ 1 &= 0 \\ 3t &= -3. \end{align*}
equations.
\[ \begin{pmatrix} -x - 1 & 1 & -4 \\ -y - 1 & 0 & -2 \\ -x - 1 & -1 & -1 \end{pmatrix} \rightsquigarrow \begin{pmatrix} -x - 1 & 1 & -4 \\ -y - 1 & 0 & -2 \\ -2x - 2 & 0 & -5 \end{pmatrix} \stackrel{\text{if } y \neq -1}{\rightsquigarrow} \begin{pmatrix} -x - 1 & 1 & -4 \\ 1 & 0 & \frac{2}{y+1} \\ -2x - 2 & 0 & -5 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & 1 & -4 + 2\,\frac{x+1}{y+1} \\ 1 & 0 & \frac{2}{y+1} \\ 0 & 0 & -5 + 2\,\frac{2x+2}{y+1} \end{pmatrix}. \]
References
Index
injective, 81
linear, 40
quadratic, 40
surjective, 81
Fundamental theorem of algebra, 147
Gaussian algorithm, 19
Gaussian elimination, 18, 19
generating system, 48
Gram–Schmidt orthogonalization, 171
Hesse matrix, 164
Hesse normal form, 178
homogeneous linear system, 35
hyperplane, 177
identity matrix, 100
image, 81, 119
imaginary unit, 147
intersection, 194
  of subspaces, 38
intersects, 186
inverse, 91, 104
invertible, 104
isomorphism, 104
kernel, 82
law of cosines, 160
linear
  approximation, 9
  combination, 45
  equation, 7
  hull, 46
  independence, 51
  map, 71
  system, 9
  transformation, 71
linear combination, 45
linear hull, 46
linear map
  inverse, 104
linear system
  equivalent, 13
  homogeneous, 12
linearly dependent, 51
linearly independent, 51
main diagonal, 100
map
  identity, 73
  zero, 73
matrix, 15
  associated to a linear map, 93
  associated to a linear system, 17
  augmented, 17
  diagonal, 143
  indefinite, 163
  lower triangular, 138
  orthogonal, 173
  positive definite, 163
  rotation, 79, 141
  row-echelon, 18
  shearing, 151
  similar, 156
  square, 15
  upper triangular, 124, 138
Minkowski space, 163
negative definite, 163
norm, 160, 163
nullity
  of a linear map, 84
  of a matrix, 85
one-to-one, 81
onto, 81
ordered
  pair, 29
  triple, 29
  tuple, 29
origin, 36
orthogonal, 162, 165
orthogonal complement, 167
orthogonal projection, 169, 172
orthogonally diagonalizable, 173
orthonormal basis, 169
orthonormal system, 168
parallel, 186
parallelogram law, 32
polynomial, 40
  linear, 40
  quadratic, 40
preimage, 81, 119
principal axes, 174
principal axes theorem, 174
principal submatrix, 164
product, 95, 194
  of polynomials, 42
product formula, 137
proper subset, 194
Pythagorean theorem, 160
rank
  of a linear map, 84
  of a matrix, 85
rank-nullity theorem, 60, 84
realize the minimal distance, 176
reciprocal, 108
reflection, 72
row space, 86
row-echelon form, 18
  reduced, 18
Sarrus' rule, 135
scalar multiplication, 32, 33
scalar product, 159, 161
skew, 186
solution set, 7
span, 46, 48
standard basis, 55
subset, 194
subspace, 37
sum
  of polynomials, 41
  of subspaces, 47
  of vectors in R^n, 31
  of vectors in a vector space, 33
symmetric, 123
symmetry, 161
system, 9
system of linear equations, 9
trace, 123, 153
transpose, 114
trigonometric functions, 198
trivial solution, 13
union, 63, 194
unknown, 7
variable, 7
  free, 21
vector, 30, 33
  column, 15
  row, 15
  zero, 30, 33
vector equations, 180
vector space, 33
  finite-dimensional, 57
  infinite-dimensional, 57
zero vector, 30