Chapter 3
Recall that a (real) n-vector (or vector) is an ordered collection of n real numbers,
v = (v1, v2, ..., vn)^T, where vi ∈ R for i = 1, ..., n.
Here the entry vi is also known as the i-th coordinate.
Definition
The Euclidean n-space, denoted by Rn, is the collection of all n-vectors,
Rn = { v = (v1, v2, ..., vn)^T | vi ∈ R for i = 1, ..., n }.
Geometric Interpretation of Vectors
Geometrically, a vector v can be interpreted as an arrow, with the tail placed at the origin 0 and the head of the arrow at v, or it can represent a position in the Euclidean n-space. For example, the vector v = (1, 1, 1)^T in R3 represents both the point and the arrow.
Geometric Interpretation of Vector Algebra
Since vectors are matrices, we are able to apply matrix algebra to vectors. These operations have geometric interpretations.
1. Adding u to v is visualized by placing the tail of v at the head of u; the arrow from the tail of u to the head of v is the resultant u + v.
2. Multiplying u by a scalar c is visualized as scaling the arrow u; for example, cu with c > 1 is a longer arrow pointing in the same direction as u.
Vector Algebra
The following properties follow from the properties of matrix algebra. However, try using the geometric interpretations to prove the following properties.
Theorem
Let Rn be a Euclidean vector space. Let u, v, w be vectors in Rn and a, b be some real numbers.
(i) The sum u + v is a vector in Rn .
(ii) (Commutative) u + v = v + u.
(iii) (Associative) u + (v + w) = (u + v) + w.
(iv) (Zero vector) 0 + v = v.
(v) The negative −v is a vector in Rn such that v − v = 0.
(vi) (Scalar multiple) av is a vector in Rn .
(vii) (Distribution) a(u + v) = au + av.
(viii) (Distribution) (a + b)u = au + bu.
(ix) (Associativity of scalar multiplication) (ab)u = a(bu).
(x) If au = 0, then either a = 0 or u = 0.
3.2 Dot Product, Norm, Distance
Discussion
Matrix addition and scalar multiplication can be applied directly to vectors. However, how do we, if it is even possible, define the multiplication of vectors? Note that the matrix product uv of an (n × 1) vector u with an (n × 1) vector v is undefined, since the number of columns of u does not match the number of rows of v.
Multiplying Vectors
We are able to multiply if we transpose one of the vectors.
1. (Outer Product) u ⊗ v = uv^T = (ui vj), the n × n matrix whose (i, j)-entry is ui vj. (Not part of syllabus)
2. (Inner Product) u · v = u^T v = u1 v1 + u2 v2 + · · · + un vn. Also known as the dot product.
Definition
The inner product (or dot product) of vectors u = (ui) and v = (vi) in Rn is defined to be
u · v = u1 v1 + u2 v2 + · · · + un vn .
Example
1. (1, 2, −1)^T · (2, 2, 2)^T = (1)(2) + (2)(2) + (−1)(2) = 4.
2. (1, 0, −1)^T · (1, 1, 1)^T = (1)(1) + (0)(1) + (−1)(1) = 0.
3. (2, 3)^T · (1, −2)^T = (2)(1) + (3)(−2) = −4.
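These computations are easy to check mechanically. A minimal sketch in Python (the helper name `dot` is ours, not from the notes):

```python
def dot(u, v):
    """Inner (dot) product of two vectors given as lists of numbers."""
    assert len(u) == len(v), "vectors must have the same length"
    return sum(ui * vi for ui, vi in zip(u, v))

# The three worked examples above:
assert dot([1, 2, -1], [2, 2, 2]) == 4
assert dot([1, 0, -1], [1, 1, 1]) == 0
assert dot([2, 3], [1, -2]) == -4
```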
Norm in R2
The distance between the point u = (x, y)^T and the origin in R2 is given by
distance = √(x² + y²).
Question
What is the length of the vector v = (1, 1, 1)^T?
Norm in R3
The distance between the point v = (x, y, z)^T and the origin in R3 is given by
distance = √((√(x² + y²))² + z²) = √(x² + y² + z²).
Norm in Rn
Definition
The norm of a vector u ∈ Rn, u = (ui), is the square root of the inner product of u with itself, and is denoted by ∥u∥,
∥u∥ = √(u · u) = √(u1² + u2² + · · · + un²).
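The norm is one line on top of the inner product; this also answers the earlier question about the length of (1, 1, 1)^T. A small Python sketch (the function name `norm` is our own):

```python
import math

def norm(u):
    """Euclidean norm: the square root of the dot product of u with itself."""
    return math.sqrt(sum(ui * ui for ui in u))

assert math.isclose(norm([1, 1, 1]), math.sqrt(3))  # length of (1, 1, 1)^T
assert math.isclose(norm([3, 4]), 5.0)              # the 3-4-5 triangle
```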
Partial Proof.
Proof for (iv) only; the rest are left as exercises. Let u = (ui)n×1. Since ui ∈ R, we have ui² ≥ 0 for all i = 1, ..., n. Therefore, ∥u∥² = u1² + u2² + · · · + un² ≥ 0. Note also that this is a sum of nonnegative numbers, which is equal to 0 if and only if all the ui² = 0, which is equivalent to ui = 0 for all i = 1, ..., n.
Unit Vectors
Definition
A vector u in Rn is a unit vector if its norm is 1,
∥u∥ = 1
Example
1. Let ei denote the i-th column of the n × n identity matrix In. Then ei is a unit vector for all i = 1, 2, ..., n.
2. (1/√2)(1, 0, −1)^T is a unit vector.
3. (1, 2, 1)^T is not a unit vector; (1/√6)(1, 2, 1)^T is a unit vector pointing in the same direction.
Normalizing a Vector
Let u be a nonzero vector, u ̸= 0. By multiplying by the reciprocal of the norm, we get a unit vector,
u −→ u/∥u∥.
Indeed, u/∥u∥ is a unit vector, since
(u/∥u∥) · (u/∥u∥) = (u · u)/∥u∥² = 1.
This is called normalizing u.
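Normalizing can be sketched the same way; the assertion checks that the result is indeed a unit vector (the helper name `normalize` is ours):

```python
import math

def normalize(u):
    """Scale a nonzero vector u by 1/||u|| to get a unit vector."""
    n = math.sqrt(sum(ui * ui for ui in u))
    if n == 0:
        raise ValueError("cannot normalize the zero vector")
    return [ui / n for ui in u]

v = normalize([1, 2, 1])
# The norm of u/||u|| is 1 (up to floating-point rounding):
assert math.isclose(math.sqrt(sum(x * x for x in v)), 1.0)
```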
Distance Between Vectors
By Pythagoras' theorem, the distance between (x1, y1)^T and (x2, y2)^T in R2 is
distance = √((x1 − x2)² + (y1 − y2)²) = ∥(x1, y1)^T − (x2, y2)^T∥.
Similarly, in R3 the distance between (x1, y1, z1)^T and (x2, y2, z2)^T is
distance = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²) = ∥(x1, y1, z1)^T − (x2, y2, z2)^T∥.
Definition
The distance between two vectors u and v, denoted as d(u, v), is defined to be
d(u, v) = ∥u − v∥.
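The formula d(u, v) = ∥u − v∥ in code, a short sketch (the 3-4-5 triangle makes a convenient check; the helper name `distance` is ours):

```python
import math

def distance(u, v):
    """d(u, v) = ||u - v||, the norm of the difference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

assert math.isclose(distance([1, 2], [4, 6]), 5.0)  # sqrt(3^2 + 4^2)
assert math.isclose(distance([1, 1, 1], [1, 1, 1]), 0.0)
```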
Angle
Let u = (x, y)^T and v = (x0, 0)^T in R2. Note that the angle θ between them satisfies 0 ≤ θ ≤ π.
Definition
The angle θ between two nonzero vectors u, v ̸= 0 is defined to be the angle 0 ≤ θ ≤ π such that
cos(θ) = (u · v) / (∥u∥ ∥v∥).
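The angle formula can be checked numerically with `math.acos`; the earlier pair (1, 0, −1)^T and (1, 1, 1)^T, with dot product 0, should give a right angle (the helper name `angle` is ours):

```python
import math

def angle(u, v):
    """Angle between nonzero u and v, from cos(theta) = u.v / (||u|| ||v||)."""
    d = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(d / (nu * nv))

assert math.isclose(angle([1, 0, -1], [1, 1, 1]), math.pi / 2)  # dot product 0
assert math.isclose(angle([2, 0], [5, 0]), 0.0, abs_tol=1e-12)  # same direction
```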
3.3 Linear Combinations and Linear Spans
Linear Combinations
Definition
Let u1, u2, ..., uk be vectors in Rn. A linear combination of the vectors u1, u2, ..., uk is a vector of the form
c1 u1 + c2 u2 + · · · + ck uk, where c1, c2, ..., ck ∈ R.
Think of u1, u2, ..., uk as the directions, and c1, c2, ..., ck as the number of units to walk in the respective directions.
Example
Consider the vectors u1 = (2, 1)^T and u2 = (−1, 1)^T in R2.
Click on the following link https://www.geogebra.org/m/qzhtjwcc. Adjust the different values of c1 and c2 to visualize the linear combinations of u1 and u2.
(i) When c1 = c2 = 1, c1 u1 + c2 u2 = (1, 2)^T.
(ii) When c1 = 2 and c2 = −1, c1 u1 + c2 u2 = (5, 1)^T.
(iii) When c1 = 3/2 and c2 = 1/2, c1 u1 + c2 u2 = (5/2, 2)^T.
(iv) When c1 = −1 and c2 = 3, c1 u1 + c2 u2 = (−5, 2)^T.
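The four cases above can be verified mechanically. A short Python sketch (the helper name `lincomb` is our own):

```python
def lincomb(coeffs, vectors):
    """c1*u1 + ... + ck*uk for coefficients c1..ck and vectors u1..uk."""
    n = len(vectors[0])
    return [sum(c * u[i] for c, u in zip(coeffs, vectors)) for i in range(n)]

u1, u2 = [2, 1], [-1, 1]
assert lincomb([1, 1], [u1, u2]) == [1, 2]        # (i)
assert lincomb([2, -1], [u1, u2]) == [5, 1]       # (ii)
assert lincomb([1.5, 0.5], [u1, u2]) == [2.5, 2]  # (iii)
assert lincomb([-1, 3], [u1, u2]) == [-5, 2]      # (iv)
```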
Linear Span
Definition
Let u1, u2, ..., uk be vectors in Rn. The span of u1, u2, ..., uk is the subset of Rn containing all the linear combinations of u1, u2, ..., uk,
span{u1, u2, ..., uk} = { c1 u1 + c2 u2 + · · · + ck uk | c1, c2, ..., ck ∈ R }.
That is, every vector v in the set span{u1, u2, ..., uk} is a linear combination of u1, u2, ..., uk,
v = c1 u1 + c2 u2 + · · · + ck uk, for some c1, c2, ..., ck ∈ R.
▶ Click on the play button beside c1 and c2 to see the different linear combinations of u1 and u2.
▶ The collection of all these linear combinations is the orange plane.
▶ Consider the vector w = (1, 1, 0)^T. Is it in the span of u1 and u2?
Example
Consider the vectors u1 = (1, 1, 1)^T, u2 = (1, −1, 0)^T, and u3 = (1, 2, 1)^T. Is the vector v = (1, 2, 3)^T a linear combination of u1, u2 and u3? Equivalently, is v in span{u1, u2, u3}?
v is in span{u1, u2, u3} if and only if there exist coefficients c1, c2, and c3 such that v = c1 u1 + c2 u2 + c3 u3, that is,
c1 (1, 1, 1)^T + c2 (1, −1, 0)^T + c3 (1, 2, 1)^T = (1, 2, 3)^T.
Since the system is consistent, we can conclude that v is in span{u1, u2, u3}. Moreover, the solution of the system tells us that c1 = 6, c2 = −2, c3 = −3, that is,
(1, 2, 3)^T = 6(1, 1, 1)^T − 2(1, −1, 0)^T − 3(1, 2, 1)^T.
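The coefficients found above are easy to verify (the helper name `lincomb` is our own):

```python
def lincomb(coeffs, vectors):
    """c1*u1 + ... + ck*uk, computed coordinate by coordinate."""
    n = len(vectors[0])
    return [sum(c * u[i] for c, u in zip(coeffs, vectors)) for i in range(n)]

u1, u2, u3 = [1, 1, 1], [1, -1, 0], [1, 2, 1]
# The coefficients found by solving the linear system:
assert lincomb([6, -2, -3], [u1, u2, u3]) == [1, 2, 3]
```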
Example
Example
Now consider u1 = (1, 0, 1)^T, u2 = (0, 1, −1)^T, and u3 = (2, 1, 1)^T, and let v = (1, 1, 1)^T.
1. Is v in span{u1, u2, u3}? That is, are there coefficients c1, c2, c3 such that v = c1 u1 + c2 u2 + c3 u3?
2. Find a vector (x, y, z)^T that is not in span{u1, u2, u3}.
When will span(S) = Rn ?
Let S = {u1, u2, ..., uk} be a set of vectors in Rn. Now instead of checking if a specific vector v is in span(S), we may ask if every vector is in the span, that is, whether span(S) = Rn.
Example
1. S = { (1, 1, 1)^T, (1, 2, 1)^T, (2, 3, 2)^T }. Now we check if every (x, y, z)^T is in span(S).
[ 1 1 2 | x ]        [ 1 0 1 | 2x − y ]
[ 1 2 3 | y ] —RREF→ [ 0 1 1 | −x + y ]
[ 1 1 2 | z ]        [ 0 0 0 | −x + z ]
The system is consistent only when −x + z = 0, so not every vector is in span(S); that is, span(S) ̸= R3.
2. S = { (1, 1, 1)^T, (1, −1, 0)^T, (1, 2, 1)^T }.
[ 1 1 1 | x ]         [ 1 0 0 | −x − y + 3z ]
[ 1 −1 2 | y ] —RREF→ [ 0 1 0 | x − z ]
[ 1 0 1 | z ]         [ 0 0 1 | x + y − 2z ]
The system is always consistent regardless of the choice of x, y, z. This shows that span(S) = R3. In fact, given any (x, y, z)^T ∈ R3,
(x, y, z)^T = (−x − y + 3z)(1, 1, 1)^T + (x − z)(1, −1, 0)^T + (x + y − 2z)(1, 2, 1)^T.
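The explicit formula for the coefficients can be spot-checked for a few choices of (x, y, z) (the helper name `lincomb` is our own):

```python
def lincomb(coeffs, vectors):
    """c1*u1 + ... + ck*uk, computed coordinate by coordinate."""
    n = len(vectors[0])
    return [sum(c * u[i] for c, u in zip(coeffs, vectors)) for i in range(n)]

u1, u2, u3 = [1, 1, 1], [1, -1, 0], [1, 2, 1]
for (x, y, z) in [(1, 2, 3), (0, 0, 1), (-4, 7, 2)]:
    coeffs = [-x - y + 3 * z, x - z, x + y - 2 * z]
    assert lincomb(coeffs, [u1, u2, u3]) == [x, y, z]
```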
Discussion
Consider now a vector (x1, x2, ..., xn)^T in Rn. Observe that elementary row operations would not make any of its entries identically zero; every entry would still be a linear combination of x1, x2, ..., xn.
Example
1. (x1, x2, x3)^T —R2↔R3→ (x1, x3, x2)^T.
2. (x1, x2, x3)^T —R3−aR1→ (x1, x2, x3 − ax1)^T.
3. (x1, x2, x3)^T —cR2→ (x1, cx2, x3)^T, for some c ̸= 0.
Discussion
This means that in the reduction of ( u1 u2 · · · uk | (x1, x2, ..., xn)^T ), the entries in the last column will never be identically 0, but some linear combination of x1, x2, ..., xn. In this case, the system is consistent for every choice of x1, x2, ..., xn if and only if the reduced row-echelon form of ( u1 u2 · · · uk ) does not have any zero row.
Algorithm to check if span(S) = Rn .
▶ Form the n × k matrix A = ( u1 u2 · · · uk ) whose columns are the vectors in S.
▶ Compute the reduced row-echelon form of A and check whether it has a zero row.
Explicitly, span{u1, u2, ..., uk} = Rn if and only if the reduced row-echelon form of ( u1 u2 · · · uk ) has no zero rows.
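The algorithm can be sketched with exact arithmetic using `fractions.Fraction`; the two examples from earlier in this section serve as checks. The function names `rref` and `spans_Rn` are our own, and this is a minimal Gauss-Jordan sketch, not an optimized routine:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row-echelon form by Gauss-Jordan elimination, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        # look for a nonzero entry in this column, at or below the pivot row
        hits = [r for r in range(pivot, len(m)) if m[r][col] != 0]
        if not hits:
            continue
        m[pivot], m[hits[0]] = m[hits[0]], m[pivot]
        pv = m[pivot][col]
        m[pivot] = [x / pv for x in m[pivot]]   # scale the pivot row
        for r in range(len(m)):                 # clear the rest of the column
            if r != pivot and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

def spans_Rn(vectors):
    """The vectors span R^n iff the RREF of the matrix with these columns has no zero row."""
    n = len(vectors[0])
    rows = [[v[i] for v in vectors] for i in range(n)]  # place the vectors as columns
    return all(any(x != 0 for x in row) for row in rref(rows))

# Example 1 above: the RREF has a zero row, so the span is not all of R^3.
assert not spans_Rn([[1, 1, 1], [1, 2, 1], [2, 3, 2]])
# Example 2 above: the RREF is the identity, so the span is R^3.
assert spans_Rn([[1, 1, 1], [1, -1, 0], [1, 2, 1]])
```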
Example
The n × n identity matrix In is in reduced row-echelon form and does not have any zero rows. Hence, its columns
span Rn .
Indeed, let ei denote the i-th column of In for i = 1, ..., n. Then for any vector w = (w1, w2, ..., wn)^T,
w = w1 e1 + w2 e2 + · · · + wn en.
Example
Let u1 = (1, 1, 1)^T, u2 = (1, −1, 0)^T, and u3 = (2, 0, 1)^T. Then
[ 1 1 2 ]         [ 1 0 1 ]
[ 1 −1 0 ] —RREF→ [ 0 1 1 ]
[ 1 0 1 ]         [ 0 0 0 ]
tells us that span{u1, u2, u3} ̸= R3. Indeed, applying R2 − R1, R3 − R1, and then R3 − (1/2)R2,
[ 1 1 2 | x ]    [ 1 1 2 | x ]
[ 1 −1 0 | y ] → [ 0 −2 −2 | y − x ]
[ 1 0 1 | z ]    [ 0 0 0 | z − y/2 − x/2 ]
tells us that whenever z − y/2 − x/2 ̸= 0, the vector (x, y, z)^T is not in the span, (x, y, z)^T ̸∈ span{u1, u2, u3}.
Theorem (Properties of Linear Spans)
Let S = {u1, u2, ..., uk} be a set of vectors in Rn and let V = span(S). Then
(i) 0 is in V;
(ii) for any v in V and scalar α, the vector αv is in V;
(iii) for any u, v in V, the sum u + v is in V.
Proof.
We will only provide the main idea of the proof; the details are left to the readers.
(i) 0 = 0u1 + 0u2 + · · · + 0uk.
(ii) Write v = c1 u1 + c2 u2 + · · · + ck uk. Then αv = (αc1)u1 + (αc2)u2 + · · · + (αck)uk.
(iii) Write u = c1 u1 + c2 u2 + · · · + ck uk and v = d1 u1 + d2 u2 + · · · + dk uk. Then
u + v = (c1 + d1)u1 + (c2 + d2)u2 + · · · + (ck + dk)uk.
Properties of Linear Spans
Remark
Properties (ii) and (iii) can be combined together into one property (ii’):
The span is closed under linear combinations, that is, if u, v are vectors in span(S) and α, β are any scalars, then the
linear combination αu + βv is a vector in span(S).
Observe that property (ii’) implies that span(S) is closed under linear combination. That is, suppose v1 , v2 , ..., vm are
vectors in span(S), then for any scalars c1 , c2 , ..., cm , the linear combination c1 v1 + c2 v2 + · · · + cm vm is also in the
span. For by property (ii’), c1 v1 + c2 v2 is in span(S), and thus by property (ii’) again, we have (c1 v1 + c2 v2 ) + c3 v3
is in span(S) too. Thus, by induction, we can conclude that c1 v1 + c2 v2 + · · · + cm vm is in span(S).
Since this is true for any scalars c1 , c2 , ..., cm , we have arrived at the following corollary.
Indeed,
(c1 + c2, c2 − c1, 0)^T = (c1 + c2)u1 + (c2 − c1)u2 = (c1 + c2)(1, 0, 0)^T + (c2 − c1)(0, 1, 0)^T.
Now suppose we are given 2 sets of vectors T = {v1 , v2 , ..., vm } and S = {u1 , u2 , ..., uk }.
▶ By the corollary, if vi ∈ span(S) for i = 1, ..., m, we can conclude that span(T ) ⊆ span(S).
▶ Recall that to check if vi ∈ span(S), we check that the system ( u1 u2 · · · uk | vi ) is consistent for all i = 1, ..., m.
▶ There are in total m such linear systems to check. However, since they have the same coefficient matrix, we may combine and check them together, that is, check that
( u1 u2 · · · uk | v1 v2 · · · vm )
is consistent.
Algorithm to check for Set Relations between Spans
Theorem
Let S = {u1, u2, ..., uk} and T = {v1, v2, ..., vm} be sets of vectors in Rn. Then span(T) ⊆ span(S) if and only if
( u1 u2 · · · uk | v1 v2 · · · vm )
is consistent.
1. Observe the left-hand side of the augmented matrix in the reduction
[ 1 1 1 | 1 1 2 ]         [ 1 0 0 | 1 0 1 ]
[ 1 −1 2 | 1 2 3 ] —RREF→ [ 0 1 0 | 0 0 0 ]
[ 1 0 1 | 1 1 2 ]         [ 0 0 1 | 0 1 1 ]
Every column on the right corresponds to a consistent system, so the span of the last three columns is contained in the span of the first three.
2. Observe the left-hand side of the augmented matrix in the reduction
[ 1 1 2 | 1 1 1 ]         [ 1 0 1 | 1 0 0 ]
[ 1 2 3 | 1 −1 2 ] —RREF→ [ 0 1 1 | 0 0 1 ]
[ 1 1 2 | 1 0 1 ]         [ 0 0 0 | 0 1 0 ]
The zero row on the left has a nonzero entry on the right (for the second vector), so that system is inconsistent and the containment fails.
Recall that the set of solutions to a linear system Ax = b is a subset of Rn (it is the empty set if the system is inconsistent). We may express this set implicitly as
V = { u ∈ Rn | Au = b },
or explicitly as
V = { u + s1 v1 + s2 v2 + · · · + sk vk | s1, s2, ..., sk ∈ R },
where u + s1 v1 + s2 v2 + · · · + sk vk, s1, s2, ..., sk ∈ R, is the general solution.
Example
or explicitly as
V = { (0, 0, 1)^T + s(−1, 1, 0)^T | s ∈ R }.
Example
Consider the set described explicitly by x = 1 − 2s + t, y = 2 + s, z = t − 1, s, t ∈ R. Solving for the parameters, s = y − 2 and t = z + 1, so
x = 1 − 2(y − 2) + (z + 1) ⇒ x + 2y − z = 6.
So, implicitly, the set has the expression
{ (x, y, z)^T | x + 2y − z = 6 }.
Discussion
Recall that the general solution of a homogeneous system Ax = 0 has the form
s1 v1 + s2 v2 + · · · + sk vk , s1 , s2 , ..., sk ∈ R.
By the properties of a linear span, this would mean that the solution set to a homogeneous system is a vector space
that is a subset of the Euclidean vector space. We call a vector space nested inside another vector space a subspace.
Example
Let V = { (x, y, z)^T | x − y + z = 0 }. Explicitly,
V = { s(1, 1, 0)^T + t(−1, 0, 1)^T | s, t ∈ R } = span{ (1, 1, 0)^T, (−1, 0, 1)^T }.
Subspace
It turns out that for a subset V of the Euclidean space Rn to satisfy all 10 axioms of being a vector space, it suffices for it to satisfy only 3 of them.
Definition
A subset V of Rn is a subspace if it satisfies the following properties.
(i) V contains the zero vector, 0 ∈ V.
(ii) V is closed under scalar multiplication. For any vector v in V and scalar α, the vector αv is in V.
(iii) V is closed under addition. For any vectors u, v in V, the sum u + v is in V.
Remark
(i) Property (i) can be replaced with property (i'): V is nonempty.
(ii) Properties (ii) and (iii) are equivalent to property (ii'):
V is closed under linear combination. For any u, v in V, and scalars α, β, the linear combination αu + βv is in V.
Solution Space of Homogeneous System
Theorem
The solution set V = { u | Au = b } to a linear system Ax = b is a subspace if and only if b = 0, that is, the system is homogeneous.
Proof.
(⇒) Suppose V = { u | Au = b } is a subspace. By property (i), it must contain the origin, which means that 0
must be a solution to Ax = b. Hence,
0 = A0 = b ⇒ b = 0.
Definition
The solution set to a homogeneous system is called a solution space.
Examples
Let V = { (x, y, z)^T | x + y + z = 0 }. Since it is a solution set of a homogeneous system, it is a subspace. We will also show that it satisfies the 3 criteria.
(i) Clearly (0, 0, 0)^T is in V.
(ii) Suppose (x, y, z)^T ∈ V, that is, x + y + z = 0. Then for any α ∈ R, αx + αy + αz = α(x + y + z) = α(0) = 0, so α(x, y, z)^T is in V.
(iii) Suppose (x1, y1, z1)^T and (x2, y2, z2)^T are in V. Then (x1 + x2) + (y1 + y2) + (z1 + z2) = (x1 + y1 + z1) + (x2 + y2 + z2) = 0 + 0 = 0, so their sum is in V.
Is the set V = { (x, y, 1)^T | x, y ∈ R } a subspace?
It is not a subspace since it does not contain (0, 0, 0)^T.
Equivalent Definition for Subspaces
Theorem
A subset V ⊆ Rn is a subspace if and only if it is a linear span, V = span(S), for some finite set S = {u1, u2, ..., uk}.
Proof.
(⇐) This follows from the property of linear span.
(⇒) We only present a sketch; details are left as an exercise.
Since V is a subspace, it is nonempty. Take a u1 ∈ V . If span(u1 ) = V , let S = {u1 }. Otherwise, there is a
u2 ∈ V \span(u1 ). If span(u1 , u2 ) = V , let S = {u1 , u2 }. Otherwise, continue this process to define
ui ∈ V \span{u1 , u2 , ..., ui−1 }. Eventually, the process must stop, that is, there is a k ∈ Z such that
span{u1 , u2 , ..., uk } = V (why?).
Remarks
1. V = { (x, y, 0)^T | x, y ∈ R } = { x(1, 0, 0)^T + y(0, 1, 0)^T | x, y ∈ R } = span{ (1, 0, 0)^T, (0, 1, 0)^T } is a subspace.
2. V = { (x + y, x − y, 0)^T | x, y ∈ R } = { x(1, 1, 0)^T + y(1, −1, 0)^T | x, y ∈ R } = span{ (1, 1, 0)^T, (1, −1, 0)^T } is a subspace.
Example
3. V = { (a, b, c, d)^T | ab = cd } is not a subspace because (1, 0, 1, 0)^T and (1, 0, 0, 1)^T belong to V, but
(1, 0, 1, 0)^T + (1, 0, 0, 1)^T = (2, 0, 1, 1)^T
does not.
4. V = { (s, s², t)^T | s, t ∈ R } is not a subspace since (1, 1, 0)^T belongs to V, but 2(1, 1, 0)^T = (2, 2, 0)^T does not.
Question
1. Show that the set containing the zero vector {0} is a subspace.
2. Construct a set V such that it satisfies condition (i) and (ii) but not (iii); that is, V contains the origin and is
closed under scalar multiplication, but not closed under addition.
Question
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a subset of V , S ⊆ V . Show that the span of S is contained in
V , span(S) ⊆ V .
Subspaces of R2
(i) Zero space: { (0, 0)^T }. This is a point.
(ii) Lines: L = span{ (x1, y1)^T } for some fixed (x1, y1)^T ̸= (0, 0)^T. These are lines through the origin, which look like R1.
(iii) The whole R2.
Subspaces of R3
(i) Zero space: { (0, 0, 0)^T }. This is a point.
(ii) Lines: L = span{ (x1, y1, z1)^T } for some fixed (x1, y1, z1)^T ̸= (0, 0, 0)^T. These are lines through the origin, which look like R1.
(iii) Planes: P = span{ (x1, y1, z1)^T, (x2, y2, z2)^T } for some (x1, y1, z1)^T, (x2, y2, z2)^T that are not scalar multiples of each other. These are planes through the origin, which look like R2.
(iv) The whole R3.
Solution Set to Non-homogeneous System
Recall that
u + s1 v1 + s2 v2 + · · · + sk vk , s1 , s2 , ..., sk ∈ R
is a general solution to a consistent non-homogeneous system Ax = b, b ̸= 0 if and only if
s1 v1 + s2 v2 + · · · + sk vk , s1 , s2 , ..., sk ∈ R
is a general solution to the homogeneous system Ax = 0, where u is a particular solution to the non-homogeneous
system Ax = b.
In other words, the solution set is W = u + V = { u + v | v ∈ V }, where V = { v | Av = 0 } is the solution space to the associated homogeneous system and u is a particular solution, Au = b.
The solution set W is not a subspace as it does not contain the origin. It is shifted away from the origin via the vector u = (−1/2, 0, 0)^T. Observe that W and V are parallel planes.
Solution Set to Linear System
Question
Is R2 ⊆ R3 ?
3.5 Linear Independence
Motivation
Consider
u1 = (1, 0, 0)^T, u2 = (0, 1, 0)^T, u3 = (1, 1, 0)^T, u4 = (1, −1, 0)^T.
https://www.geogebra.org/m/w2avu5ft
▶ Observe that u3 = u1 + u2, so
(1)u1 + (1)u2 + (−1)u3 + (0)u4 = 0.
This is a nontrivial solution of c1 u1 + c2 u2 + c3 u3 + c4 u4 = 0, and this checks for all i = 1, ..., k simultaneously!
Discussion
▶ For suppose we are able to find some c1, c2, ..., ck not all zero such that
c1 u1 + c2 u2 + · · · + ck uk = 0.
Then any ui with ci ̸= 0 can be written as a linear combination of the other vectors, so it is redundant.
Linearly Independent
Definition
A set {u1, u2, ..., uk} is linearly independent if the only coefficients c1, c2, ..., ck satisfying the equation
c1 u1 + c2 u2 + · · · + ck uk = 0
are c1 = c2 = · · · = ck = 0. Otherwise, the set is linearly dependent.
Example
Consider the set { v1 = (1, 1, 1)^T, v2 = (1, 2, −1)^T, v3 = (2, 3, 0)^T }. Suppose c1 v1 + c2 v2 + c3 v3 = 0.
Converting this into a matrix equation, we are solving
[ 1 1 2 ] [ c1 ]   [ 0 ]
[ 1 2 3 ] [ c2 ] = [ 0 ].
[ 1 −1 0 ] [ c3 ]   [ 0 ]
[ 1 1 2 | 0 ]         [ 1 0 1 | 0 ]
[ 1 2 3 | 0 ] —RREF→  [ 0 1 1 | 0 ]
[ 1 −1 0 | 0 ]        [ 0 0 0 | 0 ]
The system has nontrivial solutions. Hence, the set is linearly dependent.
Example
Is the set S = { v1 = (1, 0, 0, 0)^T, v2 = (1, 1, 0, 0)^T, v3 = (1, 1, 1, 0)^T } linearly independent?
Suppose c1 v1 + c2 v2 + c3 v3 = 0. Writing it as a matrix equation, we are asking if the homogeneous system
[ 1 1 1 ]                  [ 0 ]
[ 0 1 1 ] (c1, c2, c3)^T = [ 0 ]
[ 0 0 1 ]                  [ 0 ]
[ 0 0 0 ]                  [ 0 ]
has nontrivial solutions.
[ 1 1 1 | 0 ]                 [ 1 0 0 | 0 ]
[ 0 1 1 | 0 ] —R1−R2, R2−R3→  [ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]                 [ 0 0 1 | 0 ]
[ 0 0 0 | 0 ]                 [ 0 0 0 | 0 ]
tells us that the homogeneous system has only the trivial solution, and hence, S is linearly independent.
Algorithm to Check for Linear Independence
▶ {u1 , u2 , ..., uk } is linearly independent if and only if the homogeneous system (u1 u2 ··· uk )x = 0 has only
the trivial solution.
▶ The homogeneous system has only the trivial solution if and only if the reduced row-echelon form of ( u1 u2 · · · uk ) has no non-pivot column.
Theorem
A subset S = {u1, u2, ..., uk} of Rn is linearly independent if and only if the reduced row-echelon form of A = ( u1 u2 · · · uk ) has no non-pivot columns.
Examples
1. S = { (1, 0, 0)^T, (0, 1, 0)^T, (1, 1, 0)^T, (1, −1, 0)^T }.
[ 1 0 1 1 ]
[ 0 1 1 −1 ]
[ 0 0 0 0 ]
is already in RREF. Since it has a non-pivot column, S is linearly dependent.
2. S = { (1, 1, 1)^T, (1, 2, −1)^T, (2, 3, 0)^T }.
[ 1 1 2 ]         [ 1 0 1 ]
[ 1 2 3 ] —RREF→  [ 0 1 1 ]
[ 1 −1 0 ]        [ 0 0 0 ]
Since the RREF has a non-pivot column, S is linearly dependent.
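For three vectors in R3 there is a quicker check that foreshadows the equivalent statements for invertibility later in this chapter: the vectors are linearly independent if and only if the determinant of the matrix with these columns is nonzero. A sketch (the helper name `det3` is ours):

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with columns u, v, w, by cofactor expansion."""
    a, b, c = u
    d, e, f = v
    g, h, i = w
    # columns u, v, w give the matrix [[a, d, g], [b, e, h], [c, f, i]]
    return a * (e * i - f * h) - d * (b * i - c * h) + g * (b * f - c * e)

# Example 2 above: v3 = v1 + v2, so the determinant is 0 and the set is dependent.
assert det3([1, 1, 1], [1, 2, -1], [2, 3, 0]) == 0
# The standard basis is independent: determinant 1.
assert det3([1, 0, 0], [0, 1, 0], [0, 0, 1]) == 1
```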
Question
Let {u1, u2, u3} be a linearly independent set of vectors in Rn. Show that the set {v1, v2, v3} is linearly independent, where
v1 = u1, v2 = u1 + u2, v3 = u1 + u2 + u3.
Question
Let S = {u1, u2, ..., uk} be a set of vectors in Rn. Show that if k > n, then S is linearly dependent.
Special Cases
1. The set {0} is linearly dependent. Take, say, c1 = 1; then we have (1)0 = 0, a nontrivial solution. Alternatively, the matrix (0) is in RREF and the only column is a non-pivot column.
2. For v ̸= 0, the set {v} is linearly independent. The only solution to cv = 0 is c = 0. Alternatively, (v) reduces to the matrix with 1 in the first entry and zero otherwise, and the only column is a pivot column.
3. {v1, v2} is linearly dependent if and only if one is a scalar multiple of the other, αv1 = v2 or v1 = βv2.
Indeed, suppose c1 v1 + c2 v2 = 0 with c1 or c2 ̸= 0. Say c1 ̸= 0; then v1 = −(c2/c1)v2. The argument for c2 ̸= 0 is analogous.
Theorem
Suppose S = {u1 , u2 , ..., uk } is linearly dependent set of vectors in Rn . Then for any vector u in Rn ,
{u1 , u2 , ..., uk , u}
is linearly dependent.
Since the set {u1, u2, ..., uk} is linearly dependent, we can find coefficients, with some ci ̸= 0, such that
c1 u1 + · · · + ci ui + · · · + ck uk = 0.
Then
c1 u1 + · · · + ci ui + · · · + ck uk + 0u = 0
is a nontrivial solution, so {u1, u2, ..., uk, u} is linearly dependent.
Hence, any set {v1, ..., vk, 0} containing the zero vector is linearly dependent.
Linear Dependency and Adding or Removing Vectors
Theorem
Suppose {u1, u2, ..., uk} is a linearly independent set of vectors in Rn and u is not a linear combination of u1, u2, ..., uk. Then the set {u1, u2, ..., uk, u} is linearly independent.
i.e. {u1 , u2 , ..., uk } linearly independent and u ̸∈ span{u1 , u2 , ..., uk } ⇒ {u1 , u2 , ..., uk , u} linearly independent.
Here is a heuristic explanation. Readers may refer to the appendix for the proof.
Since {u1 , u2 , ..., uk } is linearly independent, the RREF of u1 u2 · · · uk has no non-pivot column. Now since
u ̸∈ span{u1 , u2 , ..., uk }, the last column of the RREF of ( u1 · · · uk u ) is a pivot column. But observe that
the LHS of the RREF of ( u1 · · · uk u ) is the RREF of u1 u2 · · · uk . Hence, every column in the RREF
of ( u1 · · · uk u ) = u1 u2 · · · uk u is a pivot column. This shows that {u1 , u2 , ..., uk , u} is linearly
independent.
Linear Dependency and Adding or Removing Vectors
Theorem
Suppose {u1, u2, ..., uk} is a linearly independent set of vectors in Rn. Then any subset of {u1, u2, ..., uk} is linearly independent.
If {u1 , u2 , ..., uk } has no redundancy, then it is clear that any subset cannot have redundancy. Readers may refer to
the appendix for the proof.
3.6 Basis and Coordinates
Motivation
Consider the set E = { e1 = (1, 0, 0)^T, e2 = (0, 1, 0)^T, e3 = (0, 0, 1)^T }. It is clear that any vector (x, y, z)^T in R3 can be uniquely written as a linear combination of the vectors in E,
(x, y, z)^T = x(1, 0, 0)^T + y(0, 1, 0)^T + z(0, 0, 1)^T.
In fact, we call x, y, z the coordinates of the vector (x, y, z)^T. However, the set E is not the only set that enjoys this property.
Motivation
Consider the set B = { u1 = (1, 1, 0)^T, u2 = (1, 0, 1)^T, u3 = (0, 1, 1)^T }. Now let (x, y, z)^T be a vector in R3. Then
[ 1 1 0 | x ]        [ 1 0 0 | (x + y − z)/2 ]
[ 1 0 1 | y ] —RREF→ [ 0 1 0 | (x − y + z)/2 ]
[ 0 1 1 | z ]        [ 0 0 1 | (y − x + z)/2 ]
shows that every vector in R3 is a linear combination of the vectors in B, and the expression is unique.
Motivation
On the other hand, consider the set S = { (1, 1, 1)^T, (0, 1, 1)^T, (1, 0, 0)^T }. The vector (0, 0, 1)^T is not a linear combination of the vectors in S,
[ 1 0 1 | 0 ]        [ 1 0 1 | 0 ]
[ 1 1 0 | 0 ] —RREF→ [ 0 1 −1 | 0 ]
[ 1 1 0 | 1 ]        [ 0 0 0 | 1 ]
This shows that span(S) ̸= R3.
Motivation
Consider another set S = { (1, 1, 1)^T, (0, 1, 1)^T, (1, 0, 0)^T, (0, 0, 1)^T }. Check that the span of S is indeed the whole R3, span(S) = R3. However, the linear combination is not unique. For example, consider the vector (1, 2, 1)^T:
[ 1 0 1 0 | 1 ]        [ 1 0 1 0 | 1 ]
[ 1 1 0 0 | 2 ] —RREF→ [ 0 1 −1 0 | 1 ]
[ 1 1 0 1 | 1 ]        [ 0 0 0 1 | −1 ]
tells us that
(1, 2, 1)^T = (1 − s)(1, 1, 1)^T + (1 + s)(0, 1, 1)^T + s(1, 0, 0)^T − (0, 0, 1)^T
for any s ∈ R. Observe that this is because the set S is not linearly independent.
Motivation
Consider now the solution space V = { (x, y, z)^T | x + y − 2z = 0 }. Since it is a subspace of R3, it is a vector space itself. Explicitly, we have
V = { s(−1, 1, 0)^T + t(2, 0, 1)^T | s, t ∈ R } = span{ (−1, 1, 0)^T, (2, 0, 1)^T }.
2. Type in u1 = (−1, 1, 0) and hit enter, and u2 = (2, 0, 1) and hit enter.
3. It is easy to see that every vector in V can be written uniquely as a linear combination of u1 and u2.
Motivation
Let V = { (x, y, z)^T | y − z = 0 } = { s(1, 0, 0)^T + t(0, 1, 1)^T | s, t ∈ R } = span{ (1, 0, 0)^T, (0, 1, 1)^T }. Check that the set S = { (1, 1, 1)^T, (0, 1, 1)^T, (1, 0, 0)^T } spans V. However, the vector (1, 2, 2)^T in V can be written as a linear combination of vectors in S in more than one way,
(1, 2, 2)^T = (1, 1, 1)^T + (0, 1, 1)^T + 0(1, 0, 0)^T
= 0(1, 1, 1)^T + 2(0, 1, 1)^T + (1, 0, 0)^T.
Theorem
Suppose S is a basis for V. Then every vector v ∈ V can be written as a linear combination of vectors in S uniquely.
Proof.
(i) span(S) = V tells us that every vector v ∈ V can be written as a linear combination of vectors in S.
(ii) S being linearly independent tells us that the coefficients are unique:
v = c1 u1 + · · · + ck uk = d1 u1 + · · · + dk uk
⇔ (c1 − d1)u1 + · · · + (ck − dk)uk = 0
⇔ c1 = d1, ..., ck = dk,
where the last equivalence follows from the linear independence of S.
Example
Let V = { (x, y, z)^T | x + y − z = 0 }.
▶ The general solution to the linear system is s(−1, 1, 0)^T + t(1, 0, 1)^T, s, t ∈ R.
▶ This shows that every vector v in the solution space V is a linear combination of the vectors in S = { (−1, 1, 0)^T, (1, 0, 1)^T }. Hence, S spans V.
▶ Since S contains only 2 vectors which are not a multiple of each other, S is linearly independent too.
Hence, S is a basis for V. In general, suppose
s1 u1 + s2 u2 + · · · + sk uk, s1, s2, ..., sk ∈ R
is the general solution to a homogeneous system Ax = 0. Then S = {u1, u2, ..., uk} is a basis for the subspace V = { u | Au = 0 }.
Example
Consider the homogeneous system with augmented matrix
[ 1 1 0 2 1 | 0 ]         [ 1 0 1 2/3 4/3 | 0 ]
[ 2 −1 3 0 3 | 0 ] —RREF→ [ 0 1 −1 4/3 −1/3 | 0 ]
[ 1 −2 3 −2 2 | 0 ]       [ 0 0 0 0 0 | 0 ]
Solving the system, we conclude that
S = { (−1, 1, 1, 0, 0)^T, (−2/3, −4/3, 0, 1, 0)^T, (−4/3, 1/3, 0, 0, 1)^T }
spans the solution space V. Using the last 3 coordinates, we can also conclude that S is linearly independent (details left to readers). Hence, S is a basis for V.
Example
Let V = { (x, y, z)^T | x + y − z = 0 }. It was shown that T = { (−1, 1, 0)^T, (1, 0, 1)^T } is a basis for V. Show that
S = { (−1, 2, 1)^T, (1, 1, 2)^T }
is a basis for V.
Recall that the zero space {0} is a subspace. Find a basis for {0}.
The basis for the zero space {0} is the empty set {} or ∅.
▶ Firstly, span{0} = {0} but the set {0} is not linearly independent.
▶ However, if S is a set that contains any nonzero vector, then span(S) will be strictly bigger than the zero space,
{0} ⫋ span(S).
▶ The empty set is linearly independent vacuously.
▶ The real definition of the span of S is the smallest subspace V such that S ⊆ V. That is, V = span(S) if S ⊆ V and V ⊆ W for all subspaces W containing S.
▶ Since the zero space is the smallest subspace containing the empty set, span of the empty set is the zero space.
Question
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a subset of vectors in V . Which of the following statements
is/are true?
Theorem
An n × n square matrix A is invertible if and only if its columns are linearly independent.
Proof.
Write A = ( u1 u2 · · · un ) and let S = {u1, u2, ..., un} be the set containing the columns of A. Then A is invertible if and only if its reduced row-echelon form is the identity matrix. But we have also seen that S is linearly independent if and only if the reduced row-echelon form of A has no non-pivot columns, which, for a square matrix, means that the reduced row-echelon form is the identity matrix.
Theorem
An n × n square matrix A is invertible if and only if its columns span Rn.
Proof.
Let S = {u1, u2, ..., un} be the set containing the columns of A. Then S spans Rn if and only if the reduced row-echelon form of A does not have any zero row, which, for a square matrix, means that the reduced row-echelon form is the identity matrix. This is equivalent to A being invertible.
Basis for Rn and Invertibility
Corollary
Let S = {u1 , u2 , ..., un } be a subset of Rn containing n vectors. Then S is linearly independent if and only if S spans
Rn .
Proof.
Let A = ( u1 u2 · · · un ) be the matrix whose columns are the vectors in S; note that A is a square matrix. By the two theorems above, S is linearly independent if and only if A is invertible, if and only if S spans Rn.
Corollary
Let S = {u1 , u2 , ..., uk } be a subset of Rn and A = u1 u2 · · · uk be the matrix whose columns are vectors in
S. Then S is a basis for Rn if and only if k = n and A is an invertible matrix.
Proof.
(⇒) If k < n, then S cannot span Rn . If k > n, then S cannot be linearly independent. Hence, if S is a basis, S
must have exactly n vectors, and by the previous theorem, A must be invertible.
(⇐) Conversely, if k = n and A is invertible, then S is a basis by the previous theorem.
Basis for Rn and Invertibility
Theorem
An n × n square matrix A is invertible if and only if the rows of A form a basis for Rn.
Theorem
A square matrix A of order n is invertible if and only if the rows of A are linearly independent.
The proofs of the 2 theorems follow from the fact that A is invertible if and only if AT is, and the rows of A are the
columns of AT .
Equivalent Statements for Invertibility
Theorem
Let A be a square matrix of order n. The following statements are equivalent.
(i) A is invertible.
(ii) AT is invertible.
(iii) (left inverse) There is a matrix B such that BA = I.
(iv) (right inverse) There is a matrix B such that AB = I.
(v) The reduced row-echelon form of A is the identity matrix.
(vi) A can be expressed as a product of elementary matrices.
(vii) The homogeneous system Ax = 0 has only the trivial solution.
(viii) For any b, the system Ax = b has a unique solution.
(ix) The determinant of A is nonzero, det(A) ̸= 0.
(x) The columns/rows of A are linearly independent.
(xi) The columns/rows of A span Rn.
Introduction to Coordinates Relative to a Basis
Let V = { (x, y, z)^T | z = 0 } = span{ (1, 0, 0)^T, (0, 1, 0)^T }. Observe that any vector (x, y)^T in R2 identifies with a unique vector
x(1, 0, 0)^T + y(0, 1, 0)^T = (x, y, 0)^T
in V.
Let T = { (1, 1, 0)^T, (1, −1, 0)^T }; it is also a basis for V.
▶ Now a vector (x, y)^T in R2 defines a vector x(1, 1, 0)^T + y(1, −1, 0)^T = (x + y, x − y, 0)^T in V.
▶ Conversely, a vector (x, y, 0)^T = ((x + y)/2)(1, 1, 0)^T + ((x − y)/2)(1, −1, 0)^T in V defines a vector ((x + y)/2, (x − y)/2)^T in R2.
Introduction to Coordinates Relative to a Basis
Let V = { (x, y, z)^T | x + y − z = 0 } = span{ (−1, 1, 0)^T, (1, 0, 1)^T }. Then we have the unique correspondence
(x, y)^T ∈ R2 ←→ x(−1, 1, 0)^T + y(1, 0, 1)^T = (y − x, x, y)^T ∈ V.
Definition
Let S = {u1 , u2 , ..., uk } be a basis for V , a subspace of Rn and
v = c1 u1 + c2 u2 + · · · + ck uk
be the unique expression of a vector v in V in terms of the basis S. The vector in Rk defined by the coefficients of
the linear combination is called the coordinates of v relative to basis S, and is denoted as
[v]S = (c1, c2, ..., ck)^T.
Examples
1. Let E = {e1 , e2 , ..., en } be the standard basis for Rn . For any w = (wi ) ∈ Rn ,
w = w 1 e1 + w 2 e2 + · · · + w n en .
⇒ [w]E = w
Example
(x, y, z)^T = x(1, 0, 0)^T + y(0, 1, 0)^T + z(0, 0, 1)^T ⇒ [(x, y, z)^T]E = (x, y, z)^T.
Examples
2. S = { (−1, 1, 0)^T, (1, 0, 1)^T } is a basis for V = { (x, y, z)^T | x + y − z = 0 }. Let v = (3, −1, 2)^T. To compute the coordinates of v relative to the basis S, find c1, c2 such that
c1(−1, 1, 0)^T + c2(1, 0, 1)^T = (3, −1, 2)^T.
[ −1 1 | 3 ]         [ 1 0 | −1 ]
[ 1 0 | −1 ] —RREF→  [ 0 1 | 2 ]
[ 0 1 | 2 ]          [ 0 0 | 0 ]
So,
(3, −1, 2)^T = (−1)(−1, 1, 0)^T + 2(1, 0, 1)^T ⇒ [v]S = (−1, 2)^T.
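The coordinates [v]S = (−1, 2)^T found above are easy to verify (the vector names are our own labels):

```python
# v should equal c1*u1 + c2*u2 with S = {u1, u2} and [v]_S = (-1, 2)^T.
u1, u2 = [-1, 1, 0], [1, 0, 1]
c1, c2 = -1, 2
v = [c1 * a + c2 * b for a, b in zip(u1, u2)]
assert v == [3, -1, 2]
```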
Remarks
▶ Even though v ∈ V ⊆ Rn has n coordinates, its coordinates relative to basis S, [v]S , has k coordinates if the
basis S has k vectors.
▶ Note that the correspondence is unique only if S is a basis. If S is not linearly independent, a few vectors in Rk
can map to the same v ∈ V .
▶ The relative coordinates depend on the ordering of the basis. If S = { u1 = (1, 0, 0)^T, u2 = (0, 1, 0)^T } and T = { v1 = (0, 1, 0)^T, v2 = (1, 0, 0)^T }, then for v = (1, 2, 0)^T,
[v]S = (1, 2)^T ̸= (2, 1)^T = [v]T.
Algorithm for Computing Relative Coordinate
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } be a basis for V .
▶ Let v be a vector in V. To find [v]S, we must find the coefficients c1, c2, ..., ck such that
v = c1 u1 + c2 u2 + · · · + ck uk.
▶ That is, solve the linear system with augmented matrix
( u1 u2 · · · uk | v ).
Example
V = { (x1, x2, x3, x4)^T | x1 − 2x2 + x3 = 0, x2 + x3 − 2x4 = 0 }. Basis: S = { (−3, −1, 1, 0)^T, (4, 2, 0, 1)^T }.
Find the coordinates of v = (1, 1, 1, 1)^T ∈ V relative to S.
[ −3 4 | 1 ]        [ 1 0 | 1 ]
[ −1 2 | 1 ] —RREF→ [ 0 1 | 1 ]
[ 1 0 | 1 ]         [ 0 0 | 0 ]
[ 0 1 | 1 ]         [ 0 0 | 0 ]
⇒ [v]S = (1, 1)^T.
Question
(i) Suppose ( u1 u2 · · · uk | v ) is consistent. What can you conclude?
(ii) Suppose ( u1 u2 · · · uk | v ) is inconsistent. What can you conclude?
3.7 Dimensions
Introduction
▶ Recall that bases for any nonzero subspace V ̸= {0} are not unique.
▶ So suppose now S = {u1 , u2 , ..., uk } and T = {v1 , v2 , ..., vm } are bases for a subspace V . Using S, we identify
V with Rk and using T , we identify V with Rm . Then do we say that V is k-dimensional, or m-dimensional?
Theorem
Let V be a subspace of Rn and B a basis for V . Suppose B contains k vectors, |B| = k.
(i) If S = {v1 , v2 , ..., vm } is a subset of V with m > k, then S is linearly dependent.
(ii) If S = {v1, v2, ..., vm} is a subset of V with m < k, then S cannot span V.
Corollary
Suppose S = {u1 , u2 , ..., uk } and T = {v1 , v2 , ..., vm } are bases for a subspace V ⊆ Rn . Then k = m.
Definition
Let V be a subspace of Rn . The dimension of V , denoted by dim(V ), is defined to be the number of vectors in any
basis of V .
In other words, V is k-dimensional if and only if V identifies with Rk using coordinates relative to any basis B of V .
Example
1. The dimension of the Euclidean n-space Rn is n, since the standard basis E = {e1, e2, ..., en} has n vectors.
2. V = { (x, y, z)^T | z = 0 } is 2-dimensional since the basis { (1, 0, 0)^T, (0, 1, 0)^T } has 2 vectors.
3. V = { (x1, x2, ..., xn)^T | a1 x1 + a2 x2 + · · · + an xn = 0 } is (n − 1)-dimensional if not all ai = 0. This is called a hyperplane in Rn.
Dimension of the Zero Space {0}
We will provide an intuitive reason why the empty set is the basis for the zero space {0} in Rn .
▶ Intuitively, the dimension is the number of independent degrees of freedom of movement: in a 3-dimensional space, we can
travel forwards and backwards, sideways, and up and down; in a 2-dimensional space, we can travel forwards and
backwards, as well as sideways; in a 1-dimensional space, we can only walk forwards or backwards.
▶ So, since we have no freedom of movement in the zero space, the zero space should be 0-dimensional.
▶ But this would tell us that by definition, the basis for the zero space must have no vectors, that is, it must be
the empty set.
Dimension of Solution Space
Recall that the vectors in the general solution of a homogeneous system form a basis for the solution space. This
means that the dimension of the solution space is equal to the number of parameters in the general solution. This is
in turn equal to the number of non-pivot columns in the reduced row-echelon form of the coefficient matrix.
Theorem
Let A be a m × n matrix. The number of non-pivot columns in the reduced row-echelon form of A is the dimension
of the solution space
V = { u ∈ Rn Au = 0 }.
Proof.
Let s1 u1 + s2 u2 + · · · + sk uk be the general solution to the homogeneous system Ax = 0. Then S = {u1 , u2 , ..., uk }
is a basis for V and so by definition, dim(V ) = k. But this means that the reduced row-echelon form of A has k
non-pivot columns.
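The theorem suggests a direct computation: count the non-pivot columns of the RREF, or equivalently the size of a nullspace basis. A minimal sketch with sympy (an added illustration; the system shown is the one defining the subspace V from the earlier coordinates example):

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system
# x1 - 2x2 + x3 = 0, x2 + x3 - 2x4 = 0.
A = Matrix([[1, -2, 1, 0],
            [0, 1, 1, -2]])

R, pivots = A.rref()
dim_V = A.cols - len(pivots)   # non-pivot columns = parameters = dim(V)
print(dim_V)                   # 2
print(len(A.nullspace()))      # 2: nullspace() returns a basis for the solution space
```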
Example
Let V = { (x, y, z)^T | 2x − 3y + z = 0 }. This is a hyperplane in R3 , so dim(V ) = 2. We can see this also from the
fact that the coefficient matrix ( 2 −3 1 ) has 2 non-pivot columns.
Now consider the set S = { v1 = (1, −1, −5)^T , v2 = (1, 1, 1)^T , v3 = (1, 0, −2)^T }. Check that S is a subset of V . Since S
contains 3 vectors and dim(V ) = 2 < 3, S must be linearly dependent. Indeed,
[  1 1  1 ]          [ 1 0 1/2 ]
[ −1 1  0 ]  −RREF→  [ 0 1 1/2 ] .
[ −5 1 −2 ]          [ 0 0  0  ]
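The dependency can be confirmed computationally; this sketch with sympy (an added illustration using the vectors from the example) shows the rank is 2 and reads the dependency off the RREF.

```python
from sympy import Matrix, Rational

v1 = Matrix([1, -1, -5])
v2 = Matrix([1, 1, 1])
v3 = Matrix([1, 0, -2])

M = Matrix.hstack(v1, v2, v3)
print(M.rank())   # 2 < 3, so the three vectors are linearly dependent
# The RREF exhibits the dependency: v3 = (1/2) v1 + (1/2) v2.
assert v3 == Rational(1, 2) * v1 + Rational(1, 2) * v2
```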
Example
Let V = { (x1 , x2 , x3 , x4 )^T | x1 + x2 + x3 + x4 = 0 }. It is a hyperplane in R4 , hence dim(V ) = 3.
Consider the set S = { (1, −1, 0, 0)^T , (1, 0, −1, 0)^T }. Check that S is a subset of V . Since S only contains 2 vectors, it cannot
span the whole of V . For example, the vector v = (1, 1, 1, −3)^T is in V , but the linear system
[  1  0 |  1 ]
[ −1  0 |  1 ]
[  0 −1 |  1 ]
[  0  0 | −3 ]
is inconsistent.
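The inconsistency can be verified by comparing ranks: the system is inconsistent exactly when the augmented matrix has larger rank than the coefficient matrix. A sketch with sympy (an added illustration using the data from the example):

```python
from sympy import Matrix

u1 = Matrix([1, -1, 0, 0])
u2 = Matrix([1, 0, -1, 0])
v = Matrix([1, 1, 1, -3])

assert sum(v) == 0              # v satisfies x1 + x2 + x3 + x4 = 0, so v is in V
A = Matrix.hstack(u1, u2)       # coefficient matrix ( u1 u2 )
aug = Matrix.hstack(u1, u2, v)  # augmented matrix ( u1 u2 | v )
print(A.rank(), aug.rank())     # 2 3: ranks differ, so v is not in span(S)
```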
Spanning Set Theorem
Theorem
Let S = {u1 , u2 , ..., uk } be a subset of vectors in Rn , and let V = span(S). Suppose V is not the zero space,
V ̸= {0}. Then there must be a subset of S that is a basis for V .
Proof.
If S is linearly independent, then S is a basis for V . Otherwise, one of the vectors ui in S can be written as a linear
combination of the others. Without loss of generality (rearranging if necessary), say
uk = c1 u1 + c2 u2 + · · · + ck−1 uk−1
for some coefficients c1 , c2 , ..., ck−1 . We claim that {u1 , u2 , ..., uk−1 } still spans V . For if v is a vector in V , we have
v = a1 u1 + a2 u2 + · · · + ak uk
= a1 u1 + a2 u2 + · · · + ak (c1 u1 + c2 u2 + · · · + ck−1 uk−1 )
= (a1 + ak c1 )u1 + (a2 + ak c2 )u2 + · · · + (ak−1 + ak ck−1 )uk−1
which shows that v is a linear combination of vectors in {u1 , u2 , ..., uk−1 }. If {u1 , u2 , ..., uk−1 } is linearly
independent, then it is a basis for V . Otherwise, continuing this process of discarding redundant vectors, we
conclude that there must be a subset of S that is a basis for V .
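The pruning in the proof corresponds to keeping only the pivot columns of the matrix whose columns are the vectors of S. A minimal sketch with sympy (the three vectors here are hypothetical data chosen for illustration, with a built-in redundancy):

```python
from sympy import Matrix

# A spanning set with a redundancy: the third vector is the sum of the first two.
u1 = Matrix([1, 0, 1])
u2 = Matrix([0, 1, 1])
u3 = u1 + u2                         # redundant vector

M = Matrix.hstack(u1, u2, u3)
_, pivots = M.rref()
basis = [M.col(j) for j in pivots]   # keep only the pivot columns
print(pivots)       # (0, 1): the first two vectors already span V
print(len(basis))   # 2
```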
Linear Independence Theorem
Theorem
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a linearly independent subset of V , S ⊆ V . Then there must be
a set T containing S, S ⊆ T such that T is a basis for V .
Proof.
If span(S) = V , then S is a basis for V . Otherwise, since span(S) ⫋ V , there must be a vector in V that is not
contained in span(S), say uk+1 ∈ V \ span(S). Note that since uk+1 ̸∈ span{u1 , u2 , ..., uk }, the set
S1 = {u1 , u2 , ..., uk , uk+1 } is linearly independent and dim(span(S1 )) = k + 1. If span(S1 ) = V , we are done.
Otherwise, repeating the argument above, we can find uk+2 in V such that S2 = {u1 , u2 , ..., uk , uk+1 , uk+2 } is a
linearly independent subset of V . Continuing inductively, this process must stop when the number of vectors in Sm is
equal to the dimension of V ; otherwise, if |Sm | > dim(V ), then Sm cannot be linearly independent. So let
T = Sm when |Sm | = dim(V ).
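The extension in the proof can be carried out mechanically: append the standard basis vectors after the independent set and keep the pivot columns. A sketch with sympy (the starting vector u1 is hypothetical data for illustration):

```python
from sympy import Matrix, eye

# A linearly independent set S = {u1} in R^3 that does not yet span R^3.
u1 = Matrix([1, 1, 0])

# Append the standard basis vectors and keep only the pivot columns:
# the result still contains u1 and is a basis for R^3.
M = Matrix.hstack(u1, eye(3))
_, pivots = M.rref()
T = [M.col(j) for j in pivots]
print(pivots)    # (0, 1, 3): u1 together with e1 and e3 forms a basis
print(len(T))    # 3 = dim(R^3)
```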
Challenge
Let V be a k-dimensional subspace of Rn . Using the dimension of V (instead of using the equivalent statements
of invertibility), prove that a subset S of V containing k vectors, |S| = k, is linearly independent if and only if it
spans V .
Discussion
Recall that for a set S to be a basis for a subspace V in Rn , we must check that
(i) span(S) = V , and
(ii) S is linearly independent.
However, if we know the dimension of V and if the number of vectors in the set S is equal to the dimension of V ,
|S| = dim(V ), then it suffices to check just one of the above criteria.
Dimension and Subspaces
Theorem
Let U and V be subspaces of Rn .
(i) If U ⊆ V , then dim(U) ≤ dim(V ).
(ii) If U is a strict subset of V , U ⫋ V , then the dimension of U is strictly smaller than that of V , dim(U) < dim(V ).
Theorem (B1)
Let V be a k-dimensional subspace of Rn , dim(V ) = k. Suppose S is a linearly independent subset of V containing
k vectors, |S| = k. Then S is a basis for V .
Proof.
Let U = span(S). Since S is linearly independent, S is a basis for U, and hence, dim(U) = k. Since S ⊆ V , U ⊆ V .
Also, dim(U) = k = dim(V ). Therefore, U = V , and so S is a basis for V .
Theorem (B2)
Let V be a k-dimensional subspace of Rn , dim(V ) = k. Suppose S is a set containing k vectors, |S| = k, such that
V ⊆ span(S). Then S is a basis for V .
Proof.
Let U = span(S), then V ⊆ U. So, k = dim(V ) ≤ dim(U) ≤ k which shows that k = dim(U) and hence
V = U = span(S). Next, observe that S must be linearly independent. For if S is linearly dependent, then
k = dim(U) = dim(span(S)) < k, a contradiction.
Equivalent ways to check for basis
In summary, S is a basis for V if it satisfies any one of the following sets of conditions.
▶ Definition: (1) span(S) = V ; (2) S is linearly independent.
▶ (B1): (1) |S| = dim(V ); (2) S ⊆ V ; (3) S is linearly independent.
▶ (B2): (1) |S| = dim(V ); (2) V ⊆ span(S).
Let V = { (x, y, z)^T | x + y − z = 0 }. Show that S = { (−1, 2, 1)^T , (1, 1, 2)^T } is a basis for V .
▶ Check that S ⊆ V : (−1) + (2) − (1) = 0 and (1) + (1) − (2) = 0.
▶ Check that S is linearly independent. This is clear since the 2 vectors in S are not multiples of each
other. Hence, dim(span(S)) = 2.
▶ dim(V ) = 2, since the coefficient matrix ( 1 1 −1 ), which is already in reduced row-echelon form, has 2 non-pivot columns.
By (B1), S is a basis for V .
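The three checks for criterion (B1) can each be automated; this sketch with sympy (an added illustration using the example's data) verifies membership, independence, and the dimension count in turn.

```python
from sympy import Matrix

# Candidate basis for V = { (x, y, z)^T : x + y - z = 0 }.
v1 = Matrix([-1, 2, 1])
v2 = Matrix([1, 1, 2])

A = Matrix([[1, 1, -1]])                       # coefficient matrix of the defining equation
assert (A * v1)[0] == 0 and (A * v2)[0] == 0   # S is a subset of V
assert Matrix.hstack(v1, v2).rank() == 2       # S is linearly independent
assert len(A.nullspace()) == 2                 # dim(V) = 2 = |S|
print("S is a basis for V")                    # by criterion (B1)
```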
Proof.
Let c1 , c2 , ..., ck , c be coefficients satisfying the equation
c1 u1 + c2 u2 + · · · + ck uk + cu = 0.
If c ̸= 0, then u = −(c1 /c)u1 − (c2 /c)u2 − · · · − (ck /c)uk is in span{u1 , u2 , ..., uk }, a contradiction. Hence c = 0, and the equation
c1 u1 + c2 u2 + · · · + ck uk = 0
tells us that c1 = c2 = · · · = ck = 0 by the independence of S. Therefore only the trivial coefficients satisfy the
equation above, which proves that the set {u1 , u2 , ..., uk , u} is linearly independent.
Linear Dependency and Adding or Removing Vectors
Theorem
Suppose {u1 , u2 , ..., uk } is a linearly independent set of vectors in Rn . Then any subset of {u1 , u2 , ..., uk } is linearly
independent.
Proof.
Let {ui1 , ui2 , ..., uil } be a subset of {u1 , u2 , ..., uk }. Relabelling the indices (rearranging the vectors in the set if necessary), we may
assume that the subset is {u1 , u2 , ..., ul } for some l ≤ k. Suppose c1 , c2 , ..., cl are coefficients satisfying the equation
c1 u1 + c2 u2 + · · · + cl ul = 0.
Then
c1 u1 + c2 u2 + · · · + cl ul + 0ul+1 + · · · + 0uk = 0.
This is a linear combination of the vectors {u1 , u2 , ..., uk }, and since the set is independent, necessarily all the
coefficients are 0. In particular, c1 = c2 = · · · = cl = 0, which proves that the set {u1 , u2 , ..., ul } is independent.
Properties of Coordinates Relative to a Basis
Theorem
Let V be a subspace of Rn and B a basis for V . For any vectors u, v in V and scalar a,
(i) [u + v]B = [u]B + [v]B ; and
(ii) [av]B = a[v]B .
Proof.
Exercise.
More Properties of Coordinates Relative to a Basis
Theorem
Let B be a basis for V containing k vectors, |B| = k. Let v1 , v2 , ..., vm be vectors in V . Then
(i) {v1 , v2 , ..., vm } is linearly independent (respectively, dependent) if and only if {[v1 ]B , [v2 ]B , ..., [vm ]B } is linearly
independent (respectively, dependent) in Rk ; and
(ii) {v1 , v2 , ..., vm } spans V if and only if {[v1 ]B , [v2 ]B , ..., [vm ]B } spans Rk .
Proof.
(i) Follows from the properties of coordinates relative to a basis: c1 v1 + c2 v2 + · · · + cm vm = 0n×1 if and only if
c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B = 0k×1 .
Proof (continued).
(ii) (⇐) Suppose {[v1 ]B , [v2 ]B , ..., [vm ]B } spans Rk . Given any v ∈ V , [v]B ∈ Rk , and so we can find c1 , ..., cm such that
[v]B = c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B in Rk . Then v = c1 v1 + c2 v2 + · · · + cm vm , which proves that
{v1 , v2 , ..., vm } spans V .
(⇒) Let B = {u1 , u2 , ..., uk }. Suppose {v1 , v2 , ..., vm } spans V . Any vector w = (w1 , w2 , ..., wk ) ∈ Rk defines a
vector v = w1 u1 + w2 u2 + · · · + wk uk in V , and so we can write v = c1 v1 + c2 v2 + · · · + cm vm . Then