Chapter 3

Chapter 3 of MA1508E covers Euclidean vector spaces, focusing on the definition and geometric interpretation of vectors, vector algebra, and properties of the dot product, norm, and distance. It introduces concepts such as linear combinations and spans of vectors, emphasizing their significance in Rn. The chapter also includes examples and theorems that illustrate the properties and operations involving vectors.


MA1508E: Linear Algebra for Engineering

Chapter 3: Euclidean Vector Spaces


3.1 Euclidean Vector Spaces
Vectors

Recall that a (real) n-vector (or vector) is an ordered collection of n real numbers, written as a column,

    v = (v1, v2, ..., vn)ᵀ, where vi ∈ R for i = 1, ..., n.

Here the entry vi is also known as the i-th coordinate.
Definition
The Euclidean n-space, denoted as Rn, is the collection of all n-vectors,

    Rn = { v = (v1, v2, ..., vn)ᵀ | vi ∈ R for i = 1, ..., n }.
Geometric Interpretation of Vectors
Geometrically, a vector v can be interpreted as an arrow, with the tail placed at the origin 0 and the head of the arrow at v, or it can represent a position in the Euclidean n-space. For example, the vector v = (1, 1, 1)ᵀ in R3 represents both the point and the arrow.
Geometric Interpretation of Vector Algebra
Since vectors are matrices, we can apply matrix algebra to vectors. These operations have geometric interpretations.

1. Adding u to v is visualized by placing the tail of v at the head of u; the arrow from the tail of u to the head of v is the resultant u + v.

2. Multiplying a vector u by a scalar c scales the vector; for c > 1 the arrow cu stretches u in the same direction.
Vector Algebra
The following properties follow from the properties of matrix algebra. However, try using the geometric interpretations to prove them.
Theorem
Let Rn be a Euclidean vector space. Let u, v, w be vectors in Rn and a, b be some real numbers.
(i) The sum u + v is a vector in Rn .
(ii) (Commutative) u + v = v + u.
(iii) (Associative) u + (v + w) = (u + v) + w.
(iv) (Zero vector) 0 + v = v.
(v) The negative −v is a vector in Rn such that v + (−v) = 0.
(vi) (Scalar multiple) av is a vector in Rn .
(vii) (Distribution) a(u + v) = au + av.
(viii) (Distribution) (a + b)u = au + bu.
(ix) (Associativity of scalar multiplication) (ab)u = a(bu).
(x) If au = 0, then either a = 0 or u = 0.
3.2 Dot Product, Norm, Distance
Discussion

Matrix addition and scalar multiplication can be applied directly to vectors. However, how do we, if it is even possible, define the multiplication of vectors?

Let u, v ∈ Rn be two (column) vectors. The matrix product uv of an (n × 1) matrix with an (n × 1) matrix is undefined.
Multiplying Vectors
We are able to multiply if we transpose one of the vectors.

1. (Outer Product) u ⊗ v = uvᵀ = (ui vj), the n × n matrix whose (i, j)-entry is ui vj. (Not part of syllabus.)

2. (Inner Product) u · v = uᵀv = u1 v1 + u2 v2 + · · · + un vn = Σᵢ ui vi. Also known as the dot product.

Definition
The inner product (or dot product) of vectors u = (ui) and v = (vi) in Rn is defined to be

    u · v = u1 v1 + u2 v2 + · · · + un vn.
Example

1. (1, 2, −1)ᵀ · (2, 2, 2)ᵀ = (1)(2) + (2)(2) + (−1)(2) = 4.

2. (1, 0, −1)ᵀ · (1, 1, 1)ᵀ = (1)(1) + (0)(1) + (−1)(1) = 0.

3. (2, 3)ᵀ · (1, −2)ᵀ = (2)(1) + (3)(−2) = −4.
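The dot-product arithmetic above is easy to replay in code. A minimal pure-Python sketch (the helper name `dot` is our own, not a library routine):

```python
def dot(u, v):
    """Inner (dot) product u . v = u1*v1 + u2*v2 + ... + un*vn."""
    if len(u) != len(v):
        raise ValueError("vectors must have the same length")
    return sum(ui * vi for ui, vi in zip(u, v))

# The three worked examples above:
print(dot([1, 2, -1], [2, 2, 2]))   # 4
print(dot([1, 0, -1], [1, 1, 1]))   # 0
print(dot([2, 3], [1, -2]))         # -4
```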
Norm in R2

The distance between the point u = (x, y)ᵀ and the origin in R2 is given by

    distance = √(x² + y²).

Question

What is the length of the vector v = (1, 1, 1)ᵀ?
Norm in R3

The distance between the point v = (x, y, z)ᵀ and the origin in R3 is given by

    distance = √((√(x² + y²))² + z²) = √(x² + y² + z²).
Norm in Rn

Definition
The norm of a vector u ∈ Rn, u = (ui), is the square root of the inner product of u with itself, and is denoted as ∥u∥,

    ∥u∥ = √(u · u) = √(u1² + u2² + · · · + un²).
This is also known as the length or magnitude of the vector.
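The definition translates directly into code. A small sketch (the helper `norm` is our own name), which also answers the earlier question about (1, 1, 1)ᵀ:

```python
import math

def norm(u):
    """Norm of u: sqrt(u . u) = sqrt(u1^2 + ... + un^2)."""
    return math.sqrt(sum(x * x for x in u))

print(norm([3, 4]))      # 5.0
print(norm([1, 1, 1]))   # sqrt(3), approximately 1.732
```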


Properties of Inner Product and Norm
Theorem
Let u, v and w be vectors in Rn, and a, b, c be some scalars.
(i) (Symmetric) u · v = v · u.

(ii) (Scalar multiplication) c(u · v) = (cu) · v = u · (cv).

(iii) (Distribution) u · (av + bw) = a(u · v) + b(u · w).

(iv) (Positive definite) u · u ≥ 0 with equality if and only if u = 0.

(v) ∥cu∥ = |c|∥u∥.

Partial Proof.
Proof for (iv) only; the rest are left as exercise. Let u = (ui)n×1. Since ui ∈ R, ui² ≥ 0 for all i = 1, ..., n. Therefore,

    u · u = u1² + u2² + · · · + un² ≥ 0.

Note also that this is a sum of nonnegative numbers, which is equal to 0 if and only if ui² = 0 for all i, which is equivalent to ui = 0 for all i = 1, ..., n.
Unit Vectors

Definition
A vector u in Rn is a unit vector if its norm is 1,
∥u∥ = 1

Example
1. Let ei denote the i-th column of the n × n identity matrix In. Then ei is a unit vector for all i = 1, 2, ..., n.

2. (1/√2)(1, 0, −1)ᵀ is a unit vector.

3. (1, 2, 1)ᵀ is not a unit vector; (1/√6)(1, 2, 1)ᵀ is a unit vector pointing in the same direction.
Normalizing a Vector

Let u be a nonzero vector, u ≠ 0. Multiplying by the reciprocal of the norm gives a unit vector,

    u ⟼ u/∥u∥.

Indeed, u/∥u∥ is a unit vector, since

    (u/∥u∥) · (u/∥u∥) = (u · u)/∥u∥² = 1.

This is called normalizing u.
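Normalization can be sketched the same way (pure Python; `norm` and `normalize` are our own helper names, and the zero-vector guard reflects the nonzero assumption above):

```python
import math

def norm(u):
    return math.sqrt(sum(x * x for x in u))

def normalize(u):
    """Return the unit vector u/||u||; defined only for nonzero u."""
    n = norm(u)
    if n == 0:
        raise ValueError("cannot normalize the zero vector")
    return [x / n for x in u]

v = normalize([1, 2, 1])   # (1/sqrt(6))(1, 2, 1) from the earlier example
print(norm(v))             # 1.0 up to floating-point rounding
```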
Distance Between Vectors

By Pythagoras' theorem, the distance between (x1, y1)ᵀ and (x2, y2)ᵀ in R2 is

    distance = √((x1 − x2)² + (y1 − y2)²) = ∥(x1, y1)ᵀ − (x2, y2)ᵀ∥.

Similarly, in R3, the distance between (x1, y1, z1)ᵀ and (x2, y2, z2)ᵀ is

    distance = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²) = ∥(x1, y1, z1)ᵀ − (x2, y2, z2)ᵀ∥.

Definition
The distance between two vectors u and v, denoted as d(u, v), is defined to be

    d(u, v) = ∥u − v∥.
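A one-line sketch of the definition (the helper `distance` is our own name):

```python
import math

def distance(u, v):
    """d(u, v) = ||u - v||."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

print(distance([1, 1], [4, 5]))   # 5.0, since sqrt(3^2 + 4^2) = 5
```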
Angle

Let u = (x, y)ᵀ and v = (x0, 0)ᵀ with x0 > 0, so that v lies along the positive x-axis. The angle θ between u and v satisfies

    cos(θ) = x/√(x² + y²) = x x0/(∥u∥ x0) = (u · v)/(∥u∥∥v∥).

Note that 0 ≤ θ ≤ π.

Definition
Define the angle θ between two nonzero vectors u, v ≠ 0 to be such that

    cos(θ) = (u · v)/(∥u∥∥v∥).
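The defining formula gives θ by inverting the cosine. A minimal sketch (the helper `angle` is our own name, assuming both vectors are nonzero):

```python
import math

def angle(u, v):
    """Angle theta in [0, pi] between nonzero vectors: cos(theta) = (u.v)/(||u|| ||v||)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

print(angle([1, 0], [0, 1]))   # pi/2 (perpendicular vectors)
print(angle([1, 1], [1, 0]))   # pi/4
```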
3.3 Linear Combinations and Linear Spans
Linear Combinations

Definition
Let u1 , u2 , ..., uk be vectors in Rn . A linear combination of the vectors u1 , u2 , ..., uk is

c1 u1 + c2 u2 + · · · + ck uk ,

for some c1 , c2 , ..., ck ∈ R. The scalars c1 , c2 , ..., ck are called coefficients.

Think of u1 , u2 , ..., uk as the directions, and c1 , c2 , ..., ck as the amount of units to walk in the respective directions.
Example

Consider the vectors u1 = (2, 1)ᵀ and u2 = (−1, 1)ᵀ in R2.

Click on the following link https://www.geogebra.org/m/qzhtjwcc. Adjust the different values of c1 and c2 to visualize the linear combinations of u1 and u2.

(i) When c1 = c2 = 1, c1 u1 + c2 u2 = u1 + u2 = (1, 2)ᵀ.

(ii) When c1 = 2 and c2 = −1, c1 u1 + c2 u2 = 2u1 − u2 = (5, 1)ᵀ.

(iii) When c1 = 3/2 and c2 = 1/2, c1 u1 + c2 u2 = (5/2, 2)ᵀ.

(iv) When c1 = −1 and c2 = 3, c1 u1 + c2 u2 = −u1 + 3u2 = (−5, 2)ᵀ.
Linear Span

Definition
Let u1, u2, ..., uk be vectors in Rn. The span of u1, u2, ..., uk is the subset of Rn containing all the linear combinations of u1, u2, ..., uk,

    span{u1, u2, ..., uk} = { c1 u1 + c2 u2 + · · · + ck uk | c1, c2, ..., ck ∈ R }.

That is, every vector v in the set span{u1, u2, ..., uk} is a linear combination of u1, u2, ..., uk,

    v = c1 u1 + c2 u2 + · · · + ck uk,

for some scalars c1, c2, ..., ck.


Example

Click on the following link https://www.geogebra.org/m/n7ypnzsn. This activity demonstrates the span of the 2 vectors u1 = (1, −2, 1)ᵀ and u2 = (−1, 1, 1)ᵀ in R3.

▶ Click on the play button beside c1 and c2 to see the different linear combinations of u1 and u2.
▶ The collection of all these linear combinations is the orange plane.
▶ Consider the vector w = (1, 1, 0)ᵀ. Is it in the span of u1 and u2?
Example

Consider the vectors u1 = (1, 1, 1)ᵀ, u2 = (1, −1, 0)ᵀ, and u3 = (1, 2, 1)ᵀ. Is the vector v = (1, 2, 3)ᵀ a linear combination of u1, u2 and u3? Equivalently, is v in span{u1, u2, u3}?

v is in span{u1, u2, u3} if and only if there exist coefficients c1, c2, and c3 such that v = c1 u1 + c2 u2 + c3 u3, that is,

    c1 (1, 1, 1)ᵀ + c2 (1, −1, 0)ᵀ + c3 (1, 2, 1)ᵀ = (1, 2, 3)ᵀ.

This is a vector equation, which when written as a matrix equation gives

    [ 1  1  1 ] [ c1 ]   [ 1 ]
    [ 1 −1  2 ] [ c2 ] = [ 2 ].
    [ 1  0  1 ] [ c3 ]   [ 3 ]
Example

This is a linear system. Solving it, we have

    ( u1 u2 u3 | v ) =
    [ 1  1  1 | 1 ]             [ 1 0 0 |  6 ]
    [ 1 −1  2 | 2 ]  --RREF-->  [ 0 1 0 | −2 ].
    [ 1  0  1 | 3 ]             [ 0 0 1 | −3 ]

Since the system is consistent, we can conclude that v is in span{u1, u2, u3}. Moreover, the solution of the system tells us that c1 = 6, c2 = −2, c3 = −3, that is,

    (1, 2, 3)ᵀ = 6 (1, 1, 1)ᵀ − 2 (1, −1, 0)ᵀ − 3 (1, 2, 1)ᵀ.
Example

Now consider u1 = (1, 0, 1)ᵀ, u2 = (0, 1, −1)ᵀ, and u3 = (2, 1, 1)ᵀ. Let v = (1, 2, 3)ᵀ. Is v in span{u1, u2, u3}?

Find c1, c2, and c3 such that v = c1 u1 + c2 u2 + c3 u3.

    [ 1  0  2 | 1 ]                     [ 1 0 2 | 1 ]
    [ 0  1  1 | 2 ]  --R3−R1, R3+R2-->  [ 0 1 1 | 2 ].
    [ 1 −1  1 | 3 ]                     [ 0 0 0 | 4 ]

The system is inconsistent. Hence, v is not in span{u1, u2, u3}.

In fact, if you plot the span of u1, u2, u3 in https://geogebra.org/3d, you will see that the span is a plane and v is outside the plane.
Algorithm to Check for Linear Combination

Let S = {u1, u2, ..., uk} be a set of vectors in Rn.

▶ Form the n × k matrix A = ( u1 u2 · · · uk ) whose columns are the vectors in S.

▶ Then a vector v in Rn is in span{u1, u2, ..., uk} if and only if the system Ax = v is consistent.

▶ If the system is consistent, then the solutions to the system are the possible coefficients of the linear combination. That is, if u = (c1, c2, ..., ck)ᵀ is a solution to Ax = v, then

    v = c1 u1 + c2 u2 + · · · + ck uk.

Explicitly, v ∈ span{u1, u2, ..., uk} if and only if ( u1 u2 · · · uk | v ) is consistent.
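The algorithm above can be sketched with an exact row reduction over the rationals (pure Python; the helper names `rref` and `in_span` are our own, assumed for illustration):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row-echelon form over Q; returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]          # scale pivot row to a leading 1
        for i in range(len(R)):
            if i != r and R[i][c] != 0:             # clear the rest of the column
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def in_span(vectors, v):
    """Is v in span{u1, ..., uk}?  Reduce ( u1 ... uk | v ):
    consistent iff the last column is not a pivot column."""
    k = len(vectors)
    aug = [[u[i] for u in vectors] + [v[i]] for i in range(len(v))]
    _, pivots = rref(aug)
    return k not in pivots

u1, u2, u3 = [1, 1, 1], [1, -1, 0], [1, 2, 1]
print(in_span([u1, u2, u3], [1, 2, 3]))   # True: v = 6 u1 - 2 u2 - 3 u3
```

Running `rref` on the augmented matrix of the worked example recovers exactly the coefficients c1 = 6, c2 = −2, c3 = −3 found above.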


Question

Let u1 = (1, 0, 1)ᵀ, u2 = (0, 1, −1)ᵀ, and u3 = (2, 1, 1)ᵀ. Let v = (1, 1, 1)ᵀ.

1. Is v in span{u1, u2, u3}?

2. If it is, write v as a linear combination of u1, u2, u3,

    v = c1 u1 + c2 u2 + c3 u3.

3. Are the coefficients c1, c2, c3 unique?


Question

Let u1 = (1, 0, 1)ᵀ, u2 = (0, 1, −1)ᵀ, and u3 = (2, 1, 1)ᵀ. Find a vector (x, y, z)ᵀ that is not in span{u1, u2, u3}.
When will span(S) = Rn ?

Let S = {u1 , u2 , ...uk } be a set of vectors in Rn . Now instead of checking if a specific vector v is in span(S), we may
ask if every vector is in the span, that is, whether span(S) = Rn .
Example

1. S = { (1, 1, 1)ᵀ, (1, 2, 1)ᵀ, (2, 3, 2)ᵀ }. We check if every (x, y, z)ᵀ is in span(S).

    [ 1 1 2 | x ]             [ 1 0 1 | 2x − y ]
    [ 1 2 3 | y ]  --RREF-->  [ 0 1 1 | −x + y ].
    [ 1 1 2 | z ]             [ 0 0 0 | −x + z ]

The system is consistent if and only if z − x = 0. This shows that not every vector in R3 is in span(S), that is, span(S) ≠ R3. For example, (1, 0, 0)ᵀ or (0, 0, 1)ᵀ is not in the span.
When will span(S) = Rn?

2. Let S = { (1, 1, 1)ᵀ, (1, −1, 0)ᵀ, (1, 2, 1)ᵀ }. Is span(S) = R3?

    [ 1  1 1 | x ]             [ 1 0 0 | −x − y + 3z ]
    [ 1 −1 2 | y ]  --RREF-->  [ 0 1 0 | x − z       ].
    [ 1  0 1 | z ]             [ 0 0 1 | x + y − 2z  ]

The system is always consistent regardless of the choice of x, y, z. This shows that span(S) = R3. In fact, given any (x, y, z)ᵀ ∈ R3,

    (x, y, z)ᵀ = (−x − y + 3z)(1, 1, 1)ᵀ + (x − z)(1, −1, 0)ᵀ + (x + y − 2z)(1, 2, 1)ᵀ.
Discussion

Consider now a vector x = (x1, x2, ..., xn)ᵀ in Rn. Observe that elementary row operations do not make any of its entries identically zero; every entry remains a linear combination of x1, x2, ..., xn.

Example

1. (x1, x2, x3)ᵀ --R2↔R3--> (x1, x3, x2)ᵀ

2. (x1, x2, x3)ᵀ --R3−aR1--> (x1, x2, x3 − a x1)ᵀ

3. (x1, x2, x3)ᵀ --cR2--> (x1, c x2, x3)ᵀ, for some c ≠ 0.
Discussion

This means that in the reduction of ( u1 u2 · · · uk | x ), where x = (x1, x2, ..., xn)ᵀ, the entries in the last column will never be identically 0, but some linear combination of x1, x2, ..., xn. In this case, the system is consistent for every x if and only if the reduced row-echelon form of ( u1 u2 · · · uk ) does not have any zero row.
Algorithm to check if span(S) = Rn.

Let S = {u1, u2, ..., uk} be a set of vectors in Rn.

▶ Form the n × k matrix A = ( u1 u2 · · · uk ) whose columns are the vectors in S.

▶ Then span(S) = Rn if and only if the system Ax = v is consistent for all v.

▶ This is equivalent to the reduced row-echelon form of A having no zero rows.

Explicitly, span{u1, u2, ..., uk} = Rn if and only if the reduced row-echelon form of ( u1 u2 · · · uk ) has no zero rows.
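This check can be sketched in pure Python: the RREF of A has no zero row exactly when every row has a pivot, i.e. the number of pivot columns equals n (the helper names `rref` and `spans_Rn` are our own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row-echelon form over Q; returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def spans_Rn(vectors):
    """span(S) = Rn iff the RREF of (u1 ... uk) has a pivot in every row."""
    n = len(vectors[0])
    A = [[u[i] for u in vectors] for i in range(n)]   # vectors as columns
    _, pivots = rref(A)
    return len(pivots) == n

print(spans_Rn([[1, 1, 1], [1, 2, 1], [2, 3, 2]]))    # False (example 1 above)
print(spans_Rn([[1, 1, 1], [1, -1, 0], [1, 2, 1]]))   # True  (example 2 above)
```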
Example

The n × n identity matrix In is in reduced row-echelon form and does not have any zero rows. Hence, its columns span Rn.

Indeed, let ei denote the i-th column of In for i = 1, ..., n. Then for any vector w,

    w = (w1, w2, ..., wn)ᵀ = w1 e1 + w2 e2 + · · · + wn en.

Hence, span{e1, e2, ..., en} = Rn. This set is called the standard basis of Rn.


Example

Let u1 = (1, 1, 1)ᵀ, u2 = (1, −1, 0)ᵀ, u3 = (1, 2, 1)ᵀ, and u4 = (1, 0, 1)ᵀ. Putting the vectors as columns of a matrix and reducing,

    [ 1  1 1 1 ]             [ 1 0 0  2 ]
    [ 1 −1 2 0 ]  --RREF-->  [ 0 1 0  0 ],
    [ 1  0 1 1 ]             [ 0 0 1 −1 ]

we can conclude that span{u1, u2, u3, u4} = R3. Indeed, given any (x, y, z)ᵀ in R3,

    [ 1  1 1 1 | x ]             [ 1 0 0  2 | 3z − y − x ]
    [ 1 −1 2 0 | y ]  --RREF-->  [ 0 1 0  0 | x − z      ]
    [ 1  0 1 1 | z ]             [ 0 0 1 −1 | x + y − 2z ]

tells us that (x, y, z)ᵀ = (3z − y − x − 2s) u1 + (x − z) u2 + (x + y − 2z + s) u3 + s u4 for any s ∈ R.
Example

Let u1 = (1, 1, 1)ᵀ, u2 = (1, −1, 0)ᵀ, and u3 = (2, 0, 1)ᵀ. Then

    ( u1 u2 u3 ) =
    [ 1  1 2 ]             [ 1 0 1 ]
    [ 1 −1 0 ]  --RREF-->  [ 0 1 1 ]
    [ 1  0 1 ]             [ 0 0 0 ]

tells us that span{u1, u2, u3} ≠ R3. Indeed,

    [ 1  1 2 | x ]                      [ 1  1  2 | x             ]
    [ 1 −1 0 | y ]  --R2−R1, R3−R1,-->  [ 0 −2 −2 | y − x         ]
    [ 1  0 1 | z ]     R3−(1/2)R2       [ 0  0  0 | z − y/2 − x/2 ]

tells us that whenever z − y/2 − x/2 ≠ 0, the vector (x, y, z)ᵀ is not in the span, (x, y, z)ᵀ ∉ span{u1, u2, u3}.
Question

Let S = {u1 , u2 , ..., uk } be a set of k vectors in Rn .

1. Show that if k < n then span(S) ̸= Rn .

2. If k > n, can we make any conclusion?


Properties of Linear Spans

Theorem (Properties of Linear Spans)


Let S = {u1 , u2 , ..., uk } be a set of vectors in Rn .
(i) The zero vector 0 is in span(S).
(ii) The span is closed under scalar multiplication, that is, for any vector u in span(S) and scalar α, the vector αu is
a vector in span(S).
(iii) The span is closed under addition, that is, for any vectors u, v in span(S), the sum u + v is a vector in span(S).

Proof.
We will only provide the main idea of the proof; the details are left to the readers.
(i) 0 = 0u1 + 0u2 + · · · + 0uk.
(ii) Write u = c1 u1 + c2 u2 + · · · + ck uk. Then αu = (αc1)u1 + (αc2)u2 + · · · + (αck)uk.
(iii) Write u = c1 u1 + c2 u2 + · · · + ck uk and v = d1 u1 + d2 u2 + · · · + dk uk. Then
u + v = (c1 + d1)u1 + (c2 + d2)u2 + · · · + (ck + dk)uk.
Properties of Linear Spans

Remark
Properties (ii) and (iii) can be combined together into one property (ii’):
The span is closed under linear combinations, that is, if u, v are vectors in span(S) and α, β are any scalars, then the
linear combination αu + βv is a vector in span(S).

Observe that property (ii') implies that span(S) is closed under linear combinations of arbitrarily many vectors. That is, suppose v1, v2, ..., vm are vectors in span(S); then for any scalars c1, c2, ..., cm, the linear combination c1 v1 + c2 v2 + · · · + cm vm is also in the span. For by property (ii'), c1 v1 + c2 v2 is in span(S), and thus by property (ii') again, (c1 v1 + c2 v2) + c3 v3 is in span(S) too. Thus, by induction, we can conclude that c1 v1 + c2 v2 + · · · + cm vm is in span(S).
Since this is true for any scalars c1, c2, ..., cm, we have arrived at the following corollary.

Corollary (Linear span is closed under linear combinations)


Let S = {u1 , u2 , ..., uk } be a set of vectors in Rn . For any vectors v1 , v2 , ..., vm in span(S), the span of v1 , v2 , ..., vm
is a subset of span(S),
span{v1 , v2 , ..., vm } ⊆ span(S).
Example

Let S = { u1 = (1, 0, 0)ᵀ, u2 = (0, 1, 0)ᵀ }. It is easy to see that the vectors v1 = (1, −1, 0)ᵀ and v2 = (1, 1, 0)ᵀ are in span(S).

By the corollary, given any c1, c2, the linear combination c1 v1 + c2 v2 = (c1 + c2, c2 − c1, 0)ᵀ is in span(S).

Indeed,

    (c1 + c2, c2 − c1, 0)ᵀ = (c1 + c2) u1 + (c2 − c1) u2 = (c1 + c2)(1, 0, 0)ᵀ + (c2 − c1)(0, 1, 0)ᵀ.

In fact, observe that in this case, span{v1, v2} = span{u1, u2}.

Algorithm to check for Set Relations between Spans

Now suppose we are given 2 sets of vectors T = {v1, v2, ..., vm} and S = {u1, u2, ..., uk}.

▶ By the corollary, if vi ∈ span(S) for i = 1, ..., m, we can conclude that span(T) ⊆ span(S).

▶ Recall that to check if vi ∈ span(S), we check that the system ( u1 u2 · · · uk | vi ) is consistent for all i = 1, ..., m.

▶ There are in total m such linear systems to check. However, since they have the same coefficient matrix, we may combine and check them together, that is, check that

    ( u1 u2 · · · uk | v1 v2 · · · vm )

is consistent.

Algorithm to check for Set Relations between Spans

Theorem
Let S = {u1, u2, ..., uk} and T = {v1, v2, ..., vm} be sets of vectors in Rn. Then span(T) ⊆ span(S) if and only if ( u1 u2 · · · uk | v1 v2 · · · vm ) is consistent.

So, to check if span(S) = span(T), we check that

▶ span(S) ⊆ span(T), that is, ( "T" | "S" ) = ( v1 v2 · · · vm | u1 u2 · · · uk ) is consistent, and

▶ span(T) ⊆ span(S), that is, ( "S" | "T" ) = ( u1 u2 · · · uk | v1 v2 · · · vm ) is consistent.
Example

Let S = { u1 = (1, 0, 0)ᵀ, u2 = (0, 1, 0)ᵀ } and T = { v1 = (1, 1, 0)ᵀ, v2 = (1, −1, 0)ᵀ }.

The augmented matrix

    ( u1 u2 | v1 v2 ) =
    [ 1 0 | 1  1 ]
    [ 0 1 | 1 −1 ]
    [ 0 0 | 0  0 ]

is already in reduced row-echelon form, and since the system is consistent, we can conclude that span(T) ⊆ span(S).

On the other hand,

    ( v1 v2 | u1 u2 ) =
    [ 1  1 | 1 0 ]             [ 1 0 | 1/2  1/2 ]
    [ 1 −1 | 0 1 ]  --RREF-->  [ 0 1 | 1/2 −1/2 ]
    [ 0  0 | 0 0 ]             [ 0 0 | 0    0   ]

is consistent too. This shows that span(S) ⊆ span(T) too.

Therefore we conclude that span(S) = span(T).


Example

Let S = { (1, 1, 1)ᵀ, (1, 2, 1)ᵀ, (2, 3, 2)ᵀ } and T = { (1, 1, 1)ᵀ, (1, −1, 0)ᵀ, (1, 2, 1)ᵀ }.

To check if span(S) ⊆ span(T),

    [ 1  1 1 | 1 1 2 ]             [ 1 0 0 | 1 0 1 ]
    [ 1 −1 2 | 1 2 3 ]  --RREF-->  [ 0 1 0 | 0 0 0 ]  is consistent.
    [ 1  0 1 | 1 1 2 ]             [ 0 0 1 | 0 1 1 ]

To check if span(T) ⊆ span(S),

    [ 1 1 2 | 1  1 1 ]             [ 1 0 1 | 1 0 0 ]
    [ 1 2 3 | 1 −1 2 ]  --RREF-->  [ 0 1 1 | 0 0 1 ]  is not consistent.
    [ 1 1 2 | 1  0 1 ]             [ 0 0 0 | 0 1 0 ]

This shows that span(T) ⊄ span(S). In particular, (1, −1, 0)ᵀ ∉ span(S).
Question
Let S and T be the sets given in the previous example.

1. Observe the left hand side of the augmented matrix in the reduction

    [ 1  1 1 | 1 1 2 ]             [ 1 0 0 | 1 0 1 ]
    [ 1 −1 2 | 1 2 3 ]  --RREF-->  [ 0 1 0 | 0 0 0 ].
    [ 1  0 1 | 1 1 2 ]             [ 0 0 1 | 0 1 1 ]

What can you conclude about span(T)?

2. Observe the left hand side of the augmented matrix in the reduction

    [ 1 1 2 | 1  1 1 ]             [ 1 0 1 | 1 0 0 ]
    [ 1 2 3 | 1 −1 2 ]  --RREF-->  [ 0 1 1 | 0 0 1 ].
    [ 1 1 2 | 1  0 1 ]             [ 0 0 0 | 0 1 0 ]

What can you conclude about span(S)?


3.4 Subspaces
Solution Sets to a Linear System

Recall that the set of solutions to a linear system Ax = b is a subset of Rn (it is the empty set if the system is inconsistent). We may express this set implicitly as

    V = { u ∈ Rn | Au = b },

or explicitly as

    V = { u + s1 v1 + s2 v2 + · · · + sk vk | s1, s2, ..., sk ∈ R },

where u + s1 v1 + s2 v2 + · · · + sk vk, s1, s2, ..., sk ∈ R, is the general solution.
Example

Consider the linear system

    x + y     = 0
            z = 1

It can be written implicitly as

    V = { (x, y, z)ᵀ | x = −y, z = 1 },

or explicitly as

    V = { (0, 0, 1)ᵀ + s (−1, 1, 0)ᵀ | s ∈ R }.
Example

Consider the linear system

    3x + 2y − z = 1
         y − z = 0

Implicitly, it can be written as

    { (x, y, z)ᵀ | 3x + 2y − z = 1, y − z = 0 }.

The general solution is

    x = (1 − s)/3, y = s, z = s, s ∈ R.

So, explicitly, the solution set is

    { (1/3, 0, 0)ᵀ + s (−1/3, 1, 1)ᵀ | s ∈ R }.
Solution Sets to Linear Systems

Write the implicit expression of the following solution set:

    { (1, 2, −1)ᵀ + s (−2, 1, 0)ᵀ + t (1, 0, 1)ᵀ | s, t ∈ R }.

The general solution reads x = 1 − 2s + t, y = 2 + s, z = −1 + t. Then s = y − 2 and t = z + 1, so

    x = 1 − 2(y − 2) + (z + 1)  ⇒  x + 2y − z = 6.

So, implicitly, the set has the expression

    { (x, y, z)ᵀ | x + 2y − z = 6 }.
Discussion

Recall that the general solution of a homogeneous system Ax = 0 has the form

    s1 v1 + s2 v2 + · · · + sk vk, s1, s2, ..., sk ∈ R.

Explicitly, the solution set is

    V = { s1 v1 + s2 v2 + · · · + sk vk | s1, s2, ..., sk ∈ R }.

Observe however that this is just span{v1, v2, ..., vk},

    V = { s1 v1 + s2 v2 + · · · + sk vk | s1, s2, ..., sk ∈ R } = span{v1, v2, ..., vk}.

By the properties of a linear span, this would mean that the solution set of a homogeneous system is a vector space that is a subset of the Euclidean vector space. We call a vector space nested inside another vector space a subspace.
Example

Let V be the solution set to the system x − y + z = 0. Explicitly,

    V = { s (1, 1, 0)ᵀ + t (−1, 0, 1)ᵀ | s, t ∈ R } = span{ (1, 1, 0)ᵀ, (−1, 0, 1)ᵀ }.
Subspace

It turns out that for a subset V of the Euclidean space Rn to satisfy all 10 axioms of being a vector space, it suffices for it to satisfy only 3 of them.

Definition
A subset V of Rn is a subspace if it satisfies the following properties.
(i) V contains the zero vector, 0 ∈ V.

(ii) V is closed under scalar multiplication: for any vector v in V and scalar α, the vector αv is in V.

(iii) V is closed under addition: for any vectors u, v in V, the sum u + v is in V.

Remark
(i) Property (i) can be replaced with property (i'): V is nonempty.
(ii) Properties (ii) and (iii) together are equivalent to property (ii'):
V is closed under linear combinations. For any u, v in V and scalars α, β, the linear combination αu + βv is in V.
Solution Space of Homogeneous System
Theorem
The solution set V = { u | Au = b } of a linear system Ax = b is a subspace if and only if b = 0, that is, the system is homogeneous.

Proof.
(⇒) Suppose V = { u | Au = b } is a subspace. By property (i), it must contain the origin, which means that 0 must be a solution to Ax = b. Hence,

    0 = A0 = b  ⇒  b = 0.

(⇐) Suppose b = 0, that is, V = { u | Au = 0 } is the solution set of a homogeneous system.

▶ Clearly 0 ∈ V.
▶ For any v ∈ V, that is, Av = 0, and any α ∈ R, A(αv) = αAv = α0 = 0 ⇒ αv ∈ V.
▶ Suppose u, v ∈ V, that is, Au = 0 and Av = 0. Then A(u + v) = Au + Av = 0 + 0 = 0 ⇒ u + v ∈ V.

Definition
The solution set of a homogeneous system is called a solution space.
Examples

Let V = { (x, y, z)ᵀ | x + y + z = 0 }. Since it is the solution set of a homogeneous system, it is a subspace. We will also show that it satisfies the 3 criteria.

(i) Clearly (0, 0, 0)ᵀ is in V.

(ii) Suppose (x, y, z)ᵀ ∈ V, that is, x + y + z = 0. Then for any α ∈ R, αx + αy + αz = α(x + y + z) = α(0) = 0, so (αx, αy, αz)ᵀ ∈ V.

(iii) Suppose (x1, y1, z1)ᵀ and (x2, y2, z2)ᵀ are in V. Then

    (x1 + x2) + (y1 + y2) + (z1 + z2) = (x1 + y1 + z1) + (x2 + y2 + z2) = (0) + (0) = 0.

Example

Is the set V = { (x, y, 1)ᵀ | x, y ∈ R } a subspace?

It is not a subspace since it does not contain (0, 0, 0)ᵀ.
Equivalent Definition for Subspaces

Theorem
A subset V ⊆ Rn is a subspace if and only if it is a linear span, V = span(S), for some finite set S = {u1, u2, ..., uk}.

Proof.
(⇐) This follows from the properties of linear spans.
(⇒) We only present a sketch; the details are left as exercise.
Since V is a subspace, it is nonempty. Take a u1 ∈ V. If span{u1} = V, let S = {u1}. Otherwise, there is a u2 ∈ V \ span{u1}. If span{u1, u2} = V, let S = {u1, u2}. Otherwise, continue this process to define ui ∈ V \ span{u1, u2, ..., ui−1}. Eventually, the process must stop, that is, there is a k ∈ Z such that span{u1, u2, ..., uk} = V (why?).
Remarks

1. To show that a set V is a subspace, we can either


(a) find a spanning set, that is find a set S such that V = span(S), or

(b) show that V satisfies the 3 conditions of being a subspace.

2. To show that a subset V is not a subspace, we can


(i) show that it does not contain the zero vector, 0 ̸∈ V ,

(ii) find a vector v ∈ V and a scalar α ∈ R such that αv ̸∈ V , or

(iii) find vectors u, v ∈ V such that the sum is not in V , u + v ̸∈ V .


Example

1. V = { (x, y, 0)ᵀ | x, y ∈ R } = { x (1, 0, 0)ᵀ + y (0, 1, 0)ᵀ | x, y ∈ R } = span{ (1, 0, 0)ᵀ, (0, 1, 0)ᵀ } is a subspace.

2. V = { (x + y, x − y, 0)ᵀ | x, y ∈ R } = { x (1, 1, 0)ᵀ + y (1, −1, 0)ᵀ | x, y ∈ R } = span{ (1, 1, 0)ᵀ, (1, −1, 0)ᵀ } is a subspace.
Example

3. V = { (a, b, c, d)ᵀ | ab = cd } is not a subspace because (1, 0, 1, 0)ᵀ and (1, 0, 0, 1)ᵀ belong to V, but

    (1, 0, 1, 0)ᵀ + (1, 0, 0, 1)ᵀ = (2, 0, 1, 1)ᵀ

does not.

4. V = { (s, s², t)ᵀ | s, t ∈ R } is not a subspace since (1, 1, 0)ᵀ belongs to V, but 2 (1, 1, 0)ᵀ = (2, 2, 0)ᵀ does not.
Question

1. Show that the set containing the zero vector {0} is a subspace.

2. Construct a set V such that it satisfies condition (i) and (ii) but not (iii); that is, V contains the origin and is
closed under scalar multiplication, but not closed under addition.
Question

Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a subset of V , S ⊆ V . Show that the span of S is contained in
V , span(S) ⊆ V .
Subspaces of R2

(i) Zero space: { (0, 0)ᵀ }. This is a point.

(ii) Lines: L = span{ (x1, y1)ᵀ } for some fixed (x1, y1)ᵀ ≠ (0, 0)ᵀ. These are lines, which look like R1.

(iii) The whole R2.
Subspaces of R3

(i) Zero space: { (0, 0, 0)ᵀ }. This is a point.

(ii) Lines: L = span{ (x1, y1, z1)ᵀ } for some fixed (x1, y1, z1)ᵀ ≠ (0, 0, 0)ᵀ. These are lines, which look like R1.

(iii) Planes: P = span{ (x1, y1, z1)ᵀ, (x2, y2, z2)ᵀ } for some (x1, y1, z1)ᵀ, (x2, y2, z2)ᵀ that are not scalar multiples of each other. These are planes, which look like R2.

(iv) The whole R3.
Solution Set to Non-homogeneous System
Recall that

    u + s1 v1 + s2 v2 + · · · + sk vk, s1, s2, ..., sk ∈ R,

is a general solution to a consistent non-homogeneous system Ax = b, b ≠ 0, if and only if

    s1 v1 + s2 v2 + · · · + sk vk, s1, s2, ..., sk ∈ R,

is a general solution to the homogeneous system Ax = 0, where u is a particular solution to the non-homogeneous system Ax = b.

Theorem (Affine Space)

The solution set W = { w | Aw = b } of a non-homogeneous linear system Ax = b, b ≠ 0, is given by

    u + V := { u + v | v ∈ V },

where V = { v | Av = 0 } is the solution space of the associated homogeneous system and u is a particular solution, Au = b.

That is, vectors in u + V are of the form u + v for some v in V.

Example

Let

    A = [ −1  1  2 1 ]
        [  3  3  6 9 ].
        [  3 −1 −2 1 ]

Then

    [ −1  1  2 1 | 0 ]             [ 1 0 0 1 | 0 ]
    [  3  3  6 9 | 0 ]  --RREF-->  [ 0 1 2 2 | 0 ]
    [  3 −1 −2 1 | 0 ]             [ 0 0 0 0 | 0 ]

tells us that the solution set of the homogeneous system is

    V = { s (0, −2, 1, 0)ᵀ + t (−1, −2, 0, 1)ᵀ | s, t ∈ R } = span{ (0, −2, 1, 0)ᵀ, (−1, −2, 0, 1)ᵀ }.

The solution set V is a subspace.

Example

Let

    A = [ −1  1  2 1 ]
        [  3  3  6 9 ]    and    b = (3, 3, −5)ᵀ.
        [  3 −1 −2 1 ]

Then

    [ −1  1  2 1 |  3 ]             [ 1 0 0 1 | −1 ]
    [  3  3  6 9 |  3 ]  --RREF-->  [ 0 1 2 2 |  2 ]
    [  3 −1 −2 1 | −5 ]             [ 0 0 0 0 |  0 ]

tells us that the solution set of the non-homogeneous system is

    W = { (−1, 2, 0, 0)ᵀ + s (0, −2, 1, 0)ᵀ + t (−1, −2, 0, 1)ᵀ | s, t ∈ R }
      = (−1, 2, 0, 0)ᵀ + span{ (0, −2, 1, 0)ᵀ, (−1, −2, 0, 1)ᵀ }.

The solution set W is not a subspace as it does not contain the origin. It is shifted away from the origin via the vector u = (−1, 2, 0, 0)ᵀ. Observe that W and V are parallel planes.
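The particular-plus-homogeneous decomposition of W can be verified directly (pure Python; the helper `matvec` is our own name):

```python
def matvec(A, x):
    """Matrix-vector product A x, returned as a list."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[-1, 1, 2, 1],
     [3, 3, 6, 9],
     [3, -1, -2, 1]]
b = [3, 3, -5]

u = [-1, 2, 0, 0]      # particular solution: A u = b
v1 = [0, -2, 1, 0]     # homogeneous solutions: A v = 0
v2 = [-1, -2, 0, 1]

print(matvec(A, u))    # [3, 3, -5], i.e. b
print(matvec(A, v1))   # [0, 0, 0]
# every u + s*v1 + t*v2 solves A x = b; e.g. s = t = 1:
w = [a + c + d for a, c, d in zip(u, v1, v2)]
print(matvec(A, w))    # [3, 3, -5]
```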
Solution Set to Linear System
Question

Is R2 ⊆ R3 ?
3.5 Linear Independence
Motivation

Consider

    u1 = (1, 0, 0)ᵀ, u2 = (0, 1, 0)ᵀ, u3 = (1, 1, 0)ᵀ, u4 = (1, −1, 0)ᵀ.

https://www.geogebra.org/m/w2avu5ft

Observe that V = span{u1, u2, u3, u4} = span{u1, u2, u3} = span{u1, u2} ≠ span{u1}. This shows that the set {u1, u2} is "optimal"; that is, it is a minimal set that spans V. This is because we may use u1 + u2 in place of u3, and u1 − u2 in place of u4. Hence, we might say that u3 and u4 are "redundant" since they are linear combinations of u1 and u2.
Example

Consider the set { (1, 1, 1)ᵀ, (1, 2, −1)ᵀ, (2, 3, 0)ᵀ }.

▶ Observe that (1, 1, 1)ᵀ + (1, 2, −1)ᵀ = (2, 3, 0)ᵀ tells us that (2, 3, 0)ᵀ is "redundant".

▶ But manipulating the equation, we have (2, 3, 0)ᵀ − (1, 2, −1)ᵀ = (1, 1, 1)ᵀ and (2, 3, 0)ᵀ − (1, 1, 1)ᵀ = (1, 2, −1)ᵀ, which tells us that (1, 1, 1)ᵀ and (1, 2, −1)ᵀ are equally "redundant".

▶ So instead, we might put all the vectors on the left side of the equation and write it as

    (1, 1, 1)ᵀ + (1, 2, −1)ᵀ − (2, 3, 0)ᵀ = (0, 0, 0)ᵀ.
Discussion

Now given a set {u1 , u2 , ..., uk }.


▶ A vector ui is a redundant vector in the span if it is linearly dependent on the others,

ui = c1 u1 + · · · + ci−1 ui−1 + ci+1 ui+1 + · · · + ck uk .

▶ To check for redundancy, we have to check if the system

c1 u1 + · · · + ci−1 ui−1 + ci+1 ui+1 + · · · + ck uk = ui

is consistent for each i = 1, ..., k. This is very tedious.


▶ However, if ui is linearly dependent on the other vectors, then we have

c1 u1 + · · · + ci−1 ui−1 − ui + ci+1 ui+1 + · · · + ck uk = 0.

▶ This exhibits a nontrivial solution of a single homogeneous equation, and it checks all i = 1, ..., k simultaneously!
Discussion

▶ For suppose we are able to find some c1, c2, ..., ck, not all zero, such that

    c1 u1 + c2 u2 + · · · + ck uk = 0.

▶ Without loss of generality, say ck ≠ 0. Manipulating the equation, we have

    (−c1/ck) u1 + (−c2/ck) u2 + · · · + (−ck−1/ck) uk−1 = uk.

Then we conclude that uk is linearly dependent on {u1, u2, ..., uk−1}.
▶ If none of the vectors is linearly dependent on the others, we say that the vectors are linearly independent; this is the case exactly when we cannot find c1, c2, ..., ck, not all zero, such that

    c1 u1 + c2 u2 + · · · + ck uk = 0.
Linearly Independent

Definition
A set {u1 , u2 , ..., uk } is linearly independent if the only coefficients c1 , c2 , ..., ck satisfying the equation

c1 u1 + c2 u2 + · · · + ck uk = 0,

are c1 = c2 = · · · = ck = 0. Otherwise, we say that the set is linearly dependent.


Example

Let ei be the i-th column of the n × n identity matrix In. Then

    c1 e1 + c2 e2 + · · · + cn en = (c1, c2, ..., cn)ᵀ = 0 if and only if c1 = 0, c2 = 0, ..., cn = 0.

Hence the standard basis is linearly independent.


Example

Consider the set { v1 = (1, 1, 1)ᵀ, v2 = (1, 2, −1)ᵀ, v3 = (2, 3, 0)ᵀ }. Suppose c1 v1 + c2 v2 + c3 v3 = 0.

Converting this into a matrix equation, we are solving

    [ 1  1 2 ] [ c1 ]   [ 0 ]
    [ 1  2 3 ] [ c2 ] = [ 0 ].
    [ 1 −1 0 ] [ c3 ]   [ 0 ]

    [ 1  1 2 | 0 ]             [ 1 0 1 | 0 ]
    [ 1  2 3 | 0 ]  --RREF-->  [ 0 1 1 | 0 ]
    [ 1 −1 0 | 0 ]             [ 0 0 0 | 0 ]

The system has nontrivial solutions. Hence, the set is linearly dependent.
Example
      

 1 1 1 
 0 1 1
Is the set S = v1 =   , v2 =   , v3 =   linearly independent?
     

 0 0 1 
0 0 0
 

Suppose
 c1
v1 + c2 v2 +c3 v3 = 0. Writing it as a matrix equation, we are asking if the homogeneous system
1 1 1   0
0 1 c1
1 0
 c  =   has nontrivial solutions.
1 2

0 0 0
c3
0 0 0 0
   
1 1 1 0 1 0 0 0
 0 1 1 0  R1 −R2 R2 −R3  0 1 0 0 
 −−−−→−−−−→  
 0 0 1 0   0 0 1 0 
0 0 0 0 0 0 0 0

tells us that the homogeneous system has only the trivial solution, and hence, S is linearly independent.
Algorithm to Check for Linear Independence

Let {u1 , u2 , ..., uk } be a set of vectors in Rn .

▶ {u1 , u2 , ..., uk } is linearly independent if and only if the homogeneous system ( u1 u2 · · · uk ) x = 0 has only
the trivial solution.
▶ The homogeneous system has only the trivial solution if and only if the reduced row-echelon form of
( u1 u2 · · · uk ) has no non-pivot column.

Theorem
A subset S = {u1 , u2 , ..., uk } of Rn is linearly independent if and only if the reduced row-echelon form of
A = ( u1 u2 · · · uk ) has no non-pivot columns.
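As a concrete sketch of this check, here is a small Python helper using sympy (assumed available); it places the vectors as columns and counts the pivot columns of the RREF. The function name is ours, chosen for illustration.

```python
from sympy import Matrix

def is_linearly_independent(vectors):
    """Return True if the given list of column vectors is linearly independent.

    The vectors become the columns of a matrix A; the set is independent
    exactly when every column of the RREF of A is a pivot column.
    """
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    _, pivot_cols = A.rref()          # rref() returns (rref matrix, pivot column indices)
    return len(pivot_cols) == A.cols  # no non-pivot column <=> independent

# The dependent example set {(1,1,1), (1,2,-1), (2,3,0)} from the slides:
print(is_linearly_independent([[1, 1, 1], [1, 2, -1], [2, 3, 0]]))  # False
# The standard basis of R^3:
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # True
```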
Examples
       
 1 0 1 1 
1. S = 0 , 1 , 1 , −1 .
0 0 0 0
 

 
1 0 1 1
0 1 1 −1 is already in RREF. Since it has a nonpivot column, S is linearly dependent.
0 0 0 0
     
 1 1 2 
2. S = 1 ,  2  , 3
1 −1 0
 

   
1 1 2 1 0 1
RREF
1 2 3 −−−→ 0 1 1. Since the RREF has a nonpivot column, S is linearly dependent.
1 −1 0 0 0 0
Question

Suppose {u1 , u2 , u3 } is linearly independent. Let

v1 = u1 ,
v2 = u1 + u2 ,
v3 = u1 + u2 + u3 .

Show that {v1 , v2 , v3 } is linearly independent too.


Question

Let S = {u1 , u2 , ..., uk } be a set of vectors in Rn . Show that if k > n, then S is linearly dependent.
Special Cases

1. {0}, where 0 ∈ Rn is the zero vector, is always linearly dependent.

Take say, c1 = 1, then we have (1)0 = 0. Alternatively, the matrix (0) is in RREF and the only column is a
non-pivot column.

2. If v ̸= 0, then {v} ⊆ Rn is linearly independent.

The only solution to cv = 0 is c = 0. Alternatively, (v) reduces to the matrix with 1 in the first entry and zero
otherwise, and the only column is a pivot column.

3. {v1 , v2 } is linearly dependent if and only if one is a scalar multiple of the other, αv1 = v2 or v1 = βv2 .

{v1 , v2 } is linearly dependent if and only if c1 ̸= 0 or c2 ̸= 0. Say c1 ̸= 0. Then v1 = −(c2 /c1 )v2 . The argument
for c2 ̸= 0 is analogous.

4. The empty set {} = ∅ is linearly independent.

Vacuously true since there are no vectors to check.


Linear Dependency and Adding or Removing Vectors

Theorem
Suppose S = {u1 , u2 , ..., uk } is a linearly dependent set of vectors in Rn . Then for any vector u in Rn ,

{u1 , u2 , ..., uk , u}

is linearly dependent.

Since the set {u1 , u2 , ..., uk } is linearly dependent, we can find some ci ̸= 0 such that

c1 u1 + · · · + ci ui + · · · + ck uk = 0.

Hence, by adding 0u, that is, letting c = 0, we have

c1 u1 + · · · + ci ui + · · · + ck uk + cu = 0,

where not all c, c1 , ..., ci , ..., ck are zero.

Hence, any set {v1 , ..., vk , 0} containing the zero vector is linearly dependent.
Linear Dependency and Adding or Removing Vectors

Theorem
Suppose {u1 , u2 , ..., uk } is a linearly independent set of vectors in Rn and u is not a linear combination of
u1 , u2 , ..., uk . Then the set {u1 , u2 , ..., uk , u} is linearly independent.

i.e. {u1 , u2 , ..., uk } linearly independent and u ̸∈ span{u1 , u2 , ..., uk } ⇒ {u1 , u2 , ..., uk , u} linearly independent.

Here is a heuristic explanation. Readers may refer to the appendix for the proof.

Since {u1 , u2 , ..., uk } is linearly independent, the RREF of ( u1 u2 · · · uk ) has no non-pivot column. Now since
u ̸∈ span{u1 , u2 , ..., uk }, the last column of the RREF of ( u1 · · · uk u ) is a pivot column. But observe that
the left part of the RREF of ( u1 · · · uk u ) is the RREF of ( u1 u2 · · · uk ). Hence, every column in the RREF
of ( u1 u2 · · · uk u ) is a pivot column. This shows that {u1 , u2 , ..., uk , u} is linearly independent.
Linear Dependency and Adding or Removing Vectors

Theorem
Suppose {u1 , u2 , ..., uk } is a linearly independent set of vectors in Rn . Then any subset of {u1 , u2 , ..., uk } is linearly
independent.

If {u1 , u2 , ..., uk } has no redundancy, then it is clear that any subset cannot have redundancy. Readers may refer to
the appendix for the proof.
3.6 Basis and Coordinates
Motivation

        
 1 0 0  x
Consider the set E = e1 = 0 , e2 = 1 , e3 = 0 . It is clear that any vector y  in R3 can be unique
0 0 1 z
 
written as a linear combination of the vectors in E ,
       
x 1 0 0
y  = x 0 + y 1 + z 0 .
z 0 0 1
 
x
In fact, we call x, y , z the coordinates of the vector y . However, the set E is not the only set that enjoys this
z
property.
Motivation

        
 1 1 0  x
Consider the set B = u1 = 1 , u2 = 0 , u3 = 1 . Now let y  be a vector in R3 . Then
0 1 1 z
 

   
1 1 0 x 1 0 0 (x + y − z)/2
RREF
 1 0 1 y  −−−→  0 1 0 (x − y + z)/2 
0 1 1 z 0 0 1 (y − x + z)/2

tells us that the linear combination


       
x 1 1 0
y  = x + y − z 1 + x − y + z 0 + −x + y + z 1
2 2 2
z 0 1 1

is unique.
Motivation

       
 1 0 1  0
On the other hand, consider the set S = 1 , 1 , 0 . The vector 0 is not a linear combination of the
1 1 0 1
 
vectors in S,    
1 0 1 0 1 0 1 0
RREF
 1 1 0 0 − −−→  0 1 −1 0  .
1 1 0 1 0 0 0 1
This shows that span(S) ̸= R3 .
Motivation
       
 1 0 1 0 
Consider another set S = 1 , 1 , 0 , 0 . Check that the span of S is indeed the whole R3 ,
1 1 0 1
 
 
1
span(S) = R3 . However, the linear combination is not unique. For example, consider the vector 2,
1
   
1 0 1 0 1 1 0 1 0 1
RREF
 1 1 0 0 2  −−−→  0 1 −1 0 1 
1 1 0 1 1 0 0 0 1 −1

tells us that          
1 1 0 1 0
2 = (1 − s) 1 + (1 + s) 1 + s 0 − (1 + s) 0
1 1 1 0 1
for any s ∈ R. Observe that this is because the set S is not linearly independent.
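A quick numeric sanity check of this non-uniqueness, using numpy (assumed available): the coefficient vectors (1 − s, 1 + s, s, −1) for two different values of s reproduce the same vector (1, 2, 1).

```python
import numpy as np

# Columns are the vectors of the dependent spanning set S above.
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 1, 0, 1]], dtype=float)

def coeffs(s):
    # general solution for the coefficients, read off from the RREF
    return np.array([1 - s, 1 + s, s, -1], dtype=float)

v0 = A @ coeffs(0)   # coefficients (1, 1, 0, -1)
v1 = A @ coeffs(1)   # coefficients (0, 2, 1, -1)
print(v0, v1)        # both are [1. 2. 1.]
```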
Motivation
   
Consider now the solution space V = { (x, y , z) | x + y − 2z = 0 }. Since it is a subspace of R3 , it is a vector
space itself. Explicitly, we have

V = { s(−1, 1, 0) + t(2, 0, 1) | s, t ∈ R } = span{ (−1, 1, 0), (2, 0, 1) }.

Open GeoGebra: https://geogebra.org/3d.

1. Type in x + y − 2z = 0, enter.

2. Type in u1 = (−1, 1, 0) and hit enter, and u2 = (2, 0, 1) and hit enter.

3. It is easy to see that every vector in V can be written uniquely as a linear combination of u1 and u2 .
Motivation
             
 x   1 0   1 0 
Let V = y  y − z = 0 = s 0 + t 1 s, t, ∈ R = span 0 , 1 . Check that the set
  z    0 1 0 1
     
 
 1 0 1  1
S = 1 , 1 , 0 spans V . However, the vector 2 in V can be written as as linear combination of
1 1 0 2
 
vectors in S in more than one way,
       
1 1 0 1
2 = 1 + 1 + 0 0
2 1 1 0
     
1 0 1
= 0 1 + 2 1 + 0
1 1 0

Observe that the set S is linearly dependent.


Basis
Definition
Let V be a subspace of Rn . A set S = {u1 , ..., uk } ⊆ V is a basis for V if
(i) S spans V , span(S) = V , and

(ii) S is linearly independent.

Theorem
Suppose S is a basis for V . Then every vector v ∈ V can be written as a linear combination of vectors in S uniquely.

Proof.
(i) span(S) = V tells us that every vector v ∈ V can be written as a combination of vectors in S.

(ii) S is linearly independent tells us that if v is a linear combination of vectors in S, the coefficients are unique:

v = c1 u1 + · · · + ck uk = d1 u1 + · · · + dk uk
⇔ (c1 − d1 )u1 + · · · + (ck − dk )uk = 0
⇔ c1 = d1 , ..., ck = dk .
Example

   
Let V = { (x, y , z) | x + y − z = 0 }.

▶ The general solution to the linear system is s(−1, 1, 0) + t(1, 0, 1), s, t ∈ R.

▶ This shows that every vector v in the solution space V is a linear combination of the vectors in
S = { (−1, 1, 0), (1, 0, 1) }. Hence, S spans V .
 

▶ Since S contains only 2 vectors which are not a multiple of each other, S is linearly independent too.

▶ Therefore, S is a basis for V .


Basis for Solution Set of Homogeneous System

Let V = { u | Au = 0 } be the solution space to the homogeneous system Ax = 0. Suppose

s1 u1 + s2 u2 + · · · + sk uk , s1 , s2 , ..., sk ∈ R

is the general solution. Then S = {u1 , u2 , ..., uk } is a basis for the subspace V = { u | Au = 0 }.
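A sketch of this fact with sympy (assumed available): `nullspace()` returns exactly one basis vector per free variable of Ax = 0, matching the general solution read off from the RREF.

```python
from sympy import Matrix

# The plane x + y - z = 0 from the running example.
A = Matrix([[1, 1, -1]])

# List of column vectors spanning { u : Au = 0 }, one per free variable.
basis = A.nullspace()
for b in basis:
    print(b.T)   # the basis vectors (-1, 1, 0) and (1, 0, 1)
```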
Example

Let V be the solution set to

x1 + x2 + 2x4 + x5 = 0
2x1 − x2 + 3x3 + 3x5 = 0
x1 − 2x2 + 3x3 − 2x4 + 2x5 = 0
2x1 − x2 + 3x3 + 3x5 = 0

Solving the system,

[ 1  1  0  2  1  0 ]  RREF  [ 1  0  1  2/3  4/3  0 ]
[ 2 −1  3  0  3  0 ]  −−−→  [ 0  1 −1  4/3 −1/3  0 ]
[ 1 −2  3 −2  2  0 ]        [ 0  0  0   0    0   0 ]
[ 2 −1  3  0  3  0 ]        [ 0  0  0   0    0   0 ]

we conclude that S = { (−1, 1, 1, 0, 0), (−2/3, −4/3, 0, 1, 0), (−4/3, 1/3, 0, 0, 1) } spans V . Using the last 3
coordinates, we can also conclude that S is linearly independent (details left to readers). Hence, S is a basis for V .
Example
       
 x   −1 1 
Let V = y  x + y − z = 0 . It was shown that T =  1  , 0 is a basis for V . Show that
  z  0 1
   
 −1 1 
S =  2  , 1 is a basis for V .
1 2
 

1. First we show that span(S) = V = span(T ).


   
−1 1 −1 1 1 0 2 1
RREF
(i)  1 0 2 1  −−−→  0 1 1 2  shows that span(S) ⊆ span(T ).
 0 1 1 2   0 0 0 0 
−1 1 −1 1 1 0 2/3 −1/3
RREF
(ii)  2 1 1 0  −−−→  0 1 −1/3 2/3  shows that span(T ) ⊆ span(S).
1 2 0 1 0 0 0 0
Therefore, span(S) = span(T ) = V .
2. Next, since S contains 2 vectors that are not a multiple of each other, S is linearly independent.
Hence, S is a basis for V too. This also shows that basis for a subspace may not be unique.
Basis for the zero space {0}

Recall that the zero space {0} is a subspace. Find a basis for {0}.

The basis for the zero space {0} is the empty set {} or ∅.
▶ Firstly, span{0} = {0} but the set {0} is not linearly independent.

▶ However, if S is a set that contains any nonzero vector, then span(S) will be strictly bigger than the zero space,
{0} ⫋ span(S).
▶ The empty set is linearly independent vacuously.

▶ However, span{} does not make sense.

▶ The real definition of the span of S is the smallest subspace V such that S ⊆ V . That is, V = span(S) if
V ⊆ W for all subspaces W containing S.
▶ Since the zero space is the smallest subspace containing the empty set, span of the empty set is the zero space.
Question

Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a subset of vectors in V . Which of the following statements
is/are true?

1. If S is linearly independent, then S spans V .

2. If S is linearly dependent, then S does not span V .

3. If S spans V , then S is linearly independent.

4. If S does not span V , then S is linearly dependent.


Basis for Rn and Invertibility
A priori, there is no relationship between linear independence and spanning a subspace. However, in the special case
when the subset S of Rn contains exactly n vectors, then linear independence is equivalent to spanning Rn .
Theorem
An n × n square matrix A is invertible if and only if its columns are linearly independent.

Proof.
Write A = ( u1 u2 · · · un ) and let S = {u1 , u2 , ..., un } be the set containing the columns of A. Then A is
invertible if and only if its reduced row-echelon form is the identity matrix. But we have also seen that S is linearly
independent if and only if the reduced row-echelon form of A has no non-pivot columns, which for a square matrix,
must mean that the reduced row-echelon form is the identity matrix.

Theorem
An n × n square matrix A is invertible if and only if its columns span Rn .

Proof.
Let S = {u1 , u2 , ..., un } be the set containing the columns of A. Then S spans Rn if and only if the reduced
row-echelon form of A does not have any zero row, which for a square matrix, would mean that the reduced
row-echelon form is the identity matrix. This is equivalent to A being invertible.
Basis for Rn and Invertibility

Corollary
Let S = {u1 , u2 , ..., un } be a subset of Rn containing n vectors. Then S is linearly independent if and only if S spans
Rn .

Proof.
Let A = ( u1 u2 · · · un ) be the matrix whose columns are the vectors in S. Then A is a square matrix, and by
the two theorems, S is linearly independent if and only if A is invertible, if and only if S spans Rn .

Corollary

Let S = {u1 , u2 , ..., uk } be a subset of Rn and A = ( u1 u2 · · · uk ) be the matrix whose columns are vectors in
S. Then S is a basis for Rn if and only if k = n and A is an invertible matrix.

Proof.
(⇒) If k < n, then S cannot span Rn . If k > n, then S cannot be linearly independent. Hence, if S is a basis, S
must have exactly n vectors, and by the previous theorem, A must be invertible.
(⇐) Conversely, if k = n and A is invertible, then S is a basis by the previous theorem.
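A hedged sketch of this corollary with sympy (assumed available): n vectors form a basis for Rn exactly when the square matrix having them as columns is invertible, i.e. has nonzero determinant. The helper name is ours, for illustration only.

```python
from sympy import Matrix

def is_basis_for_Rn(vectors, n):
    """Return True if the given vectors form a basis for R^n."""
    if len(vectors) != n:       # a basis for R^n must have exactly n vectors
        return False
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    return A.det() != 0         # invertible <=> nonzero determinant

# The set B = {(1,1,0), (1,0,1), (0,1,1)} from the motivation is a basis:
print(is_basis_for_Rn([[1, 1, 0], [1, 0, 1], [0, 1, 1]], 3))   # True
# The dependent set {(1,1,1), (1,2,-1), (2,3,0)} is not:
print(is_basis_for_Rn([[1, 1, 1], [1, 2, -1], [2, 3, 0]], 3))  # False
```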
Basis for Rn and Invertibility

Theorem
An n × n square matrix A is invertible if and only if the rows of A form a basis for Rn .

Theorem
A square matrix A of order n is invertible if and only if the rows of A are linearly independent.

The proofs of the 2 theorems follow from the fact that A is invertible if and only if AT is, and the rows of A are the
columns of AT .
Equivalent Statements for Invertibility
Theorem
Let A be a square matrix of order n. The following statements are equivalent.
(i) A is invertible.
(ii) AT is invertible.
(iii) (left inverse) There is a matrix B such that BA = I.
(iv) (right inverse) There is a matrix B such that AB = I.
(v) The reduced row-echelon form of A is the identity matrix.
(vi) A can be expressed as a product of elementary matrices.
(vii) The homogeneous system Ax = 0 has only the trivial solution.
(viii) For any b, the system Ax = b has a unique solution.
(ix) The determinant of A is nonzero, det(A) ̸= 0.
(x) The columns/rows of A are linearly independent.
(xi) The columns/rows of A span Rn .
Introduction to Coordinates Relative to a Basis
       
 x  1 0   
x

Let V = y  z = 0 = span 0 , 1 . Observe that any vector in R2 identifies with a unique
y
z 0 0
   
     
1 0 x
vector x 0 + y 1 = y  in V .
0 0 0
    
 1 1 
Let T = 1 , −1 , it is also a basis for V .
0 0
 
     
  1 1 x +y
▶ Now a vector x in R2 defines a vector x 1 + y −1 = x − y  in V .
y
0 0 0
     
x 1 1  
▶ Conversely, a vector y  = x+y 1 + x−y −1 in V defines a vector (x + y )/2 in R2 .
2 2 (x − y )/2
0 0 0
Introduction to Coordinates Relative to a Basis

Let V = { (x, y , z) | x + y − z = 0 } = span{ (−1, 1, 0), (1, 0, 1) }. Then we have the unique correspondence

(x, y ) ∈ R2 ←→ x(−1, 1, 0) + y (1, 0, 1) = (y − x, x, y ) ∈ V .

▶ These examples demonstrate that a subspace V of Rn can be identified with some Rk .


▶ That is, instead of giving a vector in V in terms of its coordinates in Rn , we may represent it with a vector in
Rk for some k ≤ n.
▶ This identification depends on the choice of basis of V .
▶ Explicitly, let S = {u1 , u2 , ..., uk } be a basis for V , a subspace of Rn . Then we have a unique correspondence
 
(c1 , c2 , ..., ck ) ∈ Rk ←→ v = c1 u1 + c2 u2 + · · · + ck uk ∈ V .
Coordinates Relative to a Basis

Definition
Let S = {u1 , u2 , ..., uk } be a basis for V , a subspace of Rn and

v = c1 u1 + c2 u2 + · · · + ck uk

be the unique expression of a vector v in V in terms of the basis S. The vector in Rk defined by the coefficients of
the linear combination is called the coordinates of v relative to basis S, and is denoted as
 
[v]S = (c1 , c2 , ..., ck ).
Examples

1. Let E = {e1 , e2 , ..., en } be the standard basis for Rn . For any w = (wi ) ∈ Rn ,

w = w 1 e1 + w 2 e2 + · · · + w n en .

⇒ [w]E = w

Example
           
(x, y , z) = x(1, 0, 0) + y (0, 1, 0) + z(0, 0, 1) ⇒ [(x, y , z)]E = (x, y , z).
Examples

          
 −1 1   x  3
2. S =  1  , 0 is a basis for V = y  x + y − z = 0 . Let v = −1. To compute the
0 1 z 2
   
     
−1 1 3
coordinates of v relative to basis S, find c1 , c2 such that c1  1  + c2 0 = −1.
0 1 2
   
−1 1 3 1 0 −1
RREF
 1 0 −1  −−−→  0 1 2 
0 1 2 0 0 0

So,      
3 −1 1  
−1 = (−1)  1  + 2 0 −1
⇒ [v]S = .
2
2 0 1
Remarks

▶ Even though v ∈ V ⊆ Rn has n coordinates, its coordinate vector relative to basis S, [v]S , has k coordinates if
the basis S has k vectors.

▶ Note that the correspondence is unique only if S is a basis. If S is not linearly independent, a few vectors in Rk
can map to the same v ∈ V .

▶ The relative coordinates depend on the ordering of the basis. If S = { u1 = (1, 0, 0), u2 = (0, 1, 0) } and
T = { v1 = (0, 1, 0), v2 = (1, 0, 0) }, then for v = (1, 2, 0),

[v]S = (1, 2) ̸= (2, 1) = [v]T .
Algorithm for Computing Relative Coordinate
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } be a basis for V .

▶ Let v be a vector in V . To find [v]S , we must find the coefficients c1 , c2 , ..., ck such that

v = c1 u1 + c2 u2 + · · · + ck uk .

▶ Converting it to a matrix equation, we have

( u1 u2 · · · uk ) c = v,   where c = (c1 , c2 , ..., ck ),

▶ which is equivalent to solving the linear system with augmented matrix

( u1 u2 · · · uk | v ).
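This algorithm can be sketched with sympy (assumed available): solve the system ( u1 · · · uk | v ) for the coefficients. The data are from the earlier example, where S = {(−1, 1, 0), (1, 0, 1)} is a basis for the plane x + y − z = 0 and v = (3, −1, 2). The helper name is ours, for illustration.

```python
from sympy import Matrix, linsolve, symbols

def coords_relative_to(basis, v):
    """Return [v]_S as a column vector, given a basis (list of vectors) and v."""
    A = Matrix.hstack(*[Matrix(u) for u in basis])
    cs = symbols(f'c1:{A.cols + 1}')          # c1, ..., ck
    sol, = linsolve((A, Matrix(v)), *cs)      # unique since the basis is independent
    return Matrix(sol)

print(coords_relative_to([[-1, 1, 0], [1, 0, 1]], [3, -1, 2]).T)  # the coordinate vector (-1, 2)
```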
Example

        

V = { (x1 , x2 , x3 , x4 ) | x1 − 2x2 + x3 = 0, x2 + x3 − 2x4 = 0 }. Basis: S = { (−3, −1, 1, 0), (4, 2, 0, 1) }.

Find the coordinates of v = (1, 1, 1, 1) ∈ V relative to S.

[ −3  4  1 ]  RREF  [ 1  0  1 ]
[ −1  2  1 ]  −−−→  [ 0  1  1 ]   ⇒  [v]S = (1, 1).
[  1  0  1 ]        [ 0  0  0 ]
[  0  1  1 ]        [ 0  0  0 ]
Question

Suppose S = {u1 , u2 , u3 } is a basis for a subspace V ⊆ R5 . Let v ∈ V be such that

                     [ 1  0  0  1 ]
                     [ 0  1  0 −5 ]
( u1 u2 u3 v ) −→    [ 0  0  1  0 ]
                     [ 0  0  0  0 ]
                     [ 0  0  0  0 ]

Which of the following is [v]S ?

(i) (1, −5, 0, 0, 0)   (ii) (1, −5, 0, 0)   (iii) (1, −5, 0)   (iv) (1, −5)
Question

Suppose S = {u1 , u2 , ..., uk } is a set of vectors in Rn and V a subspace. Let v be a vector in V .


(i) Suppose there is a non-pivot column in the left side of the reduced row-echelon form of

( u1 u2 ··· uk v ).

What can you conclude?

(ii) Suppose
( u1 u2 ··· uk v )
is inconsistent. What can you conclude?
3.7 Dimensions
Introduction

▶ Intuitively, we say that R3 is 3-dimensional, and R2 is 2-dimensional.


        
 x   1 0 
▶ Let V = y  z = 0 = span 0 , 1 . By the discussion in coordinates relative to a basis, we
z 0 0
   
2
can identify V with R , and hence intuitively say that V is 2-dimensional.

▶ However, the identification of V with Rk depends on the choice of the basis of V .

▶ Recall that bases for any nonzero subspace V ̸= {0} are not unique.

▶ So suppose now S = {u1 , u2 , ..., uk } and T = {v1 , v2 , ..., vm } are bases for a subspace V . Using S, we identify
V with Rk and using T , we identify V with Rm . Then do we say that V is k-dimensional, or m-dimensional?

▶ Ideally, we want m = k, which is in fact true!


Dimension

Theorem
Let V be a subspace of Rn and B a basis for V . Suppose B contains k vectors, |B| = k.
(i) If S = {v1 , v2 , ..., vm } is a subset of V with m > k, then S is linearly dependent.
(ii) If S = {v1 , v2 , ..., vm } is a subset of V with m < k, then S cannot span V .

Corollary
Suppose S = {u1 , u2 , ..., uk } and T = {v1 , v2 , ..., vm } are bases for a subspace V ⊆ Rn . Then k = m.

Definition
Let V be a subspace of Rn . The dimension of V , denoted by dim(V ), is defined to be the number of vectors in any
basis of V .

In other words, V is k-dimensional if and only if V identifies with Rk using coordinates relative to any basis B of V .
Example

1. The dimension of the Euclidean n-space, Rn is n, since the standard basis E = {e1 , e2 , ..., en } has n vectors.
       
 x   1 0 
2. V = y  z =0 is 2-dimensional since the basis 0 , 1 has 2 vectors.
z 0 0
   

   

 x1 

  x2 
 

3. V =  ..  a1 x1 + a2 x2 + · · · + an xn = 0 is n − 1-dimensional if not all ai = 0. This is called a
 

 . 

 
xn
 
hyperplane in Rn .
Dimension of the Zero Space {0}

We will provide an intuitive reasoning of why the empty set is the basis for the zero space {0} in Rn .

▶ Intuitively, the dimension is the number of independent degrees of freedom of movement: in a 3-dimensional
space, we can travel forwards and backwards, sideways, and up and down; in a 2-dimensional space, we can
travel forwards and backwards, as well as sideways; in a 1-dimensional space, we can only walk forwards or
backwards.

▶ So, since we have no freedom of movement in the zero space, the zero space should be 0-dimensional.

▶ But this would tell us that by definition, the basis for the zero space must have no vectors, that is, it must be
the empty set.
Dimension of Solution Space

Recall that the vectors in the general solution of a homogeneous system form a basis for the solution space. This
means that the dimension of the solution space is equal to the number of parameters in the general solution. This is
in turn equal to the number of non-pivot columns in the reduced row-echelon form of the coefficient matrix.

Theorem
Let A be a m × n matrix. The number of non-pivot columns in the reduced row-echelon form of A is the dimension
of the solution space
V = { u ∈ Rn | Au = 0 }.

Let s1 u1 + s2 u2 + · · · + sk uk be the general solution to the homogeneous system Ax = 0. Then S = {u1 , u2 , ..., uk }
is a basis for V and so by definition, dim(V ) = k. But this means that the reduced row-echelon form of A has k
non-pivot columns.
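A numeric sketch of this theorem with sympy (assumed available), using the hyperplane 2x − 3y + z = 0 from the next example: the number of non-pivot columns equals the number of nullspace basis vectors.

```python
from sympy import Matrix

A = Matrix([[2, -3, 1]])                  # coefficient matrix of 2x - 3y + z = 0
n_nonpivot = A.cols - len(A.rref()[1])    # non-pivot columns = #columns - #pivot columns
print(n_nonpivot)                         # 2
print(len(A.nullspace()))                 # dimension of the solution space, also 2
```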
Example

   
Let V = { (x, y , z) | 2x − 3y + z = 0 }. This is a hyperplane in R3 , so dim(V ) = 2. We can see this also from the
fact that the coefficient matrix ( 2 −3 1 ) has 2 non-pivot columns.

Now consider the set S = { v1 = (1, −1, −5), v2 = (1, 1, 1), v3 = (1, 0, −2) }. Check that S is a subset of V .
Since S contains 3 vectors and dim(V ) = 2 < 3, S must be linearly dependent. Indeed,

[  1  1  1 ]  RREF  [ 1  0  1/2 ]
[ −1  1  0 ]  −−−→  [ 0  1  1/2 ]
[ −5  1 −2 ]        [ 0  0   0  ]
Example

   

Let V = { (x1 , x2 , x3 , x4 ) | x1 + x2 + x3 + x4 = 0 }. It is a hyperplane in R4 , hence dim(V ) = 3.

Consider the set S = { (1, −1, 0, 0), (1, 0, −1, 0) }. Check that S is a subset of V . Since S only contains 2
vectors, it cannot span the whole of V . For example, the vector (1, 1, 1, −3) is in V , but

[  1  1  1 ]
[ −1  0  1 ]
[  0 −1  1 ]   is inconsistent.
[  0  0 −3 ]
Spanning Set Theorem
Theorem
Let S = {u1 , u2 , ..., uk } be a subset of vectors in Rn , and let V = span(S). Suppose V is not the zero space,
V ̸= {0}. Then there must be a subset of S that is a basis for V .

Proof.
If S is linearly independent, then S is a basis for V . Otherwise, one of the vectors ui in S can be written as a linear
combination of the others. Without loss of generality (rearranging if necessary), say

uk = c1 u1 + c2 u2 + · · · + ck−1 uk−1

for some coefficients c1 , c2 , ..., ck−1 . We claim that {u1 , u2 , ..., uk−1 } still spans V . For if v is a vector in V , we have

v = a1 u1 + a2 u2 + · · · + ak uk
= a1 u1 + a2 u2 + · · · + ak (c1 u1 + c2 u2 + · · · + ck−1 uk−1 )
= (a1 + ak c1 )u1 + (a2 + ak c2 )u2 + · · · + (ak−1 + ak ck−1 )uk−1

which shows that v is a linear combination of vectors in {u1 , u2 , ..., uk−1 }. If {u1 , u2 , ..., uk−1 } is linearly
independent, then it is a basis for V . Otherwise, continue the process of throwing away some redundant vectors, we
can conclude that there must be a subset of S that is a basis for V .
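The proof above throws away redundant vectors one at a time; sympy's `columnspace()` (assumed available) realizes the same idea in one step, keeping exactly the pivot columns as a basis for the span.

```python
from sympy import Matrix

# Columns are the spanning set {(1,1,1), (1,2,-1), (2,3,0)}; the third is redundant.
A = Matrix([[1, 1, 2],
            [1, 2, 3],
            [1, -1, 0]])

basis = A.columnspace()            # basis for the span of the columns
print(len(basis))                  # 2
print([tuple(b) for b in basis])   # the two pivot columns (1,1,1) and (1,2,-1)
```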
Linear Independence Theorem

Theorem
Let V be a subspace of Rn and S = {u1 , u2 , ..., uk } a linearly independent subset of V , S ⊆ V . Then there must be
a set T containing S, S ⊆ T such that T is a basis for V .

Proof.
If span(S) = V , then S is a basis for V . Otherwise, since span(S) ⫋ V , there must be a vector in V that is not
contained in span(S), uk+1 ∈ V \ span(S). Note that since uk+1 ̸∈ span{u1 , u2 , ..., uk }, the set
S1 = {u1 , u2 , ..., uk , uk+1 } is linearly independent and dim(span(S1 )) = k + 1. If span(S1 ) = V , we are done.
Otherwise, repeating the argument above, we can find uk+2 in V such that S2 = {u1 , u2 , ..., uk , uk+1 , uk+2 } is a
linearly independent subset of V . Continue inductively, this process must stop when the number of vectors in Sm is
equal to the dimension of V , for otherwise, if |Sm | > dim(V ), then Sm cannot be linearly independent. So let
T = Sm when |Sm | = dim(V ).
Challenge

Let V be a k-dimensional subspace of Rn . Using the dimension of V (instead of proving using equivalent statements
of invertibility), prove that a subset S in V containing k vectors, |S| = k, is linearly independent if and only if it
spans V .
Discussion

Recall that for a set S to be a basis for a subspace V in Rn , we must check that
(i) span(S) = V , and

(ii) S is linearly independent.

However, if we know the dimension of V and if the number of vectors in the set S is equal to the dimension of V ,
|S| = dim(V ), then it suffices to check only one of the above criteria.
Dimension and Subspaces

Theorem
Let U and V be subspaces of Rn .

(i) If U is a subset of V , U ⊆ V , then the dimension of U is no greater than the dimension of V ,


dim(U) ≤ dim(V ).

(ii) If U is a strict subset of V , U ⫋ V , then the dimension of U is strictly smaller than V , dim(U) < dim(V ).

i.e. U ⊆ V , then dim(U) ≤ dim(V ) with equality ⇔ U = V .


Sketch of Proof.
Let S = {u1 , u2 , ..., uk } be a basis for U. Then dim(U) = k. Since U is a subset of V , S is a linearly independent
subset of V . So necessarily dim(V ) ≥ k. If U ̸= V , then we can find a set T strictly bigger than S, S ⫋ T such that
T is a basis for V . Hence, dim(V ) = |T | > |S| = k = dim(U).
Equivalent ways to check for Basis

Theorem (B1)
Let V be a k-dimensional subspace of Rn , dim(V ) = k. Suppose S is a linearly independent subset of V containing
k vectors, |S| = k. Then S is a basis for V .

Proof.
Let U = span(S). Since S is linearly independent, S is a basis for U, and hence, dim(U) = k. Since S ⊆ V , U ⊆ V .
Also, dim(U) = k = dim(V ). Therefore, U = V , and so S is a basis for V .

Theorem (B2)
Let V be a k-dimensional subspace of Rn , dim(V ) = k. Suppose S is a set containing k vectors, |S| = k, such that
V ⊆ span(S). Then S is a basis for V .

Proof.
Let U = span(S), then V ⊆ U. So, k = dim(V ) ≤ dim(U) ≤ k which shows that k = dim(U) and hence
V = U = span(S). Next, observe that S must be linearly independent. For if S is linearly dependent, then
k = dim(U) = dim(span(S)) < k, a contradiction.
Equivalent ways to check for basis

In summary:

Definition              (B1)                     (B2)
(1) span(S) = V         (1) |S| = dim(V )        (1) |S| = dim(V )
(2) S is L.I.           (2) S ⊆ V                (2) V ⊆ span(S)
                        (3) S is linearly
                            independent

▶ Using (B1), we do not need to check that span(S) = V .

▶ Using (B2), we do not need to check that S is linearly independent.


Example

       
 x   −1 1 
Let V = y  x + y − z = 0 . Show that S =  2  , 1 is a basis for V .
z 1 2
   

▶ Check that S ⊆ V
(−1) + (2) − (1) = 0, (1) + (1) − (2) = 0.
▶ Check that S is linearly independent. But this is clear since the 2 vectors in S cannot be a multiple of each other.

▶ dim(V ) = 2 since the RREF, which is just the coefficient matrix, has 2 non-pivot columns.

▶ Hence, span(S) ⊆ V and dim(span(S)) = dim(V ) tells us that span(S) = V .


Example
       

Let V = { (x1 , x2 , x3 , x4 ) | x1 − 2x2 + x3 = 0, x2 + x3 − 2x4 = 0 }. Show that S = { (1, 1, 1, 1), (3, 1, −1, 0) }
is a basis for V .
   

▶ Check that S ⊆ V

(1) − 2(1) + (1) = 0, (1) + (1) − 2(1) = 0 ⇒ (1, 1, 1, 1) ∈ V


(3) − 2(1) + (−1) = 0, (1) + (−1) − 2(0) = 0 ⇒ (3, 1, −1, 0) ∈ V

▶ Check that S is linearly independent. But this is clear since the 2 vectors in S cannot be a multiple of each
other. Hence, dim(span(S)) = 2.
▶ dim(V ) = 2 since the RREF, which is just the coefficient matrix, has 2 non-pivot columns.

▶ Hence, span(S) ⊆ V and dim(span(S)) = dim(V ) tells us that span(S) = V .
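The (B1)-style check for this example can be sketched with sympy (assumed available): verify S ⊆ V , S independent, and |S| = dim(V ), without ever computing span(S) explicitly.

```python
from sympy import Matrix

# Coefficient matrix of x1 - 2x2 + x3 = 0 and x2 + x3 - 2x4 = 0.
C = Matrix([[1, -2, 1, 0],
            [0, 1, 1, -2]])
S = [Matrix([1, 1, 1, 1]), Matrix([3, 1, -1, 0])]

assert all(C * u == Matrix([0, 0]) for u in S)   # S ⊆ V: each vector solves both equations
A = Matrix.hstack(*S)
assert len(A.rref()[1]) == 2                     # 2 pivot columns: S is linearly independent
assert C.cols - len(C.rref()[1]) == 2            # dim(V) = #non-pivot columns of C = 2 = |S|
print("S is a basis for V")
```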


Example
       
 1
 0 0 0  
0 1 0  0 


       

Let T = 0 , 0 , 1 ,  0  and V = span(T ).
       


 0 0 0  1  

 
0  0   1 −1
 
   

 0 0 4 0 
2   1   6   2 


       

Show that S =  1  ,  0  ,  1  ,  0  is a basis for
        V.


  2   1   4   1  

 
−1 −1 −3 −1
 
   
0 0 4 0 1 0 0 0 1 0 0 0 −1/4 0 1 0
 2
 1 6 2 0 1 0 0   0
 RREF  1 0 0 0 −1 −2 2 

 −−−→  0
 1 0 1 0 0 0 1 0   0 1 0 1/4 0 0 0 
 
 2 1 4 1 0 0 0 1   0 0 0 1 −1/2 1 0 −1 
−1 −1 −3 −1 0 0 1 −1 0 0 0 0 0 0 0 0
It is clear that T is linearly independent. So, dim(span(T )) = 4. The augmented matrix above shows that
span(T ) ⊆ span(S), and the LHS of the augmented matrix shows that S is linearly independent too. Hence,
dim(span(S)) = 4 too. Therefore span(S) = span(T ).
Appendix
Linear Dependency and Adding Vectors
Theorem
Suppose S = {u1 , u2 , ..., uk } is a linearly independent set of vectors in Rn and u is not a linear combination of
u1 , u2 , ..., uk . Then the set {u1 , u2 , ..., uk , u} is linearly independent.

Proof.
Let c1 , c2 , ..., ck , c be coefficients satisfying the equation

c1 u1 + c2 u2 + · · · + ck uk + cu = 0.

If c ̸= 0, then manipulating the equation gives

−(c1 /c)u1 − (c2 /c)u2 − · · · − (ck /c)uk = u,
a contradiction to u not being a linear combination of u1 , u2 , ..., uk . So, necessarily c = 0. Then

c1 u1 + c2 u2 + · · · + ck uk = 0

tells us that c1 = c2 = · · · = ck = 0 by the independence of S. Therefore only the trivial coefficients satisfy the
equation above, which proves that the set {u1 , u2 , ..., uk , u} is linearly independent.
Linear Dependency and Adding or Removing Vectors

Theorem
Suppose {u1 , u2 , ..., uk } is a linearly independent set of vectors in Rn . Then any subset of {u1 , u2 , ..., uk } is linearly
independent.

Proof.
Let {ui1 , ui2 , ..., uil } be a subset of {u1 , u2 , ..., uk }. Relabeling indices, or rearranging the vectors in the set, we may
assume that the subset is {u1 , u2 , ..., ul } for some l ≤ k. Suppose c1 , c2 , ..., cl are coefficients satisfying the equation

c1 u1 + c2 u2 + · · · + cl ul = 0.

Padding the equation with zeros, we have

c1 u1 + c2 u2 + · · · + cl ul + 0ul+1 + · · · + 0uk = 0.

This is a linear combination of the vectors {u1 , u2 , ..., uk }, and since the set is independent, necessarily the
coefficients are 0. In particular, c1 = c2 = · · · = cl = 0, which proves that the set {u1 , u2 , ..., ul } is independent.
Properties of Coordinates Relative to a Basis

Theorem
Let V be a subspace of Rn and B a basis for V .

(i) For any vectors u, v ∈ V , u = v if and only if [u]B = [v]B .

(ii) For any v1 , v2 , ..., vm ∈ V ,

[c1 v1 + c2 v2 + · · · + cm vm ]B = c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B .

Proof.
Exercise.
More Properties of Coordinates Relative to a Basis

Theorem
Let B be a basis for V containing k vectors, |B| = k. Let v1 , v2 , ..., vm be vectors in V . Then
(i) v1 , v2 , ..., vm is linearly independent (respectively, dependent) if and only if [v1 ]B , [v2 ]B , ..., [vm ]B is linearly
independent (respectively, dependent) in Rk ; and

(ii) {v1 , v2 , ..., vm } spans V if and only if {[v1 ]B , [v2 ]B , ..., [vm ]B } spans Rk .

Proof.
(i) Follows from the properties of coordinates relative to a basis, c1 v1 + c2 v2 + · · · + cm vm = 0n×1 if and only if

0k×1 = [0n×1 ]B = [c1 v1 + c2 v2 + · · · + cm vm ]B = c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B .


More Properties of Coordinates Relative to a Basis

Continuation of Proof.
(ii) (⇐) Suppose {[v1 ]B , [v2 ]B , ..., [vm ]B } spans Rk . Given any v ∈ V , [v]B ∈ Rk and so can find c1 , ..., cm such that
[v]B = c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B in Rk . Then v = c1 v1 + c2 v2 + · · · + cm vm , which proves that
{v1 , v2 , ..., vm } spans V .
(⇒) Let B = {u1 , u2 , ..., uk }. Suppose {v1 , v2 , ..., vm } spans V . Any vector w = (w1 , w2 , ..., wk ) ∈ Rk defines a
vector v = w1 u1 + w2 u2 + · · · + wk uk in V , and so can write v = c1 v1 + c2 v2 + · · · + cm vm . Then

w = [v]B = c1 [v1 ]B + c2 [v2 ]B + · · · + cm [vm ]B

shows that {[v1 ]B , [v2 ]B , ..., [vm ]B } spans Rk .
