Gaussian Elimination in Systems of Equations

1) Gaussian elimination involves performing row operations on the augmented matrix A|b until A is in echelon form. 2) If this process produces a row that is all zeros except for a nonzero entry in the b column, the system is inconsistent and has no solutions. 3) If no such row is produced, the system is consistent and has either exactly one solution or infinitely many solutions, depending on the rank of A.

Topics covered

  • homogeneous systems,
  • parametric form,
  • subspace properties,
  • free variables,
  • null space,
  • column space,
  • coefficient matrix,
  • consistent systems,
  • solution uniqueness,
  • linear transformations


CHAPTER 5 SYSTEMS OF EQUATIONS

SECTION 5.1 GAUSSIAN ELIMINATION

matrix form of a system of equations


The system
2x + 3y + 4z = 1
5x + 6y + 7z = 2

can be written as

Ax = b

where

A = [ 2  3  4 ] ,    x = [ x ] ,    b = [ 1 ]
    [ 5  6  7 ]          [ y ]          [ 2 ]
                         [ z ]

The system is abbreviated by writing

(1)    2  3  4 | 1
       5  6  7 | 2

The matrix A is called the coefficient matrix. The 2 × 4 matrix in (1) is called the
augmented matrix and is denoted A|b.

Gaussian elimination
Row ops on A|b amount to interchanging two equations or multiplying an equation by
a nonzero constant or adding a multiple of one equation to another. They do not
change the solution so they may be used to simplify the system. In particular,
performing row ops on A|b until A is in echelon form is called Gaussian elimination.
There are two possibilities (Fig 1).
1. The row ops produce a row of the form

(2) 0 0 0 0 |nonzero

Then the system has no solution and is called inconsistent.


For example, if a system row ops to

1 0 2 4 | 2
0 1 3 5 | 0
0 0 0 0 | 6

then it has no solutions because the third row is the equation

0x1 + 0x2 + 0x3 + 0x4 = 6

which is impossible to satisfy.

2. The row ops do not produce a row of the form 0 0 0 0|nonzero.


Then the system has a solution (at least one) and is called consistent.
There are two subcases (see the summary in the box below).
2a. All the echelon cols of A have pivots.
Then there is exactly one solution.
For example if the row ops produce
1 0 0 | 2
0 1 0 | 5
0 0 1 | 6
0 0 0 | 0

and the variables are named x,y,z then the solution is x=2, y=5, z=6.

2b. Not all the echelon cols of A have pivots.


Then there are infinitely many solutions.
It is always possible to solve for some of the variables in terms of the
others. Those others are called free variables or parameters.
One way to do this is to choose the variables corresponding to the cols without
pivots to be free. If the system Ax = b has n variables (so that A has n cols) and
rank A = r (so that there are r cols with pivots in the echelon form of A)
then the solution has n – r free variables.

For example, if the row ops produce

1 2 0 3 | 5
0 0 1 4 | 6
0 0 0 0 | 0
0 0 0 0 | 0

and the variables are named x,y,z,w then one way to write the solution is

x = 5 - 3w - 2y
z = 6 - 4w (w, y free)

Another way to write the sol is

x = 5 - 3s - 2t
y = t
z = 6 - 4s
w = s

                            Ax = b

  Has an echelon row of the          No echelon row of the
  form 0...0|nonzero                 form 0...0|nonzero
  INCONSISTENT                       CONSISTENT
  NO SOLUTIONS
                            All echelon cols     Has an echelon col
                            have pivots          without a pivot
                            ONE SOLUTION         INFINITELY MANY SOLS
                                                 # parameters
                                                   = # cols w/o pivots
                                                   = n-r where
                                                       n = # variables
                                                       r = rank A
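The summary above is mechanical enough to program. Here's a short Python sketch (mine, not part of the text), using exact fractions so there is no roundoff; it row reduces A|b and reports which of the cases occurs:

```python
from fractions import Fraction

def classify(aug):
    """Row reduce the augmented matrix [A|b] and report how many solutions
    the system has. aug is a list of rows; the last entry of each row is b."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0]) - 1          # cols counts only the A part
    pivots = []
    r = 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pr is None:
            continue                            # no pivot in this column
        m[r], m[pr] = m[pr], m[r]               # swap the pivot row up
        m[r] = [x / m[r][c] for x in m[r]]      # scale the pivot to 1
        for i in range(rows):
            if i != r and m[i][c] != 0:         # clear the rest of column c
                m[i] = [a - m[i][c] * p for a, p in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    # case 1: a row 0 0 ... 0 | nonzero means inconsistent
    if any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in m):
        return "inconsistent"
    # case 2a: every col of A has a pivot
    if len(pivots) == cols:
        return "one solution"
    # case 2b: n - r free variables
    return f"infinitely many solutions, {cols - len(pivots)} free variables"

# the inconsistent example from the text
assert classify([[1, 0, 2, 4, 2],
                 [0, 1, 3, 5, 0],
                 [0, 0, 0, 0, 6]]) == "inconsistent"
# cases 2a and 2b from the text
assert classify([[1, 0, 0, 2], [0, 1, 0, 5],
                 [0, 0, 1, 6], [0, 0, 0, 0]]) == "one solution"
assert classify([[1, 2, 0, 3, 5],
                 [0, 0, 1, 4, 6],
                 [0, 0, 0, 0, 0]]) == "infinitely many solutions, 2 free variables"
```

The three asserts are exactly the three cases in the box: a row 0...0|nonzero, pivots in every column, and a column without a pivot.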

example 1
Solve
2x1 + 4x2 - 2x3 + 8x4 + 4x5 = 6
3x1 + 6x2 + x3 + 12x4 - 2x5 = 1
9x1 + 18x2 + x3 + 36x4 + 38x5 = 0
solution Begin with
2  4 -2  8  4 | 6
3  6  1 12 -2 | 1
9 18  1 36 38 | 0

and do row ops

R1 = (1/2)R1
R2 = -3R1 + R2
R3 = -9R1 + R3
R2 = (1/4)R2
R1 = R2 + R1
R3 = -10R2 + R3
R3 = (1/40)R3
R2 = 2R3 + R2

to get

1 2 0 4 0 | 1
0 0 1 0 0 | -94/40
0 0 0 0 1 | -7/40

Choose x2 and x4, the variables corresponding to the cols without pivots, to be the
free variables (other choices are possible but I like these) and solve for
x1, x3, x5 in terms of them. A final sol is
x5 = -7/40
x3 = -94/40
x1 = 1 - 4x4 - 2x2
(x2, x4 free)

Another version of the sol is


x5 = -7/40
x4 = t
x3 = -94/40
x2 = s
x1 = 1 - 4t - 2s
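As a check on example 1 (the check is mine, not the author's), substitute the parametric solution back into the original three equations with exact arithmetic; it should satisfy them for every choice of the parameters s and t:

```python
from fractions import Fraction

def solution(s, t):
    """The parametric solution from example 1, with x2 = s and x4 = t free."""
    s, t = Fraction(s), Fraction(t)
    x1 = 1 - 4*t - 2*s
    x2 = s
    x3 = Fraction(-94, 40)
    x4 = t
    x5 = Fraction(-7, 40)
    return x1, x2, x3, x4, x5

for s in (0, 1, -3):
    for t in (0, 2, 5):
        x1, x2, x3, x4, x5 = solution(s, t)
        # the original system from example 1
        assert 2*x1 + 4*x2 - 2*x3 + 8*x4 + 4*x5 == 6
        assert 3*x1 + 6*x2 + x3 + 12*x4 - 2*x5 == 1
        assert 9*x1 + 18*x2 + x3 + 36*x4 + 38*x5 == 0
```

The parameters cancel out of every equation, which is what "free" means.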

example 2
Suppose A is 4 × 3 and Ax = b has infinitely many solutions. Consider the new
system Ax = c. Will it have infinitely many solutions also; i.e., what happens if
you change the righthand side of the system.
solution Since Ax = b has infinitely many sols, the echelon form of A must have at
least one col without a pivot and A|b must row op into something like this:
1 0 3 | 8        1 3 0 | 8        1 3 7 | 8
0 1 7 | 9        0 0 1 | 9        0 0 0 | 0
0 0 0 | 0   or   0 0 0 | 0   or   0 0 0 | 0
0 0 0 | 0        0 0 0 | 0        0 0 0 | 0
warning I'm not saying that the original system looks like this. But A|b must
row operate into something like this.

Now look at Ax = c, same A but different righthand side.


Can't have just one sol because the echelon form of A has at least one col without a
pivot.
Can have infinitely many sols or no solutions: If A|b row opped to
1 0 3 | 8
0 1 7 | 9
0 0 0 | 0
0 0 0 | 0

it's possible for A|c to row op to say

1 0 3 | 5
0 1 7 | 5
0 0 0 | 6
0 0 0 | 0

and have no solutions, and it's also possible for A|c to row op to

1 0 3 | π
0 1 7 | e
0 0 0 | 0
0 0 0 | 0

and have infinitely many sols.


So if Ax = b has infinitely many sols then Ax = c has either infinitely many sols
or no solutions (but can't have just one solution).
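Example 2 can be illustrated numerically (this sketch is mine; the particular rank-2 matrix below is an assumption chosen to match the echelon forms above): with the same A, one righthand side gives infinitely many solutions and another gives none, but never exactly one.

```python
from fractions import Fraction

def num_solutions(A, b):
    """Return 'none', 'one' or 'infinitely many' for A x = b, exactly."""
    m = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    rows, cols = len(m), len(A[0])
    r = 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * p for a, p in zip(m[i], m[r])]
        r += 1
    # a row 0 ... 0 | nonzero means inconsistent
    if any(all(x == 0 for x in row[:cols]) and row[cols] != 0 for row in m):
        return "none"
    return "one" if r == cols else "infinitely many"

# a 4 x 3 matrix with rank 2 (row3 = row1 + row2, row4 = 2*row1 + row2)
A = [[1, 0, 3], [0, 1, 7], [1, 1, 10], [2, 1, 13]]
assert num_solutions(A, [8, 9, 17, 25]) == "infinitely many"  # b in the col space
assert num_solutions(A, [8, 9, 17, 26]) == "none"             # c not in the col space
```

No choice of righthand side can produce "one", because the echelon form of A is stuck with a column that has no pivot.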

solving with unreduced echelon form and back substitution (much more efficient)
Row operate on the system so that the coeff matrix is in unreduced echelon form
(upper triangular form). Then starting with the last row, solve for the first
variable in each row and back substitute as you go along.
For example, if the row operations produce

1 2 3 5 7 | 2
0 0 4 6 8 | 6
0 0 0 0 2 | 6
0 0 0 0 0 | 0
then
x5 = 3   from the third row

x3 = (6 - 8x5 - 6x4)/4          from second row
   = (6 - 8(3) - 6x4)/4         back substitute
   = (-18 - 6x4)/4

x1 = 2 - 7x5 - 5x4 - 3x3 - 2x2                from first row
   = 2 - 7(3) - 5x4 - 3(-18 - 6x4)/4 - 2x2    back substitute
   = -11/2 - 2x2 - (1/2)x4

The solution can also be written as


x5 = 3
x4 = t
x3 = (-18 - 6t)/4
x2 = s
x1 = -11/2 - 2s - (1/2)t
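Back substitution itself is easy to mechanize. Here's a sketch (mine, not the text's) that solves the echelon system above for a given choice of the free variables x2 = s and x4 = t, working from the bottom row up:

```python
from fractions import Fraction

def back_substitute(s, t):
    """Solve the echelon system
         x1 + 2x2 + 3x3 + 5x4 + 7x5 = 2
                   4x3 + 6x4 + 8x5 = 6
                               2x5 = 6
       with free variables x2 = s and x4 = t, by back substitution."""
    x2, x4 = Fraction(s), Fraction(t)
    x5 = Fraction(6, 2)                    # third row
    x3 = (6 - 8*x5 - 6*x4) / 4             # second row, substituting x5
    x1 = 2 - 7*x5 - 5*x4 - 3*x3 - 2*x2     # first row, substituting the rest
    return x1, x2, x3, x4, x5

x1, x2, x3, x4, x5 = back_substitute(0, 0)
assert (x1, x3, x5) == (Fraction(-11, 2), Fraction(-9, 2), 3)
```

With s = t = 0 this reproduces x1 = -11/2, x3 = -18/4, x5 = 3, agreeing with the formulas in the text.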

PROBLEMS FOR SECTION 5.1

1. Here are some systems of equations, already in echelon form.


Let the variables be named x1,x2,... .
Solve and identify the free variables. Then express the solution using parameters
r,s,t,...

        1 2 0 3 5 0 | 5             1 0 | 5
        0 0 1 4 6 0 | 6             0 1 | 6
(a)     0 0 0 0 0 1 | 7     (b)     0 0 | 3
        0 0 0 0 0 0 | 0             0 0 | 0

        1 0 | 5                                     0 1 0 | 3
        0 1 | 6             1 2 0 0 | 5             0 0 1 | 0
(c)     0 0 | 0     (d)     0 0 1 0 | 0     (e)     0 0 0 | 0
        0 0 | 0             0 0 0 0 | 0

2. Let
u = (1,0,1,1)
v = (0,1,1,0)
w = (1,1,2,1)
y = (2,-1,1,2)
Decide if y is in the subspace spanned by u,v,w using
(a) ideas from Section 3.1
(b) ideas from this section

If it is, express y as a combination of u,v,w.

3. Look at the system


x - 2y + 3z = 1
2x + ky + 6z = 6
-x + 3y + (k-3)z = 0
For what values of k will it have
(a) no solutions (b) one solution (c) infinitely many solutions

4. True or False. If False, what would the correct conclusion be.


(a) If there are fewer equations than unknowns then the unknowns are
"underdetermined" and there will be infinitely many sols.
(b) If there are more equations than unknowns then the unknowns are
"overdetermined" and there are no solutions.
(c) If there are the same number of unknowns as equations then everything is
hunky-dory and there will be exactly one solution.

5. Suppose A is 5 × 7 and the 2 × 2 subdet in the northeast corner is nonzero but
all the 3 × 3 subdets are 0. What can you conclude about the number of solutions
and number of free variables for the system Ax = b.

6. Solve using row ops and back substitution.


    2 3 4 6 1 | 4
(a) 0 0 5 7 2 | -2
    0 0 0 0 0 | 0

    2x + 8y + 5z + 2u - 6v = 8
    4y + 6z + 3u + 3v = 9                    2x + y + z = 1
(b) 2x + 12y + 11z + 5u - 3v = 17        (c) 4x + y = -2
    4y + 6z + 3u + 6v = 9                    -2x + 2y + z = 7

7. A system of equations in unknowns x,y,z,w row ops to

1 2 0 3 | 5
0 0 1 4 | 6
0 0 0 0 | 0
0 0 0 0 | 0

Solve and, if possible, make the free variables


(a) w and y
(b) x and w
(c) y and z
(d) x and y

8. Suppose A is 4 × 3. If Ax = b has one solution, what can you conclude about
solutions to Ax = c.

9. What can you conclude about sols to Ax = b if
(a) A is 3 × 5 with rank 3
(b) A is 3 × 5 with rank 2
(c) A is 5 × 3 with rank 3

10. Let

    [ 1 2 1 ]
M = [ 0 0 1 ]
    [ 0 0 2 ]
    [ 0 0 3 ]

For what vectors b is Mx = b consistent. In that consistent case, solve.

11. You have a fixed 3 × 4 matrix A (3 rows, 4 columns) with rank 3.
You're looking for a right inverse for A; i.e., you're looking for a matrix

    [ x1 y1 z1 ]
B = [ x2 y2 z2 ]
    [ x3 y3 z3 ]
    [ x4 y4 z4 ]

so that AB = I.
Can you find such a B. If so, how many.
Suggestion: Think about systems of equations and remember that A is fixed and B is
filled with unknowns.

SECTION 5.2 HOMOGENEOUS SYSTEMS

homog systems and the null space of a matrix


A homog system is one of the form Mx = 0.
It is always consistent since it at least has the (trivial) solution

x1 = 0, x2 = 0, ..., xn = 0.

A homog system will always have either one solution (the trivial one) or
infinitely many sols (the trivial one plus infinitely many others).
Furthermore, the set of solutions to Mx = 0 is a subspace called the null space
of M.
If M has n cols (so that the system has n variables) and rank M = r then the null
space is an (n – r)-dim subspace of Rn; the dimension of the null space is the number of
free variables.

The dimension of the null space of M is called the nullity of M.


At worst, the null space is a 0-dim subspace containing only 0.
Here's an example to illustrate why the sols to a homog system are a subspace and
to show how to find a basis for the null space.
Suppose M|0 row ops to

1 -7 0  2  5 0 | 0
0  0 1 -3 -6 0 | 0
0  0 0  0  0 1 | 0
0  0 0  0  0 0 | 0
Then
x6 = 0
(1) x3 = 6x5 + 3x4
x1 = -5x5 - 2x4 + 7x2

The solution can also be written as

x1 = -5r - 2s + 7t
x2 = t
x3 = 6r + 3s
x4 = s
x5 = r
x6 = 0
In vector notation, the solution is

     [ x1 ]       [ -5 ]       [ -2 ]       [ 7 ]
     [ x2 ]       [  0 ]       [  0 ]       [ 1 ]
(2)  [ x3 ]  = r  [  6 ]  + s  [  3 ]  + t  [ 0 ]
     [ x4 ]       [  0 ]       [  1 ]       [ 0 ]
     [ x5 ]       [  1 ]       [  0 ]       [ 0 ]
     [ x6 ]       [  0 ]       [  0 ]       [ 0 ]
                    u            v            w

The set of sols is the set of all combinations of u,v,w so it is a subspace of R6.
Furthermore u,v,w are ind (look at their 2nd, 4th and 5th components to see that no
one can be a combination of the others). So the sols are a 3-dim subspace with basis
u,v,w. The dimension of the subspace matches the number of free variables.
Now that you know the sols are a subspace, here's another way to extract a basis
from the solution in (1) without rewriting it as (2). Assign values to the three
free variables so as to get 3 ind solutions. The easiest way to do this is to let

x2 = 1, x4 = 0, x5 = 0

to get solution (7,1,0,0,0,0); then let

x2 = 0, x4 = 1, x5 = 0

to get solution (-2,0,3,1,0,0); then let

x2 = 0, x4 = 0, x5 = 1

to get solution (-5,0,6,0,1,0).
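The free-variable recipe above is easy to automate and to check. In this sketch (mine, not the text's), each basis vector comes from setting one free variable to 1 and the others to 0, and each is then verified against the reduced system:

```python
from fractions import Fraction

# reduced rows of M (the zero righthand sides are omitted)
rref = [[1, -7, 0, 2, 5, 0],
        [0, 0, 1, -3, -6, 0],
        [0, 0, 0, 0, 0, 1]]

def basis_vector(x2, x4, x5):
    """Solution of the homog system in (1) for given free-variable values."""
    x6 = Fraction(0)
    x3 = 6*Fraction(x5) + 3*Fraction(x4)
    x1 = -5*Fraction(x5) - 2*Fraction(x4) + 7*Fraction(x2)
    return [x1, Fraction(x2), x3, Fraction(x4), Fraction(x5), x6]

u = basis_vector(1, 0, 0)   # (7, 1, 0, 0, 0, 0)
v = basis_vector(0, 1, 0)   # (-2, 0, 3, 1, 0, 0)
w = basis_vector(0, 0, 1)   # (-5, 0, 6, 0, 1, 0)

# each one really is in the null space: every reduced row dots to zero
for vec in (u, v, w):
    for row in rref:
        assert sum(r * x for r, x in zip(row, vec)) == 0
```

The 2nd, 4th and 5th components of u,v,w form the identity pattern 100, 010, 001, which is why the three solutions are automatically independent.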

homogeneous versus non-homogeneous


The set of solutions to a homogeneous system of equations is a subspace but the set
of solutions to a non-homogeneous system of equations can never be a subspace. For
one thing it never contains 0 because a non-homogeneous system never has the
solution x1 = 0, x2 = 0, ..., xn = 0.

example 1
Suppose the system Mx = 0 row ops to

1 2 0 3 | 0
0 0 1 4 | 0
0 0 0 0 | 0
Then
x3 = -4x4
(3) x1 = -3x4 - 2x2
The sols are a 2-dim subspace of R4 since there are 2 free variables.
To pick a basis let x4 = 1, x2 = 0 in (3) to get solution
u = (-3,0,-4,1); and set x4 = 0, x2 = 1 in (3) to get solution v = (-2,1,0,0). Then u
and v are two ind solutions and are a basis for the subspace of solutions.

summary
Suppose A is 8 × 10 (8 rows and 10 columns) with rank r.
The row space of A is an r-dim subspace of R10.
The col space of A is an r-dim subspace of R8.
The null space of A (the set of sols to Ax = 0) is a (10-r)-dim subspace of R10.

how to show that a set of vectors is a subspace


Section 2.5 gave two ways to show that a set of vectors is a subspace:
method 1 Show that it's closed under addition and scalar mult.
method 2 Show that the set is composed of all combinations of a bunch of vectors.
Now you have a third way.
method 3 If the set of vectors is the solution of a homogeneous system of
equations then it's a subspace.

example 2

Look at the set of vectors of the form (a,b,c,-a). In example 3 of Section 2.5, I
used two methods to show that the set is a subspace. And I found a basis for the
subspace. Here's a third way.
The set consists of all vectors (x1,x2,x3,x4) where x4 = -x1. So the set is the
solution of the homogeneous system

x1 + 0x2 + 0x3 + x4 = 0 (one equation, four unknowns)

So the set is a subspace.

The solution to the system is

x4 = -x1; x1,x2,x3 free.


There are 3 free variables so the subspace of sols is 3—dim. Here's how to get a
basis for the subspace:

Let x1=1,x2=0,x3=0 to get sol (1,0,0,-1).


Let x1=0,x2=1,x3=0 to get sol (0,1,0,0).
Let x1=0,x2=0,x3=1 to get sol (0,0,1,0).

A basis is (1,0,0,-1), (0,1,0,0), (0,0,1,0).
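A quick numerical check of this basis (the check is mine): every combination of the three basis vectors has fourth component equal to minus its first, i.e. it again has the form (a,b,c,-a).

```python
from itertools import product

# the basis found above for the subspace of vectors of the form (a,b,c,-a)
basis = [(1, 0, 0, -1), (0, 1, 0, 0), (0, 0, 1, 0)]

# every combination c1*p + c2*q + c3*r should again satisfy x4 = -x1
for c1, c2, c3 in product((-2, 0, 1, 3), repeat=3):
    vec = [c1 * p + c2 * q + c3 * r for p, q, r in zip(*basis)]
    assert vec[3] == -vec[0]
```

This is method 2 and method 3 agreeing: the span of the basis is exactly the solution set of x1 + x4 = 0.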

PROBLEMS FOR SECTION 5.2


1. Write the solutions in parametric form and find a basis for the subspace of
solutions.
    1 5 2 4 0 | 0
    0 1 3 5 0 | 0
(a) 0 0 0 0 1 | 0
    0 0 0 0 0 | 0
x + 2y + z = 0 2x + y + z = 0
(b) x + 3y + 2z = 0 (c) 4x + y = 0
-2x + 2y + z = 0

2. Find a basis for the null space of M.

 0 1 0 0 
 1 1 2 0   
(b) M =  
0 0 1 0
(a) M =  0 0 1 1   (c) M = [1 1 2]
 1 1 2 0   0 0 0 1 
 0 0 0 0 
3. Find an orthogonal basis for the subspace of solutions to

x + 2y + z - w = 0
2x + 5y + 3z - w = 0

4. Describe the set of solutions geometrically.

    2 0 0 | 0        2 3 4 | 0        0 0 0 | 0
(a) 0 0 0 | 0    (b) 4 6 8 | 0    (c) 0 0 0 | 0
    0 0 0 | 0        0 0 0 | 0        0 0 0 | 0

    1 3 0 | 0        1 0 -2 | 0
(d) 0 0 0 | 0    (e) 0 1 -3 | 0
    0 0 0 | 0        0 0  0 | 0
5. Solve Ax = b where A is the 3 × 5 zero matrix and

(a) b = 0 (b) b ≠ 0

6. Suppose A has six cols, the cols are dep but the last five cols are ind. What can
you conclude about the number of solutions (and the number of free variables) to
Ax = b where (a) b = 0 (b) b ≠ 0

7. Let u = (1,0,2), v = (1,1,4). Find all the vectors orthog to both u and v
(a) using geometry and calculus
(b) using the ideas of this section

8. Let u = (1,1,4,-2). Find as many independent vectors as possible orthogonal to u.

9. Use ideas from this section to show that the set of points is a subspace and find
a basis.
(a) The set of points of the form (x1,..., x6) where x2 = -x3 + x5 and x4 = 2x3 + x5
(b) The set of points of the form (a,a,2a,2a,b)

10. Let A and B be fixed 10 × 7 matrices.
Look at the set of column vectors x in R7 such that Ax = Bx, i.e., the set of
vectors x such that A and B "do the same thing to them". Use ideas of this section
to show that this set is a subspace.

11. Solve the system of equations

    0 2 | 0
    0 3 | 0

(call the unknowns x and y) and find a basis for the subspace of solutions.

SECTION 5.3 ORTHOGONAL COMPLEMENTS

definition of the orthogonal complement of a subspace


Suppose V is a subspace of Rn.
The set of all vectors orthog to every vector in V is called the orthogonal
complement of V and denoted by V⊥.
In other words, u is in V⊥ iff u is orthog to every vector in V.
For example, in R3, a plane through the origin is a 2-dim subspace; its orthogonal
complement is the line through the origin perpendicular to the plane (Fig 1). The
line consists precisely of those points u such that u is perp to every vector in
the plane.

FIG 1 (the 2-dim subspace V, a plane through the origin, and its orthogonal
complement V⊥, the perpendicular line through the origin)

all about the orthog complement

Let V be a subspace of Rn.

(1) V⊥ is also a subspace of Rn.

(2) The sum of the dimensions of V and V⊥ is n (e.g., if V is a 6-dim subspace of
R30 then V⊥ is a 24-dim subspace of R30).

(3) Not only does V⊥ contain everything orthog to all the vectors in V but it can
be shown that V in turn contains everything orthog to all the vectors in V⊥ so that
the spaces V and V⊥ are orthog complements of each other.
For example in Fig 1, the line is the orthog complement of the plane and the plane
is the orthogonal complement of the line.
(4) Suppose V is spanned by vectors u,v,w,p. To find V⊥, let A be the matrix with
rows u,v,w,p. Then the set of solutions to Ax = 0 (the null space of A) is V⊥.

(5) The null space and the row space of a matrix are orthog complements.

proof of (4)
Suppose V is spanned by u,v,w,p. To find V⊥, find all x orthog to the spanning
vectors by solving

(•) u·x = 0, v·x = 0, w·x = 0, p·x = 0.

Once x is orthog to u,v,w,p then x is orthog to every vector in V (see problem #6
in Section 2.5).
So the solution to (•) is the orthog complement of V.
The system in (•) can be written as Ax = 0 where the rows of A are u,v,w,p, and x
is written as a column. So to find V⊥, solve Ax = 0, i.e., find the null space of A.

proof of (5)
This follows from (4) which shows that if a matrix A has rows u,v,w,p then the
null space of the matrix is the orthog complement of the space spanned by u,v,w,p.

proof of (1) and (2)

Suppose V is a 6-dim subspace of R30.
Let A be a matrix whose rows are a basis for V (so that V is the row space of A).
A is 6 × 30.
By (4), V⊥ is the null space of A which makes it a subspace, proving (1).
Furthermore,
rank A = 6
number of cols in A = 30
dim of null space = n-r = 24

So dim V⊥ is 24, proving (2).

proof of (3) omitted (subtle)

example 1
Let
u = (0,1,0,1,0)
v = (0,0,1,0,2)

Let V be the space spanned by u,v. Find a basis for V⊥.

solution Let A be the matrix with rows u,v. Solve the system Ax = 0 :

0 1 0 1 0 | 0
0 0 1 0 2 | 0

It's already in echelon form. The solution is

x3 = -2x5
x2 = -x4
To find a basis for V⊥, set x1 = 1, x4 = 0, x5 = 0 to get

p = (1,0,0,0,0);

set x1 = 0, x4 = 1, x5 = 0 to get

q = (0,-1,0,1,0);

set x1 = 0, x4 = 0, x5 = 1 to get
r = (0,0,-2,0,1)

A basis for V⊥ is p,q,r.

Note that V and V⊥ are subspaces of R5 so, by (2), their dimensions should add up to
5. And they do: dim V = 2 (since u and v are ind) and dim V⊥ = 3.
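Checking example 1 (this verification is mine, not the author's): every basis vector of the orthogonal complement should be orthogonal to u and to v, and hence to everything in V.

```python
u = (0, 1, 0, 1, 0)
v = (0, 0, 1, 0, 2)

# the basis found for the orthogonal complement
p = (1, 0, 0, 0, 0)
q = (0, -1, 0, 1, 0)
r = (0, 0, -2, 0, 1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# each basis vector is orthog to both spanning vectors of V
for vec in (p, q, r):
    assert dot(u, vec) == 0
    assert dot(v, vec) == 0

# dim V + dim of the complement = 2 + 3 = 5 = n, as rule (2) predicts
assert 2 + 3 == len(u)
```

Being orthogonal to u and v is enough, since every vector in V is a combination of u and v.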

how many perps


Suppose u,v,w are independent vectors in R30.
You can get 27 independent vectors orthogonal to u,v,w, but no more than that.
You can get 28 independent vectors orthogonal to u and v, but no more than that.
You can get 29 independent vectors orthogonal to u, but no more than that.

proof
The number of independent vectors orthog to u,v,w is the same as the dimension of
the orthog complement of the subspace spanned by u,v,w. The subspace spanned by u,v,w
has dimension 3 (since u,v,w are ind). By (2), the orthog complement has dimension
27. So that's the most independent vectors you can get which are orthog to u,v and w.
Similarly, the number of independent vectors orthog to u is the same as the
dimension of the orthog complement of the 1—dim subspace spanned by u. By (2), the
orthog complement has dimension 29.
mathematical catechism
question What does it mean to say that U is the orthogonal complement of a subspace
W of R8.
answer It means that U is the set of all vectors in R8 that are orthogonal to every
vector in W.

PROBLEMS FOR SECTION 5.3

1. Find a basis for the orthog complement of the subspace spanned by


p = (1,1,0,0), q = (0,1,0,1).

2. Find all the vectors perp to


p = (1,0,0,0,0,1), q = (1,1,0,0,0,0), r = (0,0,1,1,0,0).

3. Let

    [ 1 2 0 3  0 ]
A = [ 2 4 1 10 0 ]
    [ 0 0 1 4  1 ]
    [ 1 2 1 7  0 ]
Find the dimension of and a basis for

(a) the row space of A


(b) the col space of A
(c) the null space of A
(d) the orthogonal complement of the null space of A

 2 4 1 

4. Let A = 
3 6 0 
 4 8 0 
 5 10 0 
(a) Find a basis for the orthogonal complement of the column space.
(b) Continue from part (a) and find an orthogonal basis.

5. Do you remember from calculus that the graph in 3-space of 2x + 3y + 4z = 0
is a plane through the origin with normal vector n = 2i + 3j + 4k.
Furthermore, the only vectors perp to the plane are n and multiples of n.
Here's the 5-dimensional version of this idea.
Look at the set of points (a "hyperplane") satisfying the equation

(•) 2x1 + 3x2 + 5x3 - 7x4 + x5 = 0.

A vector is called orthog to the hyperplane if it is orthog to every vector in the
hyperplane.
Let
n = (2,3,5,-7,1)
Use the stuff from this section to show that n is orthog to the hyperplane and
furthermore the only vectors orthog to the hyperplane are n and multiples of n.

6. Let L1 and L2 be perpendicular lines through the origin in space. They are
subspaces since a line through the origin in R3 is a subspace. Are they orthogonal
complements.

7. Let W be a subspace of Rn. Let w be in W.


Can you decide if w and u are orthogonal if
(a) u is in the orthogonal complement of W
(b) u is not in the orthogonal complement of W

8. Suppose (3,1,2) is in the row space of M. Is it possible for (2,1,1) to be in the


null space.

answer No. The row space of a matrix and its null space are orthogonal complements, so a
vector in the null space has to be orthog to everything in the row space. But
(2,1,1) is not orthog to (3,1,2).

SECTION 5.4 SQUARE SYSTEMS

Throughout this section, matrices are square, and all systems of equations have the
same number of equations as unknowns.

number of solutions to Mx = b
(a) If M is invertible then Mx = b has one solution, namely x = M^-1 b.
(b) If M is not invertible then Mx = b has either no solutions or infinitely many
solutions.

proof of (b)
Suppose M is 3 × 3 and is not invertible. Then M row ops into something like

1 2 0      1 0 2
0 0 1  or  0 1 3  etc.
0 0 0      0 0 0

And M|b row ops to something like

1 0 2 | 5
0 1 3 | 6
0 0 0 | 7

in which case there are no solutions or it row ops to something like

1 0 2 | 8
0 1 3 | 9
0 0 0 | 0

in which case there are infinitely many solutions (because there is a free variable
corresponding to the column without a pivot). You can't get exactly one solution
because when the system is consistent there will always be at least one free
variable.
number of solutions to Mx = 0
This is a special case of (a) and (b) above.
Remember that a homogeneous system always has at least the trivial solution x = 0.
(aa) If M is invertible then Mx = 0 has only the trivial solution x = 0.
(bb) If M is not invertible then Mx = 0 has more (infinitely many more) than just
the trivial solution.
I'm going to add this to the invertible rule.

invertible rule
Let M be n × n.
The following are equivalent; i.e., either all are true or all are false.

(1) M is invertible (nonsingular).
(2) |M| ≠ 0.
(3) Echelon form of M is I.
(4) Rows of M are independent.
(5) Cols of M are independent.
(6) Rank of M is n.
(7) Mx = 0 has only the trivial solution x = 0.
    In other words, the null space of M contains only 0.
    In other words, if x ≠ 0 then Mx ≠ 0.

Cramer's rule
Look at Mx = b where M is square.
(1) If |M| = 0 then either the system is inconsistent or it has infinitely many
solutions (this is just a restatement of (b) from above).
(2) If |M| ≠ 0 then the system has one solution (this is a restatement of (a)).
Furthermore, in case (2), the solution is x = M^-1 b which boils down to

         replace col 1 of M with b and then take det
(•) x1 = -------------------------------------------
                          det M

         replace col 2 of M with b and then take det
    x2 = -------------------------------------------
                          det M
etc.

proof of the formula in (•)

Here's where the formulas in (•) come from for a 3 × 3 matrix M.
If |M| ≠ 0, the sol is x = M^-1 b. Use the adjoint method for finding M^-1 to get

[ x1 ]           [ cof of m11  cof of m12  cof of m13 ]T [ b1 ]
[ x2 ] = (1/|M|) [ cof of m21  cof of m22  cof of m23 ]  [ b2 ]
[ x3 ]           [ cof of m31  cof of m32  cof of m33 ]  [ b3 ]

So

x2 = (1/|M|) (b1·cof of m12 + b2·cof of m22 + b3·cof of m32)

             (      | m21 m23 |      | m11 m13 |      | m11 m13 | )
   = (1/|M|) ( -b1  |         | + b2 |         | - b3 |         | )
             (      | m31 m33 |      | m31 m33 |      | m21 m23 | )

     | m11 b1 m13 |
     | m21 b2 m23 |
     | m31 b3 m33 |
   = --------------
         det M

Similarly for x1 and x3.

example 1
The system
2x + y + z = 0
x - y + 5z = 0
y - z = 4

has one solution because

| 2  1  1 |
| 1 -1  5 | = -6 ≠ 0
| 0  1 -1 |

The solution is

    | 0  1  1 |
    | 0 -1  5 |
    | 4  1 -1 |
x = ------------ = -4
         -6

    | 2  0  1 |
    | 1  0  5 |
    | 0  4 -1 |
y = ------------ = 6
         -6

    | 2  1  0 |
    | 1 -1  0 |
    | 0  1  4 |
z = ------------ = 2
         -6
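Cramer's rule for a 3 × 3 system is short to program. This sketch (mine, not part of the text) redoes example 1 with exact fractions; det3 and cramer3 are my own names:

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3 x 3 matrix, expanding along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(M, b):
    """Solve M x = b by Cramer's rule; requires |M| != 0."""
    d = det3(M)
    if d == 0:
        raise ValueError("|M| = 0: no solution or infinitely many")
    xs = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = b[i]            # replace column j of M with b
        xs.append(Fraction(det3(Mj), d))
    return xs

M = [[2, 1, 1], [1, -1, 5], [0, 1, -1]]
b = [0, 0, 4]
assert det3(M) == -6
assert cramer3(M, b) == [-4, 6, 2]     # x = -4, y = 6, z = 2 as in the text
```

The final assert reproduces the three determinant quotients computed above.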

PROBLEMS FOR SECTION 5.4

 1   0 
1. Suppose M is 3 ≈ 3 and M  2  =  0  .
 3   0 

What important fact(s) can you conclude about M.

2. Try Cramer's rule and if that doesn't help, solve some other way.

2x - 5y + 4z = 5 2y + 4z = 10 2y + 4z = 6
(a) x - 3y + 2z = 1 (b) x + y + 4z = 4 (c) x + y + 4z = 4
y + z = 7 x + 2z = 1 x + 2z = 1

3. Look at the system

3x + y = 3
2x - y = 5

Solve it three times.

(a) Use Cramer's rule.


(b) Use an inverse matrix
(c) Use Gaussian elimination.

4. Use Cramer's rule to solve


ax + by = c
dx + ey = f

for x and y. What assumption is necessary for your solution to be valid.

5. Let A be 3 × 3 and let

    [ 5 ]       [ 1 ]
b = [ 9 ] , c = [ 2 ]
    [ 8 ]       [ 0 ]

Suppose Ax = b has no solutions.
What can you conclude about
(a) the number of sols (and the number of free variables) to Ax = 0
(b) the number of sols (and the number of free variables) to Ax = c where c ≠ 0

(c) |A|
(d) rank A

 1/
√6 2/
√6 1/
√6 
 
6. Let A =  -2/
√5 1/
√5 0 
 
 1/
√30 2/
√30 -5/
√30 

 x   2 
Without any agony, solve A  y =  3 .
 z   4 
