Numerical Methods
ECE 410
Lecture 8
Gauss Elimination
Introduction
Roots of a single equation:  f(x) = 0

A general set of equations:
  f1(x1, x2, ..., xn) = 0
  f2(x1, x2, ..., xn) = 0
  ...
  fn(x1, x2, ..., xn) = 0
- n equations, n unknowns.
Linear Algebraic Equations
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn
Nonlinear Equations
a11 x1 + a12 x1 x2 + ... + a1n (xn)^5 = b1
a21 (x1)^3 + a22 e^(x2) + ... + a2n (x2)^3 / xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn
Review of Matrices
        [ a11  a12  ...  a1m ]
[A]  =  [ a21  a22  ...  a2m ]        (n x m)
        [ ...                ]
        [ an1  an2  ...  anm ]

Elements are indicated by aij, where the subscript i is the row index and j is
the column index (e.g., the 2nd row is a21, a22, ..., a2m; the mth column is
a1m, a2m, ..., anm).
Row vector (1 x n):      [R] = [ r1  r2  ...  rn ]

Column vector (m x 1):           [ c1 ]
                          [C] =  [ c2 ]
                                 [ ...]
                                 [ cm ]

Square matrix:
- [A]nxm is a square matrix if n=m.
- A system of n equations with n unknowns has a square coefficient matrix.
Review of Matrices
• Main (principal) diagonal:
The diagonal of [A]nxn consisting of the elements aii ; i=1,...,n
• Symmetric matrix:
If aij = aji [A]nxn is a symmetric matrix
• Diagonal matrix:
[A]nxn is diagonal if aij = 0 for all i, j = 1,...,n with i ≠ j
• Identity matrix:
[A]nxn is an identity matrix if it is diagonal with aii = 1 for i=1,...,n.
Shown as [I]
Review of Matrices
• Upper triangular matrix:
[A]nxn is upper triangular if aij = 0 for all i > j (all elements below the main diagonal are zero)
• Lower triangular matrix:
[A]nxn is lower triangular if aij = 0 for all i < j (all elements above the main diagonal are zero)
• Inverse of a matrix:
[A]-1 is the inverse of [A]nxn if [A]-1[A] = [I]
• Transpose of a matrix:
[B] is the transpose of [A]nxn if bij = aji. Shown as [A]' or [A]^T
Special Types of Square Matrices
Symmetric:                   Diagonal:                Identity:
[  5   1   2  16 ]           [ a11           ]        [ 1       ]
[  1   3   7  39 ]           [     a22       ]        [   1     ]
[  2   7   9   6 ]           [         ...   ]        [     1   ]
[ 16  39   6  88 ]           [           ann ]        [       1 ]

Upper Triangular:                     Lower Triangular:
[ a11  a12  ...  a1n ]                [ a11               ]
[      a22  ...  a2n ]                [ a21  a22          ]
[           ...      ]                [ ...               ]
[                ann ]                [ an1  ...      ann ]
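These forms are easy to generate and check with NumPy (a side illustration, not part of the original slides):

```python
import numpy as np

A = np.array([[ 5.,  1.,  2., 16.],
              [ 1.,  3.,  7., 39.],
              [ 2.,  7.,  9.,  6.],
              [16., 39.,  6., 88.]])

print(np.allclose(A, A.T))    # True: A equals its transpose, so it is symmetric
print(np.eye(4))              # 4 x 4 identity matrix
print(np.diag([1., 2., 3.]))  # diagonal matrix with the given diagonal entries
print(np.triu(A))             # upper triangular part of A (zeros below the diagonal)
print(np.tril(A))             # lower triangular part of A (zeros above the diagonal)
```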
Review of Matrices
• Matrix multiplication:
cij = Σ (k=1 to r) aik bkj
Note: [A][B] ≠ [B][A]
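The index pattern is easiest to see in a direct triple-loop sketch (plain Python for illustration; in practice NumPy's `@` operator does the same thing far more efficiently):

```python
def matmul(A, B):
    """Multiply A (n x r) by B (r x m): c_ij = sum over k of a_ik * b_kj."""
    n, r, m = len(A), len(B), len(B[0])
    assert len(A[0]) == r, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):          # row of A
        for j in range(m):      # column of B
            for k in range(r):  # sum over the inner dimension
                C[i][j] += A[i][k] * B[k][j]
    return C
```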
Review of Matrices
• Augmented matrix: is a special way of showing two
matrices together.
For example, [A] = [ a11  a12 ]  augmented with the column vector {B} = [ b1 ]
                   [ a21  a22 ]                                         [ b2 ]
is:
[ a11  a12 | b1 ]
[ a21  a22 | b2 ]
• Determinant of a matrix:
A single number. Determinant of [A] is shown as |A|.
Part 3- Objectives
Solving Small Numbers of Equations
There are many ways to solve a system of linear
equations:
• Graphical method
• Cramer's rule                 }  for n ≤ 3
• Method of elimination
• Numerical methods for solving larger numbers of
linear equations:
- Gauss elimination (Chp. 9)
- LU decomposition and matrix inversion (Chp. 10)
1. Graphical method
• For two equations (n = 2):
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
• Solve both equations for x2; the intersection of the lines represents the solution:
    x2 = -(a11/a12) x1 + b1/a12          [ x2 = (slope) x1 + intercept ]
    x2 = -(a21/a22) x1 + b2/a22
• For n = 3, each equation will be a plane on a 3D coordinate system.
Solution is the point where these planes intersect.
• For n > 3, graphical solution is not practical.
Graphical Method -Example
• Solve:
3 x1 + 2 x2 = 18
 -x1 + 2 x2 = 2
• Plot x2 vs. x1; the intersection of the two lines represents the solution.
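A minimal plotting sketch of this idea, using the two equations of the example (NumPy and Matplotlib assumed available):

```python
import numpy as np
import matplotlib.pyplot as plt

# 3*x1 + 2*x2 = 18  and  -x1 + 2*x2 = 2, each solved for x2
x1 = np.linspace(0.0, 8.0, 100)
plt.plot(x1, (18 - 3 * x1) / 2, label="3x1 + 2x2 = 18")   # x2 = -(a11/a12)x1 + b1/a12
plt.plot(x1, (2 + x1) / 2, label="-x1 + 2x2 = 2")         # x2 = -(a21/a22)x1 + b2/a22

# the intersection of the two lines is the solution of the system
sol = np.linalg.solve([[3.0, 2.0], [-1.0, 2.0]], [18.0, 2.0])
plt.plot(sol[0], sol[1], "ko")                            # marks (x1, x2) = (4, 3)
plt.xlabel("x1"); plt.ylabel("x2"); plt.legend(); plt.show()
```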
Graphical Method
(a) No solution    (b) Infinite solutions    (c) Ill-conditioned system
                                                 (sensitive to round-off errors)
2.Determinants and Cramer’s Rule
The determinant can be illustrated for a set of three equations:
[A]{x} = {B}

    | a11  a12  a13 |
D = | a21  a22  a23 |
    | a31  a32  a33 |

where [A] is the coefficient matrix:

      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]

The 2x2 minors are:

D11 = | a22  a23 | = a22 a33 - a32 a23
      | a32  a33 |

D12 = | a21  a23 | = a21 a33 - a31 a23
      | a31  a33 |

D13 = | a21  a22 | = a21 a32 - a31 a22
      | a31  a32 |
Graphical Method
Mathematically
• Coefficient matrices of (a) & (b) are singular: there is no unique solution
for these systems. The determinants of the coefficient matrices are zero, and
these matrices cannot be inverted.
• Coefficient matrix of (c) is almost singular; its inverse is difficult to
compute. This system has a unique solution, but it is hard to determine
numerically because of its extreme sensitivity to round-off errors.
Cramer’s Rule
[A]{X} = {B}

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]

D = a11 | a22  a23 |  -  a12 | a21  a23 |  +  a13 | a21  a22 |
        | a32  a33 |         | a31  a33 |         | a31  a32 |

     | b1  a12  a13 |        | a11  b1  a13 |        | a11  a12  b1 |
     | b2  a22  a23 |        | a21  b2  a23 |        | a21  a22  b2 |
     | b3  a32  a33 |        | a31  b3  a33 |        | a31  a32  b3 |
x1 = ---------------    x2 = ---------------    x3 = ---------------
            D                       D                       D
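Cramer's rule translates directly into a few lines of NumPy (a sketch with a hypothetical helper name; np.linalg.det supplies the determinants):

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve [A]{x} = {B} by Cramer's rule (practical only for small n)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("D = 0: singular (or nearly singular) system")
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the i-th column by {B}
        x[i] = np.linalg.det(Ai) / D
    return x

# the 3x3 system solved again in Example 2 later in the lecture
print(cramer_3x3([[3.0, -0.1, -0.2],
                  [0.1,  7.0, -0.3],
                  [0.3, -0.2, 10.0]],
                 [7.85, -19.3, 71.4]))    # approximately [ 3.  -2.5  7. ]
```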
Cramer’s Rule
• For a singular system, D = 0 and a solution cannot be obtained.
• For large systems Cramer's rule is not practical because calculating
determinants is costly.
3. Method of Elimination
• The basic strategy is to successively solve one of
the equations of the set for one of the
unknowns and to eliminate that variable from
the remaining equations by substitution.
• The elimination of unknowns can be extended to
systems with more than two or three equations.
However, the method becomes extremely
tedious to solve by hand.
Elimination of Unknowns Method
Given a 2x2 set of equations:
    2.5x1 + 6.2x2 = 3.0
    4.8x1 - 8.6x2 = 5.5
• Multiply the 1st eqn by 8.6 21.50x1 + 53.32x2=25.8
and the 2nd eqn by 6.2 29.76x1 – 53.32x2=34.1
• Add these equations 51.26 x1 + 0 x2 = 59.9
• Solve for x1 : x1 = 59.9/51.26 = 1.168552478
• Using the 1st eqn solve for x2 :
x2 =(3.0–2.5*1.168552478)/6.2 = 0.01268045242
• Check if these satisfy the 2nd eqn:
4.8*1.168552478–8.6*0.01268045242 = 5.500000004
(The difference is due to round-off errors.)
Naive Gauss Elimination Method
• It formalizes the previous elimination technique for large sets of
equations by developing a systematic scheme (algorithm) to eliminate
unknowns and to back-substitute.
• As in the case of the solution of two equations, the
technique for n equations consists of two phases:
1. Forward elimination of unknowns.
2. Back substitution.
Naive Gauss Elimination Method
• Consider the following system of n equations.
a11x1 + a12x2 + ... + a1nxn = b1 (1)
a21x1 + a22x2 + ... + a2nxn = b2 (2)
...
an1x1 + an2x2 + ... + annxn = bn (n)
Form the augmented matrix of [A|B].
Step 1 : Forward Elimination: Reduce the system to an upper triangular
system.
1.1- First eliminate x1 from 2nd to nth equations.
- Multiply the 1st eqn. by a21/a11 & subtract it from the 2nd equation.
This is the new 2nd eqn.
- Multiply the 1st eqn. by a31/a11 & subtract it from the 3rd equation.
This is the new 3rd eqn.
...
- Multiply the 1st eqn. by an1/a11 & subtract it from the nth equation.
This is the new nth eqn.
Naive Gauss Elimination Method (cont’d)
Note:
- In these steps the 1st eqn is the pivot equation and a11 is
the pivot element.
- Note that a division by zero may occur if the pivot
element is zero. Naive-Gauss Elimination does not check
for this.
The modified system is:

   [ a11  a12   a13  ...  a1n  ] [ x1 ]   [ b1  ]
   [  0   a'22  a'23 ...  a'2n ] [ x2 ]   [ b'2 ]
   [  0   a'32  a'33 ...  a'3n ] [ x3 ] = [ b'3 ]
   [ ...                       ] [ ...]   [ ... ]
   [  0   a'n2  a'n3 ...  a'nn ] [ xn ]   [ b'n ]

The prime (') indicates that the system has been modified once.
Naive Gauss Elimination Method (cont’d)
1.2- Now eliminate x2 from 3rd to nth equations.
The modified system is:

   [ a11  a12   a13   ...  a1n   ] [ x1 ]   [ b1   ]
   [  0   a'22  a'23  ...  a'2n  ] [ x2 ]   [ b'2  ]
   [  0    0    a''33 ...  a''3n ] [ x3 ] = [ b''3 ]
   [ ...                         ] [ ...]   [ ...  ]
   [  0    0    a''n3 ...  a''nn ] [ xn ]   [ b''n ]

Repeat steps (1.1) and (1.2) up to step (1.n-1).
We will get this upper triangular system:

   [ a11  a12   a13   ...  a1n       ] [ x1 ]   [ b1       ]
   [  0   a'22  a'23  ...  a'2n      ] [ x2 ]   [ b'2      ]
   [  0    0    a''33 ...  a''3n     ] [ x3 ] = [ b''3     ]
   [ ...                             ] [ ...]   [ ...      ]
   [  0    0     0    ...  a(n-1)nn  ] [ xn ]   [ b(n-1)n  ]
Naive Gauss Elimination Method (cont’d)
Step 2 : Back substitution
Find the unknowns starting from the last equation.
1. Last equation involves only xn. Solve for it:

   xn = bn^(n-1) / ann^(n-1)
2. Use this xn in the (n-1)th equation and solve for xn-1.
...
3. Use all previously calculated x values in the 1st eqn
and solve for x1.
         bi^(i-1)  -  Σ (j=i+1 to n) aij^(i-1) xj
   xi  = ----------------------------------------      for i = n-1, n-2, ..., 1
                       aii^(i-1)
Summary of Naive Gauss Elimination Method
Pseudo-code of Naive Gauss Elimination Method
(a) Forward Elimination                  (b) Back Substitution
[pseudo-code shown as figures on the original slide]
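Since the pseudo-code itself was given as figures, here is a minimal Python sketch of the same two phases (naive version: it assumes every pivot a_kk is nonzero and does no pivoting or scaling):

```python
def naive_gauss(A, b):
    """Solve A x = b by naive Gauss elimination.

    A is an n x n list of lists and b a list of length n; both are modified in place.
    """
    n = len(b)

    # (a) forward elimination: reduce the system to upper triangular form
    for k in range(n - 1):                  # pivot row / column
        for i in range(k + 1, n):           # rows below the pivot
            factor = A[i][k] / A[k][k]      # fails if the pivot is zero (naive!)
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

    # (b) back substitution: solve from the last equation upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```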
Naive Gauss Elimination Method
Example 1
Solve the following system using Naive Gauss Elimination.
6x1 – 2x2 + 2x3 + 4x4 = 16
12x1 – 8x2 + 6x3 + 10x4 = 26
3x1 – 13x2 + 9x3 + 3x4 = -19
-6x1 + 4x2 + x3 - 18x4 = -34
Step 0: Form the augmented matrix
 6   –2   2    4  |  16
12   –8   6   10  |  26     R2 - 2 R1
 3  –13   9    3  | -19     R3 - 0.5 R1
-6    4   1  -18  | -34     R4 - (-1) R1
Naive Gauss Elimination Method
Example 1 (cont’d)
Step 1: Forward elimination
1. Eliminate x1 6 –2 2 4 | 16 (Does not change. Pivot is 6)
0 –4 2 2 | -6
0 –12 8 1 | -27 R3-3R2
0 2 3 -14 | -18 R4-(-0.5R2)
2. Eliminate x2 6 –2 2 4 | 16 (Does not change.)
0 –4 2 2 | -6 (Does not change. Pivot is -4)
0 0 2 -5 | -9
0 0 4 -13 | -21 R4-2R3
3. Eliminate x3 6 –2 2 4 | 16 (Does not change.)
0 –4 2 2 | -6 (Does not change.)
0 0 2 -5 | -9 (Does not change. Pivot is 2)
0 0 0 -3 | -3
Naive Gauss Elimination Method
Example 1 (cont’d)
Step 2: Back substitution
Find x4 x4 =(-3)/(-3) = 1
Find x3 x3 =(-9+5*1)/2 = -2
Find x2 x2 =(-6-2*(-2)-2*1)/(-4) = 1
Find x1 x1 =(16+2*1-2*(-2)-4*1)/6= 3
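Example 1 can also be used as a quick check of the naive_gauss sketch given after the pseudo-code slide:

```python
A = [[ 6.0,  -2.0, 2.0,   4.0],
     [12.0,  -8.0, 6.0,  10.0],
     [ 3.0, -13.0, 9.0,   3.0],
     [-6.0,   4.0, 1.0, -18.0]]
b = [16.0, 26.0, -19.0, -34.0]

print(naive_gauss(A, b))   # -> [3.0, 1.0, -2.0, 1.0]
```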
Naive Gauss Elimination Method Example 2
(Using 6 Significant Figures)
3.0 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7.0 x2 - 0.3 x3 = -19.3 R2-(0.1/3)R1
0.3 x1 - 0.2 x2 + 10.0 x3 = 71.4 R3-(0.3/3)R1
Step 1: Forward elimination
3.00000 x1- 0.100000 x2 - 0.200000 x3 = 7.85000
7.00333 x2 - 0.293333 x3 = -19.5617
- 0.190000 x2 + 10.0200 x3 = 70.6150
3.00000 x1- 0.100000 x2 - 0.20000 x3 = 7.85000
7.00333 x2 - 0.293333 x3 = -19.5617
10.0120 x3 = 70.0843
Naive Gauss Elimination Method Example 2
(cont’d)
Step 2: Back substitution
x3 = 7.00003
x2 = -2.50000
x1 = 3.00000
Exact solution:
x3 = 7.0
x2 = -2.5
x1 = 3.0
Pitfalls of Gauss Elimination Methods
1. Division by zero
        2 x2 + 3 x3 =  8
4 x1 + 6 x2 + 7 x3 = -3          a11 = 0  (the pivot element)
2 x1 +   x2 + 6 x3 =  5
It is possible that during both elimination and back-
substitution phases a division by zero can occur.
2. Round-off errors
In the previous example, 6 significant figures were carried through the
calculations, and we still ended up with only an approximation to the true
solution:
x3 = 7.00003, instead of x3 = 7.0
Pitfalls of Gauss Elimination (cont’d)
3. Ill-conditioned systems
x1 + 2x2 = 10
1.1x1 + 2x2 = 10.4          →   x1 = 4.0  &  x2 = 3.0

x1 + 2x2 = 10
1.05x1 + 2x2 = 10.4         →   x1 = 8.0  &  x2 = 1.0
Ill-conditioned systems are those where small changes in the coefficients
result in large changes in the solution. Alternatively, they occur when two
or more equations are nearly identical, so that a wide range of answers
approximately satisfies the equations. Since round-off errors can induce
small changes in the coefficients, these changes can lead to large solution
errors.
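A quick NumPy check of this sensitivity (changing 1.1 to 1.05 in a single coefficient moves the solution from (4, 3) to (8, 1)):

```python
import numpy as np

print(np.linalg.solve([[1.00, 2.0], [1.10, 2.0]], [10.0, 10.4]))  # approx. [4. 3.]
print(np.linalg.solve([[1.00, 2.0], [1.05, 2.0]], [10.0, 10.4]))  # approx. [8. 1.]
```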
Pitfalls of Gauss Elimination (cont’d)
4. Singular systems.
• When two equations are identical, we would effectively be dealing with
only n-1 independent equations for n unknowns.
To check for singularity:
• After the forward elimination step produces the triangular system, the
determinant of that system is simply the product of the diagonal elements.
If a zero is created on the diagonal, the determinant is zero and the
system is singular.
• The determinant of a singular system is zero.
Techniques for Improving Solutions
1. Use of more significant figures to reduce the round-off error.
2. Pivoting. If a pivot element is zero, the elimination step leads to
division by zero. The same problem may arise when the pivot element is
close to zero. This problem can be avoided by:
Partial pivoting. Switching the rows so that the
largest element is the pivot element.
Complete pivoting. Searching for the largest element
in all rows and columns then switching.
3. Scaling
   - Helps solve the problem of ill-conditioned systems.
   - Minimizes round-off error.
Partial Pivoting
Before each row is normalized, find the largest available coefficient (in
absolute value) in the column below the pivot element. The rows are then
switched so that this largest coefficient becomes the pivot element.
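A sketch of the elimination loop with partial pivoting added (only the pivot search and row swap differ from the naive version given earlier; the helper name is mine, not from the slides):

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(b)
    for k in range(n - 1):
        # partial pivoting: pick the row with the largest |a_ik| at or below row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution, as in the naive sketch
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```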
Use of more significant figures to reduce the round-off error: Example
Use Gauss Elimination to solve these 2 equations:
(keeping only 4 sig. figures)
0.0003 x1 + 3.0000 x2 = 2.0001
1.0000 x1 + 1.0000 x2 = 1.000
0.0003 x1 + 3.0000 x2 = 2.0001
- 9999.0 x2 = -6666.0
Solve: x2 = 0.6667 & x1 = 0.0
The exact solution is x2 = 2/3 & x1 = 1/3
Use of more significant figures to reduce the round-off error: Example (cont'd)
x2 = 2/3          x1 = ( 2.0001 - 3(2/3) ) / 0.0003

Significant Figures        x2              x1
3                          0.667           -3
4                          0.6667           0.000
5                          0.66667          0.3000
6                          0.666667         0.33000
7                          0.6666667        0.333000
Pivoting: Example
Now, solving the previous example using the partial pivoting technique:
1.0000 x1+ 1.0000 x2 = 1.000
0.0003 x1+ 3.0000 x2 = 2.0001
The pivot is 1.0
1.0000 x1+ 1.0000 x2 = 1.000
2.9997 x2 = 1.9998
x2 = 0.6667 & x1=0.3333
Checking the effect of the # of significant digits:
# of digits       x2            x1            Ea% in x1
4                 0.6667        0.3333        0.01
5                 0.66667       0.33333       0.001
Scaling: Example
• Solve the following equations using naive Gauss elimination
(keeping only 3 sig. figures):
2 x1+ 100,000 x2 = 100,000
x1 + x2 = 2.0
• Forward elimination:
2 x1 + 10.0×10^4 x2 = 10.0×10^4
       -5.00×10^4 x2 = -5.00×10^4
Solve x2 = 1.00 & x1 = 0.00
• The exact solution is x1 = 1.00002 & x2 = 0.99998
Scaling: Example (cont’d)
B) Using the scaling algorithm to solve:
2 x1+ 100,000 x2 = 100,000
x1 + x2 = 2.0
Scaling the first equation by dividing by 100,000:
0.00002 x1+ x2 = 1.0
x1+ x2 = 2.0
Rows are pivoted:
x1 + x2 = 2.0
0.00002 x1+ x2 = 1.0
Forward elimination yields:
x1 + x2 = 2.0
x2 = 1.00
Solve: x2 = 1.00 & x1 = 1.00
The exact solution is x1 = 1.00002 & x2 = 0.99998
Scaling: Example (cont’d)
C) The scaled coefficients indicate that pivoting is necessary. We
therefore pivot but retain the original coefficients to give:
x1 + x2 = 2.0
2 x1+ 100,000 x2 = 100,000
Forward elimination yields:
x1 + x2 = 2.0
100,000 x2 = 100,000
Solve: x2 = 1.00 & x1 = 1.00
Thus, scaling was useful in determining whether pivoting was necessary,
but the equations themselves did not require scaling to arrive at a
correct result.
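One common way to implement this idea is scaled partial pivoting: keep the original equations, but choose the pivot row by comparing |a_ik| / s_i, where s_i is the largest coefficient magnitude in row i. A small sketch (the function name is hypothetical, not from the lecture):

```python
def scaled_pivot_row(A, k, scale):
    """Index of the row (k..n-1) with the largest |a_ik| / s_i."""
    return max(range(k, len(A)), key=lambda i: abs(A[i][k]) / scale[i])

A = [[2.0, 100000.0],
     [1.0,      1.0]]
scale = [max(abs(v) for v in row) for row in A]   # scale factors from the original rows
print(scaled_pivot_row(A, 0, scale))              # -> 1: pivot on the second equation first
```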
Determinant Evaluation
The determinant can be evaluated simply at the end of the forward
elimination step when the program employs partial pivoting:
D = a11 · a'22 · a''33 · ... · a(n-1)nn · (-1)^p
where:
p represents the number of times that rows are pivoted
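As a sketch, assuming the forward-elimination code has been extended to count the number of row interchanges p, the determinant then costs almost nothing extra:

```python
def det_from_elimination(U, p):
    """Determinant of the original matrix, given the upper triangular matrix U
    produced by forward elimination with p row interchanges."""
    d = float((-1) ** p)
    for i in range(len(U)):
        d *= U[i][i]              # product of the diagonal elements
    return d
```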
Example: Gauss Elimination
       2x2        +  x4 =  0
2x1 +  2x2 + 3x3 + 2x4 = -2
4x1 -  3x2        +  x4 = -7
6x1 +   x2 - 6x3 - 5x4 =  6
a) Forward Elimination
 0   2   0   1 |  0                     6   1  -6  -5 |  6
 2   2   3   2 | -2       R1 ↔ R4       2   2   3   2 | -2
 4  -3   0   1 | -7      --------->     4  -3   0   1 | -7
 6   1  -6  -5 |  6                     0   2   0   1 |  0
Example: Gauss Elimination (cont’d)
 6   1      -6      -5      |  6
 2   2       3       2      | -2      R2 - 0.33333 R1
 4  -3       0       1      | -7      R3 - 0.66667 R1
 0   2       0       1      |  0

 6   1      -6      -5      |  6
 0   1.6667  5       3.6667 | -4
 0  -3.6667  4       4.3333 | -11     R2 ↔ R3 (partial pivoting)
 0   2       0       1      |  0

 6   1      -6      -5      |  6
 0  -3.6667  4       4.3333 | -11
 0   1.6667  5       3.6667 | -4
 0   2       0       1      |  0
Example: Gauss Elimination (cont’d)
 6   1      -6      -5      |  6
 0  -3.6667  4       4.3333 | -11
 0   1.6667  5       3.6667 | -4        R3 - (-0.45455) R2
 0   2       0       1      |  0        R4 - (-0.54545) R2

 6   1      -6      -5      |  6
 0  -3.6667  4       4.3333 | -11
 0   0       6.8182  5.6364 | -9.0001
 0   0       2.1818  3.3636 | -5.9999   R4 - 0.32000 R3

 6   1      -6      -5      |  6
 0  -3.6667  4       4.3333 | -11
 0   0       6.8182  5.6364 | -9.0001
 0   0       0       1.5600 | -3.1199
Example: Gauss Elimination (cont’d)
 6   1      -6      -5      |  6
 0  -3.6667  4       4.3333 | -11
 0   0       6.8182  5.6364 | -9.0001
 0   0       0       1.5600 | -3.1199
b) Back Substitution
x4 = -3.1199 / 1.5600 = -1.9999
x3 = ( -9.0001 - 5.6364(-1.9999) ) / 6.8182 = 0.33325
x2 = ( -11 - 4.3333(-1.9999) - 4(0.33325) ) / (-3.6667) = 1.0000
x1 = ( 6 - (-5)(-1.9999) - (-6)(0.33325) - (1)(1.0000) ) / 6 = -0.50000
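As a cross-check, the gauss_partial_pivot sketch from the Partial Pivoting slide reproduces this result:

```python
A = [[0.0,  2.0,  0.0,  1.0],
     [2.0,  2.0,  3.0,  2.0],
     [4.0, -3.0,  0.0,  1.0],
     [6.0,  1.0, -6.0, -5.0]]
b = [0.0, -2.0, -7.0, 6.0]

print(gauss_partial_pivot(A, b))   # approximately [-0.5, 1.0, 0.33333, -2.0]
```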
Gauss-Jordan Elimination
• It is a variation of Gauss elimination. The
major differences are:
– When an unknown is eliminated, it is eliminated
from all other equations rather than just the
subsequent ones.
– All rows are normalized by dividing them by their
pivot elements.
– Elimination step results in an identity matrix.
– It is not necessary to employ back substitution to
obtain solution.
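A compact Python sketch of these differences (normalize the pivot row, then eliminate the column above and below the pivot; a naive version with no pivoting):

```python
def gauss_jordan(A, b):
    """Solve A x = b by Gauss-Jordan elimination (no pivoting)."""
    n = len(b)
    for k in range(n):
        piv = A[k][k]                       # normalize the pivot row
        A[k] = [v / piv for v in A[k]]
        b[k] /= piv
        for i in range(n):                  # eliminate column k from ALL other rows
            if i != k:
                factor = A[i][k]
                A[i] = [aij - factor * akj for aij, akj in zip(A[i], A[k])]
                b[i] -= factor * b[k]
    return b        # [A] is now the identity, so b holds the solution directly
```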
Gauss-Jordan Elimination- Example
 0   2   0   1 |  0
 2   2   3   2 | -2       R1 ↔ R4,
 4  -3   0   1 | -7       then divide the new R1 by 6.0
 6   1  -6  -5 |  6

 1   0.16667  -1  -0.83335 |  1
 2   2         3   2       | -2       R2 - 2 R1
 4  -3         0   1       | -7       R3 - 4 R1
 0   2         0   1       |  0
 1   0.16667  -1      -0.83335 |   1
 0   1.6667    5       3.6667  |  -4
 0  -3.6667    4       4.3334  | -11
 0   2         0       1       |   0
Dividing the 2nd row by 1.6667 and reducing the second column
(operating above the diagonal as well as below) gives:
 1   0    -1.5      -1.2000 |   1.4000
 0   1     2.9999    2.2000 |  -2.4000
 0   0    15.000    12.400  | -19.800
 0   0    -5.9998   -3.4000 |   4.8000
Divide the 3rd row by 15.000 and make the elements in the 3rd column zero:
 1   0   0    0.04000 | -0.58000
 0   1   0   -0.27993 |  1.5599
 0   0   1    0.82667 | -1.3200
 0   0   0    1.5599  | -3.1197
Divide the 4th row by 1.5599 and create zeros above the diagonal in the
fourth column:
 1   0   0   0 | -0.49999
 0   1   0   0 |  1.0001
 0   0   1   0 |  0.33326
 0   0   0   1 | -1.9999
Note: the Gauss-Jordan method requires almost 50% more operations than
Gauss elimination; therefore it is generally not recommended.