Chapter 3: Systems of Linear Equations
Consider the system of n linear equations in n unknowns
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn    (3.1)
where the a_ij (i, j = 1, 2, ⋯, n) are the known coefficients, the b_i (i = 1, 2, ⋯, n) are known values, and the x_i, i = 1, 2, ⋯, n, are the unknowns to be determined.
In matrix form the system can be written as Ax = b.    (3.2)
The matrix [A|b] is called the augmented matrix; it is formed by appending the column b to the n×n matrix A.
If all b_i are zero, then the system of equations (3.1) is said to be homogeneous, and if at least one of the b_i is not zero, then it is said to be non-homogeneous.
The non-homogeneous system (3.1) has a unique solution if and only if det(A) ≠ 0.
Notation
A⁻¹ → inverse of A
Aᵀ → transpose of A
O → zero matrix
I → identity matrix
The methods of solution of the linear algebraic system (3.1) may broadly be classified into two types.
A) Direct methods: these methods produce the exact solution after a finite number of steps.
B) Indirect (iterative) methods: these methods give a sequence of approximate solutions which converges as the number of steps tends to infinity.
A) Direct methods
We assume that the coefficients aik are not all zero, so that the matrix A is not the zero matrix.
The matrix

(A, B) =
[ a11  a12  ...  a1n | b1 ]
[ a21  a22  ...  a2n | b2 ]
[  .    .   ...   .  |  . ]
[ am1  am2  ...  amn | bm ]

is called the augmented matrix of the system (3.1).
Now, if necessary, we can still introduce (without any change to the solution, of course) some trivial equations and/or unknowns into the system (3.1) so as to make A a square matrix, because it is much simpler to work with a square matrix:

A =
[ a11  a12  ...  a1n ]
[ a21  a22  ...  a2n ]
[  .    .   ...   .  ]
[ an1  an2  ...  ann ]
So, to solve system (3.1), consider the augmented matrix (A, B). Our aim in this method is to reduce the augmented matrix (A, B) to one whose coefficient part is an upper triangular matrix, which we can then solve using backward substitution.
To do this we multiply the first row by −a_i1/a_11 (i = 2, 3, ..., n) and add it to the i-th row (provided a11 ≠ 0; if a11 is zero, take a row whose first-column entry is different from zero and interchange it with the first row so that a11 becomes nonzero; for this discussion we assume a11 ≠ 0). By doing this, all elements of the first column except a11 are made zero.

Now taking the resulting entry b22 as pivot element, we make all elements in the second column below b22 zero. That is, we multiply the second row by −b_i2/b_22 and add it to the corresponding elements of the i-th row (i = 3, 4, 5, ..., n). Thus all elements below b22 are made zero.

Next we consider c33 as pivot element and make all elements below c33 zero. Repeating this process again and again, all elements below the leading diagonal of the coefficient matrix are made zero.
Starting from the last equation we get x_n = k_n/a_nn (where k_n is the last entry of the reduced right-hand side), and using this we solve for x_{n−1}, x_{n−2}, ..., x_1 by backward substitution.

Note that if we instead reduce the system to a lower triangular matrix, we get the same values, using forward substitution.
Example 1: Solve the following system using the Gauss elimination method:
x + y + z = 3
x + 2y + 3z = 4
x + 4y + 9z = 6

Solution: The augmented matrix is
[ 1  1  1 | 3 ]
[ 1  2  3 | 4 ]
[ 1  4  9 | 6 ]
Operating R2 → R2 − R1 and R3 → R3 − R1, we get
[ 1  1  1 | 3 ]
[ 0  1  2 | 1 ]
[ 0  3  8 | 3 ]
and this matrix is equivalent to (A, B). Operating R3 → R3 − 3R2, we get
[ 1  1  1 | 3 ]
[ 0  1  2 | 1 ]
[ 0  0  2 | 0 ]
which is still equivalent to (A, B). The corresponding triangular system is
x + y + z = 3
y + 2z = 1
2z = 0
so backward substitution gives z = 0, y = 1 and x = 2.
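The elimination and backward-substitution steps above can be sketched in code. The following is a minimal illustration (the function name gauss_eliminate and the simple zero-pivot row interchange are our choices, not part of the original text):

```python
def gauss_eliminate(A, b):
    """Reduce [A | b] to upper triangular form, then back-substitute."""
    n = len(A)
    M = [row[:] + [float(b[i])] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n - 1):
        if M[k][k] == 0:                      # zero pivot: interchange rows
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # backward substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 above:
print(gauss_eliminate([[1, 1, 1], [1, 2, 3], [1, 4, 9]], [3, 4, 6]))  # [2.0, 1.0, 0.0]
```

For well-conditioned classroom examples this plain scheme suffices; production codes would use partial pivoting throughout for numerical stability.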
Example 2: Solve the system
2x1 + x2 + 2x3 + x4 = 6
6x1 − 6x2 + 6x3 + 12x4 = 36
4x1 + 3x2 + 3x3 − 3x4 = −1
2x1 + 2x2 − x3 + x4 = 10

Solution: Consider the augmented matrix
(A, B) =
[ 2   1   2   1 |  6 ]
[ 6  −6   6  12 | 36 ]
[ 4   3   3  −3 | −1 ]
[ 2   2  −1   1 | 10 ]
Operating R2 → R2 − 3R1, R3 → R3 − 2R1 and R4 → R4 − R1, we get
[ 2   1   2   1 |   6 ]
[ 0  −9   0   9 |  18 ]
[ 0   1  −1  −5 | −13 ]
[ 0   1  −3   0 |   4 ]
Operating R2 → −(1/9)R2, we get
[ 2   1   2   1 |   6 ]
[ 0   1   0  −1 |  −2 ]
[ 0   1  −1  −5 | −13 ]
[ 0   1  −3   0 |   4 ]
Operating R3 → R3 − R2 and R4 → R4 − R2, we get
[ 2   1   2   1 |   6 ]
[ 0   1   0  −1 |  −2 ]
[ 0   0  −1  −4 | −11 ]
[ 0   0  −3   1 |   6 ]
Finally, operating R4 → R4 − 3R3, we get
[ 2   1   2   1 |   6 ]
[ 0   1   0  −1 |  −2 ]
[ 0   0  −1  −4 | −11 ]
[ 0   0   0  13 |  39 ]
The equivalent triangular system is
2x1 + x2 + 2x3 + x4 = 6
x2 − x4 = −2
−x3 − 4x4 = −11
13x4 = 39
⇒ x4 = 3; then −x3 = −11 + 4(3) = 1 ⇒ x3 = −1; x2 = −2 + 3 = 1; and 2x1 = 6 − 1 − 2(−1) − 3 = 4 ⇒ x1 = 2.
Example 3: Solve the system
5x1 + x2 − 3x3 + 2x4 = 2
2x1 − x2 − 2x3 − x4 = 1
x1 + x2 + x3 + x4 = 1
x1 − 2x2 − x3 − 3x4 = 1

Solution: To solve the system, first interchange the first row with the third row or the fourth row; without loss of generality, let us interchange it with the third row. Thus the augmented matrix has the form
[ 1   1   1   1 | 1 ]
[ 2  −1  −2  −1 | 1 ]
[ 5   1  −3   2 | 2 ]
[ 1  −2  −1  −3 | 1 ]
Operating R2 → R2 − 2R1, R3 → R3 − 5R1 and R4 → R4 − R1, we get
[ 1   1   1   1 |  1 ]
[ 0  −3  −4  −3 | −1 ]
[ 0  −4  −8  −3 | −3 ]
[ 0  −3  −2  −4 |  0 ]
Again operating R3 → R3 − (4/3)R2 and R4 → R4 − R2, we get
[ 1   1    1    1 |    1 ]
[ 0  −3   −4   −3 |   −1 ]
[ 0   0  −8/3   1 | −5/3 ]
[ 0   0    2   −1 |    1 ]
Operating R4 → R4 + (3/4)R3 turns the last row into
[ 0   0    0  −1/4 | −1/4 ]
so that x4 = 1. Continuing the elimination above the diagonal as well (the Gauss–Jordan approach), for instance R1 → R1 + (1/3)R2 and so on, the matrix reduces to the diagonal form
[ 3  0  0  0 |  3 ]
[ 0  3  0  0 | −6 ]
[ 0  0  8  0 |  8 ]
[ 0  0  0  1 |  1 ]
which is equivalent to
3x1 = 3, 3x2 = −6, 8x3 = 8, x4 = 1.
Hence x1 = 1, x2 = −2, x3 = 1, x4 = 1.
Consider again the system
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn    (2.1)
In matrix form,
[ a11  a12  ⋯  a1n ] [ x1 ]   [ b1 ]
[ a21  a22  ⋯  a2n ] [ x2 ] = [ b2 ]
[  ⋮    ⋮   ⋱   ⋮  ] [  ⋮ ]   [  ⋮ ]
[ an1  an2  ⋯  ann ] [ xn ]   [ bn ]
That is, Ax = b, where x = (x1, x2, ⋯, xn)^T and b = (b1, ⋯, bn)^T.    (2.2)
The LU decomposition method
In this method, the coefficient matrix A of the system (2.1) is decomposed into the product of a lower triangular matrix L and an upper triangular matrix U, A = LU,
where
L =
[ l11   0   ⋯   0  ]
[ l21  l22  ⋯   0  ]
[  ⋮    ⋮   ⋱   ⋮  ]
[ ln1  ln2  ⋯  lnn ]
and
U =
[ u11  u12  ⋯  u1n ]
[  0   u22  ⋯  u2n ]
[  ⋮    ⋮   ⋱   ⋮  ]
[  0    0   ⋯  unn ]
Multiplying L and U and comparing the elements of the product with those of A determines the unknown entries of L and U. When we choose l_ii = 1, the method is called Doolittle's method, and if u_ii = 1, the method is called Crout's method.
Then the system becomes
LUx = b    (2.3)
Writing
Ux = z    (2.4)
we have
Lz = b    (2.5)
The unknowns z = (z1, z2, ..., zn)^T in (2.5) are determined by forward substitution, and the unknowns x = (x1, x2, ..., xn)^T in (2.4) are then obtained by back substitution.
Example: Solve the system
x1 + x2 + x3 = 1
4x1 + 3x2 − x3 = 6
3x1 + 5x2 + 3x3 = 4

In matrix form,
[ 1  1   1 ] [ x1 ]   [ 1 ]
[ 4  3  −1 ] [ x2 ] = [ 6 ]
[ 3  5   3 ] [ x3 ]   [ 4 ]
⇒ A = [1 1 1; 4 3 −1; 3 5 3], b = (1, 6, 4)^T, x = (x1, x2, x3)^T.
Let (Crout's method, with u_ii = 1)
L =
[ l11   0    0  ]
[ l21  l22   0  ]
[ l31  l32  l33 ]
and
U =
[ 1  u12  u13 ]
[ 0   1   u23 ]
[ 0   0    1  ]
Then LU = A:
[ l11   l11 u12        l11 u13                  ]   [ 1  1   1 ]
[ l21   l21 u12 + l22  l21 u13 + l22 u23        ] = [ 4  3  −1 ]
[ l31   l31 u12 + l32  l31 u13 + l32 u23 + l33  ]   [ 3  5   3 ]
Comparing entries: l11 = 1, u12 = 1, u13 = 1; l21 = 4, l22 = 3 − 4 = −1, u23 = (−1 − 4)/(−1) = 5; l31 = 3, l32 = 5 − 3 = 2, l33 = 3 − 3 − 2(5) = −10. Thus
L =
[ 1   0    0  ]
[ 4  −1    0  ]
[ 3   2  −10 ]
and
U =
[ 1  1  1 ]
[ 0  1  5 ]
[ 0  0  1 ]
To find the unknowns x = (x1, x2, x3)^T, let Ux = z, where z = (z1, z2, z3)^T:
[ 1  1  1 ] [ x1 ]   [ z1 ]
[ 0  1  5 ] [ x2 ] = [ z2 ]
[ 0  0  1 ] [ x3 ]   [ z3 ]
But Lz = b:
[ 1   0    0  ] [ z1 ]   [ 1 ]
[ 4  −1    0  ] [ z2 ] = [ 6 ]
[ 3   2  −10 ] [ z3 ]   [ 4 ]
By forward substitution, z1 = 1; 4z1 − z2 = 6 ⇒ z2 = −2; 3z1 + 2z2 − 10z3 = 4 ⇒ z3 = −1/2.
Thus z = (1, −2, −1/2)^T, and
[ 1  1  1 ] [ x1 ]   [  1   ]
[ 0  1  5 ] [ x2 ] = [ −2   ]
[ 0  0  1 ] [ x3 ]   [ −1/2 ]
By back substitution, x3 = −1/2, x2 = −2 − 5(−1/2) = 1/2 and x1 = 1 − 1/2 + 1/2 = 1.
Therefore the solution is X = (1, 1/2, −1/2)^T.
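The entry-by-entry comparison above can be organized into a short routine. The following sketch assumes Crout's convention (u_ii = 1) and that no pivoting is needed; the function names are our own:

```python
def crout_lu(A):
    """Crout decomposition A = L U with u_ii = 1 (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):
            # column j of L: l_ij = a_ij - sum_k l_ik * u_kj
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):
            # row j of U: u_ji = (a_ji - sum_k l_jk * u_ki) / l_jj
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    z = [0.0] * n
    for i in range(n):                       # forward substitution: L z = b
        z[i] = (b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution: U x = z
        x[i] = z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[1, 1, 1], [4, 3, -1], [3, 5, 3]]
L, U = crout_lu(A)
print(solve_lu(L, U, [1, 6, 4]))   # [1.0, 0.5, -0.5]
```

Running this on the worked example reproduces the L and U found by hand (for instance l33 = −10 and u23 = 5).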
Indirect methods for solving systems of linear equations almost always use an iterative scheme of the form X^(k+1) = H X^(k) + C, where X^(k+1) and X^(k) are the (k+1)-th and k-th approximations of X, respectively; H is called the iteration matrix, which depends on A, and C is a column vector.
The Jacobi iteration method
Consider
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn
We assume the quantities a_ii ≠ 0 are the pivot elements, so that the equations in the system may be written as the Jacobi iteration scheme:
x1^(k+1) = −(1/a11)(a12 x2^(k) + a13 x3^(k) + ⋯ + a1n xn^(k)) + b1/a11
x2^(k+1) = −(1/a22)(a21 x1^(k) + a23 x3^(k) + ⋯ + a2n xn^(k)) + b2/a22
⋮
xn^(k+1) = −(1/ann)(an1 x1^(k) + an2 x2^(k) + ⋯ + an,n−1 x_{n−1}^(k)) + bn/ann
Example: Solve the system
20x1 + x2 − 2x3 = 17
3x1 + 20x2 − x3 = −18
2x1 − 3x2 + 20x3 = 25
by the Jacobi iteration method.
Solution: The Jacobi scheme is
x1^(k+1) = −(1/20)(x2^(k) − 2x3^(k)) + 17/20
x2^(k+1) = −(1/20)(3x1^(k) − x3^(k)) − 18/20
x3^(k+1) = −(1/20)(2x1^(k) − 3x2^(k)) + 25/20    (*)
Let the initial approximation be x1^(0) = x2^(0) = x3^(0) = 0; now substitute these initial values into the right-hand sides of (*) to get x1^(1), x2^(1), x3^(1). Thus
x1^(1) = 17/20 = 0.85
x2^(1) = −18/20 = −0.9
x3^(1) = 25/20 = 1.25
x1^(2) = −(1/20)(−0.9 − 2(1.25)) + 17/20 = 1.02
x2^(2) = −(1/20)(3(0.85) − 1.25) − 18/20 = −0.965
x3^(2) = −(1/20)(2(0.85) − 3(−0.9)) + 25/20 = 1.03
x1^(3) = −(1/20)(−0.965 − 2(1.03)) + 17/20 = 1.00125
x2^(3) = −(1/20)(3(1.02) − 1.03) − 18/20 = −1.0015
x3^(3) = −(1/20)(2(1.02) − 3(−0.965)) + 25/20 = 1.00325
The iterates converge to the exact solution (1, −1, 1).
The Gauss–Seidel method instead uses, on the right-hand side, all the values already computed in the present iteration. The Gauss–Seidel iteration method for the system is defined as
x1^(k+1) = −(1/a11)(a12 x2^(k) + a13 x3^(k) + ⋯ + a1n xn^(k)) + b1/a11
x2^(k+1) = −(1/a22)(a21 x1^(k+1) + a23 x3^(k) + ⋯ + a2n xn^(k)) + b2/a22
⋮
xn^(k+1) = −(1/ann)(an1 x1^(k+1) + an2 x2^(k+1) + ⋯ + an,n−1 x_{n−1}^(k+1)) + bn/ann
The Jacobi and Gauss–Seidel methods converge for any choice of the initial approximation if the coefficient matrix is strictly diagonally dominant, that is, if for each equation
|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij| ,  i = 1, 2, ⋯, n.
The convergence of the Gauss–Seidel method is roughly twice as fast as that of the Jacobi method.
Example: Solve the system
20x1 + x2 − 2x3 = 17
3x1 + 20x2 − x3 = −18
2x1 − 3x2 + 20x3 = 25
by the Gauss–Seidel method.
Solution: First we write the given system in the Gauss–Seidel iteration scheme:
x1^(k+1) = −(1/20)(x2^(k) − 2x3^(k)) + 17/20
x2^(k+1) = −(1/20)(3x1^(k+1) − x3^(k)) − 18/20
x3^(k+1) = −(1/20)(2x1^(k+1) − 3x2^(k+1)) + 25/20    (**)
Let the initial approximation be x1^(0) = x2^(0) = x3^(0) = 0; substituting these initial values into the right-hand sides of (**) gives
x1^(1) = 17/20 = 0.85
x2^(1) = −(1/20)(3(0.85) − 0) − 18/20 = −1.0275
x3^(1) = −(1/20)(2(0.85) − 3(−1.0275)) + 25/20 = 1.0109
x1^(2) = −(1/20)(−1.0275 − 2(1.0109)) + 17/20 = 1.0025
x2^(2) = −(1/20)(3(1.0025) − 1.0109) − 18/20 = −0.9998
x3^(2) = −(1/20)(2(1.0025) − 3(−0.9998)) + 25/20 = 0.9998
x1^(3) ≈ 1, x2^(3) ≈ −1, x3^(3) ≈ 1,
so the solution is x1 = 1, x2 = −1, x3 = 1.
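The only change relative to the Jacobi sketch is that each component is overwritten in place, so later components in the same sweep already use the new values. A minimal sketch (names are our own):

```python
def gauss_seidel(A, b, iterations=25):
    """Gauss-Seidel scheme: each new component is used as soon as it is available."""
    n = len(b)
    x = [0.0] * n                             # initial approximation x(0) = 0
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]       # overwrite in place
    return x

A = [[20, 1, -2], [3, 20, -1], [2, -3, 20]]
b = [17, -18, 25]
print(gauss_seidel(A, b))   # converges to approximately [1.0, -1.0, 1.0]
```

On this diagonally dominant system, Gauss–Seidel reaches a given accuracy in roughly half as many sweeps as Jacobi, matching the remark above.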
Chapter 4: Finite Differences
FIRST DIFFERENCES
Consider the differences y1 − y0, y2 − y1, ..., yn − y_{n−1}; these are called the first differences, i.e.
Δy0 = y1 − y0
Δy1 = y2 − y1
Δy2 = y3 − y2
⋮
Δy_{n−1} = yn − y_{n−1}

Higher Differences
Similarly, Δ²y0 = Δ(Δy0) = Δy1 − Δy0, Δ³y0 = Δ²y1 − Δ²y0, and so on. In general,
Δⁿy_i = Δ^{n−1}y_{i+1} − Δ^{n−1}y_i.
Though x0, x1, x2, x3, ..., xn need not be equally spaced, for the time being and for purposes of simplicity we assume that they are equally spaced.
Operators
i) Forward difference operator Δ: Δf(x) = f(x + h) − f(x), so that Δy0 = y1 − y0, Δy1 = y2 − y1, etc.
ii) Backward difference operator ∇: ∇f(x) = f(x) − f(x − h).
By definition ∇y1 = y1 − y0, ∇y2 = y2 − y1, etc. Also
∇²f(x) = ∇(f(x) − f(x − h)) = ∇f(x) − ∇f(x − h) = f(x) − 2f(x − h) + f(x − 2h).
iii) Central difference operator δ:
δf(x) = f(x + h/2) − f(x − h/2), i.e. δy_x = y_{x+h/2} − y_{x−h/2}.
iv) Shift operator E:
Ef(x) = f(x + h), i.e. Ey_x = y_{x+h}.
This means Ey0 = y1, Ey1 = y2, Ey2 = y3, etc. Further,
E²y_x = E(y_{x+h}) = y_{x+2h}, E³y_x = y_{x+3h}, and in general Eⁿf(x) = f(x + nh), i.e. Eⁿy_x = y_{x+nh}.
The inverse shift operator is defined by E⁻¹f(x) = f(x − h), and in general E⁻ʳf(x) = f(x − rh).
v) Averaging operator
The averaging operator is defined by
μ y_x = (1/2)(y_{x+h/2} + y_{x−h/2}).
vi) Differential operator
Df(x) = (d/dx) f(x), D²f(x) = (d²/dx²) f(x), etc.
Note that all the above operators are linear: for instance, Δ(f(x) + g(x)) = Δf(x) + Δg(x) and Δ(c f(x)) = c Δf(x). Notice also that two operators G1 and G2 are equal if and only if G1 f(x) = G2 f(x) for all f(x).
We know that Δf(x) = f(x + h) − f(x) = Ef(x) − f(x) = (E − 1)f(x), hence
Δ ≡ E − 1, i.e. E ≡ 1 + Δ.
Similarly ∇f(x) = f(x) − f(x − h) = (1 − E⁻¹)f(x), so that
∇ ≡ 1 − E⁻¹ = (E − 1)/E.
Since Df(x) = (d/dx) f(x), we observe from the famous Taylor's theorem that for a given function f,
Ef(x) = f(x + h) = f(x) + hDf(x) + (h²D²f(x))/2! + (h³D³f(x))/3! + ⋯
      = (1 + hD + (hD)²/2! + (hD)³/3! + ⋯) f(x),
so that
E ≡ 1 + hD + (hD)²/2! + (hD)³/3! + ⋯ = e^{hD}.
Consequently,
Δ = E − 1 = e^{hD} − 1 = hD + (hD)²/2! + (hD)³/3! + ⋯,
and taking logarithms, hD = ln(1 + Δ), i.e.
D = (1/h) ln(1 + Δ) = (1/h)(Δ − Δ²/2 + Δ³/3 − ⋯).
Since δf(x) = f(x + h/2) − f(x − h/2) = E^{1/2} f(x) − E^{−1/2} f(x), we have
δ ≡ E^{1/2} − E^{−1/2}.
Similarly μ ≡ (1/2)(E^{1/2} + E^{−1/2}). One can also verify the identities
Δ = E^{1/2} δ and μ² = 1 + δ²/4.
Now let us see these operators using tables, because the subsequent discussions use such tables. The finite forward differences of a function are represented below in tabular form.
x     y      Δy      Δ²y      Δ³y      Δ⁴y      Δ⁵y      Δ⁶y
x0    y0
             Δy0
x1    y1             Δ²y0
             Δy1              Δ³y0
x2    y2             Δ²y1              Δ⁴y0
             Δy2              Δ³y1              Δ⁵y0
x3    y3             Δ²y2              Δ⁴y1              Δ⁶y0
             Δy3              Δ³y2              Δ⁵y1
x4    y4             Δ²y3              Δ⁴y2
             Δy4              Δ³y3
x5    y5             Δ²y4
             Δy5
x6    y6
Whenever we need to establish the finite forward differences of a certain function, starting from the first forward differences we go by increasing the order of these differences till the n-th order. Similarly, the backward differences can be tabulated:
x     y      ∇y      ∇²y      ∇³y      ∇⁴y      ∇⁵y      ∇⁶y
x0    y0
             ∇y1
x1    y1             ∇²y2
             ∇y2              ∇³y3
x2    y2             ∇²y3              ∇⁴y4
             ∇y3              ∇³y4              ∇⁵y5
x3    y3             ∇²y4              ∇⁴y5              ∇⁶y6
             ∇y4              ∇³y5              ∇⁵y6
x4    y4             ∇²y5              ∇⁴y6
             ∇y5              ∇³y6
x5    y5             ∇²y6
             ∇y6
x6    y6
Since E ≡ 1 + Δ, we have
y_k = E^k y0 = (1 + Δ)^k y0
    = [1 + C(k,1)Δ + C(k,2)Δ² + ⋯ + Δ^k] y0
    = y0 + C(k,1)Δy0 + C(k,2)Δ²y0 + ⋯ + Δ^k y0,    (*)
where C(k,i) = k!/(i!(k−i)!) denotes the binomial coefficient.
Now let us see some examples how we can efficiently use this result to find a solution for
different problems.
Example 1: Find the next term of the sequence 2, 9, 28, 65, 126, 217, ... and also find the general term.
Solution: For this given problem, first we establish the forward difference table and then use equation (*) to find the 7th term and the general term.

x    y      Δy     Δ²y     Δ³y     Δ⁴y
0    2
            7
1    9             12
            19              6
2    28            18               0
            37              6
3    65            24               0
            61              6
4    126           30
            91
5    217

7th term = y6 = y0 + C(6,1)Δy0 + C(6,2)Δ²y0 + C(6,3)Δ³y0
              = 2 + 6(7) + 15(12) + 20(6) = 344.
General term:
y_n = y0 + C(n,1)Δy0 + C(n,2)Δ²y0 + C(n,3)Δ³y0
    = 2 + 7n + (n(n−1)/2)(12) + (n(n−1)(n−2)/3!)(6)
    = 2 + 7n + 6n² − 6n + n³ − 3n² + 2n
    = n³ + 3n² + 3n + 2 = (n + 1)³ + 1.
As a check, y6 = (6 + 1)³ + 1 = 344.
Example 2: Find f(x) from the table below and also find f(7).
x:    0  1   2   3    4    5    6
f(x): 1  5  21  55  113  201  325
Solution: From the forward difference table, y0 = 1, Δy0 = 4, Δ²y0 = 12, Δ³y0 = 6, and the higher differences vanish. Hence
y_x = E^x y0 = (1 + Δ)^x y0
    = y0 + C(x,1)Δy0 + C(x,2)Δ²y0 + C(x,3)Δ³y0 + ⋯
    = 1 + 4x + (x(x−1)/2)(12) + (x(x−1)(x−2)/6)(6)
    = x³ + 3x² + 1.
Therefore f(7) = 343 + 147 + 1 = 491.
Using the backward difference operator we can also develop a method to solve such problems. We know that ∇y_n = y_n − y_{n−1}, so
y_{n−1} = y_n − ∇y_n = (1 − ∇) y_n.
Similarly,
y_{n−2} = (1 − ∇) y_{n−1} = (1 − ∇)² y_n, y_{n−3} = (1 − ∇)³ y_n, etc.
Thus
y_{n−k} = (1 − ∇)^k y_n
        = y_n − C(k,1)∇y_n + C(k,2)∇²y_n − ⋯ + (−1)^k ∇^k y_n.
Example 3: Find y(−1) if y(0) = 2, y(1) = 9, y(2) = 28, y(3) = 65, y(4) = 126, y(5) = 217.
Solution: The backward difference table is

x    y      ∇y     ∇²y     ∇³y     ∇⁴y
0    2
            7
1    9             12
            19              6
2    28            18               0
            37              6
3    65            24               0
            61              6
4    126           30
            91
5    217

so ∇y5 = 91, ∇²y5 = 30, ∇³y5 = 6 and the higher backward differences vanish. Then
y(−1) = y_{5−6} = (1 − ∇)⁶ y5
      = y5 − C(6,1)∇y5 + C(6,2)∇²y5 − C(6,3)∇³y5 + ⋯
      = 217 − 6(91) + 15(30) − 20(6) + 0 = 1.
As a check,
y(0) = (1 − ∇)⁵ y5 = y5 − C(5,1)∇y5 + C(5,2)∇²y5 − C(5,3)∇³y5 + ⋯
     = 217 − 5(91) + 10(30) − 10(6) = 2,
which agrees with the given value.
Example 4: Express Δ⁴y3 in terms of the ordinates y3, ..., y7.
Solution: Δ⁴y3 = (E − 1)⁴ y3 = (E⁴ − 4E³ + 6E² − 4E + 1) y3
              = E⁴y3 − 4E³y3 + 6E²y3 − 4Ey3 + y3
              = y7 − 4y6 + 6y5 − 4y4 + y3.
Example 5: Find y6 if y0 = 9, y1 = 16, y2 = 30, y3 = 45, given that the third differences are constant.
Solution: Since the third differences are constant, the fourth and higher differences vanish, so
y6 = E⁶y0 = (1 + Δ)⁶ y0 = (1 + 6Δ + 15Δ² + 20Δ³) y0
   = (1 + 6(E − 1) + 15(E − 1)² + 20(E − 1)³) y0
   = (20E³ − 45E² + 36E − 10) y0
   = 20y3 − 45y2 + 36y1 − 10y0
   = 20(45) − 45(30) + 36(16) − 10(9) = 36.
Example 6: Find the missing value of f(x) in the table with x: 2, 3, 4, 5, 6.
Solution: Since only four values of f(x) are given, we assume that the polynomial which fits the data is of degree three. Hence the fourth differences vanish:
(E − 1)⁴ y0 = 0 ⟹ (E⁴ − 4E³ + 6E² − 4E + 1) y0 = 0
⟹ y4 − 4y3 + 6y2 − 4y1 + y0 = 0.
Substituting the four known values and solving gives y3 = 60.05.
Example 7: Estimate the production for 1964 and 1966 from the given data, in which the values for 1964 (y3) and 1966 (y5) are missing. Since five values are known, we assume a fourth-degree polynomial, so the fifth differences vanish:
(E − 1)⁵ y_k = 0 ⟹ (E⁵ − 5E⁴ + 10E³ − 10E² + 5E − 1) y_k = 0.
Taking k = 0, we get y5 − 5y4 + 10y3 − 10y2 + 5y1 − y0 = 0, which on substituting the known values reduces to
y5 + 10y3 = 3450.    (*)
Taking k = 1, we get y6 − 5y5 + 10y4 − 10y3 + 5y2 − y1 = 0, which reduces to
5y5 + 10y3 = 5010.    (**)
Subtracting (*) from (**) gives 4y5 = 1560, so y5 = 390 and then y3 = 306.
Example 8: Find the missing term in the following table.
x: 1 2 3 4 5 6 7
y: 2 4 8 − 32 64 128
Solution: There are six values given, so a unique fifth-degree polynomial fits them, and the sixth differences vanish:
(E − 1)⁶ y0 = 0 ⟹ (E⁶ − 6E⁵ + 15E⁴ − 20E³ + 15E² − 6E + 1) y0 = 0
⟹ y6 − 6y5 + 15y4 − 20y3 + 15y2 − 6y1 + y0 = 0
⟹ 128 − 6(64) + 15(32) − 20y3 + 15(8) − 6(4) + 2 = 0
⟹ 322 − 20y3 = 0 ⟹ y3 = 16.1.
(Note that the missing term is not 16, even though the other entries are powers of 2, because a fifth-degree polynomial cannot reproduce 2^x exactly.)
From these examples we have seen that to find missing terms, say n of them, we establish n equations using the shifting operator n times, and then solve these equations simultaneously by the methods developed so far for the missing terms.
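For a single missing term, the vanishing-difference condition can be solved directly. A sketch of this idea (the helper name fill_missing is ours):

```python
from math import comb

def fill_missing(y):
    """y contains exactly one None.  With n = len(y) - 1, assume the data come
    from a polynomial of degree n - 1, so (E - 1)^n y0 = 0, i.e.
    the sum over i of (-1)^i * C(n, i) * y_{n-i} is zero; solve for the gap."""
    n = len(y) - 1
    total, c_missing = 0.0, None
    for i in range(n + 1):
        c = (-1) ** i * comb(n, i)       # coefficient multiplying y_{n-i}
        if y[n - i] is None:
            c_missing = c
        else:
            total += c * y[n - i]
    return -total / c_missing

print(fill_missing([2, 4, 8, None, 32, 64, 128]))   # 16.1
```

With several missing terms one would instead collect one such equation per shift of k, as in Example 7, and solve the resulting linear system.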
Exercise
5. Prove that y4 = y3 + Δy2 + Δ²y1 + Δ³y0 + Δ⁴y0.
6. Find y6 if y0 = 9, y1 = 18, y2 = 20, y3 = 24, given that the third differences are constant.
8. Find the n-th term of the sequence 1, 4, 10, 20, 35, 56, ... . Also find the 8th term.
Chapter 5: Interpolation
Introduction
In this chapter, we consider the problem of approximating a given function by a class of simpler functions, mainly polynomials.
The first use is in reconstructing the function f(x) when it is not given explicitly and only the values of f(x) (and perhaps its derivatives) at a set of points, called nodes or tabular points, are known.
The second use is to replace the function f(x) by an interpolating polynomial p(x), so that many common operations intended for the function, such as differentiation and integration, may be performed using p(x).
Definition: A polynomial p(x) is called an interpolating polynomial if the value of p(x), or of its derivatives of certain orders, coincides with that of f(x), or of its derivatives of the same orders, at one or more tabular points.
Lagrange interpolation
For two nodes x0 and x1, the linear Lagrange interpolating polynomial is
p1(x) = Σ_{i=0}^{1} L_i(x) f(x_i) = L0(x) f(x0) + L1(x) f(x1)
⇒ p1(x) = ((x − x1)/(x0 − x1)) f(x0) + ((x − x0)/(x1 − x0)) f(x1).
Example 1: Given f(2) = 4 and f(2.5) = 5.5, find the linear interpolating polynomial using Lagrange interpolation.
p1(x) = Σ_{i=0}^{1} L_i(x) f(x_i) = L0(x) f(x0) + L1(x) f(x1)
⇒ p1(x) = ((x − 2.5)/(2 − 2.5)) (4) + ((x − 2)/(2.5 − 2)) (5.5)
        = −8(x − 2.5) + 11(x − 2)
        = 3x − 2.
Example 2:
Let x0 = 2, x1 = 2.5, x2 = 4; find the quadratic interpolating polynomial for f(x) = 1/x.
p2(x) = Σ_{i=0}^{2} L_i(x) f(x_i)
⇒ p2(x) = ((x − 2.5)(x − 4)/((2 − 2.5)(2 − 4))) (1/2)
        + ((x − 2)(x − 4)/((2.5 − 2)(2.5 − 4))) (1/2.5)
        + ((x − 2)(x − 2.5)/((4 − 2)(4 − 2.5))) (1/4)
        = 0.05x² − 0.425x + 1.15.
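Evaluating a Lagrange interpolant directly from its definition is a few lines of code. A minimal sketch (the function name lagrange is ours):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)    # fundamental polynomial L_i(x)
        total += Li * yi
    return total

# Example 2 above: nodes 2, 2.5, 4 with f(x) = 1/x
print(lagrange([2, 2.5, 4], [0.5, 0.4, 0.25], 3))   # p2(3) = 0.325
```

At x = 3 this agrees with 0.05(9) − 0.425(3) + 1.15 = 0.325, while the true value 1/3 differs by the truncation error discussed next.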
Graphically, the interpolating polynomial p2(x) agrees with the given function f(x) = 1/x at the three nodes.
The truncation error in Lagrange interpolation is given by
E_n(x) = f(x) − p(x) = ((x − x0)(x − x1)⋯(x − xn)/(n + 1)!) f^{(n+1)}(ξ),
where x0 ≤ ξ ≤ xn.
Newton's divided difference interpolation
The first divided differences are
f[x0, x1] = (y1 − y0)/(x1 − x0), f[x1, x2] = (y2 − y1)/(x2 − x1), ⋯, f[x_{n−1}, x_n] = (y_n − y_{n−1})/(x_n − x_{n−1}).
Note: the zeroth divided difference of the function f with respect to x_i is denoted by f[x_i] and is simply f evaluated at x_i, i.e. f[x_i] = f(x_i).
Higher-order divided differences are built from lower-order ones in the same way; for example, f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0). Let y0, y1, ⋯, yn be the values corresponding to the nodes x0 < x1 < x2 < ⋯ < xn. The Newton divided difference interpolating formula for unequally spaced nodes is defined as
p_n(x) = f[x0] + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2] + ⋯ + (x − x0)(x − x1)⋯(x − x_{n−1}) f[x0, x1, ⋯, x_n].
Example: Construct the divided difference table for the following data and find the interpolating polynomial for the data.
x:     −4    −1    0    2    5
f(x):  1245  33    5    9    1335
Solution: First we form the divided difference table for the data.
x     f(x)     First d.d   Second d.d   Third d.d   Fourth d.d
−4    1245
               −404
−1    33                   94
               −28                      −14
0     5                    10                       3
               2                        13
2     9                    88
               442
5     1335
Newton's divided difference formula gives
p4(x) = f[x0] + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
      + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3]
      + (x − x0)(x − x1)(x − x2)(x − x3) f[x0, x1, x2, x3, x4]
      = 1245 + (x + 4)(−404) + (x + 4)(x + 1)(94) + (x + 4)(x + 1)(x)(−14) + (x + 4)(x + 1)(x)(x − 2)(3)
      = 1245 − 404x − 1616 + (x² + 5x + 4)(94) + (x³ + 5x² + 4x)(−14) + (x⁴ + 3x³ − 6x² − 8x)(3)
      = 3x⁴ − 5x³ + 6x² − 14x + 5.
In another data set, the fourth-order divided differences are zero, so the data represents a third-degree polynomial, and Newton's divided difference formula gives the polynomial as
p(x) = 9 + 7x + 14 − 3x² − 9x − 6 + x³ + 3x² + 2x = x³ + 17.
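Building the divided difference table and evaluating the Newton form can be done compactly. A sketch (function names are ours; the table is updated in place so only its top edge remains):

```python
def divided_differences(xs, ys):
    """Top edge of the divided difference table: f[x0], f[x0,x1], f[x0,x1,x2], ..."""
    coeffs = list(ys)
    for order in range(1, len(xs)):
        for i in range(len(xs) - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate Newton's divided difference polynomial, Horner style."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs = [-4, -1, 0, 2, 5]
ys = [1245, 33, 5, 9, 1335]
c = divided_differences(xs, ys)   # [1245, -404, 94, -14, 3]
print(newton_eval(xs, c, 1))      # p(1) = 3 - 5 + 6 - 14 + 5 = -5
```

The coefficients returned are exactly the top diagonal of the worked table above, and evaluating at x = 1 agrees with 3x⁴ − 5x³ + 6x² − 14x + 5.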
Activity 5.4:
Newton's forward and backward difference interpolation
We now derive two important interpolation formulas by means of the forward and backward difference operators. Suppose the function y(x) and the interpolating polynomial yn(x) agree at the tabulated points, and let the values of x be equidistant, that is,
x_i − x_{i−1} = h, for i = 1, 2, 3, ..., n.
Therefore x1 = x0 + h, x2 = x0 + 2h, etc.
Write the interpolating polynomial in the form
yn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + a3(x − x0)(x − x1)(x − x2) + ⋯ + an(x − x0)(x − x1)⋯(x − x_{n−1}).    (5.2)
The (n + 1) unknowns a0, a1, a2, ..., an can be found as follows.
Put x = x0 in (5.2); we obtain yn(x0) = y0 = a0, i.e. a0 = y0 (since the other terms in (5.2) vanish).
Put x = x1 in (5.2); then y1 = y0 + a1(x1 − x0), so
a1 = (y1 − y0)/(x1 − x0) = Δy0/h.
Similarly, putting x = x_i for i = 2, 3, ..., n in (5.2), we obtain
a2 = Δ²y0/(2! h²), a3 = Δ³y0/(3! h³), ..., an = Δⁿy0/(n! hⁿ).
Now substitute a0 = y0, a1 = Δy0/h, a2 = Δ²y0/(2! h²), ..., an = Δⁿy0/(n! hⁿ) in (5.2); we obtain
yn(x) = y0 + (Δy0/h)(x − x0) + (Δ²y0/(2! h²))(x − x0)(x − x1) + (Δ³y0/(3! h³))(x − x0)(x − x1)(x − x2) + ⋯ + (Δⁿy0/(n! hⁿ))(x − x0)(x − x1)(x − x2)⋯(x − x_{n−1}).    (5.3)
Put x = x0 + ph in (5.3); we get Newton's forward difference interpolation formula
yn(x) = y0 + p Δy0 + (p(p − 1)/2!) Δ²y0 + (p(p − 1)(p − 2)/3!) Δ³y0 + ⋯ + (p(p − 1)⋯(p − n + 1)/n!) Δⁿy0.    (5.4)
For the backward formula we write instead
yn(x) = a0 + a1(x − xn) + a2(x − xn)(x − x_{n−1}) + a3(x − xn)(x − x_{n−1})(x − x_{n−2}) + ⋯    (5.5)
and the (n + 1) unknowns a0, a1, a2, ..., an can be found as follows.
Put x = xn in (5.5); we obtain yn(xn) = yn = a0, i.e. a0 = yn (since the other terms in (5.5) vanish).
Again put x = x_{n−1} in (5.5); we obtain y_{n−1} = yn + a1(x_{n−1} − xn), and solving for a1,
a1 = (yn − y_{n−1})/(xn − x_{n−1}) = ∇yn/h.
Similarly, putting x = x_{n−i} for i = 2, 3, ..., n in (5.5), we obtain
a2 = ∇²yn/(2! h²), a3 = ∇³yn/(3! h³), ..., an = ∇ⁿyn/(n! hⁿ).
Now substitute a0 = yn, a1 = ∇yn/h, ..., an = ∇ⁿyn/(n! hⁿ) in (5.5); we obtain
yn(x) = yn + (∇yn/h)(x − xn) + (∇²yn/(2! h²))(x − xn)(x − x_{n−1}) + (∇³yn/(3! h³))(x − xn)(x − x_{n−1})(x − x_{n−2}) + ⋯ + (∇ⁿyn/(n! hⁿ))(x − xn)(x − x_{n−1})(x − x_{n−2})⋯(x − x1).    (5.6)
Put x = xn + ph in (5.6); we get Newton's backward difference interpolation formula
yn(x) = yn + p ∇yn + (p(p + 1)/2!) ∇²yn + (p(p + 1)(p + 2)/3!) ∇³yn + ⋯ + (p(p + 1)⋯(p + n − 1)/n!) ∇ⁿyn.    (5.7)
Example 2.1: Using Newton’s forward difference interpolation formula, find the form of the
44 | P a g e
f (x ) 1 2 1 10
Solution: We have the following forward difference table for the data.
The forward differences can be written in a tabular form as in Table 5.1.
Δ 2 3
x f (x ) Δ Δ
0 1
Table 2.1. 1 forward
differences. 1 2 -2
Since -1 12 n=3,the cubic
Newton’s 2 1 10 forward
difference 9 interpolation
polynomial 3 10 becomes:
2 2
y 3 ( x )=1+ x−( x −x )+2( x )( x −3 x +2).
3 2
y 3 ( x )=2 x −7 x +6 x +1.
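Formula (5.4) can be evaluated numerically without expanding the polynomial. A minimal sketch (the function name newton_forward is ours; equally spaced nodes are assumed):

```python
def newton_forward(xs, ys, x):
    """Newton forward difference interpolation for equally spaced nodes."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs, lead = list(ys), [ys[0]]
    while len(diffs) > 1:                       # leading differences Δ^i y0
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        lead.append(diffs[0])
    total, coef = 0.0, 1.0
    for i, d in enumerate(lead):
        total += coef * d                       # coef = p(p-1)...(p-i+1) / i!
        coef *= (p - i) / (i + 1)
    return total

print(newton_forward([0, 1, 2, 3], [1, 2, 1, 10], 1.5))   # 1.0
```

At x = 1.5 this matches 2(1.5)³ − 7(1.5)² + 6(1.5) + 1 = 1, and at the tabular points it reproduces the data exactly.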
Example 5.2: Find the interpolating polynomial corresponding to the data points (1, 5), (2, 9), (3, 14) and (4, 21), using Newton's backward difference interpolation polynomial.
Solution: The backward differences can be written in tabular form as follows.

x    f(x)    ∇      ∇²     ∇³
1    5
             4
2    9              1
             5              1
3    14             2
             7
4    21

Since n = 3, the cubic Newton backward difference interpolation polynomial becomes
y3(x) = yn + p ∇yn + (p(p + 1)/2!) ∇²yn + (p(p + 1)(p + 2)/3!) ∇³yn,
where p = (x − xn)/h = (x − 4)/1 = x − 4. Hence
y3(x) = 21 + 7(x − 4) + (x − 4)(x − 3) + (x − 4)(x − 3)(x − 2)/6.
Example 5.3: The following table gives values of tan x. Find tan 0.26.
x:      0.10    0.15    0.20    0.25    0.30
tan x:  0.1003  0.1511  0.2027  0.2553  0.3093
Solution: The forward differences can be written in tabular form as follows.

x      y        Δ        Δ²       Δ³       Δ⁴
0.10   0.1003
                0.0508
0.15   0.1511            0.0008
                0.0516            0.0002
0.20   0.2027            0.0010            0.0002
                0.0526            0.0004
0.25   0.2553            0.0014
                0.0540
0.30   0.3093

Since 0.26 lies near the end of the table, we use Newton's backward difference interpolation polynomial. We have x = 0.26, and
h = x_{i+1} − x_i = 0.05,  p = (x − xn)/h = (0.26 − 0.30)/0.05 = −0.8.
Hence formula (5.7) gives
y4(0.26) = tan 0.26 = 0.3093 + (−0.8)(0.0540) + ((−0.8)(−0.8 + 1)/2)(0.0014) + ((−0.8)(0.2)(1.2)/6)(0.0004) + ⋯
         = 0.3093 − 0.0432 − 0.000112 − 0.0000128 − ⋯ ≈ 0.2660.
Example 5.4: Using Newton’s forward difference formula , find the sum
3
S n+1−S n =(n+1 ) .
Hence
ΔS n =( n+1)3 .
Or
Δ 5 S n =Δ 6 S n =.. .=0 , Sn
Since is a fourth degree polynomial in n .
(5.4) gives
Hence formula
(n−1)(n−2) (n−1)(n−2)(n−3)
S n =1+(n−1)(8 )+ (19 )+ (18)+
2 6
48 | P a g e
(n−1)(n−2)(n−3)(n−4 )
+ (6 ).
24
1 1 1
= n 4 + n 3 + n2
4 2 4
2
n( n+1)
=[ 2
. ]
Activity 1.1:
3. Given
x:    0.20    0.22    0.24    0.26    0.28    0.30
f(x): 1.6596  1.6698  1.6804  1.6912  1.7024  1.7139
Using Newton's difference interpolation formulas, find f(0.23) and f(0.29).
REVIEW PROBLEMS
2. For the data (x_i, f_i), i = 0, 1, 2, ..., n, construct the Lagrange fundamental polynomials ℓ_i(x) using the information that they satisfy the conditions ℓ_i(x_j) = 0 for i ≠ j and ℓ_i(x_j) = 1 for i = j.
3. Construct the Lagrange interpolating polynomials for the following functions:
A. f(x) = e^{2x} cos 3x,  x0 = 0, x1 = 0.3, x2 = 0.6
B. f(x) = ln x,  x0 = 1, x1 = 1.1, x2 = 1.3, x3 = 1.4
4. Use appropriate Lagrange interpolating polynomials of degrees one, two, and three to approximate each of the following.
5. Use the Lagrange and the Newton divided difference formulas to calculate f(3) from the following table:
x:    0  1   2   4  5  6
f(x): 1  14  15  5  6  19
CHAPTER 6: CURVE FITTING
6.1 Introduction
Data is often given for discrete values along a continuum. However estimates of points between
these discrete values may be required. One way to do this is to formulate a function to fit these
values approximately. This application is called curve fitting. There are two general approaches
to curve fitting.
The first is to derive a single curve that represents the general trend of the data. One method
of this nature is the least-squares regression.
The second approach is interpolation which is a more precise one. The basic idea is to fit a
curve or a series of curves that pass directly through each of the points.
6.2 Least squares regression
The simplest example of the least squares approximation is fitting a straight line to a set of paired
observations: (x1, y1), (x2, y2), ..., (xn, yn). The mathematical expression for the straight line is
y = ao + a1x+ e
where ao, and a1 are coefficients representing the y-intercept and the slope of the line respectively
while e is the error or residual between the model and the observations, which can be
represented as
e = y - ao - a1x
Thus the error is the discrepancy between the true value of y(observed value) and the
approximate value ao + a1x, predicted by the linear equation. Any strategy of approximating a set
of data by a linear equation (best fit) should minimize the sum of residuals. The least squares fit
of straight line minimizes the sum of the squares of the residuals.
Sr = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_{i,measured} − y_{i,model})² = Σ_{i=1}^{n} (y_i − ao − a1 x_i)²    (6.1)
To determine the values of ao and a1, differentiate (6.1) with respect to each coefficient:
∂Sr/∂ao = −2 Σ (y_i − ao − a1 x_i)
∂Sr/∂a1 = −2 Σ [(y_i − ao − a1 x_i) x_i]    (6.2)
Setting the derivatives equal to zero will result in a minimum Sr. The equations can then be expressed as
Σ y_i − Σ ao − Σ a1 x_i = 0
Σ y_i x_i − Σ ao x_i − Σ a1 x_i² = 0    (6.3)
Solving these normal equations gives
a1 = (n Σ x_i y_i − Σ x_i Σ y_i) / (n Σ x_i² − (Σ x_i)²)    (6.4)
ao = ȳ − a1 x̄    (6.5)
where ȳ and x̄ are the means of the y and x values, respectively.
Linear regression provides a powerful technique for fitting a "best" line to data. However, it is
predicated on the fact that the relationship between the independent and dependent variables is
linear. But usually this is not the case. Visual inspection of the plot of the data will provide
useful information whether linear regression is acceptable. In situations where linear regression
is inadequate other methods such as polynomial regression are appropriate. For others,
transformations can be used to express the data in a form that is compatible with linear
regression.
i) Exponential functions
y = a1 e^{b1 x}
This function can be linearized by taking the natural logarithm of both sides of the equation:
ln y = ln a1 + b1 x ln e
ln y = ln a1 + b1 x
The plot of ln y versus x will yield a straight line with a slope of b1 and an intercept of ln a1.
ii) Power functions
y = a2 x^{b2}
Taking logarithms gives log y = log a2 + b2 log x; the plot of log y versus log x will yield a straight line with a slope of b2 and an intercept of log a2.
iii) Saturation-growth-rate functions
y = a3 x / (b3 + x)
Inverting both sides gives
1/y = (b3/a3)(1/x) + 1/a3
so the plot of 1/y versus 1/x will be linear, with a slope of b3/a3 and an intercept of 1/a3.
In their transformed forms, these models are fit using linear regression in order to evaluate the
constant coefficients. Then they can be transformed back to their original state and used for
predictive purposes.
Example: Find the standard "least squares line" for the given data points.
6.2.2 Polynomial regression
Some engineering data, although exhibiting a marked pattern, is poorly represented by a straight line. For these cases a curve would be better suited to fit the data. One possible way of solving this kind of problem is to fit the data by a polynomial function. This is called
polynomial regression. The least squares method can be extended to fit data to a higher-order
polynomial. Suppose we want to fit a second order polynomial:
y = ao + a1 x + a2 x² + e
The sum of squared residuals is now Sr = Σ (y_i − ao − a1 x_i − a2 x_i²)². Taking the derivative of Sr with respect to each unknown coefficient of the polynomial gives
∂Sr/∂ao = −2 Σ (y_i − ao − a1 x_i − a2 x_i²)
∂Sr/∂a1 = −2 Σ x_i (y_i − ao − a1 x_i − a2 x_i²)
∂Sr/∂a2 = −2 Σ x_i² (y_i − ao − a1 x_i − a2 x_i²)
Setting these equations equal to zero and rearranging yields the following set of normal equations:
n ao + (Σ x_i) a1 + (Σ x_i²) a2 = Σ y_i
(Σ x_i) ao + (Σ x_i²) a1 + (Σ x_i³) a2 = Σ x_i y_i
(Σ x_i²) ao + (Σ x_i³) a1 + (Σ x_i⁴) a2 = Σ x_i² y_i
Solving for the coefficients of the quadratic regression is equivalent to solving three
simultaneous linear equations. The techniques for solving these problems are discussed in
chapter two.
The standard error of the estimate for the polynomial fit is
S_{y/x} = sqrt( Sr / (n − (m + 1)) )
where m is the order of the polynomial.
Example: Find the standard "least squares parabola" for the given data points.
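Solving the three normal equations above is just a small Gauss elimination. A sketch of a quadratic fit (the function name fit_parabola and the test data are ours):

```python
def fit_parabola(xs, ys):
    """Least squares parabola y = a0 + a1*x + a2*x^2: build and solve
    the 3x3 normal equations by Gauss elimination."""
    s = [sum(x ** k for x in xs) for k in range(5)]             # Σx^0 .. Σx^4
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for k in range(2):                                          # elimination
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            M[i] = [M[i][j] - f * M[k][j] for j in range(4)]
    a = [0.0] * 3
    for i in range(2, -1, -1):                                  # back substitution
        a[i] = (M[i][3] - sum(M[i][j] * a[j] for j in range(i + 1, 3))) / M[i][i]
    return a

# exact data from y = 2 - x + 3x^2
print(fit_parabola([0, 1, 2, 3], [2, 4, 12, 26]))   # [2.0, -1.0, 3.0]
```

This closes the loop with chapter 3: polynomial regression reduces to the simultaneous linear equations solved there.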
6.2.3 Multiple linear regression
A useful extension of linear regression is the case where y is a linear function of more than one
variable, say x1 and x2,
y=ao + a1 x 1 +a 2 x 2 +e
Such an equation is useful when fitting experimental data where the variable being studied is a
function of two other variables.
In the same manner as the previous cases the best values of the coefficients are determined by
setting the sum of the squares of the residuals to a minimum.
S r =∑ ( y i−a o −a1 x 1i −a 2 x 2 i )2
∂Sr/∂ao = −2 Σ (y_i − ao − a1 x_{1i} − a2 x_{2i})
∂Sr/∂a1 = −2 Σ x_{1i} (y_i − ao − a1 x_{1i} − a2 x_{2i})
∂Sr/∂a2 = −2 Σ x_{2i} (y_i − ao − a1 x_{1i} − a2 x_{2i})
The coefficients yielding the minimum sum of the squares of the residuals are obtained by setting the partial derivatives equal to zero and expressing the result in matrix form as

[ n          Σ x_{1i}         Σ x_{2i}        ] [ ao ]   [ Σ y_i        ]
[ Σ x_{1i}   Σ x_{1i}²        Σ x_{1i} x_{2i} ] [ a1 ] = [ Σ x_{1i} y_i ]
[ Σ x_{2i}   Σ x_{1i} x_{2i}  Σ x_{2i}²       ] [ a2 ]   [ Σ x_{2i} y_i ]
The above case can be extended to m dimensions,
y=ao + a1 x 1 +a 2 x 2 + .. .+am x m+ e
with the standard error
S_{y/x} = sqrt( Sr / (n − (m + 1)) ).
Multiple linear regression has utility in the derivation of power equations of the general form
y = ao x1^{a1} x2^{a2} ⋯ xm^{am}
Transformation of this form of equation can be achieved by taking the logarithm of the equation.
In the preceding discussions we have seen three types of regression: linear, polynomial and multiple linear. All these belong to the general linear least squares model given by
y = ao z_o + a1 z_1 + a2 z_2 + ⋯ + am z_m + e    (6.9)
where z_o, z_1, z_2, ..., z_m are m + 1 different functions. For multiple linear regression, z_o = 1, z_1 = x_1, z_2 = x_2, ..., z_m = x_m. For polynomial regression, the z's are simple monomials, as in z_o = 1, z_1 = x, z_2 = x², ..., z_m = x^m.
The terminology "linear" refers only to the model's dependence on its parameters, i.e., the a's. Equation (6.9) can be expressed in matrix form as
{ y }=[ z ] { A }+ { E }
where [z] is a matrix of calculated values of the z functions at the measured values of the
independent variables.
[Z] =
[ z_{o1}  z_{11}  ⋯  z_{m1} ]
[ z_{o2}  z_{12}  ⋯  z_{m2} ]
[   ⋮       ⋮     ⋱    ⋮    ]
[ z_{on}  z_{1n}  ⋯  z_{mn} ]

where m is the number of variables in the model and n is the number of data points. Because n is usually greater than m + 1, [Z] is generally not a square matrix.
The column vector {y} contains the observed values of the dependent variable.
{y}T = ⌊ y 1 , y 2 , .. . , y n ⌋
The column vector {A} contains the unknown coefficients
{A}T= ⌊ a o , a1 , .. . , am ⌋
the column vector {E} contains the residuals
{E}T = ⌊ e 1 , e2 ,. .. , e n ⌋
The sum of the squares of the residuals can be defined as
Sr = Σ_{i=1}^{n} ( y_i − Σ_{j=0}^{m} a_j z_{ji} )²
This quantity can be minimized by taking its partial derivative with respect to each of the
coefficients and setting the resulting equation equal to zero. The outcome of this process can be
expressed in matrix form as
[ [ Z ] T [ Z ] ] { A }={ [ Z ]T { y } }
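The normal-equation form lends itself to a compact matrix implementation. A sketch assuming NumPy is available (the function name general_lls is ours; for ill-conditioned [Z] a QR-based solver such as lstsq would be preferable):

```python
import numpy as np

def general_lls(Z, y):
    """Solve the normal equations [Z^T Z]{A} = {Z^T y} for the coefficients {A}."""
    Z, y = np.asarray(Z, float), np.asarray(y, float)
    return np.linalg.solve(Z.T @ Z, Z.T @ y)

# polynomial regression as a special case: z0 = 1, z1 = x, z2 = x^2
x = np.array([0.0, 1.0, 2.0, 3.0])
Z = np.column_stack([np.ones_like(x), x, x ** 2])
y = 2 - x + 3 * x ** 2
print(general_lls(Z, y))   # approximately [ 2. -1.  3.]
```

Choosing different columns for [Z] recovers each earlier case: a single column of ones plus x gives the straight-line fit, and columns x1, x2, ..., xm give multiple linear regression.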