
Chapter 3: Systems of Linear Equations

Consider a system of n linear algebraic equations in n unknowns:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n ……………………………(3.1)

where a_ij (i, j = 1, 2, ..., n) are the known coefficients, b_j (j = 1, 2, ..., n) are known values, and x_i (i = 1, 2, ..., n) are the unknowns to be determined.

In matrix notation, the system (3.1) can be written as

Ax = b ------------------------ (3.2)

The matrix [A|b] is called the augmented matrix; it is formed by appending the column b to the n x n matrix A.

If all b_j are zero, then the system of equations (3.1) is said to be homogeneous, and if at least one b_j is not zero, then it is said to be non-homogeneous.

The non-homogeneous system (3.1) has a unique solution if and only if det(A) ≠ 0:

det(A) = | a_11 a_12 ... a_1n |
         | a_21 a_22 ... a_2n |
         |  ...               |
         | a_n1 a_n2 ... a_nn |  ≠ 0

The solution of the system (3.2) may then be written x = A^(-1) b.

The homogeneous system (b_j = 0, j = 1, 2, ..., n) possesses only the trivial solution

x_1 = x_2 = ... = x_n = 0

We introduce the following notation and definitions.

Notation

A → square matrix of order n

aij → element in the ith row and jth column of matrix A.

A^(-1) → inverse of A.

A^T → transpose of A.

det(A) → determinant of matrix A.

O → zero matrix.

I → identity matrix.

D → diagonal matrix of order n.

L → lower triangular matrix of order n.

U → upper triangular matrix of order n.

X → column vector with elements x_j, j = 1, 2, ..., n.

The methods of solution of the linear algebraic system (3.1) may broadly be classified into two types:

A) Direct methods: these produce the exact solution (up to round-off) after a finite number of steps.
B) Indirect (iterative) methods: these give a sequence of approximate solutions which converges as the number of steps tends to infinity.

A. Direct methods

Among the direct methods we introduce the following:

1) Matrix inversion method
2) Gauss elimination method
3) Gauss-Jordan elimination method
4) Matrix decomposition method

1. Gauss Elimination Method

The Gauss elimination method is based on eliminating the unknowns by combining equations, so that the n equations in n unknowns are reduced to an equivalent upper triangular system which can then be solved by backward substitution.

We assume that the coefficients a_ik are not all zero, so that the matrix A is not the zero matrix.

The matrix

(A, B) = [ a_11 a_12 ... a_1n | b_1 ]
         [ a_21 a_22 ... a_2n | b_2 ]
         [  ...               |  .  ]
         [ a_m1 a_m2 ... a_mn | b_m ]

is called the augmented matrix of the system (1).

If necessary we can add equations and/or unknowns to the system (1) (without changing it, of course) to make A a square matrix, because it is much simpler to work with a square matrix.

So let

A = [ a_11 a_12 ... a_1n ]
    [ a_21 a_22 ... a_2n ]
    [  ...               ]
    [ a_n1 a_n2 ... a_nn ]

To solve system (1), consider the augmented matrix (A, B). Our aim in this method is to reduce the augmented matrix

(A, B) = [ a_11 a_12 ... a_1n | b_1 ]
         [ a_21 a_22 ... a_2n | b_2 ]
         [  ...               |  .  ]        ... (3)
         [ a_n1 a_n2 ... a_nn | b_n ]

to the upper triangular form

[ a_11 a_12 a_13 ... a_1n | b_1 ]
[  0   b_22 b_23 ... b_2n | c_2 ]
[  0    0   c_33 ... c_3n | d_3 ]
[  ...                    |  .  ]
[  0    0    0  ...  α_nn | k_n ]

which we can then solve using backward substitution.

To do this we multiply the first row of (3) by -a_i1 / a_11 (i = 2, 3, ..., n) and add it to the i-th row of (3), provided a_11 ≠ 0. If a_11 is zero, take a row whose first-column entry is different from zero and interchange it with the first row, so that the new a_11 is different from zero; for this discussion we assume a_11 ≠ 0. By doing this, all elements of the first column of (3) except a_11 are made zero.

Now (3) becomes

[ a_11 a_12 a_13 ... a_1n | b_1 ]
[  0   b_22 b_23 ... b_2n | c_2 ]
[  ...                    |  .  ]        ……… (4)
[  0   b_n2 b_n3 ... b_nn | c_n ]

Now taking b_22 as the pivot element, we make all elements in the second column of (4) below b_22 zero. That is, we multiply the second row of (4) by -b_i2 / b_22 and add it to the corresponding elements of the i-th row (i = 3, 4, 5, ..., n). Thus all elements below b_22 are made zero, and (4) reduces to

[ a_11 a_12 a_13 ... a_1n | b_1 ]
[  0   b_22 b_23 ... b_2n | c_2 ]
[  0    0   c_33 ... c_3n | d_3 ]        ... (5)
[  ...                    |  .  ]
[  0    0   c_n3 ... c_nn | d_n ]

Now again consider c_33 as the pivot element and make all elements below c_33 in (5) zero. Repeating this process, all elements below the leading diagonal of matrix A are made zero.

Hence, after finitely many operations the augmented matrix (A, B) is reduced to

[ a_11 a_12 a_13 ... a_1n | b_1 ]
[  0   b_22 b_23 ... b_2n | c_2 ]
[  0    0   c_33 ... c_3n | d_3 ]        ........(6)
[  ...                    |  .  ]
[  0    0    0  ...  α_nn | k_n ]

Thus from (6), the given system of linear equations is equivalent to

a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
           b_22 x_2 + b_23 x_3 + ... + b_2n x_n = c_2
                      c_33 x_3 + ... + c_3n x_n = d_3
                                               ...
                                      α_nn x_n = k_n

Starting from the last equation we get x_n = k_n / α_nn, and using this value we solve for x_{n-1}; continuing the backward substitution we obtain x_n, x_{n-1}, x_{n-2}, ..., x_2, x_1 in turn.

Note that if we instead reduce (3) to a lower triangular matrix (and use forward substitution), we obtain the same values x_1, x_2, x_3, ..., x_n as the solution.
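The two stages described above (forward elimination to upper triangular form, then backward substitution) can be sketched as follows. This is a minimal illustration, not part of the original notes: the function name gauss_solve is ours, and we include the row-interchange step mentioned in the text by always choosing the largest available pivot.

```python
def gauss_solve(A, b):
    """Reduce the augmented matrix [A|b] to upper triangular form,
    then solve by backward substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix (A, B)
    for k in range(n - 1):
        # interchange rows so the pivot is nonzero (largest available pivot)
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]              # multiplier a_ik / a_kk
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]         # R_i := R_i - m * R_k
    x = [0.0] * n
    for i in range(n - 1, -1, -1):             # backward substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 below: x+y+z=3, x+2y+3z=4, x+4y+9z=6  ->  x=2, y=1, z=0
print(gauss_solve([[1.0, 1, 1], [1, 2, 3], [1, 4, 9]], [3.0, 4, 6]))
```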

Example 1: Solve the simultaneous equations

x + y + z = 3
x + 2y + 3z = 4
x + 4y + 9z = 6

using the Gauss elimination method.

Solution: Consider the augmented matrix

(A, B) = [ 1 1 1 | 3 ]
         [ 1 2 3 | 4 ]
         [ 1 4 9 | 6 ]

(We adopt the notation R_k - R_i to mean the operation R_k := R_k - R_i.)

Operating R2 - R1, R3 - R1, we get

[ 1 1 1 | 3 ]
[ 0 1 2 | 1 ]
[ 0 3 8 | 3 ]

and this matrix is equivalent to (A, B).

Again operating R3 - 3R2, we obtain

[ 1 1 1 | 3 ]
[ 0 1 2 | 1 ]
[ 0 0 2 | 0 ]

which is still equivalent to (A, B).

This means that

x + y + z = 3
    y + 2z = 1
        2z = 0

Now solving for each variable, we get z = 0, y = 1 and x = 2.

Example 2: Solve the system of equations

2x_1 + x_2 + 2x_3 + x_4 = 6
6x_1 - 6x_2 + 6x_3 + 12x_4 = 36
4x_1 + 3x_2 + 3x_3 - 3x_4 = -1
2x_1 + 2x_2 - x_3 + x_4 = 10

Solution: By the Gauss elimination method, consider the augmented matrix

(A, B) = [ 2  1  2  1 |  6 ]
         [ 6 -6  6 12 | 36 ]
         [ 4  3  3 -3 | -1 ]
         [ 2  2 -1  1 | 10 ]

Operating R2 - 3R1, R3 - 2R1, R4 - R1, we get

(A, B) ~ [ 2  1  2  1 |   6 ]
         [ 0 -9  0  9 |  18 ]
         [ 0  1 -1 -5 | -13 ]
         [ 0  1 -3  0 |   4 ]

2 1 2 1 6 
 
0 1 0 1 2

1 0 1 1  5  13 
 
R2  1 3 0 4 
Operating 9 we get  0

Now operating R3 - R2, R4 - R2, we obtain

[ 2  1  2  1 |   6 ]
[ 0  1  0 -1 |  -2 ]
[ 0  0 -1 -4 | -11 ]
[ 0  0 -3  1 |   6 ]

Again operating R4 - 3R3, we get

[ 2  1  2  1 |   6 ]
[ 0  1  0 -1 |  -2 ]
[ 0  0 -1 -4 | -11 ]
[ 0  0  0 13 |  39 ]

This matrix is equivalent to

2x_1 + x_2 + 2x_3 + x_4 = 6
x_2 - x_4 = -2
-x_3 - 4x_4 = -11
13x_4 = 39

⇒ x_4 = 3

Substituting this into the third equation: -x_3 - 4(3) = -11 ⇒ x_3 = -1.

Substituting these values into the second equation: x_2 - 3 = -2 ⇒ x_2 = 1.

Using these values in the first equation: 2x_1 + 1 + 2(-1) + 3 = 6 ⇒ x_1 = 2.

∴ The solution is x_1 = 2, x_2 = 1, x_3 = -1 and x_4 = 3.

Example 3: Apply the Gauss-Jordan method to solve the following system of equations.

5x_1 + x_2 + 3x_3 - 2x_4 = 2
2x_1 - x_2 + 2x_3 + x_4 = 1
x_1 + x_2 - x_3 - x_4 = 1
x_1 - 2x_2 + x_3 + 3x_4 = 1

Solution: To solve the system, first interchange the first row with the third row or the fourth row; without loss of generality, let us interchange with the third row. Thus the augmented matrix has the form

[ 1  1 -1 -1 | 1 ]
[ 2 -1  2  1 | 1 ]
[ 5  1  3 -2 | 2 ]
[ 1 -2  1  3 | 1 ]

Operating R2 - 2R1, R3 - 5R1, R4 - R1, we get

[ 1  1 -1 -1 |  1 ]
[ 0 -3  4  3 | -1 ]
[ 0 -4  8  3 | -3 ]
[ 0 -3  2  4 |  0 ]

Again operating R3 := R2 - (3/4)R3 and R4 := R4 - R2, we get

[ 1  1 -1 -1   |  1   ]
[ 0 -3  4  3   | -1   ]
[ 0  0 -2  3/4 |  5/4 ]
[ 0  0 -2  1   |  1   ]

Operating R4 - R3 and then scaling the new fourth row by 4, we get

[ 1  1 -1 -1   |  1   ]
[ 0 -3  4  3   | -1   ]
[ 0  0 -2  3/4 |  5/4 ]
[ 0  0  0  1   | -1   ]

Operating R1 := 3R1 + R2 and R3 := 4R3, we get

[ 3  0  1  0 |  2 ]
[ 0 -3  4  3 | -1 ]
[ 0  0 -8  3 |  5 ]
[ 0  0  0  1 | -1 ]

Operating R2 - 3R4, R3 - 3R4, we get

[ 3  0  1  0 |  2 ]
[ 0 -3  4  0 |  2 ]
[ 0  0 -8  0 |  8 ]
[ 0  0  0  1 | -1 ]

Operating R2 := R2 + (1/2)R3 and R1 := R1 + (1/8)R3, we get

[ 3  0  0  0 |  3 ]
[ 0 -3  0  0 |  6 ]
[ 0  0 -8  0 |  8 ]
[ 0  0  0  1 | -1 ]

which is equivalent to

3x_1 = 3,  -3x_2 = 6,  -8x_3 = 8,  x_4 = -1

⇒ x_1 = 1, x_2 = -2, x_3 = -1, x_4 = -1

2. Matrix Decomposition Method (LU Decomposition Method)

This method is also known as the factorization method.

Consider the system of equations

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n        (2.1)

In matrix form,

[ a_11 a_12 ... a_1n ] [ x_1 ]   [ b_1 ]
[ a_21 a_22 ... a_2n ] [ x_2 ] = [ b_2 ]
[  ...               ] [  .  ]   [  .  ]
[ a_n1 a_n2 ... a_nn ] [ x_n ]   [ b_n ]

That is, Ax = b, where x = (x_1, x_2, ..., x_n)^T and b = (b_1, ..., b_n)^T.

In this method, the coefficient matrix A of the system (2.1) is decomposed into the product of a lower triangular matrix L and an upper triangular matrix U,

i.e. LU = A        (2.2)

where

L = [ l_11  0   ...  0   ]        U = [ u_11 u_12 ... u_1n ]
    [ l_21 l_22 ...  0   ]  and       [  0   u_22 ... u_2n ]
    [  ...               ]            [  ...               ]
    [ l_n1 l_n2 ... l_nn ]            [  0    0   ... u_nn ]
Multiplying L by U and comparing the elements of the resulting matrix with those of A determines the unknown entries. To determine the elements of L and U it is convenient to choose either u_ii = 1 or l_ii = 1.

When we choose l_ii = 1 the method is called Doolittle's method, and when u_ii = 1 it is called Crout's method.

Note: If the decomposition exists, then it is unique.

Strategy to find the unknowns:

Having determined the matrices L and U, the system of equations becomes

LUX = b        (2.3)

We write (2.3) as

UX = Z        (2.4)
LZ = b        (2.5)

The unknowns Z = (z_1, z_2, ..., z_n)^T in (2.5) are determined by forward substitution, and the unknowns X = (x_1, x_2, ..., x_n)^T in (2.4) are then obtained by back substitution.

Note: This method fails if any of the diagonal elements u_ii or l_ii is zero.

The LU decomposition is guaranteed to exist when the matrix A is positive definite.

Example: Solve the following system of equations by the matrix decomposition method.

x_1 + x_2 + x_3 = 1
4x_1 + 3x_2 - x_3 = 6
3x_1 + 5x_2 + 3x_3 = 4

Solution: We use Crout's method, i.e. u_ii = 1 for i = 1, 2, 3.

In matrix form,

[ 1 1  1 ] [ x_1 ]   [ 1 ]
[ 4 3 -1 ] [ x_2 ] = [ 6 ]
[ 3 5  3 ] [ x_3 ]   [ 4 ]

so that

A = [ 1 1  1 ]
    [ 4 3 -1 ] ,  b = (1, 6, 4)^T,  X = (x_1, x_2, x_3)^T
    [ 3 5  3 ]

Let

L = [ l_11  0    0   ]        U = [ 1 u_12 u_13 ]
    [ l_21 l_22  0   ]  and       [ 0   1  u_23 ]
    [ l_31 l_32 l_33 ]            [ 0   0    1  ]

Then LU = A:

[ l_11   l_11 u_12          l_11 u_13                    ]   [ 1 1  1 ]
[ l_21   l_21 u_12 + l_22   l_21 u_13 + l_22 u_23        ] = [ 4 3 -1 ]
[ l_31   l_31 u_12 + l_32   l_31 u_13 + l_32 u_23 + l_33 ]   [ 3 5  3 ]

By comparing the elements of the matrix equation:

From the first row: l_11 = 1, u_12 = 1, u_13 = 1.

From the second row: l_21 = 4; l_21 u_12 + l_22 = 3 ⇒ l_22 = -1; and l_21 u_13 + l_22 u_23 = -1 ⇒ u_23 = 5.

From the third row: l_31 = 3; l_31 u_12 + l_32 = 5 ⇒ l_32 = 2; and l_31 u_13 + l_32 u_23 + l_33 = 3 ⇒ l_33 = -10.

Thus

L = [ 1  0   0  ]        U = [ 1 1 1 ]
    [ 4 -1   0  ]  and       [ 0 1 5 ]
    [ 3  2 -10 ]             [ 0 0 1 ]
To find the unknowns X = (x_1, x_2, x_3)^T, let UX = Z, where Z = (z_1, z_2, z_3)^T:

[ 1 1 1 ] [ x_1 ]   [ z_1 ]
[ 0 1 5 ] [ x_2 ] = [ z_2 ]
[ 0 0 1 ] [ x_3 ]   [ z_3 ]

But LZ = b:

[ 1  0   0  ] [ z_1 ]   [ 1 ]
[ 4 -1   0  ] [ z_2 ] = [ 6 ]
[ 3  2 -10 ] [ z_3 ]   [ 4 ]

⇒ z_1 = 1;  4z_1 - z_2 = 6 ⇒ z_2 = -2;  3z_1 + 2z_2 - 10z_3 = 4 ⇒ z_3 = -1/2.

Thus Z = (1, -2, -1/2)^T.

To find X, we use the equation UX = Z:

[ 1 1 1 ] [ x_1 ]   [  1   ]
[ 0 1 5 ] [ x_2 ] = [ -2   ]
[ 0 0 1 ] [ x_3 ]   [ -1/2 ]

By back substitution,

x_3 = -1/2,  x_2 = -2 - 5x_3 = 1/2,  x_1 = 1 - x_2 - x_3 = 1.

Therefore the solution is X = (1, 1/2, -1/2)^T.
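Crout's method as used in this example can be sketched as follows; crout_lu and solve_lu are our own illustrative names, and no pivoting is attempted (the method fails if a diagonal element of L becomes zero, as noted above).

```python
def crout_lu(A):
    """Crout factorization A = LU with u_ii = 1."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):           # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):       # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    z = [0.0] * n                       # forward substitution: L z = b
    for i in range(n):
        z[i] = (b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                       # back substitution: U x = z
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[1, 1, 1], [4, 3, -1], [3, 5, 3]]
L, U = crout_lu(A)
print(solve_lu(L, U, [1, 6, 4]))  # [1.0, 0.5, -0.5]
```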

B. Indirect Methods for Systems of Linear Equations

Indirect methods for solving a system of linear equations use an iterative scheme of the form X^(k+1) = H X^(k) + C, where X^(k+1) and X^(k) are the (k+1)-th and k-th approximations of X respectively, H is the iteration matrix (depending on A), and C is a column vector (depending on A and b).

1. Jacobi Iteration Method

Consider

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n        (1)

We assume that the pivot elements a_ii ≠ 0 in (1), so that the equations of system (1) may be written as

a_11 x_1 = -(a_12 x_2 + a_13 x_3 + ... + a_1n x_n) + b_1
a_22 x_2 = -(a_21 x_1 + a_23 x_3 + ... + a_2n x_n) + b_2
...
a_nn x_n = -(a_n1 x_1 + a_n2 x_2 + ... + a_n,n-1 x_{n-1}) + b_n
The Jacobi iteration scheme is defined as

x_1^(k+1) = -(1/a_11)(a_12 x_2^(k) + a_13 x_3^(k) + ... + a_1n x_n^(k)) + b_1/a_11
x_2^(k+1) = -(1/a_22)(a_21 x_1^(k) + a_23 x_3^(k) + ... + a_2n x_n^(k)) + b_2/a_22
----------------------------
x_n^(k+1) = -(1/a_nn)(a_n1 x_1^(k) + a_n2 x_2^(k) + ... + a_n,n-1 x_{n-1}^(k)) + b_n/a_nn

Example: Solve the following system of linear equations by the Jacobi iteration method.

20x_1 + x_2 - 2x_3 = 17
3x_1 + 20x_2 - x_3 = -18
2x_1 - 3x_2 + 20x_3 = 25

Solution: First we write the equations in the Jacobi iteration scheme:

x_1^(k+1) = -(1/20)(x_2^(k) - 2x_3^(k)) + 17/20
x_2^(k+1) = -(1/20)(3x_1^(k) - x_3^(k)) - 18/20
x_3^(k+1) = -(1/20)(2x_1^(k) - 3x_2^(k)) + 25/20        (*)
Let the initial approximation be x_1^(0) = x_2^(0) = x_3^(0) = 0; substituting these initial values into the right-hand sides of (*) gives x_1^(1), x_2^(1), x_3^(1).

Thus

x_1^(1) = 17/20 = 0.85
x_2^(1) = -18/20 = -0.9
x_3^(1) = 25/20 = 1.25

The second approximation:

x_1^(2) = -(1/20)(-0.9 - 2(1.25)) + 17/20 = 1.02
x_2^(2) = -(1/20)(3(0.85) - 1.25) - 18/20 = -0.965
x_3^(2) = -(1/20)(2(0.85) - 3(-0.9)) + 25/20 = 1.03

Again substituting these values into (*) gives X^(3):

x_1^(3) = -(1/20)(-0.965 - 2(1.03)) + 17/20 = 1.00125
x_2^(3) = -(1/20)(3(1.02) - 1.03) - 18/20 = -1.0015
x_3^(3) = -(1/20)(2(1.02) - 3(-0.965)) + 25/20 = 1.00325

Continuing in this way, the iterates converge to the exact solution

x_1 = 1, x_2 = -1, x_3 = 1
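The Jacobi sweeps computed above can be sketched as follows; jacobi is our own illustrative name, and note that each new component uses only the previous iterate X^(k):

```python
def jacobi(A, b, x0, iterations):
    """Jacobi iteration: x_i := (b_i - sum_{j != i} a_ij x_j) / a_ii."""
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]   # uses old x only
        x = x_new
    return x

A = [[20, 1, -2], [3, 20, -1], [2, -3, 20]]
b = [17, -18, 25]
print(jacobi(A, b, [0, 0, 0], 1))   # first iterate: [0.85, -0.9, 1.25]
print(jacobi(A, b, [0, 0, 0], 10))  # converges toward [1, -1, 1]
```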

2. Gauss-Seidel Iteration Method

We now use, on the right-hand side of the Jacobi scheme, all the values already computed in the present iteration.

The Gauss-Seidel iteration method for a system of linear equations is defined as

x_1^(k+1) = -(1/a_11)(a_12 x_2^(k) + a_13 x_3^(k) + ... + a_1n x_n^(k)) + b_1/a_11
x_2^(k+1) = -(1/a_22)(a_21 x_1^(k+1) + a_23 x_3^(k) + ... + a_2n x_n^(k)) + b_2/a_22
----------------------------
x_n^(k+1) = -(1/a_nn)(a_n1 x_1^(k+1) + a_n2 x_2^(k+1) + ... + a_n,n-1 x_{n-1}^(k+1)) + b_n/a_nn

The Jacobi and Gauss-Seidel methods converge for any choice of the initial approximation if, for each equation,

|a_ii| ≥ Σ_{j=1, j≠i}^{n} |a_ij|,  i = 1, 2, ..., n

that is, if the coefficient matrix A is diagonally dominant.

The convergence of the Gauss-Seidel method is roughly twice as fast as that of Jacobi's method.

Example: Solve the following system of linear equations by the Gauss-Seidel iteration method.

20x_1 + x_2 - 2x_3 = 17
3x_1 + 20x_2 - x_3 = -18
2x_1 - 3x_2 + 20x_3 = 25

Solution: First we write the given system in the Gauss-Seidel iteration scheme:

x_1^(k+1) = -(1/20)(x_2^(k) - 2x_3^(k)) + 17/20
x_2^(k+1) = -(1/20)(3x_1^(k+1) - x_3^(k)) - 18/20
x_3^(k+1) = -(1/20)(2x_1^(k+1) - 3x_2^(k+1)) + 25/20        (**)

Let the initial approximation be x_1^(0) = x_2^(0) = x_3^(0) = 0; substituting these initial values into the right-hand sides of (**) gives

x_1^(1) = 17/20 = 0.85
x_2^(1) = -(1/20)(3(0.85) - 0) - 18/20 = -1.0275
x_3^(1) = -(1/20)(2(0.85) - 3(-1.0275)) + 25/20 = 1.0109
3 20 20

The second approximation:

x_1^(2) = -(1/20)(-1.0275 - 2(1.0109)) + 17/20 = 1.0025
x_2^(2) = -(1/20)(3(1.0025) - 1.0109) - 18/20 = -0.9998
x_3^(2) = -(1/20)(2(1.0025) - 3(-0.9998)) + 25/20 = 0.9998

The third approximation:

x_1^(3) = 1,  x_2^(3) = -1,  x_3^(3) = 1

Hence the solution is x_1 = 1, x_2 = -1, x_3 = 1.
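A sketch of the same computation for Gauss-Seidel (gauss_seidel is our own name): the only change from Jacobi is that x is overwritten in place, so later components in a sweep use values already updated in that sweep.

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel iteration: like Jacobi, but updates are used immediately."""
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place
    return x

A = [[20, 1, -2], [3, 20, -1], [2, -3, 20]]
b = [17, -18, 25]
print(gauss_seidel(A, b, [0, 0, 0], 1))  # approximately [0.85, -1.0275, 1.010875]
print(gauss_seidel(A, b, [0, 0, 0], 3))  # already very close to [1, -1, 1]
```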

CHAPTER 4: FINITE DIFFERENCES

FIRST DIFFERENCE

Let y  f (x) be a given function of x and let y 0 , y1 , y 2 , y 3 ,..., y n be the values of y

corresponding to x0 , x1 , x 2 , x3 ,..., x n the values of x .

Consider the differences y1  y 0 , y 2  y1 , ..., y n  y n 1 these results are called the first differences

of y , and are denoted by y .

i.e., Δy_0 = y_1 - y_0

Δy_1 = y_2 - y_1

Δy_2 = y_3 - y_2

Δy_{n-1} = y_n - y_{n-1}

Δ is called the forward difference operator.

Higher Differences

Δ²y_0 = Δ(Δy_0) = Δ(y_1 - y_0) = Δy_1 - Δy_0

Δ²y_1 = Δ(Δy_1) = Δ(y_2 - y_1) = Δy_2 - Δy_1

Δ²y_{n-1} = Δ(Δy_{n-1}) = Δ(y_n - y_{n-1}) = Δy_n - Δy_{n-1}

Δ² is called the second-order forward difference operator.

Similarly, Δ³y_1 = Δ²y_2 - Δ²y_1.

In general, Δⁿy_i = Δ^(n-1) y_{i+1} - Δ^(n-1) y_i.

Though x_0, x_1, x_2, x_3, ..., x_n need not be equally spaced, for the time being and for purposes of practical work we take them as equally spaced.

Consider x_0, x_0 + h, x_0 + 2h, ..., x_0 + nh, so that x_1 - x_0 = x_2 - x_1 = ... = x_n - x_{n-1} = h; here h is called the interval of differencing.
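With equally spaced values, the whole difference table used later in this chapter can be generated by repeated differencing. A minimal sketch (the function name forward_differences is ours):

```python
def forward_differences(y):
    """Return the columns of a forward difference table:
    table[k] holds the k-th order differences of y."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = 2, 9, 28, 65, 126, 217 (the sequence used in the examples below)
for col in forward_differences([2, 9, 28, 65, 126, 217]):
    print(col)
# [2, 9, 28, 65, 126, 217]
# [7, 19, 37, 61, 91]
# [12, 18, 24, 30]
# [6, 6, 6]
# [0, 0]
# [0]
```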

Operators

i) Backward difference operator (∇):

∇f(x) = f(x) - f(x - h)

By definition, ∇y_1 = y_1 - y_0, ∇y_2 = y_2 - y_1, etc.

∇²f(x) = ∇(f(x) - f(x - h)) = ∇f(x) - ∇f(x - h)
       = (f(x) - f(x - h)) - (f(x - h) - f(x - 2h))
       = f(x) - 2f(x - h) + f(x - 2h)

ii) Central difference operator (δ):

The central difference operator δ is defined by

δf(x) = f(x + h/2) - f(x - h/2)

or δy_x = y_{x+h/2} - y_{x-h/2}.

iii) Shifting (displacement, translation) operator E:

We define the shifting operator E such that

Ef(x) = f(x + h),  i.e.  Ey_x = y_{x+h}

This means Ey_0 = y_1, Ey_1 = y_2, Ey_2 = y_3, etc.

E²y_x = E(y_{x+h}) = y_{x+2h},  E³y_x = y_{x+3h}

Thus Eⁿf(x) = f(x + nh), or Eⁿy_x = y_{x+nh}.
or

iv) Inverse operator E⁻¹:

We know that E⁻¹Ef(x) = f(x). Let E⁻¹f(x) = φ(x). Then

Eφ(x) = f(x)
⇒ φ(x + h) = f(x)
⇒ φ(x) = f(x - h)

Thus E⁻¹f(x) = f(x - h).

In general, E⁻ʳf(x) = f(x - rh).

v) Averaging operator μ:

The averaging operator μ is defined by

μy_x = (1/2)(y_{x+h/2} + y_{x-h/2})

vi) Differential operator D:

The differential operator D is defined by

Df(x) = (d/dx) f(x),  D²f(x) = (d²/dx²) f(x),  etc.

vii) Unit operator 1:

1·f(x) = f(x)

Note that all the above operators satisfy the following properties:

1. All the operators are linear.
2. All the operators are distributive and commute with one another.

Exercise: Show that the above two properties hold.

Relations between the operators

a) Relation between E and Δ:

We know that Δf(x) = f(x + h) - f(x) = Ef(x) - f(x) = (E - 1)f(x)

∴ Δ = E - 1 (separation of symbols)

Notice that two operators G_1 and G_2 are equal if and only if G_1 f(x) = G_2 f(x) for all f(x).

∴ E = 1 + Δ

b) Relation between E and ∇:

∇f(x) = f(x) - f(x - h) = 1·f(x) - E⁻¹f(x) = (1 - E⁻¹)f(x)

∴ ∇ = 1 - E⁻¹

⇒ E⁻¹ = 1 - ∇, so that E = (1 - ∇)⁻¹

c) Relation between D and Δ:

Df(x) = (d/dx) f(x)

We observe from the famous Taylor's theorem that, for a given function f,

f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + ...

⇒ Ef(x) = f(x) + hDf(x) + (h²D²/2!) f(x) + (h³D³/3!) f(x) + ...
        = [1 + hD + (hD)²/2! + (hD)³/3! + ...] f(x)

⇒ E = 1 + hD + (hD)²/2! + (hD)³/3! + ... = e^(hD)

∴ E = 1 + Δ = e^(hD), which implies hD = log(1 + Δ)

hD = Δ - Δ²/2 + Δ³/3 - ...

∴ D = (1/h)(Δ - Δ²/2 + Δ³/3 - ...)

d) Relation between E and δ:

δf(x) = f(x + h/2) - f(x - h/2) = E^(1/2) f(x) - E^(-1/2) f(x)

∴ δ = E^(1/2) - E^(-1/2)

Also δ = E^(-1/2)(E - 1) = E^(-1/2) Δ, and δ = E^(1/2)(1 - E⁻¹) = E^(1/2) ∇.

Now let us see these operators using tables because in the subsequent discussions we use such

tables very frequently.

Forward Difference Table

The finite forward differences of a function are represented below in tabular form:

x    y      Δy     Δ²y    Δ³y    Δ⁴y    Δ⁵y    Δ⁶y
x0   y0
            Δy0
x1   y1            Δ²y0
            Δy1           Δ³y0
x2   y2            Δ²y1          Δ⁴y0
            Δy2           Δ³y1          Δ⁵y0
x3   y3            Δ²y2          Δ⁴y1          Δ⁶y0
            Δy3           Δ³y2          Δ⁵y1
x4   y4            Δ²y3          Δ⁴y2
            Δy4           Δ³y3
x5   y5            Δ²y4
            Δy5
x6   y6

Here y0 is called the leading term.

Whenever we need to establish the finite forward differences of a certain function, starting from the first forward differences we keep increasing the order of the differences until the nth differences all become zero.

Backward Difference Table

x    y      ∇y     ∇²y    ∇³y    ∇⁴y    ∇⁵y    ∇⁶y
x0   y0
            ∇y1
x1   y1            ∇²y2
            ∇y2           ∇³y3
x2   y2            ∇²y3          ∇⁴y4
            ∇y3           ∇³y4          ∇⁵y5
x3   y3            ∇²y4          ∇⁴y5          ∇⁶y6
            ∇y4           ∇³y5          ∇⁵y6
x4   y4            ∇²y5          ∇⁴y6
            ∇y5           ∇³y6
x5   y5            ∇²y6
            ∇y6
x6   y6

Here y6 is called the leading term.

To find y_k in terms of y_0, Δy_0, Δ²y_0, ..., where k is a positive integer:

y_k = E^k y_0 = (1 + Δ)^k y_0
    = (1 + kC1 Δ + kC2 Δ² + ... + Δ^k) y_0
    = y_0 + kC1 Δy_0 + kC2 Δ²y_0 + ... + Δ^k y_0        (*)

Now let us see some examples of how we can efficiently use this result to solve different problems.
Example 1: Find the next term of the sequence 2, 9, 28, 65, 126, 217, ..., and also find the general term.

Solution: We first establish the forward difference table, and then use equation (*) to find the 7th term and the general term.

Thus, the table is as follows:

x    y     Δy    Δ²y   Δ³y   Δ⁴y
0    2
           7
1    9           12
           19          6
2    28          18          0
           37          6
3    65          24          0
           61          6
4    126         30
           91
5    217

 6  6 6  6 6  6
y6  y0   y0   2 y0   3 y0   4 y0   5 y 0   6 y0
7th term = 1  2  3  4 5  6

 2  6 7   1512  20 6   15 0 

= 344

n n  1
yn  2  n 7   12  n n  1 n  2  6
2 3!

26 | P a g e
= 2  7 n  6n  6n  n  3n  2n
2 3 2

= (n  1)  1
3

 y6   6  1  1  344
3

Example 2: Find f(x) from the table below, and also find f(7).

x:     0   1   2   3    4    5    6
f(x): -1   3  19  53  111  199  323

Solution: Here Δy_0 = 4, Δ²y_0 = 12, Δ³y_0 = 6, and the higher differences vanish.

y_x = E^x y_0 = (1 + Δ)^x y_0
    = y_0 + xC1 Δy_0 + xC2 Δ²y_0 + xC3 Δ³y_0 + ...
    = -1 + 4x + [x(x - 1)/2](12) + [x(x - 1)(x - 2)/6](6) + 0
    = x³ + 3x² - 1

f(7) = 7³ + 3(7²) - 1 = 489

Using the backward difference operator we can also develop a method for solving problems similar to those for the forward difference operator.

We know that ∇y_n = y_n - y_{n-1}

⇒ y_{n-1} = y_n - ∇y_n = (1 - ∇)y_n

Similarly, y_{n-2} = (1 - ∇)y_{n-1} = (1 - ∇)²y_n,  y_{n-3} = (1 - ∇)³y_n, etc.

Thus y_{n-k} = (1 - ∇)^k y_n
            = y_n - kC1 ∇y_n + kC2 ∇²y_n - ... + (-1)^k ∇^k y_n

Example 3 Find y (1) if y (0)  2, y (1)  9, y (2)  28, y (3)  65, y ( 4)  126, y (5)  217 .

Solution

2 3 4
x y ∇y ∇ y ∇y ∇ y

0 2
7
12
1 9
19 6
18 0
2 28 6
37
24
0
3 65
61 6
30
4 126
91
5 217

Now y (1)  y 1  y 56

28 | P a g e
6 6 6 6
 y5 +=  y5    2 y5    3 y5  ...    6 y5
1  2  3 6

 217 6(91)
-  15(30) 20(6)  0

1

To verify y (0)  y 0  y 55

 5  5 5
 y5   y5    2 y5  ...   5 y5
1  2 5

 217  5 91  10 30   10 6

2

Example 4: Given y 3  2, y 4  6, y 5  8, y 6  9 and y 7  17 . Calculate  y 3 .


4

Solution:  y 3  ( E  1) y 3  ( E  4 E  6 E  4 E  1) y 3
4 4 4 3 2

= E y 3  4 E y 3  6 E y 3  4 Ey3  y 3
4 3 2

= y7  4 y 6  6 y5  4 y 4  y3

= 17  4(9)  6(8)  4(6)  2 = 55

Example 5: Find y_6 if y_0 = 9, y_1 = 16, y_2 = 30, y_3 = 45, given that the third differences are constant.

Solution: Since the third differences are constant, Δ⁴y_0 = Δ⁵y_0 = Δ⁶y_0 = 0.

y_6 = E⁶y_0 = (1 + Δ)⁶y_0 = (1 + 6C1 Δ + 6C2 Δ² + 6C3 Δ³ + 6C4 Δ⁴ + 6C5 Δ⁵ + Δ⁶) y_0

= (1 + 6Δ + 15Δ² + 20Δ³) y_0,  since Δ⁴y_0 = Δ⁵y_0 = Δ⁶y_0 = 0

= (1 + 6(E - 1) + 15(E - 1)² + 20(E - 1)³) y_0

= (20E³ - 45E² + 36E - 10) y_0

= 20y_3 - 45y_2 + 36y_1 - 10y_0

= 20(45) - 45(30) + 36(16) - 10(9) = 36

Example 6: From the following table, find the missing value.

x:   2     3     4    5    6
y: 45.0  49.2  54.1   -  67.4

Solution: Since only four values of f(x) are given, we assume that the polynomial which fits the data, the interpolating polynomial, is of degree three.

Hence the fourth differences are zero, i.e. Δ⁴y_0 = 0.

⇒ (E - 1)⁴ y_0 = 0 ⇒ (E⁴ - 4E³ + 6E² - 4E + 1) y_0 = 0

⇒ y_4 - 4y_3 + 6y_2 - 4y_1 + y_0 = 0

⇒ 67.4 - 4y_3 + 6(54.1) - 4(49.2) + 45.0 = 0

⇒ y_3 = 60.05

Example 7: Estimate the production for 1964 and 1966 from the following data:

Year:       1961  1962  1963  1964  1965  1966  1967
Production:  200   220   260    -    350    -    430

Solution: We are given five values, so we assume a fourth-degree interpolating polynomial; hence Δ⁵y_k = 0.

That is, (E - 1)⁵ y_k = 0 ⇒ (E⁵ - 5E⁴ + 10E³ - 10E² + 5E - 1) y_k = 0

Taking k = 0, we get y_5 - 5y_4 + 10y_3 - 10y_2 + 5y_1 - y_0 = 0

⇒ y_5 - 5(350) + 10y_3 - 10(260) + 5(220) - 200 = 0

⇒ y_5 + 10y_3 = 3450        (*)

Taking k = 1, we get y_6 - 5y_5 + 10y_4 - 10y_3 + 5y_2 - y_1 = 0

⇒ 430 - 5y_5 + 10(350) - 10y_3 + 5(260) - 220 = 0

⇒ 5y_5 + 10y_3 = 5010        (**)

Solving (*) and (**) for y_3 and y_5: y_3 = 306, y_5 = 390.

Hence the missing values are 306 and 390.
Example 8: Find the missing term in the following.

x: 1  2  3  4   5   6    7
y: 2  4  8  -  32  64  128

Solution: Six values are given, so a unique fifth-degree polynomial satisfies the data. Hence Δ⁶y_0 = 0 ⇒ (E - 1)⁶ y_0 = 0

⇒ (E⁶ - 6E⁵ + 15E⁴ - 20E³ + 15E² - 6E + 1) y_0 = 0

⇒ y_6 - 6y_5 + 15y_4 - 20y_3 + 15y_2 - 6y_1 + y_0 = 0

⇒ 128 - 6(64) + 15(32) - 20y_3 + 15(8) - 6(4) + 2 = 0

Hence the missing value is y_3 = 16.1.

From these examples we have seen that to find missing terms, say n of them, we have to establish n equations using the shifting operator n times, and then solve these equations simultaneously for the missing terms by the methods developed so far.
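For a single missing term this reduces to one linear equation. A sketch reproducing Example 8 (all names are ours): we impose Δ⁶y_0 = 0, i.e. Σ (-1)^i C(6, i) y_{6-i} = 0, and solve for the one unknown.

```python
from math import comb

y = [2, 4, 8, None, 32, 64, 128]    # y_3 is the missing term
n = len(y) - 1                      # order of the vanishing difference (here 6)

# split sum of (-1)^i * C(n, i) * y_{n-i} = 0 into known part + coeff * missing
known, coeff = 0.0, 0.0
for i in range(n + 1):
    c = (-1) ** i * comb(n, i)
    if y[n - i] is None:
        coeff += c                  # coefficient of the missing value
    else:
        known += c * y[n - i]

y3 = -known / coeff
print(y3)  # 16.1
```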

Exercise

1. Find the fifth term of the sequence 3, 6, 11, 18, ...

2. If the third differences are constant, find u_6 if u_0 = 9, u_1 = 18, u_2 = 20, u_3 = 24.

3. If u_0 = 3, u_1 = 12, u_2 = 81, u_3 = 200, u_5 = 8, then find u_4.

4. If y_1 + y_7 = 786, y_2 + y_6 = 686, y_3 + y_5 = 1088, then find y_4.

5. Prove y_4 = y_3 + Δy_2 + Δ²y_1 + Δ³y_0 + Δ⁴y_0.

6. Find y_6 if y_0 = 9, y_1 = 18, y_2 = 20, y_3 = 24, given that the third differences are constant.

7. Given y_3 = 2, y_4 = 6, y_5 = 8, y_6 = 9 and y_7 = 17, calculate Δ⁴y_3.

8. Find the nth term of the sequence 1, 4, 10, 20, 35, 56, ... Also find the 8th term.

Chapter 5: Interpolation

Introduction

In this chapter we consider the problem of approximating a given function by a class of simpler functions, mainly polynomials.

There are two main uses of interpolation:

The first use is in reconstructing the function f(x) when it is not given explicitly and only the values of f(x) (and perhaps its derivatives) at a set of points, called nodes or tabular points, are known.

The second use is to replace the function f(x) by an interpolating polynomial p(x), so that many common operations such as differentiation, integration, etc., which are intended for the function, may be performed using p(x).

Definition: A polynomial p(x) is called an interpolating polynomial if the value of p(x), or of a certain order of its derivative, coincides with that of f(x), or of its derivative of the same order, at one or more tabular points.

1. Interpolation with unequally spaced points

If there are n + 1 distinct, unequally spaced points a ≤ x_0 < x_1 < x_2 < ... < x_n ≤ b in [a, b], then the problem of interpolation is to obtain an interpolating polynomial p(x) satisfying the conditions

i) p(x_i) = f(x_i), i = 0, 1, 2, ..., n

and, when derivative data are also given,

ii) p'(x_i) = f'(x_i), i = 0, 1, 2, ..., n

1.1 Lagrange's interpolation formula

The Lagrange interpolation polynomial of degree n, based on the n + 1 distinct points a ≤ x_0 < x_1 < x_2 < ... < x_n ≤ b and satisfying the conditions

p(x_i) = f(x_i), i = 0, 1, 2, ..., n,

can be written in the form

p_n(x) = Σ_{i=0}^{n} L_i(x) f(x_i)        (1)

where

L_i(x) = [(x - x_0)(x - x_1)...(x - x_{i-1})(x - x_{i+1})...(x - x_n)] / [(x_i - x_0)(x_i - x_1)...(x_i - x_{i-1})(x_i - x_{i+1})...(x_i - x_n)],  i = 0, 1, 2, ..., n.

Equation (1) is called the Lagrange interpolating polynomial of degree n.

⊗ If n = 1, that is, given two distinct points (x_0, f(x_0)) and (x_1, f(x_1)), the interpolating polynomial p_1(x) is linear:

p_1(x) = Σ_{i=0}^{1} L_i(x) f(x_i) = L_0(x) f(x_0) + L_1(x) f(x_1)

⇒ p_1(x) = [(x - x_1)/(x_0 - x_1)] f(x_0) + [(x - x_0)/(x_1 - x_0)] f(x_1)

⊗ If n = 2, we consider the distinct points (x_0, f(x_0)), (x_1, f(x_1)), (x_2, f(x_2)); p_2(x) is the quadratic polynomial defined by

p_2(x) = Σ_{i=0}^{2} L_i(x) f(x_i)

⇒ p_2(x) = [(x - x_1)(x - x_2)/((x_0 - x_1)(x_0 - x_2))] f(x_0) + [(x - x_0)(x - x_2)/((x_1 - x_0)(x_1 - x_2))] f(x_1) + [(x - x_0)(x - x_1)/((x_2 - x_0)(x_2 - x_1))] f(x_2)
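The Lagrange form can be evaluated directly from its definition. A minimal sketch (the name lagrange_eval is ours):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate p_n(x) = sum of L_i(x) * f(x_i) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # one factor of L_i(x)
        total += Li * yi
    return total

# Example 1 below: f(2) = 4, f(2.5) = 5.5 gives p_1(x) = 3x - 2
print(lagrange_eval([2, 2.5], [4, 5.5], 3.0))  # 7.0
```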

Example 1: Given f(2) = 4 and f(2.5) = 5.5, find the linear interpolating polynomial using Lagrange interpolation.

Solution: We have x_0 = 2, x_1 = 2.5, f(x_0) = 4, f(x_1) = 5.5. Then

p_1(x) = L_0(x) f(x_0) + L_1(x) f(x_1)
       = [(x - 2.5)/(2 - 2.5)]·4 + [(x - 2)/(2.5 - 2)]·5.5
       = -8(x - 2.5) + 11(x - 2)
       = 3x - 2

Example 2: Let x_0 = 2, x_1 = 2.5, x_2 = 4; find the quadratic interpolating polynomial for f(x) = 1/x.

Solution: The given nodes are unequally spaced, with

x_0 = 2, x_1 = 2.5, x_2 = 4,  f(x_0) = 1/2, f(x_1) = 1/2.5, f(x_2) = 1/4

p_2(x) = Σ_{i=0}^{2} L_i(x) f(x_i)

⇒ p_2(x) = [(x - 2.5)(x - 4)/((2 - 2.5)(2 - 4))](1/2)
         + [(x - 2)(x - 4)/((2.5 - 2)(2.5 - 4))](1/2.5)
         + [(x - 2)(x - 2.5)/((4 - 2)(4 - 2.5))](1/4)
         = 0.05x² - 0.425x + 1.15

The graphical representation of the given function and the interpolating polynomial is shown in the accompanying figure (not reproduced here).

Example 4: Construct Lagrange interpolation polynomials.
The truncation error in Lagrange interpolation is given by

E_n(x) = f(x) - p(x) = [(x - x_0)(x - x_1)...(x - x_n) / (n + 1)!] f^(n+1)(ξ)

and the error bound becomes

|E_n(f, x)| ≤ [max_{x_0 ≤ x ≤ x_n} |(x - x_0)(x - x_1)...(x - x_n)| / (n + 1)!] · max_{x_0 ≤ ξ ≤ x_n} |f^(n+1)(ξ)|

where x_0 ≤ ξ ≤ x_n.

1.2 Divided difference formula

Let (x0, y0), (x1, y1), ⋯, (xn, yn) be given points. Then the first divided differences for the
nodes x0, x1, ⋯, xn are defined by the relations

        f[x0, x1] = (y1 − y0)/(x1 − x0),  f[x1, x2] = (y2 − y1)/(x2 − x1),  ⋯,
        f[xn−1, xn] = (yn − yn−1)/(xn − xn−1)

The second divided differences for x0, x1, x2, ⋯, xn are defined as:

        f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0),  ⋯,
        f[xn−2, xn−1, xn] = (f[xn−1, xn] − f[xn−2, xn−1])/(xn − xn−2)

In general, the nth divided difference for x0, x1, x2, ⋯, xn is defined by:

        f[x0, x1, ⋯, xn] = (f[x1, x2, ⋯, xn] − f[x0, x1, ⋯, xn−1])/(xn − x0)

Note: The zeroth divided difference of the function f with respect to xi is denoted by f[xi] and is
simply f evaluated at xi, i.e. f[xi] = f(xi).

A divided difference table collects these quantities column by column, each column one order higher than the last.

Newton's divided difference formula

Let y0, y1, ⋯, yn be the values of y corresponding to the nodes x0 < x1 < x2 < ⋯ < xn.

Newton's divided difference interpolating formula for unequally spaced nodes is defined as:

        pn(x) = y0 + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
              + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3] + ⋯
              + (x − x0)(x − x1)⋯(x − xn−1) f[x0, x1, ⋯, xn]

Example: Construct the divided difference table for the following data and find the
interpolating polynomials for the data

x 0.5 1.5 3 5 6.5 8


f(x) 1.625 5.875 31 131 282.125 521

Soln: First we prepare the divided difference table.

        x           f(xi)      f[xi−1, xi]   f[xi−2, xi−1, xi]   f[xi−3, ..., xi]   4th   5th
        x0 = 0.5    1.625      4.25          5                   1                  0     0
        x1 = 1.5    5.875      16.75         9.5                 1                  0
        x2 = 3      31         50            14.5                1
        x3 = 5      131        100.75        19.5
        x4 = 6.5    282.125    159.25
        x5 = 8      521

Using Newton's divided difference formula

        pn(x) = y0 + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
              + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3] + ⋯
              + (x − x0)(x − x1)⋯(x − xn−1) f[x0, x1, ⋯, xn]

        p(x) = 1.625 + (x − 0.5)(4.25) + (x − 0.5)(x − 1.5)(5) + (x − 0.5)(x − 1.5)(x − 3)(1)
             = x³ + x + 1
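The table and the resulting polynomial can be reproduced programmatically. The sketch below (with illustrative helper names) builds the top edge of the divided-difference table in place and evaluates the Newton form by nested multiplication:

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] (top edge of the table)."""
    coef = list(ys)
    n = len(xs)
    for order in range(1, n):
        # Update from the bottom up so lower-order entries are still available
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.5, 1.5, 3, 5, 6.5, 8]
ys = [1.625, 5.875, 31, 131, 282.125, 521]
coef = divided_differences(xs, ys)
print(coef)                      # [1.625, 4.25, 5.0, 1.0, 0.0, 0.0]
print(newton_eval(xs, coef, 4))  # 69.0 (= 4³ + 4 + 1)
```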

Example 2: Find f (x ) as a polynomial in x for the following data by Newton’s divided


difference formula
x -4 -1 0 2 5
f (x ) 1245 33 5 9 1335

Solution: First we form the divided difference table for the data.
x f (x ) First d.d Second d.d Third d.d Fourth d.d
-4 1245
-404
-1 33 94
-28 -14
0 5 10 3
2 13
2 9 88
442
5 1335

Newton's divided difference formula gives

        f(x) = f(x0) + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
             + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3]
             + (x − x0)(x − x1)(x − x2)(x − x3) f[x0, x1, x2, x3, x4]
             = 1245 + (x + 4)(−404) + (x + 4)(x + 1)(94) + (x + 4)(x + 1)(x)(−14)
             + (x + 4)(x + 1)(x)(x − 2)(3)
             = 1245 − 404x − 1616 + (x² + 5x + 4)(94) + (x³ + 5x² + 4x)(−14)
             + (x⁴ + 3x³ − 6x² − 8x)(3)
             = 3x⁴ − 5x³ + 6x² − 14x + 5.

Example 3: Find f (x ) as a polynomial in x for the following data by Newton’s divided


difference formula
x –2 –1 0 1 3 4
f (x ) 9 16 17 18 44 81

Hence, interpolate at x = 0.5 and x = 3.1.


Solution: We form the divided difference table for the given data.
x f (x ) First d.d Second d.d Third d.d Fourth d.d
–2 9
7
-1 16 -3
1 1
0 17 0 0
1 1
1 18 4 0
13 1
3 44 8
37
4 81

Since the fourth-order differences are zero, the data represents a third degree polynomial.
Newton's divided difference formula gives the polynomial as

        f(x) = f(x0) + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
             + (x − x0)(x − x1)(x − x2) f[x0, x1, x2, x3]
             = 9 + (x + 2)(7) + (x + 2)(x + 1)(−3) + (x + 2)(x + 1)(x)(1)
             = 9 + 7x + 14 − 3x² − 9x − 6 + x³ + 3x² + 2x = x³ + 17.

Hence, f(0.5) = (0.5)³ + 17 = 17.125 and f(3.1) = (3.1)³ + 17 = 46.791.

Activity 5.4:

1. Find f (x ) as a polynomial in x for the following data by Newton’s divided


difference formula
x 1 3 4 5 7 10
f (x ) 3 31 69 131 351 1011
Hence, interpolate at x = 3.5 and x = 8.0.
2. Does the Newton’s divided difference interpolating polynomial have the permanence
property?

3. Using the divided difference formula, find f(3) given

        f(1) = −26, f(2) = 12, f(4) = 256, and f(6) = 844.

4. Using Newton’s divided difference interpolation, find y(10) given that


y(5)= 12, y(6)= 13, y(9)= 14, y(11)= 16.

5. Find f (8) by Newton’s divided difference formula, for the data


x 4 5 7 10 11 13
f (x ) 48 100 294 900 1210 2028

2 Interpolation with equally spaced points

We derive two important interpolation formulas by means of the forward and backward difference
operators.

2.1 Newton's forward interpolation formula

Given the set of (n+1) values, viz., ( x 0 , y 0 ), ( x1 , y 1 ), ( x 2 , y 2 ), ... ,( x n , y n ) of x

and y , it is required to find y n ( x ) , a polynomial of nth degree such that y and

y n( x ) agree at the tabulated points. Let the values of x be equidistant, that is,

x i−x i−1 =h , for i=1,2,3,...,n.

Therefore ,
x 1=x 0 +h , x 2=x 0 +2 h , etc.

x i=x 0 +ih , for i=1,2,3,...,n.

Since y n ( x ) is a polynomial of nth degree, it may be written as

y n ( x )=a0 +a1 ( x−x 0 )+a2 ( x−x 0 )( x−x 1 )+a3 (x −x0 )( x−x 1 )( x−x 2 )+. . .

+a n ( x−x 0 )( x−x 1 )( x−x 2 ). . .( x−x n−1 ).


( 2.1)

The (n + 1) unknowns a0, a1, a2, ..., an can be found as follows.

Put x = x0 in (2.1); we obtain yn(x0) = y0 = a0, i.e. a0 = y0 (since the other terms in (2.1)
vanish). Again put x = x1 in (2.1); we obtain

        yn(x1) = y1 = a0 + a1(x1 − x0) = y0 + a1(x1 − x0) (since the other terms in (2.1) vanish)

        y1 = y0 + a1(x1 − x0), then solving for a1 we obtain

        a1 = (y1 − y0)/(x1 − x0) = Δy0/h,  i.e.  a1 = Δy0/h

Similarly, putting x = xi for i = 2, 3, ..., n in (2.1), we obtain

        a2 = Δ²y0/(2! h²),  a3 = Δ³y0/(3! h³),  ...,  an = Δⁿy0/(n! hⁿ)

Now substituting a0 = y0, a1 = Δy0/h, a2 = Δ²y0/(2! h²), a3 = Δ³y0/(3! h³), ..., an = Δⁿy0/(n! hⁿ)
into (2.1), we obtain

        yn(x) = y0 + (Δy0/h)(x − x0) + (Δ²y0/(2! h²))(x − x0)(x − x1)
              + (Δ³y0/(3! h³))(x − x0)(x − x1)(x − x2) + ...
              + (Δⁿy0/(n! hⁿ))(x − x0)(x − x1)(x − x2)...(x − xn−1).          (2.2)

Put x = x0 + ph in (2.2); we get

        yn(x) = y0 + pΔy0 + [p(p − 1)/2!] Δ²y0 + [p(p − 1)(p − 2)/3!] Δ³y0 + ...
              + [p(p − 1)(p − 2)...(p − n + 1)/n!] Δⁿy0          (2.3)

This is Newton's forward difference interpolation formula, useful for interpolating near the
beginning of a set of tabular values.

Instead of assuming yn(x) as in (2.1), suppose we choose it in the form

y n ( x )=a0 +a1 ( x−x n )+a2 ( x−x n )( x−x n−1 )+a 3 ( x−x n )( x−x n−1 )( x−x n−2 )+.. .

+a n ( x−x n )( x−x n−1 )( x−x n−2 ).. .( x−x1 ).


( 2.4)

Then the (n + 1) unknowns a0, a1, a2, ..., an can be found as follows.

Put x = xn in (2.4); we obtain yn(xn) = yn = a0, i.e. a0 = yn (since the other terms in (2.4)
vanish). Again put x = xn−1 in (2.4); we obtain

        yn(xn−1) = yn−1 = a0 + a1(xn−1 − xn) = yn + a1(xn−1 − xn)

        yn−1 = yn + a1(xn−1 − xn), then solving for a1 we obtain

        a1 = (yn − yn−1)/(xn − xn−1) = ∇yn/h,  i.e.  a1 = ∇yn/h

Similarly, putting x = xn−i for i = 2, 3, ..., n in (2.4), we obtain

        a2 = ∇²yn/(2! h²),  a3 = ∇³yn/(3! h³),  ...,  an = ∇ⁿyn/(n! hⁿ)

Now substituting a0 = yn, a1 = ∇yn/h, a2 = ∇²yn/(2! h²), a3 = ∇³yn/(3! h³), ..., an = ∇ⁿyn/(n! hⁿ)
into (2.4), we obtain

        yn(x) = yn + (∇yn/h)(x − xn) + (∇²yn/(2! h²))(x − xn)(x − xn−1)
              + (∇³yn/(3! h³))(x − xn)(x − xn−1)(x − xn−2) + ...
              + (∇ⁿyn/(n! hⁿ))(x − xn)(x − xn−1)(x − xn−2)...(x − x1).          (2.5)

Put x = xn + ph in (2.5); we get

        yn(x) = yn + p∇yn + [p(p + 1)/2!] ∇²yn + [p(p + 1)(p + 2)/3!] ∇³yn + ...
              + [p(p + 1)(p + 2)...(p + n − 1)/n!] ∇ⁿyn.          (2.6)
This is Newton’s backward difference interpolation formula and useful for interpolating near the
end of the tabular values.

Example 2.1: Using Newton’s forward difference interpolation formula, find the form of the

function y( x) from the following table


        x       0    1    2    3
        f(x)    1    2    1    10

Solution: We have the following forward difference table for the data (Table 2.1).

        x    f(x)    Δ     Δ²    Δ³
        0    1
                     1
        1    2             −2
                     −1           12
        2    1             10
                     9
        3    10

        Table 2.1: forward differences.

Since n = 3, the cubic Newton's forward difference interpolation polynomial becomes:

        y3(x) = y0 + pΔy0 + [p(p − 1)/2!] Δ²y0 + [p(p − 1)(p − 2)/3!] Δ³y0,

where p = (x − x0)/h = (x − 0)/1 = x.

        y3(x) = 1 + x(1) + [x(x − 1)/2!](−2) + [x(x − 1)(x − 2)/3!](12)
              = 1 + x − (x² − x) + 2x(x − 1)(x − 2)
              = 1 + x − (x² − x) + 2x(x² − 3x + 2)
              = 2x³ − 7x² + 6x + 1.
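The same computation can be automated. The sketch below (illustrative names) builds the leading forward differences and sums formula (2.3):

```python
from math import factorial

def forward_difference_table(ys):
    """Return the top edge [y0, Δy0, Δ²y0, ...] of the forward difference table."""
    diffs = [list(ys)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return [d[0] for d in diffs]

def newton_forward(x0, h, ys, x):
    """Evaluate Newton's forward difference polynomial (2.3) at x."""
    top = forward_difference_table(ys)
    p = (x - x0) / h
    total, prod = top[0], 1.0
    for k in range(1, len(top)):
        prod *= p - (k - 1)          # accumulates p(p-1)...(p-k+1)
        total += prod * top[k] / factorial(k)
    return total

# Example 2.1: nodes 0,1,2,3 with y = 1,2,1,10; y3(x) = 2x³ - 7x² + 6x + 1
print(forward_difference_table([1, 2, 1, 10]))   # [1, 1, -2, 12]
print(newton_forward(0, 1, [1, 2, 1, 10], 1.5))  # 1.0
```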

Example 2.2: Find the interpolating polynomial corresponding to the data points
(1,5), (2,9), (3,14), and (4,21) using Newton's backward difference interpolation.
Solution: We have the following backward difference table for the data.

The backward differences can be written in tabular form as in Table 2.2.

        x    f(x)    ∇     ∇²    ∇³
        1    5
                     4
        2    9             1
                     5            1
        3    14            2
                     7
        4    21

        Table 2.2: backward differences.

Since n = 3, the cubic Newton's backward difference interpolation polynomial becomes:

        y3(x) = yn + p∇yn + [p(p + 1)/2!] ∇²yn + [p(p + 1)(p + 2)/3!] ∇³yn,

where p = (x − xn)/h = (x − 4)/1 = x − 4.

        y3(x) = 21 + (x − 4)(7) + [(x − 4)(x − 4 + 1)/2](2) + [(x − 4)(x − 4 + 1)(x − 4 + 2)/6](1)
              = 21 + (x − 4)(7) + [(x − 4)(x − 3)/2](2) + [(x − 4)(x − 3)(x − 2)/6](1)
              = x³/6 − x²/2 + 26x/6 + 1.

Example 2.3: The table below gives the values of tan x for 0.10 ≤ x ≤ 0.30:

        x       0.10     0.15     0.20     0.25     0.30
        f(x)    0.1003   0.1511   0.2027   0.2553   0.3093

Find:

i) tan 0.12    ii) tan 0.26

Solution: We have the following forward difference table for the data (Table 2.3).


x       f(x)       Δ         Δ²        Δ³        Δ⁴

0.10 0.1003
0.0508
0.15 0.1511 0.0008
0.0516 0.0002
0.20 0.2027 0.0010 0.0002
0.0526 0.0004
0.25 0.2553 0.0014
0.0540
0.30 0.3093

Table 2.3 forward differences.

i) To find tan 0.12 we use Newton's forward difference interpolation polynomial. We have
x = 0.12, h = xi+1 − xi = 0.05, and p = (x − x0)/h = (0.12 − 0.10)/0.05 = 0.4. Hence formula (2.3)
gives

        y4(0.12) = tan 0.12 = 0.1003 + 0.4(0.0508) + [0.4(0.4 − 1)/2](0.0008)
                 + [0.4(0.4 − 1)(0.4 − 2)/6](0.0002) + [0.4(0.4 − 1)(0.4 − 2)(0.4 − 3)/24](0.0002)
                 = 0.1205.

ii) To find tan 0.26 we use Newton's backward difference interpolation polynomial. We have
x = 0.26, h = xi+1 − xi = 0.05, and p = (x − xn)/h = (0.26 − 0.30)/0.05 = −0.8. Hence formula (2.6)
gives

        y4(0.26) = tan 0.26 = 0.3093 − 0.8(0.0540) + [−0.8(−0.8 + 1)/2](0.0014)
                 + [−0.8(−0.8 + 1)(−0.8 + 2)/6](0.0004)
                 + [−0.8(−0.8 + 1)(−0.8 + 2)(−0.8 + 3)/24](0.0002)
                 = 0.2662.

Example 2.4: Using Newton's forward difference formula, find the sum

        Sn = 1³ + 2³ + 3³ + ... + n³.

Solution: We have

        Sn+1 = 1³ + 2³ + 3³ + ... + n³ + (n + 1)³.

Hence Sn+1 − Sn = (n + 1)³, or ΔSn = (n + 1)³.

        Δ²Sn = ΔSn+1 − ΔSn = (n + 2)³ − (n + 1)³ = 3n² + 9n + 7.

It follows that

        Δ³Sn = 3(n + 1)² + 9(n + 1) + 7 − (3n² + 9n + 7) = 6n + 12,

        Δ⁴Sn = 6(n + 1) + 12 − (6n + 12) = 6.

        Δ⁵Sn = Δ⁶Sn = ... = 0, since Sn is a fourth degree polynomial in n.

Further, S1 = 1, ΔS1 = 8, Δ²S1 = 19, Δ³S1 = 18, Δ⁴S1 = 6.

Hence formula (2.3), with x = n, x0 = 1, h = 1 and p = n − 1, gives

        Sn = 1 + (n − 1)(8) + [(n − 1)(n − 2)/2](19) + [(n − 1)(n − 2)(n − 3)/6](18)
           + [(n − 1)(n − 2)(n − 3)(n − 4)/24](6)
           = (1/4)n⁴ + (1/2)n³ + (1/4)n²
           = [n(n + 1)/2]².
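The closed form can be checked against direct summation as a quick sanity test:

```python
# Verify Sn = [n(n+1)/2]² against the brute-force sum of cubes
for n in range(1, 50):
    direct = sum(k**3 for k in range(1, n + 1))
    closed = (n * (n + 1) // 2) ** 2
    assert direct == closed
print("S_n formula verified for n = 1..49")
```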
Activity 1.1:

1. Find f (x ) as a polynomial in x for the following data by Newton’s forward


difference formula
x 3 4 5 6 7 8 9
f (x ) 13 21 31 43 57 73 91
Hence, interpolate at x = 3.5 .

2. Find f (12 ) by Newton’s backward difference formula, for the data


x 4 5 7 10 11 13
f (x ) 48 100 294 900 1210 2028

3. Given

        x       0.20     0.22     0.24     0.26     0.28     0.30
        f(x)    1.6596   1.6698   1.6804   1.6912   1.7024   1.7139

Using Newton's difference interpolation formulas, find f(0.23) and f(0.29).

REVIEW PROBLEMS

1. Does the Lagrange interpolating polynomial have the permanence property?

2. For the data (xi, fi), i = 0, 1, 2, ..., n, construct the Lagrange fundamental polynomials
ℓi(x) using the information that they satisfy the conditions ℓi(xj) = 0 for i ≠ j and
ℓi(xj) = 1 for i = j.
3. Construct the Lagrange interpolating polynomials for the following functions:

        A. f(x) = e^(2x) cos 3x,    x0 = 0, x1 = 0.3, x2 = 0.6
        B. f(x) = ln x,             x0 = 1, x1 = 1.1, x2 = 1.3, x3 = 1.4
4.Use appropriate Lagrange interpolating polynomials of degrees one, two, and three to
approximate each of the following:

A. f (8.4) if f (8.1)= 16.94410, f (8.3)= 17.56492, f (8.6)= 18.50515, f (8.7 )=


18.82091
        B. f(−1/3) if f(−0.75) = −0.07181250, f(−0.5) = −0.02475000, f(−0.25) = 0.33493750

5. Find the unique polynomial P(x) of degree 2 or less such that P(1) = 1, P(3) = 27, P(4) = 64,
using each of the following methods:

(i) Lagrange interpolation formula, and

(ii) Newton divided difference formula. Evaluate P(1.5).

6. Use the Lagrange and the Newton divided difference formulas to calculate f(3) from the
following table:
x 0 1 2 4 5 6
f (x ) 1 14 15 5 6 19

CHAPTER 5: CURVE FITTING

5.1 Introduction

Data is often given for discrete values along a continuum. However, estimates of points between
these discrete values may be required. One way to do this is to formulate a function that fits these
values approximately. This application is called curve fitting. There are two general approaches
to curve fitting.

 The first is to derive a single curve that represents the general trend of the data. One method
of this nature is the least-squares regression.
 The second approach is interpolation which is a more precise one. The basic idea is to fit a
curve or a series of curves that pass directly through each of the points.
5.2 Least squares regression

5.2.1 Linear regression

The simplest example of the least squares approximation is fitting a straight line to a set of paired
observations: (x1,y1),(x2,y2)……(xn, yn). The mathematical expression for the straight line is

y = ao + a1x+ e

where ao, and a1 are coefficients representing the y-intercept and the slope of the line respectively
while e is the error or residual between the model and the observations, which can be
represented as

e = y - ao - a1x

Thus the error is the discrepancy between the true (observed) value of y and the approximate value
ao + a1x predicted by the linear equation. Any strategy for approximating a set of data by a linear
equation (best fit) should minimize the sum of the residuals. The least squares fit of a straight
line minimizes the sum of the squares of the residuals:
        Sr = Σ_{i=1}^{n} ei² = Σ_{i=1}^{n} (yi,measured − yi,model)² = Σ_{i=1}^{n} (yi − ao − a1xi)²          (5.1)

To determine the values of ao and a1, differentiate (5.1) with respect to each coefficient

∂ Sr
=−2 ∑ ( y i −a o−a1 x i )
∂ ao
∂ Sr
=−2 ∑ [( y i −ao −a1 x i ) xi ]
∂ a1
(5.2)

Setting the derivatives equal to zero will result in a minimum Sr. The equations can then be
expressed as
∑ y i −∑ a o−∑ a1 x i=0
∑ y i x i−∑ ao x i −∑ a1 x 2i =0 (5.3)

Solving for ao and a1 simultaneously

n ∑ xi y i −∑ x i ∑ y i
a1 = 2
n ∑ x 2i −( ∑ x i )
(5.4)

        ao = ȳ − a1 x̄          (5.5)

where ȳ and x̄ are the means of the y and x values, respectively.
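Equations (5.4) and (5.5) translate directly into code. A minimal sketch (the function name is illustrative):

```python
def linear_fit(xs, ys):
    """Least squares line y = a0 + a1*x via the normal equations (5.4)-(5.5)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope, eq. (5.4)
    a0 = sy / n - a1 * sx / n                        # intercept, eq. (5.5)
    return a0, a1

# Data lying exactly on y = 2x + 1 is recovered exactly
a0, a1 = linear_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(a0, a1)   # 1.0 2.0
```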

Linearization of non-linear relationships

Linear regression provides a powerful technique for fitting a “best” line to a data. However it is
predicated on the fact that the relationship between the independent and dependent variables is
linear. But usually this is not the case. Visual inspection of the plot of the data will provide
useful information whether linear regression is acceptable. In situations where linear regression
is inadequate other methods such as polynomial regression are appropriate. For others,
transformations can be used to express the data in a form that is compatible with linear
regression.

The following are examples of functions which can be linearized:

i) Exponential functions

        y = a1 e^(b1 x)

where a1 and b1 are constants

This function can be linearized by taking the natural logarithm of both sides of the equation

ln y=ln a1 +b 1 x ln e
ln y=ln a1 +b 1 x

The plot of ln y versus x will yield a straight line with a slope of b1 and an intercept of ln a1.
ii) Power functions

        y = a2 x^(b2)

where a2 and b2 are constant coefficients.

This equation can be linearized by taking its base 10 logarithm to give

log y=b 2 log x+ log a2

The plot of log y versus log x will yield a straight line with a slope of b2 and an intercept of log a2.

iii Saturation growth rate equation

        y = a3 · x / (b3 + x)

this equation can be linearized by inverting to give

        1/y = (b3/a3)(1/x) + 1/a3

the plot of 1/y versus 1/x will be linear, with a slope of b3/a3 and an intercept of 1/a3.

In their transformed forms, these models are fit using linear regression in order to evaluate the
constant coefficients. Then they can be transformed back to their original state and used for
predictive purposes.
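As an illustration of the transformation approach, the sketch below (hypothetical function name; it assumes all y values are positive) fits the exponential model by regressing ln y on x and then transforming back:

```python
import math

def exponential_fit(xs, ys):
    """Fit y = a1 * exp(b1*x) by linear regression on ln(y) (assumes y > 0)."""
    lys = [math.log(y) for y in ys]
    n = len(xs)
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope of ln y vs x
    ln_a1 = sy / n - b1 * sx / n                     # intercept = ln a1
    return math.exp(ln_a1), b1

# Data generated from y = 3*exp(0.5x) is recovered (up to rounding)
xs = [0, 1, 2, 3]
ys = [3 * math.exp(0.5 * x) for x in xs]
a1, b1 = exponential_fit(xs, ys)
print(round(a1, 6), round(b1, 6))   # ≈ 3.0 0.5
```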

5.2.2 Polynomial regression

Some engineering data, although exhibiting marked pattern, is poorly represented by straight
line. For these cases a curve would be better suited to fit the data. One of the possible ways of
solving this kind of problems is to fit the data by a polynomial function. This is called
polynomial regression. The least squares method can be extended to fit data to a higher-order
polynomial. Suppose we want to fit a second order polynomial:
        y = ao + a1x + a2x² + e

For this case the sum of the squares of the residuals is

        Sr = Σ (yi − ao − a1xi − a2xi²)²

Taking the derivative of the above equation with respect to each unknown coefficients of the
polynomial gives

        ∂Sr/∂ao = −2 Σ (yi − ao − a1xi − a2xi²)
        ∂Sr/∂a1 = −2 Σ xi (yi − ao − a1xi − a2xi²)
        ∂Sr/∂a2 = −2 Σ xi² (yi − ao − a1xi − a2xi²)

Setting these equations equal to zero and rearranging to develop the following set of equations

        n·ao + (Σxi)a1 + (Σxi²)a2 = Σyi
        (Σxi)ao + (Σxi²)a1 + (Σxi³)a2 = Σxiyi
        (Σxi²)ao + (Σxi³)a1 + (Σxi⁴)a2 = Σxi²yi

Solving for the coefficients of the quadratic regression is equivalent to solving three
simultaneous linear equations. The techniques for solving these problems are discussed in
chapter two.

This discussion can easily be extended to an mth order polynomial as


2 m
y=ao + a1 x+ a2 x +. ..+ am x +e

Thus determination of the coefficients of an mth order polynomial is equivalent to solving a


system of m+1 simultaneous linear equations. For this case the standard error is formulated as

        Sy/x = √( Sr / (n − (m + 1)) )
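Solving the normal equations for an mth-order polynomial can be sketched with NumPy (illustrative names; `np.linalg.solve` handles the resulting (m+1)×(m+1) system):

```python
import numpy as np

def poly_fit(xs, ys, m):
    """Least squares polynomial of degree m via the normal equations."""
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    # Design matrix with columns 1, x, x², ..., x^m
    Z = np.vander(x, m + 1, increasing=True)
    coeffs = np.linalg.solve(Z.T @ Z, Z.T @ y)
    return coeffs   # [a0, a1, ..., am]

# Data on y = 1 - 2x + 3x² is recovered (up to rounding)
xs = [0, 1, 2, 3, 4]
ys = [1 - 2 * x + 3 * x**2 for x in xs]
print(poly_fit(xs, ys, 2))   # ≈ [1, -2, 3]
```

In practice the normal equations can be ill-conditioned for high degrees; a QR-based solver such as `np.linalg.lstsq` is the more robust choice, but the direct form mirrors the derivation above.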

5.2.3 Multiple linear regression

A useful extension of linear regression is the case where y is a linear function of more than one
variable, say x1 and x2,

y=ao + a1 x 1 +a 2 x 2 +e

Such an equation is useful when fitting experimental data where the variable being studied is a
function of two other variables.

In the same manner as the previous cases the best values of the coefficients are determined by
setting the sum of the squares of the residuals to a minimum.

        Sr = Σ (yi − ao − a1x1i − a2x2i)²

Differentiating with respect to the unknown coefficients, we have

        ∂Sr/∂ao = −2 Σ (yi − ao − a1x1i − a2x2i)
        ∂Sr/∂a1 = −2 Σ x1i (yi − ao − a1x1i − a2x2i)
        ∂Sr/∂a2 = −2 Σ x2i (yi − ao − a1x1i − a2x2i)

The coefficients yielding the minimum sum of the residuals are obtained by setting the partial
derivatives equal to zero and expressing the result in a matrix form as

        | n      Σx1i      Σx2i     | | ao |   | Σyi    |
        | Σx1i   Σx1i²     Σx1ix2i  | | a1 | = | Σx1iyi |
        | Σx2i   Σx1ix2i   Σx2i²    | | a2 |   | Σx2iyi |
The above case can be extended to m dimension,

y=ao + a1 x 1 +a 2 x 2 + .. .+am x m+ e

where the standard error is formulated as

        Sy/x = √( Sr / (n − (m + 1)) )

The coefficient of determination is computed as in equation (4.7).

Multiple linear regression has utility in the derivation of power equations of the general form
        y = ao · x1^(a1) · x2^(a2) · ... · xm^(am)

Transformation of this form of equation can be achieved by taking the logarithm of the equation.

5.2.4 General linear least squares

In the preceding discussions we have seen three types of regression: linear, polynomial and
multiple linear. All these belong to the general linear least squares model given by

y=ao z o +a1 z 1 +a 2 z2 +.. .+a m z m+e (4.9)

where zo, z1, z2, .....,zm are m+1 different functions. For multiple linear regression z o = 1, z1 =x1,
z2 = x2, ...., zm = xm. For polynomial regression, the z's are simple monomials as in z o= 1, z1 = x,
z2 = x2 , ..., zm = xm.

The terminology linear refers only to the model's dependence on its parameters i.e., the a's.
Equation (4.9) can be expressed in a matrix form as

{ y }=[ z ] { A }+ { E }
where [z] is a matrix of calculated values of the z functions at the measured values of the
independent variables.

        [Z] = | zo1   z11   ...   zm1 |
              | zo2   z12   ...   zm2 |
              |  ⋮     ⋮           ⋮  |
              | zon   z1n   ...   zmn |
where m is the number of variables in the model and n is the number of data points. Because

n ≥m+1 , most of the time [z] is not a square matrix.

The column vector {y} contains the observed values of the dependent variable.

{y}T = ⌊ y 1 , y 2 , .. . , y n ⌋
The column vector {A} contains the unknown coefficients

{A}T= ⌊ a o , a1 , .. . , am ⌋
the column vector {E} contains the residuals

{E}T = ⌊ e 1 , e2 ,. .. , e n ⌋
The sum of the squares of the residuals can be defined as

        Sr = Σ_{i=1}^{n} ( yi − Σ_{j=0}^{m} aj zji )²
This quantity can be minimized by taking its partial derivative with respect to each of the
coefficients and setting the resulting equation equal to zero. The outcome of this process can be
expressed in matrix form as

59 | P a g e
[ [ Z ] T [ Z ] ] { A }={ [ Z ]T { y } }
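The normal equations [Z]ᵀ[Z]{A} = [Z]ᵀ{y} can be solved directly. The sketch below (illustrative names) accepts an arbitrary list of basis functions z_j, which covers simple, polynomial, and multiple linear regression as special cases:

```python
import numpy as np

def general_linear_fit(z_funcs, xs, ys):
    """Solve [Z^T Z]{A} = [Z^T]{y} for y = a0*z0(x) + a1*z1(x) + ... + am*zm(x)."""
    # Each basis function evaluated at all data points forms one column of [Z]
    Z = np.column_stack([[f(x) for x in xs] for f in z_funcs])
    return np.linalg.solve(Z.T @ Z, Z.T @ np.asarray(ys, dtype=float))

# Fit y = a0 + a1*sin(x) + a2*x² with basis z0 = 1, z1 = sin x, z2 = x²
basis = [lambda x: 1.0, np.sin, lambda x: x**2]
xs = np.linspace(0, 3, 20)
ys = 2 + 0.5 * np.sin(xs) - 1.5 * xs**2
print(general_linear_fit(basis, xs, ys))   # ≈ [2, 0.5, -1.5]
```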
