Open Access Library Journal

Numerical Solutions of Initial Value Ordinary Differential Equations Using Finite Difference Method

Negesse Yizengaw
Mathematics Department, University of Gondar, Gondar, Ethiopia
Email: [email protected]

Received 25 May 2015; accepted 12 June 2015; published 23 June 2015

Copyright © 2015 by author and OALib.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract
Initial value ordinary differential equations arise in the formulation of problems in various fields such as physics and engineering. The present paper shows how to solve an initial value ordinary differential equation on an interval by the finite difference method to high accuracy, together with a formulation of the error estimate.

Keywords
Ordinary Differential Equation, Finite Difference Method, Interpolation, Error Estimation

Subject Areas: Mathematical Analysis, Ordinary Differential Equation

1. Introduction
Differential equations are used to model problems in science and engineering that involve the change of some variable with respect to another. Most of these problems require the solution of an initial-value problem, that is, the solution to a differential equation that satisfies a given initial condition. In common real-life situations, the differential equation that models the problem is too complicated to solve exactly [1]. There are numerical methods which simplify such problems, one of which is the finite difference method, a numerical procedure that solves a differential equation by discretizing the continuous physical domain into a discrete finite difference grid [2]. Finite difference methods are very suitable when the functions being dealt with are smooth and the differences decrease rapidly with increasing order, as discussed by Collatz, L. [3]: calculations with these methods are best carried out with a fairly small step length. Suppose that the first order initial value differential equation

    y' = f(x, y),   y(x0) = y0                                                    (1.1)

How to cite this paper: Yizengaw, N. (2015) Numerical Solutions of Initial Value Ordinary Differential Equations Using Finite
Difference Method. Open Access Library Journal, 2: e1614. http://dx.doi.org/10.4236/oalib.1101614

is integrated numerically by dividing the interval [x0, b], on which the solution is desired, into a finite number of subintervals

    x0 < x1 < x2 < ... < x_p = b.

The points are called mesh points or grid points. The spacing between the points is given by

    h_r = x_{r+1} - x_r,   r = 0, 1, ..., p-1.

If the spacing is uniform, then h_r = h = constant for every r. For this discussion, consider the case of a uniform mesh only. Let the range of integration be covered by the equally spaced points x0, x1, ..., x_p with the constant difference h = Δx_r = x_{r+1} - x_r (the step length), and let y_r be an approximation to the value y(x_r) of the exact solution at the point x_r. The finite difference methods are based on the integrated form

    y(x_{r+1}) = y(x_r) + ∫_{x_r}^{x_{r+1}} f(x, y(x)) dx.                        (1.2)

This is obtained by integrating Equation (1.1) over the interval [x_r, x_{r+1}]; the aim of the finite difference method is to approximate this integral as accurately as possible. Denote the numerical solution and the exact solution at x_r by y_r and y(x_r) respectively. Suppose that the integration has already been carried as far as the point x = x_r, so that the approximations y_1, ..., y_{r-2}, y_{r-1}, y_r, and hence also the approximate values f_r = f(x_r, y_r), are known. The aim is to calculate y_{r+1}.
Since f(x, y(x)) cannot be integrated without knowing y(x), which is the solution to the problem, we instead integrate an interpolating polynomial P(x) determined by some of the previously obtained data points (x0, y0), (x1, y1), ..., (x_r, y_r). Assuming, in addition, that y(x_r) ≈ y_r and ∫_{x_r}^{x_{r+1}} f(x, y(x)) dx ≈ ∫_{x_r}^{x_{r+1}} P(x) dx, then

    y(x_{r+1}) ≈ y_r + ∫_{x_r}^{x_{r+1}} P(x) dx.                                 (1.3)

The interpolating polynomial takes the values f_r at a certain number of points x_r, and this polynomial is then integrated over the interval x_r to x_{r+1}; we therefore need a sequence of approximations f_r. If the solution at any point x_{r+1} is obtained using the solution at only the previous points, then the method is called an explicit method. If the right hand side of (1.2) depends on y_{r+1} also, then it is called an implicit method. According to [4], a general p-step explicit method can be written as

    y_{r+1} = y_r + h φ(x_{r-p+1}, ..., x_{r-1}, x_r, y_{r-p+1}, ..., y_{r-1}, y_r, h),

and a general p-step implicit method can be written as

    y_{r+1} = y_r + h φ(x_{r-p+1}, ..., x_r, x_{r+1}, y_{r-p+1}, ..., y_r, y_{r+1}, h).
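To make the distinction concrete, the following sketch advances one solution with an explicit one-step method (forward Euler) and another with an implicit one-step method (the trapezoidal rule), resolving the implicit relation by fixed-point iteration. The model problem y' = -y with exact solution e^(-x) is an illustrative choice, not one taken from the paper.

```python
import math

def f(x, y):
    # Assumed model problem y' = -y, exact solution y(x) = e^(-x).
    return -y

def explicit_euler_step(xr, yr, h):
    # Explicit method: y_{r+1} depends only on already-known values.
    return yr + h * f(xr, yr)

def implicit_trapezoid_step(xr, yr, h, iters=20):
    # Implicit method: y_{r+1} appears on both sides of the formula;
    # here it is resolved by fixed-point iteration from an Euler predictor.
    y_next = yr + h * f(xr, yr)
    for _ in range(iters):
        y_next = yr + 0.5 * h * (f(xr, yr) + f(xr + h, y_next))
    return y_next

h, x, ye, yi = 0.1, 0.0, 1.0, 1.0
for r in range(10):                      # integrate from x = 0 to x = 1
    ye = explicit_euler_step(x, ye, h)
    yi = implicit_trapezoid_step(x, yi, h)
    x += h

exact = math.exp(-1.0)
# The implicit (second-order) result is markedly closer to the exact value.
assert abs(yi - exact) < abs(ye - exact)
```

The fixed-point loop converges here because |h/2 · ∂f/∂y| = 0.05 is well below 1; for stiffer problems a Newton iteration would normally replace it.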
The objective of the finite difference method for solving an ordinary differential equation is to transform a calculus problem into an algebra problem [5]. Consequently the finite-difference methods consist of two distinct stages:
I) Approximations y1, y2, ..., the "starting values" (we reserve "initial values" for values at the initial point x = x0), sufficiently many to calculate the values f_r required for the first application of the finite difference formula, are obtained by some other means.
II) The solution is continued step by step by the finite-difference formulae; these give the value of y at the point x_{r+1} once the values at x_r, x_{r-1}, ... are known (the "main calculation").
The approximate solution in the finite difference method converges to the true solution (convergence). If f(x, y) satisfies a Lipschitz condition, i.e.

    |f(x, y) - f(x, y*)| ≤ L |y - y*|,   L being a constant,

then the sequence of approximations to the numerical solution converges to the exact solution [6]. A finite difference method is convergent if the numerical solution approaches the exact solution as Δx → 0.
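The convergence statement can be checked numerically: for a first-order method the error at a fixed point should shrink roughly in proportion to the step. The sketch below does this for explicit Euler on the assumed model problem y' = -y (an illustrative choice, not from the paper).

```python
import math

def euler_error(h):
    # Integrate y' = -y, y(0) = 1 on [0, 1] with explicit Euler and
    # return the error at x = 1 against the exact solution e^(-1).
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * (-y)
    return abs(y - math.exp(-1.0))

e1, e2, e3 = euler_error(0.1), euler_error(0.05), euler_error(0.025)
# Errors shrink as the mesh is refined (convergence as Δx → 0) ...
assert e1 > e2 > e3
# ... and roughly halve with the step, as expected of a first-order method.
assert 1.8 < e1 / e2 < 2.2 and 1.8 < e2 / e3 < 2.2
```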

2. Calculation of Starting Values


The starting values needed for the main calculation can be obtained in a variety of ways. Particular care must be
exercised in the calculation of these starting values, for the whole calculation can be rendered useless by inac-
curacies in them. Several possible ways of obtaining starting values are mentioned below:


2.1. Using Some Other Method of Integration

Provided that it is sufficiently accurate, any method of integration which does not require starting values (as distinct from initial values) can be used. Bearing in mind the high accuracy desired, one would normally choose the Runge-Kutta method; further, one would preferably work with a step of half the length to be used in the main calculation and carry a greater number of decimals.

2.2. Using the Taylor Series for y(x)

If the function f(x, y) is of simple analytical form, the derivatives y'(x0), y''(x0), y'''(x0), ... can be determined by differentiation of the differential equation; starting values can then be calculated from the Taylor series

    y(x_v) = y(x0 + vh) = y(x0) + vh y'(x0) + ((vh)^2 / 2!) y''(x0) + ...         (1.4)

of which as many terms are taken as are necessary for the truncation not to affect the last decimal carried (always assuming that the series converges). Several of the finite difference methods need three starting values, and for these it suffices to use (1.4) for v = ±1; this usually possesses advantages over using (1.4) for v = 1, 2, particularly as regards convergence.
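The Taylor-series route can be sketched concretely. For the assumed test equation y' = x + y, y(0) = 1 (an illustrative choice with the closed-form solution y = 2e^x - x - 1), the higher derivatives follow by repeatedly differentiating the ODE: y'' = 1 + y', y''' = y'', and so on.

```python
import math

x0, y0, h = 0.0, 1.0, 0.1
d1 = x0 + y0            # y'(x0)   from the ODE itself
d2 = 1.0 + d1           # y''(x0)  = 1 + y'
d3 = d2                 # y'''(x0) = y''
d4 = d3                 # y''''(x0) = y'''

def taylor_start(v):
    # Truncated series (1.4) for the starting value y(x0 + v*h).
    t = v * h
    return y0 + t*d1 + t**2/2*d2 + t**3/6*d3 + t**4/24*d4

exact = lambda x: 2.0 * math.exp(x) - x - 1.0   # closed-form solution
for v in (1, 2):
    # Four series terms already reproduce the starting values to ~1e-5.
    assert abs(taylor_start(v) - exact(x0 + v*h)) < 1e-4
```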

2.3. Using Quadrature Formulae

Using the forward difference relation, we have

    y_{r+h} = y_r + h(f_r + (1/2)∇f_{r+1} - (1/12)∇²f_{r+2} + ...)
    ⟹ y1 = y0 + h(f0 + (1/2)∇f1 - (1/12)∇²f2 + ...).

Here a procedure suitable for the construction of two (y1, y2) or three (y1, y2, y3) starting values can be given. The procedure is completely described by the following formulae.
1) y1 = y0 + h f0, consequently f1 = f(x1, y1); ∇f1 = f1 - f0.
2) y1^[0] = y0 + h(f0 + (1/2)∇f1).
Again we have the relation

    y_{r+h} = y_{r-h} + h(2f_r + (1/3)∇²f_{r+1} - (1/90)∇⁴f_{r+2} + ...)
    ⟹ y2 = y0 + h(2f1 + (1/3)∇²f2 - (1/90)∇⁴f3 + ...).

From these there is also
3) y2^[0] = y0 + 2h f1^[0], where f1^[0] = f(x1, y1^[0]).

Improving these, the following three starting values (y1, y2, y3) can be obtained:
4) y1^[1] = y0 + h(f0 + (1/2)∇f1^[0] - (1/12)∇²f2^[0]).
5)
    y2^[1] = y0 + h(2f1^[0] + (1/3)∇²f2^[0]),
    y3^[1] = y1^[1] + h(2f2^[1] + (1/3)∇²f3^[1]).                                 (1.6)

Generally, for v = 1, 2, ... (or v = 0, 1, ...),

    y1^[v+1] = y0 + h(f0 + (1/2)∇f1^[v] - (1/12)∇²f2^[v] + (1/24)∇³f3^[v]),
    y2^[v+1] = y0 + h(2f1^[v] + (1/3)∇²f2^[v]),                                   (1.7)
    y3^[v+1] = y1^[v+1] + h(2f2^[v] + (1/3)∇²f3^[v]).


In this way the three y values are improved alternately, and the function values f_j^[v] = f(x_j, y_j^[v]) and their differences are revised at each pass. This starting process should be carried out with a sufficiently small step length.
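A minimal sketch of this starting-value iteration, for the two-value case, might look as follows. The model problem y' = -y, y(0) = 1 is an assumed illustrative choice; an Euler predictor supplies provisional y1, y2, which the correction formulae of this section then refine.

```python
import math

f = lambda x, y: -y          # assumed model problem, exact solution e^(-x)
x0, y0, h = 0.0, 1.0, 0.1
f0 = f(x0, y0)

y1 = y0 + h * f0                        # step 1: Euler predictor for y1
y2 = y0 + 2.0 * h * f(x0 + h, y1)       # step 3: midpoint predictor for y2

for _ in range(5):                      # iterate the corrections until settled
    f1 = f(x0 + h, y1)
    f2 = f(x0 + 2*h, y2)
    df1 = f1 - f0                       # forward difference ∇f1
    d2f2 = f2 - 2.0*f1 + f0             # second difference  ∇²f2
    y1 = y0 + h * (f0 + 0.5*df1 - d2f2/12.0)   # corrected y1
    y2 = y0 + h * (2.0*f1 + d2f2/3.0)          # corrected y2

for v, y in ((1, y1), (2, y2)):
    assert abs(y - math.exp(-(x0 + v*h))) < 1e-4
```

Expanding the differences shows the y1 correction is a third-order Adams-type quadrature and the y2 correction is Simpson's rule over [x0, x0 + 2h], which is why a handful of sweeps already gives starting values accurate to several decimals.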

3. Formulae for the Main Calculation


The next approximate value y_{r+1} can be obtained once the values y1, y2, ..., y_r at the points x1, x2, ..., x_r have been computed. To do this, the following methods are used.

3.1. The Adams Extrapolation Method

In the extrapolation methods the function f(x, y(x)) is first replaced by the interpolation polynomial P(x) which takes the values f_{r-p}, ..., f_{r-1}, f_r at the points x_{r-p}, ..., x_{r-1}, x_r [where f_l = f(x_l, y_l)]. The integral can then be evaluated, and with y_{r+1} and y_r replacing y(x_{r+1}) and y(x_r), (1.2) becomes

    y_{r+1} = y_r + h(f_r + (1/2)∇f_r + (5/12)∇²f_r + (3/8)∇³f_r + (251/720)∇⁴f_r + ...).    (1.8)
The exact solution y(x) satisfies the corresponding exact form

    y(x_{r+1}) = y(x_r) + h[f(x_r, y(x_r)) + (1/2)∇f(x_r, y(x_r)) + (5/12)∇²f(x_r, y(x_r))
                 + (3/8)∇³f(x_r, y(x_r)) + (251/720)∇⁴f(x_r, y(x_r)) + ...] + s_{p+1}        (1.9)
where s_{p+1} is the remainder term; it is estimated by integrating the remainder of the Newton forward interpolation formula. For x = x0 + uh we have

    s_{p+1} = ∫_{x0}^{x0+h} [u(u+1)...(u+p) / (p+1)!] h^{p+1} f^{(p+1)}(ξ) dx
            = (h^{p+2} / (p+1)!) f^{(p+1)}(ξ) ∫_0^1 u(u+1)...(u+p) du
            ≤ (h^{p+2} / (p+1)!) max|f^{(p+1)}| ∫_0^1 u(u+1)...(u+p) du.
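Formula (1.8) truncated after the third difference is the classical four-step Adams extrapolation (Adams-Bashforth) scheme. The sketch below applies it to the assumed model problem y' = -y, taking the first four starting values from the exact solution purely for illustration.

```python
import math

f = lambda x, y: -y                     # assumed model problem
h = 0.1
xs = [r * h for r in range(4)]
ys = [math.exp(-x) for x in xs]         # exact starting values (illustrative)
fs = [f(x, y) for x, y in zip(xs, ys)]

for r in range(3, 40):                  # main calculation out to x = 4
    d1 = fs[r] - fs[r-1]                            # ∇f_r
    d2 = fs[r] - 2*fs[r-1] + fs[r-2]                # ∇²f_r
    d3 = fs[r] - 3*fs[r-1] + 3*fs[r-2] - fs[r-3]    # ∇³f_r
    y_next = ys[r] + h*(fs[r] + d1/2 + 5*d2/12 + 3*d3/8)   # formula (1.8)
    xs.append(xs[r] + h)
    ys.append(y_next)
    fs.append(f(xs[-1], y_next))

# Fourth-order accuracy: the error at x = 4 is far below the step size.
assert abs(ys[-1] - math.exp(-xs[-1])) < 1e-4
```

Expanding the differences recovers the familiar coefficients (55, -59, 37, -9)/24 on f_r, ..., f_{r-3}, so this is an explicit method in the sense of Section 1: each new value uses only previously computed data.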

3.2. The Adams Interpolation Method

Here the integrand f(x, y(x)) in Equation (1.1) is replaced by the polynomial P*(x) which takes the values f_{r-p+1}, ..., f_r, f_{r+1} at the points x_{r-p+1}, ..., x_{r-1}, x_r, x_{r+1}; from the quadrature formula it then follows that

    y_{r+1} = y_r + h(f_{r+1} - (1/2)∇f_{r+1} - (1/12)∇²f_{r+1} - (1/24)∇³f_{r+1} - ...).    (1.10)
For the exact solution y(x) we have the corresponding formula

    y(x_{r+1}) = y(x_r) + h[f(x_{r+1}) - (1/2)∇f(x_{r+1}) - (1/12)∇²f(x_{r+1}) - ...] + s*_{p+1}    (1.11)

with the remainder term s*_{p+1}, for which an estimate is given by

    s*_{p+1} = ∫_{x0}^{x0+h} [(u-1)u(u+1)...(u+p-1) / (p+1)!] h^{p+1} f^{(p+1)}(ξ) dx
             ≤ (h^{p+2} / (p+1)!) max|f^{(p+1)}| ∫_0^1 |(u-1)u...(u+p-1)| du.
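Because f_{r+1} appears on the right of (1.10), the formula is implicit; a common way to use it is as a corrector after a crude explicit predictor. The sketch below does this with (1.10) truncated after ∇², again on the assumed model problem y' = -y with one exact starting value taken for illustration.

```python
import math

f = lambda x, y: -y                     # assumed model problem
h = 0.1
xs = [0.0, h]
ys = [1.0, math.exp(-h)]                # exact starting value (illustrative)
fs = [f(x, y) for x, y in zip(xs, ys)]

for r in range(1, 20):                  # main calculation out to x = 2
    x_next = xs[r] + h
    y_new = ys[r] + h * fs[r]           # explicit Euler predictor
    for _ in range(3):                  # correct: resolve f_{r+1} implicitly
        f_next = f(x_next, y_new)
        d1 = f_next - fs[r]                     # ∇f_{r+1}
        d2 = f_next - 2*fs[r] + fs[r-1]         # ∇²f_{r+1}
        y_new = ys[r] + h*(f_next - d1/2 - d2/12)   # formula (1.10)
    xs.append(x_next)
    ys.append(y_new)
    fs.append(f(x_next, y_new))

assert abs(ys[-1] - math.exp(-xs[-1])) < 1e-4
```

Expanding the differences gives the weights (5 f_{r+1} + 8 f_r - f_{r-1})/12, a third-order interpolation (Adams-Moulton type) step; the few corrector sweeps converge quickly because h·|∂f/∂y| is small here.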


3.3. Central Difference Interpolation Method

If we integrate both sides of Equation (1.1) over the interval x_r - h to x_r + h, using Stirling's interpolation formula, we obtain (with p even)

    y(x_{r+1}) - y(x_{r-1}) = h[2f(x_r) + (1/3)∇²f(x_{r+1}) - (1/90)∇⁴f(x_{r+2}) + (1/756)∇⁶f(x_{r+3}) - ...].

If the remainder term is neglected, the approximation y_{r+1} is

    y_{r+1} = y_{r-1} + h(2f_r + (1/3)∇²f_{r+1} - (1/90)∇⁴f_{r+2} + (1/756)∇⁶f_{r+3} - ...).    (1.12)
Usually this formula is truncated after the term in ∇², which gives Simpson's rule:

    y_{r+1} = y_{r-1} + h(2f_r + (1/3)∇²f_{r+1})
            = y_{r-1} + h(2f_r + (1/3)∇(f_{r+1} - f_r))
            = y_{r-1} + h(2f_r + (1/3)(f_{r+1} - f_r - f_r + f_{r-1}))

    y_{r+1} = y_{r-1} + (h/3)(f_{r-1} + 4f_r + f_{r+1}).                          (1.13)
An estimate for the remainder term s2** in the corresponding formula for the exact solution

    y(x_{r+1}) = y(x_{r-1}) + h[2f(x_r, y(x_r)) + (1/3)∇²f(x_{r+1}, y(x_{r+1}))] + s2**        (1.14)

is given by

    |s2**| ≤ (h^5 / 90) max|f^(iv)|.
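Step (1.13) is likewise implicit in f_{r+1}; a short fixed-point loop per step resolves it. The sketch below uses the assumed model problem y' = -y with one exact starting value, again purely for illustration.

```python
import math

f = lambda x, y: -y                     # assumed model problem
h = 0.1
xs = [0.0, h]
ys = [1.0, math.exp(-h)]                # exact starting value (illustrative)

for r in range(1, 20):                  # main calculation out to x = 2
    x_next = xs[r] + h
    y_next = ys[r] + h * f(xs[r], ys[r])        # Euler predictor
    for _ in range(10):                          # resolve implicit f_{r+1}
        y_next = ys[r-1] + (h/3.0) * (f(xs[r-1], ys[r-1])
                                      + 4.0 * f(xs[r], ys[r])
                                      + f(x_next, y_next))   # formula (1.13)
    xs.append(x_next)
    ys.append(y_next)

assert abs(ys[-1] - math.exp(-xs[-1])) < 1e-4
```

Note that, being a two-step central formula, (1.13) needs an accurate y1 in addition to y0, which is exactly the role of the starting-value procedures of Section 2.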

4. Recursive Error Estimates

This section describes how the error is estimated for the finite difference methods. Care must be taken that the number of decimals carried in the calculation is sufficient for rounding errors to be negligible.
I) Taylor series method: If the necessary starting values are calculated by the Taylor series method, the error can usually be estimated very easily; the maximum rounding error, i.e. (1/2) × 10^(-d) for a d-decimal number, will often provide a suitable upper bound [7].
II) Quadrature formulae: If the iterative quadrature formulae (1.6) are used to obtain the starting values, the error can be estimated as follows. For the exact solution we have

    y(x1) = y0 + h(F0 + (1/2)∇F1 - (1/12)∇²F2 + (1/24)∇³F3) + s1,
    y(x2) = y0 + h(2F1 + (1/3)∇²F2) + s2,                                         (1.15)
    y(x3) = y(x1) + h(2F2 + (1/3)∇²F3) + s3,

in which F_v = f(x_v, y(x_v)), and the s_p have the form

    |s1| ≤ (h^{p+2} / (p+1)!) max|f^{(p+1)}| ∫_0^1 (u-1)u...(u+p-1) du


at three points (p = 3):

    |s1| ≤ (h^5 / 4!) max|f^(4)| |∫_0^1 (u-1)u(u+1)(u+2) du|
         = (h^5 / 4!) max|f^(4)| |∫_0^1 (u^4 + 2u^3 - u^2 - 2u) du|
         = (h^5 / 4!) max|f^(4)| |[u^5/5 + u^4/2 - u^3/3 - u^2]_0^1|
         = (19 h^5 / (30 × 4!)) max|f^(4)| = (19 h^5 / 720) max|f^(4)|.           (1.16)

Similarly,

    |s2| ≤ (h^5 / 90) max|f^(iv)|,   |s3| ≤ (h^5 / 90) max|f^(iv)|.               (1.17)
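The bound (1.17), which is the Simpson remainder over one double step, can be checked numerically. The integrand f(x) = e^(-x) on [0, 2h] is an assumed test case: its fourth derivative is bounded by 1 there, so the bound evaluates to h^5/90.

```python
import math

h = 0.1
exact = 1.0 - math.exp(-2.0 * h)                 # ∫_0^{2h} e^(-x) dx, in closed form
simpson = (h / 3.0) * (1.0 + 4.0 * math.exp(-h) + math.exp(-2.0 * h))
bound = (h**5 / 90.0) * 1.0                      # (h^5/90)·max|f^(iv)|, max = 1 here
# The actual quadrature error indeed sits inside the theoretical bound.
assert abs(simpson - exact) <= bound
```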

III) Adams interpolation method: Let us investigate the Adams interpolation method, which is based on the formula (1.10):

    y_{r+1} = y_r + h(f_{r+1} - (1/2)∇f_{r+1} - (1/12)∇²f_{r+1} - (1/24)∇³f_{r+1} - ...).

A similar relation, but with a remainder term s*_{p+1}, holds for the exact solution:

    y(x_{r+1}) = y(x_r) + h(f_{r+1} - (1/2)∇f_{r+1} - (1/12)∇²f_{r+1} - (1/24)∇³f_{r+1} - ...) + s*_{p+1}.

The truncation error is then ε_r = y(x_r) - y_r = s*_{p+1}. For this remainder term, or "truncation error", there exists the estimate

    |s*_{p+1}| ≤ (h^{p+2} / (p+1)!) max|f^{(p+1)}| ∫_0^1 |(u-1)u...(u+p-1)| du.

5. Conclusion
In this research, finite difference methods for solving initial value ordinary differential equations have been studied. Even though the procedure is long, it is shown that the finite difference method is fundamental for obtaining very accurate solutions. Basically the solution method is based on Equation (1.2), obtained by rearranging Equation (1.1). Finite-difference methods are very suitable when the functions being dealt with are smooth and the differences decrease rapidly with increasing order; calculations with these methods are best carried out with a fairly small step length. On the other hand, if the functions are not smooth, perhaps given by experimental results, or if we want to use a large step, then the Runge-Kutta method is to be preferred; it is also advantageous to use this method when we have to change the step length frequently, particularly when this change is a decrease. Clearly we should not choose too large a step even for the Runge-Kutta method.

References
[1] Burden, R.L. and Faires, J.D. (2011) Numerical Analysis. 9th Edition, Brooks/Cole, Boston, 259-253.
[2] Kumar, M. and Mishra, G. (2011) An Introduction to Numerical Methods for the Solutions of Partial Differential Equations. American Journal of Mathematics, 2, 1327-1338.
[3] Collatz, L. (1966) The Numerical Treatment of Differential Equations. 3rd Edition, Vol. 60, Springer-Verlag, Berlin, 48-94.
[4] Iyengar, S.R.K. and Jain, R.K. (2009) Numerical Methods. New Age International Publishers, New Delhi, 182-184.


[5] Hoffman, J.D. (2001) Numerical Methods for Engineers and Scientists. 2nd Edition, Marcel Dekker, Inc., New York, 323-416.
[6] Kress, R. (1998) Numerical Analysis. Graduate Texts in Mathematics, Springer-Verlag, New York.
[7] Grewal, B.S. (2002) Numerical Methods in Engineering & Science. 6th Edition, Khanna Publishers, India.

