OALibJ 2016071611013323
Abstract
Initial value ordinary differential equations arise in the formulation of problems in various fields such as physics and engineering. The present paper shows how to solve an initial value ordinary differential equation on an interval by the finite difference method to high accuracy, together with a formulation of the error estimate.
Keywords
Ordinary Differential Equation, Finite Difference Method, Interpolation, Error Estimation
1. Introduction
Differential equations are used to model problems in science and engineering that involve the change of some
variable with respect to another. Most of these problems require the solution of an initial-value problem, that is,
the solution to a differential equation that satisfies a given initial condition. In common real-life situations, the
differential equation that models the problem is too complicated to solve exactly [1]. Numerical methods simplify such problems; one of them is the finite difference method, a numerical procedure that solves a differential equation by discretizing the continuous physical domain into a discrete finite difference grid [2]. Finite difference methods are very suitable when the functions being dealt with are smooth and the differences decrease rapidly with increasing order, as discussed by Collatz, L. [3]: calculations with these methods are best carried out with a fairly small step length. Suppose that the first order initial value differential equation
\[ y' = f(x, y), \qquad y(x_0) = y_0 \tag{1.1} \]
How to cite this paper: Yizengaw, N. (2015) Numerical Solutions of Initial Value Ordinary Differential Equations Using Finite
Difference Method. Open Access Library Journal, 2: e1614. http://dx.doi.org/10.4236/oalib.1101614
is integrated numerically by dividing the interval \([x_0, b]\), on which the solution is desired, into a finite number of subintervals

\[ x_0 < x_1 < x_2 < \cdots < x_p = b. \]

The points are called mesh points or grid points. The spacing between the points is given by

\[ h_r = x_{r+1} - x_r, \qquad r = 0, 1, \ldots, p-1. \]

If the spacing is uniform, then \(h_r = h = \text{constant}\) for every \(r\). For this discussion, consider the case of
uniform mesh only. Let the range of integration be covered by the equally spaced points \(x_0, x_1, \ldots, x_p\) with the constant difference \(h = \Delta x_r = x_{r+1} - x_r\) (the step length), and let \(y_r\) be an approximation to the value \(y(x_r)\) of the exact solution at the point \(x_r\). The finite difference methods are based on the integrated form

\[ y(x_{r+1}) = y(x_r) + \int_{x_r}^{x_{r+1}} f(x, y(x))\, dx \tag{1.2} \]
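The simplest realization of this idea approximates the integral in (1.2) by the left-endpoint rectangle \(h f(x_r, y_r)\), which yields Euler's method. A minimal sketch; the test problem \(y' = y\), \(y(0) = 1\) is chosen here only for illustration:

```python
import math

# Euler's method: approximate the integral in (1.2) by the left-endpoint
# rectangle h * f(x_r, y_r).
def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Illustrative test problem: y' = y, y(0) = 1, exact solution e^x.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
err = abs(ys[-1] - math.e)   # first-order error; shrinks roughly like h
```

The error at \(x = 1\) decreases only linearly with \(h\), which is why the higher-order finite difference formulas below integrate an interpolating polynomial instead of a constant.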
which is obtained by integrating Equation (1.1) over the interval \([x_r, x_{r+1}]\); the aim of the finite difference method is to approximate this integral as accurately as possible. Denote the numerical solution and the exact solution at \(x_r\) by \(y_r\) and \(y(x_r)\), respectively. Suppose that the integration has already been carried as far as the point \(x = x_r\), so that the approximations \(y_1, \ldots, y_{r-2}, y_{r-1}, y_r\), and hence also the approximate values \(f_r = f(x_r, y_r)\), are known. The aim is to calculate \(y_{r+1}\).
Since \(f(x, y(x))\) cannot be integrated without knowing \(y(x)\), which is the solution to the problem, one instead integrates an interpolating polynomial \(P(x)\) determined by some of the previously obtained data points \((x_0, y_0), (x_1, y_1), \ldots, (x_r, y_r)\). Assuming, in addition, that \(y(x_r) \approx y_r\) and \(\int_{x_r}^{x_{r+1}} f(x, y(x))\, dx \approx \int_{x_r}^{x_{r+1}} P(x)\, dx\), then

\[ y(x_{r+1}) \approx y_r + \int_{x_r}^{x_{r+1}} P(x)\, dx. \tag{1.3} \]
The polynomial \(P(x)\) takes the values \(f_r\) at a certain number of points \(x_r\) and is then integrated over the interval from \(x_r\) to \(x_{r+1}\); this requires a sequence of approximations \(f_r\). If the solution at any point \(x_{r+1}\) is obtained using the solution at only the previous points, then the method is called an explicit method. If the right hand side of (1.2) depends on \(y_{r+1}\) as well, then it is called an implicit method. According to [4], a general p-step
explicit method can be written as
\[ y_{r+1} = y_r + h\, \phi\left( x_{r-p+1}, \ldots, x_{r-1}, x_r, y_{r-p+1}, \ldots, y_{r-1}, y_r, h \right). \]
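As a concrete instance of this explicit form, integrating the straight line through the two previous values \(f_{r-1}, f_r\) gives the two-step Adams-Bashforth method, with \(\phi = \tfrac{1}{2}(3 f_r - f_{r-1})\). A sketch; the starting value \(y_1\) and the test problem \(y' = y\) are assumptions for illustration:

```python
import math

# Two-step Adams-Bashforth: y_{r+1} = y_r + h/2 * (3 f_r - f_{r-1}),
# an explicit p-step method (p = 2) with phi = (3 f_r - f_{r-1}) / 2.
def adams_bashforth2(f, x0, y0, y1, h, steps):
    xs, ys = [x0, x0 + h], [y0, y1]
    for _ in range(steps - 1):
        f_prev = f(xs[-2], ys[-2])
        f_curr = f(xs[-1], ys[-1])
        ys.append(ys[-1] + h * (3.0 * f_curr - f_prev) / 2.0)
        xs.append(xs[-1] + h)
    return xs, ys

# y' = y, y(0) = 1, with the exact value e^h supplied as y1
xs, ys = adams_bashforth2(lambda x, y: y, 0.0, 1.0, math.exp(0.1), 0.1, 10)
```

The global error now behaves like \(h^2\), but the method needs the extra starting value \(y_1\), which is what the Taylor expansion below provides.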
The starting values required by such a method can be obtained from the Taylor expansion

\[ y(x_v) = y(x_0 + vh) = y(x_0) + vh\, y'(x_0) + \frac{(vh)^2}{2!}\, y''(x_0) + \cdots \tag{1.4} \]
of which as many terms are taken as are necessary for the truncation not to affect the last decimal carried (always assuming that the series converges). Several of the finite difference methods need three starting values, and for these it suffices to use (1.4) for \(v = \pm 1\); this usually possesses advantages over using (1.4) for \(v = 1, 2\), particularly as regards convergence.
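As an illustration of (1.4), take the hypothetical problem \(y' = x + y\), \(y(0) = 1\): differentiating the ODE gives \(y'' = 1 + y'\) and \(y^{(k)} = y^{(k-1)}\) for \(k \ge 3\), so every derivative at \(x_0\) is available and \(y(x_0 \pm h)\) can be summed to any accuracy:

```python
import math

# Starting values from the Taylor expansion (1.4) for the illustrative
# problem y' = x + y, y(0) = 1, whose derivatives are easy to generate:
# y'' = 1 + y', and y^(k) = y^(k-1) for k >= 3.
def taylor_start(x0, y0, v, h, terms=8):
    d1 = x0 + y0                      # y'(x0) from the ODE itself
    derivs = [y0, d1, 1.0 + d1]       # y, y', y'' at x0
    for _ in range(terms - 3):
        derivs.append(derivs[-1])     # y^(k)(x0) = y^(k-1)(x0), k >= 3
    return sum(d * (v * h) ** k / math.factorial(k)
               for k, d in enumerate(derivs))

# y(x0 + vh) for v = +1 and v = -1, as the text recommends
y_plus = taylor_start(0.0, 1.0, 1, 0.1)
y_minus = taylor_start(0.0, 1.0, -1, 0.1)
```

For this problem the exact solution is \(y = 2e^x - x - 1\), and eight Taylor terms at \(h = 0.1\) already reproduce it to about ten decimals.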
Improving these, the following three starting values \((y_1, y_2, y_3)\) can be obtained as

\[ y_1^{[1]} = y_0 + h\left( f_0 + \tfrac{1}{2}\nabla f_1^{[0]} - \tfrac{1}{12}\nabla^2 f_2^{[0]} \right) \]
\[ y_2^{[1]} = y_0 + h\left( 2 f_1^{[0]} + \tfrac{1}{3}\nabla^2 f_2^{[0]} \right) \tag{1.5} \]
\[ y_3^{[1]} = y_1^{[1]} + h\left( 2 f_2^{[1]} + \tfrac{1}{3}\nabla^2 f_3^{[1]} \right) \tag{1.6} \]
Generally, for \(v = 1, 2, \ldots\) (or \(v = 0, 1, \ldots\)),

\[ y_1^{[v+1]} = y_0 + h\left( f_0 + \tfrac{1}{2}\nabla f_1^{[v]} - \tfrac{1}{12}\nabla^2 f_2^{[v]} + \tfrac{1}{24}\nabla^3 f_3^{[v]} \right) \]
\[ y_2^{[v+1]} = y_0 + h\left( 2 f_1^{[v]} + \tfrac{1}{3}\nabla^2 f_2^{[v]} \right) \tag{1.7} \]
\[ y_3^{[v+1]} = y_1^{[v+1]} + h\left( 2 f_2^{[v]} + \tfrac{1}{3}\nabla^2 f_3^{[v]} \right) \]
In this way the three y-values are improved alternately, and the function values \(f_j^{[v]} = f(x_j, y_j^{[v]})\) and their differences are revised accordingly. This starting process should be carried out with a sufficiently small step length.
The remainder of such a starting formula satisfies an estimate of the form

\[ |s| \le \frac{h^{p+2}}{(p+1)!} \max\left| f^{(p+1)} \right|, \]

up to a constant factor depending on the formula; see the estimates (1.16) and (1.17) below.
For the exact solution one has the expansion

\[ y(x_{r+1}) - y(x_{r-1}) = h\left( 2 f(x_r) + \tfrac{1}{3}\nabla^2 f(x_{r+1}) - \tfrac{1}{90}\nabla^4 f(x_{r+2}) + \tfrac{1}{756}\nabla^6 f(x_{r+3}) - \cdots \right). \]

If the remainder term is neglected, the approximation \(y_{r+1}\) is

\[ y_{r+1} = y_{r-1} + h\left( 2 f_r + \tfrac{1}{3}\nabla^2 f_{r+1} - \tfrac{1}{90}\nabla^4 f_{r+2} + \tfrac{1}{756}\nabla^6 f_{r+3} - \cdots \right). \tag{1.12} \]
Usually this formula is truncated after the term in \(\nabla^2\), which gives Simpson's rule:

\[
\begin{aligned}
y_{r+1} &= y_{r-1} + h\left( 2 f_r + \tfrac{1}{3}\nabla^2 f_{r+1} \right) \\
        &= y_{r-1} + h\left( 2 f_r + \tfrac{1}{3}\nabla (f_{r+1} - f_r) \right) \\
        &= y_{r-1} + h\left( 2 f_r + \tfrac{1}{3}(f_{r+1} - f_r - f_r + f_{r-1}) \right) \\
y_{r+1} &= y_{r-1} + \frac{h}{3}\left( f_{r-1} + 4 f_r + f_{r+1} \right).
\end{aligned} \tag{1.13}
\]
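Formula (1.13) is implicit, since \(f_{r+1} = f(x_{r+1}, y_{r+1})\) involves the unknown \(y_{r+1}\). A sketch that resolves this with an Euler prediction followed by fixed-point corrections; the test problem and the exact second starting value are assumptions for illustration:

```python
import math

# One step of Simpson's rule (1.13):
# y_{r+1} = y_{r-1} + h/3 * (f_{r-1} + 4 f_r + f_{r+1}),
# with the implicit f_{r+1} resolved by fixed-point iteration.
def simpson_step(f, x, y_prev, y_curr, h, iters=5):
    y_next = y_curr + h * f(x, y_curr)            # Euler predictor
    for _ in range(iters):                         # corrector (1.13)
        y_next = y_prev + h / 3.0 * (
            f(x - h, y_prev) + 4.0 * f(x, y_curr) + f(x + h, y_next))
    return y_next

def simpson_solve(f, x0, y0, y1, h, steps):
    xs, ys = [x0, x0 + h], [y0, y1]
    for _ in range(steps - 1):
        ys.append(simpson_step(f, xs[-1], ys[-2], ys[-1], h))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = y, y(0) = 1, with e^h supplied as the second starting value
xs, ys = simpson_solve(lambda x, y: y, 0.0, 1.0, math.exp(0.1), 0.1, 10)
```

The fixed-point correction converges because its contraction factor is of order \(h/3\); the global error at \(x = 1\) is of order \(h^4\), consistent with the \(h^5\) local remainder derived next.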
An estimate for the remainder term \(s_2^{**}\) in the corresponding formula for the exact solution,

\[ y(x_{r+1}) = y(x_{r-1}) + h\left( 2 f(x_r, y(x_r)) + \tfrac{1}{3}\nabla^2 f(x_{r+1}, y(x_{r+1})) \right) + s_2^{**}, \tag{1.14} \]

is given by

\[ \left| s_2^{**} \right| \le \frac{h^5}{90} \max\left| f^{(iv)} \right|. \]
For the starting values one has

\[ y(x_1) = y_0 + h\left( F_0 + \tfrac{1}{2}\nabla F_1 - \tfrac{1}{12}\nabla^2 F_2 + \tfrac{1}{24}\nabla^3 F_3 \right) + s_1, \]
\[ y(x_2) = y_0 + h\left( 2 F_1 + \tfrac{1}{3}\nabla^2 F_2 \right) + s_2, \tag{1.15} \]
\[ y(x_3) = y(x_1) + h\left( 2 F_2 + \tfrac{1}{3}\nabla^2 F_3 \right) + s_3, \]

in which \(F_v = f(x_v, y(x_v))\). The remainder terms \(s_p\) have the form

\[ |s_1| \le \frac{h^{p+2}}{(p+1)!} \max\left| f^{(p+1)} \right| \left| \int_0^1 (u-1)\, u \cdots (u+p-1)\, du \right|, \]
so at three points (\(p = 3\)),

\[
\begin{aligned}
|s_1| &\le \frac{h^5}{4!} \max\left| f^{(4)} \right| \left| \int_0^1 (u-1)\, u\, (u+1)(u+2)\, du \right| \\
&= \frac{h^5}{4!} \max\left| f^{(4)} \right| \left| \int_0^1 \left( u^4 + 2u^3 - u^2 - 2u \right) du \right| \\
&= \frac{h^5}{4!} \max\left| f^{(4)} \right| \left| \left[ \frac{u^5}{5} + \frac{u^4}{2} - \frac{u^3}{3} - u^2 \right]_0^1 \right| \\
&= \frac{19 h^5}{30 \times 4!} \max\left| f^{(4)} \right| = \frac{19 h^5}{720} \max\left| f^{(4)} \right|.
\end{aligned} \tag{1.16}
\]
Similarly,

\[ |s_2| \le \frac{h^5}{90} \max\left| f^{(iv)} \right|, \qquad |s_3| \le \frac{h^5}{90} \max\left| f^{(iv)} \right|. \tag{1.17} \]
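The constant in (1.16) is easy to verify with exact rational arithmetic: the integrand expands to \(u^4 + 2u^3 - u^2 - 2u\), whose integral over \([0, 1]\) is \(-19/30\), and \(\tfrac{19}{30} / 4! = \tfrac{19}{720}\):

```python
from fractions import Fraction

# coefficients of (u-1) u (u+1) (u+2) = u^4 + 2u^3 - u^2 - 2u
coeffs = {4: 1, 3: 2, 2: -1, 1: -2}
# integrate term by term over [0, 1]: c * u^k -> c / (k + 1)
integral = sum(Fraction(c, k + 1) for k, c in coeffs.items())
constant = abs(integral) / 24        # divide by 4! = 24
print(integral, constant)            # -19/30 19/720
```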
III) Adams interpolation method: Let us investigate the Adams interpolation method, which is based on the formula (1.11):

\[ y_{r+1} = y_r + h\left( f_{r+1} - \tfrac{1}{2}\nabla f_{r+1} - \tfrac{1}{12}\nabla^2 f_{r+1} - \tfrac{1}{24}\nabla^3 f_{r+1} - \cdots \right). \]
A similar relation, but with a remainder term \(s_{p+1}^*\), holds for the exact solution:

\[ y(x_{r+1}) = y(x_r) + h\left( f_{r+1} - \tfrac{1}{2}\nabla f_{r+1} - \tfrac{1}{12}\nabla^2 f_{r+1} - \tfrac{1}{24}\nabla^3 f_{r+1} - \cdots \right) + s_{p+1}^*. \]

The truncation error is then \(\varepsilon_{r+1} = y(x_{r+1}) - y_{r+1} = s_{p+1}^*\).
For this remainder term, or "truncation error", there exists the estimate

\[ \left| s_{p+1}^* \right| \le \frac{h^{p+2}}{(p+1)!} \max\left| f^{(p+1)} \right| \left| \int_0^1 (u-1)\, u \cdots (u+p-1)\, du \right|. \]
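Truncating the Adams interpolation formula after the \(\nabla\) term gives \(y_{r+1} = y_r + h\left( f_{r+1} - \tfrac{1}{2}(f_{r+1} - f_r) \right) = y_r + \tfrac{h}{2}(f_r + f_{r+1})\), the trapezoidal rule; the implicit \(f_{r+1}\) is again resolved by fixed-point iteration. A sketch, with the test problem \(y' = y\) chosen only for illustration:

```python
import math

# Adams interpolation formula truncated after the first difference:
# the trapezoidal rule y_{r+1} = y_r + h/2 * (f_r + f_{r+1}), with the
# implicit f_{r+1} handled by predictor-corrector iteration.
def adams_trapezoidal(f, x0, y0, h, steps, iters=5):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        y_next = y + h * f(x, y)                   # Euler predictor
        for _ in range(iters):                     # corrector sweeps
            y_next = y + h / 2.0 * (f(x, y) + f(x + h, y_next))
        ys.append(y_next)
        xs.append(x + h)
    return xs, ys

# y' = y, y(0) = 1: second-order accuracy at x = 1
xs, ys = adams_trapezoidal(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Retaining more difference terms in the Adams formula raises the order further, at the cost of more starting values, which is exactly the trade-off the error estimates above quantify.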
5. Conclusion
In this research, finite difference methods for solving initial value ordinary differential equations have been studied. Although the procedure is lengthy, it is shown that the finite difference method is fundamental for obtaining very accurate solutions. Basically, the solution method is based on Equation (1.2), obtained by a rearrangement of Equation (1.1). Finite-difference methods are very suitable when the functions being dealt with are smooth and the differences decrease rapidly with increasing order; calculations with these methods are best carried out with a fairly small step length. On the other hand, if the functions are not smooth, perhaps given by experimental results, or if we want to use a large step, then the Runge-Kutta method is to be preferred; it is also advantageous to use this method when we have to change the step length frequently, particularly when this change is a decrease. Clearly we should not choose too large a step even for the Runge-Kutta method.
References
[1] Burden, R.L. and Faires, J.D. (2011) Numerical Analysis. 9th Edition, Brooks/Cole, Boston, 259-253.
[2] Kumar, M. and Mishra, G. (2011) An Introduction to Numerical Methods for the Solutions of Partial Differential Equations. American Journal of Mathematics, 2, 1327-1338.
[3] Collatz, L. (1966) The Numerical Treatment of Differential Equations. 3rd Edition, Vol. 60, Springer-Verlag, Berlin, 48-94.
[4] Iyengar, S.R.K. and Jain, R.K. (2009) Numerical Methods. New Age International Publishers, New Delhi, 182-184.
[5] Hoffman, J.D. (2001) Numerical Methods for Engineers and Scientists. 2nd Edition, Marcel Dekker, Inc., New York,
323-416.
[6] Kress, R. (1998) Numerical Analysis. Graduate Texts in Mathematics, Vol. 181, Springer-Verlag, New York.
[7] Grewal, B.S. (2002) Numerical Methods in Engineering & Science. 6th Edition, Khanna Publishers, India.