
Don Mariano Marcos Memorial State University

Mid-La Union Campus


City of San Fernando

Submitted by:
Palabay, Miguel R.

Submitted to:

Engr. Grace A. Llobrera

March 6, 2008
The Trapezoidal Rule

The trapezoidal rule is the first of the Newton-Cotes closed integration formulas. It corresponds to the case where the approximating polynomial is first order:

I = ∫[a,b] f(x) dx ≈ ∫[a,b] f1(x) dx

The straight line f1(x) can be represented as:

f1(x) = f(a) + [ (f(b) − f(a)) / (b − a) ] (x − a)

The area under this straight line is an estimate of the integral of f(x) between the limits a and b:

I ≈ ∫[a,b] { f(a) + [ (f(b) − f(a)) / (b − a) ] (x − a) } dx

The result of the integration is:

I = (b − a) [ f(a) + f(b) ] / 2

which is called the trapezoidal rule.

The trapezoidal rule is a method for approximating a definite integral by evaluating the integrand at two points. The formal rule is given by

I ≈ h [ f(a) + f(b) ] / 2, where h = b − a.

This rule comes from determining the area of a right trapezoid with bases of lengths f(a) and f(b) respectively and a height of length h. When using a graph to illustrate the trapezoidal rule, the height of the right trapezoid is actually horizontal and the bases are vertical, which may be confusing to someone seeing the trapezoidal rule for the first time. An example is shown below.

The figure in red need not be a right trapezoid. If either f (a) = 0 or f (b) = 0, the
figure will be a right triangle. If both f (a) = 0 and f (b) = 0, the figure will be a line
segment. In any case, the same rule for approximating the corresponding definite integral
is used.
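The single-application rule above is easy to express in code. Here is a minimal Python sketch (the function name is ours, for illustration):

```python
def trapezoid(f, a, b):
    """Single-application trapezoidal rule: I ~= (b - a) * (f(a) + f(b)) / 2."""
    return (b - a) * (f(a) + f(b)) / 2.0

# For a straight line the rule is exact (degree of precision 1):
# integral of 2x + 1 over [0, 3] is 12, and the rule reproduces it.
print(trapezoid(lambda x: 2 * x + 1, 0.0, 3.0))
```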

The trapezoidal rule is the first Newton-Cotes quadrature formula. It has degree of precision 1, meaning it is exact for polynomials of degree less than or equal to one. We can see this with a simple example.

If f is Riemann integrable on [a, b], then


Following is an example of the trapezoidal rule.

Using the fundamental theorem of calculus shows

In this case, the trapezoidal rule gives the exact value,

It is important to note that most calculus books give the wrong definition of the
trapezoidal rule. Typically, they define it to be what is actually the composite trapezoidal
rule, which uses the trapezoidal rule on a specified number of subintervals. Some
examples of calculus books that define the trapezoidal rule to be what is actually the
composite trapezoidal rule are:

 Stewart, James. Calculus. Pacific Grove, CA: International Thomson Publishing Co., 1995.
 Bittinger, Marvin L. Calculus. Reading, MA: Addison-Wesley Publishing Co., 1989.

Also note that the trapezoidal rule can be derived by integrating a linear interpolation or by using the method of undetermined coefficients. The latter is probably a bit easier. The trapezoidal rule is a method for finding an approximate value for a definite integral. Suppose we have the definite integral

∫[a,b] f(x) dx

First the area under the curve y = f(x) is divided into n strips, each of equal width h = (b − a)/n.

In the figure, the area under the curve y = f(x) is divided into n = 4 strips of thickness h. The area of each strip is then approximated by that of a trapezium. The sum of these trapezoidal areas gives an approximation for our definite integral.

The shape of each strip is approximated to be like that of a trapezium. Hence the area of the first strip is approximately h [ f(a) + f(a + h) ] / 2, and similarly we approximate the area of the i-th strip to be h [ f(a + (i − 1)h) + f(a + ih) ] / 2.

Adding up these areas gives us an approximate value for our definite integral:

∫[a,b] f(x) dx ≈ (h/2) [ f(a) + 2f(a + h) + 2f(a + 2h) + … + 2f(b − h) + f(b) ]

This estimate generally improves as n increases.


Example.
Let's use the trapezoidal rule to calculate an approximate value for the definite integral
We shall divide the interval into n = 5 strips, each of width h.
The trapezoidal rule then gives us:
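The strip-summing procedure described above can be sketched in Python (a minimal sketch; names are ours for illustration):

```python
from math import sin, cos

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n strips of equal width h = (b - a) / n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints are counted once
    for i in range(1, n):
        total += f(a + i * h)     # interior points are counted twice
    return h * total

# Integral of sin(x) from 0 to 1; the exact value is 1 - cos(1).
approx = composite_trapezoid(sin, 0.0, 1.0, 1000)
exact = 1.0 - cos(1.0)
print(approx, exact)
```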

Geometrically, the trapezoidal rule is equivalent to approximating the area of the trapezoid under the straight line connecting f(a) and f(b). Recall from geometry that the formula for computing the area of a trapezoid is the height times the average of the bases; in our case the concept is the same when the trapezoid is on its side. Therefore, the integral estimate can be represented as:

I = width × average height

Here, the integral is computed on each of the sub-intervals by using the linear interpolating formula and then summing them up to obtain the desired integral.

Note that

Now using the formula for on the interval we get,

Thus, we have,

i.e.

This is called the Trapezoidal rule. It is a simple quadrature formula, but it is not very accurate. An estimate for the error E1 in numerical integration using the Trapezoidal rule is given by

where the term involved is the average value of the second forward differences.

Recall that in the case of a linear function the second forward differences are zero; hence the Trapezoidal rule gives the exact value of the integral if the integrand is a linear function.

Error of the trapezoidal rule

When we employ the area under a straight line segment to approximate the integral under a curve, we can obviously incur an error that may be substantial. An estimate for the local truncation error of a single application of the trapezoidal rule is

E_t = −(1/12) f″(ξ) (b − a)³

where ξ lies somewhere in the interval [a, b].

Three major conclusions:

1. For individual applications with nicely behaved functions, the multiple-segment trapezoidal rule is just fine for attaining the type of accuracy required in many engineering applications.

2. If high accuracy is required, the multiple-segment trapezoidal rule demands a great deal of computational effort. Although this effort may be negligible for a single application, it could be very important when (a) numerous integrals are being evaluated or (b) the function itself is time-consuming to evaluate. For such cases, more efficient approaches may be necessary.

3. Finally, round-off errors can limit our ability to determine integrals. This is due both to the machine precision and to the numerous computations involved in some techniques like the multiple-segment trapezoidal rule.
Simpson's Rule

Simpson's Rule, named after the great English mathematician Thomas Simpson,
is an ingenious method for approximating integrals. If you don't know what integrals are
used for, don't feel bad. Many college students who complete three semesters of calculus
may be able to “compute” an integral, but won't know its practical application either.
Computation of integrals is difficult to learn and easy to forget.

Simpson's rule is a Newton-Cotes formula for approximating the integral of a function f using quadratic polynomials (parabolic arcs instead of the straight line segments used in the trapezoidal rule). Simpson's rule can be derived by integrating a second-order Lagrange interpolating polynomial fit to the function at three equally spaced points. In particular, let the function f be tabulated at points x0, x1, and x2, equally spaced by distance h, and denote fn = f(xn).

Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:

∫[a,b] f(x) dx ≈ [ (b − a)/6 ] [ f(a) + 4 f((a + b)/2) + f(b) ]

Divide the area under the curve into n (where n is even) parabolic strips from x = a to x = b. The area can be found with the following formula:

Let P(x) = Ax² + Bx + C be the equation of one parabola.

Then
Proof:

Repeating this

gives:

Derivation

Quadratic interpolation

One derivation replaces the integrand f(x) by the quadratic polynomial P(x) which takes
the same values as f(x) at the end points a and b and the midpoint m = (a+b) / 2. One can
use Lagrange polynomial interpolation to find an expression for this polynomial,

An easy (albeit tedious) calculation shows that


Averaging the midpoint and the trapezium rules.

Another derivation constructs Simpson's rule from two simpler approximations: the
midpoint rule

and the trapezium rule

The errors in these approximations are

respectively. It follows that the leading error term vanishes if we take the weighted
average

This weighted average is exactly Simpson's rule.


Using another approximation (for example, the trapezium rule with twice as many points), it is possible to take a suitable weighted average and eliminate another error term.
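The weighted average described above, (2M + T)/3 of the midpoint rule M and the trapezium rule T, can be checked numerically. A minimal sketch (function names are ours, and the test integrand is our own choice):

```python
def midpoint(f, a, b):
    """Midpoint rule: (b - a) * f(midpoint)."""
    return (b - a) * f((a + b) / 2.0)

def trapezium(f, a, b):
    """Trapezium rule: (b - a) * (f(a) + f(b)) / 2."""
    return (b - a) * (f(a) + f(b)) / 2.0

def simpson(f, a, b):
    """Simpson's rule: (b - a)/6 * (f(a) + 4 f(m) + f(b))."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

f = lambda x: x ** 3 + x   # exact integral over [0, 2] is 6
weighted = (2.0 * midpoint(f, 0.0, 2.0) + trapezium(f, 0.0, 2.0)) / 3.0
print(weighted, simpson(f, 0.0, 2.0))
```

Both values agree, illustrating that the weighted average is exactly Simpson's rule.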

Undetermined coefficients

The third derivation starts from the ansatz


The coefficients α, β and γ can be fixed by requiring that this approximation be exact for
all quadratic polynomials. This yields Simpson's rule.

Error

The error in approximating an integral by Simpson's rule is

where ξ is some number between a and b.[3]

The error is (asymptotically) proportional to (b − a)^5. However, the above derivations suggest an error proportional to (b − a)^4. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval [a, b].

Composite Simpson's rule

If the interval of integration [a,b] is in some sense "small", then Simpson's rule
will provide an adequate approximation to the exact integral. By small, what we really
mean is that the function being integrated is relatively smooth over the interval [a,b]. For
such a function, a smooth quadratic interpolant like the one used in Simpson's rule will
give good results.

However, it is often the case that the function we are trying to integrate is not
smooth over the interval. Typically, this means that either the function is highly
oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may
give very poor results. One common way of handling this problem is by breaking up the
interval [a,b] into a number of small subintervals. Simpson's rule is then applied to each
subinterval, with the results being summed to produce an approximation for the integral
over the entire interval. This sort of approach is termed the composite Simpson's rule.
Suppose that the interval [a,b] is split up into n subintervals, with n an even number. Then the composite Simpson's rule is given by:

where xi = a + ih for i = 0,1,...,n − 1,n with h = (b − a) / n; in particular, x0 = a and xn = b.


The above formula can also be written as

The error committed by the composite Simpson's rule is bounded (in absolute value) by

where h is the "step length", given by h = (b − a) / n. This formulation splits the interval
[a,b] in subintervals of equal length. In practice, it is often advantageous to use
subintervals of different lengths, and concentrate the efforts on the places where the
integrand is less well-behaved. This leads to the adaptive Simpson's method.

Python implementation of Simpson's rule

Here is an implementation of Simpson's rule in Python.

def simpson_rule(f, a, b):
    """Approximate the definite integral of f from a to b by Simpson's rule."""
    c = (a + b) / 2.0
    h3 = abs(b - a) / 6.0
    return h3 * (f(a) + 4.0*f(c) + f(b))

# Calculates the integral of sin(x) from 0 to 1
from math import sin
print(simpson_rule(sin, 0, 1))

Here is the version of the Composite Simpson's rule, also in Python.

def c_simpson_rule(f, a, b, n):
    """Approximate the definite integral of f from a to b by the composite
    Simpson's rule, dividing the interval into n parts (n must be even)."""
    dx = (float(b) - a) / n
    i = 0
    fks = 0.0
    while i <= n:
        xk = a + i*dx
        if i == 0 or i == n:
            fk = f(xk)
        elif i % 2 == 1:
            fk = 4*f(xk)
        else:
            fk = 2*f(xk)
        fks += fk
        i += 1
    return (dx/3) * fks

# Calculates the integral of sin(x) from 0 to 1
from math import sin
print(c_simpson_rule(sin, 0, 1, 10000))

Integrating sin(x) from 0 to 1 with the first code gives 0.459862189871, the second yields
0.459697694132 whereas the true value is 1 − cos(1) = 0.45969769413... . The composite
version of the rule gives a noticeably better approximation.
Romberg Integration

The subroutine Romberg is "dynamic" in the following sense. At the start, we initialize the array R with one row containing a single element, in which we place the first trapezoidal estimate. Next, the increment command and the Append command are used to add a second row to R, which is initialized with zeros. Then the TrapRule subroutine is called to perform the sequential trapezoidal rule and fill in the first entry of the new row, and Romberg's rule is used to fill in the second entry. And so it goes: the sequential trapezoidal rule fills in the first entry of each succeeding row, and Romberg's rule fills in the rest of the entries in that row. The algorithm is terminated when successive diagonal entries agree to within the desired tolerance.

Romberg integration is the same as Richardson's extrapolation formula. However, Romberg used a recursive algorithm for the extrapolation, as follows.
The estimate of the true error in trapezoidal rule is given by

Since the segment width, h

equation (2) can be written as

The estimate of true error is given by

It can be shown that the exact true error could be written as

Since we used the error estimate in the formula, the result obtained from Equation (8) has a residual error and can be written as shown, where the variable TV is replaced by the value obtained using Richardson's extrapolation formula. Note also that the ≈ sign is replaced by the = sign.
Hence the estimate of the true value now is

Determine another integral value with further halving the step size (doubling the number
of segments),

then
.

From Equation (13) and (14),

The above equation now has the error of . The above procedure can be further
improved by using the new values of the estimate of true value that has the error of
to give an estimate of .
Based on this procedure, a general expression for Romberg integration can be written as

I(j, k) ≈ [ 4^(k−1) I(j+1, k−1) − I(j, k−1) ] / [ 4^(k−1) − 1 ]

The index k represents the order of extrapolation; for example, k = 1 represents the values obtained from the regular Trapezoidal rule, and k = 2 represents the values obtained using the true error estimate of order h². The index j distinguishes the more and the less accurate estimates of the integral: the value of the integral with index j + 1 is more accurate than the value with index j.

For , ,
For , ,

Example:
The vertical distance covered by a rocket from to seconds is given by

Use Romberg’s rule to find the distance covered. Use the 1-, 2-, 4-, and 8-segment Trapezoidal rule results as given in Table 1.

Solution

From Table 1, the needed values from original Trapezoidal rule are

where the above four values correspond to using 1, 2, 4 and 8 segment Trapezoidal rule,
respectively. To get the first order extrapolation values,

Similarly,
For the second order extrapolation values,

Similarly

For the third order extrapolation values,

Romberg's method (Romberg 1955) generates a triangular array consisting of numerical


estimates of the definite integral,

by using Richardson extrapolation (Richardson 1910) repeatedly on the trapezium rule.


Romberg's method evaluates the integrand at equally-spaced points. The integrand must
have continuous derivatives, though fairly good results may be obtained if only a few
derivatives exist. If it is possible to evaluate the integrand at unequally-spaced points,
then other methods such as Gaussian quadrature and Clenshaw-Curtis quadrature are
generally more accurate.
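The triangular Romberg array just described can be sketched in Python. This is a minimal sketch (names are ours): the first column holds the sequential trapezoidal rule, and each further column applies Richardson extrapolation, here in the equivalent row-indexed form R[j][k] = (4^k R[j][k−1] − R[j−1][k−1]) / (4^k − 1).

```python
from math import sin, cos

def romberg(f, a, b, max_rows=6):
    """Return the most-extrapolated Romberg estimate of the integral of f on [a, b]."""
    R = [[(b - a) * (f(a) + f(b)) / 2.0]]          # R[0][0]: one-panel trapezoid
    for j in range(1, max_rows):
        h = (b - a) / 2 ** j
        # Sequential trapezoidal rule: halve h, reuse old sum, add new midpoints.
        total = R[j - 1][0] / 2.0 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        row = [total]
        for k in range(1, j + 1):                  # Richardson extrapolation
            row.append((4 ** k * row[k - 1] - R[j - 1][k - 1]) / (4 ** k - 1))
        R.append(row)
    return R[-1][-1]

# Integral of sin(x) from 0 to 1; exact value is 1 - cos(1).
approx = romberg(sin, 0.0, 1.0)
print(approx)
```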

Method
The method can be defined inductively in this way:

or

where

In big O notation, the error for R(n,m) is:

The first extrapolation, R(n,1), is equivalent to Simpson's rule with 2^n + 1 points. When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch & Stoer (1967).

Gauss Quadrature
Gauss quadrature seeks to obtain the best numerical estimate of an integral by picking optimal abscissas at which to evaluate the function. The fundamental theorem of Gaussian quadrature states that the optimal abscissas of the n-point Gaussian quadrature formula are precisely the roots of the orthogonal polynomial for the same interval and weighting function. Gaussian quadrature is optimal because it fits all polynomials up to degree 2n − 1 exactly. Slightly less optimal fits are obtained from Radau quadrature and Laguerre quadrature.

GAUSS QUADRATURE RULE:


Background:
Previously, we developed the Trapezoidal Rule and Simpson's 1/3 Rule by several methods. One of them was the method of undetermined coefficients. To derive the Trapezoidal rule from the method of undetermined coefficients, we approximate

∫[a,b] f(x) dx ≈ c1 f(a) + c2 f(b)

Let the right hand side be exact for integrals of a straight line, that is, for integrated form

So

But from Equation (1), we want

to give the same result as Equation (2) for .

Hence from Equations (2) and (4),

Since and are arbitrary constants for a general straight line


Multiplying Equation (5a) by a and subtracting it from Equation (5b) gives

Substituting the above found value of in Equation (5a) gives

Therefore

Derivation of two-point Gaussian Quadrature Rule:


Method 1:
The two-point Gauss Quadrature Rule is an extension of the Trapezoidal Rule approximation where the arguments of the function are not predetermined as a and b, but are unknowns x1 and x2. So in the two-point Gauss Quadrature Rule, the integral is approximated as

∫[a,b] f(x) dx ≈ c1 f(x1) + c2 f(x2)

There are four unknowns: x1, x2, c1 and c2. These are found by assuming that the formula gives exact results for integrating a general third order polynomial, f(x) = a0 + a1 x + a2 x² + a3 x³. Hence

The formula would then give


Equating Equations (8) and (9) gives

Since in Equation (10) the constants a0, a1, a2 and a3 are arbitrary, the coefficients of a0, a1, a2 and a3 must be equal. This gives us four equations as follows.

Without proof (see Example 1 for proof of a related problem), we can find that the above
four simultaneous nonlinear equations have only one acceptable solution

Hence

Method 2:
We can derive the same formula by assuming that the expression gives exact values for the individual integrals of 1, x, x², and x³. The reason the formula can also be derived using this method is that a linear combination of the above integrands is a general third order polynomial, f(x) = a0 + a1 x + a2 x² + a3 x³.
These will give four equations as follows.

These four simultaneous nonlinear Equations (14) can be solved with a single acceptable
solution

Hence

Since two points are chosen, it is called the two-point Gauss Quadrature Rule. Higher
point versions can also be developed.

Higher point Gaussian Quadrature Formulas:

For example,

∫[a,b] f(x) dx ≈ c1 f(x1) + c2 f(x2) + c3 f(x3)

is called the three-point Gauss Quadrature Rule. The coefficients c1, c2 and c3, and the function arguments x1, x2 and x3, are calculated by assuming the formula gives exact expressions for integrating a general fifth order polynomial. General n-point rules would approximate the integral as

∫[a,b] f(x) dx ≈ c1 f(x1) + c2 f(x2) + … + cn f(xn)

Arguments and weighting factors for n-point Gauss Quadrature Rules:

In handbooks (see Table 1), the coefficients and arguments for the n-point Gauss Quadrature Rule are given for integrals over [−1, 1].

So if the table is given for integrals over [−1, 1], how does one solve an integral over [a, b]? The answer lies in the fact that any integral with limits [a, b] can be converted into an integral with limits [−1, 1]. Let

x = m t + c

If x = a, then t = −1.
If x = b, then t = 1.
Such that

a = −m + c,  b = m + c

Solving these two simultaneous linear Equations (21) gives

m = (b − a)/2,  c = (b + a)/2

Hence

x = [ (b − a) t + (b + a) ] / 2,  dx = [ (b − a)/2 ] dt

Substituting our values of x and dx into the integral gives us

∫[a,b] f(x) dx = ∫[−1,1] f( [ (b − a) t + (b + a) ] / 2 ) (b − a)/2 dt
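The two-point rule plus the interval transformation above can be sketched in Python. This is a minimal sketch (the function name is ours); it uses the known two-point arguments ±1/√3 with unit weights on [−1, 1] and maps a general interval [a, b] onto it via the substitution just derived.

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss quadrature on [a, b].

    On [-1, 1] the arguments are +-1/sqrt(3) with weights 1; the substitution
    x = ((b - a) t + (b + a)) / 2, dx = (b - a)/2 dt maps [a, b] onto [-1, 1].
    """
    t1, t2 = -1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)
    half = (b - a) / 2.0
    mid = (b + a) / 2.0
    return half * (f(half * t1 + mid) + f(half * t2 + mid))

# Exact for cubics: the integral of x**3 over [0, 2] is 4.
print(gauss2(lambda x: x ** 3, 0.0, 2.0))
```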

Example 1:

For an integral show that the two-point Gauss Quadrature rule approximates

to

where

Solution:

Assuming the formula

gives exact values for integrals and . Then


Multiplying Equation (E1.3) by and subtracting from Equation (E1.5) gives
.

The solution to the above equation is


or/and
or/and
or/and
.
I. is not acceptable as Equations (E1.2-E1.5) reduce to
and . But since , then from

, but conflicts with .

II. is not acceptable as Equations (E1.2-E1.5) reduce to ;


. Since , then or has to be

zero but this violates .

III. is not acceptable as Equations (E1.2-E1.5) reduce to ;


. If , then

gives and that violates . If ,


then violates

That leaves the remaining case as the only possible acceptable solution, and in fact it does not produce any violations (check it for yourself).

Equations (E1.4) and (E1.9) give

Since Equation (E1.7) requires that the two results be of opposite sign, we get

Hence,
Example 2:

For an integral, derive the one-point Gaussian Quadrature Rule.

Solution:

The one-point Gaussian quadrature rule is

Assuming the formula gives exact values for integrals and

Since the other equation becomes

Therefore, one-point Gauss Quadrature Rule can be expressed as

Example 3:
What would be the formula for
if you want the above formula to give you exact values of that is a

linear combination of and .

Solution:

If the formula is exact for linear combination of and , then

Solving the two Equations (E3.1) simultaneously gives

So,

Let us see if the formula works.

Evaluate using the above formula.

The exact value of is given by


Any surprises?

Now evaluate using the above formula

The exact value of is given by

Euler's Method

In mathematics and computational science, the Euler method, named after


Leonhard Euler, is a first order numerical procedure for solving ordinary differential
equations (ODEs) with a given initial value. It is the most basic kind of explicit method
for numerical integration for ordinary differential equations.

A method for solving ordinary differential equations using the formula y_(n+1) = y_n + h f(t_n, y_n), which advances a solution from t_n to t_(n+1) = t_n + h. Note that the method increments a solution through an interval while using derivative information from only the beginning of the interval. As a result, the step's error is O(h²). This method is called simply "the Euler method" by Press et al. (1992), although it is actually the forward version of the analogous Euler backward method.

While Press et al. (1992) describe the method as neither very accurate nor very
stable when compared to other methods using the same step size, the accuracy is actually
not too bad and the stability turns out to be reasonable as long as the so-called Courant-
Friedrichs-Lewy condition is fulfilled. This condition states that, given a space
discretization, a time step bigger than some computable quantity should not be taken. In
situations where this limitation is acceptable, Euler's forward method becomes quite
attractive because of its simplicity of implementation.

The simplest method of numerical integration is Euler's method, which is presented now. Consider the first order differential equation:

dy/dt = f(y, t)    (equation 1)

Starting at some time t0, the value of y(t0 + h) can then be approximated by the value of y(t0) plus the time step multiplied by the slope of the function, which is the derivative of y(t) (note: this is simply a first order Taylor expansion):

y*(t0 + h) = y(t0) + h (dy/dt at t0)    (equation 2)

We will call this approximate value y*(t).

So, if we can calculate the value of dy/dt at time t0 (using equation 1), then we can generate an approximation for the value of y at time t0 + h using equation 2. We can then use this new value of y (at t0 + h) to find dy/dt (at t0 + h) and repeat. Although this seems circular, if properly used it can generate an approximate solution. This is referred to as Euler's method.

This process is shown graphically below for a single iteration of the process.

With this simple background, the Euler method for first order differential equations can be stated as follows:

1) Starting at time t0, choose a value for h, and find the initial condition y(t0).

2) From y(t0), calculate the derivative of y(t) at t = t0 using equation 1. Call this k1. The slope is shown as the red line in the drawing above.

3) From this value, find an approximate value for y*(t0 + h) using equation 2. This is labeled in red in the diagram above; the exact value is shown in blue. Note that the difference between the approximate and exact solutions is quite large; in practice, h is chosen small enough to make this error small.

4) Let t0 = t0 + h, and y(t0) = y*(t0 + h).

5) Repeat steps 2 through 4 until the solution is finished.
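The steps above can be sketched in Python (a minimal sketch; names are ours, and the test equation dy/dt = y is our own illustrative choice):

```python
import math

def euler(f, t0, y0, h, n):
    """Forward Euler: repeat y*(t + h) = y(t) + h * f(t, y) for n steps."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # step 3: advance y with the slope at the interval start
        t = t + h             # step 4: advance t
    return y

# dy/dt = y with y(0) = 1; the exact solution is e**t, so y(1) should be near e.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
print(approx)
```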

Numerical Example (1st order differential equation - Euler)

Now let's go through a numerical example with t0 = 0 for the equation:

which has an exact solution

First Time Step

First, let's use h = 0.1. To find y*(t0 + h) = y*(h) (since t0 = 0), we first evaluate the derivative:

We then find the approximate solution after the time step:

Our approximate solution is 1.1; the exact solution is 1.04.

Second Time Step

To find the next approximation, we take t0 = 0.1 and repeat. First we evaluate the derivative (using our approximation for the value of y(t0)):

We then find the approximate solution after the second time step (again using our approximate value):

Our approximate solution is 1.08; the exact solution is 1.00.

Let's start by looking at an initial value problem whose solution is known:

We know what the solution is. This means that after we find our approximate solution, we will be able to determine how good an approximation it really is.

Let's suppose that we are interested in the value of the solution at . We know the
value at since that is a part of the initial value problem---namely, . Notice
that the differential equation also tells us the derivative of the solution at since

If we now form the linear approximation at , we find that


.

Then our approximation yields

The problem with the approximation is that the derivative of the solution is changing across the interval, but the approximation assumes that it is constantly 1. We can try to fix this up by dividing the interval into two pieces: first, we will use the linear approximation based at the starting point to approximate the value at the midpoint; then we will use a linear approximation at the midpoint to obtain an approximate value at the endpoint.

We have already obtained the linear approximation based at . This


produces the approximate value . This tells us that the solution curve
approximately passes through . That means that
We will then form the linear approximation at the point : it produces

Now we will work with a general initial value problem

We will again form an approximate solution by taking lots of little steps. We will call the
distance between the steps h and the various points . To get from one step to the
next, we will form the linear approximation at . The derivative at this point is given by
the differential equation: . The linear approximation is then

so that

This technique is called Euler's Method.

The logistic equation

Now we will consider the initial value problem

Notice that this has the basic form of the logistic equation. We have studied this equation
qualitatively, but we do not explicitly know solutions. As an example, we will

approximate the solution on the interval by taking steps of width h.

Applying Euler's Method, we can generate an approximate solution by


You can experiment with the number of steps and observe the approximate solution. As you take more steps, the solution does not vary too much when you increase the number of steps, so you can feel confident that your solution is a good approximation.
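The logistic right-hand side itself did not survive in the text above, so as a hypothetical stand-in the sketch below uses the canonical form dy/dt = y(1 − y) (an assumption, for illustration only) to show how the Euler update behaves on a logistic-type equation:

```python
def euler_logistic(y0, h, n):
    """Euler steps for the logistic-type equation dy/dt = y * (1 - y).

    The document's exact equation was not preserved; this canonical form is
    an assumption used purely for illustration."""
    y = y0
    for _ in range(n):
        y = y + h * y * (1.0 - y)
    return y

# Solutions starting in (0, 1) approach the equilibrium y = 1:
print(euler_logistic(0.1, 0.1, 200))
```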

Euler's Method for First Order Differential Equations


Subsequent time steps

The procedure is repeated for subsequent time steps.

Results

The table below shows the results of the Euler approximations, as well as the exact solution, up to 4 seconds. Here is a link to Matlab code, and C code, that performs the approximation. The Matlab code printed the results and made the plot (results for h = 0.01 were added; see below).

If h is decreased to 0.01, accuracy is improved, but the complexity (and time taken) of the calculation is greatly increased. Compare results with h = 0.1 at t = 0.1 and t = 4.0. Note that on the graph above, the exact solution and the approximate solution with h = 0.01 are almost identical.

Runge–Kutta methods

In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M. W. Kutta. See the article on numerical ordinary differential equations for more background and other methods; see also the list of Runge–Kutta methods.

The classical fourth-order Runge–Kutta method

One member of the family of Runge–Kutta methods is so commonly used that it


is often referred to as "RK4" or simply as "the Runge–Kutta method".

Let an initial value problem be specified as follows:

y′ = f(t, y),  y(t0) = y0

Then the RK4 method for this problem is given by the following equations:

y_(n+1) = y_n + (h/6) (k1 + 2 k2 + 2 k3 + k4)
t_(n+1) = t_n + h

where y_(n+1) is the RK4 approximation of y(t_(n+1)), and

k1 = f(t_n, y_n)
k2 = f(t_n + h/2, y_n + (h/2) k1)
k3 = f(t_n + h/2, y_n + (h/2) k2)
k4 = f(t_n + h, y_n + h k3)

Thus, the next value (y_(n+1)) is determined by the present value (y_n) plus the product of the size of the interval (h) and an estimated slope. The slope is a weighted average of slopes:

 k1 is the slope at the beginning of the interval;


 k2 is the slope at the midpoint of the interval, using slope k1 to determine the value
of y at the point tn + h/2 using Euler's method;
 k3 is again the slope at the midpoint, but now using the slope k2 to determine the y-
value;
 k4 is the slope at the end of the interval, with its y-value determined using k3.

In averaging the four slopes, greater weight is given to the slopes at the midpoint:

The RK4 method is a fourth-order method, meaning that the error per step is on the order
of h5, while the total accumulated error has order h4.
Note that the above formulas are valid for both scalar- and vector-valued functions (i.e., y can be a vector and f an operator). For example, one can integrate the Schrödinger equation using the Hamiltonian operator as the function f.
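The four-slope update described by the bullets above can be sketched in Python (a minimal sketch; names are ours, and the test equation dy/dt = y is our own illustrative choice):

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step: a weighted average of four slope estimates."""
    k1 = f(t, y)                              # slope at the beginning
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)     # slope at the midpoint, via k1
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)     # slope at the midpoint, via k2
    k4 = f(t + h, y + h * k3)                 # slope at the end, via k3
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rk4(f, t0, y0, h, n):
    """Apply n RK4 steps starting from (t0, y0)."""
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# dy/dt = y with y(0) = 1; ten steps of h = 0.1 closely approximate e.
approx = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx)
```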

Explicit Runge–Kutta methods

The family of explicit Runge–Kutta methods is a generalization of the RK4 method


mentioned above. It is given by

where

To specify a particular method, one needs to provide the integer s (the number of stages),
and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s).
These data are usually arranged in a mnemonic device, known as a Butcher tableau (after
John C. Butcher):

    0   |
    c2  | a21
    c3  | a31   a32
    ⋮   |  ⋮           ⋱
    cs  | as1   as2   ...   as,s−1
    ----+-------------------------------
        | b1    b2    ...   bs−1    bs
The Runge–Kutta method is consistent if

    ai1 + ai2 + ... + ai,i−1 = ci   for i = 2, ..., s.
There are also accompanying requirements if we require the method to have a certain
order p, meaning that the local truncation error is O(h^(p+1)). These can be derived
from the definition of the truncation error itself. For example, a two-stage method has
order 2 if b1 + b2 = 1, b2 c2 = 1/2, and b2 a21 = 1/2.
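The role of the tableau can be made concrete by a small stepper that takes the coefficients (A, b, c) as data; a sketch in Python, with the RK4 tableau filled in (the helper name `erk_step` is an illustrative choice):

```python
import math

def erk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method defined by a Butcher tableau.
    A[i] holds the coefficients a_{i+1,1}, ..., a_{i+1,i}; b are the weights
    and c the nodes, with c[0] = 0 for an explicit method."""
    k = []
    for i in range(len(b)):
        # each stage may use only previously computed stages (lower-triangular A)
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(len(b)))

# The RK4 tableau, written as data:
A = [[], [1/2], [0, 1/2], [0, 0, 1]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

# Check it on y' = -2*t*y, y(0) = 1, whose exact solution is exp(-t^2).
f = lambda t, y: -2 * t * y
t, y, h = 0.0, 1.0, 0.05
for _ in range(20):                      # integrate from t = 0 to t = 1
    y = erk_step(f, t, y, h, A, b, c)
    t += h
```

Swapping in a different (A, b, c) — Euler, midpoint, Heun — changes the method without touching the stepping code, which is exactly what the tableau notation is for.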

Example: the RK4 method falls in this framework. Its tableau is:

    0    |
    1/2  | 1/2
    1/2  | 0     1/2
    1    | 0     0     1
    -----+-----------------------
         | 1/6   1/3   1/3   1/6

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the
formula yn+1 = yn + h f(tn, yn). This is the only consistent explicit Runge–Kutta method
with one stage. The corresponding tableau is:

    0 |
    --+--
      | 1

An example of a second-order method with two stages is provided by the midpoint method:

    yn+1 = yn + h f(tn + h/2, yn + (h/2) f(tn, yn))

The corresponding tableau is:

    0    |
    1/2  | 1/2
    -----+----------
         | 0     1

Note that this 'midpoint' method is not the optimal RK2 method. An alternative is
provided by Heun's method, whose tableau has c2 = a21 = 1 and weights b = (1/2, 1/2).
If one wants to minimize the truncation error, a different choice of coefficients should
be used (Atkinson, p. 423). Other important methods are Fehlberg, Cash–Karp, and
Dormand–Prince; see also the article on adaptive step size.
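The order conditions above can be checked numerically: with a21 = c2 and b2 = 1/(2·c2), both conditions hold for any c2, and halving the step size should cut the global error by about 2^2 = 4. A sketch in Python (the helper names are illustrative; c2 = 1/2 gives the midpoint method, c2 = 1 gives Heun's):

```python
import math

def rk2_step(f, t, y, h, c2):
    """One step of the generic two-stage second-order method with a21 = c2 and
    weights b = (1 - 1/(2*c2), 1/(2*c2)), so that b1 + b2 = 1 and b2*c2 = 1/2."""
    b2 = 1 / (2 * c2)
    k1 = f(t, y)
    k2 = f(t + c2 * h, y + c2 * h * k1)
    return y + h * ((1 - b2) * k1 + b2 * k2)

def error_at_1(c2, n):
    """Global error at t = 1 for y' = y, y(0) = 1 (exact value e), using n steps."""
    t, y, h = 0.0, 1.0, 1.0 / n
    for _ in range(n):
        y = rk2_step(lambda t, y: y, t, y, h, c2)
        t += h
    return abs(y - math.e)

# Halving h should reduce the error by roughly a factor of 4 for both variants.
midpoint_ratio = error_at_1(0.5, 50) / error_at_1(0.5, 100)
heun_ratio = error_at_1(1.0, 50) / error_at_1(1.0, 100)
```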

Usage

The following is an example usage of a two-stage explicit Runge–Kutta method with the
tableau:

    0    |
    2/3  | 2/3
    -----+------------
         | 1/4   3/4

to solve the initial-value problem

    y′ = tan(y) + 1,   y(1) = 1,   t ∈ [1, 1.1]

with step size h = 0.025.

The tableau above yields the equivalent corresponding equations defining the method:

    k1 = yn
    k2 = yn + (2/3) h f(tn, k1)
    yn+1 = yn + h ((1/4) f(tn, k1) + (3/4) f(tn + (2/3) h, k2))
t0 = 1, y0 = 1

Step 1 (t1 = 1.025):
    k1 = y0 = 1
    f(t0, k1) = 2.557407725
    k2 = y0 + (2/3) h f(t0, k1) = 1.042623462
    y1 = y0 + h ((1/4) f(t0, k1) + (3/4) f(t0 + (2/3) h, k2)) = 1.066869388

Step 2 (t2 = 1.05):
    k1 = y1 = 1.066869388
    f(t1, k1) = 2.813524695
    k2 = y1 + (2/3) h f(t1, k1) = 1.113761467
    y2 = y1 + h ((1/4) f(t1, k1) + (3/4) f(t1 + (2/3) h, k2)) = 1.141332181

Step 3 (t3 = 1.075):
    k1 = y2 = 1.141332181
    f(t2, k1) = 3.183536647
    k2 = y2 + (2/3) h f(t2, k1) = 1.194391125
    y3 = y2 + h ((1/4) f(t2, k1) + (3/4) f(t2 + (2/3) h, k2)) = 1.227417567

Step 4 (t4 = 1.1):
    k1 = y3 = 1.227417567
    f(t3, k1) = 3.796866512
    k2 = y3 + (2/3) h f(t3, k1) = 1.290698676
    y4 = y3 + h ((1/4) f(t3, k1) + (3/4) f(t3 + (2/3) h, k2)) = 1.335079087

The numerical solutions are the values y1, y2, y3, and y4. Note that each f(ti, k1) is
computed once and reused, so that it need not be recalculated in the yi updates.
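The worked values can be reproduced in a few lines of Python; the right-hand side f(t, y) = tan(y) + 1 is the one that matches the printed numbers (f(1, 1) = tan 1 + 1 = 2.557407725):

```python
import math

f = lambda t, y: math.tan(y) + 1    # matches f(t0, k1) = 2.557407725 at t0 = y0 = 1
t, y, h = 1.0, 1.0, 0.025

for _ in range(4):                  # four steps take t from 1 to 1.1
    k1 = y                          # first stage value, as in the tableau
    k2 = y + 2/3 * h * f(t, k1)     # c2 = 2/3, a21 = 2/3
    y = y + h * (1/4 * f(t, k1) + 3/4 * f(t + 2/3 * h, k2))   # weights b = (1/4, 3/4)
    t += h
# y = 1.335079087..., matching y4 in the table above
```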

Adaptive Runge-Kutta methods

The adaptive methods are designed to produce an estimate of the local truncation
error of a single Runge–Kutta step. This is done by having two methods in the tableau,
one with order p and one with order p − 1.

The lower-order step is given by

    y*n+1 = yn + h (b*1 k1 + b*2 k2 + ... + b*s ks)

where the ki are the same as for the higher-order method. Then the error is

    en+1 = yn+1 − y*n+1 = h ((b1 − b*1) k1 + ... + (bs − b*s) ks)

which is O(h^p). The Butcher tableau for this kind of method is extended to give the
values of the b*i:

    0    |
    c2   | a21
    c3   | a31   a32
    ⋮    |  ⋮           ⋱
    cs   | as1   as2   ...   as,s−1
    -----+---------------------------------
         | b1    b2    ...   bs−1    bs
         | b*1   b*2   ...   b*s−1   b*s

The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended
Butcher tableau is:

    0     |
    1/4   | 1/4
    3/8   | 3/32        9/32
    12/13 | 1932/2197   -7200/2197   7296/2197
    1     | 439/216     -8           3680/513     -845/4104
    1/2   | -8/27       2            -3544/2565   1859/4104     -11/40
    ------+-------------------------------------------------------------------
          | 16/135      0            6656/12825   28561/56430   -9/50    2/55
          | 25/216      0            1408/2565    2197/4104     -1/5     0

However, the simplest adaptive Runge–Kutta method involves combining Heun's method,
which is of order 2, with the Euler method, which is of order 1. Its extended Butcher
tableau is:

    0  |
    1  | 1
    ---+------------
       | 1/2   1/2
       | 1     0
The error estimate is used to control the stepsize.
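A minimal sketch of this idea in Python, using the Heun–Euler pair above (the function names and the simple step-size update rule are illustrative; production codes use more careful controllers):

```python
import math

def heun_euler_step(f, t, y, h):
    """One step of the order-2 Heun method with the embedded order-1 Euler
    method; returns the higher-order result and the local error estimate."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 / 2 + k2 / 2)   # b  = (1/2, 1/2), order 2
    y_low = y + h * k1                   # b* = (1, 0), order 1
    return y_high, abs(y_high - y_low)   # error = h * sum (b_i - b*_i) k_i

def integrate(f, t, y, t_end, h, tol):
    """Integrate y' = f(t, y) from t to t_end, adapting h so the per-step
    error estimate stays below tol."""
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:
            t, y = t + h, y_new          # accept the step
        # rescale h toward the target error (0.9 is a safety factor)
        h *= 0.9 * min(2.0, max(0.2, math.sqrt(tol / max(err, 1e-16))))
    return y

# y' = -y, y(0) = 1: the result at t = 1 should be close to exp(-1).
y_end = integrate(lambda t, y: -y, 0.0, 1.0, 1.0, 0.1, 1e-6)
```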

Implicit Runge-Kutta methods

The implicit methods are more general than the explicit ones. The distinction shows up in
the Butcher tableau: for an implicit method, the coefficient matrix aij is not necessarily
lower triangular:

    c1  | a11   a12   ...   a1s
    c2  | a21   a22   ...   a2s
    ⋮   |  ⋮                  ⋮
    cs  | as1   as2   ...   ass
    ----+---------------------------
        | b1    b2    ...   bs

The approximate solution to the initial value problem reflects the greater number of
coefficients:

    yn+1 = yn + h (b1 k1 + ... + bs ks),   where
    ki = f(tn + ci h, yn + h (ai1 k1 + ai2 k2 + ... + ais ks))
Due to the fullness of the matrix aij, the evaluation of each ki is now considerably more
involved and depends on the specific function f(t, y). Despite the difficulties, implicit
methods are of great importance due to their high (possibly unconditional) stability,
which is especially important in the solution of partial differential equations. The
simplest example of an implicit Runge–Kutta method is the backward Euler method:

    yn+1 = yn + h f(tn+1, yn+1)

The Butcher tableau for this is simply:

    1 | 1
    --+--
      | 1

It can be difficult to make sense of even this simple implicit method, as seen from the
expression for k1:

    k1 = f(tn + h, yn + h k1)

In this case, the awkward expression above can be simplified by noting that

    yn + h k1 = yn+1

so that

    k1 = f(tn+1, yn+1)

from which

    yn+1 = yn + h f(tn+1, yn+1)
follows. Though simpler than the "raw" representation before manipulation, this is an
implicit relation, so the actual solution procedure is problem dependent. Multistep
implicit methods have been used with success by some researchers. The combination of
stability, higher-order accuracy with fewer steps, and stepping that depends only on the
previous value makes them attractive; however, the complicated problem-specific
implementation and the fact that the ki must often be approximated iteratively mean that
they are not common.
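To make the implicit relation concrete, here is a sketch of backward Euler with the k1 equation solved by fixed-point iteration (names and the test problem are illustrative; Newton's method is the more robust choice for strongly stiff problems):

```python
import math

def backward_euler_step(f, t, y, h, iters=100, tol=1e-12):
    """One backward Euler step: solve k1 = f(t + h, y + h*k1) by fixed-point
    iteration, then return y_{n+1} = y_n + h*k1.  The iteration converges
    when h * |df/dy| < 1."""
    k1 = f(t, y)                         # start from the explicit slope
    for _ in range(iters):
        k1_next = f(t + h, y + h * k1)
        if abs(k1_next - k1) < tol:
            break
        k1 = k1_next
    return y + h * k1

# Moderately stiff test problem: y' = -50*(y - cos(t)), y(0) = 0.
f = lambda t, y: -50 * (y - math.cos(t))
t, y, h = 0.0, 0.0, 0.01
for _ in range(100):                     # integrate to t = 1
    y = backward_euler_step(f, t, y, h)
    t += h
```

Here h·|∂f/∂y| = 0.5 < 1, so the fixed-point iteration converges; for stiffer problems (larger |∂f/∂y|) one would replace the inner loop with a Newton solve.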