Project
Submitted by:
Palabay, Miguel R.
Submitted to:
Engr. Grace A. Llobrera
March 6, 2008
The Trapezoidal Rule
The trapezoidal rule is the first of the Newton-Cotes closed integration formulas. It
corresponds to the case where the interpolating polynomial is first order.
I = ∫ₐᵇ f(x) dx ≅ ∫ₐᵇ f₁(x) dx

where f₁(x) is the straight line joining the endpoints,

f₁(x) = f(a) + [(f(b) − f(a)) / (b − a)] (x − a).

The area under this straight line is an estimate of the integral of f(x) between the limits a
and b:

I = ∫ₐᵇ { f(a) + [(f(b) − f(a)) / (b − a)] (x − a) } dx = h [f(a) + f(b)] / 2,
where h = b − a is the width of the interval.
This rule comes from determining the area of a right trapezoid with bases of
lengths f(a) and f(b) respectively and a height of length h. When using a graph to
illustrate the trapezoidal rule, the height of the right trapezoid is actually horizontal and
the bases are vertical. This may be confusing to someone who is seeing the trapezoidal
rule for the first time. An example is shown below.
The figure in red need not be a right trapezoid. If either f (a) = 0 or f (b) = 0, the
figure will be a right triangle. If both f (a) = 0 and f (b) = 0, the figure will be a line
segment. In any case, the same rule for approximating the corresponding definite integral
is used.
The trapezoidal rule is the first Newton-Cotes quadrature formula. It has degree of
precision 1. This means it is exact for polynomials of degree less than or equal to one.
We can see this with a simple example.
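As a minimal illustration (the function name and the numbers below are my own, not from the text), a single-application trapezoid in Python reproduces a linear integral exactly but not a quadratic one:

```python
def trapezoid(f, a, b):
    # Single-application trapezoidal rule: width (b - a) times the
    # average of the endpoint values f(a) and f(b).
    return (b - a) * (f(a) + f(b)) / 2.0

# Exact for a first-order polynomial: integral of 2x + 3 over [0, 4] is 28.
approx_linear = trapezoid(lambda x: 2 * x + 3, 0.0, 4.0)

# Not exact for a quadratic: integral of x**2 over [0, 2] is 8/3,
# but the trapezoid gives (2)(0 + 4)/2 = 4.
approx_quad = trapezoid(lambda x: x ** 2, 0.0, 2.0)
```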
It is important to note that many calculus books give a different definition of the
trapezoidal rule. Typically, they define it to be what is actually the composite trapezoidal
rule, which applies the trapezoidal rule on a specified number of subintervals.
Also note the trapezoidal rule can be derived by integrating a linear interpolation or by
using the method of undetermined coefficients. The latter is probably a bit easier.

The trapezoidal rule is a method for finding an approximate value for a definite integral.
Suppose we have the definite integral

I = ∫ₐᵇ f(x) dx.

First the area under the curve y = f(x) is divided into n strips, each of equal width
h = (b − a)/n (the figure referred to used n = 4 strips). The shape of each strip is
approximated to be like that of a trapezium, so the area of the first strip is approximately
(h/2)[f(x₀) + f(x₁)], where x₀ = a and x₁ = a + h. Adding up these areas gives us an
approximate value for our definite integral:

I ≈ (h/2)[f(x₀) + 2f(x₁) + 2f(x₂) + … + 2f(x_{n−1}) + f(xₙ)].

Here, the integral is computed on each of the sub-intervals [xᵢ, xᵢ₊₁] by using the linear
interpolating formula and then summing them up to obtain the desired integral.
This is called the Trapezoidal rule. It is a simple quadrature formula, but it is not very
accurate. An estimate for the error E₁ in numerical integration using the Trapezoidal rule is
given by

E₁ ≈ −[(b − a)/12] h² f″(ξ), for some point ξ in (a, b).
Recall that in the case of a linear function the second forward difference is zero; hence,
the Trapezoidal rule gives the exact value of the integral if the integrand is a linear function.
Employing the integral under a straight-line segment to approximate the integral under a
curve can obviously incur an error that may be substantial. An estimate for the local
truncation error of a single application of the trapezoidal rule is

Eₜ = −(1/12) f″(ξ) (b − a)³,

where ξ lies somewhere in the interval from a to b.
Three major conclusions:
1. For individual applications with nicely behaved functions, the multiple-segment
trapezoidal rule is just fine for attaining the type of accuracy required in many
engineering applications.
2. If high accuracy is required, the multiple-segment trapezoidal rule demands a great deal
of computational effort. Although this effort may be negligible for a single application, it
could be very important when (a) numerous integrals are being evaluated or (b) the
function itself is time-consuming to evaluate. For such cases, more efficient approaches
may be necessary.
3. Finally, round-off errors can limit our ability to determine integrals. This is due both to
the machine precision and to the numerous computations involved in some techniques
like the multiple segment trapezoidal rule.
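The multiple-segment (composite) trapezoidal rule discussed in these conclusions can be sketched as follows; the function name is my own:

```python
import math

def composite_trapezoid(f, a, b, n):
    # Composite trapezoidal rule: apply the trapezoidal rule on n equal
    # segments of width h = (b - a) / n. Interior points are shared by
    # two neighboring trapezoids, so they carry weight 2.
    h = (b - a) / float(n)
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2.0 * f(a + i * h)
    return (h / 2.0) * total

# For a smooth integrand the error falls roughly as 1/n**2.
estimate = composite_trapezoid(math.sin, 0.0, 1.0, 1000)
```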
Simpson's Rule
Simpson's Rule, named after the great English mathematician Thomas Simpson,
is an ingenious method for approximating integrals. If you don't know what integrals are
used for, don't feel bad. Many college students who complete three semesters of calculus
may be able to “compute” an integral, but won't know its practical application either.
Computation of integrals is difficult to learn and easy to forget.
Divide the area under the curve into n (where n is even) parabolic strips from x = a to
x = b. The area can be found with the following formula:

∫ₐᵇ f(x) dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + … + 2f(x_{n−2}) + 4f(x_{n−1}) + f(xₙ)],

where h = (b − a)/n and xᵢ = a + i·h. Repeating the basic three-point rule on successive
pairs of strips and summing the results gives this composite formula.
Derivation
Quadratic interpolation
One derivation replaces the integrand f(x) by the quadratic polynomial P(x) which takes
the same values as f(x) at the end points a and b and the midpoint m = (a+b) / 2. One can
use Lagrange polynomial interpolation to find an expression for this polynomial; integrating
it gives

∫ₐᵇ f(x) dx ≈ [(b − a)/6] [f(a) + 4 f(m) + f(b)].

Another derivation constructs Simpson's rule from two simpler approximations: the
midpoint rule M = (b − a) f(m) and the trapezoidal rule T = (b − a)[f(a) + f(b)]/2, whose
leading error terms are proportional to (b − a)³ f″ with coefficients 1/24 and −1/12
respectively. It follows that the leading error term vanishes if we take the weighted
average (2M + T)/3.
Undetermined coefficients
Error
If the interval of integration [a,b] is in some sense "small", then Simpson's rule
will provide an adequate approximation to the exact integral. By small, what we really
mean is that the function being integrated is relatively smooth over the interval [a,b]. For
such a function, a smooth quadratic interpolant like the one used in Simpson's rule will
give good results.
However, it is often the case that the function we are trying to integrate is not
smooth over the interval. Typically, this means that either the function is highly
oscillatory, or it lacks derivatives at certain points. In these cases, Simpson's rule may
give very poor results. One common way of handling this problem is by breaking up the
interval [a,b] into a number of small subintervals. Simpson's rule is then applied to each
subinterval, with the results being summed to produce an approximation for the integral
over the entire interval. This sort of approach is termed the composite Simpson's rule.
Suppose that the interval [a,b] is split up into n subintervals, with n an even number. Then
the composite Simpson's rule is given by

∫ₐᵇ f(x) dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + … + 2f(x_{n−2}) + 4f(x_{n−1}) + f(xₙ)].

The error committed by the composite Simpson's rule is bounded (in absolute value) by

[h⁴ (b − a)/180] · max over [a,b] of |f⁽⁴⁾(ξ)|,

where h is the "step length", given by h = (b − a) / n. This formulation splits the interval
[a,b] into subintervals of equal length. In practice, it is often advantageous to use
subintervals of different lengths, and concentrate the efforts on the places where the
integrand is less well-behaved. This leads to the adaptive Simpson's method.
def c_simpson_rule(f, a, b, n):
    """Approximate the definite integral of f from a to b by the
    composite Simpson's rule, dividing the interval into n parts."""
    dx = (float(b) - a) / n
    i = 0
    fks = 0.0
    while i <= n:
        xk = a + i * dx
        if i == 0 or i == n:
            fk = f(xk)        # endpoints: weight 1
        elif i % 2 == 1:
            fk = 4 * f(xk)    # odd interior points: weight 4
        else:
            fk = 2 * f(xk)    # even interior points: weight 2
        fks += fk
        i += 1
    return (dx / 3) * fks
Integrating sin(x) from 0 to 1 with a single application of Simpson's rule gives
0.459862189871, while the composite code above yields 0.459697694132, whereas the
true value is 1 − cos(1) = 0.45969769413... . The composite
version of the rule gives a noticeably better approximation.
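For reference, the basic single-application rule that produces the 0.459862189871 figure can be sketched as follows (the function name is my own):

```python
import math

def simpson_rule(f, a, b):
    # Single-application Simpson's rule using the midpoint m = (a + b) / 2:
    # (b - a)/6 * [f(a) + 4*f(m) + f(b)]
    m = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

basic = simpson_rule(math.sin, 0.0, 1.0)  # about 0.459862189871
exact = 1.0 - math.cos(1.0)               # about 0.459697694132
```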
Romberg Integration
Since the composite Trapezoidal rule was used, the result obtained from Equation (8) has
an error of O(h²) and can be improved by Richardson's extrapolation, where the variable
TV (true value) is estimated using Richardson's extrapolation formula. Note also that the
≈ sign is replaced by the = sign. Hence the estimate of the true value now is

TV ≈ I₂ₙ + (I₂ₙ − Iₙ)/3,

where Iₙ and I₂ₙ are the Trapezoidal estimates with n and 2n segments. Determine another
integral value with further halving the step size (doubling the number of segments); then

TV ≈ I₄ₙ + (I₄ₙ − I₂ₙ)/3.

Each of these equations now has an error of O(h⁴). The above procedure can be further
improved by combining the new estimates of the true value, which have an error of O(h⁴),
to give an estimate with an error of O(h⁶).
Based on this procedure, a general expression for Romberg integration can be written as

I_{j,k} = [4^{k−1} · I_{j+1,k−1} − I_{j,k−1}] / (4^{k−1} − 1),   k ≥ 2.

The index k represents the order of extrapolation: k = 1 represents the values obtained
from the regular Trapezoidal rule, k = 2 represents the values obtained using the true
error estimate of O(h²), and so on. The index j represents the less and more accurate
estimates of the integral: the value of the integral with index j + 1 is more accurate than
that with index j.

For k = 2 and j = 1, for example, I_{1,2} = [4 I_{2,1} − I_{1,1}] / 3.
Example:
The vertical distance covered by a rocket from to seconds is given by
Use Romberg's rule to find the distance covered, using the 1-, 2-, 4-, and 8-segment
Trapezoidal rule results as given in Table 1.
Solution
From Table 1, the needed values from the original Trapezoidal rule are

where the above four values correspond to using the 1-, 2-, 4- and 8-segment Trapezoidal
rule, respectively. To get the first-order extrapolation values,
Similarly,
For the second order extrapolation values,
Similarly
Method
The method can be defined inductively in this way:

I_{1,1} = (h/2)(f(a) + f(b)),  h = b − a,
I_{j,1} = the composite Trapezoidal estimate with 2^{j−1} segments,
I_{j,k} = [4^{k−1} I_{j+1,k−1} − I_{j,k−1}] / (4^{k−1} − 1) for k ≥ 2.
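A compact Python sketch of the whole procedure (names are my own; the extrapolation line implements the general expression above, with the exponent written as 4**k for a column counter k starting at 1):

```python
import math

def romberg(f, a, b, levels):
    # Column 0: composite trapezoidal estimates with 1, 2, 4, ... segments.
    trap = []
    n = 1
    for _ in range(levels):
        h = (b - a) / float(n)
        s = (f(a) + f(b)) / 2.0 + sum(f(a + i * h) for i in range(1, n))
        trap.append(h * s)
        n *= 2
    # Later columns: Richardson extrapolation of the previous column.
    col = trap
    for k in range(1, levels):
        col = [(4 ** k * col[j + 1] - col[j]) / (4 ** k - 1)
               for j in range(len(col) - 1)]
    return col[0]

estimate = romberg(math.sin, 0.0, 1.0, 4)
```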
Gauss Quadrature
Gaussian quadrature seeks to obtain the best numerical estimate of an integral by picking
optimal abscissas xᵢ at which to evaluate the function f(x). The fundamental theorem of
Gaussian quadrature states that the optimal abscissas of the n-point Gaussian quadrature
formula are precisely the roots of the orthogonal polynomial for the same interval and
weighting function. Gaussian quadrature is optimal because it fits all polynomials up to
degree 2n − 1 exactly. Slightly less optimal fits are obtained from Radau quadrature and
Laguerre quadrature.
Let the right-hand side be exact for integrals of a straight line, that is, for the integrated
form of f(x) = a₀ + a₁x. Matching the two sides for arbitrary a₀ and a₁ then determines
the weights, giving c₁ = c₂ = (b − a)/2.
There are four unknowns x₁, x₂, c₁ and c₂. These are found by assuming that the
formula gives exact results for integrating a general third-order polynomial,
f(x) = a₀ + a₁x + a₂x² + a₃x³. Hence

∫ₐᵇ (a₀ + a₁x + a₂x² + a₃x³) dx = c₁ f(x₁) + c₂ f(x₂).

Since in Equation (10) the constants a₀, a₁, a₂ and a₃ are arbitrary, the coefficients of
a₀, a₁, a₂ and a₃ on the two sides are equal. This gives us four equations.
Without proof (see Example 1 for proof of a related problem), we can find that the above
four simultaneous nonlinear equations have only one acceptable solution:

c₁ = c₂ = (b − a)/2,  x₁ = (b + a)/2 − [(b − a)/2](1/√3),  x₂ = (b + a)/2 + [(b − a)/2](1/√3).

Hence

∫ₐᵇ f(x) dx ≈ [(b − a)/2] [f(x₁) + f(x₂)].
Method 2:
We can derive the same formula by assuming that the expression gives exact values for
the integrals of 1, x, x² and x³. These four simultaneous nonlinear Equations (14) can be
solved with a single acceptable solution. Hence the same two-point formula results.
Since two points are chosen, it is called the two-point Gauss Quadrature Rule. Higher
point versions can also be developed.
For example,

∫ₐᵇ f(x) dx ≈ c₁ f(x₁) + c₂ f(x₂) + c₃ f(x₃)

is called the three-point Gauss Quadrature Rule. The coefficients c₁, c₂ and c₃, and the
function arguments x₁, x₂ and x₃, are calculated by assuming the formula gives exact
expressions for integrating a general fifth-order polynomial,

∫ₐᵇ (a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + a₅x⁵) dx.
In handbooks (see Table 1), coefficients and arguments for the n-point Gauss Quadrature
Rule are given for integrals of the form

∫₋₁¹ g(t) dt ≈ Σᵢ cᵢ g(tᵢ).

The answer lies in that any integral with limits [a, b] can be converted into an integral
with limits [−1, 1]. Let

x = m·t + c.

If x = a then t = −1; if x = b then t = 1, such that

a = −m + c,  b = m + c.

Hence m = (b − a)/2 and c = (b + a)/2. Substituting our values of m and c into the integral gives us

∫ₐᵇ f(x) dx = ∫₋₁¹ f( [(b − a)/2] t + (b + a)/2 ) · [(b − a)/2] dt.
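Putting the change of interval and the two-point rule together gives the following sketch (the function name is my own; the nodes ±1/√3 with unit weights are the standard two-point values):

```python
import math

def gauss2(f, a, b):
    # Map [a, b] onto [-1, 1] with x = (b - a)/2 * t + (b + a)/2, sample
    # f at the Gauss points t = +-1/sqrt(3) (both with weight 1), and
    # scale by the Jacobian (b - a)/2.
    half = (b - a) / 2.0
    mid = (b + a) / 2.0
    t = 1.0 / math.sqrt(3.0)
    return half * (f(mid - half * t) + f(mid + half * t))

# Two points suffice for exactness up to third-order polynomials:
# the integral of x**3 over [0, 2] is 4.
cubic = gauss2(lambda x: x ** 3, 0.0, 2.0)
```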
Example 1:
For an integral show that the two-point Gauss Quadrature rule approximates
to
where
Solution:
That leaves this solution as the only possible acceptable one and, in fact, it
does not have violations (check it for yourself).
Since Equation (E1.7) requires that the two results be of opposite sign, we get
Hence,
Example 2:
Solution:
Example 3:
What would be the formula for
if you want the above formula to give you exact values of that is a
Solution:
So,
Euler's Method
A method for solving ordinary differential equations using the formula
yn+1 = yn + h·f(tn, yn), which advances a solution from tn to tn+1 = tn + h. Note that the
method increments a solution through an interval while using derivative information from
only the beginning of the interval. As a result, the step's error is O(h²). This method is
called simply "the Euler
method" by Press et al. (1992), although it is actually the forward version of the
analogous Euler backward method.
While Press et al. (1992) describe the method as neither very accurate nor very
stable when compared to other methods using the same step size, the accuracy is actually
not too bad and the stability turns out to be reasonable as long as the so-called Courant-
Friedrichs-Lewy condition is fulfilled. This condition states that, given a space
discretization, a time step bigger than some computable quantity should not be taken. In
situations where this limitation is acceptable, Euler's forward method becomes quite
attractive because of its simplicity of implementation.
Starting at some time t0, the value of y(t0+h) can then be approximated by the value of
y(t0) plus the time step multiplied by the slope of the function, which is the derivative of
y(t) (note: this is simply a first-order Taylor expansion):

y(t0 + h) ≈ y(t0) + h · dy/dt(t0)    (equation 2)
So, if we can calculate the value of dy/dt at time t0 (using equation 1) then we can
generate an approximation for the value of y at time t0+h using equation 2. We can then
use this new value of y (at t0+h) to find dy/dt (at t0+h) and repeat. Although this seems
circular, if properly used it can generate an approximate solution. This is referred to as
Euler's method.
This process is shown graphically below for a single iteration of the process.
With this simple background the Euler method for first order differential equations can be
stated as follows:
1) Starting at time to, choose a value for h, and find initial condition y (to).
2) From y(to) calculate the derivative of y(t) at t=to using equation 1. Call this k1. The
slope is shown as the red line in the drawing above.
3) From this value find an approximate value for y*(to+h) using equation 2.
This is labeled in red in the diagram above. The exact value is shown in blue. Note that the difference
between approximate and exact solutions is quite large; in practice h is chosen small enough to make this
error small.
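The three steps above translate directly into code; a minimal sketch (names are my own) that repeats the step a given number of times:

```python
def euler(f, t0, y0, h, steps):
    # Forward Euler: repeatedly advance y(t + h) ~ y(t) + h * f(t, y),
    # using the derivative only at the beginning of each interval.
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)   # equation 2: linear extrapolation of y
        t = t + h
    return y

# dy/dt = y with y(0) = 1 has exact solution e**t; a small h gets close
# to e = 2.71828... at t = 1.
approx_e = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```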
Now let's go through a numerical example with t0=0 for the equation:

First, let's use h=0.1. To find y*(t0+h)=y*(h) (since t0=0) we first evaluate the derivative:

To find the next approximation we take t0=0.1 and repeat. First we evaluate the
derivative (using our approximation for the value of y(t0)):

We then find the approximate solution after the second time step (again using our
approximate value):

Our approximate solution is 1.08; the exact solution is 1.00.

Let's start by looking at an
initial value problem whose solution is known:
We know that the solution is . This means that after we find our approximate
solution, we will be able to determine how good of an approximation it really is.
Let's suppose that we are interested in the value of the solution at . We know the
value at since that is a part of the initial value problem---namely, . Notice
that the differential equation also tells us the derivative of the solution at since
The problem with the approximation is that the derivative of the solution is changing
across the interval, but the approximation assumes that it is constantly 1. We can try
to fix this up by dividing the interval into two pieces: First, we will use the linear
approximation based at to approximate the value at . Then we will use a
linear approximation at to obtain an approximate value at .
We will again form an approximate solution by taking lots of little steps. We will call the
distance between the steps h and the various points . To get from one step to the
next, we will form the linear approximation at . The derivative at this point is given by
the differential equation: . The linear approximation is then
so that
Notice that this has the basic form of the logistic equation. We have studied this equation
qualitatively, but we do not explicitly know solutions. As an example, we will
approximate a solution numerically with Euler's method.
Results
The table below shows the results of the Euler approximations, as well as the exact
solution, up to 4 seconds. Here is a link to Matlab code, and C code, that performs the
approximation. The Matlab code printed the results and made the plot (I added results for
h=0.01 -- see below).
If h is decreased to 0.01, accuracy is improved, but the complexity (and time taken) of the
calculation is greatly increased. Compare results with h=0.1 at t=0.1 and t=4.0. Note
that on the graph above, the exact solution and the approximate solution with h=0.01 are
almost identical.
Runge–Kutta methods
Thus, the next value (yn+1) is determined by the present value (yn) plus the product of the
size of the interval (h) and an estimated slope. The slope is a weighted average of four slopes:

k1 = f(tn, yn)
k2 = f(tn + h/2, yn + (h/2)·k1)
k3 = f(tn + h/2, yn + (h/2)·k2)
k4 = f(tn + h, yn + h·k3)

In averaging the four slopes, greater weight is given to the slopes at the midpoint:

yn+1 = yn + (h/6)·(k1 + 2k2 + 2k3 + k4)

The RK4 method is a fourth-order method, meaning that the error per step is on the order
of h⁵, while the total accumulated error has order h⁴.
Note that the above formulas are valid for both scalar- and vector-valued functions (i.e., y
can be a vector and f an operator). For example, one can integrate Schrödinger's equation
using the Hamiltonian operator as the function f.
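A sketch of one RK4 step for a scalar equation (the function name is my own), following the four-slope weighted average described above:

```python
import math

def rk4_step(f, t, y, h):
    # Classical RK4: sample the slope at the start, twice at the midpoint,
    # and at the end, then average with weights (1, 2, 2, 1)/6.
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# One step of dy/dt = y from y(0) = 1 should be very close to e**0.1.
step = rk4_step(lambda t, y: y, 0.0, 1.0, 0.1)
```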
Each explicit Runge–Kutta method takes the form

yn+1 = yn + h·(b1 k1 + b2 k2 + … + bs ks),

where

k1 = f(tn, yn),
k2 = f(tn + c2 h, yn + h a21 k1),
k3 = f(tn + c3 h, yn + h (a31 k1 + a32 k2)),
  ⋮
ks = f(tn + cs h, yn + h (as1 k1 + … + a_{s,s−1} k_{s−1})).

To specify a particular method, one needs to provide the integer s (the number of stages),
and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s).
These data are usually arranged in a mnemonic device, known as a Butcher tableau (after
John C. Butcher):

0   |
c2  | a21
c3  | a31  a32
⋮   | ⋮          ⋱
cs  | as1  as2  …  a_{s,s−1}
    | b1   b2   …  bs−1   bs
The Runge–Kutta method is consistent if

Σ_{j=1}^{i−1} aij = ci  for i = 2, ..., s.
Example: the classical RK4 method described above has the tableau:

0   |
1/2 | 1/2
1/2 | 0    1/2
1   | 0    0    1
    | 1/6  1/3  1/3  1/6
However, the simplest Runge–Kutta method is the (forward) Euler method, given by the
formula yn+1 = yn + hf(tn,yn). This is the only consistent explicit Runge–Kutta method with
one stage. The corresponding tableau is:

0 |
  | 1

A second-order method with two stages is the midpoint method, with tableau:

0   |
1/2 | 1/2
    | 0    1
Note that this 'midpoint' method is not the optimal RK2 method. An alternative is
provided by Heun's method, where the 1/2's in the tableau above are simply replaced by
1's. If one wants to minimize the truncation error, the method below should be used
(Atkinson p. 423). Other important methods are Fehlberg, Cash-Karp and Dormand-
Prince. Also, read the article on Adaptive Stepsize.
Usage
As an example, consider a two-stage second-order Runge–Kutta method with tableau:

0   |
2/3 | 2/3
    | 1/4  3/4
The tableau above yields the equivalent corresponding equations below defining the
method:

k1 = yn
k2 = yn + (2/3)·h·f(tn, k1)
yn+1 = yn + h·[(1/4)·f(tn, k1) + (3/4)·f(tn + (2/3)h, k2)]

The worked values below use t0 = 1, y0 = 1, and step size h = 0.025:

t0 = 1       k1 = y0 = 1            f(t0, k1) = 2.557407725   k2 = y0 + (2/3)h·f(t0, k1) = 1.042623462
t1 = 1.025   k1 = y1 = 1.066869388  f(t1, k1) = 2.813524695   k2 = y1 + (2/3)h·f(t1, k1) = 1.113761467
t2 = 1.05    k1 = y2 = 1.141332181  f(t2, k1) = 3.183536647   k2 = y2 + (2/3)h·f(t2, k1) = 1.194391125
t3 = 1.075   k1 = y3 = 1.227417567  f(t3, k1) = 3.796866512   k2 = y3 + (2/3)h·f(t3, k1) = 1.290698676
t4 = 1.1
The adaptive methods are designed to produce an estimate of the local truncation
error of a single Runge-Kutta step. This is done by having two methods in the tableau,
one with order p and one with order p − 1. The lower-order step is

y*n+1 = yn + h·(b*1 k1 + … + b*s ks),

where the ki are the same as for the higher order method. Then the error is

en+1 = yn+1 − y*n+1 = h·Σᵢ (bi − b*i) ki,

which is O(h^p). The Butcher Tableau for this kind of method is extended to give the
values of b*i:

0   |
c2  | a21
c3  | a31  a32
⋮   | ⋮          ⋱
cs  | as1  as2  …  a_{s,s−1}
    | b1   b2   …  bs−1   bs
    | b*1  b*2  …  b*s−1  b*s
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended
Butcher Tableau is:
0
1/4 1/4
However, the simplest adaptive Runge-Kutta method involves combining the Heun
method, which is order 2, with the Euler method, which is order 1. Its extended Butcher
Tableau is:

0 |
1 | 1
  | 1/2  1/2
  | 1    0
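This embedded pair can be sketched as follows (names are my own): the two methods share the stage values, so the order-1 and order-2 results differ only in the b-row weights, and their difference serves as the local error estimate:

```python
def heun_euler_step(f, t, y, h):
    # Shared stages for the extended tableau above.
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_heun = y + h / 2.0 * (k1 + k2)   # weights 1/2, 1/2 (order 2)
    y_euler = y + h * k1               # weights 1, 0   (order 1)
    error = abs(y_heun - y_euler)      # local truncation error estimate
    return y_heun, error

# For dy/dt = y from y = 1 with h = 0.1: k1 = 1, k2 = 1.1,
# so y_heun = 1.105, y_euler = 1.1, and the error estimate is 0.005.
y_new, err = heun_euler_step(lambda t, y: y, 0.0, 1.0, 0.1)
```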
The implicit methods are more general than the explicit ones. The distinction shows up in
the Butcher Tableau: for an implicit method, the coefficient matrix aij is not necessarily
lower triangular:
The approximate solution to the initial value problem reflects the greater number of
coefficients:

yn+1 = yn + h·Σᵢ bi ki,  ki = f(tn + ci h, yn + h·Σⱼ aij kj),

where the inner sum over j now runs over all s stages, not just those with j < i.
Due to the fullness of the matrix aij, the evaluation of each ki is now considerably
involved and dependent on the specific function f(t,y). Despite the difficulties, implicit
methods are of great importance due to their high (possibly unconditional) stability,
which is especially important in the solution of partial differential equations. The
simplest example of an implicit Runge-Kutta method is the backward Euler method:

yn+1 = yn + h·f(tn + h, yn+1).

It can be difficult to make sense of even this simple implicit method, as seen from the
expression for k1:

k1 = f(tn + h, yn + h·k1).

In this case, the awkward expression above can be simplified by noting that

yn+1 = yn + h·k1,

so that

k1 = f(tn + h, yn+1),

from which

yn+1 = yn + h·f(tn + h, yn+1)
follows. Though simpler than the "raw" representation before manipulation, this is an
implicit relation, so the actual solution is problem dependent. Multistep implicit
methods have been used with success by some researchers. The combination of stability,
higher order accuracy with fewer steps, and stepping that depends only on the previous
value makes them attractive; however the complicated problem-specific implementation
and the fact that ki must often be approximated iteratively means that they are not
common.
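As an illustration of the problem-dependent solve, here is a sketch (names are my own) that resolves the backward Euler relation by simple fixed-point iteration rather than the algebraic manipulation above or the Newton solve a production code would use:

```python
def backward_euler_step(f, t, y, h, sweeps=50):
    # Backward Euler: solve y_new = y + h * f(t + h, y_new) for y_new.
    # Seed with a forward Euler guess, then iterate the implicit relation;
    # this converges when h times the Lipschitz constant of f is below 1.
    y_new = y + h * f(t, y)
    for _ in range(sweeps):
        y_new = y + h * f(t + h, y_new)
    return y_new

# For dy/dt = -y, the implicit relation solves exactly to y / (1 + h),
# so one step from y = 1 with h = 0.1 should approach 1/1.1.
step = backward_euler_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```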