
lecture 22: Newton–Cotes quadrature

3.2 Newton–Cotes quadrature

You encountered the most basic method for approximating an integral when you learned calculus: the Riemann integral is motivated by approximating the area under a curve by the area of rectangles that touch that curve, which gives a rough estimate that becomes increasingly accurate as the width of those rectangles shrinks. This amounts to approximating the function f by a piecewise constant interpolant, and then computing the exact integral of the interpolant. When only one rectangle is used to approximate the entire integral, we have the simplest Newton–Cotes formula; see Figure 3.1.


Newton–Cotes formulas are interpolatory quadrature rules where the quadrature nodes x₀, . . . , xₙ are uniformly spaced over [a, b],

$$x_j = a + j\left(\frac{b-a}{n}\right), \qquad j = 0, \ldots, n.$$
4

Given the lessons we learned about polynomial interpolation at uniformly spaced points in Section 1.6, you should rightly be suspicious of applying this idea with large n (i.e., high-degree interpolants). A more reliable way to increase accuracy follows the lead of basic Riemann sums: partition [a, b] into smaller subintervals, and use low-degree interpolants to approximate the integral on each of these smaller domains. Such methods are called composite quadrature rules.

Figure 3.1: Estimates of ∫₀¹⁰ f(x) dx, shown in gray: the first approximates f by a constant interpolant; the second, a composite rule, uses a piecewise constant interpolant. You probably have encountered this second approximation as a Riemann sum.

In some cases, the function f may be fairly regular over most of the domain [a, b], but then have some small region of rapid growth or oscillation. Modern adaptive quadrature rules are composite rules in which the subintervals of [a, b] vary in size, depending on estimates of how rapidly f is changing in a given part of the domain. Such methods seek to balance the competing goals of highly accurate approximate integrals and as few evaluations of f as possible. We shall not dwell much on these sophisticated quadrature procedures here, but rather start by understanding some methods you were probably introduced to in your first calculus class.

3.2.1 The trapezoid rule

The trapezoid rule is a simple improvement over approximating the integral by the area of a single rectangle. A linear interpolant to f can be constructed, requiring evaluation of f at the interval endpoints x₀ = a and x₁ = b. Using the interpolatory quadrature methodology
described in the last section, we write


$$p_1(x) = f(a)\left(\frac{x-b}{a-b}\right) + f(b)\left(\frac{x-a}{b-a}\right),$$

and compute its integral as

\begin{align*}
\int_a^b p_1(x)\,dx &= \int_a^b \left(f(a)\left(\frac{x-b}{a-b}\right) + f(b)\left(\frac{x-a}{b-a}\right)\right) dx \\
&= f(a)\int_a^b \frac{x-x_1}{x_0-x_1}\,dx + f(b)\int_a^b \frac{x-x_0}{x_1-x_0}\,dx \\
&= f(a)\left(\frac{b-a}{2}\right) + f(b)\left(\frac{b-a}{2}\right).
\end{align*}

In summary,

Trapezoid rule:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{2}\Bigl(f(a) + f(b)\Bigr).$$

The procedure behind the trapezoid rule is illustrated in Figure 3.2, where the area approximating the integral is colored gray.

Figure 3.2: Trapezoid rule estimate of ∫₀¹⁰ f(x) dx, shown in gray; the trapezoid rule gives 63.5714198..., while the exact value is 73.4543644....
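For concreteness, here is a minimal Python sketch of the rule; the integrand e^x on [0, 1] is a stand-in chosen for this example (it is not the function plotted in Figure 3.2), and the exact value e − 1 comes from the antiderivative.

```python
import math

def trapezoid(f, a, b):
    """Single-interval trapezoid rule: (b - a)/2 * (f(a) + f(b))."""
    return 0.5 * (b - a) * (f(a) + f(b))

# Stand-in example: integrate exp(x) over [0, 1]; the exact value is e - 1.
approx = trapezoid(math.exp, 0.0, 1.0)
exact = math.e - 1.0
print(approx, exact, abs(approx - exact))  # roughly 1.859, 1.718, error about 0.14
```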

To derive an error bound for the trapezoid rule, simply integrate the fundamental interpolation error formula in Theorem 1.3. That gave, for each x ∈ [a, b], some ξ ∈ [a, b] such that

$$f(x) - p_1(x) = \tfrac{1}{2} f''(\xi)(x-a)(x-b).$$

Note that ξ will vary with x, which we emphasize by writing ξ(x).


Integrate this formula to obtain

\begin{align*}
\int_a^b f(x)\,dx - \int_a^b p_1(x)\,dx &= \int_a^b \tfrac{1}{2} f''(\xi(x))(x-a)(x-b)\,dx \\
&= \tfrac{1}{2} f''(\eta) \int_a^b (x-a)(x-b)\,dx \\
&= \tfrac{1}{2} f''(\eta)\left(\tfrac{1}{6}a^3 - \tfrac{1}{2}a^2 b + \tfrac{1}{2}a b^2 - \tfrac{1}{6}b^3\right) \\
&= -\tfrac{1}{12} f''(\eta)(b-a)^3
\end{align*}

for some η ∈ [a, b]. The second step follows from the mean value theorem for integrals.

The mean value theorem for integrals states that if h, g ∈ C[a, b] and h does not change sign on [a, b], then there exists some η ∈ [a, b] such that ∫_a^b g(t)h(t) dt = g(η) ∫_a^b h(t) dt. The requirement that h not change sign is essential. For example, if g(t) = h(t) = t, then ∫_{-1}^{1} g(t)h(t) dt = ∫_{-1}^{1} t² dt = 2/3, yet ∫_{-1}^{1} h(t) dt = ∫_{-1}^{1} t dt = 0, so for all η ∈ [-1, 1], g(η) ∫_{-1}^{1} h(t) dt = 0 ≠ ∫_{-1}^{1} g(t)h(t) dt = 2/3.

In a forthcoming lecture we shall develop a much more general theory, based on the Peano kernel, from which we can derive this error bound, plus bounds for more complicated schemes, too. For now, we summarize the bound in the following theorem.

Theorem 3.2. Let f ∈ C²[a, b]. The error in the trapezoid rule is

$$\int_a^b f(x)\,dx - \frac{b-a}{2}\Bigl(f(a) + f(b)\Bigr) = -\frac{1}{12} f''(\eta)(b-a)^3$$

for some η ∈ [a, b].

This bound has an interesting feature: if we are integrating over a small interval, b − a = h ≪ 1, then the error in the trapezoid rule approximation is O(h³) as h → 0, while the error in the linear interpolant upon which this quadrature rule is based is only O(h²) (from Theorem 1.3).

Example 3.1 (f(x) = e^x(cos x + sin x)). Here we demonstrate the difference between the error for linear interpolation of a function, f(x) = e^x(cos x + sin x), between two points, x₀ = 0 and x₁ = h, and the trapezoid rule applied to the same interval. The theory reveals that linear interpolation will have an O(h²) error as h → 0, while the trapezoid rule has O(h³) error, as confirmed in Figure 3.3.

Figure 3.3: Error of linear interpolation and trapezoid rule approximation for f(x) = e^x(cos x + sin x) for x ∈ [0, h] as h → 0. The interpolation error decays like O(h²) and the trapezoid error like O(h³).
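A short script along these lines reproduces the experiment behind Figure 3.3; this is a sketch, with the particular h values and the fine sampling grid chosen here for illustration. It uses the fact that e^x sin x is an antiderivative of f, so the exact integral over [0, h] is available in closed form.

```python
import numpy as np

f = lambda x: np.exp(x) * (np.cos(x) + np.sin(x))
F = lambda x: np.exp(x) * np.sin(x)   # antiderivative of f

for h in [1.0, 0.1, 0.01, 0.001]:
    a, b = 0.0, h
    exact = F(b) - F(a)
    trap = 0.5 * (b - a) * (f(a) + f(b))
    # Linear interpolation error, maximized over a fine sample of [0, h].
    xs = np.linspace(a, b, 2001)
    p1 = f(a) + (f(b) - f(a)) * (xs - a) / (b - a)
    interp_err = np.max(np.abs(f(xs) - p1))
    print(f"h = {h:6.3f}   interp error = {interp_err:.2e}   trapezoid error = {abs(trap - exact):.2e}")
```

Each tenfold reduction in h should reduce the interpolation error by roughly a factor of 10² and the trapezoid error by roughly 10³, matching the O(h²) and O(h³) rates.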

3.2.2 Simpson’s rule


To improve the accuracy of the trapezoid rule, increase the degree of the interpolating polynomial. This will increase the number of evaluations of f (often very costly), but hopefully will significantly decrease the error. Indeed it does, by an even greater margin than we might expect.

Simpson's rule integrates the quadratic interpolant p₂ ∈ P₂ to f at the uniformly spaced points

$$x_0 = a, \qquad x_1 = (a+b)/2, \qquad x_2 = b.$$

Using the interpolatory quadrature formulation of the last section,


$$\int_a^b p_2(x)\,dx = w_0 f(a) + w_1 f(\tfrac{1}{2}(a+b)) + w_2 f(b),$$

where

\begin{align*}
w_0 &= \int_a^b \left(\frac{x-x_1}{x_0-x_1}\right)\left(\frac{x-x_2}{x_0-x_2}\right) dx = \frac{b-a}{6}, \\
w_1 &= \int_a^b \left(\frac{x-x_0}{x_1-x_0}\right)\left(\frac{x-x_2}{x_1-x_2}\right) dx = \frac{2(b-a)}{3}, \\
w_2 &= \int_a^b \left(\frac{x-x_0}{x_2-x_0}\right)\left(\frac{x-x_1}{x_2-x_1}\right) dx = \frac{b-a}{6}.
\end{align*}
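These three integrals can be checked symbolically. Here is a quick sketch using sympy (a tool assumed for this example, not one used in the notes):

```python
import sympy as sp

a, b, x = sp.symbols('a b x', real=True)
x0, x1, x2 = a, (a + b) / 2, b

# Integrate each quadratic Lagrange basis polynomial over [a, b].
w0 = sp.integrate((x - x1) / (x0 - x1) * (x - x2) / (x0 - x2), (x, a, b))
w1 = sp.integrate((x - x0) / (x1 - x0) * (x - x2) / (x1 - x2), (x, a, b))
w2 = sp.integrate((x - x0) / (x2 - x0) * (x - x1) / (x2 - x1), (x, a, b))

# Expect (b - a)/6, 2(b - a)/3, and (b - a)/6 (possibly printed in expanded form).
print(sp.simplify(w0), sp.simplify(w1), sp.simplify(w2))
```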

In summary:

Simpson's rule:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\Bigl(f(a) + 4 f(\tfrac{1}{2}(a+b)) + f(b)\Bigr).$$

Figure 3.4: Simpson's rule estimate of ∫₀¹⁰ f(x) dx, shown in gray; Simpson's rule gives 76.9618331..., while the exact value is 73.4543644....

Simpson's rule enjoys a remarkable feature: though it only approximates f by a quadratic, it integrates any cubic polynomial exactly! One can verify this by directly applying Simpson's rule to a generic cubic polynomial. Write f(x) = αx³ + q(x), where q ∈ P₂. Let I(f) = ∫_a^b f(x) dx and let I₂(f) denote the Simpson's rule approximation. Then, by linearity of the integral,

$$I(f) = \alpha I(x^3) + I(q)$$

and, by linearity of Simpson's rule,

$$I_2(f) = \alpha I_2(x^3) + I_2(q).$$

Since Simpson's rule is an interpolatory quadrature rule based on quadratic polynomials, its degree of exactness must be at least 2 (Theorem 3.1), i.e., it exactly integrates q: I₂(q) = I(q). Thus

$$I(f) - I_2(f) = \alpha\bigl(I(x^3) - I_2(x^3)\bigr).$$
So Simpson's rule will be exact for all cubics if it is exact for x³. A simple computation gives

\begin{align*}
I_2(x^3) &= \frac{b-a}{6}\left(a^3 + 4\left(\frac{a+b}{2}\right)^3 + b^3\right) \\
&= \frac{b-a}{12}\left(3a^3 + 3a^2 b + 3a b^2 + 3b^3\right) = \frac{b^4 - a^4}{4} = I(x^3),
\end{align*}

confirming that Simpson's rule is exact for x³, and hence for all cubics. (In fact, Newton–Cotes formulas based on approximating f by an even-degree polynomial always exactly integrate polynomials one degree higher.) For now we simply state an error bound for Simpson's rule, which we will prove in a future lecture.
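A numerical spot check of this exactness; the particular cubic and endpoints below are arbitrary choices for the sketch.

```python
def simpson(f, a, b):
    """Simpson's rule on a single interval [a, b]."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

# An arbitrary cubic and its exact integral via an antiderivative.
f = lambda x: 2.0 * x**3 - 5.0 * x**2 + x - 7.0
F = lambda x: 0.5 * x**4 - (5.0 / 3.0) * x**3 + 0.5 * x**2 - 7.0 * x

a, b = -1.0, 1.5
print(simpson(f, a, b), F(b) - F(a))  # the two values agree to rounding error
```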

Theorem 3.3. Let f ∈ C⁴[a, b]. The error in Simpson's rule is

$$\int_a^b f(x)\,dx - \frac{b-a}{6}\Bigl(f(a) + 4 f((a+b)/2) + f(b)\Bigr) = -\frac{1}{90} f^{(4)}(\eta)\left(\frac{b-a}{2}\right)^5$$

for some η ∈ [a, b].

This error formula captures the fact that Simpson's rule is exact for cubics, since it features the fourth derivative f^(4)(η), two derivatives greater than the f''(η) in the trapezoid rule bound, even though the degree of the interpolant has only increased by one. Perhaps it is helpful to visualize the exactness of Simpson's rule for cubics. Figure 3.5 shows f(x) = x³ (blue) and its quadratic interpolant (red). On the left, the area under f is colored gray: its area is the integral we seek. On the right, the area under the interpolant is colored gray. Counting area below the x axis as negative, both integrals give an identical value even though the functions are quite different. It is remarkable that this is the case for all cubics.
Typically one does not see Newton–Cotes rules based on polynomials of degree higher than two (i.e., Simpson's rule). (Integrating the cubic interpolant at four uniformly spaced points is called Simpson's three-eighths rule.) Because it can be fun to see numerical mayhem, we give an example to emphasize why high-degree Newton–Cotes rules can be a bad idea. Recall that Runge's function f(x) = 1/(1 + x²) gave a nice example for which the polynomial interpolant at uniformly spaced points over [−5, 5] fails to converge uniformly to f. This fact suggests that Newton–Cotes quadrature will also fail to converge as the degree of the interpolant grows. The exact value of the integral we seek is

$$\int_{-5}^{5} \frac{1}{1+x^2}\,dx = 2\tan^{-1}(5) = 2.74680153\ldots.$$

Just as the interpolant at uniformly spaced points diverges, so too does the Newton–Cotes integral. Figure 3.6 illustrates this divergence, and shows that integrating the interpolant at Chebyshev points, called Clenshaw–Curtis quadrature, does indeed converge.

Figure 3.5: Simpson's rule applied to f(x) = x³ on x ∈ [−1, 3/2]. The areas under f(x) (blue) and its quadratic interpolant (red) are the same, even though the functions are quite different.


Section 3.4 describes this latter quadrature in more detail. Before
discussing it, we describe a way to make Newton–Cotes rules more
robust: integrate low-degree polynomials over subintervals of [ a, b].

Figure 3.6: Integrating interpolants p_n at n + 1 uniformly spaced points (red) and at Chebyshev points (blue) for Runge's function, f(x) = 1/(1 + x²), over x ∈ [−5, 5]; the vertical axis shows the error |∫_{−5}^{5} f(x) dx − ∫_{−5}^{5} p_n(x) dx|.
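One way to reproduce the uniformly spaced curve in Figure 3.6 is to use scipy's Newton–Cotes weights; this sketch assumes scipy.integrate.newton_cotes is available (it returns weights for equally spaced nodes), and the larger n values already show the accuracy deteriorating rather than improving.

```python
import numpy as np
from scipy.integrate import newton_cotes

f = lambda x: 1.0 / (1.0 + x**2)      # Runge's function
a, b = -5.0, 5.0
exact = 2.0 * np.arctan(5.0)

for n in [2, 4, 8, 16, 24]:
    weights, _ = newton_cotes(n, equal=1)      # weights for n + 1 equally spaced nodes
    x = np.linspace(a, b, n + 1)
    dx = (b - a) / n
    approx = dx * np.dot(weights, f(x))
    print(f"n = {n:2d}   approx = {approx: .6f}   error = {abs(approx - exact):.2e}")
```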

3.2.3 Composite rules

As an alternative to integrating a high-degree polynomial, one can pursue a simpler approach that is often very effective: break the interval [a, b] into subintervals, then apply a standard Newton–Cotes rule (e.g., trapezoid or Simpson) on each subinterval. Applying the trapezoid rule on n subintervals gives


Z b n Z xj n ⇣ ⌘
(xj xj 1)
a
f ( x ) dx = Â f ( x ) dx ⇡ Â 2
f (xj 1 + f ( x j .
)
j =1 x j 1 j =1

The standard implementation assumes that f is evaluated at uniformly spaced points between a and b, x_j = a + jh for j = 0, . . . , n and h = (b − a)/n, giving the following famous formulation:

Composite Trapezoid rule:

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\Bigl(f(a) + 2\sum_{j=1}^{n-1} f(a + jh) + f(b)\Bigr).$$
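In code, the composite trapezoid rule is only a few lines. This sketch follows the formula above, with Runge's function from the previous example used as a test integrand.

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule with n uniform subintervals of width h = (b - a)/n."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    fx = f(x)
    return h * (0.5 * fx[0] + np.sum(fx[1:-1]) + 0.5 * fx[-1])

f = lambda x: 1.0 / (1.0 + x**2)
print(composite_trapezoid(f, -5.0, 5.0, 100), 2.0 * np.arctan(5.0))  # approximation vs. exact
```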

(Of course, one can readily adjust this rule by partitioning [a, b] into subintervals of different sizes.) The error in the composite trapezoid rule can be derived by summing up the error in each application of the trapezoid rule:

\begin{align*}
\int_a^b f(x)\,dx - \frac{h}{2}\Bigl(f(a) + 2\sum_{j=1}^{n-1} f(a + jh) + f(b)\Bigr) &= \sum_{j=1}^{n} -\tfrac{1}{12} f''(\eta_j)(x_j - x_{j-1})^3 \\
&= -\frac{h^3}{12} \sum_{j=1}^{n} f''(\eta_j)
\end{align*}

for η_j ∈ [x_{j−1}, x_j]. We can simplify these f'' terms by noting that (1/n) Σ_{j=1}^n f''(η_j) is the average of n values of f'' evaluated at points in the interval [a, b]. Naturally, this average can neither exceed the maximum nor fall below the minimum value that f'' assumes on [a, b], so there exist points ξ₁, ξ₂ ∈ [a, b] such that

$$f''(\xi_1) \le \frac{1}{n}\sum_{j=1}^{n} f''(\eta_j) \le f''(\xi_2).$$

Thus the intermediate value theorem guarantees the existence of some η ∈ [a, b] such that

$$f''(\eta) = \frac{1}{n}\sum_{j=1}^{n} f''(\eta_j).$$

We arrive at a bound on the error in the composite trapezoid rule.

Theorem 3.4. Let f ∈ C²[a, b]. The error in the composite trapezoid rule over n intervals of uniform width h = (b − a)/n is

$$\int_a^b f(x)\,dx - \frac{h}{2}\Bigl(f(a) + 2\sum_{j=1}^{n-1} f(a + jh) + f(b)\Bigr) = -\frac{h^2}{12}(b-a) f''(\eta)$$

for some η ∈ [a, b].



This error analysis has an important consequence: the error for the composite trapezoid rule is only O(h²), not the O(h³) we saw for the usual trapezoid rule (in which case b − a = h since n = 1).

A similar construction leads to the composite Simpson's rule. We now must ensure that n is even, since each interval on which we apply the standard Simpson's rule has width 2h. Simple algebra leads to the following formula.

Composite Simpson's rule:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Bigl(f(a) + 4\sum_{j=1}^{n/2} f(a + (2j-1)h) + 2\sum_{j=1}^{n/2-1} f(a + 2jh) + f(b)\Bigr).$$
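A matching sketch of the composite Simpson's rule; it enforces the requirement that n be even and uses the same test integrand as before.

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n uniform subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    fx = f(x)
    return (h / 3.0) * (fx[0] + 4.0 * np.sum(fx[1:-1:2]) + 2.0 * np.sum(fx[2:-1:2]) + fx[-1])

f = lambda x: 1.0 / (1.0 + x**2)
print(composite_simpson(f, -5.0, 5.0, 100), 2.0 * np.arctan(5.0))  # approximation vs. exact
```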

Now use Theorem 3.3 to derive an error formula for the composite
Simpson’s rule, using the same approach as for the composite trape-
zoid rule.

Theorem 3.5. Let f ∈ C⁴[a, b]. The error in the composite Simpson's rule over n/2 intervals of uniform width 2h = 2(b − a)/n is

$$\int_a^b f(x)\,dx - \frac{h}{3}\Bigl(f(a) + 4\sum_{j=1}^{n/2} f(a + (2j-1)h) + 2\sum_{j=1}^{n/2-1} f(a + 2jh) + f(b)\Bigr) = -\frac{h^4}{180}(b-a) f^{(4)}(\eta)$$

for some η ∈ [a, b].

The illustrations in Figure 3.7 compare the composite trapezoid and Simpson's rules for the same number of function evaluations. One can see that Simpson's rule, in this typical case, gives considerably better accuracy.

Reflect for a moment. Suppose you are willing to evaluate f a fixed number of times. How can you get the most bang for your buck? If f is smooth, a rule based on a high-order interpolant (such as the Clenshaw–Curtis and Gaussian quadrature rules we will present in a few lectures) is likely to give the best result. If f is not smooth (e.g., with kinks, discontinuous derivatives, etc.), then a robust composite rule would be a good option. (A famous special case: if the function f is sufficiently smooth and is periodic with period b − a, then the trapezoid rule converges exponentially.)
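The periodic special case is easy to observe numerically. In this sketch the smooth, 2π-periodic integrand e^{cos x} is a choice made for the example, and the reference value is simply a very fine composite trapezoid approximation rather than a tabulated exact value.

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    fx = f(x)
    return h * (0.5 * fx[0] + np.sum(fx[1:-1]) + 0.5 * fx[-1])

f = lambda x: np.exp(np.cos(x))                    # smooth and periodic with period 2*pi
a, b = 0.0, 2.0 * np.pi
reference = composite_trapezoid(f, a, b, 10000)    # fine-grid reference value

for n in [2, 4, 8, 16, 32]:
    err = abs(composite_trapezoid(f, a, b, n) - reference)
    print(f"n = {n:2d}   error = {err:.2e}")       # errors fall off far faster than O(h^2)
```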

Figure 3.7: Composite trapezoid rule (left) and composite Simpson's rule (right). The composite trapezoid rule gives 73.2181469..., the composite Simpson's rule gives 73.4610862..., and the exact value is 73.4543644....

3.2.4 Adaptive Quadrature

If f is continuous, we can attain arbitrarily high accuracy with composite rules by taking the spacing between function evaluations, h, to be sufficiently small. This might be necessary to resolve regions of rapid growth or oscillation in f. If such regions only make up a small proportion of the domain [a, b], then uniformly reducing h over the entire interval will be unnecessarily expensive. One wants to concentrate function evaluations in the region where the function is the most ornery. Robust quadrature software adjusts the value of h locally to handle such regions. To learn more about such techniques, which are not foolproof, see W. Gander and W. Gautschi, "Adaptive quadrature—revisited," BIT 40 (2000) 84–101.

This paper criticizes the routines quad and quad8 that were included in MATLAB version 5. In light of this analysis MATLAB improved its software, essentially incorporating the two routines suggested in this paper starting in version 6 as the routines quad (adaptive Simpson's rule) and quadl (an adaptive Gauss–Lobatto rule).
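As an illustration of the idea, here is a minimal sketch of the classical recursive adaptive Simpson strategy; this is a generic textbook scheme, not the specific algorithms of Gander and Gautschi or of MATLAB's quad and quadl, and the spiky integrand is an arbitrary choice for the example. The rule on [a, b] is compared with the sum of the rules on the two halves, and the interval is subdivided only where the two disagree.

```python
import math

def _simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson's rule: refine only where the local error estimate is large."""
    c = 0.5 * (a + b)
    whole = _simpson(f, a, b)
    halves = _simpson(f, a, c) + _simpson(f, c, b)
    # |halves - whole| / 15 is the standard estimate of the error in the refined value.
    if abs(halves - whole) < 15.0 * tol:
        return halves + (halves - whole) / 15.0
    return adaptive_simpson(f, a, c, tol / 2.0) + adaptive_simpson(f, c, b, tol / 2.0)

# A sharp spike near x = 0.5 forces subdivisions there and almost nowhere else.
f = lambda x: math.exp(-1000.0 * (x - 0.5) ** 2) + x
print(adaptive_simpson(f, 0.0, 1.0))
```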
