Numerical Analysis: Lecture-7

1. The document discusses error analysis of iterative methods for finding roots of functions.
2. It defines the order of convergence for iterative sequences, noting that a higher order indicates faster convergence.
3. Newton's method is shown to have quadratic convergence, making it faster than linearly convergent methods. The document shows how Newton's method can be derived by optimizing the convergence of fixed point iterations.



Numerical Analysis
Lecture-7

Dr. Ilyas Fakhir

CS Dept. GCU, Lahore

February 28, 2020


Error Analysis

In the previous section we discussed four different algorithms for finding a root of $f(x) = 0$.
We made some (sometimes vague) arguments for why one method should be faster than another.
Now we look at the error analysis of iterative methods and quantify the speed of our methods.


Order of Convergence
Definition
Suppose the sequence $\{p_n\}_{n=0}^{\infty}$ converges to $p$, with $p_n \neq p$ for all $n$. If positive constants $\lambda$ and $\alpha$ exist with
\[
\lim_{n \to \infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \lambda,
\]
then $\{p_n\}_{n=0}^{\infty}$ converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.

An iterative technique of the form $p_n = g(p_{n-1})$ is said to be of order $\alpha$ if the sequence $\{p_n\}_{n=0}^{\infty}$ converges to the solution $p = g(p)$ of order $\alpha$.

Bottom line: a higher order $\alpha$ means faster convergence (more desirable).
$\lambda$ has an effect, but it is less important than the order.
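As a quick illustration (not part of the original slides), $\alpha$ and $\lambda$ can be estimated numerically from successive errors when the limit is known; the Newton iteration for $x^2 - 2$ below is a hypothetical test case and should show $\alpha \approx 2$.

```python
import math

def estimate_order(iterates, p_star):
    """Estimate alpha and lambda from iterates converging to a known limit p_star,
    using alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e = [abs(x - p_star) for x in iterates]
    for n in range(1, len(e) - 1):
        alpha = math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
        lam = e[n + 1] / e[n] ** alpha
        print(f"n={n}  alpha ~ {alpha:.3f}  lambda ~ {lam:.3f}")

# Hypothetical test: Newton's method for f(x) = x^2 - 2, root sqrt(2).
p, iterates = 1.0, []
for _ in range(5):
    iterates.append(p)
    p = p - (p * p - 2.0) / (2.0 * p)
estimate_order(iterates, math.sqrt(2.0))
```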

Special Cases: α = 1 and α = 2

When $\alpha = 1$ the sequence is linearly convergent.
When $\alpha = 2$ the sequence is quadratically convergent.
When $\alpha < 1$ the sequence is sub-linearly convergent (very undesirable, or “painfully slow”).
When ($\alpha = 1$ and $\lambda = 0$) or $1 < \alpha < 2$, the sequence is super-linearly convergent.


Linear vs. Quadratic

Suppose we have two sequences converging to zero:

\[
\lim_{n \to \infty} \frac{|p_{n+1}|}{|p_n|} = \lambda_p, \qquad
\lim_{n \to \infty} \frac{|q_{n+1}|}{|q_n|^2} = \lambda_q
\]

Roughly this means that

\[
|p_n| \approx \lambda_p |p_{n-1}| \approx \lambda_p^n |p_0|, \qquad
|q_n| \approx \lambda_q |q_{n-1}|^2 \approx \lambda_q^{2^n - 1} |q_0|^{2^n}
\]


Example

Now assume $\lambda_p = \lambda_q = 0.9$ and $p_0 = q_0 = 1$. The resulting table (Linear vs. Quadratic; reproduced by the sketch below) shows a dramatic difference: after 8 iterations $q_n$ has 11 correct decimals, while $p_n$ still has none. $q_n$ roughly doubles the number of correct digits in every iteration, whereas $p_n$ needs more than 20 iterations per digit of correction.
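A few lines of Python (not in the original slides) regenerate the comparison from the recurrences above:

```python
# Minimal sketch: linear error |p_n| ~ 0.9^n versus quadratic error
# |q_n| ~ 0.9^(2^n - 1), both starting from 1.
lam = 0.9
p = q = 1.0
print(f"{'n':>2} {'|p_n| (linear)':>16} {'|q_n| (quadratic)':>18}")
for n in range(9):
    print(f"{n:>2} {p:16.12f} {q:18.12f}")
    p = lam * p        # error multiplied by lambda each step
    q = lam * q * q    # error roughly squared (and scaled by lambda) each step
```

After 8 steps this prints $|p_8| = 0.9^8 \approx 0.43$ and $|q_8| = 0.9^{255} \approx 2 \times 10^{-12}$, matching the claim above.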

Convergence of General Fixed Point Iteration

Theorem
Let $g \in C[a, b]$ be such that $g(x) \in [a, b]$ for all $x \in [a, b]$. Suppose, in addition, that $g'(x)$ is continuous on $(a, b)$ and there is a positive constant $k < 1$ such that
\[
|g'(x)| \le k, \qquad \forall x \in (a, b).
\]
If $g'(p^*) \neq 0$, then for any number $p_0$ in $[a, b]$, the sequence
\[
p_n = g(p_{n-1}), \qquad n \ge 1,
\]
converges only linearly to the unique fixed point $p^*$ in $[a, b]$.

In a sense, this is bad news, since we like fast convergence...


Convergence of General Fixed Point Iteration (Contd...)

The existence and uniqueness of the fixed point follows from the fixed point theorem.
We use the mean value theorem to write
\[
p_{n+1} - p^* = g(p_n) - g(p^*) = g'(\xi_n)(p_n - p^*), \qquad \xi_n \text{ between } p_n \text{ and } p^*.
\]
Since $p_n \to p^*$ and $\xi_n$ is between $p_n$ and $p^*$, we must also have $\xi_n \to p^*$.
Further, since $g'(\cdot)$ is continuous, we have
\[
\lim_{n \to \infty} g'(\xi_n) = g'(p^*).
\]
Thus,
\[
\lim_{n \to \infty} \frac{|p_{n+1} - p^*|}{|p_n - p^*|} = \lim_{n \to \infty} |g'(\xi_n)| = |g'(p^*)|.
\]
So if $g'(p^*) \neq 0$, the fixed point iteration converges linearly with asymptotic error constant $|g'(p^*)|$.
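To see this behavior numerically, here is a minimal sketch (not from the slides) using the hypothetical example $g(x) = \cos x$, whose fixed point $p^* \approx 0.7391$ has $|g'(p^*)| = \sin p^* \approx 0.6736$; the error ratio should settle near that value, confirming merely linear convergence.

```python
import math

p_star = 0.7390851332151607   # fixed point of cos(x), precomputed
p = 1.0
for n in range(10):
    p_next = math.cos(p)
    # ratio |p_{n+1} - p*| / |p_n - p*| should approach |g'(p*)| ~ 0.6736
    ratio = abs(p_next - p_star) / abs(p - p_star)
    print(f"n={n:2d}  p={p:.10f}  ratio ~ {ratio:.4f}")
    p = p_next
```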

Speeding up Convergence of Fixed Point Iteration

Bottom Line: The theorem tells us that if we are looking to design rapidly converging fixed point schemes, we must design them so that $g'(p^*) = 0$...
We state the following without proof:

Theorem
Let $p^*$ be a solution of $p = g(p)$. Suppose $g'(p^*) = 0$, and $g''(x)$ is continuous and strictly bounded by $M$ on an open interval $I$ containing $p^*$. Then there exists a $\delta > 0$ such that, for $p_0 \in [p^* - \delta, p^* + \delta]$, the sequence defined by $p_n = g(p_{n-1})$ converges at least quadratically to $p^*$. Moreover, for sufficiently large $n$,
\[
|p_{n+1} - p^*| < \frac{M}{2} |p_n - p^*|^2.
\]


Practical Application of the Theorem

“Look for quadratically convergent fixed point methods among functions whose derivative is zero at the fixed point.”

We want to solve $f(x) = 0$ using fixed point iteration. We write the problem as an equivalent fixed point problem:

$g(x) = x - f(x)$, solve $x = g(x)$
$g(x) = x - \alpha f(x)$, solve $x = g(x)$, $\alpha$ a constant
$g(x) = x - \Phi(x) f(x)$, solve $x = g(x)$, $\Phi(x)$ differentiable

We use the most general form (the last one).
Remember, we want $g'(p^*) = 0$ when $f(p^*) = 0$.


Practical Application of the Theorem... Newton's Method Rediscovered

\[
g'(x) = \frac{d}{dx}\bigl[x - \Phi(x) f(x)\bigr] = 1 - \Phi'(x) f(x) - \Phi(x) f'(x)
\]
At $x = p^*$ we have $f(p^*) = 0$, so
\[
g'(p^*) = 1 - \Phi(p^*) f'(p^*).
\]
For quadratic convergence we want this to be zero; that's true if
\[
\Phi(p^*) = \frac{1}{f'(p^*)}.
\]
Hence, our scheme is
\[
g(x) = x - \frac{f(x)}{f'(x)},
\]
Newton's Method, rediscovered!
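A minimal sketch (not from the slides) of the resulting iteration $x_{n+1} = x_n - f(x_n)/f'(x_n)$, applied to a hypothetical test function with a simple root:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method as the fixed point iteration x = g(x) = x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)    # assumes f'(x) != 0 near the root (simple zero)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical example: f(x) = x^3 - 2x - 5, simple root near 2.0946.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(root)   # ~2.0945514815423265
```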


Newton’s Method

We have “discovered” Newton's method in two scenarios:
From a formal Taylor expansion.
From convergence optimization of fixed point iteration.
It is clear that we would like to use Newton's method in many settings. One major problem is that it breaks when $f'(p^*) = 0$ (division by zero).
The good news is that this problem can be fixed!
– We need a short discussion on the multiplicity of zeroes.


Multiplicity of Zeroes

Definition: Multiplicity of a Root
A solution $p^*$ of $f(x) = 0$ is a zero of multiplicity $m$ of $f$ if for $x \neq p^*$ we can write
\[
f(x) = (x - p^*)^m q(x), \qquad \lim_{x \to p^*} q(x) \neq 0.
\]
Basically, $q(x)$ is the part of $f(x)$ which does not contribute to the zero of $f(x)$.
If $m = 1$ then we say that $f(x)$ has a simple zero.

Theorem
$f \in C^1[a, b]$ has a simple zero at $p^*$ in $(a, b)$ if and only if $f(p^*) = 0$ but $f'(p^*) \neq 0$.


Multiplicity of Zeroes

Theorem (Multiplicity & Derivatives)
The function $f \in C^m[a, b]$ has a zero of multiplicity $m$ at $p^*$ in $(a, b)$ if and only if
\[
0 = f(p^*) = f'(p^*) = \cdots = f^{(m-1)}(p^*), \quad \text{but} \quad f^{(m)}(p^*) \neq 0.
\]
We know that Newton's method runs into trouble when we have a zero of multiplicity higher than 1.
Simple root: $f$ satisfies $f(p^*) = 0$, but $f'(p^*) \neq 0$.


Newton’s Method for Zeroes of Higher Multiplicity

Suppose $f(x)$ has a zero of multiplicity $m > 1$ at $p^*$...
Define the new function
\[
\mu(x) = \frac{f(x)}{f'(x)}.
\]
We can write $f(x) = (x - p^*)^m q(x)$, hence
\[
\mu(x) = \frac{(x - p^*)^m q(x)}{m(x - p^*)^{m-1} q(x) + (x - p^*)^m q'(x)}
       = (x - p^*) \frac{q(x)}{m q(x) + (x - p^*) q'(x)}.
\]
This expression has a simple zero at $p^*$, since
\[
\frac{q(p^*)}{m q(p^*) + (p^* - p^*) q'(p^*)} = \frac{1}{m} \neq 0.
\]
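As a numerical sanity check (not in the original slides), take the hypothetical example $f(x) = (x-1)^3 (x+2)$, which has a zero of multiplicity $m = 3$ at $p^* = 1$; then $\mu(x)/(x - 1)$ should approach $1/m = 1/3$ as $x \to 1$.

```python
# f has a triple zero at x = 1, so mu(x) = f(x)/f'(x) ~ (x - 1)/3 near it.
f  = lambda x: (x - 1.0)**3 * (x + 2.0)
df = lambda x: 3.0 * (x - 1.0)**2 * (x + 2.0) + (x - 1.0)**3
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    mu = f(1.0 + h) / df(1.0 + h)
    print(f"h={h:.0e}  mu/(x - 1) = {mu / h:.6f}")   # tends to 1/3
```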


Newton’s Method for Zeroes of Higher Multiplicity

Now we apply Newton's method to $\mu(x)$:
\[
x = g(x) = x - \frac{\mu(x)}{\mu'(x)}
  = x - \frac{f(x)/f'(x)}{\dfrac{[f'(x)]^2 - f(x) f''(x)}{[f'(x)]^2}}
  = x - \frac{f(x) f'(x)}{[f'(x)]^2 - f(x) f''(x)}
\]
This iteration will converge quadratically!


Newton’s Method for Zeroes of Higher Multiplicity

Strategy: “Fixed Newton” (for Zeroes of Multiplicity ≥ 2)
\[
x_{n+1} = x_n - \frac{f(x_n) f'(x_n)}{[f'(x_n)]^2 - f(x_n) f''(x_n)}
\]

Drawbacks:
We have to compute $f''(x)$ – more expensive, and possibly another source of numerical and/or measurement errors.
We have to compute a more complicated expression in each iteration – more expensive.
Roundoff errors in the denominator – both $f'(x)$ and $f(x)$ approach zero, so we are computing the difference between two small numbers; a serious cancellation risk.

Example: $f(x) = e^x - x - 1$, with $f(0) = f'(0) = 0$ and $f''(0) = 1$, so $p^* = 0$ is a zero of multiplicity 2.
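A minimal sketch (not part of the slides) comparing standard Newton with the “fixed” Newton iteration on this example; standard Newton creeps in linearly (the error roughly halves each step), while the modified iteration reaches roundoff level within a few steps.

```python
import math

f   = lambda x: math.exp(x) - x - 1.0   # double zero at x = 0
df  = lambda x: math.exp(x) - 1.0
d2f = lambda x: math.exp(x)

x_std = x_mod = 1.0
print(f"{'n':>2} {'standard':>12} {'modified':>12}")
for n in range(1, 9):
    x_std = x_std - f(x_std) / df(x_std)   # only linear for a double root
    x_mod = x_mod - f(x_mod) * df(x_mod) / (df(x_mod)**2 - f(x_mod) * d2f(x_mod))
    print(f"{n:>2} {x_std:12.2e} {x_mod:12.2e}")
```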
