
Numerical Analysis Comprehensive Exam Questions

1. Let f(x) = (x − α)^m g(x), where m ≥ 2 is an integer, g ∈ C²(ℝ), and g(α) ≠ 0.
Write down Newton's method for finding the root α of f(x), and study the order
of convergence of the method. Can you propose a method that converges faster?

Solution: Newton's method is

    x_{n+1} = x_n − f(x_n)/f′(x_n) =: h(x_n).

For the fixed-point iteration x_{n+1} = h(x_n), we can investigate h′(α).
If 0 < |h′(α)| < 1, the iteration converges linearly with the rate of convergence
tending to |h′(α)|; if h′(α) = 0, it converges at least quadratically.
In fact,

    h′(x) = f(x)f″(x)/[f′(x)]² → (m − 1)/m  as x → α,

    ∴ 0 < (m − 1)/m < 1 if m ≥ 2.
So Newton's method converges only linearly, provided x₀ is chosen close enough to α.
To improve the order of convergence, one could use

    x_{n+1} = x_n − m f(x_n)/f′(x_n) =: H(x_n).

It is easy to show that H′(x) → 0 as x → α. Therefore, the new method converges
at least quadratically. ∎
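As a numerical illustration (not part of the original solution), the sketch below compares the two iterations on f(x) = (x − 1)³eˣ, a root of multiplicity m = 3; the test function, starting point, and iteration counts are choices made for this example.

```python
# Illustration: for f(x) = (x - 1)^3 * exp(x) (root alpha = 1, multiplicity
# m = 3), plain Newton converges linearly with ratio (m-1)/m = 2/3, while
# the modified step with factor m converges quadratically.
import math

def f(x):
    return (x - 1.0) ** 3 * math.exp(x)

def fp(x):
    return (3.0 * (x - 1.0) ** 2 + (x - 1.0) ** 3) * math.exp(x)

def iterate(step, x0, n):
    errs, x = [], x0
    for _ in range(n):
        x = step(x)
        errs.append(abs(x - 1.0))
    return errs

newton = iterate(lambda x: x - f(x) / fp(x), 1.5, 25)
modified = iterate(lambda x: x - 3.0 * f(x) / fp(x), 1.5, 5)

# Plain Newton: consecutive error ratio tends to (m-1)/m = 2/3 (linear).
print(f"Newton error ratio ~ {newton[-1] / newton[-2]:.3f} (expect 0.667)")
# Modified Newton: error drops roughly quadratically per step.
print("modified errors:", [f"{e:.1e}" for e in modified[:4]])
```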

2. Let x₀, x₁, …, x_n be distinct real numbers and l_k(x) be the Lagrange basis functions.
Let Ψ_n(x) = ∏_{k=0}^{n} (x − x_k). Prove that
(1) For any polynomial p(x) of degree n + 1,

    p(x) − Σ_{k=0}^{n} p(x_k) l_k(x) = (1/(n+1)!) p^{(n+1)}(x) Ψ_n(x).

(2) If further x₀, …, x_n are Gauss–Legendre points in the interval [−1, 1], then

    ∫_{−1}^{1} l_i(x) l_j(x) dx = 0  for i ≠ j.
Solution: (1) Note that p(x) − Σ_{k=0}^{n} p(x_k) l_k(x) has zeros at x = x₀, …, x_n;
therefore

    p(x) − Σ_{k=0}^{n} p(x_k) l_k(x) = C Ψ_n(x)

for some constant C. Since Σ_{k=0}^{n} p(x_k) l_k(x) is of degree n and p(x) is of
degree n + 1, C must be the leading (degree n + 1) coefficient of p(x), thus
C = (1/(n+1)!) p^{(n+1)}(x), which is a constant since deg p = n + 1. ∎
(2) Since x₀, …, x_n are Gauss–Legendre points, the quadrature rule with weights w_k satisfies

    ∫_{−1}^{1} p(x) dx = Σ_{k=0}^{n} w_k p(x_k)

for any polynomial p of degree ≤ 2n + 1. Since l_i(x)l_j(x) has degree 2n and l_i(x_k) = δ_ik,

    ∫_{−1}^{1} l_i(x) l_j(x) dx = Σ_{k=0}^{n} w_k l_i(x_k) l_j(x_k) = 0,  if i ≠ j. ∎
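The orthogonality in (2) can be checked numerically; the sketch below (an illustration, with the degree n = 4 chosen arbitrarily) builds the Lagrange basis at the Gauss–Legendre nodes and evaluates the integrals with a finer Gauss rule, which is exact for the degree-2n integrands.

```python
# Check: at Gauss-Legendre nodes, the Lagrange basis functions are
# orthogonal in L^2(-1, 1), because l_i * l_j has degree 2n and the
# (n+1)-point Gauss rule is exact up to degree 2n + 1.
import numpy as np

n = 4                                         # polynomial degree; n+1 nodes
nodes, _ = np.polynomial.legendre.leggauss(n + 1)

def lagrange_basis(k, x):
    """Evaluate l_k(x) for the Gauss-Legendre nodes (x may be an array)."""
    others = np.delete(nodes, k)
    return np.prod([(x - xm) / (nodes[k] - xm) for xm in others], axis=0)

# Integrate l_i * l_j with a finer Gauss rule (exact for degree 2n).
xq, wq = np.polynomial.legendre.leggauss(2 * n + 2)
gram = np.array([[np.sum(wq * lagrange_basis(i, xq) * lagrange_basis(j, xq))
                  for j in range(n + 1)] for i in range(n + 1)])

off_diag = gram - np.diag(np.diag(gram))
print("max |off-diagonal entry| =", np.abs(off_diag).max())  # ~ 1e-16
```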

3. For the initial value problem y′(t) = f(t, y), y(0) = y₀, t ≥ 0, consider the θ-method

    y_{n+1} = y_n + h[θf(t_n, y_n) + (1 − θ)f(t_{n+1}, y_{n+1})],

where time has been discretized such that t_n = nh, and y_n is the numerical
approximation of y(t_n).
(a) Is this method consistent? What is the order? Is this method zero-stable? How
does the result differ for different θ?
(b) What is the region of absolute stability? What is the region when θ = 0, 1/2, or 1?
For what θ values is this method A-stable?
(c) Does this method have stiff decay? Show why or why not.

Solution: (a) Consider the local truncation error around t_n:

    y + hy′ + (h²/2!)y″ + ⋯ − y − h[θy′ + (1 − θ)(y′ + hy″ + (h²/2!)y‴ + ⋯)]
      = h²y″(1/2 − (1 − θ)) + h³(−1/3 + θ/2)y‴ + ⋯

The local truncation error is at least 2nd order, i.e. the method is globally at
least first order, so this method is consistent. The method becomes 2nd order
if θ = 1/2; otherwise it is a first order method. The first characteristic polynomial
is ρ(z) = z − 1, whose only root is z = 1, so this method is zero-stable for any θ. ∎
(b) Consider y′ = λy and let z = hλ. Then y_{n+1} = y_n + h[θλy_n + (1 − θ)λy_{n+1}], so

    y_{n+1} = [(1 + θz)/(1 − (1 − θ)z)] y_n,

and |R(z)| := |(1 + θz)/(1 − (1 − θ)z)| ≤ 1 gives the region of absolute stability.

When θ = 0, the region is the outside of the unit circle centered at z = 1 in the complex
plane; when θ = 1/2, it is the left half of the complex plane; and when θ = 1, it is the
inside of the unit circle centered at z = −1. Therefore, this method is A-stable for
0 ≤ θ ≤ 1/2, since for those θ the stability region contains the left half-plane. ∎
(c) To have stiff decay, one must show R(z) → 0 as z → −∞ along the negative real axis. Here

    lim_{z→−∞} R(z) = lim_{z→−∞} (1 + θz)/(1 − (1 − θ)z) = −θ/(1 − θ),

which equals 0 only when θ = 0. So only for θ = 0 (which is the backward Euler
method) does this method have stiff decay. ∎
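The three regimes in (b) and (c) are easy to see numerically. Below is a minimal sketch (an illustration, not part of the exam solution; the values λ = −10⁴ and h = 0.1 are arbitrary choices putting hλ deep in the stiff regime).

```python
# Apply the theta-method to the stiff test problem y' = lam*y, y(0) = 1.
# theta = 0 (backward Euler) damps the stiff mode strongly (stiff decay);
# theta = 1/2 (trapezoidal) is stable but barely damps (|R| close to 1);
# theta = 1 (forward Euler) blows up since h*|lam| > 2.
lam, h = -1.0e4, 0.1

def theta_step(theta, y):
    # y_{n+1} = y_n + h*[theta*lam*y_n + (1-theta)*lam*y_{n+1}], solved for y_{n+1}
    return (1.0 + h * theta * lam) / (1.0 - h * (1.0 - theta) * lam) * y

for theta in (0.0, 0.5, 1.0):
    y = 1.0
    for _ in range(10):
        y = theta_step(theta, y)
    print(f"theta = {theta}: y_10 = {y:.3e}")
```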

4. Consider the initial boundary value problem

    u_t = u_xx,       (x, t) ∈ (0, 1) × (0, T] = D
    u(0, t) = g₁(t),  u(1, t) = g₂(t),  t ∈ [0, T]
    u(x, 0) = f(x),   x ∈ [0, 1]

Let (x_i, t_n) be a grid point in a uniform rectangular grid, s.t. x_i = iΔx, t_n = nΔt
for i = 0, 1, …, I and n = 0, 1, …, N, where IΔx = 1 and NΔt = T, and let U_i^n
be a numerical approximation of u(x_i, t_n). Assuming the exact solution is sufficiently
smooth, show that the scheme

    (U_i^{n+1} − U_i^n)/Δt = (U_{i+1}^{n+1} − 2U_i^{n+1} + U_{i−1}^{n+1})/Δx²

is unconditionally stable and |U_i^n − u(x_i, t_n)| = O(Δt + Δx²) as Δt, Δx → 0.

Solution: Given some perturbation of f(x), g₁(t), and g₂(t), the difference ρ_i^n between
the perturbed solution and U_i^n will also satisfy

    (ρ_i^{n+1} − ρ_i^n)/Δt = (ρ_{i+1}^{n+1} − 2ρ_i^{n+1} + ρ_{i−1}^{n+1})/Δx².

Let λ = Δt/Δx²; then

    ρ_i^{n+1} = (λ/(1+2λ)) ρ_{i+1}^{n+1} + (1/(1+2λ)) ρ_i^n + (λ/(1+2λ)) ρ_{i−1}^{n+1}.

Therefore ρ_i^{n+1} is a strict convex combination of ρ_{i+1}^{n+1}, ρ_i^n, and ρ_{i−1}^{n+1},
so the scheme satisfies a maximum principle for every λ > 0, which proves unconditional
stability.
Note that the exact solution u satisfies

    (u_i^{n+1} − u_i^n)/Δt = (u_{i+1}^{n+1} − 2u_i^{n+1} + u_{i−1}^{n+1})/Δx² + O(Δt + Δx²).

Let e_i^n = u_i^n − U_i^n; subtracting the scheme from the relation above and multiplying
by Δt gives

    e_i^{n+1} = (λ/(1+2λ)) e_{i+1}^{n+1} + (1/(1+2λ)) e_i^n + (λ/(1+2λ)) e_{i−1}^{n+1} + (1/(1+2λ)) O(Δt² + ΔtΔx²).

    ∴ |e_i^{n+1}| ≤ (λ/(1+2λ)) ‖e^{n+1}‖_∞ + (1/(1+2λ)) ‖e^n‖_∞ + (λ/(1+2λ)) ‖e^{n+1}‖_∞ + (C/(1+2λ))(Δt² + ΔtΔx²)
    ∴ (1/(1+2λ)) ‖e^{n+1}‖_∞ ≤ (1/(1+2λ)) ‖e^n‖_∞ + (C/(1+2λ))(Δt² + ΔtΔx²)
    ∴ ‖e^{n+1}‖_∞ ≤ ‖e^n‖_∞ + C(Δt² + ΔtΔx²)

This implies

    ‖e^n‖_∞ ≤ ‖e^0‖_∞ + Cn(Δt² + ΔtΔx²) ≤ C(Δt + Δx²),

since e^0 = 0 (the initial data are exact) and nΔt ≤ T. ∎
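A minimal sketch of this implicit scheme, assuming the manufactured solution u(x, t) = e^{−π²t} sin(πx) (which satisfies u_t = u_xx with zero boundary data); the grid sizes are arbitrary choices for the illustration. The scheme stays stable even though λ = Δt/Δx² is large, and the error scales like O(Δt + Δx²).

```python
# Backward-in-time / centered-in-space scheme for u_t = u_xx on (0,1),
# with u(x,0) = sin(pi x) and g1 = g2 = 0.  Each step solves the
# tridiagonal system (1+2*lam) U_i - lam U_{i+1} - lam U_{i-1} = U_i^n.
import numpy as np

def solve_heat(I, N, T=0.1):
    dx, dt = 1.0 / I, T / N
    lam = dt / dx**2
    x = np.linspace(0.0, 1.0, I + 1)
    U = np.sin(np.pi * x)                       # initial data f(x)
    A = (np.diag((1 + 2 * lam) * np.ones(I - 1))
         + np.diag(-lam * np.ones(I - 2), 1)
         + np.diag(-lam * np.ones(I - 2), -1))
    for _ in range(N):
        U[1:-1] = np.linalg.solve(A, U[1:-1])   # boundaries stay 0
    exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
    return np.abs(U - exact).max()

# Halving dt (dx fixed, so the O(dt) term dominates) roughly halves the error.
e1 = solve_heat(I=40, N=20)
e2 = solve_heat(I=40, N=40)
print(f"error(dt) = {e1:.2e}, error(dt/2) = {e2:.2e}, ratio = {e1/e2:.2f}")
```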

5. Consider the boundary value problem

    −u_xx + u = f(x),  x ∈ (0, 1)
    u(0) = a
    u_x(1) = b

Given a partition 0 = x₀ < x₁ < ⋯ < x_n = 1, please formulate a continuous piecewise
linear finite element method to solve the problem. Show that your method has a
unique solution.

Solution:
Let φ_i(x), i = 0, …, n, be the continuous piecewise linear functions on [0, 1] s.t.
φ_i(x_j) = δ_ij for j = 0, …, n (the nodal hat functions). Let

    V_h = span{φ_i(x) : i = 1, 2, …, n}  and  W_h = aφ₀ + V_h.

For any φ ∈ V_h, multiplying the equation by φ and integrating by parts gives

    ∫₀¹ (−u_xx + u)φ dx = −(u_x φ)|₀¹ + ∫₀¹ u_x φ_x dx + ∫₀¹ uφ dx = ∫₀¹ fφ dx,

where −(u_x φ)|₀¹ = −u_x(1)φ(1) = −bφ(1), since φ(0) = 0 for every φ ∈ V_h.


The FEM is to find U ∈ W_h s.t.

    ∫₀¹ U_x φ_x dx + ∫₀¹ Uφ dx = ∫₀¹ fφ dx + bφ(1)

for any φ ∈ V_h.
This is a square linear system for the coefficients of U; it has a unique solution if
and only if the associated homogeneous system (take a = b = 0 and f = 0)

    ∫₀¹ U_x φ_x dx + ∫₀¹ Uφ dx = 0  for all φ ∈ V_h, U ∈ V_h,

has only the zero solution. In fact, taking φ = U,

    ∫₀¹ |U_x|² dx + ∫₀¹ |U|² dx = 0 ⟹ U ≡ 0. ∎
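A minimal sketch of this method (an illustration with several assumptions not in the original: a uniform mesh, nodal interpolation of f in the load vector, and the manufactured solution u(x) = cos(πx/2), which gives a = 1, b = −π/2, f = (1 + π²/4)cos(πx/2)).

```python
# Piecewise linear FEM for -u'' + u = f, u(0) = a, u'(1) = b, on a
# uniform mesh.  Element stiffness and mass matrices are assembled,
# the Neumann datum b enters the load at the last node, and the
# Dirichlet value a is eliminated from the system.
import numpy as np

def fem_solve(n, f, a, b):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h      # int phi' phi'
    Me = h * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0  # int phi  phi
    A = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += Ke + Me
        F[idx] += Me @ f(x[idx])       # load via nodal interpolation of f
    F[n] += b                          # natural BC term b*phi(1)
    F[1:] -= A[1:, 0] * a              # eliminate Dirichlet unknown
    U = np.empty(n + 1)
    U[0] = a
    U[1:] = np.linalg.solve(A[1:, 1:], F[1:])
    return x, U

x, U = fem_solve(64, lambda x: (1 + np.pi**2 / 4) * np.cos(np.pi * x / 2),
                 a=1.0, b=-np.pi / 2)
err = np.abs(U - np.cos(np.pi * x / 2)).max()
print(f"max nodal error = {err:.2e}")   # O(h^2)
```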

6. Consider the following matrix A and solving the linear system Ax = b by iterative
methods,

        ⎡  1    α    β ⎤
    A = ⎢ −α    1   −γ ⎥ .
        ⎣  β    γ    1 ⎦
(a) What are the conditions on the variables α, β, and γ for Jacobi’s method and
Gauss-Seidel method to converge?
(b) Describe the Jacobi’s method and Gauss-Seidel method.
(c) Find a set of values (if any exist) of α, β, and γ for which the Jacobi method
converges but Gauss-Seidel does not, and vice versa.

Solution: (a) A sufficient condition is that A be strictly diagonally dominant:

    |α| + |β| < 1, |α| + |γ| < 1, and |β| + |γ| < 1.

(The sharp characterization, used in part (c), is that the spectral radius of the
corresponding iteration matrix be less than 1.) ∎

(b) Write A = D − L − U, where D is the diagonal part of A, L is strictly lower
triangular, and U is strictly upper triangular. Then

    (D − L − U)x = b
    Dx = (L + U)x + b
    x^{n+1} = D^{−1}(L + U)x^n + D^{−1}b
This is the Jacobi iteration; for Gauss–Seidel,

    (D − L)x = Ux + b
    x^{n+1} = (D − L)^{−1}U x^n + (D − L)^{−1}b

Algorithm: with any initial guess x^0, iterate x^{n+1} = D^{−1}(L + U)x^n + D^{−1}b
(Jacobi) or x^{n+1} = (D − L)^{−1}U x^n + (D − L)^{−1}b (Gauss–Seidel) until convergence.

(c) (Outline of solution) Each method converges for every initial guess precisely when
the spectral radius of its iteration matrix is less than 1. Compute the eigenvalues λ_J
of D^{−1}(L + U) and the eigenvalues λ_G of (D − L)^{−1}U, then find conditions on
α, β, and γ corresponding to max|λ_J| < 1 with max|λ_G| > 1, and vice versa.
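A minimal sketch of both iterations for this 3×3 matrix, together with the spectral radii that govern convergence (the values α = β = γ = 0.3, chosen for this illustration, make A strictly diagonally dominant, so both radii come out below 1).

```python
# Jacobi and Gauss-Seidel iteration matrices and fixed-point sweeps
# for the 3x3 matrix A of the problem.
import numpy as np

def iteration_matrices(A):
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                    # A = D - L - U
    U = -np.triu(A, 1)
    TJ = np.linalg.solve(D, L + U)         # Jacobi:       D^{-1}(L+U)
    TG = np.linalg.solve(D - L, U)         # Gauss-Seidel: (D-L)^{-1} U
    return TJ, TG

def solve_iterative(A, b, T, M, iters=200):
    # x^{n+1} = T x^n + M^{-1} b, with M = D (Jacobi) or M = D - L (GS)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = T @ x + np.linalg.solve(M, b)
    return x

alpha, beta, gamma = 0.3, 0.3, 0.3
A = np.array([[1.0, alpha, beta],
              [-alpha, 1.0, -gamma],
              [beta, gamma, 1.0]])
b = np.array([1.0, 2.0, 3.0])

TJ, TG = iteration_matrices(A)
rhoJ = max(abs(np.linalg.eigvals(TJ)))
rhoG = max(abs(np.linalg.eigvals(TG)))
print(f"rho(Jacobi) = {rhoJ:.3f}, rho(Gauss-Seidel) = {rhoG:.3f}")

x = solve_iterative(A, b, TJ, np.diag(np.diag(A)))
print("Jacobi residual:", np.linalg.norm(A @ x - b))
```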


7. Describe an algorithm to compute the least squares solution by singular value
decomposition. Prove the solution obtained is the least squares solution and estimate
the leading computational cost of your algorithm. Assume the least squares system is

    Ax = b,  A ∈ ℝ^{m×n}, b ∈ ℝ^m, x ∈ ℝ^n.

Solution: Algorithm:

(a) Compute the reduced SVD: A = UΣV*.

(b) Compute the vector y = U*b.

(c) Solve the diagonal system Σw = y.

(d) Set x = Vw.

Proof: Assuming A has full column rank (so that Σ is invertible), x is the least
squares solution iff it satisfies the normal equations:

    A*Ax = A*b
    ⇔ VΣ*U*UΣV*x = VΣ*U*b
    ⇔ Σ*ΣV*x = Σ*U*b
    ⇔ ΣV*x = y
    ⇔ Σw = y.

Leading cost: dominated by the SVD, ∼ 2mn² + 11n³ flops. ∎
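The four steps can be sketched directly with NumPy's reduced SVD (an illustration; the random full-column-rank test system is an assumption made for the example).

```python
# Least squares via reduced SVD, following steps (a)-(d) above.
import numpy as np

def svd_lstsq(A, b):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # (a) reduced SVD
    y = U.T @ b                                       # (b) y = U* b
    w = y / s                                         # (c) solve Sigma w = y
    return Vt.T @ w                                   # (d) x = V w

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))    # full column rank (almost surely)
b = rng.standard_normal(20)
x = svd_lstsq(A, b)

# x satisfies the normal equations A* A x = A* b:
print(np.allclose(A.T @ A @ x, A.T @ b))  # True
```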

8. Describe the conjugate gradient iteration method for solving a linear system

    Ax = b

with A symmetric positive definite. Prove that the residuals are orthogonal.

Solution: Algorithm:

(a) x₀ = 0, r₀ = b, p₀ = r₀

(b) for n = 1, 2, 3, …
    i.   α_n = (r_{n−1}ᵀ r_{n−1})/(p_{n−1}ᵀ A p_{n−1})
    ii.  x_n = x_{n−1} + α_n p_{n−1}
    iii. r_n = r_{n−1} − α_n A p_{n−1}
    iv.  β_n = (r_nᵀ r_n)/(r_{n−1}ᵀ r_{n−1})
    v.   p_n = r_n + β_n p_{n−1}

end.

Proof that r_nᵀ r_j = 0 for j < n, by induction on n. From step iii,

    r_nᵀ r_j = r_{n−1}ᵀ r_j − α_n p_{n−1}ᵀ A r_j.

If j < n − 1, both terms vanish: r_{n−1}ᵀ r_j = 0 by induction, and, writing
r_j = p_j − β_j p_{j−1} from step v, p_{n−1}ᵀ A r_j = 0 by the A-conjugacy of the
search directions. If j = n − 1,

    r_nᵀ r_{n−1} = r_{n−1}ᵀ r_{n−1} − α_n p_{n−1}ᵀ A r_{n−1} = 0,

since p_{n−1}ᵀ A r_{n−1} = p_{n−1}ᵀ A(p_{n−1} − β_{n−1} p_{n−2}) = p_{n−1}ᵀ A p_{n−1}
and α_n = (r_{n−1}ᵀ r_{n−1})/(p_{n−1}ᵀ A p_{n−1}). ∎

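The iteration above can be sketched in a few lines, together with a numerical check of the residual orthogonality just proved (an illustration; the random SPD test matrix and the iteration count are choices made for this example).

```python
# Conjugate gradient iteration (steps i-v above) with a check that the
# residuals r_0, r_1, ... are mutually orthogonal.
import numpy as np

def conjugate_gradient(A, b, iters):
    x = np.zeros_like(b)
    r = b.copy()                          # r_0 = b (since x_0 = 0)
    p = r.copy()                          # p_0 = r_0
    residuals = [r.copy()]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # step i
        x = x + alpha * p                 # step ii
        r_new = r - alpha * Ap            # step iii
        beta = (r_new @ r_new) / (r @ r)  # step iv
        p = r_new + beta * p              # step v
        r = r_new
        residuals.append(r.copy())
    return x, residuals

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 8.0 * np.eye(30)            # symmetric positive definite
b = rng.standard_normal(30)

x, res = conjugate_gradient(A, b, iters=6)
R = np.array(res)
G = R @ R.T                               # Gram matrix of r_0, ..., r_6
d = np.sqrt(np.diag(G))
C = G / np.outer(d, d)                    # cosines between residuals
off = np.abs(C - np.eye(len(res))).max()
print(f"max |cos(r_i, r_j)| for i != j: {off:.1e}")  # ~ machine epsilon
```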