Newton-Raphson
Recap
• Optimization
• Steepest descent
Recap
• Method of steepest descent: $x_{i+1} = x_i - \alpha_i g(x_i)$
• Estimating an optimal $\alpha_i$ with a Taylor expansion:
$f(x_{i+1}) = f(x_i) - \alpha_i\, g(x_i)^T g(x_i) + \frac{1}{2}\alpha_i^2\, g(x_i)^T H(x_i)\, g(x_i) + \dots$
$\alpha_i = \dfrac{g(x_i)^T g(x_i)}{g(x_i)^T H(x_i)\, g(x_i)}$
[Figure: contour plot over $(x_1, x_2)$ showing steepest-descent iterates]
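To make the step-size formula concrete, here is a minimal Python sketch of steepest descent using $\alpha_i = g^T g / (g^T H g)$. The quadratic test function, starting point, and iteration limits are illustrative assumptions, not from the slides.

```python
# Steepest descent with the Taylor-expansion step size
# alpha_i = g^T g / (g^T H g); test problem is assumed for illustration.
import numpy as np

def steepest_descent(grad, hess, x0, n_steps=50, tol=1e-10):
    x = x0.astype(float)
    for _ in range(n_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        alpha = (g @ g) / (g @ H @ g)   # optimal step for the local quadratic model
        x = x - alpha * g
    return x

# Example quadratic: f(x) = 0.5 x^T A x, minimum at the origin
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
hess = lambda x: A
print(steepest_descent(grad, hess, np.array([1.0, -0.5])))
```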
Recap
• Mechanical analogy: $m\ddot{x} = -\dot{x} + F$
• With the force derived from a potential: $m\ddot{x} = -\dot{x} - \nabla U$
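A short sketch of this mechanical analogy: integrating the damped dynamics $m\ddot{x} = -\dot{x} - \nabla U$ drives the particle toward a point where $\nabla U = 0$, i.e. a local minimum. The potential, mass, time step, and integrator (explicit Euler) are assumptions chosen for illustration.

```python
# Damped particle in a potential U: m x'' = -x' - grad(U).
# It settles where grad(U) = 0, i.e. at a local minimum of U.
import numpy as np

def damped_dynamics(grad_U, x0, m=1.0, dt=0.01, n_steps=5000):
    x = x0.astype(float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        a = (-v - grad_U(x)) / m   # acceleration from damping + potential force
        v = v + dt * a
        x = x + dt * v
    return x

# Assumed potential U(x) = 0.5 x^T A x with minimum at the origin
A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(damped_dynamics(lambda x: A @ x, np.array([1.0, -0.5])))
```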
Recap
• Method of steepest descent: $x_{i+1} = x_i - \alpha_i g(x_i)$
[Figure: two contour plots over $(x_1, x_2)$ showing steepest-descent iterates]
Unconstrained Optimization
• Conjugate gradient method:
• Consider the minimization of: $f(x) = \frac{1}{2} x^T A x - b^T x$
$g(x) = \nabla f(x) = Ax - b$
$H(x) = A$
• The minimum satisfies $Ax = b$
• The Hessian, $A$, is symmetric, positive definite
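As a quick sanity check (with an assumed symmetric positive-definite $A$ and vector $b$): the gradient $Ax - b$ vanishes exactly at the solution of the linear system, so minimizing $f$ and solving $Ax = b$ are the same problem.

```python
# Minimizer of f(x) = 0.5 x^T A x - b^T x is the solution of A x = b.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite (assumed)
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)
print(x_star, A @ x_star - b)             # gradient vanishes at x_star
```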
Unconstrained Optimization
• Conjugate gradient method:
$f(x_{i+1}) = f(x_i) + \alpha_i\, g(x_i)^T p_i + \frac{1}{2}\alpha_i^2\, p_i^T A p_i$
• $f(x_{i+1})$ is quadratic in $\alpha_i$.
• $f(x_{i+1})$ is minimized when $\alpha_i = -\dfrac{g(x_i)^T p_i}{p_i^T A p_i}$
• Requiring the gradient after the next step to stay orthogonal to $p_i$ gives the conjugacy condition:
$\left[A(x_{i+1} + \alpha_{i+1} p_{i+1}) - b\right]^T p_i = 0 \;\Rightarrow\; p_{i+1}^T A p_i = 0$
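Putting the step size and the conjugacy condition together gives a minimal conjugate-gradient sketch. The Gram-Schmidt-style $\beta$ update used here to enforce $p_{i+1}^T A p_i = 0$, and the test matrix, are assumptions chosen for illustration.

```python
# Conjugate gradient for f(x) = 0.5 x^T A x - b^T x, with
# alpha_i = -g_i^T p_i / (p_i^T A p_i) and A-conjugate directions.
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    x = x0.astype(float)
    g = A @ x - b                  # gradient of f at x
    p = -g                         # first direction: steepest descent
    for _ in range(len(b)):
        alpha = -(g @ p) / (p @ A @ p)     # exact line search along p
        x = x + alpha * p
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            break
        beta = (g @ A @ p) / (p @ A @ p)   # enforces p_{i+1}^T A p_i = 0
        p = -g + beta * p
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b, np.zeros(2)))   # matches np.linalg.solve(A, b)
```

In exact arithmetic this converges in at most $n$ steps for an $n \times n$ system, since each new direction is conjugate to the previous one.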
Unconstrained Optimization
• Conjugate gradient method:
[Figure: contour plot over $(x_1, x_2)$ showing conjugate-gradient iterates]
Newton-Raphson
• Finding local minima in unconstrained optimization problems involves solving the equation:
$g(x) = \nabla f(x) = 0$
• which holds at minima of $f(x)$
• Newton-Raphson iteration: $x_{i+1} = x_i - H(x_i)^{-1} g(x_i)$
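A minimal sketch of the iteration, assuming a simple smooth test function; solving the linear system $H d = g$ is used in place of forming $H^{-1}$ explicitly.

```python
# Newton-Raphson for minimization: iterate x <- x - H(x)^{-1} g(x).
import numpy as np

def newton_raphson(grad, hess, x0, n_steps=20, tol=1e-12):
    x = x0.astype(float)
    for _ in range(n_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # solve H d = g, never form H^{-1}
    return x

# Assumed test function: f(x) = cosh(x1) + cosh(x2), minimum at the origin
grad = lambda x: np.sinh(x)
hess = lambda x: np.diag(np.cosh(x))
print(newton_raphson(grad, hess, np.array([1.0, 1.0])))
```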
Unconstrained Optimization
• Method of steepest descent/Newton-Raphson:
[Figure: contour plot over $(x_1, x_2)$ comparing steepest-descent and Newton-Raphson iterates]
Unconstrained Optimization
• Method of steepest descent/Newton-Raphson:
[Figure: zoomed contour plot over $(x_1, x_2)$ comparing steepest-descent and Newton-Raphson iterates]
Newton-Raphson
• Compare the Newton-Raphson update $x_{i+1} = x_i - H(x_i)^{-1} g(x_i)$ with the steepest-descent update $x_{i+1} = x_i - \alpha_i g(x_i)$.
• What is the difference?
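One way to see the difference (on an assumed ill-conditioned quadratic): Newton-Raphson uses curvature information and reaches the minimum of a quadratic in a single step, while steepest descent only scales the gradient and zigzags for many iterations.

```python
# Steepest descent vs. Newton-Raphson on f(x) = 0.5 x^T A x (assumed A).
import numpy as np

A = np.array([[10.0, 0.0], [0.0, 1.0]])    # condition number 10
x = np.array([1.0, 1.0])
for i in range(200):
    g = A @ x                               # gradient
    if np.linalg.norm(g) < 1e-8:
        break
    x = x - (g @ g) / (g @ A @ g) * g       # optimal steepest-descent step
print("steepest-descent iterations:", i)

x = np.array([1.0, 1.0])
x = x - np.linalg.solve(A, A @ x)           # one Newton step lands on the minimum
print("after one Newton step:", x)
```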
Trust-Region Methods
• Both Newton-Raphson and the optimized steepest descent methods assume the objective function can be described locally by a quadratic function.
[Figure: quadratic model around $x_i$ and the resulting step to $x_{i+1}$]
Trust-Region Methods
• Trust-region methods choose between the Newton-Raphson direction when the quadratic approximation is good and the steepest-descent direction when it is not.
[Figure: Newton-Raphson and steepest-descent steps inside a trust region of radius $R_i$]
Trust-Region Methods
• Newton step: $d_i^{NR} = -H(x_i)^{-1} g(x_i)$
• Steepest-descent step: $d_i^{SD} = -\alpha_i g(x_i)$
• If $\|d_i^{NR}\|_2 < R_i$ and $f(x_i + d_i^{NR}) < f(x_i)$, take the Newton step
• Else, take a step in the steepest descent direction
• Accept the steepest-descent step if $\|d_i^{SD}\|_2 < R_i$ and $f(x_i + d_i^{SD}) < f(x_i)$, with the step length limited by $R_i$
[Figure: Newton-Raphson and steepest-descent steps inside a trust region of radius $R_i$]
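A minimal trust-region sketch following the rule above. The radius-update factors (grow by 2 on success, shrink by 1/2 on failure) and the Rosenbrock-style test function are assumptions, not from the slides.

```python
# Trust region: Newton step if it fits inside radius R and decreases f,
# otherwise a steepest-descent step clipped to R; shrink R on failure.
import numpy as np

def trust_region(f, grad, hess, x0, R=1.0, n_steps=1000, tol=1e-10):
    x = x0.astype(float)
    for _ in range(n_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d_nr = -np.linalg.solve(hess(x), g)          # Newton step
        if np.linalg.norm(d_nr) < R and f(x + d_nr) < f(x):
            x, R = x + d_nr, 2.0 * R                 # accept; grow the region
        else:
            # steepest-descent direction, step length clipped to R (assumed rule)
            d_sd = -g / np.linalg.norm(g) * min(R, np.linalg.norm(g))
            if f(x + d_sd) < f(x):
                x = x + d_sd
            else:
                R = 0.5 * R                          # quadratic model untrustworthy
    return x

# Assumed test: Rosenbrock function, minimum at (1, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])
print(trust_region(f, grad, hess, np.array([-1.2, 1.0])))
```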
Unconstrained Optimization
• Method of steepest descent/Newton-Raphson/Trust-Region:
[Figure: contour plot over $(x_1, x_2)$ comparing the three methods]
Unconstrained Optimization
• Method of steepest descent/Newton-Raphson/Trust-Region:
[Figure: zoomed contour plot near the minimum at $(x_1, x_2) \approx (1, 1)$]
MIT OpenCourseWare
https://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.