Newton-Raphson Method
Nikhilesh Sutradhar
1 Detailed Introduction and Historical Context
The Newton-Raphson method, often called Newton’s method, is arguably the most famous and
widely used root-finding algorithm in numerical analysis. While the core idea was developed by Sir
Isaac Newton around 1669, its current iterative form was described by Joseph Raphson in 1690. It
is a powerful tool because, under the right conditions, it converges to a root much faster than bracketing
methods like the bisection method.
The method is fundamental in computational science and engineering for solving equations that
cannot be solved analytically. It is used in optimization problems (to find minima or maxima by finding
the roots of the derivative), in calculating complex functions in computer graphics, and in solving systems
of non-linear equations in fields like fluid dynamics and electrical engineering.
The key principle is linear approximation. The method assumes that a function can be locally
approximated by its tangent line. By finding the x-intercept of this tangent line, we obtain a better
approximation of the function’s root. Repeating this process refines the approximation with each step.
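As a quick numerical illustration of this tangent-line idea, the sketch below (an illustration only, using f(x) = x² − 2, whose positive root is √2 ≈ 1.41421) computes the x-intercept of the tangent at a single starting guess:

```python
# One tangent-line step for f(x) = x^2 - 2 (root: sqrt(2) ~ 1.41421),
# starting from the guess x0 = 1.5. Illustrative values only.
f = lambda x: x**2 - 2
df = lambda x: 2 * x

x0 = 1.5
# The tangent at (x0, f(x0)) has slope df(x0); its x-intercept,
# x1 = x0 - f(x0)/df(x0), is the next Newton-Raphson estimate.
x1 = x0 - f(x0) / df(x0)
print(x1)  # 1.4166666666666667, already close to sqrt(2)
```

A single step already moves the estimate from 1.5 to about 1.4167, illustrating how quickly the tangent-line approximation homes in on the root.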
2 Rigorous Derivation via Taylor Series Expansion
A more formal derivation of the Newton-Raphson formula comes from the Taylor series expansion.
Let xn+1 = xn + h, where h is a small correction to our current guess xn that will bring us closer to
the actual root, say α. We want f (xn+1 ) = 0.
According to the Taylor series expansion of f (x) around xn , we have:
f(xn+1) = f(xn + h) = f(xn) + h·f′(xn) + (h²/2!)·f′′(xn) + …
Since we want f (xn+1 ) = 0, and assuming h is small, we can truncate the series after the first-order
term:
0 ≈ f (xn ) + h · f ′ (xn )
Now, we can solve for the correction term h:
h · f ′ (xn ) ≈ −f (xn )
h ≈ −f(xn)/f′(xn)
Since we defined xn+1 = xn + h, we substitute the expression for h:
xn+1 = xn − f(xn)/f′(xn)
This is the Newton-Raphson formula. The truncation of the Taylor series is also the source of the
method’s error and explains why it is an approximation, not an exact step to the root.
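The formula above translates directly into code. The following is a minimal Python sketch; the function name newton_raphson and the stopping rule |f(xn)| < tol are choices of this illustration, not part of the derivation:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/df(x_n) until |f(x_n)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("tangent is horizontal: f'(x) = 0")
        x -= fx / dfx
    raise RuntimeError("no convergence within max_iter iterations")

# The cubic x^3 - x - 1 = 0, which is also worked by hand later in the text:
root = newton_raphson(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, x0=1.0)
print(root)  # ~1.324717957
```

Guarding against a zero derivative and capping the number of iterations anticipates the failure cases discussed in Section 4.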
3 Convergence Analysis
The speed of convergence is the primary advantage of this method.
• Order of Convergence: The Newton-Raphson method has quadratic convergence. This
means that if the error at step n is En , the error at the next step, En+1 , is roughly proportional
to the square of the previous error (En+1 ∝ En²). In practice, this means the number of correct
decimal places approximately doubles with each iteration.
• Condition for Convergence: For the iteration xn+1 = g(xn), where g(x) = x − f(x)/f′(x), the method
is guaranteed to converge if, in an interval around the root α, the condition |g ′ (x)| < 1 is satisfied.
Calculating the derivative:
g′(x) = 1 − [f′(x)f′(x) − f(x)f′′(x)]/[f′(x)]² = f(x)f′′(x)/[f′(x)]²
Therefore, the condition for convergence is |f(x)f′′(x)/[f′(x)]²| < 1 for values of x near the root.
• Importance of the Initial Guess: Convergence is only guaranteed for an initial guess x0 that
is "sufficiently close" to the actual root.
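The digit-doubling claim can be checked numerically. The sketch below (an illustration, again using f(x) = x² − 2) tracks the error En and the ratio En+1/En², which for quadratic convergence should settle near |f′′/(2f′)| evaluated at the root, here 1/(2√2) ≈ 0.3536:

```python
import math

# Error tracking for f(x) = x^2 - 2 starting from x0 = 1.5. For quadratic
# convergence the ratio e_{n+1} / e_n^2 approaches |f''/(2f')| at the root,
# which here is 2 / (2 * 2*sqrt(2)) = 1/(2*sqrt(2)) ~ 0.3536.
true_root = math.sqrt(2)
x = 1.5
errors = []
for _ in range(4):
    errors.append(abs(x - true_root))
    x = x - (x**2 - 2) / (2 * x)
ratios = [errors[i + 1] / errors[i] ** 2 for i in range(3)]
print(ratios)  # each ratio close to 0.3536
```

Only a few iterations are tracked because the error reaches the limit of double precision almost immediately, which is itself a demonstration of quadratic convergence.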
4 Pitfalls and Failure Cases
The Newton-Raphson method is powerful but not foolproof. Here are common scenarios where it can
fail:
1. Stationary Points (f ′ (xn ) = 0): If the derivative at a guess is zero, the tangent line is horizontal
and will never intersect the x-axis. This results in a division by zero in the formula.
2. Oscillation: The iterations may enter a cycle, oscillating between two or more values without ever
converging to a root. For example, f (x) = x3 − 2x + 2 with x0 = 0 leads to an oscillation between
0 and 1.
3. Divergence: The iterations may produce values that move successively farther away from the
root.
4. Root Jumping: If a function has multiple roots, the initial guess determines which root the
method converges to. A guess might send the iteration to an unexpected root.
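The oscillation case can be reproduced directly. This short sketch iterates the text's example f(x) = x³ − 2x + 2 from x0 = 0 and prints the resulting 0, 1, 0, 1, … cycle:

```python
# The oscillation example from the text: f(x) = x^3 - 2x + 2 with x0 = 0.
# The iteration cycles between 0 and 1 and never converges.
f = lambda x: x**3 - 2 * x + 2
df = lambda x: 3 * x**2 - 2

x = 0.0
trajectory = []
for _ in range(6):
    trajectory.append(x)
    x = x - f(x) / df(x)
print(trajectory)  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```

In practice, cycle detection or a fallback to a bracketing method is a common safeguard against this behaviour.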
5 Detailed Solved Problems
5.1 Problem 1: Finding Intersection of Two Curves
Find the positive x-coordinate where y = x² and y = cos(x) intersect, accurate to five decimal places.
We need to find the root of f(x) = x² − cos(x) = 0. Let x0 = 0.8.
• Function: f(x) = x² − cos(x)
• Derivative: f′(x) = 2x + sin(x)
Iteration 1:
x1 = 0.8 − [(0.8)² − cos(0.8)]/[2(0.8) + sin(0.8)] = 0.8 − (−0.056706/2.317356) ≈ 0.82447
Iteration 2:
x2 = 0.82447 − [(0.82447)² − cos(0.82447)]/[2(0.82447) + sin(0.82447)] ≈ 0.82413
Answer: The curves intersect at approximately x = 0.82413.
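This result can be checked with a few lines of Python (a verification sketch, not part of the original worked solution):

```python
import math

# Re-run Problem 1 numerically: f(x) = x^2 - cos(x), x0 = 0.8.
f = lambda x: x**2 - math.cos(x)
df = lambda x: 2 * x + math.sin(x)

x = 0.8
for _ in range(5):
    x = x - f(x) / df(x)
print(round(x, 5))  # 0.82413
```

Five iterations are more than enough; the iterate stops changing at machine precision after the third step.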
5.2 Problem 2: Engineering Application (Fluid Dynamics)
Find the friction factor f from the Colebrook equation for ϵ/D = 0.001 and Re = 2 × 10⁵. Let x = √f.
The equation is F(x) = 1/x + 2.0 log10(0.001/3.7 + 2.51/((2 × 10⁵)x)) = 0. Use the initial guess x0 = √0.02 ≈ 0.14.
• Iteration 1: With x0 = 0.14, the formula yields x1 ≈ 0.1449.
• Iteration 2: With x1 = 0.1449, the formula yields x2 ≈ 0.1450.
Answer: x ≈ 0.1450, so the friction factor is f = x² ≈ 0.02103.
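Since the Colebrook iteration is tedious by hand, a sketch of the same computation in Python may help. Here the derivative is approximated by a central difference, a convenience choice of this illustration rather than the analytic F′(x):

```python
import math

# Solve F(x) = 1/x + 2 log10(0.001/3.7 + 2.51/(2e5 * x)) = 0 for x = sqrt(f),
# using Newton-Raphson with a central-difference derivative (an assumption
# made here for brevity; the analytic derivative works equally well).
def F(x):
    return 1.0 / x + 2.0 * math.log10(0.001 / 3.7 + 2.51 / (2e5 * x))

x, h = 0.14, 1e-6
for _ in range(10):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    x = x - F(x) / dF
print(x, x**2)  # x close to 0.145, friction factor x**2 close to 0.021
```

A finite-difference derivative trades a little accuracy per step for not having to differentiate the logarithm by hand; with quadratic-ish convergence the extra iterations cost almost nothing.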
5.3 Problem 3: Finding a Minimum Value
Find the value of x that minimizes G(x) = e^(−x) + x⁴. We find the root of the derivative, f(x) = G′(x) = −e^(−x) + 4x³ = 0. Let x0 = 0.5.
• Function to find the root of: f(x) = −e^(−x) + 4x³
• Derivative for Newton’s method: f′(x) = e^(−x) + 12x²
Iteration 1:
x1 = 0.5 − [−e^(−0.5) + 4(0.5)³]/[e^(−0.5) + 12(0.5)²] ≈ 0.5295
Iteration 2:
x2 = 0.5295 − [−e^(−0.5295) + 4(0.5295)³]/[e^(−0.5295) + 12(0.5295)²] ≈ 0.5283
Answer: The function G(x) has a minimum at approximately x = 0.5283.
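The same minimisation can be reproduced in a few lines (an illustrative sketch; g1 and g2 are hypothetical names chosen here for G′ and G′′):

```python
import math

# Problem 3 re-run: minimise G(x) = exp(-x) + x^4 by applying
# Newton-Raphson to its derivative G'(x) = -exp(-x) + 4x^3.
g1 = lambda x: -math.exp(-x) + 4 * x**3   # G'(x)
g2 = lambda x: math.exp(-x) + 12 * x**2   # G''(x)

x = 0.5
for _ in range(6):
    x = x - g1(x) / g2(x)
print(x)  # ~0.5283
```

Since G′′(x) > 0 everywhere on this interval, the stationary point found is indeed a minimum.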
5.4 Problem 4: Root of a Logarithmic Function
Find a root for f (x) = x ln(x) − 1 = 0 accurate to four decimal places. Let x0 = 1.5.
• Function: f (x) = x ln(x) − 1
• Derivative: f ′ (x) = ln(x) + 1
Iteration 1:
x1 = 1.5 − [1.5 ln(1.5) − 1]/[ln(1.5) + 1] ≈ 1.7788
Iteration 2:
x2 = 1.7788 − [1.7788 ln(1.7788) − 1]/[ln(1.7788) + 1] ≈ 1.7633
Iteration 3:
x3 = 1.7633 − [1.7633 ln(1.7633) − 1]/[ln(1.7633) + 1] ≈ 1.7632
Answer: The root is approximately 1.7632.
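A quick verification sketch in Python:

```python
import math

# Problem 4 re-run: f(x) = x*ln(x) - 1, f'(x) = ln(x) + 1, x0 = 1.5.
f = lambda x: x * math.log(x) - 1
df = lambda x: math.log(x) + 1

x = 1.5
for _ in range(6):
    x = x - f(x) / df(x)
print(round(x, 4))  # 1.7632
```

Note that the derivative ln(x) + 1 vanishes at x = 1/e ≈ 0.368, so an initial guess near that point would trigger the stationary-point failure mode from Section 4; x0 = 1.5 is safely away from it.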
5.5 Problem 5: Finding the Fifth Root
Calculate ⁵√200. This is equivalent to solving f(x) = x⁵ − 200 = 0. Let x0 = 2.8.
• Function: f(x) = x⁵ − 200
• Derivative: f′(x) = 5x⁴
Iteration 1:
x1 = 2.8 − [(2.8)⁵ − 200]/[5(2.8)⁴] = 2.8 − (−27.89632/307.328) ≈ 2.8908
Iteration 2:
x2 = 2.8908 − [(2.8908)⁵ − 200]/[5(2.8908)⁴] ≈ 2.8854
Answer: The fifth root of 200 is approximately 2.8854.
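Problem 5 generalises naturally to an n-th root routine. The helper below (nth_root is a hypothetical name introduced for this sketch) applies the same update xn+1 = xn − (xnⁿ − a)/(n·xnⁿ⁻¹):

```python
# Compute the n-th root of a > 0 as the root of f(x) = x^n - a,
# with f'(x) = n * x^(n-1). A fixed iteration count is used for
# simplicity; a tolerance test would also work.
def nth_root(a, n, x0):
    x = x0
    for _ in range(30):
        x = x - (x**n - a) / (n * x ** (n - 1))
    return x

r = nth_root(200, 5, 2.8)
print(r)  # ~2.8854
```

For a positive starting guess the iteration for x⁵ − 200 is well behaved, since f′(x) = 5x⁴ never vanishes away from zero.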
September 23, 2025
Example 1: A Classic Cubic Polynomial
We want to find a root for the equation f(x) = x³ − x − 1.
• Function: f(x) = x³ − x − 1
• Derivative: f′(x) = 3x² − 1
• Initial Guess: x0 = 1
Iterations:
x1 = 1.0 − (−1)/2 = 1.5
x2 = 1.5 − 0.875/5.75 ≈ 1.347826087
x3 = 1.347826087 − 0.100682173/4.449905482 ≈ 1.325200399
x4 = 1.325200399 − 0.002058363/4.268469738 ≈ 1.324718174
x5 = 1.324718174 − 0.000000925/4.264633334 ≈ 1.324717957
x6 = 1.324717957 − 0.0/4.264632901 ≈ 1.324717957
x7 = 1.324717957 (Converged)
Result: The root converges to approximately 1.324717957. Notice how the num-
ber of correct digits roughly doubles after the third iteration, showcasing quadratic
convergence.
Example 2: A Transcendental Equation (Trigonometry)
We want to find the solution to cos(x) = x, which means finding the root of
f (x) = cos(x) − x.
• Function: f (x) = cos(x) − x
• Derivative: f ′ (x) = − sin(x) − 1
• Initial Guess: x0 = 0.5 (Calculations must be in radians)
Iterations:
x1 = 0.5 − 0.377582562/(−1.479425539) ≈ 0.755222417
x2 = 0.755222417 − (−0.027103363)/(−1.685451057) ≈ 0.739141666
x3 = 0.739141666 − (−0.000094645)/(−1.673652874) ≈ 0.739085133
x4 = 0.739085133 − (−1.13 × 10⁻¹⁰)/(−1.673612029) ≈ 0.739085133
x5 = x6 = x7 = 0.739085133 (Converged)
Result: The root converges extremely fast to 0.739085133, reaching the limit of
double-precision floating-point accuracy by the 4th iteration.
Example 3: Finding a Square Root
We want to calculate √20 by finding the positive root of f(x) = x² − 20.
• Function: f(x) = x² − 20
• Derivative: f′(x) = 2x
• Initial Guess: x0 = 2 (A deliberately suboptimal guess)
Iterations:
x1 = 2 − (−16)/4 = 6.0
x2 = 6.0 − 16/12 ≈ 4.666666667
x3 = 4.666666667 − 1.777777778/9.333333333 ≈ 4.476190476
x4 = 4.476190476 − 0.036281179/8.952380952 ≈ 4.472137791
x5 = 4.472137791 − 0.000016422/8.944275582 ≈ 4.472135955
x6 = 4.472135955 − 0.0/8.944271910 ≈ 4.472135955
x7 = 4.472135955 (Converged)
Result: The root converges to 4.472135955. Even with a poor starting guess,
the method corrects its course and finds the root efficiently.