WTW123 – Formulas, Proofs and Graphs

The document outlines various mathematical concepts including error analysis, polynomial equations, fixed point iteration, and numerical integration methods. It covers the Newton-Raphson method, convergence types, and local/global error estimates for different numerical methods. Additionally, it discusses Euler's method for solving first-order differential equations and systems of differential equations, providing formulas and theorems relevant to these topics.

WTW123 – Proofs, Formulas and Graphs

1. Formulas
Section 1.1 – Graphs and Interval Division
Error: x − y (if y is an approximation for x)
Absolute Error: |x − y| (if y is an approximation for x)
Relative Error: |x − y| / |x| (if y is an approximation for x)

Section 1.2 – Polynomial Equations


Theorem 4 (m) for f(x) = a_n x^n + a_{n-1} x^{n-1} + ⋯ + a_0:

m = (|a_0| + |a_1| + ⋯ + |a_{n-1}|) / |a_n|

• If m ≥ 1, then all the roots have absolute value less than or equal to m.
• If m < 1, all roots have absolute value less than 1.

Section 1.3 – Fixed Point Iteration


Iterations: x_{k+1} = g(x_k)
e_n: e_n = c − x_n
v_n: v_n = |x_{n+1} − x_n|

Theorem 6 (Error Estimates):
|c − x_n| ≤ m|c − x_{n-1}|, i.e. |e_n| ≤ m|e_{n-1}|
|c − x_n| ≤ (1/(1 − m))|x_{n+1} − x_n|, i.e. |e_n| ≤ v_n/(1 − m)
|c − x_n| ≤ (m^n/(1 − m))|x_1 − x_0|, i.e. |e_n| ≤ (m^n/(1 − m))v_0
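The iteration x_{k+1} = g(x_k) with the stopping quantity v_n = |x_{n+1} − x_n| can be sketched in a few lines. A minimal sketch, not from the notes: the function name `fixed_point`, the example g(x) = cos x, and the tolerance are illustrative choices.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k); stop when v_n = |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:   # v_n is small enough
            return x_next
        x = x_next
    return x

# Example: solve x = cos(x); the fixed point is approximately 0.739085.
c = fixed_point(math.cos, 1.0)
```

By Theorem 6, the true error satisfies |e_n| ≤ v_n/(1 − m), so a small v_n guarantees a small error only when m = max|g′| is not close to 1.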

Section 1.4 – Newton-Raphson Method


Newton-Raphson Iterations: x_{k+1} = x_k − f(x_k)/f′(x_k), k = 0, 1, 2, …
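The Newton-Raphson iteration above translates directly into code. A minimal sketch, not from the notes: the function name `newton`, the stopping test on the step size, and the example f(x) = x² − 2 are illustrative choices.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # successive iterates agree to within tol
            break
    return x

# Example: root of f(x) = x^2 - 2 starting from x0 = 1 (converges to sqrt(2)).
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```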

Section 1.5 – Convergence


Fixed Point Iteration (Convergence):
e_{n+1}/e_n → k as n → ∞ (with k constant)
• If k ≠ 0, the convergence is called linear.
• If k = 0, the convergence is called superlinear.

Newton-Raphson (Linear Convergence):
e_{n+1}/e_n → k as n → ∞ (with k constant)
• If k ≠ 0, the convergence is called linear.
• If k = 0, the convergence is called superlinear.

Newton-Raphson (Quadratic Convergence):
e_{n+1}/(e_n)^2 → k as n → ∞ (with k a constant)
Section 2.1 – Numerical Integration and Rectangle Rules
LR(f, h) = ∑_{k=1}^{n} f(x_{k-1})h = h[f(x_0) + f(x_1) + ⋯ + f(x_{n-1})]

RR(f, h) = ∑_{k=1}^{n} f(x_k)h = h[f(x_1) + f(x_2) + ⋯ + f(x_n)]

MR(f, h) = ∑_{k=1}^{n} f(x_k^*)h = h[f(x_1^*) + f(x_2^*) + ⋯ + f(x_n^*)]
(where x_k^* = (x_k + x_{k-1})/2 is the midpoint of the interval [x_{k-1}, x_k])
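The three rectangle rules on a uniform partition of [a, b] can be transcribed directly from the sums above. A minimal sketch, not from the notes: the function name `rectangle_rules` and the example integrand are illustrative choices.

```python
def rectangle_rules(f, a, b, n):
    """Left, right and midpoint rectangle rules on [a, b] with n subintervals."""
    h = (b - a) / n
    x = [a + k * h for k in range(n + 1)]            # x_0, x_1, ..., x_n
    lr = h * sum(f(x[k - 1]) for k in range(1, n + 1))
    rr = h * sum(f(x[k]) for k in range(1, n + 1))
    mr = h * sum(f((x[k - 1] + x[k]) / 2) for k in range(1, n + 1))
    return lr, rr, mr

# Example: integrate f(x) = x^2 on [0, 1] (exact value 1/3).
lr, rr, mr = rectangle_rules(lambda t: t * t, 0.0, 1.0, 100)
```

For an increasing integrand, LR underestimates and RR overestimates the integral, while MR is already accurate to O(h²).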

Section 2.2 – Trapezoidal and Simpson Rules


T(f, h) = ∑_{k=1}^{n} (h/2)(f(x_{k-1}) + f(x_k))
= (h/2)(f(x_0) + 2f(x_1) + 2f(x_2) + ⋯ + 2f(x_{n-1}) + f(x_n))
= (h/2)(f(x_0) + f(x_n)) + h ∑_{k=1}^{n-1} f(x_k)

S(f, h) = ∑_{k=1}^{n} (h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k}))
= (h/3)(f(x_0) + f(x_{2n})) + (2h/3)(f(x_2) + f(x_4) + ⋯ + f(x_{2n-2})) + (4h/3)(f(x_1) + f(x_3) + ⋯ + f(x_{2n-1}))
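Both composite rules can be coded straight from the sums above. A minimal sketch, not from the notes: the function names and the test integrand f(x) = x³ are illustrative choices; for Simpson the partition has 2n subintervals of width h.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T(f, h) on n subintervals of [a, b]."""
    h = (b - a) / n
    x = [a + k * h for k in range(n + 1)]
    return h / 2 * (f(x[0]) + f(x[n])) + h * sum(f(x[k]) for k in range(1, n))

def simpson(f, a, b, n):
    """Composite Simpson rule S(f, h) on 2n subintervals of [a, b]."""
    h = (b - a) / (2 * n)
    x = [a + k * h for k in range(2 * n + 1)]
    return sum(h / 3 * (f(x[2 * k - 2]) + 4 * f(x[2 * k - 1]) + f(x[2 * k]))
               for k in range(1, n + 1))

# Example: integrate x^3 on [0, 1] (exact value 1/4).
t_val = trapezoid(lambda u: u ** 3, 0.0, 1.0, 100)
s_val = simpson(lambda u: u ** 3, 0.0, 1.0, 2)
```

Simpson's rule has degree of precision 3, so it integrates the cubic exactly even on a coarse partition.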

Section 2.3 – Piecewise Continuous Functions and Accuracy


Local Error: e_k = ∫_{x_{k-1}}^{x_k} f(x) dx − h f(x_k^*)

Global Error:
E(f, h) = ∫_a^b f(x) dx − ∑_{k=1}^{n} h f(x_k^*)
= ∑_{k=1}^{n} ∫_{x_{k-1}}^{x_k} f(x) dx − ∑_{k=1}^{n} h f(x_k^*)
= ∑_{k=1}^{n} (∫_{x_{k-1}}^{x_k} f(x) dx − h f(x_k^*))
= e_1 + e_2 + e_3 + ⋯ + e_n

Global Error Estimate for LR: |∫_a^b f(x) dx − LR(f, h)| ≤ (1/2)M_1(b − a)h, with M_1 = max_{[a,b]} |f′|

Global Error Estimate for RR: |∫_a^b f(x) dx − RR(f, h)| ≤ (1/2)M_1(b − a)h, with M_1 = max_{[a,b]} |f′|

Global Error Estimate for MR: |∫_a^b f(x) dx − MR(f, h)| ≤ (1/24)M_2(b − a)h^2, with M_2 = max_{[a,b]} |f″|

Global Error Estimate for T: |∫_a^b f(x) dx − T(f, h)| ≤ (1/12)M_2(b − a)h^2, with M_2 = max_{[a,b]} |f″|

Global Error Estimate for S: |∫_a^b f(x) dx − S(f, h)| ≤ (1/180)M_4(b − a)h^4, with M_4 = max_{[a,b]} |f^(4)|
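The orders h, h², h⁴ in these estimates can be checked empirically: halving h should divide the error by roughly 2, 4, or 16 respectively. A sketch using the O(h²) midpoint rule on f(x) = eˣ over [0, 1]; the test problem and function name are illustrative choices, not from the notes.

```python
import math

def mid_rule(f, a, b, n):
    """Midpoint rectangle rule MR(f, h) on n subintervals of [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k - 0.5) * h) for k in range(1, n + 1))

exact = math.e - 1.0                      # integral of e^x over [0, 1]
e1 = abs(mid_rule(math.exp, 0.0, 1.0, 10) - exact)
e2 = abs(mid_rule(math.exp, 0.0, 1.0, 20) - exact)
ratio = e1 / e2   # close to 4 = 2^2, consistent with the O(h^2) estimate for MR
```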

Section 2.4 – Local Errors for the Rectangle Rules


Local Error for LR: |e_k| = |∫_{x_{k-1}}^{x_k} f − f(x_{k-1})h| ≤ (1/2)Mh^2

Local Error for RR: |e_k| = |∫_{x_{k-1}}^{x_k} f − f(x_k)h| ≤ (1/2)Mh^2

Local Error for MR: |e_k| = |∫_{x_{k-1}}^{x_k} f − f(x_k^*)h| ≤ (1/24)Mh^3
(where x_k^* = (x_k + x_{k-1})/2 is the midpoint of the interval [x_{k-1}, x_k])

Section 3.1 – First Order Differential Equations: Euler’s Method


t_{k+1}: t_k + h
y_{k+1} (Euler's Method): y_k + h f(y_k)
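Euler's method for an autonomous equation y′ = f(y) is a one-line update per step. A minimal sketch, not from the notes: the function name `euler` and the example problem y′ = y, y(0) = 1 are illustrative choices.

```python
def euler(f, y0, t0, h, n):
    """Euler's method for the autonomous IVP y' = f(y), y(t0) = y0; n steps."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(y)   # y_{k+1} = y_k + h f(y_k)
        t = t + h          # t_{k+1} = t_k + h
    return t, y

# Example: y' = y, y(0) = 1; the exact solution gives y(1) = e ~ 2.71828.
t_end, y_end = euler(lambda y: y, 1.0, 0.0, 0.001, 1000)
```

Consistent with the O(h) global error estimate, the approximation at t = 1 is close to e but not exact.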

Section 3.2 – First Order Differential Equations: Improved Euler’s Method


t_{k+1}: t_k + h
p_{k+1}: y_k + h f(y_k)
y_{k+1} (Improved Euler's Method): y_k + (h/2)(f(y_k) + f(p_{k+1}))
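The predictor-corrector structure above maps directly onto code. A minimal sketch, not from the notes: the function name `improved_euler` and the example problem are illustrative choices.

```python
def improved_euler(f, y0, h, n):
    """Improved Euler (predictor-corrector) for y' = f(y), y(0) = y0; n steps."""
    y = y0
    for _ in range(n):
        p = y + h * f(y)                   # predictor p_{k+1} (Euler step)
        y = y + h / 2 * (f(y) + f(p))      # corrector y_{k+1}
    return y

# Example: y' = y, y(0) = 1 on [0, 1]; exact value y(1) = e ~ 2.71828.
y_end = improved_euler(lambda y: y, 1.0, 0.01, 100)
```

The corrector averages the slopes at y_k and at the predicted point, which raises the global accuracy from O(h) to O(h²).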

Section 3.3 – Accuracy


Global Error: E_k = y(t_k) − y_k
Local Error: e_k = y(t_k) − y_k^*, where y_k^* = y(t_{k-1}) + h f(y(t_{k-1}))
Global Error Estimate: |E_k| ≤ C e^{DT} h
Local Error Estimate: |e_k| ≤ (1/2)M_2 h^2

Please Note…

E_k denotes the global error
e_k denotes the local error
C, D are constants that exist if the solution of an initial value problem is approximated on [0, T]
T: the solution of an initial value problem is approximated on the interval [0, T]
M_2 = max |y″|
h denotes the step length
Section 3.4 – Systems of Differential Equations
Euler algorithm (for homogeneous linear systems): u_{k+1} = (I + hA)u_k
Improved Euler algorithm (for homogeneous linear systems): u_{k+1} = (I + hA + (h^2/2)A^2)u_k
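For u′ = Au the Euler update u_{k+1} = (I + hA)u_k is just a matrix-vector product per step. A minimal sketch using plain lists to stay dependency-free; the function names and the rotation example are illustrative choices, not from the notes.

```python
def matvec(A, u):
    """Matrix-vector product A u for a matrix stored as a list of rows."""
    return [sum(a * x for a, x in zip(row, u)) for row in A]

def euler_linear(A, u0, h, n):
    """u_{k+1} = (I + hA) u_k for the homogeneous linear system u' = Au."""
    u = u0[:]
    for _ in range(n):
        Au = matvec(A, u)
        u = [ui + h * v for ui, v in zip(u, Au)]   # u + h*A*u = (I + hA)u
    return u

# Example: u' = [[0, 1], [-1, 0]] u with u(0) = (1, 0) describes a rotation;
# after one full period t = 2*pi the exact solution returns to (1, 0).
A = [[0.0, 1.0], [-1.0, 0.0]]
u = euler_linear(A, [1.0, 0.0], 2 * 3.141592653589793 / 10000, 10000)
```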

Section 3.5 – Higher Order Differential Equations


No formulas

Section 4.1 – Gauss Elimination


Gauss Elimination Notation
Equation(k) ← Equation(k) − m × Equation(i): Equation(k) is replaced by Equation(k) minus m times Equation(i)
Equation(k) ↔ Equation(i): Equation(k) and Equation(i) are interchanged
row(k) ← row(k) − m × row(i): row(k) is replaced by row(k) minus m times row(i)
row(k) ↔ row(i): row(k) and row(i) are interchanged

Determinant
By row operations: det A = (−1)^p det(U) (where U is the upper triangular matrix obtained by row replacement operations and p is the number of row interchanges)
By Row i: det A = ∑_{j=1}^{n} (−1)^{i+j} a_{ij} M_{ij}
By Column j: det A = ∑_{i=1}^{n} (−1)^{i+j} a_{ij} M_{ij}
Section 4.2 – Iterative Methods for Linear Systems
Strictly diagonally dominant matrix: |a_{ii}| > ∑_{j≠i} |a_{ij}| for i = 1, 2, 3, …, n

Jacobi iteration:
x_1^{(r+1)} = d_1 + ∑_{k=1}^{n} c_{1k} x_k^{(r)}
x_2^{(r+1)} = d_2 + ∑_{k=1}^{n} c_{2k} x_k^{(r)}
⋮
x_n^{(r+1)} = d_n + ∑_{k=1}^{n} c_{nk} x_k^{(r)}

Gauss-Seidel:
x_1^{(r+1)} = d_1 + ∑_{k=2}^{n} c_{1k} x_k^{(r)}
x_2^{(r+1)} = d_2 + c_{21} x_1^{(r+1)} + ∑_{k=3}^{n} c_{2k} x_k^{(r)}
⋮
x_i^{(r+1)} = d_i + ∑_{k=1}^{i-1} c_{ik} x_k^{(r+1)} + ∑_{k=i+1}^{n} c_{ik} x_k^{(r)}
⋮
x_n^{(r+1)} = d_n + ∑_{k=1}^{n-1} c_{nk} x_k^{(r+1)}

2-norm: ‖x‖_2 = √(∑_{i=1}^{n} |x_i|^2)
1-norm: ‖x‖_1 = ∑_{i=1}^{n} |x_i|
Maximum norm: ‖x‖_∞ = max{|x_i| : i = 1, 2, …, n}
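Both iterations differ only in when updated components are used: Jacobi uses only the previous iterate, Gauss-Seidel uses new components as soon as they are available. A minimal sketch for Ax = b with A strictly diagonally dominant (which guarantees convergence); the function names and the 2×2 example are illustrative choices, not from the notes.

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration for Ax = b: all components use the previous iterate."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][k] * x[k] for k in range(n) if k != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel iteration: new components x_k^{(r+1)} are used immediately."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(n) if k != i)) / A[i][i]
    return x

# Example: a strictly diagonally dominant system with exact solution (1, 1).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [5.0, 7.0]
xj = jacobi(A, b, [0.0, 0.0], 50)
xg = gauss_seidel(A, b, [0.0, 0.0], 50)
```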
2. Proofs
Section 1.1 – Graphs and Interval Division
No proofs

Section 1.2 – Polynomials Equations


Theorem 2
Each polynomial equation of odd degree has at least one real root.

Proof
Considerations:
• Consider 𝑓(𝑥) = 𝑎𝑛 𝑥 𝑛 + 𝑎𝑛−1 𝑥 𝑛−1 + ⋯ + 𝑎0 = 0
• Consider the case when 𝑎𝑛 = 1 (both sides of the equation can be divided by 𝑎𝑛 )
Note the following:
• If 𝑛 is odd, 𝑥 𝑛 has the same sign as 𝑥
Choose 𝑥 such that |𝑥| > 1 and |𝑥| > |𝑎0 | + |𝑎1 | + ⋯ + |𝑎𝑛−1 |, then
|a_0 + a_1 x + ⋯ + a_{n-1} x^{n-1}|
≤ |a_0| + |a_1||x| + ⋯ + |a_{n-1}||x|^{n-1}   (triangle inequality and absolute value properties)
≤ (|a_0| + |a_1| + ⋯ + |a_{n-1}|)|x|^{n-1}   (since |x| > 1, each |x|^j ≤ |x|^{n-1})
< |x|^n   (since |x| > |a_0| + |a_1| + ⋯ + |a_{n-1}|)
Case 1: If 𝑥 = 𝑏 > 0,
then 𝑏 𝑛 > 0 and
−𝑏 𝑛 < (𝑎0 + 𝑎1 𝑏 + ⋯ + 𝑎𝑛−1 𝑏 𝑛−1 )
∴ 𝑓(𝑏) = 𝑎0 + 𝑎1 𝑏 + ⋯ + 𝑎𝑛−1 𝑏𝑛−1 + 𝑏 𝑛 > 0
Case 2: If 𝑥 = 𝑐 < 0,
then 𝑐 𝑛 < 0 and |𝑐 𝑛 | = −𝑐 𝑛 and
𝑎0 + 𝑎1 𝑐 + ⋯ + 𝑎𝑛−1 𝑐 𝑛−1 < −𝑐 𝑛
∴ 𝑓(𝑐) = 𝑎0 + 𝑎1 𝑐 + ⋯ + 𝑎𝑛−1 𝑐 𝑛−1 + 𝑐 𝑛 < 0
The result follows from the intermediate value theorem.
Theorem 4
Consider the equation f(x) = 0 with f(x) = a_n x^n + a_{n-1} x^{n-1} + ⋯ + a_0 and let
m = (|a_0| + |a_1| + ⋯ + |a_{n-1}|)/|a_n|.

• If m ≥ 1, then all the roots have absolute value less than or equal to m.
• If m < 1, all roots have absolute value less than 1.

Proof
Considerations:
• f(x) = a_n x^n + a_{n-1} x^{n-1} + ⋯ + a_0 = 0
• m = (|a_0| + |a_1| + ⋯ + |a_{n-1}|)/|a_n|
If x is a nonzero root of the equation, then
|a_n||x|^n = |a_n x^n| = |−(a_0 + a_1 x + ⋯ + a_{n-1} x^{n-1})|
≤ |a_0| + |a_1||x| + ⋯ + |a_{n-1}||x|^{n-1}   (triangle inequality and properties of absolute value)
If |x| ≥ 1, then each |x|^j ≤ |x|^{n-1}, so
|a_n||x|^n ≤ (|a_0| + |a_1| + ⋯ + |a_{n-1}|)|x|^{n-1}
Dividing by |a_n||x|^{n-1}, it follows that
|x| ≤ (|a_0| + |a_1| + ⋯ + |a_{n-1}|)/|a_n| = m
As a consequence, a root with |x| ≥ 1 can exist only if m ≥ 1.
Hence if m ≥ 1 and x is a root of the equation, then |x| ≤ m
(and if m < 1, no root with |x| ≥ 1 exists, so all roots have absolute value less than 1).

Section 1.3 – Fixed Point Iteration


Theorem 1
If |𝑔′ (𝑥)| < 1 for every 𝑥 ∈ (𝑎, 𝑏), then the equation 𝑥 = 𝑔(𝑥) has at most one solution in [𝑎, 𝑏].

Proof
Let ℎ(𝑥) = 𝑥 − 𝑔(𝑥)
then ℎ(𝑥) = 0 if and only if 𝑥 = 𝑔(𝑥).
Now, h′(x) = 1 − g′(x) > 0 (since −1 < g′(x) < 1) for every x ∈ (a, b).
Thus, ℎ(𝑥) is strictly increasing on (𝑎, 𝑏)
Therefore h has at most one zero, so x = g(x) has at most one solution in [a, b].

Section 1.4 – Newton-Raphson Method


No Proofs

Section 1.5 – Convergence


No Proofs
Section 2.1 – Numerical Integration and Rectangle Rules
𝑳𝑹(𝒇 + 𝒈, 𝒉) = 𝑳𝑹(𝒇, 𝒉) + 𝑳𝑹(𝒈, 𝒉)
LR(f + g, h) = ∑_{k=1}^{n} (f + g)(x_{k-1})h
= ∑_{k=1}^{n} [f(x_{k-1}) + g(x_{k-1})]h
= ∑_{k=1}^{n} f(x_{k-1})h + ∑_{k=1}^{n} g(x_{k-1})h
= LR(f, h) + LR(g, h)

𝑹𝑹(𝒇 + 𝒈, 𝒉) = 𝑹𝑹(𝒇, 𝒉) + 𝑹𝑹(𝒈, 𝒉)


RR(f + g, h) = ∑_{k=1}^{n} (f + g)(x_k)h
= ∑_{k=1}^{n} [f(x_k) + g(x_k)]h
= ∑_{k=1}^{n} f(x_k)h + ∑_{k=1}^{n} g(x_k)h
= RR(f, h) + RR(g, h)

𝑴𝑹(𝒇 + 𝒈, 𝒉) = 𝑴𝑹(𝒇, 𝒉) + 𝑴𝑹(𝒈, 𝒉)


MR(f + g, h) = ∑_{k=1}^{n} (f + g)(x_k^*)h
= ∑_{k=1}^{n} [f(x_k^*) + g(x_k^*)]h
= ∑_{k=1}^{n} f(x_k^*)h + ∑_{k=1}^{n} g(x_k^*)h
= MR(f, h) + MR(g, h)
where x_k^* = (x_k + x_{k-1})/2

𝑳𝑹(𝒄𝒇, 𝒉) = 𝒄𝑳𝑹(𝒇, 𝒉)
LR(cf, h) = ∑_{k=1}^{n} c f(x_{k-1})h
= c ∑_{k=1}^{n} f(x_{k-1})h
= c LR(f, h)

𝑹𝑹(𝒄𝒇, 𝒉) = 𝒄𝑹𝑹(𝒇, 𝒉)
RR(cf, h) = ∑_{k=1}^{n} c f(x_k)h
= c ∑_{k=1}^{n} f(x_k)h
= c RR(f, h)
𝑴𝑹(𝒄𝒇, 𝒉) = 𝒄𝑴𝑹(𝒇, 𝒉)
MR(cf, h) = ∑_{k=1}^{n} c f(x_k^*)h
= c ∑_{k=1}^{n} f(x_k^*)h
= c MR(f, h)

The degree of precision for the left-hand rectangle rule is 0


Let f(x) = k.
Result 1:
∫_a^b f(x) dx = kx|_a^b = k(b − a)
Result 2:
LR(f, h) = h[f(x_0) + f(x_1) + ⋯ + f(x_{n-1})] = h(k + k + ⋯ + k) = h(nk) = ((b − a)/n)nk = k(b − a)
Combining Results 1 and 2:
E(f, h) = ∫_a^b f(x) dx − LR(f, h) = k(b − a) − k(b − a) = 0
Therefore, the degree of precision is at least 0.

The degree of precision for the right-hand rectangle rule is 0


Let f(x) = k.
Result 1:
∫_a^b f(x) dx = kx|_a^b = k(b − a)
Result 2:
RR(f, h) = h[f(x_1) + f(x_2) + ⋯ + f(x_n)] = h(k + k + ⋯ + k) = h(nk) = ((b − a)/n)nk = k(b − a)
Combining Results 1 and 2:
E(f, h) = ∫_a^b f(x) dx − RR(f, h) = k(b − a) − k(b − a) = 0
Therefore, the degree of precision is at least 0.
The degree of precision for the midpoint rectangle rule is 1
Let f(x) = x and let x_k^* = (x_{k-1} + x_k)/2.
Result 1:
∫_a^b f(x) dx = ∑_{k=1}^{n} ∫_{x_{k-1}}^{x_k} x dx
= ∑_{k=1}^{n} (1/2)x^2 |_{x_{k-1}}^{x_k}
= ∑_{k=1}^{n} (1/2)(x_k^2 − x_{k-1}^2)
= ∑_{k=1}^{n} (1/2)(x_k − x_{k-1})(x_k + x_{k-1})
= ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1})
Result 2:
MR(f, h) = ∑_{k=1}^{n} f(x_k^*)h
= ∑_{k=1}^{n} h · (1/2)(x_k + x_{k-1})
= ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1})
Combining Results 1 and 2:
E(f, h) = ∫_a^b f(x) dx − MR(f, h) = ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1}) − ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1}) = 0
Therefore, the degree of precision is at least 1.

Section 2.2 – Trapezoidal and Simpson Rules


𝑻(𝒇 + 𝒈, 𝒉) = 𝑻(𝒇, 𝒉) + 𝑻(𝒈, 𝒉)
T(f + g, h) = ∑_{k=1}^{n} (h/2)((f + g)(x_{k-1}) + (f + g)(x_k))
= ∑_{k=1}^{n} (h/2)(f(x_{k-1}) + g(x_{k-1}) + f(x_k) + g(x_k))
= ∑_{k=1}^{n} [(h/2)(f(x_{k-1}) + f(x_k)) + (h/2)(g(x_{k-1}) + g(x_k))]
= ∑_{k=1}^{n} (h/2)(f(x_{k-1}) + f(x_k)) + ∑_{k=1}^{n} (h/2)(g(x_{k-1}) + g(x_k))
= T(f, h) + T(g, h)
𝑺(𝒇 + 𝒈, 𝒉) = 𝑺(𝒇, 𝒉) + 𝑺(𝒈, 𝒉)
S(f + g, h) = ∑_{k=1}^{n} (h/3)[(f + g)(x_{2k-2}) + 4(f + g)(x_{2k-1}) + (f + g)(x_{2k})]
= ∑_{k=1}^{n} (h/3)[f(x_{2k-2}) + g(x_{2k-2}) + 4f(x_{2k-1}) + 4g(x_{2k-1}) + f(x_{2k}) + g(x_{2k})]
= ∑_{k=1}^{n} [(h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k})) + (h/3)(g(x_{2k-2}) + 4g(x_{2k-1}) + g(x_{2k}))]
= ∑_{k=1}^{n} (h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k})) + ∑_{k=1}^{n} (h/3)(g(x_{2k-2}) + 4g(x_{2k-1}) + g(x_{2k}))
= S(f, h) + S(g, h)

𝑻(𝒄𝒇, 𝒉) = 𝒄𝑻(𝒇, 𝒉)
T(cf, h) = ∑_{k=1}^{n} c (h/2)(f(x_{k-1}) + f(x_k))
= c ∑_{k=1}^{n} (h/2)(f(x_{k-1}) + f(x_k))
= c T(f, h)

𝑺(𝒄𝒇, 𝒉) = 𝒄𝑺(𝒇, 𝒉)
S(cf, h) = ∑_{k=1}^{n} c (h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k}))
= c ∑_{k=1}^{n} (h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k}))
= c S(f, h)
The degree of precision for the trapezoidal rule is 1
Let f(x) = x.
Result 1:
∫_a^b f(x) dx = ∑_{k=1}^{n} ∫_{x_{k-1}}^{x_k} x dx
= ∑_{k=1}^{n} (1/2)x^2 |_{x_{k-1}}^{x_k}
= ∑_{k=1}^{n} (1/2)(x_k^2 − x_{k-1}^2)
= ∑_{k=1}^{n} (1/2)(x_k − x_{k-1})(x_k + x_{k-1})
= ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1})
Result 2:
T(f, h) = ∑_{k=1}^{n} (h/2)(f(x_{k-1}) + f(x_k))
= ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1})
Combining Results 1 and 2:
E(f, h) = ∫_a^b f(x) dx − T(f, h) = ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1}) − ∑_{k=1}^{n} (1/2)h(x_k + x_{k-1}) = 0
Therefore, the degree of precision is at least 1.
The degree of precision for Simpson’s rule is 3
Let f(x) = x.
Result 1:
∫_a^b f(x) dx = ∑_{k=1}^{n} ∫_{x_{2k-2}}^{x_{2k}} x dx
= ∑_{k=1}^{n} (1/2)x^2 |_{x_{2k-2}}^{x_{2k}}
= ∑_{k=1}^{n} (1/2)(x_{2k}^2 − x_{2k-2}^2)
= ∑_{k=1}^{n} (1/2)(x_{2k} − x_{2k-2})(x_{2k} + x_{2k-2})
= ∑_{k=1}^{n} (1/2)(2h)(x_{2k} + x_{2k-2})
= ∑_{k=1}^{n} h(x_{2k} + x_{2k-2})
Result 2:
S(f, h) = ∑_{k=1}^{n} (h/3)(f(x_{2k-2}) + 4f(x_{2k-1}) + f(x_{2k}))
= ∑_{k=1}^{n} (h/3)(x_{2k-2} + 4x_{2k-1} + x_{2k})
= ∑_{k=1}^{n} (h/3)(x_{2k-2} + 4 · (x_{2k-2} + x_{2k})/2 + x_{2k})   (since x_{2k-1} is the midpoint of [x_{2k-2}, x_{2k}])
= ∑_{k=1}^{n} (h/3)(x_{2k-2} + 2x_{2k-2} + 2x_{2k} + x_{2k})
= ∑_{k=1}^{n} (h/3)(3x_{2k-2} + 3x_{2k})
= ∑_{k=1}^{n} h(x_{2k-2} + x_{2k})
Combining Results 1 and 2:
E(f, h) = ∫_a^b f(x) dx − S(f, h) = ∑_{k=1}^{n} h(x_{2k} + x_{2k-2}) − ∑_{k=1}^{n} h(x_{2k-2} + x_{2k}) = 0
Therefore, the degree of precision is at least 1.
Section 2.3 – Piecewise Continuous Functions and Accuracy
|∫_a^b f(x) dx − LR(f, h)| ≤ (1/2)M_1(b − a)h

Consider the following:
|E(f, h)| ≤ |e_1| + |e_2| + ⋯ + |e_n| = ∑_{k=1}^{n} |e_k|
Choose M_1 = max_{[a,b]} |f′(x)|; then
max_{[x_{k-1},x_k]} |f′| ≤ max_{[a,b]} |f′| = M_1
⇒ |e_k| ≤ (h^2/2) max_{[x_{k-1},x_k]} |f′| ≤ (h^2/2) max_{[a,b]} |f′| = (h^2/2)M_1
Combining these results, we get
|E(f, h)| ≤ ∑_{k=1}^{n} |e_k| ≤ ∑_{k=1}^{n} (h^2/2)M_1 = n(h^2/2)M_1 = (1/2)M_1(b − a)h   (since nh = b − a)
|∫_a^b f(x) dx − RR(f, h)| ≤ (1/2)M_1(b − a)h

Consider the following:
|E(f, h)| ≤ |e_1| + |e_2| + ⋯ + |e_n| = ∑_{k=1}^{n} |e_k|
Choose M_1 = max_{[a,b]} |f′(x)|; then
max_{[x_{k-1},x_k]} |f′| ≤ max_{[a,b]} |f′| = M_1
⇒ |e_k| ≤ (h^2/2) max_{[x_{k-1},x_k]} |f′| ≤ (h^2/2) max_{[a,b]} |f′| = (h^2/2)M_1
Combining these results, we get
|E(f, h)| ≤ ∑_{k=1}^{n} |e_k| ≤ ∑_{k=1}^{n} (h^2/2)M_1 = n(h^2/2)M_1 = (1/2)M_1(b − a)h   (since nh = b − a)
|∫_a^b f(x) dx − MR(f, h)| ≤ (1/24)M_2(b − a)h^2

Consider the following:
|E(f, h)| ≤ |e_1| + |e_2| + ⋯ + |e_n| = ∑_{k=1}^{n} |e_k|
Choose M_2 = max_{[a,b]} |f″(x)|; then
max_{[x_{k-1},x_k]} |f″| ≤ max_{[a,b]} |f″| = M_2
⇒ |e_k| ≤ (h^3/24) max_{[x_{k-1},x_k]} |f″| ≤ (h^3/24) max_{[a,b]} |f″| = (h^3/24)M_2
Combining these results, we get
|E(f, h)| ≤ ∑_{k=1}^{n} |e_k| ≤ ∑_{k=1}^{n} (h^3/24)M_2 = n(h^3/24)M_2 = (1/24)M_2(b − a)h^2   (since nh = b − a)
|∫_a^b f(x) dx − T(f, h)| ≤ (1/12)M_2(b − a)h^2

Consider the following:
|E(f, h)| ≤ |e_1| + |e_2| + ⋯ + |e_n| = ∑_{k=1}^{n} |e_k|
Choose M_2 = max_{[a,b]} |f″(x)|; then
max_{[x_{k-1},x_k]} |f″| ≤ max_{[a,b]} |f″| = M_2
⇒ |e_k| ≤ (h^3/12) max_{[x_{k-1},x_k]} |f″| ≤ (h^3/12) max_{[a,b]} |f″| = (h^3/12)M_2
Combining these results, we get
|E(f, h)| ≤ ∑_{k=1}^{n} |e_k| ≤ ∑_{k=1}^{n} (h^3/12)M_2 = n(h^3/12)M_2 = (1/12)M_2(b − a)h^2   (since nh = b − a)
|∫_a^b f(x) dx − S(f, h)| ≤ (1/180)M_4(b − a)h^4

Consider the following:
|E(f, h)| ≤ |e_1| + |e_2| + ⋯ + |e_n| = ∑_{k=1}^{n} |e_k|
Choose M_4 = max_{[a,b]} |f^(4)(x)|; then
max_{[x_{k-1},x_k]} |f^(4)| ≤ max_{[a,b]} |f^(4)| = M_4
⇒ |e_k| ≤ (h^5/180) max_{[x_{k-1},x_k]} |f^(4)| ≤ (h^5/180) max_{[a,b]} |f^(4)| = (h^5/180)M_4
Combining these results, we get
|E(f, h)| ≤ ∑_{k=1}^{n} |e_k| ≤ ∑_{k=1}^{n} (h^5/180)M_4 = n(h^5/180)M_4 = (1/180)M_4(b − a)h^4   (since nh = b − a)
Section 2.4 – Local Errors for Rectangle Rules
|e_k| = |∫_{x_{k-1}}^{x_k} f − f(x_{k-1})h| ≤ (1/2)Mh^2   (Local Error for LR)

First note that
e_k = ∫_{x_{k-1}}^{x_k} f(x) dx − h f(x_{k-1})   (from the definition)
= ∫_{x_{k-1}}^{x_k} f(x) dx − ∫_{x_{k-1}}^{x_k} f(x_{k-1}) dx   (since h = x_k − x_{k-1} and f(x_{k-1}) is a constant)
= ∫_{x_{k-1}}^{x_k} (f(x) − f(x_{k-1})) dx
Therefore,
|e_k| = |∫_{x_{k-1}}^{x_k} (f(x) − f(x_{k-1})) dx| ≤ ∫_{x_{k-1}}^{x_k} |f(x) − f(x_{k-1})| dx   (*)
Now let M be an upper bound for |f′| on [x_{k-1}, x_k].
From the Mean Value Theorem, f(b) − f(a) = f′(c)(b − a), so
f(x) − f(x_{k-1}) = f′(c)(x − x_{k-1}) for some c ∈ [x_{k-1}, x]
∴ |f(x) − f(x_{k-1})| = |f′(c)||x − x_{k-1}| ≤ M|x − x_{k-1}|   (**)
Combining (*) and (**):
|e_k| ≤ ∫_{x_{k-1}}^{x_k} M|x − x_{k-1}| dx
= M((1/2)x^2 − x · x_{k-1}) |_{x_{k-1}}^{x_k}   (evaluating the integral; x ≥ x_{k-1} on the interval)
= M[(1/2)(x_k^2 − x_{k-1}^2) − x_{k-1}(x_k − x_{k-1})]   (algebra)
= M[(1/2)(x_k − x_{k-1})(x_k + x_{k-1}) − x_{k-1}(x_k − x_{k-1})]   (algebra)
= M(1/2)(x_k − x_{k-1})^2   (algebra)
= (1/2)Mh^2   (since h = x_k − x_{k-1})
|e_k| = |∫_{x_{k-1}}^{x_k} f − f(x_k)h| ≤ (1/2)Mh^2   (Local Error for RR)

First note that
e_k = ∫_{x_{k-1}}^{x_k} f(x) dx − h f(x_k)   (from the definition)
= ∫_{x_{k-1}}^{x_k} f(x) dx − ∫_{x_{k-1}}^{x_k} f(x_k) dx   (since h = x_k − x_{k-1} and f(x_k) is a constant)
= ∫_{x_{k-1}}^{x_k} (f(x) − f(x_k)) dx
Therefore,
|e_k| = |∫_{x_{k-1}}^{x_k} (f(x) − f(x_k)) dx| ≤ ∫_{x_{k-1}}^{x_k} |f(x) − f(x_k)| dx   (*)
Now let M be an upper bound for |f′| on [x_{k-1}, x_k].
From the Mean Value Theorem, f(b) − f(a) = f′(c)(b − a), so
f(x) − f(x_k) = f′(c)(x − x_k) for some c ∈ [x, x_k]
∴ |f(x) − f(x_k)| = |f′(c)||x − x_k| ≤ M|x − x_k|   (**)
Combining (*) and (**):
|e_k| ≤ ∫_{x_{k-1}}^{x_k} M|x − x_k| dx = ∫_{x_{k-1}}^{x_k} M(x_k − x) dx   (since x ≤ x_k on the interval)
= M(x · x_k − (1/2)x^2) |_{x_{k-1}}^{x_k}   (evaluating the integral)
= M[x_k(x_k − x_{k-1}) − (1/2)(x_k^2 − x_{k-1}^2)]   (algebra)
= M[x_k(x_k − x_{k-1}) − (1/2)(x_k − x_{k-1})(x_k + x_{k-1})]   (algebra)
= M(1/2)(x_k − x_{k-1})^2   (algebra)
= (1/2)Mh^2   (since h = x_k − x_{k-1})
Section 3.1 – First Order Differential Equations: Euler’s Method
Derivation of Euler’s Method
Consider a general initial value problem 𝑦 ′ (𝑡) = 𝑓(𝑦(𝑡)) with 𝑦(𝑡0 ) = 𝑦0 .

The tangent to the graph of the solution 𝑦 through the point (𝑡0 , 𝑦0 ) has slope:
𝑦 ′ (𝑡0 ) = 𝑓(𝑦(𝑡0 )) = 𝑓(𝑦0 )
If (t_1, y_1) is the point on the tangent at t_1 = t_0 + h, the slope of the tangent is also given by (y_1 − y_0)/h. From this it follows that y_1 = y_0 + h f(y_0).

Now to calculate 𝑦2 : The tangent to the graph of the solution 𝑦 at the point (𝑡1 , 𝑦(𝑡1 )) has a slope 𝑦 ′ (𝑡1 ) =
𝑓(𝑦(𝑡1 )).

As only an estimate y_1 for y(t_1) is known, we only have an estimate for this slope: y′(t_1) ≈ f(y_1).
Also the point (𝑡1 , 𝑦(𝑡1 )) is unknown and only the point (𝑡1 , 𝑦1 ) close to the graph of 𝑦 is known.

For the time interval [t_1, t_2] the function y can be approximated by a line (not a tangent line) through the point (t_1, y_1) with slope f(y_1). It then follows that (y_2 − y_1)/h = f(y_1), i.e. y_2 = y_1 + h f(y_1).

Section 3.2 – First Order Differential Equations: Improved Euler’s Method


Derivation of Improved Euler’s Method
For an initial value problem 𝑦 ′ (𝑡) = 𝑓(𝑦(𝑡)), 𝑦(𝜏) = 𝑏
We choose evenly spaced times 𝜏 = 𝑡0 < 𝑡1 < 𝑡2 < ⋯ < 𝑡𝑛 < 𝑇 with step length ℎ.

For the interval [t_k, t_{k+1}] it follows from the differential equation and the second fundamental theorem of calculus that

y(t_{k+1}) − y(t_k) = ∫_{t_k}^{t_{k+1}} f(y(t)) dt

Using the trapezoidal rule for numerical integration, it follows that

y_{k+1} ≈ y_k + (h/2)(f(y(t_k)) + f(y(t_{k+1})))
≈ y_k + (h/2)(f(y_k) + f(y_{k+1}))

If y_k is known, the value of y_{k+1} cannot be calculated directly from this equation due to the term f(y_{k+1}). An estimate (the predictor) p_{k+1} for y_{k+1} is obtained using Euler's method:

p_{k+1} = y_k + h f(y_k) ≈ y_{k+1}

And therefore we have

y_{k+1} = y_k + (h/2)(f(y_k) + f(p_{k+1}))

Section 3.3 – Accuracy


No Proofs

Section 3.4 – Systems of Differential Equations


No Proofs

Section 3.5 – Higher Order Differential Equations


No Proofs

Section 4.1 – Gauss Elimination


No Proofs

Section 4.2 – Iterative Methods for Linear Systems


No Proofs
3. Graphs
Section 1.1 – Graphs and Interval Division
No graphs

Section 1.2 – Polynomial Equations


No graphs

Section 1.3 – Fixed Point Iteration


Graphical Illustration of Convergence and Divergence

Section 1.4 – Newton-Raphson Method


Graphical Illustration of the Newton-Raphson Method
Section 1.5 – Convergence
No Graphs

Section 2.1 – Numerical Integration and Rectangle Rules


No Graphs

Section 2.2 – Trapezoidal and Simpson Rules


No Graphs

Section 2.3 – Piecewise Continuous Functions and Accuracy


No Graphs

Section 2.4 – Local Errors for Rectangle Rules


No Graphs

Section 3.1 – First Order Differential Equations: Euler’s Method


Graphical Illustration of Euler’s Method
Section 3.2 – First Order Differential Equations: Improved Euler’s Method
Graphical Illustration of Improved Euler’s Method

Section 3.3 – Accuracy


No graphs

Section 3.4 – Systems of Differential Equations


No graphs

Section 3.5 – Higher Order Differential Equations


No graphs

Section 4.1 – Gauss Elimination


No graphs

Section 4.2 – Iterative Methods for Linear Systems


No graphs
