CHAPTER ONE
Introduction
There are two methods for solving mathematical problems: the analytical method and the numerical method.
The analytical method is a direct method and its solution is error free, that is, we obtain the exact value. The numerical method, in contrast, shows the way to obtain numerical answers to applied problems: when a problem is difficult to solve by analytical methods, we use numerical methods and obtain an approximate value.
1. BASIC CONCEPTS IN ERROR ANALYSIS
1.1 Sources of error
The solution of a problem obtained by numerical methods contains some errors. To minimize the error, it is essential to identify its causes or sources; there are three types of sources of numerical error:
1. Round-off error
2. Truncation error
3. Inherent error
1. Round-Off Error
All computing devices represent numbers, except for integers, with some
imprecision. Digital computers will nearly always use floating-point numbers of fixed
word length; the true values are not expressed exactly by such representations. We
call the error due to this computer imperfection the round-off error. When numbers are rounded as they are stored as floating-point numbers, the round-off error is smaller than if the trailing digits were simply chopped off.
The simplest way of reducing the number of significant digits in the representation
of a number is merely to ignore the unwanted digits.
Rules for rounding a number to n significant digits – discard all digits to the right of the nth digit; if the discarded part is
a. less than half a unit in the nth place, leave the nth digit unchanged;
b. greater than half a unit in the nth place, increase the nth digit by one;
c. exactly half a unit in the nth place, increase the nth digit by one if it is odd, otherwise leave it unchanged.
For example, since π is 3.14159…, when rounded to four decimal places (4D) it is 3.1416, and the representation 3.1416 is correct to five significant digits. (The error involved in this reduction of the number of digits is the round-off error.)
i. 7.886 rounded to 3 significant digits is 7.89
ii. 7.885 rounded to 3 significant digits is 7.88
iii. 7.875 rounded to 3 significant digits is 7.88
iv. 7.884 rounded to 3 significant digits is 7.88
Exercise 1. Round off the following to five significant figures
a. 8.99997
b. 9.99998
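The rounding rules above, including the round-half-to-even tie rule in (c), can be checked with Python's decimal module, whose ROUND_HALF_EVEN mode implements the same convention (the helper name round_sig is our own):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

def round_sig(value: str, n: int) -> str:
    """Round a decimal string to n significant digits, ties to even."""
    ctx = Context(prec=n, rounding=ROUND_HALF_EVEN)
    return str(ctx.plus(Decimal(value)))  # unary plus applies the context

# Examples i-iv from the text:
print(round_sig("7.886", 3))  # 7.89  (discarded part > half a unit)
print(round_sig("7.885", 3))  # 7.88  (exact half, 8 is even: unchanged)
print(round_sig("7.875", 3))  # 7.88  (exact half, 7 is odd: rounded up)
print(round_sig("7.884", 3))  # 7.88  (discarded part < half a unit)
```

The input is passed as a string so that the value reaches decimal exactly, without first being rounded to a binary float.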
2. Truncation Error (chopping error)
Truncation error occurs due to the finite representation of an inherently infinite process.
Taylor series: the expansion of a function about a point x₀, which has the form
f(x) = f(x₀) + (x − x₀) f′(x₀) + ((x − x₀)²/2!) f″(x₀) + ⋯
At x₀ = 0 the series is called a Maclaurin series.
Example 1:
The Taylor series expansion of sin x is
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
This is an infinite series expansion. If only the first five terms are taken to compute the value of sin x for a given x, then we obtain an approximate result. Here, the error occurs due to the truncation of the series.
Example 2:
The Maclaurin series for eˣ is
eˣ = 1 + x + x²/2! + x³/3! + ⋯
This series has an infinite number of terms, but when using it to calculate eˣ only a finite number of terms can be used. For instance, if we use three terms to calculate eˣ, then
eˣ ≈ 1 + x + x²/2!
The truncation error for such an approximation is
truncation error = eˣ − (1 + x + x²/2!) = x³/3! + x⁴/4! + ⋯
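The truncation error of the three-term approximation can be checked numerically; this is a sketch in which Python's math.exp stands in for the exact eˣ, and exp_partial is our own helper:

```python
import math

def exp_partial(x: float, terms: int) -> float:
    """Partial sum of the Maclaurin series 1 + x + x^2/2! + ..."""
    return sum(x**k / math.factorial(k) for k in range(terms))

x = 0.5
approx = exp_partial(x, 3)            # 1 + x + x^2/2! = 1.625
trunc_err = math.exp(x) - approx      # = x^3/3! + x^4/4! + ...
print(approx, trunc_err)

# The first neglected term x^3/3! already accounts for most of the error:
print(x**3 / math.factorial(3))
```

Adding more terms shrinks the truncation error, which is exactly what the series expression x³/3! + x⁴/4! + ⋯ predicts.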
3. Inherent Error
This type of error is present in the statement of the problem itself, before we determine its solution. Inherent errors occur due to the simplified assumptions made in the process of mathematical modelling of a problem. They can also arise when the data are obtained from physical measurements of the parameters of the proposed problem.
Generally this error occurs due to human mistakes such as data uncertainty.
Example: 1. The height of Mr. X is 1.4 cm.
2. The population of Maychew city is 11,000.
In these two examples an error can occur when someone's height is not measured correctly, or when the population is not counted properly, giving false data.
1.2 Measurement of errors
There are three measures of error:
precision, accuracy and significant digits.
Precision: refers to how closely computed values agree with each other (how repeatable a measurement is).
Accuracy: refers to how close you are to the true value.
Example: let the true value be 0.5009. Mr. A obtains the values 0.345, 0.345, 0.343 and Mr. B obtains the values 0.459, 0.499, 0.501.
Therefore Mr. A is more precise and Mr. B is more accurate.
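The distinction can be quantified with Python's statistics module (a small sketch): the spread around the mean measures precision, while the distance of the mean from the true value measures accuracy.

```python
from statistics import mean, pstdev

true_value = 0.5009
mr_a = [0.345, 0.345, 0.343]
mr_b = [0.459, 0.499, 0.501]

# Precision: smaller spread = more repeatable.
print(pstdev(mr_a), pstdev(mr_b))          # Mr A has the smaller spread

# Accuracy: smaller distance from the true value.
print(abs(mean(mr_a) - true_value),
      abs(mean(mr_b) - true_value))        # Mr B's mean is closer to the truth
```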
Significant digits (figures)
Another term that is commonly used to express accuracy is significant digits, that is, how many digits in the number carry meaning.
Rules of significant digits
1. Any nonzero digit is significant.
Ex: 3453 has 4 s.f.
2. Zeros that lie between nonzero digits are significant.
Ex: 3.045 and 3045 each have 4 s.f.
3. Leading zeros are not significant.
Ex: 0.0003 has 1 s.f.
4. Trailing zeros in a number without a decimal point are not significant.
Ex: 2000 has 1 s.f.
5. Trailing zeros in a number with a decimal point are significant.
Ex: 2.300 has 4 s.f.
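The five rules can be collected into a small string-based counter. This is a sketch for plain decimal strings only (the function name sig_figs is our own; scientific notation and the bare value 0 are not handled):

```python
def sig_figs(s: str) -> int:
    """Count significant figures of a plain decimal string using rules 1-5."""
    s = s.lstrip("+-")
    if "." in s:
        # Rules 3 and 5: drop leading zeros; trailing zeros do count.
        return len(s.replace(".", "").lstrip("0"))
    # Rules 3 and 4: without a decimal point, trailing zeros do not count.
    return len(s.lstrip("0").rstrip("0"))

for example in ("3453", "3.045", "3045", "0.0003", "2000", "2.300"):
    print(example, sig_figs(example))
```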
Floating-point numbers
Real numbers can be represented in a computer in two ways.
1. Fixed-point representation: the decimal point is placed in a fixed position.
Example: 11.34
2. Floating-point representation: the position of the decimal point is not fixed, so the same number can be expressed in different ways.
Example: 0.1134 × 10², 1.134 × 10¹, 11.34 × 10⁰, 113.4 × 10⁻¹, …
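Python's scientific formatting shows the normalized floating-point form of the same fixed-point value (a small sketch):

```python
x = 11.34

# Fixed-point and normalized floating-point views of the same number:
print(f"{x:.2f}")    # 11.34        (fixed point)
print(f"{x:.3e}")    # 1.134e+01    (floating point: 1.134 x 10^1)

# The scalings listed above all denote the same value:
assert 0.1134e2 == 1.134e1 == 11.34e0 == 113.4e-1
```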
1.3 Exact and Approximate Numbers
To solve numerical problems we use two types of numbers.
Exact number: gives the true value of a result.
Example: 2, 4, 2.3, 1/2, e, π are exact numbers.
Approximate number: a number obtained by retaining only a few digits. Such numbers are not exact (nor unique), but they give a value close to the true value.
Example: the approximate value of π is 3.1416.
Therefore, let x_T = true (exact) value and x_A = approximate value; then
error = x_T − x_A
1.4 Types of error
There are three common ways to express the size of the error in a computed result: absolute error, relative error and percentage error.
1. Absolute error
The absolute error is the absolute difference between the exact value x_T and the approximate value x_A, i.e. absolute error = |true value − approximate value|, which is symbolized by:
Δx = |x_T − x_A|
If we know the (signed) error x_T − x_A, we can recover the exact value by adding it to the approximation.
A given size of error is usually more serious when the magnitude of the true value is
small.
2. Relative error
This error is defined as the ratio of the absolute error to the absolute value of the exact number,
i.e. relative error = absolute error / true value, which is symbolized by:
δx = Δx / x_T
When the true value is zero, the relative error is undefined.
3. Percentage error
The percentage error of an approximate number 𝑥𝐴 is 𝛿𝑥 × 100%.
%𝐸 = 𝛿𝑥 × 100%
It is a particular type of relative error, sometimes called the relative percentage error. The percentage error gives the total error incurred while measuring 100 units instead of 1 unit, and it indicates the quality of a measurement. When the relative error is very small, the percentage error is the more convenient form to report.
Example: Find the absolute, relative and percentage errors in x_A when x_T = 1/3 and x_A = 0.333.
Solution: The absolute error is
Δx = |x_T − x_A| = |1/3 − 0.333| = 0.000333…
The relative error is
δx = Δx / x_T = 0.000333… / (1/3) ≅ 0.001
The percentage error is δx × 100% = 0.001 × 100% = 0.1%.
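The worked example can be verified directly; in this sketch, fractions.Fraction keeps x_T = 1/3 exact so no new round-off enters the check:

```python
from fractions import Fraction

x_true = Fraction(1, 3)
x_approx = Fraction("0.333")           # exactly 333/1000

abs_err = abs(x_true - x_approx)       # absolute error
rel_err = abs_err / x_true             # relative error
pct_err = rel_err * 100                # percentage error

print(abs_err, float(abs_err))   # 1/3000, i.e. 0.000333...
print(rel_err, pct_err)          # 1/1000 and exactly 0.1 (percent)
```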
Error propagation and stability
Propagated error is more subtle than the other errors. By propagated error we mean an error in the succeeding steps of a process due to the occurrence of an earlier error; such error is in addition to the local errors. It is somewhat analogous to errors in the initial conditions. Some root-finding methods find additional zeros by changing the function to remove the first root; this technique is called reducing or deflating the equation. Here the reduced equations reflect the errors from the previous stages. The solution, of course, is to confirm the later results with the original equation. If errors are magnified continuously as the method continues, eventually they will overshadow the true value, destroying its validity; we call such a method unstable. For a stable method – the desirable kind – errors made at early points die out as the method continues.
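A classic illustration of this (not from the text above) is the recurrence I_n = 1 − n·I_(n−1) for the integrals I_n = ∫₀¹ xⁿ eˣ⁻¹ dx. Run forward, the recurrence multiplies the initial round-off in I_0 by n!, so the method is unstable; run backward, the same relation divides the error by n at every step, so even a crude starting guess converges to accurate values:

```python
import math

N = 20

# Forward (unstable): I_0 = 1 - 1/e, then I_n = 1 - n * I_(n-1).
# The tiny round-off in I_0 is amplified by a factor of n! at step n.
forward = [1 - math.exp(-1)]
for n in range(1, N + 1):
    forward.append(1 - n * forward[-1])

# Backward (stable): start from the crude guess I_30 = 0 and run the
# recurrence in reverse; the starting error shrinks by a factor n each step.
backward = {30: 0.0}
for n in range(30, N, -1):
    backward[n - 1] = (1 - backward[n]) / n

# The true I_n lies in (0, 1/(n+1)); the forward value has been destroyed.
print(forward[N])    # wildly wrong: round-off amplified by 20!
print(backward[N])   # accurate
```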