
Introduction to Computational Physics

Course Code: PHY1003

Module - 3

Dr. SANJIB NAYAK


Assistant Professor, VIT Bhopal

Sequential Numbers
A sequence is a collection of elements in a particular order. Elements of a
sequence can be related to each other, and are often defined using recursion.
Sequences can be finite or infinite.

Some of the most common examples of sequences are:


• Arithmetic Sequences
• Geometric Sequences
• Harmonic Sequences
• Fibonacci Numbers
Sequential Numbers
Arithmetic Sequences
A sequence in which every term is created by adding or subtracting a definite
number to the preceding term is an arithmetic sequence.
Example: 1, 4, 7, 10, 13, … (common difference 3)

Geometric Sequences
A sequence in which every term is obtained by multiplying or dividing a
definite number with the preceding number is known as a geometric
sequence.
Sequential Numbers
Harmonic Sequences
A series of numbers is said to be in harmonic sequence if the reciprocals of
all the elements of the sequence form an arithmetic sequence.
Harmonic sequence: 1/5, 1/10, 1/15, 1/20, …  (the reciprocals 5, 10, 15, 20, … form an arithmetic sequence)

Fibonacci Numbers
Fibonacci numbers form an interesting sequence in which each element is
obtained by adding the two preceding elements, and the sequence starts with
0 and 1. The sequence is defined by F0 = 0, F1 = 1, and Fn = Fn-1 + Fn-2.
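The recursive definition above translates directly into a short Python function (a minimal sketch; the function name is ours):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting F0 = 0, F1 = 1."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # F(n) = F(n-1) + F(n-2)
    return seq

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```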
Probability

Sample Space: In probability, the set of all possible outcomes is called the Sample
Space. We will use S to represent the sample space. In terms of the language of sets, a
sample space is a universal set and an outcome is an element of the universal set.

The size of the sample space gives the total number of possible outcomes.


Probability
The classical probability P(A) for an event A is defined as the number of
favourable outcomes n divided by the number of possible outcomes m:

P(A) = n/m

Probability of the complementary event:

P(Ā) = 1 − n/m

The statistical definition of the probability for an event A, where n is the
number of occurrences of A in m trials, is given by:

P(A) = lim_{m→∞} n/m
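The statistical definition can be illustrated by simulation: as the number of trials m grows, the relative frequency n/m settles near the classical value. A small sketch (the event is "drawing a king from a 52-card deck"; the seed is fixed only for reproducibility):

```python
import random

random.seed(0)  # reproducible run
trials = 100_000
# An outcome is one of 52 equally likely cards; indices 0-3 stand for the kings.
kings = sum(1 for _ in range(trials) if random.randrange(52) < 4)
freq = kings / trials
print(freq)              # relative frequency n/m
print(abs(freq - 4/52))  # small: close to the classical value 1/13
```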
Probability
Example 1: One card is drawn from a deck of 52 cards, well-shuffled.
Calculate the probability that the card will (i) be a king, (ii) not be a
king.
Well-shuffling ensures equally likely outcomes.
(i) There are 4 kings in a deck.
Let E be the event 'the card drawn is a king'.
The number of favourable outcomes to the event E = 4
The number of possible outcomes = 52
Therefore, P(E) = 4/52 = 1/13

(ii) Let F be the event 'the card drawn is not a king'.


The number of favourable outcomes to F = 52 – 4 = 48
The number of possible outcomes = 52
Therefore, P(F) = 48/52 = 12/13
Probability
Example 2: Three horses A, B, and C are in a race; A is twice as likely to win
as B and B is twice as likely to win as C. What are their respective
probabilities of winning, i.e. P(A), P(B) and P(C)?

Let P(C)= p; since B is twice as likely to win as C, P(B) = 2p; and since A is
twice as likely to win as B,
P(A) = 2P(B) = 2(2p) = 4p. Now the sum of the probabilities must be 1; hence

p + 2p + 4p = 1,  so 7p = 1 and p = 1/7

Accordingly, P(A) = 4p = 4/7, P(B) = 2p = 2/7, P(C) = p = 1/7
Probability
Example 3: A coin is tossed, and a die is rolled. What is the probability that
the coin shows the head and the die shows 3?

When a coin is tossed, the outcome is either a head or a tail. Similarly, when
a die is rolled, the outcomes will be 1, 2, 3, 4, 5, 6.
The probability of getting a head = 1/2
The probability of getting a 3 = 1/6
Hence, the required probability = (1/2) (1/6) = 1/12.
Random Numbers
A random number is a number chosen as if by chance from some specified
distribution. Such numbers are independent, having no correlations between
successive numbers.

Random numbers are completely unpredictable: there is no way to determine in
advance which value will be chosen from a set of equally probable elements,
and no program can know the next value. However, we can generate numbers
that behave like random numbers using a deterministic function; these are
called pseudo-random numbers.
Random Numbers
A pseudo-random number is a number generated with the help of a deterministic
algorithm; however, it behaves as if it were random.

A random number generator has to comply with the following criteria:

1. Produce pseudo-random numbers whose statistical properties are as close as
possible to those of real random numbers.
2. Have a long period: it should generate a non-repeating sequence of random
numbers which is sufficiently long for computational purposes.
3. Be reproducible (the same seed yields the same sequence), as well as
re-startable from an arbitrary break-point.
4. Be fast and parallelizable: it should not be the limiting component in
simulations.
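A classic way to meet these criteria approximately is a linear congruential generator. The sketch below uses the widely published "Numerical Recipes" constants (an illustrative choice, not necessarily the generator used in this course):

```python
class LCG:
    """Minimal linear congruential generator:
    x_{n+1} = (a * x_n + c) mod m, with the common 32-bit constants."""
    def __init__(self, seed=1):
        self.m = 2**32
        self.a = 1664525
        self.c = 1013904223
        self.state = seed

    def next_uniform(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m  # scaled to [0, 1)

rng = LCG(seed=42)
sample = [rng.next_uniform() for _ in range(5)]
print(sample)

# Reproducibility: the same seed regenerates the same sequence.
rng2 = LCG(seed=42)
assert sample == [rng2.next_uniform() for _ in range(5)]
```

The period of this particular recurrence is 2^32, which illustrates the "long period" criterion but would be far too short for serious simulations.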
Randomness
Randomness: It is the lack of predictability of events, i.e., there is no
pattern involved. A random process is non-deterministic.

Brownian motion refers to the random movement displayed by small particles
that are suspended in fluids; it is commonly referred to as the "Brownian
movement". This motion is a result of the collisions of the particles with
other fast-moving particles in the fluid.
Random Variable
Random variables: A random variable is a variable whose value is unknown
or a function that assigns values to each of an experiment's outcomes. A
random variable can be either discrete (having specific values) or
continuous (any value in a continuous range).

The use of random variables is most common in probability and statistics,
where they are used to quantify outcomes of random occurrences.

Risk analysts use random variables to estimate the probability of an adverse
event occurring.

A random variable in statistics is a function that assigns a real value to an
outcome in the sample space of a random experiment. For example: if you roll
a die, you can assign a number to each possible outcome.
Random Variable
Let X be a random variable on a sample space S with a finite image set; say,
X(S)= {x1, x2, . . .,xn}. We make X(S) into a probability space by defining the
probability of xi to be P(X=xi) which we write f(xi). This function f on X(S),
i.e. defined by f(xi) = P(X = xi), is called the distribution or probability
function of X and is usually given in the form of a table:

  x_i:    x1     x2     x3     ..  xn
  f(x_i): f(x1)  f(x2)  f(x3)  ..  f(xn)

The distribution f satisfies the conditions

(i) f(x_i) ≥ 0  and  (ii) Σ_{i=1}^{n} f(x_i) = 1
Random Variable
Now if X is a random variable with the above distribution, then the mean or
expectation (or: expected value) of X, denoted by E(X) or μx, or simply E or μ,
is defined by
E(X) = x1 f(x1) + x2 f(x2) + … + xn f(xn) = Σ_{i=1}^{n} x_i f(x_i)
Example: A pair of fair dice is tossed. We obtain the finite equiprobable space
S consisting of the 36 ordered pairs of numbers between 1 and 6:
S = {(1,1), (1,2), …, (6,6)}
Let X assign to each point (a, b) in S the maximum of its numbers, i.e., X(a,b) =
max(a, b). Then X is a random variable with the image set
X(S) = {1,2,3,4,5,6}
We compute the distribution f of X:
f(1) = P(X=1) = P({(1,1)}) = 1/36

f(2) = P(X=2) = P({(2,1), (2,2), (1,2)}) = 3/36

f(3) = P(X=3) = P({(3,1), (3,2), (3,3), (2,3), (1,3)}) = 5/36
36
Random Variable
f(4) = P(X=4) = P({(4,1), (4,2), (4,3), (4,4), (3,4), (2,4), (1,4)}) = 7/36

Similarly,

f(5) = P(X=5) = 9/36  and  f(6) = P(X=6) = 11/36

  x_i:    1     2     3     4     5     6
  f(x_i): 1/36  3/36  5/36  7/36  9/36  11/36

We next compute the mean of X:

E(X) = Σ x_i f(x_i) = 1·(1/36) + 2·(3/36) + 3·(5/36) + 4·(7/36) + 5·(9/36) + 6·(11/36)
     = 161/36 ≈ 4.47
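The table and the mean can be checked by brute-force enumeration of the 36 outcomes (a sketch using exact fractions):

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equiprobable outcomes and tabulate X(a, b) = max(a, b)
f = {x: Fraction(0) for x in range(1, 7)}
for a, b in product(range(1, 7), repeat=2):
    f[max(a, b)] += Fraction(1, 36)

mean = sum(x * p for x, p in f.items())
print(f)     # probabilities 1/36, 3/36, ..., 11/36
print(mean)  # 161/36
```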
Random Variable
Now let Y assign to each point (a,b) in S the sum of its numbers, i.e. Y(a,b) =
a+b. Then Y is also a random variable on S with image set
Y(S) = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

  y_i:    2     3     4     5     6     7     8     9     10    11    12
  g(y_i): 1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

We next compute the mean of Y:

E(Y) = Σ y_i g(y_i) = 2·(1/36) + 3·(2/36) + … + 12·(1/36) = 7
Random Variable

The charts that follow graphically describe the above distributions. Observe
that the vertical lines drawn above the numbers on the horizontal axis are
proportional to their probabilities.
Random Variable
Some of the important theorems regarding the mean of random
variables.

Theorem 1: Let X be a random variable and k a real number. Then


(i) E(kX)= kE(X) and (ii) E(X+k) = E(X) + k.

Theorem 2: Let X and Y be random variables on the same sample space S.
Then E(X + Y) = E(X) + E(Y).

A simple induction argument yields:

Let X1, X2, …, Xn be random variables on S. Then E(X1 + … + Xn)
= E(X1) + … + E(Xn)
Random Variable
The mean of a random variable X measures, in a certain sense, the “average”
value of X. The next concept, that of the variance of X, measures the
“spread” or “dispersion” of X.
  x_i:    x1     x2     x3     ..  xn
  f(x_i): f(x1)  f(x2)  f(x3)  ..  f(xn)

Then the variance of X, denoted by Var(X), is defined by

Var(X) = Σ_{i=1}^{n} (x_i − μ)² f(x_i) = E((X − μ)²)

where μ is the mean of X. The standard deviation of X, denoted by σ_x, is the
(nonnegative) square root of Var(X):

σ_x = √Var(X)
Random Variable
The theorem below gives us an alternate and sometimes more useful formula
for calculating the variance of the random variable X.
Theorem 3:  Var(X) = Σ_{i=1}^{n} x_i² f(x_i) − μ² = E(X²) − μ²

Proof: expand (x_i − μ)² = x_i² − 2μx_i + μ² and use Σ x_i f(x_i) = μ and Σ f(x_i) = 1.

Example: Variance of the problem shown previously


  x_i:    1     2     3     4     5     6
  f(x_i): 1/36  3/36  5/36  7/36  9/36  11/36

Var(X) = Σ_{i=1}^{n} x_i² f(x_i) − μ² = E(X²) − μ²
Random Variable
The mean was estimated earlier as μ = 161/36 ≈ 4.47. We compute the variance
and standard deviation of X. First we compute E(X²):

E(X²) = 1²·(1/36) + 2²·(3/36) + 3²·(5/36) + 4²·(7/36) + 5²·(9/36) + 6²·(11/36) = 791/36 ≈ 21.97

Var(X) = E(X²) − μ² = 791/36 − (161/36)² ≈ 21.97 − 19.98 ≈ 1.97,  so σ_x ≈ 1.40
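The same numbers can be verified with exact fractions (a sketch; f(x) = (2x − 1)/36 reproduces the table above):

```python
from fractions import Fraction

# Distribution of X = max of two dice: f(x) = (2x - 1)/36
f = {x: Fraction(2 * x - 1, 36) for x in range(1, 7)}

mu = sum(x * p for x, p in f.items())          # E(X)   = 161/36
ex2 = sum(x * x * p for x, p in f.items())     # E(X^2) = 791/36
var = ex2 - mu**2                              # Var(X) = E(X^2) - mu^2
print(float(mu), float(ex2), float(var))
```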

Example: Variance of the problem shown previously with the random


variable Y.

  y_i:    2     3     4     5     6     7     8     9     10    11    12
  g(y_i): 1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
Random Variable
The mean was 7.0 for the random variable Y.

Some important properties of variance:

Let X be a random variable and k a real number. Then
(i) Var(X + k) = Var(X) and (ii) Var(kX) = k² Var(X).
Hence, (i) σ_(X+k) = σ_X and (ii) σ_(kX) = |k| σ_X

Physical interpretation of mean and variance: Suppose at each point x_i on
the x-axis a mass f(x_i) is placed. Then the mean is the center of gravity of
the system, and the variance is the moment of inertia of the system.
Random Variable
Continuous Random Variable: Suppose that X is a random variable whose
image set X(S) is a continuum of numbers such as an interval.
P(a ≤ X ≤ b) = ∫_a^b f(x) dx

In this case X is said to be a continuous random variable. The function f is
called the distribution or the continuous probability function (or: density
function) of X; it satisfies the conditions

(i) f(x) ≥ 0  and  (ii) ∫ f(x) dx = 1

analogous to the conditions we proved for the discrete case.

Random Variable
Example: Let X be a continuous random
variable with the following distribution:

Central Tendencies and Dispersion
Measures that indicate the approximate center of a distribution are called
measures of central tendency. Measures that describe the spread of the data
are measures of dispersion. These measures include the mean, median, mode,
range, upper and lower quartiles, variance, and standard deviation.
A = (1/n) Σ_{i=1}^{n} a_i = (a1 + a2 + a3 + … + an)/n    (arithmetic mean)

Example: Consider the data set: 17, 10, 9, 14, 13, 17, 12, 20, 14

mean = (Σ a_i)/n = (17 + 10 + 9 + 14 + 13 + 17 + 12 + 20 + 14)/9 = 126/9 = 14
The mean of the data set is 14.
Central Tendencies and Dispersion
Weighted arithmetic mean: When some quantities are more important than others
and do not contribute equally to the final result, multiplying each by a
coefficient (weight) gives the weighted average:

A = Σ_{i=1}^{n} w_i x_i / Σ_{i=1}^{n} w_i

Real-life example: For appointing a person to a job, the interviewer looks at
personality, working capabilities, educational qualifications, and
team-working skills. Based on the job profile, these criteria are given
different levels of importance (weights), and then the final selection is made.
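A weighted average is a one-line computation; the feature names and numbers below are made up purely for illustration (the decision-feature table from the slides is not reproduced here):

```python
# Hypothetical automobile decision features, scores out of 10, and weights
scores = {"mileage": 8, "price": 7, "safety": 9, "comfort": 6}
weights = {"mileage": 0.3, "price": 0.3, "safety": 0.25, "comfort": 0.15}

# Weighted mean: sum(w_i * x_i) / sum(w_i)
weighted_mean = sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
print(weighted_mean)  # 7.65
```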
Central Tendencies and Dispersion
Numerical Problem: The table below presents the weights of different decision
features of an automobile. With the help of this information, we need to calculate
the weighted average.
Central Tendencies and Dispersion
Median: The median of a set of data is the “middle element” when the data is
arranged in ascending order.

If there are an odd number of data points, the median is the number in the
absolute middle:

1, 2, 3, 4, 5, 6, 7, 8, 9  →  median = 5

If there is an even number of data points, the median is the mean of the two
center data points, meaning the two center values are added together and
divided by 2:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10  →  median = (5 + 6)/2 = 5.5
Central Tendencies and Dispersion
Mode: The mode is the most frequently occurring measurement in a data set.
There may be one mode; multiple modes, if more than one number occurs most
frequently; or no mode at all, if every number occurs only once.

1, 3, 6, 6, 6, 6, 7, 8, 12, 15, 17

In the above data set, 6 occurs the most. Hence the mode of the data set is 6.
Central Tendencies and Dispersion
The quartiles of a group of data are the medians of the upper and lower
halves of that set. The lower quartile, Q1, is the median of the lower half,
while the upper quartile, Q3, is the median of the upper half. If your data set
has an odd number of data points, you do not consider your median when
finding these values, but if your data set contains an even number of data points,
you will consider both middle values that you used to find your median as parts
of the upper and lower halves.
Consider the data set: 17, 10, 9, 14, 13, 17, 12, 20, 14. Estimate Q1 and Q3.

Sorted: 9, 10, 12, 13, 14, 14, 17, 17, 20 (odd count, so the median 14 is excluded)

Lower half: 9, 10, 12, 13   →  Q1 = (10 + 12)/2 = 11

Upper half: 14, 17, 17, 20  →  Q3 = (17 + 17)/2 = 17
Central Tendencies and Dispersion
Suppose, we have a dataset with even numbers (say ten values): 3, 3, 6, 8, 10,
14, 16, 16, 19, 24
The median value is the average of the middle two values, which is (10
+ 14) / 2 = 12. We will not include this median value when calculating the
quartiles.

Lower half: 3, 3, 6, 8, 10. The median value for the lower half is 6, so Q1 = 6.

Upper half: 14, 16, 16, 19, 24. The median value for the upper half is 16, so Q3 = 16.
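The exclusive-median rule described above can be sketched as a small function; both worked examples serve as checks:

```python
def quartiles(data):
    """Q1 and Q3 via the 'exclusive' rule: for an odd-sized data set the
    median is left out of both halves; for an even-sized set the halves
    share no elements."""
    s = sorted(data)
    n = len(s)
    half = n // 2
    lower = s[:half]
    upper = s[half:] if n % 2 == 0 else s[half + 1:]

    def median(v):
        m = len(v)
        mid = m // 2
        return v[mid] if m % 2 else (v[mid - 1] + v[mid]) / 2

    return median(lower), median(upper)

print(quartiles([17, 10, 9, 14, 13, 17, 12, 20, 14]))   # (11.0, 17.0)
print(quartiles([3, 3, 6, 8, 10, 14, 16, 16, 19, 24]))  # (6, 16)
```

Note that statistical packages implement several quartile conventions, so library results may differ slightly from this hand rule.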
Central Tendencies and Dispersion
Variance: It is a measurement of the spread between numbers in any particular
data set.
Standard deviation: It measures the dispersion of a data set relative to its mean.

Homework: In the case of a sample, n − 1 is used in the denominator rather
than n. Explain the reason behind this.
Frequency Distribution
Probability density function: It is a function that provides the relative
likelihood of the random variable at that given point.
Continuous uniform distribution: The uniform distribution (continuous) is one
of the simplest probability distributions in statistics. It is a continuous
distribution, which means that it takes values within a specified range,
e.g. between 0 and 1.

f(x) = 1/(b − a)  if a ≤ x ≤ b
f(x) = 0          otherwise

The expected value of a uniform distribution is:

E(X) = ∫_a^b x f(x) dx = ∫_a^b x/(b − a) dx = (a + b)/2

The variance of a uniform distribution is:

Var(X) = E(X²) − E(X)² = ∫_a^b x²/(b − a) dx − [(a + b)/2]² = (b − a)²/12
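The formulas E(X) = (a + b)/2 and Var(X) = (b − a)²/12 can be checked by sampling (a sketch using the standard library's uniform generator; the seed is fixed for reproducibility):

```python
import random

random.seed(1)  # reproducible run
a, b = 0.0, 1.0
samples = [random.uniform(a, b) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean)  # close to (a+b)/2 = 0.5
print(var)   # close to (b-a)^2/12 = 0.0833...
```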
Frequency Distribution
Exponential Distribution: In probability theory and statistics, the exponential
distribution is a continuous probability distribution. The continuous random
variable, say X is said to have an exponential distribution, if it has the following
probability density function:
f(x) = λ e^(−λx)  for x > 0
f(x) = 0          for x ≤ 0

Mean = 1/λ

σ² = 1/λ²

Exponential distribution
Frequency Distribution
Binomial distribution: We consider repeated and independent trials of an
experiment with two outcomes; we call one of the outcomes success and the
other outcome failure. Let p be the probability of success, so that q = 1-p is the
probability of failure. If we are interested in the number of successes and not in
the order in which they occur, then the following holds.

The probability of exactly k successes in n repeated trials is denoted and given by

b(k; n, p) = C(n, k) p^k q^(n−k)

Here C(n, k) is the binomial coefficient.
Example: A fair coin is tossed 6 times or, equivalently, six fair coins are tossed;
call heads a success. Then n = 6 and p = q = 1/2.

(i) The probability that exactly two heads occur (i.e. k = 2) is

b(2; 6, 1/2) = C(6, 2) (1/2)² (1/2)⁴ = 15/64
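The binomial probabilities can be computed with `math.comb` (a sketch reproducing part (i) of the example and the "at least four heads" case):

```python
from math import comb

def binomial_pmf(k, n, p):
    """b(k; n, p) = C(n, k) * p^k * (1-p)^(n-k)"""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(2, 6, 0.5))                            # 15/64 = 0.234375
# at least four heads: k = 4, 5, 6
print(sum(binomial_pmf(k, 6, 0.5) for k in (4, 5, 6)))    # 22/64 = 0.34375
```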
Frequency Distribution
(ii) The probability of getting at least four heads (i.e. k = 4, 5, or 6) is

b(4; 6, 1/2) + b(5; 6, 1/2) + b(6; 6, 1/2) = 15/64 + 6/64 + 1/64 = 22/64 = 11/32

(iii) The probability of no heads (i.e. all failures) is q⁶ = (1/2)⁶ = 1/64, and so the
probability of at least one head is 1 − q⁶ = 1 − 1/64 = 63/64
Frequency Distribution
If we regard n and p as constants, then the above function P(k) = b(k; n, p) is a
discrete probability distribution:

b(k; n, p) = C(n, k) p^k q^(n−k)

This distribution is also called the Bernoulli distribution, and independent
trials with two outcomes are called Bernoulli trials.

Where
n = number of trials
k = number of successes
p = probability of getting success in one trial
q = probability of getting failure in one trial
Frequency Distribution
Geometric distribution: A geometric distribution is defined as a discrete
probability distribution of a random variable X which satisfies the following
conditions:

• A phenomenon that has a series of trials
• Each trial has only two possible outcomes – either success or failure
• The probability of success is the same for each trial

Example: A pharmaceutical company is designing a new drug to treat a


certain disease that will have minimal side effects (success here). What is the
probability that zero drugs fail the test, one drug fails the test, two drugs fail
the test, and so on until they have designed the ideal drug?
Frequency Distribution
The geometric distribution gives the probability that the first success occurs
on the kth trial. If p is the probability of success on each trial, then the
probability that the first success occurs on the kth trial is given by the formula

P(X = k) = (1 − p)^(k−1) p;   Mean = 1/p;   σ² = (1 − p)/p²

Example: Let us say a person is throwing a die and will stop once he gets a 5.
Since there are 6 possible outcomes, the probability of success is p = 1/6 ≈ 0.17,
and the probability of failure is 1 − 0.17 = 0.83.

If the person gets a 5 on the very first throw, there are zero failures before
the first success, so k = 1:

P(X = 1) = (0.83)^(1−1) × 0.17 = 0.17

If the first 5 comes on the second throw, then k = 2:

P(X = 2) = (0.83)^(2−1) × 0.17 = 0.83 × 0.17 ≈ 0.14


This way we can construct a series of geometric distributions for a series of
trials.
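The series of geometric probabilities can be generated programmatically (a sketch; X counts the trial on which the first success occurs):

```python
def geometric_pmf(k, p):
    """Probability that the first success occurs on trial k (k = 1, 2, ...)."""
    return (1 - p) ** (k - 1) * p

p = 1 / 6  # probability of rolling a 5 on a fair die
for k in (1, 2, 3):
    print(k, round(geometric_pmf(k, p), 3))  # 0.167, 0.139, 0.116

print(1 / p)  # mean number of throws until the first 5: 6.0
```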
Frequency Distribution
Normal distribution: In statistics, a normal distribution or Gaussian
distribution is a type of continuous probability distribution for a real-valued
random variable. The general form of its probability density function is
f(x) = [1/(σ√(2π))] e^(−(x−μ)²/(2σ²))

where μ and σ > 0 are arbitrary constants. This function is certainly one of the
most important examples of a continuous probability distribution. Its graph is
the familiar bell curve, peaking at x = μ.
Frequency Distribution
The properties of normal distribution

If μ = 0 and σ = 1, then

f(x) = (1/√(2π)) e^(−x²/2)
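The density formula can be evaluated directly (a sketch; `normal_pdf` is our own helper, not a library function):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian density: exp(-(x-mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(normal_pdf(0.0))                       # peak value 1/sqrt(2 pi) ~ 0.3989
print(normal_pdf(1.0) == normal_pdf(-1.0))   # True: symmetric about the mean
```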
Frequency Distribution
Central Limit Theorem: The normal approximation to the binomial
distribution:
The binomial distribution P(k) = b(k;n,p) is closely approximated by the
normal distribution providing n is large and neither p nor q is close to zero.

We have chosen the binomial distribution corresponding to n = 8 and p = q = 1/2.
Frequency Distribution
Poisson distribution: The Poisson distribution is defined as follows

p(k; λ) = λ^k e^(−λ) / k!,   k = 0, 1, 2, …

where λ > 0 is some constant. This countably infinite distribution appears in


many natural phenomena, such as the number of telephone calls per minute at
some switchboard, the number of misprints per page in a large text, and the
number of particles emitted by a radioactive substance.
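A sketch of the Poisson probability mass function, with λ = 2 as an illustrative rate (e.g. an average of two calls per minute):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """p(k; lambda) = lambda^k * e^(-lambda) / k!"""
    return lam**k * exp(-lam) / factorial(k)

lam = 2.0
probs = [poisson_pmf(k, lam) for k in range(20)]
print(probs[0], probs[2])  # p(0) = e^-2 and p(2) = 2 e^-2
print(sum(probs))          # first 20 terms already sum to ~1
```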
Frequency Distribution

Although the Poisson distribution is of independent interest, it also provides
us with a close approximation of the binomial distribution for small k,
provided that p is small and λ = np.
Error Analysis
Errors are inevitable in computer simulations. When simulating phenomena,
errors can occur due to humans or computers.
Generally, errors can be divided into two broad and rough but useful classes:
systematic and random.
Systematic Errors: Systematic errors are errors that tend to shift all
measurements systematically so their mean value is displaced. The systematic
errors may be due to (1) incorrect calibration of instruments, (2) improper use
of equipment, (3) failure to properly account for some effect, and (4) external
effects.

Random Errors: Random errors are errors that fluctuate from one
measurement to the next. They yield results distributed about some mean
value. They can occur for a variety of reasons, namely, lack of sensitivity,
noise, unpredictable fluctuations, and human errors.

Random errors are unavoidable and they are part of


any experiment.
Error Analysis
Suppose an experiment were repeated many, say N, times to get the
N measurements of the same quantity, x. The value of the parameter is as
follows: x1, x2, x3, …, xN. The mean value is

μ = (x1 + x2 + x3 + … + xN)/N = (Σ_{i=1}^{N} x_i)/N

Maximum error: Let x_max and x_min be the maximum and minimum values of the
data set. Then the maximum error is

Δx_max = (x_max − x_min)/2
And, virtually no measurements should ever fall outside 𝜇 ± ∆𝑥𝑚𝑎𝑥

Probable error: The probable error, Δx_prob, specifies the range μ ± Δx_prob
which contains 50% of the measured values.
Error Analysis
Average Deviation: The average deviation is the average of the deviations
from the mean;
Δx_AD = (Σ |x_i − μ|)/N

For a Gaussian distribution of the data, about 58% of the data will lie within
μ ± Δx_AD.
Standard Error: It defines an estimate of standard deviation that has been
computed from the sample. It is calculated as the ratio of the standard deviation
to the root of sample size, such as:
SE = σ/√N
Error Analysis
In the realm of engineering, you'll encounter the term Approximation Error.
This concept is extensively employed to assess the precision of numerical
approximations.

Approximation Error is the discrepancy between the exact value of a


quantity and its approximated value.
Approximation error = exact value − approximated value.

• Real-world problems can be complex and require simplifications. These
simplifications will result in an Approximation Error.
• Approximation Errors are not mistakes but rather an expected part of the
problem-solving process.
• The goal is often to minimize the Approximation Error to ensure the results
are as accurate as possible.
Error Analysis

sin x = x − x³/3! + x⁵/5!  (series truncated after the x⁵ term)

At x = π/2: sin(π/2) = 1 (the true/absolute value)

Due to the truncation, the approximate value is

π/2 − (π/2)³/3! + (π/2)⁵/5! ≈ 1.00452

Absolute error = |1 − 1.00452| ≈ 0.0045

Percentage error = (0.0045/1) × 100% ≈ 0.45%

Error Analysis
Round-off errors: This is the imprecision arising from the finite number of
digits used to store floating-point numbers.

1/3 = 0.3333333333…
The difference between an approximation of a number used in
computation and its correct (true) value is called round-off error.

The overall round-off error accumulates as the computer handles more


numbers, that is, as the number of steps in a computation increases, and may
cause some algorithms to become unstable with a rapid increase in error.
Simple Functions
A function is defined as a relation between a set of inputs and their outputs,
where each input has only one output, i.e. a domain yields a particular range.

The domain of any function is defined as the set of all possible values for
which the function can be defined. The range is the output given by a function
for a particular domain. A co-domain of a function is the set of possible
outcomes.
Simple Functions
Example: Find the domain and range of the function f(x) = 3x² − 5.

Domain: We know that the domain of a function is the set of input values for
which the function is real and defined. The given function has no undefined
values of x.

Thus, for the given function, the domain is the set of all real numbers: (−∞, +∞).

Range: The range of a function comprises the set of values of the dependent
variable for which the given function is defined. Solving for x:

y = 3x² − 5
3x² = y + 5
x² = (y + 5)/3
x = √((y + 5)/3)

The square root function is defined for non-negative arguments, which requires
y ≥ −5. Hence, the range of f(x) is [−5, ∞).
Simple Functions
Homework: Find the domain and range of the functions

y = 1/(x − 3) − 5

y = (x² − 3x − 4)/(x + 1)
Derivative
The derivative of a function in calculus measures the sensitivity of the
output value to a change in the input value. The derivative of a function f(x)
is defined as

f'(x) = lim_{h→0} [f(x + h) − f(x)] / h

Basic derivative rules:

(1) The derivative of a constant is always zero:

d/dx (c) = 0

(2) Constant multiple rule:  d/dx (c f(x)) = c df(x)/dx
Derivative
(3) Power rule:  d/dx (x^n) = n x^(n−1)

(4) Sum rule:  d/dx [f(x) + g(x)] = df(x)/dx + dg(x)/dx

(5) Difference rule:  d/dx [f(x) − g(x)] = df(x)/dx − dg(x)/dx

(6) Product rule:  d/dx [f(x) g(x)] = f(x) dg(x)/dx + g(x) df(x)/dx

(7) Quotient rule:  d/dx [f(x)/g(x)] = [g(x) df(x)/dx − f(x) dg(x)/dx] / g(x)²
Derivative
The derivatives of trigonometric functions (table shown in the slides)

Higher-order derivatives
We can find the successive derivatives of a function and obtain the
higher-order derivatives.
If y is a function, then its first derivative is dy/dx.

The second derivative is d/dx (dy/dx), which can also be written as d²y/dx².

The third derivative is d/dx (d²y/dx²), denoted by d³y/dx³, and so on.

If y = 4x³ then dy/dx = 12x²
And d²y/dx² = 24x, d³y/dx³ = 24, d⁴y/dx⁴ = 0
Derivative
Partial Derivatives
If u = f(x, y), we can find the partial derivative of u with respect to y by
keeping x constant, or the partial derivative with respect to x by keeping y
constant.
Suppose f(x, y) = x³y²; the partial derivatives of the function are:
∂f/∂x (x³y²) = 3x²y² and
∂f/∂y (x³y²) = 2x³y

Further, we can find the second-order partial derivatives also like ∂2f/∂x2,
∂2f/∂y2.
Derivative
Problem 1: Prove the following rules of derivative: (1) power rule, (2) product
rule, and (3) quotient rule

Problem 2: Find the derivative of


f(x) = 2x² + 3x − 5 at x = −1
f(x) = sin(x)
Integral
Integration – Inverse Process of Differentiation
We know that differentiation is the process of finding the derivative of the
functions and integration is the process of finding the antiderivative of a
function. So, these processes are inverse of each other.
f(x) = dF(x)/dx,   ∫ f(x) dx = F(x) + C

But what is integration? As an example, to find the area of the region bounded
by the graph of the function f(x) = √x between x = 0 and x = 1, one can divide
the interval into five pieces (0, 1/5, 2/5, …, 1), then construct rectangles
using the right-end height of each piece (thus √(1/5), √(2/5), …, √1) and sum
their areas to get the approximation
Integral
√(1/5)·(1/5 − 0) + √(2/5)·(2/5 − 1/5) + √(3/5)·(3/5 − 2/5) + √(4/5)·(4/5 − 3/5) + √(5/5)·(5/5 − 4/5) ≈ 0.7497

The estimated value is larger than


the exact value.

However, when the number of


pieces increases to infinity, it will
reach a limit which is the exact
value of the area sought (in this
case, 2/3).

∫_0^1 √x dx = 2/3 ≈ 0.6667

Hence, integrating a function amounts to estimating the area under its curve.
Integral
There are two types of integrals, (1) definite integral and (2) indefinite
integral.

Definite integral: An integral that has upper and lower limits is a definite
integral. Riemann integral is another name for the definite integral.
∫_a^b f(x) dx
Indefinite Integrals: They are defined without upper and lower limits, and
are represented as follows.
∫ f(x) dx = F(x) + C
Where C is any constant and the function f(x) is called the integrand.
Integral
Some Important Integration Formulas:

∫ 1 dx = x + C
∫ a dx = ax + C
∫ x^n dx = x^(n+1)/(n+1) + C;  n ≠ −1
∫ sin x dx = −cos x + C
∫ cos x dx = sin x + C
∫ sec²x dx = tan x + C
∫ (1/x) dx = ln|x| + C
∫ e^x dx = e^x + C
∫ a^x dx = a^x/ln a + C;  a > 0, a ≠ 1
∫ 1/√(1 − x²) dx = sin⁻¹x + C
∫ 1/(1 + x²) dx = tan⁻¹x + C
Integral
Home work: Integrate the following function.

∫ (x² − 1)(4 + 3x) dx

∫ dx / (1 + tan x)

∫ sin³x cos²x dx
Numerical differentiation
Forward difference:   f'(x) ≈ [f(x + h) − f(x)] / h

Backward difference:  f'(x) ≈ [f(x) − f(x − h)] / h

Central difference:   f'(x) ≈ [f(x + h) − f(x − h)] / (2h)

Each formula equals f'(x) in the limit h → 0; for a finite step h it is an
approximation.
Numerical differentiation
Calculate the first derivative of cos(x) at x = π/3 for
a) h = 0.1
b) h = 0.01
c) h = 0.001
d) h = 0.0001
Also calculate error in each case.
  h      | forward | error (forward) | backward | error (backward) | central | error (central)
  0.1    |         |                 |          |                  |         |
  0.01   |         |                 |          |                  |         |
  0.001  |         |                 |          |                  |         |
  0.0001 |         |                 |          |                  |         |
Forward difference of cos(x) at x = π/3:

a) h = 0.1

f'_forward(x = π/3) = (cos(π/3 + 0.1) − cos(π/3))/0.1 = −0.88956192323

f'_exact(x = π/3) = −sin(π/3) = −√3/2 = −0.86602540378

Error = f'_exact(x = π/3) − f'_forward(x = π/3)
      = −0.86602540378 + 0.88956192323
      = 0.02353651945
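The three difference formulas can be compared across the requested step sizes (a sketch; the helper names are ours):

```python
import math

def forward(f, x, h):  return (f(x + h) - f(x)) / h
def backward(f, x, h): return (f(x) - f(x - h)) / h
def central(f, x, h):  return (f(x + h) - f(x - h)) / (2 * h)

x = math.pi / 3
exact = -math.sin(x)  # exact derivative of cos at pi/3

for h in (0.1, 0.01, 0.001, 0.0001):
    print(h,
          abs(exact - forward(math.cos, x, h)),
          abs(exact - backward(math.cos, x, h)),
          abs(exact - central(math.cos, x, h)))
```

The printed errors shrink roughly linearly in h for the one-sided formulas and quadratically for the central formula.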
Numerical integration
Integrate ∫_0^1 cos(x) dx numerically with step size w =
a) 0.1
b) 0.01
c) 0.001
d) 0.0001

Also calculate the error in each case.

With w = 0.1, using the left end of each piece:

∫_0^1 cos(x) dx ≈ w Σ_{i=0}^{9} cos(x_i) = w (cos(0) + cos(0.1) + cos(0.2) + … + cos(0.9))

Using the right end of each piece:

∫_0^1 cos(x) dx ≈ w Σ_{i=1}^{10} cos(x_i) = w (cos(0.1) + cos(0.2) + … + cos(1.0))

Averaging the two heights on each piece gives the trapezoidal rule:

∫_0^1 cos(x) dx ≈ w [(cos(0) + cos(0.1))/2 + (cos(0.1) + cos(0.2))/2 + …]

In general, for n sub-intervals of width w = (b − a)/n:

∫_a^b f(x) dx ≈ w [(f(a) + f(a+w))/2 + (f(a+w) + f(a+2w))/2 + (f(a+2w) + f(a+3w))/2 + …]
             = w [(f(a) + f(b))/2 + Σ_{i=1}^{n−1} f(x_i)]
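The last formula is the composite trapezoidal rule; a sketch applied to ∫_0^1 cos(x) dx, whose exact value is sin(1):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals of width w = (b-a)/n."""
    w = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * w) for i in range(1, n))
    return w * s

exact = math.sin(1.0)  # integral of cos on [0, 1]
for n in (10, 100, 1000):
    approx = trapezoid(math.cos, 0.0, 1.0, n)
    print(n, approx, abs(exact - approx))  # error shrinks like 1/n^2
```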
Functions - Python
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
y = 2*x+1
plt.plot(x, y, label='y=2x+1')
plt.title('Graph of y=2x+1')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='upper left')
plt.grid()
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
x = np.linspace(-5,5,100)
plt.plot(x, 2*x+1, '-r', label='y=2x+1')
plt.plot(x, 2*x-1,'-.g', label='y=2x-1')
plt.plot(x, 2*x+3,':b', label='y=2x+3')
plt.plot(x, 2*x-3,'--m', label='y=2x-3')
plt.legend(loc='upper left')
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
x = np.linspace(-5,5,100)
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.plot(x, 2*x+1, 'r', label='y=2x+1')
plt.plot(x, 2*x-1,'g', label='y=2x-1')
plt.plot(x, 2*x+3,'b', label='y=2x+3')
plt.plot(x, 2*x-3,'k', label='y=2x-3')
plt.legend(loc='upper left')
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-2, 2, 100)
y = x ** 2

fig = plt.figure(figsize = (10, 5))


plt.plot(x, y)
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-6, 6, 50)
y = np.cos(x)
plt.plot(x, y, 'b', label ='cos(x)')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.legend()
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 1, 1000)
y = np.exp(x)
plt.plot(x, y, 'b', label ='exp(x)')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.legend()
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(1, 100, 1000)


y = np.log(x)
plt.plot(x, y, 'b', label ='ln(x)')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.legend()
plt.show()
Functions - Python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(1, 100, 1000)


y = np.log10(x)
plt.plot(x, y, 'b', label ='log(x)')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.legend()
plt.show()
Central tendencies - Python
import numpy as np
import statistics

data1 = [1, 3, 4, 5, 7, 9, 2]
x = statistics.mean(data1)

print("Mean is :", x)
Central tendencies - Python
import numpy as np
import statistics
from statistics import mean

data1 = [1, 3, 4, 5, 7, 9, 2]

print("Mean is :", mean(data1))


Central tendencies - Python
import numpy as np
import statistics

data1 = [1, 3, 4, 5, 7, 9, 2]

print("Median of data-set is : ", statistics.median(data1))


Central tendencies - Python
import statistics

set1 =[1, 2, 3, 3, 4, 4, 4, 5, 5, 6]

print("Mode of given data set is :", (statistics.mode(set1)))


Central tendencies - Python
import statistics

sample = [1, 2, 3, 4, 5]

print("Standard Deviation of sample is:" , (statistics.stdev(sample)))


Random numbers - Python
from numpy.random import rand
for i in range(10):
    print(rand())

from numpy.random import randint
for i in range(10):
    print(randint(0, 10))
Random numbers - Python
import matplotlib.pyplot as plt
from numpy.random import rand

l = []
for i in range(10):
    l.append(rand())

print(l)

plt.plot(l)
plt.show()
Random Numbers
Central Limit Theorem: The central limit theorem states that whenever a
random sample of size n is taken from any distribution with mean μ and
variance σ², the sample mean will be approximately normally distributed with
mean μ and variance σ²/n. The larger the value of the sample size, the better
the approximation of the normal.
Central limit theorem
Let X1, X2, X3, …, Xn be n random samples from a distribution with mean μ and
variance σ².

Then the limiting form of the distribution of

Z_n = (Σ_{i=1}^{n} X_i − nμ) / (σ√n)

tends to the standard normal.
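The theorem can be observed numerically: means of uniform samples cluster around μ = 0.5 with variance σ²/n (a sketch; the seed is fixed for reproducibility):

```python
import random
import statistics

random.seed(3)  # reproducible run

# Sample means of n uniform(0, 1) draws; uniform has mu = 0.5, sigma^2 = 1/12
n, reps = 30, 5000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

print(statistics.fmean(means))      # close to mu = 0.5
print(statistics.pvariance(means))  # close to sigma^2 / n = (1/12)/30
```

A histogram of `means` would show the characteristic bell shape even though the underlying uniform distribution is flat.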
