Lecture 10

This document is Lecture 10 of the Engineering Mathematics-IV course for second-year B.Tech students in the 2020-2021 academic year. It develops the notion of mean and variance for discrete random variables. Key concepts covered include the expected value (mean), the variance, and how to calculate them for discrete probability distributions, with worked examples of computing the expected value, variance, and standard deviation of random variables.


B.TECH SECOND YEAR

ACADEMIC YEAR: 2020-2021

COURSE NAME: ENGINEERING MATHEMATICS-IV

COURSE CODE: MA 2201
LECTURE SERIES NO: 10
CREDITS: 03
MODE OF DELIVERY: ONLINE (POWERPOINT PRESENTATION)
FACULTY :
EMAIL-ID :
PROPOSED DATE OF DELIVERY: as per the scheduled lecture
SESSION OUTCOME: "DEVELOP THE NOTION OF MEAN AND VARIANCE"
ASSESSMENT CRITERIA

ASSIGNMENT
QUIZ
MID TERM EXAMINATION – I & II
END TERM EXAMINATION
MAPPING WITH PROGRAM OUTCOMES

CO1

• Engineering Knowledge: Understand the key concepts of a random variable and its probability distribution, including mean, expectation, variance, and moments.
EXPECTED VALUE AND VARIANCE

• All probability distributions are characterized by an expected value and a variance (standard deviation squared).
EXPECTED VALUE, OR MEAN

• If we understand the underlying probability function of a certain phenomenon, then we can make informed decisions based on how we expect X to behave on average over the long run (the so-called "frequentist" theory of probability).

• The expected value is just the weighted average, or mean (µ), of the random variable X. Imagine placing the masses p(x) at the points x on a beam; the balance point of the beam is the expected value of X.
Discrete Random Variables
Mean and Variance
EXPECTED VALUE, FORMALLY

Discrete case:

E(X) = \sum_{\text{all } x_i} x_i \, p(x_i)

Continuous case:

E(X) = \int_{\text{all } x} x \, p(x) \, dx
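A minimal Python sketch of the discrete formula (the helper name expected_value is just an illustrative choice):

# Discrete expected value: E(X) = sum over x of x * p(x).
def expected_value(values, probs):
    """Probability-weighted sum of the outcomes of a discrete random variable."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(values, probs))

# Example: a fair six-sided die has E(X) = 3.5.
print(expected_value([1, 2, 3, 4, 5, 6], [1/6] * 6))  # 3.5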
EMPIRICAL MEAN IS A SPECIAL CASE OF EXPECTED VALUE…

• Sample mean, for a sample of n subjects:

\bar{X} = \frac{\sum_{i=1}^{n} x_i}{n} = \sum_{i=1}^{n} x_i \left( \frac{1}{n} \right)

• The probability (frequency) of each person in the sample is 1/n.
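A short NumPy check of this idea, using an arbitrary sample chosen for illustration:

import numpy as np

# Sample mean as a special case of expected value: each observation gets weight 1/n.
data = np.array([10, 11, 12, 13, 14], dtype=float)
n = len(data)
weights = np.full(n, 1.0 / n)        # p(x_i) = 1/n for every observation
print(np.sum(data * weights))        # weighted "expected value" form
print(np.mean(data))                 # ordinary sample mean -- the same number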
EXAMPLE: EXPECTED VALUE

• Recall the following probability distribution of ship arrivals:

x      10    11    12    13    14
P(x)   .4    .2    .2    .1    .1

E(X) = \sum_{i=1}^{5} x_i \, p(x_i) = 10(.4) + 11(.2) + 12(.2) + 13(.1) + 14(.1) = 11.3
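A quick numerical check of this sum in Python:

# Ship-arrival distribution from the slide; the weighted sum should come out to 11.3.
values = [10, 11, 12, 13, 14]
probs = [0.4, 0.2, 0.2, 0.1, 0.1]
print(sum(x * p for x, p in zip(values, probs)))  # 11.3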
EXTENSION TO CONTINUOUS CASE: UNIFORM DISTRIBUTION

[Figure: the uniform density p(x) = 1 on 0 ≤ x ≤ 1]

E(X) = \int_0^1 x \,(1)\, dx = \left. \frac{x^2}{2} \right|_0^1 = \frac{1}{2} - 0 = \frac{1}{2}
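A rough numerical check of this integral (a sketch using NumPy's trapezoidal rule on an assumed grid):

import numpy as np

# Numerical check that the mean of the Uniform(0, 1) distribution is 1/2.
x = np.linspace(0.0, 1.0, 100_001)  # fine grid on [0, 1]
pdf = np.ones_like(x)               # p(x) = 1 on [0, 1]
print(np.trapz(x * pdf, x))         # approximates the integral of x * p(x); ~0.5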
**A FEW NOTES ABOUT EXPECTED VALUE AS A MATHEMATICAL OPERATOR:

If c = a constant number (i.e., not a variable) and X and Y are any random variables…

• E(c) = c
• E(cX) = cE(X)
• E(c + X) = c + E(X)
• E(X + Y) = E(X) + E(Y)
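These rules can be verified numerically; below is a minimal Monte Carlo sketch, where the Poisson and uniform variables are arbitrary choices for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=1_000_000)       # any random variable
Y = rng.uniform(0.0, 2.0, size=1_000_000)  # any other random variable
c = 5.0

print(np.mean(c + X), c + np.mean(X))            # E(c + X) = c + E(X)
print(np.mean(c * X), c * np.mean(X))            # E(cX) = c E(X)
print(np.mean(X + Y), np.mean(X) + np.mean(Y))   # E(X + Y) = E(X) + E(Y)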
VARIANCE/STANDARD DEVIATION

\sigma^2 = Var(X) = E[(X - \mu)^2] = \sum_{\text{all } x_i} (x_i - \mu)^2 \, p(x_i)

"The average (expected) squared distance (or deviation) from the mean."

**We square because squaring has better properties than the absolute value. Take the square root to get back a linear average distance from the mean (= the "standard deviation").
VARIANCE, FORMALLY

Discrete case:

Var(X) = \sigma^2 = \sum_{\text{all } x_i} (x_i - \mu)^2 \, p(x_i)

Continuous case:

Var(X) = \sigma^2 = \int_{\text{all } x} (x - \mu)^2 \, p(x) \, dx
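A minimal Python sketch of the discrete case, applying the definition directly (the helper name variance is illustrative):

# Variance of a discrete random variable from the definition:
# Var(X) = sum over x of (x - mu)^2 * p(x).
def variance(values, probs):
    mu = sum(x * p for x, p in zip(values, probs))           # E(X)
    return sum((x - mu) ** 2 * p for x, p in zip(values, probs))

# Example: a fair six-sided die has Var(X) = 35/12 ≈ 2.9167.
print(variance([1, 2, 3, 4, 5, 6], [1/6] * 6))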


SIMILARITY TO EMPIRICAL VARIANCE

The variance of a sample:

s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1} = \sum_{i=1}^{n} (x_i - \bar{x})^2 \left( \frac{1}{n - 1} \right)

• Division by n − 1 reflects the fact that we have lost a "degree of freedom" (piece of information) because we had to estimate the sample mean before we could estimate the sample variance.
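In NumPy this choice of divisor is controlled by the ddof argument of np.var; a brief illustration with an arbitrary sample:

import numpy as np

data = np.array([10, 11, 12, 13, 14], dtype=float)
print(np.var(data, ddof=0))  # divides by n   (plug-in / population form)
print(np.var(data, ddof=1))  # divides by n-1 (sample variance s^2, as in the formula above)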
VAR(C) = 0

Var(c) = 0

Constants don’t vary!


PRACTICE PROBLEM
Find the variance and standard deviation for
the number of ships to arrive at the harbor
(recall that the mean is 11.3).

x      10    11    12    13    14
P(x)   .4    .2    .2    .1    .1
ANSWER: VARIANCE AND STD DEV

x^2    100   121   144   169   196
P(x)   .4    .2    .2    .1    .1

E(X^2) = \sum_{i=1}^{5} x_i^2 \, p(x_i) = (100)(.4) + (121)(.2) + 144(.2) + 169(.1) + 196(.1) = 129.5

Var(X) = E(X^2) - [E(X)]^2 = 129.5 - 11.3^2 = 1.81

stddev(X) = \sqrt{1.81} = 1.35

• Interpretation: On an average day, we expect 11.3 ships to arrive in the harbor, plus or minus 1.35. This gives you a feel for what would be considered a usual day!
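The same numbers can be reproduced in a few lines of Python, using the shortcut formula Var(X) = E(X^2) − [E(X)]^2:

import math

values = [10, 11, 12, 13, 14]
probs = [0.4, 0.2, 0.2, 0.1, 0.1]

mu = sum(x * p for x, p in zip(values, probs))        # E(X)   = 11.3
ex2 = sum(x**2 * p for x, p in zip(values, probs))    # E(X^2) = 129.5
var = ex2 - mu**2                                     # 1.81 (up to floating-point rounding)
print(mu, ex2, var, math.sqrt(var))                   # std dev ~1.345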
PRACTICE PROBLEM

You toss a coin 100 times. What’s the expected number of heads? What’s the variance of the
number of heads?
ANSWER: EXPECTED VALUE

Intuitively, we'd probably all agree that we expect around 50 heads, right?

• Another way to show this:

• Think of tossing 1 coin. E(X = number of heads) = (1)P(heads) + (0)P(tails)

• E(X = number of heads) = 1(.5) + 0 = .5

• If we do this 100 times, we're looking for the sum of 100 tosses, where we assign 1 for a heads and 0 for a tails. (These are 100 "independent, identically distributed (i.i.d.)" events.)

• E(X1 + X2 + X3 + X4 + X5 + ... + X100) = E(X1) + E(X2) + E(X3) + E(X4) + E(X5) + ... + E(X100) = 100 E(X1) = 50
ANSWER: VARIANCE

What's the variability, though? More tricky. But, again, we could do this for 1 coin and then use our rules of variance.

• Think of tossing 1 coin.

E(X^2 = number of heads squared) = 1^2 P(heads) + 0^2 P(tails)

E(X^2) = 1(.5) + 0 = .5
Var(X) = .5 - .5^2 = .5 - .25 = .25

Then, using our rule Var(X + Y) = Var(X) + Var(Y) (coin tosses are independent!):

Var(X1 + X2 + X3 + X4 + X5 + ... + X100) = Var(X1) + Var(X2) + Var(X3) + Var(X4) + Var(X5) + ... + Var(X100) = 100 Var(X1) = 100(.25) = 25

SD(X) = √25 = 5

• Interpretation: When we toss a coin 100 times, we expect to get 50 heads plus or minus 5.
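A simulation makes these answers concrete; below is a minimal sketch in which the number of repetitions (200,000) and the seed are arbitrary:

import numpy as np

# Simulate many runs of "toss a fair coin 100 times, count the heads".
rng = np.random.default_rng(42)
heads = rng.binomial(n=100, p=0.5, size=200_000)  # number of heads in each run

print(heads.mean())   # ~50  (expected value)
print(heads.var())    # ~25  (variance)
print(heads.std())    # ~5   (standard deviation)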
THANKS
