Probability Theory
Dr. Manoj BR
Assistant Professor
Department of Electronics & Electrical Engineering
Indian Institute of Technology Guwahati
Discrete random variable
$X$: discrete random variable that can assume any of a finite number of different values
in the set $\{x_1, x_2, \ldots, x_m\}$
$p_i$: probability that $X$ assumes the value $x_i$,
$p_i = \Pr\{X = x_i\}$
$p_i \ge 0$ and $\sum_{i=1}^{m} p_i = 1$

Probability mass function (PMF)
Express the set of probabilities $p_1, p_2, \ldots, p_m$ in terms of the PMF $P(x) = \Pr\{X = x\}$
$P(x) \ge 0$, $\sum_{x} P(x) = 1$
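A minimal Python sketch of the two PMF conditions above; the example distribution is assumed purely for illustration.

# Minimal sketch: a discrete PMF stored as {value: probability}.
# The example distribution is assumed for illustration only.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

# Check the PMF conditions: P(x) >= 0 and sum over x of P(x) = 1.
assert all(p >= 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12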
Expected values
Mean or average or the expected value of the random variable (RV) is
$\mu = E[X] = \sum_{x} x\,P(x)$
$E[aX + b] = a\,E[X] + b$, where $a$ and $b$ are arbitrary constants

Second moment
$E[X^2] = \sum_{x} x^2\,P(x)$

Variance
$\mathrm{Var}[X] = \sigma^2 = E[(X - \mu)^2] = \sum_{x} (x - \mu)^2\,P(x)$, where $\sigma$ is the standard deviation of $X$
$\mathrm{Var}[X] = E[X^2] - \mu^2$
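A short sketch (using the same assumed example PMF) computing the mean, second moment, and variance, and checking the identity $\mathrm{Var}[X] = E[X^2] - \mu^2$.

# Minimal sketch (example PMF assumed for illustration): mean, second
# moment, and variance of a discrete random variable from its PMF.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

mean = sum(x * p for x, p in pmf.items())                  # E[X]
second_moment = sum(x**2 * p for x, p in pmf.items())      # E[X^2]
var_def = sum((x - mean)**2 * p for x, p in pmf.items())   # E[(X - mu)^2]
var_id = second_moment - mean**2                           # E[X^2] - mu^2

# The two expressions for the variance agree.
assert abs(var_def - var_id) < 1e-12
print(mean, var_def)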
Pairs of discrete random variables
Let $X$ and $Y$ be random variables which can take values in
$\{x_1, x_2, \ldots, x_m\}$ and
$\{y_1, y_2, \ldots, y_n\}$
We can think of $(X, Y)$ as a vector, or a point in the product space of $X$ and $Y$
For each possible pair of values $(x_i, y_j)$ we have a joint probability
$p_{ij} = \Pr\{X = x_i, Y = y_j\}$
Joint PMF: $P(x, y) = \Pr\{X = x, Y = y\}$
$P(x, y) \ge 0$ and $\sum_{x}\sum_{y} P(x, y) = 1$

Marginal distributions
$P_X(x) = \sum_{y} P(x, y)$, $P_Y(y) = \sum_{x} P(x, y)$

Statistical independence: variables $X$ and $Y$ are said to be statistically independent if and only
if $P(x, y) = P_X(x)\,P_Y(y)$ for all $x, y$
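A sketch of a joint PMF, its marginals, and the independence test $P(x, y) = P_X(x)\,P_Y(y)$; the joint PMF values are assumed for illustration.

import numpy as np

# Minimal sketch: a joint PMF stored as a matrix, with marginals
# obtained by summing over the other index.
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])          # P[i, j] = Pr{X = x_i, Y = y_j}

P_X = P.sum(axis=1)                   # marginal of X: sum over y
P_Y = P.sum(axis=0)                   # marginal of Y: sum over x

# X and Y are independent iff P(x, y) = P_X(x) * P_Y(y) for every pair.
independent = np.allclose(P, np.outer(P_X, P_Y))
print(P_X, P_Y, independent)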
Expected values of functions of two variables
$E[f(X, Y)] = \sum_{x}\sum_{y} f(x, y)\,P(x, y)$
$\mu_X = E[X] = \sum_{x}\sum_{y} x\,P(x, y)$
$\mu_Y = E[Y] = \sum_{x}\sum_{y} y\,P(x, y)$
$\mathrm{Var}[X] = \sigma_X^2 = \sum_{x}\sum_{y} (x - \mu_X)^2\,P(x, y)$
$\mathrm{Var}[Y] = \sigma_Y^2 = E[(Y - \mu_Y)^2]$

Covariance of $X$ and $Y$: $\sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)]$

Using vector notation
$\mathbf{x} = [X\;\, Y]^{T}$ and $\boldsymbol{\mu} = [\mu_X\;\, \mu_Y]^{T}$ represent the vector notation
$\boldsymbol{\Sigma} = E[(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{T}]$
$\mathcal{X}$ represents the space of all possible values for all components of $\mathbf{x}$, and $\boldsymbol{\Sigma}$ is the covariance matrix
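A sketch computing the means, variances, covariance, and the $2 \times 2$ covariance matrix directly from a joint PMF; the PMF and the value sets are assumed for illustration.

import numpy as np

# Minimal sketch: moments of (X, Y) computed from an assumed joint PMF.
x_vals = np.array([0.0, 1.0])
y_vals = np.array([0.0, 1.0])
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])            # P[i, j] = Pr{X = x_i, Y = y_j}

X, Y = np.meshgrid(x_vals, y_vals, indexing="ij")  # grids of x and y values
mu_x = np.sum(X * P)
mu_y = np.sum(Y * P)
var_x = np.sum((X - mu_x) ** 2 * P)
var_y = np.sum((Y - mu_y) ** 2 * P)
cov_xy = np.sum((X - mu_x) * (Y - mu_y) * P)

Sigma = np.array([[var_x, cov_xy],
                  [cov_xy, var_y]])      # covariance matrix E[(x - mu)(x - mu)^T]
print(mu_x, mu_y)
print(Sigma)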
Covariance: a measure of the degree of statistical dependence of $X$ and $Y$
If $X$ and $Y$ are statistically independent, then $\sigma_{XY} = 0$
If $\sigma_{XY} = 0$, the variables $X$ and $Y$ are said to be uncorrelated
It does not follow that uncorrelated variables must be statistically independent

Correlation coefficient
$\rho = \dfrac{\sigma_{XY}}{\sigma_X \sigma_Y}$, with $-1 \le \rho \le 1$
$\rho = +1$: $X$ and $Y$ are maximally positively correlated
$\rho = -1$: $X$ and $Y$ are maximally negatively correlated
$\rho = 0$: the variables are uncorrelated
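A small numerical illustration of the point above, with an assumed distribution: with $X$ uniform on $\{-1, 0, 1\}$ and $Y = X^2$, the covariance is zero although $Y$ is completely determined by $X$, so the variables are uncorrelated but not independent.

import numpy as np

# Uncorrelated but dependent: X uniform on {-1, 0, 1}, Y = X^2.
x_vals = np.array([-1.0, 0.0, 1.0])
p = np.array([1/3, 1/3, 1/3])
y_vals = x_vals ** 2

mu_x = np.sum(x_vals * p)                      # 0
mu_y = np.sum(y_vals * p)                      # 2/3
cov_xy = np.sum((x_vals - mu_x) * (y_vals - mu_y) * p)

print(cov_xy)   # 0 (up to rounding): uncorrelated, yet Y = X^2 depends on X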
Conditional probability
When two variables are statistically dependent, knowing the value of one of them lets us get a better
estimate of the value of the other one. This is expressed by the following definition of the
conditional probability of $x$ given $y$
Mass function: $P(x \mid y) = \dfrac{P(x, y)}{P_Y(y)}$
The Law of Total Probability and Bayes' rule
The Law of Total Probability states that if an event can occur in $m$ different ways, and if these
$m$ subevents are mutually exclusive (that is, they cannot occur at the same time), then the
probability of the event occurring is the sum of the probabilities of the subevents:
$P_Y(y) = \sum_{x} P(y \mid x)\,P_X(x)$
Therefore, $P(x \mid y) = \dfrac{P(y \mid x)\,P_X(x)}{P_Y(y)}$ (Bayes' rule)
Posterior $= \dfrac{\text{likelihood} \times \text{prior}}{\text{evidence}}$
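A sketch of the law of total probability and Bayes' rule for a two-valued $x$; the prior and likelihood values are assumed for illustration.

import numpy as np

# Minimal sketch: posterior P(x | y) for a discrete x and an observed y.
prior = np.array([0.7, 0.3])          # P_X(x) for x in {x0, x1}
lik_y = np.array([0.2, 0.9])          # P(y | x) for the observed value y

evidence = np.sum(lik_y * prior)      # P_Y(y) = sum_x P(y | x) P_X(x)
posterior = lik_y * prior / evidence  # P(x | y), Bayes' rule

print(evidence, posterior, posterior.sum())  # posterior sums to 1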
Vector random variables
Variables: $X_1, X_2, \ldots, X_d$, collected into the vector $\mathbf{x} = [X_1\; X_2\; \cdots\; X_d]^{T}$
$P(\mathbf{x}) \ge 0$, $\sum_{\mathbf{x}} P(\mathbf{x}) = 1$
If the random variables are statistically independent, then
$P(\mathbf{x}) = P(x_1, x_2, \ldots, x_d) = \prod_{i=1}^{d} P_i(x_i)$
Example: we have $P(x_1, x_2, x_3)$ and we want $P(x_1, x_3)$
Marginal distribution
$P(x_1, x_3) = \sum_{x_2} P(x_1, x_2, x_3)$
Conditional distribution
$P(x_1, x_2 \mid x_3) = \dfrac{P(x_1, x_2, x_3)}{P(x_3)}$
Vector form: $P(\mathbf{x} \mid \mathbf{y}) = \dfrac{P(\mathbf{x}, \mathbf{y})}{P(\mathbf{y})}$
Bayes' rule (vector form): $P(\mathbf{x} \mid \mathbf{y}) = \dfrac{P(\mathbf{y} \mid \mathbf{x})\,P(\mathbf{x})}{P(\mathbf{y})}$
Mean: $\boldsymbol{\mu} = E[\mathbf{x}] = \sum_{\mathbf{x}} \mathbf{x}\,P(\mathbf{x})$
Covariance matrix: $\boldsymbol{\Sigma}$ is a $d \times d$ square matrix
$\sigma_{ij} = E[(x_i - \mu_i)(x_j - \mu_j)]$
$\boldsymbol{\Sigma} = E[(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{T}]$
$\boldsymbol{\Sigma}$ is symmetric, and its diagonal elements are just the
variances of the individual elements of $\mathbf{x}$, which can never
be negative; the off-diagonal elements are the covariances,
which can be positive or negative
If the variables are statistically independent, the
covariances are zero, and the covariance matrix is diagonal
Continuous random variable
Probability distribution function: $F(x) = \Pr\{X \le x\}$
Probability density function: $p(x) = \dfrac{dF(x)}{dx}$,
so that $F(x) = \int_{-\infty}^{x} p(u)\,du$
Mean: $\mu = E[X] = \int_{-\infty}^{\infty} x\,p(x)\,dx$
Variance: $\sigma^2 = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2\,p(x)\,dx$
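A sketch approximating the mean and variance integrals numerically; the density used (standard Gaussian) is assumed for illustration.

import numpy as np

# Minimal sketch: mean and variance of a continuous random variable by
# numerical integration of its density. Here p(x) is the standard Gaussian
# density, so mu = 0 and sigma^2 = 1.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

mu = np.sum(x * p) * dx                  # approximates integral of x p(x) dx
var = np.sum((x - mu)**2 * p) * dx       # approximates integral of (x - mu)^2 p(x) dx

print(mu, var)   # approximately 0 and 1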
Multivariate
Mean: $\boldsymbol{\mu} = E[\mathbf{x}] = \int \mathbf{x}\,p(\mathbf{x})\,d\mathbf{x}$
Covariance: $\boldsymbol{\Sigma} = E[(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{T}] = \int (\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{T}\,p(\mathbf{x})\,d\mathbf{x}$
If the components of $\mathbf{x}$ are statistically independent, then the joint probability density function
factors as $p(\mathbf{x}) = \prod_{i=1}^{d} p_i(x_i)$
and the covariance matrix is diagonal
Conditional probability
$p(\mathbf{x} \mid \mathbf{y}) = \dfrac{p(\mathbf{x}, \mathbf{y})}{p(\mathbf{y})}$
Bayes' rule
$p(\mathbf{x} \mid \mathbf{y}) = \dfrac{p(\mathbf{y} \mid \mathbf{x})\,p(\mathbf{x})}{p(\mathbf{y})}$
Expectation with respect to a subset of the variables
Example: $E_{x_2}[f(x_1, x_2)] = \int f(x_1, x_2)\,p(x_2 \mid x_1)\,dx_2$
Joint probability distribution function: $F_{XY}(x, y) = \Pr\{X \le x, Y \le y\}$
Joint density function: $p(x, y) = \dfrac{\partial^2 F_{XY}(x, y)}{\partial x\,\partial y}$
If $X$ and $Y$ are independent: $p(x, y) = p_X(x)\,p_Y(y)$
Correlation
$R_{XY} = E[XY]$, a scalar, for scalar random variables
$\mathbf{R}_{\mathbf{x}\mathbf{y}} = E[\mathbf{x}\,\mathbf{y}^{T}]$ for vector random variables
Covariance
$\mathrm{Cov}[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]\,E[Y]$
$X$ and $Y$ are uncorrelated: $\mathrm{Cov}[X, Y] = 0$,
or equivalently $E[XY] = E[X]\,E[Y]$
Independent random variables are always uncorrelated. The converse is not always true.
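A sketch checking the identity $\mathrm{Cov}[X, Y] = E[XY] - E[X]E[Y]$ on independently generated samples (the data are assumed for illustration); both estimates come out close to zero here.

import numpy as np

# Minimal sketch: the two covariance expressions agree, and independent
# draws are (nearly) uncorrelated.
rng = np.random.default_rng(1)
x = rng.normal(size=100000)
y = rng.uniform(size=100000)          # generated independently of x

cov = np.mean((x - x.mean()) * (y - y.mean()))
identity = np.mean(x * y) - x.mean() * y.mean()

print(cov, identity)   # the two agree, and both are close to zero here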
Gaussian probability density function
$p(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\dfrac{(x - \mu)^2}{2\sigma^2}\right)$

Multivariate Gaussian
Joint density
Let $\mathbf{x} = [X_1\; X_2\; \cdots\; X_n]^{T}$ be a vector of real-valued random variables
$\mathbf{x}$ is said to be a Gaussian random vector, and the random variables are said to be jointly Gaussian,
if the joint PDF is
$p(\mathbf{x}) = \dfrac{1}{(2\pi)^{n/2}\,|\boldsymbol{\Sigma}|^{1/2}}\exp\!\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})\right)$
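A sketch evaluating the univariate and multivariate Gaussian densities above with SciPy; the parameter values are assumed for illustration.

import numpy as np
from scipy.stats import norm, multivariate_normal

# Minimal sketch: evaluating the scalar and multivariate Gaussian densities.
mu, sigma = 1.0, 2.0
print(norm.pdf(0.0, loc=mu, scale=sigma))          # univariate Gaussian p(x) at x = 0

mean = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
print(multivariate_normal.pdf([0.5, 0.5], mean=mean, cov=Sigma))

# Cross-check the univariate value against the closed-form expression.
x = 0.0
print(np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma))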
Maximum a posteriori probability (MAP) criterion
Say $H_1$ if $P(H_1 \mid y) > P(H_0 \mid y)$
That is,
• say $H_1$ if $p(y \mid H_1)\,P(H_1) > p(y \mid H_0)\,P(H_0)$
• say $H_0$ if $p(y \mid H_1)\,P(H_1) < p(y \mid H_0)\,P(H_0)$
A posteriori probabilities: $P(H_1 \mid y)$, $P(H_0 \mid y)$
A priori probabilities: $P(H_1)$, $P(H_0)$
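A sketch of the MAP rule for a binary hypothesis test; the Gaussian observation model and the prior values are assumptions made only for illustration.

import numpy as np
from scipy.stats import norm

# Minimal sketch: MAP decision between H0: y ~ N(0, 1) and H1: y ~ N(1, 1),
# with assumed priors P(H0) and P(H1).
P_H0, P_H1 = 0.7, 0.3
y = 0.8                                          # observed value

post0 = norm.pdf(y, loc=0.0, scale=1.0) * P_H0   # proportional to P(H0 | y)
post1 = norm.pdf(y, loc=1.0, scale=1.0) * P_H1   # proportional to P(H1 | y)

decision = "H1" if post1 > post0 else "H0"       # MAP rule
print(decision)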
Maximum likelihood (ML) criterion
Say $H_1$ if $p(y \mid H_1) > p(y \mid H_0)$
That is,
• say $H_1$ if $\dfrac{p(y \mid H_1)}{p(y \mid H_0)} > 1$, and say $H_0$ otherwise
A priori probabilities: $P(H_1) = P(H_0) = \tfrac{1}{2}$ (equiprobable)
Likelihoods: $p(y \mid H_1)$ and $p(y \mid H_0)$
Frequently, we work with the log, $\ln p(y \mid H_i)$,
called the log-likelihood function
We know that
$P(H_1 \mid y) = \dfrac{p(y \mid H_1)\,P(H_1)}{p(y)}$
Applying the log to the above equation,
$\ln P(H_1 \mid y) = \ln p(y \mid H_1) + \ln P(H_1) - \ln p(y)$
Maximizing this (differentiating with respect to the unknown and equating to zero when the unknown is continuous), the term $\ln p(y)$ plays no role, since it does not depend on the hypothesis
For equiprobable hypotheses, $\ln P(H_i)$ is the same for every $H_i$, so MAP = ML
The results presented here for MAP and ML can be generalized and are applicable to $M$ events
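A sketch contrasting the ML and MAP decisions under the same assumed Gaussian observation model: with equiprobable priors the two rules agree, while a skewed prior can change the MAP decision.

import numpy as np
from scipy.stats import norm

# Minimal sketch: ML vs MAP decisions under an assumed observation model
# H0: y ~ N(0, 1), H1: y ~ N(1, 1).
def decide(y, p0, p1):
    like0 = norm.pdf(y, loc=0.0, scale=1.0)   # p(y | H0)
    like1 = norm.pdf(y, loc=1.0, scale=1.0)   # p(y | H1)
    ml = "H1" if np.log(like1) > np.log(like0) else "H0"   # log-likelihood comparison
    map_ = "H1" if np.log(like1) + np.log(p1) > np.log(like0) + np.log(p0) else "H0"
    return ml, map_

print(decide(0.6, 0.5, 0.5))   # equiprobable priors: ML and MAP agree
print(decide(0.6, 0.9, 0.1))   # prior skewed toward H0: MAP can flip to H0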
References
Athanasios Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes
Alberto Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering