Random Variables and Processes
Principles of Communication Systems (BEC 402)
Module 1 (PCS)
Dr. Anupama H
Assistant Professor
Dept. of ECE, BIT
INTRODUCTION
• Deals with the statistical characterization of random signals
• A signal is “random” if it is not possible to predict its precise value in advance.
• In a radio communication system, the received signal consists of an information-bearing signal component, a
random interference component, and receiver noise.
• The information-bearing signal component may represent a voice signal that typically consists of randomly
spaced bursts of energy of random duration.
• The interference component represents spurious electromagnetic waves produced by other communication
systems operating in the vicinity of the radio receiver.
• A major source of receiver noise is thermal noise, caused by the random motion of electrons in conductors
and devices at the front end of the receiver.
• We thus find that the received signal is completely random in nature.
• It is therefore described in terms of its statistical properties, such as the average power in the random signal or the
average spectral distribution of this power.
• The mathematical discipline that deals with the statistical characterization of random signals is probability
theory.
• A random variable is obtained by observing a random process at a fixed instant of time.
PROBABILITY
• Probability theory is rooted in phenomena that, explicitly or implicitly, can be
modeled by an experiment with an outcome that is subject to chance.
• If the experiment is repeated, the outcome can differ because of the
influence of an underlying random phenomenon or chance
mechanism. Such an experiment is referred to as a random
experiment.
Basic Definitions
• Think of an experiment and its possible outcomes as defining a space and its points. If an
experiment has K possible outcomes, then for the kth possible outcome there is a point called the
sample point, denoted by s_k.
• The set of all possible outcomes of the experiment is called the sample space, denoted by S.
• An event corresponds to either a single sample point or a set of sample points in the space S.
• A single sample point is called an elementary event.
• The entire sample space S is called the sure event, and the null set ∅ is called the null or impossible
event.
• Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the
other event.
Properties
• A probability measure P is a function that assigns a non-negative
number to an event A in the sample space S and satisfies the
following three properties (axioms):
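1. P[S] = 1
2. 0 ≤ P[A] ≤ 1
3. If A and B are two mutually exclusive events, then P[A ∪ B] = P[A] + P[B]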
Figure: Relationship between sample space, events, and probability.
CONDITIONAL PROBABILITY
• Consider an experiment that involves a pair of events A and B. Let P[B|A] denote the probability of
event B, given that event A has occurred; P[B|A] is called the conditional probability of
B given A. Assuming that A has nonzero probability, the conditional probability P[B|A] is defined by
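P[B|A] = P[A ∩ B] / P[A]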
• where P[A ∩ B] is the joint probability of A and B. Rearranging this relation, the joint probability of two
events may be expressed as the product of the conditional probability of one event given the other and the
elementary probability of the other:
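P[A ∩ B] = P[B|A] P[A] = P[A|B] P[B]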
RANDOM VARIABLES
• A random variable assigns a number to the outcome of a random experiment (e.g., 1 for heads and 0 for
tails in a coin-tossing experiment).
• A function whose domain is a sample space and whose range is a set of real numbers is called a
random variable of the experiment.
• For events in a set E, the random variable assigns a corresponding subset of the real line.
• If the outcome of the experiment is s, the random variable is denoted as X(s) or just X.
• Subsets of the sample space are thus mapped to subsets of the real line.
• Random variables may be discrete or continuous.
• A discrete random variable takes only a finite number of values, as in the coin-tossing experiment.
• A continuous random variable takes a range of real values; for example, the amplitude of a noise voltage
at a particular instant in time may take on any value between plus and minus infinity.
• A probabilistic description of random variables works equally well for both discrete and
continuous random variables.
• Consider the random variable X and the probability of the event (X ≤ x), denoted by P[X ≤ x].
The function F_X(x), called the cumulative distribution function (cdf) of the random variable X,
is given by
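F_X(x) = P[X ≤ x]

The derivative of the cdf, where it exists, is the probability density function (pdf) of X:

f_X(x) = d F_X(x) / dx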
• The name density function arises from the fact that the probability of the event (x1 ≤ X ≤ x2)
equals
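P[x1 ≤ X ≤ x2] = F_X(x2) − F_X(x1) = ∫_{x1}^{x2} f_X(x) dx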
• The probability of an interval is therefore the area under the probability density function over that
interval. Putting x1 = −∞ in the preceding relation and changing the notation, the distribution function is
defined in terms of the probability density function as follows:
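F_X(x) = ∫_{−∞}^{x} f_X(ξ) dξ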
• The probability density function must always be nonnegative, with a total area of one:
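f_X(x) ≥ 0 for all x,  and  ∫_{−∞}^{∞} f_X(x) dx = 1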
STATISTICAL AVERAGES
• Statistical averages describe the average behavior of the outcomes arising in random
experiments.
• The expected value or mean of a random variable X is defined by
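μ_X = E[X] = ∫_{−∞}^{∞} x f_X(x) dx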
• The concept of expected value generalizes to an arbitrary function g(X) of a random
variable X:
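E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx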
MOMENTS
• The nth moment of the probability distribution of the random variable X is
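E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx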
• Central moments are the moments of the difference between a random variable X and its mean μ_X.
Thus, the nth central moment is
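E[(X − μ_X)^n] = ∫_{−∞}^{∞} (x − μ_X)^n f_X(x) dx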
• For n = 1, the first central moment is zero, whereas for n = 2 the second central moment is referred to
as the variance of the random variable X, written as
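var[X] = σ_X^2 = E[(X − μ_X)^2]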
• The variance of a random variable X is commonly denoted as σ_X^2. The square root of the variance
is called the standard deviation of the random variable X.
• The variance of a random variable X is a measure of the variable’s “randomness.”
• If the mean is zero, then the variance and the mean-square value E[X^2] of the random variable X
are equal.
CHARACTERISTIC FUNCTION
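• The characteristic function φ_X(v) of a random variable X is, in its standard form, the expectation of exp(jvX):

φ_X(v) = E[exp(jvX)] = ∫_{−∞}^{∞} f_X(x) exp(jvx) dx

• The characteristic function is thus, apart from the sign of the exponent, the Fourier transform of the pdf f_X(x); it always exists, and the moments of X can be obtained by differentiating φ_X(v) and setting v = 0.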
JOINT MOMENTS
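• For a pair of random variables X and Y, the joint moments are the expected values E[X^i Y^k]. Two joint moments of particular importance are the correlation E[XY] and the covariance, defined in the standard way as

cov[X, Y] = E[(X − μ_X)(Y − μ_Y)] = E[XY] − μ_X μ_Y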
• The two random variables X and Y are uncorrelated if and only if their
covariance is zero, that is, if and only if
cov[X, Y] = 0
• They are orthogonal if and only if their correlation is zero, that is, if
and only if
E[XY] = 0
RANDOM PROCESSES
• The statistical analysis of communication systems requires the characterization of random signals such as
voice signals, television signals, computer data, and electrical noise.
• For random signals, each sample point in the sample space is a function of time. The sample space or
ensemble composed of functions of time is called a random or stochastic process.
• Consider then a random experiment specified by the outcomes s from some sample space S, by
the events defined on the sample space S, and by the probabilities of these events.
• Suppose that each sample point s is assigned a function of time in accordance with the rule shown
below, where 2T is the total observation interval. For a fixed sample point s_j, the graph of the function
X(t, s_j) versus time t is called a realization or sample function of the random process.
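X(t, s),  −T ≤ t ≤ T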
• For a fixed time t_k, the set of values {X(t_k, s_j)} obtained by observing every sample function at
time t_k constitutes a random variable. Thus the family of random variables {X(t, s)} is called a random
process. To simplify the notation, we suppress the s and simply use X(t) to denote a random process.
• A random process X(t) is defined as an ensemble of time functions together with a probability
rule that assigns a probability to any meaningful event associated with an observation of one of
the sample functions of the random process.
• Difference between a random variable and a random process:
1. For a random variable, the outcome of a random experiment is mapped into a number.
2. For a random process, the outcome of a random experiment is mapped into a waveform that is a
function of time.
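As a concrete illustration, here is a minimal Python sketch (the random-phase sinusoid ensemble is a hypothetical example, not taken from the slides): each outcome s fixes a phase, and hence an entire waveform X(t, s), while sampling every waveform at one fixed time t_k yields a random variable.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    T = 1.0
    t = np.linspace(-T, T, 201)           # observation interval [-T, T]

    # Hypothetical ensemble: a sinusoid with random phase.
    # Each outcome s fixes a phase theta(s), hence a whole waveform X(t, s).
    f0 = 5.0                              # assumed frequency, Hz
    theta = rng.uniform(0.0, 2.0 * np.pi, size=100)
    ensemble = np.cos(2.0 * np.pi * f0 * t[None, :] + theta[:, None])

    # Each row is one sample function (realization) of the random process.
    # Fixing a time t_k and reading down a column gives a random variable X(t_k).
    k = 50
    print("X(t_k) across the first 5 sample functions:", ensemble[:5, k])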
• A random process is said to be stationary to first order if the distribution function (and therefore
the density function) of X(t) does not vary with time. That is, the density functions of the random
variables X(t1) and X(t2) satisfy
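f_{X(t1)}(x) = f_{X(t2)}(x) = f_X(x)  for all t1 and t2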
• The mean of the random process is therefore a constant for a process that is stationary to first order.
• The autocorrelation function of the process X(t) is the expectation of the product of the two random
variables X(t1) and X(t2), obtained by observing X(t) at times t1 and t2, respectively:
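R_X(t1, t2) = E[X(t1) X(t2)]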
• A random process X(t) is stationary to second order if the joint distribution function
F_{X(t1),X(t2)}(x1, x2) depends only on the difference between the observation times
t1 and t2:
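F_{X(t1),X(t2)}(x1, x2) = F_{X(t1+τ),X(t2+τ)}(x1, x2)  for all t1, t2, and τ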
• By the above equations, a process that is stationary to second order has a mean that is constant and an autocorrelation that depends only on the time difference t2 − t1.
• Equivalently, a process satisfying just these two conditions is said to be wide-sense stationary:
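μ_X(t) = μ_X  for all t
R_X(t1, t2) = R_X(t2 − t1)  for all t1 and t2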
CROSS-CORRELATION FUNCTIONS
• Consider two random processes X(t) and Y(t) with autocorrelation functions
R_X(t, u) and R_Y(t, u), respectively. The cross-correlation function of X(t) and Y(t) is
defined by
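R_XY(t, u) = E[X(t) Y(u)]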
• If the random processes X(t) and Y(t) are each wide-sense stationary, the
cross-correlation may be written as a function of the time difference τ = t − u alone:
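R_XY(τ) = E[X(t) Y(t − τ)]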
• The cross-correlation function obeys the symmetry relationship
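R_XY(τ) = R_YX(−τ)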
GAUSSIAN PROCESS
• The random variable Y has a Gaussian distribution if its probability density function has the form
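f_Y(y) = (1 / (√(2π) σ_Y)) exp(−(y − μ_Y)^2 / (2 σ_Y^2))

where μ_Y is the mean and σ_Y^2 is the variance of Y.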
• For the special case when the Gaussian random variable Y is normalized to have a mean of zero
and a variance of one, written N(0, 1), the density reduces to
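f_Y(y) = (1 / √(2π)) exp(−y^2 / 2)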
Let X_i, i = 1, 2, …, N, be a set of random variables that are statistically independent and share the
same probability distribution, with mean μ_X and variance σ_X^2. The X_i are then said to constitute a
set of independently and identically distributed (i.i.d.) random variables. Let these random variables be
normalized as follows:
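Y_i = (X_i − μ_X) / σ_X,  i = 1, 2, …, N

so that E[Y_i] = 0 and var[Y_i] = 1, and define the normalized sum

V_N = (1 / √N) Σ_{i=1}^{N} Y_i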
• The central limit theorem states that the probability distribution of V_N approaches the normalized
Gaussian distribution N(0, 1) in the limit as N approaches infinity. That is, regardless of the
distribution of the Y_i, the sum V_N approaches a Gaussian distribution.
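A minimal Python sketch of this statement (assuming NumPy; the uniform distribution chosen for the X_i is arbitrary), forming V_N and checking its first two moments against N(0, 1):

    import numpy as np

    rng = np.random.default_rng(seed=0)
    N = 1000          # number of i.i.d. variables in each sum
    trials = 10000    # number of independent realizations of V_N

    # X_i uniform on [0, 1): mean 1/2, variance 1/12 (the choice is arbitrary)
    X = rng.random((trials, N))
    Y = (X - 0.5) / np.sqrt(1.0 / 12.0)   # Y_i = (X_i - mu_X) / sigma_X
    V = Y.sum(axis=1) / np.sqrt(N)        # V_N = (1/sqrt(N)) * sum of Y_i

    # For large N, V_N is approximately N(0, 1):
    print("mean of V_N:", V.mean())   # close to 0
    print("var  of V_N:", V.var())    # close to 1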
3. If a Gaussian process is wide-sense stationary, then the process is also stationary in the strict
sense.
4. If the random variables X(t1), X(t2), …, X(tn), obtained by sampling a Gaussian process X(t) at
times t1, t2, …, tn, are uncorrelated, that is,
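E[(X(t_i) − μ_{X(t_i)})(X(t_k) − μ_{X(t_k)})] = 0,  i ≠ k

then these random variables are statistically independent, and their joint pdf factors into the product of the individual pdfs.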
Proof of Property 1
Statement: If a Gaussian process X(t) is applied to a stable linear filter, then the random process Y(t)
developed at the output of the filter is also Gaussian
Proof:
• Consider a linear time-invariant filter of impulse response h(t), with the random process X(t) as input
and the random process Y(t) as output. Assume that X(t) is a Gaussian process. The random processes
Y(t) and X(t) are related by the convolution integral
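Y(t) = ∫_{0}^{∞} h(τ) X(t − τ) dτ,  0 ≤ t < ∞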
• Assume that the impulse response h(t) is such that the mean-square value of the output random
process Y(t) is finite for all t in the range 0 ≤ t < ∞ for which Y(t) is defined.
• Let us define the random variable Z shown below. To establish that Y(t) is Gaussian, Z must be a
Gaussian random variable for every function g_Y(t) such that the mean-square value of Z is finite.
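Z = ∫_{0}^{T} g_Y(t) Y(t) dt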
• Substituting the convolution integral for Y(t) into the definition of Z and interchanging the order of
integration, Z reduces to a linear functional of the input X(t):
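Z = ∫ g(τ) X(τ) dτ,  with  g(τ) = ∫_{0}^{T} g_Y(t) h(t − τ) dt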
• Since X(t) is a Gaussian process by hypothesis, Z, given by the preceding expression, must be a
Gaussian random variable. Thus it is shown that if the input X(t) to a stable linear filter is a Gaussian
process, then the output Y(t) is also a Gaussian process.