
Department of Statistics & O.R.

Aligarh Muslim University Aligarh

BA/BSc II Semester

Probability and Probability Distributions

(STB 251)

by

Dr. Haseeb Athar


Unit - II
Two-Dimensional Random Variable
Contents:
• Definition
• Two-Dimensional Discrete Random Variable
• Two-Dimensional Continuous Random Variable
• Joint Probability Mass Function
• Joint Probability Density Function
• Marginal and Conditional Distributions
• Joint and Marginal Distribution Functions

2 Lecture notes by Dr. Haseeb Athar, Department of Statistics & O.R., A.M.U., Aligarh, India
Two-Dimensional Random Variables
Definition
Let E be an experiment and S a sample space associated with E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. Then (X, Y) is a two-dimensional random variable.

[Diagram: each outcome s of the sample space S is mapped to the real line by X(s) and by Y(s).]

Two-Dimensional Discrete Random Variable


If the set of possible values of the two-dimensional random variable (X, Y) is finite or countably infinite, then (X, Y) is called a two-dimensional discrete random variable. The possible values of (X, Y) may be represented as

{(x_i, y_j); i = 1, 2, ..., n; j = 1, 2, ..., m}

For example, suppose two unbiased dice are rolled together. If X is the number on the first die and Y the number on the second, then the set of possible values of (X, Y) is

(X, Y) ∈ {(1,1), (1,2), ..., (6,1), ..., (6,6)}
Two-Dimensional Continuous Random Variable
If (X, Y) can assume all values in some non-countable subset of the two-dimensional plane, then (X, Y) is called a two-dimensional continuous random variable. For example, if (X, Y) assumes all values in the rectangle {(x, y) : a ≤ x ≤ b, c ≤ y ≤ d}, then (X, Y) is a two-dimensional continuous r.v.
Joint Probability Function or Joint Probability Mass Function
Let (X, Y) be a two-dimensional discrete r.v. Then with each pair of possible values (x_i, y_j), i = 1, 2, ..., n; j = 1, 2, ..., m, we associate a real number p(x_i, y_j) representing P(X = x_i, Y = y_j), called the joint probability function of (X, Y), satisfying the following conditions:

i) p(x_i, y_j) ≥ 0 for every (x_i, y_j)

ii) Σ_{j=1}^∞ Σ_{i=1}^∞ p(x_i, y_j) = 1

The set of triplets {x_i, y_j, p(x_i, y_j)}, i, j = 1, 2, ..., is called the joint probability distribution of (X, Y).

The joint probability function of X and Y can be represented in tabular form as below:

  X \ Y |  y_1    y_2   ...  y_j   ...  y_{m-1}  y_m   | Total
  ------+-----------------------------------------------+---------
  x_1   |  p_11   p_12  ...  p_1j  ...  p_1,m-1  p_1m  | p_1•
  x_2   |  p_21   p_22  ...  p_2j  ...  p_2,m-1  p_2m  | p_2•
  ...   |                                               |
  x_i   |  p_i1   p_i2  ...  p_ij  ...  p_i,m-1  p_im  | p_i•
  ...   |                                               |
  x_n   |  p_n1   p_n2  ...  p_nj  ...  p_n,m-1  p_nm  | p_n•
  ------+-----------------------------------------------+---------
  Total |  p_•1   p_•2  ...  p_•j  ...  p_•,m-1  p_•m  | p_•• = 1

In the above table it may be noted that

P(X = x_i, Y = y_j) = p(x_i, y_j) = p_ij;  i = 1, 2, ..., n; j = 1, 2, ..., m

p_i• = Σ_{j=1}^m p_ij,   p_•j = Σ_{i=1}^n p_ij   and   p_•• = Σ_{i=1}^n Σ_{j=1}^m p_ij = 1
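In code, the row and column totals p_i• and p_•j are plain sums over the pmf table. A minimal Python sketch (the table values below are illustrative, not from the notes):

```python
# Joint pmf of a discrete (X, Y) stored as a nested list:
# rows are values x_i of X, columns are values y_j of Y.
# The numbers are illustrative only, not taken from the notes.
pmf = [
    [0.10, 0.20, 0.10],   # p_1j
    [0.15, 0.25, 0.20],   # p_2j
]

# Row totals p_i. (marginal pmf of X) and column totals p_.j (marginal pmf of Y)
p_x = [round(sum(row), 12) for row in pmf]
p_y = [round(sum(col), 12) for col in zip(*pmf)]

# Grand total p_.. must be 1 for a valid joint pmf
total = sum(p_x)

print(p_x)    # [0.4, 0.6]
print(p_y)    # [0.25, 0.45, 0.3]
print(total)  # 1.0
```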

Joint Probability Density Function


Let (X, Y) be a two-dimensional continuous r.v. assuming all values in some region R of the Euclidean plane. The joint probability density function f is a function satisfying the following conditions:

i) f(x, y) ≥ 0 for every (x, y)

ii) ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) dy dx = 1

Marginal Probability Functions


Discrete Case:
If the joint distribution of a two-dimensional discrete random variable (X, Y) is given, then the marginal probability distribution of X is given as

P(X = x_i) = p(x_i) = P(X = x_i, Y = y_1 or X = x_i, Y = y_2 or ...)
           = Σ_{j=1}^∞ p(x_i, y_j) = p_i•

Similarly, the marginal distribution of Y is given as

P(Y = y_j) = p(y_j) = P(X = x_1, Y = y_j or X = x_2, Y = y_j or ...)
           = Σ_{i=1}^∞ p(x_i, y_j) = p_•j

Continuous Case:
Let f(x, y) be the joint pdf of the two-dimensional continuous random variable (X, Y). The marginal probability density functions of X and Y, denoted g(x) and h(y) respectively, are defined as

g(x) = ∫_{-∞}^{∞} f(x, y) dy

and   h(y) = ∫_{-∞}^{∞} f(x, y) dx

Conditional Distribution Functions


Discrete Case
Let (X, Y) be a two-dimensional discrete r.v. with joint probability function p(x, y). Then the conditional distribution of X given Y = y_j is given by

P(X = x_i | Y = y_j) = P(X = x_i ∩ Y = y_j) / P(Y = y_j)

or   P(X = x_i | Y = y_j) = P(X = x_i, Y = y_j) / P(Y = y_j) = p(x_i, y_j) / p(y_j)

Similarly, the conditional distribution of Y given X = x_i is given by

P(Y = y_j | X = x_i) = P(X = x_i, Y = y_j) / P(X = x_i) = p(x_i, y_j) / p(x_i)

It may be noted that

Σ_{i=1}^∞ p(x_i | y_j) = Σ_{i=1}^∞ p(x_i, y_j) / p(y_j) = p(y_j) / p(y_j) = 1

Similarly,   Σ_{j=1}^∞ p(y_j | x_i) = Σ_{j=1}^∞ p(x_i, y_j) / p(x_i) = p(x_i) / p(x_i) = 1

Continuous Case
Let (X, Y) be a two-dimensional continuous r.v. with joint pdf f(x, y). Then the conditional pdf of X given Y = y is given by

g(x | y) = f(x, y) / h(y),   h(y) > 0,

and the conditional pdf of Y given X = x is

h(y | x) = f(x, y) / g(x),   g(x) > 0.

Note that

∫_{-∞}^{∞} g(x | y) dx = (1 / h(y)) ∫_{-∞}^{∞} f(x, y) dx = h(y) / h(y) = 1

Similarly,   ∫_{-∞}^{∞} h(y | x) dy = (1 / g(x)) ∫_{-∞}^{∞} f(x, y) dy = g(x) / g(x) = 1

Joint Cumulative Distribution Function


Let (X, Y) be a two-dimensional random variable. The cumulative distribution function (cdf) or distribution function (df) of the two-dimensional r.v. (X, Y) is defined as

F(x, y) = P(X ≤ x, Y ≤ y)

        = Σ_{x_i ≤ x} Σ_{y_j ≤ y} p(x_i, y_j)      : Discrete Case

        = ∫_{-∞}^{x} ∫_{-∞}^{y} f(x, y) dy dx      : Continuous Case

Marginal Distribution Functions


If F(x, y) is the joint distribution function of the two-dimensional r.v. (X, Y), then:

i) Discrete Case:
The marginal distribution function of X is given by

F(x) = P(X ≤ x, Y < ∞) = Σ_{x_i ≤ x} Σ_{y_j} p(x_i, y_j)

and the marginal distribution function of Y is given by

F(y) = P(X < ∞, Y ≤ y) = Σ_{x_i} Σ_{y_j ≤ y} p(x_i, y_j)

ii) Continuous Case:

The marginal distribution function of X is given by

F(x) = ∫_{-∞}^{x} [ ∫_{-∞}^{∞} f(x, y) dy ] dx

and the marginal distribution function of Y is given by

F(y) = ∫_{-∞}^{y} [ ∫_{-∞}^{∞} f(x, y) dx ] dy
Independence of Random Variables
Discrete Case:
Let (X, Y) be a two-dimensional discrete r.v. Then we say X and Y are independent random variables if and only if

i) P(X = x_i, Y = y_j) = P(X = x_i) P(Y = y_j)
   or   p(x_i, y_j) = p_1(x_i) · p_2(y_j)   for every i, j

or, equivalently,

ii) p_1(x_i | y_j) = p_1(x_i)
    and   p_2(y_j | x_i) = p_2(y_j)   for every i, j

Proof: Since p_1(x_i | y_j) = p(x_i, y_j) / p_2(y_j),

⟹  p(x_i, y_j) = p_1(x_i | y_j) p_2(y_j)

Now   p_1(x_i) = Σ_j p(x_i, y_j)
              = Σ_j p_1(x_i | y_j) p_2(y_j)
              = p_1(x_i | y_j) Σ_j p_2(y_j)     [under (ii), p_1(x_i | y_j) does not depend on j]

⟹  p_1(x_i) = p_1(x_i | y_j)   for every i, j

Similarly,
p_2(y_j) = p_2(y_j | x_i)   for every i, j
Continuous Case
Let (X, Y) be a two-dimensional continuous r.v. Then we say X and Y are independent random variables if and only if

i) f(x, y) = g(x) · h(y)   for all x, y

or, equivalently,

ii) g(x | y) = g(x)
    h(y | x) = h(y)   for all x, y

Proof: We have

g(x | y) = f(x, y) / h(y)

⟹  f(x, y) = g(x | y) h(y)

Now   g(x) = ∫_{-∞}^{∞} f(x, y) dy
           = ∫_{-∞}^{∞} g(x | y) h(y) dy
           = g(x | y) ∫_{-∞}^{∞} h(y) dy     [under (ii), g(x | y) does not depend on y]

⟹  g(x) = g(x | y)   for all x, y

Similarly,
h(y) = h(y | x)   for all x, y
Example 2.27: The joint probability distribution of (X, Y) is given in the following table:

   Y \ X |   0    1    2
   ------+---------------
     0   |  3k    k   2k
     1   |   k   2k    k

Determine the value of k. Then find

i) P(X ≤ 1, Y = 1), P(X ≤ 2), P(Y = 1), P(X < 2, Y ≤ 1)
ii) Marginal distributions of X and Y.
iii) Conditional distribution of Y given X = 1.
iv) Conditional distribution of X given Y = 1.
v) Are X and Y independent?

Solution: We know that

Σ_x Σ_y p(x, y) = 1
⟹  10k = 1
or   k = 1/10

Therefore, the joint distribution of (X, Y) is

   Y \ X |   0     1     2   | p(y)
   ------+-------------------+------
     0   |  0.3   0.1   0.2  | 0.6
     1   |  0.1   0.2   0.1  | 0.4
   ------+-------------------+------
   p(x)  |  0.4   0.3   0.3  | 1.0

i) P(X ≤ 1, Y = 1) = P(X = 0, Y = 1) + P(X = 1, Y = 1) = 0.1 + 0.2 = 0.3

P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = 0.4 + 0.3 + 0.3 = 1.0

P(Y = 1) = 0.4

P(X < 2, Y ≤ 1) = P(X = 0, Y ≤ 1) + P(X = 1, Y ≤ 1)
  = P(X = 0, Y = 0) + P(X = 0, Y = 1) + P(X = 1, Y = 0) + P(X = 1, Y = 1)
  = 0.3 + 0.1 + 0.1 + 0.2 = 0.7
ii) The marginal distribution of X is defined as

P(X = x) = p(x) = Σ_y p(x, y)

Thus, from the table:

   X        :  0    1    2
   P(X = x) : 0.4  0.3  0.3

The marginal distribution of Y is defined as

P(Y = y) = p(y) = Σ_x p(x, y)

Thus, from the table:

   Y        :  0    1
   P(Y = y) : 0.6  0.4
iii) The conditional distribution of Y given X = 1 is given as

P(Y = y | X = 1) = P(Y = y, X = 1) / P(X = 1)

Y = 0:  P(Y = 0 | X = 1) = P(X = 1, Y = 0) / P(X = 1) = 0.1 / 0.3 = 1/3
Y = 1:  P(Y = 1 | X = 1) = P(X = 1, Y = 1) / P(X = 1) = 0.2 / 0.3 = 2/3

Thus,
   Y                :   0     1
   P(Y = y | X = 1) :  1/3   2/3

iv) Similar as (iii).

v) In the above table we can observe that p(x, y) ≠ p_1(x) · p_2(y) for some (x, y); for example, p(0, 0) = 0.3 while p_1(0) p_2(0) = 0.4 × 0.6 = 0.24. Hence X and Y are not independent.
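The arithmetic of Example 2.27 can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

# Example 2.27's table: rows Y = 0, 1; columns X = 0, 1, 2 (entries in units of k)
table_k = [[3, 1, 2],
           [1, 2, 1]]

k = F(1, sum(sum(row) for row in table_k))          # 10k = 1, so k = 1/10
pmf = [[k * c for c in row] for row in table_k]

p_x = [pmf[0][x] + pmf[1][x] for x in range(3)]     # marginal of X
p_y = [sum(row) for row in pmf]                     # marginal of Y

p_i = pmf[1][0] + pmf[1][1]                         # P(X <= 1, Y = 1)
cond_y_given_x1 = [pmf[y][1] / p_x[1] for y in range(2)]   # P(Y = y | X = 1)
independent = all(pmf[y][x] == p_x[x] * p_y[y]
                  for x in range(3) for y in range(2))

print(k)            # 1/10
print(p_i)          # 3/10
print(independent)  # False
```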

Practice Problem 2.11: A two-dimensional r.v. (X, Y) has the bivariate distribution given by

p(x, y) = k(x² + y);  x = 0, 1, 2, 3;  y = 0, 1

Determine the value of k. Hence find the marginal distributions of X and Y.

Practice Problem 2.12: A two-dimensional r.v. (X, Y) has joint pmf given by

P(X = x, Y = y) = (1/27)(2x + y);  x = 0, 1, 2;  y = 0, 1, 2

Find the conditional distribution of Y given X = x.

Example 2.28: The joint pdf of a two-dimensional r.v. (X, Y) is given as

f(x, y) = kxy;  0 ≤ x ≤ 2, 0 ≤ y ≤ 2
        = 0,   elsewhere

Evaluate the value of k and hence obtain
(i) P(X ≤ 1 ∩ Y ≤ 1)
(ii) P(X ≤ 1 | Y ≤ 1)
(iii) Marginal pdfs of X and Y.
(iv) Conditional pdf of X given Y.
(v) Are X and Y independent?

Solution: We know that

∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) dy dx = 1

This implies

∫_0^2 ∫_0^2 k x y dy dx = 1

or   k ∫_0^2 x { ∫_0^2 y dy } dx = 1

or   2k ∫_0^2 x dx = 1

or   4k = 1   ⟹   k = 1/4

Hence

f(x, y) = (1/4) x y,  0 ≤ x ≤ 2; 0 ≤ y ≤ 2
(i) P(X ≤ 1 ∩ Y ≤ 1) = P(X ≤ 1, Y ≤ 1)
   = ∫_0^1 ∫_0^1 f(x, y) dy dx
   = (1/4) ∫_0^1 x { ∫_0^1 y dy } dx
   = (1/8) ∫_0^1 x dx = 1/16

(ii) P(X ≤ 1 | Y ≤ 1) = P(X ≤ 1, Y ≤ 1) / P(Y ≤ 1)

P(Y ≤ 1) = ∫_0^2 ∫_0^1 f(x, y) dy dx
         = (1/4) ∫_0^2 ∫_0^1 x y dy dx
         = (1/4) ∫_0^2 x { ∫_0^1 y dy } dx
         = (1/8) ∫_0^2 x dx = 1/4

Therefore,

P(X ≤ 1 | Y ≤ 1) = P(X ≤ 1, Y ≤ 1) / P(Y ≤ 1) = (1/16) / (1/4) = 1/4.
(iii) Marginal pdf of X is

g(x) = ∫_0^2 f(x, y) dy = (x/4) ∫_0^2 y dy = x/2,  0 ≤ x ≤ 2

Marginal pdf of Y is

h(y) = ∫_0^2 f(x, y) dx = (y/4) ∫_0^2 x dx = y/2,  0 ≤ y ≤ 2
(iv) Conditional pdf of X given Y is

g(x | y) = f(x, y) / h(y) = ((1/4) x y) / (y/2) = x/2,  0 ≤ x ≤ 2.

(v) Here it can be observed that

f(x, y) = g(x) h(y)   for all x, y

Also   g(x | y) = g(x) = x/2,  x ∈ (0, 2).

Hence X and Y are independent.
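The value k = 1/4 and the probabilities in (i) and (ii) can be cross-checked numerically with a midpoint Riemann sum (the grid size n is an arbitrary choice):

```python
# Midpoint-rule double integral of f over [ax, bx] x [ay, by];
# the grid size n is an arbitrary numerical choice.
def integrate2d(f, ax, bx, ay, by, n=400):
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

f = lambda x, y: 0.25 * x * y          # joint pdf of Example 2.28 with k = 1/4

mass = integrate2d(f, 0, 2, 0, 2)      # total mass, should be 1
p_joint = integrate2d(f, 0, 1, 0, 1)   # P(X <= 1, Y <= 1) = 1/16
p_y1 = integrate2d(f, 0, 2, 0, 1)      # P(Y <= 1) = 1/4

print(round(mass, 6), round(p_joint, 6), round(p_y1, 6))   # 1.0 0.0625 0.25
```

The midpoint rule is exact here because the integrand is bilinear on each grid cell, so only floating-point noise remains.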

Example 2.29: If X and Y are two random variables having density function

f(x, y) = (1/8)(6 − x − y);  0 ≤ x < 2, 2 ≤ y < 4

Find (i) P(X < 1 ∩ Y < 3)  (ii) P(X + Y < 3)  (iii) P(X < 1 | Y < 3)

Solution:

(i) P(X < 1 ∩ Y < 3) = P(X < 1, Y < 3)
   = ∫_0^1 ∫_2^3 (1/8)(6 − x − y) dy dx
   = 3/8

(ii) Here the region of integration is the part of the rectangle lying below the line X + Y = 3:

P(X + Y < 3) = ∫_0^1 ∫_2^{3−x} (1/8)(6 − x − y) dy dx
             = (1/8) ∫_0^1 [ (6 − x)(1 − x) − ((3 − x)² − 4)/2 ] dx
             = 5/24

(iii) P(Y < 3) = ∫_0^2 ∫_2^3 (1/8)(6 − x − y) dy dx = 5/8

Therefore,

P(X < 1 | Y < 3) = P(X < 1 ∩ Y < 3) / P(Y < 3) = (3/8) / (5/8) = 3/5.
Example 2.30: Let the two-dimensional random variable (X, Y) have joint pdf

f(x, y) = 6x²y,  0 < x < 1; 0 < y < 1
        = 0,    elsewhere

Find (i) P(0 < X < 3/4, 1/3 < Y < 2), (ii) P(X + Y < 1), (iii) P(X > Y),
(iv) P(X ≤ 1 | Y ≤ 2).

Solution:

(i) P(0 < X < 3/4, 1/3 < Y < 2) = ∫_0^{3/4} ∫_{1/3}^1 f(x, y) dy dx + ∫_0^{3/4} ∫_1^2 f(x, y) dy dx
   = ∫_0^{3/4} ∫_{1/3}^1 6x²y dy dx + ∫_0^{3/4} ∫_1^2 0 dy dx
   = (8/3) ∫_0^{3/4} x² dx = 3/8

(ii) P(X + Y < 1) = ∫_0^1 ∫_0^{1−x} 6x²y dy dx
   = ∫_0^1 6x² [ y²/2 ]_0^{1−x} dx = ∫_0^1 3x²(1 − x)² dx = 1/10

(iii) P(X > Y) = ∫_0^1 ∫_0^x 6x²y dy dx
   = 6 ∫_0^1 x² { ∫_0^x y dy } dx
   = 6 ∫_0^1 x² [ y²/2 ]_0^x dx
   = 3 ∫_0^1 x⁴ dx = 3/5

(iv) P(X ≤ 1 ∩ Y ≤ 2) = ∫_0^1 ∫_0^1 6x²y dy dx + ∫_0^1 ∫_1^2 0 dy dx = 1

P(Y ≤ 2) = ∫_0^1 ∫_0^2 f(x, y) dy dx
         = ∫_0^1 ∫_0^1 6x²y dy dx + ∫_0^1 ∫_1^2 0 dy dx = 1

Thus,

P(X ≤ 1 | Y ≤ 2) = P(X ≤ 1 ∩ Y ≤ 2) / P(Y ≤ 2) = 1/1 = 1
Example 2.31: Let the two-dimensional continuous random variable (X, Y) have joint pdf

f(x, y) = 2,  0 < x < y < 1

Find the marginal density functions of X and Y.

Solution: We know that

g(x) = ∫_x^1 f(x, y) dy = ∫_x^1 2 dy = 2(1 − x),  0 < x < 1

Similarly,

h(y) = ∫_0^y 2 dx = 2y,  0 < y < 1

Example 2.32: Let the two-dimensional continuous random variable (X, Y) have joint pdf

f(x, y) = x e^{−x(y+1)},  x > 0; y > 0

Find the marginal and conditional pdfs.

Solution: We have

g(x) = ∫_0^∞ f(x, y) dy = x e^{−x} ∫_0^∞ e^{−xy} dy = e^{−x},  x > 0.

and   h(y) = ∫_0^∞ x e^{−x(y+1)} dx

Let x(y + 1) = z  ⟹  (y + 1) dx = dz. Then

h(y) = (1/(y + 1)²) ∫_0^∞ z e^{−z} dz        [ ∫_0^∞ e^{−x} x^{p−1} dx = Γ(p) ]
     = Γ(2)/(y + 1)² = 1/(y + 1)²,  y > 0

Now

g(x | y) = f(x, y) / h(y) = (y + 1)² x e^{−x(y+1)},  x > 0.

and   h(y | x) = f(x, y) / g(x) = x e^{−xy},  y > 0.

Example 2.33: Let the conditional distribution of X given Y = y be

g(x | y) = e^{−y} y^x / x!,  x = 0, 1, 2, ...

which is a Poisson distribution with parameter y. The r.v. Y is continuous with marginal pdf h(y) = e^{−y}, y > 0.

Show that g(x) = 2^{−(x+1)}, x = 0, 1, 2, ...

Solution: The joint pdf of X and Y is given by

f(x, y) = g(x | y) h(y) = e^{−2y} y^x / x!;  x = 0, 1, 2, ...;  y > 0

Therefore, the marginal density of X is given by

g(x) = ∫_0^∞ f(x, y) dy
     = ∫_0^∞ e^{−2y} y^x / x! dy
     = (1/x!) ∫_0^∞ e^{−2y} y^x dy
     = Γ(x + 1) / (x! · 2^{x+1}) = 2^{−(x+1)},  x = 0, 1, 2, ...
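The closing integral, ∫_0^∞ e^{−2y} y^x dy = Γ(x + 1)/2^{x+1}, can be checked numerically for small x (the truncation point L and grid size n are arbitrary numerical choices):

```python
import math

# Midpoint-rule check of  g(x) = (1/x!) * integral_0^inf e^(-2y) y^x dy = 2^-(x+1).
# The truncation point L and grid size n are arbitrary numerical choices.
def g_numeric(x, L=40.0, n=100_000):
    fact = math.factorial(x)
    h = L / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        total += math.exp(-2.0 * y) * y**x / fact
    return total * h

for x in range(5):
    print(x, round(g_numeric(x), 6), 2.0 ** -(x + 1))
```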

Example 2.34: Let (X, Y) be distributed with a constant density inside a square R of side b. Find f(x, y), F(x, y) and the marginal pdfs.

Solution: Taking R = {(x, y) : 0 ≤ x ≤ b, 0 ≤ y ≤ b}, the joint pdf of X and Y is given as

f(x, y) = (1/b)(1/b) = 1/b²,  (x, y) ∈ R
        = 0,  otherwise

The joint cdf is given by

F(x, y) = 0,  x < 0 or y < 0
        = ∫_0^x ∫_0^y (1/b²) dt₂ dt₁ = xy/b²,  0 ≤ x ≤ b; 0 ≤ y ≤ b
        = by/b² = y/b,  x > b; 0 ≤ y ≤ b
        = xb/b² = x/b,  0 ≤ x ≤ b; y > b
        = 1,  x > b; y > b

The marginal pdf of X is

g(x) = ∫_0^b (1/b²) dy = 1/b,  x ∈ (0, b)

and the marginal pdf of Y is

h(y) = ∫_0^b (1/b²) dx = 1/b,  y ∈ (0, b)
Example 2.35: The joint pdf of a two-dimensional random variable (X, Y) is given by

f(x, y) = 2,  0 < x < 1; 0 < y < x
        = 0,  elsewhere

(i) Find the marginal density functions of X and Y.
(ii) Find the conditional distribution of Y given X and of X given Y.
(iii) Check the independence of X and Y.

Solution: (i) The marginal density functions of X and Y respectively are given by

g(x) = ∫_0^x 2 dy = 2x,  0 < x < 1
     = 0,  elsewhere

h(y) = ∫_y^1 2 dx = 2(1 − y),  0 < y < 1
     = 0,  elsewhere

(ii) The conditional distribution of Y given X is

h(y | x) = f(x, y) / g(x) = 2/(2x) = 1/x,  0 < y < x.

The conditional density function of X given Y is

g(x | y) = f(x, y) / h(y) = 2/(2(1 − y)) = 1/(1 − y),  y < x < 1.

(iii) We have

f(x, y) = 2,  0 < x < 1; 0 < y < x
and   g(x) · h(y) = 4x(1 − y)

⟹  f(x, y) ≠ g(x) · h(y)

Hence, X and Y are not independent.

Example 2.36: The joint distribution of X and Y is given by

f(x, y) = 4xy e^{−(x² + y²)};  x ≥ 0, y ≥ 0

Test whether X and Y are independent. Also find the conditional distribution of X given Y = y.

Solution: The marginal distribution of X is given by

g(x) = ∫_0^∞ f(x, y) dy
     = ∫_0^∞ 4xy e^{−(x² + y²)} dy
     = 4x e^{−x²} ∫_0^∞ y e^{−y²} dy
     = 2x e^{−x²} ∫_0^∞ e^{−t} dt        [ y² = t ⟹ 2y dy = dt ]

⟹  g(x) = 2x e^{−x²};  x ≥ 0.

Similarly, the marginal distribution of Y is

h(y) = 2y e^{−y²};  y ≥ 0

Since f(x, y) = g(x) · h(y), X and Y are independent.

The conditional distribution of X given Y is

g(x | y) = f(x, y) / h(y) = 2x e^{−x²},  x ≥ 0.
Example 2.37: The joint distribution function of X and Y is given by

F(x, y) = 1 − e^{−x} − e^{−y} + e^{−(x+y)};  x ≥ 0, y ≥ 0.
        = 0,  elsewhere

(i) Find the marginal densities of X and Y.
(ii) Are X and Y independent?
(iii) Find P(X + Y ≥ 1).

Solution: We have

F(x, y) = 1 − e^{−x} − e^{−y} + e^{−(x+y)};  x ≥ 0, y ≥ 0.

Therefore,

f(x, y) = ∂²F(x, y)/∂x∂y
        = ∂/∂x [ ∂/∂y (1 − e^{−x} − e^{−y} + e^{−(x+y)}) ]
        = ∂/∂x [ e^{−y} − e^{−(x+y)} ]

f(x, y) = e^{−(x+y)},  x ≥ 0; y ≥ 0
        = 0,  elsewhere

(i) & (ii) The joint density of X and Y can be expressed as

f(x, y) = e^{−x} · e^{−y} = g(x) · h(y)

where g(x) = e^{−x}, x ≥ 0, and h(y) = e^{−y}, y ≥ 0.

Thus, X and Y are independent.

(iii)

P(X + Y ≥ 1) = 1 − P(X + Y ≤ 1)

P(X + Y ≤ 1) = ∫_0^1 ∫_0^{1−x} f(x, y) dy dx
             = ∫_0^1 e^{−x} { ∫_0^{1−x} e^{−y} dy } dx
             = ∫_0^1 e^{−x} { 1 − e^{−(1−x)} } dx
             = ∫_0^1 (e^{−x} − e^{−1}) dx
             = 1 − 2e^{−1}

Thus,

P(X + Y ≥ 1) = 2e^{−1} = 0.7358
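The value 2e^{−1} can be sanity-checked by simulating the two independent exponential marginals derived above (seed and sample size are arbitrary choices):

```python
import math
import random

random.seed(7)          # fixed seed for reproducibility
N = 200_000

# X, Y independent standard exponentials, as derived in Example 2.37
count = sum(random.expovariate(1.0) + random.expovariate(1.0) >= 1.0
            for _ in range(N))

estimate = count / N
exact = 2 * math.exp(-1)          # = 0.7357588...
print(round(exact, 4))            # 0.7358
```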

Example 2.38: A two-dimensional r.v. has joint pdf

f(x, y) = (1/2) xy,  0 < y < x; 0 < x < 2

Show that X and Y are dependent.

Solution: We have

g(x) = ∫_0^x f(x, y) dy = (x/2) ∫_0^x y dy = x³/4,  0 < x < 2

h(y) = ∫_y^2 f(x, y) dx = (y/2) ∫_y^2 x dx = y − y³/4,  0 < y < 2

Since f(x, y) ≠ g(x) · h(y), X and Y are dependent.

Example 2.39: The joint pdf of a two-dimensional r.v. (X, Y) is given as

f(x, y) = e^{−(x+y)},  x ≥ 0; y ≥ 0.

Are X and Y independent?
Find (i) P(X > 1), (ii) P(X < Y | X < 2Y), (iii) P(1 < X + Y < 2).

Solution: The joint pdf of X and Y can be expressed as

f(x, y) = e^{−x} · e^{−y} = g(x) · h(y),

where g(x) = e^{−x}, x ≥ 0, and h(y) = e^{−y}, y ≥ 0.

Thus, X and Y are independent.

(i) P(X > 1) = ∫_1^∞ g(x) dx = ∫_1^∞ e^{−x} dx = 1/e

(ii) P(X < Y) = ∫_0^∞ { ∫_0^y f(x, y) dx } dy
   = ∫_0^∞ e^{−y} { ∫_0^y e^{−x} dx } dy
   = ∫_0^∞ e^{−y} (1 − e^{−y}) dy
   = ∫_0^1 t dt        [ 1 − e^{−y} = t ⟹ e^{−y} dy = dt ]
   = 1/2

Similarly,

P(X < 2Y) = ∫_0^∞ { ∫_0^{2y} f(x, y) dx } dy
   = ∫_0^∞ e^{−y} { ∫_0^{2y} e^{−x} dx } dy
   = ∫_0^∞ e^{−y} (1 − e^{−2y}) dy
   = 2/3

Therefore, since {X < Y} ⊂ {X < 2Y},

P(X < Y | X < 2Y) = P(X < Y, X < 2Y) / P(X < 2Y) = P(X < Y) / P(X < 2Y) = (1/2) / (2/3) = 3/4

(iii) Splitting the strip 1 < x + y < 2 into the region I with 0 < x < 1 and the region II with 1 < x < 2:

P(1 < X + Y < 2) = ∫∫_I f(x, y) dy dx + ∫∫_II f(x, y) dy dx
   = ∫_0^1 ∫_{1−x}^{2−x} f(x, y) dy dx + ∫_1^2 ∫_0^{2−x} f(x, y) dy dx
   = ∫_0^1 e^{−x} { ∫_{1−x}^{2−x} e^{−y} dy } dx + ∫_1^2 e^{−x} { ∫_0^{2−x} e^{−y} dy } dx
   = 2/e − 3/e²
   = 0.3298

The above probability can also be evaluated by using the relation

P(1 < X + Y < 2) = P(X + Y < 2) − P(X + Y < 1)
Example 2.40: Let X and Y be jointly distributed with pdf

f(x, y) = (1/4)(1 + xy),  |x| < 1, |y| < 1
        = 0,  elsewhere

Show that X and Y are not independent but X² and Y² are independent.

Solution: The marginal pdf of X is given as

g(x) = ∫_{−1}^1 f(x, y) dy = (1/4) ∫_{−1}^1 (1 + xy) dy = 1/2,  −1 < x < 1

Similarly, the marginal pdf of Y is

h(y) = ∫_{−1}^1 f(x, y) dx = 1/2,  −1 < y < 1

Since f(x, y) ≠ g(x) · h(y), X and Y are not independent.

Now, for 0 < x < 1,

P(X² ≤ x) = P(|X| ≤ √x) = ∫_{−√x}^{√x} g(u) du = √x

Similarly,

P(Y² ≤ y) = P(|Y| ≤ √y) = √y

Also,

P(X² ≤ x ∩ Y² ≤ y) = P(|X| ≤ √x, |Y| ≤ √y)
   = ∫_{−√x}^{√x} ∫_{−√y}^{√y} f(u, v) dv du
   = (1/4) ∫_{−√x}^{√x} ∫_{−√y}^{√y} (1 + uv) dv du
   = √x √y        [the uv term integrates to zero by symmetry]

This implies

P(X² ≤ x, Y² ≤ y) = P(X² ≤ x) P(Y² ≤ y)

Hence, X² and Y² are independent.
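The product form P(X² ≤ a, Y² ≤ b) = √a · √b can be spot-checked numerically; the test points (a, b) and the grid size are arbitrary choices:

```python
import math

# Midpoint-rule check of Example 2.40: P(X^2 <= a, Y^2 <= b) should equal sqrt(a*b).
# The test points and the grid size n are arbitrary numerical choices.
def prob_sq(a, b, n=400):
    sa, sb = math.sqrt(a), math.sqrt(b)
    hu, hv = 2 * sa / n, 2 * sb / n
    total = 0.0
    for i in range(n):
        u = -sa + (i + 0.5) * hu
        for j in range(n):
            v = -sb + (j + 0.5) * hv
            total += 0.25 * (1 + u * v)     # joint pdf of Example 2.40
    return total * hu * hv

print(round(prob_sq(0.49, 0.25), 6))                 # 0.35
print(round(math.sqrt(0.49) * math.sqrt(0.25), 6))   # 0.35
```

The uv term cancels across the symmetric grid, leaving exactly the constant part, which is why the two printed values agree.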
