
STAT 21613 Probability Distributions and Applications II

Chapter 4
Functions of Random Variables
Let’s first recall functions of a single random variable.

4.0 Function of a Random Variable

Example 1:

Solution:


Theorem 1

Example 2:

Solution:


Ex:

Leibniz’ Rule and results
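For reference, the general form of Leibniz’ Rule used repeatedly in this chapter is the following standard result (restated here):

$$\frac{d}{dz}\int_{a(z)}^{b(z)} f(x, z)\,dx = f\left(b(z), z\right)\frac{db(z)}{dz} - f\left(a(z), z\right)\frac{da(z)}{dz} + \int_{a(z)}^{b(z)} \frac{\partial f(x, z)}{\partial z}\,dx$$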

4.1 Distribution of the sum and difference of Two Continuous Random Variables

Theorem 2

Proof:
Let X and Y be two continuous random variables with joint probability density function $f_{X,Y}(x,y)$.
Let Z = X + Y; then the cumulative distribution is

$$F_Z(z) = P(Z \le z) = P(X + Y \le z) = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{z-x} f_{X,Y}(x,y)\,dy\right)dx$$

the density function:


$$f_Z(z) = \frac{dF_Z(z)}{dz} = \frac{d}{dz}\left[\int_{-\infty}^{\infty}\left(\int_{-\infty}^{z-x} f_{X,Y}(x,y)\,dy\right)dx\right]$$

Now apply Leibniz’ Rule:

$$f_Z(z) = \int_{-\infty}^{\infty} f_{X,Y}(x, z-x)\,dx$$
Similarly:

$$f_Z(z) = \int_{-\infty}^{\infty} f_{X,Y}(z-y, y)\,dy$$

Corollary 4.1.1: Let X and Y be two continuous and independent random variables, and let Z = X + Y. Then

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z-y)\,f_Y(y)\,dy = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z-x)\,dx,$$

where $f_X(\cdot)$ and $f_Y(\cdot)$ are the marginal probability density functions of X and Y.
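As a quick numerical check of the corollary, here is a minimal Python sketch, assuming X and Y are independent Exp($\lambda$) variables with $\lambda = 1$ (an illustrative choice, and the same setting as Example 4 below). The sum of two independent Exp($\lambda$) variables has the Gamma(2, $\lambda$) density $\lambda^2 z e^{-\lambda z}$, so the two printed columns should agree.

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0  # illustrative rate parameter

def f_X(x):
    # Exp(lam) density
    return lam * np.exp(-lam * x) if x >= 0 else 0.0

def f_Y(y):
    return lam * np.exp(-lam * y) if y >= 0 else 0.0

def f_Z(z):
    # Corollary 4.1.1: f_Z(z) = integral of f_X(x) f_Y(z - x) dx
    # (the integrand vanishes outside 0 <= x <= z)
    val, _ = quad(lambda x: f_X(x) * f_Y(z - x), 0.0, z)
    return val

for z in [0.5, 1.0, 2.0]:
    exact = lam**2 * z * np.exp(-lam * z)  # Gamma(2, lam) density
    print(z, f_Z(z), exact)
```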

Example 3:
Let 𝑓(𝑥, 𝑦) = 2 , 𝑥 ≥ 0, 𝑦 ≥ 0, 𝑥 + 𝑦 ≤ 1
Find the pdf of W = X+Y

Solution:

$$f_W(w) = \int_{-\infty}^{\infty} f_{X,Y}(x, w-x)\,dx = \int_{0}^{w} 2\,dx = 2w, \qquad 0 \le w \le 1,$$

since the integrand is nonzero only when $x \ge 0$, $w - x \ge 0$ and $x + (w - x) = w \le 1$.

Example 4:
Let X and Y be two independent continuous random variables, each exponentially distributed with parameter λ.
Find the probability density function of the random variable W = X + Y.


Distribution of the Difference of Two Continuous Random Variables

Let X and Y be two continuous random variables with joint probability density function $f_{X,Y}(x,y)$. Let V = X − Y; then the cumulative distribution is

$$F_V(v) = P(V \le v) = P(X - Y \le v) = \int_{-\infty}^{\infty}\left\{\int_{-\infty}^{v+y} f_{X,Y}(x,y)\,dx\right\}dy$$

Density function:

$$f_V(v) = \frac{dF_V(v)}{dv} = \frac{d}{dv}\left[\int_{-\infty}^{\infty}\left\{\int_{-\infty}^{v+y} f_{X,Y}(x,y)\,dx\right\}dy\right] = \int_{-\infty}^{\infty} f_{X,Y}(v+y, y)\,dy$$

Similarly:

$$f_V(v) = \int_{-\infty}^{\infty} f_{X,Y}(x, x-v)\,dx$$
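This too can be checked numerically. A minimal sketch, assuming X and Y are independent Exp(1) (an illustrative choice): the difference of two independent standard exponentials has the Laplace density $\frac{1}{2}e^{-|v|}$, so the columns should agree.

```python
import numpy as np
from scipy.integrate import quad

def f_exp(t):
    # Exp(1) density
    return np.exp(-t) if t >= 0 else 0.0

def f_V(v):
    # f_V(v) = integral over y of f_X(v + y) f_Y(y) dy (independent case);
    # the integrand needs y >= 0 and v + y >= 0
    val, _ = quad(lambda y: f_exp(v + y) * f_exp(y), max(0.0, -v), np.inf)
    return val

for v in [-1.0, 0.0, 1.5]:
    print(v, f_V(v), 0.5 * np.exp(-abs(v)))
```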

4.2 Distribution of the Product and Quotient of Two Continuous Random Variables

Let X and Y be two continuous and independent random variables with joint probability density function $f_{X,Y}(x,y)$. Then the cumulative distribution of the product Z = XY is given by:

$$F_Z(z) = P(Z \le z) = P[XY \le z] = \int_{-\infty}^{0}\left\{\int_{z/x}^{\infty} f_{X,Y}(x,y)\,dy\right\}dx + \int_{0}^{\infty}\left\{\int_{-\infty}^{z/x} f_{X,Y}(x,y)\,dy\right\}dx$$

(For $x < 0$, dividing $xy \le z$ by $x$ reverses the inequality to $y \ge z/x$, which gives the first term.)
Now apply Leibniz’ Rule:


$$f_Z(z) = \frac{d}{dz}\left(\int_{-\infty}^{0}\left\{\int_{z/x}^{\infty} f_{X,Y}(x,y)\,dy\right\}dx + \int_{0}^{\infty}\left\{\int_{-\infty}^{z/x} f_{X,Y}(x,y)\,dy\right\}dx\right)$$

$$= \int_{-\infty}^{0}\left(-\frac{1}{x}\right) f_{X,Y}\!\left(x, \frac{z}{x}\right)dx + \int_{0}^{\infty}\frac{1}{x}\, f_{X,Y}\!\left(x, \frac{z}{x}\right)dx$$

Density function:

$$f_Z(z) = \int_{-\infty}^{\infty} \frac{1}{|x|}\, f_{X,Y}\!\left(x, \frac{z}{x}\right)dx$$
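A minimal numerical sketch of this formula, assuming X and Y are independent U(0,1) (an illustrative choice): the product of two independent standard uniforms has the known density $-\ln z$ on $(0,1)$.

```python
import numpy as np
from scipy.integrate import quad

def f_unif(t):
    # U(0,1) density
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def f_Z(z):
    # f_Z(z) = integral of (1/|x|) f_X(x) f_Y(z/x) dx;
    # the support restricts x to [z, 1] when 0 < z < 1
    val, _ = quad(lambda x: f_unif(x) * f_unif(z / x) / abs(x), z, 1.0)
    return val

for z in [0.1, 0.5, 0.9]:
    print(z, f_Z(z), -np.log(z))  # columns should agree
```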

Distribution of the Quotient of Two Continuous Random Variables

Let X and Y be two continuous and independent random variables with joint probability density function $f_{X,Y}(x,y)$. Then the quotient $U = X/Y$ has the cumulative distribution:

$$F_U(u) = P(U \le u) = P\left[\frac{X}{Y} \le u\right] = \int_{-\infty}^{0}\left\{\int_{uy}^{\infty} f_{X,Y}(x,y)\,dx\right\}dy + \int_{0}^{\infty}\left\{\int_{-\infty}^{uy} f_{X,Y}(x,y)\,dx\right\}dy$$

(For $y < 0$, multiplying $x/y \le u$ by $y$ reverses the inequality to $x \ge uy$.)

Apply Leibniz’ Rule:

Density function:

$$f_U(u) = \frac{dF_U(u)}{du} = \int_{-\infty}^{\infty} |y|\, f_{X,Y}(uy, y)\,dy$$

Corollary 4.4.1: Let X and Y be two continuous and independent random variables, and let $U = X/Y$. Then

$$f_U(u) = \int_{-\infty}^{\infty} |y|\, f_X(uy)\, f_Y(y)\,dy,$$

where $f_X(\cdot)$ and $f_Y(\cdot)$ are the marginal probability density functions of X and Y.
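A minimal numerical sketch of the corollary, assuming X and Y are independent Exp(1) (an illustrative choice): the quotient of two independent standard exponentials has the known density $1/(1+u)^2$ on $u > 0$.

```python
import numpy as np
from scipy.integrate import quad

def f_exp(t):
    # Exp(1) density
    return np.exp(-t) if t >= 0 else 0.0

def f_U(u):
    # Corollary 4.4.1: f_U(u) = integral of |y| f_X(u y) f_Y(y) dy;
    # both densities vanish for negative arguments, so integrate over y >= 0
    val, _ = quad(lambda y: abs(y) * f_exp(u * y) * f_exp(y), 0.0, np.inf)
    return val

for u in [0.5, 1.0, 3.0]:
    print(u, f_U(u), 1.0 / (1.0 + u) ** 2)  # columns should agree
```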


4.3 Probability Distribution of Maximum and Minimum of a Set of Variables

Order Statistics

Example:
Suppose a random sample of five rats yields the following weights (in grams):

x1=602 x2=781 x3=709 x4=742 x5=633

x(1) = 602   x(2) = 633   x(3) = 709   x(4) = 742   x(5) = 781
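The ordering can be reproduced with a one-line sketch:

```python
weights = [602, 781, 709, 742, 633]
print(sorted(weights))  # [602, 633, 709, 742, 781] = x(1), ..., x(5)
```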

Theorem 3
Let $X_1, X_2, \ldots, X_n$ be a set of independent random variables, each of which has cumulative distribution function $F(\cdot)$.
Let $Y_1 = \min\{X_1, X_2, \ldots, X_n\}$ and $Y_n = \max\{X_1, X_2, \ldots, X_n\}$.
The cumulative distributions of $Y_n$ and $Y_1$ are

$$F_{Y_n}(y) = \{F(y)\}^n, \qquad F_{Y_1}(y) = 1 - \prod_{i=1}^{n} P\{X_i > y\} = 1 - \{1 - F(y)\}^n$$
Proof:

$$F_{Y_n}(y) = P[Y_n \le y] = P[X_1 \le y, X_2 \le y, \ldots, X_n \le y]$$

$$= \prod_{i=1}^{n} P\{X_i \le y\} \quad \text{(since the random variables are independent)}$$

$$= \{F(y)\}^n$$

Cumulative distribution of $Y_1 = \min\{X_1, X_2, \ldots, X_n\}$:

$$F_{Y_1}(y) = P[Y_1 \le y] = 1 - P[Y_1 > y] = 1 - P[X_1 > y, X_2 > y, \ldots, X_n > y]$$

$$= 1 - \prod_{i=1}^{n} P\{X_i > y\} \quad \text{(by independence)}$$

$$= 1 - \prod_{i=1}^{n}\{1 - P\{X_i \le y\}\} = 1 - \{1 - F(y)\}^n$$

Example 5:
Let $X_1, X_2, \ldots, X_n$ be independent U(0,1) random variables. Find the distributions of the minimum and the maximum.

The pdf of each $X_i$ is $f(x) = 1,\ 0 \le x \le 1$, and the cdf is $F(x) = x,\ 0 \le x \le 1$. Hence

$$F_{Y_n}(y) = \{F(y)\}^n = y^n, \qquad F_{Y_1}(y) = 1 - \{1 - F(y)\}^n = 1 - (1-y)^n, \qquad 0 \le y \le 1$$
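A minimal simulation sketch of these two formulas (the sample size n = 5 is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 100_000
samples = rng.uniform(size=(reps, n))
mins, maxs = samples.min(axis=1), samples.max(axis=1)

for y in [0.2, 0.5, 0.8]:
    print(y,
          (mins <= y).mean(), 1 - (1 - y) ** n,  # empirical vs exact F_{Y_1}(y)
          (maxs <= y).mean(), y ** n)            # empirical vs exact F_{Y_n}(y)
```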

Example 6:
A string of 10 light bulbs is connected in series, which means that the entire string will not light
up if any one of the bulbs fails. Assume that the lifetimes of the bulbs, $\tau_1, \tau_2, \ldots, \tau_{10}$, are
independent random variables, each exponentially distributed with mean 2. Find the
distribution of the lifetime of this string of light bulbs.
Ref: Ramachandran, K. M. and Tsokos, C. P., Mathematical Statistics with Applications.
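By Theorem 3 the string fails at the minimum of the ten lifetimes, which (a standard fact for exponentials) is again exponential, with mean 2/10 = 0.2. A minimal simulation sketch of this claim:

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 100_000
lifetimes = rng.exponential(scale=2.0, size=(reps, 10))  # mean-2 lifetimes
string_life = lifetimes.min(axis=1)  # a series system fails at the first bulb failure

print(string_life.mean())  # should be close to 0.2
for y in [0.1, 0.3]:
    print(y, (string_life <= y).mean(), 1 - np.exp(-5 * y))  # empirical vs exact CDF
```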


4.4 Moment Generating Technique for Obtaining Probability Distribution

Recall:
Let X and Y be two random variables with joint density function 𝑓(𝑥, 𝑦). A real valued function
𝑚: ℝ2 → ℝ defined by

𝑚(𝑠, 𝑡) = 𝐸(𝑒 𝑠𝑋+𝑡𝑌 )

is called the joint moment generating function of X and Y if this expected value exists for all $s$ in some interval $-h < s < h$ and for all $t$ in some interval $-k < t < k$, for some positive $h$ and $k$.

Moment generating function of n random variables


Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be n random variables.
Let 𝑌𝑖 = 𝑔𝑖 (𝑋1 , 𝑋2 , … , 𝑋𝑛 ) 𝑓𝑜𝑟 𝑖 = 1,2, … , 𝑘

Then the joint moment generating function $m_{Y_1, Y_2, \ldots, Y_k}(t_1, t_2, \ldots, t_k)$ of $Y_1, Y_2, \ldots, Y_k$ is defined by

$$m_{Y_1, \ldots, Y_k}(t_1, \ldots, t_k) = E\left(e^{\sum_{i=1}^{k} t_i Y_i}\right) = E\left(e^{\sum_{i=1}^{k} t_i g_i(X_1, \ldots, X_n)}\right)$$

$$= \begin{cases} \displaystyle\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} e^{\sum_{i=1}^{k} t_i g_i(x_1, \ldots, x_n)}\, f_{X_1, \ldots, X_n}(x_1, \ldots, x_n)\prod_{i=1}^{n} dx_i, & \text{if continuous} \\[2ex] \displaystyle\sum_{x_1}\cdots\sum_{x_n} e^{\sum_{i=1}^{k} t_i g_i(x_1, \ldots, x_n)}\, P[X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n], & \text{if discrete} \end{cases}$$


Probability Distribution of Sum of Independent Random Variables

Theorem 4

Let $X_1, X_2, \ldots, X_n$ be n independent random variables and suppose that the moment generating function of each random variable exists for all $-h < t < h$ for some $h > 0$.

Let $Y = \sum_{i=1}^{n} X_i$. Then

$$m_Y(t) = \prod_{i=1}^{n} m_{X_i}(t), \qquad -h < t < h$$

Proof:

$$m_Y(t) = E(e^{tY}) = E\left(e^{t\sum_{i=1}^{n} X_i}\right)$$

$$= \begin{cases} \displaystyle\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} e^{t\sum_{i=1}^{n} x_i}\, f_{X_1, \ldots, X_n}(x_1, \ldots, x_n)\prod_{i=1}^{n} dx_i, & \text{if continuous} \\[2ex] \displaystyle\sum_{x_1}\cdots\sum_{x_n} e^{t\sum_{i=1}^{n} x_i}\, P[X_1 = x_1, \ldots, X_n = x_n], & \text{if discrete} \end{cases}$$

Since the $X_i$ are independent, the joint density (or joint probability mass function) factorizes into the product of the marginals, and $e^{t\sum x_i} = \prod_i e^{t x_i}$, so the multiple integral (or sum) splits into a product:

$$= \begin{cases} \displaystyle\prod_{i=1}^{n}\left\{\int_{-\infty}^{\infty} e^{t x_i}\, f_{X_i}(x_i)\, dx_i\right\}, & \text{if continuous} \\[2ex] \displaystyle\prod_{i=1}^{n}\left\{\sum_{x_i} e^{t x_i}\, P[X_i = x_i]\right\}, & \text{if discrete} \end{cases}$$

$$= \prod_{i=1}^{n} m_{X_i}(t)$$
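A minimal Monte Carlo sketch of Theorem 4, assuming three independent Exp(1) variables (an illustrative choice; Exp(1) has MGF $1/(1-t)$ for $t < 1$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, t = 3, 200_000, 0.4
x = rng.exponential(scale=1.0, size=(reps, n))
y = x.sum(axis=1)

mc_mgf = np.exp(t * y).mean()      # Monte Carlo estimate of m_Y(t) = E[e^{tY}]
product = (1.0 / (1.0 - t)) ** n   # Theorem 4: product of the marginal MGFs
print(mc_mgf, product)             # should be close
```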

Example 7:

Let X1 and X2 be independent random variables with uniform distributions on {1, 2, 3, 4}.
Let Y = X1 + X2; for example, Y could equal the sum when two fair four-sided dice are rolled. Find the distribution of Y.

Solution:
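As a sketch, the pmf of Y can be read off by enumeration (equivalently, from the coefficients of $m_Y(t) = \left(\frac{1}{4}\sum_{k=1}^{4} e^{tk}\right)^2$):

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 16 equally likely outcomes of the two four-sided dice.
counts = Counter(x1 + x2 for x1 in range(1, 5) for x2 in range(1, 5))
pmf = {y: Fraction(c, 16) for y, c in sorted(counts.items())}
print(pmf)  # P(Y=2) = 1/16, P(Y=3) = 2/16, ..., P(Y=5) = 4/16, ..., P(Y=8) = 1/16
```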


Example 8:


4.5 Transformations

Recall:

Let X be a continuous random variable with probability density function $f_X(x)$.

Let $\mathcal{X} = \{x : f_X(x) > 0\}$.

Assume that
(i) $Y = g(X)$ is a one-to-one transformation of $\mathcal{X}$ onto $\mathcal{Y}$;
(ii) the derivative of $x = g^{-1}(y)$ with respect to $y$ is continuous and nonzero for $y \in \mathcal{Y}$.
Then $Y = g(X)$ is a continuous random variable with probability density function

$$f_Y(y) = f_X\left(g^{-1}(y)\right)\left|\frac{dg^{-1}(y)}{dy}\right|, \qquad y \in \mathcal{Y}$$
Proof:
Case (i): when $g(x)$ is a monotone increasing function of $x$:

$$F_Y(y) = P\{Y \le y\} = P\{g(X) \le y\} = P\{X \le g^{-1}(y)\} = F_X\left(g^{-1}(y)\right)$$

$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{dF_X\left(g^{-1}(y)\right)}{dy} = f_X\left(g^{-1}(y)\right)\frac{dg^{-1}(y)}{dy}$$

Case (ii): when $g(x)$ is a monotone decreasing function of $x$:

$$F_Y(y) = P\{Y \le y\} = P\{g(X) \le y\} = P\{X \ge g^{-1}(y)\} \quad (\text{since } g(x) \text{ is monotone decreasing})$$

$$= 1 - F_X\left(g^{-1}(y)\right)$$

Thus the density function:

$$f_Y(y) = \frac{dF_Y(y)}{dy} = -f_X\left(g^{-1}(y)\right)\frac{dg^{-1}(y)}{dy} = f_X\left(g^{-1}(y)\right)\left|\frac{dg^{-1}(y)}{dy}\right|,$$

since $\frac{dg^{-1}(y)}{dy} < 0$ when $g(x)$ is a monotone decreasing function of $x$.

Thus in both cases we have

$$f_Y(y) = \left|\frac{dg^{-1}(y)}{dy}\right| f_X\left(g^{-1}(y)\right)$$

Hence the proof.


If the function is not monotone, its domain can be partitioned into intervals on which it is monotone; applying the theorem on each piece and summing the contributions leads to the same form of result.
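A minimal simulation sketch of the theorem, taking X ~ Exp(1) and the monotone decreasing map $Y = e^{-X}$ (illustrative choices): here $g^{-1}(y) = -\ln y$ and $|dg^{-1}(y)/dy| = 1/y$, so the theorem gives $f_Y(y) = e^{\ln y}\cdot\frac{1}{y} = 1$ on $(0,1)$, i.e. Y should be U(0,1).

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=100_000)
y = np.exp(-x)  # monotone decreasing transformation of X

# Theorem: f_Y(y) = f_X(-ln y) * (1/y) = y * (1/y) = 1 on (0, 1)
hist, _ = np.histogram(y, bins=10, range=(0.0, 1.0), density=True)
print(hist)  # every bin should be close to 1, the U(0,1) density
```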

Theorem 5
Let $X_1$ and $X_2$ be jointly continuous random variables with joint probability density function $f_{X_1, X_2}(x_1, x_2)$.

Let $\mathcal{X} = \{(x_1, x_2) : f_{X_1, X_2}(x_1, x_2) > 0\}$.

Assume that
(i) $y_1 = g_1(x_1, x_2)$ and $y_2 = g_2(x_1, x_2)$ form a one-to-one transformation of $\mathcal{X}$ onto $\mathcal{Y}$;
(ii) the first partial derivatives of $x_1 = g_1^{-1}(y_1, y_2)$ and $x_2 = g_2^{-1}(y_1, y_2)$ are continuous over $\mathcal{Y}$;
(iii) the Jacobian $J$ of the transformation is nonzero for $(y_1, y_2) \in \mathcal{Y}$.
Then the joint probability density function of $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$ is given by

$$f_{Y_1, Y_2}(y_1, y_2) = |J|\, f_{X_1, X_2}\left(g_1^{-1}(y_1, y_2),\, g_2^{-1}(y_1, y_2)\right)$$

Note:

$$J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[1.5ex] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix}$$


Example 9:

Hogg, Robert V. Probability and Statistical Inference/ Page

Example 10:


Show that

Find the marginal distributions.

Solution:
