
Tutorial 1

1. We know that if A ⊆ B then P(B \ A) = P(B) − P(A).

   (a) Note that

       Aᶜ ∩ B = B \ (A ∩ B) ⟹ P(Aᶜ ∩ B) = P(B) − P(A ∩ B),

       since A ∩ B ⊆ B.

   (b) Note that

       A ∩ Bᶜ = A \ (A ∩ B) ⟹ P(A ∩ Bᶜ) = P(A) − P(A ∩ B),

       since A ∩ B ⊆ A.

2. We know P(A ∪ B) = P(A) + P(B) − P(A ∩ B). With P(A ∪ B) = 3/4, P(Aᶜ) = 2/3 and
   P(A ∩ B) = 1/4, this implies

       3/4 = (1 − 2/3) + P(B) − 1/4 ⟹ P(B) = 2/3.

   We proved P(A ∩ Bᶜ) = P(A) − P(A ∩ B) in the last problem. Hence,

       P(A ∩ Bᶜ) = (1 − 2/3) − 1/4 = 1/3 − 1/4 = 1/12.

3. We have

       P(A) = 20/100, P(B) = 16/100, P(C) = 14/100,
       P(A ∩ B) = 8/100, P(A ∩ C) = 5/100, P(B ∩ C) = 4/100, P(A ∩ B ∩ C) = 2/100.

   The probability that the person reads A or B or C is given by

       P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
                    = (20 + 16 + 14 − 8 − 4 − 5 + 2)/100
                    = 35/100 = 7/20.

   Hence the probability that the person reads none of the papers is

       1 − P(A ∪ B ∪ C) = 1 − 7/20 = 13/20.

4. The probabilities that the problem is solved by the students A, B and C are

       P(A) = 1/2, P(B) = 1/3 and P(C) = 1/4,

   respectively. Therefore,

       P(Aᶜ) = 1/2, P(Bᶜ) = 2/3 and P(Cᶜ) = 3/4.

   The problem will be solved if at least one of them solves it. Assuming the students
   work independently, the probability that none of them solves it is

       P(Aᶜ)P(Bᶜ)P(Cᶜ) = (1/2) · (2/3) · (3/4) = 1/4.

   Hence, the probability that the problem is solved is

       1 − P(Aᶜ)P(Bᶜ)P(Cᶜ) = 1 − 1/4 = 3/4.

5. Let A be the event that at least two persons share the same birthday. Then Aᶜ denotes
   the event that no two persons have the same birthday. Therefore,

       P(Aᶜ) = (365 × 364 × ··· × (365 − n + 1)) / 365ⁿ
             = (1 − 1/365)(1 − 2/365) ··· (1 − (n − 1)/365).

   Hence,

       P(A) = 1 − (1 − 1/365)(1 − 2/365) ··· (1 − (n − 1)/365).

   The probabilities for different values of n are given in the following table:

       n     P(A)
       10    0.117
       30    0.706
       50    0.970
       60    0.994
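The table above can be reproduced directly from the product form of P(Aᶜ); a minimal Python sketch:

```python
# Birthday problem: P(at least two of n people share a birthday),
# using P(A) = 1 - prod_{k=1}^{n-1} (1 - k/365).
def birthday_prob(n: int) -> float:
    p_no_match = 1.0
    for k in range(1, n):
        p_no_match *= 1.0 - k / 365.0
    return 1.0 - p_no_match

for n in (10, 30, 50, 60):
    print(n, round(birthday_prob(n), 3))
```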

6. We know that P(A ∪ B) = P(A) + P(B) − P(A ∩ B). Since P(A ∪ B) ≤ 1, we have

       P(A) + P(B) − P(A ∩ B) ≤ 1 ⟹ a + b − 1 ≤ P(A ∩ B).     (1)

   By the definition of conditional probability, we have

       P(A | B) = P(A ∩ B)/P(B) ⟹ P(A ∩ B) = bP(A | B).     (2)

   From (1) and (2), we have

       P(A | B) ≥ (a + b − 1)/b.

7. The sample space of the experiment, when three fair dice are thrown, consists of 6³ = 216
   elements. Let A denote the event that no two dice show the same face when three fair
   dice are thrown.

   (a) Let B denote the event that the sum of the faces is 7. We have to find

           P(B | A) = P(B ∩ A)/P(A).

       If we fix the first die, then no two dice show the same face in the following ways:

           (123), (124), (125), (126), (132), (134), (135), (136), (142), (143),
           (145), (146), (152), (153), (154), (156), (162), (163), (164), (165).

       In this way, we get 20 favourable cases for the event A (when the first die is fixed).
       Similarly, by fixing each of the six faces at a time, we get 20 × 6 = 120 favourable
       cases for the event A. Therefore,

           P(A) = 120/216.

       Also B ∩ A = {(124), (142), (214), (241), (412), (421)}. So,

           P(B ∩ A) = 6/216.

       Hence,

           P(B | A) = 6/120 = 1/20.

   (b) Let C denote the event that one face is an ace when three fair dice are thrown. We
       have to find

           P(C | A) = P(C ∩ A)/P(A).

       As in part (a), the number of cases favouring the event C ∩ A contributed by fixing
       the face '1' (ace) first is 20, and each of the remaining five faces contributes 8.
       For example, with the face 2 fixed first, we get an ace in the following ways:

           (213), (214), (215), (216), (231), (241), (251), (261).

       Thus, we have 20 + 5 × 8 = 60 favourable cases for the event C ∩ A. Hence,

           P(C | A) = (60/216)/(120/216) = 1/2.
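Both conditional probabilities can be verified by brute force over all 216 outcomes; a short check in Python:

```python
from itertools import product

# Enumerate all 6^3 = 216 outcomes of three dice.
outcomes = list(product(range(1, 7), repeat=3))
A = [o for o in outcomes if len(set(o)) == 3]   # all three faces distinct
B_and_A = [o for o in A if sum(o) == 7]         # distinct faces, sum 7
C_and_A = [o for o in A if 1 in o]              # distinct faces, one ace

print(len(A))                 # 120
print(len(B_and_A) / len(A))  # P(B | A) = 6/120
print(len(C_and_A) / len(A))  # P(C | A) = 60/120
```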

8. Since A and B are independent, we have

       P(A ∩ B) = P(A)P(B),

   and A ⊆ B implies P(A ∩ B) = P(A). Hence,

       P(A) = P(A)P(B) ⟹ P(A) = 0 or P(B) = 1.

9. Let B₁, B₂, ..., B₆ be six mutually disjoint events, where Bⱼ : j appears on the die,
   1 ≤ j ≤ 6. Clearly P(Bⱼ) = 1/6, 1 ≤ j ≤ 6. Let

       B = B₁ ∪ B₂ ∪ ··· ∪ B₆.

   Let A denote the event that the head appears exactly twice when the coin is tossed as
   many times as the face shown on the die. We have to find P(A). By the theorem of total
   probability, we have

       P(A) = Σⱼ₌₁⁶ P(Bⱼ) P(A | Bⱼ) = (1/6) Σⱼ₌₁⁶ P(A | Bⱼ).

   Note that if face 1 appears on the die, we cannot get two heads, so P(A | B₁) = 0;
   for j ≥ 2, P(A | Bⱼ) = C(j, 2)/2ʲ. Thus,

       P(A) = (1/6) [0 + C(2,2)/2² + C(3,2)/2³ + C(4,2)/2⁴ + C(5,2)/2⁵ + C(6,2)/2⁶]
            = (1/24) [1 + 3/2 + 6/4 + 10/8 + 15/16]
            = (1/24) [(16 + 24 + 24 + 20 + 15)/16]
            = 99/384.
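The total-probability sum can be checked in exact arithmetic:

```python
from fractions import Fraction
from math import comb

# Die shows j, a fair coin is tossed j times:
# P(exactly two heads) = (1/6) * sum_j C(j,2)/2^j.
p = sum(Fraction(1, 6) * Fraction(comb(j, 2), 2**j) for j in range(1, 7))
print(p)  # 99/384 in lowest terms
```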

10. Let E₁, E₂ and E₃ denote the events that the urns I, II and III are chosen, respectively,
    and let E be the event that the two balls taken from the selected urn are white and red.
    Then

        P(E₁) = P(E₂) = P(E₃) = 1/3,

    and, counting the ways of drawing one white and one red ball from each urn,

        P(E | E₁) = 2/15, P(E | E₂) = 4/15 and P(E | E₃) = 5/15.

    By Bayes' theorem, we get

        P(E₁ | E) = P(E₁) P(E | E₁) / Σᵢ₌₁³ P(Eᵢ) P(E | Eᵢ)
                  = (1/3)(2/15) / [(1/3)(2/15) + (1/3)(4/15) + (1/3)(5/15)]
                  = 2/11.
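The Bayes computation can be checked numerically. The urn compositions are not reproduced legibly in this sheet, so the likelihoods below are assumptions chosen to be consistent with the stated answer 2/11:

```python
from fractions import Fraction

# Equal priors for the three urns; likelihoods of drawing one white
# and one red ball from each urn (assumed values, see lead-in).
priors = [Fraction(1, 3)] * 3
likelihoods = [Fraction(2, 15), Fraction(4, 15), Fraction(5, 15)]

total = sum(pr * lk for pr, lk in zip(priors, likelihoods))
posterior_E1 = priors[0] * likelihoods[0] / total
print(posterior_E1)  # 2/11
```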

Tutorial 2

1. Given

       FX(x) = P(X ≤ x) = 0     for x < −2,
                          1/10  for −2 ≤ x < −1,
                          3/10  for −1 ≤ x < 1,
                          6/10  for 1 ≤ x < 2,
                          1     for x ≥ 2.

   Clearly, the range of the random variable is {−2, −1, 1, 2}. Therefore, the probabilities
   can be calculated as

       pX(−2) = P(X = −2) = P(X ≤ −2) − P(X < −2) = 1/10 − 0 = 1/10
       pX(−1) = P(X = −1) = P(X ≤ −1) − P(X < −1) = 3/10 − 1/10 = 2/10
       pX(1)  = P(X = 1)  = P(X ≤ 1) − P(X < 1)  = 6/10 − 3/10 = 3/10
       pX(2)  = P(X = 2)  = P(X ≤ 2) − P(X < 2)  = 1 − 6/10 = 4/10.

   Hence, the pmf is given by

       x       −2     −1     1      2
       pX(x)   1/10   2/10   3/10   4/10

2. We may take the probability mass function of X as follows:

       x       −2   −1   0   1   2
       pX(x)   a    a    c   b   b

   As given, c = 2a and c = 2b ⟹ a = b. Since

       Σₓ₌₋₂² pX(x) = 1 ⟹ 2a + 2b + c = 6a = 1 ⟹ a = 1/6,

   and hence

       b = 1/6 and c = 1/3.

   Therefore, the pmf is given by

       x       −2    −1    0     1     2
       pX(x)   1/6   1/6   1/3   1/6   1/6

   Hence, the cdf is

       FX(x) = P(X ≤ x) = 0    if x < −2,
                          1/6  if −2 ≤ x < −1,
                          2/6  if −1 ≤ x < 0,
                          4/6  if 0 ≤ x < 1,
                          5/6  if 1 ≤ x < 2,
                          1    if x ≥ 2.

3. The function fX(x) = kx², 0 < x < 1, is a pdf if

       ∫₀¹ fX(x) dx = 1 ⟹ ∫₀¹ kx² dx = 1 ⟹ k(1/3) = 1 ⟹ k = 3.

   Now

       P(1/3 < X < 1/2) = ∫ 3x² dx over (1/3, 1/2) = (1/2)³ − (1/3)³ = 1/8 − 1/27 = 19/216.
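Since F(x) = x³ on (0, 1), the probability is a difference of cubes; exact rational arithmetic confirms it:

```python
from fractions import Fraction

# P(1/3 < X < 1/2) = F(1/2) - F(1/3) with F(x) = x^3.
prob = Fraction(1, 2)**3 - Fraction(1, 3)**3
print(prob)  # 19/216
```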

4. The given function fX(x) = k/(1 + x²), −∞ < x < ∞, will be a density function if

       ∫₋∞^∞ k/(1 + x²) dx = 1.

   This implies

       k [tan⁻¹x]₋∞^∞ = 1 ⟹ k(π/2 + π/2) = 1 ⟹ k = 1/π.

   Therefore,

       fX(x) = 1/(π(1 + x²)) and FX(x) = ∫₋∞^x 1/(π(1 + t²)) dt,

   or

       FX(x) = (1/π) tan⁻¹x + c.

   For a distribution function, F(−∞) = 0. This implies

       (1/π)(−π/2) + c = 0 ⟹ c = 1/2.

   Hence, the required distribution function is

       FX(x) = (1/π) tan⁻¹x + 1/2, −∞ < x < ∞.

5. Note that

       E(X) = 3 × 1/6 + 6 × 1/2 + 9 × 1/3 = 13/2
       E(X²) = 9 × 1/6 + 36 × 1/2 + 81 × 1/3 = 93/2.

   Therefore,

       E(2X + 1)² = E(4X² + 4X + 1) = 4E(X²) + 4E(X) + 1
                  = 4 × 93/2 + 4 × 13/2 + 1
                  = 186 + 26 + 1
                  = 213.

6. Given

       FX(x) = P(X ≤ x) = 0    for x < −1,
                          1/8  for −1 ≤ x < 0,
                          1/4  for 0 ≤ x < 1,
                          1/2  for 1 ≤ x < 2,
                          1    for x ≥ 2.

   Clearly, the range of the random variable is {−1, 0, 1, 2}. Therefore, the probabilities
   can be calculated as

       pX(−1) = P(X = −1) = P(X ≤ −1) − P(X < −1) = 1/8 − 0 = 1/8
       pX(0)  = P(X = 0)  = P(X ≤ 0) − P(X < 0) = 2/8 − 1/8 = 1/8
       pX(1)  = P(X = 1)  = P(X ≤ 1) − P(X < 1) = 4/8 − 2/8 = 1/4
       pX(2)  = P(X = 2)  = P(X ≤ 2) − P(X < 2) = 1 − 4/8 = 1/2.

   Hence, the pmf is given by

       x       −1    0     1     2
       pX(x)   1/8   1/8   1/4   1/2

   Therefore,

       E(X) = (−1) × 1/8 + 0 × 1/8 + 1 × 1/4 + 2 × 1/2 = 9/8
       E(X²) = (−1)² × 1/8 + 0² × 1/8 + 1² × 1/4 + 2² × 1/2 = 19/8.

   Hence,

       Var(X) = E(X²) − (E(X))² = 19/8 − (9/8)² = 71/64.
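The mean and variance can be double-checked in exact arithmetic:

```python
from fractions import Fraction

# Mean and variance of the pmf in problem 6.
pmf = {-1: Fraction(1, 8), 0: Fraction(1, 8),
       1: Fraction(1, 4), 2: Fraction(1, 2)}

mean = sum(x * p for x, p in pmf.items())
second_moment = sum(x**2 * p for x, p in pmf.items())
variance = second_moment - mean**2
print(mean, second_moment, variance)  # 9/8 19/8 71/64
```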

7. Given

       pX(x) = 0.1 for x = −2,
               0.2 for x = 0,
               0.3 for x = 2,
               0.4 for x = 5,
               0 otherwise.

   Therefore,

       µ = E(X) = (−2) × 0.1 + 0 × 0.2 + 2 × 0.3 + 5 × 0.4 = 2.4
       E(X²) = (−2)² × 0.1 + 0² × 0.2 + 2² × 0.3 + 5² × 0.4 = 11.6.

   Therefore,

       E(X − µ)² = E(X²) − (E(X))² = 11.6 − (2.4)² = 5.84

   and

       standard deviation = √5.84 ≈ 2.42.

8. Let X be the random variable denoting the number appearing on the die. Then

       P(X = x) = 1/6, for x = 1, 2, 3, 4, 5, 6.

   The required expectation is

       E(2X + 5) = Σₓ₌₁⁶ (2x + 5)P(X = x) = 2 × 7/2 + 5 = 12.

9. Since fX(x) = kxe^(−λx), x > 0, is a pdf of X, we have

       ∫₀^∞ fX(x) dx = ∫₀^∞ kxe^(−λx) dx = k/λ² = 1 ⟹ k = λ².

   Therefore, integrating by parts,

       E(X) = ∫₀^∞ x fX(x) dx = λ² ∫₀^∞ x² e^(−λx) dx
            = λ² ( [−x²e^(−λx)/λ]₀^∞ + (2/λ) ∫₀^∞ x e^(−λx) dx )
            = λ² ( 0 + (2/λ)(1/λ²) )
            = 2/λ,                                             (3)

   and

       E(X²) = ∫₀^∞ x² fX(x) dx = λ² ∫₀^∞ x³ e^(−λx) dx
             = λ² [−x³e^(−λx)/λ]₀^∞ + (3/λ) λ² ∫₀^∞ x² e^(−λx) dx
             = 0 + (3/λ)(2/λ) = 6/λ², (using (3)).

   Hence,

       Var(X) = E(X²) − (E(X))² = 6/λ² − (2/λ)² = 2/λ².

10. (a) We have

            α − β < x < α + β ⟹ −β < x − α < β, β > 0
                             ⟹ −1 < (x − α)/β < 1
                             ⟹ 0 ≤ ((x − α)/β)² < 1.

        Therefore, fX(x) = (k/β)[1 − ((x − α)/β)²] > 0 for all x ∈ (α − β, α + β) and
        β > 0. Thus fX(x) is a pdf if

            ∫ (k/β)[1 − ((x − α)/β)²] dx over (α − β, α + β) = 1
            ⟹ (k/β)[x − (x − α)³/(3β²)] evaluated from α − β to α + β = 1
            ⟹ (k/β)(2β − 2β/3) = 1
            ⟹ k(4/3) = 1
            ⟹ k = 3/4.

    (b) The mean of X is given by

            E(X) = ∫ x fX(x) dx over (α − β, α + β)
                 = (3/(4β)) ∫ x [1 − (x − α)²/β²] dx over (α − β, α + β).

        Substituting u = x − α, the integrand becomes (α + u)(1 − u²/β²), and the odd
        powers of u integrate to zero over (−β, β). Hence

            E(X) = (3/(4β)) α ∫ (1 − u²/β²) du over (−β, β)
                 = (3/(4β)) α (4β/3) = α.

        Similarly,

            E(X²) = (3/(4β)) ∫ (α + u)²(1 − u²/β²) du over (−β, β)
                  = (3/(4β)) [ α²(4β/3) + ∫ u²(1 − u²/β²) du over (−β, β) ]
                  = α² + (3/(4β))(2β³/3 − 2β³/5)
                  = α² + β²/5.

        Hence,

            Var(X) = E(X²) − (E(X))² = α² + β²/5 − α² = β²/5.

Tutorial 3

1. We have Y = aX − b, E(X) = 10 and Var(X) = 25. Now

       E(Y) = E(aX − b) = aE(X) − b = 10a − b

   and

       Var(Y) = Var(aX − b) = a²Var(X) = 25a².

   Therefore,

       E(Y) = 0 ⟹ 10a − b = 0

   and

       Var(Y) = 1 ⟹ 25a² = 1 ⟹ a = 1/5 (taking a > 0).

   Hence,

       a = 1/5 and b = 2.

2. (a) The sample space of the given experiment is

           Ω = {H, TH, TTH, TTTH, ...}.

       Since X denotes the number of tosses required, X can take the values 1, 2, 3, ....
       Therefore,

           P(X = 1) = 1/2
           P(X = 2) = (1/2)(1/2) = 1/2²
           ...
           P(X = x) = 1/2ˣ, x = 1, 2, 3, ....

       It may be noted that

           Σₓ₌₁^∞ P(X = x) = Σₓ₌₁^∞ 1/2ˣ = (1/2)/(1 − 1/2) = 1.

       Hence, the pmf of X is

           pX(x) = 1/2ˣ for x = 1, 2, 3, ...,
                   0 otherwise.

   (b) The mgf of X is given by

           MX(t) = E(e^(tX)) = Σₓ₌₁^∞ e^(tx) P(X = x) = Σₓ₌₁^∞ (e^t/2)ˣ = e^t/(2 − e^t),

       provided e^t < 2.

   (c) Note that

           M′X(t) = 2e^t/(2 − e^t)² ⟹ E(X) = M′X(0) = 2

       and

           M″X(t) = 2e^t(2 + e^t)/(2 − e^t)³ ⟹ E(X²) = M″X(0) = 6.

       Hence,

           Var(X) = E(X²) − (E(X))² = 6 − 4 = 2.
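The moments obtained from the mgf can be sanity-checked by summing the series directly, truncated where the geometric tail is negligible:

```python
# Numerical check of E(X) = 2 and Var(X) = 2 for P(X = x) = 1/2^x, x >= 1.
mean = sum(x * 0.5**x for x in range(1, 200))
second = sum(x**2 * 0.5**x for x in range(1, 200))
var = second - mean**2
print(round(mean, 6), round(var, 6))  # 2.0 2.0
```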

3. The mgf of X, with fX(x) = λe^(−λx), x > 0, is given by

       MX(t) = E(e^(tX)) = ∫₋∞^∞ e^(tx) fX(x) dx
             = ∫₀^∞ e^(tx) λe^(−λx) dx = λ ∫₀^∞ e^(−(λ−t)x) dx, where λ > t,
             = λ [−e^(−(λ−t)x)/(λ − t)]₀^∞
             = λ/(λ − t), for λ > t.

   Therefore,

       MX(t) = λ/(λ − t), for λ > t.

   Note that

       M′X(t) = λ/(λ − t)² ⟹ E(X) = M′X(0) = 1/λ

   and

       M″X(t) = 2λ/(λ − t)³ ⟹ E(X²) = M″X(0) = 2/λ².

   Hence,

       Var(X) = E(X²) − (E(X))² = 2/λ² − 1/λ² = 1/λ².

4. Note that, for pX(x) = 1/(x(x + 1)), x = 1, 2, ...,

       MX(t) = E(e^(tX)) = Σₓ₌₁^∞ e^(tx)/(x(x + 1))
             = Σₓ₌₁^∞ e^(tx) (1/x − 1/(1 + x))
             = Σₓ₌₁^∞ e^(tx)/x − e^(−t) Σₓ₌₁^∞ e^(t(x+1))/(1 + x)
             = −ln(1 − e^t) − e^(−t)(−ln(1 − e^t) − e^t), for e^t < 1, i.e. t < 0,
             = 1 + (e^(−t) − 1) ln(1 − e^t), for t < 0.

   Therefore,

       MX(t) = 1 + (e^(−t) − 1) ln(1 − e^t) if t < 0,
               1 if t = 0.

   Next, for any positive integer r,

       E(Xʳ) = Σₓ₌₁^∞ xʳ P(X = x) = Σₓ₌₁^∞ xʳ⁻¹/(1 + x).

   Note that the terms of the above series are positive, and

       xʳ⁻¹/(1 + x) ≥ 1/(1 + x), for r = 1, 2, ....

   Since Σ 1/(1 + x) diverges, by the comparison test E(Xʳ) does not exist for any
   positive integer r.

5. The pgf of X, where pX(x) = pq^(x−1), x = 1, 2, ..., is given by

       GX(t) = Σₓ₌₁^∞ P(X = x) tˣ = Σₓ₌₁^∞ pq^(x−1) tˣ
             = (p/q) Σₓ₌₁^∞ (qt)ˣ
             = pt/(1 − qt), provided |t| < 1/q.

   Note that

       G′X(t) = p/(1 − qt)²     ⟹ µ = E(X) = G′X(1) = 1/p
       G″X(t) = 2pq/(1 − qt)³   ⟹ E(X(X−1)) = G″X(1) = 2q/p²
       G‴X(t) = 6pq²/(1 − qt)⁴  ⟹ E(X(X−1)(X−2)) = G‴X(1) = 6q²/p³
       G⁗X(t) = 24pq³/(1 − qt)⁵ ⟹ E(X(X−1)(X−2)(X−3)) = G⁗X(1) = 24q³/p⁴.

   Solving the above equations for E(X), E(X − µ)², E(X − µ)³ and E(X − µ)⁴, we get

       µ = E(X) = 1/p
       σ² = Var(X) = q/p²
       γ₁ = E((X − µ)/σ)³ = (2 − p)/√q
       γ₂ = E((X − µ)/σ)⁴ − 3 = 6 + p²/q.

6. The characteristic function of X, with fX(x) = e^(−x)x^(r−1)/Γ(r), x > 0, is given by

       φX(t) = E(e^(itX)) = ∫₋∞^∞ e^(itx) fX(x) dx
             = ∫₀^∞ e^(itx) e^(−x)x^(r−1)/Γ(r) dx
             = (1/Γ(r)) ∫₀^∞ x^(r−1) e^(−(1−it)x) dx
             = (1/Γ(r)) ∫₀^∞ e^(−z) z^(r−1)/(1 − it)^(r−1) · dz/(1 − it)   (put (1 − it)x = z)
             = ((1 − it)^(−r)/Γ(r)) Γ(r) = (1 − it)^(−r).

   Note that

       φ′X(t) = ir(1 − it)^(−r−1)           ⟹ µ = E(X) = (1/i) φ′X(0) = r
       φ″X(t) = i²r(r+1)(1 − it)^(−r−2)     ⟹ E(X²) = (1/i²) φ″X(0) = r(r + 1)
       φ‴X(t) = i³r(r+1)(r+2)(1 − it)^(−r−3) ⟹ E(X³) = (1/i³) φ‴X(0) = r(r+1)(r+2)
       φ⁗X(t) = i⁴r(r+1)(r+2)(r+3)(1 − it)^(−r−4)
              ⟹ E(X⁴) = (1/i⁴) φ⁗X(0) = r(r+1)(r+2)(r+3).

   Solving the above equations for E(X), E(X − µ)², E(X − µ)³ and E(X − µ)⁴, we get

       µ = E(X) = r
       σ² = Var(X) = r
       γ₁ = E((X − µ)/σ)³ = 2/√r
       γ₂ = E((X − µ)/σ)⁴ − 3 = 6/r.

7. Since there are three answers to each question, out of which only one is correct, the
   probability of answering a question correctly is

       p = 1/3 ⟹ q = 2/3.

   The probability of getting x correct answers out of 8 questions is given by the binomial
   law:

       P(X = x) = C(8, x) (1/3)ˣ (2/3)^(8−x), x = 0, 1, ..., 8.

   Now 75% of 8 questions is (75/100) × 8 = 6. Therefore, the required probability of
   getting at least 6 correct answers out of 8 questions is

       P(X ≥ 6) = P(X = 6) + P(X = 7) + P(X = 8) ≈ 0.0197.
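The binomial tail probability can be computed exactly:

```python
from fractions import Fraction
from math import comb

# P(X >= 6) for X ~ B(8, 1/3): sum the binomial pmf over x = 6, 7, 8.
p = Fraction(1, 3)
tail = sum(comb(8, x) * p**x * (1 - p)**(8 - x) for x in range(6, 9))
print(tail, float(tail))  # 129/6561, about 0.0197
```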

8. Let X denote the number of even numbers appearing in 10 throws, with success probability
   p. Then X ~ B(10, p) and

       P(X = x) = C(10, x) pˣ(1 − p)^(10−x), x = 0, 1, ..., 10.

   The given condition implies

       P(X = 5) = 2P(X = 4) ⟹ C(10,5) p⁵(1 − p)⁵ = 2 C(10,4) p⁴(1 − p)⁶
                            ⟹ p/(1 − p) = 5/3 ⟹ p = 5/8.

   Therefore,

       Required Probability = P(X = 0) = C(10,0)(5/8)⁰(3/8)¹⁰ = (3/8)¹⁰.

9. Let n be the number of fired missiles. Suppose X denotes the number of missiles that
   successfully hit the target. Then X ~ B(n, 0.3) and

       P(X = x) = C(n, x)(0.3)ˣ(0.7)^(n−x), x = 0, 1, ..., n.

   The required condition implies that

       P(X ≥ 1) ≥ 0.9 ⟹ 1 − P(X = 0) ≥ 0.9 ⟹ P(X = 0) ≤ 0.1 ⟹ (0.7)ⁿ ≤ 0.1.

   Hence, n ≥ 7 and so at least 7 missiles should be fired.

   Alternative Solution: Let X denote the number of the missile that first hits the target,
   where each missile hits with probability 0.3. Then

       P(X = x) = (0.7)^(x−1)(0.3), x = 1, 2, ....

   Suppose n is the number of missiles that should be fired so that there is at least a 90%
   probability of hitting the target. Then

       P(X ≤ n) ≥ 0.9 ⟹ 1 − (0.7)ⁿ ≥ 0.9 ⟹ (0.7)ⁿ ≤ 0.1.

   Hence, n ≥ 7 and so at least 7 missiles should be fired.
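The threshold n = 7 follows from a one-line search:

```python
# Smallest n with (0.7)^n <= 0.1, i.e. P(at least one hit) >= 0.9.
n = 1
while 0.7**n > 0.1:
    n += 1
print(n)  # 7
```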

10. Let p be the probability of an assistant studying at the office. Since each assistant is
    just as likely to study at home as in the office, p = 1/2. Let X denote the number of
    graduate assistants in the office. Then X ~ B(8, 0.5) and

        P(X = x) = C(8, x)(0.5)ˣ(0.5)^(8−x), x = 0, 1, ..., 8.

    We need to find the number of desks, say k, in the office so that each assistant has a
    desk at least 90% of the time, that is,

        P(X ≤ k) ≥ 0.9 ⟹ P(X > k) ≤ 0.1.

    Note that P(X > 5) ≈ 0.14 and P(X > 6) ≈ 0.035. Hence, if there are 6 desks, then
    there is at least a 90% chance for every graduate assistant to get a desk.
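The two tail probabilities, and the resulting desk count, can be verified exactly:

```python
from fractions import Fraction
from math import comb

# X ~ B(8, 1/2): find the smallest k with P(X > k) <= 0.1.
def tail(k: int) -> Fraction:
    return sum(Fraction(comb(8, x), 2**8) for x in range(k + 1, 9))

k = next(k for k in range(9) if tail(k) <= Fraction(1, 10))
print(k, float(tail(5)), float(tail(6)))  # 6, about 0.14 and 0.035
```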

Tutorial 4

1. If X ~ B(n, p) then its pmf is given by

       P(X = x) = C(n, x) pˣ(1 − p)^(n−x), x = 0, 1, 2, ..., n.

   For the given problem,

       P(X = x) = C(5, x)(1/2)ˣ(1/2)^(5−x) = C(5, x)(1/2)⁵, x = 0, 1, 2, 3, 4, 5.

   The cdf is written as

       FX(x) = 0      if x < 0,
               1/32   if 0 ≤ x < 1,
               6/32   if 1 ≤ x < 2,
               16/32  if 2 ≤ x < 3,
               26/32  if 3 ≤ x < 4,
               31/32  if 4 ≤ x < 5,
               1      if x ≥ 5.

   Here,

       FX(2) = 1/2, FX(3) = 13/16 > 1/2, FX(3 − 0) = 1/2 and FX(2 − 0) = 6/32 < 1/2.

   This implies that every point of [2, 3] satisfies the median condition; we take the
   median to be (2 + 3)/2 = 2.5.

2. Let X ~ B(n, p). Then its pmf is given by

       pX(x) = C(n, x) pˣ(1 − p)^(n−x), x = 0, 1, 2, ..., n.

   For a mode x, we need

       pX(x − 1) ≤ pX(x) and pX(x) ≥ pX(x + 1)
       ⟹ C(n, x−1) p^(x−1)(1 − p)^(n−x+1) ≤ C(n, x) pˣ(1 − p)^(n−x)
          and C(n, x) pˣ(1 − p)^(n−x) ≥ C(n, x+1) p^(x+1)(1 − p)^(n−x−1)
       ⟹ (n + 1)p − 1 ≤ x ≤ (n + 1)p.     (5)

   Case I. Let (n + 1)p be an integer. Since the integer x lies between the two consecutive
   integers (n + 1)p − 1 and (n + 1)p, the inequality (5) is possible only when x = (n + 1)p
   or x = (n + 1)p − 1. Hence, there are two modes, (n + 1)p and (n + 1)p − 1. In this
   case, the distribution is said to be bimodal.

   Case II. Let (n + 1)p be a fraction. In this case, the integer x lies between two
   fractions differing by 1. Hence, the mode is the integral part of (n + 1)p.

3. Let X denote the number of wars in 25 years. Then X ~ P(25/15) = P(5/3) and

       P(X = x) = e^(−5/3)(5/3)ˣ/x!, x = 0, 1, 2, ....

   Hence,

       Required Probability = P(X = 0) = e^(−5/3) ≈ 0.189.

   Alternative: The probability that there is a war in a given year is 1/15, so the
   probability of no war in a year is 14/15. Hence,

       Required Probability = (14/15)²⁵ ≈ 0.178.
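The two models give slightly different answers, which a quick computation makes visible:

```python
from math import exp

# Poisson model with rate 5/3 per 25 years, versus the direct
# "no war in each of 25 independent years" computation.
p_poisson = exp(-5 / 3)        # P(X = 0) under Poisson(5/3)
p_binomial = (14 / 15) ** 25   # (1 - 1/15)^25
print(p_poisson, p_binomial)
```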

4. The average number of typographical errors per page in the book is

       λ = 390/520 = 0.75.

   Therefore, using the Poisson law, the probability of x errors on a page is

       P(X = x) = e^(−λ)λˣ/x! = e^(−0.75)(0.75)ˣ/x!, x = 0, 1, 2, ....

   The required probability that a random sample of 5 pages will contain no error is

       (P(X = 0))⁵ = (e^(−0.75))⁵ = e^(−3.75).

5. Exact Method: Let X be the number of white balls drawn. Then X ~ B(1000, 0.01) and

       P(X = x) = C(1000, x)(0.01)ˣ(0.99)^(1000−x), x = 0, 1, ..., 1000.

   Therefore,

       Required Probability = P(X = 10) = C(1000, 10)(0.01)¹⁰(0.99)⁹⁹⁰ ≈ 0.12574.

   Approximate Method: Note that n is large and p is small, so we can use the Poisson
   distribution with λ = np = 1000 × 0.01 = 10. Hence,

       Required Probability = P(X = 10) = e^(−10)(10)¹⁰/10! ≈ 0.12511.
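The quality of the Poisson approximation can be seen by computing both values:

```python
from math import comb, exp, factorial

# Exact B(1000, 0.01) probability of exactly 10 successes,
# versus the Poisson(10) approximation.
exact = comb(1000, 10) * 0.01**10 * 0.99**990
approx = exp(-10) * 10**10 / factorial(10)
print(round(exact, 5), round(approx, 5))  # 0.12574 0.12511
```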

6. Exact Method: Let X denote the number of bullets that successfully hit the target.
   Then X ~ B(5000, 0.001) and

       P(X = x) = C(5000, x)(0.001)ˣ(0.999)^(5000−x), x = 0, 1, ..., 5000.

   Therefore,

       Required Probability = P(X ≥ 2) = 1 − P(X < 2) = 1 − P(X = 0) − P(X = 1) ≈ 0.96.

   Approximate Method: Note that n is large and p is small, so we can use the Poisson
   distribution with λ = np = 5000 × 0.001 = 5. Hence,

       Required Probability = P(X ≥ 2) = 1 − P(X = 0) − P(X = 1)
                            = 1 − e^(−5)(5)⁰/0! − e^(−5)(5)¹/1! ≈ 0.96.

7. We have

       fX(x) = 1/(2 − 0) = 1/2, 0 < x < 2.

   By the given hypothesis, we obtain

       ∫₀¹ (1/2) dx = ∫₀¹ λe^(−λx) dx
       ⟹ 1/2 = [−e^(−λx)]₀¹ = 1 − e^(−λ)
       ⟹ e^(−λ) = 1/2
       ⟹ λ = −ln(1/2) = ln 2.

   Hence,

       λ = ln 2.

8. Here, µ = 30, σ = 5.

   (a) Consider

           P(26 ≤ X ≤ 40) = P((26 − 30)/5 ≤ (X − 30)/5 ≤ (40 − 30)/5)
                          = P(−0.8 ≤ Z ≤ 2)
                          = P(Z ≤ 2) − P(Z ≤ −0.8)
                          = 0.9772 − 0.2119
                          = 0.7653.

   (b) Consider

           P(X ≥ 45) = P((X − 30)/5 ≥ (45 − 30)/5)
                     = P(Z ≥ 3)
                     = 1 − P(Z ≤ 3)
                     = 1 − 0.9987
                     = 0.0013.

   (c) Consider

           P(|X − 30| ≤ 5) = P(−5 ≤ X − 30 ≤ 5)
                           = P(25 ≤ X ≤ 35)
                           = P((25 − 30)/5 ≤ (X − 30)/5 ≤ (35 − 30)/5)
                           = P(−1 ≤ Z ≤ 1)
                           = P(Z ≤ 1) − P(Z ≤ −1)
                           = 0.8413 − 0.1587
                           = 0.6826.
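The normal-table values above can be reproduced with the error function, using Φ(z) = (1 + erf(z/√2))/2:

```python
from math import erf, sqrt

# Standard normal cdf via the error function.
def phi(z: float) -> float:
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 30, 5
print(round(phi((40 - mu) / sigma) - phi((26 - mu) / sigma), 4))  # part (a)
print(round(1 - phi((45 - mu) / sigma), 4))                       # part (b)
print(round(phi(1) - phi(-1), 4))                                 # part (c)
```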

9. Let X be the random variable denoting the life of a lamp in burning hours. Given
   X ~ N(µ, σ²), where µ = 1000 and σ = 200.

   (i) Let p be the probability that a lamp fails in the first 800 burning hours. Then

           p = P(X < 800) = P((X − µ)/σ < (800 − µ)/σ) = P(Z < −1) = 0.1587.

       Hence, the expected number of lamps which fail in the first 800 burning hours is

           10,000 × 0.1587 = 1,587.

   (ii) The required probability is

           P(800 ≤ X ≤ 1,200) = P(−1 ≤ Z ≤ 1) = 0.6826.

       Hence, the expected number of lamps with life between 800 and 1,200 burning hours is

           10,000 × 0.6826 = 6,826.

   (a) Let 10% of the lamps fail within x₁ burning hours. Then

           P(X < x₁) = 0.10 ⟹ P(Z < (x₁ − 1000)/200) = 0.10.

       From the normal table,

           (x₁ − 1000)/200 = −1.28 ⟹ x₁ = 744.

       Hence, after 744 burning hours, 10% of the lamps will have failed.

   (b) Let 10% of the lamps be still burning after x₂ burning hours. Then

           P(X > x₂) = 0.10 ⟹ P(Z < (x₂ − 1000)/200) = 0.90.

       From the normal table,

           (x₂ − 1000)/200 = 1.28 ⟹ x₂ = 1256.

       Hence, after 1,256 burning hours, 10% of the lamps will still be burning.

10. Given X ~ N(µ, σ²), the corresponding pdf is

        fX(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)), −∞ < x < ∞, −∞ < µ < ∞, σ > 0.

    The cdf of X is given by

        FX(x) = (1/(σ√(2π))) ∫₋∞^x e^(−(t−µ)²/(2σ²)) dt.

    Now, the median is the least value of x for which FX(x) = 1/2. (Here we can already
    conclude from the symmetry of the normal curve about x = µ that the median is x = µ.)
    Indeed,

        FX(x) = 1/2
        ⟹ (1/(σ√(2π))) ∫₋∞^x e^(−(t−µ)²/(2σ²)) dt = 1/2
        ⟹ (1/√(2π)) ∫₋∞^((x−µ)/σ) e^(−u²/2) du = 1/2     (let u = (t − µ)/σ)
        ⟹ (x − µ)/σ = 0 ⟹ x = µ.

    Note. For any other continuous random variable which is not symmetrical, you have to
    proceed as above. Hence, µ is the median of the N(µ, σ²) distribution.

    Next,

        f′X(x) = 0 ⟹ −(1/(σ√(2π))) e^(−(x−µ)²/(2σ²)) · (x − µ)/σ² = 0 ⟹ x = µ.

    Also,

        f″X(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)) [ (x − µ)²/σ⁴ − 1/σ² ].

    Therefore,

        f″X(x)|ₓ₌µ = −1/(σ³√(2π)) < 0.

    Therefore, fX(x) has a unique maximum at x = µ. Hence, µ is the unique mode of the
    N(µ, σ²) distribution.

Tutorial 5

1. It is known that µ > 0 and σ = µ. Since {X < −µ} ⊆ {X < µ}, it can be easily verified
   that

       P(X < −µ | X < µ) = P({X < −µ} ∩ {X < µ}) / P(X < µ) = P(X < −µ)/P(X < µ).

   Observe that

       Z = (X − µ)/σ = (X − µ)/µ ~ N(0, 1).

   Hence,

       P(X < −µ | X < µ) = P((X − µ)/µ < −2) / P((X − µ)/µ < 0)
                         = P(Z < −2)/P(Z < 0)
                         = 0.02275/0.5 = 0.0455.

2. Note that

       fX(x) = ke^(−2x² + x) = ke^(−2(x² − x/2)) = ke^(1/8) e^(−2(x − 1/4)²)
             = ke^(1/8) e^(−(1/2)((x − 1/4)/(1/2))²).

   Comparing it with

       f(x) = (1/(σ√(2π))) e^(−(1/2)((x − µ)/σ)²),

   we obtain µ = 1/4, σ = 1/2 and

       ke^(1/8) = 1/(σ√(2π)) ⟹ k = √(2/π) e^(−1/8).

   Next, observe that

       Z = (X − 1/4)/(1/2) ~ N(0, 1).

   Therefore,

       P(X ≥ 1/4) = P(Z ≥ 0) = 0.5.

3. It is given that µ = 2 and σ = 3. Therefore,

       Z = (X − µ)/σ ~ N(0, 1).

   We have to find k such that

       P(µ < X < k) = 0.4115
       ⟹ P(0 < (X − µ)/σ < (k − µ)/σ) = 0.4115
       ⟹ P(0 < Z < (k − 2)/3) = 0.4115
       ⟹ (k − 2)/3 = 1.35
       ⟹ k = 6.05.

4. The range of the random variable X is {0, 1, 2}. Therefore, the pmf of X is given by

       pX(x) = 1/4 for x = 0,
               1/4 for x = 1,
               1/2 for x = 2,
               0 otherwise.

   Now Y = X² = g(X) is a derived random variable, and the range of Y is {0, 1, 4}. For
   x₁ ≠ x₂ in the range of X, we have g(x₁) ≠ g(x₂). This implies that the probability at
   each point of Y remains the same as that of the corresponding point of X. Therefore,

       pY(y) = 1/4 for y = 0,
               1/4 for y = 1,
               1/2 for y = 4,
               0 otherwise.

   Hence, the required cdf of Y is

       FY(y) = 0    for y < 0,
               1/4  for 0 ≤ y < 1,
               1/2  for 1 ≤ y < 4,
               1    for y ≥ 4.

5. The pmf of X is given by

       pX(x) = 1/21, for all x ∈ {−10, −9, ..., 9, 10}.

   (a) Let Y = 4X. Then the range of Y is {−40, −36, ..., 36, 40} and the pmf of Y is

           pY(y) = 1/21, for all y ∈ {−40, −36, ..., 36, 40}.

       Therefore,

           P(4X ≤ 2) = P(X ≤ 0) = 11/21.

   (b) Let Y = 4X + 4. Then the range of Y is {−36, −32, ..., 40, 44} and the pmf of Y is

           pY(y) = 1/21, for all y ∈ {−36, −32, ..., 40, 44}.

       Therefore,

           P(4X + 4 ≤ 2) = P(X ≤ −1) = 10/21.

   (c) Note that X² − X ≤ 2 exactly when −1 ≤ X ≤ 2, so

           P(X² − X ≤ 2) = P(X = −1) + P(X = 0) + P(X = 1) + P(X = 2) = 4/21.

   (d) Note that |X − 2| ≤ 2 exactly when 0 ≤ X ≤ 4, so

           P(|X − 2| ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = 5/21.

6. Given

       pX(x) = 0.2 for x = −2,
               0.3 for x = −1,
               0.4 for x = 1,
               0.1 for x = 2,
               0 otherwise.

   Note that

       E(X) = (−2) × 0.2 + (−1) × 0.3 + 1 × 0.4 + 2 × 0.1 = −0.1.

   (a) E(Y) = E(3X − 1) = 3E(X) − 1 = −1.3.
   (b) E(Z) = E(−X) = −E(X) = 0.1.
   (c) E(W) = E(|X|) = 1 × (0.3 + 0.4) + 2 × (0.2 + 0.1) = 1.3.

7. Given

       fX(x) = 0        for x ≤ 0,
               1/2      for 0 < x ≤ 1,
               1/(2x²)  for x > 1.

   Let Y = g(X) = 1/X. Then g(x) is strictly decreasing on (0, ∞). Also, x = g⁻¹(y) = 1/y
   and

       (d/dy) g⁻¹(y) = −1/y².

   Hence,

       fY(y) = |(d/dy) g⁻¹(y)| fX(g⁻¹(y))
             = (1/y²) · (y²/2) = 1/2      for 0 < y ≤ 1   (since then 1/y ≥ 1),
               (1/y²) · (1/2) = 1/(2y²)   for y > 1       (since then 0 < 1/y < 1),
               0                          for y ≤ 0,

   that is,

       fY(y) = 0        for y ≤ 0,
               1/2      for 0 < y ≤ 1,
               1/(2y²)  for y > 1.

   Hence, X and 1/X have the same distribution.

8. Given X ~ N(0, 1). Therefore, the pdf of X is given by

       fX(x) = (1/√(2π)) e^(−x²/2), −∞ < x < ∞.

   Let Y = X². Then, for y > 0, x = ±√y, so

       g₁⁻¹(y) = −√y, (d/dy) g₁⁻¹(y) = −1/(2√y),
       g₂⁻¹(y) = +√y, (d/dy) g₂⁻¹(y) = 1/(2√y).

   Hence,

       fY(y) = |(d/dy) g₁⁻¹(y)| fX(g₁⁻¹(y)) + |(d/dy) g₂⁻¹(y)| fX(g₂⁻¹(y))
             = (1/√(2π)) [ e^(−y/2)/(2√y) + e^(−y/2)/(2√y) ]
             = (1/√(2π)) e^(−y/2)/√y
             = ((1/2)^(1/2)/Γ(1/2)) e^(−y/2) y^(1/2 − 1),

   which is the pdf of Gamma(1/2, 1/2). Hence, Y ~ Gamma(1/2, 1/2).

9. Given X ~ N(µ, σ²) and Y = g(X) = e^X. Note that g(x) is a strictly increasing function
   and g⁻¹(y) = ln(y). Therefore,

       (d/dy) g⁻¹(y) = 1/y.

   Hence, for y > 0,

       fY(y) = |(d/dy) g⁻¹(y)| fX(g⁻¹(y)) = (1/(yσ√(2π))) e^(−(ln(y) − µ)²/(2σ²)),

   which is the density of the log-normal distribution.

10. Given X ~ N(50, 10²) and Y = X² + 1. Note that

        Z = (X − 50)/10 ~ N(0, 1).

    Therefore,

        P(Y ≤ 3137) = P(X² + 1 ≤ 3136 + 1)
                    = P(X² ≤ (56)²)
                    = P(−56 ≤ X ≤ 56)
                    = P((−56 − 50)/10 ≤ (X − 50)/10 ≤ (56 − 50)/10)
                    = P(−10.6 ≤ Z ≤ 0.6)
                    = P(Z ≤ 0.6) − P(Z < −10.6)
                    = 0.7258.