Assignment-1
Prof. G. N. Singh
Q1) Suppose $X \sim \mathrm{Bin}(n, p)$ and $T = \dfrac{X - \frac{\sqrt{n}}{2}}{n - \sqrt{n}}$. Then show that
(a) T is a biased estimator of p.
(b) T is a consistent estimator of p.
Solution:
Since $X \sim \mathrm{Bin}(n, p)$, ⇒ $E(X) = np$ and $V(X) = np(1-p)$, and
$$T = \frac{X - \frac{\sqrt{n}}{2}}{n - \sqrt{n}}$$
(a) $E(T) = E\left(\dfrac{X - \frac{\sqrt{n}}{2}}{n - \sqrt{n}}\right) = \dfrac{E(X) - \frac{\sqrt{n}}{2}}{n - \sqrt{n}} = \dfrac{np - \frac{\sqrt{n}}{2}}{n - \sqrt{n}} = \dfrac{p - \frac{1}{2\sqrt{n}}}{1 - \frac{1}{\sqrt{n}}} \ne p$
⇒ $E(T) \ne p$
So, T is a biased estimator of p.
(b) $E(T) = \dfrac{p - \frac{1}{2\sqrt{n}}}{1 - \frac{1}{\sqrt{n}}}$
and $V(T) = V\left(\dfrac{X - \frac{\sqrt{n}}{2}}{n - \sqrt{n}}\right) = \dfrac{V(X)}{(n - \sqrt{n})^2} = \dfrac{np(1-p)}{(n - \sqrt{n})^2} = \dfrac{p(1-p)}{n\left(1 - \frac{1}{\sqrt{n}}\right)^2}$
Now,
$E(T) = \dfrac{p - \frac{1}{2\sqrt{n}}}{1 - \frac{1}{\sqrt{n}}} \longrightarrow p$ as $n \longrightarrow \infty$
$V(T) = \dfrac{p(1-p)}{n\left(1 - \frac{1}{\sqrt{n}}\right)^2} \longrightarrow 0$ as $n \longrightarrow \infty$
So, by the sufficient condition of consistency, T is a consistent estimator of p.
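As a quick numerical sanity check (an illustrative sketch, not part of the proof), the Monte Carlo simulation below estimates $E(T)$ and $V(T)$ for increasing $n$; the mean drifts toward $p$ and the variance shrinks toward 0, matching (a) and (b).

```python
# Monte Carlo check of Q1: E(T) -> p and V(T) -> 0 as n grows.
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
for n in (10, 100, 10_000):
    X = rng.binomial(n, p, size=200_000)
    T = (X - np.sqrt(n) / 2) / (n - np.sqrt(n))
    print(n, T.mean(), T.var())  # mean approaches 0.3, variance approaches 0
```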
Q2) Let $X_1, X_2, X_3, \dots, X_n \overset{rs}{\sim} N(\mu, \sigma^2)$. Find the value of $k$ for which $T = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\,(X_{i+1} - X_i)^2$ is an unbiased estimator of $\sigma^2$.
Solution: Since $X_1, X_2, X_3, \dots, X_n \overset{rs}{\sim} N(\mu, \sigma^2)$
⇒ $E(X_i) = \mu$ and $V(X_i) = \sigma^2$, $\forall\, i = 1, 2, \dots, n$,
and $E(X_i X_j) = E(X_i)E(X_j) = \mu^2$ for $i \ne j$, as $X_i$ and $X_j$ are independent.
Now, $T = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\,(X_{i+1} - X_i)^2 = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\,(X_{i+1}^2 + X_i^2 - 2X_{i+1}X_i)$
⇒ $E(T) = \dfrac{k}{n-1}\,E\left[\sum_{i=1}^{n-1} i\,(X_{i+1}^2 + X_i^2 - 2X_{i+1}X_i)\right]$
⇒ $E(T) = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\left[E(X_{i+1}^2) + E(X_i^2) - 2E(X_{i+1}X_i)\right]$ (1)
Since $V(X_i) = E(X_i^2) - (E(X_i))^2$, we have $E(X_i^2) = \sigma^2 + \mu^2$ and $E(X_{i+1}^2) = \sigma^2 + \mu^2$. Also, $E(X_{i+1}X_i) = E(X_{i+1})E(X_i) = \mu^2$. Putting these values in (1), we have
$E(T) = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\,(\sigma^2 + \mu^2 + \sigma^2 + \mu^2 - 2\mu^2) = \dfrac{k}{n-1}\sum_{i=1}^{n-1} i\,(2\sigma^2) = \dfrac{2k\sigma^2}{n-1}\sum_{i=1}^{n-1} i = \dfrac{2k\sigma^2}{n-1}\cdot\dfrac{(n-1)n}{2}$
⇒ $E(T) = k\sigma^2 n$
Since T is an unbiased estimator of $\sigma^2$,
⇒ $E(T) = \sigma^2$
⇒ $k\sigma^2 n = \sigma^2$
⇒ $k = \frac{1}{n}$.
Hence, for $k = \frac{1}{n}$, T will be an unbiased estimator of $\sigma^2$.
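A small simulation (an illustrative sketch with assumed values of $n$, $\mu$, $\sigma$) confirms that with $k = 1/n$ the average of T over many replications is close to $\sigma^2$:

```python
# Simulation check of Q2: with k = 1/n, E(T) should be approximately sigma^2.
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma, reps = 20, 5.0, 2.0, 100_000
X = rng.normal(mu, sigma, size=(reps, n))
i = np.arange(1, n)                                  # i = 1, ..., n-1
k = 1.0 / n
T = (k / (n - 1)) * (i * (X[:, 1:] - X[:, :-1])**2).sum(axis=1)
print(T.mean(), sigma**2)                            # both close to 4.0
```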
Q3) Let A be the most efficient estimator and B a less efficient estimator with efficiency e. Then show that Cov(A, B − A) = 0.
Solution: Since A is the most efficient estimator and B is less efficient with efficiency e,
$e = \dfrac{V(A)}{V(B)}$ ⇒ $V(B) = \dfrac{V(A)}{e}$
Suppose ρ is the correlation coefficient between A and B. Since A is the most efficient estimator, a standard result gives
$\rho = \sqrt{e}$
Now,
$\mathrm{Cov}(A, B - A) = \mathrm{Cov}(A, B) - \mathrm{Cov}(A, A)$
$= \mathrm{Cov}(A, B) - V(A)$
$= \rho\sqrt{V(A)V(B)} - V(A)$
$= \sqrt{e}\,\sqrt{V(A)\cdot\dfrac{V(A)}{e}} - V(A)$
$= V(A) - V(A) = 0$
⇒ Cov(A, B − A) = 0.
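The identity can be illustrated numerically (a sketch with an assumed construction: B is built as A plus independent noise, so that $\mathrm{Corr}(A, B) = \sqrt{V(A)/V(B)} = \sqrt{e}$ holds by design):

```python
# Numeric illustration of Q3: Cov(A, B - A) vanishes when rho = sqrt(e).
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
A = rng.normal(0.0, 1.0, N)
B = A + rng.normal(0.0, 1.0, N)     # V(B) = 2, so e = V(A)/V(B) = 1/2
e = A.var() / B.var()
rho = np.corrcoef(A, B)[0, 1]
print(e, rho**2)                    # rho^2 matches e
print(np.cov(A, B - A)[0, 1])       # close to 0
```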
Q4) Suppose $X_1, X_2, X_3, \dots, X_n \overset{rs}{\sim} N(\theta, 1)$. Find the C-R lower bound for an unbiased estimator of $\theta^r$.
Solution: The C-R lower bound for the variance of an unbiased estimator $t$ of $\psi(\theta)$ is given as
$V(t) \ge \dfrac{(\psi'(\theta))^2}{I(\theta)}$, where $I(\theta) = -nE\left(\dfrac{\partial^2}{\partial\theta^2}\log f(x;\theta)\right)$.
As $X_1, X_2, X_3, \dots, X_n \overset{rs}{\sim} N(\theta, 1)$
⇒ $f(x;\theta) = \dfrac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x-\theta)^2}$
⇒ $\log f(x;\theta) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}(x-\theta)^2$
⇒ $\dfrac{\partial}{\partial\theta}\log f(x;\theta) = (x - \theta)$
⇒ $\dfrac{\partial^2}{\partial\theta^2}\log f(x;\theta) = -1$
∴ $I(\theta) = -nE\left(\dfrac{\partial^2}{\partial\theta^2}\log f(x;\theta)\right) = -nE(-1) = n$
⇒ $I(\theta) = n$ and $\psi(\theta) = \theta^r$
⇒ $\psi'(\theta) = r\theta^{r-1}$.
Therefore, the C-R lower bound for an unbiased estimator of $\theta^r$ is $\dfrac{(\psi'(\theta))^2}{I(\theta)} = \dfrac{(r\theta^{r-1})^2}{n} = \dfrac{r^2\theta^{2r-2}}{n}$.
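The bound can be re-derived symbolically (an illustrative sketch assuming the sympy library):

```python
# Symbolic check of Q4: Fisher information and the C-R bound for theta^r.
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
r, n = sp.symbols('r n', positive=True)
logf = -sp.log(2 * sp.pi) / 2 - (x - theta)**2 / 2   # log f(x; theta), N(theta, 1)
per_obs_info = -sp.diff(logf, theta, 2)              # = 1 (constant, so E(.) = 1)
I = n * per_obs_info                                 # I(theta) = n
bound = sp.diff(theta**r, theta)**2 / I
print(sp.simplify(bound))                            # r**2 * theta**(2r - 2) / n
```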
Q5) A sample is drawn from the population $f(x;\alpha,\beta) = \dfrac{\beta^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x}$, $x \ge 0$. The values of the first and second order sample moments about the origin are $m'_1 = 10$ and $m'_2 = 150$. Find the method of moments estimators of α and β.
Solution: Since the sample is drawn from the Gamma distribution with p.d.f. given as
$f(x;\alpha,\beta) = \dfrac{\beta^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x}$, $x \ge 0$
⇒ $E(X) = \dfrac{\alpha}{\beta}$
$V(X) = \dfrac{\alpha}{\beta^2}$ ⇒ $E(X^2) - (E(X))^2 = \dfrac{\alpha}{\beta^2}$
Now, equating these population moments to the sample moments (for the method of moments estimators), we have
$E(X) = \dfrac{\alpha}{\beta}$ ⇒ $m'_1 = \dfrac{\alpha}{\beta}$ (2)
and
$E(X^2) - (E(X))^2 = \dfrac{\alpha}{\beta^2}$ ⇒ $m'_2 - (m'_1)^2 = \dfrac{\alpha}{\beta^2}$ (3)
By solving equations (2) and (3), we get
$\beta = \dfrac{m'_1}{m'_2 - (m'_1)^2}$; $\alpha = \dfrac{(m'_1)^2}{m'_2 - (m'_1)^2}$
Therefore, the method of moments estimators of α and β are
$\hat{\alpha}_{MME} = \dfrac{(m'_1)^2}{m'_2 - (m'_1)^2}$; $\hat{\beta}_{MME} = \dfrac{m'_1}{m'_2 - (m'_1)^2}$
We are given $m'_1 = 10$, $m'_2 = 150$
⇒ $\hat{\alpha}_{MME} = \dfrac{10^2}{150 - 10^2} = 2$, $\hat{\beta}_{MME} = \dfrac{10}{150 - 10^2} = \dfrac{1}{5}$
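The arithmetic is easy to verify (an illustrative check: the estimates should reproduce the given moments):

```python
# Check of Q5: the MoM estimates reproduce m1' = 10 and m2' = 150.
m1, m2 = 10.0, 150.0
alpha_hat = m1**2 / (m2 - m1**2)                   # = 2.0
beta_hat = m1 / (m2 - m1**2)                       # = 0.2
mean = alpha_hat / beta_hat                        # = 10.0
second_moment = alpha_hat / beta_hat**2 + mean**2  # var + mean^2 = 150.0
print(alpha_hat, beta_hat, mean, second_moment)
```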
Q6) Let $X_1, X_2, \dots, X_n \overset{rs}{\sim}$ the p.d.f. $f(x;\theta) = \dfrac{x}{\theta}\exp\left(-\dfrac{x^2}{2\theta}\right)$; $x > 0$, $\theta > 0$. Find the method of moments estimator of θ.
Solution:
$E(X) = \int_0^\infty x f(x;\theta)\,dx = \int_0^\infty x\cdot\dfrac{x}{\theta}\exp\left(-\dfrac{x^2}{2\theta}\right)dx = \int_0^\infty \dfrac{x^2}{\theta}\exp\left(-\dfrac{x^2}{2\theta}\right)dx$
Let $\dfrac{x^2}{2\theta} = u$ ⇒ $2x\,dx = 2\theta\,du$ ⇒ $x\,dx = \theta\,du$, with $x = \sqrt{2\theta u}$.
∴ $E(X) = \int_0^\infty 2u\,e^{-u}\,\dfrac{\theta}{\sqrt{2\theta u}}\,du$
$= \sqrt{2\theta}\int_0^\infty u^{\frac{1}{2}}e^{-u}\,du$
$= \sqrt{2\theta}\int_0^\infty u^{\frac{3}{2}-1}e^{-u}\,du$
$= \sqrt{2\theta}\,\Gamma\left(\frac{3}{2}\right)$
$= \sqrt{2\theta}\cdot\frac{1}{2}\cdot\sqrt{\pi}$
⇒ $E(X) = \sqrt{\dfrac{\pi\theta}{2}}$
⇒ $\mu'_1 = \sqrt{\dfrac{\pi\theta}{2}}$
Equating this with the first-order sample moment about the origin ($m'_1 = \bar{x}$), i.e.
$\mu'_1 = m'_1$ ⇒ $\sqrt{\dfrac{\pi\theta}{2}} = \bar{x}$ ⇒ $\theta = \dfrac{2\bar{x}^2}{\pi}$
∴ The method of moments estimator of θ is $\hat{\theta}_{MME} = \dfrac{2\bar{x}^2}{\pi}$.
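This density is the Rayleigh density with scale $\sigma = \sqrt{\theta}$, so the estimator is easy to test by simulation (an illustrative sketch with an assumed true θ):

```python
# Simulation check of Q6: theta_MME = 2 * xbar^2 / pi recovers theta.
import numpy as np

rng = np.random.default_rng(3)
theta = 4.0
x = rng.rayleigh(scale=np.sqrt(theta), size=500_000)  # same density as f(x; theta)
theta_mme = 2 * x.mean()**2 / np.pi
print(theta_mme)                                      # close to 4.0
```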
Q7) A random sample of size 1 is drawn from the population with p.d.f. $f(x;\theta) = \dfrac{2}{\theta^2}(\theta - x)$; $0 < x < \theta$. Find the MLE of θ.
Solution: The likelihood function is
$L = L(\theta \mid x) = f(x;\theta) = \dfrac{2}{\theta^2}(\theta - x)$; $0 < x < \theta$.
⇒ $\log L = \log 2 - 2\log\theta + \log(\theta - x)$
Now,
$\dfrac{\partial}{\partial\theta}\log L = -\dfrac{2}{\theta} + \dfrac{1}{\theta - x}$ (4)
$\dfrac{\partial^2}{\partial\theta^2}\log L = \dfrac{2}{\theta^2} - \dfrac{1}{(\theta - x)^2}$ (5)
Equating normal equation (4) to 0, i.e.
$\dfrac{\partial}{\partial\theta}\log L = 0$
⇒ $-\dfrac{2}{\theta} + \dfrac{1}{\theta - x} = 0$
⇒ $\dfrac{2}{\theta} = \dfrac{1}{\theta - x}$
⇒ $2\theta - 2x = \theta$
⇒ $\theta = 2x$
Using the value θ = 2x in equation (5) we get
$\dfrac{\partial^2}{\partial\theta^2}\log L = \dfrac{2}{4x^2} - \dfrac{1}{x^2} = \dfrac{2-4}{4x^2} = -\dfrac{1}{2x^2} < 0$
∴ θ = 2x will maximize the likelihood function L.
So, the MLE of θ is $\hat{\theta} = 2x$.
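A quick numerical maximization (an illustrative sketch assuming scipy, with an arbitrary observation x) confirms the argmax:

```python
# Numeric check of Q7: the log-likelihood is maximized at theta = 2x.
import numpy as np
from scipy.optimize import minimize_scalar

x = 1.7
neg_loglik = lambda t: -(np.log(2) - 2 * np.log(t) + np.log(t - x))
res = minimize_scalar(neg_loglik, bounds=(x + 1e-9, 10 * x), method='bounded')
print(res.x, 2 * x)   # both approximately 3.4
```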
Q8) A random variable takes the values 1, 2 and 3 with probabilities $\theta^2$, $2\theta(1-\theta)$ and $(1-\theta)^2$ respectively, with $0 < \theta < 1$. However, the observed frequencies of 1, 2 and 3 are $n_1$, $n_2$, $n_3$ respectively. Find the MLE of θ.
Solution: Suppose a random variable X takes the values 1, 2 and 3.
⇒ $P(X=1) = \theta^2$
$P(X=2) = 2\theta(1-\theta)$
$P(X=3) = (1-\theta)^2$
Since the observed frequencies of X = 1, X = 2 and X = 3 are $n_1$, $n_2$, $n_3$ respectively, the likelihood function is
$L(\theta) = (P(X=1))^{n_1}(P(X=2))^{n_2}(P(X=3))^{n_3}$
$= (\theta^2)^{n_1}(2\theta(1-\theta))^{n_2}((1-\theta)^2)^{n_3}$
$= \theta^{2n_1}\,2^{n_2}\theta^{n_2}(1-\theta)^{n_2}(1-\theta)^{2n_3}$
$= 2^{n_2}\,\theta^{2n_1+n_2}(1-\theta)^{n_2+2n_3}$
⇒ $\log L(\theta) = n_2\log 2 + (2n_1+n_2)\log\theta + (n_2+2n_3)\log(1-\theta)$
Now equating $\dfrac{\partial}{\partial\theta}\log L(\theta) = 0$:
⇒ $0 + \dfrac{2n_1+n_2}{\theta} - \dfrac{n_2+2n_3}{1-\theta} = 0$
⇒ $2n_1 + n_2 - \theta(2n_1+n_2) = \theta(n_2+2n_3)$
⇒ $\theta = \dfrac{2n_1+n_2}{2(n_1+n_2+n_3)}$
∴ The MLE of θ is $\hat{\theta} = \dfrac{2n_1+n_2}{2(n_1+n_2+n_3)}$.
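With made-up counts (an illustrative check), the closed form agrees with a grid search over the log-likelihood:

```python
# Check of Q8: closed-form MLE vs. grid search, with assumed counts.
import numpy as np

n1, n2, n3 = 30, 50, 20
theta_hat = (2 * n1 + n2) / (2 * (n1 + n2 + n3))       # = 0.55
grid = np.linspace(0.001, 0.999, 100_000)
loglik = (2 * n1 + n2) * np.log(grid) + (n2 + 2 * n3) * np.log(1 - grid)
print(theta_hat, grid[np.argmax(loglik)])              # both about 0.55
```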
Q9) Let $Y_1$ and $Y_2$ be two stochastically independent unbiased estimators of θ. Given that the variance of $Y_1$ is twice the variance of $Y_2$, find the constants $k_1$ and $k_2$ so that $k_1Y_1 + k_2Y_2$ is an unbiased estimator with the smallest possible variance for such a linear combination.
Solution:
Let $T = k_1Y_1 + k_2Y_2$
⇒ $E(T) = E(k_1Y_1 + k_2Y_2) = k_1E(Y_1) + k_2E(Y_2) = k_1\theta + k_2\theta = \theta(k_1 + k_2)$ (as $E(Y_1) = E(Y_2) = \theta$)
So,
$E(T) = \theta$ if $k_1 + k_2 = 1$ (6)
And $V(T) = V(k_1Y_1 + k_2Y_2) = k_1^2V(Y_1) + k_2^2V(Y_2) + 2k_1k_2\,\mathrm{Cov}(Y_1, Y_2) = 2k_1^2\sigma^2 + k_2^2\sigma^2$,
∵ $V(Y_1) = 2V(Y_2)$ and $Y_1$, $Y_2$ are independent ⇒ $\mathrm{Cov}(Y_1, Y_2) = 0$. Let $V(Y_2) = \sigma^2$.
$V(T) = 2k_1^2\sigma^2 + k_2^2\sigma^2$ (7)
We need to find the values of $k_1$ and $k_2$ such that $V(T)$ is minimum under condition (6).
∴ By the Lagrangian method,
$\phi = V(T) + \lambda(k_1 + k_2 - 1)$, where λ is the Lagrange multiplier.
⇒ $\phi = \sigma^2(2k_1^2 + k_2^2) + \lambda(k_1 + k_2 - 1)$
Now,
$\dfrac{\partial\phi}{\partial k_1} = 0$ ⇒ $4k_1\sigma^2 + \lambda = 0$ (8)
$\dfrac{\partial\phi}{\partial k_2} = 0$ ⇒ $2k_2\sigma^2 + \lambda = 0$ (9)
Solving equations (8) and (9) we have
$k_2 = 2k_1$
∵ $k_1 + k_2 = 1$ ⇒ $k_1 = \dfrac{1}{3}$ and $k_2 = \dfrac{2}{3}$
Therefore, the desired values of $k_1$ and $k_2$ are $\dfrac{1}{3}$ and $\dfrac{2}{3}$.
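A simulation (an illustrative sketch with assumed θ and σ) shows that the weights (1/3, 2/3) dominate other unbiased combinations:

```python
# Simulation check of Q9: (1/3, 2/3) gives the smallest variance.
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, N = 10.0, 1.5, 500_000
Y1 = rng.normal(theta, np.sqrt(2) * sigma, N)   # V(Y1) = 2 sigma^2
Y2 = rng.normal(theta, sigma, N)                # V(Y2) = sigma^2
for k1, k2 in [(1/3, 2/3), (1/2, 1/2), (1/4, 3/4)]:
    T = k1 * Y1 + k2 * Y2
    print(k1, k2, T.mean(), T.var())            # mean ~ 10; first row has least var
```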
Q10) If a random sample of size n is taken from a distribution having p.d.f.
$$f(x;\theta) = \begin{cases} \dfrac{2x}{\theta^2}, & 0 < x \le \theta \\ 0, & \text{otherwise} \end{cases}$$
find
(a) the MLE $\hat{\theta}$ of θ;
(b) the constant c so that $E(c\hat{\theta}) = \theta$;
(c) the MLE of the median of the distribution.
Solution:
(a) The likelihood function is
$L = L(\theta \mid x_1, x_2, \dots, x_n) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n \dfrac{2x_i}{\theta^2} = \dfrac{2^n\prod_{i=1}^n x_i}{\theta^{2n}}$; $0 < x_i \le \theta$
⇒ $\log L = n\log 2 + \log\left(\prod_{i=1}^n x_i\right) - 2n\log\theta$; $0 < x_i \le \theta$
The normal equation is $\dfrac{\partial}{\partial\theta}\log L = 0$ ⇒ $0 + 0 - \dfrac{2n}{\theta} = 0$, which has no solution.
So, the normal equation will not provide the MLE.
Therefore, we work with order statistics.
$0 < x_i \le \theta$ ⇒ $0 < x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)} \le \theta$
⇒ $\theta \ge x_{(n)} = \max\{x_1, x_2, \dots, x_n\}$
⇒ $\theta_{\min} = x_{(n)}$
∵ The likelihood function L is maximum when θ is minimum.
∴ $\theta = \theta_{\min} = x_{(n)}$ will maximize L.
Therefore, the MLE of θ is $\hat{\theta} = x_{(n)}$.
(b) The c.d.f. of X is $F(x) = P(X \le x) = \int_0^x f(t)\,dt = \int_0^x \dfrac{2t}{\theta^2}\,dt = \dfrac{x^2}{\theta^2}$, $0 < x \le \theta$.
The c.d.f. of $X_{(n)}$ is $F_{X_{(n)}}(x) = [F(x)]^n = \left(\dfrac{x^2}{\theta^2}\right)^n = \dfrac{x^{2n}}{\theta^{2n}}$
The p.d.f. of $X_{(n)}$ is
$f_{X_{(n)}}(x) = \dfrac{d}{dx}F_{X_{(n)}}(x) = \dfrac{2n}{\theta^{2n}}\,x^{2n-1}$; $0 < x \le \theta$
Now, $E(\hat{\theta}) = E(X_{(n)}) = \int_0^\theta x\,f_{X_{(n)}}(x)\,dx = \int_0^\theta x\cdot\dfrac{2n}{\theta^{2n}}\,x^{2n-1}\,dx = \dfrac{2n}{\theta^{2n}}\int_0^\theta x^{2n}\,dx$
⇒ $E(\hat{\theta}) = \dfrac{2n}{2n+1}\,\theta$
⇒ $E\left(\dfrac{2n+1}{2n}\,\hat{\theta}\right) = \theta$
∴ The required value of c is $\dfrac{2n+1}{2n}$.
(c) Suppose m is the median of the given distribution, ⇒ $\int_0^m f(x;\theta)\,dx = \dfrac{1}{2}$
⇒ $\int_0^m \dfrac{2x}{\theta^2}\,dx = \dfrac{1}{2}$
⇒ $\dfrac{1}{\theta^2}(m^2 - 0) = \dfrac{1}{2}$
⇒ $m^2 = \dfrac{\theta^2}{2}$
⇒ $m = \pm\dfrac{\theta}{\sqrt{2}}$
But $0 < x \le \theta$ ⇒ the median cannot be negative.
∴ The median is $m = \dfrac{\theta}{\sqrt{2}}$.
Therefore, by the invariance property of the MLE, the MLE of the median is given as
$\hat{m} = \dfrac{\hat{\theta}}{\sqrt{2}} = \dfrac{X_{(n)}}{\sqrt{2}}$; $X_{(n)} = \max\{X_1, X_2, \dots, X_n\}$
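Sampling from $F(x) = x^2/\theta^2$ via the inverse CDF $x = \theta\sqrt{u}$ gives a simulation check (an illustrative sketch with assumed θ and n):

```python
# Simulation check of Q10: c * X_(n) is nearly unbiased for theta.
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 3.0, 10, 200_000
u = rng.uniform(size=(reps, n))
x = theta * np.sqrt(u)                   # inverse CDF of F(x) = x^2 / theta^2
theta_mle = x.max(axis=1)                # X_(n) in each replication
c = (2 * n + 1) / (2 * n)
print((c * theta_mle).mean(), theta)                        # both close to 3.0
print((theta_mle / np.sqrt(2)).mean(), theta / np.sqrt(2))  # median MLE vs true
```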
Q11) Let $X_1$, $X_2$, and $X_3$ be a random sample of size 3 from a uniform(θ, 2θ) distribution, where θ > 0.
(a) Find the method of moments estimator of θ.
(b) Find the MLE of θ.
(c) Find the method of moments estimate and the MLE of θ based on the data: 1.23, 0.86, 1.33.
Solution:
(a) $X_1, X_2, X_3 \overset{rs}{\sim} \text{uniform}(\theta, 2\theta)$
⇒ $f(x;\theta) = \dfrac{1}{\theta}$; $\theta \le x \le 2\theta$
∴ $E(X) = \int_\theta^{2\theta} x\,f(x;\theta)\,dx = \int_\theta^{2\theta} \dfrac{x}{\theta}\,dx = \dfrac{4\theta^2 - \theta^2}{2\theta} = \dfrac{3\theta}{2}$
$\mu'_1 = E(X) = \dfrac{3\theta}{2}$
To get the method of moments estimator, we equate the population moment with the sample moment, i.e. $\mu'_1 = m'_1$ where $m'_1 = \dfrac{1}{n}\sum_{i=1}^n x_i = \bar{x}$
⇒ $\dfrac{3\theta}{2} = \bar{x}$ ⇒ $\theta = \dfrac{2}{3}\bar{x}$
∴ The method of moments estimator of θ is $\hat{\theta}_{MME} = \dfrac{2}{3}\bar{x}$.
(b) The likelihood function is
$L = L(\theta \mid x_1, x_2, \dots, x_n) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n \dfrac{1}{\theta} = \dfrac{1}{\theta^n}$; $\theta \le x_i \le 2\theta$
The likelihood function L will be maximum if θ is minimum. Suppose $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}$ are the order statistics of $x_1, x_2, \dots, x_n$.
∵ $\theta \le x_i \le 2\theta$ ⇒ $\theta \le x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)} \le 2\theta$
∴ $\theta \le x_{(1)}$ and $2\theta \ge x_{(n)}$
⇒ $\theta \le x_{(1)}$ and $\theta \ge \dfrac{x_{(n)}}{2}$
⇒ $\dfrac{x_{(n)}}{2} \le \theta \le x_{(1)}$
⇒ $\theta_{\min} = \dfrac{x_{(n)}}{2}$
So, $\theta = \dfrac{x_{(n)}}{2}$ will maximize the likelihood function L.
Hence, the MLE of θ is $\hat{\theta}_{MLE} = \dfrac{x_{(n)}}{2}$; $x_{(n)} = \max\{x_1, x_2, \dots, x_n\}$.
(c) The given data are $x_1 = 1.23$, $x_2 = 0.86$, $x_3 = 1.33$.
The method of moments estimate is $\hat{\theta}_{MME} = \dfrac{2\bar{x}}{3} = \dfrac{2}{3}\left(\dfrac{1.23 + 0.86 + 1.33}{3}\right) = \dfrac{2}{3}\cdot\dfrac{3.42}{3} = 0.76$
And the MLE of θ is $\hat{\theta}_{MLE} = \dfrac{x_{(n)}}{2} = \dfrac{1.33}{2} = 0.665$.
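The two estimates in (c) can be reproduced in a couple of lines (an illustrative check):

```python
# Check of Q11(c): MoM and MLE estimates from the given data.
data = [1.23, 0.86, 1.33]
xbar = sum(data) / len(data)
theta_mme = 2 * xbar / 3        # = 0.76
theta_mle = max(data) / 2       # = 0.665
print(theta_mme, theta_mle)
```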
Q12) A random sample of size 5 is taken from a distribution with the p.d.f.
$$f(x;\theta) = \begin{cases} \dfrac{3x^2}{\theta^3}, & 0 < x \le \theta \\ 0, & \text{otherwise} \end{cases}$$
where θ is an unknown parameter.
If the observed values of the random sample are 3, 6, 4, 7, 5, then calculate the maximum likelihood estimate of the (1/8)th quantile of the distribution.
Solution: Suppose q is the (1/8)th quantile of the distribution.
⇒ $\int_0^q f(x;\theta)\,dx = \dfrac{1}{8}$
⇒ $\int_0^q \dfrac{3x^2}{\theta^3}\,dx = \dfrac{1}{8}$ ⇒ $\dfrac{q^3}{\theta^3} = \dfrac{1}{8}$ ⇒ $q = \dfrac{\theta}{2}$
To get the MLE of q, we need to find the MLE of θ.
The likelihood function is
$L = L(\theta \mid x) = \prod_{i=1}^n f(x_i;\theta) = \prod_{i=1}^n \dfrac{3x_i^2}{\theta^3} = \dfrac{3^n\prod_{i=1}^n x_i^2}{\theta^{3n}}$; $0 < x_i \le \theta$
We can see that the likelihood function L will be maximum if θ is minimum.
∵ $0 < x_i \le \theta$
⇒ $0 < x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)} \le \theta$
⇒ $\theta \ge x_{(n)}$; $x_{(n)} = \max\{x_1, x_2, \dots, x_n\}$
∴ $\theta_{\min} = x_{(n)}$
So, $\theta = x_{(n)}$ will maximize the likelihood function L. Therefore the MLE of θ is $\hat{\theta} = x_{(n)}$.
Hence, by the invariance property of the MLE, the MLE of q is
$\hat{q} = \dfrac{\hat{\theta}}{2} = \dfrac{x_{(n)}}{2}$
∵ The observations are given as: 3, 6, 4, 7, 5
⇒ the largest observation is $x_{(5)} = 7$
∴ $\hat{q} = \dfrac{x_{(5)}}{2} = \dfrac{7}{2} = 3.5$
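A two-line check (illustrative) confirms that the estimated quantile sits at CDF value 1/8 under $\hat{\theta}$:

```python
# Check of Q12: F(q_hat) = (q_hat / theta_hat)^3 should equal 1/8.
theta_hat = max([3, 6, 4, 7, 5])        # MLE of theta
q_hat = theta_hat / 2
print(q_hat, (q_hat / theta_hat)**3)    # 3.5, 0.125
```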
Q13) Let $X_1, X_2, X_3, \dots, X_n \overset{rs}{\sim} N(\theta, \theta)$; $\theta > 0$. Find a sufficient statistic for θ.
Solution: The joint probability density function is $L = \prod_{i=1}^n f(x_i;\theta)$
$= \prod_{i=1}^n \dfrac{1}{\sqrt{2\pi\theta}}\exp\left(-\dfrac{1}{2\theta}(x_i - \theta)^2\right)$
$= (2\pi\theta)^{-\frac{n}{2}}\exp\left(-\dfrac{1}{2\theta}\sum_{i=1}^n (x_i - \theta)^2\right)$
$= (2\pi\theta)^{-\frac{n}{2}}\exp\left(-\dfrac{1}{2\theta}\sum_{i=1}^n (x_i^2 + \theta^2 - 2x_i\theta)\right)$
$= (2\pi\theta)^{-\frac{n}{2}}\exp\left(-\dfrac{1}{2\theta}\sum_{i=1}^n x_i^2 - \dfrac{n\theta}{2} + \sum_{i=1}^n x_i\right)$
$= (2\pi\theta)^{-\frac{n}{2}}\exp\left(-\dfrac{1}{2\theta}\sum_{i=1}^n x_i^2\right)\exp\left(-\dfrac{n\theta}{2}\right)\exp\left(\sum_{i=1}^n x_i\right)$
$= g(t,\theta)\,h(x)$, where $g(t,\theta) = (2\pi\theta)^{-\frac{n}{2}}\exp\left(-\dfrac{t}{2\theta}\right)\exp\left(-\dfrac{n\theta}{2}\right)$ and $h(x) = \exp\left(\sum_{i=1}^n x_i\right)$,
and $t = \sum_{i=1}^n x_i^2$.
∴ By the Neyman–Fisher factorization theorem,
$t = \sum_{i=1}^n x_i^2$ is a sufficient statistic for θ.
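Sufficiency can be illustrated numerically (a sketch): for two samples with the same $\sum x_i^2$, the likelihood ratio is free of θ, since it reduces to $h(x)/h(y)$.

```python
# Illustration of Q13: samples with equal sum of squares give a likelihood
# ratio that does not depend on theta.
import numpy as np

def loglik(theta, x):
    # log joint density of N(theta, theta)
    return (-len(x) / 2 * np.log(2 * np.pi * theta)
            - ((x - theta)**2).sum() / (2 * theta))

x = np.array([1.0, 2.0, 2.0])
y = np.array([-1.0, 2.0, 2.0])   # same sum(x_i^2) = 9, different sum(x_i)
for theta in (0.5, 1.0, 3.0):
    print(loglik(theta, x) - loglik(theta, y))   # constant (= 2.0) in theta
```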