
Problem Set 9

ECON-S-428
Nyandwi Charles

1. The first two exercises aim at assessing the quality of two GMM estimators for
Gaussian observations. Let $X_1, \dots, X_n$ be i.i.d. $N(\mu, \sigma^2)$. Let $(\hat\mu_1, \hat\sigma_1^2) = (\bar{X}, s^2)$ be the
usual MLE and let:
$$(\hat\mu_2, \hat\sigma_2^2) = \arg\min_{\mu \in \mathbb{R},\, \sigma^2 \in \mathbb{R}_{>0}} \left\| \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right\|^2,$$
with
$$g(X_i, \mu, \sigma^2) = \begin{pmatrix} \mu - X_i \\ \sigma^2 - (X_i - \mu)^2 \\ X_i^3 - \mu(\mu^2 + 3\sigma^2) \end{pmatrix}.$$
Show that both estimators are GMM estimators.

Solution: The estimator $(\hat\mu_1, \hat\sigma_1^2)$ is derived from the sample mean and sample variance
as follows, starting from the population moment conditions
$$E[X_i - \mu] = 0, \qquad E[(X_i - \mu)^2 - \sigma^2] = 0.$$
The estimator $(\hat\mu_2, \hat\sigma_2^2)$ is derived from the three moment functions
$$g(X_i, \mu, \sigma^2) = \begin{pmatrix} \mu - X_i \\ \sigma^2 - (X_i - \mu)^2 \\ X_i^3 - \mu(\mu^2 + 3\sigma^2) \end{pmatrix}.$$
To obtain a GMM estimator we solve
$$\hat\theta = \arg\min_{\theta} \left( \frac{1}{n} \sum_{i=1}^n g(X_i, \theta) \right)' W \left( \frac{1}{n} \sum_{i=1}^n g(X_i, \theta) \right),$$

where $\theta = (\mu, \sigma^2)$ and $W$ is a weighting matrix.

The first estimator $(\hat\mu_1, \hat\sigma_1^2)$ solves the sample analogues of the first two moment conditions:
$$\frac{1}{n} \sum_{i=1}^n (X_i - \mu) = 0, \qquad \frac{1}{n} \sum_{i=1}^n \left( (X_i - \mu)^2 - \sigma^2 \right) = 0.$$
Solving gives $\hat\mu_1 = \bar{X}$ and $\hat\sigma_1^2 = s^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2$. Since this system is exactly identified (two moments for two parameters), the sample moment conditions can be set exactly to zero, so the GMM objective attains its minimum of zero at $(\hat\mu_1, \hat\sigma_1^2)$ for any weighting matrix $W$. Hence the MLE is a GMM estimator.

The second estimator $(\hat\mu_2, \hat\sigma_2^2)$ minimizes the GMM objective built from all three moment functions with the identity weighting matrix:
$$(\hat\mu_2, \hat\sigma_2^2) = \arg\min_{\mu, \sigma^2} \left\| \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right\|^2.$$
This is an overidentified system (three moments for two parameters). The moment conditions are valid at the true parameters, since for $X \sim N(\mu, \sigma^2)$ we have $E[X] = \mu$, $E[(X - \mu)^2] = \sigma^2$, and $E[X^3] = \mu^3 + 3\mu\sigma^2$, so that
$$E[g(X, \mu, \sigma^2)] = 0.$$
Thus we have shown that both $(\hat\mu_1, \hat\sigma_1^2)$ and $(\hat\mu_2, \hat\sigma_2^2)$ are GMM estimators.
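
For concreteness, here is a minimal R sketch of how $(\hat\mu_2, \hat\sigma_2^2)$ can be computed numerically on one sample. It assumes identity weighting (matching the squared-norm objective above) and uses a general-purpose optimizer; all names are illustrative:

```r
# Minimal sketch: compute (mu_hat_2, sigma2_hat_2) by minimizing the
# GMM objective with identity weighting on one simulated sample.
set.seed(1)
x <- rnorm(500, mean = 4, sd = 2)

gmm_objective <- function(theta, x) {
  mu <- theta[1]; sigma2 <- theta[2]
  gbar <- c(mean(mu - x),                          # E[mu - X] = 0
            mean(sigma2 - (x - mu)^2),             # E[sigma^2 - (X - mu)^2] = 0
            mean(x^3 - mu * (mu^2 + 3 * sigma2)))  # E[X^3] = mu^3 + 3*mu*sigma^2
  sum(gbar^2)                                      # gbar' I gbar
}

fit <- optim(c(mean(x), var(x)), gmm_objective, x = x,
             method = "L-BFGS-B", lower = c(-Inf, 1e-8))
fit$par  # (mu_hat_2, sigma2_hat_2)
```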

2. Run a Monte-Carlo experiment in R in which you do the following: generate $R = 1000$
samples of size $n = 500$ with true parameters $\mu_0 = 4$ and $\sigma_0^2 = 4$. Compute both
estimators $(\hat\mu_i^{(r)}, \hat\sigma_i^{2,(r)})$ for $i = 1, 2$ and $r = 1, \dots, R$. Approximate the mean squared
error (MSE) of the estimators as:
$$\mathrm{MSE}_i(\mu) \approx \frac{1}{R} \sum_{r=1}^R \left( \hat\mu_i^{(r)} - \mu_0 \right)^2, \qquad \mathrm{MSE}_i(\sigma^2) \approx \frac{1}{R} \sum_{r=1}^R \left( \hat\sigma_i^{2,(r)} - \sigma_0^2 \right)^2.$$

What can you conclude from these approximated MSEs?
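
Solution sketch: the experiment can be coded as below. This is a minimal sketch, assuming identity weighting for the second estimator (as in Exercise 1); all function names are illustrative:

```r
# Monte-Carlo comparison of the two estimators (minimal sketch,
# assuming identity weighting for the second GMM estimator).
set.seed(428)
R <- 1000; n <- 500; mu0 <- 4; sigma20 <- 4

gmm_objective <- function(theta, x) {
  mu <- theta[1]; sigma2 <- theta[2]
  gbar <- c(mean(mu - x),
            mean(sigma2 - (x - mu)^2),
            mean(x^3 - mu * (mu^2 + 3 * sigma2)))
  sum(gbar^2)
}

est <- matrix(NA_real_, nrow = R, ncol = 4)  # columns: mu1, s2_1, mu2, s2_2
for (r in 1:R) {
  x <- rnorm(n, mean = mu0, sd = sqrt(sigma20))
  mle <- c(mean(x), mean((x - mean(x))^2))   # MLE: (X_bar, s^2)
  est[r, 1:2] <- mle
  est[r, 3:4] <- optim(mle, gmm_objective, x = x,
                       method = "L-BFGS-B", lower = c(-Inf, 1e-8))$par
}

truth <- c(mu0, sigma20, mu0, sigma20)
mse <- colMeans(sweep(est, 2, truth)^2)      # approximated MSEs
names(mse) <- c("MSE1(mu)", "MSE1(s2)", "MSE2(mu)", "MSE2(s2)")
round(mse, 5)
```

Since the Gaussian MLE is asymptotically efficient, one would expect its approximated MSEs to be no larger than those of the second estimator with a non-optimal weighting matrix, but the actual comparison should be read off the printed output.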

3. Let v and w be m-dimensional vectors, and let A be an m × m symmetric matrix.


Define the spectral radius of A as:

ρ(A) := max(|λmin (A)|, |λmax (A)|),

where λmin (A) and λmax (A) denote the smallest and largest eigenvalues of A, respectively.
Show that:
|v ′ Av − w′ Aw| ≤ ρ(A)∥v − w∥{∥v − w∥ + 2∥w∥}.
Also, show that:

|v ′ Av − w′ Aw| ≤ ρ(A)∥v − w∥{∥v − w∥ + 2 min(∥v∥, ∥w∥)}.

Hint: To prove this, show:

|v ′ Av − w′ Aw| = |([v − w] + w)′ A([v − w] + w) − w′ Aw|

= |(v − w)′ A(v − w) + 2(v − w)′ Aw|


and then use the Cauchy-Schwarz inequality.

Solution: We need to show that

$$|v'Av - w'Aw| \le \rho(A)\|v - w\|(\|v - w\| + 2\|w\|).$$

We start by writing $v = (v - w) + w$:

|v ′ Av − w′ Aw| = |([v − w] + w)′ A([v − w] + w) − w′ Aw|


We can then expand the term as

([v − w] + w)′ A([v − w] + w) = (v − w)′ A(v − w) + (v − w)′ Aw + w′ A(v − w) + w′ Aw

Since A is symmetric, w′ A(v − w) = (v − w)′ Aw

We can now combine the terms to get

$$([v-w]+w)'A([v-w]+w) = (v-w)'A(v-w) + 2(v-w)'Aw + w'Aw,$$

so that, subtracting $w'Aw$,

$$|v'Av - w'Aw| = |(v-w)'A(v-w) + 2(v-w)'Aw|.$$


We apply the triangle inequality to separate the terms

|(v − w)′ A(v − w) + 2(v − w)′ Aw| ≤ |(v − w)′ A(v − w)| + |2(v − w)′ Aw|

Using the Cauchy-Schwarz inequality together with the spectral bound $|y'Az| \le \rho(A)\|y\|\|z\|$:

$$|(v-w)'A(v-w)| \le \rho(A)\|v-w\|^2,$$

$$|2(v-w)'Aw| \le 2\|v-w\|\|Aw\|.$$

Since $A$ is symmetric, we can bound $\|Aw\|$ using the spectral radius:

$$\|Aw\| \le \rho(A)\|w\|.$$
Combining the inequalities gives

$$|v'Av - w'Aw| \le \rho(A)\|v-w\|^2 + 2\|v-w\|\rho(A)\|w\| = \rho(A)\|v-w\|(\|v-w\| + 2\|w\|).$$

For the second inequality, note that the roles of $v$ and $w$ can be interchanged: applying the same argument with $w = (w - v) + v$ yields the bound with $\|v\|$ in place of $\|w\|$, and since $\|v - w\| = \|w - v\|$, taking the smaller of the two bounds gives

$$|v'Av - w'Aw| \le \rho(A)\|v-w\|\{\|v-w\| + 2\min(\|v\|, \|w\|)\}.$$
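
As a quick sanity check, the bound can be verified numerically on random inputs; a minimal R sketch (all names illustrative):

```r
# Numerical spot-check of the Exercise 3 bounds on random inputs.
set.seed(3)
m <- 5
A <- matrix(rnorm(m^2), m, m); A <- (A + t(A)) / 2   # random symmetric matrix
v <- rnorm(m); w <- rnorm(m)

rho <- max(abs(eigen(A, symmetric = TRUE)$values))   # spectral radius
nd  <- sqrt(sum((v - w)^2))                          # ||v - w||
lhs <- abs(drop(t(v) %*% A %*% v - t(w) %*% A %*% w))
rhs1 <- rho * nd * (nd + 2 * sqrt(sum(w^2)))
rhs2 <- rho * nd * (nd + 2 * min(sqrt(sum(v^2)), sqrt(sum(w^2))))
c(lhs <= rhs1, lhs <= rhs2)  # both should be TRUE
```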

4. Let v be an m-dimensional vector, and A and B be m×m symmetric matrices. Using:

|y ′ Az| ≤ ρ(A)∥y∥∥z∥ for all y, z ∈ Rm ,

show that:
|v ′ Av − v ′ Bv| ≤ ∥v∥2 ρ(A − B).
Use this result and the previous exercise to verify that for v, w ∈ Rm and symmetric
matrices A and B:

|v ′ Av − w′ Bw| ≤ ρ(A)∥v − w∥{∥v − w∥ + 2 min(∥v∥, ∥w∥)} + ∥w∥2 ρ(A − B).

Solution: We need to show that

|v ′ Av − v ′ Bv| ≤ ∥v∥2 ρ(A − B)

Using the given inequality

|y ′ Az| ≤ ρ(A)∥y∥∥z∥ for every y, z ∈ Rm

Rewrite the expression as

|v ′ Av − v ′ Bv| = |v ′ (A − B)v|

Apply the given inequality

|v ′ (A − B)v| ≤ ρ(A − B)∥v∥∥v∥

|v ′ (A − B)v| ≤ ρ(A − B)∥v∥2


Thus,
|v ′ Av − v ′ Bv| ≤ ∥v∥2 ρ(A − B)
Now we need to show that

$$|v'Av - w'Bw| \le \rho(A)\|v-w\|\{\|v-w\| + 2\min(\|v\|, \|w\|)\} + \|w\|^2 \rho(A-B).$$

Adding and subtracting $w'Aw$,

$$|v'Av - w'Bw| = |v'Av - w'Aw + w'Aw - w'Bw|.$$

By the triangle inequality,

$$|v'Av - w'Bw| \le |v'Av - w'Aw| + |w'Aw - w'Bw|.$$

From the previous exercise we know that

$$|v'Av - w'Aw| \le \rho(A)\|v-w\|\{\|v-w\| + 2\min(\|v\|, \|w\|)\}.$$

We also have

$$|w'Aw - w'Bw| = |w'(A-B)w| \le \|w\|^2 \rho(A-B).$$

Combining these results, we get

$$|v'Av - w'Bw| \le \rho(A)\|v-w\|\{\|v-w\| + 2\min(\|v\|, \|w\|)\} + \|w\|^2 \rho(A-B).$$
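
The combined bound can likewise be spot-checked numerically; a minimal R sketch (all names illustrative):

```r
# Numerical spot-check of the combined Exercise 4 bound.
set.seed(4)
m <- 5
sym <- function(M) (M + t(M)) / 2
rho <- function(M) max(abs(eigen(M, symmetric = TRUE)$values))
A <- sym(matrix(rnorm(m^2), m, m)); B <- sym(matrix(rnorm(m^2), m, m))
v <- rnorm(m); w <- rnorm(m)

nd  <- sqrt(sum((v - w)^2))                           # ||v - w||
lhs <- abs(drop(t(v) %*% A %*% v - t(w) %*% B %*% w))
rhs <- rho(A) * nd * (nd + 2 * min(sqrt(sum(v^2)), sqrt(sum(w^2)))) +
       sum(w^2) * rho(A - B)                          # ||w||^2 rho(A - B)
lhs <= rhs  # should be TRUE
```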




5. In the GMM setting, define:

$$M_n(\theta) = \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\|.$$

Apply the result of the previous exercise to verify that:

$$\left| \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right) - E(g(z_1, \theta))' W E(g(z_1, \theta)) \right|$$
$$\le \rho(W_n) M_n(\theta) \{ M_n(\theta) + 2\|E(g(z_1, \theta))\| \} + \|E(g(z_1, \theta))\|^2 \rho(W_n - W).$$

Use this to show that:
$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)|$$
is bounded above by:
$$\rho(W_n) M_n^* \{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \} + \left( \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right)^2 \rho(W_n - W),$$
where $M_n^* = \sup_{\theta \in \Theta} M_n(\theta)$.

Solution: Using the result of the previous problem, let

$$v = n^{-1} \sum_{i=1}^n g(z_i, \theta), \qquad w = E(g(z_1, \theta)), \qquad A = W_n, \qquad B = W.$$

We get the following inequality:

$$\left| \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right) - E(g(z_1, \theta))' W E(g(z_1, \theta)) \right|$$
$$\le \rho(W_n) \|v - w\| \{ \|v - w\| + 2\min(\|v\|, \|w\|) \} + \|E(g(z_1, \theta))\|^2 \rho(W_n - W).$$

By definition,

$$M_n(\theta) = \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\| = \|v - w\|,$$

and $\min(\|v\|, \|w\|) \le \|w\| = \|E(g(z_1, \theta))\|$. The inequality therefore becomes

$$\left| \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right) - E(g(z_1, \theta))' W E(g(z_1, \theta)) \right|$$
$$\le \rho(W_n) M_n(\theta) \{ M_n(\theta) + 2\|E(g(z_1, \theta))\| \} + \|E(g(z_1, \theta))\|^2 \rho(W_n - W).$$

We know that

$$Q_n(z_1, \dots, z_n, \theta) = \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right), \qquad Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta)).$$

Taking the supremum over all $\theta \in \Theta$ gives

$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \le \rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \left( \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right)^2 \rho(W_n - W),$$

where

$$M_n^* = \sup_{\theta \in \Theta} M_n(\theta) = \sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\|,$$

as required.
6. Using the bound for $\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)|$ obtained in the previous exercise,
prove Lemma "GMMC" stated in class.

Hint: You can use that $\rho$ is continuous on the space of symmetric matrices.

Lemma: In the GMM setting, if $W_n \xrightarrow{p} W$, $\theta \mapsto E(g(z_1, \theta))$ is continuous, $\Theta$ is compact,
and if:
$$\sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\| \xrightarrow{p} 0,$$
then:
$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0,$$
for:
$$Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta)),$$
the latter being continuous.

Solution: From Exercise 5, we have the upper bound

$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \le \rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \left( \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right)^2 \rho(W_n - W),$$

where

$$M_n^* = \sup_{\theta \in \Theta} M_n(\theta) = \sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\|.$$

Since $\rho$ is continuous on the space of symmetric matrices and $W_n \xrightarrow{p} W$, the continuous mapping theorem gives

$$\rho(W_n) \xrightarrow{p} \rho(W) \qquad \text{and} \qquad \rho(W_n - W) \xrightarrow{p} \rho(0) = 0,$$

while $M_n^* \xrightarrow{p} 0$ holds by assumption.

Combining these results, the entire upper bound converges in probability to zero:

$$\rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \left( \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right)^2 \rho(W_n - W) \xrightarrow{p} 0.$$

Here the term $\sup_{\theta \in \Theta} \|E(g(z_1, \theta))\|$ is finite because $\theta \mapsto E(g(z_1, \theta))$ is continuous and $\Theta$ is compact, so the continuous function $\theta \mapsto \|E(g(z_1, \theta))\|$ attains its maximum on $\Theta$.

Since $\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)|$ is nonnegative and bounded above by a quantity converging in probability to zero, we conclude that

$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0.$$

Finally, $Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta))$ is continuous as the composition of the continuous map $\theta \mapsto E(g(z_1, \theta))$ with a continuous quadratic form.
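
The uniform convergence can also be illustrated numerically for the Gaussian example of Exercise 1. Below is a minimal R sketch, assuming $W_n = W = I$ and evaluating the supremum over a finite grid (grid and names illustrative); the maximal gap should shrink as $n$ grows:

```r
# Illustration of Lemma "GMMC" for the Gaussian example (W_n = W = I).
set.seed(6)
mu0 <- 4; sigma20 <- 4
theta_grid <- expand.grid(mu = seq(2, 6, by = 0.1),
                          s2 = seq(1, 8, by = 0.1))

# Population moments E[g(X, theta)] in closed form for X ~ N(mu0, sigma20).
Eg <- function(mu, s2) c(mu - mu0,
                         s2 - (sigma20 + (mu0 - mu)^2),
                         (mu0^3 + 3 * mu0 * sigma20) - mu * (mu^2 + 3 * s2))

sup_diff <- function(n) {
  x <- rnorm(n, mu0, sqrt(sigma20))
  d <- mapply(function(mu, s2) {
    gbar <- c(mean(mu - x), mean(s2 - (x - mu)^2),
              mean(x^3 - mu * (mu^2 + 3 * s2)))
    abs(sum(gbar^2) - sum(Eg(mu, s2)^2))  # |Q_n(theta) - Q(theta)|
  }, theta_grid$mu, theta_grid$s2)
  max(d)                                  # sup over the grid
}

sapply(c(100, 1000, 10000), sup_diff)     # should shrink toward zero
```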

7. In this exercise we learn that in the GMM setting one can without loss of generality
assume that $W_n$ is symmetric. This follows from the observation that, for $v \in \mathbb{R}^m$ and $A$
an $m \times m$ matrix (possibly non-symmetric), it holds that (show this):

$$v'Av = v' \left( \frac{1}{2}(A + A') \right) v.$$

Show that
$$\frac{1}{2}(A + A')$$
is symmetric.
In the remaining two problems you will prove the consistency of the maximum
likelihood estimator (under suitable assumptions). More specifically, you will prove the
following theorem:

Theorem: Let $z_i$ be i.i.d. with density $f(z, \theta)$ for $\theta \in \Theta \subseteq \mathbb{R}^p$ and $z \in Z \subseteq \mathbb{R}^q$, where
$f(z, \theta) > 0$ for every $z \in Z$ and every $\theta \in \Theta$, and:
$$\int_Z f(y, \theta) \, dy = 1 \quad \text{for all } \theta \in \Theta.$$

Assume that:

1. $\Theta$ is compact;

2. the map $\theta \mapsto f(z, \theta)$ is continuous on $\Theta$ for every $z \in Z$, and $f(z_1, \theta)$ is a random
variable for every $\theta \in \Theta$;

3. $E\left[\sup_{\theta \in \Theta} \log(f(z_1, \theta))\right] < \infty$;

4. $E\left[\log\left(\frac{f(z_1, \theta_0)}{f(z_1, \theta)}\right)\right] > 0$ for every $\theta \in \Theta$ such that $\theta \neq \theta_0$.

Then the MLE $\hat\theta_n$, i.e., any solution to the minimization problem:
$$\min_{\theta \in \Theta} n^{-1} \sum_{i=1}^n \left[ -\log(f(z_i, \theta)) \right] =: \min_{\theta \in \Theta} Q_n(z_1, \dots, z_n, \theta),$$
satisfies $\hat\theta_n \xrightarrow{p} \theta_0$, the latter denoting the true parameter.

To prove this theorem, we apply Corollary "Cons". Note that $\Theta$ is compact by
assumption. It hence remains to verify that (i) $Q_n$ converges uniformly to a continuous
function of which (ii) $\theta_0$ is the unique minimum.

Solution: Let's rewrite the expression:

$$v' \left( \frac{1}{2}(A + A') \right) v = \frac{1}{2} v'Av + \frac{1}{2} v'A'v.$$

Since $v'A'v$ is a scalar, it equals its own transpose:

$$v'A'v = (v'A'v)' = v'Av.$$

Therefore

$$\frac{1}{2} v'Av + \frac{1}{2} v'A'v = \frac{1}{2} v'Av + \frac{1}{2} v'Av = v'Av,$$

and thus

$$v' \left( \frac{1}{2}(A + A') \right) v = v'Av.$$

For the symmetry claim, using that the transpose of a sum is the sum of the transposes:

$$\left( \frac{1}{2}(A + A') \right)' = \frac{1}{2}(A' + A) = \frac{1}{2}(A + A'),$$

so $\frac{1}{2}(A + A')$ is symmetric.
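
As a quick numerical check (names illustrative), one can verify in R that a quadratic form only depends on the symmetric part of the matrix:

```r
# Check that v'Av equals v'((A + A')/2)v for a non-symmetric A.
set.seed(7)
m <- 4
A <- matrix(rnorm(m^2), m, m)   # generally non-symmetric
v <- rnorm(m)
S <- (A + t(A)) / 2             # symmetric part of A

drop(t(v) %*% A %*% v)          # quadratic form with A
drop(t(v) %*% S %*% v)          # identical value with S
isSymmetric(S)                  # TRUE
```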

8. Apply the ULLN we had in class (cf. the slides) to verify that:
$$\sup_{\theta \in \Theta} \left| Q_n(z_1, \dots, z_n, \theta) - E[-\log(f(z_1, \theta))] \right| \xrightarrow{p} 0,$$
and that $Q(\theta)$ defined by $\theta \mapsto E[-\log(f(z_1, \theta))]$ is continuous. To do this, check if the
assumptions of the ULLN in the slides are satisfied in the setting of the above Theorem.

Solution: We apply the Uniform Law of Large Numbers (ULLN) with

$$q(z, \theta) = -\log(f(z, \theta)).$$

The map $\theta \mapsto q(z, \theta)$ is continuous for every $z \in Z$: it is the composition of the continuous map $\theta \mapsto f(z, \theta)$ (Assumption 2) with $x \mapsto -\log(x)$, which is continuous on the range of $f$ since $f(z, \theta) > 0$ everywhere.

We also have

$$E\left[\sup_{\theta \in \Theta} q(z_1, \theta)\right] < \infty,$$

which holds by Assumption 3.

The ULLN then yields

$$\sup_{\theta \in \Theta} \left| Q_n(z_1, \dots, z_n, \theta) - E[-\log(f(z_1, \theta))] \right| = \sup_{\theta \in \Theta} \left| n^{-1} \sum_{i=1}^n q(z_i, \theta) - E[q(z_1, \theta)] \right| \xrightarrow{p} 0,$$

and that the map
$$\theta \mapsto E[q(z_1, \theta)] = E[-\log(f(z_1, \theta))]$$
is continuous.

By applying the ULLN and verifying the continuity and bounded expectation
conditions, we conclude that
$$\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0,$$
where
$$Q(\theta) = E[-\log(f(z_1, \theta))].$$

9. Show that θ0 is the unique minimizer of Q. To this end, show (using Assumption 4)
for every θ ∈ Θ satisfying θ ̸= θ0 that:

Q(θ) − Q(θ0 ) > 0.

Solution: We need to show that $\theta_0$ is the unique minimizer of $Q$. To do this, we
show that $Q(\theta) - Q(\theta_0) > 0$ for every $\theta \in \Theta$ satisfying $\theta \neq \theta_0$.

Recalling that $Q(\theta) = E[-\log(f(z_1, \theta))]$,

$$Q(\theta) - Q(\theta_0) = E[-\log(f(z_1, \theta))] - E[-\log(f(z_1, \theta_0))]$$
$$= E[-\log(f(z_1, \theta)) + \log(f(z_1, \theta_0))]$$
$$= E\left[\log\left(\frac{f(z_1, \theta_0)}{f(z_1, \theta)}\right)\right] > 0.$$
The final inequality holds by Assumption 4:
$$E\left[\log\left(\frac{f(z_1, \theta_0)}{f(z_1, \theta)}\right)\right] > 0$$
for every $\theta \in \Theta$ such that $\theta \neq \theta_0$. (This expectation is the Kullback-Leibler divergence between $f(\cdot, \theta_0)$ and $f(\cdot, \theta)$.)

Therefore, we have proven that
$$Q(\theta) - Q(\theta_0) > 0,$$
so $\theta_0$ is the unique minimizer of $Q$.
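
For intuition, the expectation in Assumption 4 can be checked numerically in the Gaussian case, where the Kullback-Leibler divergence between two normals has a closed form. A minimal R sketch (parameter values illustrative):

```r
# Monte-Carlo check of E[log(f0/f_theta)] against the closed-form
# KL divergence between two normal densities.
set.seed(9)
mu0 <- 4; s0 <- 2      # true parameters: N(4, 4)
mu  <- 5; s  <- 1.5    # some theta != theta0

z <- rnorm(1e6, mu0, s0)
mc_kl <- mean(log(dnorm(z, mu0, s0) / dnorm(z, mu, s)))  # E[log(f0/f_theta)]

closed_kl <- log(s / s0) + (s0^2 + (mu0 - mu)^2) / (2 * s^2) - 1/2

c(monte_carlo = mc_kl, closed_form = closed_kl)  # both positive, approximately equal
```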
