Econometrics PS9
ECON-S-428
Nyandwi Charles
1. The first two exercises aim at assessing the quality of two GMM estimators for Gaussian observations. Let $X_1, \dots, X_n$ be i.i.d. $N(\mu, \sigma^2)$. Let $(\hat\mu_1, \hat\sigma_1^2) = (\bar{X}, s^2)$ be the usual MLE and let:
\[
(\hat\mu_2, \hat\sigma_2^2) = \arg\min_{\mu \in \mathbb{R},\, \sigma^2 \in \mathbb{R}_{>0}} \left\| \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right\|^2,
\]
with
\[
g(X_i, \mu, \sigma^2) = \begin{pmatrix} \mu - X_i \\ \sigma^2 - (X_i - \mu)^2 \\ X_i^3 - \mu(\mu^2 + 3\sigma^2) \end{pmatrix}.
\]
Show that both estimators are GMM estimators.
Solution: The estimator $(\hat\mu_1, \hat\sigma_1^2)$ is derived from the population moment conditions satisfied by the mean and variance:
\[
\begin{cases} E[X_i - \mu] = 0, \\ E[(X_i - \mu)^2 - \sigma^2] = 0. \end{cases}
\]
The sample mean and sample variance solve the sample analogues of these two conditions exactly, so $(\bar{X}, s^2)$ is the GMM estimator based on this exactly identified system (any weight matrix gives the same solution).
The estimator $(\hat\mu_2, \hat\sigma_2^2)$ is instead derived from the three moment functions
\[
g(X_i, \mu, \sigma^2) = \begin{pmatrix} \mu - X_i \\ \sigma^2 - (X_i - \mu)^2 \\ X_i^3 - \mu(\mu^2 + 3\sigma^2) \end{pmatrix},
\]
which satisfy $E[g(X_i, \mu, \sigma^2)] = 0$ at the true parameters (for the third component, note that $E[X_i^3] = \mu^3 + 3\mu\sigma^2$ when $X_i \sim N(\mu, \sigma^2)$).
To obtain a GMM estimator we solve the following:
\[
\hat\theta = \arg\min_{\theta} \left( \frac{1}{n} \sum_{i=1}^n g(X_i, \theta) \right)' W \left( \frac{1}{n} \sum_{i=1}^n g(X_i, \theta) \right).
\]
The second estimator $(\hat\mu_2, \hat\sigma_2^2)$ is based on the sample analogue of these moment conditions,
\[
\frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2),
\]
which, with three moments and two parameters, cannot in general be set exactly to zero. Its objective corresponds to the GMM criterion with $W = I_3$, since
\[
\left( \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right)' I_3 \left( \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right) = \left\| \frac{1}{n} \sum_{i=1}^n g(X_i, \mu, \sigma^2) \right\|^2.
\]
Minimizing this function yields $(\hat\mu_2, \hat\sigma_2^2)$, so it is a GMM estimator with the identity weight matrix.
Thus, we have proven that both $(\hat\mu_1, \hat\sigma_1^2)$ and $(\hat\mu_2, \hat\sigma_2^2)$ are GMM estimators.
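As a numerical sanity check (my own illustration, not part of the exercise), the following Python sketch simulates Gaussian data and computes both estimators; the second one minimizes the squared norm of the averaged moment vector with scipy.optimize.minimize. Both should be close to the true $(\mu, \sigma^2)$ for large $n$; the names gbar and objective are my own labels.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu, sigma2, n = 1.0, 2.0, 5000
x = rng.normal(mu, np.sqrt(sigma2), size=n)

# First estimator: the exactly identified moments E[X - mu] = 0 and
# E[(X - mu)^2 - sigma^2] = 0 are solved by the sample mean and variance.
mu1, s2_1 = x.mean(), x.var()

# Second estimator: minimize || n^{-1} sum_i g(X_i, mu, sigma^2) ||^2,
# i.e., GMM with the identity weight matrix on the three moments.
def gbar(theta):
    m, s2 = theta
    return np.array([(m - x).mean(),
                     (s2 - (x - m) ** 2).mean(),
                     (x ** 3 - m * (m ** 2 + 3 * s2)).mean()])

def objective(theta):
    gb = gbar(theta)
    return gb @ gb

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
print("estimator 1 (MLE):", mu1, s2_1)
print("estimator 2 (GMM):", res.x)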
3. For a symmetric $m \times m$ matrix $A$, let $\rho(A) = \max\{|\lambda_{\min}(A)|, |\lambda_{\max}(A)|\}$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of $A$, respectively.
Show that:
\[
|v'Av - w'Aw| \le \rho(A) \|v - w\| \{\|v - w\| + 2\|w\|\}.
\]
Also, show that:
\[
\|Av\| \le \rho(A)\|v\|.
\]
Solution: Since $A$ is symmetric, $w'A(v - w) = (v - w)'Aw$, so
\[
v'Av - w'Aw = (v - w)'A(v - w) + 2(v - w)'Aw.
\]
By the triangle inequality,
\[
|(v - w)'A(v - w) + 2(v - w)'Aw| \le |(v - w)'A(v - w)| + |2(v - w)'Aw|.
\]
For symmetric $A$ with spectral decomposition $A = \sum_j \lambda_j q_j q_j'$, we have $\|Av\|^2 = \sum_j \lambda_j^2 (q_j'v)^2 \le \rho(A)^2 \|v\|^2$, hence
\[
\|Aw\| \le \rho(A)\|w\|.
\]
We combine the inequalities, using Cauchy-Schwarz in the form $|(v-w)'A(v-w)| \le \|v-w\| \|A(v-w)\|$ and $|(v-w)'Aw| \le \|v-w\| \|Aw\|$, to get
\[
|v'Av - w'Aw| \le \rho(A)\|v - w\|^2 + 2\rho(A)\|v - w\|\|w\| = \rho(A)\|v - w\|\{\|v - w\| + 2\|w\|\}.
\]
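A quick random check of this bound (my own illustration; $\rho(A)$ is computed as the largest absolute eigenvalue of the symmetric matrix $A$):

import numpy as np

rng = np.random.default_rng(1)
m = 5
A = rng.normal(size=(m, m))
A = (A + A.T) / 2                          # random symmetric matrix
rho = np.max(np.abs(np.linalg.eigvalsh(A)))

for _ in range(1000):
    v, w = rng.normal(size=m), rng.normal(size=m)
    lhs = abs(v @ A @ v - w @ A @ w)
    d = np.linalg.norm(v - w)
    rhs = rho * d * (d + 2 * np.linalg.norm(w))
    assert lhs <= rhs + 1e-9               # bound holds on every draw
print("bound verified on 1000 random draws")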
4. For $v \in \mathbb{R}^m$ and symmetric $m \times m$ matrices $A$ and $B$, show that:
\[
|v'Av - v'Bv| \le \|v\|^2 \rho(A - B).
\]
Use this result and the previous exercise to verify that for $v, w \in \mathbb{R}^m$ and symmetric matrices $A$ and $B$:
\[
|v'Av - w'Bw| \le \rho(A)\|v - w\|\{\|v - w\| + 2\|w\|\} + \|w\|^2 \rho(A - B).
\]
Solution: Rewrite the expression as
\[
|v'Av - v'Bv| = |v'(A - B)v| \le \|v\| \|(A - B)v\| \le \|v\|^2 \rho(A - B),
\]
using Cauchy-Schwarz and the bound $\|Cv\| \le \rho(C)\|v\|$ from the previous exercise, applied to the symmetric matrix $C = A - B$. For the second claim, insert $w'Aw$:
\[
|v'Av - w'Bw| = |v'Av - w'Aw + w'Aw - w'Bw| \le |v'Av - w'Aw| + |w'Aw - w'Bw|
\]
by the triangle inequality. The first term is bounded by the previous exercise; for the second we also have
\[
|w'Aw - w'Bw| = |w'(A - B)w| \le \|w\|^2 \rho(A - B).
\]
By combining these results we get
\[
|v'Av - w'Bw| \le \rho(A)\|v - w\|\{\|v - w\| + 2\|w\|\} + \|w\|^2 \rho(A - B).
\]
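The combined inequality can be checked the same way (again only an illustration, with rho denoting the largest absolute eigenvalue):

import numpy as np

rng = np.random.default_rng(2)
m = 5
A = rng.normal(size=(m, m)); A = (A + A.T) / 2   # symmetric A
B = rng.normal(size=(m, m)); B = (B + B.T) / 2   # symmetric B
rho = lambda M: np.max(np.abs(np.linalg.eigvalsh(M)))

for _ in range(1000):
    v, w = rng.normal(size=m), rng.normal(size=m)
    lhs = abs(v @ A @ v - w @ B @ w)
    d = np.linalg.norm(v - w)
    rhs = rho(A) * d * (d + 2 * np.linalg.norm(w)) + np.linalg.norm(w) ** 2 * rho(A - B)
    assert lhs <= rhs + 1e-9
print("combined bound verified on 1000 random draws")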
5. Show that $\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)|$ is bounded above by:
\[
\rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\|^2 \rho(W_n - W),
\]
where
\[
M_n^* = \sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\|.
\]
Solution: Apply the result of the previous exercise with $A = W_n$, $B = W$, $v = n^{-1} \sum_{i=1}^n g(z_i, \theta)$, and $w = E(g(z_1, \theta))$. We get the following inequality:
\[
\left| \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right) - E(g(z_1, \theta))' W E(g(z_1, \theta)) \right|
\]
\[
\le \rho(W_n) \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\| \left\{ \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\| + 2 \|E(g(z_1, \theta))\| \right\} + \|E(g(z_1, \theta))\|^2 \rho(W_n - W).
\]
We know that
\[
Q_n(z_1, \dots, z_n, \theta) = \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right)' W_n \left( n^{-1} \sum_{i=1}^n g(z_i, \theta) \right),
\]
\[
Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta)).
\]
Taking the supremum over all $\theta \in \Theta$ gives us
\[
\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \le \rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\|^2 \rho(W_n - W).
\]
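To make the bound concrete, here is a small sketch (my own construction; the moment function $g(z, \theta) = (z - \theta,\ z^2 - \theta^2 - 1)'$ for $z \sim N(\theta_0, 1)$ and the grid standing in for a compact $\Theta$ are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(3)
theta0, n = 0.5, 2000
z = rng.normal(theta0, 1.0, size=n)
W = np.eye(2)
Wn = W + 0.01 * rng.normal(size=(2, 2)); Wn = (Wn + Wn.T) / 2   # noisy symmetric weight
rho = lambda M: np.max(np.abs(np.linalg.eigvalsh(M)))
grid = np.linspace(-2.0, 3.0, 201)                              # stands in for compact Theta

def gbar(th):  # sample moments
    return np.array([(z - th).mean(), (z ** 2 - th ** 2 - 1).mean()])
def Eg(th):    # population moments: E[z] = theta0, E[z^2] = theta0^2 + 1
    return np.array([theta0 - th, theta0 ** 2 - th ** 2])

sup_diff = max(abs(gbar(t) @ Wn @ gbar(t) - Eg(t) @ W @ Eg(t)) for t in grid)
Mstar = max(np.linalg.norm(gbar(t) - Eg(t)) for t in grid)
supEg = max(np.linalg.norm(Eg(t)) for t in grid)
bound = rho(Wn) * Mstar * (Mstar + 2 * supEg) + supEg ** 2 * rho(Wn - W)
print(sup_diff, "<=", bound)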
6. Using the bound for $\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)|$ obtained in the previous exercise, prove Lemma "GMMC" stated in class.
Hint: You can use that $\rho$ is continuous on the space of symmetric matrices.
Lemma: In the GMM setting, if $W_n \xrightarrow{p} W$, $\theta \mapsto E(g(z_1, \theta))$ is continuous, $\Theta$ is compact, and if:
\[
\sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\| \xrightarrow{p} 0,
\]
then:
\[
\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0,
\]
for:
\[
Q(\theta) = E(g(z_1, \theta))' W E(g(z_1, \theta)),
\]
the latter being continuous.
Solution: Recall from the previous exercise that
\[
\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \le \rho(W_n) M_n^* \left\{ M_n^* + 2 \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\| \right\} + \sup_{\theta \in \Theta} \|E(g(z_1, \theta))\|^2 \rho(W_n - W),
\]
where
\[
M_n^* = \sup_{\theta \in \Theta} M_n(\theta) = \sup_{\theta \in \Theta} \left\| n^{-1} \sum_{i=1}^n g(z_i, \theta) - E(g(z_1, \theta)) \right\|.
\]
By assumption, $M_n^* \xrightarrow{p} 0$. Since $\rho$ is continuous on the space of symmetric matrices, the continuous mapping theorem applied to $W_n \xrightarrow{p} W$ gives $\rho(W_n) \xrightarrow{p} \rho(W)$ and $\rho(W_n - W) \xrightarrow{p} \rho(0) = 0$. The term $\sup_{\theta \in \Theta} \|E(g(z_1, \theta))\|$ is finite because $E(g(z_1, \theta))$ is continuous in $\theta$ and $\Theta$ is compact, so the continuous function $\theta \mapsto \|E(g(z_1, \theta))\|$ attains its maximum on $\Theta$. Hence both terms of the bound converge to zero in probability, which proves $\sup_{\theta \in \Theta} |Q_n - Q| \xrightarrow{p} 0$. Finally, $Q$ is continuous as the composition of the continuous map $\theta \mapsto E(g(z_1, \theta))$ with the continuous quadratic form $u \mapsto u'Wu$.
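The lemma's conclusion can be visualized with the same toy moment model as above (an illustrative sketch with $W_n = W = I$, so that only $M_n^*$ matters): the sup-distance between $Q_n$ and $Q$ over the grid shrinks toward zero as $n$ grows.

import numpy as np

rng = np.random.default_rng(4)
theta0 = 0.5
grid = np.linspace(-2.0, 3.0, 201)
for n in [100, 1_000, 10_000, 100_000]:
    z = rng.normal(theta0, 1.0, size=n)
    sup_diff = 0.0
    for th in grid:
        gb = np.array([(z - th).mean(), (z ** 2 - th ** 2 - 1).mean()])
        eg = np.array([theta0 - th, theta0 ** 2 - th ** 2])
        sup_diff = max(sup_diff, abs(gb @ gb - eg @ eg))  # Wn = W = identity
    print(n, sup_diff)                                    # decreases with n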
7. In this exercise we learn that in the GMM setting one can without loss of generality assume that $W_n$ is symmetric. This follows from the observation that, for $v \in \mathbb{R}^m$ and $A$ an $m \times m$ matrix (possibly non-symmetric), it holds that (show this):
\[
v'Av = v' \left( \frac{1}{2}(A + A') \right) v.
\]
Show that
\[
\frac{1}{2}(A + A')
\]
is symmetric.
In the remaining two problems you will prove the consistency of the maximum likelihood estimator (under suitable assumptions). More specifically, you will prove the following theorem:
Assume that:
1. $\Theta$ is compact;
Then the MLE $\hat\theta_n$, i.e., any solution to the minimization problem:
\[
\min_{\theta \in \Theta} n^{-1} \sum_{i=1}^n -\log(f(z_i, \theta)) =: \min_{\theta \in \Theta} Q_n(z_1, \dots, z_n, \theta),
\]
satisfies $\hat\theta_n \xrightarrow{p} \theta_0$, the latter denoting the true parameter.
To prove this theorem, we apply Corollary "Cons". Note that $\Theta$ is compact by assumption. It hence remains to verify that (i) $Q_n$ converges uniformly to a continuous function of which (ii) $\theta_0$ is the unique minimum.
Solution (Exercise 7): Since $v'Av$ is a scalar, it equals its own transpose:
\[
v'A'v = (v'Av)' = v'Av.
\]
Therefore
\[
\frac{1}{2} v'Av + \frac{1}{2} v'A'v = \frac{1}{2} v'Av + \frac{1}{2} v'Av = v'Av.
\]
Thus
\[
v' \left( \frac{1}{2}(A + A') \right) v = v'Av.
\]
We need to show that the symmetry holds. Since the transpose of a sum of matrices is the sum of the transposes, and $(A')' = A$:
\[
\left( \frac{1}{2}(A + A') \right)' = \frac{1}{2}(A' + A) = \frac{1}{2}(A + A').
\]
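A two-line numerical check of both claims (my own illustration):

import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4))               # deliberately non-symmetric
S = (A + A.T) / 2                         # symmetric part of A
v = rng.normal(size=4)
print(np.isclose(v @ A @ v, v @ S @ v))   # v'Av equals v'((A+A')/2)v
print(np.allclose(S, S.T))                # (A+A')/2 is symmetric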
8. Apply the ULLN we had in class (cf. the slides) to verify that:
\[
\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - E[-\log(f(z_1, \theta))]| \xrightarrow{p} 0,
\]
and that $Q(\theta)$ defined by $\theta \mapsto E[-\log(f(z_1, \theta))]$ is continuous. To do this, check if the assumptions of the ULLN in the slides are satisfied in the setting of the above Theorem.
Solution: Set $q(z, \theta) := -\log(f(z, \theta))$. This composition is continuous in $\theta$ because both $\log(x)$ and $\theta \mapsto f(z, \theta)$ are continuous, and the input $f(z, \theta)$ to $\log(x)$ satisfies $f(z, \theta) > 0$ everywhere. Thus $q$, being this composition, inherits continuity in $\theta$ for every $z \in Z$. By the assumptions of the Theorem we also have the domination condition
\[
E\left[ \sup_{\theta \in \Theta} |q(z_1, \theta)| \right] < \infty,
\]
so the assumptions of the ULLN are satisfied and
\[
\sup_{\theta \in \Theta} \left| Q_n(z_1, \dots, z_n, \theta) - E[-\log(f(z_1, \theta))] \right| = \sup_{\theta \in \Theta} \left| n^{-1} \sum_{i=1}^n q(z_i, \theta) - E[q(z_1, \theta)] \right| \xrightarrow{p} 0.
\]
The ULLN also yields that
\[
\theta \mapsto E[q(z_1, \theta)] = E[-\log(f(z_1, \theta))]
\]
is continuous.
By applying the ULLN and verifying the continuity and bounded-expectation conditions, we conclude that
\[
\sup_{\theta \in \Theta} |Q_n(z_1, \dots, z_n, \theta) - Q(\theta)| \xrightarrow{p} 0,
\]
where
\[
Q(\theta) = E[-\log(f(z_1, \theta))].
\]
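For the Gaussian location model with known unit variance, this uniform convergence can be seen directly (an illustrative sketch; here $-\log f(z, \mu) = \tfrac{1}{2}\log(2\pi) + \tfrac{1}{2}(z - \mu)^2$, so $Q(\mu) = \tfrac{1}{2}\log(2\pi) + \tfrac{1}{2}(1 + (\mu_0 - \mu)^2)$):

import numpy as np

rng = np.random.default_rng(6)
mu0 = 1.0
grid = np.linspace(-2.0, 4.0, 201)      # stands in for compact Theta
for n in [100, 1_000, 10_000]:
    z = rng.normal(mu0, 1.0, size=n)
    Qn = 0.5 * np.log(2 * np.pi) + 0.5 * ((z[None, :] - grid[:, None]) ** 2).mean(axis=1)
    Q = 0.5 * np.log(2 * np.pi) + 0.5 * (1 + (mu0 - grid) ** 2)
    print(n, np.max(np.abs(Qn - Q)))    # sup-distance shrinks with n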
9. Show that $\theta_0$ is the unique minimizer of $Q$. To this end, show (using Assumption 4) for every $\theta \in \Theta$ satisfying $\theta \neq \theta_0$ that:
\[
Q(\theta) - Q(\theta_0) = E[-\log(f(z_1, \theta))] - E[-\log(f(z_1, \theta_0))] = E[-\log(f(z_1, \theta)) + \log(f(z_1, \theta_0))] = E\left[ \log \frac{f(z_1, \theta_0)}{f(z_1, \theta)} \right] > 0.
\]
Solution: The chain of equalities follows from the definition of $Q$, linearity of the expectation, and $\log(a) - \log(b) = \log(a/b)$. The final inequality holds by Assumption 4, which guarantees
\[
E\left[ \log \frac{f(z_1, \theta_0)}{f(z_1, \theta)} \right] > 0
\]
for every $\theta \in \Theta$ such that $\theta \neq \theta_0$. Therefore $Q(\theta) > Q(\theta_0)$ whenever $\theta \neq \theta_0$, so we have proven that $\theta_0$ is the unique minimizer of $Q$; together with Corollary "Cons" and the uniform convergence from Exercise 8, this establishes the consistency of the MLE.
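In the Gaussian location model used above, the quantity $E[\log(f(z_1, \theta_0)/f(z_1, \theta))]$ is the Kullback-Leibler divergence, which equals $(\mu - \mu_0)^2/2$ for unit-variance normals, so its strict positivity for $\mu \neq \mu_0$ can be checked by simulation (illustration only):

import numpy as np

rng = np.random.default_rng(7)
mu0 = 1.0
z = rng.normal(mu0, 1.0, size=200_000)
for mu in [0.0, 0.5, 1.5, 3.0]:
    # log f(z, mu0) - log f(z, mu) for unit-variance Gaussian densities
    kl_hat = (-0.5 * (z - mu0) ** 2 + 0.5 * (z - mu) ** 2).mean()
    print(mu, kl_hat, "theory:", 0.5 * (mu - mu0) ** 2)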