Appendix A

The random effects model specifies an error term consisting of individual and time components. It assumes correlation over time is due to individual effects, and the components are independent. The generalized least squares estimator exploits the error covariance structure for a more efficient estimate. It involves transforming the data using the covariance matrix to satisfy classical linear regression assumptions.

Random Effects Model

The random effects model is specified as

$$y_{it} = \mu + x_{it}'\beta + \alpha_i + \varepsilon_{it}, \qquad i = 1, 2, \ldots, n; \; t = 1, 2, \ldots, T \tag{1}$$


where $\varepsilon_{it} \sim IID(0, \sigma_\varepsilon^2)$ and $\alpha_i \sim IID(0, \sigma_\alpha^2)$.

In the random effects model $\alpha_i + \varepsilon_{it}$ is treated as an error term consisting of two components: an individual specific component $\alpha_i$ (random heterogeneity specific to the $i$th individual), which does not vary over time, and a remainder component $\varepsilon_{it}$, which is assumed to be uncorrelated over time.

That is, all correlation of the error terms over time is attributed to the individual effects $\alpha_i$. It is assumed that $\alpha_i$ and $\varepsilon_{it}$ are mutually independent and independent of $x_{js}$ (for all $j$ and $s$).

The error components structure implies that the composite error term $\alpha_i + \varepsilon_{it}$ exhibits a particular form of autocorrelation (unless $\sigma_\alpha^2 = 0$).
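Written out, this restates the assumptions above in covariance form: with $u_{it} = \alpha_i + \varepsilon_{it}$,

$$\operatorname{var}(u_{it}) = \sigma_\alpha^2 + \sigma_\varepsilon^2, \qquad \operatorname{cov}(u_{it}, u_{is}) = \sigma_\alpha^2 \;\; (t \neq s), \qquad \operatorname{corr}(u_{it}, u_{is}) = \frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma_\varepsilon^2},$$

so the within-individual correlation is the same for every pair of time periods.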

Consequently, standard errors for the OLS estimator are incorrect, and a more efficient estimator is the generalized least squares (GLS) estimator, which can be obtained by exploiting the structure of the error covariance matrix.

To derive the GLS estimator, note that for individual $i$ all $T$ observations can be stacked as:

$$\begin{bmatrix} y_{i1} \\ \vdots \\ y_{iT} \end{bmatrix} = \mu \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} + \begin{bmatrix} x_{i1}' \\ \vdots \\ x_{iT}' \end{bmatrix} \beta + \begin{bmatrix} \alpha_i \\ \vdots \\ \alpha_i \end{bmatrix} + \begin{bmatrix} \varepsilon_{i1} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}$$

The above model can compactly be written as:

$$y_i = \mu \iota_T + x_i \beta + \alpha_i \iota_T + \varepsilon_i \tag{2}$$

where
$$y_i = \begin{bmatrix} y_{i1} \\ \vdots \\ y_{iT} \end{bmatrix}, \quad \iota_T = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}, \quad x_i = \begin{bmatrix} x_{i1}' \\ \vdots \\ x_{iT}' \end{bmatrix} \quad \text{and} \quad \varepsilon_i = \begin{bmatrix} \varepsilon_{i1} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}$$

The covariance matrix of the composite random vector $\alpha_i \iota_T + \varepsilon_i$ is given by

$$\operatorname{var}\left( \alpha_i \iota_T + \varepsilon_i \right) = \sigma_\alpha^2\, \iota_T \iota_T' + \sigma_\varepsilon^2\, I_T = \Omega$$

where $I_T$ is the $T$-dimensional identity matrix.

This can be used to derive the generalized least squares (GLS) estimator for the parameters in (2).

For each individual, we can transform the data by premultiplying both sides of equation (2) by $\Omega^{-1/2}$, where $\Omega^{-1}$ is given by

$$\Omega^{-1} = \frac{1}{\sigma_\varepsilon^2}\left[ I_T - \frac{\sigma_\alpha^2}{\sigma_\varepsilon^2 + T \sigma_\alpha^2}\, \iota_T \iota_T' \right]$$

which can be written as

$$\Omega^{-1} = \frac{1}{\sigma_\varepsilon^2}\left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi\, \frac{1}{T}\iota_T \iota_T' \right]$$

where
$$\psi = \frac{\sigma_\varepsilon^2}{\sigma_\varepsilon^2 + T \sigma_\alpha^2}$$
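As a quick numerical check of the two expressions above (a minimal sketch, not part of the original derivation; the values of $T$, $\sigma_\varepsilon^2$ and $\sigma_\alpha^2$ are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary illustrative values (assumptions for this sketch only)
T, sigma2_eps, sigma2_alpha = 5, 2.0, 3.0

iota = np.ones((T, 1))
I_T = np.eye(T)

# Omega = sigma_alpha^2 * iota iota' + sigma_eps^2 * I_T
Omega = sigma2_alpha * (iota @ iota.T) + sigma2_eps * I_T

# Closed-form inverse from the text
psi = sigma2_eps / (sigma2_eps + T * sigma2_alpha)
Omega_inv_formula = (1 / sigma2_eps) * (
    (I_T - iota @ iota.T / T) + psi * (iota @ iota.T / T)
)

# The closed form should agree with the numerical inverse
assert np.allclose(Omega_inv_formula, np.linalg.inv(Omega))
```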

If we denote the composite random vector $\alpha_i \iota_T + \varepsilon_i$ by $u_i$, the model in equation (2) can be written as

$$y_i = \mu \iota_T + x_i \beta + u_i \tag{3}$$

Premultiplying both sides of equation (3) by $\Omega^{-1/2}$, where

$$\Omega^{-1/2} = \frac{1}{\sigma_\varepsilon}\left[ I_T - \frac{\vartheta}{T}\iota_T \iota_T' \right] = \frac{1}{\sigma_\varepsilon}\left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi^{1/2}\, \frac{1}{T}\iota_T \iota_T' \right] \quad \text{and} \quad \vartheta = 1 - \psi^{1/2}$$

results in the following transformed model

$$\Omega^{-1/2} y_i = \mu\, \Omega^{-1/2} \iota_T + \Omega^{-1/2} x_i \beta + \Omega^{-1/2} u_i$$

Let $u_i^* = \Omega^{-1/2} u_i$ be the vector of random disturbances of the transformed model. The transformed vector of random disturbances satisfies the assumptions of the classical linear regression model, as

$$\operatorname{var}\left( u_i^* \right) = \operatorname{var}\left( \Omega^{-1/2} u_i \right) = \Omega^{-1/2} \operatorname{var}(u_i)\, \Omega^{-1/2} = \Omega^{-1/2}\, \Omega\, \Omega^{-1/2} = I_T$$
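In scalar terms, premultiplying by $\sigma_\varepsilon \Omega^{-1/2}$ replaces each observation by $y_{it} - \vartheta\, \bar{y}_{i\cdot}$ (and likewise for the regressors), the familiar quasi-demeaning form of the random effects transformation. A minimal, self-contained sketch checking this equivalence (variable names and parameter values are illustrative assumptions):

```python
import numpy as np

T, sigma2_eps, sigma2_alpha = 5, 2.0, 3.0    # illustrative values (assumption)
psi = sigma2_eps / (sigma2_eps + T * sigma2_alpha)

iota = np.ones((T, 1))
I_T = np.eye(T)

# sigma_eps * Omega^{-1/2}: within part plus sqrt(psi) times the averaging part
P = (I_T - iota @ iota.T / T) + np.sqrt(psi) * (iota @ iota.T / T)

rng = np.random.default_rng(0)
y_i = rng.normal(size=(T, 1))                # one individual's stacked y

# Scalar quasi-demeaning: y_it - (1 - sqrt(psi)) * ybar_i
theta = 1 - np.sqrt(psi)
assert np.allclose(P @ y_i, y_i - theta * y_i.mean())
```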

Thus OLS on the transformed model is optimal, and the sum of squared errors, denoted by $S$, is given by:

$$S = \sum_{i=1}^{n} u_i^{*\prime} u_i^* = \sum_{i=1}^{n} \left[ \Omega^{-1/2} \left( y_i - \mu \iota_T - x_i \beta \right) \right]' \left[ \Omega^{-1/2} \left( y_i - \mu \iota_T - x_i \beta \right) \right]$$
$$= \sum_{i=1}^{n} \left( y_i - \mu \iota_T - x_i \beta \right)' \Omega^{-1} \left( y_i - \mu \iota_T - x_i \beta \right)$$
$$= \sum_{i=1}^{n} \left\{ y_i' \Omega^{-1} y_i - 2\mu\, y_i' \Omega^{-1} \iota_T - 2\, y_i' \Omega^{-1} x_i \beta + \mu^2\, \iota_T' \Omega^{-1} \iota_T + 2\mu\, \iota_T' \Omega^{-1} x_i \beta + \beta' x_i' \Omega^{-1} x_i \beta \right\} \tag{4}$$

Minimizing the sum of squared errors in equation (4) with respect to $\mu$ and $\beta$ gives rise to

$$\left.\frac{\partial S}{\partial \mu}\right|_{\hat{\mu}_{RE}, \hat{\beta}_{RE}} = \sum_{i=1}^{n} \left\{ -2\, y_i' \Omega^{-1} \iota_T + 2 \hat{\mu}_{RE}\, \iota_T' \Omega^{-1} \iota_T + 2\, \iota_T' \Omega^{-1} x_i \hat{\beta}_{RE} \right\} = 0 \tag{5.1}$$

$$\left.\frac{\partial S}{\partial \beta}\right|_{\hat{\mu}_{RE}, \hat{\beta}_{RE}} = \sum_{i=1}^{n} \left\{ -2\, x_i' \Omega^{-1} y_i + 2 \hat{\mu}_{RE}\, x_i' \Omega^{-1} \iota_T + 2\, x_i' \Omega^{-1} x_i \hat{\beta}_{RE} \right\} = 0 \tag{5.2}$$

(Useful rank-one inversion identity, as used for $\Omega^{-1}$ above: $\left( A - b b' \right)^{-1} = A^{-1} + \dfrac{A^{-1} b b' A^{-1}}{1 - b' A^{-1} b}$.)

Equation (5.1) can be expressed as

$$\sum_{i=1}^{n} \left\{ -\frac{2\psi}{\sigma_\varepsilon^2}\, y_i' \iota_T + \frac{2\psi}{\sigma_\varepsilon^2}\, T \hat{\mu}_{RE} + \frac{2\psi}{\sigma_\varepsilon^2}\, \iota_T' x_i \hat{\beta}_{RE} \right\} = 0$$

since $\Omega^{-1} \iota_T = \dfrac{\psi}{\sigma_\varepsilon^2}\, \iota_T$. Hence

$$\sum_{i=1}^{n} \left\{ -y_i' \iota_T + T \hat{\mu}_{RE} + \iota_T' x_i \hat{\beta}_{RE} \right\} = 0$$

$$nT \hat{\mu}_{RE} = \sum_{i=1}^{n}\sum_{t=1}^{T} y_{it} - \sum_{i=1}^{n}\sum_{t=1}^{T} x_{it}' \hat{\beta}_{RE}$$

$$\hat{\mu}_{RE} = \bar{y} - \bar{x}' \hat{\beta}_{RE} \tag{5.3}$$

where $\bar{y} = \frac{1}{nT}\sum_{i}\sum_{t} y_{it}$ and $\bar{x} = \frac{1}{nT}\sum_{i}\sum_{t} x_{it}$ denote the overall sample means.

Using (5.3), equation (5.2) can be written as

$$\sum_{i=1}^{n} \left\{ -2\, x_i' \Omega^{-1} y_i + 2 \hat{\mu}_{RE}\, x_i' \Omega^{-1} \iota_T + 2\, x_i' \Omega^{-1} x_i \hat{\beta}_{RE} \right\} = 0$$

$$\sum_{i=1}^{n} \left\{ -x_i' \Omega^{-1} y_i + \hat{\mu}_{RE}\, x_i' \Omega^{-1} \iota_T + x_i' \Omega^{-1} x_i \hat{\beta}_{RE} \right\} = 0$$

$$\sum_{i=1}^{n} x_i' \Omega^{-1} \left\{ y_i - \iota_T \left( \bar{y} - \bar{x}' \hat{\beta}_{RE} \right) - x_i \hat{\beta}_{RE} \right\} = 0$$

$$\sum_{i=1}^{n} x_i' \Omega^{-1} \left\{ \left( y_i - \iota_T \bar{y} \right) - \left( x_i - \iota_T \bar{x}' \right) \hat{\beta}_{RE} \right\} = 0$$

$$\sum_{i=1}^{n} x_i' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \hat{\beta}_{RE} = \sum_{i=1}^{n} x_i' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right)$$

Replacing $x_i'$ by $\left( x_i - \iota_T \bar{x}' \right)'$ leaves both sums unchanged, because $\iota_T' \Omega^{-1}$ is proportional to $\iota_T'$ and the deviations $\bar{x}_{i\cdot} - \bar{x}$ and $\bar{y}_{i\cdot} - \bar{y}$ sum to zero over $i$. Thus

$$\sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \hat{\beta}_{RE} = \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right)$$

$$\Rightarrow \hat{\beta}_{RE} = \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right) \tag{6}$$

The first term in equation (6) can be expressed as

$$\sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) = \sum_{i=1}^{n} \left[ \Omega^{-1/2} \left( x_i - \iota_T \bar{x}' \right) \right]' \left[ \Omega^{-1/2} \left( x_i - \iota_T \bar{x}' \right) \right] \tag{7}$$

However,

$$\Omega^{-1/2} \left( x_i - \iota_T \bar{x}' \right) = \frac{1}{\sigma_\varepsilon} \left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi^{1/2}\, \frac{1}{T}\iota_T \iota_T' \right] \left( x_i - \iota_T \bar{x}' \right) = \frac{1}{\sigma_\varepsilon} \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi^{1/2}\, \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\} \tag{8}$$

where $\bar{x}_{i\cdot} = \frac{1}{T}\sum_{t=1}^{T} x_{it}$ denotes the individual (time) mean of the regressors.

Given the result in (8), (7) can be expressed as

$$\sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right)
= \frac{1}{\sigma_\varepsilon^2} \sum_{i=1}^{n} \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi^{1/2} \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}' \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi^{1/2} \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}$$
$$= \frac{1}{\sigma_\varepsilon^2} \sum_{i=1}^{n} \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi\, T \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}$$
$$= \frac{1}{\sigma_\varepsilon^2} \sum_{i=1}^{n} \left\{ \sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' + \psi\, T \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}$$
$$= \frac{1}{\sigma_\varepsilon^2} \left\{ \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' + \psi\, T \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\} \tag{9}$$

By the same argument, the second term on the right-hand side of (6) can be written as:

$$\sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right) = \frac{1}{\sigma_\varepsilon^2} \left\{ \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( y_{it} - \bar{y}_{i\cdot} \right) + \psi\, T \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{y}_{i\cdot} - \bar{y} \right) \right\} \tag{10}$$

Using (9) and (10), the random effects estimator can be written as

$$\hat{\beta}_{RE} = \left\{ \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' + \psi\, T \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}^{-1} \left\{ \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( y_{it} - \bar{y}_{i\cdot} \right) + \psi\, T \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{y}_{i\cdot} - \bar{y} \right) \right\}$$
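A minimal computational sketch of this formula (illustrative only: the balanced panel is simulated, and $\psi$ is treated as known, whereas in practice it would be replaced by a feasible estimate):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, K = 50, 5, 2
sigma2_eps, sigma2_alpha = 1.0, 0.5           # treated as known here (assumption)
psi = sigma2_eps / (sigma2_eps + T * sigma2_alpha)

beta = np.array([1.0, -2.0])
X = rng.normal(size=(n, T, K))
alpha = np.sqrt(sigma2_alpha) * rng.normal(size=(n, 1))
y = X @ beta + alpha + np.sqrt(sigma2_eps) * rng.normal(size=(n, T))

x_bar_i, y_bar_i = X.mean(axis=1), y.mean(axis=1)      # individual means
x_bar, y_bar = X.mean(axis=(0, 1)), y.mean()           # overall means

# Within (deviation-from-individual-mean) moment matrices
Xw = X - x_bar_i[:, None, :]
yw = y - y_bar_i[:, None]
S_xx_w = np.einsum('itk,itl->kl', Xw, Xw)
S_xy_w = np.einsum('itk,it->k', Xw, yw)

# Between (individual-mean deviation) moment matrices
Xb = x_bar_i - x_bar
yb = y_bar_i - y_bar
S_xx_b = Xb.T @ Xb
S_xy_b = Xb.T @ yb

beta_re = np.linalg.solve(S_xx_w + psi * T * S_xx_b,
                          S_xy_w + psi * T * S_xy_b)
print(beta_re)        # should be close to beta for moderate n
```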

Given the random effects estimator in (6), the covariance matrix of the random effects estimator can be derived as follows:

$$\hat{\beta}_{RE} = \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right)$$
$$= \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} y_i$$
$$= \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( \mu \iota_T + x_i \beta + u_i \right)$$
$$= \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i \beta + u_i \right)$$
$$= \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left[ \left( x_i - \iota_T \bar{x}' \right) \beta + u_i \right]$$
$$= \beta + \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right]^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} u_i$$

$$\Rightarrow E\left( \hat{\beta}_{RE} \right) = \beta \quad \text{as } u_i \text{ is independent of all } x_{it}$$

Therefore, the covariance matrix of the random effects estimator is given by

$$\operatorname{var}\left( \hat{\beta}_{RE} \right) = E\left[ \left( \hat{\beta}_{RE} - E \hat{\beta}_{RE} \right) \left( \hat{\beta}_{RE} - E \hat{\beta}_{RE} \right)' \right]
= E\left\{ \left( A^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} u_i \right) \left( A^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} u_i \right)' \right\}$$

where $A = \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right)$.

Thus, the covariance matrix of the random effects estimator reduces to

$$\operatorname{var}\left( \hat{\beta}_{RE} \right) = A^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} E\left( u_i u_i' \right) \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) A^{-1}$$
$$= A^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1}\, \Omega\, \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) A^{-1}$$
$$= A^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) A^{-1} = A^{-1} A A^{-1} = A^{-1} \tag{11}$$

Using (8), the covariance matrix of the random effects estimator in (11) can be written as

$$\operatorname{var}\left( \hat{\beta}_{RE} \right) = A^{-1} = \left\{ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right\}^{-1}$$
$$= \left\{ \sum_{i=1}^{n} \left[ \Omega^{-1/2} \left( x_i - \iota_T \bar{x}' \right) \right]' \left[ \Omega^{-1/2} \left( x_i - \iota_T \bar{x}' \right) \right] \right\}^{-1}$$
$$= \left\{ \sum_{i=1}^{n} \frac{1}{\sigma_\varepsilon^2} \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi^{1/2} \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}' \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi^{1/2} \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\} \right\}^{-1}$$
$$= \sigma_\varepsilon^2 \left\{ \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' + \psi\, T \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x} \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\}^{-1}$$
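Continuing the earlier numerical sketch (same simulated panel and moment matrices, with $\sigma_\varepsilon^2$ and $\psi$ again taken as known for illustration), the final expression corresponds to:

```python
# Variance of beta_RE from the closed-form expression above,
# reusing S_xx_w, S_xx_b, psi, T and sigma2_eps from the previous sketch.
var_beta_re = sigma2_eps * np.linalg.inv(S_xx_w + psi * T * S_xx_b)
print(np.sqrt(np.diag(var_beta_re)))    # standard errors of beta_RE
```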

Covariance between $\hat{\beta}_{FE}$ and $\hat{\beta}_{RE}$

The fixed effects estimator is given by

$$\hat{\beta}_{FE} = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( y_{it} - \bar{y}_{i\cdot} \right)$$

where $y_{it} - \bar{y}_{i\cdot} = \left( x_{it} - \bar{x}_{i\cdot} \right)' \beta + \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right)$. Hence

$$\hat{\beta}_{FE} = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left[ \left( x_{it} - \bar{x}_{i\cdot} \right)' \beta + \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right) \right]$$
$$= \beta + \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right)$$

$$\Rightarrow E\left( \hat{\beta}_{FE} \right) = \beta$$

$$\hat{\beta}_{FE} - E\left( \hat{\beta}_{FE} \right) = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right)$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \varepsilon_{it}$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \varepsilon_i \tag{1}$$
i=1 t=1 i=1

The random effects estimator is given by

$$\hat{\beta}_{RE} = \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( y_i - \iota_T \bar{y} \right)$$
$$= \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} y_i$$
$$= \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( \mu \iota_T + x_i \beta + u_i \right)$$

where $u_i = \alpha_i \iota_T + \varepsilon_i$. Hence

$$\hat{\beta}_{RE} = \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i \beta + u_i \right)$$
$$= \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left[ \left( x_i - \iota_T \bar{x}' \right) \beta + u_i \right]$$
$$= \beta + \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} u_i$$

$$\Rightarrow E\left( \hat{\beta}_{RE} \right) = \beta$$

$$\hat{\beta}_{RE} - E\left( \hat{\beta}_{RE} \right) = \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} u_i \tag{2}$$

From the expressions in (1) and (2), it follows that

$$\operatorname{cov}\left( \hat{\beta}_{FE}, \hat{\beta}_{RE} \right) = E\left[ \left( \hat{\beta}_{FE} - E \hat{\beta}_{FE} \right) \left( \hat{\beta}_{RE} - E \hat{\beta}_{RE} \right)' \right]$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n}\sum_{j=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' E\left( \varepsilon_i u_j' \right) \Omega^{-1} \left( x_j - \iota_T \bar{x}' \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$

but

$$E\left( \varepsilon_i u_j' \right) = \begin{cases} \sigma_\varepsilon^2 I_T & \text{for } i = j \\ 0 & \text{otherwise} \end{cases}$$

so

$$\operatorname{cov}\left( \hat{\beta}_{FE}, \hat{\beta}_{RE} \right) = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \sigma_\varepsilon^2\, \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T' \right] \left( x_i - \iota_T \bar{x}' \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$

where $\sigma_\varepsilon^2\, \Omega^{-1} = \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T'$. Therefore

$$\operatorname{cov}\left( \hat{\beta}_{FE}, \hat{\beta}_{RE} \right) = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left\{ \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) + \psi\, \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' \right\} \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$

as

$$\sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \iota_T \left( \bar{x}_{i\cdot} - \bar{x} \right)' = \sum_{i=1}^{n} \left( \sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \right) \left( \bar{x}_{i\cdot} - \bar{x} \right)' = 0$$

Hence

$$\operatorname{cov}\left( \hat{\beta}_{FE}, \hat{\beta}_{RE} \right) = \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$
$$= \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right)^{-1} \left( \sum_{i=1}^{n}\sum_{t=1}^{T} \left( x_{it} - \bar{x}_{i\cdot} \right) \left( x_{it} - \bar{x}_{i\cdot} \right)' \right) \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1}$$
$$= \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}' \right)' \Omega^{-1} \left( x_i - \iota_T \bar{x}' \right) \right)^{-1} = \operatorname{var}\left( \hat{\beta}_{RE} \right)$$
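This covariance result is what makes the comparison of the two estimators tractable (it is the ingredient used in Hausman-type contrasts, stated here for context): since $\operatorname{cov}(\hat{\beta}_{FE}, \hat{\beta}_{RE}) = \operatorname{var}(\hat{\beta}_{RE})$, it immediately implies

$$\operatorname{var}\left( \hat{\beta}_{FE} - \hat{\beta}_{RE} \right) = \operatorname{var}\left( \hat{\beta}_{FE} \right) + \operatorname{var}\left( \hat{\beta}_{RE} \right) - 2 \operatorname{cov}\left( \hat{\beta}_{FE}, \hat{\beta}_{RE} \right) = \operatorname{var}\left( \hat{\beta}_{FE} \right) - \operatorname{var}\left( \hat{\beta}_{RE} \right)$$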

Within Group Instrumental Variables Estimator

Write the equation as

$$y_{it} = x_{1,it}' \beta_1 + x_{2,it}' \beta_2 + z_{1,i}' \gamma_1 + z_{2,i}' \gamma_2 + \alpha_i + \varepsilon_{it} \tag{1}$$

$$\operatorname{cov}\left( x_{1,it}, \varepsilon_{it} \right) = 0, \qquad \operatorname{cov}\left( x_{2,it}, \varepsilon_{it} \right) \neq 0$$

The within-group transformation of the model in equation (1) can be carried out as follows. Averaging (1) over time gives

$$\bar{y}_{i\cdot} = \bar{x}_{1,i\cdot}' \beta_1 + \bar{x}_{2,i\cdot}' \beta_2 + z_{1,i}' \gamma_1 + z_{2,i}' \gamma_2 + \alpha_i + \bar{\varepsilon}_{i\cdot} \tag{2}$$

Subtracting (2) from (1) results in

$$y_{it} - \bar{y}_{i\cdot} = \left( x_{1,it} - \bar{x}_{1,i\cdot} \right)' \beta_1 + \left( x_{2,it} - \bar{x}_{2,i\cdot} \right)' \beta_2 + \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right) \tag{3}$$

The transformed model in equation (3) can be written as

$$y_{it} - \bar{y}_{i\cdot} = \left( x_{it} - \bar{x}_{i\cdot} \right)' \beta + \left( \varepsilon_{it} - \bar{\varepsilon}_{i\cdot} \right) \tag{4}$$

where
$$x_{it} - \bar{x}_{i\cdot} = \begin{pmatrix} x_{1,it} - \bar{x}_{1,i\cdot} \\ x_{2,it} - \bar{x}_{2,i\cdot} \end{pmatrix} \quad \text{and} \quad \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}$$

Instruments: let $q_{2,it}$ be an instrument for the endogenous regressors $x_{2,it}$; it should have at least as many elements as $x_{2,it}$. The exogenous regressors $x_{1,it}$ act as their own instruments.

Let the vector of instrumental variables, denoted by $q_{it} - \bar{q}_{i\cdot}$, be given by

$$q_{it} - \bar{q}_{i\cdot} = \begin{pmatrix} x_{1,it} - \bar{x}_{1,i\cdot} \\ q_{2,it} - \bar{q}_{2,i\cdot} \end{pmatrix}$$

Choose $\beta$ in such a way that the sample moments

$$\frac{1}{nT} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( q_{it} - \bar{q}_{i\cdot} \right) \left[ \left( y_{it} - \bar{y}_{i\cdot} \right) - \left( x_{it} - \bar{x}_{i\cdot} \right)' \beta \right] = W_{qy} - W_{qx} \beta$$

are as close as possible to zero.

This is done by minimizing the following quadratic form

$$Q(\beta) = \left[ W_{qy} - W_{qx} \beta \right]' W_{qq}^{-1} \left[ W_{qy} - W_{qx} \beta \right], \qquad \text{where} \quad W_{qq} = \frac{1}{nT} \sum_{i=1}^{n}\sum_{t=1}^{T} \left( q_{it} - \bar{q}_{i\cdot} \right) \left( q_{it} - \bar{q}_{i\cdot} \right)'$$

The matrix $W_{qq}$ is a weighting matrix that tells us how much weight to attach to which sample moments.

Minimizing the above quadratic form with respect to $\beta$ results in the WG-IV estimator given by

$$\hat{\beta}_{W\text{-}IV} = \left( W_{xq} W_{qq}^{-1} W_{qx} \right)^{-1} W_{xq} W_{qq}^{-1} W_{qy}$$
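A minimal numerical sketch of this estimator on simulated data (purely illustrative; the data-generating process, instrument strength and variable names are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 200, 4

# One exogenous regressor x1, one endogenous regressor x2 with instrument q2
alpha = rng.normal(size=(n, 1))
q2 = rng.normal(size=(n, T))
x1 = rng.normal(size=(n, T))
eps = rng.normal(size=(n, T))
x2 = 0.8 * q2 + 0.5 * eps + rng.normal(size=(n, T))   # endogenous: correlated with eps
beta1, beta2 = 1.0, -1.0
y = beta1 * x1 + beta2 * x2 + alpha + eps

def within(a):
    """Deviation from individual (time) means."""
    return a - a.mean(axis=1, keepdims=True)

# Stack the within-transformed regressors and instruments as (n*T, 2) arrays
Xw = np.column_stack([within(x1).ravel(), within(x2).ravel()])
Qw = np.column_stack([within(x1).ravel(), within(q2).ravel()])
yw = within(y).ravel()

W_qx = Qw.T @ Xw / (n * T)
W_qy = Qw.T @ yw / (n * T)
W_qq = Qw.T @ Qw / (n * T)
W_xq = W_qx.T

beta_wgiv = np.linalg.solve(W_xq @ np.linalg.solve(W_qq, W_qx),
                            W_xq @ np.linalg.solve(W_qq, W_qy))
print(beta_wgiv)   # should be close to (beta1, beta2)
```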

Mundlak (1978) Approach

Mundlak's (1978) approach suggests the specification

$$E\left[ \alpha_i \mid X_i \right] = \bar{x}_{i\cdot}' \lambda$$

Substituting this in the random effects model, we obtain

$$y_{it} = x_{it}' \beta + \alpha_i + \varepsilon_{it} = x_{it}' \beta + \bar{x}_{i\cdot}' \lambda + u_i + \varepsilon_{it}$$

where $u_i = \alpha_i - E\left[ \alpha_i \mid X_i \right]$.

Estimating this equation by GLS gives the same within-group (WG) estimator for $\beta$; that is, GLS estimation of the equation augmented with $\bar{x}_{i\cdot}$,

$$y_{it} = x_{it}' \beta + \bar{x}_{i\cdot}' \lambda + u_i + \varepsilon_{it}$$

gives rise to $\hat{\beta}_{GLS} = \hat{\beta}_{w}$ and $\hat{\lambda} = \hat{\beta}_{b} - \hat{\beta}_{w}$, where $\hat{\beta}_{w}$ and $\hat{\beta}_{b}$ denote the within and between estimators, respectively.
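Before turning to the proof, a small simulation sketch of the claim (illustrative only; for simplicity the GLS step is replaced here by pooled OLS on the augmented equation, which also delivers the within estimator for $\beta$ because the group means $\bar{x}_{i\cdot}$ are included as regressors):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 5

x = rng.normal(size=(n, T)) + rng.normal(size=(n, 1))   # regressor with individual variation
alpha = 0.7 * x.mean(axis=1, keepdims=True) + rng.normal(size=(n, 1))  # effect correlated with x
y = 1.5 * x + alpha + rng.normal(size=(n, T))

x_bar = np.repeat(x.mean(axis=1, keepdims=True), T, axis=1)  # group means, repeated over t

# Pooled regression of y on (1, x, x_bar): the Mundlak-augmented equation
Z = np.column_stack([np.ones(n * T), x.ravel(), x_bar.ravel()])
coef = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0]
beta_hat, lam_hat = coef[1], coef[2]

# Within (fixed effects) and between estimators for comparison
xw, yw = x - x.mean(axis=1, keepdims=True), y - y.mean(axis=1, keepdims=True)
beta_w = (xw * yw).sum() / (xw * xw).sum()
xb = x.mean(axis=1) - x.mean()
yb = y.mean(axis=1) - y.mean()
beta_b = (xb * yb).sum() / (xb * xb).sum()

assert np.isclose(beta_hat, beta_w)            # coefficient on x equals the within estimator
assert np.isclose(lam_hat, beta_b - beta_w)    # coefficient on x_bar equals beta_b - beta_w
```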

Proof.

To derive the GLS estimator, note that for individual $i$ all $T$ observations can be stacked as:

$$\begin{bmatrix} y_{i1} \\ \vdots \\ y_{iT} \end{bmatrix} = \begin{bmatrix} x_{i1}' \\ \vdots \\ x_{iT}' \end{bmatrix} \beta + \begin{bmatrix} \bar{x}_{i\cdot}' \\ \vdots \\ \bar{x}_{i\cdot}' \end{bmatrix} \lambda + \begin{bmatrix} u_i \\ \vdots \\ u_i \end{bmatrix} + \begin{bmatrix} \varepsilon_{i1} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}$$

The above model can compactly be written as:

$$y_i = x_i \beta + \iota_T \bar{x}_{i\cdot}' \lambda + u_i \iota_T + \varepsilon_i \tag{1}$$

where
$$y_i = \begin{bmatrix} y_{i1} \\ \vdots \\ y_{iT} \end{bmatrix}, \quad x_i = \begin{bmatrix} x_{i1}' \\ \vdots \\ x_{iT}' \end{bmatrix}, \quad \iota_T = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}, \quad \varepsilon_i = \begin{bmatrix} \varepsilon_{i1} \\ \vdots \\ \varepsilon_{iT} \end{bmatrix}$$

The covariance matrix of the composite random vector $u_i \iota_T + \varepsilon_i$ is given by

$$\operatorname{var}\left( u_i \iota_T + \varepsilon_i \right) = \sigma_u^2\, \iota_T \iota_T' + \sigma_\varepsilon^2\, I_T = \Omega$$

where $I_T$ is the $T$-dimensional identity matrix.

If we denote the composite random vector $u_i \iota_T + \varepsilon_i$ by $v_i$, the GLS estimator of the parameters of the model in equation (1) can be derived by minimizing the following sum of squared errors:

$$S_G = \sum_{i=1}^{n} v_i' \Omega^{-1} v_i = \sum_{i=1}^{n} \left( y_i - x_i \beta - \iota_T \bar{x}_{i\cdot}' \lambda \right)' \Omega^{-1} \left( y_i - x_i \beta - \iota_T \bar{x}_{i\cdot}' \lambda \right)$$
$$= \sum_{i=1}^{n} \left\{ y_i' \Omega^{-1} y_i - 2 \beta' x_i' \Omega^{-1} y_i - 2 \lambda' \bar{x}_{i\cdot} \iota_T' \Omega^{-1} y_i + \beta' x_i' \Omega^{-1} x_i \beta + \beta' x_i' \Omega^{-1} \iota_T \bar{x}_{i\cdot}' \lambda \right.$$
$$\left. \qquad\qquad + \lambda' \bar{x}_{i\cdot} \iota_T' \Omega^{-1} x_i \beta + \lambda' \bar{x}_{i\cdot} \iota_T' \Omega^{-1} \iota_T \bar{x}_{i\cdot}' \lambda \right\} \tag{2}$$

Minimizing the sum of squared errors in equation (2) with respect to $\lambda$ and $\beta$ gives rise to

$$\left.\frac{\partial S_G}{\partial \lambda}\right|_{\hat{\lambda}, \hat{\beta}_{GLS}} = \sum_{i=1}^{n} \left\{ -2 \bar{x}_{i\cdot} \iota_T' \Omega^{-1} y_i + 2 \bar{x}_{i\cdot} \iota_T' \Omega^{-1} x_i \hat{\beta}_{GLS} + 2 \bar{x}_{i\cdot} \iota_T' \Omega^{-1} \iota_T \bar{x}_{i\cdot}' \hat{\lambda} \right\} = 0 \tag{3.1}$$

$$\left.\frac{\partial S_G}{\partial \beta}\right|_{\hat{\lambda}, \hat{\beta}_{GLS}} = \sum_{i=1}^{n} \left\{ -2 x_i' \Omega^{-1} y_i + 2 x_i' \Omega^{-1} x_i \hat{\beta}_{GLS} + 2 x_i' \Omega^{-1} \iota_T \bar{x}_{i\cdot}' \hat{\lambda} \right\} = 0 \tag{3.2}$$

Given the fact that

$$\Omega^{-1} = \frac{1}{\sigma_\varepsilon^2}\left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T' \right], \qquad \text{where} \quad \psi = \frac{\sigma_\varepsilon^2}{\sigma_\varepsilon^2 + T \sigma_u^2},$$

we have

$$\iota_T' \Omega^{-1} = \frac{1}{\sigma_\varepsilon^2}\, \iota_T' \left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T' \right] = \frac{\psi}{\sigma_\varepsilon^2}\, \iota_T' \tag{3.3}$$

$$\iota_T' \Omega^{-1} x_i = \frac{\psi}{\sigma_\varepsilon^2}\, \iota_T' x_i = \frac{\psi}{\sigma_\varepsilon^2}\, T \bar{x}_{i\cdot}' \tag{3.4}$$

$$\iota_T' \Omega^{-1} y_i = \frac{\psi}{\sigma_\varepsilon^2}\, \iota_T' y_i = \frac{\psi}{\sigma_\varepsilon^2}\, T \bar{y}_{i\cdot} \tag{3.5}$$

Using (3.3)-(3.5), equation (3.1) can be expressed as

$$\sum_{i=1}^{n} \left\{ -2 \bar{x}_{i\cdot}\, \frac{\psi}{\sigma_\varepsilon^2} T \bar{y}_{i\cdot} + 2 \bar{x}_{i\cdot}\, \frac{\psi}{\sigma_\varepsilon^2} T \bar{x}_{i\cdot}' \hat{\beta}_{GLS} + 2 \bar{x}_{i\cdot}\, \frac{\psi}{\sigma_\varepsilon^2} T \bar{x}_{i\cdot}' \hat{\lambda} \right\} = 0$$

$$\left( \sum_{i=1}^{n} \bar{x}_{i\cdot} \bar{x}_{i\cdot}' \right) \hat{\lambda} = \sum_{i=1}^{n} \bar{x}_{i\cdot} \bar{y}_{i\cdot} - \left( \sum_{i=1}^{n} \bar{x}_{i\cdot} \bar{x}_{i\cdot}' \right) \hat{\beta}_{GLS}$$

$$\Rightarrow \hat{\lambda} = \hat{\beta}_{b} - \hat{\beta}_{GLS} \tag{3.6}$$

where $\hat{\beta}_{b} = \left( \sum_{i=1}^{n} \bar{x}_{i\cdot} \bar{x}_{i\cdot}' \right)^{-1} \sum_{i=1}^{n} \bar{x}_{i\cdot} \bar{y}_{i\cdot}$ is the between estimator.

Using the results in (3.3)-(3.6), equation (3.2) can be written as

$$\sum_{i=1}^{n} \left\{ -2 x_i' \Omega^{-1} y_i + 2 x_i' \Omega^{-1} x_i \hat{\beta}_{GLS} + 2 x_i' \Omega^{-1} \iota_T \bar{x}_{i\cdot}' \left( \hat{\beta}_{b} - \hat{\beta}_{GLS} \right) \right\} = 0$$

$$\sum_{i=1}^{n} \left\{ -x_i' \Omega^{-1} \left( y_i - \iota_T \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) + x_i' \Omega^{-1} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \hat{\beta}_{GLS} \right\} = 0$$

$$\left[ \sum_{i=1}^{n} x_i' \Omega^{-1} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right] \hat{\beta}_{GLS} = \left[ \sum_{i=1}^{n} x_i' \Omega^{-1} \left( y_i - \iota_T \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) \right] \tag{3.7}$$

The expression in (3.7) can be simplified as follows:

$$\Omega^{-1} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) = \frac{1}{\sigma_\varepsilon^2}\left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T' \right] \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) = \frac{1}{\sigma_\varepsilon^2} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \tag{3.8}$$

$$\Omega^{-1} \left( y_i - \iota_T \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) = \frac{1}{\sigma_\varepsilon^2}\left[ \left( I_T - \frac{1}{T}\iota_T \iota_T' \right) + \psi \frac{1}{T}\iota_T \iota_T' \right] \left( y_i - \iota_T \bar{x}_{i\cdot}' \hat{\beta}_{b} \right)
= \frac{1}{\sigma_\varepsilon^2} \left\{ \left( y_i - \iota_T \bar{y}_{i\cdot} \right) + \psi\, \iota_T \left( \bar{y}_{i\cdot} - \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) \right\} \tag{3.9}$$

Using expressions (3.8) and (3.9), equation (3.7) can be written as

$$\left[ \sum_{i=1}^{n} x_i' \frac{1}{\sigma_\varepsilon^2} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right] \hat{\beta}_{GLS} = \left[ \sum_{i=1}^{n} x_i' \frac{1}{\sigma_\varepsilon^2} \left\{ \left( y_i - \iota_T \bar{y}_{i\cdot} \right) + \psi\, \iota_T \left( \bar{y}_{i\cdot} - \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) \right\} \right]$$

$$\left[ \sum_{i=1}^{n} x_i' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right] \hat{\beta}_{GLS} = \left[ \sum_{i=1}^{n} x_i' \left\{ \left( y_i - \iota_T \bar{y}_{i\cdot} \right) + \psi\, \iota_T \left( \bar{y}_{i\cdot} - \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) \right\} \right]$$

$$\left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right] \hat{\beta}_{GLS} = \left[ \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( y_i - \iota_T \bar{y}_{i\cdot} \right) \right] + \left[ \psi\, T \sum_{i=1}^{n} \bar{x}_{i\cdot} \left( \bar{y}_{i\cdot} - \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) \right]$$
$$= \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( y_i - \iota_T \bar{y}_{i\cdot} \right)$$

since $\sum_{i=1}^{n} \bar{x}_{i\cdot} \left( \bar{y}_{i\cdot} - \bar{x}_{i\cdot}' \hat{\beta}_{b} \right) = 0$ by the definition of $\hat{\beta}_{b}$.

$$\Rightarrow \hat{\beta}_{GLS} = \left( \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( x_i - \iota_T \bar{x}_{i\cdot}' \right) \right)^{-1} \sum_{i=1}^{n} \left( x_i - \iota_T \bar{x}_{i\cdot}' \right)' \left( y_i - \iota_T \bar{y}_{i\cdot} \right) = \hat{\beta}_{w} \tag{3.10}$$

Expressions (3.6) and (3.10) imply that $\hat{\beta}_{GLS} = \hat{\beta}_{w}$ and $\hat{\lambda} = \hat{\beta}_{b} - \hat{\beta}_{w}$, and this completes the proof.
