
Empirical Methods in Applied Economics

Jörn-Steffen Pischke
LSE

October 2007

1 Instrumental Variables
1.1 Basics
A good baseline for thinking about the estimation of causal effects is often
the randomized experiment, like a clinical trial. However, for many treatments,
even in an experiment it may not be possible to assign the treatment
on a random basis. For example, in a drug trial it may only be possible to
offer a drug but not to enforce that the patient actually takes the drug (a
problem called non-compliance). When we study job training, we may only
be able to randomly assign the offer of training. Individuals will then decide
whether to participate in training or not. Even those not assigned training
in the program under study may obtain the training elsewhere (a problem
called non-embargo: training cannot be effectively withheld from the control
group). Hence, treatment itself is still a behavioral variable, and only the
intention to treat has been randomly assigned. The instrumental variables
(IV) estimator is a useful tool to evaluate treatment in such a setup.
In order to see how this works, we will start by reviewing the assumptions
necessary for the IV estimator to be valid in this case. We need some
additional notation for this. Let z ∈ {0, 1} be the intention to treat variable.
D ∈ {0, 1} is the treatment again. We can now talk about counterfactual
treatments: D(z) is the treatment for the (counterfactual) value of z. I.e.
D(0) is the treatment if there was no intention to treat, and D(1) is the
treatment if there was an intention to treat. In the job training example,
D(0) denotes the training decision of an individual not assigned to the training
program, and D(1) denotes the training decision of someone assigned to
the program. As before, Y is the outcome. The counterfactual outcome is
now Y(z, D) because it may depend on both the treatment choice and the
treatment assignment. So there are four counterfactuals for Y.

We can now state three assumptions:

Assumption 1 z is as good as randomly assigned

Assumption 2 Y(z, D) = Y(z′, D) for all z, z′, D


This assumption says that the counterfactual outcome only depends
on D, and once you know D you do not need to know z. This is the
exclusion restriction, and it implies that we can write Y(z, D) = Y(D).

There are three causal effects we can define now:

1. The causal effect of z on D is D(1) − D(0).

2. The causal effect of z on Y is Y(1, D(1)) − Y(0, D(0)). Given Assumption 2, we can write this as Y(D(1)) − Y(D(0)).

3. The causal effect of D on Y is Y(1) − Y(0).

Effects 1. and 2. are reduced form effects. 1. is the first stage relationship,
and 2. is the reduced form for the outcome. 3. is the treatment effect
of ultimate interest. Without Assumption 2, it is not clear how to define
this effect, because the causal effect of D on Y would depend on z.

Assumption 3 E(D(1) − D(0)) ≠ 0.

This assumption says that the variable z has some power to influence
the treatment. Without it, z would be of no use to help us learn
something about D. It is the existence of a significant first stage.

Assumption 1 is sufficient to estimate the reduced form causal effects
1. and 2. The exclusion restriction and the existence of a first stage are
only necessary in order to give these reduced form effects an instrumental
variables interpretation. Because of this, it is often useful to see estimates
of the reduced forms as well as the IV results in an application.
In order to get some further insights into the workings of the IV estimator,
start with the case where the instrument is binary and there are no
other covariates. In this case, the IV estimator takes a particularly simple
form. Notice that the IV estimator is given by

    β̂_IV = cov(y_i, z_i) / cov(D_i, z_i).

Since z_i is a dummy variable, the first covariance can be written as

    cov(y_i, z_i) = E[(y_i − ȳ)(z_i − z̄)]
                  = E(y_i z_i) − ȳ z̄
                  = E(y_i | z_i = 1) P(z_i = 1) − ȳ P(z_i = 1)
                  = {E(y_i | z_i = 1) − ȳ} P(z_i = 1)
                  = {E(y_i | z_i = 1) − [E(y_i | z_i = 1) P(z_i = 1) + E(y_i | z_i = 0) P(z_i = 0)]} P(z_i = 1)
                  = {E(y_i | z_i = 1) P(z_i = 0) − E(y_i | z_i = 0) P(z_i = 0)} P(z_i = 1)
                  = {E(y_i | z_i = 1) − E(y_i | z_i = 0)} P(z_i = 1) P(z_i = 0).

A similar derivation for the denominator leads to

    β̂_IV = cov(y_i, z_i) / cov(D_i, z_i)
          = [{E(y_i | z_i = 1) − E(y_i | z_i = 0)} P(z_i = 1) P(z_i = 0)] / [{E(D_i | z_i = 1) − E(D_i | z_i = 0)} P(z_i = 1) P(z_i = 0)]
          = [E(y_i | z_i = 1) − E(y_i | z_i = 0)] / [E(D_i | z_i = 1) − E(D_i | z_i = 0)]
          = [E(y_i | z_i = 1) − E(y_i | z_i = 0)] / [P(D_i = 1 | z_i = 1) − P(D_i = 1 | z_i = 0)].

This formulation of the IV estimator is often referred to as the Wald
estimator. It says that the estimate is given by the difference in outcomes
for the groups intended and not intended for treatment divided by the difference
in actual treatment for these groups. It is also easy to see that the
numerator is the reduced form estimate, also frequently called the intention
to treat estimate. The denominator is the first stage estimate. Hence, the
Wald estimator is also the indirect least squares estimator, dividing the reduced
form estimate by the first stage estimate. This has to be true in the
just identified case.
The IV methodology is often useful in actual randomized experiments
when the treatment itself cannot be randomly assigned because of the non-compliance
and the lack of embargo problems. For example, in the Moving
to Opportunity experiment (Kling et al., 2004), poor households were
given housing vouchers to move out of high poverty neighborhoods. While
the voucher receipt was randomly assigned, whether the household actually
ended up moving is not under the control of the experimenter: some households
assigned a voucher do not move, but some not assigned a voucher
move on their own. Kling et al. (2004) therefore report both estimates of
the reduced form effect (intention to treat estimates or ITT) and IV estimates
(treatment on the treated or TOT). Since about half the households
with vouchers actually moved, TOT estimates are about twice the size of
the ITT estimates. Notice that the ITT estimates are of independent interest.
They estimate directly the actual effect of the policy. Nevertheless,
the TOT estimates are often of more interest in terms of their economic
interpretation.
If an instrument is truly as good as randomly assigned, then IV estimation
of the binary model

    y_i = α + β D_i + ε_i

will be sufficient. Often, this assumption is not going to be satisfied. However,
an instrument may be as good as randomly assigned conditional on
some covariates, so that we can estimate instead

    y_i = α + β D_i + X_i′γ + ε_i,

instrumenting D_i by z_i. The role of the covariates here is to ensure the
validity of the IV assumptions. Of course, covariates orthogonal to z_i may
also be included simply to reduce the variance of the estimate.
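This estimation can be sketched by running the two stages by hand. The data-generating process below is hypothetical, built so that z is valid only conditional on the covariate; note that this manual version recovers the 2SLS point estimate but not its correct standard errors, which proper 2SLS software adjusts for:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical DGP: assignment z depends on a covariate that also
# affects the outcome, so z is valid only conditional on x_cov.
x_cov = rng.normal(size=n)
z = (rng.normal(size=n) + x_cov > 0).astype(float)
u = rng.normal(size=n)                        # unobserved confounder
D = ((z + 0.5 * u + rng.normal(size=n)) > 0.5).astype(float)
y = 1.5 * D + x_cov + u + rng.normal(size=n)  # true effect of D = 1.5

# First stage: regress D on (constant, z, x_cov) and form fitted values.
X1 = np.column_stack([np.ones(n), z, x_cov])
D_hat = X1 @ np.linalg.lstsq(X1, D, rcond=None)[0]

# Second stage: regress y on (constant, D_hat, x_cov).
X2 = np.column_stack([np.ones(n), D_hat, x_cov])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print(beta[1])  # coefficient on D, close to the true effect of 1.5
```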
An interesting and controversial example of an IV study is the paper by
Angrist and Krueger (1991). They try to estimate the returns to education.
The concern is that there may be an omitted variable ("ability") which
confounds the OLS estimates. The Angrist and Krueger insight is that US
compulsory schooling laws can be used to construct an instrument for the
number of years of schooling. US laws specify an age an individual has to
reach before being able to drop out of school. This feature, together with
the fact that there is only one date of school entry per year, means that
season of birth affects the length of schooling for dropouts.
Suppose, for example, that there are two individuals, Bob and Ron. Bob
is born in January, and Ron is born in December of the previous year. So
they are almost equal in age. School entry rules typically specify that you
are allowed to enter school in summer if you turned 6 during the previous
calendar year. This means that Ron, who turns 6 in December, is allowed to
enter school at age 6. Bob will not satisfy this rule and therefore has to wait
an additional year and enter when he is 7. At the time of school entry, Bob
is 11 months older than Ron. Both can drop out when they reach age 16.
Of course, at that age, Bob will have completed 11 months less schooling
than Ron, who entered earlier. The situation is illustrated in the following
figure.

[Figure 1: Schooling for Bob and Ron. Timeline: Bob (born in January) turns 6, enters school at 7, and can drop out at age 16; Ron (born in December) turns 6, enters school at 6, and can drop out at age 16.]

The idea of the Angrist and Krueger (1991) paper is to use season of
birth as an instrument for schooling. So in this application, we have

y = log earnings
D = years of schooling
z = born in the 1st quarter.

The first thing to do is to check the three IV assumptions. Assumption
1 (random assignment) is probably close to satisfied. Births are almost
uniformly spaced over the year. There is relatively little parental choice
over season of birth, although there is clearly some. There is some evidence
of small differences in the socioeconomic status of parents by season of birth
of the child. Assumption 2, the exclusion restriction, says that season of birth
does not affect earnings directly, only through its effect on schooling. If you
are born early in the year you enter school later (like Bob in the example).
Hence, age at school entry should not be correlated with earnings. There is
some psychology evidence that those who start later do better in school. If
this translates into unobserved factors (i.e. anything other than how long
you stay in school) which are correlated with earnings, then this will lead
to a downward bias in the estimates. Assumption 3, the existence of a first
stage, is an empirical matter, which we can check in the data.
Figures 1 and 2 in Angrist and Krueger plot years of completed education
against cohort of birth (in quarters) for those born between 1930 and 1950.
The figures reveal that average education tends to be higher for those born
later in the year (quarters 3 and 4) and relatively low for those born in
the first quarter, as we would expect. The pattern is more pronounced
for the early cohorts, and starts to vanish for the cohorts born in the late
1940s, when average education is higher. This is consistent with the idea
that compulsory schooling laws are at the root of this pattern, since fewer
and fewer students drop out at the earliest possible date over time.
Table 1 presents numerical estimates of this relationship. It reveals that
those born in the first quarter have about 0.1 years less schooling than those
born in the fourth quarter, with a slightly weaker relationship for the cohorts
born in the 1940s. There is a small effect on high school graduation rates
but basically no effect on college graduation or post-graduate education.[1]
This pattern suggests that schooling is affected basically only for those with
very little schooling. Table 2 compares the quarter of birth effects on school
enrollment at age 16 for those in states with a compulsory schooling age of 16
with states with a higher compulsory schooling age. The enrollment effect is
clearly visible for the age 16 states, but not for the states with later dropout
ages. The enrollment effect is also concentrated on earlier cohorts, when age
16 enrollment rates were lower. This is suggestive that compulsory schooling
laws are indeed at the root of the quarter of birth-schooling relationship.

[1] They also check for the effect on years of education for those with at least a high
school degree. This is not really valid, because the conditioning is on an outcome variable
(graduating from high school), which they showed to be affected by the instrument.
Figure 5 in the paper shows a similar picture to figures 1 and 2, but
for earnings. There is again a saw-tooth pattern in quarter of birth, with
earnings being higher for those born later in the year. This is consistent with
the pattern in schooling and a positive return to schooling. One problem
apparent in figure 5 is that age is clearly related to earnings beyond the
quarter of birth effect, particularly for those in later cohorts (i.e. for those
who are younger at the time of the survey in 1980). It is therefore important
to control for this age-earnings profile in the estimation, or to restrict the
estimates to the cohorts on the relatively flat portion of the age-earnings
profile, in order to avoid confounding the quarter of birth effect with earnings
growth with age. Hence, the exclusion restriction only holds conditional on
other covariates in this case, namely adequate controls for the age-earnings
profile.
Table 3 presents simple Wald estimates of the returns to schooling
for the cohorts in the relatively flat part of their life-cycle profile.
Those born in the 1st quarter have about 0.1 fewer years of schooling compared
to those born in the remaining quarters of the year. They also have
about 1 percent lower earnings. Dividing the reduced form by the first stage
(roughly 0.01/0.1) translates into a Wald estimate of
about 0.07 for the 1920s cohort, and 0.10 for the 1930s cohort, compared to
OLS estimates of around 0.07 to 0.08.
Tables 4 to 6 present instrumental variables estimates using a full set of
quarter of birth times year of birth interactions as instruments. These specifications
also control for additional covariates. It can be seen by comparing
columns (2) and (4) in the tables that controlling for age is quite important.
The IV returns to schooling are either similar to the OLS estimates or
above. Finally, table 7 exploits the fact that different states have different
compulsory schooling laws and uses 50 state of birth times quarter of birth
interactions as instruments in addition to the year of birth times quarter of
birth interactions, also controlling for state of birth now. The resulting IV
estimates tend to be a bit higher than those in table 5, and the estimates
are much more precise.

1.2 The Bias of 2SLS

It is a fortunate fact that the OLS estimator is not just a consistent estimator
of the population regression function but also unbiased in small samples.
With many other estimators we don't necessarily have such luck, and
2SLS is no exception: 2SLS is biased in small samples even when the conditions
for the consistency of the 2SLS estimator are met. For many years,
applied researchers have lived with this fact without the loss of too much
sleep. A series of papers in the early 1990s changed this (Nelson and
Startz, 1990a,b; Bound, Jaeger, and Baker, 1995). These papers
pointed out that point estimates and inference may be seriously misleading
in cases relevant for empirical practice. A huge literature has since clarified
the circumstances when we should worry about this bias, and what can be
done about it.
The basic results from this literature are that the 2SLS estimator is biased
when the instruments are "weak," meaning the correlation with the endogenous
regressors is low, and when there are many overidentifying restrictions.
In these cases, the 2SLS estimator will be biased towards the OLS estimate.
The intuition for these results is easy to see. Suppose you start with a single
valid instrument (i.e. one which is as good as randomly assigned and which
obeys the exclusion restriction). Now you add more and more instruments
to the (overidentified) IV model. As the number of instruments goes to n,
the sample size, the first stage R² goes to 1, and hence β̂_IV → β̂_OLS. In
any small sample, even a valid instrument will pick up some small amounts
of the endogenous variation in x. Adding more and more instruments, the
amount of random, and hence endogenous, variation in x will become more
and more important. So IV will be more and more biased towards OLS.
It is also easy to see this formally. For simplicity, consider the case of a
single endogenous regressor x, and no other covariates:

    y = βx + ε

and write the first stage as

    x = Zπ + η.

The 2SLS estimator is

    β̂_2SLS = (x′P_Z x)⁻¹ x′P_Z y = β + (x′P_Z x)⁻¹ x′P_Z ε.

Using the fact that P_Z x = Zπ + P_Z η and substituting in the first stage we
get

    β̂_2SLS − β = (π′Z′Zπ + 2π′Z′η + η′P_Z η)⁻¹ (π′Z′ε + η′P_Z ε).
To get the small sample bias, we would like to take expectations of this
expression. But the expectation of a ratio is not equal to the ratio of the
two expectations. A useful approximation to the small sample bias of the
estimator is obtained using a group asymptotic argument as suggested by
Bekker (1994). In this asymptotic sequence, we let the number of instruments
go to infinity as we let the number of observations go to infinity, but keep
the number of observations per instrument constant. This captures the fact
that we might have many instruments given the number of observations, but
it still allows us to use asymptotic theory (see also Angrist and Krueger,
1995). Group asymptotics essentially lets us take the expectations inside
the ratio, i.e.

    plim (β̂_2SLS − β) = E(π′Z′Zπ + η′P_Z η)⁻¹ E(π′Z′ε + η′P_Z ε).

Further notice that E(Z′ε) = 0 so that

    plim (β̂_2SLS − β) = E(π′Z′Zπ + η′P_Z η)⁻¹ E(η′P_Z ε).    (1)

Working out the expectations yields

    plim (β̂_2SLS − β) = (σ_εη / σ²_η) · 1 / [E(π′Z′Zπ)/(σ²_η p) + 1]

where p is the number of instruments (see the appendix for a derivation).
Notice that the term (1/σ²_η) E(π′Z′Zπ)/p is the population F-statistic for
the joint significance of all regressors in the first stage regression, call it F,
so that

    plim (β̂_2SLS − β) = (σ_εη / σ²_η) · 1/(F + 1).    (2)
Various things are immediately obvious from inspection of (2). First, as
the first stage F-statistic gets small, the bias approaches σ_εη / σ²_η. Notice that
the bias of the OLS estimator is σ_εx / σ²_x, which is equal to σ_εη / σ²_η if π = 0,
i.e. if the instruments are completely uncorrelated with the endogenous
regressor. Hence, not surprisingly, unidentified 2SLS will have the same
bias as OLS. So the first thing we see is that weakly identified 2SLS will be
biased towards OLS.

Now turn things around, and consider the case where F gets large, i.e.
the first stage becomes stronger. The 2SLS bias will vanish in that case.
So the bias is related to the strength of the first stage as measured by the
F-statistic on the excluded instruments.

Finally, look at the F-statistic. The numerator is divided by the degrees
of freedom of the test, which is p, the number of excluded instruments.
Consider adding completely useless instruments to your 2SLS model.
E(π′Z′Zπ) will not change, since the new elements of π are all zero. Hence, σ²_η will
also stay the same. So all that changes is that p goes up. The F-statistic
becomes smaller as a result. And hence we learn that adding useless or weak
instruments will make the bias worse.
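This mechanism is easy to demonstrate. In the invented simulation below (one informative instrument with a first-stage coefficient of 0.1 plus nineteen garbage instruments; all parameter values are hypothetical), the F-statistic on the excluded instruments collapses once the garbage instruments are included:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

def first_stage_F(x, Z):
    """F-statistic for the joint significance of the instruments Z in a
    first-stage regression of x on a constant and Z."""
    n, p = Z.shape
    W = np.column_stack([np.ones(n), Z])
    resid = x - W @ np.linalg.lstsq(W, x, rcond=None)[0]
    ssr_u = resid @ resid                  # unrestricted SSR
    ssr_r = ((x - x.mean()) ** 2).sum()    # restricted SSR (constant only)
    return ((ssr_r - ssr_u) / p) / (ssr_u / (n - p - 1))

Z = rng.normal(size=(n, 20))
x = 0.1 * Z[:, 0] + rng.normal(size=n)     # only instrument 1 matters

f_1 = first_stage_F(x, Z[:, :1])   # informative instrument alone
f_20 = first_stage_F(x, Z)         # with 19 garbage instruments added
print(f_1, f_20)                   # f_20 is far smaller than f_1
```

The explained sum of squares barely changes when the garbage instruments are added, but it is now divided by p = 20 instead of p = 1, exactly as the formula for F suggests.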
Where does the bias of 2SLS come from? In order to get some intuition
on this, look at (1). If we knew the first stage coefficients π, our predicted
values from the first stage would be x̂_true = Zπ. In practice, we do not
know π, but have to estimate it (implicitly) in the first stage. Hence, we
use x̂ = P_Z x = Zπ + P_Z η, which differs from x̂_true by the term P_Z η. So
if we knew π, the numerator in (1) would disappear, and there would be
no bias. The bias arises from the fact that we estimate π from the same
sample as the second stage. The P_Z η term appears twice in (1): namely in
E(η′P_Z η) = p σ²_η and in E(η′P_Z ε) = p σ_εη. These two expectations give rise
to the ratio σ_εη / σ²_η in the formula for the bias (2). So some of the bias in
OLS seeps into our 2SLS estimates through the sampling variability in π̂.
It is important to note that (2) only applies to overidentified models.
Even if π = 0, i.e. the model is truly unidentified, the expression (2) exists,
and is equal to σ_εη / σ²_η. For the just identified IV estimator this is not
the case. The just identified IV estimator has no moments. With weak
instruments, IV is median unbiased, even in small samples, so the weak
instrument bias problem is only one of overidentified models. Just identified
IV will tend to be very unstable when the instrument is weak, because you
are almost dividing by zero (the weak covariance of the regressor and the
instrument). As a result, the sampling distribution tends to blow up as the
instrument becomes weaker.
Fortunately, there are other estimators for overidentified models which are
(approximately) unbiased in small samples, even with weak instruments.
The LIML estimator has this property, and seems to deliver desirable results
in most applications. Other estimators which have been suggested, like split
sample IV (Angrist and Krueger, 1995) or JIVE (Phillips and Hale, 19??;
Angrist, Imbens, and Krueger, 1999), are also unbiased but do not perform
any better than LIML. In order to illustrate some of the results from the
discussion above, the following figures show some Monte Carlo results for
various estimators. The simulated data are drawn from the following model

    y_i = β x_i + ε_i

    x_i = Σ_{j=1}^{p} π_j z_ij + η_i

with β = 1, π_1 = 0.1, π_j = 0 for all j > 1,

    (ε_i, η_i)′ ~ N(0, Σ),   Σ = [ 1  0.8 ; 0.8  1 ],

and the zij are independent, normally distributed random variables with
mean zero and unit variance, and there are 1000 observations in each sample.
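The Monte Carlo design above is straightforward to replicate. The sketch below approximates it with 500 replications (the number of replications behind the figures is not stated, so this is an assumption) and computes OLS, just identified IV, and 2SLS with p = 2 via simple covariance formulas; LIML is omitted since it has no comparably short closed form:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, beta = 1000, 500, 1.0

def slope(y, x, Z=None):
    """OLS slope of y on x, or the 2SLS slope using instrument matrix Z."""
    if Z is None:
        return np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    W = np.column_stack([np.ones(len(x)), Z])
    x_hat = W @ np.linalg.lstsq(W, x, rcond=None)[0]  # first-stage fit
    return np.cov(y, x_hat)[0, 1] / np.var(x_hat, ddof=1)

ols, iv, tsls = [], [], []
for _ in range(reps):
    errs = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=n)
    eps, eta = errs[:, 0], errs[:, 1]
    Z = rng.normal(size=(n, 2))
    x = 0.1 * Z[:, 0] + eta           # pi_1 = 0.1, pi_2 = 0
    y = beta * x + eps
    ols.append(slope(y, x))
    iv.append(slope(y, x, Z[:, :1]))  # just identified, p = 1
    tsls.append(slope(y, x, Z))       # overidentified, p = 2

print(np.median(ols), np.median(iv), np.median(tsls))
```

The medians come out roughly as in the text: OLS near 1.79, just identified IV near the true value of 1, and 2SLS in between.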
Figure 2 shows the empirical distribution functions of four estimators:
OLS, just identified IV, i.e. p = 1 (denoted IV), 2SLS with two instruments,
i.e. p = 2 (denoted 2SLS), and LIML with two instruments. It is easy to
see that the OLS estimator is biased and centered around a value of β of
about 1.79. IV is centered around the true value of β of 1. 2SLS with one
weak and one uninformative instrument has some bias towards OLS (the
median of 2SLS in the sampling experiment is 1.07). The distribution function
for LIML is basically indistinguishable from that for just identified IV.
Even though the LIML results are for the case where there is an additional
uninformative instrument, LIML effectively discounts this information and
delivers the same results as just identified IV in this case.
Figure 3 shows the case where we set p = 20, i.e. in addition to the one
informative but weak instrument we add 19 instruments with no correlation
with the endogenous regressor. We again show OLS, 2SLS, and LIML results.
It is easy to see that the bias in 2SLS is much worse now (the median is
1.53), and that the sampling distribution of the 2SLS estimator is quite
tight. LIML continues to perform well and is centered around the true
value of β of 1, with a slightly larger dispersion than with 2 instruments.
Finally, Figure 4 shows the case where the model is truly unidentified.
We again choose 20 instruments but we set π_j = 0, j = 1, ..., 20. Not
surprisingly, all estimators are now centered around the same value as OLS.
However, we see that the sampling distribution of 2SLS is quite tight while
the sampling distribution of LIML becomes very wide. Hence, the LIML
standard errors are going to reflect the fact that we have no identifying
information.
In terms of prescriptions for the applied researcher, the literature on
the small sample bias of 2SLS and the above discussion yield the following
results:

1. Report the first stage of your model. The first check is whether the
instruments predict the endogenous regressor in the hypothesized way
(and sometimes the first stage results are of independent substantive
interest). Report the F-statistic on the excluded instruments. Stock,
Wright, and Yogo (2002), in a nice survey of these issues, suggest that
F-statistics above 10 or 20 are necessary to rule out weak instruments.
The p-value on the F-statistic is rather meaningless in this context.

2. Look at the reduced form. There are no small sample issues with the reduced
form: it's OLS, so it is unbiased. If there is no effect in the reduced
form, or if the estimates are excessively variable, there is going to be no
effect in the IV either. And if there is, you likely have an overidentified
model with a weak instruments problem.

3. In the overidentified case, if your first stage F-statistics suggest that
your instruments may be weak, check the standard 2SLS results with
an alternative estimator like LIML. If the results are different, and/or
if LIML standard errors are much higher than those of 2SLS, this may
indicate weak instruments. Experiment with different subsets of the
instruments. Are the results stable?

4. In the just identified case, the IV, LIML, and other related estimators
are all the same. So there are no alternatives to check. However,
remember that just identified IV is median unbiased, even in small
samples.

5. Even if your point estimates seem reliable (sensible reduced form, similar
results from different instrument sets and from 2SLS and LIML) but
your instruments are on the weak side, your standard errors might be
biased (downwards, of course). Standard errors based on the Bekker
(1994) approximation tend to be more reliable (not implemented anywhere
to date).

Bound, Jaeger, and Baker (1995) show that these small sample issues
are a real concern in the Angrist and Krueger case, despite the fact that the
regressions are being run with 300,000 or more observations. "Small sample"
is always a relative concept. Bound et al. show that the IV estimates in the
Angrist and Krueger specification move closer to the OLS estimates as more
control variables are included, and hence as the first stage F-statistic shrinks
(tables 1 and 2). They then go on and completely make up quarters of birth,
using a random number generator. Their table 3 shows that the results from
this exercise are not very different from the IV estimates reported before.
Maybe the most worrying fact is that the standard errors from the random
instruments are not much higher than those in the real IV regressions.
In some applications there is more than one endogenous variable, and
hence a set of instruments has to predict these multiple endogenous variables.
The weak instruments problem can no longer be assessed simply by
looking at the F-statistic for each first stage equation alone. For example,
consider the case of two endogenous variables and two instruments. Suppose
instrument 1 is strong and predicts both endogenous variables well.
This will yield high F-statistics in each of the two first stage equations.
Nevertheless, the model is underidentified, because x̂_1 and x̂_2 will be closely
correlated now. With two instruments it is necessary for one to predict the
first endogenous variable, and for the second to predict the second. In order to assess
whether the instruments are weak or strong, it is necessary to look at a
matrix version of the F-statistic, which assesses all the first stage equations
at once. This is called the Cragg-Donald or minimum eigenvalue statistic.
References can be found in Stock, Wright, and Yogo (2002); the statistic is
implemented in Stata 10.
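A sketch of the minimum eigenvalue statistic is below, using one common normalization (dividing by the number of instruments, analogous to an F-statistic; exact conventions vary across references). The two-regressor example is invented to mirror the discussion above, and any exogenous controls are assumed to be already partialled out of X and Z:

```python
import numpy as np

def cragg_donald(X, Z):
    """Minimum eigenvalue (Cragg-Donald) statistic for endogenous
    regressors X (n x m) and excluded instruments Z (n x k)."""
    n, k = Z.shape
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    Xp = P @ X
    Sigma = X.T @ (X - Xp) / (n - k)        # first-stage residual covariance
    G = np.linalg.solve(Sigma, X.T @ Xp) / k
    return np.linalg.eigvals(G).real.min()

rng = np.random.default_rng(4)
n = 2000
Z = rng.normal(size=(n, 2))
e = rng.normal(size=(n, 2))

# Instrument 1 drives BOTH regressors: each first stage looks strong,
# but the fitted values are nearly collinear, so the system is weak.
X_weak = np.column_stack([Z[:, 0] + e[:, 0], Z[:, 0] + e[:, 1]])

# Each instrument drives a different regressor: jointly strong.
X_strong = np.column_stack([Z[:, 0] + e[:, 0], Z[:, 1] + e[:, 1]])

print(cragg_donald(X_weak, Z))    # small, despite strong individual F's
print(cragg_donald(X_strong, Z))  # large
```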

1.3 Appendix
Start from equation

    plim (β̂_2SLS − β) = E(π′Z′Zπ + η′P_Z η)⁻¹ E(η′P_Z ε)
                       = [E(π′Z′Zπ) + E(η′P_Z η)]⁻¹ E(η′P_Z ε)

in the text. In the one regressor case η′P_Z η is simply a scalar so that

    E(η′P_Z η) = E[tr(η′P_Z η)]
               = E[tr(P_Z ηη′)]
               = tr(P_Z E(ηη′))
               = tr(P_Z σ²_η I)
               = σ²_η tr(P_Z)
               = σ²_η p

where p is the number of instruments, and we have assumed that the η's are
homoskedastic. Applying the trace trick to η′P_Z ε again we can write

    plim (β̂_2SLS − β) = [E(π′Z′Zπ) + σ²_η p]⁻¹ E[tr(η′P_Z ε)]
                       = [E(π′Z′Zπ) + σ²_η p]⁻¹ E[tr(P_Z εη′)]
                       = σ_εη p [E(π′Z′Zπ) + σ²_η p]⁻¹
                       = (σ_εη / σ²_η) · 1 / [E(π′Z′Zπ)/(σ²_η p) + 1].
[Figure 2: Distribution of the OLS, IV, 2SLS, and LIML estimators. Empirical distribution functions plotted over estimates from 0 to 2.5.]
[Figure 3: OLS, 2SLS, and LIML estimators with 20 instruments. Empirical distribution functions plotted over estimates from 0 to 2.5.]
[Figure 4: OLS, 2SLS, and LIML estimators with 20 uncorrelated instruments. Empirical distribution functions plotted over estimates from 0 to 2.5.]
[Slides reproduced here: Angrist and Krueger 1991, Figures 1, 2, and 5 and Tables 1-7; Bound et al. 1995, Tables 1-3.]
Monte Carlo Design

    y = 1 + x·b + e,  with b = 1.

The instrument vector z includes one instrument with various correlations
with x and k garbage instruments (with no correlation with x).
All experiments use samples with 100 observations and no other regressors.

10000 replications, 0 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.707    6.803   -0.822    0.631    0.919    0.957
10th perc. of b     1.549   -1.938   -0.777   -0.281    0.227    0.509
25th perc. of b     1.623    0.497    0.405    0.512    0.658    0.763
median of b         1.708    1.735    1.117    1.020    0.999    0.997
75th perc. of b     1.791    2.959    1.634    1.400    1.286    1.202
90th perc. of b     1.866    5.597    2.288    1.722    1.516    1.363
median of std err   0.123    2.630    0.935    0.652    0.451    0.320
std dev of b        0.124  371.588  152.595    7.684    2.734    0.366
se of mean of b     0.001    3.716    1.526    0.077    0.027    0.004

10000 replications, 1 garbage instrument

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.707    1.719    1.184    1.057    1.010    0.999
10th perc. of b     1.550    0.101    0.089    0.217    0.402    0.565
25th perc. of b     1.625    1.024    0.758    0.729    0.763    0.810
median of b         1.707    1.722    1.266    1.141    1.069    1.037
75th perc. of b     1.790    2.411    1.710    1.490    1.332    1.237
90th perc. of b     1.865    3.315    2.185    1.811    1.549    1.393
median of std err   0.123    1.232    0.746    0.574    0.424    0.307
std dev of b        0.123    2.938    1.631    1.104    0.685    0.354
se of mean of b     0.001    0.029    0.016    0.011    0.007    0.004

10000 replications, 2 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.707    1.730    1.334    1.197    1.081    1.038
10th perc. of b     1.553    0.561    0.434    0.483    0.528    0.625
25th perc. of b     1.626    1.169    0.936    0.879    0.842    0.856
median of b         1.707    1.712    1.368    1.242    1.129    1.066
75th perc. of b     1.788    2.253    1.757    1.553    1.378    1.258
90th perc. of b     1.864    2.889    2.175    1.868    1.591    1.417
median of std err   0.123    0.895    0.644    0.517    0.398    0.299
std dev of b        0.123    1.475    0.885    0.693    0.485    0.323
se of mean of b     0.001    0.015    0.009    0.007    0.005    0.003

5000 replications, 4 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.707    1.700    1.467    1.336    1.201    1.103
10th perc. of b     1.549    0.874    0.782    0.743    0.712    0.723
25th perc. of b     1.624    1.317    1.144    1.049    0.980    0.928
median of b         1.707    1.709    1.470    1.352    1.228    1.128
75th perc. of b     1.790    2.089    1.788    1.638    1.452    1.308
90th perc. of b     1.864    2.510    2.133    1.897    1.652    1.454
median of std err   0.123    0.628    0.511    0.445    0.362    0.283
std dev of b        0.123    0.718    0.589    0.487    0.393    0.299
se of mean of b     0.002    0.010    0.008    0.007    0.006    0.004

5000 replications, 8 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.706    1.700    1.561    1.458    1.327    1.209
10th perc. of b     1.543    1.116    1.042    0.978    0.908    0.863
25th perc. of b     1.621    1.407    1.291    1.212    1.118    1.040
median of b         1.707    1.699    1.568    1.454    1.346    1.223
75th perc. of b     1.790    1.993    1.822    1.710    1.552    1.391
90th perc. of b     1.868    2.270    2.069    1.943    1.731    1.534
median of std err   0.123    0.444    0.395    0.362    0.311    0.258
std dev of b        0.128    0.472    0.422    0.395    0.336    0.264
se of mean of b     0.002    0.007    0.006    0.006    0.005    0.004

2500 replications, 16 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.705    1.705    1.630    1.556    1.462    1.347
10th perc. of b     1.549    1.314    1.255    1.190    1.129    1.057
25th perc. of b     1.623    1.491    1.426    1.362    1.299    1.199
median of b         1.704    1.715    1.633    1.554    1.469    1.353
75th perc. of b     1.785    1.910    1.838    1.749    1.633    1.502
90th perc. of b     1.860    2.112    2.016    1.926    1.792    1.622
median of std err   0.123    0.310    0.292    0.278    0.253    0.224
std dev of b        0.124    0.317    0.306    0.292    0.264    0.223
se of mean of b     0.002    0.006    0.006    0.006    0.005    0.004

2500 replications, 32 garbage instruments

                     ols     2sls     2sls     2sls     2sls     2sls
corr of z and x       -     0.000    0.025    0.050    0.100    0.200
mean of b           1.701    1.703    1.665    1.630    1.583    1.496
10th perc. of b     1.539    1.422    1.387    1.365    1.316    1.245
25th perc. of b     1.618    1.555    1.518    1.483    1.444    1.370
median of b         1.702    1.705    1.668    1.626    1.585    1.501
75th perc. of b     1.787    1.855    1.808    1.770    1.718    1.629
90th perc. of b     1.859    1.978    1.938    1.900    1.845    1.735
median of std err   0.123    0.217    0.211    0.207    0.198    0.184
std dev of b        0.126    0.220    0.216    0.213    0.208    0.193
se of mean of b     0.003    0.004    0.004    0.004    0.004    0.004
