Instrumental Variables in RCT
Jörn-Steffen Pischke
LSE
October 2007
1 Instrumental Variables
1.1 Basics
A good baseline for thinking about the estimation of causal effects is often
the randomized experiment, like a clinical trial. However, for many treatments,
even in an experiment it may not be possible to assign the treatment
on a random basis. For example, in a drug trial it may only be possible to
offer a drug but not to enforce that the patient actually takes it (a
problem called non-compliance). When we study job training, we may only
be able to randomly assign the offer of training. Individuals will then decide
whether to participate in training or not. Even those not assigned training
in the program under study may obtain the training elsewhere (a problem
called non-embargo: training cannot be effectively withheld from the control
group). Hence, treatment itself is still a behavioral variable, and only the
intention to treat has been randomly assigned. The instrumental variables
(IV) estimator is a useful tool to evaluate treatment in such a setup.
In order to see how this works, we will start by reviewing the assumptions
necessary for the IV estimator to be valid in this case. We need some
additional notation for this. Let $z \in \{0, 1\}$ be the intention-to-treat variable,
and let $D \in \{0, 1\}$ be the treatment again. We can now talk about counterfactual
treatments: $D(z)$ is the treatment for the (counterfactual) value of $z$, i.e.
$D(0)$ is the treatment if there was no intention to treat, and $D(1)$ is the
treatment if there was an intention to treat. In the job training example,
$D(0)$ denotes the training decision of an individual not assigned to the training
program, and $D(1)$ denotes the training decision of someone assigned to
the program. As before, $Y$ is the outcome. The counterfactual outcome is
now $Y(z, D)$ because it may depend on both the treatment choice and the
treatment assignment. So there are four counterfactuals for $Y$.
We can now state three assumptions:
Effects 1. and 2. are reduced form effects: 1. is the first stage relationship,
and 2. is the reduced form for the outcome. 3. is the treatment effect
of ultimate interest. Without Assumption 2, it is not clear how to define
this effect, because the causal effect of D on Y would depend on z.
$$\hat{\beta}_{IV} = \frac{\mathrm{cov}(y_i, z_i)}{\mathrm{cov}(D_i, z_i)}.$$
Since $z_i$ is a dummy variable, the covariances can be written in terms of differences in conditional means:
$$\begin{aligned}
\hat{\beta}_{IV} &= \frac{\mathrm{cov}(y_i, z_i)}{\mathrm{cov}(D_i, z_i)} \\
&= \frac{\{E(y_i \mid z_i = 1) - E(y_i \mid z_i = 0)\}\, P(z_i = 1) P(z_i = 0)}{\{E(D_i \mid z_i = 1) - E(D_i \mid z_i = 0)\}\, P(z_i = 1) P(z_i = 0)} \\
&= \frac{E(y_i \mid z_i = 1) - E(y_i \mid z_i = 0)}{E(D_i \mid z_i = 1) - E(D_i \mid z_i = 0)} \\
&= \frac{E(y_i \mid z_i = 1) - E(y_i \mid z_i = 0)}{P(D_i = 1 \mid z_i = 1) - P(D_i = 1 \mid z_i = 0)}.
\end{aligned}$$
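To make the derivation concrete, here is a minimal Python sketch on simulated data (all parameters are made up for illustration, not taken from any study) confirming that the difference-in-means Wald ratio and the covariance ratio coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical trial: randomized offer z, imperfect compliance D,
# and an unobserved confounder ("ability")
z = rng.integers(0, 2, n)
ability = rng.normal(size=n)
d = (0.4 * z + 0.3 * ability + rng.normal(size=n) > 0.5).astype(float)
y = 1.0 + 2.0 * d + ability + rng.normal(size=n)  # true effect of D is 2

# Wald estimator: reduced-form difference over first-stage difference
wald = ((y[z == 1].mean() - y[z == 0].mean())
        / (d[z == 1].mean() - d[z == 0].mean()))

# Covariance-ratio form of the IV estimator
iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]

print(wald, iv)  # numerically identical
```

Because $z$ is binary, the two expressions are identical by construction in any sample; with a constant treatment effect both converge to the true effect of 2.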
the reduced form effect (intention to treat estimates, or ITT) and IV estimates
(treatment on the treated, or TOT). Since about half the households
with vouchers actually moved, TOT estimates are about twice the size of
the ITT estimates. Notice that the ITT estimates are of independent interest:
they estimate directly the actual effect of the policy. Nevertheless,
the TOT estimates are often of more interest in terms of their economic
interpretation.
If an instrument is truly as good as randomly assigned, then IV estimation
of the binary model
$$y_i = \alpha + \beta D_i + \varepsilon_i$$
will be sufficient. Often, this assumption is not going to be satisfied. However,
an instrument may be as good as randomly assigned conditional on
some covariates, so that we can estimate instead
$$y_i = \alpha + \beta D_i + \gamma' X_i + \varepsilon_i.$$
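As a sketch of how such a model can be estimated, the two stages can be carried out explicitly with least squares. The data and coefficients below are hypothetical, and the second-stage standard errors from a manual regression like this would be wrong, so in practice one uses a canned 2SLS routine; only the coefficients are valid here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical data: the offer z is as good as random only conditional
# on the covariate x (here z depends on x by construction)
x = rng.normal(size=n)
confounder = rng.normal(size=n)
z = (rng.normal(size=n) + 0.8 * x > 0).astype(float)
d = (0.5 * z + 0.5 * confounder + rng.normal(size=n) > 0).astype(float)
y = 1.0 + 2.0 * d + 0.7 * x + confounder + rng.normal(size=n)

def ols(X, target):
    return np.linalg.lstsq(X, target, rcond=None)[0]

const = np.ones(n)
Z1 = np.column_stack([const, z, x])      # instrument plus covariates
d_hat = Z1 @ ols(Z1, d)                  # first stage fitted values
X2 = np.column_stack([const, d_hat, x])  # second stage regressors
beta = ols(X2, y)
print(beta[1])  # 2SLS estimate of the effect of D (true value: 2)
```

Note that the covariates appear in both stages; omitting $X_i$ from the first stage would not give the 2SLS estimator.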
[Schematic: Bob, born in January, turns 6 and enters school at age 7; Ron, born in December, enters school at age 6; with a compulsory schooling age of 16, the January-born child reaches the dropout age with less completed schooling.]
The idea of the Angrist and Krueger (1991) paper is to use season of
birth as an instrument for schooling. So in this application, we have
y = log earnings
D = years of schooling
z = born in the 1st quarter.
enrollment at age 16 for those in states with a compulsory schooling age of 16
with states with a higher compulsory schooling age. The enrollment effect is
clearly visible for the age-16 states, but not for the states with later dropout
ages. The enrollment effect is also concentrated on earlier cohorts, when age-16
enrollment rates were lower. This is suggestive that compulsory schooling
laws are indeed at the root of the quarter of birth–schooling relationship.
Figure 5 in the paper shows a similar picture to figures 1 and 2, but
for earnings. There is again a saw-tooth pattern in quarter of birth, with
earnings being higher for those born later in the year. This is consistent with
the pattern in schooling and a positive return to schooling. One problem
apparent in figure 5 is that age is clearly related to earnings beyond the
quarter of birth effect, particularly for those in later cohorts (i.e. for those
who are younger at the time of the survey in 1980). It is therefore important
to control for this age-earnings profile in the estimation, or to restrict the
estimates to the cohorts on the relatively flat portion of the age-earnings
profile, in order to avoid confounding the quarter of birth effect with earnings
growth with age. Hence, the exclusion restriction only holds conditional on
other covariates in this case, namely adequate controls for the age-earnings
profile.
Table 3 presents simple Wald estimates of the returns to schooling
for the cohorts in the relatively flat part of their life-cycle profile.
Those born in the 1st quarter have about 0.1 fewer years of schooling compared
to those born in the remaining quarters of the year. They also have
about 1 percent lower earnings. This translates into a Wald estimate of
about 0.07 for the 1920s cohort, and 0.10 for the 1930s cohort, compared to
OLS estimates of around 0.07 to 0.08.
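The Wald arithmetic behind these numbers, using the round figures quoted above:

```python
# Back-of-the-envelope Wald calculation with the round numbers from the text
reduced_form = -0.01  # 1st-quarter born earn about 1 percent less (log points)
first_stage = -0.10   # and obtain about 0.1 fewer years of schooling
wald = reduced_form / first_stage
print(wald)  # approximately 0.1, in the range of the reported 0.07-0.10
```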
Tables 4 to 6 present instrumental variables estimates using a full set of
quarter of birth times year of birth interactions as instruments. These
specifications also control for additional covariates. It can be seen by comparing
columns (2) and (4) in the tables that controlling for age is quite important.
The IV returns to schooling are either similar to the OLS estimates or
above. Finally, table 7 exploits the fact that different states have different
compulsory schooling laws and uses 50 state of birth times quarter of birth
interactions as instruments in addition to the year of birth times quarter of
birth interactions, now also controlling for state of birth. The resulting IV
estimates tend to be a bit higher than those in table 5, and the estimates
are much more precise.
1.2 The Bias of 2SLS
It is a fortunate fact that the OLS estimator is not just a consistent estimator
of the population regression function but also unbiased in small samples.
With many other estimators we don't necessarily have such luck, and
2SLS is no exception: 2SLS is biased in small samples even when the conditions
for the consistency of the 2SLS estimator are met. For many years,
applied researchers lived with this fact without the loss of too much
sleep. A series of papers in the early 1990s changed this (Nelson and
Startz, 1990a,b, and Bound, Jaeger, and Baker, 1995). These papers
pointed out that point estimates and inference may be seriously misleading
in cases relevant for empirical practice. A huge literature has since clarified
the circumstances when we should worry about this bias, and what can be
done about it.
The basic results from this literature are that the 2SLS estimator is biased
when the instruments are "weak," meaning the correlation with the endogenous
regressors is low, and when there are many overidentifying restrictions.
In these cases, the 2SLS estimator will be biased towards the OLS estimate.
The intuition for these results is easy to see. Suppose you start with a single
valid instrument (i.e. one which is as good as randomly assigned and which
obeys the exclusion restriction). Now you add more and more instruments
to the (overidentified) IV model. As the number of instruments goes to $n$,
the sample size, the first stage $R^2$ goes to 1, and hence $\hat{\beta}_{IV} \rightarrow \hat{\beta}_{OLS}$. In
any small sample, even a valid instrument will pick up some small amount
of the endogenous variation in $x$. Adding more and more instruments, the
amount of random, and hence endogenous, variation in $x$ will become more
and more important. So IV will be more and more biased towards OLS.
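The limiting case is easy to demonstrate: give the first stage as many instruments as observations, so that $P_Z = I$, the first-stage fit is perfect, and 2SLS reproduces OLS exactly. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# x is endogenous: it shares a common component with the error in y
common = rng.normal(size=n)
x = rng.normal(size=n) + common
y = 1.0 * x + common + rng.normal(size=n)  # true beta = 1

def two_sls(y, x, Z):
    # 2SLS with one regressor: beta = (x'Pz y)/(x'Pz x) via the first-stage fit
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return (x_hat @ y) / (x_hat @ x)

beta_ols = (x @ y) / (x @ x)

# n random "instruments": the first-stage R^2 is 1, so Pz x = x
# and 2SLS collapses to OLS
Z_full = rng.normal(size=(n, n))
beta_many = two_sls(y, x, Z_full)
print(beta_ols, beta_many)  # identical up to rounding
```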
It is also easy to see this formally. For simplicity, consider the case of a
single endogenous regressor $x$, and no other covariates:
$$y = \beta x + \varepsilon,$$
with the first stage
$$x = Z\pi + \eta.$$
To get the small sample bias, we would like to take expectations of this
expression. But the expectation of a ratio is not equal to the ratio of the
two expectations. A useful approximation to the small sample bias of the
estimator is obtained using a group asymptotic argument as suggested by
Bekker (1994). In this asymptotic sequence, we let the number of instruments
go to infinity as we let the number of observations go to infinity, but keep
the number of observations per instrument constant. This captures the fact
that we might have many instruments given the number of observations, but
it still allows us to use asymptotic theory (see also Angrist and Krueger,
1995). Group asymptotics essentially lets us take the expectations inside
the ratio, i.e.
$$\mathrm{plim}\, \hat{\beta}_{2SLS} - \beta = E\left(\pi' Z' Z \pi + \eta' P_Z \eta\right)^{-1} E\left(\pi' Z' \varepsilon + \eta' P_Z \varepsilon\right).$$
Finally, look at the F-statistic. The numerator is divided by the degrees
of freedom of the test, which is $p$, the number of excluded instruments.
Consider adding completely useless instruments to your 2SLS model.
$E(\pi' Z' Z \pi)$ will not change, since the new $\pi$s are all zero. Hence, $\sigma_\eta^2$ will
also stay the same. So all that changes is that $p$ goes up. The F-statistic
becomes smaller as a result. And hence we learn that adding useless or weak
instruments will make the bias worse.
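This mechanical decline of the F-statistic can be checked directly. In the sketch below, one informative instrument (with an arbitrary illustrative coefficient of 0.3) is held fixed while useless instruments are added:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# One informative instrument; extra instruments are pure noise
z_good = rng.normal(size=n)
x = 0.3 * z_good + rng.normal(size=n)

def excluded_F(x, Z):
    # F-statistic for joint significance of the excluded instruments in Z
    X = np.column_stack([np.ones(n), Z])
    coef = np.linalg.lstsq(X, x, rcond=None)[0]
    rss = ((x - X @ coef) ** 2).sum()
    rss_r = ((x - x.mean()) ** 2).sum()  # restricted model: constant only
    p = Z.shape[1]
    return ((rss_r - rss) / p) / (rss / (n - p - 1))

F_by_p = {}
for extra in [0, 9, 49]:
    cols = [z_good] + [rng.normal(size=n) for _ in range(extra)]
    F_by_p[1 + extra] = excluded_F(x, np.column_stack(cols))
print(F_by_p)  # F falls steadily as useless instruments are added
```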
Where does the bias of 2SLS come from? In order to get some intuition
on this, look at (1). If we knew the first stage coefficients $\pi$, our predicted
values from the first stage would be $\hat{x}_{true} = Z\pi$. In practice, we do not
know $\pi$, but have to estimate it (implicitly) in the first stage. Hence, we
use $\hat{x} = P_Z x = Z\pi + P_Z \eta$, which differs from $\hat{x}_{true}$ by the term $P_Z \eta$. So
if we knew $\pi$, the numerator in (1) would disappear, and there would be
no bias. The bias arises from the fact that we estimate $\pi$ from the same
sample as the second stage. The $P_Z \eta$ term appears twice in (1): namely in
$E(\eta' P_Z \eta) = p\sigma_\eta^2$ and in $E(\eta' P_Z \varepsilon) = p\sigma_{\eta\varepsilon}$. These two expectations give rise
to the ratio $\sigma_{\eta\varepsilon}/\sigma_\eta^2$ in the formula for the bias (2). So some of the bias in
OLS seeps into our 2SLS estimates through the sampling variability in $\hat{\pi}$.
It is important to note that (2) only applies to overidentified models.
Even if $\pi = 0$, i.e. the model is truly unidentified, the expression (2) exists,
and is equal to $\sigma_{\eta\varepsilon}/\sigma_\eta^2$. For the just identified IV estimator this is not
the case: the just identified IV estimator has no moments. With weak
instruments, just identified IV is median unbiased, even in small samples, so the weak
instrument problem is mainly one of overidentified models. Just identified
IV will tend to be very unstable when the instrument is weak, because you
are almost dividing by zero (the weak covariance of the regressor and the
instrument). As a result, the sampling distribution tends to blow up as the
instrument becomes weaker.
Fortunately, there are other estimators for overidentified models which are
(approximately) unbiased in small samples, even with weak instruments.
The LIML estimator has this property, and seems to deliver desirable results
in most applications. Other estimators which have been suggested, like split
sample IV (Angrist and Krueger, 1995) or JIVE (Phillips and Hale, 19??;
Angrist, Imbens, and Krueger, 1999), are also unbiased but do not perform
any better than LIML. In order to illustrate some of the results from the
discussion above, the following figures show some Monte Carlo results for
various estimators. The simulated data are drawn from the following model
$$y_i = \beta x_i + \varepsilon_i$$
$$x_i = \sum_{j=1}^{p} \pi_j z_{ij} + \eta_i$$
$$\begin{pmatrix} \varepsilon_i \\ \eta_i \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 0.8 \\ 0.8 & 1 \end{pmatrix} \right),$$
and the $z_{ij}$ are independent, normally distributed random variables with
mean zero and unit variance, and there are 1000 observations in each sample.
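A sketch of this Monte Carlo for OLS and 2SLS is below. The first-stage coefficient on the single informative instrument ($\pi_1 = 0.1$) is an assumption chosen so that the design matches the magnitudes discussed in the text; the remaining coefficients are zero:

```python
import numpy as np

rng = np.random.default_rng(3)

def two_sls(y, x, Z):
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return (x_hat @ y) / (x_hat @ x)

def simulate(p, reps=1000, n=1000, beta=1.0, pi1=0.1):
    """One weak instrument (coefficient pi1); the other p-1 are garbage."""
    ols_est, tsls_est = [], []
    for _ in range(reps):
        errs = rng.multivariate_normal([0.0, 0.0],
                                       [[1.0, 0.8], [0.8, 1.0]], size=n)
        eps, eta = errs[:, 0], errs[:, 1]
        Z = rng.normal(size=(n, p))
        x = pi1 * Z[:, 0] + eta      # only the first instrument matters
        y = beta * x + eps
        ols_est.append((x @ y) / (x @ x))
        tsls_est.append(two_sls(y, x, Z))
    return {"OLS": float(np.median(ols_est)),
            "2SLS": float(np.median(tsls_est))}

print(simulate(p=2))   # 2SLS close to the true beta = 1
print(simulate(p=20))  # 2SLS pulled toward the biased OLS value
```

LIML is omitted from the sketch to keep it short; the 2SLS medians illustrate the bias pattern described below.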
Figure 2 shows the empirical distribution functions of four estimators:
OLS, just identified IV, i.e. $p = 1$ (denoted IV), 2SLS with two instruments,
i.e. $p = 2$ (denoted 2SLS), and LIML with two instruments. It is easy to
see that the OLS estimator is biased and centered around a value of $\beta$ of
about 1.79. IV is centered around the true value of $\beta$ of 1. 2SLS with one
weak and one uninformative instrument has some bias towards OLS (the
median of 2SLS in the sampling experiment is 1.07). The distribution function
for LIML is basically indistinguishable from that for just identified IV.
Even though the LIML results are for the case where there is an additional
uninformative instrument, LIML effectively discounts this information and
delivers the same results as just identified IV in this case.
Figure 3 shows the case where we set $p = 20$, i.e. in addition to the one
informative but weak instrument we add 19 instruments with no correlation
with the endogenous regressor. We again show OLS, 2SLS, and LIML results.
It is easy to see that the bias in 2SLS is much worse now (the median is
1.53), and that the sampling distribution of the 2SLS estimator is quite
tight. LIML continues to perform well and is centered around the true
value of $\beta$ of 1, with a slightly larger dispersion than with 2 instruments.
Finally, Figure 4 shows the case where the model is truly unidentified.
We again choose 20 instruments but we set $\pi_j = 0$, $j = 1, \ldots, 20$. Not
surprisingly, all estimators are now centered around the same value as OLS.
However, we see that the sampling distribution of 2SLS is quite tight while
the sampling distribution of LIML becomes very wide. Hence, the LIML
standard errors are going to reflect the fact that we have no identifying
information.
In terms of prescriptions for the applied researcher, the literature on
the small sample bias of 2SLS and the above discussion yield the following
recommendations:
1. Report the first stage of your model. The first check is whether the
instruments predict the endogenous regressor in the hypothesized way
(and sometimes the first stage results are of independent substantive
interest). Report the F-statistic on the excluded instruments. Stock,
Wright, and Yogo (2002), in a nice survey of these issues, suggest that
F-statistics above 10 or 20 are necessary to rule out weak instruments.
The p-value on the F-statistic is rather meaningless in this context.
2. Look at the reduced form. There are no small sample issues with the reduced
form: it's OLS, so it is unbiased. If there is no effect in the reduced
form, or if the estimates are excessively variable, there is going to be no
effect in the IV either. And if there is, you likely have an overidentified
model with a weak instruments problem.
4. In the just identified case, the IV, LIML, and other related estimators
are all the same. So there are no alternatives to check. However,
remember that just identified IV is median unbiased, even in small
samples.
5. Even if your point estimates seem reliable (sensible reduced form, similar
results from different instrument sets, from 2SLS and LIML) but
your instruments are on the weak side, your standard errors might be
biased (downwards, of course). Standard errors based on the Bekker
(1994) approximation tend to be more reliable (not implemented anywhere
to date).
Bound, Jaeger, and Baker (1995) show that these small sample issues
are a real concern in the Angrist and Krueger case, despite the fact that the
regressions are being run with 300,000 or more observations. "Small sample"
is always a relative concept. Bound et al. show that the IV estimates in the
Angrist and Krueger specification move closer to the OLS estimates as more
control variables are included, and hence as the first stage F-statistic shrinks
(tables 1 and 2). They then go on and completely make up quarters of birth,
using a random number generator. Their table 3 shows that the results from
this exercise are not very different from the IV estimates reported before.
Maybe the most worrying fact is that the standard errors from the random
instruments are not much higher than those in the real IV regressions.
In some applications there is more than one endogenous variable, and
hence a set of instruments has to predict these multiple endogenous variables.
The weak instruments problem can no longer be assessed simply by
looking at the F-statistic for each first stage equation alone. For example,
consider the case of two endogenous variables and two instruments. Suppose
instrument 1 is strong and predicts both endogenous variables well.
This will yield high F-statistics in each of the two first stage equations.
Nevertheless, the model is underidentified, because $\hat{x}_1$ and $\hat{x}_2$ will be closely
correlated now. With two instruments it is necessary for one to predict the
first endogenous variable, and for the second to predict the second. In order to assess
whether the instruments are weak or strong, it is necessary to look at a
matrix version of the F-statistic, which assesses all the first stage equations
at once. This is called the Cragg-Donald or minimum eigenvalue statistic.
References can be found in Stock, Wright, and Yogo (2002); the statistic is
implemented in Stata 10.
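The problem is easy to reproduce in a sketch with hypothetical data: one strong instrument drives both endogenous regressors, each first-stage F-statistic is large, and yet the two first-stage fitted values are nearly collinear:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Hypothetical design: z1 strongly predicts BOTH endogenous regressors,
# z2 predicts neither
z1, z2 = rng.normal(size=n), rng.normal(size=n)
x1 = 1.0 * z1 + rng.normal(size=n)
x2 = 0.8 * z1 + rng.normal(size=n)

X = np.column_stack([np.ones(n), z1, z2])

def first_stage(x):
    # Fitted values and F-statistic on the two excluded instruments
    coef = np.linalg.lstsq(X, x, rcond=None)[0]
    fitted = X @ coef
    rss = ((x - fitted) ** 2).sum()
    rss_r = ((x - x.mean()) ** 2).sum()   # restricted model: constant only
    F = ((rss_r - rss) / 2) / (rss / (n - 3))
    return fitted, F

fit1, F1 = first_stage(x1)
fit2, F2 = first_stage(x2)
corr = np.corrcoef(fit1, fit2)[0, 1]
print(F1, F2, corr)  # both F-statistics are huge, yet corr is near 1
```

Both equation-by-equation F-statistics pass any weak-instrument threshold, while the near-perfect correlation of the fitted values shows the system is effectively underidentified — exactly what the minimum eigenvalue statistic is designed to detect.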
1.3 Appendix
Start from the equation
$$\mathrm{plim}\, \hat{\beta}_{2SLS} - \beta = E\left(\pi' Z' Z \pi + \eta' P_Z \eta\right)^{-1} E\left(\pi' Z' \varepsilon + \eta' P_Z \varepsilon\right) = \left[E\left(\pi' Z' Z \pi\right) + E\left(\eta' P_Z \eta\right)\right]^{-1} E\left(\eta' P_Z \varepsilon\right),$$
since $E(\pi' Z' \varepsilon) = 0$. Using the trace trick,
$$\begin{aligned}
E\left(\eta' P_Z \eta\right) &= E\left(\mathrm{tr}\left(\eta' P_Z \eta\right)\right) \\
&= E\left(\mathrm{tr}\left(P_Z \eta \eta'\right)\right) \\
&= \mathrm{tr}\left(P_Z E\left(\eta \eta'\right)\right) \\
&= \mathrm{tr}\left(P_Z \sigma_\eta^2 I\right) \\
&= \sigma_\eta^2\, \mathrm{tr}\left(P_Z\right) \\
&= \sigma_\eta^2\, p,
\end{aligned}$$
where $p$ is the number of instruments, and we have assumed that the $\eta$s are
homoskedastic. Applying the trace trick to $\eta' P_Z \varepsilon$ again, we can write
$$\begin{aligned}
\mathrm{plim}\, \hat{\beta}_{2SLS} - \beta &= \left[E\left(\pi' Z' Z \pi\right) + \sigma_\eta^2\, p\right]^{-1} E\left(\mathrm{tr}\left(\eta' P_Z \varepsilon\right)\right) \\
&= \left[E\left(\pi' Z' Z \pi\right) + \sigma_\eta^2\, p\right]^{-1} E\left(\mathrm{tr}\left(P_Z \varepsilon \eta'\right)\right) \\
&= \sigma_{\eta\varepsilon}\, p \left[E\left(\pi' Z' Z \pi\right) + \sigma_\eta^2\, p\right]^{-1} \\
&= \frac{\sigma_{\eta\varepsilon}}{\sigma_\eta^2} \cdot \frac{1}{E\left(\pi' Z' Z \pi\right)/\left(p\, \sigma_\eta^2\right) + 1}.
\end{aligned}$$
[Figure 2: Distribution of the OLS, IV, 2SLS, and LIML estimators]
[Figure 3: Distribution of the OLS, 2SLS, and LIML estimators (p = 20)]
[Figure 4: Distribution of the OLS, 2SLS, and LIML estimators (unidentified model)]
[Angrist and Krueger 1991: Figures 1 and 2]
[Angrist and Krueger 1991: Table 1]
[Angrist and Krueger 1991: Table 2]
[Monte Carlo panels: $y = 1 + xb + e$, $b = 1$; 0, 1, and 2 garbage instruments with 10000 replications each; 8 garbage instruments with 5000 replications; 16 and 32 garbage instruments with 2500 replications each]