
Econometrics:

Specimen paper for midterm exam #1


Time limit: 60 minutes.

Part A: Answer all 8 questions (5 marks each).

1. Consider two random variables 𝑋 and 𝑌. Let a and b denote constants.

What is the variance of (𝑎𝑋 + 𝑏𝑌)?

a. a var(X) + b var(Y) + ab √(var(X) var(Y))

b. a² var(X) + b² var(Y) + 2ab √(var(X) var(Y))

c. a var(X) + b var(Y) + ab cov(X, Y)

d. a² var(X) + b² var(Y) + 2ab cov(X, Y)

e. None of the others.

Ans: d
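As a quick check of answer (d), the identity Var(aX + bY) = a² var(X) + b² var(Y) + 2ab cov(X, Y) can be verified exactly on a small joint distribution (the pmf values and constants below are made up for illustration):

```python
# Numerical check of answer (d) on an illustrative discrete joint distribution.
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}  # P(X=x, Y=y)
a, b = 2.0, -3.0

def E(f):
    """Expected value of f(x, y) under the joint pmf."""
    return sum(p * f(x, y) for (x, y), p in pmf.items())

var_X = E(lambda x, y: x**2) - E(lambda x, y: x)**2
var_Y = E(lambda x, y: y**2) - E(lambda x, y: y)**2
cov_XY = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)

# Left side: Var(aX + bY) computed directly; right side: formula (d).
lhs = E(lambda x, y: (a*x + b*y)**2) - E(lambda x, y: a*x + b*y)**2
rhs = a**2 * var_X + b**2 * var_Y + 2 * a * b * cov_XY
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds
```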

2. Consider two discrete random variables 𝑋 and 𝑌. The probability that 𝑌 has a certain value given
that 𝑋 has a particular value is called:

a. marginal probability.
b. joint probability.
c. conditional probability.
d. cumulative probability.

Ans: c

3. Consider the model Qᵈ = f(P, Pˢ, Pᶜ, INC), where Qᵈ is the quantity demanded of a
particular product per month, P is the price of the product, Pˢ is the price of substitutes,
Pᶜ is the price of complements, and INC is monthly income. This equation represents _____.

a. a linear model
b. an economic model
c. an econometric model
d. an interval forecast

Ans: b
4. A data set that has observations on one entity at multiple points in time is classified as _____.

a. time series data
b. cross-section data
c. panel data
d. flow data

Ans: a

5. Of the following steps in conducting empirical economic research, which one should be
performed last?

a. Find appropriate data that can be used for estimation.
b. Build an economic model guided by economic theory.
c. Evaluate and analyze the consequences and implications of the results.
d. Estimate parameters and test hypotheses.

Ans: c

6. If the conditional variance of the random errors is not constant (i.e., var(eᵢ|xᵢ) ≠ σ²),
then we say that the random errors are _____.

a. homoscedastic
b. exogenous
c. heteroskedastic
d. serially correlated

Ans: c

7. Rejecting a true null hypothesis _____.

a. is a Type I error
b. is a Type II error
c. should not happen if a valid statistical test is used
d. depends on the size of the estimation sample

Ans: a
8. In which case would testing the null hypothesis involve a two-tailed statistical test?

a. H1: Incentive pay for teachers does affect student achievement.
b. H1: Higher sales tax rates do not reduce state tax revenues.
c. H1: Extending the duration of unemployment benefits does not increase the length
of joblessness.
d. H1: Smoking does not reduce life expectancy.

Ans: a

----------------------------

Part B: Answer all of questions 9–12.

9. (10 marks) Consider the simple linear regression model. Briefly discuss the mean squared error
(MSE) of an estimator of β₂.

Ans:

The MSE of an estimator β̂₂ is the sum of the estimator's variance and its squared bias: MSE(β̂₂) = E[(β̂₂ − β₂)²] = var(β̂₂) + [E(β̂₂) − β₂]². It is possible that the MSE of a
biased estimator is smaller than the MSE of an unbiased estimator.
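The last point can be sketched with a small simulation (the setup below is illustrative, not from the exam): a shrunken sample mean c·x̄ with c < 1 is biased for μ, but its lower variance can give it a smaller MSE than the unbiased x̄.

```python
# Sketch: a biased estimator (c * xbar, c < 1) can have smaller MSE than the
# unbiased sample mean xbar. Illustrative parameter values.
import random

random.seed(1)
mu, sigma, n, c, reps = 2.0, 3.0, 10, 0.8, 20000

unbiased, shrunk = [], []
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    unbiased.append(xbar)        # unbiased estimator of mu
    shrunk.append(c * xbar)      # biased (shrunken) estimator of mu

def mse(estimates, truth):
    """Monte Carlo MSE = average squared deviation from the true value."""
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)

mse_u, mse_s = mse(unbiased, mu), mse(shrunk, mu)
# Theory: MSE(xbar) = sigma^2/n = 0.9;
#         MSE(c*xbar) = c^2 * sigma^2/n + ((c-1)*mu)^2 = 0.576 + 0.16 = 0.736.
print(mse_s < mse_u)  # True: the biased estimator wins on MSE here
```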
10. (10 marks) Briefly discuss the coefficient of determination, R².

Ans: R² measures the proportion of the variation in y explained by x within the regression
model:

R² = SSR/SST = 1 − SSE/SST,

where

Σ(yᵢ − ȳ)² = total sum of squares = SST
Σ(ŷᵢ − ȳ)² = sum of squares due to regression = SSR
Σêᵢ² = sum of squares due to error = SSE

Note that the simple linear regression model can be written as:

yᵢ = ŷᵢ + êᵢ,

which can be rewritten as:

yᵢ − ȳ = (ŷᵢ − ȳ) + êᵢ.

Thus, we write:

Σ(yᵢ − ȳ)² = Σ(ŷᵢ − ȳ)² + Σêᵢ²,

since the cross-product term Σ(ŷᵢ − ȳ)êᵢ equals zero. That is, SST = SSR + SSE, and R² measures the fraction of SST explained by the model. (The graphical illustration, omitted here, shows the decomposition of yᵢ − ȳ into the explained part ŷᵢ − ȳ and the residual êᵢ.)
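The decomposition SST = SSR + SSE can be checked numerically. A minimal sketch, fitting OLS by hand on made-up data:

```python
# Fit simple OLS by hand, then verify SST = SSR + SSE and the two forms of R^2.
# Data values are made up for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# OLS slope and intercept for y = b1 + b2 * x.
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b1 = ybar - b2 * xbar
yhat = [b1 + b2 * xi for xi in x]

SST = sum((yi - ybar) ** 2 for yi in y)                    # total variation
SSR = sum((yh - ybar) ** 2 for yh in yhat)                 # explained variation
SSE = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))       # residual variation

print(abs(SST - (SSR + SSE)) < 1e-9)            # True: decomposition holds
print(abs(SSR / SST - (1 - SSE / SST)) < 1e-9)  # True: both R^2 forms agree
```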
11. (20 marks) Briefly summarize the six assumptions of the Simple Linear Regression Model, and
discuss the Gauss–Markov theorem, especially which assumptions are needed and what the
implications/limitations of this theorem are.

Ans:
1. Econometric Model: All data pairs (yᵢ, xᵢ) collected from a population satisfy
the relationship yᵢ = β₁ + β₂xᵢ + eᵢ, i = 1, …, N.
2. Strict Exogeneity: The conditional expected value of the random error eᵢ is
zero. That is, E(eᵢ|x) = 0, where x = (x₁, …, x_N).
3. Conditional Homoskedasticity: The conditional variance of the error is constant.
That is, Var(eᵢ|x) = σ².
4. Conditionally Uncorrelated Errors: The conditional covariance of the random
errors is zero. That is, Cov(eᵢ, eⱼ|x) = 0 for i ≠ j.
5. Explanatory Variable Must Vary: xᵢ must take at least two different values.
6. (Optional) Error Normality: The conditional distribution of the random error is
normal. That is, eᵢ|x ~ N(0, σ²).

The Gauss–Markov theorem says that, given x and under the first five assumptions above (i.e.,
SR1–SR5 of the linear regression model), the OLS estimators b₁ and b₂ have the smallest variance
of all linear and unbiased estimators of β₁ and β₂; they are the best linear unbiased estimators
(BLUE) of β₁ and β₂. Note that the optional normality assumption is NOT needed to derive the
Gauss–Markov theorem's result.

The Gauss–Markov theorem's conclusion that the OLS estimators are BLUE is limited to the class of
linear and unbiased estimators. That is, there could be nonlinear (or biased) estimators whose
variances are smaller than the variances of the OLS estimators.

12. (20 marks) Consider the Simple Linear Regression Model. Briefly discuss the unbiasedness of
the OLS estimator, especially b₂. That is, discuss the meaning and implications of the property that
the OLS estimator b₂ is unbiased; specify the crucial assumption and prove that the OLS estimator b₂
is unbiased under that assumption.

Ans:
1. Assumption. Exogeneity: E(eᵢ|x) = 0.
2. Meaning/Implications:
2.1. Meaning: E(b₂|x) = β₂.
2.2. Implications are as follows:
 One sample yields one estimate each of b₁ and b₂, and these estimates may differ from the
actual parameters β₁ and β₂.
 Repeating the procedure (draw a new sample, compute new estimates) produces a distribution
of estimates.
 The averages of the estimates across many samples converge to β₁ and β₂.
 Unbiasedness does not say that an estimate from a particular sample is close to the true
parameter value, so we cannot say that an estimate is unbiased; it is the least squares
estimation procedure (i.e., the estimator) that is unbiased.

3. Proof:

b₂ can be rewritten as b₂ = β₂ + Σ wᵢeᵢ, where wᵢ = (xᵢ − x̄) / Σⱼ(xⱼ − x̄)².

Taking expectations conditional on x:

E(b₂|x) = β₂ + Σ E(wᵢeᵢ|x) = β₂ + Σ wᵢ E(eᵢ|x) = β₂.

In the proof above, we use the fact that E(wᵢeᵢ|x) = wᵢ E(eᵢ|x): conditional on x = (x₁, …, x_N),
wᵢ is non-random (i.e., we can treat wᵢ as a constant when calculating the expected value
conditional on x). In the last step of the proof, we use the exogeneity assumption E(eᵢ|x) = 0.
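The unbiasedness result can be illustrated with a short Monte Carlo sketch (the parameter values and design below are illustrative): holding x fixed and drawing errors with E(eᵢ|x) = 0, the average of b₂ across many samples settles near the true β₂.

```python
# Monte Carlo sketch of unbiasedness: with x held fixed and E(e|x) = 0, the
# average of b2 across repeated samples is close to the true beta2.
# beta values and design are illustrative.
import random

random.seed(0)
beta1, beta2 = 1.0, 0.5
x = [float(i) for i in range(1, 21)]           # fixed regressors
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)
w = [(xi - xbar) / sxx for xi in x]            # the weights w_i from the proof

b2_draws = []
for _ in range(5000):
    e = [random.gauss(0, 2) for _ in x]        # errors with E(e_i | x) = 0
    y = [beta1 + beta2 * xi + ei for xi, ei in zip(x, e)]
    # b2 = sum w_i y_i = beta2 + sum w_i e_i
    b2_draws.append(sum(wi * yi for wi, yi in zip(w, y)))

avg_b2 = sum(b2_draws) / len(b2_draws)
print(avg_b2)  # close to beta2 = 0.5; individual draws scatter around it
```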

--------------------------------
