
ECMT3150: The Econometrics of Financial Markets
1b. Linear Time Series Analysis

Simon Kwok
University of Sydney

Semester 1, 2022

Outline

1. MA Model
2. ARMA Model
3. Stationarity and Invertibility
4. Model Checking and Portmanteau Tests

Moving-Average (MA) Model
MA(q):

    y_t = μ + ε_t + θ_1 ε_{t-1} + ⋯ + θ_q ε_{t-q},

where {ε_t} ∼ wn(0, σ_ε²). Noting that E(y_t) = μ, we can demean
the MA(q) model into

    u_t = ε_t + θ_1 ε_{t-1} + ⋯ + θ_q ε_{t-q}
        = [1 + θ(L)] ε_t,

where θ(x) = ∑_{i=1}^q θ_i x^i.

Ex: Show that for an MA(q) model, the ACF is given by, for j > 0,

    ρ_j ≡ Corr(u_t, u_{t-j}) = Corr(y_t, y_{t-j})
        = (θ_j + ∑_{i=1}^{q-j} θ_{j+i} θ_i) / (1 + ∑_{i=1}^q θ_i²)   if j ≤ q,
        = 0                                                          if j > q,

and ρ_{-j} = ρ_j. Plot the ACF against lag order.
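
For concreteness, here is a minimal numerical sketch of this ACF formula in Python/NumPy (the function name and the MA(2) coefficients are illustrative, not from the slides):

    import numpy as np

    def ma_acf(theta, max_lag=10):
        """Theoretical ACF of an MA(q) model with coefficients theta = [theta_1, ..., theta_q]."""
        theta = np.asarray(theta, dtype=float)
        q = len(theta)
        denom = 1.0 + np.sum(theta**2)
        acf = [1.0]                                   # rho_0 = 1
        for j in range(1, max_lag + 1):
            if j <= q:
                num = theta[j-1] + np.sum(theta[j:q] * theta[:q-j])
                acf.append(num / denom)
            else:
                acf.append(0.0)                       # the ACF cuts off after lag q
        return np.array(acf)

    print(ma_acf([0.6, -0.3], max_lag=5))             # MA(2): nonzero only at lags 1 and 2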


MA Model Estimation

Given the time series data {y_t}_{t=1}^T, we can estimate an MA(q)
model by either conditional or exact maximum likelihood
estimation (MLE), as sketched below.

- Conditional MLE: Set the initial errors (ε_t for
  t = 0, −1, …, −q+1) to zero. Then assume a distribution
  on {ε_t}_{t=1}^T (usually iid normal). The joint likelihood function
  is obtained in terms of the MA parameters μ, θ_1, …, θ_q.
  Maximize the log-likelihood w.r.t. μ, θ_1, …, θ_q.
- Exact MLE: Treat the initial errors (ε_t for
  t = 0, −1, …, −q+1) as extra parameters. Maximize the
  log-likelihood w.r.t. μ, θ_1, …, θ_q, ε_0, ε_{−1}, …, ε_{−q+1}.
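
A minimal fitting sketch, assuming the statsmodels package is available; the simulated series and the MA(1) order are illustrative:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    eps = rng.normal(scale=1.0, size=501)
    y = 0.5 + eps[1:] + 0.6 * eps[:-1]        # simulate y_t = 0.5 + eps_t + 0.6 eps_{t-1}

    res = ARIMA(y, order=(0, 0, 1)).fit()     # Gaussian MLE of the MA(1) parameters
    print(res.params)                         # estimates of the constant, theta_1 and sigma^2_eps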

MA Model Selection

- Use the sample ACF. The asymptotic variance of ρ̂_j is 1/T. For
  large T, the 95% confidence interval is roughly given by
  ρ̂_j ± 1.96/√T (see the sketch below).
- Use information criteria.
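
A minimal sketch in plain NumPy (variable names are illustrative): the sample ACF of a series together with the rough 95% band ±1.96/√T used for selecting the MA order:

    import numpy as np

    def sample_acf(y, max_lag=20):
        y = np.asarray(y, dtype=float)
        d = y - y.mean()
        denom = np.sum(d**2)
        return np.array([np.sum(d[j:] * d[:len(y)-j]) / denom for j in range(max_lag + 1)])

    y = np.random.default_rng(1).normal(size=300)     # placeholder data
    rho_hat = sample_acf(y, max_lag=10)
    band = 1.96 / np.sqrt(len(y))
    for j, r in enumerate(rho_hat[1:], start=1):
        flag = "*" if abs(r) > band else ""
        print(f"lag {j}: {r: .3f} {flag}")            # '*' marks lags outside the 95% band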

Forecasting with MA model

Suppose {y_t} ∼ MA(q) with errors {ε_t} ∼ wn(0, σ_ε²). For ℓ = 1,

    ŷ_t(1) = E[y_{t+1} | F_t]
           = E[μ + ε_{t+1} + θ_1 ε_t + ⋯ + θ_q ε_{t-q+1} | F_t]
           = μ + θ_1 ε_t + ⋯ + θ_q ε_{t-q+1},

as E[ε_{t+1} | F_t] = 0, and ε_t, …, ε_{t-q+1} are measurable w.r.t. F_t.

The forecast error is e_t(1) = y_{t+1} − ŷ_t(1) = ε_{t+1}, with variance
Var[e_t(1)] = σ_ε².

Forecasting with MA model

For ℓ = 2,

    ŷ_t(2) = E[y_{t+2} | F_t]
           = E[μ + ε_{t+2} + θ_1 ε_{t+1} + θ_2 ε_t + ⋯ + θ_q ε_{t-q+2} | F_t]
           = μ + θ_2 ε_t + ⋯ + θ_q ε_{t-q+2}.

The forecast error is e_t(2) = y_{t+2} − ŷ_t(2) = ε_{t+2} + θ_1 ε_{t+1}, with
variance

    Var[e_t(2)] = Var(ε_{t+2}) + θ_1² Var(ε_{t+1}) + 2 θ_1 Cov(ε_{t+2}, ε_{t+1})
                = σ_ε² + θ_1² σ_ε² + 0
                = (1 + θ_1²) σ_ε².

Forecasting with MA model
Q: What is the ℓ-step ahead forecast of an MA(q) model? What
are the forecast error and its variance? What happens to the
forecast and its variance when ℓ increases beyond q?

A: The ℓ-step ahead forecast is

    ŷ_t(ℓ) = μ + θ_ℓ ε_t + ⋯ + θ_q ε_{t-q+ℓ}   if ℓ ≤ q,
           = μ                                  if ℓ > q.                  (1)

The forecast error is

    e_t(ℓ) = ε_{t+ℓ} + θ_1 ε_{t+ℓ-1} + ⋯ + θ_{ℓ-1} ε_{t+1}   if ℓ ≤ q,
           = ε_{t+ℓ} + θ_1 ε_{t+ℓ-1} + ⋯ + θ_q ε_{t+ℓ-q}     if ℓ > q.     (2)

The forecast variance is

    Var[e_t(ℓ)] = (1 + θ_1² + θ_2² + ⋯ + θ_{ℓ-1}²) σ_ε²   if ℓ ≤ q,
                = (1 + θ_1² + θ_2² + ⋯ + θ_q²) σ_ε²       if ℓ > q,        (3)

which, for ℓ > q, is equal to Var(y_t).

As ℓ increases beyond q, ŷ_t(ℓ) stays at the mean level μ, and the forecast variance
remains at Var(y_t) = (1 + θ_1² + θ_2² + ⋯ + θ_q²) σ_ε².
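
A minimal sketch in NumPy of the closed-form forecast (1) and forecast variance (3), showing the reversion to μ and to Var(y_t) once the horizon passes q; the MA(2) coefficients and the recent errors are illustrative:

    import numpy as np

    mu, theta, sigma2 = 0.5, np.array([0.6, -0.3]), 1.0    # MA(2): q = 2
    eps_recent = np.array([0.8, -0.2])                     # [eps_t, eps_{t-1}], assumed known at time t
    q = len(theta)

    for ell in range(1, 6):
        if ell <= q:
            forecast = mu + np.sum(theta[ell-1:] * eps_recent[:q-ell+1])   # formula (1)
            var = sigma2 * (1.0 + np.sum(theta[:ell-1]**2))                # formula (3)
        else:
            forecast = mu                                   # forecast reverts to the mean
            var = sigma2 * (1.0 + np.sum(theta**2))         # variance settles at Var(y_t)
        print(f"ell={ell}: forecast={forecast:.3f}, variance={var:.3f}")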
Autoregressive Moving-Average Model

{y_t} ∼ ARMA(p, q) if

    y_t = φ_0 + φ_1 y_{t-1} + ⋯ + φ_p y_{t-p} + ε_t + θ_1 ε_{t-1} + ⋯ + θ_q ε_{t-q},
    [1 − φ(L)] y_t = φ_0 + [1 + θ(L)] ε_t,

where {ε_t} ∼ wn(0, σ_ε²).

After demeaning (y_t = μ + u_t), the expressions become

    u_t = φ_1 u_{t-1} + ⋯ + φ_p u_{t-p} + ε_t + θ_1 ε_{t-1} + ⋯ + θ_q ε_{t-q},
    [1 − φ(L)] u_t = [1 + θ(L)] ε_t.                                        (4)

Ex: What is the ACF of a stationary ARMA(1, 1) model?
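
As a numerical companion to this exercise, a minimal sketch (NumPy) that simulates an ARMA(1,1) and prints its sample ACF; φ_1 = 0.7 and θ_1 = 0.4 are illustrative choices:

    import numpy as np

    rng = np.random.default_rng(2)
    phi1, theta1, T = 0.7, 0.4, 5000
    eps = rng.normal(size=T + 1)
    u = np.zeros(T + 1)
    for t in range(1, T + 1):
        u[t] = phi1 * u[t-1] + eps[t] + theta1 * eps[t-1]   # u_t = phi_1 u_{t-1} + eps_t + theta_1 eps_{t-1}
    u = u[1:]

    d = u - u.mean()
    acf = [np.sum(d[j:] * d[:T-j]) / np.sum(d**2) for j in range(6)]
    print(np.round(acf, 3))     # for j >= 2 the sample ACF should decay roughly geometrically at rate phi_1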

Stationarity Condition

{y_t} is said to be stationary if it can be expressed as an MA(∞)
process of the form u_t = [1 + ψ(L)] ε_t.

Stationarity condition: The ARMA(p, q) model is stationary iff all
the roots of the polynomial equation 1 − φ(x) = 0 lie outside the
unit circle.

In this case, 1 + ψ(L) := [1 + θ(L)] / [1 − φ(L)] is a well-defined
infinite-order polynomial.

Q: Does the stationarity condition imply weak (covariance)
stationarity?

Hint: ARMA(p, q) can be expressed as MA(∞) under the stationarity condition. The
Yule-Walker equations yield a unique set of autocovariances as solution: γ_0, γ_1, γ_2, ….
Verify that the mean (by definition), the variance γ_0 and the autocovariances γ_j are all
time-invariant; hence the model is weakly stationary.

Stationarity Condition

Q: What is the stationarity condition for an AR(1) model?

A: For AR(1), we want the root of the equation 1 − φ_1 x = 0 to
have magnitude greater than one, i.e., |x| = 1/|φ_1| > 1, so the
stationarity condition is |φ_1| < 1.

Q: How about AR(2)?

Hint: The polynomial equation is 1 − φ_1 x − φ_2 x² = 0. Let z = 1/x. The polynomial
equation then becomes z² − φ_1 z − φ_2 = 0. Now impose the condition |x| > 1, which
is equivalent to |z| < 1. Let the two roots be z_1 and z_2, where z_1 ≤ z_2. Then from the
conditions z_1 > −1, z_2 < 1 and |z_1 z_2| < 1, we obtain φ_2 − φ_1 < 1, φ_1 + φ_2 < 1 and
|φ_2| < 1.
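
A minimal sketch (NumPy) that checks the AR(2) stationarity condition numerically by finding the roots of 1 − φ_1 x − φ_2 x² = 0; the coefficient values are illustrative:

    import numpy as np

    def ar2_is_stationary(phi1, phi2):
        # np.roots expects coefficients from the highest power down: -phi2 x^2 - phi1 x + 1
        roots = np.roots([-phi2, -phi1, 1.0])
        return bool(np.all(np.abs(roots) > 1.0))   # stationary iff all roots lie outside the unit circle

    print(ar2_is_stationary(0.5, 0.3))             # True:  phi1 + phi2 < 1, phi2 - phi1 < 1, |phi2| < 1
    print(ar2_is_stationary(0.5, 0.6))             # False: phi1 + phi2 = 1.1 > 1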

Stationarity Condition of AR(2)

[Figure: the stationarity region of the AR(2) model, i.e., the set of (φ_1, φ_2) satisfying the three conditions above.]
Invertibility Condition

{y_t} is said to be invertible if it can be expressed as an AR(∞)
process of the form [1 − π(L)] u_t = ε_t.

Invertibility condition: The ARMA(p, q) model is invertible iff all
the roots of the polynomial equation 1 + θ(x) = 0 lie outside the
unit circle.

In this case, 1 − π(L) := [1 − φ(L)] / [1 + θ(L)] is a well-defined
infinite-order polynomial.

Q: What is the invertibility condition for an MA(1) model?


A: For MA(1), the invertibility condition is |θ_1| < 1.

Ex: What is the invertibility condition of an MA(2) model?

Ex: Is a stationary AR(p) invertible?
Ex: Is an invertible MA(q) stationary?
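
Paralleling the AR(2) check earlier, a minimal sketch (NumPy) for the MA(2) exercise: the model is invertible iff the roots of 1 + θ_1 x + θ_2 x² = 0 lie outside the unit circle; the coefficients are illustrative:

    import numpy as np

    def ma2_is_invertible(theta1, theta2):
        roots = np.roots([theta2, theta1, 1.0])    # coefficients of theta2 x^2 + theta1 x + 1, highest power first
        return bool(np.all(np.abs(roots) > 1.0))

    print(ma2_is_invertible(0.6, -0.3))            # True
    print(ma2_is_invertible(0.5, 1.2))             # False: |theta_2| >= 1 already rules out invertibility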

AR Representation

Given an ARMA(p, q) model:

    y_t = μ + u_t,
    [1 − φ(L)] u_t = [1 + θ(L)] ε_t.

Suppose the invertibility condition holds, so that 1 / [1 + θ(L)] is a
well-defined infinite-order polynomial. Define 1 + π(L) = [1 − φ(L)] / [1 + θ(L)].
Then we can rewrite the model as AR(∞):

    ε_t = ([1 − φ(L)] / [1 + θ(L)]) u_t
        = [1 + π(L)] u_t
        = u_t + π_1 u_{t-1} + π_2 u_{t-2} + ⋯ .

MA Representation
Given an ARMA(p, q) model:

    y_t = μ + u_t,
    [1 − φ(L)] u_t = [1 + θ(L)] ε_t.

Suppose the stationarity condition holds, so that 1 / [1 − φ(L)] is a
well-defined infinite-order polynomial. Define 1 + ψ(L) = [1 + θ(L)] / [1 − φ(L)].
Then we can rewrite the model as MA(∞):

    u_t = ([1 + θ(L)] / [1 − φ(L)]) ε_t
        = [1 + ψ(L)] ε_t
        = ε_t + ψ_1 ε_{t-1} + ψ_2 ε_{t-2} + ⋯ .                             (5)

Interpretation: ψ_j as a function of j is the impulse response
function. ψ_j is the marginal impact of the lag-j shock ε_{t-j} on the
current observation y_t.
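
A minimal sketch (plain Python/NumPy) of the ψ weights of a stationary ARMA(p, q), obtained by matching coefficients in [1 − φ(L)][1 + ψ(L)] = 1 + θ(L); the ARMA(1,1) values in the example are illustrative:

    import numpy as np

    def arma_to_psi(phi, theta, n_lags=10):
        phi, theta = np.asarray(phi, float), np.asarray(theta, float)
        p, q = len(phi), len(theta)
        psi = np.zeros(n_lags + 1)
        psi[0] = 1.0                                   # psi_0 = 1
        for j in range(1, n_lags + 1):
            acc = theta[j-1] if j <= q else 0.0        # theta_j (zero beyond lag q)
            for i in range(1, min(j, p) + 1):
                acc += phi[i-1] * psi[j-i]             # + sum_i phi_i * psi_{j-i}
            psi[j] = acc
        return psi

    print(np.round(arma_to_psi([0.7], [0.4], n_lags=6), 3))
    # For ARMA(1,1): psi_1 = phi_1 + theta_1, and psi_j = phi_1 * psi_{j-1} for j >= 2.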
MA Representation
Q: Let {y_t} be a weakly stationary process. What happens to the
ℓ-step ahead forecast and its variance as ℓ → ∞?

A: By weak stationarity, {y_t} can be expressed as an MA(∞) process (5), so that

    Var(y_t) = Var(ε_t + ψ_1 ε_{t-1} + ψ_2 ε_{t-2} + ⋯)
             = σ_ε² (1 + ∑_{i=1}^∞ ψ_i²) < ∞.

This implies that ψ_i → 0 as i → ∞.

It follows from (1) with q = ∞ that the ℓ-step ahead forecast is

    ŷ_t(ℓ) = μ + ψ_ℓ ε_t + ψ_{ℓ+1} ε_{t-1} + ⋯
           → μ   as ℓ → ∞.

This is the mean-reverting property. The forecast variance is, by (3) with q = ∞,

    Var[e_t(ℓ)] → (1 + ψ_1² + ψ_2² + ⋯) σ_ε²
                = Var(y_t)   as ℓ → ∞.

Model Checking
If a model is correctly specified, the residual process {ε̂_t} should
look like white noise.

To test H_0: ρ_ℓ = 0 vs H_a: ρ_ℓ ≠ 0, we compute the sample ACF ρ̂_ℓ of
{ε̂_t}.

- Suppose ρ_j = 0 for all j > ℓ under H_0. Then

      √T ρ̂_ℓ →_d N(0, 1 + 2 ∑_{j=1}^{ℓ-1} ρ_j²)   as T → ∞.

  The t ratio converges in distribution to standard normal:

      t = ρ̂_ℓ / √[(1 + 2 ∑_{j=1}^{ℓ-1} ρ̂_j²) / T] →_d N(0, 1)   as T → ∞.

- Suppose {ε_t} ∼ wn(0, σ_ε²) under H_0. Then ρ_j = 0 for all
  j ≠ 0, and so

      √T ρ̂_ℓ →_d N(0, 1)   as T → ∞.

This is a two-sided test (reject H_0 at the α level if |t| > z_{α/2}).
Portmanteau Tests

Set a maximum lag m. We want to test H_0: ρ_1 = ⋯ = ρ_m = 0
vs H_a: ρ_j ≠ 0 for some j = 1, …, m.

Assume {y_t} ∼ ARMA(p, q), with {ε_t} iid and some moment
conditions.

Box-Pierce test on {ε̂_t}:

    Q(m) = T ∑_{ℓ=1}^m ρ̂_ℓ² →_d χ²(m − p − q)   as T → ∞.

Ljung-Box test on {ε̂_t}:

    Q(m) = T(T+2) ∑_{ℓ=1}^m ρ̂_ℓ² / (T − ℓ) →_d χ²(m − p − q)   as T → ∞.

They are one-sided tests (reject H_0 at the α level if Q(m) exceeds χ²_α, the
upper-α quantile of the χ²(m − p − q) distribution).
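
A minimal sketch (NumPy/SciPy) that computes the Box-Pierce and Ljung-Box statistics directly from the residual sample ACF; resid, m, p and q below are placeholders for the fitted model's residuals and orders:

    import numpy as np
    from scipy.stats import chi2

    def portmanteau(resid, m, p=0, q=0, alpha=0.05):
        resid = np.asarray(resid, dtype=float)
        T = len(resid)
        d = resid - resid.mean()
        rho = np.array([np.sum(d[l:] * d[:T-l]) / np.sum(d**2) for l in range(1, m + 1)])
        bp = T * np.sum(rho**2)                                        # Box-Pierce Q(m)
        lb = T * (T + 2) * np.sum(rho**2 / (T - np.arange(1, m + 1)))  # Ljung-Box Q(m)
        crit = chi2.ppf(1 - alpha, df=m - p - q)                       # upper-alpha critical value
        return bp, lb, crit

    resid = np.random.default_rng(3).normal(size=400)    # placeholder residual series
    print(portmanteau(resid, m=10, p=1, q=1))            # reject H0 if Q(m) exceeds the critical value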

