Introduction to Time Series Analysis. Lecture 3.
Mean, Autocovariance, Stationarity
Recall the autocovariance function:
γ_X(t + h, t) = Cov(X_{t+h}, X_t) = E[(X_{t+h} − µ_{t+h})(X_t − µ_t)].
Linear Processes
A time series {X_t} is a linear process if it can be written as
X_t = µ + Σ_{j=−∞}^{∞} ψ_j W_{t−j},
where {W_t} is white noise and Σ_j |ψ_j| < ∞.
Examples:
• White noise: ψ_0 = 1 (all other ψ_j = 0).
• MA(1): ψ_0 = 1, ψ_1 = θ (all other ψ_j = 0).
• AR(1): ψ_0 = 1, ψ_1 = φ, ψ_2 = φ², ... (that is, ψ_j = φ^j for j ≥ 0).
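To make these examples concrete, here is a small simulation sketch (mine, not from the slides), using numpy with arbitrary illustrative parameters θ = 0.7 and φ = 0.6; the infinite AR(1) sum is truncated at 50 terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_process(psi, n, mu=0.0, sigma=1.0):
    """Simulate X_t = mu + sum_{j>=0} psi[j] * W_{t-j}, truncated at len(psi) terms."""
    q = len(psi)
    w = rng.normal(0.0, sigma, size=n + q - 1)      # white noise W_t
    # 'valid' convolution returns sum_j psi[j] * w[t - j] at the n fully-overlapping positions
    return mu + np.convolve(w, psi, mode="valid")

n = 200
white_noise = linear_process([1.0], n)                   # psi_0 = 1
ma1 = linear_process([1.0, 0.7], n)                      # psi = (1, theta)
ar1 = linear_process([0.6 ** j for j in range(50)], n)   # psi_j = phi^j (truncated)
```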
Estimating the ACF: Sample ACF
For a stationary time series, γ(h) = Cov(X_{t+h}, X_t) = E[(X_{t+h} − µ)(X_t − µ)].
Given observations x_1, ..., x_n, estimate it with the sample autocovariance function
γ̂(h) = (1/n) Σ_{t=1}^{n−|h|} (x_{t+|h|} − x̄)(x_t − x̄),   for −n < h < n,
where x̄ = (1/n) Σ_{t=1}^{n} x_t, and the sample autocorrelation function (sample ACF)
ρ̂(h) = γ̂(h) / γ̂(0).
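A direct numpy implementation of these estimators, as a sketch (the helper name sample_acf is mine, not from the lecture):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocovariance gamma_hat(h) and sample ACF rho_hat(h) for h = 0, ..., max_lag.

    Uses the 1/n normalisation from the definition above (not 1/(n - h));
    values at negative lags follow from gamma_hat(-h) = gamma_hat(h).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    gamma = np.array([np.dot(d[h:], d[:n - h]) / n for h in range(max_lag + 1)])
    return gamma, gamma / gamma[0]
```

The 1/n divisor (rather than 1/(n − h)) is the conventional choice here: it keeps the implied sequence of autocovariances positive semidefinite, which matters for the properties discussed at the end of the lecture.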
Sample ACF for white Gaussian (hence i.i.d.) noise
[Figure: sample ACF of simulated white Gaussian noise, lags −20 to 20. Red lines = confidence interval.]
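The red bands reflect a standard large-sample fact: for i.i.d. noise, the sample autocorrelations at nonzero lags are approximately N(0, 1/n), so about 95% of them should lie within ±1.96/√n. A quick check (my sketch, with arbitrary seed and n = 200):

```python
import numpy as np

rng = np.random.default_rng(1)
n, max_lag = 200, 20
x = rng.normal(size=n)                              # white Gaussian noise
d = x - x.mean()
rho = np.array([np.dot(d[h:], d[:n - h]) / n for h in range(max_lag + 1)])
rho /= rho[0]                                       # sample ACF at lags 0..max_lag

band = 1.96 / np.sqrt(n)                            # approximate 95% bounds under i.i.d. noise
outside = int(np.sum(np.abs(rho[1:]) > band))
print(f"{outside} of {max_lag} sample autocorrelations fall outside +/-{band:.3f}")
```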
Sample ACF: Trend
[Figure: a simulated time series of length 100 with a trend.]
[Figure: sample ACF of the trending series, lags −60 to 60. (why?)]
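Why: with a trend, observations that are close in time tend to sit on the same side of the overall sample mean, so the products (x_{t+h} − x̄)(x_t − x̄) remain positive over many lags and the sample ACF decays very slowly instead of dropping to near zero. A quick numerical check (my sketch, with an arbitrary linear trend and noise level):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n)
x = -0.04 * t + rng.normal(scale=0.5, size=n)       # downward linear trend plus noise

d = x - x.mean()
rho = np.array([np.dot(d[h:], d[:n - h]) for h in range(61)])
rho /= rho[0]                                       # sample ACF at lags 0..60
print(rho[[1, 5, 10, 20]])                          # large and positive over many lags
```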
Sample ACF: Periodic
[Figure: a time series of length 100 containing a periodic signal.]
[Figure: the periodic signal and the signal plus noise, over 100 time points.]
[Figure: sample ACF of the noisy periodic series, lags −100 to 100. (why?)]
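Why: if X_t = s_t + W_t for a deterministic periodic signal s_t and noise W_t, then observations separated by a multiple of the period move together, so the sample ACF oscillates with the signal's period. For a sinusoidal signal the calculation can be made explicit; a sketch (standard large-sample reasoning, with amplitude A, frequency ω, and W_t ~ WN(0, σ²)):

```latex
% Sketch: for X_t = A\cos(\omega t) + W_t with {W_t} ~ WN(0, \sigma^2) and large n,
% the sample moments are approximately
\[
  \hat\gamma(h) \approx \tfrac{A^2}{2}\cos(\omega h) + \sigma^2\,\mathbf{1}\{h = 0\},
  \qquad
  \hat\rho(h) \approx \frac{(A^2/2)\cos(\omega h)}{A^2/2 + \sigma^2} \quad (h \neq 0),
\]
% so the sample ACF oscillates at the signal frequency, with amplitude damped by the noise.
```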
ACF: MA(1)
MA(1): X_t = Z_t + θ Z_{t−1}.
[Figure: ACF of the MA(1) process, lags −10 to 10: ρ(0) = 1, ρ(±1) = θ/(1 + θ²), and ρ(h) = 0 for |h| > 1.]
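The values in the figure follow from a short calculation (a standard derivation, written out here for completeness), with Z_t ~ WN(0, σ²):

```latex
% Autocovariance of X_t = Z_t + \theta Z_{t-1}, where {Z_t} ~ WN(0, \sigma^2):
\begin{align*}
  \gamma(0) &= \mathrm{Var}(Z_t + \theta Z_{t-1}) = \sigma^2 (1 + \theta^2), \\
  \gamma(1) &= \mathrm{Cov}(Z_{t+1} + \theta Z_t,\; Z_t + \theta Z_{t-1}) = \theta\,\sigma^2, \\
  \gamma(h) &= 0 \quad \text{for } |h| > 1 \text{ (no shared } Z\text{'s)}.
\end{align*}
% Hence \rho(1) = \gamma(1)/\gamma(0) = \theta/(1+\theta^2), and \rho(h) = 0 for |h| > 1.
```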
Sample ACF: MA(1)
[Figure: theoretical ACF and sample ACF of a simulated MA(1) process, lags −10 to 10.]
ACF: AR(1)
AR(1): X_t = φ X_{t−1} + Z_t.
[Figure: ACF of the AR(1) process, lags −10 to 10: ρ(h) = φ^{|h|}.]
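The formula ρ(h) = φ^{|h|} can be derived from the linear-process representation (a standard calculation, included here as a worked step), assuming |φ| < 1 and Z_t ~ WN(0, σ²):

```latex
% For the causal AR(1), X_t = \sum_{j \ge 0} \phi^j Z_{t-j}, so for h \ge 0:
\begin{align*}
  \gamma(h) &= \mathrm{Cov}\Big(\sum_{j \ge 0} \phi^j Z_{t+h-j},\; \sum_{k \ge 0} \phi^k Z_{t-k}\Big)
             = \sigma^2 \sum_{k \ge 0} \phi^{h+k}\,\phi^{k}
             = \frac{\sigma^2 \phi^{h}}{1 - \phi^2}, \\
  \rho(h)  &= \frac{\gamma(h)}{\gamma(0)} = \phi^{|h|}
             \quad (\text{using } \gamma(-h) = \gamma(h)).
\end{align*}
```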
Sample ACF: AR(1)
[Figure: theoretical ACF and sample ACF of a simulated AR(1) process, lags −10 to 10.]
Introduction to Time Series Analysis. Lecture 3.
1. Sample autocorrelation function
2. ACF and prediction
3. Properties of the autocovariance function
ACF and prediction
[Figure: sample paths of white noise and an MA(1) process (top); theoretical ACF and sample ACF of the MA(1) (bottom), lags −10 to 10.]
ACF of a MA(1) process
[Figure: scatter plots of X_{t+h} against X_t for an MA(1) process, at lags h = 0, 1, 2, 3.]
ACF and least squares prediction
Suppose we observe X_n = x_n and wish to predict X_{n+h}. Measure the quality of a predictor f(X_n) by its mean squared error, E[(X_{n+h} − f(X_n))²]. Over all functions f, this is minimized by the conditional expectation, f(x_n) = E[X_{n+h} | X_n = x_n].
So for Gaussian and stationary {X_t}, the best estimate of X_{n+h} given X_n = x_n is
f(x_n) = µ + ρ(h)(x_n − µ),
and the mean squared error is
E[(X_{n+h} − f(X_n))²] = γ(0)(1 − ρ(h)²).
Notice:
• Prediction accuracy improves as |ρ(h)| → 1.
• Predictor is linear: f(x) = µ(1 − ρ(h)) + ρ(h)x.
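For instance (a sketch with illustrative numbers, not from the slides): take a Gaussian AR(1) with φ = 0.8, µ = 10, and γ(0) = 1, so that ρ(h) = φ^h.

```python
# Best predictor of X_{n+h} given X_n = x_n for a stationary Gaussian series:
#   f(x_n) = mu + rho_h * (x_n - mu),   MSE = gamma0 * (1 - rho_h**2)
# Illustrative values: AR(1) with phi = 0.8, so rho(h) = phi**h.
mu, gamma0, phi = 10.0, 1.0, 0.8

def predict(x_n, h):
    rho_h = phi ** h
    forecast = mu + rho_h * (x_n - mu)
    mse = gamma0 * (1 - rho_h ** 2)
    return forecast, mse

print(predict(x_n=11.5, h=1))    # about (11.2, 0.36): close to x_n, small error
print(predict(x_n=11.5, h=10))   # about (10.16, 0.99): reverts toward the mean, error near gamma(0)
```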
ACF and least squares linear prediction
Now drop the Gaussian assumption and restrict attention to linear predictors. In the zero-mean case (µ = 0), consider predictors of the form f(X_n) = a X_n. Minimizing the mean squared error over a gives a = γ(h)/γ(0) = ρ(h), that is,
f(x_n) = ρ(h) x_n.
More generally, consider linear predictors of the form
f(x_n) = a(x_n − µ) + b.
The least squares choices are b = µ and a = ρ(h), giving
f(x_n) = ρ(h)(x_n − µ) + µ.
For this optimal linear predictor, the mean squared error is again γ(0)(1 − ρ(h)²).
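The coefficients a = ρ(h) and b = µ come from minimizing the mean squared error over a and b (a short derivation, spelled out here):

```latex
% Minimize  E\big[(X_{n+h} - a(X_n - \mu) - b)^2\big]  over a and b.
% Setting the derivative with respect to b to zero gives b = E[X_{n+h}] = \mu.
% With b = \mu, the objective becomes
\[
  E\big[((X_{n+h} - \mu) - a(X_n - \mu))^2\big]
    = \gamma(0) - 2a\,\gamma(h) + a^2 \gamma(0),
\]
% which is minimized at a = \gamma(h)/\gamma(0) = \rho(h), with minimum value
\[
  \gamma(0) - \frac{\gamma(h)^2}{\gamma(0)} = \gamma(0)\,\big(1 - \rho(h)^2\big).
\]
```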
Least squares prediction of X_{n+h} given X_n
To summarize: both the best predictor under the Gaussian assumption and the best linear predictor in general depend on the joint distribution of (X_n, X_{n+h}) only through the mean µ and the autocorrelation ρ(h), and both achieve mean squared error γ(0)(1 − ρ(h)²).
Introduction to Time Series Analysis. Lecture 3.
1. Sample autocorrelation function
2. ACF and prediction
3. Properties of the autocovariance function
Properties of the autocovariance function
For any n and any vector a = (a_1, ..., a_n)′ ∈ R^n,
a′ Γ_n a = Var(a_1 X_1 + ··· + a_n X_n) ≥ 0,
where Γ_n is the n × n matrix with entries (Γ_n)_{ij} = γ(i − j). That is, γ is positive semidefinite.
In summary, the autocovariance function γ of a stationary time series satisfies
1. γ(0) ≥ 0,
2. |γ(h)| ≤ γ(0),
3. γ(h) = γ(−h),
4. γ is positive semidefinite.
Furthermore, any function γ : Z → R that satisfies (3) and (4) is the autocovariance function of some stationary time series (in particular, of a Gaussian process).
For example, (1) and (2) follow from (4).
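As a quick numerical illustration of property (4) (my own check, not from the slides), the sketch below builds Γ_n from the MA(1) autocovariance derived earlier and confirms that a′ Γ_n a ≥ 0 for a random vector a (equivalently, that the smallest eigenvalue is nonnegative); the parameter values and matrix size are arbitrary.

```python
import numpy as np

theta, sigma2, n = 0.7, 1.0, 8          # illustrative MA(1) parameters and matrix size

def gamma_ma1(h):
    """Autocovariance of X_t = Z_t + theta Z_{t-1} with Var(Z_t) = sigma2."""
    h = abs(h)
    if h == 0:
        return sigma2 * (1 + theta ** 2)
    if h == 1:
        return sigma2 * theta
    return 0.0

Gamma = np.array([[gamma_ma1(i - j) for j in range(n)] for i in range(n)])

rng = np.random.default_rng(3)
a = rng.normal(size=n)
print(a @ Gamma @ a >= 0)                          # quadratic form is nonnegative
print(np.linalg.eigvalsh(Gamma).min() >= -1e-12)   # all eigenvalues are (numerically) >= 0
```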