
American Journal of Mathematics and Statistics 2014, 4(5): 214-221

DOI: 10.5923/j.ajms.20140405.02

Using Normalized Bayesian Information Criterion (BIC) to Improve Box-Jenkins Model Building

Etebong P. Clement

Department of Mathematics and Statistics, University of Uyo, Akwa Ibom State, Nigeria

Abstract  A statistical time series model is fitted to the chemical viscosity reading data. A comparison with the original models fitted to the same data set by Box and Jenkins is made using the Normalized Bayesian Information Criterion (BIC), and the analysis and evaluation are presented. The analysis showed that the proposed model is superior to the Box and Jenkins models.

Keywords  ARIMA, Bayesian Information Criterion (BIC), Box-Jenkins Approach, Ljung-Box Statistic, Time Series Analysis

* Corresponding author: [email protected] (Etebong P. Clement)
Published online at http://journal.sapub.org/ajms
Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved

1. Introduction

The Box-Jenkins approach to modeling ARIMA (p,d,q) processes is adopted in this work. The original Box-Jenkins modeling procedure involved an iterative three-stage process of model selection, parameter estimation and model diagnostic checking. Recent explanations of the process (e.g. [1]) often include a preliminary stage of data preparation and a final stage of model application (or forecasting).
Although originally designed for modeling time series with ARIMA (p,d,q) processes, the underlying strategy of Box-Jenkins is applicable to a wide variety of statistical modeling situations. It provides a convenient framework which allows an analyst to find an appropriate statistical model that can be used to answer relevant questions about the data.
ARIMA models describe the current behaviour of variables in terms of linear relationships with their past values. These models are also called Box-Jenkins models on the basis of these authors' pioneering work on time series forecasting techniques. An ARIMA model can be decomposed into two parts [2]. First, it has an integrated (I) component (d), which represents the order of differencing to be performed on the series to attain stationarity. The second component of an ARIMA consists of an ARMA model for the series rendered stationary through differencing. The ARMA component is further decomposed into AR and MA components [3]. The Auto Regressive (AR) components capture the correlation between the current value of the time series and some of its past values. For example, AR (1) means that the current observation is correlated with its immediate past value at time t − 1. The Moving Average (MA) component represents the duration of the influence of a random (unexplained) shock. For example, MA (1) means that a shock on the value of the series at time t is correlated with the shock at time t − 1. The autocorrelation functions (acf) and partial autocorrelation functions (pacf) are used to estimate the values of p and q.

2. Methodology

The Box-Jenkins methodology adopted for this work is widely regarded as the most efficient forecasting technique, and is used extensively. It involves the following steps: model identification, model estimation, model diagnostic check and forecasting [4].

2.1. Model Identification

The foremost step in the process of modeling is to check for the stationarity of the time series data. This is done by observing the graph of the data or the autocorrelation and partial autocorrelation functions [1]. Another way of checking for stationarity is to fit a first-order AR model to the raw data and test whether the coefficient φ is less than one.
The task is to identify an appropriate sub-class of model from the general ARIMA family

φ(B)∇^d z_t = θ(B)a_t   (1)

which may be used to represent a given time series. Our approach will be:
(i) To difference z_t as many times as is needed to produce stationarity, giving

φ(B)ω_t = θ(B)a_t   (2)
where

ω_t = (1 − B)^d z_t = ∇^d z_t   (3)

(ii) To identify the resulting ARIMA process. Our principal tools for putting (i) and (ii) into effect will be the autocorrelation and the partial autocorrelation functions.
A non-stationary stochastic process is indicated by the failure of the estimated autocorrelation functions to die out rapidly. To achieve stationarity, a certain degree of differencing (d) is required. The degree of differencing (d) necessary to achieve stationarity is attained when the autocorrelation functions of

ω_t = ∇^d z_t   (4)

die out fairly quickly.
The autocorrelation function of an AR (p) process tails off, while its partial autocorrelation function has a cut-off after lag p. Conversely, the acf of an MA (q) process has a cut-off after lag q, while its partial autocorrelation function tails off. However, if both the acf and pacf tail off, a mixed ARMA (p,q) process is suggested. The acf of a mixed ARMA (p,q) process is a mixture of exponentials and damped sine waves after the first q − p lags. Conversely, the pacf of a mixed ARMA (p,q) process is dominated by a mixture of exponentials and damped sine waves after the first p − q lags.

2.2. Model Estimation

Preliminary estimates of the parameters are obtained from the values of appropriate autocorrelations of the differenced series. These can be used as starting values in the search for appropriate least squares estimates. In practice not all parameters in the models are significant. The ratios

|parameter / (1.96 × std error)| > 1   (5)

may suggest trying a model in which some of the parameters are set to zero [5]. We then need to re-estimate the model after each parameter is set to zero.

2.3. Diagnostic Check

The diagnostic check is a procedure used to examine the residuals. The residuals should fulfil the model assumptions of being independent and normally distributed. If these assumptions are not fulfilled, then another model is chosen for the series. We will use the Ljung-Box test statistic for testing the independence of the residuals. Statistical inferences about the parameters and the goodness of fit of the estimated statistical models will also be made.

2.3.1. Ljung-Box Statistic

The [6] statistic tests whether a group of autocorrelations of a time series are different from zero. The test statistic is given as:

Q = T(T + 2) Σ_{k=1}^{s} r_k² / (T − k)   (6)

where
T: number of observations
s: number of autocorrelation coefficients tested
r_k: autocorrelation coefficient at lag k
The hypotheses of the Ljung-Box test are:
H₀: the residuals are white noise
H₁: the residuals are not white noise
If the sample value of Q exceeds the critical value of a χ² distribution with s degrees of freedom, then at least one value of r is statistically different from zero at the specified significance level.

2.3.2. Bayesian Information Criterion (BIC)

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models. It is based, in part, on the likelihood function, and it is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. The BIC resolves this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC.
The BIC was developed by Gideon E. Schwarz [7], who gave a Bayesian argument for adopting it. In fact, Akaike was so impressed with Schwarz's Bayesian formalism that he developed his own Bayesian formalism, now often referred to as the ABIC for "a Bayesian Information Criterion" or more casually "Akaike's Bayesian information criterion" [8].
The BIC is an asymptotic result derived under the assumption that the data distribution is in the exponential family. Let
x: the observed data;
n: the number of data points in x, the number of observations, or equivalently the sample size;
k: the number of free parameters to be estimated (if the estimated model is a linear regression, k is the number of regressors, including the intercept);
p(x|k): the probability of the observed data given the number of parameters, or equivalently the likelihood of the parameters given the dataset;
L: the maximized value of the likelihood function for the estimated model.
The formula for the BIC is:

−2 ln p(x|k) ≈ BIC = −2 ln L + k ln(n)   (7)

Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution, and under the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on n and not on the model) [9]:

BIC = n ln(σ̂_e²) + k ln(n)   (8)

where σ̂_e² is the error variance.
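Equation (8) makes the BIC straightforward to compute once a model's residuals are in hand. A minimal sketch in Python (NumPy only; the residual arrays and parameter counts below are made-up illustrations, not values from the paper):

```python
import numpy as np

def bic(resid, k):
    """BIC = n*ln(sigma_hat_e^2) + k*ln(n), per equation (8).

    resid: 1-D array of model residuals; k: number of free parameters.
    """
    resid = np.asarray(resid, dtype=float)
    n = resid.size
    sigma2 = np.mean(resid ** 2)  # maximum-likelihood (biased) error variance
    return n * np.log(sigma2) + k * np.log(n)

# Toy comparison: residuals from two hypothetical fitted models.
rng = np.random.default_rng(0)
resid_a = rng.normal(0.0, 0.3, size=300)  # tighter residuals, 2 parameters
resid_b = rng.normal(0.0, 0.6, size=300)  # looser residuals, 1 parameter
print(bic(resid_a, k=2))  # lower BIC -> preferred model
print(bic(resid_b, k=1))
```

The model with the lower BIC is preferred; here the better-fitting model wins despite carrying an extra parameter, because the fit improvement outweighs the k·ln(n) penalty.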

The error variance in this case is defined as

σ̂_e² = (1/n) Σ_{i=1}^{n} (x_i − x̄)²   (9)

One may point out from probability theory that σ̂_e² is a biased estimator of the true variance σ². Let σ̃_e² denote the unbiased estimator of the error variance, defined as:

σ̃_e² = (1/(n − 1)) Σ_{i=1}^{n} (x_i − x̄)²   (10)

Additionally, under the assumption of normality the following version may be more tractable:

BIC = χ² + k ln(n)   (11)

Note that there is a constant added that follows from the transition from log-likelihood to χ²; however, in using the BIC to determine the "best" model the constant is immaterial.
Given any two estimated models, the model with the lower value of BIC is the one to be preferred. The BIC is an increasing function of σ_e² and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables both increase the value of BIC. Hence, a lower BIC implies either fewer explanatory variables, better fit, or both. The BIC generally penalizes free parameters more strongly than the Akaike Information Criterion does, though this depends on the size of n and the relative magnitude of n and k.
It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. The models being compared need not be nested, unlike the case when models are being compared using an F or likelihood ratio test.

2.3.2.1. Characteristics of the Bayesian Information Criterion

It is independent of the prior, or the prior is "vague" (a constant).
It can measure the efficiency of the parameterized model in terms of predicting the data.
It penalizes the complexity of the model, where complexity refers to the number of parameters in the model.
It is approximately equal to the minimum description length criterion but with negative sign.
It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset.
It is closely related to other penalized likelihood criteria such as RIC and the Akaike Information Criterion (AIC).

2.3.2.2. Implications of the Bayesian Information Criterion

BIC has been widely used for model identification in time series and linear regression. It can, however, be applied quite widely to any set of maximum-likelihood-based models. In many applications (for example, selecting a black body or power law spectrum for an astronomical source), BIC simply reduces to maximum likelihood selection because the number of parameters is equal for the models of interest.

3. The Analysis of Data

Having discussed the basic concepts and theoretical foundations of time series that enable us to analyze the data, we now present a step-by-step analysis of our dataset, Series D.

Figure 1. Graph of the original series X_t



Figure 2. Plot of the Autocorrelation Functions of the Original Series X_t

Figure 3. Graph of the Differenced Series D (-1)



Figure 4. Plot of the Autocorrelation functions of the differenced Series

Figure 5. Plot of the Partial Autocorrelation functions of the differenced Series



3.1. Model Identification

The graphical plot of the original series of the chemical process viscosity readings (taken every hour) is given in Figure 1. It is observed that the series exhibits non-stationary behaviour, indicated by its growth.
The sample autocorrelations of the original series in Figure 2 fail to die out quickly at high lags, confirming the non-stationary behaviour of the series and equally suggesting that a transformation is required to attain stationarity. Consequently, the difference method of transformation was adopted and the first difference (d = 1) of the series was taken. The plot of the stationary equivalent is given in Figure 3, while the plots of the autocorrelation and partial autocorrelation functions of the differenced series are given in Figure 4 and Figure 5 respectively.
The autocorrelation and partial autocorrelation functions of the differenced series indicated no need for further differencing, as they tend to tail off rapidly. They also indicated no sign of seasonality, since they do not repeat themselves at lags that are multiples of the number of periods per season.
Using Figure 4 and Figure 5, the differenced series will be denoted by ω_t for t = 1, 2, …, 309, where ω_t = ∇z_t. It is observed that both the autocorrelation and partial autocorrelation functions of ω_t are characterized by correlations that alternate in sign and tend to damp out with increasing lag. Consequently, a mixed autoregressive moving average model of order (1, 1, 1) was proposed, since both the autocorrelation and partial autocorrelation functions of ω_t seem to be tailing off.
Thus, using equation (1), the proposed model is an ARIMA (1,1,1):

φ(B)∇z_t = θ(B)a_t   (12)

(1 − φ₁B)ω_t = (1 − θ₁B)a_t   (13)

(1 − φ₁B)(z_t − z_{t−1}) = (1 − θ₁B)a_t   (14)

The plots of the autocorrelation and partial autocorrelation functions of the residuals from the tentatively identified ARIMA (1, 1, 1) model are given in Figure 6.

3.2. Estimation of Parameters

Having tentatively identified what appears to be a suitable model, the next step is to obtain the least squares estimates of the parameters of the model. The SPSS 17 Expert Modeler was used to fit the model to the data. The coefficients of both the AR and the MA components were significantly different from zero, with values of 0.814 and 0.972 respectively. This enables us to write the model equation as:

z_t = 0.814 z_{t−1} + 0.972 a_{t−1} + a_t   (15)

That is, the AR coefficient φ₁ was estimated to be 0.814 with a standard error of 0.045 and a t-ratio of 18.024, while the MA coefficient θ₁ was estimated to be 0.972, with a standard error of 0.020 and a t-ratio of 49.007.
For this model Q = 9.746. The 10% and 5% points of chi-square with 16 degrees of freedom are 23.50 and 26.30 respectively. Therefore, since Q is not unduly large and the evidence does not contradict the hypothesis of white-noise behaviour in the residuals, the model is adequate and statistically appropriate.

3.3. Model Diagnostic Check

This step is concerned with testing the goodness of fit of the model. From the plots of the residual acf and pacf, it can be seen that all points are randomly distributed, so it can be concluded that there is an irregular pattern, which means that the model is adequate. Also, the individual residual autocorrelations are very small and generally lie within the ±2/√n significance bounds. The statistical significance of the model was also checked. Five criteria were used to test the adequacy and statistical appropriateness of the model: the Normalized Bayesian Information Criterion (BIC), R-square, Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE) and the Ljung-Box Q statistic.
First, the Ljung-Box (Q) statistic test was performed using the SPSS 17 Expert Modeler (see Tables 1 and 2). The Ljung-Box statistic of the model is not significantly different from zero, with a value of 9.746 for 16 d.f. and an associated p-value of 0.880, thus failing to reject the null hypothesis of white noise. This indicates that the model has adequately captured the correlation in the time series.

Table 1. Model Parameters

Coefficients    Estimates   S.E.    t-ratio   Sig.
AR Lag 1        0.814       0.045   18.024    0.000
Difference      1           -       -         -
MA Lag 1        0.972       0.020   49.007    0.000

Moreover, the low value of RMSE indicates a good fit for the model, while the high value of R-square and the low value of MAPE indicate good predictive performance over the mean.
Again, the model is adequate in the sense that the plots of the residual acf and pacf in Figure 6 show random variation: the points above and below the zero line are unevenly scattered, hence the fitted model is adequate. The adequacy and statistical appropriateness of the model were confirmed by exploring the Normalized Bayesian Information Criterion (BIC). In a class of statistically significant ARIMA (p,d,q) models fitted to the series, the ARIMA (1,1,1) model had the least BIC value of −2.366.

Table 2. Model Statistics

Model Fit Statistics                    Ljung-Box Q (18)
R²      RMSE    MAPE    BIC       Statistic   D.F.   Sig.    Number of outliers
0.749   0.301   2.424   −2.366    9.746       16     0.880   0

Figure 6. Autocorrelation & Partial Autocorrelation Functions of the Residuals
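The white-noise check applied to the residuals in Figure 6 uses the Ljung-Box statistic of equation (6). A direct implementation sketch (for routine work, `statsmodels.stats.diagnostic.acorr_ljungbox` provides the same test with p-values; the residual series below is a synthetic illustration):

```python
import numpy as np

def ljung_box_q(resid, s):
    """Q = T(T+2) * sum_{k=1}^{s} r_k^2 / (T - k), per equation (6)."""
    resid = np.asarray(resid, dtype=float)
    T = resid.size
    x = resid - resid.mean()
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, s + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom  # lag-k sample autocorrelation
        q += r_k ** 2 / (T - k)
    return T * (T + 2) * q

rng = np.random.default_rng(4)
white = rng.normal(size=300)       # white-noise residuals: small Q expected
print(ljung_box_q(white, s=18))
```

A Q value below the χ² critical value for the chosen degrees of freedom (as with the paper's Q = 9.746 against 23.50 at the 10% level) fails to reject the white-noise hypothesis for the residuals.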

3.4. Forecasting with the Model

Forecasting based on the fitted model was computed up to a lead time of 12, and the one-step forecasts and the 95% confidence limits are displayed in Table 3.

Table 3. One-step forecasts of the ARIMA (1, 1, 1) Model

Lead time   Forecast   95% Lower Limit   95% Upper Limit
1           9.12       9.00              10.02
2           8.65       8.02              9.63
3           10.14      9.86              10.84
4           12.02      10.63             12.46
5           9.38       8.86              10.41
6           9.02       8.92              9.65
7           8.86       8.02              9.85
8           7.80       7.04              8.06
9           10.81      9.84              11.00
10          9.16       9.03              10.06
11          8.28       7.89              9.40
12          10.02      9.88              10.64

4. Summary

The sample acf and pacf of the original series (Series D) were computed using the SPSS 17 Expert Modeler and their graphs were plotted. These were used in identifying the appropriate model. The series exhibited non-stationary behaviour, following the inability of the sample acf of the series to die out rapidly even at high lags. The series was transformed by differencing once, and stationarity was attained. The plot of the differenced series indicated that the series is evenly distributed around the mean.
Following the distribution of the acf and pacf of the differenced series, an ARIMA (1, 1, 1) model given by z_t = 0.814 z_{t−1} + 0.972 a_{t−1} + a_t was identified, and the parameters of the fitted model were estimated. The model was then subjected to a statistical diagnostic check using the Ljung-Box test statistic and the Normalized Bayesian Information Criterion (BIC). The analysis showed that the model is statistically significant, appropriate and adequate.
The fitted model was used to forecast values of the chemical viscosity readings for a lead time (l) of 12. The forecast is a good representation of the original data, which neither decreases nor increases.
The fitted model (ARIMA (1,1,1)) was compared with the two original models fitted to the same series by [4], that is, the AR (1) model given by:

z_t = 0.87 z_{t−1} + a_t   (16)

and the IMA (1,1) model given by:

∇z_t = −0.06 a_{t−1} + a_t   (17)

which were fitted to the Series D data.
The Bayesian Information Criterion procedure was used in comparing these three models, that is:

AR(1):          z_t = 0.87 z_{t−1} + a_t   (18)
IMA(1,1):       ∇z_t = −0.06 a_{t−1} + a_t   (19)
ARIMA(1,1,1):   z_t = 0.814 z_{t−1} + 0.972 a_{t−1} + a_t   (20)

The analysis showed that the ARIMA (1, 1, 1) model is superior to the two other models, having the least BIC value.
The study aimed at fitting a statistical time series model to the chemical viscosity reading data. The data were extracted from [4], p. 529, where they are called Series D. The plots of the sample acf and pacf of the original series indicated that the series was not stationary. A transformation of the series was made by differencing to obtain stationarity. Following the distribution of the acf and pacf of the differenced series, an ARIMA (1, 1, 1) model was identified; the parameters of the model were estimated and diagnostically checked to establish its statistical significance and adequacy at both the 0.05 and 0.01 α-levels of significance under the Ljung-Box goodness of fit test. The Normalized Bayesian Information Criterion (BIC) was explored to confirm the adequacy of the model. Among the class of significantly adequate ARIMA (p,d,q) models of the same data set, the ARIMA (1,1,1) model was found to be the most suitable, with the least BIC value of −2.366, a MAPE of 2.424, an RMSE of 0.301 and an R-square of 0.749. Estimation by the Ljung-Box test, with Q(18) = 9.746, 16 d.f. and a p-value of 0.880, showed no autocorrelation between residuals at different lag times. Finally, a forecast for a lead time (l) of 12 was made.

5. Conclusions

The ARIMA (1, 1, 1) model fitted to the chemical viscosity data is a better model than both the ARIMA (1, 0, 0) and ARIMA (0, 1, 1) models originally fitted to the same series by Box and Jenkins in 1976. This shows that, using the Bayesian Information Criterion procedure, improved or superior Box-Jenkins models can be obtained.

REFERENCES

[1] Makridakis, S., Wheelwright, S. C. and Hyndman, R. J. (1998): Forecasting Methods and Applications. John Wiley, New York.
[2] Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (1994): Time Series Analysis: Forecasting and Control. Pearson Education, Delhi.
[3] Pankratz, A. (1983): Forecasting with Univariate Box-Jenkins Models: Concepts and Cases. John Wiley, New York.
[4] Box, G. E. P. and Jenkins, G. M. (1976): Time Series Analysis: Forecasting and Control. Holden-Day Inc., U.S.A.
[5] Enders, W. (2003): Applied Econometric Time Series. John Wiley & Sons, U.S.A.
[6] Ljung, G. and Box, G. E. P. (1978): On a Measure of Lack of Fit in Time Series Models. Biometrika 65: 553-564.
[7] Schwarz, G. E. (1978): Estimating the Dimension of a Model. Annals of Statistics 6(2): 461-464.
[8] Akaike, H. (1977): On Entropy Maximization Principle. In: Krishnaiah, P. R. (Editor), Applications of Statistics. North-Holland, Amsterdam, pp. 27-41.
[9] Priestley, M. B. (1981): Spectral Analysis and Time Series. Academic Press, London.
