NAG Toolbox Chapter Introduction

G13 — Time Series Analysis

Scope of the Chapter

This chapter provides facilities for investigating and modelling the statistical structure of series of observations collected at equally spaced points in time. The models may then be used to forecast the series.
The chapter covers the following models and approaches.
  1. Univariate time series analysis, including autocorrelation functions and autoregressive moving average (ARMA) models.
  2. Univariate spectral analysis.
  3. Transfer function (multi-input) modelling, in which one time series is dependent on other time series.
  4. Bivariate spectral methods including coherency, gain and input response functions.
  5. Vector ARMA models for multivariate time series.
  6. Kalman filter models.
  7. GARCH models for volatility.

Background to the Problems

Univariate Analysis

Let the given time series be x_1, x_2, …, x_n, where n is its length. The structure which is intended to be investigated, and which may be most evident to the eye in a graph of the series, can be broadly described as:
(a) trends, linear or possibly higher-order polynomial;
(b) seasonal patterns, associated with fixed integer seasonal periods. The presence of such seasonality and the period will normally be known a priori. The pattern may be fixed, or slowly varying from one season to another;
(c) cycles or waves of stable amplitude and period p (from peak to peak). The period is not necessarily integer, the corresponding absolute frequency (cycles/time unit) being f = 1/p and angular frequency ω = 2πf. The cycle may be of pure sinusoidal form like sin(ωt), or the presence of higher harmonic terms may be indicated, e.g., by asymmetry in the wave form;
(d) quasi-cycles, i.e., waves of fluctuating period and amplitude; and
(e) irregular statistical fluctuations and swings about the overall mean or trend.
Trends, seasonal patterns, and cycles might be regarded as deterministic components following fixed mathematical equations, and the quasi-cycles and other statistical fluctuations as stochastic and describable by short-term correlation structure. For a finite dataset it is not always easy to discriminate between these two types, and a common description using the class of autoregressive integrated moving-average (ARIMA) models is now widely used. The form of these models is that of difference equations (or recurrence relations) relating present and past values of the series. You are referred to Box and Jenkins (1976) for a thorough account of these models and how to use them. We follow their notation and outline the recommended steps in ARIMA model building for which functions are available.

Transformations

If the variance of the observations in the series is not constant across the range of observations it may be useful to apply a variance-stabilizing transformation to the series. A common situation is for the variance to increase with the magnitude of the observations and in this case typical transformations used are the log or square root transformation. A range-mean plot or standard deviation-mean plot provides a quick and easy way of detecting non-constant variance and of choosing, if required, a suitable transformation. These are plots of either the range or standard deviation of successive groups of observations against their means.

Differencing operations

These may be used to simplify the structure of a time series.
First-order differencing, i.e., forming the new series

∇x_t = x_t - x_{t-1},

will remove a linear trend. First-order seasonal differencing

∇_s x_t = x_t - x_{t-s}

eliminates a fixed seasonal pattern.
These operations reflect the fact that it is often appropriate to model a time series in terms of changes from one value to another. Differencing is also therefore appropriate when the series has something of the nature of a random walk, which is by definition the accumulation of independent changes.
Differencing may be applied repeatedly to a series, giving

w_t = ∇^d ∇_s^D x_t,

where d and D are the orders of differencing. The derived series w_t will be shorter, of length N = n - d - s×D, and extends for t = 1 + d + s×D, …, n.
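As an illustration, the repeated and seasonal differencing described above can be sketched in a few lines of Python. This is an illustrative sketch only, not a NAG Library call; the function name is ours.

    import numpy as np

    def difference(x, d=0, s=1, D=0):
        # Apply d first differences and D seasonal differences of period s;
        # the result w_t has length N = n - d - s*D.
        w = np.asarray(x, dtype=float)
        for _ in range(d):
            w = w[1:] - w[:-1]            # x_t - x_{t-1}
        for _ in range(D):
            w = w[s:] - w[:-s]            # x_t - x_{t-s}
        return w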

Sample autocorrelations

Given that a series has (possibly as a result of simplifying by differencing operations) a homogeneous appearance throughout its length, fluctuating with approximately constant variance about an overall mean level, it is appropriate to assume that its statistical properties are stationary. For most purposes the correlations ρ_k between terms x_t, x_{t+k} or w_t, w_{t+k} separated by lag k give an adequate description of the statistical structure and are estimated by the sample autocorrelation function (ACF) r_k, for k = 1, 2, ….
As described by Box and Jenkins (1976), these may be used to indicate which particular ARIMA model may be appropriate.
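A minimal sketch of the sample ACF calculation, assuming the usual mean-corrected product-moment estimator with divisor n (illustrative only, not the Library implementation):

    import numpy as np

    def sample_acf(x, max_lag):
        # Sample autocorrelations r_k, k = 1, ..., max_lag.
        x = np.asarray(x, dtype=float)
        n = len(x)
        xm = x - x.mean()
        c0 = xm @ xm / n                  # sample variance
        return np.array([xm[:n-k] @ xm[k:] / (n * c0)
                         for k in range(1, max_lag + 1)])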

Partial autocorrelations

The information in the autocorrelations ρ_k may be presented in a different light by deriving from them the coefficients of the partial autocorrelation function (PACF) φ_{k,k}, for k = 1, 2, …. φ_{k,k} measures the correlation between x_t and x_{t+k} conditional upon the intermediate values x_{t+1}, x_{t+2}, …, x_{t+k-1}. The corresponding sample values φ̂_{k,k} give further assistance in the selection of ARIMA models.
Both the ACF and the PACF may be rapidly computed, particularly in comparison with the time taken to estimate ARIMA models.

Finite lag predictor coefficients and error variances

The partial autocorrelation coefficient φ_{k,k} is determined as the final parameter in the minimum variance predictor of x_t in terms of x_{t-1}, x_{t-2}, …, x_{t-k},

x_t = φ_{k,1} x_{t-1} + φ_{k,2} x_{t-2} + ⋯ + φ_{k,k} x_{t-k} + e_{k,t},

where e_{k,t} is the prediction error, and the first subscript k of φ_{k,i} and e_{k,t} emphasizes the fact that the parameters will alter as k increases. Moderately good estimates φ̂_{k,i} of φ_{k,i} are obtained from the sample ACF, and after calculating the PACF up to lag L, the successive values v_1, v_2, …, v_L of the prediction error variance estimates, v_k = var(e_{k,t}), are available, together with the final values of the coefficients φ̂_{k,1}, φ̂_{k,2}, …, φ̂_{k,L}. If x_t has nonzero mean x̄, it is adequate to use x_t - x̄ in place of x_t in the prediction equation.
Although Box and Jenkins (1976) do not place great emphasis on these prediction coefficients, their use is advocated for example by Akaike (1971), who recommends selecting an optimal order of the predictor as the lag for which the final prediction error (FPE) criterion (1 + k/n)(1 - k/n)^{-1} v_k is a minimum.
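The recursion behind these finite lag predictors is the Levinson-Durbin scheme; the following sketch (function name ours, illustrative rather than the Library algorithm) computes the sample PACF, the relative error variances v_k and the FPE criterion from the sample autocorrelations:

    import numpy as np

    def durbin_levinson(r, n):
        # Levinson-Durbin recursion on sample autocorrelations r = [r_1, ..., r_L].
        # Returns the sample PACF phi_hat_{k,k}, the relative prediction error
        # variances v_k = var(e_{k,t})/c_0 and the FPE criterion at each lag.
        L = len(r)
        r = np.asarray(r, dtype=float)
        pacf, v, fpe = np.zeros(L), np.zeros(L), np.zeros(L)
        phi = np.zeros(L)                 # current predictor coefficients phi_{k,i}
        v_prev = 1.0
        for k in range(1, L + 1):
            if k == 1:
                phi_kk = r[0]
            else:
                phi_kk = (r[k-1] - phi[:k-1] @ r[k-2::-1]) / v_prev
                phi[:k-1] -= phi_kk * phi[k-2::-1].copy()
            phi[k-1] = phi_kk
            v_prev *= 1.0 - phi_kk * phi_kk
            pacf[k-1], v[k-1] = phi_kk, v_prev
            fpe[k-1] = (1.0 + k / n) * v_prev / (1.0 - k / n)
        return pacf, v, fpe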

ARIMA models

The correlation structure in stationary time series may often be represented by a model with a small number of parameters belonging to the autoregressive moving-average (ARMA) class. If the stationary series w_t has been derived by differencing from the original series x_t, then x_t is said to follow an ARIMA model. Taking w_t = ∇^d x_t, the (non-seasonal) ARIMA(p,d,q) model with p autoregressive parameters φ_1, φ_2, …, φ_p and q moving-average parameters θ_1, θ_2, …, θ_q represents the structure of w_t by the equation

w_t = φ_1 w_{t-1} + ⋯ + φ_p w_{t-p} + a_t - θ_1 a_{t-1} - ⋯ - θ_q a_{t-q},   (1)

where a_t is an uncorrelated series (white noise) with mean 0 and constant variance σ_a^2. If w_t has a nonzero mean c, then this is allowed for by replacing w_t, w_{t-1}, … by w_t - c, w_{t-1} - c, … in the model. Although c is often estimated by the sample mean of w_t this is not always optimal.
A series generated by this model will only be stationary provided restrictions are placed on φ_1, φ_2, …, φ_p to avoid unstable growth of w_t. These are called stationarity constraints. The series a_t may also be usefully interpreted as the linear innovations in x_t (and in w_t), i.e., the error if x_t were to be predicted using the information in all past values x_{t-1}, x_{t-2}, …, provided also that θ_1, θ_2, …, θ_q satisfy invertibility constraints. This allows the series a_t to be regenerated by rewriting the model equation as

a_t = w_t - φ_1 w_{t-1} - ⋯ - φ_p w_{t-p} + θ_1 a_{t-1} + ⋯ + θ_q a_{t-q}.   (2)
For a series with short-term correlation only, i.e., r_k is not significant beyond some low lag q (see Box and Jenkins (1976) for the statistical test), the pure moving-average model MA(q) is appropriate, with no autoregressive parameters, i.e., p = 0.
Autoregressive parameters are appropriate when the ACF pattern decays geometrically, or with a damped sinusoidal pattern which is associated with quasi-periodic behaviour in the series. If the sample PACF φ̂_{k,k} is significant only up to some low lag p, then a pure autoregressive model AR(p) is appropriate, with q = 0. Otherwise moving-average terms will need to be introduced, as well as autoregressive terms.
The seasonal ARIMA(p,d,q,P,D,Q,s) model allows for correlation at lags which are multiples of the seasonal period s. Taking w_t = ∇^d ∇_s^D x_t, the series is represented in a two-stage manner via an intermediate series e_t:

w_t = Φ_1 w_{t-s} + ⋯ + Φ_P w_{t-s×P} + e_t - Θ_1 e_{t-s} - ⋯ - Θ_Q e_{t-s×Q},   (3)

e_t = φ_1 e_{t-1} + ⋯ + φ_p e_{t-p} + a_t - θ_1 a_{t-1} - ⋯ - θ_q a_{t-q},   (4)

where Φ_i, Θ_i are the seasonal parameters and P and Q are the corresponding orders. Again, w_t may be replaced by w_t - c.
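For intuition, a series following model (1) can be simulated directly from the difference equation. The sketch below is illustrative (the burn-in length and function name are our own choices) and assumes the parameters already satisfy the stationarity and invertibility constraints:

    import numpy as np

    def simulate_arma(phi, theta, n, sigma_a=1.0, burn=200, seed=0):
        # Generate w_t = phi_1 w_{t-1} + ... + phi_p w_{t-p}
        #              + a_t - theta_1 a_{t-1} - ... - theta_q a_{t-q}  (model (1)),
        # discarding a burn-in so start-up transients die away.
        rng = np.random.default_rng(seed)
        p, q = len(phi), len(theta)
        a = rng.normal(0.0, sigma_a, n + burn)
        w = np.zeros(n + burn)
        for t in range(n + burn):
            ar = sum(phi[i] * w[t-1-i] for i in range(p) if t - 1 - i >= 0)
            ma = sum(theta[j] * a[t-1-j] for j in range(q) if t - 1 - j >= 0)
            w[t] = ar + a[t] - ma
        return w[burn:]

    # e.g. simulate_arma(phi=[0.8], theta=[0.4], n=200) gives an ARMA(1,1) sample.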

ARIMA model estimation

In theory, the parameters of an ARIMA model are determined by a sufficient number of autocorrelations ρ_1, ρ_2, …. Using the sample values r_1, r_2, … in their place, it is usually (but not always) possible to solve for the corresponding ARIMA parameters.
These are rapidly computed but are not fully efficient estimates, particularly if moving-average parameters are present. They do provide useful preliminary values for an efficient but relatively slow iterative method of estimation. This is based on the least squares principle, by which parameters are chosen to minimize the sum of squares of the innovations a_t, which are regenerated from the data using (2), or the reverse of (3) and (4) in the case of seasonal models.
Lack of knowledge of terms on the right-hand side of (2), when t = 1, 2, …, max(p,q), is overcome by introducing q unknown presample values w_0, w_{-1}, …, w_{1-q} which are estimated as nuisance parameters, and using correction for transient errors due to the autoregressive terms. If the data w = (w_1, w_2, …, w_N) is viewed as a single sample from a multivariate Normal density whose covariance matrix V is a function of the ARIMA model parameters, then the exact likelihood of the parameters is

-(1/2) log|V| - (1/2) w^T V^{-1} w.

The least squares criterion as outlined above is equivalent to using the quadratic form

QF = w^T V^{-1} w

as an objective function to be minimized. Neglecting the term -(1/2) log|V| yields estimates which differ very little from the exact likelihood except in small samples, or in seasonal models with a small number of whole seasons contained in the data. In these cases bias in moving-average parameters may cause them to stick at the boundary of their constraint region, resulting in failure of the estimation method.
Approximate standard errors of the parameter estimates and the correlations between them are available after estimation.
The model residuals, â_t, are the innovations resulting from the estimation and are usually examined for the presence of autocorrelation as a check on the adequacy of the model.

ARIMA model forecasting

An ARIMA model is particularly suited to extrapolation of a time series. The model equations are simply used for t = n+1, n+2, …, replacing the unknown future values of a_t by zero. This produces future values of w_t, and if differencing has been used this process is reversed (the so-called integration part of ARIMA models) to construct future values of x_t.
Forecast error limits are easily deduced.
This process requires knowledge only of the model orders and parameters together with a limited set of the terms a_{t-i}, e_{t-i}, w_{t-i}, x_{t-i} which appear on the right-hand side of the models (3) and (4) (and the differencing equations) when t = n. It does not require knowledge of the whole series.
We call this the state set. It is conveniently constituted after model estimation. Moreover, if new observations x_{n+1}, x_{n+2}, … come to hand, then the model equations can easily be used to update the state set before constructing forecasts from the end of the new observations. This is particularly useful when forecasts are constructed on a regular basis. The new innovations a_{n+1}, a_{n+2}, … may be compared with the residual standard deviation, σ_a, of the model used for forecasting, as a check that the model is continuing to forecast adequately.
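The forecasting recursion is easily sketched for the non-seasonal, undifferenced case: the model equation is applied with future innovations set to zero, using only the state set of recent values. The code is illustrative (names ours), and the integration step needed for differenced series is omitted:

    import numpy as np

    def arma_forecast(w, a, phi, theta, n_ahead):
        # Apply model (1) for t = n+1, n+2, ... with future innovations a_t = 0.
        # w and a hold the observed stationary series and its estimated
        # innovations; only their last max(p, q) values (the state set) are used.
        p, q = len(phi), len(theta)
        w_state = list(w[len(w)-p:]) if p else []
        a_state = list(a[len(a)-q:]) if q else []
        forecasts = []
        for _ in range(n_ahead):
            ar = sum(phi[i] * w_state[-1-i] for i in range(p))
            ma = sum(theta[j] * a_state[-1-j] for j in range(q))
            forecasts.append(ar - ma)     # future a_t replaced by zero
            w_state.append(forecasts[-1])
            a_state.append(0.0)
        return np.array(forecasts)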

Exponential Smoothing

Exponential smoothing is a relatively simple method of short term forecasting for a time series. A variety of different smoothing methods are possible, including single exponential, Brown's double exponential, linear Holt (also called double exponential smoothing in some references), additive Holt–Winters and multiplicative Holt–Winters. The choice of smoothing method used depends on the characteristics of the time series. If the mean of the series is only slowly changing then single exponential smoothing may be suitable. If there is a trend in the time series, which itself may be slowly changing, then linear Holt smoothing may be suitable. If there is a seasonal component to the time series, e.g., daily or monthly data, then one of the two Holt–Winters methods may be suitable.
For a time series y_t, for t = 1, 2, …, n, each smoothing method is defined by a set of recursions for the mean m_t, trend r_t and seasonal component s_t. The multiplicative Holt–Winters smoothing, for example, is defined by

m_t = α y_t / s_{t-p} + (1 - α)(m_{t-1} + φ r_{t-1}),
r_t = γ(m_t - m_{t-1}) + (1 - γ) φ r_{t-1},
s_t = β y_t / m_t + (1 - β) s_{t-p},
ŷ_{t+f} = (m_t + Σ_{i=1}^{f} φ^i r_t) × s_{t-p},
var(ŷ_{t+f}) = var(ε_t) (Σ_{i=0}^{∞} Σ_{j=0}^{p-1} (ψ_{j+ip} s_{t+f} / s_{t+f-j})^2),

where ψ is defined as in the additive Holt–Winters smoothing, m_t is the mean, r_t is the trend and s_t is the seasonal component at time t, with p being the seasonal order. The f-step ahead forecasts are given by ŷ_{t+f} and their variances by var(ŷ_{t+f}). The term var(ε_t) is estimated as the mean deviation.
The parameters α, β and γ control the amount of smoothing. The nearer these parameters are to one, the greater the emphasis on the current data point. Generally these parameters take values in the range 0.1 to 0.3. The linear Holt and two Holt–Winters smoothers include an additional parameter, φ, which acts as a trend dampener. For 0.0 < φ < 1.0 the trend is dampened and for φ > 1.0 the forecast function has an exponential trend; φ = 0.0 removes the trend term from the forecast function and φ = 1.0 does not dampen the trend.
For all methods, values for α, β, γ and φ can be chosen by trying different values and then visually comparing the results by plotting the fitted values alongside the original data. Alternatively, for single exponential smoothing a suitable value for α can be obtained by fitting an ARIMA(0,1,1) model. For Brown's double exponential smoothing and linear Holt smoothing with no dampening (i.e., φ = 1.0), suitable values for α and, in the case of linear Holt smoothing, γ can be obtained by fitting an ARIMA(0,2,2) model. Similarly, the linear Holt method, with φ ≠ 1.0, can be expressed as an ARIMA(1,2,2) model and the additive Holt–Winters, with no dampening (φ = 1.0), can be expressed as a seasonal ARIMA model of seasonal order p of the form ARIMA(0,1,p+1)(0,1,0). There is no similar procedure for obtaining parameter values for the multiplicative Holt–Winters method, or the additive Holt–Winters method with φ ≠ 1.0. In these cases parameters could be selected by minimizing a measure of fit using nonlinear optimization.
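As the simplest of the methods above, single exponential smoothing reduces to one recursion for the level m_t. A minimal sketch (function name ours; it assumes every f-step ahead forecast equals the final level, which is the standard behaviour of this method):

    def single_exponential(y, alpha, n_ahead=1):
        # Level recursion m_t = alpha*y_t + (1 - alpha)*m_{t-1}, started at m_1 = y_1.
        m = y[0]
        for y_t in y[1:]:
            m = alpha * y_t + (1.0 - alpha) * m
        return [m] * n_ahead              # flat forecast function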

Univariate Spectral Analysis

In describing a time series using spectral analysis the fundamental components are taken to be sinusoidal waves of the form R cos(ωt + φ), which for a given angular frequency ω, 0 ≤ ω ≤ π, is specified by its amplitude R > 0 and phase φ, 0 ≤ φ < 2π. Thus in a time series of n observations it is not possible to distinguish more than n/2 independent sinusoidal components. The frequency range 0 ≤ ω ≤ π is limited to the shortest wavelength of two sampling units because any wave of higher frequency is indistinguishable upon sampling from (or is aliased with) a wave within this range. Spectral analysis follows the idea that for a series made up of a finite number of sine waves the amplitude of any component at frequency ω is given to order 1/n by

R^2 = (1/n^2) |Σ_{t=1}^{n} x_t e^{iωt}|^2.

The sample spectrum

For a series x_1, x_2, …, x_n this is defined as

f*(ω) = (1/(2nπ)) |Σ_{t=1}^{n} x_t e^{iωt}|^2,

the scaling factor now being chosen in order that

2 ∫_0^π f*(ω) dω = σ_x^2,

i.e., the spectrum indicates how the sample variance (σ_x^2) of the series is distributed over components in the frequency range 0 ≤ ω ≤ π.
It may be demonstrated that f*(ω) is equivalently defined in terms of the sample ACF r_k of the series as

f*(ω) = (1/(2π)) (c_0 + 2 Σ_{k=1}^{n-1} c_k cos kω),

where c_k = σ_x^2 r_k are the sample autocovariance coefficients.
If the series x_t does contain a deterministic sinusoidal component of amplitude R, this will be revealed in the sample spectrum as a sharp peak of approximate width π/n and height (n/2π)R^2. This is called the discrete part of the spectrum, the variance R^2 associated with this component being in effect concentrated at a single frequency.
If the series x_t has no deterministic components, i.e., is purely stochastic, being stationary with ACF r_k, then with increasing sample size the expected value of f*(ω) converges to the theoretical spectrum, the continuous part

f(ω) = (1/(2π)) (γ_0 + 2 Σ_{k=1}^{∞} γ_k cos(ωk)),

where γ_k are the theoretical autocovariances.
The sample spectrum does not however converge to this value but at each frequency point fluctuates about the theoretical spectrum with an exponential distribution, being independent at frequencies separated by an interval of 2π/n or more. Various devices are therefore employed to smooth the sample spectrum and reduce its variability. Much of the strength of spectral analysis derives from the fact that the error limits are multiplicative, so that features may still show up as significant in a part of the spectrum which has a generally low level, whereas they are completely masked by other components in the original series. The spectrum can help to distinguish deterministic cyclical components from the stochastic quasi-cycle components, which produce a broader peak in the spectrum. (The deterministic components can be removed by regression and the remaining part represented by an ARIMA model.)
A large discrete component in a spectrum can distort the continuous part over a large frequency range surrounding the corresponding peak. This may be alleviated at the cost of slightly broadening the peak by tapering a portion of the data at each end of the series with weights which decay smoothly to zero. It is usual to correct for the mean of the series and for any linear trend by simple regression, since they would similarly distort the spectrum.
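The unsmoothed sample spectrum f*(ω) at the Fourier frequencies can be sketched via the FFT. This is an illustrative fragment only (mean correction included as recommended above; tapering and trend removal are omitted):

    import numpy as np

    def sample_spectrum(x):
        # Unsmoothed sample spectrum f*(w) = |sum_t x_t e^{iwt}|^2 / (2*pi*n),
        # evaluated at the Fourier frequencies w_j = 2*pi*j/n after mean correction.
        x = np.asarray(x, dtype=float)
        n = len(x)
        periodogram = np.abs(np.fft.rfft(x - x.mean()))**2 / (2.0 * np.pi * n)
        freqs = 2.0 * np.pi * np.arange(periodogram.size) / n
        return freqs, periodogram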

Spectral smoothing by lag window

The estimate is calculated directly from the sample autocovariances c_k as

f̂(ω) = (1/(2π)) (c_0 + 2 Σ_{k=1}^{M-1} w_k c_k cos kω),

the smoothing being induced by the lag window weights w_k, which extend up to a truncation lag M which is generally much less than n. The smaller the value of M, the greater the degree of smoothing, the spectrum estimates being independent only at a wider frequency separation indicated by the bandwidth b, which is proportional to 1/M. It is wise, however, to calculate the spectrum at intervals appreciably less than this. Although greater smoothing narrows the error limits, it can also distort the spectrum, particularly by flattening peaks and filling in troughs.

Direct spectral smoothing

The unsmoothed sample spectrum is calculated for a fine division of frequencies, then averaged over intervals centred on each frequency point for which the smoothed spectrum is required. This is usually at a coarser frequency division. The bandwidth corresponds to the width of the averaging interval.

Linear Lagged Relationships Between Time Series

We now consider the context in which one time series, called the dependent or output series, y_1, y_2, …, y_n, is believed to depend on one or more explanatory or input series, e.g., x_1, x_2, …, x_n. This dependency may follow a simple linear regression, e.g.,

y_t = v x_t + n_t,

or more generally may involve lagged values of the input,

y_t = v_0 x_t + v_1 x_{t-1} + v_2 x_{t-2} + ⋯ + n_t.

The sequence v_0, v_1, v_2, … is called the impulse response function (IRF) of the relationship. The term n_t represents that part of y_t which cannot be explained by the input, and it is assumed to follow a univariate ARIMA model. We call n_t the (output) noise component of y_t, and it includes any constant term in the relationship. It is assumed that the input series, x_t, and the noise component, n_t, are independent.
The part of y_t which is explained by the input is called the input component z_t:

z_t = v_0 x_t + v_1 x_{t-1} + v_2 x_{t-2} + ⋯,

so y_t = z_t + n_t.
The eventual aim is to model both these components of y_t on the basis of observations of y_1, y_2, …, y_n and x_1, x_2, …, x_n. In applications to forecasting or control both components are important. In general there may be more than one input series, e.g., x_{1,t} and x_{2,t}, which are assumed to be independent, with corresponding components z_{1,t} and z_{2,t}, so

y_t = z_{1,t} + z_{2,t} + n_t.

Transfer function models

In a similar manner to that in which the structure of a univariate series may be represented by a finite-parameter ARIMA model, the structure of an input component may be represented by a transfer function (TF) model with delay time b, p autoregressive-like parameters δ_1, δ_2, …, δ_p and q+1 moving-average-like parameters ω_0, ω_1, …, ω_q:

z_t = δ_1 z_{t-1} + δ_2 z_{t-2} + ⋯ + δ_p z_{t-p} + ω_0 x_{t-b} - ω_1 x_{t-b-1} - ⋯ - ω_q x_{t-b-q}.   (5)

If p > 0 this represents an IRF which is infinite in extent and decays with geometric and/or sinusoidal behaviour. The parameters δ_1, δ_2, …, δ_p are constrained to satisfy a stability condition identical to the stationarity condition of autoregressive models. There is no constraint on ω_0, ω_1, …, ω_q.

Cross-correlations

An important tool for investigating how an input series x_t affects an output series y_t is the sample cross-correlation function (CCF) r_xy(k), for k = 0, 1, …, between the series. If x_t and y_t are (jointly) stationary time series this is an estimator of the theoretical quantity

ρ_xy(k) = corr(x_t, y_{t+k}).

The sequence r_yx(k), for k = 0, 1, …, is distinct from r_xy(k), though it is possible to interpret

r_yx(k) = r_xy(-k).

When the series y_t and x_t are believed to be related by a TF model, the CCF is determined by the IRF v_0, v_1, v_2, … and the ACF of the input x_t.
In the particular case when x_t is an uncorrelated series or white noise (and is uncorrelated with any other inputs):

ρ_xy(k) ∝ v_k,

and the sample CCF can provide an estimate of v_k:

ṽ_k = (s_y/s_x) r_xy(k),

where s_y and s_x are the sample standard deviations of y_t and x_t, respectively.
In theory the IRF coefficients v_b, …, v_{b+p+q} determine the parameters in the TF model, and using ṽ_k to estimate v_k it is possible to solve for preliminary estimates of δ_1, δ_2, …, δ_p, ω_0, ω_1, …, ω_q.
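A sketch of the sample CCF, together with the rough impulse response estimate ṽ_k it yields when the input is white noise (illustrative only; function name ours):

    import numpy as np

    def sample_ccf(x, y, max_lag):
        # Sample cross-correlations r_xy(k) estimating corr(x_t, y_{t+k}),
        # for k = 0, 1, ..., max_lag.
        x = np.asarray(x, dtype=float) - np.mean(x)
        y = np.asarray(y, dtype=float) - np.mean(y)
        n = len(x)
        sx, sy = x.std(), y.std()
        return np.array([x[:n-k] @ y[k:] / (n * sx * sy)
                         for k in range(max_lag + 1)])

    # When x_t is white noise, a rough IRF estimate is
    # v_k ~ (s_y / s_x) * r_xy(k), i.e. (sy / sx) * sample_ccf(x, y, max_lag).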

Prewhitening or filtering by an ARIMA model

In general an input series x_t is not white noise, but may be represented by an ARIMA model with innovations or residuals a_t which are white noise. If precisely the same operations by which a_t is generated from x_t are applied to the output y_t to produce a series b_t, then the transfer function relationship between y_t and x_t is preserved between b_t and a_t. It is then possible to estimate

ṽ_k = (s_b/s_a) r_ab(k).

The procedure of generating a_t from x_t (and b_t from y_t) is called prewhitening or filtering by an ARIMA model. Although a_t is necessarily white noise, this is not generally true of b_t.

Multi-input model estimation

The term multi-input model is used for the situation when one output series y_t is related to one or more input series x_{j,t}, as described in Section [Linear Lagged Relationships Between Time Series]. If for a given input the relationship is a simple linear regression, it is called a simple input; otherwise it is a transfer function input. The error or noise term follows an ARIMA model.
Given that the orders of all the transfer function models and the ARIMA model of a multi-input model have been specified, the various parameters in those models may be (simultaneously) estimated.
The procedure used is closely related to the least squares principle applied to the innovations in the ARIMA noise model.
The innovations are derived for any proposed set of parameter values by calculating the response of each input to the transfer functions and then evaluating the noise n_t as the difference between this response (combined for all the inputs) and the output. The innovations are derived from the noise using the ARIMA model in the same manner as for a univariate series, and as described in Section [ARIMA models].
In estimating the parameters, consideration has to be given to the lagged terms in the various model equations which are associated with times prior to the observation period, and are therefore unknown. The function descriptions provide the necessary detail as to how this problem is treated.
Also, as described in Section [ARIMA model estimation], the sum of squares criterion

S = Σ a_t^2

is related to the quadratic form in the exact log-likelihood of the parameters:

-(1/2) log|V| - (1/2) w^T V^{-1} w.

Here w is the vector of appropriately differenced noise terms, and

w^T V^{-1} w = S/σ_a^2,

where σ_a^2 is the innovation variance parameter.
The least squares criterion is therefore identical to minimization of the quadratic form, but is not identical to exact likelihood. Because V may be expressed as M σ_a^2, where M is a function of the ARIMA model parameters, substitution of σ_a^2 by its maximum likelihood (ML) estimator yields a concentrated (or profile) likelihood which is a function of

|M|^{1/N} S.

N is the length of the differenced noise series w, and |M| = det M.
Use of the above quantity, called the deviance, D, as an objective function is preferable to the use of S alone, on the grounds that it is equivalent to exact likelihood, and yields estimates with better properties. However, there is an appreciable computational penalty in calculating D, and in large samples it differs very little from S, except in the important case of seasonal ARIMA models where the number of whole seasons within the data length must also be large.
You are given the option of taking the objective function to be either S or D, or a third possibility, the marginal likelihood. This is similar to exact likelihood but can counteract bias in the ARIMA model due to the fitting of a large number of simple inputs.
Approximate standard errors of the parameter estimates and the correlations between them are available after estimation.
The model residuals â_t are the innovations resulting from the estimation, and they are usually examined for the presence of either autocorrelation or cross-correlation with the inputs. Absence of such correlation provides some confirmation of the adequacy of the model.

Multi-input model forecasting

A multi-input model may be used to forecast the output series provided future values (possibly forecasts) of the input series are supplied.
Construction of the forecasts requires knowledge only of the model orders and parameters, together with a limited set of the most recent variables which appear in the model equations. This is called the state set. It is conveniently constituted after model estimation. Moreover, if new observations y_{n+1}, y_{n+2}, … of the output series and x_{n+1}, x_{n+2}, … of (all) the independent input series become available, then the model equations can easily be used to update the state set before constructing forecasts from the end of the new observations. The new innovations a_{n+1}, a_{n+2}, … generated in this updating may be used to monitor the continuing adequacy of the model.

Transfer function model filtering

In many time series applications it is desired to calculate the response (or output) of a transfer function (TF) model for a given input series.
Smoothing, detrending, and seasonal adjustment are typical applications. You must specify the orders and parameters of a TF model for the purpose being considered. This may then be applied to the input series.
Again, problems may arise due to ignorance of the input series values prior to the observation period. The transient errors which can arise from this may be substantially reduced by using ‘backforecasts’ of these unknown observations.

Multivariate Time Series

Multi-input modelling represents one output time series in terms of one or more input series. Although there are circumstances in which it may be more appropriate to analyse a set of time series by modelling each one in turn as the output series with the remainder as inputs, there is a more symmetric approach in such a context. These models are known as vector autoregressive moving-average (VARMA) models.

Differencing and transforming a multivariate time series

As in the case of a univariate time series, it may be useful to simplify the series by differencing operations which may be used to remove linear or seasonal trends, thus ensuring that the resulting series to be used in the model estimation is stationary. It may also be necessary to apply transformations to the individual components of the multivariate series in order to stabilize the variance. Commonly used transformations are the log and square root transformations.

Model identification for a multivariate time series

Multivariate analogues of the autocorrelation and partial autocorrelation functions are available for analysing a set of k time series, x_{i,1}, x_{i,2}, …, x_{i,n}, for i = 1, 2, …, k, thereby making it possible to obtain some understanding of a suitable VARMA model for the observed series.
It is assumed that the time series have been differenced if necessary, and that they are jointly stationary. The lagged correlations between all possible pairs of series, i.e.,

ρ_{ijl} = corr(x_{i,t}, x_{j,t+l}),

are then taken to provide an adequate description of the statistical relationships between the series. These quantities are estimated by sample auto- and cross-correlations r_{ijl}. For each l these may be viewed as elements of a (lagged) autocorrelation matrix.
Thus consider the vector process x_t (with elements x_{it}) and lagged autocovariance matrices Γ_l with elements σ_i σ_j ρ_{ijl}, where σ_i^2 = var(x_{i,t}). Correspondingly, Γ_l is estimated by the matrix C_l with elements s_i s_j r_{ijl}, where s_i^2 is the sample variance of x_{it}.
For a series with short-term cross-correlation only, i.e., r_{ijl} is not significant beyond some low lag q, the pure vector MA(q) model, with no autoregressive parameters, i.e., p = 0, is appropriate.
The correlation matrices provide a description of the joint statistical properties of the series. It is also possible to calculate matrix quantities which are closely analogous to the partial autocorrelations of univariate series (see Section [Partial autocorrelations]). Wei (1990) discusses both the partial autoregression matrices proposed by Tiao and Box (1981) and partial lag correlation matrices.
In the univariate case the PACF between x_t and x_{t+l} is the correlation coefficient between the two after removing the linear dependence on each of the intervening variables x_{t+1}, x_{t+2}, …, x_{t+l-1}. This partial autocorrelation may also be obtained as the last regression coefficient associated with x_t when regressing x_{t+l} on its l lagged variables x_{t+l-1}, x_{t+l-2}, …, x_t. Tiao and Box (1981) extended this method to the multivariate case to define the partial autoregression matrix. Heyse and Wei (1985) also extended the univariate definition of the PACF to derive the correlation matrix between the vectors x_t and x_{t+l} after removing the linear dependence on each of the intervening vectors x_{t+1}, x_{t+2}, …, x_{t+l-1}, the partial lag correlation matrix.
Note that the partial lag correlation matrix is a correlation coefficient matrix since each of its elements is a properly normalized correlation coefficient. This is not true of the partial autoregression matrices (except in the univariate case for which the two types of matrix are the same). The partial lag correlation matrix at lag 1 also reduces to the regular correlation matrix at lag 1; this is not true of the partial autoregression matrices (again except in the univariate case).
Both the above share the same cut-off property for autoregressive processes; that is, for an autoregressive process of order p, the terms of the matrix at lags p+1 and greater are zero. Thus if the sample partial cross-correlations are significant only up to some low lag p then a pure vector AR(p) model is appropriate with q = 0. Otherwise moving-average terms will need to be introduced as well as autoregressive terms.
Under the hypothesis that x_t is an autoregressive process of order l-1, n times the sum of the squared elements of the partial lag correlation matrix at lag l is asymptotically distributed as a χ^2 variable with k^2 degrees of freedom, where k is the dimension of the multivariate time series. This provides a diagnostic aid for determining the order of an autoregressive model.
The partial autoregression matrices may be found by solving a multivariate version of the Yule–Walker equations to find the autoregression matrices, using the final regression matrix coefficient as the partial autoregression matrix at that particular lag.
The basis of these calculations is a multivariate autoregressive model:
x_t = φ_{l,1} x_{t-1} + ⋯ + φ_{l,l} x_{t-l} + e_{l,t},

where φ_{l,1}, φ_{l,2}, …, φ_{l,l} are matrix coefficients, and e_{l,t} is the vector of errors in the prediction. These coefficients may be rapidly computed using a recursive technique which requires, and simultaneously furnishes, a backward prediction equation:

x_{t-l-1} = ψ_{l,1} x_{t-l} + ψ_{l,2} x_{t-l+1} + ⋯ + ψ_{l,l} x_{t-1} + f_{l,t}

(in the univariate case ψ_{l,i} = φ_{l,i}).
The forward prediction equation coefficients, φ_{l,i}, are of direct interest, together with the covariance matrix D_l of the prediction errors e_{l,t}. The calculation of these quantities for a particular maximum equation lag l = L involves calculation of the same quantities for increasing values of l = 1, 2, …, L.
The quantities v_l = det D_l / det Γ_0 may be viewed as generalized variance ratios, and provide a measure of the efficiency of prediction (the smaller the better). The reduction from v_{l-1} to v_l which occurs on extending the order of the predictor to l may be represented as

v_l = v_{l-1}(1 - ρ_l^2),

where ρ_l^2 is a multiple squared partial autocorrelation coefficient associated with k^2 degrees of freedom.
Sample estimates of all the above quantities may be derived by using the series covariance matrices C_l, for l = 1, 2, …, L, in place of Γ_l. The best lag for prediction purposes may be chosen as that which yields the minimum final prediction error (FPE) criterion:

FPE(l) = v_l × (1 + l k^2/n)/(1 - l k^2/n).
An alternative method of estimating the sample partial autoregression matrices is by using multivariate least squares to fit a series of multivariate autoregressive models of increasing order.

VARMA model estimation

The cross-correlation structure of a stationary multivariate time series may often be represented by a model with a small number of parameters belonging to the VARMA class. If the stationary series w_t has been derived by transforming and/or differencing the original series x_t, then w_t is said to follow the VARMA model

w_t = φ_1 w_{t-1} + ⋯ + φ_p w_{t-p} + ε_t - θ_1 ε_{t-1} - ⋯ - θ_q ε_{t-q},

where ε_t is a vector of uncorrelated residual series (white noise) with zero mean and constant covariance matrix Σ, φ_1, φ_2, …, φ_p are the p autoregressive (AR) parameter matrices and θ_1, θ_2, …, θ_q are the q moving-average (MA) parameter matrices. If w_t has a nonzero mean μ, then this can be allowed for by replacing w_t, w_{t-1}, … by w_t - μ, w_{t-1} - μ, … in the model.
A series generated by this model will only be stationary provided restrictions are placed on φ_1, φ_2, …, φ_p to avoid unstable growth of w_t. These are stationarity constraints. The series ε_t may also be usefully interpreted as the linear innovations in w_t, i.e., the error if w_t were to be predicted using the information in all past values w_{t-1}, w_{t-2}, …, provided also that θ_1, θ_2, …, θ_q satisfy what are known as invertibility constraints. This allows the series ε_t to be generated by rewriting the model equation as

ε_t = w_t - φ_1 w_{t-1} - ⋯ - φ_p w_{t-p} + θ_1 ε_{t-1} + ⋯ + θ_q ε_{t-q}.
The method of maximum likelihood (ML) may be used to estimate the parameters of a specified VARMA model from the observed multivariate time series together with their standard errors and correlations.
The residuals from the model may be examined for the presence of autocorrelations as a check on the adequacy of the fitted model.

VARMA model forecasting

Forecasts of the series may be constructed using a multivariate version of the univariate method. Efficient methods are available for updating the forecasts each time new observations become available.

Cross-spectral Analysis

The relationship between two time series may be investigated in terms of their sinusoidal components at different frequencies. At frequency ω a component of y_t of the form

R_y(ω) cos(ωt - φ_y(ω))

has its amplitude R_y(ω) and phase lag φ_y(ω) estimated by

R_y(ω) e^{iφ_y(ω)} = (1/n) Σ_{t=1}^{n} y_t e^{iωt},

and similarly for x_t. In the univariate analysis only the amplitude was important; in the cross analysis the phase is important.

The sample cross-spectrum

This is defined by

f*_xy(ω) = (1/(2πn)) (Σ_{t=1}^{n} y_t e^{iωt}) (Σ_{t=1}^{n} x_t e^{-iωt}).

It may be demonstrated that this is equivalently defined in terms of the sample CCF, r_xy(k), of the series as

f*_xy(ω) = (1/(2π)) Σ_{k=-(n-1)}^{n-1} c_xy(k) e^{iωk},

where c_xy(k) = s_x s_y r_xy(k) is the cross-covariance function.

The amplitude and phase spectrum

The cross-spectrum is specified by its real part or cospectrum cf*(ω) and imaginary part or quadrature spectrum qf*(ω), but for the purpose of interpretation the cross-amplitude spectrum and phase spectrum are useful:

A*(ω) = |f*_xy(ω)|,   φ*(ω) = arg(f*_xy(ω)).

If the series x_t and y_t contain deterministic sinusoidal components of amplitudes R_y, R_x and phases φ_y, φ_x at frequency ω, then A*(ω) will have a peak of approximate width π/n and height (n/2π) R_y R_x at that frequency, with corresponding phase φ*(ω) = φ_y - φ_x. This supplies no information that cannot be obtained from the two series separately. The statistical relationship between the series is better revealed when the series are purely stochastic and jointly stationary, in which case the expected value of f*_xy(ω) converges with increasing sample size to the theoretical cross-spectrum

f_xy(ω) = (1/(2π)) Σ_{k=-∞}^{∞} γ_xy(k) e^{iωk},

where γ_xy(k) = cov(x_t, y_{t+k}). The sample spectrum, as in the univariate case, does not converge to the theoretical spectrum without some form of smoothing which either implicitly (using a lag window) or explicitly (using a frequency window) averages the sample spectrum f*_xy(ω) over wider bands of frequency to obtain a smoothed estimate f̂_xy(ω).

The coherency spectrum

If there is no statistical relationship between the series at a given frequency, then f_xy(ω) = 0, and the smoothed estimate f̂_xy(ω) will be close to 0. This is assessed by the squared coherency between the series:

Ŵ(ω) = |f̂_xy(ω)|^2 / (f̂_xx(ω) f̂_yy(ω)),

where f̂_xx(ω) is the corresponding smoothed univariate spectrum estimate for x_t, and similarly for y_t. The coherency can be treated as a squared multiple correlation. It is similarly invariant in theory not only to simple scaling of x_t and y_t, but also to filtering of the two series, and provides a useful test statistic for the relationship between autocorrelated series. Note that without smoothing,

|f*_xy(ω)|^2 = f*_xx(ω) f*_yy(ω),

so the coherency is 1 at all frequencies, just as a correlation is 1 for a sample of size 1. Thus smoothing is essential for cross-spectrum analysis.

The gain and noise spectrum

If y_t is believed to be related to x_t by a linear lagged relationship as in Section [Linear Lagged Relationships Between Time Series], i.e.,

y_t = v_0 x_t + v_1 x_{t-1} + v_2 x_{t-2} + ⋯ + n_t,

then the theoretical cross-spectrum is

f_xy(ω) = V(ω) f_xx(ω),

where

V(ω) = G(ω) e^{iφ(ω)} = Σ_{k=0}^{∞} v_k e^{ikω}

is called the frequency response of the relationship.
Thus if x_t were a sinusoidal wave at frequency ω (and n_t were absent), y_t would be similar but multiplied in amplitude by G(ω) and shifted in phase by φ(ω). Furthermore, the theoretical univariate spectrum of y_t is

f_yy(ω) = G(ω)^2 f_xx(ω) + f_n(ω),

where n_t, with spectrum f_n(ω), is assumed independent of the input x_t.
Cross-spectral analysis thus furnishes estimates of the gain

Ĝ(ω) = |f̂_xy(ω)| / f̂_xx(ω)

and the phase

φ̂(ω) = arg(f̂_xy(ω)).

From these representations of the estimated frequency response V̂(ω), parametric TF models may be recognized and selected. The noise spectrum may also be estimated as

f̂_{y·x}(ω) = f̂_yy(ω)(1 - Ŵ(ω)),

a formula which reflects the fact that in essence a regression is being performed of the sinusoidal components of y_t on those of x_t over each frequency band.
Interpretation of the frequency response may be aided by extracting from V̂(ω) estimates of the IRF v̂_k. It is assumed that there is no anticipatory response between y_t and x_t, i.e., no coefficients v_k with k = -1 or -2 are needed (their presence might indicate feedback between the series).

Cross-spectrum smoothing by lag window

The estimate of the cross-spectrum is calculated from the sample cross-covariances as

f̂_xy(ω) = (1/(2π)) Σ_{k=-M+S}^{M+S} w_{k-S} c_xy(k) e^{iωk}.

The lag window w_k extends up to a truncation lag M as in the univariate case, but its centre is shifted by an alignment lag S, usually chosen to coincide with the peak cross-correlation. This is equivalent to an alignment of the series for peak cross-correlation at lag 0, and reduces bias in the phase estimation.
The selection of the truncation lag M, which fixes the bandwidth of the estimate, is based on the same criteria as for univariate series, and the same choice of M and window shape should be used as in univariate spectrum estimation to obtain valid estimates of the coherency, gain, etc., and test statistics.

Direct smoothing of the cross-spectrum

The computations are exactly as for smoothing of the univariate spectrum except that allowance is made for an implicit alignment shift S between the series.

Kalman Filters

Kalman filtering provides a method for the analysis of multidimensional time series. The underlying model is:

X_{t+1} = A_t X_t + B_t W_t,   (6)

Y_t = C_t X_t + V_t,   (7)

where X_t is the unobserved state vector, Y_t is the observed measurement vector, W_t is the state noise, V_t is the measurement noise, A_t is the state transition matrix, B_t is the noise coefficient matrix and C_t is the measurement coefficient matrix at time t. The state noise and the measurement noise are assumed to be uncorrelated with zero mean and covariance matrices:

E{W_t W_t^T} = Q_t   and   E{V_t V_t^T} = R_t.

If the system matrices A_t, B_t, C_t and the covariance matrices Q_t, R_t are known then Kalman filtering can be used to compute the minimum variance estimate of the stochastic variable X_t.
The estimate of X_t given observations Y_1 to Y_{t-1} is denoted by X̂_{t|t-1}, with state covariance matrix E{X̂_{t|t-1} X̂_{t|t-1}^T} = P_{t|t-1}, while the estimate of X_t given observations Y_1 to Y_t is denoted by X̂_{t|t}, with covariance matrix E{X̂_{t|t} X̂_{t|t}^T} = P_{t|t}.
The update of the estimate, X̂_{t+1|t}, from time t to time t+1, is computed in two stages.
First, the update equations are

X̂_{t|t} = X̂_{t|t-1} + K_t r_t,   P_{t|t} = (I - K_t C_t) P_{t|t-1},

where the residual r_t = Y_t - C_t X̂_{t|t-1} has an associated covariance matrix H_t = C_t P_{t|t-1} C_t^T + R_t, and K_t is the Kalman gain matrix with

K_t = P_{t|t-1} C_t^T H_t^{-1}.

The second stage is the one-step-ahead prediction equations given by

X̂_{t+1|t} = A_t X̂_{t|t},   P_{t+1|t} = A_t P_{t|t} A_t^T + B_t Q_t B_t^T.

These two stages can be combined to give the one-step-ahead update-prediction equations

X̂_{t+1|t} = A_t X̂_{t|t-1} + A_t K_t r_t.

The above equations thus provide a method for recursively calculating the estimates of the state vectors X̂_{t|t} and X̂_{t+1|t} and their covariance matrices P_{t|t} and P_{t+1|t} from their previous values. This recursive procedure can be viewed in a Bayesian framework as the updating of the prior by the data Y_t.
The initial values X̂_{1|0} and P_{1|0} are required to start the recursion. For stationary systems, P_{1|0} can be computed from the following equation:

P_{1|0} = A_1 P_{1|0} A_1^T + B_1 Q_1 B_1^T,

which can be solved by iterating on the equation. For X̂_{1|0} the value E{X} can be used if it is available.
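One combined update-prediction recursion, written directly from the equations above, can be sketched as follows. This is an illustrative sketch using explicit matrix inversion; the square root algorithm of the next section is the numerically preferred form:

    import numpy as np

    def kalman_step(x_pred, P_pred, y, A, B, C, Q, R):
        # Takes X_{t|t-1}, P_{t|t-1} and the new measurement Y_t,
        # returns X_{t+1|t}, P_{t+1|t}.
        r = y - C @ x_pred                       # residual
        H = C @ P_pred @ C.T + R                 # residual covariance
        K = P_pred @ C.T @ np.linalg.inv(H)      # Kalman gain
        x_filt = x_pred + K @ r                  # X_{t|t}
        P_filt = (np.eye(len(x_pred)) - K @ C) @ P_pred
        x_next = A @ x_filt                      # X_{t+1|t}
        P_next = A @ P_filt @ A.T + B @ Q @ B.T
        return x_next, P_next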

Computational methods

To improve the stability of the computations the square root algorithm is used. One recursion of the square root covariance filter algorithm can be summarized as follows:

( R_t^{1/2}   C_t S_t   0             )        ( H_t^{1/2}   0         0 )
( 0           A_t S_t   B_t Q_t^{1/2} )  U  =  ( G_t         S_{t+1}   0 ),

where U is an orthogonal transformation triangularizing the left-hand pre-array to produce the right-hand post-array, S_t is the lower triangular Cholesky factor of the state covariance matrix P_{t|t-1}, Q_t^{1/2} and R_t^{1/2} are the lower triangular Cholesky factors of the covariance matrices Q_t and R_t, and H_t^{1/2} is the lower triangular Cholesky factor of the covariance matrix of the residuals. The relationship between the Kalman gain matrix, K_t, and G_t is given by

A_t K_t = G_t (H_t^{1/2})^{-1}.

To improve the efficiency of the computations when the matrices A_t, B_t and C_t do not vary with time the system can be transformed to give a simpler structure. The transformed state vector is U*X where U* is the transformation that reduces the matrix pair (A, C) to lower observer Hessenberg form. That is, the matrix U* is computed such that the compound matrix

( C U*^T     )
( U* A U*^T )

is a lower trapezoidal matrix. The transformations need only be computed once at the start of a series, and the covariance matrices Q_t and R_t can still be time-varying.

Model fitting and forecasting

If the state space model contains unknown parameters, θ, these can be estimated using maximum likelihood (ML). Assuming that W_t and V_t are normal variates, the log-likelihood for observations Y_t, for t = 1, 2, …, n, is given by

constant - (1/2) Σ_{t=1}^{n} ln(det(H_t)) - (1/2) Σ_{t=1}^{n} r_t^T H_t^{-1} r_t.

Optimal estimates for the unknown model parameters θ can then be obtained by using a suitable optimizer function to maximize the likelihood function.
Once the model has been fitted, forecasting can be performed by using the one-step-ahead prediction equations. The one-step-ahead prediction equations can also be used to ‘jump over’ any missing values in the series.

Kalman filter and time series models

Many commonly used time series models can be written as state space models. A univariate ARMA(p,q) model can be cast into the following state space form:

x_t = A x_{t-1} + B ε_t,
w_t = C x_t,

where

A = ( φ_1       1
      φ_2          1
      ⋮                ⋱
      φ_{r-1}             1
      φ_r       0   0   ⋯   0 ),   B = ( 1, -θ_1, -θ_2, …, -θ_{r-1} )^T   and   C = ( 1, 0, 0, …, 0 ),

where r = max(p, q+1).
The representation for a k-variate ARMA(p,q) series (VARMA) is very similar to that given above, except now the state vector is of length kr, the φ and θ are now k×k matrices and the 1s in A, B and C are now the identity matrix of order k. If p < r or q+1 < r then the appropriate φ or θ matrices are set to zero, respectively.
Since the compound matrix

( C )
( A )

is already in lower observer Hessenberg form (i.e., it is lower trapezoidal with zeros in the top right-hand triangle), the invariant Kalman filter algorithm can be used directly without the need to generate a transformation matrix U*.
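A sketch of the construction of A, B and C for a univariate ARMA(p,q) model follows. It is illustrative only; the function name and zero-padding conventions are ours:

    import numpy as np

    def arma_state_space(phi, theta):
        # Build A (r x r), B (r x 1) and C (1 x r) for the state space form
        # above, with r = max(p, q+1) and unused entries padded with zeros.
        p, q = len(phi), len(theta)
        r = max(p, q + 1)
        phi_r = np.concatenate([phi, np.zeros(r - p)])
        theta_r = np.concatenate([theta, np.zeros(r - 1 - q)])
        A = np.zeros((r, r))
        A[:, 0] = phi_r                  # first column holds phi_1, ..., phi_r
        A[:r-1, 1:] = np.eye(r - 1)      # ones on the superdiagonal
        B = np.concatenate([[1.0], -theta_r]).reshape(r, 1)
        C = np.zeros((1, r))
        C[0, 0] = 1.0
        return A, B, C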

GARCH Models

ARCH models and their generalizations

Rather than modelling the mean (for example using regression models) or the autocorrelation (by using ARMA models) there are circumstances in which the variance of a time series needs to be modelled. This is common in financial data modelling where the variance (or standard deviation) is known as volatility. The ability to forecast volatility is a vital part in deciding the risk attached to financial decisions like portfolio selection. The basic model for relating the variance at time t to the variance at previous times is the autoregressive conditional heteroskedastic (ARCH) model. The standard ARCH model is defined as

y_t | ψ_{t-1} ~ N(0, h_t),
h_t = α_0 + Σ_{i=1}^{q} α_i ε_{t-i}^2,

where ψ_t is the information up to time t and h_t is the conditional variance.
In a similar way to that in which autoregressive (AR) models were generalized to ARMA models, the ARCH models have been generalized to GARCH models; see Engle (1982), Bollerslev (1986) and Hamilton (1994):

h_t = α_0 + Σ_{i=1}^{q} α_i ε_{t-i}^2 + Σ_{i=1}^{p} β_i h_{t-i}.
This can be combined with a regression model:

y_t = b_0 + Σ_{i=1}^{k} b_i x_{it} + ε_t,

where ε_t | ψ_{t-1} ~ N(0, h_t) and where x_{it}, for i = 1, 2, …, k, are the exogenous variables.
The above models assume that the change in variance, h_t, is symmetric with respect to the shocks, that is, that a large negative value of ε_{t-1} has the same effect as a large positive value of ε_{t-1}. A frequently observed effect is that a large negative value of ε_{t-1} often leads to a greater variance than a large positive value. The following three asymmetric models represent this effect in different ways using the parameter γ as a measure of the asymmetry.
Type I AGARCH(p,q):

h_t = α_0 + Σ_{i=1}^{q} α_i (ε_{t-i} + γ)^2 + Σ_{i=1}^{p} β_i h_{t-i}.

Type II AGARCH(p,q):

h_t = α_0 + Σ_{i=1}^{q} α_i (|ε_{t-i}| + γ ε_{t-i})^2 + Σ_{i=1}^{p} β_i h_{t-i}.

GJR-GARCH(p,q), or Glosten, Jagannathan and Runkle GARCH (see Glosten et al. (1993)):

h_t = α_0 + Σ_{i=1}^{q} (α_i + γ I_{t-1}) ε_{t-1}^2 + Σ_{i=1}^{p} β_i h_{t-i},

where I_t = 1 if ε_t < 0 and I_t = 0 if ε_t ≥ 0.
The first assumes that the effects of the shocks are symmetric about γ rather than zero, so that for γ < 0 the effect of negative shocks is increased and the effect of positive shocks is decreased. Both the Type II AGARCH and the GJR GARCH (see Glosten et al. (1993)) models introduce asymmetry by increasing the value of the coefficient of ε_{t-1}^2 for negative values of ε_{t-1}. In the case of the Type II AGARCH the effect is multiplicative while for the GJR GARCH the effect is additive.

Coefficient         ε_{t-1} < 0       ε_{t-1} > 0
Type II AGARCH      α_i (1 - γ)^2     α_i (1 + γ)^2
GJR GARCH           α_i + γ           α_i

(Note that in the case of GJR GARCH, γ needs to be positive to inflate variance after negative shocks, while for Type I and Type II AGARCH, γ needs to be negative.)
A third type of GARCH model is the exponential GARCH (EGARCH). In this model the variance relationship is on the log scale and hence asymmetric.
ln(h_t) = α_0 + Σ_{i=1}^{q} α_i z_{t-i} + Σ_{i=1}^{q} φ_i (|z_{t-i}| - E[|z_{t-i}|]) + Σ_{i=1}^{p} β_i ln(h_{t-i}),

where z_t = ε_t / √h_t and E[|z_{t-i}|] denotes the expected value of |z_{t-i}|.
Note that the φ_i terms represent a symmetric contribution to the variance while the α_i terms give an asymmetric contribution.
Another common characteristic of financial data is that it is heavier in the tails (leptokurtic) than the Normal distribution. To model this, the Normal distribution is replaced by a scaled Student's t-distribution (that is, a Student's t-distribution standardized to have variance h_t). The Student's t-distribution is such that the smaller the degrees of freedom, the higher the kurtosis, for degrees of freedom > 4.

Fitting GARCH models

The models are fitted by maximizing the conditional log-likelihood. For the Normal distribution the conditional log-likelihood is

-(1/2) Σ_{i=1}^{T} (log(h_i) + ε_i^2/h_i).
For the Student's tt-distribution the function is more complex. An approximation to the standard errors of the parameter estimates is computed from the Fisher information matrix.
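For the Normal case, the conditional log-likelihood can be sketched for a GARCH(1,1). The start-up value for h_1 and the function name are our own choices, and a real fit would maximize this with a constrained optimizer:

    import numpy as np

    def garch11_loglik(params, eps):
        # Normal conditional log-likelihood (up to a constant) of a GARCH(1,1):
        # h_t = a0 + a1*eps_{t-1}^2 + b1*h_{t-1}; h_1 started at the sample
        # variance (an assumption).  Constraints: a0 > 0, a1 >= 0, b1 >= 0.
        a0, a1, b1 = params
        eps = np.asarray(eps, dtype=float)
        h = np.empty_like(eps)
        h[0] = eps.var()
        for t in range(1, len(eps)):
            h[t] = a0 + a1 * eps[t-1]**2 + b1 * h[t-1]
        return -0.5 * np.sum(np.log(h) + eps**2 / h)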

Inhomogeneous Time Series

If we denote a generic univariate time series as a sequence of pairs of values (z_i, t_i), for i = 1, 2, …, where the z's represent an observed scalar value and the t's the time that the value was observed, then in a standard time series analysis, as discussed in other sections of this introduction, it is assumed that the series being analysed is homogeneous, that is, the sampling times are regularly spaced with t_i - t_{i-1} = δ for some value δ. In many real-world applications this assumption does not hold, that is, the series is inhomogeneous.
Standard time series analysis techniques cannot be used on an inhomogeneous series without first preprocessing the series to construct an artificial homogeneous series, by, for example, resampling the series at regular intervals. Zumbach and Müller (2001) introduced a series of operators that can be used to extract robust information directly from the inhomogeneous time series. In this context, robust information means that the results should be essentially independent of minor changes to the sampling mechanism used when collecting the data, for example, changing a number of time stamps or adding or removing a few observations.
The basic operator available for inhomogeneous time series is the exponential moving average (EMA). This operator has a single parameter, $\tau$, and is an average operator with an exponentially decaying kernel given by:
$\frac{e^{-t/\tau}}{\tau}.$
This gives rise to the following iterative formula:
$\mathrm{EMA}[\tau; z](t_i) = \mu \, \mathrm{EMA}[\tau; z](t_{i-1}) + (\nu - \mu) z_{i-1} + (1 - \nu) z_i,$
where
$\mu = e^{-\alpha} \quad \text{and} \quad \alpha = \frac{t_i - t_{i-1}}{\tau}.$
The value of $\nu$ depends on the method of interpolation chosen. Three interpolation methods are available (a sketch implementing the recursion follows the list):
1. Previous point: $\nu = 1$.
2. Linear: $\nu = (1 - \mu)/\alpha$.
3. Next point: $\nu = \mu$.
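The following MATLAB fragment is a minimal sketch of this recursion; it is not one of the NAG routines listed later, linear interpolation is assumed, and initializing at the first observation is an illustrative choice.

    % Sketch: EMA for an inhomogeneous series (z, t), linear interpolation
    function ema = inhom_ema(t, z, tau)
        n = numel(z);
        ema = zeros(n,1);
        ema(1) = z(1);                  % initialize at the first observation
        for i = 2:n
            a  = (t(i) - t(i-1)) / tau;
            mu = exp(-a);
            nu = (1 - mu) / a;          % nu for linear interpolation
            ema(i) = mu*ema(i-1) + (nu - mu)*z(i-1) + (1 - nu)*z(i);
        end
    end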
Given the EMA, a number of other operators can be defined, including:
(i) m-Iterated Exponential Moving Average, defined as
$\mathrm{EMA}[\tau, m; z] = \mathrm{EMA}[\tau; \mathrm{EMA}[\tau, m-1; z]] \quad \text{where} \quad \mathrm{EMA}[\tau, 1; z] = \mathrm{EMA}[\tau; z].$
(ii) Moving Average (MA), defined as
$\mathrm{MA}[\tau, m_1, m_2; z](t_i) = \frac{1}{m_2 - m_1 + 1} \sum_{j=m_1}^{m_2} \mathrm{EMA}[\tilde{\tau}, j; z](t_i) \quad \text{where} \quad \tilde{\tau} = \frac{2\tau}{m_2 + m_1}.$
(iii) Moving Norm (MNorm), defined as
$\mathrm{MNorm}(\tau, m, p; z) = \mathrm{MA}[\tau, 1, m; |z|^p]^{1/p}.$
(iv) Moving Variance (MVar), defined as
$\mathrm{MVar}(\tau, m, p; z) = \mathrm{MA}[\tau, 1, m; |z - \mathrm{MA}[\tau, 1, m; z]|^p].$
(v) Moving Standard Deviation (MSD), defined as
$\mathrm{MSD}(\tau, m, p; z) = \mathrm{MA}[\tau, 1, m; |z - \mathrm{MA}[\tau, 1, m; z]|^p]^{1/p}.$
(vi) Differential ($\Delta$), defined as
$\Delta[\tau, \alpha, \beta, \gamma; z] = \gamma \left( \mathrm{EMA}[\alpha\tau, 1; z] + \mathrm{EMA}[\alpha\tau, 2; z] - 2\,\mathrm{EMA}[\alpha\beta\tau, 4; z] \right).$
(vii) Volatility, defined as
$\mathrm{Volatility}[\tau, \tau', m, p; z] = \mathrm{MNorm}(\tau/2, m, p; \Delta[\tau'; z]).$
A discussion of each of these operators, their use and, in some cases, alternative definitions is given in Zumbach and Müller (2001).
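Since the higher-level operators are all built from the EMA, they can be sketched by composition. For example, assuming the hypothetical inhom_ema helper sketched above, the m-iterated EMA and the moving average might look like:

    % Sketch: m-iterated EMA by repeated application of the basic EMA
    function ema = inhom_iema(t, z, tau, m)
        ema = z(:);
        for k = 1:m
            ema = inhom_ema(t, ema, tau);   % EMA[tau, k; z]
        end
    end

    % Sketch: MA[tau, m1, m2; z] as an average of iterated EMAs
    function ma = inhom_ma_op(t, z, tau, m1, m2)
        tauTilde = 2*tau / (m2 + m1);
        acc = zeros(numel(z), 1);
        for j = m1:m2
            acc = acc + inhom_iema(t, z, tauTilde, j);
        end
        ma = acc / (m2 - m1 + 1);
    end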

Recommendations on Choice and Use of Available Functions

ARMA-type Models

ARMA-type modelling usually follows the methodology made popular by Box and Jenkins. It consists of four steps: identification, model fitting, model checking and forecasting. The availability of functions for each of these four steps is given below for the three types of modelling situation considered: univariate, input-output and multivariate.

Univariate series

(a) Model identification
The function nag_tsa_uni_means (g13au) may be used in obtaining either a range-mean or standard deviation-mean plot for a series of observations, which may be useful in detecting the need for a variance-stabilizing transformation. nag_tsa_uni_means (g13au) computes the range or standard deviation and the mean for successive groups of observations and nag_stat_plot_scatter_2var (g01ag) may then be used to produce a scatter plot of range against mean or of standard deviation against mean.
The function nag_tsa_uni_diff (g13aa) may be used to difference a time series. The $N = n - d - s \times D$ values of the differenced time series, which extends over $t = 1 + d + s \times D, \ldots, n$, are stored in the first $N$ elements of the output array.
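As an illustration of the differencing scheme (generic MATLAB, not the g13aa interface; x, d, D and s are assumed to be defined), d non-seasonal differences followed by D seasonal differences at period s leave N = n - d - s*D values:

    % Sketch: d non-seasonal and D seasonal (period s) differences of x
    w = x(:);
    for k = 1:d
        w = diff(w);                    % (1 - B) applied d times
    end
    for k = 1:D
        w = w(1+s:end) - w(1:end-s);    % (1 - B^s) applied D times
    end
    % numel(w) is now n - d - s*D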
The function nag_tsa_uni_autocorr (g13ab) may be used for direct computation of the autocorrelations. It requires the time series as input, after optional differencing by nag_tsa_uni_diff (g13aa).
An alternative is to use nag_tsa_uni_spectrum_lag (g13ca), which uses the fast Fourier transform (FFT) to carry out the convolution for computing the autocovariances. Circumstances in which this is recommended are
(i) if the main aim is to calculate the smoothed sample spectrum;
(ii) if the series length and maximum lag for the autocorrelations are both very large, in which case appreciable computing time may be saved.
For more precise recommendations, see Gentleman and Sande (1966). In this case the autocorrelations $r_k$ need to be obtained from the autocovariances $c_k$ by $r_k = c_k / c_0$.
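The FFT-based route can be sketched in a few lines of generic MATLAB (this is an illustration, not the g13ca interface; x and the required maximum lag maxlag are assumed to be defined, and padding to at least twice the series length avoids circular wrap-around):

    % Sketch: autocovariances c_k via the FFT, then r_k = c_k / c_0
    n  = numel(x);
    xc = x(:) - mean(x);                % mean-correct the series
    m  = 2^nextpow2(2*n);               % pad to avoid circular convolution
    f  = fft(xc, m);
    c  = ifft(abs(f).^2) / n;           % biased autocovariance estimates
    c  = real(c(1:maxlag+1));           % lags 0, 1, ..., maxlag
    r  = c / c(1);                      % autocorrelations r_k = c_k / c_0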
The function nag_tsa_uni_autocorr_part (g13ac) computes the partial autocorrelation function (PACF) and prediction error variance estimates from an input autocorrelation function (ACF). Note that nag_tsa_multi_corrmat_partlag (g13dn), which is designed for multivariate time series, may also be used to compute the PACF together with $\chi^2$ statistics and their significance levels.
Finite lag predictor coefficients are also computed by the function nag_tsa_uni_autocorr_part (g13ac). It may have to be used twice: first with a large value for the maximum lag $L$ in order to locate the optimum final prediction error (FPE) lag, then again with $L$ reset to this lag.
The function nag_tsa_uni_arma_roots (g13dx) may be used to check that the autoregressive (AR) part of the model is stationary and that the moving-average (MA) part is invertible.
(b) Model estimation
The function nag_tsa_uni_arima_prelim (g13ad) is used to compute preliminary estimates of the ARIMA model parameters, the sample autocorrelations of the appropriately differenced series being input. The model orders are required.
The main function for parameter estimation for ARIMA models is nag_tsa_uni_arima_estim (g13ae), and an easy-to-use version is nag_tsa_uni_arima_estim_easy (g13af). Both these functions use the least squares criterion of estimation.
In some circumstances the use of nag_tsa_multi_inputmod_estim (g13be) or nag_tsa_multi_varma_estimate (g13dd), which use maximum likelihood (ML), is recommended.
The functions require the time series values to be input, together with the ARIMA orders. Any differencing implied by the model is carried out internally. They also require the maximum number of iterations to be specified, and return the state set for use in forecasting.
nag_tsa_uni_arima_estim (g13ae) should be preferred to nag_tsa_uni_arima_estim_easy (g13af) for:
(i) more information about the differenced series, its backforecasts and the intermediate series;
(ii) greater control over the output at successive iterations;
(iii) more detailed control over the search policy of the nonlinear least squares algorithm;
(iv) more information about the first and second derivatives of the objective function during and upon completion of the iterations.
nag_tsa_multi_inputmod_estim (g13be) is primarily designed for estimating relationships between time series. It is, however, easily used in a univariate mode for ARIMA model estimation. The advantage is that it allows (optional) use of the exact likelihood estimation criterion, which is not available in nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). This is particularly recommended for models which have seasonal parameters, because it reduces the tendency of parameter estimates to become stuck at points on the parameter space boundary. The model parameters estimated in this function should be passed over to nag_tsa_uni_arima_forcecast (g13aj) for use in univariate forecasting.
The function nag_tsa_multi_varma_estimate (g13dd) is primarily designed for fitting vector ARMA models to multivariate time series but may also be used in a univariate mode. It allows the use of either the exact or conditional likelihood estimation criterion, and allows you to fit non-multiplicative seasonal models which are not available in nag_tsa_uni_arima_estim (g13ae), nag_tsa_uni_arima_estim_easy (g13af) or nag_tsa_multi_inputmod_estim (g13be).
(c) Model checking
nag_tsa_uni_arima_resid (g13as) calculates the correlations in the residuals from a model fitted by either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). In addition the standard errors and correlations of the residual autocorrelations are computed along with a portmanteau test for model adequacy. nag_tsa_uni_arima_resid (g13as) can be used after a univariate model has been fitted by nag_tsa_multi_inputmod_estim (g13be), but care must be taken in selecting the correct inputs to nag_tsa_uni_arima_resid (g13as). Note that if nag_tsa_multi_varma_estimate (g13dd) has been used to fit a non-multiplicative seasonal model to a univariate series then nag_tsa_multi_varma_diag (g13ds) may be used to check the adequacy of the model.
(d) Forecasting using an ARIMA model
Given that the state set produced on estimation of the ARIMA model by either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af) has been retained, nag_tsa_uni_arima_forecast_state (g13ah) can be used directly to construct forecasts for $x_{n+1}, x_{n+2}, \ldots$, together with probability limits. If some further observations $x_{n+1}, x_{n+2}, \ldots$ have come to hand since model estimation (and there is no desire to re-estimate the model using the extended series), then nag_tsa_uni_arima_update (g13ag) can be used to update the state set using the new observations, prior to forecasting from the end of the extended series. The original series is not required.
The function nag_tsa_uni_arima_forcecast (g13aj) is provided for forecasting when the ARIMA model is known but the state set is unknown. For example, the model may have been estimated by a procedure other than the use of nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af), such as nag_tsa_multi_inputmod_estim (g13be). nag_tsa_uni_arima_forcecast (g13aj) constructs the state set and optionally constructs forecasts with probability limits. It is equivalent to a call to nag_tsa_uni_arima_estim (g13ae) with zero iterations requested, followed by an optional call to nag_tsa_uni_arima_forecast_state (g13ah), but it is much more efficient.

Input-output/transfer function modelling

(a) Model identification
Normally use nag_tsa_multi_xcorr (g13bc) for direct computation of cross-correlations, from which cross-covariances may be obtained by multiplying by $s_y s_x$, and impulse response estimates (after prewhitening) by multiplying by $s_y / s_x$, where $s_y$ and $s_x$ are the sample standard deviations of the series.
An alternative is to use nag_tsa_multi_spectrum_lag (g13cc), which exploits the fast Fourier transform (FFT) to carry out the convolution for computing cross-covariances. The criteria for this are the same as given in Section [Univariate series] for calculation of autocorrelations. The impulse response function may also be computed by spectral methods without prewhitening using nag_tsa_multi_noise_bivar (g13cg).
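The scalings described above are simple to apply once the cross-correlations are available. A minimal sketch, assuming rxy holds the cross-correlations of an input series x and output series y:

    % Sketch: rescaling cross-correlations r_xy(k)
    sx  = std(x); sy = std(y);
    cxy = rxy * (sy*sx);                % cross-covariances
    v   = rxy * (sy/sx);                % impulse response estimates
                                        % (valid after prewhitening)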
nag_tsa_multi_filter_arima (g13ba) may be used to prewhiten or filter a series by an ARIMA model.
nag_tsa_multi_filter_transf (g13bb) may be used to filter a time series using a transfer function model.
(b) Estimation of input-output model parameters
The function nag_tsa_multi_transf_prelim (g13bd) is used to obtain preliminary estimates of transfer function model parameters. The model orders and an estimate of the impulse response function (see Section [Univariate spectral estimation]) are required.
The simultaneous estimation of the transfer function model parameters for the inputs, and ARIMA model parameters for the output, is carried out by nag_tsa_multi_inputmod_estim (g13be).
This function requires values of the output and input series, and the orders of all the models. Any differencing implied by the model is carried out internally.
The function also requires the maximum number of iterations to be specified, and returns the state set for use in forecasting.
(c) Input-output model checking
The function nag_tsa_uni_arima_resid (g13as), primarily designed for univariate time series, can be used to test the residuals from an input-output model.
(d) Forecasting using an input-output model
Given that the state set produced on estimation of the model by nag_tsa_multi_inputmod_estim (g13be) has been retained, the function nag_tsa_multi_inputmod_forecast_state (g13bh) can be used directly to construct forecasts of the output series. Future values of the input series (possibly forecasts previously obtained using nag_tsa_uni_arima_forecast_state (g13ah)) are required.
If further observations of the output and input series have become available since model estimation (and there is no desire to re-estimate the model using the extended series) then nag_tsa_multi_inputmod_update (g13bg) can be used to update the state set using the new observations prior to forecasting from the end of the extended series. The original series are not required.
The function nag_tsa_multi_inputmod_forecast (g13bj) is provided for forecasting when the multi-input model is known, but the state set is unknown. The set of output and input series must be supplied to the function which then constructs the state set (for future use with nag_tsa_multi_inputmod_update (g13bg) and/or nag_tsa_multi_inputmod_forecast_state (g13bh)) and also optionally constructs forecasts of the output series in a similar manner to nag_tsa_multi_inputmod_forecast_state (g13bh).
In constructing probability limits for the forecasts, it is possible to allow for the fact that future input series values may themselves have been calculated as forecasts using ARIMA models. Use of this option requires that these ARIMA models be supplied to the function.
(e) Filtering a time series using a transfer function model
The function for this purpose is nag_tsa_multi_filter_transf (g13bb).

Multivariate series

(a) Model identification
The function nag_tsa_multi_diff (g13dl) may be used to difference the series. You must supply the differencing parameters for each component of the multivariate series. The order of differencing for each individual component does not have to be the same. The function may also be used to apply a log or square root transformation to the components of the series.
The function nag_tsa_multi_corrmat_cross (g13dm) may be used to calculate the sample cross-correlation or cross-covariance matrices. It requires a set of time series as input. You may request either the cross-covariances or cross-correlations.
The function nag_tsa_multi_corrmat_partlag (g13dn) computes the partial lag correlation matrices from the sample cross-correlation matrices computed by nag_tsa_multi_corrmat_cross (g13dm), and the function nag_tsa_multi_regmat_partial (g13dp) computes the least squares estimates of the partial autoregression matrices and their standard errors. Both functions compute a series of $\chi^2$ statistics that aid the determination of the order of a suitable autoregressive model. nag_tsa_multi_autocorr_part (g13db) may also be used in the identification of the order of an autoregressive model. The function computes multiple squared partial autocorrelations and predictive error variance ratios from the sample cross-correlations or cross-covariances computed by nag_tsa_multi_corrmat_cross (g13dm).
The function nag_tsa_uni_arma_roots (g13dx) may be used to check that the autoregressive part of the model is stationary and that the moving-average part is invertible.
(b) Estimation of VARMA model parameters
The function for this purpose is nag_tsa_multi_varma_estimate (g13dd). This function requires a set of time series to be input, together with values for $p$ and $q$. You must also specify the maximum number of likelihood evaluations to be permitted and which parameters (if any) are to be held at their initial (user-supplied) values. The fitting criterion is either exact maximum likelihood (ML) or conditional maximum likelihood.
nag_tsa_multi_varma_estimate (g13dd) is primarily designed for estimating relationships between time series. It may, however, easily be used in univariate mode for non-seasonal and non-multiplicative seasonal ARIMA model estimation. The advantage is that it allows (optional) use of the exact maximum likelihood (ML) estimation criterion, which is not available in either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). The conditional likelihood option is recommended for those models in which the parameter estimates display a tendency to become stuck at points on the boundary of the parameter space. When one of the series is known to be influenced by all the others, but the others in turn are mutually independent and do not influence the output series, then nag_tsa_multi_inputmod_estim (g13be) (the transfer function (TF) model fitting function) may be more appropriate to use.
(c) VARMA model checking
nag_tsa_multi_varma_diag (g13ds) calculates the cross-correlation matrices of residuals for a model fitted by nag_tsa_multi_varma_estimate (g13dd). In addition the standard errors and correlations of the residual correlation matrices are computed along with a portmanteau test for model adequacy.
(d) Forecasting using a VARMA model
The function nag_tsa_multi_varma_forecast (g13dj) may be used to construct a chosen number of forecasts using the model estimated by nag_tsa_multi_varma_estimate (g13dd). The standard errors of the forecasts are also computed. A reference vector is set up by nag_tsa_multi_varma_forecast (g13dj) so that should any further observations become available the existing forecasts can be efficiently updated using nag_tsa_multi_varma_update (g13dk). On a call to nag_tsa_multi_varma_update (g13dk) the reference vector itself is also updated so that nag_tsa_multi_varma_update (g13dk) may be called again each time new observations are available.

Exponential Smoothing

A variety of different smoothing methods are provided by nag_tsa_uni_smooth_exp (g13am), including: single exponential, Brown's double exponential, linear Holt (also called double exponential smoothing in some references), additive Holt–Winters and multiplicative Holt–Winters. The choice of smoothing method depends on the characteristics of the time series. If the mean of the series is only slowly changing then single exponential smoothing may be suitable. If there is a trend in the time series, which itself may be slowly changing, then double exponential smoothing may be suitable. If there is a seasonal component to the time series, e.g., daily or monthly data, then one of the two Holt–Winters methods may be suitable.
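For orientation, the simplest of these methods, single exponential smoothing, updates a level estimate using a smoothing parameter between 0 and 1. A minimal sketch in generic MATLAB (not the g13am interface; the series x and the choice alpha = 0.3 are illustrative assumptions):

    % Sketch: single exponential smoothing with parameter alpha
    alpha = 0.3;                        % illustrative smoothing parameter
    n = numel(x);
    level = zeros(n,1);
    level(1) = x(1);                    % initialize at the first observation
    for t = 2:n
        level(t) = alpha*x(t) + (1 - alpha)*level(t-1);
    end
    fcast = level(n);                   % one-step-ahead forecast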

Spectral Methods

Univariate spectral estimation

Two functions are available: nag_tsa_uni_spectrum_lag (g13ca), which carries out smoothing using a lag window, and nag_tsa_uni_spectrum_daniell (g13cb), which carries out direct frequency domain smoothing. Both can take the original series as input, but nag_tsa_uni_spectrum_lag (g13ca) alone can use the sample autocovariances as alternative input. This has some computational advantage if a variety of spectral estimates needs to be examined for the same series using different amounts of smoothing.
However, the real choice in most cases will be which of the four shapes of lag window in nag_tsa_uni_spectrum_lag (g13ca) to use, or whether to use the trapezium frequency window of nag_tsa_uni_spectrum_daniell (g13cb). The references may be consulted for advice on this, but the two most recommended lag windows are the Tukey and Parzen. The Tukey window has a very small risk of supplying negative spectrum estimates; otherwise, for the same bandwidth, both give very similar results, though the Parzen window requires a higher truncation lag (more autocorrelation function (ACF) values).
The frequency window smoothing procedure of nag_tsa_uni_spectrum_daniell (g13cb) with a trapezium shape parameter $p \simeq 1/2$ generally gives similar results for the same bandwidth as lag window methods, with a slight advantage of somewhat less distortion around sharp peaks, but a rather less smooth appearance in fine detail.

Cross-spectrum estimation

Two functions are available for the main step in cross-spectral analysis. To compute the cospectrum and quadrature spectrum estimates using smoothing by a lag window, nag_tsa_multi_spectrum_lag (g13cc) should be used. It takes as input either the original series or cross-covariances which may be computed in a previous call of the same function or possibly using results from nag_tsa_multi_xcorr (g13bc). As in the univariate case, this gives some advantage if estimates for the same series are to be computed with different amounts of smoothing.
The choice of window shape will normally be the same as that already used in univariate spectrum estimation for the series.
For direct frequency domain smoothing, nag_tsa_multi_spectrum_daniell (g13cd) should be used, with considerations for the choice of degree of smoothing similar to those for univariate estimation.
The cross-amplitude and squared coherency spectrum estimates are calculated, together with upper and lower confidence bounds, using nag_tsa_multi_spectrum_bivar (g13ce). For input the cross-spectral estimates from either nag_tsa_multi_spectrum_lag (g13cc) or nag_tsa_multi_spectrum_daniell (g13cd) and corresponding univariate spectra from either nag_tsa_uni_spectrum_lag (g13ca) or nag_tsa_uni_spectrum_daniell (g13cb) are required.
The gain and phase spectrum estimates are calculated together with upper and lower confidence bounds using nag_tsa_multi_gain_bivar (g13cf). The required input is as for nag_tsa_multi_spectrum_bivar (g13ce) above.
The noise spectrum estimates and impulse response function estimates are calculated together with multiplying factors for confidence limits on the former, and the standard error for the latter, using nag_tsa_multi_noise_bivar (g13cg). The required input is again the same as for nag_tsa_multi_spectrum_bivar (g13ce) above.

Kalman Filtering

The square root covariance Kalman filter is implemented by nag_tsa_multi_kalman_sqrt_var (g13ea) for time varying systems and by nag_tsa_multi_kalman_sqrt_invar (g13eb) for time invariant systems (see the Functionality Index below).

GARCH Models

The main choice in selecting a type of GARCH model is whether the data is symmetric or asymmetric and, if asymmetric, what form of asymmetry should be included in the model.
A symmetric ARCH or GARCH model can be fitted by nag_tsa_uni_garch_asym1_estim (g13fa) and the volatility forecast computed by nag_tsa_uni_garch_asym1_forecast (g13fb). For asymmetric data the choice is between the types of asymmetry described in Section [GARCH Models].
GARCH Type   Fit                                      Forecast
Type I       nag_tsa_uni_garch_asym1_estim (g13fa)    nag_tsa_uni_garch_asym1_forecast (g13fb)
Type II      nag_tsa_uni_garch_asym2_estim (g13fc)    nag_tsa_uni_garch_asym2_forecast (g13fd)
GJR          nag_tsa_uni_garch_gjr_estim (g13fe)      nag_tsa_uni_garch_gjr_forecast (g13ff)
EGARCH       nag_tsa_uni_garch_exp_estim (g13fg)      nag_tsa_uni_garch_exp_forecast (g13fh)
All functions allow the option of including regressor variables in the model and the choice between Normal and Student's tt-distribution for the errors.

Inhomogeneous Time Series

The following functions deal with inhomogeneous time series: nag_tsa_inhom_iema (g13me), nag_tsa_inhom_iema_all (g13mf) and nag_tsa_inhom_ma (g13mg).
Both nag_tsa_inhom_iema (g13me) and nag_tsa_inhom_iema_all (g13mf) calculate the $m$-iterated exponential moving average (EMA). In most cases nag_tsa_inhom_iema (g13me) can be used, which returns $\mathrm{EMA}[\tau, m; z]$ for a given value of $m$, overwriting the input data. Sometimes it is advantageous to have access to the intermediate results, for example when calculating the differential operator, in which case nag_tsa_inhom_iema_all (g13mf) can be used, which can return $\mathrm{EMA}[\tau, i; z]$, for $i = 1, 2, \ldots, m$. nag_tsa_inhom_iema_all (g13mf) can also be used if you do not wish to overwrite the input data.
The last function, nag_tsa_inhom_ma (g13mg), should be used if you require the moving average (MA), moving norm (MNorm), moving variance (MVar) or moving standard deviation (MSD). Other operators can be calculated by calling a combination of these three functions together with simple mathematics (additions, subtractions, etc.).

Time Series Simulation

There are functions available in Chapter G05 for generating a realisation of a time series from a specified model: nag_rand_times_arma (g05ph) for univariate time series and nag_rand_times_mv_varma (g05pj) for multivariate time series. There is also a suite of functions for simulating GARCH models: nag_rand_times_garch_asym1 (g05pd), nag_rand_times_garch_asym2 (g05pe), nag_rand_times_garch_gjr (g05pf) and nag_rand_times_garch_exp (g05pg). The function nag_rand_times_smooth_exp (g05pm) can be used to simulate data from an exponential smoothing model.

Functionality Index

ARMA modelling, 
    ACF nag_tsa_uni_autocorr (g13ab)
    diagnostic checking nag_tsa_uni_arima_resid (g13as)
    differencing nag_tsa_uni_diff (g13aa)
    estimation (comprehensive) nag_tsa_uni_arima_estim (g13ae)
    estimation (easy-to-use) nag_tsa_uni_arima_estim_easy (g13af)
    forecasting from fully specified model nag_tsa_uni_arima_forcecast (g13aj)
    forecasting from state set nag_tsa_uni_arima_forecast_state (g13ah)
    mean/range nag_tsa_uni_means (g13au)
    PACF nag_tsa_uni_autocorr_part (g13ac)
    preliminary estimation nag_tsa_uni_arima_prelim (g13ad)
    update state set nag_tsa_uni_arima_update (g13ag)
Bivariate spectral analysis, 
    Bartlett, Tukey, Parzen windows nag_tsa_multi_spectrum_lag (g13cc)
    cross amplitude spectrum nag_tsa_multi_spectrum_bivar (g13ce)
    direct smoothing nag_tsa_multi_spectrum_daniell (g13cd)
    gain and phase nag_tsa_multi_gain_bivar (g13cf)
    noise spectrum nag_tsa_multi_noise_bivar (g13cg)
Exponential smoothing nag_tsa_uni_smooth_exp (g13am)
GARCH, 
    EGARCH, 
        fitting nag_tsa_uni_garch_exp_estim (g13fg)
        forecasting nag_tsa_uni_garch_exp_forecast (g13fh)
    GJR GARCH, 
        fitting nag_tsa_uni_garch_gjr_estim (g13fe)
        forecasting nag_tsa_uni_garch_gjr_forecast (g13ff)
    symmetric or type I AGARCH, 
        fitting nag_tsa_uni_garch_asym1_estim (g13fa)
        forecasting nag_tsa_uni_garch_asym1_forecast (g13fb)
    type II AGARCH, 
        fitting nag_tsa_uni_garch_asym2_estim (g13fc)
        forecasting nag_tsa_uni_garch_asym2_forecast (g13fd)
Inhomogeneous series, 
    iterated exponential moving average, 
        final value only returned nag_tsa_inhom_iema (g13me)
        intermediate values returned nag_tsa_inhom_iema_all (g13mf)
    moving average nag_tsa_inhom_ma (g13mg)
Kalman, 
    filter, 
        time invariant, 
            square root covariance nag_tsa_multi_kalman_sqrt_invar (g13eb)
        time varying, 
            square root covariance nag_tsa_multi_kalman_sqrt_var (g13ea)
Transfer function modelling, 
    cross-correlations nag_tsa_multi_xcorr (g13bc)
    filtering nag_tsa_multi_filter_transf (g13bb)
    fitting nag_tsa_multi_inputmod_estim (g13be)
    forecasting from fully specified model nag_tsa_multi_inputmod_forecast (g13bj)
    forecasting from state set nag_tsa_multi_inputmod_forecast_state (g13bh)
    preliminary estimation nag_tsa_multi_transf_prelim (g13bd)
    pre-whitening nag_tsa_multi_filter_arima (g13ba)
    update state set nag_tsa_multi_inputmod_update (g13bg)
Univariate spectral analysis, 
    Bartlett, Tukey, Parzen windows nag_tsa_uni_spectrum_lag (g13ca)
    direct smoothing nag_tsa_uni_spectrum_daniell (g13cb)
Vector ARMA, 
    cross-correlations nag_tsa_multi_corrmat_cross (g13dm)
    diagnostic checks nag_tsa_multi_varma_diag (g13ds)
    differencing nag_tsa_multi_diff (g13dl)
    fitting nag_tsa_multi_varma_estimate (g13dd)
    forecasting nag_tsa_multi_varma_forecast (g13dj)
    partial autoregression matrices nag_tsa_multi_regmat_partial (g13dp)
    partial correlation matrices nag_tsa_multi_corrmat_partlag (g13dn)
    squared partial autocorrelations nag_tsa_multi_autocorr_part (g13db)
    update forecast nag_tsa_multi_varma_update (g13dk)
    zeros of ARIMA operator nag_tsa_uni_arma_roots (g13dx)

References

Akaike H (1971) Autoregressive model fitting for control Ann. Inst. Statist. Math. 23 163–180
Bollerslev T (1986) Generalised autoregressive conditional heteroskedasticity Journal of Econometrics 31 307–327
Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day
Engle R (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation Econometrica 50 987–1008
Gentleman W S and Sande G (1966) Fast Fourier transforms for fun and profit Proc. Joint Computer Conference, AFIPS 29 563–578
Glosten L, Jagannathan R and Runkle D (1993) Relationship between the expected value and the volatility of nominal excess return on stocks Journal of Finance 48 1779–1801
Hamilton J (1994) Time Series Analysis Princeton University Press
Heyse J F and Wei W W S (1985) The partial lag autocorrelation function Technical Report No. 32 Department of Statistics, Temple University, Philadelphia
Tiao G C and Box G E P (1981) Modelling multiple time series with applications J. Am. Stat. Assoc. 76 802–816
Wei W W S (1990) Time Series Analysis: Univariate and Multivariate Methods Addison–Wesley
Zumbach G O and Müller U A (2001) Operators on inhomogeneous time series International Journal of Theoretical and Applied Finance 4(1) 147–178

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013