NAG Library Manual

# NAG Library Function Document: nag_kalman_sqrt_filt_cov_var (g13eac)

## 1  Purpose

nag_kalman_sqrt_filt_cov_var (g13eac) performs a combined measurement and time update of one iteration of the time-varying Kalman filter. The method employed for this update is the square root covariance filter with the system matrices in their original form.

## 2  Specification

 #include <nag.h>
 #include <nagg13.h>
 void nag_kalman_sqrt_filt_cov_var (Integer n, Integer m, Integer p, double s[], Integer tds, const double a[], Integer tda, const double b[], Integer tdb, const double q[], Integer tdq, const double c[], Integer tdc, const double r[], Integer tdr, double k[], Integer tdk, double h[], Integer tdh, double tol, NagError *fail)

## 3  Description

For the state space system defined by
 $\begin{array}{llll} X_{i+1} & = A_i X_i + B_i W_i, & \operatorname{var}\left(W_i\right) & = Q_i \\ Y_i & = C_i X_i + V_i, & \operatorname{var}\left(V_i\right) & = R_i \end{array}$
the estimate of ${X}_{i}$ given observations ${Y}_{1}$ to ${Y}_{i-1}$ is denoted by ${\stackrel{^}{X}}_{i\mid i-1}$ with $\mathrm{var}\left({\stackrel{^}{X}}_{i\mid i-1}\right)={P}_{i\mid i-1}={S}_{i}{S}_{i}^{\mathrm{T}}$. nag_kalman_sqrt_filt_cov_var (g13eac) performs one recursion of the square root covariance filter algorithm, summarised as follows:
 $\underset{\text{Pre-array}}{\left(\begin{array}{ccc} R_i^{1/2} & C_i S_i & 0 \\ 0 & A_i S_i & B_i Q_i^{1/2} \end{array}\right)} U = \underset{\text{Post-array}}{\left(\begin{array}{ccc} H_i^{1/2} & 0 & 0 \\ G_i & S_{i+1} & 0 \end{array}\right)}$
where $U$ is an orthogonal transformation that triangularizes the pre-array. The triangularization is carried out via Householder transformations exploiting the zero pattern of the pre-array. The measurement update for the estimated state vector $X$ is
 $\hat{X}_{i \mid i} = \hat{X}_{i \mid i-1} - K_i \left( C_i \hat{X}_{i \mid i-1} - Y_i \right)$ (1)
where ${K}_{i}$ is the Kalman gain matrix, whilst the time update for $X$ is
 $\hat{X}_{i+1 \mid i} = A_i \hat{X}_{i \mid i} + D_i U_i$ (2)
where ${D}_{i}{U}_{i}$ represents any deterministic control used. The relationship between the Kalman gain matrix ${K}_{i}$ and ${G}_{i}$ is given by
 $A_i K_i = G_i \left( H_i^{1/2} \right)^{-1}$
The function returns the product of the matrices ${A}_{i}$ and ${K}_{i}$ represented as ${AK}_{i}$, and the state covariance matrix ${P}_{i\mid i-1}$ factorized as ${P}_{i\mid i-1}={S}_{i}{S}_{i}^{\mathrm{T}}$ (see the g13 Chapter Introduction for more information concerning the covariance filter).

## 4  References

Anderson B D O and Moore J B (1979) Optimal Filtering Prentice–Hall
Harvey A C and Phillips G D A (1979) Maximum likelihood estimation of regression models with autoregressive — moving average disturbances Biometrika 66 49–58
Vanbegin M, van Dooren P and Verhaegen M H G (1989) Algorithm 675: FORTRAN subroutines for computing the square root covariance filter and square root information filter in dense or Hessenberg forms ACM Trans. Math. Software 15 243–256
Verhaegen M H G and van Dooren P (1986) Numerical aspects of different Kalman filter implementations IEEE Trans. Auto. Contr. AC-31 907–917
Wei W W S (1990) Time Series Analysis: Univariate and Multivariate Methods Addison–Wesley

## 5  Arguments

1:     n – Integer *Input*
On entry: the actual state dimension, $n$, i.e., the order of the matrices ${S}_{i}$ and ${A}_{i}$.
Constraint: ${\mathbf{n}}\ge 1$.
2:     m – Integer *Input*
On entry: the actual input dimension, $m$, i.e., the order of the matrix ${Q}_{i}^{1/2}$.
Constraint: ${\mathbf{m}}\ge 1$.
3:     p – Integer *Input*
On entry: the actual output dimension, $p$, i.e., the order of the matrix ${R}_{i}^{1/2}$.
Constraint: ${\mathbf{p}}\ge 1$.
4:     s[${\mathbf{n}}×{\mathbf{tds}}$] – double *Input/Output*
Note: the $\left(i,j\right)$th element of the matrix $S$ is stored in ${\mathbf{s}}\left[\left(i-1\right)×{\mathbf{tds}}+j-1\right]$.
On entry: the leading $n$ by $n$ lower triangular part of this array must contain ${S}_{i}$, the left Cholesky factor of the state covariance matrix ${P}_{i\mid i-1}$.
On exit: the leading $n$ by $n$ lower triangular part of this array contains ${S}_{i+1}$, the left Cholesky factor of the state covariance matrix ${P}_{i+1\mid i}$.
5:     tds – Integer *Input*
On entry: the stride separating matrix column elements in the array s.
Constraint: ${\mathbf{tds}}\ge {\mathbf{n}}$.
6:     a[${\mathbf{n}}×{\mathbf{tda}}$] – const double *Input*
Note: the $\left(i,j\right)$th element of the matrix $A$ is stored in ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{tda}}+j-1\right]$.
On entry: the leading $n$ by $n$ part of this array must contain ${A}_{i}$, the state transition matrix of the discrete system.
7:     tda – Integer *Input*
On entry: the stride separating matrix column elements in the array a.
Constraint: ${\mathbf{tda}}\ge {\mathbf{n}}$.
8:     b[${\mathbf{n}}×{\mathbf{tdb}}$] – const double *Input*
Note: the $\left(i,j\right)$th element of the matrix $B$ is stored in ${\mathbf{b}}\left[\left(i-1\right)×{\mathbf{tdb}}+j-1\right]$.
On entry: if q is not NULL then the leading $n$ by $m$ part of this array must contain the matrix ${B}_{i}$, otherwise if q is NULL then the leading $n$ by $m$ part of the array must contain the matrix ${B}_{i}{Q}_{i}^{1/2}$. ${B}_{i}$ is the input weight matrix and ${Q}_{i}$ is the noise covariance matrix.
9:     tdb – Integer *Input*
On entry: the stride separating matrix column elements in the array b.
Constraint: ${\mathbf{tdb}}\ge {\mathbf{m}}$.
10:   q[${\mathbf{m}}×{\mathbf{tdq}}$] – const double *Input*
Note: the $\left(i,j\right)$th element of the matrix $Q$ is stored in ${\mathbf{q}}\left[\left(i-1\right)×{\mathbf{tdq}}+j-1\right]$.
On entry: if the noise covariance matrix is to be supplied separately from the input weight matrix then the leading $m$ by $m$ lower triangular part of this array must contain ${Q}_{i}^{1/2}$, the left Cholesky factor of the input process noise covariance matrix. If the noise covariance matrix is to be input with the weight matrix as ${B}_{i}{Q}_{i}^{1/2}$ then the array q must be set to NULL.
11:   tdq – Integer *Input*
On entry: the stride separating matrix column elements in the array q.
Constraint: ${\mathbf{tdq}}\ge {\mathbf{m}}$ if q is defined.
12:   c[${\mathbf{p}}×{\mathbf{tdc}}$] – const double *Input*
Note: the $\left(i,j\right)$th element of the matrix $C$ is stored in ${\mathbf{c}}\left[\left(i-1\right)×{\mathbf{tdc}}+j-1\right]$.
On entry: the leading $p$ by $n$ part of this array must contain ${C}_{i}$, the output weight matrix of the discrete system.
13:   tdc – Integer *Input*
On entry: the stride separating matrix column elements in the array c.
Constraint: ${\mathbf{tdc}}\ge {\mathbf{n}}$.
14:   r[${\mathbf{p}}×{\mathbf{tdr}}$] – const double *Input*
Note: the $\left(i,j\right)$th element of the matrix $R$ is stored in ${\mathbf{r}}\left[\left(i-1\right)×{\mathbf{tdr}}+j-1\right]$.
On entry: the leading $p$ by $p$ lower triangular part of this array must contain ${R}_{i}^{1/2}$, the left Cholesky factor of the measurement noise covariance matrix.
15:   tdr – Integer *Input*
On entry: the stride separating matrix column elements in the array r.
Constraint: ${\mathbf{tdr}}\ge {\mathbf{p}}$.
16:   k[${\mathbf{n}}×{\mathbf{tdk}}$] – double *Output*
Note: the $\left(i,j\right)$th element of the matrix $K$ is stored in ${\mathbf{k}}\left[\left(i-1\right)×{\mathbf{tdk}}+j-1\right]$.
On exit: if k is not NULL then the leading $n$ by $p$ part of k contains ${AK}_{i}$, the product of the Kalman filter gain matrix ${K}_{i}$ with the state transition matrix ${A}_{i}$. If $A{K}_{i}$ is not required then k must be set to NULL.
17:   tdk – Integer *Input*
On entry: the stride separating matrix column elements in the array k.
Constraint: if k is not NULL, ${\mathbf{tdk}}\ge {\mathbf{p}}$.
18:   h[${\mathbf{p}}×{\mathbf{tdh}}$] – double *Output*
Note: the $\left(i,j\right)$th element of the matrix $H$ is stored in ${\mathbf{h}}\left[\left(i-1\right)×{\mathbf{tdh}}+j-1\right]$.
On exit: if k is not NULL then the leading $p$ by $p$ lower triangular part of this array contains ${H}_{i}^{1/2}$. If k is NULL then h is not referenced and may be set to NULL.
19:   tdh – Integer *Input*
On entry: the stride separating matrix column elements in the array h.
Constraint: if both k and h are not NULL, ${\mathbf{tdh}}\ge {\mathbf{p}}$.
20:   tol – double *Input*
On entry: if both k and h are not NULL then tol is used to test for near singularity of the matrix ${H}_{i}^{1/2}$. If you set tol to be less than ${p}^{2}\epsilon$ then the tolerance is taken as ${p}^{2}\epsilon$, where $\epsilon$ is the machine precision. If k or h is NULL then tol need not be set.
21:   fail – NagError * *Input/Output*
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_2_INT_ARG_LT
On entry ${\mathbf{tda}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tda}}\ge {\mathbf{n}}$.
On entry ${\mathbf{tdb}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdb}}\ge {\mathbf{m}}$.
On entry ${\mathbf{tdc}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdc}}\ge {\mathbf{n}}$.
On entry ${\mathbf{tdh}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{p}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdh}}\ge {\mathbf{p}}$.
On entry ${\mathbf{tdk}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{p}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdk}}\ge {\mathbf{p}}$.
On entry ${\mathbf{tdq}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdq}}\ge {\mathbf{m}}$.
On entry ${\mathbf{tdr}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{p}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdr}}\ge {\mathbf{p}}$.
On entry, ${\mathbf{tds}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tds}}\ge {\mathbf{n}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_INT_ARG_LT
On entry, ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 1$.
On entry, ${\mathbf{p}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{p}}\ge 1$.
NE_MAT_SINGULAR
The matrix ${H}_{i}^{1/2}$ is singular.
NE_NULL_ARRAY

## 7  Accuracy

The use of the square root algorithm improves the stability of the computations.

## 8  Parallelism and Performance

Not applicable.

## 9  Further Comments

The algorithm requires $\frac{7}{6}{n}^{3}+{n}^{2}\left(\frac{5}{2}p+m\right)+n\left(\frac{1}{2}{m}^{2}+{p}^{2}\right)$ operations and is backward stable (see Vanbegin et al. (1989)).

## 10  Example

Two examples are presented for this function. There is a single example program for nag_kalman_sqrt_filt_cov_var (g13eac), with a main program; the code to solve the two example problems is given in the functions ex1 and ex2.
Example 1 (ex1)
This example applies three iterations of the Kalman filter (in square root covariance form) to the system $\left({A}_{i},{B}_{i},{C}_{i}\right)$. The same data are used for all three iterative steps.
Example 2 (ex2)
In the second example 2000 terms of an ARMA(1,1) time series (with ${\sigma }^{2}=1.0,\theta =0.9$ and $\varphi =0.4$) are generated using the function nag_rand_arma (g05phc). The Kalman filter and optimization function nag_opt_nlp_solve (e04wdc) are then used to find the maximum likelihood estimates of the time series parameters $\theta$ and $\varphi$. The ARMA(1,1) time series is defined by
 $y_k = \varphi y_{k-1} + \epsilon_k - \theta \epsilon_{k-1}$
This has the following state space representation (Harvey and Phillips (1979))
 $x_k = \left(\begin{array}{cc} \varphi & 1 \\ 0 & 0 \end{array}\right) x_{k-1} + \left(\begin{array}{c} 1 \\ -\theta \end{array}\right) \epsilon_k, \qquad y_k = \left(\begin{array}{cc} 1 & 0 \end{array}\right) x_k$
where the state vector ${x}_{k}=\left(\begin{array}{c}{y}_{k}\\ -\theta {\epsilon }_{k}\end{array}\right)$ and ${\epsilon }_{k}$ is uncorrelated white noise with zero mean and variance ${\sigma }^{2}$, i.e.,
 $E\left[\epsilon_k\right] = 0, \quad E\left[\epsilon_k \epsilon_k\right] = \sigma^2, \quad E\left[y_k \epsilon_k\right] = \sigma^2 \quad \text{and} \quad E\left[\epsilon_k \epsilon_{k-1}\right] = 0.$
Since ${\sigma }^{2}=1$ we arrive at the following Kalman filter matrices:
 $A_k = \left(\begin{array}{cc} \varphi & 1 \\ 0 & 0 \end{array}\right), \quad B_k = \left(\begin{array}{c} 1 \\ -\theta \end{array}\right), \quad C_k = \left(\begin{array}{cc} 1 & 0 \end{array}\right), \quad Q_k = 0 \quad \text{and} \quad R_k = 1.$
The initial estimates for the state vector, ${x}_{1\mid 0}$, and state covariance matrix, ${P}_{1\mid 0}$, are:
 $x_{1 \mid 0} = E\left[x_k\right] = 0 \quad \text{and} \quad P_{1 \mid 0} = E\left[x_k x_k^{\mathrm{T}}\right] = \left(\begin{array}{cc} E\left[y_k y_k\right] & -\theta E\left[y_k \epsilon_k\right] \\ -\theta E\left[y_k \epsilon_k\right] & \theta^2 E\left[\epsilon_k \epsilon_k\right] \end{array}\right).$
Since $E\left[y_k y_k\right] = \gamma_0 = \frac{\left(1 + \theta^2 - 2\varphi\theta\right)\sigma^2}{1 - \varphi^2}$ (Wei (1990))
 $P_{1 \mid 0} = \left(\begin{array}{cc} \gamma_0 & -\theta \\ -\theta & \theta^2 \end{array}\right).$
Using ${P}_{1\mid 0}={S}_{1\mid 0}{S}_{1\mid 0}^{\mathrm{T}}$ gives an initial Cholesky ‘square root’ of
 $S_{1 \mid 0} = \left(\begin{array}{cc} \sqrt{\gamma_0} & 0 \\ -\frac{\theta}{\sqrt{\gamma_0}} & \theta\sqrt{\frac{\gamma_0 - 1}{\gamma_0}} \end{array}\right).$

### 10.1  Program Text

Program Text (g13eace.c)

### 10.2  Program Data

None.

### 10.3  Program Results

Program Results (g13eace.r)