# NAG Toolbox: nag_correg_ridge_opt (g02ka)

## Purpose

nag_correg_ridge_opt (g02ka) calculates a ridge regression, optimizing the ridge parameter according to one of four prediction error criteria.

## Syntax

```
[h, niter, nep, b, vif, res, rss, df, perr, ifail] = g02ka(x, isx, ip, y, h, opt, niter, tol, orig, optloo, 'n', n, 'm', m, 'tau', tau)
[h, niter, nep, b, vif, res, rss, df, perr, ifail] = nag_correg_ridge_opt(x, isx, ip, y, h, opt, niter, tol, orig, optloo, 'n', n, 'm', m, 'tau', tau)
```
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 24: tau is now optional.

## Description

A linear model has the form:
 y = c + Xβ + ε,
where
• y is an n by 1 matrix of values of a dependent variable;
• c is a scalar intercept term;
• X is an n by m matrix of values of independent variables;
• β is an m by 1 matrix of unknown parameters;
• ε is an n by 1 matrix of unknown random errors such that the variance of ε is σ²I.
Let X̃ be the mean-centred X and ỹ the mean-centred y. Furthermore, X̃ is scaled such that the diagonal elements of the cross-product matrix X̃ᵀX̃ are one. The linear model now takes the form:
 ỹ = X̃β̃ + ε.
Ridge regression estimates the parameters β̃ in a penalised least squares sense by finding the b̃ that minimizes
 ‖X̃b̃ − ỹ‖² + h‖b̃‖²,  h > 0,
where ‖·‖ denotes the ℓ₂-norm and h is a scalar regularization or ridge parameter. For a given value of h, the parameter estimates b̃ are found by evaluating
 b̃ = (X̃ᵀX̃ + hI)⁻¹X̃ᵀỹ.
Note that if h = 0 the ridge regression solution is equivalent to the ordinary least squares solution.
Rather than calculate the inverse of (X̃ᵀX̃ + hI) directly, nag_correg_ridge_opt (g02ka) uses the singular value decomposition (SVD) of X̃. After decomposing X̃ into UDVᵀ, where U and V are orthogonal matrices and D is a diagonal matrix, the parameter estimates become
 b̃ = V(DᵀD + hI)⁻¹DUᵀỹ.
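The SVD route can be sketched outside the toolbox. The following is a minimal NumPy illustration (not the NAG implementation) showing that the SVD form of b̃ agrees with the direct normal-equations form; the data here are arbitrary random values:

```python
import numpy as np

# Illustrative sketch only: ridge estimates for the standardized model via
# the SVD, b~ = V (D^T D + h I)^{-1} D U^T y~.
rng = np.random.default_rng(0)
n, m, h = 20, 3, 0.5
X = rng.standard_normal((n, m))
y = rng.standard_normal(n)

# Mean-centre y, and mean-centre/scale X so that diag(X~^T X~) = I
yt = y - y.mean()
Xc = X - X.mean(axis=0)
Xt = Xc / np.sqrt((Xc ** 2).sum(axis=0))

# Thin SVD: X~ = U D V^T, with D diagonal (singular values d)
U, d, Vt = np.linalg.svd(Xt, full_matrices=False)
b_svd = Vt.T @ (d / (d ** 2 + h) * (U.T @ yt))

# Agrees with the direct form (X~^T X~ + h I)^{-1} X~^T y~
b_direct = np.linalg.solve(Xt.T @ Xt + h * np.eye(m), Xt.T @ yt)
assert np.allclose(b_svd, b_direct)
```

The SVD form avoids explicitly inverting X̃ᵀX̃ + hI, which is also the numerically preferable route when X̃ is ill-conditioned.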
A consequence of introducing the ridge parameter is that the effective number of parameters, γ, in the model is given by the sum of the diagonal elements of
 DᵀD(DᵀD + hI)⁻¹;
see Moody (1992) for details.
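Since D is diagonal, the trace above reduces to Σᵢ dᵢ²/(dᵢ² + h) over the singular values dᵢ of X̃. A small NumPy sketch with hypothetical singular values:

```python
import numpy as np

# Effective number of parameters gamma = trace(D^T D (D^T D + h I)^{-1})
# = sum_i d_i^2 / (d_i^2 + h).  The singular values below are hypothetical.
d = np.array([1.6, 0.9, 0.3])
h = 0.1

gamma = np.sum(d ** 2 / (d ** 2 + h))

# As h -> 0 every term tends to 1, so gamma -> m (ordinary least squares);
# as h grows the terms shrink, so gamma decreases towards 0.
assert 0.0 < gamma < d.size
assert np.isclose(np.sum(d ** 2 / (d ** 2 + 0.0)), d.size)
```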
Any multicollinearity in the design matrix X may be highlighted by calculating the variance inflation factors for the fitted model. The jth variance inflation factor, vⱼ, is a scaled version of the multiple correlation coefficient between independent variable j and the other independent variables, Rⱼ, and is given by
 vⱼ = 1/(1 − Rⱼ),  j = 1, 2, …, m.
The m variance inflation factors are calculated as the diagonal elements of the matrix
 (X̃ᵀX̃ + hI)⁻¹X̃ᵀX̃(X̃ᵀX̃ + hI)⁻¹,
which, using the SVD of X̃, is equivalent to the diagonal elements of the matrix
 V(DᵀD + hI)⁻¹DᵀD(DᵀD + hI)⁻¹Vᵀ.
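A NumPy sketch (illustrative only, with arbitrary data) confirming that the SVD form yields the same diagonal elements as the explicit matrix product; element-wise, the jth diagonal entry is Σₖ V[j,k]² dₖ²/(dₖ² + h)²:

```python
import numpy as np

# Variance inflation factors as the diagonal of
# V (D^T D + h I)^{-1} D^T D (D^T D + h I)^{-1} V^T
rng = np.random.default_rng(1)
n, m, h = 20, 3, 0.5
Xc = rng.standard_normal((n, m))
Xc -= Xc.mean(axis=0)
Xt = Xc / np.sqrt((Xc ** 2).sum(axis=0))   # diag(X~^T X~) = I

U, d, Vt = np.linalg.svd(Xt, full_matrices=False)
V = Vt.T
vif = V ** 2 @ (d ** 2 / (d ** 2 + h) ** 2)   # SVD form of the diagonal

# Same diagonal from the explicit product (X~^T X~ + hI)^{-1} X~^T X~ (X~^T X~ + hI)^{-1}
A = np.linalg.inv(Xt.T @ Xt + h * np.eye(m))
vif_direct = np.diag(A @ Xt.T @ Xt @ A)
assert np.allclose(vif, vif_direct)
```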
Although the parameter estimates b̃ are calculated by using X̃, it is usual to report the parameter estimates b associated with X. These are calculated from b̃, and the means and scalings of X. Optionally, either b̃ or b may be calculated.
The function can adopt one of four criteria to minimize while calculating a suitable value for h:
(a) Generalized cross-validation (GCV):
 ns/(n − γ)²;
(b) Unbiased estimate of variance (UEV):
 s/(n − γ);
(c) Future prediction error (FPE):
 (1/n)(s + 2γs/(n − γ));
(d) Bayesian information criterion (BIC):
 (1/n)(s + log(n)γs/(n − γ));
where s is the sum of squares of residuals. The function returns all four of the above prediction errors, regardless of which one was selected for optimizing the ridge parameter h. Furthermore, the function will optionally return the leave-one-out cross-validation (LOOCV) estimate of prediction error.
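As a check on these formulas, evaluating them at the values returned in the Example section below (n = 20, γ = nep = 2.9059, s = rss = 109.1674) reproduces the first four elements of perr. A Python sketch:

```python
import math

# The four prediction-error criteria, evaluated at the Example's returned
# values: n = 20 observations, gamma = nep = 2.9059, s = rss = 109.1674.
n, gamma, s = 20.0, 2.9059, 109.1674

gcv = n * s / (n - gamma) ** 2                       # generalized cross-validation
uev = s / (n - gamma)                                # unbiased estimate of variance
fpe = (s + 2.0 * gamma * s / (n - gamma)) / n        # future prediction error
bic = (s + math.log(n) * gamma * s / (n - gamma)) / n  # Bayesian information criterion

# These match perr(1:4) of the Example to the printed precision.
```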

## References

Hastie T, Tibshirani R and Friedman J (2003) The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer Series in Statistics, Springer
Moody J E (1992) The effective number of parameters: an analysis of generalisation and regularisation in nonlinear learning systems In: Neural Information Processing Systems (eds J E Moody, S J Hanson and R P Lippmann) 4 847–854 Morgan Kaufmann, San Mateo CA

## Parameters

### Compulsory Input Parameters

1:     x(ldx,m) – double array
ldx, the first dimension of the array, must satisfy the constraint ldx ≥ n.
The values of independent variables in the data matrix X.
2:     isx(m) – int64/int32/nag_int array
m, the dimension of the array, must satisfy the constraint m ≤ n.
Indicates which of the m independent variables are included in the model.
isx(j) = 1
The jth variable in x will be included in the model.
isx(j) = 0
Variable j is excluded.
Constraint: isx(j) = 0 or 1, for j = 1,2,…,m.
3:     ip – int64/int32/nag_int scalar
The number of independent variables included in the model.
Constraints:
• 1 ≤ ip ≤ m;
• exactly ip elements of isx must be equal to 1.
4:     y(n) – double array
n, the dimension of the array, must satisfy the constraint n > 1.
The n values of the dependent variable y.
5:     h – double scalar
An initial value for the ridge regression parameter h; used as a starting point for the optimization.
Constraint: h > 0.0.
6:     opt – int64/int32/nag_int scalar
The measure of prediction error used to optimize the ridge regression parameter h. The value of opt must be set equal to one of:
opt = 1
Generalized cross-validation (GCV);
opt = 2
Unbiased estimate of variance (UEV);
opt = 3
Future prediction error (FPE);
opt = 4
Bayesian information criterion (BIC).
Constraint: opt = 1, 2, 3 or 4.
7:     niter – int64/int32/nag_int scalar
The maximum number of iterations allowed to optimize the ridge regression parameter h.
Constraint: niter ≥ 1.
8:     tol – double scalar
Iterations of the ridge regression parameter h will halt when consecutive values of h lie within tol.
Constraint: tol > 0.0.
9:     orig – int64/int32/nag_int scalar
If orig = 1, the parameter estimates b are calculated for the original data; otherwise orig = 2 and the parameter estimates b̃ are calculated for the standardized data.
Constraint: orig = 1 or 2.
10:   optloo – int64/int32/nag_int scalar
If optloo = 2, the leave-one-out cross-validation estimate of prediction error is calculated; otherwise no such estimate is calculated and optloo = 1.
Constraint: optloo = 1 or 2.

### Optional Input Parameters

1:     n – int64/int32/nag_int scalar
Default: the dimension of the array y and the first dimension of the array x. (An error is raised if these dimensions are not equal.)
n, the number of observations.
Constraint: n > 1.
2:     m – int64/int32/nag_int scalar
Default: the dimension of the array isx and the second dimension of the array x. (An error is raised if these dimensions are not equal.)
The number of independent variables available in the data matrix X.
Constraint: m ≤ n.
3:     tau – double scalar
Singular values of the SVD of the data matrix X that are less than tau will be set equal to zero.
Default: tau = 0.0
Constraint: tau ≥ 0.0.


### Output Parameters

1:     h – double scalar
The optimized value of the ridge regression parameter h.
2:     niter – int64/int32/nag_int scalar
The number of iterations used to optimize the ridge regression parameter h within tol.
3:     nep – double scalar
The number of effective parameters, γ, in the model.
4:     b(ip+1) – double array
Contains the intercept and parameter estimates for the fitted ridge regression model in the order indicated by isx. The first element of b contains the estimate for the intercept; b(j+1) contains the parameter estimate for the jth independent variable in the model, for j = 1,2,…,ip.
5:     vif(ip) – double array
The variance inflation factors in the order indicated by isx. For the jth independent variable in the model, vif(j) is the value of vⱼ, for j = 1,2,…,ip.
6:     res(n) – double array
res(i) is the value of the ith residual for the fitted ridge regression model, for i = 1,2,…,n.
7:     rss – double scalar
The sum of squares of residual values.
8:     df – int64/int32/nag_int scalar
The degrees of freedom for the residual sum of squares rss.
9:     perr(5) – double array
The first four elements contain, in this order, the measures of prediction error: GCV, UEV, FPE and BIC.
If optloo = 2, perr(5) is the LOOCV estimate of prediction error; otherwise perr(5) is not referenced.
10:   ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = −1
Maximum number of iterations used.
ifail = 1
On entry, n ≤ 1; or tau < 0.0; or opt ≠ 1, 2, 3 or 4; or h ≤ 0.0; or optloo ≠ 1 or 2; or tol ≤ 0.0; or niter < 1; or orig ≠ 1 or 2.
ifail = 2
On entry, m > n; or ldx < n; or ip < 1 or ip > m; or an element of isx ≠ 0 or 1; or ip does not equal the sum of elements in isx.
ifail = 3
SVD failed to converge.
ifail = 4

## Accuracy

Not applicable.

## Further Comments

nag_correg_ridge_opt (g02ka) allocates internally max(5 × (n−1), 2 × ip × ip) + (n+3) × ip + n elements of double precision storage.
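For instance, with the dimensions of the data in the Example section (n = 20 observations, ip = 3 model variables), the formula gives 184 double-precision elements; a trivial Python check:

```python
# Workspace formula quoted above, evaluated for the Example's dimensions.
n, ip = 20, 3
workspace = max(5 * (n - 1), 2 * ip * ip) + (n + 3) * ip + n
# max(95, 18) + 69 + 20 = 184 elements
```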

## Example

```function nag_correg_ridge_opt_example
x = [19.5, 43.1, 29.1;
24.7, 49.8, 28.2;
30.7, 51.9, 37;
29.8, 54.3, 31.1;
19.1, 42.2, 30.9;
25.6, 53.9, 23.7;
31.4, 58.5, 27.6;
27.9, 52.1, 30.6;
22.1, 49.9, 23.2;
25.5, 53.5, 24.8;
31.1, 56.6, 30;
30.4, 56.7, 28.3;
18.7, 46.5, 23;
19.7, 44.2, 28.6;
14.6, 42.7, 21.3;
29.5, 54.4, 30.1;
27.7, 55.3, 25.7;
30.2, 58.6, 24.6;
22.7, 48.2, 27.1;
25.2, 51, 27.5];
isx = [int64(1);1;1];
ip = int64(3);
y = [11.9;
22.8;
18.7;
20.1;
12.9;
21.7;
27.1;
25.4;
21.3;
19.3;
25.4;
27.2;
11.7;
17.8;
12.8;
23.9;
22.6;
25.4;
14.8;
21.1];
h = 0.5;
opt = int64(1);
niter = int64(25);
tol = 0.0001;
orig = int64(2);
optloo = int64(2);
[hOut, niterOut, nep, b, vif, res, rss, df, perr, ifail] = ...
nag_correg_ridge_opt(x, isx, ip, y, h, opt, niter, tol, orig, optloo)
```
```

hOut =

0.0712

niterOut =

6

nep =

2.9059

b =

20.1950
9.7934
9.9576
-2.0125

vif =

0.2928
0.4162
0.8089

res =

-1.9894
3.5469
-3.0392
-3.0309
-0.1899
-0.3146
0.9775
4.0157
2.5332
-2.3560
0.5446
2.3989
-4.0876
3.2778
0.2894
0.7330
-0.7116
-0.6092
-2.9995
1.0110

rss =

109.1674

df =

16

perr =

7.4718
6.3862
7.3141
8.2380
7.5495

ifail =

0

```