NAG FL Interface
g02maf (lars)
1
Purpose
g02maf performs Least Angle Regression (LARS), forward stagewise linear regression or Least Absolute Shrinkage and Selection Operator (LASSO).
2
Specification
Fortran Interface
Subroutine g02maf ( 
mtype, pred, prey, n, m, d, ldd, isx, lisx, y, mnstep, ip, nstep, b, ldb, fitsum, ropt, lropt, ifail) 
Integer, Intent (In) 
:: 
mtype, pred, prey, n, m, ldd, isx(lisx), lisx, mnstep, ldb, lropt 
Integer, Intent (Inout) 
:: 
ifail 
Integer, Intent (Out) 
:: 
ip, nstep 
Real (Kind=nag_wp), Intent (In) 
:: 
d(ldd,*), y(n), ropt(lropt) 
Real (Kind=nag_wp), Intent (Inout) 
:: 
b(ldb,*) 
Real (Kind=nag_wp), Intent (Out) 
:: 
fitsum(6,mnstep+1) 

C Header Interface
#include <nag.h>
void 
g02maf_ (const Integer *mtype, const Integer *pred, const Integer *prey, const Integer *n, const Integer *m, const double d[], const Integer *ldd, const Integer isx[], const Integer *lisx, const double y[], const Integer *mnstep, Integer *ip, Integer *nstep, double b[], const Integer *ldb, double fitsum[], const double ropt[], const Integer *lropt, Integer *ifail) 

C++ Header Interface
#include <nag.h>
extern "C" {
void 
g02maf_ (const Integer &mtype, const Integer &pred, const Integer &prey, const Integer &n, const Integer &m, const double d[], const Integer &ldd, const Integer isx[], const Integer &lisx, const double y[], const Integer &mnstep, Integer &ip, Integer &nstep, double b[], const Integer &ldb, double fitsum[], const double ropt[], const Integer &lropt, Integer &ifail) 
}

The routine may be called by the names g02maf or nagf_correg_lars.
3
Description
g02maf implements the LARS algorithm of
Efron et al. (2004) as well as the modifications needed to perform forward stagewise linear regression and fit LASSO and positive LASSO models.
Given a vector of
$n$ observed values,
$y=\left\{{y}_{i}:i=1,2,\dots ,n\right\}$ and an
$n\times p$ design matrix
$X$, where the
$j$th column of
$X$, denoted
${x}_{j}$, is a vector of length
$n$ representing the
$j$th independent variable
${x}_{j}$, standardized such that
$\sum _{\mathit{i}=1}^{n}{x}_{ij}=0$, and
$\sum _{\mathit{i}=1}^{n}{x}_{ij}^{2}=1$ and a set of model parameters
$\beta $ to be estimated from the observed values, the LARS algorithm can be summarised as:

1. Set $k=1$ and all coefficients to zero, that is $\beta =0$.

2. Find the variable most correlated with $y$, say ${x}_{{j}_{1}}$. Add ${x}_{{j}_{1}}$ to the ‘most correlated’ set $\mathcal{A}$. If $p=1$ go to 8.

3. Take the largest possible step in the direction of ${x}_{{j}_{1}}$ (i.e., increase the magnitude of ${\beta}_{{j}_{1}}$) until some other variable, say ${x}_{{j}_{2}}$, has the same correlation with the current residual, $y-{x}_{{j}_{1}}{\beta}_{{j}_{1}}$.

4. Increment $k$ and add ${x}_{{j}_{k}}$ to $\mathcal{A}$.

5. If $\left|\mathcal{A}\right|=p$ go to 8.

6. Proceed in the ‘least angle direction’, that is, the direction which is equiangular between all variables in $\mathcal{A}$, altering the magnitude of the parameter estimates of those variables in $\mathcal{A}$, until the $k$th variable, ${x}_{{j}_{k}}$, has the same correlation with the current residual.

7. Go to 4.

8. Let $K=k$.
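Steps 1 and 2 of the algorithm above can be sketched in a few lines. The following is an illustrative pure-Python sketch (not the NAG implementation): it standardizes each column as described, then picks the variable most correlated with $y$.

```python
# Illustrative sketch of steps 1-2 of the LARS algorithm above:
# standardize each column, then find the variable most correlated
# with y. Pure Python; not the NAG implementation.
import math

def standardize(col):
    """Centre a column to mean 0 and scale it to unit sum of squares."""
    n = len(col)
    mean = sum(col) / n
    centred = [v - mean for v in col]
    norm = math.sqrt(sum(v * v for v in centred))
    return [v / norm for v in centred]

def most_correlated(X_cols, residual):
    """Step 2: the index j maximizing |x_j^T r|."""
    corr = [sum(x * r for x, r in zip(col, residual)) for col in X_cols]
    return max(range(len(corr)), key=lambda j: abs(corr[j]))

y = [1.0, 2.0, 3.0, 4.0]
x0 = standardize([1.0, 2.1, 2.9, 4.2])   # strongly related to y
x1 = standardize([4.0, 1.0, 3.0, 2.0])   # unrelated noise
j1 = most_correlated([x0, x1], y)
print(j1)                                 # variable 0 enters A first
```

Subsequent steps then move the fit along the equiangular direction; the NAG routine handles that, along with all the numerical care it requires.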
As well as being a model selection process in its own right, with a small number of modifications the LARS algorithm can be used to fit the LASSO model of
Tibshirani (1996), a positive LASSO model, where the independent variables enter the model in their defined direction (i.e.,
${\beta}_{kj}\ge 0$), forward stagewise linear regression (
Hastie et al. (2001)) and forward selection (
Weisberg (1985)). Details of the required modifications in each of these cases are given in
Efron et al. (2004).
The LASSO model of
Tibshirani (1996) is given by
$\underset{\alpha ,{\beta}_{k}}{\mathrm{minimize}}\phantom{\rule{0.5em}{0ex}}{\Vert y-\alpha -{X}^{\mathrm{T}}{\beta}_{k}\Vert}^{2}\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{0.5em}{0ex}}{\Vert {\beta}_{k}\Vert}_{1}\le {t}_{k}$
for all values of
${t}_{k}$, where
$\alpha =\overline{y}={n}^{-1}{\displaystyle \sum _{\mathit{i}=1}^{n}}{y}_{i}$. The positive LASSO model is the same as the standard LASSO model, given above, with the added constraint that
${\beta}_{kj}\ge 0$, for $j=1,2,\dots ,p$.
Unlike the standard LARS algorithm, when fitting either of the LASSO models, variables can be dropped as well as added to the set $\mathcal{A}$. Therefore, the total number of steps $K$ is no longer bounded by $p$.
Forward stagewise linear regression is an iterative procedure of the form:

1. Initialize $k=1$ and the vector of residuals ${r}_{0}=y-\alpha $.

2. For each $j=1,2,\dots ,p$ calculate ${c}_{j}={x}_{j}^{\mathrm{T}}{r}_{k-1}$. The value ${c}_{j}$ is therefore proportional to the correlation between the $j$th independent variable and the vector of previous residual values, ${r}_{k-1}$.

3. Calculate ${j}_{k}={\displaystyle \underset{j}{\mathrm{argmax}}}\phantom{\rule{0.25em}{0ex}}\left|{c}_{j}\right|$, the value of $j$ with the largest absolute value of ${c}_{j}$.

4. If $\left|{c}_{{j}_{k}}\right|<\epsilon $ then go to 7.

5. Update the residual values, with
${r}_{k}={r}_{k-1}-\delta \phantom{\rule{0.125em}{0ex}}\mathrm{sign}\left({c}_{{j}_{k}}\right){x}_{{j}_{k}}$
where $\delta $ is a small constant and $\mathrm{sign}\left({c}_{{j}_{k}}\right)=-1$ when ${c}_{{j}_{k}}<0$ and $1$ otherwise.

6. Increment $k$ and go to 2.

7. Set $K=k$.
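The iteration above maps almost line for line onto code. A minimal illustrative sketch follows (pure Python, assuming standardized predictors and $\alpha =0$; the constants $\delta $ and $\epsilon $ here are arbitrary choices for the example, not NAG defaults):

```python
# Minimal forward stagewise linear regression (steps 1-7 above).
# Assumes the columns of X are already standardized; illustrative only.
import math

def forward_stagewise(X_cols, y, delta=0.01, eps=0.02, max_iter=100000):
    """Forward stagewise linear regression with alpha = 0."""
    r = list(y)                           # step 1: r_0 = y - alpha
    p = len(X_cols)
    beta = [0.0] * p
    for _ in range(max_iter):
        # step 2: c_j = x_j^T r_{k-1}
        c = [sum(x * ri for x, ri in zip(col, r)) for col in X_cols]
        # step 3: variable with the largest |c_j|
        jk = max(range(p), key=lambda j: abs(c[j]))
        if abs(c[jk]) < eps:              # step 4: stop
            break
        s = -1.0 if c[jk] < 0 else 1.0
        beta[jk] += delta * s             # small step towards x_jk
        # step 5: r_k = r_{k-1} - delta * sign(c_jk) * x_jk
        r = [ri - delta * s * x for ri, x in zip(r, X_cols[jk])]
    return beta, r

# One standardized predictor proportional to y: the RSS should shrink.
raw = [1.0, 2.0, 3.0, 4.0]
mean = sum(raw) / len(raw)
centred = [v - mean for v in raw]
norm = math.sqrt(sum(v * v for v in centred))
x = [v / norm for v in centred]
y = [-1.2, -0.4, 0.4, 1.2]
beta, r = forward_stagewise([x], y)
rss = sum(ri * ri for ri in r)
print(rss)   # a small residual sum of squares remains
```

Note how small $\delta $ makes progress slow: this is why, as stated below, the number of steps can be several orders of magnitude larger than for LARS.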
If the largest possible step were to be taken, that is
$\delta =\left|{c}_{{j}_{k}}\right|$, then forward stagewise linear regression reverts to the standard forward selection method as implemented in
g02eef.
The LARS procedure results in
$K$ models, one for each step of the fitting process. In order to aid in choosing which is the most suitable,
Efron et al. (2004) introduced a
${C}_{p}$-type statistic given by
${C}_{p}^{\left(k\right)}=\frac{{\mathrm{RSS}}_{k}}{{\sigma}^{2}}-n+2{\nu}_{k},$
where
${\nu}_{k}$ is the approximate degrees of freedom for the
$k$th step and
${\sigma}^{2}=\frac{{\mathrm{RSS}}_{K}}{n-{\nu}_{K}}\text{.}$
One way of choosing a model is therefore to take the one with the smallest value of ${C}_{p}^{\left(k\right)}$.
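With ${\mathrm{RSS}}_{k}$, ${\nu}_{k}$, $n$ and ${\sigma}^{2}$ to hand, this selection rule reduces to a few lines. An illustrative Python sketch with invented RSS and degrees-of-freedom values (not output from the routine):

```python
# Computing the Cp-type statistic Cp(k) = RSS_k / sigma^2 - n + 2*nu_k
# and choosing the step with the smallest value. The RSS and
# degrees-of-freedom values below are invented purely for illustration.
def cp_statistic(rss_k, sigma2, n, nu_k):
    return rss_k / sigma2 - n + 2.0 * nu_k

n = 20
rss = [9.0, 5.0, 3.2, 3.1, 3.05]     # RSS_k for steps k = 1..5 (illustrative)
nu = [2.0, 3.0, 4.0, 5.0, 6.0]       # approximate degrees of freedom nu_k
sigma2 = rss[-1] / (n - nu[-1])      # sigma^2 = RSS_K / (n - nu_K)
cp = [cp_statistic(r, sigma2, n, v) for r, v in zip(rss, nu)]
best_step = min(range(len(cp)), key=lambda k: cp[k]) + 1
print(best_step)                      # step 3 minimizes Cp here
```

The penalty term $2{\nu}_{k}$ is what stops the rule from always favouring the last (largest) model.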
4
References
Efron B, Hastie T, Johnstone I and Tibshirani R (2004) Least Angle Regression The Annals of Statistics (Volume 32) 2 407–499
Hastie T, Tibshirani R and Friedman J (2001) The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer (New York)
Tibshirani R (1996) Regression Shrinkage and Selection via the Lasso Journal of the Royal Statistical Society, Series B (Methodological) (Volume 58) 1 267–288
Weisberg S (1985) Applied Linear Regression Wiley
5
Arguments

1:
$\mathbf{mtype}$ – Integer
Input

On entry: indicates the type of model to fit.
 ${\mathbf{mtype}}=1$
 LARS is performed.
 ${\mathbf{mtype}}=2$
 Forward linear stagewise regression is performed.
 ${\mathbf{mtype}}=3$
 A LASSO model is fitted.
 ${\mathbf{mtype}}=4$
 A positive LASSO model is fitted.
Constraint:
${\mathbf{mtype}}=1$, $2$, $3$ or $4$.

2:
$\mathbf{pred}$ – Integer
Input

On entry: indicates the type of data preprocessing to perform on the independent variables supplied in
d to comply with the standardized form of the design matrix.
 ${\mathbf{pred}}=0$
 No preprocessing is performed.
 ${\mathbf{pred}}=1$
 Each of the independent variables,
${x}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$, is mean centred prior to fitting the model. The means of the independent variables, $\overline{x}$, are returned in b, with
${\overline{x}}_{\mathit{j}}={\mathbf{b}}\left(\mathit{j},{\mathbf{nstep}}+2\right)$, for $\mathit{j}=1,2,\dots ,p$.
 ${\mathbf{pred}}=2$
 Each independent variable is normalized, with the $j$th variable scaled by $1/\sqrt{{x}_{j}^{\mathrm{T}}{x}_{j}}$. The scaling factor used by variable $j$ is returned in ${\mathbf{b}}\left(\mathit{j},{\mathbf{nstep}}+1\right)$.
 ${\mathbf{pred}}=3$
 As ${\mathbf{pred}}=1$ and $2$, all of the independent variables are mean centred prior to being normalized.
Suggested value:
${\mathbf{pred}}=3$.
Constraint:
${\mathbf{pred}}=0$, $1$, $2$ or $3$.

3:
$\mathbf{prey}$ – Integer
Input

On entry: indicates the type of data preprocessing to perform on the dependent variable supplied in
y.
 ${\mathbf{prey}}=0$
 No preprocessing is performed; this is equivalent to setting $\alpha =0$.
 ${\mathbf{prey}}=1$
 The dependent variable, $y$, is mean centred prior to fitting the model, so $\alpha =\overline{y}$. This is equivalent to fitting a non-penalized intercept to the model, and the degrees of freedom etc. are adjusted accordingly.
The value of $\alpha $ used is returned in ${\mathbf{fitsum}}\left(1,{\mathbf{nstep}}+1\right)$.
Suggested value:
${\mathbf{prey}}=1$.
Constraint:
${\mathbf{prey}}=0$ or $1$.

4:
$\mathbf{n}$ – Integer
Input

On entry: $n$, the number of observations.
Constraint:
${\mathbf{n}}\ge 1$.

5:
$\mathbf{m}$ – Integer
Input

On entry: $m$, the total number of independent variables.
Constraint:
${\mathbf{m}}\ge 1$.

6:
$\mathbf{d}\left({\mathbf{ldd}},*\right)$ – Real (Kind=nag_wp) array
Input

Note: the second dimension of the array
d
must be at least
${\mathbf{m}}$.
On entry:
$D$, the data, which along with
pred and
isx, defines the design matrix
$X$. The
$\mathit{i}$th observation for the
$\mathit{j}$th variable must be supplied in
${\mathbf{d}}\left(\mathit{i},\mathit{j}\right)$, for
$\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and
$\mathit{j}=1,2,\dots ,{\mathbf{m}}$.

7:
$\mathbf{ldd}$ – Integer
Input

On entry: the first dimension of the array
d as declared in the (sub)program from which
g02maf is called.
Constraint:
${\mathbf{ldd}}\ge {\mathbf{n}}$.

8:
$\mathbf{isx}\left({\mathbf{lisx}}\right)$ – Integer array
Input

On entry: indicates which independent variables from
d will be included in the design matrix,
$X$.
If
${\mathbf{lisx}}=0$, all variables are included in the design matrix and
isx is not referenced.
If
${\mathbf{lisx}}={\mathbf{m}}$, ${\mathbf{isx}}\left(\mathit{j}\right)$ must be set as follows, for
$\mathit{j}=1,2,\dots ,{\mathbf{m}}$:
 ${\mathbf{isx}}\left(j\right)=1$
 To indicate that the $j$th variable, as supplied in d, is included in the design matrix;
 ${\mathbf{isx}}\left(j\right)=0$
 To indicate that the $j$th variable, as supplied in d, is not included in the design matrix;
and
$p={\displaystyle \sum _{\mathit{j}=1}^{m}}{\mathbf{isx}}\left(\mathit{j}\right)$.
Constraint:
if ${\mathbf{lisx}}={\mathbf{m}}$,
${\mathbf{isx}}\left(\mathit{j}\right)=0$ or $1$ and at least one value of ${\mathbf{isx}}\left(\mathit{j}\right)\ne 0$, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$.
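The mapping from isx to $p$ can be illustrated as follows (a Python sketch; the routine itself of course takes a Fortran Integer array):

```python
# Sketch of how isx selects columns of d for the design matrix X when
# lisx = m: p is the number of nonzero entries of isx.
m = 6
isx = [1, 0, 1, 1, 0, 0]                       # variables 1, 3 and 4 included
p = sum(isx)
included = [j + 1 for j, flag in enumerate(isx) if flag == 1]
print(p, included)                              # 3 [1, 3, 4]
```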

9:
$\mathbf{lisx}$ – Integer
Input

On entry: length of the
isx array.
Constraint:
${\mathbf{lisx}}=0$ or ${\mathbf{m}}$.

10:
$\mathbf{y}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array
Input

On entry: $y$, the observations on the dependent variable.

11:
$\mathbf{mnstep}$ – Integer
Input

On entry: the maximum number of steps to carry out in the model fitting process.
If ${\mathbf{mtype}}=1$, i.e., a LARS is being performed, the maximum number of steps the algorithm will take is $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(p,n\right)$ if ${\mathbf{prey}}=0$, otherwise $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(p,n-1\right)$.
If ${\mathbf{mtype}}=2$, i.e., a forward linear stagewise regression is being performed, the maximum number of steps the algorithm will take is likely to be several orders of magnitude greater and is no longer bounded by $p$ or $n$.
If ${\mathbf{mtype}}=3$ or $4$, i.e., a LASSO or positive LASSO model is being fitted, the maximum number of steps the algorithm will take lies somewhere between that of the LARS and forward linear stagewise regression; again, it is no longer bounded by $p$ or $n$.
Constraint:
${\mathbf{mnstep}}\ge 1$.

12:
$\mathbf{ip}$ – Integer
Output

On exit:
$p$, number of parameter estimates.
If
${\mathbf{lisx}}=0$,
$p={\mathbf{m}}$, i.e., the number of variables in
d.
Otherwise
$p$ is the number of nonzero values in
isx.

13:
$\mathbf{nstep}$ – Integer
Output

On exit: $K$, the actual number of steps carried out in the model fitting process.

14:
$\mathbf{b}\left({\mathbf{ldb}},*\right)$ – Real (Kind=nag_wp) array
Output

Note: the second dimension of the array
b
must be at least
${\mathbf{mnstep}}+2$.
On exit:
$\beta $ the parameter estimates, with
${\mathbf{b}}\left(j,k\right)={\beta}_{kj}$, the parameter estimate for the
$j$th variable,
$j=1,2,\dots ,p$ at the
$k$th step of the model fitting process,
$k=1,2,\dots ,{\mathbf{nstep}}$.
By default, when
${\mathbf{pred}}=2$ or
$3$ the parameter estimates are rescaled prior to being returned. If the parameter estimates are required on the normalized scale, then this can be overridden via
ropt.
The values held in the remaining part of
b depend on the type of preprocessing performed.
 If ${\mathbf{pred}}=0$,
 $\begin{array}{lll}{\mathbf{b}}\left(j,{\mathbf{nstep}}+1\right)& =& 1\\ {\mathbf{b}}\left(j,{\mathbf{nstep}}+2\right)& =& 0\end{array}$
 If ${\mathbf{pred}}=1$,
 $\begin{array}{lll}{\mathbf{b}}\left(j,{\mathbf{nstep}}+1\right)& =& 1\\ {\mathbf{b}}\left(j,{\mathbf{nstep}}+2\right)& =& {\overline{x}}_{j}\end{array}$
 If ${\mathbf{pred}}=2$,
 $\begin{array}{lll}{\mathbf{b}}\left(j,{\mathbf{nstep}}+1\right)& =& 1/\sqrt{{x}_{j}^{\mathrm{T}}{x}_{j}}\\ {\mathbf{b}}\left(j,{\mathbf{nstep}}+2\right)& =& 0\end{array}$
 If ${\mathbf{pred}}=3$,
 $\begin{array}{lll}{\mathbf{b}}\left(j,{\mathbf{nstep}}+1\right)& =& 1/\sqrt{{\left({x}_{j}-{\overline{x}}_{j}\right)}^{\mathrm{T}}\left({x}_{j}-{\overline{x}}_{j}\right)}\\ {\mathbf{b}}\left(j,{\mathbf{nstep}}+2\right)& =& {\overline{x}}_{j}\end{array}$
for $j=1,2,\dots ,p$.
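The centring and scaling values stored in these two extra columns are simple column statistics. An illustrative computation of the pair for one variable when ${\mathbf{pred}}=3$ (a Python sketch, not the NAG code):

```python
# The two bookkeeping values documented above for pred = 3:
# b(j, nstep+2) holds the variable mean, and b(j, nstep+1) holds the
# scale factor 1/sqrt((x_j - xbar_j)^T (x_j - xbar_j)). Python sketch.
import math

def preprocessing_factors(col):
    n = len(col)
    xbar = sum(col) / n
    ss = sum((v - xbar) ** 2 for v in col)      # (x - xbar)^T (x - xbar)
    return 1.0 / math.sqrt(ss), xbar            # (scale, mean)

scale, mean = preprocessing_factors([1.0, 2.0, 3.0, 4.0])
print(mean, scale)
```

Multiplying a normalized-scale estimate by the stored scale factor recovers the estimate on the original scale of the data, which is what the default rescaling does.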

15:
$\mathbf{ldb}$ – Integer
Input

On entry: the first dimension of the array
b as declared in the (sub)program from which
g02maf is called.
Constraint:
${\mathbf{ldb}}\ge p$, where
$p$ is the number of parameter estimates as described in
ip.

16:
$\mathbf{fitsum}\left(6,{\mathbf{mnstep}}+1\right)$ – Real (Kind=nag_wp) array
Output

On exit: summaries of the model fitting process. When
$k=1,2,\dots ,{\mathbf{nstep}}$,
 ${\mathbf{fitsum}}\left(1,k\right)$
 ${\Vert {\beta}_{k}\Vert}_{1}$, the sum of the absolute values of the parameter estimates for the $k$th step of the modelling fitting process. If ${\mathbf{pred}}=2$ or $3$, the scaled parameter estimates are used in the summation.
 ${\mathbf{fitsum}}\left(2,k\right)$
 ${\mathrm{RSS}}_{k}$, the residual sums of squares for the $k$th step, where ${\mathrm{RSS}}_{k}={\Vert y-{X}^{\mathrm{T}}{\beta}_{k}\Vert}^{2}$.
 ${\mathbf{fitsum}}\left(3,k\right)$
 ${\nu}_{k}$, approximate degrees of freedom for the $k$th step.
 ${\mathbf{fitsum}}\left(4,k\right)$
 ${C}_{p}^{\left(k\right)}$, a ${C}_{p}$-type statistic for the $k$th step, where ${C}_{p}^{\left(k\right)}=\frac{{\mathrm{RSS}}_{k}}{{\sigma}^{2}}-n+2{\nu}_{k}$.
 ${\mathbf{fitsum}}\left(5,k\right)$
 ${\hat{C}}_{k}$, correlation between the residual at step $k-1$ and the most correlated variable not yet in the active set $\mathcal{A}$, where the residual at step $0$ is $y$.
 ${\mathbf{fitsum}}\left(6,k\right)$
 ${\hat{\gamma}}_{k}$, the step size used at step $k$.
In addition
 ${\mathbf{fitsum}}\left(1,{\mathbf{nstep}}+1\right)$
 $\alpha $, with $\alpha =\overline{y}$ if ${\mathbf{prey}}=1$ and $0$ otherwise.
 ${\mathbf{fitsum}}\left(2,{\mathbf{nstep}}+1\right)$
 ${\mathrm{RSS}}_{0}$, the residual sums of squares for the null model, where ${\mathrm{RSS}}_{0}={y}^{\mathrm{T}}y$ when ${\mathbf{prey}}=0$ and ${\mathrm{RSS}}_{0}={\left(y-\overline{y}\right)}^{\mathrm{T}}\left(y-\overline{y}\right)$ otherwise.
 ${\mathbf{fitsum}}\left(3,{\mathbf{nstep}}+1\right)$
 ${\nu}_{0}$, the degrees of freedom for the null model, where ${\nu}_{0}=0$ if ${\mathbf{prey}}=0$ and ${\nu}_{0}=1$ otherwise.
 ${\mathbf{fitsum}}\left(4,{\mathbf{nstep}}+1\right)$
 ${C}_{p}^{\left(0\right)}$, a ${C}_{p}$-type statistic for the null model, where ${C}_{p}^{\left(0\right)}=\frac{{\mathrm{RSS}}_{0}}{{\sigma}^{2}}-n+2{\nu}_{0}$.
 ${\mathbf{fitsum}}\left(5,{\mathbf{nstep}}+1\right)$
 ${\sigma}^{2}$, where ${\sigma}^{2}=\frac{{\mathrm{RSS}}_{K}}{n-{\nu}_{K}}$ and $K={\mathbf{nstep}}$.
Although the ${C}_{p}$ statistics described above are returned when ${\mathbf{ifail}}={\mathbf{112}}$, they may not be meaningful due to the estimate ${\sigma}^{2}$ not being based on the saturated model.

17:
$\mathbf{ropt}\left({\mathbf{lropt}}\right)$ – Real (Kind=nag_wp) array
Input

On entry: optional parameters to control various aspects of the LARS algorithm.
The default value will be used for
${\mathbf{ropt}}\left(i\right)$ if
${\mathbf{lropt}}<i$, therefore setting
${\mathbf{lropt}}=0$ will use the default values for all optional parameters and
ropt need not be set. The default value will also be used if an invalid value is supplied for a particular argument, for example, setting
${\mathbf{ropt}}\left(i\right)=-1$ will use the default value for argument
$i$.
 ${\mathbf{ropt}}\left(1\right)$
 The minimum step size that will be taken.
Default is
$100\times \mathit{eps}$, where
$\mathit{eps}$ is the
machine precision returned by
x02ajf.
 ${\mathbf{ropt}}\left(2\right)$
 General tolerance, used amongst other things, for comparing correlations.
Default is ${\mathbf{ropt}}\left(1\right)$.
 ${\mathbf{ropt}}\left(3\right)$
 If set to $1$, parameter estimates are rescaled before being returned.
If set to $0$, no rescaling is performed.
This argument has no effect when ${\mathbf{pred}}=0$ or $1$.
Default is for the parameter estimates to be rescaled.
 ${\mathbf{ropt}}\left(4\right)$
 If set to $1$, it is assumed that the model contains an intercept during the model fitting process and when calculating the degrees of freedom.
If set to $0$, no intercept is assumed.
This has no effect on the amount of preprocessing performed on
y.
Default is to treat the model as having an intercept when ${\mathbf{prey}}=1$ and as not having an intercept when ${\mathbf{prey}}=0$.
 ${\mathbf{ropt}}\left(5\right)$
 As implemented, the LARS algorithm can either work directly with $y$ and $X$, or it can work with the cross-product matrices, ${X}^{\mathrm{T}}y$ and ${X}^{\mathrm{T}}X$. In most cases it is more efficient to work with the cross-product matrices. This flag allows you direct control over which method is used; however, the default value will usually be the best choice.
If ${\mathbf{ropt}}\left(5\right)=1$, $y$ and $X$ are worked with directly.
If ${\mathbf{ropt}}\left(5\right)=0$, the crossproduct matrices are used.
Default is $1$ when $p\ge 500$ and $n<p$ and $0$ otherwise.
Constraints:
 ${\mathbf{ropt}}\left(1\right)>\mathit{machineprecision}$;
 ${\mathbf{ropt}}\left(2\right)>\mathit{machineprecision}$;
 ${\mathbf{ropt}}\left(3\right)=0$ or $1$;
 ${\mathbf{ropt}}\left(4\right)=0$ or $1$;
 ${\mathbf{ropt}}\left(5\right)=0$ or $1$.
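The fall-back rule described above (use the default when the option is not supplied, i.e., lropt is less than $i$, or when the supplied value is invalid) can be sketched as follows; the defaults and validity checks here are illustrative placeholders, not NAG's actual values:

```python
# Fall-back rule for the optional parameters in ropt: option i takes
# its default when lropt < i, or when the supplied value is invalid
# (e.g. setting ropt(i) = -1 requests the default, per the documentation).
# Defaults and validity checks below are illustrative placeholders.
def resolve_option(ropt, i, default, is_valid):
    if len(ropt) < i:                  # lropt < i: option not supplied
        return default
    value = ropt[i - 1]                # ropt is 1-indexed in the docs
    return value if is_valid(value) else default

eps = 2.22e-16                         # illustrative machine precision
ropt = [-1.0, 0.5]                     # lropt = 2: options 3-5 not supplied
min_step = resolve_option(ropt, 1, 100 * eps, lambda v: v > eps)
tol = resolve_option(ropt, 2, min_step, lambda v: v > eps)
rescale = resolve_option(ropt, 3, 1, lambda v: v in (0, 1))
print(min_step, tol, rescale)
```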

18:
$\mathbf{lropt}$ – Integer
Input

On entry: length of the options array
ropt.
Constraint:
$0\le {\mathbf{lropt}}\le 5$.

19:
$\mathbf{ifail}$ – Integer
Input/Output

On entry:
ifail must be set to
$0$,
$-1$ or
$1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not.
If halting is not appropriate, the value
$-1$ or
$1$ is recommended. If message printing is undesirable, then the value
$1$ is recommended. Otherwise, the value
$-1$ is recommended since useful values can be provided in some output arguments even when
${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit.
When the value $\mathbf{-1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6
Error Indicators and Warnings
If on entry
${\mathbf{ifail}}=0$ or
$-1$, explanatory error messages are output on the current error message unit (as defined by
x04aaf).
Errors or warnings detected by the routine:
Note: in some cases g02maf may return useful information.
 ${\mathbf{ifail}}=11$

On entry, ${\mathbf{mtype}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{mtype}}=1$, $2$, $3$ or $4$.
 ${\mathbf{ifail}}=21$

On entry, ${\mathbf{pred}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{pred}}=0$, $1$, $2$ or $3$.
 ${\mathbf{ifail}}=31$

On entry, ${\mathbf{prey}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{prey}}=0$ or $1$.
 ${\mathbf{ifail}}=41$

On entry, ${\mathbf{n}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{n}}\ge 1$.
 ${\mathbf{ifail}}=51$

On entry, ${\mathbf{m}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{m}}\ge 1$.
 ${\mathbf{ifail}}=71$

On entry, ${\mathbf{ldd}}=\u2329\mathit{\text{value}}\u232a$ and ${\mathbf{n}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{ldd}}\ge {\mathbf{n}}$.
 ${\mathbf{ifail}}=81$

On entry, ${\mathbf{isx}}\left(\u2329\mathit{\text{value}}\u232a\right)=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{isx}}\left(i\right)=0$ or $1$, for all $i$.
 ${\mathbf{ifail}}=82$

On entry, all values of
isx are zero.
Constraint: at least one value of
isx must be nonzero.
 ${\mathbf{ifail}}=91$

On entry, ${\mathbf{lisx}}=\u2329\mathit{\text{value}}\u232a$ and ${\mathbf{m}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{lisx}}=0$ or ${\mathbf{m}}$.
 ${\mathbf{ifail}}=111$

On entry, ${\mathbf{mnstep}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{mnstep}}\ge 1$.
 ${\mathbf{ifail}}=112$

Fitting process did not finish in
mnstep steps. Try increasing the size of
mnstep and supplying larger output arrays.
All output is returned as documented, up to step
mnstep, however,
$\sigma $ and the
${C}_{p}$ statistics may not be meaningful.
 ${\mathbf{ifail}}=151$

On entry, ${\mathbf{ldb}}=\u2329\mathit{\text{value}}\u232a$ and ${\mathbf{m}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: if ${\mathbf{lisx}}=0$ then ${\mathbf{ldb}}\ge {\mathbf{m}}$.
 ${\mathbf{ifail}}=152$

On entry, ${\mathbf{ldb}}=\u2329\mathit{\text{value}}\u232a$ and $p=\u2329\mathit{\text{value}}\u232a$.
Constraint: if ${\mathbf{lisx}}={\mathbf{m}}$ then ${\mathbf{ldb}}\ge p$.
 ${\mathbf{ifail}}=161$

${\sigma}^{2}$ is approximately zero and hence the ${C}_{p}$type criterion cannot be calculated. All other output is returned as documented.
 ${\mathbf{ifail}}=162$

${\nu}_{K}=n$, therefore $\sigma $ has been set to a large value. Output is returned as documented.
 ${\mathbf{ifail}}=163$

Degenerate model, no variables added and ${\mathbf{nstep}}=0$. Output is returned as documented.
 ${\mathbf{ifail}}=181$

On entry, ${\mathbf{lropt}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: $0\le {\mathbf{lropt}}\le 5$.
 ${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 7 in the Introduction to the NAG Library FL Interface for further information.
 ${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See
Section 8 in the Introduction to the NAG Library FL Interface for further information.
 ${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See
Section 9 in the Introduction to the NAG Library FL Interface for further information.
7
Accuracy
Not applicable.
8
Parallelism and Performance
g02maf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
g02maf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the
X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the
Users' Note for your implementation for any additional implementation-specific information.
9
Further Comments
g02maf returns the parameter estimates at various points along the solution path of a LARS, LASSO or stagewise regression analysis. If the solution is required at a different set of points, for example when performing cross-validation, then
g02mcf can be used.
For datasets with a large number of observations,
$n$, it may be impractical to store the full
$X$ matrix in memory in one go. In such instances the cross-product matrices
${X}^{\mathrm{T}}y$ and
${X}^{\mathrm{T}}X$ can be calculated, using for example, multiple calls to
g02buf and
g02bzf, and
g02mbf called to perform the analysis.
The amount of workspace used by
g02maf depends on whether the cross-product matrices are being used internally (as controlled by
ropt). If the cross-product matrices are being used then
g02maf internally allocates approximately
$2{p}^{2}+4p+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(n,p\right)$ elements of real storage compared to
${p}^{2}+3p+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(n,p\right)+2n+n\times p$ elements when
$X$ and
$y$ are used directly. In both cases approximately
$5p$ elements of integer storage are also used. If a forward linear stagewise analysis is performed then an additional
${p}^{2}+5p$ elements of real storage are required.
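As a quick sizing aid, the storage formulas above can be evaluated directly (elements of real storage, not bytes); a small sketch:

```python
# Evaluating the real-workspace formulas above (elements, not bytes).
def workspace_elements(n, p, cross_product=True):
    if cross_product:
        return 2 * p * p + 4 * p + max(n, p)          # cross-product method
    return p * p + 3 * p + max(n, p) + 2 * n + n * p  # X and y used directly

n, p = 10000, 50
print(workspace_elements(n, p, True))    # 15200
print(workspace_elements(n, p, False))   # 532650
```

For tall, narrow problems like this one ($n\gg p$) the cross-product method needs far less storage, since it avoids the $n\times p$ term.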
10
Example
This example performs a LARS on a simulated dataset with $20$ observations and $6$ independent variables.
10.1
Program Text
10.2
Program Data
10.3
Program Results
This example plot shows the regression coefficients (${\beta}_{k}$) plotted against the scaled absolute sum of the parameter estimates (${\Vert {\beta}_{k}\Vert}_{1}$).