
# NAG Toolbox: nag_correg_glm_normal (g02ga)

## Purpose

nag_correg_glm_normal (g02ga) fits a generalized linear model with normal errors.

## Syntax

[s, rss, idf, b, irank, se, cov, v, ifail] = g02ga(link, mean, x, isx, ip, y, s, 'n', n, 'm', m, 'wt', wt, 'a', a, 'v', v, 'tol', tol, 'maxit', maxit, 'iprint', iprint, 'eps', eps)
[s, rss, idf, b, irank, se, cov, v, ifail] = nag_correg_glm_normal(link, mean, x, isx, ip, y, s, 'n', n, 'm', m, 'wt', wt, 'a', a, 'v', v, 'tol', tol, 'maxit', maxit, 'iprint', iprint, 'eps', eps)
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 23: offset and weight were removed from the interface; v, wt, tol, maxit, iprint, eps and a became optional.

## Description

A generalized linear model with Normal errors consists of the following elements:
(a) a set of $n$ observations, $y_i$, from a Normal distribution with probability density function
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right),$$
where $\mu$ is the mean and $\sigma^2$ is the variance.
(b) $X$, a set of $p$ independent variables for each observation, $x_1, x_2, \dots, x_p$.
(c) a linear model:
$$\eta = \sum \beta_j x_j.$$
(d) a link between the linear predictor, $\eta$, and the mean of the distribution, $\mu$, i.e., $\eta = g(\mu)$. The possible link functions are:
(i) exponent link: $\eta = \mu^a$, for a constant $a$;
(ii) identity link: $\eta = \mu$;
(iii) log link: $\eta = \log\mu$;
(iv) square root link: $\eta = \sqrt{\mu}$;
(v) reciprocal link: $\eta = \frac{1}{\mu}$.
(e) a measure of fit, the residual sum of squares $= \sum {(y_i - \hat{\mu}_i)}^2$.
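The five link functions in (d) are easy to state directly. The sketch below (plain Python with illustrative names, not the NAG interface) pairs each link $g$ with its inverse $g^{-1}$, taking $a = -1$ for the exponent link:

```python
import math

# Each entry maps a link code to the pair (g, g_inverse); the codes mirror
# the 'E','I','L','S','R' choices described above, but the names are illustrative.
a = -1.0  # constant for the exponent link (assumed value for this sketch)
links = {
    'E': (lambda mu: mu ** a,       lambda eta: eta ** (1.0 / a)),  # exponent
    'I': (lambda mu: mu,            lambda eta: eta),               # identity
    'L': (lambda mu: math.log(mu),  lambda eta: math.exp(eta)),     # log
    'S': (lambda mu: math.sqrt(mu), lambda eta: eta * eta),         # square root
    'R': (lambda mu: 1.0 / mu,      lambda eta: 1.0 / eta),         # reciprocal
}

# Applying g and then g^{-1} recovers the mean mu for every link.
mu = 4.0
roundtrip = {code: g_inv(g(mu)) for code, (g, g_inv) in links.items()}
print(roundtrip)
```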
The linear parameters are estimated by iterative weighted least squares. An adjusted dependent variable, $z$, is formed:
$$z = \eta + (y - \mu)\frac{d\eta}{d\mu}$$
and a working weight, $w$,
$$w = \left(\frac{d\eta}{d\mu}\right)^{-2}.$$
At each iteration an approximation to the estimate of $\beta$, $\hat{\beta}$, is found by the weighted least squares regression of $z$ on $X$ with weights $w$.
nag_correg_glm_normal (g02ga) finds a $QR$ decomposition of $w^{\frac{1}{2}}X$, i.e., $w^{\frac{1}{2}}X = QR$, where $R$ is a $p$ by $p$ triangular matrix and $Q$ is an $n$ by $p$ column orthogonal matrix.
If $R$ is of full rank, then $\hat{\beta}$ is the solution to
$$R\hat{\beta} = Q^{\mathrm{T}} w^{\frac{1}{2}} z.$$
If $R$ is not of full rank, a solution is obtained by means of a singular value decomposition (SVD) of $R$:
$$R = Q^{*} \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} P^{\mathrm{T}},$$
where $D$ is a $k$ by $k$ diagonal matrix with nonzero diagonal elements, $k$ being the rank of $R$ and of $w^{\frac{1}{2}}X$.
This gives the solution
$$\hat{\beta} = P_1 D^{-1} \begin{pmatrix} Q^{*} & 0 \\ 0 & I \end{pmatrix} Q^{\mathrm{T}} w^{\frac{1}{2}} z,$$
$P_1$ being the first $k$ columns of $P$, i.e., $P = (P_1 \; P_0)$.
The iterations are continued until there is only a small change in the residual sum of squares.
The initial values for the algorithm are obtained by taking
$$\hat{\eta} = g(y).$$
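The scheme above (start from $\hat{\eta} = g(y)$, form $z$ and $w$, regress, repeat) can be sketched in a few lines. This is an illustrative plain-Python version for a reciprocal link ($\eta = 1/\mu$, so $d\eta/d\mu = -1/\mu^2$) and a two-parameter model solved through its 2 by 2 weighted normal equations; the routine itself works via a $QR$ decomposition of $w^{\frac{1}{2}}X$ instead:

```python
# Data as in the worked example later in this document.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [25.0, 10.0, 6.0, 4.0, 3.0]

def wls(xs, zs, ws):
    """Weighted least squares of z on (1, x): solve the 2x2 normal equations."""
    s0 = sum(ws)
    s1 = sum(w * xi for w, xi in zip(ws, xs))
    s2 = sum(w * xi * xi for w, xi in zip(ws, xs))
    t0 = sum(w * zi for w, zi in zip(ws, zs))
    t1 = sum(w * xi * zi for w, xi, zi in zip(ws, xs, zs))
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det, (s0 * t1 - s1 * t0) / det

eta = [1.0 / yi for yi in y]           # initial values: eta_hat = g(y)
for _ in range(15):                    # iterate well past convergence
    mu = [1.0 / e for e in eta]        # inverse link
    d = [-1.0 / (m * m) for m in mu]   # d eta / d mu
    z = [e + (yi - m) * di for e, yi, m, di in zip(eta, y, mu, d)]
    w = [1.0 / (di * di) for di in d]  # working weight (d eta / d mu)^-2
    b0, b1 = wls(x, z, w)
    eta = [b0 + b1 * xi for xi in x]   # new linear predictor

rss = sum((yi - 1.0 / e) ** 2 for yi, e in zip(y, eta))
print(round(b0, 4), round(b1, 4), round(rss, 4))
```

For these data the loop settles to the same estimates the routine reports in the example output below.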
The fit of the model can be assessed by examining and testing the residual sum of squares, in particular comparing the difference in residual sums of squares between nested models, i.e., when one model is a sub-model of the other.
Let $\mathrm{RSS}_f$ be the residual sum of squares for the full model with degrees of freedom $\nu_f$ and let $\mathrm{RSS}_s$ be the residual sum of squares for the sub-model with degrees of freedom $\nu_s$; then
$$F = \frac{(\mathrm{RSS}_s - \mathrm{RSS}_f)/(\nu_s - \nu_f)}{\mathrm{RSS}_f/\nu_f}$$
has, approximately, an $F$-distribution with $(\nu_s - \nu_f)$, $\nu_f$ degrees of freedom.
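As a worked instance of the statistic above, with made-up residual sums of squares and degrees of freedom:

```python
# Illustrative values only (not taken from any example in this document).
rss_f, nu_f = 12.5, 16   # full model
rss_s, nu_s = 20.0, 18   # sub-model (nested in the full model)

# F = ((RSS_s - RSS_f) / (nu_s - nu_f)) / (RSS_f / nu_f)
f_stat = ((rss_s - rss_f) / (nu_s - nu_f)) / (rss_f / nu_f)
print(f_stat)  # prints 4.8; compare against F with (nu_s - nu_f, nu_f) df
```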
The parameter estimates, $\hat{\beta}$, are asymptotically Normally distributed with variance-covariance matrix:
• $C = R^{-1}{R^{-1}}^{\mathrm{T}}\sigma^2$ in the full rank case;
• $C = P_1 D^{-2} P_1^{\mathrm{T}} \sigma^2$ otherwise.
The residuals and influence statistics can also be examined.
The estimated linear predictor $\hat{\eta} = X\hat{\beta}$ can be written as $Hw^{\frac{1}{2}}z$ for an $n$ by $n$ matrix $H$. The $i$th diagonal element of $H$, $h_i$, gives a measure of the influence of the $i$th values of the independent variables on the fitted regression model. These are sometimes known as leverages.
The fitted values are given by $\hat{\mu} = g^{-1}(\hat{\eta})$.
nag_correg_glm_normal (g02ga) also computes the residuals, $r$:
$$r_i = y_i - \hat{\mu}_i.$$
An option allows prior weights $\omega_i$ to be used; this gives a model with
$$\sigma_i^2 = \frac{\sigma^2}{\omega_i}.$$
In many linear regression models the first term is taken as a mean term or an intercept, i.e., $x_{i,1} = 1$, for $i = 1,2,\dots,n$; this is provided as an option.
Often only some of the possible independent variables are included in a model; the facility to select which variables to include is provided.
If part of the linear predictor can be represented by a variable with a known coefficient, then this can be included in the model by using an offset, $o$:
$$\eta = o + \sum \beta_j x_j.$$
If the model is not of full rank the solution given will be only one of the possible solutions. Other estimates may be obtained by applying constraints to the parameters. These solutions can be obtained by using nag_correg_glm_constrain (g02gk) after using nag_correg_glm_normal (g02ga). Only certain linear combinations of the parameters will have unique estimates; these are known as estimable functions and can be estimated and tested using nag_correg_glm_estfunc (g02gn).
Details of the SVD are made available in the form of the matrix $P^{*}$:
$$P^{*} = \begin{pmatrix} D^{-1} P_1^{\mathrm{T}} \\ P_0^{\mathrm{T}} \end{pmatrix}.$$

## References

Cook R D and Weisberg S (1982) Residuals and Influence in Regression Chapman and Hall
McCullagh P and Nelder J A (1983) Generalized Linear Models Chapman and Hall

## Parameters

### Compulsory Input Parameters

1:     link – string (length ≥ 1)
Indicates which link function is to be used.
link = 'E'
An exponent link is used.
link = 'I'
An identity link is used. You are advised not to use nag_correg_glm_normal (g02ga) with an identity link as nag_correg_linregm_fit (g02da) provides a more efficient way of fitting such a model.
link = 'L'
A logarithmic link is used.
link = 'S'
A square root link is used.
link = 'R'
A reciprocal link is used.
Constraint: link = 'E', 'I', 'L', 'S' or 'R'.
2:     mean – string (length ≥ 1)
Indicates if a mean term is to be included.
mean = 'M'
A mean term, intercept, will be included in the model.
mean = 'Z'
The model will pass through the origin, zero-point.
Constraint: mean = 'M' or 'Z'.
3:     x(ldx,m) – double array
ldx, the first dimension of the array, must satisfy the constraint ldx ≥ n.
x(i,j) must contain the ith observation for the jth independent variable, for i = 1,2,…,n and j = 1,2,…,m.
4:     isx(m) – int64int32nag_int array
m, the dimension of the array, must satisfy the constraint m ≥ 1.
Indicates which independent variables are to be included in the model.
If isx(j) > 0, the variable contained in the jth column of x is included in the regression model.
Constraints:
• isx(j) ≥ 0, for j = 1,2,…,m;
• if mean = 'M', exactly ip − 1 values of isx must be > 0;
• if mean = 'Z', exactly ip values of isx must be > 0.
5:     ip – int64int32nag_int scalar
The number of independent variables in the model, including the mean or intercept if present.
Constraint: ip > 0.
6:     y(n) – double array
n, the dimension of the array, must satisfy the constraint n ≥ 2.
The observations on the dependent variable, yi, for i = 1,2,…,n.
7:     s – double scalar
The scale parameter for the model, σ².
If s = 0.0, the scale parameter is estimated by the function using the residual mean square.
Constraint: s ≥ 0.0.

### Optional Input Parameters

1:     n – int64int32nag_int scalar
Default: the dimension of the array y and the first dimension of the arrays x and v. (An error is raised if these dimensions are not equal.)
n, the number of observations.
Constraint: n ≥ 2.
2:     m – int64int32nag_int scalar
Default: the dimension of the array isx and the second dimension of the array x. (An error is raised if these dimensions are not equal.)
m, the total number of independent variables.
Constraint: m ≥ 1.
3:     wt(:) – double array
Note: the dimension of the array wt must be at least n if weight = 'W', and at least 1 otherwise.
If weight = 'W', wt must contain the weights to be used with the model, ωi. If wt(i) = 0.0, the ith observation is not included in the model, in which case the effective number of observations is the number of observations with nonzero weights.
If weight = 'U', wt is not referenced and the effective number of observations is n.
Constraint: if weight = 'W', wt(i) ≥ 0.0, for i = 1,2,…,n.
4:     a – double scalar
If link = 'E', a must contain the power of the exponential.
If link ≠ 'E', a is not referenced.
Default: 0
Constraint: if link = 'E', a ≠ 0.0.
5:     v(n,ip+7) – double array
If offset = 'N', v need not be set.
If offset = 'Y', v(i,7), for i = 1,2,…,n, must contain the offset values oi. All other values need not be set.
6:     tol – double scalar
Indicates the accuracy required for the fit of the model.
The iterative weighted least squares procedure is deemed to have converged if the absolute change in deviance between iterations is less than tol × (1.0 + current residual sum of squares). This is approximately an absolute precision if the residual sum of squares is small and a relative precision if the residual sum of squares is large.
If 0.0 ≤ tol < machine precision, nag_correg_glm_normal (g02ga) will use 10 × machine precision.
Default: 0
Constraint: tol ≥ 0.0.
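The convergence rule just described can be written as a small predicate (illustrative, not the routine's internal code):

```python
def converged(change_in_deviance, current_rss, tol):
    # Absolute change between iterations is measured against tol * (1 + RSS):
    # roughly an absolute precision for small RSS, a relative one for large RSS.
    return abs(change_in_deviance) < tol * (1.0 + current_rss)

print(converged(1e-6, 0.5, 5e-5))  # True: change is below 5e-5 * 1.5
print(converged(1e-2, 0.5, 5e-5))  # False: change is still too large
```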
7:     maxit – int64int32nag_int scalar
The maximum number of iterations for the iterative weighted least squares.
If maxit = 0, a default value of 10 is used.
Default: 10
Constraint: maxit ≥ 0.
8:     iprint – int64int32nag_int scalar
Indicates if the printing of information on the iterations is required.
iprint ≤ 0
There is no printing.
iprint > 0
Every iprint iterations, the following are printed: the deviance, the current estimates and, if the weighted least squares equations are singular, an indication of this.
When printing occurs the output is directed to the current advisory message unit (see nag_file_set_unit_advisory (x04ab)).
Default: 0
9:     eps – double scalar
The value of eps is used to decide if the independent variables are of full rank and, if not, what the rank of the independent variables is. The smaller the value of eps, the stricter the criterion for selecting the singular value decomposition.
If 0.0 ≤ eps < machine precision, the function will use machine precision instead.
Default: 0
Constraint: eps ≥ 0.0.

### Input Parameters Omitted from the MATLAB Interface

offset weight ldx ldv wk

### Output Parameters

1:     s – double scalar
If on input s = 0.0, s contains the estimated value of the scale parameter, σ̂².
If on input s ≠ 0.0, s is unchanged on exit.
2:     rss – double scalar
The residual sum of squares for the fitted model.
3:     idf – int64int32nag_int scalar
The degrees of freedom associated with the residual sum of squares for the fitted model.
4:     b(ip) – double array
The estimates of the parameters of the generalized linear model, β̂.
If mean = 'M', b(1) will contain the estimate of the mean parameter and b(i+1) will contain the coefficient of the variable contained in column j of x, where isx(j) is the ith positive value in the array isx.
If mean = 'Z', b(i) will contain the coefficient of the variable contained in column j of x, where isx(j) is the ith positive value in the array isx.
5:     irank – int64int32nag_int scalar
The rank of the independent variables.
If the model is of full rank, irank = ip.
If the model is not of full rank, irank is an estimate of the rank of the independent variables. irank is calculated as the number of singular values greater than eps × (largest singular value). It is possible for the SVD to be carried out but for irank to be returned as ip.
6:     se(ip) – double array
The standard errors of the linear parameters.
se(i) contains the standard error of the parameter estimate in b(i), for i = 1,2,…,ip.
7:     cov(ip × (ip+1)/2) – double array
The upper triangular part of the variance-covariance matrix of the ip parameter estimates given in b. They are stored packed by column, i.e., the covariance between the parameter estimate given in b(i) and the parameter estimate given in b(j), j ≥ i, is stored in cov(j × (j−1)/2 + i).
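The packed-by-column rule above can be checked with a short sketch (the helper name is illustrative):

```python
def packed_index(i, j):
    """1-based position of cov(b(i), b(j)), i <= j, in the packed array."""
    return j * (j - 1) // 2 + i

# For ip = 3 the packed order is (1,1), (1,2), (2,2), (1,3), (2,3), (3,3),
# i.e. the upper triangle taken column by column:
pairs = [(i, j) for j in range(1, 4) for i in range(1, j + 1)]
print([packed_index(i, j) for i, j in pairs])  # prints [1, 2, 3, 4, 5, 6]
```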
8:     v(n,ip+7) – double array
Auxiliary information on the fitted model:
• v(i,1) contains the linear predictor value, ηi, for i = 1,2,…,n.
• v(i,2) contains the fitted value, μ̂i, for i = 1,2,…,n.
• v(i,3) is only included for consistency with other functions; v(i,3) = 1.0, for i = 1,2,…,n.
• v(i,4) contains the square root of the working weight, wi^(1/2), for i = 1,2,…,n.
• v(i,5) contains the residual, ri, for i = 1,2,…,n.
• v(i,6) contains the leverage, hi, for i = 1,2,…,n.
• v(i,7) contains the offset, for i = 1,2,…,n. If offset = 'N', all values will be zero.
• v(i,j), for j = 8,…,ip+7, contains the results of the QR decomposition or the singular value decomposition.
If the model is not of full rank, i.e., irank < ip, the first ip rows of columns 8 to ip+7 contain the P* matrix.
9:     ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Note: nag_correg_glm_normal (g02ga) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

ifail = 1
On entry, n < 2,
or m < 1,
or ldx < n,
or ldv < n,
or ip < 1,
or link ≠ 'E', 'I', 'L', 'S' or 'R',
or s < 0.0,
or link = 'E' and a = 0.0,
or mean ≠ 'M' or 'Z',
or weight ≠ 'U' or 'W',
or offset ≠ 'N' or 'Y',
or maxit < 0,
or tol < 0.0,
or eps < 0.0.
ifail = 2
On entry, weight = 'W' and a value of wt < 0.0.
ifail = 3
On entry, a value of isx < 0, or the value of ip is incompatible with the values of mean and isx, or ip is greater than the effective number of observations.
ifail = 4
A fitted value is at a boundary. This will only occur with link = 'L', 'R' or 'E'. It may occur if there are small values of y and the model is not suitable for the data. The model should be reformulated with, perhaps, some observations dropped.
ifail = 5
The singular value decomposition has failed to converge. This is an unlikely error exit; see nag_eigen_real_triang_svd (f02wu).
ifail = 6
The iterative weighted least squares has failed to converge in maxit (or the default 10) iterations. The value of maxit could be increased, but it may be advantageous to examine the convergence using the iprint option. This may indicate that the convergence is slow because the solution is at a boundary, in which case it may be better to reformulate the model.
W ifail = 7
The rank of the model has changed during the weighted least squares iterations. The estimate for β returned may be reasonable, but you should check how the deviance has changed during the iterations.
W ifail = 8
The degrees of freedom for error are 0. A saturated model has been fitted.

## Accuracy

The accuracy is determined by tol as described in Section [Parameters]. As the residual sum of squares is a function of μ², the accuracy of the β̂ will depend on the link used and may be of the order √tol.

## Further Comments

None.

## Example

```
function nag_correg_glm_normal_example
link = 'E';                           % exponent link: eta = mu^a
mean_p = 'M';                         % include a mean (intercept) term
x = [1; 2; 3; 4; 5];
isx = [int64(1)];                     % include the single column of x
ip = int64(2);                        % intercept plus one variable
y = [25; 10; 6; 4; 3];
s = 0;                                % estimate the scale parameter
a = -1;                               % power for the exponent link
tol = 5e-05;
[sOut, rss, idf, b, irank, se, covar, vOut, ifail] = ...
    nag_correg_glm_normal(link, mean_p, x, isx, ip, y, s, 'a', a, 'tol', tol)
```
```

sOut =

0.1291

rss =

0.3872

idf =

3

b =

-0.0239
0.0638

irank =

2

se =

0.0028
0.0026

covar =

1.0e-05 *

0.7723
-0.7177
0.6957

vOut =

0.0399   25.0387    1.0000 -626.9347   -0.0387    0.9954         0  635.1594  655.2205
0.1037    9.6387    1.0000  -92.9036    0.3613    0.4577         0    0.1038  136.2024
0.1676    5.9680    1.0000  -35.6173    0.0320    0.2681         0    0.0398    0.4013
0.2314    4.3221    1.0000  -18.6803   -0.3221    0.1666         0    0.0209    0.3166
0.2952    3.3878    1.0000  -11.4769   -0.3878    0.1121         0    0.0128    0.2597

ifail =

0

```