# NAG Toolbox: nag_correg_quantile_linreg (g02qg)

## Purpose

nag_correg_quantile_linreg (g02qg) performs a multiple linear quantile regression. Parameter estimates and, if required, confidence limits, covariance matrices and residuals are calculated. nag_correg_quantile_linreg (g02qg) may be used to perform a weighted quantile regression. A simplified interface for nag_correg_quantile_linreg (g02qg) is provided by nag_correg_quantile_linreg_easy (g02qf).

## Syntax

[df, b, bl, bu, ch, res, state, info, ifail] = g02qg(sorder, intcpt, weight, dat, isx, y, tau, b, iopts, opts, state, 'n', n, 'm', m, 'ip', ip, 'wt', wt, 'ntau', ntau)
[df, b, bl, bu, ch, res, state, info, ifail] = nag_correg_quantile_linreg(sorder, intcpt, weight, dat, isx, y, tau, b, iopts, opts, state, 'n', n, 'm', m, 'ip', ip, 'wt', wt, 'ntau', ntau)

## Description

Given a vector of $n$ observed values, $y = \{y_i : i = 1, 2, \ldots, n\}$, an $n \times p$ design matrix $X$, a column vector, $x$, of length $p$ holding the $i$th row of $X$ and a quantile $\tau \in (0,1)$, nag_correg_quantile_linreg (g02qg) estimates the $p$-element vector $\beta$ as the solution to

$$\underset{\beta \in \mathbb{R}^p}{\operatorname{minimize}} \; \sum_{i=1}^{n} \rho_\tau \left( y_i - x_i^\mathrm{T} \beta \right) \tag{1}$$

where $\rho_\tau$ is the piecewise linear loss function $\rho_\tau(z) = z(\tau - I(z < 0))$, and $I(z < 0)$ is an indicator function taking the value $1$ if $z < 0$ and $0$ otherwise. Weights can be incorporated by replacing $X$ and $y$ with $WX$ and $Wy$ respectively, where $W$ is an $n \times n$ diagonal matrix. Observations with zero weights can either be included or excluded from the analysis; this is in contrast to least squares regression, where such observations do not contribute to the objective function and are therefore always dropped.
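The loss function $\rho_\tau$ penalizes positive and negative residuals asymmetrically, which is what makes the minimizer a conditional quantile rather than a conditional mean. A minimal sketch of the objective in equation (1), written in Python for illustration only (the routine itself uses an interior point solver; this direct evaluation merely scores a candidate $\beta$):

```python
def rho_tau(z, tau):
    """Piecewise linear (pinball) loss: rho_tau(z) = z * (tau - I(z < 0))."""
    return z * (tau - (1.0 if z < 0 else 0.0))

def objective(beta, x_rows, y, tau):
    """Sum of pinball losses over the residuals y_i - x_i^T beta."""
    total = 0.0
    for xi, yi in zip(x_rows, y):
        resid = yi - sum(b * v for b, v in zip(beta, xi))
        total += rho_tau(resid, tau)
    return total
```

For example, with `tau = 0.9` an over-prediction of one unit costs 0.1 while an under-prediction of one unit costs 0.9, pulling the fit towards the 90% quantile.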
nag_correg_quantile_linreg (g02qg) uses the interior point algorithm of Portnoy and Koenker (1997), described briefly in Section [Algorithmic Details], to obtain the parameter estimates $\hat\beta$, for a given value of $\tau$.
Under the assumption of Normally distributed errors, Koenker (2005) shows that the limiting covariance matrix of $\hat\beta - \beta$ has the form

$$\Sigma = \frac{\tau(1-\tau)}{n} H_n^{-1} J_n H_n^{-1}$$

where $J_n = n^{-1} \sum_{i=1}^{n} x_i x_i^\mathrm{T}$ and $H_n$ is a function of $\tau$, as described below. Given an estimate of the covariance matrix, $\hat\Sigma$, lower ($\hat\beta_L$) and upper ($\hat\beta_U$) limits for a $(100 \times \alpha)\%$ confidence interval can be calculated for each of the $p$ parameters, via

$$\hat\beta_{Li} = \hat\beta_i - t_{n-p,\,(1+\alpha)/2} \sqrt{\hat\Sigma_{ii}}, \qquad \hat\beta_{Ui} = \hat\beta_i + t_{n-p,\,(1+\alpha)/2} \sqrt{\hat\Sigma_{ii}}$$

where, for example, $t_{n-p,\,0.975}$ is the $97.5$ percentile of the Student's $t$ distribution with $n-k$ degrees of freedom, where $k$ is the rank of the cross-product matrix $X^\mathrm{T}X$.
Four methods for estimating the covariance matrix, $\Sigma$, are available:
(i) Independent, identically distributed (IID) errors
Under an assumption of IID errors the asymptotic relationship for $\Sigma$ simplifies to

$$\Sigma = \frac{\tau(1-\tau)}{n} \left( s(\tau) \right)^2 \left( X^\mathrm{T} X \right)^{-1}$$

where $s$ is the sparsity function. nag_correg_quantile_linreg (g02qg) estimates $s(\tau)$ from the residuals, $r_i = y_i - x_i^\mathrm{T} \hat\beta$, and a bandwidth $h_n$.
(ii) Powell Sandwich
Powell (1991) suggested estimating the matrix $H_n$ by a kernel estimator of the form

$$\hat H_n = \left( n c_n \right)^{-1} \sum_{i=1}^{n} K\!\left( \frac{r_i}{c_n} \right) x_i x_i^\mathrm{T}$$

where $K$ is a kernel function and $c_n$ satisfies $\lim_{n \to \infty} c_n \to 0$ and $\lim_{n \to \infty} \sqrt{n}\, c_n \to \infty$. When the Powell method is chosen, nag_correg_quantile_linreg (g02qg) uses a Gaussian kernel (i.e., $K = \phi$) and sets

$$c_n = \min\left( \sigma_r, \left( q_{r3} - q_{r1} \right) / 1.34 \right) \times \left( \Phi^{-1}(\tau + h_n) - \Phi^{-1}(\tau - h_n) \right)$$

where $h_n$ is a bandwidth and $\sigma_r$, $q_{r1}$ and $q_{r3}$ are, respectively, the standard deviation and the $25\%$ and $75\%$ quantiles of the residuals, $r_i$.
(iii) Hendricks–Koenker Sandwich
Koenker (2005) suggested estimating the matrix $H_n$ using

$$\hat H_n = n^{-1} \sum_{i=1}^{n} \left[ \frac{2 h_n}{x_i^\mathrm{T} \left( \hat\beta(\tau + h_n) - \hat\beta(\tau - h_n) \right)} \right] x_i x_i^\mathrm{T}$$

where $h_n$ is a bandwidth and $\hat\beta(\tau + h_n)$ denotes the parameter estimates obtained from a quantile regression using the $(\tau + h_n)$th quantile; similarly for $\hat\beta(\tau - h_n)$.
(iv) Bootstrap
The last method uses bootstrapping to either estimate a covariance matrix or obtain confidence intervals for the parameter estimates directly. This method therefore does not assume Normally distributed errors. Samples of size $n$ are taken from the paired data $\{y_i, x_i\}$ (i.e., the independent and dependent variables are sampled together). A quantile regression is then fitted to each sample, resulting in a series of bootstrap estimates of the model parameters, $\beta$. A covariance matrix can then be calculated directly from this series of values. Alternatively, confidence limits, $\hat\beta_L$ and $\hat\beta_U$, can be obtained directly from the $(1-\alpha)/2$ and $(1+\alpha)/2$ sample quantiles of the bootstrap estimates.
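The $(x, y)$-pairs bootstrap can be sketched as follows. For brevity this example bootstraps an intercept-only model, whose quantile-regression fit is just the sample $\tau$-quantile of $y$; in the full routine each resample is refitted with the interior point solver. The helper names are illustrative and not part of the NAG interface:

```python
import random

def sample_quantile(values, tau):
    """Empirical tau-quantile (lower interpolation) of a list of values."""
    s = sorted(values)
    idx = min(int(tau * len(s)), len(s) - 1)
    return s[idx]

def bootstrap_quantile_ci(pairs, tau, alpha=0.95, n_boot=200, seed=42):
    """Percentile-bootstrap limits for the tau-quantile of y, resampling (x, y) pairs."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # sample the (x, y) pairs together, with replacement
        resample = [rng.choice(pairs) for _ in pairs]
        estimates.append(sample_quantile([y for _, y in resample], tau))
    lower = sample_quantile(estimates, (1.0 - alpha) / 2.0)
    upper = sample_quantile(estimates, (1.0 + alpha) / 2.0)
    return lower, upper
```

The key structural point is that each resample keeps the dependent and independent values of an observation together, so the bootstrap reflects the joint distribution.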
Further details of the algorithms used to calculate the covariance matrices can be found in Section [Algorithmic Details].
All three asymptotic estimates of the covariance matrix require a bandwidth, $h_n$. Two alternative methods for determining this are provided:
(i) Sheather–Hall

$$h_n = \left( \frac{1.5 \left( \Phi^{-1}(\alpha_b)\, \phi\!\left( \Phi^{-1}(\tau) \right) \right)^2}{n \left( 2 \left( \Phi^{-1}(\tau) \right)^2 + 1 \right)} \right)^{1/3}$$

for a user-supplied value $\alpha_b$,
(ii) Bofinger

$$h_n = \left( \frac{4.5 \left( \phi\!\left( \Phi^{-1}(\tau) \right) \right)^4}{n \left( 2 \left( \Phi^{-1}(\tau) \right)^2 + 1 \right)^2} \right)^{1/5}$$
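Both bandwidth rules depend only on the standard Normal quantile function $\Phi^{-1}$ and density $\phi$, so they can be sketched with the Python standard library's `NormalDist`. This follows the Sheather–Hall and Bofinger expressions above; `alpha_b` is the user-supplied value $\alpha_b$:

```python
from statistics import NormalDist

_N = NormalDist()  # standard Normal: inv_cdf is Phi^{-1}, pdf is phi

def sheather_hall(n, tau, alpha_b):
    z_a = _N.inv_cdf(alpha_b)
    z_t = _N.inv_cdf(tau)
    num = 1.5 * (z_a * _N.pdf(z_t)) ** 2
    den = n * (2.0 * z_t ** 2 + 1.0)
    return (num / den) ** (1.0 / 3.0)

def bofinger(n, tau):
    z_t = _N.inv_cdf(tau)
    num = 4.5 * _N.pdf(z_t) ** 4
    den = n * (2.0 * z_t ** 2 + 1.0) ** 2
    return (num / den) ** 0.2
```

Note the familiar nonparametric rates: the Sheather–Hall bandwidth shrinks like $n^{-1/3}$ and the Bofinger bandwidth like $n^{-1/5}$.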
nag_correg_quantile_linreg (g02qg) allows optional arguments to be supplied via the iopts and opts arrays (see Section [Optional Parameters] for details of the available options). Prior to calling nag_correg_quantile_linreg (g02qg), the optional parameter arrays iopts and opts must be initialized by calling nag_correg_optset (g02zk) with optstr set to Initialize = g02qg. If bootstrap confidence limits are required (Interval Method = BOOTSTRAP XY), then one of the random number initialization functions nag_rand_init_repeat (g05kf) (for a repeatable analysis) or nag_rand_init_nonrepeat (g05kg) (for an unrepeatable analysis) must also have been called previously.

## References

Koenker R (2005) Quantile Regression Econometric Society Monographs, Cambridge University Press, New York
Mehrotra S (1992) On the implementation of a primal-dual interior point method SIAM J. Optim. 2 575–601
Nocedal J and Wright S J (1999) Numerical Optimization Springer Series in Operations Research, Springer, New York
Portnoy S and Koenker R (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute error estimators Statistical Science 4 279–300
Powell J L (1991) Estimation of monotonic regression models under quantile restrictions Nonparametric and Semiparametric Methods in Econometrics Cambridge University Press, Cambridge

## Parameters

### Compulsory Input Parameters

1:     sorder – int64int32nag_int scalar
Determines the storage order of variates supplied in dat.
Constraint: sorder = 1 or 2.
2:     intcpt – string (length ≥ 1)
Indicates whether an intercept will be included in the model. The intercept is included by adding a column of ones as the first column in the design matrix, $X$.
intcpt = 'Y'
An intercept will be included in the model.
intcpt = 'N'
An intercept will not be included in the model.
Constraint: intcpt = 'N' or 'Y'.
3:     weight – string (length ≥ 1)
Indicates if weights are to be used.
weight = 'W'
A weighted regression model is fitted to the data using the weights supplied in array wt.
weight = 'U'
An unweighted regression model is fitted to the data and array wt is not referenced.
Constraint: weight = 'U' or 'W'.
4:     dat(lddat, : $:$) – double array
The first dimension, lddat, of the array dat must satisfy
• if sorder = 1, lddat ≥ n;
• otherwise lddat ≥ m.
The second dimension of the array must be at least m if sorder = 1 and at least n if sorder = 2.
The ith value for the jth variate, for i = 1,2,…,n and j = 1,2,…,m, must be supplied in
• dat(i,j) if sorder = 1, and
• dat(j,i) if sorder = 2.
The design matrix $X$ is constructed from dat, isx and intcpt.
5:     isx(m) – int64int32nag_int array
m, the dimension of the array, must satisfy the constraint m ≥ 0.
Indicates which independent variables are to be included in the model.
isx(j) = 0
The jth variate, supplied in dat, is not included in the regression model.
isx(j) = 1
The jth variate, supplied in dat, is included in the regression model.
Constraints:
• isx(j) = 0 or 1, for j = 1,2,…,m;
• if intcpt = 'Y', exactly ip − 1 values of isx must be set to 1;
• if intcpt = 'N', exactly ip values of isx must be set to 1.
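The relationship between isx, intcpt and ip can be expressed as a quick consistency check; this mirrors the constraints above and is an illustration only, not part of the toolbox:

```python
def expected_ip(isx, intcpt):
    """Number of model parameters implied by isx: the count of selected
    variates, plus one if an intercept ('Y') is included."""
    selected = sum(1 for v in isx if v == 1)
    return selected + 1 if intcpt == 'Y' else selected
```

A caller can then verify `ip == expected_ip(isx, intcpt)` before invoking the routine, avoiding an ifail = 92 error.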
6:     y(n) – double array
n, the dimension of the array, must satisfy the constraint n ≥ 2.
$y$, the observations on the dependent variable.
7:     tau(ntau) – double array
ntau, the dimension of the array, must satisfy the constraint ntau ≥ 1.
The vector of quantiles of interest. A separate model is fitted to each quantile.
Constraint: sqrt(ε) < tau(j) < 1 − sqrt(ε), where ε is the machine precision returned by nag_machine_precision (x02aj), for j = 1,2,…,ntau.
8:     b(ip,ntau) – double array
ip, the first dimension of the array, must satisfy the constraints:
• 1 ≤ ip < n;
• if intcpt = 'Y', 1 ≤ ip ≤ m + 1;
• if intcpt = 'N', 1 ≤ ip ≤ m.
If Calculate Initial Values = NO, b(i,l) must hold an initial estimate of $\hat\beta_i$, for i = 1,2,…,ip and l = 1,2,…,ntau. If Calculate Initial Values = YES, b need not be set.
9:     iopts( : $:$) – int64int32nag_int array
Note: the contents of iopts must not have been altered between calls to nag_correg_optset (g02zk), nag_correg_optget (g02zl), nag_correg_quantile_linreg (g02qg) and the selected problem solving routine.
Optional parameter array, as initialized by a call to nag_correg_optset (g02zk).
10:   opts( : $:$) – double array
Note: the contents of opts must not have been altered between calls to nag_correg_optset (g02zk), nag_correg_optget (g02zl), nag_correg_quantile_linreg (g02qg) and the selected problem solving routine.
Optional parameter array, as initialized by a call to nag_correg_optset (g02zk).
11:   state( : $:$) – int64int32nag_int array
Note: the actual argument supplied must be the array state supplied to the initialization routines nag_rand_init_repeat (g05kf) or nag_rand_init_nonrepeat (g05kg).
If Interval Method = BOOTSTRAP XY, state contains information about the selected random number generator. Otherwise state is not referenced.

### Optional Input Parameters

1:     n – int64int32nag_int scalar
Default: The dimension of the array y and the first dimension of the array dat. (An error is raised if these dimensions are not equal.)
The total number of observations in the dataset. If no weights are supplied, no zero weights are supplied, or observations with zero weights are included in the model, then the argument n equals $n$, the effective number of observations. Otherwise, n equals $n$ plus the number of observations with zero weights.
Constraint: n ≥ 2.
2:     m – int64int32nag_int scalar
Default: The dimension of the array isx and the first dimension of the array dat. (An error is raised if these dimensions are not equal.)
$m$, the total number of variates in the dataset.
Constraint: m ≥ 0.
3:     ip – int64int32nag_int scalar
Default: The first dimension of the array b.
$p$, the number of independent variables in the model, including the intercept (see intcpt) if present.
Constraints:
• 1 ≤ ip < n;
• if intcpt = 'Y', 1 ≤ ip ≤ m + 1;
• if intcpt = 'N', 1 ≤ ip ≤ m.
4:     wt( : $:$) – double array
Note: the dimension of the array wt must be at least n if weight = 'W'.
If weight = 'W', wt must contain the diagonal elements of the weight matrix $W$. Otherwise wt is not referenced.
Drop Zero Weights = YES
If wt(i) = 0.0, the ith observation is not included in the model, in which case the effective number of observations, $n$, is the number of observations with nonzero weights. If Return Residuals = YES, the values of res will be set to zero for observations with zero weights.
Drop Zero Weights = NO
All observations are included in the model and the effective number of observations is n, i.e., $n$ = n.
Constraints:
• if weight = 'W', wt(i) ≥ 0.0, for i = 1,2,…,n;
• the effective number of observations ≥ 2.
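The effective number of observations under the two Drop Zero Weights settings can be sketched as (illustrative only, not part of the toolbox):

```python
def effective_n(wt, drop_zero_weights):
    """Effective number of observations: zero-weight rows are excluded
    only when Drop Zero Weights = YES."""
    if drop_zero_weights:
        return sum(1 for w in wt if w != 0.0)
    return len(wt)
```

Checking `effective_n(wt, True) >= 2` up front guards against the corresponding constraint violation.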
5:     ntau – int64int32nag_int scalar
Default: The dimension of the array tau and the second dimension of the array b. (An error is raised if these dimensions are not equal.)
The number of quantiles of interest.
Constraint: ntau ≥ 1.


### Output Parameters

1:     df – double scalar
The degrees of freedom, given by $n-k$, where $n$ is the effective number of observations and $k$ is the rank of the cross-product matrix $X^\mathrm{T}X$.
2:     b(ip,ntau) – double array
b(i,l), for i = 1,2,…,ip, contains the estimates of the parameters of the regression model, $\hat\beta$, estimated for τ = tau(l).
If intcpt = 'Y', b(1,l) will contain the estimate corresponding to the intercept and b(i+1,l) will contain the coefficient of the jth variate contained in dat, where isx(j) is the ith nonzero value in the array isx.
If intcpt = 'N', b(i,l) will contain the coefficient of the jth variate contained in dat, where isx(j) is the ith nonzero value in the array isx.
3:     bl(ip, : $:$) – double array
The second dimension of the array will be ntau if Interval Method ≠ NONE.
If Interval Method ≠ NONE, bl(i,l) contains the lower limit of a (100 × α)% confidence interval for b(i,l), for i = 1,2,…,ip and l = 1,2,…,ntau.
If Interval Method = NONE, bl is not referenced.
The method used for calculating the interval is controlled by the optional parameters Interval Method and Bootstrap Interval Method. The size of the interval, α, is controlled by the optional parameter Significance Level.
4:     bu(ip, : $:$) – double array
The second dimension of the array will be ntau if Interval Method ≠ NONE.
If Interval Method ≠ NONE, bu(i,l) contains the upper limit of a (100 × α)% confidence interval for b(i,l), for i = 1,2,…,ip and l = 1,2,…,ntau.
If Interval Method = NONE, bu is not referenced.
The method used for calculating the interval is controlled by the optional parameters Interval Method and Bootstrap Interval Method. The size of the interval, α, is controlled by the optional parameter Significance Level.
5:     ch(ip,ip, : $:$) – double array
Note: the last dimension of the array ch must be at least ntau if Interval Method ≠ NONE and Matrix Returned = COVARIANCE, and at least ntau + 1 if Interval Method ≠ NONE, IID or BOOTSTRAP XY and Matrix Returned = H INVERSE.
Depending on the supplied optional parameters, ch will either not be referenced, hold an estimate of the upper triangular part of the covariance matrix, $\Sigma$, or hold estimates of the upper triangular parts of $n J_n$ and $n^{-1} H_n^{-1}$.
If Interval Method = NONE or Matrix Returned = NONE, ch is not referenced.
If Interval Method = BOOTSTRAP XY or IID and Matrix Returned = H INVERSE, ch is not referenced.
Otherwise, for i,j = 1,2,…,ip, j ≥ i and l = 1,2,…,ntau:
• if Matrix Returned = COVARIANCE, ch(i,j,l) holds an estimate of the covariance between b(i,l) and b(j,l);
• if Matrix Returned = H INVERSE, ch(i,j,1) holds an estimate of the (i,j)th element of $n J_n$ and ch(i,j,l+1) holds an estimate of the (i,j)th element of $n^{-1} H_n^{-1}$, for τ = tau(l).
The method used for calculating $\Sigma$ and $H_n^{-1}$ is controlled by the optional parameter Interval Method.
6:     res(n, : $:$) – double array
The second dimension of the array will be ntau if Return Residuals = YES.
If Return Residuals = YES, res(i,l) holds the (weighted) residuals, $r_i$, for τ = tau(l), for i = 1,2,…,n and l = 1,2,…,ntau.
If weight = 'W' and Drop Zero Weights = YES, the value of res will be set to zero for observations with zero weights.
If Return Residuals = NO, res is not referenced.
7:     state( : $:$) – int64int32nag_int array
Note: the actual argument supplied must be the array state supplied to the initialization routines nag_rand_init_repeat (g05kf) or nag_rand_init_nonrepeat (g05kg).
8:     info(ntau) – int64int32nag_int array
info(i) holds additional information concerning the model fitting and confidence limit calculations when τ = tau(i).

| Code | Warning |
|---|---|
| 0 | Model fitted and confidence limits (if requested) calculated successfully. |
| 1 | The function did not converge. The returned values are based on the estimate at the last iteration. Try increasing Iteration Limit whilst calculating the parameter estimates, or relaxing the definition of convergence by increasing Tolerance. |
| 2 | A singular matrix was encountered during the optimization. The model was not fitted for this value of τ. |
| 4 | Some truncation occurred whilst calculating the confidence limits for this value of τ. See Section [Algorithmic Details] for details. The returned upper and lower limits may be narrower than specified. |
| 8 | The function did not converge whilst calculating the confidence limits. The returned limits are based on the estimate at the last iteration. Try increasing Iteration Limit. |
| 16 | Confidence limits for this value of τ could not be calculated. The returned upper and lower limits are set to a large positive and a large negative value respectively, as defined by the optional parameter Big. |
It is possible for multiple warnings to be applicable to a single model. In these cases the value returned in info is the sum of the corresponding individual nonzero warning codes.
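Because the nonzero warning codes are distinct powers of two, a returned info value can be decomposed back into its individual warnings with bit tests. A small illustrative decoder (not part of the toolbox):

```python
def decode_info(value):
    """Split a summed info value into its individual warning codes."""
    return [code for code in (1, 2, 4, 8, 16) if value & code]
```

For example, an info value of 12 indicates that both warning 4 (truncation) and warning 8 (non-convergence of the confidence limits) apply.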
9:     ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = 11
Constraint: sorder = 1 or 2.
ifail = 31
On entry, weight had an illegal value.
ifail = 41
Constraint: n ≥ 2.
ifail = 51
Constraint: m ≥ 0.
ifail = 71
Constraint: lddat ≥ n.
ifail = 72
Constraint: lddat ≥ m.
ifail = 81
Constraint: isx(i) = 0 or 1 for all i.
ifail = 91
Constraint: 1 ≤ ip < n.
ifail = 92
On entry, ip is not consistent with isx or intcpt.
ifail = 111
Constraint: wt(i) ≥ 0.0 for all i.
ifail = 112
Constraint: the effective number of observations must be at least 2.
ifail = 121
Constraint: ntau ≥ 1.
ifail = 131
On entry, an element of tau is invalid.
ifail = 201
On entry, either the option arrays have not been initialized or they have been corrupted.
ifail = 221
On entry, the state vector has been corrupted or not initialized.
ifail = 231
A potential problem occurred whilst fitting the model(s). Additional information has been returned in info.
ifail = −999
Dynamic memory allocation failed.

## Accuracy

Not applicable.

## Further Comments

nag_correg_quantile_linreg (g02qg) internally allocates approximately the following number of double-precision elements: 13n + np + 3p² + 6p + 3(p + 1) × ntau. If Interval Method = BOOTSTRAP XY, a further np elements are required, and this increases by p × ntau × Bootstrap Iterations if Bootstrap Interval Method = QUANTILE. Where possible, any user-supplied output arrays are used as workspace, so the amount actually allocated may be less. If sorder = 2, weight = 'U', intcpt = 'N' and ip = m, an internal copy of the input data is avoided and the amount of locally allocated memory is reduced by np.
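The approximate workspace requirement can be tallied ahead of time. A sketch (all sizes in double-precision elements; the optional-parameter-dependent terms are passed as flags, and the function name is illustrative):

```python
def workspace_elements(n, p, ntau, bootstrap_xy=False,
                       bootstrap_iterations=0, quantile_intervals=False):
    """Approximate double-precision workspace described above."""
    total = 13 * n + n * p + 3 * p ** 2 + 6 * p + 3 * (p + 1) * ntau
    if bootstrap_xy:
        total += n * p                                 # extra data copy for resampling
        if quantile_intervals:
            total += p * ntau * bootstrap_iterations   # stored bootstrap estimates
    return total
```

For example, n = 10, p = 2, ntau = 1 gives 130 + 20 + 12 + 12 + 9 = 183 elements before any bootstrap terms.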

## Example

```function nag_correg_quantile_linreg_example
sorder = int64(1);
c1 = 'y';     % intcpt: include an intercept in the model
weight = 'u';
dat = [ 420.1577;  541.4117;  901.1575;  639.0802;  750.8756;  945.7989;
829.3979;  979.1648; 1309.8789; 1492.3987;  502.8390;  616.7168;
790.9225;  555.8786;  713.4412;  838.7561;  535.0766;  596.4408;
924.5619;  487.7583;  692.6397;  997.8770;  506.9995;  654.1587;
933.9193;  433.6813;  587.5962;  896.4746;  454.4782;  584.9989;
800.7990;  502.4369;  713.5197;  906.0006;  880.5969;  796.8289;
854.8791; 1167.3716;  523.8000;  670.7792;  377.0584;  851.5430;
1121.0937;  625.5179;  805.5377;  558.5812;  884.4005; 1257.4989;
2051.1789; 1466.3330;  730.0989; 2432.3910;  940.9218; 1177.8547;
1222.5939; 1519.5811;  687.6638;  953.1192;  953.1192;  953.1192;
939.0418; 1283.4025; 1511.5789; 1342.5821;  511.7980;  689.7988;
1532.3074; 1056.0808;  387.3195;  387.3195;  410.9987;  499.7510;
832.7554;  614.9986;  887.4658; 1595.1611; 1807.9520;  541.2006;
1057.6767;  800.7990; 1245.6964; 1201.0002;  634.4002;  956.2315;
1148.6010; 1768.8236; 2822.5330;  922.3548; 2293.1920;  627.4726;
889.9809; 1162.2000; 1197.0794;  530.7972; 1142.1526; 1088.0039;
484.6612; 1536.0201;  678.8974;  671.8802;  690.4683;  860.6948;
873.3095;  894.4598; 1148.6470;  926.8762;  839.0414;  829.4974;
1264.0043; 1937.9771;  698.8317;  920.4199; 1897.5711;  891.6824;
889.6784; 1221.4818;  544.5991; 1031.4491; 1462.9497;  830.4353;
975.0415; 1337.9983;  867.6427;  725.7459;  989.0056; 1525.0005;
672.1960;  923.3977;  472.3215;  590.7601;  831.7983; 1139.4945;
507.5169;  576.1972;  696.5991;  650.8180;  949.5802;  497.1193;
570.1674;  724.7306;  408.3399;  638.6713; 1225.7890;  715.3701;
800.4708;  975.5974; 1613.7565;  608.5019;  958.6634;  835.9426;
1024.8177; 1006.4353;  726.0000;  494.4174;  776.5958;  415.4407;
581.3599;  643.3571; 2551.6615; 1795.3226; 1165.7734;  815.6212;
1264.2066; 1095.4056;  447.4479; 1178.9742;  975.8023; 1017.8522;
423.8798;  558.7767;  943.2487; 1348.3002; 2340.6174;  587.1792;
1540.9741; 1115.8481; 1044.6843; 1389.7929; 2497.7860; 1585.3809;
1862.0438; 2008.8546;  697.3099;  571.2517;  598.3465;  461.0977;
977.1107;  883.9849;  718.3594;  543.8971; 1587.3480; 4957.8130;
969.6838;  419.9980;  561.9990;  689.5988; 1398.5203;  820.8168;
875.1716; 1392.4499; 1256.3174; 1362.8590; 1999.2552; 1209.4730;
1125.0356; 1827.4010; 1014.1540;  880.3944;  873.7375;  951.4432;
473.0022;  601.0030;  713.9979;  829.2984;  959.7953; 1212.9613;
958.8743; 1129.4431; 1943.0419;  539.6388;  463.5990;  562.6400;
736.7584; 1415.4461; 2208.7897;  636.0009;  759.4010; 1078.8382;
748.6413;  987.6417;  788.0961; 1020.0225; 1230.9235;  440.5174;
743.0772];
y = [ 255.8394;  310.9587;  485.6800;  402.9974;  495.5608;  633.7978;
630.7566;  700.4409;  830.9586;  815.3602;  338.0014;  412.3613;
520.0006;  452.4015;  512.7201;  658.8395;  392.5995;  443.5586;
640.1164;  333.8394;  466.9583;  543.3969;  317.7198;  424.3209;
518.9617;  338.0014;  419.6412;  476.3200;  386.3602;  423.2783;
503.3572;  354.6389;  497.3182;  588.5195;  654.5971;  550.7274;
528.3770;  640.4813;  401.3204;  435.9990;  276.5606;  588.3488;
664.1978;  444.8602;  462.8995;  377.7792;  553.1504;  810.8962;
1067.9541; 1049.8788;  522.7012; 1424.8047;  517.9196;  830.9586;
925.5795; 1162.0024;  383.4580;  621.1173;  621.1173;  621.1173;
548.6002;  745.2353;  837.8005;  795.3402;  418.5976;  508.7974;
883.2780;  742.5276;  242.3202;  242.3202;  266.0010;  408.4992;
614.7588;  385.3184;  515.6200; 1138.1620;  993.9630;  299.1993;
750.3202;  572.0807;  907.3969;  811.5776;  427.7975;  649.9985;
860.6002; 1143.4211; 2032.6792;  590.6183; 1570.3911;  483.4800;
600.4804;  696.2021;  774.7962;  390.5984;  612.5619;  708.7622;
296.9192; 1071.4627;  496.5976;  503.3974;  357.6411;  430.3376;
624.6990;  582.5413;  580.2215;  543.8807;  588.6372;  627.9999;
712.1012;  968.3949;  482.5816;  593.1694; 1033.5658;  693.6795;
693.6795;  761.2791;  361.3981;  628.4522;  771.4486;  757.1187;
821.5970; 1022.3202;  679.4407;  538.7491;  679.9981;  977.0033;
561.2015;  728.3997;  372.3186;  361.5210;  620.8006;  819.9964;
360.8780;  395.7608;  442.0001;  404.0384;  670.7993;  297.5702;
353.4882;  383.9376;  284.8008;  431.1000;  801.3518;  448.4513;
577.9111;  570.5210;  865.3205;  444.5578;  680.4198;  576.2779;
708.4787;  734.2356;  433.0010;  327.4188;  485.5198;  305.4390;
468.0008;  459.8177;  863.9199;  831.4407;  534.7610;  392.0502;
934.9752;  813.3081;  263.7100;  769.0838;  630.5863;  645.9874;
319.5584;  348.4518;  614.5068;  662.0096; 1504.3708;  406.2180;
692.1689;  588.1371;  511.2609;  700.5600; 1301.1451;  879.0660;
912.8851; 1509.7812;  484.0605;  399.6703;  444.1001;  248.8101;
527.8014;  500.6313;  436.8107;  374.7990;  726.3921; 1827.2000;
523.4911;  334.9998;  473.2009;  581.2029;  929.7540;  591.1974;
637.5483;  674.9509;  776.7589;  959.5170; 1250.9643;  737.8201;
810.6772;  983.0009;  708.8968;  633.1200;  631.7982;  608.6419;
300.9999;  377.9984;  397.0015;  588.5195;  681.7616;  807.3603;
696.8011;  811.1962; 1305.7201;  442.0001;  353.6013;  468.0008;
526.7573;  890.2390; 1318.8033;  331.0005;  416.4015;  596.8406;
429.0399;  619.6408;  400.7990;  775.0209;  772.7611;  306.5191;
522.6019];
isx = [int64(1)];
tau = [0.10; 0.25; 0.50; 0.75; 0.90];
state = zeros(1, 1, 'int64');
ip = 2;
b = zeros(2, 5);
iopts = zeros(100, 1, 'int64');
opts = zeros(100, 1);
% Initialize the optional argument array
[iopts, opts, ifail] = nag_correg_optset('Initialize = nag_correg_quantile_linreg', iopts, opts);

% Set optional arguments
[iopts, opts, ifail] = nag_correg_optset('Return Residuals = Yes', iopts, opts);
[iopts, opts, ifail] = nag_correg_optset('Matrix Returned = Covariance', iopts, opts);
[iopts, opts, ifail] = nag_correg_optset('Interval Method = IID', iopts, opts);

% Call the model fitting routine
[df, b, bl, bu, ch, res, state, info, ifail] = ...
nag_correg_quantile_linreg(sorder, c1, weight, dat, isx, y, tau, b, iopts, opts, state);

if (ifail == 0)
    % Display the parameter estimates
    for l = 1:numel(tau)
        fprintf('\nQuantile: %6.3f\n\n', tau(l));
        fprintf('        Lower   Parameter   Upper\n');
        fprintf('        Limit   Estimate    Limit\n');
        for j = 1:ip
            fprintf('%3d   %7.3f   %7.3f   %7.3f\n', j, bl(j,l), b(j,l), bu(j,l));
        end
        fprintf('\nCovariance matrix\n');
        for i = 1:ip
            fprintf('%10.3e ', ch(1:i, i, l));
            fprintf('\n');
        end
        fprintf('\n');
    end

    if (numel(res) > 0)
        fprintf('First 10 Residuals\n');
        fprintf('                              Quantile\n');
        fprintf('Obs.   %6.3f     %6.3f     %6.3f     %6.3f     %6.3f\n', tau);
        for i = 1:10
            fprintf(' %3d %10.5f %10.5f %10.5f %10.5f %10.5f\n', i, res(i, 1:5));
        end
    else
        fprintf('Residuals not returned\n');
    end
elseif (ifail == 231)
    disp(info);
end
```
```

Quantile:  0.100

Lower   Parameter   Upper
Limit   Estimate    Limit
1    74.946   110.142   145.337
2     0.370     0.402     0.433

Covariance matrix
3.191e+02
-2.541e-01  2.587e-04

Quantile:  0.250

Lower   Parameter   Upper
Limit   Estimate    Limit
1    64.232    95.483   126.735
2     0.446     0.474     0.502

Covariance matrix
2.516e+02
-2.004e-01  2.039e-04

Quantile:  0.500

Lower   Parameter   Upper
Limit   Estimate    Limit
1    55.399    81.482   107.566
2     0.537     0.560     0.584

Covariance matrix
1.753e+02
-1.396e-01  1.421e-04

Quantile:  0.750

Lower   Parameter   Upper
Limit   Estimate    Limit
1    41.372    62.396    83.421
2     0.625     0.644     0.663

Covariance matrix
1.139e+02
-9.068e-02  9.230e-05

Quantile:  0.900

Lower   Parameter   Upper
Limit   Estimate    Limit
1    26.829    67.351   107.873
2     0.650     0.686     0.723

Covariance matrix
4.230e+02
-3.369e-01  3.429e-04

First 10 Residuals
Quantile
Obs.    0.100      0.250      0.500      0.750      0.900
1  -23.10718  -38.84219  -61.00711  -77.14462  -99.86551
2  -16.70358  -41.20981  -73.81193 -100.11463 -127.96277
3   13.48419  -37.04518 -100.61322 -157.07478 -200.13481
4   36.09526    4.52393  -36.48522  -70.97584 -102.95390
5   83.74310   44.08476   -6.54743  -50.41028  -87.11562
6  143.66660   89.90799   22.49734  -37.70668  -82.65437
7  187.39134  142.05288   84.66171   34.21603   -5.80963
8  196.90443  140.73220   70.44951    7.44831  -38.91027
9  194.55254  114.45726   15.70761  -75.01861 -135.36147
10  105.62394   12.32563 -102.13482 -208.16238 -276.22311

```
```function g02qg_example
sorder = int64(1);
c1 = 'y';
weight = 'u';
dat = [ 420.1577;  541.4117;  901.1575;  639.0802;  750.8756;  945.7989;
829.3979;  979.1648; 1309.8789; 1492.3987;  502.8390;  616.7168;
790.9225;  555.8786;  713.4412;  838.7561;  535.0766;  596.4408;
924.5619;  487.7583;  692.6397;  997.8770;  506.9995;  654.1587;
933.9193;  433.6813;  587.5962;  896.4746;  454.4782;  584.9989;
800.7990;  502.4369;  713.5197;  906.0006;  880.5969;  796.8289;
854.8791; 1167.3716;  523.8000;  670.7792;  377.0584;  851.5430;
1121.0937;  625.5179;  805.5377;  558.5812;  884.4005; 1257.4989;
2051.1789; 1466.3330;  730.0989; 2432.3910;  940.9218; 1177.8547;
1222.5939; 1519.5811;  687.6638;  953.1192;  953.1192;  953.1192;
939.0418; 1283.4025; 1511.5789; 1342.5821;  511.7980;  689.7988;
1532.3074; 1056.0808;  387.3195;  387.3195;  410.9987;  499.7510;
832.7554;  614.9986;  887.4658; 1595.1611; 1807.9520;  541.2006;
1057.6767;  800.7990; 1245.6964; 1201.0002;  634.4002;  956.2315;
1148.6010; 1768.8236; 2822.5330;  922.3548; 2293.1920;  627.4726;
889.9809; 1162.2000; 1197.0794;  530.7972; 1142.1526; 1088.0039;
484.6612; 1536.0201;  678.8974;  671.8802;  690.4683;  860.6948;
873.3095;  894.4598; 1148.6470;  926.8762;  839.0414;  829.4974;
1264.0043; 1937.9771;  698.8317;  920.4199; 1897.5711;  891.6824;
889.6784; 1221.4818;  544.5991; 1031.4491; 1462.9497;  830.4353;
975.0415; 1337.9983;  867.6427;  725.7459;  989.0056; 1525.0005;
672.1960;  923.3977;  472.3215;  590.7601;  831.7983; 1139.4945;
507.5169;  576.1972;  696.5991;  650.8180;  949.5802;  497.1193;
570.1674;  724.7306;  408.3399;  638.6713; 1225.7890;  715.3701;
800.4708;  975.5974; 1613.7565;  608.5019;  958.6634;  835.9426;
1024.8177; 1006.4353;  726.0000;  494.4174;  776.5958;  415.4407;
581.3599;  643.3571; 2551.6615; 1795.3226; 1165.7734;  815.6212;
1264.2066; 1095.4056;  447.4479; 1178.9742;  975.8023; 1017.8522;
423.8798;  558.7767;  943.2487; 1348.3002; 2340.6174;  587.1792;
1540.9741; 1115.8481; 1044.6843; 1389.7929; 2497.7860; 1585.3809;
1862.0438; 2008.8546;  697.3099;  571.2517;  598.3465;  461.0977;
977.1107;  883.9849;  718.3594;  543.8971; 1587.3480; 4957.8130;
969.6838;  419.9980;  561.9990;  689.5988; 1398.5203;  820.8168;
875.1716; 1392.4499; 1256.3174; 1362.8590; 1999.2552; 1209.4730;
1125.0356; 1827.4010; 1014.1540;  880.3944;  873.7375;  951.4432;
473.0022;  601.0030;  713.9979;  829.2984;  959.7953; 1212.9613;
958.8743; 1129.4431; 1943.0419;  539.6388;  463.5990;  562.6400;
736.7584; 1415.4461; 2208.7897;  636.0009;  759.4010; 1078.8382;
748.6413;  987.6417;  788.0961; 1020.0225; 1230.9235;  440.5174;
743.0772];
y = [ 255.8394;  310.9587;  485.6800;  402.9974;  495.5608;  633.7978;
630.7566;  700.4409;  830.9586;  815.3602;  338.0014;  412.3613;
520.0006;  452.4015;  512.7201;  658.8395;  392.5995;  443.5586;
640.1164;  333.8394;  466.9583;  543.3969;  317.7198;  424.3209;
518.9617;  338.0014;  419.6412;  476.3200;  386.3602;  423.2783;
503.3572;  354.6389;  497.3182;  588.5195;  654.5971;  550.7274;
528.3770;  640.4813;  401.3204;  435.9990;  276.5606;  588.3488;
664.1978;  444.8602;  462.8995;  377.7792;  553.1504;  810.8962;
1067.9541; 1049.8788;  522.7012; 1424.8047;  517.9196;  830.9586;
925.5795; 1162.0024;  383.4580;  621.1173;  621.1173;  621.1173;
548.6002;  745.2353;  837.8005;  795.3402;  418.5976;  508.7974;
883.2780;  742.5276;  242.3202;  242.3202;  266.0010;  408.4992;
614.7588;  385.3184;  515.6200; 1138.1620;  993.9630;  299.1993;
750.3202;  572.0807;  907.3969;  811.5776;  427.7975;  649.9985;
860.6002; 1143.4211; 2032.6792;  590.6183; 1570.3911;  483.4800;
600.4804;  696.2021;  774.7962;  390.5984;  612.5619;  708.7622;
296.9192; 1071.4627;  496.5976;  503.3974;  357.6411;  430.3376;
624.6990;  582.5413;  580.2215;  543.8807;  588.6372;  627.9999;
712.1012;  968.3949;  482.5816;  593.1694; 1033.5658;  693.6795;
693.6795;  761.2791;  361.3981;  628.4522;  771.4486;  757.1187;
821.5970; 1022.3202;  679.4407;  538.7491;  679.9981;  977.0033;
561.2015;  728.3997;  372.3186;  361.5210;  620.8006;  819.9964;
360.8780;  395.7608;  442.0001;  404.0384;  670.7993;  297.5702;
353.4882;  383.9376;  284.8008;  431.1000;  801.3518;  448.4513;
577.9111;  570.5210;  865.3205;  444.5578;  680.4198;  576.2779;
708.4787;  734.2356;  433.0010;  327.4188;  485.5198;  305.4390;
468.0008;  459.8177;  863.9199;  831.4407;  534.7610;  392.0502;
934.9752;  813.3081;  263.7100;  769.0838;  630.5863;  645.9874;
319.5584;  348.4518;  614.5068;  662.0096; 1504.3708;  406.2180;
692.1689;  588.1371;  511.2609;  700.5600; 1301.1451;  879.0660;
912.8851; 1509.7812;  484.0605;  399.6703;  444.1001;  248.8101;
527.8014;  500.6313;  436.8107;  374.7990;  726.3921; 1827.2000;
523.4911;  334.9998;  473.2009;  581.2029;  929.7540;  591.1974;
637.5483;  674.9509;  776.7589;  959.5170; 1250.9643;  737.8201;
810.6772;  983.0009;  708.8968;  633.1200;  631.7982;  608.6419;
300.9999;  377.9984;  397.0015;  588.5195;  681.7616;  807.3603;
696.8011;  811.1962; 1305.7201;  442.0001;  353.6013;  468.0008;
526.7573;  890.2390; 1318.8033;  331.0005;  416.4015;  596.8406;
429.0399;  619.6408;  400.7990;  775.0209;  772.7611;  306.5191;
522.6019];
isx = [int64(1)];
tau = [0.10; 0.25; 0.50; 0.75; 0.90];
state = zeros(1, 1, 'int64');
ip = 2;
b = zeros(2, 5);
iopts = zeros(100, 1, 'int64');
opts = zeros(100, 1);
% Initialize the optional argument array
[iopts, opts, ifail] = g02zk('Initialize = g02qg', iopts, opts);

% Set optional arguments
[iopts, opts, ifail] = g02zk('Return Residuals = Yes', iopts, opts);
[iopts, opts, ifail] = g02zk('Matrix Returned = Covariance', iopts, opts);
[iopts, opts, ifail] = g02zk('Interval Method = IID', iopts, opts);

% Call the model fitting routine
[df, b, bl, bu, ch, res, state, info, ifail] = ...
g02qg(sorder, c1, weight, dat, isx, y, tau, b, iopts, opts, state);

if (ifail == 0)
    % Display the parameter estimates
    for l = 1:numel(tau)
        fprintf('\nQuantile: %6.3f\n\n', tau(l));
        fprintf('        Lower   Parameter   Upper\n');
        fprintf('        Limit   Estimate    Limit\n');
        for j = 1:ip
            fprintf('%3d   %7.3f   %7.3f   %7.3f\n', j, bl(j,l), b(j,l), bu(j,l));
        end
        fprintf('\nCovariance matrix\n');
        for i = 1:ip
            fprintf('%10.3e ', ch(1:i, i, l));
            fprintf('\n');
        end
        fprintf('\n');
    end

    if (numel(res) > 0)
        fprintf('First 10 Residuals\n');
        fprintf('                              Quantile\n');
        fprintf('Obs.   %6.3f     %6.3f     %6.3f     %6.3f     %6.3f\n', tau);
        for i = 1:10
            fprintf(' %3d %10.5f %10.5f %10.5f %10.5f %10.5f\n', i, res(i, 1:5));
        end
    else
        fprintf('Residuals not returned\n');
    end
elseif (ifail == 231)
    disp(info);
end
```

## Algorithmic Details

By the addition of slack variables the minimization (1) can be reformulated into the linear programming problem
$\underset{\left(u,v,\beta\right)\in{\mathbb{R}}_{+}^{n}\times{\mathbb{R}}_{+}^{n}\times{\mathbb{R}}^{p}}{\mathrm{minimize}}\ \tau{e}^{\mathrm{T}}u+\left(1-\tau\right){e}^{\mathrm{T}}v\quad\text{subject to}\quad y=X\beta+u-v$
(2)
and its associated dual
$\underset{d}{\mathrm{maximize}}\ {y}^{\mathrm{T}}d\quad\text{subject to}\quad {X}^{\mathrm{T}}d=0,\ d\in{\left[\tau-1,\tau\right]}^{n}$
(3)
where e is a vector of n ones. Setting a = d + (1 − τ)e gives the equivalent formulation
$\underset{a}{\mathrm{maximize}}\ {y}^{\mathrm{T}}a\quad\text{subject to}\quad {X}^{\mathrm{T}}a=\left(1-\tau\right){X}^{\mathrm{T}}e,\ a\in{\left[0,1\right]}^{n}.$
(4)
The algorithm introduced by Portnoy and Koenker (1997), and used by nag_correg_quantile_linreg (g02qg), uses the primal-dual formulation expressed in equations (2) and (4), together with a logarithmic barrier function, to obtain estimates for β. The algorithm is based on the predictor-corrector algorithm of Mehrotra (1992); further details can be obtained from Portnoy and Koenker (1997) and Koenker (2005). A good description of linear programming, interior point algorithms, barrier functions and Mehrotra's predictor-corrector algorithm can be found in Nocedal and Wright (1999).
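As a concrete check of the objective being minimized, the sketch below (illustrative Python, not part of the NAG Toolbox; the function names are invented) evaluates the quantile loss ρτ(z) = z(τ − I(z &lt; 0)) and the objective (1) for a candidate β:

```python
def rho_tau(z, tau):
    """Piecewise linear quantile loss: z * (tau - I(z < 0))."""
    return z * (tau - (1.0 if z < 0 else 0.0))

def objective(y, X, beta, tau):
    """Sum of rho_tau over the residuals y_i - x_i^T beta."""
    total = 0.0
    for yi, xi in zip(y, X):
        r = yi - sum(b * v for b, v in zip(beta, xi))
        total += rho_tau(r, tau)
    return total

# Tiny illustration: intercept-only model, three observations.
y = [1.0, 2.0, 4.0]
X = [[1.0], [1.0], [1.0]]
print(objective(y, X, [2.0], 0.5))  # -> 1.5
```

At τ = 0.5 the loss is |z|/2, so minimizing the objective over β recovers the median regression fit.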

### Interior Point Algorithm

In this section a brief description of the interior point algorithm used to estimate the model parameters is presented. It should be noted that there are some differences in the equations given here – particularly (7) and (9) – compared to those given in Koenker (2005) and Portnoy and Koenker (1997).

#### Central path

Rather than optimizing (4) directly, an additional slack variable s is added and the constraint a ∈ [0,1]ⁿ is replaced with a + s = e, aᵢ ≥ 0, sᵢ ≥ 0, for i = 1,2,…,n.
The positivity constraints on a and s are handled using the logarithmic barrier function
$B(a,s,\mu)={y}^{\mathrm{T}}a+\mu\sum_{i=1}^{n}\left(\log {a}_{i}+\log {s}_{i}\right).$
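The barrier function can be evaluated directly from its definition. This sketch (illustrative Python, not the NAG implementation; the function name is invented) also guards the domain requirement aᵢ, sᵢ &gt; 0:

```python
import math

def barrier(y, a, s, mu):
    """Logarithmic barrier B(a, s, mu) = y^T a + mu * sum(log a_i + log s_i)."""
    if any(ai <= 0 for ai in a) or any(si <= 0 for si in s):
        raise ValueError("a and s must be strictly positive")
    return (sum(yi * ai for yi, ai in zip(y, a))
            + mu * sum(math.log(ai) + math.log(si) for ai, si in zip(a, s)))

# With tau = 0.25, the interior point a_i = 1 - tau, s_i = tau is feasible;
# at mu = 0 the barrier reduces to the dual objective y^T a.
example = barrier([1.0, 2.0], [0.75, 0.75], [0.25, 0.25], 0.0)
```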
The primal-dual form of the problem is used, giving the Lagrangian
$L(a,s,\beta,u,\mu)=B(a,s,\mu)-{\beta}^{\mathrm{T}}\left({X}^{\mathrm{T}}a-\left(1-\tau\right){X}^{\mathrm{T}}e\right)-{u}^{\mathrm{T}}\left(a+s-e\right)$
whose central path is described by the following first order conditions:
$\begin{array}{rcl}{X}^{\mathrm{T}}a&=&\left(1-\tau\right){X}^{\mathrm{T}}e\\ a+s&=&e\\ X\beta+u-v&=&y\\ SUe&=&\mu e\\ AVe&=&\mu e\end{array}$
(5)
where A denotes the diagonal matrix with diagonal elements given by a, and similarly for S, U and V. By enforcing the inequalities on s and a strictly, i.e., aᵢ > 0 and sᵢ > 0 for all i, we ensure that A and S are positive definite diagonal matrices and hence that A⁻¹ and S⁻¹ exist.
Rather than applying Newton's method to the system of equations given in (5) to obtain the step directions δβ, δa, δs, δu and δv, Mehrotra substituted the steps directly into (5), giving the augmented system of equations
$\begin{array}{rcl}{X}^{\mathrm{T}}\left(a+\delta_{a}\right)&=&\left(1-\tau\right){X}^{\mathrm{T}}e\\ \left(a+\delta_{a}\right)+\left(s+\delta_{s}\right)&=&e\\ X\left(\beta+\delta_{\beta}\right)+\left(u+\delta_{u}\right)-\left(v+\delta_{v}\right)&=&y\\ \left(S+{\Delta}_{s}\right)\left(U+{\Delta}_{u}\right)e&=&\mu e\\ \left(A+{\Delta}_{a}\right)\left(V+{\Delta}_{v}\right)e&=&\mu e\end{array}$
(6)
where Δa, Δs, Δu and Δv denote the diagonal matrices with diagonal elements given by δa, δs, δu and δv respectively.

#### Affine scaling step

The affine scaling step is constructed by setting μ = 0 in (5) and applying Newton's method to obtain an intermediate set of step directions
$\begin{array}{rcl}\left({X}^{\mathrm{T}}WX\right)\delta_{\beta}&=&{X}^{\mathrm{T}}W\left(y-X\beta\right)+\left(\tau-1\right){X}^{\mathrm{T}}e+{X}^{\mathrm{T}}a\\ \delta_{a}&=&W\left(y-X\beta-X\delta_{\beta}\right)\\ \delta_{s}&=&-\delta_{a}\\ \delta_{u}&=&{S}^{-1}U\delta_{a}-Ue\\ \delta_{v}&=&{A}^{-1}V\delta_{s}-Ve\end{array}$
(7)
where W = (S⁻¹U + A⁻¹V)⁻¹.
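For a single-column design matrix (p = 1) the normal equations in (7) reduce to scalars, which makes the step computation easy to follow. The sketch below (illustrative Python, not the NAG implementation; the function name is invented) computes the affine scaling directions:

```python
def affine_step_p1(y, x, beta, a, s, u, v, tau):
    """Affine scaling directions from (7) for a one-column design matrix.

    y, x, a, s, u, v are equal-length lists; beta and tau are scalars.
    W is diagonal with w_i = 1 / (u_i/s_i + v_i/a_i).
    """
    n = len(y)
    w = [1.0 / (u[i] / s[i] + v[i] / a[i]) for i in range(n)]
    r = [y[i] - x[i] * beta for i in range(n)]          # current residuals
    rhs = (sum(x[i] * w[i] * r[i] for i in range(n))
           + (tau - 1.0) * sum(x) + sum(x[i] * a[i] for i in range(n)))
    d_beta = rhs / sum(x[i] * x[i] * w[i] for i in range(n))
    d_a = [w[i] * (r[i] - x[i] * d_beta) for i in range(n)]
    d_s = [-da for da in d_a]                           # third line of (7)
    d_u = [u[i] / s[i] * d_a[i] - u[i] for i in range(n)]
    d_v = [v[i] / a[i] * d_s[i] - v[i] for i in range(n)]
    return d_beta, d_a, d_s, d_u, d_v
```

The identity δs = −δa holds by construction, mirroring the third line of (7).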
Initial step sizes for the primal (γ̂P) and dual (γ̂D) parameters are constructed as
$\begin{array}{rcl}\hat{\gamma}_{P}&=&\sigma\times\min\left\{\underset{i,\delta_{{a}_{i}}<0}{\min}\left\{{a}_{i}/\delta_{{a}_{i}}\right\},\underset{i,\delta_{{s}_{i}}<0}{\min}\left\{{s}_{i}/\delta_{{s}_{i}}\right\}\right\}\\ \hat{\gamma}_{D}&=&\sigma\times\min\left\{\underset{i,\delta_{{u}_{i}}<0}{\min}\left\{{u}_{i}/\delta_{{u}_{i}}\right\},\underset{i,\delta_{{v}_{i}}<0}{\min}\left\{{v}_{i}/\delta_{{v}_{i}}\right\}\right\}\end{array}$
(8)
where σ is a user-supplied scaling factor. If γ̂P × γ̂D ≥ 1, the nonlinearity adjustment, described in Section [Nonlinearity Adjustment], is not made and the model parameters are updated using the current step sizes and directions.
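The step-size rule in (8) is the usual interior point ratio test: only components moving towards the boundary (negative direction) restrict the step. A sketch, written in the conventional form that keeps value + γ × direction nonnegative (illustrative Python, not the NAG implementation):

```python
def step_length(values, directions, sigma=0.99995):
    """Largest step gamma (capped at sigma) with value + gamma*direction >= 0.

    This is the ratio test underlying equation (8): only components with
    direction < 0 can drive a value to the boundary and therefore bind.
    """
    gamma = sigma
    for val, d in zip(values, directions):
        if d < 0:
            gamma = min(gamma, sigma * (-val / d))
    return gamma

# Primal step bounded jointly by a and s; dual step by u and v.
a, d_a = [0.5, 0.4], [-1.0, 0.2]
s, d_s = [0.5, 0.6], [1.0, -0.2]
gamma_p = min(step_length(a, d_a), step_length(s, d_s))
```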

#### Nonlinearity adjustment

In the nonlinearity adjustment step a new estimate of μ is obtained by letting
$\hat{g}(\hat{\gamma}_{P},\hat{\gamma}_{D})=\left(s+\hat{\gamma}_{P}\delta_{s}\right)^{\mathrm{T}}\left(u+\hat{\gamma}_{D}\delta_{u}\right)+\left(a+\hat{\gamma}_{P}\delta_{a}\right)^{\mathrm{T}}\left(v+\hat{\gamma}_{D}\delta_{v}\right)$
and estimating μ as
$\mu=\left(\frac{\hat{g}(\hat{\gamma}_{P},\hat{\gamma}_{D})}{\hat{g}(0,0)}\right)^{3}\frac{\hat{g}(0,0)}{2n}.$
This estimate, along with the nonlinear terms (Δu, Δs, Δa and Δv) from (6), is calculated using the values of δa, δs, δu and δv obtained from the affine scaling step.
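The μ update above can be sketched as follows (illustrative Python, not the NAG implementation; `ghat` corresponds to ĝ and the arguments follow the notation of the text):

```python
def mu_update(a, s, u, v, d_a, d_s, d_u, d_v, gp, gd):
    """Centering parameter from the complementarity gap g-hat (Mehrotra-style)."""
    def ghat(cp, cd):
        # (s + cp*ds)^T (u + cd*du) + (a + cp*da)^T (v + cd*dv)
        n = len(a)
        return (sum((s[i] + cp * d_s[i]) * (u[i] + cd * d_u[i]) for i in range(n))
                + sum((a[i] + cp * d_a[i]) * (v[i] + cd * d_v[i]) for i in range(n)))
    g_aff, g0 = ghat(gp, gd), ghat(0.0, 0.0)
    return (g_aff / g0) ** 3 * g0 / (2 * len(a))
```

When the step directions are zero, ĝ(γ̂P, γ̂D) = ĝ(0,0) and the update reduces to ĝ(0,0)/(2n), the average complementarity product.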
Given an updated estimate for μ and the nonlinear terms, the system of equations
$\begin{array}{rcl}\left({X}^{\mathrm{T}}WX\right)\delta_{\beta}&=&{X}^{\mathrm{T}}W\left(y-X\beta+\mu\left({S}^{-1}-{A}^{-1}\right)e+{S}^{-1}{\Delta}_{s}{\Delta}_{u}e-{A}^{-1}{\Delta}_{a}{\Delta}_{v}e\right)+\left(\tau-1\right){X}^{\mathrm{T}}e+{X}^{\mathrm{T}}a\\ \delta_{a}&=&W\left(y-X\beta-X\delta_{\beta}+\mu\left({S}^{-1}-{A}^{-1}\right)e\right)\\ \delta_{s}&=&-\delta_{a}\\ \delta_{u}&=&\mu{S}^{-1}e+{S}^{-1}U\delta_{a}-Ue-{S}^{-1}{\Delta}_{s}{\Delta}_{u}e\\ \delta_{v}&=&\mu{A}^{-1}e+{A}^{-1}V\delta_{s}-Ve-{A}^{-1}{\Delta}_{a}{\Delta}_{v}e\end{array}$
(9)
is solved and updated values for δβ, δa, δs, δu, δv, γ̂P and γ̂D calculated.

#### Update and convergence

At each iteration the model parameters (β, a, s, u, v) are updated using the step directions (δβ, δa, δs, δu, δv) and step lengths (γ̂P, γ̂D).
Convergence is assessed using the duality gap, that is, the difference between the objective functions in the primal and dual formulations. For any feasible point (u, v, s, a) the duality gap can be calculated from equations (2) and (3) as
$\tau{e}^{\mathrm{T}}u+\left(1-\tau\right){e}^{\mathrm{T}}v-{d}^{\mathrm{T}}y=\tau{e}^{\mathrm{T}}u+\left(1-\tau\right){e}^{\mathrm{T}}v-{\left(a-\left(1-\tau\right)e\right)}^{\mathrm{T}}y={s}^{\mathrm{T}}u+{a}^{\mathrm{T}}v={e}^{\mathrm{T}}u-{a}^{\mathrm{T}}y+\left(1-\tau\right){e}^{\mathrm{T}}X\beta$
and the optimization terminates if the duality gap is smaller than the tolerance supplied in the optional parameter Tolerance.

Initial values are required for the parameters a, s, u, v and β. If not supplied by the user, initial values for β are calculated from a least squares regression of y on X. This regression is carried out by first constructing the cross-product matrix XᵀX and then using a pivoted QR decomposition, as performed by nag_lapack_dgeqp3 (f08bf). In addition, if the cross-product matrix is not of full rank, a rank reduction is carried out and, rather than using the full design matrix X, a matrix formed from the first k columns of XP is used instead, where k is the rank of the cross-product matrix and P is the pivot matrix used during the QR decomposition. Parameter estimates, confidence intervals and the rows and columns of the matrices returned in the parameter ch (if any) are set to zero for variables dropped during the rank reduction. The rank reduction step is performed irrespective of whether initial values are supplied by the user.
Once initial values have been obtained for β, the initial values for u and v are calculated from the residuals. If |rᵢ| < εu, a value of ±εu is used instead, where εu is supplied in the optional parameter Epsilon. The initial values for a and s are always set to 1 − τ and τ respectively.
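The description of the initial values admits a direct sketch. The following is one plausible reading (illustrative Python, not the NAG implementation; the floor on small residuals and the split u − v = r are assumptions based on the text above):

```python
def initial_values(residuals, tau, eps_u=1e-8):
    """Initial u, v, a, s as described in the text (one plausible reading).

    Residuals smaller than eps_u in magnitude are replaced by +/- eps_u
    before being split into nonnegative parts with u - v = r.
    """
    u, v = [], []
    for r in residuals:
        if abs(r) < eps_u:
            r = eps_u if r >= 0 else -eps_u   # floor tiny residuals
        u.append(max(r, 0.0))
        v.append(max(-r, 0.0))
    n = len(residuals)
    a = [1.0 - tau] * n                       # always 1 - tau
    s = [tau] * n                             # always tau
    return u, v, a, s
```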
The solution for δβ in both (7) and (9) is obtained using a Bunch–Kaufman decomposition, as implemented in nag_lapack_dsytrf (f07md).

### Calculation of Covariance Matrix

nag_correg_quantile_linreg (g02qg) supplies four methods to calculate the covariance matrices associated with the parameter estimates for β. This section gives some additional detail on three of the algorithms; the fourth, which uses bootstrapping, is described in Section [Description].
(i) Independent, identically distributed (IID) errors
When assuming IID errors, the covariance matrices depend on the sparsity, s(τ), which nag_correg_quantile_linreg (g02qg) estimates as follows:
 (a) Let rᵢ denote the residuals from the original quantile regression, that is, rᵢ = yᵢ − xᵢᵀβ̂.
 (b) Drop any residual where |rᵢ| is less than εu, supplied in the optional parameter Epsilon.
 (c) Sort and relabel the remaining residuals in ascending order, by absolute value, so that εu < |r₁| < |r₂| < ….
 (d) Select the first l values, where l = hₙn, for some bandwidth hₙ.
 (e) Sort and relabel these l residuals again, so that r₁ < r₂ < … < r_l, and regress them against a design matrix with two columns (p = 2) and rows given by xᵢ = {1, i/(n − p)}, using quantile regression with τ = 0.5.
 (f) Use the resulting estimate of the slope as an estimate of the sparsity.
(ii) Powell Sandwich
When using the Powell Sandwich to estimate the matrix Hₙ, the quantity
$c_{n}=\min\left(\sigma_{r},\left(q_{r3}-q_{r1}\right)/1.34\right)\times\left({\Phi}^{-1}\left(\tau+h_{n}\right)-{\Phi}^{-1}\left(\tau-h_{n}\right)\right)$
is calculated. Depending on the value of τ and the method used to calculate the bandwidth hₙ, it is possible for the quantities τ ± hₙ to be too large or small compared to machine precision, ε. More specifically, when τ − hₙ ≤ √ε or τ + hₙ ≥ 1 − √ε, a warning flag is raised in info, the value is truncated to √ε or 1 − √ε respectively, and the covariance matrix is calculated as usual.
(iii) Hendricks–Koenker Sandwich
The Hendricks–Koenker Sandwich requires the calculation of the quantity dᵢ = xᵢᵀ(β̂(τ + hₙ) − β̂(τ − hₙ)). As with the Powell Sandwich, in cases where τ − hₙ ≤ √ε or τ + hₙ ≥ 1 − √ε, a warning flag is raised in info, the value is truncated to √ε or 1 − √ε respectively, and the covariance matrix is calculated as usual.
In addition, this method requires dᵢ > 0. Hence, instead of using 2hₙ/dᵢ in the calculation of Hₙ, max(2hₙ/(dᵢ + εu), 0) is used, where εu is supplied in the optional parameter Epsilon.
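Steps (a)–(d) of the IID sparsity estimate amount to a residual-selection rule, sketched below (illustrative Python, not the NAG implementation; the median regression of step (e) is omitted):

```python
def select_residuals(residuals, bandwidth, eps_u=1e-8):
    """Steps (a)-(d) of the IID sparsity estimate: drop tiny residuals,
    order the rest by absolute value and keep the first l = h_n * n."""
    kept = [r for r in residuals if abs(r) >= eps_u]   # step (b)
    kept.sort(key=abs)                                 # step (c)
    l = int(bandwidth * len(residuals))                # step (d)
    return sorted(kept[:l])   # step (e) would regress these l values

res = [0.0, -3.0, 1.0, -0.5, 2.0, 0.25]
print(select_residuals(res, 0.5))  # three smallest in magnitude, ascending
```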

## Optional Parameters

Several optional parameters in nag_correg_quantile_linreg (g02qg) control aspects of the optimization algorithm, methodology used, logic or output. Their values are contained in the arrays iopts and opts; these must be initialized before calling nag_correg_quantile_linreg (g02qg) by first calling nag_correg_optset (g02zk) with optstr set to Initialize = g02qg.
Each optional parameter has an associated default value; to set any of them to a non-default value, use nag_correg_optset (g02zk). The current value of an optional parameter can be queried using nag_correg_optget (g02zl).
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section [Description of the optional parameters].

### Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
• the keywords, where the minimum abbreviation of each keyword is underlined (if no characters of an optional qualifier are underlined, the qualifier may be omitted);
• a parameter value, where the letters a, i and r denote options that take character, integer and real values respectively;
• the default value, where the symbol ε is a generic notation for machine precision (see nag_machine_precision (x02aj)).
Keywords and character values are case and white space insensitive.
Band Width Alpha  r
Default = 1.0
A multiplier used to construct the parameter αb used when calculating the Sheather–Hall bandwidth (see Section [Description]), with αb = (1 − α) × Band Width Alpha. Here, α is the Significance Level.
Constraint: Band Width Alpha > 0.0.
Band Width Method  a
Default = 'SHEATHER HALL'
The method used to calculate the bandwidth used in the calculation of the asymptotic covariance matrix Σ and H⁻¹ if Interval Method = HKS, KERNEL or IID (see Section [Description]).
Constraint: Band Width Method = SHEATHER HALL or BOFINGER.
Big  r
Default = 10.0^20
This parameter should be set to something larger than the biggest value supplied in dat and y.
Constraint: Big > 0.0.
Bootstrap Interval Method  a
Default = 'QUANTILE'
If Interval Method = BOOTSTRAP XY, Bootstrap Interval Method controls how the confidence intervals are calculated from the bootstrap estimates.
Bootstrap Interval Method = T
t intervals are calculated. That is, the covariance matrix, Σ = {σij : i,j = 1,2,…,p}, is calculated from the bootstrap estimates and the limits calculated as βi ± t(n−p, (1+α)/2) √σii, where t(n−p, (1+α)/2) is the (1+α)/2 percentage point from a Student's t distribution on n − p degrees of freedom, n is the effective number of observations and α is given by the optional parameter Significance Level.
Bootstrap Interval Method = QUANTILE
Quantile intervals are calculated. That is, the upper and lower limits are taken as the (1+α)/2 and (1−α)/2 quantiles of the bootstrap estimates, as calculated using nag_stat_quantiles (g01am).
Constraint: Bootstrap Interval Method = T or QUANTILE.
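For quantile intervals, the limits are simply sample quantiles of the bootstrap estimates. A stdlib sketch (illustrative Python, not nag_stat_quantiles (g01am); the 'inclusive' interpolation rule used here is an assumption):

```python
import statistics

def quantile_interval(boot_estimates, alpha=0.95):
    """Quantile-method bootstrap limits: the (1-alpha)/2 and (1+alpha)/2
    sample quantiles of the bootstrap parameter estimates."""
    lo_p, hi_p = (1.0 - alpha) / 2.0, (1.0 + alpha) / 2.0
    # quantiles(..., n=200) returns the 1/200, 2/200, ..., 199/200 cut points.
    qs = statistics.quantiles(boot_estimates, n=200, method='inclusive')
    return qs[int(lo_p * 200) - 1], qs[int(hi_p * 200) - 1]
```

With the default α = 0.95 this picks out the 0.025 and 0.975 quantiles of the bootstrap distribution.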
Bootstrap Iterations  i
Default = 100
The number of bootstrap samples used to calculate the confidence limits and covariance matrix (if requested) when Interval Method = BOOTSTRAP XY.
Constraint: Bootstrap Iterations > 1.
Bootstrap Monitoring  a
Default = 'NO'
If Bootstrap Monitoring = YES and Interval Method = BOOTSTRAP XY, the parameter estimates for each of the bootstrap samples are displayed. This information is sent to the unit number specified by Unit Number.
Constraint: Bootstrap Monitoring = YES or NO.
Calculate Initial Values  a
Default = 'YES'
If Calculate Initial Values = YES, the initial values for the regression parameters, β, are calculated from the data. Otherwise they must be supplied in b.
Constraint: Calculate Initial Values = YES or NO.
Defaults
This special keyword is used to reset all optional parameters to their default values.
Drop Zero Weights  a
Default = 'YES'
If a weighted regression is being performed and Drop Zero Weights = YES, observations with zero weight are dropped from the analysis. Otherwise such observations are included.
Constraint: Drop Zero Weights = YES or NO.
Epsilon  r
Default = √ε
εu, the tolerance used when calculating the covariance matrix and the initial values for u and v. For additional details see Section [Calculation of Covariance Matrix] and Section [Additional information] respectively.
Constraint: Epsilon ≥ 0.0.
Interval Method  a
Default = 'IID'
The value of Interval Method controls whether confidence limits are returned in bl and bu and how these limits are calculated. This parameter also controls how the matrices returned in ch are calculated.
Interval Method = NONE
No limits are calculated and bl, bu and ch are not referenced.
Interval Method = KERNEL
The Powell Sandwich method with a Gaussian kernel is used.
Interval Method = HKS
The Hendricks–Koenker Sandwich is used.
Interval Method = IID
The errors are assumed to be independent and identically distributed.
Interval Method = BOOTSTRAP XY
A bootstrap method is used, where sampling is done on the pair (yᵢ, xᵢ). The number of bootstrap samples is controlled by the parameter Bootstrap Iterations and the type of interval constructed from the bootstrap samples is controlled by Bootstrap Interval Method.
Constraint: Interval Method = NONE, KERNEL, HKS, IID or BOOTSTRAP XY.
Iteration Limit  i$i$
Default = 100
The maximum number of iterations to be performed by the interior point optimization algorithm.
Constraint: ${\mathbf{Iteration Limit}}>0$.
Matrix Returned  a
Default = 'NONE'
The value of Matrix Returned controls the type of matrices returned in ch. If Interval Method = NONE, this parameter is ignored and ch is not referenced. Otherwise:
Matrix Returned = NONE
No matrices are returned and ch is not referenced.
Matrix Returned = COVARIANCE
The covariance matrices are returned.
Matrix Returned = H INVERSE
If Interval Method = KERNEL or HKS, the matrices J and H⁻¹ are returned. Otherwise no matrices are returned and ch is not referenced.
The matrices returned are calculated as described in Section [Description], with the algorithm used specified by Interval Method. In the case of Interval Method = BOOTSTRAP XY, the covariance matrix is calculated directly from the bootstrap estimates.
Constraint: Matrix Returned = NONE, COVARIANCE or H INVERSE.
Monitoring  a
Default = 'NO'
If Monitoring = YES, the duality gap is displayed at each iteration of the interior point optimization algorithm. In addition, the final estimates for β are also displayed.
The monitoring information is sent to the unit number specified by Unit Number.
Constraint: Monitoring = YES or NO.
QR Tolerance  r
Default = ε^0.9
The tolerance used to calculate the rank, k, of the p × p cross-product matrix, XᵀX. Letting Q be the orthogonal matrix obtained from a QR decomposition of XᵀX, the rank is calculated by comparing Qii with Q11 × QR Tolerance.
If the cross-product matrix is rank deficient, the parameter estimates for the p − k columns with the smallest values of Qii are set to zero, along with the corresponding entries in bl, bu and ch, if returned. This is equivalent to dropping these variables from the model. Details on the QR decomposition used can be found in nag_lapack_dgeqp3 (f08bf).
Constraint: QR Tolerance > 0.0.
Return Residuals  a
Default = 'NO'
If Return Residuals = YES, the residuals are returned in res. Otherwise res is not referenced.
Constraint: Return Residuals = YES or NO.
Sigma  r
Default = 0.99995
The scaling factor used when calculating the affine scaling step size (see equation (8)).
Constraint: 0.0 < Sigma < 1.0.
Significance Level  r
Default = 0.95
α, the size of the confidence interval whose limits are returned in bl and bu.
Constraint: 0.0 < Significance Level < 1.0.
Tolerance  r
Default = √ε
Convergence tolerance. The optimization is deemed to have converged if the duality gap is less than Tolerance (see Section [Update and convergence]).
Constraint: Tolerance > 0.0.
Unit Number  i
The unit number to which monitoring information is sent (see Monitoring and Bootstrap Monitoring).
Constraint: Unit Number > 1.