# NAG Toolbox: nag_nonpar_rank_regsn (g08ra)

## Purpose

nag_nonpar_rank_regsn (g08ra) calculates the parameter estimates, score statistics and their variance-covariance matrices for the linear model using a likelihood based on the ranks of the observations.

## Syntax

```
[prvr, irank, zin, eta, vapvec, parest, ifail] = g08ra(nv, y, x, idist, nmax, tol, 'ns', ns, 'ip', ip)
[prvr, irank, zin, eta, vapvec, parest, ifail] = nag_nonpar_rank_regsn(nv, y, x, idist, nmax, tol, 'ns', ns, 'ip', ip)
```

## Description

Data can be analysed by replacing observations with their ranks. The analysis produces inference for the regression parameters of the following model.

For random variables $Y_1, Y_2, \dots, Y_n$ we assume that, after an arbitrary monotone increasing differentiable transformation $h(\cdot)$, the model

 $h(Y_i) = x_i^{\mathrm{T}} \beta + \epsilon_i$ (1)

holds, where $x_i$ is a known vector of explanatory variables and $\beta$ is a vector of $p$ unknown regression coefficients. The $\epsilon_i$ are random variables assumed to be independent and identically distributed with a completely known distribution, which can be one of the following: Normal, logistic, extreme value or double-exponential. In Pettitt (1982) an estimate for $\beta$ is proposed as $\hat{\beta} = M X^{\mathrm{T}} a$, with estimated variance-covariance matrix $M$. The statistics $a$ and $M$ depend on the ranks $r_i$ of the observations $Y_i$ and on the density chosen for $\epsilon_i$.

The matrix $X$ is the $n$ by $p$ matrix of explanatory variables. It is assumed that $X$ has rank $p$ and that no column, or linear combination of columns, of $X$ equals the column vector of ones or a multiple of it. This means that a constant term cannot be included in model (1). The statistics $a$ and $M$ are found as follows. Let $\epsilon_i$ have pdf $f(\epsilon)$ and let $g = -f'/f$. Let $W_1, W_2, \dots, W_n$ be the order statistics of a random sample of size $n$ from the density $f(\cdot)$. Define $Z_i = g(W_i)$; then $a_i = E(Z_{r_i})$. To define $M$ we need $M^{-1} = X^{\mathrm{T}}(B - A)X$, where $B$ is an $n$ by $n$ diagonal matrix with $B_{ii} = E(g'(W_{r_i}))$ and $A$ is a symmetric matrix with $A_{ij} = \mathrm{cov}(Z_{r_i}, Z_{r_j})$. In the case of the Normal distribution, the $Z_1 < \cdots < Z_n$ are standard Normal order statistics and $E(g'(W_i)) = 1$, for $i = 1, 2, \dots, n$.
The analysis can also deal with ties in the data: two observations are adjudged to be tied if $|Y_i - Y_j| < \mathbf{tol}$, where tol is a user-supplied tolerance level.
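The tie criterion can be sketched as follows (a minimal illustration, not NAG code; the values are arbitrary):

```python
# Two observations are adjudged tied when their absolute
# difference is below the user-supplied tolerance tol.
def tied(yi, yj, tol):
    return abs(yi - yj) < tol

tol = 1e-5
y = [1.0, 1.0 + 1e-6, 3.0]
print(tied(y[0], y[1], tol))  # True: difference 1e-6 is within tolerance
print(tied(y[0], y[2], tol))  # False: clearly distinct
```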
Various statistics can be found from the analysis:
 (a) The score statistic $X^{\mathrm{T}} a$. This statistic is used to test the hypothesis $H_0 : \beta = 0$, see (e).
 (b) The estimated variance-covariance matrix $X^{\mathrm{T}}(B - A)X$ of the score statistic in (a).
 (c) The estimate $\hat{\beta} = M X^{\mathrm{T}} a$.
 (d) The estimated variance-covariance matrix $M = (X^{\mathrm{T}}(B - A)X)^{-1}$ of the estimate $\hat{\beta}$.
 (e) The $\chi^2$ statistic $Q = \hat{\beta}^{\mathrm{T}} M^{-1} \hat{\beta} = a^{\mathrm{T}} X (X^{\mathrm{T}}(B - A)X)^{-1} X^{\mathrm{T}} a$ used to test $H_0 : \beta = 0$. Under $H_0$, $Q$ has an approximate $\chi^2$-distribution with $p$ degrees of freedom.
 (f) The standard errors $M_{ii}^{1/2}$ of the estimates given in (c).
 (g) Approximate $z$-statistics, i.e., $Z_i = \hat{\beta}_i / \mathrm{se}(\hat{\beta}_i)$ for testing $H_0 : \beta_i = 0$. For $i = 1, 2, \dots, p$, $Z_i$ has an approximate $N(0,1)$ distribution.
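For a single parameter ($p = 1$) the quantities in (c)-(g) reduce to scalar arithmetic. The sketch below uses assumed values for the score statistic and its variance, not output from g08ra:

```python
import math

s = 2.0   # assumed score statistic X'a, item (a)
v = 4.0   # assumed variance of the score statistic X'(B - A)X, item (b)

m = 1.0 / v            # (d) variance M of the estimate (scalar inverse)
beta_hat = m * s       # (c) estimate: M X'a
q = beta_hat**2 / m    # (e) chi-squared statistic Q
se = math.sqrt(m)      # (f) standard error
z = beta_hat / se      # (g) z-statistic; when p = 1, Q equals z**2
```

Note that for $p = 1$ the $\chi^2$ statistic is the square of the $z$-statistic, so (e) and (g) give equivalent tests.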
In many situations, more than one sample of observations will be available. In this case we assume the model
 $h_k(Y_k) = X_k^{\mathrm{T}} \beta + e_k, \quad k = 1, 2, \dots, \mathrm{ns},$

where ns is the number of samples. In an obvious manner, $Y_k$ and $X_k$ are the vector of observations and the design matrix for the $k$th sample, respectively. Note that the arbitrary transformation $h_k$ can be assumed different for each sample, since observations are ranked within the sample.
The earlier analysis can be extended to give a combined estimate of $\beta$ as $\hat{\beta} = D d$, where

 $D^{-1} = \sum_{k=1}^{\mathrm{ns}} X_k^{\mathrm{T}} (B_k - A_k) X_k$

and

 $d = \sum_{k=1}^{\mathrm{ns}} X_k^{\mathrm{T}} a_k,$

with $a_k$, $B_k$ and $A_k$ defined as $a$, $B$ and $A$ above, but for the $k$th sample.
The remaining statistics are calculated as for the one sample case.
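The multi-sample combination can be sketched for a single parameter ($p = 1$), where each matrix product collapses to a scalar. The per-sample values below are assumptions for illustration only:

```python
# (Xk' ak, Xk'(Bk - Ak)Xk) for each sample k; hypothetical values.
samples = [
    (1.0, 2.0),
    (0.5, 1.0),
]

d = sum(score for score, _ in samples)   # d = sum over k of Xk' ak
d_inv = sum(var for _, var in samples)   # D^-1 = sum over k of Xk'(Bk - Ak)Xk
beta_hat = d / d_inv                     # combined estimate: D d
```

With $p > 1$ the same accumulation applies, with matrix sums and a matrix inverse in place of the scalar division.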

## References

Pettitt A N (1982) Inference for the linear model using a likelihood based on ranks *J. Roy. Statist. Soc. Ser. B* **44** 234–243

## Parameters

### Compulsory Input Parameters

1:     nv(ns) – int64 / int32 / nag_int array
ns, the dimension of the array, must satisfy the constraint $\mathbf{ns} \ge 1$.
The number of observations in the $i$th sample, for $i = 1, 2, \dots, \mathbf{ns}$.
Constraint: $\mathbf{nv}(i) \ge 1$, for $i = 1, 2, \dots, \mathbf{ns}$.
2:     y(nsum) – double array
nsum, the dimension of the array, must satisfy the constraint $\mathit{nsum} = \sum_{i=1}^{\mathbf{ns}} \mathbf{nv}(i)$.
The observations in each sample. Specifically, $\mathbf{y}\left(\sum_{k=1}^{i-1} \mathbf{nv}(k) + j\right)$ must contain the $j$th observation in the $i$th sample.
3:     x(ldx,ip) – double array
ldx, the first dimension of the array, must satisfy the constraint $\mathit{ldx} \ge \mathit{nsum}$.
The design matrices for each sample. Specifically, $\mathbf{x}\left(\sum_{k=1}^{i-1} \mathbf{nv}(k) + j, l\right)$ must contain the value of the $l$th explanatory variable for the $j$th observation in the $i$th sample.
Constraint: $\mathbf{x}$ must not contain a column with all elements equal.
4:     idist – int64 / int32 / nag_int scalar
The error distribution to be used in the analysis.
$\mathbf{idist} = 1$
Normal.
$\mathbf{idist} = 2$
Logistic.
$\mathbf{idist} = 3$
Extreme value.
$\mathbf{idist} = 4$
Double-exponential.
Constraint: $1 \le \mathbf{idist} \le 4$.
5:     nmax – int64 / int32 / nag_int scalar
The value of the largest sample size.
Constraint: $\mathbf{nmax} = \max_{1 \le i \le \mathbf{ns}} \left(\mathbf{nv}(i)\right)$ and $\mathbf{nmax} > \mathbf{ip}$.
6:     tol – double scalar
The tolerance for judging whether two observations are tied. Thus, observations $Y_i$ and $Y_j$ are adjudged to be tied if $|Y_i - Y_j| < \mathbf{tol}$.
Constraint: $\mathbf{tol} > 0.0$.
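The packed storage used by y (and by the rows of x) places the $j$th observation of the $i$th sample after all observations of the preceding samples. A sketch of the index arithmetic, with hypothetical sample sizes:

```python
# Hypothetical sample sizes; nv(i) in the documentation's notation.
nv = [3, 2]

def flat_index(i, j):
    """1-based position in y of the j-th observation of the i-th sample."""
    return sum(nv[:i - 1]) + j

print(flat_index(1, 3))  # 3: last observation of sample 1
print(flat_index(2, 1))  # 4: first observation of sample 2
```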

### Optional Input Parameters

1:     ns – int64 / int32 / nag_int scalar
Default: the dimension of the array nv.
The number of samples.
Constraint: $\mathbf{ns} \ge 1$.
2:     ip – int64 / int32 / nag_int scalar
Default: the second dimension of the array x.
The number of parameters to be fitted.
Constraint: $\mathbf{ip} \ge 1$.

### Input Parameters Omitted from the MATLAB Interface

`nsum`, `ldx`, `ldprvr`, `work`, `lwork`, `iwa`

### Output Parameters

1:     prvr(ldprvr,ip) – double array
ldprvr, the first dimension of the array, satisfies $\mathit{ldprvr} \ge \mathbf{ip} + 1$.
The variance-covariance matrices of the score statistics and of the parameter estimates, the former stored in the upper triangle and the latter in the lower triangle. Thus for $1 \le i \le j \le \mathbf{ip}$, $\mathbf{prvr}(i, j)$ contains an estimate of the covariance between the $i$th and $j$th score statistics, and for $1 \le j \le i \le \mathbf{ip}$, $\mathbf{prvr}(i + 1, j)$ contains an estimate of the covariance between the $i$th and $j$th parameter estimates.
2:     irank(nmax) – int64 / int32 / nag_int array
For the one sample case, irank contains the ranks of the observations.
3:     zin(nmax) – double array
For the one sample case, zin contains the expected values of the function $g(\cdot)$ of the order statistics.
4:     eta(nmax) – double array
For the one sample case, eta contains the expected values of the function $g'(\cdot)$ of the order statistics.
5:     vapvec($\mathbf{nmax} \times (\mathbf{nmax} + 1)/2$) – double array
For the one sample case, vapvec contains the upper triangle of the variance-covariance matrix of the function $g(\cdot)$ of the order statistics, stored column-wise.
6:     parest($4 \times \mathbf{ip} + 1$) – double array
The statistics calculated by the function.
The first ip components of parest contain the score statistics.
The next ip elements contain the parameter estimates.
$\mathbf{parest}(2 \times \mathbf{ip} + 1)$ contains the value of the $\chi^2$ statistic.
The next ip elements of parest contain the standard errors of the parameter estimates.
Finally, the remaining ip elements of parest contain the z$z$-statistics.
7:     ifail – int64 / int32 / nag_int scalar
$\mathbf{ifail} = 0$ unless the function detects an error (see [Error Indicators and Warnings]).
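Given the layout of parest described above, the five groups of statistics can be sliced out by position. The vector below is a placeholder with ip = 2, not real g08ra output:

```python
ip = 2
# Placeholder values, laid out as documented: ip score statistics,
# ip estimates, the chi-squared statistic, ip standard errors,
# ip z-statistics (here chosen so that z = estimate / std. error).
parest = [0.1, 0.2,
          1.0, 2.0,
          5.0,
          0.5, 0.4,
          2.0, 5.0]

scores    = parest[0:ip]
estimates = parest[ip:2 * ip]
chi2      = parest[2 * ip]
std_errs  = parest[2 * ip + 1:3 * ip + 1]
zstats    = parest[3 * ip + 1:4 * ip + 1]
assert len(parest) == 4 * ip + 1
```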

## Error Indicators and Warnings

Errors or warnings detected by the function:
$\mathbf{ifail} = 1$

On entry, $\mathbf{ns} < 1$,
or $\mathbf{tol} \le 0.0$,
or $\mathbf{nmax} \le \mathbf{ip}$,
or $\mathit{ldprvr} < \mathbf{ip} + 1$,
or $\mathit{ldx} < \mathit{nsum}$,
or $\mathbf{nmax} \ne \max_{1 \le i \le \mathbf{ns}} \left(\mathbf{nv}(i)\right)$,
or $\mathbf{nv}(i) \le 0$ for some $i$,
or $\mathit{nsum} \ne \sum_{i=1}^{\mathbf{ns}} \mathbf{nv}(i)$,
or $\mathbf{ip} < 1$,
or $\mathit{lwork} < \mathbf{nmax} \times (\mathbf{ip} + 1)$.
$\mathbf{ifail} = 2$

On entry, $\mathbf{idist} < 1$ or $\mathbf{idist} > 4$.
$\mathbf{ifail} = 3$

On entry, all the observations are adjudged to be tied. You are advised to check the value supplied for tol.

$\mathbf{ifail} = 4$

The matrix $X^{\mathrm{T}}(B - A)X$ is either ill-conditioned or not positive definite. This error should only occur with extreme rankings of the data.

$\mathbf{ifail} = 5$

The matrix $X$ has at least one column with all elements equal.

## Accuracy

The computations are believed to be stable.

## Further Comments

The time taken by nag_nonpar_rank_regsn (g08ra) depends on the number of samples, the total number of observations and the number of parameters fitted.
In extreme cases the parameter estimates for certain models can be infinite, although this is unlikely to occur in practice. See Pettitt (1982) for further details.

## Example

```
function nag_nonpar_rank_regsn_example
nv = [int64(20)];
y = [1;
1;
3;
4;
2;
4;
1;
5;
4;
4;
4;
4;
4;
1;
4;
5;
5;
4;
4;
3];
x = [1, 23;
1, 32;
1, 37;
1, 41;
1, 41;
1, 48;
1, 48;
1, 55;
1, 55;
0, 56;
1, 57;
1, 57;
1, 57;
0, 58;
1, 59;
0, 59;
0, 60;
1, 61;
1, 62;
1, 62];
idist = int64(2);
nmax = int64(20);
tol = 1e-05;
[parvar, irank, zin, eta, vapvec, parest, ifail] = ...
nag_nonpar_rank_regsn(nv, y, x, idist, nmax, tol);
parvar, irank, zin, eta, parest, ifail
```
```

parvar =

0.6733   -4.1587
1.5604  533.6696
0.0122    0.0020

irank =

1
2
6
8
5
9
3
18
10
11
12
13
14
4
15
19
20
16
17
7

zin =

-0.7619
-0.7619
-0.7619
-0.7619
-0.5238
-0.3810
-0.3810
0.1905
0.1905
0.1905
0.1905
0.1905
0.1905
0.1905
0.1905
0.1905
0.1905
0.8095
0.8095
0.8095

eta =

0.1948
0.1948
0.1948
0.1948
0.3463
0.4069
0.4069
0.4242
0.4242
0.4242
0.4242
0.4242
0.4242
0.4242
0.4242
0.4242
0.4242
0.1616
0.1616
0.1616

parest =

-1.0476
64.3333
-0.8524
0.1139
8.2210
1.2492
0.0444
-0.6824
2.5673

ifail =

0

```