
# NAG Toolbox: nag_nonpar_test_ks_1sample (g08cb)

## Purpose

nag_nonpar_test_ks_1sample (g08cb) performs the one sample Kolmogorov–Smirnov test, using one of the distributions provided.

## Syntax

```
[par, d, z, p, sx, ifail] = g08cb(x, dist, par, estima, ntype, 'n', n)
[par, d, z, p, sx, ifail] = nag_nonpar_test_ks_1sample(x, dist, par, estima, ntype, 'n', n)
```

## Description

The data consist of a single sample of $n$ observations denoted by ${x}_{1},{x}_{2},\dots ,{x}_{n}$. Let ${S}_{n}\left({x}_{\left(i\right)}\right)$ and ${F}_{0}\left({x}_{\left(i\right)}\right)$ represent the sample cumulative distribution function and the theoretical (null) cumulative distribution function respectively at the point ${x}_{\left(i\right)}$ where ${x}_{\left(i\right)}$ is the $i$th smallest sample observation.
The Kolmogorov–Smirnov test provides a test of the null hypothesis ${H}_{0}$: the data are a random sample of observations from a theoretical distribution specified by you against one of the following alternative hypotheses:
(i) ${H}_{1}$: the data cannot be considered to be a random sample from the specified null distribution.
(ii) ${H}_{2}$: the data arise from a distribution which dominates the specified null distribution. In practical terms, this would be demonstrated if the values of the sample cumulative distribution function ${S}_{n}\left(x\right)$ tended to exceed the corresponding values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$.
(iii) ${H}_{3}$: the data arise from a distribution which is dominated by the specified null distribution. In practical terms, this would be demonstrated if the values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$ tended to exceed the corresponding values of the sample cumulative distribution function ${S}_{n}\left(x\right)$.
One of the following test statistics is computed, depending on the particular alternative hypothesis specified (see the description of the argument ntype in Arguments).
For the alternative hypothesis ${H}_{1}$.
• ${D}_{n}$ – the largest absolute deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{D}_{n}^{+},{D}_{n}^{-}\right\}$.
For the alternative hypothesis ${H}_{2}$.
• ${D}_{n}^{+}$ – the largest positive deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}^{+}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{S}_{n}\left({x}_{\left(i\right)}\right)-{F}_{0}\left({x}_{\left(i\right)}\right),0\right\}$ for both discrete and continuous null distributions.
For the alternative hypothesis ${H}_{3}$.
• ${D}_{n}^{-}$ – the largest positive deviation between the theoretical cumulative distribution function and the sample cumulative distribution function. Formally if the null distribution is discrete then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i\right)}\right),0\right\}$ and if the null distribution is continuous then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i-1\right)}\right),0\right\}$.
The standardized statistic $Z=D×\sqrt{n}$ is also computed where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the choice of the alternative hypothesis. This is the standardized value of $D$ with no correction for continuity applied and the distribution of $Z$ converges asymptotically to a limiting distribution, first derived by Kolmogorov (1933), and then tabulated by Smirnov (1948). The asymptotic distributions for the one-sided statistics were obtained by Smirnov (1933).
The probability, under the null hypothesis, of obtaining a value of the test statistic as extreme as that observed, is computed. If $n\le 100$, an exact method given by Conover (1980) is used. Note that the method used is only exact for continuous theoretical distributions and does not include Conover's modification for discrete distributions. This method computes the one-sided probabilities. The two-sided probabilities are estimated by doubling the one-sided probability. This is a good estimate for small $p$, that is $p\le 0.10$, but it becomes very poor for larger $p$. If $n>100$ then $p$ is computed using the Kolmogorov–Smirnov limiting distributions, see Feller (1948), Kendall and Stuart (1973), Kolmogorov (1933), Smirnov (1933) and Smirnov (1948).
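For a continuous null CDF, the statistics above can be sketched in a few lines (an illustrative reimplementation in Python, not the NAG routine; `ks_1sample` and `kolmogorov_p` are hypothetical helper names). With the observations sorted, ${S}_{n}\left({x}_{\left(i\right)}\right)=i/n$, so ${D}_{n}^{+}$, ${D}_{n}^{-}$ and ${D}_{n}$ follow directly from the definitions; the two-sided tail probability here uses the Kolmogorov limiting series $p=2\sum_{k=1}^{\infty }{\left(-1\right)}^{k-1}{e}^{-2{k}^{2}{z}^{2}}$, i.e., the $n>100$ branch only.

```python
import math

def ks_1sample(x, cdf, ntype=1):
    """One-sample Kolmogorov-Smirnov statistic for a continuous null CDF.

    ntype=1 returns D_n, ntype=2 returns D_n+, ntype=3 returns D_n-.
    Illustrative sketch only, not the NAG implementation.
    """
    sx = sorted(x)
    n = len(sx)
    # S_n steps from (i-1)/n to i/n at the i-th smallest observation, so the
    # extreme deviations occur at the step points:
    dplus = max(max((i + 1) / n - cdf(v) for i, v in enumerate(sx)), 0.0)
    dminus = max(max(cdf(v) - i / n for i, v in enumerate(sx)), 0.0)
    d = {1: max(dplus, dminus), 2: dplus, 3: dminus}[ntype]
    z = d * math.sqrt(n)  # standardized statistic, no continuity correction
    return d, z

def kolmogorov_p(z, terms=100):
    """Two-sided tail probability from the Kolmogorov limiting distribution."""
    s = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * z * z)
                  for k in range(1, terms + 1))
    return max(0.0, min(1.0, s))
```

For large $z$ the two-sided series is dominated by its first term, $2{e}^{-2{z}^{2}}$, which is exactly double the one-sided limiting probability ${e}^{-2{z}^{2}}$; this is why the doubling estimate described above is good for small $p$ but poor for large $p$.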

## References

Conover W J (1980) Practical Nonparametric Statistics Wiley
Feller W (1948) On the Kolmogorov–Smirnov limit theorems for empirical distributions Ann. Math. Statist. 19 179–181
Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin
Kolmogorov A N (1933) Sulla determinazione empirica di una legge di distribuzione Giornale dell' Istituto Italiano degli Attuari 4 83–91
Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill
Smirnov N (1933) Estimate of deviation between empirical distribution functions in two independent samples Bull. Moscow Univ. 2(2) 3–16
Smirnov N (1948) Table for estimating the goodness of fit of empirical distributions Ann. Math. Statist. 19 279–281

## Parameters

### Compulsory Input Parameters

1:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
The sample observations ${x}_{1},{x}_{2},\dots ,{x}_{n}$.
Constraint: the sample observations supplied must be consistent, in the usual manner, with the null distribution chosen, as specified by the arguments dist and par. For further details see Further Comments.
2:     $\mathrm{dist}$ – string
The theoretical (null) distribution from which it is suspected the data may arise.
${\mathbf{dist}}=\text{'U'}$
The uniform distribution over $\left(a,b\right)$.
${\mathbf{dist}}=\text{'N'}$
The Normal distribution with mean $\mu$ and variance ${\sigma }^{2}$.
${\mathbf{dist}}=\text{'G'}$
The gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$, where the mean $\text{}=\alpha \beta$.
${\mathbf{dist}}=\text{'BE'}$
The beta distribution with shape parameters $\alpha$ and $\beta$, where the mean $\text{}=\alpha /\left(\alpha +\beta \right)$.
${\mathbf{dist}}=\text{'BI'}$
The binomial distribution with the number of trials, $m$, and the probability of a success, $p$.
${\mathbf{dist}}=\text{'E'}$
The exponential distribution with parameter $\lambda$, where the mean $\text{}=1/\lambda$.
${\mathbf{dist}}=\text{'P'}$
The Poisson distribution with parameter $\mu$, where the mean $\text{}=\mu$.
${\mathbf{dist}}=\text{'NB'}$
The negative binomial distribution with the number of trials, $m$, and the probability of success, $p$.
${\mathbf{dist}}=\text{'GP'}$
The generalized Pareto distribution with shape parameter $\xi$ and scale $\beta$.
Any number of characters may be supplied as the actual argument; however, only the characters (at most two) required to uniquely identify the distribution are referenced.
Constraint: ${\mathbf{dist}}=\text{'U'}$, $\text{'N'}$, $\text{'G'}$, $\text{'BE'}$, $\text{'BI'}$, $\text{'E'}$, $\text{'P'}$, $\text{'NB'}$ or $\text{'GP'}$.
3:     $\mathrm{par}\left(2\right)$ – double array
If ${\mathbf{estima}}=\text{'S'}$, par must contain the known values of the parameter(s) of the null distribution as follows.
If a uniform distribution is used, then ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the boundaries $a$ and $b$ respectively.
If a Normal distribution is used, then ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the mean, $\mu$, and the variance, ${\sigma }^{2}$, respectively.
If a gamma distribution is used, then ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the parameters $\alpha$ and $\beta$ respectively.
If a beta distribution is used, then ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the parameters $\alpha$ and $\beta$ respectively.
If a binomial distribution is used, then ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the parameters $m$ and $p$ respectively.
If an exponential distribution is used, then ${\mathbf{par}}\left(1\right)$ must contain the parameter $\lambda$.
If a Poisson distribution is used, then ${\mathbf{par}}\left(1\right)$ must contain the parameter $\mu$.
If a negative binomial distribution is used, ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the parameters $m$ and $p$ respectively.
If a generalized Pareto distribution is used, ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ must contain the parameters $\xi$ and $\beta$ respectively.
If ${\mathbf{estima}}=\text{'E'}$, par need not be set except when the null distribution requested is either the binomial or the negative binomial distribution in which case ${\mathbf{par}}\left(1\right)$ must contain the parameter $m$.
Constraints:
• if ${\mathbf{dist}}=\text{'U'}$, ${\mathbf{par}}\left(1\right)<{\mathbf{par}}\left(2\right)$;
• if ${\mathbf{dist}}=\text{'N'}$, ${\mathbf{par}}\left(2\right)>0.0$;
• if ${\mathbf{dist}}=\text{'G'}$, ${\mathbf{par}}\left(1\right)>0.0$ and ${\mathbf{par}}\left(2\right)>0.0$;
• if ${\mathbf{dist}}=\text{'BE'}$, ${\mathbf{par}}\left(1\right)>0.0$ and ${\mathbf{par}}\left(2\right)>0.0$ and ${\mathbf{par}}\left(1\right)\le {10}^{6}$ and ${\mathbf{par}}\left(2\right)\le {10}^{6}$;
• if ${\mathbf{dist}}=\text{'BI'}$, ${\mathbf{par}}\left(1\right)\ge 1.0$ and $0.0<{\mathbf{par}}\left(2\right)<1.0$ and ${\mathbf{par}}\left(1\right)×{\mathbf{par}}\left(2\right)×\left(1.0-{\mathbf{par}}\left(2\right)\right)\le {10}^{6}$ and ${\mathbf{par}}\left(1\right)<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision returned by nag_machine_precision (x02aj);
• if ${\mathbf{dist}}=\text{'E'}$, ${\mathbf{par}}\left(1\right)>0.0$;
• if ${\mathbf{dist}}=\text{'P'}$, ${\mathbf{par}}\left(1\right)>0.0$ and ${\mathbf{par}}\left(1\right)\le {10}^{6}$;
• if ${\mathbf{dist}}=\text{'NB'}$, ${\mathbf{par}}\left(1\right)\ge 1.0$ and $0.0<{\mathbf{par}}\left(2\right)<1.0$ and ${\mathbf{par}}\left(1\right)×\left(1.0-{\mathbf{par}}\left(2\right)\right)/\left({\mathbf{par}}\left(2\right)×{\mathbf{par}}\left(2\right)\right)\le {10}^{6}$ and ${\mathbf{par}}\left(1\right)<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision returned by nag_machine_precision (x02aj);
• if ${\mathbf{dist}}=\text{'GP'}$, ${\mathbf{par}}\left(2\right)>0$.
4:     $\mathrm{estima}$ – string (length ≥ 1)
estima must specify whether values of the parameters of the null distribution are known or are to be estimated from the data.
${\mathbf{estima}}=\text{'S'}$
Values of the parameters will be supplied in the array par described above.
${\mathbf{estima}}=\text{'E'}$
Parameters are to be estimated from the data except when the null distribution requested is the binomial distribution or the negative binomial distribution in which case the first parameter, $m$, must be supplied in ${\mathbf{par}}\left(1\right)$ and only the second parameter, $p$, is estimated from the data.
Constraint: ${\mathbf{estima}}=\text{'S'}$ or $\text{'E'}$.
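As a concrete illustration of ${\mathbf{estima}}=\text{'E'}$ for the Normal case, the parameters appear to be estimated by the sample mean and the $\left(n-1\right)$-divisor sample variance. This is an inference from the Example output at the end of this page, not a statement from the routine documentation; the Python sketch below reproduces the parameters printed there.

```python
# Data from the Example section; sample mean and (n-1)-divisor variance,
# as apparently used for dist='N' with estima='E' (inferred, not documented).
x = [0.01, 0.30, 0.20, 0.90, 1.20, 0.09, 1.30, 0.18, 0.90, 0.48,
     1.98, 0.03, 0.50, 0.07, 0.70, 0.60, 0.95, 1.00, 0.31, 1.45,
     1.04, 1.25, 0.15, 0.75, 0.85, 0.22, 1.56, 0.81, 0.57, 0.55]
n = len(x)
mean = sum(x) / n
var = sum((v - mean) ** 2 for v in x) / (n - 1)
print(round(mean, 4), round(var, 4))  # 0.6967 0.2564, matching the Example
```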
5:     $\mathrm{ntype}$ – int64/int32/nag_int scalar
The test statistic to be calculated, i.e., the choice of alternative hypothesis.
${\mathbf{ntype}}=1$
Computes ${D}_{n}$, to test ${H}_{0}$ against ${H}_{1}$,
${\mathbf{ntype}}=2$
Computes ${D}_{n}^{+}$, to test ${H}_{0}$ against ${H}_{2}$,
${\mathbf{ntype}}=3$
Computes ${D}_{n}^{-}$, to test ${H}_{0}$ against ${H}_{3}$.
Constraint: ${\mathbf{ntype}}=1$, $2$ or $3$.

### Optional Input Parameters

1:     $\mathrm{n}$ – int64/int32/nag_int scalar
$n$, the number of observations in the sample.
Default: the dimension of the array x.
Constraint: ${\mathbf{n}}\ge 3$.

### Output Parameters

1:     $\mathrm{par}\left(2\right)$ – double array
If ${\mathbf{estima}}=\text{'S'}$, par is unchanged; if ${\mathbf{estima}}=\text{'E'}$, and ${\mathbf{dist}}=\text{'BI'}$ or ${\mathbf{dist}}=\text{'NB'}$ then ${\mathbf{par}}\left(2\right)$ is estimated from the data; otherwise ${\mathbf{par}}\left(1\right)$ and ${\mathbf{par}}\left(2\right)$ are estimated from the data.
2:     $\mathrm{d}$ – double scalar
The Kolmogorov–Smirnov test statistic (${D}_{n}$, ${D}_{n}^{+}$ or ${D}_{n}^{-}$ according to the value of ntype).
3:     $\mathrm{z}$ – double scalar
A standardized value, $Z$, of the test statistic, $D$, without any correction for continuity.
4:     $\mathrm{p}$ – double scalar
The probability, $p$, associated with the observed value of $D$ where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the value of ntype (see Description).
5:     $\mathrm{sx}\left({\mathbf{n}}\right)$ – double array
The sample observations, ${x}_{1},{x}_{2},\dots ,{x}_{n}$, sorted in ascending order.
6:     $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:
${\mathbf{ifail}}=1$
Constraint: ${\mathbf{n}}\ge 3$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{dist}}=_$ was an illegal value.
${\mathbf{ifail}}=3$
Constraint: ${\mathbf{ntype}}=1$, $2$ or $3$.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{estima}}=_$ was an illegal value.
${\mathbf{ifail}}=5$
Constraint: for the beta distribution, $0<{\mathbf{par}}\left(1\right)\le 1000000$ and $0<{\mathbf{par}}\left(2\right)\le 1000000$.
Constraint: for the binomial distribution, $0<{\mathbf{par}}\left(2\right)<1$.
Constraint: for the binomial distribution, $1\le {\mathbf{par}}\left(1\right)<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision returned by nag_machine_precision (x02aj).
Constraint: for the exponential distribution, ${\mathbf{par}}\left(1\right)>0$.
Constraint: for the gamma distribution, ${\mathbf{par}}\left(1\right)>0$ and ${\mathbf{par}}\left(2\right)>0$.
Constraint: for the generalized Pareto distribution, ${\mathbf{par}}\left(2\right)>0$.
Constraint: for the generalized Pareto distribution with ${\mathbf{par}}\left(1\right)<0$, $0\le {\mathbf{x}}\left(\mathit{i}\right)\le -{\mathbf{par}}\left(2\right)/{\mathbf{par}}\left(1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Constraint: for the negative binomial distribution, $0<{\mathbf{par}}\left(2\right)<1$.
Constraint: for the negative binomial distribution, $1\le {\mathbf{par}}\left(1\right)<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision returned by nag_machine_precision (x02aj).
Constraint: for the Normal distribution, ${\mathbf{par}}\left(2\right)>0$.
Constraint: for the Poisson distribution, $0<{\mathbf{par}}\left(1\right)<1000000$.
Constraint: for the uniform distribution, ${\mathbf{par}}\left(1\right)<{\mathbf{par}}\left(2\right)$.
${\mathbf{ifail}}=6$
On entry, ${\mathbf{dist}}=\text{'BE'}$ and at least one observation is illegal.
Constraint: $0\le {\mathbf{x}}\left(\mathit{i}\right)\le 1$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
On entry, ${\mathbf{dist}}=\text{'BI'}$ and all observations are zero or $m$.
Constraint: at least one $0.0<{\mathbf{x}}\left(\mathit{i}\right)<{\mathbf{par}}\left(1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
On entry, ${\mathbf{dist}}=\text{'BI'}$ and at least one observation is illegal.
Constraint: $0\le {\mathbf{x}}\left(\mathit{i}\right)\le {\mathbf{par}}\left(1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
On entry, ${\mathbf{dist}}=\text{'E'}$ or $\text{'P'}$ and all observations are zero.
Constraint: at least one ${\mathbf{x}}\left(\mathit{i}\right)>0$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
On entry, ${\mathbf{dist}}=\text{'G'}$, $\text{'E'}$, $\text{'P'}$, $\text{'NB'}$ or $\text{'GP'}$ and at least one observation is negative.
Constraint: ${\mathbf{x}}\left(\mathit{i}\right)\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
On entry, ${\mathbf{dist}}=\text{'GP'}$ and ${\mathbf{estima}}=\text{'E'}$.
The parameter estimates are invalid; the data may not be from the generalized Pareto distribution.
On entry, ${\mathbf{dist}}=\text{'U'}$ and at least one observation is illegal.
Constraint: ${\mathbf{par}}\left(1\right)\le {\mathbf{x}}\left(\mathit{i}\right)\le {\mathbf{par}}\left(2\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
${\mathbf{ifail}}=7$
On entry, ${\mathbf{dist}}=\text{'U'}$, $\text{'N'}$, $\text{'G'}$, $\text{'BE'}$ or $\text{'GP'}$, ${\mathbf{estima}}=\text{'E'}$ and the whole sample is constant. Thus the variance is zero.
${\mathbf{ifail}}=8$
On entry, ${\mathbf{dist}}=\text{'BI'}$, ${\mathbf{par}}\left(1\right)=_$, ${\mathbf{par}}\left(2\right)=_$.
The variance ${\mathbf{par}}\left(1\right)×{\mathbf{par}}\left(2\right)×\left(1-{\mathbf{par}}\left(2\right)\right)$ exceeds 1000000.
On entry, ${\mathbf{dist}}=\text{'NB'}$, ${\mathbf{par}}\left(1\right)=_$, ${\mathbf{par}}\left(2\right)=_$.
The variance ${\mathbf{par}}\left(1\right)×\left(1-{\mathbf{par}}\left(2\right)\right)/\left({\mathbf{par}}\left(2\right)×{\mathbf{par}}\left(2\right)\right)$ exceeds 1000000.
${\mathbf{ifail}}=9$
On entry, ${\mathbf{dist}}=\text{'G'}$ and in the computation of the incomplete gamma function by nag_specfun_gamma_incomplete (s14ba) the convergence of the Taylor series or Legendre continued fraction fails within $600$ iterations.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this function.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.

## Accuracy

The approximation for $p$, given when $n>100$, has a relative error of at most 2.5% for most cases. The two-sided probability is approximated by doubling the one-sided probability. This is only good for small $p$, i.e., $p<0.10$, but very poor for large $p$. The error is always on the conservative side; that is, the tail probability, $p$, is overestimated.

## Further Comments

The time taken by nag_nonpar_test_ks_1sample (g08cb) increases with $n$ until $n>100$ at which point it drops and then increases slowly with $n$. The time may also depend on the choice of null distribution and on whether or not the parameters are to be estimated.
The data supplied in the argument x must be consistent with the chosen null distribution as follows:
• when ${\mathbf{dist}}=\text{'U'}$, then ${\mathbf{par}}\left(1\right)\le {x}_{i}\le {\mathbf{par}}\left(2\right)$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'N'}$, then there are no constraints on the ${x}_{i}$'s;
• when ${\mathbf{dist}}=\text{'G'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'BE'}$, then $0.0\le {x}_{i}\le 1.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'BI'}$, then $0.0\le {x}_{i}\le {\mathbf{par}}\left(1\right)$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'E'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'P'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'NB'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'GP'}$ and ${\mathbf{par}}\left(1\right)\ge 0.0$, then ${x}_{\mathit{i}}\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\text{'GP'}$ and ${\mathbf{par}}\left(1\right)<0.0$, then $0.0\le {x}_{\mathit{i}}\le -{\mathbf{par}}\left(2\right)/{\mathbf{par}}\left(1\right)$, for $\mathit{i}=1,2,\dots ,n$.
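The consistency rules above can be collected into a small pre-check before calling the routine. The sketch below is a hypothetical helper (`check_sample` is not part of the NAG Toolbox) written in Python for self-containment; it simply mirrors the list above.

```python
def check_sample(x, dist, par):
    """Hypothetical pre-check mirroring g08cb's data-consistency rules.

    dist uses the same codes as the routine; par holds the distribution
    parameters in the same order as the par argument.
    """
    if dist == 'U':    # uniform on (par[0], par[1])
        return all(par[0] <= v <= par[1] for v in x)
    if dist == 'N':    # Normal: no constraints on the observations
        return True
    if dist == 'BE':   # beta: observations in [0, 1]
        return all(0.0 <= v <= 1.0 for v in x)
    if dist == 'BI':   # binomial: observations in [0, m]
        return all(0.0 <= v <= par[0] for v in x)
    if dist == 'GP' and par[0] < 0.0:
        # generalized Pareto with negative shape: bounded support
        return all(0.0 <= v <= -par[1] / par[0] for v in x)
    # 'G', 'E', 'P', 'NB', and 'GP' with par[0] >= 0: nonnegative data
    return all(v >= 0.0 for v in x)
```

For example, with ${\mathbf{dist}}=\text{'GP'}$ and a shape of $-0.5$ and scale of $0.1$, the support is $\left[0,0.2\right]$, so an observation of $0.3$ would trigger the ${\mathbf{ifail}}={\mathbf{6}}$ exit.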

## Example

The following example program reads in a set of data consisting of 30 observations. The Kolmogorov–Smirnov test is then applied to test whether the sample is taken from a Normal distribution whose mean and variance are estimated from the data. We are testing against ${H}_{1}$; that is, we are performing a two-tailed test. The values of d, z and p are printed.
```function g08cb_example

fprintf('g08cb example results\n\n');

x = [0.01; 0.30; 0.20; 0.90; 1.20; 0.09; 1.30; 0.18; 0.90; 0.48;
     1.98; 0.03; 0.50; 0.07; 0.70; 0.60; 0.95; 1.00; 0.31; 1.45;
     1.04; 1.25; 0.15; 0.75; 0.85; 0.22; 1.56; 0.81; 0.57; 0.55];

% Parameters
dist   = 'Normal';
estima = 'Estimate';
ntype  = int64(1);
par = [0; 0];

[par, d, z, p, sx, ifail] = g08cb( ...
x, dist, par, estima, ntype);

fprintf('K-S Test\n');
fprintf('Distribution: %s\n',dist);
fprintf('Parameters  :');
fprintf('%7.4f', par);
fprintf('\n\nTest statistic D = %8.4f\n', d);
fprintf('Z statistic      = %8.4f\n', z);
fprintf('Tail probability = %8.4f\n', p);

```
```g08cb example results

K-S Test
Distribution: Normal
Parameters  : 0.6967 0.2564

Test statistic D =   0.1108
Z statistic      =   0.6068
Tail probability =   0.8925
```