NAG C Library Manual

# NAG Library Function Document: nag_1_sample_ks_test (g08cbc)

## 1  Purpose

nag_1_sample_ks_test (g08cbc) performs the one-sample Kolmogorov–Smirnov test, using one of the standard distributions provided.

## 2  Specification

 #include <nag.h>
 #include <nagg08.h>
 void nag_1_sample_ks_test (Integer n, const double x[], Nag_Distributions dist, double par[], Nag_ParaEstimates estima, Nag_TestStatistics dtype, double *d, double *z, double *p, NagError *fail)

## 3  Description

The data consist of a single sample of $n$ observations denoted by ${x}_{1},{x}_{2},\dots ,{x}_{n}$. Let ${S}_{n}\left({x}_{\left(i\right)}\right)$ and ${F}_{0}\left({x}_{\left(i\right)}\right)$ represent the sample cumulative distribution function and the theoretical (null) cumulative distribution function respectively at the point ${x}_{\left(i\right)}$ where ${x}_{\left(i\right)}$ is the $i$th smallest sample observation.
The Kolmogorov–Smirnov test provides a test of the null hypothesis ${H}_{0}$: the data are a random sample of observations from a theoretical distribution specified by you against one of the following alternative hypotheses:
 (i) ${H}_{1}$: the data cannot be considered to be a random sample from the specified null distribution.
 (ii) ${H}_{2}$: the data arise from a distribution which dominates the specified null distribution. In practical terms, this would be demonstrated if the values of the sample cumulative distribution function ${S}_{n}\left(x\right)$ tended to exceed the corresponding values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$.
 (iii) ${H}_{3}$: the data arise from a distribution which is dominated by the specified null distribution. In practical terms, this would be demonstrated if the values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$ tended to exceed the corresponding values of the sample cumulative distribution function ${S}_{n}\left(x\right)$.
One of the following test statistics is computed depending on the particular alternative hypothesis specified (see the description of the argument dtype in Section 5).
For the alternative hypothesis ${H}_{1}$.
• ${D}_{n}$ – the largest absolute deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{D}_{n}^{+},{D}_{n}^{-}\right\}$.
For the alternative hypothesis ${H}_{2}$.
• ${D}_{n}^{+}$ – the largest positive deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}^{+}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{S}_{n}\left({x}_{\left(i\right)}\right)-{F}_{0}\left({x}_{\left(i\right)}\right),0\right\}$ for both discrete and continuous null distributions.
For the alternative hypothesis ${H}_{3}$.
• ${D}_{n}^{-}$ – the largest positive deviation between the theoretical cumulative distribution function and the sample cumulative distribution function. Formally if the null distribution is discrete then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i\right)}\right),0\right\}$ and if the null distribution is continuous then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i-1\right)}\right),0\right\}$.
The standardized statistic $Z=D×\sqrt{n}$ is also computed where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the choice of the alternative hypothesis. This is the standardized value of $D$ with no correction for continuity applied and the distribution of $Z$ converges asymptotically to a limiting distribution, first derived by Kolmogorov (1933), and then tabulated by Smirnov (1948). The asymptotic distributions for the one-sided statistics were obtained by Smirnov (1933).
The probability, under the null hypothesis, of obtaining a value of the test statistic as extreme as that observed is computed. If $n\le 100$ an exact method given by Conover (1980) is used. Note that the method used is only exact for continuous theoretical distributions and does not include Conover's modification for discrete distributions. This method computes the one-sided probabilities. The two-sided probabilities are estimated by doubling the one-sided probability. This is a good estimate for small $p$, that is $p\le 0.10$, but it becomes very poor for larger $p$. If $n>100$ then $p$ is computed using the Kolmogorov–Smirnov limiting distributions, see Feller (1948), Kendall and Stuart (1973), Kolmogorov (1933), Smirnov (1933) and Smirnov (1948).

## 4  References

Conover W J (1980) Practical Nonparametric Statistics Wiley
Feller W (1948) On the Kolmogorov–Smirnov limit theorems for empirical distributions Ann. Math. Statist. 19 179–181
Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin
Kolmogorov A N (1933) Sulla determinazione empirica di una legge di distribuzione Giornale dell' Istituto Italiano degli Attuari 4 83–91
Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill
Smirnov N (1933) Estimate of deviation between empirical distribution functions in two independent samples Bull. Moscow Univ. 2(2) 3–16
Smirnov N (1948) Table for estimating the goodness of fit of empirical distributions Ann. Math. Statist. 19 279–281

## 5  Arguments

1:     n – Integer – Input
On entry: $n$, the number of observations in the sample.
Constraint: ${\mathbf{n}}\ge 3$.
2:     x[n] – const double – Input
On entry: the sample observations ${x}_{1},{x}_{2},\dots ,{x}_{n}$.
Constraint: the sample observations supplied must be consistent, in the usual manner, with the null distribution chosen, as specified by the arguments dist and par. For further details see Section 8.
3:     dist – Nag_Distributions – Input
On entry: the theoretical (null) distribution from which it is suspected the data may arise.
${\mathbf{dist}}=\mathrm{Nag_Uniform}$
The uniform distribution over $\left(a,b\right)$, $U\left(a,b\right)$.
${\mathbf{dist}}=\mathrm{Nag_Normal}$
The Normal distribution with mean $\mu$ and variance ${\sigma }^{2}$, $N\left(\mu ,{\sigma }^{2}\right)$.
${\mathbf{dist}}=\mathrm{Nag_Gamma}$
The gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$, where the mean $=\alpha \beta$.
${\mathbf{dist}}=\mathrm{Nag_Beta}$
The beta distribution with shape parameters $\alpha$ and $\beta$, where the mean $=\alpha /\left(\alpha +\beta \right)$.
${\mathbf{dist}}=\mathrm{Nag_Binomial}$
The binomial distribution with the number of trials, $m$, and the probability of a success, $p$.
${\mathbf{dist}}=\mathrm{Nag_Exponential}$
The exponential distribution with parameter $\lambda$, where the mean $=1/\lambda$.
${\mathbf{dist}}=\mathrm{Nag_Poisson}$
The Poisson distribution with parameter $\mu$, where the mean $=\mu$.
${\mathbf{dist}}=\mathrm{Nag_NegBinomial}$
The negative binomial distribution with the number of trials, $m$, and the probability of success, $p$.
${\mathbf{dist}}=\mathrm{Nag_GenPareto}$
The generalized Pareto distribution with shape parameter $\epsilon$ and scale $\beta$.
Constraint: ${\mathbf{dist}}=\mathrm{Nag_Uniform}$, $\mathrm{Nag_Normal}$, $\mathrm{Nag_Gamma}$, $\mathrm{Nag_Beta}$, $\mathrm{Nag_Binomial}$, $\mathrm{Nag_Exponential}$, $\mathrm{Nag_Poisson}$, $\mathrm{Nag_NegBinomial}$ or $\mathrm{Nag_GenPareto}$.
4:     par[$2$] – double – Input/Output
On entry: if ${\mathbf{estima}}=\mathrm{Nag_ParaSupplied}$, par must contain the known values of the parameter(s) of the null distribution as follows.
If a uniform distribution is used, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the boundaries $a$ and $b$ respectively.
If a Normal distribution is used, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the mean, $\mu$, and the variance, ${\sigma }^{2}$, respectively.
If a gamma distribution is used, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the parameters $\alpha$ and $\beta$ respectively.
If a beta distribution is used, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the parameters $\alpha$ and $\beta$ respectively.
If a binomial distribution is used, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the parameters $m$ and $p$ respectively.
If an exponential distribution is used, then ${\mathbf{par}}\left[0\right]$ must contain the parameter $\lambda$.
If a Poisson distribution is used, then ${\mathbf{par}}\left[0\right]$ must contain the parameter $\mu$.
If a negative binomial distribution is used, ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the parameters $m$ and $p$ respectively.
If a generalized Pareto distribution is used, ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ must contain the parameters $\epsilon$ and $\beta$ respectively.
If ${\mathbf{estima}}=\mathrm{Nag_ParaEstimated}$, par need not be set except when the null distribution requested is either the binomial or the negative binomial distribution in which case ${\mathbf{par}}\left[0\right]$ must contain the parameter $m$.
On exit: if ${\mathbf{estima}}=\mathrm{Nag_ParaSupplied}$, par is unchanged. If ${\mathbf{estima}}=\mathrm{Nag_ParaEstimated}$, then ${\mathbf{par}}\left[0\right]$ and ${\mathbf{par}}\left[1\right]$ are set to values as estimated from the data.
Constraints:
• if ${\mathbf{dist}}=\mathrm{Nag_Uniform}$, ${\mathbf{par}}\left[0\right]<{\mathbf{par}}\left[1\right]$;
• if ${\mathbf{dist}}=\mathrm{Nag_Normal}$, ${\mathbf{par}}\left[1\right]>0.0$;
• if ${\mathbf{dist}}=\mathrm{Nag_Gamma}$, ${\mathbf{par}}\left[0\right]>0.0$ and ${\mathbf{par}}\left[1\right]>0.0$;
• if ${\mathbf{dist}}=\mathrm{Nag_Beta}$, ${\mathbf{par}}\left[0\right]>0.0$ and ${\mathbf{par}}\left[1\right]>0.0$ and ${\mathbf{par}}\left[0\right]\le {10}^{6}$ and ${\mathbf{par}}\left[1\right]\le {10}^{6}$;
• if ${\mathbf{dist}}=\mathrm{Nag_Binomial}$, ${\mathbf{par}}\left[0\right]\ge 1.0$ and $0.0<{\mathbf{par}}\left[1\right]<1.0$ and ${\mathbf{par}}\left[0\right]×{\mathbf{par}}\left[1\right]×\left(1.0-{\mathbf{par}}\left[1\right]\right)\le {10}^{6}$ and ${\mathbf{par}}\left[0\right]<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision, see nag_machine_precision (X02AJC);
• if ${\mathbf{dist}}=\mathrm{Nag_Exponential}$, ${\mathbf{par}}\left[0\right]>0.0$;
• if ${\mathbf{dist}}=\mathrm{Nag_Poisson}$, ${\mathbf{par}}\left[0\right]>0.0$ and ${\mathbf{par}}\left[0\right]\le {10}^{6}$;
• if ${\mathbf{dist}}=\mathrm{Nag_NegBinomial}$, ${\mathbf{par}}\left[0\right]\ge 1.0$ and $0.0<{\mathbf{par}}\left[1\right]<1.0$ and ${\mathbf{par}}\left[0\right]×{\mathbf{par}}\left[1\right]×\left(1.0-{\mathbf{par}}\left[1\right]\right)\le {10}^{6}$ and ${\mathbf{par}}\left[0\right]<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision, see nag_machine_precision (X02AJC);
• if ${\mathbf{dist}}=\mathrm{Nag_GenPareto}$, ${\mathbf{par}}\left[1\right]>0$.
5:     estima – Nag_ParaEstimates – Input
On entry: estima must specify whether values of the parameters of the null distribution are known or are to be estimated from the data.
${\mathbf{estima}}=\mathrm{Nag_ParaSupplied}$
Values of the parameters will be supplied in the array par described above.
${\mathbf{estima}}=\mathrm{Nag_ParaEstimated}$
Parameters are to be estimated from the data, except when the null distribution requested is the binomial or the negative binomial distribution, in which case the first parameter, $m$, must be supplied in ${\mathbf{par}}\left[0\right]$ and only the second parameter, $p$, is estimated from the data.
Constraint: ${\mathbf{estima}}=\mathrm{Nag_ParaSupplied}$ or $\mathrm{Nag_ParaEstimated}$.
6:     dtype – Nag_TestStatistics – Input
On entry: the test statistic to be calculated, i.e., the choice of alternative hypothesis.
${\mathbf{dtype}}=\mathrm{Nag_TestStatisticsDAbs}$
Computes ${D}_{n}$, to test ${H}_{0}$ against ${H}_{1}$;
${\mathbf{dtype}}=\mathrm{Nag_TestStatisticsDPos}$
Computes ${D}_{n}^{+}$, to test ${H}_{0}$ against ${H}_{2}$;
${\mathbf{dtype}}=\mathrm{Nag_TestStatisticsDNeg}$
Computes ${D}_{n}^{-}$, to test ${H}_{0}$ against ${H}_{3}$.
Constraint: ${\mathbf{dtype}}=\mathrm{Nag_TestStatisticsDAbs}$, $\mathrm{Nag_TestStatisticsDPos}$ or $\mathrm{Nag_TestStatisticsDNeg}$.
7:     d – double * – Output
On exit: the Kolmogorov–Smirnov test statistic (${D}_{n}$, ${D}_{n}^{+}$ or ${D}_{n}^{-}$ according to the value of dtype).
8:     z – double * – Output
On exit: a standardized value, $Z$, of the test statistic, $D$, without any correction for continuity.
9:     p – double * – Output
On exit: the probability, $p$, associated with the observed value of $D$ where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the value of dtype (see Section 3).
10:   fail – NagError * – Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, dist had an illegal value.
On entry, dtype had an illegal value.
On entry, estima had an illegal value.
NE_G08CB_DATA
The data supplied in x could not arise from the chosen null distribution, as specified by the arguments dist and par.
NE_G08CB_INCOMP_GAMMA
On entry, ${\mathbf{dist}}=\mathrm{Nag_Gamma}$, and in the computation of the incomplete gamma function by nag_incomplete_gamma (s14bac) the convergence of the Taylor series or Legendre continued fraction fails within $600$ iterations.
NE_G08CB_PARAM
On entry, the parameters supplied for the specified null distribution are out of range. This error will only occur if ${\mathbf{estima}}=\mathrm{Nag_ParaSupplied}$.
NE_G08CB_SAMPLE
On entry, ${\mathbf{dist}}=\mathrm{Nag_Uniform}$, $\mathrm{Nag_Normal}$, $\mathrm{Nag_Gamma}$, $\mathrm{Nag_Beta}$ or $\mathrm{Nag_GenPareto}$, ${\mathbf{estima}}=\mathrm{Nag_ParaEstimated}$ and the whole sample is constant. Thus the variance is zero.
NE_G08CB_VARIANCE
The variance $m×p×\left(1-p\right)$ of the binomial distribution exceeds $1000000$. $m={\mathbf{par}}\left[0\right]=〈\mathit{\text{value}}〉$ and $p={\mathbf{par}}\left[1\right]=〈\mathit{\text{value}}〉$.
The variance of the data x is too small for the generalized Pareto distribution (${\mathbf{dist}}=\mathrm{Nag_GenPareto}$).
The variance of the negative binomial distribution (${\mathbf{dist}}=\mathrm{Nag_NegBinomial}$) is too large. That is $m\left(1-p\right)/{p}^{2}>\text{1.0e6}$.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 3$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

## 7  Accuracy

The approximation for $p$, given when $n>100$, has a relative error of at most 2.5% for most cases. The two-sided probability is approximated by doubling the one-sided probability. This is only good for small $p$, i.e., $p<0.10$, but very poor for large $p$. The error is always on the conservative side; that is, the tail probability, $p$, is overestimated.

## 8  Further Comments

The time taken by nag_1_sample_ks_test (g08cbc) increases with $n$ until $n>100$, at which point it drops and then increases slowly with $n$. The time may also depend on the choice of null distribution and on whether or not the parameters are to be estimated.
The data supplied in the argument x must be consistent with the chosen null distribution as follows:
• when ${\mathbf{dist}}=\mathrm{Nag_Uniform}$, then ${\mathbf{par}}\left[0\right]\le {x}_{i}\le {\mathbf{par}}\left[1\right]$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_Normal}$, then there are no constraints on the ${x}_{i}$'s;
• when ${\mathbf{dist}}=\mathrm{Nag_Gamma}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_Beta}$, then $0.0\le {x}_{i}\le 1.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_Binomial}$, then $0.0\le {x}_{i}\le {\mathbf{par}}\left[0\right]$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_Exponential}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_Poisson}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_NegBinomial}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_GenPareto}$ and ${\mathbf{par}}\left[0\right]\ge 0.0$, then ${x}_{\mathit{i}}\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$;
• when ${\mathbf{dist}}=\mathrm{Nag_GenPareto}$ and ${\mathbf{par}}\left[0\right]<0.0$, then $0.0\le {x}_{\mathit{i}}\le -{\mathbf{par}}\left[1\right]/{\mathbf{par}}\left[0\right]$, for $\mathit{i}=1,2,\dots ,n$.

## 9  Example

The following example program reads in a set of data consisting of 30 observations. The Kolmogorov–Smirnov test is then applied twice, first to test whether the sample is taken from a uniform distribution, $U\left(0,2\right)$, and second to test whether the sample is taken from a Normal distribution where the mean and variance are estimated from the data. In both cases we are testing against ${H}_{1}$; that is, we are performing a two-tailed test. The values of d, z and p are printed for each case.

### 9.1  Program Text

Program Text (g08cbce.c)

### 9.2  Program Data

Program Data (g08cbce.d)

### 9.3  Program Results

Program Results (g08cbce.r)