# NAG FL Interface: g07bbf (estim_normal)


## 1 Purpose

g07bbf computes maximum likelihood estimates and their standard errors for parameters of the Normal distribution from grouped and/or censored data.

## 2 Specification

Fortran Interface

```fortran
Subroutine g07bbf ( method, n, x, xc, ic, xmu, xsig, tol, maxit, sexmu, sexsig, corr, dev, nobs, nit, wk, ifail)
Integer, Intent (In)               :: n, ic(n), maxit
Integer, Intent (Inout)            :: ifail
Integer, Intent (Out)              :: nobs(4), nit
Real (Kind=nag_wp), Intent (In)    :: x(n), xc(n), tol
Real (Kind=nag_wp), Intent (Inout) :: xmu, xsig
Real (Kind=nag_wp), Intent (Out)   :: sexmu, sexsig, corr, dev, wk(2*n)
Character (1), Intent (In)         :: method
```

C Header Interface

```c
#include <nag.h>
void g07bbf_ (const char *method, const Integer *n, const double x[], const double xc[], const Integer ic[], double *xmu, double *xsig, const double *tol, const Integer *maxit, double *sexmu, double *sexsig, double *corr, double *dev, Integer nobs[], Integer *nit, double wk[], Integer *ifail, const Charlen length_method)
```
The routine may be called by the names g07bbf or nagf_univar_estim_normal.

## 3 Description

A sample of size $n$ is taken from a Normal distribution with mean $\mu$ and variance ${\sigma }^{2}$ and consists of grouped and/or censored data. Each of the $n$ observations is known by a pair of values $\left({L}_{i},{U}_{i}\right)$ such that:
 $L_i \le x_i \le U_i .$
The data is represented as particular cases of this form:
• exactly specified observations occur when ${L}_{i}={U}_{i}={x}_{i}$,
• right-censored observations, known only by a lower bound, occur when ${U}_{i}\to \infty$,
• left-censored observations, known only by an upper bound, occur when ${L}_{i}\to -\infty$,
• and interval-censored observations occur when ${L}_{i}<{x}_{i}<{U}_{i}$.
Let the set $A$ identify the exactly specified observations, sets $B$ and $C$ identify the observations censored on the right and left respectively, and set $D$ identify the observations confined between two finite limits. Also let there be $r$ exactly specified observations, i.e., the number in $A$. The probability density function for the standard Normal distribution is
 $Z(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\tfrac{1}{2}x^2\right), \quad -\infty < x < \infty ,$
and the cumulative distribution function is
 $P(X) = 1 - Q(X) = \int_{-\infty}^{X} Z(x)\,dx .$
The log-likelihood of the sample can be written as:
 $L(\mu,\sigma) = -r\log\sigma - \frac{1}{2}\sum_{A}\left\{(x_i-\mu)/\sigma\right\}^2 + \sum_{B}\log\left(Q(l_i)\right) + \sum_{C}\log\left(P(u_i)\right) + \sum_{D}\log p_i$
where ${p}_{i}=P\left({u}_{i}\right)-P\left({l}_{i}\right)$ and ${u}_{i}=\left({U}_{i}-\mu \right)/\sigma \text{, }{l}_{i}=\left({L}_{i}-\mu \right)/\sigma$.
Let
 $S(x_i) = \frac{Z(x_i)}{Q(x_i)} , \quad S_1(l_i,u_i) = \frac{Z(l_i)-Z(u_i)}{p_i}$
and
 $S_2(l_i,u_i) = \frac{u_i Z(u_i) - l_i Z(l_i)}{p_i} ,$
then the first derivatives of the log-likelihood can be written as:
 $\frac{\partial L(\mu,\sigma)}{\partial\mu} = L_1(\mu,\sigma) = \sigma^{-2}\sum_{A}(x_i-\mu) + \sigma^{-1}\sum_{B} S(l_i) - \sigma^{-1}\sum_{C} S(-u_i) + \sigma^{-1}\sum_{D} S_1(l_i,u_i)$
and
 $\frac{\partial L(\mu,\sigma)}{\partial\sigma} = L_2(\mu,\sigma) = -r\sigma^{-1} + \sigma^{-3}\sum_{A}(x_i-\mu)^2 + \sigma^{-1}\sum_{B} l_i S(l_i) - \sigma^{-1}\sum_{C} u_i S(-u_i) - \sigma^{-1}\sum_{D} S_2(l_i,u_i) .$
The maximum likelihood estimates, $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$, are the solution to the equations:
 $L_1(\stackrel{^}{\mu},\stackrel{^}{\sigma}) = 0$ (1)
and
 $L_2(\stackrel{^}{\mu},\stackrel{^}{\sigma}) = 0$ (2)
and if the second derivatives $\frac{{\partial }^{2}L}{\partial {\mu }^{2}}$, $\frac{{\partial }^{2}L}{\partial \mu \partial \sigma }$ and $\frac{{\partial }^{2}L}{\partial {\sigma }^{2}}$ are denoted by ${L}_{11}$, ${L}_{12}$ and ${L}_{22}$ respectively, then estimates of the standard errors of $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$ are given by:
 $se(\stackrel{^}{\mu}) = \sqrt{\frac{-L_{22}}{L_{11}L_{22}-L_{12}^2}} , \quad se(\stackrel{^}{\sigma}) = \sqrt{\frac{-L_{11}}{L_{11}L_{22}-L_{12}^2}}$
and an estimate of the correlation of $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$ is given by:
 $\frac{L_{12}}{\sqrt{L_{11}L_{22}}} .$
To obtain the maximum likelihood estimates the equations (1) and (2) can be solved using either the Newton–Raphson method or the Expectation-maximization $\left(EM\right)$ algorithm of Dempster et al. (1977).
Newton–Raphson Method
This consists of using approximate estimates $\stackrel{~}{\mu }$ and $\stackrel{~}{\sigma }$ to obtain improved estimates $\stackrel{~}{\mu }+\delta \stackrel{~}{\mu }$ and $\stackrel{~}{\sigma }+\delta \stackrel{~}{\sigma }$ by solving
 $\delta\stackrel{~}{\mu}\,L_{11} + \delta\stackrel{~}{\sigma}\,L_{12} + L_1 = 0 , \quad \delta\stackrel{~}{\mu}\,L_{12} + \delta\stackrel{~}{\sigma}\,L_{22} + L_2 = 0 ,$
for the corrections $\delta \stackrel{~}{\mu }$ and $\delta \stackrel{~}{\sigma }$.
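This $2\times 2$ linear system can be solved by Cramer's rule; a minimal sketch (the helper `nr_step` is illustrative, not part of the NAG interface):

```c
/* One Newton-Raphson correction step: solve
     dmu*L11 + dsig*L12 = -L1
     dmu*L12 + dsig*L22 = -L2
   for the corrections (dmu, dsig) by Cramer's rule. */
void nr_step(double L1, double L2, double L11, double L12, double L22,
             double *dmu, double *dsig)
{
    double det = L11 * L22 - L12 * L12;   /* determinant of the Hessian */
    *dmu  = (-L1 * L22 + L2 * L12) / det;
    *dsig = (-L2 * L11 + L1 * L12) / det;
}
```

The updated estimates are then $\stackrel{~}{\mu} + \delta\stackrel{~}{\mu}$ and $\stackrel{~}{\sigma} + \delta\stackrel{~}{\sigma}$, iterated to convergence.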
EM Algorithm
The expectation step consists of constructing the variable ${w}_{i}$ as follows:
 $\text{if } i\in A, \quad w_i = x_i$ (3)
 $\text{if } i\in B, \quad w_i = E\left(x_i \mid x_i > L_i\right) = \mu + \sigma S(l_i)$ (4)
 $\text{if } i\in C, \quad w_i = E\left(x_i \mid x_i < U_i\right) = \mu - \sigma S(-u_i)$ (5)
 $\text{if } i\in D, \quad w_i = E\left(x_i \mid L_i < x_i < U_i\right) = \mu + \sigma S_1(l_i,u_i)$ (6)
The maximization step consists of substituting (3), (4), (5) and (6) into (1) and (2), giving:
 $\stackrel{^}{\mu} = \sum_{i=1}^{n} \stackrel{^}{w}_i / n$ (7)
and
 $\stackrel{^}{\sigma}^2 = \sum_{i=1}^{n} (\stackrel{^}{w}_i - \stackrel{^}{\mu})^2 \Big/ \left\{ r + \sum_{B} T(\stackrel{^}{l}_i) + \sum_{C} T(-\stackrel{^}{u}_i) + \sum_{D} T_1(\stackrel{^}{l}_i,\stackrel{^}{u}_i) \right\}$ (8)
where
 $T(x) = S(x)\left\{S(x)-x\right\} , \quad T_1(l,u) = S_1^2(l,u) + S_2(l,u)$
and where ${\stackrel{^}{w}}_{i}$, ${\stackrel{^}{l}}_{i}$ and ${\stackrel{^}{u}}_{i}$ are ${w}_{i}$, ${l}_{i}$ and ${u}_{i}$ evaluated at $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$. Equations (3) to (8) are the basis of the $EM$ iterative procedure for finding $\stackrel{^}{\mu }$ and ${\stackrel{^}{\sigma }}^{2}$. The procedure consists of alternately estimating $\stackrel{^}{\mu }$ and ${\stackrel{^}{\sigma }}^{2}$ using (7) and (8) and estimating $\left\{{\stackrel{^}{w}}_{i}\right\}$ using (3) to (6).
In choosing between the two methods a general rule is that the Newton–Raphson method converges more quickly but requires good initial estimates, whereas the $EM$ algorithm converges more slowly but is robust to poor initial values. For the censored Normal distribution, if only a small proportion of the observations are censored, estimates based on the exact observations should provide initial values good enough for the Newton–Raphson method to be used. If a high proportion of the observations are censored, the $EM$ algorithm should be used; if high accuracy is also required, consider refining the estimates obtained from the $EM$ algorithm with a subsequent Newton–Raphson step.
## 4 References

Dempster A P, Laird N M and Rubin D B (1977) Maximum likelihood from incomplete data via the $EM$ algorithm (with discussion) J. Roy. Statist. Soc. Ser. B 39 1–38
Swan A V (1969) Algorithm AS 16. Maximum likelihood estimation from grouped and censored normal data Appl. Statist. 18 110–114
Wolynetz M S (1979) Maximum likelihood estimation from confined and censored normal data Appl. Statist. 28 185–195

## 5 Arguments

1: $\mathbf{method}$ – Character(1) Input
On entry: indicates whether the Newton–Raphson or $EM$ algorithm should be used.
If ${\mathbf{method}}=\text{'N'}$, the Newton–Raphson algorithm is used.
If ${\mathbf{method}}=\text{'E'}$, the $EM$ algorithm is used.
Constraint: ${\mathbf{method}}=\text{'N'}$ or $\text{'E'}$.
2: $\mathbf{n}$ – Integer Input
On entry: $n$, the number of observations.
Constraint: ${\mathbf{n}}\ge 2$.
3: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input
On entry: the observations ${x}_{\mathit{i}}$, ${L}_{\mathit{i}}$ or ${U}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
If the observation is exactly specified – the exact value, ${x}_{i}$.
If the observation is right-censored – the lower value, ${L}_{i}$.
If the observation is left-censored – the upper value, ${U}_{i}$.
If the observation is interval-censored – the lower or upper value, ${L}_{i}$ or ${U}_{i}$, (see xc).
4: $\mathbf{xc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input
On entry: if the $\mathit{j}$th observation, for $\mathit{j}=1,2,\dots ,n$, is interval-censored, then ${\mathbf{xc}}\left(j\right)$ must contain the value complementary to ${\mathbf{x}}\left(j\right)$; that is, if ${\mathbf{x}}\left(j\right)<{\mathbf{xc}}\left(j\right)$ then ${\mathbf{xc}}\left(j\right)$ contains the upper value, ${U}_{j}$, and if ${\mathbf{x}}\left(j\right)>{\mathbf{xc}}\left(j\right)$ then ${\mathbf{xc}}\left(j\right)$ contains the lower value, ${L}_{j}$. If the $j$th observation is exactly specified or right- or left-censored, ${\mathbf{xc}}\left(j\right)$ need not be set.
Note: if ${\mathbf{x}}\left(j\right)={\mathbf{xc}}\left(j\right)$ then the observation is ignored.
5: $\mathbf{ic}\left({\mathbf{n}}\right)$ – Integer array Input
On entry: ${\mathbf{ic}}\left(\mathit{i}\right)$ contains the censoring code for the $\mathit{i}$th observation, for $\mathit{i}=1,2,\dots ,n$.
If ${\mathbf{ic}}\left(i\right)=0$, the observation is exactly specified.
If ${\mathbf{ic}}\left(i\right)=1$, the observation is right-censored.
If ${\mathbf{ic}}\left(i\right)=2$, the observation is left-censored.
If ${\mathbf{ic}}\left(i\right)=3$, the observation is interval-censored.
Constraint: ${\mathbf{ic}}\left(\mathit{i}\right)=0$, $1$, $2$ or $3$, for $\mathit{i}=1,2,\dots ,n$.
6: $\mathbf{xmu}$ – Real (Kind=nag_wp) Input/Output
On entry: if ${\mathbf{xsig}}>0.0$ the initial estimate of the mean, $\mu$; otherwise xmu need not be set.
On exit: the maximum likelihood estimate, $\stackrel{^}{\mu }$, of $\mu$.
7: $\mathbf{xsig}$ – Real (Kind=nag_wp) Input/Output
On entry: specifies whether initial estimates of $\mu$ and $\sigma$ are to be supplied.
${\mathbf{xsig}}>0.0$
xsig is the initial estimate of $\sigma$ and xmu must contain an initial estimate of $\mu$.
${\mathbf{xsig}}\le 0.0$
Initial estimates of xmu and xsig are calculated internally from:
(a) the exact observations, if the number of exactly specified observations is $\text{}\ge 2$; or
(b) the interval-censored observations, if the number of interval-censored observations is $\text{}\ge 1$; or
(c) failing these, they are set to $0.0$ and $1.0$ respectively.
On exit: the maximum likelihood estimate, $\stackrel{^}{\sigma }$, of $\sigma$.
8: $\mathbf{tol}$ – Real (Kind=nag_wp) Input
On entry: the relative precision required for the final estimates of $\mu$ and $\sigma$. Convergence is assumed when the absolute relative changes in the estimates of both $\mu$ and $\sigma$ are less than tol.
If ${\mathbf{tol}}=0.0$, a relative precision of $0.000005$ is used.
Constraint: ${\mathbf{tol}}=0.0$ or ${\mathbf{tol}}\ge \text{machine precision}$.
9: $\mathbf{maxit}$ – Integer Input
On entry: the maximum number of iterations.
If ${\mathbf{maxit}}\le 0$, a value of $25$ is used.
10: $\mathbf{sexmu}$ – Real (Kind=nag_wp) Output
On exit: the estimate of the standard error of $\stackrel{^}{\mu }$.
11: $\mathbf{sexsig}$ – Real (Kind=nag_wp) Output
On exit: the estimate of the standard error of $\stackrel{^}{\sigma }$.
12: $\mathbf{corr}$ – Real (Kind=nag_wp) Output
On exit: the estimate of the correlation between $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$.
13: $\mathbf{dev}$ – Real (Kind=nag_wp) Output
On exit: the maximized log-likelihood, $L\left(\stackrel{^}{\mu },\stackrel{^}{\sigma }\right)$.
14: $\mathbf{nobs}\left(4\right)$ – Integer array Output
On exit: the number of observations of each type:
${\mathbf{nobs}}\left(1\right)$ contains the number of right-censored observations.
${\mathbf{nobs}}\left(2\right)$ contains the number of left-censored observations.
${\mathbf{nobs}}\left(3\right)$ contains the number of interval-censored observations.
${\mathbf{nobs}}\left(4\right)$ contains the number of exactly specified observations.
15: $\mathbf{nit}$ – Integer Output
On exit: the number of iterations performed.
16: $\mathbf{wk}\left(2×{\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Workspace
17: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes an error message to be printed and program execution to be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed, while a value of $1$ means that it is not.
If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, effective number of observations $\text{}<2$.
On entry, $i=⟨\mathit{\text{value}}⟩$ and ${\mathbf{ic}}\left(i\right)=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{ic}}\left(i\right)=0$, $1$, $2$ or $3$.
On entry, ${\mathbf{method}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{method}}=\text{'N'}$ or $\text{'E'}$.
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{tol}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{tol}}=0.0$ or ${\mathbf{tol}}\ge \text{machine precision}$.
${\mathbf{ifail}}=2$
The chosen method has not converged in $⟨\mathit{\text{value}}⟩$ iterations. You should either increase tol or maxit or, if using the $EM$ algorithm, try the Newton–Raphson method with the values returned by the current call to g07bbf as initial values. All returned values will be reasonable approximations to the correct results provided maxit is not very small.
${\mathbf{ifail}}=3$
The EM process has failed. Different initial values should be tried.
The process has diverged. Different initial values should be tried.
${\mathbf{ifail}}=4$
Standard errors cannot be computed. This can be caused by the method starting to diverge when the maximum number of iterations was reached.
${\mathbf{ifail}}=-99$
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

## 7 Accuracy

The accuracy is controlled by the argument tol.
If high precision is requested with the $EM$ algorithm then, because of the algorithm's slow convergence, the increments of $\stackrel{^}{\mu }$ and $\stackrel{^}{\sigma }$ may fall below tol before the correct solution has been reached, and the process will prematurely assume convergence.
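The convergence test described above (absolute relative change in both estimates below tol) can be sketched as follows; the helper name `converged` is illustrative, and the sketch assumes the new estimates are nonzero so the relative change is well defined:

```c
#include <math.h>

/* Relative-change convergence test: converged when the absolute relative
   changes in both the mean and standard-deviation estimates are below tol. */
int converged(double mu_old, double mu_new,
              double sig_old, double sig_new, double tol)
{
    return fabs((mu_new - mu_old) / mu_new) < tol &&
           fabs((sig_new - sig_old) / sig_new) < tol;
}
```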

## 8 Parallelism and Performance

g07bbf is not threaded in any implementation.

## 9 Further Comments

The process is deemed divergent if three successive increments of $\mu$ or $\sigma$ increase.
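This divergence check on three successive increments can be sketched as a small predicate (the helper name `diverging` is illustrative, and the interpretation that the increments must increase in magnitude is an assumption of this sketch):

```c
#include <math.h>

/* Divergence test: reports 1 if three successive increments of a parameter
   estimate strictly increase in magnitude, 0 otherwise. */
int diverging(double inc1, double inc2, double inc3)
{
    return fabs(inc1) < fabs(inc2) && fabs(inc2) < fabs(inc3);
}
```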

## 10 Example

A sample of $18$ observations and their censoring codes are read in and the Newton–Raphson method used to compute the estimates.

### 10.1 Program Text

Program Text (g07bbfe.f90)

### 10.2 Program Data

Program Data (g07bbfe.d)

### 10.3 Program Results

Program Results (g07bbfe.r)