

# NAG Toolbox: nag_mv_discrim_group (g03dc)

## Purpose

nag_mv_discrim_group (g03dc) allocates observations to groups according to selected rules. It is intended for use after nag_mv_discrim (g03da).

## Syntax

[prior, p, iag, ati, ifail] = g03dc(typ, equal, priors, nig, gmn, gc, det, isx, x, prior, atiq, 'nvar', nvar, 'ng', ng, 'nobs', nobs, 'm', m)
[prior, p, iag, ati, ifail] = nag_mv_discrim_group(typ, equal, priors, nig, gmn, gc, det, isx, x, prior, atiq, 'nvar', nvar, 'ng', ng, 'nobs', nobs, 'm', m)
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 22: nobs has been made optional.

## Description

Discriminant analysis is concerned with the allocation of observations to groups using information from other observations whose group membership is known, $X_t$; these are called the training set. Consider $p$ variables observed on $n_g$ populations or groups. Let $\bar{x}_j$ be the sample mean and $S_j$ the within-group variance-covariance matrix for the $j$th group; these are calculated from a training set of $n$ observations with $n_j$ observations in the $j$th group. Let $x_k$ be the $k$th observation from the set of observations to be allocated to the $n_g$ groups. The observation can be allocated to a group according to a selected rule. The allocation rule, or discriminant function, is based on the distance of the observation from an estimate of the location of the groups, usually the group means. A measure of the distance of the observation from the $j$th group mean is given by the Mahalanobis distance, $D_{kj}$:
 $D_{kj}^2 = (x_k - \bar{x}_j)^{\mathrm{T}} S_j^{-1} (x_k - \bar{x}_j).$ (1)
If the pooled estimate of the variance-covariance matrix $S$ is used rather than the within-group variance-covariance matrices, then the distance is:
 $D_{kj}^2 = (x_k - \bar{x}_j)^{\mathrm{T}} S^{-1} (x_k - \bar{x}_j).$ (2)
Instead of using the variance-covariance matrices $S$ and $S_j$, nag_mv_discrim_group (g03dc) uses the upper triangular matrices $R$ and $R_j$ supplied by nag_mv_discrim (g03da) such that $S = R^{\mathrm{T}}R$ and $S_j = R_j^{\mathrm{T}}R_j$. $D_{kj}^2$ can then be calculated as $z^{\mathrm{T}}z$, where $R_j^{\mathrm{T}}z = (x_k - \bar{x}_j)$ or $R^{\mathrm{T}}z = (x_k - \bar{x}_j)$ as appropriate.
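The factorized form of the distance can be sketched in a few lines. The following Python fragment (an illustration of the formula only, not part of the toolbox interface; the function name is hypothetical) computes $D_{kj}^2 = z^{\mathrm{T}}z$ by forward substitution on $R^{\mathrm{T}}z = x_k - \bar{x}_j$, where $R$ is the upper triangular factor with $S = R^{\mathrm{T}}R$:

```python
import math

def mahalanobis_sq(x, xbar, R):
    """Squared Mahalanobis distance D^2 = z'z, with R' z = (x - xbar).
    R is upper triangular, so R' is lower triangular and the system is
    solved by forward substitution. (Illustrative sketch only.)"""
    p = len(x)
    d = [x[i] - xbar[i] for i in range(p)]
    z = [0.0] * p
    for i in range(p):
        # (R')[i][k] = R[k][i]: accumulate the already-solved components
        s = sum(R[k][i] * z[k] for k in range(i))
        z[i] = (d[i] - s) / R[i][i]
    return sum(zi * zi for zi in z)

# Example: S = [[4, 2], [2, 3]] has upper triangular factor
# R = [[2, 1], [0, sqrt(2)]] with S = R'R.
R = [[2.0, 1.0], [0.0, math.sqrt(2.0)]]
D2 = mahalanobis_sq([3.0, 1.0], [1.0, 0.0], R)
```

Working with $R$ rather than $S^{-1}$ avoids forming an explicit inverse and reuses the factorizations already produced by nag_mv_discrim (g03da).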
In addition to the distances, a set of prior probabilities of group membership, $\pi_j$, for $j = 1, 2, \ldots, n_g$, may be used, with $\sum \pi_j = 1$. The prior probabilities reflect your view as to the likelihood of the observations coming from the different groups. Two common cases for prior probabilities are $\pi_1 = \pi_2 = \cdots = \pi_{n_g}$, that is, equal prior probabilities, and $\pi_j = n_j/n$, for $j = 1, 2, \ldots, n_g$, that is, prior probabilities proportional to the number of observations in the groups in the training set.
nag_mv_discrim_group (g03dc) uses one of four allocation rules. In all four rules the $p$ variables are assumed to follow a multivariate Normal distribution with mean $\mu_j$ and variance-covariance matrix $\Sigma_j$ if the observation comes from the $j$th group. The different rules depend on whether or not the within-group variance-covariance matrices are assumed equal, i.e., $\Sigma_1 = \Sigma_2 = \cdots = \Sigma_{n_g}$, and whether a predictive or estimative approach is used. If $p(x_k \mid \mu_j, \Sigma_j)$ is the probability of observing the observation $x_k$ from group $j$, then the posterior probability of belonging to group $j$ is:
 $p(j \mid x_k, \mu_j, \Sigma_j) \propto p(x_k \mid \mu_j, \Sigma_j)\,\pi_j.$ (3)
In the estimative approach, the parameters $\mu_j$ and $\Sigma_j$ in (3) are replaced by their estimates calculated from $X_t$. In the predictive approach, a non-informative prior distribution is used for the parameters and a posterior distribution for the parameters, $p(\mu_j, \Sigma_j \mid X_t)$, is found. A predictive distribution is then obtained by integrating $p(j \mid x_k, \mu_j, \Sigma_j)\,p(\mu_j, \Sigma_j \mid X_t)$ over the parameter space. This predictive distribution then replaces $p(x_k \mid \mu_j, \Sigma_j)$ in (3). See Aitchison and Dunsmore (1975), Aitchison et al. (1977) and Moran and Murphy (1979) for further details.
The observation is allocated to the group with the highest posterior probability. Denoting the posterior probabilities, $p(j \mid x_k, \mu_j, \Sigma_j)$, by $q_j$, the four allocation rules are:
(i) Estimative with equal variance-covariance matrices – Linear Discrimination
 $\log q_j \propto -\tfrac{1}{2}D_{kj}^2 + \log \pi_j$
(ii) Estimative with unequal variance-covariance matrices – Quadratic Discrimination
 $\log q_j \propto -\tfrac{1}{2}D_{kj}^2 + \log \pi_j - \tfrac{1}{2}\log|S_j|$
(iii) Predictive with equal variance-covariance matrices
 $q_j^{-1} \propto \left(\frac{n_j+1}{n_j}\right)^{p/2}\left\{1 + \frac{n_j}{(n-n_g)(n_j+1)}D_{kj}^2\right\}^{(n+1-n_g)/2}$
(iv) Predictive with unequal variance-covariance matrices
 $q_j^{-1} \propto C\left\{\frac{n_j^2-1}{n_j}|S_j|^{1/p}\right\}^{p/2}\left\{1 + \frac{n_j}{n_j^2-1}D_{kj}^2\right\}^{n_j/2},$
where
 $C = \frac{\Gamma\left(\tfrac{1}{2}(n_j-p)\right)}{\Gamma\left(\tfrac{1}{2}n_j\right)}.$
In the above the appropriate value of $D_{kj}^2$ from (1) or (2) is used. The values of the $q_j$ are standardized so that
 $\sum_{j=1}^{n_g} q_j = 1.$
Moran and Murphy (1979) show the similarity between the predictive methods and methods based upon likelihood ratio tests.
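As a concrete sketch of rule (i), the following Python fragment (an illustration of the formula above, not the NAG implementation; the function name is hypothetical) forms $\log q_j \propto -\tfrac{1}{2}D_{kj}^2 + \log \pi_j$ and then standardizes the $q_j$ to sum to one:

```python
import math

def allocate_linear(d2, prior):
    """Estimative linear discrimination (rule (i)):
    log q_j is proportional to -D_kj^2/2 + log pi_j; the q_j are then
    standardized so that they sum to one. (Illustrative sketch only.)"""
    logq = [-0.5 * t + math.log(p) for t, p in zip(d2, prior)]
    m = max(logq)                       # shift by the maximum for stability
    q = [math.exp(v - m) for v in logq]
    s = sum(q)
    return [v / s for v in q]

# Equal priors; distances 1, 4 and 9 from three group means:
# the nearest group receives the highest posterior probability.
post = allocate_linear([1.0, 4.0, 9.0], [1/3, 1/3, 1/3])
```

Subtracting the largest $\log q_j$ before exponentiating leaves the standardized probabilities unchanged but avoids underflow when the distances are large.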
In addition to allocating the observation to a group, nag_mv_discrim_group (g03dc) computes an atypicality index, $I_j(x_k)$. The predictive atypicality index is returned, irrespective of the value of the parameter typ. This represents the probability of obtaining an observation more typical of group $j$ than the observed $x_k$ (see Aitchison and Dunsmore (1975) and Aitchison et al. (1977)). The atypicality index is computed for unequal within-group variance-covariance matrices as:
 $I_j(x_k) = P\left(B \le z : \tfrac{1}{2}p, \tfrac{1}{2}(n_j - p)\right)$
where $P(B \le \beta : a, b)$ is the lower tail probability from a beta distribution and
 $z = D_{kj}^2 / \left(D_{kj}^2 + (n_j^2 - 1)/n_j\right),$
and for equal within-group variance-covariance matrices as:
 $I_j(x_k) = P\left(B \le z : \tfrac{1}{2}p, \tfrac{1}{2}(n - n_g - p + 1)\right),$
with
 $z = D_{kj}^2 / \left(D_{kj}^2 + (n - n_g)(n_j + 1)/n_j\right).$
If $I_j(x_k)$ is close to $1$ for all groups, it indicates that the observation may come from a grouping not represented in the training set. Moran and Murphy (1979) provide a frequentist interpretation of $I_j(x_k)$.
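The atypicality computation can be sketched as follows. For illustration only, the beta lower tail probability is approximated here by midpoint-rule integration of the beta density (in practice a library incomplete-beta routine would be used), and the function names are hypothetical:

```python
import math

def beta_lower_tail(z, a, b, n=20000):
    """P(B <= z : a, b): lower tail probability of a beta distribution,
    approximated by midpoint-rule integration of the density.
    (Illustrative only; not the toolbox's method.)"""
    if z <= 0.0:
        return 0.0
    if z >= 1.0:
        return 1.0
    # 1/B(a, b) via the gamma function
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = z / n
    total = sum(((i + 0.5) * h) ** (a - 1.0) * (1.0 - (i + 0.5) * h) ** (b - 1.0)
                for i in range(n))
    return min(1.0, const * h * total)

def atypicality_unequal(d2, nj, p):
    """Atypicality index for unequal within-group variance-covariance
    matrices: z = D^2 / (D^2 + (nj^2 - 1)/nj), then a beta lower tail
    with parameters p/2 and (nj - p)/2."""
    z = d2 / (d2 + (nj ** 2 - 1.0) / nj)
    return beta_lower_tail(z, 0.5 * p, 0.5 * (nj - p))
```

The equal-covariance case differs only in the expression for $z$ and in the second beta parameter, $\tfrac{1}{2}(n - n_g - p + 1)$.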

## References

Aitchison J and Dunsmore I R (1975) Statistical Prediction Analysis Cambridge
Aitchison J, Habbema J D F and Kay J W (1977) A critical comparison of two methods of statistical discrimination Appl. Statist. 26 15–25
Kendall M G and Stuart A (1976) The Advanced Theory of Statistics (Volume 3) (3rd Edition) Griffin
Krzanowski W J (1990) Principles of Multivariate Analysis Oxford University Press
Moran M A and Murphy B J (1979) A closer look at two alternative methods of statistical discrimination Appl. Statist. 28 223–232
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill

## Parameters

### Compulsory Input Parameters

1:     typ – string (length ≥ 1)
Indicates whether the estimative or predictive approach is used.
typ = 'E'
The estimative approach is used.
typ = 'P'
The predictive approach is used.
Constraint: typ = 'E' or 'P'.
2:     equal – string (length ≥ 1)
Indicates whether or not the within-group variance-covariance matrices are assumed to be equal and the pooled variance-covariance matrix used.
equal = 'E'
The within-group variance-covariance matrices are assumed equal and the matrix R stored in the first p(p+1)/2 elements of gc is used.
equal = 'U'
The within-group variance-covariance matrices are assumed to be unequal and the matrices R_j, for j = 1,2,…,ng, stored in the remainder of gc are used.
Constraint: equal = 'E' or 'U'.
3:     priors – string (length ≥ 1)
Indicates the form of the prior probabilities to be used.
priors = 'E'
Equal prior probabilities are used.
priors = 'P'
Prior probabilities proportional to the group sizes in the training set, n_j, are used.
priors = 'I'
The prior probabilities are input in prior.
Constraint: priors = 'E', 'I' or 'P'.
4:     nig(ng) – int64/int32/nag_int array
ng, the dimension of the array, must satisfy the constraint ng ≥ 2.
The number of observations in each group in the training set, n_j.
Constraints:
• if equal = 'E', nig(j) > 0, for j = 1,2,…,ng, and ∑_{j=1}^{ng} nig(j) > ng + nvar;
• if equal = 'U', nig(j) > nvar, for j = 1,2,…,ng.
5:     gmn(ldgmn,nvar) – double array
ldgmn, the first dimension of the array, must satisfy the constraint ldgmn ≥ ng.
The jth row of gmn contains the means of the p variables for the jth group, for j = 1,2,…,ng. These are returned by nag_mv_discrim (g03da).
6:     gc((ng+1) × nvar × (nvar+1)/2) – double array
The first p(p+1)/2 elements of gc should contain the upper triangular matrix R and the next ng blocks of p(p+1)/2 elements should contain the upper triangular matrices R_j.
All matrices must be stored packed by column. These matrices are returned by nag_mv_discrim (g03da). If equal = 'E', only the first p(p+1)/2 elements are referenced; if equal = 'U', only the elements p(p+1)/2 + 1 to (ng+1)p(p+1)/2 are referenced.
Constraints:
• if equal = 'E', the diagonal elements of R must be ≠ 0.0;
• if equal = 'U', the diagonal elements of the R_j must be ≠ 0.0, for j = 1,2,…,ng.
7:     det(ng) – double array
ng, the dimension of the array, must satisfy the constraint ng ≥ 2.
If equal = 'U', the logarithms of the determinants of the within-group variance-covariance matrices, as returned by nag_mv_discrim (g03da). Otherwise det is not referenced.
8:     isx(m) – int64/int32/nag_int array
m, the dimension of the array, must satisfy the constraint m ≥ nvar.
isx(l) indicates whether the lth variable in x is to be included in the distance calculations.
If isx(l) > 0, the lth variable is included, for l = 1,2,…,m; otherwise the lth variable is not referenced.
Constraint: isx(l) > 0 for nvar values of l.
9:     x(ldx,m) – double array
ldx, the first dimension of the array, must satisfy the constraint ldx ≥ nobs.
x(k,l) must contain the kth observation for the lth variable, for k = 1,2,…,nobs and l = 1,2,…,m.
10:   prior(ng) – double array
ng, the dimension of the array, must satisfy the constraint ng ≥ 2.
If priors = 'I', the prior probabilities for the ng groups.
Constraint: if priors = 'I', prior(j) > 0.0, for j = 1,2,…,ng, and |1 − ∑_{j=1}^{ng} prior(j)| ≤ 10 × machine precision.
11:   atiq – logical scalar
atiq must be true if atypicality indices are required. If atiq is false the array ati is not set.

### Optional Input Parameters

1:     nvar – int64/int32/nag_int scalar
Default: the second dimension of the array gmn.
p, the number of variables in the variance-covariance matrices.
Constraint: nvar ≥ 1.
2:     ng – int64/int32/nag_int scalar
Default: the dimension of the arrays nig, det, prior and the first dimension of the array gmn. (An error is raised if these dimensions are not equal.)
The number of groups, ng.
Constraint: ng ≥ 2.
3:     nobs – int64/int32/nag_int scalar
Default: the first dimension of the array x.
The number of observations in x which are to be allocated.
Constraint: nobs ≥ 1.
4:     m – int64/int32/nag_int scalar
Default: the dimension of the array isx and the second dimension of the array x. (An error is raised if these dimensions are not equal.)
The number of variables in the data array x.
Constraint: m ≥ nvar.


### Output Parameters

1:     prior(ng) – double array
If priors = 'P', the computed prior probabilities in proportion to group sizes for the ng groups.
If priors = 'I', the input prior probabilities will be unchanged.
If priors = 'E', prior is not set.
2:     p(ldp,ng) – double array
ldp ≥ nobs.
p(k,j) contains the posterior probability p_kj for allocating the kth observation to the jth group, for k = 1,2,…,nobs and j = 1,2,…,ng.
3:     iag(nobs) – int64/int32/nag_int array
The groups to which the observations have been allocated.
4:     ati(ldp,:) – double array
The first dimension of the array ati will be nobs.
The second dimension of the array will be ng if atiq = true, and at least 1 otherwise.
ldp ≥ nobs.
If atiq is true, ati(k,j) will contain the predictive atypicality index for the kth observation with respect to the jth group, for k = 1,2,…,nobs and j = 1,2,…,ng.
If atiq is false, ati is not set.
5:     ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = 1
 On entry, nvar < 1, or ng < 2, or nobs < 1, or m < nvar, or ldgmn < ng, or ldx < nobs, or ldp < nobs, or typ ≠ 'E' or 'P', or equal ≠ 'E' or 'U', or priors ≠ 'E', 'I' or 'P'.
ifail = 2
 On entry, the number of variables indicated by isx is not equal to nvar, or equal = 'E' and nig(j) ≤ 0 for some j, or equal = 'E' and ∑_{j=1}^{ng} nig(j) ≤ ng + nvar, or equal = 'U' and nig(j) ≤ nvar for some j.
ifail = 3
 On entry, priors = 'I' and prior(j) ≤ 0.0 for some j, or priors = 'I' and ∑_{j=1}^{ng} prior(j) is not within 10 × machine precision of 1.
ifail = 4
 On entry, equal = 'E' and a diagonal element of R is zero, or equal = 'U' and a diagonal element of R_j is zero for some j.

## Accuracy

The accuracy of the returned posterior probabilities will depend on the accuracy of the input R$R$ or Rj${R}_{j}$ matrices. The atypicality index should be accurate to four significant places.

## Further Comments

The distances $D_{kj}^2$ can be computed using nag_mv_discrim_mahal (g03db) if other forms of discrimination are required.

## Example

function nag_mv_discrim_group_example
typ = 'P';
equal = 'U';
priors = 'Equal priors';
nig = [int64(6);10;5];
gmean = [1.0433, -0.6034166666666667;
2.00727, -0.20604;
2.70974, 1.5998];
gc = [-0.5099642881287538;
-0.279705472386133;
-1.217327847040481;
-0.3326727521153484;
-0.3723518779712077;
-1.987589395382754;
-0.4603014906920608;
-0.7041634974247672;
0.4737334252803499;
0.7451327720614629;
-0.3251057349548681;
-0.4275545007358186];
det = [-0.8273469064608421;
-3.045968198109008;
-2.287732741158105];
isx = [int64(1);1];
x = [1.6292, -0.9163;
2.5572, 1.6094;
2.5649, -0.2231;
0.9555, -2.3026;
3.4012, -2.3026;
3.0204, -0.2231];
prior = zeros(3, 1);
atiq = true;
[priorOut, p, iag, ati, ifail] = ...
nag_mv_discrim_group(typ, equal, priors, nig, gmean, gc, det, isx, x, prior, atiq)

priorOut =

0
0
0

p =

0.0939    0.9046    0.0015
0.0047    0.1682    0.8270
0.0186    0.9196    0.0618
0.6969    0.3026    0.0005
0.3174    0.0130    0.6696
0.0323    0.3664    0.6013

iag =

2
3
2
1
3
3

ati =

0.5956    0.2539    0.9747
0.9519    0.8360    0.0184
0.9540    0.7966    0.9122
0.2073    0.8599    0.9929
0.9908    0.9999    0.9843
0.9807    0.9779    0.8871

ifail =

0
