# NAG CL Interface: g03aac (prin_comp)


## 1 Purpose

g03aac performs a principal component analysis on a data matrix; both the principal component loadings and the principal component scores are returned.

## 2 Specification

 #include <nag.h>
 void g03aac (Nag_PrinCompMat pcmatrix, Nag_PrinCompScores scores, Integer n, Integer m, const double x[], Integer tdx, const Integer isx[], double s[], const double wt[], Integer nvar, double e[], Integer tde, double p[], Integer tdp, double v[], Integer tdv, NagError *fail)
The function may be called by the names: g03aac or nag_mv_prin_comp.

## 3 Description

Let $X$ be an $n×p$ data matrix of $n$ observations on $p$ variables ${x}_{1},{x}_{2},\dots ,{x}_{p}$ and let the $p×p$ variance-covariance matrix of ${x}_{1},{x}_{2},\dots ,{x}_{p}$ be $S$. A vector ${a}_{1}$ of length $p$ is found such that:
 $a_1^T S a_1$
is maximized subject to
 $a_1^T a_1 = 1.$
The variable ${z}_{1}={\sum }_{i=1}^{p}{a}_{1i}{x}_{i}$ is known as the first principal component and gives the linear combination of the variables that has the maximum variation. A second principal component, ${z}_{2}={\sum }_{i=1}^{p}{a}_{2i}{x}_{i}$, is found such that:
 $a_2^T S a_2$
is maximized subject to
 $a_2^T a_2 = 1$
and
 $a_2^T a_1 = 0.$
This gives the linear combination of variables, orthogonal to the first principal component, that has the maximum variation. Further principal components are derived in a similar way.
The vectors ${a}_{1},{a}_{2},\dots ,{a}_{p}$, are the eigenvectors of the matrix $S$ and associated with each eigenvector is the eigenvalue, ${\lambda }_{i}^{2}$. The value of ${\lambda }_{i}^{2}/\sum {\lambda }_{i}^{2}$ gives the proportion of variation explained by the $i$th principal component. Alternatively, the ${a}_{i}$'s can be considered as the right singular vectors in a singular value decomposition with singular values ${\lambda }_{i}$ of the data matrix centred about its mean and scaled by $1/\sqrt{\left(n-1\right)}$, ${X}_{s}$. This latter approach is used in g03aac, with
 $X_s = V \Lambda P'$
where $\Lambda$ is a diagonal matrix with elements ${\lambda }_{i}$, ${P}^{\prime }$ is the $p×p$ matrix with columns ${a}_{i}$ and $V$ is an $n×p$ matrix with ${V}^{\prime }V=I$, which gives the principal component scores.
Principal component analysis is often used to reduce the dimension of a dataset, replacing a large number of correlated variables with a smaller number of orthogonal variables that still contain most of the information in the original dataset.
The choice of the number of dimensions required is usually based on the amount of variation accounted for by the leading principal components. If $k$ principal components are selected, then a test of the equality of the remaining $p-k$ eigenvalues is
 $\left(n-(2p+5)/6\right)\left\{-\sum_{i=k+1}^{p}\log\left(\lambda_i^2\right)+(p-k)\log\left(\sum_{i=k+1}^{p}\lambda_i^2/(p-k)\right)\right\}$
which has, asymptotically, a ${\chi }^{2}$ distribution with $\frac{1}{2}\left(p-k-1\right)\left(p-k+2\right)$ degrees of freedom.
Equality of the remaining eigenvalues indicates that if any more principal components are to be considered then they all should be considered.
Instead of the variance-covariance matrix, the correlation matrix, the sums of squares and cross-products matrix or a standardized sums of squares and cross-products matrix may be used. In the last case $S$ is replaced by ${\sigma }^{-1/2}S{\sigma }^{-1/2}$ for a diagonal matrix $\sigma$ with positive elements. If the correlation matrix is used, the ${\chi }^{2}$ approximation for the statistic given above is not valid.
The principal component scores, $F$, are the values of the principal component variables for the observations. These can be standardized so that the variance of these scores for each principal component is $1.0$ or equal to the corresponding eigenvalue.
Weights can be used with the analysis, in which case the matrix $X$ is first centred about the weighted means then each row is scaled by an amount $\sqrt{{w}_{i}}$, where ${w}_{i}$ is the weight for the $i$th observation.
## 4 References

Chatfield C and Collins A J (1980) Introduction to Multivariate Analysis Chapman and Hall
Cooley W C and Lohnes P R (1971) Multivariate Data Analysis Wiley
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Kendall M G and Stuart A (1979) The Advanced Theory of Statistics (3 Volumes) (4th Edition) Griffin
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill

## 5 Arguments

1: $\mathbf{pcmatrix}$ – Nag_PrinCompMat Input
On entry: indicates for which type of matrix the principal component analysis is to be carried out.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$
The analysis is carried out for the correlation matrix.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$
The analysis is carried out for the standardized matrix, with standardizations given by s.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatSumSq}$
The analysis is carried out for the sums of squares and cross-products matrix.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatVarCovar}$
The analysis is carried out for the variance-covariance matrix.
Constraint: ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, $\mathrm{Nag_MatStandardised}$, $\mathrm{Nag_MatSumSq}$ or $\mathrm{Nag_MatVarCovar}$.
2: $\mathbf{scores}$ – Nag_PrinCompScores Input
On entry: specifies the type of principal component scores to be used.
${\mathbf{scores}}=\mathrm{Nag_ScoresStand}$
The principal component scores are standardized so that ${F}^{\prime }F=I$, i.e., $F={X}_{s}P{\Lambda }^{-1}=V$.
${\mathbf{scores}}=\mathrm{Nag_ScoresNotStand}$
The principal component scores are unstandardized, i.e., $F={X}_{s}P=V\Lambda$.
${\mathbf{scores}}=\mathrm{Nag_ScoresUnitVar}$
The principal component scores are standardized so that they have unit variance.
${\mathbf{scores}}=\mathrm{Nag_ScoresEigenval}$
The principal component scores are standardized so that they have variance equal to the corresponding eigenvalue.
Constraint: ${\mathbf{scores}}=\mathrm{Nag_ScoresStand}$, $\mathrm{Nag_ScoresNotStand}$, $\mathrm{Nag_ScoresUnitVar}$ or $\mathrm{Nag_ScoresEigenval}$.
3: $\mathbf{n}$ – Integer Input
On entry: the number of observations, $n$.
Constraint: ${\mathbf{n}}\ge 2$.
4: $\mathbf{m}$ – Integer Input
On entry: the number of variables in the data matrix, $m$.
Constraint: ${\mathbf{m}}\ge 1$.
5: $\mathbf{x}\left[{\mathbf{n}}×{\mathbf{tdx}}\right]$ – const double Input
On entry: ${\mathbf{x}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdx}}+\mathit{j}-1\right]$ must contain the $\mathit{i}$th observation for the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,m$.
6: $\mathbf{tdx}$ – Integer Input
On entry: the stride separating matrix column elements in the array x.
Constraint: ${\mathbf{tdx}}\ge {\mathbf{m}}$.
7: $\mathbf{isx}\left[{\mathbf{m}}\right]$ – const Integer Input
On entry: ${\mathbf{isx}}\left[j-1\right]$ indicates whether or not the $j$th variable is to be included in the analysis. If ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, then the variable contained in the $\mathit{j}$th column of x is included in the principal component analysis, for $\mathit{j}=1,2,\dots ,m$.
Constraint: ${\mathbf{isx}}\left[j-1\right]>0$ for nvar values of $j$.
8: $\mathbf{s}\left[{\mathbf{m}}\right]$ – double Input/Output
On entry: the standardizations to be used, if any.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$, then the first $m$ elements of s must contain the standardization coefficients, the diagonal elements of $\sigma$.
Constraint: if ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, ${\mathbf{s}}\left[\mathit{j}-1\right]>0.0$, for $\mathit{j}=1,2,\dots ,m$.
On exit: if ${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$, then s is unchanged on exit.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, then s contains the variances of the selected variables. ${\mathbf{s}}\left[j-1\right]$ contains the variance of the variable in the $j$th column of x if ${\mathbf{isx}}\left[j-1\right]>0$.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatSumSq}$ or $\mathrm{Nag_MatVarCovar}$, then s is not referenced.
9: $\mathbf{wt}\left[{\mathbf{n}}\right]$ – const double Input
On entry: optionally, the weights to be used in the principal component analysis.
If ${\mathbf{wt}}\left[i-1\right]=0.0$, then the $i$th observation is not included in the analysis. The effective number of observations is the sum of the weights.
If weights are not provided then wt must be set to NULL and the effective number of observations is n.
Constraints:
• if wt is not NULL, ${\mathbf{wt}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$;
• if wt is not NULL, the sum of weights $\ge {\mathbf{nvar}}+1$.
10: $\mathbf{nvar}$ – Integer Input
On entry: the number of variables in the principal component analysis, $p$.
Constraint: $1\le {\mathbf{nvar}}\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}-1,{\mathbf{m}}\right)$.
11: $\mathbf{e}\left[{\mathbf{nvar}}×{\mathbf{tde}}\right]$ – double Output
On exit: the statistics of the principal component analysis.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}\right]$ contains the eigenvalue associated with the $\mathit{i}$th principal component, ${\lambda }_{\mathit{i}}^{2}$, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+1\right]$ contains the proportion of variation explained by the $\mathit{i}$th principal component, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+2\right]$ contains the cumulative proportion of variation explained by the first $\mathit{i}$ principal components, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+3\right]$ contains the ${\chi }^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+4\right]$ contains the degrees of freedom for the ${\chi }^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.
If ${\mathbf{pcmatrix}}\ne \mathrm{Nag_MatCorrelation}$, then ${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+5\right]$ contains the significance level for the ${\chi }^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, then ${\mathbf{e}}\left[\left(i-1\right)×{\mathbf{tde}}+5\right]$ is returned as zero.
12: $\mathbf{tde}$ – Integer Input
On entry: the stride separating matrix column elements in the array e.
Constraint: ${\mathbf{tde}}\ge 6$.
13: $\mathbf{p}\left[{\mathbf{nvar}}×{\mathbf{tdp}}\right]$ – double Output
Note: the $\left(i,j\right)$th element of the matrix $P$ is stored in ${\mathbf{p}}\left[\left(i-1\right)×{\mathbf{tdp}}+j-1\right]$.
On exit: the first nvar columns of p contain the principal component loadings, ${a}_{i}$. The $j$th column of p contains the nvar coefficients for the $j$th principal component.
14: $\mathbf{tdp}$ – Integer Input
On entry: the stride separating matrix column elements in the array p.
Constraint: ${\mathbf{tdp}}\ge {\mathbf{nvar}}$.
15: $\mathbf{v}\left[{\mathbf{n}}×{\mathbf{tdv}}\right]$ – double Output
Note: the $\left(i,j\right)$th element of the matrix $V$ is stored in ${\mathbf{v}}\left[\left(i-1\right)×{\mathbf{tdv}}+j-1\right]$.
On exit: the first nvar columns of v contain the principal component scores. The $j$th column of v contains the n scores for the $j$th principal component.
If weights are supplied in the array wt, then any row of v for which ${\mathbf{wt}}\left[i-1\right]$ is zero will be set to zero.
16: $\mathbf{tdv}$ – Integer Input
On entry: the stride separating matrix column elements in the array v.
Constraint: ${\mathbf{tdv}}\ge {\mathbf{nvar}}$.
17: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_2_INT_ARG_GE
On entry, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{nvar}}<{\mathbf{n}}$.
NE_2_INT_ARG_GT
On entry, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{nvar}}\le {\mathbf{m}}$.
NE_2_INT_ARG_LT
On entry, ${\mathbf{tdp}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdp}}\ge {\mathbf{nvar}}$.
On entry, ${\mathbf{tdv}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdv}}\ge {\mathbf{nvar}}$.
On entry, ${\mathbf{tdx}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdx}}\ge {\mathbf{m}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument pcmatrix had an illegal value.
On entry, argument scores had an illegal value.
NE_INT_ARG_LT
On entry, ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{nvar}}\ge 1$.
On entry, ${\mathbf{tde}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{tde}}\ge 6$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_NEG_WEIGHT_ELEMENT
On entry, ${\mathbf{wt}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
Constraint: when referenced, all elements of wt must be non-negative.
NE_OBSERV_LT_VAR
With weighted data, the effective number of observations given by the sum of weights $\text{}=⟨\mathit{\text{value}}⟩$, while the number of variables included in the analysis, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$.
Constraint: effective number of observations $>{\mathbf{nvar}}+1$.
NE_SVD_NOT_CONV
The singular value decomposition has failed to converge. This is an unlikely error exit.
NE_VAR_INCL_INDICATED
On entry, the number of variables to be included in the analysis, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$, while the number of variables indicated by the array ${\mathbf{isx}}=⟨\mathit{\text{value}}⟩$.
Constraint: these two numbers must be the same.
NE_VAR_INCL_STANDARD
On entry, the standardization element ${\mathbf{s}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$, while the variable to be included ${\mathbf{isx}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
Constraint: when a variable is to be included, the standardization element must be positive.
NE_ZERO_EIGVALS
All eigenvalues/singular values are zero. This will be caused by all the variables being constant.

## 7 Accuracy

As g03aac uses a singular value decomposition of the data matrix, it will be less affected by ill-conditioned problems than traditional methods using the eigenvalue decomposition of the variance-covariance matrix.

## 8 Parallelism and Performance

g03aac is not threaded in any implementation.

## 9 Further Comments

None.

## 10 Example

This example uses a dataset from Cooley and Lohnes (1971) consisting of ten observations on three variables. The unweighted principal components based on the variance-covariance matrix are computed, and unstandardized principal component scores are requested.

### 10.1 Program Text

Program Text (g03aace.c)

### 10.2 Program Data

Program Data (g03aace.d)

### 10.3 Program Results

Program Results (g03aace.r)