Let X be an n by p data matrix of n observations on p variables x_1, x_2, …, x_p and let the p by p variance-covariance matrix of x_1, x_2, …, x_p be S. A vector a_1 of length p is found such that:

  a_1^T S a_1 is maximized subject to a_1^T a_1 = 1.
The variable z_1 = Σ_{i=1}^p a_{1i} x_i is known as the first principal component and gives the linear combination of the variables that gives the maximum variation. A second principal component, z_2 = Σ_{i=1}^p a_{2i} x_i, is found such that:

  a_2^T S a_2 is maximized subject to a_2^T a_2 = 1 and a_2^T a_1 = 0.
This gives the linear combination of variables, orthogonal to the first principal component, that gives the maximum variation. Further principal components are derived in a similar way.
The vectors a_1, a_2, …, a_p are the eigenvectors of the matrix S, and associated with each eigenvector is the eigenvalue λ_i². The value of λ_i² / Σ λ_i² gives the proportion of variation explained by the ith principal component. Alternatively, the a_i's can be considered as the right singular vectors in a singular value decomposition, with singular values λ_i, of the data matrix centred about its mean and scaled by 1/√(n−1), X_s. This latter approach is used in g03aaf, with

  X_s = V Λ P^T
where Λ is a diagonal matrix with elements λ_i, P is the p by p matrix with columns a_i, and V is an n by p matrix with V^T V = I, which gives the principal component scores.
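The SVD formulation above can be sketched in NumPy (an illustration of the technique only, not the g03aaf interface; all variable names here are mine):

```python
import numpy as np

# Centre the data matrix about its column means and scale by 1/sqrt(n - 1);
# the singular values of the result are the lambda_i and the right singular
# vectors are the loadings a_i.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))               # n = 10 observations, p = 3 variables
n, p = X.shape

Xs = (X - X.mean(axis=0)) / np.sqrt(n - 1)     # X_s, centred and scaled
V, lam, Pt = np.linalg.svd(Xs, full_matrices=False)   # X_s = V diag(lam) P^T

eigenvalues = lam**2                           # lambda_i^2, eigenvalues of S
loadings = Pt.T                                # columns are a_1, ..., a_p
scores = Xs @ loadings                         # principal component scores, V diag(lam)

# The lambda_i^2 agree with the eigenvalues of the variance-covariance matrix:
S = np.cov(X, rowvar=False)
assert np.allclose(np.sort(np.linalg.eigvalsh(S))[::-1], eigenvalues)
```

Working from the SVD of X_s rather than the eigendecomposition of S avoids forming S explicitly, which is the numerical advantage noted later in this document.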
Principal component analysis is often used to reduce the dimension of a dataset, replacing a large number of correlated variables with a smaller number of orthogonal variables that still contain most of the information in the original dataset.
The choice of the number of dimensions required is usually based on the amount of variation accounted for by the leading principal components. If k principal components are selected, then a test of the equality of the remaining p − k eigenvalues is

  (n − (2p + 5)/6) { −Σ_{i=k+1}^p log(λ_i²) + (p − k) log( Σ_{i=k+1}^p λ_i² / (p − k) ) }
which has, asymptotically, a χ²-distribution with (1/2)(p − k − 1)(p − k + 2) degrees of freedom.
Equality of the remaining eigenvalues indicates that if any more principal components are to be considered then they all should be considered.
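This Bartlett-type test can be sketched as follows (a minimal illustration, assuming SciPy is available; the function name is mine, not part of the library):

```python
import numpy as np
from scipy.stats import chi2

def remaining_eigenvalue_test(lam2, k, n):
    """Test equality of eigenvalues k+1..p, given eigenvalues lam2 (lambda_i^2)
    and n observations; returns (statistic, degrees of freedom, significance)."""
    lam2 = np.asarray(lam2, dtype=float)
    p = lam2.size
    tail = lam2[k:]                          # the remaining p - k eigenvalues
    stat = (n - (2 * p + 5) / 6.0) * (
        -np.sum(np.log(tail)) + (p - k) * np.log(tail.sum() / (p - k))
    )
    df = 0.5 * (p - k - 1) * (p - k + 2)
    return stat, df, chi2.sf(stat, df)       # upper-tail significance level
```

When the remaining eigenvalues are exactly equal the statistic is zero, and by Jensen's inequality it is never negative.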
Instead of the variance-covariance matrix the correlation matrix, the sums of squares and cross-products matrix or a standardized sums of squares and cross-products matrix may be used. In the last case S is replaced by σ^(−1/2) S σ^(−1/2) for a diagonal matrix σ with positive elements. If the correlation matrix is used, the χ² approximation for the statistic given above is not valid.
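For instance, taking σ to be the diagonal of S in the replacement above yields the correlation matrix, as this quick sketch confirms (illustration only, not the NAG interface):

```python
import numpy as np

# Columns with deliberately unequal scales.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4)) * np.array([1.0, 5.0, 0.5, 2.0])

S = np.cov(X, rowvar=False)
d = np.sqrt(np.diag(S))                 # sigma^(1/2), element-wise
R = S / np.outer(d, d)                  # sigma^(-1/2) S sigma^(-1/2)

assert np.allclose(R, np.corrcoef(X, rowvar=False))
```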
The principal component scores, F, are the values of the principal component variables for the observations. These can be standardized so that the variance of these scores for each principal component is 1.0 or equal to the corresponding eigenvalue.
Weights can be used with the analysis, in which case the matrix X is first centred about the weighted means, then each row is scaled by an amount √w_i, where w_i is the weight for the ith observation.
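The weighted preprocessing just described can be sketched as (an illustration of the formula, not the routine's internal code; the helper name is mine):

```python
import numpy as np

def weighted_centred(X, w):
    """Centre X about the weighted column means, then scale row i by sqrt(w_i)."""
    w = np.asarray(w, dtype=float)
    mu = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted column means
    return np.sqrt(w)[:, None] * (X - mu)         # row i scaled by sqrt(w_i)

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Xw = weighted_centred(X, [1.0, 0.0, 1.0])         # zero weight excludes row 2
```

A zero weight both removes the observation from the means and zeroes its row, which matches the behaviour described for the scores array below.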
Chatfield C and Collins A J (1980) Introduction to Multivariate Analysis Chapman and Hall
Cooley W C and Lohnes P R (1971) Multivariate Data Analysis Wiley
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Kendall M G and Stuart A (1969) The Advanced Theory of Statistics (Volume 1) (3rd Edition) Griffin
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill
1: matrix – Character(1) Input
On entry: indicates for which type of matrix the principal component analysis is to be carried out.
matrix = 'C': the correlation matrix.
matrix = 'S': a standardized matrix, with standardizations given by s.
matrix = 'U': the sums of squares and cross-products matrix.
matrix = 'V': the variance-covariance matrix.
Constraint: matrix = 'C', 'S', 'U' or 'V'.
2: std – Character(1) Input
On entry: indicates if the principal component scores are to be standardized.
std = 'S': the principal component scores are standardized so that F^T F = I, i.e., F = X_s P Λ^(−1) = V.
std = 'U': the principal component scores are unstandardized, i.e., F = X_s P = V Λ.
std = 'Z': the principal component scores are standardized so that they have unit variance.
std = 'E': the principal component scores are standardized so that they have variance equal to the corresponding eigenvalue.
Note: the dimension of the array wt must be at least n if weight = 'W', and at least 1 otherwise.
On entry: if weight = 'W', the first n elements of wt must contain the weights to be used in the principal component analysis.
If wt(i) = 0.0, the ith observation is not included in the analysis. The effective number of observations is the sum of the weights.
If weight = 'U', wt is not referenced and the effective number of observations is n.
Constraints: wt(i) ≥ 0.0, for i = 1, 2, …, n; the sum of weights ≥ nvar + 1.
11: nvar – Integer Input
On entry: p, the number of variables in the principal component analysis.
12: e(lde,6) – Real (Kind=nag_wp) array Output
On exit: the statistics of the principal component analysis.
e(i,1) contains the eigenvalue associated with the ith principal component, λ_i², for i = 1, 2, …, p.
e(i,2) contains the proportion of variation explained by the ith principal component, for i = 1, 2, …, p.
e(i,3) contains the cumulative proportion of variation explained by the first i principal components, for i = 1, 2, …, p.
e(i,4) contains the χ² statistic, for i = 1, 2, …, p.
e(i,5) contains the degrees of freedom for the χ² statistic, for i = 1, 2, …, p.
If matrix ≠ 'C', e(i,6) contains the significance level for the χ² statistic, for i = 1, 2, …, p.
If matrix = 'C', e(i,6) is returned as zero.
13: lde – Integer Input
On entry: the first dimension of the array e as declared in the (sub)program from which g03aaf is called.
14: p(ldp,nvar) – Real (Kind=nag_wp) array Output
On exit: the first nvar columns of p contain the principal component loadings, a_i. The jth column of p contains the nvar coefficients for the jth principal component.
15: ldp – Integer Input
On entry: the first dimension of the array p as declared in the (sub)program from which g03aaf is called.
16: v(ldv,nvar) – Real (Kind=nag_wp) array Output
On exit: the first nvar columns of v contain the principal component scores. The jth column of v contains the n scores for the jth principal component.
If weight = 'W', any rows for which wt(i) is zero will be set to zero.
17: ldv – Integer Input
On entry: the first dimension of the array v as declared in the (sub)program from which g03aaf is called.
18: wk – Real (Kind=nag_wp) array Input
This argument is no longer accessed by g03aaf. Workspace is provided internally by dynamic allocation instead.
19: ifail – Integer Input/Output
On entry: ifail must be set to 0, −1 or 1. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value −1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this argument, the recommended value is 0. When the value −1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).
Error Indicators and Warnings
If on entry ifail = 0 or −1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
On entry, weight = 'W' and the effective number of observations is less than nvar + 1.
On entry, s(j) ≤ 0.0 for some j = 1, 2, …, m, when matrix = 'S' and isx(j) > 0.
The singular value decomposition has failed to converge. This is an unlikely error exit.
All eigenvalues/singular values are zero. This will be caused by all the variables being constant.
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.
As g03aaf uses a singular value decomposition of the data matrix, it will be less affected by ill-conditioned problems than traditional methods using the eigenvalue decomposition of the variance-covariance matrix.
Parallelism and Performance
g03aaf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
A dataset is taken from Cooley and Lohnes (1971); it consists of ten observations on three variables. The unweighted principal components based on the variance-covariance matrix are computed and the principal component scores requested. The principal component scores are standardized so that they have variance equal to the corresponding eigenvalue.