

# NAG Toolbox: nag_mv_factor (g03ca)

## Purpose

nag_mv_factor (g03ca) computes the maximum likelihood estimates of the arguments of a factor analysis model. Either the data matrix or a correlation/covariance matrix may be input. Factor loadings, communalities and residual correlations are returned.

## Syntax

```
[e, stat, com, psi, res, fl, ifail] = g03ca(matrix, n, x, nvar, isx, nfac, iop, 'm', m, 'wt', wt)
[e, stat, com, psi, res, fl, ifail] = nag_mv_factor(matrix, n, x, nvar, isx, nfac, iop, 'm', m, 'wt', wt)
```
Note: the interface to this routine has changed since earlier releases of the toolbox:
 At Mark 24: weight was removed from the interface; wt was made optional

## Description

Let $p$ variables, ${x}_{1},{x}_{2},\dots ,{x}_{p}$, with variance-covariance matrix $\Sigma$ be observed. The aim of factor analysis is to account for the covariances in these $p$ variables in terms of a smaller number, $k$, of hypothetical variables, or factors, ${f}_{1},{f}_{2},\dots ,{f}_{k}$. These are assumed to be independent and to have unit variance. The relationship between the observed variables and the factors is given by the model:
 $x_i = \sum_{j=1}^{k} \lambda_{ij} f_j + e_i, \quad i=1,2,\dots,p$
where ${\lambda }_{\mathit{i}\mathit{j}}$, for $\mathit{i}=1,2,\dots ,p$ and $\mathit{j}=1,2,\dots ,k$, are the factor loadings and ${e}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$, are independent random variables with variances ${\psi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$. The ${\psi }_{i}$ represent the unique component of the variation of each observed variable. The proportion of variation for each variable accounted for by the factors is known as the communality. For this function it is assumed that both the $k$ factors and the ${e}_{i}$'s follow independent Normal distributions.
The model for the variance-covariance matrix, $\Sigma$, can be written as:
 $\Sigma = \Lambda \Lambda^{\mathrm{T}} + \Psi$ (1)
where $\Lambda$ is the matrix of the factor loadings, ${\lambda }_{ij}$, and $\Psi$ is a diagonal matrix of unique variances, ${\psi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$.
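Equation (1) is easy to verify numerically. The following sketch (in Python/NumPy rather than the toolbox's MATLAB, purely for illustration; the loadings are made-up values) builds $\Sigma$ from a hypothetical $\Lambda$ and $\Psi$ and confirms that choosing ${\psi}_{i}=1-\text{communality}$ yields a correlation matrix:

```python
import numpy as np

# Hypothetical one-factor model for p = 3 variables (made-up loadings).
Lam = np.array([[0.8], [0.6], [0.5]])      # factor loadings, p x k
psi = 1.0 - (Lam ** 2).sum(axis=1)         # unique variances psi_i
Psi = np.diag(psi)

# Equation (1): the covariance matrix implied by the model.
Sigma = Lam @ Lam.T + Psi

# Each diagonal element is communality + unique variance = 1,
# so Sigma is a correlation matrix.
print(np.diag(Sigma))
```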
The estimation of the arguments of the model, $\Lambda$ and $\Psi$, by maximum likelihood is described by Lawley and Maxwell (1971). The log-likelihood is:
 $-\frac{1}{2}\left(n-1\right)\log\left|\Sigma\right| - \frac{1}{2}\left(n-1\right)\operatorname{trace}\left(S\Sigma^{-1}\right) + \text{constant},$
where $n$ is the number of observations, $S$ is the sample variance-covariance matrix or, if weights are used, $S$ is the weighted sample variance-covariance matrix and $n$ is the effective number of observations, that is, the sum of the weights. The constant is independent of the arguments of the model. A two stage maximization is employed. It makes use of the function $F\left(\Psi \right)$, which is, up to a constant, $-2/\left(n-1\right)$ times the log-likelihood maximized over $\Lambda$. This is then minimized with respect to $\Psi$ to give the estimates, $\stackrel{^}{\Psi }$, of $\Psi$. The function $F\left(\Psi \right)$ can be written as:
 $F\left(\Psi\right) = \sum_{j=k+1}^{p}\left(\theta_j - \log\theta_j\right) - \left(p-k\right)$
where the values ${\theta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,p$, are the eigenvalues of the matrix:
 $S^{*} = \Psi^{-1/2} S \Psi^{-1/2}.$
The estimates $\stackrel{^}{\Lambda }$, of $\Lambda$, are then given by scaling the eigenvectors of ${S}^{*}$, which are denoted by $V$:
 $\hat{\Lambda} = \Psi^{1/2} V \left(\Theta - I\right)^{1/2},$
where $\Theta$ is the diagonal matrix with elements ${\theta }_{i}$, and $I$ is the identity matrix.
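The two formulas above translate directly into code. The following Python/NumPy sketch is an illustration of the eigenvalue formulation only, not the routine's actual implementation (which, as described below, works via a singular value decomposition):

```python
import numpy as np

def loadings_and_objective(S, psi, k):
    """Evaluate F(Psi) and the conditional ML loadings Lambda-hat
    from the eigendecomposition of S* = Psi^{-1/2} S Psi^{-1/2}.
    Illustrative only; not NAG's SVD-based implementation."""
    p = psi.size
    d = 1.0 / np.sqrt(psi)
    S_star = S * np.outer(d, d)            # Psi^{-1/2} S Psi^{-1/2}
    theta, V = np.linalg.eigh(S_star)      # ascending eigenvalues
    theta, V = theta[::-1], V[:, ::-1]     # largest first
    # F(Psi) = sum_{j=k+1}^{p} (theta_j - log theta_j) - (p - k)
    tail = theta[k:]
    F = float(np.sum(tail - np.log(tail)) - (p - k))
    # Lambda-hat = Psi^{1/2} V (Theta - I)^{1/2}, k leading eigenpairs
    Lam = np.sqrt(psi)[:, None] * V[:, :k] * np.sqrt(theta[:k] - 1.0)
    return F, Lam
```

When $S$ is generated exactly by the model with the true $\Psi$, the trailing eigenvalues are all one, so $F\left(\Psi\right)=0$ and the loadings are recovered up to sign.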
The minimization of $F\left(\Psi \right)$ is performed using nag_opt_bounds_mod_deriv2_comp (e04lb) which uses a modified Newton algorithm. The computation of the Hessian matrix is described by Clark (1970). However, instead of using the eigenvalue decomposition of the matrix ${S}^{*}$ as described above, the singular value decomposition of the matrix $R{\Psi }^{-1/2}$ is used, where $R$ is obtained either from the $QR$ decomposition of the (scaled) mean centred data matrix or from the Cholesky decomposition of the correlation/covariance matrix. The function nag_opt_bounds_mod_deriv2_comp (e04lb) ensures that the values of ${\psi }_{i}$ are greater than a given small positive quantity, $\delta$, so that the communality is always less than one. This avoids the so called Heywood cases.
In addition to the values of $\Lambda$, $\Psi$ and the communalities, nag_mv_factor (g03ca) returns the residual correlations, i.e., the off-diagonal elements of $C-\left(\Lambda {\Lambda }^{\mathrm{T}}+\Psi \right)$ where $C$ is the sample correlation matrix. nag_mv_factor (g03ca) also returns the test statistic:
 $\chi^2 = \left[n - 1 - \left(2p+5\right)/6 - 2k/3\right] F\left(\hat{\Psi}\right)$
which can be used to test the goodness-of-fit of the model (1), see Lawley and Maxwell (1971) and Morrison (1967).
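The ingredients of the statistic are plain arithmetic, sketched below in Python for illustration. The degrees-of-freedom count $\left(\left(p-k\right)^2-\left(p+k\right)\right)/2$ is the standard one for the maximum likelihood factor model (it is what stat(3) returns):

```python
def chi2_statistic(n, p, k, f_hat):
    """Goodness-of-fit statistic from the formula above:
    chi^2 = [n - 1 - (2p + 5)/6 - 2k/3] * F(Psi-hat)."""
    return (n - 1 - (2 * p + 5) / 6 - 2 * k / 3) * f_hat

def chi2_df(p, k):
    """Degrees of freedom: ((p - k)^2 - (p + k)) / 2."""
    return ((p - k) ** 2 - (p + k)) // 2

# For the Example below (n = 211, p = 9, k = 3) the degrees of
# freedom are 12, matching the printed output.
print(chi2_df(9, 3))   # -> 12
```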

## References

Clark M R B (1970) A rapidly convergent method for maximum likelihood factor analysis British J. Math. Statist. Psych.
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Lawley D N and Maxwell A E (1971) Factor Analysis as a Statistical Method (2nd Edition) Butterworths
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill

## Parameters

### Compulsory Input Parameters

1:     $\mathrm{matrix}$ – string (length ≥ 1)
Selects the type of matrix on which factor analysis is to be performed.
${\mathbf{matrix}}=\text{'D'}$
The data matrix will be input in x and factor analysis will be computed for the correlation matrix.
${\mathbf{matrix}}=\text{'S'}$
The data matrix will be input in x and factor analysis will be computed for the covariance matrix, i.e., the results are scaled as described in Further Comments.
${\mathbf{matrix}}=\text{'C'}$
The correlation/variance-covariance matrix will be input in x and factor analysis computed for this matrix.
Constraint: ${\mathbf{matrix}}=\text{'D'}$, $\text{'S'}$ or $\text{'C'}$.
2:     $\mathrm{n}$ – int64/int32/nag_int scalar
If ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$ the number of observations in the data array x.
If ${\mathbf{matrix}}=\text{'C'}$ the (effective) number of observations used in computing the (possibly weighted) correlation/variance-covariance matrix input in x.
Constraint: ${\mathbf{n}}>{\mathbf{nvar}}$.
3:     $\mathrm{x}\left(\mathit{ldx},{\mathbf{m}}\right)$ – double array
ldx, the first dimension of the array, must satisfy the constraint
• if ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$, $\mathit{ldx}\ge {\mathbf{n}}$;
• if ${\mathbf{matrix}}=\text{'C'}$, $\mathit{ldx}\ge {\mathbf{m}}$.
The input matrix.
If ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$, x must contain the data matrix, i.e., ${\mathbf{x}}\left(\mathit{i},\mathit{j}\right)$ must contain the $\mathit{i}$th observation for the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,{\mathbf{m}}$.
If ${\mathbf{matrix}}=\text{'C'}$, x must contain the correlation or variance-covariance matrix. Only the upper triangular part is required.
4:     $\mathrm{nvar}$ – int64/int32/nag_int scalar
$p$, the number of variables in the factor analysis.
Constraint: ${\mathbf{nvar}}\ge 2$.
5:     $\mathrm{isx}\left({\mathbf{m}}\right)$ – int64/int32/nag_int array
${\mathbf{isx}}\left(\mathit{j}\right)$ indicates whether or not the $\mathit{j}$th variable is included in the factor analysis. If ${\mathbf{isx}}\left(\mathit{j}\right)\ge 1$, the variable represented by the $\mathit{j}$th column of x is included in the analysis; otherwise it is excluded, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$.
Constraint: ${\mathbf{isx}}\left(j\right)>0$ for nvar values of $j$.
6:     $\mathrm{nfac}$ – int64/int32/nag_int scalar
$k$, the number of factors.
Constraint: $1\le {\mathbf{nfac}}\le {\mathbf{nvar}}$.
7:     $\mathrm{iop}\left(5\right)$ – int64/int32/nag_int array
Options for the optimization. There are four options to be set:
• $\mathit{iprint}$ controls iteration monitoring: if $\mathit{iprint}\le 0$ no information is printed; if $\mathit{iprint}>0$ information is printed every $\mathit{iprint}$ iterations. The information printed consists of the value of $F\left(\Psi \right)$ at that iteration, the number of evaluations of $F\left(\Psi \right)$, the current estimates of the communalities and an indication of whether or not they are at the boundary.
• $\mathit{maxfun}$ is the maximum number of function evaluations.
• $\mathit{acc}$ is the required accuracy for the estimates of ${\psi }_{i}$.
• $\mathit{eps}$ is a lower bound for the values of $\psi$, see Description.
Let $\epsilon$ be the machine precision. If ${\mathbf{iop}}\left(1\right)=0$, then the following default values are used:
• $\mathit{iprint}=-1$
• $\mathit{maxfun}=100p$
• $\mathit{acc}=10\sqrt{\epsilon }$
• $\mathit{eps}=\epsilon$
If ${\mathbf{iop}}\left(1\right)\ne 0$, then
• $\mathit{iprint}={\mathbf{iop}}\left(2\right)$
• $\mathit{maxfun}={\mathbf{iop}}\left(3\right)$
• $\mathit{acc}={10}^{-l}$ where $l={\mathbf{iop}}\left(4\right)$
• $\mathit{eps}={10}^{-l}$ where $l={\mathbf{iop}}\left(5\right)$
Constraint: if ${\mathbf{iop}}\left(1\right)\ne 0$, ${\mathbf{iop}}\left(\mathit{i}\right)$ must be such that $\mathit{maxfun}\ge 1$, $\epsilon \le \mathit{acc}<1$ and $\epsilon \le \mathit{eps}<1$, for $\mathit{i}=3,4,5$.
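The defaults logic above can be summarized as a small helper. This is a hypothetical Python sketch (not part of the toolbox), taking $\epsilon$ to be double-precision machine epsilon and indexing iop from zero rather than the one-based indexing used in the text:

```python
import math

EPS_MACH = 2.0 ** -52   # double-precision machine epsilon (assumption)

def resolve_iop(iop, p):
    """Map the iop option vector (0-based here; iop(1) in the text
    is iop[0]) to (iprint, maxfun, acc, eps) per the defaults above."""
    if iop[0] == 0:
        return -1, 100 * p, 10.0 * math.sqrt(EPS_MACH), EPS_MACH
    return iop[1], iop[2], 10.0 ** (-iop[3]), 10.0 ** (-iop[4])

# iop = [1, -1, 500, 2, 5] as in the Example: acc = 1e-2, eps = 1e-5.
print(resolve_iop([1, -1, 500, 2, 5], p=9))
```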

### Optional Input Parameters

1:     $\mathrm{m}$ – int64/int32/nag_int scalar
Default: the dimension of the array isx and the second dimension of the array x. (An error is raised if these dimensions are not equal.)
The number of variables in the data/correlation/variance-covariance matrix.
Constraint: ${\mathbf{m}}\ge {\mathbf{nvar}}$.
2:     $\mathrm{wt}\left(:\right)$ – double array
The dimension of the array wt must be at least ${\mathbf{n}}$ if $\mathit{weight}=\text{'W'}$ and ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$, and at least $1$ otherwise.
If $\mathit{weight}=\text{'W'}$ and ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$, wt must contain the weights to be used in the factor analysis. The effective number of observations in the analysis will then be the sum of weights. If ${\mathbf{wt}}\left(i\right)=0.0$, the $i$th observation is not included in the analysis.
If $\mathit{weight}=\text{'U'}$ or ${\mathbf{matrix}}=\text{'C'}$, wt is not referenced and the effective number of observations is $n$.
Constraint: if $\mathit{weight}=\text{'W'}$, $\text{the sum of weights}>{\mathbf{nvar}}$, ${\mathbf{wt}}\left(\mathit{i}\right)\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$.

### Output Parameters

1:     $\mathrm{e}\left({\mathbf{nvar}}\right)$ – double array
The eigenvalues ${\theta }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$.
2:     $\mathrm{stat}\left(4\right)$ – double array
The test statistics.
${\mathbf{stat}}\left(1\right)$
Contains the value $F\left(\stackrel{^}{\Psi }\right)$.
${\mathbf{stat}}\left(2\right)$
Contains the test statistic, ${\chi }^{2}$.
${\mathbf{stat}}\left(3\right)$
Contains the degrees of freedom associated with the test statistic.
${\mathbf{stat}}\left(4\right)$
Contains the significance level.
3:     $\mathrm{com}\left({\mathbf{nvar}}\right)$ – double array
The communalities.
4:     $\mathrm{psi}\left({\mathbf{nvar}}\right)$ – double array
The estimates of ${\psi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$.
5:     $\mathrm{res}\left({\mathbf{nvar}}×\left({\mathbf{nvar}}-1\right)/2\right)$ – double array
The residual correlations. The residual correlation for the $i$th and $j$th variables is stored in ${\mathbf{res}}\left(\left(j-1\right)\left(j-2\right)/2+i\right)$, for $i<j$.
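The packed storage can be addressed with a one-line index helper (an illustrative Python sketch; res itself comes from the routine). Indices are 1-based with $i<j$, as in the text:

```python
def res_index(i, j):
    """1-based position of the residual correlation for variables
    i and j (i < j) in the packed array res."""
    assert 1 <= i < j
    return (j - 1) * (j - 2) // 2 + i

# Variables (2, 4) -> position (3*2)/2 + 2 = 5; the last pair
# (nvar-1, nvar) lands at position nvar*(nvar-1)/2, the length of res.
print(res_index(2, 4))   # -> 5
```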
6:     $\mathrm{fl}\left(\mathit{ldfl},{\mathbf{nfac}}\right)$ – double array
The factor loadings. ${\mathbf{fl}}\left(\mathit{i},\mathit{j}\right)$ contains ${\lambda }_{\mathit{i}\mathit{j}}$, for $\mathit{i}=1,2,\dots ,p$ and $\mathit{j}=1,2,\dots ,k$.
7:     $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Note: nag_mv_factor (g03ca) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

${\mathbf{ifail}}=1$
 On entry,
• $\mathit{ldfl}<{\mathbf{nvar}}$,
• or ${\mathbf{nvar}}<2$,
• or ${\mathbf{n}}\le {\mathbf{nvar}}$,
• or ${\mathbf{nfac}}<1$,
• or ${\mathbf{nvar}}<{\mathbf{nfac}}$,
• or ${\mathbf{m}}<{\mathbf{nvar}}$,
• or ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$ and $\mathit{ldx}<{\mathbf{n}}$,
• or ${\mathbf{matrix}}=\text{'C'}$ and $\mathit{ldx}<{\mathbf{m}}$,
• or ${\mathbf{matrix}}\ne \text{'D'}$, $\text{'S'}$ or $\text{'C'}$,
• or ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$ and $\mathit{weight}\ne \text{'U'}$ or $\text{'W'}$,
• or ${\mathbf{iop}}\left(1\right)\ne 0$ and ${\mathbf{iop}}\left(3\right)$ is such that $\mathit{maxfun}<1$,
• or ${\mathbf{iop}}\left(1\right)\ne 0$ and ${\mathbf{iop}}\left(4\right)$ is such that $\mathit{acc}\ge 1.0$ or $\mathit{acc}<\epsilon$,
• or ${\mathbf{iop}}\left(1\right)\ne 0$ and ${\mathbf{iop}}\left(5\right)$ is such that $\mathit{eps}\ge 1.0$ or $\mathit{eps}<\epsilon$,
• or ${\mathbf{matrix}}=\text{'C'}$ and $\mathit{lwk}<\left(5×{\mathbf{nvar}}×{\mathbf{nvar}}+33×{\mathbf{nvar}}-4\right)/2$,
• or ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$ and $\mathit{lwk}<\mathrm{max}\left(\left(5×{\mathbf{nvar}}×{\mathbf{nvar}}+33×{\mathbf{nvar}}-4\right)/2,{\mathbf{n}}×{\mathbf{nvar}}+7×{\mathbf{nvar}}+{\mathbf{nvar}}×\left({\mathbf{nvar}}-1\right)/2\right)$.
${\mathbf{ifail}}=2$
 On entry, $\mathit{weight}=\text{'W'}$ and a value of ${\mathbf{wt}}<0.0$.
${\mathbf{ifail}}=3$
On entry, there are not exactly nvar elements of ${\mathbf{isx}}>0$, or the effective number of observations $\text{}\le {\mathbf{nvar}}$.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{matrix}}=\text{'D'}$ or $\text{'S'}$ and the data matrix is not of full column rank, or ${\mathbf{matrix}}=\text{'C'}$ and the input correlation/variance-covariance matrix is not positive definite.
This exit may also be caused by two of the eigenvalues of ${S}^{*}$ being equal; this is rare (see Lawley and Maxwell (1971)), and may be due to the data/correlation matrix being almost singular.
${\mathbf{ifail}}=5$
A singular value decomposition has failed to converge. This is a very unlikely error exit.
${\mathbf{ifail}}=6$
The estimation procedure has failed to converge in the given number of iterations. Change iop either to increase the maximum number of function evaluations, $\mathit{maxfun}$, or to increase the value of $\mathit{acc}$.
W  ${\mathbf{ifail}}=7$
The convergence is not certain but a lower point could not be found. See nag_opt_bounds_mod_deriv2_comp (e04lb) for further details. In this case all results are computed.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.

## Accuracy

The accuracy achieved is discussed in nag_opt_bounds_mod_deriv2_comp (e04lb) with the value of the argument xtol given by $\mathit{acc}$ as described in parameter iop.

## Further Comments

The factor loadings may be orthogonally rotated using nag_mv_rot_orthomax (g03ba), and factor score coefficients can be computed using nag_mv_factor_score (g03cc). The maximum likelihood estimators are invariant to a change in scale: the results obtained will be the same (up to a scaling factor) whether the correlation matrix or the variance-covariance matrix is used. As the correlation matrix ensures that all values of ${\psi }_{i}$ are between $0$ and $1$, it leads to a more efficient optimization. When the data matrix is input, the results are always computed for the correlation matrix and then scaled if the results for the covariance matrix are required. When you input the covariance/correlation matrix, the input matrix itself is used, and you are advised to input the correlation matrix rather than the covariance matrix.

## Example

This example is taken from Lawley and Maxwell (1971). The correlation matrix for nine variables is input and the arguments of a factor analysis model with three factors are estimated and printed.
```
function g03ca_example

fprintf('g03ca example results\n\n');

matrix = 'C';
n = int64(211);
x = [1,     0.523, 0.395, 0.471, 0.346, 0.426, 0.576, 0.434, 0.639;
     0.523, 1,     0.479, 0.506, 0.418, 0.462, 0.547, 0.283, 0.645;
     0.395, 0.479, 1,     0.355, 0.270, 0.254, 0.452, 0.219, 0.504;
     0.471, 0.506, 0.355, 1,     0.691, 0.791, 0.443, 0.285, 0.505;
     0.346, 0.418, 0.270, 0.691, 1,     0.679, 0.383, 0.149, 0.409;
     0.426, 0.462, 0.254, 0.791, 0.679, 1,     0.372, 0.314, 0.472;
     0.576, 0.547, 0.452, 0.443, 0.383, 0.372, 1,     0.385, 0.68;
     0.434, 0.283, 0.219, 0.285, 0.149, 0.314, 0.385, 1,     0.47;
     0.639, 0.645, 0.504, 0.505, 0.409, 0.472, 0.680, 0.470, 1];
nvar = int64(size(x,1));
isx  = ones(nvar,1,'int64');
nfac = int64(3);
iop  = [int64(1); -1;  500;  2;  5];

[e, stat, com, psi, res, fl, ifail] = ...
    g03ca( ...
           matrix, n, x, nvar, isx, nfac, iop);

disp(' Eigenvalues');
fprintf('%12.4e%12.4e%12.4e\n',e);
fprintf('\n     Test Statistic = %6.3f\n', stat(2));
fprintf('                 df = %6.3f\n', stat(3));
fprintf(' Significance level = %6.3f\n', stat(4));

% Print the strict lower triangle of residual correlations row by row.
fprintf('\n Residuals\n\n');
l = 1;
for i = 1:nvar-1
  fprintf('%8.3f', res(l:(l+i-1)));
  fprintf('\n');
  l = l + i;
end

% Print factor loadings, communality and unique variance per variable.
for i = 1:nvar
  fprintf('%8.3f', fl(i,1:nfac), com(i), psi(i));
  fprintf('\n');
end
```
```
g03ca example results

Eigenvalues
1.5968e+01  4.3577e+00  1.8474e+00
1.1560e+00  1.1190e+00  1.0271e+00
9.2574e-01  8.9508e-01  8.7710e-01

Test Statistic =  7.149
df = 12.000
Significance level =  0.848

Residuals

0.000
-0.013   0.022
0.011  -0.005   0.023
-0.010  -0.019  -0.016   0.003
-0.005   0.011  -0.012  -0.001  -0.001
0.015  -0.022  -0.011   0.002   0.029  -0.012
-0.001  -0.011   0.013   0.005  -0.006  -0.001   0.003
-0.006   0.010  -0.005  -0.011   0.002   0.007   0.003  -0.001

0.664  -0.321   0.074   0.550   0.450
0.689  -0.247  -0.193   0.573   0.427
0.493  -0.302  -0.222   0.383   0.617
0.837   0.292  -0.035   0.788   0.212
0.705   0.315  -0.153   0.619   0.381
0.819   0.377   0.105   0.823   0.177
0.661  -0.396  -0.078   0.600   0.400
0.458  -0.296   0.491   0.538   0.462
0.766  -0.427  -0.012   0.769   0.231
```


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015