NAG Toolbox Chapter Introduction

G03 — Multivariate Methods

Scope of the Chapter

This chapter is concerned with methods for studying multivariate data. A multivariate dataset consists of several variables recorded on a number of objects or individuals. Multivariate methods can be classified as those that seek to examine the relationships between the variables (e.g., principal components), known as variable-directed methods, and those that seek to examine the relationships between the objects (e.g., cluster analysis), known as individual-directed methods.
Multiple regression is not included in this chapter as it involves the relationship of a single variable, known as the response variable, to the other variables in the dataset, the explanatory variables. Routines for multiple regression are provided in Chapter G02.

Background to the Problems

Variable-directed Methods

Let the n by p data matrix consist of p variables, x_1, x_2, …, x_p, observed on n objects or individuals. Variable-directed methods seek to examine the linear relationships between the p variables with the aim of reducing the dimensionality of the problem. There are different methods depending on the structure of the problem. Principal component analysis and factor analysis examine the relationships between all the variables. If the individuals are classified into groups, then canonical variate analysis examines the between-group structure. If the variables can be considered as coming from two sets, then canonical correlation analysis examines the relationships between the two sets of variables. All four methods are based on an eigenvalue decomposition or a singular value decomposition (SVD) of an appropriate matrix.
The above methods may reduce the dimensionality of the data from the original p variables to a smaller number, k, of derived variables that adequately represent the data. In general, these k derived variables will be unique only up to an orthogonal rotation. Therefore, it may be useful to see if there exist suitable rotations of these variables that lead to a simple interpretation of the new variables in terms of the original variables.

Principal component analysis

Principal component analysis finds new variables which are linear combinations of the p observed variables so that they have maximum variation and are orthogonal (uncorrelated).
Let S be the p by p variance-covariance matrix of the n by p data matrix. A vector a_1 of length p is found such that
a_1^T S a_1 is maximized subject to a_1^T a_1 = 1.
The variable z_1 = Σ_{i=1}^p a_{1i} x_i is known as the first principal component and gives the linear combination of the variables that gives the maximum variation. A second principal component, z_2 = Σ_{i=1}^p a_{2i} x_i, is found such that
a_2^T S a_2 is maximized subject to a_2^T a_2 = 1 and a_2^T a_1 = 0.
This gives the linear combination of variables, orthogonal to the first principal component, that gives the maximum variation. Further principal components are derived in a similar way.
The vectors a_i, for i = 1, 2, …, p, are the eigenvectors of the matrix S and associated with each eigenvector is the eigenvalue γ_i^2. The value of γ_i^2 / Σ γ_i^2 gives the proportion of variation explained by the ith principal component. Alternatively, the a_i can be considered as the right singular vectors in an SVD of a scaled mean-centred data matrix. The singular values of the SVD are the γ_i values.
Often fewer than p dimensions (principal components) are needed to represent most of the variation in the data. A test on the smaller eigenvalues can be used to investigate the number of dimensions needed.
The values of the principal component variables for the individuals are known as the principal component scores. These can be standardized so that the variance of these scores for each principal component is 1.0 or equal to the corresponding eigenvalue. The principal component scores correspond to the left-hand singular vectors in the SVD.
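As an illustration of the computation described above, the following Python/NumPy sketch obtains the loadings a_i, the eigenvalues γ_i^2 and the principal component scores from the SVD of the mean-centred data matrix. It is an illustrative sketch only, not a call to nag_mv_prin_comp (g03aa); the function and variable names are assumed for the example.

import numpy as np

def principal_components(X):
    """Illustrative PCA via the SVD of the mean-centred n-by-p data matrix X."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                  # mean-centre each variable
    U, sing, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = sing**2 / (n - 1)              # eigenvalues gamma_i^2 of the covariance matrix S
    loadings = Vt.T                          # columns are the vectors a_i
    scores = Xc @ loadings                   # principal component scores
    explained = eigvals / eigvals.sum()      # proportion of variation for each component
    return loadings, scores, eigvals, explained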

Factor analysis

Let the p variables have variance-covariance matrix Σ. The aim of factor analysis is to account for the covariances in these p variables in terms of a smaller number, k, of hypothetical variables or factors, f_1, f_2, …, f_k. These are assumed to be independent and to have unit variance. The relationship between the observed variables and the factors is given by the model
x_i = Σ_{j=1}^k λ_{ij} f_j + e_i,   i = 1, 2, …, p,
where λ_{ij}, for i = 1, 2, …, p and j = 1, 2, …, k, are the factor loadings and e_i, for i = 1, 2, …, p, are independent random variables with variances ψ_i. These represent the unique component of the variation of each observed variable. The proportion of variation for each variable accounted for by the factors is known as the communality.
The model for the variance-covariance matrix, Σ, can then be written as
Σ = Λ Λ^T + Ψ,
where Λ is the matrix of the factor loadings, λ_{ij}, and Ψ is a diagonal matrix of the unique variances ψ_i.
If it is assumed that both the k factors and the e_i follow independent Normal distributions then the parameters of the model, Λ and Ψ, can be estimated by maximum likelihood, as described by Lawley and Maxwell (1971). The computation of the maximum likelihood estimates is an iterative procedure which involves computing the eigenvalues and eigenvectors of the matrix
S* = Ψ^{-1/2} S Ψ^{-1/2},
where S is the sample variance-covariance matrix. Alternatively, the SVD of the matrix R Ψ^{-1/2} can be used, where R^T R = S. When convergence has been achieved, the estimates Λ̂, of Λ, are obtained by scaling the eigenvectors of S*. The use of maximum likelihood estimation means that likelihood ratio tests can be constructed to test for the number of factors required.
Having found the estimates of the parameters of the model, the estimates of the values of the factors for the individuals, the factor scores, can be computed. These involve the calculation of the factor score coefficients. Two common methods of computing factor score coefficients are the regression method and Bartlett's method. Bartlett's method gives unbiased estimates of the factor scores while the estimates from the regression method are biased but have smaller variance; see Lawley and Maxwell (1971).
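The eigen-decomposition involved can be sketched as follows: for a fixed estimate of Ψ, the leading eigenvectors of S* = Ψ^{-1/2} S Ψ^{-1/2} are scaled to give loadings, and the unique variances are then re-estimated from the residual diagonal. This simplified alternating scheme is for illustration only and is not the maximum likelihood algorithm of Lawley and Maxwell (1971) used by nag_mv_factor (g03ca); the function name, starting values and iteration count are assumptions.

import numpy as np

def simple_factor_fit(S, k, n_iter=200):
    """Illustrative k-factor fit to a p-by-p sample covariance matrix S.

    A simplified alternating scheme (not full maximum likelihood): for fixed
    unique variances psi, scale the leading eigenvectors of
    S* = Psi^(-1/2) S Psi^(-1/2) to obtain loadings, then update psi.
    """
    p = S.shape[0]
    psi = 0.5 * np.diag(S)                        # crude starting values for the unique variances
    for _ in range(n_iter):
        d = 1.0 / np.sqrt(psi)
        S_star = S * np.outer(d, d)               # Psi^(-1/2) S Psi^(-1/2)
        evals, evecs = np.linalg.eigh(S_star)     # eigenvalues in ascending order
        evals, evecs = evals[::-1], evecs[:, ::-1]
        scale = np.sqrt(np.maximum(evals[:k] - 1.0, 0.0))
        lam = np.sqrt(psi)[:, None] * evecs[:, :k] * scale   # loadings Lambda
        psi = np.maximum(np.diag(S) - np.sum(lam**2, axis=1), 1e-6)
    communality = np.sum(lam**2, axis=1) / np.diag(S)        # proportion of variation explained
    return lam, psi, communality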

Canonical variate analysis

If the individuals can be classified into one of g groups, then canonical variate analysis finds the linear combinations of the p variables that maximize the ratio of the between-group variation to the within-group variation. These variables are known as canonical variates. As the canonical variates provide discrimination between the groups, the method is also known as canonical discrimination.
The canonical variates can be calculated from the eigenvectors of the within-group sums of squares and cross-products matrix or from the SVD of the matrix
V = Q_x^T Q_g,
where Q_g is an orthogonal matrix that defines the groups and Q_x is the first p columns of the orthogonal matrix Q from the QR decomposition of the data matrix with the variable means subtracted. If the data matrix is not of full rank, the Q_x matrix can be obtained from an SVD. If the SVD of V is
V = U_x Δ U_g^T,
then the nonzero elements, δ_i > 0, of the diagonal matrix Δ are the canonical correlations. The largest δ_i is called the first canonical correlation and associated with it is the first canonical variate.
The eigenvalues, γ_i^2, of the within-group sums of squares matrix are given by
γ_i^2 = δ_i^2 / (1 - δ_i^2).
The value of π_i = γ_i^2 / Σ γ_i^2 gives the proportion of variation explained by the ith canonical variate. The values of the π_i give an indication as to how many canonical variates are needed to adequately describe the data, i.e., the dimensionality of the problem. The number of dimensions can be investigated by means of a test on the smaller canonical correlations.
The canonical variate loadings and the relationship between the original variables and the canonical variates are calculated from the matrix U_x. This matrix is scaled so that the canonical variates have unit variance.
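The SVD route can be sketched as follows. The construction of Q_g below, as an orthonormal basis for the column space of the mean-centred group indicator matrix, is an assumption made for this illustration, as are the function names; the sketch is independent of nag_mv_canon_var (g03ac) and returns only the canonical correlations δ_i and the eigenvalues γ_i^2.

import numpy as np

def canonical_variate_correlations(X, groups):
    """Illustrative canonical correlations via the SVD of V = Qx^T Qg.

    Qg is taken here as an orthonormal basis for the column space of the
    mean-centred group indicator matrix (an assumed construction).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    labels, idx = np.unique(groups, return_inverse=True)
    G = np.zeros((n, labels.size))
    G[np.arange(n), idx] = 1.0                     # group indicator (dummy) matrix

    def orth_basis(A, tol=1e-10):
        # orthonormal basis of the column space of the mean-centred matrix (SVD handles rank deficiency)
        U, s, _ = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
        return U[:, s > tol * s.max()]

    Qx, Qg = orth_basis(X), orth_basis(G)
    _, delta, _ = np.linalg.svd(Qx.T @ Qg)         # singular values delta_i are the canonical correlations
    gamma2 = delta**2 / (1.0 - delta**2)           # eigenvalues gamma_i^2 (assumes delta_i < 1)
    return delta, gamma2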

Canonical correlation analysis

If the p variables can be considered as coming from two sets then canonical correlation analysis finds linear combinations of the variables in each set, known as canonical variates, such that the correlations between corresponding canonical variates for the two sets are maximized. Let the two sets of variables be denoted by x and y, with p_x and p_y variables in each set respectively. Let the variance-covariance matrix of the dataset be
S = [ S_xx  S_xy ]
    [ S_yx  S_yy ]
and let
Σ = S_yy^{-1} S_yx S_xx^{-1} S_xy,
then the canonical correlations can be calculated from the eigenvalues of the matrix Σ. Alternatively, the canonical correlations can be calculated by means of an SVD of the matrix
V = Q_x^T Q_y,
where Q_x is the first p_x columns of the orthogonal matrix Q from the QR decomposition of the x variables in the data matrix, and Q_y is the first p_y columns of the Q matrix of the QR decomposition of the y variables in the data matrix. In both cases, the variable means are subtracted before the QR decomposition is computed. If either set of variables is not of full rank, an SVD can be used instead of the QR decomposition. If the SVD of V is
V = U_x Δ U_y^T,
then the nonzero elements, δ_i > 0, of the diagonal matrix Δ are the canonical correlations. The largest δ_i is called the first canonical correlation and associated with it is the first canonical variate. The eigenvalues, γ_i^2, of the matrix Σ are given by
γ_i^2 = δ_i^2 / (1 + δ_i^2).
The value of π_i = γ_i^2 / Σ γ_i^2 gives the proportion of variation explained by the ith canonical variate. The values of the π_i give an indication as to how many canonical variates are needed to adequately describe the data, i.e., the dimensionality of the problem; this can also be investigated by means of a test on the smaller values of the γ_i^2.
The relationship between the canonical variables and the original variables, the canonical variate loadings, can be computed from the U_x and U_y matrices.
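The QR/SVD route just described can be sketched as follows, assuming both sets of variables have full column rank (otherwise an SVD would replace the QR decomposition, as noted above). It is an illustrative sketch only, independent of nag_mv_canon_corr (g03ad); the function name is assumed for the example.

import numpy as np

def canonical_correlations(X, Y):
    """Illustrative canonical correlations via the SVD of V = Qx^T Qy
    (assumes both mean-centred data matrices have full column rank)."""
    Xc = X - X.mean(axis=0)                   # subtract the variable means
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)                  # first p_x columns of Q from the QR decomposition
    Qy, _ = np.linalg.qr(Yc)                  # first p_y columns of Q for the y variables
    _, delta, _ = np.linalg.svd(Qx.T @ Qy)    # singular values delta_i are the canonical correlations
    return delta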

Rotations

There are two principal reasons for using rotations: either
(a) simplifying the structure to aid interpretation of derived variables, or
(b) comparing two or more datasets or sets of derived variables.
The most common type of rotations used for (a) are orthogonal rotations. If Λ is the p by k loading matrix from a variable-directed multivariate method, then the rotations are selected such that the elements, λ*_{ij}, of the rotated loading matrix, Λ*, are either relatively large or small. The rotations may be found by minimizing the criterion
V = Σ_{j=1}^k Σ_{i=1}^p (λ*_{ij})^4 - (γ/p) Σ_{j=1}^k ( Σ_{i=1}^p (λ*_{ij})^2 )^2,
where the constant γ gives a family of rotations, with γ = 1 giving varimax rotations and γ = 0 giving quartimax rotations.
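The orthomax family can be illustrated with the standard SVD-based iteration sketched below (γ = 1 for varimax, γ = 0 for quartimax). It is an illustrative sketch, not the algorithm of nag_mv_rot_orthomax (g03ba); the function name, tolerance and iteration limit are arbitrary choices.

import numpy as np

def orthomax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Illustrative orthomax rotation of a p-by-k loading matrix L
    (gamma = 1 gives varimax, gamma = 0 gives quartimax)."""
    p, k = L.shape
    R = np.eye(k)                             # current orthogonal rotation matrix
    obj = 0.0
    for _ in range(max_iter):
        Lr = L @ R                            # rotated loadings Lambda*
        grad = L.T @ (Lr**3 - (gamma / p) * Lr * (Lr**2).sum(axis=0))
        U, s, Vt = np.linalg.svd(grad)
        R = U @ Vt                            # project back onto the set of orthogonal matrices
        if s.sum() <= obj * (1.0 + tol):      # stop when the criterion no longer changes
            break
        obj = s.sum()
    return L @ R, R                           # rotated loadings Lambda* and the rotation matrix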
Given an orthogonal rotation matrix X, a solution may be further simplified by removing the orthogonality restriction with an oblique ProMax rotation. Let Y denote the matrix defined by a power transformation of X, designed to increase high values in X and decrease low values. Then the ProMax solution is based on a least squares fit of X to Y.
For (b), Procrustes rotations are used. Let A and B be two l by m matrices, which can be considered as representing l points in m dimensions. One example is if A is the loading matrix from a variable-directed multivariate method and B is a hypothesised pattern matrix. In order to try to match the points in A and B there are three steps:
(i) translate so that centroids of both matrices are at the origin,
(ii) find a rotation that minimizes the sum of squared distances between corresponding points of the matrices,
(iii) scale the matrices.
For a more detailed description, see Krzanowski (1990).
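The three steps above can be sketched as follows, with the rotation obtained from the SVD of A^T B after centring and the scale from a least squares fit. This is an illustrative sketch only, not the algorithm of nag_mv_rot_procrustes (g03bc); the function name and return values are assumed for the example.

import numpy as np

def procrustes(A, B):
    """Illustrative Procrustes matching of the l-by-m configuration A to the target B."""
    Ac = A - A.mean(axis=0)                   # (i) translate both centroids to the origin
    Bc = B - B.mean(axis=0)
    U, s, Vt = np.linalg.svd(Ac.T @ Bc)
    R = U @ Vt                                # (ii) rotation minimizing the sum of squared distances
    scale = s.sum() / (Ac**2).sum()           # (iii) least squares scaling factor
    A_matched = scale * Ac @ R
    residual_ss = ((A_matched - Bc)**2).sum() # residual sum of squares after matching
    return A_matched, R, scale, residual_ss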

Individual-directed Methods

While dealing with the same n by p data matrix as variable-directed methods, the emphasis is now on the n objects or individuals rather than the p variables. The methods are generally based on an n by n distance or dissimilarity matrix such that the (k,j)th element gives a measure of how ‘far apart’ the individuals k and j are. Alternatively, a similarity matrix can be used which measures how ‘close’ individuals are. The form of the measure of distance or similarity will depend upon the form of the p variables. For continuous variables it is usually assumed that some form of Euclidean distance is suitable. That is, for x_ki and x_ji measured for individuals k and j on variable i respectively, the contribution to the distance between individuals k and j from variable i is given by
(x_ki - x_ji)^2.
Often there will be a need to scale the variables to produce satisfactory distances. For discrete variables, there are various measures of similarity or distance that can easily be computed. For example, for binary data a measure of similarity could be the proportion of variables on which the two individuals take the same value (a simple matching coefficient).
Given a measure of distance between individuals, there are three basic tasks that can be performed.
(i) Group the individuals; that is, collect the individuals into groups so that those within a group are closer to each other than they are to members of another group.
(ii) Classify individuals; that is, if some individuals are known to come from certain groups, allocate individuals whose group membership is unknown, to the nearest group.
(iii) Map the individuals; that is, produce a multidimensional diagram in which the distances on the diagram represent the distances between the individuals.
In the above, (i) leads to cluster analysis, (ii) leads to discriminant analysis and (iii) leads to scaling methods.

Hierarchical cluster analysis

Approaches for cluster analysis can be classified into two types: hierarchical and non-hierarchical. Hierarchical cluster analysis produces a series of overlapping groups or clusters ranging from separate individuals to one single cluster. For example, five individuals could be hierarchically clustered as follows.
Step 1  (1) (2) (3) (4) (5)
Step 2  (1,2) (3) (4) (5)
Step 3  (1,2) (3,4) (5)
Step 4  (1,2) (3,4,5)
Step 5  (1,2,3,4,5)
The clusters at a level are constructed from the clusters at a previous level. There are two basic approaches to hierarchical cluster analysis: agglomerative methods which build up clusters starting from individuals until there is only one cluster, or divisive methods which start with a single cluster and split clusters until the individual level is reached. This chapter contains the more common agglomerative methods.
The stages in a hierarchical cluster analysis are usually as follows.
(i) form a distance matrix;
(ii) use a selected criterion to form the hierarchy;
(iii) print cluster information in the form of a dendrogram or use the information to form a set of clusters.
These three stages will be considered in turn.
(i) Form a distance matrix
For the n by p data matrix X, a general measure of the distance between object j and object k, d_jk, is
d_jk = [ Σ_{i=1}^p D(x_ji/s_i, x_ki/s_i) ]^α,
where x_ji and x_ki are the (j,i)th and (k,i)th elements of X, s_i is a standardization for the ith variable and D(u,v) is a suitable function. Three common distances for continuous variables are:
(a) Euclidean distance: D(u,v) = (u - v)^2 and α = 1/2.
(b) Euclidean squared distance: D(u,v) = (u - v)^2 and α = 1.
(c) Absolute distance (city block metric): D(u,v) = |u - v| and α = 1.
The common standardizations are the standard deviation and the range. For dichotomous variables there are a number of different measures (see Krzanowski (1990) and Everitt (1974)); these are usually easy to compute. If the individuals in a cluster analysis are themselves variables, then a suitable distance measure will be based on the correlation coefficient for continuous variables and contingency table statistics for discrete data.
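Forming a distance matrix with the general formula above can be sketched as follows for the three distances (a)–(c), with optional standardization by the standard deviation or the range. The sketch is illustrative only and is not nag_mv_distance_mat (g03ea); the argument names and defaults are assumptions.

import numpy as np

def distance_matrix(X, metric="euclidean", standardize="std"):
    """Illustrative n-by-n distance matrix d_jk for an n-by-p data matrix X.

    metric: "euclidean" (alpha = 1/2), "sqeuclidean" (alpha = 1) or "cityblock".
    standardize: "std" (standard deviation), "range" or None.
    """
    X = np.asarray(X, dtype=float)
    if standardize == "std":
        X = X / X.std(axis=0, ddof=1)
    elif standardize == "range":
        X = X / (X.max(axis=0) - X.min(axis=0))
    diff = X[:, None, :] - X[None, :, :]      # pairwise differences, shape (n, n, p)
    if metric == "cityblock":
        return np.abs(diff).sum(axis=2)       # D(u, v) = |u - v| and alpha = 1
    d2 = (diff**2).sum(axis=2)                # D(u, v) = (u - v)^2
    return d2 if metric == "sqeuclidean" else np.sqrt(d2)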
(ii) Form Hierarchy
Given a distance matrix for the n individuals, an agglomerative clustering method produces a hierarchical tree by starting with n clusters, each containing a single individual, and then at each of n - 1 stages merging two clusters to form a larger cluster, until all individuals are in a single cluster. At each stage, the two clusters that are nearest are merged to form a new cluster and a new distance matrix is computed for the reduced number of clusters.
Methods differ as to how the distances between the new cluster and other clusters are computed. For three clusters i, j and k, let n_i, n_j and n_k be the number of objects in each cluster, and let d_ij, d_ik and d_jk be the distances between the clusters. If clusters j and k are to be merged to give cluster jk, then the distance from cluster i to cluster jk, d_i.jk, can be computed in the following ways.
(a) Single link or nearest neighbour: d_i.jk = min(d_ij, d_ik).
(b) Complete link or furthest neighbour: d_i.jk = max(d_ij, d_ik).
(c) Group average: d_i.jk = (n_j/(n_j + n_k)) d_ij + (n_k/(n_j + n_k)) d_ik.
(d) Centroid: d_i.jk = (n_j/(n_j + n_k)) d_ij + (n_k/(n_j + n_k)) d_ik - (n_j n_k/(n_j + n_k)^2) d_jk.
(e) Median: d_i.jk = (1/2) d_ij + (1/2) d_ik - (1/4) d_jk.
(f) Minimum variance: d_i.jk = [(n_i + n_j) d_ij + (n_i + n_k) d_ik - n_i d_jk] / (n_i + n_j + n_k).
For further details, see Everitt (1974) or Krzanowski (1990).
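The agglomerative step can be sketched as follows using the single link update (a); the other updates (b)–(f) differ only in how the new distances are combined. This is a minimal illustrative sketch, not the algorithm of nag_mv_cluster_hier (g03ec), and the representation of clusters by frozensets is an arbitrary choice.

import numpy as np

def single_link(d):
    """Illustrative single link agglomeration from an n-by-n distance matrix d.

    Returns a list of merges (cluster_a, cluster_b, distance), with clusters
    recorded as frozensets of the original 0-based individual indices.
    """
    d = np.asarray(d, dtype=float)
    clusters = {i: frozenset([i]) for i in range(d.shape[0])}
    dist = {(i, j): d[i, j] for i in clusters for j in clusters if i < j}
    merges = []
    while len(clusters) > 1:
        (a, b), dab = min(dist.items(), key=lambda kv: kv[1])   # nearest pair of clusters
        merges.append((clusters[a], clusters[b], dab))
        new = max(clusters) + 1                                 # label for the merged cluster
        clusters[new] = clusters.pop(a) | clusters.pop(b)
        for c in clusters:
            if c != new:
                # single link: distance to the merged cluster is the smaller of the two old distances
                dist[(c, new)] = min(dist.pop((min(a, c), max(a, c))),
                                     dist.pop((min(b, c), max(b, c))))
        dist = {k: v for k, v in dist.items() if a not in k and b not in k}
    return merges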
(iii) Produce Dendrogram and Clusters
Hierarchical cluster analysis can be represented by a tree that shows at which distance the clusters merge. Such a tree is known as a dendrogram; see Everitt (1974) and Krzanowski (1990).
A simple example is given in Figure 1.
The end points of the dendrogram represent the individuals that have been clustered.
Alternatively, the information from the tree can be used to produce either a chosen number of clusters or the clusters that exist at a given distance. The latter is equivalent to taking the dendrogram and drawing a line across at a given distance to produce clusters.

Non-hierarchical clustering

Non-hierarchical cluster analysis usually forms a given number of clusters from the data. There is no requirement that if first k - 1 and then k clusters were requested, the k - 1 clusters would be formed from the k clusters.
Most non-hierarchical methods of cluster analysis seek to partition the set of individuals into a number of clusters so as to optimize a criterion. The number of clusters is usually specified prior to the analysis. One commonly used criterion is the within-cluster sum of squares. Given n individuals with p variables measured on each individual, x_ij, for i = 1, 2, …, n and j = 1, 2, …, p, the within-cluster sum of squares for K clusters is
SS_c = Σ_{k=1}^K Σ_{i∈S_k} Σ_{j=1}^p (x_ij - x̄_kj)^2,
where S_k is the set of objects in the kth cluster and x̄_kj is the mean for variable j over cluster k. Starting with an initial allocation of individuals to clusters, the method then seeks to minimize SS_c by a series of re-allocations. This is often known as K-means clustering.
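The re-allocation procedure can be sketched as follows with Lloyd-style iterations that reduce SS_c. This is an illustrative sketch, not nag_mv_cluster_kmeans (g03ef); the random initial centres and the stopping rule are arbitrary choices.

import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Illustrative K-means clustering: alternate allocation and mean updates
    so as to reduce the within-cluster sum of squares SSc."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centres = X[rng.choice(X.shape[0], size=K, replace=False)]   # initial cluster centres
    for _ in range(n_iter):
        # allocate each individual to the nearest cluster centre
        alloc = ((X[:, None, :] - centres[None, :, :])**2).sum(axis=2).argmin(axis=1)
        # recompute each cluster mean x-bar_k (keep the old centre if a cluster empties)
        new_centres = np.array([X[alloc == k].mean(axis=0) if np.any(alloc == k) else centres[k]
                                for k in range(K)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    ssc = ((X - centres[alloc])**2).sum()     # within-cluster sum of squares at convergence
    return alloc, centres, ssc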
In the K-means case individuals belong to a single cluster and are excluded from all remaining clusters. Alternatively, probabilities of cluster membership can be estimated and each cluster can have its own distributional properties. For example, given an initial set of probabilities, the Normal (Gaussian) mixture model uses the E–M method of Dempster et al. (1977) to maximize the sum of log-likelihoods over K clusters for a given covariance model, ranging from pooled variance to individual covariance matrices.

Discriminant analysis

Discriminant analysis is concerned with the allocation of objects to n_g groups on the basis of observations on those objects using an allocation rule. This rule is computed from observations coming from a training set in which group membership is known. The allocation rule is based on the distance between the object and an estimate of the location of the groups. If p variables are observed and the vector of means for the jth group in the training set is x̄_j, then the usual measure of the distance of an observation, x_k, from the jth group mean is given by the Mahalanobis squared distance
D_kj^2 = (x_k - x̄_j)^T S*^{-1} (x_k - x̄_j),
where S* is either the within-group variance-covariance matrix, S_j, for the n_j objects in the jth group, or a pooled variance-covariance matrix, S, computed from all n objects from all groups, where
S = Σ_{j=1}^{n_g} (n_j - 1) S_j / (n - n_g).
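The pooled variance-covariance matrix and the Mahalanobis squared distances can be sketched as follows. The sketch is illustrative only, not nag_mv_discrim (g03da) or nag_mv_discrim_mahal (g03db); the function name and the choice of the pooled matrix (rather than the within-group matrices S_j) are assumptions of the example.

import numpy as np

def mahalanobis_to_groups(X_train, groups, X_new):
    """Illustrative Mahalanobis squared distances D_kj^2 from each row of X_new
    to each group mean, using the pooled within-group variance-covariance matrix."""
    X_train = np.asarray(X_train, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    n, p = X_train.shape
    means, S_pooled = [], np.zeros((p, p))
    for g in labels:
        Xg = X_train[groups == g]
        means.append(Xg.mean(axis=0))
        S_pooled += (Xg.shape[0] - 1) * np.cov(Xg, rowvar=False)   # (n_j - 1) S_j
    S_pooled /= n - labels.size                                    # S = sum (n_j - 1) S_j / (n - n_g)
    S_inv = np.linalg.inv(S_pooled)
    X_new = np.asarray(X_new, dtype=float)
    D2 = np.empty((X_new.shape[0], labels.size))
    for j, m in enumerate(means):
        diff = X_new - m
        D2[:, j] = np.einsum("ki,ij,kj->k", diff, S_inv, diff)     # (x_k - xbar_j)^T S^{-1} (x_k - xbar_j)
    return D2, labels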
If the within-group variance-covariance matrices can be assumed to be equal then the pooled variance-covariance matrix can be used. This assumption can be tested using the test statistic
G = C { (n - n_g) log|S| - Σ_{j=1}^{n_g} (n_j - 1) log|S_j| },
where
C = 1 - ((2p^2 + 3p - 1) / (6(p + 1)(n_g - 1))) ( Σ_{j=1}^{n_g} 1/(n_j - 1) - 1/(n - n_g) ).
For large n, G is approximately distributed as a χ^2 variable with (1/2) p (p + 1)(n_g - 1) degrees of freedom; see Morrison (1967).
In addition to the distances, a set of prior probabilities of group membership, π_j, for j = 1, 2, …, n_g, may be used. The prior probabilities reflect your view as to the likelihood of the objects coming from the different groups.
It is generally assumed that the p variables follow a multivariate Normal distribution with, for the jth group, mean μ_j and variance-covariance matrix Σ_j. If p(x_k | μ_j, Σ_j) is the probability of observing the observation x_k from group j, then the posterior probability of belonging to group j is
p(j | x_k, μ_j, Σ_j) ∝ p(x_k | μ_j, Σ_j) π_j.
An observation is allocated to the group with the highest posterior probability.
In the estimative approach to discrimination, the parameters μ_j and Σ_j in p(j | x_k, μ_j, Σ_j) are replaced by their estimates calculated from the training set. If it is assumed that the within-group variance-covariance matrices are equal then the linear discriminant function is obtained; otherwise, if it is assumed that the variance-covariance matrices are unequal, then the quadratic discriminant function is obtained.
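Under the equal covariance (linear discriminant) assumption, the allocation step can be sketched from the distances alone, since the group density is then proportional to exp(-D_kj^2 / 2) and the posterior probabilities follow from the priors π_j. This is an illustrative sketch, not nag_mv_discrim_group (g03dc); D2 is assumed to be a matrix of Mahalanobis squared distances such as that computed in the earlier sketch.

import numpy as np

def allocate(D2, priors):
    """Illustrative estimative allocation assuming equal within-group covariance matrices.

    D2[k, j] is the Mahalanobis squared distance of observation k from group j and
    priors are the prior probabilities pi_j; the posterior probability of group j is
    proportional to pi_j * exp(-D2[k, j] / 2).
    """
    log_post = np.log(np.asarray(priors, dtype=float)) - 0.5 * D2   # unnormalized log posteriors
    log_post -= log_post.max(axis=1, keepdims=True)                 # stabilize before exponentiating
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                         # posterior probabilities
    return post.argmax(axis=1), post          # allocated group index and posterior probabilities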
In the Bayesian predictive approach, a non-informative prior distribution is used for the parameters, giving the posterior distribution for the parameters from the training set, X_t, of p(μ_j, Σ_j | X_t). A predictive distribution is then obtained by integrating p(j | x_k, μ_j, Σ_j) p(μ_j, Σ_j | X_t) over the parameter space. This predictive distribution, p(x_k | X_t), then replaces p(x_k | μ_j, Σ_j) to give
p(j | x_k, μ_j, Σ_j) ∝ p(x_k | X_t) π_j.
In addition to allocating the objects to groups, an atypicality index for each object and for each group can be computed. This represents the probability of obtaining an observation more typical of the group than that observed. A high value of the atypicality index for all groups indicates that the observation may in fact come from a group not represented in the training set.
Alternative approaches to discrimination are the use of canonical variates and logistic discrimination. Canonical variate analysis is described above and, as it seeks to find the directions that best discriminate between groups, these directions can also be used to allocate further observations. This can be viewed as an extension of Fisher's linear discriminant function. This approach does not assume that the data is Normally distributed, but Fisher's linear discriminant function may not perform well on non-Normal data. In the case of two groups, logistic regression can be performed with the response variable indicating the group allocation and the variables in the discriminant analysis being the explanatory variables. Allocation can then be made on the basis of the fitted response value. This is known as logistic discrimination and can be shown to be valid for a wide range of distributional assumptions.

Scaling methods

Scaling methods seek to represent the observed dissimilarities or distances between objects as distances between points in Euclidean space. For example, if the distances between objects A, B and C were 3, 4 and 5, the distances could be represented exactly by three points in two-dimensional space. Only their relative positions would be important; the whole configuration of points could be rotated or shifted without affecting the distances between the points. If a one-dimensional representation was required, the ‘best’ representation might give distances of 2 1/3, 3 1/3 and 5 2/3, which may be an adequate representation. If the distances were 3, 4 and 8 then these distances could not be exactly represented in Euclidean space, even in two dimensions, the best representation being three points in a straight line giving distances 3, 4 and 7.
In practice, the user of scaling methods has to decide upon the number of dimensions in which the data is to be represented. The smaller the number, the easier it will be to assimilate the information. The chosen number of dimensions needs to give an adequate representation of the data but will often not give an exact representation, because either the number of chosen dimensions is too small or the data cannot be represented in Euclidean space.
Two basic methods are available depending on the nature of the dissimilarities or distances being analysed. If the distances can be assumed to satisfy the metric inequality
d_ij ≤ d_ik + d_kj,
then the distances can be represented exactly by points in Euclidean space and the technique known as metric scaling, classical scaling or principal coordinate analysis can be used. This technique involves computing the eigenvalues of a matrix derived from the distance matrix. The eigenvectors corresponding to the k largest positive eigenvalues give the best k dimensions in which to represent the objects. If there are negative eigenvalues then the distance matrix cannot be represented in Euclidean space.
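The eigenvalue computation for metric scaling can be sketched as follows, using the standard construction in which the coordinates are taken from the k largest positive eigenvalues of the doubly centred matrix of squared distances. This particular construction and the function name are assumptions of the illustration, which is independent of nag_mv_multidimscal_metric (g03fa).

import numpy as np

def classical_scaling(D, k):
    """Illustrative metric (classical) scaling / principal coordinate analysis.

    D is an n-by-n matrix of distances; returns n-by-k coordinates from the
    k largest positive eigenvalues of B = -0.5 * J D^2 J (J the centring matrix).
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D**2) @ J                 # doubly centred matrix of squared distances
    evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    if np.any(evals[:k] <= 0):
        raise ValueError("fewer than k positive eigenvalues: the distances "
                         "cannot be represented in k Euclidean dimensions")
    return evecs[:, :k] * np.sqrt(evals[:k])  # the best k-dimensional coordinates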
Instead of the above approach of requiring the distances from the points to match the distances from the objects as closely as possible, sometimes only a rank order equivalence is required. That is, the ith largest distance between objects should, as far as possible, be represented by the ith largest distance between points. This would be appropriate when the dissimilarities are based on subjective rankings. For example, if the objects were foods and a number of judges ranked the foods for different qualities such as taste and texture, the resulting distances would not necessarily obey the metric inequality, but the rank order would be significant. Alternatively, by relaxing the requirement from matching distances to rank order equivalence only, the number of dimensions required to represent the distance matrix may be decreased. The requirement of rank order equivalence leads to non-metric or ordinal multidimensional scaling. The criterion used to measure the closeness of the fitted distance matrix to the observed distance matrix is known as STRESS, which is given by
STRESS = sqrt( Σ_{i=1}^n Σ_{j=1}^{i-1} (d̂_ij - d̃_ij)^2 / Σ_{i=1}^n Σ_{j=1}^{i-1} d̂_ij^2 ),
where d̂_ij^2 is the Euclidean squared distance between the computed points i and j, and d̃_ij is the fitted distance obtained when d̂_ij is monotonically regressed on the observed distances d_ij; that is, d̃_ij is monotonic relative to d_ij and is obtained from d̂_ij with the smallest number of changes. Thus STRESS is a measure of the extent to which the set of points preserves the order of the distances in the original distance matrix, and non-metric multidimensional scaling seeks to find the set of points that minimizes the STRESS.

Recommendations on Choice and Use of Available Functions

See Section [Functionality Index] for a list of functions available in this Chapter.
Note also that nag_correg_glm_binomial (g02gb) will fit a logistic regression model and can be used for logistic discrimination.

Functionality Index

Canonical correlation analysis nag_mv_canon_corr (g03ad)
Canonical variate analysis nag_mv_canon_var (g03ac)
Cluster Analysis, 
    compute distance matrix nag_mv_distance_mat (g03ea)
    construct clusters following nag_mv_cluster_hier (g03ec) nag_mv_cluster_hier_indicator (g03ej)
    construct dendrogram following nag_mv_cluster_hier (g03ec) nag_mv_cluster_hier_dendrogram (g03eh)
    Gaussian mixture model nag_mv_gaussian_mixture (g03ga)
    hierarchical nag_mv_cluster_hier (g03ec)
    K-means nag_mv_cluster_kmeans (g03ef)
Discriminant Analysis, 
    allocation of observations to groups, following nag_mv_discrim (g03da) nag_mv_discrim_group (g03dc)
    Mahalanobis squared distances, following nag_mv_discrim (g03da) nag_mv_discrim_mahal (g03db)
    test for equality of within-group covariance matrices nag_mv_discrim (g03da)
Factor Analysis, 
    factor score coefficients, following nag_mv_factor (g03ca) nag_mv_factor_score (g03cc)
    maximum likelihood estimates of parameters nag_mv_factor (g03ca)
Principal component analysis nag_mv_prin_comp (g03aa)
Rotations, 
    orthogonal rotations for loading matrix nag_mv_rot_orthomax (g03ba)
    Procrustes rotations nag_mv_rot_procrustes (g03bc)
    ProMax rotations nag_mv_rot_promax (g03bd)
Scaling Methods, 
    multidimensional scaling nag_mv_multidimscal_ordinal (g03fc)
    principal coordinate analysis nag_mv_multidimscal_metric (g03fa)
Standardize values of a data matrix nag_mv_z_scores (g03za)

References

Chatfield C and Collins A J (1980) Introduction to Multivariate Analysis Chapman and Hall
Dempster A P, Laird N M and Rubin D B (1977) Maximum likelihood from incomplete data via the EM algorithm (with discussion) J. Roy. Statist. Soc. Ser. B 39 1–38
Everitt B S (1974) Cluster Analysis Heinemann
Gnanadesikan R (1977) Methods for Statistical Data Analysis of Multivariate Observations Wiley
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Kendall M G and Stuart A (1976) The Advanced Theory of Statistics (Volume 3) (3rd Edition) Griffin
Krzanowski W J (1990) Principles of Multivariate Analysis Oxford University Press
Lawley D N and Maxwell A E (1971) Factor Analysis as a Statistical Method (2nd Edition) Butterworths
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013