NAG Toolbox Chapter Introduction

F02 — Eigenvalues and Eigenvectors

Scope of the Chapter

This chapter provides functions for various types of matrix eigenvalue problem, for both real and complex data.
The majority of functions for these problems can be found in Chapter F08 which contains software derived from LAPACK (see Anderson et al. (1999)). However, you should read the introduction to this chapter before turning to Chapter F08, especially if you are a new user. Chapter F12 contains functions for large sparse eigenvalue problems, although one such function is also available in this chapter.
Chapters F02 and F08 contain Black Box (or Driver) functions that enable many problems to be solved by a call to a single function, and the decision trees in Section [Decision Trees] direct you to the most appropriate functions in Chapters F02 and F08. The Chapter F02 functions call functions in Chapters F07 and F08 wherever possible to perform the computations, and there are pointers in Section [Decision Trees] to the relevant decision trees in Chapter F08.

Background to the Problems

Here we describe the different types of problem which can be tackled by the functions in this chapter, and give a brief outline of the methods used to solve them. If you have one specific type of problem to solve, you need only read the relevant sub-section and then turn to Section [Recommendations on Choice and Use of Available Functions]. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (1996) or Parlett (1998).
In each sub-section, we first describe the problem in terms of real matrices. The changes needed to adapt the discussion to complex matrices are usually simple and obvious: a matrix transpose such as Q^T must be replaced by its conjugate transpose Q^H; symmetric matrices must be replaced by Hermitian matrices, and orthogonal matrices by unitary matrices. Any additional changes are noted at the end of the sub-section.

Standard Eigenvalue Problems

Let A be a square matrix of order n. The standard eigenvalue problem is to find eigenvalues, λ, and corresponding eigenvectors, x ≠ 0, such that
Ax = λx.  (1)
(The phrase ‘eigenvalue problem’ is sometimes abbreviated to eigenproblem.)

Standard symmetric eigenvalue problems

If A is real symmetric, the eigenvalue problem has many desirable features, and it is advisable to take advantage of symmetry whenever possible.
The eigenvalues λ are all real, and the eigenvectors can be chosen to be mutually orthogonal. That is, we can write
Az_i = λ_i z_i  for i = 1,2,…,n
or equivalently:
AZ = ZΛ  (2)
where Λ is a real diagonal matrix whose diagonal elements λ_i are the eigenvalues, and Z is a real orthogonal matrix whose columns z_i are the eigenvectors. This implies that z_i^T z_j = 0 if i ≠ j, and ‖z_i‖_2 = 1.
Equation (2) can be rewritten
A = ZΛZ^T.  (3)
This is known as the eigen-decomposition or spectral factorization of A.
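As a minimal illustration (a sketch using MATLAB's built-in eig rather than the NAG functions discussed below; the test matrix is an arbitrary choice), the spectral factorization (3) can be computed and verified as follows:

    % Small real symmetric test matrix (arbitrary example from the gallery).
    n = 5;
    A = gallery('minij', n);

    % Eigen-decomposition: the columns of Z are orthonormal eigenvectors and
    % Lambda is diagonal with the (real) eigenvalues.
    [Z, Lambda] = eig(A);

    % Verify A = Z*Lambda*Z' and the orthogonality of Z.
    norm(A - Z*Lambda*Z', 'fro')   % of the order of machine precision
    norm(Z'*Z - eye(n), 'fro')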
Eigenvalues of a real symmetric matrix are well-conditioned, that is, they are not unduly sensitive to perturbations in the original matrix A. The sensitivity of an eigenvector depends on how small the gap is between its eigenvalue and any other eigenvalue: the smaller the gap, the more sensitive the eigenvector. More details on the accuracy of computed eigenvalues and eigenvectors are given in the function documents, and in the F08 Chapter Introduction.
For dense or band matrices, the computation of eigenvalues and eigenvectors proceeds in the following stages:
  1. A is reduced to a symmetric tridiagonal matrix T by an orthogonal similarity transformation: A = QTQ^T, where Q is orthogonal. (A tridiagonal matrix is zero except for the main diagonal and the first subdiagonal and superdiagonal on either side.) T has the same eigenvalues as A and is easier to handle.
  2. Eigenvalues and eigenvectors of T are computed as required. If all eigenvalues (and optionally eigenvectors) are required, they are computed by the QR algorithm, which effectively factorizes T as T = SΛS^T, where S is orthogonal. If only selected eigenvalues are required, they are computed by bisection, and if selected eigenvectors are required, they are computed by inverse iteration. If s is an eigenvector of T, then Qs is an eigenvector of A.
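These two stages can be mimicked with MATLAB built-ins (a sketch only; the NAG functions perform the reduction and the QR iteration internally). Note that hess applied to a symmetric matrix returns a tridiagonal matrix:

    A = gallery('moler', 6);        % arbitrary real symmetric test matrix

    % Stage 1: orthogonal reduction to tridiagonal form, A = Q*T*Q'.
    [Q, T] = hess(A);               % T is tridiagonal because A is symmetric

    % Stage 2: eigenvalues and eigenvectors of the tridiagonal matrix T.
    [S, Lambda] = eig(T);

    % Eigenvectors of A are recovered as Q*S.
    Z = Q*S;
    norm(A*Z - Z*Lambda, 'fro')     % residual should be tiny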
All the above remarks also apply – with the obvious changes – to the case when A is a complex Hermitian matrix. The eigenvectors are complex, but the eigenvalues are all real, and so is the tridiagonal matrix T.
If A is large and sparse, the methods just described would be very wasteful in both storage and computing time, and therefore an alternative algorithm, known as subspace iteration, is provided (for real problems only) to find a (usually small) subset of the eigenvalues and their corresponding eigenvectors. Chapter F12 contains functions based on the Lanczos method for real symmetric large sparse eigenvalue problems, and these functions are usually more efficient than subspace iteration.
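For comparison, MATLAB's eigs (which, like the Chapter F12 functions, uses a Lanczos/Arnoldi-type iteration) computes a few eigenpairs of a large sparse symmetric matrix without ever forming a dense matrix; this is only an illustrative sketch:

    % Sparse symmetric example: 2-D Laplacian on a square grid.
    A = delsq(numgrid('S', 62));    % sparse, symmetric positive definite

    % Six eigenvalues of largest magnitude and their eigenvectors.
    [V, D] = eigs(A, 6);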

Standard nonsymmetric eigenvalue problems

A real nonsymmetric matrix A may have complex eigenvalues, occurring as complex conjugate pairs. If x is an eigenvector corresponding to a complex eigenvalue λ, then the complex conjugate vector x̄ is the eigenvector corresponding to the complex conjugate eigenvalue λ̄. Note that the vector x defined in equation (1) is sometimes called a right eigenvector; a left eigenvector y is defined by
y^H A = λy^H  or  A^T y = λ̄y.
Functions in this chapter only compute right eigenvectors (the usual requirement), but functions in Chapter F08 can compute left or right eigenvectors or both.
The eigenvalue problem can be solved via the Schur factorization of A, defined as
A = ZTZ^T,
where Z is an orthogonal matrix and T is a real upper quasi-triangular matrix, with the same eigenvalues as A. T is called the Schur form of A. If all the eigenvalues of A are real, then T is upper triangular, and its diagonal elements are the eigenvalues of A. If A has complex conjugate pairs of eigenvalues, then T has 2 by 2 diagonal blocks, whose eigenvalues are the complex conjugate pairs of eigenvalues of A. (The structure of T is simpler if the matrices are complex – see below.)
For example, the following matrix is in quasi-triangular form
    1   *   *   *
    0   2  -1   *
    0   1   2   *
    0   0   0   3
and has eigenvalues 1, 2 ± i, and 3. (The elements indicated by ‘*’ may take any values.)
The columns of Z are called the Schur vectors. For each k (1 ≤ k ≤ n), the first k columns of Z form an orthonormal basis for the invariant subspace corresponding to the first k eigenvalues on the diagonal of T. (An invariant subspace (for A) is a subspace S such that for any vector v in S, Av is also in S.) Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of k eigenvalues occupy the k leading positions on the diagonal of T, and functions for this purpose are provided in Chapter F08.
Note that if A is symmetric, the Schur vectors are the same as the eigenvectors, but if A is nonsymmetric, they are distinct, and the Schur vectors, being orthonormal, are often more satisfactory to work with in numerical computation.
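A hedged MATLAB sketch of the real Schur factorization, and of reordering a selected set of eigenvalues to the leading positions of T (the corresponding NAG functions are in Chapter F08):

    A = randn(6);                   % arbitrary real nonsymmetric matrix

    % Real Schur factorization A = Z*T*Z', with T upper quasi-triangular.
    [Z, T] = schur(A, 'real');

    % Reorder so that eigenvalues with positive real part occupy the leading
    % diagonal positions; the first k columns of ZS then span the
    % corresponding invariant subspace.
    select = real(ordeig(T)) > 0;
    [ZS, TS] = ordschur(Z, T, select);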
Eigenvalues and eigenvectors of a nonsymmetric matrix may be ill-conditioned, that is, sensitive to perturbations in A. Chapter F08 contains functions which compute or estimate the condition numbers of eigenvalues and eigenvectors, and the F08 Chapter Introduction gives more details about the error analysis of nonsymmetric eigenproblems. The accuracy with which eigenvalues and eigenvectors can be obtained is often improved by balancing a matrix. This is discussed further in Section [Balancing for Nonsymmetric Eigenproblems].
Computation of eigenvalues, eigenvectors or the Schur factorization proceeds in the following stages:
  1. A is reduced to an upper Hessenberg matrix H by an orthogonal similarity transformation: A = QHQ^T, where Q is orthogonal. (An upper Hessenberg matrix is zero below the first subdiagonal.) H has the same eigenvalues as A, and is easier to handle.
  2. The upper Hessenberg matrix H is reduced to Schur form T by the QR algorithm, giving the Schur factorization H = STS^T. The eigenvalues of A are obtained from the diagonal blocks of T. The matrix Z of Schur vectors (if required) is computed as Z = QS.
  3. After the eigenvalues have been found, eigenvectors may be computed, if required, in two different ways. Eigenvectors of H can be computed by inverse iteration, and then pre-multiplied by Q to give eigenvectors of A; this approach is usually preferred if only a few eigenvectors are required. Alternatively, eigenvectors of T can be computed by back-substitution, and pre-multiplied by Z to give eigenvectors of A.
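The stages can be illustrated with MATLAB built-ins (a sketch only; eig and schur carry out stages 2 and 3 internally):

    A = randn(5);                   % arbitrary real nonsymmetric matrix

    % Stage 1: orthogonal reduction to upper Hessenberg form, A = Q*H*Q'.
    [Q, H] = hess(A);
    norm(tril(H, -2), 'fro')        % H is zero below the first subdiagonal

    % Stage 2: Schur form of H; the Schur vectors of A are Z = Q*S.
    [S, T] = schur(H, 'real');
    Z = Q*S;
    norm(A - Z*T*Z', 'fro')         % reconstruction residual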
All the above remarks also apply – with the obvious changes – to the case when A is a complex matrix. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the Schur form T is simply a complex upper triangular matrix.
As for the symmetric eigenvalue problem, if A is large and sparse then it is generally preferable to use an alternative method. Chapter F12 provides functions based on Arnoldi's method for both real and complex matrices, intended to find a subset of the eigenvalues and vectors.

The Singular Value Decomposition

The singular value decomposition (SVD) of a real m by n matrix A is given by
A = UΣV^T,
where U and V are orthogonal and Σ is an m by n diagonal matrix with real diagonal elements, σ_i, such that
σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_min(m,n) ≥ 0.
The σ_i are the singular values of A and the first min(m,n) columns of U and V are, respectively, the left and right singular vectors of A. The singular values and singular vectors satisfy
Av_i = σ_i u_i  and  A^T u_i = σ_i v_i
where u_i and v_i are the ith columns of U and V respectively.
The singular value decomposition of A is closely related to the eigen-decompositions of the symmetric matrices A^T A or AA^T, because:
A^T A v_i = σ_i^2 v_i  and  AA^T u_i = σ_i^2 u_i.
However, these relationships are not recommended as a means of computing singular values or vectors unless A is sparse and functions from Chapter F12 are to be used.
If U_k, V_k denote the leading k columns of U and V respectively, and if Σ_k denotes the leading principal submatrix of Σ, then
A_k ≡ U_k Σ_k V_k^T
is the best rank-k approximation to A in both the 2-norm and the Frobenius norm.
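For example, the best rank-k approximation can be formed from the leading singular triplets computed by MATLAB's svd (a sketch; the NAG Black Box function for the leading terms of the SVD is nag_eigen_real_gen_partialsvd (f02wg)):

    A = randn(8, 5);                          % arbitrary 8 by 5 example
    [U, S, V] = svd(A);

    k = 2;
    Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % best rank-k approximation to A

    % The 2-norm error equals the (k+1)-th singular value.
    [norm(A - Ak), S(k+1,k+1)]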
Singular values are well-conditioned; that is, they are not unduly sensitive to perturbations in A. The sensitivity of a singular vector depends on how small the gap is between its singular value and any other singular value: the smaller the gap, the more sensitive the singular vector. More details on the accuracy of computed singular values and vectors are given in the function documents and in the F08 Chapter Introduction.
The singular value decomposition is useful for the numerical determination of the rank of a matrix, and for solving linear least squares problems, especially when they are rank-deficient (or nearly so). See Chapter F04.
Computation of singular values and vectors proceeds in the following stages:
  1. A is reduced to an upper bidiagonal matrix B by an orthogonal transformation A = U_1 B V_1^T, where U_1 and V_1 are orthogonal. (An upper bidiagonal matrix is zero except for the main diagonal and the first superdiagonal.) B has the same singular values as A, and is easier to handle.
  2. The SVD of the bidiagonal matrix B is computed as B = U_2 Σ V_2^T, where U_2 and V_2 are orthogonal and Σ is diagonal as described above. Then in the SVD of A, U = U_1 U_2 and V = V_1 V_2.
All the above remarks also apply – with the obvious changes – to the case when A is a complex matrix. The singular vectors are complex, but the singular values are real and non-negative, and the bidiagonal matrix B is also real.
By formulating the problems appropriately, real large sparse singular value problems may be solved using the symmetric eigenvalue functions in Chapter F12.

Generalized Eigenvalue Problems

Let A and B be square matrices of order n. The generalized eigenvalue problem is to find eigenvalues, λ, and corresponding eigenvectors, x ≠ 0, such that
Ax = λBx.  (4)
For given A and B, the set of all matrices of the form A − λB is called a pencil, and λ and x are said to be an eigenvalue and eigenvector of the pencil A − λB.
When B is nonsingular, equation (4) is mathematically equivalent to (B^(-1)A)x = λx, and when A is nonsingular, it is equivalent to (A^(-1)B)x = (1/λ)x. Thus, in theory, if one of the matrices A or B is known to be nonsingular, the problem could be reduced to a standard eigenvalue problem.
However, for this reduction to be satisfactory from the point of view of numerical stability, it is necessary not only that B (or A) should be nonsingular, but that it should be well-conditioned with respect to inversion. The nearer B is to singularity, the more unsatisfactory B^(-1)A will be as a vehicle for determining the required eigenvalues. Well-determined eigenvalues of the original problem (4) may be poorly determined even by the correctly rounded version of B^(-1)A.
We consider first a special class of problems in which BB is known to be nonsingular, and then return to the general case in the following sub-section.

Generalized symmetric-definite eigenvalue problems

If A and B are symmetric and B is positive definite, then the generalized eigenvalue problem has desirable properties similar to those of the standard symmetric eigenvalue problem. The eigenvalues are all real, and the eigenvectors, while not orthogonal in the usual sense, satisfy the relations z_i^T B z_j = 0 for i ≠ j and can be normalized so that z_i^T B z_i = 1.
Note that it is not enough for A and B to be symmetric; B must also be positive definite, which implies nonsingularity. Eigenproblems with these properties are referred to as symmetric-definite problems.
If Λ is the diagonal matrix whose diagonal elements are the eigenvalues, and Z is the matrix whose columns are the eigenvectors, then
Z^T A Z = Λ  and  Z^T B Z = I.
To compute eigenvalues and eigenvectors, the problem can be reduced to a standard symmetric eigenvalue problem, using the Cholesky factorization of B as LL^T or U^T U (see Chapter F07). Note, however, that this reduction does implicitly involve the inversion of B, and hence this approach should not be used if B is ill-conditioned with respect to inversion.
For example, with B = LL^T, we have
Az = λBz  ⇒  (L^(-1) A L^(-T)) (L^T z) = λ (L^T z).
Hence the eigenvalues of Az = λBz are those of Cy = λy, where C is the symmetric matrix C = L^(-1) A L^(-T) and y = L^T z. The standard symmetric eigenproblem Cy = λy may be solved by the methods described in Section [Standard symmetric eigenvalue problems]. The eigenvectors z of the original problem may be recovered by computing z = L^(-T) y.
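A minimal MATLAB sketch of this reduction (illustrative only: in practice eig(A,B) or the NAG functions handle the problem directly, and the sketch assumes B is well-conditioned):

    n = 5;
    A = gallery('moler', n);        % symmetric
    B = gallery('minij', n);        % symmetric positive definite

    % Cholesky factorization B = L*L'.
    L = chol(B, 'lower');

    % C = inv(L)*A*inv(L'), formed with triangular solves, then symmetrized
    % against rounding.
    C = L \ A / L';
    C = (C + C')/2;

    [Y, Lambda] = eig(C);           % standard symmetric eigenproblem C*y = lambda*y
    Z = L' \ Y;                     % recover eigenvectors of the original problem

    norm(A*Z - B*Z*Lambda, 'fro')   % residual of A*z = lambda*B*z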
Most of the functions which solve this class of problems can also solve the closely related problems
ABx = λx  or  BAx = λx
where again A and B are symmetric and B is positive definite. See the function documents for details.
All the above remarks also apply – with the obvious changes – to the case when A and B are complex Hermitian matrices. Such problems are called Hermitian-definite. The eigenvectors are complex, but the eigenvalues are all real.
If A and B are large and sparse, reduction to an equivalent standard eigenproblem as described above would almost certainly result in a large dense matrix C, and hence would be very wasteful in both storage and computing time. The methods of subspace iteration and Lanczos type methods, mentioned in Section [Standard symmetric eigenvalue problems], can also be used for large sparse generalized symmetric-definite problems.

Generalized nonsymmetric eigenvalue problems

Any generalized eigenproblem which is not symmetric-definite with well-conditioned B must be handled as if it were a general nonsymmetric problem.
If B is singular, the problem has infinite eigenvalues. These are not a problem; they are equivalent to zero eigenvalues of the problem Bx = μAx. Computationally they appear as very large values.
If A and B are both singular and have a common null space, then A − λB is singular for all λ; in other words, any value λ can be regarded as an eigenvalue. Pencils with this property are called singular.
As with standard nonsymmetric problems, a real problem may have complex eigenvalues, occurring as complex conjugate pairs.
The generalized eigenvalue problem can be solved via the generalized Schur factorization of A and B:
A = QUZ^T,  B = QVZ^T
where Q and Z are orthogonal, V is upper triangular, and U is upper quasi-triangular (defined just as in Section [Standard nonsymmetric eigenvalue problems]).
If all the eigenvalues are real, then U is upper triangular; the eigenvalues are given by λ_i = u_ii / v_ii. If there are complex conjugate pairs of eigenvalues, then U has 2 by 2 diagonal blocks.
Eigenvalues and eigenvectors of a generalized nonsymmetric problem may be ill-conditioned; that is, sensitive to perturbations in A or B.
Particular care must be taken if, for some i, u_ii = v_ii = 0, or in practical terms if u_ii and v_ii are both small; this means that the pencil is singular, or approximately so. Not only is the particular value λ_i undetermined, but also no reliance can be placed on any of the computed eigenvalues. See also the function documents.
Computation of eigenvalues and eigenvectors proceeds in the following stages.
  1. The pencil A − λB is reduced by an orthogonal transformation to a pencil H − λK in which H is upper Hessenberg and K is upper triangular: A = Q_1 H Z_1^T and B = Q_1 K Z_1^T. The pencil H − λK has the same eigenvalues as A − λB, and is easier to handle.
  2. The upper Hessenberg matrix H is reduced to upper quasi-triangular form, while K is maintained in upper triangular form, using the QZ algorithm. This gives the generalized Schur factorization: H = Q_2 U Z_2^T and K = Q_2 V Z_2^T.
  3. Eigenvectors of the pencil U − λV are computed (if required) by back-substitution, and pre-multiplied by Z_1 Z_2 to give eigenvectors of the original pencil A − λB.
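A hedged MATLAB illustration: qz computes the generalized (real) Schur form, and eig(A,B) returns the generalized eigenvalues and right eigenvectors directly. Note that MATLAB's convention, Q*A*Z = AA, transposes Q relative to the factorization written above:

    A = randn(5);  B = randn(5);              % arbitrary real pencil

    % Generalized real Schur form: Q*A*Z = AA (quasi-triangular), Q*B*Z = BB.
    [AA, BB, Q, Z] = qz(A, B, 'real');

    % Generalized eigenvalues and right eigenvectors of A*x = lambda*B*x.
    [V, D] = eig(A, B);
    norm(A*V - B*V*D, 'fro')                  % residual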
All the above remarks also apply – with the obvious changes – to the case when A and B are complex matrices. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the matrix U in the generalized Schur factorization is simply a complex upper triangular matrix.
As for the generalized symmetric-definite eigenvalue problem, if A and B are large and sparse then it is generally preferable to use an alternative method. Chapter F12 provides functions based on Arnoldi's method for both real and complex matrices, intended to find a subset of the eigenvalues and vectors.

Recommendations on Choice and Use of Available Functions

Black Box Functions and General Purpose Functions

Functions in the NAG Toolbox for solving eigenvalue problems fall into two categories.
  1. Black Box Functions: these are designed to solve a standard type of problem in a single call – for example, to compute all the eigenvalues and eigenvectors of a real symmetric matrix. You are recommended to use a black box function if there is one to meet your needs; refer to the decision trees in Section [Black Box Functions] or the index in Section [Functionality Index].
  2. General Purpose Functions: these perform the computational subtasks which make up the separate stages of the overall task, as described in Section [Background to the Problems] – for example, reducing a real symmetric matrix to tridiagonal form. For historical reasons, general purpose functions are found partly in this chapter, a few in Chapter F01, but mostly in Chapter F08. If there is no black box function that meets your needs, you will need to use one or more general purpose functions.
The decision trees in Section [General Purpose Functions (Eigenvalues and Eigenvectors)] list the combinations of general purpose functions which are needed to solve many common types of problem.
Sometimes a combination of a black box function and one or more general purpose functions will be the most convenient way to solve your problem: the black box function can be used to compute most of the results, and a general purpose function can be used to perform a subsidiary computation, such as computing condition numbers of eigenvalues and eigenvectors.

Computing Selected Eigenvalues and Eigenvectors

The decision trees and the function documents make a distinction between functions which compute all eigenvalues or eigenvectors, and functions which compute selected eigenvalues or eigenvectors; the two classes of function use different algorithms.
It is difficult to give clear guidance on which of these two classes of function to use in a particular case, especially with regard to computing eigenvectors. If you only wish to compute a very few eigenvectors, then a function for selected eigenvectors will be more economical, but if you want to compute a substantial subset (an old rule of thumb suggested more than 25%), then it may be more economical to compute all of them. Conversely, if you wish to compute all the eigenvectors of a sufficiently large symmetric tridiagonal matrix, the function for selected eigenvectors may be faster.
The choice depends on the properties of the matrix and on the computing environment; if it is critical, you should perform your own timing tests.
For dense nonsymmetric eigenproblems, there are no algorithms provided for computing selected eigenvalues; it is always necessary to compute all the eigenvalues, but you can then select specific eigenvectors for computation by inverse iteration.

Storage Schemes for Symmetric Matrices

Functions which handle symmetric matrices are usually designed to use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length n(n+1)/2; in other words, the storage is almost halved. This storage format is referred to as packed storage.
Functions designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.
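As an illustration of packed storage (a sketch of the storage layout only, not the NAG interface itself), the lower triangle can be held column by column in a vector of length n(n+1)/2:

    n = 4;
    A = gallery('minij', n);                  % symmetric example

    % Pack the lower triangle, column by column, into a vector of length
    % n*(n+1)/2 (the LAPACK-style 'L' packed storage order).
    ap = A(tril(true(n)));

    % Unpack into a full symmetric matrix again.
    Afull = zeros(n);
    Afull(tril(true(n))) = ap;
    Afull = Afull + tril(Afull, -1)';
    isequal(A, Afull)                         % true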
A band matrix is one whose nonzero elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required.
Functions which take advantage of packed storage or bandedness are provided for both standard symmetric eigenproblems and generalized symmetric-definite eigenproblems.

Balancing for Nonsymmetric Eigenproblems

There are two preprocessing steps which one may perform on a nonsymmetric matrix A in order to make its eigenproblem easier. Together they are referred to as balancing.
  1. Permutation: this involves reordering the rows and columns to make A more nearly upper triangular (and thus closer to Schur form): A → PAP^T, where P is a permutation matrix. If A has a significant number of zero elements, this preliminary permutation can reduce the amount of work required, and also improve the accuracy of the computed eigenvalues. In the extreme case, if A is permutable to upper triangular form, then no floating-point operations are needed to reduce it to Schur form.
  2. Scaling: a diagonal matrix D is used to make the rows and columns of A more nearly equal in norm: A → DAD^(-1). Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter II/11 of Wilkinson and Reinsch (1971)).
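MATLAB's balance performs the same preprocessing (a sketch; the corresponding NAG/LAPACK balancing functions are in Chapter F08):

    A = [1  1e6; 1e-6  1];          % badly scaled example

    % Diagonal similarity scaling: B = T\A*T has rows and columns of more
    % nearly equal norm, and exactly the same eigenvalues as A.
    [T, B] = balance(A);

    eig(A)
    eig(B)                          % same eigenvalues, from a better-scaled matrix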
Functions are provided in Chapter F08 for performing either or both of these preprocessing steps, and also for transforming computed eigenvectors or Schur vectors back to those of the original matrix.
The black box functions in this chapter which compute eigenvectors perform both forms of balancing.

Non-uniqueness of Eigenvectors and Singular Vectors

Eigenvectors, as defined by equations (1) or (4), are not uniquely defined. If x is an eigenvector, then so is kx where k is any nonzero scalar. Eigenvectors computed by different algorithms, or on different computers, may appear to disagree completely, though in fact they differ only by a scalar factor (which may be complex). These differences should not be significant in any application in which the eigenvectors will be used, but they can arouse uncertainty about the correctness of computed results.
Even if eigenvectors x are normalized so that ‖x‖_2 = 1, this is not sufficient to fix them uniquely, since they can still be multiplied by a scalar factor k such that |k| = 1. To counteract this inconvenience, most of the functions in this chapter, and in Chapter F08, normalize eigenvectors (and Schur vectors) so that ‖x‖_2 = 1 and the component of x with largest absolute value is real and positive. (There is still a possible indeterminacy if there are two components of equal largest absolute value – or in practice if they are very close – but this is rare.)
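A sketch of this normalization applied to a computed eigenvector x (assuming x is a column vector, for example one returned by MATLAB's eig):

    [V, D] = eig(randn(4) + 1i*randn(4));     % arbitrary complex example
    x = V(:,1);

    % Scale so that norm(x) = 1 and the component of largest absolute value
    % is real and positive.
    x = x / norm(x);
    [~, j] = max(abs(x));
    x = x * (conj(x(j)) / abs(x(j)));
    x(j)                                      % now real and positive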
In symmetric problems the computed eigenvalues are sorted into ascending order, but in nonsymmetric problems the order in which the computed eigenvalues are returned is dependent on the detailed working of the algorithm and may be sensitive to rounding errors. The Schur form and Schur vectors depend on the ordering of the eigenvalues and this is another possible cause of non-uniqueness when they are computed. However, it must be stressed again that variations in the results from this cause should not be significant. (Functions in Chapter F08 can be used to transform the Schur form and Schur vectors so that the eigenvalues appear in any given order if this is important.)
In singular value problems, the left and right singular vectors u and v which correspond to a singular value σ cannot be normalized independently: if u is multiplied by a factor k such that |k| = 1, then v must also be multiplied by k.
Non-uniqueness also occurs among eigenvectors which correspond to a multiple eigenvalue, or among singular vectors which correspond to a multiple singular value. In practice, this is more likely to be apparent as the extreme sensitivity of eigenvectors which correspond to a cluster of close eigenvalues (or of singular vectors which correspond to a cluster of close singular values).

Decision Trees

Black Box Functions

The decision tree for this section is divided into three sub-trees.
Note: for the Chapter F08 functions there is generally a choice of simple and comprehensive function. The comprehensive functions return additional information such as condition and/or error estimates.

Tree 1: Eigenvalues and Eigenvectors of Real Matrices

Is this a sparse eigenproblem Ax = λx or Ax = λBx?
  yes: Is the problem symmetric?
    yes: nag_eigen_real_symm_sparse_eigsys (f02fj)
    no: nag_eigen_real_gen_sparse_arnoldi (f02ek)
  no: Is the eigenproblem Ax = λBx?
    yes: Are A and B symmetric with B positive definite and well-conditioned w.r.t. inversion?
      yes: Are A and B band matrices?
        yes: nag_lapack_dsbgv (f08ua), nag_lapack_dsbgvx (f08ub) or nag_sparseig_real_symm_band_init (f12ff) and nag_sparseig_real_band_solve (f12ag)
        no: nag_lapack_dsygv (f08sa) or nag_lapack_dsygvx (f08sb)
      no: Are A and B band matrices?
        yes: nag_sparseig_real_band_init (f12af) and nag_sparseig_real_band_solve (f12ag)
        no: Is the generalized Schur factorization required?
          yes: nag_lapack_dgges (f08xa)
          no: nag_lapack_dggev (f08wa) or nag_lapack_dggevx (f08wb)
    no: The eigenproblem is Ax = λx. Is A symmetric?
      yes: Are all eigenvalues or all eigenvectors required?
        yes: nag_lapack_dsyev (f08fa)
        no: nag_lapack_dsyevx (f08fb)
      no: Are eigenvalues only required?
        yes: nag_lapack_dgeev (f08na) or nag_lapack_dgeevx (f08nb)
        no: Is the Schur factorization required?
          yes: nag_lapack_dgees (f08pa) or nag_lapack_dgeesx (f08pb)
          no: Are all eigenvectors required?
            yes: nag_lapack_dgeev (f08na) or nag_lapack_dgeevx (f08nb)
            no: nag_eigen_real_gen_eigsys (f02ec)

Tree 2: Eigenvalues and Eigenvectors of Complex Matrices

Is this a sparse eigenproblem Ax = λx or Ax = λBx?
  yes: Are A and B banded matrices?
    yes: nag_sparseig_complex_band_init (f12at) and nag_sparseig_complex_band_solve (f12au)
    no: Chapter F12
  no: Is the eigenproblem Ax = λBx?
    yes: Are A and B Hermitian with B positive definite and well-conditioned w.r.t. inversion?
      yes: nag_lapack_zhbgv (f08un) or nag_lapack_zhbgvx (f08up)
      no: Is the generalized Schur factorization required?
        yes: nag_lapack_zgges (f08xn)
        no: nag_lapack_zggev (f08wn)
    no: The eigenproblem is Ax = λx. Is A Hermitian?
      yes: Are all eigenvalues and eigenvectors required?
        yes: nag_lapack_zheev (f08fn) or nag_lapack_zheevx (f08fp)
        no: nag_lapack_zheevx (f08fp)
      no: Are eigenvalues only required?
        yes: nag_lapack_zgeev (f08nn) or nag_lapack_zgeevx (f08np)
        no: Is the Schur factorization required?
          yes: nag_lapack_zgees (f08pn) or nag_lapack_zgeesx (f08pp)
          no: Are all eigenvectors required?
            yes: nag_lapack_zgeev (f08nn) or nag_lapack_zgeevx (f08np)
            no: nag_eigen_complex_gen_eigsys (f02gc)

Tree 3: Singular Values and Singular Vectors

Is A a complex matrix?
  yes: Is A upper triangular?
    yes: nag_eigen_complex_triang_svd (f02xu)
    no: nag_lapack_zgesvd (f08kp)
  no: Is A upper triangular?
    yes: nag_eigen_real_triang_svd (f02wu)
    no: Are only the leading terms required?
      yes: nag_eigen_real_gen_partialsvd (f02wg)
      no: nag_lapack_dgesvd (f08kb)

General Purpose Functions (Eigenvalues and Eigenvectors)

Functions for large sparse eigenvalue problems are to be found in Chapter F12, see the F12 Chapter Introduction.
The decision tree for this section, addressing dense problems, is divided into eight sub-trees.
As it is very unlikely that one of the functions in this section will be called on its own, the other functions required to solve a given problem are listed in the order in which they should be called.

General Purpose Functions (Singular Value Decomposition)

See Section [General Purpose Functions (singular value decomposition)] in the F08 Chapter Introduction. For real sparse matrices where only selected singular values are required (possibly with their singular vectors), functions from Chapter F12 may be applied to the symmetric matrix A^T A; see Section [Example] in (f12fb).

Functionality Index

Black Box functions, 
    complex eigenproblem, 
        selected eigenvalues and eigenvectors nag_eigen_complex_gen_eigsys (f02gc)
    complex upper triangular matrix, 
        singular values and, optionally, left and/or right singular vectors nag_eigen_complex_triang_svd (f02xu)
    generalized real sparse symmetric-definite eigenproblem, 
        selected eigenvalues and eigenvectors nag_eigen_real_symm_sparse_eigsys (f02fj)
    real eigenproblem, 
        selected eigenvalues and eigenvectors nag_eigen_real_gen_eigsys (f02ec)
    real sparse eigenproblem, 
        selected eigenvalues and eigenvectors nag_eigen_real_gen_sparse_arnoldi (f02ek)
    real sparse symmetric matrix, 
        selected eigenvalues and eigenvectors nag_eigen_real_symm_sparse_eigsys (f02fj)
    real upper triangular matrix, 
        singular values and, optionally, left and/or right singular vectors nag_eigen_real_triang_svd (f02wu)
General Purpose functions (see also Chapter F08), 
    real band matrix, selected eigenvector, A − λB nag_eigen_real_band_geneig (f02sd)
    real m by n matrix (m ≥ n), QR factorization and SVD nag_eigen_real_gen_qu_svd (f02wd)
General Purpose functions (see also Chapter F12), 
    real m by n matrix, leading terms SVD nag_eigen_real_gen_partialsvd (f02wg)

References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013