NAG CL Interface
F12 (Sparseig)
Large Scale Eigenproblems

1 Scope of the Chapter

This chapter provides functions for computing some eigenvalues and eigenvectors of large-scale (sparse) standard and generalized eigenvalue problems; functions are also available for computing a partial singular value decomposition and for solving polynomial eigenvalue problems. Functions are provided for both real and complex data.

1.1 ARPACK Functions

The functions in this chapter whose short names begin with either f12a or f12f have all been derived from the ARPACK software suite (see Lehoucq et al. (1998)), a collection of Fortran 77 functions designed to solve large scale eigenvalue problems. The interfaces provided in this chapter have been chosen to combine ease of use with the flexibility of the original ARPACK software. The underlying iterative methods and algorithms remain essentially the same as those in ARPACK and are described fully in Lehoucq et al. (1998).
The algorithms used in the ARPACK functions are based upon an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method. For symmetric matrices, this reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method. These variants may be viewed as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted QR technique that is suitable for large scale problems. For many standard problems, a matrix factorization is not required. Only the action of the matrix on a vector is needed.
The ARPACK functions can be used to find the eigenvalues with the largest and/or smallest magnitudes, real parts or imaginary parts.

1.2 FEAST Functions

The functions in this chapter whose short names begin with ‘f12j’ have been derived from the FEAST software suite (see Polizzi (2009)). FEAST is a general purpose eigensolver for standard, generalized and polynomial eigenvalue problems. It is suitable for both sparse and dense matrices, and functions are available for real, complex, symmetric, Hermitian and non-Hermitian eigenvalue problems. The FEAST algorithm requires you to specify a particular region of interest in the complex plane within which eigenvalues are sought. The algorithm then performs a numerical quadrature computation, involving solving linear systems along a complex contour around the region of interest.

2 Background to the Problems

This section is only a brief introduction to the solution of large-scale eigenvalue problems. For a more detailed discussion see, for example, Saad (1992) or Lehoucq (1995) in addition to Lehoucq et al. (1998). The basic factorization techniques and definitions of terms used for the different problem types are given in Section 2 in the F08 Chapter Introduction.

2.1 Sparse Matrices and their Storage

A matrix A may be described as sparse if the number of zero elements is so large that it is worthwhile using algorithms which avoid computations involving zero elements.
If A is sparse, and the chosen algorithm requires the matrix coefficients to be stored, a significant saving in storage can often be made by storing only the nonzero elements. A number of different formats may be used to represent sparse matrices economically. These differ according to the amount of storage required, the amount of indirect addressing required for fundamental operations such as matrix-vector products, and their suitability for vector and/or parallel architectures. For a survey of some of these storage formats see Barrett et al. (1994).
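For example, the compressed column storage (CCS) format (one of the formats surveyed in Barrett et al. (1994), and the format accepted by the driver f02ekc mentioned in Section 4.1.2) stores only the nonzero elements, column by column. The following is a minimal sketch; the struct and helper names are ours for illustration, not Library types:
typedef struct {
    int     n;     /* matrix order                            */
    int     nnz;   /* number of stored nonzeros               */
    double *val;   /* nonzero values, length nnz              */
    int    *irow;  /* row index of each value, length nnz     */
    int    *icol;  /* start of each column in val, length n+1 */
} ccs_matrix;

/* Matrix-vector product y = A*x for a CCS matrix: note the
   indirect addressing through irow, typical of sparse formats. */
static void ccs_matvec(const ccs_matrix *a, const double *x, double *y)
{
    for (int i = 0; i < a->n; i++) y[i] = 0.0;
    for (int j = 0; j < a->n; j++)
        for (int k = a->icol[j]; k < a->icol[j+1]; k++)
            y[a->irow[k]] += a->val[k] * x[j];
}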
Most of the functions in this chapter have been designed to be independent of the matrix storage format. This allows you to choose your own preferred format, or to avoid storing the matrix altogether. The remaining functions are general purpose: they are easier to use, but are based on fixed storage formats. One such format is currently provided: the LAPACK banded storage format, as used in Chapters F07 and F08 for storing general banded matrices.

2.2 Symmetric Eigenvalue Problems

The symmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, z ≠ 0, such that
  Az = λz,   A = A^T,   where A is real.
For the Hermitian eigenvalue problem we have
  Az = λz,   A = A^H,   where A is complex.
For both problems the eigenvalues λ are real.
The basic task of the symmetric eigenproblem functions is to compute some of the values of λ and, optionally, the corresponding vectors z for a given matrix A. For example, we may wish to obtain the ten eigenvalues of largest magnitude of a large sparse matrix A.

2.3 Generalized Symmetric-definite Eigenvalue Problems

This section is concerned with the solution of the generalized eigenvalue problems Az = λBz, ABz = λz, and BAz = λz, where A and B are real symmetric or complex Hermitian and B is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of B as either B = LL^T or B = U^TU (LL^H or U^HU in the Hermitian case).
With B = LL^T, we have
  Az = λBz   ⇒   (L^{−1}AL^{−T})(L^Tz) = λ(L^Tz).
Hence the eigenvalues of Az = λBz are those of Cy = λy, where C is the symmetric matrix C = L^{−1}AL^{−T} and y = L^Tz. In the complex case, C is Hermitian with C = L^{−1}AL^{−H} and y = L^Hz.
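The same reduction goes through with the alternative factorization B = U^TU; the analogous display (included here for completeness) is
  Az = λBz   ⇒   (U^{−T}AU^{−1})(Uz) = λ(Uz),
so that C = U^{−T}AU^{−1} and y = Uz (C = U^{−H}AU^{−1} in the Hermitian case).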
The basic task of the generalized symmetric eigenproblem functions is to compute some of the values of λ and, optionally, the corresponding vectors z for a given matrix pair (A, B). For example, we may wish to obtain the ten eigenvalues of largest magnitude of a large sparse matrix pair (A, B).

2.4 Nonsymmetric Eigenvalue Problems

The nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that
  Av = λv.
More precisely, a vector v as just defined is called a right eigenvector of A, and a vector u ≠ 0 satisfying
  u^TA = λu^T   (u^HA = λu^H when u is complex)
is called a left eigenvector of A.
A real matrix A may have complex eigenvalues, occurring as complex conjugate pairs.
This problem can be solved via the Schur factorization of A, defined in the real case as
  A = ZTZ^T,
where Z is an orthogonal matrix and T is an upper quasi-triangular matrix with 1×1 and 2×2 diagonal blocks, the 2×2 blocks corresponding to complex conjugate pairs of eigenvalues of A. In the complex case, the Schur factorization is
  A = ZTZ^H,
where Z is unitary and T is a complex upper triangular matrix.
The columns of Z are called the Schur vectors. For each k (1 ≤ k ≤ n), the first k columns of Z form an orthonormal basis for the invariant subspace corresponding to the first k eigenvalues on the diagonal of T. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of k eigenvalues occupies the k leading positions on the diagonal of T.
The two basic tasks of the nonsymmetric eigenvalue functions are to compute, for a given matrix A , some values of λ and, if desired, their associated right eigenvectors v , and the Schur factorization.

2.5 Generalized Nonsymmetric Eigenvalue Problem

The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that
  Av = λBv,   ABv = λv,   and   BAv = λv.
More precisely, a vector v as just defined is called a right eigenvector of the matrix pair (A, B), and a vector u ≠ 0 satisfying
  u^TA = λu^TB   (u^HA = λu^HB when u is complex)
is called a left eigenvector of the matrix pair (A, B).

2.6 The Polynomial Eigenvalue Problem

The polynomial eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that
  ∑_{i=0}^{p} λ^i A_i v = 0.
Here the A_i are matrices, and p is known as the degree of the problem.
More precisely, a vector v as just defined is a right eigenvector of the problem, and a vector u ≠ 0 satisfying
  ∑_{i=0}^{p} λ^i u^H A_i = 0
is a left eigenvector of the problem.

2.7 The Singular Value Decomposition

The singular value decomposition (SVD) of an m×n matrix A is given by
  A = UΣV^T   (A = UΣV^H in the complex case),
where U and V are orthogonal (unitary) and Σ is an m×n diagonal matrix with real diagonal elements, σ_i, such that
  σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_min(m,n) ≥ 0.
The σ_i are the singular values of A and the first min(m,n) columns of U and V are the left and right singular vectors of A. The singular values and singular vectors satisfy
  Av_i = σ_i u_i   and   A^Tu_i = σ_i v_i   (or A^Hu_i = σ_i v_i),   so that   A^TAv_i = σ_i^2 v_i   (A^HAv_i = σ_i^2 v_i in the complex case),
where u_i and v_i are the ith columns of U and V respectively.
Thus selected singular values and the corresponding right singular vectors may be computed by finding eigenvalues and eigenvectors of the symmetric matrix A^TA (or the Hermitian matrix A^HA if A is complex).
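For instance, when only the action of A^TA on a vector is needed (as in the reverse communication approach of Section 4.1.4), it can be supplied as two successive products without ever forming A^TA explicitly. A minimal dense sketch using CBLAS follows; the helper name and the plain column-major storage are our illustrative assumptions:
#include <cblas.h>

/* Compute w = A^T * (A * v) for a dense m-by-n matrix A stored
   column-major with leading dimension m; t is scratch of length m.
   This is the operator handed to a symmetric eigensolver when
   selected singular values and right singular vectors are wanted. */
static void ata_matvec(int m, int n, const double *a,
                       const double *v, double *t, double *w)
{
    /* t = A * v */
    cblas_dgemv(CblasColMajor, CblasNoTrans, m, n,
                1.0, a, m, v, 1, 0.0, t, 1);
    /* w = A^T * t */
    cblas_dgemv(CblasColMajor, CblasTrans, m, n,
                1.0, a, m, t, 1, 0.0, w, 1);
}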
An alternative approach is to use the relationship
  [ 0    A ] [ U ]   [ U ]
  [ A^T  0 ] [ V ] = [ V ] Σ
and thus compute selected singular values and vectors via the symmetric matrix
  C = [ 0    A ]      ( C = [ 0    A ]  if A is complex ).
      [ A^T  0 ]            [ A^H  0 ]
In many applications, one is interested in computing a few (say k) of the largest singular values and corresponding vectors. If U_k and V_k denote the leading k columns of U and V respectively, and if Σ_k denotes the leading principal k×k submatrix of Σ, then
  A_k ≡ U_k Σ_k V_k^T   (or U_k Σ_k V_k^H)
is the best rank-k approximation to A in both the 2-norm and the Frobenius norm. Often a very small k will suffice to approximate important features of the original A, or to approximately solve least squares problems involving A.
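The quality of this approximation is known explicitly (the Eckart–Young theorem, a standard result stated here for reference):
  ‖A − A_k‖_2 = σ_{k+1}   and   ‖A − A_k‖_F = (σ_{k+1}^2 + ⋯ + σ_min(m,n)^2)^{1/2},
so the decay of the computed singular values indicates how small k may be taken.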

2.8 Iterative Methods

Iterative methods for the solution of the standard eigenproblem
  Ax = λx   (1)
approach the solution through a sequence of approximations until some user-specified termination criterion is met or until some predefined maximum number of iterations has been reached. The number of iterations required for convergence is not generally known in advance, as it depends on the accuracy required, and on the matrix A , its sparsity pattern, conditioning and eigenvalue spectrum.

3 Choosing between ARPACK and FEAST Functions

Both the ARPACK and FEAST suites can handle standard, generalized, symmetric, Hermitian and non-Hermitian eigenvalue problems, with both left and right eigenvectors returned. However, the suites differ in the subset of eigenvalues that will be returned.
The ARPACK solvers can be instructed to find the eigenvalues with the largest and/or smallest magnitudes, real parts or imaginary parts.
The FEAST solvers allow you to specify a region in the complex plane (or an interval on the real line for Hermitian problems) within which eigenvalues will be found.
Note also that FEAST contains solvers for the polynomial eigenvalue problem.

4 ARPACK Functions

4.1 Recommendations on Choice and Use of Available Functions

4.1.1 Types of Function Available

The ARPACK functions available in this chapter divide essentially into three suites of basic reverse communication functions and some general purpose functions for banded systems.
Basic functions are grouped in suites of five, and implement the underlying iterative method. Each suite comprises a setup function, an options setting function, a solver function, a function to return additional monitoring information and a post-processing function. The solver function is independent of the matrix storage format (indeed the matrix need not be stored at all) and the type of preconditioner. It uses reverse communication (see Section 7 in How to Use the NAG Library for further information), i.e., it returns repeatedly to the calling program with the argument irevcm set to specified values which require the calling program to carry out a specific task (either to compute a matrix-vector product or to solve the preconditioning equation), to signal the completion of the computation or to allow the calling program to monitor the solution. Reverse communication has the following advantages:
  (i) Maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns. This also applies to preconditioners.
  (ii) Enhanced user interaction: you can closely monitor the solution, and tidy or immediate termination can be requested. This is useful, for example, when alternative termination criteria are to be employed, or in case of failure of the external functions used to perform matrix operations.
At present there are suites of basic functions for real symmetric and nonsymmetric systems, and for complex systems.
General purpose functions call basic functions in order to provide easy-to-use functions for particular sparse matrix storage formats. They are much less flexible than the basic functions, but do not use reverse communication, and may be suitable in many cases.
The structure of this part of the chapter has been designed to cater for as many types of application as possible. If a general purpose function exists which is suitable for a given application you are recommended to use it. If you then decide you need some additional flexibility it is easy to achieve this by using basic and utility functions which reproduce the algorithm used in the general purpose function, but allow more access to algorithmic control parameters and monitoring.

4.1.2 Iterative Methods for Real Nonsymmetric and Complex Eigenvalue Problems

The suite of basic functions f12aac, f12abc, f12acc, f12adc and f12aec implements the iterative solution of real nonsymmetric eigenvalue problems, finding estimates for a specified part of the spectrum. These eigenvalue estimates are often referred to as Ritz values, and the error bounds obtained are referred to as Ritz estimates. These functions allow a choice of termination criteria and many other options for specifying the problem type, allow monitoring of the solution process, and can return the Ritz estimates of the computed Ritz values.
For complex matrices there is an equivalent suite of functions: f12anc, f12apc, f12aqc, f12arc and f12asc are the basic functions which implement methods corresponding to those used for real nonsymmetric systems. Note that these functions are to be used for both Hermitian and non-Hermitian problems. Occasionally, when using these functions on a complex Hermitian problem, eigenvalues will be returned with small but nonzero imaginary parts due to unavoidable round-off errors. These should be ignored unless they are significant with respect to the eigenvalues of largest magnitude that have been computed.
There are general purpose functions for the case where the matrices are known to be banded. In these cases an initialization function is called first to set up default options, and the problem is solved by a single call to a solver function. The matrices are supplied, in LAPACK banded-storage format, as arguments to the solver function. For real general matrices these functions are f12afc and f12agc; and for complex matrices the pair is f12atc and f12auc. With each pair non-default options can be set, following a call to the initialization function, using f12adc for real matrices and f12arc for complex matrices. For real matrices that can be supplied in the sparse matrix compressed column storage (CCS) format, the driver function f02ekc is available. This function uses functions from Chapter F12 in conjunction with direct solver functions from Chapter F11.
There is little computational penalty in using the non-Hermitian complex functions for a Hermitian problem. The only additional cost is to compute eigenvalues of a Hessenberg rather than a tridiagonal matrix. The difference in computational cost should be negligible compared to the overall cost.

4.1.3 Iterative Methods for Real Symmetric Eigenvalue Problems

The suite of basic functions f12fac, f12fbc, f12fcc, f12fdc and f12fec implements a Lanczos method for the iterative solution of the real symmetric eigenproblem.
There is a general purpose function pair for the case where the matrices are known to be banded. In this case an initialization function, f12ffc, is called first to set up default options, and the problem is solved by a single call to a solver function, f12fgc. The matrices are supplied, in LAPACK banded-storage format, as arguments to f12fgc. Non-default options can be set, following a call to f12ffc, using f12fdc.

4.1.4 Iterative Methods for Singular Value Decomposition

The partial singular value decomposition, A_k (as defined in Section 2.7), of an m×n matrix A can be computed efficiently using functions from this chapter. For real matrices, the suite of functions listed in Section 4.1.3 (for symmetric problems) can be used; for complex matrices, the corresponding suite of functions for complex problems can be used; note, however, that there are no general purpose functions for complex problems.
The driver function f02wgc is available for computing the partial SVD of real matrices. The matrix is not supplied to f02wgc; rather, a user-defined function argument provides the results of performing matrix-vector products.
For both real and complex matrices, you should use the default options (see, for example, the options listed in Section 11 in f12fdc) for problem type (Standard), computational mode (Regular) and spectrum (Largest Magnitude). The operation to be performed on request by the reverse communication function (e.g., f12fbc) is, for real matrices, to multiply the returned vector by the symmetric matrix A^TA if m ≥ n, or by AA^T if m < n. For complex matrices, the corresponding Hermitian matrices are A^HA and AA^H.
The right (m ≥ n) or left (m < n) singular vectors are returned by the post-processing function (e.g., f12fcc). The left (or right) singular vectors can be recovered from the returned singular vectors. Provided the largest singular values are not multiple or tightly clustered, there should be no problem in obtaining numerically orthogonal left singular vectors from the computed right singular vectors (or vice versa).
The second example in Section 10 in f12fbc illustrates how the partial singular value decomposition of a real matrix can be performed using the suite of functions for finding some eigenvalues of a real symmetric matrix. In this example m ≥ n; however, the program is easily amended to perform the same task in the case m < n.
Similarly, functions in this part of the chapter may be used to estimate the 2-norm condition number
  K_2(A) = σ_1/σ_n.
This can be achieved by setting the option Both Ends to compute the largest and smallest few singular values, then taking the ratio of the largest to the smallest computed singular value as the estimate.

4.1.5 Alternative Methods

Other functions for the solution of sparse linear eigenproblems can be found in Chapters F02 and F08. In particular, tridiagonal and band matrices are addressed in Chapter F08 whereas sparse matrices are addressed in Chapter F02.

4.2 General Use of Functions

This section will describe the complete structure of the reverse communication interfaces. Numerous computational modes are available, including several shift-invert strategies designed to accelerate convergence. Two of the more sophisticated modes will be described in detail. The remaining ones are quite similar in principle, but require slightly different tasks to be performed with the reverse communication interface.
This section is structured as follows. The naming conventions used and the data types available are described in Section 4.2.1; spectral transformations are discussed in Section 4.2.2. Spectral transformations are usually extremely effective, but there are a number of problem-dependent issues that determine which one to use. In Section 4.2.3 we describe the reverse communication interface needed to exercise the various shift-invert options. Each shift-invert option is specified as a computational mode and all of these are summarised in the remaining sections. There is a subsection for each problem type, and hence these sections are quite similar and repetitive. Once the basic idea is understood, it is probably best to turn directly to the subsection that describes the problem setting most relevant to you.
Perhaps the easiest way to rapidly become acquainted with the modes in this part of the chapter is to run each of the example programs which use the various modes. These may be used as templates and adapted to solve specific problems.

4.2.1 Naming Conventions

In their short names, functions for solving nonsymmetric (real and complex) eigenvalue problems have the letter ‘a’ as the first letter after the chapter name, e.g., f12abc; equivalent functions for symmetric eigenvalue problems have this letter replaced by ‘f’ (and ‘_symm’ added to their long names), e.g., f12fbc. For the letter following this, functions for real eigenvalue problems have letters in the range ‘a’ to ‘m’ (and long names beginning ‘nag_real’), while those for complex eigenvalue problems have letters correspondingly shifted into the range ‘n’ to ‘z’ (and long names beginning ‘nag_complex’); so, for example, the complex equivalent of f12adc is f12arc, while the real symmetric equivalent is f12fdc.
A suite of five functions is named consecutively in the short names, the functions differing only in the final word of their long names, e.g., f12aac, f12abc, f12acc, f12adc and f12aec. Each general purpose function has its own initialization function, but uses the option setting function from the suite relevant to the problem type. Thus each general purpose function can be viewed as belonging to a suite of three functions, even though only two of the functions are named consecutively. For example, f12adc, f12afc and f12agc represent the suite of functions for solving a banded real nonsymmetric eigenvalue problem.

4.2.2 Shift and Invert Spectral Transformations

The most general problem that may be solved here is to compute a few selected eigenvalues and corresponding eigenvectors for
  Ax = λBx,   where A and B are real or complex n×n matrices.   (2)
The shift and invert spectral transformation is used to enhance convergence to a desired portion of the spectrum. If (x, λ) is an eigenpair for (A, B) and σ ≠ λ then
  (A − σB)^{−1}Bx = νx,   where ν = 1/(λ − σ).   (3)
This transformation is effective for finding eigenvalues near σ since the n_ν eigenvalues of C ≡ (A − σB)^{−1}B that are largest in magnitude correspond to the n_ν eigenvalues λ_j of the original problem that are nearest to the shift σ in absolute value. These transformed eigenvalues of largest magnitude are precisely the eigenvalues that are easy to compute with a Krylov method (see Barrett et al. (1994)). Once they are found, they may be transformed back to eigenvalues of the original problem. The direct relation is
  λ_j = σ + 1/ν_j,
and the eigenvector x_j associated with ν_j in the transformed problem is also an eigenvector of the original problem corresponding to λ_j. Usually the Arnoldi process will rapidly obtain good approximations to the eigenvalues of C of largest magnitude. However, to implement this transformation, you must provide the means to solve linear systems involving A − σB, either with a matrix factorization or with an iterative method.
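For reference, (3) follows from (2) in one line: subtracting σBx from both sides of Ax = λBx gives
  (A − σB)x = (λ − σ)Bx,   so   (A − σB)^{−1}Bx = (1/(λ − σ))x = νx.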
In general, C will be non-Hermitian even if A and B are both Hermitian. However, this is easily remedied. The assumption that B is Hermitian positive definite implies that the bilinear form
  ⟨x, y⟩ ≡ x^HBy
is an inner product. If B is positive semidefinite and singular, then a semi-inner product results. This is a weighted B-inner product, and vectors x, y are called B-orthogonal if ⟨x, y⟩ = 0. It is easy to show that if A is Hermitian (self-adjoint) then C is self-adjoint with respect to this B-inner product (meaning ⟨Cx, y⟩ = ⟨x, Cy⟩ for all vectors x, y). Therefore, symmetry will be preserved if we force the computed basis vectors to be orthogonal in this B-inner product. Implementing this B-orthogonality requires you to provide a matrix-vector product Bv on request along with each application of C. In the following sections we shall discuss some of the more familiar transformations to the standard eigenproblem. However, when B is positive (semi)definite, we recommend using the shift-invert spectral transformation with B-inner products if at all possible. This is a far more robust transformation when B is ill-conditioned or singular. With a little extra manipulation (provided automatically in the post-processing functions) the semi-inner product induced by B prevents corruption of the computed basis vectors by roundoff error associated with the presence of infinite eigenvalues. These very ill-conditioned eigenvalues are generally associated with a singular or highly ill-conditioned B. A detailed discussion of this theory may be found in Chapter 4 of Lehoucq et al. (1998).
Shift-invert spectral transformations are very effective and should even be used on standard problems, B = I , whenever possible. This is particularly true when interior eigenvalues are sought or when the desired eigenvalues are clustered. Roughly speaking, a set of eigenvalues is clustered if the maximum distance between any two eigenvalues in that set is much smaller than the minimum distance between these eigenvalues and any other eigenvalues of (A,B) .
If you have a generalized problem (B ≠ I), then you must provide a way to solve linear systems with either A, B or a linear combination of the two matrices in order to use the reverse communication suites in this chapter. In this case, a sparse direct method should be used to factorize the appropriate matrix whenever possible. The resulting factorization may be used repeatedly to solve the required linear systems once it has been obtained. If instead you decide to use an iterative method, the accuracy of the solutions must be commensurate with the convergence tolerance used for the Arnoldi iteration: a slightly more stringent tolerance is needed relative to the desired accuracy of the eigenvalue calculation.
The main drawback with using the shift-invert spectral transformation is that the coefficient matrix A - σ B is typically indefinite in the Hermitian case and has zero-valued eigenvalues in the non-Hermitian case. These are often the most difficult situations for iterative methods and also for sparse direct methods.
The decision to use a spectral transformation on a standard eigenvalue problem (B = I) or to use one of the simple modes is problem dependent. The simple modes have the advantage that you only need to supply a matrix-vector product Av. However, this approach is usually only successful for problems where extremal non-clustered eigenvalues are sought. In non-Hermitian problems, extremal means eigenvalues near the boundary of the spectrum of A. For Hermitian problems, extremal means eigenvalues at the left- or right-hand end points of the spectrum of A. The notion of non-clustered (or well separated) is difficult to define without going into considerable detail. A simplistic notion of a well-separated eigenvalue λ_j for a Hermitian problem would be |λ_i − λ_j| > τ|λ_n − λ_1| for all i ≠ j, with τ ≫ ε (the machine precision), where λ_1 and λ_n are the smallest and largest eigenvalues algebraically. Unless a matrix-vector product is quite difficult to code or extremely expensive computationally, it is probably worth trying the simple mode first if you are seeking extremal eigenvalues.
The remainder of this section discusses additional transformations that may be applied to convert a generalized eigenproblem to a standard eigenproblem. These are appropriate when B is well-conditioned (Hermitian or non-Hermitian).
4.2.2.1 B is Hermitian positive definite
If B is Hermitian positive definite and well-conditioned (i.e., ‖B‖ ‖B^{−1}‖ is of modest size), then computing the Cholesky factorization B = LL^H and converting equation (2) to
  (L^{−1}AL^{−H})y = λy,   where L^Hx = y,
provides a transformation to a standard eigenvalue problem. In this case, a request for a matrix-vector product would be satisfied with the following three steps (sketched in the code after this list):
  (i) Solve L^Hz = v for z.
  (ii) Matrix-vector multiply z ← Az.
  (iii) Solve Lw = z for w.
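The following is a minimal dense sketch of these three steps in C, using CBLAS triangular solves. The helper name, and the assumption that the Cholesky factor has already been computed (e.g., once, with LAPACKE_dpotrf), are ours; the real symmetric case (B = LL^T) is shown.
#include <cblas.h>

/* w = op(v) = L^{-1} A L^{-T} v for real symmetric A of order n with
   B = L L^T, L lower triangular; all matrices dense, column-major.
   z is scratch of length n.  L is assumed computed once beforehand. */
static void op_cholesky(int n, const double *a, const double *l,
                        const double *v, double *z, double *w)
{
    /* (i)   solve L^T z = v (z starts as a copy of v) */
    cblas_dcopy(n, v, 1, z, 1);
    cblas_dtrsv(CblasColMajor, CblasLower, CblasTrans, CblasNonUnit,
                n, l, n, z, 1);
    /* (ii)  multiply by A, delivering A z into w */
    cblas_dsymv(CblasColMajor, CblasLower, n, 1.0, a, n, z, 1, 0.0, w, 1);
    /* (iii) solve L w = A z in place */
    cblas_dtrsv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                n, l, n, w, 1);
}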
Upon convergence, a computed eigenvector y for L^{−1}AL^{−H} is converted to an eigenvector x of the original problem by solving the triangular system L^Hx = y. This transformation is most appropriate when A is Hermitian, B is Hermitian positive definite and extremal eigenvalues are sought; this is because when A is Hermitian, so is L^{−1}AL^{−H}.
If A is Hermitian positive definite and the smallest eigenvalues are sought, then it would be best to reverse the roles of A and B in the above description and ask for the largest algebraic eigenvalues or those of largest magnitude. Upon convergence, a computed eigenvalue λ̂ would then be converted to an eigenvalue of the original problem by the relation λ ← 1/λ̂.
4.2.2.2 B is not Hermitian positive semidefinite
If neither A nor B is Hermitian positive semidefinite, then a direct transformation to standard form is required. One simple way to obtain a direct transformation of equation (2) to a standard eigenvalue problem Cx = λx is to multiply on the left by B^{−1}, which results in C = B^{−1}A. Of course, you should not perform this transformation explicitly, since it will most likely convert a sparse problem into a dense one. If possible, you should obtain a direct factorization of B, and when a matrix-vector product involving C is called for, it may be accomplished with the following two steps (sketched in the code after this list):
  (i) Matrix-vector multiply z ← Av.
  (ii) Solve Bw = z for w.
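A corresponding dense sketch, factorizing B once with LAPACK's LU routines and reusing the factors for every operator application (the helper name and storage choices are our illustrative assumptions):
#include <lapacke.h>
#include <cblas.h>

/* w = B^{-1} A v for dense, column-major A and B of order n.
   lu and ipiv hold the LU factorization of B, computed once with
       LAPACKE_dgetrf(LAPACK_COL_MAJOR, n, n, lu, n, ipiv);
   and reused for every application of the operator. */
static void op_binv_a(int n, const double *a, const double *lu,
                      const lapack_int *ipiv, const double *v, double *w)
{
    /* (i)  w = A v */
    cblas_dgemv(CblasColMajor, CblasNoTrans, n, n,
                1.0, a, n, v, 1, 0.0, w, 1);
    /* (ii) solve B w = A v in place using the stored factors */
    LAPACKE_dgetrs(LAPACK_COL_MAJOR, 'N', n, 1, lu, n, ipiv, w, n);
}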
Several problem-dependent issues may modify this strategy. If B is singular or if you are interested in eigenvalues near a point σ, then you may choose to work with C ≡ (A − σB)^{−1}B but without using the B-inner products discussed previously. In this case you will have to transform the converged eigenvalues of C to eigenvalues of the original problem.

4.2.3 Reverse Communication and Shift-invert Modes

The reverse communication interface function for real nonsymmetric problems is f12abc; for complex problems it is f12apc; and for real symmetric problems it is f12fbc. First the reverse communication loop structure will be described, and then the details and nuances of the problem setup will be discussed. We use the symbol op for the operator that is applied to vectors in the Arnoldi/Lanczos process, and B will stand for the matrix to use in the weighted inner product described previously. For the shift-invert spectral transformation mode, op denotes (A − σB)^{−1}B.
The basic idea is to set up a loop that repeatedly calls one of f12abc, f12apc and f12fbc. On each return, you must either apply op or B to a specified vector, or exit the loop, depending upon the value returned in the reverse communication argument irevcm.
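Schematically, the loop has the following shape in C. The solver argument lists are elided (see the individual function documents for the full interfaces), the irevcm values shown are indicative of the tasks described in this section, and apply_op and apply_b stand for user-supplied routines acting on the vector locations indicated by the solver:
/* reverse communication loop (schematic) */
Integer irevcm = 0;
do {
    /* f12abc(&irevcm, ..., &fail);  returns with a task request */
    if (irevcm == 1) {
        /* compute y = op(x), e.g., y = (A - sigma B)^{-1} B x */
        apply_op(x, y);
    } else if (irevcm == 2) {
        /* compute y = B x, for the weighted B-inner product */
        apply_b(x, y);
    } else if (irevcm == 4) {
        /* optionally monitor progress, e.g., via f12aec */
    }
} while (irevcm != 5);   /* irevcm = 5 signals completion */
/* recover eigenvalues/eigenvectors with f12acc, f12aqc or f12fcc */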
4.2.3.1 Shift and invert on a generalized eigenproblem
The example program in Section 10 in f12aec illustrates the reverse communication loop for f12abc in shift-invert mode for a generalized nonsymmetric eigenvalue problem. This loop structure will be identical for the symmetric problem calling f12fbc. The loop structure is also identical for the complex arithmetic function f12apc.
In the example, the matrix B is assumed to be symmetric and positive semidefinite. In the loop structure, you will have to supply a function to obtain a matrix factorization of (A-σB) that may repeatedly be used to solve linear systems. Moreover, a function needs to be provided to perform the matrix-vector product z = Bv and a function is required to solve linear systems of the form (A-σB) w = z as needed using the previously computed factorization.
When convergence has taken place (indicated by irevcm = 5 and fail.code = NE_NOERROR), the reverse communication loop is exited. Then, post-processing using the relevant function from f12acc, f12aqc and f12fcc must be performed to recover the eigenvalues and corresponding eigenvectors of the original problem. When operating in shift-invert mode, the eigenvalue selection option is normally set to Largest Magnitude; the post-processing function is then used to convert the converged eigenvalues of op to eigenvalues of the original problem (2). Also, when B is singular or ill-conditioned, the post-processing function takes steps to purify the eigenvectors and rid them of numerical corruption from eigenvectors corresponding to near-infinite eigenvalues. These procedures are performed automatically when operating in any one of the computational modes described above and later in this section.
You may wish to construct alternative computational modes using spectral transformations that are not addressed by any of the modes specified in this chapter. The reverse communication interface will easily accommodate these modifications. However, it will most likely be necessary to construct explicit transformations of the eigenvalues of op to eigenvalues of the original problem in these situations.
4.2.3.2 Using the computational modes
The problem set up is similar for all of the available computational modes. In the previous section, a detailed description of the reverse communication loop for a specific mode (Shift-invert for a Generalized Problem) was given. To use this or any of the other modes listed below, you are strongly urged to modify one of the example programs.
The first thing to decide is whether the problem will require a spectral transformation. If the problem is generalized (B ≠ I), then a spectral transformation will be required (see Section 4.2.2). Such a transformation will most likely be needed for a standard problem if the desired eigenvalues are in the interior of the spectrum or if they are clustered at the desired part of the spectrum. Once this decision has been made and op has been specified, an efficient means to implement the action of the operator op on a vector must be devised. The expense of applying op to a vector will of course have a direct impact on performance.
Shift-invert spectral transformations may be implemented with or without the use of a weighted B-inner product. The relation between the eigenvalues of op and the eigenvalues of the original problem must also be understood, so that the appropriate eigenvalue selection option (e.g., Largest Magnitude) can be set to recover the eigenvalues of interest for the original problem. You must specify the number of eigenvalues to compute, which eigenvalues are of interest, the number of basis vectors to use, and whether the problem is standard or generalized. These items are controlled by setting options via the option setting function.
Setting the number of eigenvalues nev and the number of basis vectors ncv (in the setup function) for optimal performance is very much problem dependent. If possible, it is best to avoid setting nev in a way that will split clusters of eigenvalues. As a rule of thumb, ncv ≥ 2 × nev is reasonable. There are trade-offs due to the cost of the user-supplied matrix-vector products and the cost of the implicit restart mechanism. If the user-supplied matrix-vector product is relatively cheap, then a smaller value of ncv may lead to more matrix-vector products and implicit Arnoldi iterations, but to an overall decrease in computation time. Convergence behaviour can be quite different depending on which of the spectrum options (e.g., Largest Magnitude) is chosen. The Arnoldi process tends to converge most rapidly to extreme points of the spectrum. Implicit restarting can be effective in focusing on and isolating a selected set of eigenvalues near these extremes. In principle, implicit restarting could isolate eigenvalues in the interior, but in practice this is difficult and usually unsuccessful. If you are interested in eigenvalues near a point that is in the interior of the spectrum, a shift-invert strategy is usually required for reasonable convergence.
The integer argument irevcm is the reverse communication flag that specifies a requested action on return from one of the solver functions f12abc, f12apc and f12fbc. The options Standard and Generalized specify whether this is a standard or generalized eigenvalue problem. The dimension of the problem is specified on the call to the initialization function only; this value, together with the number of eigenvalues and the dimension of the basis vectors, is passed through the communication array. There are a number of spectrum options which specify the eigenvalues to be computed; these options differ depending on whether a Hermitian or non-Hermitian eigenvalue problem is to be solved. For example, the Both Ends option is specific to Hermitian (symmetric) problems, while the Largest Imaginary option is specific to non-Hermitian eigenvalue problems (see Section 11.1 in f12adc). The specification of problem type will be described separately, but the reverse communication interface and loop structure are the same for each of the basic modes Regular, Regular Inverse and Shifted Inverse (also Shifted Inverse Real and Shifted Inverse Imaginary for real nonsymmetric problems), and for the problem types Standard and Generalized. There are some additional specialised modes for symmetric problems, Buckling and Cayley, and for real nonsymmetric problems with complex shifts applied in real arithmetic. You are encouraged to examine the documented example programs for these modes.
The Tolerance option specifies the accuracy requested. If you wish to supply shifts for implicit restarting then the Supplied Shifts option must be selected, otherwise the default Exact Shifts strategy will be used. The Supplied Shifts option should only be used when you have a great deal of knowledge about the spectrum and about the implicitly restarted Arnoldi method and its underlying theory. The Iteration Limit option should be set to the maximum number of implicit restarts allowed. The cost of an implicit restart step (major iteration) is of the order of 4n(ncv − nev) floating-point operations for the dense matrix operations, plus ncv − nev matrix-vector products w ← Av with the matrix A.
The choice of computational mode through the option setting function is very important. The legitimate computational mode options available differ with each problem type and are listed below for each of them.
4.2.3.3 Computational modes for real symmetric problems
The reverse communication interface function for symmetric eigenvalue problems is f12fbc. The option for selecting the region of the spectrum of interest can be one of those listed in Table 1.
Table 1
Eigenvalue spectrum options for symmetric eigenproblems

Largest Magnitude    The eigenvalues of greatest magnitude
Largest Algebraic    The eigenvalues of largest algebraic value (rightmost)
Smallest Magnitude   The eigenvalues of least magnitude
Smallest Algebraic   The eigenvalues of smallest algebraic value (leftmost)
Both Ends            The eigenvalues from both ends of the algebraic spectrum
Table 2 lists the spectral transformation options for symmetric eigenvalue problems together with the specification of op and B for each mode and the problem type option setting.
Table 2
Problem types, computational modes and spectral transformations for symmetric eigenproblems

Problem Type   Mode              Problem        op                       B
Standard       Regular           Ax = λx        A                        I
Standard       Shifted Inverse   Ax = λx        (A − σI)^{−1}            I
Generalized    Regular Inverse   Ax = λBx       B^{−1}A                  B
Generalized    Shifted Inverse   Ax = λBx       (A − σB)^{−1}B           B
Generalized    Buckling          Kx = λK_Gx     (K − σK_G)^{−1}K         K
Generalized    Cayley            Ax = λBx       (A − σB)^{−1}(A + σB)    B
4.2.3.4 Computational modes for non-Hermitian problems
When A is a general non-Hermitian matrix and B is Hermitian and positive semidefinite, then the selection of the eigenvalues is controlled by the choice of one of the options in Table 3.
Table 3
Eigenvalue spectrum options for real nonsymmetric and complex eigenproblems

Largest Magnitude    The eigenvalues of greatest magnitude
Smallest Magnitude   The eigenvalues of least magnitude
Largest Real         The eigenvalues with largest real part
Smallest Real        The eigenvalues with smallest real part
Largest Imaginary    The eigenvalues with largest imaginary part
Smallest Imaginary   The eigenvalues with smallest imaginary part
Table 4 lists the spectral transformation options for real nonsymmetric eigenvalue problems together with the specification of op and B for each mode and the problem type option setting. The equivalent listing for complex non-Hermitian eigenvalue problems is given in Table 5.
Table 4
Problem types, computational modes and spectral transformations for real nonsymmetric eigenproblems

Problem Type   Mode                                       Problem     op                      B
Standard       Regular                                    Ax = λx     A                       I
Standard       Shifted Inverse Real                       Ax = λx     (A − σI)^{−1}           I
Generalized    Regular Inverse                            Ax = λBx    B^{−1}A                 B
Generalized    Shifted Inverse Real with real σ           Ax = λBx    (A − σB)^{−1}B          B
Generalized    Shifted Inverse Real with complex σ        Ax = λBx    Re{(A − σB)^{−1}B}      B
Generalized    Shifted Inverse Imaginary with complex σ   Ax = λBx    Im{(A − σB)^{−1}B}      B
Note that there are two shifted inverse modes with complex shifts in Table 4. Since σ is complex, these both require the factorization of the matrix A - σ B in complex arithmetic even though, in the case of real nonsymmetric problems, both A and B are real. The only advantage of using this option for real nonsymmetric problems instead of using the equivalent suite for complex problems is that all of the internal operations in the Arnoldi process are executed in real arithmetic. This results in a factor of two saving in storage and a factor of four saving in computational cost. There is additional post-processing that is somewhat more complicated than the other modes in order to get the eigenvalues and eigenvectors of the original problem. These modes are only recommended if storage is extremely critical.
Table 5
Problem types, computational modes and spectral transformations for complex non-Hermitian eigenproblems

Problem Type   Mode              Problem     op                  B
Standard       Regular           Ax = λx     A                   I
Standard       Shifted Inverse   Ax = λx     (A − σI)^{−1}       I
Generalized    Regular Inverse   Ax = λBx    B^{−1}A             B
Generalized    Shifted Inverse   Ax = λBx    (A − σB)^{−1}B      B
4.2.3.5 Post processing
On the final successful return from a reverse communication function, the corresponding post-processing function must be called to obtain eigenvalues of the original problem and, if desired, the corresponding eigenvectors. In the case of Shifted Inverse modes for Generalized problems, there are some subtleties to recovering eigenvectors when B is ill-conditioned. This process is called eigenvector purification: it prevents eigenvectors from being corrupted with noise due to the presence of eigenvectors corresponding to near-infinite eigenvalues. These operations are completely transparent to you, and there is negligible additional cost to obtain eigenvectors. An orthonormal (Arnoldi/Lanczos) basis is always computed. The approximate eigenvalues of the original problem are returned in ascending algebraic order. The option relevant to this function is Vectors, which may be set to values that determine whether only eigenvalues are desired or whether corresponding eigenvectors and/or Schur vectors are also required. The value of the shift σ used in spectral transformations must be passed to the post-processing function through the appropriately named argument(s). The eigenvectors returned are normalized to have unit length with respect to the semi-inner product that was used; thus, if B = I they will have unit length in the standard norm. In general, a computed eigenvector x will satisfy x^HBx = 1.
4.2.3.6 Solution monitoring and printing
The option setting function for each suite allows the setting of three options that control solution printing and the monitoring of the iterative and post-processing stages. These three options are: Advisory, Monitoring and Print Level. By default, no solution monitoring or printing is performed. The Advisory option controls where solution details are printed; the Monitoring option controls where monitoring details are to be printed and is mainly used for debugging purposes; the Print Level option controls the amount of detail to be printed, see individual option setting function documents for specifications of each print level. The value passed to Advisory and Monitoring can be the same, but it is recommended that the two sets of information be kept separate. Note that the monitoring information can become very voluminous for the highest settings of Print Level.
To use the above options to print information to a file, the function x04acc must be called to open a file with a given name and return an associated Nag_FileID for that file (see Section 3.1.1 in the Introduction to the NAG Library CL Interface). The Nag_FileID value can then be passed to the advisory or monitoring option setting string. On final exit from the post-processing function the file may be closed by a call to x04adc.
The following example extract shows how the files ‘solut.dat’ and ‘monit.dat’ may be opened for the printing of solution and monitoring information respectively.
Nag_FileID solutid, monitid;
char option1[24], option2[24];
/* open the output files and obtain their file identifiers */
x04acc("solut.dat", 1, &solutid, &fail);
x04acc("monit.dat", 1, &monitid, &fail);
/* embed the file identifiers in the option setting strings */
sprintf(option1, "advisory = %ld", (long) solutid);
sprintf(option2, "monitoring = %ld", (long) monitid);
/* ... */
f12adc(option1, icomm, comm, &fail);
f12adc(option2, icomm, comm, &fail);
f12adc("print level = 10", icomm, comm, &fail);
/* ... */
/* close the files on final exit from the post-processing function */
x04adc(solutid, &fail);
x04adc(monitid, &fail);

5 FEAST Functions

The NAG FEAST suite of functions all have short names beginning with ‘f12j’. The suite comprises an initialization function, an option setting function, contour setting functions, reverse communication solvers and a deallocation function (see the Functionality Index in Section 6).
Solving an eigenvalue problem using the FEAST algorithm involves the following function calls.
  1. Call f12jac to initialize the handle to the internal data structure used by the functions and to set options to their default values.
  2. Optionally, call f12jbc to set any options that differ from their defaults (for example, the number of quadrature nodes on the contour, or the location of the ellipse if such a contour is to be used). f12jbc should be called once for each option to be set.
  3. Call one of the contour setting functions f12jec (for Hermitian and real symmetric problems), f12jfc (for circular or elliptical contours) or f12jgc (for maximum flexibility in your choice of contour). These functions will generate a set of quadrature nodes and weights to be used by the solvers.
  4. Call one of the reverse communication solvers f12jjc, f12jkc, f12jrc, f12jsc, f12jtc, f12juc or f12jvc.
  5. Call f12jzc to destroy the handle to the internal data structure.
The exact choice of which contour setting function and which solver to use is problem-dependent and is detailed in Section 5.2.3.
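A schematic call sequence follows; the argument lists are elided (see the individual function documents for the full interfaces), and the option string shown is the one quoted in Section 5.2.2:
/* 1. initialize the handle and default options                 */
/*      f12jac(...);                                            */
/* 2. one call per non-default option, e.g.,                    */
/*      f12jbc(..., "Execution Mode = Estimate", ...);          */
/* 3. generate the contour nodes and weights, e.g., for a real  */
/*    symmetric problem on an interval:                         */
/*      f12jec(...);                                            */
/* 4. reverse communication loop around a solver, factorizing,  */
/*    solving and multiplying as requested via irevcm:          */
/*      do { f12jjc(...); ... } while (!finished);              */
/* 5. release the handle                                        */
/*      f12jzc(...);                                            */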

5.1 Contour Setting Functions

The contour setting functions create a set of nodes and weights describing the contour within which eigenvalues are required. There are three such functions.
f12jec is intended for use with Hermitian or real symmetric eigenvalue problems (the eigenvalues of such problems all lie on the real line). You need only specify the limits of the real interval within which eigenvalues will be sought. f12jec uses these to generate an elliptical contour, symmetric about the real axis. Prior to calling f12jec, you can set the eccentricity of the ellipse, and the number of contour integration points using the option setting function f12jbc.
f12jfc is intended for non-Hermitian eigenvalue problems. It generates nodes and weights for an elliptical contour in the complex plane. You need only specify the horizontal radius and the location of the centre of the ellipse. Prior to calling f12jfc you can use f12jbc to rotate the ellipse, control its eccentricity and specify the number of integration points to use.
f12jgc gives you the maximum flexibility in creating your own contour. It is intended for non-Hermitian problems. Your contour can be made up of a combination of line segments and half ellipses. You must specify the start and end points of each segment of the contour, together with the number of integration points that should be assigned to each segment. f12jgc will use this information to generate the nodes and weights of a polygonal approximation to the contour. The contour must be convex (the behaviour of the solvers is undefined if a concave contour is used).
Note that f12jbc allows you to choose between three types of quadrature: Gauss–Legendre, Trapezoidal and (for Hermitian and real symmetric problems only) Zolotarev. The choice of quadrature will change the values of the nodes and weights computed by the contour setting functions. The type of quadrature and the number of integration points used both influence the convergence rate of the algorithm. In general, increasing the number of integration points increases the convergence rate at the expense of more expensive iterations, and using Zolotarev quadrature is recommended for Hermitian eigenvalue problems.

5.2 Solvers

The solvers use reverse communication (see Section 7 in How to Use the NAG Library for further information). They return repeatedly to the calling program with the argument irevcm set to specified values which require the calling program to carry out a specific task (either to compute a matrix-vector product or to solve a linear system), or to signal the completion of the computation. Reverse communication offers maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns.

5.2.1 Linear Systems

When FEAST requires the calling program to solve a system of linear equations, this will occur in two stages.
  (i) FEAST will first ask the calling program to compute a factorization of a matrix suitable for solving the linear system. For dense matrices this might be a Bunch–Kaufman factorization (f07nrc) or an LU decomposition (f07arc). For sparse matrices this could be an incomplete LU factorization (f11dnc) or even just a preconditioner. The factorization should be stored, as it may be reused several times.
  (ii) FEAST will then ask the calling program to use the factorization computed in (i) to solve linear systems with different sets of right-hand sides. When a new factorization is required (i.e., FEAST returns to step (i)), the factorization previously computed in step (i) can be overwritten.
Note that FEAST uses an inverse residual iteration algorithm which enables the linear systems to be solved with very low accuracy with no impact on the double precision convergence rate. Thus single precision solvers and very loose convergence tolerances are entirely acceptable when factorizing and solving the linear systems, provided the condition numbers of the linear systems are not so high as to prevent such low precision solvers from obtaining any degree of accuracy.
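For a dense complex system, the two stages might be realized with LAPACK's LU routines as below; the wrapper names, and the assumption that the calling program has already formed the matrix az requested by FEAST from the current contour point, are ours:
#include <lapacke.h>

/* Stage (i): factorize the complex matrix az of order n (column-major),
   overwriting az with its LU factors; ipiv must have length n.
   The factors are kept for reuse. */
static lapack_int feast_factorize(lapack_int n, lapack_complex_double *az,
                                  lapack_int *ipiv)
{
    return LAPACKE_zgetrf(LAPACK_COL_MAJOR, n, n, az, n, ipiv);
}

/* Stage (ii): solve az * X = B for nrhs right-hand sides in b using
   the stored factors; called repeatedly until FEAST requests a new
   factorization. */
static lapack_int feast_solve(lapack_int n, lapack_int nrhs,
                              const lapack_complex_double *az,
                              const lapack_int *ipiv,
                              lapack_complex_double *b)
{
    return LAPACKE_zgetrs(LAPACK_COL_MAJOR, 'N', n, nrhs, az, n, ipiv, b, n);
}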

5.2.2 Further Tips on the Use of the Solvers

The size of the search subspace m0 affects the convergence of the algorithm. Increasing m0 will improve convergence, but will require more memory and result in a more expensive computation. As a general rule of thumb, m0 should exceed the number of eigenvalues in the search contour by a factor of approximately 1.5 (note that FEAST can be used to estimate the number of eigenvalues inside the contour prior to embarking on the full eigenvalue computation by setting the option Execution Mode=Estimate in f12jbc).
In principle, the FEAST algorithm can be used to find many thousands of eigenpairs within a large search contour. However, in practice better performance will be achieved if the computation is split into multiple smaller contours (which could then be searched in parallel).

5.2.3 Function Choices for Different Problem Types

The following table shows which contour setting function and which reverse communication solver should be used for the different problem types. Recall that for all problem types the initialization function f12jac should first be called, and the cleanup function f12jzc should be called after the solver.
Problem Type              Contour Setting Function                                 Reverse Communication Solver
real symmetric            f12jec                                                   f12jjc
real nonsymmetric         f12jfc (circular or elliptical) or f12jgc (general)      f12jkc
complex Hermitian         f12jec                                                   f12jrc
complex symmetric         f12jfc (circular or elliptical) or f12jgc (general)      f12jsc
complex nonsymmetric      f12jfc (circular or elliptical) or f12jgc (general)      f12jtc
polynomial symmetric      f12jfc (circular or elliptical) or f12jgc (general)      f12juc
polynomial nonsymmetric   f12jfc (circular or elliptical) or f12jgc (general)      f12jvc

6 Functionality Index

ARPACK routines,  
Standard or generalized eigenvalue problems for complex matrices,  
banded matrices,  
initialize problem and method   f12atc
selected eigenvalues, eigenvectors and/or Schur vectors   f12auc
general matrices,  
initialize problem and method   f12anc
option setting   f12arc
reverse communication implicitly restarted Arnoldi method   f12apc
reverse communication monitoring   f12asc
selected eigenvalues, eigenvectors and/or Schur vectors of original problem   f12aqc
Standard or generalized eigenvalue problems for real nonsymmetric matrices,  
banded matrices,  
initialize problem and method   f12afc
selected eigenvalues, eigenvectors and/or Schur vectors   f12agc
general matrices,  
initialize problem and method   f12aac
option setting   f12adc
reverse communication implicitly restarted Arnoldi method   f12abc
reverse communication monitoring   f12aec
selected eigenvalues, eigenvectors and/or Schur vectors of original problem   f12acc
Standard or generalized eigenvalue problems for real symmetric matrices,  
banded matrices,  
initialize problem and method   f12ffc
selected eigenvalues, eigenvectors and/or Schur vectors   f12fgc
general matrices,  
initialize problem and method   f12fac
option setting   f12fdc
reverse communication implicitly restarted Arnoldi (Lanczos) method   f12fbc
reverse communication monitoring   f12fec
selected eigenvalues, eigenvectors and/or Schur vectors of original problem   f12fcc
NAG FEAST suite,  
contour setting,  
elliptical contour for nonsymmetric or complex symmetric eigenvalue problems   f12jfc
general contour for nonsymmetric or complex symmetric eigenvalue problems   f12jgc
real symmetric/complex Hermitian eigenvalue problems   f12jec
deallocation   f12jzc
initialization   f12jac
option setting   f12jbc
solvers,  
complex Hermitian   f12jrc
complex nonsymmetric   f12jtc
complex symmetric   f12jsc
polynomial nonsymmetric   f12jvc
polynomial symmetric   f12juc
real nonsymmetric   f12jkc
real symmetric   f12jjc

7 Auxiliary Functions Associated with Library Function Arguments

None.

8 Withdrawn or Deprecated Functions

None.

9 References

Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods SIAM, Philadelphia
Lehoucq R B (1995) Analysis and implementation of an implicitly restarted iteration PhD Thesis Rice University, Houston, Texas
Lehoucq R B (2001) Implicitly restarted Arnoldi methods and subspace iteration SIAM Journal on Matrix Analysis and Applications 23 551–562
Lehoucq R B and Scott J A (1996) An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices Preprint MCS-P547-1195 Argonne National Laboratory
Lehoucq R B and Sorensen D C (1996) Deflation techniques for an implicitly restarted Arnoldi iteration SIAM Journal on Matrix Analysis and Applications 17 789–821
Lehoucq R B, Sorensen D C and Yang C (1998) ARPACK Users' Guide: Solution of Large-scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods SIAM, Philadelphia
Polizzi E (2009) Density-Matrix-Based Algorithms for Solving Eigenvalue Problems Phys. Rev. B. 79 115112
Saad Y (1992) Numerical Methods for Large Eigenvalue Problems Manchester University Press, Manchester, UK