F12 (Sparseig)

Large Scale Eigenproblems

This chapter provides functions for computing **some** eigenvalues and eigenvectors of large-scale (sparse) standard and generalized eigenvalue problems. It provides functions for:

- solution of symmetric eigenvalue problems;
- solution of nonsymmetric eigenvalue problems;
- solution of generalized symmetric-definite eigenvalue problems;
- solution of generalized nonsymmetric eigenvalue problems;
- solution of polynomial eigenvalue problems;
- partial singular value decomposition.

Functions are provided for both real and complex data.

The functions in this chapter whose short names begin with either f12a or f12f have all been derived from the
ARPACK software suite (see Lehoucq et al. (1998)),
a collection of Fortran 77 functions designed to solve large scale eigenvalue problems. The interfaces provided in this chapter have been chosen to combine ease of use with the flexibility of the original
ARPACK
software. The underlying iterative methods and algorithms remain essentially the same as those in ARPACK and are described fully in Lehoucq et al. (1998).

The algorithms used in the ARPACK functions are based upon an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method. For symmetric matrices, this reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method. These variants may be viewed as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted $QR$ technique that is suitable for large scale problems. For many standard problems, a matrix factorization is not required. Only the action of the matrix on a vector is needed.
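The matrix-free design described above can be sketched outside this Library: SciPy's `scipy.sparse.linalg.eigs` wraps the same ARPACK code, and accepts a `LinearOperator` that supplies only the action $v\mapsto Av$, never the matrix itself. (SciPy is used here purely for illustration.)

```python
# Illustration only: ARPACK needs just the action of A on a vector.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 100
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))     # stands in for a large sparse matrix

def matvec(v):
    return A @ v                    # the only access to A that ARPACK needs

op = LinearOperator((n, n), matvec=matvec)
vals, vecs = eigs(op, k=4, which='LM')   # four eigenvalues of largest magnitude

# residual check: ||A z - lambda z|| should be small for each Ritz pair
for lam, z in zip(vals, vecs.T):
    assert np.linalg.norm(A @ z - lam * z) < 1e-8
```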

The ARPACK functions can be used to find the eigenvalues with the largest and/or smallest magnitudes, real part or imaginary part.

The functions in this chapter whose short names begin with ‘f12j’ have been derived from the FEAST software suite (see Polizzi (2009)). FEAST is a general purpose eigensolver for standard, generalized and polynomial eigenvalue problems. It is suitable for both sparse and dense matrices, and functions are available for real, complex, symmetric, Hermitian and non-Hermitian eigenvalue problems. The FEAST algorithm requires you to specify a particular region of interest in the complex plane within which eigenvalues are sought. The algorithm then performs a numerical quadrature computation, involving solving linear systems along a complex contour around the region of interest.

This section is only a brief introduction to the solution of large-scale eigenvalue problems. For a more detailed discussion see, for example, Saad (1992) or Lehoucq (1995) in addition to Lehoucq et al. (1998). The basic factorization techniques and definitions of terms used for the different problem types are given in Section 2 in the F08 Chapter Introduction.

A matrix $A$ may be described as **sparse** if the number of zero elements is so large that it is worthwhile using algorithms which avoid computations involving zero elements.

If $A$ is sparse, and the chosen algorithm requires the matrix coefficients to be stored, a significant saving in storage can often be made by storing only the nonzero elements. A number of different formats may be used to represent sparse matrices economically. These differ according to the amount of storage required, the amount of indirect addressing required for fundamental operations such as matrix-vector products, and their suitability for vector and/or parallel architectures. For a survey of some of these storage formats see Barrett et al. (1994).

Most
of the functions in this chapter have been designed to be independent of the matrix storage format. This allows you to choose your own preferred format, or to avoid storing the matrix altogether.
Other functions are **general purpose**: these are easier to use, but are based on fixed storage formats. One such format is currently provided. This is the band storage format as used in Chapters F07 and F08 (LAPACK) for storing general banded matrices.

The symmetric eigenvalue problem is to find the eigenvalues, $\lambda $, and corresponding eigenvectors, $z\ne 0$, such that

$$Az=\lambda z\text{, \hspace{1em}}A={A}^{\mathrm{T}}\text{, \hspace{1em} where }A\text{ is real.}$$

For the Hermitian eigenvalue problem we have

$$Az=\lambda z\text{, \hspace{1em}}A={A}^{\mathrm{H}}\text{, \hspace{1em} where }A\text{ is complex.}$$

For both problems the eigenvalues $\lambda $ are real.

The basic task of the symmetric eigenproblem functions is to compute some of the values of $\lambda $ and, optionally, corresponding vectors $z$ for a given matrix $A$. For example, we may wish to obtain the first ten eigenvalues of largest magnitude, of a large sparse matrix $A$.
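The "first ten eigenvalues of largest magnitude" task can be sketched with SciPy's symmetric ARPACK driver `scipy.sparse.linalg.eigsh` (an illustration outside this Library; matrix sizes and density are arbitrary):

```python
# Illustrative sketch: ten eigenvalues of largest magnitude of a
# large sparse symmetric matrix, via the ARPACK-based eigsh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
rng = np.random.default_rng(1)
M = sp.random(n, n, density=0.01, random_state=rng)
A = M + M.T                               # symmetrize to get a sparse symmetric A

vals, vecs = eigsh(A, k=10, which='LM')   # ten eigenvalues of largest magnitude

# compare against a dense reference computation
dense = np.linalg.eigvalsh(A.toarray())
ref = dense[np.argsort(np.abs(dense))][-10:]
assert np.allclose(np.sort(np.abs(vals)), np.sort(np.abs(ref)), atol=1e-8)
```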

This section is concerned with the solution of the generalized eigenvalue problems $Az=\lambda Bz$, $ABz=\lambda z$, and $BAz=\lambda z$, where $A$ and $B$ are real symmetric or complex Hermitian and $B$ is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of $B$ as either $B=L{L}^{\mathrm{T}}$ or $B={U}^{\mathrm{T}}U$ ($L{L}^{\mathrm{H}}$ or ${U}^{\mathrm{H}}U$ in the Hermitian case).

With $B=L{L}^{\mathrm{T}}$, we have

$$Az=\lambda Bz\Rightarrow \left({L}^{-1}A{L}^{-\mathrm{T}}\right)\left({L}^{\mathrm{T}}z\right)=\lambda \left({L}^{\mathrm{T}}z\right)\text{.}$$

Hence the eigenvalues of $Az=\lambda Bz$ are those of $Cy=\lambda y$, where $C$ is the symmetric matrix $C={L}^{-1}A{L}^{-\mathrm{T}}$ and $y={L}^{\mathrm{T}}z$. In the complex case, $C$ is Hermitian with $C={L}^{-1}A{L}^{-\mathrm{H}}$ and $y={L}^{\mathrm{H}}z$.
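This reduction can be checked numerically on a small dense example (NumPy/SciPy, for illustration only): the eigenvalues of $Az=\lambda Bz$ coincide with those of $C={L}^{-1}A{L}^{-\mathrm{T}}$.

```python
# Numerical check of the Cholesky reduction to standard form.
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)); A = A + A.T                 # symmetric A
B = rng.standard_normal((n, n)); B = B @ B.T + n*np.eye(n)   # SPD B

L = cholesky(B, lower=True)                                  # B = L L^T
# C = L^{-1} A L^{-T}, formed explicitly here for illustration only
C = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True).T

gen = eigh(A, B, eigvals_only=True)      # generalized problem A z = lambda B z
std = eigh(C, eigvals_only=True)         # standard problem C y = lambda y
assert np.allclose(gen, std)
```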

The basic task of the generalized symmetric eigenproblem functions is to compute some of the values of $\lambda $ and, optionally, corresponding vectors $z$ for a given matrix $A$. For example, we may wish to obtain the first ten eigenvalues of largest magnitude, of a large sparse matrix pair $A$ and $B$.

The nonsymmetric eigenvalue problem is to find the eigenvalues, $\lambda $, and corresponding eigenvectors, $v\ne 0$, such that

$$Av=\lambda v\text{.}$$

More precisely, a vector $v$ as just defined is called a right eigenvector of $A$, and a vector $u\ne 0$ satisfying

$${u}^{\mathrm{T}}A=\lambda {u}^{\mathrm{T}}\text{\hspace{1em}}({u}^{\mathrm{H}}A=\lambda {u}^{\mathrm{H}}\text{\hspace{1em} when }u\text{ is complex})$$

is called a left eigenvector of $A$.

A real matrix $A$ may have complex eigenvalues, occurring as complex conjugate pairs.

This problem can be solved via the Schur factorization of $A$, defined in the real case as

$$A=ZT{Z}^{\mathrm{T}}\text{,}$$

where $Z$ is an orthogonal matrix and $T$ is an upper quasi-triangular matrix with $1\times 1$ and $2\times 2$ diagonal blocks, the $2\times 2$ blocks corresponding to complex conjugate pairs of eigenvalues of $A$. In the complex case, the Schur factorization is

$$A=ZT{Z}^{\mathrm{H}}\text{,}$$

where $Z$ is unitary and $T$ is a complex upper triangular matrix.

The columns of $Z$ are called the Schur vectors. For each $k$ ($1\le k\le n$), the first $k$ columns of $Z$ form an orthonormal basis for the invariant subspace corresponding to the first $k$ eigenvalues on the diagonal of $T$. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of $k$ eigenvalues occupy the $k$ leading positions on the diagonal of $T$.
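A small dense sketch of the real Schur factorization (via SciPy's `scipy.linalg.schur`, outside this Library) confirms the properties stated above:

```python
# Real Schur factorization A = Z T Z^T: T is quasi-triangular, Z orthogonal.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
T, Z = schur(A)                          # real Schur form by default

assert np.allclose(Z @ T @ Z.T, A)       # A is reconstructed exactly
assert np.allclose(Z.T @ Z, np.eye(5))   # the Schur vectors are orthonormal
```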

The two basic tasks of the nonsymmetric eigenvalue functions are to compute, for a given matrix $A$, some values of $\lambda $ and, if desired, their associated right eigenvectors $v$, and the Schur factorization.

The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, $\lambda $, and corresponding eigenvectors, $v\ne 0$, such that

$$Av=\lambda Bv\text{, \hspace{1em}}ABv=\lambda v\text{,\hspace{1em} and \hspace{1em}}BAv=\lambda v\text{.}$$

More precisely, a vector $v$ as just defined is called a right eigenvector of the matrix pair $(A,B)$, and a vector $u\ne 0$ satisfying

$${u}^{\mathrm{T}}A=\lambda {u}^{\mathrm{T}}B\text{\hspace{1em}}({u}^{\mathrm{H}}A=\lambda {u}^{\mathrm{H}}B\text{ when }u\text{ is complex})$$

is called a left eigenvector of the matrix pair $(A,B)$.

The polynomial eigenvalue problem is to find the eigenvalues, $\lambda $, and the corresponding eigenvectors, $v\ne 0$, such that

$${\sum}_{i=0}^{p}{\lambda}^{i}{A}_{i}v=0\text{.}$$

Here the ${A}_{i}$ are matrices, and $p$ is known as the degree of the problem.

More precisely, a vector $v$ as just defined is a right eigenvector of the problem, and a vector $u\ne 0$ satisfying

$${\sum}_{i=0}^{p}{\lambda}^{i}{u}^{\mathrm{H}}{A}_{i}=0$$

is a left eigenvector of the problem.

The singular value decomposition (SVD) of an $m\times n$ matrix $A$ is given by

$$A=U\Sigma {V}^{\mathrm{T}}\text{\hspace{1em}}(A=U\Sigma {V}^{\mathrm{H}}\text{ in the complex case})$$

where $U$ and $V$ are orthogonal (unitary) and $\Sigma $ is an $m\times n$ diagonal matrix with real diagonal elements, ${\sigma}_{i}$, such that

$${\sigma}_{1}\ge {\sigma}_{2}\ge \cdots \ge {\sigma}_{\mathrm{min}\phantom{\rule{0.125em}{0ex}}(m,n)}\ge 0\text{.}$$

The ${\sigma}_{i}$ are the singular values of $A$ and the first $\mathrm{min}\phantom{\rule{0.125em}{0ex}}(m,n)$ columns of $U$ and $V$ are the left and right singular vectors of $A$. The singular values and singular vectors satisfy

$$A{v}_{i}={\sigma}_{i}{u}_{i}\text{\hspace{1em} and \hspace{1em}}{A}^{\mathrm{T}}{u}_{i}={\sigma}_{i}{v}_{i}\text{\hspace{1em}}(\text{or }{A}^{\mathrm{H}}{u}_{i}={\sigma}_{i}{v}_{i})\text{\hspace{1em} so that \hspace{1em}}{A}^{\mathrm{T}}A{v}_{i}={\sigma}_{i}^{2}{v}_{i}\text{\hspace{1em}(}{A}^{\mathrm{H}}A{v}_{i}={\sigma}_{i}^{2}{v}_{i}\text{),}$$

where ${u}_{i}$ and ${v}_{i}$ are the $i$th columns of $U$ and $V$ respectively.

Thus selected singular values and the corresponding right singular vectors may be computed by finding eigenvalues and eigenvectors for the symmetric matrix ${A}^{\mathrm{T}}A$ (or the Hermitian matrix ${A}^{\mathrm{H}}A$ if $A$ is complex).
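This correspondence is easy to verify on a small dense example (NumPy, for illustration): the eigenvalues of ${A}^{\mathrm{T}}A$ are the squared singular values, and its eigenvectors are right singular vectors of $A$.

```python
# Check: eigenpairs of A^T A give singular values (squared) and right
# singular vectors of A.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))

w, V = np.linalg.eigh(A.T @ A)        # ascending eigenvalues of A^T A
sigma = np.sqrt(w[::-1])              # descending singular values
assert np.allclose(sigma, np.linalg.svd(A, compute_uv=False))

# each eigenvector v of A^T A satisfies A^T A v = sigma^2 v
v = V[:, -1]
assert np.allclose(A.T @ (A @ v), w[-1] * v)
```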

An alternative approach is to use the relationship

$$\left(\begin{array}{cc}0& A\\ {A}^{\mathrm{T}}& 0\end{array}\right)\left(\begin{array}{c}U\\ V\end{array}\right)=\left(\begin{array}{c}U\\ V\end{array}\right)\Sigma $$

and thus compute selected singular values and vectors via the symmetric matrix

$$C=\left(\begin{array}{cc}0& A\\ {A}^{\mathrm{T}}& 0\end{array}\right)\text{\hspace{1em}}(C=\left(\begin{array}{cc}0& A\\ {A}^{\mathrm{H}}& 0\end{array}\right)\text{ if }A\text{ is complex})\text{.}$$

In many applications, one is interested in computing a few (say $k$) of the largest singular values and corresponding vectors. If ${U}_{k}$, ${V}_{k}$ denote the leading $k$ columns of $U$ and $V$ respectively, and if ${\Sigma}_{k}$ denotes the leading principal submatrix of $\Sigma $, then

$${A}_{k}\equiv {U}_{k}{\Sigma}_{k}{V}_{k}^{\mathrm{T}}\text{\hspace{1em} (or }{U}_{k}{\Sigma}_{k}{V}_{k}^{\mathrm{H}}\text{)}$$

is the best rank-$k$ approximation to $A$ in both the $2$-norm and the Frobenius norm. Often a very small $k$ will suffice to approximate important features of the original $A$ or to approximately solve least squares problems involving $A$.
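The rank-$k$ optimality (the Eckart–Young property) can be verified directly on a small dense matrix (NumPy, for illustration): the truncation error equals ${\sigma}_{k+1}$ in the $2$-norm and $\sqrt{{\sum}_{i>k}{\sigma}_{i}^{2}}$ in the Frobenius norm.

```python
# Rank-k truncated SVD and its exact error norms.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 12))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # A_k = U_k Sigma_k V_k^T

assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])            # 2-norm error
assert np.isclose(np.linalg.norm(A - Ak, 'fro'),
                  np.sqrt(np.sum(s[k:]**2)))                  # Frobenius error
```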

$$Ax=\lambda x$$ (1)

Both the ARPACK and FEAST suites can handle standard, generalized, symmetric, Hermitian and non-Hermitian eigenvalue problems, with both left and right eigenvectors returned. However, the suites differ in the subset of eigenvalues that will be returned.

The ARPACK solvers can be instructed to find the eigenvalues with the largest and/or smallest magnitudes, real parts or imaginary parts.

The FEAST solvers allow you to specify a region in the complex plane (or an interval on the real line for Hermitian problems) within which eigenvalues will be found.

Note also that FEAST contains solvers for the polynomial eigenvalue problem.

The ARPACK functions available in this chapter divide essentially into three suites of basic reverse communication functions and some general purpose functions for banded systems. The reverse communication design offers two main advantages:

- (i) Maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns. This also applies to preconditioners.
- (ii) Enhanced user interaction: you can closely monitor the solution, and tidy or immediate termination can be requested. This is useful, for example, when alternative termination criteria are to be employed or in case of failure of the external functions used to perform matrix operations.

At present there are suites of basic functions for real symmetric and nonsymmetric systems, and for complex systems.

The structure of this part of the chapter has been designed to cater for as many types of application as possible. If a general purpose function exists which is suitable for a given application you are recommended to use it. If you then decide you need some additional flexibility it is easy to achieve this by using basic and utility functions which reproduce the algorithm used in the general purpose function, but allow more access to algorithmic control parameters and monitoring.

The suite of basic functions f12aac, f12abc, f12acc, f12adc and f12aec implements the iterative solution of real nonsymmetric eigenvalue problems, finding estimates for a specified portion of the spectrum. These eigenvalue estimates are often referred to as Ritz values, and the error bounds obtained are referred to as Ritz estimates. These functions allow a choice of termination criteria and many other options for specifying the problem type, allow monitoring of the solution process, and can return the Ritz estimates for the computed Ritz values.

For complex matrices there is an equivalent suite of functions: f12anc, f12apc, f12aqc, f12arc and f12asc are the basic functions, implementing methods corresponding to those used for real nonsymmetric systems. Note that these functions are to be used for both Hermitian and non-Hermitian problems. Occasionally, when using these functions on a complex Hermitian problem, eigenvalues will be returned with small but nonzero imaginary parts due to unavoidable round-off errors. These should be ignored unless they are significant with respect to the eigenvalues of largest magnitude that have been computed.

There are general purpose functions for the case where the matrices are known to be banded. In these cases an initialization function is called first to set up default options, and the problem is solved by a single call to a solver function. The matrices are supplied, in LAPACK banded-storage format, as arguments to the solver function. For real general matrices these functions are f12afc and f12agc; and for complex matrices the pair is f12atc and f12auc. With each pair non-default options can be set, following a call to the initialization function, using f12adc for real matrices and f12arc for complex matrices. For real matrices that can be supplied in the sparse matrix compressed column storage (CCS) format, the driver function f02ekc is available. This function uses functions from Chapter F12 in conjunction with direct solver functions from Chapter F11.

There is little computational penalty in using the non-Hermitian complex functions for a Hermitian problem. The only additional cost is to compute eigenvalues of a Hessenberg rather than a tridiagonal matrix. The difference in computational cost should be negligible compared to the overall cost.

The suite of basic functions f12fac, f12fbc, f12fcc, f12fdc and f12fec implements a Lanczos method for the iterative solution of the real symmetric eigenproblem.

There is a general purpose function pair for the case where the matrices are known to be banded. In this case an initialization function, f12ffc, is called first to set up default options, and the problem is solved by a single call to a solver function, f12fgc. The matrices are supplied, in LAPACK banded-storage format, as arguments to f12fgc. Non-default options can be set, following a call to f12ffc, using f12fdc.

The partial singular value decomposition, ${A}_{k}$ (as defined in Section 2.7), of an $m\times n$ matrix $A$ can be computed efficiently using functions from this chapter. For real matrices, the suite of functions listed in Section 4.1.3 (for symmetric problems) can be used; for complex matrices, the corresponding suite for complex problems can be used. However, there are no general purpose functions for complex problems.

The driver function f02wgc is available for computing the partial SVD of real matrices. The matrix is not supplied to f02wgc; rather, a user-defined function argument provides the results of performing matrix-vector products.

For both real and complex matrices, you should use the default options (see, for example, the options listed in Section 11 in **f12fdc**) for problem type (${\mathbf{Standard}}$), computational mode (${\mathbf{Regular}}$) and spectrum (${\mathbf{Largest\; Magnitude}}$). The operation to be performed on request by the reverse communication function (e.g., f12fbc) is, for real matrices, to multiply the returned vector by the symmetric matrix ${A}^{\mathrm{T}}A$ if $m\ge n$, or by $A{A}^{\mathrm{T}}$ if $m<n$. For complex matrices, the corresponding Hermitian matrices are ${A}^{\mathrm{H}}A$ and $A{A}^{\mathrm{H}}$.

The right ($m\ge n$) or left ($m<n$) singular vectors are returned by the post-processing function (e.g., f12fcc). The left (or right) singular vectors can be recovered from the returned singular vectors. Provided the largest singular values are not multiple or tightly clustered, there should be no problem in obtaining numerically orthogonal left singular vectors from the computed right singular vectors (or vice versa).

The second example in Section 10 in **f12fbc** illustrates how the partial singular value decomposition of a real matrix can be performed using the suite of functions for finding some eigenvalues of a real symmetric matrix. In this case $m\ge n$, however, the program is easily amended to perform the same task in the case $m<n$.

Similarly, functions in this part of the chapter may be used to estimate the $2$-norm condition number,

$${K}_{2}\left(A\right)=\frac{{\sigma}_{1}}{{\sigma}_{n}}\text{.}$$

This can be achieved by setting the option ${\mathbf{Both\; Ends}}$ to get the largest and smallest few singular values, then taking the ratio of largest to smallest computed singular values as your estimate.
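A sketch of this idea outside this Library: SciPy's `scipy.sparse.linalg.svds` can play the role of the ${\mathbf{Both\; Ends}}$ option by requesting the largest (`which='LM'`) and smallest (`which='SM'`) singular values separately; their ratio estimates the condition number. The matrix below is constructed with known singular values $1,\dots,10$ purely for illustration.

```python
# Condition number estimate from a few extreme singular values.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))       # random orthogonal Q
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T        # singular values 1..10

s_large = svds(A, k=3, which='LM', return_singular_vectors=False)
s_small = svds(A, k=3, which='SM', return_singular_vectors=False)

kappa = s_large.max() / s_small.min()                    # sigma_1 / sigma_n
assert np.isclose(kappa, np.linalg.cond(A, 2))
```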

Other functions for the solution of sparse linear eigenproblems can be found in Chapters F02 and F08. In particular, tridiagonal and band matrices are addressed in Chapter F08 whereas sparse matrices are addressed in Chapter F02.

This section will describe the complete structure of the reverse communication interfaces. Numerous computational modes are available, including several shift-invert strategies designed to accelerate convergence. Two of the more sophisticated modes will be described in detail. The remaining ones are quite similar in principle, but require slightly different tasks to be performed with the reverse communication interface.

This section is structured as follows. The naming conventions used and the data types available are described in Section 4.2.1; spectral transformations are discussed in Section 4.2.2. Spectral transformations are usually extremely effective, but there are a number of problem-dependent issues that determine which one to use. In Section 4.2.3 we describe the reverse communication interface needed to exercise the various shift-invert options. Each shift-invert option is specified as a computational mode and all of these are summarised in the remaining sections. There is a subsection for each problem type, and hence these sections are quite similar and repetitive. Once the basic idea is understood, it is probably best to turn directly to the subsection that describes the problem setting most relevant to you.

Perhaps the easiest way to rapidly become acquainted with the modes in this part of the chapter is to run each of the example programs which use the various modes. These may be used as templates and adapted to solve specific problems.

Functions for solving nonsymmetric (real and complex) eigenvalue problems, in their short names, have as first letter after the chapter name, the letter ‘a’, e.g., f12abc; equivalent functions for symmetric eigenvalue problems will have this letter replaced by the letter ‘f’ (and ‘_symm’ added to their long names), e.g., f12fbc. For the letter following this, functions for real eigenvalue problems will have letters in the range ‘a to m’ (and have long names beginning ‘nag_real’) while those for complex eigenvalue problems will have letters correspondingly shifted into the range ‘n to z’ (and long names beginning ‘nag_complex’); so, for example, the complex equivalent of f12adc is f12arc, while the real symmetric equivalent is f12fdc.

A suite of five functions are named consecutively in their short names and differ only in the final word of their long names, e.g., f12aac, f12abc, f12acc, f12adc and f12aec.
Each general purpose function has its own initialization function, but uses the option setting function from the suite relevant to the problem type. Thus each general purpose function can be viewed as belonging to a suite of three functions, even though only two functions will be named consecutively. For example, f12adc, f12afc and f12agc represent the suite of functions for solving a banded real nonsymmetric eigenvalue problem.

The most general problem that may be solved here is to compute a few selected eigenvalues and corresponding eigenvectors for

$$Ax=\lambda Bx\text{, \hspace{1em} where }A\text{ and }B\text{ are real or complex }n\times n\text{ matrices.}$$ (2)

The shift and invert spectral transformation is used to enhance convergence to a desired portion of the spectrum. If $(x,\lambda )$ is an eigen-pair for $(A,B)$ and $\sigma \ne \lambda $ then

$${\left(A-\sigma B\right)}^{-1}Bx=\nu x\text{, \hspace{1em} where }\nu =\frac{1}{\lambda -\sigma}\text{.}$$ (3)

This transformation is effective for finding eigenvalues near $\sigma $ since the ${n}_{\nu}$ eigenvalues of $C\equiv {\left(A-\sigma B\right)}^{-1}B$ that are largest in magnitude correspond to the ${n}_{\nu}$ eigenvalues ${\lambda}_{j}$ of the original problem that are nearest to the shift $\sigma $ in absolute value. These transformed eigenvalues of largest magnitude are precisely the eigenvalues that are easy to compute with a Krylov method (see Barrett et al. (1994)). Once they are found, they may be transformed back to eigenvalues of the original problem. The direct relation is

$${\lambda}_{j}=\sigma +\frac{1}{{\nu}_{j}}$$

and the eigenvector ${x}_{j}$ associated with ${\nu}_{j}$ in the transformed problem is also an eigenvector of the original problem corresponding to ${\lambda}_{j}$. Usually the Arnoldi process will rapidly obtain good approximations to the eigenvalues of $C$ of largest magnitude. However, to implement this transformation, you must provide the means to solve linear systems involving $A-\sigma B$, either with a matrix factorization or with an iterative method.
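The shift-invert relation can be checked numerically on a small standard problem ($B=I$) with NumPy (illustration only; the explicit inverse is formed here only because the example is tiny):

```python
# Shift-invert check: eigenvalues of C = (A - sigma I)^{-1} of largest
# magnitude map back, via lambda = sigma + 1/nu, to the eigenvalues of A
# closest to the shift sigma.
import numpy as np

rng = np.random.default_rng(7)
A = np.diag(np.arange(1.0, 11.0)) + 0.01*rng.standard_normal((10, 10))
sigma = 4.2

C = np.linalg.inv(A - sigma*np.eye(10))   # explicit inverse: tiny example only
nu = np.linalg.eigvals(C)
lam = sigma + 1.0/nu                      # transform back to the original problem

# the transformed eigenvalue of largest |nu| is the one of A nearest sigma
closest = lam[np.argmax(np.abs(nu))]
true = np.linalg.eigvals(A)
assert np.isclose(closest, true[np.argmin(np.abs(true - sigma))])
```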

In general, $C$ will be non-Hermitian even if $A$ and $B$ are both Hermitian. However, this is easily remedied. The assumption that $B$ is Hermitian positive definite implies that the bilinear form

$$\langle x,y\rangle \equiv {x}^{\mathrm{H}}By$$

is an inner product. If $B$ is positive semidefinite and singular, then a semi-inner product results. This is a weighted $B$-inner product, and vectors $x$, $y$ are called $B$-orthogonal if $\langle x,y\rangle =0$. It is easy to show that if $A$ is Hermitian (self-adjoint) then $C$ is self-adjoint with respect to this $B$-inner product (meaning $\langle Cx,y\rangle =\langle x,Cy\rangle $ for all vectors $x$ and $y$). Therefore, symmetry will be preserved if we force the computed basis vectors to be orthogonal in this $B$-inner product. Implementing this $B$-orthogonality requires you to provide a matrix-vector product $Bv$ on request along with each application of $C$. In the following sections we shall discuss some of the more familiar transformations to the standard eigenproblem. However, when $B$ is positive (semi)definite, we recommend using the shift-invert spectral transformation with $B$-inner products if at all possible. This is a far more robust transformation when $B$ is ill-conditioned or singular. With a little extra manipulation (provided automatically in the post-processing functions), the semi-inner product induced by $B$ prevents corruption of the computed basis vectors by round-off error associated with the presence of infinite eigenvalues. These very ill-conditioned eigenvalues are generally associated with a singular or highly ill-conditioned $B$. A detailed discussion of this theory may be found in Chapter 4 of Lehoucq et al. (1998).

Shift-invert spectral transformations are very effective and should even be used on standard problems,
$B=I$, whenever possible. This is particularly true when interior eigenvalues are sought or when the desired eigenvalues are clustered. Roughly speaking, a set of eigenvalues is clustered if the maximum distance between any two eigenvalues in that set is much smaller than the minimum distance between these eigenvalues and any other eigenvalues of $(A,B)$.

If you have a generalized problem $B\ne I$, then you must provide a way to solve linear systems with either $A$, $B$ or a linear combination of the two matrices in order to use the reverse communication suites in this chapter. In this case, a sparse direct method should be used to factor the appropriate matrix whenever possible. The resulting factorization may be used repeatedly to solve the required linear systems once it has been obtained. If instead you decide to use an iterative method, the accuracy of the solutions must be commensurate with the convergence tolerance used for the Arnoldi iteration. A slightly more stringent tolerance is needed relative to the desired accuracy of the eigenvalue calculation.

The main drawback with using the shift-invert spectral transformation is that the coefficient matrix $A-\sigma B$ is typically indefinite in the Hermitian case and has zero-valued eigenvalues in the non-Hermitian case. These are often the most difficult situations for iterative methods and also for sparse direct methods.

The decision to use a spectral transformation on a standard eigenvalue problem $B=I$ or to use one of the simple modes is problem dependent. The simple modes have the advantage that you only need to supply a matrix-vector product $Av$. However, this approach is usually only successful for problems where extremal non-clustered eigenvalues are sought. In non-Hermitian problems, extremal means eigenvalues near the boundary of the spectrum of $A$. For Hermitian problems, extremal means eigenvalues at the left- or right-hand end points of the spectrum of $A$. The notion of non-clustered (or well separated) is difficult to define without going into considerable detail. A simplistic notion of a well-separated eigenvalue ${\lambda}_{j}$ for a Hermitian problem would be $|{\lambda}_{i}-{\lambda}_{j}|>\tau |{\lambda}_{n}-{\lambda}_{1}|$ for all $i\ne j$, with $\tau \gg \epsilon $, where ${\lambda}_{1}$ and ${\lambda}_{n}$ are the smallest and largest eigenvalues algebraically. Unless a matrix-vector product is quite difficult to code or extremely expensive computationally, it is probably worth trying the simple mode first if you are seeking extremal eigenvalues.

The remainder of this section discusses additional transformations that may be applied to convert a generalized eigenproblem to a standard eigenproblem. These are appropriate when $B$ is well-conditioned (Hermitian or non-Hermitian).

If $B$ is Hermitian positive definite and well-conditioned ($\Vert B\Vert \Vert {B}^{-1}\Vert $ is of modest size), then computing the Cholesky factorization $B=L{L}^{\mathrm{H}}$ and converting equation (2) to

$$\left({L}^{-1}A{L}^{-\mathrm{H}}\right)y=\lambda y\text{, \hspace{1em} where }{L}^{\mathrm{H}}x=y\text{,}$$

provides a transformation to a standard eigenvalue problem. In this case, a request for a matrix-vector product would be satisfied with the following three steps:

- (i) Solve ${L}^{\mathrm{H}}z=v$ for $z$.
- (ii) Matrix-vector multiply $z\leftarrow Az$.
- (iii) Solve $Lw=z$ for $w$.
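The three steps above can be sketched as an operator that an ARPACK-style solver calls (real case, so ${L}^{\mathrm{H}}={L}^{\mathrm{T}}$; NumPy/SciPy for illustration only):

```python
# Three-step matrix-vector product for L^{-1} A L^{-T}, wrapped as an
# operator and fed to the symmetric ARPACK driver eigsh.
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(8)
n = 30
A = rng.standard_normal((n, n)); A = A + A.T                 # symmetric A
B = rng.standard_normal((n, n)); B = B @ B.T + n*np.eye(n)   # SPD B
L = cholesky(B, lower=True)                                  # B = L L^T

def opmatvec(v):
    z = solve_triangular(L.T, v, lower=False)   # (i)   solve L^T z = v
    z = A @ z                                   # (ii)  z <- A z
    return solve_triangular(L, z, lower=True)   # (iii) solve L w = z

op = LinearOperator((n, n), matvec=opmatvec)
vals = eigsh(op, k=3, which='LA', return_eigenvectors=False)

# the results match the top eigenvalues of the generalized problem
ref = eigh(A, B, eigvals_only=True)
assert np.allclose(np.sort(vals), ref[-3:])
```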

Upon convergence, a computed eigenvector $y$ for
$\left({L}^{-1}A{L}^{-\mathrm{H}}\right)$
is converted to an eigenvector $x$ of the original problem by solving the triangular system ${L}^{\mathrm{H}}x=y$. This transformation is most appropriate when $A$ is Hermitian, $B$ is Hermitian positive definite and extremal eigenvalues are sought. This is because when $A$ is Hermitian, so is $\left({L}^{-1}A{L}^{-\mathrm{H}}\right)$.

If $A$ is Hermitian positive definite and the smallest eigenvalues are sought, then it would be best to reverse the roles of $A$ and $B$ in the above description and ask for the largest algebraic eigenvalues or those of largest magnitude. Upon convergence, a computed eigenvalue
$\hat{\lambda}$
would then be converted to an eigenvalue of the original problem by the relation
$\lambda \leftarrow \frac{1}{\hat{\lambda}}$.

If neither $A$ nor $B$ is Hermitian positive semidefinite, then a direct transformation to standard form is required. One simple way to obtain a direct transformation of equation (2) to a standard eigenvalue problem $Cx=\lambda x$ is to multiply on the left by ${B}^{-1}$ which results in $C={B}^{-1}A$. Of course, you should not perform this transformation explicitly since it will most likely convert a sparse problem into a dense one. If possible, you should obtain a direct factorization of $B$ and when a matrix-vector product involving $C$ is called for, it may be accomplished with the following two steps:

- (i) Matrix-vector multiply $z\leftarrow Av$.
- (ii) Solve $Bw=z$ for $w$.
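The two-step product for $C={B}^{-1}A$ can be sketched with a sparse direct factorization of $B$ computed once and reused for every solve (SciPy's `splu`, outside this Library; the test matrices are arbitrary illustrations):

```python
# Two-step matrix-vector product for C = B^{-1} A, using one sparse LU
# factorization of B, wrapped as an operator for the ARPACK driver eigs.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import eig
from scipy.sparse.linalg import LinearOperator, eigs, splu

rng = np.random.default_rng(9)
n = 40
A = sp.random(n, n, density=0.2, random_state=rng, format='csc')
B = sp.eye(n, format='csc') + 0.1*sp.random(n, n, density=0.2,
                                            random_state=rng, format='csc')

lu = splu(B)                              # factor B once, reuse for each solve

def cmatvec(v):
    z = A @ v                             # (i)  z <- A v
    return lu.solve(z)                    # (ii) solve B w = z

C = LinearOperator((n, n), matvec=cmatvec)
vals = eigs(C, k=3, which='LM', return_eigenvectors=False)

# reference: eigenvalues of the dense pencil (A, B)
ref = eig(A.toarray(), B.toarray(), right=False)
top = ref[np.argsort(-np.abs(ref))][:3]
assert np.allclose(np.sort(np.abs(vals)), np.sort(np.abs(top)), atol=1e-6)
```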

Several problem-dependent issues may modify this strategy. If $B$ is singular or if you are interested in eigenvalues near a point $\sigma $ then you may choose to work with $C\equiv {\left(A-\sigma B\right)}^{\mathrm{-1}}B$ but without using the $B$-inner products discussed previously. In this case you will have to transform the converged eigenvalues of $C$ to eigenvalues of the original problem.

The reverse communication interface function for real nonsymmetric problems is f12abc; for complex problems, f12apc; and for real symmetric problems, f12fbc. First the reverse communication loop structure will be described, and then the details and nuances of the problem setup will be discussed. We use the symbol $\mathrm{op}$ for the operator that is applied to vectors in the Arnoldi/Lanczos process, and $B$ will stand for the matrix used in the weighted inner product described previously. For the shift-invert spectral transformation mode, $\mathrm{op}$ denotes ${\left(A-\sigma B\right)}^{-1}B$.

The basic idea is to set up a loop that repeatedly calls one of f12abc, f12apc or f12fbc. On each return, you must either apply $\mathrm{op}$ or $B$ to a specified vector, or exit the loop, depending upon the value returned in the reverse communication argument irevcm.

The example program in
Section 10 in **f12aec**
illustrates the reverse communication loop for f12abc in shift-invert mode for a generalized nonsymmetric eigenvalue problem. This loop structure will be identical for the symmetric problem calling f12fbc. The loop structure is also identical for the complex arithmetic function f12apc.

In the example, the matrix $B$ is assumed to be symmetric and positive semidefinite. In the loop structure, you will have to supply a function to obtain a matrix factorization of $\left(A-\sigma B\right)$ that may repeatedly be used to solve linear systems. Moreover, a function needs to be provided to perform the matrix-vector product $z=Bv$ and a function is required to solve linear systems of the form $\left(A-\sigma B\right)w=z$ as needed using the previously computed factorization.

When convergence has taken place (indicated by ${\mathbf{irevcm}}=5$ and ${\mathbf{fail}}=0$), the reverse communication loop will be exited. Then, post-processing using the relevant function from f12acc, f12aqc or f12fcc must be done to recover the eigenvalues and corresponding eigenvectors of the original problem. When operating in shift-invert mode, the eigenvalue selection option is normally set to ${\mathbf{Largest\; Magnitude}}$. The post-processing function is then used to convert the converged eigenvalues of $\mathrm{op}$ to eigenvalues of the original problem (2). Also, when $B$ is singular or ill-conditioned, the post-processing function takes steps to purify the eigenvectors and rid them of numerical corruption from eigenvectors corresponding to near-infinite eigenvalues. These procedures are performed automatically when operating in any one of the computational modes described above and later in this section.

You may wish to construct alternative computational modes using spectral transformations that are not addressed by any of the modes specified in this chapter. The reverse communication interface will easily accommodate these modifications. However, it will most likely be necessary to construct explicit transformations of the eigenvalues of $\mathrm{op}$ to eigenvalues of the original problem in these situations.

The problem set up is similar for all of the available computational modes. In the previous section, a detailed description of the reverse communication loop for a specific mode (Shift-invert for a Generalized Problem) was given. To use this or any of the other modes listed below, you are strongly urged to modify one of the example programs.

The first thing to decide is whether the problem will require a spectral transformation. If the problem is generalized, $B\ne I$, then a spectral transformation will be required (see Section 4.2.2). Such a transformation will most likely be needed for a standard problem if the desired eigenvalues are in the interior of the spectrum or if they are clustered at the desired part of the spectrum. Once this decision has been made and $\mathrm{op}$ has been specified, an efficient means to implement the action of the operator $\mathrm{op}$ on a vector must be devised. The expense of applying $\mathrm{op}$ to a vector will of course have direct impact on performance.

Shift-invert spectral transformations may be implemented with or without the use of a weighted $B$-inner product. The relation between the eigenvalues of $\mathrm{op}$ and the eigenvalues of the original problem must also be understood in order to make the appropriate eigenvalue selection (e.g., ${\mathbf{Largest\; Magnitude}}$) and so recover the eigenvalues of interest for the original problem. You must specify the number of eigenvalues to compute, which eigenvalues are of interest, the number of basis vectors to use, and whether the problem is standard or generalized. These items are controlled by setting options via the option setting function.

Setting the number of eigenvalues nev and the number of basis vectors ncv (in the setup function) for optimal performance is very much problem dependent. If possible, it is best to avoid setting nev in a way that will split clusters of eigenvalues. As a rule of thumb ${\mathbf{ncv}}\ge 2\times {\mathbf{nev}}$
is reasonable. There are trade-offs due to the cost of the user-supplied matrix-vector products and the cost of the implicit restart mechanism. If the user-supplied matrix-vector product is relatively cheap, then a smaller value of ncv may lead to more user matrix-vector products and implicit Arnoldi iterations but an overall decrease in computation time. Convergence behaviour can be quite different depending on which of the spectrum options (e.g., ${\mathbf{Largest\; Magnitude}}$) is chosen. The Arnoldi process tends to converge most rapidly to extreme points of the spectrum. Implicit restarting can be effective in focusing on and isolating a selected set of eigenvalues near these extremes. In principle, implicit restarting could isolate eigenvalues in the interior, but in practice this is difficult and usually unsuccessful. If you are interested in eigenvalues near a point that is in the interior of the spectrum, a shift-invert strategy is usually required for reasonable convergence.

The integer argument irevcm is the reverse communication flag that specifies a requested action on return from one of the solver functions f12abc, f12apc and f12fbc. The options ${\mathbf{Standard}}$ and ${\mathbf{Generalized}}$ specify whether this is a standard or generalized eigenvalue problem. The dimension of the problem is specified on the call to the initialization function only; this value, together with the number of eigenvalues and the dimension of the basis vectors, is passed through the communication array. There are a number of spectrum options which specify the eigenvalues to be computed; these options differ depending on whether a Hermitian or non-Hermitian eigenvalue problem is to be solved. For example, the ${\mathbf{Both\; Ends}}$ option is specific to Hermitian (symmetric) problems while the ${\mathbf{Largest\; Imaginary}}$ option is specific to non-Hermitian eigenvalue problems (see Section 11.1 in **f12adc**). The specification of problem type will be described separately, but the reverse communication interface and loop structure are the same for each of the basic modes ${\mathbf{Regular}}$, ${\mathbf{Regular\; Inverse}}$ and ${\mathbf{Shifted\; Inverse}}$ (also ${\mathbf{Shifted\; Inverse\; Real}}$ and ${\mathbf{Shifted\; Inverse\; Imaginary}}$ for real nonsymmetric problems), and for each problem type: ${\mathbf{Standard}}$ or ${\mathbf{Generalized}}$. There are some additional specialised modes for symmetric problems, ${\mathbf{Buckling}}$ and ${\mathbf{Cayley}}$, and for real nonsymmetric problems with complex shifts applied in real arithmetic. You are encouraged to examine the documented example programs for these modes.

The ${\mathbf{Tolerance}}$ option specifies the accuracy requested. If you wish to supply shifts for implicit restarting then the ${\mathbf{Supplied\; Shifts}}$ option must be selected, otherwise the default ${\mathbf{Exact\; Shifts}}$ strategy will be used. ${\mathbf{Supplied\; Shifts}}$ should only be used when you have a great deal of knowledge about the spectrum and about the implicitly restarted Arnoldi method and its underlying theory. The ${\mathbf{Iteration\; Limit}}$ option should be set to the maximum number of implicit restarts allowed. The cost of an implicit restart step (major iteration) is of the order of
$4n({\mathbf{ncv}}-{\mathbf{nev}})$
floating-point operations for the dense matrix operations, plus ${\mathbf{ncv}}-{\mathbf{nev}}$
matrix-vector products $w\leftarrow Av$ with the matrix $A$.

The choice of computational mode through the option setting function is very important. The legitimate computational mode options available differ with each problem type and are listed below for each of them.

The reverse communication interface function for symmetric eigenvalue problems is f12fbc. The option for selecting the region of the spectrum of interest can be one of those listed in Table 1.

**Table 1**

| Option | Meaning |
|---|---|
| ${\mathbf{Largest\; Magnitude}}$ | The eigenvalues of greatest magnitude |
| ${\mathbf{Largest\; Algebraic}}$ | The eigenvalues of largest algebraic value (rightmost) |
| ${\mathbf{Smallest\; Magnitude}}$ | The eigenvalues of least magnitude |
| ${\mathbf{Smallest\; Algebraic}}$ | The eigenvalues of smallest algebraic value (leftmost) |
| ${\mathbf{Both\; Ends}}$ | The eigenvalues from both ends of the algebraic spectrum |

Table 2 lists the spectral transformation options for symmetric eigenvalue problems together with the specification of $\mathrm{op}$ and $B$ for each mode and the problem type option setting.

**Table 2**

| Problem Type | Mode | Problem | $\mathrm{op}$ | $B$ |
|---|---|---|---|---|
| ${\mathbf{Standard}}$ | ${\mathbf{Regular}}$ | $Ax=\lambda x$ | $A$ | $I$ |
| ${\mathbf{Standard}}$ | ${\mathbf{Shifted\; Inverse}}$ | $Ax=\lambda x$ | ${(A-\sigma I)}^{-1}$ | $I$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Regular\; Inverse}}$ | $Ax=\lambda Bx$ | ${B}^{-1}A$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Shifted\; Inverse}}$ | $Ax=\lambda Bx$ | ${(A-\sigma B)}^{-1}B$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Buckling}}$ | $Kx=\lambda {K}_{G}x$ | ${(K-\sigma {K}_{G})}^{-1}K$ | $K$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Cayley}}$ | $Ax=\lambda Bx$ | ${(A-\sigma B)}^{-1}(A+\sigma B)$ | $B$ |

When $A$ is a general non-Hermitian matrix and $B$ is Hermitian and positive semidefinite, then the selection of the eigenvalues is controlled by the choice of one of the options in Table 3.

**Table 3**

| Option | Meaning |
|---|---|
| ${\mathbf{Largest\; Magnitude}}$ | The eigenvalues of greatest magnitude |
| ${\mathbf{Smallest\; Magnitude}}$ | The eigenvalues of least magnitude |
| ${\mathbf{Largest\; Real}}$ | The eigenvalues with largest real part |
| ${\mathbf{Smallest\; Real}}$ | The eigenvalues with smallest real part |
| ${\mathbf{Largest\; Imaginary}}$ | The eigenvalues with largest imaginary part |
| ${\mathbf{Smallest\; Imaginary}}$ | The eigenvalues with smallest imaginary part |

Table 4 lists the spectral transformation options for real nonsymmetric eigenvalue problems together with the specification of $\mathrm{op}$ and $B$ for each mode and the problem type option setting. The equivalent listing for complex non-Hermitian eigenvalue problems is given in Table 5.

**Table 4**

| Problem Type | Mode | Problem | $\mathrm{op}$ | $B$ |
|---|---|---|---|---|
| ${\mathbf{Standard}}$ | ${\mathbf{Regular}}$ | $Ax=\lambda x$ | $A$ | $I$ |
| ${\mathbf{Standard}}$ | ${\mathbf{Shifted\; Inverse\; Real}}$ | $Ax=\lambda x$ | ${(A-\sigma I)}^{-1}$ | $I$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Regular\; Inverse}}$ | $Ax=\lambda Bx$ | ${B}^{-1}A$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Shifted\; Inverse\; Real}}$ with real $\sigma $ | $Ax=\lambda Bx$ | ${(A-\sigma B)}^{-1}B$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Shifted\; Inverse\; Real}}$ with complex $\sigma $ | $Ax=\lambda Bx$ | $\mathrm{real}\left\{{(A-\sigma B)}^{-1}B\right\}$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Shifted\; Inverse\; Imaginary}}$ with complex $\sigma $ | $Ax=\lambda Bx$ | $\mathrm{imag}\left\{{(A-\sigma B)}^{-1}B\right\}$ | $B$ |

Note that there are two shifted inverse modes with complex shifts in Table 4. Since $\sigma $ is complex, these both require the factorization of the matrix
$A-\sigma B$ in complex arithmetic even though, in the case of real nonsymmetric problems, both $A$ and $B$ are real. The only advantage of using this option for real nonsymmetric problems instead of using the equivalent suite for complex problems is that all of the internal operations in the Arnoldi process are executed in real arithmetic. This results in a factor of two saving in storage and a factor of four saving in computational cost. There is additional post-processing that is somewhat more complicated than the other modes in order to get the eigenvalues and eigenvectors of the original problem. These modes are only recommended if storage is extremely critical.

**Table 5**

| Problem Type | Mode | Problem | $\mathrm{op}$ | $B$ |
|---|---|---|---|---|
| ${\mathbf{Standard}}$ | ${\mathbf{Regular}}$ | $Ax=\lambda x$ | $A$ | $I$ |
| ${\mathbf{Standard}}$ | ${\mathbf{Shifted\; Inverse}}$ | $Ax=\lambda x$ | ${(A-\sigma I)}^{-1}$ | $I$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Regular\; Inverse}}$ | $Ax=\lambda Bx$ | ${B}^{-1}A$ | $B$ |
| ${\mathbf{Generalized}}$ | ${\mathbf{Shifted\; Inverse}}$ | $Ax=\lambda Bx$ | ${(A-\sigma B)}^{-1}B$ | $B$ |

On the final successful return from a reverse communication function, the corresponding post-processing function must be called to get eigenvalues of the original problem and, if desired, the corresponding eigenvectors. In the case of ${\mathbf{Shifted\; Inverse}}$ modes for ${\mathbf{Generalized}}$ problems, there are some subtleties to recovering eigenvectors when $B$ is ill-conditioned. This process is called eigenvector purification. It prevents eigenvectors from being corrupted with noise due to the presence of eigenvectors corresponding to near-infinite eigenvalues. These operations are completely transparent to you. There is negligible additional cost to obtain eigenvectors. An orthonormal (Arnoldi/Lanczos) basis is always computed. The approximate eigenvalues of the original problem are returned in ascending algebraic order. The option relevant to this function is ${\mathbf{Vectors}}$, which may be set to values that determine whether only eigenvalues are desired or whether corresponding eigenvectors and/or Schur vectors are required. The value of the shift $\sigma $ used in spectral transformations must be passed to the post-processing function through the appropriately named argument(s). The eigenvectors returned are normalized to have unit length with respect to the semi-inner product that was used. Thus, if $B=I$ then they will have unit length in the standard 2-norm. In general, a computed eigenvector $x$ will satisfy ${x}^{\mathrm{H}}Bx=1$.

The option setting function for each suite allows the setting of three options that control solution printing and the monitoring of the iterative and post-processing stages. These three options are: ${\mathbf{Advisory}}$, ${\mathbf{Monitoring}}$ and ${\mathbf{Print\; Level}}$. By default, no solution monitoring or printing is performed. The ${\mathbf{Advisory}}$ option controls where solution details are printed; the ${\mathbf{Monitoring}}$ option controls where monitoring details are to be printed and is mainly used for debugging purposes; the ${\mathbf{Print\; Level}}$ option controls the amount of detail to be printed, see individual option setting function documents for specifications of each print level. The value passed to ${\mathbf{Advisory}}$ and ${\mathbf{Monitoring}}$ can be the same, but it is recommended that the two sets of information be kept separate. Note that the monitoring information can become very voluminous for the highest settings of ${\mathbf{Print\; Level}}$.

To use the above options to print information to a file, the function x04acc must be called to open a file with a given name and return an associated Nag_FileID (see Section 3.1.1 in the Introduction to the NAG Library CL Interface) for that file. The Nag_FileID value can then be passed to the advisory or monitoring option setting string. On final exit from the post-processing function the file may be closed by a call to x04adc.

The following example extract shows how the files ‘solut.dat’ and ‘monit.dat’ may be opened for the printing of solution and monitoring information respectively.

```c
Nag_FileID solutid, monitid;
char option1[16], option2[16];

x04acc("solut.dat", 1, &solutid, &fail);
x04acc("monit.dat", 1, &monitid, &fail);
Vsprintf(option1, "advisory=%4ld", (Integer) solutid);
Vsprintf(option2, "monitoring=%4ld", (Integer) monitid);
/* ... */
f12adc(option1, icomm, comm, &fail);
f12adc(option2, icomm, comm, &fail);
f12adc("print level = 10", icomm, comm, &fail);
/* ... */
x04adc(solutid, &fail);
x04adc(monitid, &fail);
```

The NAG FEAST suite of functions all have short names beginning with ‘f12j’. They are divided into the following types of function: initialization, option setting, contour setting, reverse communication solvers and deallocation.

Solving an eigenvalue problem using the FEAST algorithm involves the following function calls.

1. Call f12jac to initialize the handle to the internal data structure used by the functions and set options to their default values.
2. Optionally, call f12jbc to set any options if different from their defaults (for example, the number of quadrature nodes on the contour, or the location of the ellipse if such a contour is to be used). f12jbc should be called once for each option to be set.
3. Call one of the contour setting functions f12jec (for Hermitian and real symmetric problems), f12jfc (for circular or elliptical contours) or f12jgc (for maximum flexibility in your choice of contour). These functions will generate a set of quadrature nodes and weights to be used by the solvers.
4. Call one of the reverse communication solvers f12jjc, f12jkc, f12jrc, f12jsc, f12jtc, f12juc or f12jvc.
5. Call f12jzc to destroy the handle to the internal data structure.
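The five steps above can be sketched as the following call sequence (pseudocode: argument lists are omitted because they differ between functions; see the individual function documents for the exact signatures):

```
f12jac(...)                 /* 1. initialize handle, default options      */
f12jbc(...)                 /* 2. optionally, once per non-default option */
f12jec/f12jfc/f12jgc(...)   /* 3. generate contour nodes and weights      */
repeat:
    f12jjc/... (irevcm, ...)   /* 4. reverse communication solver         */
    perform the task requested by irevcm, or exit when complete
f12jzc(...)                 /* 5. destroy handle                          */
```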

The exact choice of which contour setting function and which solver to use is problem-dependent and is detailed in Section 5.2.3.

The contour setting functions create a set of nodes and weights describing the contour within which eigenvalues are required. There are three such functions.

f12jec is intended for use with Hermitian or real symmetric eigenvalue problems (the eigenvalues of such problems all lie on the real line). You need only specify the limits of the real interval within which eigenvalues will be sought. f12jec uses these to generate an elliptical contour, symmetric about the real axis. Prior to calling f12jec, you can set the eccentricity of the ellipse, and the number of contour integration points using the option setting function f12jbc.

f12jfc is intended for non-Hermitian eigenvalue problems. It generates nodes and weights for an elliptical contour in the complex plane. You need only specify the horizontal radius and the location of the centre of the ellipse. Prior to calling f12jfc you can use f12jbc to rotate the ellipse, control its eccentricity and specify the number of integration points to use.

f12jgc gives you the maximum flexibility in creating your own contour. It is intended for non-Hermitian problems. Your contour can be made up of a combination of line segments and half ellipses. You must specify the start and end points of each segment of the contour, together with the number of integration points that should be assigned to each segment. f12jgc will use this information to generate the nodes and weights of a polygonal approximation to the contour. The contour must be convex (the behaviour of the solvers is undefined if a concave contour is used).

Note that f12jbc allows you to choose between three types of quadrature: Gauss–Legendre, Trapezoidal and (for Hermitian and real symmetric problems only) Zolotarev. The choice of quadrature changes the values of the nodes and weights computed by the contour setting functions. The type of quadrature and the number of integration points used both influence the convergence rate of the algorithm. In general, increasing the number of integration points increases the convergence rate at the expense of more costly iterations, and Zolotarev quadrature is recommended for Hermitian eigenvalue problems.

The solvers use reverse communication (see Section 7 in How to Use the NAG Library for further information). They return repeatedly to the calling program with the argument irevcm set to specified values which require the calling program to carry out a specific task (either to compute a matrix-vector product or to solve a linear system), or to signal the completion of the computation. Reverse communication offers maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns.

When FEAST requires the calling program to solve a system of linear equations, this will occur in two stages.

- (i) FEAST will first ask the calling program to compute a factorization of a matrix suitable for solving the linear system. For dense matrices this might be a Bunch–Kaufman factorization (f07nrc) or an $LU$ decomposition (f07arc). For sparse matrices this could be an incomplete $LU$ factorization (f11dnc) or even just a preconditioner. The factorization should be stored as it may be reused several times.
- (ii) FEAST will ask the calling program to use the factorization computed in (i) to solve linear systems with different sets of right-hand sides. When a new factorization is required (i.e., FEAST returns to step (i)), the factorization previously computed in step (i) can be overwritten.

Note that FEAST uses an inverse residual iteration algorithm which enables the linear systems to be solved to very low accuracy with no impact on the double precision convergence rate. Thus single precision solvers and very high convergence tolerances are entirely acceptable when factorizing and solving the linear systems, provided the condition numbers of the linear systems are not so high as to prevent such low-precision solvers from obtaining any degree of accuracy.

The size of the search subspace m0 affects the convergence of the algorithm. Increasing m0 will improve convergence, but will require more memory and result in a more expensive computation. As a general rule of thumb, m0 should exceed the number of eigenvalues in the search contour by a factor of approximately $1.5$ (note that FEAST can be used to estimate the number of eigenvalues inside the contour prior to embarking on the full eigenvalue computation by setting the option ${\mathbf{Execution\; Mode}}=\mathrm{Estimate}$ in f12jbc).

In principle, the FEAST algorithm can be used to find many thousands of eigenpairs within a large search contour. However, in practice better performance will be achieved if the computation is split into multiple smaller contours (which could then be searched in parallel).

The following table shows which contour setting function and which reverse communication solver should be used for the different problem types. Recall that for all problem types the initialization function f12jac should first be called, and the cleanup function f12jzc should be called after the solver.

| Problem Type | Contour Setting Function | Reverse Communication Solver |
|---|---|---|
| real symmetric | f12jec | f12jjc |
| real nonsymmetric | f12jfc (circular or elliptical contours) or f12jgc (general contours) | f12jkc |
| complex Hermitian | f12jec | f12jrc |
| complex symmetric | f12jfc (circular or elliptical contours) or f12jgc (general contours) | f12jsc |
| complex nonsymmetric | f12jfc (circular or elliptical contours) or f12jgc (general contours) | f12jtc |
| polynomial symmetric | f12jfc (circular or elliptical contours) or f12jgc (general contours) | f12juc |
| polynomial nonsymmetric | f12jfc (circular or elliptical contours) or f12jgc (general contours) | f12jvc |

ARPACK routines,

- Standard or generalized eigenvalue problems for complex matrices,
  - banded matrices,
    - initialize problem and method: f12atc
    - selected eigenvalues, eigenvectors and/or Schur vectors: f12auc
  - general matrices,
    - initialize problem and method: f12anc
    - option setting: f12arc
    - reverse communication implicitly restarted Arnoldi method: f12apc
    - reverse communication monitoring: f12asc
    - selected eigenvalues, eigenvectors and/or Schur vectors of original problem: f12aqc
- Standard or generalized eigenvalue problems for real nonsymmetric matrices,
  - banded matrices,
    - initialize problem and method: f12afc
    - selected eigenvalues, eigenvectors and/or Schur vectors: f12agc
  - general matrices,
    - initialize problem and method: f12aac
    - option setting: f12adc
    - reverse communication implicitly restarted Arnoldi method: f12abc
    - reverse communication monitoring: f12aec
    - selected eigenvalues, eigenvectors and/or Schur vectors of original problem: f12acc
- Standard or generalized eigenvalue problems for real symmetric matrices,
  - banded matrices,
    - initialize problem and method: f12ffc
    - selected eigenvalues, eigenvectors and/or Schur vectors: f12fgc
  - general matrices,
    - initialize problem and method: f12fac
    - option setting: f12fdc
    - reverse communication implicitly restarted Arnoldi (Lanczos) method: f12fbc
    - reverse communication monitoring: f12fec
    - selected eigenvalues, eigenvectors and/or Schur vectors of original problem: f12fcc

NAG FEAST suite,

- contour setting,
  - elliptical contour for nonsymmetric or complex symmetric eigenvalue problems: f12jfc
  - general contour for nonsymmetric or complex symmetric eigenvalue problems: f12jgc
  - real symmetric/complex Hermitian eigenvalue problems: f12jec
- deallocation: f12jzc
- initialization: f12jac
- option setting: f12jbc
- solvers,
  - complex Hermitian: f12jrc
  - complex nonsymmetric: f12jtc
  - complex symmetric: f12jsc
  - polynomial nonsymmetric: f12jvc
  - polynomial symmetric: f12juc
  - real nonsymmetric: f12jkc
  - real symmetric: f12jjc

None.

None.

Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) *Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods* SIAM, Philadelphia

Lehoucq R B (1995) Analysis and implementation of an implicitly restarted iteration *PhD Thesis* Rice University, Houston, Texas

Lehoucq R B (2001) Implicitly restarted Arnoldi methods and subspace iteration *SIAM Journal on Matrix Analysis and Applications* **23** 551–562

Lehoucq R B and Scott J A (1996) An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices *Preprint MCS-P547-1195* Argonne National Laboratory

Lehoucq R B and Sorensen D C (1996) Deflation techniques for an implicitly restarted Arnoldi iteration *SIAM Journal on Matrix Analysis and Applications* **17** 789–821

Lehoucq R B, Sorensen D C and Yang C (1998) *ARPACK Users' Guide: Solution of Large-scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods* SIAM, Philadelphia

Polizzi E (2009) Density-matrix-based algorithm for solving eigenvalue problems *Phys. Rev. B* **79** 115112

Saad Y (1992) *Numerical Methods for Large Eigenvalue Problems* Manchester University Press, Manchester, UK