NAG Toolbox Chapter Introduction

F08 — Least Squares and Eigenvalue Problems (LAPACK)

Scope of the Chapter

This chapter provides functions for the solution of linear least squares problems, eigenvalue problems and singular value problems, as well as associated computations. Functions are provided for both real and complex data.
For a general introduction to the solution of linear least squares problems, you should turn first to Chapter F04. The decision trees, at the end of Chapter F04, direct you to the most appropriate functions in Chapters F04 or F08. Chapters F04 and F08 contain Black Box (or driver) functions which enable standard linear least squares problems to be solved by a call to a single function.
For a general introduction to eigenvalue and singular value problems, you should turn first to Chapter F02. The decision trees, at the end of Chapter F02, direct you to the most appropriate functions in Chapters F02 or F08. Chapters F02 and F08 contain Black Box (or driver) functions which enable standard types of problem to be solved by a call to a single function. Often functions in Chapter F02 call Chapter F08 functions to perform the necessary computational tasks.
The functions in this chapter (Chapter F08) handle only dense, band, tridiagonal and Hessenberg matrices (not matrices with more specialised structures, or general sparse matrices). The tables in Section [Recommendations on Choice and Use of Available Functions] and the decision trees in Section [Decision Trees] direct you to the most appropriate functions in Chapter F08.
The functions in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
It is not expected that you will need to read all of the following sections; rather, you should pick out those sections relevant to your particular problem.

Background to the Problems

This section is only a brief introduction to the numerical solution of linear least squares problems, eigenvalue and singular value problems. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (1996).

Linear Least Squares Problems

The linear least squares problem is
minimize over x: ‖b − Ax‖₂,   (1)
where A is an m by n matrix, b is a given m-element vector and x is an n-element solution vector.
In the most usual case m ≥ n and rank(A) = n, so that A has full rank, and in this case the solution to problem (1) is unique; the problem is also referred to as finding a least squares solution to an overdetermined system of linear equations.
When m < n and rank(A) = m, there are an infinite number of solutions x which exactly satisfy b − Ax = 0. In this case it is often useful to find the unique solution x which minimizes ‖x‖₂, and the problem is referred to as finding a minimum norm solution to an underdetermined system of linear equations.
In the general case when we may have rank(A) < min(m,n) – in other words, A may be rank-deficient – we seek the minimum norm least squares solution x which minimizes both ‖x‖₂ and ‖b − Ax‖₂.
This chapter (Chapter F08) contains driver functions to solve these problems with a single call, as well as computational functions that can be combined with functions in Chapter F07 to solve these linear least squares problems. Chapter F04 also contains Black Box functions to solve these linear least squares problems in standard cases. The next two sections discuss the factorizations that can be used in the solution of linear least squares problems.
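As an illustration of problem (1), the following sketch (using NumPy's LAPACK-backed `numpy.linalg.lstsq` rather than the NAG Toolbox itself; the data are invented for the example) fits a straight line to four points:

```python
import numpy as np

# Overdetermined system: m = 4 observations, n = 2 unknowns (full rank).
# Columns of A are [1, t] for a straight-line fit y ~ x0 + x1*t.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# lstsq computes the x minimizing ||b - Ax||_2.
x, res, rank, sing = np.linalg.lstsq(A, b, rcond=None)

# The minimizer satisfies the normal equations A^T A x = A^T b.
assert np.allclose(A.T @ A @ x, A.T @ b)
```

For the rank-deficient case, `lstsq` returns the minimum norm least squares solution discussed later in this introduction.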

Orthogonal Factorizations and Least Squares Problems

A number of functions are provided for factorizing a general rectangular m by n matrix A as the product of an orthogonal matrix (unitary if complex) and a triangular (or possibly trapezoidal) matrix.
A real matrix Q is orthogonal if QᵀQ = I; a complex matrix Q is unitary if QᴴQ = I. Orthogonal or unitary matrices have the important property that they leave the 2-norm of a vector invariant, so that
‖x‖₂ = ‖Qx‖₂,
if Q is orthogonal or unitary. They usually help to maintain numerical stability because they do not amplify rounding errors.
Orthogonal factorizations are used in the solution of linear least squares problems. They may also be used to perform preliminary steps in the solution of eigenvalue or singular value problems, and are useful tools in the solution of a number of other problems.

QR factorization

The most common, and best known, of the factorizations is the QR factorization given by
A = Q [R; 0],   if m ≥ n,
where R is an n by n upper triangular matrix and Q is an m by m orthogonal (or unitary) matrix. If A is of full rank n, then R is nonsingular. It is sometimes convenient to write the factorization as
A = [Q1 Q2] [R; 0],
which reduces to
A = Q1R,
where Q1 consists of the first n columns of Q, and Q2 the remaining m − n columns.
If m < n, R is trapezoidal, and the factorization can be written
A = Q [R1 R2],   if m < n,
where R1 is upper triangular and R2 is rectangular.
The QR factorization can be used to solve the linear least squares problem (1) when m ≥ n and A is of full rank, since
‖b − Ax‖₂ = ‖Qᵀb − QᵀAx‖₂ = ‖[c1 − Rx; c2]‖₂,
where
c ≡ [c1; c2] = [Q1ᵀb; Q2ᵀb] = Qᵀb;
and c1 is an n-element vector. Then x is the solution of the upper triangular system
Rx = c1.
The residual vector r is given by
r = b − Ax = Q [0; c2].
The residual sum of squares ‖r‖₂² may be computed without forming r explicitly, since
‖r‖₂ = ‖b − Ax‖₂ = ‖c2‖₂.
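The computation just described can be sketched in a few lines of NumPy/SciPy (an illustrative substitute for the chapter's QR functions); `mode='complete'` returns the full m by m Q, so that c2 and hence the residual sum of squares are available without forming r:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))      # full rank with probability 1
b = rng.standard_normal(m)

Q, R = np.linalg.qr(A, mode='complete')   # Q is m-by-m, R is m-by-n
c = Q.T @ b
c1, c2 = c[:n], c[n:]                # c1 has n elements

x = solve_triangular(R[:n, :], c1)   # solve the triangular system R x = c1
rss = c2 @ c2                        # residual sum of squares ||c2||_2^2
```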

LQ factorization

The LQ factorization is given by
A = [L 0] Q = [L 0] [Q1; Q2] = LQ1,   if m ≤ n,
where L is m by m lower triangular, Q is n by n orthogonal (or unitary), Q1 consists of the first m rows of Q, and Q2 the remaining n − m rows.
The LQ factorization of A is essentially the same as the QR factorization of Aᵀ (Aᴴ if A is complex), since
A = [L 0] Q  ⇔  Aᵀ = Qᵀ [Lᵀ; 0].
The LQ factorization may be used to find a minimum norm solution of an underdetermined system of linear equations Ax = b where A is m by n with m < n and has rank m. The solution is given by
x = Qᵀ [L⁻¹b; 0].
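NumPy exposes no LQ routine directly, but since the LQ factorization of A is the transpose of the QR factorization of Aᵀ, the minimum norm solution can be sketched as follows (an illustrative construction, not a NAG routine):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5                          # underdetermined: m < n
A = rng.standard_normal((m, n))      # full row rank with probability 1
b = rng.standard_normal(m)

# A^T = Q1 L^T (reduced QR), so A = L Q1^T with L lower triangular.
Q1, Rt = np.linalg.qr(A.T)           # Q1 is n-by-m with orthonormal columns
L = Rt.T

x = Q1 @ np.linalg.solve(L, b)       # x = Q1 L^{-1} b, the minimum norm solution

assert np.allclose(A @ x, b)         # x satisfies the equations exactly
```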

QR factorization with column pivoting

To solve a linear least squares problem (1) when A is not of full rank, or the rank of A is in doubt, we can perform either a QR factorization with column pivoting or a singular value decomposition.
The QR factorization with column pivoting is given by
A = Q [R; 0] Pᵀ,   m ≥ n,
where Q and R are as before and P is a (real) permutation matrix, chosen (in general) so that
|r11| ≥ |r22| ≥ ⋯ ≥ |rnn|
and moreover, for each k,
|rkk| ≥ ‖Rk:j,j‖₂,   j = k + 1, …, n.
If we put
R = [R11 R12; 0 R22],
where R11 is the leading k by k upper triangular sub-matrix of R then, in exact arithmetic, if rank(A) = k, the whole of the sub-matrix R22 in rows and columns k + 1 to n would be zero. In numerical computation, the aim must be to determine an index k, such that the leading sub-matrix R11 is well-conditioned, and R22 is negligible, so that
R = [R11 R12; 0 R22] ≈ [R11 R12; 0 0].
Then k is the effective rank of A. See Golub and Van Loan (1996) for a further discussion of numerical rank determination.
The so-called basic solution to the linear least squares problem (1) can be obtained from this factorization as
x = P [R11⁻¹ĉ1; 0],
where ĉ1 consists of just the first k elements of c = Qᵀb.
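A pivoted QR and the resulting rank estimate can be sketched with SciPy (`scipy.linalg.qr` with `pivoting=True`; the rank-2 test matrix is constructed for the example):

```python
import numpy as np
from scipy.linalg import qr

# A 4-by-3 matrix of rank 2: the third column is the sum of the first two.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0],
              [1.0, 0.0, 1.0]])

# Column-pivoted QR: A[:, piv] = Q R with |r11| >= |r22| >= ...
Q, R, piv = qr(A, pivoting=True)

# Estimate the effective rank k from the decay of the diagonal of R.
tol = 1e-10 * abs(R[0, 0])
k = int(np.sum(np.abs(np.diag(R)) > tol))
```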

Complete orthogonal factorization

The QR factorization with column pivoting does not enable us to compute a minimum norm solution to a rank-deficient linear least squares problem, unless R12 = 0. However, by applying further orthogonal (or unitary) transformations from the right to the upper trapezoidal matrix [R11 R12], the block R12 can be eliminated:
[R11 R12] Z = [T11 0].
This gives the complete orthogonal factorization
AP = Q [T11 0; 0 0] Zᵀ,
from which the minimum norm solution can be obtained as
x = PZ [T11⁻¹ĉ1; 0].

Other factorizations

The QL and RQ factorizations are given by
A = Q [0; L],   if m ≥ n,
and
A = [0 R] Q,   if m ≤ n.
These factorizations are less commonly used than either the QR or LQ factorizations described above, but have applications in, for example, the computation of generalized QR factorizations.

The Singular Value Decomposition

The singular value decomposition (SVD) of an m by n matrix A is given by
A = UΣVᵀ  (A = UΣVᴴ in the complex case),
where U and V are orthogonal (unitary) and Σ is an m by n diagonal matrix with real diagonal elements, σi, such that
σ1 ≥ σ2 ≥ ⋯ ≥ σmin(m,n) ≥ 0.
The σi are the singular values of A and the first min(m,n) columns of U and V are the left and right singular vectors of A. The singular values and singular vectors satisfy
Avi = σiui   and   Aᵀui = σivi   (or Aᴴui = σivi),
where ui and vi are the ith columns of U and V respectively.
The computation proceeds in the following stages.
  1. The matrix A is reduced to bidiagonal form: A = U1BV1ᵀ if A is real (A = U1BV1ᴴ if A is complex), where U1 and V1 are orthogonal (unitary if A is complex), and B is real and upper bidiagonal when m ≥ n and lower bidiagonal when m < n, so that B is nonzero only on the main diagonal and either on the first superdiagonal (if m ≥ n) or the first subdiagonal (if m < n).
  2. The SVD of the bidiagonal matrix B is computed as B = U2ΣV2ᵀ, where U2 and V2 are orthogonal and Σ is diagonal as described above. The singular vectors of A are then U = U1U2 and V = V1V2.
If m ≫ n, it may be more efficient to first perform a QR factorization of A, and then compute the SVD of the n by n matrix R, since if A = QR and R = UΣVᵀ, then the SVD of A is given by A = (QU)ΣVᵀ.
Similarly, if m ≪ n, it may be more efficient to first perform an LQ factorization of A.
This chapter supports two primary algorithms for computing the SVD of a bidiagonal matrix. They are:
(i) the divide and conquer algorithm;
(ii) the QR algorithm.
The divide and conquer algorithm is much faster than the QR algorithm if singular vectors of large matrices are required.

The Singular Value Decomposition and Least Squares Problems

The SVD may be used to find a minimum norm solution to a (possibly) rank-deficient linear least squares problem (1). The effective rank, k, of A can be determined as the number of singular values which exceed a suitable threshold. Let Σ̂ be the leading k by k sub-matrix of Σ, and V̂ be the matrix consisting of the first k columns of V. Then the solution is given by
x = V̂Σ̂⁻¹ĉ1,
where ĉ1 consists of the first k elements of c = Uᵀb = U2ᵀU1ᵀb.
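This prescription can be sketched directly with NumPy's SVD (an illustrative example on a small rank-deficient matrix constructed for the purpose):

```python
import numpy as np

# Minimum norm least squares solution of a rank-deficient problem via the
# SVD: the third column of A is the sum of the first two, so rank(A) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])
b = np.array([1.0, 2.0, 3.0])

U, s, Vt = np.linalg.svd(A)

k = int(np.sum(s > 1e-10 * s[0]))    # effective rank: singular values above a threshold
c1 = (U.T @ b)[:k]                   # first k elements of c = U^T b
x = Vt[:k, :].T @ (c1 / s[:k])       # x = V_hat Sigma_hat^{-1} c_hat
```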

Generalized Linear Least Squares Problems

The simple type of linear least squares problem described in Section [Linear Least Squares Problems] can be generalized in various ways.
  1. Linear least squares problems with equality constraints:
    find x to minimize S = ‖c − Ax‖₂²  subject to  Bx = d,
    where A is m by n and B is p by n, with p ≤ n ≤ m + p. The equations Bx = d may be regarded as a set of equality constraints on the problem of minimizing S. Alternatively the problem may be regarded as solving an overdetermined system of equations
    [A; B] x = [c; d],
    where some of the equations (those involving B) are to be solved exactly, and the others (those involving A) are to be solved in a least squares sense. The problem has a unique solution on the assumptions that B has full row rank p and the matrix [A; B] has full column rank n. (For linear least squares problems with inequality constraints, refer to Chapter E04.)
  2. General Gauss–Markov linear model problems:
    minimize ‖y‖₂  subject to  d = Ax + By,
    where A is m by n and B is m by p, with n ≤ m ≤ n + p. When B = I, the problem reduces to an ordinary linear least squares problem. When B is square and nonsingular, it is equivalent to a weighted linear least squares problem:
    find x to minimize ‖B⁻¹(d − Ax)‖₂.
    The problem has a unique solution on the assumptions that A has full column rank n, and the matrix (A, B) has full row rank m. Unless B is diagonal, for numerical stability it is generally preferable to solve a weighted linear least squares problem as a general Gauss–Markov linear model problem.
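For the equality-constrained problem (LSE) of item 1, LAPACK's driver uses a generalized RQ factorization (described later in this introduction); a simpler null space method gives the same solution and can be sketched as follows (the data are invented, and B is assumed to have full row rank):

```python
import numpy as np
from scipy.linalg import lstsq, null_space

# minimize ||c - A x||_2 subject to B x = d.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])
c = np.array([1.0, 2.0, 3.0, 4.0])
B = np.array([[1.0, 1.0, 1.0]])      # a single equality constraint
d = np.array([6.0])

x_p = lstsq(B, d)[0]                 # any particular solution of B x = d
N = null_space(B)                    # columns span the null space of B
z = lstsq(A @ N, c - A @ x_p)[0]     # unconstrained least squares in z
x = x_p + N @ z                      # general solution x = x_p + N z

assert np.allclose(B @ x, d)         # the constraints hold exactly
```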

Generalized Orthogonal Factorization and Generalized Linear Least Squares Problems

Generalized QR Factorization

The generalized QR (GQR) factorization of an n by m matrix A and an n by p matrix B is given by the pair of factorizations
A = QR   and   B = QTZ,
where Q and Z are respectively n by n and p by p orthogonal matrices (or unitary matrices if A and B are complex). R has the form
R = [R11; 0],   if n ≥ m (R11 is m by m and the zero block is (n − m) by m),
or
R = [R11 R12],   if n < m (R11 is n by n),
where R11 is upper triangular. T has the form
T = [0 T12],   if n ≤ p (T12 is n by n and the zero block is n by (p − n)),
or
T = [T11; T21],   if n > p (T11 is (n − p) by p and T21 is p by p),
where T12 or T21 is upper triangular.
Note that if B is square and nonsingular, the GQR factorization of A and B implicitly gives the QR factorization of the matrix B⁻¹A:
B⁻¹A = Zᵀ(T⁻¹R),
without explicitly computing the matrix inverse B⁻¹ or the product B⁻¹A.
The GQR factorization can be used to solve the general (Gauss–Markov) linear model problem (GLM) (see Section [Generalized Linear Least Squares Problems], but note that A and B are dimensioned differently there as m by n and p by n respectively). Using the GQR factorization of A and B, we rewrite the equation d = Ax + By as
Qᵀd = QᵀAx + QᵀBy = Rx + TZy.
We partition this as
[d1; d2] = [R11; 0] x + [T11 T12; 0 T22] [y1; y2],
where [d1; d2] ≡ Qᵀd and [y1; y2] ≡ Zy; here R11 is m by m, T11 is m by (p − n + m), T12 is m by (n − m), T22 is (n − m) by (n − m), d1 and d2 have m and n − m elements, and y1 and y2 have p − n + m and n − m elements.
The GLM problem is solved by setting
y1 = 0   and   y2 = T22⁻¹d2,
from which we obtain the desired solutions
x = R11⁻¹(d1 − T12y2)   and   y = Zᵀ [0; y2].
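The GLM solution can also be sketched without a GQR factorization by eliminating x with an orthogonal projector (an illustrative substitute for the GQR-based driver; it assumes A has full column rank and the constraint d = Ax + By is consistent):

```python
import numpy as np
from scipy.linalg import lstsq

# Note: in this section A is n-by-m and B is n-by-p (n rows).
rng = np.random.default_rng(6)
n, m, p = 5, 2, 5
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, p)) + 3.0 * np.eye(n)
d = rng.standard_normal(n)

# Any feasible y must satisfy P B y = P d, where P projects onto the
# orthogonal complement of range(A); lstsq then picks the y of minimum norm.
Q1 = np.linalg.qr(A)[0]              # orthonormal basis of range(A)
P = np.eye(n) - Q1 @ Q1.T
y = lstsq(P @ B, P @ d)[0]
x = lstsq(A, d - B @ y)[0]           # recover x from the now-consistent system

assert np.allclose(A @ x + B @ y, d)
```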

Generalized RQ Factorization

The generalized RQ (GRQ) factorization of an m by n matrix A and a p by n matrix B is given by the pair of factorizations
A = RQ   and   B = ZTQ,
where Q and Z are respectively n by n and p by p orthogonal matrices (or unitary matrices if A and B are complex). R has the form
R = [0 R12],   if m ≤ n (R12 is m by m and the zero block is m by (n − m)),
or
R = [R11; R21],   if m > n (R11 is (m − n) by n and R21 is n by n),
where R12 or R21 is upper triangular. T has the form
T = [T11; 0],   if p ≥ n (T11 is n by n and the zero block is (p − n) by n),
or
T = [T11 T12],   if p < n (T11 is p by p),
where T11 is upper triangular.
Note that if B is square and nonsingular, the GRQ factorization of A and B implicitly gives the RQ factorization of the matrix AB⁻¹:
AB⁻¹ = (RT⁻¹)Zᵀ,
without explicitly computing the matrix inverse B⁻¹ or the product AB⁻¹.
The GRQ factorization can be used to solve the linear equality-constrained least squares problem (LSE) (see Section [Generalized Linear Least Squares Problems]). We use the GRQ factorization of B and A (note that B and A have swapped roles), written as
B = TQ   and   A = ZRQ.
We write the linear equality constraints Bx = d as
TQx = d,
which we partition as:
[0 T12] [x1; x2] = d,   where [x1; x2] ≡ Qx
(T12 is p by p, and x1 and x2 have n − p and p elements respectively). Therefore x2 is the solution of the upper triangular system
T12x2 = d.
Furthermore,
‖Ax − c‖₂ = ‖ZᵀAx − Zᵀc‖₂ = ‖RQx − Zᵀc‖₂.
We partition this expression as:
[R11 R12; 0 R22] [x1; x2] − [c1; c2],
where [c1; c2] ≡ Zᵀc; here R11 is (n − p) by (n − p), R12 is (n − p) by p, R22 is (p + m − n) by p, and c1 and c2 have n − p and p + m − n elements.
To solve the LSE problem, we set
R11x1 + R12x2 − c1 = 0,
which gives x1 as the solution of the upper triangular system
R11x1 = c1 − R12x2.
Finally, the desired solution is given by
x = Qᵀ [x1; x2].

Generalized Singular Value Decomposition (GSVD)

The generalized (or quotient) singular value decomposition of an m by n matrix A and a p by n matrix B is given by the pair of factorizations
A = UΣ1[0, R]Qᵀ   and   B = VΣ2[0, R]Qᵀ.
The matrices in these factorizations have the following properties:
U is m by m, V is p by p, Q is n by n, and all three matrices are orthogonal. If A and B are complex, these matrices are unitary instead of orthogonal, and Qᵀ should be replaced by Qᴴ in the pair of factorizations.
R is r by r, upper triangular and nonsingular. [0, R] is r by n (in other words, the 0 is an r by (n − r) zero matrix). The integer r is the rank of [A; B], and satisfies r ≤ n.
Σ1 is m by r, Σ2 is p by r, both are real, non-negative and diagonal, and Σ1ᵀΣ1 + Σ2ᵀΣ2 = I. Write Σ1ᵀΣ1 = diag(α1², …, αr²) and Σ2ᵀΣ2 = diag(β1², …, βr²), where αi and βi lie in the interval from 0 to 1. The ratios α1/β1, …, αr/βr are called the generalized singular values of the pair A, B. If βi = 0, then the generalized singular value αi/βi is infinite.
Σ1 and Σ2 have the following detailed structures, depending on whether m − r ≥ 0 or m − r < 0. In the first case, m − r ≥ 0, then
Σ1 = [I 0; 0 C; 0 0]   and   Σ2 = [0 S; 0 0],
where both matrices are partitioned into column blocks of k and l columns; the row blocks of Σ1 have k, l and m − k − l rows, and those of Σ2 have l and p − l rows.
Here l is the rank of B, k = r − l, C and S are diagonal matrices satisfying C² + S² = I, and S is nonsingular. We may also identify α1 = ⋯ = αk = 1, αk+i = cii, for i = 1, 2, …, l, β1 = ⋯ = βk = 0, and βk+i = sii, for i = 1, 2, …, l. Thus, the first k generalized singular values α1/β1, …, αk/βk are infinite, and the remaining l generalized singular values are finite.
In the second case, when m − r < 0,
Σ1 = [I 0 0; 0 C 0]
and
Σ2 = [0 S 0; 0 0 I; 0 0 0],
where both matrices are partitioned into column blocks of k, m − k and k + l − m columns; the row blocks of Σ1 have k and m − k rows, and those of Σ2 have m − k, k + l − m and p − l rows.
Again, l is the rank of B, k = r − l, C and S are diagonal matrices satisfying C² + S² = I, and S is nonsingular, and we may identify α1 = ⋯ = αk = 1, αk+i = cii, for i = 1, 2, …, m − k, αm+1 = ⋯ = αr = 0, β1 = ⋯ = βk = 0, βk+i = sii, for i = 1, 2, …, m − k, and βm+1 = ⋯ = βr = 1. Thus, the first k generalized singular values α1/β1, …, αk/βk are infinite, and the remaining l generalized singular values are finite.
Here are some important special cases of the generalized singular value decomposition. First, if B is square and nonsingular, then r = n and the generalized singular value decomposition of A and B is equivalent to the singular value decomposition of AB⁻¹, where the singular values of AB⁻¹ are equal to the generalized singular values of the pair A, B:
AB⁻¹ = (UΣ1RQᵀ)(VΣ2RQᵀ)⁻¹ = U(Σ1Σ2⁻¹)Vᵀ.
Second, if the columns of [Aᵀ Bᵀ]ᵀ are orthonormal, then r = n, R = I and the generalized singular value decomposition of A and B is equivalent to the CS (Cosine–Sine) decomposition of [Aᵀ Bᵀ]ᵀ:
[A; B] = [U 0; 0 V] [Σ1; Σ2] Qᵀ.
Third, the generalized eigenvalues and eigenvectors of AᵀA − λBᵀB can be expressed in terms of the generalized singular value decomposition: let
X = Q [I 0; 0 R⁻¹].
Then
XᵀAᵀAX = [0 0; 0 Σ1ᵀΣ1]   and   XᵀBᵀBX = [0 0; 0 Σ2ᵀΣ2].
Therefore, the columns of X are the eigenvectors of AᵀA − λBᵀB, and the ‘nontrivial’ eigenvalues are the squares of the generalized singular values (see also Section [Generalized Symmetric-definite Eigenvalue Problems]). ‘Trivial’ eigenvalues are those corresponding to the leading n − r columns of X, which span the common null space of AᵀA and BᵀB. The ‘trivial eigenvalues’ are not well defined.
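SciPy exposes no GSVD driver, but the relation just stated can be checked numerically: the squares of the finite generalized singular values of the pair (A, B) are the nontrivial eigenvalues of AᵀA − λBᵀB (the sketch below assumes B has full column rank, so BᵀB is positive definite):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 3))
B = rng.standard_normal((4, 3))      # full column rank with probability 1

# Generalized symmetric-definite eigenproblem A^T A z = lambda B^T B z.
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
gsv = np.sqrt(lam)                   # the generalized singular values alpha_i/beta_i

assert np.all(lam > 0)
```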

Symmetric Eigenvalue Problems

The symmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, z ≠ 0, such that
Az = λz,   A = Aᵀ,   where A is real.
For the Hermitian eigenvalue problem we have
Az = λz,   A = Aᴴ,   where A is complex.
For both problems the eigenvalues λ are real.
When all eigenvalues and eigenvectors have been computed, we write
A = ZΛZᵀ   (or A = ZΛZᴴ if complex),
where Λ is a diagonal matrix whose diagonal elements are the eigenvalues, and Z is an orthogonal (or unitary) matrix whose columns are the eigenvectors. This is the classical spectral factorization of A.
The basic task of the symmetric eigenproblem functions is to compute values of λ and, optionally, corresponding vectors z for a given matrix A. This computation proceeds in the following stages.
  1. The real symmetric or complex Hermitian matrix A is reduced to real tridiagonal form T. If A is real symmetric this decomposition is A = QTQᵀ with Q orthogonal and T symmetric tridiagonal. If A is complex Hermitian, the decomposition is A = QTQᴴ with Q unitary and T, as before, real symmetric tridiagonal.
  2. Eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T are computed. If all eigenvalues and eigenvectors are computed, this is equivalent to factorizing T as T = SΛSᵀ, where S is orthogonal and Λ is diagonal. The diagonal entries of Λ are the eigenvalues of T, which are also the eigenvalues of A, and the columns of S are the eigenvectors of T; the eigenvectors of A are the columns of Z = QS, so that A = ZΛZᵀ (ZΛZᴴ when A is complex Hermitian).
This chapter supports four primary algorithms for computing eigenvalues and eigenvectors of real symmetric matrices and complex Hermitian matrices. They are:
(i) the divide-and-conquer algorithm;
(ii) the QR algorithm;
(iii) bisection followed by inverse iteration;
(iv) the Relatively Robust Representation (RRR).
The divide-and-conquer algorithm is generally more efficient than the traditional QR algorithm for computing all eigenvalues and eigenvectors, but the RRR algorithm tends to be fastest of all. For further information and references see Anderson et al. (1999).
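SciPy's LAPACK-backed `eigh` exposes these algorithm choices through its `driver` argument ('ev' for QR, 'evd' for divide-and-conquer, 'evr' for RRR, 'evx' for bisection plus inverse iteration); an illustrative sketch:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                    # real symmetric

w, Z = eigh(A, driver='evd')         # divide-and-conquer (LAPACK dsyevd)

# Spectral factorization A = Z diag(w) Z^T.
assert np.allclose(Z @ np.diag(w) @ Z.T, A)
```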

Generalized Symmetric-definite Eigenvalue Problems

This section is concerned with the solution of the generalized eigenvalue problems Az = λBz, ABz = λz, and BAz = λz, where A and B are real symmetric or complex Hermitian and B is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of B as either B = LLᵀ or B = UᵀU (LLᴴ or UᴴU in the Hermitian case).
With B = LLᵀ, we have
Az = λBz  ⇒  (L⁻¹AL⁻ᵀ)(Lᵀz) = λ(Lᵀz).
Hence the eigenvalues of Az = λBz are those of Cy = λy, where C is the symmetric matrix C = L⁻¹AL⁻ᵀ and y = Lᵀz. In the complex case C is Hermitian with C = L⁻¹AL⁻ᴴ and y = Lᴴz.
Table 1 summarises how each of the three types of problem may be reduced to standard form Cy = λy, and how the eigenvectors z of the original problem may be recovered from the eigenvectors y of the reduced problem. The table applies to real problems; for complex problems, transposed matrices must be replaced by conjugate-transposes.

  Type of problem   Factorization of B   Reduction             Recovery of eigenvectors
  1. Az = λBz       B = LLᵀ,             C = L⁻¹AL⁻ᵀ,          z = L⁻ᵀy,
                    B = UᵀU              C = U⁻ᵀAU⁻¹           z = U⁻¹y
  2. ABz = λz       B = LLᵀ,             C = LᵀAL,             z = L⁻ᵀy,
                    B = UᵀU              C = UAUᵀ              z = U⁻¹y
  3. BAz = λz       B = LLᵀ,             C = LᵀAL,             z = Ly,
                    B = UᵀU              C = UAUᵀ              z = Uᵀy

Table 1
Reduction of generalized symmetric-definite eigenproblems to standard problems
When the generalized symmetric-definite problem has been reduced to the corresponding standard problem Cy = λy, this may then be solved using the functions described in the previous section. No special functions are needed to recover the eigenvectors z of the generalized problem from the eigenvectors y of the standard problem, because these computations are simple applications of Level 2 or Level 3 BLAS.
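The reduction of Table 1 (type 1, lower Cholesky) can be checked with SciPy, whose generalized `eigh` performs the reduction internally; the matrices are random examples:

```python
import numpy as np
from scipy.linalg import cholesky, eigh

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # real symmetric
X = rng.standard_normal((4, 4))
B = X @ X.T + 4.0 * np.eye(4)        # symmetric positive definite

w, Z = eigh(A, B)                    # solves A z = lambda B z

# Explicit reduction: C = L^{-1} A L^{-T} has the same eigenvalues.
L = cholesky(B, lower=True)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T

assert np.allclose(A @ Z, B @ Z @ np.diag(w))
```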

Packed Storage for Symmetric Matrices

Functions which handle symmetric matrices are usually designed so that they use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length n(n + 1) / 2n(n+1)/2; that is, the storage is almost halved.
This storage format is referred to as packed storage; it is described in Section [Packed storage] in the F07 Chapter Introduction.
Functions designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.
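The packed layout itself is easy to sketch in NumPy (LAPACK 'U' packed storage, column by column; the matrix is a small invented example):

```python
import numpy as np

n = 4
M = np.arange(1.0, 17.0).reshape(n, n)
A = (M + M.T) / 2                    # symmetric n-by-n

# Pack the upper triangle column by column into a 1-D array of length
# n(n+1)/2: element A[i, j] with i <= j lands at ap[i + j*(j+1)/2].
ap = np.concatenate([A[:j + 1, j] for j in range(n)])

assert ap.size == n * (n + 1) // 2   # storage is almost halved
```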

Band Matrices

A band matrix is one whose elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. The storage scheme for band matrices is described in Section [Band storage] in the F07 Chapter Introduction.
If the problem is the generalized symmetric definite eigenvalue problem Az = λBz and the matrices A and B are additionally banded, the matrix C as defined in Section [Generalized Symmetric-definite Eigenvalue Problems] is, in general, full. We can reduce the problem to a banded standard problem by modifying the definition of C thus:
C = XᵀAX,   where X = U⁻¹Q or L⁻ᵀQ,
where Q is an orthogonal matrix chosen to ensure that C has bandwidth no greater than that of A.
A further refinement is possible when A and B are banded, which halves the amount of work required to form C. Instead of the standard Cholesky factorization of B as UᵀU or LLᵀ, we use a split Cholesky factorization B = SᵀS, where
S = [U11 0; M21 L22],
with U11 upper triangular and L22 lower triangular of order approximately n/2; S has the same bandwidth as B.
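For the standard banded symmetric problem, SciPy provides `eig_banded`, which accepts the band storage directly; the sketch below uses the tridiagonal matrix with 2 on the diagonal and −1 on the off-diagonals, whose eigenvalues are known in closed form:

```python
import numpy as np
from scipy.linalg import eig_banded

n = 6
a_band = np.zeros((2, n))            # upper band storage: 1 superdiagonal
a_band[0, 1:] = -1.0                 # superdiagonal (a_band[0, 0] is unused)
a_band[1, :] = 2.0                   # main diagonal

w, v = eig_banded(a_band)            # eigenvalues ascending, v orthonormal

# Eigenvalues of tridiag(-1, 2, -1) are 2 - 2 cos(k pi/(n + 1)), k = 1..n.
w_exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
assert np.allclose(w, w_exact)
```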

Nonsymmetric Eigenvalue Problems

The nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that
Av = λv.
More precisely, a vector v as just defined is called a right eigenvector of A, and a vector u ≠ 0 satisfying
uᵀA = λuᵀ   (uᴴA = λuᴴ when u is complex)
is called a left eigenvector of A.
A real matrix A may have complex eigenvalues, occurring as complex conjugate pairs.
This problem can be solved via the Schur factorization of A, defined in the real case as
A = ZTZᵀ,
where Z is an orthogonal matrix and T is an upper quasi-triangular matrix with 1 by 1 and 2 by 2 diagonal blocks, the 2 by 2 blocks corresponding to complex conjugate pairs of eigenvalues of A. In the complex case, the Schur factorization is
A = ZTZᴴ,
where Z is unitary and T is a complex upper triangular matrix.
The columns of Z are called the Schur vectors. For each k (1 ≤ k ≤ n), the first k columns of Z form an orthonormal basis for the invariant subspace corresponding to the first k eigenvalues on the diagonal of T. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of k eigenvalues occupy the k leading positions on the diagonal of T.
The two basic tasks of the nonsymmetric eigenvalue functions are to compute, for a given matrix A, all n values of λ and, if desired, their associated right eigenvectors v and/or left eigenvectors u, and the Schur factorization.
These two basic tasks can be performed in the following stages.
  1. A general matrix A is reduced to upper Hessenberg form H which is zero below the first subdiagonal. The reduction may be written A = QHQᵀ with Q orthogonal if A is real, or A = QHQᴴ with Q unitary if A is complex.
  2. The upper Hessenberg matrix H is reduced to Schur form T, giving the Schur factorization H = STSᵀ (for H real) or H = STSᴴ (for H complex). The matrix S (the Schur vectors of H) may optionally be computed as well. Alternatively S may be postmultiplied into the matrix Q determined in stage 1, to give the matrix Z = QS, the Schur vectors of A. The eigenvalues are obtained from the diagonal elements or diagonal blocks of T.
  3. Given the eigenvalues, the eigenvectors may be computed in two different ways. Inverse iteration can be performed on H to compute the eigenvectors of H, and then the eigenvectors can be multiplied by the matrix Q in order to transform them to eigenvectors of A. Alternatively the eigenvectors of T can be computed, and optionally transformed to those of H or A if the matrix S or Z is supplied.
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix. This is discussed further in Section [Balancing and condition for the nonsymmetric eigenproblem] below.
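Stages 1 and 2 are carried out internally by SciPy's `schur` (an illustrative stand-in for the chapter's Hessenberg and Schur functions):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))

# Real Schur form: A = Z T Z^T with T upper quasi-triangular (1-by-1 and
# 2-by-2 diagonal blocks for real and complex conjugate eigenvalues).
T, Z = schur(A, output='real')

assert np.allclose(Z @ T @ Z.T, A)
assert np.allclose(Z.T @ Z, np.eye(5))
```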

Generalized Nonsymmetric Eigenvalue Problem

The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, λ, and corresponding eigenvectors, v ≠ 0, such that
Av = λBv.
More precisely, a vector v as just defined is called a right eigenvector of the matrix pair (A,B), and a vector u ≠ 0 satisfying
u^T A = λu^T B   (u^H A = λu^H B when u is complex)
is called a left eigenvector of the matrix pair (A,B).
If B is singular then the problem has one or more infinite eigenvalues λ = ∞, corresponding to Bv = 0. Note that if A is nonsingular, then the equivalent problem μAv = Bv is perfectly well defined and an infinite eigenvalue corresponds to μ = 0. To deal with both finite (including zero) and infinite eigenvalues, the functions in this chapter do not compute λ explicitly, but rather return a pair of numbers (α,β) such that if β ≠ 0
λ = α/β
and if α ≠ 0 and β = 0 then λ = ∞. β is always returned as real and non-negative. Of course, computationally an infinite eigenvalue may correspond to a small β rather than an exact zero.
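The (α,β) representation can be seen in SciPy's generalized eigensolver (shown only as an illustration; this is not a NAG Toolbox interface), which can return the pairs instead of forming λ = α/β:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
B[:, 0] = 0.0            # make B singular, so an infinite eigenvalue is expected

# homogeneous_eigvals=True returns the pairs (alpha_i, beta_i) rather than
# lambda_i = alpha_i / beta_i, so infinite eigenvalues are representable.
ab, v = eig(A, B, homogeneous_eigvals=True)
alpha, beta = ab
with np.errstate(divide='ignore', invalid='ignore'):
    lam = alpha / beta   # inf (or meaningless) where beta is (near) zero
```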
For a given pair (A,B) the set of all matrices of the form (A − λB) is called a matrix pencil, and λ and v are said to be an eigenvalue and eigenvector of the pencil (A − λB). If A and B are both singular and share a common null space then
det(A − λB) ≡ 0
so that the pencil (A − λB) is singular for all λ. In other words any λ can be regarded as an eigenvalue. In exact arithmetic a singular pencil will have α = β = 0 for some (α,β). Computationally if some pair (α,β) is small then the pencil is singular, or nearly singular, and no reliance can be placed on any of the computed eigenvalues. Singular pencils can also manifest themselves in other ways; see, in particular, Sections 2.3.5.2 and 4.11.1.4 of Anderson et al. (1999) for further details.
The generalized eigenvalue problem can be solved via the generalized Schur factorization of the pair (A,B), defined in the real case as
A = QSZ^T,  B = QTZ^T,
where Q and Z are orthogonal, T is upper triangular with non-negative diagonal elements and S is upper quasi-triangular with 1 by 1 and 2 by 2 diagonal blocks, the 2 by 2 blocks corresponding to complex conjugate pairs of eigenvalues. In the complex case, the generalized Schur factorization is
A = QSZ^H,  B = QTZ^H,
where Q and Z are unitary and S and T are upper triangular, with T having real non-negative diagonal elements. The columns of Q and Z are called respectively the left and right generalized Schur vectors and span pairs of deflating subspaces of A and B, which are a generalization of invariant subspaces.
It is possible to order the generalized Schur factorization so that any desired set of k eigenvalues corresponds to the k leading positions on the diagonals of the pair (S,T).
The two basic tasks of the generalized nonsymmetric eigenvalue functions are to compute, for a given pair (A,B), all n values of λ and, if desired, their associated right eigenvectors v and/or left eigenvectors u, and the generalized Schur factorization.
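Such a reordered generalized Schur factorization can be sketched with SciPy's ordqz (an illustration only, not a NAG Toolbox call); here the sort rule 'iuc' moves eigenvalues inside the unit circle to the leading positions:

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Generalized Schur form A = Q S Z^T, B = Q T Z^T, with the selected
# eigenvalues (|lambda| < 1) ordered to the leading diagonal positions.
S, T, alpha, beta, Q, Z = ordqz(A, B, sort='iuc')

assert np.allclose(Q @ S @ Z.T, A)
assert np.allclose(Q @ T @ Z.T, B)
```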
These two basic tasks can be performed in the following stages.
  1. The matrix pair (A,B) is reduced to generalized upper Hessenberg form (H,R), where H is upper Hessenberg (zero below the first subdiagonal) and R is upper triangular. The reduction may be written as A = Q1 H Z1^T, B = Q1 R Z1^T in the real case with Q1 and Z1 orthogonal, and A = Q1 H Z1^H, B = Q1 R Z1^H in the complex case with Q1 and Z1 unitary.
  2. The generalized upper Hessenberg form (H,R) is reduced to the generalized Schur form (S,T) using the generalized Schur factorization H = Q2 S Z2^T, R = Q2 T Z2^T in the real case with Q2 and Z2 orthogonal, and H = Q2 S Z2^H, R = Q2 T Z2^H in the complex case. The generalized Schur vectors of (A,B) are given by Q = Q1 Q2, Z = Z1 Z2. The eigenvalues are obtained from the diagonal elements (or blocks) of the pair (S,T).
  3. Given the eigenvalues, the eigenvectors of the pair (S,T) can be computed, and optionally transformed to those of (H,R) or (A,B).
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix pair. This is discussed further in Section [Balancing the generalized eigenvalue problem] below.

The Sylvester Equation and the Generalized Sylvester Equation

The Sylvester equation is a matrix equation of the form
AX + XB = C,
where A, B and C are given matrices: A is m by m, B is n by n, and C and the solution matrix X are m by n. The solution of a special case of this equation occurs in the computation of the condition number for an invariant subspace, but a combination of functions in this chapter allows the solution of the general Sylvester equation.
Functions are also provided for solving a special case of the generalized Sylvester equations
AR − LB = C,  DR − LE = F,
where (A,D), (B,E) and (C,F) are given matrix pairs, and R and L are the solution matrices.
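A standard Sylvester equation of this form can be solved directly, for example with SciPy's Bartels and Stewart style solver (shown only as an illustration, not a NAG Toolbox call):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))   # m by m
B = rng.standard_normal((4, 4))   # n by n
C = rng.standard_normal((3, 4))   # m by n

X = solve_sylvester(A, B, C)      # solves A X + X B = C

assert np.allclose(A @ X + X @ B, C)
```

A unique solution exists provided A and −B have no eigenvalue in common, which holds with probability one for the random matrices used here.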

Error and Perturbation Bounds and Condition Numbers

In this section we discuss the effects of rounding errors in the solution process and the effects of uncertainties in the data, on the solution to the problem. A number of the functions in this chapter return information, such as condition numbers, that allow these effects to be assessed. First we discuss some notation used in the error bounds of later sections.
The bounds usually contain the factor p(n) (or p(m,n)), which grows as a function of the matrix dimension n (or matrix dimensions m and n). It measures how errors can grow as a function of the matrix dimension, and represents a potentially different function for each problem. In practice, it usually grows just linearly; p(n) ≤ 10n is often true, although generally only much weaker bounds can actually be proved. We normally describe p(n) as a ‘modestly growing’ function of n. For detailed derivations of various p(n), see Golub and Van Loan (1996) and Wilkinson (1965).
For linear equation (see Chapter F07) and least squares solvers, we consider bounds on the relative error ‖x − x̂‖/‖x‖ in the computed solution x̂, where x is the true solution. For eigenvalue problems we consider bounds on the error |λi − λ̂i| in the ith computed eigenvalue λ̂i, where λi is the true ith eigenvalue. For singular value problems we similarly consider bounds |σi − σ̂i|.
Bounding the error in computed eigenvectors and singular vectors v̂i is more subtle because these vectors are not unique: even though we restrict ‖v̂i‖2 = 1 and ‖vi‖2 = 1, we may still multiply them by arbitrary constants of absolute value 1. So to avoid ambiguity we bound the angular difference between v̂i and the true vector vi, so that
θ(vi,v̂i) = acute angle between vi and v̂i
         = arccos|vi^H v̂i|.
(2)
Here arccos(θ) is in the standard range: 0 ≤ arccos(θ) < π. When θ(vi,v̂i) is small, we can choose a constant α with absolute value 1 so that ‖αvi − v̂i‖2 ≈ θ(vi,v̂i).
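The angular difference (2) is straightforward to evaluate numerically; the helper below is an illustrative NumPy sketch, not a library routine:

```python
import numpy as np

def vector_angle(v, vhat):
    """Acute angle theta(v, vhat) = arccos |v^H vhat| for unit vectors."""
    v = v / np.linalg.norm(v)
    vhat = vhat / np.linalg.norm(vhat)
    # Clip, in case rounding pushes |v^H vhat| marginally above 1.
    return np.arccos(min(1.0, abs(np.vdot(v, vhat))))

v = np.array([1.0, 0.0])
assert np.isclose(vector_angle(v, -v), 0.0)              # sign ambiguity removed
assert np.isclose(vector_angle(v, np.array([0.0, 1.0])), np.pi / 2)
```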
In addition to bounds for individual eigenvectors, bounds can be obtained for the spaces spanned by collections of eigenvectors. These may be much more accurately determined than the individual eigenvectors which span them. These spaces are called invariant subspaces in the case of eigenvectors, because if v is any vector in the space, Av is also in the space, where A is the matrix. Again, we will use angle to measure the difference between a computed space Ŝ and the true space S:
θ(S,Ŝ) = acute angle between S and Ŝ
       = max_{s∈S, s≠0} min_{ŝ∈Ŝ, ŝ≠0} θ(s,ŝ)   or   max_{ŝ∈Ŝ, ŝ≠0} min_{s∈S, s≠0} θ(s,ŝ)
(3)
θ(S,Ŝ) may be computed as follows. Let S be a matrix whose columns are orthonormal and span S. Similarly let Ŝ be an orthonormal matrix with columns spanning Ŝ. Then
θ(S,Ŝ) = arccos σmin(S^H Ŝ).
Finally, we remark on the accuracy of the bounds when they are large. Relative errors like ‖x̂ − x‖/‖x‖ and angular errors like θ(v̂i,vi) are only of interest when they are much less than 1. Some stated bounds are not strictly true when they are close to 1, but rigorous bounds are much more complicated and supply little extra information in the interesting case of small errors. These bounds are indicated by using the symbol ≲, or ‘approximately less than’, instead of the usual ≤. Thus, when these bounds are close to 1 or greater, they indicate that the computed answer may have no significant digits at all, but do not otherwise bound the error.
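The formula θ(S,Ŝ) = arccos σmin(S^H Ŝ) for equal-dimension subspaces can be evaluated as in the following NumPy/SciPy sketch (SciPy's subspace_angles offers a library implementation of the same idea):

```python
import numpy as np
from scipy.linalg import orth

def subspace_angle(S, Shat):
    """arccos of the smallest singular value of S^H Shat, with the
    columns of S and Shat first orthonormalized."""
    U, V = orth(S), orth(Shat)
    smin = np.linalg.svd(U.conj().T @ V, compute_uv=False).min()
    return np.arccos(min(1.0, smin))

S = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span of e1, e2 in R^3
T = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # span of e1, e3 in R^3

assert np.isclose(subspace_angle(S, S), 0.0)
assert np.isclose(subspace_angle(S, T), np.pi / 2)
```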
A number of functions in this chapter return error estimates and/or condition number estimates directly. In other cases Anderson et al. (1999) gives code fragments to illustrate the computation of these estimates, and a number of the Chapter F08 example programs, for the driver functions, implement these code fragments.

Least squares problems

The conventional error analysis of linear least squares problems goes as follows. The problem is to find the x minimizing ‖Ax − b‖2. Let x̂ be the solution computed using one of the methods described above. We discuss the most common case, where A is overdetermined (i.e., has more rows than columns) and has full rank.
Then the computed solution x̂ has a small normwise backward error. In other words x̂ minimizes ‖(A + E)x̂ − (b + f)‖2, where
max(‖E‖2/‖A‖2, ‖f‖2/‖b‖2) ≤ p(n)ε
and p(n) is a modestly growing function of n and ε is the machine precision. Let κ2(A) = σmax(A)/σmin(A), ρ = ‖Ax − b‖2, and sin(θ) = ρ/‖b‖2. Then if p(n)ε is small enough, the error x̂ − x is bounded by
‖x − x̂‖2/‖x‖2 ≲ p(n)ε {2κ2(A)/cos(θ) + tan(θ)κ2²(A)}.
If A is rank-deficient, the problem can be regularized by treating all singular values less than a user-specified threshold as exactly zero. See Golub and Van Loan (1996) for error bounds in this case, as well as for the underdetermined case.
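All the quantities in this bound are computable from the data; the following NumPy sketch evaluates it for a random full-rank problem (the choice p(n) = 10n is an illustrative assumption, not part of the analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 20, 5
A = rng.standard_normal((m, n))        # overdetermined, full rank
b = rng.standard_normal(m)

x, _, rank, sv = np.linalg.lstsq(A, b, rcond=None)
kappa2 = sv[0] / sv[-1]                # sigma_max(A) / sigma_min(A)
rho = np.linalg.norm(A @ x - b)        # residual norm
theta = np.arcsin(min(1.0, rho / np.linalg.norm(b)))
eps = np.finfo(float).eps
p = 10 * n                             # illustrative p(n)
bound = p * eps * (2 * kappa2 / np.cos(theta) + np.tan(theta) * kappa2**2)

assert rank == n and bound > 0
```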
The solution of the overdetermined, full-rank problem may also be characterised as the solution of the linear system of equations
( I    A ) ( r )   ( b )
( A^T  0 ) ( x ) = ( 0 ).
By solving this linear system (see Chapter F07), component-wise error bounds can also be obtained (see Arioli et al. (1989)).

The singular value decomposition

The usual error analysis of the SVD algorithm is as follows (see Golub and Van Loan (1996)).
The computed SVD, Û Σ̂ V̂^T, is nearly the exact SVD of A + E, i.e., A + E = (Û + δÛ)Σ̂(V̂ + δV̂)^T is the true SVD, so that Û + δÛ and V̂ + δV̂ are both orthogonal, where ‖E‖2/‖A‖2 ≤ p(m,n)ε, ‖δÛ‖ ≤ p(m,n)ε, and ‖δV̂‖ ≤ p(m,n)ε. Here p(m,n) is a modestly growing function of m and n and ε is the machine precision. Each computed singular value σ̂i differs from the true σi by an amount satisfying the bound
|σ̂i − σi| ≤ p(m,n)εσ1.
Thus large singular values (those near σ1) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed left singular vector ûi and the true ui satisfies the approximate bound
θ(ûi,ui) ≲ p(m,n)ε‖A‖2 / gapi
where
gapi = min_{j≠i} |σi − σj|
is the absolute gap between σi and the nearest other singular value. Thus, if σi is close to other singular values, its corresponding singular vector ui may be inaccurate. The same bound applies to the computed right singular vector v̂i and the true vector vi. The gaps may be easily obtained from the computed singular values.
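The gaps, and hence the approximate bounds, are cheap to evaluate from the computed singular values, as in this NumPy sketch (the value passed for p(m,n)ε is illustrative):

```python
import numpy as np

def sv_vector_bounds(s, p_eps):
    """Approximate bounds p(m,n)*eps*||A||_2 / gap_i for each singular
    vector, where ||A||_2 = s[0] for singular values s in decreasing order."""
    s = np.asarray(s, dtype=float)
    gaps = np.array([np.abs(np.delete(s, i) - s[i]).min() for i in range(s.size)])
    return p_eps * s[0] / gaps

s = np.array([10.0, 5.0, 4.9, 1.0])
bounds = sv_vector_bounds(s, p_eps=1e-14)
# The clustered pair (5.0, 4.9) has the smallest gap, hence the largest bounds.
assert bounds[1] == bounds.max()
```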
Let Ŝ be the space spanned by a collection of computed left singular vectors {ûi, i ∈ I}, where I is a subset of the integers from 1 to n. Let S be the corresponding true space. Then
θ(Ŝ,S) ≲ p(m,n)ε‖A‖2 / gapI
where
gapI = min {|σi − σj| for i ∈ I, j ∉ I}
is the absolute gap between the singular values in I and the nearest other singular value. Thus, a cluster of close singular values which is far away from any other singular value may have a well determined space Ŝ even if its individual singular vectors are ill-conditioned. The same bound applies to a set of right singular vectors {v̂i, i ∈ I}.
In the special case of bidiagonal matrices, the singular values and singular vectors may be computed much more accurately (see Demmel and Kahan (1990)). A bidiagonal matrix B has nonzero entries only on the main diagonal and the diagonal immediately above it (or immediately below it). Reduction of a dense matrix to bidiagonal form B can introduce additional errors, so the following bounds for the bidiagonal case do not apply to the dense case.
Using the functions in this chapter, each computed singular value of a bidiagonal matrix is accurate to nearly full relative accuracy, no matter how tiny it is, so that
|σ̂i − σi| ≤ p(m,n)εσi.
The computed left singular vector ûi has an angular error at most about
θ(ûi,ui) ≲ p(m,n)ε / relgapi
where
relgapi = min_{j≠i} |σi − σj| / (σi + σj)
is the relative gap between σi and the nearest other singular value. The same bound applies to the right singular vector v̂i and vi. Since the relative gap may be much larger than the absolute gap, this error bound may be much smaller than the previous one. The relative gaps may be easily obtained from the computed singular values.

The symmetric eigenproblem

The usual error analysis of the symmetric eigenproblem is as follows (see Parlett (1998)).
The computed eigendecomposition ẐΛ̂Ẑ^T is nearly the exact eigendecomposition of A + E, i.e., A + E = (Ẑ + δẐ)Λ̂(Ẑ + δẐ)^T is the true eigendecomposition, so that Ẑ + δẐ is orthogonal, where ‖E‖2/‖A‖2 ≤ p(n)ε and ‖δẐ‖2 ≤ p(n)ε and p(n) is a modestly growing function of n and ε is the machine precision. Each computed eigenvalue λ̂i differs from the true λi by an amount satisfying the bound
|λ̂i − λi| ≤ p(n)ε‖A‖2.
Thus large eigenvalues (those near max_i |λi| = ‖A‖2) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed unit eigenvector ẑi and the true zi satisfies the approximate bound
θ(ẑi,zi) ≲ p(n)ε‖A‖2 / gapi
if p(n)ε is small enough, where
gapi = min_{j≠i} |λi − λj|
is the absolute gap between λi and the nearest other eigenvalue. Thus, if λi is close to other eigenvalues, its corresponding eigenvector zi may be inaccurate. The gaps may be easily obtained from the computed eigenvalues.
Let Ŝ be the invariant subspace spanned by a collection of eigenvectors {ẑi, i ∈ I}, where I is a subset of the integers from 1 to n. Let S be the corresponding true subspace. Then
θ(Ŝ,S) ≲ p(n)ε‖A‖2 / gapI
where
gapI = min {|λi − λj| for i ∈ I, j ∉ I}
is the absolute gap between the eigenvalues in I and the nearest other eigenvalue. Thus, a cluster of close eigenvalues which is far away from any other eigenvalue may have a well determined invariant subspace Ŝ even if its individual eigenvectors are ill-conditioned.
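These eigenvalue and eigenvector bounds are easily evaluated from a computed spectrum; a NumPy sketch follows (the factor p(n) = 10n is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                          # symmetric test matrix

lam, Z = np.linalg.eigh(A)
eps = np.finfo(float).eps
norm_A = np.abs(lam).max()                 # ||A||_2 = max_i |lambda_i|
eig_bound = 10 * lam.size * eps * norm_A   # bound on |hat(lambda)_i - lambda_i|
# absolute gap for each eigenvalue, and the resulting eigenvector angle bound
gaps = np.array([np.abs(np.delete(lam, i) - lam[i]).min() for i in range(lam.size)])
vec_bounds = eig_bound / gaps

assert np.all(gaps > 0) and np.all(vec_bounds > 0)
```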
In the special case of a real symmetric tridiagonal matrix TT, functions in this chapter can compute the eigenvalues and eigenvectors much more accurately. See Anderson et al. (1999) for further details.

The generalized symmetric-definite eigenproblem

The three types of problem to be considered are A − λB, AB − λI and BA − λI. In each case A and B are real symmetric (or complex Hermitian) and B is positive definite. We consider each case in turn, assuming that functions in this chapter are used to transform the generalized problem to the standard symmetric problem, followed by the solution of the symmetric problem. In all cases
gapi = min_{j≠i} |λi − λj|
is the absolute gap between λi and the nearest other eigenvalue.
  1. A − λB. The computed eigenvalues λ̂i can differ from the true eigenvalues λi by an amount
     |λ̂i − λi| ≲ p(n)ε‖B⁻¹‖2‖A‖2.
     The angular difference between the computed eigenvector ẑi and the true eigenvector zi is
     θ(ẑi,zi) ≲ p(n)ε‖B⁻¹‖2‖A‖2 (κ2(B))^(1/2) / gapi.
  2. AB − λI or BA − λI. The computed eigenvalues λ̂i can differ from the true eigenvalues λi by an amount
     |λ̂i − λi| ≲ p(n)ε‖B‖2‖A‖2.
     The angular difference between the computed eigenvector ẑi and the true eigenvector zi is
     θ(ẑi,zi) ≲ p(n)ε‖B‖2‖A‖2 (κ2(B))^(1/2) / gapi.
These error bounds are large when B is ill-conditioned with respect to inversion (κ2(B) is large). It is often the case that the eigenvalues and eigenvectors are much better conditioned than indicated here. One way to get tighter bounds is effective when the diagonal entries of B differ widely in magnitude, as for example with a graded matrix.
  1. A − λB. Let D = diag(b11^(−1/2), …, bnn^(−1/2)) be a diagonal matrix. Then replace B by DBD and A by DAD in the above bounds.
  2. AB − λI or BA − λI. Let D = diag(b11^(−1/2), …, bnn^(−1/2)) be a diagonal matrix. Then replace B by DBD and A by D⁻¹AD⁻¹ in the above bounds.
Further details can be found in Anderson et al. (1999).

The nonsymmetric eigenproblem

The nonsymmetric eigenvalue problem is more complicated than the symmetric eigenvalue problem. In this section, we just summarise the bounds. Further details can be found in Anderson et al. (1999).
We let λ̂i be the ith computed eigenvalue and λi the ith true eigenvalue. Let v̂i be the corresponding computed right eigenvector, and vi the true right eigenvector (so Avi = λi vi). If I is a subset of the integers from 1 to n, we let λI denote the average of the selected eigenvalues: λI = (∑_{i∈I} λi)/(∑_{i∈I} 1), and similarly for λ̂I. We also let S_I denote the subspace spanned by {vi, i ∈ I}; it is called a right invariant subspace because if v is any vector in S_I then Av is also in S_I. Ŝ_I is the corresponding computed subspace.
The algorithms for the nonsymmetric eigenproblem are normwise backward stable: they compute the exact eigenvalues, eigenvectors and invariant subspaces of slightly perturbed matrices A + E, where ‖E‖ ≤ p(n)ε‖A‖. Some of the bounds are stated in terms of ‖E‖2 and others in terms of ‖E‖F; one may use p(n)ε for either quantity.
Functions are provided so that, for each (λ̂i, v̂i) pair the two values si and sepi, or for a selected subset I of eigenvalues the values sI and sepI, can be obtained, for which the error bounds in Table 2 are true for sufficiently small ‖E‖ (which is why they are called asymptotic):
Simple eigenvalue     |λ̂i − λi| ≲ ‖E‖2/si
Eigenvalue cluster    |λ̂I − λI| ≲ ‖E‖2/sI
Eigenvector           θ(v̂i,vi) ≲ ‖E‖F/sepi
Invariant subspace    θ(Ŝ_I,S_I) ≲ ‖E‖F/sepI
Table 2
Asymptotic error bounds for the nonsymmetric eigenproblem
If the problem is ill-conditioned, the asymptotic bounds may only hold for extremely small ‖E‖. The global error bounds of Table 3 are guaranteed to hold for all ‖E‖F < s × sep/4:
Simple eigenvalue     |λ̂i − λi| ≤ n‖E‖2/si                          Holds for all E
Eigenvalue cluster    |λ̂I − λI| ≤ 2‖E‖2/sI                          Requires ‖E‖F < sI × sepI/4
Eigenvector           θ(v̂i,vi) ≤ arctan(2‖E‖F/(sepi − 4‖E‖F/si))    Requires ‖E‖F < si × sepi/4
Invariant subspace    θ(Ŝ_I,S_I) ≤ arctan(2‖E‖F/(sepI − 4‖E‖F/sI))  Requires ‖E‖F < sI × sepI/4
Table 3
Global error bounds for the nonsymmetric eigenproblem

Balancing and condition for the nonsymmetric eigenproblem

There are two preprocessing steps one may perform on a matrix A in order to make its eigenproblem easier. The first is permutation, or reordering the rows and columns to make A more nearly upper triangular (closer to Schur form): A′ = PAP^T, where P is a permutation matrix. If A is permutable to upper triangular form (or close to it), then no floating-point operations (or very few) are needed to reduce it to Schur form. The second is scaling by a diagonal matrix D to make the rows and columns of A′ more nearly equal in norm: A″ = DA′D⁻¹. Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter 11 of Wilkinson and Reinsch (1971)). We refer to these two operations as balancing.
Permuting has no effect on the condition numbers or their interpretation as described previously. Scaling, however, does change their interpretation and further details can be found in Anderson et al. (1999).
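Both preprocessing steps are available in SciPy as matrix_balance, shown below purely as an illustration (the underlying LAPACK computation is the dgebal-style balancing used internally by the driver functions):

```python
import numpy as np
from scipy.linalg import matrix_balance

# A badly scaled matrix: entries differ by up to 16 orders of magnitude.
A = np.array([[1.0, 1e8, 0.0],
              [1e-8, 1.0, 1e-8],
              [0.0, 1e8, 1.0]])

# B = T^{-1} A T, where T combines the permutation and the diagonal
# scaling (integer powers of 2, so no rounding is introduced).
B, T = matrix_balance(A)

assert np.allclose(np.linalg.inv(T) @ A @ T, B)
# A similarity transformation leaves the eigenvalues unchanged.
assert np.allclose(np.sort_complex(np.linalg.eigvals(B)),
                   np.sort_complex(np.linalg.eigvals(A)))
```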

The generalized nonsymmetric eigenvalue problem

The algorithms for the generalized nonsymmetric eigenvalue problem are normwise backward stable: they compute the exact eigenvalues (as the pairs (α,β)), eigenvectors and deflating subspaces of slightly perturbed pairs (A + E, B + F), where
‖(E,F)‖F ≤ p(n)ε‖(A,B)‖F.
Asymptotic and global error bounds can be obtained, which are generalizations of those given in Tables 2 and 3. See Section 4.11 of Anderson et al. (1999) for details. Functions are provided to compute estimates of reciprocal condition numbers for eigenvalues and eigenspaces.

Balancing the generalized eigenvalue problem

As with the standard nonsymmetric eigenvalue problem, there are two preprocessing steps one may perform on a matrix pair (A,B) in order to make its eigenproblem easier: permutation and scaling, which together are referred to as balancing, as indicated in the following two steps.
  1. The balancing function first attempts to permute A and B to block upper triangular form by a similarity transformation:
     PAP^T = F = ( F11 F12 F13 )        PBP^T = G = ( G11 G12 G13 )
                 (  0  F22 F23 ),                   (  0  G22 G23 ),
                 (  0   0  F33 )                    (  0   0  G33 )
     where P is a permutation matrix and F11, F33, G11 and G33 are upper triangular. The diagonal elements of the matrix pairs (F11,G11) and (F33,G33) are generalized eigenvalues of (A,B). The rest of the generalized eigenvalues are given by the matrix pair (F22,G22). Subsequent operations to compute the eigenvalues of (A,B) need only be applied to the matrix pair (F22,G22); this can save a significant amount of work if (F22,G22) is smaller than the original matrix pair (A,B). If no suitable permutation exists (as is often the case), then there is no gain in efficiency or accuracy.
  2. The balancing function applies a diagonal similarity transformation to (F,G) to make the rows and columns of (F22,G22) as close as possible in norm:
     DFD⁻¹ = ( I         ) ( F11 F12 F13 ) ( I           )
             (    D22    ) (  0  F22 F23 ) (    D22⁻¹    ),
             (         I ) (  0   0  F33 ) (           I )
     and similarly for DGD⁻¹. This transformation usually improves the accuracy of computed generalized eigenvalues and eigenvectors. However, there are exceptional occasions when this transformation increases the norm of the pencil; in this case accuracy could be lower with diagonal balancing.
     See Anderson et al. (1999) for further details.

Other problems

Error bounds for other problems such as the generalized linear least squares problem and generalized singular value decomposition can be found in Anderson et al. (1999).

Block Partitioned Algorithms

A number of the functions in this chapter use what is termed a block partitioned algorithm. This means that at each major step of the algorithm a block of rows or columns is updated, and much of the computation is performed by matrix-matrix operations on these blocks. The matrix-matrix operations are performed by calls to the Level 3 BLAS, which are the key to achieving high performance on many modern computers. In the case of the QR algorithm for reducing an upper Hessenberg matrix to Schur form, a multishift strategy is used in order to improve performance. See Golub and Van Loan (1996) or Anderson et al. (1999) for more about block partitioned algorithms and the multishift strategy.
The performance of a block partitioned algorithm varies to some extent with the block size – that is, the number of rows or columns per block. This is a machine-dependent parameter, which is set to a suitable value when the library is implemented on each range of machines. You do not normally need to be aware of what value is being used. Different block sizes may be used for different functions. Values in the range 16 to 64 are typical.
On more conventional machines there is often no advantage from using a block partitioned algorithm, and then the functions use an unblocked algorithm (effectively a block size of 1), relying solely on calls to the Level 2 BLAS.

Recommendations on Choice and Use of Available Functions

Available Functions

The tables in the following sub-sections show the functions which are provided for performing different computations on different types of matrices. Each entry in the table gives the NAG function name and the LAPACK double precision name.
Black box (or driver) functions are provided for the solution of most problems. In a number of cases there are simple drivers, which just return the solution to the problem, as well as expert drivers, which return additional information, such as condition number estimates, and may offer additional facilities such as balancing. The following sub-sections give tables for the driver functions.

Driver functions

Linear least squares problems (LLS)
Operation                                          real                       complex
solve LLS using QR or LQ factorization             nag_lapack_dgels (f08aa)   nag_lapack_zgels (f08an)
solve LLS using complete orthogonal factorization  nag_lapack_dgelsy (f08ba)  nag_lapack_zgelsy (f08bn)
solve LLS using SVD                                nag_lapack_dgelss (f08ka)  nag_lapack_zgelss (f08kn)
solve LLS using divide-and-conquer SVD             nag_lapack_dgelsd (f08kc)  nag_lapack_zgelsd (f08kq)
Generalized linear least squares problems (LSE and GLM)
Operation                     real                        complex
solve LSE problem using GRQ   nag_lapack_dgglse (f08za)   nag_lapack_zgglse (f08zn)
solve GLM problem using GQR   nag_lapack_dggglm (f08zb)   nag_lapack_zggglm (f08zp)
Symmetric eigenvalue problems (SEP)
Function and storage scheme   real                        complex
simple driver                 nag_lapack_dsyev (f08fa)    nag_lapack_zheev (f08fn)
divide-and-conquer driver     nag_lapack_dsyevd (f08fc)   nag_lapack_zheevd (f08fq)
expert driver                 nag_lapack_dsyevx (f08fb)   nag_lapack_zheevx (f08fp)
RRR driver                    nag_lapack_dsyevr (f08fd)   nag_lapack_zheevr (f08fr)
packed storage
simple driver                 nag_lapack_dspev (f08ga)    nag_lapack_zhpev (f08gn)
divide-and-conquer driver     nag_lapack_dspevd (f08gc)   nag_lapack_zhpevd (f08gq)
expert driver                 nag_lapack_dspevx (f08gb)   nag_lapack_zhpevx (f08gp)
band matrix
simple driver                 nag_lapack_dsbev (f08ha)    nag_lapack_zhbev (f08hn)
divide-and-conquer driver     nag_lapack_dsbevd (f08hc)   nag_lapack_zhbevd (f08hq)
expert driver                 nag_lapack_dsbevx (f08hb)   nag_lapack_zhbevx (f08hp)
tridiagonal matrix
simple driver                 nag_lapack_dstev (f08ja)
divide-and-conquer driver     nag_lapack_dstevd (f08jc)
expert driver                 nag_lapack_dstevx (f08jb)
RRR driver                    nag_lapack_dstevr (f08jd)
Nonsymmetric eigenvalue problem (NEP)
Function and storage scheme              real                        complex
simple driver for Schur factorization    nag_lapack_dgees (f08pa)    nag_lapack_zgees (f08pn)
expert driver for Schur factorization    nag_lapack_dgeesx (f08pb)   nag_lapack_zgeesx (f08pp)
simple driver for eigenvalues/vectors    nag_lapack_dgeev (f08na)    nag_lapack_zgeev (f08nn)
expert driver for eigenvalues/vectors    nag_lapack_dgeevx (f08nb)   nag_lapack_zgeevx (f08np)
Singular value decomposition (SVD)
Function and storage scheme               real                        complex
simple driver                             nag_lapack_dgesvd (f08kb)   nag_lapack_zgesvd (f08kp)
divide-and-conquer driver                 nag_lapack_dgesdd (f08kd)   nag_lapack_zgesdd (f08kr)
simple driver for one-sided Jacobi SVD    nag_lapack_dgesvj (f08kj)
expert driver for one-sided Jacobi SVD    nag_lapack_dgejsv (f08kh)
Generalized symmetric definite eigenvalue problems (GSEP)
Function and storage scheme   real                        complex
simple driver                 nag_lapack_dsygv (f08sa)    nag_lapack_zhegv (f08sn)
divide-and-conquer driver     nag_lapack_dsygvd (f08sc)   nag_lapack_zhegvd (f08sq)
expert driver                 nag_lapack_dsygvx (f08sb)   nag_lapack_zhegvx (f08sp)
packed storage
simple driver                 nag_lapack_dspgv (f08ta)    nag_lapack_zhpgv (f08tn)
divide-and-conquer driver     nag_lapack_dspgvd (f08tc)   nag_lapack_zhpgvd (f08tq)
expert driver                 nag_lapack_dspgvx (f08tb)   nag_lapack_zhpgvx (f08tp)
band matrix
simple driver                 nag_lapack_dsbgv (f08ua)    nag_lapack_zhbgv (f08un)
divide-and-conquer driver     nag_lapack_dsbgvd (f08uc)   nag_lapack_zhbgvd (f08uq)
expert driver                 nag_lapack_dsbgvx (f08ub)   nag_lapack_zhbgvx (f08up)
Generalized nonsymmetric eigenvalue problem (GNEP)
Function and storage scheme              real                        complex
simple driver for Schur factorization    nag_lapack_dgges (f08xa)    nag_lapack_zgges (f08xn)
expert driver for Schur factorization    nag_lapack_dggesx (f08xb)   nag_lapack_zggesx (f08xp)
simple driver for eigenvalues/vectors    nag_lapack_dggev (f08wa)    nag_lapack_zggev (f08wn)
expert driver for eigenvalues/vectors    nag_lapack_dggevx (f08wb)   nag_lapack_zggevx (f08wp)
Generalized singular value decomposition (GSVD)
Function and storage scheme   real                        complex
singular values/vectors       nag_lapack_dggsvd (f08va)   nag_lapack_zggsvd (f08vn)

Computational functions

It is possible to solve problems by calling two or more functions in sequence. Some common sequences of functions are indicated in the tables in the following sub-sections; an asterisk (*) against a function name means that the sequence of calls is illustrated in the example program for that function.
Orthogonal factorizations
Functions are provided for QR factorization (with and without column pivoting), and for LQ, QL and RQ factorizations (without pivoting only), of a general real or complex rectangular matrix. A function is also provided for the RQ factorization of a real or complex upper trapezoidal matrix. (LAPACK refers to this as the RZ factorization.)
The factorization functions do not form the matrix Q explicitly, but represent it as a product of elementary reflectors (see Section [Representation of orthogonal or unitary matrices]). Additional functions are provided to generate all or part of Q explicitly if it is required, or to apply Q in its factored form to another matrix (specifically to compute one of the matrix products QC, Q^T C, CQ or CQ^T, with Q^T replaced by Q^H if C and Q are complex).
  Factorize without pivoting Factorize with pivoting Generate matrix Q Apply matrix Q
QR factorization, real matrices nag_lapack_dgeqrf (f08ae) nag_lapack_dgeqp3 (f08bf) nag_lapack_dorgqr (f08af) nag_lapack_dormqr (f08ag)
LQ factorization, real matrices nag_lapack_dgelqf (f08ah)   nag_lapack_dorglq (f08aj) nag_lapack_dormlq (f08ak)
QL factorization, real matrices nag_lapack_dgeqlf (f08ce)   nag_lapack_dorgql (f08cf) nag_lapack_dormql (f08cg)
RQ factorization, real matrices nag_lapack_dgerqf (f08ch)   nag_lapack_dorgrq (f08cj) nag_lapack_dormrq (f08ck)
RQ factorization, real upper trapezoidal matrices nag_lapack_dtzrzf (f08bh)     nag_lapack_dormrz (f08bk)
QR factorization, complex matrices nag_lapack_zgeqrf (f08as) nag_lapack_zgeqp3 (f08bt) nag_lapack_zungqr (f08at) nag_lapack_zunmqr (f08au)
LQ factorization, complex matrices nag_lapack_zgelqf (f08av)   nag_lapack_zunglq (f08aw) nag_lapack_zunmlq (f08ax)
QL factorization, complex matrices nag_lapack_zgeqlf (f08cs)   nag_lapack_zungql (f08ct) nag_lapack_zunmql (f08cu)
RQ factorization, complex matrices nag_lapack_zgerqf (f08cv)   nag_lapack_zungrq (f08cw) nag_lapack_zunmrq (f08cx)
RQ factorization, complex upper trapezoidal matrices nag_lapack_ztzrzf (f08bv)     nag_lapack_zunmrz (f08bx)
To solve linear least squares problems, as described in Sections [ factorization] or [ factorization with column pivoting], functions based on the QRQR factorization can be used:
real data, full-rank problem nag_lapack_dgeqrf (f08ae)*, nag_lapack_dormqr (f08ag)
complex data, full-rank problem nag_lapack_zgeqrf (f08as)*, nag_lapack_zunmqr (f08au)
real data, rank-deficient problem nag_lapack_dgeqp3 (f08bf)*, nag_lapack_dormqr (f08ag)
complex data, rank-deficient problem nag_lapack_zgeqp3 (f08bt)*, nag_lapack_zunmqr (f08au)
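The algebra these call sequences implement can be sketched outside the Toolbox; the following NumPy fragment (illustrative only, with numpy.linalg.qr standing in for the factorize/apply pair) solves a full-rank least squares problem via the QR factorization:

```python
import numpy as np

# Solve min ||Ax - b||_2 for a full-rank A via QR: if A = QR then
# x = R^{-1} (Q^T b), computed by back-substitution on the triangular R.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)            # reduced factorization: Q is 3x2, R is 2x2
x = np.linalg.solve(R, Q.T @ b)

# The residual of a least squares solution is orthogonal to range(A).
assert np.allclose(A.T @ (A @ x - b), 0.0)
print(x)                          # [8., -3.] for this data
```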
To find the minimum norm solution of under-determined systems of linear equations, as described in Section [LQ factorization], functions based on the LQ factorization can be used:
real data, full-rank problem nag_lapack_dgelqf (f08ah)*, nag_lapack_dormlq (f08ak)
complex data, full-rank problem nag_lapack_zgelqf (f08av)*, nag_lapack_zunmlq (f08ax)
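The same computation can be sketched with NumPy (illustrative only, not a Toolbox call); NumPy has no LQ routine, so the sketch obtains the LQ factorization of A from the QR factorization of A^T:

```python
import numpy as np

# Minimum norm solution of an underdetermined full-rank system Ax = b.
# A = LQ corresponds to the QR factorization of A^T:
# A^T = Q R implies A = R^T Q^T with L = R^T lower triangular.
A = np.array([[1.0, 2.0, 2.0]])   # one equation, three unknowns
b = np.array([9.0])

Q, R = np.linalg.qr(A.T)
y = np.linalg.solve(R.T, b)       # forward-substitution: L y = b
x = Q @ y                         # x lies in range(A^T), hence minimum norm

assert np.allclose(A @ x, b)
print(x)                          # [1., 2., 2.], the shortest solution
```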
Generalized orthogonal factorizations
Functions are provided for the generalized QR and RQ factorizations of real and complex matrix pairs.
  Factorize
Generalized QR factorization, real matrices nag_lapack_dggqrf (f08ze)
Generalized RQ factorization, real matrices nag_lapack_dggrqf (f08zf)
Generalized QR factorization, complex matrices nag_lapack_zggqrf (f08zs)
Generalized RQ factorization, complex matrices nag_lapack_zggrqf (f08zt)
Singular value problems
Functions are provided to reduce a general real or complex rectangular matrix A to real bidiagonal form B by an orthogonal transformation A = QBP^T (or by a unitary transformation A = QBP^H if A is complex). Different functions allow a full matrix A to be stored conventionally (see Section [Conventional storage]), or a band matrix to use band storage (see Section [Band storage] in the F07 Chapter Introduction).
The functions for reducing full matrices do not form the matrix Q or P explicitly; additional functions are provided to generate all or part of them, or to apply them to another matrix, as with the functions for orthogonal factorizations. Explicit generation of Q or P is required before using the bidiagonal QR algorithm to compute left or right singular vectors of A.
The functions for reducing band matrices have options to generate Q or P if required.
Further functions are provided to compute all or part of the singular value decomposition of a real bidiagonal matrix; the same functions can be used to compute the singular value decomposition of a real or complex matrix that has been reduced to bidiagonal form.
  Reduce to bidiagonal form Generate matrix Q or P^T Apply matrix Q or P Reduce band matrix to bidiagonal form SVD of bidiagonal form (QR algorithm) SVD of bidiagonal form (divide and conquer)
real matrices nag_lapack_dgebrd (f08ke) nag_lapack_dorgbr (f08kf) nag_lapack_dormbr (f08kg) nag_lapack_dgbbrd (f08le) nag_lapack_dbdsqr (f08me) nag_lapack_dbdsdc (f08md)
complex matrices nag_lapack_zgebrd (f08ks) nag_lapack_zungbr (f08kt) nag_lapack_zunmbr (f08ku) nag_lapack_zgbbrd (f08ls) nag_lapack_zbdsqr (f08ms)  
Given the singular values, nag_lapack_ddisna (f08fl) is provided to compute the reciprocal condition numbers for the left or right singular vectors of a real or complex matrix.
To compute the singular values and vectors of a rectangular matrix, as described in Section [The Singular Value Decomposition], use the following sequence of calls:
Rectangular matrix (standard storage)
real matrix, singular values and vectors nag_lapack_dgebrd (f08ke), nag_lapack_dorgbr (f08kf)*, nag_lapack_dbdsqr (f08me)
complex matrix, singular values and vectors nag_lapack_zgebrd (f08ks), nag_lapack_zungbr (f08kt)*, nag_lapack_zbdsqr (f08ms)
Rectangular matrix (banded)
real matrix, singular values and vectors nag_lapack_dgbbrd (f08le), nag_lapack_dorgbr (f08kf), nag_lapack_dbdsqr (f08me)
complex matrix, singular values and vectors nag_lapack_zgbbrd (f08ls), nag_lapack_zungbr (f08kt), nag_lapack_zbdsqr (f08ms)
To use the singular value decomposition to solve a linear least squares problem, as described in Section [The Singular Value Decomposition and Least Squares Problems], the following functions are required:
real data nag_lapack_dgebrd (f08ke), nag_lapack_dorgbr (f08kf), nag_lapack_dormbr (f08kg), nag_lapack_dbdsqr (f08me)
complex data nag_lapack_zgebrd (f08ks), nag_lapack_zungbr (f08kt), nag_lapack_zunmbr (f08ku), nag_lapack_zbdsqr (f08ms)
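The underlying computation can be sketched with NumPy (illustrative only; numpy.linalg.svd stands in for the reduce/solve sequence above):

```python
import numpy as np

# Least squares via the SVD: A = U diag(s) V^T gives x = V diag(1/s) U^T b.
# For a rank-deficient A, terms with negligible s would be omitted here.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
b = np.array([3.0, 4.0, 5.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

print(x)                          # [3., 2.] for this data
```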
Generalized singular value decomposition
Functions are provided to compute the generalized SVD of a real or complex matrix pair (A, B) in upper trapezoidal form. Functions are also provided to reduce a general real or complex matrix pair to the required upper trapezoidal form.
  Reduce to trapezoidal form Generalized SVD of trapezoidal form
real matrices nag_lapack_dggsvp (f08ve) nag_lapack_dtgsja (f08ye)
complex matrices nag_lapack_zggsvp (f08vs) nag_lapack_ztgsja (f08ys)
Symmetric eigenvalue problems
Functions are provided to reduce a real symmetric or complex Hermitian matrix A to real tridiagonal form T by an orthogonal similarity transformation A = QTQ^T (or by a unitary transformation A = QTQ^H if A is complex). Different functions allow a full matrix A to be stored conventionally (see Section [Conventional storage] in the F07 Chapter Introduction) or in packed storage (see Section [Packed storage] in the F07 Chapter Introduction); or a band matrix to use band storage (see Section [Band storage] in the F07 Chapter Introduction).
The functions for reducing full matrices do not form the matrix Q explicitly; additional functions are provided to generate Q, or to apply it to another matrix, as with the functions for orthogonal factorizations. Explicit generation of Q is required before using the QR algorithm to find all the eigenvectors of A; application of Q to another matrix is required after eigenvectors of T have been found by inverse iteration, in order to transform them to eigenvectors of A.
The functions for reducing band matrices have an option to generate Q if required.
  Reduce to tridiagonal form Generate matrix Q Apply matrix Q
real symmetric matrices nag_lapack_dsytrd (f08fe) nag_lapack_dorgtr (f08ff) nag_lapack_dormtr (f08fg)
real symmetric matrices (packed storage) nag_lapack_dsptrd (f08ge) nag_lapack_dopgtr (f08gf) nag_lapack_dopmtr (f08gg)
real symmetric band matrices nag_lapack_dsbtrd (f08he)    
complex Hermitian matrices nag_lapack_zhetrd (f08fs) nag_lapack_zungtr (f08ft) nag_lapack_zunmtr (f08fu)
complex Hermitian matrices (packed storage) nag_lapack_zhptrd (f08gs) nag_lapack_zupgtr (f08gt) nag_lapack_zupmtr (f08gu)
complex Hermitian band matrices nag_lapack_zhbtrd (f08hs)    
Given the eigenvalues, nag_lapack_ddisna (f08fl) is provided to compute the reciprocal condition numbers for the eigenvectors of a real symmetric or complex Hermitian matrix.
A variety of functions are provided to compute eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T, some computing all eigenvalues and eigenvectors, some computing selected eigenvalues and eigenvectors. The same functions can be used to compute eigenvalues and eigenvectors of a real symmetric or complex Hermitian matrix which has been reduced to tridiagonal form.
Eigenvalues and eigenvectors of real symmetric tridiagonal matrices:
The original (non-reduced) matrix is Real or Complex Hermitian
all eigenvalues (root-free QR algorithm) nag_lapack_dsterf (f08jf)
all eigenvalues (root-free QR algorithm called by divide-and-conquer) nag_lapack_dstevd (f08jc) or nag_lapack_dstedc (f08jh)
all eigenvalues (RRR) nag_lapack_dstegr (f08jl)
selected eigenvalues (bisection) nag_lapack_dstebz (f08jj)
The original (non-reduced) matrix is Real
all eigenvalues and eigenvectors (QR algorithm) nag_lapack_dsteqr (f08je)
all eigenvalues and eigenvectors (divide-and-conquer) nag_lapack_dstevd (f08jc) or nag_lapack_dstedc (f08jh)
all eigenvalues and eigenvectors (RRR) nag_lapack_dstegr (f08jl)
all eigenvalues and eigenvectors (positive definite case) nag_lapack_dpteqr (f08jg)
selected eigenvectors (inverse iteration) nag_lapack_dstein (f08jk)
The original (non-reduced) matrix is Complex Hermitian
all eigenvalues and eigenvectors (QR algorithm) nag_lapack_zsteqr (f08js)
all eigenvalues and eigenvectors (divide and conquer) nag_lapack_zstedc (f08jv)
all eigenvalues and eigenvectors (RRR) nag_lapack_zstegr (f08jy)
all eigenvalues and eigenvectors (positive definite case) nag_lapack_zpteqr (f08ju)
selected eigenvectors (inverse iteration) nag_lapack_zstein (f08jx)
The following sequences of calls may be used to compute various combinations of eigenvalues and eigenvectors, as described in Section [Symmetric Eigenvalue Problems].
Sequences for computing eigenvalues and eigenvectors
Real Symmetric matrix (standard storage)
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_dsyevd (f08fc)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_dsytrd (f08fe), nag_lapack_dorgtr (f08ff)*, nag_lapack_dsteqr (f08je)
all eigenvalues and eigenvectors (RRR) nag_lapack_dsytrd (f08fe), nag_lapack_dormtr (f08fg), nag_lapack_dstegr (f08jl)
selected eigenvalues and eigenvectors (bisection and inverse iteration) nag_lapack_dsytrd (f08fe), nag_lapack_dormtr (f08fg), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk)*
Real Symmetric matrix (packed storage)
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_dspevd (f08gc)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_dsptrd (f08ge), nag_lapack_dopgtr (f08gf)*, nag_lapack_dsteqr (f08je)
all eigenvalues and eigenvectors (RRR) nag_lapack_dsptrd (f08ge), nag_lapack_dopmtr (f08gg), nag_lapack_dstegr (f08jl)
selected eigenvalues and eigenvectors (bisection and inverse iteration) nag_lapack_dsptrd (f08ge), nag_lapack_dopmtr (f08gg), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk)*
Real Symmetric banded matrix
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_dsbevd (f08hc)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_dsbtrd (f08he)*, nag_lapack_dsteqr (f08je)
Complex Hermitian matrix (standard storage)
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_zheevd (f08fq)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_zhetrd (f08fs), nag_lapack_zungtr (f08ft)*, nag_lapack_zsteqr (f08js)
all eigenvalues and eigenvectors (RRR) nag_lapack_zhetrd (f08fs), nag_lapack_zunmtr (f08fu), nag_lapack_zstegr (f08jy)
selected eigenvalues and eigenvectors (bisection and inverse iteration) nag_lapack_zhetrd (f08fs), nag_lapack_zunmtr (f08fu), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx)*
Complex Hermitian matrix (packed storage)
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_zhpevd (f08gq)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_zhptrd (f08gs), nag_lapack_zupgtr (f08gt)*, nag_lapack_zsteqr (f08js)
all eigenvalues and eigenvectors (RRR) nag_lapack_zhptrd (f08gs), nag_lapack_zupmtr (f08gu) and nag_lapack_zstegr (f08jy)
selected eigenvalues and eigenvectors (bisection and inverse iteration) nag_lapack_zhptrd (f08gs), nag_lapack_zupmtr (f08gu), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx)*
Complex Hermitian banded matrix
all eigenvalues and eigenvectors (using divide-and-conquer) nag_lapack_zhbevd (f08hq)
all eigenvalues and eigenvectors (using QR algorithm) nag_lapack_zhbtrd (f08hs)*, nag_lapack_zsteqr (f08js)
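Each of these multi-step sequences produces what a single dense symmetric eigensolver delivers. As a purely illustrative cross-check outside the Toolbox, NumPy's eigh performs the same reduce / solve / backtransform steps internally:

```python
import numpy as np

# All eigenvalues and eigenvectors of a real symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eigh(A)          # w ascending; columns of V are eigenvectors

# Verify the decomposition A V = V diag(w) with V orthogonal.
assert np.allclose(A @ V, V * w)
assert np.allclose(V.T @ V, np.eye(2))
print(w)                          # [1., 3.]
```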
Generalized symmetric-definite eigenvalue problems
Functions are provided for reducing each of the problems Ax = λBx, ABx = λx or BAx = λx to an equivalent standard eigenvalue problem Cy = λy. Different functions allow the matrices to be stored either conventionally or in packed storage. The positive definite matrix B must first be factorized using a function from Chapter F07. There is also a function which reduces the problem Ax = λBx, where A and B are banded, to an equivalent banded standard eigenvalue problem; this uses a split Cholesky factorization, for which a function in Chapter F08 is provided.
  Reduce to standard problem Reduce to standard problem (packed storage) Reduce to standard problem (band matrices)
real symmetric matrices nag_lapack_dsygst (f08se) nag_lapack_dspgst (f08te) nag_lapack_dsbgst (f08ue)
complex Hermitian matrices nag_lapack_zhegst (f08ss) nag_lapack_zhpgst (f08ts) nag_lapack_zhbgst (f08us)
The equivalent standard problem can then be solved using the functions discussed in Section [Symmetric eigenvalue problems]. For example, to compute all the eigenvalues, the following functions must be called:
real symmetric-definite problem nag_lapack_dpotrf (f07fd), nag_lapack_dsygst (f08se)*, nag_lapack_dsytrd (f08fe), nag_lapack_dsterf (f08jf)
real symmetric-definite problem, packed storage nag_lapack_dpptrf (f07gd), nag_lapack_dspgst (f08te)*, nag_lapack_dsptrd (f08ge), nag_lapack_dsterf (f08jf)
real symmetric-definite banded problem nag_lapack_dpbstf (f08uf)*, nag_lapack_dsbgst (f08ue)*, nag_lapack_dsbtrd (f08he), nag_lapack_dsterf (f08jf)
complex Hermitian-definite problem nag_lapack_zpotrf (f07fr), nag_lapack_zhegst (f08ss)*, nag_lapack_zhetrd (f08fs), nag_lapack_dsterf (f08jf)
complex Hermitian-definite problem, packed storage nag_lapack_zpptrf (f07gr), nag_lapack_zhpgst (f08ts)*, nag_lapack_zhptrd (f08gs), nag_lapack_dsterf (f08jf)
complex Hermitian-definite banded problem nag_lapack_zpbstf (f08ut)*, nag_lapack_zhbgst (f08us)*, nag_lapack_zhbtrd (f08hs), nag_lapack_dsterf (f08jf)
If eigenvectors are computed, the eigenvectors of the equivalent standard problem must be transformed back to those of the original generalized problem, as indicated in Section [Generalized Symmetric-definite Eigenvalue Problems]; suitable functions may be used for this.
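The reduction itself is simple to state; a NumPy sketch (illustrative only, not a Toolbox call) of the Ax = λBx case using the Cholesky factor of B:

```python
import numpy as np

# Reduce Ax = lambda Bx (B symmetric positive definite) to the standard
# problem Cy = lambda y with C = L^{-1} A L^{-T}, where B = L L^T.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 0.0],
              [0.0, 1.0]])

L = np.linalg.cholesky(B)
Y = np.linalg.solve(L, A)         # L^{-1} A
C = np.linalg.solve(L, Y.T).T     # L^{-1} A L^{-T}, still symmetric

w = np.linalg.eigvalsh(C)         # the generalized eigenvalues of (A, B)

# Cross-check against det(A - lambda B) = 0 via B^{-1} A.
assert np.allclose(np.sort(w),
                   np.sort(np.real(np.linalg.eigvals(np.linalg.solve(B, A)))))
```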
Nonsymmetric eigenvalue problems
Functions are provided to reduce a general real or complex matrix A to upper Hessenberg form H by an orthogonal similarity transformation A = QHQ^T (or by a unitary transformation A = QHQ^H if A is complex).
These functions do not form the matrix Q explicitly; additional functions are provided to generate Q, or to apply it to another matrix, as with the functions for orthogonal factorizations. Explicit generation of Q is required before using the QR algorithm on H to compute the Schur vectors; application of Q to another matrix is needed after eigenvectors of H have been computed by inverse iteration, in order to transform them to eigenvectors of A.
Functions are also provided to balance the matrix before reducing it to Hessenberg form, as described in Section [Balancing and condition for the nonsymmetric eigenproblem]. Companion functions are required to transform Schur vectors or eigenvectors of the balanced matrix to those of the original matrix.
  Reduce to Hessenberg form Generate matrix Q Apply matrix Q Balance Backtransform vectors after balancing
real matrices nag_lapack_dgehrd (f08ne) nag_lapack_dorghr (f08nf) nag_lapack_dormhr (f08ng) nag_lapack_dgebal (f08nh) nag_lapack_dgebak (f08nj)
complex matrices nag_lapack_zgehrd (f08ns) nag_lapack_zunghr (f08nt) nag_lapack_zunmhr (f08nu) nag_lapack_zgebal (f08nv) nag_lapack_zgebak (f08nw)
Functions are provided to compute the eigenvalues and all or part of the Schur factorization of an upper Hessenberg matrix. Eigenvectors may be computed either from the upper Hessenberg form by inverse iteration, or from the Schur form by back-substitution; these approaches are equally satisfactory for computing individual eigenvectors, but the latter may provide a more accurate basis for a subspace spanned by several eigenvectors.
Additional functions estimate the sensitivities of computed eigenvalues and eigenvectors, as discussed in Section [The nonsymmetric eigenproblem].
  Eigenvalues and Schur factorization (QR algorithm) Eigenvectors from Hessenberg form (inverse iteration) Eigenvectors from Schur factorization Sensitivities of eigenvalues and eigenvectors
real matrices nag_lapack_dhseqr (f08pe) nag_lapack_dhsein (f08pk) nag_lapack_dtrevc (f08qk) nag_lapack_dtrsna (f08ql)
complex matrices nag_lapack_zhseqr (f08ps) nag_lapack_zhsein (f08px) nag_lapack_ztrevc (f08qx) nag_lapack_ztrsna (f08qy)
Finally functions are provided for reordering the Schur factorization, so that eigenvalues appear in any desired order on the diagonal of the Schur form. The functions nag_lapack_dtrexc (f08qf) and nag_lapack_ztrexc (f08qt) simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. The functions nag_lapack_dtrsen (f08qg) and nag_lapack_ztrsen (f08qu) perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the invariant subspace corresponding to the specified cluster of eigenvalues. These functions can also compute the sensitivities of the cluster of eigenvalues and the invariant subspace.
  Reorder Schur factorization Reorder Schur factorization, find basis for invariant subspace and estimate sensitivities
real matrices nag_lapack_dtrexc (f08qf) nag_lapack_dtrsen (f08qg)
complex matrices nag_lapack_ztrexc (f08qt) nag_lapack_ztrsen (f08qu)
The following sequences of calls may be used to compute various combinations of eigenvalues, Schur vectors and eigenvectors, as described in Section [Nonsymmetric Eigenvalue Problems]:
real matrix, all eigenvalues and Schur factorization nag_lapack_dgehrd (f08ne), nag_lapack_dorghr (f08nf)*, nag_lapack_dhseqr (f08pe)
real matrix, all eigenvalues and selected eigenvectors nag_lapack_dgehrd (f08ne), nag_lapack_dormhr (f08ng), nag_lapack_dhseqr (f08pe), nag_lapack_dhsein (f08pk)
real matrix, all eigenvalues and eigenvectors (with balancing) nag_lapack_dgebal (f08nh)*, nag_lapack_dgehrd (f08ne), nag_lapack_dorghr (f08nf), nag_lapack_dgebak (f08nj), nag_lapack_dhseqr (f08pe), nag_lapack_dhsein (f08pk)
complex matrix, all eigenvalues and Schur factorization nag_lapack_zgehrd (f08ns), nag_lapack_zunghr (f08nt)*, nag_lapack_zhseqr (f08ps)
complex matrix, all eigenvalues and selected eigenvectors nag_lapack_zgehrd (f08ns), nag_lapack_zunmhr (f08nu), nag_lapack_zhseqr (f08ps), nag_lapack_zhsein (f08px)*
complex matrix, all eigenvalues and eigenvectors (with balancing) nag_lapack_zgebal (f08nv)*, nag_lapack_zgehrd (f08ns), nag_lapack_zunghr (f08nt), nag_lapack_zgebak (f08nw), nag_lapack_zhseqr (f08ps), nag_lapack_zhsein (f08px)
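The end result of such a balance / Hessenberg / QR-iteration sequence is what a dense nonsymmetric eigensolver returns; as an illustration outside the Toolbox, NumPy's eig carries out the same steps internally:

```python
import numpy as np

# All eigenvalues and right eigenvectors of a general real matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

w, V = np.linalg.eig(A)

# Each column v of V satisfies A v = lambda v.
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)
print(np.sort(w.real))            # [-2., -1.]
```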
Generalized nonsymmetric eigenvalue problems
Functions are provided to reduce a real or complex matrix pair (A1, R1), where A1 is general and R1 is upper triangular, to generalized upper Hessenberg form by orthogonal transformations A1 = Q1 H Z1^T, R1 = Q1 R Z1^T (or by unitary transformations A1 = Q1 H Z1^H, R1 = Q1 R Z1^H in the complex case). These functions can optionally return Q1 and/or Z1. Note that to transform a general matrix pair (A, B) to the form (A1, R1), a QR factorization of B (B = Q̃R1) should first be performed and the matrix A1 obtained as A1 = Q̃^T A (see Section [Orthogonal factorizations] above).
Functions are also provided to balance a general matrix pair before reducing it to generalized Hessenberg form, as described in Section [Balancing the generalized eigenvalue problem]. Companion functions are provided to transform vectors of the balanced pair to those of the original matrix pair.
  Reduce to generalized Hessenberg form Balance Backtransform vectors after balancing
real matrices nag_lapack_dgghrd (f08we) nag_lapack_dggbal (f08wh) nag_lapack_dggbak (f08wj)
complex matrices nag_lapack_zgghrd (f08ws) nag_lapack_zggbal (f08wv) nag_lapack_zggbak (f08ww)
Functions are provided to compute the eigenvalues (as the pairs (α, β)) and all or part of the generalized Schur factorization of a generalized upper Hessenberg matrix pair. Eigenvectors may be computed from the generalized Schur form by back-substitution.
Additional functions estimate the sensitivities of computed eigenvalues and eigenvectors.
  Eigenvalues and generalized Schur factorization (QZ algorithm) Eigenvectors from generalized Schur factorization Sensitivities of eigenvalues and eigenvectors
real matrices nag_lapack_dhgeqz (f08xe) nag_lapack_dtgevc (f08yk) nag_lapack_dtgsna (f08yl)
complex matrices nag_lapack_zhgeqz (f08xs) nag_lapack_ztgevc (f08yx) nag_lapack_ztgsna (f08yy)
Finally, functions are provided for reordering the generalized Schur factorization so that eigenvalues appear in any desired order on the diagonal of the generalized Schur form. nag_lapack_dtgexc (f08yf) and nag_lapack_ztgexc (f08yt) simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. nag_lapack_dtgsen (f08yg) and nag_lapack_ztgsen (f08yu) perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the generalized Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the deflating subspace corresponding to the specified cluster of eigenvalues. These functions can also compute the sensitivities of the cluster of eigenvalues and the deflating subspace.
  Reorder generalized Schur factorization Reorder generalized Schur factorization, find basis for deflating subspace and estimate sensitivities
real matrices nag_lapack_dtgexc (f08yf) nag_lapack_dtgsen (f08yg)
complex matrices nag_lapack_ztgexc (f08yt) nag_lapack_ztgsen (f08yu)
The following sequences of calls may be used to compute various combinations of eigenvalues, generalized Schur vectors and eigenvectors:
real matrix pair, all eigenvalues (with balancing) nag_lapack_dgeqrf (f08ae), nag_lapack_dormqr (f08ag), nag_lapack_dgghrd (f08we), nag_lapack_dggbal (f08wh), nag_lapack_dhgeqz (f08xe)*
real matrix pair, all eigenvalues and generalized Schur factorization nag_lapack_dgeqrf (f08ae), nag_lapack_dorgqr (f08af), nag_lapack_dormqr (f08ag), nag_lapack_dgghrd (f08we), nag_lapack_dhgeqz (f08xe)
real matrix pair, all eigenvalues and eigenvectors (with balancing) nag_lapack_dgeqrf (f08ae), nag_lapack_dorgqr (f08af), nag_lapack_dormqr (f08ag), nag_lapack_dgghrd (f08we), nag_lapack_dggbal (f08wh), nag_lapack_dhgeqz (f08xe), nag_lapack_dtgevc (f08yk)*, nag_lapack_dggbak (f08wj)
complex matrix pair, all eigenvalues (with balancing) nag_lapack_zgeqrf (f08as), nag_lapack_zunmqr (f08au), nag_lapack_zgghrd (f08ws), nag_lapack_zggbal (f08wv), nag_lapack_zhgeqz (f08xs)*
complex matrix pair, all eigenvalues and generalized Schur factorization nag_lapack_zgeqrf (f08as), nag_lapack_zungqr (f08at), nag_lapack_zunmqr (f08au), nag_lapack_zgghrd (f08ws), nag_lapack_zhgeqz (f08xs)
complex matrix pair, all eigenvalues and eigenvectors (with balancing) nag_lapack_zgeqrf (f08as), nag_lapack_zungqr (f08at), nag_lapack_zunmqr (f08au), nag_lapack_zgghrd (f08ws), nag_lapack_zggbal (f08wv), nag_lapack_zhgeqz (f08xs), nag_lapack_ztgevc (f08yx)*, nag_lapack_zggbak (f08ww)
The Sylvester equation and the generalized Sylvester equation
Functions are provided to solve the real or complex Sylvester equation AX ± XB = C, where A and B are upper quasi-triangular if real, or upper triangular if complex. To solve the general form of the Sylvester equation in which A and B are general square matrices, A and B must be reduced to upper (quasi-) triangular form by the Schur factorization, using functions described in Section [Nonsymmetric eigenvalue problems]. For more details, see the documents for the functions listed below.
  Solve the Sylvester equation
real matrices nag_lapack_dtrsyl (f08qh)
complex matrices nag_lapack_ztrsyl (f08qv)
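For very small dense systems the Sylvester equation can also be posed as an ordinary linear system, which gives a way of checking results. The helper below is ours, not a Toolbox routine, and this Kronecker-product formulation costs O((mn)^3) work, so it is only sensible for tiny matrices; the functions above work from the triangular Schur form instead:

```python
import numpy as np

def sylvester_dense(A, B, C):
    """Solve AX + XB = C via the Kronecker-product formulation
    (I (x) A + B^T (x) I) vec(X) = vec(C), with column-major vec."""
    m, n = C.shape
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.flatten(order='F'))
    return x.reshape((m, n), order='F')

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 0.0], [2.0, 1.0]])
C = A @ X_true + X_true @ B       # manufacture a right-hand side

X = sylvester_dense(A, B, C)
assert np.allclose(X, X_true)     # unique: A and -B share no eigenvalues
```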
Functions are also provided to solve the real or complex generalized Sylvester equations
AR - LB = C ,   DR - LE = F ,
where the pairs (A, D) and (B, E) are in generalized Schur form. To solve the general form of the generalized Sylvester equation in which (A, D) and (B, E) are general matrix pairs, (A, D) and (B, E) must first be reduced to generalized Schur form.
  Solve the generalized Sylvester equation
real matrices nag_lapack_dtgsyl (f08yh)
complex matrices nag_lapack_ztgsyl (f08yv)

Matrix Storage Schemes

In this chapter the storage schemes described in the following sections are used for matrices.
These storage schemes are compatible with those used in Chapter F07, but different schemes for packed, band and tridiagonal storage are used in a few older functions in Chapters F01, F02, F03 and F04.

Conventional storage

Please see Section [Conventional storage] in the F07 Chapter Introduction for full details.

Packed storage

Please see Section [Packed storage] in the F07 Chapter Introduction for full details.

Band storage

Please see Section [Band storage] in the F07 Chapter Introduction for full details.

Tridiagonal and bidiagonal matrices

A symmetric tridiagonal or bidiagonal matrix is stored in two one-dimensional arrays, one of length n containing the diagonal elements, and one of length n-1 containing the off-diagonal elements. (Older functions in Chapter F02 store the off-diagonal elements in elements 2:n of a vector of length n.)
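For example, this two-array storage can be expanded into a conventional matrix (a NumPy sketch, illustrative only):

```python
import numpy as np

# Expand the two-array storage (d of length n, e of length n-1) into a
# conventional full matrix, e.g. to check results from a tridiagonal solver.
d = np.array([2.0, 2.0, 2.0])     # diagonal elements
e = np.array([-1.0, -1.0])        # off-diagonal elements

T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(T)
```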

Real diagonal elements of complex matrices

Please see Section [Real diagonal elements of complex matrices] in the F07 Chapter Introduction for full details.

Representation of orthogonal or unitary matrices

A real orthogonal or complex unitary matrix (usually denoted Q) is often represented in the NAG Toolbox as a product of elementary reflectors – also referred to as elementary Householder matrices (usually denoted Hi). For example,
Q = H1 H2 … Hk.
You need not be aware of the details, because functions are provided to work with this representation, either to generate all or part of Q explicitly, or to multiply a given matrix by Q or Q^T (Q^H in the complex case) without forming Q explicitly.
Nevertheless, the following further details may occasionally be useful.
An elementary reflector (or elementary Householder matrix) H of order n is a unitary matrix of the form
H = I - τvv^H
(4)
where τ is a scalar, and v is an n-element vector, with |τ|^2 ‖v‖2^2 = 2 × Re(τ); v is often referred to as the Householder vector. Often v has several leading or trailing zero elements, but for the purpose of this discussion assume that H has no such special structure.
There is some redundancy in the representation (4), which can be removed in various ways. The representation used in Chapter F08 and in LAPACK (which differs from those used in some of the functions in Chapters F01, F02 and F04) sets v1 = 1; hence v1 need not be stored. In real arithmetic, 1 ≤ τ ≤ 2, except that τ = 0 implies H = I.
In complex arithmetic, τ may be complex, and satisfies 1 ≤ Re(τ) ≤ 2 and |τ - 1| ≤ 1. Thus a complex H is not Hermitian (as it is in other representations), but it is unitary, which is the important property. The advantage of allowing τ to be complex is that, given an arbitrary complex vector x, H can be computed so that
H^H x = β(1, 0, …, 0)^T
with real β. This is useful, for example, when reducing a complex Hermitian matrix to real symmetric tridiagonal form, or a complex rectangular matrix to real bidiagonal form.
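The real-arithmetic case of this convention can be checked numerically. The following NumPy sketch is ours (the function name house is not a Toolbox routine, and LAPACK's dlarfg additionally guards against over- and underflow):

```python
import numpy as np

def house(x):
    """Reflector in the Chapter F08 / LAPACK convention for a real vector x:
    returns (v, tau, beta) with v[0] = 1 and (I - tau v v^T) x = beta e_1."""
    alpha, norm = x[0], np.linalg.norm(x)
    beta = -np.copysign(norm, alpha)
    tau = (beta - alpha) / beta
    v = x / (alpha - beta)
    v[0] = 1.0                     # the stored convention: v1 is implicit
    return v, tau, beta

x = np.array([3.0, 4.0])
v, tau, beta = house(x)

H = np.eye(2) - tau * np.outer(v, v)
assert np.allclose(H @ x, [beta, 0.0])   # x is mapped onto beta e_1
assert np.allclose(H.T @ H, np.eye(2))   # H is orthogonal
assert 1.0 <= tau <= 2.0                 # as stated above for real tau
```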

Parameter Conventions

Option parameters

Most functions in this chapter have one or more option parameters, of type string. The descriptions in Section 5 of the function documents refer only to upper case values (for example uplo = 'U' or uplo = 'L'); however in every case, the corresponding lower case characters may be supplied (with the same meaning). Any other value is illegal.
A longer character string can be passed as the actual parameter, making the calling program more readable, but only the first character is significant. (This is a feature of Fortran 77.) For example:
[a, d, e, tau, info] = f08fe('Upper', a);

Problem dimensions

It is permissible for the problem dimensions (for example, m or n) to be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as an error.

Decision Trees

The following decision trees are principally for the computation (general purpose) functions. See Section [Linear least squares problems (LLS)] for tables of the driver (black box) functions.

General Purpose Functions (eigenvalues and eigenvectors)

Tree 1: Real Symmetric Eigenvalue Problems

Are eigenvalues only required? _
yes
Are all the eigenvalues required? _
yes
Is A tridiagonal? _
yes
nag_lapack_dstevd (f08jc) or nag_lapack_dsterf (f08jf)
| | no
|
| | Is A a band matrix? _
yes
(nag_lapack_dsbtrd (f08he) and nag_lapack_dsterf (f08jf)) or nag_lapack_dsbevd (f08hc)
| | no
|
| | Is one triangle of A stored as a linear array? _
yes
(nag_lapack_dsptrd (f08ge) and nag_lapack_dsterf (f08jf)) or nag_lapack_dspevd (f08gc)
| | no
|
| | (nag_lapack_dsytrd (f08fe) and nag_lapack_dsterf (f08jf)) or nag_lapack_dsyev (f08fa) or nag_lapack_dsyevd (f08fc)
| no
|
| Is A tridiagonal? _
yes
nag_lapack_dstebz (f08jj)
| no
|
| Is A a band matrix? _
yes
nag_lapack_dsbtrd (f08he) and nag_lapack_dstebz (f08jj)
| no
|
| Is one triangle of A stored as a linear array? _
yes
nag_lapack_dsptrd (f08ge) and nag_lapack_dstebz (f08jj)
| no
|
| (nag_lapack_dsytrd (f08fe) and nag_lapack_dstebz (f08jj)) or nag_lapack_dsyevx (f08fb)
no
|
Are all eigenvalues and eigenvectors required? _
yes
Is A tridiagonal? _
yes
nag_lapack_dsteqr (f08je), nag_lapack_dstevd (f08jc), nag_lapack_dstedc (f08jh) or nag_lapack_dstegr (f08jl)
| no
|
| Is A a band matrix? _
yes
(nag_lapack_dsbtrd (f08he) and nag_lapack_dsteqr (f08je)) or nag_lapack_dsbevd (f08hc)
| no
|
| Is one triangle of A stored as a linear array? _
yes
(nag_lapack_dsptrd (f08ge), nag_lapack_dopgtr (f08gf) and nag_lapack_dsteqr (f08je)) or nag_lapack_dspevd (f08gc)
| no
|
| (nag_lapack_dsytrd (f08fe), nag_lapack_dorgtr (f08ff) and nag_lapack_dsteqr (f08je)) or nag_lapack_dsyev (f08fa) or nag_lapack_dsyevd (f08fc)
no
|
Is A tridiagonal? _
yes
nag_lapack_dstebz (f08jj) and nag_lapack_dstein (f08jk)
no
|
Is one triangle of A stored as a linear array? _
yes
nag_lapack_dsptrd (f08ge), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk) and nag_lapack_dopmtr (f08gg)
no
|
(nag_lapack_dsytrd (f08fe), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk) and nag_lapack_dormtr (f08fg)) or nag_lapack_dsyevx (f08fb)

Tree 2: Real Generalized Symmetric-definite Eigenvalue Problems

Are eigenvalues only required? _
yes
Are all the eigenvalues required? _
yes
Are A and B band matrices? _
yes
nag_lapack_dpbstf (f08uf), nag_lapack_dsbgst (f08ue), nag_lapack_dsbtrd (f08he) and nag_lapack_dsterf (f08jf)
| | no
|
| | Are A and B stored with one triangle as a linear array? _
yes
nag_lapack_dpptrf (f07gd), nag_lapack_dspgst (f08te), nag_lapack_dsptrd (f08ge) and nag_lapack_dsterf (f08jf)
| | no
|
| | nag_lapack_dpotrf (f07fd), nag_lapack_dsygst (f08se), nag_lapack_dsytrd (f08fe) and nag_lapack_dsterf (f08jf)
| no
|
| Are A and B band matrices? _
yes
nag_lapack_dpbstf (f08uf), nag_lapack_dsbgst (f08ue), nag_lapack_dsbtrd (f08he) and nag_lapack_dstebz (f08jj)
| no
|
| Are A and B stored with one triangle as a linear array? _
yes
nag_lapack_dpptrf (f07gd), nag_lapack_dspgst (f08te), nag_lapack_dsptrd (f08ge) and nag_lapack_dstebz (f08jj)
| no
|
| nag_lapack_dpotrf (f07fd), nag_lapack_dsygst (f08se), nag_lapack_dsytrd (f08fe) and nag_lapack_dstebz (f08jj)
no
|
Are all eigenvalues and eigenvectors required? _
yes
Are A and B stored with one triangle as a linear array? _
yes
nag_lapack_dpptrf (f07gd), nag_lapack_dspgst (f08te), nag_lapack_dsptrd (f08ge), nag_lapack_dopgtr (f08gf) and nag_lapack_dsteqr (f08je)
| no
|
| nag_lapack_dpotrf (f07fd), nag_lapack_dsygst (f08se), nag_lapack_dsytrd (f08fe), nag_lapack_dorgtr (f08ff) and nag_lapack_dsteqr (f08je)
no
|
Are A and B band matrices? _
yes
nag_lapack_dpbstf (f08uf), nag_lapack_dsbgst (f08ue), nag_lapack_dsbtrd (f08he) and nag_lapack_dstein (f08jk)
no
|
Are A and B stored with one triangle as a linear array? _
yes
nag_lapack_dpptrf (f07gd), nag_lapack_dspgst (f08te), nag_lapack_dsptrd (f08ge), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk) and nag_lapack_dopmtr (f08gg)
no
|
nag_lapack_dpotrf (f07fd), nag_lapack_dsygst (f08se), nag_lapack_dsytrd (f08fe), nag_lapack_dstebz (f08jj), nag_lapack_dstein (f08jk) and nag_lapack_dormtr (f08fg)
Note: the functions for band matrices only handle the problem Ax = λBxAx=λBx; the other functions handle all three types of problems (Ax = λBxAx=λBx, ABx = λxABx=λx or BAx = λxBAx=λx) except that, if the problem is BAx = λxBAx=λx and eigenvectors are required, must be used instead of instead of .
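As an illustration of what this tree computes, the sketch below uses SciPy's LAPACK-backed `scipy.linalg.eigh` (not a NAG Toolbox function; it is shown only because it drives the same underlying LAPACK routines). Its `type` argument distinguishes the three problem types mentioned in the note: `type=1` for Ax = λBx, `type=2` for ABx = λx and `type=3` for BAx = λx.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal((n, n))
a = (a + a.T) / 2                              # symmetric A
c = rng.standard_normal((n, n))
b = c @ c.T + n * np.eye(n)                    # symmetric positive definite B

# type=1 solves A x = lambda B x (the only form the band functions handle);
# type=2 and type=3 select A B x = lambda x and B A x = lambda x instead.
w, v = eigh(a, b, type=1)

# residual check: A v should equal B v diag(w) at the computed solution
resid = np.linalg.norm(a @ v - b @ v @ np.diag(w))
```

The one-call driver functions of Chapter F08 (e.g., nag_lapack_dsygv) package the same Cholesky-reduce-tridiagonalize-solve sequence that the tree spells out step by step.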

Tree 3: Real Nonsymmetric Eigenvalue Problems

Are eigenvalues only required?
    yes: Is A an upper Hessenberg matrix?
        yes: nag_lapack_dhseqr (f08pe)
        no: nag_lapack_dgeev (f08na) or nag_lapack_dgeevx (f08nb) or (nag_lapack_dgebal (f08nh), nag_lapack_dgehrd (f08ne) and nag_lapack_dhseqr (f08pe))
    no: Is the Schur factorization of A required?
        yes: Is A an upper Hessenberg matrix?
            yes: nag_lapack_dhseqr (f08pe)
            no: nag_lapack_dgeevx (f08nb) or (nag_lapack_dgehrd (f08ne), nag_lapack_dorghr (f08nf), nag_lapack_dhseqr (f08pe) or nag_lapack_dgebak (f08nj))
        no: Are all eigenvectors required?
            yes: Is A an upper Hessenberg matrix?
                yes: nag_lapack_dhseqr (f08pe) or nag_lapack_dtrevc (f08qk)
                no: nag_lapack_dgeev (f08na) or nag_lapack_dgeevx (f08nb) or (nag_lapack_dgebal (f08nh), nag_lapack_dgehrd (f08ne), nag_lapack_dorghr (f08nf), nag_lapack_dhseqr (f08pe), nag_lapack_dtrevc (f08qk) or nag_lapack_dgebak (f08nj))
            no: Is A an upper Hessenberg matrix?
                yes: nag_lapack_dhseqr (f08pe) or nag_lapack_dhsein (f08pk)
                no: nag_lapack_dgebal (f08nh), nag_lapack_dgehrd (f08ne), nag_lapack_dhseqr (f08pe), nag_lapack_dhsein (f08pk), nag_lapack_dormhr (f08ng) or nag_lapack_dgebak (f08nj)
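The reduce-to-Hessenberg-then-iterate pipeline this tree describes can be sketched with SciPy's LAPACK-backed routines (an illustration only, not a NAG Toolbox call sequence): `hessenberg` plays the role of dgehrd, `schur` that of dhseqr, and the one-call driver `eig` corresponds to dgeev.

```python
import numpy as np
from scipy.linalg import hessenberg, schur, eig

rng = np.random.default_rng(1)
a = rng.standard_normal((6, 6))

# reduce to upper Hessenberg form (the dgehrd step)
h, q = hessenberg(a, calc_q=True)

# real Schur factorization A = Z T Z^T (the dhseqr step)
t, z = schur(a, output='real')

# eigenvalues alone, as the one-call driver dgeev would return them
w = eig(a, right=False)

resid = np.linalg.norm(a - z @ t @ z.T)
```

The two-stage route matters when a matrix is already in upper Hessenberg form: the reduction step can then be skipped entirely, as the "yes" branches above show.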

Tree 4: Real Generalized Nonsymmetric Eigenvalue Problems

Are eigenvalues only required?
    yes: Are A and B in generalized upper Hessenberg form?
        yes: nag_lapack_dhgeqz (f08xe)
        no: nag_lapack_dggbal (f08wh), nag_lapack_dgeqrf (f08ae), nag_lapack_dormqr (f08ag), nag_lapack_dgghrd (f08we) and nag_lapack_dhgeqz (f08xe)
    no: Is the generalized Schur factorization of A and B required?
        yes: Are A and B in generalized upper Hessenberg form?
            yes: nag_lapack_dhgeqz (f08xe)
            no: nag_lapack_dgeqrf (f08ae), nag_lapack_dormqr (f08ag), nag_lapack_dorgqr (f08af), nag_lapack_dgghrd (f08we), nag_lapack_dhgeqz (f08xe) and nag_lapack_dtgevc (f08yk)
        no: Are A and B in generalized upper Hessenberg form?
            yes: nag_lapack_dhgeqz (f08xe) and nag_lapack_dtgevc (f08yk)
            no: nag_lapack_dggbal (f08wh), nag_lapack_dgeqrf (f08ae), nag_lapack_dormqr (f08ag), nag_lapack_dorgqr (f08af), nag_lapack_dgghrd (f08we), nag_lapack_dhgeqz (f08xe), nag_lapack_dtgevc (f08yk) and nag_lapack_dggbak (f08wj)
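The QZ pipeline in this tree (dgghrd followed by dhgeqz) is exposed in SciPy as `scipy.linalg.qz`; the sketch below uses it, together with the generalized `eig` driver, purely as an illustration of the same LAPACK path (these are not NAG Toolbox calls).

```python
import numpy as np
from scipy.linalg import qz, eig

rng = np.random.default_rng(2)
a = rng.standard_normal((5, 5))
b = rng.standard_normal((5, 5))

# generalized real Schur (QZ) factorization: A = Q S Z^T, B = Q T Z^T,
# corresponding to the dgghrd + dhgeqz stages of the tree
s, t, q, z = qz(a, b, output='real')

# generalized eigenvalues, as the one-call driver dggev would return them
w = eig(a, b, right=False)

resid = (np.linalg.norm(a - q @ s @ z.T)
         + np.linalg.norm(b - q @ t @ z.T))
```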

Tree 5: Complex Hermitian Eigenvalue Problems

Are eigenvalues only required?
    yes: Are all the eigenvalues required?
        yes: Is A a band matrix?
            yes: (nag_lapack_zhbtrd (f08hs) and nag_lapack_dsterf (f08jf)) or nag_lapack_zhbevd (f08hq)
            no: Is one triangle of A stored as a linear array?
                yes: (nag_lapack_zhptrd (f08gs) and nag_lapack_dsterf (f08jf)) or nag_lapack_zhpevd (f08gq)
                no: (nag_lapack_zhetrd (f08fs) and nag_lapack_dsterf (f08jf)) or nag_lapack_zheevd (f08fq)
        no: Is A a band matrix?
            yes: nag_lapack_zhbtrd (f08hs) and nag_lapack_dstebz (f08jj)
            no: Is one triangle of A stored as a linear array?
                yes: nag_lapack_zhptrd (f08gs) and nag_lapack_dstebz (f08jj)
                no: nag_lapack_zhetrd (f08fs) and nag_lapack_dstebz (f08jj)
    no: Are all eigenvalues and eigenvectors required?
        yes: Is A a band matrix?
            yes: (nag_lapack_zhbtrd (f08hs) and nag_lapack_zsteqr (f08js)) or nag_lapack_zhbevd (f08hq)
            no: Is one triangle of A stored as a linear array?
                yes: (nag_lapack_zhptrd (f08gs), nag_lapack_zupgtr (f08gt) and nag_lapack_zsteqr (f08js)) or nag_lapack_zhpevd (f08gq)
                no: (nag_lapack_zhetrd (f08fs), nag_lapack_zungtr (f08ft) and nag_lapack_zsteqr (f08js)) or nag_lapack_zheevd (f08fq)
        no: Is one triangle of A stored as a linear array?
            yes: nag_lapack_zhptrd (f08gs), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx) and nag_lapack_zupmtr (f08gu)
            no: nag_lapack_zhetrd (f08fs), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx) and nag_lapack_zunmtr (f08fu)
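A complex Hermitian matrix has real eigenvalues, and every branch of this tree first reduces it to a real symmetric tridiagonal matrix (zhetrd/zhptrd/zhbtrd). The sketch below illustrates the same LAPACK routes through SciPy's `eigh` (not a NAG Toolbox function): eigenvalues only, selected eigenvalues by bisection over an interval, and the full eigendecomposition.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 6
m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = (m + m.conj().T) / 2          # Hermitian A; its eigenvalues are real

# all eigenvalues only (the zhetrd + dsterf route)
w_all = eigh(a, eigvals_only=True)

# selected eigenvalues by bisection over an interval (the dstebz route)
w_sel = eigh(a, eigvals_only=True, subset_by_value=(0.0, np.inf))

# all eigenvalues and eigenvectors (the zhetrd + zungtr + zsteqr route)
w, v = eigh(a)
resid = np.linalg.norm(a @ v - v @ np.diag(w))
```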

Tree 6: Complex Generalized Hermitian-definite Eigenvalue Problems

Are eigenvalues only required?
    yes: Are all eigenvalues required?
        yes: Are A and B stored with one triangle as a linear array?
            yes: nag_lapack_zpptrf (f07gr), nag_lapack_zhpgst (f08ts), nag_lapack_zhptrd (f08gs) and nag_lapack_dsterf (f08jf)
            no: nag_lapack_zpotrf (f07fr), nag_lapack_zhegst (f08ss), nag_lapack_zhetrd (f08fs) and nag_lapack_dsterf (f08jf)
        no: Are A and B stored with one triangle as a linear array?
            yes: nag_lapack_zpptrf (f07gr), nag_lapack_zhpgst (f08ts), nag_lapack_zhptrd (f08gs) and nag_lapack_dstebz (f08jj)
            no: nag_lapack_zpotrf (f07fr), nag_lapack_zhegst (f08ss), nag_lapack_zhetrd (f08fs) and nag_lapack_dstebz (f08jj)
    no: Are all eigenvalues and eigenvectors required?
        yes: Are A and B stored with one triangle as a linear array?
            yes: nag_lapack_zpptrf (f07gr), nag_lapack_zhpgst (f08ts), nag_lapack_zhptrd (f08gs), nag_lapack_zupgtr (f08gt) and nag_lapack_zsteqr (f08js)
            no: nag_lapack_zpotrf (f07fr), nag_lapack_zhegst (f08ss), nag_lapack_zhetrd (f08fs), nag_lapack_zungtr (f08ft) and nag_lapack_zsteqr (f08js)
        no: Are A and B stored with one triangle as a linear array?
            yes: nag_lapack_zpptrf (f07gr), nag_lapack_zhpgst (f08ts), nag_lapack_zhptrd (f08gs), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx) and nag_lapack_zupmtr (f08gu)
            no: nag_lapack_zpotrf (f07fr), nag_lapack_zhegst (f08ss), nag_lapack_zhetrd (f08fs), nag_lapack_dstebz (f08jj), nag_lapack_zstein (f08jx) and nag_lapack_zunmtr (f08fu)

Tree 7: Complex non-Hermitian Eigenvalue Problems

Are eigenvalues only required?
    yes: Is A an upper Hessenberg matrix?
        yes: nag_lapack_zhseqr (f08ps)
        no: nag_lapack_zgebal (f08nv), nag_lapack_zgehrd (f08ns) and nag_lapack_zhseqr (f08ps)
    no: Is the Schur factorization of A required?
        yes: Is A an upper Hessenberg matrix?
            yes: nag_lapack_zhseqr (f08ps)
            no: nag_lapack_zgehrd (f08ns), nag_lapack_zunghr (f08nt), nag_lapack_zhseqr (f08ps) and nag_lapack_zgebak (f08nw)
        no: Are all eigenvectors required?
            yes: Is A an upper Hessenberg matrix?
                yes: nag_lapack_zhseqr (f08ps) and nag_lapack_ztrevc (f08qx)
                no: nag_lapack_zgebal (f08nv), nag_lapack_zgehrd (f08ns), nag_lapack_zunghr (f08nt), nag_lapack_zhseqr (f08ps), nag_lapack_ztrevc (f08qx) and nag_lapack_zgebak (f08nw)
            no: Is A an upper Hessenberg matrix?
                yes: nag_lapack_zhseqr (f08ps) and nag_lapack_zhsein (f08px)
                no: nag_lapack_zgebal (f08nv), nag_lapack_zgehrd (f08ns), nag_lapack_zhseqr (f08ps), nag_lapack_zhsein (f08px), nag_lapack_zunmhr (f08nu) and nag_lapack_zgebak (f08nw)
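For a complex matrix the Schur form T is fully upper triangular, so the eigenvalues appear directly on its diagonal. The sketch below illustrates the zgehrd + zhseqr route via SciPy's LAPACK-backed routines (shown only for illustration; these are not NAG Toolbox calls).

```python
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(4)
a = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# zgehrd step: unitary reduction to upper Hessenberg form
h, q = hessenberg(a, calc_q=True)

# zhseqr step: complex Schur factorization A = Z T Z^H, T upper triangular
t, z = schur(a, output='complex')

# the diagonal of T carries the eigenvalues
w = np.diag(t)
resid = np.linalg.norm(a - z @ t @ z.conj().T)
```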

Tree 8: Complex Generalized non-Hermitian Eigenvalue Problems

Are eigenvalues only required?
    yes: Are A and B in generalized upper Hessenberg form?
        yes: nag_lapack_zhgeqz (f08xs)
        no: nag_lapack_zggbal (f08wv), nag_lapack_zgeqrf (f08as), nag_lapack_zunmqr (f08au), nag_lapack_zgghrd (f08ws) and nag_lapack_zhgeqz (f08xs)
    no: Is the generalized Schur factorization of A and B required?
        yes: Are A and B in generalized upper Hessenberg form?
            yes: nag_lapack_zhgeqz (f08xs)
            no: nag_lapack_zgeqrf (f08as), nag_lapack_zunmqr (f08au), nag_lapack_zungqr (f08at), nag_lapack_zgghrd (f08ws), nag_lapack_zhgeqz (f08xs) and nag_lapack_ztgevc (f08yx)
        no: Are A and B in generalized upper Hessenberg form?
            yes: nag_lapack_zhgeqz (f08xs) and nag_lapack_ztgevc (f08yx)
            no: nag_lapack_zggbal (f08wv), nag_lapack_zgeqrf (f08as), nag_lapack_zunmqr (f08au), nag_lapack_zungqr (f08at), nag_lapack_zgghrd (f08ws), nag_lapack_zhgeqz (f08xs), nag_lapack_ztgevc (f08yx) and nag_lapack_zggbak (f08ww)

General Purpose Functions (singular value decomposition)

Tree 9

Is A a complex matrix?
    yes: Is A banded?
        yes: nag_lapack_zgbbrd (f08ls) and nag_lapack_zbdsqr (f08ms)
        no: Are singular values only required?
            yes: nag_lapack_zgebrd (f08ks) and nag_lapack_zbdsqr (f08ms)
            no: nag_lapack_zgebrd (f08ks), nag_lapack_zungbr (f08kt) and nag_lapack_zbdsqr (f08ms)
    no: Is A bidiagonal?
        yes: nag_lapack_dbdsqr (f08me)
        no: Is A banded?
            yes: nag_lapack_dgbbrd (f08le) and nag_lapack_dbdsqr (f08me)
            no: Are singular values only required?
                yes: nag_lapack_dgebrd (f08ke) and nag_lapack_dbdsqr (f08me)
                no: nag_lapack_dgebrd (f08ke), nag_lapack_dorgbr (f08kf) and nag_lapack_dbdsqr (f08me)
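Every branch of this tree reduces A to bidiagonal form and then applies bidiagonal QR iteration; forming U and V (dorgbr/zungbr) is the extra cost when singular vectors are wanted. SciPy's LAPACK-backed `svd` follows the same pattern and is used here purely as an illustration (it is not a NAG Toolbox function).

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(5)
a = rng.standard_normal((6, 4))

# singular values only (the dgebrd + dbdsqr route, without forming U and V)
s_only = svd(a, compute_uv=False)

# full decomposition A = U diag(s) V^T (dgebrd + dorgbr + dbdsqr)
u, s, vt = svd(a, full_matrices=False)
resid = np.linalg.norm(a - u @ np.diag(s) @ vt)
```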

Functionality Index

Backtransformation of eigenvectors from those of balanced forms, 
    complex matrix nag_lapack_zgebak (f08nw)
    real matrix nag_lapack_dgebak (f08nj)
Backtransformation of generalized eigenvectors from those of balanced forms, 
    complex matrix nag_lapack_zggbak (f08ww)
    real matrix nag_lapack_dggbak (f08wj)
Balancing, 
    complex general matrix nag_lapack_zgebal (f08nv)
    complex general matrix pair nag_lapack_zggbal (f08wv)
    real general matrix nag_lapack_dgebal (f08nh)
    real general matrix pair nag_lapack_dggbal (f08wh)
Eigenvalue problems for condensed forms of matrices, 
    complex Hermitian matrix, 
        eigenvalues and eigenvectors, 
            band matrix, 
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage nag_lapack_zhbevd (f08hq)
                all eigenvalues and eigenvectors by root-free QR algorithm nag_lapack_zhbev (f08hn)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_zhbevx (f08hp)
            general matrix, 
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_zheevd (f08fq)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage nag_lapack_zhpevd (f08gq)
                all eigenvalues and eigenvectors by root-free QR algorithm nag_lapack_zheev (f08fn)
                all eigenvalues and eigenvectors by root-free QR algorithm, using packed storage nag_lapack_zhpev (f08gn)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_zheevx (f08fp)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration, using packed storage nag_lapack_zhpevx (f08gp)
                all eigenvalues and eigenvectors using Relatively Robust Representations or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_zheevr (f08fr)
        eigenvalues only, 
            band matrix, 
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm nag_lapack_zhbev (f08hn)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection nag_lapack_zhbevx (f08hp)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage nag_lapack_zhbevd (f08hq)
            general matrix, 
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm nag_lapack_zheev (f08fn)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection nag_lapack_zheevx (f08fp)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection, using packed storage nag_lapack_zhpevx (f08gp)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage nag_lapack_zhpev (f08gn)
    complex upper Hessenberg matrix, reduced from complex general matrix, 
        eigenvalues and Schur factorization nag_lapack_zhseqr (f08ps)
        selected right and/or left eigenvectors by inverse iteration nag_lapack_zhsein (f08px)
    real bidiagonal matrix, 
        singular value decomposition, 
            after reduction from complex general matrix nag_lapack_zbdsqr (f08ms)
            after reduction from real general matrix nag_lapack_dbdsqr (f08me)
            after reduction from real general matrix, using divide-and-conquer nag_lapack_dbdsdc (f08md)
    real symmetric matrix, 
        eigenvalues and eigenvectors, 
            band matrix, 
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_dsbevd (f08hc)
                all eigenvalues and eigenvectors by root-free QR algorithm nag_lapack_dsbev (f08ha)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_dsbevx (f08hb)
            general matrix, 
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_dsyevd (f08fc)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage nag_lapack_dspevd (f08gc)
                all eigenvalues and eigenvectors by root-free QR algorithm nag_lapack_dsyev (f08fa)
                all eigenvalues and eigenvectors by root-free QR algorithm, using packed storage nag_lapack_dspev (f08ga)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_dsyevx (f08fb)
                all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration, using packed storage nag_lapack_dspevx (f08gb)
                all eigenvalues and eigenvectors using Relatively Robust Representations or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_dsyevr (f08fd)
        eigenvalues only, 
            band matrix, 
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm nag_lapack_dsbev (f08ha)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection nag_lapack_dsbevx (f08hb)
            general matrix, 
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm nag_lapack_dsyev (f08fa)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection nag_lapack_dsyevx (f08fb)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection, using packed storage nag_lapack_dspevx (f08gb)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage nag_lapack_dspev (f08ga)
    real symmetric tridiagonal matrix, 
        eigenvalues and eigenvectors, 
            after reduction from complex Hermitian matrix, 
                all eigenvalues and eigenvectors nag_lapack_zsteqr (f08js)
                all eigenvalues and eigenvectors, positive definite matrix nag_lapack_zpteqr (f08ju)
                all eigenvalues and eigenvectors, using divide-and-conquer nag_lapack_zstedc (f08jv)
                all eigenvalues and eigenvectors, using Relatively Robust Representations nag_lapack_zstegr (f08jy)
                selected eigenvectors by inverse iteration nag_lapack_zstein (f08jx)
            all eigenvalues and eigenvectors nag_lapack_dsteqr (f08je)
            all eigenvalues and eigenvectors, by divide-and-conquer nag_lapack_dstedc (f08jh)
            all eigenvalues and eigenvectors, positive definite matrix nag_lapack_dpteqr (f08jg)
            all eigenvalues and eigenvectors, using Relatively Robust Representations nag_lapack_dstegr (f08jl)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_dstevd (f08jc)
            all eigenvalues and eigenvectors by root-free QR algorithm nag_lapack_dstev (f08ja)
            all eigenvalues and eigenvectors by root-free QR algorithm or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_dstevx (f08jb)
            all eigenvalues and eigenvectors using Relatively Robust Representations or selected eigenvalues and eigenvectors by bisection and inverse iteration nag_lapack_dstevr (f08jd)
            selected eigenvectors by inverse iteration nag_lapack_dstein (f08jk)
        eigenvalues only, 
            all eigenvalues by root-free QR algorithm nag_lapack_dsterf (f08jf)
            all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm nag_lapack_dstev (f08ja)
            all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, or selected eigenvalues by bisection nag_lapack_dstevx (f08jb)
            selected eigenvalues by bisection nag_lapack_dstebz (f08jj)
    real upper Hessenberg matrix, reduced from real general matrix, 
        eigenvalues and Schur factorization nag_lapack_dhseqr (f08pe)
        selected right and/or left eigenvectors by inverse iteration nag_lapack_dhsein (f08pk)
Eigenvalue problems for nonsymmetric matrices, 
    complex matrix, 
        all eigenvalues, Schur form, Schur vectors and reciprocal condition numbers nag_lapack_zgeesx (f08pp)
        all eigenvalues, Schur form and Schur vectors nag_lapack_zgees (f08pn)
        all eigenvalues and left/right eigenvectors nag_lapack_zgeev (f08nn)
        all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers nag_lapack_zgeevx (f08np)
    real matrix, 
        all eigenvalues, real Schur form, Schur vectors and reciprocal condition numbers nag_lapack_dgeesx (f08pb)
        all eigenvalues, real Schur form and Schur vectors nag_lapack_dgees (f08pa)
        all eigenvalues and left/right eigenvectors nag_lapack_dgeev (f08na)
        all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers nag_lapack_dgeevx (f08nb)
Eigenvalues and generalized Schur factorization, 
    complex generalized upper Hessenberg form nag_lapack_zhgeqz (f08xs)
    real generalized upper Hessenberg form nag_lapack_dhgeqz (f08xe)
General Gauss–Markov linear model, 
    solves a complex general Gauss–Markov linear model problem nag_lapack_zggglm (f08zp)
    solves a real general Gauss–Markov linear model problem nag_lapack_dggglm (f08zb)
Generalized eigenvalue problems for condensed forms of matrices, 
    complex Hermitian-definite eigenproblems, 
        banded matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_zhbgvd (f08uq)
            all eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_zhbgv (f08un)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_zhbgvx (f08up)
        general matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_zhegvd (f08sq)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format nag_lapack_zhpgvd (f08tq)
            all eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_zhegv (f08sn)
            all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format nag_lapack_zhpgv (f08tn)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_zhegvx (f08sp)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format nag_lapack_zhpgvx (f08tp)
    real symmetric-definite eigenproblems, 
        banded matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_dsbgvd (f08uc)
            all eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_dsbgv (f08ua)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_dsbgvx (f08ub)
        general matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm nag_lapack_dsygvd (f08sc)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format nag_lapack_dspgvd (f08tc)
            all eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_dsygv (f08sa)
            all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format nag_lapack_dspgv (f08ta)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form nag_lapack_dsygvx (f08sb)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format nag_lapack_dspgvx (f08tb)
Generalized eigenvalue problems for nonsymmetric matrix pairs, 
    complex nonsymmetric matrix pairs, 
        all eigenvalues, generalized Schur form, Schur vectors and reciprocal condition numbers nag_lapack_zggesx (f08xp)
        all eigenvalues, generalized Schur form and Schur vectors nag_lapack_zgges (f08xn)
        all eigenvalues and left/right eigenvectors nag_lapack_zggev (f08wn)
        all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers nag_lapack_zggevx (f08wp)
    real nonsymmetric matrix pairs, 
        all eigenvalues, generalized real Schur form and left/right Schur vectors nag_lapack_dgges (f08xa)
        all eigenvalues, generalized real Schur form and left/right Schur vectors, plus reciprocal condition numbers nag_lapack_dggesx (f08xb)
        all eigenvalues and left/right eigenvectors nag_lapack_dggev (f08wa)
        all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers nag_lapack_dggevx (f08wb)
Generalized QR factorization, 
    complex matrices nag_lapack_zggqrf (f08zs)
    real matrices nag_lapack_dggqrf (f08ze)
Generalized RQ factorization, 
    complex matrices nag_lapack_zggrqf (f08zt)
    real matrices nag_lapack_dggrqf (f08zf)
Generalized singular value decomposition, 
    after reduction from complex general matrix, 
        complex triangular or trapezoidal matrix pair nag_lapack_ztgsja (f08ys)
    after reduction from real general matrix, 
        real triangular or trapezoidal matrix pair nag_lapack_dtgsja (f08ye)
    complex matrix pair nag_lapack_zggsvd (f08vn)
    real matrix pair nag_lapack_dggsvd (f08va)
    reduction of a pair of general matrices to triangular or trapezoidal form, 
        complex matrices nag_lapack_zggsvp (f08vs)
        real matrices nag_lapack_dggsvp (f08ve)
Least squares problems, 
    complex matrices, 
        apply orthogonal matrix nag_lapack_zunmrz (f08bx)
        minimum norm solution using a complete orthogonal factorization nag_lapack_zgelsy (f08bn)
        minimum norm solution using the singular value decomposition nag_lapack_zgelss (f08kn)
        minimum norm solution using the singular value decomposition (divide-and-conquer) nag_lapack_zgelsd (f08kq)
        reduction of upper trapezoidal matrix to upper triangular form nag_lapack_ztzrzf (f08bv)
    real matrices, 
        apply orthogonal matrix nag_lapack_dormrz (f08bk)
        minimum norm solution using a complete orthogonal factorization nag_lapack_dgelsy (f08ba)
        minimum norm solution using the singular value decomposition nag_lapack_dgelss (f08ka)
        minimum norm solution using the singular value decomposition (divide-and-conquer) nag_lapack_dgelsd (f08kc)
        reduction of upper trapezoidal matrix to upper triangular form nag_lapack_dtzrzf (f08bh)
Least squares problems with linear equality constraints, 
    complex matrices, 
        minimum norm solution subject to linear equality constraints using a generalized RQ factorization nag_lapack_zgglse (f08zn)
    real matrices, 
        minimum norm solution subject to linear equality constraints using a generalized RQ factorization nag_lapack_dgglse (f08za)
Left and right eigenvectors of a pair of matrices, 
    complex upper triangular matrices nag_lapack_ztgevc (f08yx)
    real quasi-triangular matrices nag_lapack_dtgevc (f08yk)
LQ factorization and related operations, 
    complex matrices, 
        apply unitary matrix nag_lapack_zunmlq (f08ax)
        factorization nag_lapack_zgelqf (f08av)
        form all or part of unitary matrix nag_lapack_zunglq (f08aw)
    real matrices, 
        apply orthogonal matrix nag_lapack_dormlq (f08ak)
        factorization nag_lapack_dgelqf (f08ah)
        form all or part of orthogonal matrix nag_lapack_dorglq (f08aj)
Operations on eigenvectors of a real symmetric or complex Hermitian matrix, or singular vectors of a general matrix, 
    estimate condition numbers nag_lapack_ddisna (f08fl)
Operations on generalized Schur factorization of a general matrix pair, 
    complex matrix, 
        estimate condition numbers of eigenvalues and/or eigenvectors nag_lapack_ztgsna (f08yy)
        re-order Schur factorization nag_lapack_ztgexc (f08yt)
        re-order Schur factorization, compute generalized eigenvalues and condition numbers nag_lapack_ztgsen (f08yu)
    real matrix, 
        estimate condition numbers of eigenvalues and/or eigenvectors nag_lapack_dtgsna (f08yl)
        re-order Schur factorization nag_lapack_dtgexc (f08yf)
        re-order Schur factorization, compute generalized eigenvalues and condition numbers nag_lapack_dtgsen (f08yg)
Operations on Schur factorization of a general matrix, 
    complex matrix, 
        compute left and/or right eigenvectors nag_lapack_ztrevc (f08qx)
        estimate sensitivities of eigenvalues and/or eigenvectors nag_lapack_ztrsna (f08qy)
        re-order Schur factorization nag_lapack_ztrexc (f08qt)
        re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities nag_lapack_ztrsen (f08qu)
    real matrix, 
        compute left and/or right eigenvectors nag_lapack_dtrevc (f08qk)
        estimate sensitivities of eigenvalues and/or eigenvectors nag_lapack_dtrsna (f08ql)
        re-order Schur factorization nag_lapack_dtrexc (f08qf)
        re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities nag_lapack_dtrsen (f08qg)
Overdetermined and underdetermined linear systems, 
    complex matrices, 
        solves an overdetermined or underdetermined complex linear system nag_lapack_zgels (f08an)
    real matrices, 
        solves an overdetermined or underdetermined real linear system nag_lapack_dgels (f08aa)
QL factorization and related operations, 
    complex matrices, 
        apply unitary matrix nag_lapack_zunmql (f08cu)
        factorization nag_lapack_zgeqlf (f08cs)
        form all or part of unitary matrix nag_lapack_zungql (f08ct)
    real matrices, 
        apply orthogonal matrix nag_lapack_dormql (f08cg)
        factorization nag_lapack_dgeqlf (f08ce)
        form all or part of orthogonal matrix nag_lapack_dorgql (f08cf)
QR factorization and related operations, 
    complex matrices, 
        apply unitary matrix nag_lapack_zunmqr (f08au)
        factorization nag_lapack_zgeqrf (f08as)
        factorization, 
            with column pivoting, using BLAS-3 nag_lapack_zgeqp3 (f08bt)
        factorization, with column pivoting nag_lapack_zgeqpf (f08bs)
        form all or part of unitary matrix nag_lapack_zungqr (f08at)
    real matrices, 
        apply orthogonal matrix nag_lapack_dormqr (f08ag)
        factorization nag_lapack_dgeqrf (f08ae)
        factorization, 
            with column pivoting, using BLAS-3 nag_lapack_dgeqp3 (f08bf)
        factorization, with column pivoting nag_lapack_dgeqpf (f08be)
        form all or part of orthogonal matrix nag_lapack_dorgqr (f08af)
Reduction of a pair of general matrices to generalized upper Hessenberg form, 
    orthogonal reduction, real matrices nag_lapack_dgghrd (f08we)
    unitary reduction, complex matrices nag_lapack_zgghrd (f08ws)
Reduction of eigenvalue problems to condensed forms, and related operations, 
    complex general matrix to upper Hessenberg form, 
        apply orthogonal matrix nag_lapack_zunmhr (f08nu)
        form orthogonal matrix nag_lapack_zunghr (f08nt)
        reduce to Hessenberg form nag_lapack_zgehrd (f08ns)
    complex Hermitian band matrix to real symmetric tridiagonal form nag_lapack_zhbtrd (f08hs)
    complex Hermitian matrix to real symmetric tridiagonal form, 
        apply unitary matrix nag_lapack_zunmtr (f08fu)
        apply unitary matrix, packed storage nag_lapack_zupmtr (f08gu)
        form unitary matrix nag_lapack_zungtr (f08ft)
        form unitary matrix, packed storage nag_lapack_zupgtr (f08gt)
        reduce to tridiagonal form nag_lapack_zhetrd (f08fs)
        reduce to tridiagonal form, packed storage nag_lapack_zhptrd (f08gs)
    complex rectangular band matrix to real upper bidiagonal form nag_lapack_zgbbrd (f08ls)
    complex rectangular matrix to real bidiagonal form, 
        apply unitary matrix nag_lapack_zunmbr (f08ku)
        form unitary matrix nag_lapack_zungbr (f08kt)
        reduce to bidiagonal form nag_lapack_zgebrd (f08ks)
    real general matrix to upper Hessenberg form, 
        apply orthogonal matrix nag_lapack_dormhr (f08ng)
        form orthogonal matrix nag_lapack_dorghr (f08nf)
        reduce to Hessenberg form nag_lapack_dgehrd (f08ne)
    real rectangular band matrix to upper bidiagonal form nag_lapack_dgbbrd (f08le)
    real rectangular matrix to bidiagonal form, 
        apply orthogonal matrix nag_lapack_dormbr (f08kg)
        form orthogonal matrix nag_lapack_dorgbr (f08kf)
        reduce to bidiagonal form nag_lapack_dgebrd (f08ke)
    real symmetric band matrix to symmetric tridiagonal form nag_lapack_dsbtrd (f08he)
    real symmetric matrix to symmetric tridiagonal form, 
        apply orthogonal matrix nag_lapack_dormtr (f08fg)
        apply orthogonal matrix, packed storage nag_lapack_dopmtr (f08gg)
        form orthogonal matrix nag_lapack_dorgtr (f08ff)
        form orthogonal matrix, packed storage nag_lapack_dopgtr (f08gf)
        reduce to tridiagonal form nag_lapack_dsytrd (f08fe)
        reduce to tridiagonal form, packed storage nag_lapack_dsptrd (f08ge)
Reduction of generalized eigenproblems to standard eigenproblems, 
    complex Hermitian-definite banded generalized eigenproblem Ax = λBx nag_lapack_zhbgst (f08us)
    complex Hermitian-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx nag_lapack_zhegst (f08ss)
    complex Hermitian-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx, packed storage nag_lapack_zhpgst (f08ts)
    real symmetric-definite banded generalized eigenproblem Ax = λBx nag_lapack_dsbgst (f08ue)
    real symmetric-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx nag_lapack_dsygst (f08se)
    real symmetric-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx, packed storage nag_lapack_dspgst (f08te)
RQ factorization and related operations, 
    complex matrices, 
        apply unitary matrix nag_lapack_zunmrq (f08cx)
        factorization nag_lapack_zgerqf (f08cv)
        form all or part of unitary matrix nag_lapack_zungrq (f08cw)
    real matrices, 
        apply orthogonal matrix nag_lapack_dormrq (f08ck)
        factorization nag_lapack_dgerqf (f08ch)
        form all or part of orthogonal matrix nag_lapack_dorgrq (f08cj)
Singular value decomposition, 
    complex matrix, 
        using a divide-and-conquer algorithm nag_lapack_zgesdd (f08kr)
        using bidiagonal QR iteration nag_lapack_zgesvd (f08kp)
    real matrix, 
        preconditioned Jacobi SVD using fast scaled rotations and de Rijk's pivoting nag_lapack_dgejsv (f08kh)
        using a divide-and-conquer algorithm nag_lapack_dgesdd (f08kd)
        using bidiagonal QR iteration nag_lapack_dgesvd (f08kb)
        using fast scaled rotations and de Rijk's pivoting nag_lapack_dgesvj (f08kj)
Solve generalized Sylvester equation, 
    complex matrices nag_lapack_ztgsyl (f08yv)
    real matrices nag_lapack_dtgsyl (f08yh)
Solve reduced form of Sylvester matrix equation, 
    complex matrices nag_lapack_ztrsyl (f08qv)
    real matrices nag_lapack_dtrsyl (f08qh)
Split Cholesky factorization, 
    complex Hermitian positive definite band matrix nag_lapack_zpbstf (f08ut)
    real symmetric positive definite band matrix nag_lapack_dpbstf (f08uf)
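The minimum-norm least squares drivers indexed above (dgelss, dgelsd and dgelsy) have direct counterparts in SciPy's `scipy.linalg.lstsq`, whose `lapack_driver` option selects the same underlying LAPACK routines; the sketch below is a SciPy illustration, not a NAG Toolbox call.

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(6)
a = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

# minimum-norm solution via the divide-and-conquer SVD driver (gelsd);
# 'gelss' (plain SVD) and 'gelsy' (complete orthogonal factorization)
# select the other routes named in the index
x, res, rank, sv = lstsq(a, b, lapack_driver='gelsd')

# normal-equations residual: A^T (A x - b) should vanish at the minimizer
grad = np.linalg.norm(a.T @ (a @ x - b))
```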

References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Arioli M, Duff I S and de Rijk P P M (1989) On the augmented system approach to sparse least squares problems Numer. Math. 55 667–684
Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Moler C B and Stewart G W (1973) An algorithm for generalized matrix eigenproblems SIAM J. Numer. Anal. 10 241–256
Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia
Stewart G W and Sun J-G (1990) Matrix Perturbation Theory Academic Press, London
Ward R C (1981) Balancing the generalized eigenvalue problem SIAM J. Sci. Statist. Comput. 2 141–152
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013