# NAG CL Interface F08 (Lapackeig): Least Squares and Eigenvalue Problems (LAPACK)

## 1 Scope of the Chapter

This chapter provides functions for the solution of linear least squares problems, eigenvalue problems and singular value problems, as well as associated computations. It provides functions for:
• solution of linear least squares problems;
• solution of symmetric eigenvalue problems;
• solution of nonsymmetric eigenvalue problems;
• solution of singular value problems;
• solution of generalized linear least squares problems;
• solution of generalized symmetric-definite eigenvalue problems;
• solution of generalized nonsymmetric eigenvalue problems;
• solution of generalized singular value problems;
• matrix factorizations associated with the above problems;
• estimating condition numbers of eigenvalue and eigenvector problems;
• estimating the numerical rank of a matrix;
• solution of the Sylvester matrix equation.
Functions are provided for both real and complex data.
For a general introduction to the solution of linear least squares problems, you should turn first to Chapter F04. The decision trees, at the end of Chapter F04, direct you to the most appropriate functions in Chapters F04 or F08. Chapters F04 and F08 contain Black Box (or driver) functions which enable standard linear least squares problems to be solved by a call to a single function.
For a general introduction to eigenvalue and singular value problems, you should turn first to Chapter F02. The decision trees, at the end of Chapter F02, direct you to the most appropriate functions in Chapters F02 or F08. Chapters F02 and F08 contain Black Box (or driver) functions which enable standard types of problem to be solved by a call to a single function. Often functions in Chapter F02 call Chapter F08 functions to perform the necessary computational tasks.
The functions in this chapter (Chapter F08) handle only dense, band, tridiagonal and Hessenberg matrices (not matrices with more specialised structures, or general sparse matrices). The tables in Section 3 and the decision trees in Section 4 direct you to the most appropriate functions in Chapter F08.
The functions in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.

## 2 Background to the Problems

This section is only a brief introduction to the numerical solution of linear least squares problems, eigenvalue and singular value problems. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (2012).

### 2.1 Linear Least Squares Problems

The linear least squares problem is
 $\underset{x}{\mathrm{minimize}} {‖b-Ax‖}_{2},$ (1)
where $A$ is an $m×n$ matrix, $b$ is a given $m$-element vector and $x$ is an $n$-element solution vector.
In the most usual case $m\ge n$ and $\mathrm{rank}\left(A\right)=n$, so that $A$ has full rank and in this case the solution to problem (1) is unique; the problem is also referred to as finding a least squares solution to an overdetermined system of linear equations.
When $m<n$ and $\mathrm{rank}\left(A\right)=m$, there are infinitely many solutions $x$ which exactly satisfy $b-Ax=0$. In this case it is often useful to find the unique solution $x$ which minimizes ${‖x‖}_{2}$, and the problem is referred to as finding a minimum norm solution to an underdetermined system of linear equations.
In the general case when we may have $\mathrm{rank}\left(A\right)<\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ – in other words, $A$ may be rank-deficient – we seek the minimum norm least squares solution $x$ which minimizes both ${‖x‖}_{2}$ and ${‖b-Ax‖}_{2}$.
This chapter (Chapter F08) contains driver functions to solve these problems with a single call, as well as computational functions that can be combined with functions in Chapter F07 to solve these linear least squares problems. The next two sections discuss the factorizations that can be used in the solution of linear least squares problems.

### 2.2 Orthogonal Factorizations and Least Squares Problems

A number of functions are provided for factorizing a general rectangular $m×n$ matrix $A$, as the product of an orthogonal matrix (unitary if complex) and a triangular (or possibly trapezoidal) matrix.
A real matrix $Q$ is orthogonal if ${Q}^{\mathrm{T}}Q=I$; a complex matrix $Q$ is unitary if ${Q}^{\mathrm{H}}Q=I$. Orthogonal or unitary matrices have the important property that they leave the $2$-norm of a vector invariant, so that
 ${‖x‖}_{2} = {‖Qx‖}_{2},$
if $Q$ is orthogonal or unitary. They usually help to maintain numerical stability because they do not amplify rounding errors.
Orthogonal factorizations are used in the solution of linear least squares problems. They may also be used to perform preliminary steps in the solution of eigenvalue or singular value problems, and are useful tools in the solution of a number of other problems.

#### 2.2.1 $QR$ factorization

The most common, and best known, of the factorizations is the $QR$ factorization given by
 $A = Q \begin{pmatrix} R \\ 0 \end{pmatrix}, \quad \text{if } m \ge n,$
where $R$ is an $n×n$ upper triangular matrix and $Q$ is an $m×m$ orthogonal (or unitary) matrix. If $A$ is of full rank $n$, then $R$ is nonsingular. It is sometimes convenient to write the factorization as
 $A = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} \begin{pmatrix} R \\ 0 \end{pmatrix}$
which reduces to
 $A = Q_1 R,$
where ${Q}_{1}$ consists of the first $n$ columns of $Q$, and ${Q}_{2}$ the remaining $m-n$ columns.
If $m<n$, $R$ is trapezoidal, and the factorization can be written
 $A = Q \begin{pmatrix} R_1 & R_2 \end{pmatrix}, \quad \text{if } m < n,$
where ${R}_{1}$ is upper triangular and ${R}_{2}$ is rectangular.
The $QR$ factorization can be used to solve the linear least squares problem (1) when $m\ge n$ and $A$ is of full rank, since
 ${‖b-Ax‖}_{2} = {‖Q^{\mathrm{T}}b - Q^{\mathrm{T}}Ax‖}_{2} = \left\| \begin{pmatrix} c_1 - Rx \\ c_2 \end{pmatrix} \right\|_2,$
where
 $c \equiv \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} Q_1^{\mathrm{T}} b \\ Q_2^{\mathrm{T}} b \end{pmatrix} = Q^{\mathrm{T}}b;$
and ${c}_{1}$ is an $n$-element vector. Then $x$ is the solution of the upper triangular system
 $Rx = c_1.$
The residual vector $r$ is given by
 $r = b - Ax = Q \begin{pmatrix} 0 \\ c_2 \end{pmatrix}.$
The residual sum of squares ${{‖r‖}_{2}}^{2}$ may be computed without forming $r$ explicitly, since
 ${‖r‖}_{2} = {‖b-Ax‖}_{2} = {‖c_2‖}_{2}.$
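As a concrete illustration of these steps (a minimal sketch using NumPy's LAPACK-backed routines rather than the NAG CL functions themselves; all variable names are ours):

```python
import numpy as np

# Solving the full rank least squares problem min ||b - Ax||_2 via the
# QR factorization, exactly as derived above.
rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Q1, R = np.linalg.qr(A)            # reduced factorization: A = Q1 R
c1 = Q1.T @ b                      # c1 = Q1^T b
x = np.linalg.solve(R, c1)         # solve the triangular system R x = c1

# x agrees with the least squares driver, and the residual r = b - Ax
# is orthogonal to the columns of A.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
assert np.allclose(A.T @ (b - A @ x), 0)
```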

#### 2.2.2 $LQ$ factorization

The $LQ$ factorization is given by
 $A = \begin{pmatrix} L & 0 \end{pmatrix} Q = \begin{pmatrix} L & 0 \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = L Q_1, \quad \text{if } m \le n,$
where $L$ is $m×m$ lower triangular, $Q$ is $n×n$ orthogonal (or unitary), ${Q}_{1}$ consists of the first $m$ rows of $Q$, and ${Q}_{2}$ the remaining $n-m$ rows.
The $LQ$ factorization of $A$ is essentially the same as the $QR$ factorization of ${A}^{\mathrm{T}}$ (${A}^{\mathrm{H}}$ if $A$ is complex), since
 $A = \begin{pmatrix} L & 0 \end{pmatrix} Q \quad \Leftrightarrow \quad A^{\mathrm{T}} = Q^{\mathrm{T}} \begin{pmatrix} L^{\mathrm{T}} \\ 0 \end{pmatrix}.$
The $LQ$ factorization may be used to find a minimum norm solution of an underdetermined system of linear equations $Ax=b$ where $A$ is $m×n$ with $m<n$ and has rank $m$. The solution is given by
 $x = Q^{\mathrm{T}} \begin{pmatrix} L^{-1}b \\ 0 \end{pmatrix}.$
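A sketch of this computation (using NumPy as an illustrative stand-in for the NAG functions; NumPy has no $LQ$ helper, so we exploit the equivalence with the $QR$ factorization of $A^{\mathrm{T}}$ noted above):

```python
import numpy as np

# Minimum norm solution of an underdetermined system A x = b
# (m < n, rank m) via the LQ factorization of A.
rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Qt, Lt = np.linalg.qr(A.T)        # A^T = Qt Lt, hence A = Lt^T Qt^T = L Q1
L, Q1 = Lt.T, Qt.T                # L is m x m lower triangular
x = Q1.T @ np.linalg.solve(L, b)  # x = Q1^T L^{-1} b

assert np.allclose(A @ x, b)      # x satisfies the system exactly
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # lstsq returns the min norm x
assert np.allclose(x, x_ref)
```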

#### 2.2.3 $QR$ factorization with column pivoting

To solve a linear least squares problem (1) when $A$ is not of full rank, or the rank of $A$ is in doubt, we can perform either a $QR$ factorization with column pivoting or a singular value decomposition.
The $QR$ factorization with column pivoting is given by
 $A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} P^{\mathrm{T}}, \quad m \ge n,$
where $Q$ and $R$ are as before and $P$ is a (real) permutation matrix, chosen (in general) so that
 $|r_{11}| \ge |r_{22}| \ge \cdots \ge |r_{nn}|$
and moreover, for each $k$,
 $|r_{kk}| \ge {‖R_{k:j,j}‖}_{2}, \quad j=k+1,\dots,n.$
If we put
 $R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}$
where ${R}_{11}$ is the leading $k×k$ upper triangular sub-matrix of $R$ then, in exact arithmetic, if $\mathrm{rank}\left(A\right)=k$, the whole of the sub-matrix ${R}_{22}$ in rows and columns $k+1$ to $n$ would be zero. In numerical computation, the aim must be to determine an index $k$, such that the leading sub-matrix ${R}_{11}$ is well-conditioned, and ${R}_{22}$ is negligible, so that
 $R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix} \simeq \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}.$
Then $k$ is the effective rank of $A$. See Golub and Van Loan (2012) for a further discussion of numerical rank determination.
The so-called basic solution to the linear least squares problem (1) can be obtained from this factorization as
 $x = P \begin{pmatrix} R_{11}^{-1} \hat{c}_1 \\ 0 \end{pmatrix},$
where ${\stackrel{^}{c}}_{1}$ consists of just the first $k$ elements of $c={Q}^{\mathrm{T}}b$.
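An illustrative sketch of rank determination and the basic solution (using SciPy's LAPACK-backed pivoted `qr` rather than the NAG functions; the tolerance used here is a crude choice for demonstration only):

```python
import numpy as np
from scipy.linalg import qr

# Effective rank and "basic" least squares solution from a QR
# factorization with column pivoting.
rng = np.random.default_rng(2)
m, n = 8, 5
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # rank 3
b = rng.standard_normal(m)

Q, R, piv = qr(A, pivoting=True)        # A[:, piv] = Q @ R
tol = np.sqrt(np.finfo(float).eps) * abs(R[0, 0])
k = int(np.sum(np.abs(np.diag(R)) > tol))    # effective rank

c1 = (Q.T @ b)[:k]
y = np.linalg.solve(R[:k, :k], c1)      # R11 y = c1
x = np.zeros(n)
x[piv[:k]] = y                          # undo the permutation: x = P (y; 0)

assert k == 3
assert np.allclose(A.T @ (b - A @ x), 0)   # the residual is minimized
```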

#### 2.2.4 Complete orthogonal factorization

The $QR$ factorization with column pivoting does not enable us to compute a minimum norm solution to a rank-deficient linear least squares problem, unless ${R}_{12}=0$. However, by applying further orthogonal (or unitary) transformations from the right to the upper trapezoidal matrix $\left(\begin{array}{cc}{R}_{11}& {R}_{12}\end{array}\right)$, ${R}_{12}$ can be eliminated:
 $\begin{pmatrix} R_{11} & R_{12} \end{pmatrix} Z = \begin{pmatrix} T_{11} & 0 \end{pmatrix}.$
This gives the complete orthogonal factorization
 $AP = Q \begin{pmatrix} T_{11} & 0 \\ 0 & 0 \end{pmatrix} Z^{\mathrm{T}}$
from which the minimum norm solution can be obtained as
 $x = P Z \begin{pmatrix} T_{11}^{-1} \hat{c}_1 \\ 0 \end{pmatrix}.$

#### 2.2.5 Updating a $QR$ factorization

Section 2.2.1 gave the forms of the $QR$ factorization of an $m×n$ matrix $A$ for the two cases $m\ge n$ and $m<n$. Taking first the case $m\ge n$, the least squares solution of
 $Ax = b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$ (where $b_1$ has $n$ elements and $b_2$ has $m-n$)
is the solution of
 $Rx = Q_1^{\mathrm{T}} b.$
If the original system is now augmented by the addition of $p$ rows so that we require the solution of
 $\begin{pmatrix} A \\ B \end{pmatrix} x = \begin{pmatrix} b \\ b_3 \end{pmatrix}$ (where $b$ has $m$ elements and $b_3$ has $p$)
where $B$ is $p×n$, then this is equivalent to finding the least squares solution of
 $\hat{A} x = \begin{pmatrix} R \\ B \end{pmatrix} x = \begin{pmatrix} Q_1^{\mathrm{T}} b \\ b_3 \end{pmatrix} = \hat{b}.$
This now requires the $QR$ factorization of the $(n+p)×n$ triangular-rectangular matrix $\stackrel{^}{A}$.
For the case $m<n$, the least squares solution of the augmented system reduces to
 $\hat{A}x = \begin{pmatrix} B \\ (R_1 \;\; R_2) \end{pmatrix} x = \begin{pmatrix} b_3 \\ Q^{\mathrm{T}} b \end{pmatrix} = \hat{b},$
where $\stackrel{^}{A}$ is pentagonal.
In both cases $\stackrel{^}{A}$ can be written as a special case of a triangular-pentagonal matrix consisting of an upper triangular part on top of a rectangular part which is itself on top of a trapezoidal part. In the first case there is no trapezoidal part, in the second case a zero upper triangular part can be added, and more generally the two cases can be combined.

#### 2.2.6 Other factorizations

The $QL$ and $RQ$ factorizations are given by
 $A = Q \begin{pmatrix} 0 \\ L \end{pmatrix}, \quad \text{if } m \ge n,$
and
 $A = \begin{pmatrix} 0 & R \end{pmatrix} Q, \quad \text{if } m \le n.$
The factorizations are less commonly used than either the $QR$ or $LQ$ factorizations described above, but have applications in, for example, the computation of generalized $QR$ factorizations.

### 2.3 The Singular Value Decomposition

The singular value decomposition (SVD) of an $m×n$ matrix $A$ is given by
 $A = U \Sigma V^{\mathrm{T}} \quad (A = U \Sigma V^{\mathrm{H}} \text{ in the complex case})$
where $U$ and $V$ are orthogonal (unitary) and $\Sigma$ is an $m×n$ diagonal matrix with real diagonal elements, ${\sigma }_{i}$, such that
 $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0.$
The ${\sigma }_{i}$ are the singular values of $A$ and the first $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ columns of $U$ and $V$ are the left and right singular vectors of $A$. The singular values and singular vectors satisfy
 $A v_i = \sigma_i u_i \quad \text{and} \quad A^{\mathrm{T}} u_i = \sigma_i v_i \quad (\text{or } A^{\mathrm{H}} u_i = \sigma_i v_i)$
where ${u}_{i}$ and ${v}_{i}$ are the $i$th columns of $U$ and $V$ respectively.
The computation proceeds in the following stages.
1. The matrix $A$ is reduced to bidiagonal form $A={U}_{1}B{V}_{1}^{\mathrm{T}}$ if $A$ is real ($A={U}_{1}B{V}_{1}^{\mathrm{H}}$ if $A$ is complex), where ${U}_{1}$ and ${V}_{1}$ are orthogonal (unitary if $A$ is complex), and $B$ is real and upper bidiagonal when $m\ge n$ and lower bidiagonal when $m<n$, so that $B$ is nonzero only on the main diagonal and either on the first superdiagonal (if $m\ge n$) or the first subdiagonal (if $m<n$).
2. The SVD of the bidiagonal matrix $B$ is computed as $B={U}_{2}\Sigma {V}_{2}^{\mathrm{T}}$, where ${U}_{2}$ and ${V}_{2}$ are orthogonal and $\Sigma$ is diagonal as described above. The singular vectors of $A$ are then $U={U}_{1}{U}_{2}$ and $V={V}_{1}{V}_{2}$.
If $m\gg n$, it may be more efficient to first perform a $QR$ factorization of $A$, and then compute the SVD of the $n×n$ matrix $R$, since if $A=QR$ and $R=U\Sigma {V}^{\mathrm{T}}$, then the SVD of $A$ is given by $A=\left(QU\right)\Sigma {V}^{\mathrm{T}}$.
Similarly, if $m\ll n$, it may be more efficient to first perform an $LQ$ factorization of $A$.
This chapter supports three primary algorithms for computing the SVD of a bidiagonal matrix. They are:
(i) the divide and conquer algorithm;
(ii) the $QR$ algorithm;
(iii) eigenpairs of an associated symmetric tridiagonal matrix.
The divide and conquer algorithm is much faster than the $QR$ algorithm if singular vectors of large matrices are required. If only a relatively small number ($<10$%) of singular values and associated singular vectors are required, then the third algorithm listed above is likely to be faster than the divide-and-conquer algorithm.

### 2.4 The Singular Value Decomposition and Least Squares Problems

The SVD may be used to find a minimum norm solution to a (possibly) rank-deficient linear least squares problem (1). The effective rank, $k$, of $A$ can be determined as the number of singular values which exceed a suitable threshold. Let $\stackrel{^}{\Sigma }$ be the leading $k×k$ sub-matrix of $\Sigma$, and $\stackrel{^}{V}$ be the matrix consisting of the first $k$ columns of $V$. Then the solution is given by
 $x = \hat{V} \hat{\Sigma}^{-1} \hat{c}_1,$
where ${\stackrel{^}{c}}_{1}$ consists of the first $k$ elements of $c={U}^{\mathrm{T}}b={U}_{2}^{\mathrm{T}}{U}_{1}^{\mathrm{T}}b$.
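An illustrative sketch of the truncated-SVD solution (using NumPy's LAPACK-backed routines rather than the NAG functions; the tolerance is a crude choice for demonstration):

```python
import numpy as np

# Minimum norm solution of a rank-deficient least squares problem via
# the SVD, discarding singular values below a tolerance.
rng = np.random.default_rng(3)
m, n = 8, 5
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # rank 3
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = np.sqrt(np.finfo(float).eps) * s[0]
k = int(np.sum(s > tol))                   # effective rank

c1 = (U.T @ b)[:k]
x = Vt[:k].T @ (c1 / s[:k])                # x = V^ Sigma^^{-1} c^1

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # also the min norm solution
assert k == 3
assert np.allclose(x, x_ref)
```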

### 2.5 Generalized Linear Least Squares Problems

The simple type of linear least squares problem described in Section 2.1 can be generalized in various ways.
1. Linear least squares problems with equality constraints:
 $\text{find } x \text{ to minimize } S = {‖c-Ax‖}_{2}^{2} \quad \text{subject to} \quad Bx=d,$
where $A$ is $m×n$ and $B$ is $p×n$, with $p\le n\le m+p$. The equations $Bx=d$ may be regarded as a set of equality constraints on the problem of minimizing $S$. Alternatively the problem may be regarded as solving an overdetermined system of equations
 $\begin{pmatrix} A \\ B \end{pmatrix} x = \begin{pmatrix} c \\ d \end{pmatrix},$
where some of the equations (those involving $B$) are to be solved exactly, and the others (those involving $A$) are to be solved in a least squares sense. The problem has a unique solution on the assumptions that $B$ has full row rank $p$ and the matrix $\left(\begin{array}{c}A\\ B\end{array}\right)$ has full column rank $n$. (For linear least squares problems with inequality constraints, refer to Chapter E04.)
2. General Gauss–Markov linear model problems:
 $\text{minimize } {‖y‖}_{2} \quad \text{subject to} \quad d = Ax + By,$
where $A$ is $m×n$ and $B$ is $m×p$, with $n\le m\le n+p$. When $B=I$, the problem reduces to an ordinary linear least squares problem. When $B$ is square and nonsingular, it is equivalent to a weighted linear least squares problem:
 $\text{find } x \text{ to minimize } {‖B^{-1}(d-Ax)‖}_{2}.$
The problem has a unique solution on the assumptions that $A$ has full column rank $n$, and the matrix $\left(A,B\right)$ has full row rank $m$. Unless $B$ is diagonal, for numerical stability it is generally preferable to solve a weighted linear least squares problem as a general Gauss–Markov linear model problem.

### 2.6 Generalized Orthogonal Factorization and Generalized Linear Least Squares Problems

#### 2.6.1 Generalized $QR$ Factorization

The generalized $QR$ (GQR) factorization of an $n×m$ matrix $A$ and an $n×p$ matrix $B$ is given by the pair of factorizations
 $A = QR \quad \text{and} \quad B = QTZ,$
where $Q$ and $Z$ are respectively $n×n$ and $p×p$ orthogonal matrices (or unitary matrices if $A$ and $B$ are complex). $R$ has the form
 $R = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix}, \quad \text{if } n \ge m,$ with $R_{11}$ $m×m$,
or
 $R = \begin{pmatrix} R_{11} & R_{12} \end{pmatrix}, \quad \text{if } n < m,$ with $R_{11}$ $n×n$ and $R_{12}$ $n×(m-n)$,
where $R_{11}$ is upper triangular. $T$ has the form
 $T = \begin{pmatrix} 0 & T_{12} \end{pmatrix}, \quad \text{if } n \le p,$ with $T_{12}$ $n×n$,
or
 $T = \begin{pmatrix} T_{11} \\ T_{21} \end{pmatrix}, \quad \text{if } n > p,$ with $T_{11}$ $(n-p)×p$ and $T_{21}$ $p×p$,
where $T_{12}$ or $T_{21}$ is upper triangular.
Note that if $B$ is square and nonsingular, the GQR factorization of $A$ and $B$ implicitly gives the $QR$ factorization of the matrix ${B}^{-1}A$:
 $B^{-1}A = Z^{\mathrm{T}} (T^{-1}R)$
without explicitly computing the matrix inverse ${B}^{-1}$ or the product ${B}^{-1}A$ (recall that the inverse of an invertible upper triangular matrix is upper triangular, as is the product of two upper triangular matrices).
The GQR factorization can be used to solve the general (Gauss–Markov) linear model problem (GLM) (see Section 2.5, but note that $A$ and $B$ are dimensioned differently there as $m×n$ and $p×n$ respectively). Using the GQR factorization of $A$ and $B$, we rewrite the equation $d=Ax+By$ as
 $Q^{\mathrm{T}}d = Q^{\mathrm{T}}Ax + Q^{\mathrm{T}}By = Rx + TZy.$
We partition this as
 $\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix} x + \begin{pmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$ (here $R_{11}$ is $m×m$, $T_{11}$ is $m×(p-n+m)$, $T_{12}$ is $m×(n-m)$ and $T_{22}$ is $(n-m)×(n-m)$)
where
 $\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \equiv Q^{\mathrm{T}}d, \quad \text{and} \quad \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \equiv Zy.$
The GLM problem is solved by setting
 $y_1 = 0 \quad \text{and} \quad y_2 = T_{22}^{-1} d_2,$
from which we obtain the desired solutions
 $x = R_{11}^{-1} (d_1 - T_{12} y_2) \quad \text{and} \quad y = Z^{\mathrm{T}} \begin{pmatrix} 0 \\ y_2 \end{pmatrix}.$
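An illustrative sketch of this GLM solve (using SciPy's LAPACK-backed `qr` and `rq` in place of the NAG functions, and taking $B$ square for simplicity so that $T$ is square; all names are ours):

```python
import numpy as np
from scipy.linalg import qr, rq, solve_triangular

# Generalized QR route to the Gauss-Markov problem
#   minimize ||y||_2  subject to  d = A x + B y,
# with A (n x m) and B (n x p); here p = n.
rng = np.random.default_rng(4)
n, m, p = 7, 3, 7
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, p))
d = rng.standard_normal(n)

Q, R = qr(A)                       # A = Q R (full QR, Q is n x n)
T, Z = rq(Q.T @ B)                 # Q^T B = T Z, so B = Q T Z
d1, d2 = np.split(Q.T @ d, [m])

T12 = T[:m, p - n + m:]            # partition T as in the text
T22 = T[m:, p - n + m:]
y2 = solve_triangular(T22, d2)     # T22 y2 = d2 (and y1 = 0)
x = solve_triangular(R[:m, :m], d1 - T12 @ y2)
y = Z.T @ np.concatenate([np.zeros(p - n + m), y2])

assert np.allclose(A @ x + B @ y, d)   # the model constraint holds
```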

#### 2.6.2 Generalized $RQ$ Factorization

The generalized $RQ$ (GRQ) factorization of an $m×n$ matrix $A$ and a $p×n$ matrix $B$ is given by the pair of factorizations
 $A = RQ \quad \text{and} \quad B = ZTQ,$
where $Q$ and $Z$ are respectively $n×n$ and $p×p$ orthogonal matrices (or unitary matrices if $A$ and $B$ are complex). $R$ has the form
 $R = \begin{pmatrix} 0 & R_{12} \end{pmatrix}, \quad \text{if } m \le n,$ with $R_{12}$ $m×m$,
or
 $R = \begin{pmatrix} R_{11} \\ R_{21} \end{pmatrix}, \quad \text{if } m > n,$ with $R_{11}$ $(m-n)×n$ and $R_{21}$ $n×n$,
where $R_{12}$ or $R_{21}$ is upper triangular. $T$ has the form
 $T = \begin{pmatrix} T_{11} \\ 0 \end{pmatrix}, \quad \text{if } p \ge n,$ with $T_{11}$ $n×n$,
or
 $T = \begin{pmatrix} T_{11} & T_{12} \end{pmatrix}, \quad \text{if } p < n,$ with $T_{11}$ $p×p$ and $T_{12}$ $p×(n-p)$,
where $T_{11}$ is upper triangular.
Note that if $B$ is square and nonsingular, the GRQ factorization of $A$ and $B$ implicitly gives the $RQ$ factorization of the matrix $A{B}^{-1}$:
 $AB^{-1} = (RT^{-1}) Z^{\mathrm{T}}$
without explicitly computing the matrix ${B}^{-1}$ or the product $A{B}^{-1}$ (recall that the inverse of an invertible upper triangular matrix is upper triangular, as is the product of two upper triangular matrices).
The GRQ factorization can be used to solve the linear equality-constrained least squares problem (LSE) (see Section 2.5). We use the GRQ factorization of $B$ and $A$ (note that $B$ and $A$ have swapped roles), written as
 $B = TQ \quad \text{and} \quad A = ZRQ.$
We write the linear equality constraints $Bx=d$ as
 $TQx = d,$
which we partition as:
 $\begin{pmatrix} 0 & T_{12} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = d, \quad \text{where} \quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \equiv Qx$ (the zero block has $n-p$ columns and $T_{12}$ is $p×p$).
Therefore, ${x}_{2}$ is the solution of the upper triangular system
 $T_{12} x_2 = d.$
Furthermore,
 ${‖Ax-c‖}_{2} = {‖Z^{\mathrm{T}}Ax - Z^{\mathrm{T}}c‖}_{2} = {‖RQx - Z^{\mathrm{T}}c‖}_{2}.$
We partition this expression as:
 $\begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \begin{pmatrix} c_1 \\ c_2 \end{pmatrix},$ (with $R_{11}$ $(n-p)×(n-p)$ and $R_{22}$ $(p+m-n)×p$)
where $\left(\begin{array}{c}{c}_{1}\\ {c}_{2}\end{array}\right)\equiv {Z}^{\mathrm{T}}c$.
To solve the LSE problem, we set
 $R_{11} x_1 + R_{12} x_2 - c_1 = 0,$
which gives ${x}_{1}$ as the solution of the upper triangular system
 $R_{11} x_1 = c_1 - R_{12} x_2.$
Finally, the desired solution is given by
 $x = Q^{\mathrm{T}} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$
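An illustrative sketch of this LSE solve (using SciPy's LAPACK-backed `qr` and `rq` in place of the NAG functions; all names are ours, and the final check verifies the first-order optimality condition rather than comparing with a reference driver):

```python
import numpy as np
from scipy.linalg import qr, rq, solve_triangular

# Generalized RQ route to the equality-constrained problem
#   minimize ||A x - c||_2  subject to  B x = d,
# with A (m x n), B (p x n) and p <= n <= m + p.
rng = np.random.default_rng(5)
m, n, p = 8, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))
c = rng.standard_normal(m)
d = rng.standard_normal(p)

T, Q = rq(B)                       # B = T Q with T = [0 T12], T12 p x p
Z, R = qr(A @ Q.T)                 # A = Z R Q
c1 = (Z.T @ c)[:n - p]

x2 = solve_triangular(T[:, n - p:], d)            # T12 x2 = d
R11, R12 = R[:n - p, :n - p], R[:n - p, n - p:]
x1 = solve_triangular(R11, c1 - R12 @ x2)         # R11 x1 = c1 - R12 x2
x = Q.T @ np.concatenate([x1, x2])

assert np.allclose(B @ x, d)       # the equality constraints hold
g = A.T @ (c - A @ x)              # KKT: g must lie in the row space of B
lam, *_ = np.linalg.lstsq(B.T, g, rcond=None)
assert np.allclose(B.T @ lam, g)
```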

#### 2.6.3 Generalized Singular Value Decomposition (GSVD)

The generalized (or quotient) singular value decomposition of an $m×n$ matrix $A$ and a $p×n$ matrix $B$ is given by the pair of factorizations
 $A = U \Sigma_1 \left[0, R\right] Q^{\mathrm{T}} \quad \text{and} \quad B = V \Sigma_2 \left[0, R\right] Q^{\mathrm{T}}.$
The matrices in these factorizations have the following properties:
• $U$ is $m×m$, $V$ is $p×p$, $Q$ is $n×n$, and all three matrices are orthogonal. If $A$ and $B$ are complex, these matrices are unitary instead of orthogonal, and ${Q}^{\mathrm{T}}$ should be replaced by ${Q}^{\mathrm{H}}$ in the pair of factorizations.
• $R$ is $r×r$, upper triangular and nonsingular. $\left[0,R\right]$ is $r×n$ (in other words, the $0$ is an $r×(n-r)$ zero matrix). The integer $r$ is the rank of $\left(\begin{array}{c}A\\ B\end{array}\right)$, and satisfies $r\le n$.
• ${\Sigma }_{1}$ is $m×r$, ${\Sigma }_{2}$ is $p×r$, both are real, non-negative and diagonal, and ${\Sigma }_{1}^{\mathrm{T}}{\Sigma }_{1}+{\Sigma }_{2}^{\mathrm{T}}{\Sigma }_{2}=I$. Write ${\Sigma }_{1}^{\mathrm{T}}{\Sigma }_{1}=\mathrm{diag}\left({\alpha }_{1}^{2},\dots ,{\alpha }_{r}^{2}\right)$ and ${\Sigma }_{2}^{\mathrm{T}}{\Sigma }_{2}=\mathrm{diag}\left({\beta }_{1}^{2},\dots ,{\beta }_{r}^{2}\right)$, where ${\alpha }_{i}$ and ${\beta }_{i}$ lie in the interval from $0$ to $1$. The ratios ${\alpha }_{1}/{\beta }_{1},\dots ,{\alpha }_{r}/{\beta }_{r}$ are called the generalized singular values of the pair $A$, $B$. If ${\beta }_{i}=0$, then the generalized singular value ${\alpha }_{i}/{\beta }_{i}$ is infinite.
${\Sigma }_{1}$ and ${\Sigma }_{2}$ have the following detailed structures, depending on whether $m\ge r$ or $m<r$. In the first case, when $m\ge r$,
 $\Sigma_1 = \begin{pmatrix} I & 0 \\ 0 & C \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \Sigma_2 = \begin{pmatrix} 0 & S \\ 0 & 0 \end{pmatrix},$ where the column blocks have widths $k$ and $l$, the row blocks of $\Sigma_1$ have heights $k$, $l$ and $m-k-l$, and those of $\Sigma_2$ have heights $l$ and $p-l$.
Here $l$ is the rank of $B$, $k=r-l$, $C$ and $S$ are diagonal matrices satisfying ${C}^{2}+{S}^{2}=I$, and $S$ is nonsingular. We may also identify ${\alpha }_{1}=\cdots ={\alpha }_{k}=1$, ${\alpha }_{k+i}={c}_{ii}$, for $i=1,2,\dots ,l$, ${\beta }_{1}=\cdots ={\beta }_{k}=0$, and ${\beta }_{k+i}={s}_{ii}$, for $i=1,2,\dots ,l$. Thus, the first $k$ generalized singular values ${\alpha }_{1}/{\beta }_{1},\dots ,{\alpha }_{k}/{\beta }_{k}$ are infinite, and the remaining $l$ generalized singular values are finite.
In the second case, when $m<r$,
 $\Sigma_1 = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \end{pmatrix}$
and
 $\Sigma_2 = \begin{pmatrix} 0 & S & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{pmatrix},$
where the column blocks have widths $k$, $m-k$ and $k+l-m$, the row blocks of $\Sigma_1$ have heights $k$ and $m-k$, and those of $\Sigma_2$ have heights $m-k$, $k+l-m$ and $p-l$.
Again, $l$ is the rank of $B$, $k=r-l$, $C$ and $S$ are diagonal matrices satisfying ${C}^{2}+{S}^{2}=I$, and $S$ is nonsingular, and we may identify ${\alpha }_{1}=\cdots ={\alpha }_{k}=1$, ${\alpha }_{k+i}={c}_{ii}$, for $i=1,2,\dots ,m-k$, ${\alpha }_{m+1}=\cdots ={\alpha }_{r}=0$, ${\beta }_{1}=\cdots ={\beta }_{k}=0$, ${\beta }_{k+i}={s}_{ii}$, for $i=1,2,\dots ,m-k$ and ${\beta }_{m+1}=\cdots ={\beta }_{r}=1$. Thus, the first $k$ generalized singular values ${\alpha }_{1}/{\beta }_{1},\dots ,{\alpha }_{k}/{\beta }_{k}$ are infinite, and the remaining $l$ generalized singular values are finite.
Here are some important special cases of the generalized singular value decomposition. First, if $B$ is square and nonsingular, then $r=n$ and the generalized singular value decomposition of $A$ and $B$ is equivalent to the singular value decomposition of $A{B}^{-1}$, where the singular values of $A{B}^{-1}$ are equal to the generalized singular values of the pair $A$, $B$:
 $AB^{-1} = (U \Sigma_1 R Q^{\mathrm{T}}) (V \Sigma_2 R Q^{\mathrm{T}})^{-1} = U (\Sigma_1 \Sigma_2^{-1}) V^{\mathrm{T}}.$
Second, for the matrix $C$, where
 $C \equiv \begin{pmatrix} A \\ B \end{pmatrix},$
if the columns of $C$ are orthonormal, then $r=n$, $R=I$ and the generalized singular value decomposition of $A$ and $B$ is equivalent to the CS (Cosine–Sine) decomposition of $C$:
 $\begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} \Sigma_1 \\ \Sigma_2 \end{pmatrix} Q^{\mathrm{T}}.$
Third, the generalized eigenvalues and eigenvectors of ${A}^{\mathrm{T}}A-\lambda {B}^{\mathrm{T}}B$ can be expressed in terms of the generalized singular value decomposition: Let
 $X = Q \begin{pmatrix} I & 0 \\ 0 & R^{-1} \end{pmatrix}.$
Then
 $X^{\mathrm{T}} A^{\mathrm{T}} A X = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_1^{\mathrm{T}} \Sigma_1 \end{pmatrix} \quad \text{and} \quad X^{\mathrm{T}} B^{\mathrm{T}} B X = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_2^{\mathrm{T}} \Sigma_2 \end{pmatrix}.$
Therefore, the columns of $X$ are the eigenvectors of ${A}^{\mathrm{T}}A-\lambda {B}^{\mathrm{T}}B$, and ‘nontrivial’ eigenvalues are the squares of the generalized singular values (see also Section 2.8). ‘Trivial’ eigenvalues are those corresponding to the leading $n-r$ columns of $X$, which span the common null space of ${A}^{\mathrm{T}}A$ and ${B}^{\mathrm{T}}B$. The ‘trivial eigenvalues’ are not well defined.

#### 2.6.4 The Full CS Decomposition of Orthogonal Matrices

In Section 2.6.3 the CS (Cosine-Sine) decomposition of an orthogonal matrix partitioned into two submatrices $A$ and $B$ was given by
 $\begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} \Sigma_1 \\ \Sigma_2 \end{pmatrix} Q^{\mathrm{T}}.$
The full CS decomposition of an $m×m$ orthogonal matrix $X$ partitions $X$ into four submatrices and factorizes as
 $\begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} = \begin{pmatrix} U_1 & 0 \\ 0 & U_2 \end{pmatrix} \begin{pmatrix} \Sigma_{11} & -\Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \begin{pmatrix} V_1 & 0 \\ 0 & V_2 \end{pmatrix}^{\mathrm{T}}$
where ${X}_{11}$ is a $p×q$ submatrix (which implies the dimensions of ${X}_{12}$, ${X}_{21}$ and ${X}_{22}$); ${U}_{1}$, ${U}_{2}$, ${V}_{1}$ and ${V}_{2}$ are orthogonal matrices of dimensions $p$, $m-p$, $q$ and $m-q$ respectively; ${\Sigma }_{11}$ is the $p×q$ single-diagonal matrix
 $\Sigma_{11} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad k_{11} = \min(p,q),$
with column widths $k_{11}-r$, $r$ and $q-k_{11}$ and row heights $k_{11}-r$, $r$ and $p-k_{11}$; ${\Sigma }_{12}$ is the $p×(m-q)$ single-diagonal matrix
 $\Sigma_{12} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad k_{12} = \min(p,m-q),$
with column widths $m-q-k_{12}$, $r$ and $k_{12}-r$ and row heights $p-k_{12}$, $r$ and $k_{12}-r$; ${\Sigma }_{21}$ is the $(m-p)×q$ single-diagonal matrix
 $\Sigma_{21} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad k_{21} = \min(m-p,q),$
with column widths $q-k_{21}$, $r$ and $k_{21}-r$ and row heights $m-p-k_{21}$, $r$ and $k_{21}-r$; and ${\Sigma }_{22}$ is the $(m-p)×(m-q)$ single-diagonal matrix
 $\Sigma_{22} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad k_{22} = \min(m-p,m-q),$
with column widths $k_{22}-r$, $r$ and $m-q-k_{22}$ and row heights $k_{22}-r$, $r$ and $m-p-k_{22}$, where $r=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(p,m-p,q,m-q\right)$ and a zero block width or height indicates that the corresponding rows or columns are absent. The $r×r$ diagonal matrices $C$ and $S$ are such that ${C}^{2}+{S}^{2}=I$.
This is equivalent to the simultaneous singular value decomposition of the four submatrices ${X}_{11}$, ${X}_{12}$, ${X}_{21}$ and ${X}_{22}$.
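An illustrative sketch of the full CS decomposition (using SciPy's LAPACK-backed `cossin` rather than the NAG functions; the partition sizes `p` and `q` and all names are ours):

```python
import numpy as np
from scipy.linalg import cossin
from scipy.stats import ortho_group

# Full CS decomposition of an orthogonal matrix X whose upper-left
# block X11 is p x q.
X = ortho_group.rvs(6, random_state=9)   # a random 6 x 6 orthogonal matrix
p = q = 3
U, CS, Vt = cossin(X, p=p, q=q)

assert np.allclose(U @ CS @ Vt, X)       # X = diag(U1,U2) Sigma diag(V1,V2)^T
assert np.allclose(U[:p, p:], 0)         # U is block diagonal
assert np.allclose(U[p:, :p], 0)
```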

### 2.7 Symmetric Eigenvalue Problems

The symmetric eigenvalue problem is to find the eigenvalues, $\lambda$, and corresponding eigenvectors, $z\ne 0$, such that
 $Az = \lambda z, \quad A = A^{\mathrm{T}}, \quad \text{where } A \text{ is real}.$
For the Hermitian eigenvalue problem we have
 $Az = \lambda z, \quad A = A^{\mathrm{H}}, \quad \text{where } A \text{ is complex}.$
For both problems the eigenvalues $\lambda$ are real.
When all eigenvalues and eigenvectors have been computed, we write
 $A = Z \Lambda Z^{\mathrm{T}} \quad (\text{or } A = Z \Lambda Z^{\mathrm{H}} \text{ if complex}),$
where $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues, and $Z$ is an orthogonal (or unitary) matrix whose columns are the eigenvectors. This is the classical spectral factorization of $A$.
The basic task of the symmetric eigenproblem functions is to compute values of $\lambda$ and, optionally, corresponding vectors $z$ for a given matrix $A$. This computation proceeds in the following stages.
1. The real symmetric or complex Hermitian matrix $A$ is reduced to real tridiagonal form $T$. If $A$ is real symmetric this decomposition is $A=QT{Q}^{\mathrm{T}}$ with $Q$ orthogonal and $T$ symmetric tridiagonal. If $A$ is complex Hermitian, the decomposition is $A=QT{Q}^{\mathrm{H}}$ with $Q$ unitary and $T$, as before, real symmetric tridiagonal.
2. Eigenvalues and eigenvectors of the real symmetric tridiagonal matrix $T$ are computed. If all eigenvalues and eigenvectors are computed, this is equivalent to factorizing $T$ as $T=S\Lambda {S}^{\mathrm{T}}$, where $S$ is orthogonal and $\Lambda$ is diagonal. The diagonal entries of $\Lambda$ are the eigenvalues of $T$, which are also the eigenvalues of $A$, and the columns of $S$ are the eigenvectors of $T$; the eigenvectors of $A$ are the columns of $Z=QS$, so that $A=Z\Lambda {Z}^{\mathrm{T}}$ ($Z\Lambda {Z}^{\mathrm{H}}$ when $A$ is complex Hermitian).
This chapter supports four primary algorithms for computing eigenvalues and eigenvectors of real symmetric matrices and complex Hermitian matrices. They are:
(i) the divide-and-conquer algorithm;
(ii) the $QR$ algorithm;
(iii) bisection followed by inverse iteration;
(iv) the Relatively Robust Representation (RRR).
The divide-and-conquer algorithm is generally more efficient than the traditional $QR$ algorithm for computing all eigenvalues and eigenvectors, but the RRR algorithm tends to be fastest of all. For further information and references see Anderson et al. (1999).
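A minimal sketch of the spectral factorization (using NumPy's LAPACK-backed symmetric eigensolver as an illustrative stand-in for the NAG functions):

```python
import numpy as np

# Spectral factorization A = Z Lambda Z^T of a real symmetric matrix.
rng = np.random.default_rng(6)
n = 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                       # a real symmetric matrix

lam, Z = np.linalg.eigh(A)              # eigenvalues ascending, Z orthogonal

assert np.allclose(Z @ np.diag(lam) @ Z.T, A)   # A = Z Lambda Z^T
assert np.allclose(Z.T @ Z, np.eye(n))          # Z is orthogonal
```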

### 2.8 Generalized Symmetric-definite Eigenvalue Problems

This section is concerned with the solution of the generalized eigenvalue problems $Az=\lambda Bz$, $ABz=\lambda z$, and $BAz=\lambda z$, where $A$ and $B$ are real symmetric or complex Hermitian and $B$ is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of $B$ as either $B=L{L}^{\mathrm{T}}$ or $B={U}^{\mathrm{T}}U$ ($L{L}^{\mathrm{H}}$ or ${U}^{\mathrm{H}}U$ in the Hermitian case).
With $B=L{L}^{\mathrm{T}}$, we have
 $Az = \lambda Bz \quad \Rightarrow \quad (L^{-1}AL^{-\mathrm{T}})(L^{\mathrm{T}}z) = \lambda (L^{\mathrm{T}}z).$
Hence the eigenvalues of $Az=\lambda Bz$ are those of $Cy=\lambda y$, where $C$ is the symmetric matrix $C={L}^{-1}A{L}^{-\mathrm{T}}$ and $y={L}^{\mathrm{T}}z$. In the complex case $C$ is Hermitian with $C={L}^{-1}A{L}^{-\mathrm{H}}$ and $y={L}^{\mathrm{H}}z$.
Table 1 summarises how each of the three types of problem may be reduced to standard form $Cy=\lambda y$, and how the eigenvectors $z$ of the original problem may be recovered from the eigenvectors $y$ of the reduced problem. The table applies to real problems; for complex problems, transposed matrices must be replaced by conjugate-transposes.
Table 1
Reduction of generalized symmetric-definite eigenproblems to standard problems

| Type of problem | Factorization of $B$ | Reduction | Recovery of eigenvectors |
|---|---|---|---|
| 1. $Az=\lambda Bz$ | $B=L{L}^{\mathrm{T}}$ or $B={U}^{\mathrm{T}}U$ | $C={L}^{-1}A{L}^{-\mathrm{T}}$ or $C={U}^{-\mathrm{T}}A{U}^{-1}$ | $z={L}^{-\mathrm{T}}y$ or $z={U}^{-1}y$ |
| 2. $ABz=\lambda z$ | $B=L{L}^{\mathrm{T}}$ or $B={U}^{\mathrm{T}}U$ | $C={L}^{\mathrm{T}}AL$ or $C=UA{U}^{\mathrm{T}}$ | $z={L}^{-\mathrm{T}}y$ or $z={U}^{-1}y$ |
| 3. $BAz=\lambda z$ | $B=L{L}^{\mathrm{T}}$ or $B={U}^{\mathrm{T}}U$ | $C={L}^{\mathrm{T}}AL$ or $C=UA{U}^{\mathrm{T}}$ | $z=Ly$ or $z={U}^{\mathrm{T}}y$ |
When the generalized symmetric-definite problem has been reduced to the corresponding standard problem $Cy=\lambda y$, this may then be solved using the functions described in the previous section. No special functions are needed to recover the eigenvectors $z$ of the generalized problem from the eigenvectors $y$ of the standard problem, because these computations are simple applications of Level 2 or Level 3 BLAS (see Chapter F16).
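A minimal sketch of the type 1 reduction from Table 1 (using NumPy's LAPACK-backed routines as illustrative stand-ins for the NAG functions; an explicit inverse is formed here only for clarity):

```python
import numpy as np

# Reduce A z = lambda B z (A symmetric, B symmetric positive definite)
# to the standard problem C y = lambda y with C = L^{-1} A L^{-T},
# then recover eigenvectors as z = L^{-T} y.
rng = np.random.default_rng(7)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)            # symmetric positive definite

L = np.linalg.cholesky(B)              # B = L L^T
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T                  # C = L^{-1} A L^{-T}, symmetric
lam, Y = np.linalg.eigh(C)
Z = np.linalg.solve(L.T, Y)            # z = L^{-T} y

for i in range(n):                     # each pair satisfies A z = lambda B z
    assert np.allclose(A @ Z[:, i], lam[i] * B @ Z[:, i])
```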

### 2.9 Packed Storage for Symmetric Matrices

Functions which handle symmetric matrices are usually designed so that they use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length $n\left(n+1\right)/2$; that is, the storage is almost halved.
This storage format is referred to as packed storage; it is described in Section 3.4.2 in the F07 Chapter Introduction.
Functions designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.

### 2.10 Band Matrices

A band matrix is one whose elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. The storage scheme for band matrices is described in Section 3.4.4 in the F07 Chapter Introduction.
If the problem is the generalized symmetric definite eigenvalue problem $Az=\lambda Bz$ and the matrices $A$ and $B$ are additionally banded, the matrix $C$ as defined in Section 2.8 is, in general, full. We can reduce the problem to a banded standard problem by modifying the definition of $C$ thus:
 $C = X^{\mathrm{T}}AX, \quad \text{where} \quad X = U^{-1}Q \text{ or } L^{-\mathrm{T}}Q,$
where $Q$ is an orthogonal matrix chosen to ensure that $C$ has bandwidth no greater than that of $A$.
A further refinement is possible when $A$ and $B$ are banded, which halves the amount of work required to form $C$. Instead of the standard Cholesky factorization of $B$ as ${U}^{\mathrm{T}}U$ or $L{L}^{\mathrm{T}}$, we use a split Cholesky factorization $B={S}^{\mathrm{T}}S$, where
 $S = \begin{pmatrix} U_{11} & \\ M_{21} & L_{22} \end{pmatrix}$
with ${U}_{11}$ upper triangular and ${L}_{22}$ lower triangular of order approximately $n/2$; $S$ has the same bandwidth as $B$.

### 2.11 Nonsymmetric Eigenvalue Problems

The nonsymmetric eigenvalue problem is to find the eigenvalues, $\lambda$, and corresponding eigenvectors, $v\ne 0$, such that
 $Av=λv.$
More precisely, a vector $v$ as just defined is called a right eigenvector of $A$, and a vector $u\ne 0$ satisfying
 $u^{\mathrm{T}}A = \lambda u^{\mathrm{T}} \quad (u^{\mathrm{H}}A = \lambda u^{\mathrm{H}} \text{ when } u \text{ is complex})$
is called a left eigenvector of $A$.
A real matrix $A$ may have complex eigenvalues, occurring as complex conjugate pairs.
This problem can be solved via the Schur factorization of $A$, defined in the real case as
 $A = ZTZ^{\mathrm{T}},$
where $Z$ is an orthogonal matrix and $T$ is an upper quasi-triangular matrix with $1×1$ and $2×2$ diagonal blocks, the $2×2$ blocks corresponding to complex conjugate pairs of eigenvalues of $A$. In the complex case, the Schur factorization is
 $A = Z T Z^{\mathrm{H}} ,$
where $Z$ is unitary and $T$ is a complex upper triangular matrix.
The columns of $Z$ are called the Schur vectors. For each $k$ ($1\le k\le n$), the first $k$ columns of $Z$ form an orthonormal basis for the invariant subspace corresponding to the first $k$ eigenvalues on the diagonal of $T$. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of $k$ eigenvalues occupy the $k$ leading positions on the diagonal of $T$.
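An ordered real Schur factorization can be illustrated with SciPy (`scipy.linalg.schur` is an interface to the same underlying LAPACK computation; it is not the NAG interface). With `sort='lhp'`, the eigenvalues in the left half-plane are moved to the leading diagonal blocks of $T$:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Real Schur factorization A = Z T Z^T; k eigenvalues in the left
# half-plane are moved to the leading positions of T
T, Z, k = schur(A, output='real', sort='lhp')

resid = np.linalg.norm(A - Z @ T @ Z.T) / np.linalg.norm(A)
orth = np.linalg.norm(Z.T @ Z - np.eye(5))
```

The first `k` columns of `Z` then form an orthonormal basis for the invariant subspace associated with the selected eigenvalues.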
The two basic tasks of the nonsymmetric eigenvalue functions are to compute, for a given matrix $A$, all $n$ values of $\lambda$ and, if desired, their associated right eigenvectors $v$ and/or left eigenvectors $u$, and the Schur factorization.
These two basic tasks can be performed in the following stages.
1. 1.A general matrix $A$ is reduced to upper Hessenberg form $H$ which is zero below the first subdiagonal. The reduction may be written $A=QH{Q}^{\mathrm{T}}$ with $Q$ orthogonal if $A$ is real, or $A=QH{Q}^{\mathrm{H}}$ with $Q$ unitary if $A$ is complex.
2. 2.The upper Hessenberg matrix $H$ is reduced to Schur form $T$, giving the Schur factorization $H=ST{S}^{\mathrm{T}}$ (for $H$ real) or $H=ST{S}^{\mathrm{H}}$ (for $H$ complex). The matrix $S$ (the Schur vectors of $H$) may optionally be computed as well. Alternatively $S$ may be postmultiplied into the matrix $Q$ determined in stage $1$, to give the matrix $Z=QS$, the Schur vectors of $A$. The eigenvalues are obtained from the diagonal elements or diagonal blocks of $T$.
3. 3.Given the eigenvalues, the eigenvectors may be computed in two different ways. Inverse iteration can be performed on $H$ to compute the eigenvectors of $H$, and then the eigenvectors can be multiplied by the matrix $Q$ in order to transform them to eigenvectors of $A$. Alternatively the eigenvectors of $T$ can be computed, and optionally transformed to those of $H$ or $A$ if the matrix $S$ or $Z$ is supplied.
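Stages 1 and 2 can be sketched with SciPy (again an illustration of the computational path, not the NAG calling sequence; stage 3, computing eigenvectors from $T$, is omitted):

```python
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

# Stage 1: reduce A to upper Hessenberg form, A = Q H Q^T
H, Q = hessenberg(A, calc_q=True)

# Stage 2: Schur form of H, H = S T S^T; Schur vectors of A are Z = Q S
T, S = schur(H, output='real')
Z = Q @ S

# Eigenvalues come from the diagonal 1x1 and 2x2 blocks of T
lam = np.sort_complex(np.linalg.eigvals(T))
lam_direct = np.sort_complex(np.linalg.eigvals(A))
err = np.max(np.abs(lam - lam_direct))
resid = np.linalg.norm(A - Z @ T @ Z.T) / np.linalg.norm(A)
```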
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix. This is discussed further in Section 2.14.6 below.

### 2.12Generalized Nonsymmetric Eigenvalue Problem

The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, $\lambda$, and corresponding eigenvectors, $v\ne 0$, such that
 $Av=λBv.$
More precisely, a vector $v$ as just defined is called a right eigenvector of the matrix pair $\left(A,B\right)$, and a vector $u\ne 0$ satisfying
 $u^{\mathrm{T}} A = \lambda u^{\mathrm{T}} B \quad ( u^{\mathrm{H}} A = \lambda u^{\mathrm{H}} B \text{ when } u \text{ is complex} )$
is called a left eigenvector of the matrix pair $\left(A,B\right)$.
If $B$ is singular then the problem has one or more infinite eigenvalues $\lambda =\infty$, corresponding to $Bv=0$. Note that if $A$ is nonsingular, then the equivalent problem $\mu Av=Bv$ is perfectly well defined and an infinite eigenvalue corresponds to $\mu =0$. To deal with both finite (including zero) and infinite eigenvalues, the functions in this chapter do not compute $\lambda$ explicitly, but rather return a pair of numbers $\left(\alpha ,\beta \right)$ such that if $\beta \ne 0$
 $λ =α/β$
and if $\alpha \ne 0$ and $\beta =0$ then $\lambda =\infty$. $\beta$ is always returned as real and non-negative. Of course, computationally an infinite eigenvalue may correspond to a small $\beta$ rather than an exact zero.
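The $\left(\alpha,\beta\right)$ representation can be illustrated with SciPy's `homogeneous_eigvals` option (an illustration only; the NAG functions return $\alpha$ and $\beta$ through their own arguments). Here $B$ is exactly singular, so one eigenvalue is infinite:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # singular: the pencil has one infinite eigenvalue

# homogeneous_eigvals=True returns the pairs (alpha, beta) rather than alpha/beta
(alpha, beta), v = eig(A, B, homogeneous_eigvals=True)

finite = np.abs(beta) > 1e-12
lam_finite = alpha[finite] / beta[finite]   # here: det(A - lambda B) = -4*lambda - 2
n_infinite = int(np.sum(~finite))
```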
For a given pair $\left(A,B\right)$ the set of all the matrices of the form $\left(A-\lambda B\right)$ is called a matrix pencil and $\lambda$ and $v$ are said to be an eigenvalue and eigenvector of the pencil $\left(A-\lambda B\right)$. If $A$ and $B$ are both singular and share a common null space then
 $det(A-λB)≡0$
so that the pencil $\left(A-\lambda B\right)$ is singular for all $\lambda$. In other words any $\lambda$ can be regarded as an eigenvalue. In exact arithmetic a singular pencil will have $\alpha =\beta =0$ for some $\left(\alpha ,\beta \right)$. Computationally if some pair $\left(\alpha ,\beta \right)$ is small then the pencil is singular, or nearly singular, and no reliance can be placed on any of the computed eigenvalues. Singular pencils can also manifest themselves in other ways; see, in particular, Sections 2.3.5.2 and 4.11.1.4 of Anderson et al. (1999) for further details.
The generalized eigenvalue problem can be solved via the generalized Schur factorization of the pair $\left(A,B\right)$ defined in the real case as
 $A = Q S Z^{\mathrm{T}} , \quad B = Q T Z^{\mathrm{T}} ,$
where $Q$ and $Z$ are orthogonal, $T$ is upper triangular with non-negative diagonal elements and $S$ is upper quasi-triangular with $1×1$ and $2×2$ diagonal blocks, the $2×2$ blocks corresponding to complex conjugate pairs of eigenvalues. In the complex case, the generalized Schur factorization is
 $A = Q S Z^{\mathrm{H}} , \quad B = Q T Z^{\mathrm{H}} ,$
where $Q$ and $Z$ are unitary and $S$ and $T$ are upper triangular, with $T$ having real non-negative diagonal elements. The columns of $Q$ and $Z$ are called respectively the left and right generalized Schur vectors and span pairs of deflating subspaces of $A$ and $B$, which are a generalization of invariant subspaces.
It is possible to order the generalized Schur factorization so that any desired set of $k$ eigenvalues correspond to the $k$ leading positions on the diagonals of the pair $\left(S,T\right)$.
The two basic tasks of the generalized nonsymmetric eigenvalue functions are to compute, for a given pair $\left(A,B\right)$, all $n$ values of $\lambda$ and, if desired, their associated right eigenvectors $v$ and/or left eigenvectors $u$, and the generalized Schur factorization.
These two basic tasks can be performed in the following stages.
1. 1.The matrix pair $\left(A,B\right)$ is reduced to generalized upper Hessenberg form $\left(H,R\right)$, where $H$ is upper Hessenberg (zero below the first subdiagonal) and $R$ is upper triangular. The reduction may be written as $A={Q}_{1}H{Z}_{1}^{\mathrm{T}},B={Q}_{1}R{Z}_{1}^{\mathrm{T}}$ in the real case with ${Q}_{1}$ and ${Z}_{1}$ orthogonal, and $A={Q}_{1}H{Z}_{1}^{\mathrm{H}},B={Q}_{1}R{Z}_{1}^{\mathrm{H}}$ in the complex case with ${Q}_{1}$ and ${Z}_{1}$ unitary.
2. 2.The generalized upper Hessenberg form $\left(H,R\right)$ is reduced to the generalized Schur form $\left(S,T\right)$ using the generalized Schur factorization $H={Q}_{2}S{Z}_{2}^{\mathrm{T}}$, $R={Q}_{2}T{Z}_{2}^{\mathrm{T}}$ in the real case with ${Q}_{2}$ and ${Z}_{2}$ orthogonal, and $H={Q}_{2}S{Z}_{2}^{\mathrm{H}},R={Q}_{2}T{Z}_{2}^{\mathrm{H}}$ in the complex case. The generalized Schur vectors of $\left(A,B\right)$ are given by $Q={Q}_{1}{Q}_{2}$, $Z={Z}_{1}{Z}_{2}$. The eigenvalues are obtained from the diagonal elements (or blocks) of the pair $\left(S,T\right)$.
3. 3.Given the eigenvalues, the eigenvectors of the pair $\left(S,T\right)$ can be computed, and optionally transformed to those of $\left(H,R\right)$ or $\left(A,B\right)$.
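The end result of stages 1 and 2, the generalized (real) Schur factorization, can be illustrated with SciPy's QZ routine (an LAPACK-based sketch, not the NAG interface):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Generalized real Schur factorization: A = Q S Z^T, B = Q T Z^T,
# with S quasi-triangular and T upper triangular
S, T, Q, Z = qz(A, B, output='real')

resA = np.linalg.norm(A - Q @ S @ Z.T) / np.linalg.norm(A)
resB = np.linalg.norm(B - Q @ T @ Z.T) / np.linalg.norm(B)
tri_err = np.linalg.norm(np.tril(T, -1))   # T is upper triangular
```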
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix pair. This is discussed further in Section 2.14.8 below.

### 2.13The Sylvester Equation and the Generalized Sylvester Equation

The Sylvester equation is a matrix equation of the form
 $AX+XB=C,$
where $A$, $B$, and $C$ are given matrices with $A$ being $m×m$, $B$ an $n×n$ matrix and $C$, and the solution matrix $X$, $m×n$ matrices. The solution of a special case of this equation occurs in the computation of the condition number for an invariant subspace, but a combination of functions in this chapter allows the solution of the general Sylvester equation.
Functions are also provided for solving a special case of the generalized Sylvester equations
 $AR-LB=C , DR-LE=F ,$
where $\left(A,D\right)$, $\left(B,E\right)$ and $\left(C,F\right)$ are given matrix pairs, and $R$ and $L$ are the solution matrices.

### 2.14Error and Perturbation Bounds and Condition Numbers

In this section we discuss the effects of rounding errors in the solution process and the effects of uncertainties in the data, on the solution to the problem. A number of the functions in this chapter return information, such as condition numbers, that allow these effects to be assessed. First we discuss some notation used in the error bounds of later sections.
The bounds usually contain the factor $p\left(n\right)$ (or $p\left(m,n\right)$), which grows as a function of the matrix dimension $n$ (or matrix dimensions $m$ and $n$). It measures how errors can grow as a function of the matrix dimension, and represents a potentially different function for each problem. In practice, it usually grows just linearly; $p\left(n\right)\le 10n$ is often true, although generally only much weaker bounds can actually be proved. We normally describe $p\left(n\right)$ as a ‘modestly growing’ function of $n$. For detailed derivations of various $p\left(n\right)$, see Golub and Van Loan (2012) and Wilkinson (1965).
For linear equation (see Chapter F07) and least squares solvers, we consider bounds on the relative error $‖x-\stackrel{^}{x}‖/‖x‖$ in the computed solution $\stackrel{^}{x}$, where $x$ is the true solution. For eigenvalue problems we consider bounds on the error $|{\lambda }_{i}-{\stackrel{^}{\lambda }}_{i}|$ in the $i$th computed eigenvalue ${\stackrel{^}{\lambda }}_{i}$, where ${\lambda }_{i}$ is the true $i$th eigenvalue. For singular value problems we similarly consider bounds $|{\sigma }_{i}-{\stackrel{^}{\sigma }}_{i}|$.
Bounding the error in computed eigenvectors and singular vectors ${\stackrel{^}{v}}_{i}$ is more subtle because these vectors are not unique: even though we restrict ${‖{\stackrel{^}{v}}_{i}‖}_{2}=1$ and ${‖{v}_{i}‖}_{2}=1$, we may still multiply them by arbitrary constants of absolute value $1$. So to avoid ambiguity we bound the angular difference between ${\stackrel{^}{v}}_{i}$ and the true vector ${v}_{i}$, so that
 $\theta \left( v_i , \hat{v}_i \right) = \text{acute angle between } v_i \text{ and } \hat{v}_i = \arccos \left| v_i^{\mathrm{H}} \hat{v}_i \right| .$ (2)
Here $\mathrm{arccos}$ is taken in the standard range: $0\le \mathrm{arccos}\left(x\right)<\pi$. When $\theta \left({v}_{i},{\stackrel{^}{v}}_{i}\right)$ is small, we can choose a constant $\alpha$ with absolute value $1$ so that ${‖\alpha {v}_{i}-{\stackrel{^}{v}}_{i}‖}_{2}\approx \theta \left({v}_{i},{\stackrel{^}{v}}_{i}\right)$.
In addition to bounds for individual eigenvectors, bounds can be obtained for the spaces spanned by collections of eigenvectors. These may be much more accurately determined than the individual eigenvectors which span them. These spaces are called invariant subspaces in the case of eigenvectors, because if $v$ is any vector in the space, $Av$ is also in the space, where $A$ is the matrix. Again, we will use angle to measure the difference between a computed space $\stackrel{^}{S}$ and the true space $S$:
 $\theta \left( S , \hat{S} \right) = \text{acute angle between } S \text{ and } \hat{S} = \max_{\substack{ s \in S \\ s \ne 0 }} \, \min_{\substack{ \hat{s} \in \hat{S} \\ \hat{s} \ne 0 }} \theta \left( s , \hat{s} \right) \quad \text{or} \quad \max_{\substack{ \hat{s} \in \hat{S} \\ \hat{s} \ne 0 }} \, \min_{\substack{ s \in S \\ s \ne 0 }} \theta \left( s , \hat{s} \right)$ (3)
$\theta \left(S,\stackrel{^}{S}\right)$ may be computed as follows. Let $S$ be a matrix whose columns are orthonormal and $\mathrm{span}S$. Similarly let $\stackrel{^}{S}$ be an orthonormal matrix with columns spanning $\stackrel{^}{S}$. Then
 $\theta \left( S , \hat{S} \right) = \arccos \sigma_{\min} \left( S^{\mathrm{H}} \hat{S} \right) .$
Finally, we remark on the accuracy of the bounds when they are large. Relative errors like $‖\stackrel{^}{x}-x‖/‖x‖$ and angular errors like $\theta \left({\stackrel{^}{v}}_{i},{v}_{i}\right)$ are only of interest when they are much less than $1$. Some stated bounds are not strictly true when they are close to $1$, but rigorous bounds are much more complicated and supply little extra information in the interesting case of small errors. These bounds are indicated by using the symbol $\lesssim$, or ‘approximately less than’, instead of the usual $\le$. Thus, when these bounds are close to 1 or greater, they indicate that the computed answer may have no significant digits at all, but do not otherwise bound the error.
A number of functions in this chapter return error estimates and/or condition number estimates directly. In other cases Anderson et al. (1999) gives code fragments to illustrate the computation of these estimates, and a number of the Chapter F08 example programs, for the driver functions, implement these code fragments.

#### 2.14.1Least squares problems

The conventional error analysis of linear least squares problems goes as follows. The problem is to find the $x$ minimizing ${‖Ax-b‖}_{2}$. Let $\stackrel{^}{x}$ be the solution computed using one of the methods described above. We discuss the most common case, where $A$ is overdetermined (i.e., has more rows than columns) and has full rank.
Then the computed solution $\stackrel{^}{x}$ has a small normwise backward error. In other words $\stackrel{^}{x}$ minimizes ${‖\left(A+E\right)\stackrel{^}{x}-\left(b+f\right)‖}_{2}$, where
 $\max \left( \frac{ \lVert E \rVert_2 }{ \lVert A \rVert_2 } , \frac{ \lVert f \rVert_2 }{ \lVert b \rVert_2 } \right) \le p\left(n\right) \epsilon$
and $p\left(n\right)$ is a modestly growing function of $n$ and $\epsilon$ is the machine precision. Let ${\kappa }_{2}\left(A\right)={\sigma }_{\mathrm{max}}\left(A\right)/{\sigma }_{\mathrm{min}}\left(A\right)$, $\rho ={‖Ax-b‖}_{2}$, and $\mathrm{sin}\left(\theta \right)=\rho /{‖b‖}_{2}$. Then if $p\left(n\right)\epsilon$ is small enough, the error $\stackrel{^}{x}-x$ is bounded by
 $\frac{ \lVert x - \hat{x} \rVert_2 }{ \lVert x \rVert_2 } \lesssim p\left(n\right) \epsilon \left\{ \frac{ 2 \kappa_2\left(A\right) }{ \cos\left(\theta\right) } + \tan\left(\theta\right) \, \kappa_2^2\left(A\right) \right\} .$
If $A$ is rank-deficient, the problem can be regularized by treating all singular values less than a user-specified threshold as exactly zero. See Golub and Van Loan (2012) for error bounds in this case, as well as for the underdetermined case.
The solution of the overdetermined, full-rank problem may also be characterised as the solution of the linear system of equations
 $\begin{pmatrix} I & A \\ A^{\mathrm{T}} & 0 \end{pmatrix} \begin{pmatrix} r \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix} .$
By solving this linear system (see Chapter F07) component-wise error bounds can also be obtained (see Arioli et al. (1989)).
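A NumPy sketch of this equivalence: solving the augmented system reproduces both the least squares solution $x$ and the residual $r=b-Ax$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 7, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Augmented system [I A; A^T 0] [r; x] = [b; 0]
K = np.block([[np.eye(m), A],
              [A.T, np.zeros((n, n))]])
rx = np.linalg.solve(K, np.concatenate([b, np.zeros(n)]))
r, x = rx[:m], rx[m:]

# Agrees with the least squares solution and its residual
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
err = np.linalg.norm(x - x_ls)
res_err = np.linalg.norm(r - (b - A @ x_ls))
```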

#### 2.14.2The singular value decomposition

The usual error analysis of the SVD algorithm is as follows (see Golub and Van Loan (2012)).
The computed SVD, $\stackrel{^}{U}\stackrel{^}{\Sigma }{\stackrel{^}{V}}^{\mathrm{T}}$, is nearly the exact SVD of $A+E$, i.e., $A+E=\left(\stackrel{^}{U}+\delta \stackrel{^}{U}\right)\stackrel{^}{\Sigma }{\left(\stackrel{^}{V}+\delta \stackrel{^}{V}\right)}^{\mathrm{T}}$ is the true SVD, so that $\stackrel{^}{U}+\delta \stackrel{^}{U}$ and $\stackrel{^}{V}+\delta \stackrel{^}{V}$ are both orthogonal, where ${‖E‖}_{2}/{‖A‖}_{2}\le p\left(m,n\right)\epsilon$, $‖\delta \stackrel{^}{U}‖\le p\left(m,n\right)\epsilon$, and $‖\delta \stackrel{^}{V}‖\le p\left(m,n\right)\epsilon$. Here $p\left(m,n\right)$ is a modestly growing function of $m$ and $n$ and $\epsilon$ is the machine precision. Each computed singular value ${\stackrel{^}{\sigma }}_{i}$ differs from the true ${\sigma }_{i}$ by an amount satisfying the bound
 $|σ^i-σi|≤p(m,n)εσ1.$
Thus large singular values (those near ${\sigma }_{1}$) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed left singular vector ${\stackrel{^}{u}}_{i}$ and the true ${u}_{i}$ satisfies the approximate bound
 $\theta \left( \hat{u}_i , u_i \right) \lesssim \frac{ p\left(m,n\right) \epsilon \lVert A \rVert_2 }{ \mathit{gap}_i }$
where
 $\mathit{gap}_i = \min_{j \ne i} \left| \sigma_i - \sigma_j \right|$
is the absolute gap between ${\sigma }_{i}$ and the nearest other singular value. Thus, if ${\sigma }_{i}$ is close to other singular values, its corresponding singular vector ${u}_{i}$ may be inaccurate. The same bound applies to the computed right singular vector ${\stackrel{^}{v}}_{i}$ and the true vector ${v}_{i}$. The gaps may be easily obtained from the computed singular values.
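For example, both the backward stability of the SVD and the gaps are easy to examine with NumPy (an illustrative sketch, not the NAG interface):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 4))

# Computed SVD is nearly the exact SVD of A: tiny reconstruction error
U, s, Vt = np.linalg.svd(A, full_matrices=False)
resid = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)

# Absolute gaps: gap_i = min_{j != i} |s_i - s_j|
gaps = np.array([np.min(np.abs(np.delete(s, i) - s[i])) for i in range(len(s))])
```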
Let $\stackrel{^}{\mathbit{S}}$ be the space spanned by a collection of computed left singular vectors $\left\{{\stackrel{^}{u}}_{i},i\in \mathbit{I}\right\}$, where $\mathbit{I}$ is a subset of the integers from $1$ to $n$. Let $\mathbit{S}$ be the corresponding true space. Then
 $\theta \left( \hat{\mathbit{S}} , \mathbit{S} \right) \lesssim \frac{ p\left(m,n\right) \epsilon \lVert A \rVert_2 }{ \mathit{gap}_{\mathbit{I}} } ,$
where
 $\mathit{gap}_{\mathbit{I}} = \min \left\{ \left| \sigma_i - \sigma_j \right| \text{ for } i \in \mathbit{I} , j \notin \mathbit{I} \right\}$
is the absolute gap between the singular values in $\mathbit{I}$ and the nearest other singular value. Thus, a cluster of close singular values which is far away from any other singular value may have a well determined space $\stackrel{^}{\mathbit{S}}$ even if its individual singular vectors are ill-conditioned. The same bound applies to a set of right singular vectors $\left\{{\stackrel{^}{v}}_{i},i\in \mathbit{I}\right\}$.
In the special case of bidiagonal matrices, the singular values and singular vectors may be computed much more accurately (see Demmel and Kahan (1990)). A bidiagonal matrix $B$ has nonzero entries only on the main diagonal and the diagonal immediately above it (or immediately below it). Reduction of a dense matrix to bidiagonal form $B$ can introduce additional errors, so the following bounds for the bidiagonal case do not apply to the dense case.
Using the functions in this chapter, each computed singular value of a bidiagonal matrix is accurate to nearly full relative accuracy, no matter how tiny it is, so that
 $|σ^i-σi|≤p(m,n)εσi.$
The computed left singular vector ${\stackrel{^}{u}}_{i}$ has an angular error at most about
 $\theta \left( \hat{u}_i , u_i \right) \lesssim \frac{ p\left(m,n\right) \epsilon }{ \mathit{relgap}_i }$
where
 $\mathit{relgap}_i = \min_{j \ne i} \left| \sigma_i - \sigma_j \right| / \left( \sigma_i + \sigma_j \right)$
is the relative gap between ${\sigma }_{i}$ and the nearest other singular value. The same bound applies to the right singular vector ${\stackrel{^}{v}}_{i}$ and ${v}_{i}$. Since the relative gap may be much larger than the absolute gap, this error bound may be much smaller than the previous one. The relative gaps may be easily obtained from the computed singular values.

#### 2.14.3The symmetric eigenproblem

The usual error analysis of the symmetric eigenproblem is as follows (see Parlett (1998)).
The computed eigendecomposition $\stackrel{^}{Z}\stackrel{^}{\Lambda }{\stackrel{^}{Z}}^{\mathrm{T}}$ is nearly the exact eigendecomposition of $A+E$, i.e., $A+E=\left(\stackrel{^}{Z}+\delta \stackrel{^}{Z}\right)\stackrel{^}{\Lambda }{\left(\stackrel{^}{Z}+\delta \stackrel{^}{Z}\right)}^{\mathrm{T}}$ is the true eigendecomposition so that $\stackrel{^}{Z}+\delta \stackrel{^}{Z}$ is orthogonal, where ${‖E‖}_{2}/{‖A‖}_{2}\le p\left(n\right)\epsilon$ and ${‖\delta \stackrel{^}{Z}‖}_{2}\le p\left(n\right)\epsilon$ and $p\left(n\right)$ is a modestly growing function of $n$ and $\epsilon$ is the machine precision. Each computed eigenvalue ${\stackrel{^}{\lambda }}_{i}$ differs from the true ${\lambda }_{i}$ by an amount satisfying the bound
 $|λ^i-λi|≤p(n)ε‖A‖2.$
Thus large eigenvalues (those near $\underset{i}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}|{\lambda }_{i}|={‖A‖}_{2}$) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed unit eigenvector ${\stackrel{^}{z}}_{i}$ and the true ${z}_{i}$ satisfies the approximate bound
 $\theta \left( \hat{z}_i , z_i \right) \lesssim \frac{ p\left(n\right) \epsilon \lVert A \rVert_2 }{ \mathit{gap}_i }$
if $p\left(n\right)\epsilon$ is small enough, where
 $\mathit{gap}_i = \min_{j \ne i} \left| \lambda_i - \lambda_j \right|$
is the absolute gap between ${\lambda }_{i}$ and the nearest other eigenvalue. Thus, if ${\lambda }_{i}$ is close to other eigenvalues, its corresponding eigenvector ${z}_{i}$ may be inaccurate. The gaps may be easily obtained from the computed eigenvalues.
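These quantities can be examined with NumPy (an illustrative sketch; the residual below is the usual check that the computed eigendecomposition is backward stable):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                          # symmetric

lam, Z = np.linalg.eigh(A)

# Backward-stability check: ||A Z - Z diag(lam)|| / ||A||_2 ~ machine eps
resid = np.linalg.norm(A @ Z - Z * lam) / np.linalg.norm(A, 2)
orth = np.linalg.norm(Z.T @ Z - np.eye(n))

# Absolute gaps gap_i = min_{j != i} |lam_i - lam_j|
gaps = np.array([np.min(np.abs(np.delete(lam, i) - lam[i])) for i in range(n)])
```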
Let $\stackrel{^}{\mathbit{S}}$ be the invariant subspace spanned by a collection of eigenvectors $\left\{{\stackrel{^}{z}}_{i},i\in \mathbit{I}\right\}$, where $\mathbit{I}$ is a subset of the integers from $1$ to $n$. Let $\mathbit{S}$ be the corresponding true subspace. Then
 $\theta \left( \hat{\mathbit{S}} , \mathbit{S} \right) \lesssim \frac{ p\left(n\right) \epsilon \lVert A \rVert_2 }{ \mathit{gap}_{\mathbit{I}} }$
where
 $\mathit{gap}_{\mathbit{I}} = \min \left\{ \left| \lambda_i - \lambda_j \right| \text{ for } i \in \mathbit{I} , j \notin \mathbit{I} \right\}$
is the absolute gap between the eigenvalues in $\mathbit{I}$ and the nearest other eigenvalue. Thus, a cluster of close eigenvalues which is far away from any other eigenvalue may have a well determined invariant subspace $\stackrel{^}{\mathbit{S}}$ even if its individual eigenvectors are ill-conditioned.
In the special case of a real symmetric tridiagonal matrix $T$, functions in this chapter can compute the eigenvalues and eigenvectors much more accurately. See Anderson et al. (1999) for further details.
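A sketch of the tridiagonal case using SciPy (which, like the NAG functions, works directly from the diagonal and off-diagonal; the second-difference matrix is used here because its eigenvalues are known in closed form):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 8
d = 2.0 * np.ones(n)        # diagonal of tridiag(-1, 2, -1)
e = -1.0 * np.ones(n - 1)   # off-diagonal

lam, Z = eigh_tridiagonal(d, e)

# Known eigenvalues: 2 - 2*cos(k*pi/(n+1)), k = 1..n (already increasing)
k = np.arange(1, n + 1)
lam_exact = 2.0 - 2.0 * np.cos(k * np.pi / (n + 1))
err = np.max(np.abs(lam - lam_exact))
```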

#### 2.14.4The generalized symmetric-definite eigenproblem

The three types of problem to be considered are $A-\lambda B$, $AB-\lambda I$ and $BA-\lambda I$. In each case $A$ and $B$ are real symmetric (or complex Hermitian) and $B$ is positive definite. We consider each case in turn, assuming that functions in this chapter are used to transform the generalized problem to the standard symmetric problem, followed by the solution of the symmetric problem. In all cases
 $\mathit{gap}_i = \min_{j \ne i} \left| \lambda_i - \lambda_j \right|$
is the absolute gap between ${\lambda }_{i}$ and the nearest other eigenvalue.
1. 1.$A-\lambda B$. The computed eigenvalues ${\stackrel{^}{\lambda }}_{i}$ can differ from the true eigenvalues ${\lambda }_{i}$ by an amount
 $|λ^i-λi|≲p(n)ε‖B-1‖2‖A‖2.$
The angular difference between the computed eigenvector ${\stackrel{^}{z}}_{i}$ and the true eigenvector ${z}_{i}$ is
 $\theta \left( \hat{z}_i , z_i \right) \lesssim \frac{ p\left(n\right) \epsilon \lVert B^{-1} \rVert_2 \lVert A \rVert_2 \left( \kappa_2\left(B\right) \right)^{1/2} }{ \mathit{gap}_i } .$
2. 2.$AB-\lambda I$ or $BA-\lambda I$. The computed eigenvalues ${\stackrel{^}{\lambda }}_{i}$ can differ from the true eigenvalues ${\lambda }_{i}$ by an amount
 $|λ^i-λi|≲p(n)ε‖B‖2‖A‖2.$
The angular difference between the computed eigenvector ${\stackrel{^}{z}}_{i}$ and the true eigenvector ${z}_{i}$ is
 $\theta \left( \hat{z}_i , z_i \right) \lesssim \frac{ p\left(n\right) \epsilon \lVert B \rVert_2 \lVert A \rVert_2 \left( \kappa_2\left(B\right) \right)^{1/2} }{ \mathit{gap}_i } .$
These error bounds are large when $B$ is ill-conditioned with respect to inversion (${\kappa }_{2}\left(B\right)$ is large). It is often the case that the eigenvalues and eigenvectors are much better conditioned than indicated here. One way to get tighter bounds is effective when the diagonal entries of $B$ differ widely in magnitude, as for example with a graded matrix.
1. 1.$A-\lambda B$. Let $D=\mathrm{diag}\left({b}_{11}^{-1/2},\dots ,{b}_{nn}^{-1/2}\right)$ be a diagonal matrix. Then replace $B$ by $DBD$ and $A$ by $DAD$ in the above bounds.
2. 2.$AB-\lambda I$ or $BA-\lambda I$. Let $D=\mathrm{diag}\left({b}_{11}^{-1/2},\dots ,{b}_{nn}^{-1/2}\right)$ be a diagonal matrix. Then replace $B$ by $DBD$ and $A$ by ${D}^{-1}A{D}^{-1}$ in the above bounds.
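The key fact behind this rescaling, that the diagonal congruence leaves the eigenvalues of the pencil unchanged, can be verified with a SciPy sketch (an illustration only; the NAG functions do not require this step):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n = 5
M = rng.standard_normal((n, n))
A = M + M.T
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)                # positive definite

lam = eigh(A, B, eigvals_only=True)        # type 1: A z = lambda B z

# D = diag(b_ii^{-1/2}); (DAD) - lambda (DBD) = D (A - lambda B) D,
# so the eigenvalues of the scaled pencil are identical
D = np.diag(1.0 / np.sqrt(np.diag(B)))
lam_scaled = eigh(D @ A @ D, D @ B @ D, eigvals_only=True)
err = np.max(np.abs(lam - lam_scaled))
```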
Further details can be found in Anderson et al. (1999).

#### 2.14.5The nonsymmetric eigenproblem

The nonsymmetric eigenvalue problem is more complicated than the symmetric eigenvalue problem. In this section, we just summarise the bounds. Further details can be found in Anderson et al. (1999).
We let ${\stackrel{^}{\lambda }}_{i}$ be the $i$th computed eigenvalue and ${\lambda }_{i}$ the $i$th true eigenvalue. Let ${\stackrel{^}{v}}_{i}$ be the corresponding computed right eigenvector, and ${v}_{i}$ the true right eigenvector (so $A{v}_{i}={\lambda }_{i}{v}_{i}$). If $\mathbit{I}$ is a subset of the integers from $1$ to $n$, we let ${\lambda }_{\mathbit{I}}$ denote the average of the selected eigenvalues: ${\lambda }_{\mathbit{I}}=\left(\sum _{i\in \mathbit{I}}\phantom{\rule{0.25em}{0ex}}{\lambda }_{i}\right)/\left(\sum _{i\in \mathbit{I}}\phantom{\rule{0.25em}{0ex}}1\right)$, and similarly for ${\stackrel{^}{\lambda }}_{\mathbit{I}}$. We also let ${\mathbit{S}}_{\mathbit{I}}$ denote the subspace spanned by $\left\{{v}_{i},i\in \mathbit{I}\right\}$; it is called a right invariant subspace because if $v$ is any vector in ${\mathbit{S}}_{\mathbit{I}}$ then $Av$ is also in ${\mathbit{S}}_{\mathbit{I}}$. ${\stackrel{^}{\mathbit{S}}}_{\mathbit{I}}$ is the corresponding computed subspace.
The algorithms for the nonsymmetric eigenproblem are normwise backward stable: they compute the exact eigenvalues, eigenvectors and invariant subspaces of slightly perturbed matrices $\left(A+E\right)$, where $‖E‖\le p\left(n\right)\epsilon ‖A‖$. Some of the bounds are stated in terms of ${‖E‖}_{2}$ and others in terms of ${‖E‖}_{F}$; one may use $p\left(n\right)\epsilon$ for either quantity.
Functions are provided so that, for each $\left({\stackrel{^}{\lambda }}_{i},{\stackrel{^}{v}}_{i}\right)$ pair, the two values ${s}_{i}$ and ${\mathit{sep}}_{i}$, or for a selected subset $\mathbit{I}$ of eigenvalues the values ${s}_{\mathbit{I}}$ and ${\mathit{sep}}_{\mathbit{I}}$, can be obtained, for which the error bounds in Table 2 hold for sufficiently small $‖E‖$ (which is why they are called asymptotic):

| Quantity | Asymptotic error bound |
|---|---|
| Simple eigenvalue | $\lvert\hat{\lambda}_i-\lambda_i\rvert\lesssim \lVert E\rVert_2/s_i$ |
| Eigenvalue cluster | $\lvert\hat{\lambda}_{\mathbit{I}}-\lambda_{\mathbit{I}}\rvert\lesssim \lVert E\rVert_2/s_{\mathbit{I}}$ |
| Eigenvector | $\theta(\hat{v}_i,v_i)\lesssim \lVert E\rVert_F/\mathit{sep}_i$ |
| Invariant subspace | $\theta(\hat{S}_{\mathbit{I}},S_{\mathbit{I}})\lesssim \lVert E\rVert_F/\mathit{sep}_{\mathbit{I}}$ |
If the problem is ill-conditioned, the asymptotic bounds may only hold for extremely small $‖E‖$. The global error bounds of Table 3 are guaranteed to hold for all ${‖E‖}_{F}$:

| Quantity | Global error bound | Condition |
|---|---|---|
| Simple eigenvalue | $\lvert\hat{\lambda}_i-\lambda_i\rvert\le n\lVert E\rVert_2/s_i$ | Holds for all $E$ |
| Eigenvalue cluster | $\lvert\hat{\lambda}_{\mathbit{I}}-\lambda_{\mathbit{I}}\rvert\le 2\lVert E\rVert_2/s_{\mathbit{I}}$ | Requires $\lVert E\rVert_F<s_{\mathbit{I}}\,\mathit{sep}_{\mathbit{I}}/4$ |
| Eigenvector | $\theta(\hat{v}_i,v_i)\le\arctan\left(2\lVert E\rVert_F/(\mathit{sep}_i-4\lVert E\rVert_F/s_i)\right)$ | Requires $\lVert E\rVert_F<s_i\,\mathit{sep}_i/4$ |
| Invariant subspace | $\theta(\hat{S}_{\mathbit{I}},S_{\mathbit{I}})\le\arctan\left(2\lVert E\rVert_F/(\mathit{sep}_{\mathbit{I}}-4\lVert E\rVert_F/s_{\mathbit{I}})\right)$ | Requires $\lVert E\rVert_F<s_{\mathbit{I}}\,\mathit{sep}_{\mathbit{I}}/4$ |
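The reciprocal condition number $s_i$ of a simple eigenvalue is $\lvert y_i^{\mathrm{H}} x_i\rvert$ for unit-norm right ($x_i$) and left ($y_i$) eigenvectors. The expert drivers return estimates of these quantities; the SciPy sketch below computes $s_i$ directly for a small nonnormal matrix whose eigenvalues are sensitive (an illustration, not the NAG interface):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 1e3],
              [0.0, 2.0]])   # nonnormal: large off-diagonal entry

# Left (Y) and right (X) eigenvectors, columnwise
lam, Y, X = eig(A, left=True, right=True)

# s_i = |y_i^H x_i| / (||y_i|| ||x_i||); small s_i means an
# ill-conditioned eigenvalue (condition number ~ 1/s_i)
s = np.abs(np.sum(Y.conj() * X, axis=0)) / (
    np.linalg.norm(Y, axis=0) * np.linalg.norm(X, axis=0))
```

For this matrix both eigenvalues have $s_i\approx 10^{-3}$, so perturbations of size $\lVert E\rVert_2$ can move them by roughly $10^{3}\lVert E\rVert_2$.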

#### 2.14.6Balancing and condition for the nonsymmetric eigenproblem

There are two preprocessing steps one may perform on a matrix $A$ in order to make its eigenproblem easier. The first is permutation, or reordering the rows and columns to make $A$ more nearly upper triangular (closer to Schur form): ${A}^{\prime }=PA{P}^{\mathrm{T}}$, where $P$ is a permutation matrix. If ${A}^{\prime }$ is permutable to upper triangular form (or close to it), then no floating-point operations (or very few) are needed to reduce it to Schur form. The second is scaling by a diagonal matrix $D$ to make the rows and columns of ${A}^{\prime }$ more nearly equal in norm: ${A}^{\prime \prime }=D{A}^{\prime }{D}^{-1}$. Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter 11 of Wilkinson and Reinsch (1971)). We refer to these two operations as balancing.
Permuting has no effect on the condition numbers or their interpretation as described previously. Scaling, however, does change their interpretation and further details can be found in Anderson et al. (1999).
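Balancing by a diagonal similarity can be sketched with SciPy's `matrix_balance` (an illustration; the NAG driver functions offer balancing through their own options). Since it is a similarity transformation, the eigenvalues are unchanged:

```python
import numpy as np
from scipy.linalg import matrix_balance

A = np.array([[1.0, 0.0, 1e-4],
              [1.0, 1.0, 1e-2],
              [1e4, 1e2, 1.0]])   # badly scaled rows/columns

# B is similar to A via a diagonal (and possibly permuted) matrix T,
# with rows and columns of B of comparable norm
B, T = matrix_balance(A)

lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_B = np.sort_complex(np.linalg.eigvals(B))
rel_err = np.max(np.abs(lam_A - lam_B)) / np.max(np.abs(lam_A))
```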

#### 2.14.7The generalized nonsymmetric eigenvalue problem

The algorithms for the generalized nonsymmetric eigenvalue problem are normwise backward stable: they compute the exact eigenvalues (as the pairs $\left(\alpha ,\beta \right)$), eigenvectors and deflating subspaces of slightly perturbed pairs $\left(A+E,B+F\right)$, where
 $\lVert \left( E , F \right) \rVert_F \le p\left(n\right) \epsilon \lVert \left( A , B \right) \rVert_F .$
Asymptotic and global error bounds can be obtained, which are generalizations of those given in Tables 2 and 3. See Section 4.11 of Anderson et al. (1999) for details. Functions are provided to compute estimates of reciprocal condition numbers for eigenvalues and eigenspaces.

#### 2.14.8Balancing the generalized eigenvalue problem

As with the standard nonsymmetric eigenvalue problem, there are two preprocessing steps one may perform on a matrix pair $\left(A,B\right)$ in order to make its eigenproblem easier; permutation and scaling, which together are referred to as balancing, as indicated in the following two steps.
1. 1.The balancing function first attempts to permute $A$ and $B$ to block upper triangular form by a similarity transformation:
 $P A P^{\mathrm{T}} = F = \begin{pmatrix} F_{11} & F_{12} & F_{13} \\ & F_{22} & F_{23} \\ & & F_{33} \end{pmatrix} , \quad P B P^{\mathrm{T}} = G = \begin{pmatrix} G_{11} & G_{12} & G_{13} \\ & G_{22} & G_{23} \\ & & G_{33} \end{pmatrix} ,$
where $P$ is a permutation matrix, and ${F}_{11}$, ${F}_{33}$, ${G}_{11}$ and ${G}_{33}$ are upper triangular. The ratios of the diagonal elements of the pairs $\left({F}_{11},{G}_{11}\right)$ and $\left({F}_{33},{G}_{33}\right)$ are generalized eigenvalues of $\left(A,B\right)$. The rest of the generalized eigenvalues are given by the matrix pair $\left({F}_{22},{G}_{22}\right)$. Subsequent operations to compute the eigenvalues of $\left(A,B\right)$ need only be applied to the pair $\left({F}_{22},{G}_{22}\right)$; this can save a significant amount of work if $\left({F}_{22},{G}_{22}\right)$ is smaller than the original matrix pair $\left(A,B\right)$. If no suitable permutation exists (as is often the case), then there is no gain in efficiency or accuracy.
2. 2.The balancing function applies a diagonal similarity transformation to $\left(F,G\right)$, to make the rows and columns of $\left({F}_{22},{G}_{22}\right)$ as close in norm as possible:
 $D F D^{-1} = \begin{pmatrix} I & & \\ & D_{22} & \\ & & I \end{pmatrix} F \begin{pmatrix} I & & \\ & D_{22}^{-1} & \\ & & I \end{pmatrix} , \quad D G D^{-1} = \begin{pmatrix} I & & \\ & D_{22} & \\ & & I \end{pmatrix} G \begin{pmatrix} I & & \\ & D_{22}^{-1} & \\ & & I \end{pmatrix} .$
This transformation usually improves the accuracy of computed generalized eigenvalues and eigenvectors. However, there are exceptional occasions when this transformation increases the norm of the pencil; in this case accuracy could be lower with diagonal balancing.
See Anderson et al. (1999) for further details.

#### 2.14.9Other problems

Error bounds for other problems such as the generalized linear least squares problem and generalized singular value decomposition can be found in Anderson et al. (1999).

### 2.15Block Partitioned Algorithms

A number of the functions in this chapter use what is termed a block partitioned algorithm. This means that at each major step of the algorithm a block of rows or columns is updated, and much of the computation is performed by matrix-matrix operations on these blocks. These matrix-matrix operations make efficient use of computer memory and are key to achieving high performance. See Golub and Van Loan (2012) or Anderson et al. (1999) for more about block partitioned algorithms.
The performance of a block partitioned algorithm varies to some extent with the block size – that is, the number of rows or columns per block. This is a machine-dependent constant, which is set to a suitable value when the Library is implemented on each range of machines.

## 3Recommendations on Choice and Use of Available Functions

### 3.1Available Functions

The tables in the following sub-sections show the functions which are provided for performing different computations on different types of matrices. Each entry in the table gives the NAG function short name.
Black Box (or driver) functions are provided for the solution of most problems. In a number of cases there are simple drivers, which just return the solution to the problem, as well as expert drivers, which return additional information, such as condition number estimates, and may offer additional facilities such as balancing. The following sub-sections give tables for the driver functions.

#### 3.1.1Driver functions

##### 3.1.1.1Linear least squares problems (LLS)
| Operation | real | complex |
|---|---|---|
| solve LLS using $QR$ or $LQ$ factorization | f08aac | f08anc |
| solve LLS using complete orthogonal factorization | f08bac | f08bnc |
| solve LLS using SVD | f08kac | f08knc |
| solve LLS using divide-and-conquer SVD | f08kcc | f08kqc |
##### 3.1.1.2Generalized linear least squares problems (LSE and GLM)
| Operation | real | complex |
|---|---|---|
| solve LSE problem using GRQ | f08zac | f08znc |
| solve GLM problem using GQR | f08zbc | f08zpc |
##### 3.1.1.3Symmetric eigenvalue problems (SEP)
| Function and storage scheme | real | complex |
|---|---|---|
| simple driver | f08fac | f08fnc |
| divide-and-conquer driver | f08fcc | f08fqc |
| expert driver | f08fbc | f08fpc |
| RRR driver | f08fdc | f08frc |
| **packed storage** | | |
| simple driver | f08gac | f08gnc |
| divide-and-conquer driver | f08gcc | f08gqc |
| expert driver | f08gbc | f08gpc |
| **band matrix** | | |
| simple driver | f08hac | f08hnc |
| divide-and-conquer driver | f08hcc | f08hqc |
| expert driver | f08hbc | f08hpc |
| **tridiagonal matrix** | | |
| simple driver | f08jac | |
| divide-and-conquer driver | f08jcc | |
| expert driver | f08jbc | |
| RRR driver | f08jdc | |
##### 3.1.1.4Nonsymmetric eigenvalue problem (NEP)
| Function and storage scheme | real | complex |
|---|---|---|
| simple driver for Schur factorization | f08pac | f08pnc |
| expert driver for Schur factorization | f08pbc | f08ppc |
| simple driver for eigenvalues/vectors | f08nac | f08nnc |
| expert driver for eigenvalues/vectors | f08nbc | f08npc |
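A sketch of the corresponding computation (again with NumPy as a stand-in): `numpy.linalg.eig` wraps the LAPACK driver family (geev) underlying the simple driver f08nac:

```python
import numpy as np

# Nonsymmetric eigenproblem: eigenvalues and right eigenvectors of a
# general real matrix, via the LAPACK geev driver family (cf. f08nac).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # characteristic polynomial l^2 + 3l + 2
w, V = np.linalg.eig(A)         # eigenvalues -1 and -2
```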
##### 3.1.1.5Singular value decomposition (SVD)
| Function and storage scheme | real | complex |
|---|---|---|
| simple driver | f08kbc | f08kpc |
| divide-and-conquer driver | f08kdc | f08krc |
| expert driver | f08kmc | f08kzc |
| simple driver for one-sided Jacobi SVD | f08kjc | f08kwc |
| expert driver for one-sided Jacobi SVD | f08khc | f08kvc |
##### 3.1.1.6Generalized symmetric definite eigenvalue problems (GSEP)
| Function and storage scheme | real | complex |
|---|---|---|
| simple driver | f08sac | f08snc |
| divide-and-conquer driver | f08scc | f08sqc |
| expert driver | f08sbc | f08spc |
| **packed storage** | | |
| simple driver | f08tac | f08tnc |
| divide-and-conquer driver | f08tcc | f08tqc |
| expert driver | f08tbc | f08tpc |
| **band matrix** | | |
| simple driver | f08uac | f08unc |
| divide-and-conquer driver | f08ucc | f08uqc |
| expert driver | f08ubc | f08upc |
##### 3.1.1.7Generalized nonsymmetric eigenvalue problem (GNEP)
| Function and storage scheme | real | complex |
|---|---|---|
| simple driver for Schur factorization | f08xcc | f08xqc |
| expert driver for Schur factorization | f08xbc | f08xpc |
| simple driver for eigenvalues/vectors | f08wcc | f08wqc |
| expert driver for eigenvalues/vectors | f08wbc | f08wpc |
##### 3.1.1.8Generalized singular value decomposition (GSVD)
| Function and storage scheme | real | complex |
|---|---|---|
| singular values/vectors | f08vcc | f08vqc |

#### 3.1.2Computational functions

It is possible to solve problems by calling two or more functions in sequence. Some common sequences of functions are indicated in the tables in the following sub-sections; an asterisk ($*$) against a function name means that the sequence of calls is illustrated in the example program for that function.
##### 3.1.2.1Orthogonal factorizations
Functions are provided for $QR$ factorization (with and without column pivoting), and for $LQ$, $QL$ and $RQ$ factorizations (without pivoting only), of a general real or complex rectangular matrix. A function is also provided for the $RQ$ factorization of a real or complex upper trapezoidal matrix. (LAPACK refers to this as the $RZ$ factorization.)
The factorization functions do not form the matrix $Q$ explicitly, but represent it as a product of elementary reflectors (see Section 3.4.6). Additional functions are provided to generate all or part of $Q$ explicitly if it is required, or to apply $Q$ in its factored form to another matrix (specifically to compute one of the matrix products $QC$, ${Q}^{\mathrm{T}}C$, $CQ$ or $C{Q}^{\mathrm{T}}$ with ${Q}^{\mathrm{T}}$ replaced by ${Q}^{\mathrm{H}}$ if $C$ and $Q$ are complex).
| | Factorize without pivoting | Factorize with pivoting | Factorize (blocked) | Generate matrix $Q$ | Apply matrix $Q$ | Apply $Q$ (blocked) |
|---|---|---|---|---|---|---|
| $QR$ factorization, real matrices | f08aec | f08bfc | f08abc | f08afc | f08agc | f08acc |
| $QR$ factorization, real triangular-pentagonal | | | f08bbc | | | f08bcc |
| $LQ$ factorization, real matrices | f08ahc | | | f08ajc | f08akc | |
| $QL$ factorization, real matrices | f08cec | | | f08cfc | f08cgc | |
| $RQ$ factorization, real matrices | f08chc | | | f08cjc | f08ckc | |
| $RQ$ factorization, real upper trapezoidal matrices | f08bhc | | | | f08bkc | |
| $QR$ factorization, complex matrices | f08asc | f08btc | f08apc | f08atc | f08auc | f08aqc |
| $QR$ factorization, complex triangular-pentagonal | | | f08bpc | | | f08bqc |
| $LQ$ factorization, complex matrices | f08avc | | | f08awc | f08axc | |
| $QL$ factorization, complex matrices | f08csc | | | f08ctc | f08cuc | |
| $RQ$ factorization, complex matrices | f08cvc | | | f08cwc | f08cxc | |
| $RQ$ factorization, complex upper trapezoidal matrices | f08bvc | | | | f08bxc | |
To solve linear least squares problems, as described in Sections 2.2.1 or 2.2.3, functions based on the $QR$ factorization can be used:
• real data, full-rank problem: f08aac, f08aec and f08agc, f08abc and f08acc, f16yjc
• complex data, full-rank problem: f08anc, f08asc and f08auc, f08apc and f08aqc, f16zjc
• real data, rank-deficient problem: f08bfc*, f16yjc, f08agc
• complex data, rank-deficient problem: f08btc*, f16zjc, f08auc
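The full-rank sequence above (factorize, apply $Q^{\mathrm{T}}$, triangular solve) can be sketched as follows; NumPy is assumed as a stand-in for the NAG calls, and `np.linalg.solve` stands in for a dedicated triangular solver such as f16yjc:

```python
import numpy as np

# Full-rank least squares via the QR route:
# factorize (cf. f08aec), apply Q^T to b (cf. f08agc), then solve the
# triangular system R x = Q^T b (cf. f16yjc).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
Q, R = np.linalg.qr(A, mode='reduced')   # A = Q R, R upper triangular 3 x 3
c = Q.T @ b                              # apply Q^T to the right-hand side
x = np.linalg.solve(R, c)                # back-substitution on triangular R
```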
To find the minimum norm solution of underdetermined systems of linear equations, as described in Section 2.2.2, functions based on the $LQ$ factorization can be used:
• real data, full-rank problem: f08ahc*, f16yjc, f08akc
• complex data, full-rank problem: f08avc*, f16zjc, f08axc
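The minimum norm computation can be sketched in NumPy (a stand-in, not the NAG calls) by noting that an $LQ$ factorization of $A$ is the transpose of a $QR$ factorization of ${A}^{\mathrm{T}}$:

```python
import numpy as np

# Minimum norm solution of an underdetermined system A x = b (m < n),
# mimicking the LQ-based sequence above via QR of the transpose.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)
Q, R = np.linalg.qr(A.T, mode='reduced')  # A = L Q^T with L = R^T lower triangular
y = np.linalg.solve(R.T, b)               # forward substitution: L y = b
x = Q @ y                                 # x lies in the row space: minimum norm
```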
##### 3.1.2.2Generalized orthogonal factorizations
Functions are provided for the generalized $QR$ and $RQ$ factorizations of real and complex matrix pairs.
Factorize
Generalized $QR$ factorization, real matrices f08zec
Generalized $RQ$ factorization, real matrices f08zfc
Generalized $QR$ factorization, complex matrices f08zsc
Generalized $RQ$ factorization, complex matrices f08ztc
##### 3.1.2.3Singular value problems
Functions are provided to reduce a general real or complex rectangular matrix $A$ to real bidiagonal form $B$ by an orthogonal transformation $A=QB{P}^{\mathrm{T}}$ (or by a unitary transformation $A=QB{P}^{\mathrm{H}}$ if $A$ is complex). Different functions allow a full matrix $A$ to be stored conventionally (see Section 3.4.1), or a band matrix to use band storage (see Section 3.4.4 in the F07 Chapter Introduction).
The functions for reducing full matrices do not form the matrix $Q$ or $P$ explicitly; additional functions are provided to generate all or part of them, or to apply them to another matrix, as with the functions for orthogonal factorizations. Explicit generation of $Q$ or $P$ is required before using the bidiagonal $QR$ algorithm to compute left or right singular vectors of $A$.
The functions for reducing band matrices have options to generate $Q$ or $P$ if required.
Further functions are provided to compute all or part of the singular value decomposition of a real bidiagonal matrix; the same functions can be used to compute the singular value decomposition of a real or complex matrix that has been reduced to bidiagonal form.
| | real | complex |
|---|---|---|
| Reduce to bidiagonal form | f08kec | f08ksc |
| Generate matrix $Q$ or ${P}^{\mathrm{T}}$ | f08kfc | f08ktc |
| Apply matrix $Q$ or $P$ | f08kgc | f08kuc |
| Reduce band matrix to bidiagonal form | f08lec | f08lsc |
| SVD of bidiagonal form ($QR$ algorithm) | f08mec | f08msc |
| SVD of bidiagonal form (divide and conquer) | f08mdc | |
| SVD of bidiagonal form (tridiagonal eigenproblem) | f08mbc | |
When $m\gg n$, the first stage should be preceded by a $QR$ factorization, with the remaining stages operating on the resultant $R$ matrix (see Section 3.1.2.1). The left singular vectors obtained must then be premultiplied by $Q$ to obtain the left singular vectors of the original matrix. Similarly, if $m\ll n$, an initial $LQ$ factorization should be performed and the right singular vectors post-multiplied by $Q$, with the above stages operating on the matrix $L$.
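The $QR$-first strategy for a tall thin matrix can be sketched as follows (NumPy is assumed as a stand-in for the NAG routines):

```python
import numpy as np

# SVD of a tall thin matrix (m >> n) via an initial QR factorization:
# take the SVD of the small factor R, then premultiply the left
# singular vectors by Q to recover those of A itself.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 4))
Q, R = np.linalg.qr(A, mode='reduced')
Ur, s, Vt = np.linalg.svd(R)     # SVD of the 4 x 4 triangular factor
U = Q @ Ur                       # left singular vectors of A itself
```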
Given the singular values, f08flc is provided to compute the reciprocal condition numbers for the left or right singular vectors of a real or complex matrix.
To compute the singular values and vectors of a rectangular matrix, as described in Section 2.3, use the following sequence of calls:
Rectangular matrix (standard storage)
• real matrix, singular values and vectors: f08kec, f08kfc*, f08mec
• complex matrix, singular values and vectors: f08ksc, f08ktc*, f08msc
Rectangular matrix (banded)
• real matrix, singular values and vectors: f08lec, f08kfc, f08mec
• complex matrix, singular values and vectors: f08lsc, f08ktc, f08msc
To use the singular value decomposition to solve a linear least squares problem, as described in Section 2.4, the following functions are required:
• real data: f16yac, f08kec, f08kfc, f08kgc, f08mec
• complex data: f16zac, f08ksc, f08ktc, f08kuc, f08msc
##### 3.1.2.4Generalized singular value decomposition
Functions are provided to compute the generalized SVD of a real or complex matrix pair $\left(A,B\right)$ in upper trapezoidal form. Functions are also provided to reduce a general real or complex matrix pair to the required upper trapezoidal form.
| | Reduce to trapezoidal form | Generalized SVD of trapezoidal form |
|---|---|---|
| real matrices | f08vgc | f08yec |
| complex matrices | f08vuc | f08ysc |
Functions are provided for the full CS decomposition of orthogonal and unitary matrices expressed as $2×2$ partitions of submatrices. For real orthogonal matrices the CS decomposition is performed by f08rac, while for unitary matrices the equivalent function is f08rnc.
##### 3.1.2.5Symmetric eigenvalue problems
Functions are provided to reduce a real symmetric or complex Hermitian matrix $A$ to real tridiagonal form $T$ by an orthogonal similarity transformation $A=QT{Q}^{\mathrm{T}}$ (or by a unitary transformation $A=QT{Q}^{\mathrm{H}}$ if $A$ is complex). Different functions allow a full matrix $A$ to be stored conventionally (see Section 3.4.1 in the F07 Chapter Introduction) or in packed storage (see Section 3.4.2 in the F07 Chapter Introduction); or a band matrix to use band storage (see Section 3.4.4 in the F07 Chapter Introduction).
The functions for reducing full matrices do not form the matrix $Q$ explicitly; additional functions are provided to generate $Q$, or to apply it to another matrix, as with the functions for orthogonal factorizations. Explicit generation of $Q$ is required before using the $QR$ algorithm to find all the eigenvectors of $A$; application of $Q$ to another matrix is required after eigenvectors of $T$ have been found by inverse iteration, in order to transform them to eigenvectors of $A$.
The functions for reducing band matrices have an option to generate $Q$ if required.
| | Reduce to tridiagonal form | Generate matrix $Q$ | Apply matrix $Q$ |
|---|---|---|---|
| real symmetric matrices | f08fec | f08ffc | f08fgc |
| real symmetric matrices (packed storage) | f08gec | f08gfc | f08ggc |
| real symmetric band matrices | f08hec | | |
| complex Hermitian matrices | f08fsc | f08ftc | f08fuc |
| complex Hermitian matrices (packed storage) | f08gsc | f08gtc | f08guc |
| complex Hermitian band matrices | f08hsc | | |
Given the eigenvalues, f08flc is provided to compute the reciprocal condition numbers for the eigenvectors of a real symmetric or complex Hermitian matrix.
A variety of functions are provided to compute eigenvalues and eigenvectors of the real symmetric tridiagonal matrix $T$, some computing all eigenvalues and eigenvectors, some computing selected eigenvalues and eigenvectors. The same functions can be used to compute eigenvalues and eigenvectors of a real symmetric or complex Hermitian matrix which has been reduced to tridiagonal form.
Eigenvalues and eigenvectors of real symmetric tridiagonal matrices:

The original (non-reduced) matrix is Real Symmetric or Complex Hermitian:
• all eigenvalues (root-free $QR$ algorithm): f08jfc
• all eigenvalues (root-free $QR$ algorithm called by divide-and-conquer): f08jcc or f08jhc
• selected eigenvalues (bisection): f08jjc
• selected eigenvalues (RRR): f08jlc

The original (non-reduced) matrix is Real Symmetric:
• all eigenvalues and eigenvectors ($QR$ algorithm): f08jec
• all eigenvalues and eigenvectors (divide-and-conquer): f08jcc or f08jhc
• all eigenvalues and eigenvectors (positive definite case): f08jgc
• selected eigenvectors (inverse iteration): f08jkc
• selected eigenvalues and eigenvectors (RRR): f08jlc

The original (non-reduced) matrix is Complex Hermitian:
• all eigenvalues and eigenvectors ($QR$ algorithm): f08jsc
• all eigenvalues and eigenvectors (divide and conquer): f08jvc
• all eigenvalues and eigenvectors (positive definite case): f08juc
• selected eigenvectors (inverse iteration): f08jxc
• selected eigenvalues and eigenvectors (RRR): f08jyc
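A tridiagonal eigensolve of this kind can be sketched with SciPy (an assumption: SciPy is used here as a stand-in for the NAG tridiagonal functions; `eigh_tridiagonal` wraps LAPACK tridiagonal eigensolvers of the same family as f08jjc/f08jlc), with the matrix held in the two-array storage of Section 3.3.4:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal  # assumes SciPy is installed

# Real symmetric tridiagonal eigenproblem, with the matrix held as its
# diagonal d and off-diagonal e (the two-array storage of Section 3.3.4).
d = np.array([2.0, 2.0, 2.0, 2.0])
e = np.array([-1.0, -1.0, -1.0])
w, Z = eigh_tridiagonal(d, e)            # all eigenvalues and eigenvectors
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)   # dense copy, for checking
```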
The following sequences of calls may be used to compute various combinations of eigenvalues and eigenvectors, as described in Section 2.7.
Sequences for computing eigenvalues and eigenvectors

Real Symmetric matrix (standard storage):
• all eigenvalues and eigenvectors (using divide-and-conquer): f08fcc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08fec, f08ffc*, f08jec
• selected eigenvalues and eigenvectors (bisection and inverse iteration): f08fec, f08fgc, f08jjc, f08jkc*
• selected eigenvalues and eigenvectors (RRR): f08fec, f08fgc, f08jlc

Real Symmetric matrix (packed storage):
• all eigenvalues and eigenvectors (using divide-and-conquer): f08gcc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08gec, f08gfc and f08jec
• selected eigenvalues and eigenvectors (bisection and inverse iteration): f08gec, f08ggc, f08jjc, f08jkc*
• selected eigenvalues and eigenvectors (RRR): f08gec, f08ggc, f08jlc

Real Symmetric banded matrix:
• all eigenvalues and eigenvectors (using divide-and-conquer): f08hcc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08hec*, f08jec

Complex Hermitian matrix (standard storage):
• all eigenvalues and eigenvectors (using divide-and-conquer): f08fqc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08fsc, f08ftc*, f08jsc
• selected eigenvalues and eigenvectors (bisection and inverse iteration): f08fsc, f08fuc, f08jjc, f08jxc*
• selected eigenvalues and eigenvectors (RRR): f08fsc, f08fuc, f08jyc

Complex Hermitian matrix (packed storage):
• all eigenvalues and eigenvectors (using divide-and-conquer): f08gqc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08gsc, f08gtc*, f08jsc
• selected eigenvalues and eigenvectors (bisection and inverse iteration): f08gsc, f08guc, f08jjc, f08jxc*
• selected eigenvalues and eigenvectors (RRR): f08gsc, f08guc and f08jyc

Complex Hermitian banded matrix:
• all eigenvalues and eigenvectors (using divide-and-conquer): f08hqc
• all eigenvalues and eigenvectors (using $QR$ algorithm): f08hsc*, f08jsc
##### 3.1.2.6Generalized symmetric-definite eigenvalue problems
Functions are provided for reducing each of the problems $Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$ to an equivalent standard eigenvalue problem $Cy=\lambda y$. Different functions allow the matrices to be stored either conventionally or in packed storage. The positive definite matrix $B$ must first be factorized using a function from Chapter F07. There is also a function which reduces the problem $Ax=\lambda Bx$ where $A$ and $B$ are banded, to an equivalent banded standard eigenvalue problem; this uses a split Cholesky factorization for which a function in Chapter F08 is provided.
| | Reduce to standard problem | Reduce to standard problem (packed storage) | Reduce to standard problem (band matrices) |
|---|---|---|---|
| real symmetric matrices | f08sec | f08tec | f08uec |
| complex Hermitian matrices | f08ssc | f08tsc | f08usc |
The equivalent standard problem can then be solved using the functions discussed in Section 3.1.2.5. For example, to compute all the eigenvalues, the following functions must be called:
• real symmetric-definite problem: f07fdc, f08sec*, f08fec, f08jfc
• real symmetric-definite problem, packed storage: f07gdc, f08tec*, f08gec, f08jfc
• real symmetric-definite banded problem: f08ufc*, f08uec*, f08hec, f08jfc
• complex Hermitian-definite problem: f07frc, f08ssc*, f08fsc, f08jfc
• complex Hermitian-definite problem, packed storage: f07grc, f08tsc*, f08gsc, f08jfc
• complex Hermitian-definite banded problem: f08utc*, f08usc*, f08hsc, f08jfc
If eigenvectors are computed, the eigenvectors of the equivalent standard problem must be transformed back to those of the original generalized problem, as indicated in Section 2.8; functions from Chapter F16 may be used for this.
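The reduction and back-transformation described above can be sketched in NumPy (a stand-in for the NAG calls, not the NAG functions themselves):

```python
import numpy as np

# Reduce A x = lambda B x (B positive definite) to a standard problem:
# factorize B = L L^T (cf. f07fdc), form C = inv(L) A inv(L)^T
# (cf. f08sec), solve the standard problem, then back-transform the
# eigenvectors as x = inv(L)^T y.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])              # symmetric positive definite
L = np.linalg.cholesky(B)               # B = L L^T
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T                   # equivalent standard problem
lam, Y = np.linalg.eigh(C)              # C y = lambda y
X = np.linalg.solve(L.T, Y)             # back-transform: x = L^{-T} y
```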
##### 3.1.2.7Nonsymmetric eigenvalue problems
Functions are provided to reduce a general real or complex matrix $A$ to upper Hessenberg form $H$ by an orthogonal similarity transformation $A=QH{Q}^{\mathrm{T}}$ (or by a unitary transformation $A=QH{Q}^{\mathrm{H}}$ if $A$ is complex).
These functions do not form the matrix $Q$ explicitly; additional functions are provided to generate $Q$, or to apply it to another matrix, as with the functions for orthogonal factorizations. Explicit generation of $Q$ is required before using the $QR$ algorithm on $H$ to compute the Schur vectors; application of $Q$ to another matrix is needed after eigenvectors of $H$ have been computed by inverse iteration, in order to transform them to eigenvectors of $A$.
Functions are also provided to balance the matrix before reducing it to Hessenberg form, as described in Section 2.14.6. Companion functions are required to transform Schur vectors or eigenvectors of the balanced matrix to those of the original matrix.
| | Reduce to Hessenberg form | Generate matrix $Q$ | Apply matrix $Q$ | Balance | Backtransform vectors after balancing |
|---|---|---|---|---|---|
| real matrices | f08nec | f08nfc | f08ngc | f08nhc | f08njc |
| complex matrices | f08nsc | f08ntc | f08nuc | f08nvc | f08nwc |
Functions are provided to compute the eigenvalues and all or part of the Schur factorization of an upper Hessenberg matrix. Eigenvectors may be computed either from the upper Hessenberg form by inverse iteration, or from the Schur form by back-substitution; these approaches are equally satisfactory for computing individual eigenvectors, but the latter may provide a more accurate basis for a subspace spanned by several eigenvectors.
Additional functions estimate the sensitivities of computed eigenvalues and eigenvectors, as discussed in Section 2.14.5.
| | Eigenvalues and Schur factorization ($QR$ algorithm) | Eigenvectors from Hessenberg form (inverse iteration) | Eigenvectors from Schur factorization | Sensitivities of eigenvalues and eigenvectors |
|---|---|---|---|---|
| real matrices | f08pec | f08pkc | f08qkc | f08qlc |
| complex matrices | f08psc | f08pxc | f08qxc | f08qyc |
Finally, functions are provided for reordering the Schur factorization, so that eigenvalues appear in any desired order on the diagonal of the Schur form. The functions f08qfc and f08qtc simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. The functions f08qgc and f08quc perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the invariant subspace corresponding to the specified cluster of eigenvalues. These functions can also compute the sensitivities of the cluster of eigenvalues and the invariant subspace.
| | Reorder Schur factorization | Reorder Schur factorization, find basis for invariant subspace and estimate sensitivities |
|---|---|---|
| real matrices | f08qfc | f08qgc |
| complex matrices | f08qtc | f08quc |
The following sequences of calls may be used to compute various combinations of eigenvalues, Schur vectors and eigenvectors, as described in Section 2.11:
• real matrix, all eigenvalues and Schur factorization: f08nec, f08nfc*, f08pec
• real matrix, all eigenvalues and selected eigenvectors: f08nec, f08ngc, f08pec, f08pkc
• real matrix, all eigenvalues and eigenvectors (with balancing): f08nhc*, f08nec, f08nfc, f08njc, f08pec, f08pkc
• complex matrix, all eigenvalues and Schur factorization: f08nsc, f08ntc*, f08psc
• complex matrix, all eigenvalues and selected eigenvectors: f08nsc, f08nuc, f08psc, f08pxc*
• complex matrix, all eigenvalues and eigenvectors (with balancing): f08nvc*, f08nsc, f08ntc, f08nwc, f08psc, f08pxc
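The end result of the Schur sequence can be sketched with SciPy (an assumption: SciPy stands in here for the NAG functions; its `schur` wraps the corresponding LAPACK reduction and $QR$ iteration path):

```python
import numpy as np
from scipy.linalg import schur  # assumes SciPy is installed

# Real Schur factorization A = Z T Z^T with T upper quasi-triangular:
# the end result of the sequence f08nec, f08nfc, f08pec.
A = np.array([[0.0, 2.0, 2.0],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])
T, Z = schur(A, output='real')
```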
##### 3.1.2.8Generalized nonsymmetric eigenvalue problems
Functions are provided to reduce a real or complex matrix pair $\left({A}_{1},{R}_{1}\right)$, where ${A}_{1}$ is general and ${R}_{1}$ is upper triangular, to generalized upper Hessenberg form by orthogonal transformations ${A}_{1}={Q}_{1}H{Z}_{1}^{\mathrm{T}}$, ${R}_{1}={Q}_{1}R{Z}_{1}^{\mathrm{T}}$ (or by unitary transformations ${A}_{1}={Q}_{1}H{Z}_{1}^{\mathrm{H}}$, ${R}_{1}={Q}_{1}R{Z}_{1}^{\mathrm{H}}$ in the complex case). These functions can optionally return ${Q}_{1}$ and/or ${Z}_{1}$. Note that to transform a general matrix pair $\left(A,B\right)$ to the form $\left({A}_{1},{R}_{1}\right)$, a $QR$ factorization of $B$ ($B=\stackrel{~}{Q}{R}_{1}$) should first be performed and the matrix ${A}_{1}$ obtained as ${A}_{1}={\stackrel{~}{Q}}^{\mathrm{T}}A$ (see Section 3.1.2.1 above).
Functions are also provided to balance a general matrix pair before reducing it to generalized Hessenberg form, as described in Section 2.14.8. Companion functions are provided to transform vectors of the balanced pair to those of the original matrix pair.
| | Reduce to generalized Hessenberg form | Balance | Backtransform vectors after balancing |
|---|---|---|---|
| real matrices | f08wfc | f08whc | f08wjc |
| complex matrices | f08wtc | f08wvc | f08wwc |
Functions are provided to compute the eigenvalues (as the pairs $\left(\alpha ,\beta \right)$) and all or part of the generalized Schur factorization of a generalized upper Hessenberg matrix pair. Eigenvectors may be computed from the generalized Schur form by back-substitution.
Additional functions estimate the sensitivities of computed eigenvalues and eigenvectors.
| | Eigenvalues and generalized Schur factorization ($QZ$ algorithm) | Eigenvectors from generalized Schur factorization | Sensitivities of eigenvalues and eigenvectors |
|---|---|---|---|
| real matrices | f08xec | f08ykc | f08ylc |
| complex matrices | f08xsc | f08yxc | f08yyc |
Finally, functions are provided for reordering the generalized Schur factorization so that eigenvalues appear in any desired order on the diagonal of the generalized Schur form. f08yfc and f08ytc simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. f08ygc and f08yuc perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the generalized Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the deflating subspace corresponding to the specified cluster of eigenvalues. These functions can also compute the sensitivities of the cluster of eigenvalues and the deflating subspace.
| | Reorder generalized Schur factorization | Reorder generalized Schur factorization, find basis for deflating subspace and estimate sensitivities |
|---|---|---|
| real matrices | f08yfc | f08ygc |
| complex matrices | f08ytc | f08yuc |
The following sequences of calls may be used to compute various combinations of eigenvalues, generalized Schur vectors and eigenvectors:
• real matrix pair, all eigenvalues (with balancing): f08aec, f08agc (or f08abc, f08acc), f08wfc, f08whc, f08xec*
• real matrix pair, all eigenvalues and generalized Schur factorization: f08aec, f08afc, f08agc (or f08abc, f08acc), f08wfc, f08xec
• real matrix pair, all eigenvalues and eigenvectors (with balancing): f16qfc, f16qhc, f08aec, f08afc, f08agc (or f08abc, f08acc), f08wfc, f08whc, f08xec, f08ykc*, f08wjc
• complex matrix pair, all eigenvalues (with balancing): f08asc, f08auc (or f08apc, f08aqc), f08wtc, f08wvc, f08xsc*
• complex matrix pair, all eigenvalues and generalized Schur factorization: f08asc, f08atc, f08auc (or f08apc, f08aqc), f08wtc, f08xsc
• complex matrix pair, all eigenvalues and eigenvectors (with balancing): f16tfc, f16thc, f08asc, f08atc, f08auc (or f08apc, f08aqc), f08wtc, f08wvc, f08xsc, f08yxc*, f08wwc
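The end result of the $QZ$ sequence can be sketched with SciPy (an assumption: SciPy stands in for the NAG functions; its `qz` wraps the LAPACK generalized Schur driver family):

```python
import numpy as np
from scipy.linalg import qz  # assumes SciPy is installed

# Generalized real Schur (QZ) factorization of a matrix pair:
# A = Q S Z^T and B = Q T Z^T, with S quasi-triangular and T triangular.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
S, T, Q, Z = qz(A, B, output='real')
```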
##### 3.1.2.9The Sylvester equation and the generalized Sylvester equation
Functions are provided to solve the real or complex Sylvester equation $AX±XB=C$, where $A$ and $B$ are upper quasi-triangular if real, or upper triangular if complex. To solve the general form of the Sylvester equation in which $A$ and $B$ are general square matrices, $A$ and $B$ must be reduced to upper (quasi-) triangular form by the Schur factorization, using functions described in Section 3.1.2.7. For more details, see the documents for the functions listed below.
| | Solve the Sylvester equation |
|---|---|
| real matrices | f08qhc |
| complex matrices | f08qvc |
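The overall process (Schur-reduce $A$ and $B$, then solve the triangular Sylvester equation) can be sketched with SciPy (an assumption: SciPy is used as a stand-in; `solve_sylvester` performs the Schur reductions internally and then calls the LAPACK triangular solver family that f08qhc/f08qvc belong to):

```python
import numpy as np
from scipy.linalg import solve_sylvester  # assumes SciPy is installed

# Sylvester equation A X + X B = C, solvable here because A and -B
# have no eigenvalue in common.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X = solve_sylvester(A, B, C)
```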
Functions are also provided to solve the real or complex generalized Sylvester equations
 $AR-LB=C, \quad DR-LE=F,$
where the pairs $\left(A,D\right)$ and $\left(B,E\right)$ are in generalized Schur form. To solve the general form of the generalized Sylvester equation in which $\left(A,D\right)$ and $\left(B,E\right)$ are general matrix pairs, $\left(A,D\right)$ and $\left(B,E\right)$ must first be reduced to generalized Schur form.
| | Solve the generalized Sylvester equation |
|---|---|
| real matrices | f08yhc |
| complex matrices | f08yvc |

### 3.2NAG Names and LAPACK Names

The functions may be called either by their NAG short names or by their NAG long names which contain their double precision LAPACK names.
References to Chapter F08 functions in the manual normally include the LAPACK double precision names, for example f08aec. The LAPACK routine names follow a simple scheme. Each name has the structure xyyzzz, where the components have the following meanings:
• the initial letter x indicates the data type (real or complex) and precision:
  - s: real, single precision
  - d: real, double precision
  - c: complex, single precision
  - z: complex, double precision
• the second and third letters yy indicate the type of the matrix $A$ or matrix pair $\left(A,B\right)$ (and in some cases the storage scheme):
  - bd: bidiagonal
  - di: diagonal
  - gb: general band
  - ge: general
  - gg: general pair ($B$ may be triangular)
  - hb: (complex) Hermitian band
  - he: Hermitian
  - hg: generalized upper Hessenberg
  - hp: Hermitian (packed storage)
  - hs: upper Hessenberg
  - op: (real) orthogonal (packed storage)
  - or: (real) orthogonal
  - pt: symmetric or Hermitian positive definite tridiagonal
  - sb: (real) symmetric band
  - sp: symmetric (packed storage)
  - st: (real) symmetric tridiagonal
  - sy: symmetric
  - tg: triangular pair (one may be quasi-triangular)
  - tp: triangular-pentagonal
  - tr: triangular (or quasi-triangular)
  - un: (complex) unitary
  - up: (complex) unitary (packed storage)
• the last three letters zzz indicate the computation performed. For example, qrf is a $QR$ factorization.
Thus the function nag_lapackeig_dgeqrf performs a $QR$ factorization of a real general matrix; the corresponding function for a complex general matrix is nag_lapackeig_zgeqrf.
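The naming scheme can be illustrated with a small hypothetical helper (not part of any library) that splits a LAPACK name into the three components just described:

```python
# A hypothetical helper that splits a LAPACK name xyyzzz into the three
# components described above: precision, matrix type, computation.
PRECISION = {'s': 'real single', 'd': 'real double',
             'c': 'complex single', 'z': 'complex double'}

def split_lapack_name(name):
    """Return (precision, matrix-type code, computation code)."""
    return PRECISION[name[0]], name[1:3], name[3:]

# split_lapack_name('dgeqrf') -> ('real double', 'ge', 'qrf')
```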

### 3.3Matrix Storage Schemes

In this chapter the following storage schemes are used for matrices:
• conventional storage in a two-dimensional array;
• packed storage for symmetric or Hermitian matrices;
• packed storage for orthogonal or unitary matrices;
• band storage for general, symmetric or Hermitian band matrices;
• storage of bidiagonal, symmetric or Hermitian tridiagonal matrices in two one-dimensional arrays.
These storage schemes are compatible with those used in Chapters F07 and F16, but different schemes for packed, band and tridiagonal storage are used in a few older functions in Chapters F01, F02, F03 and F04.

#### 3.3.1Conventional storage

Please see Section 3.4.1 in the F07 Chapter Introduction for full details.

#### 3.3.2Packed storage

Please see Section 3.4.2 in the F07 Chapter Introduction for full details.

#### 3.3.3Band storage

Please see Section 3.4.4 in the F07 Chapter Introduction for full details.

#### 3.3.4Tridiagonal and bidiagonal matrices

A symmetric tridiagonal or bidiagonal matrix is stored in two one-dimensional arrays, one of length $n$ containing the diagonal elements, and one of length $n-1$ containing the off-diagonal elements. (Older functions in Chapter F02 store the off-diagonal elements in elements $2:n$ of a vector of length $n$.)

#### 3.3.5Real diagonal elements of complex matrices

Please see Section 3.4.6 in the F07 Chapter Introduction for full details.

#### 3.3.6Representation of orthogonal or unitary matrices

A real orthogonal or complex unitary matrix (usually denoted $Q$) is often represented in the NAG Library as a product of elementary reflectors – also referred to as elementary Householder matrices (usually denoted ${H}_{i}$). For example,
 $Q =H1H2⋯Hk.$
You need not be aware of the details, because functions are provided to work with this representation, either to generate all or part of $Q$ explicitly, or to multiply a given matrix by $Q$ or ${Q}^{\mathrm{T}}$ (${Q}^{\mathrm{H}}$ in the complex case) without forming $Q$ explicitly.
Nevertheless, the following further details may occasionally be useful.
An elementary reflector (or elementary Householder matrix) $H$ of order $n$ is a unitary matrix of the form
 $H=I-τvvH$ (4)
where $\tau$ is a scalar, and $v$ is an $n$-element vector, with ${|\tau |}^{2}{{‖v‖}_{2}}^{2}=2×\mathrm{Re}\left(\tau \right)$; $v$ is often referred to as the Householder vector. Often $v$ has several leading or trailing zero elements, but for the purpose of this discussion assume that $v$ has no such special structure.
There is some redundancy in the representation $\left(4\right)$, which can be removed in various ways. The representation used in Chapter F08 and in LAPACK (which differs from those used in some of the functions in Chapters F01, F02 and F04) sets ${v}_{1}=1$; hence ${v}_{1}$ need not be stored. In real arithmetic, $1\le \tau \le 2$, except that $\tau =0$ implies $H=I$.
In complex arithmetic, $\tau$ may be complex, and satisfies $1\le \mathrm{Re}\left(\tau \right)\le 2$ and $|\tau -1|\le 1$. Thus a complex $H$ is not Hermitian (as it is in other representations), but it is unitary, which is the important property. The advantage of allowing $\tau$ to be complex is that, given an arbitrary complex vector $x$, $H$ can be computed so that
 $HHx=β(1,0,…,0)T$
with real $\beta$. This is useful, for example, when reducing a complex Hermitian matrix to real symmetric tridiagonal form, or a complex rectangular matrix to real bidiagonal form.
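A concrete real instance of the representation $\left(4\right)$ can be sketched in NumPy (a stand-in; the quantities below follow the LAPACK convention ${v}_{1}=1$ described above):

```python
import numpy as np

# Construct an elementary reflector H = I - tau v v^T with v1 = 1:
# for x = (3, 4)^T this gives H x = beta e1 with |beta| = ||x||_2 = 5,
# and tau lies in [1, 2] as stated above.
x = np.array([3.0, 4.0])
beta = -np.sign(x[0]) * np.linalg.norm(x)   # beta = -5
v = np.array([1.0, x[1] / (x[0] - beta)])   # v1 = 1 need not be stored
tau = (beta - x[0]) / beta                  # tau = 1.6
H = np.eye(2) - tau * np.outer(v, v)        # orthogonal, H^2 = I
```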

### 3.4Argument Conventions

#### 3.4.1Option Arguments

In addition to the order argument of type Nag_OrderType, most functions in this Chapter have one or more option arguments of various types; only options of the correct type may be supplied.
For example,
`nag_lapackeig_dsytrd(Nag_RowMajor,Nag_Upper,...)`

#### 3.4.2Problem dimensions

It is permissible for the problem dimensions (for example, m or n) to be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as an error.

### 3.5Normalizing Output Vectors

In cases where a function computes a set of orthogonal or unitary vectors, e.g., eigenvectors or an orthogonal matrix factorization, these vectors may differ between implementations yet still be correct: under a strict normalization that enforces uniqueness of the solution, the different results can be shown to coincide. For example, an eigenvector $v$ is computed such that ${‖v‖}_{2}=1$. However, the vector $\alpha v$, where $\alpha$ is a scalar with $|\alpha |=1$, is also an eigenvector. So for symmetric eigenproblems, where eigenvectors are real valued, $\alpha =1$ or $-1$; for complex eigenvectors, $\alpha$ can lie anywhere on the unit circle in the complex plane, $\alpha =\mathrm{exp}\left(i\theta \right)$.
Another example arises in the computation of the singular value decomposition of a matrix. Consider the factorization
 $A = UKΣKH VH ,$
where $K$ is a diagonal matrix with elements on the unit circle. Then $UK$ and $VK$ are corresponding left and right singular vectors of $A$ for any such choice of $K$.
The example programs for functions in Chapter F08 take care to perform post-processing normalizations, in such cases as those highlighted above, so that a unique set of results can be displayed over many implementations of the NAG Library (see Section 10 in f08yxc). Similar care should be taken to obtain unique vectors and matrices when calling functions in Chapter F08, particularly when these are used in equivalence tests.
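One such post-processing normalization can be sketched in NumPy (a stand-in; this particular convention, making the largest-modulus entry of each vector real and positive, is one possible choice, not necessarily the one used by the NAG example programs):

```python
import numpy as np

# Rotate each computed eigenvector so its element of largest modulus is
# real and positive, fixing the free phase factor alpha = exp(i*theta).
def normalize_phase(V):
    """Return V with each column scaled so its largest-modulus entry is real > 0."""
    W = V.astype(complex)
    for j in range(W.shape[1]):
        k = np.argmax(np.abs(W[:, j]))
        W[:, j] *= np.abs(W[k, j]) / W[k, j]   # multiply by exp(-i*theta)
    return W

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])           # eigenvalues +/- i
w, V = np.linalg.eig(A)
Vn = normalize_phase(V)              # columns are still eigenvectors
```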

## 4Decision Trees

The following decision trees are principally for the computational (general purpose) functions.

### Tree 1: Real Symmetric Eigenvalue Problems

- Are eigenvalues only required?
  - yes: Are all the eigenvalues required?
    - yes: Is $A$ tridiagonal?
      - yes: f08jcc or f08jfc
      - no: Is $A$ a band matrix?
        - yes: (f08hec and f08jfc) or f08hcc
        - no: Is one triangle of $A$ stored as a linear array?
          - yes: (f08gec and f08jfc) or f08gcc
          - no: (f08fec and f08jfc) or f08fac or f08fcc
    - no: Is $A$ tridiagonal?
      - yes: f08jjc
      - no: Is $A$ a band matrix?
        - yes: f08hec and f08jjc
        - no: Is one triangle of $A$ stored as a linear array?
          - yes: f08gec and f08jjc
          - no: (f08fec and f08jjc) or f08fbc
  - no: Are all eigenvalues and eigenvectors required?
    - yes: Is $A$ tridiagonal?
      - yes: f08jec, f08jcc, f08jhc or f08jlc
      - no: Is $A$ a band matrix?
        - yes: (f08hec and f08jec) or f08hcc
        - no: Is one triangle of $A$ stored as a linear array?
          - yes: (f08gec, f08gfc and f08jec) or f08gcc
          - no: (f08fec, f08ffc and f08jec) or f08fac or f08fcc
    - no: Is $A$ tridiagonal?
      - yes: f08jjc, f08jkc or f08jlc
      - no: Is one triangle of $A$ stored as a linear array?
        - yes: f08gec, f08jjc, f08jkc and f08ggc
        - no: (f08fec, f08jjc, f08jkc and f08fgc) or f08fbc

### Tree 2: Real Generalized Symmetric-definite Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Are all the eigenvalues required?
    - Yes: Are $A$ and $B$ band matrices?
      - Yes: f08ufc, f08uec, f08hec and f08jfc
      - No: Are $A$ and $B$ stored with one triangle as a linear array?
        - Yes: f07gdc, f08tec, f08gec and f08jfc
        - No: f07fdc, f08sec, f08fec and f08jfc
    - No: Are $A$ and $B$ band matrices?
      - Yes: f08ufc, f08uec, f08hec and f08jjc
      - No: Are $A$ and $B$ stored with one triangle as a linear array?
        - Yes: f07gdc, f08tec, f08gec and f08jjc
        - No: f07fdc, f08sec, f08gec and f08jjc
  - No: Are all eigenvalues and eigenvectors required?
    - Yes: Are $A$ and $B$ stored with one triangle as a linear array?
      - Yes: f07gdc, f08tec, f08gec, f08gfc, f08jec and f16plc
      - No: f07fdc, f08sec, f08fec, f08ffc, f08jec and f16yjc
    - No: Are $A$ and $B$ band matrices?
      - Yes: f08ufc, f08uec, f08hec, f08jkc and f16yjc
      - No: Are $A$ and $B$ stored with one triangle as a linear array?
        - Yes: f07gdc, f08tec, f08gec, f08jjc, f08jkc, f08ggc and f16plc
        - No: f07fdc, f08sec, f08fec, f08jjc, f08jkc, f08fgc and f16yjc
Note: the functions for band matrices only handle the problem $Ax=\lambda Bx$; the other functions handle all three types of problems ($Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$) except that, if the problem is $BAx=\lambda x$ and eigenvectors are required, f16phc must be used instead of f16plc and f16yfc instead of f16yjc.
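For the problem $Ax=\lambda Bx$ with $B$ symmetric positive definite, the reduction used by the non-band paths in Tree 2 works as follows: factorize $B=LL^{\mathrm{T}}$ (Cholesky, the role of f07fdc), then form $C={L}^{-1}A{L}^{-\mathrm{T}}$ (the reduction to a standard eigenproblem performed by f08sec), so that $Cy=\lambda y$ with $x={L}^{-\mathrm{T}}y$. The following pure-Python sketch of this reduction is hard-coded to a $2\times 2$ example so that no external library is needed; it is illustrative only, and in practice the Chapter F07/F08 functions should be used:

```python
import math

def cholesky2(b):
    # B = L * L^T for a 2x2 symmetric positive definite B (L lower triangular).
    l00 = math.sqrt(b[0][0])
    l10 = b[1][0] / l00
    l11 = math.sqrt(b[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def standard_form(a, b):
    # C = inv(L) * A * inv(L)^T, so that A x = lambda B x  <=>  C y = lambda y.
    L = cholesky2(b)
    # Solve L * M = A column by column (forward substitution): M = inv(L) * A.
    m = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        m[0][j] = a[0][j] / L[0][0]
        m[1][j] = (a[1][j] - L[1][0] * m[0][j]) / L[1][1]
    # Solve C * L^T = M row by row (back substitution): C = M * inv(L)^T.
    c = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        c[i][0] = m[i][0] / L[0][0]
        c[i][1] = (m[i][1] - c[i][0] * L[1][0]) / L[1][1]
    return c

def eig2_sym(c):
    # Eigenvalues of a symmetric 2x2 matrix via its characteristic polynomial.
    tr = c[0][0] + c[1][1]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - d, tr / 2.0 + d

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[2.0, 0.0], [0.0, 1.0]]
lam1, lam2 = eig2_sym(standard_form(A, B))
# Each computed lambda satisfies det(A - lambda*B) = 0:
for lam in (lam1, lam2):
    det = (A[0][0] - lam * B[0][0]) * (A[1][1] - lam * B[1][1]) \
        - (A[0][1] - lam * B[0][1]) * (A[1][0] - lam * B[1][0])
    assert abs(det) < 1e-10
```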

### Tree 3: Real Nonsymmetric Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Is $A$ an upper Hessenberg matrix?
    - Yes: f08pec
    - No: f08nac or f08nbc or (f08nhc, f08nec and f08pec)
  - No: Is the Schur factorization of $A$ required?
    - Yes: Is $A$ an upper Hessenberg matrix?
      - Yes: f08pec
      - No: f08nbc or (f08nec, f08nfc, f08pec or f08njc)
    - No: Are all eigenvectors required?
      - Yes: Is $A$ an upper Hessenberg matrix?
        - Yes: f08pec or f08qkc
        - No: f08nac or f08nbc or (f08nhc, f08nec, f08nfc, f08pec, f08qkc or f08njc)
      - No: Is $A$ an upper Hessenberg matrix?
        - Yes: f08pec or f08pkc
        - No: f08nhc, f08nec, f08pec, f08pkc, f08ngc or f08njc

### Tree 4: Real Generalized Nonsymmetric Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Are $A$ and $B$ in generalized upper Hessenberg form?
    - Yes: f08xec
    - No: f08wbc, or f08whc and f08wcc
  - No: Is the generalized Schur factorization of $A$ and $B$ required?
    - Yes: Are $A$ and $B$ in generalized upper Hessenberg form?
      - Yes: f08xec
      - No: f08xbc or f08xcc
    - No: Are $A$ and $B$ in generalized upper Hessenberg form?
      - Yes: f08xec and f08ykc
      - No: f08wbc, or f08whc, f08wcc and f08wjc

### Tree 5: Complex Hermitian Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Are all the eigenvalues required?
    - Yes: Is $A$ a band matrix?
      - Yes: (f08hsc and f08jfc) or f08hqc
      - No: Is one triangle of $A$ stored as a linear array?
        - Yes: (f08gsc and f08jfc) or f08gqc
        - No: (f08fsc and f08jfc) or f08fqc
    - No: Is $A$ a band matrix?
      - Yes: f08hsc and f08jjc
      - No: Is one triangle of $A$ stored as a linear array?
        - Yes: f08gsc and f08jjc
        - No: f08fsc and f08jjc
  - No: Are all eigenvalues and eigenvectors required?
    - Yes: Is $A$ a band matrix?
      - Yes: (f08hsc and f08jsc) or f08hqc
      - No: Is one triangle of $A$ stored as a linear array?
        - Yes: (f08gsc, f08gtc and f08jsc) or f08gqc
        - No: (f08fsc, f08ftc and f08jsc) or f08fqc
    - No: Is one triangle of $A$ stored as a linear array?
      - Yes: f08gsc, f08jjc, f08jxc and f08guc
      - No: f08fsc, f08jjc, f08jxc and f08fuc

### Tree 6: Complex Generalized Hermitian-definite Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Are all eigenvalues required?
    - Yes: Are $A$ and $B$ stored with one triangle as a linear array?
      - Yes: f07grc, f08tsc, f08gsc and f08jfc
      - No: f07frc, f08ssc, f08fsc and f08jfc
    - No: Are $A$ and $B$ stored with one triangle as a linear array?
      - Yes: f07grc, f08tsc, f08gsc and f08jjc
      - No: f07frc, f08ssc, f08gsc and f08jjc
  - No: Are all eigenvalues and eigenvectors required?
    - Yes: Are $A$ and $B$ stored with one triangle as a linear array?
      - Yes: f07grc, f08tsc, f08gsc, f08gtc and f16psc
      - No: f07frc, f08ssc, f08fsc, f08ftc, f08jsc and f16zjc
    - No: Are $A$ and $B$ stored with one triangle as a linear array?
      - Yes: f07grc, f08tsc, f08gsc, f08jjc, f08jxc, f08guc and f16slc
      - No: f07frc, f08ssc, f08fsc, f08jjc, f08jxc, f08fuc and f16zjc

### Tree 7: Complex non-Hermitian Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Is $A$ an upper Hessenberg matrix?
    - Yes: f08psc
    - No: f08nvc, f08nsc and f08psc
  - No: Is the Schur factorization of $A$ required?
    - Yes: Is $A$ an upper Hessenberg matrix?
      - Yes: f08psc
      - No: f08nsc, f08ntc, f08psc and f08nwc
    - No: Are all eigenvectors required?
      - Yes: Is $A$ an upper Hessenberg matrix?
        - Yes: f08psc and f08qxc
        - No: f08nvc, f08nsc, f08ntc, f08psc, f08qxc and f08nwc
      - No: Is $A$ an upper Hessenberg matrix?
        - Yes: f08psc and f08pxc
        - No: f08nvc, f08nsc, f08psc, f08pxc, f08nuc and f08nwc

### Tree 8: Complex Generalized non-Hermitian Eigenvalue Problems

- Are eigenvalues only required?
  - Yes: Are $A$ and $B$ in generalized upper Hessenberg form?
    - Yes: f08xsc
    - No: f08wpc, or f08wqc and f08wvc
  - No: Is the generalized Schur factorization of $A$ and $B$ required?
    - Yes: Are $A$ and $B$ in generalized upper Hessenberg form?
      - Yes: f08xsc
      - No: f08xpc or f08xqc
    - No: Are $A$ and $B$ in generalized upper Hessenberg form?
      - Yes: f08xsc and f08yxc
      - No: f08wpc, or f08wvc, f08wqc and f08wwc

### Tree 9: Singular Value Decomposition of a Matrix

- Is $A$ a complex matrix?
  - Yes: Is $A$ banded?
    - Yes: f08lsc and f08msc
    - No: Are singular values only required?
      - Yes: f08ksc and f08msc
      - No: f08ksc, f08ktc, f08kvc, f08kwc, f08kzc and f08msc
  - No: Is $A$ bidiagonal?
    - Yes: f08mbc and f08mec
    - No: Is $A$ banded?
      - Yes: f08lec and f08mec
      - No: Are singular values only required?
        - Yes: f08kec and f08mec
        - No: f08kec, f08kfc, f08khc, f08kjc, f08kmc and f08mec

### Tree 10: Singular Value Decomposition of a Matrix Pair

- Are $A$ and $B$ complex matrices?
  - Yes: f08vqc
  - No: f08vcc

## 5Functionality Index

 Back transformation of eigenvectors from those of balanced forms,
 complex matrix f08nwc
 real matrix f08njc
 Back transformation of generalized eigenvectors from those of balanced forms,
 complex matrix f08wwc
 real matrix f08wjc
 Balancing,
 complex general matrix f08nvc
 complex general matrix pair f08wvc
 real general matrix f08nhc
 real general matrix pair f08whc
 Eigenvalue problems for condensed forms of matrices,
 complex Hermitian matrix,
 eigenvalues and eigenvectors,
 band matrix,
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm f08hpc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08hqc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm f08hnc
 general matrix,
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm f08fpc
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm, using packed storage f08gpc
 all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08frc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08fqc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08gqc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm f08fnc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm, using packed storage f08gnc
 eigenvalues only,
 band matrix,
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08hpc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08hnc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm, using packed storage f08hqc
 general matrix,
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08fpc
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm, using packed storage f08gpc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08fnc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm, using packed storage f08gnc
 complex upper Hessenberg matrix, reduced from complex general matrix,
 eigenvalues and Schur factorization f08psc
 selected right and/or left eigenvectors by inverse iteration f08pxc
 real bidiagonal matrix,
 singular value decomposition,
 after reduction from complex general matrix f08msc
 after reduction from real general matrix f08mec
 after reduction from real general matrix, using divide-and-conquer f08mdc
 real symmetric matrix,
 eigenvalues and eigenvectors,
 band matrix,
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm f08hbc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08hcc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm f08hac
 general matrix,
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm f08fbc
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm, using packed storage f08gbc
 all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08fdc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08fcc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08gcc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm f08fac
 all eigenvalues and eigenvectors by root-free $QR$ algorithm, using packed storage f08gac
 eigenvalues only,
 band matrix,
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08hbc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08hac
 general matrix,
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08fbc
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm, using packed storage f08gbc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08fac
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm, using packed storage f08gac
 real symmetric tridiagonal matrix,
 eigenvalues and eigenvectors,
 after reduction from complex Hermitian matrix,
 all/selected eigenvalues and eigenvectors, using Relatively Robust Representations f08jyc
 all eigenvalues and eigenvectors f08jsc
 all eigenvalues and eigenvectors, positive definite matrix f08juc
 all eigenvalues and eigenvectors, using divide-and-conquer f08jvc
 selected eigenvectors by inverse iteration f08jxc
 all/selected eigenvalues and eigenvectors, using Relatively Robust Representations f08jlc
 all/selected eigenvalues and eigenvectors by root-free $QR$ algorithm f08jbc
 all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08jdc
 all eigenvalues and eigenvectors f08jec
 all eigenvalues and eigenvectors, by divide-and-conquer f08jhc
 all eigenvalues and eigenvectors, positive definite matrix f08jgc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08jcc
 all eigenvalues and eigenvectors by root-free $QR$ algorithm f08jac
 selected eigenvectors by inverse iteration f08jkc
 eigenvalues only,
 all/selected eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08jbc
 all eigenvalues by root-free $QR$ algorithm f08jfc
 all eigenvalues by the Pal–Walker–Kahan variant of the $QL$ or $QR$ algorithm f08jac
 selected eigenvalues only f08jjc
 real upper Hessenberg matrix, reduced from real general matrix,
 eigenvalues and Schur factorization f08pec
 selected right and/or left eigenvectors by inverse iteration f08pkc
 Eigenvalue problems for nonsymmetric matrices,
 complex matrix,
 all eigenvalues, Schur form, Schur vectors and reciprocal condition numbers f08ppc
 all eigenvalues, Schur form and Schur vectors f08pnc
 all eigenvalues and left/right eigenvectors f08nnc
 all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers f08npc
 real matrix,
 all eigenvalues, real Schur form, Schur vectors and reciprocal condition numbers f08pbc
 all eigenvalues, real Schur form and Schur vectors f08pac
 all eigenvalues and left/right eigenvectors f08nac
 all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers f08nbc
 Eigenvalues and generalized Schur factorization,
 complex generalized upper Hessenberg form f08xsc
 real generalized upper Hessenberg form f08xec
 General Gauss–Markov linear model,
 solves a complex general Gauss–Markov linear model problem f08zpc
 solves a real general Gauss–Markov linear model problem f08zbc
 Generalized eigenvalue problems for condensed forms of matrices,
 complex Hermitian-definite eigenproblems,
 banded matrices,
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08uqc
 all eigenvalues and eigenvectors by reduction to tridiagonal form f08unc
 selected eigenvalues and eigenvectors by reduction to tridiagonal form f08upc
 general matrices,
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08sqc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format f08tqc
 all eigenvalues and eigenvectors by reduction to tridiagonal form f08snc
 all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tnc
 selected eigenvalues and eigenvectors by reduction to tridiagonal form f08spc
 selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tpc
 real symmetric-definite eigenproblems,
 banded matrices,
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08ucc
 all eigenvalues and eigenvectors by reduction to tridiagonal form f08uac
 selected eigenvalues and eigenvectors by reduction to tridiagonal form f08ubc
 general matrices,
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08scc
 all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format f08tcc
 all eigenvalues and eigenvectors by reduction to tridiagonal form f08sac
 all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tac
 selected eigenvalues and eigenvectors by reduction to tridiagonal form f08sbc
 selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tbc
 Generalized eigenvalue problems for nonsymmetric matrix pairs,
 complex nonsymmetric matrix pairs,
 all eigenvalues, generalized Schur form, Schur vectors and reciprocal condition numbers f08xpc
 all eigenvalues, generalized Schur form and Schur vectors, using level 3 BLAS f08xqc
 all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers f08wpc
 all eigenvalues and left/right eigenvectors, using level 3 BLAS f08wqc
 real nonsymmetric matrix pairs,
 all eigenvalues, generalized real Schur form and left/right Schur vectors, plus reciprocal condition numbers f08xbc
 all eigenvalues, generalized real Schur form and left/right Schur vectors, using level 3 BLAS f08xcc
 all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers f08wbc
 all eigenvalues and left/right eigenvectors, using level 3 BLAS f08wcc
 Generalized $QR$ factorization,
 complex matrices f08zsc
 real matrices f08zec
 Generalized $RQ$ factorization,
 complex matrices f08ztc
 real matrices f08zfc
 Generalized singular value decomposition,
 after reduction from complex general matrix,
 complex triangular or trapezoidal matrix pair f08ysc
 after reduction from real general matrix,
 real triangular or trapezoidal matrix pair f08yec
 complex matrix pair, using level 3 BLAS f08vqc
 partitioned orthogonal matrix (CS decomposition) f08rac
 partitioned unitary matrix (CS decomposition) f08rnc
 real matrix pair, using level 3 BLAS f08vcc
 reduction of a pair of general matrices to triangular or trapezoidal form,
 complex matrices, using level 3 BLAS f08vuc
 real matrices, using level 3 BLAS f08vgc
 Least squares problems,
 complex matrices,
 apply orthogonal matrix f08bxc
 minimum norm solution using a complete orthogonal factorization f08bnc
 minimum norm solution using the singular value decomposition f08knc
 minimum norm solution using the singular value decomposition (divide-and-conquer) f08kqc
 reduction of upper trapezoidal matrix to upper triangular form f08bvc
 real matrices,
 apply orthogonal matrix f08bkc
 minimum norm solution using a complete orthogonal factorization f08bac
 minimum norm solution using the singular value decomposition f08kac
 minimum norm solution using the singular value decomposition (divide-and-conquer) f08kcc
 reduction of upper trapezoidal matrix to upper triangular form f08bhc
 Least squares problems with linear equality constraints,
 complex matrices,
 minimum norm solution subject to linear equality constraints using a generalized $RQ$ factorization f08znc
 real matrices,
 minimum norm solution subject to linear equality constraints using a generalized $RQ$ factorization f08zac
 Left and right eigenvectors of a pair of matrices,
 complex upper triangular matrices f08yxc
 real quasi-triangular matrices f08ykc
 $LQ$ factorization and related operations,
 complex matrices,
 apply unitary matrix f08axc
 factorization f08avc
 form all or part of unitary matrix f08awc
 real matrices,
 apply orthogonal matrix f08akc
 factorization f08ahc
 form all or part of orthogonal matrix f08ajc
 Operations on eigenvectors of a real symmetric or complex Hermitian matrix, or singular vectors of a general matrix,
 estimate condition numbers f08flc
 Operations on generalized Schur factorization of a general matrix pair,
 complex matrix,
 estimate condition numbers of eigenvalues and/or eigenvectors f08yyc
 re-order Schur factorization f08ytc
 re-order Schur factorization, compute generalized eigenvalues and condition numbers f08yuc
 real matrix,
 estimate condition numbers of eigenvalues and/or eigenvectors f08ylc
 re-order Schur factorization f08yfc
 re-order Schur factorization, compute generalized eigenvalues and condition numbers f08ygc
 Operations on Schur factorization of a general matrix,
 complex matrix,
 compute left and/or right eigenvectors f08qxc
 estimate sensitivities of eigenvalues and/or eigenvectors f08qyc
 re-order Schur factorization f08qtc
 re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities f08quc
 real matrix,
 compute left and/or right eigenvectors f08qkc
 estimate sensitivities of eigenvalues and/or eigenvectors f08qlc
 re-order Schur factorization f08qfc
 re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities f08qgc
 Overdetermined and underdetermined linear systems,
 complex matrices,
 solves an overdetermined or underdetermined complex linear system f08anc
 real matrices,
 solves an overdetermined or underdetermined real linear system f08aac
 Performs a reduction of eigenvalue problems to condensed forms, and related operations,
 real rectangular band matrix to upper bidiagonal form f08lec
 $QL$ factorization and related operations,
 complex matrices,
 apply unitary matrix f08cuc
 factorization f08csc
 form all or part of unitary matrix f08ctc
 real matrices,
 apply orthogonal matrix f08cgc
 factorization f08cec
 form all or part of orthogonal matrix f08cfc
 $QR$ factorization and related operations,
 complex matrices,
 general matrices,
 apply unitary matrix f08auc
 apply unitary matrix, explicitly blocked f08aqc
 factorization f08asc
 factorization,
 with column pivoting, using BLAS-3 f08btc
 factorization, explicitly blocked f08apc
 form all or part of unitary matrix f08atc
 triangular-pentagonal matrices,
 apply unitary matrix f08bqc
 factorization f08bpc
 real matrices,
 general matrices,
 apply orthogonal matrix f08agc
 apply orthogonal matrix, explicitly blocked f08acc
 factorization,
 with column pivoting, using BLAS-3 f08bfc
 factorization, orthogonal matrix f08aec
 factorization, with explicit blocking f08abc
 form all or part of orthogonal matrix f08afc
 triangular-pentagonal matrices,
 apply orthogonal matrix f08bbc
 factorization f08bcc
 Reduction of a pair of general matrices to generalized upper Hessenberg form,
 orthogonal reduction, real matrices, using level 3 BLAS f08wfc
 unitary reduction, complex matrices, using level 3 BLAS f08wtc
 Reduction of eigenvalue problems to condensed forms, and related operations,
 complex general matrix to upper Hessenberg form,
 apply orthogonal matrix f08nuc
 form orthogonal matrix f08ntc
 reduce to Hessenberg form f08nsc
 complex Hermitian band matrix to real symmetric tridiagonal form f08hsc
 complex Hermitian matrix to real symmetric tridiagonal form,
 apply unitary matrix f08fuc
 apply unitary matrix, packed storage f08guc
 form unitary matrix f08ftc
 form unitary matrix, packed storage f08gtc
 reduce to tridiagonal form f08fsc
 reduce to tridiagonal form, packed storage f08gsc
 complex rectangular band matrix to real upper bidiagonal form f08lsc
 complex rectangular matrix to real bidiagonal form,
 apply unitary matrix f08kuc
 form unitary matrix f08ktc
 reduce to bidiagonal form f08ksc
 real general matrix to upper Hessenberg form,
 apply orthogonal matrix f08ngc
 form orthogonal matrix f08nfc
 reduce to Hessenberg form f08nec
 real rectangular matrix to bidiagonal form,
 apply orthogonal matrix f08kgc
 form orthogonal matrix f08kfc
 reduce to bidiagonal form f08kec
 real symmetric band matrix to symmetric tridiagonal form f08hec
 real symmetric matrix to symmetric tridiagonal form,
 apply orthogonal matrix f08fgc
 apply orthogonal matrix, packed storage f08ggc
 form orthogonal matrix f08ffc
 form orthogonal matrix, packed storage f08gfc
 reduce to tridiagonal form f08fec
 reduce to tridiagonal form, packed storage f08gec
 Reduction of generalized eigenproblems to standard eigenproblems,
 complex Hermitian-definite banded generalized eigenproblem $Ax=\lambda Bx$ f08usc
 complex Hermitian-definite generalized eigenproblem $Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$ f08ssc
 complex Hermitian-definite generalized eigenproblem $Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$, packed storage f08tsc
 real symmetric-definite banded generalized eigenproblem $Ax=\lambda Bx$ f08uec
 real symmetric-definite generalized eigenproblem $Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$ f08sec
 real symmetric-definite generalized eigenproblem $Ax=\lambda Bx$, $ABx=\lambda x$ or $BAx=\lambda x$, packed storage f08tec
 $RQ$ factorization and related operations,
 complex matrices,
 apply unitary matrix f08cxc
 factorization f08cvc
 form all or part of unitary matrix f08cwc
 real matrices,
 apply orthogonal matrix f08ckc
 factorization f08chc
 form all or part of orthogonal matrix f08cjc
 Singular value decomposition,
 complex matrix,
 all/selected singular values and, optionally, the corresponding singular vectors f08kzc
 preconditioned Jacobi SVD using fast scaled rotations and de Rijk's pivoting f08kvc
 using a divide-and-conquer algorithm f08krc
 using bidiagonal $QR$ iteration f08kpc
 using fast scaled rotation and de Rijk's pivoting f08kwc
 real matrix,
 all/selected singular values and, optionally, the corresponding singular vectors f08kmc
 preconditioned Jacobi SVD using fast scaled rotations and de Rijk's pivoting f08khc
 using a divide-and-conquer algorithm f08kdc
 using bidiagonal $QR$ iteration f08kbc
 using fast scaled rotation and de Rijk's pivoting f08kjc
 real square bidiagonal matrix,
 all/selected singular values and, optionally, the corresponding singular vectors f08mbc
 Solve generalized Sylvester equation,
 complex matrices f08yvc
 real matrices f08yhc
 Solve reduced form of Sylvester matrix equation,
 complex matrices f08qvc
 real matrices f08qhc
 Split Cholesky factorization,
 complex Hermitian positive definite band matrix f08utc
 real symmetric positive definite band matrix f08ufc

## 6Auxiliary Functions Associated with Library Function Arguments

None.

## 7Withdrawn or Deprecated Functions

None.

## 8References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Arioli M, Duff I S and de Rijk P P M (1989) On the augmented system approach to sparse least squares problems Numer. Math. 55 667–684
Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912
Golub G H and Van Loan C F (2012) Matrix Computations (4th Edition) Johns Hopkins University Press, Baltimore
Moler C B and Stewart G W (1973) An algorithm for generalized matrix eigenproblems SIAM J. Numer. Anal. 10 241–256
Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia
Stewart G W and Sun J-G (1990) Matrix Perturbation Theory Academic Press, London
Ward R C (1981) Balancing the generalized eigenvalue problem SIAM J. Sci. Stat. Comp. 2 141–152
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag