f01bvf transforms the generalized symmetric-definite eigenproblem $Ax=\lambda Bx$ to the equivalent standard eigenproblem $Cy=\lambda y$, where $A$, $B$ and $C$ are symmetric band matrices and $B$ is positive definite. $B$ must have been decomposed by f01buf.
The routine may be called by the names f01bvf or nagf_matop_real_symm_posdef_geneig.
3 Description
$A$ is a symmetric band matrix of order $n$ and bandwidth $2{m}_{A}+1$. The positive definite symmetric band matrix $B$, of order $n$ and bandwidth $2{m}_{B}+1$, must have been previously decomposed by f01buf as $ULD{L}^{\mathrm{T}}{U}^{\mathrm{T}}$. f01bvf applies $U$, $L$ and $D$ to $A$, ${m}_{A}$ rows at a time, restoring the band form of $A$ at each stage by plane rotations. The argument $k$ defines the change-over point in the decomposition of $B$ as used by f01buf and is also used as a change-over point in the transformations applied by this routine. For maximum efficiency, $k$ should be chosen to be the multiple of ${m}_{A}$ nearest to $n/2$. The resulting symmetric band matrix $C$ is overwritten on a. The eigenvalues of $C$, and thus of the original problem, may be found using f08hef and f08jff. For selected eigenvalues, use f08hef and f08jjf.
4 References
Crawford C R (1973) Reduction of a band-symmetric generalized eigenvalue problem Comm. ACM 16 41–44
5 Arguments
1: $\mathbf{n}$ – Integer Input
On entry: $n$, the order of the matrices $A$, $B$ and $C$.
2: $\mathbf{ma1}$ – Integer Input
On entry: ${m}_{A}+1$, where ${m}_{A}$ is the number of nonzero superdiagonals in $A$. Normally ${\mathbf{ma1}}\ll {\mathbf{n}}$.
3: $\mathbf{mb1}$ – Integer Input
On entry: ${m}_{B}+1$, where ${m}_{B}$ is the number of nonzero superdiagonals in $B$.
Constraint:
${\mathbf{mb1}}\le {\mathbf{ma1}}$.
4: $\mathbf{m3}$ – Integer Input
On entry: the value of $3{m}_{A}+{m}_{B}$.
5: $\mathbf{k}$ – Integer Input
On entry: $k$, the change-over point in the transformations. It must be the same as the value used by f01buf in the decomposition of $B$.
Suggested value:
the optimum value is the multiple of ${m}_{A}$ nearest to $n/2$.
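As a quick sketch of this suggestion, the change-over point can be computed as follows (`suggested_k` is a hypothetical helper for illustration only, not part of the NAG Library):

```python
# Hypothetical helper (not a NAG routine): choose k as the multiple of
# m_A nearest to n/2, the suggested value for both f01buf and f01bvf.
def suggested_k(n, ma):
    return ma * round((n / 2) / ma)
```

For example, with $n=100$ and ${m}_{A}=3$, the multiple of $3$ nearest to $50$ is $51$.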
6: $\mathbf{a}({\mathbf{lda}},{\mathbf{n}})$ – Real (Kind=nag_wp) array Input/Output
On entry: the upper triangle of the $n\times n$ symmetric band matrix $A$, with the diagonal of the matrix stored in the $({m}_{A}+1)$th row of the array, and the ${m}_{A}$ superdiagonals within the band stored in the first ${m}_{A}$ rows of the array. Each column of the matrix is stored in the corresponding column of the array. For example, if $n=6$ and ${m}_{A}=2$, the storage scheme is
$$\left(\begin{array}{cccccc} * & * & {a}_{13} & {a}_{24} & {a}_{35} & {a}_{46} \\ * & {a}_{12} & {a}_{23} & {a}_{34} & {a}_{45} & {a}_{56} \\ {a}_{11} & {a}_{22} & {a}_{33} & {a}_{44} & {a}_{55} & {a}_{66} \end{array}\right)$$
Elements in the top left corner of the array need not be set. The matrix elements within the band can be assigned to the correct elements of the array using the following code:

    Do j = 1, n
      Do i = max(1,j-ma1+1), j
        a(i-j+ma1,j) = matrix(i,j)
      End Do
    End Do
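For illustration, an equivalent of this packing in Python (a sketch using NumPy; `pack_band_upper` is a hypothetical name, not a NAG routine) is:

```python
import numpy as np

# Sketch (not NAG code): pack the upper triangle of a symmetric band
# matrix into the band array described above, with the diagonal in row
# ma1 (= ma + 1) and superdiagonal d in row ma1 - d.
def pack_band_upper(mat, ma):
    n = mat.shape[0]
    ma1 = ma + 1
    ab = np.zeros((ma1, n))
    for j in range(1, n + 1):                      # 1-based, as in the Fortran loop
        for i in range(max(1, j - ma1 + 1), j + 1):
            ab[i - j + ma1 - 1, j - 1] = mat[i - 1, j - 1]
    return ab
```

Elements outside the band (the top left corner of the array) are simply left at zero here; the routine does not reference them.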
On exit: a is overwritten by the corresponding elements of $C$.
7: $\mathbf{lda}$ – Integer Input
On entry: the first dimension of the array a as declared in the (sub)program from which f01bvf is called.
Constraint:
${\mathbf{lda}}\ge {\mathbf{ma1}}$.
8: $\mathbf{b}({\mathbf{ldb}},{\mathbf{n}})$ – Real (Kind=nag_wp) array Input/Output
On entry: the elements of the decomposition of the matrix $B$ as returned by f01buf.
On exit: the elements of b will have been permuted.
9: $\mathbf{ldb}$ – Integer Input
On entry: the first dimension of the array b as declared in the (sub)program from which f01bvf is called.
Constraint:
${\mathbf{ldb}}\ge {\mathbf{mb1}}$.
10: $\mathbf{v}({\mathbf{ldv}},{\mathbf{m3}})$ – Real (Kind=nag_wp) array Workspace
11: $\mathbf{ldv}$ – Integer Input
On entry: the first dimension of the array v as declared in the (sub)program from which f01bvf is called.
Constraint:
${\mathbf{ldv}}\ge {m}_{A}+{m}_{B}$.
12: $\mathbf{w}\left({\mathbf{m3}}\right)$ – Real (Kind=nag_wp) array Workspace
13: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $\mathrm{-1}$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $\mathrm{-1}$ means that an error message is printed while a value of $1$ means that it is not.
If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $\mathrm{-1}$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{mb1}}=\langle\mathit{value}\rangle$ and ${\mathbf{ma1}}=\langle\mathit{value}\rangle$.
Constraint: ${\mathbf{mb1}}\le {\mathbf{ma1}}$.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
In general the computed system is exactly congruent to a problem $(A+E)x=\lambda (B+F)x$, where $\Vert E\Vert $ and $\Vert F\Vert $ are of the order of $\epsilon \kappa \left(B\right)\Vert A\Vert $ and $\epsilon \kappa \left(B\right)\Vert B\Vert $ respectively, where $\kappa \left(B\right)$ is the condition number of $B$ with respect to inversion and $\epsilon $ is the machine precision. This means that when $B$ is positive definite but not well-conditioned with respect to inversion, the method, which effectively involves the inversion of $B$, may lead to a severe loss of accuracy in well-conditioned eigenvalues.
8 Parallelism and Performance
f01bvf is not threaded in any implementation.
9 Further Comments
The time taken by f01bvf is approximately proportional to ${n}^{2}{m}_{B}^{2}$ and increases with the distance of $k$ from $n/2$; e.g., $k=n/4$ and $k=3n/4$ take approximately $50\%$ longer.
When $B$ is positive definite and well-conditioned with respect to inversion, the generalized symmetric eigenproblem can be reduced to the standard symmetric problem $Py=\lambda y$ where $P={L}^{-1}A{L}^{-\mathrm{T}}$ and $B=L{L}^{\mathrm{T}}$, the Cholesky factorization.
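As an illustration of this reduction (a small dense sketch in Python with NumPy, not the banded algorithm used by f01bvf):

```python
import numpy as np

# Sketch: reduce A x = lambda B x to the standard problem P y = lambda y
# with P = L^{-1} A L^{-T}, where B = L L^T is the Cholesky factorization.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M + M.T                      # symmetric A
B = M @ M.T + n * np.eye(n)      # symmetric positive definite B

L = np.linalg.cholesky(B)        # B = L L^T
Linv = np.linalg.inv(L)
P = Linv @ A @ Linv.T            # symmetric standard-form matrix

lam_std = np.sort(np.linalg.eigvalsh(P))
# The generalized eigenvalues of (A, B) are the eigenvalues of B^{-1} A,
# which is similar to P, so the two spectra agree.
lam_gen = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
```

Note that $P$ is in general full even when $A$ and $B$ are banded, which is exactly the storage problem the following paragraphs address.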
When $A$ and $B$ are of band form, especially if the bandwidth is small compared with the order of the matrices, storage considerations may rule out the possibility of working with $P$ since it will in general be a full matrix. However, for any factorization of the form $B=S{S}^{\mathrm{T}}$, the generalized symmetric problem reduces to the standard form
$$({S}^{-1}A{S}^{-\mathrm{T}})({S}^{\mathrm{T}}x)=\lambda ({S}^{\mathrm{T}}x)$$
and there does exist a factorization such that ${S}^{-1}A{S}^{-\mathrm{T}}$ is still of band form (see Crawford (1973)). Writing
$$C={S}^{-1}A{S}^{-\mathrm{T}}\text{\hspace{1em} and \hspace{1em}}y={S}^{\mathrm{T}}x$$
the standard form is $Cy=\lambda y$ and the bandwidth of $C$ is the maximum bandwidth of $A$ and $B$.
Each stage in the transformation consists of two phases. The first reduces a leading principal sub-matrix of $B$ to the identity matrix and this introduces nonzero elements outside the band of $A$. In the second, further transformations are applied which leave the reduced part of $B$ unaltered and drive the extra elements upwards and off the top left corner of $A$. Alternatively, $B$ may be reduced to the identity matrix starting at the bottom right-hand corner and the extra elements introduced in $A$ can be driven downwards.
The advantage of the $ULD{L}^{\mathrm{T}}{U}^{\mathrm{T}}$ decomposition of $B$ is that no extra elements have to be pushed over the whole length of $A$. If $k$ is taken as approximately $n/2$, the shifting is limited to halfway. At each stage the size of the triangular bumps produced in $A$ depends on the number of rows and columns of $B$ which are eliminated in the first phase and on the bandwidth of $B$. The number of rows and columns over which these triangles are moved at each step in the second phase is equal to the bandwidth of $A$.
In this routine, $A$ is defined as being at least as wide as $B$ (${m}_{A}\ge {m}_{B}$), and the array a must be filled out with zeros where necessary, since it is overwritten with $C$. The number of rows and columns of $B$ which are effectively eliminated at each stage is ${m}_{A}$.
10 Example
This example finds the three smallest eigenvalues of $Ax=\lambda Bx$, where