NAG Toolbox: nag_sparse_real_symm_basic_setup (f11gd)
Purpose
nag_sparse_real_symm_basic_setup (f11gd) is a setup function, the first in a suite of three functions for the iterative solution of a symmetric system of simultaneous linear equations.
nag_sparse_real_symm_basic_setup (f11gd) must be called before the iterative solver,
nag_sparse_real_symm_basic_solver (f11ge). The third function in the suite,
nag_sparse_real_symm_basic_diag (f11gf), can be used to return additional information about the computation.
These three functions are suitable for the solution of large sparse symmetric systems of equations.
Syntax
[lwreq, work, ifail] = f11gd(method, precon, n, tol, maxitn, anorm, sigmax, maxits, monit, 'sigcmp', sigcmp, 'norm_p', norm_p, 'weight', weight, 'iterm', iterm, 'sigtol', sigtol)
[lwreq, work, ifail] = nag_sparse_real_symm_basic_setup(method, precon, n, tol, maxitn, anorm, sigmax, maxits, monit, 'sigcmp', sigcmp, 'norm_p', norm_p, 'weight', weight, 'iterm', iterm, 'sigtol', sigtol)
Description
The suite consisting of the functions
nag_sparse_real_symm_basic_setup (f11gd),
nag_sparse_real_symm_basic_solver (f11ge) and
nag_sparse_real_symm_basic_diag (f11gf) is designed to solve the symmetric system of simultaneous linear equations
$Ax=b$ of order
$n$, where
$n$ is large and the matrix of the coefficients
$A$ is sparse.
nag_sparse_real_symm_basic_setup (f11gd) is a setup function which must be called before
nag_sparse_real_symm_basic_solver (f11ge), the iterative solver. The third function in the suite,
nag_sparse_real_symm_basic_diag (f11gf) can be used to return additional information about the computation. One of the following methods can be used:
1. 
Conjugate Gradient Method (CG)
For this method (see Hestenes and Stiefel (1952), Golub and Van Loan (1996), Barrett et al. (1994) and Dias da Cunha and Hopkins (1994)), the matrix $A$ should ideally be positive definite. The application of the Conjugate Gradient method to indefinite matrices may lead to failure or to lack of convergence. 
2. 
Lanczos Method (SYMMLQ)
This method, based upon the algorithm SYMMLQ (see Paige and Saunders (1975) and Barrett et al. (1994)), is suitable for both positive definite and indefinite matrices. It is more robust than the Conjugate Gradient method but less efficient when $A$ is positive definite. 
3. 
Minimum Residual Method (MINRES)
This method may be used when the matrix is indefinite. It seeks to reduce the norm of the residual at each iteration and often takes fewer iterations than the other methods. It does however require slightly more memory. 
The CG and SYMMLQ methods start from the residual
${r}_{0}=b-A{x}_{0}$, where
${x}_{0}$ is an initial estimate for the solution (often
${x}_{0}=0$), and generate an orthogonal basis for the Krylov subspace
$\mathrm{span}\left\{{A}^{\mathit{k}}{r}_{0}\right\}$, for
$\mathit{k}=0,1,\dots $, by means of three-term recurrence relations (see
Golub and Van Loan (1996)). A sequence of symmetric tridiagonal matrices
$\left\{{T}_{k}\right\}$ is also generated. Here and in the following, the index
$k$ denotes the iteration count. The resulting symmetric tridiagonal systems of equations are usually more easily solved than the original problem. A sequence of solution iterates
$\left\{{x}_{k}\right\}$ is thus generated such that the sequence of the norms of the residuals
$\left\{\Vert {r}_{k}\Vert \right\}$ converges to a required tolerance. Note that, in general, the convergence is not monotonic.
In exact arithmetic, after $n$ iterations, this process is equivalent to an orthogonal reduction of $A$ to symmetric tridiagonal form, ${T}_{n}={Q}^{\mathrm{T}}AQ$; the solution ${x}_{n}$ would thus achieve exact convergence. In finite-precision arithmetic, cancellation and round-off errors accumulate causing loss of orthogonality. These methods must therefore be viewed as genuinely iterative methods, able to converge to a solution within a prescribed tolerance.
The orthogonal basis is not formed explicitly in either method. The basic difference between the Conjugate Gradient and Lanczos methods lies in the method of solution of the resulting symmetric tridiagonal systems of equations: the conjugate gradient method is equivalent to carrying out an $LD{L}^{\mathrm{T}}$ (Cholesky) factorization whereas the Lanczos method (SYMMLQ) uses an $LQ$ factorization.
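The conjugate gradient recurrence outlined above can be sketched in a few lines. The following is an illustrative plain-Python version for a small dense symmetric positive definite system; it is not the NAG implementation (which exploits sparsity and applies the preconditioning and termination criteria described in this document), and the test system is hypothetical.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg(A, b, tol=1e-10, maxitn=100):
    """Classical conjugate gradient iteration for symmetric
    positive definite A, built on a three-term recurrence."""
    n = len(b)
    x = [0.0] * n                      # initial estimate x0 = 0
    r = b[:]                           # r0 = b - A*x0 = b
    p = r[:]
    rho = dot(r, r)
    for _ in range(maxitn):
        if rho ** 0.5 <= tol:          # ||r_k||_2 below tolerance
            break
        q = matvec(A, p)
        alpha = rho / dot(p, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rho_new = dot(r, r)
        p = [ri + (rho_new / rho) * pi for ri, pi in zip(r, p)]
        rho = rho_new
    return x

# Small hypothetical SPD test system; exact solution is [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

In exact arithmetic this 2-by-2 system converges in at most two iterations, illustrating the finite-termination property discussed above.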
Faster convergence for all the methods can be achieved using a
preconditioner (see
Golub and Van Loan (1996) and
Barrett et al. (1994)). A preconditioner maps the original system of equations onto a different system, say
$$\bar{A}\bar{x}=\bar{b}\text{,}\tag{1}$$
with, hopefully, better characteristics with respect to its speed of convergence: for example, the condition number of the matrix of the coefficients can be improved or eigenvalues in its spectrum can be made to coalesce. An orthogonal basis for the Krylov subspace
$\mathrm{span}\left\{{\bar{A}}^{\mathit{k}}{\bar{r}}_{0}\right\}$, for
$\mathit{k}=0,1,\dots $, is generated and the solution proceeds as outlined above. The algorithms used are such that the solution and residual iterates of the original system are produced, not their preconditioned counterparts. Note that an unsuitable preconditioner or no preconditioning at all may result in a very slow rate, or lack, of convergence. However, preconditioning involves a trade-off between the reduction in the number of iterations required for convergence and the additional computational costs per iteration. Also, setting up a preconditioner may involve non-negligible overheads.
A preconditioner must be
symmetric and positive definite, i.e., representable by
$M=E{E}^{\mathrm{T}}$, where
$M$ is nonsingular, and such that
$\bar{A}={E}^{-1}A{E}^{-\mathrm{T}}\sim {I}_{n}$ in
(1), where
${I}_{n}$ is the identity matrix of order
$n$. Also, we can define
$\bar{r}={E}^{-1}r$ and
$\bar{x}={E}^{\mathrm{T}}x$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products
$v=Au$ and to solve the preconditioning equations
$Mv=u$ are required, that is, explicit information about
$M$,
$E$ or their inverses is not required at any stage.
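In code, the point about formal definitions means the iteration can be written entirely against two callbacks: one computing $v=Au$ and one solving $Mv=u$; $M$ and $E$ are never formed. The following is a hypothetical illustrative sketch of preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner, not NAG's routine; all names and the test system are assumptions for illustration.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(matvec, precsolve, b, tol=1e-12, maxitn=100):
    """Preconditioned CG: only the callbacks matvec (v = A*u) and
    precsolve (solve M*v = u) are needed; M is never formed."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                           # r0 = b - A*x0 with x0 = 0
    z = precsolve(r)                   # solve M*z = r
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxitn):
        if dot(r, r) ** 0.5 <= tol:
            break
        q = matvec(p)                  # the only use of A
        alpha = rz / dot(p, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        z = precsolve(r)
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Hypothetical SPD test system; exact solution is [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(lambda v: [dot(row, v) for row in A],
        lambda r: [ri / A[i][i] for i, ri in enumerate(r)], b)
```

Note that the solution and residual iterates produced are those of the original system, mirroring the behaviour described above.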
The first termination criterion
$${\Vert {r}_{k}\Vert}_{p}\le \tau \left({\Vert b\Vert}_{p}+{\Vert A\Vert}_{p}\phantom{\rule{0.25em}{0ex}}{\Vert {x}_{k}\Vert}_{p}\right)\tag{2}$$
is available for both conjugate gradient and Lanczos (SYMMLQ) methods. In
(2),
$p=1,\infty \text{ or }2$ and
$\tau $ denotes a user-specified tolerance subject to
$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(10,\sqrt{n}\right)\epsilon \le \tau <1$, where
$\epsilon $ is the
machine precision. Facilities are provided for estimating
${\Vert A\Vert}_{1}={\Vert A\Vert}_{\infty}$, the norm of the matrix of the coefficients used in
(2), when it is not known in advance, by applying Higham's method (see
Higham (1988)). Note that
${\Vert A\Vert}_{2}$ cannot be estimated internally. This criterion uses an error bound derived from
backward error analysis to ensure that the computed solution is the exact solution of a problem as close to the original as the termination tolerance requires. Termination criteria employing bounds derived from
forward error analysis could be used, but any such criteria would require information about the condition number
$\kappa \left(A\right)$ which is not easily obtainable.
The second termination criterion
$${\Vert {\bar{r}}_{k}\Vert}_{2}\le \tau \phantom{\rule{0.25em}{0ex}}\mathrm{max}\left(1.0,{\Vert b\Vert}_{2}/{\Vert {r}_{0}\Vert}_{2}\right)\left({\Vert {\bar{r}}_{0}\Vert}_{2}+{\sigma}_{1}\left(\bar{A}\right){\Vert {\bar{x}}_{k}\Vert}_{2}\right)\tag{3}$$
is available only for the Lanczos method (SYMMLQ). In
(3),
${\sigma}_{1}\left(\bar{A}\right)={\Vert \bar{A}\Vert}_{2}$ is the largest singular value of the (preconditioned) iteration matrix
$\bar{A}$. This termination criterion monitors the progress of the solution of the preconditioned system of equations and is less expensive to apply than criterion
(2). When
${\sigma}_{1}\left(\bar{A}\right)$ is not supplied, facilities are provided for its estimation by
${\sigma}_{1}\left(\bar{A}\right)\sim {\displaystyle \underset{k}{\mathrm{max}}}\phantom{\rule{0.25em}{0ex}}{\sigma}_{1}\left({T}_{k}\right)$. The interlacing property
${\sigma}_{1}\left({T}_{k-1}\right)\le {\sigma}_{1}\left({T}_{k}\right)$ and Gerschgorin's theorem provide lower and upper bounds from which
${\sigma}_{1}\left({T}_{k}\right)$ can be easily computed by bisection. Alternatively, the less expensive estimate
${\sigma}_{1}\left(\bar{A}\right)\sim {\displaystyle \underset{k}{\mathrm{max}}}\phantom{\rule{0.25em}{0ex}}{\Vert {T}_{k}\Vert}_{1}$ can be used, where
${\sigma}_{1}\left(\bar{A}\right)\le {\Vert {T}_{k}\Vert}_{1}$ by Gerschgorin's theorem. Note that only order of magnitude estimates are required by the termination criterion.
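The bisection estimate described above can be sketched as follows: Gerschgorin's theorem brackets the spectrum of the symmetric tridiagonal matrix ${T}_{k}$, a Sturm-sequence count locates the extreme eigenvalues by bisection, and ${\sigma}_{1}\left({T}_{k}\right)=\mathrm{max}\left(|{\lambda}_{\mathrm{min}}|,|{\lambda}_{\mathrm{max}}|\right)$. This is an illustrative stand-alone sketch with hypothetical function names, not the internals of the NAG solver.

```python
def eig_count(d, e, x):
    # Sturm-sequence count: number of eigenvalues of the symmetric
    # tridiagonal matrix with diagonal d and off-diagonal e that are
    # strictly less than x.
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300           # guard against exact breakdown
        if q < 0.0:
            count += 1
    return count

def extreme_eig(d, e, want, sigtol=1e-10):
    # Bisect inside the Gerschgorin bounds for the smallest x with
    # eig_count(x) >= want: want = n gives lambda_max, want = 1 lambda_min.
    n = len(d)
    def radius(i):
        left = abs(e[i - 1]) if i > 0 else 0.0
        right = abs(e[i]) if i < n - 1 else 0.0
        return left + right
    lo = min(d[i] - radius(i) for i in range(n))
    hi = max(d[i] + radius(i) for i in range(n))
    while hi - lo > sigtol * max(1.0, abs(hi)):
        mid = 0.5 * (lo + hi)
        if eig_count(d, e, mid) >= want:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sigma1(d, e):
    # sigma_1 of a symmetric matrix is max(|lambda_min|, |lambda_max|)
    return max(abs(extreme_eig(d, e, len(d))), abs(extreme_eig(d, e, 1)))

# Hypothetical T_k with diagonal [2, 2, 2] and off-diagonal [1, 1]:
# its eigenvalues are 2 - sqrt(2), 2 and 2 + sqrt(2)
s = sigma1([2.0, 2.0, 2.0], [1.0, 1.0])
```

Only order-of-magnitude accuracy is needed by the termination criterion, so a loose tolerance (like the routine's default sigtol of 0.01) keeps the bisection cheap.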
Termination criterion
(2) is the recommended choice, despite its (small) additional costs per iteration when using the Lanczos method (SYMMLQ). Also, if the norm of the initial estimate is much larger than the norm of the solution, that is, if
$\Vert {x}_{0}\Vert \gg \Vert x\Vert $, a dramatic loss of significant digits could result in complete lack of convergence. The use of criterion
(2) will enable the detection of such a situation, and the iteration will be restarted at a suitable point. No such restart facilities are provided for criterion
(3).
Optionally, a vector
$w$ of userspecified weights can be used in the computation of the vector norms in termination criterion
(2), i.e.,
${{\Vert v\Vert}_{p}}^{\left(w\right)}={\Vert {v}^{\left(w\right)}\Vert}_{p}$, where
${\left({v}^{\left(w\right)}\right)}_{\mathit{i}}={w}_{\mathit{i}}{v}_{\mathit{i}}$, for
$\mathit{i}=1,2,\dots ,n$. Note that the use of weights increases the computational costs.
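The weighted norm above is straightforward to compute; the following minimal helper is illustrative only (a hypothetical stand-in, not a NAG routine).

```python
import math

def weighted_norm(v, w, p):
    """||v||_p^(w) = ||v^(w)||_p where v^(w)_i = w_i * v_i,
    for p in {1, 2}; any other p is treated as the infinity norm."""
    vw = [wi * vi for wi, vi in zip(w, v)]
    if p == 1:
        return sum(abs(x) for x in vw)
    if p == 2:
        return math.sqrt(sum(x * x for x in vw))
    return max(abs(x) for x in vw)

# Hypothetical data: weighted components are [3.0, -2.0, 2.0]
v = [3.0, -4.0, 1.0]
w = [1.0, 0.5, 2.0]
n1 = weighted_norm(v, w, 1)
n2 = weighted_norm(v, w, 2)
ni = weighted_norm(v, w, 'inf')
```

Each residual-norm evaluation then costs an extra $n$ multiplications, which is the additional computational cost noted above.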
The MINRES algorithm terminates when the residual $F$ of the preconditioned system satisfies ${\Vert F\Vert}_{2}\le \tau \times {\Vert \bar{A}\Vert}_{2}\times {\Vert {x}_{k}\Vert}_{2}$, where $\bar{A}$ is the preconditioned matrix.
The termination criteria discussed are not robust in the presence of a non-trivial null space of
$A$, i.e., when
$A$ is singular. It is then possible for
${\Vert {x}_{k}\Vert}_{p}$ to grow without limit, spuriously satisfying the termination criterion. If singularity is suspected, more robust functions can be found in
Chapter E04.
The sequence of calls to the functions comprising the suite is enforced: first, the setup function
nag_sparse_real_symm_basic_setup (f11gd) must be called, followed by the solver
nag_sparse_real_symm_basic_solver (f11ge). The diagnostic function
nag_sparse_real_symm_basic_diag (f11gf) can be called either when
nag_sparse_real_symm_basic_solver (f11ge) is carrying out a monitoring step or after
nag_sparse_real_symm_basic_solver (f11ge) has completed its tasks. Incorrect sequencing will raise an error condition.
References
Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods SIAM, Philadelphia
Dias da Cunha R and Hopkins T (1994) PIM 1.1 — the parallel iterative method package for systems of linear equations user's guide — Fortran 77 version Technical Report Computing Laboratory, University of Kent at Canterbury, Kent, UK
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Hestenes M and Stiefel E (1952) Methods of conjugate gradients for solving linear systems J. Res. Nat. Bur. Stand. 49 409–436
Higham N J (1988) FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396
Paige C C and Saunders M A (1975) Solution of sparse indefinite systems of linear equations SIAM J. Numer. Anal. 12 617–629
Parameters
Compulsory Input Parameters
 1:
$\mathrm{method}$ – string

The iterative method to be used.
 ${\mathbf{method}}=\text{'CG'}$
 Conjugate gradient method (CG).
 ${\mathbf{method}}=\text{'SYMMLQ'}$
 Lanczos method (SYMMLQ).
 ${\mathbf{method}}=\text{'MINRES'}$
 Minimum residual method (MINRES).
Constraint:
${\mathbf{method}}=\text{'CG'}$, $\text{'SYMMLQ'}$ or $\text{'MINRES'}$.
 2:
$\mathrm{precon}$ – string (length ≥ 1)

Determines whether preconditioning is used.
 ${\mathbf{precon}}=\text{'N'}$
 No preconditioning.
 ${\mathbf{precon}}=\text{'P'}$
 Preconditioning.
Constraint:
${\mathbf{precon}}=\text{'N'}$ or $\text{'P'}$.
 3:
$\mathrm{n}$ – int64int32nag_int scalar

$n$, the order of the matrix $A$.
Constraint:
${\mathbf{n}}>0$.
 4:
$\mathrm{tol}$ – double scalar

The tolerance
$\tau $ for the termination criterion.
If
${\mathbf{tol}}\le 0.0$,
$\tau =\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(\sqrt{\epsilon},\sqrt{n}\epsilon \right)$ is used, where
$\epsilon $ is the
machine precision.
Otherwise $\tau =\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{tol}},10\epsilon ,\sqrt{n}\epsilon \right)$ is used.
Constraint:
${\mathbf{tol}}<1.0$.
 5:
$\mathrm{maxitn}$ – int64int32nag_int scalar

The maximum number of iterations.
Constraint:
${\mathbf{maxitn}}>0$.
 6:
$\mathrm{anorm}$ – double scalar

If
${\mathbf{anorm}}>0.0$, the value of
${\Vert A\Vert}_{p}$ to be used in the termination criterion
(2) (
${\mathbf{iterm}}=1$).
If
${\mathbf{anorm}}\le 0.0$,
${\mathbf{iterm}}=1$ and
${\mathbf{norm\_p}}=\text{'1'}$ or
$\text{'I'}$, then
${\Vert A\Vert}_{1}={\Vert A\Vert}_{\infty}$ is estimated internally by
nag_sparse_real_symm_basic_solver (f11ge).
If
${\mathbf{iterm}}=2$, then
anorm is not referenced.
It has no effect if ${\mathbf{method}}=\text{'MINRES'}$.
Constraint:
if ${\mathbf{iterm}}=1$ and ${\mathbf{norm\_p}}=\text{'2'}$, ${\mathbf{anorm}}>0.0$.
 7:
$\mathrm{sigmax}$ – double scalar

If
${\mathbf{sigmax}}>0.0$, the value of
${\sigma}_{1}\left(\bar{A}\right)={\Vert {E}^{-1}A{E}^{-\mathrm{T}}\Vert}_{2}$.
If
${\mathbf{sigmax}}\le 0.0$,
${\sigma}_{1}\left(\bar{A}\right)$ is estimated by
nag_sparse_real_symm_basic_solver (f11ge) when either
${\mathbf{sigcmp}}=\text{'S'}$ or termination criterion
(3) (
${\mathbf{iterm}}=2$) is employed, though it will be used only in the latter case.
Otherwise, or if
${\mathbf{method}}=\text{'MINRES'}$,
sigmax is not referenced.
 8:
$\mathrm{maxits}$ – int64int32nag_int scalar
Suggested value:
${\mathbf{maxits}}=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(10,n\right)$ when
sigtol is of the order of its default value
$\left(0.01\right)$.
The maximum iteration number
$k={\mathbf{maxits}}$ for which
${\sigma}_{1}\left({T}_{k}\right)$ is computed by bisection (see also
Description). If
${\mathbf{sigcmp}}=\text{'N'}$ or
${\mathbf{sigmax}}>0.0$, or if
${\mathbf{method}}=\text{'MINRES'}$, then
maxits is not referenced.
Constraint:
if ${\mathbf{sigcmp}}=\text{'S'}$ and ${\mathbf{sigmax}}\le 0.0$, $1\le {\mathbf{maxits}}\le {\mathbf{maxitn}}$.
 9:
$\mathrm{monit}$ – int64int32nag_int scalar

If
${\mathbf{monit}}>0$, the frequency at which a monitoring step is executed by
nag_sparse_real_symm_basic_solver (f11ge): the current solution and residual iterates will be returned by
nag_sparse_real_symm_basic_solver (f11ge) and a call to
nag_sparse_real_symm_basic_diag (f11gf) made possible every
monit iterations, starting from the (
monit)th. Otherwise, no monitoring takes place.
There are some additional computational costs involved in monitoring the solution and residual vectors when the Lanczos method (SYMMLQ) is used.
Constraint:
${\mathbf{monit}}\le {\mathbf{maxitn}}$.
Optional Input Parameters
 1:
$\mathrm{sigcmp}$ – string (length ≥ 1)
Default:
$\text{'N'}$
Determines whether an estimate of
${\sigma}_{1}\left(\bar{A}\right)={\Vert {E}^{-1}A{E}^{-\mathrm{T}}\Vert}_{2}$, the largest singular value of the preconditioned matrix of the coefficients, is to be computed using the bisection method on the sequence of tridiagonal matrices
$\left\{{T}_{k}\right\}$ generated during the iteration. Note that
$\bar{A}=A$ when a preconditioner is not used.
If
${\mathbf{sigmax}}>0.0$ (see below), i.e., when
${\sigma}_{1}\left(\bar{A}\right)$ is supplied, the value of
sigcmp is ignored.
 ${\mathbf{sigcmp}}=\text{'S'}$
 ${\sigma}_{1}\left(\bar{A}\right)$ is to be computed using the bisection method.
 ${\mathbf{sigcmp}}=\text{'N'}$
 The bisection method is not used.
If the termination criterion
(3) is used, requiring
${\sigma}_{1}\left(\bar{A}\right)$, an inexpensive estimate is computed and used (see
Description).
It is not used if ${\mathbf{method}}=\text{'MINRES'}$.
Constraint:
${\mathbf{sigcmp}}=\text{'S'}$ or $\text{'N'}$.
 2:
$\mathrm{norm\_p}$ – string (length ≥ 1)
Suggested value:
 if ${\mathbf{iterm}}=1$, ${\mathbf{norm\_p}}=\text{'I'}$;
 if ${\mathbf{iterm}}=2$, ${\mathbf{norm\_p}}=\text{'2'}$.
Default:
 if ${\mathbf{iterm}}=1$, $\text{'I'}$;
 otherwise $\text{'2'}$.
If
${\mathbf{method}}=\text{'CG'}$ or
$\text{'SYMMLQ'}$,
norm_p defines the matrix and vector norm to be used in the termination criteria.
 ${\mathbf{norm\_p}}=\text{'1'}$
 Use the ${l}_{1}$ norm.
 ${\mathbf{norm\_p}}=\text{'I'}$
 Use the ${l}_{\infty}$ norm.
 ${\mathbf{norm\_p}}=\text{'2'}$
 Use the ${l}_{2}$ norm.
It has no effect if ${\mathbf{method}}=\text{'MINRES'}$.
Constraints:
 if ${\mathbf{iterm}}=1$, ${\mathbf{norm\_p}}=\text{'1'}$, $\text{'I'}$ or $\text{'2'}$;
 if ${\mathbf{iterm}}=2$, ${\mathbf{norm\_p}}=\text{'2'}$.
 3:
$\mathrm{weight}$ – string (length ≥ 1)
Default:
$\text{'N'}$
Specifies whether a vector
$w$ of usersupplied weights is to be used in the vector norms used in the computation of termination criterion
(2) (
${\mathbf{iterm}}=1$):
${{\Vert v\Vert}_{p}}^{\left(w\right)}={\Vert {v}^{\left(w\right)}\Vert}_{p}$, where
${v}_{\mathit{i}}^{\left(w\right)}={w}_{\mathit{i}}{v}_{\mathit{i}}$, for
$\mathit{i}=1,2,\dots ,n$. The suffix
$p=1,2,\infty $ denotes the vector norm used, as specified by the argument
norm_p. Note that weights cannot be used when
${\mathbf{iterm}}=2$, i.e., when criterion
(3) is used.
 ${\mathbf{weight}}=\text{'W'}$
 Usersupplied weights are to be used and must be supplied on initial entry to nag_sparse_real_symm_basic_solver (f11ge).
 ${\mathbf{weight}}=\text{'N'}$
 All weights are implicitly set equal to one. Weights do not need to be supplied on initial entry to nag_sparse_real_symm_basic_solver (f11ge).
It has no effect if ${\mathbf{method}}=\text{'MINRES'}$.
Constraints:
 if ${\mathbf{iterm}}=1$, ${\mathbf{weight}}=\text{'W'}$ or $\text{'N'}$;
 if ${\mathbf{iterm}}=2$, ${\mathbf{weight}}=\text{'N'}$.
 4:
$\mathrm{iterm}$ – int64int32nag_int scalar
Default:
$1$
Defines the termination criterion to be used.
 ${\mathbf{iterm}}=1$
 Use the termination criterion defined in (2) (both conjugate gradient and Lanczos (SYMMLQ) methods).
 ${\mathbf{iterm}}=2$
 Use the termination criterion defined in (3) (Lanczos method (SYMMLQ) only).
It has no effect if ${\mathbf{method}}=\text{'MINRES'}$.
Constraints:
 if ${\mathbf{method}}=\text{'CG'}$, ${\mathbf{iterm}}=1$;
 if ${\mathbf{method}}=\text{'SYMMLQ'}$, ${\mathbf{iterm}}=1$ or $2$.
 5:
$\mathrm{sigtol}$ – double scalar
Suggested value:
${\mathbf{sigtol}}=0.01$ should be sufficient in most cases.
Default:
$0.01$
The tolerance used in assessing the convergence of the estimate of
${\sigma}_{1}\left(\bar{A}\right)={\Vert \bar{A}\Vert}_{2}$ when the bisection method is used.
If ${\mathbf{sigtol}}\le 0.0$, the default value ${\mathbf{sigtol}}=0.01$ is used. The actual value used is $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{sigtol}},\epsilon \right)$.
If
${\mathbf{sigcmp}}=\text{'N'}$ or
${\mathbf{sigmax}}>0.0$, then
sigtol is not referenced.
It has no effect if ${\mathbf{method}}=\text{'MINRES'}$.
Constraint:
if ${\mathbf{sigcmp}}=\text{'S'}$ and ${\mathbf{sigmax}}\le 0.0$, ${\mathbf{sigtol}}<1.0$.
Output Parameters
 1:
$\mathrm{lwreq}$ – int64int32nag_int scalar

The minimum amount of workspace required by
nag_sparse_real_symm_basic_solver (f11ge). (See also
Arguments in
nag_sparse_real_symm_basic_solver (f11ge).)
 2:
$\mathrm{work}\left(\mathit{lwork}\right)$ – double array

$\mathit{lwork}=120$.
The array
work is initialized by
nag_sparse_real_symm_basic_setup (f11gd). It must
not be modified before calling the next function in the suite, namely
nag_sparse_real_symm_basic_solver (f11ge).
 3:
$\mathrm{ifail}$ – int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see
Error Indicators and Warnings).
Error Indicators and Warnings
Errors or warnings detected by the function:
 ${\mathbf{ifail}}=i$
If ${\mathbf{ifail}}=i$, parameter $i$ had an illegal value on entry. The parameters are numbered as follows: 1: method, 2: precon, 3: sigcmp, 4: norm_p, 5: weight, 6: iterm, 7: n, 8: tol, 9: maxitn, 10: anorm, 11: sigmax, 12: sigtol, 13: maxits, 14: monit, 15: lwreq, 16: work, 17: lwork, 18: ifail.
It is possible that
ifail refers to a parameter that is omitted from the MATLAB interface. This usually indicates that an error in one of the other input parameters has caused an incorrect value to be inferred.
 ${\mathbf{ifail}}=1$

nag_sparse_real_symm_basic_setup (f11gd) has been called out of sequence.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
Accuracy
Not applicable.
Further Comments
When
${\sigma}_{1}\left(\bar{A}\right)$ is not supplied (
${\mathbf{sigmax}}\le 0.0$) but it is required, it is estimated by
nag_sparse_real_symm_basic_solver (f11ge) using either of the two methods described in
Description, as specified by the argument
sigcmp. In particular, if
${\mathbf{sigcmp}}=\text{'S'}$, then the computation of
${\sigma}_{1}\left(\bar{A}\right)$ is deemed to have converged when three successive values of
${\sigma}_{1}\left({T}_{k}\right)$ agree, in a relative sense, to within the tolerance
sigtol.
The computation of
${\sigma}_{1}\left(\bar{A}\right)$ is also terminated when the iteration count exceeds the maximum value allowed, i.e.,
$k\ge {\mathbf{maxits}}$.
Bisection is increasingly expensive with increasing iteration count. A reasonably large value of
sigtol, of the order of the suggested value, is recommended and an excessive value of
maxits should be avoided. Under these conditions,
${\sigma}_{1}\left(\stackrel{}{A}\right)$ usually converges within very few iterations.
Example
This example solves a symmetric system of simultaneous linear equations using the conjugate gradient method, where the matrix of the coefficients
$A$, has a random sparsity pattern. An incomplete Cholesky preconditioner is used (
nag_sparse_real_symm_precon_ichol (f11ja) and
nag_sparse_real_symm_precon_ichol_solve (f11jb)).
Open in the MATLAB editor:
f11gd_example
function f11gd_example
fprintf('f11gd example results\n\n');

% Define the sparse symmetric matrix A in symmetric coordinate storage
n  = int64(7);
nz = int64(16);
a    = zeros(1000,1);
irow = zeros(1000,1,'int64');
icol = irow;
a(1:16)    = [4 1 5 2 2 3 -1 1 4 1 -2 3 2 -1 -2 5];
irow(1:16) = [1 2 2 3 4 4 5 5 5 6 6 6 7 7 7 7];
icol(1:16) = [1 1 2 3 2 4 1 4 5 2 5 6 1 2 3 7];

% Compute the incomplete Cholesky preconditioner
lfill  = int64(0);
dtol   = 0;
mic    = 'N';
dscale = 0;
ipiv   = zeros(n, 1, 'int64');
[a, irow, icol, ipiv, istr, nnzc, npivm, ifail] = ...
    f11ja( ...
           nz, a, irow, icol, lfill, dtol, mic, dscale, ipiv);

% Set up the conjugate gradient solver
method = 'CG';
precon = 'P';
tol    = 1e-6;
maxitn = int64(20);
anorm  = 0;
sigmax = 0;
maxits = int64(7);
monit  = int64(2);
[lwreq, work, ifail] = ...
    f11gd(method, precon, n, tol, maxitn, anorm, sigmax, ...
          maxits, monit, 'sigcmp', 'S', 'norm_p', '1');

% Reverse communication loop: irevcm = 1 requests v = A*u,
% irevcm = 2 requests a preconditioner solve, irevcm = 3 is a
% monitoring step, and irevcm = 4 signals completion
irevcm = int64(0);
u   = zeros(n,1);
v   = [15; 18; -8; 21; 11; 10; 29];
wgt = zeros(n,1);
while (irevcm ~= 4)
  [irevcm, u, v, work, ifail] = f11ge( ...
      irevcm, u, v, wgt, work);
  if (irevcm == 1)
    [v, ifail] = f11xe( ...
        a, irow, icol, 'N', u, 'nz', nz);
  elseif (irevcm == 2)
    [v, ifail] = f11jb( ...
        a, irow, icol, ipiv, istr, 'N', u);
  elseif (irevcm == 3)
    [itn, stplhs, stprhs, anorm, sigmax, its, sigerr, ifail] = ...
        f11gf(work);
    fprintf('\nMonitoring at iteration number %d\n',itn);
    fprintf('residual norm: %14.4e\n\n', stplhs);
    fprintf('   Solution Vector  Residual Vector\n');
    fprintf('%16.4f %16.4e\n', [u'; v']);
  end
end

[itn, stplhs, stprhs, anorm, sigmax, its, sigerr, ifail] = f11gf(work);
fprintf('\nNumber of iterations for convergence: %d\n', itn);
fprintf('Residual norm: %14.4e\n', stplhs);
fprintf('Right-hand side of termination criterion: %14.4e\n', stprhs);
fprintf('1-norm of matrix A: %14.4e\n', anorm);
fprintf('\n   Solution Vector  Residual Vector\n');
fprintf('%16.4f %12.2e\n', [u'; v']);
f11gd example results

Monitoring at iteration number 2
residual norm: 1.9938e+00

Solution Vector Residual Vector
0.9632 2.2960e-01
1.9934 2.2254e-01
3.0583 9.5827e-02
4.1453 2.5155e-01
4.8289 1.7160e-01
5.6630 6.7533e-01
7.1062 3.4737e-01

Monitoring at iteration number 4
residual norm: 6.6574e-03

Solution Vector Residual Vector
0.9994 1.0551e-03
2.0011 2.4675e-03
3.0008 1.7116e-05
3.9996 4.4929e-05
4.9991 2.1359e-03
5.9993 8.7482e-04
7.0007 6.2045e-05

Number of iterations for convergence: 5
Residual norm: 2.0428e-14
Right-hand side of termination criterion: 3.9200e-04
1-norm of matrix A: 1.0000e+01

Solution Vector Residual Vector
1.0000 0.00e+00
2.0000 0.00e+00
3.0000 2.66e-15
4.0000 3.55e-15
5.0000 5.33e-15
6.0000 1.78e-15
7.0000 7.11e-15
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015