
# NAG Toolbox: nag_sparse_real_symm_basic_setup (f11gd)

## Purpose

nag_sparse_real_symm_basic_setup (f11gd) is a setup function, the first in a suite of three functions for the iterative solution of a symmetric system of simultaneous linear equations. nag_sparse_real_symm_basic_setup (f11gd) must be called before the iterative solver, nag_sparse_real_symm_basic_solver (f11ge). The third function in the suite, nag_sparse_real_symm_basic_diag (f11gf), can be used to return additional information about the computation.
These three functions are suitable for the solution of large sparse symmetric systems of equations.

## Syntax

[lwreq, work, ifail] = f11gd(method, precon, n, tol, maxitn, anorm, sigmax, maxits, monit, 'sigcmp', sigcmp, 'norm_p', norm_p, 'weight', weight, 'iterm', iterm, 'sigtol', sigtol)
[lwreq, work, ifail] = nag_sparse_real_symm_basic_setup(method, precon, n, tol, maxitn, anorm, sigmax, maxits, monit, 'sigcmp', sigcmp, 'norm_p', norm_p, 'weight', weight, 'iterm', iterm, 'sigtol', sigtol)

## Description

The suite consisting of the functions nag_sparse_real_symm_basic_setup (f11gd), nag_sparse_real_symm_basic_solver (f11ge) and nag_sparse_real_symm_basic_diag (f11gf) is designed to solve the symmetric system of simultaneous linear equations $Ax=b$ of order $n$, where $n$ is large and the matrix of the coefficients $A$ is sparse.
nag_sparse_real_symm_basic_setup (f11gd) is a setup function which must be called before nag_sparse_real_symm_basic_solver (f11ge), the iterative solver. The third function in the suite, nag_sparse_real_symm_basic_diag (f11gf), can be used to return additional information about the computation. One of the following methods can be used:
1. Conjugate Gradient Method (CG)
For this method (see Hestenes and Stiefel (1952), Golub and Van Loan (1996), Barrett et al. (1994) and Dias da Cunha and Hopkins (1994)), the matrix $A$ should ideally be positive definite. The application of the conjugate gradient method to indefinite matrices may lead to failure or to lack of convergence.
2. Lanczos Method (SYMMLQ)
This method, based upon the algorithm SYMMLQ (see Paige and Saunders (1975) and Barrett et al. (1994)), is suitable for both positive definite and indefinite matrices. It is more robust than the conjugate gradient method but less efficient when $A$ is positive definite.
3. Minimum Residual Method (MINRES)
This method may be used when the matrix is indefinite. It seeks to reduce the norm of the residual at each iteration and often takes fewer iterations than the other methods. It does, however, require slightly more memory.
The CG and SYMMLQ methods start from the residual $r_0 = b - Ax_0$, where $x_0$ is an initial estimate for the solution (often $x_0 = 0$), and generate an orthogonal basis for the Krylov subspace $\mathrm{span}\{A^k r_0\}$, for $k = 0, 1, \dots$, by means of three-term recurrence relations (see Golub and Van Loan (1996)). A sequence of symmetric tridiagonal matrices $\{T_k\}$ is also generated. Here and in the following, the index $k$ denotes the iteration count. The resulting symmetric tridiagonal systems of equations are usually more easily solved than the original problem. A sequence of solution iterates $\{x_k\}$ is thus generated such that the sequence of the norms of the residuals $\{\|r_k\|\}$ converges to a required tolerance. Note that, in general, the convergence is not monotonic.
In exact arithmetic, after $n$ iterations, this process is equivalent to an orthogonal reduction of $A$ to symmetric tridiagonal form, $T_n = Q^{\mathrm{T}} A Q$; the solution $x_n$ would thus achieve exact convergence. In finite-precision arithmetic, cancellation and round-off errors accumulate, causing loss of orthogonality. These methods must therefore be viewed as genuinely iterative methods, able to converge to a solution within a prescribed tolerance.
The orthogonal basis is not formed explicitly in either method. The basic difference between the conjugate gradient and Lanczos methods lies in the method of solution of the resulting symmetric tridiagonal systems of equations: the conjugate gradient method is equivalent to carrying out an $LDL^{\mathrm{T}}$ (Cholesky) factorization, whereas the Lanczos method (SYMMLQ) uses an $LQ$ factorization.
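The three-term recurrence is easiest to see in the plain, unpreconditioned conjugate gradient iteration. The following is a minimal NumPy sketch for illustration only: the matrix and right-hand side are made up, and this is not the NAG implementation, which works on sparse storage via reverse communication.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxitn=100):
    """Plain conjugate gradient for a symmetric positive definite A."""
    x = x0.copy()
    r = b - A @ x              # initial residual r0
    p = r.copy()               # first search direction
    rs = r @ r
    for k in range(maxitn):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        # three-term recurrence: new direction from new residual and old direction
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small synthetic SPD system: A = M^T M + 7 I is positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((7, 7))
A = M.T @ M + 7 * np.eye(7)
b = rng.standard_normal(7)
x = cg(A, b, np.zeros(7))
print(np.allclose(A @ x, b))   # True
```

In exact arithmetic the loop would terminate after at most $n = 7$ steps; in floating point the residual test does the stopping instead, as discussed above.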
Faster convergence for all the methods can be achieved using a preconditioner (see Golub and Van Loan (1996) and Barrett et al. (1994)). A preconditioner maps the original system of equations onto a different system, say
 $\bar{A}\bar{x} = \bar{b},$ (1)
with, hopefully, better characteristics with respect to its speed of convergence: for example, the condition number of the matrix of the coefficients can be improved or eigenvalues in its spectrum can be made to coalesce. An orthogonal basis for the Krylov subspace $\mathrm{span}\{\bar{A}^k \bar{r}_0\}$, for $k = 0, 1, \dots$, is generated and the solution proceeds as outlined above. The algorithms used are such that the solution and residual iterates of the original system are produced, not their preconditioned counterparts. Note that an unsuitable preconditioner, or no preconditioning at all, may result in a very slow rate, or lack, of convergence. However, preconditioning involves a trade-off between the reduction in the number of iterations required for convergence and the additional computational costs per iteration. Also, setting up a preconditioner may involve non-negligible overheads.
A preconditioner must be symmetric and positive definite, i.e., representable by $M = EE^{\mathrm{T}}$, where $M$ is nonsingular, and such that $\bar{A} = E^{-1} A E^{-\mathrm{T}} \approx I_n$ in (1), where $I_n$ is the identity matrix of order $n$. Also, we can define $\bar{r} = E^{-1} r$ and $\bar{x} = E^{\mathrm{T}} x$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products $v = Au$ and to solve the preconditioning equations $Mv = u$ are required, that is, explicit information about $M$, $E$ or their inverses is not required at any stage.
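Because only the products $v = Au$ and the solves $Mv = u$ are needed, a preconditioned solver can accept both operations as callables. Below is a hedged NumPy sketch of preconditioned CG using a Jacobi (diagonal) preconditioner $M = \mathrm{diag}(A)$, for which the solve $Mv = u$ is just elementwise division; the function names are illustrative and not part of the NAG interface.

```python
import numpy as np

def pcg(matvec, msolve, b, x0, tol=1e-10, maxitn=100):
    """Preconditioned CG: needs only v = A u (matvec) and the solve M v = u (msolve)."""
    x = x0.copy()
    r = b - matvec(x)
    z = msolve(r)              # preconditioned residual
    p = z.copy()
    rz = r @ z
    for k in range(maxitn):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap        # residual of the ORIGINAL system is maintained
        if np.linalg.norm(r) <= tol:
            break
        z = msolve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Synthetic SPD test system.
rng = np.random.default_rng(1)
G = rng.standard_normal((7, 7))
A = G.T @ G + 7 * np.eye(7)
b = rng.standard_normal(7)
d = np.diag(A)                 # Jacobi preconditioner: M = diag(A)
x = pcg(lambda u: A @ u, lambda u: u / d, b, np.zeros(7))
print(np.allclose(A @ x, b))   # True
```

Note that, as in the text, neither $E$ nor $M^{-1}$ is ever formed explicitly: the iteration touches the preconditioner only through `msolve`.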
The first termination criterion
 $\|r_k\|_p \le \tau \left( \|b\|_p + \|A\|_p \times \|x_k\|_p \right)$ (2)
is available for both conjugate gradient and Lanczos (SYMMLQ) methods. In (2), $p = 1$, $\infty$ or $2$, and $\tau$ denotes a user-specified tolerance subject to $\max(10, \sqrt{n})\,\epsilon \le \tau < 1$, where $\epsilon$ is the machine precision. Facilities are provided for the estimation of the norm of the matrix of the coefficients, $\|A\|_1 = \|A\|_\infty$, used in (2), when this is not known in advance, by applying Higham's method (see Higham (1988)). Note that $\|A\|_2$ cannot be estimated internally. This criterion uses an error bound derived from backward error analysis to ensure that the computed solution is the exact solution of a problem as close to the original as the termination tolerance requires. Termination criteria employing bounds derived from forward error analysis could be used, but any such criteria would require information about the condition number $\kappa(A)$, which is not easily obtainable.
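Once $\|A\|_p$ is known or estimated, criterion (2) is a direct comparison. A small sketch, using dense NumPy matrices for clarity and illustrative names (not the NAG API):

```python
import numpy as np

def criterion2_met(A, b, x, tau, p=np.inf):
    """Backward-error test (2): ||r||_p <= tau * (||b||_p + ||A||_p * ||x||_p)."""
    r = b - A @ x
    # For symmetric A the 1-norm and infinity-norm coincide (row/column sums agree).
    anorm = np.linalg.norm(A, p)
    return np.linalg.norm(r, p) <= tau * (np.linalg.norm(b, p)
                                          + anorm * np.linalg.norm(x, p))

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)
print(criterion2_met(A, b, x, tau=1e-8))            # True: exact solution passes
print(criterion2_met(A, b, np.zeros(2), tau=1e-8))  # False: x = 0 fails
```

The right-hand side of the test grows with $\|x_k\|_p$, which is what makes the criterion a backward-error bound rather than a fixed residual threshold.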
The second termination criterion
 $\|\bar{r}_k\|_2 \le \tau \max\left(1.0, \|b\|_2 / \|r_0\|_2\right) \left( \|\bar{r}_0\|_2 + \sigma_1(\bar{A}) \times \|\Delta \bar{x}_k\|_2 \right)$ (3)
is available only for the Lanczos method (SYMMLQ). In (3), $\sigma_1(\bar{A}) = \|\bar{A}\|_2$ is the largest singular value of the (preconditioned) iteration matrix $\bar{A}$. This termination criterion monitors the progress of the solution of the preconditioned system of equations and is less expensive to apply than criterion (2). When $\sigma_1(\bar{A})$ is not supplied, facilities are provided for its estimation by $\sigma_1(\bar{A}) \approx \max_k \sigma_1(T_k)$. The interlacing property $\sigma_1(T_{k-1}) \le \sigma_1(T_k)$ and Gerschgorin's theorem provide lower and upper bounds from which $\sigma_1(T_k)$ can be easily computed by bisection. Alternatively, the less expensive estimate $\sigma_1(\bar{A}) \approx \max_k \|T_k\|_1$ can be used, where $\sigma_1(\bar{A}) \le \|T_k\|_1$ by Gerschgorin's theorem. Note that only order of magnitude estimates are required by the termination criterion.
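The inexpensive estimate can be sanity-checked numerically: for a symmetric matrix, the one-norm bounds the largest singular value from above. A brief sketch with a synthetic tridiagonal $T$ standing in for one of the $T_k$:

```python
import numpy as np

# For symmetric T, sigma_1(T) = ||T||_2 <= ||T||_1 (Gerschgorin-type bound),
# so max_k ||T_k||_1 gives a cheap order-of-magnitude overestimate of sigma_1(A-bar).
rng = np.random.default_rng(2)
d = rng.standard_normal(8)   # diagonal entries of T
e = rng.standard_normal(7)   # off-diagonal entries of T
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

sigma1 = np.linalg.svd(T, compute_uv=False)[0]  # largest singular value
bound = np.linalg.norm(T, 1)                    # one-norm = max absolute column sum
print(sigma1 <= bound + 1e-12)                  # True
```

Since the termination criterion only needs an order-of-magnitude value for $\sigma_1(\bar{A})$, such an overestimate is acceptable when the cost of bisection is to be avoided.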
Termination criterion (2) is the recommended choice, despite its (small) additional costs per iteration when using the Lanczos method (SYMMLQ). Also, if the norm of the initial estimate is much larger than the norm of the solution, that is, if $\|x_0\| \gg \|x\|$, a dramatic loss of significant digits could result in complete lack of convergence. The use of criterion (2) will enable the detection of such a situation, and the iteration will be restarted at a suitable point. No such restart facilities are provided for criterion (3).
Optionally, a vector $w$ of user-specified weights can be used in the computation of the vector norms in termination criterion (2), i.e., $\|v\|_p^{(w)} = \|v^{(w)}\|_p$, where $(v^{(w)})_i = w_i v_i$, for $i = 1, 2, \dots, n$. Note that the use of weights increases the computational costs.
The MINRES algorithm terminates when the norm of the residual of the preconditioned system, $F$, satisfies $\|F\|_2 \le \tau \times \|\bar{A}\|_2 \times \|x_k\|_2$, where $\bar{A}$ is the preconditioned matrix.
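The MINRES test has the same shape as the other criteria; a hedged sketch in which the argument names are illustrative:

```python
import numpy as np

def minres_criterion_met(F, abar_norm2, x, tau):
    """MINRES stop test: ||F||_2 <= tau * ||A-bar||_2 * ||x_k||_2,
    where F is the residual of the preconditioned system."""
    return np.linalg.norm(F) <= tau * abar_norm2 * np.linalg.norm(x)

# Tiny residual against a modest bound: passes.
print(minres_criterion_met(np.array([1e-9, 0.0]), 5.0,
                           np.array([1.0, 2.0]), tau=1e-8))  # True
# Large residual: fails.
print(minres_criterion_met(np.array([1.0, 0.0]), 5.0,
                           np.array([1.0, 2.0]), tau=1e-8))  # False
```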
The sequence of calls to the functions comprising the suite is enforced: first, the setup function nag_sparse_real_symm_basic_setup (f11gd) must be called, followed by the solver nag_sparse_real_symm_basic_solver (f11ge). The diagnostic function nag_sparse_real_symm_basic_diag (f11gf) can be called either when nag_sparse_real_symm_basic_solver (f11ge) is carrying out a monitoring step or after nag_sparse_real_symm_basic_solver (f11ge) has completed its tasks. Incorrect sequencing will raise an error condition.

## References

Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods SIAM, Philadelphia
Dias da Cunha R and Hopkins T (1994) PIM 1.1 — the parallel iterative method package for systems of linear equations user's guide — Fortran 77 version Technical Report Computing Laboratory, University of Kent at Canterbury, Kent, UK
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Hestenes M and Stiefel E (1952) Methods of conjugate gradients for solving linear systems J. Res. Nat. Bur. Stand. 49 409–436
Higham N J (1988) FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396
Paige C C and Saunders M A (1975) Solution of sparse indefinite systems of linear equations SIAM J. Numer. Anal. 12 617–629

## Parameters

### Compulsory Input Parameters

1:     method – string
The iterative method to be used.
method = 'CG'
Conjugate gradient method.
method = 'SYMMLQ'
Lanczos method (SYMMLQ).
method = 'MINRES'
Minimum residual method (MINRES).
Constraint: method = 'CG', 'SYMMLQ' or 'MINRES'.
2:     precon – string (length ≥ 1)
Determines whether preconditioning is used.
precon = 'N'
No preconditioning.
precon = 'P'
Preconditioning.
Constraint: precon = 'N' or 'P'.
3:     n – int64/int32/nag_int scalar
$n$, the order of the matrix $A$.
Constraint: n > 0.
4:     tol – double scalar
The tolerance $\tau$ for the termination criterion.
If tol ≤ 0.0, $\tau = \max(\sqrt{\epsilon}, \sqrt{n}\,\epsilon)$ is used, where $\epsilon$ is the machine precision.
Otherwise $\tau = \max(\mathrm{tol}, 10\epsilon, \sqrt{n}\,\epsilon)$ is used.
Constraint: tol < 1.0.
5:     maxitn – int64/int32/nag_int scalar
The maximum number of iterations.
Constraint: maxitn > 0.
6:     anorm – double scalar
If anorm > 0.0, the value of $\|A\|_p$ to be used in the termination criterion (2) (iterm = 1).
If anorm ≤ 0.0, iterm = 1 and norm_p = '1' or 'I', then $\|A\|_1 = \|A\|_\infty$ is estimated internally by nag_sparse_real_symm_basic_solver (f11ge).
If iterm = 2, then anorm is not referenced.
It has no effect if method = 'MINRES'.
Constraint: if iterm = 1 and norm_p = '2', anorm > 0.0.
7:     sigmax – double scalar
If sigmax > 0.0, the value of $\sigma_1(\bar{A}) = \|E^{-1} A E^{-\mathrm{T}}\|_2$.
If sigmax ≤ 0.0, $\sigma_1(\bar{A})$ is estimated by nag_sparse_real_symm_basic_solver (f11ge) when either sigcmp = 'S' or termination criterion (3) (iterm = 2) is employed, though it will be used only in the latter case.
Otherwise, or if method = 'MINRES', sigmax is not referenced.
8:     maxits – int64/int32/nag_int scalar
The maximum iteration number $k$ = maxits for which $\sigma_1(T_k)$ is computed by bisection (see also Section [Description]). If sigcmp = 'N' or sigmax > 0.0, or if method = 'MINRES', then maxits is not referenced.
Constraint: if sigcmp = 'S' and sigmax ≤ 0.0, 1 ≤ maxits ≤ maxitn.
9:     monit – int64/int32/nag_int scalar
If monit > 0, the frequency at which a monitoring step is executed by nag_sparse_real_symm_basic_solver (f11ge): the current solution and residual iterates will be returned by nag_sparse_real_symm_basic_solver (f11ge) and a call to nag_sparse_real_symm_basic_diag (f11gf) made possible every monit iterations, starting from the (monit)th. Otherwise, no monitoring takes place.
There are some additional computational costs involved in monitoring the solution and residual vectors when the Lanczos method (SYMMLQ) is used.
Constraint: monit ≤ maxitn.

### Optional Input Parameters

1:     sigcmp – string (length ≥ 1)
Determines whether an estimate of $\sigma_1(\bar{A}) = \|E^{-1} A E^{-\mathrm{T}}\|_2$, the largest singular value of the preconditioned matrix of the coefficients, is to be computed using the bisection method on the sequence of tridiagonal matrices $\{T_k\}$ generated during the iteration. Note that $\bar{A} = A$ when a preconditioner is not used.
If sigmax > 0.0 (see below), i.e., when $\sigma_1(\bar{A})$ is supplied, the value of sigcmp is ignored.
sigcmp = 'S'
$\sigma_1(\bar{A})$ is to be computed using the bisection method.
sigcmp = 'N'
The bisection method is not used.
If the termination criterion (3) is used, requiring $\sigma_1(\bar{A})$, an inexpensive estimate is computed and used (see Section [Description]).
It is not used if method = 'MINRES'.
Default: 'N'
Constraint: sigcmp = 'S' or 'N'.
2:     norm_p – string (length ≥ 1)
If method = 'CG' or 'SYMMLQ', norm_p defines the matrix and vector norm to be used in the termination criteria.
norm_p = '1'
Use the $l_1$ norm.
norm_p = 'I'
Use the $l_\infty$ norm.
norm_p = '2'
Use the $l_2$ norm.
It has no effect if method = 'MINRES'.
Default:
• if iterm = 1, 'I';
• otherwise '2'.
Constraints:
• if iterm = 1, norm_p = '1', 'I' or '2';
• if iterm = 2, norm_p = '2'.
3:     weight – string (length ≥ 1)
Specifies whether a vector $w$ of user-supplied weights is to be used in the vector norms used in the computation of termination criterion (2) (iterm = 1): $\|v\|_p^{(w)} = \|v^{(w)}\|_p$, where $v_i^{(w)} = w_i v_i$, for $i = 1, 2, \dots, n$. The suffix $p = 1, 2, \infty$ denotes the vector norm used, as specified by the parameter norm_p. Note that weights cannot be used when iterm = 2, i.e., when criterion (3) is used.
weight = 'W'
User-supplied weights are to be used and must be supplied on initial entry to nag_sparse_real_symm_basic_solver (f11ge).
weight = 'N'
All weights are implicitly set equal to one. Weights do not need to be supplied on initial entry to nag_sparse_real_symm_basic_solver (f11ge).
It has no effect if method = 'MINRES'.
Default: 'N'
Constraints:
• if iterm = 1, weight = 'W' or 'N';
• if iterm = 2, weight = 'N'.
4:     iterm – int64/int32/nag_int scalar
Defines the termination criterion to be used.
iterm = 1
Use the termination criterion defined in (2) (both conjugate gradient and Lanczos (SYMMLQ) methods).
iterm = 2
Use the termination criterion defined in (3) (Lanczos method (SYMMLQ) only).
It has no effect if method = 'MINRES'.
Default: 1
Constraints:
• if method = 'CG', iterm = 1;
• if method = 'SYMMLQ', iterm = 1 or 2.
5:     sigtol – double scalar
The tolerance used in assessing the convergence of the estimate of $\sigma_1(\bar{A}) = \|\bar{A}\|_2$ when the bisection method is used.
If sigtol ≤ 0.0, the default value sigtol = 0.01 is used. The actual value used is $\max(\mathrm{sigtol}, \epsilon)$.
If sigcmp = 'N' or sigmax > 0.0, then sigtol is not referenced.
It has no effect if method = 'MINRES'.
Default: 0.01
Constraint: if sigcmp = 'S' and sigmax ≤ 0.0, sigtol < 1.0.

### Input Parameters Omitted from the MATLAB Interface

lwork

### Output Parameters

1:     lwreq – int64/int32/nag_int scalar
The minimum amount of workspace required by nag_sparse_real_symm_basic_solver (f11ge). (See also Section [Parameters] in (f11ge).)
2:     work(lwork) – double array
lwork ≥ 120.
The array work is initialized by nag_sparse_real_symm_basic_setup (f11gd). It must not be modified before calling the next function in the suite, namely nag_sparse_real_symm_basic_solver (f11ge).
3:     ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = −i
If ifail = −i, parameter $i$ had an illegal value on entry. The parameters are numbered as follows:
1: method, 2: precon, 3: sigcmp, 4: norm_p, 5: weight, 6: iterm, 7: n, 8: tol, 9: maxitn, 10: anorm, 11: sigmax, 12: sigtol, 13: maxits, 14: monit, 15: lwreq, 16: work, 17: lwork, 18: ifail.
It is possible that ifail refers to a parameter that is omitted from the MATLAB interface. This usually indicates that an error in one of the other input parameters has caused an incorrect value to be inferred.
ifail = 1
nag_sparse_real_symm_basic_setup (f11gd) has been called out of sequence.

## Accuracy

Not applicable.

## Further Comments

When $\sigma_1(\bar{A})$ is not supplied (sigmax ≤ 0.0) but it is required, it is estimated by nag_sparse_real_symm_basic_solver (f11ge) using either of the two methods described in Section [Description], as specified by the parameter sigcmp. In particular, if sigcmp = 'S', then the computation of $\sigma_1(\bar{A})$ is deemed to have converged when three successive values of $\sigma_1(T_k)$ differ, in a relative sense, by less than the tolerance sigtol, i.e., when
 $\max\left( \frac{|\sigma_1^{(k)} - \sigma_1^{(k-1)}|}{\sigma_1^{(k)}}, \frac{|\sigma_1^{(k)} - \sigma_1^{(k-2)}|}{\sigma_1^{(k)}} \right) \le \mathrm{sigtol}.$
The computation of $\sigma_1(\bar{A})$ is also terminated when the iteration count exceeds the maximum value allowed, i.e., $k \ge \mathrm{maxits}$.
Bisection is increasingly expensive with increasing iteration count. A reasonably large value of sigtol, of the order of the suggested value, is recommended and an excessive value of maxits should be avoided. Under these conditions, $\sigma_1(\bar{A})$ usually converges within very few iterations.
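The convergence test on successive estimates of $\sigma_1(T_k)$ can be sketched as follows (a minimal illustration; the sequence of estimates is synthetic):

```python
def sigma_converged(s, sigtol=0.01):
    """True when the last three estimates in s agree to relative tolerance sigtol."""
    sk, sk1, sk2 = s[-1], s[-2], s[-3]
    return max(abs(sk - sk1), abs(sk - sk2)) / sk <= sigtol

# A synthetic, settling sequence of sigma_1(T_k) estimates:
seq = [1.0, 1.8, 1.99, 1.995, 1.999]
print(sigma_converged(seq[:3]))  # False: the first three still differ widely
print(sigma_converged(seq))      # True: the last three agree within 1%
```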

## Example

```function nag_sparse_real_symm_basic_setup_example
nz = 16;
a = zeros(10000,1);
a(1:16) = [4;1;5;2;2;3;-1;1;4;1;-2;3;2;-1;-2;5];
irow = zeros(10000,1,'int64');
irow(1:16) = [int64(1);2;2;3;4;4;5;5;5;6;6;6;7;7;7;7];
icol = zeros(10000,1,'int64');
icol(1:16) = [int64(1);1;2;3;2;4;1;4;5;2;5;6;1;2;3;7];
lfill = int64(0);
dtol = 0;
mic = 'N';
dscale = 0;
ipiv = zeros(7, 1, 'int64');
method = 'CG    ';
precon = 'P';
n = 7;
tol = 1e-06;
maxitn = int64(20);
anorm = 0;
sigmax = 0;
maxits = int64(7);
monit = int64(2);
irevcm = int64(0);
u = [0; 0; 0; 0; 0; 0; 0];
v = [15; 18; -8; 21; 11; 10; 29];
wgt = [0; 0; 0; 0; 0; 0; 0];

[a, irow, icol, ipiv, istr, nnzc, npivm, ifail] = ...
nag_sparse_real_symm_precon_ichol(int64(nz), a, irow, icol, lfill, dtol, ...
mic, dscale, ipiv);

[lwreq, work, ifail] = ...
nag_sparse_real_symm_basic_setup(method, precon, int64(n), tol, maxitn, ...
anorm, sigmax, maxits, monit, 'sigcmp', 'S', 'norm_p', '1');

while (irevcm ~= 4)
  [irevcm, u, v, work, ifail] = ...
    nag_sparse_real_symm_basic_solver(irevcm, u, v, wgt, work);
  if (irevcm == 1)
    % Matrix-vector product v = A*u requested by the solver
    [v, ifail] = ...
      nag_sparse_real_symm_matvec(a(1:nz), irow(1:nz), icol(1:nz), 'N', u);
    if (ifail ~= 0)
      irevcm = int64(6);
    end
  elseif (irevcm == 2)
    % Preconditioning solve M*v = u requested by the solver
    [v, ifail] = ...
      nag_sparse_real_symm_precon_ichol_solve(a, irow, icol, ipiv, istr, 'N', u);
    if (ifail ~= 0)
      irevcm = int64(6);
    end
  elseif (irevcm == 3)
    % Monitoring step
    [itn, stplhs, stprhs, anorm, sigmax, its, sigerr, ifail] = ...
      nag_sparse_real_symm_basic_diag(work);
    if (ifail ~= 0)
      irevcm = int64(6);
    end
    fprintf('\nMonitoring at iteration number %d\nresidual norm: %14.4e\n', ...
      itn, stplhs);
    fprintf('\n   Solution Vector  Residual Vector\n');
    for i = 1:n
      fprintf('%+16.4e %+16.4e\n', u(i), v(i));
    end
  end
end

% Get information about the computation
[itn, stplhs, stprhs, anorm, sigmax, its, sigerr, ifail] = ...
nag_sparse_real_symm_basic_diag(work);
fprintf('\nNumber of iterations for convergence: %d\n', itn);
fprintf('Residual norm:                           %14.4e\n', stplhs);
fprintf('Right-hand side of termination criteria: %14.4e\n', stprhs);
fprintf('i-norm of matrix a:                      %14.4e\n', anorm);
fprintf('\n   Solution Vector  Residual Vector\n');
for i = 1:n
fprintf('%+16.4e %+16.4e\n', u(i), v(i));
end
```
```

Monitoring at iteration number 2
residual norm:     1.9938e+00

Solution Vector  Residual Vector
+9.6320e-01      -2.2960e-01
+1.9934e+00      +2.2254e-01
+3.0583e+00      +9.5827e-02
+4.1453e+00      -2.5155e-01
+4.8289e+00      -1.7160e-01
+5.6630e+00      +6.7533e-01
+7.1062e+00      -3.4737e-01

Monitoring at iteration number 4
residual norm:     6.6574e-03

Solution Vector  Residual Vector
+9.9940e-01      -1.0551e-03
+2.0011e+00      -2.4675e-03
+3.0008e+00      -1.7116e-05
+3.9996e+00      +4.4929e-05
+4.9991e+00      +2.1359e-03
+5.9993e+00      -8.7482e-04
+7.0007e+00      +6.2045e-05

Number of iterations for convergence: 5
Residual norm:                               1.8652e-14
Right-hand side of termination criteria:     3.9200e-04
i-norm of matrix a:                          1.0000e+01

Solution Vector  Residual Vector
+1.0000e+00      +1.7764e-15
+2.0000e+00      +0.0000e+00
+3.0000e+00      -8.8818e-16
+4.0000e+00      -7.1054e-15
+5.0000e+00      -5.3291e-15
+6.0000e+00      +0.0000e+00
+7.0000e+00      +3.5527e-15

```