
# NAG Toolbox: nag_linsys_real_gen_sparse_lsqsol (f04qa)

## Purpose

nag_linsys_real_gen_sparse_lsqsol (f04qa) solves sparse nonsymmetric equations, sparse linear least squares problems and sparse damped linear least squares problems, using a Lanczos algorithm.

## Syntax

[b, x, se, itnlim, itn, anorm, acond, rnorm, arnorm, xnorm, user, inform, ifail] = f04qa(n, b, aprod, damp, atol, btol, conlim, itnlim, msglvl, 'm', m, 'user', user)
[b, x, se, itnlim, itn, anorm, acond, rnorm, arnorm, xnorm, user, inform, ifail] = nag_linsys_real_gen_sparse_lsqsol(n, b, aprod, damp, atol, btol, conlim, itnlim, msglvl, 'm', m, 'user', user)

## Description

nag_linsys_real_gen_sparse_lsqsol (f04qa) can be used to solve a system of linear equations
 $Ax=b$ (1)
where $A$ is an $n$ by $n$ sparse nonsymmetric matrix, or can be used to solve linear least squares problems, so that nag_linsys_real_gen_sparse_lsqsol (f04qa) minimizes the value $\rho$ given by
 $\rho = \| r \| , \quad r = b - A x$ (2)
where $A$ is an $m$ by $n$ sparse matrix and $‖r‖$ denotes the Euclidean length of $r$ so that ${‖r‖}^{2}={r}^{\mathrm{T}}r$. A damping argument, $\lambda$, may be included in the least squares problem in which case nag_linsys_real_gen_sparse_lsqsol (f04qa) minimizes the value $\rho$ given by
 $\rho^2 = \| r \|^2 + \lambda^2 \| x \|^2 .$ (3)
$\lambda$ is supplied as the argument damp and should of course be zero if the solution to problems (1) or (2) is required. Minimizing $\rho$ in (3) is often called ridge regression.
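The equivalence used in (3) can be checked numerically: the damped problem is the ordinary least squares problem for the augmented matrix $\begin{pmatrix} A \\ \lambda I \end{pmatrix}$. A minimal Python sketch (illustrative only, independent of the NAG Toolbox) compares this with the regularized normal equations:

```python
import numpy as np

# Illustrative sketch, not the NAG routine: minimizing
# ||b - A x||^2 + lambda^2 ||x||^2 is the ordinary least squares
# problem for Abar = [A; lambda*I], bbar = [b; 0].
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
lam = 0.5

Abar = np.vstack([A, lam * np.eye(3)])
bbar = np.concatenate([b, np.zeros(3)])
x_aug, *_ = np.linalg.lstsq(Abar, bbar, rcond=None)

# The same solution from the regularized normal equations
# (A^T A + lambda^2 I) x = A^T b.
x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(3), A.T @ b)
assert np.allclose(x_aug, x_ne)
```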
nag_linsys_real_gen_sparse_lsqsol (f04qa) is based upon algorithm LSQR (see Paige and Saunders (1982a) and Paige and Saunders (1982b)) and solves the problems by an algorithm based upon the Lanczos process. The function does not require $A$ explicitly, but $A$ is specified via aprod, which must perform the operations $\left(y+Ax\right)$ and $\left(x+{A}^{\mathrm{T}}y\right)$ for a given $n$-element vector $x$ and $m$-element vector $y$. An argument to aprod specifies which of the two operations is required on a given entry.
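The contract aprod must satisfy can be sketched outside MATLAB; the following Python function (a hypothetical stand-in, with a small dense matrix in place of a sparse operator) shows the two mode-dependent updates:

```python
import numpy as np

# Hypothetical counterpart of aprod: the solver never sees A itself,
# only the two matrix-vector updates selected by `mode`.
def aprod(mode, A, x, y):
    if mode == 1:              # y := y + A x   (x left unchanged)
        return x, y + A @ x
    else:                      # mode == 2: x := x + A^T y  (y unchanged)
        return x + A.T @ y, y

A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, 0.0]])  # m = 3, n = 2
x = np.array([1.0, 1.0])
y = np.zeros(3)
x, y = aprod(1, A, x, y)       # y now holds A x
```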
The function also returns estimates of the standard errors of the sample regression coefficients (${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$) given by the diagonal elements of the estimated variance-covariance matrix $V$. When problem (2) is being solved and $A$ is of full rank, then $V$ is given by
 $V = s^2 {\left( A^{\mathrm{T}} A \right)}^{-1} , \quad s^2 = \rho^2 / \left( m - n \right) , \quad m > n$
and when problem (3) is being solved then $V$ is given by
 $V = s^2 {\left( A^{\mathrm{T}} A + \lambda^2 I \right)}^{-1} , \quad s^2 = \rho^2 / m , \quad \lambda \ne 0 .$
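These standard-error formulae can be sketched for the full-rank case of problem (2), in Python with a small dense matrix standing in for the sparse one (illustrative only, not the estimates the routine itself computes):

```python
import numpy as np

# Sketch of the standard errors for the full-rank case of problem (2):
# V = s^2 (A^T A)^{-1} with s^2 = rho^2/(m - n); se_i = sqrt(V_ii).
rng = np.random.default_rng(1)
m, n = 10, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
s2 = (r @ r) / (m - n)              # s^2 = rho^2 / (m - n)
V = s2 * np.linalg.inv(A.T @ A)     # estimated variance-covariance matrix
se = np.sqrt(np.diag(V))            # standard errors of x_1, ..., x_n
```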
Let $\stackrel{-}{A}$ denote the matrix
 $\bar{A} = A , \quad \lambda = 0 ; \qquad \bar{A} = \begin{pmatrix} A \\ \lambda I \end{pmatrix} , \quad \lambda \ne 0 ,$ (4)
let $\stackrel{-}{r}$ denote the residual vector
 $\bar{r} = r , \quad \lambda = 0 ; \qquad \bar{r} = \begin{pmatrix} b \\ 0 \end{pmatrix} - \bar{A} x , \quad \lambda \ne 0$ (5)
corresponding to an iterate $x$, so that $\rho =‖\stackrel{-}{r}‖$ is the function being minimized, and let $‖A‖$ denote the Frobenius (Euclidean) norm of $A$. Then the function accepts $x$ as a solution if it is estimated that one of the following two conditions is satisfied:
 $\rho \le \mathit{tol}_1 \| \bar{A} \| \| x \| + \mathit{tol}_2 \| b \|$ (6)
 $\| \bar{A}^{\mathrm{T}} \bar{r} \| \le \mathit{tol}_1 \| \bar{A} \| \rho$ (7)
where ${\mathit{tol}}_{1}$ and ${\mathit{tol}}_{2}$ are user-supplied tolerances which estimate the relative errors in $A$ and $b$ respectively. Condition (6) is appropriate for compatible problems where, in theory, we expect the residual to be zero and will be satisfied by an acceptable solution $x$ to a compatible problem. Condition (7) is appropriate for incompatible systems where we do not expect the residual to be zero and is based on the observation that, in theory,
 $\bar{A}^{\mathrm{T}} \bar{r} = 0$
when $x$ is a solution to the least squares problem, and so (7) will be satisfied by an acceptable solution $x$ to a linear least squares problem.
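A minimal sketch of the two acceptance tests, assuming $\lambda = 0$ so that $\stackrel{-}{A}=A$ and $\stackrel{-}{r}=r$ (Python, with a hypothetical helper name):

```python
import numpy as np

# Sketch of the acceptance tests (6) and (7), assuming lambda = 0
# so Abar = A and rbar = r = b - A x.
def accept(A, b, x, tol1, tol2):
    r = b - A @ x
    rho = np.linalg.norm(r)
    nA = np.linalg.norm(A)     # Frobenius norm, as in the text
    # (6): appropriate for compatible problems
    cond6 = rho <= tol1 * nA * np.linalg.norm(x) + tol2 * np.linalg.norm(b)
    # (7): appropriate for incompatible (least squares) problems
    cond7 = np.linalg.norm(A.T @ r) <= tol1 * nA * rho
    return bool(cond6 or cond7)

A = np.eye(2)
b = np.array([1.0, 2.0])
ok = accept(A, b, b, 1e-8, 1e-8)            # exact solution is accepted
bad = accept(A, b, np.zeros(2), 1e-8, 1e-8) # x = 0 is rejected here
```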
The function also includes a test to prevent convergence to solutions, $x$, with unacceptably large elements. This can happen if $A$ is nearly singular or is nearly rank deficient. If we let the singular values of $\stackrel{-}{A}$ be
 $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$
then the condition number of $\stackrel{-}{A}$ is defined as
 $\mathrm{cond} \left( \bar{A} \right) = \sigma_1 / \sigma_k$
where ${\sigma }_{k}$ is the smallest nonzero singular value of $\stackrel{-}{A}$ and hence $k$ is the rank of $\stackrel{-}{A}$. When $k<n$, $\stackrel{-}{A}$ is rank deficient, the least squares solution is not unique, and nag_linsys_real_gen_sparse_lsqsol (f04qa) will normally converge to the minimal length solution. In practice $\stackrel{-}{A}$ will not have exactly zero singular values, but may instead have small singular values that we wish to regard as zero.
The function provides for this possibility by terminating if
 $\mathrm{cond} \left( \bar{A} \right) \ge c_{\mathrm{lim}}$ (8)
where ${c}_{\mathrm{lim}}$ is a user-supplied limit on the condition number of $\stackrel{-}{A}$. For problem (1) termination with this condition indicates that $A$ is nearly singular and for problem (2) indicates that $A$ is nearly rank deficient and so has near linear dependencies in its columns. In this case inspection of $‖r‖$, $‖{A}^{\mathrm{T}}r‖$ and $‖x‖$, which are all returned by the function, will indicate whether or not an acceptable solution has been found. Condition (8), perhaps in conjunction with $\lambda \ne 0$, can be used to try and ‘regularize’ least squares solutions. A full discussion of the stopping criteria is given in Section 6 of Paige and Saunders (1982a).
Introduction of a nonzero damping argument $\lambda$ tends to reduce the size of the computed solution and to make its components less sensitive to changes in the data, and nag_linsys_real_gen_sparse_lsqsol (f04qa) is applicable when a value of $\lambda$ is known a priori. To have an effect, $\lambda$ should normally be at least $\sqrt{\epsilon }‖A‖$ where $\epsilon$ is the machine precision. For further discussion see Paige and Saunders (1982b) and the references given there.
Whenever possible the matrix $A$ should be scaled so that the relative errors in the elements of $A$ are all of comparable size. Such a scaling helps to prevent the least squares problem from being unnecessarily sensitive to data errors and will normally reduce the number of iterations required. At the very least, in the absence of better information, the columns of $A$ should be scaled to have roughly equal column length.
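The column-scaling advice can be sketched as follows (Python, hypothetical helper, illustrative only): normalize the columns of $A$ to unit Euclidean length, solve the scaled problem, then map the solution back.

```python
import numpy as np

# Sketch of the column scaling suggested above: if D = diag(column
# lengths of A), solve the least squares problem for A D^{-1}, then
# recover x = D^{-1} y for the original problem.
def column_scaled_lstsq(A, b):
    d = np.linalg.norm(A, axis=0)       # Euclidean length of each column
    d = np.where(d == 0, 1.0, d)        # guard against zero columns
    y, *_ = np.linalg.lstsq(A / d, b, rcond=None)
    return y / d                        # solution of the unscaled problem

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x_scaled = column_scaled_lstsq(A, b)
```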

## References

Paige C C and Saunders M A (1982a) LSQR: An algorithm for sparse linear equations and sparse least squares ACM Trans. Math. Software 8 43–71
Paige C C and Saunders M A (1982b) Algorithm 583 LSQR: Sparse linear equations and least squares problems ACM Trans. Math. Software 8 195–209

## Parameters

### Compulsory Input Parameters

1:     $\mathrm{n}$ – int64 scalar
$n$, the number of columns of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 1$.
2:     $\mathrm{b}\left({\mathbf{m}}\right)$ – double array
The right-hand side vector $b$.
3:     $\mathrm{aprod}$ – function handle or string containing name of m-file
aprod must perform the operations $y:=y+Ax$ and $x:=x+{A}^{\mathrm{T}}y$ for given vectors $x$ and $y$.
[mode, x, y, user] = aprod(mode, m, n, x, y, user, lruser, liuser)

Input Parameters

1:     $\mathrm{mode}$ – int64 scalar
Specifies which operation is to be performed.
${\mathbf{mode}}=1$
aprod must compute $y+Ax$.
${\mathbf{mode}}=2$
aprod must compute $x+{A}^{\mathrm{T}}y$.
2:     $\mathrm{m}$ – int64 scalar
$m$, the number of rows of $A$.
3:     $\mathrm{n}$ – int64 scalar
$n$, the number of columns of $A$.
4:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
The vector $x$.
5:     $\mathrm{y}\left({\mathbf{m}}\right)$ – double array
The vector $y$.
6:     $\mathrm{user}$ – Any MATLAB object
7:     $\mathrm{lruser}$ – int64 scalar
8:     $\mathrm{liuser}$ – int64 scalar
aprod is called from nag_linsys_real_gen_sparse_lsqsol (f04qa) with the object supplied to nag_linsys_real_gen_sparse_lsqsol (f04qa).

Output Parameters

1:     $\mathrm{mode}$ – int64 scalar
May be used as a flag to indicate a failure in the computation of $y+Ax$ or $x+{A}^{\mathrm{T}}y$. If mode is negative on exit from aprod, nag_linsys_real_gen_sparse_lsqsol (f04qa) will exit immediately with ifail set to mode.
2:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
If ${\mathbf{mode}}=1$, x must be unchanged.
If ${\mathbf{mode}}=2$, x must contain $x+{A}^{\mathrm{T}}y$.
3:     $\mathrm{y}\left({\mathbf{m}}\right)$ – double array
If ${\mathbf{mode}}=1$, y must contain $y+Ax$.
If ${\mathbf{mode}}=2$, y must be unchanged.
4:     $\mathrm{user}$ – Any MATLAB object
4:     $\mathrm{damp}$ – double scalar
The value $\lambda$. If either problem (1) or problem (2) is to be solved, then damp must be supplied as zero.
5:     $\mathrm{atol}$ – double scalar
The tolerance, ${\mathit{tol}}_{1}$, of the convergence criteria (6) and (7); it should be an estimate of the largest relative error in the elements of $A$. For example, if the elements of $A$ are correct to about $4$ significant figures, then atol should be set to about $5×{10}^{-4}$. If atol is supplied as less than $\epsilon$, where $\epsilon$ is the machine precision, then the value $\epsilon$ is used instead of atol.
6:     $\mathrm{btol}$ – double scalar
The tolerance, ${\mathit{tol}}_{2}$, of the convergence criterion (6); it should be an estimate of the largest relative error in the elements of $b$. For example, if the elements of $b$ are correct to about $4$ significant figures, then btol should be set to about $5×{10}^{-4}$. If btol is supplied as less than $\epsilon$, then the value $\epsilon$ is used instead of btol.
7:     $\mathrm{conlim}$ – double scalar
The value ${c}_{\mathrm{lim}}$ of equation (8); it should be an upper limit on the condition number of $\stackrel{-}{A}$. conlim should not normally be chosen much larger than $1.0/{\mathbf{atol}}$. If conlim is supplied as zero, then the value $1.0/\epsilon$ is used instead of conlim.
8:     $\mathrm{itnlim}$ – int64 scalar
An upper limit on the number of iterations. If ${\mathbf{itnlim}}\le 0$, then the value n is used in place of itnlim, but for ill-conditioned problems a higher value of itnlim is likely to be necessary.
9:     $\mathrm{msglvl}$ – int64 scalar
The level of printing from nag_linsys_real_gen_sparse_lsqsol (f04qa). If ${\mathbf{msglvl}}\le 0$, then no printing occurs, but otherwise messages will be output on the advisory message channel (see nag_file_set_unit_advisory (x04ab)). A description of the printed output is given in Printed output. The level of printing is determined as follows:
${\mathbf{msglvl}}\le 0$
No printing.
${\mathbf{msglvl}}=1$
A brief summary is printed just prior to return from nag_linsys_real_gen_sparse_lsqsol (f04qa).
${\mathbf{msglvl}}\ge 2$
A summary line is printed periodically to monitor the progress of nag_linsys_real_gen_sparse_lsqsol (f04qa), together with a brief summary just prior to return from nag_linsys_real_gen_sparse_lsqsol (f04qa).

### Optional Input Parameters

1:     $\mathrm{m}$ – int64 scalar
Default: the dimension of the array b.
$m$, the number of rows of the matrix $A$.
Constraint: ${\mathbf{m}}\ge 1$.
2:     $\mathrm{user}$ – Any MATLAB object
user is not used by nag_linsys_real_gen_sparse_lsqsol (f04qa), but is passed to aprod. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.

### Output Parameters

1:     $\mathrm{b}\left({\mathbf{m}}\right)$ – double array
2:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
The solution vector $x$.
3:     $\mathrm{se}\left({\mathbf{n}}\right)$ – double array
The estimates of the standard errors of the components of $x$. Thus ${\mathbf{se}}\left(i\right)$ contains an estimate of $\sqrt{{\nu }_{ii}}$, where ${\nu }_{ii}$ is the $i$th diagonal element of the estimated variance-covariance matrix $V$. The estimates returned in se will be the lower bounds on the actual estimated standard errors, but will usually be correct to at least one significant figure.
4:     $\mathrm{itnlim}$ – int64 scalar
Unchanged unless ${\mathbf{itnlim}}\le 0$ on entry, in which case it is set to n.
5:     $\mathrm{itn}$ – int64 scalar
The number of iterations performed.
6:     $\mathrm{anorm}$ – double scalar
An estimate of $‖\stackrel{-}{A}‖$ for the matrix $\stackrel{-}{A}$ of (4).
7:     $\mathrm{acond}$ – double scalar
An estimate of $\mathrm{cond}\left(\stackrel{-}{A}\right)$ which is a lower bound.
8:     $\mathrm{rnorm}$ – double scalar
An estimate of $‖\stackrel{-}{r}‖$ for the residual, $\stackrel{-}{r}$, of (5) corresponding to the solution $x$ returned in x. Note that $‖\stackrel{-}{r}‖$ is the function being minimized.
9:     $\mathrm{arnorm}$ – double scalar
An estimate of the $‖{\stackrel{-}{A}}^{\mathrm{T}}\stackrel{-}{r}‖$ corresponding to the solution $x$ returned in x.
10:   $\mathrm{xnorm}$ – double scalar
An estimate of $‖x‖$ for the solution $x$ returned in x.
11:   $\mathrm{user}$ – Any MATLAB object
12:   $\mathrm{inform}$ – int64 scalar
The reason for termination of nag_linsys_real_gen_sparse_lsqsol (f04qa).
${\mathbf{inform}}=0$
The exact solution is $x=0$. No iterations are performed in this case.
${\mathbf{inform}}=1$
The termination criterion of (6) has been satisfied with ${\mathit{tol}}_{1}$ and ${\mathit{tol}}_{2}$ as the values supplied in atol and btol respectively.
${\mathbf{inform}}=2$
The termination criterion of (7) has been satisfied with ${\mathit{tol}}_{1}$ as the value supplied in atol.
${\mathbf{inform}}=3$
The termination criterion of (6) has been satisfied with ${\mathit{tol}}_{1}$ and/or ${\mathit{tol}}_{2}$ as the value $\epsilon$, where $\epsilon$ is the machine precision. One or both of the values supplied in atol and btol must have been less than $\epsilon$ and was too small for this machine.
${\mathbf{inform}}=4$
The termination criterion of (7) has been satisfied with ${\mathit{tol}}_{1}$ as the value $\epsilon$. The value supplied in atol must have been less than $\epsilon$ and was too small for this machine.
The values ${\mathbf{inform}}=5$, $6$ and $7$ correspond to failure with ${\mathbf{ifail}}={\mathbf{2}}$, ${\mathbf{3}}$ and ${\mathbf{4}}$ respectively (see Error Indicators and Warnings) and when ifail is negative inform will be set to the same negative value.
13:   $\mathrm{ifail}$ – int64 scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

W  ${\mathbf{ifail}}<0$
A negative value of ifail indicates an exit from nag_linsys_real_gen_sparse_lsqsol (f04qa) because you have set mode negative in aprod. The value of ifail will be the same as your setting of mode.
${\mathbf{ifail}}=1$
 On entry, ${\mathbf{m}}<1$, or ${\mathbf{n}}<1$, or $\mathit{lruser}<1$, or $\mathit{liuser}<1$.
${\mathbf{ifail}}=2$
The condition of (8) has been satisfied for the value of ${c}_{\mathrm{lim}}$ supplied in conlim. If this failure is unexpected you should check that aprod is working correctly. Although conditions (6) or (7) have not been satisfied, the values returned in rnorm, arnorm and xnorm may nevertheless indicate that an acceptable solution has been reached.
${\mathbf{ifail}}=3$
The condition of (8) has been satisfied for the value ${c}_{\mathrm{lim}}=1.0/\epsilon$, where $\epsilon$ is the machine precision. The matrix $\stackrel{-}{A}$ is nearly singular or rank deficient and the problem is too ill-conditioned for this machine. If this failure is unexpected, you should check that aprod is working correctly.
${\mathbf{ifail}}=4$
The limit on the number of iterations has been reached. The number of iterations required by nag_linsys_real_gen_sparse_lsqsol (f04qa) and the condition of the matrix $\stackrel{-}{A}$ can depend strongly on the scaling of the problem. Poor scaling of the rows and columns of $A$ should be avoided whenever possible.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this function. Please contact NAG.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.

## Accuracy

When the problem is compatible, the computed solution $x$ will satisfy the equation
 $r=b-Ax,$
where an estimate of $‖r‖$ is returned in the argument rnorm. When the problem is incompatible, the computed solution $x$ will satisfy the equation
 $\bar{A}^{\mathrm{T}} \bar{r} = e ,$
where an estimate of $‖e‖$ is returned in the argument arnorm. See also Section 6.2 of Paige and Saunders (1982b).

## Further Comments

The time taken by nag_linsys_real_gen_sparse_lsqsol (f04qa) is likely to be principally determined by the time taken in aprod, which is called twice on each iteration, once with ${\mathbf{mode}}=1$ and once with ${\mathbf{mode}}=2$. The time taken per iteration by the remaining operations in nag_linsys_real_gen_sparse_lsqsol (f04qa) is approximately proportional to $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$.
The Lanczos process will usually converge more quickly if $A$ is pre-conditioned by a nonsingular matrix $M$ that approximates $A$ in some sense and is also chosen so that equations of the form $My=c$ can efficiently be solved for $y$. For a discussion of pre-conditioning, see the F11 Chapter Introduction. In the context of nag_linsys_real_gen_sparse_lsqsol (f04qa), problem (1) is equivalent to
 $A M^{-1} y = b , \quad M x = y$
and problem (2) is equivalent to minimizing
 $\rho = \| r \| , \quad r = b - A M^{-1} y , \quad M x = y .$
Note that the normal matrix satisfies ${\left(A{M}^{-1}\right)}^{\mathrm{T}}\left(A{M}^{-1}\right)={M}^{-\mathrm{T}}\left({A}^{\mathrm{T}}A\right){M}^{-1}$, so that pre-conditioning $A$ as $A{M}^{-1}$ is equivalent to pre-conditioning the normal matrix ${A}^{\mathrm{T}}A$ as ${M}^{-\mathrm{T}}\left({A}^{\mathrm{T}}A\right){M}^{-1}$.
Pre-conditioning can be incorporated into nag_linsys_real_gen_sparse_lsqsol (f04qa) simply by coding aprod to compute $y+A{M}^{-1}x$ and $x+{M}^{-\mathrm{T}}{A}^{\mathrm{T}}y$ in place of $y+Ax$ and $x+{A}^{\mathrm{T}}y$ respectively, and then solving the equations $Mx=y$ for $x$ on return from nag_linsys_real_gen_sparse_lsqsol (f04qa). The quantity $y+A{M}^{-1}x$ should be computed by solving $Mz=x$ for $z$ and then computing $y+Az$, and $x+{M}^{-\mathrm{T}}{A}^{\mathrm{T}}y$ should be computed by solving ${M}^{\mathrm{T}}z={A}^{\mathrm{T}}y$ for $z$ and then forming $x+z$.
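This pre-conditioned variant of aprod can be sketched as follows (Python, hypothetical names; a small dense $M$ stands in for a practically chosen pre-conditioner with cheap solves):

```python
import numpy as np

# Sketch of the pre-conditioned aprod described above.
def aprod_prec(mode, A, M, x, y):
    if mode == 1:                            # y := y + A M^{-1} x
        z = np.linalg.solve(M, x)            # first solve M z = x ...
        return x, y + A @ z                  # ... then add A z
    else:                                    # mode == 2: x := x + M^{-T} A^T y
        z = np.linalg.solve(M.T, A.T @ y)    # solve M^T z = A^T y
        return x + z, y                      # then form x + z
```

With $M = I$ this reduces to the unpreconditioned operations $y + Ax$ and $x + A^{\mathrm{T}}y$.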

### Description of the Printed Output

When ${\mathbf{msglvl}}>0$, then nag_linsys_real_gen_sparse_lsqsol (f04qa) will produce output (except in the case where the function fails with ${\mathbf{ifail}}={\mathbf{1}}$) on the advisory message channel (see nag_file_set_unit_advisory (x04ab)).
When ${\mathbf{msglvl}}\ge 2$ then a summary line is printed periodically giving the following information:
| Output | Meaning |
| --- | --- |
| ITN | Iteration number, $k$. |
| X(1) | The first element of the current iterate ${x}_{k}$. |
| FUNCTION | The current value of the function, $\rho$, being minimized. |
| COMPAT | An estimate of $‖{\stackrel{-}{r}}_{k}‖/‖b‖$, where ${\stackrel{-}{r}}_{k}$ is the residual corresponding to ${x}_{k}$. This value should converge to zero (in theory) if and only if the problem is compatible. COMPAT decreases monotonically. |
| INCOMPAT | An estimate of $‖{\stackrel{-}{A}}^{\mathrm{T}}{\stackrel{-}{r}}_{k}‖/\left(‖\stackrel{-}{A}‖‖{\stackrel{-}{r}}_{k}‖\right)$, which should converge to zero if and only if $\rho$ is nonzero at the solution. INCOMPAT is not usually monotonic. |
| NRM(ABAR) | A monotonically increasing estimate of $‖\stackrel{-}{A}‖$. |
| COND(ABAR) | A monotonically increasing estimate of the condition number $\mathrm{cond}\left(\stackrel{-}{A}\right)$. |

## Example

This example solves the linear least squares problem
 $\min \rho = \| r \| , \quad r = b - A x$
where $A$ is the $13$ by $12$ matrix and $b$ is the $13$-element vector given by
 $A = \begin{pmatrix} 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & -1 & 4 & -1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & -1 & 4 & -1 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & -1 & 4 & -1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & -1 & 4 & -1 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \end{pmatrix} , \quad b = -h^2 \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ -h^{-3} \end{pmatrix}$
with $h=0.1$.
Such a problem can arise by considering the Neumann problem on a rectangle,
 $\nabla^2 u = g\left(x,y\right) \text{ in the interior} , \quad \frac{\partial u}{\partial n} = 0 \text{ on } C , \quad \int_C u \, \mathrm{d}s = 1 ,$
where $C$ is the boundary of the rectangle, and discretizing on a square mesh.
The $12$ by $12$ symmetric part of $A$ represents the difference equations and the final row comes from the normalizing condition. The example program has $g\left(x,y\right)=1$ at all the internal mesh points, but apart from this is written in a general manner so that the number of rows (NROWS) and columns (NCOLS) in the grid can readily be altered.
```
function f04qa_example

fprintf('f04qa example results\n\n');

m = int64(13);
n = int64(12);
a = [ 1,  0,  0, -1,  0,  0,  0,  0,  0,  0,  0,  0;
      0,  1,  0,  0, -1,  0,  0,  0,  0,  0,  0,  0;
      0,  0,  1, -1,  0,  0,  0,  0,  0,  0,  0,  0;
     -1,  0, -1,  4, -1,  0,  0, -1,  0,  0,  0,  0;
      0, -1,  0, -1,  4, -1,  0,  0, -1,  0,  0,  0;
      0,  0,  0,  0, -1,  1,  0,  0,  0,  0,  0,  0;
      0,  0,  0,  0,  0,  0,  1, -1,  0,  0,  0,  0;
      0,  0,  0, -1,  0,  0, -1,  4, -1,  0, -1,  0;
      0,  0,  0,  0, -1,  0,  0, -1,  4, -1,  0, -1;
      0,  0,  0,  0,  0,  0,  0,  0, -1,  1,  0,  0;
      0,  0,  0,  0,  0,  0,  0, -1,  0,  0,  1,  0;
      0,  0,  0,  0,  0,  0,  0,  0, -1,  0,  0,  1;
      1,  1,  1,  0,  0,  1,  1,  0,  0,  1,  1,  1];

% Right-hand side b = -h^2*(0,0,0,1,1,0,0,1,1,0,0,0,-h^(-3))' with h = 0.1
b = zeros(m,1);
b(4:5) = -0.01;
b(8:9) = -0.01;
b(m)   = 10;

damp   = 0;
atol   = 1e-05;
btol   = 1e-04;
conlim = 1e+06;
itnlim = int64(100);
msglvl = int64(1);

[b, x, se, itnlim, itn, anorm, acond, rnorm, arnorm, xnorm, ...
 ruser, inform, ifail] = ...
    f04qa( ...
      n, b, @aprod, damp, atol, btol, conlim, itnlim, msglvl, 'user', a);

fprintf('\n\nSolution is x:\n');
fprintf('%9.3f%9.3f%9.3f%9.3f%9.3f%9.3f\n', x);
fprintf('\n\nNorm of the residual = %12.2e\n', rnorm);

function [mode, x, y, user] = aprod(mode, m, n, x, y, user)
% The matrix A is passed to aprod via the user argument.
if (mode == 1)
  y = y + user*x;               % y := y + A*x
else
  x = x + transpose(user)*y;    % x := x + A'*y
end
```
```f04qa example results

Output from sparse linear least squares solver.

Least squares solution of  A*x = b

The matrix A has     13 rows and     12 cols
The damping parameter is  damp =      0.00E+00

atol =     1.00E-05     conlim =      1.00E+06
btol =     1.00E-04     itnlim =    100

No. of iterations  =      2
stopping condition =      2
( The least squares solution is good enough, given atol )

Actual        norm(rbar), norm(x)            1.15E-02    4.33E+00
Norm(transpose(Abar)*rbar)                6.98E-15
Estimates of  norm(Abar), cond(Abar)         4.12E+00    2.45E+00

Solution is x:
1.250    1.250    1.250    1.247    1.247    1.250
1.250    1.247    1.247    1.250    1.250    1.250

Norm of the residual =     1.15e-02
```