
# NAG Toolbox: nag_opt_lsq_uncon_mod_func_comp (e04fc)

## Purpose

nag_opt_lsq_uncon_mod_func_comp (e04fc) is a comprehensive algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. No derivatives are required.
The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

## Syntax

[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = e04fc(m, lsqfun, lsqmon, x, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'xtol', xtol, 'stepmx', stepmx, 'user', user)
[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = nag_opt_lsq_uncon_mod_func_comp(m, lsqfun, lsqmon, x, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'xtol', xtol, 'stepmx', stepmx, 'user', user)
Note: the interface to this routine has changed since earlier releases of the toolbox:
- At Mark 24: maxcal was made optional; w and iw were removed from the interface; user was added to the interface.
- At Mark 22: liw and lw were removed from the interface.

## Description

nag_opt_lsq_uncon_mod_func_comp (e04fc) is essentially identical to the function LSQNDN in the NPL Algorithms Library. It is applicable to problems of the form
 $\mathrm{Minimize}\ F\left(x\right)=\sum_{i=1}^{m}{f}_{i}{\left(x\right)}^{2}$
where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.)
You must supply lsqfun to calculate the values of the ${f}_{i}\left(x\right)$ at any point $x$.
From a starting point ${x}^{\left(1\right)}$ supplied by you, the function generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots$, which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by
 ${x}^{\left(k+1\right)}={x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}$
where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha }^{\left(k\right)}$ is chosen such that $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ is approximately a minimum with respect to ${\alpha }^{\left(k\right)}$.
The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is an approximation to the Gauss–Newton direction; otherwise additional function evaluations are made so as to enable ${p}^{\left(k\right)}$ to be a more accurate approximation to the Newton direction.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
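In outline, the iteration above can be sketched as follows. This is a minimal illustrative fragment, not the algorithm used inside nag_opt_lsq_uncon_mod_func_comp (e04fc): it builds a forward-difference Jacobian (no derivatives are supplied), takes the Gauss–Newton direction, and halves ${\alpha }^{\left(k\right)}$ until the sum of squares decreases. The residual function `fres` is a made-up test problem.

```matlab
% Illustrative damped Gauss-Newton loop (not the NAG implementation).
fres = @(x) [x(1) - 1; 10*(x(2) - x(1)^2)];  % made-up residuals, m = 2, n = 2
x = [-1.2; 1.0];                             % starting point x^(1)
for k = 1:100
    f = fres(x);
    % Forward-difference approximation to the Jacobian J(i,j) = dfi/dxj
    n = numel(x);  h = sqrt(eps);
    J = zeros(numel(f), n);
    for j = 1:n
        e = zeros(n, 1);  e(j) = h;
        J(:, j) = (fres(x + e) - f) / h;
    end
    p = -(J \ f);                            % Gauss-Newton search direction
    % Crude backtracking choice of alpha: halve until F decreases
    alpha = 1;
    while sum(fres(x + alpha*p).^2) >= sum(f.^2) && alpha > 1e-12
        alpha = alpha / 2;
    end
    x = x + alpha*p;                         % x^(k+1) = x^(k) + alpha^(k)*p^(k)
    if norm(alpha*p) < 1e-10, break, end
end
% For this test problem the minimizer is [1; 1]
```

A production code such as e04fc augments this outline with the switch to a more accurate Newton-direction approximation and a safeguarded line search, as described above.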

## References

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992

## Parameters

### Compulsory Input Parameters

1:     $\mathrm{m}$ – int64/int32/nag_int scalar
The number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$.
Constraint: $1\le {\mathbf{n}}\le {\mathbf{m}}$.
2:     $\mathrm{lsqfun}$ – function handle or string containing name of m-file
lsqfun must calculate the vector of values ${f}_{i}\left(x\right)$ at any point $x$. (However, if you do not wish to calculate the residuals at a particular $x$, there is the option of setting an argument to cause nag_opt_lsq_uncon_mod_func_comp (e04fc) to terminate immediately.)
[iflag, fvec, user] = lsqfun(iflag, m, n, xc, user)

Input Parameters

1:     $\mathrm{iflag}$ – int64/int32/nag_int scalar
Has a non-negative value.
2:     $\mathrm{m}$ – int64/int32/nag_int scalar
$m$, the number of residuals.
3:     $\mathrm{n}$ – int64/int32/nag_int scalar
$n$, the number of variables.
4:     $\mathrm{xc}\left({\mathbf{n}}\right)$ – double array
The point $x$ at which the values of the ${f}_{i}$ are required.
5:     $\mathrm{user}$ – Any MATLAB object
lsqfun is called from nag_opt_lsq_uncon_mod_func_comp (e04fc) with the object supplied to nag_opt_lsq_uncon_mod_func_comp (e04fc).

Output Parameters

1:     $\mathrm{iflag}$ – int64/int32/nag_int scalar
If lsqfun resets iflag to some negative number, nag_opt_lsq_uncon_mod_func_comp (e04fc) will terminate immediately, with ifail set to your setting of iflag.
2:     $\mathrm{fvec}\left({\mathbf{m}}\right)$ – double array
Unless iflag is reset to a negative number, ${\mathbf{fvec}}\left(\mathit{i}\right)$ must contain the value of ${f}_{\mathit{i}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$.
3:     $\mathrm{user}$ – Any MATLAB object
Note:  lsqfun should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_func_comp (e04fc).
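As a concrete (hypothetical) illustration, a minimal lsqfun for a two-parameter exponential model might look like the sketch below. The model and the use of user to carry the data (fields `user.t` and `user.y`) are assumptions for the example; the essential structure is that iflag and user are passed through, and that resetting iflag to a negative value requests immediate termination, as described above.

```matlab
function [iflag, fvec, user] = lsqfun(iflag, m, n, xc, user)
% Hypothetical residuals for the model y ~ x1*exp(x2*t).
% Data are assumed to be supplied in user.t and user.y (m-by-1 each).
fvec = zeros(m, 1);
if any(~isfinite(xc))
    iflag = int64(-1);   % ask e04fc to terminate immediately
    return
end
for i = 1:double(m)
    fvec(i) = xc(1)*exp(xc(2)*user.t(i)) - user.y(i);
end
```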
3:     $\mathrm{lsqmon}$ – function handle or string containing name of m-file
If ${\mathbf{iprint}}\ge 0$, you must supply lsqmon which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its arguments.
If ${\mathbf{iprint}}<0$, the string nag_opt_lsq_dummy_lsqmon (e04fdz) can be used as lsqmon.
[user] = lsqmon(m, n, xc, fvec, fjac, ldfjac, s, igrade, niter, nf, user)

Input Parameters

1:     $\mathrm{m}$ – int64/int32/nag_int scalar
$m$, the number of residuals.
2:     $\mathrm{n}$ – int64/int32/nag_int scalar
$n$, the number of variables.
3:     $\mathrm{xc}\left({\mathbf{n}}\right)$ – double array
The coordinates of the current point $x$.
4:     $\mathrm{fvec}\left({\mathbf{m}}\right)$ – double array
The values of the residuals ${f}_{i}$ at the current point $x$.
5:     $\mathrm{fjac}\left(\mathit{ldfjac},{\mathbf{n}}\right)$ – double array
${\mathbf{fjac}}\left(\mathit{i},\mathit{j}\right)$ contains the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the current point $x$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
6:     $\mathrm{ldfjac}$ – int64/int32/nag_int scalar
The first dimension of the array fjac.
7:     $\mathrm{s}\left({\mathbf{n}}\right)$ – double array
The singular values of the current approximation to the Jacobian matrix. Thus s may be useful as information about the structure of your problem.
8:     $\mathrm{igrade}$ – int64/int32/nag_int scalar
nag_opt_lsq_uncon_mod_func_comp (e04fc) estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and igrade gives its current value.
9:     $\mathrm{niter}$ – int64/int32/nag_int scalar
The number of iterations which have been performed in nag_opt_lsq_uncon_mod_func_comp (e04fc).
10:   $\mathrm{nf}$ – int64/int32/nag_int scalar
The number of times that lsqfun has been called so far. (However, for intermediate calls of lsqmon, nf is calculated on the assumption that the latest linear search has been successful. If this is not the case, then the $n$ evaluations allowed for approximating the Jacobian at the new point will not in fact have been made. nf will be accurate at the final call of lsqmon.)
11:   $\mathrm{user}$ – Any MATLAB object
lsqmon is called from nag_opt_lsq_uncon_mod_func_comp (e04fc) with the object supplied to nag_opt_lsq_uncon_mod_func_comp (e04fc).

Output Parameters

1:     $\mathrm{user}$ – Any MATLAB object
Note:  you should normally print the sum of squares of residuals, so as to be able to examine the sequence of values of $F\left(x\right)$ mentioned in Accuracy. It is usually helpful to print xc, the estimated gradient of the sum of squares, niter and nf.
4:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.

### Optional Input Parameters

1:     $\mathrm{n}$ – int64/int32/nag_int scalar
Default: the dimension of the array x.
The number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$.
Constraint: $1\le {\mathbf{n}}\le {\mathbf{m}}$.
2:     $\mathrm{iprint}$ – int64/int32/nag_int scalar
Default: $1$
The frequency with which lsqmon is to be called.
If ${\mathbf{iprint}}>0$, lsqmon is called once every iprint iterations and just before exit from nag_opt_lsq_uncon_mod_func_comp (e04fc).
If ${\mathbf{iprint}}=0$, lsqmon is just called at the final point.
If ${\mathbf{iprint}}<0$, lsqmon is not called at all.
iprint should normally be set to a small positive number.
3:     $\mathrm{maxcal}$ – int64/int32/nag_int scalar
Default: ${\mathbf{maxcal}}=400×n$
The limit you set on the number of times that lsqfun may be called by nag_opt_lsq_uncon_mod_func_comp (e04fc). There will be an error exit (see Error Indicators and Warnings) after maxcal calls of lsqfun.
Constraint: ${\mathbf{maxcal}}\ge 1$.
4:     $\mathrm{eta}$ – double scalar
Suggested value: ${\mathbf{eta}}=0.5$ (${\mathbf{eta}}=0.0$ if ${\mathbf{n}}=1$).
Default:
• if ${\mathbf{n}}=1$, $0.0$;
• otherwise $0.5$.
Every iteration of nag_opt_lsq_uncon_mod_func_comp (e04fc) involves a linear minimization, i.e., minimization of $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to ${\alpha }^{\left(k\right)}$.
Specifies how accurately the linear minimizations are to be performed. The minimum with respect to ${\alpha }^{\left(k\right)}$ will be located more accurately for small values of eta (say, $0.01$) than for large values (say, $0.9$). Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_lsq_uncon_mod_func_comp (e04fc), they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
Constraint: $0.0\le {\mathbf{eta}}<1.0$.
5:     $\mathrm{xtol}$ – double scalar
Suggested value: if $F\left(x\right)$ and the variables are scaled roughly as described in Further Comments and $\epsilon$ is the machine precision, then a setting of order ${\mathbf{xtol}}=\sqrt{\epsilon }$ will usually be appropriate. If xtol is set to $0.0$ or some positive value less than $10\epsilon$, nag_opt_lsq_uncon_mod_func_comp (e04fc) will use $10\epsilon$ instead of xtol, since $10\epsilon$ is probably the smallest reasonable setting.
Default: $0.0$
The accuracy in $x$ to which the solution is required.
If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that
 $\|{x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\|<{\mathbf{xtol}}\times \left(1.0+\|{x}_{\mathrm{true}}\|\right)$
where $‖y‖=\sqrt{\sum _{j=1}^{n}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus and if ${\mathbf{xtol}}=\text{1.0e−5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see Accuracy.)
Constraint: ${\mathbf{xtol}}\ge 0.0$.
6:     $\mathrm{stepmx}$ – double scalar
Default: $100000.0$
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.) nag_opt_lsq_uncon_mod_func_comp (e04fc) will ensure that, for each iteration,
 $\sum_{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\le {\left({\mathbf{stepmx}}\right)}^{2},$
where $k$ is the iteration number. Thus, if the problem has more than one solution, nag_opt_lsq_uncon_mod_func_comp (e04fc) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of stepmx can lead to inefficiency.
Constraint: ${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.
7:     $\mathrm{user}$ – Any MATLAB object
user is not used by nag_opt_lsq_uncon_mod_func_comp (e04fc), but is passed to lsqfun and lsqmon. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.

### Output Parameters

1:     $\mathrm{x}\left({\mathbf{n}}\right)$ – double array
The final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the estimated position of the minimum.
2:     $\mathrm{fsumsq}$ – double scalar
The value of $F\left(x\right)$, the sum of squares of the residuals ${f}_{i}\left(x\right)$, at the final point given in x.
3:     $\mathrm{fvec}\left({\mathbf{m}}\right)$ – double array
The value of the residual ${f}_{\mathit{i}}\left(x\right)$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$.
4:     $\mathrm{fjac}\left(\mathit{ldfjac},{\mathbf{n}}\right)$ – double array
The estimate of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
5:     $\mathrm{s}\left({\mathbf{n}}\right)$ – double array
The singular values of the estimated Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.
6:     $\mathrm{v}\left(\mathit{ldv},{\mathbf{n}}\right)$ – double array
The matrix $V$ associated with the singular value decomposition
 $J=US{V}^{\mathrm{T}}$
of the estimated Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.
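The stated relationship, $J=US{V}^{\mathrm{T}}$ with $V$ the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$, can be checked numerically with MATLAB's own svd function; the Jacobian below is an arbitrary made-up example, not output from e04fc.

```matlab
J = [1 2; 3 4; 5 6];          % arbitrary example "Jacobian", m = 3, n = 2
[U, S, V] = svd(J, 'econ');   % economy-size SVD: J = U*S*V'
% Columns of V are orthonormal eigenvectors of J'*J, with eigenvalues
% equal to the squared singular values (S is diagonal, so S.^2 = S^2):
eigres  = norm(J'*J*V - V*S.^2);   % ~ 0
orthres = norm(V'*V - eye(2));     % ~ 0
```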
7:     $\mathrm{niter}$ – int64/int32/nag_int scalar
The number of iterations which have been performed in nag_opt_lsq_uncon_mod_func_comp (e04fc).
8:     $\mathrm{nf}$ – int64/int32/nag_int scalar
The number of times that the residuals have been evaluated (i.e., number of calls of lsqfun).
9:     $\mathrm{user}$ – Any MATLAB object
10:   $\mathrm{ifail}$ – int64/int32/nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Note: nag_opt_lsq_uncon_mod_func_comp (e04fc) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

W  ${\mathbf{ifail}}<0$
A negative value of ifail indicates an exit from nag_opt_lsq_uncon_mod_func_comp (e04fc) because you have set iflag negative in lsqfun. The value of ifail will be the same as your setting of iflag.
${\mathbf{ifail}}=1$
 On entry, ${\mathbf{n}}<1$,
 or ${\mathbf{m}}<{\mathbf{n}}$,
 or ${\mathbf{maxcal}}<1$,
 or ${\mathbf{eta}}<0.0$ or ${\mathbf{eta}}\ge 1.0$,
 or ${\mathbf{xtol}}<0.0$,
 or ${\mathbf{stepmx}}<{\mathbf{xtol}}$,
 or $\mathit{ldfjac}<{\mathbf{m}}$,
 or $\mathit{ldv}<{\mathbf{n}}$,
 or $\mathit{liw}<1$,
 or $\mathit{lw}<6\times {\mathbf{n}}+{\mathbf{m}}\times {\mathbf{n}}+2\times {\mathbf{m}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2$ when ${\mathbf{n}}>1$,
 or $\mathit{lw}<7+3\times {\mathbf{m}}$ when ${\mathbf{n}}=1$.
When this exit occurs, no values will have been assigned to fsumsq, or to the elements of fvec, fjac, s or v.
${\mathbf{ifail}}=2$
There have been maxcal calls of lsqfun. If steady reductions in the sum of squares, $F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.
W  ${\mathbf{ifail}}=3$
The conditions for a minimum have not all been satisfied, but a lower point could not be found. This could be because xtol has been set so small that rounding errors in the evaluation of the residuals make attainment of the convergence conditions impossible.
${\mathbf{ifail}}=4$
The method for computing the singular value decomposition of the estimated Jacobian matrix has failed to converge in a reasonable number of sub-iterations. It may be worth applying nag_opt_lsq_uncon_mod_func_comp (e04fc) again starting with an initial approximation which is not too close to the point at which the failure occurred.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
The values ${\mathbf{ifail}}={\mathbf{2}}$, ${\mathbf{3}}$ or ${\mathbf{4}}$ may also be caused by mistakes in lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.

## Accuracy

A successful exit (${\mathbf{ifail}}={\mathbf{0}}$) is made from nag_opt_lsq_uncon_mod_func_comp (e04fc) when (B1, B2 and B3) or B4 or B5 hold, where
 $\mathrm{B1}\equiv {\alpha }^{\left(k\right)}\times \|{p}^{\left(k\right)}\|<\left({\mathbf{xtol}}+\epsilon \right)\times \left(1.0+\|{x}^{\left(k\right)}\|\right)$
 $\mathrm{B2}\equiv |{F}^{\left(k\right)}-{F}^{\left(k-1\right)}|<{\left({\mathbf{xtol}}+\epsilon \right)}^{2}\times \left(1.0+{F}^{\left(k\right)}\right)$
 $\mathrm{B3}\equiv \|{g}^{\left(k\right)}\|<{\epsilon }^{1/4}\times \left(1.0+{F}^{\left(k\right)}\right)$
 $\mathrm{B4}\equiv {F}^{\left(k\right)}<{\epsilon }^{2}$
 $\mathrm{B5}\equiv \|{g}^{\left(k\right)}\|<{\left(\epsilon \times {F}^{\left(k\right)}\right)}^{1/2}$
and where $‖.‖$ and $\epsilon$ are as defined in Arguments, and ${F}^{\left(k\right)}$ and ${g}^{\left(k\right)}$ are the values of $F\left(x\right)$ and its vector of estimated first derivatives at ${x}^{\left(k\right)}$. If ${\mathbf{ifail}}={\mathbf{0}}$ then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of ${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by xtol.
If ${\mathbf{ifail}}={\mathbf{3}}$, then ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If
 (a) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, and
 (b) $g{\left({x}_{\mathrm{sol}}\right)}^{\mathrm{T}}g\left({x}_{\mathrm{sol}}\right)<10\epsilon$, where $\mathrm{T}$ denotes transpose,
then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the minimum. When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The values of $F\left({x}^{\left(k\right)}\right)$ can be calculated in lsqmon, and the vector $g\left({x}_{\mathrm{sol}}\right)$ can be calculated from the contents of fvec and fjac on exit from nag_opt_lsq_uncon_mod_func_comp (e04fc).
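For check (b), the gradient of $F\left(x\right)=\sum {f}_{i}{\left(x\right)}^{2}$ is $g=2{J}^{\mathrm{T}}f$, so it can be assembled directly from fvec and fjac. The sketch below uses made-up values in place of the arrays actually returned by e04fc.

```matlab
% Hypothetical stand-ins for the outputs of e04fc at x_sol:
fvec = [0.5; -0.25; 0.1];        % residuals f_i(x_sol), m = 3
fjac = [1 0; 0 1; 1 1];          % Jacobian estimate, m-by-n
% Gradient of F(x) = sum f_i(x)^2 at x_sol:
g   = 2 * (fjac' * fvec);
gtg = g' * g;                    % quantity compared against 10*eps in check (b)
```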
Further suggestions about confirmation of a computed solution are given in the E04 Chapter Introduction.

## Further Comments

The number of iterations required depends on the number of variables, the number of residuals, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_uncon_mod_func_comp (e04fc) varies, but for $m\gg n$ is approximately $n\times {m}^{2}+\mathit{O}\left({n}^{3}\right)$. In addition, each iteration makes at least $n+1$ calls of lsqfun. So, unless the residuals can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun.
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_func_comp (e04fc) will take less computer time.
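One way to apply this scaling advice without rewriting the residual code is to minimize in scaled variables ${z}_{j}={x}_{j}/{s}_{j}$, where the ${s}_{j}$ are guessed typical magnitudes of the variables. The sketch below is an assumption-laden illustration: xscale, resid and the starting point are all made up for the example.

```matlab
% Minimize in scaled variables z = x ./ xscale so each z_j is of order 1.
xscale = [100; 0.01];                         % hypothetical typical magnitudes
resid  = @(x) [x(1)/100 - 1; 100*x(2) - 2];   % hypothetical residual function
% Wrapper with the lsqfun signature; evaluates residuals at unscaled x:
lsqfun_scaled = @(iflag, m, n, zc, user) ...
    deal(iflag, resid(zc(:) .* xscale), user);
x0 = [120; 0.03];                             % unscaled starting point
z0 = x0 ./ xscale;                            % scaled starting point
% e04fc would then be called with lsqfun_scaled and z0; on exit,
% x = z .* xscale recovers the solution in the original variables.
```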
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to nag_opt_lsq_uncon_covariance (e04yc), using information returned in the arrays s and v. See nag_opt_lsq_uncon_covariance (e04yc) for further details.

## Example

This example finds least squares estimates of ${x}_{1},{x}_{2}$ and ${x}_{3}$ in the model
 $y={x}_{1}+\frac{{t}_{1}}{{x}_{2}{t}_{2}+{x}_{3}{t}_{3}}$
using the $15$ sets of data given in the following table.
| $y$  | ${t}_{1}$ | ${t}_{2}$ | ${t}_{3}$ |
|------|------|------|-----|
| 0.14 | 1.0  | 15.0 | 1.0 |
| 0.18 | 2.0  | 14.0 | 2.0 |
| 0.22 | 3.0  | 13.0 | 3.0 |
| 0.25 | 4.0  | 12.0 | 4.0 |
| 0.29 | 5.0  | 11.0 | 5.0 |
| 0.32 | 6.0  | 10.0 | 6.0 |
| 0.35 | 7.0  | 9.0  | 7.0 |
| 0.39 | 8.0  | 8.0  | 8.0 |
| 0.37 | 9.0  | 7.0  | 7.0 |
| 0.58 | 10.0 | 6.0  | 6.0 |
| 0.73 | 11.0 | 5.0  | 5.0 |
| 0.96 | 12.0 | 4.0  | 4.0 |
| 1.34 | 13.0 | 3.0  | 3.0 |
| 2.10 | 14.0 | 2.0  | 2.0 |
| 4.39 | 15.0 | 1.0  | 1.0 |
The program uses $\left(0.5,1.0,1.5\right)$ as the initial guess at the position of the minimum.
```function e04fc_example

fprintf('e04fc example results\n\n');

global y t;

m  = int64(15);

% Function parameters (data for fitting model)
y=[0.14,0.18,0.22,0.25,0.29,...
0.32,0.35,0.39,0.37,0.58,...
0.73,0.96,1.34,2.10,4.39];
t = [ 1.0  15.0 1.0;
2.0  14.0 2.0;
3.0  13.0 3.0;
4.0  12.0 4.0;
5.0  11.0 5.0;
6.0  10.0 6.0;
7.0   9.0 7.0;
8.0   8.0 8.0;
9.0   7.0 7.0;
10.0   6.0 6.0;
11.0   5.0 5.0;
12.0   4.0 4.0;
13.0   3.0 3.0;
14.0   2.0 2.0;
15.0   1.0 1.0];

n  = 3;
% Initial guess
x  = [0.5; 1; 1.5];

[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = ...
e04fc(m, @lsqfun, @lsqmon, x);

fprintf('\nBest fit model parameters are:\n');
for i = 1:n
fprintf('        x_%d = %10.3f\n',i,x(i));
end
fprintf('\nResiduals for observed data:\n');
fprintf('  %8.4f  %8.4f  %8.4f  %8.4f  %8.4f\n',fvec);
fprintf('\nSum of squares of residuals:\n');
disp(fsumsq);

function [iflag,fvecc, user] = lsqfun(iflag, m, n, xc, user)

global y t;

fvecc = zeros(m,1);
for i=1:double(m)
fvecc(i) = xc(1) + t(i,1)/(xc(2)*t(i,2)+xc(3)*t(i,3)) - y(i);
end

function [user] = lsqmon(m, n, xc, fvecc, fjacc, ljc, ...
                         s, igrade, niter, nf, user)
if (niter == 0)
fprintf('   Itn         F evals        SUMSQ  \n');
end

fsumsq=dot(fvecc,fvecc);
fprintf('  %3d          %3d        %10.6f\n', niter, nf, fsumsq);
```
```e04fc example results

Itn         F evals        SUMSQ
0            4         10.210374
1            8          0.198730
2           12          0.009232
3           16          0.008215
4           25          0.008215
5           31          0.008215

Best fit model parameters are:
x_1 =      0.082
x_2 =      1.133
x_3 =      2.344

Residuals for observed data:
-0.0059   -0.0003    0.0003    0.0065   -0.0008
-0.0013   -0.0045   -0.0200    0.0822   -0.0182
-0.0148   -0.0147   -0.0112   -0.0042    0.0068

Sum of squares of residuals:
0.0082

```