Integer types: int32, int64, nag_int

nag_opt_lsq_uncon_mod_func_easy (e04fy) is an easy-to-use algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables ($m\ge n$). No derivatives are required.

It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

nag_opt_lsq_uncon_mod_func_easy (e04fy) is essentially identical to the function LSNDN1 in the NPL Algorithms Library. It is applicable to problems of the form

$$\mathrm{Minimize}\quad F\left(x\right)=\sum _{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2}$$

where $x={({x}_{1},{x}_{2},\dots ,{x}_{n})}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}(x)$ are often referred to as ‘residuals’.)

You must supply a function to evaluate the functions ${f}_{i}(x)$ at any point $x$.

From a starting point supplied by you, a sequence of points is generated which is intended to converge to a local minimum of the sum of squares. These points are generated using estimates of the curvature of $F(x)$.
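As a rough sketch (not the exact algorithm of e04fy, which uses the corrected Gauss-Newton method of Gill and Murray cited in the References), least squares methods of this type take steps of Gauss-Newton form:

$$x_{k+1}=x_k+\alpha_k p_k,\qquad J(x_k)^{\mathrm{T}}J(x_k)\,p_k=-J(x_k)^{\mathrm{T}}f(x_k),$$

where $f={(f_1,\dots,f_m)}^{\mathrm{T}}$, $J$ is its Jacobian and $\alpha_k$ is a step length. Since no derivatives are supplied, e04fy builds the required derivative information from function values alone.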

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem *SIAM J. Numer. Anal.* **15** 977–992

- 1: m – int64int32nag_int scalar
- The number $m$ of residuals, ${f}_{i}(x)$, and the number $n$ of variables, ${x}_{j}$.
- 2: lsfun1 – function handle or string containing name of m-file
- You must supply this function to calculate the vector of values ${f}_{i}(x)$ at any point $x$. It should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_func_easy (e04fy) (see the E04 Chapter Introduction).

  [fvec, user] = lsfun1(m, n, xc, user)

  **Input Parameters**

  - 1: m – int64int32nag_int scalar
  - $m$, the number of residuals.
  - 2: n – int64int32nag_int scalar
  - $n$, the number of variables.
  - 3: xc(n) – double array
  - The point $x$ at which the values of the ${f}_{i}$ are required.
  - 4: user – Any MATLAB object
  - lsfun1 is called from nag_opt_lsq_uncon_mod_func_easy (e04fy) with the object supplied to nag_opt_lsq_uncon_mod_func_easy (e04fy).

  **Output Parameters**

  - 1: fvec(m) – double array
  - The values of the residuals ${f}_{i}(x)$ at the point $x$, for $i=1,2,\dots,m$.
  - 2: user – Any MATLAB object

- 3: x(n) – double array
- n, the dimension of the array, must satisfy the constraint $1\le {\mathbf{n}}\le {\mathbf{m}}$. ${\mathbf{x}}(j)$ must be set to a guess at the $j$th component of the position of the minimum, for $j=1,2,\dots,n$.

- 1: n – int64int32nag_int scalar
- *Default*: the dimension of the array x. The number $n$ of variables, ${x}_{j}$.
- 2: user – Any MATLAB object

Input parameters omitted from the MATLAB interface: w, lw, iuser, ruser.

- 1: x(n) – double array
- The final point generated by the algorithm; if ifail = 0 on exit, x is the position of the minimum.
- 2: fsumsq – double scalar
- The value of the sum of squares, $F(x)$, corresponding to the final point stored in x.
- 3: user – Any MATLAB object
- 4: ifail – int64int32nag_int scalar
- ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

Errors or warnings detected by the function:

Cases prefixed with `W` are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

ifail = 1
- On entry, ${\mathbf{n}}<1$, or ${\mathbf{m}}<{\mathbf{n}}$, or $\mathit{lw}<7\times {\mathbf{n}}+{\mathbf{n}}\times {\mathbf{n}}+2\times {\mathbf{m}}\times {\mathbf{n}}+3\times {\mathbf{m}}+{\mathbf{n}}\times ({\mathbf{n}}-1)/2$ when ${\mathbf{n}}>1$, or $\mathit{lw}<9+5\times {\mathbf{m}}$ when ${\mathbf{n}}=1$.

`W` ifail = 3
- The final point does not satisfy the conditions for acceptance as a minimum, but no lower point could be found.

ifail = 4
- An auxiliary function has been unable to complete a singular value decomposition in a reasonable number of sub-iterations.

`W` ifail = 5, 6, 7 or 8
- There is some doubt about whether the point $x$ found by nag_opt_lsq_uncon_mod_func_easy (e04fy) is a minimum of $F(x)$. The degree of confidence in the result decreases as ifail increases. Thus, when ifail = 5 it is probable that the final $x$ gives a good estimate of the position of a minimum, but when ifail = 8 it is very unlikely that the function has found a minimum.

If you are not satisfied with the result (e.g., because ifail lies between 3 and 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. Repeated failure may indicate some defect in the formulation of the problem.
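As a quick arithmetic check of the workspace bound quoted in the entry-check error condition above, the following sketch evaluates the minimum length (Python purely for illustration; the helper name is hypothetical, and the MATLAB interface allocates lw internally):

```python
def min_lw(m, n):
    # Minimum workspace length lw required by the underlying routine,
    # per the entry-check error condition above.
    if n > 1:
        return 7 * n + n * n + 2 * m * n + 3 * m + n * (n - 1) // 2
    return 9 + 5 * m

print(min_lw(15, 3))  # dimensions of the worked example below: 168
```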

If the problem is reasonably well scaled and a successful exit is made then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in the components of $x$ and between $t-1$ (if $F(x)$ is of order $1$ at the minimum) and $2t-2$ (if $F(x)$ is close to zero at the minimum) decimals accuracy in $F(x)$.

The number of iterations required depends on the number of variables, the number of residuals and their behaviour, and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_uncon_mod_func_easy (e04fy) varies, but for $m\gg n$ is approximately $n\times {m}^{2}+O({n}^{3})$. In addition, each iteration makes at least $n+1$ calls of lsfun1. So, unless the residuals can be evaluated very quickly, the run time will be dominated by the time spent in lsfun1.
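The per-iteration cost estimate above can be made concrete with a small sketch (Python for illustration only; the function names are hypothetical):

```python
def approx_mults_per_iter(m, n):
    # Leading-order multiplication count per iteration for m >> n:
    # n * m^2, ignoring the O(n^3) term.
    return n * m ** 2

def min_lsfun1_calls_per_iter(n):
    # Each iteration makes at least n + 1 calls of lsfun1.
    return n + 1

# For the worked example below (m = 15, n = 3):
print(approx_mults_per_iter(15, 3), min_lsfun1_calls_per_iter(3))
```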

Ideally, the problem should be scaled so that the minimum value of the sum of squares is in the range $(0,+1)$, and so that at points a unit distance away from the solution the sum of squares is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_func_easy (e04fy) will take less computer time.

When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to nag_opt_lsq_uncon_covariance (e04yc), using information returned in segments of the workspace array w. See nag_opt_lsq_uncon_covariance (e04yc) for further details.

Open in the MATLAB editor: nag_opt_lsq_uncon_mod_func_easy_example

function nag_opt_lsq_uncon_mod_func_easy_example
m = int64(15);
x = [0.5; 1; 1.5];
y = [0.14, 0.18, 0.22, 0.25, 0.29, 0.32, 0.35, 0.39, ...
     0.37, 0.58, 0.73, 0.96, 1.34, 2.10, 4.39];
t = [ 1.0, 15.0, 1.0;   2.0, 14.0, 2.0;   3.0, 13.0, 3.0;
      4.0, 12.0, 4.0;   5.0, 11.0, 5.0;   6.0, 10.0, 6.0;
      7.0,  9.0, 7.0;   8.0,  8.0, 8.0;   9.0,  7.0, 7.0;
     10.0,  6.0, 6.0;  11.0,  5.0, 5.0;  12.0,  4.0, 4.0;
     13.0,  3.0, 3.0;  14.0,  2.0, 2.0;  15.0,  1.0, 1.0];
user = {y, t, 3};
[xOut, fsumsq, user, ifail] = ...
    nag_opt_lsq_uncon_mod_func_easy(m, @lsfun1, x, 'user', user)

function [fvecc, user] = lsfun1(m, n, xc, user)
% Residuals f_i(x) = x(1) + t(i,1)/(x(2)*t(i,2) + x(3)*t(i,3)) - y(i),
% where y is in user{1} and t is in user{2}.
fvecc = zeros(m, 1);
for i = 1:double(m)
  fvecc(i) = xc(1) + user{2}(i,1)/(xc(2)*user{2}(i,2) + xc(3)*user{2}(i,3)) - user{1}(i);
end

xOut =
    0.0824
    1.1330
    2.3437
fsumsq =
    0.0082
user =
    [1x15 double]    [15x3 double]    [3]
ifail =
    0
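For readers without the NAG Toolbox, the same data-fitting problem can be reproduced with SciPy's least_squares. This is a hedged sketch in Python using an alternative solver, not e04fy's own algorithm, so only the solution should be expected to agree:

```python
import numpy as np
from scipy.optimize import least_squares

# Data from the worked example: t(i,:) = [i, 16-i, min(i, 16-i)].
y = np.array([0.14, 0.18, 0.22, 0.25, 0.29, 0.32, 0.35, 0.39,
              0.37, 0.58, 0.73, 0.96, 1.34, 2.10, 4.39])
t = np.array([[i, 16 - i, min(i, 16 - i)] for i in range(1, 16)], dtype=float)

def residuals(x):
    # f_i(x) = x1 + t_i1 / (x2*t_i2 + x3*t_i3) - y_i
    return x[0] + t[:, 0] / (x[1] * t[:, 1] + x[2] * t[:, 2]) - y

res = least_squares(residuals, x0=[0.5, 1.0, 1.5])
fsumsq = np.sum(res.fun ** 2)  # same quantity as e04fy's fsumsq
print(res.x, fsumsq)
```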


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013