# NAG FL Interface: e04ggf (handle_solve_bxnl)

Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.


## 1 Purpose

e04ggf is a bound-constrained nonlinear least squares trust region solver (BXNL) from the NAG optimization modelling suite, aimed at small to medium-scale problems.

## 2 Specification

Fortran Interface
```
Subroutine e04ggf ( handle, lsqfun, lsqgrd, lsqhes, lsqhprd, monit, &
                    nvar, x, nres, rx, rinfo, stats, iuser, ruser,  &
                    cpuser, ifail)
Integer, Intent (In)               :: nvar, nres
Integer, Intent (Inout)            :: iuser(*), ifail
Real (Kind=nag_wp), Intent (Inout) :: x(nvar), ruser(*)
Real (Kind=nag_wp), Intent (Out)   :: rx(nres), rinfo(100), stats(100)
Type (c_ptr), Intent (In)          :: handle, cpuser
External                           :: lsqfun, lsqgrd, lsqhes, lsqhprd, monit
```
C Header Interface
```
#include <nag.h>
void e04ggf_ (void **handle,
     void (NAG_CALL *lsqfun)(const Integer *nvar, const double x[], const Integer *nres, double rx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser),
     void (NAG_CALL *lsqgrd)(const Integer *nvar, const double x[], const Integer *nres, const Integer *nnzrd, double rdx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser),
     void (NAG_CALL *lsqhes)(const Integer *nvar, const double x[], const Integer *nres, const double lambda[], double hx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser),
     void (NAG_CALL *lsqhprd)(const Integer *nvar, const double x[], const double y[], const Integer *nres, double hxy[], Integer *inform, Integer iuser[], double ruser[], void **cpuser),
     void (NAG_CALL *monit)(const Integer *nvar, const double x[], Integer *inform, const double rinfo[], const double stats[], Integer iuser[], double ruser[], void **cpuser),
     const Integer *nvar, double x[], const Integer *nres, double rx[],
     double rinfo[], double stats[], Integer iuser[], double ruser[],
     void **cpuser, Integer *ifail)
```
The routine may be called by the names e04ggf or nagf_opt_handle_solve_bxnl.

## 3 Description

e04ggf computes a solution $x$ to the nonlinear least squares problem
 $$\underset{x \in \mathbb{R}^{n_{\mathrm{var}}}}{\mathrm{minimize}} \;\; f(x) = \frac{1}{2}\sum_{i=1}^{n_{\mathrm{res}}} \left[w_i r_i(x)\right]^2 + \frac{\sigma}{p}\|x\|_2^p \quad \text{subject to} \quad l_x \le x \le u_x,$$ (1)
where ${r}_{i}\left(x\right),i=1,\dots ,{n}_{\mathrm{res}}$, are smooth nonlinear functions called residuals, and ${w}_{i},i=1,\dots ,{n}_{\mathrm{res}}$, are weights (by default they are all set to $1$; see Section 9.2 on how to change them). The last term is the regularization term with parameter $\sigma \ge 0$ and power $p>0$ (by default the regularization term is not used; see Section 11 on how to enable it). The constraint elements ${l}_{x}$ and ${u}_{x}$ are ${n}_{\mathrm{var}}$-dimensional vectors defining the bounds on the variables.
Typically in a calibration or data fitting context, the residuals will be defined as the difference between the observed values ${y}_{i}$ at ${t}_{i}$ and the values provided by a nonlinear model $\varphi \left(t;x\right)$, i.e., ${r}_{i}\left(x\right)≔{y}_{i}-\varphi \left({t}_{i};x\right)$. If these residuals (errors) follow a Gaussian distribution, then the values of the optimal parameter vector ${x}^{*}$ are the maximum likelihood estimates. For a description of the various algorithms implemented for solving this problem see Section 11. It is also recommended that you read Section 2.2.3 in the E04 Chapter Introduction.
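As a concrete illustration of the objective in (1) and the data-fitting residuals ${r}_{i}\left(x\right)={y}_{i}-\varphi \left({t}_{i};x\right)$, the following plain-Python sketch (not part of the NAG API; the exponential model $\varphi$ is a hypothetical choice) evaluates the weighted, optionally regularized least squares objective:

```python
import math

def phi(t, x):
    # Hypothetical two-parameter model: phi(t; x) = x1 * exp(x2 * t)
    return x[0] * math.exp(x[1] * t)

def objective(x, t, y, w=None, sigma=0.0, p=2.0):
    # f(x) = 1/2 * sum_i [w_i * r_i(x)]^2 + (sigma/p) * ||x||_2^p,
    # with residuals r_i(x) = y_i - phi(t_i; x), as in equation (1).
    if w is None:
        w = [1.0] * len(y)          # default: all weights equal to 1
    r = [yi - phi(ti, x) for ti, yi in zip(t, y)]
    f = 0.5 * sum((wi * ri) ** 2 for wi, ri in zip(w, r))
    # Regularization term (disabled by default, sigma = 0)
    f += (sigma / p) * math.sqrt(sum(xj * xj for xj in x)) ** p
    return f, r
```

With $\sigma = 0$ and unit weights this reduces to the classical sum-of-squares misfit minimized in maximum likelihood estimation under Gaussian errors.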
e04ggf serves as a solver for problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the NAG optimization modelling suite. First, the problem handle is initialized by calling e04raf. A nonlinear least squares residual objective can be added by calling e04rmf and, optionally, (simple) box constraints can be defined by calling e04rhf. It should be noted that e04ggf internally works with a dense representation of the residual Jacobian even if a sparse structure was defined in the call to e04rmf. Once the problem is fully described, the handle may be passed to the solver e04ggf. When the handle is no longer needed, e04rzf should be called to destroy it and deallocate the memory held within. For more information refer to the NAG optimization modelling suite in Section 3.1 in the E04 Chapter Introduction.
The algorithm is based on the trust region framework and its behaviour can be modified by various optional parameters (see Section 12) which can be set by e04zmf and e04zpf anytime between the initialization of the handle by e04raf and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters. The option getter e04znf can be called to retrieve the current value of any option.
Several options may have a significant impact on the performance of the solver. Even though the defaults were chosen to suit the majority of anticipated problems, it is recommended that you experiment with the option settings to find the most suitable set of options for a particular problem; see Sections 11 and 12 for further details.

## 4 References

Adachi S, Iwata S, Nakatsukasa Y, and Takeda A (2015) Solving the trust region subproblem by a generalized eigenvalue problem Technical report, METR 2015-14. Mathematical Engineering, The University of Tokyo https://www.keisu.t.u-tokyo.ac.jp/data/2015/METR15-14.pdf
Conn A R, Gould N I M and Toint Ph L (2000) Trust Region Methods SIAM, Philadelphia
Gould N I M, Orban D, and Toint Ph L (2003) GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization ACM Transactions on Mathematical Software (TOMS) 29(4) 353–372
Gould N I M, Rees T, and Scott J A (2017) A higher order method for solving nonlinear least-squares problems Technical report, RAL-P-1027-010 RAL Library. STFC Rutherford Appleton Laboratory http://www.numerical.rl.ac.uk/people/rees/pdf/RAL-P-2017-010.pdf
Kanzow C, Yamashita N, and Fukushima M (2004) Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints Journal of Computational and Applied Mathematics 174 375–397
Lanczos C (1956) Applied Analysis 272–280 Prentice Hall, Englewood Cliffs, NJ, USA
Nielsen H B (1999) Damping parameter in Marquardt’s Method Technical report TR IMM-REP-1999-05. Department of Mathematical Modelling, Technical University of Denmark http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Nocedal J and Wright S J (2006) Numerical Optimization (2nd Edition) Springer Series in Operations Research, Springer, New York

## 5 Arguments

1: $\mathbf{handle}$Type (c_ptr) Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04raf) and to hold a problem formulation compatible with e04ggf. It must not be changed between calls to the NAG optimization modelling suite.
2: $\mathbf{lsqfun}$Subroutine, supplied by the user. External Procedure
lsqfun must evaluate the value of the nonlinear residuals, ${r}_{i}\left(x\right)≔{y}_{i}-\varphi \left({t}_{i};x\right),i=1,\dots ,{n}_{\mathrm{res}}$, at a specified point $x$.
The specification of lsqfun is:
Fortran Interface
```
Subroutine lsqfun ( nvar, x, nres, rx, inform, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar, nres
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out)   :: rx(nres)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void lsqfun (const Integer *nvar, const double x[], const Integer *nres, double rx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: $x$, the vector of variable values at which the residuals, ${r}_{i}$, are to be evaluated.
3: $\mathbf{nres}$Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
4: $\mathbf{rx}\left({\mathbf{nres}}\right)$Real (Kind=nag_wp) array Output
On exit: the value of the residual vector, $r\left(x\right)$, evaluated at $x$.
5: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that some residuals could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and request an alternative point. If the rescue procedure fails, the solver will exit with ${\mathbf{ifail}}={\mathbf{25}}$.
6: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
7: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
8: $\mathbf{cpuser}$Type (c_ptr) User Workspace
lsqfun is called with the arguments iuser, ruser and cpuser as supplied to e04ggf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to lsqfun.
lsqfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04ggf is called. Arguments denoted as Input must not be changed by this procedure.
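The inform mechanism lets a residual callback report that the requested point is unusable so the solver can try its rescue procedure. A plain-Python sketch of this convention (not the NAG API; the logarithmic model is a hypothetical choice) might look like:

```python
import math

def lsqfun_sketch(x, t, y):
    # Returns (rx, inform). Setting inform to a negative value mirrors how
    # lsqfun signals that residuals could not be computed at x, prompting
    # the solver to request an alternative point.
    rx = []
    for ti, yi in zip(t, y):
        arg = x[0] + x[1] * ti
        if arg <= 0.0:              # model log(x1 + x2*t) undefined here
            return None, -1        # flag failure instead of returning NaN
        rx.append(yi - math.log(arg))
    return rx, 0                    # success
```

Returning a failure flag is preferable to returning Infinity or NaN, which would also trigger the rescue procedure but less predictably.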
3: $\mathbf{lsqgrd}$Subroutine, supplied by the user. External Procedure
lsqgrd evaluates the residual gradients, $\nabla {r}_{i}\left(x\right)$, at a specified point $x$.
The specification of lsqgrd is:
Fortran Interface
```
Subroutine lsqgrd ( nvar, x, nres, nnzrd, rdx, inform, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar, nres, nnzrd
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar)
Real (Kind=nag_wp), Intent (Inout) :: rdx(nnzrd), ruser(*)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void lsqgrd (const Integer *nvar, const double x[], const Integer *nres, const Integer *nnzrd, double rdx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: $x$, the vector of variable values at which the residual gradients, $\nabla {r}_{i}\left(x\right)$, are to be evaluated.
3: $\mathbf{nres}$Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
4: $\mathbf{nnzrd}$Integer Input
On entry: the number of nonzeros in the first derivative matrix. If ${\mathbf{isparse}}=0$ in the call to e04rmf (recommended use for e04ggf) then ${\mathbf{nnzrd}}={\mathbf{nvar}}*{\mathbf{nres}}$.
5: $\mathbf{rdx}\left({\mathbf{nnzrd}}\right)$Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: the vector containing the nonzero residual gradients evaluated at $x$,
 $$\nabla r(x) = \left[\nabla r_1(x), \nabla r_2(x), \dots, \nabla r_{n_{\mathrm{res}}}(x)\right],$$
where
 $$\nabla r_i(x) = \left[\frac{\partial r_i(x)}{\partial x_1}, \dots, \frac{\partial r_i(x)}{\partial x_{n_{\mathrm{var}}}}\right]^{\mathrm{T}}.$$
The elements must be stored in the same order as the defined sparsity pattern provided in the call to e04rmf.
6: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that the residual gradients could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and request an alternative point. If the rescue procedure fails, the solver will exit with ${\mathbf{ifail}}={\mathbf{25}}$.
7: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
8: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
9: $\mathbf{cpuser}$Type (c_ptr) User Workspace
lsqgrd is called with the arguments iuser, ruser and cpuser as supplied to e04ggf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to lsqgrd.
lsqgrd must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04ggf is called. Arguments denoted as Input must not be changed by this procedure.
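For the dense case (${\mathbf{isparse}}=0$ in e04rmf, so ${\mathbf{nnzrd}}={\mathbf{nvar}}\times{\mathbf{nres}}$), the layout of rdx follows $\nabla r(x)=[\nabla r_1(x),\dots,\nabla r_{n_{\mathrm{res}}}(x)]$: the full gradient of residual $1$, then residual $2$, and so on. A plain-Python sketch of this packing (not the NAG API; assumes the dense case only) is:

```python
def pack_rdx(jac):
    # jac[i][j] = d r_{i+1} / d x_{j+1}. The dense rdx vector stores the
    # gradients residual by residual:
    # rdx = [dr1/dx1 .. dr1/dx_nvar, dr2/dx1 .. dr2/dx_nvar, ...],
    # so rdx[(i)*nvar + j] holds d r_{i+1} / d x_{j+1}.
    return [d for row in jac for d in row]
```

For a sparse definition in e04rmf, the elements would instead follow the user-defined sparsity pattern, as the text above notes.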
4: $\mathbf{lsqhes}$Subroutine, supplied by the NAG Library or the user. External Procedure
lsqhes evaluates the residual Hessians, ${\nabla }^{2}{r}_{i}\left(x\right)$, at a specified point $x$. By default, the optional parameter ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{NO}$ and lsqhes is never called. lsqhes may be substituted by the dummy subroutine e04ggu supplied in the NAG Library.
This subroutine will only be called if the optional parameter ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{YES}$ and if the model (see Section 11.2) requires second order information. Under these circumstances, if you do not provide a valid lsqhes the solver will terminate with either ${\mathbf{ifail}}={\mathbf{6}}$ or ${\mathbf{ifail}}={\mathbf{21}}$.
The specification of lsqhes is:
Fortran Interface
```
Subroutine lsqhes ( nvar, x, nres, lambda, hx, inform, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar, nres
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar), lambda(nres)
Real (Kind=nag_wp), Intent (Inout) :: hx(nvar,nvar), ruser(*)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void lsqhes (const Integer *nvar, const double x[], const Integer *nres, const double lambda[], double hx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: $x$, the vector of decision variables at the current iteration.
3: $\mathbf{nres}$Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
4: $\mathbf{lambda}\left({\mathbf{nres}}\right)$Real (Kind=nag_wp) array Input
On entry: $\lambda$, the vector containing the (weighted) residuals at $x$, ${\lambda }_{i}={w}_{i}{r}_{i}\left(x\right)$. See (1) and Section 9.2.
5: $\mathbf{hx}\left({\mathbf{nvar}},{\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: a dense square (symmetric) matrix containing the weighted sum of residual Hessians,
 $$H(x) = \sum_{i=1}^{n_{\mathrm{res}}} \lambda_i \nabla^2 r_i(x),$$
where
 $$\nabla^2 r_i(x) = \begin{pmatrix} \frac{\partial^2 r_i(x)}{\partial x_1\,\partial x_1} & \frac{\partial^2 r_i(x)}{\partial x_1\,\partial x_2} & \cdots & \frac{\partial^2 r_i(x)}{\partial x_1\,\partial x_{n_{\mathrm{var}}}} \\ \frac{\partial^2 r_i(x)}{\partial x_2\,\partial x_1} & \frac{\partial^2 r_i(x)}{\partial x_2\,\partial x_2} & \cdots & \frac{\partial^2 r_i(x)}{\partial x_2\,\partial x_{n_{\mathrm{var}}}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 r_i(x)}{\partial x_{n_{\mathrm{var}}}\,\partial x_1} & \frac{\partial^2 r_i(x)}{\partial x_{n_{\mathrm{var}}}\,\partial x_2} & \cdots & \frac{\partial^2 r_i(x)}{\partial x_{n_{\mathrm{var}}}\,\partial x_{n_{\mathrm{var}}}} \end{pmatrix}$$
is the dense square (symmetric) matrix containing the $i$th residual Hessian evaluated at the point $x$. All matrix elements must be provided: both upper and lower triangular parts.
6: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that one or more elements of the residual Hessian could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and if the rescue procedure fails, the solver will exit with ${\mathbf{ifail}}={\mathbf{25}}$.
7: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
8: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
9: $\mathbf{cpuser}$Type (c_ptr) User Workspace
lsqhes is called with the arguments iuser, ruser and cpuser as supplied to e04ggf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to lsqhes.
lsqhes must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04ggf is called. Arguments denoted as Input must not be changed by this procedure.
5: $\mathbf{lsqhprd}$Subroutine, supplied by the NAG Library or the user. External Procedure
lsqhprd evaluates the residual Hessians, ${\nabla }^{2}{r}_{i}\left(x\right)$, at a specified point, $x$, and performs matrix-vector products with a given vector, $y$, returning the dense matrix $\left[{\nabla }^{2}{r}_{1}\left(x\right)y,{\nabla }^{2}{r}_{2}\left(x\right)y,\dots ,{\nabla }^{2}{r}_{{n}_{\mathrm{res}}}\left(x\right)y\right]$. If you do not supply this subroutine, it may be substituted by the dummy subroutine e04ggv supplied in the NAG Library.
The specification of lsqhprd is:
Fortran Interface
```
Subroutine lsqhprd ( nvar, x, y, nres, hxy, inform, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar, nres
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar), y(nvar)
Real (Kind=nag_wp), Intent (Inout) :: hxy(nvar,nres), ruser(*)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void lsqhprd (const Integer *nvar, const double x[], const double y[], const Integer *nres, double hxy[], Integer *inform, Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: $x$, the vector of decision variables at the current iteration.
3: $\mathbf{y}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: $y$, the vector used to perform the required matrix-vector products.
4: $\mathbf{nres}$Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
5: $\mathbf{hxy}\left({\mathbf{nvar}},{\mathbf{nres}}\right)$Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: a dense matrix of size ${n}_{\mathrm{var}}×{n}_{\mathrm{res}}$ containing the following matrix-vector products,
 $$H(x,y) = \left[\nabla^2 r_1(x)\,y,\; \nabla^2 r_2(x)\,y,\; \dots,\; \nabla^2 r_{n_{\mathrm{res}}}(x)\,y\right].$$
6: $\mathbf{inform}$Integer Input/Output
On entry: on the first call to lsqhprd, inform has a non-zero value; this can be used to optimize your code and avoid recalculating common quantities when evaluating the Hessians. On all subsequent calls inform has the value zero. This notification parameter may be safely ignored if such optimization is not required.
On exit: may be used to indicate that one or more elements of the residual Hessian could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and if the rescue procedure fails, the solver will exit with ${\mathbf{ifail}}={\mathbf{25}}$. The value of inform returned on the first call is ignored.
7: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
8: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
9: $\mathbf{cpuser}$Type (c_ptr) User Workspace
lsqhprd is called with the arguments iuser, ruser and cpuser as supplied to e04ggf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to lsqhprd.
lsqhprd must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04ggf is called. Arguments denoted as Input must not be changed by this procedure.
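The matrix of Hessian-vector products returned in hxy can be sketched in plain Python (not the NAG API; dense nested lists stand in for the Fortran arrays):

```python
def hxy_columns(hessians, y):
    # Builds the nvar-by-nres matrix H(x, y) = [H_1 y, H_2 y, ..., H_nres y],
    # one matrix-vector product per residual Hessian, as lsqhprd must return.
    nvar = len(y)
    cols = [[sum(Hi[a][b] * y[b] for b in range(nvar)) for a in range(nvar)]
            for Hi in hessians]
    # Transpose so that entry [a][i] = (H_{i+1} y)_{a+1}, matching the
    # hxy(nvar, nres) storage with products stored column by column.
    return [[cols[i][a] for i in range(len(hessians))] for a in range(nvar)]
```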
6: $\mathbf{monit}$Subroutine, supplied by the NAG Library or the user. External Procedure
monit is provided to enable monitoring of the progress of the optimization and, if necessary, to halt the optimization process.
If no monitoring is required, monit may be the dummy subroutine e04ffu supplied in the NAG Library.
monit is called at the end of every $i$th step, where $i$ is controlled by the optional parameter Bxnl Monitor Frequency (if its value is $0$, the default, monit is not called).
The specification of monit is:
Fortran Interface
```
Subroutine monit ( nvar, x, inform, rinfo, stats, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar), rinfo(100), stats(100)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void monit (const Integer *nvar, const double x[], Integer *inform, const double rinfo[], const double stats[], Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: the current best point.
3: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to request the solver to stop immediately by setting inform to a non-zero value in which case it will terminate with ${\mathbf{ifail}}={\mathbf{20}}$; otherwise, the solver will proceed normally.
4: $\mathbf{rinfo}\left(100\right)$Real (Kind=nag_wp) array Input
On entry: best objective value computed and various indicators (the values are as described in the main argument rinfo).
5: $\mathbf{stats}\left(100\right)$Real (Kind=nag_wp) array Input
On entry: solver statistics at monitoring steps or at the end of the current iteration (the values are as described in the main argument stats).
6: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
7: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
8: $\mathbf{cpuser}$Type (c_ptr) User Workspace
monit is called with the arguments iuser, ruser and cpuser as supplied to e04ggf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to monit.
monit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04ggf is called. Arguments denoted as Input must not be changed by this procedure.
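A typical use of monit is to request early termination once a user-defined criterion is met. A plain-Python sketch of such a callback (not the NAG API; the objective-below-tolerance test is one hypothetical stopping rule):

```python
def monit_sketch(x, rinfo, tol=1.0e-8):
    # Returns a non-zero inform value to request immediate termination
    # (the solver would then exit with ifail = 20), or zero to continue.
    # Here we stop once the objective value rinfo(1) falls below tol.
    inform = 0
    if rinfo[0] <= tol:     # rinfo[0] is the objective function value
        inform = 1
    return inform
```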
7: $\mathbf{nvar}$Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
8: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input/Output
On entry: ${x}_{0}$, the initial estimates of the variables, $x$.
On exit: the final values of the variables, $x$.
9: $\mathbf{nres}$Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
10: $\mathbf{rx}\left({\mathbf{nres}}\right)$Real (Kind=nag_wp) array Output
On exit: the values of the residuals at the final point given in x.
11: $\mathbf{rinfo}\left(100\right)$Real (Kind=nag_wp) array Output
On exit: objective value and various indicators at monitoring steps or at the end of the final iteration. The measures are given in the table below:
| Index | Description |
|---|---|
| $1$ | Objective function value, $f(x)$. |
| $2$ | Norm of the projected gradient at the current iterate, see PG STEP in Section 11.4 and (8) in Section 11.5. |
| $3$ | Norm of the scaled projected gradient at the current iterate, see (8) in Section 11.5. |
| $4$ | Norm of the step between the current and previous iterate. |
| $5$ | Convergence tests result: a value between $0$ and $7$ indicating which convergence tests passed. Specifically, $1$ indicates the small objective test passed, $2$ the small (scaled) gradient test, $4$ the small step test; when two or more tests pass, the values are accumulated. |
| $6$ | Norm of the current iterate $x$. If regularization is requested, this value was used in the regularization and may differ from $\|x\|$ if $x$ has fixed or disabled elements. |
| $7$–$100$ | Reserved for future use. |
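The convergence indicator in rinfo(5) accumulates the passed tests as $1+2+4$, so it behaves like a bitmask. A small plain-Python sketch (not part of the NAG API) decoding it:

```python
def decode_convergence(code):
    # rinfo(5) accumulates the convergence tests that passed:
    # 1 = small objective, 2 = small (scaled) gradient, 4 = small step.
    code = int(code)
    tests = []
    if code & 1:
        tests.append('small objective')
    if code & 2:
        tests.append('small gradient')
    if code & 4:
        tests.append('small step')
    return tests
```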
12: $\mathbf{stats}\left(100\right)$Real (Kind=nag_wp) array Output
On exit: solver statistics at monitoring steps or at the end of the final iteration as given in the table below:
| Index | Description |
|---|---|
| $1$ | Number of iterations performed. |
| $2$ | Total number of calls to the objective function lsqfun. |
| $3$ | Total number of calls to the objective gradient function lsqgrd. |
| $4$ | Total number of calls to the objective Hessian function lsqhes. |
| $5$ | Total time in seconds spent in the solver, including time spent in user-supplied subroutines. |
| $6$ | Number of calls to the objective function lsqfun required by linesearch steps. |
| $7$ | Number of calls to the objective gradient function lsqgrd required by linesearch steps. |
| $8$ | Number of calls to the objective function lsqfun required by projected gradient steps. |
| $9$ | Number of calls to the objective gradient function lsqgrd required by projected gradient steps. |
| $10$ | Number of inner iterations performed, see option ${\mathbf{Bxnl Model}}=\mathrm{TENSOR-NEWTON}$. |
| $11$ | Number of linesearch iterations performed. |
| $12$ | Number of projected gradient iterations performed. |
| $13$ | Total number of calls to the objective auxiliary Hessian function lsqhprd. |
| $14$–$100$ | Reserved for future use. |
13: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
14: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
15: $\mathbf{cpuser}$Type (c_ptr) User Workspace
iuser, ruser and cpuser are not used by e04ggf, but are passed directly to lsqfun, lsqgrd, lsqhes, lsqhprd and monit and may be used to pass information to these routines. If you do not need to reference cpuser, it should be initialized to c_null_ptr.
16: $\mathbf{ifail}$Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not.
If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $-1$ is recommended since useful values can be provided in some output arguments even when ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e04ggf may return useful information.
${\mathbf{ifail}}=1$
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
${\mathbf{ifail}}=2$
The problem is already being solved.
This solver does not support the model defined in the handle.
${\mathbf{ifail}}=3$
Unsupported model and method chosen. The specified model in optional parameter Bxnl Model is not supported by the method specified by the optional parameter ${\mathbf{Bxnl Nlls Method}}=\mathrm{POWELL-DOGLEG}$.
Unsupported option combinations. The specified combination of values for optional parameters Bxnl Nlls Method and Bxnl Glob Method is not supported.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$, expected $\mathrm{value}=⟨\mathit{\text{value}}⟩$.
Constraint: nvar must match the current number of variables of the model in the handle.
The information supplied does not match with that previously stored.
On entry, ${\mathbf{nres}}=⟨\mathit{\text{value}}⟩$ must match that given during the definition of the objective in the handle, i.e., $⟨\mathit{\text{value}}⟩$.
There are no decision variables. nvar must be greater than zero.
${\mathbf{ifail}}=6$
Exact second derivatives needed for tensor model.
Model in the optional parameter ${\mathbf{Bxnl Model}}=\mathrm{TENSOR-NEWTON}$ requires exact second derivatives but ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{NO}$.
Provide second derivatives via lsqhes and optionally lsqhprd routines, and set optional parameter ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{YES}$.
${\mathbf{ifail}}=11$
Custom residual weights are required (optional parameter ${\mathbf{Bxnl Use Weights}}=\mathrm{YES}$), but the weights data is missing, is of the wrong size, or contains invalid values. Please refer to Section 9.2.
${\mathbf{ifail}}=18$
Numerical difficulties encountered and solver was terminated.
This error can be caused by an ill-posed or poorly scaled problem.
${\mathbf{ifail}}=19$
Iteration limit reached while solving a subproblem.
Maximum number of iterations reached while trying to solve an auxiliary subproblem.
Line Search failed.
Line Search in the projected gradient direction did not find an acceptable new iterate.
${\mathbf{ifail}}=20$
User requested termination during a monitoring step. inform was set to a non-zero value within the user-supplied function monit.
${\mathbf{ifail}}=21$
The current starting point is unusable.
While trying to evaluate the starting point ${x}_{0}$, either inform was set to a non-zero value within the user-supplied functions, lsqfun, lsqgrd or lsqhes, or an Infinity or NaN was detected in values returned from them.
${\mathbf{ifail}}=22$
Maximum number of iterations reached.
Use optional parameter Bxnl Iteration Limit to reset the limit.
${\mathbf{ifail}}=23$
The solver terminated after the maximum time allowed was exceeded.
Maximum number of seconds exceeded. Use optional parameter Time Limit to reset the limit.
${\mathbf{ifail}}=24$
The solver was terminated because no further progress could be achieved.
This can indicate that the solver is calculating very small step sizes and is making very little progress. It could also indicate that the problem has been solved to the best numerical accuracy possible given the current scaling.
It can also indicate that a recovery procedure was interrupted due to the user-supplied function lsqgrd being incorrect.
${\mathbf{ifail}}=25$
Invalid number detected in user-supplied function and recovery failed.
Either inform was set to a non-zero value within one of the user-supplied functions, lsqfun, lsqgrd, lsqhes, or lsqhprd, or an Infinity or NaN was detected in values returned from them and the recovery attempt failed.
${\mathbf{ifail}}=-99$
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

## 7 Accuracy

The accuracy of the solution is determined by optional parameters Bxnl Stop Abs Tol Fun, Bxnl Stop Abs Tol Grd, Bxnl Stop Rel Tol Fun, Bxnl Stop Rel Tol Grd, and Bxnl Stop Step Tol. If ${\mathbf{ifail}}={\mathbf{0}}$ on exit, the returned point satisfies (7), (8) or (9) to the defined accuracies.
Please refer to Section 11.5 and the description of the particular options in Section 12.

## 8 Parallelism and Performance

e04ggf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04ggf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9 Further Comments

### 9.1 Description of the Printed Output

The solver can print information to give an overview of the problem and the progress of the computation. The output may be sent to two independent unit numbers which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level, Print Options, Monitoring Level and Print Solution determine the exposed level of detail. This allows, for example, a detailed log file to be generated while the condensed information is displayed on the screen.
By default (${\mathbf{Print File}}=6$, ${\mathbf{Print Level}}=2$), the following sections are printed to the standard output:
• Optional parameters list (if ${\mathbf{Print Options}}=\mathrm{YES}$)
• Problem statistics
• Iteration log
• Summary
• Solution (if ${\mathbf{Print Solution}}=\mathrm{YES}$)
The header is a message indicating the start of the solver. It should look like:
```----------------------------------------------------------------------
E04GG, Nonlinear least squares method for bound-constrained problems
----------------------------------------------------------------------```
Optional parameters list
If ${\mathbf{Print Options}}=\mathrm{YES}$, a list of the optional parameters and their values is printed before the problem statistics. The list shows all options of the solver, each displayed on one line. Each line contains the option name, its current value and an indicator for how it was set. The options unchanged from their defaults are noted by ‘d’ and the ones you have set are noted by ‘U’. Note that the output format is compatible with the file format expected by e04zpf. The output looks similar to:
```Begin of Options
Bxnl Model                    =        Gauss-newton     * U
Bxnl Nlls Method              =             Galahad     * d
Bxnl Glob Method              =                 Reg     * U
Bxnl Reg Order                =                Auto     * d
Bxnl Tn Method                =           Min-1-var     * d
Bxnl Basereg Pow              =         2.00000E+00     * d
Bxnl Basereg Term             =         1.00000E-02     * d
Bxnl Iteration Limit          =                1000     * d
Bxnl Use Second Derivatives   =                 Yes     * U
End of Options```
Problem statistics
If ${\mathbf{Print Level}}\ge 2$, statistics on the problem are printed, for example:
```Problem Statistics
No of variables                  4 (+2 disabled, +0 fixed)
free (unconstrained)           3
bounded                        1
Objective function    LeastSquares
No of residuals               16 (+8 disabled)```
Iteration log
If ${\mathbf{Print Level}}=2$, the solver prints a summary line for each iteration. An iteration is considered successful when it yields a decrease of the objective that is either sufficiently close to the decrease predicted by the model or within a given relative threshold. Each line shows the iteration number (Iter), the value of the objective function (error), and the absolute and relative norms of the projected gradient (optim and rel optim); the latter is used in the convergence test of equation (8). The output looks as follows:
```--------------------------------------------
Iter |   error  |   optim    |  rel optim
--------------------------------------------
0 3.6953E+01  1.30501E+01  1.51801E+00
1 5.6464E-01  1.61396E+00  1.51877E+00
2 7.7340E-02  7.06213E-01  1.79563E+00
3 2.2354E-02  5.21619E-01  2.46694E+00
4 7.0758E-03  3.11695E-01  2.62016E+00
5 2.2392E-03  1.76206E-01  2.63307E+00```
If ${\mathbf{Print Level}}\ge 3$, each line additionally shows the current value of the trust region radius (Delta), quality of the model (rho), some flags relating to the iteration (S2IF), inner iteration counter (inn it) for the tensor Newton model, the step length taken (step), trust region loop exit status (loop), performed line search type (LS), as well as the projection factor over the constraints (tau). It might look as follows:
```---------------------------------------------------------------------------------------------------
Iter |   error  |   optim    |  rel optim |  Delta  |   rho   |S2IF| inn it|  step | loop|LS| tau
---------------------------------------------------------------------------------------------------
65 2.2217E-06  3.04184E-05  1.44305E-02  4.00E+02  9.22E-01 SNR-       0 1.3E-03 gamma    1.00
66 2.2203E-06  1.44158E-05  6.84102E-03  8.00E+02  9.82E-01 SNR-       0 2.6E-03 gamma    1.00
67 2.2177E-06  1.49444E-06  7.09599E-04  1.60E+03  9.84E-01 SNR-       0 4.9E-03 TR Ok    1.00
68 2.2132E-06  3.44970E-06  1.63968E-03  3.20E+03  9.71E-01 SNR-       0 9.2E-03 TR Ok    1.00
69 2.2054E-06  1.01207E-05  4.81891E-03  6.40E+03  9.47E-01 SNR-       0 1.7E-02 TR Ok    1.00```
The iteration flags column (S2IF) contains four flags related to the iteration. Flag ‘S’ indicates whether the trust region iteration was successful (S) or unsuccessful (U). Flag ‘2’ shows whether the iteration used second-order information: yes (Y), no (N), tensor (T), or approximate (A). Flag ‘I’ indicates the iteration type: regular (R) or inner (I). The exit flag of the inner solver, ‘F’, has three states: subproblem converged (C), not solved (E), or the current iteration is inside the subproblem or the tensor model is not used (-). For details on the interpretation of rho and tau, see Section 11.
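The meaning of the four flag positions can be captured in a small lookup table. The following Python helper is purely illustrative (it is not part of the NAG Library) and simply decodes an S2IF string from the iteration log:

```python
# Illustrative helper (not part of the NAG Library): decode the four
# characters of the S2IF iteration-flags column described above.
S_FLAG = {"S": "successful TR iteration", "U": "unsuccessful TR iteration"}
SECOND = {"Y": "second-order information used", "N": "no second-order information",
          "T": "tensor model", "A": "approximate second-order information"}
ITYPE = {"R": "regular iteration", "I": "inner iteration"}
INNER = {"C": "subproblem converged", "E": "subproblem not solved",
         "-": "inside subproblem / tensor model not used"}

def decode_s2if(flags):
    """Map an S2IF string such as 'SNR-' to human-readable descriptions."""
    if len(flags) != 4:
        raise ValueError("the S2IF column has exactly four characters")
    return {"S": S_FLAG[flags[0]], "2": SECOND[flags[1]],
            "I": ITYPE[flags[2]], "F": INNER[flags[3]]}

print(decode_s2if("SNR-"))
```

For example, the string STRC seen in the log decodes as a successful tensor-model regular iteration whose inner subproblem converged.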
If the Tensor-Newton model is chosen, details of each inner iteration can be printed by setting ${\mathbf{Print Level}}=4$; the output is similar to:
```---------------------------------------------------------------------------------------------------
Iter |   error  |   optim    |  rel optim |  Delta  |   rho   |S2IF| inn it|  step | loop|LS| tau
---------------------------------------------------------------------------------------------------
0 3.6953E+01  1.30501E+01  1.51801E+00  1.00E+02 -1.00E+00 ----
---------------------------------------------------------------------------------------------------
0 3.6953E+01  2.00107E+01  2.32768E+00  1.00E+02 -1.00E+00 --I-
1                                       3.91E-01 -1.35E-01 UNI-
1 3.2210E+01  2.19541E+01  2.73529E+00  3.91E-01  3.15E-01 SNI-         9.6E-01 TR Ok    1.00
2                                       1.95E-01 -6.51E-01 UYI-
2 3.2210E+01  2.19541E+01  2.73529E+00  1.23E-30  1.00E+00 SYI-         9.2E-16 TR Ok    1.00
3                                       3.08E-31 -2.00E+00 UYI-
3 3.2210E+01  2.19541E+01  2.73529E+00  6.16E-31  1.00E+00 SYI-         6.5E-16 TR Ok    1.00
---------------------------------------------------------------------------------------------------
1 1.9040E+01  9.07931E+00  1.47131E+00  1.00E+02  3.77E+00 STRC       4 9.6E-01 TR Ok    1.00
---------------------------------------------------------------------------------------------------
0 1.9040E+01  1.59354E+01  2.58235E+00  1.00E+02 -1.00E+00 --I-
1                                       2.50E+01 -4.67E+00 UNI-
1 1.5631E+01  1.59186E+01  2.84704E+00  3.91E-01  4.09E-01 SNI-         6.4E-01 TR Ok    1.00
2 1.5631E+01  1.59186E+01  2.84704E+00  9.86E-30  1.00E+00 SYI-         1.9E-15 TR Ok    1.00
3 1.5631E+01  1.59186E+01  2.84704E+00  6.16E-31  1.00E+00 SYI-         4.8E-16 TR Ok    1.00
---------------------------------------------------------------------------------------------------
2 8.5311E+00  6.26943E+00  1.51779E+00  1.00E+02  3.08E+00 STRC       9 6.4E-01 TR Ok    1.00```
Note the change of the iteration type flag ‘I’ under the S2IF column: the output reports two regular (R) iterations, each of which required three inner (I) iterations.
Additionally, if ${\mathbf{Print Level}}=5$, each iteration produces additional information that spans several lines. This additional information can contain:
• Details on the trust region subproblem;
• Iteration log for auxiliary iterative methods;
• Line Search iteration logs.
The output might look as follows:
```*** Solving the trust region subproblem using More-Sorensen ***
A is symmetric positive definite
iter    nd            sigma         sigma_shift
0    3.6778E-01    0.0000E+00    0.0000E+00
nq =   7.7000E+02
1    1.2571E-02    6.4469E-06    6.4469E-06
We're within the trust region radius
Leaving More-Sorensen
Model evaluated successfully: m_k(d) =   2.3089E-08
*** Subproblem solution found ***
Actual reduction (in cost function) =   1.2065E-09
Predicted reduction (in model) =   4.1866E-09
rho returned =   2.8819E-01
Successful step -- Delta staying at  1.2570E-02```
Summary
Once the solver finishes, a summary is produced:
``` -------------------------------------------------------------------------------
Status: converged, an optimal solution was found
-------------------------------------------------------------------------------
Value of the objective             2.17328E-06
Norm of scaled projected gradient  7.29019E-06
Norm of step                       4.98107E-04
Iterations                                  80
Inner iterations                           0
LS iterations                              0
PG iterations                              0
Function evaluations                        81
Hessian evaluations (objhes)                 0
Hessian evaluations (objhprd)                0
LS function calls                          0
PG function calls                          0```
Optionally, if ${\mathbf{Stats Time}}=\mathrm{YES}$, the timings are printed:
``` Timing
Total time spent                        2.43 sec
-------------------------------------------------------------------------------```
Solution
If ${\mathbf{Print Solution}}=\mathrm{YES}$, the values of the primal variables are printed; furthermore, if the problem is constrained, the dual variables are also reported, see Lagrangian Multipliers in e04kff and the dual variables storage format described in Section 3.1 in e04svf. It might look as follows:
``` Primal variables:
idx   Lower bound       Value       Upper bound
1   0.00000E+00    4.58516E-01    1.00000E+00
2       -inf       3.05448E+00         inf
3       -inf       4.65146E+00         inf
4     Disabled         NaN          Disabled

Box bounds dual variables:
idx   Lower bound       Value       Upper bound       Value
1   0.00000E+00    0.00000E+00    1.00000E+00    9.52218E-10
2       -inf       4.66962E-11         inf       0.00000E+00
3       -inf       6.55098E-11         inf       0.00000E+00
4     Disabled         NaN          Disabled         NaN```

### 9.2Residual Weights

A typical use for weights in the least squares fitting context is to account for uncertainty in the observed data, ${\sigma }_{i}^{2}$, by setting the weights to
 $w\in {ℝ}^{{n}_{\mathrm{res}}} \text{ with } {w}_{i}^{2}=\frac{1}{{\sigma }_{i}^{2}} , \text{ for } i=1,\dots ,{n}_{\mathrm{res}} .$
The idea behind this choice is to give less importance (small weight) to measurements which have large variance.
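As a minimal sketch of this weighting scheme (plain Python, not NAG code; the helper names are hypothetical):

```python
import math

# Illustrative sketch: derive residual weights from measurement variances,
# w_i = 1/sigma_i, so that w_i^2 = 1/sigma_i^2 as in the formula above.
def weights_from_variances(variances):
    if any(v <= 0.0 for v in variances):
        raise ValueError("all variances must be positive")
    return [1.0 / math.sqrt(v) for v in variances]

def weighted_sq_objective(residuals, weights):
    # f(x) = 1/2 * sum_i (w_i * r_i)^2, the weighted least squares objective
    return 0.5 * sum((w * r) ** 2 for w, r in zip(weights, residuals))

sigma2 = [0.25, 1.0, 4.0]          # noisier observations get smaller weights
w = weights_from_variances(sigma2)
print(w)                            # [2.0, 1.0, 0.5]
```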
In order to use weights,
1. Request the use of weights by setting the optional parameter ${\mathbf{Bxnl Use Weights}}=\mathrm{YES}$ (this will request the solver to query the handle for an array of weights), and
2. Store the weights array in the handle. This is done by calling e04rxf with the command ${\mathbf{cmdstr}}=\text{'}\mathrm{Residual Weights}\text{'}$ and passing the array length and weights array, ${\mathbf{lrarr}}={\mathbf{nres}}$ and ${\mathbf{rarr}}=w$, respectively. Weights are required for each residual and all weights must be positive.
These steps must be done after the handle is initialized (via e.g., e04raf) but before calling the solver e04ggf. The stored weights in the handle will only be accessed if ${\mathbf{Bxnl Use Weights}}=\mathrm{YES}$, otherwise all weights are assumed to be $1$ and the handle is not queried for residual weights.
If the solver is expecting to use weights but they are not provided, or the array length is wrong, or any weight is non-positive, then the solver will terminate with ${\mathbf{ifail}}={\mathbf{11}}$.

### 9.3Internal Changes

Internal changes have been made to this routine as follows:
• At Mark 27.1:
Added support for underdetermined problems. The solver can now export the covariance matrix at a solution point.
For details of all known issues which have been reported for the NAG Library please refer to the Known Issues.

## 10Example

In this example we solve the Lanczos-3 problem (see Lanczos (1956)), a nonlinear least squares regression. The regression problem consists of ${\mathbf{nres}}=24$ observations, $\left(t,y\right)$, to be fitted over the ${\mathbf{nvar}}=6$ parameter model
 $y= ϕ (t;x) + ε = x1 e -x2t + x3 e -x4t + x5 e -x6t + ε ,$
and the residuals for the regression are
 ${r}_{i}(x) = {y}_{i} - ϕ({t}_{i},x) , \quad i = 1,\dots ,{\mathbf{nres}} ,$
where
 $t=\left(0.0000, 0.0500, 0.1000, 0.1500, 0.2000, 0.2500, 0.3000, 0.3500, 0.4000, 0.4500, 0.5000, 0.5500, 0.6000, 0.6500, 0.7000, 0.7500, 0.8000, 0.8500, 0.9000, 0.9500, 1.0000, 1.0500, 1.1000, 1.1500\right) ;$ $y=\left(2.5134, 2.0443, 1.6684, 1.3664, 1.1232, 0.9269, 0.7679, 0.6389, 0.5338, 0.4479, 0.3776, 0.3197, 0.2720, 0.2325, 0.1997, 0.1723, 0.1493, 0.1301, 0.1138, 0.1000, 0.0883, 0.0783, 0.0698, 0.0624\right) .$
The following bounds are defined on the variables
 $0.0 ≤ x1 ≤ 1.0, -1.0 ≤ x2, -1.0 ≤ x3, -1.0 ≤ x4, -1.0 ≤ x5 ≤ 1.0, -1.0 ≤ x6 ≤ 10.0.$
The initial guess is ${x}_{0}=\left(1.2,0.3,5.6,5.5,6.5,7.6\right)$.
The expected solution is ${x}^{*}=\left(0.4445,1.8734,3.0665,4.6404,-0.9982,4.6409\right)$ with residual norm close to $2.1×{10}^{-6}$.
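The model and residuals above can be sketched in a few lines of Python (illustrative only; the example program distributed with the Library is the Fortran source referenced in Section 10.1). The data and points are those quoted in this section:

```python
import math

# Sketch (not the NAG example program): evaluate the Lanczos-3 residuals
# r_i(x) = y_i - phi(t_i; x) with
# phi(t; x) = x1*exp(-x2*t) + x3*exp(-x4*t) + x5*exp(-x6*t).
t = [0.05 * i for i in range(24)]  # 0.0000, 0.0500, ..., 1.1500
y = [2.5134, 2.0443, 1.6684, 1.3664, 1.1232, 0.9269, 0.7679, 0.6389,
     0.5338, 0.4479, 0.3776, 0.3197, 0.2720, 0.2325, 0.1997, 0.1723,
     0.1493, 0.1301, 0.1138, 0.1000, 0.0883, 0.0783, 0.0698, 0.0624]

def residuals(x):
    phi = lambda ti: (x[0] * math.exp(-x[1] * ti)
                      + x[2] * math.exp(-x[3] * ti)
                      + x[4] * math.exp(-x[5] * ti))
    return [yi - phi(ti) for ti, yi in zip(t, y)]

def objective(x):
    return 0.5 * sum(r * r for r in residuals(x))

x0 = [1.2, 0.3, 5.6, 5.5, 6.5, 7.6]                      # initial guess
xs = [0.4445, 1.8734, 3.0665, 4.6404, -0.9982, 4.6409]   # expected solution
print(objective(x0) > objective(xs))  # the solution has a far smaller objective
```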

### 10.1Program Text

Program Text (e04ggfe.f90)

### 10.2Program Data

None.

### 10.3Program Results

Program Results (e04ggfe.r)

## 11Algorithmic Details

This section contains a short description of the underlying algorithms used in e04ggf, a bound-constrained nonlinear least squares (BXNL) solver that uses a model-based trust region framework adapted to exploit the least squares problem structure. It is based on a collaborative work between NAG and the STFC Rutherford Appleton Laboratory. For further details, see Gould et al. (2017) and references therein.

### 11.1Trust Region Algorithm

In this section, we are interested in generic nonlinear least squares problems of the form
 $\underset{x\in {ℝ}^{{n}_{\mathrm{var}}}}{\mathrm{minimize}}\; f(x)=\frac{1}{2}\sum _{i=1}^{{n}_{\mathrm{res}}}{\left[{w}_{i}{r}_{i}(x)\right]}^{2}+\frac{\sigma }{p}{‖x‖}_{2}^{p} \quad \text{subject to} \quad {l}_{x}\le x\le {u}_{x} .$ (2)
where ${r}_{i}\left(x\right)$, $i=1,\dots ,{n}_{\mathrm{res}}$, are smooth nonlinear functions called residuals, ${w}_{i}>0,i=1,\dots ,{n}_{\mathrm{res}}$, are the weights (by default they are all set to $1$, see Section 9.2 on how to change them), and the rightmost element represents the optional regularization term with parameter $\sigma \ge 0$ and power $p>0$. The constraint elements ${l}_{x}$ and ${u}_{x}$ are ${n}_{\mathrm{var}}$-dimensional vectors defining the bounds on the variables. For the rest of this chapter, and without any loss of generality, it is assumed that the weights are all set to the default value of $1$ and are excluded from the formulae. e04ggf is an iterative framework for solving (2) which consists of a variety of algorithms that solve the trust region subproblem. The fundamental ideas of the framework follow.
At each point ${x}_{k}$, the algorithm builds a model of the function at the next step, $f\left({x}_{k}+{s}_{k}\right)$, which we refer to as ${m}_{k}$ (see Section 11.2).
Once the model has been formed, the candidate for the next point is found by solving a suitable subproblem (see Section 11.3). Let ${P}_{\Omega }\left(x\right)$ be the Euclidean projection operator over the feasible set, then the quantity
 ${\rho }_{k}=\frac{f\left({x}_{k}\right)-f\left({P}_{\Omega }\left({x}_{k}+{s}_{k}\right)\right)}{{m}_{k}\left({x}_{k}\right)-{m}_{k}\left({P}_{\Omega }\left({x}_{k}+{s}_{k}\right)\right)} ,$
is used to assess the quality of the proposed step. If it is sufficiently large we accept the step and ${x}_{k+1}$ is set to ${P}_{\Omega }\left({x}_{k}+{s}_{k}\right)$; if not, the trust region radius ${\Delta }_{k}$ is reduced and the resulting new trust region subproblem is solved. If the step is very successful ($\rho$ is close to $1$), ${\Delta }_{k}$ is increased.
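The acceptance logic can be sketched as follows; note that the shrink/grow factors and the thresholds eta_low and eta_high are typical textbook values chosen for illustration, not the solver's actual settings:

```python
# Illustrative sketch of the trust-region acceptance test driven by
# rho_k = (actual reduction) / (predicted reduction).  The thresholds
# eta_low/eta_high and the factors 0.5/2.0 are typical textbook values,
# not the solver's exact settings.
def tr_update(f_old, f_new, m_old, m_new, delta,
              eta_low=0.25, eta_high=0.75):
    rho = (f_old - f_new) / (m_old - m_new)
    if rho < eta_low:            # poor agreement: reject step, shrink radius
        return False, 0.5 * delta, rho
    if rho > eta_high:           # very successful: accept step, grow radius
        return True, 2.0 * delta, rho
    return True, delta, rho      # successful: accept step, keep radius

accepted, delta, rho = tr_update(f_old=10.0, f_new=9.0,
                                 m_old=10.0, m_new=8.9, delta=1.0)
print(accepted, delta, round(rho, 3))   # True 2.0 0.909
```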
Under certain circumstances, it is deemed that the projection of the current point with the trust region step will not produce a successful point and the new step ${s}_{k}$ is calculated using a convenient line search step.
This process continues until a point is found that satisfies the stopping criteria described in Section 11.5. More precisely, it can be described as:
• BXNL Algorithm
1. Initialization
Set $k=1$, make a feasible initial guess ${x}_{0}$ and set trust region radius ${\Delta }_{0}$.
2. Iteration k
(i) Stop if ${x}_{k}$ is a solution point (see stopping criteria in Section 11.5).
(ii) Solve the trust region subproblem with ${\Delta }_{k}$ and provide step ${s}_{k}$.
(iii) Project the new point ${x}_{k}+{s}_{k}$ into the bound constraints.
(iv) Evaluate the objective at the new point.
(v) Update the trust region radius ${\Delta }_{k+1}$ based on the ratio ${\rho }_{k}$.
(vi) If the objective has decreased sufficiently (successful step) choose ${x}_{k+1}={P}_{\Omega }\left({x}_{k}+{s}_{k}\right)$. Return to 2(i).
(vii) Assess the severity of the projection for the new point using the ratio ${\tau }_{k}=\frac{‖{P}_{\Omega }\left({x}_{k}+{s}_{k}\right)-{x}_{k}‖}{‖{s}_{k}‖}$.
(viii) If ${\tau }_{k}$ is close to $1$, either return to 2(ii) with ${\Delta }_{k+1}$ or try to perform a line search in the direction ${d}_{k}^{LS}={P}_{\Omega }\left({x}_{k}+{s}_{k}\right)-{x}_{k}$. If successful set ${s}_{k}={\alpha }_{k}{d}_{k}^{LS}$ and ${x}_{k+1}={x}_{k}+{s}_{k}$. Return to 2(i).
(ix) Take a projected gradient step by performing a line search in the direction ${d}_{k}^{PG}={P}_{\Omega }\left({x}_{k}-\nabla f\left({x}_{k}\right)\right)-{x}_{k}$, set ${s}_{k}={\alpha }_{k}{d}_{k}^{PG}$ and ${x}_{k+1}={x}_{k}+{s}_{k}$.
Note: the use of the regularization term in (2) is optional and is not used by default. To enable regularization please refer to the optional parameters Bxnl Basereg Type, Bxnl Basereg Pow, Bxnl Basereg Term, and Bxnl Reg Order.

### 11.2Models

A vital component of the algorithm is the choice of model employed. There are four choices available which are controlled by the optional parameter Bxnl Model.
Gauss–Newton
This option specifies to the solver that it use the Gauss–Newton model. For this case, $r\left({x}_{k}+s\right)$ is replaced by its first-order Taylor approximation, $r\left({x}_{k}\right)+\nabla r{\left({x}_{k}\right)}^{\mathrm{T}}s=r\left({x}_{k}\right)+{J}_{k}s$. The model is, therefore, given by
 $mk (s) = mkGN (s) = 12 ‖r(xk)+Jks‖ 2 .$
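A direct evaluation of this model might look as follows (illustrative Python sketch with a dense Jacobian stored as lists of lists, not NAG code):

```python
# Sketch: evaluate the Gauss-Newton model m_GN(s) = 1/2 * ||r + J s||^2
# for a residual vector r and Jacobian J at the current point.
def gauss_newton_model(r, J, s):
    linearized = [ri + sum(Jij * sj for Jij, sj in zip(row, s))
                  for ri, row in zip(r, J)]
    return 0.5 * sum(v * v for v in linearized)

r = [1.0, -2.0]
J = [[1.0, 0.0],
     [0.0, 1.0]]
print(gauss_newton_model(r, J, [-1.0, 2.0]))   # step to zero residual -> 0.0
```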
Quasi–Newton
This option specifies to use a Newton-type model. For this case, the model is taken to be the second-order Taylor approximation of the objective function $f\left({x}_{k+1}\right)$. For this choice of model the gradient and Hessian are ${g}_{k}={J}_{k}^{\mathrm{T}}r\left({x}_{k}\right)$ and ${H}_{k}={J}_{k}^{\mathrm{T}}{J}_{k}+\sum _{i=1}^{{n}_{\mathrm{res}}}{r}_{i}\left({x}_{k}\right){\nabla }^{2}{r}_{i}\left({x}_{k}\right)$. The model is given by
 $mk (s) = mkQN (s) = f(xk) + gkTs + 12 sT Hk s .$
If the second derivatives of $r\left(x\right)$ are not available (i.e., the optional parameter ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{No}$), then the method approximates the matrix ${H}_{k}$. If ${\mathbf{Print Level}}\ge 3$, the flag ‘2’ in the iteration log will display (A), see Iteration log in Section 9.1.
Hybrid
This option specifies to the solver that it use the hybrid model. In practice the Gauss–Newton model tends to work well far away from the solution, whereas the Newton model performs better once it is near to the minimum (particularly if the residual is large at the solution). This option tells the solver to switch between the previous two models, picking the model that is most appropriate for the step. In particular, it starts by using ${m}_{k}^{GN}$ and switches to ${m}_{k}^{\mathrm{QN}}$ when it considers it is close enough to the solution. If, in subsequent iterations, it fails to get a decrease in the function value, then the algorithm interprets this as being not sufficiently close to the solution and switches back to using the Gauss–Newton model.
Tensor–Newton
This option specifies to the solver that it use the tensor model. The model is based on a second-order Taylor approximation to the residual, ${r}_{i}\left({x}_{k}+s\right)\approx {\left({t}_{k}\left(s\right)\right)}_{i}≔{r}_{i}\left({x}_{k}\right)+{\left({J}_{k}\right)}_{i}s+\frac{1}{2}{s}^{\mathrm{T}}{\nabla }^{2}{r}_{i}\left({x}_{k}\right)s$, where ${\left(J\right)}_{i}$ is the $i$th row of $J$. The tensor model used is
 $mkTN(s)=12 ‖tk(s)‖ 2.$ (3)
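A direct, unoptimized evaluation of this model can be sketched as follows (illustrative Python, not NAG code; H[i] holds the Hessian of residual i):

```python
# Sketch: evaluate the tensor-Newton model m_TN(s) = 1/2 * ||t_k(s)||^2 with
# (t_k(s))_i = r_i + J_i s + 1/2 s^T H_i s, H_i the Hessian of residual i.
def tensor_model(r, J, H, s):
    n = len(s)
    tk = []
    for i in range(len(r)):
        lin = sum(J[i][j] * s[j] for j in range(n))
        quad = 0.5 * sum(s[a] * H[i][a][b] * s[b]
                         for a in range(n) for b in range(n))
        tk.append(r[i] + lin + quad)
    return 0.5 * sum(v * v for v in tk)

# one residual in one variable: r = 1, J = [2], Hessian = [[2]]
print(tensor_model([1.0], [[2.0]], [[[2.0]]], [1.0]))   # 0.5*(1+2+1)^2 = 8.0
```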

### 11.3Subproblems

The next point ${x}_{k+1}$ is estimated by finding a step, ${s}_{k}$, that minimizes the model chosen in Bxnl Model, subject to a globalization strategy. e04ggf supports two such strategies: trust region and regularization; these can be selected using the optional parameter ${\mathbf{Bxnl Glob Method}}=\mathrm{TR}$ or $\mathrm{REG}$, respectively. If ${\mathbf{Bxnl Model}}=\mathrm{Gauss-Newton}$, $\mathrm{Quasi-Newton}$ or $\mathrm{Hybrid}$, then the model is quadratic and the available methods to solve the subproblem are described in the next two subsections. If ${\mathbf{Bxnl Model}}=\mathrm{TENSOR-NEWTON}$, then the model is not quadratic and the methods available are described in Section 11.3.3.

#### 11.3.1Trust region method

The methods mentioned in this subsection are only used when ${\mathbf{Bxnl Model}}=\mathrm{Gauss-Newton}$, $\mathrm{Quasi-Newton}$ or $\mathrm{Hybrid}$ and ${\mathbf{Bxnl Glob Method}}=\mathrm{TR}$. The trust region subproblem to solve is
 $sk = arg min s∈ℝnvar mk (s) subject to ‖s‖≤Δk .$ (4)
The next step is taken to be the solution of the previous problem and the method used to solve it is selected using the optional parameter Bxnl Nlls Method. The methods available are:
• Powell's dogleg method
Approximates the solution to (4) by using Powell's dogleg method. This takes, as the step, a linear combination of the Gauss–Newton step and the steepest descent step.
• Generalized eigenvalue problem (AINT)
Solves the trust region subproblem using the trust region solver of Adachi, Iwata, Nakatsukasa, and Takeda (AINT). This reformulates and solves the problem (4) as a generalized eigenvalue problem. See Adachi et al. (2015) for more details.
• Moré–Sorensen Method
This method solves (4) using a variant of the Moré–Sorensen method. In particular, it implements Algorithm 7.3.6 of Conn et al. (2000).
• GALAHAD's DTRS
Solves (4) by converting the problem into the form
 $minq wT q + 12 qTDq subject to ‖q‖ ≤ Δk ,$
where $D$ is a diagonal matrix from the eigendecomposition, $VD{V}^{\mathrm{T}}$, of the derivatives of either ${m}_{k}^{GN}\left(s\right)$ or ${m}_{k}^{QN}\left(s\right)$. The vectors $w$ and $q$ gather the rest of the transformation involving $s$ and $r\left({x}_{k}\right)$. This is solved by performing an eigendecomposition of the Hessian in the model and calling the GALAHAD routine DTRS. For further details see Gould et al. (2003).
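As an illustration of the dogleg idea mentioned above, the following is a compact two-variable Python sketch (not the solver's implementation): it returns the Newton step when it fits inside the radius, a truncated steepest-descent step when even the Cauchy point lies outside, and otherwise the boundary point of the segment joining the two.

```python
import math

# Illustrative sketch of Powell's dogleg step for a quadratic model with
# gradient g and positive definite Hessian B; hard-coded 2x2 case.
def solve2(B, rhs):
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(rhs[0] * B[1][1] - rhs[1] * B[0][1]) / det,
            (B[0][0] * rhs[1] - B[1][0] * rhs[0]) / det]

def dogleg(g, B, delta):
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    s_newton = [-x for x in solve2(B, g)]          # full (Gauss-)Newton step
    if norm(s_newton) <= delta:
        return s_newton
    gBg = sum(g[a] * B[a][b] * g[b] for a in range(2) for b in range(2))
    s_cauchy = [-(sum(x * x for x in g) / gBg) * x for x in g]  # Cauchy point
    if norm(s_cauchy) >= delta:
        return [-delta * x / norm(g) for x in g]   # truncated steepest descent
    # otherwise: walk along the segment s_cauchy -> s_newton to the boundary
    d = [n - c for n, c in zip(s_newton, s_cauchy)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(c * x for c, x in zip(s_cauchy, d))
    c = sum(x * x for x in s_cauchy) - delta * delta
    tau = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return [sc + tau * di for sc, di in zip(s_cauchy, d)]

step = dogleg(g=[1.0, 0.0], B=[[1.0, 0.0], [0.0, 1.0]], delta=10.0)
print(step)   # the full Newton step fits inside the radius
```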

#### 11.3.2Regularization

The methods mentioned in this subsection are only used when ${\mathbf{Bxnl Model}}=\mathrm{Gauss-Newton}$, $\mathrm{Quasi-Newton}$ or $\mathrm{Hybrid}$ and ${\mathbf{Bxnl Glob Method}}=\mathrm{REG}$. The regularized subproblem to solve is
 ${s}_{k}=\underset{s\in {ℝ}^{{n}_{\mathrm{var}}}}{\mathrm{arg\,min}}\; {m}_{k}\left(s\right)+\frac{1}{{\Delta }_{k}}\frac{1}{p}{‖s‖}^{p} .$ (5)
The next step to take is the solution to the previous problem. The methods provided to solve (5) are
• Solve by linear system
This option estimates the step ${s}_{k}$ by solving a shifted linear system. Currently it only supports quadratic regularization ($p=2.0$); the order can be set using the optional parameter Bxnl Reg Order. The default value of the optional parameter is ${\mathbf{Bxnl Reg Order}}=\mathrm{AUTO}$ and automatically selects $p=2.0$. If $p\ne 2.0$ the solver terminates with ${\mathbf{ifail}}={\mathbf{24}}$.
This method is used when ${\mathbf{Bxnl Nlls Method}}=\mathrm{LINEAR SOLVER}$.
This method solves the regularized problem by first converting it to the form
 $minq wT q + 12 qT D q + 1p ‖q‖p ,$
where $D$ is a diagonal matrix from the eigendecomposition, $VD{V}^{\mathrm{T}}$, of the derivatives of either ${m}_{k}^{GN}\left(s\right)$ or ${m}_{k}^{QN}\left(s\right)$. The vectors $w$ and $q$ gather the rest of the transformation involving $s$ and $r\left({x}_{k}\right)$, and $p$ is the regularization order chosen using the optional parameter Bxnl Reg Order. The problem is solved by performing an eigendecomposition of the Hessian in the model and calling the GALAHAD routine DRQS. For further details see Gould et al. (2003).

#### 11.3.3Tensor Newton subproblem

This section describes the regularized methods used to solve the non-quadratic tensor model (3) subproblem, i.e., the step subproblem when ${\mathbf{Bxnl Model}}=\mathrm{TENSOR-NEWTON}$. The schemes implemented find the next step by solving
 $minimize s∈ℝnvar 12 ∑ i=1 nres (tk(s)) i 2 .$ (6)
Note that (6) is also a sum-of-squares problem and, as such, can be solved by recursively calling e04ggf. In this context, the iterations performed by the recursive call to the solver are called inner iterations, otherwise they are called regular or outer iterations. When ${\mathbf{Print Level}}\ge \mathrm{3}$, the iteration type is shown under the flag ‘I’ of the ‘iteration flags’ column while the inner iteration count is shown under the column ‘inn it’ of the Iteration log (see Section 9.1). The method used to solve (6) can be chosen by the optional parameter Bxnl Tn Method and the implemented methods are:
• Implicit solve
This method will solve recursively problem (6) using a quadratic model. GALAHAD routine DRQS is used to estimate the next step by solving the (implicitly) regularized quadratic subproblem
 $arg min s∈ℝnvar mkQN (s) + 12 1Δk ‖s‖2 + 12 1δk ‖s‖2 ,$
where ${m}_{k}^{\mathrm{QN}}\left(s\right)$ is a quasi-Newton model of (3) and the regularization power is $p=2$. The problem can be viewed as having two regularization terms, $\sigma =1/{\Delta }_{k}$ is the (fixed) regularization term from the outer iteration and $\stackrel{^}{\sigma }=1/{\delta }_{k}$ is the regularization term of the inner iteration which is free to be updated as required by the solver.
This method is used when ${\mathbf{Bxnl Tn Method}}=\mathrm{IMPLICIT}$.
• Tacit solve with ${n}_{\mathrm{var}}$ additional terms
This method will solve recursively problem (6) using a hybrid model, tacitly reformulating the problem to incorporate ${n}_{\mathrm{var}}$ additional residuals (see Section 11.3.4). This is implemented by internally setting ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-NVAR-DOF}$, ${\mathbf{Bxnl Basereg Pow}}=2.0$ and the ${\mathbf{Bxnl Basereg Term}}=1/{\Delta }_{k}$. The subproblem to determine the next step has ${n}_{\mathrm{var}}+{n}_{\mathrm{res}}$ parameters and is of the form
 $minimize s∈ℝnvar 12 ‖r^(s)‖2 , where ​ r^ (s) i = { (tk(s)) i 1 ≤ i ≤ nres 1 Δk si nres + 1 ≤ i ≤ nvar + nres .$
This subproblem is solved using GALAHAD routine DRQS and is used when ${\mathbf{Bxnl Tn Method}}=\mathrm{MIN-NVAR}$.
• Tacit solve with one additional term
This method will solve recursively problem (6) using a hybrid model, tacitly reformulating the problem to incorporate one additional residual (see Section 11.3.4). This is implemented by internally setting ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-1-DOF}$, ${\mathbf{Bxnl Basereg Pow}}=3.0$ and the ${\mathbf{Bxnl Basereg Term}}=1/{\Delta }_{k}$. As in the previous method, the subproblem to determine the next step is solved using GALAHAD routine DRQS.
This method is used when ${\mathbf{Bxnl Tn Method}}=\mathrm{MIN-1-VAR}$.
• Explicit solve with ${n}_{\mathrm{var}}$ additional terms
This method will expand the search space with ${n}_{\mathrm{var}}$ additional parameters and solve recursively a regularized variant of problem (6) using a hybrid model (see Section 11.3.4). This is implemented by internally expanding the search space and setting ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-NVAR-DOF}$, ${\mathbf{Bxnl Basereg Pow}}=2.0$ and the ${\mathbf{Bxnl Basereg Term}}=1/{\Delta }_{k}$. Analogous to the previous methods, the subproblem to determine the next step is solved using GALAHAD routine DRQS.
This method is used when ${\mathbf{Bxnl Tn Method}}=\mathrm{ADD-NVAR}$.
• Explicit solve with one additional term
This method will expand the search space with an additional parameter and solve recursively a regularized variant of problem (6) using a hybrid model (see Section 11.3.4). This is implemented by internally adding an additional residual term and setting ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-1-DOF}$, ${\mathbf{Bxnl Basereg Pow}}=3.0$ and the ${\mathbf{Bxnl Basereg Term}}=1/{\Delta }_{k}$. Analogous to the previous methods, the subproblem to determine the next step is solved using GALAHAD routine DRQS.
This method is used when ${\mathbf{Bxnl Tn Method}}=\mathrm{ADD-1-VAR}$.

#### 11.3.4Incorporating the regularizer

The method used to incorporate the regularization specified by $\sigma$ and $p$ in problem (2) is defined using the optional parameter Bxnl Basereg Type. The implemented choices are:
• None
Sets $\sigma =0$ and solves the non-regularized variant of the problem; this is the default.
• Reformulation using ${n}_{\mathrm{var}}$ DoF
Solves a nonlinear least squares problem with ${n}_{\mathrm{var}}$ additional degrees of freedom. The new residual objective function, $\stackrel{^}{r}:{R}^{{n}_{\mathrm{var}}}\to {R}^{{n}_{\mathrm{res}}+{n}_{\mathrm{var}}}$, is defined as
 $r^i (x) = { ri(x) for i=1 ,…, nres σ (x) j for i=nres + j, j=1 ,…, nvar ,$
where ${\left(x\right)}_{j}$ denotes the $j$th component of the iterate $x$ and $\sigma$ is provided using optional parameter Bxnl Basereg Term. This option requires that the (base) regularization power $p$ in (2) be $2.0$, i.e., ${\mathbf{Bxnl Basereg Pow}}=\mathrm{2.0}$ (the default value).
This method is used when ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-NVAR-DOF}$.
• Reformulation using one DoF
Solves a nonlinear least squares problem with one additional degree of freedom. The new residual objective function $\overline{r}:{R}^{{n}_{\mathrm{var}}}\to {R}^{{n}_{\mathrm{res}}+1}$ is defined as
 $r¯ i (x)= { ri(x) 1≤i≤nres 2σp ‖x‖p/2 i= nres+1 .$
Analogous to the previous case, $\sigma$ is defined using the optional parameter Bxnl Basereg Term. When using this option, it is recommended that the (base) regularization power $p$ in (2) be $3.0$, i.e., ${\mathbf{Bxnl Basereg Pow}}=\mathrm{3.0}$.
This method is used when ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-1-DOF}$.
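The two reformulations can be sketched as residual expansions. The scaling of the extra residuals is an assumption made for this illustration: sqrt(sigma)*x_j for EXPAND-NVAR-DOF and sqrt(2*sigma/p)*||x||^(p/2) for EXPAND-1-DOF, chosen so that half the sum of squares of the expanded residual reproduces the regularization term.

```python
import math

# Illustrative sketch of the residual expansions above.  Assumed scalings
# (flagged): sqrt(sigma)*x_j so that 1/2*||r_hat||^2 = 1/2*||r||^2
# + (sigma/2)*||x||^2 for p = 2, and sqrt(2*sigma/p)*||x||^(p/2) so that
# the single extra residual contributes (sigma/p)*||x||^p.
def expand_nvar_dof(r, x, sigma):
    return r + [math.sqrt(sigma) * xj for xj in x]

def expand_1_dof(r, x, sigma, p):
    nx = math.sqrt(sum(v * v for v in x))
    return r + [math.sqrt(2.0 * sigma / p) * nx ** (p / 2.0)]

def half_sq_norm(v):
    return 0.5 * sum(vi * vi for vi in v)

r, x, sigma = [1.0, 2.0], [3.0, 4.0], 0.1
lhs = half_sq_norm(expand_nvar_dof(r, x, sigma))
rhs = half_sq_norm(r) + 0.5 * sigma * sum(v * v for v in x)
print(abs(lhs - rhs) < 1e-12)   # the identity holds
```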

### 11.4Bound Constraints

e04ggf handles the bound constraints by projecting candidate points into the feasible set. The implemented framework is an adaptation of Algorithm 3.12 described in Kanzow et al. (2004), where the Levenberg–Marquardt step is replaced by a trust region (TR) step. The framework consists of three major steps. It first attempts a projected TR step; if unsuccessful, it attempts a Wolfe-type line search step along the projected TR step direction; if that also fails, it defaults to a projected gradient step with an Armijo-type line search. Specifically,
• TR step
The trust region loop needs to be interrupted if the proposed steps, ${s}_{k}$, lead to points outside of the feasible set, i.e., are orthogonal with respect to the active bounds. This is monitored by the ratio ${\tau }_{k}=\frac{‖{P}_{\Omega }\left({x}_{k}+{s}_{k}\right)-{x}_{k}‖}{‖{s}_{k}‖}$, where ${P}_{\Omega }$ is the Euclidean projection operator over the feasible set. $\tau$ provides a convenient way to assess how severe the projection is: if $\tau \approx 0$ then the step, ${s}_{k}$, is indeed orthogonal to the active space and does not provide a suitable search direction, so the loop is terminated. Conversely, if $\tau \approx 1$ then ${s}_{k}$ has components that are not orthogonal to the active set and that can be explored.
The TR step is taken when it is deemed that it makes enough progress in decreasing the error.
• LS step
This step is attempted when the TR step is unsuccessful but the direction ${d}_{k}^{LS}={P}_{\Omega }\left({x}_{k}+{s}_{k}\right)-{x}_{k}$ is considered of descent and a viable search direction in the sense that
 $∇ f(xk)T dkLS ≤ -ρ ‖dkLS‖ ν ,$
with ${s}_{k}$ the TR step, $\rho >0$ and $\nu >1$. A weak Wolfe-type line search along this direction is performed to find the next point. During the line search the intermediary candidates are projected into the feasible set and kept feasible, for details see Section 11.3 in e04kff.
• PG step
The projected gradient (PG) step is only taken if both the TR step and the LS step were unsuccessful. It consists of an Armijo-type line search along the projected gradient direction, ${d}_{k}^{PG}={P}_{\Omega }\left({x}_{k}-\nabla f\left({x}_{k}\right)\right)-{x}_{k}$, for more details on this method refer to Section 11.2 in e04kff.
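The projection operator and the ratio tau used throughout this section can be sketched in a few lines (illustrative Python, not NAG code):

```python
import math

# Illustrative sketch: Euclidean projection onto box bounds and the ratio
# tau_k = ||P(x_k + s_k) - x_k|| / ||s_k|| used to assess the projection.
def project(x, lower, upper):
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, lower, upper)]

def tau(x, s, lower, upper):
    p = project([xi + si for xi, si in zip(x, s)], lower, upper)
    num = math.sqrt(sum((pi - xi) ** 2 for pi, xi in zip(p, x)))
    den = math.sqrt(sum(si * si for si in s))
    return num / den

lower, upper = [0.0, 0.0], [1.0, 1.0]
x = [1.0, 0.5]                 # on the upper bound of the first variable
print(tau(x, [2.0, 0.0], lower, upper))   # step into the active bound: 0.0
print(tau(x, [0.0, 0.3], lower, upper))   # step along the feasible set: 1.0
```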

### 11.5Stopping Criteria

The solver considers that it has found a solution and stops when at least one of the following three conditions is met within the defined absolute or relative tolerances $\left({\epsilon }_{\mathrm{abs}}^{f}>0,{\epsilon }_{\mathrm{rel}}^{f}>0,{\epsilon }_{\mathrm{abs}}^{g}>0,{\epsilon }_{\mathrm{rel}}^{g}>0,{\epsilon }_{\mathrm{step}}>0\right)$,
 $\|r(x_k)\| \le \max\left(\epsilon_{\mathrm{abs}}^{f}, \epsilon_{\mathrm{rel}}^{f} \|r(x_0)\|\right),$ (7)
 $\frac{\|d_k^{PG}\|}{\|r(x_k)\|} \le \max\left(\epsilon_{\mathrm{abs}}^{g}, \epsilon_{\mathrm{rel}}^{g} \frac{\|d_0^{PG}\|}{\|r(x_0)\|}\right),$ (8)
 $\|x_k - x_{k-1}\| \le \epsilon_{\mathrm{step}}.$ (9)
Here ${d}_{k}^{PG}$ is the projected gradient (see PG step in Section 11.4) and is reported in the column optim of the output, while the left-hand side of (8) is reported in the column rel optim, see Iteration log in Section 9.1. If the problem is unconstrained, then the projected gradient reduces to the gradient and the convergence tests are performed on the gradient norm. The stopping tolerances can be changed using the optional parameters Bxnl Stop Abs Tol Fun, Bxnl Stop Abs Tol Grd, Bxnl Stop Rel Tol Fun, Bxnl Stop Rel Tol Grd and Bxnl Stop Step Tol, see Section 12 for details. If these parameters are set too small in relation to the complexity and scaling of the problem, the routine can terminate with ${\mathbf{ifail}}={\mathbf{19}}$, ${\mathbf{22}}$ or ${\mathbf{24}}$.
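The three tests can be sketched as follows (a hypothetical helper, assuming Euclidean norms are supplied as scalars and using the tolerances defined above):

```python
def converged(r_k, r_0, pg_k, pg_0, step_norm,
              eps_abs_f, eps_rel_f, eps_abs_g, eps_rel_g, eps_step):
    """Evaluate the stopping tests (7)-(9); the solver stops when at least
    one of them holds. Illustrative only, not part of the NAG interface."""
    test_f = r_k <= max(eps_abs_f, eps_rel_f * r_0)                # (7)
    test_g = pg_k / r_k <= max(eps_abs_g, eps_rel_g * pg_0 / r_0)  # (8)
    test_s = step_norm <= eps_step                                 # (9)
    return test_f or test_g or test_s
```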

### 11.6A Note About Lagrangian Multipliers

It is often useful to have access to the Lagrangian multipliers (dual variables) associated with the constraints if there are any defined. In the case where only simple bounds are present, the multipliers directly relate to the values of the gradient at the solution. The multipliers of the active bounds are the absolute values of the associated elements of the gradient. The multipliers of the inactive bounds are always zero.
The multipliers based on the final gradient value can be retrieved by calling e04rxf with the command string cmdstr $=$ Dual Variables. The format is the same as for other routines, see Section 3.1 in e04svf.
Note that if the problem has not fully converged, the provided approximation might be quite crude.
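For simple bounds, the relationship between the final gradient and the multipliers described above can be sketched as follows (a hypothetical helper, not a NAG routine; compare with the values retrieved via e04rxf):

```python
import numpy as np

def bound_multipliers(x, grad, lower, upper, tol=1.0e-8):
    # Multipliers for simple bounds: the absolute value of the gradient entry
    # at each active bound, zero at every inactive bound (sketch).
    z = np.zeros_like(x)
    active = (np.abs(x - lower) <= tol) | (np.abs(upper - x) <= tol)
    z[active] = np.abs(grad[active])
    return z

# first variable sits on its lower bound; second is interior, so at a
# solution its gradient entry (and hence its multiplier) is ~ 0
z = bound_multipliers(np.array([0.0, 0.5]), np.array([2.0, 0.0]),
                      np.zeros(2), np.ones(2))
```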

## 12Optional Parameters

Several optional parameters in e04ggf define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04ggf these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmf anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
The option values may be retrieved by e04znf.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.

### 12.1Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
• the keywords, where the minimum abbreviation of each keyword is underlined;
• a parameter value, where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
• the default value, where the symbol $\epsilon$ is a generic notation for machine precision (see x02ajf).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
 Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
 Bxnl Basereg Pow $r$ Default $=2.0$
This parameter defines the regularization power $p$ in (1) and for the tensor Newton subproblem (when ${\mathbf{Bxnl Tn Method}}=\mathrm{IMPLICIT}$). Some values are restricted depending on the type of regularization specified, see Bxnl Basereg Type for more details.
Constraint: ${\mathbf{Bxnl Basereg Pow}}>0$.
 Bxnl Basereg Term $r$ Default $=0.01$
This parameter defines the regularization term $\sigma$ in (1) and for the tensor Newton subproblem (when ${\mathbf{Bxnl Tn Method}}=\mathrm{IMPLICIT}$).
Constraint: ${\mathbf{Bxnl Basereg Term}}>0$.
 Bxnl Basereg Type $a$ Default $=\mathrm{NONE}$
This parameter specifies the method used to incorporate the regularizer into (1) and optionally into the tensor Newton subproblem (when ${\mathbf{Bxnl Model}}=\mathrm{Tensor-Newton}$ and ${\mathbf{Bxnl Tn Method}}=\mathrm{IMPLICIT}$).
The option ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-NVAR-DOF}$ reformulates the original problem by expanding it with ${n}_{\mathrm{var}}$ additional degrees of freedom; the expanded problem is then solved. For the case ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-1-DOF}$ the residual vector is extended with a new term of the form $\frac{\sigma }{p}{‖x‖}_{2}^{p}$; for this method a value of $p=3$ is recommended.
If ${\mathbf{Bxnl Basereg Type}}=\mathrm{EXPAND-NVAR-DOF}$ then the regularization power term $p$ must be $2.0$, that is ${\mathbf{Bxnl Basereg Pow}}=\mathrm{2.0}$. For further details see Section 11.3.
Constraint: ${\mathbf{Bxnl Basereg Type}}=\mathrm{NONE}$, $\mathrm{EXPAND-NVAR-DOF}$ or $\mathrm{EXPAND-1-DOF}$.
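One common way to realize a single-degree-of-freedom expansion is to append one extra residual whose square contributes the regularization term; the sketch below is written under that assumption and is not the solver's exact internal reformulation:

```python
import numpy as np

def extended_residuals(r, x, sigma=0.01, p=3.0):
    # Append one residual so that
    #   (1/2)||r_ext||^2 = (1/2)||r||^2 + (sigma/p)||x||^p
    extra = np.sqrt(2.0 * sigma / p) * np.linalg.norm(x) ** (p / 2.0)
    return np.append(r, extra)

r = np.array([1.0, 2.0])
x = np.array([3.0, 4.0])                      # ||x|| = 5
r_ext = extended_residuals(r, x)
lhs = 0.5 * np.dot(r_ext, r_ext)              # regularized objective
rhs = 0.5 * np.dot(r, r) + (0.01 / 3.0) * 5.0 ** 3.0
```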
 Bxnl Save Covariance Matrix $a$ Default $=\mathrm{NO}$
This parameter indicates to the solver to store the covariance matrix into the handle.
If ${\mathbf{Bxnl Save Covariance Matrix}}=\mathrm{YES}$ then the lower triangle part of the covariance matrix is stored in packed column order (see Section 3.3.2 in the F07 Chapter Introduction) into the handle and can be retrieved via e04rxf using ${\mathbf{cmdstr}}=\mathrm{COVARIANCE MATRIX}$ with ${\mathbf{lrarr}}=\left({n}_{\mathrm{var}}×\left({n}_{\mathrm{var}}+1\right)\right)/2$.
In the special case where ${\mathbf{Bxnl Save Covariance Matrix}}=\mathrm{VARIANCE}$, only the diagonal elements of the covariance matrix are stored in the handle and can be retrieved via e04rxf using ${\mathbf{cmdstr}}=\mathrm{VARIANCE}$ with ${\mathbf{lrarr}}={n}_{\mathrm{var}}$.
Similarly, if ${\mathbf{Bxnl Save Covariance Matrix}}=\mathrm{HESSIAN}$ then the lower triangle part of the matrix $H\left(x\right)=\nabla r\left(x\right){\nabla r\left(x\right)}^{\mathrm{T}}={J\left(x\right)}^{\mathrm{T}}J\left(x\right)$ is stored in packed column order into the handle and can be retrieved via e04rxf using ${\mathbf{cmdstr}}=\mathrm{HESSIAN MATRIX}$ with ${\mathbf{lrarr}}=\left({n}_{\mathrm{var}}×\left({n}_{\mathrm{var}}+1\right)\right)/2$.
Limitations: If the number of enabled residuals is not greater than the number of enabled variables, or the pseudo-inverse of $H\left(x\right)$ could not be calculated, then the covariance matrix (variance vector) is not stored in the handle and will not be available.
For more information on how the covariance matrix is estimated, see e04ycf.
Constraint: ${\mathbf{Bxnl Save Covariance Matrix}}=\mathrm{NO}$, $\mathrm{YES}$, $\mathrm{VARIANCE}$ or $\mathrm{HESSIAN}$.
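The packed column order of the retrieved lower triangle can be expanded into a full symmetric matrix as follows (an illustrative helper, assuming the packed storage scheme described in the F07 Chapter Introduction):

```python
import numpy as np

def unpack_lower(packed, n):
    # Expand a lower triangle stored in packed column order
    # (length n*(n+1)/2) into a full symmetric n-by-n matrix.
    a = np.zeros((n, n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            a[i, j] = packed[k]
            a[j, i] = packed[k]   # mirror into the upper triangle
            k += 1
    return a

# columns packed in order: (a00, a10, a20), (a11, a21), (a22)
c = unpack_lower(np.arange(1.0, 7.0), 3)
```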
 Bxnl Stop Abs Tol Fun $r$ Default $\text{}=2.2{\epsilon }^{\frac{1}{3}}$
This parameter specifies the absolute tolerance for the error test, specifically, it sets the value of ${\epsilon }_{\mathrm{abs}}^{f}$ of equation (7) in Section 11.5. Setting Bxnl Stop Abs Tol Fun to a large value may cause the solver to stop prematurely with a suboptimal solution.
Constraint: ${\mathbf{Bxnl Stop Abs Tol Fun}}>0$.
 Bxnl Stop Abs Tol Grd $r$ Default $\text{}={\epsilon }^{\frac{1}{2}}$
This parameter specifies the absolute tolerance for the gradient test, specifically, it sets the value of ${\epsilon }_{\mathrm{abs}}^{g}$ of equation (8) in Section 11.5. Setting Bxnl Stop Abs Tol Grd to a large value may cause the solver to stop prematurely with a suboptimal solution.
Constraint: ${\mathbf{Bxnl Stop Abs Tol Grd}}>0$.
 Bxnl Stop Rel Tol Fun $r$ Default $\text{}={\epsilon }^{\frac{1}{2}}$
This parameter specifies the relative tolerance for the error test, specifically, it sets the value of ${\epsilon }_{\mathrm{rel}}^{f}$ of equation (7) in Section 11.5. Setting Bxnl Stop Rel Tol Fun to a large value may cause the solver to stop prematurely with a suboptimal solution.
Constraint: ${\mathbf{Bxnl Stop Rel Tol Fun}}>0$.
 Bxnl Stop Rel Tol Grd $r$ Default $\text{}={\epsilon }^{\frac{1}{2}}$
This parameter specifies the relative tolerance for the gradient test, specifically, it sets the value of ${\epsilon }_{\mathrm{rel}}^{g}$ of equation (8) in Section 11.5. Setting Bxnl Stop Rel Tol Grd to a large value may cause the solver to stop prematurely with a suboptimal solution.
Constraint: ${\mathbf{Bxnl Stop Rel Tol Grd}}>0$.
 Bxnl Stop Step Tol $r$ Default $\text{}=2\epsilon$
Specifies the stopping tolerance for the step length test, specifically, it sets the value for ${\epsilon }_{\mathrm{step}}$ of equation (9) in Section 11.5. Setting Bxnl Stop Step Tol to a large value may cause the solver to stop prematurely with a suboptimal solution.
If the solver exits because the step length test was satisfied, it is recommended to verify that either the error or the gradient norm is acceptably small; this is particularly advisable when the quality of the first- or second-order derivatives is in doubt.
Constraint: ${\mathbf{Bxnl Stop Step Tol}}>0$.
 Bxnl Reg Order $a$ Default $\text{}=\mathrm{AUTO}$
This parameter specifies the order of the regularization $p$ in (5) used when ${\mathbf{Bxnl Glob Method}}=\mathrm{Reg}$.
Some values for $p$ are restricted depending on the method chosen in Bxnl Nlls Method, see Section 11.3.2 for more details.
Constraint: ${\mathbf{Bxnl Reg Order}}=\mathrm{AUTO}$, $\mathrm{QUADRATIC}$ or $\mathrm{CUBIC}$.
 Bxnl Glob Method $a$ Default $=\mathrm{TR}$
This parameter specifies the globalization method used to estimate the next step ${s}_{k}$. It also determines the class of subproblem to solve. The trust region subproblem finds the step by minimizing the specified model within a given radius. On the other hand, when ${\mathbf{Bxnl Glob Method}}=\mathrm{REG}$, the problem is reformulated by adding an additional regularization term and minimized in order to find the next step ${s}_{k}$. See Section 11.3 for more details.
Constraint: ${\mathbf{Bxnl Glob Method}}=\mathrm{TR}$ or $\mathrm{REG}$.
 Bxnl Nlls Method $a$ Default $=\mathrm{GALAHAD}$
This parameter defines the method used to estimate the next step ${s}_{k}$ in ${x}_{k+1}={x}_{k}+{s}_{k}$. It only applies to ${\mathbf{Bxnl Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$ or $\mathrm{HYBRID}$. When the globalization technique chosen is trust region (${\mathbf{Bxnl Glob Method}}=\mathrm{TR}$), the methods available for Bxnl Nlls Method are Powell's dogleg method, a generalized eigenvalue method (AINT) Adachi et al. (2015), a variant of Moré–Sorensen's method, and GALAHAD's DTRS method. Otherwise, when the globalization method chosen is via regularization (${\mathbf{Bxnl Glob Method}}=\mathrm{REG}$), the methods available comprise a linear system solver and GALAHAD's DRQS method. See Section 11.3 for more details.
Constraint: ${\mathbf{Bxnl Nlls Method}}=\mathrm{POWELL-DOGLEG}$, $\mathrm{AINT}$, $\mathrm{MORE-SORENSEN}$, $\mathrm{LINEAR SOLVER}$ or $\mathrm{GALAHAD}$.
 Bxnl Model $a$ Default $=\mathrm{HYBRID}$
This parameter specifies which model is used to approximate the objective function and estimate the next point that reduces the error. This is one of the most important optional parameters and should be chosen according to the problem characteristics. The models are briefly described in Section 11.2.
Constraint: ${\mathbf{Bxnl Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$, $\mathrm{HYBRID}$ or $\mathrm{TENSOR-NEWTON}$.
 Bxnl Tn Method $a$ Default $=\mathrm{MIN-1-VAR}$
This parameter specifies how to solve the subproblem and find the next step ${s}_{k}$ for the tensor Newton model, ${\mathbf{Bxnl Model}}=\mathrm{TENSOR-NEWTON}$. The subproblems are solved using a range of regularization schemes. See Section 11.3.3.
Constraint: ${\mathbf{Bxnl Tn Method}}=\mathrm{IMPLICIT}$, $\mathrm{MIN-1-VAR}$, $\mathrm{MIN-NVAR}$, $\mathrm{ADD-1-VAR}$ or $\mathrm{ADD-NVAR}$.
 Bxnl Use Second Derivatives $a$ Default $=\mathrm{NO}$
This parameter indicates whether the weighted sum of residual Hessians is available through the call-back lsqhes. If ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{NO}$ and the specified model in Bxnl Model requires user-supplied second derivatives, then the solver will terminate with ${\mathbf{ifail}}={\mathbf{6}}$.
Constraint: ${\mathbf{Bxnl Use Second Derivatives}}=\mathrm{YES}$ or $\mathrm{NO}$.
 Bxnl Use Weights $a$ Default $=\mathrm{NO}$
This parameter indicates whether to use a weighted nonlinear least square model. If ${\mathbf{Bxnl Use Weights}}=\mathrm{YES}$ then the weights ${w}_{i}>0,i=1,\dots ,{n}_{\mathrm{res}}$ in (2) must be supplied by you via e04rxf. If weights are to be used, then all ${n}_{\mathrm{res}}$ elements must be provided, see Section 9.2. If the solver is expecting to use weights but they are not provided or have non-positive values, then the solver will terminate with ${\mathbf{ifail}}={\mathbf{11}}$.
Constraint: ${\mathbf{Bxnl Use Weights}}=\mathrm{YES}$ or $\mathrm{NO}$.
 Bxnl Iteration Limit $i$ Default $=1000$
This parameter specifies the maximum number of major iterations allotted to the solver. If this limit is reached, the solver will terminate with ${\mathbf{ifail}}={\mathbf{22}}$.
Constraint: ${\mathbf{Bxnl Iteration Limit}}\ge 1$.
 Bxnl Monitor Frequency $i$ Default $=0$
If ${\mathbf{Bxnl Monitor Frequency}}>0$, the user-supplied subroutine monit will be called at the end of every $i$th step for monitoring purposes.
Constraint: ${\mathbf{Bxnl Monitor Frequency}}\ge 0$.
 Bxnl Print Header $i$ Default $=30$
This parameter defines, in number of iterations, the frequency with which to print the iteration log header.
Constraint: ${\mathbf{Bxnl Print Header}}\ge 1$.
 Infinite Bound Size $r$ Default $\text{}={10}^{20}$
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty$ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty$). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: ${\mathbf{Infinite Bound Size}}\ge 1000$.
 Monitoring File $i$ Default $\text{}=-1$
If $i\ge 0$, the unit number for the secondary (monitoring) output. If ${\mathbf{Monitoring File}}=-1$, no secondary output is provided. The information output to this unit is controlled by Monitoring Level.
Constraint: ${\mathbf{Monitoring File}}\ge -1$.
 Monitoring Level $i$ Default $=4$
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as for Print Level.
Constraint: $0\le {\mathbf{Monitoring Level}}\le 5$.
 Print File $i$ Default $=\text{advisory message unit number}$
If $i\ge 0$, the unit number for the primary output of the solver. If ${\mathbf{Print File}}=-1$, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf at the time of the optional parameters initialization, e.g., at the initialization of the handle. The information output to this unit is controlled by Print Level.
Constraint: ${\mathbf{Print File}}\ge -1$.
 Print Level $i$ Default $=2$
This parameter defines how detailed information should be printed by the solver to the primary and secondary output.
$\mathbit{i}$ Output
$0$ No output from the solver.
$1$ The Header and Summary.
$2$, $3$, $4$, $5$ Additionally, the Iteration log.
Constraint: $0\le {\mathbf{Print Level}}\le 5$.
 Print Options $a$ Default $=\mathrm{YES}$
If ${\mathbf{Print Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary output and is always printed to the secondary output.
Constraint: ${\mathbf{Print Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
 Print Solution $a$ Default $=\mathrm{NO}$
If ${\mathbf{Print Solution}}=\mathrm{X}$, the final values of the primal variables are printed on the primary and secondary outputs.
If ${\mathbf{Print Solution}}=\mathrm{YES}$ or $\mathrm{ALL}$, in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.
Constraint: ${\mathbf{Print Solution}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{X}$ or $\mathrm{ALL}$.
 Stats Time $a$ Default $=\mathrm{NO}$
This parameter turns on timing, which can be helpful when comparing different solution approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL CLOCK}$.
Constraint: ${\mathbf{Stats Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL CLOCK}$.
 Time Limit $r$ Default $\text{}={10}^{6}$
A limit on the number of seconds that the solver can use to solve one problem. If this limit is exceeded at the end of an iteration, the solver will terminate with ${\mathbf{ifail}}={\mathbf{23}}$.
Constraint: ${\mathbf{Time Limit}}>0$.