Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
e04ggc is a bound-constrained nonlinear least squares trust region solver (BXNL) from the NAG optimization modelling suite aimed at small to medium-scale problems.
where ${r}_{i}\left(x\right),i=1,\dots ,{n}_{\mathrm{res}}$, are smooth nonlinear functions called residuals,
${w}_{i},i=1,\dots ,{n}_{\mathrm{res}}$, are weights (by default they are all set to $1$; see Section 9.2 on how to change them), and the rightmost term is the regularization term with parameter $\sigma \ge 0$ and power $p>0$ (by default the regularization term is not used; see Section 11 on how to enable it). The constraint elements ${l}_{x}$ and ${u}_{x}$ are ${n}_{\mathrm{var}}$-dimensional vectors defining the bounds on the variables.
Typically in a calibration or data fitting context, the residuals will be defined as the difference between the observed values ${y}_{i}$ at
${t}_{i}$ and the values provided by a
nonlinear model $\varphi (t;x)$, i.e., ${r}_{i}\left(x\right):={y}_{i}-\varphi ({t}_{i};x)$. If these residuals (errors) follow a Gaussian distribution, then the values of the optimal parameter vector ${x}^{*}$ are the maximum likelihood estimates. For a description of the various algorithms implemented for solving this problem see Section 11. It is also recommended that you read Section 2.2.3 in the E04 Chapter Introduction.
e04ggc serves as a solver for problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. First, the problem handle is initialized by calling e04rac. A nonlinear least squares objective can be added by calling e04rmc and, optionally, (simple) box constraints can be defined by calling e04rhc. It should be noted that e04ggc internally works with a dense representation of the residual Jacobian even if a sparse structure was defined in the call to e04rmc. Once the problem is fully described, the handle may be passed to the solver e04ggc. When the handle is no longer needed, e04rzc should be called to destroy it and deallocate the memory held within. For more information refer to the NAG optimization modelling suite in Section 4.1 in the E04 Chapter Introduction.
The algorithm is based on the trust region framework and its behaviour can be modified by various optional parameters (see Section 12) which can be set by e04zmc and e04zpc anytime between the initialization of the handle by e04rac and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters. The option getter e04znc can be called to retrieve the current value of any option.
Several options might have significant impact on the performance of the solver.
Even though the defaults were chosen to suit the majority of anticipated problems, it is recommended that you experiment with the option settings to find the most suitable set of options for a particular problem, see Sections 11 and 12 for further details.
4 References
Adachi S, Iwata S, Nakatsukasa Y and Takeda A (2015) Solving the trust region subproblem by a generalized eigenvalue problem Technical report, METR 2015-14 Mathematical Engineering, The University of Tokyo https://www.keisu.t.u-tokyo.ac.jp/data/2015/METR15-14.pdf
Conn A R, Gould N I M and Toint Ph L (2000) Trust Region Methods SIAM, Philadelphia
Gould N I M, Orban D and Toint Ph L (2003) GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization ACM Transactions on Mathematical Software (TOMS) 29(4) 353–372
Kanzow C, Yamashita N and Fukushima M (2004) Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints Journal of Computational and Applied Mathematics 174 375–397
Lanczos C (1956) Applied Analysis 272–280 Prentice Hall, Englewood Cliffs, NJ, USA
Nielsen H B (1999) Damping parameter in Marquardt's Method Technical report TR IMM-REP-1999-05 Department of Mathematical Modelling, Technical University of Denmark http://www2.imm.dtu.dk/documents/ftp/tr99/tr05_99.pdf
Nocedal J and Wright S J (2006) Numerical Optimization (2nd Edition) Springer Series in Operations Research, Springer, New York
5 Arguments
1: $\mathbf{handle}$ – void * Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04rac) and to hold a problem formulation compatible with e04ggc. It must not be changed between calls to the NAG optimization modelling suite.
2: $\mathbf{lsqfun}$ – function, supplied by the user External Function
lsqfun must evaluate the value of the nonlinear residuals, ${r}_{i}\left(x\right):={y}_{i}-\varphi ({t}_{i};x),i=1,\dots ,{n}_{\mathrm{res}}$, at a specified point $x$.
On exit: the value of the residual vector, $r\left(x\right)$, evaluated at $x$.
5: $\mathbf{inform}$ – Integer * Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that some residuals could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and request an alternative point. If the rescue procedure fails, the solver will exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_NAN.
6: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqfun.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e04ggc you may allocate memory and initialize these pointers with various quantities for use by lsqfun when called from e04ggc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
3: $\mathbf{lsqgrd}$ – function, supplied by the user External Function
lsqgrd evaluates the residual gradients, $\nabla {r}_{i}\left(x\right)$, at a specified point $x$.
On entry: $x$, the vector of variable values at which the
residual gradients, $\nabla {r}_{i}\left(x\right)$, are to be evaluated.
3: $\mathbf{nres}$ – Integer Input
On entry: ${n}_{\mathrm{res}}$, the current number of residuals in the model.
4: $\mathbf{nnzrd}$ – Integer Input
On entry: the number of nonzeros in the first derivative matrix. If ${\mathbf{isparse}}=0$ in the call to e04rmc (recommended use for e04ggc), then ${\mathbf{nnzrd}}={\mathbf{nvar}}\times {\mathbf{nres}}$. The elements must be stored in the same order as the sparsity pattern defined in the call to e04rmc.
6: $\mathbf{inform}$ – Integer * Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that the residual gradients could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and request an alternative point. If the rescue procedure fails, the solver will exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_NAN.
7: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqgrd.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e04ggc you may allocate memory and initialize these pointers with various quantities for use by lsqgrd when called from e04ggc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
4: $\mathbf{lsqhes}$ – function, supplied by the user External Function
lsqhes evaluates the residual Hessians,
${\nabla}^{2}{r}_{i}\left(x\right)$,
at a specified point $x$.
By default, the optional parameter ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{NO}$ and lsqhes
is never called. lsqhes may be
specified as NULLFN.
This function will only be called if the optional parameter ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{YES}$ and
if the model (see Section 11.2)
requires second order
information. Under these circumstances, if you do not provide a valid
lsqhes the solver will terminate with either
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_DERIV_ERRORS or ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_FAILED_START.
A dense square (symmetric) matrix containing the $i$th residual Hessian evaluated at the point $x$. All matrix elements must be provided: both the upper and lower triangular parts.
6: $\mathbf{inform}$ – Integer * Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that one or more elements of the residual Hessian could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and if the rescue procedure fails, the solver will exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_NAN.
7: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqhes.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e04ggc you may allocate memory and initialize these pointers with various quantities for use by lsqhes when called from e04ggc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
5: $\mathbf{lsqhprd}$ – function, supplied by the user External Function
lsqhprd evaluates the residual Hessians,
${\nabla}^{2}{r}_{i}\left(x\right)$,
at a specified point, $x$, and performs matrix-vector
products with a given vector, $y$,
returning the dense matrix
$[{\nabla}^{2}{r}_{1}\left(x\right)y,{\nabla}^{2}{r}_{2}\left(x\right)y,\dots ,{\nabla}^{2}{r}_{{n}_{\mathrm{res}}}\left(x\right)y]$.
If you do not supply this function, it may be
specified as NULLFN.
On entry: inform has a non-zero value on the first call to lsqhprd; this can be used to optimize your code and avoid recalculating common quantities when evaluating the Hessians. On all other calls inform has the value zero. This notification parameter may be safely ignored if such optimization is not required.
On exit: may be used to indicate that one or more elements of the residual Hessian could not be computed at the requested point. This can be done by setting inform to a negative value. The solver will attempt a rescue procedure and if the rescue procedure fails, the solver will exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_NAN. The value of inform returned on the first call is ignored.
7: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqhprd.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e04ggc you may allocate memory and initialize these pointers with various quantities for use by lsqhprd when called from e04ggc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
6: $\mathbf{monit}$ – function, supplied by the user External Function
monit is provided to enable monitoring of the progress of the optimization and, if necessary, to halt the optimization process.
If no monitoring is required, monit may be specified as NULLFN.
monit is called at the end of every $i$th step where $i$ is controlled by the optional parameter ${\mathbf{Bxnl\; Monitor\; Frequency}}$ (the default value is $0$, monit is not called).
On exit: may be used to request the solver to stop immediately
by setting inform to a non-zero value in which case it will terminate
with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_STOP; otherwise, the solver will proceed normally.
On entry: solver statistics at monitoring steps or at the end of the current iteration (the values are as described in the main argument stats).
6: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to monit.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e04ggc you may allocate memory and initialize these pointers with various quantities for use by monit when called from e04ggc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
7: $\mathbf{nvar}$ – Integer Input
On entry: ${n}_{\mathrm{var}}$, the current number of decision variables, $x$, in the model.
$2$ – Norm of the scaled projected gradient at the current iterate, see (8) in Section 11.5.
$3$ – Norm of the step between the current and previous iterate.
$4$ – Convergence tests result: a scalar value between $0$ and $7$ indicating which convergence tests have passed. Specifically, $1$ indicates the small objective test passed, $2$ indicates the small (scaled) gradient test passed, and $4$ indicates the small step test passed. In the case where two or more tests passed, their values are accumulated (added together).
$5$ – Norm of the current iterate $x$. If regularization is requested, then this value was used in the regularization and it might differ from $\Vert x\Vert $ if $x$ has fixed or disabled elements.
On exit: solver statistics at monitoring steps or at the end of the final iteration as given in the table below:
$0$ – Number of iterations performed.
$1$ – Total number of calls to the objective function lsqfun.
$2$ – Total number of calls to the objective gradient function lsqgrd.
$3$ – Total number of calls to the objective Hessian function lsqhes.
$4$ – Total time in seconds spent in the solver, including time spent in user-supplied functions.
$5$ – Number of calls to the objective function lsqfun required by line search steps.
$6$ – Number of calls to the objective gradient function lsqgrd required by line search steps.
$7$ – Number of calls to the objective function lsqfun required by projected gradient steps.
$8$ – Number of calls to the objective gradient function lsqgrd required by projected gradient steps.
$9$ – Number of inner iterations performed, see optional parameter ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$.
$10$ – Number of line search iterations performed.
$11$ – Number of projected gradient iterations performed.
$12$ – Total number of calls to the auxiliary Hessian function lsqhprd.
$13$–$99$ – Reserved for future use.
13: $\mathbf{comm}$ – Nag_Comm *
The NAG communication argument (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
14: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
e04ggc returns with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR if the iterates have converged to a point $x$ that satisfies the convergence criteria described in Section 11.5.
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument $\langle \mathit{\text{value}}\rangle $ had an illegal value.
NE_DC_MATCH
Data for residual weights not found or is invalid.
Custom residual weights are required, optional parameter ${\mathbf{Bxnl\; Use\; Weights}}=\mathrm{YES}$, but the weights data is missing, has the wrong size or contains invalid values. Please refer to Section 9.2.
NE_DERIV_ERRORS
Exact second derivatives needed for tensor model.
Model in the optional parameter ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$ requires exact second derivatives but ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{NO}$. Provide second derivatives via lsqhes and optionally lsqhprd functions, and set optional parameter ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{YES}$.
NE_FAILED_START
The current starting point is unusable.
While trying to evaluate the starting point ${x}_{0}$, either inform was set to a non-zero value within the user-supplied functions, lsqfun, lsqgrd or lsqhes, or an Infinity or NaN was detected in values returned from them.
NE_HANDLE
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_NO_IMPROVEMENT
The solver was terminated because no further progress could be achieved.
This can indicate that the solver is calculating very small step sizes and is making very little progress. It could also indicate that the problem has been solved to the best numerical accuracy possible given the current scaling.
It can also indicate that a recovery procedure was interrupted due to the user-supplied function lsqgrd being incorrect.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_NUM_DIFFICULTIES
Numerical difficulties encountered and solver was terminated.
This error can be caused by an ill-posed or poorly scaled problem.
NE_PHASE
The problem is already being solved.
Unsupported model and method chosen.
The specified model in optional parameter ${\mathbf{Bxnl\; Model}}$ is not supported by the method specified by the optional parameter ${\mathbf{Bxnl\; Nlls\; Method}}=\mathrm{POWELL-DOGLEG}$.
Unsupported option combinations.
The specified combination of values for optional parameters ${\mathbf{Bxnl\; Nlls\; Method}}$ and ${\mathbf{Bxnl\; Glob\; Method}}$ is not supported.
NE_REF_MATCH
On entry, ${\mathbf{nvar}}=\langle \mathit{\text{value}}\rangle $, expected $\mathrm{value}=\langle \mathit{\text{value}}\rangle $.
Constraint: nvar must match the current number of variables of the model in the handle.
The information supplied does not match with that previously stored.
On entry, ${\mathbf{nres}}=\langle \mathit{\text{value}}\rangle $ must match that given during the definition of the objective in the handle, i.e., $\langle \mathit{\text{value}}\rangle $.
There are no decision variables.
nvar must be greater than zero.
NE_SETUP_ERROR
This solver does not support the model defined in the handle.
NE_TIME_LIMIT
The solver terminated after the maximum time allowed was exceeded.
Maximum number of seconds exceeded. Use optional parameter ${\mathbf{Time\; Limit}}$ to reset the limit.
NE_TOO_MANY_ITER
Maximum number of iterations reached.
Use optional parameter ${\mathbf{Bxnl\; Iteration\; Limit}}$ to reset the limit.
NE_TOO_MANY_MINOR_ITER
Iteration limit reached while solving a subproblem.
Maximum number of iterations reached while trying to solve an auxiliary subproblem.
Line Search failed.
Line Search in the projected gradient direction did not find an acceptable new iterate.
NE_USER_NAN
Invalid number detected in user-supplied function and recovery failed.
Either inform
was set to a non-zero value within one of the user-supplied functions, lsqfun, lsqgrd, lsqhes, or lsqhprd, or an Infinity or NaN was detected in values returned from them and the recovery attempt failed.
NE_USER_STOP
User requested termination during a monitoring step. inform was set to a non-zero value within the user-supplied function monit.
7 Accuracy
The accuracy of the solution is determined by
optional parameters
${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Fun}}$,
${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Grd}}$,
${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Fun}}$,
${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Grd}}$, and
${\mathbf{Bxnl\; Stop\; Step\; Tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR on exit, the returned point satisfies
(7),
(8) or
(9) to the defined accuracies.
Please refer to Section 11.5 and the description of the particular options in Section 12.
8 Parallelism and Performance
e04ggc is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04ggc makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and the progress of the computation. The output may be sent to two independent file IDs which are set by the optional parameters ${\mathbf{Print\; File}}$ and ${\mathbf{Monitoring\; File}}$. Optional parameters ${\mathbf{Print\; Level}}$, ${\mathbf{Print\; Options}}$, ${\mathbf{Monitoring\; Level}}$ and ${\mathbf{Print\; Solution}}$ determine the exposed level of detail. This allows, for example, a detailed log file to be generated while the condensed information is displayed on the screen.
By default (${\mathbf{Print\; File}}=6$, ${\mathbf{Print\; Level}}=2$), the following sections are printed to the standard output:
Header
Optional parameters list (if ${\mathbf{Print\; Options}}=\mathrm{YES}$)
The header is a message indicating the start of the solver. It should look like:
----------------------------------------------------------------------
E04GG, Nonlinear least squares method for bound-constrained problems
----------------------------------------------------------------------
Optional parameters list
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a list of the optional parameters and their values is printed before the problem statistics. The list shows all options of the solver, each displayed on one line. Each line contains the option name, its current value and an indicator for how it was set. The options unchanged from their defaults are noted by ‘d’ and the ones you have set are noted by ‘U’. Note that the output format is compatible with the file format expected by e04zpc. The output looks similar to:
Begin of Options
Bxnl Model = Gauss-newton * U
Bxnl Nlls Method = Galahad * d
Bxnl Glob Method = Reg * U
Bxnl Reg Order = Auto * d
Bxnl Tn Method = Min-1-var * d
Bxnl Basereg Pow = 2.00000E+00 * d
Bxnl Basereg Term = 1.00000E-02 * d
Bxnl Iteration Limit = 1000 * d
Bxnl Use Second Derivatives = Yes * U
End of Options
Problem statistics
If ${\mathbf{Print\; Level}}\ge 2$, statistics on the problem are printed, for example:
Problem Statistics
No of variables 4 (+2 disabled, +0 fixed)
free (unconstrained) 3
bounded 1
Objective function LeastSquares
No of residuals 16 (+8 disabled)
Iteration log
If ${\mathbf{Print\; Level}}=2$, the solver will print a summary line for each step. An iteration is considered successful when it yields a decrease of the objective, either sufficiently close to the decrease predicted by the model or to a given relative threshold. Each line shows the iteration number (Iter), the value of the objective function (error), and the absolute and relative norms of the projected gradient (optim) and (rel optim); the latter is used in the convergence test of equation (8). The output looks as follows:
If ${\mathbf{Print\; Level}}\ge 3$, each line additionally shows the current value of the trust region radius (Delta), quality of the model (rho), some flags relating to the iteration (S2IF), inner iteration counter (inn it) for the tensor Newton model, the step length taken (step), trust region loop exit status (loop), performed line search type (LS), as well as the projection factor over the constraints (tau). It might look as follows:
Iteration flags column (S2IF) contains four flags related to the iteration. Flag ‘S’ indicates if the trust region iteration was successful (S) or unsuccessful (U). Flag ‘2’ shows if iteration used second-order information: yes (Y), no (N), tensor (T), or approximate (A). Flag ‘I’ indicates iteration type: regular (R) or inner (I). Exit flag of inner solver ‘F’ has three states: subproblem converged (C), not solved (E), or current iteration is inside subproblem or tensor model not used (-). For details on the interpretation of rho and tau, see Section 11.
If the Tensor-Newton model is chosen, then details of each inner iteration can be printed by setting ${\mathbf{Print\; Level}}=4$; the output is similar to:
Note the iteration type flag ‘I’ change under the S2IF column: the output reports two (R) regular iterations, each of which required three (I) inner iterations.
Additionally, if ${\mathbf{Print\; Level}}=5$, each iteration produces more information that expands over several lines. This additional information can contain:
Details on the trust region subproblem;
Iteration log for auxiliary iterative methods;
Line Search iteration logs.
The output might look as follows:
*** Solving the trust region subproblem using More-Sorensen ***
A is symmetric positive definite
iter nd sigma sigma_shift
0 3.6778E-01 0.0000E+00 0.0000E+00
nq = 7.7000E+02
1 1.2571E-02 6.4469E-06 6.4469E-06
We're within the trust region radius
Leaving More-Sorensen
Model evaluated successfully: m_k(d) = 2.3089E-08
*** Subproblem solution found ***
Actual reduction (in cost function) = 1.2065E-09
Predicted reduction (in model) = 4.1866E-09
rho returned = 2.8819E-01
Successful step -- Delta staying at 1.2570E-02
Summary
Once the solver finishes, a summary is produced:
-------------------------------------------------------------------------------
Status: converged, an optimal solution was found
small (scaled) projected gradient norm
-------------------------------------------------------------------------------
Value of the objective 2.17328E-06
Norm of projected gradient 1.51989E-08
Norm of scaled projected gradient 7.29019E-06
Norm of step 4.98107E-04
Iterations 80
Inner iterations 0
LS iterations 0
PG iterations 0
Function evaluations 81
Gradient evaluations 81
Hessian evaluations (objhes) 0
Hessian evaluations (objhprd) 0
LS function calls 0
LS gradient calls 0
PG function calls 0
PG gradient calls 0
Optionally, if ${\mathbf{Stats\; Time}}=\mathrm{YES}$, the timings are printed:
Timing
Total time spent 2.43 sec
-------------------------------------------------------------------------------
Solution
If ${\mathbf{Print\; Solution}}=\mathrm{YES}$, the values of the primal variables are printed; furthermore, if the problem is constrained, the dual variables are also reported, see Lagrangian Multipliers in e04kfc and the dual variables storage format described in Section 3.1 in e04svc. It might look as follows:
Primal variables:
idx Lower bound Value Upper bound
1 0.00000E+00 4.58516E-01 1.00000E+00
2 -inf 3.05448E+00 inf
3 -inf 4.65146E+00 inf
4 Disabled NaN Disabled
Box bounds dual variables:
idx Lower bound Value Upper bound Value
1 0.00000E+00 0.00000E+00 1.00000E+00 9.52218E-10
2 -inf 4.66962E-11 inf 0.00000E+00
3 -inf 6.55098E-11 inf 0.00000E+00
4 Disabled NaN Disabled NaN
9.2 Residual Weights
A typical use for weights in the least squares fitting context is to account for the uncertainty in the observed data, ${\sigma}_{i}^{2}$, by setting the weights to ${w}_{i}=1/{\sigma}_{i}^{2}$. The idea behind this choice is to give less importance (a small weight) to measurements which have large variance.
In order to use weights,
1. request the use of weights by setting the optional parameter ${\mathbf{Bxnl\; Use\; Weights}}=\mathrm{YES}$ (this will request the solver to query the handle for an array of weights), and
2. store the weights array in the handle. This is done by calling e04rxc with the command ${\mathbf{cmdstr}}=\text{'}\mathrm{Residual\; Weights}\text{'}$ and passing the array length and weights array, ${\mathbf{lrarr}}={\mathbf{nres}}$ and ${\mathbf{rarr}}=w$, respectively. Weights are required for each residual and all weights must be positive.
These steps must be done after the handle is initialized (via e.g., e04rac) but before calling the solver e04ggc. The stored weights in the handle will only be accessed if ${\mathbf{Bxnl\; Use\; Weights}}=\mathrm{YES}$, otherwise all weights are assumed to be $1$ and the handle is not queried for residual weights.
If the solver is expecting to use weights but they are not provided, or the array has the wrong length or contains non-positive values, then the solver will terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_DC_MATCH.
9.3 Internal Changes
Internal changes have been made to this function as follows:
At Mark 27.1:
Added support for underdetermined problems. The solver can export the covariance matrix at a solution point.
For details of all known issues which have been reported for the NAG Library please refer to the Known Issues.
10 Example
In this example we solve the Lanczos-3 problem (Lanczos (1956)), a nonlinear least squares regression. The regression problem consists of ${\mathbf{nres}}=24$ observations, $(t,y)$, to be fitted over the ${\mathbf{nvar}}=6$ parameter model
11 Algorithmic Details
This section contains a short description of the underlying algorithms used in e04ggc, a bound-constrained nonlinear least squares (BXNL) solver that uses a model-based trust region framework adapted to exploit the least squares problem structure. It is based on a collaborative work between NAG and the STFC Rutherford Appleton Laboratory. For further details, see Gould et al. (2017) and references therein.
11.1 Trust Region Algorithm
In this section, we are interested in generic nonlinear least squares problems of the form
where ${r}_{i}\left(x\right)$, $i=1,\dots ,{n}_{\mathrm{res}}$, are smooth nonlinear functions called residuals, ${w}_{i}>0,i=1,\dots ,{n}_{\mathrm{res}}$, are the weights (by default they are all set to $1$; see Section 9.2 on how to change them), and the rightmost element represents the optional regularization term with parameter $\sigma \ge 0$ and power $p>0$. The constraint elements ${l}_{x}$ and ${u}_{x}$ are ${n}_{\mathrm{var}}$-dimensional vectors defining the bounds on the variables. For the rest of this chapter, and without any loss of generality, it is assumed that the weights are all set to the default value of $1$ and are excluded from the formulae. e04ggc is an iterative framework for solving (2) which consists of a variety of algorithms that solve the trust region subproblem. The fundamental ideas of the framework follow.
At each point ${x}_{k}$, the algorithm builds a model of the function at the next step, $f({x}_{k}+{s}_{k})$, which we refer to as ${m}_{k}$ (see Section 11.2).
Once the model has been formed, the candidate for the next point is found by solving a suitable subproblem (see Section 11.3).
Let
${P}_{\Omega}\left(x\right)$
be the Euclidean projection operator over the feasible set, then the quantity
is used to assess the quality of the proposed step. If it is sufficiently large we accept the step and ${x}_{k+1}$ is set to
${P}_{\Omega}({x}_{k}+{s}_{k})$;
if not, the trust region radius ${\Delta}_{k}$ is reduced and the resulting new trust region
subproblem is solved. If the step is very successful ($\rho $ is close to $1$), ${\Delta}_{k}$ is increased.
Under certain circumstances, it is deemed that the projection of the current point with the trust region step will not produce a successful point and
the new step ${s}_{k}$ is calculated using a convenient line search step.
This process continues until a point is found that satisfies the stopping criteria described in Section 11.5. More precisely, it can be described as:
BXNL Algorithm
1. Initialization
Set $k=0$, choose a feasible initial guess ${x}_{0}$ and set the initial trust region radius ${\Delta}_{0}$.
2. Iteration $k$
(i) Stop if ${x}_{k}$ is a solution point (see stopping criteria in Section 11.5).
(ii) Solve the trust region subproblem with ${\Delta}_{k}$ and provide step ${s}_{k}$.
(iii) Project the new point ${x}_{k}+{s}_{k}$ into the bound constraints.
(iv) Evaluate the objective at the new point.
(v) Update the trust region radius ${\Delta}_{k+1}$ based on the ratio ${\rho}_{k}$.
(vi) If the objective has decreased sufficiently (successful step) choose ${x}_{k+1}={P}_{\Omega}({x}_{k}+{s}_{k})$. Return to 2(i).
(vii) Assess the severity of the projection for the new point using the ratio ${\tau}_{k}=\frac{\Vert {P}_{\Omega}({x}_{k}+{s}_{k})-{x}_{k}\Vert}{\Vert {s}_{k}\Vert}$.
(viii) If ${\tau}_{k}$ is close to $1$, either return to 2(ii) with ${\Delta}_{k+1}$ or try to perform a line search in the direction ${d}_{k}^{LS}={P}_{\Omega}({x}_{k}+{s}_{k})-{x}_{k}$.
If successful, set ${s}_{k}={\alpha}_{k}{d}_{k}^{LS}$ and ${x}_{k+1}={x}_{k}+{s}_{k}$. Return to 2(i).
(ix) Take a projected gradient step by performing a line search in the direction ${d}_{k}^{PG}={P}_{\Omega}({x}_{k}-\nabla f\left({x}_{k}\right))-{x}_{k}$, set ${s}_{k}={\alpha}_{k}{d}_{k}^{PG}$ and ${x}_{k+1}={x}_{k}+{s}_{k}$.
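The acceptance logic of steps 2(iii)–2(vii) above can be sketched as follows. This is an illustrative Python sketch, not NAG Library code: the function names, the acceptance threshold $0.1$, the "very successful" threshold $0.75$ and the radius halving/doubling factors are all assumed values chosen for the example.

```python
# Illustrative sketch (not NAG code) of one accept/reject decision in the
# BXNL projected trust-region loop on a box-constrained problem.

def project(x, lx, ux):
    """Euclidean projection P_Omega onto the box [lx, ux]."""
    return [min(max(xi, l), u) for xi, l, u in zip(x, lx, ux)]

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def tr_iteration(x, s, f, model, lx, ux, delta):
    """Accept or reject a proposed step s and update the radius delta."""
    x_new = project([xi + si for xi, si in zip(x, s)], lx, ux)
    # rho: actual reduction vs. the reduction predicted by the model m_k
    rho = (f(x) - f(x_new)) / max(f(x) - model(s), 1e-16)
    # tau: how severely the projection truncated the step
    tau = norm([a - b for a, b in zip(x_new, x)]) / norm(s)
    if rho >= 0.1:                     # sufficiently successful step
        return x_new, (2.0 * delta if rho > 0.75 else delta), tau
    return x, 0.5 * delta, tau         # reject and shrink the radius
```

A very successful step ($\rho$ close to $1$) doubles the radius; an unsuccessful one halves it, mirroring the radius update described in the text.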
Note: the use of the regularization term in (2) is optional and is not used by default. To enable regularization please refer to the optional parameters
${\mathbf{Bxnl\; Basereg\; Type}}$,
${\mathbf{Bxnl\; Basereg\; Pow}}$,
${\mathbf{Bxnl\; Basereg\; Term}}$, and
${\mathbf{Bxnl\; Reg\; Order}}$.
11.2 Models
A vital component of the algorithm is the choice of model employed. There are four choices available which are controlled by the optional parameter ${\mathbf{Bxnl\; Model}}$.
Gauss–Newton
This option instructs the solver to use the Gauss–Newton model. For this case, $r({x}_{k}+s)$ is replaced by its first-order Taylor approximation, $r\left({x}_{k}\right)+\nabla r{\left({x}_{k}\right)}^{\mathrm{T}}s=r\left({x}_{k}\right)+{J}_{k}s$. The model is, therefore, given by
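As an illustration, the Gauss–Newton model value ${m}_{k}^{GN}\left(s\right)=\frac{1}{2}{\Vert r\left({x}_{k}\right)+{J}_{k}s\Vert}^{2}$ can be evaluated directly from the residual vector and the Jacobian. The following Python sketch (illustrative, not NAG code) shows this for dense data.

```python
# Minimal sketch of evaluating the Gauss-Newton model
# m_k^GN(s) = 1/2 * || r(x_k) + J_k s ||^2  (names are illustrative).

def gauss_newton_model(r, J, s):
    """r: residual vector, J: Jacobian as a list of rows, s: trial step."""
    lin = [ri + sum(Jij * sj for Jij, sj in zip(row, s))
           for ri, row in zip(r, J)]
    return 0.5 * sum(v * v for v in lin)
```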
Quasi–Newton
This option instructs the solver to use a Newton-type model. For this case, the model is taken to be the second-order Taylor approximation of the objective function $f\left({x}_{k+1}\right)$. For this choice of model the gradient and Hessian are ${g}_{k}={J}_{k}^{T}r\left({x}_{k}\right)$ and ${H}_{k}={\displaystyle \sum _{i=1}^{{n}_{\mathrm{res}}}}{r}_{i}\left({x}_{k}\right){\nabla}^{2}{r}_{i}\left({x}_{k}\right)$. The model is given by
If the second derivatives of $r\left(x\right)$ are not available (i.e., the optional parameter ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{No}$), then the method approximates the matrix ${H}_{k}$. If ${\mathbf{Print\; Level}}\ge 3$, the flag ‘2’ in the iteration log will display (A), see Iteration log in Section 9.1.
Hybrid
This option instructs the solver to use the hybrid model. In practice the Gauss–Newton model tends to work well far away from the solution, whereas the Newton model performs better once it is near the minimum (particularly if the residual is large at the solution). This option tells the solver to switch between the previous two models, picking the model that is most appropriate for the step. In particular, it starts by using ${m}_{k}^{GN}$ and switches to ${m}_{k}^{\mathrm{QN}}$ when it considers itself close enough to the solution. If, in subsequent iterations, it fails to obtain a decrease in the function value, then the algorithm interprets this as not being sufficiently close to the solution and switches back to the Gauss–Newton model.
Tensor–Newton
This option instructs the solver to use the tensor model. The model is based on a second-order Taylor approximation to the residual, ${r}_{i}({x}_{k}+s)\approx {\left({t}_{k}\left(s\right)\right)}_{i}\u2254{r}_{i}\left({x}_{k}\right)+{\left({J}_{k}\right)}_{i}s+\frac{1}{2}{s}^{\mathrm{T}}{\nabla}^{2}{r}_{i}\left({x}_{k}\right)s$, where ${\left(J\right)}_{i}$ is the $i$th row of $J$. The tensor model used is
11.3 The Subproblem
The next point ${x}_{k+1}$ is estimated by finding a step, ${s}_{k}$, that minimizes the model chosen in ${\mathbf{Bxnl\; Model}}$, subject to a globalization strategy. e04ggc supports two such strategies: trust region and regularization; these can be set using the optional parameter ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{TR}$ or $\mathrm{REG}$, respectively. If ${\mathbf{Bxnl\; Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$ or $\mathrm{HYBRID}$, then the model is quadratic and the available methods to solve the subproblem are described in the next two subsections. If ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$, then the model is not quadratic and the available methods are described in Section 11.3.3.
11.3.1 Trust region method
The methods mentioned in this subsection are only used when ${\mathbf{Bxnl\; Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$ or $\mathrm{HYBRID}$ and ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{TR}$. The trust region subproblem to solve is
$${s}_{k}=\underset{s\in {\mathbb{R}}^{{n}_{\mathrm{var}}}}{\text{arg min}}\phantom{\rule{0.25em}{0ex}}{m}_{k}\left(s\right)\text{\hspace{1em} subject to \hspace{1em}}\Vert s\Vert \le {\Delta}_{k}\text{.}$$
(4)
The next step is taken to be the solution of the previous problem and the method used to solve it is selected using the optional parameter ${\mathbf{Bxnl\; Nlls\; Method}}$. The methods available are:
Powell's dogleg method
Approximates the solution to (4) by using Powell's dogleg method. This takes, as the step, a linear combination of the Gauss–Newton step and the steepest descent step.
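A minimal sketch of the dogleg construction follows (illustrative Python, not the NAG implementation): given a precomputed Gauss–Newton step and Cauchy (steepest-descent) step, the returned step follows the dogleg path truncated at the trust region boundary.

```python
# Hedged sketch of Powell's dogleg step: combine the steepest-descent
# (Cauchy) step and the Gauss-Newton step so that the result stays inside
# the trust region of radius delta.

def dogleg(s_gn, s_sd, delta):
    """s_gn: Gauss-Newton step, s_sd: Cauchy step, delta: TR radius."""
    norm = lambda v: sum(x * x for x in v) ** 0.5
    if norm(s_gn) <= delta:                  # full GN step fits: take it
        return s_gn
    if norm(s_sd) >= delta:                  # even the Cauchy step is too long
        return [delta / norm(s_sd) * x for x in s_sd]
    # walk along the dogleg path s_sd + t*(s_gn - s_sd) to the boundary:
    # solve ||s_sd + t*d||^2 = delta^2 for t in (0, 1]
    d = [g - c for g, c in zip(s_gn, s_sd)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(c * x for c, x in zip(s_sd, d))
    c0 = sum(c * c for c in s_sd) - delta * delta
    t = (-b + (b * b - 4.0 * a * c0) ** 0.5) / (2.0 * a)
    return [c + t * x for c, x in zip(s_sd, d)]
```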
Generalized eigenvalue problem (AINT)
Solves the trust region subproblem using the trust region solver of Adachi, Iwata, Nakatsukasa, and Takeda (AINT). This reformulates and solves the problem (4) as a generalized eigenvalue problem. See Adachi et al. (2015) for more details.
Moré–Sorensen Method
This method solves (4) using a variant of the Moré–Sorensen method. In particular, it implements Algorithm 7.3.6 of Conn et al. (2000).
GALAHAD's DTRS method
Solves (4) by converting the problem into the form
$${\mathrm{min}}_{q}{w}^{\mathrm{T}}q+\frac{1}{2}{q}^{\mathrm{T}}Dq\text{\hspace{1em} subject to \hspace{1em}}\Vert q\Vert \le {\Delta}_{k}\text{,}$$
where $D$ is a diagonal matrix from the eigendecomposition, $VD{V}^{\mathrm{T}}$, of the derivatives of either ${m}_{k}^{GN}\left(s\right)$ or ${m}_{k}^{QN}\left(s\right)$. The vectors $w$ and $q$ gather the rest of the transformation involving $s$ and $r\left({x}_{k}\right)$. This is solved by performing an eigendecomposition of the Hessian in the model and calling the GALAHAD function DTRS. For further details see Gould et al. (2003).
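Once the problem is in this diagonal form it reduces to a one-dimensional root find on the shift $\lambda$ in the secular equation $\Vert q(\lambda)\Vert = {\Delta}_{k}$ with ${q}_{i}(\lambda )=-{w}_{i}/({d}_{i}+\lambda )$. The sketch below (illustrative Python; GALAHAD's DTRS uses a more sophisticated and robust iteration) solves the diagonal subproblem by simple bisection.

```python
# Illustrative sketch of the diagonal trust-region subproblem:
#   min_q  w^T q + 1/2 q^T D q   subject to  ||q|| <= delta,  D diagonal.
# Find lam >= max(0, -min(D)) such that q_i = -w_i/(d_i + lam) has norm
# at most delta, via bisection on the secular equation.

def diag_trs(w, d, delta, iters=200):
    norm = lambda v: sum(x * x for x in v) ** 0.5
    q_of = lambda lam: [-wi / (di + lam) for wi, di in zip(w, d)]
    lo = max(0.0, -min(d)) + 1e-12
    if norm(q_of(lo)) <= delta:      # interior solution (approximated)
        return q_of(lo)
    hi = lo + 1.0
    while norm(q_of(hi)) > delta:    # bracket the root of ||q(lam)|| = delta
        hi *= 2.0
    for _ in range(iters):           # bisection on lam
        mid = 0.5 * (lo + hi)
        if norm(q_of(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return q_of(hi)
```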
11.3.2 Regularization
The methods mentioned in this subsection are only used when ${\mathbf{Bxnl\; Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$ or $\mathrm{HYBRID}$ and ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{REG}$. The regularized subproblem to solve is
The next step to take is the solution to the previous problem. The methods provided to solve (5) are
Solve by linear system
This option estimates the step ${s}_{k}$ by solving a shifted linear system. Currently only quadratic regularization ($p=2.0$) is supported; the regularization order can be set using the optional parameter ${\mathbf{Bxnl\; Reg\; Order}}$. The default value ${\mathbf{Bxnl\; Reg\; Order}}=\mathrm{AUTO}$ automatically selects $p=2.0$. If $p\ne 2.0$ the solver terminates with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_NO_IMPROVEMENT.
This method is used when ${\mathbf{Bxnl\; Nlls\; Method}}=\mathrm{LINEAR\; SOLVER}$.
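Assuming the Gauss–Newton model, the quadratic case $p=2.0$ amounts to solving the shifted normal equations $({J}_{k}^{\mathrm{T}}{J}_{k}+\sigma I){s}_{k}=-{J}_{k}^{\mathrm{T}}r\left({x}_{k}\right)$. The following illustrative Python sketch (not NAG code) solves the $2\times 2$ case directly by Cramer's rule.

```python
# Sketch (assumed form, not NAG source) of the 'solve by linear system'
# step for quadratic regularization p = 2:
#   (J^T J + sigma*I) s = -J^T r
# For brevity this handles the 2x2 normal-equations system directly.

def reg_step_2d(J, r, sigma):
    # form A = J^T J + sigma*I and b = -J^T r
    a11 = sum(row[0] * row[0] for row in J) + sigma
    a12 = sum(row[0] * row[1] for row in J)
    a22 = sum(row[1] * row[1] for row in J) + sigma
    b1 = -sum(row[0] * ri for row, ri in zip(J, r))
    b2 = -sum(row[1] * ri for row, ri in zip(J, r))
    det = a11 * a22 - a12 * a12          # Cramer's rule on the 2x2 system
    return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]
```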
GALAHAD's DRQS method
This method solves the regularized problem by first converting it to the form
where $D$ is a diagonal matrix from the eigendecomposition, $VD{V}^{\mathrm{T}}$, of the derivatives of either
${m}_{k}^{GN}\left(s\right)$ or
${m}_{k}^{QN}\left(s\right)$. The vectors $w$ and $q$ gather the rest of the transformation involving $s$ and
$r\left({x}_{k}\right)$,
and $p$ is the regularization order chosen using the optional parameter ${\mathbf{Bxnl\; Reg\; Order}}$.
The problem is solved by performing an eigendecomposition of the Hessian in the model and calling the GALAHAD function DRQS. For further details see Gould et al. (2003).
11.3.3 Tensor Newton subproblem
This section describes the regularized methods used to solve the non-quadratic tensor model (3) subproblem, i.e., the step subproblem when ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$. The schemes implemented find the next step by solving
Note that (6) is also a sum-of-squares problem and, as such, can be solved by recursively calling e04ggc. In this context, the iterations performed by the recursive call to the solver are called inner iterations, otherwise they are called regular or outer iterations. When ${\mathbf{Print\; Level}}\ge \mathrm{3}$, the iteration type is shown under the flag ‘I’ of the ‘iteration flags’ column while the inner iteration count is shown under the column ‘inn it’ of the Iteration log (see Section 9.1). The method used to solve (6) can be chosen by the optional parameter ${\mathbf{Bxnl\; Tn\; Method}}$ and the implemented methods are:
Implicit solve
This method recursively solves problem (6) using a quadratic model. The GALAHAD function DRQS is used to estimate the next step by solving the (implicitly) regularized quadratic subproblem
where ${m}_{k}^{\mathrm{QN}}\left(s\right)$ is a quasi-Newton model of (3) and the regularization power is $p=2$. The problem can be viewed as having two regularization terms, $\sigma =1/{\Delta}_{k}$ is the (fixed) regularization term from the outer iteration and $\hat{\sigma}=1/{\delta}_{k}$ is the regularization term of the inner iteration which is free to be updated as required by the solver.
This method is used when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{IMPLICIT}$.
Tacit solve with ${n}_{\mathrm{var}}$ additional terms
This method recursively solves problem (6) using a hybrid model, tacitly reformulating the problem to incorporate ${n}_{\mathrm{var}}$ additional residuals (see Section 11.3.4). This is implemented by internally setting
${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-NVAR-DOF}$,
${\mathbf{Bxnl\; Basereg\; Pow}}=2.0$ and the
${\mathbf{Bxnl\; Basereg\; Term}}=1/{\Delta}_{k}$. The subproblem to determine the next step has ${n}_{\mathrm{var}}+{n}_{\mathrm{res}}$ parameters and is of the form
This subproblem is solved using GALAHAD function DRQS and is used when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{MIN-NVAR}$.
Tacit solve with one additional term
This method recursively solves problem (6) using a hybrid model, tacitly reformulating the problem to incorporate one additional residual (see Section 11.3.4). This is implemented by internally setting
${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-1-DOF}$,
${\mathbf{Bxnl\; Basereg\; Pow}}=3.0$ and the
${\mathbf{Bxnl\; Basereg\; Term}}=1/{\Delta}_{k}$. As in the previous method, the subproblem to determine the next step is solved using GALAHAD function DRQS.
This method is used when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{MIN-1-VAR}$.
Explicit solve with ${n}_{\mathrm{var}}$ additional terms
This method expands the search space with ${n}_{\mathrm{var}}$ additional parameters and recursively solves a regularized variant of problem (6) using a hybrid model (see Section 11.3.4). This is implemented by internally expanding the search space and setting
${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-NVAR-DOF}$,
${\mathbf{Bxnl\; Basereg\; Pow}}=2.0$ and the
${\mathbf{Bxnl\; Basereg\; Term}}=1/{\Delta}_{k}$. Analogous to the previous methods, the subproblem to determine the next step is solved using GALAHAD function DRQS.
This method is used when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{ADD-NVAR}$.
Explicit solve with one additional term
This method expands the search space with one additional parameter and recursively solves a regularized variant of problem (6) using a hybrid model (see Section 11.3.4). This is implemented by internally adding an additional residual term and setting
${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-1-DOF}$,
${\mathbf{Bxnl\; Basereg\; Pow}}=3.0$ and the
${\mathbf{Bxnl\; Basereg\; Term}}=1/{\Delta}_{k}$. Analogous to the previous methods, the subproblem to determine the next step is solved using GALAHAD function DRQS.
This method is used when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{ADD-1-VAR}$.
11.3.4 Incorporating the regularizer
The method used to incorporate the regularization specified by $\sigma $ and $p$ in problem (2) is defined using the optional parameter ${\mathbf{Bxnl\; Basereg\; Type}}$. The implemented choices are:
None
Sets $\sigma =0$ and solves the non-regularized variant of the problem; this is the default.
Reformulation using ${n}_{\mathrm{var}}$ DoF
Solves a nonlinear least squares problem with ${n}_{\mathrm{var}}$ additional degrees of freedom.
The new residual objective function, $\hat{r}:{\mathbb{R}}^{{n}_{\mathrm{var}}}\to {\mathbb{R}}^{{n}_{\mathrm{res}}+{n}_{\mathrm{var}}}$, is defined as
where ${\left(x\right)}_{j}$ denotes the $j$th component of the iterate $x$ and $\sigma $ is provided using optional parameter ${\mathbf{Bxnl\; Basereg\; Term}}$. This option requires that the (base) regularization power $p$ in (2) be $2.0$, i.e., ${\mathbf{Bxnl\; Basereg\; Pow}}=\mathrm{2.0}$ (the default value).
This method is used when ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-NVAR-DOF}$.
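The effect of this reformulation for $p=2.0$ can be checked numerically: appending the ${n}_{\mathrm{var}}$ extra residuals $\sqrt{\sigma}{\left(x\right)}_{j}$ makes the plain sum of squares of $\hat{r}$ equal to the regularized objective. Illustrative Python sketch (the helper names are not NAG identifiers):

```python
# Hedged sketch of the EXPAND-NVAR-DOF reformulation: append n_var extra
# residuals sqrt(sigma)*(x)_j so that the regularized objective
#   1/2 ||r(x)||^2 + sigma/2 ||x||^2
# becomes a plain (unregularized) sum of squares.

def expand_nvar(r, x, sigma):
    """Return the expanded residual vector r_hat of length n_res + n_var."""
    return r + [sigma ** 0.5 * xj for xj in x]

def sum_sq(v):
    """0.5 * sum of squares of a vector."""
    return 0.5 * sum(vi * vi for vi in v)
```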
Reformulation adding 1 DoF
Solves a nonlinear least squares problem with one additional degree of freedom. The new residual objective function, $\overline{r}:{\mathbb{R}}^{{n}_{\mathrm{var}}}\to {\mathbb{R}}^{{n}_{\mathrm{res}}+1}$, is defined as
Analogous to the previous case, $\sigma $ is defined using the optional parameter ${\mathbf{Bxnl\; Basereg\; Term}}$. When using this option, it is recommended that the (base) regularization power $p$ in (2) be $3.0$, i.e., ${\mathbf{Bxnl\; Basereg\; Pow}}=\mathrm{3.0}$.
This method is used when ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-1-DOF}$.
11.4 Bound Constraints
e04ggc handles the bound constraints by projecting candidate points into the feasible set. The implemented framework is an adaptation of Algorithm 3.12 described in Kanzow et al. (2004), where the Levenberg–Marquardt step is replaced by a trust region (TR) step. The framework consists of three major steps. It first attempts a projected TR step; if that is unsuccessful, it attempts a Wolfe-type line search step along the projected TR step direction; otherwise, it defaults to a projected gradient step with an Armijo-type line search. Specifically,
TR step
The trust region loop needs to be interrupted if the proposed steps, ${s}_{k}$, lead to points outside of the feasible set, i.e., if they are orthogonal with respect to the active bounds. This is monitored by the ratio ${\tau}_{k}=\frac{\Vert {P}_{\Omega}({x}_{k}+{s}_{k})-{x}_{k}\Vert}{\Vert {s}_{k}\Vert}$, where ${P}_{\Omega}$ is the Euclidean projection operator over the feasible set. $\tau $ provides a convenient way to assess how severe the projection is: if $\tau \approx 0$ then the step, ${s}_{k}$, is indeed orthogonal to the active space and does not provide a suitable search direction, so the loop is terminated. On the contrary, if $\tau \approx 1$ then ${s}_{k}$ has components that are not orthogonal to the active set and that can be explored.
The TR step is taken when it is deemed that it makes enough progress in decreasing the error.
LS step
This step is attempted when the TR step is unsuccessful but the direction ${d}_{k}^{LS}={P}_{\Omega}({x}_{k}+{s}_{k})-{x}_{k}$ is considered a descent direction and a viable search direction in the sense that
with ${s}_{k}$ the TR step, $\rho >0$ and $\nu >1$. A weak Wolfe-type line search along this direction is performed to find the next point. During the line search the intermediate candidates are projected into the feasible set and kept feasible; for details see Section 11.3 in e04kfc.
PG step
The projected gradient (PG) step is only taken if both the TR step and the LS step were unsuccessful.
It consists of an Armijo-type line search along the projected gradient direction, ${d}_{k}^{PG}={P}_{\Omega}({x}_{k}-\nabla f\left({x}_{k}\right))-{x}_{k}$; for more details on this method refer to Section 11.2 in e04kfc.
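The PG fallback can be sketched as follows (illustrative Python, not NAG code; the Armijo constant ${10}^{-4}$ and backtracking factor $0.5$ are assumed example values):

```python
# Sketch of the PG fallback step: Armijo backtracking along the projected
# gradient direction d = P_Omega(x - grad f(x)) - x, keeping all trial
# points feasible by projecting them onto the box [lx, ux].

def pg_step(x, grad, f, lx, ux, c1=1e-4, shrink=0.5, max_iter=30):
    proj = lambda v: [min(max(vi, l), u) for vi, l, u in zip(v, lx, ux)]
    d = [pi - xi for pi, xi in
         zip(proj([xi - gi for xi, gi in zip(x, grad)]), x)]
    slope = sum(gi * di for gi, di in zip(grad, d))  # directional derivative
    alpha, fx = 1.0, f(x)
    for _ in range(max_iter):
        trial = proj([xi + alpha * di for xi, di in zip(x, d)])
        if f(trial) <= fx + c1 * alpha * slope:      # Armijo condition
            return trial
        alpha *= shrink                              # backtrack
    return x                                         # line search failed
```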
11.5 Stopping Criteria
The solver considers that it has found a solution and stops when at least one of the following three conditions is met within the defined absolute or relative tolerances
$({\epsilon}_{\mathrm{abs}}^{f}>0,{\epsilon}_{\mathrm{rel}}^{f}>0,{\epsilon}_{\mathrm{abs}}^{g}>0,{\epsilon}_{\mathrm{rel}}^{g}>0,{\epsilon}_{\mathrm{step}}>0)$,
where ${d}_{k}^{PG}$ is the projected gradient (see PG step in Section 11.4) and is reported in the column optim of the output, while the left-hand side of (8) is reported in the column rel optim; see Iteration log in Section 9.1.
If the problem is unconstrained, then the projected gradient reduces to the gradient and the convergence tests are done over the gradient norm. The stopping tolerances can be changed using the optional parameters
${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Fun}}$,
${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Grd}}$,
${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Fun}}$,
${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Grd}}$, and
${\mathbf{Bxnl\; Stop\; Step\; Tol}}$; see Section 12 for details. If these parameters are set too small in relation to the complexity and scaling of the problem, the function can terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_NO_IMPROVEMENT, NE_TOO_MANY_ITER or NE_TOO_MANY_MINOR_ITER.
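Since equations (7)–(9) are not reproduced here, the sketch below shows one plausible form of such tests (illustrative Python only; the exact tests used by e04ggc may combine the absolute and relative tolerances differently):

```python
# Hedged sketch of three stopping tests of the kind described above:
# an error (objective) test, a projected-gradient test and a step test.
# The eps_* arguments mirror the Bxnl Stop * optional parameters in name
# only; the combination via max() is an assumption for illustration.

def converged(f_k, f_0, pg_norm, g0_norm, step_norm,
              eps_f_abs, eps_f_rel, eps_g_abs, eps_g_rel, eps_step):
    small_error = f_k <= max(eps_f_abs, eps_f_rel * f_0)          # (7)-like
    small_grad = pg_norm <= max(eps_g_abs, eps_g_rel * g0_norm)   # (8)-like
    small_step = step_norm <= eps_step                            # (9)-like
    return small_error or small_grad or small_step
```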
11.6 A Note About Lagrangian Multipliers
It is often useful to have access to the Lagrangian multipliers (dual variables) associated with the constraints if there are any defined. In the case where only simple bounds are present, the multipliers directly relate to the values of the gradient at the solution. The multipliers of the active bounds are the absolute values of the associated elements of the gradient. The multipliers of the inactive bounds are always zero.
The multipliers based on the final gradient value can be retrieved by calling e04rxc with the command string cmdstr$=$Dual Variables. The format is the same as for other functions, see Section 3.1 in e04svc.
Note that if the problem has not fully converged, the provided approximation might be quite crude.
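The relationship described above can be sketched as follows (illustrative Python, not NAG code; the activity tolerance is an assumed example value):

```python
# Sketch of recovering bound-constraint multipliers from the final
# gradient: active bounds get |g_i|, inactive bounds get 0.

def bound_multipliers(x, grad, lx, ux, tol=1e-8):
    mult = []
    for xi, gi, l, u in zip(x, grad, lx, ux):
        active = abs(xi - l) <= tol or abs(xi - u) <= tol
        mult.append(abs(gi) if active else 0.0)
    return mult
```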
12 Optional Parameters
Several optional parameters in e04ggc define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04ggc these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmc anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords, where the minimum abbreviation of each keyword is underlined;
a parameter value,
where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see X02AJC).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Bxnl Basereg Pow
$r$
Default $=2.0$
This parameter defines the regularization power $p$ in (2) and in the tensor Newton subproblem (when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{IMPLICIT}$). Some values are restricted depending on the type of regularization specified, see
${\mathbf{Bxnl\; Basereg\; Type}}$ for more details.
Constraint: ${\mathbf{Bxnl\; Basereg\; Pow}}>0$.
Bxnl Basereg Term
$r$
Default $=0.01$
This parameter defines the regularization term $\sigma $ in (2) and in the tensor Newton subproblem (when ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{IMPLICIT}$).
Constraint: ${\mathbf{Bxnl\; Basereg\; Term}}>0$.
Bxnl Basereg Type
$a$
Default $=\mathrm{NONE}$
This parameter specifies the method used to incorporate the regularizer into (2) and optionally into the tensor Newton subproblem (when ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$ and ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{IMPLICIT}$).
The option ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-NVAR-DOF}$ reformulates the original problem by expanding it with ${n}_{\mathrm{var}}$ additional degrees of freedom; the expanded problem is subsequently solved. For the case ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-1-DOF}$ the residual vector is extended with a new term of the form $\frac{\sigma}{p}{\Vert x\Vert}_{2}^{p}$; for this method a value of $p=3$ is recommended.
If ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{EXPAND-NVAR-DOF}$ then the regularization power term $p$ must be $2.0$, that is ${\mathbf{Bxnl\; Basereg\; Pow}}=\mathrm{2.0}$. For further details see Section 11.3.
Constraint: ${\mathbf{Bxnl\; Basereg\; Type}}=\mathrm{NONE}$, $\mathrm{EXPAND-NVAR-DOF}$ or $\mathrm{EXPAND-1-DOF}$.
Bxnl Save Covariance Matrix
$a$
Default $=\mathrm{NO}$
This parameter instructs the solver to store the covariance matrix in the handle.
If ${\mathbf{Bxnl\; Save\; Covariance\; Matrix}}=\mathrm{YES}$ then
the lower triangle part of the covariance matrix is stored in packed column order
(see Section 3.4.2 in the F07 Chapter Introduction) into the handle and can be retrieved via e04rxc using
${\mathbf{cmdstr}}=\mathrm{COVARIANCE\; MATRIX}$ with
${\mathbf{lrarr}}=({n}_{\mathrm{var}}\times ({n}_{\mathrm{var}}+1))/2$.
In the special case where ${\mathbf{Bxnl\; Save\; Covariance\; Matrix}}=\mathrm{VARIANCE}$,
only the diagonal elements of the covariance matrix are stored in the handle and can be retrieved via e04rxc using
${\mathbf{cmdstr}}=\mathrm{VARIANCE}$
with
${\mathbf{lrarr}}={n}_{\mathrm{var}}$.
Similarly, if
${\mathbf{Bxnl\; Save\; Covariance\; Matrix}}=\mathrm{HESSIAN}$ then the lower triangle part of the matrix
$H\left(x\right)=\nabla r\left(x\right){\nabla r\left(x\right)}^{\mathrm{T}}={J\left(x\right)}^{\mathrm{T}}J\left(x\right)$
is stored in packed column order into the handle and can be retrieved via e04rxc using
${\mathbf{cmdstr}}=\mathrm{HESSIAN\; MATRIX}$ with
${\mathbf{lrarr}}=({n}_{\mathrm{var}}\times ({n}_{\mathrm{var}}+1))/2$.
Limitations: If the number of enabled residuals is not greater than the number of enabled variables, or
the pseudo-inverse of $H\left(x\right)$ could not be calculated, then the
covariance matrix (variance vector) is not stored in the handle and will not be available.
For more information on how the covariance matrix is estimated, see e04ycc.
Constraint: ${\mathbf{Bxnl\; Save\; Covariance\; Matrix}}=\mathrm{NO}$, $\mathrm{YES}$, $\mathrm{VARIANCE}$ or $\mathrm{HESSIAN}$.
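The packed column order used for ${\mathbf{lrarr}}$ can be illustrated with a small indexing helper (illustrative Python; the storage scheme itself is the standard lower-triangular packed column order of Section 3.4.2 in the F07 Chapter Introduction):

```python
# Sketch of locating element (i, j), i >= j (0-based), of an n x n matrix
# whose lower triangle is stored in packed column order: column j starts
# after j columns of decreasing length, i.e., at offset j*n - j*(j-1)/2,
# and the whole array has length n*(n+1)/2 (the lrarr value above).

def packed_index(i, j, n):
    """0-based (i, j) with i >= j -> index into the packed array."""
    return j * n - j * (j - 1) // 2 + (i - j)
```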
Bxnl Stop Abs Tol Fun
$r$
Default $\text{}=2.2{\epsilon}^{\frac{1}{3}}$
This parameter specifies the absolute tolerance for the error test, specifically, it sets the value of ${\epsilon}_{\mathrm{abs}}^{f}$ of equation (7) in Section 11.5. Setting ${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Fun}}$ to a large value may cause the solver to stop prematurely with a suboptimal solution.
Bxnl Stop Abs Tol Grd
$r$
This parameter specifies the absolute tolerance for the gradient test, specifically, it sets the value of ${\epsilon}_{\mathrm{abs}}^{g}$ of equation (8) in Section 11.5. Setting ${\mathbf{Bxnl\; Stop\; Abs\; Tol\; Grd}}$ to a large value may cause the solver to stop prematurely with a suboptimal solution.
Bxnl Stop Rel Tol Fun
$r$
This parameter specifies the relative tolerance for the error test, specifically, it sets the value of ${\epsilon}_{\mathrm{rel}}^{f}$ of equation (7) in Section 11.5. Setting ${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Fun}}$ to a large value may cause the solver to stop prematurely with a suboptimal solution.
Bxnl Stop Rel Tol Grd
$r$
This parameter specifies the relative tolerance for the gradient test, specifically, it sets the value of ${\epsilon}_{\mathrm{rel}}^{g}$ of equation (8) in Section 11.5. Setting ${\mathbf{Bxnl\; Stop\; Rel\; Tol\; Grd}}$ to a large value may cause the solver to stop prematurely with a suboptimal solution.
Bxnl Stop Step Tol
$r$
This parameter specifies the stopping tolerance for the step length test, specifically, it sets the value for ${\epsilon}_{\mathrm{step}}$ of equation (9) in Section 11.5. Setting ${\mathbf{Bxnl\; Stop\; Step\; Tol}}$ to a large value may cause the solver to stop prematurely with a suboptimal solution.
Under certain circumstances, e.g., when the quality of the first- or second-order derivatives is in doubt, if the solver exits with a successful step length test it is recommended to verify that either the error or the gradient norm is acceptably small.
Bxnl Reg Order
$a$
Default $=\mathrm{AUTO}$
This parameter specifies the order of the regularization $p$ in (5) used when ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{REG}$.
Some values for $p$ are restricted depending on the method chosen in ${\mathbf{Bxnl\; Nlls\; Method}}$, see Section 11.3.2 for more details.
Constraint: ${\mathbf{Bxnl\; Reg\; Order}}=\mathrm{AUTO}$, $\mathrm{QUADRATIC}$ or $\mathrm{CUBIC}$.
Bxnl Glob Method
$a$
Default $=\mathrm{TR}$
This parameter specifies the globalization method used to estimate the next step ${s}_{k}$. It also determines the class of subproblem to solve. The trust region subproblem finds the step by minimizing the specified model within a given radius. On the other hand, when ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{REG}$, the problem is reformulated by adding an additional regularization term and minimized in order to find the next step ${s}_{k}$. See Section 11.3 for more details.
Constraint: ${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{TR}$ or $\mathrm{REG}$.
Bxnl Nlls Method
$a$
Default $=\mathrm{GALAHAD}$
This parameter defines the method used to estimate the next step ${s}_{k}$ in ${x}_{k+1}={x}_{k}+{s}_{k}$. It only applies to ${\mathbf{Bxnl\; Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$ or $\mathrm{HYBRID}$. When the chosen globalization technique is trust region (${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{TR}$) the methods available for ${\mathbf{Bxnl\; Nlls\; Method}}$ are Powell's dogleg method, a generalized eigenvalue method (AINT) Adachi et al. (2015), a variant of the Moré–Sorensen method, and GALAHAD's DTRS method. Otherwise, when the chosen globalization method is regularization (${\mathbf{Bxnl\; Glob\; Method}}=\mathrm{REG}$) the available methods are a linear system solver and GALAHAD's DRQS method. See Section 11.3 for more details.
Constraint: ${\mathbf{Bxnl\; Nlls\; Method}}=\mathrm{POWELL-DOGLEG}$, $\mathrm{AINT}$, $\mathrm{MORE-SORENSEN}$, $\mathrm{LINEAR\; SOLVER}$ or $\mathrm{GALAHAD}$.
Bxnl Model
$a$
Default $=\mathrm{HYBRID}$
This parameter specifies which model is used to approximate the objective function and estimate the next point that reduces the error. This is one of the most important optional parameters and should be chosen according to the problem characteristics. The models are briefly described in Section 11.2.
Constraint: ${\mathbf{Bxnl\; Model}}=\mathrm{GAUSS-NEWTON}$, $\mathrm{QUASI-NEWTON}$, $\mathrm{HYBRID}$ or $\mathrm{TENSOR-NEWTON}$.
Bxnl Tn Method
$a$
Default $=\mathrm{MIN-1-VAR}$
This parameter specifies how to solve the subproblem and find the next step ${s}_{k}$ for the tensor Newton model, ${\mathbf{Bxnl\; Model}}=\mathrm{TENSOR-NEWTON}$. The subproblems are solved using a range of regularization schemes. See Section 11.3.3.
Constraint: ${\mathbf{Bxnl\; Tn\; Method}}=\mathrm{IMPLICIT}$, $\mathrm{MIN-1-VAR}$, $\mathrm{MIN-NVAR}$, $\mathrm{ADD-1-VAR}$ or $\mathrm{ADD-NVAR}$.
Bxnl Use Second Derivatives
$a$
Default $=\mathrm{NO}$
This parameter indicates whether the weighted sum of residual Hessians is available through the call-back lsqhes. If ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{NO}$ and the model specified in ${\mathbf{Bxnl\; Model}}$ requires user-supplied second derivatives, then the solver will terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_DERIV_ERRORS.
Constraint: ${\mathbf{Bxnl\; Use\; Second\; Derivatives}}=\mathrm{YES}$ or $\mathrm{NO}$.
Bxnl Use Weights
$a$
Default $=\mathrm{NO}$
This parameter indicates whether to use a weighted nonlinear least squares model. If ${\mathbf{Bxnl\; Use\; Weights}}=\mathrm{YES}$ then the weights ${w}_{i}>0,i=1,\dots ,{n}_{\mathrm{res}}$ in (2) must be supplied by you via e04rxc. If weights are to be used, then all ${n}_{\mathrm{res}}$ elements must be provided, see Section 9.2. If the solver is expecting to use weights but they are not provided or have non-positive values, then the solver will terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_DC_MATCH.
Constraint: ${\mathbf{Bxnl\; Use\; Weights}}=\mathrm{YES}$ or $\mathrm{NO}$.
Bxnl Iteration Limit
$i$
Default $=1000$
This parameter specifies the maximum number of major iterations the solver is allotted. If this limit is reached, then the solver will terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_TOO_MANY_ITER.
Infinite Bound Size
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Monitoring File
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If $i\ge 0$, the
Nag_FileID number (as returned from x04acc) for the secondary (monitoring) output. If ${\mathbf{Monitoring\; File}}=\mathrm{-1}$, no secondary output is provided. The information output to this file ID is controlled by ${\mathbf{Monitoring\; Level}}$.
Monitoring Level
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as for ${\mathbf{Print\; Level}}$.
Print File
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If $i\ge 0$, the
Nag_FileID number (as returned from x04acc, stdout as the default) for the primary output of the solver. If ${\mathbf{Print\; File}}=\mathrm{-1}$, the primary output is completely turned off independently of other settings. The information output to this unit is controlled by ${\mathbf{Print\; Level}}$.
Print Level
This parameter defines how detailed information should be printed by the solver to the primary and secondary output.
$\mathit{i}$
Output
$0$
No output from the solver.
$1$
The Header and Summary.
$2$, $3$, $4$, $5$
Additionally, the Iteration log.
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Print Options
$a$
Default $=\mathrm{YES}$
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary output and is always printed to the secondary output.
Constraint: ${\mathbf{Print\; Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
Print Solution
$a$
Default $=\mathrm{NO}$
If ${\mathbf{Print\; Solution}}=\mathrm{X}$, the final values of the primal variables are printed on the primary and secondary outputs.
If ${\mathbf{Print\; Solution}}=\mathrm{YES}$ or $\mathrm{ALL}$, in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.
Constraint: ${\mathbf{Print\; Solution}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{X}$ or $\mathrm{ALL}$.
Stats Time
$a$
Default $=\mathrm{NO}$
This parameter turns on timing. This may be helpful when comparing different solving approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL\; CLOCK}$.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL\; CLOCK}$.
Time Limit
$r$
Default $\text{}={10}^{6}$
A limit on the number of seconds that the solver can use to solve one problem. If at the end of an iteration this limit is exceeded, the solver will terminate with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_TIME_LIMIT.