# NAG FL Interface: e05kbf (handle_solve_mcs)

Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm, and to Section 12 for a detailed description of the specification of the optional parameters.


## 1 Purpose

e05kbf is designed to find the global minimum or maximum of an arbitrary function, subject to simple bound constraints, using a multi-level coordinate search method. Derivatives are not required, but convergence is only guaranteed if the objective function is continuous in a neighbourhood of a global optimum. It is not intended for large problems.

## 2 Specification

Fortran Interface
```
Subroutine e05kbf ( handle, objfun, monit, nvar, x, rinfo, stats, &
                    iuser, ruser, cpuser, ifail)
Integer, Intent (In)               :: nvar
Integer, Intent (Inout)            :: iuser(*), ifail
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out)   :: x(nvar), rinfo(100), stats(100)
Type (c_ptr), Intent (In)          :: handle, cpuser
External                           :: objfun, monit
```
C Header Interface
```
#include <nag.h>
void e05kbf_ (void **handle,
              void (NAG_CALL *objfun)(const Integer *nvar, const double x[],
                                      double *fx, Integer *inform,
                                      Integer iuser[], double ruser[],
                                      void **cpuser),
              void (NAG_CALL *monit)(const Integer *nvar, const double x[],
                                     Integer *inform, const double rinfo[],
                                     const double stats[], Integer iuser[],
                                     double ruser[], void **cpuser),
              const Integer *nvar, double x[], double rinfo[],
              double stats[], Integer iuser[], double ruser[],
              void **cpuser, Integer *ifail)
```
The routine may be called by the names e05kbf or nagf_glopt_handle_solve_mcs.

## 3 Description

e05kbf is aimed at minimizing a nonlinear objective function subject to bound constraints:
 $\underset{x\in\mathbb{R}^n}{\text{minimize}}\; f(x) \quad \text{subject to} \quad l_x \le x \le u_x .$
Here $f$ is a smooth nonlinear function and ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors defining bounds on the variables.
e05kbf serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the NAG optimization modelling suite. To define a compatible problem handle, you must call e04raf to initialize it followed, optionally, by e04rgf, e04ref or e04rff to define the objective function and e04rhf to define bounds on the variables. If e04rhf is not called, all the variables will be considered free by the solver. It should be noted that e05kbf always assumes that the gradient of the objective is dense; therefore, defining a sparse structure for the residuals in the call to e04rgf will have no effect. Additionally, the multi-level coordinate search method used by this solver relies on dividing the feasible space into ‘boxes’ (see Section 11 for a more thorough explanation), so it is advisable to define reasonable bounds for the variables using e04rhf. See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see Section 12) which can be set by e04zmf and e04zpf at any time between the initialization of the handle by e04raf and a call to the solver. The names of the optional parameters specific to this solver start with the prefix MCS (Multi-level Coordinate Search). The default values for these optional parameters are chosen to work well in the general case, but it is recommended that you tune them to your particular problem. In particular, if the objective function is known to be numerically difficult, it could be desirable to define a customized initialization list for the algorithm with the optional parameter MCS Initialization Method. For more details on how to create a custom initialization list, please refer to Section 9.2. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various initialization lists and/or optional parameters.
The method used by e05kbf is based on MCS, the Multi-level Coordinate Search method described in Huyer and Neumaier (1999), and the algorithm it uses is described in detail in Section 11.

## 4 References

Huyer W and Neumaier A (1999) Global optimization by multi-level coordinate search Journal of Global Optimization 14 331–355

## 5 Arguments

1: $\mathbf{handle}$Type (c_ptr) Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04raf) and to hold a problem formulation compatible with e05kbf. It must not be changed between calls to the NAG optimization modelling suite.
2: $\mathbf{objfun}$Subroutine, supplied by the NAG Library or the user. External Procedure
objfun must calculate the value of the nonlinear objective function $f\left(x\right)$ at a specified point $x$. If there is no nonlinear objective (e.g., e04ref or e04rff was called to define a linear or quadratic objective function), objfun will never be called by e05kbf and may be the dummy routine e04kfv (included in the NAG Library).
The specification of objfun is:
Fortran Interface
```
Subroutine objfun ( nvar, x, fx, inform, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out)   :: fx
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void objfun (const Integer *nvar, const double x[], double *fx,
             Integer *inform, Integer iuser[], double ruser[],
             void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: $n$, the current number of decision variables $x$ in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: the vector $x$ of variable values at which the objective function is to be evaluated.
3: $\mathbf{fx}$Real (Kind=nag_wp) Output
On exit: the value of the objective function at $x$.
4: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that the requested objective value could not be computed. Specifically, it can be set to a negative value:
${\mathbf{inform}}<0$
The solver will cleanly exit with ${\mathbf{ifail}}={\mathbf{20}}$ and return the best available point as well as the solve statistics.
5: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
6: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
7: $\mathbf{cpuser}$Type (c_ptr) User Workspace
objfun is called with the arguments iuser, ruser and cpuser as supplied to e05kbf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to objfun.
objfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e05kbf is called. Arguments denoted as Input must not be changed by this procedure.
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05kbf. If your code inadvertently does return any NaNs or infinities, e05kbf is likely to produce unexpected results.
3: $\mathbf{monit}$Subroutine, supplied by the NAG Library or the user. External Procedure
monit is provided to enable you to monitor the progress of the optimization and, optionally, to terminate the solver early if necessary. It is invoked at the end of every $i$th step, where $i$ is given by the optional parameter MCS Monitor Frequency (with the default value of $0$, monit is not called). A step is complete when the procedure in which a sub-box is considered for splitting finishes correctly.
monit may be the dummy subroutine e04kfu (included in the NAG Library).
The specification of monit is:
Fortran Interface
```
Subroutine monit ( nvar, x, inform, rinfo, stats, iuser, ruser, cpuser)
Integer, Intent (In)               :: nvar
Integer, Intent (Inout)            :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In)    :: x(nvar), rinfo(100), stats(100)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Type (c_ptr), Intent (In)          :: cpuser
```
C Header Interface
```
void monit (const Integer *nvar, const double x[], Integer *inform,
            const double rinfo[], const double stats[],
            Integer iuser[], double ruser[], void **cpuser)
```
1: $\mathbf{nvar}$Integer Input
On entry: $n$, the current number of decision variables $x$ in the model.
2: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Input
On entry: the vector $x$ of decision variables at the current iteration.
3: $\mathbf{inform}$Integer Input/Output
On entry: a non-negative value.
On exit: may be used to request the solver to stop immediately. Specifically, if ${\mathbf{inform}}<0$ the solver will terminate immediately with ${\mathbf{ifail}}={\mathbf{20}}$; otherwise, the solver will proceed normally.
4: $\mathbf{rinfo}\left(100\right)$Real (Kind=nag_wp) array Input
On entry: error measures and various indicators at the end of the current iteration as described in rinfo.
5: $\mathbf{stats}\left(100\right)$Real (Kind=nag_wp) array Input
On entry: solver statistics at the end of the current iteration as described in stats.
6: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
7: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
8: $\mathbf{cpuser}$Type (c_ptr) User Workspace
monit is called with the arguments iuser, ruser and cpuser as supplied to e05kbf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to monit.
monit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e05kbf is called. Arguments denoted as Input must not be changed by this procedure.
Note: monit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05kbf. If your code inadvertently does return any NaNs or infinities, e05kbf is likely to produce unexpected results.
4: $\mathbf{nvar}$Integer Input
On entry: $n$, the current number of decision variables $x$ in the model.
5: $\mathbf{x}\left({\mathbf{nvar}}\right)$Real (Kind=nag_wp) array Output
On exit: the final values of the variables, $x$.
6: $\mathbf{rinfo}\left(100\right)$Real (Kind=nag_wp) array Output
On exit: error measures and various indicators at the end of the final iteration as given in the table below:
- $1$: objective function value $f\left(x\right)$;
- $2$: number of sweeps;
- $3$–$100$: reserved for future use.
7: $\mathbf{stats}\left(100\right)$Real (Kind=nag_wp) array Output
On exit: solver statistics at the end of the final iteration as given in the table below:
- $1$: number of calls to the objective function;
- $2$: total time spent in the solver (including time spent evaluating the objective);
- $3$: total time spent evaluating the objective function;
- $4$: number of sub-boxes;
- $5$: number of splits;
- $6$: cumulative number of splits by initialization list;
- $7$: the current lowest level containing no split boxes;
- $8$–$100$: reserved for future use.
8: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
9: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
10: $\mathbf{cpuser}$Type (c_ptr) User Workspace
iuser, ruser and cpuser are not used by e05kbf, but are passed directly to objfun and monit and may be used to pass information to these routines. If you do not need to reference cpuser, it should be initialized to c_null_ptr.
11: $\mathbf{ifail}$Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not.
If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $-1$ is recommended since useful values can be provided in some output arguments even when ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e05kbf may return useful information.
${\mathbf{ifail}}=1$
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
${\mathbf{ifail}}=2$
The problem is already being solved.
This solver does not support the model defined in the handle.
${\mathbf{ifail}}=3$
A finite initialization list could not be computed internally. Consider reformulating the bounds on the problem, try providing your own initialization list (${\mathbf{MCS Initialization Method}}=\mathrm{CUSTOM}$), use the randomization option (${\mathbf{MCS Initialization Method}}=\mathrm{RANDOM}$) or vary the value of Infinite Bound Size.
The user-supplied initialization list contained infinite values, as determined by the optional parameter Infinite Bound Size.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{nvar}}=⟨\mathit{\text{value}}⟩$, expected $\mathrm{value}=⟨\mathit{\text{value}}⟩$.
Constraint: nvar must match the current number of variables of the model in the handle.
${\mathbf{ifail}}=5$
On entry, user-supplied $\mathit{initpt}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{initpt}\left(\mathit{i}\right)\ge 1$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{initpt}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$ and $\mathit{sdlist}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{initpt}\left(\mathit{i}\right)\le \mathit{sdlist}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{list}\left(i,\mathit{j}\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$, $\mathit{j}=⟨\mathit{\text{value}}⟩$, and $\mathit{bl}\left(i\right)=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{list}\left(\mathit{i},\mathit{j}\right)\ge \mathit{bl}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$ and $\mathit{j}=1,2,\dots ,\mathit{numpts}\left(\mathit{i}\right)$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{list}\left(i,\mathit{j}\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$, $\mathit{j}=⟨\mathit{\text{value}}⟩$, and $\mathit{bu}\left(i\right)=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{list}\left(i,\mathit{j}\right)\le \mathit{bu}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$ and $\mathit{j}=1,2,\dots ,\mathit{numpts}\left(\mathit{i}\right)$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{numpts}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{numpts}\left(\mathit{i}\right)\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{numpts}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$ and $\mathit{sdlist}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{x}}\left(i\right)$ is not fixed then $\mathit{numpts}\left(\mathit{i}\right)\le \mathit{sdlist}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied section $\mathit{list}\left(i,1:\mathit{numpts}\left(i\right)\right)$ contained $\mathit{ndist}$ distinct elements, and $\mathit{ndist}<\mathit{numpts}\left(i\right)$: $\mathit{ndist}=⟨\mathit{\text{value}}⟩$, $\mathit{numpts}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$.
For more details, please refer to Section 9.2.
On entry, user-supplied section $\mathit{list}\left(i,1:\mathit{numpts}\left(i\right)\right)$ was not in ascending order: $\mathit{numpts}\left(i\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$.
For more details, please refer to Section 9.2.
The dimension of the array ‘MCS List’ is not a multiple of ${\mathbf{nvar}}$. For more details, please refer to Section 9.2.
${\mathbf{ifail}}=6$
There were $n=⟨\mathit{\text{value}}⟩$ variables and the optional parameter MCS Splits Limit $\mathit{smax}=⟨\mathit{\text{value}}⟩$.
Constraint: $\mathit{smax}>n+2$.
Use e04zmf to set compatible option values.
${\mathbf{ifail}}=7$
The dummy objfun routine was called but the problem requires these values. Please provide a proper objfun routine.
${\mathbf{ifail}}=8$
An error occurred during initialization. It is likely that points from the initialization list are very close together. Try relaxing the bounds on the variables or use a different initialization method.
An error occurred during linesearching. It is likely that your objective function is badly scaled: try rescaling it. Also, try relaxing the bounds or use a different initialization method. If the problem persists, please contact NAG quoting error code $⟨\mathit{\text{value}}⟩$.
${\mathbf{ifail}}=11$
The optional parameter MCS Initialization Method was set to CUSTOM but one of the arrays $\mathit{list}$, $\mathit{initpt}$ or $\mathit{numpts}$ was not passed correctly to the handle.
For more details, please refer to Section 9.2.
${\mathbf{ifail}}=20$
User-supplied monitoring routine requested termination.
User-supplied objective function requested termination.
${\mathbf{ifail}}=21$
The function evaluations limit was exceeded.
Approximately MCS Max Objective Calls function calls have been made without your chosen termination criterion being satisfied.
${\mathbf{ifail}}=24$
The division procedure completed but your target value could not be reached.
Despite every sub-box being processed MCS Splits Limit times, the target value you provided in MCS Target Objective Value could not be found to the tolerances given in MCS Target Objective Error and MCS Target Objective Safeguard. You could try reducing MCS Splits Limit or the objective tolerances.
${\mathbf{ifail}}=-99$
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

## 7 Accuracy

If ${\mathbf{ifail}}={\mathbf{0}}$ on exit, then the vector returned in the array x is an estimate of the solution $\mathbf{x}$ whose function value satisfies your termination criterion: the function value was static for MCS Static Limit sweeps through levels, or
 $F(x) - \mathit{objval} \le \max\left( \mathit{objerr} \times \left|\mathit{objval}\right| , \mathit{objsfg} \right) ,$
where $\mathit{objval}$ is the value of the optional parameter MCS Target Objective Value, $\mathit{objerr}$ is the value of the optional parameter MCS Target Objective Error, and $\mathit{objsfg}$ is the value of the optional parameter MCS Target Objective Safeguard.
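As an illustration, the stopping inequality above translates directly into code. The following Python sketch is not part of the NAG Library; the function name `target_reached` and the tolerance values are ours, chosen for illustration only:

```python
def target_reached(fx, objval, objerr, objsfg):
    """Check the target-objective stopping rule:
    F(x) - objval <= max(objerr * |objval|, objsfg)."""
    return fx - objval <= max(objerr * abs(objval), objsfg)

# Illustrative values: a function value within the relative tolerance
# of the target passes the test.
print(target_reached(-6.5510, -6.5511, 1.0e-4, 1.0e-8))
```

The safeguard term $\mathit{objsfg}$ matters when $\mathit{objval}$ is at or near zero, where a purely relative test would become vacuous.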

## 8 Parallelism and Performance

e05kbf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e05kbf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9 Further Comments

### 9.1 Description of the Printed Output

The solver can print information to give an overview of the problem and the progress of the computation. The output may be sent to two independent unit numbers which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level, Print Options, Monitoring Level and Print Solution determine the exposed level of detail. This allows, for example, a detailed log file to be generated while the condensed information is displayed on the screen.
By default (${\mathbf{Print File}}=6$, ${\mathbf{Print Level}}=2$), four sections are printed to the standard output: a header, a list of options, an iteration log and a summary.
The header is a message indicating the start of the solver. It should look like:
```
 -------------------------------------------------
  E05KB, MCS method for bound constrained problems
 -------------------------------------------------
```
Optional parameters list
If ${\mathbf{Print Options}}=\mathrm{YES}$, a list of the optional parameters and their values is printed. The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator for how it was set. The options left at their defaults are noted by ‘d’ and the ones you have set are noted by ‘U’. Note that the output format is compatible with the file format expected by e04zpf. The output looks as follows:
```
Mcs Initialization Method     =       Simple Bounds     * d
Mcs Local Searches            =                  On     * d
Mcs Local Searches Limit      =                  50     * d
Mcs Local Searches Tolerance  =         2.22045E-16     * d
Mcs Max Objective Calls       =                   0     * d
Mcs Monitor Frequency         =                   0     * d
Mcs Repeatability             =                 Off     * d
Mcs Splits Limit              =                   0     * d
Mcs Static Limit              =                   0     * d
Mcs Target Objective Error    =         1.02648E-04     * d
Mcs Target Objective Safeguard=         1.05367E-08     * d
Mcs Target Objective Value    =        -1.7977E+308     * d
Mcs Print Frequency           =                   1     * d
```
Problem statistics
If ${\mathbf{Print Level}}\ge 2$, statistics on the problem are printed, for example:
```
 Problem Statistics
   No of variables                  4
     free (unconstrained)           1
     bounded                        3
   Objective function       Nonlinear
```
Iteration log
If ${\mathbf{Print Level}}\ge 2$, the solver will print a summary line for each split. Each line shows the split number, the value of the objective function (obj), the number of sweeps (nsweeps), and the cumulative number of objective function evaluations (nf). The output looks as follows:
```
 ----------------------------------------------
  Splits |    obj        nsweeps    |    nf   |
 ----------------------------------------------
       1 | -8.00E+00        1       |     8   |
       2 | -9.00E+00        1       |     9   |
       3 | -9.00E+00        1       |    10   |
       4 | -9.00E+00        1       |    11   |
```
Summary
Once the solver finishes, a summary is produced:
```
Status: Converged, objective function is static

Value of the objective                   -9.00000E+00
Number of objective function evaluations           81
Number of sweeps                                   10
```
Note that only iterations that decrease the objective function are printed in the iteration log, so objective evaluations are likely to occur between the last printed iteration and convergence. This explains the small difference in the number of function evaluations between the last line of the iteration log and the final summary.
Optionally, if ${\mathbf{Stats Time}}=\mathrm{YES}$, the timings are printed:
```
Timings
  Total time spent in the solver            0.056
  Time spent in the objective evaluation    0.012
```
Additionally, if ${\mathbf{Print Solution}}=\mathrm{YES}$, the solution is printed along with the bounds:
```
 Primal variables:
   idx   Lower bound       Value       Upper bound
     1   1.00000E+00    1.00000E+00    3.00000E+00
     2  -2.00000E+00   -8.52342E-02    0.00000E+00
     3       -inf       4.09304E-01         inf
     4   1.00000E+00    1.00000E+00    3.00000E+00
```

### 9.2 Creating a Custom Initialization List

By default, the initialization list as described in Section 11.1 is computed internally by the solver using one of the methods defined by the optional parameter MCS Initialization Method. It is however possible to customize the initialization method by first setting MCS Initialization Method to $\mathrm{CUSTOM}$ and providing the following information to the handle before the solver call through the utility routines e04rxf and e04rwf:
Numpts(nvar): integer array
$\mathit{numpts}$ encodes the number of splitting points in each coordinate direction; e.g., for $i=1,\mathrm{...},{\mathbf{nvar}}$, $\mathrm{numpts}\left(i\right)$ is the number of splits initially created in the $i$th direction. The coordinates of the initial boxes must be provided in the argument $\mathit{list}\left({\mathbf{nvar}}*\mathit{sdlist}\right)$, where sdlist is defined implicitly when list is passed to the problem structure.
To provide the array numpts, the routine e04rwf must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS number points}\text{'}$.
Constraint:
• $\mathit{numpts}\left(\mathit{i}\right)\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$
• ${\mathrm{max}}_{i}\mathit{numpts}\left(i\right)=\mathit{sdlist}$
Initpt(nvar): integer array
$\mathit{initpt}$ designates a point stored in $\mathit{list}$ that you wish e05kbf to consider as an ‘initial point’ for the purposes of the splitting procedure. Call this initial point ${\mathit{x}}^{0}$. The coordinates of ${\mathit{x}}^{0}$ correspond to a set of indices ${\mathit{j}}_{i}$, $i=1,\mathrm{...},{\mathbf{nvar}}$, such that ${\mathit{x}}_{i}^{0}$ is stored in $\mathit{list}\left(i+\left({j}_{i}-1\right)×{\mathbf{nvar}}\right)$. You must set $\mathit{initpt}\left(i\right)={j}_{i}$ for $i=1,\mathrm{...},{\mathbf{nvar}}$.
To provide the array initpt, the routine e04rwf must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS initial points}\text{'}$.
Constraint:
• $1\le \mathit{initpt}\left(\mathit{i}\right)\le \mathit{sdlist}={\mathrm{max}}_{\mathit{i}}\mathit{numpts}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$
List(${\mathbf{nvar}}×\mathit{sdlist}$): real array
$\mathit{list}$ is the ‘initialization list’: whenever a sub-box in the algorithm is split for the first time (either during the initialization procedure or later), for each non-fixed coordinate $i$ the split is done at the values $\mathit{list}\left(i+\left(j-1\right)×{\mathbf{nvar}}\right)$, $j=1,\mathrm{...},\mathit{numpts}\left(i\right)$, as well as at some adaptively chosen intermediate points. For all $i$, the values $\mathit{list}\left(i+\left(j-1\right)×{\mathbf{nvar}}\right)$, $j=1,\mathrm{...},\mathit{numpts}\left(i\right)$, must be in ascending order with each entry being distinct.
The dimension sdlist is defined implicitly as $\mathit{sdlist}=\frac{\mathit{dim}\left(\mathit{list}\right)}{{\mathbf{nvar}}}$ when list is passed to the handle.
To provide the array list, the routine e04rxf must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS list}\text{'}$.
Constraint:
• for all $j=1,\mathrm{...},\mathit{numpts}\left(\mathit{i}\right)$ and $i=1,\mathrm{...},{\mathbf{nvar}}$, $\mathit{bl}\left(\mathit{i}\right)\le \mathit{list}\left(\mathit{i}+\left(j-1\right)×{\mathbf{nvar}}\right)\le \mathit{bu}\left(\mathit{i}\right)$, where bl and bu are the bounds defined on the variables.
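The column-wise indexing scheme used by $\mathit{list}$ is easy to get wrong. The following Python sketch (entirely hypothetical, not a NAG routine) shows how the entry for coordinate $i$ and split point $j$ is located in the flattened array, using the $\mathit{list}\left(i+\left(j-1\right)×{\mathbf{nvar}}\right)$ layout described above:

```python
# Hypothetical sketch: Fortran position i + (j-1)*nvar (1-based) maps to
# index (i-1) + (j-1)*nvar in a 0-based array.
nvar, sdlist = 2, 3
# Three splitting points per coordinate, stored column by column:
# coordinate 1: -3.0, 0.0, 3.0 ; coordinate 2: -3.0, 0.0, 3.0
flat = [-3.0, -3.0, 0.0, 0.0, 3.0, 3.0]   # length nvar*sdlist

def list_entry(flat, nvar, i, j):
    """Return list(i + (j-1)*nvar) using 1-based i, j as in the text."""
    return flat[(i - 1) + (j - 1) * nvar]

print(list_entry(flat, nvar, 1, 2))  # second split point of coordinate 1: 0.0
```

Note that each coordinate's entries must be distinct and in ascending order, as required by the constraint on $\mathit{list}$.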

## 10 Example

This example finds the global minimum of the ‘peaks’ function in two dimensions
 $F(x,y) = 3{(1-x)}^{2} e^{-x^2-{(y+1)}^{2}} - 10\left(\frac{x}{5} - x^3 - y^5\right) e^{-x^2-y^2} - \frac{1}{3} e^{-{(x+1)}^{2}-y^2}$
on the box $\left[-3,3\right]×\left[-3,3\right]$.
The function $F$ has several local minima and one global minimum in the given box. The global minimum is approximately located at $\left(0.23,-1.63\right)$, where the function value is approximately $-6.55$.
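For reference, the ‘peaks’ objective is easy to evaluate directly. The following Python sketch (independent of the NAG Library) implements the formula and confirms the function value quoted above near the global minimizer:

```python
import math

def peaks(x, y):
    """The two-dimensional 'peaks' objective from this example."""
    return (3.0 * (1.0 - x)**2 * math.exp(-x**2 - (y + 1.0)**2)
            - 10.0 * (x / 5.0 - x**3 - y**5) * math.exp(-x**2 - y**2)
            - (1.0 / 3.0) * math.exp(-(x + 1.0)**2 - y**2))

# Near the approximate global minimizer (0.23, -1.63):
print(round(peaks(0.23, -1.63), 2))  # -6.55
```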

### 10.1 Program Text

Program Text (e05kbfe.f90)

### 10.2 Program Data

None.

### 10.3 Program Results

Program Results (e05kbfe.r)

## 11 Algorithmic Details

Here we summarise the main features of the MCS algorithm used in e05kbf, and we introduce some terminology used in the description of the subroutine and its arguments. We assume throughout that we will only do any work in coordinates $i$ in which ${x}_{i}$ is free to vary. The MCS algorithm is fully described in Huyer and Neumaier (1999).

### 11.1 Initialization and Sweeps

Each sub-box is determined by a basepoint $\mathbf{x}$ and an opposite point $\mathbf{y}$. We denote such a sub-box by $B\left[\mathbf{x},\mathbf{y}\right]$. The basepoint is allowed to belong to more than one sub-box, is usually a boundary point, and is often a vertex.
An initialization procedure produces an initial set of sub-boxes. Whenever a sub-box is split along a coordinate $i$ for the first time (in the initialization procedure or later), the splitting is done at three or more user-defined values ${\left\{{x}_{i}^{j}\right\}}_{j}$ at which the objective function is sampled, and at some adaptively chosen intermediate points. At least four children are generated. More precisely, we assume that we are given
 $\ell_i \le x_i^1 < x_i^2 < \cdots < x_i^{L_i} \le u_i , \quad L_i \ge 3 , \quad \text{for } i=1,2,\dots,n$
and a vector $\mathbf{p}$ that, for each $i$, locates within ${\left\{{x}_{i}^{j}\right\}}_{j}$ the $i$th coordinate of an initial point ${\mathbf{x}}^{0}$; that is, if ${x}_{i}^{0}={x}_{i}^{j}$ for some $j=1,2,\dots ,{L}_{i}$, then ${p}_{i}=j$. A good guess for the global optimum can be used as ${\mathbf{x}}^{0}$.
The initialization points and the vectors $\mathbf{\ell }$ and $\mathbf{p}$ are collectively called the initialization list (and sometimes we will refer to just the initialization points as ‘the initialization list’, whenever this causes no confusion). The initialization data may be input by you, or they can be set to sensible default values by e05kbf: if you provide them yourself, the arrays $\mathit{list}\left(i+\left(j-1\right)×{\mathbf{nvar}}\right)$ should contain ${x}_{i}^{j}$, $\mathit{numpts}\left(i\right)$ should contain ${L}_{i}$, and $\mathit{initpt}\left(i\right)$ should contain ${p}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,{L}_{\mathit{i}}$; please refer to Section 9.2 for more information on how to define a custom initialization list.

If you wish e05kbf to use one of its preset initialization methods, you could choose one of two simple, three-point methods (see Figure 1). If the list generated by one of these methods contains infinite values, attempts are made to generate a safeguarded list using the function $\mathrm{subint}\left(x,y\right)$ (which is also used during the splitting procedure, and is described in Section 11.2). If infinite values persist, e05kbf exits with ${\mathbf{ifail}}={\mathbf{3}}$.

There is also the option to generate an initialization list with the aid of linesearches (by setting ${\mathbf{MCS Initialization Method}}=\mathrm{LINESEARCHES}$). Starting with the absolutely smallest point in the root box, linesearches are made along each coordinate. For each coordinate, the local minimizers found by the linesearches are put into the initialization list. If there were fewer than three minimizers, they are augmented by close-by values. The final preset initialization option (${\mathbf{MCS Initialization Method}}=\mathrm{RANDOM}$) generates a randomized list, so that independent multiple runs may be made if you suspect a global optimum has not been found.
Depending on whether you set the optional parameter MCS Repeatability to $\mathrm{ON}$ or $\mathrm{OFF}$, the random state is initialized to give a repeatable or non-repeatable sequence. The components of list are then generated, from a uniform distribution on the root box if the box is finite, or else in a safeguarded fashion if any bound is infinite.
Given an initialization list (preset or otherwise), e05kbf evaluates $F$ at ${\mathbf{x}}^{0}$ and sets the initial estimate of the global minimum, ${\mathbf{x}}^{*}$, to ${\mathbf{x}}^{0}$. Then, for $i=1,2,\dots ,n$, the objective function $F$ is evaluated at ${L}_{i}-1$ points that agree with ${\mathbf{x}}^{*}$ in all but the $i$th coordinate. We obtain pairs $\left({\hat{\mathbf{x}}}^{j},{f}_{i}^{j}\right)$, for $j=1,2,\dots ,{L}_{i}$, where ${\mathbf{x}}^{*}={\hat{\mathbf{x}}}^{{j}_{1}}$, say, and, for $j\ne {j}_{1}$,
 $\hat{x}_k^j = \begin{cases} x_k^* & \text{if } k \ne i ; \\ x_k^j & \text{otherwise} ; \end{cases}$
and with
 $f_i^j = F \left( \hat{\mathbf{x}}^j \right) .$
The point having the smallest function value is renamed ${\mathbf{x}}^{*}$ and the procedure is repeated with the next coordinate.
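The coordinate-by-coordinate initialization sweep above can be sketched in Python. This is an illustrative model only, not NAG code; the function name `init_sweep` and its interface are invented for the example:

```python
import numpy as np

def init_sweep(F, x0, lists):
    """Coordinate-wise initialization sweep (sketch of Section 11.1).

    F     : objective, taking a 1-D numpy array
    lists : lists[i] holds the values x_i^1 .. x_i^{L_i} for coordinate i
            (x0[i] must be among them)
    Returns the best point found, its function value, and the
    (value, f) pairs collected per coordinate.
    """
    xstar = np.array(x0, dtype=float)
    fstar = F(xstar)
    pairs = []
    for i, pts in enumerate(lists):
        fi = []
        for v in pts:
            if v == xstar[i]:          # this point is x* itself; reuse f
                fi.append(fstar)
                continue
            trial = xstar.copy()
            trial[i] = v               # agree with x* in all but coordinate i
            fi.append(F(trial))
        pairs.append(list(zip(pts, fi)))
        j = int(np.argmin(fi))         # rename the best point x* and continue
        xstar[i] = pts[j]
        fstar = fi[j]
    return xstar, fstar, pairs
```

For a separable objective such as a sum of squares, each coordinate pass locates the best list entry in that coordinate, so the sweep reaches the global list optimum directly.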
Once e05kbf has a full set of initialization points and function values, it can generate an initial set of sub-boxes. Recall that the root box is $B\left[\mathbf{x},\mathbf{y}\right]=\left[\mathbf{\ell },\mathbf{u}\right]$, having basepoint $\mathbf{x}={\mathbf{x}}^{0}$. The opposite point $\mathbf{y}$ is a corner of $\left[\mathbf{\ell },\mathbf{u}\right]$ farthest away from $\mathbf{x}$, in some sense. The point $\mathbf{x}$ need not be a vertex of $\left[\mathbf{\ell },\mathbf{u}\right]$, and $\mathbf{y}$ is entitled to have infinite coordinates. We loop over each coordinate $i$, splitting the current box along coordinate $i$ into $2{L}_{i}-2$, $2{L}_{i}-1$ or $2{L}_{i}$ sub-intervals with exactly one of the ${\stackrel{^}{x}}_{i}^{j}$ as endpoints, depending on whether two, one or none of the ${\stackrel{^}{x}}_{i}^{j}$ are on the boundary. Thus, as well as splitting at ${\stackrel{^}{x}}_{i}^{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{L}_{i}$, we split at additional points ${z}_{i}^{\mathit{j}}$, for $\mathit{j}=2,3,\dots ,{L}_{i}$. These additional ${z}_{i}^{j}$ are such that
 $z_i^j = \hat{x}_i^{j-1} + q^m \left( \hat{x}_i^j - \hat{x}_i^{j-1} \right) , \quad j = 2, \dots, L_i ,$
where $q$ is the golden-section ratio $\left(\sqrt{5}-1\right)/2$, and the exponent $m$ takes the value $1$ or $2$, chosen so that the sub-box with the smaller function value gets the larger fraction of the interval. Each child sub-box gets as basepoint the point obtained from ${\mathbf{x}}^{*}$ by changing ${x}_{i}^{*}$ to the ${x}_{i}^{j}$ that is a boundary point of the corresponding $i$th coordinate interval; this new basepoint, therefore, has function value ${f}_{i}^{j}$. The opposite point is derived from $\mathbf{y}$ by changing ${y}_{i}$ to the other end of that interval.
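The rule for the extra split points ${z}_{i}^{j}$ can be illustrated as follows. This is a hedged sketch (the function name and interface are invented); it only shows how the exponent $m$ steers the larger golden-section fraction towards the point with the smaller function value:

```python
import math

def split_points(xs, fs):
    """Additional golden-section split points z^j between consecutive
    initialization points along one coordinate (sketch).

    xs : sorted coordinate values x_i^1 .. x_i^{L_i}
    fs : their function values f_i^1 .. f_i^{L_i}
    """
    q = (math.sqrt(5.0) - 1.0) / 2.0   # golden-section ratio, ~0.618
    zs = []
    for j in range(1, len(xs)):
        # m = 1 puts the larger fraction next to xs[j-1], m = 2 next to
        # xs[j]; choose so the smaller function value gets the larger part
        m = 1 if fs[j - 1] < fs[j] else 2
        zs.append(xs[j - 1] + q**m * (xs[j] - xs[j - 1]))
    return zs
```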
e05kbf can now rank the coordinates based on an estimated variability of $F$. For each $i$ we compute the union of the ranges of the quadratic interpolant through any three consecutive ${\stackrel{^}{x}}_{i}^{j}$, taking the difference between the upper and lower bounds obtained as a measure of the variability of $F$ in coordinate $i$. A vector $\mathbf{\pi }$ is populated in such a way that coordinate $i$ has the ${\pi }_{i}$th highest estimated variability. For tiebreaks, when the ${\mathbf{x}}^{*}$ obtained after splitting coordinate $i$ belongs to two sub-boxes, the one that contains the minimizer of the quadratic models is designated the current sub-box for coordinate $i+1$.
Boxes are assigned levels in the following manner. The root box is given level $1$. When a sub-box of level $s$ is split, the child with the smaller fraction of the golden-section split receives level $s+2$; all other children receive level $s+1$. The box with the better function value is given the larger fraction of the splitting interval and the smaller level because then it is more likely to be split again more quickly. We see that after the initialization procedure the first level is empty and the non-split boxes have levels $2,\dots ,{n}_{r}+2$, so it is meaningful to choose ${s}_{\mathrm{max}}$ much larger than ${n}_{r}$. Note that the internal structure of e05kbf demands that ${s}_{\mathrm{max}}$ be at least ${n}_{r}+3$.
Examples of initializations in two dimensions are given in Figure 1. In both cases the initial point is ${\mathbf{x}}^{0}=\left(\mathbf{\ell }+\mathbf{u}\right)/2$; on the left the initialization points are
 $\mathbf{x}^1 = \mathbf{\ell} , \quad \mathbf{x}^2 = \left(\mathbf{\ell} + \mathbf{u}\right) / 2 , \quad \mathbf{x}^3 = \mathbf{u} ,$
while on the right the points are
 $\mathbf{x}^1 = \left(5\mathbf{\ell} + \mathbf{u}\right) / 6 , \quad \mathbf{x}^2 = \left(\mathbf{\ell} + \mathbf{u}\right) / 2 , \quad \mathbf{x}^3 = \left(\mathbf{\ell} + 5\mathbf{u}\right) / 6 .$
In Figure 1, basepoints and levels after initialization are displayed. Note that these initialization lists correspond to ${\mathbf{MCS Initialization Method}}=\mathrm{SIMPLE BOUNDS}$ and ${\mathbf{MCS Initialization Method}}=\mathrm{SIMPLE OFF-BOUNDS}$, respectively.
After initialization, a series of sweeps through levels is begun. A sweep is defined by three steps:
1. (i) scan the list of non-split sub-boxes. Fill a record list $\mathbf{b}$ according to ${b}_{s}=0$ if there is no box at level $s$, and with ${b}_{s}$ pointing to a sub-box with the lowest function value among all sub-boxes with level $s$ otherwise, for $0<s<{s}_{\mathrm{max}}$;
2. (ii) the sub-box with label ${b}_{s}$ is a candidate for splitting. If the sub-box is not to be split, according to the rules described in Section 11.2, increase its level by $1$ and update ${b}_{s+1}$ if necessary. If the sub-box is split, mark it so, insert its children into the list of sub-boxes, and update $\mathbf{b}$ if any child with level ${s}^{\prime }$ yields a strict improvement of $F$ over those sub-boxes at level ${s}^{\prime }$;
3. (iii) increment $s$ by $1$. If $s={s}_{\mathrm{max}}$ then display the monitoring information and start a new sweep; else if ${b}_{s}=0$ then repeat this step; else display monitoring information and go to the previous step.
Clearly, each sweep ends after at most ${s}_{\mathrm{max}}-1$ visits of the third step.
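Step (i) of a sweep, building the record list $\mathbf{b}$, might be modelled as below. This is illustrative only; the flat `(level, fvalue, label)` representation of a sub-box is an assumption made for the example:

```python
def fill_record_list(boxes, smax):
    """Step (i) of a sweep: build the record list b (sketch).

    boxes : iterable of (level, fvalue, label) for each non-split sub-box
    Returns b with b[s] = label of the sub-box with the lowest function
    value at level s, or 0 if that level is empty.  Labels start at 1;
    only levels below smax are candidates for splitting.
    """
    b = [0] * (smax + 1)
    best = [None] * (smax + 1)          # best function value seen per level
    for level, fval, label in boxes:
        if 0 < level < smax and (best[level] is None or fval < best[level]):
            best[level] = fval
            b[level] = label
    return b
```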

### 11.2Splitting

Each sub-box is stored by e05kbf as a set of information about the history of the sub-box: the label of its parent, a label identifying which child of the parent it is, etc. Whenever a sub-box $B\left[\mathbf{x},\mathbf{y}\right]$ of level $s<{s}_{\mathrm{max}}$ is a candidate for splitting, as described in Section 11.1, we recover $\mathbf{x}$, $\mathbf{y}$, and the number, ${n}_{j}$, of times coordinate $j$ has been split in the history of $B$. Sub-box $B$ could be split in one of two ways.
1. (i) Splitting by rank
If $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, the box is always split. The splitting index is set to a coordinate $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}$.
2. (ii) Splitting by expected gain
If $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, the sub-box could be split along a coordinate where a maximal gain in function value is expected. This gain is estimated according to a local separable quadratic model obtained by fitting to $2{n}_{r}+1$ function values. If the expected gain is too small the sub-box is not split at all, and its level is increased by $1$.
Eventually, a sub-box that is not eligible for splitting by expected gain will reach level $2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)+1$ and then be split by rank, as long as ${s}_{\mathrm{max}}$ is large enough. As ${s}_{\mathrm{max}}\to \infty$, the rule for splitting by rank ensures that each coordinate is split arbitrarily often.
Before describing the details of each splitting method, we introduce the procedure for correctly handling splitting at adaptive points and for dealing with unbounded intervals. Suppose we want to split the $i$th coordinate interval $▯\left\{{x}_{i},{y}_{i}\right\}$, where we define $▯\left\{{x}_{i},{y}_{i}\right\}=\left[\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({x}_{i},{y}_{i}\right),\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({x}_{i},{y}_{i}\right)\right]$, for ${x}_{i}\in \mathbb{R}$ and ${y}_{i}\in \overline{\mathbb{R}}$, and where $\mathbf{x}$ is the basepoint of the sub-box being considered. The descendants of the sub-box should shrink sufficiently fast, so we should not split too close to ${x}_{i}$. Moreover, if ${y}_{i}$ is large we want the new splitting value to not be too large, so we force it to belong to some smaller interval $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$, determined by
 $\xi'' = \mathrm{subint} \left( x_i, y_i \right) , \quad \xi' = x_i + \left( \xi'' - x_i \right) / 10 ,$
where the function $\mathrm{subint}$ is defined by
 $\mathrm{subint} \left( x, y \right) = \begin{cases} \operatorname{sign}(y) & \text{if } 1000 |x| < 1 \text{ and } |y| > 1000 ; \\ 10 \operatorname{sign}(y) |x| & \text{if } 1000 |x| \ge 1 \text{ and } |y| > 1000 |x| ; \\ y & \text{otherwise.} \end{cases}$
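Since $\mathrm{subint}$ is a simple three-case function, it transcribes directly into code. The following is an illustrative Python transcription of the definition above, not NAG's implementation:

```python
def subint(x, y):
    """The safeguarding function subint from Section 11.2: keeps the
    splitting target finite and not too far from x when y is huge."""
    sign_y = 1.0 if y >= 0 else -1.0
    if 1000.0 * abs(x) < 1.0 and abs(y) > 1000.0:
        return sign_y                      # x near zero, y huge
    if 1000.0 * abs(x) >= 1.0 and abs(y) > 1000.0 * abs(x):
        return 10.0 * sign_y * abs(x)      # cap at one order of magnitude
    return y                               # y already moderate
```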

#### 11.2.1Splitting by rank

Consider a sub-box $B$ with level $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$. Although the sub-box has reached a high level, there is at least one coordinate along which it has not been split very often. Among the $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}$ for $B$, select the splitting index to be the coordinate with the lowest ${\pi }_{i}$ (and hence highest variability rank). ‘Splitting by rank’ refers to the ranking of the coordinates by ${n}_{i}$ and ${\pi }_{i}$.
If ${n}_{i}=0$, so that $B$ has never been split along coordinate $i$, the splitting is done according to the initialization list and the adaptively chosen golden-section split points, as described in Section 11.1. Also as covered there, new basepoints and opposite points are generated. The children having the smaller fraction of the golden-section split (that is, those with larger function values) are given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$. All other children are given level $s+1$.
Otherwise, $B$ ranges between ${x}_{i}$ and ${y}_{i}$ in the $i$th coordinate direction. The splitting value is selected to be ${z}_{i}={x}_{i}+2\left(\mathrm{subint}\left({x}_{i},{y}_{i}\right)-{x}_{i}\right)/3$; we are not attempting to split based on a large reduction in function value, merely in order to reduce the size of a large interval, so ${z}_{i}$ may not be optimal. Sub-box $B$ is split at ${z}_{i}$ and the golden-section split point, producing three parts and requiring only one additional function evaluation, at the point ${\mathbf{x}}^{\prime }$ obtained from $\mathbf{x}$ by changing the $i$th coordinate to ${z}_{i}$. The child with the smaller fraction of the golden-section split is given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$, while the other two parts are given level $s+1$. Basepoints are assigned as follows: the basepoint of the first child is taken to be $\mathbf{x}$, and the basepoint of the second and third children is the point ${\mathbf{x}}^{\prime }$. Opposite points are obtained by changing ${y}_{i}$ to the other end of the $i$th coordinate-interval of the corresponding child.
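The splitting value ${z}_{i}={x}_{i}+2\left(\mathrm{subint}\left({x}_{i},{y}_{i}\right)-{x}_{i}\right)/3$ stays finite and moderate even when ${y}_{i}$ is very large, which the following sketch demonstrates (illustrative only; `rank_split_value` is a hypothetical name):

```python
def rank_split_value(xi, yi):
    """z_i = x_i + 2*(subint(x_i, y_i) - x_i)/3, the splitting value used
    when splitting by rank along an already-split coordinate (sketch)."""
    def subint(x, y):                      # safeguard from Section 11.2
        s = 1.0 if y >= 0 else -1.0
        if 1000.0 * abs(x) < 1.0 and abs(y) > 1000.0:
            return s
        if 1000.0 * abs(x) >= 1.0 and abs(y) > 1000.0 * abs(x):
            return 10.0 * s * abs(x)
        return y
    return xi + 2.0 * (subint(xi, yi) - xi) / 3.0
```

With a moderate interval the value is simply two-thirds of the way towards ${y}_{i}$; with a huge ${y}_{i}$ the safeguard keeps it close to ${x}_{i}$.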

#### 11.2.2Splitting by expected gain

When a sub-box $B$ has level $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, we compute the optimal splitting index and splitting value from a local separable quadratic used as a simple local approximation of the objective function. To fit this curve, for each coordinate we need two additional points and their function values. Such data may be recoverable from the history of $B$: whenever the $i$th coordinate was split in the history of $B$, we obtained values that can be used for the current quadratic interpolation in coordinate $i$.
We loop over $i$; for each coordinate we pursue the history of $B$ back to the root box, and we take the first two points and function values we find, since these are expected to be closest to the current basepoint $\mathbf{x}$. If the current coordinate has not yet been split we use the initialization list. Then we generate a local separable model $e\left(\mathbf{\xi }\right)$ for $F\left(\mathbf{\xi }\right)$ by interpolation at $\mathbf{x}$ and the $2{n}_{r}$ additional points just collected:
 $e(\mathbf{\xi}) = F(\mathbf{x}) + \sum_{i=1}^{n} e_i (\xi_i) .$
We define the expected gain ${\stackrel{^}{e}}_{i}$ in function value when we evaluate at a new point obtained by changing coordinate $i$ in the basepoint, for each $i$, based on two cases:
1. (i) ${n}_{i}=0$. We compute the expected gain as
 $\hat{e}_i = \min_{1 \le j \le L_i} \left\{ f_i^j \right\} - f_i^{p_i} .$
Again, we split according to the initialization list, with the new basepoints and opposite points being as before.
2. (ii) ${n}_{i}>0$. Now, the $i$th component of our sub-box ranges from ${x}_{i}$ to ${y}_{i}$. Using the quadratic partial correction function
 $e_i (\xi_i) = \alpha_i (\xi_i - x_i) + \beta_i (\xi_i - x_i)^2$
we can approximate the maximal gain expected when changing ${x}_{i}$ only. We will choose the splitting value from $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$. We compute
 $\hat{e}_i = \min_{\xi_i \in ▯\left\{\xi', \xi''\right\}} e_i (\xi_i)$
and call ${z}_{i}$ the minimizer in $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$.
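As a sketch of case (ii), one can fit the quadratic partial correction through two history points and minimize it over an interval. This is an illustrative model (the function name, interface, and the plain 2×2 solve are assumptions; NAG's internal fitting may differ):

```python
def expected_gain(x, fx, p1, p2, lo, hi):
    """Fit e(t) = a*(t-x) + b*(t-x)^2 through two extra points and
    minimize it over [lo, hi] (standing in for the interval {xi', xi''}).

    p1, p2 : (coordinate value, function value) pairs from the history
    Returns (gain, minimizer z).
    """
    (x1, f1), (x2, f2) = p1, p2
    d1, d2 = x1 - x, x2 - x
    # solve  a*d1 + b*d1^2 = f1 - fx ;  a*d2 + b*d2^2 = f2 - fx
    det = d1 * d2**2 - d2 * d1**2
    a = ((f1 - fx) * d2**2 - (f2 - fx) * d1**2) / det
    b = (d1 * (f2 - fx) - d2 * (f1 - fx)) / det
    # candidates: the interval ends, plus the vertex if it lies inside
    cands = [lo, hi]
    if b > 0:
        v = x - a / (2.0 * b)
        if lo < v < hi:
            cands.append(v)
    e = lambda t: a * (t - x) + b * (t - x)**2
    z = min(cands, key=e)
    return e(z), z
```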
If the expected best function value ${f}_{\mathrm{exp}}$ satisfies
 $f_{\mathrm{exp}} = F(\mathbf{x}) + \min_{1 \le i \le n} \hat{e}_i < f_{\mathrm{best}} ,$ (1)
where ${f}_{\mathrm{best}}$ is the current best function value (including those function values obtained by local optimization), we expect the sub-box to contain a better point and so we split it, using as splitting index the component with minimal ${\stackrel{^}{e}}_{i}$. Equation (1) prevents wasting function calls by avoiding splitting sub-boxes whose basepoints have bad function values. These sub-boxes will eventually be split by rank anyway.
We now have a splitting index and a splitting value ${z}_{i}$. The sub-box is split at ${z}_{i}$ as long as ${z}_{i}\ne {y}_{i}$, and at the golden-section split point; two or three children are produced. The larger fraction of the golden-section split receives level $s+1$, while the smaller fraction receives level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$. If it is the case that ${z}_{i}\ne {y}_{i}$ and the third child is larger than the smaller of the two children from the golden-section split, the third child receives level $s+1$. Otherwise it is given the level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$. The basepoint of the first child is set to $\mathbf{x}$, and the basepoint of the second (and third if it exists) is obtained by changing the $i$th coordinate of $\mathbf{x}$ to ${z}_{i}$. The opposite points are again derived by changing ${y}_{i}$ to the other end of the $i$th coordinate interval of $B$.
If equation (1) does not hold, we expect no improvement. We do not split, and we increase the level of $B$ by $1$.

### 11.3Local Search

The local optimization algorithm used by e05kbf performs linesearches along directions determined by minimizing quadratic models, all subject to bound constraints. Triples of vectors are computed using coordinate searches based on linesearches, and these triples are used in triple search procedures to build local quadratic models for $F$. A trust-region-type approach is then used to minimize these models; more information about the coordinate search and the triple search can be found in Huyer and Neumaier (1999).
The local search starts by looking for better points without being too local, by making a triple search using points found by a coordinate search. This yields a new point and function value, an approximation of the gradient of the objective, and an approximation of the Hessian of the objective. Then the quadratic model for $F$ is minimized over a small box, with the solution to that minimization problem then being used as a linesearch direction to minimize the objective. A measure $r$ is computed to quantify the predictive quality of the quadratic model.
The third stage is the checking of termination criteria. The local search will stop if more than $\mathit{loclim}$ visits to this part of the local search have occurred, where $\mathit{loclim}$ is the value of the optional parameter MCS Local Searches Limit. If that is not the case, it will stop if the limit on function calls has been exceeded (see the description of the optional parameter MCS Max Objective Calls). The final criterion checks if no improvement can be made to the function value, or whether the approximated gradient $\mathbf{g}$ is small, in the sense that
 $\left|\mathbf{g}\right|^{\mathrm{T}} \max\left(\left|\mathbf{x}\right|, \left|\mathbf{x}_{\mathrm{old}}\right|\right) < \mathit{loctol} \left( f_0 - f \right) .$
The vector ${\mathbf{x}}_{\mathrm{old}}$ is the best point at the start of the current loop in this iterative local-search procedure, the constant $\mathit{loctol}$ is the value of the optional parameter MCS Local Searches Tolerance, $f$ is the objective value at $\mathbf{x}$, and ${f}_{0}$ is the smallest function value found by the initialization procedure.
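The small-gradient test can be written directly from the inequality above. This is illustrative Python, with `grad_stop` a hypothetical name:

```python
import numpy as np

def grad_stop(g, x, x_old, f, f0, loctol):
    """Local-search termination test:
    |g|^T max(|x|, |x_old|) < loctol * (f0 - f)."""
    lhs = np.abs(np.asarray(g)).dot(np.maximum(np.abs(x), np.abs(x_old)))
    return lhs < loctol * (f0 - f)
```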
Next, e05kbf attempts to move away from the boundary, if any components of the current point lie there, using linesearches along the offending coordinates. Local searches are terminated if no improvement could be made.
The fifth stage carries out another triple search, but this time it does not use points from a coordinate search, rather points lying within the trust region box are taken.
The final stage modifies the trust region box to be bigger or smaller, depending on the quality of the quadratic model, minimizes the new quadratic model on that box, and does a linesearch in the direction of the minimizer. The value of $r$ is updated using the new data, and then we go back to the third stage (checking of termination criteria).
The Hessians of the quadratic models generated by the local search may not be positive definite, so e05kbf uses a general nonlinear optimizer to minimize the models.

## 12Optional Parameters

Several optional parameters in e05kbf define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e05kbf these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmf anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
The option values may be retrieved by e04znf.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.

### 12.1Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
• the keywords;
• a parameter value, where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
• the default value, where the symbol $\epsilon$ is a generic notation for machine precision (see x02ajf).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
 Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
 MCS Initialization Method $a$ Default $\text{}=\mathrm{SIMPLE BOUNDS}$
The solver provides several methods to produce the initial set of sub-boxes. When ${\mathbf{MCS Initialization Method}}=\mathrm{SIMPLE BOUNDS}$, $\mathrm{SIMPLE OFF-BOUNDS}$, $\mathrm{LINESEARCH}$ or $\mathrm{RANDOM}$, e05kbf will generate the sub-boxes internally. Please refer to Section 11.1 for more details about the differences between the automatic methods.
It is also possible to create a custom initialization list by setting ${\mathbf{MCS Initialization Method}}=\mathrm{CUSTOM}$. Section 9.2 describes how to provide the custom initialization list to the solver.
Constraint: ${\mathbf{MCS Initialization Method}}=\mathrm{SIMPLE BOUNDS}$, $\mathrm{SIMPLE OFF-BOUNDS}$, $\mathrm{LINESEARCH}$, $\mathrm{RANDOM}$ or $\mathrm{CUSTOM}$
 MCS Max Objective Calls $i$ Default $\text{}=100{n}_{r}^{2}$
This puts an approximate limit on the number of function calls allowed. The total number of calls made is checked at the top of an internal iteration loop, so it is possible that a few more calls than MCS Max Objective Calls may be made.
Constraint: ${\mathbf{MCS Max Objective Calls}}\ge 1$
 MCS Monitor Frequency $i$ Default $\text{}=0$
This parameter specifies the frequency with which to call the monitor routine monit. If zero, the monitor routine will not be called.
Constraint: ${\mathbf{MCS Monitor Frequency}}\ge 0$.
 MCS Print Frequency $i$ Default $\text{}=1$
This parameter specifies the frequency with which to print information regarding each sweep to Print File and/or Monitoring File. By default, information is printed for every sweep.
Constraint: ${\mathbf{MCS Print Frequency}}\ge 1$.
 MCS Local Searches $a$ Default $\text{}=\mathrm{ON}$
If you want to try to accelerate convergence of e05kbf by starting local searches from candidate minima, you will require MCS Local Searches to be $\mathrm{ON}$.
Constraint: ${\mathbf{MCS Local Searches}}=\mathrm{ON}$ or $\mathrm{OFF}$
 MCS Local Searches Limit $i$ Default $\text{}=50$
This defines the maximal number of iterations to be used in the trust region loop of the local-search procedure.
Constraint: ${\mathbf{MCS Local Searches Limit}}>0$
 MCS Local Searches Tolerance $r$ Default $\text{}=2\epsilon$
The value of MCS Local Searches Tolerance is the multiplier used during local searches as a stopping criterion for when the approximated gradient is small, in the sense described in Section 11.3.
Constraint: ${\mathbf{MCS Local Searches Tolerance}}\ge 2\epsilon$
 MCS Repeatability $a$ Default $\text{}=\mathrm{OFF}$
For use with random initialization lists (${\mathbf{MCS Initialization Method}}=\mathrm{RANDOM}$). When set to $\mathrm{ON}$, the initialization list will be the same for every call of e05kbf.
Constraint: ${\mathbf{MCS Repeatability}}=\mathrm{ON}$ or $\mathrm{OFF}$
 MCS Splits Limit $i$ Default $\text{}=⌊d\left({n}_{r}+2\right)/3⌋$
Along with the initialization list, this defines a limit on the number of times the root box will be split along any single coordinate direction. If MCS Local Searches is $\mathrm{OFF}$ you may find the default value to be too small.
Constraint: ${\mathbf{MCS Splits Limit}}>{n}_{r}+2$
 MCS Static Limit $i$ Default $\text{}=3{n}_{r}$
As the default termination criterion, computation stops when the best function value is static for MCS Static Limit sweeps through levels. This parameter is ignored if you have specified a target value to reach in MCS Target Objective Value.
Constraint: ${\mathbf{MCS Static Limit}}>0$
 MCS Target Objective Error $r$ Default $\text{}={\epsilon }^{\frac{1}{4}}$
If you have given a target objective value to reach in MCS Target Objective Value, this optional parameter sets your desired relative error between F(x) and MCS Target Objective Value, as described in Section 7. See also the description of the optional parameter MCS Target Objective Safeguard.
Constraint: ${\mathbf{MCS Target Objective Error}}\ge 2\epsilon$
 MCS Target Objective Safeguard $r$ Default $\text{}={\epsilon }^{\frac{1}{2}}$
If you have given a target objective value to reach in MCS Target Objective Value, this optional parameter sets your desired safeguarded termination tolerance, used when MCS Target Objective Value is close to zero.
Constraint: ${\mathbf{MCS Target Objective Safeguard}}\ge 2\epsilon$
 MCS Target Objective Value $r$
This parameter may be set if you wish e05kbf to use a specific value as the target function value to reach during the optimization. Setting MCS Target Objective Value overrides the default termination criterion determined by the optional parameter MCS Static Limit.
 Infinite Bound Size $r$ Default $\text{}={r}_{\mathrm{max}}^{\frac{1}{4}}$
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty$ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty$). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: ${\mathbf{Infinite Bound Size}}\ge 1000$.
 Monitoring File $i$ Default $\text{}=-1$
If $i\ge 0$, the unit number for the secondary (monitoring) output. If ${\mathbf{Monitoring File}}=-1$, no secondary output is provided. The information output to this unit is controlled by Monitoring Level.
Constraint: ${\mathbf{Monitoring File}}\ge -1$.
 Monitoring Level $i$ Default $=4$
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: $0\le {\mathbf{Monitoring Level}}\le 5$.
 Print File $i$ Default $=\text{advisory message unit number}$
If $i\ge 0$, the unit number for the primary output of the solver. If ${\mathbf{Print File}}=-1$, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf at the time the optional parameters are initialized, e.g., at the initialization of the handle. The information output to this unit is controlled by Print Level.
Constraint: ${\mathbf{Print File}}\ge -1$.
 Print Level $i$ Default $=2$
This parameter defines how detailed information should be printed by the solver to the primary and secondary output.
$\mathbit{i}$ Output
$0$ No output from the solver.
$1$ The Header and Summary.
$2$, $3$, $4$, $5$ Additionally, the Iteration log.
Constraint: $0\le {\mathbf{Print Level}}\le 5$.
 Print Options $a$ Default $=\mathrm{YES}$
If ${\mathbf{Print Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary output and is always printed to the secondary output.
Constraint: ${\mathbf{Print Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
 Print Solution $a$ Default $=\mathrm{NO}$
If ${\mathbf{Print Solution}}=\mathrm{YES}$, the solution will be printed to the primary and secondary output.
Constraint: ${\mathbf{Print Solution}}=\mathrm{YES}$ or $\mathrm{NO}$.
 Stats Time $a$ Default $=\mathrm{NO}$
This parameter turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent, which may be helpful when comparing different solving approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL CLOCK}$.
Constraint: ${\mathbf{Stats Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL CLOCK}$.
 Task $a$ Default $=\mathrm{MINIMIZE}$
This parameter specifies the required direction of the optimization. If ${\mathbf{Task}}=\mathrm{FEASIBLE POINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE POINT}$.