NAG Toolbox: nag_opt_bounds_mod_deriv2_comp (e04lb)
Purpose
nag_opt_bounds_mod_deriv2_comp (e04lb) is a comprehensive modified Newton algorithm for finding:
 an unconstrained minimum of a function of several variables
 a minimum of a function of several variables subject to fixed upper and/or lower bounds on the variables.
First and second derivatives are required. The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = e04lb(funct, h, monit, ibound, bl, bu, x, lh, iw, w, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'xtol', xtol, 'stepmx', stepmx)
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = nag_opt_bounds_mod_deriv2_comp(funct, h, monit, ibound, bl, bu, x, lh, iw, w, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'xtol', xtol, 'stepmx', stepmx)
Note: the interface to this routine has changed since earlier releases of the toolbox:
At Mark 22: 
liw and lw were removed from the interface 
Description
nag_opt_bounds_mod_deriv2_comp (e04lb) is applicable to problems of the form:
$\mathrm{Minimize}\phantom{\rule{0.25em}{0ex}}F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)\phantom{\rule{0.5em}{0ex}}\text{subject to}\phantom{\rule{0.5em}{0ex}}{l}_{j}\le {x}_{j}\le {u}_{j},\phantom{\rule{0.25em}{0ex}}j=1,2,\dots ,n$
when first and second derivatives of $F\left(x\right)$ are available.
Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the
${x}_{j}$), problems which have only nonnegativity bounds, and problems in which
${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and
${u}_{1}={u}_{2}=\cdots ={u}_{n}$. It is possible to specify that a particular
${x}_{j}$ should be held constant. You must supply a starting point, a
funct to calculate the value of
$F\left(x\right)$ and its first derivatives
$\frac{\partial F}{\partial {x}_{j}}$ at any point
$x$, and a
h to calculate the second derivatives
$\frac{{\partial}^{2}F}{\partial {x}_{i}\partial {x}_{j}}$.
A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from both their bounds. The vector of first derivatives of $F\left(x\right)$ with respect to the free variables, ${g}_{z}$, and the matrix of second derivatives with respect to the free variables, $H$, are obtained. (These both have dimension ${n}_{z}$.)
The equations
$\left(H+E\right){p}_{z}=-{g}_{z}$
are solved to give a search direction
${p}_{z}$. (The matrix
$E$ is chosen so that
$H+E$ is positive definite.)
${p}_{z}$ is then expanded to an $n$-vector $p$ by the insertion of appropriate zero elements; $\alpha $ is found such that $F\left(x+\alpha p\right)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha $, and $x$ is replaced by $x+\alpha p$. (If a saddle point is found, a special search is carried out so as to move away from the saddle point.)
If any variable actually reaches a bound, it is fixed and ${n}_{z}$ is reduced for the next iteration.
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise, minimization continues in the current subspace until the stronger criteria are satisfied. If at this point there are no negative or near-zero Lagrange multiplier estimates, the process is terminated.
If you specify that the problem is unconstrained, nag_opt_bounds_mod_deriv2_comp (e04lb) sets the ${l}_{j}$ to $-{10}^{6}$ and the ${u}_{j}$ to ${10}^{6}$. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and nag_opt_bounds_mod_deriv2_comp (e04lb) will act as an unconstrained minimization algorithm.
References
Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods NPL Report NAC 37 National Physical Laboratory
Gill P E and Murray W (1974) Newton-type methods for unconstrained and linearly constrained optimization Math. Programming 7 311–350
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
Parameters
Compulsory Input Parameters
 1:
$\mathrm{funct}$ – function handle or string containing name of m-file

funct must evaluate the function
$F\left(x\right)$ and its first derivatives
$\frac{\partial F}{\partial {x}_{j}}$ at any point
$x$. (However, if you do not wish to calculate
$F\left(x\right)$ or its first derivatives at a particular
$x$, there is the option of setting an argument to cause
nag_opt_bounds_mod_deriv2_comp (e04lb) to terminate immediately.)
[iflag, fc, gc, iw, w] = funct(iflag, n, xc, iw, w)
Input Parameters
 1:
$\mathrm{iflag}$ – int64int32nag_int scalar

Will have been set to $2$.
 2:
$\mathrm{n}$ – int64int32nag_int scalar

The number $n$ of variables.
 3:
$\mathrm{xc}\left({\mathbf{n}}\right)$ – double array

The point $x$ at which $F$ and the $\frac{\partial F}{\partial {x}_{j}}$ are required.
 4:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 5:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

funct is called with the same arguments
iw,
liw,
w and
lw as for
nag_opt_bounds_mod_deriv2_comp (e04lb). They are present so that, when other library functions require the solution of a minimization subproblem, constants needed for the function evaluation can be passed through
iw and
w. Similarly, you
could use elements
$3,4,\dots ,\mathit{liw}$ of
iw and elements from
$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(8,7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)+1$ onwards of
w for passing quantities to
funct from the function which calls
nag_opt_bounds_mod_deriv2_comp (e04lb). However, because of the danger of mistakes in partitioning, it is recommended that you should pass information to
funct via global variables and not use
iw or
w at all. In any case
funct must not change the first
$2$ elements of
iw or the first
$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(8,7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$ elements of
w.
Output Parameters
 1:
$\mathrm{iflag}$ – int64int32nag_int scalar

If it is not possible to evaluate
$F\left(x\right)$ or its first derivatives at the point
$x$ given in
xc (or if it is wished to stop the calculation for any other reason) you should reset
iflag to some negative number and return control to
nag_opt_bounds_mod_deriv2_comp (e04lb).
nag_opt_bounds_mod_deriv2_comp (e04lb) will then terminate immediately with
ifail set to your setting of
iflag.
 2:
$\mathrm{fc}$ – double scalar

Unless
iflag is reset,
funct must set
fc to the value of the objective function
$F$ at the current point
$x$.
 3:
$\mathrm{gc}\left({\mathbf{n}}\right)$ – double array

Unless
iflag is reset,
funct must set
${\mathbf{gc}}\left(j\right)$ to the value of the first derivative
$\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the point
$x$, for
$\mathit{j}=1,2,\dots ,n$.
 4:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 5:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

Note: funct should be tested separately before being used in conjunction with
nag_opt_bounds_mod_deriv2_comp (e04lb).
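As an illustration only (the objective function here is the Rosenbrock function with $n=2$, chosen purely as an example and not taken from this document), a funct m-file matching the documented signature might look like:

```matlab
% Illustrative funct evaluating F(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2
% and its first derivatives, with the signature specified above.
function [iflag, fc, gc, iw, w] = funct(iflag, n, xc, iw, w)
  fc = 100*(xc(2) - xc(1)^2)^2 + (1 - xc(1))^2;
  gc = zeros(n, 1);
  gc(1) = -400*xc(1)*(xc(2) - xc(1)^2) - 2*(1 - xc(1));
  gc(2) =  200*(xc(2) - xc(1)^2);
  % Reset iflag to a negative value here if the evaluation fails and
  % nag_opt_bounds_mod_deriv2_comp (e04lb) should terminate immediately.
end
```

Note that iflag, iw and w are returned unchanged, and the first $2$ elements of iw and the first $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(8,7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$ elements of w are not modified, as required.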
 2:
$\mathrm{h}$ – function handle or string containing name of m-file

h must calculate the second derivatives of
$F$ at any point
$x$. (As with
funct, there is the option of causing
nag_opt_bounds_mod_deriv2_comp (e04lb) to terminate immediately.)
[iflag, fhesl, fhesd, iw, w] = h(iflag, n, xc, lh, fhesd, iw, w)
Input Parameters
 1:
$\mathrm{iflag}$ – int64int32nag_int scalar

Is set to a nonnegative number.
 2:
$\mathrm{n}$ – int64int32nag_int scalar

The number $n$ of variables.
 3:
$\mathrm{xc}\left({\mathbf{n}}\right)$ – double array

The point $x$ at which the second derivatives of $F$ are required.
 4:
$\mathrm{lh}$ – int64int32nag_int scalar

The length of the array
fhesl.
 5:
$\mathrm{fhesd}\left({\mathbf{n}}\right)$ – double array

The value of
$\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the point
$x$, for
$\mathit{j}=1,2,\dots ,n$.
These values may be useful in the evaluation of the second derivatives.
 6:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 7:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

As in
funct, these arguments correspond to the arguments
iw,
liw,
w,
lw of
nag_opt_bounds_mod_deriv2_comp (e04lb).
h must not change the first two elements of
iw or the first
$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(8,7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$ elements of
w. Again, it is recommended that you should pass quantities to
h via global variables and not use
iw or
w at all.
Output Parameters
 1:
$\mathrm{iflag}$ – int64int32nag_int scalar

If
h resets
iflag to some negative number,
nag_opt_bounds_mod_deriv2_comp (e04lb) will terminate immediately with
ifail set to your setting of
iflag.
 2:
$\mathrm{fhesl}\left({\mathbf{lh}}\right)$ – double array

Unless
iflag is reset,
h must place the strict lower triangle of the second derivative matrix of
$F$ (evaluated at the point
$x$) in
fhesl, stored by rows, i.e., set
${\mathbf{fhesl}}\left(\left(\mathit{i}-1\right)\left(\mathit{i}-2\right)/2+\mathit{j}\right)={\left.\frac{{\partial}^{2}F}{\partial {x}_{\mathit{i}}\partial {x}_{\mathit{j}}}\right|}_{{\mathbf{xc}}}$, for
$\mathit{i}=2,3,\dots ,n$ and
$\mathit{j}=1,2,\dots ,i-1$. (The upper triangle is not required because the matrix is symmetric.)
 3:
$\mathrm{fhesd}\left({\mathbf{n}}\right)$ – double array

Unless
iflag is reset,
h must place the diagonal elements of the second derivative matrix of
$F$ (evaluated at the point
$x$) in
fhesd, i.e., set
${\mathbf{fhesd}}\left(j\right)={\left.\frac{{\partial}^{2}F}{\partial {x}_{j}^{2}}\right|}_{{\mathbf{xc}}}$,
$j=1,2,\dots ,n$.
 4:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 5:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

Note: h should be tested separately before being used in conjunction with
nag_opt_bounds_mod_deriv2_comp (e04lb).
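As an illustration only (again using $F\left(x\right)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2}$ with $n=2$ as an example function not taken from this document), an h m-file matching the documented signature might look like:

```matlab
% Illustrative h: the strict lower triangle of the Hessian is packed by rows
% into fhesl (element (i-1)(i-2)/2 + j holds d2F/dxi dxj) and the diagonal
% second derivatives go into fhesd.
function [iflag, fhesl, fhesd, iw, w] = h(iflag, n, xc, lh, fhesd, iw, w)
  fhesl = zeros(lh, 1);
  fhesl(1) = -400*xc(1);                    % d2F/dx2 dx1 (i = 2, j = 1)
  fhesd(1) = 1200*xc(1)^2 - 400*xc(2) + 2;  % d2F/dx1^2
  fhesd(2) = 200;                           % d2F/dx2^2
end
```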
 3:
$\mathrm{monit}$ – function handle or string containing name of m-file

If
${\mathbf{iprint}}\ge 0$, you must supply
monit which is suitable for monitoring the minimization process.
monit must not change the values of any of its arguments.
If
${\mathbf{iprint}}<0$, a
monit with the correct argument list should still be supplied, although it will not be called.
[iw, w] = monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf, iw, w)
Input Parameters
 1:
$\mathrm{n}$ – int64int32nag_int scalar

The number $n$ of variables.
 2:
$\mathrm{xc}\left({\mathbf{n}}\right)$ – double array

The coordinates of the current point $x$.
 3:
$\mathrm{fc}$ – double scalar

The value of $F\left(x\right)$ at the current point $x$.
 4:
$\mathrm{gc}\left({\mathbf{n}}\right)$ – double array

The value of
$\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the current point $x$, for $\mathit{j}=1,2,\dots ,n$.
 5:
$\mathrm{istate}\left({\mathbf{n}}\right)$ – int64int32nag_int array

Information about which variables are currently fixed on their bounds and which are free.
If
${\mathbf{istate}}\left(j\right)$ is negative,
${x}_{j}$ is currently:
– 
fixed on its upper bound if ${\mathbf{istate}}\left(j\right)=-1$; 
– 
fixed on its lower bound if ${\mathbf{istate}}\left(j\right)=-2$; 
– 
effectively a constant (i.e., ${l}_{j}={u}_{j}$) if ${\mathbf{istate}}\left(j\right)=-3$. 
If
${\mathbf{istate}}\left(j\right)$ is positive, its value gives the position of
${x}_{j}$ in the sequence of free variables.
 6:
$\mathrm{gpjnrm}$ – double scalar

The Euclidean norm of the projected gradient vector ${g}_{z}$.
 7:
$\mathrm{cond}$ – double scalar

The ratio of the largest to the smallest elements of the diagonal factor
$D$ of the projected Hessian matrix (see specification of
h). This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free,
cond is set to zero.)
 8:
$\mathrm{posdef}$ – logical scalar

Is set true or false according to whether the second derivative matrix for the current subspace, $H$, is positive definite or not.
 9:
$\mathrm{niter}$ – int64int32nag_int scalar

The number of iterations (as outlined in
Description) which have been performed by
nag_opt_bounds_mod_deriv2_comp (e04lb) so far.
 10:
$\mathrm{nf}$ – int64int32nag_int scalar

The number of times that
funct has been called so far. Thus
nf is the number of function and gradient evaluations made so far.
 11:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 12:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

As in
funct, and
h, these arguments correspond to the arguments
iw,
liw,
w,
lw of
nag_opt_bounds_mod_deriv2_comp (e04lb). They are included in
monit's argument list primarily for when
nag_opt_bounds_mod_deriv2_comp (e04lb) is called by other library functions.
Output Parameters
 1:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
 2:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array

You should normally print out
fc,
gpjnrm and
cond so as to be able to compare the quantities mentioned in
Accuracy. It is normally helpful to examine
xc,
posdef and
nf as well.
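A minimal monit that prints the quantities recommended above might be sketched as follows (the output format is illustrative, not prescribed by the library):

```matlab
% Illustrative monit: prints fc, gpjnrm and cond (plus the current point,
% posdef and nf) once per call; all arguments are returned unchanged.
function [iw, w] = monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf, iw, w)
  fprintf('Iter %3d  nf %4d  F = %13.5e  ||g_z|| = %9.2e  cond = %9.2e  posdef = %d\n', ...
          niter, nf, fc, gpjnrm, cond, posdef);
  fprintf('  x ='); fprintf(' %12.4e', xc); fprintf('\n');
end
```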
 4:
$\mathrm{ibound}$ – int64int32nag_int scalar

Specifies whether the problem is unconstrained or bounded. If there are bounds on the variables,
ibound can be used to indicate whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
 ${\mathbf{ibound}}=0$
 If the variables are bounded and you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
 ${\mathbf{ibound}}=1$
 If the problem is unconstrained.
 ${\mathbf{ibound}}=2$
 If the variables are bounded, but all the bounds are of the form $0\le {x}_{j}$.
 ${\mathbf{ibound}}=3$
 If all the variables are bounded, and ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
 ${\mathbf{ibound}}=4$
 If the problem is unconstrained. (The ${\mathbf{ibound}}=4$ option is provided purely for consistency with other functions. In nag_opt_bounds_mod_deriv2_comp (e04lb) it produces the same effect as ${\mathbf{ibound}}=1$.)
Constraint:
$0\le {\mathbf{ibound}}\le 4$.
 5:
$\mathrm{bl}\left({\mathbf{n}}\right)$ – double array

The fixed lower bounds
${l}_{j}$.
If
ibound is set to
$0$, you must set
${\mathbf{bl}}\left(\mathit{j}\right)$ to
${l}_{\mathit{j}}$, for
$\mathit{j}=1,2,\dots ,n$. (If a lower bound is not specified for any
${x}_{j}$, the corresponding
${\mathbf{bl}}\left(j\right)$ should be set to a large negative number, e.g.,
$-{10}^{6}$.)
If
ibound is set to
$3$, you must set
${\mathbf{bl}}\left(1\right)$ to
${l}_{1}$;
nag_opt_bounds_mod_deriv2_comp (e04lb) will then set the remaining elements of
bl equal to
${\mathbf{bl}}\left(1\right)$.
If
ibound is set to
$1$,
$2$ or
$4$,
bl will be initialized by
nag_opt_bounds_mod_deriv2_comp (e04lb).
 6:
$\mathrm{bu}\left({\mathbf{n}}\right)$ – double array

The fixed upper bounds
${u}_{j}$.
If
ibound is set to
$0$, you must set
${\mathbf{bu}}\left(\mathit{j}\right)$ to
${u}_{\mathit{j}}$, for
$\mathit{j}=1,2,\dots ,n$. (If an upper bound is not specified for any variable, the corresponding
${\mathbf{bu}}\left(j\right)$ should be set to a large positive number, e.g.,
${10}^{6}$.)
If
ibound is set to
$3$, you must set
${\mathbf{bu}}\left(1\right)$ to
${u}_{1}$;
nag_opt_bounds_mod_deriv2_comp (e04lb) will then set the remaining elements of
bu equal to
${\mathbf{bu}}\left(1\right)$.
If
ibound is set to
$1$,
$2$ or
$4$,
bu will then be initialized by
nag_opt_bounds_mod_deriv2_comp (e04lb).
 7:
$\mathrm{x}\left({\mathbf{n}}\right)$ – double array

${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
 8:
$\mathrm{lh}$ – int64int32nag_int scalar

The dimension of the array
hesl.
Constraint:
${\mathbf{lh}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2,1\right)$.
 9:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
liw, the dimension of the array, must satisfy the constraint
$\mathit{liw}\ge 2$.
Constraint:
$\mathit{liw}\ge 2$.
 10:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array
lw, the dimension of the array, must satisfy the constraint
$\mathit{lw}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2,8\right)$.
Constraint:
$\mathit{lw}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2,8\right)$.
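Putting the compulsory arguments together, a call for a small bounded problem might be sketched as follows. This assumes m-files funct.m, h.m and monit.m with the documented signatures exist on the path; the bounds, starting point and $n=2$ are illustrative values, not taken from this document.

```matlab
% Hedged sketch of a call to e04lb for an n = 2 bounded problem.
n      = 2;
ibound = int64(0);                          % individual bounds supplied in bl and bu
bl     = [-2; -2];  bu = [2; 2];            % illustrative bounds l_j, u_j
x0     = [-1.5; 1];                         % illustrative starting guess
lh     = int64(max(n*(n-1)/2, 1));          % dimension of hesl
iw     = zeros(2, 1, 'int64');              % liw = 2
w      = zeros(max(7*n + n*(n-1)/2, 8), 1); % lw = max(7n + n(n-1)/2, 8)
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = ...
    e04lb('funct', 'h', 'monit', ibound, bl, bu, x0, lh, iw, w);
```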
Optional Input Parameters
 1:
$\mathrm{n}$ – int64int32nag_int scalar

Default:
the dimension of the arrays
bl,
bu,
x. (An error is raised if these dimensions are not equal.)
The number $n$ of independent variables.
Constraint:
${\mathbf{n}}\ge 1$.
 2:
$\mathrm{iprint}$ – int64int32nag_int scalar
Default:
$1$
The frequency with which
monit is to be called.
 ${\mathbf{iprint}}>0$
 monit is called once every iprint iterations and just before exit from nag_opt_bounds_mod_deriv2_comp (e04lb).
 ${\mathbf{iprint}}=0$
 monit is just called at the final point.
 ${\mathbf{iprint}}<0$
 monit is not called at all.
iprint should normally be set to a small positive number.
 3:
$\mathrm{maxcal}$ – int64int32nag_int scalar
Default:
$50\times {\mathbf{n}}$
The maximum permitted number of evaluations of
$F\left(x\right)$, i.e., the maximum permitted number of calls of
funct.
Constraint:
${\mathbf{maxcal}}\ge 1$.
 4:
$\mathrm{eta}$ – double scalar
Suggested value:
${\mathbf{eta}}=0.9$ is usually a good choice although a smaller value may be warranted if the matrix of second derivatives is expensive to compute compared with the function and first derivatives.
If ${\mathbf{n}}=1$, eta should be set to $0.0$ (also when the problem is effectively one-dimensional even though
$n>1$; i.e., if for all except one of the variables the lower and upper bounds are equal).
Default:
 if ${\mathbf{n}}=1$, $0.0$;
 otherwise $0.9$.
Every iteration of
nag_opt_bounds_mod_deriv2_comp (e04lb) involves a linear minimization (i.e., minimization of
$F\left(x+\alpha p\right)$ with respect to
$\alpha $).
eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to
$\alpha $ will be located more accurately for small values of
eta (say,
$0.01$) than for large values (say,
$0.9$).
Although accurate linear minimizations will generally reduce the number of iterations of nag_opt_bounds_mod_deriv2_comp (e04lb), this usually results in an increase in the number of function and gradient evaluations required for each iteration. On balance, it is usually more efficient to perform a low accuracy linear minimization.
Constraint:
$0.0\le {\mathbf{eta}}<1.0$.
 5:
$\mathrm{xtol}$ – double scalar
Default:
$0.0$
The accuracy in
$x$ to which the solution is required.
If
${x}_{\mathrm{true}}$ is the true value of
$x$ at the minimum, then
${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that
$\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{xtol}}\times \left(1.0+\Vert {x}_{\mathrm{true}}\Vert \right)$, where
$\Vert y\Vert =\sqrt{{\displaystyle \sum _{j=1}^{n}}{y}_{j}^{2}}$. For example, if the elements of
${x}_{\mathrm{sol}}$ are not much larger than
$1.0$ in modulus, and if
xtol is set to
${10}^{-5}$ then
${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see
Accuracy.)
If the problem is scaled roughly as described in
Further Comments and
$\epsilon $ is the
machine precision, then
$\sqrt{\epsilon}$ is probably the smallest reasonable choice for
xtol. (This is because, normally, to machine accuracy,
$F\left(x+\sqrt{\epsilon}{e}_{j}\right)=F\left(x\right)$ where
${e}_{j}$ is any column of the identity matrix.)
If you set
xtol to
$0.0$ (or any positive value less than
$\epsilon $),
nag_opt_bounds_mod_deriv2_comp (e04lb) will use
$10.0\times \sqrt{\epsilon}$ instead of
xtol.
Constraint:
${\mathbf{xtol}}\ge 0.0$.
 6:
$\mathrm{stepmx}$ – double scalar
Default:
$100000.0$
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency a slight overestimate is preferable.)
nag_opt_bounds_mod_deriv2_comp (e04lb) will ensure that, for each iteration,
$\sum _{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\le {\left({\mathbf{stepmx}}\right)}^{2}$,
where
$k$ is the iteration number. Thus, if the problem has more than one solution,
nag_opt_bounds_mod_deriv2_comp (e04lb) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of
${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of
$F\left(x\right)$. However, an underestimate of
stepmx can lead to inefficiency.
Constraint:
${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.
Output Parameters
 1:
$\mathrm{bl}\left({\mathbf{n}}\right)$ – double array

The lower bounds actually used by nag_opt_bounds_mod_deriv2_comp (e04lb), e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bl}}\left(1\right)={\mathbf{bl}}\left(2\right)=\cdots ={\mathbf{bl}}\left(n\right)=0.0$.
 2:
$\mathrm{bu}\left({\mathbf{n}}\right)$ – double array

The upper bounds actually used by nag_opt_bounds_mod_deriv2_comp (e04lb), e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bu}}\left(1\right)={\mathbf{bu}}\left(2\right)=\cdots ={\mathbf{bu}}\left({\mathbf{n}}\right)={10}^{6}$.
 3:
$\mathrm{x}\left({\mathbf{n}}\right)$ – double array

The final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the estimated position of the minimum.
 4:
$\mathrm{hesl}\left({\mathbf{lh}}\right)$ – double array

During the determination of a direction
${p}_{z}$ (see
Description),
$H+E$ is decomposed into the product
$LD{L}^{\mathrm{T}}$, where
$L$ is a unit lower triangular matrix and
$D$ is a diagonal matrix. (The matrices
$H$,
$E$,
$L$ and
$D$ are all of dimension
${n}_{z}$, where
${n}_{z}$ is the number of variables free from their bounds.
$H$ consists of those rows and columns of the full estimated second derivative matrix which relate to free variables.
$E$ is chosen so that
$H+E$ is positive definite.)
hesl and
hesd are used to store the factors
$L$ and
$D$. The elements of the strict lower triangle of
$L$ are stored row by row in the first
${n}_{z}\left({n}_{z}-1\right)/2$ positions of
hesl. The diagonal elements of
$D$ are stored in the first
${n}_{z}$ positions of
hesd. In the last factorization before a normal exit, the matrix
$E$ will be zero, so that
hesl and
hesd will contain, on exit, the factors of the final estimated second derivative matrix
$H$. The elements of
hesd are useful for deciding whether to accept the results produced by
nag_opt_bounds_mod_deriv2_comp (e04lb) (see
Accuracy).
 5:
$\mathrm{hesd}\left({\mathbf{n}}\right)$ – double array

During the determination of a direction
${p}_{z}$ (see
Description),
$H+E$ is decomposed into the product
$LD{L}^{\mathrm{T}}$, where
$L$ is a unit lower triangular matrix and
$D$ is a diagonal matrix. (The matrices
$H$,
$E$,
$L$ and
$D$ are all of dimension
${n}_{z}$, where
${n}_{z}$ is the number of variables free from their bounds.
$H$ consists of those rows and columns of the full second derivative matrix which relate to free variables.
$E$ is chosen so that
$H+E$ is positive definite.)
hesl and
hesd are used to store the factors
$L$ and
$D$. The elements of the strict lower triangle of
$L$ are stored row by row in the first
${n}_{z}\left({n}_{z}-1\right)/2$ positions of
hesl. The diagonal elements of
$D$ are stored in the first
${n}_{z}$ positions of
hesd.
In the last factorization before a normal exit, the matrix
$E$ will be zero, so that
hesl and
hesd will contain, on exit, the factors of the final second derivative matrix
$H$. The elements of
hesd are useful for deciding whether to accept the result produced by
nag_opt_bounds_mod_deriv2_comp (e04lb) (see
Accuracy).
 6:
$\mathrm{istate}\left({\mathbf{n}}\right)$ – int64int32nag_int array

Information about which variables are currently on their bounds and which are free. If
${\mathbf{istate}}\left(j\right)$ is:
 – equal to $-1$, ${x}_{j}$ is fixed on its upper bound;
 – equal to $-2$, ${x}_{j}$ is fixed on its lower bound;
 – equal to $-3$, ${x}_{j}$ is effectively a constant (i.e., ${l}_{j}={u}_{j}$);
 – positive, ${\mathbf{istate}}\left(j\right)$ gives the position of ${x}_{j}$ in the sequence of free variables.
 7:
$\mathrm{f}$ – double scalar

The function value at the final point given in
x.
 8:
$\mathrm{g}\left({\mathbf{n}}\right)$ – double array

The first derivative vector corresponding to the final point given in
x. The components of
g corresponding to free variables should normally be close to zero.
 9:
$\mathrm{iw}\left(\mathit{liw}\right)$ – int64int32nag_int array
$\mathit{liw}=2$.
Communication array, used to store information between calls to nag_opt_bounds_mod_deriv2_comp (e04lb).
 10:
$\mathrm{w}\left(\mathit{lw}\right)$ – double array
$\mathit{lw}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2,8\right)$.
Communication array, used to store information between calls to nag_opt_bounds_mod_deriv2_comp (e04lb).
 11:
$\mathrm{ifail}$ – int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see
Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_opt_bounds_mod_deriv2_comp (e04lb) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
 W ${\mathbf{ifail}}<0$

A negative value of
ifail indicates an exit from
nag_opt_bounds_mod_deriv2_comp (e04lb) because you have set
iflag negative in
funct or
h. The value of
ifail will be the same as your setting of
iflag.
 ${\mathbf{ifail}}=1$

On entry,  ${\mathbf{n}}<1$, 
or  ${\mathbf{maxcal}}<1$, 
or  ${\mathbf{eta}}<0.0$, 
or  ${\mathbf{eta}}\ge 1.0$, 
or  ${\mathbf{xtol}}<0.0$, 
or  ${\mathbf{stepmx}}<{\mathbf{xtol}}$, 
or  ${\mathbf{ibound}}<0$, 
or  ${\mathbf{ibound}}>4$, 
or  ${\mathbf{bl}}\left(j\right)>{\mathbf{bu}}\left(j\right)$ for some $j$ if ${\mathbf{ibound}}=0$, 
or  ${\mathbf{bl}}\left(1\right)>{\mathbf{bu}}\left(1\right)$ if ${\mathbf{ibound}}=3$, 
or  ${\mathbf{lh}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$, 
or  $\mathit{liw}<2$, 
or  $\mathit{lw}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(8,7\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$. 
(Note that if you have set
xtol to
$0.0$,
nag_opt_bounds_mod_deriv2_comp (e04lb) uses the default value and continues without failing.) When this exit occurs no values will have been assigned to
f or to the elements of
hesl,
hesd or
g.
 ${\mathbf{ifail}}=2$

There have been
maxcal function evaluations. If steady reductions in
$F\left(x\right)$ were monitored up to the point where this exit occurred, then the exit probably occurred simply because
maxcal was set too small, so the calculations should be restarted from the final point held in
x. This exit may also indicate that
$F\left(x\right)$ has no minimum.
 W ${\mathbf{ifail}}=3$
The conditions for a minimum have not all been met, but a lower point could not be found.
Provided that, on exit, the first derivatives of
$F\left(x\right)$ with respect to the free variables are sufficiently small, and that the estimated condition number of the second derivative matrix is not too large, this error exit may simply mean that, although it has not been possible to satisfy the specified requirements, the algorithm has in fact found the minimum as far as the accuracy of the machine permits. Such a situation can arise, for instance, if
xtol has been set so small that rounding errors in the evaluation of
$F\left(x\right)$ or its derivatives make it impossible to satisfy the convergence conditions.
If the estimated condition number of the second derivative matrix at the final point is large, it could be that the final point is a minimum, but that the smallest eigenvalue of the Hessian matrix is so close to zero that it is not possible to recognize the point as a minimum.
 ${\mathbf{ifail}}=4$
Not used. (This is done to make the significance of
${\mathbf{ifail}}={\mathbf{5}}$ similar for
nag_opt_bounds_mod_deriv_comp (e04kd) and
nag_opt_bounds_mod_deriv2_comp (e04lb).)
 W ${\mathbf{ifail}}=5$

All the Lagrange multiplier estimates which are not indisputably positive lie relatively close to zero, but it is impossible either to continue minimizing on the current subspace or to find a feasible lower point by releasing and perturbing any of the fixed variables. You should investigate as for ${\mathbf{ifail}}={\mathbf{3}}$.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
The values
${\mathbf{ifail}}={\mathbf{2}}$,
${\mathbf{3}}$ or
${\mathbf{5}}$ may also be caused by mistakes in user-supplied functions
funct or
h, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
Accuracy
A successful exit (
${\mathbf{ifail}}={\mathbf{0}}$) is made from
nag_opt_bounds_mod_deriv2_comp (e04lb) when
${H}^{\left(k\right)}$ is positive definite and when (B1, B2 and B3) or B4 hold, where
(Quantities with superscript
$k$ are the values at the
$k$th iteration of the quantities mentioned in
Description.
$\epsilon $ is the
machine precision and
$\Vert .\Vert $ denotes the Euclidean norm.)
If
${\mathbf{ifail}}={\mathbf{0}}$, then the vector in
x on exit,
${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum,
${x}_{\mathrm{true}}$, to the accuracy specified by
xtol.
If
${\mathbf{ifail}}={\mathbf{3}}$ or
${\mathbf{5}}$,
${x}_{\mathrm{sol}}$ may still be a good estimate of
${x}_{\mathrm{true}}$, but the following checks should be made. Let the largest of the first
${n}_{z}$ elements of
hesd be
${\mathbf{hesd}}\left(b\right)$, let the smallest be
${\mathbf{hesd}}\left(s\right)$, and define
$k={\mathbf{hesd}}\left(b\right)/{\mathbf{hesd}}\left(s\right)$. The scalar
$k$ is usually a good estimate of the condition number of the projected Hessian matrix at
${x}_{\mathrm{sol}}$. If
(i) 
the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or fast linear rate, 
(ii) 
${\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert}^{2}<10.0\times \epsilon $, and 
(iii) 
$k<1.0/\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert $, 
then it is almost certain that
${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (ii) is true, then usually
$F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to
$F\left({x}_{\mathrm{true}}\right)$. The quantities needed for these checks are all available via
monit; in particular the value of
cond in the last call of
monit before exit gives
$k$. Further suggestions about confirmation of a computed solution are given in the
E04 Chapter Introduction.
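The checks above can be sketched in a few lines. This is illustrative only: nz (the number of free variables) and gznorm (the final gpjnrm reported by monit) are assumed to have been recorded by the caller, and check (i) on the convergence rate of $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ must still be made by inspecting the monitoring output.

```matlab
% Sketch of the acceptance checks for ifail = 3 or 5 using the returned hesd.
k_est  = max(hesd(1:nz)) / min(hesd(1:nz));  % estimate of the projected Hessian condition number
ok_ii  = gznorm^2 < 10.0*eps;                % check (ii)
ok_iii = k_est < 1.0/gznorm;                 % check (iii)
if ok_ii && ok_iii
  disp('x is almost certainly a close approximation to the position of a minimum');
end
```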
Further Comments
Timing
The number of iterations required depends on the number of variables, the behaviour of
$F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of
nag_opt_bounds_mod_deriv2_comp (e04lb) is
$\frac{{n}_{z}^{3}}{6}+\mathit{O}\left({n}_{z}^{2}\right)$. In addition, each iteration makes one call of
h and at least one call of
funct. So, unless
$F\left(x\right)$ and its derivatives can be evaluated very quickly, the run time will be dominated by the time spent in
funct and
h.
Scaling
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_mod_deriv2_comp (e04lb) will take less computer time.
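One simple way to apply such scaling is a diagonal change of variables $x=d\cdot y$, minimizing over $y$ instead of $x$. The sketch below wraps a user-supplied funct in this way; the wrapper name and the magnitudes in d are assumptions for illustration.

```matlab
% Diagonal rescaling sketch: d(j) holds a guessed typical magnitude of
% x(j), so the transformed variables y(j) are of order one at the solution.
function [iflag, fc, gc] = funct_scaled(iflag, n, yc)
  d = [100; 0.01; 1; 1];                     % guessed typical magnitudes
  [iflag, fc, gc] = funct(iflag, n, d.*yc);  % evaluate F at x = d.*y
  gc = d.*gc;                                % chain rule: dF/dy_j = d_j*dF/dx_j
end
```

The second derivatives supplied by h must be transformed correspondingly: element $\left(i,j\right)$ of the Hessian is multiplied by ${d}_{i}{d}_{j}$, and any bounds must be divided elementwise by d.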
Unconstrained Minimization
If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:
(a) 
${n}_{z}$ will always be $n$, 
(b) 
hesl and hesd will be factors of the full second derivative matrix with elements stored in the natural order, 
(c) 
the elements of $g$ should all be close to zero at the final point, 
(d) 
the values of the ${\mathbf{istate}}\left(j\right)$ given by monit and on exit from nag_opt_bounds_mod_deriv2_comp (e04lb) are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the ${x}_{j}$ has reached ${10}^{6}$ for some reason), 
(e) 
monit's argument gpjnrm simply gives the norm of the first derivative vector. 
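For a genuinely unconstrained problem the bounds arrays need not be set up by hand. A minimal sketch, assuming the NAG convention that ${\mathbf{ibound}}=1$ selects the no-bounds case (as in the bound-constrained e04 routines), and reusing the funct, hess and monit functions from the example below:

```matlab
% Unconstrained call sketch: with ibound = 1 the routine imposes no
% bounds, so bl and bu are passed only as workspace placeholders.
ibound = int64(1);
bl = zeros(4,1);  bu = zeros(4,1);        % contents ignored on entry
x  = [3; -1; 0; 1];                       % starting point
lh = int64(6);                            % n*(n-1)/2 for n = 4
iw = zeros(2,1,'int64');  w = zeros(34,1);
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = ...
  e04lb(@funct, @hess, @monit, ibound, bl, bu, x, lh, iw, w);
```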
Example
A program to minimize
$F\left(x\right)={\left({x}_{1}+10{x}_{2}\right)}^{2}+5{\left({x}_{3}-{x}_{4}\right)}^{2}+{\left({x}_{2}-2{x}_{3}\right)}^{4}+10{\left({x}_{1}-{x}_{4}\right)}^{4}$
subject to the bounds
$1\le {x}_{1}\le 3$, $-2\le {x}_{2}\le 0$, $1\le {x}_{4}\le 3$ (${x}_{3}$ unbounded),
starting from the initial guess
$\left(3,-1,0,1\right)$. Before calling
nag_opt_bounds_mod_deriv2_comp (e04lb), the program calls
nag_opt_check_deriv (e04hc) and
nag_opt_check_deriv2 (e04hd) to check the derivatives calculated by user-supplied functions
funct and
h.
function e04lb_example
fprintf('e04lb example results\n\n');
global monitoring;
monitoring = false;
bl = [ 1; -2; -1000000; 1];
bu = [ 3; 0; 1000000; 3];
x = [ 3; -1; 0; 1];
ibound = int64(0);
lh = int64(6);
iw(1:2) = int64(0);
w = zeros(34,1);
wstat = warning();
warning('OFF');
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = ...
e04lb(@funct, @hess, @monit, ibound, bl, bu, x, lh, iw, w);
warning(wstat);
if (ifail == 0 || ifail == 5 || ifail == 3)
fprintf('\nMinimum found at x: ');
fprintf(' %9.4f',x);
fprintf('\nGradients at x, g: ');
fprintf(' %9.4f',g);
fprintf('\nMinimum value : %9.4f\n\n',f);
else
fprintf('\n Error: e04lb returns ifail = %d\n',ifail);
end
function [iflag, fc, gc] = funct(iflag, n, xc)
gc = zeros(n, 1);
fc = 0;
x1 = xc(1) + 10*xc(2);
x2 = xc(3) - xc(4);
x3 = xc(2) - 2*xc(3);
x4 = xc(1) - xc(4);
fc = x1^2 + 5*x2^2 + x3^4 + 10*x4^4;
gc(1) = 2*x1 + 40*x4^3;
gc(2) = 20*x1 + 4*x3^3;
gc(3) = 10*x2 - 8*x3^3;
gc(4) = -10*x2 - 40*x4^3;
function [iflag, fhesl, fhesd] = hess(iflag, n, xc, lh, fhesd)
fhesl = zeros(lh, 1);
x3 = xc(2) - 2*xc(3);
x4 = xc(1) - xc(4);
fhesd(1) = 2 + 120*x4^2;
fhesd(2) = 200 + 12*x3^2;
fhesd(3) = 10 + 48*x3^2;
fhesd(4) = 10 + 120*x4^2;
fhesl(1) = 20;
fhesl(2) = 0;
fhesl(3) = -24*x3^2;
fhesl(4) = -120*x4^2;
fhesl(5) = 0;
fhesl(6) = -10;
function [] = monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf)
global monitoring;
if (monitoring)
fprintf('\n Itn Fn evals Fn value Norm of proj gradient\n');
fprintf(' %3d %5d %15.4f %13.4f\n', niter, nf, fc, gpjnrm);
fprintf('\n j x(j) g(j) Status\n');
for j = 1:double(n)
isj = istate(j);
if (isj > 0)
fprintf('%2d %16.4f%15.4f %s\n', j, xc(j), gc(j), ' Free');
elseif (isj == -1)
fprintf('%2d %16.4f%15.4f %s\n', j, xc(j), gc(j), ' Upper Bound');
elseif (isj == -2)
fprintf('%2d %16.4f%15.4f %s\n', j, xc(j), gc(j), ' Lower Bound');
elseif (isj == -3)
fprintf('%2d %16.4f%15.4f %s\n', j, xc(j), gc(j), ' Constant');
end
end
if (cond ~= 0.0)
if (cond > 1.0e6)
fprintf('\nEst. condition number of projected Hessian > 10^6\n');
else
fprintf('\nEst. condition number of projected Hessian = %10.2f\n', cond);
end
if ( not(posdef) )
fprintf('\nProjected Hessian matrix is not positive definite\n');
end
end
end
e04lb example results
Minimum found at x: 1.0000 -0.0852 0.4093 1.0000
Gradients at x, g: 0.2953 0.0000 0.0000 5.9070
Minimum value : 2.4338
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015