# NAG C Library Function Document

## 1 Purpose

nag_opt_bounds_deriv (e04kbc) is a comprehensive quasi-Newton algorithm for finding:
- an unconstrained minimum of a function of several variables;
- a minimum of a function of several variables subject to fixed upper and/or lower bounds on the variables.
First derivatives are required. nag_opt_bounds_deriv (e04kbc) is intended for objective functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

## 2 Specification

 #include <nag.h>
 #include <nage04.h>

void  nag_opt_bounds_deriv (Integer n,
 void (*objfun)(Integer n, const double x[], double *objf, double g[], Nag_Comm *comm),
 Nag_BoundType bound, double bl[], double bu[], double x[], double *objf, double g[], Nag_E04_Opt *options, Nag_Comm *comm, NagError *fail)

## 3 Description

nag_opt_bounds_deriv (e04kbc) is applicable to problems of the form:
 $\mathrm{Minimize}\phantom{\rule{0.25em}{0ex}}F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)\quad \text{subject to}\quad {l}_{j}\le {x}_{j}\le {u}_{j},\quad j=1,2,\dots ,n.$
Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the ${x}_{j}$), problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. It is possible to specify that a particular ${x}_{j}$ should be held constant. You must supply a starting point and a function objfun to calculate the value of $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point $x$.
A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from both their bounds. The vector ${g}_{z}$, whose elements are the derivatives of $F\left(x\right)$ with respect to the free variables, is known. A unit lower triangular matrix $L$ and a diagonal matrix $D$ (both of dimension ${n}_{z}$), such that ${LDL}^{\mathrm{T}}$ is a positive definite approximation to the matrix of second derivatives with respect to the free variables, are also stored. The equations
 $LD{L}^{\mathrm{T}}{p}_{z}=-{g}_{z}$
are solved to give a search direction ${p}_{z}$, which is expanded to an $n$-vector $p$ by the insertion of appropriate zero elements. Then $\alpha$ is found such that $F\left(x+\alpha p\right)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha$; $x$ is replaced by $x+\alpha p$, and the matrices $L$ and $D$ are updated so as to be consistent with the change produced in the gradient by the step $\alpha p$. If any variable actually reaches a bound during the search along $p$, it is fixed and ${n}_{z}$ is reduced for the next iteration.
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. In addition, nag_opt_bounds_deriv (e04kbc) gives you the option of specifying that a local search should be performed when a point is found which is thought to be a constrained minimum.
If you specify that the problem is unconstrained, nag_opt_bounds_deriv (e04kbc) sets the ${l}_{j}$ to $-{10}^{10}$ and the ${u}_{j}$ to ${10}^{10}$. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and nag_opt_bounds_deriv (e04kbc) will act as an unconstrained minimization algorithm.
## 4 References

Gill P E and Murray W (1972) Quasi-Newton methods for unconstrained optimization J. Inst. Math. Appl. 9 91–108
Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods NPL Report NAC 37 National Physical Laboratory
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
Gill P E, Murray W and Pitfield R A (1972) The implementation of two revised quasi-Newton algorithms for unconstrained optimization NPL Report NAC 11 National Physical Laboratory

## 5 Arguments

1:    $\mathbf{n}$IntegerInput
On entry: the number $n$ of independent variables.
Constraint: ${\mathbf{n}}\ge 1$.
2:    $\mathbf{objfun}$function, supplied by the userExternal Function
objfun must evaluate the function $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point $x$. (However, if you do not wish to calculate $F\left(x\right)$ or its first derivatives at a particular $x$, there is the option of setting an argument to cause nag_opt_bounds_deriv (e04kbc) to terminate immediately.)
The specification of objfun is:
 void objfun (Integer n, const double x[], double *objf, double g[], Nag_Comm *comm)
1:    $\mathbf{n}$IntegerInput
On entry: the number $n$ of variables.
2:    $\mathbf{x}\left[{\mathbf{n}}\right]$const doubleInput
On entry: the point $x$ at which the value of $F$, or $F$ and $\frac{\partial F}{\partial {x}_{j}}$, are required.
3:    $\mathbf{objf}$double *Output
On exit: objfun must set objf to the value of the objective function $F$ at the current point $x$. If it is not possible to evaluate $F$, then objfun should assign a negative value to $\mathbf{comm}\mathbf{\to }\mathbf{flag}$; nag_opt_bounds_deriv (e04kbc) will then terminate.
4:    $\mathbf{g}\left[{\mathbf{n}}\right]$doubleOutput
On exit: if $\mathbf{comm}\mathbf{\to }\mathbf{flag}=2$ on entry, then objfun must set ${\mathbf{g}}\left[j-1\right]$ to the value of the first derivative $\frac{\partial F}{\partial {x}_{j}}$ at the current point, $x$ for $j=1,2,\dots ,n$. If it is not possible to evaluate the first derivatives then objfun should assign a negative value to $\mathbf{comm}\mathbf{\to }\mathbf{flag}$; nag_opt_bounds_deriv (e04kbc) will then terminate.
(If $\mathbf{comm}\mathbf{\to }\mathbf{flag}=0$ on entry, objfun must not change the elements of g.)
5:    $\mathbf{comm}$Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
flagIntegerInput/Output
On entry: $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ will be set to 0 or $2$. The value 0 indicates that only $F$ itself needs to be evaluated. The value 2 indicates that both $F$ and its first derivatives must be calculated.
On exit: if objfun resets $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to some negative number then nag_opt_bounds_deriv (e04kbc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_bounds_deriv (e04kbc), ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
firstNag_BooleanInput
On entry: will be set to Nag_TRUE on the first call to objfun and Nag_FALSE for all subsequent calls.
nfIntegerInput
On entry: the number of calculations of the objective function; this value will be equal to the number of calls made to objfun, including the current one.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.
Before calling nag_opt_bounds_deriv (e04kbc) these pointers may be allocated memory and initialized with various quantities for use by objfun when called from nag_opt_bounds_deriv (e04kbc).
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by nag_opt_bounds_deriv (e04kbc). If your code inadvertently does return any NaNs or infinities, nag_opt_bounds_deriv (e04kbc) is likely to produce unexpected results.
Note: objfun should be tested separately before being used in conjunction with nag_opt_bounds_deriv (e04kbc). The array x must not be changed by objfun.
3:    $\mathbf{bound}$Nag_BoundTypeInput
On entry: indicates whether the problem is unconstrained or bounded and, if it is bounded, whether the facility for dealing with bounds of special forms is to be used. bound should be set to one of the following values:
${\mathbf{bound}}=\mathrm{Nag_Bounds}$
If the variables are bounded and you will be supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
${\mathbf{bound}}=\mathrm{Nag_NoBounds}$
If the problem is unconstrained.
${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$
If the variables are bounded, but all the bounds are of the form $0\le {x}_{j}$.
${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$
If all the variables are bounded, and ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
Constraint: ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, $\mathrm{Nag_NoBounds}$, $\mathrm{Nag_BoundsZero}$ or $\mathrm{Nag_BoundsEqual}$.
4:    $\mathbf{bl}\left[{\mathbf{n}}\right]$doubleInput/Output
On entry: the lower bounds ${l}_{j}$.
If ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, you must set ${\mathbf{bl}}\left[\mathit{j}-1\right]$ to ${l}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If a lower bound is not required for any ${x}_{\mathit{j}}$, the corresponding ${\mathbf{bl}}\left[j-1\right]$ should be set to a large negative number, e.g., $-{10}^{10}$.)
If ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, you must set ${\mathbf{bl}}\left[0\right]$ to ${l}_{1}$; nag_opt_bounds_deriv (e04kbc) will then set the remaining elements of bl equal to ${\mathbf{bl}}\left[0\right]$.
If ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$ or $\mathrm{Nag_BoundsZero}$, bl will be initialized by nag_opt_bounds_deriv (e04kbc).
On exit: the lower bounds actually used by nag_opt_bounds_deriv (e04kbc), e.g., if ${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$, ${\mathbf{bl}}\left[0\right]={\mathbf{bl}}\left[1\right]=\cdots ={\mathbf{bl}}\left[n-1\right]=0.0$.
5:    $\mathbf{bu}\left[{\mathbf{n}}\right]$doubleInput/Output
On entry: the upper bounds ${u}_{j}$.
If ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, you must set ${\mathbf{bu}}\left[\mathit{j}-1\right]$ to ${u}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If an upper bound is not required for any ${x}_{\mathit{j}}$, the corresponding ${\mathbf{bu}}\left[j-1\right]$ should be set to a large positive number, e.g., ${10}^{10}$.)
If ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, you must set ${\mathbf{bu}}\left[0\right]$ to ${u}_{1}$; nag_opt_bounds_deriv (e04kbc) will then set the remaining elements of bu equal to ${\mathbf{bu}}\left[0\right]$.
If ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$ or $\mathrm{Nag_BoundsZero}$, bu will be initialized by nag_opt_bounds_deriv (e04kbc).
On exit: the upper bounds actually used by nag_opt_bounds_deriv (e04kbc), e.g., if ${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$, ${\mathbf{bu}}\left[0\right]={\mathbf{bu}}\left[1\right]=\cdots ={\mathbf{bu}}\left[n-1\right]={10}^{10}$.
6:    $\mathbf{x}\left[{\mathbf{n}}\right]$doubleInput/Output
On entry: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{*}$. Thus, if ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$ on exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.
7:    $\mathbf{objf}$double *Input/Output
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ or $\mathrm{Nag_Init_H_S}$, you need not initialize objf.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_F_G_H}$ or $\mathrm{Nag_Init_All}$, objf must be set on entry to the value of $F\left(x\right)$ at the initial point supplied in x.
On exit: the function value at the final point given in x.
8:    $\mathbf{g}\left[{\mathbf{n}}\right]$doubleInput/Output
On entry:
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_F_G_H}$ or $\mathrm{Nag_Init_All}$
g must be set on entry to the first derivative vector at the initial $x$.
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ or $\mathrm{Nag_Init_H_S}$
g need not be set.
On exit: the first derivative vector corresponding to the final point in x. The elements of g corresponding to free variables should normally be close to zero.
9:    $\mathbf{options}$Nag_E04_Opt *Input/Output
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional parameters for nag_opt_bounds_deriv (e04kbc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given below in Section 11. Some of the results returned in options can be used by nag_opt_bounds_deriv (e04kbc) to perform a ‘warm start’ if it is re-entered (see the member ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ in Section 11.2).
If any of these optional parameters are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_bounds_deriv (e04kbc). However, if the optional parameters are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
10:  $\mathbf{comm}$Nag_Comm *Input/Output
Note: comm is a NAG defined type (see Section 3.3.1.1 in How to Use the NAG Library and its Documentation).
On entry/exit: structure containing pointers for communication with user-supplied functions; see the above description of objfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_bounds_deriv (e04kbc); comm will then be declared internally for use in calls to user-supplied functions.
11:  $\mathbf{fail}$NagError *Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).

## 6 Error Indicators and Warnings

When one of NE_USER_STOP, NE_INT_ARG_LT, NE_BOUND, NE_DERIV_ERRORS, NE_OPT_NOT_INIT, NE_BAD_PARAM, NE_2_REAL_ARG_LT, NE_INVALID_INT_RANGE_1, NE_INVALID_REAL_RANGE_EF, NE_INVALID_REAL_RANGE_FF, NE_INIT_MEM, NE_NO_MEM, NE_HESD or NE_ALLOC_FAIL occurs, no values will have been assigned by nag_opt_bounds_deriv (e04kbc) to objf or to the elements of g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, or ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$.
An exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_TOO_MANY_ITER}}$, NW_COND_MIN or NW_LOCAL_SEARCH may also be caused by mistakes in objfun, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
NE_2_REAL_ARG_LT
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}=〈\mathit{\text{value}}〉$ while ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument bound had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ had an illegal value.
NE_BOUND
The lower bound for variable $〈\mathit{\text{value}}〉$ (array element ${\mathbf{bl}}\left[〈\mathit{\text{value}}〉\right]$) is greater than the upper bound.
NE_CHOLESKY_OVERFLOW
An overflow would have occurred during the updating of the Cholesky factors if the calculations had been allowed to continue. Restart from the current point with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
NE_HESD
One or more of the initial values supplied in ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ is negative or too small, or the ratio of the largest element of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ to the smallest is too large.
NE_INIT_MEM
Option ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=〈\mathit{string}〉$ but the pointer $〈\mathit{string}〉$ in the option structure has not been allocated memory.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ is not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ not valid. Correct range is $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
NE_INVALID_REAL_RANGE_FF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
NE_NO_MEM
Option ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=〈\mathit{string}〉$ but at least one of the pointers $〈\mathit{string}〉$ in the option structure has not been allocated memory.
NE_NOT_APPEND_FILE
Cannot open file $〈\mathit{string}〉$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $〈\mathit{string}〉$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_USER_STOP
User requested termination, user flag value $\text{}=〈\mathit{\text{value}}〉$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to a negative value in objfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $〈\mathit{string}〉$.
NW_COND_MIN
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
Provided that, on exit, the first derivatives of $F\left(x\right)$ with respect to the free variables are sufficiently small, and that the estimated condition number of the second derivative matrix is not too large, this error exit may simply mean that, although it has not been possible to satisfy the specified requirements, the algorithm has in fact found the minimum as far as the accuracy of the machine permits. This could be because ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ has been set so small that rounding error in objfun makes attainment of the convergence conditions impossible.
If the estimated condition number of the approximate Hessian matrix at the final point is large, it could be that the final point is a minimum but that the smallest eigenvalue of the second derivative matrix is so close to zero that it is not possible to recognize the point as a minimum.
NW_LOCAL_SEARCH
The local search has failed to find a feasible point which gives a significant change of function value.
If the problem is a genuinely unconstrained one, this type of exit indicates that the problem is extremely ill conditioned or that the function has no minimum. If the problem has bounds which may be close to the minimum, it may just indicate that steps in the subspace of free variables happened to meet a bound before they changed the function value.
NW_TOO_MANY_ITER
The maximum number of iterations, $〈\mathit{\text{value}}〉$, has been performed.
If steady reductions in $F\left(x\right)$ were monitored up to the point where this exit occurred, then the exit probably occurred simply because ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

## 7 Accuracy

A successful exit $\left({\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}\right)$ is made from nag_opt_bounds_deriv (e04kbc) when (B1, B2 and B3) or B4 hold, and the local search (if used) confirms a minimum, where
• $\mathrm{B}1\equiv {\alpha }^{\left(k\right)}×‖{p}^{\left(k\right)}‖<\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\sqrt{\epsilon }\right)×\left(1.0+‖{x}^{\left(k\right)}‖\right)$
• $\mathrm{B}2\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<\left({{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}}^{2}+\epsilon \right)×\left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
• $\mathrm{B}3\equiv ‖{g}_{z}^{\left(k\right)}‖<\left({\epsilon }^{1/3}+{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\right)×\left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
• $\mathrm{B}4\equiv ‖{g}_{z}^{\left(k\right)}‖<0.01×\sqrt{\epsilon }\text{.}$
(Quantities with superscript $k$ are the values at the $k$th iteration of the quantities mentioned in Section 3; $\epsilon$ is the machine precision, $‖\cdot ‖$ denotes the Euclidean norm and ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is described in Section 11.)
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum, ${x}_{\mathrm{true}}$, to the accuracy specified by ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_COND_MIN}}$ or NW_LOCAL_SEARCH, ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but the following checks should be made. Let the largest of the first ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ be ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[b\right]$, let the smallest be ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[s\right]$, and define $k={\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[b\right]/{\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[s\right]$. The scalar $k$ is usually a good estimate of the condition number of the projected Hessian matrix at ${x}_{\mathrm{sol}}$. If
- (a) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate,
- (b) ${‖{g}_{z}\left({x}_{\mathrm{sol}}\right)‖}^{2}<10.0×\epsilon$, and
- (c) $k<1.0/‖{g}_{z}\left({x}_{\mathrm{sol}}\right)‖$,
then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The quantities needed for these checks are all available in the results printout from nag_opt_bounds_deriv (e04kbc); in particular the final value of Cond H gives $k$.
Further suggestions about confirmation of a computed solution are given in the e04 Chapter Introduction.

## 8 Parallelism and Performance

nag_opt_bounds_deriv (e04kbc) is not threaded in any implementation.

## 9 Further Comments

### 9.1 Timing

The number of iterations required depends on the number of variables, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of nag_opt_bounds_deriv (e04kbc) is roughly proportional to ${n}_{z}^{2}$. In addition, each iteration makes at least one call of objfun with $\mathbf{comm}\mathbf{\to }\mathbf{flag}=2$ if ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_Deriv}$ is used or one call of objfun with $\mathbf{comm}\mathbf{\to }\mathbf{flag}=0$ if ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_NoDeriv}$ is chosen. So, unless $F\left(x\right)$ can be evaluated very quickly, the run time will be dominated by the time spent in objfun.

### 9.2 Scaling

Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_deriv (e04kbc) will take less computer time.

### 9.3 Unconstrained Minimization

If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:
- (a) ${n}_{z}$ will always be $n$,
- (b) if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ on entry, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[\mathit{j}-1\right]$ has simply to be set to $\mathit{j}$, for $\mathit{j}=1,2,\dots ,n$,
- (c) ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ will be factors of the full approximate second derivative matrix with elements stored in the natural order,
- (d) the elements of g should all be close to zero at the final point,
- (e) the Status values given in the printout from nag_opt_bounds_deriv (e04kbc) and in ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ on exit are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the ${x}_{j}$ has reached ${10}^{10}$ for some reason),
- (f) Norm g simply gives the norm of the first derivative vector.

## 10 Example

This example minimizes the function
 $F={\left({x}_{1}+10{x}_{2}\right)}^{2}+5{\left({x}_{3}-{x}_{4}\right)}^{2}+{\left({x}_{2}-2{x}_{3}\right)}^{4}+10{\left({x}_{1}-{x}_{4}\right)}^{4}$
subject to the bounds
 $-1\le {x}_{1}\le 3,\quad -2\le {x}_{2}\le 0,\quad -1\le {x}_{4}\le 3$
starting from the initial guess ${\left(3.0,-0.9,0.13,1.1\right)}^{\mathrm{T}}$.
The options structure is declared and initialized by nag_opt_init (e04xxc). Four option values are read from a data file by use of nag_opt_read (e04xyc). The memory freeing function nag_opt_free (e04xzc) is used to free the memory assigned to the pointers in the option structure. You must not use the standard C function free() for this purpose.

### 10.1 Program Text

Program Text (e04kbce.c)

### 10.2 Program Data

Program Options (e04kbce.opt)

### 10.3 Program Results

Program Results (e04kbce.r)

## 11 Optional Parameters

A number of optional input and output arguments to nag_opt_bounds_deriv (e04kbc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional parameters you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_bounds_deriv (e04kbc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
After return from nag_opt_bounds_deriv (e04kbc), the options structure may only be re-used for future calls of nag_opt_bounds_deriv (e04kbc) if the dimensions of the new problem are the same. Otherwise, the structure must be cleared by a call of nag_opt_free (e04xzc) and re-initialized by a call of nag_opt_init (e04xxc) before future calls. Failure to do this will result in unpredictable behaviour.
Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, then this must be done directly in the calling program; they cannot be assigned using nag_opt_read (e04xyc).

### 11.1 Optional Parameter Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_bounds_deriv (e04kbc) together with their default values where relevant. The number $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)).
| Member | Default / size |
| --- | --- |
| Boolean list | Nag_TRUE |
| Nag_PrintType print_level | Nag_Soln_Iter |
| char outfile[] | stdout |
| void (*print_fun)() | NULL |
| Boolean deriv_check | Nag_TRUE |
| Nag_InitType init_state | Nag_Init_None |
| Integer max_iter | $50{\mathbf{n}}$ |
| double optim_tol | $10\sqrt{\epsilon }$ |
| Nag_LinFun minlin | Nag_Lin_Deriv |
| double linesearch_tol | $0.9$ ($0.0$ if ${\mathbf{n}}=1$) |
| double step_max | $100000.0$ |
| double f_est |  |
| Boolean local_search | Nag_TRUE |
| Integer *state | size ${\mathbf{n}}$ |
| double *hesl | size $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$ |
| double *hesd | size ${\mathbf{n}}$ |
| Integer iter |  |
| Integer nf |  |

### 11.2 Description of the Optional Parameters

 list – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ the argument settings in the call to nag_opt_bounds_deriv (e04kbc) will be printed.
 print_level – Nag_PrintType Default $\text{}=\mathrm{Nag_Soln_Iter}$
On entry: the level of results printout produced by nag_opt_bounds_deriv (e04kbc). The following values are available:
| Value | Printout |
| --- | --- |
| $\mathrm{Nag_NoPrint}$ | No output. |
| $\mathrm{Nag_Soln}$ | The final solution. |
| $\mathrm{Nag_Iter}$ | One line of output for each iteration. |
| $\mathrm{Nag_Soln_Iter}$ | The final solution and one line of output for each iteration. |
| $\mathrm{Nag_Soln_Iter_Full}$ | The final solution and detailed printout at each iteration. |
Details of each level of results printout are described in Section 11.3.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$, $\mathrm{Nag_Soln}$, $\mathrm{Nag_Iter}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$.
 outfile – const char Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{'\0'}$ then the stdout stream is used.
 print_fun – pointer to function Default $\text{}=\text{}$ NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
See Section 11.3.1 below for further details.
 deriv_check – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}\ne \mathrm{Nag_Init_None}$ then the default of ${\mathbf{options}}\mathbf{.}{\mathbf{deriv_check}}$ is changed to Nag_FALSE.
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{deriv_check}}=\mathrm{Nag_TRUE}$ a check of the derivatives defined by objfun will be made at the starting point x. The derivative check is carried out by a call to nag_opt_check_deriv (e04hcc). If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ is set to a value other than its default value (${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$) then the default of ${\mathbf{options}}\mathbf{.}{\mathbf{deriv_check}}$ will be Nag_FALSE. A starting point of $x=0$ or $x=1$ should be avoided if this test is to be meaningful; if either of these starting points is necessary then nag_opt_check_deriv (e04hcc) should be used to check objfun at a different point prior to calling nag_opt_bounds_deriv (e04kbc).
 init_state – Nag_InitType Default $\text{}=\mathrm{Nag_Init_None}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ specifies which of the arguments objf, g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ are actually being initialized. Such information will generally reduce the time taken by nag_opt_bounds_deriv (e04kbc).
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$
No values are assumed to have been set in any of objf, g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$. (nag_opt_bounds_deriv (e04kbc) will use the unit matrix as the initial estimate of the Hessian matrix.)
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_F_G_H}$
The arguments objf and g must contain the value of $F\left(x\right)$ and its first derivatives at the starting point. The elements ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[j-1\right]$ must have been set to estimates of the derivatives $\frac{{\partial }^{2}F}{\partial {x}_{j}^{2}}$ at the starting point. No values are assumed to have been set in ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$.
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$
The arguments objf and g must contain the value of $F\left(x\right)$ and its first derivatives at the starting point. All $n$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must have been set to indicate which variables are on their bounds and which are free. ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must contain the Cholesky factors of a positive definite approximation to the ${n}_{z}$ by ${n}_{z}$ Hessian matrix for the subspace of free variables. (This option is useful for restarting the minimization process if ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ is reached.)
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_H_S}$
No values are assumed to have been set in objf or g, but ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must have been set as for ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$. (This option is useful for starting off a minimization run using second derivative information from a previous, similar, run.)
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$, $\mathrm{Nag_Init_F_G_H}$, $\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$.
 max_iter – Integer Default $\text{}=50{\mathbf{n}}$
On entry: the limit on the number of iterations allowed before termination.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
 optim_tol – double Default $\text{}=10\sqrt{\epsilon }$
On entry: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position prior to a normal exit, is such that
 $‖{x}_{\mathrm{sol}}-{x}_{\mathrm{true}}‖<{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\times \left(1.0+‖{x}_{\mathrm{true}}‖\right),$
where $‖y‖={\left({\sum }_{j=1}^{n}{y}_{j}^{2}\right)}^{1/2}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus and if ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is set to ${10}^{-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about 5 decimal places. (For further details see Section 9.) If the problem is scaled roughly as described in Section 9 and $\epsilon$ is the machine precision, then $\sqrt{\epsilon }$ is probably the smallest reasonable choice for ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$. (This is because, normally, to machine accuracy, $F\left(x+\sqrt{\epsilon }{e}_{j}\right)=F\left(x\right)$ where ${e}_{j}$ is any column of the identity matrix.)
Constraint: $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
 minlin – Nag_LinFun Default $\text{}=\mathrm{Nag_Lin_Deriv}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ specifies whether the linear minimizations (i.e., minimizations of $F\left(x+\alpha p\right)$ with respect to $\alpha$) are to be performed by a function which just requires the evaluation of $F\left(x\right)$, $\mathrm{Nag_Lin_NoDeriv}$, or by a function which also requires the first derivatives of $F\left(x\right)$, $\mathrm{Nag_Lin_Deriv}$.
It will often be possible to evaluate the first derivatives of $F$ in about the same amount of computer time that is required for the evaluation of $F$ itself – if this is so then nag_opt_bounds_deriv (e04kbc) should be called with ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ set to $\mathrm{Nag_Lin_Deriv}$. However, if the evaluation of the derivatives takes more than about 4 times as long as the evaluation of $F$, then a setting of $\mathrm{Nag_Lin_NoDeriv}$ will usually be preferable. If in doubt, use the default setting $\mathrm{Nag_Lin_Deriv}$ as it is slightly more robust.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_Deriv}$ or $\mathrm{Nag_Lin_NoDeriv}$.
 linesearch_tol – double Default $\text{}=0.9$ if ${\mathbf{n}}>1$, and $0.0$ otherwise
If ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_NoDeriv}$ then the default value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ will be changed from $0.9$ to $0.5$ if ${\mathbf{n}}>1$.
On entry: every iteration of nag_opt_bounds_deriv (e04kbc) involves a linear minimization (i.e., minimization of $F\left(x+\alpha p\right)$ with respect to $\alpha$). ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ specifies how accurately these linear minimizations are to be performed. The minimum with respect to $\alpha$ will be located more accurately for small values of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ (say 0.01) than for large values (say 0.9).
Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_bounds_deriv (e04kbc), they will increase the number of function evaluations required for each iteration. On balance, it is usually more efficient to perform a low accuracy linear minimization.
A smaller value such as $0.01$ may be worthwhile:
 (a) if objfun takes so little computer time that it is worth using extra calls of objfun to reduce the number of iterations and associated matrix calculations;
 (b) if $F\left(x\right)$ is a penalty or barrier function arising from a constrained minimization problem (since such problems are very difficult to solve);
 (c) if ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_NoDeriv}$ and the calculation of first derivatives takes so much computer time (relative to the time taken to evaluate the function) that it is worth using extra function evaluations to reduce the number of derivative evaluations.
If ${\mathbf{n}}=1$, the default for ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ is $0.0$. (If the problem is effectively one-dimensional, ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ should be set to $0.0$ even though ${\mathbf{n}}>1$; i.e., if for all except one of the variables the lower and upper bounds are equal.)
Constraint: $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
 step_max – double Default $\text{}=100000.0$
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied. (For maximum efficiency a slight overestimate is preferable.) nag_opt_bounds_deriv (e04kbc) will ensure that, for each iteration,
 ${\left(\sum _{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\right)}^{1/2}\le {\mathbf{options}}\mathbf{.}{\mathbf{step_max}},$
where $k$ is the iteration number. Thus, if the problem has more than one solution, nag_opt_bounds_deriv (e04kbc) is most likely to find the one nearest the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of $F\left(x\right)$. However an underestimate of ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}$ can lead to inefficiency.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
 f_est – double
On entry: an estimate of the function value at the minimum. This estimate is just used for calculating suitable step lengths for starting linear minimizations off, so the choice is not too critical. However, it is better for ${\mathbf{options}}\mathbf{.}{\mathbf{f_est}}$ to be set to an underestimate rather than to an overestimate. If no value is supplied then an initial step length of $1.0$, subject to the variable bounds, will be used.
 local_search – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$ must specify whether or not you wish a ‘local search’ to be performed when a point is found which is thought to be a constrained minimum.
If ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}=\mathrm{Nag_TRUE}$ and either the quasi-Newton direction of search fails to produce a lower function value or the convergence criteria are satisfied, then a local search will be performed. This may move the search away from a saddle point or confirm that the final point is a minimum.
If ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}=\mathrm{Nag_FALSE}$ there will be no local search when a point is found which is thought to be a minimum.
The amount of work involved in a local search is comparable to twice that required in a normal iteration to minimize $F\left(x+\alpha p\right)$ with respect to $\alpha$. For most problems this will be small (relative to the total time required for the minimization). ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$ could be set Nag_FALSE if:
 – it is known from the physical properties of a problem that a stationary point will be the required minimum; – a point which is not a minimum could be easily recognized, for example if the value of $F\left(x\right)$ at the minimum is known.
 state – Integer * Default memory $\text{}={\mathbf{n}}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ need not be set if the default option of ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ is used as n values of memory will be automatically allocated by nag_opt_bounds_deriv (e04kbc).
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ has been chosen, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must point to a minimum of n elements of memory. This memory will already be available if the calling program has used the options structure in a previous call to nag_opt_bounds_deriv (e04kbc) with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ and the same value of n. If a previous call has not been made you must allocate sufficient memory.
When ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ then ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must specify information about which variables are currently on their bounds and which are free. If ${x}_{j}$ is:
 (a) fixed on its upper bound, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-1$;
 (b) fixed on its lower bound, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-2$;
 (c) effectively a constant (i.e., ${l}_{j}={u}_{j}$), ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-3$;
 (d) free, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ gives its position in the sequence of free variables.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ or $\mathrm{Nag_Init_F_G_H}$, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ will be initialized by nag_opt_bounds_deriv (e04kbc).
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must be initialized before nag_opt_bounds_deriv (e04kbc) is called.
On exit: ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ gives information as above about the final point given in x.
 hesl – double * Default memory $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$
 hesd – double * Default memory $\text{}={\mathbf{n}}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ need not be set if the default of ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ is used as sufficient memory will be automatically allocated by nag_opt_bounds_deriv (e04kbc).
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_H_S}$ has been set then ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ must point to a minimum of $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$ elements of memory.
${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must point to at least n elements of memory if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_F_G_H}$, $\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ has been chosen.
The appropriate amount of memory will already be available for ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ if the calling program has used the options structure in a previous call to nag_opt_bounds_deriv (e04kbc) with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ and the same value of n. If a previous call has not been made, you must allocate sufficient memory.
${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ are used to store the factors $L$ and $D$ of the current approximation to the matrix of second derivatives with respect to the free variables (see Section 3). (The elements of the matrix are assumed to be ordered according to the permutation specified by the positive elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$, see above.) ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ holds the lower triangle of $L$, omitting the unit diagonal, stored by rows. ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ stores the diagonal elements of $D$. Thus if ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ are positive, the strict lower triangle of $L$ will be held in the first ${n}_{z}\left({n}_{z}-1\right)/2$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and the diagonal elements of $D$ in the first ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ (the default), ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ will be initialized within nag_opt_bounds_deriv (e04kbc) to the factors of the unit matrix.
If you set ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_F_G_H}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[\mathit{j}-1\right]$ must contain on entry an approximation to the second derivative with respect to ${x}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ need not be set.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must contain on entry the Cholesky factors of a positive definite approximation to the ${n}_{z}$ by ${n}_{z}$ matrix of second derivatives for the subspace of free variables as specified by your setting of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$.
On exit: ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ hold the factors $L$ and $D$ corresponding to the final point given in x. The elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ are useful for deciding whether to accept the result produced by nag_opt_bounds_deriv (e04kbc) (see Section 9).
 iter – Integer
On exit: the number of iterations which have been performed in nag_opt_bounds_deriv (e04kbc).
 nf – Integer
On exit: the number of times the objective function has been evaluated (i.e., the number of calls of objfun).

### 11.3Description of Printed Output

The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 11.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ then the argument values to nag_opt_bounds_deriv (e04kbc) are listed, whereas the printout of results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_bounds_deriv (e04kbc).
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$ a single line of output is produced on completion of each iteration, this gives the following values:
 Itn the iteration count, $k$.
 Nfun the cumulative number of calls to objfun.
 Objective the current value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
 Norm g the Euclidean norm of the projected gradient vector, $‖{g}_{z}\left({x}^{\left(k\right)}\right)‖$.
 Norm x the Euclidean norm of ${x}^{\left(k\right)}$.
 Norm(x(k-1)-x(k)) the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 Step the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
 Cond H the ratio of the largest to the smallest element of the diagonal factor $D$ of the projected Hessian matrix. This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free, this value will be zero.)
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$ this single line of output is also produced for the final solution.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter_Full}$ more detailed results are given at each iteration. Additional values output are:
 x the current point ${x}^{\left(k\right)}$.
 g the current projected gradient vector, ${g}_{z}\left({x}^{\left(k\right)}\right)$.
 Status the current state of the variable with respect to its bound(s).
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$ the final result is printed out. This consists of:
 x the final point, ${x}^{*}$.
 g the final projected gradient vector, ${g}_{z}\left({x}^{*}\right)$.
 Status the final state of the variable with respect to its bound(s).
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$ then printout will be suppressed; you can print the final solution when nag_opt_bounds_deriv (e04kbc) returns to the calling program.

#### 11.3.1Output of results via a user-defined printing function

You may also specify your own print function for output of iteration results and the final solution by use of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ function pointer, which has prototype
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ this will be called in preference to the internal print function of nag_opt_bounds_deriv (e04kbc). Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ member. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$.
The results contained in the members of st are those on completion of the last iteration or those after a local search. (An iteration may be followed by a local search (see ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$, Section 11.2) in which case ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is called with the results of the last iteration ($\mathbf{st}\mathbf{\to }\mathbf{local_search}=\mathrm{Nag_FALSE}$) and then again when the local search has been completed ($\mathbf{st}\mathbf{\to }\mathbf{local_search}=\mathrm{Nag_TRUE}$).)
If $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$ then the results on completion of an iteration of nag_opt_bounds_deriv (e04kbc) are contained in the members of st. If $\mathbf{comm}\mathbf{\to }\mathbf{sol_prt}=\mathrm{Nag_TRUE}$ then the final results from nag_opt_bounds_deriv (e04kbc), including details of the final iteration, are contained in the members of st. In both cases, the same members of st are set, as follows:
 iter – Integer
The current iteration count, $k$, if $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$; the final iteration count, $k$, if $\mathbf{comm}\mathbf{\to }\mathbf{sol_prt}=\mathrm{Nag_TRUE}$.
 n – Integer
The number of variables.
 x – double *
The coordinates of the point ${x}^{\left(k\right)}$.
 f – double *
The value of the current objective function.
 g – double *
Points to the n memory locations holding the first derivatives of $F$ at the current point ${x}^{\left(k\right)}$.
 gpj_norm – double *
The Euclidean norm of the current projected gradient ${g}_{z}$.
 step – double *
The step ${\alpha }^{\left(k\right)}$ taken along the search direction ${p}^{\left(k\right)}$.
 cond – double *
The estimate of the condition number of the Hessian matrix.
 xk_norm – double *
The Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 state – Integer
The status of variables ${x}_{j}$, $j=1,2,\dots ,n$, with respect to their bounds. See Section 11.2 (${\mathbf{options}}\mathbf{.}{\mathbf{state}}$) for a description of the possible status values.
 local_search – Nag_Boolean
Nag_TRUE if a local search has been performed.
 nf – Integer
The cumulative number of calls made to objfun.
The relevant members of the structure comm are:
 it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the results of the current iteration.
 sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
 user – double *
 iuser – Integer *
 p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_bounds_deriv (e04kbc) or during a call to objfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The type Pointer will be void * with a C compiler that defines void * and char * otherwise.