The new solver e04kfc is part of the NAG optimization modelling suite (see Section 4.1 in the E04 Chapter Introduction); therefore, the computation of the objective function value and of its gradient needs to be split into two separate user-supplied functions. e04kfc offers a significant performance improvement over e04dgc as well as additional functionality, such as the addition of variable bounds and monitoring.
Callbacks
Old Code
static void NAG_CALL objfun(Integer n, const double x[], double *objf,
double g[], Nag_Comm *comm)
{
/* Compute objective at point x */
*objf = ... ;
/* Compute objective gradient at point x */
g[0] = ... ;
...
g[n-1] = ... ;
}
New Code
/* Initialize problem with n variables */
nag_opt_handle_init(&handle, n, NAGERR_DEFAULT);
/* Add nonlinear objective function with dense gradient
* (dependent on all variables)
*/
for (int i=1; i<=n; i++)
idxfd[i-1] = i;
nag_opt_handle_set_nlnobj(handle, n, idxfd, NAGERR_DEFAULT);
/* Solve the problem */
SET_FAIL(fail);
nag_opt_handle_solve_bounds_foas(handle, objfun, objgrd, NULLFN, n,
x, rinfo, stats, &comm, &fail);
iter = stats[7];
objf = rinfo[0];
/* Free the handle memory */
nag_opt_handle_free(&handle, NAGERR_DEFAULT);
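As a concrete illustration of the split (not the NAG prototypes themselves: the Nag_Comm argument, NAG's Integer type and the inform flag are omitted), here is a two-variable Rosenbrock objective with its value and gradient supplied by separate functions, as the new interface requires:

```c
/* Hypothetical problem-specific callbacks: the 2-variable Rosenbrock
 * function F(x) = 100*(x1 - x0^2)^2 + (1 - x0)^2, with the value and
 * the gradient computed by two separate functions.  The NAG-specific
 * plumbing (Nag_Comm, Integer, inform) is deliberately left out. */
static void objfun(int n, const double x[], double *fx)
{
    (void) n;                       /* n == 2 in this example */
    double a = x[1] - x[0]*x[0];
    double b = 1.0 - x[0];
    *fx = 100.0*a*a + b*b;
}

static void objgrd(int n, const double x[], double g[])
{
    (void) n;
    double a = x[1] - x[0]*x[0];
    g[0] = -400.0*x[0]*a - 2.0*(1.0 - x[0]);
    g[1] = 200.0*a;
}
```

At the known minimizer (1, 1) both the objective and every gradient element are exactly zero, which makes a convenient sanity check before wiring the functions into the solver.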
e04dgc minimizes an unconstrained nonlinear function of several variables using a pre-conditioned, limited memory quasi-Newton conjugate gradient method. The function is intended for use on large scale problems.
The function may be called by the names: e04dgc, nag_opt_uncon_conjgrd_comp or nag_opt_conj_grad.
3 Description
e04dgc uses a pre-conditioned conjugate gradient method and is based upon algorithm PLMA as described in Gill and Murray (1979) and Section 4.8.3 of Gill et al. (1981).
The algorithm proceeds as follows:
Let ${x}_{0}$ be a given starting point and let $k$ denote the current iteration, starting with $k=0$. The iteration requires ${g}_{k}$, the gradient vector evaluated at ${x}_{k}$, the $k$th estimate of the minimum. At each iteration a vector ${p}_{k}$ (known as the direction of search) is computed and the new estimate ${x}_{k+1}$ is given by ${x}_{k}+{\alpha}_{k}{p}_{k}$ where ${\alpha}_{k}$ (the step length) minimizes the function $F({x}_{k}+{\alpha}_{k}{p}_{k})$ with respect to the scalar ${\alpha}_{k}$. At the start of each line search an initial approximation ${\alpha}_{0}$ to the step ${\alpha}_{k}$ is taken as:
where ${F}_{\mathit{est}}$ is a user-supplied estimate of the function value at the solution. If ${F}_{\mathit{est}}$ is not specified, the software always chooses the unit step length for ${\alpha}_{0}$. Subsequent step length estimates are computed using cubic interpolation with safeguards.
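e04dgc's actual line search uses safeguarded cubic interpolation; as a simplified, hypothetical stand-in, the following backtracking search enforces only an Armijo sufficient-decrease condition on an illustrative quadratic (the function F, the constants c1 and shrink, and the unit initial step are all assumptions of this sketch):

```c
#include <math.h>

/* Illustrative objective, not part of e04dgc. */
static double F(const double x[2]) { return x[0]*x[0] + 10.0*x[1]*x[1]; }

/* Backtracking line search: halve alpha until F(x + alpha*p) satisfies
 * the Armijo sufficient-decrease test.  A crude stand-in for the
 * safeguarded cubic interpolation described above. */
static double armijo_step(const double x[2], const double g[2],
                          const double p[2])
{
    const double c1 = 1.0e-4, shrink = 0.5;
    double alpha = 1.0;                        /* unit initial step */
    double f0 = F(x), slope = g[0]*p[0] + g[1]*p[1];
    for (int i = 0; i < 50; i++) {
        double xt[2] = { x[0] + alpha*p[0], x[1] + alpha*p[1] };
        if (F(xt) <= f0 + c1*alpha*slope)      /* sufficient decrease */
            return alpha;
        alpha *= shrink;                       /* backtrack */
    }
    return alpha;
}
```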
A quasi-Newton method computes the search direction, ${p}_{k}$, by updating the inverse of the approximate Hessian $\left({H}_{k}\right)$ and computing
$${p}_{k+1}=-{H}_{k+1}{g}_{k+1}\text{.}$$
(1)
The updating formula for the approximate inverse is given by
where ${y}_{k}={g}_{k+1}-{g}_{k}$ and ${s}_{k}={x}_{k+1}-{x}_{k}={\alpha}_{k}{p}_{k}$.
The method used by e04dgc to obtain the search direction is based upon computing ${p}_{k+1}$ as $-{H}_{k+1}{g}_{k+1}$ where ${H}_{k+1}$ is a matrix obtained by updating the identity matrix with a limited number of quasi-Newton corrections. The storage of an $n\times n$ matrix is avoided by storing only the vectors that define the rank two corrections – hence the term limited-memory quasi-Newton method. The precise method depends upon the number of updating vectors stored. For example, the direction obtained with the ‘one-step’ limited memory update is given by (1) using (2) with ${H}_{k}$ equal to the identity matrix, viz.
e04dgc uses a two-step method described in detail in Gill and Murray (1979) in which restarts and pre-conditioning are incorporated. Using a limited-memory quasi-Newton formula, such as the one above, guarantees ${p}_{k+1}$ to be a descent direction if all the inner products ${y}_{k}^{\mathrm{T}}{s}_{k}$ are positive for all vectors ${y}_{k}$ and ${s}_{k}$ used in the updating formula.
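Assuming the rank-two formula referred to above is the standard BFGS inverse update (a common choice; the function document's own formula should be consulted for the exact expression used), the 'one-step' direction can be computed without ever forming an $n\times n$ matrix: only the single pair $(s, y)$ and a few dot products are needed.

```c
#define N 3

static double dot(const double a[N], const double b[N])
{
    double d = 0.0;
    for (int i = 0; i < N; i++) d += a[i]*b[i];
    return d;
}

/* out = H1 * v, where H1 is the identity updated with one stored pair
 * (s, y) via the standard BFGS inverse formula (our assumption here):
 *   H1 = I - (s y' + y s')/(y's) + (1 + y'y/(y's)) s s'/(y's).
 * The direction of search would then be p = -H1 * g. */
static void apply_H1(const double s[N], const double y[N],
                     const double v[N], double out[N])
{
    double ys = dot(y, s);    /* must be positive for a descent direction */
    double yv = dot(y, v), sv = dot(s, v), yy = dot(y, y);
    for (int i = 0; i < N; i++)
        out[i] = v[i] - (s[i]*yv + y[i]*sv)/ys
                      + (1.0 + yy/ys)*s[i]*sv/ys;
}
```

A quick consistency check: any quasi-Newton inverse update must satisfy the secant condition $H_{k+1} y_k = s_k$, which the code above reproduces exactly.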
The termination criteria of e04dgc are as follows:
Let ${\tau}_{F}$ specify an argument that indicates the number of correct figures desired in ${F}_{k}$ (${\tau}_{F}$ is equivalent to ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ in the optional parameter list, see Section 11). If the following three conditions are satisfied:
(iii) $\Vert {g}_{k}\Vert \le {\tau}_{F}^{1/3}(1+\left|{F}_{k}\right|)$ or $\Vert {g}_{k}\Vert <{\epsilon}_{A}$, where ${\epsilon}_{A}$ is the absolute error associated with computing the objective function
then the algorithm is considered to have converged. For a full discussion on termination criteria see Chapter 8 of Gill et al. (1981).
4 References
Gill P E and Murray W (1979) Conjugate-gradient methods for large-scale nonlinear optimization Technical Report SOL 79-15 Department of Operations Research, Stanford University
Gill P E, Murray W, Saunders M A and Wright M H (1983) Computing forward-difference intervals for numerical optimization SIAM J. Sci. Statist. Comput. 4 310–321
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
5 Arguments
1: $\mathbf{n}$ – Integer Input
On entry: the number $n$ of variables.
Constraint:
${\mathbf{n}}\ge 1$.
2: $\mathbf{objfun}$ – function, supplied by the user External Function
objfun must calculate the objective function $F\left(x\right)$ and its gradient at a specified point $x$.
On exit: ${\mathbf{g}}\left[\mathit{i}-1\right]$ must contain the value of $\frac{\partial F}{\partial {x}_{\mathit{i}}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,n$.
5: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type $\mathrm{Nag\_Comm}$; the following members are relevant to objfun.
flag – Integer Input/Output
On entry: $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ is always non-negative.
On exit: if objfun resets $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ to some negative number then e04dgc will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to e04dgc, ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to your setting of $\mathbf{comm}\mathbf{\to}\mathbf{flag}$.
first – Nag_Boolean Input
On entry: will be set to Nag_TRUE on the first call to objfun and Nag_FALSE for all subsequent calls.
nf – Integer Input
On entry: the number of calculations of the objective function; this value will be equal to the number of calls made to objfun including the current one.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise. Before calling e04dgc these pointers may be allocated memory and initialized with various quantities for use by objfun when called from e04dgc.
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04dgc. If your code inadvertently does return any NaNs or infinities, e04dgc is likely to produce unexpected results.
Note: objfun should be tested separately before being used in conjunction with e04dgc. The array x must not be changed by objfun.
On exit: the objective gradient at the final iterate.
6: $\mathbf{options}$ – Nag_E04_Opt * Input/Output
On entry/exit: a pointer to a structure of type $\mathrm{Nag\_E04\_Opt}$ whose members are optional parameters for e04dgc. These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given below in Section 11.
If any of these optional parameters are required then the structure options should be declared and initialized by a call to e04xxc and supplied as an argument to e04dgc. However, if the optional parameters are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
7: $\mathbf{comm}$ – Nag_Comm * Input/Output
Note: comm is a NAG defined type (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
On entry/exit: structure containing pointers for communication with user-supplied functions; see the above description of objfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to e04dgc; comm will then be declared internally for use in calls to user-supplied functions.
8: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$ had an illegal value.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
This value of fail will occur if the verification process indicated that at least one gradient component had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first step, you should check that the code for the objective values is correct – for example, by computing the function at a point where the correct value is known. However, care should be taken that the chosen point fully tests the evaluation of the function. It is remarkable how often the values $x=0$ or $x=1$ are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Errors in programming the function may be quite subtle in that the function value is ‘almost’ correct. For example, the function may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the function depends.
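The advice above about avoiding $x=0$ or $x=1$ is easy to follow in practice: evaluate the coded objective at a couple of non-special points whose values have been worked out by hand. The objective below is purely illustrative.

```c
#include <math.h>

/* Hypothetical objective for testing: F(x) = x0^2 * x1 + x1^3.
 * At x = (2, 3):     F = 4*3 + 27   = 39
 * At x = (-1.5, 2):  F = 2.25*2 + 8 = 12.5
 * Neither point is 0 or 1 in any component, so indexing and scaling
 * mistakes cannot hide behind the special properties of those values. */
static double F(const double x[2])
{
    return x[0]*x[0]*x[1] + x[1]*x[1]*x[1];
}
```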
NE_GRAD_TOO_SMALL
The gradient at the starting point is too small; rerun the problem at a different starting point.
The value of $g{\left({x}_{0}\right)}^{\mathrm{T}}g\left({x}_{0}\right)$ is less than $\epsilon \left|F\left({x}_{0}\right)\right|$, where $\epsilon $ is the machine precision.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=\langle \mathit{value}\rangle$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $\langle \mathit{value}\rangle$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $\langle \mathit{value}\rangle$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{f\_prec}}$ not valid. Correct range is $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{f\_prec}}<1.0$.
Value $\langle \mathit{value}\rangle$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ not valid. Correct range is $\langle \mathit{value}\rangle \le {\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}<1.0$.
NE_INVALID_REAL_RANGE_F
Value $\langle \mathit{value}\rangle$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max\_line\_step}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max\_line\_step}}>0.0$.
NE_INVALID_REAL_RANGE_FF
Value $\langle \mathit{value}\rangle$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}<1.0$.
NE_NOT_APPEND_FILE
Cannot open file $\langle \mathit{string}\rangle$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $\langle \mathit{string}\rangle$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_USER_STOP
User requested termination, user flag value $=\langle \mathit{value}\rangle$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ to a negative value in objfun. If fail is supplied, the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to}\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $\langle \mathit{string}\rangle$.
NW_NO_IMPROVEMENT
A sufficient decrease in the function value could not be attained during the final linesearch. The current point cannot be improved upon.
If objfun computes the function and gradients correctly, then this warning may occur because an overly stringent accuracy has been requested, i.e., ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ is too small, or because the minimum lies close to a step length of zero. In this case you should apply the tests described in Section 3 to determine whether or not the final solution is acceptable. For a discussion of attainable accuracy see Gill et al. (1981).
If many iterations have occurred in which essentially no progress has been made or e04dgc has failed to move from the initial point, then the function objfun may be incorrect. You should refer to the comments below under NE_DERIV_ERRORS and check the gradients using the ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$ argument. Unfortunately, there may be small errors in the objective gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies.
NW_STEP_BOUND_TOO_SMALL
Computed upper bound on step length was too small.
The computed upper bound on the step length taken during the linesearch was too small. A rerun with an increased value of ${\mathbf{options}}\mathbf{.}{\mathbf{max\_line\_step}}$ ($\rho $ say) may be successful unless $\rho \ge {10}^{10}$ (the default value), in which case the current point cannot be improved upon.
NW_TOO_MANY_ITER
The maximum number of iterations, $\langle \mathit{value}\rangle$, have been performed.
If the algorithm appears to be making progress, the value of ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}$ may be too small (see Section 11); you should increase its value and rerun e04dgc. If the algorithm seems to be ‘bogged down’, you should check for incorrect gradients or ill-conditioning as described below under NW_NO_IMPROVEMENT.
7 Accuracy
On successful exit the accuracy of the solution will be as defined by the optional parameter ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e04dgc is not threaded in any implementation.
9 Further Comments
9.1 Timing
Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than $n$ iterations. Problems without this property may require anything between $n$ and $5n$ iterations, with approximately $2n$ iterations being a common figure for moderately difficult problems.
The data includes a set of user-defined column and row names, and data for the Hessian in a sparse storage format (see Section 10.2 for further details). The options structure is declared and initialized by e04xxc. Five option values are read from a data file by use of e04xyc.
A number of optional input and output arguments to e04dgc are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional parameters you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling e04dgc; the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function e04xxc. Values may then be assigned to the structure members in the normal C manner.
After return from e04dgc, the options structure may only be re-used for future calls of e04dgc if the dimensions of the new problem are the same. Otherwise, the structure must be cleared by a call of e04xzc and re-initialized by a call of e04xxc before future calls. Failure to do this will result in unpredictable behaviour.
Option settings may also be read from a text file using the function e04xyc in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, then this must be done directly in the calling program; they cannot be assigned using e04xyc.
11.1 Optional Parameter Checklist and Default Values
For easy reference, the following list shows the members of options which are valid for e04dgc together with their default values where relevant. The number $\epsilon $ is a generic notation for machine precision (see X02AJC).
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag\_TRUE}$ the argument settings in the call to e04dgc will be printed.
print_level – Nag_PrintType
Default $\text{}=\mathrm{Nag\_Soln\_Iter}$
On entry: the level of results printout produced by e04dgc. The following values are available:
$\mathrm{Nag\_NoPrint}$
No output.
$\mathrm{Nag\_Soln}$
The final solution.
$\mathrm{Nag\_Iter}$
One line of output for each iteration.
$\mathrm{Nag\_Soln\_Iter}$
The final solution and one line of output for each iteration.
Constraint:
${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_NoPrint}$, $\mathrm{Nag\_Soln}$, $\mathrm{Nag\_Iter}$ or $\mathrm{Nag\_Soln\_Iter}$.
outfile – const char[512]
Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{'}\backslash \text{0'}$ then the stdout stream is used.
print_fun – pointer to function
Default $\text{}=\text{}$NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ is
On entry: specifies the level of derivative checking to be performed by e04dgc on the gradient elements defined in objfun.
${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$ may have the following values:
$\mathrm{Nag\_NoCheck}$
No derivative check is performed.
$\mathrm{Nag\_SimpleCheck}$
Perform a simple check of the gradient.
$\mathrm{Nag\_CheckObj}$
Perform a component check of the gradient elements.
If ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_SimpleCheck}$ then a simple ‘cheap’ test is performed, which requires only one call to objfun. If ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_CheckObj}$ then a more reliable (but more expensive) test will be made on individual gradient components. This component check will be made in the range specified by ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_check\_start}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_check\_stop}}$, default values being $1$ and n respectively. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983)). The result of the test is printed out by e04dgc if ${\mathbf{options}}\mathbf{.}{\mathbf{print\_gcheck}}=\mathrm{Nag\_TRUE}$.
Constraint:
${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_NoCheck}$, $\mathrm{Nag\_SimpleCheck}$ or $\mathrm{Nag\_CheckObj}$.
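A much-simplified version of the component check can be sketched as follows. It uses the common heuristic interval $h = \sqrt{\epsilon}(1+|x_i|)$ rather than the second-derivative-based interval of Gill et al. (1983) that e04dgc actually computes, and the test objective and tolerance are illustrative assumptions.

```c
#include <math.h>
#include <float.h>

/* Illustrative objective: F(x) = sum_i (i+1)*x_i^2,
 * so dF/dx_i = 2*(i+1)*x_i. */
static double F(int n, const double x[])
{
    double f = 0.0;
    for (int i = 0; i < n; i++) f += (i + 1)*x[i]*x[i];
    return f;
}

/* Compare gradient element g[i] with a forward-difference approximation
 * using the heuristic interval h = sqrt(machine eps)*(1 + |x_i|).
 * Returns 1 if they agree to the (assumed) tolerance, 0 otherwise. */
static int check_component(int n, double x[], const double g[], int i)
{
    double h = sqrt(DBL_EPSILON)*(1.0 + fabs(x[i]));
    double xi = x[i], f0 = F(n, x);
    x[i] = xi + h;
    double approx = (F(n, x) - f0)/h;
    x[i] = xi;                              /* restore the point */
    return fabs(approx - g[i]) <= 1.0e-6*(1.0 + fabs(g[i]));
}
```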
print_gcheck – Nag_Boolean
Default $\text{}=\mathrm{Nag\_TRUE}$
On entry: if Nag_TRUE the result of any derivative check (see ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$) will be printed.
obj_check_start – Integer
Default $\text{}=1$
obj_check_stop – Integer
Default $\text{}={\mathbf{n}}$
On entry: these options take effect only when ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_CheckObj}$. They may be used to control the verification of gradient elements computed by the function objfun. For example, if the first 30 variables appear linearly in the objective, so that the corresponding gradient elements are constant, then it is reasonable for ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_check\_start}}$ to be set to 31.
On entry: this argument defines ${\epsilon}_{r}$, which is intended to be a measure of the accuracy with which the problem function $F$ can be computed. The value of ${\epsilon}_{r}$ should reflect the relative precision of $1+\left|F\left(x\right)\right|$; i.e., ${\epsilon}_{r}$ acts as a relative precision when $\left|F\right|$ is large, and as an absolute precision when $\left|F\right|$ is small. For example, if $F\left(x\right)$ is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ${\epsilon}_{r}$ would be $\text{1.0e\u22126}$. In contrast, if $F\left(x\right)$ is typically of order ${10}^{\mathrm{-4}}$ and the first six significant digits are known to be correct, an appropriate value for ${\epsilon}_{r}$ would be $\text{1.0e\u221210}$. The choice of ${\epsilon}_{r}$ can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981), for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ${\epsilon}_{r}$ should be large enough so that e04dgc will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.
On entry: specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ indicates the number of correct figures desired in the objective function at the solution. For example, if ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ is ${10}^{\mathrm{-6}}$ and e04dgc terminates successfully, the final value of $F$ should have approximately six correct figures. e04dgc will terminate successfully if the iterative sequence of $x$-values is judged to have converged and the final point satisfies the termination criteria (see Section 3, where ${\tau}_{F}$ represents ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$).
On entry: controls the accuracy with which the step $\alpha $ taken during each iteration approximates a minimum of the function along the search direction (the smaller the value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$, the more accurate the linesearch). The default value requests an inaccurate search, and is appropriate for most problems. A more accurate search may be appropriate when it is desirable to reduce the number of iterations – for example, if the objective function is cheap to evaluate.
On entry: specifies the user-supplied guess of the optimum objective function value. This value is used by e04dgc to calculate an initial step length (see Section 3). If no value is supplied then an initial step length of $1.0$ will be used but it should be noted that for badly scaled functions a unit step along the steepest descent direction will often compute the function at very large values of $x$.
iter – Integer
On exit: the number of iterations which have been performed in e04dgc.
nf – Integer
On exit: the number of times the objective function has been evaluated (i.e., number of calls of objfun). The total excludes the calls made to objfun for purposes of derivative checking.
11.3 Description of Printed Output
The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{print\_gcheck}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ (see Section 11.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag\_TRUE}$ then the argument values to e04dgc are listed, followed by the result of any derivative check if ${\mathbf{options}}\mathbf{.}{\mathbf{print\_gcheck}}=\mathrm{Nag\_TRUE}$. The printout of the optimization results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Soln\_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from e04dgc.
If a simple derivative check, ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_SimpleCheck}$, is requested then the directional derivative, $g{\left(x\right)}^{\mathrm{T}}p$, of the objective gradient and its finite difference approximation are printed out, where $p$ is a random vector of unit length.
When a component derivative check, ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_CheckObj}$, is requested then the following results are supplied for each component:
x[i]
the element of $x$.
dx[i]
the optimal finite difference interval.
g[i]
the gradient element.
Difference approxn.
the finite difference approximation.
Itns
the number of trials performed to find a suitable difference interval.
The indicator, OK or BAD?, states whether the gradient element and finite difference approximation are in agreement.
If the gradient is believed to be in error e04dgc will exit with fail set to NE_DERIV_ERRORS.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Iter}$ or $\mathrm{Nag\_Soln\_Iter}$ a single line of output is produced on completion of each iteration; this gives the following values:
Itn
the current iteration number $k$.
Nfun
the cumulative number of calls to objfun. The evaluations needed for the estimation of the gradients by finite differences are not included in the total Nfun. The value of Nfun is a guide to the amount of work required for the linesearch. e04dgc will perform at most 16 function evaluations per iteration.
Objective
the current value of the objective function, $F\left({x}_{k}\right)$.
Norm g
the Euclidean norm of the gradient vector, $\Vert g\left({x}_{k}\right)\Vert $.
Norm x
the Euclidean norm of ${x}_{k}$.
Norm(x(k-1)-x(k))
the Euclidean norm of ${x}_{k-1}-{x}_{k}$.
Step
the step $\alpha $ taken along the computed search direction ${p}_{k}$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Soln}$ or $\mathrm{Nag\_Soln\_Iter}$, the final result is printed out. This consists of:
x
the final point, ${x}^{*}$.
g
the final gradient vector, $g\left({x}^{*}\right)$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_NoPrint}$ then printout will be suppressed; you can print the final solution when e04dgc returns to the calling program.
11.3.1 Output of results via a user-defined printing function
You may also specify your own print function for output of the results of any gradient check, the optimization results at each iteration and the final solution. The user-defined print function should be assigned to the ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ function pointer, which has prototype
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ this will be called in preference to the internal print function of e04dgc. Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print\_gcheck}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ members. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$.
If $\mathbf{comm}\mathbf{\to}\mathbf{it\_prt}=\mathrm{Nag\_TRUE}$ then the results from the last iteration of e04dgc are in the following members of st:
n – Integer
The number of variables.
x – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the current point ${x}_{k}$.
f – double
The value of the current objective function.
g – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the first derivatives of $F$ at the current point ${x}_{k}$.
step – double
The step $\alpha $ taken along the search direction ${p}_{k}$.
If $\mathbf{comm}\mathbf{\to}\mathbf{g\_prt}=\mathrm{Nag\_TRUE}$ then the following members are set:
n – Integer
The number of variables.
x – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the initial point ${x}_{0}$.
g – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the first derivatives of $F$ at the initial point ${x}_{0}$.
Details of any derivative check performed by e04dgc are held in the following substructure of st:
gprint – Nag_GPrintSt *
This in turn contains two substructures, $\mathbf{gprint}\mathbf{\to}\mathbf{g\_chk}$ and $\mathbf{gprint}\mathbf{\to}\mathbf{f\_sim}$, and a pointer to an array of substructures, $*\mathbf{gprint}\mathbf{\to}\mathbf{f\_comp}$.
g_chk – Nag_Grad_Chk_St *
This substructure contains the members:
type – Nag_GradChk
The type of derivative check performed by e04dgc. This will be the same value as in ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$.
g_error – Integer
This member will be equal to one of the error codes NE_NOERROR or NE_DERIV_ERRORS according to whether the derivatives were found to be correct or not.
obj_start – Integer
Specifies the gradient element at which any component check started. This value will be equal to ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_check\_start}}$.
obj_stop – Integer
Specifies the gradient element at which any component check ended. This value will be equal to ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_check\_stop}}$.
f_sim – Nag_SimSt *
The result of a simple derivative check, $\mathbf{g\_chk}\mathbf{\to}\mathbf{type}=\mathrm{Nag\_SimpleCheck}$, will be held in this substructure which has members:
correct – Nag_Boolean
If Nag_TRUE then the objective gradient is consistent with the finite difference approximation according to a simple check.
dir_deriv – double *
The directional derivative $g{\left(x\right)}^{T}p$ where $p$ is a random vector of unit length with elements of approximately equal magnitude.
fd_approx – double *
The finite difference approximation, $(F(x+hp)-F\left(x\right))/h$, to the directional derivative.
f_comp – Nag_CompSt *
The results of a component derivative check, $\mathbf{g\_chk}\mathbf{\to}\mathbf{type}=\mathrm{Nag\_CheckObj}$, will be held in the array of $\mathbf{st}\mathbf{\to}\mathbf{n}$ substructures of type $\mathrm{Nag\_CompSt}$ pointed to by $\mathbf{gprint}\mathbf{\to}\mathbf{f\_comp}$. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983)).
correct – Nag_Boolean
If Nag_TRUE then this objective gradient component is consistent with its finite difference approximation.
hopt – double *
The optimal finite difference interval. This is dx[i] in the NAG default printout.
gdiff – double *
The finite difference approximation for this gradient component.
iter – Integer
The number of trials performed to find a suitable difference interval.
comment – char *
A character string describing the likely reason why estimation of the finite difference interval failed to produce a satisfactory relative condition error of the second-order difference. Possible strings are: "Constant?", "Linear or odd?", "Too nonlinear?" and "Small derivative?".
The relevant members of the structure comm are:
g_prt – Nag_Boolean
Will be Nag_TRUE only when the print function is called with the result of the derivative check of objfun.
it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
user – double *
iuser – Integer *
p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to e04dgc or during a call to objfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$. The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.