NAG C Library Manual

# NAG Library Function Document: nag_opt_conj_grad (e04dgc)

## 1  Purpose

nag_opt_conj_grad (e04dgc) minimizes an unconstrained nonlinear function of several variables using a pre-conditioned, limited memory quasi-Newton conjugate gradient method. The function is intended for use on large-scale problems.

## 2  Specification

```c
#include <nag.h>
#include <nage04.h>

void nag_opt_conj_grad (Integer n,
  void (*objfun)(Integer n, const double x[], double *objf, double g[],
                 Nag_Comm *comm),
  double x[], double *objf, double g[], Nag_E04_Opt *options,
  Nag_Comm *comm, NagError *fail)
```

## 3  Description

nag_opt_conj_grad (e04dgc) uses a pre-conditioned conjugate gradient method and is based upon algorithm PLMA as described in Gill and Murray (1979) and Section 4.8.3 of Gill et al. (1981).
The algorithm proceeds as follows:
Let ${x}_{0}$ be a given starting point and let $k$ denote the current iteration, starting with $k=0$. The iteration requires ${g}_{k}$, the gradient vector evaluated at ${x}_{k}$, the $k$th estimate of the minimum. At each iteration a vector ${p}_{k}$ (known as the direction of search) is computed and the new estimate ${x}_{k+1}$ is given by ${x}_{k}+{\alpha }_{k}{p}_{k}$ where ${\alpha }_{k}$ (the step length) minimizes the function $F\left({x}_{k}+{\alpha }_{k}{p}_{k}\right)$ with respect to the scalar ${\alpha }_{k}$. At the start of each line search an initial approximation ${\alpha }_{0}$ to the step ${\alpha }_{k}$ is taken as:
 $\alpha_0 = \min\left(1, \frac{2\left(F_k - F_{\mathit{est}}\right)}{g_k^{\mathrm{T}} g_k}\right)$
where ${F}_{\mathit{est}}$ is a user-supplied estimate of the function value at the solution. If ${F}_{\mathit{est}}$ is not specified, the software always chooses the unit step length for ${\alpha }_{0}$. Subsequent step length estimates are computed using cubic interpolation with safeguards.
A quasi-Newton method computes the search direction, ${p}_{k}$, by updating the inverse of the approximate Hessian $\left({H}_{k}\right)$ and computing
 $p_{k+1} = -H_{k+1} g_{k+1}.$ (1)
The updating formula for the approximate inverse is given by
 $H_{k+1} = H_k - \frac{1}{y_k^{\mathrm{T}} s_k}\left(H_k y_k s_k^{\mathrm{T}} + s_k y_k^{\mathrm{T}} H_k\right) + \frac{1}{y_k^{\mathrm{T}} s_k}\left(1 + \frac{y_k^{\mathrm{T}} H_k y_k}{y_k^{\mathrm{T}} s_k}\right) s_k s_k^{\mathrm{T}}$ (2)
where ${y}_{k}={g}_{k+1}-{g}_{k}$ and ${s}_{k}={x}_{k+1}-{x}_{k}={\alpha }_{k}{p}_{k}$.
The method used by nag_opt_conj_grad (e04dgc) to obtain the search direction is based upon computing ${p}_{k+1}$ as $-{H}_{k+1}{g}_{k+1}$ where ${H}_{k+1}$ is a matrix obtained by updating the identity matrix with a limited number of quasi-Newton corrections. The storage of an $n$ by $n$ matrix is avoided by storing only the vectors that define the rank two corrections – hence the term limited-memory quasi-Newton method. The precise method depends upon the number of updating vectors stored. For example, the direction obtained with the ‘one-step’ limited memory update is given by (1) using (2) with ${H}_{k}$ equal to the identity matrix, viz.
 $p_{k+1} = -g_{k+1} + \frac{1}{y_k^{\mathrm{T}} s_k}\left(\left(s_k^{\mathrm{T}} g_{k+1}\right) y_k + \left(y_k^{\mathrm{T}} g_{k+1}\right) s_k\right) - \frac{s_k^{\mathrm{T}} g_{k+1}}{y_k^{\mathrm{T}} s_k}\left(1 + \frac{y_k^{\mathrm{T}} y_k}{y_k^{\mathrm{T}} s_k}\right) s_k.$
nag_opt_conj_grad (e04dgc) uses a two-step method described in detail in Gill and Murray (1979) in which restarts and pre-conditioning are incorporated. Using a limited-memory quasi-Newton formula, such as the one above, guarantees ${p}_{k+1}$ to be a descent direction if all the inner products ${y}_{k}^{\mathrm{T}}{s}_{k}$ are positive for all vectors ${y}_{k}$ and ${s}_{k}$ used in the updating formula.
The termination criteria of nag_opt_conj_grad (e04dgc) are as follows:
Let ${\tau }_{F}$ denote the argument that specifies the number of correct figures desired in ${F}_{k}$ (${\tau }_{F}$ is equivalent to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ in the optional argument list; see Section 10). If the following three conditions are satisfied:
 (i) ${F}_{k-1}-{F}_{k}<{\tau }_{F}\left(1+\left|{F}_{k}\right|\right)$;
 (ii) $‖{x}_{k-1}-{x}_{k}‖<\sqrt{{\tau }_{F}}\left(1+‖{x}_{k}‖\right)$;
 (iii) $‖{g}_{k}‖\le {\tau }_{F}^{1/3}\left(1+\left|{F}_{k}\right|\right)$ or $‖{g}_{k}‖<{\epsilon }_{A}$, where ${\epsilon }_{A}$ is the absolute error associated with computing the objective function;
then the algorithm is considered to have converged. For a full discussion on termination criteria see Chapter 8 of Gill et al. (1981).

## 4  References

Gill P E and Murray W (1979) Conjugate-gradient methods for large-scale nonlinear optimization Technical Report SOL 79-15 Department of Operations Research, Stanford University
Gill P E, Murray W, Saunders M A and Wright M H (1983) Computing forward-difference intervals for numerical optimization SIAM J. Sci. Statist. Comput. 4 310–321
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press

## 5  Arguments

1: n – Integer (Input)
On entry: the number $n$ of variables.
Constraint: ${\mathbf{n}}\ge 1$.
2: objfun – function, supplied by the user (External Function)
objfun must calculate the objective function $F\left(x\right)$ and its gradient at a specified point $x$.
The specification of objfun is:
```
void objfun (Integer n, const double x[], double *objf, double g[],
             Nag_Comm *comm)
```
1: n – Integer (Input)
On entry: the number $n$ of variables.
2: x[n] – const double (Input)
On entry: the point $x$ at which the objective function is required.
3: objf – double * (Output)
On exit: the value of the objective function $F$ at the current point $x$.
4: g[n] – double (Output)
On exit: ${\mathbf{g}}\left[\mathit{i}-1\right]$ must contain the value of $\frac{\partial F}{\partial {x}_{\mathit{i}}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,n$.
5: comm – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
flag – Integer (Input/Output)
On entry: $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ is always non-negative.
On exit: if objfun resets $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to some negative number then nag_opt_conj_grad (e04dgc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_conj_grad (e04dgc) ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
first – Nag_Boolean (Input)
On entry: will be set to Nag_TRUE on the first call to objfun and Nag_FALSE for all subsequent calls.
nf – Integer (Input)
On entry: the number of calculations of the objective function; this value will be equal to the number of calls made to objfun including the current one.
user – double *
iuser – Integer *
p – Pointer
The type Pointer is void * with a C compiler that defines void *, and char * otherwise. Before calling nag_opt_conj_grad (e04dgc) these pointers may be allocated memory and initialized with various quantities for use by objfun when called from nag_opt_conj_grad (e04dgc).
Note: objfun should be tested separately before being used in conjunction with nag_opt_conj_grad (e04dgc). The array x must not be changed by objfun.
3: x[n] – double (Input/Output)
On entry: ${x}_{0}$, an estimate of the solution point ${x}^{*}$.
On exit: the final estimate of the solution.
4: objf – double * (Output)
On exit: the value of the objective function $F\left(x\right)$ at the final iterate.
5: g[n] – double (Output)
On exit: the objective gradient at the final iterate.
6: options – Nag_E04_Opt * (Input/Output)
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional arguments for nag_opt_conj_grad (e04dgc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given below in Section 10.
If any of these optional arguments are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_conj_grad (e04dgc). However, if the optional arguments are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
7: comm – Nag_Comm * (Input/Output)
Note: comm is a NAG defined type (see Section 3.2.1.1 in the Essential Introduction).
On entry/exit: structure containing pointers for communication with user-supplied functions; see the above description of objfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_conj_grad (e04dgc); comm will then be declared internally for use in calls to user-supplied functions.
8: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).

### 5.1  Description of Printed Output

Intermediate and final results are printed out by default. The level of printed output can be controlled with the structure member ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 10.2). The default, ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$, provides the result of any derivative check, a single line of output at each iteration and the final result.
The derivative check performed by default will give the directional derivative, $g{\left(x\right)}^{\mathrm{T}}p$, of the objective gradient and its finite difference approximation, where $p$ is a random vector of unit length. If the gradient is believed to be in error then nag_opt_conj_grad (e04dgc) will exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_DERIV_ERRORS}}$.
The line of results printed at each iteration gives:
- Itn: the current iteration number $k$.
- Nfun: the cumulative number of calls to objfun. The evaluations needed for the estimation of the gradients by finite differences are not included in the total Nfun. The value of Nfun is a guide to the amount of work required for the linesearch; nag_opt_conj_grad (e04dgc) will perform at most 16 function evaluations per iteration.
- Objective: the current value of the objective function, $F\left({x}_{k}\right)$.
- Norm g: the Euclidean norm of the gradient vector, $‖g\left({x}_{k}\right)‖$.
- Norm x: the Euclidean norm of ${x}_{k}$.
- Norm (x(k-1)-x(k)): the Euclidean norm of ${x}_{k-1}-{x}_{k}$.
- Step: the step ${\alpha }_{k}$ taken along the computed search direction ${p}_{k}$. On reasonably well-behaved problems, the unit step will be taken as the solution is approached.
The printout of the final result consists of:
- x: the final point, ${x}^{*}$.
- g: the final gradient vector, $g\left({x}^{*}\right)$.

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}$ had an illegal value.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
This value of fail will occur if the verification process indicated that at least one gradient component had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first step, you should check that the code for the objective values is correct – for example, by computing the function at a point where the correct value is known. However, care should be taken that the chosen point fully tests the evaluation of the function. It is remarkable how often the values $x=0$ or $x=1$ are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Errors in programming the function may be quite subtle in that the function value is ‘almost’ correct. For example, the function may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the function depends.
NE_GRAD_TOO_SMALL
The gradient at the starting point is too small; rerun the problem at a different starting point.
The value of $g{\left({x}_{0}\right)}^{\mathrm{T}}g\left({x}_{0}\right)$ is less than $\epsilon \left|F\left({x}_{0}\right)\right|$, where $\epsilon$ is the machine precision.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}$ not valid. Correct range is $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}<1.0$.
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ not valid. Correct range is $〈\mathit{\text{value}}〉$ $\le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
NE_INVALID_REAL_RANGE_F
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_line_step}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_line_step}}>0.0$.
NE_INVALID_REAL_RANGE_FF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
NE_NOT_APPEND_FILE
Cannot open file $〈\mathit{string}〉$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $〈\mathit{string}〉$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_USER_STOP
User requested termination, user flag value $\text{}=〈\mathit{\text{value}}〉$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to a negative value in objfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $〈\mathit{string}〉$.
NW_NO_IMPROVEMENT
A sufficient decrease in the function value could not be attained during the final linesearch. Current point cannot be improved upon.
If objfun computes the function and gradients correctly, then this warning may occur because an overly stringent accuracy has been requested, i.e., ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is too small or if the minimum lies close to a step length of zero. In this case you should apply the tests described in Section 3 to determine whether or not the final solution is acceptable. For a discussion of attainable accuracy see Gill et al. (1981).
If many iterations have occurred in which essentially no progress has been made or nag_opt_conj_grad (e04dgc) has failed to move from the initial point, then the function objfun may be incorrect. You should refer to the comments above under NE_DERIV_ERRORS and check the gradients using the ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}$ argument. Unfortunately, there may be small errors in the objective gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies.
NW_STEP_BOUND_TOO_SMALL
Computed upper bound on step length was too small.
The computed upper bound on the step length taken during the linesearch was too small. A rerun with an increased value of ${\mathbf{options}}\mathbf{.}{\mathbf{max_line_step}}$ ($\rho$ say) may be successful unless $\rho \ge {10}^{10}$ (the default value), in which case the current point cannot be improved upon.
NW_TOO_MANY_ITER
The maximum number of iterations, $〈\mathit{\text{value}}〉$, has been performed.
If the algorithm appears to be making progress, the value of ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ may be too small (see Section 10); you should increase its value and rerun nag_opt_conj_grad (e04dgc). If the algorithm seems to be ‘bogged down’, you should check for incorrect gradients or ill-conditioning as described above under NW_NO_IMPROVEMENT.

## 7  Accuracy

On successful exit the accuracy of the solution will be as defined by the optional argument ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.

## 8  Further Comments

### 8.1  Timing

Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than $n$ iterations. Problems without this property may require anything between $n$ and $5n$ iterations, with approximately $2n$ iterations being a common figure for moderately difficult problems.

## 9  Example

This example minimizes the function
 $F = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right).$
The options structure is declared and initialized by nag_opt_init (e04xxc). Five option values are read from a data file by use of nag_opt_read (e04xyc).

### 9.1  Program Text

Program Text (e04dgce.c)

### 9.2  Program Options

Program Options (e04dgce.opt)

### 9.3  Program Results

Program Results (e04dgce.r)

## 10  Optional Arguments

A number of optional input and output arguments to nag_opt_conj_grad (e04dgc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional arguments you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_conj_grad (e04dgc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, then this must be done directly in the calling program; such pointers cannot be assigned using nag_opt_read (e04xyc).

### 10.1  Optional Argument Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_conj_grad (e04dgc) together with their default values where relevant. The number $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)).
| Member | Type | Default |
| --- | --- | --- |
| list | Boolean | Nag_TRUE |
| print_level | Nag_PrintType | Nag_Soln_Iter |
| outfile | char[80] | stdout |
| print_fun | void (*)() | NULL |
| verify_grad | Nag_GradChk | Nag_SimpleCheck |
| print_gcheck | Boolean | Nag_TRUE |
| obj_check_start | Integer | 1 |
| obj_check_stop | Integer | n |
| max_iter | Integer | $\mathrm{max}\left(50,5{\mathbf{n}}\right)$ |
| f_prec | double | ${\epsilon }^{0.9}$ |
| optim_tol | double | ${{\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}}^{0.8}$ |
| linesearch_tol | double | 0.9 |
| max_line_step | double | ${10}^{10}$ |
| f_est | double | |
| iter | Integer | |
| nf | Integer | |

### 10.2  Description of the Optional Arguments

 list – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ the argument settings in the call to nag_opt_conj_grad (e04dgc) will be printed.
 print_level – Nag_PrintType Default $\text{}=\mathrm{Nag_Soln_Iter}$
On entry: the level of results printout produced by nag_opt_conj_grad (e04dgc). The following values are available:
- Nag_NoPrint: No output.
- Nag_Soln: The final solution.
- Nag_Iter: One line of output for each iteration.
- Nag_Soln_Iter: The final solution and one line of output for each iteration.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$, $\mathrm{Nag_Soln}$, $\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$.
 outfile – const char[80] Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{' \0 '}$ then the stdout stream is used.
 print_fun – pointer to function Default $\text{}=\text{}$ NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is
```
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
```
See Section 10.3.1 below for further details.
 verify_grad – Nag_GradChk Default $\text{}=\mathrm{Nag_SimpleCheck}$
On entry: specifies the level of derivative checking to be performed by nag_opt_conj_grad (e04dgc) on the gradient elements defined in objfun.
${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}$ may have the following values:
- Nag_NoCheck: No derivative check is performed.
- Nag_SimpleCheck: Perform a simple check of the gradient.
- Nag_CheckObj: Perform a component check of the gradient elements.
If ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_SimpleCheck}$ then a simple ‘cheap’ test is performed, which requires only one call to objfun. If ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_CheckObj}$ then a more reliable (but more expensive) test will be made on individual gradient components. This component check will be made in the range specified by ${\mathbf{options}}\mathbf{.}{\mathbf{obj_check_start}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{obj_check_stop}}$, default values being $1$ and n respectively. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983)). The result of the test is printed out by nag_opt_conj_grad (e04dgc) if ${\mathbf{options}}\mathbf{.}{\mathbf{print_gcheck}}=\mathrm{Nag_TRUE}$.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_NoCheck}$, $\mathrm{Nag_SimpleCheck}$ or $\mathrm{Nag_CheckObj}$.
 print_gcheck – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if Nag_TRUE the result of any derivative check (see ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}$) will be printed.
 obj_check_start – Integer Default $\text{}=1$
 obj_check_stop – Integer Default $\text{}={\mathbf{n}}$
On entry: these options take effect only when ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_CheckObj}$. They may be used to control the verification of gradient elements computed by the function objfun. For example, if the first 30 variables appear linearly in the objective, so that the corresponding gradient elements are constant, then it is reasonable for ${\mathbf{options}}\mathbf{.}{\mathbf{obj_check_start}}$ to be set to 31.
Constraint: $1\le {\mathbf{options}}\mathbf{.}{\mathbf{obj_check_start}}\le {\mathbf{options}}\mathbf{.}{\mathbf{obj_check_stop}}\le {\mathbf{n}}$.
 max_iter – Integer Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5{\mathbf{n}}\right)$
On entry: the limit on the number of iterations allowed before termination.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
 f_prec – double Default $\text{}={\epsilon }^{0.9}$
On entry: this argument defines ${\epsilon }_{r}$, which is intended to be a measure of the accuracy with which the problem function $F$ can be computed. The value of ${\epsilon }_{r}$ should reflect the relative precision of $1+\left|F\left(x\right)\right|$; i.e., ${\epsilon }_{r}$ acts as a relative precision when $\left|F\right|$ is large, and as an absolute precision when $\left|F\right|$ is small. For example, if $F\left(x\right)$ is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ${\epsilon }_{r}$ would be $\text{1.0e−6}$. In contrast, if $F\left(x\right)$ is typically of order ${10}^{-4}$ and the first six significant digits are known to be correct, an appropriate value for ${\epsilon }_{r}$ would be $\text{1.0e−10}$. The choice of ${\epsilon }_{r}$ can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981), for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ${\epsilon }_{r}$ should be large enough so that nag_opt_conj_grad (e04dgc) will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.
Constraint: $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}<1.0$.
 optim_tol – double Default $\text{}={{\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}}^{0.8}$
On entry: specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ indicates the number of correct figures desired in the objective function at the solution. For example, if ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is ${10}^{-6}$ and nag_opt_conj_grad (e04dgc) terminates successfully, the final value of $F$ should have approximately six correct figures. nag_opt_conj_grad (e04dgc) will terminate successfully if the iterative sequence of $x$-values is judged to have converged and the final point satisfies the termination criteria (see Section 3, where ${\tau }_{F}$ represents ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$).
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{f_prec}}\le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
 linesearch_tol – double Default $\text{}=0.9$
On entry: controls the accuracy with which the step $\alpha$ taken during each iteration approximates a minimum of the function along the search direction (the smaller the value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$, the more accurate the linesearch). The default value requests an inaccurate search, and is appropriate for most problems. A more accurate search may be appropriate when it is desirable to reduce the number of iterations – for example, if the objective function is cheap to evaluate.
Constraint: $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
 max_line_step – double Default $\text{}={10}^{10}$
On entry: defines the maximum allowable step length for the line search.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_line_step}}>0.0$.
 f_est – double
On entry: specifies the user-supplied guess of the optimum objective function value. This value is used by nag_opt_conj_grad (e04dgc) to calculate an initial step length (see Section 3). If no value is supplied then an initial step length of 1.0 will be used but it should be noted that for badly scaled functions a unit step along the steepest descent direction will often compute the function at very large values of $x$.
 iter – Integer
On exit: the number of iterations which have been performed in nag_opt_conj_grad (e04dgc).
 nf – Integer
On exit: the number of times the objective function has been evaluated (i.e., number of calls of objfun). The total excludes the calls made to objfun for purposes of derivative checking.

### 10.3  Description of Printed Output

The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{print_gcheck}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 10.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ then the argument values to nag_opt_conj_grad (e04dgc) are listed, followed by the result of any derivative check if ${\mathbf{options}}\mathbf{.}{\mathbf{print_gcheck}}=\mathrm{Nag_TRUE}$. The printout of the optimization results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_conj_grad (e04dgc).
If a simple derivative check, ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_SimpleCheck}$, is requested then the directional derivative, $g{\left(x\right)}^{\mathrm{T}}p$, of the objective gradient and its finite difference approximation are printed out, where $p$ is a random vector of unit length.
When a component derivative check, ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}=\mathrm{Nag_CheckObj}$, is requested then the following results are supplied for each component:
- x[i]: the element of $x$.
- dx[i]: the optimal finite difference interval.
- g[i]: the gradient element.
- Difference approxn.: the finite difference approximation.
- Itns: the number of trials performed to find a suitable difference interval.
The indicator, OK or BAD?, states whether the gradient element and finite difference approximation are in agreement.
If the gradient is believed to be in error nag_opt_conj_grad (e04dgc) will exit with fail set to NE_DERIV_ERRORS.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$ a single line of output is produced on completion of each iteration, this gives the following values:
- Itn: the current iteration number $k$.
- Nfun: the cumulative number of calls to objfun. The evaluations needed for the estimation of the gradients by finite differences are not included in the total Nfun. The value of Nfun is a guide to the amount of work required for the linesearch; nag_opt_conj_grad (e04dgc) will perform at most 16 function evaluations per iteration.
- Objective: the current value of the objective function, $F\left({x}_{k}\right)$.
- Norm g: the Euclidean norm of the gradient vector, $‖g\left({x}_{k}\right)‖$.
- Norm x: the Euclidean norm of ${x}_{k}$.
- Norm(x(k-1)-x(k)): the Euclidean norm of ${x}_{k-1}-{x}_{k}$.
- Step: the step $\alpha$ taken along the computed search direction ${p}_{k}$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$ or $\mathrm{Nag_Soln_Iter}$, the final result is printed out. This consists of:
- x: the final point, ${x}^{*}$.
- g: the final gradient vector, $g\left({x}^{*}\right)$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$ then printout will be suppressed; you can print the final solution when nag_opt_conj_grad (e04dgc) returns to the calling program.

#### 10.3.1  Output of results via a user-defined printing function

You may also specify your own print function for output of the results of any gradient check, the optimization results at each iteration and the final solution. The user-defined print function should be assigned to the ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ function pointer, which has prototype
```
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
```
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ it will be called in preference to the internal print function of nag_opt_conj_grad (e04dgc). Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_gcheck}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ members. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$.
If $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$ then the results from the last iteration of nag_opt_conj_grad (e04dgc) are in the following members of st:
n – Integer
The number of variables.
x – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the current point ${x}_{k}$.
f – double
The value of the current objective function.
g – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the first derivatives of $F$ at the current point ${x}_{k}$.
step – double
The step $\alpha$ taken along the search direction ${p}_{k}$.
xk_norm – double
The Euclidean norm of ${x}_{k-1}-{x}_{k}$.
iter – Integer
The number of iterations performed by nag_opt_conj_grad (e04dgc).
nf – Integer
The cumulative number of calls made to objfun.
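The documented prototype and members can be exercised with a minimal sketch. The struct definitions below are stand-ins covering only the members documented here (in real code the full types come from the NAG headers and contain many more members); the function name `my_print_fun` is an assumption for illustration.

```c
#include <stdio.h>

/* Stand-in definitions for illustration only: real code gets the full
   types from the NAG headers, which contain many more members. */
typedef int Integer;
typedef enum { Nag_FALSE, Nag_TRUE } Nag_Boolean;

typedef struct {                /* mock of the documented Nag_Search_State members */
    Integer n;                  /* number of variables                    */
    double *x;                  /* current point x_k                      */
    double  f;                  /* current objective value                */
    double *g;                  /* gradient of F at x_k                   */
    double  step;               /* step alpha along the search direction  */
    double  xk_norm;            /* Euclidean norm of x_{k-1} - x_k        */
    Integer iter;               /* iterations performed so far            */
    Integer nf;                 /* cumulative calls to objfun             */
} Nag_Search_State;

typedef struct {                /* mock of the relevant Nag_Comm members  */
    Nag_Boolean g_prt, it_prt, sol_prt;
} Nag_Comm;

/* A print function with the documented prototype: it inspects the comm
   flags to decide which members of st are valid on this call. */
static void my_print_fun(const Nag_Search_State *st, Nag_Comm *comm)
{
    Integer i;
    if (comm->it_prt == Nag_TRUE)
        printf("iter %d: f = %g, step = %g, |dx| = %g, nf = %d\n",
               st->iter, st->f, st->step, st->xk_norm, st->nf);
    if (comm->sol_prt == Nag_TRUE) {
        printf("final point:");
        for (i = 0; i < st->n; i++)
            printf(" %g", st->x[i]);
        printf("\n");
    }
}
```

With the real NAG types in scope, the function would be registered as `options.print_fun = my_print_fun;` before calling nag_opt_conj_grad (e04dgc).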
If $\mathbf{comm}\mathbf{\to }\mathbf{g_prt}=\mathrm{Nag_TRUE}$ then the following members are set:
n – Integer
The number of variables.
x – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the initial point ${x}_{0}$.
g – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the first derivatives of $F$ at the initial point ${x}_{0}$.
Details of any derivative check performed by nag_opt_conj_grad (e04dgc) are held in the following substructure of st:
gprint – Nag_GPrintSt *
This in turn contains two substructures, $\mathbf{gprint}\mathbf{\to }\mathbf{g_chk}$ and $\mathbf{gprint}\mathbf{\to }\mathbf{f_sim}$, and a pointer to an array of substructures, $*\mathbf{gprint}\mathbf{\to }\mathbf{f_comp}$.
g_chk – Nag_GChkSt
The substructure $\mathbf{gprint}\mathbf{\to }\mathbf{g_chk}$ contains the members:
type – Nag_GradChk
The type of derivative check performed by nag_opt_conj_grad (e04dgc). This will be the same value as in ${\mathbf{options}}\mathbf{.}{\mathbf{verify_grad}}$.
g_error – Integer
This member will be equal to one of the error codes NE_NOERROR or NE_DERIV_ERRORS according to whether the derivatives were found to be correct or not.
obj_start – Integer
Specifies the gradient element at which any component check started. This value will be equal to ${\mathbf{options}}\mathbf{.}{\mathbf{obj_check_start}}$.
obj_stop – Integer
Specifies the gradient element at which any component check ended. This value will be equal to ${\mathbf{options}}\mathbf{.}{\mathbf{obj_check_stop}}$.
f_sim – Nag_SimSt *
The result of a simple derivative check, $\mathbf{g_chk}\mathbf{\to }\mathbf{type}=\mathrm{Nag_SimpleCheck}$, will be held in this substructure which has members:
correct – Nag_Boolean
If Nag_TRUE then the objective gradient is consistent with the finite difference approximation according to a simple check.
dir_deriv – double *
The directional derivative $g{\left(x\right)}^{T}p$ where $p$ is a random vector of unit length with elements of approximately equal magnitude.
fd_approx – double *
The finite difference approximation, $\left(F\left(x+hp\right)-F\left(x\right)\right)/h$, to the directional derivative.
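The two quantities compared by the simple check can be reproduced by hand. The sketch below uses an assumed objective $F(x)={x_{0}}^{2}+{x_{1}}^{2}$ (chosen only for illustration, with analytic gradient $(2x_{0},2x_{1})$) and computes both the directional derivative and its forward-difference approximation.

```c
#include <math.h>

/* Illustrative objective F(x) = x0^2 + x1^2 (an assumption for this
   sketch), whose analytic gradient is (2*x0, 2*x1). */
static double F(const double x[2]) { return x[0]*x[0] + x[1]*x[1]; }

/* Reproduce the two quantities reported by the simple check:
   dir_deriv = g(x)^T p  and  fd_approx = (F(x + h p) - F(x)) / h. */
static void simple_check(const double x[2], const double p[2], double h,
                         double *dir_deriv, double *fd_approx)
{
    double g[2]  = { 2.0 * x[0], 2.0 * x[1] };      /* analytic gradient */
    double xp[2] = { x[0] + h * p[0], x[1] + h * p[1] };

    *dir_deriv = g[0] * p[0] + g[1] * p[1];
    *fd_approx = (F(xp) - F(x)) / h;
}
```

For $x=(1,2)$ and the unit-length direction $p=(1,1)/\sqrt{2}$, the two values agree to within roughly the step size $h$, which is the kind of consistency the simple check looks for before setting correct to Nag_TRUE.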
f_comp – Nag_CompSt *
The results of a component derivative check, $\mathbf{g_chk}\mathbf{\to }\mathbf{type}=\mathrm{Nag_CheckObj}$, will be held in the array of $\mathbf{st}\mathbf{\to }\mathbf{n}$ substructures of type Nag_CompSt pointed to by $\mathbf{gprint}\mathbf{\to }\mathbf{f_comp}$. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983)).
correct – Nag_Boolean
If Nag_TRUE then this objective gradient component is consistent with its finite difference approximation.
hopt – double *
The optimal finite difference interval. This is dx[i] in the NAG default printout.
gdiff – double *
The finite difference approximation for this gradient component.
iter – Integer
The number of trials performed to find a suitable difference interval.
comment – char
A character string describing why the estimation of the finite difference interval may have failed to produce a satisfactory relative condition error of the second-order difference. Possible strings are: "Constant?", "Linear or odd?", "Too nonlinear?" and "Small derivative?".
The relevant members of the structure comm are:
g_prt – Nag_Boolean
Will be Nag_TRUE only when the print function is called with the result of the derivative check of objfun.
it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
user – double *
iuser – Integer *
p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_conj_grad (e04dgc) or during a call to objfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The type Pointer will be void * with a C compiler that defines void * and char * otherwise.
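The generic pointer member can carry arbitrary user state into the print function. In this minimal sketch the Nag_Comm mock holds only the member used here, and the `MyData` structure and `count_call` function are assumptions for illustration.

```c
#include <stdio.h>

typedef void *Pointer;                   /* as on a compiler defining void * */

typedef struct { Pointer p; } Nag_Comm;  /* mock: only the member used here  */

/* Hypothetical user information threaded through comm->p; the name
   MyData and its members are assumptions for this sketch. */
typedef struct {
    int  verbosity;
    long calls;                          /* count print-function invocations */
} MyData;

static void count_call(Nag_Comm *comm)
{
    MyData *data = (MyData *) comm->p;   /* recover the user information */
    data->calls++;
    if (data->verbosity > 0)
        printf("print function invocation %ld\n", data->calls);
}
```

Before calling nag_opt_conj_grad (e04dgc) the caller would set comm.p to the address of an allocated MyData; the same pointer is then visible inside objfun and ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$.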