NAG Library Routine Document
e04xaf
(estimate_deriv_old)
e04xaa (estimate_deriv)
1
Purpose
e04xaf/e04xaa computes an approximation to the gradient vector and/or the Hessian matrix for use in conjunction with, or following the use of, an optimization routine (such as
e04uff/e04ufa).
e04xaa is a version of
e04xaf that has additional arguments in order to make it safe for use in multithreaded applications (see
Section 5).
2
Specification
2.1
Specification for e04xaf
Fortran Interface
Subroutine e04xaf ( 
msglvl, n, epsrf, x, mode, objfun, ldh, hforw, objf, objgrd, hcntrl, h, iwarn, work, iuser, ruser, info, ifail) 
Integer, Intent (In) :: msglvl, n, ldh
Integer, Intent (Inout) :: mode, iuser(*), ifail
Integer, Intent (Out) :: iwarn, info(n)
Real (Kind=nag_wp), Intent (In) :: epsrf
Real (Kind=nag_wp), Intent (Inout) :: x(n), hforw(n), h(ldh,*), work(*), ruser(*)
Real (Kind=nag_wp), Intent (Out) :: objf, objgrd(n), hcntrl(n)
External :: objfun

C Header Interface
#include <nagmk26.h>
void 
e04xaf_ (const Integer *msglvl, const Integer *n, const double *epsrf, double x[], Integer *mode, void (NAG_CALL *objfun)(Integer *mode, const Integer *n, const double x[], double *objf, double objgrd[], const Integer *nstate, Integer iuser[], double ruser[]), const Integer *ldh, double hforw[], double *objf, double objgrd[], double hcntrl[], double h[], Integer *iwarn, double work[], Integer iuser[], double ruser[], Integer info[], Integer *ifail) 

2.2
Specification for e04xaa
Fortran Interface
Subroutine e04xaa ( 
msglvl, n, epsrf, x, mode, objfun, ldh, hforw, objf, objgrd, hcntrl, h, iwarn, work, iuser, ruser, info, lwsav, iwsav, rwsav, ifail) 
Integer, Intent (In) :: msglvl, n, ldh, iwsav(1)
Integer, Intent (Inout) :: mode, iuser(*), ifail
Integer, Intent (Out) :: iwarn, info(n)
Real (Kind=nag_wp), Intent (In) :: epsrf, rwsav(1)
Real (Kind=nag_wp), Intent (Inout) :: x(n), hforw(n), h(ldh,*), work(*), ruser(*)
Real (Kind=nag_wp), Intent (Out) :: objf, objgrd(n), hcntrl(n)
Logical, Intent (In) :: lwsav(1)
External :: objfun

C Header Interface
#include <nagmk26.h>
void 
e04xaa_ (const Integer *msglvl, const Integer *n, const double *epsrf, double x[], Integer *mode, void (NAG_CALL *objfun)(Integer *mode, const Integer *n, const double x[], double *objf, double objgrd[], const Integer *nstate, Integer iuser[], double ruser[]), const Integer *ldh, double hforw[], double *objf, double objgrd[], double hcntrl[], double h[], Integer *iwarn, double work[], Integer iuser[], double ruser[], Integer info[], const logical lwsav[], const Integer iwsav[], const double rwsav[], Integer *ifail) 

3
Description
e04xaf/e04xaa is similar to routine FDCALC described in
Gill et al. (1983a). It should be noted that this routine aims to compute sufficiently accurate estimates of the derivatives for use with an optimization algorithm. If you require more accurate estimates you should refer to
Chapter D04.
e04xaf/e04xaa computes finite difference approximations to the gradient vector and the Hessian matrix for a given function. The simplest approximation involves the forward-difference formula, in which the derivative
${f}^{\prime}\left(x\right)$ of a univariate function
$f\left(x\right)$ is approximated by the quantity
${f}_{F}\left(x,h\right)=\frac{f\left(x+h\right)-f\left(x\right)}{h}$
for some interval
$h>0$, where the subscript 'F' denotes ‘forward-difference’ (see
Gill et al. (1983b)).
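As a concrete illustration of the forward-difference quotient above, here is a minimal Python sketch (NAG routines are not involved; `forward_difference` is a hypothetical helper name):

```python
import math

def forward_difference(f, x, h):
    # f_F(x, h) = (f(x + h) - f(x)) / h for some interval h > 0
    return (f(x + h) - f(x)) / h

# For f = exp at x = 0 the true derivative is 1; the truncation error of
# the forward-difference estimate behaves roughly like f''(x) * h / 2.
estimate = forward_difference(math.exp, 0.0, 1.0e-6)
```

Making $h$ smaller reduces the truncation error but amplifies the rounding (condition) error; balancing the two is exactly what the routine does.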
To summarise the procedure used by
e04xaf/e04xaa (for the case when the objective function is available and you require estimates of gradient values and Hessian matrix diagonal values, i.e.,
${\mathbf{mode}}=0$) consider a univariate function
$f$ at the point
$x$. (In order to obtain the gradient of a multivariate function
$F\left(x\right)$, where
$x$ is an
$n$-vector, the procedure is applied to each component of
$x$, keeping the other components fixed.) Roughly speaking, the method is based on the fact that the bound on the relative truncation error in the forward-difference approximation tends to be an increasing function of
$h$, while the relative condition error bound is generally a decreasing function of
$h$, hence changes in
$h$ will tend to have opposite effects on these errors (see
Gill et al. (1983b)).
The ‘best’ interval
$h$ is given by
${h}_{F}=2\sqrt{\frac{{e}_{R}\left(1+\left|f\left(x\right)\right|\right)}{\left|\Phi \right|}}\text{,}\quad \text{(1)}$
where
$\Phi $ is an estimate of
${f}^{\prime \prime}\left(x\right)$, and
${e}_{R}$ is an estimate of the relative error associated with computing the function (see Chapter 8 of
Gill et al. (1981)). Given an interval
$h$,
$\Phi $ is defined by the second-order approximation
$\Phi =\frac{f\left(x+h\right)-2f\left(x\right)+f\left(x-h\right)}{{h}^{2}}\text{.}$
The decision as to whether a given value of
$\Phi $ is acceptable involves
$\hat{c}\left(\Phi \right)$, the following bound on the relative condition error in
$\Phi $:
$\hat{c}\left(\Phi \right)=\frac{4{e}_{R}\left(1+\left|f\left(x\right)\right|\right)}{{h}^{2}\left|\Phi \right|}\text{.}$
(When
$\Phi $ is zero,
$\hat{c}\left(\Phi \right)$ is taken as an arbitrary large number.)
The procedure selects the interval
${h}_{\varphi}$ (to be used in computing
$\Phi $) from a sequence of trial intervals
$\left({h}_{k}\right)$. The initial trial interval is taken as
$10\bar{h}$, where
$\bar{h}=2\left(1+\left|x\right|\right)\sqrt{{e}_{R}}\text{,}$
unless you specify the initial value to be used.
The value of
$\hat{c}\left(\Phi \right)$ for a trial value
${h}_{k}$ is defined as ‘acceptable’ if it lies in the interval
$\left[0.001,0.1\right]$. In this case
${h}_{\varphi}$ is taken as
${h}_{k}$, and the current value of
$\Phi $ is used to compute
${h}_{F}$ from
(1). If
$\hat{c}\left(\Phi \right)$ is unacceptable, the next trial interval is chosen so that the relative condition error bound will either decrease or increase, as required. If the bound on the relative condition error is too large, a larger interval is used as the next trial value in an attempt to reduce the condition error bound. On the other hand, if the relative condition error bound is too small,
${h}_{k}$ is reduced.
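To make the trial-interval procedure concrete, here is a loose Python sketch. It is not the FDCALC/e04xaf logic; the function names and the simple factor-of-ten update rule are assumptions made for illustration:

```python
import math

def select_interval(f, x, eps_r, h0, kmax=20):
    # Loose sketch of the trial-interval loop: adjust h until the bound
    # c_hat on the relative condition error in Phi lies in [0.001, 0.1],
    # then return the forward-difference interval h_F of equation (1).
    abs_err = eps_r * (1.0 + abs(f(x)))        # absolute error bound on f
    h = h0
    for _ in range(kmax):
        # second-order estimate of f''(x) from a central second difference
        phi = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
        c_hat = math.inf if phi == 0.0 else 4.0 * abs_err / (h**2 * abs(phi))
        if 0.001 <= c_hat <= 0.1:              # acceptable condition error
            return 2.0 * math.sqrt(abs_err / abs(phi))   # h_F, equation (1)
        h = h * 10.0 if c_hat > 0.1 else h / 10.0  # too noisy: widen; too smooth: shrink
    return h  # no acceptable interval found (cf. info(i) = 1, 2 or 3)
```

The returned interval is of order the square root of the absolute function error, as equation (1) suggests.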
The procedure will fail to produce an acceptable value of $\hat{c}\left(\Phi \right)$ in two situations. Firstly, if ${f}^{\prime \prime}\left(x\right)$ is extremely small, then $\hat{c}\left(\Phi \right)$ may never become small, even for a very large value of the interval. Alternatively, $\hat{c}\left(\Phi \right)$ may never exceed $0.001$, even for a very small value of the interval. This usually implies that ${f}^{\prime \prime}\left(x\right)$ is extremely large, and occurs most often near a singularity.
As a check on the validity of the estimated first derivative, the procedure provides a comparison of the forward-difference approximation computed with
${h}_{F}$ (as above) and the central-difference approximation computed with
${h}_{\varphi}$. Using the central-difference formula the first derivative can be approximated by
${f}_{c}\left(x,h\right)=\frac{f\left(x+h\right)-f\left(x-h\right)}{2h}$
where
$h>0$. If the values
${h}_{F}$ and
${h}_{\varphi}$ do not display some agreement, neither can be considered reliable.
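The forward/central comparison can be mimicked as follows; encoding ‘half a decimal place’ as a relative tolerance of $10^{-0.5}$ is an assumption for illustration, not the routine's exact criterion:

```python
import math

def fwd(f, x, h):
    # forward-difference estimate of f'(x) using interval h
    return (f(x + h) - f(x)) / h

def cen(f, x, h):
    # central-difference estimate of f'(x) using interval h
    return (f(x + h) - f(x - h)) / (2.0 * h)

def estimates_agree(fd, cd):
    # crude "half a decimal place" agreement test (an assumption):
    # relative difference below 10 ** (-0.5)
    scale = max(abs(fd), abs(cd), 1.0)
    return abs(fd - cd) / scale <= 10.0 ** (-0.5)
```

If the two estimates disagree by this measure, neither should be trusted, which mirrors the diagnostic reported under ${\mathbf{info}}\left(i\right)=4$.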
When both function and gradients are available and you require the Hessian matrix (i.e.,
${\mathbf{mode}}=1$)
e04xaf/e04xaa follows a similar procedure to the case above with the exception that the gradient function
$g\left(x\right)$ is substituted for the objective function and so the forward-difference interval for the first derivative of
$g\left(x\right)$ with respect to variable
${x}_{j}$ is computed. The
$j$th column of the approximate Hessian matrix is then defined as in Chapter 2 of
Gill et al. (1981), by
$\frac{g\left(x+{h}_{j}{e}_{j}\right)-g\left(x\right)}{{h}_{j}}\text{,}$
where
${h}_{j}$ is the best forward-difference interval associated with the
$j$th component of
$g$ and
${e}_{j}$ is the vector with unity in the
$j$th position and zeros elsewhere.
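For ${\mathbf{mode}}=1$ the column-by-column construction can be sketched as follows (a hypothetical Python helper, not the routine's implementation; `grad` returns the analytic gradient as a list, and `h` holds the per-variable forward-difference intervals):

```python
def hessian_from_gradient(grad, x, h):
    # Column j of the approximate Hessian: (g(x + h_j e_j) - g(x)) / h_j
    n = len(x)
    g0 = grad(x)
    cols = []
    for j in range(n):
        xj = list(x)
        xj[j] += h[j]          # step along the j-th coordinate direction e_j
        gj = grad(xj)
        cols.append([(gj[i] - g0[i]) / h[j] for i in range(n)])
    # cols[j][i] approximates the (i, j) Hessian entry; return row-major H
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```

One extra gradient evaluation per variable suffices, which matches the evaluation counts quoted in Section 9.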
When only the objective function is available and you require the gradients and Hessian matrix (i.e.,
${\mathbf{mode}}=2$)
e04xaf/e04xaa again follows the same procedure as the case for
${\mathbf{mode}}=0$ except that this time the value of
$\hat{c}\left(\Phi \right)$ for a trial value
${h}_{k}$ is defined as acceptable if it lies in the interval
$\left[0.0001,0.01\right]$ and the initial trial interval is taken as
$\bar{h}=2\left(1+\left|x\right|\right)\sqrt[4]{{e}_{R}}\text{.}$
The approximate Hessian matrix
$G$ is then defined as in Chapter 2 of
Gill et al. (1981), by
${G}_{ij}=\frac{f\left(x+{h}_{i}{e}_{i}+{h}_{j}{e}_{j}\right)-f\left(x+{h}_{i}{e}_{i}\right)-f\left(x+{h}_{j}{e}_{j}\right)+f\left(x\right)}{{h}_{i}{h}_{j}}\text{.}$
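For ${\mathbf{mode}}=2$ the corresponding function-only construction can be sketched as follows (again a Python illustration, assuming the standard function-value formula from Chapter 2 of Gill et al. (1981)):

```python
def hessian_from_function(f, x, h):
    # G_ij = (f(x + h_i e_i + h_j e_j) - f(x + h_i e_i)
    #         - f(x + h_j e_j) + f(x)) / (h_i * h_j)
    n = len(x)

    def shifted(*idx):
        y = list(x)
        for k in idx:
            y[k] += h[k]   # steps the same coordinate twice when i == j
        return y

    f0 = f(x)
    fi = [f(shifted(i)) for i in range(n)]
    return [[(f(shifted(i, j)) - fi[i] - fi[j] + f0) / (h[i] * h[j])
             for j in range(n)] for i in range(n)]
```

Note that each entry needs function values only, at the cost of many more evaluations than the gradient-based construction.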
4
References
Gill P E, Murray W, Saunders M A and Wright M H (1983a) Documentation for FDCALC and FDCORE Technical Report SOL 83–6 Stanford University
Gill P E, Murray W, Saunders M A and Wright M H (1983b) Computing forward-difference intervals for numerical optimization SIAM J. Sci. Statist. Comput. 4 310–321
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
5
Arguments
 1: $\mathbf{msglvl}$ – Integer Input

On entry: must indicate the amount of intermediate output desired (see
Section 9.1 for a description of the printed output). All output is written on the current advisory message unit (see
x04abf).
Value  Definition 
0  No printout 
1  A summary is printed out for each variable plus any warning messages. 
Other  Values other than $0$ and $1$ should normally be used only at the direction of NAG. 
 2: $\mathbf{n}$ – Integer Input

On entry: the number $n$ of independent variables.
Constraint:
${\mathbf{n}}\ge 1$.
 3: $\mathbf{epsrf}$ – Real (Kind=nag_wp) Input

On entry: must define
${e}_{R}$, which is intended to be a measure of the accuracy with which the problem function
$F$ can be computed. The value of
${e}_{R}$ should reflect the relative precision of
$1+\left|F\left(x\right)\right|$, i.e., it acts as a relative precision when
$\left|F\right|$ is large, and as an absolute precision when
$\left|F\right|$ is small. For example, if
$F\left(x\right)$ is typically of order
$1000$ and the first six significant digits are known to be correct, an appropriate value for
${e}_{R}$ would be
$\text{1.0E−6}$.
A discussion of
epsrf is given in Chapter 8 of
Gill et al. (1981). If
epsrf is either too small or too large on entry, a warning will be printed if
${\mathbf{msglvl}}=1$, the argument
iwarn will be set to the appropriate value on exit, and
e04xaf/e04xaa will use a default value of
${e}_{M}^{0.9}$, where
${e}_{M}$ is the
machine precision.
If ${\mathbf{epsrf}}\le 0.0$ on entry then e04xaf/e04xaa will use the default value internally. The default value will be appropriate for most simple functions that are computed with full accuracy.
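The fallback value can be computed directly; for example, in IEEE double precision (a Python sketch):

```python
import sys

# If epsrf is non-positive (or otherwise unusable), the routine substitutes
# eps_M ** 0.9, where eps_M is the machine precision.
eps_m = sys.float_info.epsilon        # about 2.22e-16 for IEEE double
default_epsrf = eps_m ** 0.9
```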
 4: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the point $x$ at which the derivatives are to be computed.
 5: $\mathbf{mode}$ – Integer Input/Output

On entry: indicates which derivatives are required.
 ${\mathbf{mode}}=0$
 Compute the gradient and Hessian diagonal values, the objective function having been supplied via objfun.
 ${\mathbf{mode}}=1$
 Compute the Hessian matrix, both the objective function and gradients having been supplied via objfun.
 ${\mathbf{mode}}=2$
 Compute the gradient values and Hessian matrix, the objective function having been supplied via objfun.
On exit: is changed
only if you set
mode negative in
objfun, i.e., you have requested termination of
e04xaf/e04xaa.
 6: $\mathbf{objfun}$ – Subroutine, supplied by the user. External Procedure

If
${\mathbf{mode}}=0$ or
$2$,
objfun must calculate the objective function; otherwise if
${\mathbf{mode}}=1$,
objfun must calculate the objective function and the gradients.
The specification of
objfun is:
Fortran Interface
Subroutine objfun ( mode, n, x, objf, objgrd, nstate, iuser, ruser)
Integer, Intent (In) :: n, nstate
Integer, Intent (Inout) :: mode, iuser(*)
Real (Kind=nag_wp), Intent (In) :: x(n)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: objf, objgrd(n)

 1: $\mathbf{mode}$ – Integer Input/Output

mode indicates which argument values within
objfun need to be set.
On entry: to
objfun,
mode is always set to the value that you set it to before the call to
e04xaf/e04xaa.
On exit: its value must not be altered unless you wish to indicate a failure within
objfun, in which case it should be set to a negative value. If
mode is negative on exit from
objfun, the execution of
e04xaf/e04xaa is terminated with
ifail set to
mode.
 2: $\mathbf{n}$ – Integer Input

On entry: the number $n$ of variables as input to e04xaf/e04xaa.
 3: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the point $x$ at which the objective function (and gradients if ${\mathbf{mode}}=1$) is to be evaluated.
 4: $\mathbf{objf}$ – Real (Kind=nag_wp) Output

On exit: must be set to the value of the objective function.
 5: $\mathbf{objgrd}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: if
${\mathbf{mode}}=1$,
${\mathbf{objgrd}}\left(j\right)$ must contain the value of the first derivative with respect to
${x}_{j}$.
If
${\mathbf{mode}}\ne 1$,
objgrd need not be set.
 6: $\mathbf{nstate}$ – Integer Input

On entry: will be set to
$1$ on the first call of
objfun by
e04xaf/e04xaa, and is
$0$ for all subsequent calls. Thus, if you wish,
nstate may be tested within
objfun in order to perform certain calculations once only. For example, you may read data on the first call only.
 7: $\mathbf{iuser}\left(*\right)$ – Integer array User Workspace
 8: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) array User Workspace

objfun is called with the arguments
iuser and
ruser as supplied to
e04xaf/e04xaa. You should use the arrays
iuser and
ruser to supply information to
objfun.
objfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which
e04xaf/e04xaa is called. Arguments denoted as
Input must
not be changed by this procedure.
Note: objfun should not return floatingpoint NaN (Not a Number) or infinity values, since these are not handled by
e04xaf/e04xaa. If your code inadvertently
does return any NaNs or infinities,
e04xaf/e04xaa is likely to produce unexpected results.
 7: $\mathbf{ldh}$ – Integer Input

On entry: the first dimension of the array
h as declared in the (sub)program from which
e04xaf/e04xaa is called.
Constraint:
${\mathbf{ldh}}\ge {\mathbf{n}}$.
 8: $\mathbf{hforw}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output

On entry: ${\mathbf{hforw}}\left(j\right)$ specifies the initial trial interval for computing the appropriate partial derivative with respect to the
$j$th variable.
If
${\mathbf{hforw}}\left(j\right)\le 0.0$, the initial trial interval is computed by
e04xaf/e04xaa (see
Section 3).
On exit: ${\mathbf{hforw}}\left(j\right)$ is the best interval found for computing a forwarddifference approximation to the appropriate partial derivative for the $j$th variable.
 9: $\mathbf{objf}$ – Real (Kind=nag_wp) Output

On exit: the value of the objective function evaluated at the input vector in
x.
 10: $\mathbf{objgrd}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: if
${\mathbf{mode}}=0$ or
$2$,
${\mathbf{objgrd}}\left(j\right)$ contains the best estimate of the first partial derivative for the
$j$th variable.
If
${\mathbf{mode}}=1$,
${\mathbf{objgrd}}\left(j\right)$ contains the first partial derivative for the
$j$th variable evaluated at the input vector in
x.
 11: $\mathbf{hcntrl}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: ${\mathbf{hcntrl}}\left(j\right)$ is the best interval found for computing a centraldifference approximation to the appropriate partial derivative for the $j$th variable.
 12: $\mathbf{h}\left({\mathbf{ldh}},*\right)$ – Real (Kind=nag_wp) array Output

Note: the second dimension of the array
h
must be at least
$1$ if
${\mathbf{mode}}=0$ and at least
${\mathbf{n}}$ if
${\mathbf{mode}}=1$ or
$2$.
On exit: if
${\mathbf{mode}}=0$, the estimated Hessian diagonal elements are contained in the first column of this array.
If ${\mathbf{mode}}=1$ or $2$, the estimated Hessian matrix is contained in the leading $n$ by $n$ part of this array.
 13: $\mathbf{iwarn}$ – Integer Output

On exit:
${\mathbf{iwarn}}=0$ on successful exit.
If the value of
epsrf on entry is too small or too large then
iwarn is set to
$1$ or
$2$ respectively on exit and the default value for
epsrf is used within
e04xaf/e04xaa.
If
${\mathbf{msglvl}}>0$ then warnings will be printed if
epsrf is too small or too large.
 14: $\mathbf{work}\left(*\right)$ – Real (Kind=nag_wp) array Workspace

Note: the dimension of the array
work
must be at least
${\mathbf{n}}$ if
${\mathbf{mode}}=0$ and at least
${\mathbf{n}}\times \left({\mathbf{n}}+1\right)$ if
${\mathbf{mode}}=1$ or
$2$.
 15: $\mathbf{iuser}\left(*\right)$ – Integer array User Workspace
 16: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) array User Workspace

iuser and
ruser are not used by
e04xaf/e04xaa, but are passed directly to
objfun and may be used to pass information to this routine.
 17: $\mathbf{info}\left({\mathbf{n}}\right)$ – Integer array Output

On exit:
${\mathbf{info}}\left(j\right)$ represents diagnostic information on variable
$j$. (See
Section 6 for more details.)
 18: $\mathbf{ifail}$ – Integer Input/Output

Note: for e04xaa, ifail does not occur in this position in the argument list. See the additional arguments described below.
On entry:
ifail must be set to
$0$,
$-1\text{ or }1$. If you are unfamiliar with this argument you should refer to
Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value
$1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is
$0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
 Note: the following are additional arguments for specific use with e04xaa. Users of e04xaf therefore need not read the remainder of this description.
 18: $\mathbf{lwsav}\left(1\right)$ – Logical array Communication Array
 19: $\mathbf{iwsav}\left(1\right)$ – Integer array Communication Array
 20: $\mathbf{rwsav}\left(1\right)$ – Real (Kind=nag_wp) array Communication Array

These arguments are no longer required by e04xaf/e04xaa.
 21: $\mathbf{ifail}$ – Integer Input/Output

Note: see the argument description for
ifail above.
6
Error Indicators and Warnings
On exit from
e04xaf/e04xaa both diagnostic arguments
info and
ifail should be tested.
ifail represents an overall diagnostic indicator, whereas the integer array
info represents diagnostic information on each variable.
If on entry
${\mathbf{ifail}}=0$ or
$1$, explanatory error messages are output on the current error message unit (as defined by
x04aaf).
Errors or warnings detected by the routine:
 ${\mathbf{ifail}}=1$

On entry, ${\mathbf{ldh}}=\langle \mathit{value}\rangle $ and ${\mathbf{n}}=\langle \mathit{value}\rangle $.
Constraint: ${\mathbf{ldh}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{mode}}=\langle \mathit{value}\rangle $.
Constraint: $0\le {\mathbf{mode}}\le 2$.
On entry, ${\mathbf{n}}=\langle \mathit{value}\rangle $.
Constraint: ${\mathbf{n}}\ge 1$.
 ${\mathbf{ifail}}=2$

One or more variables have a nonzero
info value. This may not necessarily represent an unsuccessful exit – see diagnostic information on
info.
 ${\mathbf{ifail}}<0$

User requested termination by setting
mode negative in
objfun.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 3.9 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
See
Section 3.8 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
See
Section 3.7 in How to Use the NAG Library and its Documentation for further information.
Diagnostic information returned via
info is as follows:
 ${\mathbf{info}}\left(i\right)=1$

The appropriate function appears to be constant.
${\mathbf{hforw}}\left(i\right)$ is set to the initial trial interval value (see
Section 3) corresponding to a well-scaled problem and
Error est. in the printed output is set to zero. This value occurs when the estimated relative condition error in the first derivative approximation is unacceptably large for every value of the finite difference interval. If this happens when the function is not constant the initial interval may be too small; in this case, it may be worthwhile to rerun
e04xaf/e04xaa with larger initial trial interval values supplied in
hforw (see
Section 3). This error may also occur if the function evaluation includes an inordinately large constant term or if
epsrf is too large.
 ${\mathbf{info}}\left(i\right)=2$

The appropriate function appears to be linear or odd.
${\mathbf{hforw}}\left(i\right)$ is set to the smallest interval with acceptable bounds on the relative condition error in the forward- and backward-difference estimates. In this case, the estimated relative condition error in the second derivative approximation remained large for every trial interval, but the estimated error in the first derivative approximation was acceptable for at least one interval. If the function is not linear or odd, the relative condition error in the second derivative may be decreasing very slowly; in this case, it may be worthwhile to rerun
e04xaf/e04xaa with larger initial trial interval values supplied in
hforw (see
Section 3).
 ${\mathbf{info}}\left(i\right)=3$

The second derivative of the appropriate function appears to be so large that it cannot be reliably estimated (i.e., near a singularity). ${\mathbf{hforw}}\left(i\right)$ is set to the smallest trial interval.
This value occurs when the relative condition error estimate in the second derivative remained very small for every trial interval.
If the second derivative is not large the relative condition error in the second derivative may be increasing very slowly. It may be worthwhile to rerun
e04xaf/e04xaa with smaller initial trial interval values supplied in
hforw (see
Section 3). This error may also occur when the given value of
epsrf is not a good estimate of a bound on the absolute error in the appropriate function (i.e.,
epsrf is too small).
 ${\mathbf{info}}\left(i\right)=4$

The algorithm terminated with an apparently acceptable estimate of the second derivative. However, the forward-difference estimates of the appropriate first derivatives (computed with the final estimate of the ‘optimal’ forward-difference interval) and the central-difference estimates (computed with the interval used to compute the final estimate of the second derivative) do not agree to half a decimal place. The usual reason that the forward- and central-difference estimates fail to agree is that the first derivative is small.
If the first derivative is not small, it may be helpful to execute the procedure at a different point.
7
Accuracy
If ${\mathbf{ifail}}={\mathbf{0}}$ on exit the algorithm terminated successfully, i.e., the forward-difference estimates of the appropriate first derivatives (computed with the final estimate of the ‘optimal’ forward-difference interval ${h}_{F}$) and the central-difference estimates (computed with the interval ${h}_{\varphi}$ used to compute the final estimate of the second derivative) agree to at least half a decimal place.
In short-word-length implementations, when computing the full Hessian matrix given function values only (i.e., ${\mathbf{mode}}=2$), the elements of the computed Hessian will have at best $1$ to $2$ figures of accuracy.
8
Parallelism and Performance
e04xaf/e04xaa is not threaded in any implementation.
9
Further Comments
To evaluate an acceptable set of finite difference intervals for a well-scaled problem, the routine will require around two function evaluations per variable; in a badly scaled problem, however, as many as six function evaluations per variable may be needed.
If you request the full Hessian matrix supplying both function and gradients (i.e.,
${\mathbf{mode}}=1$) or function only (i.e.,
${\mathbf{mode}}=2$) then a further
${\mathbf{n}}$ or
$3\times {\mathbf{n}}\times \left({\mathbf{n}}+1\right)/2$ function evaluations respectively are required.
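The additional evaluation counts can be tallied with a small bookkeeping helper (illustrative only; it does not include the roughly two to six evaluations per variable used to find the intervals):

```python
def extra_evals(n, mode):
    # Additional evaluations per Section 9: about n gradient evaluations
    # for mode = 1, and 3 * n * (n + 1) / 2 function evaluations for mode = 2.
    if mode == 1:
        return n
    if mode == 2:
        return 3 * n * (n + 1) // 2
    return 0
```

For example, with ${\mathbf{n}}=4$ this gives $4$ extra evaluations for ${\mathbf{mode}}=1$ and $30$ for ${\mathbf{mode}}=2$.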
9.1
Description of the Printed Output
The following is a description of the printed output from
e04xaf/e04xaa as controlled by the argument
msglvl.
Output when
${\mathbf{msglvl}}=1$ is as follows:
J 
the number of the variable for which the difference interval has been computed. 
$\mathtt{X}\left(j\right)$ 
$j$th variable of $x$ as set by you. 
F. dif. int. 
the best interval found for computing a forward-difference approximation to the appropriate partial derivative with respect to the $j$th variable. 
C. dif. int. 
the best interval found for computing a central-difference approximation to the appropriate partial derivative with respect to the $j$th variable. 
Error est. 
a bound on the estimated error in the final forwarddifference approximation. When ${\mathbf{info}}\left(j\right)=1$, Error est. is set to zero. 
Grad. est. 
best estimate of the first partial derivative with respect to the $j$th variable. 
Hess diag est. 
best estimate of the second partial derivative with respect to the $j$th variable. 
fun evals. 
the number of function evaluations used to compute the final difference intervals for the $j$th variable. 
$\mathtt{info}\left(j\right)$ 
the value of info for the $j$th variable. 
10
Example
This example computes the gradient vector and the Hessian matrix of the following function:
at the point
$\left(2,1,1,1\right)$.
10.1
Program Text
Note: the following programs illustrate the use of e04xaf and e04xaa.
Program Text (e04xafe.f90)
Program Text (e04xaae.f90)
10.2
Program Data
None.
10.3
Program Results
Program Results (e04xafe.r)
Program Results (e04xaae.r)