
nag_numdiff (d04aa) calculates a set of derivatives (up to order $14$) of a function of one real variable at a point, together with a corresponding set of error estimates, using an extension of the Neville algorithm.

nag_numdiff (d04aa) provides a set of approximations

$${\mathbf{der}}\left(j\right)\text{, \hspace{1em}}j=1,2,\dots ,n$$

to the derivatives

$${f}^{\left(j\right)}\left({x}_{0}\right)\text{, \hspace{1em}}j=1,2,\dots ,n$$

of a real valued function $f\left(x\right)$ at a real abscissa ${x}_{0}$, together with a set of error estimates

$${\mathbf{erest}}\left(j\right)\text{, \hspace{1em}}j=1,2,\dots ,n$$

which hopefully satisfy

$$|{\mathbf{der}}\left(j\right)-{f}^{\left(j\right)}\left({x}_{0}\right)|<{\mathbf{erest}}\left(j\right)\text{, \hspace{1em}}j=1,2,\dots ,n\text{.}$$

You must provide the value of ${x}_{0}$, a value of $n$ (which is reduced to $14$ should it exceed $14$), a function which evaluates $f\left(x\right)$ for all real $x$, and a step length $h$. The results ${\mathbf{der}}\left(j\right)$ and ${\mathbf{erest}}\left(j\right)$ are based on $21$ function values:

$$f\left({x}_{0}\right),\quad f({x}_{0}\pm (2i-1)h)\text{, \hspace{1em}}i=1,2,\dots ,10\text{.}$$

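For concreteness, the evaluation grid can be written down directly. The following Python sketch shows the sampling pattern only; `sample_points` is a hypothetical helper name, not part of the NAG interface:

```python
import numpy as np

def sample_points(x0, h):
    """The 21 abscissae used: x0 itself and x0 +/- (2i-1)h for i = 1..10."""
    offsets = (2.0 * np.arange(1, 11) - 1.0) * h   # h, 3h, 5h, ..., 19h
    return np.concatenate(([x0], x0 + offsets, x0 - offsets))

pts = sample_points(0.5, 0.1)
print(len(pts))   # 21
```

The points are symmetric about ${x}_{0}$, which is what allows the odd and even parts of the derivative calculation to be separated.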
Internally, nag_numdiff (d04aa) calculates the odd order derivatives and the even order derivatives separately. An option allows you to restrict the calculation to only odd (or even) order derivatives. For each derivative the function employs an extension of the Neville algorithm (see Lyness and Moler (1969)) to obtain a selection of approximations.

For example, for odd derivatives, based on $20$ function values, nag_numdiff (d04aa) calculates a set of numbers

$${T}_{k,p,s}\text{, \hspace{1em}}p=s,s+1,\dots ,6\text{, \hspace{1em}}k=0,1,\dots ,9-p$$

each of which is an approximation to ${f}^{(2s+1)}\left({x}_{0}\right)/(2s+1)!$. A specific approximation ${T}_{k,p,s}$ is of polynomial degree $2p+2$ and is based on polynomial interpolation using function values $f({x}_{0}\pm (2i-1)h)$, for $i=k,\dots ,k+p$. In the absence of round-off error, the better approximations would be associated with the larger values of $p$ and of $k$. However, round-off error in function values has an increasingly contaminating effect for successively larger values of $p$. The function therefore makes a judicious choice between all the approximations in the following way.

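One way to realise such a table is sketched below in Python. It rests on an interpretation of the scheme (an assumption, not NAG's implementation): the odd part $g\left(x\right)=\left(f\left({x}_{0}+x\right)-f\left({x}_{0}-x\right)\right)/2$ satisfies $g\left(x\right)/x=\sum_{s}c_{s}{\left({x}^{2}\right)}^{s}$ with $c_{s}={f}^{(2s+1)}\left({x}_{0}\right)/(2s+1)!$, so each approximation can be read off as a coefficient of an interpolating polynomial in ${x}^{2}$:

```python
import numpy as np

def odd_T_table(f, x0, h):
    """Sketch of T_{k,p,s} approximations for odd derivatives.

    Assumed scheme (not NAG's code): with offsets x_i = (2i+1)h, i = 0..9,
    g(x)/x = sum_s c_s (x^2)^s where c_s = f^(2s+1)(x0)/(2s+1)!, so
    T[(k, p)][s] is the coefficient of (x^2)^s in the degree-p polynomial
    interpolating g(x)/x through the p+1 points i = k, ..., k+p.
    """
    i = np.arange(10)
    x = (2 * i + 1) * h                       # h, 3h, ..., 19h: 20 function values
    y = (f(x0 + x) - f(x0 - x)) / (2.0 * x)   # odd part of f, divided by x
    u = x * x
    T = {}
    for p in range(7):                        # p = 0, ..., 6
        for k in range(10 - p):               # k = 0, ..., 9 - p
            coeffs = np.polyfit(u[k:k + p + 1], y[k:k + p + 1], p)
            T[(k, p)] = coeffs[::-1]          # lowest-order coefficient first
    return T

# Example function from this document: f(x) = 0.5*exp(2x - 1), so f'(0.5) = 1
f = lambda x: 0.5 * np.exp(2.0 * x - 1.0)
T = odd_T_table(f, 0.5, 0.1)
approx_c0 = T[(0, 6)][0]                      # approximates f'(0.5)/1! = 1
```

Consistent with the text, the larger-$p$ entries are more accurate for exact data but amplify any noise in the function values.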
For a specified value of $s$, let

$${R}_{p}={U}_{p}-{L}_{p}\text{, \hspace{1em}}p=s,s+1,\dots ,6$$

where ${U}_{p}={\displaystyle \underset{k}{\mathrm{max}}}\phantom{\rule{0.25em}{0ex}}\left({T}_{k,p,s}\right)$ and ${L}_{p}={\displaystyle \underset{k}{\mathrm{min}}}\phantom{\rule{0.25em}{0ex}}\left({T}_{k,p,s}\right)$, for $k=0,1,\dots ,9-p$, and let $\stackrel{-}{p}$ be such that ${R}_{\stackrel{-}{p}}={\displaystyle \underset{p}{\mathrm{min}}}\phantom{\rule{0.25em}{0ex}}\left({R}_{p}\right)$, for $p=s,\dots ,6$.

The function returns

$${\mathbf{der}}\left(2s+1\right)=\frac{1}{8-\stackrel{-}{p}}\times \{\sum _{k=0}^{9-\stackrel{-}{p}}{T}_{k,\stackrel{-}{p},s}-{U}_{\stackrel{-}{p}}-{L}_{\stackrel{-}{p}}\}(2s+1)!$$

and

$${\mathbf{erest}}\left(2s+1\right)={R}_{\stackrel{-}{p}}\times (2s+1)!\times {K}_{2s+1}$$

where ${K}_{j}$ is a safety factor which has been assigned the values

$${K}_{j}=1,\quad j\le 9;\qquad {K}_{j}=1.5,\quad j=10,11;\qquad {K}_{j}=2,\quad j\ge 12$$

on the basis of performance statistics.

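The selection rule and the two returned quantities translate directly into code. The Python sketch below is illustrative only; the table layout (`T_rows` as a dict of arrays) and the helper names are made up, not the NAG interface:

```python
import math
import numpy as np

def K(j):
    """Safety factor from the document: 1 (j <= 9), 1.5 (j = 10, 11), 2 (j >= 12)."""
    return 1.0 if j <= 9 else (1.5 if j <= 11 else 2.0)

def der_and_erest(T_rows, s):
    """Apply the selection rule to approximations of f^(2s+1)(x0)/(2s+1)!.

    T_rows is a dict {p: array of T_{k,p,s}, k = 0..9-p} for p = s..6.
    """
    R = {p: row.max() - row.min() for p, row in T_rows.items()}  # R_p = U_p - L_p
    p_bar = min(R, key=R.get)                                    # p minimising the spread
    row = T_rows[p_bar]
    U, L = row.max(), row.min()
    fact = math.factorial(2 * s + 1)
    # Trimmed mean: drop the extreme values U and L, average the remaining
    # 8 - p_bar entries, then rescale by (2s+1)! to recover the derivative.
    der = (row.sum() - U - L) / (8 - p_bar) * fact
    erest = R[p_bar] * fact * K(2 * s + 1)
    return der, erest
```

With $s=0$ this yields the estimate of ${f}^{\prime}\left({x}_{0}\right)$ together with its error bound; dividing by $(2s+1)!$ at the end is what converts the interpolation coefficients back into derivatives.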
The even order derivatives are calculated in a precisely analogous manner.

Lyness J N and Moler C B (1966) van der Monde systems and numerical differentiation *Numer. Math.* **8** 458–464

Lyness J N and Moler C B (1969) Generalised Romberg methods for integrals of derivatives *Numer. Math.* **14** 1–14

- 1: xval – double scalar
- The point at which the derivatives are required, ${x}_{0}$.
- 2: nder – int64int32nag_int scalar
- Must be set so that its absolute value is the highest order derivative required.
- ${\mathbf{nder}}>0$
- All derivatives up to order $\mathrm{min}\phantom{\rule{0.125em}{0ex}}({\mathbf{nder}},14)$ are calculated.
- ${\mathbf{nder}}<0$ and nder is even
- Only even order derivatives up to order $\mathrm{min}\phantom{\rule{0.125em}{0ex}}(-{\mathbf{nder}},14)$ are calculated.
- ${\mathbf{nder}}<0$ and nder is odd
- Only odd order derivatives up to order $\mathrm{min}\phantom{\rule{0.125em}{0ex}}(-{\mathbf{nder}},13)$ are calculated.

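These nder conventions can be summarised in a small sketch (Python; `requested_orders` is a hypothetical helper, not part of the toolbox):

```python
def requested_orders(nder):
    """Derivative orders computed for a given nder, per the rules above."""
    if nder == 0:
        raise ValueError("nder must be nonzero")
    if nder > 0:
        return list(range(1, min(nder, 14) + 1))          # all orders
    if nder % 2 == 0:
        return list(range(2, min(-nder, 14) + 1, 2))      # even orders only
    return list(range(1, min(-nder, 13) + 1, 2))          # odd orders only

print(requested_orders(-7))   # [1, 3, 5, 7], as in the example program below
```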
- 3: hbase – double scalar
- The initial step length, which may be positive or negative. For advice on the choice of hbase see Section [Further Comments].
- 4: fun – function handle or string containing name of m-file
- [result] = fun(x)

**Input Parameters**

- 1: x – double scalar
- The value of the argument $x$.

If you have equally spaced tabular data, the following information may be useful:
(i) in any call of nag_numdiff (d04aa) the only values of $x$ for which $f\left(x\right)$ will be required are $x={\mathbf{xval}}$ and $x={\mathbf{xval}}\pm (2j-1){\mathbf{hbase}}$, for $j=1,2,\dots ,10$; and
(ii) $f\left({x}_{0}\right)$ is always computed, but it is disregarded when only odd order derivatives are required.

**Output Parameters**

- 1: result – double scalar
- The value of the function $f$ evaluated at $x$.

**Optional Input Parameters**

None.

**Input Parameters Omitted from the MATLAB Interface**

None.

**Output Parameters**

- 1: der($14$) – double array
- ${\mathbf{der}}\left(j\right)$ contains an approximation to the $j$th derivative of $f\left(x\right)$ at $x={\mathbf{xval}}$, so long as the $j$th derivative is one of those requested by you when specifying nder. For other values of $j$, ${\mathbf{der}}\left(j\right)$ is unused.
- 2: erest($14$) – double array
- An estimate of the absolute error in the corresponding result ${\mathbf{der}}\left(j\right)$, so long as the $j$th derivative is one of those requested by you when specifying nder. The sign of ${\mathbf{erest}}\left(j\right)$ is positive unless the result ${\mathbf{der}}\left(j\right)$ is questionable: it is set negative when $\left|{\mathbf{der}}\left(j\right)\right|<\left|{\mathbf{erest}}\left(j\right)\right|$ or when for some other reason there is doubt about the validity of the result ${\mathbf{der}}\left(j\right)$ (see Section [Error Indicators and Warnings]). For other values of $j$, ${\mathbf{erest}}\left(j\right)$ is unused.
- 3: ifail – int64int32nag_int scalar
- ${\mathbf{ifail}}=0$ unless the function detects an error (see Section [Error Indicators and Warnings]).

Errors or warnings detected by the function:

${\mathbf{ifail}}=1$

On entry, ${\mathbf{nder}}=0$, or ${\mathbf{hbase}}=0.0$.

If ifail has a value of zero on exit then nag_numdiff (d04aa) has terminated successfully, but before any use is made of a derivative ${\mathbf{der}}\left(j\right)$ the value of ${\mathbf{erest}}\left(j\right)$ must be checked.

The accuracy of the results is problem dependent. An estimate of the accuracy of each result ${\mathbf{der}}\left(j\right)$ is returned in ${\mathbf{erest}}\left(j\right)$ (see Sections [Description], [Parameters] and [Further Comments]).

A basic feature of any floating-point function for numerical differentiation based on real function values on the real axis is that successively higher order derivative approximations are successively less accurate. It is expected that in most cases ${\mathbf{der}}\left(14\right)$ will be unusable. As an aid to this process, the sign of ${\mathbf{erest}}\left(j\right)$ is set negative when the estimated absolute error is greater than the approximate derivative itself, i.e., when the approximate derivative may be so inaccurate that it may even have the wrong sign. It is also set negative in some other cases when information available to the function indicates that the corresponding value of ${\mathbf{der}}\left(j\right)$ is questionable.

The actual values in erest depend on the accuracy of the function values, the properties of the machine arithmetic, the analytic properties of the function being differentiated and the user-supplied step length hbase (see Section [Further Comments]). The only hard and fast rule is that for a given ${\mathbf{fun}}\left({\mathbf{xval}}\right)$ and hbase, the values of ${\mathbf{erest}}\left(j\right)$ increase with increasing $j$. The limit of $14$ is dictated by experience. Only very rarely can one obtain meaningful approximations for higher order derivatives on conventional machines.

The time taken by nag_numdiff (d04aa) depends primarily on the time spent on function evaluations. Apart from this, the time is roughly equivalent to that required to evaluate the function $21$ times and calculate a finite difference table having about $200$ entries in total.

The results depend very critically on the choice of the user-supplied step length hbase. The overall accuracy is diminished as hbase becomes small (because of the effect of round-off error) and as hbase becomes large (because the discretization error also becomes large). If the function is used four or five times with different values of hbase, one can usually find a reasonably good value; a process in which hbase is successively halved (or doubled) is usually quite effective. Experience has shown that in cases in which the Taylor series for ${\mathbf{fun}}\left(x\right)$ about xval has a finite radius of convergence $R$, choices of ${\mathbf{hbase}}>R/19$ are not likely to lead to good results, since some function values then lie outside the circle of convergence.

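The halving strategy can be explored without the library at all. The sketch below (plain central differences in Python, not the NAG routine) uses this document's example function to show the trade-off for the first derivative: the error shrinks as $h$ is halved while discretization error dominates, then typically grows again once round-off in the function values takes over.

```python
import numpy as np

f = lambda x: 0.5 * np.exp(2.0 * x - 1.0)   # example function; f'(0.5) = 1 exactly
x0 = 0.5

# Successively halve h and watch the first-derivative central-difference error.
for m in range(0, 45, 4):
    h = 0.5 / 2**m
    approx = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    print(f"h = {h:.3e}   |error| = {abs(approx - 1.0):.3e}")
```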

```matlab
function nag_numdiff_example
xval = 0.5;
nder = int64(-7);
hbase = 0.5;
fun = @(x) 0.5*exp(2.0*x-1.0);
[der, erest, ifail] = nag_numdiff(xval, nder, hbase, fun)
```

```
der =
   1.0e+04 *
    0.1392         0   -0.3139         0    0.8762         0   -2.4753         0         0         0         0         0         0         0
erest =
   1.0e+05 *
   -1.0734         0   -1.4378         0   -2.4790         0   -4.4838         0         0         0         0         0         0         0
ifail =
    0
```


```matlab
function d04aa_example
xval = 0.5;
nder = int64(-7);
hbase = 0.5;
fun = @(x) 0.5*exp(2.0*x-1.0);
[der, erest, ifail] = d04aa(xval, nder, hbase, fun)
```

```
der =
   1.0e+04 *
    0.1392         0   -0.3139         0    0.8762         0   -2.4753         0         0         0         0         0         0         0
erest =
   1.0e+05 *
   -1.0734         0   -1.4378         0   -2.4790         0   -4.4838         0         0         0         0         0         0         0
ifail =
    0
```

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013