NAG Library Function Document
nag_ode_ivp_rk_setup (d02pvc)
1 Purpose
nag_ode_ivp_rk_setup (d02pvc) is a setup function which must be called prior to the first call of either of the integration functions
nag_ode_ivp_rk_range (d02pcc) or
nag_ode_ivp_rk_onestep (d02pdc).
2 Specification
#include <nag.h> 
#include <nagd02.h> 
void 
nag_ode_ivp_rk_setup (Integer neq,
double tstart,
const double ystart[],
double tend,
double tol,
const double thres[],
Nag_RK_method method,
Nag_RK_task task,
Nag_ErrorAssess errass,
double hstart,
Nag_ODE_RK *opt,
NagError *fail) 

3 Description
nag_ode_ivp_rk_setup (d02pvc) and its associated functions (
nag_ode_ivp_rk_range (d02pcc),
nag_ode_ivp_rk_onestep (d02pdc),
nag_ode_ivp_rk_reset_tend (d02pwc),
nag_ode_ivp_rk_interp (d02pxc),
nag_ode_ivp_rk_errass (d02pzc)) solve the initial value problem for a first order system of ordinary differential equations. The functions, based on Runge–Kutta methods and derived from RKSUITE (
Brankin et al. (1991)), integrate
${y}^{\prime }=f\left(t,y\right)\text{ given }y\left({t}_{0}\right)={y}_{0}$
where
$y$ is the vector of
neq solution components and
$t$ is the independent variable.
The integration proceeds by steps from the initial point
${t}_{0}$ towards the final point
${t}_{f}$. An approximate solution
$y$ is computed at each step. For each component
${y}_{i},i=1,2,\dots ,{\mathbf{neq}}$ the error made in the step, i.e., the local error, is estimated. The step size is chosen automatically so that the integration will proceed efficiently while keeping this local error estimate smaller than a tolerance that you specify by means of arguments
tol and
thres.
nag_ode_ivp_rk_range (d02pcc) can be used to solve the ‘usual task’, namely integrating the system of differential equations to obtain answers at points you specify.
nag_ode_ivp_rk_onestep (d02pdc) is used for all more ‘complicated tasks’.
You should consider carefully how you want the local error to be controlled. Essentially the code uses relative local error control, with
tol being the desired relative accuracy. For reliable computation, the code must work with approximate solutions that have some correct digits, so there is an upper bound on the value you can specify for
tol. It is impossible to compute a numerical solution that is more accurate than the correctly rounded value of the true solution, so you are not allowed to specify a
tol that is too small for the precision you are using. The magnitude of the local error in
${y}_{i}$ on any step will not be greater than
${\mathbf{tol}}\times \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\sigma}_{i},{\mathbf{thres}}\left[i-1\right]\right)$ where
${\sigma}_{i}$ is an average magnitude of
${y}_{i}$ over the step. If
${\mathbf{thres}}\left[i-1\right]$ is smaller than the current value of
${\sigma}_{i}$, this is a relative error test and
tol indicates how many significant digits you want in
${y}_{i}$. If
${\mathbf{thres}}\left[i-1\right]$ is larger than the current value of
${\sigma}_{i}$, this is an absolute error test with tolerance
${\mathbf{tol}}\times {\mathbf{thres}}\left[i-1\right]$. Relative error control is the recommended mode of operation, but pure relative error control,
${\mathbf{thres}}\left[i-1\right]=0.0$, is not permitted. See
Section 10 for further information about error control.
nag_ode_ivp_rk_range (d02pcc) and
nag_ode_ivp_rk_onestep (d02pdc) control local error rather than the true (global) error, the difference between the numerical and true solution. Control of the local error controls the true error indirectly. Roughly speaking, the code produces a solution that satisfies the differential equation with a discrepancy bounded in magnitude by the error tolerance. What this implies about how close the numerical solution is to the true solution depends on the stability of the problem. Most practical problems are at least moderately stable, and the true error is then comparable to the error tolerance. To judge the accuracy of the numerical solution, you could reduce
tol substantially, e.g., use
$0.1\times {\mathbf{tol}}$, and solve the problem again. This will usually result in a rather more accurate solution, and the true error of the first integration can be estimated by comparison. Alternatively, a global error assessment can be computed automatically using the argument
errass. Because indirect control of the true error by controlling the local error is generally satisfactory and because both ways of assessing true errors cost twice, or more, the cost of the integration itself, such assessments are used mostly for spot checks, selecting appropriate tolerances for local error control, and exploratory computations.
nag_ode_ivp_rk_range (d02pcc) and
nag_ode_ivp_rk_onestep (d02pdc) each implement three Runge–Kutta formula pairs, and you must select one for the integration. The best choice for
method depends on the problem. The order of accuracy is 3, 5, 8, corresponding to
${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$,
$\mathrm{Nag\_RK\_4\_5}$ or
$\mathrm{Nag\_RK\_7\_8}$ respectively. As a rule, the lower the value of
tol, the higher the order of accuracy of the
method. If the components of
thres are small enough that you are effectively specifying relative error control, experience suggests:
tol 	efficient method
${10}^{-2}$–${10}^{-4}$ 	$\mathrm{Nag\_RK\_2\_3}$
${10}^{-3}$–${10}^{-6}$ 	$\mathrm{Nag\_RK\_4\_5}$
${10}^{-5}$ and smaller 	$\mathrm{Nag\_RK\_7\_8}$
The overlap in the ranges of tolerances appropriate for a given
method merely reflects the dependence of efficiency on the problem being solved. Making
tol smaller will normally make the integration more expensive. However, in the range of tolerances appropriate to a
method, the increase in cost is modest. There are situations for which one
method, or even this kind of code, is a poor choice. You should not specify a very small
${\mathbf{thres}}\left[i-1\right]$ when the
$i$th solution component might vanish. In particular, you should not do this when
${y}_{i}=0.0$. If you do, the code will have to work hard with any
method to compute significant digits, but
${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$ is a particularly poor choice in this situation. All three
methods are inefficient when the problem is ‘stiff’. If it is only mildly stiff, you can solve it with acceptable efficiency with
${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$, but if it is moderately or very stiff, a code designed specifically for such problems will be much more efficient. The higher the order of accuracy of the
method, the more smoothness is required of the solution for the
method to be efficient.
When assessment of the true (global) error is requested, this error assessment is updated at each step. Its value can be obtained at any time by a call to
nag_ode_ivp_rk_errass (d02pzc). The code monitors the computation of the global error assessment and reports any doubts it has about the reliability of the results. The assessment scheme requires some smoothness of
$f\left(t,y\right)$, and it can be deceived if
$f$ is insufficiently smooth. At very crude tolerances the numerical solution can become so inaccurate that it is impossible to continue assessing the accuracy reliably. At very stringent tolerances the effects of finite precision arithmetic can make it impossible to assess the accuracy reliably. The cost of this is roughly twice the cost of the integration itself with
${\mathbf{method}}=\mathrm{Nag\_RK\_4\_5}$ or
$\mathrm{Nag\_RK\_7\_8}$ and three times with
${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$.
The first step of the integration is critical because it sets the scale of the problem. The integrator will find a starting step size automatically if you set the variable
hstart to 0.0. Automatic selection of the first step is so effective that you should normally use it. Nevertheless, you might want to specify a trial value for the first step to be certain that the code recognizes the scale on which phenomena occur near the initial point. Also, automatic computation of the first step size involves some cost, so supplying a good value for this step size will result in a less expensive start. If you are confident that you have a good value, provide it in the variable
hstart.
4 References
Brankin R W, Gladwell I and Shampine L F (1991) RKSUITE: A suite of Runge–Kutta codes for the initial value problem for ODEs SoftReport 91-S1 Southern Methodist University
5 Arguments
 1:
neq – Integer 	Input
On entry: the number of ordinary differential equations in the system.
Constraint:
${\mathbf{neq}}\ge 1$.
 2:
tstart – double 	Input

On entry: the initial value of the independent variable, ${t}_{0}$.
 3:
ystart[neq] – const double 	Input

On entry: ${y}_{0}$, the initial values of the solution, ${y}_{i}$ for $i=1,2,\dots ,{\mathbf{neq}}$, at ${t}_{0}$.
 4:
tend – double 	Input

On entry: the final value of the independent variable,
${t}_{f}$, at which the solution is required.
tstart and
tend together determine the direction of integration.
Constraint:
tend must be distinguishable from
tstart for the method and the precision of the machine being used.
 5:
tol – double 	Input

On entry: a relative error tolerance.
Constraint:
$10.0\times \mathit{machineprecision}\le {\mathbf{tol}}\le 0.01$.
 6:
thres[neq] – const double 	Input

On entry: a vector of thresholds.
Constraint:
${\mathbf{thres}}\left[i-1\right]\ge \sqrt{\epsilon }$, where $\epsilon $ is the smallest possible machine number (see X02AMC).
 7:
method – Nag_RK_method 	Input
On entry: the Runge–Kutta method to be used.
 ${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$
 A 2(3) pair is used.
 ${\mathbf{method}}=\mathrm{Nag\_RK\_4\_5}$
 A 4(5) pair is used.
 ${\mathbf{method}}=\mathrm{Nag\_RK\_7\_8}$
 A 7(8) pair is used.
Constraint:
${\mathbf{method}}=\mathrm{Nag\_RK\_2\_3}$, $\mathrm{Nag\_RK\_4\_5}$ or $\mathrm{Nag\_RK\_7\_8}$.
 8:
task – Nag_RK_task 	Input
On entry: determines whether the usual integration task is to be performed using
nag_ode_ivp_rk_range (d02pcc) or a more complicated task is to be performed using
nag_ode_ivp_rk_onestep (d02pdc).
 ${\mathbf{task}}=\mathrm{Nag\_RK\_range}$
 nag_ode_ivp_rk_range (d02pcc) is to be used for the integration.
 ${\mathbf{task}}=\mathrm{Nag\_RK\_onestep}$
 nag_ode_ivp_rk_onestep (d02pdc) is to be used for the integration.
Constraint:
${\mathbf{task}}=\mathrm{Nag\_RK\_range}$ or $\mathrm{Nag\_RK\_onestep}$.
 9:
errass – Nag_ErrorAssess 	Input
On entry: specifies whether a global error assessment is to be computed with the main integration. ${\mathbf{errass}}=\mathrm{Nag\_ErrorAssess\_on}$ specifies that it is.
Constraint:
${\mathbf{errass}}=\mathrm{Nag\_ErrorAssess\_on}$ or $\mathrm{Nag\_ErrorAssess\_off}$.
 10:
hstart – double 	Input

On entry: a value for the size of the first step in the integration to be attempted. The absolute value of
hstart is used with the direction being determined by
tstart and
tend. The actual first step taken by the integrator may be different to
hstart if the underlying algorithm determines that
hstart is unsuitable.
 ${\mathbf{hstart}}=0.0$
 The size of the first step is computed automatically.
Suggested value:
${\mathbf{hstart}}=0.0$.
 11:
opt – Nag_ODE_RK * 	Output
On exit: the structure of type Nag_ODE_RK initialized to appropriate values and to be passed unchanged to the integration functions
nag_ode_ivp_rk_range (d02pcc) or
nag_ode_ivp_rk_onestep (d02pdc). Memory will have been allocated by nag_ode_ivp_rk_setup (d02pvc). This memory is used by
nag_ode_ivp_rk_range (d02pcc),
nag_ode_ivp_rk_onestep (d02pdc),
nag_ode_ivp_rk_interp (d02pxc),
nag_ode_ivp_rk_reset_tend (d02pwc) and
nag_ode_ivp_rk_errass (d02pzc). The library function
nag_ode_ivp_rk_free (d02ppc) is provided so that this memory can be freed when the integration is complete or the setup function nag_ode_ivp_rk_setup (d02pvc) is to be reentered.
 12:
fail – NagError * 	Input/Output

The NAG error argument (see
Section 3.6 in the Essential Introduction).
6 Error Indicators and Warnings
 NE_2_REAL_ARG_EQ

On entry, ${\mathbf{tstart}}=\langle \mathit{\text{value}}\rangle $ while ${\mathbf{tend}}=\langle \mathit{\text{value}}\rangle $. These arguments must satisfy ${\mathbf{tstart}}\ne {\mathbf{tend}}$.
 NE_2_REAL_ARG_TOO_CLOSE

On entry, ${\mathbf{tend}}=\langle \mathit{\text{value}}\rangle $ while ${\mathbf{tstart}}=\langle \mathit{\text{value}}\rangle $. These arguments must satisfy $\mathrm{abs}\left({\mathbf{tend}}-{\mathbf{tstart}}\right)\ge \langle \mathit{\text{value}}\rangle $.
 NE_ALLOC_FAIL

Dynamic memory allocation failed.
 NE_BAD_PARAM

On entry, argument
errass had an illegal value.
On entry, argument
method had an illegal value.
On entry, argument
task had an illegal value.
 NE_INT_ARG_LT

On entry, ${\mathbf{neq}}=\langle \mathit{\text{value}}\rangle $.
Constraint: ${\mathbf{neq}}\ge 1$.
 NE_REAL_ARRAY_INPUT

On entry, ${\mathbf{thres}}\left[\langle \mathit{\text{value}}\rangle \right]=\langle \mathit{\text{value}}\rangle $.
Constraint: ${\mathbf{thres}}\left[\langle \mathit{\text{value}}\rangle \right]\ge \langle \mathit{\text{value}}\rangle $.
 NE_REAL_RANGE_CONS

On entry,
${\mathbf{tol}}=\langle \mathit{\text{value}}\rangle $ and
$10\times \mathit{machineprecision}=\langle \mathit{\text{value}}\rangle $. The argument
tol must satisfy
$10\times \mathit{machineprecision}\le {\mathbf{tol}}\le 0.01$.
7 Accuracy
Not applicable.
8 Parallelism and Performance
Not applicable.
9 Further Comments
The value of the argument
tend may be reset during the integration without the overhead associated with a complete restart; this can be achieved by a call to
nag_ode_ivp_rk_reset_tend (d02pwc).
It is often the case that a solution component
${y}_{i}$ is of no interest when it is smaller in magnitude than a certain threshold. You can inform the code of this by setting
${\mathbf{thres}}\left[i-1\right]$ to this threshold. In this way you avoid the cost of computing significant digits in
${y}_{i}$ when only the fact that it is smaller than the threshold is of interest. This matter is important when
${y}_{i}$ vanishes, and in particular, when the initial value
${\mathbf{ystart}}\left[i-1\right]$ vanishes. An appropriate threshold depends on the general size of
${y}_{i}$ in the course of the integration. Physical reasoning may help you select suitable threshold values. If you do not know what to expect of
$y$, you can find out by a preliminary integration using
nag_ode_ivp_rk_range (d02pcc) with nominal values of
thres. As
nag_ode_ivp_rk_range (d02pcc) steps from
${t}_{0}$ towards
${t}_{f}$ for each
$i=1,2,\dots ,{\mathbf{neq}}$ it forms
YMAX $\left[i-1\right]$, the largest magnitude of
${y}_{i}$ computed at any step in the integration so far. Using this you can determine more appropriate values for
thres for an accurate integration. You might, for example, take
${\mathbf{thres}}\left[i-1\right]$ to be
$10.0\times $ machine precision times the final value of
YMAX $\left[i-1\right]$.
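The suggestion above can be sketched in C. This is illustrative only: ymax stands for the values accumulated during a preliminary integration with nag_ode_ivp_rk_range (d02pcc), DBL_EPSILON stands in for machine precision, and the resulting values must still satisfy the constraint on thres given in Section 5.

```c
#include <float.h>

/* Illustrative sketch only: set each threshold to 10 * machine
   precision times the largest magnitude the corresponding component
   attained in a preliminary integration. */
void thresholds_from_ymax(const double ymax[], double thres[], int neq)
{
    for (int i = 0; i < neq; i++)
        thres[i] = 10.0 * DBL_EPSILON * ymax[i];
}
```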
10 Example