
1 Scope of the Chapter

This chapter provides routines for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.

2 Background to the Problems

The routines in this chapter are designed to estimate:
(a) the value of a one-dimensional definite integral of the form
 $\int_a^b f(x)\,dx$ (1)
where $f\left(x\right)$ is defined by you, either at a set of points $\left({x}_{\mathit{i}},f\left({x}_{\mathit{i}}\right)\right)$, for $\mathit{i}=1,2,\dots ,n$, where $a={x}_{1}<{x}_{2}<\cdots <{x}_{n}=b$, or in the form of a function; and the limits of integration $a,b$ may be finite or infinite.
Some methods are specially designed for integrands of the form
 $f(x)=w(x)g(x)$ (2)
which contain a factor $w\left(x\right)$, called the weight-function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w\left(x\right)$ factor.
(b) the values of the one-dimensional indefinite integrals arising from (1) where the ranges of integration are interior to the interval $\left[a,b\right]$.
(c) the value of a multidimensional definite integral of the form
 $\int_{R_n} f(x_1,x_2,\dots,x_n)\,dx_n\cdots dx_2\,dx_1$ (3)
where $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is a function defined by you and ${R}_{n}$ is some region of $n$-dimensional space.
The simplest form of ${R}_{n}$ is the $n$-rectangle defined by
 $a_i\le x_i\le b_i,\quad i=1,2,\dots,n$ (4)
where $a_i$ and $b_i$ are constants. When $a_i$ and $b_i$ are functions of $x_j$ (for $j<i$), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate the transformation procedure.

2.1 One-dimensional Integrals

To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e.,
 $\int_a^b f(x)\,dx \simeq \sum_{i=1}^{N} w_i f(x_i).$ (5)
The points ${x}_{i}$ within the interval $\left[a,b\right]$ are known as the abscissae, and the ${w}_{i}$ are known as the weights.
More generally, if the integrand has the form (2), the corresponding formula is
 $\int_a^b w(x)g(x)\,dx \simeq \sum_{i=1}^{N} w_i g(x_i).$ (6)
If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights.
The appropriate rule to use will depend on the interval $\left[a,b\right]$ – whether finite or otherwise – and on the form of any $w\left(x\right)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f\left(x\right)$; or of $g\left(x\right)$, if there is a $w\left(x\right)$ factor present.
Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f\left(x\right)$ or $g\left(x\right)$ of polynomial form.
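As an illustration of formula (5) (a minimal sketch, not NAG Library code; the function and constant names are hypothetical), the following applies a hard-coded 3-point Gauss–Legendre rule, which is exact for polynomials up to degree $5$:

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]; exact for polynomials of degree <= 5.
GL3_NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
GL3_WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def gauss3(f, a, b):
    """Weighted sum of integrand values, as in formula (5), after a
    linear map of [-1, 1] onto [a, b]."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x)
                      for w, x in zip(GL3_WEIGHTS, GL3_NODES))
```

For example, `gauss3(lambda x: x**4, -1.0, 1.0)` reproduces $\int_{-1}^{1} x^4\,dx = 2/5$ to rounding error, while for a non-polynomial integrand the rule is only approximate.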
The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented.
(a) Single rule evaluation procedures
A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result.
(b) Automatic procedures
The number of abscissae, $N$, within $\left[a,b\right]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:
(i) Whole-interval procedures
A series of rules using increasing values of $N$ is applied successively over the whole interval $\left[a,b\right]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $\left(2N+1\right)$-point formula by adding $\left(N+1\right)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson.
(ii) Adaptive procedures
The interval $\left[a,b\right]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth. Thus, the distribution of abscissae is adapted to the shape of the integrand.
Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities.
An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, was efficient (i.e., required the use of as few abscissae as possible), and was reliable (i.e., always gave results to within the requested accuracy). Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must, therefore, be emphasized that the automatic routines in this chapter cannot be assumed to be $100%$ reliable. In general, however, the reliability is very high.

2.2 Multidimensional Integrals

A distinction must be made between cases of moderately low dimensionality (say, up to $4$ or $5$ dimensions), and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example a product of $3$-point rules in $20$ dimensions would require more than ${10}^{9}$ integrand evaluations. Special techniques such as the Monte Carlo methods can be used to deal with high dimensions.
(a) Products of one-dimensional rules
Using a two-dimensional integral as an example, we have
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N} w_i \left[\int_{a_2}^{b_2} f(x_i,y)\,dy\right]$ (7)
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N}\sum_{j=1}^{N} w_i v_j f(x_i,y_j)$ (8)
where $\left(w_i,x_i\right)$ and $\left(v_j,y_j\right)$ are the weights and abscissae of the rules used in the respective dimensions.
A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable.
For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of ${x}_{i}$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy and for this reason products of automatic one-dimensional routines should be used with caution (see Lyness (1983)).
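A rule-evaluation application of formula (8), using the same 3-point Gauss rule in each dimension, can be sketched as follows (illustrative only, not NAG Library code; names are hypothetical):

```python
import math

# 3-point Gauss rule reused in each dimension of formula (8).
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def product_rule_2d(f, a1, b1, a2, b2):
    """Tensor product of the 1-D rule: sum_i sum_j w_i v_j f(x_i, y_j)."""
    mx, hx = 0.5 * (a1 + b1), 0.5 * (b1 - a1)
    my, hy = 0.5 * (a2 + b2), 0.5 * (b2 - a2)
    return hx * hy * sum(
        wi * wj * f(mx + hx * xi, my + hy * yj)
        for wi, xi in zip(WEIGHTS, NODES)
        for wj, yj in zip(WEIGHTS, NODES)
    )
```

The cost is the product of the one-dimensional costs ($N^2$ evaluations here), which is why this approach becomes prohibitive as the dimension grows.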
(b) Monte Carlo methods
These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy.
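The rudimentary form of the technique can be sketched as follows (a crude mean-value estimator without variance reduction, not NAG Library code; names are hypothetical):

```python
import math
import random

def monte_carlo_1d(f, a, b, n, seed=12345):
    """Crude Monte Carlo: (b - a) times the sample mean of f at
    uniformly distributed points, plus the statistical standard
    error of the estimate."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)
```

The standard error decays only like $n^{-1/2}$, which is the fundamentally slow convergence referred to above; variance-reduction techniques improve the constant but not the rate.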
(c) Number theoretic methods
These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and from their number theoretic properties minimize the error on a prescribed class of integrals. The method can be combined with the Monte Carlo procedure.
(d) Sag–Szekeres method
By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations.
(e) Sparse grid methods
Given a set of one-dimensional quadrature rules of increasing levels of accuracy, the sparse grid method constructs an approximation to a multidimensional integral using $d$-dimensional tensor products of the differences between rules of adjacent levels. This provides a lower theoretical accuracy than the full grid approach of (a), but one which is still sufficient for various classes of sufficiently smooth integrands, and it requires substantially fewer evaluations. Specifically, if a one-dimensional quadrature rule has $N\sim O(2^{\ell})$ points, the full grid will require $O(2^{\ell d})$ function evaluations, whereas the sparse grid of level $\ell$ will require $O(2^{\ell}d^{\ell-1})$. Hence a sparse grid approach is computationally feasible even for integrals over $d\sim O(100)$ dimensions.
Sparse grid methods are deterministic, and may be viewed as automatic whole domain procedures if their level $\ell$ is allowed to increase.
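The point-count comparison quoted above can be made concrete with a small sketch (the function names are illustrative; the formulae are the order-of-magnitude bounds stated in the text, not exact point counts for any particular rule family):

```python
def full_grid_count(level, d):
    # Full tensor product: O(2**(level * d)) evaluations.
    return 2 ** (level * d)

def sparse_grid_count(level, d):
    # Sparse grid bound: O(2**level * d**(level - 1)) evaluations.
    return 2 ** level * d ** (level - 1)
```

For example, at level $3$ in $10$ dimensions the full grid bound is $2^{30}$ (over $10^9$) evaluations, against $2^3\cdot 10^2 = 800$ for the sparse grid.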
An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Since this introduces discontinuities at the boundary, such a technique should be used only in conjunction with a Monte Carlo method of integration.
The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques.

3 Recommendations on Choice and Use of Available Routines

This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication routines. The second subsection highlights the different levels of vectorization provided by different interfaces.
Sections 3.3.1, 3.3.2 and 3.4 consider in turn routines for: one-dimensional integrals over a finite interval; one-dimensional integrals over a semi-infinite or an infinite interval; and multidimensional integrals. Within each sub-section, routines are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals; in these cases the automatic adaptive routines are generally the most convenient and reliable, although also the most expensive in computing time.
Note however that in some circumstances it may be counter-productive to use an automatic routine. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an ‘outer’ quadrature or an optimization problem), then this further computation may be adversely affected by the ‘jagged performance profile’ of an automatic routine; a simple rule-evaluation routine may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended.

3.1 Direct and Reverse Communication

Routines in this chapter which evaluate an integral value may be classified as either direct communication or reverse communication. See Section 7 in How to Use the NAG Library for a description of these terms.
Currently in this chapter the only routine explicitly using reverse communication is d01raf.

3.2 Choice of Interface

This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations. Vectorized interfaces typically allow for more efficient operation.
(a) Single abscissa interfaces
The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type.
Examples of this include d01ajf and d01fbf.
(b) Vectorized abscissae interfaces
The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Where possible and practical for the specific algorithm, all future routines will provide a vectorized abscissae interface.
Examples include d01rgf, d01uaf, and the routines d01rjf and d01rkf, which are vectorized replacements for d01ajf and d01akf respectively.
(c) Multiple integral interfaces
These are routines which allow several integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single-integral routines; however, they can provide higher efficiency, particularly if several integrals require the same sub-calculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive.
Examples include d01eaf and d01raf.

3.3 One-dimensional Integrals

3.3.1 Over a Finite Interval

(a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (d01gaf) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any routine may be unreliable. To check results independently, or to provide an alternative technique, you may fit the integrand by a Chebyshev series using e02adf and then use routine e02ajf to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for d01gaf). A further alternative is to fit a cubic spline to the data using e02baf and then to evaluate its integral using e02bdf.
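For orientation only (not NAG Library code, and deliberately simpler than the fourth-order Gill–Miller scheme), integration of tabulated data amounts to a weighted sum over the supplied points; a composite trapezoidal stand-in, with a hypothetical name, looks like:

```python
def trapezoid_tabulated(xs, ys):
    """Composite trapezoidal rule over tabulated points (x_i, f(x_i)).
    A second-order stand-in for the fourth-order Gill-Miller method:
    the interval of integration is the range of the supplied x values."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
```

Comparing such a low-order estimate against a higher-order one (or a spline fit) is one practical way to gauge the reliability warned about above.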
(b) Integrand defined as a function
If the functional form of $f\left(x\right)$ is known, then one of the following approaches should be taken. They are arranged in order from most specific to most general, so the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable routines to use will be d01rjf, d01rkf, d01rlf, d01rgf or d01raf, although these will in general be less efficient for simple integrals.
(i) Rule-evaluation routines
If $f\left(x\right)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian routine with a suitable number of abscissae may be used.
d01tbf or d01tcf with d01fbf may be used if it is required to examine the weights and abscissae.
d01tbf is faster and more accurate, whereas d01tcf is more general. d01uaf uses the same quadrature rules as d01tbf, and may be used if you do not explicitly require the weights and abscissae.
If $f\left(x\right)$ is well behaved, apart from a weight-function of the form
 $\left|x-\frac{a+b}{2}\right|^{c} \quad\text{or}\quad (b-x)^{c}(x-a)^{d},$
d01tcf with d01fbf may be used.
d01tbf and d01tcf generate weights and abscissae for specific Gauss rules. Weights and abscissae for other quadrature formulae may be computed using routines d01tdf or d01tef. Wherever possible, use d01tdf in preference to d01tef; note, however, that d01tdf requires information that may not be readily available.
(ii) Automatic whole-interval routines
If $f\left(x\right)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole interval routines d01arf and d01bdf may be used. Additionally, d01esf with $d=1$ may be used with an appropriate transformation from the unit interval.
d01bdf uses the Gauss $10$-point rule with the $21$-point Kronrod extension and, if required, the subsequent $43$- and $87$-point Patterson extensions.
d01esf supports multiple simultaneous integrals and has a vectorized interface. Either high-order Gauss–Patterson rules (of size ${2}^{\ell }-1$, for $\ell =1,\dots ,9$) or high-order Clenshaw–Curtis rules (of size ${2}^{\ell -1}+1$, for $\ell =2,\dots ,12$) may be used. Gauss–Patterson rules possess greater polynomial accuracy, whereas Clenshaw–Curtis rules are often well suited to oscillatory integrals.
d01arf incorporates the same high order Gauss–Patterson rules as d01esf, and is the only routine that may be used for indefinite integration.
(iii) Automatic adaptive routines
Firstly, several routines are available for integrands of the form $w\left(x\right)g\left(x\right)$ where $g\left(x\right)$ is a ‘smooth’ function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w\left(x\right)$ is a weight function of one of the following forms.
1. if $w(x)={\left(b-x\right)}^{\alpha }{\left(x-a\right)}^{\beta }{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$, where $k,l=0$ or $1$, $\alpha ,\beta >-1$: use d01apf;
2. if $w(x)=\frac{1}{x-c}$: use d01aqf (this integral is called the Hilbert transform of $g$);
3. if $w(x)=\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$: use d01anf (this routine can also handle certain types of singularities in $g\left(x\right)$).
Secondly, there are multiple routines for general $f\left(x\right)$, using different strategies.
d01rjf and d01rkf use the strategy of Piessens et al. (1983), based on repeated bisection of the interval and, in the first case, the $\epsilon$-algorithm (Wynn (1956)) to improve the integral estimate. This can cope with singularities away from the end points, provided singular points do not occur as abscissae. d01rkf tends to perform better than d01rjf on more oscillatory integrals.
d01rlf uses the same subdivision strategy as d01rjf over a set of initial interval segments determined by supplied break-points. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other routines which do not allow break-points, although in that case the integral should be evaluated over each of the sub-intervals separately.
d01raf again uses the strategy of Piessens et al. (1983), and provides the functionality of d01rjf, d01rkf and d01rlf in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.
d01ahf uses the strategy of Patterson (1968) and the $\epsilon$-algorithm to adaptively evaluate the integral in question. It tends to be more efficient than the bisection based algorithms, although these tend to be more robust when singularities occur away from the end points.
d01rgf uses another adaptive scheme due to Gonnet (2010). This attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Further, it can explicitly deal with singular points at abscissae, should NaNs or infinities be returned by the user-supplied (sub)routine, provided the generation of these does not cause the program to halt (see Chapter X07).

3.3.2 Over a Semi-infinite or Infinite Interval

(a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, d01gaf, should be used.
(b) Integrand defined as a function
(i) Rule evaluation routines
If $f\left(x\right)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form:
1. ${e}^{-\beta x}$, $\beta >0$ (semi-infinite interval, lower limit finite); or
2. ${e}^{-\beta x}$, $\beta <0$ (semi-infinite interval, upper limit finite); or
3. ${e}^{-\beta {\left(x-\alpha \right)}^{2}}$, $\beta >0$ (infinite interval),
or if $f\left(x\right)$ behaves approximately like a polynomial in ${\left(x+b\right)}^{-1}$ (semi-infinite range), then the Gaussian routines may be used.
d01uaf may be used if it is not required to examine the weights and abscissae.
d01tbf or d01tcf with d01fbf may be used if it is required to examine the weights and abscissae.
d01tbf is faster and more accurate, whereas d01tcf is more general.
d01ubf returns an approximation for a specific problem form (see the routine document for details).
d01rmf may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process (see c06baf).
d01asf may be used for integrals involving weight functions of the form $\mathrm{cos}\left(\omega x\right)$ and $\mathrm{sin}\left(\omega x\right)$ over a semi-infinite interval (lower limit finite).
The following alternative procedures are mentioned for completeness, though their use will rarely be necessary.
1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite-range methods applied.
2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which d01rmf can cope), the range may be divided, with d01rmf used on the infinite part.
3. A transformation to finite range may be employed, e.g.,
 $x=\frac{1-t}{t} \quad\text{or}\quad x=-\log_e t$
will transform $\left(0,\infty \right)$ to $\left(1,0\right)$ while for infinite ranges we have
 $\int_{-\infty}^{\infty} f(x)\,dx=\int_{0}^{\infty}\left(f(x)+f(-x)\right)dx.$
If the integrand behaves badly on $\left(-\infty ,0\right)$ and well on $\left(0,\infty \right)$, or vice versa, it is better to compute it as $\int_{-\infty}^{0} f(x)\,dx+\int_{0}^{\infty} f(x)\,dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved.
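The transformation $x=-\log_e t$ mentioned above can be sketched numerically (illustrative only, not NAG Library code; names are hypothetical). Since $dx=-dt/t$, an integral over $(0,\infty)$ becomes one over $(0,1)$, which an end-point-avoiding rule can then handle:

```python
import math

def to_unit_interval(f):
    """Map f on (0, inf) to g on (0, 1) via x = -log(t), dx = -dt/t,
    so that the integral of g over (0, 1) equals that of f over (0, inf)."""
    return lambda t: f(-math.log(t)) / t

def midpoint(g, n=2000):
    """Composite midpoint rule on (0, 1); it avoids the end points,
    where the transformed integrand may be awkward."""
    h = 1.0 / n
    return h * sum(g((i + 0.5) * h) for i in range(n))
```

For instance, $\int_0^{\infty} e^{-x}\,dx=1$ transforms to an integrand identically equal to $1$ on $(0,1)$, so the rule recovers it to rounding error.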

3.4 Multidimensional Integrals

A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy, particularly for high-dimensional integrations. Several routines include a transformation procedure, using a user-supplied subroutine, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region.
(a) Products of one-dimensional rules (suitable for up to about $5$ dimensions)
If $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is known to be a sufficiently well behaved function of each variable ${x}_{i}$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by d01tbf or d01tcf with d01fbf. Rules for finite, semi-infinite and infinite ranges are included.
For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of d01daf may be used. The limits of the inner integral may be user-specified functions of the outer variable. Infinite limits may be handled by transformation (see Section 3.3.2); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.
If none of these routines proves suitable and convenient, the one-dimensional routines may be used recursively. For example, the two-dimensional integral
 $I=\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx$
may be expressed as
 $I=\int_{a_1}^{b_1} F(x)\,dx, \quad\text{where}\quad F(x)=\int_{a_2}^{b_2} f(x,y)\,dy.$
The user-supplied code to evaluate $F\left(x\right)$ will call the integration routine for the $y$-integration, which will call more user-supplied code for $f\left(x,y\right)$ as a function of $y$ ($x$ being effectively a constant).
The reverse communication routine d01raf may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by d01raf.
(b) Sag–Szekeres method
Two routines are based on this method.
d01fdf is particularly suitable for integrals of very large dimension although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable arguments may be varied for checking purposes or may be used to tune the algorithm to particular integrals.
d01jaf is also based on the Sag–Szekeres method and integrates over the $n$-sphere. It uses improved transformations which may be varied according to the behaviour of the integrand. Although it can yield very accurate results it can only practically be employed for dimensions not exceeding $4$.
(c) Number theoretic method
Two subroutines are based on this method, d01gcf and a vectorized equivalent d01gdf.
Algorithms of this type carry out multidimensional integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated into the routines in this Library, hybridising the technique with the Monte Carlo procedure. An error estimate is provided in terms of the statistical standard error. A number of pre-computed optimal coefficient rules for up to $20$ dimensions are provided; others can be computed using d01gyf and d01gzf. Like the Sag–Szekeres method it is suitable for large dimensional integrals although the accuracy is not high.
d01gcf requires a function to be provided to evaluate the value of the integrand at a single abscissa, and a subroutine to return the upper and lower limits of integration in a given dimension.
d01gdf has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two subroutines, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code.
(d) A combinatorial extrapolation method
d01paf computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation.
(e) Sparse grid method
d01esf implements a sparse grid quadrature scheme for the integration of a vector of multidimensional integrals over the unit hypercube,
 $F \approx \int_{[0,1]^d} f(\mathbf{x})\,d\mathbf{x}.$
The routine uses a vectorized interface, which returns a set of points at which the integrands must be evaluated in a sparse storage format for efficiency.
Other domains can be readily integrated over by using an appropriate mapping inside the provided subroutine for evaluating the integrands. It is suitable for $d$ up to $\mathit{O}\left(100\right)$, although no upper bound on the number of dimensions is enforced. It will also evaluate one-dimensional integrals, although in this case the sparse grid used is in fact the full grid.
The routine uses optional parameters, set and queried using the routines d01zkf and d01zlf respectively. Amongst other options, these allow the parallelization of the routine to be controlled.
(f) Automatic routines (d01fcf and d01gbf)
Both routines are for integrals of the form
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n} f(x_1,x_2,\dots,x_n)\,dx_n\,dx_{n-1}\cdots dx_1.$
d01gbf is an adaptive Monte Carlo routine. This routine is usually slow and not recommended for high-accuracy work. It is a robust routine that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.
d01fcf is an adaptive deterministic routine. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by d01gbf. The routine will usually work when the integrand is mildly singular and for $n\le 10$ should be used before d01gbf. If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different routines.
There are many problems for which one or both of the routines will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this argument carefully, with reference to the time available and the accuracy desired.
d01eaf extends the technique of d01fcf to integrate adaptively more than one integrand, that is to calculate the set of integrals
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n} (f_1,f_2,\dots,f_m)\,dx_n\,dx_{n-1}\cdots dx_1$
for a set of similar integrands ${f}_{1},{f}_{2},\dots ,{f}_{m}$ where ${f}_{i}={f}_{i}\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$.

4 Decision Trees

Tree 1: One-dimensional integrals over a finite interval

Is the functional form of the integrand known?
- No: d01gaf.
- Yes: Is indefinite integration required?
  - Yes: d01arf.
  - No: Do you require reverse communication?
    - Yes: d01raf.
    - No: Are you concerned with efficiency for simple integrals?
      - No: d01ahf, d01raf, d01rgf or d01rjf.
      - Yes: Is the integrand smooth (polynomial-like) apart from weight function ${\left|x-\left(a+b\right)/2\right|}^{c}$ or ${\left(b-x\right)}^{c}{\left(x-a\right)}^{d}$?
        - Yes: d01arf, d01uaf, d01fbf or d01gdf.
        - No: Is the integrand reasonably smooth and the required accuracy not too great?
          - Yes: d01bdf, d01arf or d01uaf, or possibly d01esf.
          - No: Are multiple integrands to be integrated simultaneously?
            - Yes: d01raf or possibly d01esf.
            - No: Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points?
              - Yes: split the range and begin again; or use d01rgf or d01rlf.
              - No: Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(b-x\right)}^{\alpha }{\left(x-a\right)}^{\beta }{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$?
                - Yes: d01apf.
                - No: Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(x-c\right)}^{-1}$?
                  - Yes: d01aqf.
                  - No: Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$?
                    - Yes: d01anf.
                    - No: Is the integrand free of singularities?
                      - Yes: d01rjf, d01rkf or d01uaf, or possibly d01esf.
                      - No: Is the integrand free of discontinuities and of singularities except possibly at the end points?
                        - Yes: d01ahf.
                        - No: d01raf, d01rgf or d01rjf.

Tree 2: One-dimensional integrals over a semi-infinite or infinite interval

1. Is the functional form of the integrand known?
   No: use d01gaf (integrates over the range of the points supplied). Yes: go to 2.
2. Are you concerned with efficiency for simple integrands?
   Yes: go to 3. No: go to 6.
3. Is the integrand smooth (polynomial-like) with no exceptions?
   Yes: use d01uaf, d01bdf, d01arf or d01esf with transformation; see Section 3.3.2(b)(ii). No: go to 4.
4. Is the integrand of the form ${e}^{-{x}^{2}}g\left(x\right)$ (semi-infinite range)?
   Yes: use d01ubf. No: go to 5.
5. Is the integrand smooth (polynomial-like) apart from weight function ${e}^{-\beta \left(x\right)}$ (semi-infinite range) or ${e}^{-\beta {\left(x-a\right)}^{2}}$ (infinite range), or is the integrand polynomial-like in $\frac{1}{x+b}$ (semi-infinite range)?
   Yes: use d01uaf or d01fbf. No: go to 6.
6. Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit?
   Yes: split the range and begin again, using Tree 1 or Tree 2 as appropriate. No: go to 7.
7. Does the integrand oscillate over the entire range?
   No: use d01rmf. Yes: go to 8.
8. Does the integrand decay rapidly towards an infinite limit?
   Yes: use d01rmf, or set a cutoff and use Tree 1. No: go to 9.
9. Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ (semi-infinite range)?
   Yes: use d01asf. No: use finite-range integration between the zeros and extrapolate (see c06baf).
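The transformation route in branch 3 above maps the infinite range onto a finite one so a finite-interval rule can be applied. One standard choice for a semi-infinite range (used here as an assumption, not necessarily the mapping of Section 3.3.2(b)(ii)) is $x=a+t/\left(1-t\right)$, which sends $t\in \left[0,1\right)$ to $x\in \left[a,\infty \right)$ with Jacobian $1/{\left(1-t\right)}^{2}$. A sketch, with composite Simpson standing in for the finite-interval rule:

```python
import math

def integrate_semi_infinite(f, a, n=200):
    # Map [a, inf) onto [0, 1) via x = a + t/(1 - t); dx = dt/(1 - t)^2,
    # then apply composite Simpson (n even) to the transformed integrand.
    def g(t):
        if t >= 1.0 - 1e-12:
            return 0.0  # f must decay fast enough that g vanishes as t -> 1
        return f(a + t / (1.0 - t)) / (1.0 - t) ** 2

    h = 1.0 / n
    total = g(0.0) + g(1.0)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3.0

est = integrate_semi_infinite(math.exp.__call__ if False else (lambda x: math.exp(-x)), 0.0)
# exact value of the integral of e^(-x) over [0, inf) is 1
```

The mapping only helps when the integrand decays fast enough that the transformed integrand is smooth near $t=1$; slowly decaying or oscillatory integrands are exactly the cases the tree routes to d01rmf or d01asf instead.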

Tree 3: Multidimensional integrals

1. Is the dimension $\text{}=2$ and the region a product region?
   Yes: use d01daf. No: go to 2.
2. Is the dimension $\text{}\le 4$?
   Yes: go to 3. No: go to 8.
3. Is the region an $n$-sphere?
   Yes: use d01fbf with user transformation, or d01jaf. No: go to 4.
4. Is the region a simplex?
   Yes: use d01fbf with user transformation, or d01paf. No: go to 5.
5. Is the integrand smooth (polynomial-like) in each dimension apart from the weight function?
   Yes: use d01fbf. No: go to 6.
6. Is the integrand free of extremely bad behaviour?
   Yes: use d01esf, d01fcf, d01fdf or d01gdf. No: go to 7.
7. Is the bad behaviour on the boundary?
   Yes: use d01fcf or d01fdf. No: compare results from at least two of d01fcf, d01fdf, d01gbf and d01gdf, d01esf and one-dimensional recursive application.
8. Is the region an $n$-sphere?
   Yes: use d01fdf. No: go to 9.
9. Is the region a simplex?
   Yes: use d01paf. No: go to 10.
10. Is high accuracy required?
   Yes: use d01fdf with argument tuning. No: go to 11.
11. Is the dimension high?
   Yes: use d01esf, d01fdf, d01gbf or d01gdf. No: use d01fcf.
Note: when many integrals are to be evaluated, d01eaf should be preferred to d01fcf.
d01fbf may require the use of d01tbf, d01tcf or d01tdf to calculate the weights and abscissae for each dimension (d01tdf may require use of d01tef).
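As the note above says, d01fbf consumes weights and abscissae computed per dimension by d01tbf, d01tcf or d01tdf. Purely as an illustration of what such a Gaussian rule looks like — this Newton-iteration computation of Gauss–Legendre nodes is an assumption for the sketch, not the NAG algorithm (d01tdf, for instance, uses the method of Golub and Welsch) — consider:

```python
import math

def gauss_legendre(n):
    # n-point Gauss-Legendre nodes/weights on [-1, 1]: Newton iteration on the
    # Legendre polynomial P_n, built from its three-term recurrence.
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # standard initial guess
        for _ in range(100):
            p0, p1 = 1.0, x
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1)  # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        xs.append(x)
        ws.append(2 / ((1 - x * x) * dp * dp))  # w_i = 2 / ((1 - x^2) P_n'(x)^2)
    return xs, ws

# An n-point Gaussian rule is exact for polynomials of degree <= 2n - 1,
# so 5 points integrate x^8 over [-1, 1] exactly (= 2/9).
xs, ws = gauss_legendre(5)
val = sum(w * x ** 8 for x, w in zip(xs, ws))
```

Tensor products of such one-dimensional rules, one per coordinate, give the multidimensional rules over a hyper-rectangle that d01fbf evaluates.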

5Functionality Index

 Korobov optimal coefficients for use in d01gcf and d01gdf:
 when number of points is a product of $2$ primes d01gzf
 when number of points is prime d01gyf
 Multidimensional quadrature,
  over a finite two-dimensional region d01daf
 over a general product region,
 Korobov–Conroy number-theoretic method d01gcf
 Sag–Szekeres method (also over $n$-sphere) d01fdf
 variant of d01gcf especially efficient on vector machines d01gdf
 over a hyper-rectangle,
 multiple integrands d01eaf
 Monte Carlo method d01gbf
 sparse grid method (with user transformation),
 multiple integrands, vectorized interface d01esf
 over an $n$-simplex d01paf
 over an $n$-sphere $\left(n\le 4\right)$,
 allowing for badly behaved integrands d01jaf
 One-dimensional quadrature,
  adaptive integration of a function over a finite interval,
 strategy due to Gonnet,
 vectorized interface d01rgf
 strategy due to Patterson,
 suitable for well-behaved integrands, except possibly at end-points d01ahf
 strategy due to Piessens and de Doncker,
 allowing for singularities at user-specified break-points d01rlf
 suitable for badly behaved integrands d01rjf
 suitable for highly oscillatory integrals d01rkf
 weight function $1/\left(x-c\right)$ Cauchy principal value (Hilbert transform) d01aqf
 weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ d01anf
 weight function with end-point singularities of algebraico-logarithmic type d01apf
 adaptive integration of a function over an infinite or semi-infinite interval,
  strategy due to Piessens and de Doncker d01rmf
  weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ d01asf
 integration of a function defined by data values only,
 Gill–Miller method d01gaf
 non-adaptive integration over a finite, semi-infinite or infinite interval,
 using pre-computed weights and abscissae
 specific integral with weight $\mathrm{exp}\left({-x}^{2}\right)$ over semi-infinite interval d01ubf
 vectorized interface d01uaf
 non-adaptive integration over a finite interval d01bdf
 non-adaptive integration over a finite interval,
 with provision for indefinite integrals also d01arf
 reverse communication,
 adaptive integration over a finite interval,
 multiple integrands,
 efficient on vector machines d01raf
 Service routines,
 array size query for d01raf d01rcf
 general option getting d01zlf
 general option setting and initialization d01zkf
 Weights and abscissae for Gaussian quadrature rules,
 method of Golub and Welsch,
 calculating the weights and abscissae d01tdf
 generate recursive coefficients d01tef
 more general choice of rule,
 calculating the weights and abscissae d01tcf
 restricted choice of rule,
 using pre-computed weights and abscissae d01tbf
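Several index entries above (the Monte Carlo routine d01gbf, and the number-theoretic d01gcf/d01gdf) estimate a hyper-rectangle integral as an equal-weight average over a point set. A plain Monte Carlo sketch of the idea — seeded for reproducibility; the function name and interface are illustrative, not the NAG specification:

```python
import random

def monte_carlo(f, bounds, n, seed=42):
    # Plain Monte Carlo over a hyper-rectangle: the average of f at uniform
    # random points, scaled by the volume of the region.
    rng = random.Random(seed)
    volume = 1.0
    for lo, hi in bounds:
        volume *= hi - lo
    total = 0.0
    for _ in range(n):
        x = [lo + (hi - lo) * rng.random() for lo, hi in bounds]
        total += f(x)
    return volume * total / n

# Integral of (x1 + x2 + x3) over [0,1]^3 is 1.5; the statistical error
# decays like O(n^(-1/2)) regardless of dimension.
est = monte_carlo(lambda x: sum(x), [(0.0, 1.0)] * 3, 100_000)
```

The Korobov–Conroy method of d01gcf replaces the random points with a number-theoretic lattice, which converges faster than $O\left({n}^{-1/2}\right)$ on suitably smooth periodic integrands.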

6Auxiliary Routines Associated with Library Routine Arguments

 d01fdv nagf_quad_md_sphere_dummy_region
  See the description of the argument region in d01fdf.
 d01rbm nagf_quad_d01rb_dummy
  See the description of the argument monit in d01rbf.

7 Withdrawn or Deprecated Routines

The following lists all those routines that have been withdrawn since Mark 23 of the Library, or are still in the Library but deprecated.
Routine Status Replacement Routine(s)
d01ajf Deprecated d01rjf
d01akf Deprecated d01rkf
d01alf Deprecated d01rlf
d01amf Deprecated d01rmf
d01atf Deprecated d01rjf
d01auf Deprecated d01rkf
d01baf Withdrawn at Mark 26 d01uaf
d01baw Withdrawn at Mark 26
d01bax Withdrawn at Mark 26
d01bay Withdrawn at Mark 26
d01baz Withdrawn at Mark 26
d01bbf Withdrawn at Mark 26 d01tbf
d01bcf Deprecated d01tcf
d01rbf To be withdrawn at Mark 28 No replacement required

8References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26
Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87
Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Sobol I M (1974) The Monte Carlo Method The University of Chicago Press
Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall
Wynn P (1956) On a device for computing the ${e}_{m}\left({S}_{n}\right)$ transformation Math. Tables Aids Comput. 10 91–96