# NAG C Library Chapter Introduction

## 1 Scope of the Chapter

This chapter provides functions for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.

## 2 Background to the Problems

The functions in this chapter are designed to estimate:
(a) the value of a one-dimensional definite integral of the form
 $\int_a^b f(x)\,dx$ (1)
where $f\left(x\right)$ is defined by you, either at a set of points $\left({x}_{\mathit{i}},f\left({x}_{\mathit{i}}\right)\right)$, for $\mathit{i}=1,2,\dots ,n$, where $a={x}_{1}<{x}_{2}<\cdots <{x}_{n}=b$, or in the form of a function; and the limits of integration $a,b$ may be finite or infinite.
Some methods are specially designed for integrands of the form
 $f(x) = w(x)g(x)$ (2)
which contain a factor $w\left(x\right)$, called the weight-function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w\left(x\right)$ factor.
(b) the value of a multidimensional definite integral of the form
 $\int_{R_n} f(x_1,x_2,\dots,x_n)\,dx_n \cdots dx_2\,dx_1$ (3)
where $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is a function defined by you and ${R}_{n}$ is some region of $n$-dimensional space.
The simplest form of ${R}_{n}$ is the $n$-rectangle defined by
 $a_i \le x_i \le b_i, \quad i=1,2,\dots,n$ (4)
where ${a}_{i}$ and ${b}_{i}$ are constants. When ${a}_{i}$ and ${b}_{i}$ are functions of ${x}_{j}$ (for $j<i$), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate this transformation procedure.

### 2.1 One-dimensional Integrals

To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e.,
 $\int_a^b f(x)\,dx \simeq \sum_{i=1}^{N} w_i f(x_i).$ (5)
The points ${x}_{i}$ within the interval $\left[a,b\right]$ are known as the abscissae, and the ${w}_{i}$ are known as the weights.
More generally, if the integrand has the form (2), the corresponding formula is
 $\int_a^b w(x)g(x)\,dx \simeq \sum_{i=1}^{N} w_i g(x_i).$ (6)
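As a minimal illustration of the weighted sum in (5) (generic code, not the NAG interface; the name `gauss3` is purely illustrative), the following applies the classical 3-point Gauss–Legendre rule, whose abscissae on $\left[-1,1\right]$ are $0$ and $\pm\sqrt{3/5}$ with weights $8/9$ and $5/9$:

```python
import math

def gauss3(f, a, b):
    """Approximate the integral of f over [a, b] by the 3-point
    Gauss-Legendre rule: a weighted sum of integrand values."""
    # Abscissae and weights on the reference interval [-1, 1].
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    # Map the rule onto [a, b]; the Jacobian of the map is `half`.
    return half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))
```

The rule is exact for polynomials of degree up to $5$; for instance, it integrates $x^4$ over $\left[0,1\right]$ to the exact value $0.2$.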
If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights.
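The fixed-point case can be sketched (in generic code; this is the simple composite trapezoidal rule, not the Gill–Miller method used by the NAG function) as a weighted sum over the supplied abscissae:

```python
def trapezoid_points(xs, ys):
    """Integrate data supplied only at points (x_i, y_i), with xs
    ascending, using the composite trapezoidal rule: the abscissae
    are forced to be the supplied points."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
```

The rule is exact for data sampled from a linear function, even on non-uniform abscissae.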
The appropriate rule to use will depend on the interval $\left[a,b\right]$ – whether finite or otherwise – and on the form of any $w\left(x\right)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f\left(x\right)$; or of $g\left(x\right)$, if there is a $w\left(x\right)$ factor present.
Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f\left(x\right)$ or $g\left(x\right)$ of polynomial form.
The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented.
(a) Single rule evaluation procedures
A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result.
(b) Automatic procedures
The number of abscissae, $N$, within $\left[a,b\right]$ is gradually increased until consistency is achieved to within the level of accuracy (absolute or relative) that you have requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:
 (i) whole interval procedures (non-adaptive)
 A series of rules using increasing values of $N$ are successively applied over the whole interval $\left[a,b\right]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $\left(2N+1\right)$-point formula by adding $\left(N+1\right)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson.
 (ii) adaptive procedures
 The interval $\left[a,b\right]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth. Thus, the distribution of abscissae is adapted to the shape of the integrand. Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities.
An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, which was efficient (i.e., required the use of as few abscissae as possible), and which was reliable (i.e., always gave results to within the requested accuracy). Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must therefore be emphasized that the automatic functions in this chapter cannot be assumed to be 100% reliable. In general, however, the reliability is very high.

### 2.2 Multidimensional Integrals

A distinction must be made between cases of moderately low dimensionality (say, up to $4$ or $5$ dimensions) and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example, a product of $3$-point rules in $20$ dimensions would require more than ${10}^{9}$ integrand evaluations. Special techniques such as the Monte–Carlo methods can be used to deal with high dimensions.
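The quoted growth is easily checked (the function name is illustrative):

```python
def product_rule_points(points_per_dim, dims):
    """Number of integrand evaluations needed by a full product rule:
    the one-dimensional point count raised to the number of dimensions."""
    return points_per_dim ** dims

# A product of 3-point rules in 20 dimensions:
n = product_rule_points(3, 20)   # 3**20 = 3486784401, which exceeds 10**9
```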
(a) Products of one-dimensional rules
Using a two-dimensional integral as an example, we have
 $\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N} w_i \int_{a_2}^{b_2} f(x_i,y)\,dy$ (7)
 $\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N} \sum_{j=1}^{N} w_i v_j f(x_i,y_j)$ (8)
where $\left({w}_{i},{x}_{i}\right)$ and $\left({v}_{j},{y}_{j}\right)$ are the weights and abscissae of the rules used in the respective dimensions.
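Formula (8) can be sketched as follows (generic code using a 3-point Gauss rule in each dimension; not the NAG interface, and the function names are illustrative):

```python
import math

def gauss3_nodes(a, b):
    """3-point Gauss-Legendre weights and abscissae mapped to [a, b]."""
    t = math.sqrt(3.0 / 5.0)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    nodes = [mid - half * t, mid, mid + half * t]
    weights = [half * 5.0 / 9.0, half * 8.0 / 9.0, half * 5.0 / 9.0]
    return weights, nodes

def product_rule_2d(f, a1, b1, a2, b2):
    """Two-dimensional product rule (8): a double weighted sum over
    all pairs of abscissae from the two one-dimensional rules."""
    w, x = gauss3_nodes(a1, b1)
    v, y = gauss3_nodes(a2, b2)
    return sum(w[i] * v[j] * f(x[i], y[j])
               for i in range(3) for j in range(3))
```

Since the underlying rule is exact for polynomials of degree up to $5$, the product rule is exact for such polynomials in each variable separately.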
A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable.
For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of ${x}_{i}$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy. For this reason, products of automatic one-dimensional functions should be used with caution (see Lyness (1983)).
(b) Monte–Carlo methods
These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy.
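A crude Monte–Carlo estimator with its statistical standard error, a sketch of the rudimentary form of the technique without any variance reduction (the function name and the fixed seed are illustrative):

```python
import math
import random

def monte_carlo(f, a, b, n, seed=0):
    """Crude Monte-Carlo estimate of an integral over [a, b]: the mean
    of f at n uniform random points, scaled by the interval length,
    together with a statistical standard-error estimate.  Convergence
    of the error is only O(n**-0.5), the fundamentally slow rate."""
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)
```

For example, estimating $\int_0^1 x^2\,dx = 1/3$ with $n = 20000$ samples gives a standard error of roughly $0.002$, illustrating how slowly the accuracy improves with $n$.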
(c) Number theoretic methods
These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and from their number theoretic properties minimize the error on a prescribed class of integrals. The method can be combined with the Monte–Carlo procedure.
(d) Sag–Szekeres method
By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations.
(e) Sparse grid methods
Given a set of one-dimensional quadrature rules of increasing levels of accuracy, the sparse grid method constructs an approximation to a multidimensional integral using $d$-dimensional tensor products of the differences between rules of adjacent levels. This provides a lower theoretical accuracy than the full grid approach of (a), but one which is still sufficient for various classes of sufficiently smooth integrands. Furthermore, it requires substantially fewer evaluations than the full grid approach. Specifically, if a one-dimensional quadrature rule has $N\sim \mathit{O}\left({2}^{\ell }\right)$ points, the full grid will require $\mathit{O}\left({2}^{\ell d}\right)$ function evaluations, whereas the sparse grid of level $\ell$ will require $\mathit{O}\left({2}^{\ell }{d}^{\ell -1}\right)$. Hence a sparse grid approach is computationally feasible even for integrals over $d\sim \mathit{O}\left(100\right)$.
Sparse grid methods are deterministic, and may be viewed as automatic whole domain procedures if their level $\ell$ is allowed to increase.
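Treating the $\mathit{O}\left(\cdot\right)$ bounds above as exact counts purely for illustration, the two costs can be compared directly (the function names are illustrative):

```python
def full_grid_cost(level, d):
    """O(2**(level*d)) evaluations for a full tensor-product grid."""
    return 2 ** (level * d)

def sparse_grid_cost(level, d):
    """O(2**level * d**(level-1)) evaluations for a sparse grid."""
    return 2 ** level * d ** (level - 1)

# Level 5 in 10 dimensions: the full grid needs 2**50 evaluations,
# the sparse grid only 32 * 10**4 = 320000.
```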
An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be followed by a Monte–Carlo method for integration.
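The hyper-rectangle technique just described can be sketched as follows (illustrative code, not a d01 function): the irregular region here is the unit disk, surrounded by the rectangle $\left[-1,1\right]\times\left[-1,1\right]$, with the integrand defined to be zero outside the disk, so the Monte–Carlo estimate tends to $\pi$.

```python
import math
import random

def mc_over_rectangle(f, lo, hi, n, seed=1):
    """Monte-Carlo integration over the hyper-rectangle with corners
    lo and hi; f is expected to return zero outside the true region."""
    rng = random.Random(seed)
    vol = 1.0
    for a, b in zip(lo, hi):
        vol *= b - a
    total = 0.0
    for _ in range(n):
        p = [a + (b - a) * rng.random() for a, b in zip(lo, hi)]
        total += f(p)
    return vol * total / n

def indicator_disk(p):
    """1 inside the unit disk, 0 outside (the artificial extension)."""
    return 1.0 if p[0] ** 2 + p[1] ** 2 <= 1.0 else 0.0
```

The discontinuity introduced at the region boundary is exactly why a deterministic rule would struggle here, while the Monte–Carlo estimate is unaffected.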
The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte–Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques.

## 3 Recommendations on Choice and Use of Available Functions

This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication functions. The second subsection highlights the different levels of vectorization provided by different interfaces.
Sections 3.3, 3.4 and 3.5 consider in turn functions for: one-dimensional integrals over a finite interval, and over a semi-infinite or an infinite interval; and multidimensional integrals. Within each sub-section, functions are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals, and in these cases the automatic adaptive functions are generally the most convenient and reliable, although also the most expensive in computing time.
Note however that in some circumstances it may be counter-productive to use an automatic function. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an ‘outer’ quadrature or an optimization problem), then this further computation may be adversely affected by the ‘jagged performance profile’ of an automatic function; a simple rule-evaluation function may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended.

### 3.1 Direct and Reverse Communication

Functions in this chapter which evaluate an integral value may be classified as either direct communication or reverse communication. See Section 3.3.2 in How to Use the NAG Library and its Documentation for a description of these terms.
Currently in this chapter the only function explicitly using reverse communication is nag_quad_1d_gen_vec_multi_rcomm (d01rac).

### 3.2 Choice of Interface

This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations. Vectorized interfaces typically allow for more efficient operation.
 (a) Single abscissa interfaces
 The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type. Examples include nag_quad_md_gauss (d01fbc) and nag_1d_quad_gen_1 (d01sjc).
 (b) Vectorized abscissae interfaces
 The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Examples include nag_quad_1d_fin_gonnet_vec (d01rgc) and nag_quad_1d_gauss_vec (d01uac).
 (c) Multiple integral interfaces
 These are functions which allow for multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single integral functions; however, they can provide higher efficiency, particularly if several integrals require the same subcalculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive. nag_quad_1d_gen_vec_multi_rcomm (d01rac) is an example.

### 3.3 One-dimensional Integrals over a Finite Interval

(a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (nag_1d_quad_vals (d01gac)) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any function may be unreliable. In order to check results independently, and to provide an alternative technique, you may fit the integrand with a Chebyshev series using nag_1d_cheb_fit (e02adc) and then use nag_1d_cheb_intg (e02ajc) to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for nag_1d_quad_vals (d01gac)). A further alternative is to fit a cubic spline to the data using nag_1d_spline_fit_knots (e02bac) and then to evaluate its integral using nag_1d_spline_intg (e02bdc).
(b) Integrand defined as a function
If the functional form of $f\left(x\right)$ is known, then one of the following approaches should be taken. They are arranged in the order from most specific to most general, hence the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable functions to use will be nag_quad_1d_gen_vec_multi_rcomm (d01rac), nag_quad_1d_fin_gonnet_vec (d01rgc), nag_1d_quad_gen_1 (d01sjc), nag_1d_quad_osc_1 (d01skc) and nag_1d_quad_brkpts_1 (d01slc), although these will in general be less efficient for simple integrals.
(i) Rule-evaluation functions
If $f\left(x\right)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian function with a suitable number of abscissae may be used.
nag_quad_1d_gauss_wset (d01tbc) or nag_quad_1d_gauss_wgen (d01tcc) with nag_quad_md_gauss (d01fbc) may be used if it is required to examine the weights and abscissae.
nag_quad_1d_gauss_wset (d01tbc) is faster and more accurate, whereas nag_quad_1d_gauss_wgen (d01tcc) is more general. nag_quad_1d_gauss_vec (d01uac) uses the same quadrature rules as nag_quad_1d_gauss_wset (d01tbc), and may be used if you do not explicitly require the weights and abscissae.
If $f\left(x\right)$ is well behaved, apart from a weight-function of the form
 $\left|x-\frac{a+b}{2}\right|^{c} \quad \text{or} \quad (b-x)^{c}(x-a)^{d},$
nag_quad_1d_gauss_wset (d01tbc) and nag_quad_1d_gauss_wgen (d01tcc) generate weights and abscissae for specific Gauss rules. Weights and abscissae for other quadrature formulae may be computed using functions nag_quad_1d_gauss_wrec (d01tdc) or nag_quad_1d_gauss_recm (d01tec). Wherever possible, use nag_quad_1d_gauss_wrec (d01tdc) in preference to nag_quad_1d_gauss_recm (d01tec); the former, however, requires information that may not be readily available.
(ii) Automatic whole-interval functions
If $f\left(x\right)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole interval function nag_quad_1d_fin_smooth (d01bdc) may be used. Additionally, nag_quad_md_sgq_multi_vec (d01esc) with $d=1$ may be used with an appropriate transformation from the unit interval.
nag_quad_1d_fin_smooth (d01bdc) uses the Gauss $10$-point rule, with the $21$-point Kronrod extension, and the subsequent $43$- and $87$-point Patterson extensions if required.
nag_quad_md_sgq_multi_vec (d01esc) supports multiple simultaneous integrals, and has a vectorized interface. Either high-order Gauss–Patterson rules (of size ${2}^{\ell }-1$, for $\ell =1,\dots ,9$) or high-order Clenshaw–Curtis rules (of size ${2}^{\ell -1}+1$, for $\ell =2,\dots ,12$) may be used. Gauss–Patterson rules possess greater polynomial accuracy, whereas Clenshaw–Curtis rules are often well suited to oscillatory integrals.
(iii) Automatic adaptive functions
Firstly, several functions are available for integrands of the form $w\left(x\right)g\left(x\right)$ where $g\left(x\right)$ is a ‘smooth’ function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w\left(x\right)$ is a weight function of one of the following forms.
 1 if $w(x) = (b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^{k}(\log(x-a))^{l}$, where $k,l = 0$ or $1$ and $\alpha,\beta > -1$: use nag_1d_quad_wt_alglog_1 (d01spc);
 2 if $w(x) = \frac{1}{x-c}$: use nag_1d_quad_wt_cauchy_1 (d01sqc) (this integral is called the Hilbert transform of $g$);
 3 if $w(x) = \cos(\omega x)$ or $\sin(\omega x)$: use nag_1d_quad_wt_trig_1 (d01snc) (this function can also handle certain types of singularities in $g\left(x\right)$).
Secondly, there are multiple routines for general $f\left(x\right)$, using different strategies.
nag_1d_quad_gen_1 (d01sjc) and nag_1d_quad_osc_1 (d01skc) use the strategy of Piessens et al. (1983), using repeated bisection of the interval, and in the first case the $\epsilon$-algorithm (Wynn (1956)), to improve the integral estimate. These can cope with singularities away from the end points, provided singular points do not occur as abscissae. nag_1d_quad_osc_1 (d01skc) tends to perform better than nag_1d_quad_gen_1 (d01sjc) on more oscillatory integrals.
nag_1d_quad_brkpts_1 (d01slc) uses the same subdivision strategy as nag_1d_quad_gen_1 (d01sjc) over a set of initial interval segments determined by supplied break-points. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other functions which do not allow break-points, although such integrals should then be evaluated over each of the sub-intervals separately.
nag_quad_1d_gen_vec_multi_rcomm (d01rac) again uses the strategy of Piessens et al. (1983), and provides the functionality of nag_1d_quad_gen_1 (d01sjc), nag_1d_quad_osc_1 (d01skc) and nag_1d_quad_brkpts_1 (d01slc) in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.
nag_quad_1d_fin_gonnet_vec (d01rgc) uses another adaptive scheme, due to Gonnet (2010), which attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Further, it can explicitly deal with singular points at abscissae, should NaNs or infinities be returned by the user-supplied function, provided the generation of these does not cause the program to halt (see Chapter x07).

### 3.4 One-dimensional Integrals over a Semi-infinite or Infinite Interval

(a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, nag_1d_quad_vals (d01gac), should be used.
(b) Integrand defined as a function
(i) Rule evaluation functions
If $f\left(x\right)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form:
 1 ${e}^{-\beta x},\beta >0$ (semi-infinite interval, lower limit finite); or
 2 ${e}^{-\beta x},\beta <0$ (semi-infinite interval, upper limit finite); or
 3 ${e}^{-\beta {\left(x-\alpha \right)}^{2}},\beta >0$ (infinite interval),
or if $f\left(x\right)$ behaves approximately like a polynomial in ${\left(x+b\right)}^{-1}$ (semi-infinite range), then the Gaussian functions may be used.
nag_quad_1d_gauss_vec (d01uac) may be used if it is not required to examine the weights and abscissae.
nag_quad_1d_gauss_wset (d01tbc) or nag_quad_1d_gauss_wgen (d01tcc) with nag_quad_md_gauss (d01fbc) may be used if it is required to examine the weights and abscissae.
nag_quad_1d_gauss_wset (d01tbc) is faster and more accurate, whereas nag_quad_1d_gauss_wgen (d01tcc) is more general.
nag_quad_1d_inf_exp_wt (d01ubc) returns an approximation to the specific problem $\int_{0}^{\infty} e^{-x^2} f(x)\,dx$.
nag_1d_quad_inf_1 (d01smc) may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process.
nag_1d_quad_inf_wt_trig_1 (d01ssc) may be used for integrals involving weight functions of the form $\mathrm{cos}\left(\omega x\right)$ and $\mathrm{sin}\left(\omega x\right)$ over a semi-infinite interval (lower limit finite).
The following alternative procedures are mentioned for completeness, though their use will rarely be necessary.
1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite range methods applied.
2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which nag_1d_quad_inf_1 (d01smc) can cope), the range may be divided, with nag_1d_quad_inf_1 (d01smc) used on the infinite part.
3. A transformation to finite range may be employed, e.g.,
 $x = \frac{1-t}{t} \quad \text{or} \quad x = -\log_e t$
will transform $\left(0,\infty \right)$ to $\left(1,0\right)$ while for infinite ranges we have
 $\int_{-\infty}^{\infty} f(x)\,dx = \int_{0}^{\infty} \left[ f(x) + f(-x) \right] dx.$
If the integrand behaves badly on $\left(-\infty ,0\right)$ and well on $\left(0,\infty \right)$, or vice versa, it is better to compute the integral as $\int_{-\infty}^{0} f(x)\,dx + \int_{0}^{\infty} f(x)\,dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved.
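As an illustration of the first transformation above (generic code, not a d01 function), the substitution $x = \frac{1-t}{t}$ turns $\int_0^{\infty} f(x)\,dx$ into $\int_0^1 f\!\left(\frac{1-t}{t}\right) t^{-2}\,dt$, which a finite-range rule can then handle, provided $f$ decays fast enough at infinity:

```python
import math

def transformed(f):
    """Map an integral of f over [0, inf) onto [0, 1] via x = (1-t)/t,
    for which dx = dt / t**2 (the sign is absorbed by swapping the
    limits of integration)."""
    def g(t):
        if t == 0.0:
            return 0.0   # valid when f decays faster than x**-2 at infinity
        return f((1.0 - t) / t) / (t * t)
    return g

def composite_simpson(f, a, b, panels):
    """Composite Simpson rule with an even number of panels."""
    h = (b - a) / panels
    total = f(a) + f(b)
    for i in range(1, panels):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0

# integral of exp(-x) over [0, inf) is exactly 1.
approx = composite_simpson(transformed(lambda x: math.exp(-x)), 0.0, 1.0, 200)
```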

### 3.5 Multidimensional Integrals

A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy, particularly for high-dimensional integrations. Several functions include a transformation procedure, using a user-supplied function, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region.
(a) Products of one-dimensional rules (suitable for up to about $5$ dimensions)
If $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is known to be a sufficiently well behaved function of each variable ${x}_{i}$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by nag_quad_1d_gauss_wset (d01tbc) or nag_quad_1d_gauss_wgen (d01tcc) with nag_quad_md_gauss (d01fbc). Rules for finite, semi-infinite and infinite ranges are included.
For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of nag_quad_2d_fin (d01dac) may be used. The limits of the inner integral may be user-specified functions of the outer variable. Infinite limits may be handled by transformation (see Section 3.4); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.
If none of these functions proves suitable and convenient, the one-dimensional functions may be used recursively. For example, the two-dimensional integral
 $I = \int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx$
may be expressed as
 $I = \int_{a_1}^{b_1} F(x)\,dx, \quad \text{where } F(x) = \int_{a_2}^{b_2} f(x,y)\,dy.$
The user-supplied code to evaluate $F\left(x\right)$ will call the integration function for the $y$-integration, which will call more user-supplied code for $f\left(x,y\right)$ as a function of $y$ ($x$ being effectively a constant).
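This recursive scheme can be sketched as follows (generic code with a 3-point Gauss rule at both levels; the function names are illustrative and this is not the NAG interface):

```python
import math

def gauss3(f, a, b):
    """3-point Gauss-Legendre rule on [a, b]."""
    t = math.sqrt(3.0 / 5.0)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * (5.0 * f(mid - half * t) + 8.0 * f(mid)
                   + 5.0 * f(mid + half * t)) / 9.0

def double_integral(f, a1, b1, a2, b2):
    """I = integral of F(x) dx, where F(x) is itself evaluated by the
    one-dimensional rule: the outer integrand calls the inner rule
    with x held fixed, exactly as described above."""
    def F(x):
        return gauss3(lambda y: f(x, y), a2, b2)   # inner y-integration
    return gauss3(F, a1, b1)
```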
From Mark 24 onwards, all direct communication functions may be called recursively. As such, you may use any function, including the same function, for each dimension. Note, however, that in previous releases some direct communication functions, specifically nag_quad_1d_fin_smooth (d01bdc), nag_quad_2d_fin (d01dac), nag_quad_md_gauss (d01fbc), nag_quad_md_sphere (d01fdc), nag_quad_md_numth_vec (d01gdc) and nag_quad_md_simplex (d01pac), could not be called recursively.
The reverse communication function nag_quad_1d_gen_vec_multi_rcomm (d01rac) may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by nag_quad_1d_gen_vec_multi_rcomm (d01rac).
(b) Sag–Szekeres method
nag_quad_md_sphere (d01fdc) is particularly suitable for integrals of very large dimension although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable arguments may be varied for checking purposes or may be used to tune the algorithm to particular integrals.
(c) Number Theoretic method
Algorithms of this type carry out multidimensional integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated into the functions in this library, hybridising the technique with the Monte–Carlo procedure. An error estimate is provided in terms of the statistical standard error. A number of pre-computed optimal coefficient rules for up to $20$ dimensions are provided; others can be computed using nag_quad_md_numth_coeff_prime (d01gyc) and nag_quad_md_numth_coeff_2prime (d01gzc). Like the Sag–Szekeres method it is suitable for large dimensional integrals although the accuracy is not high.
nag_quad_md_numth_vec (d01gdc) has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two functions, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code.
(d) A combinatorial extrapolation method
nag_quad_md_simplex (d01pac) computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation.
(e) Sparse Grid method
nag_quad_md_sgq_multi_vec (d01esc) implements a sparse grid quadrature scheme for the integration of a vector of multidimensional integrals over the unit hypercube,
 $F \approx \int_{[0,1]^d} f(x)\,dx.$
The function uses a vectorized interface, which returns a set of points at which the integrands must be evaluated in a sparse storage format for efficiency.
Other domains can be readily integrated over by using an appropriate mapping inside the provided function for evaluating the integrands. It is suitable for $d$ up to $\mathit{O}\left(100\right)$, although no upper bound on the number of dimensions is enforced. It will also evaluate one-dimensional integrals, although in this case the sparse grid used is in fact the full grid.
The function uses optional parameters, set and queried using the functions nag_quad_opt_set (d01zkc) and nag_quad_opt_get (d01zlc) respectively. Amongst other options, these allow the parallelization of the function to be controlled.
(f) Automatic functions (nag_multid_quad_adapt_1 (d01wcc) and nag_multid_quad_monte_carlo_1 (d01xbc))
Both functions are for integrals of the form
 $\int_{a_1}^{b_1} \int_{a_2}^{b_2} \cdots \int_{a_n}^{b_n} f(x_1,x_2,\dots,x_n)\,dx_n\,dx_{n-1} \cdots dx_1.$
nag_multid_quad_monte_carlo_1 (d01xbc) is an adaptive Monte–Carlo function. This function is usually slow and not recommended for high-accuracy work. It is a robust function that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.
nag_multid_quad_adapt_1 (d01wcc) is an adaptive deterministic function. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by the Monte–Carlo function nag_multid_quad_monte_carlo_1 (d01xbc). The function will usually work when the integrand is mildly singular and for $n\le 10$ should be used before nag_multid_quad_monte_carlo_1 (d01xbc). If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different functions.
There are many problems for which one or both of the functions will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this argument carefully, with reference to the time available and the accuracy desired.

## 4 Decision Trees

### Tree 1: One-dimensional integrals over a finite interval

- Is the functional form of the integrand known?
  - **No:** d01gac
  - **Yes:** Do you require reverse communication?
    - **Yes:** d01rac
    - **No:** Are you concerned with efficiency for simple integrals?
      - **No:** d01rac, d01rgc or d01sjc
      - **Yes:** Is the integrand smooth (polynomial-like) apart from weight function $\left|x-(a+b)/2\right|^{c}$ or $(b-x)^{c}(x-a)^{d}$?
        - **Yes:** d01uac, d01tbc or d01tcc and d01fbc or d01gdc
        - **No:** Is the integrand reasonably smooth and the required accuracy not too great?
          - **Yes:** d01bdc, d01esc or d01uac
          - **No:** Are multiple integrands to be integrated simultaneously?
            - **Yes:** d01esc or d01rac
            - **No:** Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points?
              - **Yes:** Split the range and begin again; or use d01rgc or d01slc
              - **No:** Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $(b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^{k}(\log(x-a))^{l}$?
                - **Yes:** d01spc
                - **No:** Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $\frac{1}{x-c}$?
                  - **Yes:** d01sqc
                  - **No:** Is the integrand free of violent oscillations apart from weight function $\cos(\omega x)$ or $\sin(\omega x)$?
                    - **Yes:** d01snc
                    - **No:** Is the integrand free of singularities?
                      - **Yes:** d01esc, d01sjc, d01skc or d01uac
                      - **No:** d01rac, d01rgc or d01sjc

### Tree 2: One-dimensional integrals over a semi-infinite or infinite interval

- Is the functional form of the integrand known?
  - yes: Are you concerned with efficiency for simple integrands?
    - yes: Is the integrand smooth (polynomial-like) with no exceptions?
      - yes: d01bdc
      - no: Is the integrand of the form ${e}^{-{x}^{2}}g\left(x\right)$ (semi-infinite range)?
        - yes: d01ubc
        - no: Is the integrand smooth (polynomial-like) apart from weight function ${e}^{-\beta x}$ (semi-infinite range) or ${e}^{-\beta {\left(x-a\right)}^{2}}$ (infinite range), or is the integrand polynomial-like in $\frac{1}{x+b}$ (semi-infinite range)?
          - yes: d01uac, or d01tcc and d01fbc, or d01tbc and d01fbc, or d01tdc and d01fbc (d01tdc may require use of d01tec)
          - no: d01smc
    - no: Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit?
      - yes: Split the range; begin again using the finite or infinite range tree
      - no: Does the integrand oscillate over the entire range?
        - yes: Does the integrand decay rapidly towards an infinite limit?
          - yes: Use d01smc; or set a cutoff and use the finite range tree
          - no: Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ (semi-infinite range)?
            - yes: d01ssc
            - no: Use finite-range integration between the zeros and extrapolate
        - no: d01smc
  - no: d01gac (integrates over the range of the points supplied)

### Tree 3: Multidimensional integrals

- Is dimension $\text{}=2$ and product region?
  - yes: d01dac
  - no: Is dimension $\text{}\le 4$?
    - yes: Is region an $n$-sphere?
      - yes: d01fbc with user transformation
      - no: Is region a Simplex?
        - yes: d01fbc with user transformation or d01pac
        - no: Is the integrand smooth (polynomial-like) in each dimension apart from weight function?
          - yes: d01tbc and d01fbc or d01tcc and d01fbc
          - no: Is integrand free of extremely bad behaviour?
            - yes: d01esc, d01fdc, d01gdc or d01wcc
            - no: Is bad behaviour on the boundary?
              - yes: d01fdc or d01wcc
              - no: Compare results from at least two of d01esc, d01fdc, d01gdc, d01wcc and d01xbc and one-dimensional recursive application
    - no: Is region an $n$-sphere?
      - yes: d01fdc
      - no: Is region a Simplex?
        - yes: d01pac
        - no: Is high accuracy required?
          - yes: d01fdc with argument tuning
          - no: Is dimension high?
            - yes: d01esc, d01fdc or d01gdc
            - no: d01wcc
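For product regions, a multidimensional rule can be built by applying a one-dimensional rule in each dimension, which is the idea behind d01fbc operating on the weights and abscissae returned by d01tbc or d01tcc. The following is a minimal two-dimensional sketch with a hard-coded $5$-point Gauss–Legendre rule; the function names here are illustrative assumptions, not the NAG interface.

```c
#include <math.h>

/* 5-point Gauss-Legendre nodes and weights on [-1,1]; such values are
   what d01tbc/d01tcc return for use by the product-rule evaluator. */
static const double gx[5] = { -0.9061798459386640, -0.5384693101056831,
                              0.0, 0.5384693101056831, 0.9061798459386640 };
static const double gw[5] = { 0.2369268850561891, 0.4786286704993665,
                              0.5688888888888889, 0.4786286704993665,
                              0.2369268850561891 };

/* Product Gauss rule over the rectangle [ax,bx] x [ay,by]: apply the
   one-dimensional rule in x and, at each x-node, again in y. */
double gauss2d(double (*f)(double, double),
               double ax, double bx, double ay, double by)
{
    double sum = 0.0;
    double cx = 0.5 * (ax + bx), hx = 0.5 * (bx - ax);
    double cy = 0.5 * (ay + by), hy = 0.5 * (by - ay);
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++)
            sum += gw[i] * gw[j] * f(cx + hx * gx[i], cy + hy * gx[j]);
    return sum * hx * hy;
}

/* example integrand: f(x,y) = x^2 y^2; the exact integral over the
   unit square is 1/9, and a 5-point Gauss rule integrates it exactly */
double xy_squared(double x, double y)
{
    return x * x * y * y;
}
```

The cost of such product rules grows as ${N}^{n}$ with the dimension $n$, which is why the tree switches to number-theoretic, sparse grid or Monte Carlo methods (d01gdc, d01esc, d01xbc) as $n$ increases.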

## 5Functionality Index

 Korobov optimal coefficients for use in nag_quad_md_numth_vec (d01gdc):
 when number of points is a product of 2 primes nag_quad_md_numth_coeff_2prime (d01gzc)
 when number of points is prime nag_quad_md_numth_coeff_prime (d01gyc)
 over a finite two-dimensional region nag_quad_2d_fin (d01dac)
 over a general product region,
 Sag–Szekeres method (also over n-sphere) nag_quad_md_sphere (d01fdc)
 over a hyper-rectangle,
 sparse grid method (with user transformation),
 multiple integrands, vectorized interface nag_quad_md_sgq_multi_vec (d01esc)
 adaptive integration of a function over a finite interval,
 strategy due to Gonnet,
 strategy due to Piessens and de Doncker,
 allowing for singularities at user-specified break-points nag_1d_quad_brkpts_1 (d01slc)
 suitable for highly oscillatory integrals nag_1d_quad_osc_1 (d01skc)
 weight function 1 / (x − c) Cauchy principal value (Hilbert transform) nag_1d_quad_wt_cauchy_1 (d01sqc)
 weight function cos(ωx) or sin(ωx) nag_1d_quad_wt_trig_1 (d01snc)
 weight function with end-point singularities of algebraico-logarithmic type nag_1d_quad_wt_alglog_1 (d01spc)
 adaptive integration of a function over an infinite interval or semi-infinite interval,
 weight function cos(ωx) or sin(ωx) nag_1d_quad_inf_wt_trig_1 (d01ssc)
 integration of a function defined by data values only,
 non-adaptive integration over a finite, semi-infinite or infinite interval,
 using pre-computed weights and abscissae
 specific integral with weight exp( − x2) over semi-infinite interval nag_quad_1d_inf_exp_wt (d01ubc)
 reverse communication,
 adaptive integration over a finite interval,
 multiple integrands,
 efficient on vector machines nag_quad_1d_gen_vec_multi_rcomm (d01rac)
 Service functions,
 general option setting and initialization nag_quad_opt_set (d01zkc)
 Weights and abscissae for Gaussian quadrature rules,
 method of Golub and Welsch,
 calculating the weights and abscissae nag_quad_1d_gauss_wrec (d01tdc)
 more general choice of rule,
 calculating the weights and abscissae nag_quad_1d_gauss_wgen (d01tcc)
 restricted choice of rule,
 using pre-computed weights and abscissae nag_quad_1d_gauss_wset (d01tbc)

## 6Auxiliary Functions Associated with Library Function Arguments

None.

## 7Functions Withdrawn or Scheduled for Withdrawal

The following lists all those functions that have been withdrawn since Mark 23 of the Library or are scheduled for withdrawal at one of the next two marks.
## 8References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26
Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87
Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Sobol I M (1974) The Monte Carlo Method The University of Chicago Press
Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall
Wynn P (1956) On a device for computing the ${e}_{m}\left({S}_{n}\right)$ transformation Math. Tables Aids Comput. 10 91–96