
# NAG Library Chapter Introduction: S – Approximations of Special Functions

## 1  Scope of the Chapter

This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.

## 2  Background to the Problems

The majority of the routines in this chapter approximate real-valued functions of a single real argument, and the techniques involved are described in Section 2.1. In addition the chapter contains routines for elliptic integrals (see Section 2.2), Bessel and Airy functions of a complex argument (see Section 2.3), complementary error function of a complex argument, hypergeometric functions and various option pricing routines for use in financial applications.

### 2.1  Functions of a Single Real Argument

Most of the routines provided for functions of a single real argument have been based on truncated Chebyshev expansions. This method of approximation was adopted as a compromise between the conflicting requirements of efficiency and ease of implementation on many different machine ranges. For details of the reasons behind this choice and the production and testing procedures followed in constructing this chapter see Schonfelder (1976).
Basically, if the function to be approximated is $f\left(x\right)$, then for $x\in \left[a,b\right]$ an approximation of the form
 $f(x) = g(x)\,{\sum_{r=0}}' C_r T_r(t)$
is used (${\sum }^{\prime }$ denotes, according to the usual convention, a summation in which the first term is halved), where $g\left(x\right)$ is some suitable auxiliary function which extracts any singularities, asymptotes and, if possible, zeros of the function in the range in question and $t=t\left(x\right)$ is a mapping of the general range $\left[a,b\right]$ to the specific range [$-1,+1$] required by the Chebyshev polynomials, ${T}_{r}\left(t\right)$. For a detailed description of the properties of the Chebyshev polynomials see Clenshaw (1962) and Fox and Parker (1968).
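For illustration only, the sketch below builds such a truncated expansion numerically in Python with NumPy (an assumption of this example; the Library's coefficients are precomputed and the routines do no fitting at run time). Here $g(x)=1$ and the map $t(x)$ is linear:

```python
# A minimal sketch of constructing a truncated Chebyshev approximation
# to f on [a, b]; illustrative, not the Library's own procedure.
import numpy as np
from numpy.polynomial import chebyshev as C

a, b = 0.0, 4.0                  # illustrative interval
f = np.exp                       # illustrative function; g(x) = 1 here

# Chebyshev nodes in t, and the linear map of [-1, +1] back onto [a, b].
t = np.cos(np.pi * (np.arange(33) + 0.5) / 33)
x = 0.5 * (b - a) * (t + 1.0) + a

# Fit in the Chebyshev basis; for smooth f the coefficients C_r decay
# rapidly. (NumPy's convention does not halve the first coefficient.)
coeffs = C.chebfit(t, f(x), deg=20)

x0 = 1.7                         # evaluate the approximation at one point
t0 = 2.0 * (x0 - a) / (b - a) - 1.0
print(C.chebval(t0, coeffs), f(x0))
```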
The essential property of these polynomials for the purposes of function approximation is that ${T}_{n}\left(t\right)$ oscillates between $\pm 1$ and takes its extreme values $n+1$ times in the interval [$-1,+1$]. Therefore, provided the coefficients ${C}_{r}$ decrease in magnitude sufficiently rapidly, the error made by truncating the Chebyshev expansion after $n$ terms is approximately given by
 $E(t) \simeq C_n T_n(t).$
That is, the error oscillates between $\pm {C}_{n}$ and takes its extreme value $n+1$ times in the interval in question. Now this is just the condition that the approximation be a minimax representation, one which minimizes the maximum error. By suitable choice of the interval, [$a,b$], the auxiliary function, $g\left(x\right)$, and the mapping of the independent variable, $t\left(x\right)$, it is almost always possible to obtain a Chebyshev expansion with rapid convergence and hence truncations that provide near minimax polynomial approximations to the required function. The difference between the true minimax polynomial and the truncated Chebyshev expansion is seldom great enough to be of significance.
The evaluation of the Chebyshev expansions follows one of two methods. The first and most efficient, and hence the most commonly used, works with the equivalent simple polynomial. The second method, which is used on the few occasions when the first method proves to be unstable, is based directly on the truncated Chebyshev series, and uses backward recursion to evaluate the sum. For the first method, a suitably truncated Chebyshev expansion (truncation is chosen so that the error is less than the machine precision) is converted to the equivalent simple polynomial. That is, we evaluate the set of coefficients ${b}_{r}$ such that
 $y(t) = \sum_{r=0}^{n-1} b_r t^r = {\sum_{r=0}^{n-1}}' C_r T_r(t).$
The polynomial can then be evaluated by the efficient Horner's method of nested multiplications,
 $y(t) = b_0 + t(b_1 + t(b_2 + \dots + t(b_{n-2} + t\,b_{n-1}) \dots )).$
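A minimal sketch of this first method, under the same Python/NumPy assumption as above: `cheb2poly` performs the conversion to the $b_r$, and the illustrative coefficients are the leading Chebyshev coefficients of $e^t$ on $[-1,1]$ (in NumPy's convention, which does not halve the first term):

```python
# Convert a truncated Chebyshev series to the equivalent simple polynomial
# and evaluate it by Horner's nested multiplication; illustrative only.
import numpy as np
from numpy.polynomial import chebyshev as C

# Leading Chebyshev coefficients of e^t on [-1, 1] (NumPy convention).
c = np.array([1.2660658778, 1.1303182080, 0.2714953395, 0.0443368498])
b_coeffs = C.cheb2poly(c)        # b_r such that sum b_r t^r = sum C_r T_r(t)

def horner(b, t):
    """Evaluate sum b_r t^r by nested multiplication, highest degree first."""
    y = 0.0
    for br in b[::-1]:
        y = y * t + br
    return y

t = 0.3
print(horner(b_coeffs, t), C.chebval(t, c))   # the two evaluations agree
```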
This method of evaluation results in efficient routines but for some expansions there is considerable loss of accuracy due to cancellation effects. In these cases the second method is used. It is well known that if
 $\begin{aligned} b_{n-1} &= C_{n-1} \\ b_{n-2} &= 2t\,b_{n-1} + C_{n-2} \\ b_j &= 2t\,b_{j+1} - b_{j+2} + C_j, \quad j = n-3, n-4, \dots, 0 \end{aligned}$
then
 ${\sum_{r=0}^{n-1}}' C_r T_r(t) = \tfrac{1}{2}\left(b_0 - b_2\right)$
and this is always stable. This method is most efficiently implemented by using three variables cyclically and explicitly constructing the recursion.
That is,
 $\begin{aligned} \alpha &= C_{n-1} \\ \beta &= 2t\alpha + C_{n-2} \\ \gamma &= 2t\beta - \alpha + C_{n-3} \\ \alpha &= 2t\gamma - \beta + C_{n-4} \\ \beta &= 2t\alpha - \gamma + C_{n-5} \\ &\;\;\vdots \\ \text{say}\quad \alpha &= 2t\gamma - \beta + C_2 \\ \beta &= 2t\alpha - \gamma + C_1 \\ y(t) &= t\beta - \alpha + \tfrac{1}{2}C_0 \end{aligned}$
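This translates directly into code. The sketch below (illustrative Python, not the Library's implementation) follows the display above, with the first coefficient entering only as $\frac{1}{2}C_0$:

```python
# Backward (Clenshaw) recursion using variables cyclically, as above.
from numpy.polynomial.chebyshev import chebval

def clenshaw_primed(c, t):
    """Evaluate sum' c_r T_r(t) (first term halved) by backward recursion."""
    alpha, beta = 0.0, 0.0                 # play the roles of b_{j+1}, b_{j+2}
    for cr in reversed(c[1:]):             # j = n-1, n-2, ..., 1
        alpha, beta = 2.0 * t * alpha - beta + cr, alpha
    return t * alpha - beta + 0.5 * c[0]   # equals (b_0 - b_2)/2

# Leading Chebyshev coefficients of e^t on [-1, 1], with the first one
# stored doubled, as the primed-sum convention expects.
c = [2.5321317556, 1.1303182080, 0.2714953395, 0.0443368498]
t = 0.3
print(clenshaw_primed(c, t))               # ~ e^0.3, up to truncation error
print(chebval(t, c) - 0.5 * c[0])          # same sum via NumPy's convention
```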
The auxiliary functions used are normally functions compounded of simple polynomial (usually linear) factors extracting zeros, and the primary compiler-provided functions, sin, cos, ln, exp, sqrt, which extract singularities and/or asymptotes or in some cases basic oscillatory behaviour, leaving a smooth well-behaved function to be approximated by the Chebyshev expansion which can therefore be rapidly convergent.
The mappings of [$a,b$] to [$-1,+1$] used range from simple linear mappings to the case when $b$ is infinite, and considerable improvement in convergence can be obtained by use of a bilinear form of mapping. Another common form of mapping is used when the function is even; that is, it involves only even powers in its expansion. In this case an approximation over the whole interval [$-a,a$] can be provided using a mapping $t=2{\left(x/a\right)}^{2}-1$. This embodies the evenness property but the expansion in $t$ involves all powers and hence removes the necessity of working with an expansion with half its coefficients zero.
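As a sketch of the even-function mapping (again illustrative Python/NumPy): fitting $\cos x$ on $[-a,a]$ as a series in $t=2{\left(x/a\right)}^{2}-1$ stores no coefficients for the odd powers that vanish by symmetry:

```python
# Even-function mapping t = 2(x/a)^2 - 1; illustrative only.
import numpy as np
from numpy.polynomial import chebyshev as C

a = 3.0
t_nodes = np.cos(np.pi * (np.arange(17) + 0.5) / 17)
x_nodes = a * np.sqrt(0.5 * (t_nodes + 1.0))   # x >= 0 suffices by evenness
coeffs = C.chebfit(t_nodes, np.cos(x_nodes), deg=12)

x = 1.234
t = 2.0 * (x / a) ** 2 - 1.0
print(C.chebval(t, coeffs), np.cos(x))         # agree to near machine accuracy
```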
For many of the routines an analysis of the error in principle is given, namely, if $E$ and $\nabla$ are the absolute errors in function and argument and $\epsilon$ and $\delta$ are the corresponding relative errors, then
 $E \simeq \left|f'(x)\right|\nabla, \qquad E \simeq \left|x f'(x)\right|\delta, \qquad \epsilon \simeq \left|\frac{x f'(x)}{f(x)}\right|\delta.$
If we ignore errors that arise in the argument of the function by propagation of data errors, etc., and consider only those errors that result from the fact that a real number is being represented in the computer in floating point form with finite precision, then $\delta$ is bounded and this bound is independent of the magnitude of $x$. For example, on an $11$-digit machine
 $\delta \le 10^{-11}.$
(This of course implies that the absolute error $\nabla =x\delta$ is also bounded, but the bound is now dependent on $x$.) However, because of this the last two relations above are probably of more interest. If possible the relative error propagation is discussed; that is, the behaviour of the error amplification factor $\left|x{f}^{\prime }\left(x\right)/f\left(x\right)\right|$ is described, but in some cases, such as near zeros of the function which cannot be extracted explicitly, absolute error in the result is the quantity of significance and here the factor $\left|x{f}^{\prime }\left(x\right)\right|$ is described. In general, testing has shown that the error behaviour of the routines follows these theoretical predictions fairly well. In regions where the error amplification factors are less than or of the order of one, the errors are slightly larger than this analysis predicts: there the errors are limited largely by the finite precision of machine arithmetic, but $\epsilon$ is normally no more than a few times greater than the bound on $\delta$. In regions where the amplification factors are large, of order ten or greater, the theoretical analysis gives a good measure of the accuracy obtainable.
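As an illustration, for $f\left(x\right)={e}^{x}$ the amplification factor is $\left|x{f}^{\prime }\left(x\right)/f\left(x\right)\right|=\left|x\right|$, so the relative error to be expected is roughly $\left|x\right|\delta$: on the $11$-digit machine above this means essentially full accuracy near the origin, but only about nine significant figures at $x=100$, since
 $\epsilon \simeq 10^{2} \times 10^{-11} = 10^{-9}.$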
It should be noted that the definitions and notations used for the functions in this chapter are all taken from Abramowitz and Stegun (1972). You are strongly recommended to consult this book for details before using the routines in this chapter.

### 2.2  Approximations to Elliptic Integrals

Four functions provided here are symmetrised variants of the classical (Legendre) elliptic integrals. These alternative definitions were suggested by Carlson (1965), Carlson (1977a) and Carlson (1977b), who also developed the basic algorithms used in this chapter.
The symmetrised elliptic integral of the first kind is represented by
 $R_F(x,y,z) = \frac{1}{2} \int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}},$
where $x,y,z\ge 0$ and at most one may be equal to zero.
The normalization factor, $\frac{1}{2}$, is chosen so as to make
 $R_F(x,x,x) = 1/\sqrt{x}.$
If any two of the variables are equal, ${R}_{F}$ degenerates into the second function
 $R_C(x,y) = R_F(x,y,y) = \frac{1}{2} \int_0^\infty \frac{dt}{(t+y)\sqrt{t+x}},$
where the argument restrictions are now $x\ge 0$ and $y\ne 0$.
This function is related to the logarithm or inverse hyperbolic functions if $0<y<x$, and to the inverse circular functions if $0\le x<y$.
The symmetrised elliptic integral of the second kind is defined by
 $R_D(x,y,z) = \frac{3}{2} \int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)^3}}$
with $z>0$, $x\ge 0$ and $y\ge 0$, but only one of $x$ or $y$ may be zero.
The function is a degenerate special case of the symmetrised elliptic integral of the third kind
 $R_J(x,y,z,\rho) = \frac{3}{2} \int_0^\infty \frac{dt}{(t+\rho)\sqrt{(t+x)(t+y)(t+z)}}$
with $\rho \ne 0$ and $x,y,z\ge 0$ with at most one equality holding. Thus ${R}_{D}\left(x,y,z\right)={R}_{J}\left(x,y,z,z\right)$. The normalization of both these functions is chosen so that
 $R_D(x,x,x) = R_J(x,x,x,x) = 1/\left(x\sqrt{x}\right).$
The algorithms used for all these functions are based on duplication theorems. These allow a recursion system to be established which constructs a new set of arguments from the old using a combination of arithmetic and geometric means. The value of the function at the original arguments can then be simply related to the value at the new arguments. These recursive reductions are used until the arguments differ from the mean by an amount small enough for a Taylor series about the mean to give sufficient accuracy when retaining terms of order less than six. Each step of the recurrences reduces the difference from the mean by a factor of four, and as the truncation error is of order six, the truncation error goes like ${\left(4096\right)}^{-n}$, where $n$ is the number of iterations.
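The sketch below (a simplified Python illustration of this scheme, not the NAG routine) shows the duplication iteration for ${R}_{F}$ together with a fifth-order Taylor expansion about the mean, whose truncation error is of sixth order as described:

```python
# Carlson-style duplication algorithm for R_F(x, y, z); illustrative only.
import math

def rf(x, y, z, tol=1e-10):
    while True:
        mu = (x + y + z) / 3.0
        dx, dy, dz = 1.0 - x / mu, 1.0 - y / mu, 1.0 - z / mu
        # Stop when a sixth-order Taylor series about the mean suffices.
        if max(abs(dx), abs(dy), abs(dz)) < tol ** (1.0 / 6.0):
            break
        # Duplication step: each iteration quarters the spread of arguments.
        lam = math.sqrt(x * y) + math.sqrt(y * z) + math.sqrt(z * x)
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
    e2 = dx * dy - dz * dz
    e3 = dx * dy * dz
    # Taylor expansion retaining terms of order less than six.
    return (1.0 - e2 / 10.0 + e3 / 14.0
            + e2 * e2 / 24.0 - 3.0 * e2 * e3 / 44.0) / math.sqrt(mu)

print(rf(2.0, 2.0, 2.0), 1.0 / math.sqrt(2.0))   # normalization check
print(rf(0.0, 1.0, 2.0))                          # one argument may be zero
```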
The above forms can be related to the more traditional canonical forms (see Section 17.2 of Abramowitz and Stegun (1972)), as follows.
If we write $q={\mathrm{cos}}^{2}\varphi ,r=1-m {\mathrm{sin}}^{2}\varphi ,s=1-n {\mathrm{sin}}^{2}\varphi$, where $0\le \varphi \le \frac{1}{2}\pi$, we have
the classical elliptic integral of the first kind:
 $F(\phi \mid m) = \int_0^\phi \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = \sin\phi \, R_F(q,r,1);$
the classical elliptic integral of the second kind:
 $E(\phi \mid m) = \int_0^\phi \left(1 - m\sin^2\theta\right)^{1/2} d\theta = \sin\phi \, R_F(q,r,1) - \tfrac{1}{3} m \sin^3\phi \, R_D(q,r,1);$
the classical elliptic integral of the third kind:
 $\Pi(n;\, \phi \mid m) = \int_0^\phi \left(1 - n\sin^2\theta\right)^{-1} \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = \sin\phi \, R_F(q,r,1) + \tfrac{1}{3} n \sin^3\phi \, R_J(q,r,1,s).$
Also the classical complete elliptic integral of the first kind:
 $K(m) = \int_0^{\pi/2} \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = R_F(0,\, 1-m,\, 1);$
the classical complete elliptic integral of the second kind:
 $E(m) = \int_0^{\pi/2} \left(1 - m\sin^2\theta\right)^{1/2} d\theta = R_F(0,\, 1-m,\, 1) - \tfrac{1}{3} m \, R_D(0,\, 1-m,\, 1).$
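These relations are easily checked numerically. The sketch below assumes SciPy 1.8 or later, which exposes Carlson's symmetric forms as `scipy.special.elliprf` and `scipy.special.elliprd`:

```python
# Check K(m) = R_F(0, 1-m, 1) and E(m) = R_F(0,1-m,1) - (m/3) R_D(0,1-m,1).
from scipy import special

m = 0.7
K_legendre = special.ellipk(m)                 # SciPy takes the parameter m
K_carlson = special.elliprf(0.0, 1.0 - m, 1.0)

E_legendre = special.ellipe(m)
E_carlson = (special.elliprf(0.0, 1.0 - m, 1.0)
             - m / 3.0 * special.elliprd(0.0, 1.0 - m, 1.0))

print(K_legendre, K_carlson)   # these should agree to machine precision
print(E_legendre, E_carlson)
```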
For convenience, Chapter S contains routines to evaluate classical and symmetrised elliptic integrals.

### 2.3  Bessel and Airy Functions of a Complex Argument

The routines for Bessel and Airy functions of a real argument are based on Chebyshev expansions, as described in Section 2.1. The routines provided for functions of a complex argument, however, use different methods. These routines relate all functions to the modified Bessel functions ${I}_{\nu }\left(z\right)$ and ${K}_{\nu }\left(z\right)$ computed in the right-half complex plane, including their analytic continuations. ${I}_{\nu }$ and ${K}_{\nu }$ are computed by different methods according to the values of $z$ and $\nu$. The methods include power series, asymptotic expansions and Wronskian evaluations. The relations between functions are based on well known formulae (see Abramowitz and Stegun (1972)).

### 2.4  Option Pricing Routines

The option pricing routines evaluate the closed form solutions or approximations to the equations that define mathematical models for the prices of selected financial option contracts. These solutions can be viewed as special functions determined by the underlying equations. The terminology associated with these routines arises from their setting in financial markets and is briefly outlined below. See Joshi (2003) for a comprehensive introduction to this subject. An option is a contract which gives the holder the right, but not the obligation, to buy (if it is a call) or sell (if it is a put) a particular asset, $S$. A European option can be exercised only at the specified expiry time, $T$, while an American option can be exercised at any time up to $T$. For Asian options the average underlying price over a pre-set time period determines the payoff.
The asset is bought (if a call) or sold (if a put) at a pre-specified strike price $X$. Thus, an option contract has a payoff to the holder of $\mathrm{max}\left\{\left({S}_{T}-X\right),0\right\}$ for a call or $\mathrm{max}\left\{\left(X-{S}_{T}\right),0\right\}$ for a put, depending on whether the asset price at the time of exercise is above (call) or below (put) the strike, $X$. If at any moment a contract is showing a theoretical profit it is deemed ‘in-the-money’; otherwise it is deemed ‘out-of-the-money’.
The option contract itself therefore has a value and, in many cases, can be traded in markets. Mathematical models, such as the Black–Scholes model, give theoretical prices for particular option contracts using a number of assumptions about the behaviour of financial markets. Typically, the price, ${S}_{t}$, of the underlying asset at time $t$, is modelled as the solution of a stochastic differential equation for the return, $d{S}_{t}/{S}_{t}$, on the asset price over a time interval, $dt$,
 $\frac{dS_t}{S_t} = \mu \, dt + \sigma \, dW_t,$
where $d{W}_{t}$ is a Brownian motion. The drift, $\mu$, defines the trend in the movements of $S$, while the volatility, $\sigma$, measures the risk and may be taken to be the standard deviation of the returns on the asset price. In addition the model requires a riskless money market account or bond with value, ${B}_{t}$, at time $t$ and risk-free rate, $r$, such that
 $dB_t = r B_t \, dt.$
This leads to the determination of the Black–Scholes option price, $P$, by a martingale method or via the derivation of the Black–Scholes partial differential equation,
 $\frac{\partial P}{\partial t} + rS \frac{\partial P}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 P}{\partial S^2} - rP = 0.$
For this case a closed form solution exists which is evaluated by S30AAF.
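For a European call without dividends the closed form is $P = S\,N(d_1) - X e^{-rT} N(d_2)$, where $N$ is the cumulative Normal distribution function, ${d}_{1}=\left(\mathrm{ln}\left(S/X\right)+\left(r+{\sigma }^{2}/2\right)T\right)/\left(\sigma \sqrt{T}\right)$ and ${d}_{2}={d}_{1}-\sigma \sqrt{T}$. The sketch below is a plain-Python rendering of this formula for illustration; it is not the NAG routine and omits refinements such as a dividend yield:

```python
# Black-Scholes closed form for a European call; illustrative sketch only.
import math
from statistics import NormalDist

def bs_call(S, X, T, r, sigma):
    d1 = (math.log(S / X) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf                      # cumulative Normal distribution
    return S * N(d1) - X * math.exp(-r * T) * N(d2)

print(bs_call(S=100.0, X=95.0, T=0.5, r=0.05, sigma=0.2))
```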
A number of different option types where the solution exists in closed form or as a closed form approximation are presented in this chapter. See Haug (2007) for an extensive listing of option pricing formulae.

### 2.5  Hypergeometric Functions

The confluent hypergeometric function $M\left(a,b,x\right)$ (or ${}_{1}F_{1}\left(a;b;x\right)$) requires a number of techniques to approximate it over the whole parameter $\left(a,b\right)$ space and for all argument $\left(x\right)$ values. For $x$ well within the unit circle $\left|x\right|\le \rho <1$ (where $\rho =0.8$, say), and for relatively small parameter values, the function can be well approximated by Taylor expansions, continued fractions or through the solution of the related ordinary differential equation by an explicit, adaptive integrator. For values of $\left|x\right|>\rho$, one of several transformations can be performed (depending on the value of $x$) to reformulate the problem in terms of a new argument ${x}^{\prime }$ such that $\left|{x}^{\prime }\right|\le \rho$. If one or more of the parameters is relatively large (e.g., $\left|a\right|>30$) then recurrence relations can be used in combination to reformulate the problem in terms of parameter values of small size (e.g., $\left|a\right|<1$).
Approximations to the hypergeometric functions can therefore require all of the above techniques in sequence: a transformation to get an argument well inside the unit circle, a combination of recurrence relations to reduce the parameter sizes, and the approximation of the resulting hypergeometric function by one of a set of approximation techniques.
All the techniques described above are based on those described in Pearson (2009).
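As an illustration of two of these techniques (a sketch only; the Library combines them with several others, as described in Pearson (2009)), the code below sums the defining Taylor series directly and applies Kummer's transformation $M\left(a,b,x\right)={e}^{x}M\left(b-a,b,-x\right)$ to avoid the cancellation that the alternating series suffers for negative $x$:

```python
# Taylor summation plus Kummer transformation for M(a, b, x); illustrative.
import math

def m_taylor(a, b, x, tol=1e-15, max_terms=500):
    """Sum the defining series M(a,b,x) = sum_k (a)_k x^k / ((b)_k k!)."""
    term, total = 1.0, 1.0
    for k in range(max_terms):
        term *= (a + k) * x / ((b + k) * (k + 1))
        total += term
        if abs(term) < tol * abs(total):
            return total
    raise ArithmeticError("series did not converge")

def m_conf(a, b, x):
    if x < 0.0:                               # Kummer's transformation
        return math.exp(x) * m_taylor(b - a, b, -x)
    return m_taylor(a, b, x)

print(m_conf(0.5, 1.5, -10.0))                # stable despite cancellation risk
```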

## 3  Recommendations on Choice and Use of Available Routines

### 3.1  Vectorized Routine Variants

Many routines in Chapter S which compute functions of a single real argument have variants which operate on vectors of arguments. For example, S18AEF computes the value of the ${I}_{0}$ Bessel function for a single argument, and S18ASF computes the same function for multiple arguments. In general it should be more efficient to use vectorized routines where possible, though to some extent this will depend on the environment from which you call the routines. See Section 4 for a complete list of vectorized routines.

### 3.2  Elliptic Integrals

IMPORTANT ADVICE: users who encounter elliptic integrals in the course of their work are strongly recommended to look at transforming their analysis directly to one of the Carlson forms, rather than to the traditional canonical Legendre forms. In general, the extra symmetry of the Carlson forms is likely to simplify the analysis, and these symmetric forms are much more stable to calculate.
The routine S21BAF for ${R}_{C}$ is largely included as an auxiliary to the other routines for elliptic integrals. This integral essentially calculates elementary functions, e.g.,
 $\begin{aligned} \ln x &= (x-1)\, R_C\!\left(\left(\tfrac{1+x}{2}\right)^2,\, x\right), & x &> 0; \\ \arcsin x &= x\, R_C\!\left(1-x^2,\, 1\right), & |x| &\le 1; \\ \operatorname{arcsinh} x &= x\, R_C\!\left(1+x^2,\, 1\right), & &\text{etc.} \end{aligned}$
In general this method of calculating these elementary functions is not recommended as there are usually much more efficient specific routines available in the Library. However, S21BAF may be used, for example, to compute $\mathrm{ln}x/\left(x-1\right)$ when $x$ is close to $1$, without the loss of significant figures that occurs when $\mathrm{ln}x$ and $x-1$ are computed separately.
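The loss of figures referred to is easy to exhibit: merely storing $x$ near $1$ in floating point perturbs $x-1$, and hence $\mathrm{ln}x$, by a relative amount of order $\delta /\left(x-1\right)$. A small illustrative sketch in Python (standing in for the ${R}_{C}$-based computation, not a call to S21BAF):

```python
# Cancellation near x = 1 when ln(x) and x-1 are formed separately.
import math

eps = 1e-9
x = 1.0 + eps                 # storing x already rounds the increment
print(x - 1.0)                # ~1.0000000827e-09: only ~7 figures correct
print(math.log1p(eps))        # ln(1+eps) from the increment: ~9.99999999e-10
# A quotient formed from ln(x) and x-1 computed separately inherits this
# ~1e-7 relative error; working from the increment (as R_C does) avoids it.
```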

### 3.3  Bessel and Airy Functions

For computing the Bessel functions ${J}_{\nu }\left(x\right)$, ${Y}_{\nu }\left(x\right)$, ${I}_{\nu }\left(x\right)$ and ${K}_{\nu }\left(x\right)$ where $x$ is real and $\nu =0\text{​ or ​}1$, special routines are provided, which are much faster than the more general routines that allow a complex argument and arbitrary real $\nu \ge 0$. Similarly, special routines are provided for computing the Airy functions and their derivatives $\mathrm{Ai}\left(x\right)$, $\mathrm{Bi}\left(x\right)$, ${\mathrm{Ai}}^{\prime }\left(x\right)$, ${\mathrm{Bi}}^{\prime }\left(x\right)$ for a real argument which are much faster than the routines for complex arguments.

### 3.4  Confluent Hypergeometric Function ${}_{1}F_{1}$

Two routines are provided for the confluent hypergeometric function ${}_{1}F_{1}$. Both return values for ${}_{1}F_{1}\left(a;b;x\right)$ where parameters $a$ and $b$, and argument $x$, are all real, but one variant works in a scaled form designed to avoid unnecessary loss of precision. The unscaled routine S22BAF is easier to use and should be chosen in the first instance, changing to the scaled routine S22BBF only if problems are encountered.

## 4  Functionality Index

 Airy function,
 Ai, real argument,
 scalar S17AGF
 vectorized S17AUF
 Ai or Ai ′ , complex argument, optionally scaled S17DGF
 Ai ′ , real argument,
 scalar S17AJF
 vectorized S17AWF
 Bi, real argument,
 scalar S17AHF
 vectorized S17AVF
 Bi or Bi ′ , complex argument, optionally scaled S17DHF
 Bi ′ , real argument,
 scalar S17AKF
 vectorized S17AXF
 Arccos,
 inverse circular cosine S09ABF
 Arccosh,
 inverse hyperbolic cosine S11ACF
 Arcsin,
 inverse circular sine S09AAF
 Arcsinh,
 inverse hyperbolic sine S11ABF
 Arctanh,
 inverse hyperbolic tangent S11AAF
 Bessel function,
 I0, real argument,
 scalar S18AEF
 vectorized S18ASF
 I1, real argument,
 scalar S18AFF
 vectorized S18ATF
 Iν, complex argument, optionally scaled S18DEF
 J0, real argument,
 scalar S17AEF
 vectorized S17ASF
 J1, real argument,
 scalar S17AFF
 vectorized S17ATF
 Jα ± n(z), complex argument S18GKF
 Jν, complex argument, optionally scaled S17DEF
 K0, real argument,
 scalar S18ACF
 vectorized S18AQF
 K1, real argument,
 vectorized S18ARF
 Kν, complex argument, optionally scaled S18DCF
 Y0, real argument,
 scalar S17ACF
 vectorized S17AQF
 Y1, real argument,
 vectorized S17ARF
 Yν, complex argument, optionally scaled S17DCF
 beta function,
 incomplete S14CCF
 Complement of the Cumulative Normal distribution S15ACF
 Complement of the Error function,
 complex argument, scaled S15DDF
 real argument, scaled S15AGF
 Cosine,
 hyperbolic S10ACF
 Cosine Integral S13ACF
 Cumulative Normal distribution function S15ABF
 Dawson's Integral S15AFF
 Elliptic functions, Jacobian, sn, cn, dn,
 complex argument S21CBF
 real argument S21CAF
 Elliptic integral,
 general,
 of 2nd kind, F(z , k ′  , a , b) S21DAF
 Legendre form,
 complete of 1st kind, K(m) S21BHF
 complete of 2nd kind, E (m) S21BJF
 of 1st kind, F(ϕ | m) S21BEF
 of 2nd kind, E (ϕ ∣ m) S21BFF
 of 3rd kind, Π (n ; ϕ ∣ m) S21BGF
 symmetrised,
 degenerate of 1st kind, RC S21BAF
 of 1st kind, RF S21BBF
 of 2nd kind, RD S21BCF
 of 3rd kind, RJ S21BDF
 Erf,
 real argument S15AEF
 Erfc,
 complex argument, scaled S15DDF
 erfcx,
 real argument S15AGF
 Exponential,
 complex S01EAF
 Exponential Integral S13AAF
 Fresnel integral,
 C,
 vectorized S20ARF
 S,
 scalar S20ACF
 vectorized S20AQF
 Gamma function S14AAF
 Gamma function,
 incomplete S14BAF
 Generalized factorial function S14AAF
 Hankel function Hν(1) or Hν(2),
 complex argument, optionally scaled S17DLF
 Hypergeometric functions,
 1F1 (a ; b ; x) , confluent, real argument S22BAF
 1F1(a ; b ; x), confluent, real argument, scaled form S22BBF
 Jacobian theta functions θk(x , q),
 real argument S21CCF
 Kelvin function,
 bei x,
 scalar S19ABF
 vectorized S19APF
 ber x,
 scalar S19AAF
 vectorized S19ANF
 kei x,
 vectorized S19ARF
 ker x,
 scalar S19ACF
 vectorized S19AQF
 Legendre functions of 1st kind Pnm(x), normalized Pnm(x) S22AAF
 Logarithm of 1 + x S01BAF
 Logarithm of beta function,
 real S14CBF
 Logarithm of gamma function,
 complex S14AGF
 real S14ABF
 real, scaled S14AHF
 Option Pricing,
 American option, Bjerksund and Stensland option price S30QCF
 Asian option, geometric continuous average rate price S30SAF
 Asian option, geometric continuous average rate price with Greeks S30SBF
 binary asset-or-nothing option price S30CCF
 binary asset-or-nothing option price with Greeks S30CDF
 binary cash-or-nothing option price S30CAF
 binary cash-or-nothing option price with Greeks S30CBF
 Black–Scholes–Merton option price S30AAF
 Black–Scholes–Merton option price with Greeks S30ABF
 European option, option prices, using Merton jump-diffusion model S30JAF
 European option, option price with Greeks, using Merton jump-diffusion model S30JBF
 floating-strike lookback option price S30BAF
 floating-strike lookback option price with Greeks S30BBF
 Heston's model option price S30NAF
 Heston's model option price with Greeks S30NBF
 standard barrier option price S30FAF
 Polygamma function,
 ψ(n)(x), real x S14AEF
 ψ(n)(z), complex z S14AFF
 Psi function S14ACF
 Scaled modified Bessel function(s),
 e−|x| I0(x), real argument,
 scalar S18CEF
 vectorized S18CSF
 e−|x| I1(x), real argument,
 scalar S18CFF
 vectorized S18CTF
 ex K0 (x), real argument,
 scalar S18CCF
 vectorized S18CQF
 ex K1 (x), real argument,
 scalar S18CDF
 vectorized S18CRF
 Sine,
 hyperbolic S10ABF
 Tangent,
 circular S07AAF
 hyperbolic S10AAF
 Zeros of Bessel functions Jα(x), Jα ′ (x), Yα(x), Yα ′ (x),
 scalar S17ALF

## 5  Auxiliary Routines Associated with Library Routine Parameters

None.

## 6  Routines Withdrawn or Scheduled for Withdrawal

None.

## 7  References

Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Carlson B C (1965) On computing elliptic integrals and functions J. Math. Phys. 44 36–51
Carlson B C (1977a) Special Functions of Applied Mathematics Academic Press
Carlson B C (1977b) Elliptic integrals of the first kind SIAM J. Math. Anal. 8 231–242
Clenshaw C W (1962) Chebyshev Series for Mathematical Functions Mathematical tables HMSO
Fox L and Parker I B (1968) Chebyshev Polynomials in Numerical Analysis Oxford University Press
Haug E G (2007) The Complete Guide to Option Pricing Formulas (2nd Edition) McGraw-Hill
Joshi M S (2003) The Concepts and Practice of Mathematical Finance Cambridge University Press
Pearson J (2009) Computation of hypergeometric functions MSc Dissertation, Mathematical Institute, University of Oxford
Schonfelder J L (1976) The production of special function routines for a multi-machine library Softw. Pract. Exper. 6(1)
