NAG FL Interface
S (Specfun)
Approximations of Special Functions

1 Scope of the Chapter

This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.

2 Background to the Problems

The majority of the routines in this chapter approximate real-valued functions of a single real argument, and the techniques involved are described in Section 2.1. In addition the chapter contains routines for elliptic integrals (see Section 2.2), Bessel and Airy functions of a complex argument (see Section 2.3), complementary error function of a complex argument, hypergeometric functions and various option pricing routines for use in financial applications.

2.1 Functions of a Single Real Argument

Most of the routines provided for functions of a single real argument have been based on truncated Chebyshev expansions. This method of approximation was adopted as a compromise between the conflicting requirements of efficiency and ease of implementation on many different machine ranges. For details of the reasons behind this choice and the production and testing procedures followed in constructing this chapter see Schonfelder (1976).
Basically, if the function to be approximated is f(x), then for x ∈ [a,b] an approximation of the form
\[ f(x) = g(x) \sum_{r=0}{}' C_r T_r(t) \]
is used (Σ′ denotes, according to the usual convention, a summation in which the first term is halved), where g(x) is some suitable auxiliary function which extracts any singularities, asymptotes and, if possible, zeros of the function in the range in question, and t = t(x) is a mapping of the general range [a,b] to the specific range [−1,+1] required by the Chebyshev polynomials, Tr(t). For a detailed description of the properties of the Chebyshev polynomials see Clenshaw (1962) and Fox and Parker (1968).
The essential property of these polynomials for the purposes of function approximation is that Tn(t) oscillates between ±1 and it takes its extreme values n+1 times in the interval [-1,+1]. Therefore, provided the coefficients Cr decrease in magnitude sufficiently rapidly the error made by truncating the Chebyshev expansion after n terms is approximately given by
\[ E(t) \approx C_n T_n(t). \]
That is, the error oscillates between ±Cn and takes its extreme value n+1 times in the interval in question. Now this is just the condition that the approximation be a minimax representation, one which minimizes the maximum error. By suitable choice of the interval, [a,b], the auxiliary function, g(x), and the mapping of the independent variable, t(x), it is almost always possible to obtain a Chebyshev expansion with rapid convergence and hence truncations that provide near minimax polynomial approximations to the required function. The difference between the true minimax polynomial and the truncated Chebyshev expansion is seldom large enough to be of significance.
The evaluation of the Chebyshev expansions follows one of two methods. The first and most efficient, and hence the most commonly used, works with the equivalent simple polynomial. The second method, which is used on the few occasions when the first method proves to be unstable, is based directly on the truncated Chebyshev series, and uses backward recursion to evaluate the sum. For the first method, a suitably truncated Chebyshev expansion (truncation is chosen so that the error is less than the machine precision) is converted to the equivalent simple polynomial. That is, we evaluate the set of coefficients br such that
\[ y(t) = \sum_{r=0}^{n-1} b_r t^r = \sum_{r=0}^{n-1}{}' C_r T_r(t). \]
The polynomial can then be evaluated by the efficient Horner's method of nested multiplications,
\[ y(t) = \bigl(b_0 + t\bigl(b_1 + t\bigl(b_2 + \dots + t\bigl(b_{n-2} + t\,b_{n-1}\bigr)\dots\bigr)\bigr)\bigr). \]
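As a purely illustrative sketch (in Python rather than Fortran, and not the Library code itself), nested multiplication with precomputed coefficients b_r might be written as follows:

```python
def horner(b, t):
    """Evaluate y(t) = b[0] + b[1]*t + ... + b[n-1]*t**(n-1) by nested
    multiplication, working inwards from the highest coefficient."""
    y = 0.0
    for coeff in reversed(b):   # b[n-1], b[n-2], ..., b[0]
        y = y * t + coeff
    return y
```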
This method of evaluation results in efficient routines but for some expansions there is considerable loss of accuracy due to cancellation effects. In these cases the second method is used. It is well known that if
\[ b_{n-1} = C_{n-1}, \quad b_{n-2} = 2 t\, b_{n-1} + C_{n-2}, \quad b_j = 2 t\, b_{j+1} - b_{j+2} + C_j, \quad j = n-3, n-4, \dots, 0, \]
then
\[ \sum_{r=0}^{n-1}{}' C_r T_r(t) = \tfrac{1}{2}\left(b_0 - b_2\right) \]
and this is always stable. This method is most efficiently implemented by using three variables cyclically and explicitly constructing the recursion.
That is,
\[ \begin{aligned} \alpha &= C_{n-1}, \\ \beta &= 2t\alpha + C_{n-2}, \\ \gamma &= 2t\beta - \alpha + C_{n-3}, \\ \alpha &= 2t\gamma - \beta + C_{n-4}, \\ \beta &= 2t\alpha - \gamma + C_{n-5}, \\ &\;\;\vdots \\ \text{say}\quad \alpha &= 2t\gamma - \beta + C_2, \\ \beta &= 2t\alpha - \gamma + C_1, \\ y(t) &= t\beta - \alpha + \tfrac{1}{2} C_0. \end{aligned} \]
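A corresponding illustrative sketch of this backward recurrence (again Python, not the Library code), assuming the coefficients C_r are supplied with the Σ′ convention above:

```python
def clenshaw(c, t):
    """Evaluate sum' c[r]*T_r(t), r = 0..n-1, with the first term halved,
    using the stable three-term backward recurrence described above."""
    b1 = 0.0   # plays the role of b_{j+1}
    b2 = 0.0   # plays the role of b_{j+2}
    for cj in reversed(c[1:]):      # j = n-1, n-2, ..., 1
        b1, b2 = 2.0 * t * b1 - b2 + cj, b1
    return t * b1 - b2 + 0.5 * c[0]   # y(t) = t*beta - alpha + C_0/2
```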
The auxiliary functions used are normally functions compounded of simple polynomial (usually linear) factors extracting zeros, and the primary compiler-provided functions, sin, cos, ln, exp, sqrt, which extract singularities and/or asymptotes or in some cases basic oscillatory behaviour, leaving a smooth well-behaved function to be approximated by the Chebyshev expansion which can, therefore, be rapidly convergent.
The mappings of [a,b] to [−1,+1] used range from simple linear mappings to the case when b is infinite, where considerable improvement in convergence can be obtained by use of a bilinear form of mapping. Another common form of mapping is used when the function is even; that is, it involves only even powers in its expansion. In this case an approximation over the whole interval [−a,a] can be provided using a mapping t = 2(x/a)² − 1. This embodies the evenness property but the expansion in t involves all powers and hence removes the necessity of working with an expansion with half its coefficients zero.
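For illustration, such an even-function mapping can be combined with the clenshaw sketch above (the function name here is hypothetical, not a Library routine):

```python
def eval_even_expansion(c, x, a):
    """Evaluate a Chebyshev expansion of an even function on [-a, a]
    via the mapping t = 2*(x/a)**2 - 1 described above."""
    t = 2.0 * (x / a) ** 2 - 1.0
    return clenshaw(c, t)
```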
For many of the routines an analysis of the error in principle is given, namely, if E and Δ are the absolute errors in the function and argument and ε and δ are the corresponding relative errors, then
\[ E \approx \left| f'(x) \right| \Delta, \qquad E \approx \left| x f'(x) \right| \delta, \qquad \varepsilon \approx \left| \frac{x f'(x)}{f(x)} \right| \delta. \]
If we ignore errors that arise in the argument of the function by propagation of data errors, etc., and consider only those errors that result from the fact that a real number is being represented in the computer in floating-point form with finite precision, then δ is bounded and this bound is independent of the magnitude of x. For example, on an 11-digit machine
\[ |\delta| \le 10^{-11}. \]
(This of course implies that the absolute error Δ = xδ is also bounded, but the bound is now dependent on x.) However, because of this the last two relations above are probably of more interest. If possible the relative error propagation is discussed; that is, the behaviour of the error amplification factor |xf′(x)/f(x)| is described, but in some cases, such as near zeros of the function which cannot be extracted explicitly, absolute error in the result is the quantity of significance and here the factor |xf′(x)| is described. In general, testing of the functions has shown that their error behaviour follows these theoretical predictions fairly well. In regions where the error amplification factors are less than or of the order of one, the errors are slightly larger than the above predictions. The errors here are limited largely by the finite precision of arithmetic in the machine, but ε is normally no more than a few times greater than the bound on δ. In regions where the amplification factors are large, of order ten or greater, the theoretical analysis gives a good measure of the accuracy obtainable.
It should be noted that the definitions and notations used for the functions in this chapter are all taken from Abramowitz and Stegun (1972). You are strongly recommended to consult this book for details before using the routines in this chapter. An excellent on-line reference for special functions is the NIST Digital Library of Mathematical Functions.

2.2 Approximations to Elliptic Integrals

Four functions provided here are symmetrised variants of the classical (Legendre) elliptic integrals. These alternative definitions have been suggested by Carlson (1965), Carlson (1977b) and Carlson (1977a), who also developed the basic algorithms used in this chapter.
The symmetrised elliptic integral of the first kind is represented by
\[ R_F(x,y,z) = \tfrac{1}{2} \int_0^{\infty} \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}}, \]
where x, y, z ≥ 0 and at most one may be equal to zero.
The normalization factor, ½, is chosen so as to make
\[ R_F(x,x,x) = 1/\sqrt{x}. \]
If any two of the variables are equal, RF degenerates into the second function
\[ R_C(x,y) = R_F(x,y,y) = \tfrac{1}{2} \int_0^{\infty} \frac{dt}{(t+y)\sqrt{t+x}}, \]
where the argument restrictions are now x ≥ 0 and y ≠ 0.
This function is related to the logarithm or inverse hyperbolic functions if 0 < y < x, and to the inverse circular functions if 0 ≤ x < y.
The symmetrised elliptic integral of the second kind is defined by
\[ R_D(x,y,z) = \tfrac{3}{2} \int_0^{\infty} \frac{dt}{\sqrt{(t+x)(t+y)(t+z)^3}} \]
with z > 0, x ≥ 0 and y ≥ 0, but only one of x and y may be zero.
The function is a degenerate special case of the symmetrised elliptic integral of the third kind
\[ R_J(x,y,z,\rho) = \tfrac{3}{2} \int_0^{\infty} \frac{dt}{(t+\rho)\sqrt{(t+x)(t+y)(t+z)}} \]
with ρ ≠ 0 and x, y, z ≥ 0 with at most one equality holding. Thus R_D(x,y,z) = R_J(x,y,z,z). The normalization of both these functions is chosen so that
\[ R_D(x,x,x) = R_J(x,x,x,x) = 1/(x\sqrt{x}). \]
The algorithms used for all these functions are based on duplication theorems. These allow a recursion system to be established which constructs a new set of arguments from the old using a combination of arithmetic and geometric means. The value of the function at the original arguments can then be simply related to the value at the new arguments. These recursive reductions are used until the arguments differ from the mean by an amount small enough for a Taylor series about the mean to give sufficient accuracy when retaining terms of order less than six. Each step of the recurrences reduces the difference from the mean by a factor of four, and as the truncation error is of order six, the truncation error goes like (4096)^{-n}, where n is the number of iterations.
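The following Python sketch illustrates the duplication scheme for R_F. It is not the algorithm used by the Library routines; in particular the convergence test and the fifth-order truncation shown are only indicative:

```python
import math

def rf(x, y, z, tol=1.0e-8):
    """Illustrative evaluation of Carlson's R_F(x, y, z) by the duplication
    theorem: replace the arguments by means until they are nearly equal,
    then use a truncated Taylor expansion about the mean."""
    while True:
        lam = (math.sqrt(x) * math.sqrt(y) + math.sqrt(y) * math.sqrt(z)
               + math.sqrt(z) * math.sqrt(x))
        x, y, z = 0.25 * (x + lam), 0.25 * (y + lam), 0.25 * (z + lam)
        mu = (x + y + z) / 3.0
        dx, dy, dz = 1.0 - x / mu, 1.0 - y / mu, 1.0 - z / mu
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            break
    e2 = dx * dy - dz * dz
    e3 = dx * dy * dz
    # truncated Taylor series about the mean, retaining terms of order < 6
    return (1.0 - e2 / 10.0 + e3 / 14.0 + e2 * e2 / 24.0
            - 3.0 * e2 * e3 / 44.0) / math.sqrt(mu)
```

For example, rf(1.0, 1.0, 1.0) returns 1, consistent with the normalization R_F(x,x,x) = 1/√x above.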
The above forms can be related to the more traditional canonical forms (see Section 17.2 of Abramowitz and Stegun (1972)), as follows.
If we write q = cos²ϕ, r = 1 − m sin²ϕ, s = 1 − n sin²ϕ, where 0 ≤ ϕ ≤ ½π, we have
the classical elliptic integral of the first kind:
\[ F(\phi \mid m) = \int_0^{\phi} \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = \sin\phi\, R_F(q,r,1); \]
the classical elliptic integral of the second kind:
\[ E(\phi \mid m) = \int_0^{\phi} \left(1 - m\sin^2\theta\right)^{1/2} d\theta = \sin\phi\, R_F(q,r,1) - \tfrac{1}{3} m \sin^3\phi\, R_D(q,r,1); \]
the classical elliptic integral of the third kind:
\[ \Pi(n;\phi \mid m) = \int_0^{\phi} \left(1 - n\sin^2\theta\right)^{-1} \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = \sin\phi\, R_F(q,r,1) + \tfrac{1}{3} n \sin^3\phi\, R_J(q,r,1,s). \]
Also, the classical complete elliptic integral of the first kind:
\[ K(m) = \int_0^{\pi/2} \left(1 - m\sin^2\theta\right)^{-1/2} d\theta = R_F(0, 1-m, 1); \]
the classical complete elliptic integral of the second kind:
\[ E(m) = \int_0^{\pi/2} \left(1 - m\sin^2\theta\right)^{1/2} d\theta = R_F(0, 1-m, 1) - \tfrac{1}{3} m\, R_D(0, 1-m, 1). \]
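For illustration, the rf sketch above gives the complete integral of the first kind directly (legendre_K is a hypothetical name, not a Library routine):

```python
def legendre_K(m):
    """Complete elliptic integral of the first kind via the Carlson form:
    K(m) = R_F(0, 1-m, 1), using the illustrative rf sketch above."""
    return rf(0.0, 1.0 - m, 1.0)

print(legendre_K(0.0))   # K(0) = pi/2 ~ 1.5707963
```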
For convenience, Chapter S contains routines to evaluate classical and symmetrised elliptic integrals.

2.3 Bessel and Airy Functions of a Complex Argument

The routines for Bessel and Airy functions of a real argument are based on Chebyshev expansions, as described in Section 2.1. The routines provided for functions of a complex argument, however, use different methods. These routines relate all functions to the modified Bessel functions Iν(z) and Kν(z) computed in the right-half complex plane, including their analytic continuations. Iν and Kν are computed by different methods according to the values of z and ν. The methods include power series, asymptotic expansions and Wronskian evaluations. The relations between functions are based on well known formulae (see Abramowitz and Stegun (1972)).

2.4 Option Pricing Routines

The option pricing routines evaluate the closed form solutions or approximations to the equations that define mathematical models for the prices of selected financial option contracts. These solutions can be viewed as special functions determined by the underlying equations. The terminology associated with these routines arises from their setting in financial markets and is briefly outlined below. See Joshi (2003) for a comprehensive introduction to this subject. An option is a contract which gives the holder the right, but not the obligation, to buy (if it is a call) or sell (if it is a put) a particular asset, S. A European option can be exercised only at the specified expiry time, T, while an American option can be exercised at any time up to T. For Asian options the average underlying price over a pre-set time period determines the payoff.
The asset is bought (if a call) or sold (if a put) at a pre-specified strike price X. Thus, an option contract has a payoff to the holder of max{ST−X, 0} for a call or max{X−ST, 0} for a put, which depends on whether the asset price at the time of exercise is above (call) or below (put) the strike X. If at any moment a contract is showing a theoretical profit then it is deemed ‘in-the-money’; otherwise it is deemed ‘out-of-the-money’.
The option contract itself, therefore, has a value and, in many cases, can be traded in markets. Mathematical models (e.g., Black–Scholes, Merton, Vasicek, Hull–White, Heston, CEV, SABR, …) give theoretical prices for particular option contracts using a number of assumptions about the behaviour of financial markets. Typically the price St of the underlying asset at time t is modelled as the solution of a stochastic differential equation (SDE). Depending on the complexity of this equation, the model may admit closed form formulae for the prices of various options. The options described in this chapter introduction are detailed below. We let 𝔼 denote expectation with respect to the risk neutral measure and we define 𝕀A to be 1 on the set A and 0 otherwise.

2.4.1 The Black–Scholes Model

The best known model of asset behaviour is the Black–Scholes model. Under the risk-neutral measure, the asset is governed by the SDE
\[ \frac{dS_t}{S_t} = (r - q)\,dt + \sigma\,dW_t \]
where r is the continuously compounded risk-free interest rate, q is the continuously compounded dividend yield, σ is the volatility of log-asset returns (i.e., of log(St+dt/St)) and W = (Wt)t≥0 is a standard Brownian motion. Under this model, the price P of any option must solve the Black–Scholes PDE
\[ \frac{\partial P}{\partial t} + (r - q) S \frac{\partial P}{\partial S} + \tfrac{1}{2} \sigma^2 S^2 \frac{\partial^2 P}{\partial S^2} - rP = 0 \]
at all times before the option is exercised. This PDE admits a closed form solution for a number of different options.
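For example, for a European call the closed-form solution takes the familiar form sketched below; this Python sketch is purely illustrative and is not one of the Chapter S routines:

```python
import math

def norm_cdf(x):
    """Standard normal distribution function N(x)."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bs_call(S, X, T, r, q, sigma):
    """Black-Scholes price of a European call: spot S, strike X, expiry T,
    risk-free rate r, dividend yield q, volatility sigma."""
    d1 = (math.log(S / X) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-q * T) * norm_cdf(d1) - X * math.exp(-r * T) * norm_cdf(d2)
```

The corresponding put price follows from put-call parity.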

2.4.2 The Black–Scholes Model with Term Structure

The simplest extension of the Black–Scholes model is to allow r, q and σ to be deterministic functions of time so that
\[ \frac{dS_t}{S_t} = (r_t - q_t)\,dt + \sigma_t\,dW_t. \]
In this case one can still obtain closed form solutions for some options, e.g., European calls and puts.
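To see why, note that for deterministic coefficients S_T remains lognormally distributed. As a sketch under the assumption of piecewise-constant coefficients (not the method of any Library routine), a European call then reduces to the Black–Scholes formula with time-averaged rate and dividend yield and root-mean-square volatility, reusing math and bs_call from the sketch in Section 2.4.1:

```python
def bs_call_term_structure(S, X, dts, rates, divs, vols):
    """European call with piecewise-constant r_t, q_t, sigma_t on intervals
    of length dts[i]: equivalent to bs_call with time-averaged rate/yield
    and root-mean-square volatility."""
    T = sum(dts)
    r_bar = sum(r * dt for r, dt in zip(rates, dts)) / T
    q_bar = sum(q * dt for q, dt in zip(divs, dts)) / T
    sigma_bar = math.sqrt(sum(s * s * dt for s, dt in zip(vols, dts)) / T)
    return bs_call(S, X, T, r_bar, q_bar, sigma_bar)
```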

2.4.3 The Heston Model

Heston (1993) proposed a stochastic volatility model with the following form
\[ \frac{dS_t}{S_t} = (r - q)\,dt + \sqrt{v_t}\,dW_t^{(1)}, \qquad dv_t = \kappa(\eta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{(2)}, \]
where W(1) and W(2) are two Brownian motions with quadratic covariation given by d⟨W(1),W(2)⟩t = ρ dt. In this model r and q are the continuously compounded risk-free interest rate and dividend rate respectively, v = (vt)t≥0 is the stochastic volatility process, η is the long-term mean of volatility, κ is the rate of mean reversion, and σ is the volatility of volatility. The prices of European call and put options in the Heston model are available in closed form up to the evaluation of an integral transform (see Lewis (2000)).

2.4.4 The Heston Model with Term Structure

The Heston model can be extended by allowing the coefficients to become deterministic functions of time:
\[ \frac{dS_t}{S_t} = (r_t - q_t)\,dt + \sqrt{v_t}\,dW_t^{(1)}, \qquad dv_t = \kappa_t(\eta_t - v_t)\,dt + \sigma_t\sqrt{v_t}\,dW_t^{(2)}, \]
where W(1) and W(2) are two Brownian motions with quadratic covariation given by d⟨W(1),W(2)⟩t = ρt dt. When the coefficients are restricted to being piecewise-constant functions of time, the prices of European call and put options can be calculated as described in Elices (2008) and Mikhailov and Nögel (2003).

2.5 Hypergeometric Functions

The confluent hypergeometric function M(a,b,x) (or ₁F₁(a;b;x)) requires a number of techniques to approximate it over the whole parameter (a,b) space and for all argument (x) values. For x well within the unit circle, |x| ≤ ρ < 1 (where ρ = 0.8, say), and for relatively small parameter values, the function can be well approximated by Taylor expansions, continued fractions or through the solution of the related ordinary differential equation by an explicit, adaptive integrator. For values of |x| > ρ, one of several transformations can be performed (depending on the value of x) to reformulate the problem in terms of a new argument x̂ such that |x̂| ≤ ρ. If one or more of the parameters is relatively large (e.g., |a| > 30) then recurrence relations can be used in combination to reformulate the problem in terms of parameter values of small size (e.g., |a| < 1).
Approximations to the hypergeometric functions can, therefore, require all of the above techniques in sequence: a transformation to get an argument well inside the unit circle, a combination of recurrence relations to reduce the parameter sizes, and the approximation of the resulting hypergeometric function by one of a set of approximation techniques. Similar complications arise in the computation of the Gaussian hypergeometric function ₂F₁.
All the techniques described above are based on those described in Pearson (2009).
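As an illustration of the first of these techniques only (a direct Taylor expansion, suitable when |x| is small and the parameters are of modest size; this is not the method used by the Library routines):

```python
def hyp1f1_taylor(a, b, x, tol=1.0e-15, max_terms=500):
    """Illustrative Taylor series for M(a, b, x):
    sum over n of (a)_n / (b)_n * x**n / n!, summed until terms are negligible."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) / (b + n) * x / (n + 1)   # ratio of consecutive terms
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total
```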

3 Recommendations on Choice and Use of Available Routines

3.1 Vectorized Routine Variants

Many routines in Chapter S that compute functions of a single real argument have variants which operate on vectors of arguments. For example, s18aef computes the value of the I0 Bessel function for a single argument, and s18asf computes the same function for multiple arguments. In general it should be more efficient to use vectorized routines where possible, though to some extent this will depend on the environment from which you call the routines. See Section 4 for a complete list of vectorized routines.

3.2 Elliptic Integrals

IMPORTANT ADVICE: users who encounter elliptic integrals in the course of their work are strongly recommended to look at transforming their analysis directly to one of the Carlson forms, rather than to the traditional canonical Legendre forms. In general, the extra symmetry of the Carlson forms is likely to simplify the analysis, and these symmetric forms are much more stable to calculate. Note, however, that this transformation may eventually lead to the following combination of Carlson forms:
\[ R_F(0, 1-m, 1) - \tfrac{1}{3} m\, R_D(0, 1-m, 1) \]
with possibly m = 1, which makes RF and RD individually undefined, although the combination itself remains defined (equal to 1 at m = 1). The routine s21bjf, which returns the Legendre form E(m) through this combination, makes provision for such a case and allows m = 1.
The routine s21baf for RC is largely included as an auxiliary to the other routines for elliptic integrals. This integral essentially calculates elementary functions, e.g.,
\[ \ln x = (x-1)\, R_C\!\left(\left(\tfrac{1+x}{2}\right)^2, x\right), \quad x > 0; \qquad \arcsin x = x\, R_C(1 - x^2, 1), \quad |x| \le 1; \qquad \operatorname{arcsinh} x = x\, R_C(1 + x^2, 1); \quad \text{etc.} \]
In general this method of calculating these elementary functions is not recommended as there are usually much more efficient specific routines available in the Library. However, s21baf may be used, for example, to compute lnx/(x-1) when x is close to 1, without the loss of significant figures that occurs when lnx and x-1 are computed separately.
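For illustration, R_C can be obtained from the rf sketch in Section 2.2 as the degenerate case R_C(x,y) = R_F(x,y,y), after which the combination mentioned above is available directly (the function names here are hypothetical, not Library routines):

```python
def rc(x, y):
    """Degenerate symmetrised elliptic integral R_C(x, y) = R_F(x, y, y),
    reusing the illustrative rf sketch from Section 2.2."""
    return rf(x, y, y)

def log_over_x_minus_1(x):
    """ln(x)/(x - 1) for x > 0, stable near x = 1, via
    ln(x) = (x - 1) * R_C(((1 + x)/2)**2, x)."""
    return rc(((1.0 + x) / 2.0) ** 2, x)
```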

3.3 Bessel and Airy Functions

For computing the Bessel functions Jν(x), Yν(x), Iν(x) and Kν(x) where x is real and ν = 0 or 1, special routines are provided, which are much faster than the more general routines that allow a complex argument and arbitrary real ν ≥ 0. Similarly, special routines are provided for computing the Airy functions and their derivatives Ai(x), Bi(x), Ai′(x), Bi′(x) for a real argument; these are much faster than the routines for complex arguments.

3.4 Option Pricing Functions

For the Black–Scholes model, functions are provided to compute prices and derivatives (Greeks) of all the European options listed in Section 2.4. Prices for American call and put options can be obtained by calling s30qcf which uses the Bjerksund and Stensland (2002) approximation to the theoretical value. For the Black–Scholes model with term structure, prices for European call and put options can be obtained by calling d03ndf. The prices of European call and put options in the standard Heston model can be obtained by calling s30naf, while s30ncf returns the same prices in the Heston model with term structure.
Given the price of a European call or put option, the implied volatility can be computed by calling s30acf.
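As a rough illustration of what such an inversion involves (s30acf uses its own method; the bisection below is only a sketch built on the bs_call example from Section 2.4.1):

```python
def implied_vol_bisect(price, S, X, T, r, q, lo=1.0e-6, hi=5.0, tol=1.0e-8):
    """Bisection for the Black-Scholes implied volatility of a European call,
    assuming the observed price lies between the model prices at lo and hi."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, X, T, r, q, mid) < price:
            lo = mid      # call price increases with volatility
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```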

3.5 Hypergeometric Functions

Two routines are provided for the confluent hypergeometric function ₁F₁. Both return values for ₁F₁(a;b;x) where the parameters a and b, and the argument x, are all real, but one variant works in a scaled form designed to avoid unnecessary loss of precision. The unscaled routine s22baf is easier to use and should be chosen in the first instance, changing to the scaled routine s22bbf only if problems are encountered. Similar considerations apply to the Gaussian hypergeometric function routines s22bef and s22bff.

4 Functionality Index

Airy function,  
Ai, real argument,  
scalar   s17agf
vectorized   s17auf
Ai or Ai′, complex argument, optionally scaled   s17dgf
Ai′, real argument,  
scalar   s17ajf
vectorized   s17awf
Bi, real argument,  
scalar   s17ahf
vectorized   s17avf
Bi or Bi′, complex argument, optionally scaled   s17dhf
Bi′, real argument,  
scalar   s17akf
vectorized   s17axf
Arccos,  
inverse circular cosine   s09abf
Arccosh,  
inverse hyperbolic cosine   s11acf
Arcsin,  
inverse circular sine   s09aaf
Arcsinh,  
inverse hyperbolic sine   s11abf
Arctanh,  
inverse hyperbolic tangent   s11aaf
Bessel function,  
I0, real argument,  
scalar   s18aef
vectorized   s18asf
I1, real argument,  
scalar   s18aff
vectorized   s18atf
Iν, complex argument, optionally scaled   s18def
J0, real argument,  
scalar   s17aef
vectorized   s17asf
J1, real argument,  
scalar   s17aff
vectorized   s17atf
Jα±n(z), complex argument   s18gkf
Jν, complex argument, optionally scaled   s17def
K0, real argument,  
scalar   s18acf
vectorized   s18aqf
K1, real argument,  
scalar   s18adf
vectorized   s18arf
Kν, complex argument, optionally scaled   s18dcf
Y0, real argument,  
scalar   s17acf
vectorized   s17aqf
Y1, real argument,  
scalar   s17adf
vectorized   s17arf
Yν, complex argument, optionally scaled   s17dcf
beta function,  
regularized incomplete,  
scalar   s14ccf
vectorized   s14cqf
Complement of the Cumulative Normal distribution,  
scalar   s15acf
vectorized   s15aqf
Complement of the Error function,  
complex argument, scaled,  
scalar   s15ddf
vectorized   s15drf
real argument,  
scalar   s15adf
vectorized   s15arf
real argument, scaled,  
scalar   s15agf
vectorized   s15auf
Cosine,  
hyperbolic   s10acf
Cosine Integral   s13acf
Cumulative Normal distribution function,  
scalar   s15abf
vectorized   s15apf
Dawson's Integral,  
scalar   s15aff
vectorized   s15atf
Digamma function, scaled   s14adf
Elliptic functions, Jacobian, sn, cn, dn,  
complex argument   s21cbf
real argument   s21caf
Elliptic integral,  
general,  
of 2nd kind, F(z,k′,a,b)   s21daf
Legendre form,  
complete of 1st kind, K(m)   s21bhf
complete of 2nd kind, E (m)   s21bjf
of 1st kind, F(ϕ|m)   s21bef
of 2nd kind, E(ϕ|m)   s21bff
of 3rd kind, Π(n;ϕ|m)   s21bgf
symmetrised,  
degenerate of 1st kind, RC   s21baf
of 1st kind, RF   s21bbf
of 2nd kind, RD   s21bcf
of 3rd kind, RJ   s21bdf
Erf,  
real argument,  
scalar   s15aef
vectorized   s15asf
Erfc,  
complex argument, scaled,  
scalar   s15ddf
vectorized   s15drf
real argument,  
scalar   s15adf
vectorized   s15arf
erfcx,  
real argument,  
scalar   s15agf
vectorized   s15auf
Exponential,  
complex   s01eaf
Exponential Integral   s13aaf
Fresnel integral,  
C,  
scalar   s20adf
vectorized   s20arf
S,  
scalar   s20acf
vectorized   s20aqf
Gamma function,  
incomplete,  
scalar   s14baf
vectorized   s14bnf
scalar   s14aaf
vectorized   s14anf
Generalized factorial function,  
scalar   s14aaf
vectorized   s14anf
Hankel function Hν^(1) or Hν^(2),  
complex argument, optionally scaled   s17dlf
Hypergeometric functions,  
₁F₁(a;b;x), confluent, real argument   s22baf
₁F₁(a;b;x), confluent, real argument, scaled form   s22bbf
₂F₁(a,b;c;x), Gauss, real argument   s22bef
₂F₁(a,b;c;x), Gauss, real argument, scaled form   s22bff
Jacobian theta functions θk(x,q),  
real argument   s21ccf
Kelvin function,  
beix,  
scalar   s19abf
vectorized   s19apf
berx,  
scalar   s19aaf
vectorized   s19anf
keix,  
scalar   s19adf
vectorized   s19arf
kerx,  
scalar   s19acf
vectorized   s19aqf
Legendre functions of 1st kind Pnm(x), P̄nm(x)   s22aaf
Logarithm of 1+x   s01baf
Logarithm of beta function,  
real,  
scalar   s14cbf
vectorized   s14cpf
Logarithm of gamma function,  
complex   s14agf
real,  
scalar   s14abf
vectorized   s14apf
real, scaled   s14ahf
Mathieu function (angular),  
periodic, real,  
vectorized   s22caf
Modified Struve function,  
I0-L0, real argument,  
scalar   s18gcf
I1-L1, real argument,  
scalar   s18gdf
L0, real argument,  
scalar   s18gaf
L1, real argument,  
scalar   s18gbf
Option Pricing,  
American option, Bjerksund and Stensland option price   s30qcf
Asian option, geometric continuous average rate price   s30saf
Asian option, geometric continuous average rate price with Greeks   s30sbf
binary asset-or-nothing option price   s30ccf
binary asset-or-nothing option price with Greeks   s30cdf
binary cash-or-nothing option price   s30caf
binary cash-or-nothing option price with Greeks   s30cbf
Black–Scholes implied volatility   s30acf
Black–Scholes–Merton option price   s30aaf
Black–Scholes–Merton option price with Greeks   s30abf
European option, option prices, using Merton jump-diffusion model   s30jaf
European option, option price with Greeks, using Merton jump-diffusion model   s30jbf
floating-strike lookback option price   s30baf
floating-strike lookback option price with Greeks   s30bbf
Heston's model option price   s30naf
Heston's model option price with Greeks   s30nbf
Heston's model with term structure   s30ncf
standard barrier option price   s30faf
Polygamma function,  
ψ(n)(x), real x   s14aef
ψ(n)(z), complex z   s14aff
psi function   s14acf
psi function derivatives, scaled   s14adf
Scaled modified Bessel function(s),  
e^(−|x|) I0(x), real argument,  
scalar   s18cef
vectorized   s18csf
e^(−|x|) I1(x), real argument,  
scalar   s18cff
vectorized   s18ctf
e^x K0(x), real argument,  
scalar   s18ccf
vectorized   s18cqf
e^x K1(x), real argument,  
scalar   s18cdf
vectorized   s18crf
Sine,  
hyperbolic   s10abf
Sine Integral   s13adf
Struve function,  
H0, real argument,  
scalar   s17gaf
H1, real argument,  
scalar   s17gbf
Tangent,  
circular   s07aaf
hyperbolic   s10aaf
Trigamma function, scaled   s14adf
Zeros of Bessel functions Jα(x), Jα′(x), Yα(x), Yα′(x),  
scalar   s17alf

5 Auxiliary Routines Associated with Library Routine Arguments

None.

6 Withdrawn or Deprecated Routines

None.

7 References

NIST Digital Library of Mathematical Functions
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Bjerksund P and Stensland G (2002) Closed form valuation of American options Discussion Paper 2002/09 NHH Bergen Norway
Carlson B C (1965) On computing elliptic integrals and functions J. Math. Phys. 44 36–51
Carlson B C (1977a) Special Functions of Applied Mathematics Academic Press
Carlson B C (1977b) Elliptic integrals of the first kind SIAM J. Math. Anal. 8 231–242
Clenshaw C W (1962) Chebyshev Series for Mathematical Functions Mathematical tables HMSO
Elices A (2008) Models with time-dependent parameters using transform methods: application to Heston’s model arXiv:0708.2020v2
Fox L and Parker I B (1968) Chebyshev Polynomials in Numerical Analysis Oxford University Press
Haug E G (2007) The Complete Guide to Option Pricing Formulas (2nd Edition) McGraw-Hill
Heston S (1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options Review of Financial Studies 6 327–343
Joshi M S (2003) The Concepts and Practice of Mathematical Finance Cambridge University Press
Lewis A L (2000) Option valuation under stochastic volatility Finance Press, USA
Mikhailov S and Nögel U (2003) Heston’s Stochastic Volatility Model Implementation, Calibration and Some Extensions Wilmott Magazine July/August 74–79
Pearson J (2009) Computation of hypergeometric functions MSc Dissertation, Mathematical Institute, University of Oxford
Schonfelder J L (1976) The production of special function routines for a multi-machine library Softw. Pract. Exper. 6(1)