NAG Toolbox Chapter Introduction
G05 — Random Number Generators
Scope of the Chapter
This chapter is concerned with the generation of sequences of independent pseudorandom and quasirandom numbers from various distributions and models.
Background to the Problems
Pseudorandom Numbers
A sequence of pseudorandom numbers is a sequence of numbers generated in some systematic way such that they are independent and statistically indistinguishable from a truly random sequence. A pseudorandom number generator (PRNG) is a mathematical algorithm that, given an initial state, produces a sequence of pseudorandom numbers. A PRNG has several advantages over a true random number generator: the generated sequence is repeatable, has known mathematical properties and can be implemented without needing any specialist hardware. Many books on statistics and computer science have good introductions to PRNGs, for example Knuth (1981) or Banks (1998).
PRNGs can be split into base generators and distributional generators. Within the context of this document a base generator is defined as a PRNG that produces a sequence (or stream) of variates (or values) uniformly distributed over the interval (0,1). Depending on the algorithm being considered, this interval may be open, closed or half-closed. A distributional generator is a function that takes variates generated from a base generator and transforms them into variates from a specified distribution, for example a uniform, Gaussian (Normal) or gamma distribution.
The period (or cycle length) of a base generator is defined as the maximum number of values that can be generated before the sequence starts to repeat. The initial state of the base generator is often called the seed.
There are six base generators currently available in the NAG Toolbox: a basic linear congruential generator (LCG), referred to as the NAG basic generator (see Knuth (1981)); two sets of Wichmann–Hill generators (see Maclaren (1989) and Wichmann and Hill (2006)); the Mersenne Twister (see Matsumoto and Nishimura (1998)); the ACORN generator (see Wikramaratna (1989)); and the L'Ecuyer generator (see L'Ecuyer and Simard (2002)).
NAG Basic Generator
The NAG basic generator is a linear congruential generator (LCG) and, like all linear congruential generators, has the form:
  b_i = a_1 b_{i−1} mod m_1,
  u_i = b_i / m_1,
where the u_i, for i = 1,2,…, form the required sequence.
The NAG basic generator uses a_1 = 13^13 and m_1 = 2^59, which gives a period of approximately 2^57.
This generator has been part of the NAG Library since Mark 6 and as such has been widely used. It suffers from no known problems, other than those due to the lattice structure inherent in all linear congruential generators, and, even though the period is relatively short compared to many of the newer generators, it is sufficiently large for many practical problems.
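The recurrence above is simple enough to sketch directly. The following Python snippet is an illustration of a multiplicative LCG using the stated constants a_1 = 13^13 and m_1 = 2^59; the simplified seeding shown here is an assumption for demonstration and is not the NAG Toolbox's initialization routine:

```python
# Illustrative sketch of a multiplicative congruential generator with
# a1 = 13^13 and m1 = 2^59, the constants of the NAG basic generator.
# The seeding below is simplified, not NAG's initialization procedure.

A1 = 13 ** 13
M1 = 2 ** 59

def nag_basic_stream(seed, n):
    """Return n uniform variates in (0,1) from the LCG recurrence."""
    b = seed % M1
    if b == 0:
        b = 1  # the state of a multiplicative LCG must be nonzero
    out = []
    for _ in range(n):
        b = (A1 * b) % M1      # b_i = a1 * b_{i-1} mod m1
        out.append(b / M1)     # u_i = b_i / m1
    return out

values = nag_basic_stream(123456789, 5)
```

Because the modulus is a power of two and the multiplier is odd, the state never reaches zero once it is nonzero, so every variate lies strictly inside (0,1).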
The performance of the NAG basic generator has been analysed by the spectral test (see Section 3.3.4 of Knuth (1981)), yielding the following results in the notation of Knuth (1981).
n    ν_n          Upper bound for ν_n
2    3.44×10^8    4.08×10^8
3    4.29×10^5    5.88×10^5
4    1.72×10^4    2.32×10^4
5    1.92×10^3    3.33×10^3
6    593          939
7    198          380
8    108          197
9    67           120
The right-hand column gives an upper bound for the values of ν_n attainable by any multiplicative congruential generator working modulo 2^59.
An informal interpretation of the quantities ν_n is that consecutive n-tuples are statistically uncorrelated to an accuracy of 1/ν_n. This is a theoretical result; in practice the degree of randomness is usually much greater than the above figures might support. More details are given in Knuth (1981), and in the references cited therein.
Note that the achievable accuracy drops rapidly as the number of dimensions increases. This is a property of all multiplicative congruential generators and is the reason why very long periods are needed even for samples of only a few random numbers.
Wichmann–Hill I Generator
This series of Wichmann–Hill base generators (see Maclaren (1989)) uses a combination of four linear congruential generators and has the form:
  w_i = a_1 w_{i−1} mod m_1,
  x_i = a_2 x_{i−1} mod m_2,
  y_i = a_3 y_{i−1} mod m_3,
  z_i = a_4 z_{i−1} mod m_4,
  u_i = ( w_i/m_1 + x_i/m_2 + y_i/m_3 + z_i/m_4 ) mod 1,   (1)
where the u_i, for i = 1,2,…, form the required sequence. The NAG Toolbox implementation includes 273 sets of parameters, a_j, m_j, for j = 1,2,3,4, to choose from.
The constants a_j are in the range 112 to 127 and the constants m_j are prime numbers in the range 16718909 to 16776971, which are close to 2^24 = 16777216. These constants have been chosen so that each of the resulting 273 generators is essentially independent, all calculations can be carried out in 32-bit integer arithmetic and the generators give good results with the spectral test; see Knuth (1981) and Maclaren (1989). The period of each of these generators would be at least 2^92 if it were not for common factors between (m_1 − 1), (m_2 − 1), (m_3 − 1) and (m_4 − 1). However, each generator should still have a period of at least 2^80. Further discussion of the properties of these generators is given in Maclaren (1989).
Wichmann–Hill II Generator
This Wichmann–Hill base generator (see Wichmann and Hill (2006)) is of the same form as that described in Section [Wichmann–Hill I Generator], i.e., a combination of four linear congruential generators. In this case a_1 = 11600, m_1 = 2147483579, a_2 = 47003, m_2 = 2147483543, a_3 = 23000, m_3 = 2147483423, a_4 = 33000, m_4 = 2147483123.
Unlike in the original Wichmann–Hill generator, these values are too large to carry out the calculations detailed in (1) using 32-bit integer arithmetic. However, if
  m_1 = a_1 q_1 + r_1,
then setting
  W_i = a_1 ( w_{i−1} mod q_1 ) − r_1 ⌊ w_{i−1} / q_1 ⌋
gives
  w_i = W_i if W_i ≥ 0, and w_i = W_i + m_1 otherwise,
and W_i can be calculated in 32-bit integer arithmetic. Similar expressions exist for x_i, y_i and z_i. The period of this generator is approximately 2^121.
Further details of implementing this algorithm and its properties are given in Wichmann and Hill (2006). This paper also gives some useful guidelines on testing PRNGs.
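The overflow-avoiding factorization can be checked numerically. The sketch below assumes the decomposition described above (Schrage's method, with q_1 = ⌊m_1/a_1⌋ and r_1 = m_1 mod a_1); Python's arbitrary-precision integers never overflow, which lets us compare the factorized result with the direct product:

```python
# Schrage-style computation of (a * w) mod m, keeping every
# intermediate product below roughly a * q so it fits in a 32-bit
# word. Illustrated with the first Wichmann-Hill II pair (a1, m1);
# this is a sketch of the idea, not the NAG source code.

a1, m1 = 11600, 2147483579
q1, r1 = m1 // a1, m1 % a1     # m1 = a1*q1 + r1; the method needs r1 < q1

def mul_mod_schrage(a, m, q, r, w):
    """Return (a * w) % m using only small intermediate values."""
    t = a * (w % q) - r * (w // q)
    return t if t >= 0 else t + m

# Check against the direct computation (exact in Python integers).
w = 1234567
direct = (a1 * w) % m1
schrage = mul_mod_schrage(a1, m1, q1, r1, w)
```

The identity holds for any state 0 < w < m provided r < q, which is the case for all four Wichmann–Hill II parameter pairs.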
Mersenne Twister Generator
The Mersenne Twister (see Matsumoto and Nishimura (1998)) is a twisted generalized feedback shift register generator. The algorithm underlying the Mersenne Twister is as follows:
(i) Set some arbitrary initial values x_1, x_2, …, x_r, each consisting of w bits.
(ii) Letting
  A = ( 0        I_{w−1}
        a_w   (a_{w−1}, …, a_1) ),
where I_{w−1} is the (w−1)×(w−1) identity matrix and each of the a_i, i = 1 to w, takes a value of either 0 or 1 (i.e., they can be represented as bits), define
  x_{i+r} = x_{i+s} ⊕ ( x_i^{(w:(l+1))} | x_{i+1}^{(l:1)} ) A,
where x_i^{(w:(l+1))} | x_{i+1}^{(l:1)} indicates the concatenation of the most significant (upper) w − l bits of x_i and the least significant (lower) l bits of x_{i+1}.
(iii) Perform the following operations sequentially:
  z = x_{i+r} ⊕ ( x_{i+r} ≫ t_1 ),
  z = z ⊕ ( ( z ≪ t_2 ) AND m_1 ),
  z = z ⊕ ( ( z ≪ t_3 ) AND m_2 ),
  z = z ⊕ ( z ≫ t_4 ),
  u_{i+r} = z / 2^w,
where t_1, t_2, t_3 and t_4 are integers, m_1 and m_2 are bit-masks, '≫ t' and '≪ t' represent a t-bit shift right and left respectively, ⊕ is the bitwise exclusive or (xor) operation and 'AND' is the bitwise and operation.
The u_{i+r}, for i = 1,2,…, form the required sequence. The supplied implementation of the Mersenne Twister uses the following values for the algorithmic constants:
  r = 624, w = 32, s = 397, l = 31,
  (a_w, a_{w−1}, …, a_1) = 0x9908b0df,
  t_1 = 11, t_2 = 7, t_3 = 15, t_4 = 18,
  m_1 = 0x9d2c5680, m_2 = 0xefc60000,
where the notation 0xDD… indicates the bit pattern of the integer whose hexadecimal representation is DD….
This algorithm has a period length of approximately 2^19937 − 1 and has been shown to be uniformly distributed in 623 dimensions (see Matsumoto and Nishimura (1998)).
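The Mersenne Twister (MT19937) is widely available outside the NAG Toolbox; in particular, Python's standard `random` module implements it, so its behaviour, such as the repeatability obtained by re-seeding, can be explored directly:

```python
# Python's standard `random` module is an implementation of the
# Mersenne Twister (MT19937), so reproducibility from a fixed seed
# can be demonstrated without any additional code.
import random

gen = random.Random(42)              # seed the generator
first = [gen.random() for _ in range(3)]

gen.seed(42)                         # re-seeding replays the stream
second = [gen.random() for _ in range(3)]
```

Identical seeds always reproduce the identical stream of variates, which is the PRNG repeatability property discussed at the start of this chapter.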
ACORN Generator
The ACORN generator is a special case of a multiple recursive generator (see Wikramaratna (1989) and Wikramaratna (2007)). The algorithm underlying ACORN is as follows:
(i) Choose an integer value k ≥ 1.
(ii) Choose an integer value M, and an integer seed Y_0^(0), such that 0 < Y_0^(0) < M and Y_0^(0) and M are relatively prime.
(iii) Choose an arbitrary set of k initial integer values, Y_0^(1), Y_0^(2), …, Y_0^(k), such that 0 ≤ Y_0^(m) < M, for all m = 1,2,…,k.
(iv) Perform the following sequentially:
  Y_i^(m) = ( Y_i^(m−1) + Y_{i−1}^(m) ) mod M,
for m = 1,2,…,k, where Y_i^(0) = Y_{i−1}^(0), for i = 1,2,….
(v) Set u_i = Y_i^(k) / M.
The u_i, for i = 1,2,…, then form a pseudorandom sequence, with u_i ∈ [0,1), for all i.
Although you can choose any value for k, M, Y_0^(0) and the Y_0^(m), within the constraints mentioned in (i) to (iii) above, it is recommended that k ≥ 10, that M is chosen to be a large power of two with M ≥ 2^60, and that Y_0^(0) is chosen to be odd.
The period of the ACORN generator, with the modulus M equal to a power of two and an odd value for Y_0^(0), has been shown to be an integer multiple of M (see Wikramaratna (1992)). Therefore, increasing M will give a series with a longer period.
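The additive recurrence in step (iv) is short enough to sketch in full. The snippet below follows the steps above with the recommended choices k = 10 and M = 2^60 and an odd seed; taking the initial values Y_0^(1),…,Y_0^(k) as zero is a simplifying assumption for illustration, not NAG's initialization:

```python
# Illustrative ACORN generator: k-th order additive recurrence mod M.
# Uses the recommended k = 10, M = 2^60 and an odd seed; the initial
# values Y0^(1..k) are taken as zero here purely for simplicity.

def acorn_stream(seed, n, k=10, M=2 ** 60):
    """Return n variates u_i = Y_i^(k) / M, each in [0, 1)."""
    assert 0 < seed < M and seed % 2 == 1   # odd, hence coprime to 2^60
    Y = [seed] + [0] * k    # Y[0] stays fixed; Y[1..k] evolve
    out = []
    for _ in range(n):
        for m in range(1, k + 1):
            # Y[m-1] already holds Y_i^(m-1); Y[m] still holds Y_{i-1}^(m)
            Y[m] = (Y[m - 1] + Y[m]) % M
        out.append(Y[k] / M)
    return out

u = acorn_stream(987654321, 5)
```

Updating the array in place from m = 1 upwards reproduces the recurrence exactly, since each Y[m−1] has already been advanced to step i when Y[m] is computed.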
L'Ecuyer MRG32k3a Combined Recursive Generator
The base generator L'Ecuyer MRG32k3a (see L'Ecuyer and Simard (2002)) combines two multiple recursive generators:
  x_i = ( a_{11} x_{i−1} + a_{12} x_{i−2} − a_{13} x_{i−3} ) mod m_1,
  y_i = ( a_{21} y_{i−1} + a_{22} y_{i−2} − a_{23} y_{i−3} ) mod m_2,
  z_i = ( x_i − y_i ) mod m_1,
  u_i = z_i / d,
where a_{11} = 0, a_{12} = 1403580, a_{13} = 810728, m_1 = 2^32 − 209, a_{21} = 527612, a_{22} = 0, a_{23} = 1370589, m_2 = 2^32 − 22853, and the u_i, i = 1,2,…, form the required sequence. If d = m_1 then u_i ∈ (0,1]; if d = m_1 + 1 then u_i ∈ (0,1). Combining the two multiple recursive generators (MRGs) results in sequences with better statistical properties in high dimensions and longer periods compared with those generated from a single MRG. The combined generator described above has a period length of approximately 2^191.
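The combined recursion can be sketched in a few lines. This illustration assumes the third-lag coefficients enter with a negative sign (as in L'Ecuyer's original formulation of MRG32k3a) and uses d = m_1 + 1; the all-12345 starting state is an arbitrary example, not NAG's seeding:

```python
# Illustrative MRG32k3a sketch: two order-3 recursive generators
# combined by subtraction modulo m1. Assumptions: third-lag terms are
# subtracted (L'Ecuyer's formulation) and d = m1 + 1; the starting
# state of all 12345s is just an example.

m1 = 2 ** 32 - 209
m2 = 2 ** 32 - 22853
a12, a13 = 1403580, 810728
a21, a23 = 527612, 1370589

def mrg32k3a_stream(n, x=(12345, 12345, 12345), y=(12345, 12345, 12345)):
    """Return n variates u_i in [0, 1) from the combined generator."""
    x3, x2, x1 = x      # x_{i-3}, x_{i-2}, x_{i-1}
    y3, y2, y1 = y
    out = []
    for _ in range(n):
        xn = (a12 * x2 - a13 * x3) % m1    # a11 = 0 term omitted
        yn = (a21 * y1 - a23 * y3) % m2    # a22 = 0 term omitted
        x3, x2, x1 = x2, x1, xn
        y3, y2, y1 = y2, y1, yn
        z = (xn - yn) % m1
        out.append(z / (m1 + 1))
    return out

u = mrg32k3a_stream(5)
```

Python's `%` operator already returns a non-negative residue, so no explicit sign correction is needed after the subtractions.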
Quasirandom Numbers
Low-discrepancy (quasirandom) sequences are used in numerical integration, simulation and optimization. Like pseudorandom numbers they are uniformly distributed, but they are not statistically independent; rather, they are designed to give a more even distribution in multidimensional space (uniformity). Therefore they are often more efficient than pseudorandom numbers in multidimensional Monte–Carlo methods.
The quasirandom number generators implemented in this chapter generate a set of points x^1, x^2, …, x^N with high uniformity in the S-dimensional unit cube I^S = [0,1]^S. One measure of the uniformity is the discrepancy, which is defined as follows:
 Given a set of points x^1, x^2, …, x^N ∈ I^S and a subset G ⊂ I^S, define the counting function S_N(G) as the number of points x^i ∈ G. For each x = (x_1, x_2, …, x_S) ∈ I^S, let G_x be the rectangular S-dimensional region
  G_x = [0, x_1) × [0, x_2) × ⋯ × [0, x_S),
with volume x_1 x_2 ⋯ x_S. Then the discrepancy of the points x^1, x^2, …, x^N is
  D_N*( x^1, x^2, …, x^N ) = sup_{x ∈ I^S} | S_N(G_x) − N x_1 x_2 ⋯ x_S |.
The discrepancy of the first N terms of such a sequence has the form
  D_N*( x^1, x^2, …, x^N ) ≤ C_S (log N)^S + O( (log N)^{S−1} ) for all N ≥ 2.
The principal aim in the construction of low-discrepancy sequences is to find sequences of points in I^S with a bound of this form where the constant C_S is as small as possible.
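The (normalized) discrepancy can be estimated by brute force for small point sets. The sketch below works in one dimension, scanning the supremum over a fine grid; it is an illustration of the definition only, not an efficient algorithm:

```python
# Brute-force estimate of the one-dimensional star discrepancy
# sup_x |S_N([0,x)) - N*x| / N, scanning x over a fine grid.
# Illustrative only: O(N * grid) work, not a practical algorithm.

def discrepancy_1d(points, grid=10000):
    """Estimate the normalized discrepancy of points in [0, 1]."""
    N = len(points)
    worst = 0.0
    for g in range(grid + 1):
        x = g / grid
        count = sum(1 for p in points if p < x)   # counting function S_N
        worst = max(worst, abs(count - N * x) / N)
    return worst

# A single point at 0.5 has discrepancy 0.5, while the N evenly
# spaced midpoints (2i+1)/(2N) achieve the minimal value 1/(2N).
d_single = discrepancy_1d([0.5])
d_even = discrepancy_1d([(2 * i + 1) / 10 for i in range(5)])
```

Comparing `d_single` with `d_even` shows how spreading points evenly drives the discrepancy down, which is exactly what low-discrepancy sequences are designed to do in higher dimensions.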
Three types of low-discrepancy sequences are supplied in this library; these are due to Sobol, Faure and Niederreiter. Two sets of Sobol sequences are supplied, the first based on the work of Joe and Kuo (2008) and the second on the work of Bratley and Fox (1988). More information on quasirandom number generation, and on the Sobol, Faure and Niederreiter sequences in particular, can be found in Bratley and Fox (1988) and Fox (1986).
The efficiency of a simulation exercise may often be increased by the use of variance reduction methods (see
Morgan (1984)). It is also worth considering whether a simulation is the best approach to solving the problem. For example, lowdimensional integrals are usually more efficiently calculated by functions in
Chapter D01 rather than by Monte–Carlo integration.
Scrambled Quasirandom Numbers
Scrambled quasirandom sequences are an extension of standard quasirandom sequences that attempt to eliminate the bias inherent in a quasirandom sequence whilst retaining the low-discrepancy properties. The use of a scrambled sequence allows error estimation of Monte–Carlo results by performing a number of iterates and computing the variance of the results.
This implementation of scrambled quasirandom sequences is based on TOMS algorithm 823; details can be found in the accompanying paper, Hong and Hickernell (2003). Three methods of scrambling are supplied: the first is a restricted form of Owen's scrambling (Owen (1995)), the second is based on the method of Faure and Tezuka (2000), and the last combines the first two.
Scrambled versions of both Sobol sequences and the Niederreiter sequence can be obtained.
Nonuniform Random Numbers
Random numbers from other distributions may be obtained from the uniform random numbers by the use of transformations and rejection techniques, and, for discrete distributions, by table-based methods.
(a) Transformation Methods
For a continuous random variable, if the cumulative distribution function (CDF) is F(x) then for a uniform (0,1) random variate u, y = F^{−1}(u) will have CDF F(x). This method is only efficient in a few simple cases such as the exponential distribution with mean μ, in which case F^{−1}(u) = −μ log u. Other transformations are based on the joint distribution of several random variables. In the bivariate case, if v and w are random variates there may be a function g such that y = g(v,w) has the required distribution; for example, the Student's t-distribution with n degrees of freedom, in which v has a Normal distribution, w has a gamma distribution and g(v,w) = v sqrt(n/w).
(b) Rejection Methods
Rejection techniques are based on the ability to easily generate random numbers from a distribution (called the envelope) similar to the distribution required. The value from the envelope distribution is then accepted as a random number from the required distribution with a certain probability; otherwise, it is rejected and a new number is generated from the envelope distribution.
(c) Table Search Methods
For discrete distributions, if the cumulative probabilities, P_i = Prob(x ≤ i), are stored in a table then, given u from a uniform (0,1) distribution, the table is searched for i such that P_{i−1} < u ≤ P_i. The returned value i will have the required distribution. The table searching can be made faster by means of an index; see Ripley (1987). The effort required to set up the table and its index may be considerable, but the methods are very efficient when many values are needed from the same distribution.
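Methods (a) and (c) above are easy to sketch. The snippet below illustrates inversion for the exponential distribution and a bisection-based table search for a discrete distribution; the example probabilities are arbitrary, and this is not the NAG implementation:

```python
# Sketch of two of the methods above: inversion for the exponential
# distribution, and a table search (via bisection) for a discrete
# distribution. Example probabilities are arbitrary illustrations.
import math
import bisect
import random

def exponential_by_inversion(u, mu):
    """F^{-1}(u) = -mu * log(u) for a uniform (0,1) variate u."""
    return -mu * math.log(u)

def discrete_by_table(u, cum_probs):
    """Return the smallest i with P_{i-1} < u <= P_i (0-based)."""
    return bisect.bisect_left(cum_probs, u)

rng = random.Random(2024)
y = exponential_by_inversion(rng.random(), mu=2.0)

# Discrete distribution with p(0) = 0.2, p(1) = 0.5, p(2) = 0.3,
# stored as cumulative probabilities P_0, P_1, P_2.
cum = [0.2, 0.7, 1.0]
i = discrete_by_table(0.6, cum)   # 0.2 < 0.6 <= 0.7, so i = 1
```

`bisect_left` performs exactly the indexed table search described above in O(log n) comparisons, which is the speed-up an index provides over a linear scan.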
Copulas
A copula is a function that links the univariate marginal distributions with their multivariate distribution. Sklar's theorem (see Sklar (1973)) states that if f is an m-dimensional distribution function with continuous margins f_1, f_2, …, f_m, then f has a unique copula representation, c, such that
  f( x_1, x_2, …, x_m ) = c( f_1(x_1), f_2(x_2), …, f_m(x_m) ).
The copula, c, is a multivariate uniform distribution whose dependence structure is defined by the dependence structure of the multivariate distribution f, with
  c( u_1, u_2, …, u_m ) = f( f_1^{−1}(u_1), f_2^{−1}(u_2), …, f_m^{−1}(u_m) ),
where u_i ∈ [0,1]. This relationship can be used to simulate variates from distributions defined by the dependence structure of one distribution and each of the marginal distributions given by another. For additional information see Nelsen (1998) or Boye (Unpublished manuscript) and the references therein.
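The simulation use of a copula can be sketched concretely. The example below assumes a bivariate Gaussian copula (dependence taken from a correlated bivariate Normal) combined with exponential margins; the correlation ρ = 0.8 and the marginal means are arbitrary illustrative values, and this is not a NAG routine:

```python
# Sketch of copula-based simulation: a bivariate Gaussian copula with
# correlation rho, combined with exponential marginal distributions.
# All parameter values here are arbitrary examples.
import math
import random

def std_normal_cdf(z):
    """Phi(z) via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pair(rng, rho):
    """Return (u1, u2): uniforms whose dependence is the Gaussian copula."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return std_normal_cdf(z1), std_normal_cdf(z2)

def exponential_inv(u, mu):
    """Inverse CDF of the exponential marginal with mean mu."""
    return -mu * math.log(1.0 - u)

rng = random.Random(7)
pairs = [gaussian_copula_pair(rng, rho=0.8) for _ in range(2000)]
samples = [(exponential_inv(u1, 1.0), exponential_inv(u2, 3.0))
           for u1, u2 in pairs]
```

The dependence structure comes entirely from the Normal step, while each margin is imposed afterwards through its own inverse CDF, which is exactly the separation Sklar's theorem describes.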
Brownian Bridge
Brownian Bridge Process
Fix two times t_0 < T and let W = (W_t)_{0 ≤ t ≤ T−t_0} be a standard d-dimensional Wiener process on the interval [0, T−t_0]. Recall that the terms Wiener process and Brownian motion are often used interchangeably.
A standard d-dimensional Brownian bridge B = (B_t)_{t_0 ≤ t ≤ T} on [t_0, T] is defined (see Revuz and Yor (1999)) as
  B_t = W_{t−t_0} − ( (t − t_0)/(T − t_0) ) W_{T−t_0}.
The process is continuous, starts at zero at time t_0 and ends at zero at time T. It is Gaussian, has zero mean and has a covariance structure given by
  E[ B_s B_t^T ] = ( (s − t_0)(T − t)/(T − t_0) ) I_d
for any s ≤ t in [t_0, T], where I_d is the d-dimensional identity matrix. The Brownian bridge is often called a non-free or 'pinned' Wiener process since it is forced to be 0 at time T, but is otherwise very similar to a standard Wiener process.
We can generalize this construction as follows. Fix points x, w ∈ ℝ^d, let Σ be a d × d covariance matrix and choose any d × d matrix C such that CC^T = Σ. The generalized d-dimensional Brownian bridge X = (X_t)_{t_0 ≤ t ≤ T} is defined by setting
  X_t = ( (t − t_0) w + (T − t) x )/(T − t_0) + C B_t
for all t ∈ [t_0, T]. The process X is continuous, starts at x at time t_0 and ends at w at time T. It has mean ((t − t_0) w + (T − t) x)/(T − t_0) and covariance structure
  E[ (X_s − E[X_s]) (X_t − E[X_t])^T ] = ( (s − t_0)(T − t)/(T − t_0) ) Σ
for all s ≤ t in [t_0, T]. This is a non-free Wiener process since it is forced to be equal to w at time T. However, if we set w = x + C W_{T−t_0}, then X simplifies to
  X_t = x + C W_{t−t_0}
for all t ∈ [t_0, T], which is nothing other than a d-dimensional Wiener process with covariance given by Σ.
Figure 1: Two sample paths for a two-dimensional free Wiener process
Figure 1 shows two sample paths for a two-dimensional free Wiener process X = (X_t^1, X_t^2)_{0 ≤ t ≤ 2}. The correlation coefficient between the one-dimensional processes X^1 and X^2 at any time is ρ = 0.80. Note that the red and green paths in each figure are uncorrelated; however, it is fairly evident that the two red paths are correlated, and that the two green paths are correlated (when one path increases so does the other, and vice versa).
Figure 2: Two sample paths for a two-dimensional non-free Wiener process. The process starts at (0,0) and ends at (1,−1)
Figure 2 shows two sample paths for a two-dimensional non-free Wiener process. The process starts at (0,0) and ends at (1,−1). The correlation coefficient between the one-dimensional processes is again ρ = 0.80. The red and green paths in each figure are uncorrelated, while the two red paths tend to increase and decrease together, as do the two green paths. Both Figure 1 and Figure 2 were constructed using nag_rand_bb (g05xb).
Brownian Bridge Algorithm
The ideas above can also be used to construct sample paths of a free or non-free Wiener process (recall that a non-free Wiener process is the Brownian bridge process outlined above). Fix two times t_0 < T and let (t_i)_{1 ≤ i ≤ N} be any set of time points satisfying t_0 < t_1 < t_2 < ⋯ < t_N < T. Let (X_{t_i})_{1 ≤ i ≤ N} denote a d-dimensional (free or non-free) Wiener sample path at these times. These values can be generated by the so-called Brownian bridge algorithm (see Glasserman (2004)), which works as follows. From any two known points X_{t_i} at time t_i and X_{t_k} at time t_k with t_i < t_k, a new point X_{t_j} can be interpolated at any time t_j ∈ (t_i, t_k) by setting
  X_{t_j} = ( X_{t_i} (t_k − t_j) + X_{t_k} (t_j − t_i) )/(t_k − t_i) + C Z sqrt( (t_j − t_i)(t_k − t_j)/(t_k − t_i) ),   (2)
where Z is a d-dimensional standard Normal random variable and C is any d × d matrix such that CC^T is the desired covariance structure for the (free or non-free) Wiener process X. Clearly this algorithm is iterative in nature. All that is needed to complete the specification is to fix the start point X_{t_0} and end point X_T, and to specify how successive interpolation times t_j are chosen. For X to behave like a usual (free) Wiener process we should set X_{t_0} equal to some value x ∈ ℝ^d and then set X_T = x + C sqrt(T − t_0) Z, where Z is any d-dimensional standard Normal random variable. However, when it comes to deciding how the successive interpolation times t_j should be chosen, there is virtually no restriction. Any method of choosing which t_j ∈ (t_i, t_k) to interpolate next is equally valid, provided t_i is the nearest known point to the left of t_j and t_k is the nearest known point to the right of t_j. In other words, the interpolation interval (t_i, t_k) must not contain any other known points, otherwise the covariance structure of the process will be incorrect.
The order in which the successive interpolation times t_{j}${t}_{j}$ are chosen is called the bridge construction order. Since all construction orders will produce a correct process, the question arises whether one construction order should be preferred over another. When the Z$Z$ values are drawn from a pseudorandom generator, the answer is typically no. However the bridge algorithm is frequently used with quasirandom numbers, and in this case the bridge construction order can be important.
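The interpolation step can be sketched in one dimension (d = 1, C = σ): the new point is the linear interpolation of its two neighbours plus a Normal perturbation whose standard deviation vanishes at either end of the interval. This is an illustration of the step, not the NAG routine:

```python
# One-dimensional sketch of the Brownian bridge interpolation step:
# conditional mean = chord between the neighbours, conditional
# standard deviation = c * sqrt((tj-ti)(tk-tj)/(tk-ti)).
import math
import random

def bridge_point(ti, xi, tk, xk, tj, z, c=1.0):
    """Interpolate X at tj in (ti, tk), given X(ti)=xi and X(tk)=xk."""
    mean = (xi * (tk - tj) + xk * (tj - ti)) / (tk - ti)
    sd = c * math.sqrt((tj - ti) * (tk - tj) / (tk - ti))
    return mean + sd * z

# With z = 0 the interpolated point lies exactly on the chord between
# the two known points; the random term shrinks to zero as tj
# approaches either ti or tk.
midpoint = bridge_point(0.0, 0.0, 2.0, 4.0, 1.0, z=0.0)

rng = random.Random(1)
sample = bridge_point(0.0, 0.0, 2.0, 4.0, 1.0, z=rng.gauss(0.0, 1.0))
```

Repeating this step with each newly interpolated point as a neighbour for later steps fills in the whole path, which is the iterative nature noted above.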
Bridge Construction Order and Quasirandom Sequences
Consider the one-dimensional case of a free Wiener process where d = C = 1. The Brownian bridge is frequently combined with low-discrepancy (quasirandom) sequences to perform quasi-Monte–Carlo integration. Quasirandom points Z^1, Z^2, Z^3, … are generated from the standard Normal distribution, where each quasirandom point Z^i = (Z_1^i, Z_2^i, …, Z_D^i) consists of D one-dimensional values. The process X starts at X_{t_0} = x, which is known. There remain N + 1 time points at which the bridge is to be computed, namely (X_{t_i})_{1 ≤ i ≤ N} and X_T (recall we are considering a free Wiener process). In this case D is set equal to N + 1, so that (N + 1)-dimensional quasirandom points are generated. A single quasirandom point is used to construct one Wiener sample path.
The question is how to use the dimension values of each (N + 1)-dimensional quasirandom point. Often the 'lower' dimension values (Z_1^i, Z_2^i, etc.) display better uniformity properties than the 'higher' dimension values (Z_{N+1}^i, Z_N^i, etc.), so the 'lower' dimension values should be used to construct the most important sections of the sample path. For example, consider a model which is particularly sensitive to the behaviour of the underlying process at time 3. When constructing the sample paths, one would therefore ensure that time 3 was one of the interpolation points of the bridge, and that a 'lower' dimension value was used in (2) to construct the corresponding bridge point X_3. Indeed, one would most likely also ensure that time 3 was one of the first bridge points constructed: 'lower' dimension values would be used to construct both the left and right bridge points used in (2) to interpolate X_3, so that the distribution of X_3 benefits as much as possible from the uniformity properties of the quasirandom sequence. For further discussion in this regard we refer to Glasserman (2004). These remarks extend readily to the case of a non-free Wiener process.
Brownian Bridge and Stochastic Differential Equations
The Brownian bridge algorithm, especially when combined with quasirandom variates, is frequently used to obtain numerical solutions to stochastic differential equations (SDEs) driven by (free or non-free) Wiener processes. The quasirandom variates produce a family of Wiener sample paths which cover the space of all Wiener sample paths fairly evenly. This is analogous to the way in which a two-dimensional quasirandom sequence covers the unit square [0,1]^2 evenly. When solving SDEs one is typically interested in the increments of the driving Wiener process between two time points, rather than the value of the process at a particular time point. Section [Brownian Bridge] contains details on which functions can be used to obtain such Wiener increments.
Random Fields
A random field is a stochastic process, taking values in a Euclidean space, and defined over a parameter space of dimensionality at least one. Random fields are often used to simulate some physical space-dependent parameter, such as the permeability of rock, which cannot be measured at every point in the space. The simulated values can then be used to model other dependent quantities, for example underground flow of water, often through the use of partial differential equations (PDEs).
A d-dimensional random field Z(x) is a function which is random at every point x ∈ D for some domain D ⊂ ℝ^d, so Z(x) is a random variable for each x. The random field has a mean function μ(x) = E[Z(x)] and a symmetric positive semidefinite covariance function C(x,y) = E[(Z(x) − μ(x))(Z(y) − μ(y))].
A random field, Z(x), is a Gaussian random field if, for any choice of n ∈ ℕ and x_1, …, x_n ∈ ℝ^d, the random vector [Z(x_1), …, Z(x_n)]^T follows a multivariate Gaussian distribution.
A Gaussian random field Z(x) is stationary if μ(x) is constant for all x ∈ ℝ^d and C(x,y) = C(x + a, y + a) for all x, y, a ∈ ℝ^d, and hence we can express the covariance function C(x,y) as a function γ of one variable: C(x,y) = γ(x − y). γ is known as a variogram (or more correctly, a semivariogram) and includes the multiplicative factor σ^2 representing the variance, such that γ(0) = σ^2. There are a number of commonly used variograms, including:
Symmetric stable variogram
Cauchy variogram
Differential variogram with compact support
Exponential variogram
Gaussian variogram
Nugget variogram
Spherical variogram
Bessel variogram
Hole effect variogram
Whittle–Matérn variogram
Continuously parameterised variogram with compact support
Generalized hyperbolic distribution variogram
Cosine variogram
where x′ is a scaled norm of x.
Other Random Structures
In addition to random numbers from various distributions, random compound structures can be generated. These include random time series, random matrices and random samples.
Multiple Streams of Pseudorandom Numbers
It is often advantageous to be able to generate variates from multiple, independent streams (or sequences) of random variates, for example when running a simulation in parallel on several processors. There are four ways of generating multiple streams using the functions available in this chapter:
(i) using different initial values (seeds);
(ii) using different generators;
(iii) skip-ahead (also called block-splitting);
(iv) leap-frogging.
Multiple Streams via Different Initial Values (Seeds)
A different sequence of variates can be generated from the same base generator by initializing the generator using a different set of seeds. The statistical properties of the base generators are only guaranteed within, not between sequences. For example, two sequences generated from two different starting points may overlap if these initial values are not far enough apart. The potential for overlapping sequences is reduced if the period of the generator being used is large. In general, of the four methods for creating multiple streams described here, this is the least satisfactory.
The one exception to this is the Wichmann–Hill II generator. The Wichmann and Hill (2006) paper describes a method of generating blocks of variates, with lengths up to 2^90, by fixing the first three seed values of the generator (w_0, x_0 and y_0), and setting z_0 to a different value for each stream required. This is similar to the skip-ahead method described in Section [Multiple Streams via Skip-ahead], in that the full sequence of the Wichmann–Hill II generator is split into a number of different blocks, in this case with a fixed length of 2^90, but without the computationally intensive initialization usually required for the skip-ahead method.
Multiple Streams via Different Generators
Independent sequences of variates can be generated using a different base generator for each sequence. For example, sequence 1 can be generated using the NAG basic generator, sequence 2 using the Mersenne Twister, sequence 3 using the ACORN generator and sequence 4 using the L'Ecuyer generator. The Wichmann–Hill I generator implemented in this chapter is, in fact, a series of 273 independent generators. The particular subgenerator to use is selected using the subid variable. Therefore, in total, 278 independent streams can be generated, each using a different generator (273 Wichmann–Hill I generators, and 5 additional base generators).
Multiple Streams via Skip-ahead
Independent sequences of variates can be generated from a single base generator through the use of block-splitting, or skipping ahead. This method consists of splitting the sequence into
k$k$ non-overlapping blocks, each of length
n$n$, where
n$n$ is no smaller than the maximum number of variates required from any of the sequences. For example,

Block 1: x_{1}, x_{2}, …, x_{n}
Block 2: x_{n + 1}, x_{n + 2}, …, x_{2n}
Block 3: x_{2n + 1}, x_{2n + 2}, …, x_{3n}
etc.

where x_{1}, x_{2}, … is the sequence produced by the generator of interest. Each of the
k$k$ blocks provides an independent sequence.
The skip-ahead algorithm therefore requires the sequence to be advanced a large number of places: to generate values from, say, block b$b$, you must skip over the (b − 1)n$(b-1)n$ values in the first b − 1$b-1$ blocks. Due to their form this can be done efficiently for linear congruential generators and multiple recursive generators. A skip-ahead algorithm is also provided for the Mersenne Twister generator.
Although skip-ahead requires some additional computation at the initialization stage (to ‘fast forward’ the sequence), no additional computation is required at the generation stage.
This method of producing multiple streams can also be used for the Sobol and Niederreiter quasirandom number generators via the parameter
iskip in
nag_rand_quasi_init (g05yl).
Multiple Streams via Leapfrog
Independent sequences of variates can also be generated from a single base generator through the use of leapfrogging. This method involves splitting the sequence from a single generator into
k$k$ disjoint subsequences. For example:

Subsequence 1: x_{1}, x_{k + 1}, x_{2k + 1}, …
Subsequence 2: x_{2}, x_{k + 2}, x_{2k + 2}, …
…
Subsequence k: x_{k}, x_{2k}, x_{3k}, …

where x_{1}, x_{2}, … is the sequence produced by the generator of interest. Each of the
k$k$ subsequences then provides an independent stream of variates.
The leapfrog algorithm therefore requires the generation of every k$k$th variate from the base generator. Due to their form this can be done efficiently for linear congruential generators and multiple recursive generators. A leapfrog algorithm is provided for the NAG basic generator, both the Wichmann–Hill I and Wichmann–Hill II generators, and the L'Ecuyer generator.
It is known that, dependent on the number of streams required, leapfrogging can lead to sequences with poor statistical properties, especially when applied to linear congruential generators. In addition, leapfrogging can increase the time required to generate each variate. Therefore leapfrogging should be avoided unless absolutely necessary.
Skip-ahead and Leapfrog for a Linear Congruential Generator (LCG): An Example
As an illustrative example, a brief description of the algebra behind the implementation of the leapfrog and skip-ahead algorithms for a linear congruential generator is given. A linear congruential generator has the form
x_{i + 1} = a_{1} x_{i} mod m_{1}${x}_{i+1}={a}_{1}{x}_{i}\mathrm{mod}{m}_{1}$. The recursive nature of a linear congruential generator means that

x_{i + v} = a_{1} x_{i + v − 1} mod m_{1} = a_{1}^{2} x_{i + v − 2} mod m_{1} = … = a_{1}^{v} x_{i} mod m_{1}.

The sequence can therefore be quickly advanced v$v$ places by multiplying the current state (x_{i}${x}_{i}$) by a_{1}^{v} mod m_{1}${a}_{1}^{v}\mathrm{mod}{m}_{1}$, hence skipping the sequence ahead. Leapfrogging can be implemented by using a_{1}^{k}${a}_{1}^{k}$, where k$k$ is the number of streams required, in place of a_{1}${a}_{1}$ in the standard linear congruential generator recursive formula, in order to advance k$k$ places, rather than one, at each iteration.
In a linear congruential generator the multiplier a_{1}${a}_{1}$ is constructed so that the generator has good statistical properties in, for example, the spectral test. When using leapfrogging to construct multiple streams this multiplier is replaced with a_{1}^{k}${a}_{1}^{k}$, and there is no guarantee that this new multiplier will have suitable properties, especially as the value of k$k$ depends on the number of streams required and so is likely to change depending on the application. This problem is exacerbated by the lattice structure of linear congruential generators. Similarly, the value of a_{1}${a}_{1}$ is often chosen such that the computation a_{1} x_{i} mod m_{1}${a}_{1}{x}_{i}\mathrm{mod}{m}_{1}$ can be performed efficiently. When a_{1}${a}_{1}$ is replaced by a_{1}^{k}${a}_{1}^{k}$, this is often no longer the case.
Note that, due to rounding, when using a distributional generator, a sequence generated using leapfrogging and a sequence constructed by taking every k$k$th value from a set of variates generated without leapfrogging may differ slightly. These differences should only affect the least significant digit.
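To make the algebra concrete, the following is a minimal standard-library Python sketch (an illustration only, not the NAG implementation; the multiplier 16807 and modulus 2^{31} − 1 are the classic Lewis–Goodman–Miller choices, assumed here for definiteness):

```python
# Illustrative sketch only: skip-ahead and leapfrog for a toy LCG
# x_{i+1} = a * x_i mod m. Not the NAG basic generator.
a, m = 16807, 2147483647   # Lewis-Goodman-Miller multiplier and modulus

def lcg_next(x):
    """One step of the LCG recursion."""
    return (a * x) % m

def skip_ahead(x, v):
    """Advance v places in one jump: x_{i+v} = a^v * x_i mod m."""
    return (pow(a, v, m) * x) % m

def leapfrog_stream(x0, j, k, n):
    """First n variates of subsequence j (1-based) out of k leapfrogged
    streams: x_j, x_{j+k}, x_{j+2k}, ... generated with multiplier a^k."""
    ak = pow(a, k, m)              # a^k mod m replaces a in the recursion
    x = skip_ahead(x0, j)          # position the stream at x_j
    out = []
    for _ in range(n):
        out.append(x)
        x = (ak * x) % m
    return out

# The jump of v places agrees with v sequential steps:
x0, v = 12345, 1000
x = x0
for _ in range(v):
    x = lcg_next(x)
assert x == skip_ahead(x0, v)
```

Note that the leapfrog stream simply substitutes a_{1}^{k} mod m_{1} for a_{1}, which is exactly why its spectral and lattice properties are no longer guaranteed, as discussed above.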
Skip-ahead and Leapfrog for the Mersenne Twister: An Example
Skipping ahead with the Mersenne Twister generator is based on the definition of a k × k$k\times k$ (where k = 19937$k=19937$) transition matrix, A$A$, over the finite field 𝔽_{2}${\mathbb{F}}_{2}$ (with elements 0 and 1). Multiplying A$A$ by the current state x_{n}${x}_{n}$, represented as a vector of bits, produces the next state vector x_{n + 1}${x}_{n+1}$:

x_{n + 1} = A x_{n}.

Thus, skipping ahead v$v$ places in a sequence is equivalent to multiplying by A^{v}${A}^{v}$:

x_{n + v} = A^{v} x_{n}.
Since calculating A^{v}${A}^{v}$ by a standard square-and-multiply algorithm is O(k^{3} log v)$\mathit{O}\left({k}^{3}\mathrm{log}v\right)$ and requires over 47MB of memory (see Haramoto et al. (2008)), an indirect calculation is performed which relies on a property of the characteristic polynomial p(z)$p\left(z\right)$ of A$A$, namely that p(A) = 0$p\left(A\right)=0$. We then define

g(z) = z^{v} mod p(z) = a_{k − 1} z^{k − 1} + … + a_{1} z + a_{0},

and observe that

z^{v} = q(z) p(z) + g(z)

for a polynomial q(z)$q\left(z\right)$. Since p(A) = 0$p\left(A\right)=0$, we have that g(A) = A^{v}$g\left(A\right)={A}^{v}$ and

x_{n + v} = A^{v} x_{n} = g(A) x_{n} = a_{k − 1} A^{k − 1} x_{n} + … + a_{1} A x_{n} + a_{0} x_{n}.

This polynomial evaluation can be performed using Horner's method:

g(A) x_{n} = A( … A( A( A a_{k − 1} x_{n} + a_{k − 2} x_{n} ) + a_{k − 3} x_{n} ) + … + a_{1} x_{n} ) + a_{0} x_{n},

which reduces the problem to advancing the generator
k − 1$k-1$ places from state
x_{n}${x}_{n}$ and adding (where addition is as defined over
𝔽_{2}${\mathbb{F}}_{2}$) the intermediate states for which
a_{i}${a}_{i}$ is nonzero.
There are therefore two stages to skipping the Mersenne Twister ahead
v$v$ places:
(i) calculate the coefficients of the polynomial g(z) = z^{v} mod p(z)$g\left(z\right)={z}^{v}\mathrm{mod}p\left(z\right)$;
(ii) advance the sequence k − 1$k-1$ places from the starting state and add the intermediate states that correspond to nonzero coefficients in the polynomial calculated in the first step.
The resulting state is that for position v$v$ in the sequence.
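The same two-stage procedure can be demonstrated on a much smaller 𝔽_{2}-linear generator. The sketch below (an illustration, not the NAG or Mersenne Twister code) uses a 4-bit Galois LFSR, whose one-step transition is multiplication by z modulo its characteristic polynomial p(z) = z^{4} + z + 1; stage (i) computes g(z) = z^{v} mod p(z) by square-and-multiply, and stage (ii) adds (XORs) the intermediate states with nonzero coefficients:

```python
# Toy F2-linear generator: a 4-bit Galois LFSR. One step multiplies the
# state (a polynomial of degree < 4 over F2) by z modulo p(z).
# Illustration only: the Mersenne Twister has k = 19937, not 4.
P, DEG = 0b10011, 4          # p(z) = z^4 + z + 1, primitive over F2

def step(s):
    """Advance one place: s := z*s mod p(z)."""
    s <<= 1
    if s & (1 << DEG):
        s ^= P
    return s

def mulmod(f, g):
    """Product f(z)*g(z) mod p(z) over F2."""
    r = 0
    while g:
        if g & 1:
            r ^= f
        g >>= 1
        f = step(f)          # f := f*z mod p(z)
    return r

def z_pow_mod(v):
    """Stage (i): g(z) = z^v mod p(z) by square-and-multiply."""
    result, base = 1, 0b10   # the polynomials 1 and z
    while v:
        if v & 1:
            result = mulmod(result, base)
        base = mulmod(base, base)
        v >>= 1
    return result

def skip_ahead(state, v):
    """Stage (ii): advance DEG - 1 places, adding (XOR over F2) the
    intermediate states whose coefficient a_i in g(z) is nonzero."""
    g = z_pow_mod(v)
    out = 0
    for i in range(DEG):
        if (g >> i) & 1:
            out ^= state
        state = step(state)
    return out
```

With this tiny generator the saving is trivial, but the structure is the same: the polynomial is computed once with a cost that grows only logarithmically in v, and applying it costs a fixed k − 1 generator advances.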
The cost of calculating the polynomial is O(k^{2} log v)$\mathit{O}\left({k}^{2}\mathrm{log}v\right)$ and the cost of applying it to state is constant. Skip-ahead functionality is typically used in order to generate
n$n$ independent pseudorandom number streams (e.g., for separate threads of computation). There are two options for generating the
n$n$ states:
(i) on the master thread calculate the polynomial for a skip-ahead distance of v$v$ and apply this polynomial to state n$n$ times, after each iteration j$j$ saving the current state for later use by thread j$j$;
(ii) have each thread j$j$ independently, and in parallel with other threads, calculate the polynomial for a distance of (j + 1)v$(j+1)v$ and apply it to the original state.
Since log nv = log n + log v, and hence (log nv) / (log v) → 1 as v → ∞, for large
v$v$ the cost of generating the polynomial for a skip-ahead distance of
nv$nv$ (i.e., the calculation performed by thread
n − 1$n-1$ in option
(ii) above) is approximately the same as generating that for a distance of
v$v$ (i.e., the calculation performed by thread
0$0$). However, only one application to state need be made per thread, and if
n$n$ is sufficiently large the cost of applying the polynomial to state becomes the dominant cost in option
(i), in which case it is desirable to use option
(ii). Tests have shown that, as a guideline, it becomes worthwhile to switch from option
(i) to option
(ii) for approximately
n > 30$n>30$.
Leapfrog calculations with the Mersenne Twister are performed by computing the sequence fully up to the required size and discarding the redundant numbers for a given stream.
Recommendations on Choice and Use of Available Functions
Pseudorandom Numbers
Prior to generating any pseudorandom variates the base generator being used must be initialized. Once initialized, a distributional generator can be called to obtain the variates required. No interfaces have been supplied for direct access to the base generators. If a sequence of random variates from a uniform distribution on the open interval
(0,1)$(0,1)$ is required, then the uniform distribution function (
nag_rand_dist_uniform01 (g05sa)) should be called.
Initialization
Prior to generating any variates the base generator must be initialized. Two utility functions are provided for this,
nag_rand_init_repeat (g05kf) and
nag_rand_init_nonrepeat (g05kg), both of which allow any of the base generators to be chosen.
nag_rand_init_repeat (g05kf) selects and initializes a base generator to a repeatable (when executed serially) state: two calls of
nag_rand_init_repeat (g05kf) with the same argument values will result in the same subsequent sequences of random numbers (when both are generated serially).
nag_rand_init_nonrepeat (g05kg) selects and initializes a base generator to a nonrepeatable state in such a way that different calls of
nag_rand_init_nonrepeat (g05kg), either in the same run or different runs of the program, will almost certainly result in different subsequent sequences of random numbers.
No utilities for saving, retrieving or copying the current state of a generator have been provided. All of the information on the current state of a generator (or stream, if multiple streams are being used) is stored in the integer array state and as such this array can be treated as any other integer array, allowing for easy copying, restoring, etc.
Repeated Initialization
As mentioned in
Section [Multiple Streams via Different Initial Values (Seeds)], it is important to note that the statistical properties of pseudorandom numbers are only guaranteed within sequences and not between sequences produced by the same generator. Repeated initialization will thus render the numbers obtained less rather than more independent. In a simple case there should be only one call to
nag_rand_init_repeat (g05kf) or
nag_rand_init_nonrepeat (g05kg) and this call should be before any call to an actual generation function.
Choice of Base Generator
If a single sequence is required then it is recommended that the Mersenne Twister is used as the base generator (
genid = 3$\mathbf{genid}=3$). This generator is fast, has an extremely long period and has been shown to perform well on various test suites, see
Matsumoto and Nishimura (1998),
L'Ecuyer and Simard (2002) and
Wichmann and Hill (2006) for example.
When choosing a base generator, the period of the chosen generator should be borne in mind. A good rule of thumb is never to use more numbers than the square root of the period in any one experiment as the statistical properties are impaired. For closely related reasons, breaking numbers down into their bit patterns and using individual bits may also cause trouble.
Choice of Method for Generating Multiple Streams
If the Wichmann–Hill II base generator is being used, and a period of
2^{90}${2}^{90}$ is sufficient, then the method described in
Section [Multiple Streams via Different Initial Values (Seeds)] can be used. If a different generator is used, or a longer period length is required, then generating multiple streams by altering the initial values should be avoided.
Using a different generator works well if fewer than 277 streams are required.
Of the remaining two methods, both skip-ahead and leapfrogging use the sequence from a single generator, both guarantee that the different sequences will not overlap, and both can be scaled to an arbitrary number of streams. Leapfrogging requires no a priori knowledge about the number of variates being generated, whereas skip-ahead requires you to know (approximately) the maximum number of variates required from each stream. Skip-ahead requires no a priori information on the number of streams required. In contrast, leapfrogging requires you to know the maximum number of streams required prior to generating the first value. Of these two, if possible, skip-ahead should be used in preference to leapfrogging. Both methods require additional computation compared with generating a single sequence, but for skip-ahead this computation occurs only at initialization. For leapfrogging, additional computation is required both at initialization and during the generation of the variates. In addition, as mentioned in
Section [Multiple Streams via Leapfrog], using leapfrogging can, in some instances, change the statistical properties of the sequences being generated.
Leapfrogging is performed by calling
nag_rand_init_leapfrog (g05kh) after the initialization function (
nag_rand_init_repeat (g05kf) or
nag_rand_init_nonrepeat (g05kg)). For skip-ahead, either
nag_rand_init_skipahead (g05kj) or
nag_rand_init_skipahead_power2 (g05kk) can be called. Of these,
nag_rand_init_skipahead_power2 (g05kk) restricts the amount being skipped to a power of
2$2$, but allows for a large ‘skip’ to be performed.
Copulas
After calling one of the copula functions the inverse cumulative distribution function (CDF) can be applied to convert the uniform marginal distribution into the required form. Scalar and vector routines for evaluating the CDF, for a range of distributions, are supplied in
Chapter G01. It should be noted that these routines are often described as computing the ‘deviates’ of the distribution.
When using the inverse CDF functions from
Chapter G01 it should be noted that some are limited in the number of significant figures they return. This may affect the statistical properties of the resulting sequence of variates. Section 7 of the individual function documentation will give a discussion of the accuracy of the particular algorithm being used and any available alternatives.
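The workflow described above (copula first, then inverse CDF) can be sketched in a few lines of standard-library Python; this is an illustration of the general technique, not the NAG interfaces. A bivariate Gaussian copula with correlation 0.8 (an arbitrary value chosen for illustration) produces dependent uniform marginals, which inverse CDFs then map to an exponential and a Normal distribution:

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_pair(rho, rng):
    """One draw (u1, u2): uniform marginals with Gaussian dependence.
    y2 is built from the 2x2 Cholesky factor of the correlation matrix."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    y2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2
    nd = NormalDist()
    return nd.cdf(z1), nd.cdf(y2)    # map each margin to uniform (0,1)

rng = random.Random(1234)
u1, u2 = gaussian_copula_pair(0.8, rng)

# Apply inverse CDFs to impose the required marginal distributions:
x_exp = -math.log(1.0 - u1) / 2.0          # inverse CDF of Exp(rate = 2)
x_norm = NormalDist(5.0, 2.0).inv_cdf(u2)  # inverse CDF of N(5, 2^2)
```

The dependence structure is fixed by the copula; only the marginal shapes change when different inverse CDFs are applied.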
Quasirandom Numbers
Prior to generating any quasirandom variates the generator being used must be initialized via
nag_rand_quasi_init (g05yl) or
nag_rand_quasi_init_scrambled (g05yn). Of these,
nag_rand_quasi_init (g05yl) can be used to initialize a standard Sobol, Faure or Niederreiter sequence and
nag_rand_quasi_init_scrambled (g05yn) can be used to initialize a scrambled Sobol or Niederreiter sequence.
Due to the random nature of the scrambling, prior to calling the initialization function
nag_rand_quasi_init_scrambled (g05yn) one of the pseudorandom initialization functions,
nag_rand_init_repeat (g05kf) or
nag_rand_init_nonrepeat (g05kg), must be called.
Once a quasirandom generator has been initialized, using either
nag_rand_quasi_init (g05yl) or
nag_rand_quasi_init_scrambled (g05yn), one of three generation functions can be called to generate uniformly distributed sequences (
nag_rand_quasi_uniform (g05ym)), Normally distributed sequences (
nag_rand_quasi_normal (g05yj)) or sequences with a lognormal distribution (
nag_rand_quasi_lognormal (g05yk)). For example, for a repeatable sequence of scrambled quasirandom variates from the Normal distribution,
nag_rand_init_repeat (g05kf) must be called first (to initialize a pseudorandom generator), followed by
nag_rand_quasi_init_scrambled (g05yn) (to initialize a scrambled quasirandom generator) and then
nag_rand_quasi_normal (g05yj) can be called to generate the sequence from the required distribution.
See the last paragraph of
Section [Copulas] on how sequences from other distributions can be obtained using the inverse CDF.
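To make the contrast with pseudorandom sequences concrete, the following is a sketch of the simplest low-discrepancy construction, the base-2 Van der Corput radical inverse (for illustration only; the NAG generators implement the more sophisticated Sobol, Faure and Niederreiter sequences). Combined with an inverse CDF, it yields a quasirandom sequence from another distribution, mirroring the workflow referenced above:

```python
from statistics import NormalDist

def van_der_corput(i, base=2):
    """Radical inverse of i in the given base: the digits of i are
    mirrored about the radix point, filling (0,1) ever more evenly."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

# First few points of the uniform low-discrepancy sequence...
u = [van_der_corput(i) for i in range(1, 8)]
# ...and the same points pushed through the Normal inverse CDF:
z = [NormalDist().inv_cdf(x) for x in u]
```

Unlike a pseudorandom stream, each new point deliberately falls in the largest gap left by its predecessors, which is what makes such sequences effective for quasi-Monte Carlo integration.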
Brownian Bridge
nag_rand_bb (g05xb) may be used to generate sample paths from a (free or non-free) Wiener process using the Brownian bridge algorithm. Prior to calling
nag_rand_bb (g05xb), the generator must be initialized by a call to
nag_rand_bb_init (g05xa).
nag_rand_bb_init (g05xa) requires you to specify a
bridge construction order. The function
nag_rand_bb_make_bridge_order (g05xe) can be used to convert a set of input times into one of several common bridge construction orders, which can then be used in the initialization call to
nag_rand_bb_init (g05xa).
nag_rand_bb_inc (g05xd) may be used to generate the
scaled increments of the sample paths of a (free or non-free) Wiener process. Prior to calling
nag_rand_bb_inc (g05xd), the generator must be initialized by a call to
nag_rand_bb_inc_init (g05xc). Note that
nag_rand_bb_inc (g05xd) generates these scaled increments directly; it is not necessary to call
nag_rand_bb (g05xb) before calling
nag_rand_bb_inc (g05xd). As before,
nag_rand_bb_make_bridge_order (g05xe) can be used to convert a set of input times into a bridge construction order which can be passed to
nag_rand_bb_inc_init (g05xc).
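For readers unfamiliar with the underlying algorithm, the following standard-library sketch shows a midpoint Brownian bridge construction of a free Wiener path (an illustration of the general technique, not of the g05x interfaces; the NAG functions additionally support arbitrary bridge construction orders and non-free processes):

```python
import math
import random

def brownian_bridge_path(T, levels, rng):
    """Free Wiener path on [0, T] at 2**levels + 1 equally spaced points,
    built by repeated midpoint bisection: the terminal value is sampled
    first, then each midpoint is drawn conditionally on the two
    bracketing values already fixed."""
    n = 2 ** levels
    t = [T * i / n for i in range(n + 1)]
    w = [0.0] * (n + 1)
    w[n] = math.sqrt(T) * rng.gauss(0.0, 1.0)     # W(T) ~ N(0, T)
    span = n
    while span > 1:
        half = span // 2
        for left in range(0, n, span):
            right, mid = left + span, left + half
            mean = 0.5 * (w[left] + w[right])     # conditional mean
            var = ((t[mid] - t[left]) * (t[right] - t[mid])
                   / (t[right] - t[left]))        # conditional variance
            w[mid] = mean + math.sqrt(var) * rng.gauss(0.0, 1.0)
        span = half
    return t, w
```

The bisection order used here is the classic Lévy construction; choosing other construction orders (the role played by nag_rand_bb_make_bridge_order (g05xe)) changes which Gaussian variates determine the coarse shape of the path, which matters when the variates come from a quasirandom sequence.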
Random Fields
Functions for simulating from either a onedimensional or a twodimensional stationary Gaussian random field are provided. These functions use the
circulant embedding method of
Dietrich and Newsam (1997) to efficiently generate from the required field. In both cases a setup function is called, which defines the domain and variogram to use, followed by the generation function. A number of preset variograms are supplied, or a user-defined function can be used.
 One-dimensional random field:
 Two-dimensional random field:
In addition to generating a random field, it is possible to use the circulant embedding method to generate realisations of fractional Brownian motion; this functionality is provided in
nag_rand_field_fracbm_generate (g05zt).
Functionality Index
circulant embedding generator,   
generate a free or non-free (pinned) Wiener process for a given set of time steps   nag_rand_bb (g05xb) 
Generating samples, matrices and tables,   
random sample from an integer vector,   
Generation of time series,   
array of variates from multivariate distributions,   
vector of variates from discrete univariate distributions,   
variate array from discrete distributions with array of parameters,   
vectors of variates from continuous univariate distributions,   
array of variates from univariate distributions,   
References
Banks J (1998) Handbook on Simulation Wiley
Boye E (Unpublished manuscript) Copulas for finance: a reading guide and some applications Financial Econometrics Research Centre, City University Business School, London
Bratley P and Fox B L (1988) Algorithm 659: implementing Sobol's quasirandom sequence generator ACM Trans. Math. Software 14(1) 88–100
Dietrich C R and Newsam G N (1997) Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix SIAM J. Sci. Comput. 18 1088–1107
Faure H and Tezuka S (2000) Another random scrambling of digital (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods Springer-Verlag, Berlin, Germany (eds K T Fang, F J Hickernell and H Niederreiter)
Fox B L (1986) Algorithm 647: implementation and relative efficiency of quasirandom sequence generators ACM Trans. Math. Software 12(4) 362–376
Glasserman P (2004) Monte Carlo Methods in Financial Engineering Springer
Haramoto H, Matsumoto M, Nishimura T, Panneton F and L'Ecuyer P (2008) Efficient jump ahead for F2-linear random number generators INFORMS J. on Computing 20(3) 385–390
Hong H S and Hickernell F J (2003) Algorithm 823: implementing scrambled digital sequences ACM Trans. Math. Software 29(2) 95–109
Joe S and Kuo F Y (2008) Constructing Sobol sequences with better twodimensional projections SIAM J. Sci. Comput. 30 2635–2654
Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley
L'Ecuyer P and Simard R (2002)
TestU01: a software library in ANSI C for empirical testing of random number generators Departement d'Informatique et de Recherche Operationnelle, Universite de Montreal
http://www.iro.umontreal.ca/~lecuyer
Maclaren N M (1989) The generation of multiple independent sequences of pseudorandom numbers Appl. Statist. 38 351–359
Matsumoto M and Nishimura T (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator ACM Transactions on Modeling and Computer Simulation 8(1) 3–30
Morgan B J T (1984) Elements of Simulation Chapman and Hall
Nelsen R B (1998) An Introduction to Copulas. Lecture Notes in Statistics 139 Springer
Owen A B (1995) Randomly permuted (t,m,s)-nets and (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Lecture Notes in Statistics 106 Springer-Verlag, New York, NY 299–317 (eds H Niederreiter and P J-S Shiue)
Revuz D and Yor M (1999) Continuous Martingales and Brownian Motion Springer
Ripley B D (1987) Stochastic Simulation Wiley
Sklar A (1973) Random variables: joint distribution functions and copulas Kybernetika 9 449–460
Wichmann B A and Hill I D (2006) Generating good pseudorandom numbers Computational Statistics and Data Analysis 51 1614–1622
Wikramaratna R S (1989) ACORN – a new method for generating sequences of uniformly distributed pseudorandom numbers Journal of Computational Physics 83 16–31
Wikramaratna R S (1992) Theoretical background for the ACORN random number generator Report AEA-APS-0244 AEA Technology, Winfrith, Dorset, UK
Wikramaratna R S (2007) The additive congruential random number generator – a special case of a multiple recursive generator Journal of Computational and Applied Mathematics
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013