Please select the report that you wish to read below. You might also be interested in our Technical Poster Repository.
- TR2/20 dco/c++: Derivative Code by Overloading in C++
- TR1/20 Comparison of Wavelets for Adaptive Mesh Refinement
- TR2/19 Adjoint Flow Solver TinyFlow using dco/c++
- TR1/19 Using the NAG Library for Python with Kdb+ and PyQ
- TR2/18 The Role of Matrix Functions
- TR1/18 Using the NAG Library with Kdb+ in a Pure Q Environment
- TR1/17 Batched Least Squares of Tall Skinny Matrices on GPU
- TR4/16 Extending Error Function and related functions to Complex Arguments
- TR3/16 A Finite Volume - Alternating Direction Implicit Approach for the Calibration of Stochastic Local Volatility models
- TR2/16 Index-tracking Portfolio Optimization Model
- TR1/16 Portfolio Credit Risk: Introduction
- TR2/15 Pricing Bermudan Swaptions on the LIBOR Market Model using the Stochastic Grid Bundling Method
- TR1/15 Portfolio Optimization using the NAG Library
- TR3/14 Adjoint Algorithmic Differentiation Tool Support for Typical Numerical Patterns in Computational Finance
- TR2/14 Adjoint Algorithmic Differentiation of a GPU Accelerated Application
- TR1/14 Generating Realisations of Stationary Gaussian Random Fields by Circulant Embedding
- TR1/13 Local Volatility FX Basket Option on CPU and GPU
- TR3/12 Variable Selection in a Cox Proportional Hazards Model
- TR2/12 A high-performance Brownian bridge for GPUs: Lessons for bandwidth bound applications
- TR1/12 Solving partial differential equations using the NAG Library
- TR3/11 Calling the NAG Pseudo and Quasi Random Number Generators From a Multi-Threaded Environment
- TR2/11 Nonlinear Optimization Made Easier: A Tutorial for using the AMPL modelling language with NAG routines
- TR1/11 Reverse Communication Interface
- TR6/10 Solving an Optimization Problem using the NAG Library for .NET from F#
- TR5/10 Exact First- and Second-Order Greeks by Algorithmic Differentiation
- TR4/10 Flexible delivery of visualization software and services
- TR3/10 Calling the NAG Fortran Library for Windows x64 DLLs from VB.NET
- TR2/10 Using the NAG Library to calculate financial option prices in Excel
- TR1/10 Using the NAG Libraries with Excel and VSTO
- TR5/09 Calling NAG Library Routines from Scilab
- TR4/09 Calling NAG Library Routines from Octave
- TR3/09 Fitting a Seasonal ARIMA Model using the NAG C Library
- TR2/09 Calling NAG Library Routines from Java
- TR1/09 A Web Services Architecture for Visualization
TRn/nn denotes the report's number in the NAG Technical Report series
TR2/20 dco/c++: Derivative Code by Overloading in C++
Klaus Leppkes & Johannes Lotz (RWTH Aachen University), Uwe Naumann (RWTH Aachen University & NAG Ltd)
Boosted by advanced type genericity and support for template metaprogramming techniques, the role of C++ as the preferred language for large-scale numerical simulation in Computational Science, Engineering and Finance has been strengthened over recent years. Algorithmic Differentiation of numerical simulations and algorithmic adjoint methods, in particular, have seen substantial growth in interest due to increased demand for gradient-based techniques in high dimensions in the context of parameter sensitivity analysis and calibration, uncertainty quantification, and nonlinear optimization. Modern software tools for (adjoint) Algorithmic Differentiation in C++ make heavy use of modern C++ features, aiming for increased computational efficiency and decreased memory requirements. The dco/c++ tool presented in this paper aims to take Algorithmic Differentiation in C++ one step further by focussing on derivatives of arbitrary order, support for shared-memory parallelism, and powerful and intuitive user interfaces in addition to competitive computational performance. Its algorithmic and software quality has made dco/c++ the tool of choice in many industrial and academic projects.
The code referenced in the report is available on the NAG GitHub.
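The overloading idea at the heart of dco/c++ can be illustrated with a deliberately tiny forward-mode sketch. This is a Python toy rather than C++, and the `Dual` class below is invented for illustration; it is not part of dco/c++.

```python
import math

class Dual:
    """Toy forward-mode AD value: carries f(x) and f'(x) together, so
    overloaded arithmetic propagates derivatives alongside values."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def sin(x):
    # Chain rule for sin: d/dx sin(x) = cos(x) * dx
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# Differentiate f(x) = x*x + sin(x) at x = 1.5 by seeding dx/dx = 1.
x = Dual(1.5, 1.0)
y = x * x + sin(x)
# y.dot holds f'(1.5) = 2*1.5 + cos(1.5), exact to machine precision.
```

Real tools like dco/c++ add reverse (adjoint) mode, higher-order derivatives and far more machinery, but the operator-overloading principle is the same.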
TR1/20 Comparison of Wavelets for Adaptive Mesh Refinement
J. Knipping & C. Vuik (Delft University of Technology) and J. du Toit (NAG Ltd, Oxford)
In Scientific Computing, considerable effort is devoted to solving time-dependent Partial Differential Equations (PDEs) as efficiently as possible. Adaptive mesh refinement (AMR) can be used to construct a sparse mesh at every time step which maintains an accurate approximation to the solution. Interpolating wavelets are often used in AMR. We present a detailed comparison of two wavelets for AMR: Donoho's interpolating wavelet and a lifted version (also called second generation wavelets) of Donoho's interpolating wavelet. The wavelets are compared on PDE problems from computational finance and computational fluid dynamics. We also examine different ways of handling the boundaries and the impact thereof.
Donoho's interpolating wavelet with lower order boundary stencil implementation appears to be the most accurate, whilst resulting in very high compression compared to the original mesh. For one data set Donoho's interpolating wavelet keeps fewer than 5% of the points whilst having an error smaller than 0.0001. In general, Donoho's interpolating wavelet produces sparse meshes while maintaining good accuracy, even for very irregular shapes. Lastly, an improvement on the inverse transform during the adaptive mesh refinement leads to promising results.
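The prediction step behind an interpolating wavelet can be sketched in a few lines. The toy below is a one-level transform with a linear predictor and a constant boundary rule, written for illustration only; it is not the implementation compared in the report.

```python
def forward_step(signal):
    """One level of a toy interpolating-wavelet transform: keep the
    even-indexed samples, and store each odd sample as its residual
    against a linear prediction from the neighbouring even samples."""
    evens = signal[0::2]
    details = []
    for k, odd in enumerate(signal[1::2]):
        left = evens[k]
        # Constant extrapolation at the right boundary (one simple choice).
        right = evens[k + 1] if k + 1 < len(evens) else evens[k]
        details.append(odd - 0.5 * (left + right))
    return evens, details

def refine_mask(details, eps):
    """AMR-style criterion: keep a point only where its detail is significant."""
    return [abs(d) > eps for d in details]

sig = [0.1 * i for i in range(9)]   # a linear signal: all details vanish
evens, details = forward_step(sig)
mask = refine_mask(details, 1e-12)  # nothing needs refining here
```

Where the solution is smooth the details are tiny and those points can be dropped, which is exactly how the sparse meshes in the report are obtained.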
TR2/19 Adjoint Flow Solver TinyFlow using dco/c++
Johannes Lotz (Aachen University) and Viktor Mosenkis (NAG)
Adjoints of large numerical solvers are increasingly used in industry and academia, e.g. in computational fluid dynamics, finance and engineering. Algorithmic differentiation is a convenient and efficient way of generating adjoint code automatically from a given primal. This document reports the application of algorithmic differentiation using dco/c++ to a demonstrator flow solver which makes use of various NAG Library routines. Since the NAG Library supports dco/c++ data types, seamless integration is possible (as shown in this report). Simple switches between algorithmic and symbolic versions of the NAG routines can be used to minimize memory usage.
TR1/19 Using the NAG Library for Python with Kdb+ and PyQ
Christopher Brandt (NAG)
This paper provides detailed instructions on how to use the NAG Library for Python with kdb+ and PyQ. PyQ is an extension to kdb+ featuring zero-copy sharing of data between Python and the q programming language. The paper provides examples that illustrate how to access routines within the NAG Library for Python using data stored in kdb+.
TR2/18 The Role of Matrix Functions
Edvin Hopkins (NAG)
Matrix functions have a variety of uses throughout science, mathematics and engineering. Here at NAG we have implemented many of the latest, state-of-the-art algorithms for computing them. In this technical report, we introduce matrix functions, and give some examples of their use. Plenty of code snippets are included, demonstrating how to use the new NAG Library for Python to compute matrix functions.
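As a flavour of what a matrix function is, the sketch below evaluates the matrix exponential by a truncated Taylor series in plain Python. This is a toy only: the NAG routines use far more robust state-of-the-art algorithms (e.g. scaling and squaring), and the helper names here are invented, not Library names.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def expm_taylor(A, terms=30):
    """Truncated Taylor series exp(A) = sum_k A^k / k!  Adequate for small,
    well-scaled matrices, but NOT the robust algorithm production code uses."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]                               # A^k / k!
    for k in range(1, terms):
        term = mat_mul(term, A)
        term = [[x / k for x in row] for row in term]
        result = mat_add(result, term)
    return result

E = expm_taylor([[1.0, 0.0], [0.0, 2.0]])
# For a diagonal matrix, exp acts on the eigenvalues, so E is diag(e, e^2).
```

The point of the report is that getting this right for general matrices is hard, which is why carefully implemented Library routines matter.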
TR1/18 Using the NAG Library with Kdb+ in a Pure Q Environment
Christopher Brandt (NAG)
In the present technical report, we demonstrate how to integrate the NAG Library with kdb+ using the Foreign Function Interface (FFI) from Kx Systems. The procedure outlined herein leverages FFI to drastically simplify the development process for users. The enclosed three examples were carefully chosen to illustrate use cases that extend to most of the 1700+ routines contained within the NAG Library.
TR1/17 Batched Least Squares of Tall Skinny Matrices on GPU
Tim Schmielau (NAG) & Jacques du Toit (NAG)
NAG has produced a highly efficient batched least squares solver for NVIDIA GPUs. The code is optimized for tall skinny matrices. These frequently arise in data fitting problems such as XVA in finance, and are typically difficult to parallelize. The code is 20x to 40x faster than a batched GPU least squares solver built from the NVIDIA libraries (cuBLAS, cuSolver). This gives a pronounced speedup for applications where the matrices are already in GPU memory.
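The underlying problem shape can be sketched in a few lines. The toy below solves one tall skinny system via the normal equations in plain Python; it is neither the batched GPU code nor the numerically preferred QR-based approach a production solver would use.

```python
def lstsq_normal(A, b):
    """Solve min ||Ax - b|| for tall skinny A via the normal equations
    (A^T A) x = A^T b. Simple, but less robust than QR factorization."""
    m, n = len(A), len(A[0])
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the small n x n system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = G[r][col] / G[col][col]
            for c in range(col, n):
                G[r][c] -= f * G[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(G[i][j] * x[j] for j in range(i + 1, n))) / G[i][i]
    return x

# A 100 x 2 "tall skinny" fit: recover slope and intercept of y = 3t + 1.
A = [[t * 0.1, 1.0] for t in range(100)]
b = [3.0 * (t * 0.1) + 1.0 for t in range(100)]
x = lstsq_normal(A, b)
```

A batched solver does thousands of such small solves at once, which is where the GPU parallelism in the report comes from.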
TR4/16 Extending Error Function and related functions to Complex Arguments
Guillermo Navas-Palencia (NAG)
In this short communication several extensions of the Faddeeva function are implemented using functions currently available in the NAG Library. These extensions allow the evaluation of error and related functions with complex arguments. Finally, two relevant applications employing these extensions are presented.
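The relationships involved can be sketched in Python. The Maclaurin series below is a simple illustration valid for moderate |z|; it is not the algorithm used in the Library, which switches methods by region of the complex plane.

```python
import cmath
import math

def erf_series(z, terms=40):
    """erf(z) = 2/sqrt(pi) * sum_n (-1)^n z^(2n+1) / (n! (2n+1)),
    valid for complex z of moderate modulus (a sketch, not robust)."""
    z = complex(z)
    total = 0j
    numer = z        # (-1)^n z^(2n+1), sign folded into the recurrence
    fact = 1.0       # n!
    for n in range(terms):
        total += numer / (fact * (2 * n + 1))
        numer *= -z * z
        fact *= n + 1
    return 2.0 / math.sqrt(math.pi) * total

def faddeeva(z):
    """Faddeeva function w(z) = exp(-z^2) * erfc(-i z), via erf."""
    return cmath.exp(-z * z) * (1.0 - erf_series(-1j * z))

# On the real axis erf_series agrees with math.erf; w(0) = 1 by definition.
```

This mirrors the idea in the report: once one function of a complex argument is available, the error-function family follows from simple identities.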
TR3/16 A Finite Volume - Alternating Direction Implicit Approach for the Calibration of Stochastic Local Volatility models
Maarten Wyns (University of Antwerp) and Jacques Du Toit (NAG)
Calibration of stochastic local volatility (SLV) models to their underlying local volatility model is often performed by numerically solving a two-dimensional non-linear forward Kolmogorov equation.
TR2/16 Index-tracking Portfolio Optimization Model
Guillermo Navas-Palencia (NAG)
In the present tutorial report we examine the theory and computational aspects behind the index-tracking portfolio optimization model. This model is compared with the Markowitz mean-variance model. The report is distributed with an example in C using the NAG C Library.
TR1/16 Portfolio Credit Risk: Introduction
Guillermo Navas-Palencia (NAG)
In the present technical report we examine the main theoretical aspects of some models used in portfolio credit risk. We introduce the well-known Vasicek model, the large homogeneous portfolio (Vasicek) distribution and their corresponding generalizations. An illustrative example considering factors following a logistic distribution is presented. Numerical experiments for several homogeneous portfolios are performed in order to compare these methods. Finally, we use the NAG Toolbox for MATLAB® to quickly implement prototypes of these models.
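The large homogeneous portfolio (Vasicek) distribution mentioned above has a closed form that is easy to prototype. A Python sketch follows; the parameter values are illustrative only.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal: cdf = Phi, inv_cdf = Phi^-1

def vasicek_loss_cdf(x, pd, rho):
    """Vasicek large-homogeneous-portfolio loss distribution:
    P(L <= x) = Phi( (sqrt(1-rho) Phi^-1(x) - Phi^-1(pd)) / sqrt(rho) ),
    where pd is the single-name default probability and rho the asset
    correlation."""
    num = (1.0 - rho) ** 0.5 * N.inv_cdf(x) - N.inv_cdf(pd)
    return N.cdf(num / rho ** 0.5)

# Illustrative parameters: 2% default probability, 15% asset correlation.
pd, rho = 0.02, 0.15
# The median loss has a closed form: F(median) = 1/2.
median = N.cdf(N.inv_cdf(pd) / (1.0 - rho) ** 0.5)
```

The generalizations in the report (e.g. logistic factors) replace the normal quantile functions above with those of other distributions.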
TR2/15 Pricing Bermudan Swaptions on the LIBOR Market Model using the Stochastic Grid Bundling Method
Stef Maree (Delft University of Technology) and Jacques du Toit (NAG)
We examine using the Stochastic Grid Bundling Method (SGBM) to price a Bermudan swaption driven by a one-factor LIBOR Market Model (LMM). Using a well-known approximation formula from the finance literature, we implement SGBM with one basis function and show that it is around six times faster than the equivalent Longstaff–Schwartz method. The two methods agree in price to one basis point, and the SGBM path estimator gives better (higher) prices than the Longstaff–Schwartz prices. A closer examination shows that inaccuracies in the approximation formula introduce a small bias into the SGBM direct estimator.
TR1/15 Portfolio Optimization using the NAG Library
John Morrissey (NAG) and Brian Spector (NAG)
An introduction to the notation and techniques used in portfolio optimization. We discuss some sample problems and offer guidance on choosing an appropriate optimizer. Finally, there is a section on handling transaction costs in portfolio optimization.
TR3/14 Adjoint Algorithmic Differentiation Tool Support for Typical Numerical Patterns in Computational Finance
Uwe Naumann (Aachen University) and Jacques du Toit (NAG)
We demonstrate the flexibility and ease of use of overloading-based C++ algorithmic differentiation (AD) tools on numerical patterns (kernels) arising in computational finance. While adjoint methods and AD have been known in the finance literature for some time, there are few tools capable of handling and integrating with the C++ codes found in production. Adjoint methods are also known to be very powerful but to have potentially infeasible memory requirements. We present several techniques for dealing with this problem and demonstrate them on numerical kernels which occur frequently in finance. We build the discussion around our own AD tool dco/c++, which is designed to handle arbitrary C++ codes and to be highly flexible; however, the concepts sketched here can certainly be transferred to other AD solutions, including in-house tools. An archive of the source code for the numerical kernels, as well as all the AD solutions discussed, can be downloaded from our website. This includes documentation for the code and dco/c++. Trial licences for dco/c++ are available from NAG.
TR2/14 Adjoint Algorithmic Differentiation of a GPU Accelerated Application
Jacques du Toit (NAG), Johannes Lotz (Aachen University) and Uwe Naumann (Aachen University)
We consider a GPU accelerated program using Monte Carlo simulation to price a basket call option on 10 FX rates driven by a 10 factor local volatility model. We develop an adjoint version of this program using algorithmic differentiation. The code uses mixed precision. For our test problem of 10,000 sample paths with 360 Euler time steps, we obtain a runtime of 522ms to compute the gradient of the price with respect to the 438 input parameters, the vast majority of which are the market observed implied volatilities (the equivalent single-threaded tangent-linear code on a CPU takes 2 hours).
TR1/14 Generating Realisations of Stationary Gaussian Random Fields by Circulant Embedding
Catherine E. Powell, School of Mathematics, University of Manchester, UK
Random fields are families of random variables, indexed by a d-dimensional parameter x with d ≥ 1. They are important in many applications and are used, for example, to model properties of biological tissue, velocity fields in turbulent flows and permeability coefficients of rocks. Mark 24 of the NAG Fortran Library includes new routines for generating realisations of stationary Gaussian random fields using the method of circulant embedding. This short note illustrates the main ideas behind circulant embedding and how to use the routines g05zr and g05zs in the NAG Toolbox for MATLAB. The routines g05zm, g05zn and g05zp can also be used to generate realisations of stationary Gaussian stochastic processes (the d = 1 case).
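The key idea, that the eigenvalues of a circulant matrix are the DFT of its first row (so a valid embedding needs those eigenvalues to be nonnegative), can be checked in a few lines of Python. This is a 1-d toy with an exponential covariance; the grid size and correlation length are illustrative.

```python
import cmath
import math

def circulant_eigs(first_row):
    """Eigenvalues of a circulant matrix are the DFT of its first row
    (a naive O(m^2) DFT here; real codes use the FFT)."""
    m = len(first_row)
    return [sum(first_row[j] * cmath.exp(-2j * math.pi * k * j / m)
                for j in range(m)) for k in range(m)]

# Exponential covariance c(h) = exp(-|h| / ell) sampled at n grid points,
# mirrored into a circulant first row of length 2(n - 1): the "embedding".
n, ell = 8, 0.5
cov = [math.exp(-abs(i) / (ell * (n - 1))) for i in range(n)]
first_row = cov + cov[-2:0:-1]
eigs = circulant_eigs(first_row)
# If all eigenvalues are (numerically) nonnegative, the embedding is valid:
# sqrt(eigs) can then be used to colour white noise via the FFT and produce
# exact realisations of the field, which is what the g05z routines automate.
```

The trace identity (the eigenvalues sum to m times the variance) gives a quick sanity check on the transform.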
TR1/13 Local Volatility FX Basket Option on CPU and GPU
Jacques du Toit (NAG) and Isabel Ehrlich (Imperial College, London)
We present high performance implementations on a CPU and an NVIDIA GPU of a Monte Carlo pricer for a simple FX basket option driven by a multi-factor local volatility model. Basket options such as these are typically considered too complicated to tackle analytically in a market-consistent manner, and are too high dimensional for PDE methods. Consequently these products are valued using Monte Carlo methods. This results in a compute intensive, massively parallel problem which is ideally suited to modern CPUs and GPUs. We develop fully parallelized, fully vectorized code and study the effects of mixed precision on accuracy and performance. We also investigate using texture memory on the GPU.
TR3/12 Variable Selection in a Cox Proportional Hazards Model
In this article, and the associated example programs, we show how to use existing NAG Library routines to perform automatic variable selection for a Cox proportional hazards model, a type of model commonly used in the analysis of censored data. The three approaches described are forward, backward and stepwise selection.
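Forward selection itself is a simple greedy loop. The sketch below uses an invented additive score in place of a fitted Cox model's partial likelihood, purely to show the control flow; in practice `score` would refit the model for each candidate subset.

```python
def forward_selection(candidates, score, max_vars=None):
    """Greedy forward selection: repeatedly add the variable that most
    improves the model score; stop when no addition helps. For a Cox model
    the score would be a (penalised) partial log-likelihood."""
    selected = []
    best = score(selected)
    while candidates and (max_vars is None or len(selected) < max_vars):
        gains = [(score(selected + [v]), v) for v in candidates]
        top_score, top_var = max(gains)
        if top_score <= best:
            break                    # no candidate improves the model
        selected.append(top_var)
        candidates = [v for v in candidates if v != top_var]
        best = top_score
    return selected

# Toy score: reward the informative variables, penalise model size.
truth = {"age": 2.0, "dose": 1.0, "noise1": 0.0, "noise2": 0.0}
score = lambda subset: sum(truth[v] for v in subset) - 0.1 * len(subset)
chosen = forward_selection(list(truth), score)   # picks "age" then "dose"
```

Backward and stepwise selection follow the same pattern with removals (or a mix of additions and removals) in place of pure additions.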
TR2/12 A high-performance Brownian bridge for GPUs: Lessons for bandwidth bound applications
Jacques Du Toit (NAG)
We present a very flexible Brownian bridge generator together with a GPU implementation which achieves close to peak performance on an NVIDIA C2050. The performance is compared with an OpenMP implementation run on several high performance x86-64 systems. The GPU shows a performance gain of at least 10x. Full comparative results are given in Section 8: in particular, we observe that the Brownian bridge algorithm does not scale well on multicore CPUs since it is memory bandwidth bound. The evolution of the GPU algorithm is discussed. Achieving peak performance required challenging the "conventional wisdom" regarding GPU programming, in particular the importance of occupancy, the speed of shared memory and the impact of branching.
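The bridge construction itself is a recursive midpoint bisection. A serial Python sketch follows (not the GPU code from the report); the function names are illustrative.

```python
import math
import random

def brownian_bridge(z, t0, t1, w0, w1, levels):
    """Fill a Brownian path between (t0, w0) and (t1, w1) by recursive
    midpoint bisection: conditioned on the endpoints, the midpoint is
    Gaussian with mean the endpoint average and variance (t1 - t0)/4.
    `z` is an iterator supplying N(0,1) draws."""
    path = {t0: w0, t1: w1}

    def bisect(a, b, depth):
        if depth == 0:
            return
        tm = 0.5 * (a + b)
        mean = 0.5 * (path[a] + path[b])
        std = math.sqrt((b - a) / 4.0)
        path[tm] = mean + std * next(z)
        bisect(a, tm, depth - 1)
        bisect(tm, b, depth - 1)

    bisect(t0, t1, levels)
    return [path[t] for t in sorted(path)]

rng = random.Random(42)
draws = iter(lambda: rng.gauss(0.0, 1.0), None)
path = brownian_bridge(draws, 0.0, 1.0, 0.0, 0.3, levels=3)  # 2^3 + 1 points
```

Feeding in all-zero draws recovers plain linear interpolation between the endpoints, which is a handy correctness check; the memory-access pattern of exactly this bisection is what makes the algorithm bandwidth bound.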
TR1/12 Solving partial differential equations using the NAG Library
Jeremy Walton (NAG)
We describe the characteristics of partial differential equations (PDEs), including their uses, classification, subsidiary conditions and some of the ways in which they may be solved. In this context, we demonstrate how routines from the NAG Library can be used in their numerical solution. These routines come not only from the Library’s PDE chapter, but also from the chapters which deal with mesh generation and the solution of large linear systems. The combination of mesh generators and large linear solvers is applicable in the implementation of the so-called finite element method, which may be used in cases where the complexity of the geometry of the domain over which the PDE is to be solved prevents the application of the comparatively simple finite differencing method (as used, for example, in the Library’s PDE chapter). We illustrate the use of the NAG routines using a variety of example problems; the solutions are generated using the NAG Toolbox for MATLAB ® and plotted using tools in that environment.
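For the comparatively simple finite differencing method mentioned above, a minimal explicit scheme for the 1-d heat equation looks like this. It is a plain Python toy for illustration, not a NAG routine.

```python
import math

def heat_explicit(u, alpha, dx, dt, steps):
    """Explicit finite differences for u_t = alpha * u_xx with u = 0 at
    both ends; stable provided r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    for _ in range(steps):
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, len(u) - 1)] + [0.0]
    return u

# Initial condition: one hump of a sine wave on [0, 1].
n = 21
dx = 1.0 / (n - 1)
u0 = [math.sin(math.pi * i * dx) for i in range(n)]
u = heat_explicit(u0, alpha=1.0, dx=dx, dt=0.2 * dx ** 2, steps=200)
# The exact solution decays as exp(-pi^2 * t) and stays symmetric.
```

The Library routines handle stiffer problems, implicit time stepping and irregular geometries (via the mesh-generation and sparse-solver chapters) that this explicit toy cannot.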
TR3/11 Calling the NAG Pseudo and Quasi Random Number Generators From a Multi-Threaded Environment
Martyn Byng (NAG)
In this article, and the associated example programs, we will show how to call the NAG random number generators within a multi-threaded environment. The examples are written using OpenMP; however, the basic structure of the NAG calls is the same irrespective of the threading mechanism used. Some OpenMP commands and pragmas are briefly described in this document, and additional information is available on the OpenMP website. Alternatively, NAG offers training courses in OpenMP, details of which can be obtained from firstname.lastname@example.org.
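The basic pattern, one private generator state per thread, can be sketched in Python with `threading`. NAG's generators additionally offer skip-ahead and leapfrog schemes for guaranteed independent streams, which this toy (using distinct seeds only) does not model.

```python
import random
import threading

def worker(seed, out, idx, n):
    """Each thread owns a private random.Random instance, the analogue of
    giving each thread its own generator state, so nothing is shared."""
    rng = random.Random(seed)
    out[idx] = [rng.random() for _ in range(n)]

results = [None] * 4
threads = [threading.Thread(target=worker, args=(1000 + i, results, i, 5))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Distinct seeds give distinct streams; a fixed seed makes each reproducible.
```

The crucial point carries over directly to the NAG calls: the generator state must never be shared between threads without protection.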
TR2/11 Nonlinear Optimization Made Easier: A Tutorial for using the AMPL modelling language with NAG routines
Jan Fiala (NAG)
Optimization, or Operational Research in general, nowadays plays an important role in our lives. Whether you are a respected finance house or a student of mathematics, you have probably used some sort of optimization routine. The field itself has changed rapidly since linear programming was introduced in the mid 1940s. More powerful computers have allowed us to consider much more realistic and complex models using sophisticated algorithms. Whereas the input for linear programming problems is relatively simple, specifying the problem is a much more delicate task in the case of general nonlinear programming. One way to tackle it is to introduce a specialised language for the problem description. In this tutorial we will focus on a particular one called AMPL, which we have equipped with two of our NAG solvers, namely E04UFF and E04UGF.
TR1/11 Reverse Communication Interface
Marcin Krzysztofik (NAG)
Reverse communication is a means of avoiding user-supplied procedure arguments in a routine's parameter list. Most numerical routines use the alternative, forward (or direct) communication approach: they are called only once to compute results, and the problem is completely specified by including user-provided procedures in the argument list.
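A reverse-communication loop looks like this in outline. The `BisectRC` class below is an invented Python toy, not a NAG interface, but the call pattern is the same: the routine returns a request for a function value and is called back with the result, so no function pointer ever enters the argument list.

```python
class BisectRC:
    """Reverse-communication root finder (toy). Instead of taking f as a
    callback, it exposes self.x, the point where it needs f evaluated,
    and is repeatedly called back via step(f(self.x))."""
    def __init__(self, lo, hi, tol=1e-10):
        self.lo, self.hi, self.tol = lo, hi, tol
        self.flo = None
        self.x = lo                      # first request: evaluate f at lo

    def step(self, fx):
        """Feed in f(self.x); returns True while more evaluations are needed."""
        if self.flo is None:
            self.flo = fx                # stored f(lo); next request: midpoint
        else:
            mid = self.x
            if (fx < 0) == (self.flo < 0):
                self.lo, self.flo = mid, fx
            else:
                self.hi = mid
        self.x = 0.5 * (self.lo + self.hi)
        return (self.hi - self.lo) > self.tol

# Driver: the "forward" work, evaluating f, happens in the caller's loop,
# so f can live anywhere: a spreadsheet, another language, a simulation.
solver = BisectRC(0.0, 2.0)
while solver.step(solver.x ** 2 - 2.0):  # f(x) = x^2 - 2; root is sqrt(2)
    pass
root = 0.5 * (solver.lo + solver.hi)
```

This is exactly why reverse communication is useful from environments where passing a procedure argument is awkward or impossible.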
TR6/10 Solving an Optimization Problem using the NAG Library for .NET from F#
Sorin Serban (NAG)
NAG has just released its latest numerical library: the NAG Library for .NET. This is the first release of the library and includes over 400 methods for key mathematical and statistical areas, including Wavelet Transforms, Integration, Interpolation and Approximation, Random Number Generators, Time Series Analysis, and Optimization. The Optimization chapter contains methods for solving LP, QP, LS and NLP problems, with or without constraints. A global optimizer is also included, solving problems without constraints but with bounds on the variables.
TR5/10 Exact First- and Second-Order Greeks by Algorithmic Differentiation
The Numerical Algorithms Group (NAG) works very closely with Uwe Naumann to help users take advantage of Algorithmic Differentiation methods.
Algorithmic (also known as Automatic) Differentiation (AD) is a method for computing sensitivities of the outputs of numerical programs with respect to their inputs, both accurately (to machine precision) and efficiently. The two basic modes of AD, forward and reverse, and combinations thereof yield products of a vector with the Jacobian, its transpose, or the Hessian, respectively.
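Reverse mode can be sketched with a tiny tape. The Python toy below records each operation's parents and local partial derivatives during evaluation, then accumulates the gradient in a backward sweep. It propagates contributions per path, which real tools avoid for efficiency; the class names are invented for illustration.

```python
import math

class Var:
    """Toy reverse-mode AD node: each operation records its parent nodes
    together with the local partial derivative with respect to each."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0

    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.val * other.val,
                   [(self, other.val), (other, self.val)])

def sin(v):
    return Var(math.sin(v.val), [(v, math.cos(v.val))])

def backward(out):
    """Accumulate d(out)/d(node) by summing, over every path from the
    output to a node, the product of local partials along the path."""
    out.grad = 1.0
    stack = [(out, 1.0)]
    while stack:
        node, delta = stack.pop()
        for parent, local in node.parents:
            parent.grad += local * delta
            stack.append((parent, local * delta))

# One evaluation of f(x, y) = x*y + sin(x) yields the whole gradient:
x, y = Var(1.5), Var(2.0)
f = x * y + sin(x)
backward(f)
# x.grad = y + cos(x); y.grad = x -- at the cost of one backward sweep.
```

The key property of reverse mode is visible here: the full gradient costs one forward and one backward pass, independent of the number of inputs.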
TR4/10 Flexible delivery of visualization software and services
Jason Wood, Jungwook Seo, David Duke and Ken Brodlie (University of Leeds) and Jeremy Walton (NAG)
An important issue in the design of visualization systems is allowing the flexibility to provide a range of interfaces to a single body of algorithmic software. In this paper we describe how the ADVISE architecture provides exactly this flexibility. The architecture is cleanly separated into three layers (user interface, web service middleware and visualization components), which gives us the flexibility to provide a range of different delivery options, all making use of the same basic set of visualization components. These delivery options comprise a range of user interfaces (visual pipeline editor, tailored application, web page), coupled with an installation choice between a stand-alone desktop application and a distributed client-server application. This work was carried out within the ADVISE project.
TR3/10 Calling the NAG Fortran Library for Windows x64 DLLs from VB.NET
Ludovic Henno, NAG
Users who have Microsoft Visual Studio 2005 or 2008 may use the DLLs provided with the NAG Fortran Library for Windows XP/Vista/7 x64 (FLW6I22DC_nag.dll and FLW6I22DC_mkl.dll) in conjunction with VB.NET.
In this report we present the rules one has to follow to use the NAG routines from VB.NET, and then illustrate those rules with examples.
TR2/10 Using the NAG Library to calculate financial option prices in Excel
Marcin Krzysztofik, Jeremy Walton, NAG
In finance, an option is a contract that conveys the right, but not the obligation, to buy or sell a specific asset. Options are widely traded on financial markets, and so some method of determining their value (or price) is required. Several option pricing models have been developed, which have then been implemented using a range of mathematical methods; some of these implementations have been made available in the latest release of the NAG Library. We have used these routines to calculate option prices in Microsoft Excel, and present some examples (which may be downloaded from the NAG website) that illustrate the way in which NAG routines can be called from within an Excel spreadsheet.
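As a flavour of the computation involved, here is the Black-Scholes formula for a European call in Python. The NAG option-pricing routines cover this and many further models; the function below is a self-contained illustration, not a Library call.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call: spot s, strike k, time to
    expiry t (years), risk-free rate r, volatility sigma."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * Phi(d1) - k * math.exp(-r * t) * Phi(d2)

# Textbook example: at-the-money one-year call.
price = bs_call(s=100.0, k=100.0, t=1.0, r=0.05, sigma=0.2)
```

Wrapping such a function (or the corresponding Library routine) as a worksheet function is exactly what the Excel examples in the report demonstrate.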
TR1/10 Using the NAG Libraries with Excel and VSTO
Sorin Serban, Shah Datardina, NAG
The following example, which includes interpolating and approximating data points, uses the Excel Workbook template to call the NAG Fortran Library and NAG C Library (soon also the NAG Library for .NET) from inside an Excel workbook. There are other ways to integrate Excel with an external library, such as creating Add-Ins. The difference is that the workbook model described here is called a "document-level project", while an add-in is an "application-level project". At the document level, all customization is unique to one or more sheets contained in a single workbook; at the application level, custom Excel functions are available to all workbooks.
TR5/09 Calling NAG Library Routines from Scilab
Nathaniel Fenton, NAG Ltd, Oxford
This report gives detailed instructions on how to call routines in the NAG C and Fortran Libraries from the Scilab programming environment.
TR4/09 Calling NAG Library Routines from Octave
Anna Kwiczala, NAG Ltd, Oxford
This report gives detailed instructions on how to call routines in the NAG C and Fortran Libraries from the Octave programming environment.
TR3/09 Fitting a Seasonal ARIMA Model using the NAG C Library
Martyn Byng, NAG Ltd, Oxford
This article gives a brief description of how to fit a seasonal ARIMA (autoregressive integrated moving average) model using the NAG C Library routine g13bec, and how to forecast from such a model using the NAG C Library routine g13bjc. The article should be read in conjunction with the documentation for these two routines. A full set of example source code, data and expected results is available in the accompanying materials linked below.
TR2/09 Calling NAG Library Routines from Java
Mick Pont, Anna Kwiczala, NAG Ltd, Oxford
This report gives detailed instructions on how to call routines in the NAG C and Fortran Libraries from the Java programming language. We show examples using Java running on both UNIX and Microsoft Windows platforms. It has been extended to show how to call option pricing routines and global optimization routines available in the latest versions of the NAG C and Fortran Libraries.
This report supersedes NAG Technical Report TR1/04.
TR1/09 A Web Services Architecture for Visualization
Jason Wood, Ken Brodlie, Jungwook Seo, David Duke (University of Leeds) and Jeremy Walton (NAG Ltd, Oxford)
Service-oriented architectures are increasingly being used in the creation of large distributed applications. This paper examines the provision of visualization as a service which can be made available for application designers to combine with other services. It describes a three-layer architecture which exploits the strengths of web service technologies in providing standardized access, and which also enables the efficient and flexible construction of visualization applications. A realization of the architecture is illustrated by re-visiting an early example of web-based visualization. This work was carried out within the ADVISE project.