NAGnews 132

In this issue:

'dolfin-adjoint' Partial Differential Equations Software Wins the 2015 Wilkinson Prize

NAG, Argonne National Laboratory and the National Physical Laboratory are pleased to announce the winners of the co-sponsored 2015 Wilkinson Prize: P.E. Farrell (University of Oxford), S.W. Funke (Simula Research Laboratory), D.A. Ham (Imperial College) and M.E. Rognes (Simula Research Laboratory), for "dolfin-adjoint", a package that automatically derives and solves adjoint and tangent linear equations from high-level mathematical specifications of finite element discretisations of partial differential equations. The award will be presented at the International Congress on Industrial and Applied Mathematics (ICIAM 2015) in Beijing.

The need for adjoints of partial differential equations (PDEs) pervades science and engineering. Adjoints enable the study of the sensitivity and stability of physical systems, and the optimization of designs subject to constraints. While deriving the adjoint model associated with a linear stationary forward model is straightforward, the derivation and implementation of adjoint models for nonlinear or time-dependent models is notoriously difficult. dolfin-adjoint solves this problem by automatically analyzing and exploiting the high-level mathematical structure inherent in finite element methods. It is implemented on top of the FEniCS Project for finite element discretisations.
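The cost advantage described above can be seen even in a toy setting. The sketch below (a minimal NumPy illustration, not dolfin-adjoint itself, and a linear stationary model rather than a PDE) solves a forward model A u = b(p) for a functional J(u) = c . u; a single adjoint solve with A-transpose then yields the sensitivity dJ/dp, which we check against a finite-difference approximation.

```python
# Toy illustration of the adjoint idea for a linear, stationary forward
# model: A u = b(p), functional J(u) = c . u.  One adjoint solve
# A^T lam = c gives dJ/dp = lam . db/dp for any number of parameters,
# at the cost of a single extra linear solve.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # forward operator
c = np.array([1.0, 2.0])                 # J(u) = c . u

def b(p):
    """Parameter-dependent right-hand side."""
    return np.array([p, 2.0 * p])

db_dp = np.array([1.0, 2.0])             # derivative of b w.r.t. p

p = 1.5
u = np.linalg.solve(A, b(p))             # forward solve
lam = np.linalg.solve(A.T, c)            # adjoint solve
grad_adjoint = lam @ db_dp               # dJ/dp via the adjoint

# Sanity check against central finite differences.
h = 1e-6
J = lambda p: c @ np.linalg.solve(A, b(p))
grad_fd = (J(p + h) - J(p - h)) / (2 * h)
print(grad_adjoint, grad_fd)
```

For nonlinear or time-dependent models this derivation is far harder to do by hand, which is exactly the gap dolfin-adjoint automates.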

The Wilkinson Prize is awarded every four years to the entry that best addresses all phases of the preparation of numerical software, and is sponsored by Argonne National Laboratory (US), the Numerical Algorithms Group (UK), and the National Physical Laboratory (UK).

Read the full story here.

Mark 25: New Library Routines Focus - LARS/LASSO/Forward Stagewise Regression

Today we focus on the new LARS/LASSO/forward stagewise regression functionality introduced at Mark 25 of the NAG Library.

Least-angle regression (LARS) is a regression technique for high-dimensional data. It is related to both the Least Absolute Shrinkage and Selection Operator (LASSO) and forward stagewise regression.

At Mark 25 of the NAG Library, new routines have been introduced into the Correlation and Regression Analysis Chapter (G02) that fit a LARS, LASSO or forward stagewise model to data. In addition, a utility routine has been included that returns the parameter estimates at any arbitrary point along the solution path, a requirement for, amongst other things, cross-validation.
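To illustrate the family of techniques these routines implement (this is a plain NumPy sketch of incremental forward stagewise regression, not the NAG Chapter G02 API): at each step the predictor most correlated with the current residual has its coefficient nudged by a small amount, tracing out a solution path closely related to the LARS/LASSO path.

```python
# Minimal forward stagewise regression sketch (illustrative only; the
# NAG Library Chapter G02 routines implement LARS/LASSO/stagewise
# efficiently and return the full solution path).
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """At each step, nudge the coefficient of the predictor most
    correlated with the current residual by a small amount eps."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.copy()
    for _ in range(n_steps):
        corr = X.T @ residual            # correlations with residual
        j = np.argmax(np.abs(corr))      # most correlated predictor
        delta = eps * np.sign(corr[j])   # small step in that direction
        beta[j] += delta
        residual -= delta * X[:, j]
    return beta

# Synthetic high-dimensional-style data with a sparse true coefficient
# vector: only predictors 0, 2 and 4 matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 1.5])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(forward_stagewise(X, y))
```

As eps shrinks, the recovered path approaches the LASSO path; evaluating estimates part-way along the path is what makes the cross-validation use case mentioned above practical.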

Learn more about the new functionality here.

'Pricing Bermudan Swaptions on the LIBOR Market Model using the Stochastic Grid Bundling Method' - latest NAG Student Prize Winner

We are delighted to announce the latest NAG Student 'Direct Award' Prize winner: Stef Maree, an MSc student at Delft University of Technology, for his work on 'Pricing Bermudan Swaptions on the LIBOR Market Model using the Stochastic Grid Bundling Method'.

From the abstract: "We examine using the Stochastic Grid Bundling Method (SGBM) to price a Bermudan swaption driven by a one-factor LIBOR Market Model (LMM). Using a well-known approximation formula from the finance literature, we implement SGBM with one basis function and show that it is around six times faster than the equivalent Longstaff-Schwartz method. The two methods agree in price to one basis point, and the SGBM path estimator gives better (higher) prices than the Longstaff-Schwartz estimator. A closer examination shows that inaccuracies in the approximation formula introduce a small bias into the SGBM direct estimator."

Read the paper here.

NAG Linear Regression on Apache Spark

NAG's Brian Spector recently gave a talk at the Chicago Apache Spark Users Group Meetup, where he discussed the challenges and successes of using the NAG Library distributed across Apache Spark worker nodes. You can view Brian's slides here. Following the Meetup, Brian blogged about 'The Linear Regression Problem', testing the scalability and performance of the NAG Library for Java when solving a large-scale multi-linear regression problem on Spark. The example data ranges from 2 gigabytes up to 64 gigabytes and takes the form:

label x1 x2 x3 x4
68050.9 42.3 12.1 2.32 1
87565 47.3 19.5 7.19 2
65151.50 47.3 7.4 1.68 0
78564.1 53.2 11.4 1.74 1
56556.7 34.9 10.7 6.84 0

We solve this problem using the normal equations. This approach lets us map the computation of the sum-of-squares (cross-product) matrices across the worker nodes; Spark's reduce phase then sums pairs of these partial matrices. In the final step, a NAG linear regression routine is called on the master node to compute the regression coefficients.
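The map/reduce structure just described can be sketched without Spark or the NAG routine: below, plain NumPy chunks stand in for worker partitions, `map` and `reduce` stand in for Spark's phases, and `np.linalg.solve` stands in for the final NAG regression call on the master node.

```python
# Sketch of the normal-equations map/reduce approach described above,
# using NumPy in place of Spark and the NAG routine.  Each "partition"
# contributes X'X and X'y; the reduce step sums pairs of partials.
import numpy as np
from functools import reduce

def map_chunk(chunk):
    """Map step: per-partition cross-product matrices X'X and X'y."""
    X, y = chunk
    return X.T @ X, X.T @ y

def reduce_pair(a, b):
    """Reduce step: sum two partial cross-product results."""
    return a[0] + b[0], a[1] + b[1]

# Synthetic regression data: intercept column plus three predictors.
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(1000), rng.standard_normal((1000, 3))])
beta_true = np.array([2.0, 1.0, -3.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(1000)

# Split into four "partitions", map, then reduce.
chunks = [(X[i:i + 250], y[i:i + 250]) for i in range(0, 1000, 250)]
XtX, Xty = reduce(reduce_pair, map(map_chunk, chunks))

# Final step on the "master": solve the normal equations X'X b = X'y.
beta = np.linalg.solve(XtX, Xty)
print(beta)
```

The key point is that X'X is only (p+1)-by-(p+1) however large the data, so only these small matrices ever travel between nodes.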

Read Brian's post here.

Best of the Blog

Helping primary school children achieve the best in their SATs

Year 6 is a rather important year in British schools. Children following the National Curriculum in state-funded schools are subject to SATs (standard assessment tests) before they move on to secondary school, and the results of these tests are used to create the dreaded league tables which are supposed to help parents make an informed choice about which school to send their children to. Read more.

Training Courses & Events

NAG will be at the following exhibitions and conferences over the next few months.

NAGnews - Past Issues

We provide an online archive of past issues of NAGnews. For editions prior to 2010, please contact us.