- Rewarding the best - Student Prize Winners 2012
- New White Paper: Variable Selection in a Cox Proportional Hazards Model
- NAG Quant Day: New York, 26 July
- New Case Study: Performance of Materials Science Codes Enhanced
- Would you benefit from dedicated training at YOUR place?
- Events and Training
- Recent Blog Posts
Rewarding the best - Student Prize Winners 2012
One of our highlights each year is presenting the NAG Student Awards, which were established to reward outstanding achievement on an MSc course. At the recent Society for Industrial and Applied Mathematics (SIAM) Student Chapter Conference, held at the University of Manchester, we presented awards to two exceptional students. Craig Lucas, NAG Senior Technical Consultant, was delighted to present awards to Randall Errol Martyr for his MSc in Mathematical Finance and to Robert Andrew for his MSc in Mathematics & Computational Science.
NAG Student Award Winner Robert Andrew (left) with Craig Lucas, NAG (image © Nick Higham)
NAG Student Prize Winner Randall Errol Martyr with Craig Lucas (image © Nick Higham)
In addition to the NAG Student Awards, we also give 'direct' prizes during the year. The most recent winner was Stan Stilger of Manchester Business School for his paper 'The Use of Importance Sampling to Speed Up Stochastic Volatility Simulations'. Stan won a pass to the prestigious finance event, Quant Congress.
Congratulations and well done to you all!
New White Paper: Variable Selection in a Cox Proportional Hazards Model
The Cox proportional hazards model relates the time to an event, usually death or failure, to a number of explanatory variables known as covariates. Some of the observations may be right-censored; that is, the exact time to failure is not known, only that it is greater than a known time.
The NAG routine for fitting a Cox proportional hazards model is G12BAF if you are using the NAG Fortran Library and g12bac if you are using the NAG C Library. In this new paper we show how to use these routines to perform the three main approaches to automatic variable selection, that is, choosing which explanatory variables to include in the model. The three approaches described are forward selection, backward selection and stepwise selection.
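To give a feel for the first of these approaches, here is a minimal, language-agnostic sketch of forward selection in Python. The `fit_score` function is a hypothetical stand-in for fitting a Cox model (for example via G12BAF or g12bac) and returning a goodness-of-fit score such as the log-likelihood; the toy weights at the end are illustrative only.

```python
# A minimal sketch of forward selection. `fit_score` is a hypothetical
# stand-in for fitting a Cox model (e.g. via G12BAF / g12bac) and
# returning a goodness-of-fit score such as the log-likelihood.

def forward_select(candidates, fit_score, threshold=0.0):
    """Greedily add the covariate that most improves the score."""
    selected = []
    best = fit_score(selected)            # score of the null model
    remaining = list(candidates)
    while remaining:
        scored = [(fit_score(selected + [c]), c) for c in remaining]
        new_best, choice = max(scored)
        if new_best - best <= threshold:  # no worthwhile improvement
            break
        selected.append(choice)
        remaining.remove(choice)
        best = new_best
    return selected

# Toy score: covariates 'age' and 'dose' are informative, 'noise' is not.
weights = {"age": 3.0, "dose": 2.0, "noise": 0.0}
score = lambda subset: sum(weights[c] for c in subset)
print(forward_select(["age", "dose", "noise"], score))  # ['age', 'dose']
```

Backward selection runs the same greedy loop in reverse, starting from the full model and dropping the least useful covariate at each step; stepwise selection allows moves in both directions.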
NAG Quant Event - New York
On 26th July NAG is holding its annual Quant Event in New York (the event will also be webcast live). This is a free evening event aimed at finance industry professionals, but open to anyone from business or academia with an interest in financial computation. The keynote speaker is Professor Uwe Naumann. Here's his talk abstract:
"Fast Greeks through Adjoint Algorithmic Differentiation - and further speedup through Mathematical Structural Insight"
Derivatives of various objectives with respect to potentially large numbers of free parameters are crucial ingredients of many modern numerical algorithms.
Parameter calibration methods based on implementations of highly sophisticated mathematical models as computer programs are of fundamental interest in Computational Finance. Algorithmic Differentiation (AD) transforms the given computer programs into code for the computation of first (gradients, Jacobians) as well as second (Hessians) and higher derivatives.
Adjoint AD allows for gradients to be computed with a computational cost that is independent of their sizes. For example, let the evaluation of a given function of one hundred parameters take one minute on your favourite computer. A sequential first-order finite difference approximation of the one hundred gradient entries takes about one hundred minutes. Adjoint AD delivers the same gradient with machine accuracy within less than ten minutes. Similar complexity results hold for second and higher derivatives.
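The idea behind the adjoint cost result can be made concrete with a toy reverse-mode AD implementation. The sketch below is purely illustrative and is not NAG's or Professor Naumann's tooling: each variable records how it was built, and a single backward sweep delivers the partial derivative with respect to every input at once, rather than one input per evaluation as with finite differences.

```python
# Toy reverse-mode (adjoint) AD: each Var records how it was built, and
# one backward sweep propagates adjoints to every input simultaneously.
# Illustration of the idea only, not a production AD tool.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # (parent, local partial) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, partial in self.parents:
            parent.backward(seed * partial)

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(f.value, x.grad, y.grad)   # 15.0 5.0 3.0
```

Both partials come out of the one `backward()` call; a forward finite-difference scheme would have needed one extra function evaluation per input, which is precisely the gap the abstract's hundred-parameter example describes.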
In this talk we review the fundamental ideas behind (adjoint) AD and we present software tools that support the semi-automatic generation of derivative code with special focus on C/C++. Further gains in robustness and computational complexity result from the exploitation of additional mathematical and structural insight. In particular, we discuss AD of numerical methods that are likely to be embedded into the given simulation code and concurrency in the context of adjoint AD.
Event details will go online soon, but if you'd like to pre-register for the event (or webcast) simply email Rachel Foot.
New Case Study: Performance of Materials Science Codes Enhanced
A high performance computing (HPC) developer from the University of York, UK, working under NAG's Distributed Computational Science and Engineering (dCSE) support service for HECToR, has implemented performance improvements in both time taken and memory used for the geometry optimization part of the materials science codes CASTEP and ONETEP. The improvements enable larger, more complex systems to be studied by scientists within their existing budgets.
CASTEP and ONETEP are codes which use Density Functional Theory to calculate the electronic properties of materials from first principles. They are amongst the most heavily-used applications on HECToR. A significant amount of time in these applications is devoted to geometry optimization, that is, the minimization of the system enthalpy with respect to atomic positions. Originally, this was done using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) non-linear optimization algorithm, whose memory requirement scales as O(N^2) in N, the number of atoms in the study. The goal of this distributed CSE project was to improve the scalability of both codes by switching to a limited-memory version of the BFGS algorithm.
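The scaling difference is easy to see in code. Full BFGS maintains a dense N x N inverse-Hessian approximation, whereas limited-memory BFGS keeps only the last m (step, gradient-change) pairs and reconstructs Hessian-vector products on the fly with the standard two-loop recursion. The pure-Python sketch below illustrates that recursion on a small quadratic; it is not the CASTEP/ONETEP implementation, and it uses a fixed step length rather than a proper line search.

```python
# Limited-memory BFGS sketch: only m (s, y) pairs are stored, so memory
# grows as O(m*N) instead of the O(N^2) dense matrix of full BFGS.
# Illustration only, not the CASTEP/ONETEP implementation.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def two_loop(grad, s_list, y_list):
    """Approximate inverse-Hessian times gradient from stored pairs."""
    q = list(grad)
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / dot(y, s)
        alpha = rho * dot(s, q)
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
        alphas.append((alpha, rho, s, y))
    # Initial inverse-Hessian guess: scaled identity
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = dot(s, y) / dot(y, y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for alpha, rho, s, y in reversed(alphas):
        beta = rho * dot(y, r)
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r                              # step direction is -r

def lbfgs_minimise(grad_fn, x, m=5, steps=50, lr=1.0):
    s_list, y_list = [], []
    g = grad_fn(x)
    for _ in range(steps):
        d = two_loop(g, s_list, y_list)
        x_new = [xi - lr * di for xi, di in zip(x, d)]
        g_new = grad_fn(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        if dot(s, y) > 1e-12:             # curvature condition
            s_list.append(s); y_list.append(y)
            if len(s_list) > m:           # memory cap: keep only m pairs
                s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# Quadratic test: f(x) = sum((x_i - 1)^2) has its minimum at all-ones.
grad = lambda x: [2.0 * (xi - 1.0) for xi in x]
print(lbfgs_minimise(grad, [0.0] * 4))
```

For a geometry optimization with many thousands of atoms, dropping the dense matrix in favour of a handful of stored vector pairs is exactly what lets larger systems fit in memory.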
Would you benefit from dedicated training at YOUR place?
NAG experts travel all over the world giving tailored training courses at academic institutions and businesses. A recent training course included the topics 'Using the NAG Toolbox for MATLAB' and 'NAG and Excel' and was attended by over 100 students.
If you think people at your organization would benefit from training from a NAG expert, get in touch by email. Courses can be tailored to your needs. Click here to learn more about our courses.
Technical Seminars, Training and Events
Algorithmic Differentiation One Day Training Course (webcast and London, UK) - 20 July 2012
NAG, Wilmott and the Certificate in Quantitative Finance are holding a one day training course on Algorithmic Differentiation, taught by Professor Uwe Naumann. Event Info.
NAG Quant Day (New York, USA) - 26 July 2012
This year's USA event follows the highly successful NAG Quant Day format. The keynote speaker is Professor Uwe Naumann. Places are limited and registration is required to secure your place. Event Info.
- International Symposium on Mathematical Programming (Berlin, Germany) - 19-24 August 2012
ISMP is a scientific meeting held every three years on behalf of the Mathematical Optimization Society. NAG will be attending this event to talk about the NAG Library and Compiler products.
Recent Blog Posts
Keep up to date with NAG's recent blog posts here:
The Matrix Square Root, Blocking & Parallelism
Edvin Deadman writes about how he, and others, have been investigating how blocking can be used to speed up the computation of matrix square roots.
Using NAG and LabVIEW in a 64 bit environment
Jeremy Walton gives the latest instalment in a series of posts about enhancing LabVIEW applications by using NAG methods and routines. The examples we looked at previously were all in the 32 bit environment; following questions from users, this post shows how the same approach works in the 64 bit world.
So just how expensive is Marshaling?
John Morrissey blogs about a technique that may improve run-time performance when using the NAG C Library from .NET.
NAGNews - Past Issues