- Software is Key to Performance
- First Look: The NAG Library for SMP & Multicore, Mark 23
- Performance Gains: NAG Library for SMP & Multicore Studies
- One Day Algorithmic Differentiation Training Course
- Technical Seminars, Training and Events
- Recent NAG Blog Posts
Software is Key to Performance
For many years, the way to get better performance from your software was to buy the next processor - the hardware escalator. That escalator has now essentially stopped. With the advent of multicore and many-core processors, it is common to see applications that have not been adapted slow down when new processors are introduced.
Software has had to evolve, and must continue to evolve, to take account of multicore architectures and all of the challenges they entail. This is true from laptops to supercomputers and everything in between. It is a massive task, and frankly there is not enough skilled effort available to keep pace with the changes required.
At NAG we've been working in many ways to help our collaborators keep ahead of the game. We have been developing the NAG Library for SMP & Multicore since 1997 (a world first), and it is considered a vital tool by many users. We have also run R&D projects on numerical routines for both GPUs and Intel's MIC. We will be showcasing the NAG Library for SMP & Multicore, as well as our research into new architectures, at ISC2012 in Hamburg this June.
NAG has also been providing computational science and engineering (CSE) support for the UK national supercomputing service, HECToR, at a level of around 20 people per annum for over 5 years. Over that time there have been many fantastic success stories demonstrating the benefit of investing in software scalability and performance. As a result of the acknowledged success of direct CSE support in a national service, EPSRC has recently supported a trial of a similar service at a regional centre/university. (In fact, EPSRC sponsors over £9m per annum of software support overall, which is the envy of major research communities around the world.)
NAG has been extending its HPC Services provision over recent years, providing procurement advice and CSE support on 4 continents to both research and industry, spreading the message that continuing quality software development is key to success.
High Performance Computing is much more than High Performance Computers.
First Look: The NAG Library for SMP & Multicore, Mark 23
The latest update of NAG's premier numerical library is now available. If you are not yet aware of this extensive numerical library, optimized for use on HPC and multicore systems, take five minutes to learn more about its contents and uses.
At the latest release (Mark 23), the NAG Library for SMP & Multicore contains mathematical and statistical routines that have been optimized for use on HPC systems. At each new release, new routines are added and many of the existing routines are enhanced to gain even better speed-ups. At Mark 23 there are now 1,700 routines in the SMP & Multicore Library, about a third of which are parallelized for use on multiple cores.
Mark 23 Highlights Include:
- Parallelism in the areas of pseudorandom number generators, two-dimensional wavelets, particle swarm optimization, four and five-dimensional data interpolation routines, hierarchical mixed effects regression routines and more sparse eigensolver routines.
- Over 70 tuned LAPACK routines; and
- Over 250 routines enhanced by calling tuned LAPACK routines (including nonlinear equations, matrix calculations, eigenproblems and Cholesky factorization).
More about the new functionality can be found on our website.
Performance Gains: NAG Library for SMP & Multicore Studies
One illustration of the performance benefits of the NAG Library for SMP & Multicore on multiple processors comes from a NAG routine that computes Kendall and/or Spearman nonparametric rank correlation coefficients for a set of data.
The results (see bar chart) show how, for a problem with 500 variables and 2,000 observations, the run time reduces as the number of cores used increases.
These results were obtained on a 24 core system comprising two AMD Opteron 6174 processors, with each processor having 12 cores running at 2.2 GHz. More performance studies can be found here.
One Day Algorithmic Differentiation Training Course
Date: Friday 20th July 2012
Location: Internationally via webcast or 7city Learning, 4 Chiswell Street, London EC1Y 4UP
This training course introduces Algorithmic Differentiation (AD) techniques in the context of applying AD to C++ numerical codes. Those not familiar with AD may refer to:
- Exact First- and Second-Order Greeks by Algorithmic Differentiation
- Adjoint Parameter Calibration in Computational Finance
1.1 Motivation. Introduction. Overview
- Requirement for first and second derivatives in numerical algorithms
- First and second-order tangent-linear and adjoint models
- Case studies
1.2 First-Order Algorithmic Differentiation
- Hands-on development of first derivative code, that is:
- Hand-coding tangent-linear and adjoint models;
- Tangent-linear and adjoint models by source transformation with dcc 0.9;
- Tangent-linear and adjoint models by overloading with dco 0.9.
1.3 Second-Order Algorithmic Differentiation
- Hands-on development of second derivative code, that is:
- Hand-coding second-order tangent-linear and adjoint models;
- Second-order tangent-linear and adjoint models by source transformation with dcc 0.9;
- Second-order tangent-linear and adjoint models by overloading with dco 0.9.
1.4 Further Topics in Algorithmic Differentiation
- Exploitation of sparsity
- Checkpointing in adjoint code
- (Algorithmic) Differentiation of numerical algorithms
The prerequisites for the course are:
- Basic numerical analysis
- C/C++ compiler: Microsoft C++ and/or GNU C
- Text editor
- Your own PC/laptop - Windows 32 or 64 bit
- Software: dco 0.9 and dcc 0.91 - provided upon registration
The fee to attend the course in the classroom or to view the webcast is £300 + VAT (UK & Europe); £300 (Asia-Pacific, Middle East and Africa); $480 (Americas).
Early bird discount: 20% discount if you sign up before 30th June
CQF Alumni: Free
To register for this event, please send an email to T.McCahill@7city.com specifying whether you would like to attend the "Classroom" lecture or view via "Webcast". Once registered, details regarding payment and the software downloads for dco 0.9 and dcc 0.91 will be emailed to you.
You will need to bring your own laptop to this session; however, a small number of laptops will be available for classroom delegates. If you need to borrow a machine, please say so in your registration email; note that loan machines are allocated on a first-come, first-served basis.
Technical Seminars, Training and Events
- International Supercomputing Conference
18-20 June 2012
- Quant Invest
25-27 June 2012
- Parallel Programming with MPI
University College London
12-14 June 2012
Recent NAG Blog Posts
Keep up to date with NAG's recent blog posts here:
NAGNews - Past Issues