NAGnews 152

In this issue:


Major update to NAG’s Algorithmic Differentiation Tool – dco/c++


NAG continues to pioneer the development of Algorithmic Differentiation (AD) software, and with new research and development coming to the fore this December we are pleased to announce a major new release of our AD software tool, dco/c++.

dco/c++ is an AD software tool for computing sensitivities of C++ codes. It embodies over 15 man years of R&D, much of which has required original research. It's an operator-overloading tool with a slick API: the tool is easy to learn, easy to use, can be applied quickly to a code base, and integrates easily with build and testing frameworks.
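To give a flavour of the operator-overloading approach, here is a minimal illustrative sketch of the underlying idea – the simulation's floating-point type is swapped for an AD type whose overloaded operators propagate derivatives alongside values. Note this is not dco/c++'s actual API; the `Dual` type and everything else below are hypothetical names used only for illustration.

```cpp
#include <cmath>
#include <iostream>

// Hypothetical dual-number type, shown only to illustrate how an
// operator-overloading AD tool carries derivative information through a
// computation. dco/c++'s real data types and API differ from this sketch.
struct Dual {
    double v;  // value
    double d;  // derivative (tangent)
};

Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

// The user's simulation code is written once, templated on the active type.
template <typename T>
T f(T x) { return sin(x * x) + x; }

int main() {
    Dual x{1.5, 1.0};   // seed dx/dx = 1
    Dual y = f(x);      // overloaded operators carry the derivative along
    std::cout << "f(x)  = " << y.v << "\n"
              << "f'(x) = " << y.d << "\n";   // analytic: 2x*cos(x^2) + 1
}
```

dco/c++ is built on the same principle, with tangent and adjoint (tape-based) data types designed to scale to full production codes.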

What's new in dco/c++ v3.2

  • Re-engineered internals mean dco/c++ is now ~30% faster and uses ~30% less memory
  • Vector reverse mode: for simulations with more than one output, several columns of the Jacobian or Hessian can now be computed at once using vector data types
  • Parallel reverse mode: for simulations with more than one output, the columns of the Jacobian or Hessian can now easily be computed in parallel. This can be combined with vector reverse mode
  • Jacobian pre-accumulation: sections of the computation can be collapsed into a pre-computed Jacobian, further reducing memory use
  • Disk tape: allows the tape to be recorded straight to disk. Although slower, this allows very large computations to complete without having to use checkpointing to reduce memory use
  • Tape activity logging and improved error handling

The organisation you work for may already hold a licence for NAG’s AD software – do contact us and we’ll check for you. For more information on dco/c++ click here. Trials, training, consulting and help with Proof of Concept projects are all available from NAG.

Algorithmic Differentiation for Accelerators – dco/map

For those wishing to implement AD on accelerators, NAG provides dco/map. An overview is available in the technical poster ‘High Performance Tape-Free Adjoint AD for C++11’ and in the presentation slides ‘Second Order Sensitivities: AAD Construction and Use for CPU and GPU’ by Jacques Du Toit and Chris Kenyon.

 


University of Leeds NAG Student Prize Winner Announced


We were delighted to hear that this year’s University of Leeds NAG Prize was awarded to Fryderyk Wilczynski for achieving the best performance on the MSc within the integrated MSc/PhD Fluid Dynamics programme (cohort two). Fryderyk received the prize from Peter Jimack, Professor of Scientific Computing in the School of Computing, on 12 December 2017 in front of his fellow students and instructors.

After receiving his award, we asked Fryderyk to tell us a little bit about his studies.

“I am interested in studying hydrodynamic and magnetohydrodynamic instabilities. My research project is concerned with plasma instabilities that occur inside nuclear fusion reactors. Fusion reactions combine light atomic nuclei such as hydrogen to form heavier ones such as helium, producing energy.”

Read the entire article here. Congratulations, Fryderyk, from all at NAG! The photo shows Fryderyk (right) receiving the prize from Professor Peter Jimack (left).

 


New ‘Optimization Corner’ technical blog series continues with ‘The Price of Derivatives – Derivative-free Optimization’


In the previous post we discussed ways to provide derivatives and we focussed on a finite difference (FD) approximation. This time we address, in more detail, algorithms which neither require derivatives nor approximate them internally via finite differences. This class of optimization algorithms is usually referred to as Derivative-free Optimization (DFO).
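By way of illustration – a minimal sketch of the idea only, not the algorithms used in the NAG Library, whose DFO solvers are far more sophisticated – one of the simplest derivative-free methods is a compass (coordinate) search: it polls the objective along each coordinate direction and shrinks its step when no poll point improves.

```cpp
#include <array>
#include <functional>
#include <iostream>

// Minimal compass (coordinate) search: the simplest flavour of derivative-free
// optimization, shown purely to illustrate the class of methods.
using Point = std::array<double, 2>;

Point compass_search(const std::function<double(const Point&)>& f,
                     Point x, double step, double tol) {
    double fx = f(x);
    while (step > tol) {
        bool improved = false;
        for (int i = 0; i < 2 && !improved; ++i) {
            for (double s : {step, -step}) {
                Point y = x;
                y[i] += s;
                double fy = f(y);
                if (fy < fx) {            // accept the first improving poll point
                    x = y; fx = fy; improved = true;
                    break;
                }
            }
        }
        if (!improved) step *= 0.5;       // no improvement: shrink the stencil
    }
    return x;
}

int main() {
    // Rosenbrock's function: note that no derivatives are evaluated anywhere.
    auto rosenbrock = [](const Point& p) {
        double a = 1.0 - p[0], b = p[1] - p[0] * p[0];
        return a * a + 100.0 * b * b;
    };
    Point x = compass_search(rosenbrock, {-1.2, 1.0}, 0.5, 1e-6);
    std::cout << "minimum near (" << x[0] << ", " << x[1] << ")\n";
}
```

Methods like this pay for the missing derivatives with many extra function evaluations, which is exactly the trade-off the blog post explores.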

Read the full blog post here. Learn more about ‘The Optimization Corner’ series.

 


Algorithm Spotlight Mark 26.1: Struve Functions


Included in the latest Mark of the NAG Library is a set of six routines related to Struve functions.

Struve functions arise in a wide variety of applications across many fields of physics. For example, they can be found in water-wave and surface-wave problems (specifically the flow of liquid near a turning ship), as well as in calculations of the distribution of fluid pressure over a vibrating disk and other unsteady aerodynamics. They also crop up in aspects of optical diffraction, plasma stability (specifically resistive magnetohydrodynamic instability theory), quantum dynamical studies of spin decoherence, and excitation in carbon nanotubes.
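For readers meeting them for the first time: the Struve function of order ν is a particular solution of an inhomogeneous Bessel equation, and can be defined by the standard power series (as given, for example, in the NIST DLMF):

```latex
\mathbf{H}_\nu(z) \;=\; \Bigl(\tfrac{z}{2}\Bigr)^{\nu+1}
  \sum_{k=0}^{\infty}
  \frac{(-1)^k \,(z/2)^{2k}}
       {\Gamma\!\bigl(k+\tfrac{3}{2}\bigr)\,\Gamma\!\bigl(k+\nu+\tfrac{3}{2}\bigr)}
```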

To learn more about them see ‘Struve functions’. See what else is new at Mark 26.1.


Blog Bites


Speaking up for gender diversity in high performance computing

Qingqing Liao is an HPC Applications Support Engineer at NAG, although her actual workplace is at a large blue-chip Oil & Gas organisation in the USA, where she works in a team of HPC specialists who develop, parallelize and optimize new cutting-edge algorithms for use in production.

Recently she was invited to speak at SC17 – the largest HPC event in the world – during the ‘Women in HPC’ event. Diversity has become an increasingly prominent theme at supercomputing events. It came to the fore for NAG in 2015, when we published our workplace gender diversity figures at SC15 as part of an initiative led by John West of the Texas Advanced Computing Center. Since then, NAG has been actively supporting the ‘Women in HPC’ initiative and seeks to encourage and attract women into technical careers. At the grass-roots level, NAG promotes the learning of STEM subjects to girls in education.

Fresh from her busy week at SC17 in Denver, I chatted to Qingqing about her route into HPC and asked her about speaking at the ‘Women in HPC’ event. Read more

The role of services in NAG’s business today

When I joined NAG, 23 years ago, we were a software company that did a bit of consultancy on the side. In the intervening years we’ve had a number of high-profile service contracts – developing ACML for AMD, providing CSE support for HECToR and, today, providing a specialist team to a customer in Houston. However, what is perhaps less apparent is the large number of smaller contracts that we routinely undertake for our customers. There are a number of reasons for this, but chief amongst them is that organisations have become much leaner, and no longer maintain all the expertise that they need in house. They look for partners who can provide the skills that they lack, and NAG’s reputation makes us an obvious choice for many of them. Read more


Out & About with NAG


Exhibitions, Conferences, Trade Shows & Webinars

Webinar: Leverage multi-core performance using Intel® Threading Building Blocks (Intel® TBB)

16, 17, 18 January 2018

This series of 2-hour theory and practical webinars, delivered over 3 days, will introduce Intel's Threading Building Blocks (TBB). Attendees need no prior knowledge of TBB and only a rudimentary understanding of parallel programming. On completing the series, participants will know what TBB is, how it enables parallel programming, what differentiates it from other parallel programming models, and how to use common parallel programming patterns to parallelize their own code. Register here.
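As a small taste of the kind of code the webinars work towards (an illustrative sketch only, not course material), TBB's parallel_for splits a loop's iteration range into chunks and schedules them across the available cores:

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> x(1'000'000, 2.0), y(x.size());

    // tbb::parallel_for divides the index range into chunks and runs the lambda
    // on each chunk in parallel; TBB's scheduler balances the work across cores.
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, x.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                y[i] = std::sqrt(x[i]) * 3.0;
        });

    std::cout << "y[0] = " << y[0] << "\n";
}
```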