
Technical news, white papers, tips & hints and other news from NAG

NAGnews 167

In this issue:

  • What's new in the NAG Fortran Compiler - major new release
  • Super-charged Fixed Point Iterations using Anderson Acceleration and the NAG Library
  • Webinar: Modern modelling techniques in convex optimization and its applicability to finance and beyond
  • How to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE solver
  • Latest Student Prize Winners
  • Out & About with NAG
  • Best of the Blog

 


What's new in the NAG Fortran Compiler - major new release


The NAG Fortran Compiler has been updated with a host of major new features:

  • Parallel execution of coarray programs on shared-memory machines
  • Half precision floating-point conforming to the IEEE arithmetic standard, including full support for all exceptions and rounding modes
  • Submodules, a Fortran 2008 feature for breaking large modules into separately-compilable files
  • Teams, a Fortran 2018 coarray feature for structuring parallel execution
  • Events, a Fortran 2018 coarray feature for lightweight single-sided synchronisation
  • Atomic operations, a Fortran 2018 coarray feature for updating atomic variables without synchronisation

Click here to learn more about the new release.

In a recent benchmark study, the NAG Compiler came out top with a score of 96%, ahead of the Intel, Cray, gfortran and Oracle compilers. The results are featured in the December issue of Fortran Forum: https://dl.acm.org/citation.cfm?id=3374907.

We recommend that users move to the latest NAG Fortran Compiler to benefit from the additional functionality (all supported clients are guaranteed technical assistance on the current release and one previous release). Users upgrading from a previous release will need a new licence key. Windows and Mac implementations of NAG Compiler 7.0 will follow in 2020. If you have any questions about this release, do contact your Account Manager or the NAG Technical Support Service.

Free trials of the new NAG Fortran Compiler are available.

 


Super-charged Fixed Point Iterations using Anderson Acceleration and the NAG Library


Fixed Point Iterations appear in many areas of science, finance, engineering and applied mathematics. Their general form is $$x_{n+1} = f(x_n),$$ which is repeated until a convergence criterion is met. One problem with such techniques is that convergence can be slow, which limits their usefulness.
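
As a minimal sketch (not NAG code; the function g and the starting value here are placeholders), such an iteration can be written in a few lines of Python:

    import numpy as np

    def fixed_point(g, x0, tol=1e-10, maxit=1000):
        # Repeat x_{n+1} = g(x_n) until successive iterates agree to tol.
        x = x0
        for n in range(1, maxit + 1):
            x_new = g(x)
            if abs(x_new - x) < tol:   # convergence criterion
                return x_new, n
            x = x_new
        raise RuntimeError("no convergence after %d iterations" % maxit)

    # Classic example: x = cos(x) converges, slowly, to about 0.739085
    root, its = fixed_point(np.cos, 1.0)
    print(root, "after", its, "iterations")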

In a 2015 paper, NAG collaborator Nick Higham, together with Nataša Strabić, applied a technique called Anderson Acceleration to Higham's alternating-projections algorithm for computing Nearest Correlation Matrices (NCMs), resulting in much faster convergence.

The Anderson Accelerated NCM routine was included in Mark 27 of the NAG Library under the function name nag_correg_corrmat_fixed. While implementing it, the NAG team decided to make general Anderson Acceleration methods available to users of the NAG Library: sys_func_aa and sys_func_aa_rcomm, which can be applied to any fixed point problem.

In a recently published Jupyter Notebook, we demonstrate the use of these routines from Python. We first show how the convergence of the simple fixed point iteration $$u^{n+1}_{j,i} = \frac{1}{4} \left(u^{n}_{j+1,i} + u^{n}_{j-1,i} + u^{n}_{j,i+1} + u^{n}_{j,i-1} \right) + \frac{h^2}{4} f_{j,i}$$ can be improved from requiring 88 iterations to only 10 using Anderson Acceleration, a speed-up of almost a factor of 9.
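
To make the technique concrete, below is a minimal NumPy sketch of Anderson Acceleration (the least-squares variant) applied to a Jacobi sweep of this form. It is illustrative only: the grid size, right-hand side and tolerance are our own assumptions, so the iteration counts will differ from the notebook's, and the supported NAG routines above should be preferred in practice.

    import numpy as np

    def anderson(g, x0, m=5, tol=1e-8, maxit=500):
        # Anderson Acceleration of x = g(x): combine the last m+1 iterates
        # so as to minimise the fixed-point residual in a least-squares sense.
        x = np.asarray(x0, dtype=float)
        gx = g(x)
        G_hist, F_hist = [gx], [gx - x]
        x = gx                                   # first step: plain iteration
        for k in range(1, maxit):
            gx = g(x)
            f = gx - x                           # fixed-point residual
            if np.linalg.norm(f) < tol:
                return gx, k
            G_hist.append(gx)
            F_hist.append(f)
            G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]
            # Differences of successive residuals and map values
            dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma                  # accelerated update
        return x, maxit

    n, h = 32, 1.0 / 33                          # interior points, mesh width
    rhs = np.ones((n, n))                        # test right-hand side f_{j,i}

    def jacobi(u_flat):
        # One Jacobi sweep for the discrete Poisson problem, zero boundary.
        u = np.zeros((n + 2, n + 2))
        u[1:-1, 1:-1] = u_flat.reshape(n, n)
        unew = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                       + u[1:-1, 2:] + u[1:-1, :-2]) + 0.25 * h**2 * rhs
        return unew.ravel()

    u, iters = anderson(jacobi, np.zeros(n * n))
    print("converged in", iters, "iterations")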

Click here to read the blog post

 


Webinar: Modern modelling techniques in convex optimization and its applicability to finance and beyond


A wide range of optimization solvers is available nowadays, but it can be difficult to choose the one that best suits your model and realises all the potential benefits. Convex optimization, particularly Second-order Cone Programming (SOCP) and Quadratically Constrained Quadratic Programming (QCQP), has seen a massive increase in interest thanks to its robustness and performance. A key issue is recognising which models can be reformulated and solved this way.
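
As a small example of such a reformulation (standard convex-optimization material, not taken from the webinar itself): a convex quadratic constraint $$x^T Q x + q^T x \le b, \qquad Q = F^T F \succeq 0,$$ can be rewritten, with $t = b - q^T x$, as the second-order cone constraint $$\left\| \begin{pmatrix} 2 F x \\ 1 - t \end{pmatrix} \right\|_2 \le 1 + t,$$ since squaring both sides reduces the inequality to $\|F x\|_2^2 \le t$. Recognising this kind of structure is what allows a QCQP model to be handed to an SOCP solver.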

NAG's first webinar of 2020 introduces the background of SOCP and QCQP, and reviews basic and more advanced modelling techniques. These techniques will be demonstrated in real-world examples in Portfolio Optimization.

Learn more and register: https://attendee.gotowebinar.com/register/5588930156736514306

 


How to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE solver


A new technical report demonstrates how to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE (Partial Differential Equation) solver. It shows how dco/c++ can be used to couple hand-written symbolic adjoint code with an overall algorithmic solution, and demonstrates the easy-to-use interface when dco/c++ is coupled with the NAG AD Library: here, the sparse linear solver (f11jc) can be switched from algorithmic to symbolic mode with a single line of code. The report introduces the primal solver and the adjoint solver, together with their respective run time and memory results. An optimization algorithm using steepest descent is also run on a test case to show the potential use of the computed adjoints.
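
For background (a standard result in adjoint algorithmic differentiation, not specific to the report): the adjoint of a linear solve need not be differentiated statement by statement. If the primal step solves $$A x = b,$$ then, given the adjoint $\bar{x}$ of the output, the symbolic adjoint requires just one additional solve, $$A^T \lambda = \bar{x}, \qquad \bar{b} \leftarrow \bar{b} + \lambda, \qquad \bar{A} \leftarrow \bar{A} - \lambda x^T,$$ and for a symmetric system the extra solve reuses $A$ itself. This is why switching a solver such as f11jc to symbolic mode can save both the run time and the memory needed to record the iterative process.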

Read the report

 


Latest Student Prize Winners


It was great to see University of Leeds student Reece Coyle awarded the NAG Prize for best performance in the MSc component of the EPSRC CDT in Fluid Dynamics in December. Congratulations Reece from all at NAG. The photos below show Reece receiving his award from Professor Peter Jimack, and everyone who attended the presentation!

If you're interested in learning more about NAG's Student Awards do get in touch.

 


Out & About with NAG


Exhibitions, Conferences, Trade Shows, and Webinars

Webinar: Guided performance analysis and optimization using MAQAO
23 January 2020

Webinar: Modern modelling techniques in convex optimization and its applicability to finance and beyond
5 February 2020

TakeAIM Awards Ceremony
6 February 2020, London

PyCon 2020
17-19 April 2020, Pittsburgh

QuantMinds International
11-15 May 2020, Hamburg

 


Best of the Blog


Application Performance Profiling: Part 1 - What to profile

This is the first of a series of blogs to give an overview of the process of application profiling for performance optimization. The topics we will cover are:

  1. Performance profiling concepts: Parallelism, the critical path, and Amdahl's argument (see the formula after this list).
  2. The right tool for the job: Tracing, sampling and instrumenting profilers.
  3. Interpreting profiles and diagnosing issues: An example with MPI and I/O.
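
For reference (standard background, not taken from the blog): Amdahl's argument bounds the achievable speed-up when a fraction $p$ of the runtime can be parallelised across $N$ workers: $$S(N) = \frac{1}{(1 - p) + p/N} \le \frac{1}{1 - p},$$ so even with unlimited workers the serial fraction $1 - p$ caps the gain, which is why profiling begins by identifying what dominates the critical path.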

Click here to read the blog.
Watch the webinar that accompanies the blog series.