In Scientific Computing there is a strong focus on solving time-dependent Partial Differential Equations (PDEs) as efficiently as possible. Adaptive mesh refinement (AMR) can be used to construct a sparse mesh at every time step that maintains an accurate approximation to the solution, and interpolating wavelets are often used to drive the refinement. In this report we present a detailed comparison of two wavelets for AMR: Donoho's interpolating wavelet and a lifted version of it (an example of so-called second-generation wavelets). The wavelets are compared on PDE problems from computational finance and computational fluid dynamics. We also examine different ways of handling the boundaries and their impact. Donoho's interpolating wavelet with a lower-order boundary stencil implementation appears to be the most accurate, while achieving very high compression relative to the original mesh: for one data set it keeps fewer than 5% of the points with an error smaller than 0.0001. In general, Donoho's interpolating wavelet produces sparse meshes while maintaining good accuracy, even for very irregular shapes. Lastly, an improvement to the inverse transform during adaptive mesh refinement leads to promising results.
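For readers unfamiliar with interpolating wavelets, the core predict/detail idea can be sketched in a few lines. This is a minimal illustration using a simple linear (two-point) prediction stencil rather than the higher-order stencils discussed in the report, and the function names are hypothetical, not NAG's implementation:

```python
import numpy as np

def forward_interp_wavelet(x):
    """One level of an interpolating wavelet transform as a lifting
    'predict' step: odd samples are predicted from their even
    neighbours, and the mismatch is the detail coefficient.
    Assumes len(x) is odd so every odd sample has two neighbours."""
    even = x[0::2]   # coarse samples kept on the next level
    odd = x[1::2]    # samples to be predicted
    predicted = 0.5 * (even[:-1] + even[1:])
    detail = odd - predicted
    return even, detail

def inverse_interp_wavelet(even, detail):
    """Exact inverse: re-predict the odd samples, add the details back,
    and interleave the two halves."""
    predicted = 0.5 * (even[:-1] + even[1:])
    odd = detail + predicted
    x = np.empty(even.size + odd.size)
    x[0::2] = even
    x[1::2] = odd
    return x
```

Where the solution is smooth, the detail coefficients are near zero and those points can be dropped from the mesh, which is the source of the compression figures quoted above.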
NAG is currently working in the PDE area. If you're interested in learning more about our work, do get in touch: info@nag.com.

Nowadays there is a wide range of optimization solvers available, and it can be difficult to choose the best solver for your model and gain all the potential benefits. Convex optimization, particularly Second-order Cone Programming (SOCP) and Quadratically Constrained Quadratic Programming (QCQP), has seen a massive increase in interest thanks to its robustness and performance. A key skill is recognizing which models can be reformulated and solved this way.
NAG's first webinar of 2020 introduces the background of SOCP and QCQP, and reviews basic and more advanced modelling techniques. These techniques are demonstrated in real-world examples in Portfolio Optimization.
Learn more about the NAG Library Second-order Cone Programming Solver plus Examples.
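As a taste of the kind of reformulation the webinar covers, a convex quadratic constraint x'F'Fx <= t is equivalent to the second-order cone constraint ||(2Fx, 1-t)||_2 <= 1+t. This is a standard identity, not NAG's API; the sketch below (with hypothetical names) simply verifies the two forms numerically:

```python
import numpy as np

def quad_constraint_holds(F, x, t):
    """Original QCQP-style constraint x'Qx <= t, with Q = F'F."""
    u = F @ x
    return np.dot(u, u) <= t

def socp_constraint_holds(F, x, t):
    """Equivalent second-order cone form ||(2Fx, 1-t)||_2 <= 1+t,
    since ||(2u, 1-t)||^2 <= (1+t)^2 simplifies to 4||u||^2 <= 4t."""
    u = F @ x
    lhs = np.linalg.norm(np.concatenate([2.0 * u, [1.0 - t]]))
    return lhs <= 1.0 + t
```

A conic solver only ever sees the second form, which is why recognizing such reformulations lets quadratic models benefit from SOCP machinery.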
NAG is a judge and sponsor of the TakeAIM awards. The competition is an opportunity for university students to showcase their work on the industrial stage. TakeAIM's goal is to highlight the crucial role mathematics plays in solving real-world problems while rewarding the academic exploration of future innovators who undertake pioneering research.
The award ceremony for the 2019 entries was held in London on 6 February. NAG Honorarium (and TakeAIM judge) David Sayers and colleague David Humphris attended the vibrant event where student entrants and winners were celebrated.
The 2019 TakeAIM Award was presented to two entrants: Sarah Brown, University of Nottingham for her paper, 'Using Maths to Combat Potentially Fatal Asthma Attacks', and Enrico Gavagnin, University of Bath for his work 'A collective human challenge'.
Congratulations Sarah and Enrico, and to all the excellent runners-up.

Widely reputed as the world's best checking Compiler, the NAG Fortran Compiler has recently been upgraded with many new features. Since the last NAGnews, Release 7.0 of the Compiler has been made available for Apple Intel Mac. Download the Compiler here.
NAG Fortran Compiler (7.0) new features:
- Parallel execution of coarray programs on shared-memory machines
- Half precision floating-point conforming to the IEEE arithmetic standard, including full support for all exceptions and rounding modes
- Submodules, a Fortran 2008 feature for breaking large modules into separately-compilable files
- Teams, a Fortran 2018 coarray feature for structuring parallel execution
- Events, a Fortran 2018 coarray feature for lightweight single-sided synchronisation
- Atomic operations, a Fortran 2018 coarray feature for updating atomic variables without synchronisation
Click here to learn more about the new release.
In a recent benchmark study, the NAG Fortran Compiler came out top with a score of 96% against the Intel, Cray, gfortran and Oracle compilers. The results are featured in the December issue of Fortran Forum.
We recommend that users move to the latest NAG Fortran Compiler to benefit from the additional functionality; note that all supported clients are guaranteed technical assistance on the current release and one previous release. Users upgrading from a previous release will need a new licence key.
Webinar: Energy Efficient Computing using Dynamic Tuning
This webinar, presented by the POP project, focuses on tools designed to improve the energy efficiency of HPC applications through dynamic tuning, a methodology developed under the H2020 READEX project. The READEX methodology is designed to exploit the dynamic behaviour of software. At design time, different runtime situations (RTSs) are detected and optimized system configurations are determined; RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically. We will present the MERIC tool, which implements the READEX methodology; it supports manual or binary instrumentation of the analysed applications to simplify the analysis.
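The design-time/runtime split described above can be sketched conceptually. Everything here is illustrative: the scenario names, configuration keys, and functions are hypothetical stand-ins, not the actual MERIC API:

```python
# Design time: RTSs with the same optimal configuration have been
# grouped into scenarios, forming the tuning model (values invented).
tuning_model = {
    "memory_bound": {"cpu_freq_mhz": 1800, "uncore_freq_mhz": 2400},
    "compute_bound": {"cpu_freq_mhz": 2600, "uncore_freq_mhz": 1600},
}

def classify_rts(region_name):
    """Map an instrumented code region to its scenario.  A real tool
    derives this mapping from design-time measurements; here we use a
    toy rule based on the region name."""
    return "memory_bound" if region_name.startswith("stencil") else "compute_bound"

def on_region_enter(region_name, apply_config):
    """Runtime side: look up the scenario in the tuning model and
    switch the system configuration for this region."""
    scenario = classify_rts(region_name)
    apply_config(tuning_model[scenario])
```

The point of the methodology is that the expensive analysis happens once at design time, so the runtime switch is just a cheap table lookup.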
Webinar: Addressing Biomedical Challenges with HPC
In this webinar, presented by the POP project, we will outline some of CompBioMed’s achievements on HPC infrastructure in the field of computational biomedicine. The consortium’s efforts to broaden the capability of HPC use within the biomedical community will also be discussed. Finally, we will highlight the results of previous interactions between the CompBioMed and POP CoEs and possible avenues for future collaboration.
- PyCon 2020 15-23 April 2020, Pittsburgh
- The Trading Show Chicago 3-4 September 2020, Chicago
- QuantMinds International 2-5 November 2020, Hamburg

This is the second in a mini-series of blogs on application performance profiling by Phil Tooley, HPC Application Analyst. The aim of the series is to give an overview of the process of application profiling for performance optimization. The topics he covers in this series are:
1. Performance profiling concepts: Parallelism, the critical path, and Amdahl's argument.
2. The right tool for the job: Tracing, sampling and instrumenting profilers.
3. Interpreting profiles and diagnosing issues: An example with MPI and I/O.
In this blog he uses the concepts introduced in the first part to explain how to choose the best profiling tools for different situations.
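Amdahl's argument from the first post fits in one line: if a fraction p of the runtime parallelizes perfectly over n workers, the best possible speedup is 1/((1-p) + p/n). A quick sketch (the function name is ours, not from the blog):

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the serial runtime
    parallelizes perfectly over n workers (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, 1024 workers give under 20x speedup,
# because the 5% serial fraction dominates:
print(amdahl_speedup(0.95, 1024))  # ~19.6; the n -> infinity limit is 20
```

This is why profiling focuses on the critical path: shrinking the serial fraction raises the ceiling far more than adding workers does.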
