Use of Algorithmic Differentiation (AD) leads to more efficient sampling for Hamiltonian Monte Carlo (HMC). Check out our recent paper on Markov Chain Monte Carlo (MCMC) for Bayesian uncertainty quantification from time-series data and the accompanying code on the NAG GitHub.
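To give a flavour of what an HMC sampler does, here is a minimal sketch for a one-dimensional standard normal target in plain Python/NumPy. It is purely illustrative: the function name `hmc_sample`, the step size, and the leapfrog count are our own choices for this sketch, not taken from the paper or its accompanying code.

```python
import numpy as np

def hmc_sample(logp, grad_logp, n_samples=2000, eps=0.2, n_leapfrog=10, seed=0):
    """Minimal 1-D Hamiltonian Monte Carlo with a leapfrog integrator."""
    rng = np.random.default_rng(seed)
    q = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        p = rng.standard_normal()            # fresh momentum each iteration
        q_new, p_new = q, p
        # leapfrog integration of Hamilton's equations
        p_new += 0.5 * eps * grad_logp(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new
            p_new += eps * grad_logp(q_new)
        q_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(q_new)
        # Metropolis correction: H(q, p) = -log p(q) + p^2 / 2
        h_cur = -logp(q) + 0.5 * p * p
        h_new = -logp(q_new) + 0.5 * p_new * p_new
        if np.log(rng.uniform()) < h_cur - h_new:
            q = q_new
        samples[i] = q
    return samples

# target: standard normal, log p(q) = -q^2 / 2 (up to a constant)
samples = hmc_sample(lambda q: -0.5 * q * q, lambda q: -q)
```

Note the role of `grad_logp`: HMC needs the gradient of the log density at every leapfrog step, which is exactly what AD supplies for models too complex to differentiate by hand.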
NAG’s AD tool, dco/c++ (jointly developed with RWTH Aachen University), differentiates C++ code: it can handle any C++ code whose implemented function has well-defined derivatives. Combining dco/c++ with the open-source Stan software for HMC creates new opportunities for Bayesian uncertainty quantification using existing C++ code bases. The developer effort required to get started is minimal, and the advanced functionality of dco/c++ can be used to tune performance. MCMC-based uncertainty quantification has long been seen as too computationally expensive for many real-world problems, but this is changing, and our AD tool dco/c++ has an important part to play.
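We do not reproduce the dco/c++ API here, but the core idea behind forward-mode AD, propagating derivatives alongside values through ordinary arithmetic, can be sketched with a toy dual-number class. This is purely illustrative and unrelated to dco/c++'s actual types:

```python
class Dual:
    """Dual number: carries a value and its derivative together (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val = val
        self.der = der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # any code built from overloaded ops works

x = Dual(2.0, 1.0)   # seed the derivative of the input with 1
y = f(x)
# y.val == 17.0 and y.der == 14.0, since f'(x) = 6x + 2
```

Tools such as dco/c++ apply the same operator-overloading principle to full C++ code bases, with reverse-mode (adjoint) evaluation for efficient gradients of many-input functions.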
NAG’s Branden Moore continues to write about Cost of Solution in Cloud HPC in his latest blog post.
One of the primary drivers for Cloud computing is access to architectures and systems which may not be readily available in-house. One example is AWS’s recent introduction of their custom-designed Graviton 2 processor, which is based on the ARM architecture rather than the x86 architectures from Intel and AMD. A number of our clients have enquired about how viable ARM is for their HPC needs. While there are a handful of published benchmarks available, I decided to take an afternoon and try it for myself.
For this small exercise, I decided to benchmark the weather code WRF v3. There are two "traditional" benchmarks for WRF v3, at two different resolutions (12 km and 2.5 km). Both benchmarks run for three simulated hours. The smaller, 12 km benchmark typically scales well to a few hundred cores, and the larger, 2.5 km benchmark will scale to a few thousand cores. However, for this project I ran the benchmarks on only a single node, and as the exercise was only to satisfy my own curiosity, I did not re-run the benchmarks multiple times, as we would normally do to capture statistical variation.
Following the success of the recent AD Masterclass series, NAG is delighted to present a second series, Advanced Adjoint Techniques. Building on the first series, we cover checkpointing and symbolic adjoints, advanced AD for machine learning, and two classes on Monte Carlo, before finishing with second-order sensitivities.
The second series begins on 1 October 2020 with “Checkpointing and external functions: Manipulating the DAG”, followed by five subsequent classes. The online Masterclass series is open to all. If you'd like to catch up on the first series material, do get in touch.
Machine learning is becoming ever more powerful and prevalent in the modern world, and is used everywhere from cutting-edge science and computer games to self-driving cars and food production. However, it is a computationally intensive process, particularly in the initial training stage of the model, and almost universally requires expensive GPU hardware to complete training in a reasonable length of time. Because of this high hardware cost and the increasing availability of cloud computing, many ML users, both new and experienced, are migrating their workflows to the cloud in order to reduce costs and access the latest and most powerful hardware.
This tutorial demonstrates porting an existing machine learning model to a virtual machine on the Microsoft Azure cloud platform. We will train a small movie recommendation model using a single GPU to give personalised recommendations. The total cost of performing this training should be no more than $5 using any of the single GPU instances currently available on Azure.
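To give a feel for what such a recommendation model computes, here is a small illustrative sketch of matrix-factorisation collaborative filtering in plain NumPy. The toy data, the function name `train_mf`, and all hyper-parameters are invented for this sketch and run on a CPU; the tutorial's actual GPU-based training will differ.

```python
import numpy as np

def train_mf(R, mask, k=4, lr=0.01, reg=0.05, epochs=2000, seed=0):
    """Fit R ~= U @ V.T by full-batch gradient descent on the observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))   # user factors
    V = 0.1 * rng.standard_normal((n_items, k))   # film factors
    for _ in range(epochs):
        err = mask * (U @ V.T - R)                # error only where a rating exists
        U -= lr * (err @ V + reg * U)
        V -= lr * (err.T @ U + reg * V)
    return U, V

# toy ratings: 5 users x 4 films, 0 marks "not rated"
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 5, 4],
              [0, 1, 5, 4]], dtype=float)
mask = (R > 0).astype(float)
U, V = train_mf(R, mask)
# mean squared error on the observed ratings after training
mse = ((mask * (U @ V.T - R)) ** 2).sum() / mask.sum()
```

The unobserved entries of `U @ V.T` are then the model's predicted ratings, from which personalised recommendations are drawn; real recommenders use the same idea at a scale where GPU acceleration pays off.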
This is not the only way to perform ML training on Azure; for example, Microsoft also offers the Azure ML product, which is designed to allow rapid deployment of commonly used ML applications. However, the approach we use here is the most flexible, as it gives the user complete control over all aspects of the software environment, and it is likely the fastest way to port an existing ML workflow to Azure.
NAG is delighted to announce our continued support of the University of Oxford’s InfoMM Centre for Doctoral Training. InfoMM (Industrially Focused Mathematical Modelling) is a four-year doctoral programme, set up as a collaboration between the university and industry partners, in which students are trained in cutting-edge mathematical methods and in solving real-world industry challenges.
Since our first collaboration in 2016, we have sponsored several talented students working with us on advanced aspects of mathematical optimization and data science, such as derivative-free optimization, Bayesian optimization, and dimensionality reduction for big-data optimization. We are especially excited about our latest project, which started this year: we will be looking at various challenges in machine learning, particularly advanced techniques for neural network training and related problems.
Learn more about InfoMM here.