Realize the Full Benefits of HPC Workloads on the Cloud
At the ISC 2020 Digital event this week we revealed the new Cloud HPC Migration Service, designed to meet the growing need for deep HPC expertise in cloud migration. With this service, NAG combines high-performance computing experience with new cost-performance optimization approaches to help users adapt complex HPC workloads to the cloud. A registered partner of Microsoft Azure, AWS, and Google Cloud, NAG will support customers in achieving optimal cost-to-solution.
The reasons for moving to the public cloud are numerous: access to a wider variety of hardware, reduced CAPEX, and less on-site support, to name a few. HPC has been slow to make the move, but last year Hyperion Research noted that cloud adoption had reached a “tipping point”, with users estimating that up to 40% of HPC workloads could move to the cloud.
"We’re seeing a lot of clients decide that now is the right time to move HPC workloads to the cloud," said Adrian Tate, CEO of NAG. "The access to increased variety of hardware is a great thing, but it can amplify inefficiencies which will eradicate your cost advantage. With decades of real HPC experience, up-to-date cloud credentials, and deep partnerships with the big three cloud providers, we understand the overall HPC cloud migration problem in a way that others can't."
NAG’s Cloud HPC Migration Service is provided stand-alone or through the cloud partner networks of Microsoft Azure, AWS and Google Cloud. Learn more about the service here, and if you have questions or would like to chat with the service team, do get in touch.
NAG is delighted to present the online Algorithmic Differentiation (AD) Masterclass Series. NAG is a pioneer in the industrial application of AD. The aim of the AD Masterclass Series is to share the best practices, software engineering issues, optimization techniques, and common pitfalls we’ve learned from over a decade on the front lines applying AD to real-world codes. The series will deepen your AD knowledge and show you how to move beyond toy examples and proofs of concept. Attendees will learn:
- What AD is, its impact on the world and what it means for your codes
- How to compute derivatives using tangent and adjoint AD
- How to set up a rigorous testing harness, and the software engineering implications of this
- How to exploit SIMD vectorisation
- How to bootstrap a validated adjoint on a real-world code
- How to speed up your adjoint solution and reduce its memory use
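To give a flavour of the first two topics above, tangent (forward-mode) AD can be illustrated with a minimal dual-number sketch in Python. This is purely illustrative, under assumed toy names (`Dual`, `dsin`, and the example function `f(x) = x·sin(x)`), and is unrelated to NAG's production tooling:

```python
import math

class Dual:
    """Dual number: carries a value and its tangent (directional derivative)."""
    def __init__(self, value, tangent=0.0):
        self.value, self.tangent = value, tangent

    def __add__(self, other):
        return Dual(self.value + other.value, self.tangent + other.tangent)

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.tangent * other.value + self.value * other.tangent)

def dsin(x):
    # Chain rule through sin: d(sin u) = cos(u) * du
    return Dual(math.sin(x.value), math.cos(x.value) * x.tangent)

# f(x) = x * sin(x); seed the input tangent with 1.0 to obtain df/dx
x = Dual(2.0, 1.0)
y = x * dsin(x)
print(y.value, y.tangent)   # y.tangent equals sin(2) + 2*cos(2)
```

Every arithmetic operation propagates both a value and a derivative, so one evaluation of the overloaded code yields one directional derivative; the masterclasses cover how this scales to real codes, and how adjoint mode reverses the flow to get full gradients cheaply.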
Algorithmic Differentiation (AD) Masterclass Series dates and details:
- 30 July 2020: Why the need for Algorithmic Differentiation?
- 6 August 2020: How AD works: computing Jacobians
- 13 August 2020: Testing and validation
- 20 August 2020: Pushing performance using SIMD vectorisation
- 27 August 2020: Bootstrapping validated adjoints on real-world codes
Boosted by advanced type genericity and support for template metaprogramming techniques, the role of C++ as the preferred language for large-scale numerical simulation in Computational Science, Engineering and Finance has strengthened in recent years. Algorithmic Differentiation of numerical simulations, and algorithmic adjoint methods in particular, have seen substantial growth in interest due to the increased need for gradient-based techniques in high dimensions in the context of parameter sensitivity analysis and calibration, uncertainty quantification, and nonlinear optimization. Modern software tools for (adjoint) Algorithmic Differentiation in C++ make heavy use of modern C++ features, aiming for increased computational efficiency and decreased memory requirements. The dco/c++ tool presented in this paper aims to take Algorithmic Differentiation in C++ one step further by focusing on derivatives of arbitrary order, support for shared-memory parallelism, and powerful and intuitive user interfaces, in addition to competitive computational performance. Its algorithmic and software quality has made dco/c++ the tool of choice in many industrial and academic projects.
Read the report here; the code referenced in the report is available on the NAG GitHub.
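As a rough intuition for what an adjoint (reverse-mode) AD tool does under the hood, here is a minimal tape/graph sketch in Python. It is a toy illustration only, with made-up names (`Var`, `vsin`, `backward`), and does not reflect the dco/c++ API or implementation:

```python
import math

class Var:
    """Graph node recording its value and local partials w.r.t. its parents."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)   # pairs (parent, d(self)/d(parent))
        self.adjoint = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def vsin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    """Propagate adjoints from the output back to the inputs."""
    order, seen = [], set()
    def visit(node):               # topological order of the recorded graph
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    out.adjoint = 1.0
    for node in reversed(order):
        for parent, partial in node.parents:
            parent.adjoint += node.adjoint * partial

# f(x1, x2) = x1 * x2 + sin(x1): one reverse sweep yields the full gradient
x1, x2 = Var(2.0), Var(3.0)
y = x1 * x2 + vsin(x1)
backward(y)
print(x1.adjoint, x2.adjoint)   # df/dx1 = x2 + cos(x1), df/dx2 = x1
```

The key property, and the reason for the interest described in the abstract, is that one reverse sweep delivers the gradient with respect to all inputs at a cost independent of their number; the price is the memory needed to record the computation, which is exactly the trade-off tools like dco/c++ are engineered around.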
Guest Blog Author: Dr Jennifer Pestana - Mathematics and Statistics Lecturer, University of Strathclyde
Linear systems involving Toeplitz matrices arise in many applications, including differential and integral equations, and signal and image processing (see, e.g., this article and the books by Ng, and by Chan and Jin). More recently, Toeplitz systems have appeared in discretisations of fractional diffusion problems. This is because fractional diffusion operators are non-local and lead to dense matrices; if these dense matrices are Toeplitz, it's possible to develop fast solvers.
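To make the "dense but fast" point concrete: a Toeplitz matrix is determined entirely by its first column and first row, so a matrix-vector product can be computed in O(n log n) by embedding the matrix in a circulant and using the FFT. A minimal NumPy/SciPy sketch, with an arbitrary random matrix standing in for a real discretisation:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x,
    in O(n log n), via embedding in a circulant of order 2n."""
    n = len(x)
    # First column of the embedding circulant: [c, 0, r_{n-1}, ..., r_1]
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against a dense product on a random 64 x 64 Toeplitz matrix
rng = np.random.default_rng(0)
c, r, x = rng.standard_normal((3, 64))
r[0] = c[0]                      # column and row must agree at entry (0, 0)
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```

Fast matrix-vector products like this are what make Krylov-type iterative methods attractive for dense Toeplitz systems: each iteration costs O(n log n) rather than the O(n^2) of a dense multiply.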
In a recent blog post for NAG, Mike Croucher showed that using specialized direct Toeplitz solvers, rather than a generic solver, can result in a massive speed-up. Here, we show that in addition to these tailored direct approaches, preconditioned iterative methods can be competitive for these problems. Perhaps surprisingly, this is true even when the Toeplitz matrix is dense.
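The two approaches being compared can be sketched on a small synthetic symmetric Toeplitz system: a specialized direct solve via SciPy's `solve_toeplitz`, and GMRES with a circulant (Strang-type) preconditioner applied through the FFT. The matrix below is an arbitrary stand-in with slowly decaying off-diagonals, not one of the fractional-diffusion matrices from the post:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

n = 400
k = np.arange(n)
# Synthetic symmetric Toeplitz matrix with slowly decaying off-diagonals;
# the diagonal is shifted so the system is comfortably positive definite
col = 1.0 / (1.0 + k) ** 1.5
col[0] = 3.0
b = np.ones(n)

# Specialized direct solver (Levinson-style recursion)
x_direct = solve_toeplitz(col, b)

# Strang-type circulant preconditioner: copy the central diagonals into a
# circulant, whose inverse is applied in O(n log n) via the FFT
c = np.where(k <= n // 2, col, col[(n - k) % n])
eigs = np.fft.fft(c)
apply_minv = lambda v: np.fft.ifft(np.fft.fft(np.asarray(v).ravel()) / eigs).real
M = LinearOperator((n, n), matvec=apply_minv)

T = toeplitz(col)        # dense here for clarity; in practice use an FFT matvec
x_iter, info = gmres(T, b, M=M)

assert info == 0                                 # converged
assert np.allclose(x_iter, x_direct, atol=1e-3)  # agrees with the direct solve
```

The preconditioner clusters the eigenvalues of the preconditioned system around 1, so the iteration count stays small as n grows; combined with an FFT-based matrix-vector product, the whole solve runs in O(n log n) per iteration, which is what makes the iterative route competitive even for dense Toeplitz matrices.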