NAGnews 172
First-order Active-set Method

A Highly Competitive Optimization Solver - Now in the NAG Library

New in the NAG Library at the latest Mark is a First-order Active-set Method (FOAS). FOAS is based on a nonlinear conjugate gradient method for large-scale bound-constrained nonlinear optimization. The solver is ideal for very large problems (tens of thousands of variables or more) where first-order derivatives are available or relatively cheap to estimate.
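To give a flavour of the underlying technique, the sketch below implements a basic Polak-Ribière nonlinear conjugate gradient iteration with an Armijo backtracking line search in Python, applied to the extended Rosenbrock function. It is purely illustrative: the function names and the line search are our own choices, it handles only the unconstrained case, and it is not the NAG implementation or its interface (e04kf adds, amongst other things, the active-set treatment of the bound constraints).

    import numpy as np

    def rosenbrock(x):
        """Extended Rosenbrock test objective."""
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

    def rosenbrock_grad(x):
        """Analytic first-order derivatives of the objective."""
        g = np.zeros_like(x)
        g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
        g[1:] += 200.0 * (x[1:] - x[:-1]**2)
        return g

    def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=10000):
        """Polak-Ribiere+ nonlinear conjugate gradient with Armijo backtracking."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                  # first direction: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g, np.inf) <= tol:
                break
            slope = g @ d
            if slope >= 0.0:                    # safeguard: fall back to steepest descent
                d, slope = -g, -(g @ g)
            # Backtracking line search enforcing the Armijo sufficient-decrease condition.
            alpha, fx = 1.0, f(x)
            while alpha > 1e-12 and f(x + alpha * d) > fx + 1e-4 * alpha * slope:
                alpha *= 0.5
            x_new = x + alpha * d
            g_new = grad(x_new)
            # Polak-Ribiere+ update; beta is reset to zero when it would turn negative.
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    x_opt = nonlinear_cg(rosenbrock, rosenbrock_grad, np.full(100, -1.0))
    print(rosenbrock(x_opt))    # should approach 0, attained at x = (1, ..., 1)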

A key design objective for the new FOAS solver was to provide a modern and attractive replacement for the existing NAG Library routine uncon_conjgrd_comp (e04dg). While e04dg targeted unconstrained NLPs, the new solver handle_solve_bounds_foas (e04kf) not only extends coverage to bound-constrained NLPs but also offers noticeable performance gains. Learn more here

First-order methods are not only ubiquitous; they have also proved able to cope with the ever-growing problem sizes imposed by industry. Notable applications include statistics, e.g. parameter calibration and nonlinear model regression, amongst many others. First-order methods, and the conjugate gradient method in particular, have been a subject of research for well over 50 years and continue to be improved.

FOAS Benchmarks

Figure 1 reports a benchmark using performance profiles over a set of CUTEst NLP problems for the solvers e04kf and e04dg. Contrasting the two plots, the new solver is more efficient both in time and in terms of user call-backs: it solves 45% of the problems faster (left plot) and requires fewer gradient evaluations on 60% of the problems (right plot). These results show the clear advantage of e04kf. Current users of uncon_conjgrd_comp (e04dg) are highly encouraged to upgrade.


Figure 1: Performance profiles comparing solvers e04kf and e04dg over 114 CUTEst unconstrained NLP problems. The performance measures are time (left) and number of gradient calls (right). In the left plot a higher line indicates a faster solver; in the right plot a higher line indicates fewer gradient calls.
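For readers less familiar with performance profiles (in the sense of Dolan and Moré): for each solver, the curve shows, as a function of the factor tau on the horizontal axis, the fraction of problems that the solver handled within tau times the best measure recorded by any solver on that problem, so the value at tau = 1 is the fraction of problems on which a solver was (joint) best. The sketch below shows how such a profile can be computed from raw measurements; the numbers in it are invented for illustration and are not the CUTEst results of Figure 1.

    import numpy as np

    def performance_profile(measures, taus):
        """measures: (n_problems, n_solvers) array of run times or gradient-call
        counts, with np.inf marking failures. Returns, per solver, the fraction
        of problems solved within a factor tau of the best solver, for each tau."""
        best = measures.min(axis=1, keepdims=True)   # best measure on each problem
        ratios = measures / best                     # performance ratios
        return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                         for s in range(measures.shape[1])])

    # Invented timings (seconds) for two solvers on four problems.
    times = np.array([[1.0, 2.0],
                      [3.0, 2.5],
                      [0.5, 0.7],
                      [np.inf, 4.0]])   # the first solver failed on the last problem
    print(performance_profile(times, taus=np.linspace(1.0, 5.0, 9)))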

What makes “Cost of Solution” worth talking about?

In the last NAGnews we announced the launch of the NAG Cloud HPC Migration Service. Following this, Branden Moore, HPC and Benchmarking Manager, has blogged about the ‘Cost of Solution’ concept. In the blog he examines the different aspects of the ‘Cost of Solution’ for cloud HPC, works through an example showing how best to use this metric to make informed choices for Cloud HPC, and shows how adding resources – in this case spending more per hour – can sometimes lead to overall cost savings. Read it here.
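The arithmetic behind that last point is worth spelling out: the cost of a solution is the hourly price multiplied by the hours needed, so a configuration that costs more per hour can still be cheaper overall if it shortens the run by a larger factor. The figures below are invented purely to illustrate the calculation and are not taken from the blog.

    # Hypothetical cloud configurations: (price per hour in USD, run time in hours).
    configs = {
        "8 nodes":  (16.0, 10.0),   # cheaper per hour, slower
        "16 nodes": (32.0,  4.0),   # twice the hourly spend, but more than twice as fast
    }
    for name, (price_per_hour, hours) in configs.items():
        print(f"{name}: cost of solution = {price_per_hour * hours:.0f} USD")
    # 8 nodes come to 160 USD; 16 nodes to 128 USD, so the faster, pricier
    # configuration gives the lower cost of solution.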


The NAG Cloud HPC Migration Service combines high performance computing experience with new cost-performance optimization approaches to help users adapt complex HPC workloads to the cloud. A registered partner of Microsoft Azure, AWS, and Google Cloud, NAG will support customers in achieving optimal cost-to-solution. 

Learn more about the service here, and if you have questions or would like to chat with the service team, do get in touch.

Algorithmic Differentiation Masterclass Series

The AD Masterclass series is in full swing, but there’s still a chance to catch up and register for the rest of this unique and hugely valuable learning opportunity. The series will deepen your AD knowledge and show you how to move beyond toy examples and PoCs.

Tech Report: Markov Chain Monte Carlo for Bayesian uncertainty quantification from time-series data

In Maybank et al., Markov Chain Monte Carlo for Bayesian uncertainty quantification from time-series data, to appear in volume 12143 of the Springer Lecture Notes in Computer Science series, NAG’s tool for algorithmic differentiation (AD) of numerical C++ programs, dco/c++, is compared with the similar functionality provided by the Stan Math Library. Both tools have been tuned specifically for adjoint AD of linear algebra kernels implemented with Eigen. They reach near-optimal performance, with slight advantages for dco/c++ on selected direct linear solvers. More importantly, this study illustrates the feasibility of combining dco/c++ with potentially highly optimized custom AD solutions.

Modern numerical simulations in C++ are complex hierarchies of type-generic special-purpose classes linked by less specialized code implementing the flows of data and control. Efficient and robust global adjoint sensitivity analysis requires state-of-the-art general-purpose AD tool support (dco/c++) with embedded special handling of suitable parts of the computation (e.g. linear algebra and probability by Stan). Of course, such a combination is only worthwhile if it complements the functionality of dco/c++ and/or improves the performance of its adjoint code. Ultimately, NAG strives for full integration of all relevant special-purpose AD solutions into dco/c++. At the same time, we aim for the highest possible degree of interoperability with third-party software.
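As a concrete illustration of what special handling of a linear algebra kernel means in adjoint AD, consider a linear solve x = A^(-1) b inside a larger computation: rather than differentiating through every step of the factorization, the adjoint of b can be obtained directly as A^(-T) applied to the adjoint of x, and the adjoint of A as minus the outer product of that vector with x. The NumPy sketch below demonstrates this rule and checks one entry against finite differences; it is our own illustration of the idea and uses neither the dco/c++ nor the Stan Math APIs.

    import numpy as np

    def solve_adjoint(A, b, x_bar):
        """Hand-coded adjoint of x = solve(A, b): given the adjoint x_bar of the
        output, return the adjoints of the inputs A and b."""
        x = np.linalg.solve(A, b)
        b_bar = np.linalg.solve(A.T, x_bar)   # adjoint of b: A^(-T) x_bar
        A_bar = -np.outer(b_bar, x)           # adjoint of A: -b_bar x^T
        return A_bar, b_bar

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)
    x_bar = rng.standard_normal(n)                    # adjoint seeding of the output

    A_bar, b_bar = solve_adjoint(A, b, x_bar)

    # Finite-difference check of the derivative of x_bar . x with respect to b[0];
    # it should agree with b_bar[0] to several digits.
    eps = 1e-6
    b_pert = b.copy()
    b_pert[0] += eps
    fd = (x_bar @ np.linalg.solve(A, b_pert) - x_bar @ np.linalg.solve(A, b)) / eps
    print(b_bar[0], fd)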

Download the report – code available on request.

My Student Placement Year at NAG

My name is Will Lee-Anglin and I’ve just spent my placement year in industry at NAG (2019/2020) whilst studying for an undergraduate degree in Computer Science and Mathematics at the University of Bath. The year at NAG has been a brilliant opportunity for me to experience various roles and practices in a software company.

I’ve worked as a Software Engineer in the Product Engineering team. Our team’s role was to build and test the NAG Library and related products across the many different systems, languages and implementations needed to meet customer needs. I was tasked with maintaining the Java wrappers for the NAG Library, which allowed me to make improvements to the build and testing processes as well as provide support for customers and fellow employees who were using the wrappers.