Mathematical Optimization, also known as Mathematical Programming, is a decision-making aid used on a grand scale across all industries. Advanced analytical techniques are used to find the best values of the inputs from a given set, specified by the physical limits of the problem and the user's restrictions. The quality of the result is measured by a user-supplied metric, a scalar function of the inputs. Optimization problems arise in a hugely diverse range of fields and industries: portfolio optimization and calibration in finance, structural optimization in engineering, data fitting in weather forecasting, parameter estimation in chemistry, and many more.
Whether the optimization problem is fitting data obtained from a particle accelerator or rebalancing an investment portfolio, the solvers used to deliver the results need to be robust, reliable and thoroughly tested. NAG optimization experts have developed and extensively tested a wide range of routines that provide fast and accurate solutions to optimization problems. NAG optimization solvers are highly flexible: they are callable from many programming languages, environments and mathematical packages, and fully documented to simplify their deployment in your application. By embedding NAG software, analysts and software engineers can devote more time to other areas of their work, improving productivity and time management.
There is often more than one way to formulate an optimization problem as a mathematical model, and each type of model requires a specific optimization solver. NAG offers a comprehensive collection of optimization solvers, so users have everything they need in one place. NAG solvers are backed by over five decades of experience in developing numerical software and are supported by collaborations with many leading academics and universities. They cover a wide set of problems and circumstances, so users do not feel limited by their model.
The main classes of optimization problems covered in the NAG Library are:
- Linear Programming (LP) – dense and sparse;
- Quadratic Programming (QP) – convex and nonconvex, dense and sparse;
- Second-order Cone Programming (SOCP) – covering many convex optimization problems, such as Quadratically Constrained Quadratic Programming (QCQP);
- Nonlinear Programming (NLP) – dense and sparse, based on active-set SQP methods and interior point method (IPM);
- Global Nonlinear Programming – algorithms based on branching, multistart and stochastic optimization;
- Mixed Integer Nonlinear Programming (MINLP) – for dense (possibly nonconvex) problems;
- Semidefinite Programming (SDP) – both linear matrix inequalities (LMI) and bilinear matrix inequalities (BMI);
- Derivative-free Optimization (DFO) – solvers for problems where derivatives cannot be easily computed and finite difference approximation is not suitable;
- Least Squares (LSQ), data fitting, calibration, regression – linear and nonlinear, constrained and unconstrained.
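To make the first of the problem classes above concrete, here is a minimal linear programming example. It uses SciPy's `linprog` purely as a generic illustration of how an LP is posed (objective vector, inequality constraints, variable bounds); it is not the NAG API, and the problem data are invented for demonstration.

```python
from scipy.optimize import linprog

# A tiny LP:  minimize  -x0 - 2*x1
#             subject to x0 +   x1 <= 4
#                        x0 + 3*x1 <= 6
#                        x0, x1 >= 0
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at (3, 1) with objective -5
```

Every class in the list follows the same pattern: an objective, constraints and bounds, handed to a solver matched to the problem's structure.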
For a full overview of the functionality offered in the NAG Library, please see the introductions of the following Chapters:
- NAG Library Chapter E04 – for convex and local optimization;
- NAG Library Chapter E05 – for global optimization;
- NAG Library Chapter H – for mixed integer programming and operational research problems.
The NAG Library documentation provides a further classification of optimization problems and discusses additional details to help you choose the right solver for your specific requirements, in particular data sparsity, smoothness and differentiability, and the key features of the various methods.
Convex optimization, particularly Second-order Cone Programming (SOCP) and Quadratically Constrained Quadratic Programming (QCQP), has seen a massive increase in interest thanks to its robustness and performance. A key issue is recognizing which models can be reformulated and solved this way. This webinar introduces the background of SOCP and QCQP, and reviews basic and more advanced modelling techniques. These techniques are demonstrated on real-world examples in Portfolio Optimization.
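One classic reformulation underlying SOCP modelling: a convex quadratic constraint x^T Q x ≤ b, with Q positive definite and Cholesky factorization Q = L L^T, is equivalent to the second-order cone constraint ‖L^T x‖ ≤ √b. The NumPy sketch below (illustrative only, random data) checks this identity numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive definite matrix Q and a test point x
# (invented data, purely to verify the identity numerically).
A = rng.standard_normal((3, 3))
Q = A @ A.T + 3.0 * np.eye(3)
x = rng.standard_normal(3)

L = np.linalg.cholesky(Q)           # Q = L @ L.T

quad = x @ Q @ x                    # quadratic form x^T Q x
soc = np.linalg.norm(L.T @ x) ** 2  # squared cone residual ||L^T x||^2

print(quad, soc)  # the two values coincide
```

Because the two quantities agree for every x, a quadratic constraint can be passed to an SOCP solver as a cone constraint, which is the mechanism that lets QCQP problems be handled by SOCP.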
In the previous post we discussed ways to provide derivatives and we focussed on a finite difference (FD) approximation. This time we address, in more detail, algorithms which neither require derivatives nor approximate them internally via finite differences. This class of optimization algorithms is usually referred to as Derivative-free Optimization (DFO).
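As a quick illustration of the derivative-free idea, the sketch below minimizes the Rosenbrock test function with the Nelder-Mead simplex method from SciPy, which uses only function values and never evaluates or approximates a gradient. This is a generic example, not a NAG solver:

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Minimize the Rosenbrock test function using only function values:
# Nelder-Mead maintains a simplex of points and needs no derivatives.
x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method='Nelder-Mead',
               options={'xatol': 1e-9, 'fatol': 1e-9, 'maxiter': 2000})
print(res.x)  # close to the known minimizer (1, 1)
```

Methods of this kind trade convergence speed for robustness when derivatives are unavailable, noisy, or expensive to approximate.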
Derivatives play an important role throughout nonlinear optimization, as the majority of algorithms require derivative information in one form or another. This post describes several ways to compute derivatives and focuses on the well-known finite difference approximation in detail.
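A minimal sketch of the two standard finite difference formulas, applied to sin(x) where the exact derivative cos(x) is known so the approximation error can be measured directly (illustrative code, not NAG routines):

```python
import numpy as np

def forward_diff(f, x, h=1e-7):
    # Forward difference: f'(x) ~ (f(x+h) - f(x)) / h, truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-5):
    # Central difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
exact = np.cos(x)  # exact derivative of sin
print(abs(forward_diff(np.sin, x) - exact))
print(abs(central_diff(np.sin, x) - exact))
```

Note the step size trade-off: too large a step incurs truncation error, too small a step amplifies floating-point round-off, which is why the choice of h matters in practice.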
Sometimes an “out of the box” solution does not fully exploit the potential of your real-world problem. For these situations NAG offers direct access to our optimization team through the Mathematical Optimization Consultancy service, so organizations benefit from our knowledge and experience. Your problem will be discussed, evaluated and analysed, and solutions offered to advance your application.
The Fixed Income and FX team at Första AP-fonden use solvers from the NAG Library for optimization and interpolation. Their system is highly dependent on the speed and accuracy of the NAG Library. For example, advanced yield curve modelling within the system uses a number of NAG functions; these yield curves are essential for the management of the Fixed Income portfolio. The NAG Library is also used for optimizing fixed-duration indices, another tool that the management of the Fixed Income portfolio relies on.