The NAG Library contains a comprehensive collection of optimization routines covering a diverse range of problems and circumstances.
The NAG Library has local minimization routines for:
- Linear, quadratic, integer and nonlinear programming problems
- Least squares problems
- Constrained and unconstrained minimization
(A maximization problem can usually be converted to a minimization problem by minimizing the negated objective function -F(x) instead.)
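As a minimal sketch of this negation trick (using SciPy's general-purpose minimizer rather than a NAG routine, and an invented objective F chosen for illustration):

```python
from scipy.optimize import minimize

# Hypothetical objective we wish to MAXIMIZE:
# F(x) = -(x - 3)^2 + 5, which peaks at x = 3 with F(3) = 5.
def F(x):
    return -(x[0] - 3.0) ** 2 + 5.0

# Minimize the negated objective -F(x) instead.
res = minimize(lambda x: -F(x), x0=[0.0])

print(res.x)     # approximately [3.0], the maximizer of F
print(-res.fun)  # approximately 5.0, the maximum value of F
```

The maximizer of F and the minimizer of -F coincide; only the sign of the optimal objective value needs to be flipped back.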
There are, however, other factors that should be taken into account. If the user can supply first and/or second derivative information for the objective or constraint functions, the solver can usually exploit it, so routines and facilities are provided to let the user pass this information in where it is known.
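To illustrate how supplying an analytic gradient works in practice (a sketch using SciPy's interface, not a NAG routine; the Rosenbrock function is a standard test problem):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function and its analytic gradient (first derivatives).
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosen_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

# Passing jac= lets the solver use exact first derivatives instead of
# approximating them by finite differences.
res = minimize(rosen, x0=[-1.2, 1.0], jac=rosen_grad, method="BFGS")
print(res.x)  # approximately [1.0, 1.0], the known minimizer
```

Exact derivatives typically reduce both the number of function evaluations and the risk of inaccurate finite-difference approximations.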
Most techniques rely upon 'smoothness' - the existence of derivatives, especially first derivatives - so highly discontinuous functions, or functions with many sharp discontinuities in the lower-order derivatives, can cause problems. For this reason NAG provides a Nelder and Mead simplex algorithm (not to be confused with the simplex method of linear programming) as a slow but reliable method for such objective functions.
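A small sketch of the idea, using SciPy's Nelder-Mead implementation rather than the NAG routine, on an objective that is not differentiable at its minimum:

```python
from scipy.optimize import minimize

# f(x, y) = |x| + |y| has its minimum at the origin, where it is not
# differentiable - gradient-based methods can struggle exactly there.
def f(x):
    return abs(x[0]) + abs(x[1])

# Nelder-Mead uses only function values (no derivatives), so the
# kink at the origin does not prevent convergence.
res = minimize(f, x0=[1.0, -2.0], method="Nelder-Mead")
print(res.x)  # close to [0, 0]
```

Because the simplex method never evaluates a derivative, it trades speed for robustness on non-smooth objectives, exactly as the text describes.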
Some problems can be very large, yet most of the coefficients that define them are zero. Such problems are termed sparse, and it would be wasteful and inefficient to store all of those zeros explicitly. In consequence we see families of sparse routines in the NAG optimization chapters, which store and operate on only the nonzero elements.
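The storage saving can be sketched with SciPy's compressed sparse row (CSR) format (an illustration of the general idea, not the NAG sparse storage scheme; the small matrix is invented):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 4x4 matrix, as might define sparse linear constraints; most
# entries are zero.
dense = np.array([
    [4.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [1.0, 0.0, 0.0, 5.0],
])

# CSR stores only the nonzero values plus two index arrays.
sparse = csr_matrix(dense)
print(sparse.nnz)  # 6 nonzeros stored, rather than all 16 entries

# Matrix-vector products touch only the stored nonzeros.
x = np.ones(4)
print(sparse @ x)  # [5. 3. 2. 6.]
```

For realistic problem sizes (matrices with millions of entries, of which only a small fraction are nonzero), this difference in storage and arithmetic is what makes the problem tractable at all.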
In addition, NAG has routines specifically for global optimization problems, though repeated use of an appropriate local minimizer from a variety of starting points can still be a powerful and effective way of addressing such problems.
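The multistart strategy can be sketched as follows (using SciPy's local minimizer and an invented objective with two local minima, of which only one is global):

```python
import numpy as np
from scipy.optimize import minimize

# f(x) = (x^2 - 1)^2 + 0.3x has local minima near x = +1 and x = -1;
# the 0.3x term makes the one near x = -1 the global minimum.
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]

# Run a local minimizer from several starting points and keep the
# best result found.
best = None
for x0 in np.linspace(-3.0, 3.0, 7):
    res = minimize(f, [x0], method="BFGS")
    if best is None or res.fun < best.fun:
        best = res

print(best.x)    # near -1.04, the global minimizer
print(best.fun)  # negative, unlike the local minimum near x = +1
```

A single local run started near x = +1 would stop at the inferior local minimum; sampling a variety of starting points is what recovers the global one.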