
NAG Toolbox Chapter Introduction

E05 — Global Optimization of a Function

Scope of the Chapter

Global optimization involves finding the absolute maximum or minimum value of a function (the objective function) of several variables, possibly subject to restrictions (defined by a set of bounds or constraint functions) on the values of the variables. Such problems can be much harder to solve than local optimization problems (which are discussed in Chapter E04) because it is difficult to determine whether a potential optimum found is global, and because of the nonlocal methods required to avoid becoming trapped near local optima. Most optimization functions in the NAG Toolbox are concerned with function minimization only, since the problem of maximizing a given objective function $F$ is equivalent to minimizing $-F$. In nag_glopt_bnd_mcs_solve (e05jb), nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb), you may specify whether you are solving a minimization or maximization problem; in the latter case, the required transformation of the objective function will be carried out automatically. In what follows we refer exclusively to minimization problems.
This introduction is a brief guide to the subject of global optimization, designed for the casual user. For further details you may find it beneficial to consult a more detailed text, such as Neumaier (2004). Furthermore, much of the material in the E04 Chapter Introduction is relevant in this context also. In particular, it is strongly recommended that you read Section [Scaling] in the E04 Chapter Introduction.

Background to the Problems

Problem Formulation

For the purposes of this Library, the global optimization problem is
$$\underset{x \in \mathbb{R}^n}{\mathrm{minimize}}\; F(x) \quad \text{subject to} \quad l_x \le x \le u_x \ \text{ and } \ l_c \le c(x) \le u_c, \tag{1}$$
where $F(x)$ (the objective function) is a real function; the vectors $l_x$ and $u_x$ are elements of $\bar{\mathbb{R}}^n$, where $\bar{\mathbb{R}}$ denotes the extended reals $\mathbb{R} \cup \{-\infty, \infty\}$; and where $c$ is a vector of $m$ constraint functions $c_1, \ldots, c_m$, with $l_c$ and $u_c$ defining the constraints on $c(x)$. If $m = 0$ the problem is said to be bound constrained. Relational operators between vectors are interpreted elementwise. The feasible region $\Phi$ is the set of all points (feasible points) that satisfy all of the constraints. A solution of (1) is a feasible point $\hat{x} \in \Phi$ satisfying
$$F(\hat{x}) = \min_{x \in \Phi} F(x).$$
A local minimum is a feasible point that minimizes $F$ only on some neighbourhood of itself. If a local minimum has the smallest objective value over all the local minima, then it is a global minimum.
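As a purely illustrative instance of (1) with $m = 0$, the following MATLAB-style sketch sets up the well-known six-hump camel objective on a bound box (all variable names here are chosen for this example only and do not correspond to any NAG interface). This function has several local minima, only two of which are global, so a local solver started in the wrong basin would not return a global minimum.

```matlab
% Bound-constrained instance of problem (1): the six-hump camel function.
F  = @(x) (4 - 2.1*x(1)^2 + x(1)^4/3)*x(1)^2 + x(1)*x(2) ...
          + (-4 + 4*x(2)^2)*x(2)^2;    % objective F(x), x in R^2
lx = [-2; -1];                         % lower bounds l_x
ux = [ 2;  1];                         % upper bounds u_x
% A global method must search the whole box [lx, ux] for the smallest value
% of F; the two global minima lie near (0.09, -0.71) and (-0.09, 0.71).
```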

Terminology

Complete Methods

A global optimization algorithm is called asymptotically complete if
(i) assuming indefinitely long run-time and exact computations, a global minimum will be found with certainty (probability one), but
(ii) the algorithm has no way of knowing when a global minimum has been found.
In comparison, a complete method satisfies (i) as well as
(ii(a)) the algorithm is able to recognize a global minimum (to prescribed tolerances) within a finite amount of time.
It is important to appreciate that, for finding a solution exactly, bounds on the amount of work may be very pessimistic. What complete methods guarantee is the absence of any deficiency that would prevent a global minimum from eventually being found. To achieve termination with certainty in a finite amount of time, the algorithm requires access to global information about the problem. In the case where only function values are available, as in nag_glopt_bnd_mcs_solve (e05jb), stopping criteria based on heuristics are present. This is because such a class of method can only terminate with certainty by performing an expensive dense search.
In contrast, incomplete methods have intuitive heuristics for searching but no guarantee of not becoming trapped near nonglobal local minima. Often, to make incomplete methods efficient, expert knowledge of the particular problem class to be solved is required. Examples of incomplete methods include Particle Swarm Optimization (PSO), Genetic Algorithms (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO) and Covariance Matrix Adaptation Evolutionary Strategies (CMA-ES). PSO has been implemented in the functions nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb). Such functions must also use heuristics to stop the algorithm, as again an expensive, dense search would be required to guarantee that no superior optima are present.
The heuristic nature of incomplete algorithms can make them very amenable to parallelization. This is the case for nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb), which use a heavily asynchronous implementation of the particle swarm heuristic so that a good solution can be reached efficiently in multithreaded implementations of the NAG Toolbox.

Branching

Most complete methods recursively split the original problem into smaller, more manageable subproblems. This technique is called branching. Branching is usually accompanied by a selection process that splits favourable branches more frequently than others. For example, with branch and bound methods, bounds on the objective function for each subproblem are computed in an attempt to eliminate those subregions where no improvement will occur.
Branching methods use a branching scheme to generate sequences of sub-boxes that eventually cover the feasible region. At least one function evaluation is made for every sub-box, and new sub-boxes are generated by splitting existing ones. Using appropriate splitting rules, convergence to zero of the diameters of sub-boxes is assured. For example, always splitting the oldest box along the oldest side, provided the children do not have too small a volume compared with the parent, guarantees convergence of the method, in the sense described in Neumaier (2004).
Efficiency can be enhanced by carefully balancing global and local searches. While the global part of the search splits sub-boxes with large unexplored territory, the local part usually entails splitting boxes with good function values. For example, the sub-box with the best function value should always be split. A method may also be improved by launching local searches from appropriate candidate local minima.
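The MATLAB-style sketch below illustrates the basic branching idea only; it is not the scheme used by any particular E05 function, and for brevity it applies just the 'local' selection rule of always splitting the sub-box whose basepoint has the best function value.

```matlab
% Illustrative branching loop: each entry of 'boxes' holds a sub-box and
% the objective value at its midpoint (used here as the basepoint).
F     = @(x) sum(x.^2) + cos(3*x(1));           % any objective function
lo0   = [-2; -2];  hi0 = [2; 2];                % the root box
boxes = {struct('lo', lo0, 'hi', hi0, 'f', F((lo0 + hi0)/2))};
for iter = 1:50
    [~, k] = min(cellfun(@(b) b.f, boxes));     % select the best sub-box ...
    b = boxes{k};  boxes(k) = [];
    [~, j] = max(b.hi - b.lo);                  % ... and split its longest side,
    mid = (b.lo(j) + b.hi(j))/2;                % so sub-box diameters shrink
    left  = b;  left.hi(j)  = mid;  left.f  = F((left.lo  + left.hi)/2);
    right = b;  right.lo(j) = mid;  right.f = F((right.lo + right.hi)/2);
    boxes(end+1:end+2) = {left, right};
end
[~, k] = min(cellfun(@(b) b.f, boxes));
xbest  = (boxes{k}.lo + boxes{k}.hi)/2;         % best basepoint found so far
```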

Methods of Global Optimization

Multi-level Coordinate Search (MCS)

The function nag_glopt_bnd_mcs_solve (e05jb) searches for a global minimizer using branching to recursively split the search space in a nonuniform manner. It divides, or splits, the root box of the search into smaller sub-boxes. Each sub-box contains a distinguished basepoint at which the objective function is sampled. We shall sometimes say ‘the function value of the (sub)box’ as shorthand for ‘the function value of the basepoint of the (sub)box’. The splitting procedure biases the search in favour of those sub-boxes where low function values are expected.
The global part of the algorithm entails splitting sub-boxes that enclose large unexplored territory, while the local part entails splitting sub-boxes that have good function values. A balance between the global and local parts is achieved using a multi-level approach, where every sub-box is assigned a level $s \in \{0, 1, \ldots, s_{\max}\}$. You can control the value of $s_{\max}$ using the optional parameter Splits Limit. Whenever a sub-box of intermediate level $0 < s < s_{\max}$ is split, each descendant is given a new level and the original sub-box's level is set to $0$: a sub-box with level $0$ has already been split, and a sub-box with level $s_{\max}$ will be split no further. This entire process is described in more detail in Section [Initialization and Sweeps] in (e05jb), where the initialization procedure used to produce an initial set of sub-boxes is outlined, and the method by which the algorithm sweeps through levels is discussed. Each sweep starts with the sub-boxes at the lowest level, a process forming the global part of the algorithm. At each level the sub-box with the best function value is selected for splitting; this forms the local part of the algorithm.
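To make the level-based bookkeeping concrete, here is a deliberately simplified, MATLAB-style sketch of how sweeps over levels might be organized; the actual initialization, splitting and level-assignment rules of nag_glopt_bnd_mcs_solve (e05jb) are more elaborate and are described in the sections referenced above.

```matlab
% Simplified multi-level sweep (illustrative only).
F     = @(x) sum(x.^2) + cos(3*x(1));             % any objective function
smax  = 6;                                        % cf. the optional parameter Splits Limit
boxes = struct('lo', [-2; -2], 'hi', [2; 2], 'level', 1, 'f', []);
boxes(1).f = F((boxes(1).lo + boxes(1).hi)/2);    % basepoint value of the root box
for sweep = 1:10
    for s = 1:smax-1                              % lowest levels first: the global part
        idx = find([boxes.level] == s);
        if isempty(idx), continue, end
        [~, k] = min([boxes(idx).f]);  k = idx(k);% best sub-box at this level: the local part
        b = boxes(k);
        [~, j] = max(b.hi - b.lo);                % split along a single coordinate
        mid = (b.lo(j) + b.hi(j))/2;
        c1 = b;  c1.hi(j) = mid;  c1.f = F((c1.lo + c1.hi)/2);
        c2 = b;  c2.lo(j) = mid;  c2.f = F((c2.lo + c2.hi)/2);
        c1.level = min(s + 1, smax);  c2.level = c1.level;   % one plausible child-level rule
        boxes(k).level = 0;                       % level 0 marks an already-split box
        boxes(end+1:end+2) = [c1, c2];
    end
end
```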
The process by which sub-boxes are split is explained in Section [Splitting] in (e05jb). It is a variant of the standard coordinate search method: the solver splits along a single coordinate at a time, at adaptively chosen points. In most cases one new function evaluation is needed to split a sub-box into two or three children. Each child is given a basepoint chosen to differ from the basepoint of the parent in at most one coordinate, and safeguards are present to ensure a degree of symmetry in the splits.
If you set the optional parameter Local Searches to be ‘OFF’, then the basepoints and function values of sub-boxes of maximum level smaxsmax are put into a ‘shopping basket’ of candidate minima. Turning Local Searches ‘ON’ (the default setting) will enable local searches to be started from these basepoints before they go into the shopping basket. The local search will go ahead providing the basepoint is not likely to be in the basin of attraction of a previously-found local minimum. The search itself uses a trust region approach, and is explained in Section [Local Search] in (e05jb): local quadratic models are built by a triple search, then a linesearch is made along the direction obtained by minimizing the quadratic on a region where it is a good approximation to the objective function. The quadratic need not be positive definite, so the general nonlinear optimizer nag_opt_nlp2_sparse_solve (e04vh) is used to minimize the model.

Particle Swarm Optimization

The functions nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb) search for a global optimum using a variant of the Particle Swarm Optimization (PSO) algorithm. PSO is an heuristic algorithm similar in its behaviour to GA, ACO, SA and others. A set of particles (the swarm) is generated in the search space, and advances at each iteration following an heuristic velocity based upon the best candidate found by an individual particle (cognitive memory), the best candidate found by all the particles (global memory) and inertia. The inertia is provided by a decreasingly weighted contribution from a particle's current velocity. This mix allows for a global search of the domain in question.
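The velocity heuristic can be stated compactly. The MATLAB-style sketch below shows the classic PSO update for a single particle; the variants implemented in nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb) include further controls and safeguards, and the variable names (x, v, pbest, gbest, lx, ux, assumed to be vectors of equal length) are illustrative only.

```matlab
w  = 0.7;                                  % inertia weight (typically decreased over iterations)
c1 = 1.5;  c2 = 1.5;                       % cognitive and global acceleration coefficients
v  = w*v ...
     + c1*rand(size(x)).*(pbest - x) ...   % pull towards the particle's own best (cognitive memory)
     + c2*rand(size(x)).*(gbest - x);      % pull towards the swarm's best (global memory)
x  = x + v;                                % advance the particle
x  = min(max(x, lx), ux);                  % respect the bound constraints
```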
The rate at which the swarm contracts and expands about potential optima is user controllable, allowing expert knowledge to be used when available. Furthermore, the algorithm may be coupled with a selection of local optimizers. These may be called during the iterations of the heuristic algorithm (the interior phase) to hasten the discovery of locally optimal points. They may also be called following the heuristic iterations (the exterior phase) to attempt to refine the final solution. Different options may be set for the local optimizer in each phase. For further details see Section [Algorithmic Details] in (e05sa) and (e05sb).
These functions are most effectively used when multiple cores are available for computation, since very many function evaluations are required for a typical problem. In implementations of the NAG Toolbox the algorithm has been parallelized to allow for high levels of asynchronicity between threads. This allows individual threads to continue searching without the requirement for all threads to have returned solutions, and leads to excellent parallel speedup.

Multiple Start

Function nag_glopt_nlp_multistart_sqp (e05uc) attempts to find the global minimum of an arbitrary smooth function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) by generating a number of different starting points and using the local minimizer (e04uc). Function nag_glopt_nlp_multistart_sqp_lsq (e05us) takes the same approach in attempting to find the global minimum of an arbitrary smooth sum of squares function using the local minimizer (e04us).
The more starting points chosen, the greater the degree of confidence that the user might have in the returned results. Facilities are provided to allow the user to specify the starting points and to provide for subsequent runs with different starting points as an additional means of gaining confidence in the results.
The user may also request that a number of solutions be returned, ordered by increasing value of the local optima. This may be useful if a local solution has a desirable property not exhibited by the best local optimum computed, the putative global optimum.
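A minimal MATLAB-style sketch of the multiple-start idea follows; localmin is a hypothetical placeholder for whatever local solver is available (it is not a NAG interface), and F, lx and ux are as in problem (1), with finite bounds assumed for generating the random start points.

```matlab
nstart = 20;  n = numel(lx);
xs = zeros(n, nstart);                     % local solutions
fs = zeros(1, nstart);                     % their objective values
for i = 1:nstart
    x0 = lx + rand(n, 1).*(ux - lx);       % random start point in the bound box
    [xs(:, i), fs(i)] = localmin(F, x0);   % run the local minimizer from x0
end
[fs, order] = sort(fs);                    % solutions ordered by objective value
xs = xs(:, order);                         % xs(:, 1) is the putative global minimum
```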

Recommendations on Choice and Use of Available Functions

The suite of multi-level coordinate search functions consists of the solver nag_glopt_bnd_mcs_solve (e05jb) together with its associated initialization and option-handling functions (listed under Service functions in the Functionality Index below).
nag_glopt_bnd_mcs_solve (e05jb) is based on the multi-level coordinate search method of Huyer and Neumaier (1999). It is an asymptotically complete method for bound constrained problems based on local information (function values) only, employing branching and local searches to accelerate convergence.
If the problem has nonlinear constraints and is sufficiently smooth then you are advised to consider a multiple start technique. nag_glopt_nlp_multistart_sqp (e05uc) and nag_glopt_nlp_multistart_sqp_lsq (e05us) are provided for this purpose.
The suite of particle swarm optimization (PSO) functions is to be considered experimental and is not recommended for production or mission-critical applications. These functions are recommended only as a last resort (should other methods fail) or for comparative purposes.
The suite consists of the solver functions nag_glopt_bnd_pso (e05sa) and nag_glopt_nlp_pso (e05sb), both of which use the functions nag_glopt_optset (e05zk) and nag_glopt_optget (e05zl) for initialization and option setting. These functions predominantly use function values only, although derivatives can be provided for coupled local minimization functions. They are designed primarily for use with multithreaded implementations of the NAG Toolbox (although they may also be used in serial implementations). In such implementations, a minimal knowledge of OpenMP parallel programming is required, specifically the use of basic OpenMP commands and operators such as OMP_GET_THREAD_NUM and CRITICAL sections to ensure the thread safety of provided callback routines. Additional example programs are provided to demonstrate how this may be done (see Section [Example] in (e05sa) and (e05sb)).
nag_glopt_bnd_pso (e05sa) is a simplified version of nag_glopt_nlp_pso (e05sb) with less functionality. In particular, nag_glopt_bnd_pso (e05sa) does not support general constraint handling whereas nag_glopt_nlp_pso (e05sb) does support general nonlinear, non-equality constraints.
If the objective function is smooth and the problem has only simple bound constraints then both algorithms are applicable. For low-dimensional problems (up to 20) nag_glopt_bnd_mcs_solve (e05jb) is preferred. With increasing dimension the multi-start methods may be better, especially when more threads are used (multiple threads are available only in multithreaded implementations of the NAG Toolbox).
The particle swarm methods are potentially useful when there is no smoothness in the objective function (e.g., due to noise) and, for the simple-bound constrained problem, nag_glopt_bnd_pso (e05sa) may be appropriate.
Currently there is no routine in this chapter using a complete method that can handle constraints that are not bound constraints.

Functionality Index

Global optimization, function of several real variables, general constraints, 
    multi-start nag_glopt_nlp_multistart_sqp (e05uc)
    using function values predominantly, and optional derivative information, PSO nag_glopt_nlp_pso (e05sb)
Global optimization, function of several real variables, sum of squares, general constraints, 
    multi-start nag_glopt_nlp_multistart_sqp_lsq (e05us)
Global optimum, function of several variables, bound constraints, 
    using function values only nag_glopt_bnd_mcs_solve (e05jb)
    using function values predominantly, and optional derivative information, PSO nag_glopt_bnd_pso (e05sa)
Service functions, 
    check whether optional parameter has been set for nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_option_check (e05jh)
    initialization function for nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_init (e05ja)
    optional parameter getting function for use with nag_glopt_bnd_pso (e05sa), nag_glopt_nlp_pso (e05sb), nag_glopt_nlp_multistart_sqp (e05uc) and nag_glopt_nlp_multistart_sqp_lsq (e05us) nag_glopt_optget (e05zl)
    optional parameter setting function for use with nag_glopt_bnd_pso (e05sa), nag_glopt_nlp_pso (e05sb), nag_glopt_nlp_multistart_sqp (e05uc) and nag_glopt_nlp_multistart_sqp_lsq (e05us) nag_glopt_optset (e05zk)
    retrieve integer optional parameter values used by nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optget_int (e05jk)
    retrieve real optional parameter values used by nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optget_real (e05jl)
    retrieve value of ‘ON’/‘OFF’-valued character optional parameter used by nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optget_char (e05jj)
    supply integer optional parameter values to nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optset_int (e05jf)
    supply ‘ON’/‘OFF’-valued character optional parameter values to nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optset_char (e05je)
    supply optional parameter values from character string to nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optset_string (e05jd)
    supply real optional parameter values to nag_glopt_bnd_mcs_solve (e05jb) nag_glopt_bnd_mcs_optset_real (e05jg)

References

Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Huyer W and Neumaier A (1999) Global optimization by multi-level coordinate search Journal of Global Optimization 14 331–355
Kennedy J and Eberhart R C (1995) Particle Swarm Optimization Proceedings of the 1995 IEEE International Conference on Neural Networks 1942–1948
Koh B, George A D, Haftka R T and Fregly B J (2006) Parallel Asynchronous Particle Swarm Optimization International Journal for Numerical Methods in Engineering 67(4) 578–595
Neumaier A (2004) Complete search in constrained global optimization Acta Numerica 13 271–369
Vaz A I and Vicente L N (2007) A Particle Swarm Pattern Search Method for Bound Constrained Global Optimization Journal of Global Optimization 39(2) 197–219 Kluwer Academic Publishers


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013