naginterfaces.library.glopt.bnd_pso

naginterfaces.library.glopt.bnd_pso(bl, bu, objfun, comm, npar=None, monmod=None, data=None, io_manager=None, spiked_sorder='C')[source]

bnd_pso is designed to search for the global minimum or maximum of an arbitrary function, using Particle Swarm Optimization (PSO). Derivatives are not required, although these may be used by an accompanying local minimization function if desired. bnd_pso is essentially identical to nlp_pso(), but with a simpler interface and with various options removed; otherwise most arguments are identical. In particular, bnd_pso does not handle general constraints.

Note: this function uses optional algorithmic parameters, see also: optset().

For full information please refer to the NAG Library document for e05sa

https://www.nag.com/numeric/nl/nagdoc_29.3/flhtml/e05/e05saf.html
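For orientation, here is a minimal usage sketch. The quadratic objective, the bounds and the particle count are illustrative assumptions only; the calling sequence follows the signature above, and the 'Initialize = bnd_pso' and 'Repeatability = ON' option strings passed to glopt.optset() are assumed to follow the usual NAG option-string conventions. Note that bnd_pso may emit a NagAlgorithmicWarning when it stops on a criterion that cannot guarantee success.

    import numpy as np
    from naginterfaces.library import glopt

    def objfun(mode, x, objf, vecout, nstate, data=None):
        # Illustrative objective: a shifted sphere function.
        # Gradients (vecout) are left unaltered, which is acceptable when no
        # gradient-based 'Local Minimizer' has been selected.
        objf = float(np.sum((np.asarray(x) - 0.5)**2))
        return objf, vecout

    bl = [-5.0]*4   # lower bounds (illustrative)
    bu = [5.0]*4    # upper bounds (illustrative)

    comm = {}
    glopt.optset('Initialize = bnd_pso', comm)   # initialize the option arrays
    glopt.optset('Repeatability = ON', comm)     # reproducible runs (optional)

    xb, fb, itt, inform = glopt.bnd_pso(bl, bu, objfun, comm, npar=20)
    print('best point:', xb, 'objective:', fb, 'inform:', inform)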

Parameters
bl : float, array-like, shape (ndim)

The array of lower bounds.

bu : float, array-like, shape (ndim)

The array of upper bounds.

objfun : callable (objf, vecout) = objfun(mode, x, objf, vecout, nstate, data=None)

objfun must, depending on the value of mode, calculate the objective function and/or calculate the gradient of the objective function for an ndim-variable vector x.

Gradients are only required if a local minimizer has been chosen which requires gradients.

See the option ‘Local Minimizer’ for more information.

Parameters
mode : int

Indicates which functionality is required.

The objective value should be returned in objf. The value of objf on entry may be used as an upper bound for the calculation. Any expected objective value that is greater than objf may be approximated by this upper bound; that is, objf can remain unaltered.

only

First derivatives can be evaluated and returned in vecout. Any unaltered elements of vecout will be approximated using finite differences.

only

The objective must be calculated and returned in objf, and available first derivatives can be evaluated and returned in vecout. Any unaltered elements of vecout will be approximated using finite differences.

The objective must be calculated and returned in objf. The value of objf on entry may not be used as an upper bound.

or only

All first derivatives must be evaluated and returned in vecout.

or only

The objective must be calculated and returned in objf, and all first derivatives must be evaluated and returned in vecout.

x : float, ndarray, shape (ndim)

x, the point at which the objective function and/or its gradient are to be evaluated.

objf : float

The value of objf passed to objfun varies with the argument mode.

objf is an upper bound for the value of the objective, often equal to the best objective value found so far by a given particle. Only objective function values less than the value of objf on entry will be used further. As such, this upper bound may be used to stop further evaluation when it is clear that the objective value will exceed it.

, , , or

objf is meaningless on entry.

vecout : float, ndarray, shape (ndim)

Where applicable, the values of vecout are used internally to indicate whether a finite difference approximation is required. See opt.nlp1_solve.

nstate : int

nstate indicates various stages of initialization throughout the function. This allows permanent global arguments to be initialized as few times as possible. For example, you may wish to initialize a random number generator seed only once.

objfun is called for the very first time. You may save computational time if certain data must be read or calculated only once.

objfun is called for the first time by a NAG local minimization function. You may save computational time if certain data required for the local minimizer need only be calculated at the initial point of the local minimization.

Used in all other cases.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
objf : float

The value of objf returned varies with the argument mode.

objf must be the value of the objective at x. Only objective values strictly less than the value of objf on entry need be accurate.

or

Need not be set.

, or

The objective must be calculated and returned in objf. The entry value of objf may not be used as an upper bound.

vecout : float, array-like, shape (ndim)

The required values of vecout returned to the calling function depend on the value of mode.

or

The value of vecout need not be set.

or

vecout can contain components of the gradient of the objective function, or acceptable approximations, for some of the variables. Any unaltered elements of vecout will be approximated using finite differences.

or

vecout must contain the gradient of the objective function with respect to all the variables. Approximation of the gradient is strongly discouraged, and no finite difference approximations will be performed internally (see opt.uncon_conjgrd_comp and opt.bounds_mod_deriv_easy).
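As a sketch of a gradient-aware objfun (useful if a derivative-based ‘Local Minimizer’ is selected), the two-variable Rosenbrock function below is an illustrative assumption. For simplicity it fills objf and vecout on every call rather than branching on mode; consult the mode descriptions above (and the full e05sa document) if you wish to compute only the quantities actually requested.

    import numpy as np

    def objfun(mode, x, objf, vecout, nstate, data=None):
        # Illustrative two-variable Rosenbrock objective and its gradient.
        # nstate can be used to perform one-off set-up on the very first call.
        x = np.asarray(x)
        objf = 100.0*(x[1] - x[0]**2)**2 + (1.0 - x[0])**2
        vecout[0] = -400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0])
        vecout[1] = 200.0*(x[1] - x[0]**2)
        return objf, vecout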

comm : dict, communication object, modified in place

Communication structure.

This argument must have been initialized by a prior call to optset().

npar : None or int, optional

Note: if this argument is None then a default value will be used.

npar, the number of particles to be used in the swarm. Assuming all particles remain within bounds, each complete iteration will perform at least npar function evaluations. Otherwise, significantly fewer objective function evaluations may be performed.

monmod : None or callable x = monmod(x, xb, fb, xbest, fbest, itt, data=None), optional

Note: if this argument is None then a NAG-supplied facility will be used.

A user-specified monitoring and modification function. monmod is called once every complete iteration, after a finalization check.

It may be used to modify the particle locations that will be evaluated at the next iteration.

This permits the incorporation of algorithmic modifications such as including additional advection heuristics and genetic mutations. monmod is only called during the main loop of the algorithm, and as such will be unaware of any further improvement from the final local minimization.

If no monitoring and/or modification is required, monmod may be None.

Parameters
x : float, ndarray, shape (ndim, npar)

Note: the jth component of the ith particle is stored in x[j-1,i-1].

The particle locations, which will currently be used during the next iteration unless altered in monmod.

xb : float, ndarray, shape (ndim)

The location of the best solution yet found.

fb : float

The objective value of the best solution yet found.

xbest : float, ndarray, shape (ndim, npar)

Note: the jth component of the ith particle’s cognitive memory is stored in xbest[j-1,i-1].

The locations currently in the cognitive memory, one for each particle (see Algorithmic Details).

fbest : float, ndarray, shape (npar)

The objective values currently in the cognitive memory, one for each particle.

itt : int, ndarray, shape (6)

Iteration and function evaluation counters (see the description of itt below).

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
x : float, array-like, shape (ndim, npar)

The particle locations to be used during the next iteration.

data : arbitrary, optional

User-communication data for callback functions.
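A minimal monmod sketch, for illustration only: it prints brief progress information and returns the particle locations unchanged. The indexing of itt and the (ndim, npar) layout of x follow the descriptions above.

    def monmod(x, xb, fb, xbest, fbest, itt, data=None):
        # itt[0] is the number of complete iterations (see itt above);
        # fb is the best objective value found so far.
        print('iteration', itt[0], ': best objective so far =', fb)
        # x may be modified here to inject problem-specific heuristics,
        # e.g. nudging all particles toward the best known point:
        # x = x + 0.01*(xb.reshape(-1, 1) - x)
        return x

It would then be passed to bnd_pso via the monmod keyword argument.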

io_manager : FileObjManager, optional

Manager for I/O in this routine.

spiked_sorder : str, optional

If x in monmod is spiked (i.e., has unit extent in all but one dimension, or has size 1), selects the storage order to associate with it in the NAG Engine:

spiked_sorder = 'C'

row-major storage will be used;

spiked_sorder = 'F'

column-major storage will be used.

Returns
xb : float, ndarray, shape (ndim)

The location of the best solution found.

fb : float

The objective value of the best solution.

itt : int, ndarray, shape (6)

Integer iteration counters for bnd_pso.

Number of complete iterations.

Number of complete iterations without improvement to the current optimum.

Number of particles converged to the current optimum.

Number of improvements to the optimum.

Number of function evaluations performed.

Number of particles reset.

inform : int

Indicates which finalization criterion was reached. The possible values of inform are:

inform

Meaning

< 0

Exit from a user-supplied function.

0

bnd_pso has detected an error and terminated.

1

The provided objective target has been achieved. (‘Target Objective Value’).

2

The standard deviation of the location of all the particles is below the set threshold (‘Swarm Standard Deviation’). If the solution returned is not satisfactory, you may try setting a smaller value of ‘Swarm Standard Deviation’, or try adjusting the options governing the repulsive phase (‘Repulsion Initialize’, ‘Repulsion Finalize’).

3

The total number of particles converged (‘Maximum Particles Converged’) to the current global optimum has reached the set limit. This is the number of particles which have moved to a distance less than ‘Distance Tolerance’ from the optimum with regard to the norm. If the solution is not satisfactory, you may consider lowering the ‘Distance Tolerance’. However, this may hinder the global search capability of the algorithm.

4

The maximum number of iterations without improvement (‘Maximum Iterations Static’) has been reached, and the required number of particles (‘Maximum Iterations Static Particles’) have converged to the current optimum. Increasing either of these options will allow the algorithm to continue searching for longer. Alternatively, if the solution is not satisfactory, re-starting the application several times with ‘Repeatability’ = OFF may lead to an improved solution.

5

The maximum number of iterations (‘Maximum Iterations Completed’) has been reached. If the number of iterations since improvement is small, then a better solution may be found by increasing this limit, or by using the option ‘Local Minimizer’ with corresponding exterior options. Otherwise, if the solution is not satisfactory, you may try re-running the application several times with ‘Repeatability’ = OFF and a lower iteration limit, or adjusting the options governing the repulsive phase (‘Repulsion Initialize’, ‘Repulsion Finalize’).

6

The maximum allowed number of function evaluations (‘Maximum Function Evaluations’) has been reached. As above, increasing this limit if the number of iterations without improvement is small, or decreasing this limit and running the algorithm multiple times with ‘Repeatability’ = OFF, may provide a superior result.
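The returned inform value can be mapped back onto the criteria listed above. The sketch below is illustrative, with the meanings paraphrased from the table; treating negative values as an exit requested from a user-supplied function is an assumption consistent with the first table entry.

    _INFORM_MEANING = {
        0: 'bnd_pso detected an error and terminated',
        1: 'target objective value achieved',
        2: 'swarm standard deviation below the set threshold',
        3: 'maximum number of particles converged to the current optimum',
        4: 'maximum static iterations reached with enough converged particles',
        5: 'maximum number of complete iterations reached',
        6: 'maximum number of function evaluations reached',
    }

    def describe_exit(inform):
        # Negative values: exit requested from a user-supplied function.
        if inform < 0:
            return 'exit requested from a user-supplied callback'
        return _INFORM_MEANING.get(inform, 'unrecognized finalization code')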

Other Parameters
‘Advance Cognitive’float

Default

The cognitive advance coefficient. When larger than the global advance coefficient, this will cause particles to be attracted toward their previous best positions. Setting this coefficient to zero will cause bnd_pso to act predominantly as a local optimizer. Excessively large values may cause the swarm to diverge, and are generally inadvisable. At least one of the global and cognitive coefficients must be nonzero.

‘Advance Global’float

Default

The global advance coefficient. When larger than the cognitive coefficient, this will encourage convergence toward the best solution yet found. Small values will inhibit particles overshooting the optimum. Moderate values cause particles to fly over the optimum some of the time. Larger values can prohibit convergence. Setting this coefficient to zero will remove any attraction to the current optimum, effectively generating a Monte Carlo multi-start optimization algorithm. At least one of the global and cognitive coefficients must be nonzero.

‘Boundary’str

Default

Determines the behaviour if particles leave the domain described by the box bounds. This only affects the general PSO algorithm, and will not pass down to any NAG local minimizers chosen.

This option is only effective in those dimensions in which the lower and upper bounds differ.

IGNORE

The box bounds are ignored. The objective function is still evaluated at the new particle position.

RESET

The particle is re-initialized inside the domain. Its cognitive memory is not affected.

FLOATING

The particle position remains the same, however the objective function will not be evaluated at the next iteration. The particle will probably be advected back into the domain at the next advance due to attraction by the cognitive and global memory.

HYPERSPHERICAL

The box bounds are wrapped around an ndim-dimensional hypersphere. As such, a particle leaving through a lower bound will immediately re-enter through the corresponding upper bound and vice versa. The standard distance between particles is also modified accordingly.

FIXED

The particle rests on the boundary, with the velocity in the corresponding dimension set to zero.
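Options such as ‘Boundary’ are set through glopt.optset() using the keyword values listed above. The snippet below is a sketch, assuming the usual 'Name = value' option-string format and a comm structure previously initialized for bnd_pso.

    # Assumes comm has been initialized via glopt.optset('Initialize = bnd_pso', comm).
    glopt.optset('Boundary = FLOATING', comm)        # out-of-bounds particles are simply not evaluated
    # or, for wrap-around behaviour:
    # glopt.optset('Boundary = HYPERSPHERICAL', comm)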

‘Distance Scaling’str

Default

Determines whether distances should be scaled by box widths.

ON

When a distance is calculated between two points, a norm scaled by the box widths is used.

OFF

Distances are calculated as the standard norm without any rescaling.

‘Distance Tolerance’float

Default

This is the distance between a particle and the global optimum which must be reached for the particle to be considered converged, i.e., such that any subsequent movement of that particle cannot significantly alter the global optimum. Once converged, the particle is reset into the box bounds to continue searching.

‘Function Precision’float

Default

This argument defines ϵ_r, which is intended to be a measure of the accuracy with which the problem function can be computed. If the value supplied is out of range, the default value is used.

The value of ϵ_r should reflect the relative precision of the objective; i.e., ϵ_r acts as a relative precision when the objective is large, and as an absolute precision when the objective is small. For example, if the objective is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ϵ_r would be 1.0e-6. In contrast, if the objective is typically of order 1.0e-4 and the first six significant digits are known to be correct, an appropriate value for ϵ_r would be 1.0e-10. The choice of ϵ_r can be quite complicated for badly scaled problems; see Module 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ϵ_r should be large enough so that no attempt will be made to distinguish between function values that differ by less than the error inherent in the calculation.

‘Local Boundary Restriction’float

Default

Contracts the box boundaries used by a box-constrained local minimizer to a proportionally smaller box containing the start point of the local minimization.

Smaller values of this option thereby restrict the size of the domain exposed to the local minimizer, possibly reducing the amount of work done by the local minimizer.

‘Local Interior Iterations’int

The maximum number of iterations or function evaluations the chosen local minimizer will perform inside (outside) the main loop if applicable. For the NAG minimizers these correspond to:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.uncon_simplex

opt.uncon_conjgrd_comp

‘Iteration Limit’

opt.nlp1_solve

‘Major Iteration Limit’

Unless set, these are functions of the arguments passed to bnd_pso.

Setting the value to 0 will disable the local minimizer in the corresponding algorithmic region. For example, setting ‘Local Interior Iterations’ to 0 and ‘Local Exterior Iterations’ to a positive value will cause the algorithm to perform no local minimizations inside the main loop of the algorithm, and a local minimization with up to that many iterations after the main loop has been exited.

Note: currently opt.bounds_quasi_func_easy and opt.bounds_mod_deriv_easy are restricted to using their own fixed function evaluation limits. This applies to both local minimizations inside and outside the main loop. They may still be deactivated in either phase by setting the corresponding option to 0, and may subsequently be reactivated by setting it to a positive value.

‘Local Interior Major Iterations’int

The maximum number of iterations or function evaluations the chosen local minimizer will perform inside (outside) the main loop if applicable. For the NAG minimizers these correspond to:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.uncon_simplex

opt.uncon_conjgrd_comp

‘Iteration Limit’

opt.nlp1_solve

‘Major Iteration Limit’

Unless set, these are functions of the arguments passed to bnd_pso.

Setting the value to 0 will disable the local minimizer in the corresponding algorithmic region. For example, setting ‘Local Interior Iterations’ to 0 and ‘Local Exterior Iterations’ to a positive value will cause the algorithm to perform no local minimizations inside the main loop of the algorithm, and a local minimization with up to that many iterations after the main loop has been exited.

Note: currently opt.bounds_quasi_func_easy and opt.bounds_mod_deriv_easy are restricted to using their own fixed function evaluation limits. This applies to both local minimizations inside and outside the main loop. They may still be deactivated in either phase by setting the corresponding option to 0, and may subsequently be reactivated by setting it to a positive value.

‘Local Exterior Iterations’int

The maximum number of iterations or function evaluations the chosen local minimizer will perform inside (outside) the main loop if applicable. For the NAG minimizers these correspond to:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.uncon_simplex

opt.uncon_conjgrd_comp

‘Iteration Limit’

opt.nlp1_solve

‘Major Iteration Limit’

Unless set, these are functions of the arguments passed to bnd_pso.

Setting the value to 0 will disable the local minimizer in the corresponding algorithmic region. For example, setting ‘Local Interior Iterations’ to 0 and ‘Local Exterior Iterations’ to a positive value will cause the algorithm to perform no local minimizations inside the main loop of the algorithm, and a local minimization with up to that many iterations after the main loop has been exited.

Note: currently opt.bounds_quasi_func_easy and opt.bounds_mod_deriv_easy are restricted to using their own fixed function evaluation limits. This applies to both local minimizations inside and outside the main loop. They may still be deactivated in either phase by setting the corresponding option to 0, and may subsequently be reactivated by setting it to a positive value.

‘Local Exterior Major Iterations’int

The maximum number of iterations or function evaluations the chosen local minimizer will perform inside (outside) the main loop if applicable. For the NAG minimizers these correspond to:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.uncon_simplex

opt.uncon_conjgrd_comp

‘Iteration Limit’

opt.nlp1_solve

‘Major Iteration Limit’

Unless set, these are functions of the arguments passed to bnd_pso.

Setting the value to 0 will disable the local minimizer in the corresponding algorithmic region. For example, setting ‘Local Interior Iterations’ to 0 and ‘Local Exterior Iterations’ to a positive value will cause the algorithm to perform no local minimizations inside the main loop of the algorithm, and a local minimization with up to that many iterations after the main loop has been exited.

Note: currently opt.bounds_quasi_func_easy and opt.bounds_mod_deriv_easy are restricted to using their own fixed function evaluation limits. This applies to both local minimizations inside and outside the main loop. They may still be deactivated in either phase by setting the corresponding option to 0, and may subsequently be reactivated by setting it to a positive value.

‘Local Interior Tolerance’float

Default

This is the tolerance provided to a local minimizer in the interior (exterior) of the main loop of the algorithm.

‘Local Exterior Tolerance’float

Default

This is the tolerance provided to a local minimizer in the interior (exterior) of the main loop of the algorithm.

‘Local Interior Minor Iterations’int

Where applicable, the secondary number of iterations the chosen local minimizer will use inside (outside) the main loop. Currently the relevant default values are:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.nlp1_solve

‘Minor Iteration Limit’

‘Local Exterior Minor Iterations’int

Where applicable, the secondary number of iterations the chosen local minimizer will use inside (outside) the main loop. Currently the relevant default values are:

Minimizer

Argument/option

Default Interior

Default Exterior

opt.nlp1_solve

‘Minor Iteration Limit’

‘Local Minimizer’str

Default

Allows for a choice of submodule opt functions to be used as a coupled, dedicated local minimizer.

No local minimization will be performed in either the INTERIOR or EXTERIOR sections of the algorithm.

Use opt.uncon_simplex as the local minimizer. This does not require the calculation of derivatives.

On a call to objfun during such a local minimization, only the objective value will be requested.

Use opt.bounds_mod_deriv_easy as the local minimizer. This requires the calculation of derivatives in objfun, as indicated by mode.

The box bounds forwarded to this function from bnd_pso will have been acted upon by ‘Local Boundary Restriction’. As such, the domain exposed may be greatly smaller than that provided to bnd_pso.

Accurate derivatives must be provided to this function, and will not be approximated internally. Each iteration of this local minimizer also requires the calculation of both the objective function and its derivatives. Hence, on a call to objfun during such a local minimization, both the objective and its derivatives will be requested.

Use opt.bounds_quasi_func_easy as the local minimizer. This does not require the calculation of derivatives.

On a call to objfun during such a local minimization, only the objective value will be requested.

The box bounds forwarded to this function from bnd_pso will have been acted upon by ‘Local Boundary Restriction’. As such, the domain exposed may be greatly smaller than that provided to bnd_pso.

Use opt.uncon_conjgrd_comp as the local minimizer.

Accurate derivatives must be provided, and will not be approximated internally. Additionally, each call to objfun during a local minimization will require either the objective to be evaluated alone, or both the objective and its gradient to be evaluated; objfun must handle both cases.

Use opt.nlp1_solve as the local minimizer. This operates such that any derivatives of the objective function that you cannot supply will be approximated internally using finite differences.

Either the objective, the objective gradient, or both may be requested during a local minimization, and objfun must handle each of these cases.

The box bounds forwarded to this function from bnd_pso will have been acted upon by ‘Local Boundary Restriction’. As such, the domain exposed may be greatly smaller than that provided to bnd_pso.

‘Maximum Function Evaluations’int

Default

The maximum number of evaluations of the objective function. When reached, this will return errno = 1 and inform = 6.

‘Maximum Iterations Completed’int

Default

The maximum number of complete iterations that may be performed. Once exceeded, bnd_pso will exit with errno = 1 and inform = 5.

Unless set, this adapts to the parameters passed to bnd_pso.

‘Maximum Iterations Static’int

Default

The maximum number of iterations without any improvement to the current global optimum. If exceeded, bnd_pso will exit with errno = 1 and inform = 4. This exit will be hindered by setting ‘Maximum Iterations Static Particles’ to larger values.

‘Maximum Iterations Static Particles’int

Default

The minimum number of particles that must have converged to the current optimum before the function may exit due to ‘Maximum Iterations Static’ with errno = 1 and inform = 4.

‘Maximum Particles Converged’int

Default

The maximum number of particles that may converge to the current optimum. When achieved, bnd_pso will exit with errno = 1 and inform = 3. This exit will be hindered by setting ‘Repulsion’ options, as these cause the swarm to re-expand.

‘Maximum Particles Reset’int

Default

The maximum number of particles that may be reset after converging to the current optimum. Once achieved no further particles will be reset, and any particles within ‘Distance Tolerance’ of the global optimum will continue to evolve as normal.

‘Maximum Variable Velocity’float

Default

Along any dimension, the absolute velocity is bounded above by this maximum velocity, scaled by the box width in that dimension. Very low values will greatly increase convergence time. There is no upper limit, although larger values will allow more particles to be advected out of the box bounds, and excessively large values may cause significant and potentially unrecoverable swarm divergence.

‘Optimize’str

Default

Determines whether to maximize or minimize the objective function.

MINIMIZE

The objective function will be minimized.

MAXIMIZE

The objective function will be maximized. This is accomplished by minimizing the negative of the objective.

‘Repeatability’str

Default

Allows for the same random number generator seed to be used for every call to bnd_pso.

OFF

The internal generation of random numbers will be nonrepeatable.

ON

The same seed will be used.

‘Repulsion Finalize’int

Default

The number of iterations performed in a repulsive phase before re-contraction. This allows a re-diversified swarm to contract back toward the current optimum, allowing for a finer search of the near optimum space.

‘Repulsion Initialize’int

Default

The number of iterations without any improvement to the global optimum before the algorithm begins a repulsive phase. This phase allows the particle swarm to re-expand away from the current optimum, allowing more of the domain to be investigated. The repulsive phase is automatically ended if a superior optimum is found.

‘Repulsion Particles’int

Default

The number of particles required to have converged to the current optimum before any repulsive phase may be initialized. This will prevent repulsion before a satisfactory search of the near optimum area has been performed, which may happen for large dimensional problems.

‘Seed’int

Default

Sets the random number generator seed to be used when ‘Repeatability’ = ON. If set to 0, the default seed will be used. Otherwise, the absolute value of ‘Seed’ will be used to generate the random number generator seed.

‘Swarm Standard Deviation’float

Default

The target standard deviation of the particle distances from the current optimum. Once the standard deviation falls below this level, bnd_pso will exit with errno = 1 and inform = 2. This criterion will be penalized by the use of ‘Repulsion’ options, as these cause the swarm to re-expand, increasing the standard deviation of the particle distances from the best point.

‘Target Objective’str

Default

Activates or deactivates the use of a target value as a finalization criterion. If active, then once the supplied target value for the objective function is found (beyond the first iteration if ‘Target Warning’ is active), bnd_pso will exit with no exception or warning raised and with inform = 1. Other than checking for feasibility only, this is the only finalization criterion that guarantees that the algorithm has been successful. If the target value is achieved during the initialization phase or the first iteration and ‘Target Warning’ is active, bnd_pso will exit with errno = 2. This option may take any real value, or the character values ON and OFF, as well as DEFAULT. If this option is queried using optget(), the returned real value will contain the current target, and the returned character value will indicate whether this option is ON or OFF. The behaviour of the option is as follows:

Once a point is found with an objective value within the ‘Target Objective Tolerance’ of the target, bnd_pso will exit successfully with no exception or warning raised and with inform = 1.

OFF

The currently stored target value will remain stored; however, it will not be used as a finalization criterion.

ON

The currently stored target value will be used as a finalization criterion.

DEFAULT

The stored target value will be reset to its default value, and this finalization criterion will be deactivated.

‘Target Objective Value’float

Default

Activates or deactivates the use of a target value as a finalization criterion. If active, then once the supplied target value for the objective function is found (beyond the first iteration if ‘Target Warning’ is active), bnd_pso will exit with no exception or warning raised and with inform = 1. Other than checking for feasibility only, this is the only finalization criterion that guarantees that the algorithm has been successful. If the target value is achieved during the initialization phase or the first iteration and ‘Target Warning’ is active, bnd_pso will exit with errno = 2. This option may take any real value, or the character values ON and OFF, as well as DEFAULT. If this option is queried using optget(), the returned real value will contain the current target, and the returned character value will indicate whether this option is ON or OFF. The behaviour of the option is as follows:

Once a point is found with an objective value within the ‘Target Objective Tolerance’ of the target, bnd_pso will exit successfully with no exception or warning raised and with inform = 1.

OFF

The currently stored target value will remain stored; however, it will not be used as a finalization criterion.

ON

The currently stored target value will be used as a finalization criterion.

DEFAULT

The stored target value will be reset to its default value, and this finalization criterion will be deactivated.
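A sketch of activating a target-objective finalization criterion via glopt.optset(); the numerical values are illustrative and the usual 'Name = value' option-string format is assumed.

    # Assumes comm has been initialized via glopt.optset('Initialize = bnd_pso', comm).
    glopt.optset('Target Objective Value = 0.0', comm)      # stop once the objective reaches roughly 0.0
    glopt.optset('Target Objective Tolerance = 1.0e-5', comm)
    glopt.optset('Target Warning = ON', comm)                # warn if the target is met on the first iteration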

‘Target Objective Safeguard’float

Default

If you have given a target objective value to reach (the value of the option ‘Target Objective Value’), this option sets your desired safeguarded termination tolerance, for use when the target value is close to zero.

‘Target Objective Tolerance’float

Default

The optional tolerance to a user-specified target value.

‘Target Warning’str

Default

Activates or deactivates the error exit (errno = 2) associated with the target value being achieved before entry into the main loop of the algorithm.

OFF

No error will be returned, and the function will exit normally.

ON

An error will be returned if the target objective is reached prematurely, and the function will exit with errno = 2.

‘Verify Gradients’str

Default

Adjusts the level of gradient checking performed when gradients are required. Gradient checks are only performed on the first call to the chosen local minimizer if it requires gradients. There is no guarantee that the gradient check will be correct, as the finite differences used in the gradient check are themselves subject to inaccuracies.

OFF

No gradient checking will be performed.

ON

A cheap gradient check will be performed on the gradients corresponding to the objective, as returned through objfun.

OBJECTIVE FULL

A more expensive gradient check will be performed on the gradients corresponding to the objective, as returned through objfun.

‘Weight Decrease’str

Default

Determines how particle weights decrease.

OFF

Weights do not decrease.

INTEREST

Weights decrease through compound interest, as a function of the ‘Weight Value’ and the current number of iterations.

LINEAR

Weights decrease linearly with the iteration number, reaching the minimum weight when the iteration count reaches the maximum number of iterations as set by ‘Maximum Iterations Completed’.

‘Weight Initial’float

Default

The initial value of any particle’s inertial weight, or the minimum possible initial value if initial weights are randomized. When set, this will override the related weight-initialization settings, and as such those options must be set afterwards if so desired.

‘Weight Initialize’str

Default

Determines how the initial weights are distributed.

INITIAL

All weights are initialized at the initial weight, ‘Weight Initial’, if set. If ‘Weight Initial’ has not been set, this will be the maximum weight, ‘Weight Maximum’.

MAXIMUM

All weights are initialized at the maximum weight, ‘Weight Maximum’.

RANDOMIZED

Weights are uniformly distributed between the minimum and maximum weights, or between ‘Weight Initial’ and the maximum weight if ‘Weight Initial’ has been set.

‘Weight Maximum’float

Default

The maximum particle weight.

‘Weight Minimum’float

Default

The minimum achievable weight of any particle. Once achieved, no further weight reduction is possible.

‘Weight Reset’str

Default

Determines how particle weights are re-initialized.

INITIAL

Weights are re-initialized at the initial weight if set. If ‘Weight Initial’ has not been set, this will be the maximum weight.

MAXIMUM

Weights are re-initialized at the maximum weight.

RANDOMIZED

Weights are uniformly distributed between the minimum and maximum weights, or between ‘Weight Initial’ and the maximum weight if ‘Weight Initial’ has been set.

‘Weight Value’float

Default

The constant used with ‘Weight Decrease’ = INTEREST.

‘SMP Callback Thread Safe’str

Default

Declare whether or not the callback functions you provide are thread safe. In particular, this indicates that any access to shared data (for example, through the data argument) from within your provided callbacks is done in a thread safe manner. If such data is used only to pass constant values, then you may assume it is thread safe. If it is also used for workspace, or for passing variable data such as random number generator seeds, then you must ensure it is accessed and updated safely. Whilst this can be done using OpenMP critical sections, we suggest their use is minimized to prevent unnecessary bottlenecks, and that instead individual threads have access to independent subsections of the provided data where possible.

YES

The callback functions have been programmed in a thread safe way. The algorithm will use OMP_NUM_THREADS threads.

NO

The callback functions are not thread safe. Setting this option will force the algorithm to run on a single thread only, and is advisable only for debugging purposes, or if you wish to parallelize your callback functions.

WARNING

This will cause an immediate exit from bnd_pso with errno = 51 if multiple threads are detected. This is to inform you that you have not declared the callback functions either to be thread safe, or to be thread unsafe (in which case you wish the algorithm to run in serial).

‘SMP Local Minimizer External’str

Default

Determines how many threads will attempt to locally minimize the best found solution after the function has exited the main loop.

MASTER

Only the master thread will attempt to find any improvement. The local minimization will be launched from the best known solution. All other threads will remain effectively idle.

ALL

The master thread will perform a local minimization from the best known solution, while all other threads will perform a local minimization from randomly generated perturbations of the best known solution, increasing the chance of an improvement. Assuming all local minimizations will take approximately the same amount of computation, this will be effectively free in terms of real time. It will however increase the number of function evaluations performed.

‘SMP Monitor’str

Default

Determines whether the user-supplied function monmod is invoked once every sub-iteration each thread performs, or only once by a single thread after all threads have completed at least one sub-iteration.

SINGLE

Only one thread will invoke monmod, after all threads have performed at least one sub-iteration.

ALL

Each thread will invoke monmod each time it completes a sub-iteration. If you wish to alter x using monmod, you should use this option, as monmod will only receive the copies of x, xbest and fbest that are private to the calling thread.

‘SMP Monmod’str

Default

Determines whether the user-supplied function monmod is invoked once every sub-iteration each thread performs, or only once by a single thread after all threads have completed at least one sub-iteration.

SINGLE

Only one thread will invoke monmod, after all threads have performed at least one sub-iteration.

ALL

Each thread will invoke monmod each time it completes a sub-iteration. If you wish to alter x using monmod, you should use this option, as monmod will only receive the copies of x, xbest and fbest that are private to the calling thread.

‘SMP Subswarm’int

Default

Determines how many threads support a particle subswarm. This is an extra collection of particles constrained to search only within a hypercube around the best point known to an individual thread. This may improve the number of iterations required to find a provided target, particularly if no local minimizer is in use.

If set to 0, then this will be disabled on all the threads.

If set to a sufficiently large value, then all the threads will support a particle subswarm.

‘SMP Thread Overrun’int

Default

This option provides control over the level of asynchronicity present in a simulation. In particular, a barrier synchronization between all threads is performed if any thread completes more sub-iterations than the slowest thread by more than the value of this option, causing all threads to be exposed to the current best solution. Allowing asynchronous behaviour does, however, allow individual threads to focus on different global optimum candidates some of the time, which can inhibit convergence to unwanted sub-optima. It also allows threads to continue searching when other threads are completing sub-iterations at a slower rate.

If set to a non-positive value, the algorithm will force a synchronization between threads at the end of each iteration.

Raises
NagValueError
(errno )

On entry, .

Constraint: .

(errno )

On entry, .

Constraint: , where num_threads is the value returned by the OpenMP environment variable OMP_NUM_THREADS, or num_threads is for a serial version of this function.

(errno )

On entry, and .

Constraint: for all .

(errno )

On entry, for all .

Constraint: for at least one .

(errno )

Error occurred whilst adjusting to interior local minimizer options.

(errno )

Error occurred whilst adjusting to exterior local minimizer options.

(errno )

Either the option arrays have not been initialized for bnd_pso, or they have become corrupted.

(errno )

Multiple SMP threads have been detected; however, the option ‘SMP Callback Thread Safe’ has not been set.

Set ‘SMP Callback Thread Safe’ = YES if the provided callbacks are thread safe.

Set ‘SMP Callback Thread Safe’ = NO if the provided callbacks are not thread safe, to force serial execution.

Warns
NagAlgorithmicWarning
(errno )

A finalization criterion was reached that cannot guarantee success.

On exit, inform indicates which criterion was reached.

(errno )

Target achieved after the first iteration.


(errno )

Derivative checks indicate possible errors in the supplied derivatives.

NagCallbackTerminateWarning
(errno )

User requested exit during call to .

(errno )

User requested exit during call to .

Notes

bnd_pso uses a stochastic method based on Particle Swarm Optimization (PSO) to search for the global optimum of a nonlinear function, subject to a set of bound constraints on the variables. In the PSO algorithm (see Algorithmic Details), a set of particles is generated in the search space, and advances at each iteration to (hopefully) better positions using a heuristic velocity based upon inertia, cognitive memory and global memory. The inertia is provided by a decreasingly weighted contribution from a particle’s current velocity, the cognitive memory refers to the best candidate found by an individual particle, and the global memory refers to the best candidate found by all the particles. This allows for a global search of the domain in question.

Further, this may be coupled with a selection of local minimization functions, which may be called during the iterations of the heuristic algorithm (the interior phase) to hasten the discovery of locally optimal points, and after the heuristic phase has completed (the exterior phase) to attempt to refine the final solution. Different options may be set for the local optimizer in each phase.

Without loss of generality, the problem is assumed to be stated in the following form:

minimize F(x)   subject to   l ≤ x ≤ u,   x ∈ R^ndim,

where the objective F(x) is a scalar function, x is a vector of ndim variables, and the vectors l and u are, respectively, lower and upper bounds for the variables. The objective function may be nonlinear. Continuity of F is not essential. For functions which are smooth and primarily unimodal, faster solutions will almost certainly be achieved by using functions from submodule opt directly.

For functions which are smooth and multi-modal, gradient dependent local minimization functions may be coupled with bnd_pso.

For multi-modal functions for which derivatives cannot be provided, particularly functions with a significant level of noise in their evaluation, bnd_pso should be used either alone, or coupled with opt.uncon_simplex.

The lower and upper box bounds on the variables are included to initialize the particle swarm into a finite hypervolume, although their subsequent influence on the algorithm is user-determinable (see the option ‘Boundary’ in Other Parameters). It is strongly recommended that sensible bounds are provided for all variables.

bnd_pso may also be used to maximize the objective function (see the option ‘Optimize’).

Due to the nature of global optimization, unless a predefined target is provided, there is no definitive way of knowing when to end a computation. As such, several stopping heuristics have been implemented in the algorithm. If any of these is achieved, bnd_pso will exit with errno = 1, and the parameter inform will indicate which criterion was reached. See the description of inform for more information.

In addition, you may provide your own stopping criteria through objfun and monmod.

nlp_pso() provides a comprehensive interface, allowing for the inclusion of general nonlinear constraints.

References

Gill, P E, Murray, W and Wright, M H, 1981, Practical Optimization, Academic Press

Kennedy, J and Eberhart, R C, 1995, Particle Swarm Optimization, Proceedings of the 1995 IEEE International Conference on Neural Networks, 1942–1948

Koh, B, George, A D, Haftka, R T and Fregly, B J, 2006, Parallel Asynchronous Particle Swarm Optimization, International Journal for Numerical Methods in Engineering (67(4)), 578–595

Vaz, A I and Vicente, L N, 2007, A Particle Swarm Pattern Search Method for Bound Constrained Global Optimization, Journal of Global Optimization (39(2)), 197–219, Kluwer Academic Publishers