
NAG Library Routine Document

E04JYF

Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

Contents

    1  Purpose
    2  Specification
    3  Description
    4  References
    5  Parameters
    6  Error Indicators and Warnings
    7  Accuracy
    8  Further Comments
    9  Example

1  Purpose

E04JYF is an easy-to-use quasi-Newton algorithm for finding a minimum of a function F(x1,x2,...,xn), subject to fixed upper and lower bounds on the independent variables x1,x2,...,xn, using function values only.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

2  Specification

SUBROUTINE E04JYF ( N, IBOUND, FUNCT1, BL, BU, X, F, IW, LIW, W, LW, IUSER, RUSER, IFAIL)
INTEGER  N, IBOUND, IW(LIW), LIW, LW, IUSER(*), IFAIL
REAL (KIND=nag_wp)  BL(N), BU(N), X(N), F, W(LW), RUSER(*)
EXTERNAL  FUNCT1

3  Description

E04JYF is applicable to problems of the form:
Minimize F(x1,x2,...,xn)  subject to  lj ≤ xj ≤ uj,  j=1,2,...,n
when derivatives of F(x) are unavailable.
Special provision is made for problems which actually have no bounds on the xj, problems which have only non-negativity bounds and problems in which l1 = l2 = ... = ln and u1 = u2 = ... = un. You must supply a subroutine to calculate the value of F(x) at any point x.
From a starting point supplied by you there is generated, on the basis of estimates of the gradient and the curvature of F(x), a sequence of feasible points which is intended to converge to a local minimum of the constrained function. An attempt is made to verify that the final point is a minimum.
A typical iteration starts at the current point x where nz (say) variables are free from both their bounds. The projected gradient vector gz, whose elements are finite difference approximations to the derivatives of F(x) with respect to the free variables, is known. A unit lower triangular matrix L and a diagonal matrix D (both of dimension nz), such that LDL^T is a positive definite approximation of the matrix of second derivatives with respect to the free variables (i.e., the projected Hessian) are also held. The equations
LDL^T pz = -gz
are solved to give a search direction pz, which is expanded to an n-vector p by an insertion of appropriate zero elements. Then α is found such that F(x+αp) is approximately a minimum (subject to the fixed bounds) with respect to α; x is replaced by x+αp, and the matrices L and D are updated so as to be consistent with the change produced in the estimated gradient by the step αp. If any variable actually reaches a bound during the search along p, it is fixed and nz is reduced for the next iteration. Most iterations calculate gz using forward differences, but central differences are used when they seem necessary.
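For illustration only, the linear algebra of this step can be sketched as follows: a system of the form L D L^T pz = -gz, with L unit lower triangular and D diagonal, is solved by a forward substitution, a diagonal scaling and a back substitution. The routine and array names below are hypothetical and this is not the routine's internal code; the kind real64 from iso_fortran_env is used simply to make the fragment self-contained.
    ! Illustrative sketch: solve L*D*L^T * p = -g, where L is unit lower
    ! triangular (strict lower triangle held in l) and D is diagonal (held in d).
    Subroutine ldlt_solve(nz,l,d,g,p)
      Use, Intrinsic :: iso_fortran_env, Only: wp => real64
      Implicit None
      Integer, Intent (In) :: nz
      Real (Kind=wp), Intent (In) :: l(nz,nz), d(nz), g(nz)
      Real (Kind=wp), Intent (Out) :: p(nz)
      Integer :: i
      ! Forward substitution L*y = -g (unit diagonal, so no divisions)
      Do i = 1, nz
        p(i) = -g(i) - dot_product(l(i,1:i-1),p(1:i-1))
      End Do
      ! Diagonal solve D*z = y
      p = p/d
      ! Back substitution L^T*p = z
      Do i = nz - 1, 1, -1
        p(i) = p(i) - dot_product(l(i+1:nz,i),p(i+1:nz))
      End Do
    End Subroutine ldlt_solve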
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., nz is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. A local search is also performed when a point is found which is thought to be a constrained minimum.

4  References

Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory

5  Parameters

1:     N – INTEGER Input
On entry: the number n of independent variables.
Constraint: N ≥ 1.
2:     IBOUND – INTEGER Input
On entry: indicates whether the facility for dealing with bounds of special forms is to be used.
It must be set to one of the following values:
IBOUND=0
If you are supplying all the lj and uj individually.
IBOUND=1
If there are no bounds on any xj.
IBOUND=2
If all the bounds are of the form 0 ≤ xj.
IBOUND=3
If l1 = l2 = ... = ln and u1 = u2 = ... = un.
3:     FUNCT1 – SUBROUTINE, supplied by the user. External Procedure
You must supply FUNCT1 to calculate the value of the function F(x) at any point x. It should be tested separately before being used with E04JYF (see the E04 Chapter Introduction); a sketch of a conforming subroutine is given at the end of this parameter list.
The specification of FUNCT1 is:
SUBROUTINE FUNCT1 ( N, XC, FC, IUSER, RUSER)
INTEGER  N, IUSER(*)
REAL (KIND=nag_wp)  XC(N), FC, RUSER(*)
1:     N – INTEGER Input
On entry: the number n of variables.
2:     XC(N) – REAL (KIND=nag_wp) array Input
On entry: the point x at which the function value is required.
3:     FC – REAL (KIND=nag_wp) Output
On exit: the value of the function F at the current point x.
4:     IUSER(*) – INTEGER array User Workspace
5:     RUSER(*) – REAL (KIND=nag_wp) array User Workspace
FUNCT1 is called with the parameters IUSER and RUSER as supplied to E04JYF. You are free to use the arrays IUSER and RUSER to supply information to FUNCT1 as an alternative to using COMMON global variables.
FUNCT1 must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which E04JYF is called. Parameters denoted as Input must not be changed by this procedure.
4:     BL(N) – REAL (KIND=nag_wp) array Input/Output
On entry: the lower bounds lj.
If IBOUND is set to 0, you must set BL(j) to lj, for j=1,2,...,n. (If a lower bound is not specified for a particular xj, the corresponding BL(j) should be set to -10^6.)
If IBOUND is set to 3, you must set BL(1) to l1; E04JYF will then set the remaining elements of BL equal to BL(1).
On exit: the lower bounds actually used by E04JYF.
5:     BU(N) – REAL (KIND=nag_wp) array Input/Output
On entry: the upper bounds uj.
If IBOUND is set to 0, you must set BU(j) to uj, for j=1,2,...,n. (If an upper bound is not specified for a particular xj, the corresponding BU(j) should be set to 10^6.)
If IBOUND is set to 3, you must set BU(1) to u1; E04JYF will then set the remaining elements of BU equal to BU(1).
On exit: the upper bounds actually used by E04JYF.
6:     X(N) – REAL (KIND=nag_wp) array Input/Output
On entry: X(j) must be set to an estimate of the jth component of the position of the minimum, for j=1,2,...,n.
On exit: the lowest point found during the calculations. Thus, if IFAIL=0 on exit, X(j) is the jth component of the position of the minimum.
7:     F – REAL (KIND=nag_wp) Output
On exit: the value of F(x) corresponding to the final point stored in X.
8:     IW(LIW) – INTEGER array Output
On exit: if IFAIL=0, 3 or 5, the first N elements of IW contain information about which variables are currently on their bounds and which are free. Specifically, if xi is:
fixed on its upper bound, IW(i) is -1;
fixed on its lower bound, IW(i) is -2;
effectively a constant (i.e., lj = uj), IW(i) is -3;
free, IW(i) gives its position in the sequence of free variables.
In addition, IW(N+1) contains the number of free variables (i.e., nz). The rest of the array is used as workspace.
9:     LIW – INTEGER Input
On entry: the dimension of the array IW as declared in the (sub)program from which E04JYF is called.
Constraint: LIW ≥ N+2.
10:   W(LW) – REAL (KIND=nag_wp) array Output
On exit: if IFAIL=0, 3 or 5, W(i) contains a finite difference approximation to the ith element of the projected gradient vector gz, for i=1,2,...,N. In addition, W(N+1) contains an estimate of the condition number of the projected Hessian matrix (i.e., k). The rest of the array is used as workspace.
11:   LW – INTEGER Input
On entry: the dimension of the array W as declared in the (sub)program from which E04JYF is called.
Constraint: LW ≥ max(N×(N-1)/2 + 12×N, 13).
12:   IUSER(*) – INTEGER array User Workspace
13:   RUSER(*) – REAL (KIND=nag_wp) array User Workspace
IUSER and RUSER are not used by E04JYF, but are passed directly to FUNCT1 and may be used to pass information to this routine as an alternative to using COMMON global variables.
14:   IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to 0, -1 or 1. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value -1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if IFAIL ≠ 0 on exit, the recommended value is -1. When the value -1 or 1 is used it is essential to test the value of IFAIL on exit.
On exit: IFAIL=0 unless the routine detects an error or a warning has been flagged (see Section 6).
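As an illustration of the FUNCT1 interface described under parameter 3 above, the following sketch evaluates the objective function of the example in Section 9. It assumes, as in the shipped example programs, that the module nag_library supplies the working precision kind nag_wp; the distributed e04jyfe.f90 may differ in detail.
    ! Sketch of a user-supplied FUNCT1 for the Section 9 objective:
    !   F = (x1+10*x2)**2 + 5*(x3-x4)**2 + (x2-2*x3)**4 + 10*(x1-x4)**4
    Subroutine funct1(n,xc,fc,iuser,ruser)
      Use nag_library, Only: nag_wp          ! assumed source of the nag_wp kind
      Implicit None
      Integer, Intent (In) :: n
      Real (Kind=nag_wp), Intent (In) :: xc(n)
      Real (Kind=nag_wp), Intent (Out) :: fc
      Integer, Intent (Inout) :: iuser(*)
      Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
      fc = (xc(1)+10.0_nag_wp*xc(2))**2 + 5.0_nag_wp*(xc(3)-xc(4))**2 + &
           (xc(2)-2.0_nag_wp*xc(3))**4 + 10.0_nag_wp*(xc(1)-xc(4))**4
    End Subroutine funct1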

6  Error Indicators and Warnings

If on entry IFAIL=0 or -1, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Note: E04JYF may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
IFAIL=1
On entry, N < 1,
or IBOUND < 0,
or IBOUND > 3,
or IBOUND = 0 and BL(j) > BU(j) for some j,
or IBOUND = 3 and BL(1) > BU(1),
or LIW < N+2,
or LW < max(13, 12×N + N×(N-1)/2).
IFAIL=2
There have been 400×n function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in X (a sketch of such a restart is given at the end of this section). The error may also indicate that F(x) has no minimum.
IFAIL=3
The conditions for a minimum have not all been met but a lower point could not be found and the algorithm has failed.
IFAIL=4
An overflow has occurred during the computation. This is an unlikely failure, but if it occurs you should restart at the latest point given in X.
IFAIL=5
IFAIL=6
IFAIL=7
IFAIL=8
There is some doubt about whether the point x found by E04JYF is a minimum. The degree of confidence in the result decreases as IFAIL increases. Thus, when IFAIL=5 it is probable that the final x gives a good estimate of the position of a minimum, but when IFAIL=8 it is very unlikely that the routine has found a minimum.
IFAIL=9
In the search for a minimum, the modulus of one of the variables has become very large (∼10^6). This indicates that there is a mistake in FUNCT1, that your problem has no finite solution, or that the problem needs rescaling (see Section 8).
IFAIL=10
The computed set of forward-difference intervals (stored in W(9×N+1), W(9×N+2), ..., W(10×N)) is such that X(i) + W(9×N+i) ≤ X(i) for some i.
This is an unlikely failure, but if it occurs you should attempt to select another starting point.
If you are dissatisfied with the result (e.g., because IFAIL=5, 6, 7 or 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. If persistent trouble occurs and the gradient can be calculated, it may be advisable to change to a routine which uses gradients (see the E04 Chapter Introduction).
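A restart after IFAIL=2, as suggested above, can be as simple as the following hedged fragment; it reuses the names and declarations of the driver sketched in Section 9 and is not a complete program.
    ! Hedged fragment: one restart from the final point already held in X
    ifail = -1
    Call e04jyf(n,ibound,funct1,bl,bu,x,f,iw,liw,w,lw,iuser,ruser,ifail)
    If (ifail==2) Then
      ifail = -1                 ! X already holds the best point found so far
      Call e04jyf(n,ibound,funct1,bl,bu,x,f,iw,liw,w,lw,iuser,ruser,ifail)
    End If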

7  Accuracy

A successful exit (IFAIL=0) is made from E04JYF when the criteria (B1, B2 and B3) or B4 hold, and the local search confirms a minimum. (Quantities with superscript k are the values at the kth iteration of the quantities mentioned in Section 3, xtol = 100ε, ε is the machine precision and ||.|| denotes the Euclidean norm. The vector gz is returned in the array W.)
If IFAIL=0, then the vector in X on exit, xsol, is almost certainly an estimate of the position of the minimum, xtrue, to the accuracy specified by xtol.
If IFAIL=3 or 5, xsol may still be a good estimate of xtrue, but the following checks should be made. Let k denote an estimate of the condition number of the projected Hessian matrix at xsol. (The value of k is returned in W(N+1).) If
(i) the sequence F(x^(k)) converges to F(xsol) at a superlinear or a fast linear rate,
(ii) ||gz(xsol)||^2 < 10.0×ε, and
(iii) k < 1.0/||gz(xsol)||,
then it is almost certain that xsol is a close approximation to the position of a minimum. When (ii) is true, then usually F(xsol) is a close approximation to F(xtrue).
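Checks (ii) and (iii) can be applied directly to the array W on exit, since W(1),...,W(N) hold the finite difference approximation to gz and W(N+1) holds k (see Section 5). The following is a hedged sketch only: the function name is hypothetical, nag_wp is assumed to come from the nag_library module, and check (i), which concerns the convergence rate, needs the iteration history and is not attempted.
    ! Sketch: apply checks (ii) and (iii) above to the W array returned by E04JYF
    Logical Function looks_like_minimum(n,w)
      Use nag_library, Only: nag_wp          ! assumed source of the nag_wp kind
      Implicit None
      Integer, Intent (In) :: n
      Real (Kind=nag_wp), Intent (In) :: w(*)
      Real (Kind=nag_wp) :: gznorm
      gznorm = sqrt(sum(w(1:n)**2))          ! ||gz(xsol)|| from W(1:N)
      looks_like_minimum = gznorm**2 < 10.0_nag_wp*epsilon(1.0_nag_wp) .And. &
        w(n+1)*gznorm < 1.0_nag_wp           ! check (iii) rearranged to avoid division
    End Function looks_like_minimum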
When a successful exit is made then, for a computer with a mantissa of t decimals, one would expect to get about t/2-1 decimals accuracy in x and about t-1 decimals accuracy in F, provided the problem is reasonably well scaled.

8  Further Comments

The number of iterations required depends on the number of variables, the behaviour of Fx and the distance of the starting point from the solution. The number of operations performed in an iteration of E04JYF is roughly proportional to n2. In addition, each iteration makes at least m+1 calls of FUNCT1, where m is the number of variables not fixed on bounds. So, unless Fx can be evaluated very quickly, the run time will be dominated by the time spent in FUNCT1.
Ideally the problem should be scaled so that at the solution the value of F(x) and the corresponding values of x1,x2,...,xn are each in the range (-1,+1), and so that at points a unit distance away from the solution, F is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that E04JYF will take less computer time.

9  Example

To minimize
F = (x1 + 10x2)^2 + 5(x3 - x4)^2 + (x2 - 2x3)^4 + 10(x1 - x4)^4
subject to
-1 ≤ x1 ≤ 3,  -2 ≤ x2 ≤ 0,  -1 ≤ x4 ≤ 3,
starting from the initial guess (3, -1, 0, 1).
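A minimal main program for this example might be sketched as follows. This is an illustration only, not the distributed e04jyfe.f90; it assumes (as in the shipped example programs) that the module nag_library supplies e04jyf and nag_wp, uses the bounds stated above with ±10^6 for the unbounded x3, and takes FUNCT1 as sketched in Section 5.
    ! Hedged sketch of a driver for the example above (not the shipped e04jyfe.f90)
    Program e04jyf_sketch
      Use nag_library, Only: e04jyf, nag_wp  ! assumed module, as in shipped examples
      Implicit None
      Integer, Parameter :: n = 4
      Integer, Parameter :: liw = n + 2, lw = max(n*(n-1)/2+12*n,13)
      Integer :: ibound, ifail, iw(liw), iuser(1)
      Real (Kind=nag_wp) :: bl(n), bu(n), x(n), f, w(lw), ruser(1)
      External :: funct1                     ! the FUNCT1 sketched in Section 5
      ibound = 0                             ! all bounds supplied individually
      bl = (/ -1.0_nag_wp, -2.0_nag_wp, -1.0E6_nag_wp, -1.0_nag_wp /)  ! x3 unbounded below
      bu = (/ 3.0_nag_wp, 0.0_nag_wp, 1.0E6_nag_wp, 3.0_nag_wp /)      ! x3 unbounded above
      x = (/ 3.0_nag_wp, -1.0_nag_wp, 0.0_nag_wp, 1.0_nag_wp /)        ! initial guess
      ifail = -1                             ! continue on error; IFAIL tested below
      Call e04jyf(n,ibound,funct1,bl,bu,x,f,iw,liw,w,lw,iuser,ruser,ifail)
      If (ifail==0 .Or. ifail==3 .Or. ifail==5) Then
        Write (*,*) 'Final point    x =', x
        Write (*,*) 'Function value F =', f
      End If
    End Program e04jyf_sketch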

9.1  Program Text

Program Text (e04jyfe.f90)

9.2  Program Data

None.

9.3  Program Results

Program Results (e04jyfe.r)



© The Numerical Algorithms Group Ltd, Oxford, UK. 2012