NAG FL Interface
e04kff (handle_solve_bounds_foas)

Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.

1 Purpose

e04kff is a solver from the NAG optimization modelling suite for bound-constrained large-scale Nonlinear Programming (NLP) problems. It is a first-order active-set method (FOAS) that has low memory requirements and thus is suitable for very large-scale problems.

2 Specification

Fortran Interface
Subroutine e04kff ( handle, objfun, objgrd, monit, nvar, x, rinfo, stats, iuser, ruser, cpuser, ifail)
Integer, Intent (In) :: nvar
Integer, Intent (Inout) :: iuser(*), ifail
Real (Kind=nag_wp), Intent (Inout) :: x(nvar), ruser(*)
Real (Kind=nag_wp), Intent (Out) :: rinfo(100), stats(100)
Type (c_ptr), Intent (In) :: handle, cpuser
External :: objfun, objgrd, monit
C Header Interface
#include <nag.h>
void  e04kff_ (void **handle,
void (NAG_CALL *objfun)(const Integer *nvar, const double x[], double *fx, Integer *inform, Integer iuser[], double ruser[], void **cpuser),
void (NAG_CALL *objgrd)(const Integer *nvar, const double x[], const Integer *nnzfd, double fdx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser),
void (NAG_CALL *monit)(const Integer *nvar, const double x[], Integer *inform, const double rinfo[], const double stats[], Integer iuser[], double ruser[], void **cpuser),
const Integer *nvar, double x[], double rinfo[], double stats[], Integer iuser[], double ruser[], void **cpuser, Integer *ifail)
The routine may be called by the names e04kff or nagf_opt_handle_solve_bounds_foas.

3 Description

e04kff solves large-scale bound-constrained nonlinear optimization problems of the form
minimize x∈ℝⁿ f(x)   subject to   ℓx ≤ x ≤ ux, (1)
where n is the number of decision variables and x, ℓx and ux are n-dimensional vectors, and the nonlinear objective function f(x) is assumed to be sufficiently smooth.
The solver is a first-order method (i.e., it uses only first derivatives) that has very low memory requirements and, therefore, is suitable for very large bound-constrained problems. It is based on an active-set method coupled with a nonmonotone projected gradient (NPG) algorithm, a nonlinear conjugate gradient (CG) method and its limited-memory variant (LCG). The active-set method alternates between the two solvers: the NPG step handles the box constraints and identifies a suitable search space, while the CG step explores it for a solution.
For a detailed description of the algorithm see Section 11. Under standard assumptions on the problem (smoothness of the first derivative of the objective) the algorithm converges to a local solution or to a critical point.
e04kff serves as a solver for problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the NAG optimization modelling suite. First, the problem handle is initialized by calling e04raf. Then some of the routines e04ref, e04rff, e04rgf or e04rhf may be called to formulate the objective and to define (simple) box constraints for the problem. Once the problem is fully described, the handle may be passed to the solver e04kff. When the handle is no longer needed, e04rzf should be called to destroy it and deallocate the memory held within. See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
The algorithm behaviour can be modified by various optional parameters (see Section 12) which can be set by e04zmf and e04zpf anytime between the initialization of the handle by e04raf and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters. Option getter e04znf can be called to retrieve the current value of any option.
The optional parameter Task may be used to switch the problem to maximization, while FOAS Estimate Derivatives can be used to estimate missing elements of the gradient. The optional parameter Verify Derivatives may help verify the correctness of the gradient vector before starting to solve a problem.
Several options may have significant impact on the performance of the solver. Even if the defaults were chosen to suit the majority of problems, it is recommended that you experiment in order to find the most suitable set of options for a particular problem, see Sections 11 and 12 for further details.
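The calling sequence just described might be sketched as follows. This is an illustrative skeleton only (not the distributed example program): the bounds and starting point are taken from the example in Section 10, the argument lists follow the individual routine documents, and my_objfun and my_objgrd are placeholder names for user-supplied routines matching the objfun and objgrd specifications in Section 5 (sketches of both are given there).
!   Illustrative skeleton of the handle workflow for e04kff
    Program foas_skeleton
      Use iso_c_binding, Only: c_ptr, c_null_ptr
      Use nag_library, Only: nag_wp, e04raf, e04rgf, e04rhf, e04zmf,      &
                             e04kff, e04rzf, e04kfu
      Implicit None
      Integer, Parameter :: nvar = 2
      Type (c_ptr)       :: handle, cpuser
      Integer            :: idxfd(nvar), iuser(1), ifail
      Real (Kind=nag_wp) :: x(nvar), bl(nvar), bu(nvar)
      Real (Kind=nag_wp) :: rinfo(100), stats(100), ruser(1)
      External           :: my_objfun, my_objgrd   ! user routines, see Section 5

      cpuser = c_null_ptr
      ifail = 0
      Call e04raf(handle,nvar,ifail)                ! initialize an empty handle
      idxfd = (/1,2/)                               ! dense gradient: both entries supplied
      Call e04rgf(handle,nvar,idxfd,ifail)          ! register the nonlinear objective
      bl = (/-1.0_nag_wp,-2.0_nag_wp/)              ! simple box bounds (Section 10 data)
      bu = (/0.8_nag_wp,2.0_nag_wp/)
      Call e04rhf(handle,nvar,bl,bu,ifail)
      Call e04zmf(handle,'FOAS Print Frequency = 5',ifail)   ! optional parameter
      x = (/-1.5_nag_wp,1.9_nag_wp/)                ! starting point
      Call e04kff(handle,my_objfun,my_objgrd,e04kfu,nvar,x,rinfo,stats,   &
                  iuser,ruser,cpuser,ifail)         ! e04kfu: dummy monitor
      Call e04rzf(handle,ifail)                     ! destroy the handle and free memory
    End Program foas_skeleton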

4 References

Dai Y-H and Kou C-X (2013) A Nonlinear Conjugate Gradient Algorithm with an Optimal Property and an Improved Wolfe Line Search SIAM J. Optim. 23(1) 296–320
Gill P E and Leonard M W (2003) Limited-Memory Reduced-Hessian Methods for Large-Scale Unconstrained Optimization SIAM J. Optim. 14(2) 380–401
Hager W W and Zhang H (2005) A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search SIAM J. Optim. 16(1) 170–192
Hager W W and Zhang H (2006a) Algorithm 851: CG DESCENT, a Conjugate Gradient Method with Guaranteed Descent ACM Trans. Math. Software 32(1) 113–137
Hager W W and Zhang H (2006b) A New Active Set Algorithm for Box Constrained Optimization SIAM J. Optim. 17(2) 525–557
Hager W W and Zhang H (2013) The Limited Memory Conjugate Gradient Method SIAM J. Optim. 23(4) 2150–2168
Nocedal J and Wright S J (2006) Numerical Optimization (2nd Edition) Springer Series in Operations Research, Springer, New York

5 Arguments

1: handle Type (c_ptr) Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04raf) and to hold a problem formulation compatible with e04kff. It must not be changed between calls to the NAG optimization modelling suite.
2: objfun Subroutine, supplied by the NAG Library or the user. External Procedure
objfun must calculate the value of the nonlinear objective function f(x) at a specified point x. If there is no nonlinear objective (e.g., e04ref or e04rff was called to define a linear or quadratic objective function), objfun will never be called by e04kff and may be the dummy routine e04kfv (included in the NAG Library).
The specification of objfun is:
Fortran Interface
Subroutine objfun ( nvar, x, fx, inform, iuser, ruser, cpuser)
Integer, Intent (In) :: nvar
Integer, Intent (Inout) :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In) :: x(nvar)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: fx
Type (c_ptr), Intent (In) :: cpuser
C Header Interface
void  objfun (const Integer *nvar, const double x[], double *fx, Integer *inform, Integer iuser[], double ruser[], void **cpuser)
1: nvar Integer Input
On entry: n, the current number of decision variables x in the model.
2: x(nvar) Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the objective function is to be evaluated.
3: fx Real (Kind=nag_wp) Output
On exit: the value of the objective function at x.
4: inform Integer Input/Output
On entry: a non-negative value.
In some cases it is known beforehand that the evaluations of the objective function and its gradient are required at the same point x; in such cases, inform=1. This may help you optimize your code to avoid recalculating common quantities when evaluating both the objective function and the gradient; the objective function is always evaluated before the objective gradient. This notification parameter may be safely ignored if such optimization is not required.
On exit: may be used to indicate that the function cannot be evaluated at the requested point x by setting inform<0. Returning NaN or ±∞ in fx has the same effect. The algorithm will try to recover if the objective cannot be evaluated; if recovery is not possible it will stop with ifail=25.
5: iuser(*) Integer array User Workspace
6: ruser(*) Real (Kind=nag_wp) array User Workspace
7: cpuser Type (c_ptr) User Workspace
objfun is called with the arguments iuser, ruser and cpuser as supplied to e04kff. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to objfun.
objfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04kff is called. Arguments denoted as Input must not be changed by this procedure.
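For illustration, a possible objfun for the Rosenbrock objective used in Section 10, f(x)=(1−x1)²+100(x2−x1²)², might look as follows (my_objfun is an illustrative name; any routine matching the above specification can be supplied).
    Subroutine my_objfun(nvar,x,fx,inform,iuser,ruser,cpuser)
      Use iso_c_binding, Only: c_ptr
      Use nag_library, Only: nag_wp
      Implicit None
      Integer, Intent (In)               :: nvar
      Integer, Intent (Inout)            :: inform, iuser(*)
      Real (Kind=nag_wp), Intent (In)    :: x(nvar)
      Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
      Real (Kind=nag_wp), Intent (Out)   :: fx
      Type (c_ptr), Intent (In)          :: cpuser

      ! Evaluate f(x) = (1-x1)^2 + 100*(x2-x1^2)^2; a negative inform
      ! would signal that the point is unusable (not needed here).
      fx = (1.0_nag_wp-x(1))**2 + 100.0_nag_wp*(x(2)-x(1)**2)**2
      inform = 0
    End Subroutine my_objfun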
3: objgrd Subroutine, supplied by the NAG Library or the user. External Procedure
objgrd must calculate the values of the nonlinear objective function gradient ∇f(x) at a specified point x. Every call to objgrd is preceded by a call to objfun at the same point; if this is known in advance, both routines are notified via inform=1. If there is no nonlinear objective (e.g., e04ref or e04rff was called to define a linear or quadratic objective function), objgrd will never be called by e04kff and may be the dummy subroutine e04kfw (included in the NAG Library).
If the optional parameter FOAS Estimate Derivatives=YES, then after returning from objgrd the gradient vector is checked for entries you have not supplied. Missing entries are estimated using the finite-difference method; see the description of the optional parameter FOAS Estimate Derivatives for further details.
The specification of objgrd is:
Fortran Interface
Subroutine objgrd ( nvar, x, nnzfd, fdx, inform, iuser, ruser, cpuser)
Integer, Intent (In) :: nvar, nnzfd
Integer, Intent (Inout) :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In) :: x(nvar)
Real (Kind=nag_wp), Intent (Inout) :: fdx(nnzfd), ruser(*)
Type (c_ptr), Intent (In) :: cpuser
C Header Interface
void  objgrd (const Integer *nvar, const double x[], const Integer *nnzfd, double fdx[], Integer *inform, Integer iuser[], double ruser[], void **cpuser)
1: nvar Integer Input
On entry: n, the current number of decision variables x in the model.
2: x(nvar) Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the objective function gradient is to be evaluated.
3: nnzfd Integer Input
On entry: the number of nonzero elements in the sparse gradient vector of the objective function, as was set in a previous call to e04rgf.
4: fdx(nnzfd) Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: the values of the nonzero elements in the sparse gradient vector of the objective function, in the order specified by idxfd in a previous call to e04rgf. fdx(i) stores the gradient element ∂f/∂xj, where j=idxfd(i).
5: inform Integer Input/Output
On entry: a non-negative value.
If inform=1, then the previous call to objfun was also made at the same point x with inform=1. This may help to optimize your code in order to avoid recalculations of common quantities when evaluating both the objective function and gradient. This notification parameter may be safely ignored if such optimization is not required.
On exit: may be used to indicate that the gradient cannot be evaluated at the requested point x by setting inform<0. Returning NaN or ±∞ in any element of fdx has the same effect. The algorithm will try to recover if the gradient cannot be evaluated; if recovery is not possible it will stop with ifail=25.
6: iuser(*) Integer array User Workspace
7: ruser(*) Real (Kind=nag_wp) array User Workspace
8: cpuser Type (c_ptr) User Workspace
objgrd is called with the arguments iuser, ruser and cpuser as supplied to e04kff. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to objgrd.
objgrd must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04kff is called. Arguments denoted as Input must not be changed by this procedure.
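Continuing the Section 10 example, a possible objgrd supplying the full (dense) gradient is sketched below. It assumes e04rgf was called with idxfd=(1,2), so that fdx(1) and fdx(2) hold ∂f/∂x1 and ∂f/∂x2; my_objgrd is an illustrative name.
    Subroutine my_objgrd(nvar,x,nnzfd,fdx,inform,iuser,ruser,cpuser)
      Use iso_c_binding, Only: c_ptr
      Use nag_library, Only: nag_wp
      Implicit None
      Integer, Intent (In)               :: nvar, nnzfd
      Integer, Intent (Inout)            :: inform, iuser(*)
      Real (Kind=nag_wp), Intent (In)    :: x(nvar)
      Real (Kind=nag_wp), Intent (Inout) :: fdx(nnzfd), ruser(*)
      Type (c_ptr), Intent (In)          :: cpuser

      ! Gradient of f(x) = (1-x1)^2 + 100*(x2-x1^2)^2
      fdx(1) = -2.0_nag_wp*(1.0_nag_wp-x(1)) -                            &
               400.0_nag_wp*x(1)*(x(2)-x(1)**2)
      fdx(2) = 200.0_nag_wp*(x(2)-x(1)**2)
      inform = 0
    End Subroutine my_objgrd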
4: monit Subroutine, supplied by the NAG Library or the user. External Procedure
monit is provided to enable you to monitor the progress of the optimization and, optionally, to terminate the solver early if necessary. It is invoked at the end of every ith iteration, where i is given by the optional parameter FOAS Monitor Frequency (with the default value of 0, monit is not called).
monit may be the dummy subroutine e04kfu (included in the NAG Library).
The specification of monit is:
Fortran Interface
Subroutine monit ( nvar, x, inform, rinfo, stats, iuser, ruser, cpuser)
Integer, Intent (In) :: nvar
Integer, Intent (Inout) :: inform, iuser(*)
Real (Kind=nag_wp), Intent (In) :: x(nvar), rinfo(100), stats(100)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Type (c_ptr), Intent (In) :: cpuser
C Header Interface
void  monit (const Integer *nvar, const double x[], Integer *inform, const double rinfo[], const double stats[], Integer iuser[], double ruser[], void **cpuser)
1: nvar Integer Input
On entry: n, the current number of decision variables x in the model.
2: x(nvar) Real (Kind=nag_wp) array Input
On entry: the vector x of decision variables at the current iteration.
3: inform Integer Input/Output
On entry: a non-negative value.
On exit: may be used to request the solver to stop immediately. Specifically, if inform<0 the solver will terminate immediately with ifail=20; otherwise, the solver will proceed normally.
4: rinfo(100) Real (Kind=nag_wp) array Input
On entry: error measures and various indicators at the end of the current iteration as described in rinfo.
5: stats(100) Real (Kind=nag_wp) array Input
On entry: solver statistics at the end of the current iteration as described in stats.
6: iuser(*) Integer array User Workspace
7: ruser(*) Real (Kind=nag_wp) array User Workspace
8: cpuser Type (c_ptr) User Workspace
monit is called with the arguments iuser, ruser and cpuser as supplied to e04kff. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to monit.
monit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04kff is called. Arguments denoted as Input must not be changed by this procedure.
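As an illustration, a simple monitoring routine might print the current objective value and request early termination (ifail=20) once it falls below a target value which, in this sketch, is assumed to have been placed in ruser(1) by the calling program; my_monit is an illustrative name, and FOAS Monitor Frequency must be set to a positive value for the routine to be called.
    Subroutine my_monit(nvar,x,inform,rinfo,stats,iuser,ruser,cpuser)
      Use iso_c_binding, Only: c_ptr
      Use nag_library, Only: nag_wp
      Implicit None
      Integer, Intent (In)               :: nvar
      Integer, Intent (Inout)            :: inform, iuser(*)
      Real (Kind=nag_wp), Intent (In)    :: x(nvar), rinfo(100), stats(100)
      Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
      Type (c_ptr), Intent (In)          :: cpuser

      ! rinfo(1) holds the objective value, stats(8) the iteration count
      Write (*,'(1X,A,I8,A,ES12.4)') 'Monitor at iteration', Nint(stats(8)), &
        ', objective =', rinfo(1)
      If (rinfo(1)<ruser(1)) Then
        inform = -1            ! request the solver to stop with ifail=20
      End If
    End Subroutine my_monit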
5: nvar Integer Input
On entry: n, the current number of decision variables x in the model.
6: x(nvar) Real (Kind=nag_wp) array Input/Output
On entry: x0, the initial estimates of the variables, x.
On exit: the final values of the variables, x.
7: rinfo(100) Real (Kind=nag_wp) array Output
On exit: error measures and various indicators at the end of the final iteration as given in the table below:
1 Objective function value f(x).
2 Norm of inactive gradient, the objective gradient over the current search space. If the problem is unconstrained, then elements 2–4 coincide. See Section 11.4 for details.
3 Norm of projected direction, used in (3), see Section 11.2.
4 Norm of objective gradient.
5 Last step size (αk) used in (2) and (5), see Section 11.2 and Section 11.3.
6 Progress score, a positive value that measures the progress of the solver. A low score close to zero indicates poor progress, see (8) in Section 11.4.
7–100 Reserved for future use.
8: stats(100) Real (Kind=nag_wp) array Output
On exit: solver statistics at the end of the final iteration as given in the table below:
1 Number of function evaluations performed by NPG, see Section 11.2.
2 Number of gradient evaluations performed by NPG.
3 Number of function evaluations performed by CG, see Section 11.3.
4 Number of gradient evaluations performed by CG.
5 Number of function evaluations performed by LCG.
6 Number of gradient evaluations performed by LCG.
7 Number of function evaluations used by the finite difference method to estimate missing components of the gradient.
8 Number of iterations.
9 Total time spent in the solver (including user-supplied function calls).
10 Total time spent in user-supplied objective function.
11 Total time spent in user-supplied objective gradient.
12–100 Reserved for future use.
9: iuser(*) Integer array User Workspace
10: ruser(*) Real (Kind=nag_wp) array User Workspace
11: cpuser Type (c_ptr) User Workspace
iuser, ruser and cpuser are not used by e04kff, but are passed directly to objfun, objgrd and monit and may be used to pass information to these routines. If you do not need to reference cpuser, it should be initialized to c_null_ptr.
12: ifail Integer Input/Output
On entry: ifail must be set to 0, −1 or 1 to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of 0 causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of −1 means that an error message is printed while a value of 1 means that it is not.
If halting is not appropriate, the value −1 or 1 is recommended. If message printing is undesirable, then the value 1 is recommended. Otherwise, the value −1 is recommended since useful values can be provided in some output arguments even when ifail≠0 on exit. When the value −1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail=0 unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings

If on entry ifail=0 or −1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e04kff may return useful information.
ifail=1
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
ifail=2
The problem is already being solved.
This solver does not support the model defined in the handle.
ifail=4
On entry, nvar=value, expected value=value.
Constraint: nvar must match the current number of variables of the model in the handle.
ifail=7
The dummy objfun routine was called but the problem requires these values. Please provide a proper objfun routine.
The dummy objgrd routine was called but the problem requires these derivatives. Please provide a proper objgrd routine.
ifail=20
User requested termination during a monitoring step. inform was set to a negative value in monit.
ifail=21
The current starting point is unusable.
Either inform was set to a negative value within the user-supplied functions objfun, objgrd, or an Infinity or NaN was detected in values returned from them.
ifail=22
Maximum number of iterations reached.
ifail=23
The solver terminated after the maximum time allowed was exhausted.
Maximum number of seconds exceeded. Use optional parameter Time Limit to change the limit.
ifail=24
The solver was terminated because no further progress could be achieved.
This can indicate that the solver is calculating very small step sizes and is making very little progress. It could also indicate that the problem has been solved to the best numerical accuracy possible given the current scaling.
ifail=25
Invalid number detected in user-supplied function and recovery failed.
Either inform was set to a negative value within the user-supplied function objfun, or an Infinity or NaN was detected in values returned from them. Recovery attempts failed.
Invalid number detected in user-supplied gradient and recovery failed.
Either inform was set to a negative value within the user-supplied function objgrd, or an Infinity or NaN was detected in values returned from them. Recovery attempts failed.
ifail=26
User-provided gradient is likely to be incorrect.
This error indicates that the user-supplied gradient vector fdx has entries that do not match with a finite-differences approximation of it.
ifail=50
Problem was solved to an acceptable level; full accuracy was not achieved.
This indicates that the algorithm detected a sequence of very small reductions in the objective function value and is unable to reach a point satisfying the requested optimality tolerance. This may happen if the desired tolerances are too small for the current problem, or if the objective function is badly scaled.
ifail=54
The problem seems to be unbounded and the algorithm was stopped.
This indicates that the solver found a direction along which the objective function decreases arbitrarily. In the case of maximization, the direction found increases the objective value arbitrarily. This typically happens when a linear objective is provided without constraints, or when the defined constraints do not bound the search direction.
ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
ifail=-399
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
ifail=-999
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

7 Accuracy

The accuracy of the solution is determined by optional parameters FOAS Stop Tolerance and FOAS Rel Stop Tolerance. If ifail=0 on exit, the returned point satisfies (6) or (7) to the defined accuracy.
Please refer to Section 11.4 and the description of the particular options in Section 12.

8 Parallelism and Performance

Background information to multithreading can be found in the Multithreading documentation.
e04kff is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04kff makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

9 Further Comments

9.1 Description of the Printed Output

The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level, Monitoring Level, Print Solution and Print Options determine the exposed level of detail. This allows, for example, a detailed log file to be generated while the condensed information is displayed on the screen.
By default (Print File=6, Print Level=2), the following sections are printed to the standard output:
Header
The header is a message indicating the start of the solver. It should look like:
-------------------------------------------------------------------------------
 E04KF, First-order method for bound-constrained problems
-------------------------------------------------------------------------------
Optional parameters list
The list shows all options of the solver, each displayed on one line. The output contains the option name, its current value and an indicator for how it was set. The options unchanged from the default setting are noted by ‘d’, options you set are noted by ‘U’, and options reset by the solver are noted by ‘S’. Note that the output format is compatible with the file format expected by e04zpf. The output might look as follows:
Begin of Options
    Print File                    =                   6     * d
    Print Level                   =                   2     * U
    Print Options                 =                 Yes     * d
    Print Solution                =                 All     * U
    Monitoring File               =                   9     * U
    Monitoring Level              =                   3     * U
    Foas Monitor Frequency        =                   0     * d
    Foas Print Frequency          =                   5     * U
End of Options
Problem statistics
If Print Level≥2, statistics on the problem are printed, for example:
 Problem Statistics
   No of variables                  8 (+1 disabled, +2 fixed)
     free (unconstrained)           3
     bounded                        2
   Objective function       Nonlinear
Verification of derivatives
If Verify Derivatives=YES, then the user-supplied nonlinear objective gradient is verified. If Print Level<5, only the entries related to the actual search space are checked and compared to a finite-difference approximation; the output might look as follows:
 Derivative checker
     idx test        fdx     approx  rel error
       3 Fail   3.56E+02   3.57E+02   2.80E-03
       4  Ok    1.80E+02   1.80E+02   6.05E-09
       5  Ok    0.00E+00   0.00E+00   0.00E+00
On the other hand, if Print Level=5, it will print one line for each gradient entry and the output should be similar to:
 Derivative checker
     idx test        fdx     approx  rel error
       3 Fail   3.56E+02   3.57E+02   2.80E-03
       4  Ok    1.80E+02   1.80E+02   6.05E-09
       5  Ok    0.00E+00   0.00E+00   0.00E+00
       7 Skip         FD         --         --
       2 Skip    BLX=BUX         --         --
       6 Skip    BLX=BUX         --         --
       1 Skip   disabled         --         --
Iteration log
If Print Level≥2, the solver prints the status of every kth iteration, where k is specified by the optional parameter FOAS Print Frequency (default k=1). It will also print status information regarding the switches between NPG, CG and LCG.
If Print Level=2, the output shows the iteration number (0 represents the starting point), the current objective value, and optimality measures (norm of inactive gradient and the norm of the projected direction). Note that all these values for the last iteration are also available in rinfo. The output might look as follows:
-------------------------------------------------------------------------------
  iters |  objective |  optim  |   dir
-------------------------------------------------------------------------------
      20  7.71339E-01  1.22E-01  1.70E+00
      25  7.06709E-01  1.75E+00  1.54E+00
      30  7.06709E-01  1.75E+00  1.54E+00
      35  6.82989E-01  1.35E+00  1.35E+00
      40  6.55834E-01  3.26E+00  1.67E+00
If Print Level=3, the solver also prints, for each iteration, the progress score, the iteration type, the size or type of the performed step, and the accumulated numbers of objective function and gradient calls. The output takes the following form:
-------------------------------------------------------------------------------
  iters |  objective |  optim  |   dir   | progrss | it|   step  |    nf|    ng
-------------------------------------------------------------------------------
       0  1.10354E+02  0.00E+00  6.81E+01  1.00E+00       Start        1      1
       1  3.29016E+01  1.45E+01  1.45E+01  5.36E+00 NPG  1.39E-02      8      2
       2  3.29016E+01  1.45E+01  1.45E+01  5.36E+00     Switch CG      8      2
       3  1.84090E+01  2.48E+01  2.48E+01  2.85E+00  CG  2.42E-02     11      4
       4  1.84090E+01  2.48E+01  2.48E+01  2.85E+00      Restart      11      4
       5  9.48087E+00  1.25E+01  1.25E+01  1.45E+00  CG  1.49E-02     13      5
       6  4.20527E+00  8.28E+00  8.22E-01  2.51E+00  CG  3.61E-02     15      6
       7  7.20453E-01  3.08E+00  1.62E+00  9.70E+00 LCG  7.21E-01     17      7
If Print Level>3, each iteration produces additional information that expands over several lines, for example details of the current CG or LCG step. The output might look as follows:
-----  Iteration details (CG)  ---------
Size of gradient                2.64E+01
Size of subspace gradient       2.64E+01
Eigenvalues R range   6.80E-01  6.80E-01
Orthogonal property                 lost
Action                     Switch to LCG
----------------------------------------
-----  LCG solver details  -------------
Search space has            2 element(s)
Memory vectors                   2 (  2)
----------------------------------------
Summary
Once the solver finishes, a detailed summary is produced:
-------------------------------------------------------------------------------
Status: converged, an optimal solution was found
-------------------------------------------------------------------------------
Value of the objective             3.55353E-14
Norm of inactive gradient          2.53812E-07
Norm of projected direction        1.68481E-07
Iterations                                  32
Function evaluations                        80
FD func. evaluations                         5
Gradient evaluations                        48
  NPG function calls                         0
  NPG gradient calls                         0
  CG function calls                         11
  CG gradient calls                          9
  LCG function calls                        69
  LCG gradient calls                        39
-------------------------------------------------------------------------------
It starts with a status line giving the overall result, followed by the final objective value and the norm of the gradient (unconstrained problems) or the norms of the inactive gradient and the projected direction (constrained problems). If Print Level>1 or Monitoring Level>1, it will additionally report the iteration count and the objective function and gradient call counts.
Optionally, if Stats Time=YES, the timings for the user-supplied objective are displayed. It might look as follows:
Timing
  Total time spent                       12.54 sec
  Total time in obj function              3.01 sec ( 24.0%)
  Total time in obj gradient              5.28 sec ( 42.1%)
Solution
If Print Solution=YES, the values of the primal variables are printed; furthermore, if the problem is constrained, the dual variables (see Section 11.5) and their bounds are also reported. It might look as follows:
 Primal variables:
   idx   Lower bound       Value       Upper bound
     1     Disabled         NaN          Disabled
     2   5.00000E+00    5.00000E+00    5.00000E+00
     3  -1.00000E+00    8.00000E-01    8.00000E-01
     4  -2.00000E+00    6.40000E-01    2.00000E+00
     5   2.00000E+00    2.00000E+00    9.00000E+00
     6  -1.00000E+00   -1.00000E+00   -1.00000E+00
     7   0.00000E+00    0.00000E+00    1.00000E+00
 
 Box bounds dual variables:
   idx   Lower bound       Value       Upper bound       Value
     1     Disabled         NaN          Disabled         NaN
     2   5.00000E+00    0.00000E+00    5.00000E+00    0.00000E+00
     3  -1.00000E+00    0.00000E+00    8.00000E-01    4.00000E-01
     4  -2.00000E+00    0.00000E+00    2.00000E+00    0.00000E+00
     5   2.00000E+00    0.00000E+00    9.00000E+00    0.00000E+00
     6  -1.00000E+00    0.00000E+00   -1.00000E+00    0.00000E+00
     7   0.00000E+00    0.00000E+00    1.00000E+00    0.00000E+00

10 Example

In this example, we minimize the Rosenbrock function in ℝ² subject to simple bounds on the variables. The problem to solve is
minimize x∈ℝ² f(x) = (1−x1)² + 100 (x2−x1²)²   subject to   −1 ≤ x1 ≤ 0.8,  −2 ≤ x2 ≤ 2.
The initial guess is x0=(-1.5,1.9)T, and the expected solution point is x*=(0.8,0.64)T with objective value f(x*)=0.04.

10.1 Program Text

Program Text (e04kffe.f90)

10.2 Program Results

Program Results (e04kffe.r)

11 Algorithmic Details

This section contains the description of the underlying algorithms used in e04kff, a first-order active-set (FOAS) method with very low memory requirements suitable for large-scale bound-constrained nonlinear optimization problems. For further details, see Hager and Zhang (2006b) and references therein.

11.1 Active-Set Method and Algorithm Outline

The active-set method is a useful approach for tackling problems with bound constraints. It derives its name from the partitioning of the search space: elements of x that are fixed at a bound (xi=ℓi or xi=ui) are said to be active, while the remaining elements of x are said to be inactive. The goal of the method is to estimate which variables will be active at the solution point while optimizing over the inactive components.
e04kff consists of a constrained solver, an unconstrained solver and a set of rules to switch between them. The constrained solver (nonmonotone projected gradient, NPG) has two purposes: while solving (1), it also tries to guess which components of x are active at a solution. Once a reasonable guess is available, control is transferred to an unconstrained solver (the conjugate gradient method, CG, and its limited-memory variant, LCG) that operates only over the inactive elements of x but converges much faster than NPG.
The following is an outline of the implemented algorithm.
FOAS Algorithm
  (i) Make the initial point x0 feasible.
  (ii) Loop until the stopping criteria are satisfied:
    (a) Constrained problem (NPG): start solving the constrained problem (1) and try to identify a suitable inactive search space; if found, go to (ii)(b).
    (b) Unconstrained subproblem (CG): solve the unconstrained problem (4) over the elements of x marked inactive. If elements of x become newly active, or it is deemed that some active elements should be explored, go back to (ii)(a).

11.2 Constrained Subproblem (NPG)

By setting Ω = {x ∈ ℝⁿ : ℓx ≤ x ≤ ux}, the constrained problem (1) can be stated as
min x∈Ω f(x)
and can be solved using NPG which is an iterative method of the form
xk+1 = xk + αk dΩ (xk) , (2)
with step size αk and projection direction dΩ(x).
The projected direction, dΩ(x), is obtained by a projection of the objective gradient g(x) onto Ω:
dΩ(x) = PΩ (x-g(x)) - x , (3)
with PΩ the Euclidean projection operator over the box Ω. Note that if Ω=ℝⁿ, then dΩ(xk)=−g(xk) and the method reduces to nonmonotone steepest descent.
The step size αk>0 is chosen to guarantee global convergence, i.e., guarantee sufficient progress at each iteration using the nonmonotone Armijo condition
f(xk+1) ≤ fkR + c αk g(xk)T dΩ(xk) ,   with 0<c<1 ,
where fkR is a reference objective function value for iteration k, possibly fkR=f(xk). The step size is estimated using a nonmonotone backtracking technique. As soon as a suitable inactive set is identified (see Hager and Zhang (2006b)), control is transferred to CG (LCG) to accelerate the convergence.
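For box constraints the projection PΩ acts componentwise, so the projected direction in (3) can be formed directly; the following sketch (illustrative only, not the solver's internal code) computes dΩ(x) from the gradient g and the bounds ℓx (bl) and ux (bu).
    Subroutine projected_direction(n,x,g,bl,bu,d)
      Use nag_library, Only: nag_wp
      Implicit None
      Integer, Intent (In)             :: n
      Real (Kind=nag_wp), Intent (In)  :: x(n), g(n), bl(n), bu(n)
      Real (Kind=nag_wp), Intent (Out) :: d(n)
      Integer                          :: i

      ! d = P_Omega(x - g) - x, clipping each component to [bl,bu]
      Do i = 1, n
        d(i) = Min(Max(x(i)-g(i),bl(i)),bu(i)) - x(i)
      End Do
    End Subroutine projected_direction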

11.3 Unconstrained Subproblem (CG)

The exploration of the inactive search space is performed using the CG method which is an iterative method to solve
min x∈ℝⁿ f(x) (4)
and each iterate is updated using the expression
xk+1 = xk + αk dk , (5)
where dk is a direction of descent that combines the current gradient g(x) and a scaled version of the previous direction, βk dk-1, with βk a scalar,
dk+1 = -g (xk+1) + βk dk ,   d0 = -g(x0) .  
The step size αk>0 is chosen to ensure global convergence and is achieved by satisfying the weak Wolfe conditions
f(xk+1) - f(xk) ≤ δ αk g(xk)T dk ,   g(xk+1)T dk ≥ σ g(xk)T dk ,
with 0<δ<σ<1, or the approximate Wolfe conditions (see Hager and Zhang (2005)),
(2δ-1) g(xk)T dk ≥ g(xk+1)T dk ,   g(xk+1)T dk ≥ σ g(xk)T dk ,   and   g(xk+1)T dk ≤ g(xk)T dk + εk ,
with δ<min(0.5,σ) and εk>0. The line search performed by e04kff will always satisfy one or both of the stated conditions. The line search builds a series of nested intervals that contain a point satisfying the above sufficient decrease requirements.
It is important to note that when Ω≠ℝⁿ, the gradient used is actually the inactive gradient: the gradient g(xk) with zeros in the positions where the elements of xk are marked as active.
The performance of the CG method can degrade when orthogonality is lost between consecutive search directions. Therefore, e04kff uses a limited number of previous search directions to detect and restore orthogonality. When the current search direction is no longer orthogonal, it is discarded and a quasi-Newton variant known as limited-memory CG (LCG) is used to build a new search direction orthogonal to the explored subspace.
The available memory is used to build an approximation of the Hessian and the new search direction is estimated using
dk=-Hkg(xk).  
The quasi-Newton Hessian approximation Hk, is updated at each iteration using the BFGS update formula, see Nocedal and Wright (2006).
Experiments reported in the literature indicate that an adequate range for the quasi-Newton memory (the number of vectors used to build Hk) lies between 7 and 20. The default maximum number of vectors used in e04kff is 11 and can be changed via the optional parameter FOAS Memory.

11.4 Stopping Criteria

A point is considered a solution when there are no feasible descent directions to use. In this circumstance the routine stops and declares that a solution has been found.
If the problem is unconstrained, the routine declares that a solution has been found and stops when the first-order optimality condition is met within the defined absolute or relative tolerances (εtol≥0, εrel≥0),
‖g(xk)‖p ≤ max(εtol, εrel ‖g(x0)‖p) , (6)
where p can designate either the Euclidean or Infinity (default) norms.
On the other hand, if the problem is constrained, the routine declares that a solution has been found when the first-order condition is satisfied for the projected direction, dΩ, to the defined tolerances,
‖dΩ(xk)‖p ≤ max(εtol, εrel ‖dΩ(x0)‖p) . (7)
In the unconstrained case we have Ω=ℝⁿ and both stopping criteria (6) and (7) coincide.
The stopping tolerances can be changed using the optional parameters FOAS Stop Tolerance and FOAS Rel Stop Tolerance, while the norm used can be set by using the optional parameter FOAS Tolerance Norm, see Section 12 for details. If these parameters are set too small in relation to the complexity and scaling of the problem, the routine can terminate with ifail=22, 24 or 50.
Progress of the solver towards a solution is monitored using two criteria. The first one evaluates how poor the actual step has been and is estimated via
-αk ∇f(xk)T dk / |f(xk)| < εprog , (8)
where αk is given by the line search. If the above relation is consistently satisfied, the solver stops with ifail=24. The tolerance εprog is set using the optional parameter FOAS Progress Tolerance. The second criterion monitors the rate of convergence using
|f(xk-1) - f(xk)| / |f(xk)| < αk ‖∇f(xk)‖ εslow . (9)
As with the first criterion, if the previous relation is deemed permanent, the solver stops with ifail=50. The tolerance εslow can be changed using the optional parameter FOAS Slow Tolerance.

11.5 A Note About Lagrangian Multipliers

It is often useful to have access to the Lagrangian multipliers (dual variables) associated with the constraints if there are any defined. In the case where only simple bounds are present, the multipliers directly relate to the values of the gradient at the solution. The multipliers of the active bounds are the absolute values of the associated elements of the gradient. The multipliers of the inactive bounds are always zero.
The multipliers based on the final gradient value (or its finite-difference approximation) can be retrieved by calling e04rxf with the command string cmdstr='Dual Variables'. The format is the same as for other routines, see Section 3.1 in e04svf. Note that if the problem has not fully converged, the provided approximation might be quite crude.

12 Optional Parameters

Several optional parameters in e04kff define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04kff these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmf anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring steps is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
The option values may be retrieved by e04znf.
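For example, assuming handle and ifail are declared as in Section 5, a few optional parameters might be set before the solve as follows (the option strings are those listed in Section 12.1):
    Call e04zmf(handle,'FOAS Memory = 20',ifail)
    Call e04zmf(handle,'FOAS Stop Tolerance = 1.0e-7',ifail)
    Call e04zmf(handle,'Print Level = 3',ifail)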
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.

12.1 Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains the keyword, an indicator of the value type (a denotes a character value, i an integer value and r a real value) and the default value.
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
FOAS Estimate Derivatives  a  Default = NO
This option indicates whether to check for and estimate missing entries of the user-supplied gradient vector. Since the associated cost of estimating missing elements can be high, this option should only be used if strictly necessary. In general terms, if the gradient is not provided (in its entirety or for arbitrary points), potential degradation in the progress of the solver is to be expected. Depending on the complexity of the objective, the routine may not achieve the desired optimality accuracy or may even terminate with a no-further-progress error (ifail=24); it is therefore advisable to increase the values of FOAS Stop Tolerance and FOAS Rel Stop Tolerance when using this option.
Missing elements from the gradient vector are estimated by finite-differences using the perturbation interval specified by the optional parameter FOAS Finite Diff Interval.
If FOAS Estimate Derivatives=NO, the entries are not checked and all derivative elements need to be provided.
Constraint: FOAS Estimate Derivatives=YES or NO.
FOAS Finite Diff Interval  r  Default = √ε
Specifies the relative perturbation size used to estimate a derivative using the forward (or backward) finite-difference method. Setting the value too small or too big may lead e04kff to terminate with ifail=24 or 25.
Constraint: 10⁻¹² ≤ FOAS Finite Diff Interval ≤ 10⁻¹.
FOAS Iteration Limit  i  Default = 10⁷
This parameter sets the maximum number of iterations to be performed by e04kff. Setting the option too low might lead to ifail=22.
Constraint: FOAS Iteration Limit ≥ 1.
FOAS Memory  i  Default = 11
This parameter specifies the maximum number of memory vectors to use in the LCG solver.
Constraint: 0 ≤ FOAS Memory ≤ 100.
FOAS Monitor Frequency  i  Default = 0
This parameter specifies the frequency with which to call the monitor routine monit. If zero, the monitor routine is not called.
Constraint: FOAS Monitor Frequency ≥ 0.
FOAS Print Frequency  i  Default = 1
This parameter specifies the frequency with which to print information regarding each iteration to Print File and/or Monitoring File. By default, information is printed for every iteration.
Constraint: FOAS Print Frequency ≥ 1.
FOAS Progress Tolerance  r  Default = ε^(3/4)
Specifies the tolerance εprog (see (8)) with which the routine characterises a poor rate of progress when it deems itself to be far from a solution. If this behaviour is persistent, the routine asserts that no substantial further progress can be achieved and the process is terminated with ifail=24. Setting a high tolerance can lead the solver to misinterpret reasonable progress as unsatisfactory progress or even to stop prematurely, see (8) in Section 11.4.
Constraint: 0<FOAS Progress Tolerance<1.
FOAS Rel Stop Tolerance  r  Default = ε^(3/4)
This parameter sets the value of εrel which specifies the relative tolerance for the convergence measures in the stopping criteria, see (6) and (7) in Section 11.4.
Constraint: 0 ≤ εrel < 1.
FOAS Restart Factor  r  Default = 6.0
This factor specifies the frequency nvar×FOAS Restart Factor with which the CG/LCG directions are replaced by the steepest descent direction (dk=-gk). Setting the value too small can potentially slow the convergence speed.
Constraint: FOAS Restart Factor ≥ 0.
FOAS Slow Tolerance  r  Default = ε^(1/8)
Specifies the tolerance εslow (see (9)) with which the routine characterises a slow rate of convergence. If this behaviour is deemed permanent, the routine asserts that no substantial improvement can be achieved and the process is terminated with ifail=50. Setting a large tolerance can lead to a suboptimal point being incorrectly identified as a solution, see (9) in Section 11.4.
Constraint: FOAS Slow Tolerance>0.
FOAS Stop Tolerance  r  Default = max(10⁻⁶, ε)
This parameter sets the value of εtol which specifies the tolerance for the convergence measures in the stopping criteria, see (6) and (7) in Section 11.4.
Constraint: 0 ≤ εtol < 1.
FOAS Tolerance Norm  a  Default = INFINITY
This parameter specifies the norm used to measure some stopping metrics, such as optimality tolerances (see Section 11.4). It is possible to choose between 2-norm and ∞-norm. Solving problems using ∞-norm generally has lower computational costs than those based on 2-norm.
Constraint: FOAS Tolerance Norm=INFINITY or TWO.
Infinite Bound Size  r  Default = 10²⁰
This defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to -bigbnd will be regarded as -∞). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: Infinite Bound Size ≥ 1000.
Monitoring File  i  Default = −1
If i≥0, the unit number for the secondary (monitoring) output. If set to −1, no secondary output is provided. The solver output selected by Monitoring Level is sent to this unit.
Constraint: Monitoring File ≥ −1.
Monitoring Level  i  Default = 4
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as for Print Level.
Constraint: 0 ≤ Monitoring Level ≤ 5.
Print File  i  Default = advisory message unit number
If i≥0, the unit number for the primary output of the solver. If Print File=−1, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf at the time of the optional parameter's initialization, e.g., at the initialization of the handle. The solver output selected by Print Level is sent to this unit.
Constraint: Print File ≥ −1.
Print Level  i  Default = 2
This parameter defines how detailed information should be printed by the solver to the primary output.
i Output
0 No output from the solver
1 Only the final status and the objective value
2 Problem statistics, one line per iteration showing the progress of the solution with respect to the convergence measures, final status and statistics
3 As level 2 but each iteration line is longer and includes step length and progress measure
4,5 As level 3 but further details of each iteration are presented
Constraint: 0 ≤ Print Level ≤ 5.
Print Options  a  Default = YES
If Print Options=YES, a listing of optional parameters will be printed to the primary output.
Constraint: Print Options=YES or NO.
Print Solution  a  Default = NO
If Print Solution=YES, the final values of the solution vector are printed on the primary and secondary outputs.
Constraint: Print Solution=YES or NO.
Stats Time  a  Default = NO
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful for a choice of different solving approaches. It is possible to choose between CPU and wall clock time. Choice YES is equivalent to WALL CLOCK.
Constraint: Stats Time=YES, NO, CPU or WALL CLOCK.
Task  a  Default = MINIMIZE
This parameter specifies the required direction of the optimization. If Task=FEASIBLE POINT, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found. If no objective function is set, Task reverts to FEASIBLE POINT automatically.
Constraint: Task=MINIMIZE, MAXIMIZE or FEASIBLE POINT.
Time Limit  r  Default = 10⁶
This parameter specifies a limit in seconds that the solver can use to solve one problem. If during the convergence check this limit is exceeded, the solver will terminate with an ifail=23 error message.
Constraint: Time Limit>0.
Verify Derivatives  a  Default = NO
This parameter specifies whether the routine should perform numerical checks on the consistency of the user-supplied gradient function. If any discrepancies are found, ifail=26 is returned. It is recommended that such checks are enabled when first developing the formulation of the problem; however, the verification process results in a significant increase in the number of function evaluations and thus it should not be used in production code.
Constraint: Verify Derivatives=YES or NO.