NAG CPP Interface
nagcpp::opt::handle_solve_ipopt (e04st)

Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.

1 Purpose

handle_solve_ipopt, an interior point method optimization solver based on the IPOPT software package, is a solver for the NAG optimization modelling suite and is suitable for large-scale nonlinear programming (NLP) problems.

2 Specification

#include "e04/nagcpp_e04st.hpp"
#include "e04/nagcpp_class_CommE04RA.hpp"
template <typename COMM, typename OBJFUN, typename OBJGRD, typename CONFUN, typename CONGRD, typename HESS, typename MONIT, typename X, typename U, typename RINFO, typename STATS>

void function handle_solve_ipopt(COMM &comm, OBJFUN objfun, OBJGRD objgrd, CONFUN confun, CONGRD congrd, HESS hess, MONIT monit, X &&x, const types::f77_integer nnzu, U &&u, RINFO &&rinfo, STATS &&stats, OptionalE04ST opt)
template <typename COMM, typename OBJFUN, typename OBJGRD, typename CONFUN, typename CONGRD, typename HESS, typename MONIT, typename X, typename U, typename RINFO, typename STATS>

void function handle_solve_ipopt(COMM &comm, OBJFUN objfun, OBJGRD objgrd, CONFUN confun, CONGRD congrd, HESS hess, MONIT monit, X &&x, const types::f77_integer nnzu, U &&u, RINFO &&rinfo, STATS &&stats)

3 Description

handle_solve_ipopt will typically be used for nonlinear programming problems (NLP)
minimize_{x ∈ ℝ^n}   f(x)                 (a)
subject to   l_g ≤ g(x) ≤ u_g,            (b)
             l_B ≤ Bx ≤ u_B,              (c)
             l_x ≤ x ≤ u_x,               (d)     (1)
where
  • n is the number of decision variables x,
  • f(x) is the objective function,
  • g(x) is an m_g-element vector of nonlinear constraint functions with bounds l_g and u_g,
  • B is an m_B × n matrix defining the linear constraints with bounds l_B and u_B, and
  • l_x and u_x are simple (box) bounds on the variables x.
The objective f(x) can be specified in a number of ways: handle_set_linobj for a dense linear function, handle_set_quadobj for a sparse linear or quadratic function and handle_set_nlnobj for a general nonlinear function. In the last case, objfun and objgrd will be used to compute values and gradients of the objective function. Variable box bounds l_x, u_x can be specified with handle_set_simplebounds. The special case of linear constraints l_B ≤ Bx ≤ u_B is handled by handle_set_linconstr, while general nonlinear constraints l_g ≤ g(x) ≤ u_g are specified by handle_set_nlnconstr (both can be specified). Again, in the last case, confun and congrd will be used to compute values and gradients of the nonlinear constraint functions.
Finally, if you are willing to calculate second derivatives, the sparsity structure of the second partial derivatives of a nonlinear objective and/or of any nonlinear constraints is specified by handle_set_nlnhess, while the values of these derivatives themselves will be computed by the user-supplied hess. While there is an option (see Hessian Mode) that forces internal approximation of second derivatives, no such option exists for first derivatives, which must be computed accurately. If handle_set_nlnhess has been called and hess is used to calculate values for second derivatives, both the objective and all the constraints must be included; it is not possible to provide a subset of these. If handle_set_nlnhess is not called, second derivatives will be approximated internally.
See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
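The following minimal sketch (not the documented example program ex_e04st.cpp of Section 10) illustrates the calling sequence: a handle is created, the problem would be registered through the suite routines, and the solver is then invoked with lambdas for the nonlinear objective callbacks and the dummy functions mentioned in Section 5 for the unused callbacks. The CommE04RA constructor taking nvar, the acceptance of std::vector for the array arguments, the namespace of the dummy functions and the exception type are assumptions made for this sketch; consult the individual routine documents for the exact usage.

#include <iostream>
#include <vector>

#include "e04/nagcpp_class_CommE04RA.hpp"
#include "e04/nagcpp_e04st.hpp"

int main() {
  using namespace nagcpp;

  types::f77_integer nvar = 2;
  opt::CommE04RA handle(nvar);   // assumed constructor: initializes a handle for nvar variables

  // Register the problem here before solving, e.g. a nonlinear objective via
  // opt::handle_set_nlnobj and bounds via opt::handle_set_simplebounds
  // (the argument lists are documented with those routines).

  // Hypothetical objective f(x) = x0^2 + x0*x1 + 2*x1^2 and its dense gradient.
  auto objfun = [](const utility::array1D<double, data_handling::ArgIntent::IN> &x,
                   double &fx) {
    fx = x(0) * x(0) + x(0) * x(1) + 2.0 * x(1) * x(1);
  };
  auto objgrd = [](const utility::array1D<double, data_handling::ArgIntent::IN> &x,
                   const types::f77_integer nnzfd,
                   utility::array1D<double, data_handling::ArgIntent::OUT> &fdx) {
    (void)nnzfd;                  // 2 for this dense gradient pattern
    fdx(0) = 2.0 * x(0) + x(1);   // df/dx1
    fdx(1) = x(0) + 4.0 * x(1);   // df/dx2
  };

  std::vector<double> x = {1.0, 1.0};   // starting point
  types::f77_integer nnzu = 0;          // no Lagrange multipliers requested
  std::vector<double> u, rinfo(32), stats(32);

  try {
    // Dummy callbacks stand in for the unused confun/congrd/hess/monit;
    // their namespace (opt::) is assumed here.
    opt::handle_solve_ipopt(handle, objfun, objgrd,
                            opt::ipopt_dummy_confun, opt::ipopt_dummy_congrd,
                            opt::ipopt_dummy_hess, opt::ipopt_dummy_monit,
                            x, nnzu, u, rinfo, stats);
    std::cout << "objective at solution: " << rinfo[0] << std::endl;
  } catch (const error_handler::Exception &e) {
    std::cout << e.msg << std::endl;    // solver errors/warnings surface as exceptions
  }
  return 0;
}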

3.1 Structure of the Lagrange Multipliers

For a problem consisting of n variable bounds, m_B linear constraints and m_g nonlinear constraints (as specified in nvar, nclin and ncnln of handle_set_simplebounds, handle_set_linconstr and handle_set_nlnconstr, respectively), the number of Lagrange multipliers, and consequently the correct value for nnzu, will be q = 2n + 2m_B + 2m_g. The order in which these will be found in the u array is
z_{1L}, z_{1U}, z_{2L}, z_{2U}, …, z_{nL}, z_{nU}, λ_{1L}, λ_{1U}, λ_{2L}, λ_{2U}, …, λ_{(m_B)L}, λ_{(m_B)U}, λ_{(m_B+1)L}, λ_{(m_B+1)U}, λ_{(m_B+2)L}, λ_{(m_B+2)U}, …, λ_{(m_B+m_g)L}, λ_{(m_B+m_g)U}
where the L and U subscripts refer to lower and upper bounds, respectively, and the variable bound constraint multipliers come first (if present, i.e., if handle_​set_​simplebounds was called), followed by the linear constraint multipliers (if present, i.e., if handle_​set_​linconstr was called) and the nonlinear constraint multipliers (if present, i.e., if handle_​set_​nlnconstr was called).
Significantly nonzero values for any of these, after the solver has terminated, indicate that the corresponding constraint is active. Significance is judged in the first instance by the relative scale of any value compared to the smallest among them.
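For instance, a small self-contained sketch of how nnzu is sized and how individual multipliers are located in u (using illustrative problem sizes, not tied to any particular example) could read:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
  // Illustrative sizes only: n = 4 variables, mB = 2 linear and mg = 1 nonlinear constraints.
  const std::size_t n = 4, mB = 2, mg = 1;
  const std::size_t nnzu = 2 * n + 2 * mB + 2 * mg;   // q = 2n + 2mB + 2mg = 14

  std::vector<double> u(nnzu);   // would be filled by handle_solve_ipopt

  // Layout (0-based): u[2*i], u[2*i+1] hold the lower/upper bound multipliers for
  // variable i+1, followed by lower/upper pairs for the mB linear constraints and
  // then the mg nonlinear constraints.
  double z1_lower   = u[0];               // lower-bound multiplier for x_1
  double lin1_upper = u[2 * n + 1];       // upper-bound multiplier for linear constraint 1
  double nln1_lower = u[2 * (n + mB)];    // lower-bound multiplier for nonlinear constraint 1

  std::cout << nnzu << " " << z1_lower << " " << lin1_upper << " " << nln1_lower << "\n";
  return 0;
}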

4 References

Byrd R H, Gilbert J Ch and Nocedal J (2000) A trust region method based on interior point techniques for nonlinear programming Mathematical Programming 89 149–185
Byrd R H, Liu G and Nocedal J (1997) On the local behavior of an interior point method for nonlinear programming Numerical Analysis (eds D F Griffiths and D J Higham) Addison–Wesley
Conn A R, Gould N I M, Orban D and Toint Ph L (2000) A primal-dual trust-region algorithm for non-convex nonlinear programming Mathematical Programming 87 (2) 215–249
Conn A R, Gould N I M and Toint Ph L (2000) Trust Region Methods SIAM, Philadelphia
Fiacco A V and McCormick G P (1990) Nonlinear Programming: Sequential Unconstrained Minimization Techniques SIAM, Philadelphia
Gould N I M, Orban D, Sartenaer A and Toint Ph L (2001) Superlinear convergence of primal-dual interior point algorithms for nonlinear programming SIAM Journal on Optimization 11 (4) 974–1002
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report. RAL-TR-2011-024
Wächter A and Biegler L T (2006) On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming Mathematical Programming 106(1) 25–57
Williams P and Lang B (2013) A framework for the MR3 Algorithm: theory and implementation SIAM J. Sci. Comput. 35 740–766
Yamashita H (1998) A globally convergent primal-dual interior-point method for constrained optimization Optimization Methods and Software 10 443–469

5 Arguments

1: comm CommE04RA Input/Output
Communication structure. An object of either the derived class CommE04RA or its base class NoneCopyableComm can be supplied. It is recommended that the derived class is used. If the base class is supplied it must first be initialized via a call to opt::handle_init (e04ra).
2: objfun void function Function
objfun must calculate the value of the nonlinear objective function f(x) at a specified value of the n-element vector x of variables. If there is no nonlinear objective (e.g., handle_set_linobj or handle_set_quadobj was called to define a linear or quadratic objective function), objfun will never be called by handle_solve_ipopt and objfun may be the dummy function ipopt_dummy_objfun. (ipopt_dummy_objfun is included in the NAG Library.)
void function objfun(const utility::array1D<double,data_handling::ArgIntent::IN> &x, double &fx)
1: x(nvar) double array Input
On entry: the vector x of variable values at which the objective function is to be evaluated.
2: fx double Output
On exit: the value of the objective function at x.
3: nvar types::f77_integer Input
On entry: n, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by handle_​init.
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
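As an illustration, a user-supplied objfun matching the signature above might be written as follows; the objective formula is purely hypothetical and the 0-based element access via operator() mirrors the fdx(i-1)/gdx(i-1) convention used elsewhere in this document.

#include "e04/nagcpp_e04st.hpp"

// Hypothetical nonlinear objective f(x) = x0^2 + x0*x1 + 2*x1^2 (illustration only).
void my_objfun(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
               double &fx) {
  fx = x(0) * x(0) + x(0) * x(1) + 2.0 * x(1) * x(1);
}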
3: objgrd void function Function
objgrd must calculate the values of the gradient ∂f/∂x of the nonlinear objective function at a specified value of the n-element vector x of variables. If there is no nonlinear objective (e.g., handle_set_linobj or handle_set_quadobj was called to define a linear or quadratic objective function), objgrd will never be called by handle_solve_ipopt and objgrd may be the dummy function ipopt_dummy_objgrd. (ipopt_dummy_objgrd is included in the NAG Library.)
void function objgrd(const utility::array1D<double,data_handling::ArgIntent::IN> &x, const types::f77_integer nnzfd, utility::array1D<double,data_handling::ArgIntent::OUT> &fdx)
1: x(nvar) double array Input
On entry: the vector x of variable values at which the objective function gradient is to be evaluated.
2: nnzfd types::f77_integer Input
On entry: the number of nonzero elements in the sparse gradient vector of the objective function, as was set in a previous call to handle_​set_​nlnobj.
3: fdx(nnzfd) double array Output
On exit: the values of the nonzero elements in the sparse gradient vector of the objective function, in the order specified by idxfd in a previous call to handle_set_nlnobj. fdx(i-1) will hold the value of ∂f/∂x_{idxfd(i-1)}.
4: nvar types::f77_integer Input
On entry: n, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by handle_​init.
Note: objgrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
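Continuing the hypothetical objective used in the objfun sketch above, and assuming handle_set_nlnobj registered a dense gradient pattern idxfd = {1, 2}, a matching objgrd might look like:

#include "e04/nagcpp_e04st.hpp"

// Gradient of the hypothetical f(x) = x0^2 + x0*x1 + 2*x1^2; fdx(i-1) corresponds to idxfd(i-1).
void my_objgrd(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
               const nagcpp::types::f77_integer nnzfd,
               nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::OUT> &fdx) {
  (void)nnzfd;                       // 2 for this dense pattern
  fdx(0) = 2.0 * x(0) + x(1);        // df/dx1
  fdx(1) = x(0) + 4.0 * x(1);        // df/dx2
}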
4: confun void function Function
confun must calculate the values of the m_g-element vector g(x) of nonlinear constraint functions at a specified value of the n-element vector x of variables. If no nonlinear constraints were registered in this handle, confun will never be called by handle_solve_ipopt and confun may be the dummy function ipopt_dummy_confun. (ipopt_dummy_confun is included in the NAG Library.)
void function confun(const utility::array1D<double,data_handling::ArgIntent::IN> &x, const types::f77_integer ncnln, utility::array1D<double,data_handling::ArgIntent::OUT> &gx)
1: x(nvar) double array Input
On entry: the vector x of variable values at which the constraint functions are to be evaluated.
2: ncnln types::f77_integer Input
On entry: mg, the number of nonlinear constraints, as specified in an earlier call to handle_​set_​nlnconstr.
3: gx(ncnln) double array Output
On exit: the mg values of the nonlinear constraint functions at x.
4: nvar types::f77_integer Input
On entry: n, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by handle_​init.
Note: confun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
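As an illustration, a confun for a single, purely hypothetical nonlinear constraint g_1(x) = x0*x1 might be sketched as follows:

#include "e04/nagcpp_e04st.hpp"

// Hypothetical single nonlinear constraint g_1(x) = x0 * x1 (illustration only).
void my_confun(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
               const nagcpp::types::f77_integer ncnln,
               nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::OUT> &gx) {
  (void)ncnln;                       // 1 for this sketch
  gx(0) = x(0) * x(1);
}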
5: congrd void function Function
congrd must calculate the nonzero values of the sparse Jacobian of the nonlinear constraint functions, ∂g_i/∂x, at a specified value of the n-element vector x of variables. If there are no nonlinear constraints (e.g., handle_set_nlnconstr was never called with the same handle or it was called with ncnln = 0), congrd will never be called by handle_solve_ipopt and congrd may be the dummy function ipopt_dummy_congrd. (ipopt_dummy_congrd is included in the NAG Library.)
void function congrd(const utility::array1D<double,data_handling::ArgIntent::IN> &x, const types::f77_integer nnzgd, utility::array1D<double,data_handling::ArgIntent::OUT> &gdx)
1: x(nvar) double array Input
On entry: the vector x of variable values at which the Jacobian of the constraint functions is to be evaluated.
2: nnzgd types::f77_integer Input
On entry: the number of nonzero elements in the sparse Jacobian of the constraint functions, as was set in a previous call to handle_set_nlnconstr.
3: gdx(nnzgd) double array Output
On exit: the nonzero values of the Jacobian of the nonlinear constraints, in the order specified by irowgd and icolgd in an earlier call to handle_set_nlnconstr. gdx(i-1) will hold the value of ∂g_{irowgd(i-1)}/∂x_{icolgd(i-1)}.
4: nvar types::f77_integer Input
On entry: n, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by handle_​init.
Note: congrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
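For the hypothetical constraint g_1(x) = x0*x1 used in the confun sketch above, and assuming handle_set_nlnconstr registered two nonzeros with irowgd = {1, 1} and icolgd = {1, 2}, a matching congrd could be sketched as:

#include "e04/nagcpp_e04st.hpp"

// Jacobian of the hypothetical g_1(x) = x0 * x1.
void my_congrd(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
               const nagcpp::types::f77_integer nnzgd,
               nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::OUT> &gdx) {
  (void)nnzgd;                       // 2 for this sparsity pattern
  gdx(0) = x(1);                     // dg_1/dx1
  gdx(1) = x(0);                     // dg_1/dx2
}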
6: hess void function Function
hess must calculate the nonzero values of one of a set of second derivative quantities:
  • the Hessian of the Lagrangian function σ∇²f + Σ_{i=1}^{m_g} λ_i ∇²g_i
  • the Hessian of the objective function ∇²f
  • the Hessian of the constraint functions ∇²g_i
The value of argument idf determines which one of these is to be computed and this, in turn, is determined by earlier calls to handle_​set_​nlnhess, when the nonzero sparsity structure of these Hessians was registered. Please note that it is not possible to only supply a subset of the Hessians (see errorid=6). If there were no calls to handle_​set_​nlnhess, hess will never be called by handle_solve_ipopt and hess may be the dummy function ipopt_​dummy_​hess (ipopt_​dummy_​hess is included in the NAG Library). In this case, the Hessian of the Lagrangian will be approximated by a limited-memory quasi-Newton method (L-BFGS).
void function hess(const utility::array1D<double,data_handling::ArgIntent::IN> &x, const types::f77_integer idf, const double sigma, const utility::array1D<double,data_handling::ArgIntent::IN> &lamda, const types::f77_integer nnzh, utility::array1D<double,data_handling::ArgIntent::OUT> &hx)
1: x(nvar) double array Input
On entry: the vector x of variable values at which the Hessian functions are to be evaluated.
2: idf types::f77_integer Input
On entry: specifies the quantities to be computed in hx.
idf=-1
The values of the Hessian of the Lagrangian will be computed in hx. This will be the case if handle_​set_​nlnhess has been called with idf of the same value.
idf=0
The values of the Hessian of the objective function will be computed in hx. This will be the case if handle_​set_​nlnhess has been called with idf of the same value.
idf>0
The values of the Hessian of the idfth constraint function will be computed in hx. This will be the case if handle_​set_​nlnhess has been called with idf of the same value.
3: sigma double Input
On entry: if idf=-1, the value of the σ quantity in the definition of the Hessian of the Lagrangian. Otherwise, sigma should not be referenced.
4: lamda(ncnln) double array Input
On entry: if idf=-1, the values of the λ_i quantities in the definition of the Hessian of the Lagrangian. Otherwise, lamda should not be referenced.
5: nnzh types::f77_integer Input
On entry: the number of nonzero elements in the Hessian to be computed.
6: hx(nnzh) double array Output
On exit: the nonzero values of the requested Hessian evaluated at x. For each value of idf, the ordering of nonzeros must follow the sparsity structure registered in the handle by earlier calls to handle_​set_​nlnhess through the arguments irowh and icolh.
7: nvar types::f77_integer Input
On entry: n, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by handle_​init.
8: ncnln types::f77_integer Input
On entry: mg, the number of nonlinear constraints, as specified in an earlier call to handle_​set_​nlnconstr.
Note: hess should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
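A hess callback for the hypothetical objective and constraint used in the sketches above might look as follows, assuming handle_set_nlnhess was called once with idf = -1 and the upper-triangular pattern irowh = {1, 1, 2}, icolh = {1, 2, 2}; all numerical values follow from those hypothetical functions only.

#include "e04/nagcpp_e04st.hpp"

// Hessian of the Lagrangian sigma*H(f) + lamda_1*H(g_1) for the hypothetical
// f(x) = x0^2 + x0*x1 + 2*x1^2 and g_1(x) = x0*x1.
void my_hess(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
             const nagcpp::types::f77_integer idf, const double sigma,
             const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &lamda,
             const nagcpp::types::f77_integer nnzh,
             nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::OUT> &hx) {
  (void)x; (void)nnzh;               // this Hessian does not depend on x; nnzh = 3
  if (idf == -1) {
    hx(0) = 2.0 * sigma;             // (1,1): sigma * d2f/dx1^2
    hx(1) = sigma + lamda(0);        // (1,2): sigma * d2f/dx1dx2 + lamda_1 * d2g_1/dx1dx2
    hx(2) = 4.0 * sigma;             // (2,2): sigma * d2f/dx2^2
  }
  // If separate Hessians were registered instead, idf = 0 (objective) and
  // idf > 0 (constraints) would have to be handled analogously.
}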
7: monit void function Function
monit is provided to enable you to monitor the progress of the optimization.
monit may be the dummy function ipopt_​dummy_​monit (ipopt_​dummy_​monit is included in the NAG Library).
void function monit(const utility::array1D<double,data_handling::ArgIntent::IN> &x, const utility::array1D<double,data_handling::ArgIntent::IN> &u, const utility::array1D<double,data_handling::ArgIntent::IN> &rinfo, const utility::array1D<double,data_handling::ArgIntent::IN> &stats)
1: x(nvar) double array Input
On entry: the values of the decision variables x at the current iteration.
2: u(nnzu) double array Input
On entry: if nnzu>0, u holds the values at the current iteration of Lagrange multipliers (dual variables) for the constraints. See Section 3.1 for layout information.
3: rinfo(32) double array Input
On entry: error measures and various indicators at the end of the current iteration as described in Section 9.1.
4: stats(32) double array Input
On entry: solver statistics at the end of the current iteration as described in Section 9.1.
5: nvar types::f77_integer Input
On entry: n, the number of variables in the problem.
6: nnzu types::f77_integer Input
On entry: the dimension of array u.
Note: monit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by handle_solve_ipopt. If your code inadvertently does return any NaNs or infinities, handle_solve_ipopt is likely to produce unexpected results.
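A simple monit that prints a one-line progress report each iteration might be sketched as follows; the rinfo and stats indices used are those listed for arguments 11 and 12 and in Section 9.1.

#include <iostream>
#include "e04/nagcpp_e04st.hpp"

// Print the iteration count, objective value and infeasibility measures.
void my_monit(const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &x,
              const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &u,
              const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &rinfo,
              const nagcpp::utility::array1D<double, nagcpp::data_handling::ArgIntent::IN> &stats) {
  (void)x; (void)u;
  std::cout << "iteration " << stats(0)
            << ": objective = " << rinfo(0)
            << ", primal inf = " << rinfo(1)
            << ", dual inf = " << rinfo(2) << std::endl;
}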
8: x(nvar) double array Input/Output
On entry: x0, the initial estimates of the variables x.
On exit: the final values of the variables x.
9: nnzu types::f77_integer Input
On entry: the number of Lagrange multipliers that are to be returned in array u.
If nnzu=0, u will not be referenced; otherwise it needs to match the dimension q as explained in Section 3.1.
Constraints:
  • nnzu ≥ 0;
  • if nnzu>0, nnzu=q.
10: u(nnzu) double array Output
Note: if nnzu>0, u holds Lagrange multipliers (dual variables) for the constraints. See Section 3.1 for layout information. If nnzu=0, u will not be referenced.
On exit: the final values of the Lagrange multipliers z and λ.
11: rinfo(32) double array Output
On exit: error measures and various indicators at the end of the final iteration as given in the table below:
0 Objective function value f(x).
1 Constraint violation (primal infeasibility) (8).
2 Dual infeasibility (7).
3 Complementarity.
4 Karush–Kuhn–Tucker error.
12: stats(32) double array Output
On exit: solver statistics at the end of the final iteration as given in the table below:
0 Number of iterations.
2 Number of backtracking trial steps.
3 Number of Hessian evaluations.
4 Number of objective gradient evaluations.
7 Total wall clock time elapsed.
18 Number of objective function evaluations.
19 Number of constraint function evaluations.
20 Number of constraint Jacobian evaluations.
13: opt OptionalE04ST Input/Output
Optional parameter container, derived from Optional.

5.1 Additional Quantities

1: nvar
n, the number of variables in the problem.

6 Exceptions and Warnings

Errors or warnings detected by the function:
Note: in some cases handle_solve_ipopt may return useful information.
All errors and warnings have an associated numeric error code field, errorid, stored either as a member of the thrown exception object (see errorid), or as a member of opt.ifail, depending on how errors and warnings are being handled (see Error Handling for more details).
Raises: ErrorException
errorid=-199
function is not available in this implementation.
errorid=1
comm::handle has not been initialized.
errorid=1
comm::handle does not belong to the NAG optimization modelling suite,
has not been initialized properly or is corrupted.
errorid=1
comm::handle has not been initialized properly or is corrupted.
errorid=2
The problem is already being solved.
errorid=2
This solver does not support the model defined in the handle.
errorid=3
A different solver from the suite has already been used. Initialize a new
comm::handle using function.
errorid=4
The information supplied does not match with that previously stored.
On entry, nvar = value must match that
given during initialization of the comm::handle, i.e., value.
errorid=5
On entry, nnzu = value.
Constraint: nnzu=value or 0.
errorid=5
On entry, nnzu = value.
Constraint: no constraints present, so nnzu must be 0.
errorid=6
On entry, a nonlinear objective function has been defined
but no objective Hessian sparsity structure has been defined
through function.
errorid=6
On entry, a nonlinear constraint function has been defined
but no constraint Hessian sparsity structure has been defined
through function, for constraint number value.
errorid=7
Either change the option ‘Hessian Mode’ or
provide a proper hess function.
errorid=7
Please provide a proper objfun function.
errorid=7
Please provide a proper objgrd function.
errorid=7
Please provide a proper confun function.
errorid=7
Please provide a proper congrd function.
errorid=23
The solver terminated with not enough degrees of freedom.
errorid=23
The solver terminated due to an invalid problem definition.
errorid=23
The solver terminated due to an invalid option.
errorid=51
The solver detected an infeasible problem.
errorid=54
The solver terminated due to diverging iterates.
errorid=10601
On entry, argument value must be a vector of size value array.
Supplied argument has value dimensions.
errorid=10601
On entry, argument value must be a vector of size value array.
Supplied argument was a vector of size value.
errorid=10601
On entry, argument value must be a vector of size value array.
The size for the supplied array could not be ascertained.
errorid=10602
On entry, the raw data component of value is null.
errorid=10603
On entry, unable to ascertain a value for value.
errorid=10605
On entry, the communication class value has not been initialized correctly.
errorid=10703
An exception was thrown during IO (writing).
errorid=-99
An unexpected error has been triggered by this routine.
errorid=-399
Your licence key may have expired or may not have been installed correctly.
errorid=-999
Dynamic memory allocation failed.
Raises: WarningException
errorid=20
User requested termination during a monitoring step.
errorid=22
Maximum number of iterations exceeded.
errorid=23
The solver terminated after failure in the restoration phase.
errorid=23
The solver terminated after an error in the step computation.
errorid=23
The solver terminated after the maximum time allowed was exceeded.
errorid=24
The solver terminated after the search direction became too small.
errorid=25
Invalid number detected in user function.
errorid=50
The solver reports NLP solved to acceptable level.
Raises: CallbackException
errorid=10701
An exception was thrown in a callback.
errorid=10702
The memory address for an array in a callback has changed.

7 Accuracy

The accuracy of the solution is driven by optional parameter Stop Tolerance 1.
If errorid=0 on the final exit, the returned point satisfies the Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings, close to √ε, where ε is the machine precision) and thus it is a good estimate of a local solution. If errorid=50, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to Section 11.1 and the description of the particular options.

8 Parallelism and Performance

Please see the description for the underlying computational routine in this section of the FL Interface documentation.

9 Further Comments

9.1 Description of the Printed Output

The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level and Monitoring Level determine the exposed level of detail. This allows, for example, the generation of a detailed log in a file while the condensed information is displayed on the screen. This section also describes what kind of information is made available to the monitoring function monit via rinfo and stats.
There are four sections printed to the primary output with the default settings (level 2): a derivative check, a header, an iteration log and a summary. At higher levels more information will be printed, including any internal IPOPT options that have been changed from their default values.
Derivative Check
If Verify Derivatives is set, then information will appear about any errors detected in the user-supplied derivative functions objgrd, congrd or hess. It may look like this:
Starting derivative checker for first derivatives.

* grad_f[          1] = -2.000000E+00    ~  2.455000E+01  [ 1.081E+00]
* jac_g [    1,    4] =  4.700969E+01 v  ~  5.200968E+01  [ 9.614E-02]
Starting derivative checker for second derivatives.

*             obj_hess[    1,    1] =  1.881000E+03 v  ~  1.882000E+03  [ 5.314E-04]
*     1-th constr_hess[    1,    3] =  2.988964E+00 v  ~ -1.103543E-02  [ 3.000E+00]

Derivative checker detected 3 error(s).
The first line indicates that the value for the partial derivative of the objective with respect to the first variable as returned by objgrd (the first one printed) differs sufficiently from a finite difference estimation derived from objfun (the second one printed). The number in square brackets is the relative difference between these two numbers.
The second line reports on a discrepancy for the partial derivative of the first constraint with respect to the fourth variable. If the indicator v is absent, the discrepancy refers to a component that had not been included in the sparsity structure, in which case the nonzero structure of the derivatives should be corrected. Mistakes in the first derivatives should be corrected before attempting to correct mistakes in the second derivatives.
The third line reports on a discrepancy in a second derivative of the objective function, differentiated with respect to the first variable, twice.
The fourth line reports on a discrepancy in a second derivative of the first constraint, differentiated with respect to the first and third variables.
Header
If Print Level ≥ 1, the header will contain statistics about the size of the problem as the solver sees it, i.e., it reflects any changes imposed by preprocessing and problem transformations. The header may look like:
Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        8
Number of nonzeros in Lagrangian Hessian.............:       10

Total number of variables............................:        4
                     variables with only lower bounds:        4
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        2
        inequality constraints with only lower bounds:        2
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0
It summarises what is known about the variables and the constraints. Simple bounds are set by handle_​set_​simplebounds and standard equalities and inequalities by handle_​set_​linconstr.
Iteration log
If Print Level=2, the status of each iteration is condensed to one line. The line shows:
iter The current iteration count. This includes regular iterations and iterations during the restoration phase. If the algorithm is in the restoration phase, the letter r will be appended to the iteration number. The iteration number 0 represents the starting point. This quantity is also available as stats(0) of monit.
objective The unscaled objective value at the current point (given the current NLP scaling). During the restoration phase, this value remains the unscaled objective value for the original problem. This quantity is also available as rinfo(0) of monit.
inf_pr The unscaled constraint violation at the current point (given the current NLP scaling). This quantity is the infinity-norm (max) of the (unscaled) constraint violations. During the restoration phase, this value remains the constraint violation of the original problem at the current point. This quantity is also available as rinfo(1) of monit.
inf_du The scaled dual infeasibility at the current point (given the current NLP scaling). This quantity measures the infinity-norm (max) of the internal dual infeasibility (Eq. (4a) in the implementation paper Wächter and Biegler (2006)), including inequality constraints reformulated using slack variables and NLP scaling. During the restoration phase, this is the value of the dual infeasibility for the restoration phase problem. This quantity is also available as rinfo(2) of monit.
lg(mu) log10 of the value of the barrier parameter μ. μ itself is also available as rinfo(3) of monit.
||d|| The infinity-norm (max) of the primal step (for the original variables x and the internal slack variables s). During the restoration phase, this value includes the values of the additional variables p and n (see Eq. (30) in Wächter and Biegler (2006)). This quantity is also available as rinfo(4) of monit.
lg(rg) log10 of the value of the regularization term for the Hessian of the Lagrangian in the augmented system (δw of Eq. (26) and Section 3.1 in Wächter and Biegler (2006)). A dash (–) indicates that no regularization was done. The regularization term itself is also available as rinfo(5) of monit.
alpha_du The step size for the dual variables (αkz of Eq. (14c) in Wächter and Biegler (2006)). This quantity is also available as rinfo(6) of monit.
alpha_pr The step size for the primal variables (αk of Eq. (14a) in Wächter and Biegler (2006)). This quantity is also available as rinfo(7) of monit. The number is usually followed by a character for additional diagnostic information regarding the step acceptance criterion.
f f-type iteration in the filter method without second-order correction
F f-type iteration in the filter method with second-order correction
h h-type iteration in the filter method without second-order correction
H h-type iteration in the filter method with second-order correction
k penalty value unchanged in merit function method without second-order correction
K penalty value unchanged in merit function method with second-order correction
n penalty value updated in merit function method without second-order correction
N penalty value updated in merit function method with second-order correction
R Restoration phase just started
w in watchdog procedure
s step accepted in soft restoration phase
t/T tiny step accepted without line search
r some previous iterate restored
ls The number of backtracking line search steps (does not include second-order correction steps). This quantity is also available as stats(1) of monit.
Note that the step acceptance mechanisms in IPOPT consider the barrier objective function (5), which is usually different from the value reported in the objective column. Similarly, for the purposes of the step acceptance, the constraint violation is measured for the internal problem formulation, which includes slack variables for inequality constraints and potentially NLP scaling of the constraint functions. This value, too, is usually different from the value reported in inf_pr. As a consequence, a new iterate might have worse values both for the objective function and the constraint violation as reported in the iteration output, seemingly contradicting the globalization procedure.
Note that all these values are also available in rinfo(0),…,rinfo(7) and stats(0),…,stats(1) of the monitoring function monit.
The output might look as follows:
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  2.6603500E+05 1.55E+02 3.21E+01  -1.0 0.00E+00    -  0.00E+00 0.00E+00   0
   1  1.5053889E+05 7.95E+01 1.43E+01  -1.0 1.16E+00    -  4.55E-01 1.00E+00f  1
   2  8.9745785E+04 3.91E+01 6.45E+00  -1.0 3.07E+01    -  5.78E-03 1.00E+00f  1
   3  3.9878595E+04 1.63E+01 3.47E+00  -1.0 5.19E+00   0.0 2.43E-01 1.00E+00f  1
   4  2.7780042E+04 1.08E+01 1.64E+00  -1.0 3.66E+01    -  7.24E-01 8.39E-01f  1
   5  2.6194274E+04 1.01E+01 1.49E+00  -1.0 1.07E+01    -  1.00E+00 1.05E-01f  1
   6  1.5422960E+04 4.75E+00 6.82E-01  -1.0 1.74E+01    -  1.00E+00 1.00E+00f  1
   7  1.1975453E+04 3.14E+00 7.26E-01  -1.0 2.83E+01    -  1.00E+00 5.06E-01f  1
   8  8.3508421E+03 1.34E+00 2.04E-01  -1.0 3.96E+01    -  9.27E-01 1.00E+00f  1
   9  7.0657495E+03 4.85E-01 9.22E-02  -1.0 5.32E+01    -  1.00E+00 1.00E+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  6.8359393E+03 1.17E-01 1.28E-01  -1.7 4.69E+01    -  8.21E-01 1.00E+00h  1
  11  6.6508917E+03 1.52E-02 1.52E-02  -2.5 1.87E+01    -  1.00E+00 1.00E+00h  1
  12  6.4123213E+03 8.77E-03 1.49E-01  -3.8 1.85E+01    -  7.49E-01 1.00E+00f  1
  13  6.3157361E+03 4.33E-03 1.90E-03  -3.8 2.07E+01    -  1.00E+00 1.00E+00f  1
  14  6.2989280E+03 1.12E-03 4.06E-04  -3.8 1.54E+01    -  1.00E+00 1.00E+00h  1
  15  6.2996264E+03 9.90E-05 2.05E-04  -5.7 5.35E+00    -  9.63E-01 1.00E+00h  1
  16  6.2998436E+03 0.00E+00 1.86E-07  -5.7 4.55E-01    -  1.00E+00 1.00E+00h  1
  17  6.2998424E+03 0.00E+00 6.18E-12  -8.2 2.62E-03    -  1.00E+00 1.00E+00h  1
If Print Level>2, each iteration produces significantly more detailed output comprising detailed error measures and output from internal operations. The output is reasonably self-explanatory so it is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced if Print Level ≥ 1. An example is shown below:
Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............:   7.8692659500479623E-01    6.2324586324379867E+00
Dual infeasibility......:   7.9744615766675617E-10    6.3157735687207093E-09
Constraint violation....:   8.3555384833289281E-12    8.3555384833289281E-12
Complementarity.........:   0.0000000000000000E+00    0.0000000000000000E+00
Overall NLP error.......:   7.9744615766675617E-10    6.3157735687207093E-09


Number of objective function evaluations             = 7
Number of objective gradient evaluations             = 7
Number of equality constraint evaluations            = 7
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 6
Total CPU secs in IPOPT (w/o function evaluations)   =      0.724
Total CPU secs in NLP function evaluations           =      0.343

EXIT: Optimal Solution Found.
It starts with the total number of iterations the algorithm went through. Then, five quantities are printed, all evaluated at the termination point: the value of the objective function, the dual infeasibility, the constraint violation, the complementarity and the NLP error.
This is followed by some statistics on the number of calls to user-supplied functions and CPU time taken in user-supplied functions and the main algorithm. Lastly, status at exit is indicated by a short message. Detailed timings of the algorithm are displayed only if Stats Time is set.

9.2 Additional Licensor

Parts of the code for handle_​solve_​ipopt are distributed according to terms imposed by another licensor. Please refer to Library Licensors for further details.

10 Example

This example is based on Problem 73 in Hock and Schittkowski (1981) and involves the minimization of the linear function
f(x) = 24.55x_1 + 26.75x_2 + 39.00x_3 + 40.50x_4
subject to the bounds
0 ≤ x_1,  0 ≤ x_2,  0 ≤ x_3,  0 ≤ x_4,
to the nonlinear constraint
12x_1 + 11.9x_2 + 41.8x_3 + 52.1x_4 − 21 − 1.645 √(0.28x_1² + 0.19x_2² + 20.5x_3² + 0.62x_4²) ≥ 0
and the linear constraints
2.3x_1 + 5.6x_2 + 11.1x_3 + 1.3x_4 ≥ 5,   x_1 + x_2 + x_3 + x_4 − 1 = 0.
The initial point, which is infeasible, is
x_0 = (1, 1, 1, 1)^T
and f(x_0) = 130.8. The optimal solution (to five significant figures) is
x* = (0.63552, 0.0, 0.31270, 0.051777)^T.
Source File: ex_e04st.cpp
Data: None
Results: ex_e04st.mon, ex_e04st.out, ex_e04st.r

11 Algorithmic Details

handle_solve_ipopt is an implementation of IPOPT (see Wächter and Biegler (2006)) that is fully supported and maintained by NAG. It uses the Harwell package MA97 for the underlying sparse linear algebra factorization, together with either the MC68 approximate minimum degree ordering or the METIS ordering. Any issues relating to handle_solve_ipopt should be directed to NAG, who assume all responsibility for the handle_solve_ipopt function and its implementation.
In the remainder of this section, we repeat part of Section 2.1 of Wächter and Biegler (2006).
To simplify notation, we describe the method for the problem formulation
minimize_{x ∈ ℝ^n}   f(x)      (2)
subject to   g(x) = 0          (3)
             x ≥ 0.            (4)
Range constraints of the form l ≤ c(x) ≤ u can be expressed in this formulation by introducing slack variables x_s ≥ 0, x_t ≥ 0 (increasing n by 2) and defining new equality constraints g(x, x_s) := c(x) − l − x_s = 0 and g(x, x_t) := u − c(x) − x_t = 0.
handle_solve_ipopt, like the methods discussed in Williams and Lang (2013), Byrd et al. (2000), Conn et al. (2000) and Fiacco and McCormick (1990), computes (approximate) solutions for a sequence of barrier problems
minimize_{x ∈ ℝ^n}   φ_μ(x) := f(x) − μ Σ_{i=1}^{n} ln(x^(i))      (5)
subject to   g(x) = 0      (6)
for a decreasing sequence of barrier parameters μ converging to zero.
The algorithm may be interpreted as a homotopy method to the primal-dual equations,
∇f(x) + ∇g(x)λ − z = 0      (7)
g(x) = 0      (8)
XZe − μe = 0      (9)
with the homotopy parameter μ, which is driven to zero (see e.g., Byrd et al. (1997) and Gould et al. (2001)). Here, X := diag(x) for a vector x (similarly Z := diag(z), etc.), and e stands for the vector of all ones of appropriate dimension, while λ ∈ ℝ^m and z ∈ ℝ^n correspond to the Lagrange multipliers for the equality constraints (3) and the bound constraints (4), respectively.
Note that the equations (7), (8) and (9) for μ = 0, together with ‘x, z ≥ 0’, are the Karush–Kuhn–Tucker (KKT) conditions for the original problem (2), (3) and (4). These are the first-order optimality conditions for (2), (3) and (4) if constraint qualifications are satisfied (Conn et al. (2000)).
Starting from an initial point supplied in x, handle_solve_ipopt computes an approximate solution to the barrier problem (5) and (6) for a fixed value of μ (by default, 0.1), then decreases the barrier parameter, and continues the solution of the next barrier problem from the approximate solution of the previous one.
A sophisticated overall termination criterion for the algorithm is used to overcome potential difficulties when the Lagrange multipliers become large. This can happen, for example, when the gradients of the active constraints are nearly linearly dependent. The termination criterion is described in detail by Wächter and Biegler (2006) (see also Section 11.1 below).

11.1 Stopping Criteria

Using the individual parts of the primal-dual equations (7), (8) and (9), we define the optimality error for the barrier problem as
E_μ(x, λ, z) := max{ ||∇f(x) + ∇g(x)λ − z||_∞ / s_d,  ||g(x)||_∞,  ||XZe − μe||_∞ / s_c }      (10)
with scaling parameters s_d, s_c ≥ 1 defined below (not to be confused with the NLP scaling factors described in Section 11.2). By E_0(x, λ, z) we denote (10) with μ = 0; this measures the optimality error for the original problem (2), (3) and (4). The overall algorithm terminates if an approximate solution (x~*, λ~*, z~*) (including multiplier estimates) satisfying
E_0(x~*, λ~*, z~*) ≤ ε_tol      (11)
is found, where ε_tol > 0 is the user-supplied error tolerance in optional parameter Stop Tolerance 1.
Even if the original problem is well scaled, the multipliers λ and z might become very large, for example, when the gradients of the active constraints are (nearly) linearly dependent at a solution of (2), (3) and (4). In this case, the algorithm might encounter numerical difficulties satisfying the unscaled primal-dual equations (7), (8) and (9) to a tight tolerance. In order to adapt the termination criteria to handle such circumstances, we choose the scaling factors
s_d := max{ s_max, (||λ||_1 + ||z||_1) / (m + n) } / s_max,    s_c := max{ s_max, ||z||_1 / n } / s_max
in (10). In this way, a component of the optimality error is scaled whenever the average value of the multipliers becomes larger than a fixed number s_max ≥ 1 (s_max = 100 in our implementation). Also note that, in the case that the multipliers diverge, E_0(x, λ, z) can only become small if a Fritz John point for (2), (3) and (4) is approached, or if the primal variables diverge as well.

11.2 Scaling the NLP

Ideally, the formulated problem should be scaled so that, near the solution, all function gradients (objective and constraints), when nonzero, are of a similar order of magnitude. handle_solve_ipopt will compute automatic NLP scaling factors for the objective and constraint functions (but not the decision variables) and apply them if large imbalances of scale are detected. This rescaling is only computed at the starting point. References to scaled or unscaled objective or constraints in Section 9.1 and Section 11 should be understood in this context.

12 Optional Parameters

Several optional parameters in handle_solve_ipopt define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of handle_solve_ipopt these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling handle_​opt_​set anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice of AUTO), their value can be retrieved by handle_​opt_​get. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.
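As a hedged illustration (assuming handle is the CommE04RA object that will be passed to handle_solve_ipopt, and that handle_opt_set is available under nagcpp::opt with the header name guessed from the pattern used in Section 2), a few options might be changed like this:

#include "e04/nagcpp_class_CommE04RA.hpp"
#include "e04/nagcpp_e04zm.hpp"   // assumed header for handle_opt_set (e04zm)

int main() {
  nagcpp::opt::CommE04RA handle(4);                          // assumed constructor, 4 variables
  nagcpp::opt::handle_opt_set(handle, "Print Level = 3");
  nagcpp::opt::handle_opt_set(handle, "Stop Tolerance 1 = 1.0e-8");
  nagcpp::opt::handle_opt_set(handle, "Verify Derivatives = YES");
  // nagcpp::opt::handle_opt_set(handle, "Defaults");        // would reset all options
  return 0;
}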

12.1 Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
  • the keywords;
  • a parameter value, where the letters a, i and r denote options that take character, integer and real values respectively;
  • the default value.
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Hessian Mode    a    Default = AUTO
This parameter specifies whether the Hessian will be user-supplied (in hx) or approximated by handle_solve_ipopt using a limited-memory quasi-Newton L-BFGS method. In the AUTO setting, if no Hessian structure has been registered in the problem with a call to handle_set_nlnhess, and there are explicitly nonlinear user-supplied functions, then the Hessian will be approximated. Otherwise, hess will be called if and only if either handle_set_nlnobj or handle_set_nlnconstr has been used to define the problem. Approximating the Hessian is likely to require more iterations to achieve convergence but will reduce the time spent in user-supplied functions.
Constraint: Hessian Mode=AUTO, EXACT or APPROXIMATE.
Infinite Bound Size    r    Default = 10^20
This defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to −bigbnd will be regarded as −∞). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
It also serves as a limit for the objective function to be considered unbounded (errorid=54).
Constraint: Infinite Bound Size ≥ 1000.
Monitoring File    i    Default = −1
If i ≥ 0, the unit number for the secondary (monitoring) output. If set to −1, no secondary output is provided. The information output to this unit is controlled by Monitoring Level.
Constraint: Monitoring File ≥ −1.
Monitoring Level    i    Default = 4
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: 0 ≤ Monitoring Level ≤ 5.
Matrix Ordering    a    Default = AUTO
This parameter specifies the ordering to be used by the internal sparse linear algebra solver. It affects the number of nonzeros in the factorized matrix and thus influences the cost per iteration.
Matrix Ordering=AUTO
A heuristic is used to choose automatically between METIS and AMD orderings.
Matrix Ordering=BEST
Both AMD and METIS orderings are computed at the beginning of the solve and the one with the fewest nonzeros in the factorized matrix is selected.
Matrix Ordering=AMD
An approximate minimum degree (AMD) ordering is used.
Matrix Ordering=METIS
METIS ordering is used.
Constraint: Matrix Ordering=AUTO, BEST, AMD or METIS.
Outer Iteration Limit    i    Default = 100
The maximum number of iterations to be performed by handle_solve_ipopt. Setting the option too low might lead to errorid=22.
Constraint: Outer Iteration Limit ≥ 0.
Print File    i    Default = advisory message unit number
If i ≥ 0, the unit number for the primary output of the solver. If Print File = −1, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf (no CPP interface) at the time of the optional parameters initialization, e.g., at the initialization of the handle. The information output to this unit is controlled by Print Level.
Constraint: Print File ≥ −1.
Print Level    i    Default = 2
This parameter defines how detailed information should be printed by the solver to the primary output.
i Output
0 No output from the solver
1 Additionally, derivative check information, the Header and Summary.
2 Additionally, the Iteration log.
3, 4 Additionally, details of each iteration with scalar quantities printed.
5 Additionally, individual components of arrays are printed resulting in large output.
Constraint: 0 ≤ Print Level ≤ 5.
Stats Time    a    Default = NO
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solving approaches.
Constraint: Stats Time=YES or NO.
Stop Tolerance 1    r    Default = max(10^-6, √ε)
This option sets the value ε_tol, which is used for the optimality and complementarity tests derived from the KKT conditions. See Section 11.1.
Constraint: Stop Tolerance 1 > ε.
Task    a    Default = MINIMIZE
This parameter specifies the required direction of the optimization. If Task=FEASIBLE POINT, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, Task reverts to FEASIBLE POINT automatically.
Constraint: Task=MINIMIZE, MAXIMIZE or FEASIBLE POINT.
Time Limit    r    Default = 10^6
A limit on the number of seconds that the solver can use to solve one problem. If during the convergence check this limit is exceeded, the solver will terminate with a corresponding error message.
Constraint: Time Limit>0.
Verify Derivatives    a    Default = NO
This parameter specifies whether the function should perform numerical checks on the consistency of the user-supplied derivative functions. It is recommended that such checks are enabled when first developing the formulation of the problem; however, the derivative check results in a significant increase in the number of function evaluations and thus it should not be used in production code.
Constraint: Verify Derivatives=YES or NO.