NAG CL Interface
e04stc (handle_solve_ipopt)
Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default
settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings, please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
1
Purpose
e04stc, an interior point method optimization solver based on the IPOPT software package, is a solver for the NAG optimization modelling suite and is suitable for large-scale nonlinear programming (NLP) problems.
2
Specification
void e04stc (void *handle,
             void (*objfun)(Integer nvar, const double x[], double *fx,
                            Integer *inform, Nag_Comm *comm),
             void (*objgrd)(Integer nvar, const double x[], Integer nnzfd,
                            double fdx[], Integer *inform, Nag_Comm *comm),
             void (*confun)(Integer nvar, const double x[], Integer ncnln,
                            double gx[], Integer *inform, Nag_Comm *comm),
             void (*congrd)(Integer nvar, const double x[], Integer nnzgd,
                            double gdx[], Integer *inform, Nag_Comm *comm),
             void (*hess)(Integer nvar, const double x[], Integer ncnln,
                          Integer idf, double sigma, const double lambda[],
                          Integer nnzh, double hx[], Integer *inform,
                          Nag_Comm *comm),
             void (*monit)(Integer nvar, const double x[], Integer nnzu,
                           const double u[], Integer *inform,
                           const double rinfo[], const double stats[],
                           Nag_Comm *comm),
             Integer nvar,
             double x[],
             Integer nnzu,
             double u[],
             double rinfo[],
             double stats[],
             Nag_Comm *comm,
             NagError *fail)

The function may be called by the names: e04stc or nag_opt_handle_solve_ipopt.
3
Description
e04stc will typically be used for nonlinear programming problems (NLP)
$\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}\; f\left(x\right)\quad \text{subject to}\quad {l}_{g}\le g\left(x\right)\le {u}_{g}\text{,}\quad {l}_{B}\le Bx\le {u}_{B}\text{,}\quad {l}_{x}\le x\le {u}_{x}\text{,}$
where
 $n$ is the number of the decision variables,
 ${m}_{g}$ is the number of nonlinear constraints and $g\left(x\right)$, ${l}_{g}$ and ${u}_{g}$ are ${m}_{g}$-dimensional vectors,
 ${m}_{B}$ is the number of linear constraints, $B$ is an ${m}_{B}$ by $n$ matrix, and ${l}_{B}$ and ${u}_{B}$ are ${m}_{B}$-dimensional vectors,
 there are $n$ box constraints and ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors.
The objective
$f\left(x\right)$ can be specified in a number of ways:
e04rec for a dense linear function,
e04rfc for a sparse linear or quadratic function and
e04rgc for a general nonlinear function. In the last case,
objfun and
objgrd will be used to compute values and gradients of the objective function. Variable box bounds
${l}_{x},{u}_{x}$ can be specified with
e04rhc. The special case of linear constraints
${l}_{B},B,{u}_{B}$ is handled by
e04rjc while general nonlinear constraints
${l}_{g},g\left(x\right),{u}_{g}$ are specified by
e04rkc (both can be specified). Again, in the last case,
confun and
congrd will be used to compute values and gradients of the nonlinear constraint functions.
Finally, if you are willing to calculate second derivatives, the sparsity structure of the second partial derivatives of a nonlinear objective and/or of any nonlinear constraints is specified by
e04rlc, while the values of these derivatives themselves will be computed by the user-supplied
hess. While there is an option (see
${\mathbf{Hessian\; Mode}}$) that forces internal approximation of second derivatives, no such option exists for first derivatives, which must be computed accurately. If
e04rlc has been called and
hess is used to calculate values for second derivatives, both the objective and all the constraints must be included; it is not possible to provide a subset of these. If
e04rlc is not called, second derivatives will be approximated internally.
See
Section 4.1 in the
E04 Chapter Introduction for more details about the NAG optimization modelling suite.
3.1
Structure of the Lagrange Multipliers
For a problem consisting of
$n$ variable bounds,
${m}_{B}$ linear constraints and
${m}_{g}$ nonlinear constraints (as specified in
nvar,
nclin and
ncnln of
e04rhc,
e04rjc and
e04rkc, respectively), the number of Lagrange multipliers, and consequently the correct value for
nnzu, will be
$q=2n+2{m}_{B}+2{m}_{g}$. The order in which these will be found in the
u array is
${z}_{{1}_{L}},{z}_{{1}_{U}},{z}_{{2}_{L}},{z}_{{2}_{U}}\dots {z}_{{n}_{L}},{z}_{{n}_{U}},{\lambda}_{{1}_{L}},{\lambda}_{{1}_{U}},{\lambda}_{{2}_{L}},{\lambda}_{{2}_{U}}\dots {\lambda}_{{{m}_{B}}_{L}},{\lambda}_{{{m}_{B}}_{U}},{\lambda}_{{\left({m}_{B}+1\right)}_{L}},{\lambda}_{{\left({m}_{B}+1\right)}_{U}},{\lambda}_{{\left({m}_{B}+2\right)}_{L}},{\lambda}_{{\left({m}_{B}+2\right)}_{U}}\dots {\lambda}_{{\left({m}_{B}+{m}_{g}\right)}_{L}},{\lambda}_{{\left({m}_{B}+{m}_{g}\right)}_{U}}$
where the
$L$ and
$U$ subscripts refer to lower and upper bounds, respectively, and the variable bound constraint multipliers come first (if present, i.e., if
e04rhc was called), followed by the linear constraint multipliers (if present, i.e., if
e04rjc was called) and the nonlinear constraint multipliers (if present, i.e., if
e04rkc was called).
Significantly nonzero values for any of these, after the solver has terminated, indicate that the corresponding constraint is active. Significance is judged in the first instance by comparing the relative scale of any value with the smallest among them.
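The layout above can be captured in a few index helpers. The following sketch uses a stand-in `typedef` for the NAG `Integer` type and illustrative helper names (they are not part of the library); all indices are 0-based positions in the u array:

```c
typedef long Integer;  /* stand-in for the NAG Integer type */

/* Total number of multipliers: q = 2*n + 2*m_B + 2*m_g. */
static Integer nnzu_required(Integer n, Integer mB, Integer mg)
{
    return 2 * n + 2 * mB + 2 * mg;
}

/* Position in u[] of the LOWER-bound multiplier z for box constraint i. */
static Integer box_lower_idx(Integer i) { return 2 * i; }

/* Position in u[] of the UPPER-bound multiplier for linear constraint j;
   the 2*n variable-bound multipliers come first. */
static Integer lin_upper_idx(Integer n, Integer j) { return 2 * n + 2 * j + 1; }

/* Position in u[] of the LOWER-bound multiplier for nonlinear constraint k;
   the linear-constraint multipliers precede the nonlinear ones. */
static Integer nln_lower_idx(Integer n, Integer mB, Integer k)
{
    return 2 * n + 2 * mB + 2 * k;
}
```

For example, with $n=4$ variables, ${m}_{B}=1$ linear and ${m}_{g}=2$ nonlinear constraints, nnzu must be $14$ and the lower multiplier of the second nonlinear constraint sits at position $12$.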
4
References
Byrd R H, Gilbert J Ch and Nocedal J (2000) A trust region method based on interior point techniques for nonlinear programming Mathematical Programming 89 149–185
Byrd R H, Liu G and Nocedal J (1997) On the local behavior of an interior point method for nonlinear programming Numerical Analysis (eds D F Griffiths and D J Higham) Addison–Wesley
Conn A R, Gould N I M, Orban D and Toint Ph L (2000) A primal-dual trust-region algorithm for non-convex nonlinear programming Mathematical Programming 87 (2) 215–249
Conn A R, Gould N I M and Toint Ph L (2000) Trust Region Methods SIAM, Philadelphia
Fiacco A V and McCormick G P (1990) Nonlinear Programming: Sequential Unconstrained Minimization Techniques SIAM, Philadelphia
Gould N I M, Orban D, Sartenaer A and Toint Ph L (2001) Superlinear convergence of primal-dual interior point algorithms for nonlinear programming SIAM Journal on Optimization 11 (4) 974–1002
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report RAL-TR-2011-024
Wächter A and Biegler L T (2006) On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming Mathematical Programming 106(1) 25–57
Williams P and Lang B (2013) A framework for the $M{R}^{3}$ Algorithm: theory and implementation SIAM J. Sci. Comput. 35 740–766
Yamashita H (1998) A globally convergent primal-dual interior-point method for constrained optimization Optimization Methods and Software 10 443–469
5
Arguments

1:
$\mathbf{handle}$ – void *
Input

On entry: the handle to the problem. It needs to be initialized by
e04rac and the problem formulated by some of the functions
e04rec,
e04rfc,
e04rgc,
e04rhc,
e04rjc,
e04rkc and
e04rlc. It
must not be changed between calls to the NAG optimization modelling suite.

2:
$\mathbf{objfun}$ – function, supplied by the user
External Function

objfun must calculate the value of the nonlinear objective function
$f\left(x\right)$ at a specified value of the
$n$-element vector of variables
$x$. If there is no nonlinear objective (e.g.,
e04rec or
e04rfc was called to define a linear or quadratic objective function),
objfun will never be called by
e04stc
and may be
NULLFN.
The specification of
objfun is:
void 
objfun (Integer nvar,
const double x[],
double *fx,
Integer *inform,
Nag_Comm *comm)



1:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: the vector $x$ of variable values at which the objective function is to be evaluated.

3:
$\mathbf{fx}$ – double *
Output

On exit: the value of the objective function at $x$.

4:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
objfun. Specifically, if the value is negative, then the value of
fx will be discarded and the solver will either attempt to find a different trial point or terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_NAN (the same will happen if
fx is Infinity or NaN); otherwise, the solver will proceed normally.

5:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
objfun.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
objfun when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
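As an illustration (not part of the library), the following sketch implements objfun for the objective of Hock–Schittkowski problem 71, $f\left(x\right)={x}_{0}{x}_{3}\left({x}_{0}+{x}_{1}+{x}_{2}\right)+{x}_{2}$. The typedefs are stand-ins for the NAG headers; note how inform is used to flag a bad evaluation instead of returning NaN or infinity:

```c
#include <math.h>

typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* Objective of Hock-Schittkowski 71: f(x) = x0*x3*(x0+x1+x2) + x2. */
static void objfun(Integer nvar, const double x[], double *fx,
                   Integer *inform, Nag_Comm *comm)
{
    (void) nvar; (void) comm;  /* unused in this sketch */
    *fx = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
    /* A negative inform tells e04stc to discard fx and either retry
       at a different point or stop; flag non-finite results here. */
    *inform = isfinite(*fx) ? 0 : -1;
}
```

At the customary starting point $x=\left(1,5,5,1\right)$ this evaluates to $f\left(x\right)=16$.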

3:
$\mathbf{objgrd}$ – function, supplied by the user
External Function

objgrd must calculate the values of the nonlinear objective function gradients
$\frac{\partial f}{\partial x}$ at a specified value of the
$n$-element vector of variables
$x$. If there is no nonlinear objective (e.g.,
e04rec or
e04rfc was called to define a linear or quadratic objective function),
objgrd will never be called by
e04stc
and may be
NULLFN.
The specification of
objgrd is:

1:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: the vector $x$ of variable values at which the objective function gradient is to be evaluated.

3:
$\mathbf{nnzfd}$ – Integer
Input

On entry: the number of nonzero elements in the sparse gradient vector of the objective function, as was set in a previous call to
e04rgc.

4:
$\mathbf{fdx}\left[\mathit{dim}\right]$ – double
Output

On exit: the values of the nonzero elements in the sparse gradient vector of the objective function, in the order specified by
idxfd in a previous call to
e04rgc.
${\mathbf{fdx}}\left[\mathit{i}-1\right]$ will be the derivative
$\frac{\partial f}{\partial {x}_{{\mathbf{idxfd}}\left[\mathit{i}-1\right]}}$.

5:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
objgrd. Specifically, if the value is negative, the solution of the current problem will terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_NAN (the same will happen if
fdx contains Infinity or NaN); otherwise, computations will continue.

6:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
objgrd.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
objgrd when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: objgrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
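A matching objgrd sketch for the same illustrative objective, $f\left(x\right)={x}_{0}{x}_{3}\left({x}_{0}+{x}_{1}+{x}_{2}\right)+{x}_{2}$, assuming all four partial derivatives were registered as nonzero (idxfd $=1,2,3,4$) in the call to e04rgc; the typedefs are stand-ins for the NAG headers:

```c
typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* Gradient of f(x) = x0*x3*(x0+x1+x2) + x2, one entry per idxfd slot. */
static void objgrd(Integer nvar, const double x[], Integer nnzfd,
                   double fdx[], Integer *inform, Nag_Comm *comm)
{
    (void) nvar; (void) comm;
    if (nnzfd != 4) { *inform = -1; return; }  /* sparsity mismatch: abort */
    fdx[0] = x[3] * (2.0 * x[0] + x[1] + x[2]);  /* df/dx0 */
    fdx[1] = x[0] * x[3];                        /* df/dx1 */
    fdx[2] = x[0] * x[3] + 1.0;                  /* df/dx2 */
    fdx[3] = x[0] * (x[0] + x[1] + x[2]);        /* df/dx3 */
    *inform = 0;
}
```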

4:
$\mathbf{confun}$ – function, supplied by the user
External Function

confun must calculate the values of the
${m}_{g}$-element vector
$g\left(x\right)$ of nonlinear constraint functions at a specified value of the
$n$-element vector of variables
$x$. If no nonlinear constraints were registered in this
handle,
confun will never be called by
e04stc
and may be specified as
NULLFN.
The specification of
confun is:

1:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: the vector $x$ of variable values at which the constraint functions are to be evaluated.

3:
$\mathbf{ncnln}$ – Integer
Input

On entry:
${m}_{g}$, the number of nonlinear constraints, as specified in an earlier call to
e04rkc.

4:
$\mathbf{gx}\left[\mathit{dim}\right]$ – double
Output

On exit: the ${m}_{g}$ values of the nonlinear constraint functions at $x$.

5:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
confun. Specifically, if the value is negative, then the value of
gx will be discarded and the solver will either attempt to find a different trial point or terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_NAN (the same will happen if
gx contains Infinity or NaN); otherwise, the solver will proceed normally.

6:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
confun.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
confun when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: confun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
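An illustrative confun for the two nonlinear constraints of Hock–Schittkowski problem 71, ${g}_{1}\left(x\right)={x}_{0}{x}_{1}{x}_{2}{x}_{3}$ and ${g}_{2}\left(x\right)={x}_{0}^{2}+{x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}$; the typedefs are stand-ins for the NAG headers:

```c
typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* g_1(x) = x0*x1*x2*x3 and g_2(x) = x0^2 + x1^2 + x2^2 + x3^2. */
static void confun(Integer nvar, const double x[], Integer ncnln,
                   double gx[], Integer *inform, Nag_Comm *comm)
{
    (void) nvar; (void) comm;
    if (ncnln != 2) { *inform = -1; return; }  /* unexpected count: abort */
    gx[0] = x[0] * x[1] * x[2] * x[3];
    gx[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];
    *inform = 0;
}
```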

5:
$\mathbf{congrd}$ – function, supplied by the user
External Function

congrd must calculate the nonzero values of the sparse Jacobian of the nonlinear constraint functions
$\frac{\partial {g}_{i}}{\partial x}$ at a specified value of the
$n$-element vector of variables
$x$. If there are no nonlinear constraints (e.g.,
e04rkc was never called with the same
handle or it was called with
ncnln $=0$),
congrd will never be called by
e04stc
and may be specified as
NULLFN.
The specification of
congrd is:

1:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: the vector $x$ of variable values at which the Jacobian of the constraint functions is to be evaluated.

3:
$\mathbf{nnzgd}$ – Integer
Input

On entry: the number of nonzero elements in the sparse Jacobian of the constraint functions, as was set in a previous call to
e04rkc.

4:
$\mathbf{gdx}\left[\mathit{dim}\right]$ – double
Output

On exit: the nonzero values of the Jacobian of the nonlinear constraints, in the order specified by
irowgd and
icolgd in an earlier call to
e04rkc.
${\mathbf{gdx}}\left[\mathit{i}-1\right]$ will be the derivative
$\frac{\partial {g}_{{\mathbf{irowgd}}\left[\mathit{i}-1\right]}}{\partial {x}_{{\mathbf{icolgd}}\left[\mathit{i}-1\right]}}$.

5:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
congrd. Specifically, if the value is negative, the solution of the current problem will terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_NAN (the same will happen if
gdx contains Infinity or NaN); otherwise, computations will continue.

6:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
congrd.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
congrd when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: congrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
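A congrd sketch for the same two illustrative constraints, ${g}_{1}\left(x\right)={x}_{0}{x}_{1}{x}_{2}{x}_{3}$ and ${g}_{2}\left(x\right)={x}_{0}^{2}+{x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}$. It assumes the eight Jacobian nonzeros were registered row by row in e04rkc (irowgd $=1,1,1,1,2,2,2,2$; icolgd $=1,2,3,4,1,2,3,4$), so gdx must be filled in that order; the typedefs are stand-ins:

```c
typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* Jacobian of g_1(x) = x0*x1*x2*x3 and g_2(x) = sum of squares,
   ordered row by row to match the assumed irowgd/icolgd registration. */
static void congrd(Integer nvar, const double x[], Integer nnzgd,
                   double gdx[], Integer *inform, Nag_Comm *comm)
{
    (void) nvar; (void) comm;
    if (nnzgd != 8) { *inform = -1; return; }  /* sparsity mismatch: abort */
    gdx[0] = x[1] * x[2] * x[3];  /* dg1/dx0 */
    gdx[1] = x[0] * x[2] * x[3];  /* dg1/dx1 */
    gdx[2] = x[0] * x[1] * x[3];  /* dg1/dx2 */
    gdx[3] = x[0] * x[1] * x[2];  /* dg1/dx3 */
    gdx[4] = 2.0 * x[0];          /* dg2/dx0 */
    gdx[5] = 2.0 * x[1];          /* dg2/dx1 */
    gdx[6] = 2.0 * x[2];          /* dg2/dx2 */
    gdx[7] = 2.0 * x[3];          /* dg2/dx3 */
    *inform = 0;
}
```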

6:
$\mathbf{hess}$ – function, supplied by the user
External Function

hess must calculate the nonzero values of one of a set of second derivative quantities:
 the Hessian of the Lagrangian function $\sigma {\nabla}^{2}f+{\displaystyle \sum _{i=1}^{{m}_{g}}}{\lambda}_{i}{\nabla}^{2}{g}_{i}$
 the Hessian of the objective function ${\nabla}^{2}f$
 the Hessian of the constraint functions ${\nabla}^{2}{g}_{i}$
The value of argument
idf determines which one of these is to be computed and this, in turn, is determined by earlier calls to
e04rlc, when the nonzero sparsity structure of these Hessians was registered. Please note that it is not possible to supply only a subset of the Hessians (see
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_DERIV_ERRORS or
NE_NULL_ARGUMENT). If there were no calls to
e04rlc,
hess will never be called by
e04stc. In this case, the Hessian of the Lagrangian will be approximated by a limited-memory quasi-Newton method (L-BFGS).
The specification of
hess is:

1:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: the vector $x$ of variable values at which the Hessian functions are to be evaluated.

3:
$\mathbf{ncnln}$ – Integer
Input

On entry:
${m}_{g}$, the number of nonlinear constraints, as specified in an earlier call to
e04rkc.

4:
$\mathbf{idf}$ – Integer
Input

On entry: specifies the quantities to be computed in
hx.
 ${\mathbf{idf}}=-1$
 The values of the Hessian of the Lagrangian will be computed in hx. This will be the case if e04rlc has been called with idf of the same value.
 ${\mathbf{idf}}=0$
 The values of the Hessian of the objective function will be computed in hx. This will be the case if e04rlc has been called with idf of the same value.
 ${\mathbf{idf}}>0$
 The values of the Hessian of the ${\mathbf{idf}}$-th constraint function will be computed in hx. This will be the case if e04rlc has been called with idf of the same value.

5:
$\mathbf{sigma}$ – double
Input

On entry: if
${\mathbf{idf}}=-1$, the value of the
$\sigma $ quantity in the definition of the Hessian of the Lagrangian. Otherwise,
sigma should not be referenced.

6:
$\mathbf{lambda}\left[\mathit{dim}\right]$ – const double
Input

On entry: if
${\mathbf{idf}}=-1$, the values of the
${\lambda}_{i}$ quantities in the definition of the Hessian of the Lagrangian. Otherwise,
lambda should not be referenced.

7:
$\mathbf{nnzh}$ – Integer
Input

On entry: the number of nonzero elements in the Hessian to be computed.

8:
$\mathbf{hx}\left[\mathit{dim}\right]$ – double
Output

On exit: the nonzero values of the requested Hessian evaluated at
$x$. For each value of
idf, the ordering of nonzeros must follow the sparsity structure registered in the
handle by earlier calls to
e04rlc through the arguments
irowh and
icolh.

9:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
hess. Specifically, if the value is negative, the solution of the current problem will terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_NAN (the same will happen if
hx contains Infinity or NaN); otherwise, computations will continue.

10:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
hess.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
hess when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: hess should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
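The idf dispatch can be sketched as below for a small hypothetical problem with objective $f\left(x\right)={x}_{0}^{2}+{x}_{1}^{2}$ and one nonlinear constraint ${g}_{1}\left(x\right)={x}_{0}{x}_{1}$. The sparsity patterns named in the comments are assumed to have been registered through e04rlc, and the typedefs are stand-ins for the NAG headers:

```c
typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* Assumed registered (lower-triangle) sparsity patterns:
     objective:   (1,1), (2,2)         -> nnzh = 2
     constraint:  (2,1)                -> nnzh = 1
     Lagrangian:  (1,1), (2,1), (2,2)  -> nnzh = 3                        */
static void hess(Integer nvar, const double x[], Integer ncnln,
                 Integer idf, double sigma, const double lambda[],
                 Integer nnzh, double hx[], Integer *inform, Nag_Comm *comm)
{
    (void) nvar; (void) x; (void) ncnln; (void) comm;  /* constant Hessians */
    *inform = 0;
    if (idf == -1 && nnzh == 3) {        /* Hessian of the Lagrangian */
        hx[0] = sigma * 2.0;             /* sigma * d2f/dx0dx0 */
        hx[1] = lambda[0] * 1.0;         /* lambda_1 * d2g1/dx1dx0 */
        hx[2] = sigma * 2.0;             /* sigma * d2f/dx1dx1 */
    } else if (idf == 0 && nnzh == 2) {  /* Hessian of the objective */
        hx[0] = 2.0;
        hx[1] = 2.0;
    } else if (idf == 1 && nnzh == 1) {  /* Hessian of constraint 1 */
        hx[0] = 1.0;
    } else {
        *inform = -1;                    /* unexpected request: abort */
    }
}
```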

7:
$\mathbf{monit}$ – function, supplied by the user
External Function

monit is provided to enable you to monitor the progress of the optimization. A facility is provided to halt the optimization process if necessary, using argument
inform.
monit may be
specified as
NULLFN.
The specification of
monit is:
void 
monit (Integer nvar,
const double x[],
Integer nnzu,
const double u[],
Integer *inform,
const double rinfo[],
const double stats[],
Nag_Comm *comm)



1:
$\mathbf{nvar}$ – Integer
Input

On entry: $n$, the number of variables in the problem.

2:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double
Input

On entry: ${x}^{i}$, the values of the decision variables $x$ at the current iteration.

3:
$\mathbf{nnzu}$ – Integer
Input

On entry: the dimension of array
u.

4:
$\mathbf{u}\left[{\mathbf{nnzu}}\right]$ – const double
Input

On entry: if
${\mathbf{nnzu}}>0$,
u holds the values at the current iteration of Lagrange multipliers (dual variables) for the constraints. See
Section 3.1 for layout information.

5:
$\mathbf{inform}$ – Integer *
Input/Output

On entry: a nonnegative value.
On exit: must be set to a value describing the action to be taken by the solver on return from
monit. Specifically, if the value is negative, the solution of the current problem will terminate immediately with
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_USER_STOP; otherwise, computations will continue.

6:
$\mathbf{rinfo}\left[32\right]$ – const double
Input

On entry: error measures and various indicators at the end of the current iteration as described in
Section 9.1.

7:
$\mathbf{stats}\left[32\right]$ – const double
Input

On entry: solver statistics at the end of the current iteration as described in
Section 9.1.

8:
$\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to
monit.
 user – double *
 iuser – Integer *
 p – Pointer
The type Pointer will be
void *. Before calling
e04stc you may allocate memory and initialize these pointers with various quantities for use by
monit when called from
e04stc (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: monit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04stc. If your code inadvertently
does return any NaNs or infinities,
e04stc is likely to produce unexpected results.
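A minimal monit sketch: it counts its own calls through comm->p (a counter the caller sets up before invoking e04stc) and requests termination via a negative inform once a chosen limit is reached. The limit of 25 is arbitrary and the typedefs are stand-ins for the NAG headers:

```c
typedef long Integer;                 /* stand-in for the NAG Integer type */
typedef struct { void *p; } Nag_Comm; /* reduced stand-in for Nag_Comm */

/* Stop the solver after 25 monitoring calls; comm->p must point to an
   Integer counter initialized by the caller before the solve. */
static void monit(Integer nvar, const double x[], Integer nnzu,
                  const double u[], Integer *inform, const double rinfo[],
                  const double stats[], Nag_Comm *comm)
{
    Integer *count = (Integer *) comm->p;
    (void) nvar; (void) x; (void) nnzu; (void) u; (void) rinfo; (void) stats;
    *inform = (++(*count) >= 25) ? -1 : 0;  /* negative => NE_USER_STOP */
}
```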

8:
$\mathbf{nvar}$ – Integer
Input

On entry:
$n$, the number of variables in the problem. It must be unchanged from the value set during the initialization of the handle by
e04rac.

9:
$\mathbf{x}\left[{\mathbf{nvar}}\right]$ – double
Input/Output

On entry: ${x}^{0}$, the initial estimates of the variables $x$.
On exit: the final values of the variables $x$.

10:
$\mathbf{nnzu}$ – Integer
Input

On entry: the number of Lagrange multipliers that are to be returned in array
u.
If
${\mathbf{nnzu}}=0$,
u will not be referenced; otherwise it needs to match the dimension
$q$ as explained in
Section 3.1.
Constraints:
 ${\mathbf{nnzu}}\ge 0$;
 if ${\mathbf{nnzu}}>0$, ${\mathbf{nnzu}}=q$.

11:
$\mathbf{u}\left[{\mathbf{nnzu}}\right]$ – double
Output

Note: if
${\mathbf{nnzu}}>0$,
u holds Lagrange multipliers (dual variables) for the constraints. See
Section 3.1 for layout information. If
${\mathbf{nnzu}}=0$,
u will not be referenced and may be
NULL.
On exit: the final values of the Lagrange multipliers $z,\lambda $.

12:
$\mathbf{rinfo}\left[32\right]$ – double
Output

On exit: error measures and various indicators at the end of the final iteration as given in the table below:
$0$  Objective function value $f\left(x\right)$.
$1$  Constraint violation (primal infeasibility) (8).
$2$  Dual infeasibility (7).
$3$  Complementarity.
$4$  Karush–Kuhn–Tucker (KKT) error.

13:
$\mathbf{stats}\left[32\right]$ – double
Output

On exit: solver statistics at the end of the final iteration as given in the table below:
$0$  Number of iterations.
$2$  Number of backtracking trial steps.
$3$  Number of Hessian evaluations.
$4$  Number of objective gradient evaluations.
$7$  Total wall clock time elapsed.
$18$  Number of objective function evaluations.
$19$  Number of constraint function evaluations.
$20$  Number of constraint Jacobian evaluations.

14:
$\mathbf{comm}$ – Nag_Comm *

The NAG communication argument (see
Section 3.1.1 in the Introduction to the NAG Library CL Interface).

15:
$\mathbf{fail}$ – NagError *
Input/Output

The NAG error argument (see
Section 7 in the Introduction to the NAG Library CL Interface).
6
Error Indicators and Warnings
 NE_ALLOC_FAIL

Dynamic memory allocation failed.
See
Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
 NE_ALREADY_DEFINED

A different solver from the suite has already been used. Initialize a new
handle using
e04rac.
 NE_BAD_PARAM

On entry, argument $\u2329\mathit{\text{value}}\u232a$ had an illegal value.
 NE_DERIV_ERRORS

Either all of the constraint and objective Hessian structures must be defined or none (in which case, the Hessians will be approximated by a limited-memory quasi-Newton (L-BFGS) method).
On entry, a nonlinear objective function has been defined but no objective Hessian sparsity structure has been defined through
e04rlc.
On entry, a nonlinear constraint function has been defined but no constraint Hessian sparsity structure has been defined through
e04rlc, for constraint number
$\u2329\mathit{\text{value}}\u232a$.
 NE_HANDLE

The supplied
handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been initialized by
e04rac or it has been corrupted.
 NE_INT

On entry, ${\mathbf{nnzu}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: ${\mathbf{nnzu}}=\u2329\mathit{\text{value}}\u232a$ or $0$.
On entry, ${\mathbf{nnzu}}=\u2329\mathit{\text{value}}\u232a$.
Constraint: no constraints present, so ${\mathbf{nnzu}}$ must be $0$.
 NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact
NAG for assistance.
See
Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
 NE_MAYBE_INFEASIBLE

The solver detected an infeasible problem.
The restoration phase converged to a point that is a minimizer for the constraint violation (in the ${\ell}_{1}$norm), but is not feasible for the original problem. This indicates that the problem may be infeasible (or at least that the algorithm is stuck at a locally infeasible point). The returned point (the minimizer of the constraint violation) might help you to find which constraint is causing the problem. If you believe that the NLP is feasible, it might help to start the optimization from a different point.
 NE_MAYBE_UNBOUNDED

The solver terminated due to diverging iterates.
The maxnorm of the iterates has become larger than a preset value. This can happen if the problem is unbounded below and the iterates are diverging.
 NE_NO_IMPROVEMENT

The solver terminated after the search direction became too small.
This indicates that the solver is calculating very small step sizes and is making very little progress. This could happen if the problem has been solved to the best numerical accuracy possible given the current NLP scaling.
 NE_NO_LICENCE

Your licence key may have expired or may not have been installed correctly.
See
Section 8 in the Introduction to the NAG Library CL Interface for further information.
 NE_NOT_IMPLEMENTED

This function is not available in this implementation.
 NE_NULL_ARGUMENT

The problem requires the
confun values.
Please provide a proper
confun function.
The problem requires the
congrd derivatives.
Please provide a proper
congrd function.
The problem requires the
hess derivatives.
Either change the optional parameter
${\mathbf{Hessian\; Mode}}$ or provide a proper
hess function.
The problem requires the
objfun values.
Please provide a proper
objfun function.
The problem requires the
objgrd derivatives.
Please provide a proper
objgrd function.
 NE_PHASE

The problem is already being solved.
 NE_REF_MATCH

The information supplied does not match with that previously stored.
On entry,
${\mathbf{nvar}}=\u2329\mathit{\text{value}}\u232a$ must match that given during initialization of the
handle, i.e.,
$\u2329\mathit{\text{value}}\u232a$.
 NE_SETUP_ERROR

This solver does not support the model defined in the handle.
 NE_SUBPROBLEM

The solver terminated after an error in the step computation.
This message is printed if the solver is unable to compute a search direction, despite several attempts to modify the iteration matrix. Usually, the value of the regularization parameter then becomes too large. One situation where this can happen is when values in the Hessian are invalid (NaN or Infinity). You can check whether this is true by using the ${\mathbf{Verify\; Derivatives}}$ option.
The solver terminated after failure in the restoration phase.
This indicates that the restoration phase failed to find a feasible point that was acceptable to the filter line search for the original problem. This could happen if the problem is highly degenerate, does not satisfy the constraint qualification, or if your NLP code provides incorrect derivative information.
The solver terminated after the maximum time allowed was exceeded.
Maximum number of seconds exceeded. Use optional parameter ${\mathbf{Time\; Limit}}$ to reset the limit.
The solver terminated due to an invalid option.
Please contact
NAG with details of the call to
e04stc.
The solver terminated due to an invalid problem definition.
Please contact
NAG with details of the call to
e04stc.
The solver terminated with not enough degrees of freedom.
This indicates that your problem, as specified, has too few degrees of freedom. This can happen if you have too many equality constraints, or if you fix too many variables.
 NE_TOO_MANY_ITER

Maximum number of iterations exceeded.
 NE_USER_NAN

Invalid number detected in user function.
Either inform was set to a negative value within the user-supplied functions objfun, objgrd, confun, congrd or hess, or an Infinity or NaN was detected in values returned from them.
 NE_USER_STOP

User requested termination during a monitoring step.
inform was set to a negative value in
monit.
 NW_NOT_CONVERGED

The solver reports NLP solved to acceptable level.
This indicates that the algorithm did not converge to the desired tolerances, but that it was able to obtain a point satisfying the acceptable tolerance level. This may happen if the desired tolerances are too small for the current problem.
7
Accuracy
The accuracy of the solution is driven by optional parameter ${\mathbf{Stop\; Tolerance\; 1}}$.
If
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR on the final exit, the returned point satisfies Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to
$\sqrt{\epsilon}$ where
$\epsilon $ is the
machine precision) and thus it is a good estimate of a local solution. If
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NW_NOT_CONVERGED, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to
Section 11.1 and the description of the particular options.
8
Parallelism and Performance
e04stc is not threaded in any implementation.
9.1
Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters
${\mathbf{Print\; File}}$ and
${\mathbf{Monitoring\; File}}$. Optional parameters
${\mathbf{Print\; Level}}$ and
${\mathbf{Monitoring\; Level}}$ determine the exposed level of detail. This allows, for example, the generation of a detailed log in a file while the condensed information is displayed on the screen. This section also describes what kind of information is made available to the monitoring function
monit via
rinfo and
stats.
There are four sections printed to the primary output with the default settings (level $2$): a derivative check, a header, an iteration log and a summary. At higher levels more information will be printed, including any internal IPOPT options that have been changed from their default values.
Derivative Check
If
${\mathbf{Verify\; Derivatives}}$ is set, then information will appear about any errors detected in the user-supplied derivative functions
objgrd,
congrd or
hess. It may look like this:
Starting derivative checker for first derivatives.
* grad_f[ 1] = 2.000000e+00 ~ -2.455000e+01 [ 1.081e+00]
* jac_g [ 1, 4] = 4.700969e+01 v ~ 5.200968e+01 [ 9.614e-02]
Starting derivative checker for second derivatives.
* obj_hess[ 1, 1] = 1.881000e+03 v ~ 1.882000e+03 [ 5.314e-04]
* 1th constr_hess[ 1, 3] = 2.988964e+00 v ~ -1.103543e-02 [ 3.000e+00]
Derivative checker detected 3 error(s).
The first line indicates that the value of the partial derivative of the objective with respect to the first variable as returned by
objgrd (the first number printed) differs sufficiently from a finite-difference estimate derived from
objfun (the second number printed). The number in square brackets is the relative difference between these two numbers.
The second line reports on a discrepancy for the partial derivative of the first constraint with respect to the fourth variable. If the indicator v is absent, the discrepancy refers to a component that had not been included in the sparsity structure, in which case the nonzero structure of the derivatives should be corrected. Mistakes in the first derivatives should be corrected before attempting to correct mistakes in the second derivatives.
The third line reports on a discrepancy in a second derivative of the objective function, differentiated with respect to the first variable, twice.
The fourth line reports on a discrepancy in a second derivative of the first constraint, differentiated with respect to the first and third variables.
Header
If
${\mathbf{Print\; Level}}\ge 1$, the header will contain statistics about the size of the problem as the solver sees it, i.e., they reflect any changes imposed by preprocessing and problem transformations. The header may look like:
Number of nonzeros in equality constraint Jacobian...: 4
Number of nonzeros in inequality constraint Jacobian.: 8
Number of nonzeros in Lagrangian Hessian.............: 10
Total number of variables............................: 4
variables with only lower bounds: 4
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 1
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
It summarises what is known about the variables and the constraints. Simple bounds are set by
e04rhc and standard equalities and inequalities by
e04rjc.
Iteration log
If
${\mathbf{Print\; Level}}=2$, the status of each iteration is condensed to one line. The line shows:
iter 
The current iteration count. This includes regular iterations and iterations during the restoration phase. If the algorithm is in the restoration phase, the letter r will be appended to the iteration number. The iteration number $0$ represents the starting point. This quantity is also available as ${\mathbf{stats}}\left[0\right]$ of monit. 
objective 
The unscaled objective value at the current point (given the current NLP scaling). During the restoration phase, this value remains the unscaled objective value for the original problem. This quantity is also available as ${\mathbf{rinfo}}\left[0\right]$ of monit. 
inf_pr 
The unscaled constraint violation at the current point (given the current NLP scaling). This quantity is the infinity-norm (max) of the (unscaled) constraints ${g}_{i}$. During the restoration phase, this value remains the constraint violation of the original problem at the current point. This quantity is also available as ${\mathbf{rinfo}}\left[1\right]$ of monit. 
inf_du 
The scaled dual infeasibility at the current point (given the current NLP scaling). This quantity measures the infinity-norm (max) of the internal dual infeasibility, ${\lambda}_{i}$ of Eq. (4a) in the implementation paper Wächter and Biegler (2006), including inequality constraints reformulated using slack variables and NLP scaling. During the restoration phase, this is the value of the dual infeasibility for the restoration phase problem. This quantity is also available as ${\mathbf{rinfo}}\left[2\right]$ of monit. 
lg(mu) 
$\log_{10}$ of the value of the barrier parameter $\mu $. $\mu $ itself is also available as ${\mathbf{rinfo}}\left[3\right]$ of monit. 
d 
The infinity norm (max) of the primal step (for the original variables x and the internal slack variables s). During the restoration phase, this value includes the values of additional variables, $\bar{p}$ and $\bar{n}$ (see Eq. (30) in Wächter and Biegler (2006)). This quantity is also available as ${\mathbf{rinfo}}\left[4\right]$ of monit. 
lg(rg) 
$\log_{10}$ of the value of the regularization term for the Hessian of the Lagrangian in the augmented system (${\delta}_{w}$ of Eq. (26) and Section 3.1 in Wächter and Biegler (2006)). A dash (-) indicates that no regularization was done. The regularization term itself is also available as ${\mathbf{rinfo}}\left[5\right]$ of monit. 
alpha_du 
The step size for the dual variables (${\alpha}_{k}^{z}$ of Eq. (14c) in Wächter and Biegler (2006)). This quantity is also available as ${\mathbf{rinfo}}\left[6\right]$ of monit. 
alpha_pr 
The step size for the primal variables (${\alpha}_{k}$ of Eq. (14a) in Wächter and Biegler (2006)). This quantity is also available as ${\mathbf{rinfo}}\left[7\right]$ of monit. The number is usually followed by a character for additional diagnostic information regarding the step acceptance criterion.
f 
f-type iteration in the filter method without second-order correction 
F 
f-type iteration in the filter method with second-order correction 
h 
h-type iteration in the filter method without second-order correction 
H 
h-type iteration in the filter method with second-order correction 
k 
penalty value unchanged in merit function method without second-order correction 
K 
penalty value unchanged in merit function method with second-order correction 
n 
penalty value updated in merit function method without second-order correction 
N 
penalty value updated in merit function method with second-order correction 
R 
Restoration phase just started 
w 
in watchdog procedure 
s 
step accepted in soft restoration phase 
t/T 
tiny step accepted without line search 
r 
some previous iterate restored 

ls 
The number of backtracking line search steps (does not include second-order correction steps). This quantity is also available as ${\mathbf{stats}}\left[1\right]$ of monit. 
Note that the step acceptance mechanisms in IPOPT consider the barrier objective function
(5) which is usually different from the value reported in the
objective column. Similarly, for the purposes of the step acceptance, the constraint violation is measured for the internal problem formulation, which includes slack variables for inequality constraints and potentially NLP scaling of the constraint functions. This value, too, is usually different from the value reported in
inf_pr. As a consequence, a new iterate might have worse values both for the objective function and the constraint violation as reported in the iteration output, seemingly contradicting the globalization procedure.
Note that all these values are also available in
${\mathbf{rinfo}}\left[0\right],\dots ,{\mathbf{rinfo}}\left[7\right]$ and
${\mathbf{stats}}\left[0\right],\dots ,{\mathbf{stats}}\left[1\right]$ of the monitoring function
monit.
The output might look as follows:
iter objective inf_pr inf_du lg(mu) d lg(rg) alpha_du alpha_pr ls
0 2.6603500e+05 1.55e+02 3.21e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.5053889e+05 7.95e+01 1.43e+01 -1.0 1.16e+00 - 4.55e-01 1.00e+00f 1
2 8.9745785e+04 3.91e+01 6.45e+00 -1.0 3.07e+01 - 5.78e-03 1.00e+00f 1
3 3.9878595e+04 1.63e+01 3.47e+00 -1.0 5.19e+00 0.0 2.43e-01 1.00e+00f 1
4 2.7780042e+04 1.08e+01 1.64e+00 -1.0 3.66e+01 - 7.24e-01 8.39e-01f 1
5 2.6194274e+04 1.01e+01 1.49e+00 -1.0 1.07e+01 - 1.00e+00 1.05e-01f 1
6 1.5422960e+04 4.75e+00 6.82e-01 -1.0 1.74e+01 - 1.00e+00 1.00e+00f 1
7 1.1975453e+04 3.14e+00 7.26e-01 -1.0 2.83e+01 - 1.00e+00 5.06e-01f 1
8 8.3508421e+03 1.34e+00 2.04e-01 -1.0 3.96e+01 - 9.27e-01 1.00e+00f 1
9 7.0657495e+03 4.85e-01 9.22e-02 -1.0 5.32e+01 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) d lg(rg) alpha_du alpha_pr ls
10 6.8359393e+03 1.17e-01 1.28e-01 -1.7 4.69e+01 - 8.21e-01 1.00e+00h 1
11 6.6508917e+03 1.52e-02 1.52e-02 -2.5 1.87e+01 - 1.00e+00 1.00e+00h 1
12 6.4123213e+03 8.77e-03 1.49e-01 -3.8 1.85e+01 - 7.49e-01 1.00e+00f 1
13 6.3157361e+03 4.33e-03 1.90e-03 -3.8 2.07e+01 - 1.00e+00 1.00e+00f 1
14 6.2989280e+03 1.12e-03 4.06e-04 -3.8 1.54e+01 - 1.00e+00 1.00e+00h 1
15 6.2996264e+03 9.90e-05 2.05e-04 -5.7 5.35e+00 - 9.63e-01 1.00e+00h 1
16 6.2998436e+03 0.00e+00 1.86e-07 -5.7 4.55e-01 - 1.00e+00 1.00e+00h 1
17 6.2998424e+03 0.00e+00 6.18e-12 -8.2 2.62e-03 - 1.00e+00 1.00e+00h 1
If ${\mathbf{Print\; Level}}>2$, each iteration produces significantly more detailed output comprising detailed error measures and output from internal operations. The output is reasonably self-explanatory so it is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced if
${\mathbf{Print\; Level}}\ge 1$. An example is shown below:
Number of Iterations....: 6
(scaled) (unscaled)
Objective...............: 7.8692659500479623e-01 6.2324586324379867e+00
Dual infeasibility......: 7.9744615766675617e-10 6.3157735687207093e-09
Constraint violation....: 8.3555384833289281e-12 8.3555384833289281e-12
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 7.9744615766675617e-10 6.3157735687207093e-09
Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total CPU secs in IPOPT (w/o function evaluations) = 0.724
Total CPU secs in NLP function evaluations = 0.343
EXIT: Optimal Solution Found.
It starts with the total number of iterations the algorithm went through. Then, five quantities are printed, all evaluated at the termination point: the value of the objective function, the dual infeasibility, the constraint violation, the complementarity and the NLP error.
This is followed by some statistics on the number of calls to user-supplied functions and the CPU time spent in user-supplied functions and in the main algorithm. Lastly, the exit status is indicated by a short message. Detailed timings of the algorithm are displayed only if ${\mathbf{Stats\; Time}}$ is set.
9.2
Internal Changes
Internal changes have been made to this function as follows:
 At Mark 26.1:
The default for the optional parameter ${\mathbf{Verify\; Derivatives}}$ has been
changed from $\mathrm{AUTO}$ to $\mathrm{NO}$, meaning that the derivatives
will not be checked unless you explicitly request them to be, and the
description of the option $\mathrm{AUTO}$ has been removed.
${\mathbf{Print\; Level}}=0$ and
${\mathbf{Monitoring\; Level}}=0$ no longer produce
output; a banner was printed in the previous release.
A new option ${\mathbf{Task}}$
has been introduced. It allows you to easily switch between
minimization, maximization and feasible point. The previous release
assumed minimization, which is now the default choice.
A new option ${\mathbf{Matrix\; Ordering}}$ has been introduced. It allows you to choose
the fill-reducing ordering for the internal sparse linear algebra
solver. Originally, at Mark 26, only AMD ordering was
implemented. METIS ordering has now been introduced which is
especially efficient for large-scale problems. A heuristic to
automatically choose between the two orderings has also been added and
is now the default choice.
 At Mark 27:
The name of the argument 'mon' has been updated to
monit to be consistent with the rest of the NAG Optimization Suite routines.
For details of all known issues which have been reported for the NAG Library please refer to the
Known Issues.
9.3
Additional Licensor
Parts of the code for
e04stc are distributed according to terms imposed by another licensor. Please refer to
Library Licensors for further details.
10
Example
This example is based on Problem 73 in
Hock and Schittkowski (1981) and involves the minimization of the linear function
subject to the bounds
to the nonlinear constraint
and the linear constraints
The initial point, which is infeasible, is
and
$f\left({x}_{0}\right)=130.8$.
The optimal solution (to five significant figures) is
10.1
Program Text
10.2
Program Results
11
Algorithmic Details
e04stc is an implementation of IPOPT (see
Wächter and Biegler (2006)) that is fully supported and maintained by NAG. It uses the Harwell package MA97 for the underlying sparse linear algebra factorization and either the MC68 approximate minimum degree algorithm or the METIS algorithm for the ordering. Any issues relating to
e04stc should be directed to NAG who assume all responsibility for the
e04stc function and its implementation.
In the remainder of this section, we repeat part of Section 2.1 of
Wächter and Biegler (2006).
To simplify notation, we describe the method for the problem formulation
Range constraints of the form $l\le c\left(x\right)\le u$ can be expressed in this formulation by introducing slack variables ${x}_{s}\ge 0$, ${x}_{t}\ge 0$ (increasing $n$ by $2$) and defining new equality constraints $g\left(x,{x}_{s}\right)\equiv c\left(x\right)-l-{x}_{s}=0$ and $g\left(x,{x}_{t}\right)\equiv u-c\left(x\right)-{x}_{t}=0$.
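As a small illustration of this reformulation (an example constructed here, not taken from this document), a single range constraint on two variables becomes two equalities with nonnegative slacks:

```latex
\[
  1 \le x_1 + x_2 \le 3
  \quad\Longrightarrow\quad
  \begin{cases}
    (x_1 + x_2) - 1 - x_s = 0, \\
    3 - (x_1 + x_2) - x_t = 0, \\
    x_s \ge 0, \quad x_t \ge 0.
  \end{cases}
\]
```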
e04stc, like the methods discussed in
Williams and Lang (2013),
Byrd et al. (2000),
Conn et al. (2000) and
Fiacco and McCormick (1990), computes (approximate) solutions for a sequence of barrier problems
for a decreasing sequence of barrier parameters
$\mu $ converging to zero.
The algorithm may be interpreted as a homotopy method to the primaldual equations,
with the homotopy parameter
$\mu $, which is driven to zero (see e.g.,
Byrd et al. (1997) and
Gould et al. (2001)). Here,
$X\coloneqq \mathrm{diag}\left(x\right)$ for a vector
$x$ (similarly
$Z\coloneqq \mathrm{diag}\left(z\right)$, etc.), and
$e$ stands for the vector of all ones of appropriate dimension, while
$\lambda \in {\mathbb{R}}^{m}$ and
$z\in {\mathbb{R}}^{n}$ correspond to the Lagrange multipliers for the equality constraints
(3) and the bound constraints
(4), respectively.
Note that the equations
(7),
(8) and
(9) for
$\mu =0$ together with ‘
$x$,
$z\ge 0$’ are the Karush–Kuhn–Tucker (KKT) conditions for the original problem
(2),
(3) and
(4). Those are the first-order optimality conditions for
(2),
(3) and
(4) if constraint qualifications are satisfied (
Conn et al. (2000)).
Starting from an initial point supplied in
x,
e04stc computes an approximate solution to the barrier problem
(5) and
(6) for a fixed value of
$\mu $ (by default,
$0.1$), then decreases the barrier parameter, and continues the solution of the next barrier problem from the approximate solution of the previous one.
A sophisticated overall termination criterion for the algorithm is used to overcome potential difficulties when the Lagrange multipliers become large. This can happen, for example, when the gradients of the active constraints are nearly linearly dependent. The termination criterion is described in detail by
Wächter and Biegler (2006) (also see below
Section 11.1).
11.1
Stopping Criteria
Using the individual parts of the primal-dual equations
(7),
(8) and
(9), we define the optimality error for the barrier problem as
with scaling parameters
${s}_{d}$,
${s}_{c}\ge 1$ defined below (not to be confused with NLP scaling factors described in
Section 11.2). By
${E}_{0}\left(x,\lambda ,z\right)$ we denote
(10) with
$\mu =0$; this measures the optimality error for the original problem
(2),
(3) and
(4). The overall algorithm terminates if an approximate solution
$\left({\stackrel{~}{x}}_{*},{\stackrel{~}{\lambda}}_{*},{\stackrel{~}{z}}_{*}\right)$ (including multiplier estimates) satisfying
is found, where
${\epsilon}_{\mathit{tol}}>0$ is the usersupplied error tolerance in optional parameter
${\mathbf{Stop\; Tolerance\; 1}}$.
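For reference, the optimality error referred to above takes the following form in Wächter and Biegler (2006), with the notation adapted to the equality-constrained formulation used here (this is a reproduction from the cited paper, not additional NAG material):

```latex
\[
  E_{\mu}(x,\lambda,z) \;=\;
  \max\left\{
    \frac{\left\| \nabla f(x) + \nabla g(x)\,\lambda - z \right\|_{\infty}}{s_d},\;
    \left\| g(x) \right\|_{\infty},\;
    \frac{\left\| X Z e - \mu e \right\|_{\infty}}{s_c}
  \right\},
\]
and the overall algorithm terminates when
\[
  E_{0}\big(\tilde{x}_{*}, \tilde{\lambda}_{*}, \tilde{z}_{*}\big) \;\le\; \epsilon_{\mathit{tol}}.
\]
```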
Even if the original problem is well scaled, the multipliers
$\lambda $ and
$z$ might become very large, for example, when the gradients of the active constraints are (nearly) linearly dependent at a solution of
(2),
(3) and
(4). In this case, the algorithm might encounter numerical difficulties satisfying the unscaled primal-dual equations
(7),
(8) and
(9) to a tight tolerance. In order to adapt the termination criteria to handle such circumstances, we choose the scaling factors
in
(10). In this way, a component of the optimality error is scaled whenever the average value of the multipliers becomes larger than a fixed number
${s}_{\mathrm{max}}\ge 1$ (
${s}_{\mathrm{max}}=100$ in our implementation). Also note that, in the case that the multipliers diverge,
${E}_{0}\left(x,\lambda ,z\right)$ can only become small if a Fritz John point for
(2),
(3) and
(4) is approached, or if the primal variables diverge as well.
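The scaling factors referred to above are defined in Wächter and Biegler (2006) as follows (reproduced here from the cited paper, where $m$ and $n$ are the numbers of equality constraints and variables of the internal formulation):

```latex
\[
  s_d \;=\; \frac{\max\left\{ s_{\max},\; \dfrac{\|\lambda\|_1 + \|z\|_1}{m + n} \right\}}{s_{\max}},
  \qquad
  s_c \;=\; \frac{\max\left\{ s_{\max},\; \dfrac{\|z\|_1}{n} \right\}}{s_{\max}},
\]
```

so that both factors equal $1$ while the average multiplier size stays below ${s}_{\mathrm{max}}$ and grow proportionally once it exceeds ${s}_{\mathrm{max}}$.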
11.2
Scaling the NLP
Ideally, the formulated problem should be scaled so that, near the solution, all function gradients (objective and constraints), when nonzero, are of a similar order of magnitude.
e04stc will compute automatic NLP scaling factors for the objective and constraint functions (but not the decision variables) and apply them if large imbalances of scale are detected. This rescaling is only computed at the starting point. References to scaled or unscaled objective or constraints in
Section 9.1 and
Section 11 should be understood in this context.
12
Optional Parameters
Several optional parameters in e04stc define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04stc these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling
e04zmc at any time between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice of
$\mathrm{AUTO}$), their value can be retrieved by
e04znc. If the solver is called again, any such options are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in
Section 12.1.
12.1
Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
 the keywords, where the minimum abbreviation of each keyword is underlined;
 a parameter value,
where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
 the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see X02AJC).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Hessian Mode  $a$  Default $=\mathrm{AUTO}$ 
This parameter specifies whether the Hessian will be user-supplied (in
hx) or approximated by
e04stc using a limited-memory quasi-Newton L-BFGS method. In the
$\mathrm{AUTO}$ setting, if no Hessian structure has been registered in the problem with a call to
e04rlc, and there are explicitly nonlinear user-supplied functions, then the Hessian will be approximated. Otherwise
hess will be called if and only if either of
e04rgc or
e04rkc has been used to define the problem. Approximating the Hessian is likely to require more iterations to achieve convergence but will reduce the time spent in user-supplied functions.
Constraint: ${\mathbf{Hessian\; Mode}}=\mathrm{AUTO}$, $\mathrm{EXACT}$ or $\mathrm{APPROXIMATE}$.
Infinite Bound Size  $r$  Default $\text{}={10}^{20}$ 
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will
be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
It also serves as a limit for the objective function to be considered unbounded (
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_MAYBE_UNBOUNDED).
Constraint: ${\mathbf{Infinite\; Bound\; Size}}\ge 1000$.
Monitoring File  $i$  Default $=-1$ 
(See
Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If
$i\ge 0$, the
Nag_FileID number (as returned from
x04acc)
for the secondary (monitoring) output. If set to
$-1$, no secondary output is provided. The information output to this unit is controlled by
${\mathbf{Monitoring\; Level}}$.
Constraint: ${\mathbf{Monitoring\; File}}\ge -1$.
Monitoring Level  $i$  Default $=4$ 
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with ${\mathbf{Print\; Level}}$.
Constraint: $0\le {\mathbf{Monitoring\; Level}}\le 5$.
Matrix Ordering  $a$  Default $=\mathrm{AUTO}$ 
This parameter specifies the ordering to be used by the internal sparse linear algebra solver. It affects the number of nonzeros in the factorized matrix and thus influences the cost per iteration.
 ${\mathbf{Matrix\; Ordering}}=\mathrm{AUTO}$
 A heuristic is used to choose automatically between METIS and AMD orderings.
 ${\mathbf{Matrix\; Ordering}}=\mathrm{BEST}$
 Both AMD and METIS orderings are computed at the beginning of the solve and the one with the fewest nonzeros in the factorized matrix is selected.
 ${\mathbf{Matrix\; Ordering}}=\mathrm{AMD}$
 An approximate minimum degree (AMD) ordering is used.
 ${\mathbf{Matrix\; Ordering}}=\mathrm{METIS}$
 METIS ordering is used.
Constraint: ${\mathbf{Matrix\; Ordering}}=\mathrm{AUTO}$, $\mathrm{BEST}$, $\mathrm{AMD}$ or $\mathrm{METIS}$.
Outer Iteration Limit  $i$  Default $\text{}=100$ 
The maximum number of iterations to be performed by
e04stc. Setting the option too low might lead to
${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_TOO_MANY_ITER.
Constraint: ${\mathbf{Outer\; Iteration\; Limit}}\ge 0$.
Print File  $i$  Default
$=\mathrm{Nag\_FileID\; number\; associated\; with\; stdout}$

(See
Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If
$i\ge 0$, the
Nag_FileID number (as returned from
x04acc,
stdout as the default)
for the primary output of the solver. If
${\mathbf{Print\; File}}=-1$, the primary output is completely turned off independently of other settings. The information output to this unit is controlled by
${\mathbf{Print\; Level}}$.
Constraint: ${\mathbf{Print\; File}}\ge -1$.
Print Level  $i$  Default $=2$ 
This parameter defines how detailed information should be printed by the solver to the primary output.
$i$ 
Output 
$0$ 
No output from the solver 
$1$ 
Additionally, derivative check information (if requested), the Header and the Summary. 
$2$ 
Additionally, the Iteration log. 
$3$, $4$ 
Additionally, details of each iteration with scalar quantities printed. 
$5$ 
Additionally, individual components of arrays are printed resulting in large output. 
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Stats Time  $a$  Default $=\mathrm{NO}$ 
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solving approaches.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$ or $\mathrm{NO}$.
Stop Tolerance 1  $r$  Default $=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({10}^{6},\sqrt{\epsilon}\right)$ 
This option sets the value
${\epsilon}_{\mathrm{tol}}$ which is used for optimality and complementarity tests from the KKT conditions; see
Section 11.1.
Constraint: ${\mathbf{Stop\; Tolerance\; 1}}>\epsilon $.
Task  $a$  Default $=\mathrm{MINIMIZE}$ 
This parameter specifies the required direction of the optimization. If ${\mathbf{Task}}=\mathrm{FEASIBLE\; POINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, ${\mathbf{Task}}$ reverts to $\mathrm{FEASIBLE\; POINT}$ automatically.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE\; POINT}$.
Time Limit  $r$  Default $\text{}={10}^{6}$ 
A limit on the number of seconds that the solver can use to solve one problem. If during the convergence check this limit is exceeded, the solver will terminate with a corresponding error message.
Constraint: ${\mathbf{Time\; Limit}}>0$.
Verify Derivatives  $a$  Default $=\mathrm{NO}$ 
This parameter specifies whether the function should perform numerical checks on the consistency of the user-supplied functions. It is recommended that such checks are enabled when first developing the formulation of the problem; however, the derivative check results in a significant increase in the number of function evaluations and thus should not be used in production code.
Constraint: ${\mathbf{Verify\; Derivatives}}=\mathrm{YES}$ or $\mathrm{NO}$.