Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
e04svc is a solver from the NAG optimization modelling suite for problems such as Quadratic Programming (QP), linear Semidefinite Programming (SDP) and semidefinite programming with bilinear matrix inequalities (BMI-SDP).
The function may be called by the names: e04svc or nag_opt_handle_solve_pennon.
3 Description
e04svc serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. First, the problem handle is initialized by calling e04rac. Then some of the functions e04rec, e04rfc, e04rhc, e04rjc, e04rnc or e04rpc may be used to formulate the objective function, (standard) constraints and matrix constraints of the problem. Once the problem is fully set, the handle may be passed to the solver. When the handle is no longer needed, e04rzc should be called to destroy it and deallocate the memory held within. See Section 4.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
Problems which can be defined this way are, for example, (generally nonconvex) Quadratic Programming (QP)
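Using the quantities described below, such a QP can be sketched in the document's notation as follows (a reconstruction from the symbol descriptions; bound or linear constraints may be absent in a particular model):

$$\begin{array}{ll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& \frac{1}{2}{x}^{\mathrm{T}}Hx+{c}^{\mathrm{T}}x\\ \text{subject to \hspace{1em}}& {l}_{B}\le Bx\le {u}_{B}\text{,}\\ & {l}_{x}\le x\le {u}_{x}\text{.}\end{array}$$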
Here $c$, ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors, $H$ is a symmetric $n\times n$ matrix, ${l}_{B}$, ${u}_{B}$ are ${m}_{B}$-dimensional vectors, $B$ is a general ${m}_{B}\times n$ rectangular matrix and ${A}_{i}^{k}$, ${Q}_{ij}^{k}$ are symmetric matrices. The expression $S\succeq 0$ stands for a constraint on the eigenvalues of a symmetric matrix $S$, namely, all the eigenvalues should be non-negative, i.e., the matrix should be positive semidefinite. See the relevant functions in the suite for more details on the problem formulation.
The solver is based on a generalized Augmented Lagrangian method with a suitable choice of standard and matrix penalty functions. For a detailed description of the algorithm see Section 11. Under standard assumptions on the problem (Slater constraint qualification, boundedness of the objective function on the feasible set, see Stingl (2006) for details) the algorithm converges to a local solution. In case of convex problems such as linear SDP or convex QP, this is the global solution. The solver is suitable for both small dense and large-scale sparse problems.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see Section 12) which can be set by e04zmc and e04zpc anytime between the initialization of the handle by e04rac and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters.
There are several optional parameters with a multiple choice where the default choice is $\mathrm{AUTO}$ (for example, ${\mathbf{Hessian\; Density}}$). This value means that the decision over the option is left to the solver based on the structure of the problem. The option getter e04znc can be called to retrieve the choice of these options as well as that of any other option.
Optional parameter ${\mathbf{Task}}$ may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.
Optional parameter ${\mathbf{Monitor\; Frequency}}$ may be used to turn on the monitor mode of the solver. The solver invoked in this mode pauses regularly even before the optimal point is found to allow monitoring the progress from the calling program. All the important error measures and statistics are available in the calling program which may terminate the solver early if desired (see argument inform).
3.1 Structure of the Lagrangian Multipliers
The algorithm works internally with estimates of both the decision variables, denoted by $x$, and the Lagrangian multipliers (dual variables) for standard and matrix constraints, denoted by $u$ and $U$, respectively. You may provide initial estimates, request approximations during the run (the monitor mode turned on) and obtain the final values. The Lagrangian multipliers are split into two arrays, the multipliers $u$ for (standard) constraints are stored in array u and multipliers $U$ for matrix constraints in array ua. Both arrays need to conform to the structure of the constraints.
If the simple bounds were defined (e04rhc was successfully called), the first $2n$ elements of u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and for the upper bound for each ${x}_{i}$. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to $0$ and may be ignored.
Similarly, the following $2{m}_{B}$ elements of u belong to multipliers for the linear constraints, if formulated by e04rjc. The organization is the same, i.e., the multipliers for each constraint for the lower and upper bounds are alternated and zeroes are used for any missing (infinite bound) constraint.
A Lagrangian multiplier for a matrix constraint (one block) of dimension $d\times d$ is a dense symmetric matrix of the same dimension. All multipliers $U$ are stored next to each other in array ua in the same order as the matrix constraints were defined by e04rnc and e04rpc. The lower triangle of each is stored in the packed column order (see Section 3.4.2 in the F07 Chapter Introduction). For example, if there are four matrix constraints of dimensions $7$, $3$, $1$, $1$, the dimension of array ua should be $36$. The first $28$ elements $({d}_{1}\times ({d}_{1}+1)/2)$ belong to the packed lower triangle of ${U}_{1}$, followed by six elements of ${U}_{2}$ and one element for each ${U}_{3}$ and ${U}_{4}$. See for example Section 10 in e04rdc.
3.2 Approximation of the Lagrangian Multipliers
By the nature of the algorithm, all inequality Lagrangian multipliers $u,U$ are always kept positive during the computational process. This applies even to Lagrangian multipliers of constraints that are inactive at the solution: they will only be close to zero, although they would normally be exactly zero. This is one of the major differences between results from solvers based on the active set method (such as e04nqc) and others, such as e04svc or interior point methods. As a consequence, the initial estimate of the multipliers (if provided) might be adjusted by the solver to be sufficiently positive; also the estimates returned during the intermediate exits might be only a very crude approximation to their final values as they do not satisfy all the Karush–Kuhn–Tucker (KKT) conditions.
Another difference is that e04nqc merges the multipliers for the lower and upper inequality into one element whose sign determines the inequality, because at most one of the two constraints can be active and the multiplier for the inactive one is exactly zero. Negative multipliers are associated with the upper bounds and positive with the lower bounds. On the other hand, e04svc works with both multipliers at the same time, so they are returned in two elements, one for the lower bound and the other for the upper bound (see Section 3.1). An equivalent result can be achieved by subtracting the upper bound multiplier from the lower one. This holds even when equalities are interpreted as two inequalities (see optional parameter ${\mathbf{Transform\; Constraints}}$).
4 References
Ben–Tal A and Zibulevsky M (1997) Penalty/barrier multiplier methods for convex programming problems SIAM Journal on Optimization 7 347–366
Fujisawa K, Kojima M and Nakata K (1997) Exploiting sparsity in primal-dual interior-point method for semidefinite programming Math. Programming 79 235–253
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report RAL-TR-2011-024
Kočvara M and Stingl M (2003) PENNON – a code for convex nonlinear and semidefinite programming Optimization Methods and Software 18(3) 317–333
Kočvara M and Stingl M (2007) On the solution of large-scale SDP problems by the modified barrier method using iterative solvers Math. Programming (Series B) 109(2–3) 413–444
Mittelmann H D (2003) An independent benchmarking of SDP and SOCP solvers Math. Programming 95 407–430
Stingl M (2006) On the Solution of Nonlinear Semidefinite Programs by Augmented Lagrangian Methods PhD thesis, Institute of Applied Mathematics II, Friedrich–Alexander University of Erlangen–Nuremberg
5 Arguments
1: $\mathbf{handle}$ – void * Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04rac) and to hold a problem formulation compatible with e04svc. It must not be changed between calls to the NAG optimization modelling suite.
2: $\mathbf{nvar}$ – Integer Input
On entry: $n$, the current number of decision variables $x$ in the model.
If ${\mathbf{nnzu}}=0$, u will not be referenced; otherwise, it needs to match the dimension of constraints defined by e04rhc and e04rjc as explained in Section 3.1.
Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
If ${\mathbf{nnzu}}>0$, u holds the Lagrangian multipliers (dual variables) for (standard) constraints, i.e., the simple bounds defined by e04rhc and the set of ${m}_{B}$ linear constraints defined by e04rjc: either their initial estimates, intermediate approximations or final values, see Section 3.1.
If ${\mathbf{nnzu}}=0$, u will not be referenced and may be NULL.
On entry: if ${\mathbf{Initial\; U}}=\mathrm{USER}$ (the default is $\mathrm{AUTOMATIC}$), ${u}^{0}$, the initial estimate of the Lagrangian multipliers $u$; otherwise, u need not be set.
On intermediate exit:
the estimate of the multipliers $u$ at the end of the current outer iteration.
On intermediate re-entry: the input is ignored.
On exit: the final value of multipliers $u$.
6: $\mathbf{nnzuc}$ – Integer Input
On entry: the dimension of array uc. If ${\mathbf{nnzuc}}=0$, uc will not be referenced. This argument is reserved for future releases of the NAG Library which will allow definition of second-order cone constraints. It must currently be set to $0$.
uc is reserved for future releases of the NAG Library which will allow definition of second-order cone constraints. It is not referenced at the moment and may be NULL.
8: $\mathbf{nnzua}$ – Integer Input
On entry: the dimension of array ua. If ${\mathbf{nnzua}}=0$, ua will not be referenced; otherwise, it needs to match the total number of nonzeros in all matrix Lagrangian multipliers (constraints defined by e04rnc and e04rpc) as explained in Section 3.1.
Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
If ${\mathbf{nnzua}}>0$, ua holds the Lagrangian multipliers for matrix constraints defined by e04rnc and e04rpc, see Section 3.1.
If ${\mathbf{nnzua}}=0$, ua will not be referenced and may be NULL.
On entry: if ${\mathbf{Initial\; U}}=\mathrm{USER}$ (the default is $\mathrm{AUTOMATIC}$), ${U}^{0}$, the initial estimate of the matrix Lagrangian multipliers $U$; otherwise, ua need not be set.
On intermediate exit:
the estimate of the matrix multipliers $U$ at the end of the outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final estimate of the multipliers $U$.
On intermediate or final exit: error measures and various indicators (see Section 11 for details) at the end of the current (or final) outer iteration as given in the table below:
On intermediate or final exit: solver statistics at the end of the current (or final) outer iteration as given in the table below. Note that the time statistics are provided only if ${\mathbf{Stats\; Time}}$ is set (the default is $\mathrm{NO}$); the measured times are returned in seconds.
$0$
Number of the outer iterations.
$1$
Total number of the inner iterations.
$2$
Total number of the linesearch steps.
$3$
Number of evaluations of the augmented Lagrangian $F\left(\right)$ (see (8)).
$4$
Number of evaluations of $\nabla F\left(\right)$.
$5$
Number of evaluations of ${\nabla}^{2}F\left(\right)$.
$6$
Reserved for future use.
$7$
Total running time of the solver.
$8$
Total running time of the solver without evaluations of the user's functions and monitoring stops.
$9$
Time spent in the inner iterations.
$10$
Time spent in Lagrangian multipliers updates.
$11$
Time spent in penalty parameters updates.
$12$
Time spent in matrix feasibility computation.
$13$
Time of evaluations of $F\left(\right)$.
$14$
Time of evaluations of $\nabla F\left(\right)$.
$15$
Time of evaluations of ${\nabla}^{2}F\left(\right)$.
$16$
Time of factorizations of the Newton system.
$17$
Time of factorizations of the matrix constraints.
$18$–$31$
Reserved for future use.
12: $\mathbf{inform}$ – Integer * Input/Output
Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
On initial entry: no effect.
On intermediate exit:
${\mathbf{inform}}=1$.
On intermediate re-entry: if set to $0$, solving the current problem is terminated and the function returns ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_STOP; otherwise, the function continues.
On final exit: ${\mathbf{inform}}=0$.
13: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_ALREADY_DEFINED
The problem is already being solved.
NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.
NE_DIM_MATCH
On entry, ${\mathbf{nnzu}}=⟨value⟩$. nnzu does not match the size of the Lagrangian multipliers for (standard) constraints. ${\mathbf{nnzu}}=0$ or $⟨value⟩$.
On entry, ${\mathbf{nnzu}}=⟨value⟩$. nnzu does not match the size of the Lagrangian multipliers for (standard) constraints. ${\mathbf{nnzu}}=0$ when there are no (standard) constraints.
On entry, ${\mathbf{nnzua}}=⟨value⟩$. nnzua does not match the size of the Lagrangian multipliers for matrix constraints. ${\mathbf{nnzua}}=0$ or $⟨value⟩$.
On entry, ${\mathbf{nnzua}}=⟨value⟩$. nnzua does not match the size of the Lagrangian multipliers for matrix constraints. ${\mathbf{nnzua}}=0$ when there are no matrix constraints.
On entry, ${\mathbf{nnzuc}}=⟨value⟩$. nnzuc does not match the size of the Lagrangian multipliers for second-order cone constraints. ${\mathbf{nnzuc}}=0$ when there are no second-order cone constraints.
NE_FAILED_START
The current starting point is unusable.
The starting point ${x}^{0}$, either provided by you (if ${\mathbf{Initial\; X}}=\mathrm{USER}$, the default) or estimated automatically (if ${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$), must not be extremely infeasible in the matrix constraints (an infeasibility of order ${10}^{6}$ or higher) and all the functions used in the problem formulation must be evaluatable at it.
In the unlikely case that this error is triggered, it is necessary to provide a better estimate of the initial values.
NE_HANDLE
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
NE_INFEASIBLE
The problem was found to be infeasible during preprocessing.
One or more of the constraints (or a part of one after preprocessing) is violated by more than ${\epsilon}_{\mathrm{feas}}$ (${\mathbf{Stop\; Tolerance\; Feasibility}}$).
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_MAYBE_INFEASIBLE
The problem seems to be infeasible; the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is infeasible, several indirect indicators suggest that it might be the case.
NE_MAYBE_UNBOUNDED
The problem seems to be unbounded; the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is unbounded, several indirect indicators (such as a rapid decrease in the objective function and a lack of convergence in the inner subproblem) suggest that this might be the case. A good scaling of the objective function is always highly recommended to avoid situations in which unusual behaviour falsely triggers this error exit.
NE_NO_IMPROVEMENT
Unable to make progress; the algorithm was stopped.
This error is returned if the solver cannot decrease the duality gap over a range of iterations. This can be due to the scaling of the problem or the problem might be close to primal or dual infeasibility.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_REF_MATCH
On entry, ${\mathbf{nvar}}=⟨value⟩$, expected $\mathrm{value}=⟨value⟩$.
Constraint: nvar must match the current number of variables of the model in the handle.
NE_SETUP_ERROR
This solver does not support the model defined in the handle.
NE_SUBPROBLEM
The inner subproblem could not be solved to the required accuracy. Inner iteration limit has been reached.
The inner subproblem could not be solved to the required accuracy. Limited progress in the inner subproblem triggered a stop (heuristic inner stop criteria).
The inner subproblem could not be solved to the required accuracy. Line search or another internal component failed.
A problem with the convergence of the inner subproblem is typically a sign of numerical difficulties of the whole algorithm. The inner subproblem might be stopped before reaching the required accuracy because of the ${\mathbf{Inner\; Iteration\; Limit}}$, because a heuristic detected no progress in the inner iterations (if ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$, default) or because an internal component failed (for example, the line search was unable to find a suitable step). The algorithm tries to recover; however, it might give up after several attempts with one of these error messages.
If it occurs in the very early iterations, consider increasing ${\mathbf{Inner\; Stop\; Tolerance}}$ and possibly ${\mathbf{Init\; Value\; P}}$ or ${\mathbf{Init\; Value\; Pmat}}$ which should ease the first iterations. An occurrence in later iterations indicates numerical difficulties typically due to scaling and/or ill-conditioning or the problem is close to infeasible. Reducing the tolerance on the stopping criteria or increasing ${\mathbf{P\; Update\; Speed}}$ might be of limited help.
NE_TOO_MANY_ITER
Outer iteration limit has been reached.
The requested accuracy is not achieved.
If ${\mathbf{Outer\; Iteration\; Limit}}$ is left at its default value, this error indicates numerical difficulties. Consider whether the stopping tolerances (${\mathbf{Stop\; Tolerance\; 1}}$, ${\mathbf{Stop\; Tolerance\; 2}}$, ${\mathbf{Stop\; Tolerance\; Feasibility}}$) are set too low or whether optional parameters affecting the behaviour of the penalty updates (${\mathbf{P\; Update\; Speed}}$, ${\mathbf{P\; Min}}$ or ${\mathbf{Pmat\; Min}}$) have been modified ill-advisedly. The iteration log should reveal more about the misbehaviour. Providing a different starting point might help in certain situations.
NE_UNBOUNDED
The problem was found to be unbounded during preprocessing.
The objective function decreases along an unrestricted ray and thus the problem does not have a solution.
NE_USER_STOP
User requested termination during a monitoring step.
NW_NOT_CONVERGED
The algorithm converged to a suboptimal solution. The full accuracy was not achieved. The solution should still be usable.
This error may be reported only if ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$ (default). The solver predicted that it is unable to reach a better estimate of the solution. However, the error measures indicate that the point is a reasonable approximation. Typically, only the norm of the gradient of the Lagrangian (optimality) does not fully satisfy the requested tolerance whereas the others are well below the tolerance.
Setting ${\mathbf{Stop\; Criteria}}=\mathrm{STRICT}$ will disallow this error but it is unlikely that the algorithm would reach a better solution.
7 Accuracy
The accuracy of the solution is driven by optional parameters ${\mathbf{Stop\; Tolerance\; 1}}$, ${\mathbf{Stop\; Tolerance\; 2}}$, ${\mathbf{Stop\; Tolerance\; Feasibility}}$ and ${\mathbf{Stop\; Criteria}}$ and in certain cases ${\mathbf{DIMACS\; Measures}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR on the final exit, the returned point satisfies Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to $\sqrt{\epsilon}$) and thus it is a good estimate of a local solution. If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NW_NOT_CONVERGED, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to Section 11.2 and the description of the particular options.
8 Parallelism and Performance
e04svc is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04svc makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters ${\mathbf{Print\; File}}$ and ${\mathbf{Monitoring\; File}}$. Optional parameters ${\mathbf{Print\; Level}}$, ${\mathbf{Print\; Options}}$ and ${\mathbf{Monitoring\; Level}}$ determine the exposed level of detail. This allows you, for example, to generate a detailed log in a file while condensed information is displayed on the screen.
By default (${\mathbf{Print\; File}}=6$, ${\mathbf{Print\; Level}}=2$), five sections are printed to the standard output: a header, a list of options, problem statistics, an iteration log and a summary.
Header
The header is a message indicating the start of the solver. It should look like:
The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator for how it was set. The options left at their defaults are noted by ‘d’, the ones you set are noted by ‘U’ and the options reset by the solver by ‘S’. The solver will automatically set options which are set to $\mathrm{AUTO}$ or options which cannot be satisfied in the given context (e.g., requesting ${\mathbf{DIMACS\; Measures}}$ for a nonlinear problem). Note that the output format is compatible with the file format expected by e04zpc. The output might look as follows:
Outer Iteration Limit = 20 * U
Stop Tolerance 1 = 1.00000E-06 * d
Stop Tolerance 2 = 1.00000E-07 * d
Hessian Density = Dense * S
Problem statistics
The statistics about the size of the problem show how the problem is represented internally, i.e., they reflect any changes imposed by preprocessing (for example, removed fixed and disabled variables or constant feasible constraints) and problem transformations (see, for example, ${\mathbf{Presolve\; Block\; Detect}}$). The output may look like:
Problem Statistics
No of variables 7 (+0 disabled, +1 fixed)
free (unconstrained) 0
bounded 7
No of lin. constraints 8 (+0 disabled, +1 removed)
nonzeroes 41
No of matrix inequal. 4
detected matrix inq. 3 (+1 constant)
linear 3
nonlinear 0
max. dimension 5
detected normal inq. 1
linear 1
nonlinear 0
Objective function Linear
Iteration log
If ${\mathbf{Print\; Level}}=2$, the status of each major iteration is condensed to one line. The line shows the major iteration number ($0$ represents the starting point), the current objective value, KKT measures (optimality, feasibility and complementarity), minimal penalty and the number of inner iterations performed. Note that all these values are also available in ${\mathbf{rinfo}}\left[0\right],\dots ,{\mathbf{rinfo}}\left[4\right]$ and ${\mathbf{stats}}\left[0\right]$. The output might look as follows:
Occasionally, a one letter flag is printed at the end of the line indicating that the inner subproblem was not solved to the required accuracy. The possibilities are M for maximum number of inner iterations, L for difficulties in the line search and ! when a heuristic stop took place. Repeated troubles in the subproblems may lead to ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_SUBPROBLEM. The output below had ${\mathbf{Inner\; Iteration\; Limit}}=5$ which was not enough in the first subproblem (first outer iteration).
All KKT measures should normally converge to zero as the algorithm progresses and once the requested accuracy (${\mathbf{Stop\; Tolerance\; 2}}$) is achieved, the solver stops. However, the convergence is not necessarily monotonic. The penalty parameters are decreased at each major iteration, which should improve the overall feasibility of the problem. This also increases the ill-conditioning, which might lead to a higher number of inner iterations. A very high number of inner iterations usually signals numerical difficulties. See Section 11 for the algorithmic details.
If ${\mathbf{Print\; Level}}>2$, each major iteration produces significantly more detailed output comprising detailed error measures and output from every inner iteration. The output is self-explanatory so is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced. An example is shown below:
It starts with the status line of the overall result which matches the fail value. It is followed by the final objective value and the error measures (including ${\mathbf{DIMACS\; Measures}}$ if turned on). Iteration counters, numbers of evaluations of the Augmented Lagrangian function and timing of the function conclude the section. The timing of the algorithm is displayed only if ${\mathbf{Stats\; Time}}$ is set.
9.2 Internal Changes
Internal changes have been made to this function as follows:
At Mark 26.1: e04svc cannot, at the moment, handle fixed variables in the model. You are now able to define such a model and e04svc will return ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_SETUP_ERROR in this case.
At Mark 27.1: e04svc can now handle fixed variables in the model. The relevant error code has been removed.
For details of all known issues which have been reported for the NAG Library please refer to the Known Issues.
10 Example
Semidefinite Programming has many applications in several fields of mathematics, such as combinatorial optimization, finance, statistics, control theory or structural optimization. However, these applications seldom come in the form of (2) or (3). Usually a reformulation is needed or even a relaxation is employed to achieve the desired formulation. This is also the case for the Lovász $\vartheta $ function computed in this example. See also e04rac for links to further examples in the NAG optimization modelling suite.
The Lovász $\vartheta $ function (also called the $\vartheta $ number) of an undirected graph $G=(V,E)$ is an important quantity in combinatorial optimization. It gives an upper bound on the Shannon capacity of the graph $G$ and is also related to the clique number and the chromatic number of the complement of $G$, which are NP-hard to compute.
The $\vartheta $ function can be expressed in various ways, here we use the following:
$$\vartheta \left(G\right)=\mathrm{minimize}\left\{{\lambda}_{\mathrm{max}}\left(H\right)\mid H\in {\mathbb{S}}^{n}\text{,\hspace{1em}}{h}_{ij}=1\text{ if }i=j\text{ or if }ij\notin E\right\}$$
where $n=\left|V\right|$ and ${\mathbb{S}}^{n}$ denotes the space of real symmetric $n\times n$ matrices. This eigenvalue optimization problem is easy to reformulate as an SDP problem by introducing an artificial variable $t$ as follows:
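Introducing $t$ and using the constraint $tI-H\succeq 0$ (which is equivalent to ${\lambda}_{\mathrm{max}}\left(H\right)\le t$), the resulting SDP can be sketched as follows (a standard reformulation in the notation above):

$$\begin{array}{ll}\underset{t\in \mathbb{R},\,H\in {\mathbb{S}}^{n}}{\mathrm{minimize}}& t\\ \text{subject to \hspace{1em}}& tI-H\succeq 0\text{,}\\ & {h}_{ij}=1\text{ if }i=j\text{ or if }ij\notin E\text{.}\end{array}$$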
where $f$, ${g}_{k}$, ${h}_{k}$ are ${\mathcal{C}}^{2}$ functions from ${\mathbb{R}}^{n}$ to $\mathbb{R}$ and ${\mathcal{A}}_{k}$ is a ${\mathcal{C}}^{2}$ matrix function from ${\mathbb{R}}^{n}$ to ${\mathbb{S}}^{{m}_{k}}$. Here ${\mathbb{S}}^{m}$ denotes the space of real symmetric $m\times m$ matrices and, for $S\in {\mathbb{S}}^{m}$, $S\succeq 0$ stands for a constraint on the eigenvalues of $S$, namely that the matrix $S$ should be positive semidefinite. Furthermore, we define the inner product on ${\mathbb{S}}^{m}$ by ${\langle A,B\rangle}_{{\mathbb{S}}^{m}}=\mathrm{trace}\left(AB\right)$. The index ${\mathbb{S}}^{m}$ will be omitted whenever the dimension is clear from the context. Finally, for $\Phi :{\mathbb{S}}^{m}\to {\mathbb{S}}^{m}$ and $X\text{,}Y\in {\mathbb{S}}^{m}$, $D\Phi (X;Y)$ denotes the directional derivative of $\Phi $ with respect to $X$ in direction $Y$.
11.1 Overview
The algorithm is based on a (generalized) augmented Lagrangian approach and on a suitable choice of smooth penalty/barrier functions ${\phi}_{g}:\mathbb{R}\to \mathbb{R}$ for (standard) inequality constraints and ${\phi}_{A}:\mathbb{R}\to \mathbb{R}$ for constraints on matrix eigenvalues. By means of ${\phi}_{A}$ we define a penalty/barrier function for matrix inequalities as follows.
Let $A\in {\mathbb{S}}^{m}$ have an eigenvalue decomposition $A={S}^{\mathrm{T}}\Lambda S$ where $\Lambda =\mathrm{diag}\left({\lambda}_{1},{\lambda}_{2},\dots ,{\lambda}_{m}\right)$. We define the matrix function ${\Phi}_{P}:{\mathbb{S}}^{m}\to {\mathbb{S}}^{m}$ for $P>0$ as
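Following Kočvara and Stingl (2003), a sketch of this definition (${\Phi}_{P}$ acts on the eigenvalues of $A$ through ${\phi}_{A}$; only its general properties are used in what follows) is:

$${\Phi}_{P}\left(A\right)={S}^{\mathrm{T}}\mathrm{diag}\left(P{\phi}_{A}\left(\frac{{\lambda}_{1}}{P}\right),P{\phi}_{A}\left(\frac{{\lambda}_{2}}{P}\right),\dots ,P{\phi}_{A}\left(\frac{{\lambda}_{m}}{P}\right)\right)S\text{.}$$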
Both ${\phi}_{g}$ and ${\phi}_{A}$ satisfy a number of assumptions (see Kočvara and Stingl (2003)) guaranteeing, in particular, that for any $p$, $P>0$
where $u\in {\mathbb{R}}^{{m}_{g}}$, $v\in {\mathbb{R}}^{{m}_{h}}$ and
$U=({U}_{1},\dots ,{U}_{{m}_{A}})$, ${U}_{k}\in {\mathbb{S}}^{{p}_{k}}$, $k=1,\dots ,{m}_{A}$ are Lagrange multipliers associated with the (standard) inequalities and equalities and the matrix inequality constraints, respectively.
The algorithm combines ideas of the (exterior) penalty and (interior) barrier methods with the augmented Lagrangian method; it can be defined as follows:
Algorithm 1 (Outer Loop)
Let ${x}^{0}$, ${u}^{0}$, ${v}^{0}$ and ${U}^{0}$ be given. Let ${p}^{0}>0$, ${P}^{0}>0$, ${\alpha}^{0}>0$. For $\ell =0,1,\dots $ repeat until a stopping criterion or the maximum number of iterations is reached:
Step (i) of Algorithm 1, further referred to as the inner problem, is the most time-consuming part and thus the choice of the solver for (9) is critical for the overall efficiency of the method. See Section 11.4 below.
The inequality Lagrangian multipliers update in step (ii) is motivated by the fact that if ${x}^{\ell +1}$, ${v}^{\ell +1}$ solve (9) exactly in iteration $\ell $, we obtain
Details can be found, for example, in Stingl (2006).
In practice, numerical studies showed that it is not advantageous to perform the full updates of the multipliers $u$, $U$. Firstly, big changes in the multipliers may lead to a large number of iterations in the subsequent solution of (9) and, secondly, the multipliers might become ill-conditioned after a few steps and the algorithm suffers from numerical instabilities. To overcome these difficulties, a restricted update is performed instead.
New Lagrangian multipliers for the (standard) inequalities ${u}_{\mathit{k}}^{\ell +1}$, for $\mathit{k}=1,2,\dots ,{m}_{g}$, are restricted so as not to violate the following bound
for a given $0<{\mu}_{g}<1$ (see ${\mathbf{U\; Update\; Restriction}}$).
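The effect of such a restriction can be sketched as follows. This is an illustrative clamp on the ratio between consecutive multipliers; the exact bound used by the solver is the one stated above:

```c
#include <assert.h>

/* Illustrative restricted multiplier update: clamp the candidate
 * multiplier u_cand so that the ratio u_new / u_old stays within
 * [mu, 1/mu] for a given 0 < mu < 1.  A sketch of the idea only,
 * not the solver's exact rule. */
static double restrict_update(double u_old, double u_cand, double mu)
{
    double lo = mu * u_old;   /* largest allowed decrease */
    double hi = u_old / mu;   /* largest allowed increase */
    if (u_cand < lo) return lo;
    if (u_cand > hi) return hi;
    return u_cand;
}
```

With ${\mu}_{g}$ close to $1$ the interval shrinks towards the previous multiplier, which matches the smoothing behaviour described for ${\mathbf{U\; Update\; Restriction}}$.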
A similar strategy is applied to the matrix multipliers ${U}_{k}^{\ell +1}$ as well. For $0<{\mu}_{A}<1$ (see ${\mathbf{Umat\; Update\; Restriction}}$) set
The penalty parameters $p,P$ in step (iii) are updated by a constant factor which depends on the initial penalty parameters ${p}^{0},{P}^{0}$ and ${\mathbf{P\; Update\; Speed}}$. The update process stops once ${p}_{\mathrm{min}}$ and ${P}_{\mathrm{min}}$ are reached (see ${\mathbf{P\; Min}}$, ${\mathbf{Pmat\; Min}}$).
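The update in step (iii) can be pictured as a geometric decrease with a floor. This is a sketch only; the factor shown is hypothetical and the solver derives its own from ${p}^{0}$ and ${\mathbf{P\; Update\; Speed}}$:

```c
#include <assert.h>

/* Illustrative geometric penalty update with a lower bound p_min.
 * The solver's actual factor is internal; here it is a parameter. */
static double penalty_update(double p, double factor, double p_min)
{
    double p_new = factor * p;               /* 0 < factor < 1 */
    return (p_new < p_min) ? p_min : p_new;  /* stop shrinking at p_min */
}
```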
Additional details about the multiplier and penalty update strategies, as well as local and global convergence properties under standard assumptions, can be found in the extensive study by Stingl (2006).
11.2Stopping Criteria
Algorithm 1 is stopped when all of the stopping criteria are satisfied to the requested accuracy; these are:
and those based on Karush–Kuhn–Tucker (KKT) error measures (to keep the notation simple, formulation (4) is assumed and the iteration index $\ell $ is dropped):
Here ${\epsilon}_{1}$, ${\epsilon}_{2}$, ${\epsilon}_{\mathrm{feas}}$ may be set in the option settings as ${\mathbf{Stop\; Tolerance\; 1}}$, ${\mathbf{Stop\; Tolerance\; 2}}$ and ${\mathbf{Stop\; Tolerance\; Feasibility}}$, respectively.
Note that if ${\mathbf{Task}}=\mathrm{FEASIBLEPOINT}$, only the feasibility is taken into account.
There is an option for linear SDP problems to switch from the stopping criteria based on the KKT conditions to ${\mathbf{DIMACS\; Measures}}$ (see Mittelmann (2003)); this is the default choice. To keep the notation readable, these are defined here only for the following simpler formulation of linear SDP rather than (2):
$$\begin{array}{ll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}\phantom{\rule{0.25em}{0ex}}& {c}^{\mathrm{T}}x\\ \text{subject to \hspace{1em}}& \mathcal{A}\left(x\right)=\sum _{\mathit{i}=1}^{n}{x}_{i}{A}_{i}-{A}_{0}\succeq 0\text{.}\end{array}$$
where ${\mathcal{A}}^{*}\left(\cdot \right)$ denotes the adjoint operator to $\mathcal{A}\left(\cdot \right)$, ${\left[{\mathcal{A}}^{*}\left(U\right)\right]}_{i}=\langle {A}_{i},U\rangle $.
They can be viewed as a scaled version of the KKT conditions. ${\mathrm{Derr}}_{1}$ represents the (scaled) norm of the gradient of the Lagrangian, ${\mathrm{Derr}}_{2}$ and ${\mathrm{Derr}}_{4}$ the dual and primal infeasibility, respectively, and ${\mathrm{Derr}}_{5}$ and ${\mathrm{Derr}}_{6}$ measure the duality gap and the complementary slackness. Note that in this solver ${\mathrm{Derr}}_{2}=0$ by definition and ${\mathrm{Derr}}_{3}$ is automatically zero because the formulation involves slack variables which are not used here.
11.3Choice of penalty functions ${\phi}_{\mathit{g}}$ and ${\phi}_{\mathit{A}}$
To treat the (standard) inequality constraints ${g}_{k}\left(x\right)\ge 0$, we use the penalty/barrier function proposed by Ben–Tal and Zibulevsky (1997):
The choice of ${\phi}_{A}$ (and thus of ${\Phi}_{P}$) is motivated by the complexity of the evaluation of ${\Phi}_{P}$ and its derivatives. If ${\phi}_{A}$ is defined as
For details, see Kočvara and Stingl (2003). Note that, in particular, formula (17) requires nontrivial computational resources even if careful handling of the sparsity of the partial derivatives of $\mathcal{A}\left(x\right)$ is implemented. e04svc uses a set of strategies described in Fujisawa et al. (1997) adapted for parallel computation.
11.4Solution of the inner problem
This section describes the solution of the inner problem (step (i) of Algorithm 1). We attempt to find an approximate solution of the following system (in $x$ and $v$) up to the given precision $\alpha $:
where the penalty parameters $p,P$, as well as the Lagrangian multipliers $u$ and $U$ are fixed.
A linesearch SQP framework is used due to its desirable convergence properties. It can be stated as follows.
Algorithm 2 (Inner Loop)
Let ${x}^{0}$, ${v}^{0}$ be given (typically as the solution from the previous outer iteration), $p$, $P$, $u$, $U$ and $\alpha >0$ fixed. For $\ell =0,1,\dots $
System (20) is solved by the factorization function MA97 (see Hogg and Scott (2011)) in combination with an inertia correction strategy described in Stingl (2006). The step length selection is guided by ${\mathbf{Linesearch\; Mode}}$.
If there are no equality constraints in the problem, the unconstrained minimization in Step (i) of Algorithm 1 simplifies to the modified Newton method with line-search (for details, see Kočvara and Stingl (2003)). Alternatively, the equality constraints ${h}_{k}\left(x\right)=0$ can be converted to two inequalities which would be treated with the remaining constraints (see ${\mathbf{Transform\; Constraints}}$).
12Optional Parameters
Several optional parameters in e04svc define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04svc these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmc at any time between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
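As an illustration, options might be set between the problem definition and the solve as in the following fragment. This is a sketch only; handle creation, problem definition and full error handling are omitted, and the particular option values are arbitrary:

```c
/* Fragment (not a complete program): adjust options before the solve.
 * Assumes a handle already initialized by e04rac and a fully
 * formulated problem. */
NagError fail;
INIT_FAIL(fail);
e04zmc(handle, "Print Level = 3", &fail);
e04zmc(handle, "Outer Iteration Limit = 50", &fail);
/* ... call e04svc; afterwards options may be changed again
 * before a subsequent re-solve ... */
```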
If any options are set by the solver (typically those with the choice of $\mathrm{AUTO}$), their value can be retrieved by e04znc. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords, where the minimum abbreviation of each keyword is underlined;
a parameter value,
where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see X02AJC).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
DIMACS Measures
$a$
Default $=\mathrm{CHECK}$
If the problem is a linear semidefinite programming problem, this parameter specifies whether the DIMACS error measures (see Section 11.2) should be computed and/or checked. In other cases, this option reverts to $\mathrm{NO}$ automatically.
Constraint: ${\mathbf{DIMACS\; Measures}}=\mathrm{COMPUTE}$, $\mathrm{CHECK}$ or $\mathrm{NO}$.
Hessian Density
$a$
Default $=\mathrm{AUTO}$
This optional parameter guides the solver on how the Hessian matrix of the augmented Lagrangian $F(x,u,v,U,p,P)$ should be built. Option $\mathrm{AUTO}$ leaves the decision to the solver and is the recommended choice. Setting it to $\mathrm{DENSE}$ bypasses the autodetection and the Hessian is always built as a dense matrix. Option $\mathrm{SPARSE}$ instructs the solver to use sparse storage and factorization of the matrix if possible.
Constraint: ${\mathbf{Hessian\; Density}}=\mathrm{AUTO}$, $\mathrm{DENSE}$ or $\mathrm{SPARSE}$.
Infinite Bound Size
$r$
Default $\text{}={10}^{20}$
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Initial P
This optional parameter defines the choice of the penalty optional parameters ${p}^{0}$, ${P}^{0}$, see Algorithm 1.
${\mathbf{Initial\; P}}=\mathrm{AUTOMATIC}$
The penalty optional parameters are chosen automatically as set by the optional parameters ${\mathbf{Init\; Value\; P}}$ and ${\mathbf{Init\; Value\; Pmat}}$, subject to automatic scaling. Note that ${P}^{0}$ might be increased so that the penalty function ${\Phi}_{P}\left(\right)$ is defined for all matrix constraints at the starting point.
${\mathbf{Initial\; P}}=\mathrm{KEEPPREVIOUS}$
The penalty optional parameters are kept from the previous run of the solver if possible. If not, this option reverts to $\mathrm{AUTOMATIC}$. Note that even if the matrix penalty optional parameters are the same as in the previous run, they are still subject to a possible increase so that the penalty function ${\Phi}_{P}\left(\right)$ is well defined at the starting point.
Constraint: ${\mathbf{Initial\; P}}=\mathrm{AUTOMATIC}$ or $\mathrm{KEEPPREVIOUS}$.
Initial U
$a$
Default $=\mathrm{AUTOMATIC}$
This parameter guides the solver on which initial Lagrangian multipliers are to be used.
${\mathbf{Initial\; U}}=\mathrm{AUTOMATIC}$
The Lagrangian multipliers are chosen automatically as set by automatic scaling.
${\mathbf{Initial\; U}}=\mathrm{USER}$
The values of arrays u and ua (if provided) are used as the initial Lagrangian multipliers subject to automatic adjustments. If one or the other array is not provided, the choice for missing data is as in $\mathrm{AUTOMATIC}$.
${\mathbf{Initial\; U}}=\mathrm{KEEPPREVIOUS}$
The Lagrangian multipliers are kept from the previous run of the solver. If this option is set for the first run or optional parameters change the approach of the solver, the choice automatically reverts to $\mathrm{AUTOMATIC}$. This might be useful if the solver is hot started, for example, to achieve higher precision of the solution.
Constraint: ${\mathbf{Initial\; U}}=\mathrm{AUTOMATIC}$, $\mathrm{USER}$ or $\mathrm{KEEPPREVIOUS}$.
Initial X
$a$
Default $=\mathrm{USER}$
This parameter guides which starting point ${x}^{0}$ is to be used.
${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$
The starting point is chosen automatically so that it satisfies simple bounds on the variables or as a zero vector. Input of argument x is ignored.
${\mathbf{Initial\; X}}=\mathrm{USER}$
Initial values of argument x are used as a starting point.
Constraint: ${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$ or $\mathrm{USER}$.
Init Value P
$r$
Default $=1.0$
This parameter defines the value ${p}^{0}$, the initial penalty optional parameter for (standard) inequalities. A low value of the penalty causes the solution of the inner problem to be closer to the feasible region and thus to the desirable result. However, it also increases ill-conditioning of the system. It is not advisable to set the penalty too low unless a good starting point is provided.
Init Value Pmat
The value of this option suggests ${P}^{0}$, the initial penalty optional parameter for matrix inequalities. It is similar to ${\mathbf{Init\; Value\; P}}$ (and the same advice applies); however, ${P}^{0}$ is increased automatically if the matrix constraints are more infeasible than the current penalty value.
Inner Iteration Limit
The maximum number of inner iterations (Newton steps) to be performed by Algorithm 2 in each outer iteration. Setting the option too low might lead to ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_SUBPROBLEM. Values higher than $100$ are unlikely to improve convergence.
Inner Stop Criteria
The precision $\alpha $ for the solution of the inner subproblem is determined in Algorithm 1 and under typical circumstances Algorithm 2 is expected to reach this precision within the given ${\mathbf{Inner\; Iteration\; Limit}}$. If any problems are detected and ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$, Algorithm 2 is allowed to stop before reaching the requested precision or the ${\mathbf{Inner\; Iteration\; Limit}}$. This usually saves many unfruitful iterations and the solver may recover in the following iterations. If you suspect that the heuristic problem detection is not suitable for your problem, setting ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{STRICT}$ disallows such behaviour.
Constraint: ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$ or $\mathrm{STRICT}$.
Inner Stop Tolerance
$r$
Default $={10}^{\mathrm{-2}}$
This option sets the required precision ${\alpha}^{0}$ for the first inner problem solved by Algorithm 2. The precision of the solution of the inner problem does not need to be very high in the first outer iterations and it is automatically adjusted through the outer iterations to reach the optimality limit ${\epsilon}_{2}$ in the last one.
Setting ${\alpha}^{0}$ too restrictive (too low) causes an increase of the number of inner iterations needed in the first outer iterations and might lead to ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_SUBPROBLEM. In certain cases it might be helpful to use a more relaxed (higher) ${\alpha}^{0}$ and increase ${\mathbf{P\; Update\; Speed}}$ which should reduce the number of inner iterations needed at the beginning of the computation in exchange for a possibly higher number of the outer iterations.
Linesearch Mode
This controls the step size selection in Algorithm 2. If ${\mathbf{Linesearch\; Mode}}=\mathrm{FULLSTEP}$ (the default for linear problems), unit steps are taken where possible and step shortening takes place only to avoid undefined regions for the matrix penalty function ${\Phi}_{P}\left(\right)$ (see (17)). This may be used for linear problems but it is not recommended for any nonlinear ones. If ${\mathbf{Linesearch\; Mode}}=\mathrm{ARMIJO}$, a fairly basic Armijo backtracking linesearch is used instead. If ${\mathbf{Linesearch\; Mode}}=\mathrm{GOLDSTEIN}$, a cubic safeguarded linesearch based on the Goldstein condition is employed; this is the recommended (and default) choice for nonlinear problems.
Constraint: ${\mathbf{Linesearch\; Mode}}=\mathrm{AUTO}$, $\mathrm{FULLSTEP}$, $\mathrm{ARMIJO}$ or $\mathrm{GOLDSTEIN}$.
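As an illustration of the $\mathrm{ARMIJO}$ choice, a minimal backtracking linesearch on the one-dimensional function $f(x)={x}^{2}$ might look as follows. This is a sketch of the standard technique only; the solver's actual linesearch is internal and more sophisticated:

```c
#include <assert.h>
#include <math.h>

/* Armijo backtracking sketch on f(x) = x*x.  The step length t is
 * halved until the sufficient-decrease condition
 *   f(x + t*d) <= f(x) + c1 * t * f'(x) * d
 * holds for the descent direction d. */
static double f(double x) { return x * x; }

static double armijo_step(double x, double d, double grad,
                          double c1, double shrink)
{
    double t = 1.0;  /* try the full (unit) step first */
    while (f(x + t * d) > f(x) + c1 * t * grad * d)
        t *= shrink; /* backtrack until sufficient decrease */
    return t;
}
```

Starting from $x=1$ with $d=-f^{\prime }(x)=-2$, the unit step overshoots the minimizer and one halving suffices.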
List
$a$
Default $=\mathrm{NO}$
This parameter may be set to $\mathrm{YES}$ if you wish to turn on printing of each optional parameter specification as it is supplied.
Constraint: ${\mathbf{List}}=\mathrm{YES}$ or $\mathrm{NO}$
Monitor Frequency
$i$
Default $=0$
If ${\mathbf{Monitor\; Frequency}}>0$, the solver returns to you at the end of every $i$th outer iteration. During these intermediate exits, the current point x and Lagrangian multipliers u, ua (if requested) are provided as well as the statistics and error measures (rinfo, stats). Argument inform helps to distinguish between intermediate and final exits and also allows immediate termination.
If ${\mathbf{Monitor\; Frequency}}=0$, the solver stops only once on the final point and no intermediate exits are made.
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
Monitoring File
If $i\ge 0$, the Nag_FileID number (as returned from x04acc) for the secondary (monitoring) output. If set to $\mathrm{-1}$, no secondary output is provided. The following information is output to the unit:
–a listing of the optional parameters;
–problem statistics, the iteration log and the final status as set by ${\mathbf{Monitoring\; Level}}$.
Monitoring Level
This parameter sets the amount of detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as for ${\mathbf{Print\; Level}}$.
Outer Iteration Limit
The maximum number of outer iterations to be performed by Algorithm 1. If ${\mathbf{Outer\; Iteration\; Limit}}=0$, no iteration is performed; only the quantities needed in the stopping criteria are computed and returned in rinfo. This might be useful in connection with ${\mathbf{Initial\; X}}=\mathrm{USER}$ and ${\mathbf{Initial\; U}}=\mathrm{USER}$ to check optimality of the given point. However, note that the rules for possible modifications of the starting point still apply, see u and ua. Setting the option too low might lead to ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_TOO_MANY_ITER.
P Min
This controls ${p}_{\mathrm{min}}$, the lowest possible penalty value $p$ used for (standard) inequalities. In general, very small values of the penalty optional parameters cause ill-conditioning which might lead to numerical difficulties. On the other hand, a very high ${p}_{\mathrm{min}}$ prevents the algorithm from reaching the requested accuracy on the feasibility. Under normal circumstances, the default value is recommended.
Preference
This option affects how contributions from the matrix constraints (17) to the system Hessian matrix are computed. The default option of ${\mathbf{Preference}}=\mathrm{SPEED}$ should be suitable in most cases. However, dealing with matrix constraints of a very high dimension may cause noticeable memory overhead and switching to ${\mathbf{Preference}}=\mathrm{MEMORY}$ may be required.
Constraint: ${\mathbf{Preference}}=\mathrm{SPEED}$ or $\mathrm{MEMORY}$.
Presolve Block Detect
$a$
Default $=\mathrm{YES}$
If ${\mathbf{Presolve\; Block\; Detect}}=\mathrm{YES}$, the matrix constraints are checked during preprocessing to determine if they can be split into smaller independent ones, thus speeding up the solver.
Constraint: ${\mathbf{Presolve\; Block\; Detect}}=\mathrm{YES}$ or $\mathrm{NO}$.
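The idea behind the block detection can be sketched as follows: a matrix constraint splits into independent blocks exactly when the sparsity graph of the symmetric matrix is disconnected, and a union-find pass over the nonzeros counts those components. This is an illustrative sketch, not the solver's implementation:

```c
#include <assert.h>

/* Count the independent diagonal blocks of a symmetric N x N matrix
 * by finding the connected components of its sparsity graph. */
enum { N = 4 };

static int parent[N];

static int find(int i)
{
    return parent[i] == i ? i : (parent[i] = find(parent[i]));
}

static void join(int i, int j) { parent[find(i)] = find(j); }

static int count_blocks(const int a[N][N])
{
    int i, j, blocks = 0;
    for (i = 0; i < N; i++) parent[i] = i;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            if (a[i][j] != 0) join(i, j); /* rows i and j interact */
    for (i = 0; i < N; i++)
        if (find(i) == i) blocks++;       /* one root per component */
    return blocks;
}
```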
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
Print File
If $i\ge 0$, the Nag_FileID number (as returned from x04acc, stdout as the default) for the primary output of the solver. If ${\mathbf{Print\; File}}=\mathrm{-1}$, the primary output is completely turned off independently of other settings. The following information is output to the unit:
–a listing of optional parameters if set by ${\mathbf{Print\; Options}}$;
–problem statistics, the iteration log and the final status from the solver as set by ${\mathbf{Print\; Level}}$.
Print Level
This parameter defines how detailed the information printed by the solver to the primary output should be.
$\mathit{i}$: Output
$0$: No output from the solver
$1$: Only the final status and the objective value
$2$: Problem statistics, one line per outer iteration showing the progress of the solution, final status and statistics
$3$: As level $2$ but detailed output of the outer iterations is provided and a brief overview of the inner iterations
$4$, $5$: As level $3$ but details of the inner iterations are printed as well
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Print Options
$a$
Default $=\mathrm{YES}$
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary output.
Constraint: ${\mathbf{Print\; Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
P Update Speed
$i$
Default $=12$
This option affects the rate at which the penalty optional parameters $p,P$ are updated (Algorithm 1, step (iii)) and thus indirectly influences the overall number of outer iterations. Its value can be interpreted as the typical number of outer iterations needed to get from the initial penalty values ${p}^{0}$, ${P}^{0}$ half-way to ${p}_{\mathrm{min}}$ and ${P}_{\mathrm{min}}$. Values smaller than $3$ cause a very aggressive penalty update strategy, which might lead to an increased number of inner iterations and possibly to numerical difficulties. On the other hand, values higher than $15$ produce a relatively conservative approach, which leads to a higher number of outer iterations.
If the solver encounters difficulties on your problem, a higher value might help. If your problem is solved without difficulties, a lower value might increase the speed.
Stats Time
This parameter turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This may help when choosing between different solving approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL\; CLOCK}$.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL\; CLOCK}$.
Stop Criteria
$a$
Default $=\mathrm{SOFT}$
If ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$, the solver is allowed to stop prematurely with a suboptimal solution, ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NW_NOT_CONVERGED, if it predicts that a better estimate of the solution cannot be reached. This is the recommended option.
Constraint: ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$ or $\mathrm{STRICT}$.
Stop Tolerance 2
This option sets the value ${\epsilon}_{2}$ which is used for the optimality (12) and complementarity (14) tests from the KKT conditions or, if ${\mathbf{DIMACS\; Measures}}=\mathrm{CHECK}$, for all DIMACS error measures instead. See Section 11.2.
Task
This parameter specifies the required direction of the optimization. If ${\mathbf{Task}}=\mathrm{FEASIBLEPOINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function was set, ${\mathbf{Task}}$ reverts to $\mathrm{FEASIBLEPOINT}$ automatically.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE\; POINT}$.
Transform Constraints
$a$
Default $=\mathrm{AUTO}$
This parameter controls how equality constraints are treated by the solver. If ${\mathbf{Transform\; Constraints}}=\mathrm{EQUALITIES}$, all equality constraints ${h}_{k}\left(x\right)=0$ from (4) are treated as two inequalities ${h}_{k}\left(x\right)\le 0$ and ${h}_{k}\left(x\right)\ge 0$, see Section 11.4. This is the default behaviour and, in this release, the only available option for equality-constrained problems.
Constraint: ${\mathbf{Transform\; Constraints}}=\mathrm{AUTO}$, $\mathrm{NO}$ or $\mathrm{EQUALITIES}$.
U Update Restriction
$r$
Default $=0.5$
This defines the value ${\mu}_{g}$ giving the bounds on the updates of the Lagrangian multipliers for (standard) inequalities between the outer iterations. Values close to $1$ limit the changes of the multipliers and serve as a kind of smoothing; lower values allow more significant changes.
Based on numerical experience, big variation in the multipliers may lead to a large number of iterations in the subsequent step and might disturb the convergence due to ill-conditioning.
It might be worth experimenting with the value on your particular problem. Mid-range values are recommended over the more extreme ones.
Umat Update Restriction
This is the equivalent of ${\mathbf{U\; Update\; Restriction}}$ for matrix constraints, denoted ${\mu}_{A}$ in Section 11.1. The advice above applies equally.