NAG CPP Interface nagcpp::opt::handle_solve_lp_ipm (e04mt)
Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings, please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
handle_solve_lp_ipm is a solver from the NAG optimization modelling suite for large-scale linear programming (LP) problems based on an interior point method (IPM).
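It solves LP problems of the form
$$\begin{array}{ll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& {c}^{\mathrm{T}}x\\ \text{subject to}& {l}_{A}\le Ax\le {u}_{A}\text{,}\\ & {l}_{x}\le x\le {u}_{x}\text{,}\end{array}$$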
where $n$ is the number of decision variables and $m$ is the number of linear constraints. Here $c$, $x$, ${l}_{x}$, ${u}_{x}$ are $n$-dimensional vectors, $A$ is an $m$ by $n$ sparse matrix and ${l}_{A}$, ${u}_{A}$ are $m$-dimensional vectors.
handle_solve_lp_ipm implements two algorithmic variants of the interior point method for solving linear optimization problems: the infeasible Primal-Dual interior point method and the homogeneous Self-Dual interior point method. In general, the Self-Dual algorithm has a slightly higher price per iteration; however, it is able to declare infeasibility or unboundedness of the problem, whereas the Primal-Dual algorithm relies, in this case, on heuristics. For a detailed description of both algorithms see Section 11. The algorithm is chosen by the optional parameter LPIPM Algorithm; the default is Primal-Dual.
handle_solve_lp_ipm solves linear programming problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. First, the problem handle is initialized by calling handle_init. Then some of the functions handle_set_linobj, handle_set_quadobj, handle_set_simplebounds and handle_set_linconstr may be called to formulate the objective function, bounds of the variables, and the block of linear constraints, respectively. Once the problem is fully set, the handle may be passed to the solver. When the handle is not needed anymore, handle_free should be called to destroy it and deallocate the memory held within. See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
The solver method can be modified by various optional parameters (see Section 12) which can be set by handle_opt_set and e04zpf (no CPP interface) anytime between the initialization of the handle by handle_init and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various optional parameters.
The optional parameter Task may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.
Several options may have a significant impact on the performance of the solver. Although the defaults were chosen to suit the majority of problems, it is recommended to experiment in order to find the most suitable set of options for a particular problem; see Sections 11 and 12 for further details.
3.1 Structure of the Lagrangian Multipliers
The algorithm works internally with estimates of both the decision variables, denoted by $x$, and the Lagrangian multipliers (dual variables), denoted by $u$. The multipliers $u$ are stored in the array u and conform to the structure of the constraints.
If the simple bounds have been defined (handle_set_simplebounds was successfully called), the first $2n$ elements of u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and the upper bound for each ${x}_{i}$. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to $0$ and may be ignored.
Similarly, the following $2m$ elements of u belong to multipliers for the linear constraints (if handle_set_linconstr has been successfully called). The organization is the same, i.e., the multipliers for each constraint for the lower and upper bounds are alternated and zeros are used for any missing (infinite bound) constraints.
Some solvers merge multipliers for both lower and upper inequality into one element whose sign determines the inequality. Negative multipliers are associated with the upper bounds and positive with the lower bounds. An equivalent result can be achieved with this storage scheme by subtracting the upper bound multiplier from the lower one. This is also consistent with equality constraints.
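As an illustration of this layout, the following sketch collapses the interleaved multipliers returned in u into one signed value per variable and per linear constraint. The helper name is hypothetical and is not part of the nagcpp interface; it assumes u was returned with ${\mathbf{nnzu}}=2n+2m$ and the ordering described above.

#include <cstddef>
#include <vector>

// Hypothetical helper: for each variable, then for each linear constraint,
// u stores a lower-bound multiplier followed by an upper-bound multiplier.
// The merged multiplier is "lower minus upper", as described in Section 3.1.
void merge_multipliers(const std::vector<double> &u, std::size_t n, std::size_t m,
                       std::vector<double> &z_merged,      // one value per variable
                       std::vector<double> &lambda_merged) // one value per constraint
{
  z_merged.resize(n);
  lambda_merged.resize(m);
  for (std::size_t i = 0; i < n; ++i) {
    // lower-bound multiplier minus upper-bound multiplier for x_i
    z_merged[i] = u[2 * i] - u[2 * i + 1];
  }
  for (std::size_t j = 0; j < m; ++j) {
    // the constraint multipliers follow the 2*n variable multipliers
    lambda_merged[j] = u[2 * n + 2 * j] - u[2 * n + 2 * j + 1];
  }
}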
4 References
Andersen E D, Gondzio J, Mészáros C and Xu X (1996) Implementation of interior point methods for large scale linear programming HEC/Université de Genève
Colombo M and Gondzio J (2008) Further development of multiple centrality correctors for interior point methods Computational Optimization and Applications 41(3) 277–305
Goldfarb D and Scheinberg K (2004) A product-form Cholesky factorization method for handling dense columns in interior point methods for linear programming Mathematical Programming 99(1) 1–34
Gondzio J (1996) Multiple centrality corrections in a primal-dual method for linear programming Computational Optimization and Applications 6(2) 137–156
Gondzio J (2012) Interior point methods 25 years later European Journal of Operational Research 218(3) 587–601
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report RAL-TR-2011-024
HSL (2011) A collection of Fortran codes for large-scale scientific computation http://www.hsl.rl.ac.uk/
Karypis G and Kumar V (1998) A fast and high quality multilevel scheme for partitioning irregular graphs SIAM J. Sci. Comput. 20(1) 359–392
Mészáros C (1996) The efficient implementation of interior point methods for linear programming and their applications PhD Thesis Eötvös Loránd University of Science, Budapest
Nocedal J and Wright S J (2006) Numerical Optimization (2nd Edition) Springer Series in Operations Research, Springer, New York
Wright S J (1997) Primal-Dual Interior-Point Methods SIAM, Philadelphia
Xu X, Hung P-F and Ye Y (1996) A simplified homogeneous and self-dual linear programming algorithm and its implementation Annals of Operations Research 62(1) 151–171
Communication structure.
An object of either the derived class CommE04RA or its base class NoneCopyableComm can be supplied.
It is recommended that the derived class is used. If the base class is supplied it must first be initialized via a call to opt::handle_init (e04ra).
Note: if ${\mathbf{nnzu}}>0$, u holds Lagrange multipliers (dual variables) for the bound constraints and linear constraints. If ${\mathbf{nnzu}}=0$, u will not be referenced.
On entry: the input of u is reserved for future releases of the NAG Library and it is ignored at the moment.
On exit: solver statistics as given in the table below. Note that time statistics are provided only if Stats Time is set (the default is $\mathrm{NO}$); the measured time is returned in seconds.
$0$ – Number of iterations.
$1$ – Total number of centrality correction steps performed.
$2$ – Total number of iterative refinements performed.
$3$ – Value of the perturbation added to the diagonal in the normal equation formulation or on the zero block in the augmented system formulation.
$4$ – Total number of factorizations performed.
$5$ – Total time spent in the solver.
$6$ – Time spent in the presolve phase.
$7$ – Time spent in the last iteration.
$8$ – Total time spent factorizing the system matrix.
$9$ – Total time spent backsolving the system matrix.
$10$ – Total time spent in the multiple centrality correctors phase.
$11$ – Time spent in the initialization phase.
$12$ – Number of nonzeros in the system matrix.
$13$ – Number of nonzeros in the system matrix factor.
$14$ – Maximum error of the backsolve.
$15$ – Number of columns in $A$ considered dense by the solver.
$16$ – Maximum number of centrality corrector steps.
$17$–$99$ – Reserved for future use.
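For readability when reading the stats array (either on exit from the solver or inside monit), the positions above can be given names. The constants below are purely illustrative and are not part of the nagcpp interface.

#include <cstddef>

// Illustrative names (not part of the nagcpp interface) for selected entries
// of the stats array described in the table above.
namespace lp_ipm_stats {
  constexpr std::size_t iterations            = 0;  // number of iterations
  constexpr std::size_t centrality_correctors = 1;  // total centrality correction steps
  constexpr std::size_t iterative_refinements = 2;  // total iterative refinements
  constexpr std::size_t diagonal_perturbation = 3;  // perturbation added to the diagonal
  constexpr std::size_t factorizations        = 4;  // total factorizations performed
  constexpr std::size_t time_total            = 5;  // total time spent in the solver
  constexpr std::size_t nnz_system_matrix     = 12; // nonzeros in the system matrix
  constexpr std::size_t nnz_factor            = 13; // nonzeros in the system matrix factor
  constexpr std::size_t dense_columns         = 15; // columns of A treated as dense
}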
6: $\mathbf{monit}$ – void function Function
monit is provided to enable you to monitor the progress of the optimization. It is invoked at the end of every $i$th iteration where $i$ is given by the optional parameter LPIPM Monitor Frequency (the default is $0$, in which case monit is not called).
monit may be the dummy function lp_imp_dummy_monit (lp_imp_dummy_monit is included in the NAG Library).
void function monit(CommE04RA &comm, const utility::array1D<double,data_handling::ArgIntent::IN> &rinfo, const utility::array1D<double,data_handling::ArgIntent::IN> &stats)
1: $\mathbf{comm}$ – CommE04RA Input/Output
Communication structure.
Container for:
handle – void *
This optional parameter may be set using the method CommE04RA::handle and accessed via CommE04RA::get_handle.
On entry: the handle to the problem as provided on entry to handle_solve_lp_ipm. It may be used to query the model during the solve, and extract the current approximation of the solution by e04rxf (no CPP interface).
On entry: solver statistics at the end of the current iteration as described in stats; note, however, that elements $2$, $3$, $5$, $9$, $10$, $11$ and $15$ refer to the quantities in the last iteration rather than accumulated over all iterations through the whole algorithm run.
7: $\mathbf{opt}$ – OptionalE04MT Input/Output
Optional parameter container, derived from Optional.
Note: in some cases handle_solve_lp_ipm may return useful information.
All errors and warnings have an associated numeric error code field, errorid, stored either as a member of the thrown exception object (see errorid), or as a member of opt.ifail, depending on how errors and warnings are being handled (see Error Handling for more details).
$\mathbf{errorid}=1$
comm::handle does not belong to the NAG optimization modelling suite, has not been initialized properly or is corrupted.
$\mathbf{errorid}=1$
comm::handle has not been initialized properly or is corrupted.
$\mathbf{errorid}=2$
This solver does not support the model defined in the handle.
$\mathbf{errorid}=2$
The problem is already being solved.
$\mathbf{errorid}=4$
On entry, ${\mathbf{nvar}}=\langle \mathit{value}\rangle$, expected $\mathrm{value}=\langle \mathit{value}\rangle$. Constraint: ${\mathbf{nvar}}$ must match the value given during initialization of comm::handle.
$\mathbf{errorid}=5$
On entry, ${\mathbf{nnzu}}=\langle \mathit{value}\rangle$. ${\mathbf{nnzu}}$ does not match the size of the Lagrangian multipliers for constraints. The correct value is either $0$ or $\langle \mathit{value}\rangle$.
$\mathbf{errorid}=5$
On entry, ${\mathbf{nnzu}}=\langle \mathit{value}\rangle$. ${\mathbf{nnzu}}$ does not match the size of the Lagrangian multipliers for constraints. The correct value is $0$ for no constraints.
$\mathbf{errorid}=51$
The problem was found to be primal infeasible.
$\mathbf{errorid}=52$
The problem was found to be dual infeasible.
$\mathbf{errorid}=53$
The problem seems to be primal or dual infeasible, the algorithm was stopped.
$\mathbf{errorid}=10601$
On entry, argument $\langle \mathit{value}\rangle$ must be a vector of size $\langle \mathit{value}\rangle$ array. Supplied argument has $\langle \mathit{value}\rangle$ dimensions.
$\mathbf{errorid}=10601$
On entry, argument $\langle \mathit{value}\rangle$ must be a vector of size $\langle \mathit{value}\rangle$ array. Supplied argument was a vector of size $\langle \mathit{value}\rangle$.
$\mathbf{errorid}=10601$
On entry, argument $\langle \mathit{value}\rangle$ must be a vector of size $\langle \mathit{value}\rangle$ array. The size for the supplied array could not be ascertained.
$\mathbf{errorid}=10602$
On entry, the raw data component of $\langle \mathit{value}\rangle$ is null.
$\mathbf{errorid}=10603$
On entry, unable to ascertain a value for $\langle \mathit{value}\rangle$.
$\mathbf{errorid}=10605$
On entry, the communication class $\langle \mathit{value}\rangle$ has not been initialized correctly.
$\mathbf{errorid}=10703$
An exception was thrown during IO (writing).
$\mathbf{errorid}=-99$
An unexpected error has been triggered by this routine.
$\mathbf{errorid}=-399$
Your licence key may have expired or may not have been installed correctly.
If $\mathbf{errorid}={\mathbf{0}}$ on the final exit, the returned point satisfies Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to $\sqrt{\epsilon}$) and thus it is a good estimate of the solution. If $\mathbf{errorid}={\mathbf{50}}$, some of the convergence conditions were not fully satisfied but the point is a reasonable estimate and still usable. Please refer to Section 11.3 and the description of the particular options.
8 Parallelism and Performance
Please see the description for the underlying computational routine in this section of the FL Interface documentation.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level, Print Solution and Print Options determine the exposed level of detail. This allows, for example, a detailed log file to be generated while the condensed information is displayed on the screen.
By default (${\mathbf{Print\; File}}=6$, ${\mathbf{Print\; Level}}=2$), six sections are printed to the standard output:
Header
Optional parameters list
Problem statistics
Iteration log
Summary
Solution
The iteration log varies depending on which algorithm has been selected to solve the problem (Primal-Dual or Self-Dual).
Header
The header is a message indicating the start of the solver. It should look like:
----------------------------------------------
E04MT, Interior point method for LP problems
----------------------------------------------
Optional parameters list
The list shows all options of the solver, each displayed on one line. The output contains the option name, its current value and an indicator for how it was set. The options unchanged from the default setting are noted by ‘d’, options you set are noted by ‘U’, and options reset by the solver are noted by ‘S’. Note that the output format is compatible with the file format expected by e04zpf (no CPP interface). The output might look as follows:
Stats Time = Yes * U
Task = Minimize * d
Lpipm Centrality Correctors = 6 * d
Lp Presolve = Yes * d
Problem statistics
If ${\mathbf{Print\; Level}}\ge 2$, statistics on the original and the presolved problems are printed. More detailed statistics as well as a list of the presolve operations are also printed for Print Level $3$ or above, for example:
Original Problem Statistics
Number of variables 7
Number of constraints 7
Free variables 0
Number of nonzeros 41
Presolved Problem Statistics
Number of variables 13
Number of constraints 7
Free variables 0
Number of nonzeros 47
Iteration log
If ${\mathbf{Print\; Level}}\ge 2$, the solver prints the status of each iteration.
Primal-Dual algorithm
If ${\mathbf{Print\; Level}}=2$, the output shows the iteration number ($0$ represents the starting point), the current primal and dual objective value, KKT measures (optimality, feasibility and complementarity), average complementarity error $\mu $, number of centrality correction steps (MCC) performed and a column for additional information (I). Note that all these values are also available in rinfo and stats. The output might look as follows:
If ${\mathbf{Print\; Level}}=3$, the solver also prints for each iteration the primal and dual steps as well as the maximum error of the backsolves performed. The output takes the following form:
Self-Dual algorithm
If ${\mathbf{Print\; Level}}=2$, the output shows the iteration number ($0$ represents the starting point), the current primal and dual objective value, convergence measures (primal infeasibility (12), dual infeasibility (13) and duality gap (14)), the value of the additional variable $\tau $ and the number of centrality correction steps performed. The output might look as follows:
If ${\mathbf{Print\; Level}}=3$, the solver also prints for each iteration ${\rho}_{A}$(15), the value of the variable $\kappa $, the step size as well as the maximum error of the backsolves performed. The output might look as follows:
Occasionally, when numerical instabilities are too big, the solver will restart the iteration and switch to an augmented system formulation. In such cases the letters RS will be printed in the information column (I).
If ${\mathbf{Print\; Level}}>3$, for both the Primal-Dual and the Self-Dual algorithms, each iteration produces more information that expands over several lines. This additional information contains:
The method used (normal equation, augmented system);
The centring parameter $\sigma $;
The total number of iterative refinements performed;
The number of iterative refinements performed in the centrality correction steps;
The number of factorizations performed at the current iteration;
The type of factorization performed (Cholesky, Bunch–Parlett);
The value of the perturbation added to the diagonal in the normal equation formulation or on the zero block in the augmented system formulation;
The total time spent in the iteration if Stats Time is not set to NO.
The output might look as follows:
----------- Details of Iteration 1 ------------
method Normal Equation
sigma 6.72E-02
iterative refinements 0
iterative refinements wmcc 0
factorizations 1
matrix type Cholesky
diagonal perturbation 0.00E+00
time iteration 0.02 sec
Summary
Once the solver finishes, a detailed summary is produced:
-------------------------------------------------
Status: converged, an optimal solution found
-------------------------------------------------
Final primal objective value -4.647531E+02
Final dual objective value -4.647531E+02
Absolute primal infeasibility 1.605940E-15
Relative primal infeasibility 1.210247E-16
Absolute dual infeasibility 4.272004E-12
Relative dual infeasibility 6.068111E-15
Absolute complementarity gap 9.387105E-07
Relative complementarity gap 2.507021E-10
Iterations 7
It starts with the status line of the overall result which matches the ifail value and is followed by the final primal and dual objective values as well as the error measures and iteration count.
Optionally, if Stats Time is set, the timings of the different parts of the algorithm are displayed. It might look as follows:
If ${\mathbf{Print\; Solution}}=\mathrm{X}$, the values of the primal variables and their bounds are printed on the primary and secondary outputs. It might look as follows:
This example demonstrates how to use handle_solve_lp_ipm to solve a small LP problem with the two algorithms implemented (Primal-Dual and Self-Dual). The solver is called twice on the same handle with different values of optional parameters.
We solve the following linear programming problem:
This section contains the description of the underlying algorithms used in handle_solve_lp_ipm, which implements the infeasible Primal-Dual and homogeneous Self-Dual methods. For further details, see Andersen et al. (1996), Gondzio (2012), Mészáros (1996) and Wright (1997).
11.1 Primal-Dual Algorithm
For simplicity, we consider the following primal linear programming formulation
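$$\begin{array}{ll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& {c}^{\mathrm{T}}x\\ \text{subject to}& Ax=b\text{,}\\ & x\ge 0\text{,}\end{array}\hspace{2em}\text{(2)}$$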
where $c$, $x\in {\mathbb{R}}^{n}$, $b\in {\mathbb{R}}^{m}$ and $A\in {\mathbb{R}}^{m\times n}$ with full row rank. The dual formulation for (2) is given by
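$$\begin{array}{ll}\underset{y\in {\mathbb{R}}^{m},\,z\in {\mathbb{R}}^{n}}{\mathrm{maximize}}& {b}^{\mathrm{T}}y\\ \text{subject to}& {A}^{\mathrm{T}}y+z=c\text{,}\\ & z\ge 0\text{,}\end{array}\hspace{2em}\text{(3)}$$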
where $y$ and $z$ denote the dual variables. Solutions of the primal (2) and dual (3) problem are connected by the strong duality theory (see for example, Nocedal and Wright (2006)) and are characterized by the first order optimality conditions, the so-called Karush–Kuhn–Tucker (KKT) conditions, which are stated as follows:
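$$\begin{array}{rl}{A}^{\mathrm{T}}y+z& =c\text{,}\\ Ax& =b\text{,}\\ XZe& =0\text{,}\\ x\ge 0\text{,}\hspace{1em}& z\ge 0\text{,}\end{array}\hspace{2em}\text{(4)}$$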
where we define $X$ and $Z$ as the diagonal matrices with the elements ${x}_{i}$ and ${z}_{i}$, respectively, and $e={(1,1,\dots ,1)}^{\mathrm{T}}$ as the $n$-vector of ones.
The underlying algorithm applies an iterative method to find an optimal solution $({x}^{*},{y}^{*},{z}^{*})$ of the system (4), employing variants of Newton's method and modifying the search directions and step lengths so that the non-negativity constraints are preserved at every iteration.
A direct solution of the nonlinear system of equations (4) by Newton's method is impractical as it exhibits slow progress towards the solution. Instead, a sequence of the following perturbed KKT conditions is solved so that the complementarity is driven to zero through the iteration sequence
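$$\begin{array}{rl}{A}^{\mathrm{T}}y+z& =c\text{,}\\ Ax& =b\text{,}\\ XZe& =\sigma \mu e\text{,}\\ x\ge 0\text{,}\hspace{1em}& z\ge 0\text{.}\end{array}\hspace{2em}\text{(5)}$$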
Here $\mu ={x}^{\mathrm{T}}z/n$ is the average complementarity error at the current iteration, called the duality measure, and $\sigma \in (0,1)$, called the centring parameter, is the reduction factor that we wish to achieve in the duality measure.
Each iteration of the Primal-Dual algorithm makes one step of Newton's method applied to the perturbed first order optimality conditions (5) with a given $\sigma $ and $\mu $. In particular, a Newton search direction is computed by solving a system of linear equations and a length of the step $\alpha $ is determined so that the bounds $(x,z)>0$ are not violated. The residual of (4) and $\mu $ define stopping criteria and the algorithm terminates once they are reduced to the requested accuracy, see Section 11.3.
Given an $x$, $z\in {\mathbb{R}}_{+}^{n}$ and $y\in {\mathbb{R}}^{m}$, Newton's direction is obtained by solving the following system:
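$$\left(\begin{array}{ccc}A& 0& 0\\ 0& {A}^{\mathrm{T}}& I\\ Z& 0& X\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\\ \Delta z\end{array}\right)=\left(\begin{array}{c}-{r}_{b}\\ -{r}_{c}\\ \sigma \mu e-XZe\end{array}\right)\text{,}\hspace{2em}\text{(6)}$$
where, in the standard sign convention, ${r}_{b}=Ax-b$ and ${r}_{c}={A}^{\mathrm{T}}y+z-c$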
denote the violations of the primal and the dual constraints, respectively. Primal and dual infeasibilities, ${r}_{b}$ and ${r}_{c}$, are reduced at the same rate $(1-\alpha )$, given a step size $\alpha \in (0,1)$. The Primal-Dual algorithm does not require feasibility of the solutions during the optimization process. Feasibility is attained during the process as optimality is approached.
Once the system is solved, $\Delta x$ and $\Delta z$ are used to compute the maximum step sizes in primal space $\left({\alpha}_{P}\right)$ and dual space (${\alpha}_{D}$) such that the non-negativity of variables is preserved
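$${\alpha}_{P}=\mathrm{max}\left\{\alpha \in \left(0,1\right]:x+\alpha \Delta x\ge 0\right\}\text{,}\hspace{2em}{\alpha}_{D}=\mathrm{max}\left\{\alpha \in \left(0,1\right]:z+\alpha \Delta z\ge 0\right\}\text{.}$$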
Finally, the barrier parameter $\mu $ is decreased by a given factor and the process is repeated until the stopping criteria (see Section 11.3) or maximum number of iterations is reached.
11.1.1 The Barrier Method
Note that there is also another way to obtain the perturbed KKT conditions (5). They can be derived starting from the primal formulation (2) and replacing the non-negativity constraints $x\ge 0$ by a logarithmic barrier term ${\sum}_{j=1}^{n}\mathrm{log}{x}_{j}$ with a barrier parameter $\tau >0$. This approach leads to the primal logarithmic barrier problem defined as
$$\begin{array}{ll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& {c}^{\mathrm{T}}x-\tau \sum_{j=1}^{n}\mathrm{log}{x}_{j}\\ \text{subject to}& Ax=b\text{.}\end{array}$$
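Writing down the first-order optimality conditions of this barrier problem (a sketch of the standard derivation) gives
$${A}^{\mathrm{T}}y+\tau {X}^{-1}e=c\text{,}\hspace{2em}Ax=b\text{,}\hspace{2em}x>0\text{.}$$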
Finally, by introducing $Z=\tau {X}^{-1}$ and $z=Ze$ the same system of equations (5) is formulated.
11.2 Homogeneous Self-Dual Algorithm
A homogeneous Self-Dual (HSD) embedding of the primal linear programming problem and its dual was proposed in Xu et al. (1996). As its name suggests, the HSD embedding and its dual are equivalent. Self-Dual formulations embed the original problem (2) in a larger linear programming problem such that the latter is primal and dual feasible, with known feasible points, and from whose solution we can extract either an optimal solution or a certificate of infeasibility of the original problem.
We define the homogeneous and Self-Dual linear feasibility (HLF) model as follows:
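$$\begin{array}{rl}Ax-b\tau & =0\text{,}\\ {A}^{\mathrm{T}}y+z-c\tau & =0\text{,}\\ -{c}^{\mathrm{T}}x+{b}^{\mathrm{T}}y-\kappa & =0\text{,}\\ x\ge 0\text{,}\hspace{1em}z\ge 0\text{,}& \hspace{1em}\tau \ge 0\text{,}\hspace{1em}\kappa \ge 0\text{,}\end{array}\hspace{2em}\text{(8)}$$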
where $\tau $ and $\kappa $ are two additional variables. The model (8) is a Self-Dual linear programming problem with zero right-hand side and a zero objective vector. If $(\hat{x},\hat{\tau},\hat{y},\hat{z},\hat{\kappa})$ is a strictly complementary solution for (8), then, if $\hat{\tau}>0$, the linear programming problem has an optimal solution given by
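$$\left({x}^{*},{y}^{*},{z}^{*}\right)=\left(\hat{x},\hat{y},\hat{z}\right)/\hat{\tau}$$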
and the duality gap is given by ${c}^{\mathrm{T}}{x}^{*}-{b}^{\mathrm{T}}{y}^{*}=\hat{\kappa}/\hat{\tau}=0$. The homogeneous algorithm is an application of the Primal-Dual method for the computation of a strictly complementary solution to (8).
Homogeneous and Self-Dual interior point methods have several advantages besides an inherent ability to detect infeasibility (which improves on the divergence detection used in Primal-Dual algorithms). The HSD model makes it easy to find a suitable starting point and is generally more robust in the presence of free variables. However, some disadvantages need to be noted: the HSD model is larger than the original problem. In particular, it increases the number of linear systems solved per iteration by one, requiring an extra backsolve step, which makes it slightly slower than the Primal-Dual algorithm. Moreover, numerical experiments indicate that the required number of iterations on feasible problems might be slightly increased.
11.3 Stopping Criteria
11.3.1 Convergence – Optimal Termination
11.3.1.1 Primal-Dual algorithm
The Primal-Dual algorithm is stopped when the first order optimality conditions are satisfied to the requested accuracy. These conditions are relative primal and dual feasibility and duality gap, defined as:
Furthermore, the final absolute primal and dual infeasibilities and the duality gap are also returned. Here ${\epsilon}_{1}$ may be set using LPIPM Stop Tolerance and the norm denotes the 2-norm.
11.3.1.2 Self-Dual algorithm
Similar to the Primal-Dual algorithm, the homogeneous Self-Dual algorithm is stopped when the following measures are satisfied to the requested accuracy.
which measure the relative reduction in the primal, dual and gap infeasibility, respectively. In addition, an extra measure is considered to quantify the accuracy in the objective function, which is given by
Premature termination is triggered if the current iteration exhibits fast convergence and the optimality measures lie within a small range. In particular, the Self-Dual algorithm is stopped if the above termination conditions are met within a small factor and $\tau >1000\kappa $. This measure is tracked after the first $10$ iterations.
11.3.2 Infeasibility/Unboundedness Detection
11.3.2.1 Primal-Dual algorithm
The Primal-Dual algorithm detects infeasible problems fairly reliably by using a set of heuristics. When several of these heuristics classify the problem as infeasible throughout a sufficient number of iterations, the algorithm is stopped.
Note that in order to obtain a certificate of infeasibility, the use of the homogeneous Self-Dual algorithm is recommended; see Section 11.3.2.2.
11.3.2.2 Self-Dual algorithm
The problem is concluded to be primal or dual infeasible if one of the following conditions holds:
1. Both the relative primal (12) and dual (13) feasibility of the HLF model (8) are satisfied and the value of $\tau $ satisfies
Then the problem is declared dual infeasible if ${c}^{\mathrm{T}}x<0$ or primal infeasible otherwise.
11.3.3 Suboptimal Solution
The solver stops prematurely and reports suboptimal solution when it predicts that the current estimate of the solution will not be improved in subsequent iterations. In most cases the returned solution should be acceptable.
11.4 Solving the KKT System
The solution of the Newton system of equations (6) is the most computationally costly operation. In practice, system (6) is reduced to the augmented system by eliminating $\Delta z$ from the last block of equations
as follows:
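$$\left(\begin{array}{cc}-{D}^{-2}& {A}^{\mathrm{T}}\\ A& 0\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\end{array}\right)=\left(\begin{array}{c}r\\ h\end{array}\right)\text{,}\hspace{2em}\text{(16)}$$
where ${D}^{2}=X{Z}^{-1}$ and the right-hand side vectors $r$ and $h$ collect the corresponding residual terms of (6),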
and $\Delta z={-X}^{-1}(\mu e-XZe)-{X}^{-1}Z\Delta x$. This is a system of $m+n$ variables and constraints, symmetric and indefinite. Submatrix $D$ is diagonal and positive definite.
The system (16) can be reduced further by eliminating $\Delta x$, to a positive definite system usually called normal equations defined as
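$$\left(A{D}^{2}{A}^{\mathrm{T}}\right)\Delta y=h+A{D}^{2}r\text{,}\hspace{2em}\text{(17)}$$
after which $\Delta x={D}^{2}\left({A}^{\mathrm{T}}\Delta y-r\right)$ is recovered by substitution.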
Typically, formulation (17) is preferred for many problems as the system matrix can be factorized by a sparse Cholesky. However, this brings some well-known disadvantages: Ill-conditioning of the system is often observed during the final stages of the algorithm, and free (unbounded) variables require certain modifications. If matrix $A$ contains dense columns (columns with relatively many nonzeros), then $A{D}^{2}{A}^{\mathrm{T}}$ has many nonzeros, which in turn makes the factorization expensive. On the other hand, solving the augmented system is usually slower, but it normally avoids the fill-in caused by dense columns and can handle free variables directly in the formulation.
handle_solve_lp_ipm can detect and handle dense columns effectively: depending on the number and the density of the ‘dense’ columns, the solver may either choose to use an augmented system formulation directly or to treat these columns separately in a product-form Cholesky factorization as described in Goldfarb and Scheinberg (2004). It is also possible to manually override the automatic choice via the optional parameter LPIPM System Formulation and let the solver use a normal equations or an augmented system formulation.
Badly scaled optimal solutions may present numerical challenges, therefore iterative refinement using mixed-precision is employed for reducing the roundoff errors produced during the solution of the system. When the condition number of the system $A{D}^{2}{A}^{\mathrm{T}}$ prevents the satisfactory use of iterative refinement, handle_solve_lp_ipm switches automatically to an augmented system formulation, reporting RS (Restart) in the last column of the iteration log (I). Furthermore, handle_solve_lp_ipm provides several scaling techniques to adjust the numerical characteristics of the problem data, see LPIPM Scaling.
Finally, factorization of the system matrix can degrade sparsity, and the resulting fill-in can be large; therefore, several ordering techniques are included to minimize it. handle_solve_lp_ipm uses the Harwell package MA97 (see Hogg and Scott (2011) and HSL (2011)) for the underlying sparse linear algebra factorization, and the MC68 approximate minimum degree algorithm and the METIS nested dissection algorithm (Karypis and Kumar (1998)) for the ordering.
11.5 Weighted Multiple Centrality Correctors
As previously stated, the factorization of the system at every iteration usually accounts for most of the computation time, therefore it is always desirable to reuse the factors if possible and to reduce the total number of iterations. An efficient computational method is obtained by splitting the computation of the Newton direction into two steps, namely the affine-scaling direction and its correction step, called Mehrotra's predictor-corrector. However, Mehrotra's predictor-corrector technique aims to correct the affine-direction in a full step, which is considered an aggressive approach.
handle_solve_lp_ipm implements a high-order method, called weighted multiple centrality correctors (WMCC), see Gondzio (1996) and Colombo and Gondzio (2008). This technique attempts to correct the affine-scaling direction recursively as long as the step sizes increase at least by a fraction of a given aspiration level, up to a maximum number of times. The heuristic used to determine the maximum number of WMCC steps is based on the ratio between the cost of the factorization and that of backsolving and, therefore, may lead to non-repeatable results. To avoid this behaviour, you can fix the maximum number of WMCC steps with the optional parameter LPIPM Centrality Correctors.
11.6 Further Details
handle_solve_lp_ipm includes an advanced preprocessing phase (called presolve) to reduce the dimensions of the problem before passing it to the solver. The reduction in problem size generally improves the behaviour of the solver, shortening the total computation time. In addition, infeasibility may also be detected during preprocessing. The default behaviour of the presolve can be modified by LP Presolve.
The initial point ${x}_{0}$ is always computed using heuristics. Effective warm-starting strategies for interior point methods are still the subject of intensive academic research and warm-starting is not available in this release.
12 Optional Parameters
Several optional parameters in handle_solve_lp_ipm define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of handle_solve_lp_ipm these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling handle_opt_set anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice of $\mathrm{AUTO}$), their value can be retrieved by handle_opt_get. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords;
a parameter value, where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see precision).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Infinite Bound Size
$r$
Default $\text{}={10}^{20}$
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
LP Presolve
$a$
Default $=\mathrm{FULL}$
This parameter allows you to reduce the level of presolving of the problem or turn it off completely. If the presolver is turned off, the solver will try to handle the problem you have given. In such a case, the presence of fixed variables or linear dependencies in the constraint matrix can cause numerical instabilities to occur. In normal circumstances, it is recommended to use the full presolve which is the default.
Constraint: ${\mathbf{LP\; Presolve}}=\mathrm{FULL}$, $\mathrm{BASIC}$ or $\mathrm{NO}$.
LPIPM Algorithm
$a$
Default $=\mathrm{PRIMAL-DUAL}$
As described in Section 11, handle_solve_lp_ipm implements the infeasible Primal-Dual algorithm, see Section 11.1, and the homogeneous Self-Dual algorithm, see Section 11.2. This parameter controls which one to use.
Constraint: ${\mathbf{LPIPM\; Algorithm}}=\mathrm{PRIMAL-DUAL}$, $\mathrm{PD}$, $\mathrm{SELF-DUAL}$ or $\mathrm{SD}$.
LPIPM Centrality Correctors
$i$
Default $\text{}=6$
This parameter controls the number of centrality correctors (see Section 11.5) used at each iteration. Each corrector step attempts to improve the current iterate for the price of additional solve(s) of the factorized system matrix in order to reduce the total number of iterations. Therefore, it trades the additional solves of the system with the number of factorizations. The more expensive the factorization is with respect to the solve, the more corrector steps should be allowed.
If $i>0$, the maximum number of corrector steps will be computed by timing heuristics (the ratio between the times of the factorization and the solve in the first iteration) but will not be greater than $i$. The number computed by the heuristic can be recovered after the solve or during a monitoring step in stats. This might cause non-repeatable results.
If $i<0$, the maximum number of corrector steps will be set to $\left|i\right|$.
If it is set to $0$, no additional centrality correctors will be used and the algorithm reverts to Mehrotra's predictor-corrector.
LPIPM Iteration Limit
$i$
Default $\text{}=100$
The maximum number of iterations to be performed by handle_solve_lp_ipm. Setting the option too low might lead to $\mathbf{errorid}={\mathbf{22}}$.
This parameter controls the type of scaling to be applied on the constraint matrix $A$ before solving the problem. More precisely, the scaling procedure will try to find diagonal matrices ${D}_{1}$ and ${D}_{2}$ such that the values in ${D}_{1}A{D}_{2}$ are of a similar order of magnitude. The solver is less likely to run into numerical difficulties when the constraint matrix is well scaled.
Constraint: ${\mathbf{LPIPM\; Scaling}}=\mathrm{ARITHMETIC}$, $\mathrm{GEOMETRIC}$ or $\mathrm{NONE}$.
LPIPM Monitor Frequency
$i$
Default $\text{}=0$
This parameter defines the frequency of how often function monit is called. If $i>0$, the solver calls monit at the end of every $i$th iteration. If it is set to $0$, the function is not called at all.
As described in Section 11.4, handle_solve_lp_ipm can internally work either with the normal equations formulation (17) or with the augmented system (16). A brief discussion of advantages and disadvantages is presented in Section 11.4. Option $\mathrm{AUTO}$ leaves the decision to the solver based on the structure of the constraints and it is the recommended option. This will typically lead to the normal equations formulation unless there are many dense columns or the system is significantly cheaper to factorize as the augmented system. Note that in some cases even if ${\mathbf{LPIPM\; System\; Formulation}}=\mathrm{NORMAL\; EQUATIONS}$ the solver might switch the formulation through the computation to the augmented system due to numerical instabilities or computational cost.
If $i\ge 0$, the unit number for the secondary (monitoring) output. If set to $-1$, no secondary output is provided. The following information is output to the unit:
– a listing of the optional parameters if set by Print Options;
– problem statistics, the iteration log and the final status as set by Monitoring Level;
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
If $i\ge 0$, the unit number for the primary output of the solver. If ${\mathbf{Print\; File}}=-1$, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf (no CPP interface) at the time of the optional parameters initialization, e.g., at the initialization of the handle. The following information is output to the unit:
– a listing of optional parameters if set by Print Options;
– problem statistics, the iteration log and the final status from the solver as set by Print Level;
This parameter defines how detailed information should be printed by the solver to the primary output.
$i$ – Output
$0$ – No output from the solver
$1$ – Only the final status and the primal and dual objective value
$2$ – Problem statistics, one line per iteration showing the progress of the solution with respect to the convergence measures, final status and statistics
$3$ – As level $2$ but each iteration line is longer, including step lengths and errors
$4$, $5$ – As level $3$ but further details of each iteration are presented
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Print Options
$a$
Default $=\mathrm{YES}$
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary and secondary output.
Constraint: ${\mathbf{Print\; Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
Print Solution
$a$
Default $=\mathrm{NO}$
If ${\mathbf{Print\; Solution}}=\mathrm{X}$, the final values of the primal variables are printed on the primary and secondary outputs.
If ${\mathbf{Print\; Solution}}=\mathrm{YES}$ or $\mathrm{ALL}$, in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.
Constraint: ${\mathbf{Print\; Solution}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{X}$ or $\mathrm{ALL}$.
Stats Time
$a$
Default $=\mathrm{NO}$
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solution approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL\; CLOCK}$.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL\; CLOCK}$.
Task
$a$
Default $=\mathrm{MINIMIZE}$
This parameter specifies the required direction of the optimization. If ${\mathbf{Task}}=\mathrm{FEASIBLE\; POINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, Task reverts to $\mathrm{FEASIBLE\; POINT}$ automatically.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE\; POINT}$.