# naginterfaces.library.opt.handle_solve_lp_ipm

naginterfaces.library.opt.handle_solve_lp_ipm(handle, x=None, u=None, monit=None, data=None, io_manager=None)[source]

handle_solve_lp_ipm is a solver from the NAG optimization modelling suite for large-scale Linear Programming (LP) problems based on an interior point method (IPM).

Note: this function uses optional algorithmic parameters, see also: handle_opt_set(), handle_opt_get().

For full information please refer to the NAG Library document for e04mt

https://www.nag.com/numeric/nl/nagdoc_28.7/flhtml/e04/e04mtf.html

Parameters
handleHandle

The handle to the problem. It needs to be initialized (e.g., by handle_init()) and to hold a problem formulation compatible with handle_solve_lp_ipm. It must not be changed between calls to the NAG optimization modelling suite.

xNone or float, array-like, shape , optional

The input of x is reserved for future releases of the NAG Library and is ignored at the moment.

uNone or float, array-like, shape , optional

Note: if the problem has bound or linear constraints, u holds Lagrange multipliers (dual variables) for the bound constraints and linear constraints; otherwise, u will not be referenced.

The input of u is reserved for future releases of the NAG Library and is ignored at the moment.

monitNone or callable monit(handle, rinfo, stats, data=None), optional

Note: if this argument is None then a NAG-supplied facility will be used.

monit is provided to enable you to monitor the progress of the optimization.

It is invoked at the end of every ith iteration, where i is given by the option ‘LPIPM Monitor Frequency’ (with the default setting, monit is not called).

Parameters
handleHandle

The handle to the problem as provided on entry to handle_solve_lp_ipm. It may be used to query the model during the solve, and extract the current approximation of the solution by handle_set_get_real().

rinfofloat, ndarray, shape

Error measures and various indicators at the end of the current iteration, as described for the return value rinfo.

statsfloat, ndarray, shape

Solver statistics at the end of the current iteration, as described for the return value stats; note, however, that several elements refer to quantities from the last iteration rather than accumulated over all iterations through the whole algorithm run.

dataarbitrary, optional, modifiable in place

User-communication data for callback functions.

dataarbitrary, optional

User-communication data for callback functions.

io_managerFileObjManager, optional

Manager for I/O in this routine.

Returns
xfloat, ndarray, shape

The final values of the variables x.

ufloat, ndarray, shape

The final values of the variables u.

rinfofloat, ndarray, shape

Error measures and various indicators of the algorithm (see Algorithmic Details for details) as given in the table below:

0 Value of the primal objective.

1 Value of the dual objective.

2 Flag indicating the system formulation used by the solver (augmented system or normal equation).

3 Factorization type (Cholesky or Bunch–Parlett).

4–13 Primal-Dual specific information (will be zero if the Self-Dual algorithm is chosen):

4 Relative dual feasibility (optimality).
5 Relative primal feasibility.
6 Relative duality gap (complementarity).
7 Average complementarity error μ (see The Infeasible-interior-point Primal-Dual Algorithm).
8 Centring parameter σ (see The Infeasible-interior-point Primal-Dual Algorithm).
9 Primal step length.
10 Dual step length.
11–13 Reserved for future use.

14–23 Self-Dual specific information (will be zero if the Primal-Dual algorithm is chosen):

14 Relative primal infeasibility.
15 Relative dual infeasibility.
16 Relative duality gap.
17 Accuracy.
18 τ.
19 κ.
20 Step length.
21–23 Reserved for future use.

Remaining elements: reserved for future use.

statsfloat, ndarray, shape

Solver statistics as given in the table below. Note that time statistics are provided only if ‘Stats Time’ is set (the default is ‘NO’); the measured time is returned in seconds.

0 Number of iterations.
1 Total number of centrality correction steps performed.
2 Total number of iterative refinements performed.
3 Value of the perturbation added to the diagonal in the normal equation formulation or on the zero block in the augmented system formulation.
4 Total number of factorizations performed.
5 Total time spent in the solver.
6 Time spent in the presolve phase.
7 Time spent in the last iteration.
8 Total time spent factorizing the system matrix.
9 Total time spent backsolving the system matrix.
10 Total time spent in the multiple centrality correctors phase.
11 Time spent in the initialization phase.
12 Number of nonzeros in the system matrix.
13 Number of nonzeros in the system matrix factor.
14 Maximum error of the backsolve.
15 Number of columns in A considered dense by the solver.
16 Maximum number of centrality corrector steps.
17–99 Reserved for future use.
Other Parameters
‘Defaults’valueless

This special keyword may be used to reset all options to their default values. Any value given with this keyword will be ignored.

‘Infinite Bound Size’float

Default

This defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to will be regarded as (and similarly any lower bound less than or equal to will be regarded as ). Note that a modification of this option does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.

Constraint: .

‘LP Presolve’str

Default

This argument allows you to reduce the level of presolving of the problem or turn it off completely. If the presolver is turned off, the solver will try to handle the problem you have given. In such a case, the presence of fixed variables or linear dependencies in the constraint matrix can cause numerical instabilities to occur. In normal circumstances, it is recommended to use the full presolve which is the default.

Constraint: , or .

‘LPIPM Algorithm’str

Default

As described in Algorithmic Details, handle_solve_lp_ipm implements the infeasible Primal-Dual algorithm, see The Infeasible-interior-point Primal-Dual Algorithm, and the homogeneous Self-Dual algorithm, see Homogeneous Self-Dual Algorithm. This argument controls which one to use.

Constraint: , , or .

‘LPIPM Centrality Correctors’int

Default

This argument controls the number of centrality correctors (see Weighted Multiple Centrality Correctors) used at each iteration. Each corrector step attempts to improve the current iterate for the price of additional solve(s) of the factorized system matrix in order to reduce the total number of iterations. Therefore, it trades the additional solves of the system with the number of factorizations. The more expensive the factorization is with respect to the solve, the more corrector steps should be allowed.

If , the maximum number of corrector steps will be computed by timing heuristics (the ratio between the times of the factorization and the solve in the first iteration) but will not be greater than . The number computed by the heuristic can be recovered after the solve or during a monitoring step in stats. This might cause non-repeatable results.

If , the maximum number of corrector steps will be set to .

If it is set to , no additional centrality correctors will be used and the algorithm reverts to Mehrotra’s predictor-corrector.
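The trade-off described above can be made concrete with a small sketch: allow roughly one corrector step per extra solve that fits into the cost of a factorization. The exact rule and the cap used here are assumptions made for illustration only, not the solver’s published heuristic.

```python
# Illustrative sketch of a timing heuristic in the spirit described above.
# The decision rule and the cap of 10 are assumptions for this sketch.

def max_correctors(t_factorize, t_solve, cap=10):
    """Pick a corrector budget from first-iteration timings."""
    if t_solve <= 0.0:
        return cap  # a solve is essentially free compared to a factorization
    return max(0, min(cap, int(t_factorize / t_solve) - 1))

# Expensive factorization relative to a solve: several correctors pay off.
print(max_correctors(0.8, 0.1))
# Factorization and solve cost about the same: correctors are not worthwhile.
print(max_correctors(0.1, 0.09))
```

The more the factorization dominates, the larger the corrector budget this rule returns, matching the guidance in the option description.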

‘LPIPM Iteration Limit’int

Default

The maximum number of iterations to be performed by handle_solve_lp_ipm. Setting the option too low might lead to an exit with the warning that the maximum number of iterations was exceeded.

Constraint: .

‘LPIPM Max Iterative Refinement’int

Default

This argument controls the maximum number of iterative refinement iterations (see Solving the KKT System) used at each main iteration when the normal equations formulation is in effect. When solving this linear system for numerically challenging problems, mixed-precision iterative refinement may be used until the roundoff errors are reduced to an acceptable level or until the number of refinements reaches the maximum value set by this argument.

Constraint: .

‘LPIPM Scaling’str

Default

This argument controls the type of scaling to be applied to the constraint matrix before solving the problem. More precisely, the scaling procedure will try to find diagonal matrices D1 and D2 such that the values in the scaled matrix are of a similar order of magnitude. The solver is less likely to run into numerical difficulties when the constraint matrix is well scaled.

Constraint: , or .
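One classical equilibration scheme of this kind divides each row, then each column, by the geometric mean of its nonzero magnitudes. The sketch below is purely illustrative; the scaling schemes actually offered by the solver may differ.

```python
# Illustrative diagonal scaling: bring the entries of D1*A*D2 to similar
# magnitude via geometric-mean equilibration. Not the solver's exact scheme.
import math

def geo_mean_scale(nonzeros):
    """Reciprocal of the geometric mean of the nonzero magnitudes."""
    if not nonzeros:
        return 1.0
    return 1.0 / math.exp(sum(math.log(abs(v)) for v in nonzeros) / len(nonzeros))

def equilibrate(a):
    """Return D1*A*D2 with row scaling applied first, then column scaling."""
    m, n = len(a), len(a[0])
    d1 = [geo_mean_scale([v for v in row if v != 0.0]) for row in a]
    scaled = [[d1[i] * a[i][j] for j in range(n)] for i in range(m)]
    d2 = [geo_mean_scale([scaled[i][j] for i in range(m) if scaled[i][j] != 0.0])
          for j in range(n)]
    return [[scaled[i][j] * d2[j] for j in range(n)] for i in range(m)]

# Entries spanning seven orders of magnitude are brought to magnitude ~1.
print(equilibrate([[1.0e4, 2.0], [5.0, 1.0e-3]]))
```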

‘LPIPM Monitor Frequency’int

Default

This argument defines how often the function monit is called. If , the solver calls monit at the end of every th iteration. If it is set to , the function is not called at all.

Constraint: .
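A minimal monitoring callback matching the documented signature monit(handle, rinfo, stats, data=None) might look as follows. The rinfo/stats indices used are taken from the tables in this document (rinfo[0]/rinfo[1]: primal/dual objective, stats[0]: iteration count), and the dummy arrays stand in for solver-provided data.

```python
# Illustrative monit callback; the indices follow the rinfo/stats tables
# in this document, and the dummy call simulates solver-provided arrays.

def monit(handle, rinfo, stats, data=None):
    # Record (iteration, primal objective, dual objective) if a log was passed.
    if data is not None:
        data.append((stats[0], rinfo[0], rinfo[1]))
    print('iter {:g}: primal = {:.6e}, dual = {:.6e}'.format(
        stats[0], rinfo[0], rinfo[1]))

# Exercise the callback with dummy solver data.
history = []
monit(None, [1.25, 1.30] + [0.0] * 98, [3.0] + [0.0] * 99, data=history)
```

In a real run, such a callback would be passed as the monit argument of handle_solve_lp_ipm, with the data argument used to accumulate a convergence history.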

‘LPIPM Stop Tolerance’float

Default

This argument sets the value which is the tolerance for the convergence measures in the stopping criteria, see Stopping Criteria.

Constraint: .

‘LPIPM Stop Tolerance 2’float

Default

This argument sets the additional tolerance used in the stopping criteria for the Self-Dual algorithm, see Stopping Criteria.

Constraint: .

‘LPIPM System Formulation’str

Default

As described in Solving the KKT System, handle_solve_lp_ipm can internally work either with the normal equations formulation or with the augmented system. A brief discussion of advantages and disadvantages is presented in Solving the KKT System. Option ‘AUTO’ leaves the decision to the solver based on the structure of the constraints and is the recommended option. This will typically lead to the normal equations formulation unless there are many dense columns or the system is significantly cheaper to factorize as the augmented system. Note that even if a particular formulation was requested, the solver might switch the formulation during the computation to the augmented system due to numerical instabilities or computational cost.

Constraint: , , , or .
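The role of dense columns can be seen directly: with the normal equations the solver factorizes a matrix of the form A*D*A^T (D diagonal), and a single dense column of A fills that product in completely. The toy matrices below are illustrative only.

```python
# Sketch of why the formulation choice matters: one dense column of A makes
# the normal equations matrix A*D*A^T fully dense. Toy data, not NAG layout.

def normal_equations_matrix(a, d):
    """Compute A*D*A^T for a dense list-of-lists A and diagonal D."""
    m, n = len(a), len(a[0])
    return [[sum(a[i][k] * d[k] * a[j][k] for k in range(n))
             for j in range(m)] for i in range(m)]

# A is sparse apart from its last column, which is dense.
a = [[1.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [0.0, 0.0, 1.0]]
adat = normal_equations_matrix(a, [1.0, 1.0, 1.0])
print(sum(1 for row in adat for v in row if v != 0.0), 'of 9 entries are nonzero')
```

This fill-in is one reason the augmented system, which keeps A itself in the factorized matrix, can be the cheaper choice for such problems.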

‘Monitoring File’int

Default

If , the unit number for the secondary (monitoring) output. If set to , no secondary output is provided. The following information is output to the unit:

• a listing of the options if set by ‘Print Options’;

• problem statistics, the iteration log and the final status as set by ‘Monitoring Level’;

• the solution if set by ‘Print Solution’.

Constraint: .

‘Monitoring Level’int

Default

This argument sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with ‘Print Level’.

Constraint: .

‘Print File’int

Default

If , the unit number for the primary output of the solver. If , the primary output is completely turned off independently of other settings. The default value is the advisory message unit number at the time of the options initialization, e.g., at the initialization of the handle. The following information is output to the unit:

• a listing of options if set by ‘Print Options’;

• problem statistics, the iteration log and the final status from the solver as set by ‘Print Level’;

• the solution if set by ‘Print Solution’.

Constraint: .

‘Print Level’int

Default

This argument defines how detailed information should be printed by the solver to the primary output.

Output

No output from the solver

Only the final status and the primal and dual objective value

Problem statistics, one line per iteration showing the progress of the solution with respect to the convergence measures, final status and statistics

As level but each iteration line is longer, including step lengths and errors

As level but further details of each iteration are presented

Constraint: .

‘Print Options’str

Default

If , a listing of options will be printed to the primary and secondary output.

Constraint: or .

‘Print Solution’str

Default

If , the final values of the primal variables are printed on the primary and secondary outputs.

If or , in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.

Constraint: , , or .

‘Stats Time’str

Default

This argument allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solving approaches. It is possible to choose between CPU and wall clock time. Choice ‘YES’ is equivalent to ‘WALL CLOCK’.

Constraint: , , or .

‘Task’str

Default

This argument specifies the required direction of the optimization. If , the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, ‘Task’ reverts to ‘FEASIBLE POINT’ automatically.

Constraint: , or .

Raises
NagValueError
(errno )

The supplied handle has not been initialized.

(errno )

The supplied handle does not belong to the NAG optimization modelling suite, has not been initialized properly or is corrupted.

(errno )

The supplied handle has not been initialized properly or is corrupted.

(errno )

This solver does not support the model defined in the handle.

(errno )

The problem is already being solved.

(errno )

On entry, , expected .

Constraint: must match the current number of variables of the model in the handle.

(errno )

On entry, .

does not match the size of the Lagrangian multipliers for constraints.

The correct value is either or .

(errno )

On entry, .

does not match the size of the Lagrangian multipliers for constraints.

The correct value is for no constraints.

(errno )

The problem was found to be primal infeasible.

(errno )

The problem was found to be dual infeasible.

(errno )

The problem seems to be primal or dual infeasible; the algorithm was stopped.

Warns
NagAlgorithmicWarning
(errno )

Suboptimal solution.

NagAlgorithmicMajorWarning
(errno )

Maximum number of iterations exceeded.

(errno )

No progress, stopping early.

NagCallbackTerminateWarning
(errno )

User requested termination during a monitoring step.

Notes

handle_solve_lp_ipm solves a large-scale linear optimization problem in the following form:

minimize    c^T x    (x in R^n)
subject to  l_A <= Ax <= u_A,
            l_x <= x <= u_x,

where n is the number of decision variables and m is the number of linear constraints. Here c, l_x and u_x are n-dimensional vectors, A is an m by n sparse matrix and l_A, u_A are m-dimensional vectors.
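To make this form concrete, here is a plain-Python sketch of the data for a tiny instance, with the constraint matrix stored in coordinate (triplet) form as the modelling suite expects. The variable names are illustrative, not an exact API layout.

```python
# Illustrative data for a tiny LP of the form above; names are assumptions.

n = 3  # number of decision variables
m = 2  # number of linear constraints

c = [-1.0, -2.0, 0.0]            # linear objective coefficients (n-dimensional)
lx = [0.0, 0.0, 0.0]             # variable lower bounds
ux = [1.0e20, 1.0e20, 1.0e20]    # variable upper bounds (1e20 plays "infinity")

# Sparse m x n constraint matrix A in coordinate (triplet) form:
irowa = [1, 1, 2, 2]             # row index of each nonzero
icola = [1, 2, 2, 3]             # column index of each nonzero
aval = [1.0, 1.0, 1.0, 1.0]      # value of each nonzero

la = [-1.0e20, -1.0e20]          # constraint lower bounds (m-dimensional)
ua = [4.0, 6.0]                  # constraint upper bounds

# Basic consistency checks on the dimensions.
assert len(c) == len(lx) == len(ux) == n
assert len(la) == len(ua) == m
assert len(irowa) == len(icola) == len(aval)
```

In an actual run, data of this shape would be passed to the handle-building functions of the modelling suite before calling the solver.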

handle_solve_lp_ipm implements two algorithmic variants of the interior point method for solving linear optimization problems: the infeasible Primal-Dual interior point method and the homogeneous Self-Dual interior point method. In general, the Self-Dual algorithm has a slightly higher price per iteration; however, it is able to declare infeasibility or unboundedness of the problem, whereas the Primal-Dual relies, in this case, on heuristics. For a detailed description of both algorithms see Algorithmic Details. The algorithm is chosen by the option ‘LPIPM Algorithm’; the default is Primal-Dual.

handle_solve_lp_ipm solves linear programming problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. First, the problem handle is initialized by calling handle_init(). Then some of the functions handle_set_linobj(), handle_set_quadobj(), handle_set_simplebounds() or handle_set_linconstr() may be called to formulate the objective function, bounds of the variables, and the block of linear constraints, respectively. Once the problem is fully set, the handle may be passed to the solver. When the handle is not needed anymore, handle_free() should be called to destroy it and deallocate the memory held within. See the E04 Introduction for more details about the NAG optimization modelling suite.

The solver method can be modified by various options (see Other Parameters) which can be set by handle_opt_set() and handle_opt_set_file() anytime between the initialization of the handle by handle_init() and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various options.

The option ‘Task’ may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.

Several options may have significant impact on the performance of the solver. Even if the defaults were chosen to suit the majority of problems, it is recommended to experiment in order to find the most suitable set of options for a particular problem, see Algorithmic Details and Other Parameters for further details.

Structure of the Lagrangian Multipliers

The algorithm works internally with estimates of both the decision variables, denoted by , and the Lagrangian multipliers (dual variables), denoted by . The multipliers are stored in the array and conform to the structure of the constraints.

If the simple bounds have been defined (handle_set_simplebounds() was successfully called), the first 2n elements of u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and the upper bound for each variable. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to 0 and may be ignored.

Similarly, the following elements of u belong to multipliers for the linear constraints (if handle_set_linconstr() has been successfully called). The organization is the same, i.e., the multipliers for the lower and upper bounds of each constraint are alternated, and zeros are used for any missing (infinite bound) constraints.

Some solvers merge multipliers for both lower and upper inequality into one element whose sign determines the inequality. Negative multipliers are associated with the upper bounds and positive with the lower bounds. An equivalent result can be achieved with this storage scheme by subtracting the upper bound multiplier from the lower one. This is also consistent with equality constraints.
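The folding described above can be sketched directly: subtract each upper-bound multiplier from its lower-bound partner to obtain one signed multiplier per bound. The helper name is illustrative, not part of the NAG API.

```python
# Sketch of merging interleaved (lower, upper) multiplier pairs into one
# signed multiplier each: positive means the lower bound is active,
# negative means the upper bound is. Helper name is illustrative.

def fold_multipliers(u):
    """u holds interleaved (lower, upper) pairs; return lower - upper per pair."""
    assert len(u) % 2 == 0
    return [u[2 * i] - u[2 * i + 1] for i in range(len(u) // 2)]

# Two variables: the first sits at its lower bound, the second at its upper.
print(fold_multipliers([1.5, 0.0, 0.0, 2.0]))  # -> [1.5, -2.0]
```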

References

Andersen, E D, Gondzio, J, Mészáros, C and Xu, X, 1996, Implementation of interior point methods for large scale linear programming, HEC/Université de Genève

Colombo, M and Gondzio, J, 2008, Further development of multiple centrality correctors for interior point methods, Computational Optimization and Algorithms (41(3)), 277–305

Goldfarb, D and Scheinberg, K, 2004, A product-form Cholesky factorization method for handling dense columns in interior point methods for linear programming, Mathematical Programming (99(1)), 1–34

Gondzio, J, 1996, Multiple centrality corrections in a primal-dual method for linear programming, Computational Optimization and Algorithms (6(2)), 137–156

Gondzio, J, 2012, Interior point methods 25 years later, European Journal of Operations Research (218(3)), 587–601

Hogg, J D and Scott, J A, 2011, HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems, RAL Technical Report. RAL-TR-2011-024

HSL, 2011, A collection of Fortran codes for large-scale scientific computation, http://www.hsl.rl.ac.uk/

Karypis, G and Kumar, V, 1998, A fast and high quality multilevel scheme for partitioning irregular graphs, SIAM J. Sci. Comput. (20(1)), 359–392

Mészáros, C, 1996, The efficient implementation of interior point methods for linear programming and their applications, PhD Thesis, Eötvös Loránd University of Science, Budapest

Nocedal, J and Wright, S J, 2006, Numerical Optimization, (2nd Edition), Springer Series in Operations Research, Springer, New York

Wright, S J, 1997, Primal-dual interior point methods, SIAM, Philadelphia

Xu, X, Hung, P-F and Ye, Y, 1996, A simplified homogeneous and self-dual linear programming algorithm and its implementation, Annals of Operations Research (62(1)), 151–171