NAG Toolbox: nag_mip_iqp_sparse (h02ce)

Purpose

nag_mip_iqp_sparse (h02ce) obtains integer solutions to sparse linear programming and quadratic programming problems.

Syntax

[ns, xs, istate, miniz, minz, obj, clamda, ifail] = h02ce(n, m, iobj, ncolh, qphx, a, ha, ka, bl, bu, start, names, crname, ns, xs, intvar, istate, strtgy, leniz, lenz, monit, 'nnz', nnz, 'nname', nname, 'lintvr', lintvr, 'mdepth', mdepth)
[ns, xs, istate, miniz, minz, obj, clamda, ifail] = nag_mip_iqp_sparse(n, m, iobj, ncolh, qphx, a, ha, ka, bl, bu, start, names, crname, ns, xs, intvar, istate, strtgy, leniz, lenz, monit, 'nnz', nnz, 'nname', nname, 'lintvr', lintvr, 'mdepth', mdepth)

Description

nag_mip_iqp_sparse (h02ce) is designed to obtain integer solutions to a class of quadratic programming problems addressed by nag_opt_qpconvex1_sparse_solve (e04nk). Specifically it solves the following problem:
   minimize_{x ∈ R^n} f(x)   subject to   l ≤ [ x ; Ax ] ≤ u,          (1)
where x = (x1, x2, …, xn)^T is a set of variables (some of which may be required to be integer), A is an m by n matrix and the objective function f(x) may be specified in a variety of ways depending upon the particular problem to be solved. The optional parameter Maximize may be used to specify an alternative problem in which f(x) is maximized. The possible forms for f(x) are listed in Table 1, in which the prefixes LP and QP stand for ‘linear programming’ and ‘quadratic programming’ respectively, c is an n-element vector and H is the n by n second-derivative matrix ∇²f(x) (the Hessian matrix).
Problem type   Objective function f(x)        Hessian matrix H
LP             c^T x                          Not applicable
QP             c^T x + (1/2) x^T H x          Symmetric positive semidefinite
Table 1
For LP and QP problems, the unique global minimum value of f(x) is found. For QP problems, you must also provide a function that computes Hx for any given vector x. (H need not be stored explicitly.)
(It is not expected that the feasibility problem of nag_opt_qpconvex1_sparse_solve (e04nk) would be relevant here.)
The function employs a ‘Branch and Bound’ technique to enforce the integer constraints. In this technique the problem is first solved without the integer constraints. If a variable is found to be non-integral when it is required to have an integer value then two additional problems are constructed. One bounds the variable above by the nearest integer value below the optimal value previously obtained. The second problem is formed by bounding the variable below by the nearest integer value above the optimal value. This process is continued until an integer solution is found. At this point you may elect to stop, or may continue to search for better integer solutions by examining any other sub-problems that remain to be explored.
In practice the function tries to compute an integer solution as quickly as possible using a depth-first approach, since this helps determine a realistic cut-off value. If we have a cut-off value, say the value of the function at this first integer solution, and any sub-problem, W say, has a solution value greater than this cut-off value, then subsequent sub-problems of W must have solutions greater than the value of the solution at W and therefore need not be computed. Thus a knowledge of a good cut-off value can result in fewer sub-problems being solved and thus speed up the operation of the function. (See the description of monit in Section [Parameters] for details of how you can supply your own cut-off value.)
Each sub-problem is solved using nag_opt_qpconvex1_sparse_solve (e04nk). You are referred to the function document for nag_opt_qpconvex1_sparse_solve (e04nk) for details of the algorithm used.

References

Gill P E, Hammarling S, Murray W, Saunders M A and Wright M H (1986) Users' guide for LSSOL (Version 1.0) Report SOL 86-1 Department of Operations Research, Stanford University
Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372
Gill P E, Murray W, Saunders M A and Wright M H (1986) Some theoretical properties of an augmented Lagrangian merit function Report SOL 86–6R Department of Operations Research, Stanford University
Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474
Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
Lawson C L, Hanson R J, Kincaid D R and Krogh F T (1979) Basic linear algebra subprograms for Fortran usage ACM Trans. Math. Software 5 308–325
Murtagh B A and Saunders M A (1983) MINOS 5.0 user's guide Report SOL 83-20 Department of Operations Research, Stanford University

Parameters

Compulsory Input Parameters

1:     n – int64int32nag_int scalar
n, the number of variables (excluding slacks). This is the number of columns in the linear constraint matrix A.
Constraint: n ≥ 1.
2:     m – int64int32nag_int scalar
m, the number of general linear constraints (or slacks). This is the number of rows in A, including the free row (if any; see iobj).
Constraint: m ≥ 1.
3:     iobj – int64int32nag_int scalar
If iobj > 0, row iobj of A is a free row containing the nonzero elements of the vector c appearing in the linear objective term c^T x.
If iobj = 0, there is no free row, i.e., the problem is either an FP problem (in which case iobj must be set to zero), or a QP problem with c = 0.
Constraint: 0 ≤ iobj ≤ m.
4:     ncolh – int64int32nag_int scalar
nH, the number of leading nonzero columns of the Hessian matrix H. For FP and LP problems, ncolh must be set to zero.
Constraint: 0 ≤ ncolh ≤ n.
5:     qphx – function handle or string containing name of m-file
For QP problems, you must supply a version of qphx to compute the matrix product Hx. If H has rows and columns consisting entirely of zeros, it is most efficient to order the variables x = (y z)^T so that
   Hx = [ H1  0 ] [ y ]  =  [ H1 y ]
        [ 0   0 ] [ z ]     [  0   ] ,
where the nonlinear variables y appear first as shown. For LP problems, qphx will never be called by nag_mip_iqp_sparse (h02ce).
[hx] = qphx(nstate, ncolh, x)

Input Parameters

1:     nstate – int64int32nag_int scalar
If nstate = 1, then nag_mip_iqp_sparse (h02ce) is calling qphx for the first time on a sub-problem. This parameter setting allows you to save computation time if certain data must be read or calculated only once.
If nstate ≥ 2, then nag_mip_iqp_sparse (h02ce) is calling qphx for the last time. This parameter setting allows you to perform some additional computation on the final sub-problem solution. In general, the last call to qphx is made with nstate = 2 + ifail (see Section [Error Indicators and Warnings]).
Otherwise, nstate = 0.
2:     ncolh – int64int32nag_int scalar
This is the same parameter ncolh as supplied to nag_mip_iqp_sparse (h02ce).
3:     x(ncolh) – double array
The first ncolh elements of the vector x.

Output Parameters

1:     hx(ncolh) – double array
The product Hx.
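As a concrete illustration, a minimal qphx routine for a hypothetical Hessian whose leading block is H1 = 2I of order ncolh (the routine name my_qphx and the Hessian itself are assumptions, not part of any problem in this document) might be sketched as follows.
% Minimal sketch of a qphx routine, assuming the hypothetical Hessian
% block H1 = 2*I of order ncolh (for illustration only).
function [hx] = my_qphx(nstate, ncolh, x)
  % nstate could be used for one-off set-up (nstate == 1) or final
  % tidy-up (nstate >= 2); it is not needed in this simple sketch.
  hx = 2*x(1:ncolh);   % hx = H1*x with H1 = 2*I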
6:     a(nnz) – double array
nnz, the dimension of the array, must satisfy the constraint 1 ≤ nnz ≤ n × m.
The nonzero elements of A, ordered by increasing column index. Note that multiple elements with the same row and column indices are not allowed.
7:     ha(nnz) – int64int32nag_int array
nnz, the dimension of the array, must satisfy the constraint 1 ≤ nnz ≤ n × m.
ha(i) must contain the row index of the nonzero element stored in a(i), for i = 1, 2, …, nnz. Note that the row indices for a column may be supplied in any order.
Constraint: 1 ≤ ha(i) ≤ m, for i = 1, 2, …, nnz.
8:     ka(n+1) – int64int32nag_int array
ka(j) must contain the index in a of the start of the jth column, for j = 1, 2, …, n. To specify the jth column as empty, set ka(j) = ka(j+1). Note that the first and last elements of ka must be such that ka(1) = 1 and ka(n+1) = nnz + 1.
Constraints:
  • ka(1) = 1;
  • ka(j) ≥ 1, for j = 2, 3, …, n;
  • ka(n+1) = nnz + 1;
  • 0 ≤ ka(j+1) − ka(j) ≤ m, for j = 1, 2, …, n.
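As an illustration of this storage scheme (using a small hypothetical matrix, not the one from the Example section), the 2 by 3 matrix A = [1 0 2; 0 3 4] would be held as follows.
% Hypothetical 2-by-3 matrix A = [1 0 2; 0 3 4] held column by column:
% a holds the nonzeros, ha their row indices, ka the column start positions.
a  = [1; 3; 2; 4];
ha = [int64(1); 2; 1; 2];
ka = [int64(1); 2; 3; 5];    % ka(n+1) = nnz + 1 = 5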
9:     bl(n+m) – double array
l, the lower bounds for all the variables and general constraints, in the following order. The first n elements of bl must contain the bounds on the variables x, and the next m elements the bounds for the general linear constraints Ax (or slacks s) and the free row (if any). To specify a nonexistent lower bound (i.e., lj = −∞), set bl(j) ≤ −bigbnd, where bigbnd is the value of the optional parameter Infinite Bound Size (default value = 10^20). To specify the jth constraint as an equality, set bl(j) = bu(j) = β, say, where |β| < bigbnd. Note that the lower bound corresponding to the free row must be set to −∞ and stored in bl(n+iobj).
Constraint: if iobj > 0, bl(n+iobj) ≤ −bigbnd.
(See also the description for bu.)
10:   bu(n+m) – double array
u, the upper bounds for all the variables and general constraints, in the following order. The first n elements of bu must contain the bounds on the variables x, and the next m elements the bounds for the general linear constraints Ax (or slacks s) and the free row (if any). To specify a nonexistent upper bound (i.e., uj = +∞), set bu(j) ≥ bigbnd. Note that the upper bound corresponding to the free row must be set to +∞ and stored in bu(n+iobj).
Constraints:
  • if iobj > 0, bu(n+iobj) ≥ bigbnd;
  • bl(j) ≤ bu(j), for j = 1, 2, …, n+m;
  • if bl(j) = bu(j) = β, |β| < bigbnd.
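For example (purely illustrative values), with n = 2 variables and m = 1 general constraint, a variable bounded by 0 ≤ x1 ≤ 5, a free variable x2 and a general constraint held fixed at the value 3 could be encoded as shown below.
% Illustrative bounds: values at or beyond the Infinite Bound Size
% (default 1e20) are treated as 'no bound'.
bigbnd = 1.0e20;
bl = [0; -bigbnd; 3];     % lower bounds on x1, x2 and the general constraint
bu = [5;  bigbnd; 3];     % bl(3) = bu(3) makes the constraint an equality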
11:   start – string (length ≥ 1)
Indicates how a starting basis is to be obtained.
start = 'C'
An internal crash procedure will be used to choose an initial basis matrix B.
start = 'W'
A basis is already defined in istate (probably from a previous call).
Constraint: start = 'C' or 'W'.
12:   names(5) – cell array of strings
A set of names associated with the so-called MPSX form of the problem.
names(1)
Must contain the name for the problem (or be blank).
names(2)
Must contain the name for the free row (or be blank).
names(3)
Must contain the name for the constraint right-hand side (or be blank).
names(4)
Must contain the name for the ranges (or be blank).
names(5)
Must contain the name for the bounds (or be blank).
(These names are used in the monitoring file output; see Section [Description of Monitoring Information].)
13:   crname(nname) – cell array of strings
nname, the dimension of the array, must satisfy the constraint nname = 1 or n+m.
The optional column and row names.
If nname = 1, crname is not referenced and the printed output will use default names for the columns and rows.
If nname = n+m, the first n elements must contain the names for the columns and the next m elements must contain the names for the rows. Note that the name for the free row (if any) must be stored in crname(n+iobj).
14:   ns – int64int32nag_int scalar
nS, the number of superbasics. For QP problems, ns need not be specified if start = 'C', but must retain its value from a previous call when start = 'W'. For FP and LP problems, ns need not be initialized.
15:   xs(n+m) – double array
The initial values of the variables and slacks (x, s). (See the description for istate.)
16:   intvar(lintvr) – int64int32nag_int array
Specifies which components of the solution vector x are constrained to be integer. Specifically, if k elements of x are required to take integer values then intvar(i) = li, for i = 1, 2, …, k, where li is the integer index such that xli is integer. If k < lintvr then intvar(k+1) must be set to −1 to signal the end of the integer variable indices.
The order in which the indices of the integer components of x are presented determines the order in which the sub-problems are treated and solved, so it can be a powerful tool for helping the function reach a solution efficiently. The general advice is to enter the important integer variables in the model early in intvar; secondary or less important variables should be entered near the end of the list. However, some experimentation might be required to find the optimal order.
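For example, requiring x(2), x(5) and x(7) to be integer (a hypothetical selection) could be set up as follows; the −1 terminates the list and any trailing entries are ignored.
% Hypothetical intvar: x(2), x(5) and x(7) are required to be integer.
% (lintvr defaults to the dimension of intvar and need not be supplied.)
intvar = [int64(2); 5; 7; -1; 0];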
17:   istate(n+m) – int64int32nag_int array
If start = 'C', the first n elements of istate and xs must specify the initial states and values, respectively, of the variables x. (The slacks s need not be initialized.) An internal crash procedure is then used to select an initial basis matrix B. The initial basis matrix will be triangular (neglecting certain small elements in each column). It is chosen from various rows and columns of ( A  −I ). Possible values for istate(j) are as follows:
istate(j)   State of xs(j) during crash procedure
0 or 1      Eligible for the basis
2           Ignored
3           Eligible for the basis (given preference over 0 or 1)
4 or 5      Ignored
If nothing special is known about the problem, or there is no wish to provide special information, you may set istate(j) = 0 and xs(j) = 0.0, for j = 1, 2, …, n. All variables will then be eligible for the initial basis. Less trivially, to say that the jth variable will probably be equal to one of its bounds, set istate(j) = 4 and xs(j) = bl(j) or istate(j) = 5 and xs(j) = bu(j) as appropriate.
Following the crash procedure, variables for which istate(j) = 2 are made superbasic. Other variables not selected for the basis are then made nonbasic at the value xs(j) if bl(j) ≤ xs(j) ≤ bu(j), or at the value bl(j) or bu(j) closest to xs(j).
If start = 'W', istate and xs must specify the initial states and values, respectively, of the variables and slacks (x, s). If nag_mip_iqp_sparse (h02ce) has been called previously with the same values of n and m, istate already contains satisfactory information.
Constraints:
  • if start = 'C', 0 ≤ istate(j) ≤ 5, for j = 1, 2, …, n;
  • if start = 'W', 0 ≤ istate(j) ≤ 3, for j = 1, 2, …, n+m.
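A minimal cold start with no special information, as suggested above, might be initialized as follows (assuming n and m are already defined).
% Minimal cold-start initialization (start = 'C'): all variables eligible
% for the initial basis, all variables and slacks started at zero.
istate = zeros(n+m, 1, 'int64');
xs     = zeros(n+m, 1);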
18:   strtgy – int64int32nag_int scalar
Defines the branching strategy adopted by the function.
strtgy = 0
Each sub-problem first explored imposes a tighter upper bound on the component of x.
strtgy = 1
Each sub-problem first explored imposes a tighter lower bound on the component of x.
strtgy = 2
Each branch explored imposes a tighter upper bound on the component of x if its fractional part is less than 0.5, otherwise it imposes a tighter lower bound.
strtgy = 3
Random choice is made between first exploring a tighter lower bound or a tighter upper bound sub-problem.
Constraint: strtgy = 0, 1, 2 or 3.
19:   leniz – int64int32nag_int scalar
The dimension of the array iz as declared in the (sub)program from which nag_mip_iqp_sparse (h02ce) is called.
Constraint: leniz ≥ 1.
20:   lenz – int64int32nag_int scalar
The dimension of the array z as declared in the (sub)program from which nag_mip_iqp_sparse (h02ce) is called.
Constraint: lenz ≥ 1.
The amounts of workspace provided (i.e., leniz and lenz) and required (i.e., miniz and minz) are (by default) output on the current advisory message unit (as defined by nag_file_set_unit_advisory (x04ab)). Since the minimum values of leniz and lenz required to start solving the problem are returned in miniz and minz, respectively, you may prefer to obtain appropriate values from the output of a preliminary run with leniz and lenz set to 1. (nag_mip_iqp_sparse (h02ce) will then terminate with ifail = 14.)
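A hedged sketch of this two-call pattern is given below; the remaining arguments are assumed to be set up as in the Example section, and nag_issue_warnings is assumed to be used so that the deliberately under-sized first call returns with ifail set rather than raising an error.
% Preliminary run with minimal workspace to obtain miniz and minz, then a
% second run with at least the recommended amounts (bigger is better).
nag_issue_warnings(true);            % assumed: report errors as warnings
leniz = int64(1);  lenz = int64(1);  % deliberately too small
[~, ~, ~, miniz, minz, ~, ~, ifail] = ...
    nag_mip_iqp_sparse(n, m, iobj, ncolh, @qphx, a, ha, ka, bl, bu, start, ...
                       names, crname, ns, xs, intvar, istate, strtgy, ...
                       leniz, lenz, @monit);
leniz = 2*miniz;  lenz = 2*minz;     % allow headroom for the basis factors
[ns, xs, istate, miniz, minz, obj, clamda, ifail] = ...
    nag_mip_iqp_sparse(n, m, iobj, ncolh, @qphx, a, ha, ka, bl, bu, start, ...
                       names, crname, ns, xs, intvar, istate, strtgy, ...
                       leniz, lenz, @monit);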
21:   monit – function handle or string containing name of m-file
To provide feedback on the progress of the branch and bound process. Additionally, monit provides, via its parameter halt, the ability to terminate the process. (You might choose to do this when an integer solution is found, rather than search for a better solution.) If you do not require any intermediate output then monit may be the string 'h02cey'.
[bstval, halt, count] = monit(intfnd, nodes, depth, obj, x, bstval, bstsol, bl, bu, n, halt, count)

Input Parameters

1:     intfnd – int64int32nag_int scalar
Contains the number of integer solutions obtained so far.
2:     nodes – int64int32nag_int scalar
Contains the number of nodes (sub-problems) solved so far.
3:     depth – int64int32nag_int scalar
Contains the depth reached in the tree of problems.
4:     obj – double scalar
Contains the solution value to the sub-problem at this node.
5:     x(n) – double array
Contains the solution vector to the sub-problem at this node.
6:     bstval – double scalar
Contains the value of the objective function corresponding to the best integer solution obtained so far. If no integer solution has been found bstval contains the largest machine representable number (see nag_machine_real_largest (x02al)).
7:     bstsol(n) – double array
Contains the value of the best integer solution obtained so far.
8:     bl(n) – double array
Contains the current lower bounds on the variables at this point.
9:     bu(n) – double array
Contains the current upper bounds on the variables at this point.
10:   n – int64int32nag_int scalar
Contains the number of variables in the minimization problem.
11:   halt – logical scalar
Will have the value false.
12:   count – int64int32nag_int scalar
count may be used to save the last value of intfnd. If a subsequent call of monit has a value of intfnd which is greater than count, then you know that a new integer solution has been found at this node.

Output Parameters

1:     bstval – double scalar
May be set to a cut-off value, if you are a sophisticated user, as follows. Before an integer solution has been found bstval will be set by nag_mip_iqp_sparse (h02ce) to the largest machine representable number (see nag_machine_real_largest (x02al)). If you know that the solution being sought is a much smaller number, then bstval may be set to this number as a cut-off value (see Section [Description]). Beware of setting bstval too small, since then no integer solutions will be discovered. Also make sure that bstval is set using a statement of the form
if intfnd == 0, bstval = cut-off value; end
on entry to monit. This statement will not prevent the normal operation of the algorithm when subsequent integer solutions are found. It would be a grievous mistake to set bstval unconditionally, and if you have any doubts whatsoever about the correct use of this parameter then you are strongly recommended to leave it unchanged.
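A hedged sketch of such a monit routine is shown below (the routine name my_monit and the cut-off value, taken from the Example section, are illustrative only).
% Sketch of a monit routine that supplies a cut-off value only before the
% first integer solution has been found, and never halts the search.
function [bstval, halt, count] = my_monit(intfnd, nodes, depth, obj, x, ...
                                          bstval, bstsol, bl, bu, n, halt, count)
  if intfnd == 0
    bstval = -1847510;   % assumed cut-off; must not undercut the true optimum
  end
  halt = false;          % set true to stop at the current integer solution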
2:     halt – logical scalar
If halt is set to true, nag_mip_iqp_sparse (h02ce) will be brought to a halt with ifail exit −1. This facility may be useful if you are content with any integer solution, or with any integer solution that fits certain criteria. Under these circumstances setting halt = true can save considerable unnecessary computation.
3:     count – int64int32nag_int scalar

Optional Input Parameters

1:     nnz – int64int32nag_int scalar
Default: The dimension of the arrays a, ha. (An error is raised if these dimensions are not equal.)
The number of nonzero elements in A.
Constraint: 1 ≤ nnz ≤ n × m.
2:     nname – int64int32nag_int scalar
Default: The dimension of the array crname.
The number of column (i.e., variable) and row names supplied in the array crname.
nname = 1
There are no names. Default names will be used in the printed output.
nname = n+m
All names must be supplied.
Constraint: nname = 1 or n+m.
3:     lintvr – int64int32nag_int scalar
Default: The dimension of the array intvar.
k, the number of components of x required to be integer. If k = 0, then lintvr must be set to 1 and intvar(1) set to −1.
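For example, if no components of x are required to be integer (k = 0), the settings described above reduce to the following.
% Hypothetical setting for a problem with no integer variables (k = 0).
lintvr = int64(1);
intvar = int64(-1);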
4:     mdepth – int64int32nag_int scalar
Specifies the maximum depth to which the tree of sub-problems may be developed.
Default: 2 × n + 20
Constraint: mdepth > 0.

Input Parameters Omitted from the MATLAB Interface

iz z

Output Parameters

1:     ns – int64int32nag_int scalar
The final number of superbasics. This will be zero for FP and LP problems.
2:     xs(n+m) – double array
xs(i) contains the final value of xi, for i = 1, 2, …, n.
3:     istate(n+m) – int64int32nag_int array
The final states of the variables and slacks (x, s) from the solution of the last sub-problem tackled. The significance of each possible value of istate(j) is as follows:
istate(j)   State of variable j   Normal value of xs(j)
0           Nonbasic              bl(j)
1           Nonbasic              bu(j)
2           Superbasic            Between bl(j) and bu(j)
3           Basic                 Between bl(j) and bu(j)
If Ninf = 0 (see Section [Description of the Printed Output]), basic and superbasic variables may be outside their bounds by as much as the value of the optional parameter Feasibility Tolerance (default value = max(10^−6, √ε), where ε is the machine precision). Note that unless the optional parameter Scale Option = 0 (default value = 2) is specified, the Feasibility Tolerance applies to the variables of the scaled problem. In this case, the variables of the original problem may be as much as 0.1 outside their bounds, but this is unlikely unless the problem is very badly scaled.
Very occasionally some nonbasic variables may be outside their bounds by as much as the Feasibility Tolerance, and there may be some nonbasic variables for which xs(j) lies strictly between its bounds.
If Ninf > 0, some basic and superbasic variables may be outside their bounds by an arbitrary amount (bounded by Sinf (see Section [Description of the Printed Output]) if Scale Option = 0).
4:     miniz – int64int32nag_int scalar
The minimum value of leniz required to start solving the problem. If ifail = 14, nag_mip_iqp_sparse (h02ce) may be called again with leniz suitably larger than miniz. (The bigger the better, since it is not certain how much workspace the basis factors need.)
5:     minz – int64int32nag_int scalar
The minimum value of lenz required to start solving the problem. If ifail = 15, nag_mip_iqp_sparse (h02ce) may be called again with lenz suitably larger than minz. (The bigger the better, since it is not certain how much workspace the basis factors need.)
6:     obj – double scalar
The value of the objective function.
If Ninf = 0, obj includes the quadratic objective term (1/2) x^T H x (if any).
If Ninf > 0, obj is just the linear objective term c^T x (if any). For FP problems, obj is set to zero.
7:     clamda(n+m) – double array
A set of Lagrange-multipliers for the bounds on the variables and the general constraints. More precisely, the first n elements contain the multipliers (reduced costs) for the bounds on the variables, and the next m elements contain the multipliers (shadow prices) for the general linear constraints.
8:     ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

Error Indicators and Warnings

Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

W ifail = −1
Halted at your request.
  ifail = 0
Successful exit.
  ifail = 1
Input parameter error immediately detected.
  ifail = 2
No integer solution found.
  ifail = 3
mdepth is too small.
  ifail = 4
The problem is unbounded (or badly scaled). The objective function is not bounded below in the feasible region.
  ifail = 5
The problem is infeasible. The general constraints cannot all be satisfied simultaneously to within the value of the optional parameter Feasibility Tolerance (default value = max(10^−6, √ε), where ε is the machine precision).
  ifail = 6
Too many iterations. The value of the optional parameter Iteration Limit (default value = max(50, 5(n+m))) is too small.
  ifail = 7
The reduced Hessian matrix Z^T H Z (see Section [Definition of the Working Set and Search Direction]) exceeds its assigned dimension. The value of the optional parameter Superbasics Limit (default value = min(nH + 1, n)) is too small.
  ifail = 8
The Hessian matrix H appears to be indefinite. Check that qphx has been coded correctly and that all relevant elements of Hx have been assigned their correct values.
  ifail = 9
An input parameter is invalid for an internal call to nag_opt_qpconvex1_sparse_solve (e04nk).
  ifail = 10
Numerical error in trying to satisfy the general constraints. The basis is very ill-conditioned.
  ifail = 11
Not enough integer workspace for the basis factors. Increase leniz and rerun nag_mip_iqp_sparse (h02ce).
  ifail = 12
Not enough real workspace for the basis factors. Increase lenz and rerun nag_mip_iqp_sparse (h02ce).
  ifail = 13
The basis is singular after 15 attempts to factorize it (adding slacks where necessary). Either the problem is badly scaled or the value of the optional parameter LU Factor Tolerance (default value = 100.0) is too large.
  ifail = 14
Not enough integer workspace to start solving the problem. Increase leniz to at least miniz and rerun nag_mip_iqp_sparse (h02ce).
  ifail = 15
Not enough real workspace to start solving the problem. Increase lenz to at least minz and rerun nag_mip_iqp_sparse (h02ce).
  ifail = 16
An internal error has occurred. Contact NAG with details of your program.

Accuracy

nag_mip_iqp_sparse (h02ce) implements a numerically stable active-set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.

Further Comments

This section contains a description of the printed output.

Description of the Printed Output

This section describes the (default) intermediate printout and final printout produced by nag_mip_iqp_sparse (h02ce). The intermediate printout is a subset of the monitoring information produced by the function at every iteration (see Section [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Print Level in Section [Description of the Optional Parameters]). Note that the intermediate printout and final printout are produced only if Print Level ≥ 10 (the default).
The following line of summary output (< 80 characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Itn is the iteration count.
Step is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.
Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective is the value of the current objective function. If x is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Norm rg is ‖dS‖, the Euclidean norm of the reduced gradient (see Section [The Main Iteration]). During the optimality phase, this norm will be approximately zero after a unit step. For FP and LP problems, Norm rg is not printed.
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Variable gives the name of the variable. If nname = 1, a default name is assigned to the jth variable, for j = 1, 2, …, n. If nname = n+m, the name supplied in crname(j) is assigned to the jth variable.
State gives the state of the variable (LL if nonbasic on its lower bound, UL if nonbasic on its upper bound, EQ if nonbasic and fixed, FR if nonbasic and strictly between its bounds, BS if basic and SBS if superbasic).
A key is sometimes printed before State to give some additional information about the state of a variable. Note that unless the optional parameter Scale Option = 0 (default value = 2) is specified, the tests for assigning a key are applied to the variables of the scaled problem.
A Alternative optimum possible. The variable is nonbasic, but its reduced gradient is essentially zero. This means that if the variable were allowed to start moving away from its bound, there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange-multipliers might also change.
D Degenerate. The variable is basic or superbasic, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the optional parameter Feasibility Tolerance (default value = max(10^−6, √ε), where ε is the machine precision).
N Not precisely optimal. The variable is nonbasic or superbasic. If the value of the reduced gradient for the variable exceeds the value of the optional parameter Optimality Tolerance (default value = max(10^−6, √ε)), the solution would not be declared optimal because the reduced gradient for the variable would not be considered negligible.
Value is the value of the variable at the final iterate.
Lower Bound is the lower bound specified for the variable. None indicates that bl(j) ≤ −bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu(j) ≥ bigbnd.
Lagr Mult is the Lagrange-multiplier for the associated bound. This will be zero if State is FR. If x is optimal, the multiplier should be non-negative if State is LL, non-positive if State is UL, and zero if State is BS or SBS.
Residual is the difference between the variable Value and the nearer of its (finite) bounds bl(j) and bu(j). A blank entry indicates that the associated variable is not bounded (i.e., bl(j) ≤ −bigbnd and bu(j) ≥ bigbnd).
The meaning of the printout for linear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, n replaced by m, crname(j) replaced by crname(n+j), bl(j) and bu(j) replaced by bl(n+j) and bu(n+j) respectively, and with the following change in the heading.
Constrnt gives the name of the linear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Residual column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.

Example

function nag_mip_iqp_sparse_example
n = int64(7);
m = int64(8);
iobj = int64(8);
ncolh = int64(7);
a = [0.02; 0.02; 0.03; 1; 0.7; 0.02; 0.15; -200; 0.06; 0.75; 0.03; ...
     0.04; 0.05; 0.04; 1; -2000; 0.02; 1; 0.01; 0.08; 0.08; 0.8; -2000; ...
      1; 0.12; 0.02; 0.02; 0.75; 0.04; -2000; 0.01; 0.8; 0.02; 1; 0.02; ...
      0.06; 0.02; -2000; 1; 0.01; 0.01; 0.97; 0.01; 400; 0.97; 0.03; 1; 400];
ha = [int64(7);5;3;1;6;4;2;8;7;6;5;4;3;2;1;8;2;1;4;3;7;6;8;1;7;3;4; ...
      6;2;8;5;6;7;1;2;3;4;8;1;2;3;6;7;8;7;2;1;8];
ka = [int64(1);9;17;24;31;39;45;49];
bl = [0; 0; 400; 100; 0; 0; 0; 2000; -1e25; -1e25; ...
      -1e25; -1e25; 1500; 250; -1e25];
bu = [200; 2500; 800; 700; 1500; 1e25; 1e25; ...
      2000; 60; 100; 40; 30; 1e25; 300; 1e25];
start = 'C';
names = {'        '; '        '; '        '; '        '; '        '};
crname = {'...X1...'; '...X2...'; '...X3...'; '...X4...'; '...X5...'; ...
          '...X6...'; '...X7...'; '..ROW1..'; '..ROW2..'; '..ROW3..'; ...
           '..ROW4..'; '..ROW5..'; '..ROW6..'; '..ROW7..'; '..COST..'};
ns = int64(24641422);
xs = [0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0; 0];
intvar = [int64(2);3;4;5;6;7;-1;0;0;0];
istate = zeros(15, 1, 'int64');
strtgy = int64(3);
leniz = int64(100000);
lenz = int64(100000);
nag_mip_iqp_sparse_optstr('Nolist');
nag_mip_iqp_sparse_optstr('Print level = 0');
[nsOut, xsOut, istateOut, miniz, minz, obj, clamda, ifail] = ...
     nag_mip_iqp_sparse(n, m, iobj, ncolh, @qphx, a, ha, ka, bl, bu, start, names, ...
     crname, ns, xs, intvar, istate, strtgy, leniz, lenz, @monit)

function [hx] = qphx(nstate, ncolh, x)
  hx = zeros(ncolh,1);
  hx(1) = 2*x(1);
  hx(2) = 2*x(2);
  hx(3) = 2*(x(3)+x(4));
  hx(4) = hx(3);
  hx(5) = 2*x(5);
  hx(6) = 2*(x(6)+x(7));
  hx(7) = hx(6);
function [bstval, halt, count] = monit(intfnd,nodes,depth,obj,x,bstval, ...
                                             bstsol,bl,bu,n,halt,count)
  halt = false;
  if intfnd == 0
    bstval = -1847510;
  end
 

nsOut =

                    0


xsOut =

   1.0e+06 *

         0
    0.0004
    0.0006
    0.0002
    0.0004
    0.0003
    0.0002
    0.0020
    0.0000
    0.0001
    0.0000
    0.0000
    0.0015
    0.0003
   -2.9800


istateOut =

                    0
                    0
                    3
                    0
                    0
                    3
                    3
                    0
                    3
                    3
                    3
                    3
                    0
                    0
                    3


miniz =

                  497


minz =

                  502


obj =

  -1.8475e+06


clamda =

   1.0e+04 *

    0.2812
    0.0225
   -0.0000
    0.0157
    0.0204
   -0.0000
   -0.0000
   -1.4823
         0
         0
         0
         0
    1.6400
    1.6571
   -0.0001


ifail =

                    0


Note: the remainder of this document is intended for more advanced users. Section [Algorithmic Details] contains a detailed description of the algorithm which may be needed in order to understand Sections [Optional Parameters] and [Description of Monitoring Information]. Section [Optional Parameters] describes the optional parameters which may be set by calls to nag_mip_iqp_sparse_optstr (h02cg). Section [Description of Monitoring Information] describes the quantities which can be requested to monitor the course of the computation.

Algorithmic Details

This section contains a detailed description of the method used by nag_mip_iqp_sparse (h02ce).

Overview

nag_mip_iqp_sparse (h02ce) employs a Branch and Bound technique (see Section [Description]); each sub-problem is solved by an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is similar to that of Gill and Murray (1978), and is described in detail by Gill et al. (1991). Here we briefly summarise the main features of the method. Where possible, explicit reference is made to the names of variables that are parameters of the function or appear in the printed output.
The method used has two distinct phases: finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase), and minimizing the quadratic objective function within the feasible region (the optimality phase). The computations in both phases are performed by the same functions. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities (the printed quantity Sinf; see Section [Description of Monitoring Information]) to the quadratic objective function (the printed quantity Objective; see Section [Description of Monitoring Information]).
In general, an iterative process is required to solve a quadratic program. Given an iterate (x, s) in both the original variables x and the slack variables s, a new iterate (x̄, s̄) is defined by
   [ x̄ ]   [ x ]
   [ s̄ ] = [ s ] + αp,          (2)
where the step length α is a non-negative scalar (the printed quantity Step; see Section [Description of Monitoring Information]), and p is called the search direction. (For simplicity, we shall consider a typical iteration and avoid reference to the index of the iteration.) Once an iterate is feasible (i.e., satisfies the constraints), all subsequent iterates remain feasible.

Definition of the Working Set and Search Direction

At each iterate (x, s), a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the value of the optional parameter Feasibility Tolerance; see Section [Description of the Optional Parameters]). The working set is the current prediction of the constraints that hold with equality at a solution of the LP or QP problem. Let mW denote the number of constraints in the working set (including bounds), and let W denote the associated mW by (n+m) working set matrix consisting of the mW gradients of the working set constraints.
The search direction is defined so that constraints in the working set remain unaltered for any value of the step length. It follows that p must satisfy the identity
   Wp = 0.          (3)
This characterisation allows p to be computed using any n by nZ full-rank matrix Z that spans the null space of W. (Thus, nZ = n − mW and WZ = 0.) The null space matrix Z is defined from a sparse LU factorization of part of W (see (6) and (7) below). The direction p will satisfy (3) if
   p = Z pZ,          (4)
where pZ is any nZ-vector.
The working set contains the constraints Ax − s = 0 and a subset of the upper and lower bounds on the variables (x, s). Since the gradient of a bound constraint xj ≥ lj or xj ≤ uj is a vector of all zeros except for ±1 in position j, it follows that the working set matrix contains the rows of ( A  −I ) and the unit rows associated with the upper and lower bounds in the working set.
The working set matrix W can be represented in terms of a certain column partition of the matrix ( A  −I ). As in Section [Description] we partition the constraints Ax − s = 0 so that
   B xB + S xS + N xN = 0,          (5)
where B is a square nonsingular basis and xB, xS and xN are the basic, superbasic and nonbasic variables respectively. The nonbasic variables are equal to their upper or lower bounds at (x, s), and the superbasic variables are independent variables that are chosen to improve the value of the current objective function. The number of superbasic variables is nS (the printed quantity Ns; see Section [Description of Monitoring Information]). Given values of xN and xS, the basic variables xB are adjusted so that (x, s) satisfies (5).
If P is a permutation matrix such that ( A  −I ) P = ( B  S  N ), then the working set matrix W satisfies
   WP = [ B  S  N  ]
        [ 0  0  IN ] ,          (6)
where IN is the identity matrix with the same number of columns as N.
The null space matrix Z is defined from a sparse LU factorization of part of W. In particular, Z is maintained in ‘reduced gradient’ form, using the LUSOL package (see Gill et al. (1986)) to maintain sparse LU factors of the basis matrix B that alters as the working set W changes. Given the permutation P, the null space basis is given by
   Z = P [ −B⁻¹S ]
         [   I   ]
         [   0   ] .          (7)
This matrix is used only as an operator, i.e., it is never computed explicitly. Products of the form Zv and Z^T g are obtained by solving with B or B^T. This choice of Z implies that nZ, the number of ‘degrees of freedom’ at (x, s), is the same as nS, the number of superbasic variables.
Let gZ and HZ denote the reduced gradient and reduced Hessian of the objective function:
   gZ = Z^T g   and   HZ = Z^T H Z,          (8)
where g is the objective gradient at (x, s). Roughly speaking, gZ and HZ describe the first and second derivatives of an nS-dimensional unconstrained problem for the calculation of pZ. (The condition estimator of HZ is the quantity Cond Hz in the monitoring file output; see Section [Description of Monitoring Information].)
At each iteration, an upper triangular factor R is available such that HZ = R^T R. Normally, R is computed from R^T R = Z^T H Z at the start of the optimality phase and then updated as the QP working set changes. For efficiency, the dimension of R should not be excessive (say, nS ≤ 1000). This is guaranteed if the number of nonlinear variables is ‘moderate’.
If the QP problem contains linear variables, H is positive semidefinite and R may be singular with at least one zero diagonal element. However, an inertia-controlling strategy is used to ensure that only the last diagonal element of R can be zero. (See Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.)
If the initial R is singular, enough variables are fixed at their current value to give a nonsingular R. This is equivalent to including temporary bound constraints in the working set. Thereafter, R can become singular only when a constraint is deleted from the working set (in which case no further constraints are deleted until R becomes nonsingular).

The Main Iteration

If the reduced gradient is zero, (x, s) is a constrained stationary point on the working set. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero elsewhere in the presence of constraint dependencies). During the optimality phase, a zero reduced gradient implies that x minimizes the quadratic objective function when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange-multipliers λ are defined from the equations
   W^T λ = g(x).          (9)
A Lagrange-multiplier λj corresponding to an inequality constraint in the working set is said to be optimal if λj ≤ σ when the associated constraint is at its upper bound, or if λj ≥ −σ when the associated constraint is at its lower bound, where σ depends on the value of the optional parameter Optimality Tolerance (see Section [Description of the Optional Parameters]). If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by continuing the minimization with the corresponding constraint excluded from the working set. (This step is sometimes referred to as ‘deleting’ a constraint from the working set.) If optimal multipliers occur during the feasibility phase but the sum of infeasibilities is nonzero, there is no feasible point and the function terminates immediately with ifail = 3 (see Section [Error Indicators and Warnings]).
The special form (6) of the working set allows the multiplier vector λ, the solution of (9), to be written in terms of the vector
   d = [ g ] − [ A  −I ]^T π  =  [ g − A^T π ]
       [ 0 ]                     [     π     ] ,          (10)
where π satisfies the equations B^T π = gB, and gB denotes the basic elements of g. The elements of π are the Lagrange-multipliers λj associated with the equality constraints Ax − s = 0. The vector dN of nonbasic elements of d consists of the Lagrange-multipliers λj associated with the upper and lower bound constraints in the working set. The vector dS of superbasic elements of d is the reduced gradient gZ in (8). The vector dB of basic elements of d is zero, by construction. (The Euclidean norm of dS and the final values of dS, g and π are the quantities Norm rg, Reduced Gradnt, Obj Gradient and Dual Activity in the monitoring file output; see Section [Description of Monitoring Information].)
If the reduced gradient is not zero, Lagrange-multipliers need not be computed and the search direction is given by p = Z pZ (see (7) and (11)). The step length is chosen to maintain feasibility with respect to the satisfied constraints.
There are two possible choices for pZ, depending on whether or not HZ is singular. If HZ is nonsingular, R is nonsingular and pZ in (4) is computed from the equations
   R^T R pZ = −gZ,          (11)
where gZ is the reduced gradient at x. In this case, (x, s) + p is the minimizer of the objective function subject to the working set constraints being treated as equalities. If (x, s) + p is feasible, α is defined to be unity. In this case, the reduced gradient at (x̄, s̄) will be zero, and Lagrange-multipliers are computed at the next iteration. Otherwise, α is set to αM, the step to the ‘nearest’ constraint along p. This constraint is added to the working set at the next iteration.
If HZ is singular, then R must also be singular, and an inertia-controlling strategy is used to ensure that only the last diagonal element of R is zero. (See Gill et al. (1991) for a discussion of a similar strategy for indefinite quadratic programming.) In this case, pZ satisfies
   pZ^T HZ pZ = 0   and   gZ^T pZ ≤ 0,          (12)
which allows the objective function to be reduced by any step of the form (x, s) + αp, where α > 0. The vector p = Z pZ is a direction of unbounded descent for the QP problem in the sense that the QP objective is linear and decreases without bound along p. If no finite step of the form (x, s) + αp (where α > 0) reaches a constraint not in the working set, the QP problem is unbounded and the function terminates immediately with ifail = 2 (see Section [Error Indicators and Warnings]). Otherwise, α is defined as the maximum feasible step along p and a constraint active at (x, s) + αp is added to the working set for the next iteration.

Miscellaneous

If the basis matrix is not chosen carefully, the condition of the null space matrix Z in (7) could be arbitrarily high. To guard against this, the function implements a ‘basis repair’ feature in which the LUSOL package (see Gill et al. (1986)) is used to compute the rectangular factorization
   [ B  S ]^T = LU,          (13)
returning just the permutation P that makes PLP^T unit lower triangular. The pivot tolerance is set to require |PLP^T|ij ≤ 2, and the permutation is used to define P in (6). It can be shown that ‖Z‖ is likely to be little more than unity. Hence, Z should be well-conditioned regardless of the condition of W. This feature is applied at the beginning of the optimality phase if a potential B–S ordering is known.
The EXPAND procedure (see Gill et al. (1989)) is used to reduce the possibility of cycling at a point where the active constraints are nearly linearly dependent. Although there is no absolute guarantee that cycling will not occur, the probability of cycling is extremely small (see Gill et al. (1986)). The main feature of EXPAND is that the feasibility tolerance is increased at the start of every iteration. This allows a positive step to be taken at every iteration, perhaps at the expense of violating the bounds on (x, s) by a small amount.
Suppose that the value of the optional parameter Feasibility Tolerance is δ. Over a period of K iterations (where K is the value of the optional parameter Expand Frequency; see Section [Description of the Optional Parameters]), the feasibility tolerance actually used by nag_mip_iqp_sparse (h02ce) (i.e., the working feasibility tolerance) increases from 0.5δ to δ (in steps of 0.5δ/K).
At certain stages the following ‘resetting procedure’ is used to remove small constraint infeasibilities. First, all nonbasic variables are moved exactly onto their bounds. A count is kept of the number of nontrivial adjustments made. If the count is nonzero, the basic variables are recomputed. Finally, the working feasibility tolerance is reinitialized to 0.5δ.
If a problem requires more than K iterations, the resetting procedure is invoked and a new cycle of iterations is started. (The decision to resume the feasibility phase or optimality phase is based on comparing any constraint infeasibilities with δ.)
The resetting procedure is also invoked when nag_mip_iqp_sparse (h02ce) reaches an apparently optimal, infeasible or unbounded solution, unless this situation has already occurred twice. If any nontrivial adjustments are made, iterations are continued.
The EXPAND procedure not only allows a positive step to be taken at every iteration, but also provides a potential choice of constraints to be added to the working set. All constraints at a distance α (where α ≤ αM) along p from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set. This strategy helps keep the basis matrix B well-conditioned.

Optional Parameters

Several optional parameters in nag_mip_iqp_sparse (h02ce) define choices in the problem specification or the algorithm logic. In order to reduce the number of formal parameters of nag_mip_iqp_sparse (h02ce) these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section [Description of the Optional Parameters].
Optional parameters may be specified by calling nag_mip_iqp_sparse_optstr (h02cg) prior to a call to nag_mip_iqp_sparse (h02ce).
nag_mip_iqp_sparse_optstr (h02cg) can be called to supply options directly, one call being necessary for each optional parameter. For example,
h02cg('Print Level = 5')
nag_mip_iqp_sparse_optstr (h02cg) should be consulted for a full description of this method of supplying optional parameters.
All optional parameters not specified by you are set to their default values. Optional parameters specified by you are unaltered by nag_mip_iqp_sparse (h02ce) (unless they define invalid values) and so remain in effect for subsequent calls unless altered by you.

Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
Keywords and character values are case and white space insensitive.
Check Frequency  i
Default = 60
Every ith iteration after the most recent basis factorization, a numerical test is made to see if the current solution (x, s) satisfies the linear constraints Ax − s = 0. If the largest element of the residual vector r = Ax − s is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the constraints more accurately. If i < 0, the default value is used. If i = 0, the value i = 99999999 is used and effectively no checks are made.
Crash Option  i
Default = 2
Note that this option does not apply when start = 'W' (see Section [Parameters]).
If start = 'C', an internal crash procedure is used to select an initial basis from various rows and columns of the constraint matrix ( A  −I ). The value of i determines which rows and columns are initially eligible for the basis, and how many times the crash procedure is called. If i = 0, the all-slack basis B = −I is chosen. If i = 1, the crash procedure is called once (looking for a triangular basis in all rows and columns of the linear constraint matrix A). If i = 2, the crash procedure is called twice (looking at any equality constraints first followed by any inequality constraints). If i < 0 or i > 2, the default value is used.
If i = 1 or 2, certain slacks on inequality rows are selected for the basis first. (If i = 2, numerical values are used to exclude slacks that are close to a bound.) The crash procedure then makes several passes through the columns of A, searching for a basis matrix that is essentially triangular. A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis.
Crash Tolerance  rr
Default = 0.1=0.1
This value allows the crash procedure to ignore certain ‘small’ nonzero elements in the constraint matrix AA while searching for a triangular basis. For each column of AA, if amaxamax is the largest element in the column, other nonzeros in that column are ignored if they are less than (or equal to) amax × ramax×r.
When r > 0, the basis obtained by the crash procedure may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis with more column variables and fewer (arbitrary) slacks. A feasible solution may be reached earlier for some problems. If r < 0 or r ≥ 1, the default value is used.
Defaults  
This special keyword may be used to reset all optional parameters to their default values.
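For example, a minimal sketch of restoring the defaults after earlier option changes:
% Reset every optional parameter to its default value.
h02cg('Defaults')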
Expand Frequency  ii
Default = 10000=10000
This option is part of an anti-cycling procedure (see Section [Miscellaneous]) designed to allow progress even on highly degenerate problems.
For LP problems, the strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of the optional parameter Feasibility Tolerance is δδ. Over a period of ii iterations, the feasibility tolerance actually used by nag_mip_iqp_sparse (h02ce) (i.e., the working feasibility tolerance) increases from 0.5δ0.5δ to δδ (in steps of 0.5δ / i0.5δ/i).
For QP problems, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can only occur when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced.
Increasing the value of ii helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during the resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see the description of the optional parameter Pivot Tolerance).
If i < 0i<0, the default value is used. If i = 0i=0, the value i = 99999999i=99999999 is used and effectively no anti-cycling procedure is invoked.
Factorization Frequency  ii
Default = 100=100
If i > 0, at most i basis changes will occur between factorizations of the basis matrix. For LP problems, the basis factors are usually updated at every iteration. For QP problems, fewer basis updates will occur as the solution is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly according to the value of optional parameter Check Frequency to ensure that the linear constraints Ax − s = 0 are satisfied. If necessary, the basis will be refactorized before the limit of i updates is reached. If i ≤ 0, the default value is used.
Feasibility Tolerance  rr
Default = max(10^-6, sqrt(ε))
If r ≥ ε, r defines the maximum acceptable absolute violation in each constraint at a ‘feasible’ point (including slack variables). For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about five decimal digits, it would be appropriate to specify r as 10^-5. If r < ε, the default value is used.
nag_mip_iqp_sparse (h02ce) attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is assumed to be infeasible. Let Sinf be the corresponding sum of infeasibilities. If Sinf is quite small, it may be appropriate to raise rr by a factor of 1010 or 100100. Otherwise, some error in the data should be suspected. Note that the function does not attempt to find the minimum value of Sinf.
If the constraints and variables have been scaled (see the description of the optional parameter Scale Option), then feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).
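As a minimal sketch of the situation described above, assuming data correct to about five decimal digits (the value is illustrative):
% Illustrative: match the feasibility tolerance to data accurate to
% roughly five decimal digits.
h02cg('Feasibility Tolerance = 1.0e-5')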
Infinite Bound Size  rr
Default = 10^20
If r > 0, r defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to −bigbnd will be regarded as −∞). If r ≤ 0, the default value is used.
Infinite Step Size  rr
Default = max(bigbnd, 10^20)
If r > 0, r specifies the magnitude of the change in variables that will be considered a step to an unbounded solution. (Note that an unbounded solution can occur only when the Hessian is not positive definite.) If the change in x during an iteration would exceed the value of r, the objective function is considered to be unbounded below in the feasible region. If r ≤ 0, the default value is used.
Iteration Limit  ii
Default = max (50,5(n + m))=max(50,5(n+m))
Iters  
Itns  
The value of ii specifies the maximum number of iterations allowed before termination. Setting i = 0i=0 and Print Level > 0Print Level>0 means that the workspace needed to start solving the problem will be computed and printed, but no iterations will be performed. If i < 0i<0, the default value is used.
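For example, a hedged sketch of a workspace-only run using the behaviour described above (the Print Level value is illustrative; any positive value would do):
% Illustrative: compute and print the workspace required, but perform
% no iterations.
h02cg('Iteration Limit = 0')
h02cg('Print Level = 1')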
List  
Default
Nolist  
Normally each optional parameter specification is printed as it is supplied. Optional parameter Nolist may be used to suppress the printing and optional parameter List may be used to restore printing.
LU Factor Tolerance  r1r1
Default = 100.0=100.0
LU Update Tolerance  r2r2
Default = 10.0=10.0
The values of r1 and r2 affect the stability and sparsity of the basis factorization B = LU, during refactorization and updates respectively. The lower triangular matrix L is a product of matrices of the form
( 1   0 )
( μ   1 )
where the multipliers μ satisfy |μ| ≤ ri. The default values of r1 and r2 usually strike a good compromise between stability and sparsity. For large and relatively dense problems, setting r1 and r2 to 25 (say) may give a marked improvement in sparsity without impairing stability to a serious degree. Note that for band matrices it may be necessary to set r1 in the range 1 ≤ r1 < 2 in order to achieve stability. If r1 < 1 or r2 < 1, the default value is used.
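For a large, relatively dense problem the tolerances might be relaxed as suggested above; a minimal sketch (the value 25 is the illustrative figure quoted above, not a recommendation for every problem):
% Illustrative: accept slightly less stable factors in exchange for
% sparser factors on a large, relatively dense problem.
h02cg('LU Factor Tolerance = 25.0')
h02cg('LU Update Tolerance = 25.0')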
LU Singularity Tolerance  rr
Default = ε^0.67
If r > 0, r defines the singularity tolerance used to guard against ill-conditioned basis matrices. Whenever the basis is refactorized, the diagonal elements of U are tested as follows. If |ujj| ≤ r or |ujj| < r × maxi |uij|, the jth column of the basis is replaced by the corresponding slack variable. If r ≤ 0, the default value is used.
Minimize  
Default
Maximize  
This option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes f(x)f(x) and the other maximizes f(x)-f(x), their solutions will be the same but the signs of the dual variables πiπi and the reduced gradients djdj (see Section [The Main Iteration]) will be reversed.
Monitoring File  ii
Default = −1
If i ≥ 0 and Print Level > 0, monitoring information produced by nag_mip_iqp_sparse (h02ce) is sent to a file with logical unit number i. If i < 0 and/or Print Level = 0, the default value is used and hence no monitoring information is produced.
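For example, a hedged sketch directing monitoring output to logical unit 9 (the unit number is illustrative and must be valid; where necessary it should first be associated with a file, for instance via the Chapter X04 file-handling utilities):
% Illustrative: send full monitoring information to logical unit 9.
h02cg('Monitoring File = 9')
h02cg('Print Level = 10')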
Optimality Tolerance  rr
Default = max(10^-6, sqrt(ε))
If r ≥ ε, r is used to judge the size of the reduced gradients dj = gj − πᵀaj. By definition, the reduced gradients for basic variables are always zero. Optimality is declared if the reduced gradients for any nonbasic variables at their lower or upper bounds satisfy −r × max(1, ‖π‖) ≤ dj ≤ r × max(1, ‖π‖), and if |dj| ≤ r × max(1, ‖π‖) for any superbasic variables. If r < ε, the default value is used.
Partial Price  ii
Default = 10=10
Note that this option does not apply to QP problems.
This option is recommended for large FP or LP problems that have significantly more variables than constraints (i.e., n ≫ m). It reduces the work required for each pricing operation (i.e., when a nonbasic variable is selected to enter the basis). If i = 1, all columns of the constraint matrix (A −I) are searched. If i > 1, A and −I are partitioned to give i roughly equal segments Aj, Kj, for j = 1, 2, …, p (modulo p). If the previous pricing search was successful on Aj−1, Kj−1, the next search begins on the segments Aj, Kj. If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis. If nothing is found, the search continues on the next segments Aj+1, Kj+1, and so on. If i ≤ 0, the default value is used.
Pivot Tolerance  rr
Default = ε^0.67
If r > 0, r is used to prevent columns entering the basis if they would cause the basis to become almost singular. If r ≤ 0, the default value is used.
Print Level  ii
Default = 10=10
The value of ii controls the amount of printout produced by nag_mip_iqp_sparse (h02ce), as indicated below. A detailed description of the printed output is given in Section [Description of the Printed Output] (summary output at each iteration and the final solution) and Section [Description of Monitoring Information] (monitoring information at each iteration). Note that the summary output will not exceed 8080 characters per line and that the monitoring information will not exceed 120120 characters per line. If i < 0i<0, the default value is used. The following printout is sent to the current advisory message unit (as defined by nag_file_set_unit_advisory (x04ab)):
i Output
0 No output.
1 The final solution only.
5 One line of summary output for each iteration (no printout of the final solution).
10 The final solution and one line of summary output for each iteration.
The following printout is sent to the logical unit number defined by the Monitoring File:
i Output
0 No output.
1 The final solution only.
5 One long line of output for each iteration (no printout of the final solution).
10 The final solution and one long line of output for each iteration.
20 The final solution, one long line of output for each iteration, matrix statistics (initial status of rows and columns, number of elements, density, biggest and smallest elements, etc.), details of the scale factors resulting from the scaling procedure (if Scale Option = 1 or 2), basis factorization statistics and details of the initial basis resulting from the crash procedure (if start = 'C'; see Section [Parameters]).
If Print Level > 0Print Level>0 and the unit number defined by Monitoring File is the same as that defined by nag_file_set_unit_advisory (x04ab), then the summary output is suppressed.
Rank Tolerance  rr
Default = 100ε=100ε
Scale Option  ii
Default = 2=2
This option enables you to scale the variables and constraints using an iterative procedure due to Fourer (see Hock and Schittkowski (1981)), which attempts to compute row scales ri and column scales cj such that the elements aij × (cj/ri) of the scaled matrix are as close as possible to unity. This may improve the overall efficiency of the function on some problems. (The lower and upper bounds on the variables and slacks for the scaled problem are redefined as lj/cj and uj/cj respectively, where cj ≡ rj−n if j > n.)
If i = 0i=0, no scaling is performed. If i = 1i=1, all rows and columns of the constraint matrix AA are scaled. If i = 2i=2, an additional scaling is performed that may be helpful when the solution xx is large; it takes into account columns of (AI)(A-I) that are fixed or have positive lower bounds or negative upper bounds. If i < 0i<0 or i > 2i>2, the default value is used.
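For example, a minimal sketch of switching the scaling off entirely (whether this helps is problem dependent):
% Illustrative: disable the iterative scaling procedure.
h02cg('Scale Option = 0')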
Scale Tolerance  rr
Default = 0.9=0.9
Note that this option does not apply when Scale Option = 0Scale Option=0.
If 0 < r < 1, r is used to control the number of scaling passes to be made through the constraint matrix A. At least 3 (and at most 10) passes will be made. More precisely, let ap denote the largest column ratio (i.e., ‘biggest’ element divided by ‘smallest’ element, in some sense) after the pth scaling pass through A. The scaling procedure is terminated if ap ≥ ap−1 × r for some p ≥ 3. Thus, increasing the value of r from 0.9 to 0.99 (say) will probably increase the number of passes through A. If r ≤ 0 or r ≥ 1, the default value is used.
Superbasics Limit  ii
Default = min (nH + 1,n)=min(nH+1,n)
Note that this option does not apply to FP or LP problems.
The value of i specifies ‘how nonlinear’ you expect the QP problem to be. If i ≤ 0, the default value is used.

Description of Monitoring Information

This section describes the intermediate printout and final printout which constitute the monitoring information produced by nag_mip_iqp_sparse (h02ce). (See also the description of the optional parameters Monitoring File and Print Level in Section [Description of the Optional Parameters].) You can control the level of printed output.
When Print Level = 5 or 10 and Monitoring File ≥ 0, the following line of intermediate printout (< 120 characters) is produced at every iteration on the unit number specified by Monitoring File. Unless stated otherwise, the values of the quantities printed are those in effect on completion of the given iteration.
Itn is the iteration count.
pp is the partial price indicator. The variable selected by the last pricing operation came from the ppth partition of AA and I-I. Note that pp is reset to zero whenever the basis is refactorized.
dj is the value of the reduced gradient (or reduced cost) for the variable selected by the pricing operation at the start of the current iteration.
+S is the variable selected by the pricing operation to be added to the superbasic set.
-S is the variable chosen to leave the superbasic set.
-B is the variable removed from the basis (if any) to become nonbasic.
-B is the variable chosen to leave the set of basics (if any) in a special basic ↔ superbasic swap. The entry under -S has become basic if this entry is nonzero, and nonbasic otherwise. The swap is done to ensure that there are no superbasic slacks.
Step is the value of the step length αα taken along the computed search direction pp. The variables xx have been changed to x + αpx+αp. If a variable is made superbasic during the current iteration (i.e., +S is positive), Step will be the step to the nearest bound. During the optimality phase, the step can be greater than unity only if the reduced Hessian is not positive definite.
Pivot is the rrth element of a vector yy satisfying By = aqBy=aq whenever aqaq (the qqth column of the constraint matrix (AI)(A-I)) replaces the rrth column of the basis matrix BB. Wherever possible, Step is chosen so as to avoid extremely small values of Pivot (since they may cause the basis to be nearly singular). In extreme cases, it may be necessary to increase the value of the optional parameter Pivot Tolerance (default value = ε0.67default value=ε0.67, where εε is the machine precision) to exclude very small elements of yy from consideration during the computation of Step.
Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective is the value of the current objective function. If xx is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If xx is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
L is the number of nonzeros in the basis factor LL. Immediately after a basis factorization B = LUB=LU, this is lenL, the number of subdiagonal elements in the columns of a lower triangular matrix. Further nonzeros are added to L when various columns of BB are later replaced. (Thus, L increases monotonically.)
U is the number of nonzeros in the basis factor UU. Immediately after a basis factorization, this is lenU, the number of diagonal and superdiagonal elements in the rows of an upper triangular matrix. As columns of BB are replaced, the matrix UU is maintained explicitly (in sparse form). The value of U may fluctuate up or down; in general, it will tend to increase.
Ncp is the number of compressions required to recover workspace in the data structure for UU. This includes the number of compressions needed during the previous basis factorization. Normally, Ncp should increase very slowly. If it does not, increase lenz by at least L + UL+U and rerun nag_mip_iqp_sparse (h02ce) (possibly using start = 'W'start='W'; see Section [Parameters]).
Norm rg is ‖dS‖, the Euclidean norm of the reduced gradient (see Section [The Main Iteration]). During the optimality phase, this norm will be approximately zero after a unit step. For FP and LP problems, Norm rg is not printed.
Ns is the current number of superbasic variables. For FP and LP problems, Ns is not printed.
Cond Hz is a lower bound on the condition number of the reduced Hessian (see Section [Definition of the Working Set and Search Direction]). The larger this number, the more difficult the problem. For FP and LP problems, Cond Hz is not printed.
When Print Level ≥ 20 and Monitoring File ≥ 0, the following lines of intermediate printout (< 120 characters) are produced on the unit number specified by Monitoring File whenever the matrix B or BS = (B S)^T is factorized. Gaussian elimination is used to compute an LU factorization of B or BS, where PLP^T is a lower triangular matrix and PUQ is an upper triangular matrix for some permutation matrices P and Q. The factorization is stabilized in the manner described under the optional parameter LU Factor Tolerance (default value = 100.0; see Section [Description of the Optional Parameters]).
Factorize is the factorization count.
Demand is a code giving the reason for the present factorization as follows:
Code Meaning
0 First LU factorization.
1 Number of updates reached the value of the optional parameter Factorization Frequency (default value = 100).
2 Excessive nonzeros in updated factors.
7 Not enough storage to update factors.
10 Row residuals too large (see the description for the optional parameter Check Frequency).
11 Ill-conditioning has caused inconsistent results.
Iteration is the iteration count.
Nonlinear is the number of nonlinear variables in BB (not printed if BSBS is factorized).
Linear is the number of linear variables in BB (not printed if BSBS is factorized).
Slacks is the number of slack variables in BB (not printed if BSBS is factorized).
Elems is the number of nonzeros in BB (not printed if BSBS is factorized).
Density is the percentage nonzero density of B (not printed if BS is factorized). More precisely, Density = 100 × Elems/(Nonlinear + Linear + Slacks)^2.
Compressns is the number of times the data structure holding the partially factorized matrix needed to be compressed, in order to recover unused workspace. Ideally, it should be zero. If it is more than 3 or 4, increase leniz and lenz and rerun nag_mip_iqp_sparse (h02ce) (possibly using start = 'W'; see Section [Parameters]).
Merit is the average Markowitz merit count for the elements chosen to be the diagonals of PUQPUQ. Each merit count is defined to be (c1)(r1)(c-1)(r-1), where cc and rr are the number of nonzeros in the column and row containing the element at the time it is selected to be the next diagonal. Merit is the average of m such quantities. It gives an indication of how much work was required to preserve sparsity during the factorization.
lenL is the number of nonzeros in LL.
lenU is the number of nonzeros in UU.
Increase is the percentage increase in the number of nonzeros in LL and UU relative to the number of nonzeros in BB. More precisely, Increase = 100 × (lenL + lenUElems) / ElemsIncrease=100×(lenL+lenU-Elems)/Elems.
m is the number of rows in the problem. Note that m = Ut + Lt + bpm=Ut+Lt+bp.
Ut is the number of triangular rows of BB at the top of UU.
d1 is the number of columns remaining when the density of the basis matrix being factorized reached 0.30.3.
Lmax is the maximum subdiagonal element in the columns of LL (not printed if BSBS is factorized). This will not exceed the value of the LU Factor Tolerance.
Bmax is the maximum nonzero element in BB (not printed if BSBS is factorized).
BSmax is the maximum nonzero element in BSBS (not printed if BB is factorized).
Umax is the maximum nonzero element in UU, excluding elements of BB that remain in UU unchanged. (For example, if a slack variable is in the basis, the corresponding row of BB will become a row of UU without modification. Elements in such rows will not contribute to Umax. If the basis is strictly triangular, none of the elements of BB will contribute, and Umax will be zero.)
Ideally, Umax should not be significantly larger than Bmax. If it is several orders of magnitude larger, it may be advisable to reset the LU Factor Tolerance to a value near 1.01.0.
Umax is not printed if BSBS is factorized.
Umin is the magnitude of the smallest diagonal element of PUQPUQ (not printed if BSBS is factorized).
Growth is the value of the ratio Umax / BmaxUmax/Bmax, which should not be too large.
Providing Lmax is not large (say < 10.0<10.0), the ratio max (Bmax,Umax) / Uminmax(Bmax,Umax)/Umin is an estimate of the condition number of BB. If this number is extremely large, the basis is nearly singular and some numerical difficulties could occur in subsequent computations. (However, an effort is made to avoid near singularity by using slacks to replace columns of BB that would have made Umin extremely small, and the modified basis is refactorized.)
Growth is not printed if BSBS is factorized.
Lt is the number of triangular columns of BB at the beginning of LL.
bp is the size of the ‘bump’ or block to be factorized nontrivially after the triangular rows and columns have been removed.
d2 is the number of columns remaining when the density of the basis matrix being factorized reached 0.60.6.
When Print Level ≥ 20 and Monitoring File ≥ 0, the following lines of intermediate printout (< 80 characters) are produced on the unit number specified by Monitoring File whenever start = 'C' (see Section [Parameters]). They refer to the number of columns selected by the crash procedure during each of several passes through A, whilst searching for a triangular basis matrix.
Slacks is the number of slacks selected initially.
Free cols is the number of free columns in the basis.
Preferred is the number of ‘preferred’ columns in the basis (i.e., istate(j) = 3 for some j ≤ n).
Unit is the number of unit columns in the basis.
Double is the number of double columns in the basis.
Triangle is the number of triangular columns in the basis.
Pad is the number of slacks used to pad the basis.
When Print Level ≥ 20 and Monitoring File ≥ 0, the following lines of intermediate printout (< 80 characters) are produced on the unit number specified by Monitoring File. They refer to the elements of the names array (see Section [Parameters]).
Name gives the name for the problem (blank if none).
Objective gives the name of the free row for the problem (blank if none).
RHS gives the name of the constraint right-hand side for the problem (blank if none).
Ranges gives the name of the ranges for the problem (blank if none).
Bounds gives the name of the bounds for the problem (blank if none).
When Print Level = 1 or 10 and Monitoring File ≥ 0, the following lines of final printout (< 120 characters) are produced on the unit number specified by Monitoring File.
Let ajaj denote the jjth column of AA, for j = 1,2,,nj=1,2,,n. The following describes the printout for each column (or variable). A full stop (.) is printed for any numerical value that is zero.
Number is the column number jj. (This is used internally to refer to xjxj in the intermediate output.)
Column gives the name of xjxj.
State gives the state of the variable (LL if nonbasic on its lower bound, UL if nonbasic on its upper bound, EQ if nonbasic and fixed, FR if nonbasic and strictly between its bounds, BS if basic and SBS if superbasic).
A key is sometimes printed before State to give some additional information about the state of xjxj. Note that unless the optional parameter Scale Option = 0Scale Option=0 (default value = 2default value=2) is specified, the tests for assigning a key are applied to the variables of the scaled problem.
A Alternative optimum possible. The variable is nonbasic, but its reduced gradient is essentially zero. This means that if the variable were allowed to start moving away from its bound, there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange-multipliers might also change.
D Degenerate. The variable is basic or superbasic, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the optional parameter Feasibility Tolerance (default value = max (106,sqrt(ε))default value=max(10-6,ε), where εε is the machine precision).
N Not precisely optimal. The variable is nonbasic or superbasic. If the value of the reduced gradient for the variable exceeds the value of the optional parameter Optimality Tolerance (default value = max (106,sqrt(ε))default value=max(10-6,ε)), the solution would not be declared optimal because the reduced gradient for the variable would not be considered negligible.
Activity is the value of xjxj at the final iterate.
Obj Gradient is the value of gjgj at the final iterate. For FP problems, gjgj is set to zero.
Lower Bound is the lower bound specified for the variable. None indicates that bl(j) ≤ −bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu(j) ≥ bigbnd.
Reduced Gradnt is the value of djdj at the final iterate (see Section [The Main Iteration]). For FP problems, djdj is set to zero.
m + j is the value of m + jm+j.
Let vivi denote the iith row of AA, for i = 1,2,,mi=1,2,,m. The following describes the printout for each row (or constraint). A full stop (.) is printed for any numerical value that is zero.
Number is the value of n + in+i. (This is used internally to refer to sisi in the intermediate output.)
Row gives the name of vi.
State gives the state of the variable (LL if active on its lower bound, UL if active on its upper bound, EQ if active and fixed, BS if inactive when sisi is basic and SBS if inactive when sisi is superbasic).
A key is sometimes printed before State to give some additional information about the state of sisi. Note that unless the optional parameter Scale Option = 0Scale Option=0 (default value = 2default value=2) is specified, the tests for assigning a key are applied to the variables of the scaled problem.
A Alternative optimum possible. The variable is nonbasic, but its reduced gradient is essentially zero. This means that if the variable were allowed to start moving away from its bound, there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange-multipliers might also change.
D Degenerate. The variable is basic or superbasic, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is basic or superbasic and is currently violating one of its bounds by more than the value of the optional parameter Feasibility Tolerance (default value = max (106,sqrt(ε))default value=max(10-6,ε), where εε is the machine precision).
N Not precisely optimal. The variable is nonbasic or superbasic. If the value of the reduced gradient for the variable exceeds the value of the optional parameter Optimality Tolerance (default value = max (106,sqrt(ε))default value=max(10-6,ε)), the solution would not be declared optimal because the reduced gradient for the variable would not be considered negligible.
Activity is the value of vivi at the final iterate.
Slack Activity is the value by which vivi differs from its nearest bound. (For the free row (if any), it is set to Activity.)
Lower Bound is the lower bound specified for the variable. None indicates that bl(j) ≤ −bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu(j) ≥ bigbnd.
i gives the index ii of vivi.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013