NAG Toolbox: nag_mip_iqp_dense (h02cb)
Purpose
nag_mip_iqp_dense (h02cb) solves general quadratic programming problems with integer constraints on the variables. It is not intended for large sparse problems.
Syntax
[istate, xs, obj, ax, clamda, ifail] = h02cb(a, bl, bu, cvec, h, qphess, intvar, istate, xs, strtgy, monit, 'n', n, 'nclin', nclin, 'lintvr', lintvr, 'mdepth', mdepth, 'lwrk', lwrk)
[istate, xs, obj, ax, clamda, ifail] = nag_mip_iqp_dense(a, bl, bu, cvec, h, qphess, intvar, istate, xs, strtgy, monit, 'n', n, 'nclin', nclin, 'lintvr', lintvr, 'mdepth', mdepth, 'lwrk', lwrk)
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 23: lwrk is now optional.
Description
nag_mip_iqp_dense (h02cb) uses a ‘Branch and Bound’ algorithm in conjunction with nag_opt_qp_dense_solve (e04nf) to try to determine integer solutions to a general quadratic programming problem. The problem is assumed to be stated in the following general form:

   minimize_{x ∈ R^n} f(x)   subject to   l ≤ { x ; Ax } ≤ u,

where A is an m_L by n matrix and f(x) may be specified in a variety of ways depending upon the particular problem to be solved. The available forms for f(x) are listed in Table 1, in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively and c is an n-element vector.
Problem type   f(x)                        Matrix H
FP             Not applicable              Not applicable
LP             c^T x                       Not applicable
QP1            (1/2) x^T H x               symmetric
QP2            c^T x + (1/2) x^T H x       symmetric
QP3            (1/2) x^T H^T H x           m by n upper trapezoidal
QP4            c^T x + (1/2) x^T H^T H x   m by n upper trapezoidal
Table 1
Only when the problem is linear or the matrix H is positive definite can the technique be guaranteed to work, but useful results can often be obtained for a wider class of problems.
The default problem type is QP2; other objective functions are selected by using the optional parameter Problem Type. For problems of type FP, the objective function is omitted and nag_mip_iqp_dense (h02cb) attempts to find a feasible point for the set of constraints.
Branch and bound consists firstly of obtaining a solution without any of the variables x = (x_1, x_2, …, x_n)^T constrained to be integer. Suppose x_1 ought to be integer, but at the optimal value just computed x_1 = 2.4. A constraint x_1 ≤ 2 is added to the system and the second problem solved. A constraint x_1 ≥ 3 gives rise to a third subproblem. In a similar manner a whole series of subproblems may be generated, corresponding to integer constraints on the variables. The subproblems are all solved using nag_opt_qp_dense_solve (e04nf).
In practice the function tries to compute an integer solution as quickly as possible using a depth-first approach, since this helps determine a realistic cutoff value. If we have a cutoff value, say the value of the function at this first integer solution, and any subproblem, W say, has a solution value greater than this cutoff value, then subsequent subproblems of W must have solutions greater than the value of the solution at W and therefore need not be computed. Thus a knowledge of a good cutoff value can result in fewer subproblems being solved, and so speed up the operation of the function. (See the description of monit in Section [Parameters] for details of how you can supply your own cutoff value.)
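The branching step described above can be sketched as follows; branch_on is a hypothetical helper (not part of the Library) that builds the bound arrays for the two child subproblems from a fractional value xk.

```matlab
% Sketch of one branch-and-bound split (illustrative only, not Library code).
% The two child subproblems differ from the parent only in one bound.
function [blL, buL, blR, buR] = branch_on(bl, bu, k, xk)
    blL = bl;  buL = bu;
    buL(k) = floor(xk);        % left child:  x_k <= floor(xk)
    blR = bl;  buR = bu;
    blR(k) = floor(xk) + 1;    % right child: x_k >= floor(xk) + 1
end
```

With xk = 2.4 this produces the constraints x_k ≤ 2 and x_k ≥ 3 from the description above.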
References
Gill P E, Hammarling S, Murray W, Saunders M A and Wright M H (1986) Users' guide for LSSOL (Version 1.0) Report SOL 86-1 Department of Operations Research, Stanford University
Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372
Gill P E, Murray W, Saunders M A and Wright M H (1984) Procedures for optimization problems with a mixture of bounds and general linear constraints ACM Trans. Math. Software 10 282–298
Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474
Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Pardalos P M and Schnitger G (1988) Checking local optimality in constrained quadratic programming is NP-hard Operations Research Letters 7 33–35
Parameters
Compulsory Input Parameters
 1:
a(lda, : $:$) – double array

The first dimension of the array a must be at least max(1, nclin). The second dimension of the array must be at least n if nclin > 0 and at least 1 if nclin = 0.
The ith row of a must contain the coefficients of the ith general linear constraint, for i = 1, 2, …, m_L.
If nclin = 0 then the array a is not referenced.
 2:
bl(n + nclin${\mathbf{n}}+{\mathbf{nclin}}$) – double array
 3:
bu(n + nclin${\mathbf{n}}+{\mathbf{nclin}}$) – double array
bl must contain the lower bounds and bu the upper bounds, for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, and the next m_L elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., l_j = −∞), set bl(j) ≤ −bigbnd, and to specify a nonexistent upper bound (i.e., u_j = +∞), set bu(j) ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl(j) = bu(j) = β, say, where |β| < bigbnd.
Constraints:
 bl(j) ≤ bu(j), for j = 1, 2, …, n + nclin;
 if bl(j) = bu(j) = β, |β| < bigbnd.
 4:
cvec( : $:$) – double array

Note: the dimension of the array
cvec
must be at least
n${\mathbf{n}}$ if the problem is of type LP, QP2 (the default) or QP4, and at least
1$1$ otherwise.
The coefficients of the explicit linear term of the objective function when the problem is of type LP, QP2 (the default) and QP4.
If the problem is of type FP, QP1, or QP3,
cvec is not referenced.
 5:
h(ldh,tdh$\mathit{tdh}$) – double array
ldh, the first dimension of the array, must satisfy the constraint
 if the problem is of type QP1, QP2 (the default), QP3 or QP4, ldh ≥ n$\mathit{ldh}\ge {\mathbf{n}}$ or at least the value of the optional parameter Hessian Rows (default value = n$\text{default value}=n$)
 if the problem is of type FP or LP, ldh ≥ 1$\mathit{ldh}\ge 1$
.
May be used to store the quadratic term
H$H$ of the QP objective function if desired. In some cases, you need not use
h to store
H$H$ explicitly (see the specification of
qphess). The elements of
h are referenced only by
qphess. The number of rows of
H$H$ is denoted by
m, whose default value is n. (The optional parameter Hessian Rows may be used to specify a value of m < n.)
If the default version of
qphess is used and the problem is of type QP1 or QP2 (the default), the first
m$m$ rows and columns of
h must contain the leading
m$m$ by
m$m$ rows and columns of the symmetric Hessian matrix
H$H$. Only the diagonal and upper triangular elements of the leading
m$m$ rows and columns of
h are referenced. The remaining elements need not be assigned.
If the default version of
qphess is used and the problem is of type QP3 or QP4, the first
m$m$ rows of
h must contain an
m$m$ by
n$n$ upper trapezoidal factor of the symmetric Hessian matrix
H^{T}H${H}^{\mathrm{T}}H$. The factor need not be of full rank, i.e., some of the diagonal elements may be zero. However, as a general rule, the larger the dimension of the leading nonsingular submatrix of
h, the fewer iterations will be required. Elements outside the upper trapezoidal part of the first
m$m$ rows of
h need not be assigned.
In other situations, it may be desirable to compute
Hx$Hx$ or
H^{T}Hx${H}^{\mathrm{T}}Hx$ without accessing
h – for example, if
H$H$ or
H^{T}H${H}^{\mathrm{T}}H$ is sparse or has special structure. The parameters
h and
ldh may then refer to any convenient array.
If the problem is of type FP or LP,
h is not referenced.
 6:
qphess – function handle or string containing name of mfile
In general, you need not provide a version of
qphess, because a ‘default’ function with name
nag_opt_qp_dense_sample_qphess (e04nfu) is included in the Library. However, the algorithm of
nag_mip_iqp_dense (h02cb) requires only the product of
H$H$ or
H^{T}H${H}^{\mathrm{T}}H$ and a vector
x$x$; and in some cases you may obtain increased efficiency by providing a version of
qphess that avoids the need to define the elements of the matrices
H$H$ or
H^{T}H${H}^{\mathrm{T}}H$ explicitly.
qphess is not referenced if the problem is of type FP or LP, in which case
qphess may be the string
''.
[hx] = qphess(n, jthcol, h, ldh, x)
Input Parameters
 1:
n – int64int32nag_int scalar
This is the same parameter
n as supplied to
nag_mip_iqp_dense (h02cb).
 2:
jthcol – int64int32nag_int scalar
Specifies whether or not the vector
x$x$ is a column of the identity matrix.
 jthcol = j > 0
 The vector x is the jth column of the identity matrix, and hence Hx or H^T Hx is the jth column of H or H^T H, respectively, which may in some cases require very little computation, and qphess may be coded to take advantage of this. However, special code is not necessary because x is always stored explicitly in the array x.
 jthcol = 0
 x has no special form.
 3:
h(ldh,tdh$\mathit{tdh}$) – double array
This is the same parameter
h as supplied to
nag_mip_iqp_dense (h02cb).
 4:
ldh – int64int32nag_int scalar
This is the same parameter
ldh as supplied to
nag_mip_iqp_dense (h02cb).
 5:
x(n) – double array
The vector x$x$.
Output Parameters
 1:
hx(n) – double array
The product Hx if the problem is of type QP1 or QP2 (the default), or the product H^T Hx if the problem is of type QP3 or QP4.
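Where H has exploitable structure, qphess can avoid forming the product explicitly. The sketch below assumes a QP2 problem whose Hessian is diagonal, with the diagonal entries stored in the first column of h; both the storage convention and the function name are assumptions of this example, not a Library convention.

```matlab
% Illustrative qphess for a QP2 problem with a diagonal Hessian
% (a sketch only; the Library's default e04nfu handles the general case).
% Assumption: h(1:n, 1) holds the n diagonal entries of H.
function [hx] = qphess_diag(n, jthcol, h, ldh, x)
    if jthcol > 0
        % x is the jthcol-th identity column, so Hx is that column of H:
        hx = zeros(n, 1);
        hx(jthcol) = h(jthcol, 1);
    else
        % General vector: elementwise product with the diagonal.
        hx = h(1:n, 1) .* x(1:n);
    end
end
```

Passing a handle to such a function in place of the default would compute each product in O(n) operations rather than O(n^2).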
 7:
intvar(lintvr) – int64int32nag_int array
lintvr, the dimension of the array, must satisfy the constraint
lintvr > 0${\mathbf{lintvr}}>0$.
intvar(i)${\mathbf{intvar}}\left(i\right)$ must contain the index of the solution vector
x$x$ which is required to be integer. For example, if
x_{1}${x}_{1}$ and
x_{3}${x}_{3}$ are constrained to take integer values then
intvar(1)${\mathbf{intvar}}\left(1\right)$ might be set to
1$1$ and
intvar(2)${\mathbf{intvar}}\left(2\right)$ to
3$3$. The order in which the indices are specified is important, since this determines the order in which the subproblems are generated. As a ruleofthumb, the important variables should always be specified first. Thus, in the above example, if
x_{3}${x}_{3}$ relates to a more important quantity than
x_{1}${x}_{1}$, then it might be advantageous to set
intvar(1) = 3${\mathbf{intvar}}\left(1\right)=3$ and
intvar(2) = 1. If k is the smallest integer such that intvar(k) is less than or equal to zero, then nag_mip_iqp_dense (h02cb) assumes that k − 1 variables are constrained to be integer; components intvar(k + 1), …, intvar(lintvr) are not referenced.
 8:
istate(n + nclin${\mathbf{n}}+{\mathbf{nclin}}$) – int64int32nag_int array
Need not be set if the (default) optional parameter
Cold Start is used.
If the optional parameter
Warm Start has been chosen,
istate specifies the desired status of the constraints at the start of the feasibility phase. More precisely, the first
n$n$ elements of
istate refer to the upper and lower bounds on the variables, and the next
m_{L}${m}_{L}$ elements refer to the general linear constraints (if any). Possible values for
istate(j)${\mathbf{istate}}\left(j\right)$ are as follows:
istate(j)${\mathbf{istate}}\left(j\right)$  Meaning 
0  The corresponding constraint should not be in the initial working set. 
1  The constraint should be in the initial working set at its lower bound. 
2  The constraint should be in the initial working set at its upper bound. 
3  The constraint should be in the initial working set as an equality. This value must not be specified unless bl(j) = bu(j)${\mathbf{bl}}\left(j\right)={\mathbf{bu}}\left(j\right)$. 
The values −2, −1 and 4 are also acceptable but will be reset to zero by the function. If
nag_mip_iqp_dense (h02cb) has been called previously with the same values of
n and
nclin,
istate already contains satisfactory information. (See also the description of the optional parameter
Warm Start.) The function also adjusts (if necessary) the values supplied in
xs to be consistent with
istate.
Constraint:
− 2 ≤ istate(j) ≤ 4$2\le {\mathbf{istate}}\left(\mathit{j}\right)\le 4$, for
j = 1,2, … ,n + nclin$\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}$.
 9:
xs(n) – double array
n, the dimension of the array, must satisfy the constraint
n > 0${\mathbf{n}}>0$.
An initial estimate of the solution.
 10:
strtgy – int64int32nag_int scalar
Determines a branching strategy to be used throughout the computation, as follows:
strtgy${\mathbf{strtgy}}$  Meaning 
0$0$  Always left branch first, i.e., impose an upper bound constraint on the variable first. 
1$1$  Always right branch first, i.e., impose a lower bound constraint on the variable first. 
2$2$  Branch towards the nearest integer, i.e., if x_{k} = 2.4${x}_{k}=2.4$ then impose an upper bound constraint x_{k} ≤ 2${x}_{k}\le 2$, whereas if x_{k} = 2.6${x}_{k}=2.6$ then impose the lower bound constraint x_{k} ≥ 3.0${x}_{k}\ge 3.0$. 
3$3$  A random choice is made between a lefthand and a righthand branch. 
Constraint:
strtgy = 0${\mathbf{strtgy}}=0$,
1$1$,
2$2$ or
3$3$.
 11:
monit – function handle or string containing name of mfile
monit may be used to print out intermediate output and to affect the course of the computation. Specifically, it allows you to specify a realistic value for the cutoff value (see
Section [Description]) and to terminate the algorithm. If you do not require any intermediate output, have no estimate of the cutoff value and require an exhaustive tree search then
monit may be the string
'h02cbu'.
[bstval, halt, count] = monit(intfnd, nodes, depth, obj, x, bstval, bstsol, bl, bu, n, halt, count)
Input Parameters
 1:
intfnd – int64int32nag_int scalar
Specifies the number of integer solutions obtained so far.
 2:
nodes – int64int32nag_int scalar
Specifies the number of nodes (subproblems) solved so far.
 3:
depth – int64int32nag_int scalar
Specifies the depth in the tree of subproblems the algorithm has now reached.
 4:
obj – double scalar
Specifies the value of the objective function at the end of the latest subproblem.
 5:
x(n) – double array
Specifies the values of the independent variables at the end of the latest subproblem.
 6:
bstval – double scalar
Normally specifies the value of the best integer solution found so far.
 7:
bstsol(n) – double array
Specifies the solution vector which gives rise to the best integer solution value so far discovered.
 8:
bl(n) – double array
bl(i)${\mathbf{bl}}\left(i\right)$ specifies the current lower bounds on the variable
x_{i}${x}_{i}$.
 9:
bu(n) – double array
bu(i)${\mathbf{bu}}\left(i\right)$ specifies the current upper bounds on the variable
x_{i}${x}_{i}$.
 10:
n – int64int32nag_int scalar
Specifies the number of variables.
 11:
halt – logical scalar
Will have the value false.
 12:
count – int64int32nag_int scalar
Unchanged from previous call.
Output Parameters
 1:
bstval – double scalar
May be set to a cutoff value by experienced users, as follows. Before an integer solution has been found, bstval will be set by nag_mip_iqp_dense (h02cb) to the largest machine representable number (see nag_machine_real_largest (x02al)). If you know that the solution being sought is a much smaller number, then bstval may be set to this number as a cutoff value (see Section [Description]). Beware of setting bstval too small, since then no integer solutions will be discovered. Also make sure that bstval is set using a statement of the form
if (intfnd == 0), bstval = cutoff_value; end
on entry to monit. This statement will not prevent the normal operation of the algorithm when subsequent integer solutions are found. It would be a grievous mistake to set bstval unconditionally, and if you have any doubts whatsoever about the correct use of this parameter then you are strongly recommended to leave it unchanged.
 2:
halt – logical scalar
If
halt is set to
true,
nag_opt_qp_dense_solve (e04nf) will be brought to a halt with
ifail = − 1${\mathbf{ifail}}={{\mathbf{1}}}$. This facility may be useful if you are content with
any integer solution, or with any integer solution that fits certain criteria. Under these circumstances setting
halt = true${\mathbf{halt}}=\mathbf{true}$ can save considerable unnecessary computation.
 3:
count – int64int32nag_int scalar
May be used by you to save the last value of
intfnd. If a subsequent call of
monit has a value of
intfnd which is greater than
count, then you know that a new integer solution has been found at this node.
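Combining the parameters above, a minimal monit might look like the following sketch. The cutoff value 0.05 and the choice to halt at the first integer solution are assumptions of this example, not recommendations.

```matlab
% Illustrative monit: supplies a cutoff before any integer solution exists,
% tracks intfnd via count, and halts once one integer solution is found.
% The cutoff value (0.05) is purely illustrative.
function [bstval, halt, count] = monit_sketch(intfnd, nodes, depth, obj, ...
        x, bstval, bstsol, bl, bu, n, halt, count)
    if intfnd == 0
        bstval = 0.05;    % cutoff: prune subproblems worse than this
    end
    if intfnd > count
        count = intfnd;   % a new integer solution was found at this node
        halt = true;      % content with any integer solution: stop here
    end
end
```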
Optional Input Parameters
 1:
n – int64int32nag_int scalar
Default:
The dimension of the arrays
bl,
bu,
cvec,
xs. (An error is raised if these dimensions are not equal.)
n$n$, the number of variables.
Constraint:
n > 0${\mathbf{n}}>0$.
 2:
nclin – int64int32nag_int scalar
Default:
The first dimension of the array
a.
m_{L}${m}_{L}$, the number of general linear constraints.
Constraint:
nclin ≥ 0${\mathbf{nclin}}\ge 0$.
 3:
lintvr – int64int32nag_int scalar
Default:
The dimension of the array
intvar.
The dimension of the array
intvar as declared in the (sub)program from which
nag_mip_iqp_dense (h02cb) is called. Often
lintvr is the number of variables that are constrained to be integer.
Constraint:
lintvr > 0${\mathbf{lintvr}}>0$.
 4:
mdepth – int64int32nag_int scalar
The maximum depth (i.e., number of extra constraints) that nag_mip_iqp_dense (h02cb) may insert before admitting failure.
Default: 3 × n / 2.
Constraint: mdepth ≥ 1.
 5:
lwrk – int64int32nag_int scalar
The dimension of the array
wrk as declared in the (sub)program from which
nag_mip_iqp_dense (h02cb) is called.
Default: 2 × max(n, nclin + 1)^2 + 9 × n + 5 × nclin + 4 × mdepth.
Constraints:
 if the problem type is QP2 (the default) or QP4,
 if nclin > 0${\mathbf{nclin}}>0$, lwrk ≥ 2 × n^{2} + 9 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {{\mathbf{n}}}^{2}+9\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$;
 if nclin = 0${\mathbf{nclin}}=0$, lwrk ≥ n^{2} + 9 × n + 4 × mdepth${\mathbf{lwrk}}\ge {{\mathbf{n}}}^{2}+9\times {\mathbf{n}}+4\times {\mathbf{mdepth}}$;
 if the problem type is QP1 or QP3,
 if nclin > 0${\mathbf{nclin}}>0$, lwrk ≥ 2 × n^{2} + 8 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {{\mathbf{n}}}^{2}+8\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$;
 if nclin = 0${\mathbf{nclin}}=0$, lwrk ≥ n^{2} + 8 × n + 4 × mdepth${\mathbf{lwrk}}\ge {{\mathbf{n}}}^{2}+8\times {\mathbf{n}}+4\times {\mathbf{mdepth}}$;
 if the problem type is LP,
 if nclin = 0${\mathbf{nclin}}=0$, lwrk ≥ 9 × n + 1 + 4 × mdepth${\mathbf{lwrk}}\ge 9\times {\mathbf{n}}+1+4\times {\mathbf{mdepth}}$;
 if nclin ≥ n${\mathbf{nclin}}\ge {\mathbf{n}}$, lwrk ≥ 2 × n^{2} + 9 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {{\mathbf{n}}}^{2}+9\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$;
 otherwise lwrk ≥ 2 × (nclin + 1)^{2} + 9 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {({\mathbf{nclin}}+1)}^{2}+9\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$;
 if the problem type is FP,
 if nclin = 0${\mathbf{nclin}}=0$, lwrk ≥ 8 × n + 1 + 4 × mdepth${\mathbf{lwrk}}\ge 8\times {\mathbf{n}}+1+4\times {\mathbf{mdepth}}$;
 if nclin ≥ n${\mathbf{nclin}}\ge {\mathbf{n}}$, lwrk ≥ 2 × n^{2} + 8 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {{\mathbf{n}}}^{2}+8\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$;
 otherwise lwrk ≥ 2 × (nclin + 1)^{2} + 8 × n + 5 × nclin + 4 × mdepth${\mathbf{lwrk}}\ge 2\times {({\mathbf{nclin}}+1)}^{2}+8\times {\mathbf{n}}+5\times {\mathbf{nclin}}+4\times {\mathbf{mdepth}}$.
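As a sanity check, the default value of lwrk quoted above can be computed directly. default_lwrk is a hypothetical helper reproducing only the general default formula, not the smaller per-problem-type minima listed above.

```matlab
% Default lwrk from the formula above. The per-problem-type constraints
% may permit smaller values; this is the general default.
function lwrk = default_lwrk(n, nclin, mdepth)
    lwrk = 2 * max(n, nclin + 1)^2 + 9*n + 5*nclin + 4*mdepth;
end
```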
Input Parameters Omitted from the MATLAB Interface
 lda ldh tdh iwrk liwrk wrk
Output Parameters
 1:
istate(n + nclin${\mathbf{n}}+{\mathbf{nclin}}$) – int64int32nag_int array
The status of the constraints in the working set at the point returned in
xs. The significance of each possible value of
istate(j)${\mathbf{istate}}\left(j\right)$ is as follows:
istate(j)  Meaning 
−2  The constraint violates its lower bound by more than the feasibility tolerance. 
−1  The constraint violates its upper bound by more than the feasibility tolerance. 
0  The constraint is satisfied to within the feasibility tolerance, but is not in the working set. 
1  This inequality constraint is included in the working set at its lower bound. 
2  This inequality constraint is included in the working set at its upper bound. 
3  This constraint is included in the working set as an equality. This value of istate can occur only when bl(j) = bu(j). 
4  This corresponds to optimality being declared with xs(j) being temporarily fixed at its current value. This value of istate can occur only when ifail = 1 on exit. 
 2:
xs(n) – double array
The point at which
nag_mip_iqp_dense (h02cb) terminated. If
ifail = 0${\mathbf{ifail}}={\mathbf{0}}$,
1${\mathbf{1}}$ or
3${\mathbf{3}}$,
xs contains an estimate of the solution.
 3:
obj – double scalar
The value of the objective function at
x$x$ if
x$x$ is feasible, or the sum of infeasibilities at
x$x$ otherwise. If the problem is of type FP and
x$x$ is feasible,
obj is set to zero.
 4:
ax(max (1,nclin)$\mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{nclin}})$) – double array
The final values of the linear constraints
Ax$Ax$.
If
nclin = 0${\mathbf{nclin}}=0$,
ax is not referenced.
 5:
clamda(n + nclin${\mathbf{n}}+{\mathbf{nclin}}$) – double array
The values of the Lagrange multipliers for each constraint with respect to the current working set. The first
n$n$ elements contain the multipliers for the bound constraints on the variables, and the next
m_{L}${m}_{L}$ elements contain the multipliers for the general linear constraints (if any). If
istate(j) = 0${\mathbf{istate}}\left(j\right)=0$ (i.e., constraint
j$j$ is not in the working set),
clamda(j)${\mathbf{clamda}}\left(j\right)$ is zero. If
x$x$ is optimal,
clamda(j)${\mathbf{clamda}}\left(j\right)$ should be nonnegative if
istate(j) = 1${\mathbf{istate}}\left(j\right)=1$, nonpositive if
istate(j) = 2${\mathbf{istate}}\left(j\right)=2$ and zero if
istate(j) = 4${\mathbf{istate}}\left(j\right)=4$.
 6:
ifail – int64int32nag_int scalar
ifail = 0${\mathrm{ifail}}={\mathbf{0}}$ unless the function detects an error (see
[Error Indicators and Warnings]).
Error Indicators and Warnings
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
 W ifail = − 1${\mathbf{ifail}}=1$
Algorithm terminated at your request (
halt = true${\mathbf{halt}}=\mathbf{true}$).
 ifail = 1${\mathbf{ifail}}=1$
Input parameter error immediately detected.
 ifail = 2${\mathbf{ifail}}=2$
No integer solution found. (Check that
bstval has not been set too small.)
 ifail = 3${\mathbf{ifail}}=3$
mdepth is too small. Increase the value of
mdepth and reenter
nag_mip_iqp_dense (h02cb).
 ifail = 4${\mathbf{ifail}}=4$
The basic problem (without integer constraints) is unbounded.
 ifail = 5${\mathbf{ifail}}=5$
The basic problem is infeasible.
 ifail = 6${\mathbf{ifail}}=6$
The basic problem requires too many iterations.
 ifail = 7${\mathbf{ifail}}=7$
The basic problem has a reduced Hessian which exceeds its assigned dimension.
 ifail = 8${\mathbf{ifail}}=8$
The basic problem has an invalid parameter setting.
 ifail = 9${\mathbf{ifail}}=9$
The basic problem, as defined, is not standard.
 ifail = 10${\mathbf{ifail}}=10$
liwrk is too small.
 ifail = 11${\mathbf{ifail}}=11$
lwrk is too small.
 ifail = 12${\mathbf{ifail}}=12$
An internal error has occurred within the function. Please contact
NAG with details of the call to
nag_mip_iqp_dense (h02cb).
Accuracy
nag_mip_iqp_dense (h02cb) implements a numerically stable active set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.
Further Comments
This section contains some comments on scaling and a description of the printed output.
Scaling
Sensible scaling of the problem is likely to reduce the number of iterations required and make the problem less sensitive to perturbations in the data, thus improving the condition of the problem. In the absence of better information it is usually sensible to make the Euclidean lengths of each constraint of comparable magnitude. See
Chapter E04 and
Gill et al. (1981) for further information and advice.
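One way to follow the advice above is to scale each general constraint row of a to unit Euclidean length, remembering that the corresponding bounds in bl and bu must then be scaled by the same factors (a sketch, assuming a, bl, bu, n and nclin as set up in the Example section).

```matlab
% Scale each general linear constraint so its row of a has unit Euclidean
% length; the matching general-constraint bounds are scaled consistently.
% (Illustrative only; choose a scaling appropriate to your data.)
rownorms = sqrt(sum(a.^2, 2));          % Euclidean length of each row
a_scaled = a ./ rownorms;               % scale constraint coefficients
bl_gen = bl(n+1:n+nclin) ./ rownorms;   % scale general-constraint bounds
bu_gen = bu(n+1:n+nclin) ./ rownorms;
```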
Description of the Printed Output
This section describes the (default) intermediate printout and final printout produced by
nag_mip_iqp_dense (h02cb). The intermediate printout is a subset of the monitoring information produced by the function at every iteration (see
Section [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Print Level in Section [Description of the Optional Parameters]). Note that the intermediate printout and final printout are produced only if Print Level ≥ 10 (the default).
The following line of summary output (
< 80$\text{}<80$ characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect
on
completion of the given iteration.
Itn 
is the iteration count.

Step 
is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.

Ninf 
is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.

Sinf/Objective 
is the value of the current objective function. If x$x$ is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x$x$ is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.

Norm Gz 
is ‖Z_R^T g_FR‖, the Euclidean norm of the reduced gradient with respect to Z_R (see Sections [Definition of the Search Direction] and [Choosing the Initial Working Set]). During the optimality phase, this norm will be approximately zero after a unit step.

The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
A key is sometimes printed before State to give some additional information about the state of a variable.
Varbl 
gives the name (V) and index j$\mathit{j}$, for j = 1,2, … ,n$\mathit{j}=1,2,\dots ,n$, of the variable.

State 
gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance (default value = sqrt(ε), where ε is the machine precision; see Section [Description of the Optional Parameters]), State will be ++ or −− respectively.
A 
Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound, there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.

D 
Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.

I 
Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.


Value 
is the value of the variable at the final iterate.

Lower Bound 
is the lower bound specified for the variable. None indicates that bl(j) ≤ − bigbnd${\mathbf{bl}}\left(j\right)\le \mathit{bigbnd}$.

Upper Bound 
is the upper bound specified for the variable. None indicates that bu(j) ≥ bigbnd${\mathbf{bu}}\left(j\right)\ge \mathit{bigbnd}$.

Slack 
is the difference between the variable Value and the nearer of its (finite) bounds bl(j)${\mathbf{bl}}\left(j\right)$ and bu(j)${\mathbf{bu}}\left(j\right)$. A blank entry indicates that the associated variable is not bounded (i.e., bl(j) ≤ − bigbnd${\mathbf{bl}}\left(j\right)\le \mathit{bigbnd}$ and bu(j) ≥ bigbnd${\mathbf{bu}}\left(j\right)\ge \mathit{bigbnd}$).

The meaning of the printout for general constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, bl(j) and bu(j) replaced by bl(n + j) and bu(n + j) respectively, and with the following change in the heading.
L Con 
gives the name (L) and index j$\mathit{j}$, for j = 1,2, … ,m$\mathit{j}=1,2,\dots ,m$, of the constraint.

Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.
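The Slack quantity described above can be illustrated with a short sketch. This is not part of the NAG Toolbox; the helper name `slack` and the `bigbnd` cut-off (which mirrors the Infinite Bound Size optional parameter) are assumptions for this example.

```python
# Illustrative sketch (not NAG code) of the Slack column: the distance from
# a variable's value to the nearer of its finite bounds. A bound beyond
# +/- bigbnd is treated as absent (assumption mirroring Infinite Bound Size).

def slack(value, lower, upper, bigbnd=1e20):
    """Distance to the nearer finite bound, or None if the variable is
    unbounded (both bounds beyond +/- bigbnd)."""
    distances = []
    if lower > -bigbnd:
        distances.append(abs(value - lower))
    if upper < bigbnd:
        distances.append(abs(upper - value))
    return min(distances) if distances else None
```

A blank Slack entry in the printed output corresponds to the `None` case here, where neither bound is finite.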
Example
Open in the MATLAB editor:
nag_mip_iqp_dense_example
function nag_mip_iqp_dense_example
a = [1, 1, 1, 1, 1, 1, 1;
0.15, 0.04, 0.02, 0.04, 0.02, 0.01, 0.03;
0.03, 0.05, 0.08, 0.02, 0.06, 0.01, 0;
0.02, 0.04, 0.01, 0.02, 0.02, 0, 0;
0.02, 0.03, 0, 0, 0.01, 0, 0;
0.7, 0.75, 0.8, 0.75, 0.8, 0.97, 0;
0.02, 0.06, 0.08, 0.12, 0.02, 0.01, 0.97];
bl = [0.01;
0.1;
0.01;
-0.04;
-0.1;
0.01;
0.01;
0.13;
-1e25;
-1e25;
-1e25;
-1e25;
0.0992;
-0.003];
bu = [0.01;
0.15;
0.03;
0.02;
0.05;
1e25;
1e25;
0.13;
0.0049;
0.0064;
0.0037;
0.0012;
1e25;
0.002];
cvec = [0.02;
0.2;
0.2;
0.2;
0.2;
0.04;
0.04];
h = [2, 0, 0, 0, 0, 0, 0;
0, 2, 0, 0, 0, 0, 0;
0, 0, 2, 2, 0, 0, 0;
0, 0, 2, 2, 0, 0, 0;
0, 0, 0, 0, 2, 0, 0;
0, 0, 0, 0, 0, 2, 2;
0, 0, 0, 0, 0, 2, 2];
intvar = [int64(4)];
istate = zeros(14, 1, 'int64');
xs = [0.01;
0.03;
0;
0.01;
0.1;
0.02;
0.01];
strtgy = int64(2);
[istateOut, xsOut, obj, ax, clamda, ifail] = ...
nag_mip_iqp_dense(a, bl, bu, cvec, h, @qphess, intvar, istate, xs, strtgy, @monit)
function [hx] = qphess(n, jthcol, h, ldh, x)
hx = h*x;
function [bstval, halt, count] = ...
monit(intfnd, nodes, depth, obj, x, bstval, bstsol, bl, bu, n, halt, count)
% no monitoring action: bstval, halt and count are returned unchanged
istateOut =
1
0
0
1
0
0
0
3
0
0
0
0
1
1
xsOut =
0.0100
0.0733
0.0003
0
0.0634
0.0141
0.0028
obj =
0.0375
ax =
0.1300
0.0055
0.0076
0.0044
0.0030
0.0992
0.0030
clamda =
0.4949
0
0
0.0199
0
0
0
2.0340
0
0
0
0
2.0815
2.1032
ifail =
0
The remainder of this document is intended for more advanced users. Section [Algorithmic Details] contains a detailed description of the algorithm which may be needed in order to understand Sections [Optional Parameters] and [Description of Monitoring Information]. Section [Optional Parameters] describes the optional parameters which may be set by calls to nag_mip_iqp_dense_optstr (h02cd). Section [Description of Monitoring Information] describes the quantities which can be requested to monitor the course of the computation.
Algorithmic Details
nag_mip_iqp_dense (h02cb) implements a basic branch and bound algorithm (see
Section [Description]) using
nag_opt_qp_dense_solve (e04nf) as its basic subproblem solver. See below for details of its algorithm.
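The overall branch-and-bound strategy can be sketched in a few lines. This is an illustration only, not the h02cb implementation: it minimizes a one-variable convex quadratic, with a trivial closed-form projection standing in for the continuous subproblem solver nag_opt_qp_dense_solve (e04nf), and all names are hypothetical.

```python
# Minimal branch-and-bound sketch (illustrative, not NAG code): minimize
# f(x) = (x - c)**2 subject to lo <= x <= hi with x required to be integer.

import math

def solve_relaxation(c, lo, hi):
    """Continuous minimizer of (x - c)^2 on [lo, hi] (stand-in for e04nf)."""
    x = min(max(c, lo), hi)
    return x, (x - c) ** 2

def branch_and_bound(c, lo, hi):
    best_x, best_val = None, math.inf
    stack = [(lo, hi)]                        # nodes are bound intervals
    while stack:
        lo_k, hi_k = stack.pop()
        if lo_k > hi_k:
            continue                          # infeasible node
        x, val = solve_relaxation(c, lo_k, hi_k)
        if val >= best_val:
            continue                          # bound: prune this subtree
        if abs(x - round(x)) < 1e-9:
            best_x, best_val = round(x), val  # integer-feasible incumbent
        else:
            f = math.floor(x)                 # branch on the fractional value
            stack.append((lo_k, f))           # subproblem with x <= floor(x)
            stack.append((f + 1, hi_k))       # subproblem with x >= floor(x)+1
    return best_x, best_val

# e.g. branch_and_bound(2.4, 0, 10) gives x = 2 with value approximately 0.16
```

In h02cb the same pattern applies variable-by-variable to the components listed in intvar, with each node's continuous relaxation solved by e04nf and the search controlled by strtgy and the monit function.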
Overview
nag_mip_iqp_dense (h02cb) is based on an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is based on that of
Gill and Murray (1978), and is described in detail by
Gill et al. (1991). Here we briefly summarise the main features of the method. Where possible, explicit reference is made to the names of variables that are parameters of
nag_mip_iqp_dense (h02cb) or appear in the printed output.
nag_mip_iqp_dense (h02cb) has two phases:
(i) 
finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase), and 
(ii) 
minimizing the quadratic objective function within the feasible region (the optimality phase). 
The computations in both phases are performed by the same functions. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities to the quadratic objective function. The feasibility phase does not perform the standard simplex method (i.e., it does not necessarily find a vertex), except in the LP case when m_{L} ≤ n${m}_{L}\le n$. Once any iterate is feasible, all subsequent iterates remain feasible.
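The quantity minimized in the feasibility phase can be sketched as follows. This is illustrative only, not NAG code; only the simple bounds are shown, and general constraints l ≤ aᵀx ≤ u would contribute analogously.

```python
# Illustrative sketch (not NAG code) of the sum of infeasibilities: the
# total amount by which the current point violates its lower and upper
# bounds. The feasibility phase drives this quantity towards zero.

def sum_of_infeasibilities(x, bl, bu):
    total = 0.0
    for xj, lj, uj in zip(x, bl, bu):
        total += max(lj - xj, 0.0)   # violation of the lower bound
        total += max(xj - uj, 0.0)   # violation of the upper bound
    return total
```

Once this sum reaches zero the iterate is feasible and the optimality phase minimizes the quadratic objective instead; as noted above, all subsequent iterates then remain feasible.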
nag_mip_iqp_dense (h02cb) has been designed to be efficient when used to solve a
sequence of related problems – for example, within a sequential quadratic programming method for nonlinearly constrained optimization (e.g.,
nag_opt_nlp2_solve (e04wd)). In particular, you may specify an initial working set (the indices of the constraints believed to be satisfied exactly at the solution); see the discussion of the
Warm Start in
Section [Description of the Optional Parameters].
In general, an iterative process is required to solve a quadratic program. (For simplicity, we shall always consider a typical iteration and avoid reference to the index of the iteration.) Each new iterate
x̄$\bar{x}$ is defined by
x̄ = x + αp$\bar{x}=x+\alpha p$
(1)
where the
step length
α$\alpha $ is a nonnegative scalar, and
p$p$ is called the
search direction.
At each point
x$x$, a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the tolerance defined by the
Feasibility Tolerance; see
Section [Description of the Optional Parameters]). The working set is the current prediction of the constraints that hold with equality at the solution of a linearly constrained QP problem. The search direction is constructed so that the constraints in the working set remain
unaltered for any value of the step length. For a bound constraint in the working set, this property is achieved by setting the corresponding element of the search direction to zero. Thus, the associated variable is
fixed, and specification of the working set induces a partition of
x$x$ into
fixed and
free variables. During a given iteration, the fixed variables are effectively removed from the problem; since the relevant elements of the search direction are zero, the columns of
A$A$ corresponding to fixed variables may be ignored.
Let
m_{W}${m}_{\mathrm{W}}$ denote the number of general constraints in the working set and let
n_{FX}${n}_{\mathrm{FX}}$ denote the number of variables fixed at one of their bounds (
m_{W}${m}_{\mathrm{W}}$ and
n_{FX}${n}_{\mathrm{FX}}$ are the quantities
Lin and
Bnd in the monitoring file output from
nag_mip_iqp_dense (h02cb); see
Section [Description of Monitoring Information]). Similarly, let
n_{FR}${n}_{\mathrm{FR}}$ (
n_{FR} = n − n_{FX}${n}_{\mathrm{FR}}=n{n}_{\mathrm{FX}}$) denote the number of free variables. At every iteration,
the variables are reordered so that the last
n_{FX}${n}_{\mathrm{FX}}$ variables are fixed, with all other relevant vectors and matrices ordered accordingly.
Definition of the Search Direction
Let
A_{FR}${A}_{\mathrm{FR}}$ denote the
m_{W}${m}_{\mathrm{W}}$ by
n_{FR}${n}_{\mathrm{FR}}$ submatrix of general constraints in the working set corresponding to the free variables, and let
p_{FR}${p}_{\mathrm{FR}}$ denote the search direction with respect to the free variables only. The general constraints in the working set will be unaltered by any move along
p$p$ if
A_{FR}p_{FR} = 0${A}_{\mathrm{FR}}{p}_{\mathrm{FR}}=0$.
(2)
In order to compute
p_{FR}${p}_{\mathrm{FR}}$, the
TQ$TQ$ factorization of
A_{FR}${A}_{\mathrm{FR}}$ is used:
A_{FR}Q_{FR} = ( 0 T )${A}_{\mathrm{FR}}{Q}_{\mathrm{FR}}=\left(\begin{array}{cc}0& T\end{array}\right)$,
(3)
where
T$T$ is a nonsingular
m_{W}${m}_{\mathrm{W}}$ by
m_{W}${m}_{\mathrm{W}}$ upper triangular matrix (i.e.,
t_{ij} = 0${t}_{ij}=0$ if
i > j$i>j$), and the nonsingular
n_{FR}${n}_{\mathrm{FR}}$ by
n_{FR}${n}_{\mathrm{FR}}$ matrix
Q_{FR}${Q}_{\mathrm{FR}}$ is the product of orthogonal transformations (see
Gill et al. (1984)). If the columns of
Q_{FR}${Q}_{\mathrm{FR}}$ are partitioned so that
Q_{FR} = ( Z Y )${Q}_{\mathrm{FR}}=\left(\begin{array}{cc}Z& Y\end{array}\right)$,
where
Y$Y$ is
n_{FR}${n}_{\mathrm{FR}}$ by
m_{W}${m}_{\mathrm{W}}$, then the
n_{Z}${n}_{Z}$
(n_{Z} = n_{FR} − m_{W})
$({n}_{Z}={n}_{\mathrm{FR}}{m}_{\mathrm{W}})$ columns of
Z$Z$ form a basis for the null space of
A_{FR}${A}_{\mathrm{FR}}$. Let
n_{R}${n}_{R}$ be an integer such that
0 ≤ n_{R} ≤ n_{Z}$0\le {n}_{R}\le {n}_{Z}$, and let
Z_{R}${Z}_{R}$ denote a matrix whose
n_{R}${n}_{R}$ columns are a subset of the columns of
Z$Z$. (The integer
n_{R}${n}_{R}$ is the quantity
Zr in the monitoring output from
nag_mip_iqp_dense (h02cb). In many cases,
Z_{R}${Z}_{R}$ will include
all the columns of
Z$Z$.) The direction
p_{FR}${p}_{\mathrm{FR}}$ will satisfy
(2) if
p_{FR} = Z_{R}p_{R}${p}_{\mathrm{FR}}={Z}_{R}{p}_{R}$,
(4)
where
p_{R}${p}_{R}$ is any
n_{R}${n}_{R}$-vector.
Let
Q$Q$ denote the
n$n$ by
n$n$ matrix
Q = diag(Q_{FR},I_{FX})$Q=\left(\begin{array}{cc}{Q}_{\mathrm{FR}}& \\ & {I}_{\mathrm{FX}}\end{array}\right)$,
where
I_{FX}${I}_{\mathrm{FX}}$ is the identity matrix of order
n_{FX}${n}_{\mathrm{FX}}$. Let
H_{Q}${H}_{Q}$ and
g_{Q}${g}_{Q}$ denote the
n$n$ by
n$n$ transformed Hessian and
transformed gradient
H_{Q} = Q^{T}HQ${H}_{Q}={Q}^{\mathrm{T}}HQ$ and g_{Q} = Q^{T}(c + Hx)${g}_{Q}={Q}^{\mathrm{T}}(c+Hx)$,
and let the matrix of first
n_{R}${n}_{R}$ rows and columns of
H_{Q}${H}_{Q}$ be denoted by
H_{R}${H}_{R}$ and the vector of the first
n_{R}${n}_{R}$ elements of
g_{Q}${g}_{Q}$ be denoted by
g_{R}${g}_{R}$. The quantities
H_{R}${H}_{R}$ and
g_{R}${g}_{R}$ are known as the
reduced Hessian and
reduced gradient of
f(x)$f\left(x\right)$, respectively. Roughly speaking,
g_{R}${g}_{R}$ and
H_{R}${H}_{R}$ describe the first and second derivatives of an
unconstrained problem for the calculation of
p_{R}${p}_{R}$.
At each iteration, a triangular factorization of H_{R}${H}_{R}$ is available. If H_{R}${H}_{R}$ is positive definite, H_{R} = R^{T}R${H}_{R}={R}^{\mathrm{T}}R$, where R$R$ is the upper triangular Cholesky factor of H_{R}${H}_{R}$. If H_{R}${H}_{R}$ is not positive definite, H_{R} = R^{T}DR${H}_{R}={R}^{\mathrm{T}}DR$, where D = diag(1,1, … ,1,μ)$D=\mathrm{diag}(1,1,\dots ,1,\mu )$, with μ ≤ 0$\mu \le 0$.
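The positive-definiteness test implicit in the factorization H_R = RᵀR can be sketched in pure Python. This is an illustration only: h02cb maintains and updates R across iterations rather than refactorizing, and in the indefinite case it forms the RᵀDR factorization described above, whereas this sketch simply reports failure.

```python
# Illustrative sketch (not NAG code): attempt the Cholesky factorization
# H = R^T R with R upper triangular; return None if H is not positive
# definite (a nonpositive pivot is encountered).

def cholesky(H):
    n = len(H)
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # pivot: diagonal entry minus squared entries above it in column j
        s = H[j][j] - sum(R[k][j] ** 2 for k in range(j))
        if s <= 0.0:
            return None                      # H is not positive definite
        R[j][j] = s ** 0.5
        for i in range(j + 1, n):
            R[j][i] = (H[j][i]
                       - sum(R[k][j] * R[k][i] for k in range(j))) / R[j][j]
    return R
```

For example, [[4, 2], [2, 3]] factorizes with R = [[2, 1], [0, √2]], while [[1, 2], [2, 1]] (an indefinite matrix) fails at the second pivot.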
The computation is arranged so that the reduced-gradient vector is a multiple of
e_{R}${e}_{R}$, a vector of all zeros except in the last (i.e.,
n_{R}${n}_{R}$th) position. This allows the vector
p_{R}${p}_{R}$ in
(4) to be computed from a single back-substitution
Rp_{R} = − γe_{R}$R{p}_{R}=-\gamma {e}_{R}$,
(5)
where
γ$\gamma $ is a scalar that depends on whether or not the reduced Hessian is positive definite at
x$x$. In the positive definite case,
x + p$x+p$ is the minimizer of the objective function subject to the constraints (bounds and general) in the working set treated as equalities. If
H_{R}${H}_{R}$ is not positive definite,
p_{R}${p}_{R}$ satisfies the conditions
p_{R}^{T}H_{R}p_{R} < 0${p}_{R}^{\mathrm{T}}{H}_{R}{p}_{R}<0$ and g_{R}^{T}p_{R} ≤ 0${g}_{R}^{\mathrm{T}}{p}_{R}\le 0$,
which allow the objective function to be reduced by any positive step of the form
x + αp$x+\alpha p$.
The Main Iteration
If the reduced gradient is zero,
x$x$ is a constrained stationary point in the subspace defined by
Z$Z$. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero at nonvertices in the presence of constraint dependencies). During the optimality phase, a zero reduced gradient implies that
x$x$ minimizes the quadratic objective when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange-multipliers
λ_{C}${\lambda}_{C}$ and
λ_{B}${\lambda}_{B}$ for the general and bound constraints are defined from the equations
A_{FR}^{T}λ_{C} = g_{FR}${A}_{\mathrm{FR}}^{\mathrm{T}}{\lambda}_{C}={g}_{\mathrm{FR}}$ and λ_{B} = g_{FX} − A_{FX}^{T}λ_{C}${\lambda}_{B}={g}_{\mathrm{FX}}-{A}_{\mathrm{FX}}^{\mathrm{T}}{\lambda}_{C}$.
(6)
Given a positive constant
δ$\delta $ of the order of the
machine precision, a Lagrange-multiplier
λ_{j}${\lambda}_{j}$ corresponding to an inequality constraint in the working set is said to be
optimal if
λ_{j} ≤ δ${\lambda}_{j}\le \delta $ when the associated constraint is at its
upper bound, or if
λ_{j} ≥ − δ${\lambda}_{j}\ge \delta $ when the associated constraint is at its
lower bound. If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by deleting the corresponding constraint (with index
Jdel; see
Section [Description of Monitoring Information]) from the working set.
If optimal multipliers occur during the feasibility phase and the sum of infeasibilities is nonzero, there is no feasible point, and you can force
nag_mip_iqp_dense (h02cb) to continue until the minimum value of the sum of infeasibilities has been found; see the discussion of the
Minimum Sum of Infeasibilities in
Section [Description of the Optional Parameters]. At such a point, the Lagrange-multiplier
λ_{j}${\lambda}_{j}$ corresponding to an inequality constraint in the working set will be such that
− (1 + δ) ≤ λ_{j} ≤ δ$-(1+\delta )\le {\lambda}_{j}\le \delta $ when the associated constraint is at its
upper bound, and
− δ ≤ λ_{j} ≤ (1 + δ)$-\delta \le {\lambda}_{j}\le (1+\delta )$ when the associated constraint is at its
lower bound. Lagrange-multipliers for equality constraints will satisfy
|λ_{j}| ≤ 1 + δ$\left|{\lambda}_{j}\right|\le 1+\delta $.
If the reduced gradient is not zero, Lagrange-multipliers need not be computed and the nonzero elements of the search direction
p$p$ are given by
Z_{R}p_{R}${Z}_{R}{p}_{R}$ (see
(4) and
(5)). The choice of step length is influenced by the need to maintain feasibility with respect to the satisfied constraints. If
H_{R}${H}_{R}$ is positive definite and
x + p$x+p$ is feasible,
α$\alpha $ will be taken as unity. In this case, the reduced gradient at
x̄$\bar{x}$ will be zero, and Lagrange-multipliers are computed. Otherwise,
α$\alpha $ is set to
α_{M}${\alpha}_{\mathrm{M}}$, the step to the ‘nearest’ constraint (with index
Jadd; see
Section [Description of Monitoring Information]), which is added to the working set at the next iteration.
Each change in the working set leads to a simple change to A_{FR}${A}_{\mathrm{FR}}$: if the status of a general constraint changes, a row of A_{FR}${A}_{\mathrm{FR}}$ is altered; if a bound constraint enters or leaves the working set, a column of A_{FR}${A}_{\mathrm{FR}}$ changes. Explicit representations are recurred of the matrices T$T$, Q_{FR}${Q}_{\mathrm{FR}}$ and R$R$, and of the vectors Q^{T}g${Q}^{\mathrm{T}}g$ and Q^{T}c${Q}^{\mathrm{T}}c$. The triangular factor R$R$ associated with the reduced Hessian is only updated during the optimality phase.
One of the most important features of
nag_mip_iqp_dense (h02cb) is its control of the conditioning of the working set, whose nearness to linear dependence is estimated by the ratio of the largest to smallest diagonal elements of the
TQ$TQ$ factor
T$T$ (the printed value
Cond T; see
Section [Description of Monitoring Information]). In constructing the initial working set, constraints are excluded that would result in a large value of
Cond T.
nag_mip_iqp_dense (h02cb) includes a rigorous procedure that prevents the possibility of cycling at a point where the active constraints are nearly linearly dependent (see
Gill et al. (1989)). The main feature of the anticycling procedure is that the feasibility tolerance is increased slightly at the start of every iteration. This not only allows a positive step to be taken at every iteration, but also provides, whenever possible, a
choice of constraints to be added to the working set. Let
α_{M}${\alpha}_{\mathrm{M}}$ denote the maximum step at which
x + α_{M}p$x+{\alpha}_{\mathrm{M}}p$ does not violate any constraint by more than its feasibility tolerance. All constraints at a distance
α$\alpha $ (
α ≤ α_{M}$\alpha \le {\alpha}_{\mathrm{M}}$) along
p$p$ from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set.
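The step computation just described can be sketched as a ratio test. This is an illustration only, not NAG code: only lower-bound rows l ≤ aᵀx are shown (upper bounds are symmetric), and the working tolerance `tol` plays the role of the expanded feasibility tolerance.

```python
# Illustrative sketch (not NAG code) of alpha_M, the largest step along the
# search direction p before some lower-bound constraint l_i <= a_i^T x is
# violated by more than the working feasibility tolerance.

def max_step(x, p, a_rows, lower, tol):
    alpha_max = float('inf')
    blocking = None
    for i, (a, l) in enumerate(zip(a_rows, lower)):
        ax = sum(ai * xi for ai, xi in zip(a, x))   # constraint value a^T x
        ap = sum(ai * pi for ai, pi in zip(a, p))   # directional change a^T p
        if ap < 0.0:                        # moving towards this lower bound
            step = (l - tol - ax) / ap      # step at which it becomes blocking
            if step < alpha_max:
                alpha_max, blocking = step, i   # index reported as Jadd
    return alpha_max, blocking
```

Among the constraints reachable within this maximum step, h02cb then prefers the one whose normal makes the largest angle with the search direction, as described above.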
Choosing the Initial Working Set
At the start of the optimality phase, a positive definite H_{R}${H}_{R}$ can be defined if enough constraints are included in the initial working set. (The matrix with no rows and columns is positive definite by definition, corresponding to the case when A_{FR}${A}_{\mathrm{FR}}$ contains n_{FR}${n}_{\mathrm{FR}}$ constraints.) The idea is to include as many general constraints as necessary to ensure that the reduced Hessian is positive definite.
Let
H_{Z}${H}_{Z}$ denote the matrix of the first
n_{Z}${n}_{Z}$ rows and columns of the matrix
H_{Q} = Q^{T}HQ${H}_{Q}={Q}^{\mathrm{T}}HQ$ at the beginning of the optimality phase. A partial Cholesky factorization is used to find an upper triangular matrix
R$R$ that is the factor of the largest positive definite leading submatrix of
H_{Z}${H}_{Z}$. The use of interchanges during the factorization of
H_{Z}${H}_{Z}$ tends to maximize the dimension of
R$R$. (The condition of
R$R$ may be controlled using the
Rank Tolerance. Let
Z_{R}${Z}_{R}$ denote the columns of
Z$Z$ corresponding to
R$R$, and let
Z$Z$ be partitioned as
Z = ( Z_{R} Z_{A} )$Z=\left(\begin{array}{cc}{Z}_{R}& {Z}_{A}\end{array}\right)$. A working set for which
Z_{R}${Z}_{R}$ defines the null space can be obtained by including
the rows of
Z_{A}^{T}
${Z}_{A}^{\mathrm{T}}$ as ‘artificial constraints’. Minimization of the objective function then proceeds within the subspace defined by
Z_{R}${Z}_{R}$, as described in
Section [Definition of the Search Direction].
The artificially augmented working set is given by
Ā_{FR} = ( Z_{A}^{T} ; A_{FR} )${\bar{A}}_{\mathrm{FR}}=\left(\begin{array}{c}{Z}_{A}^{\mathrm{T}}\\ {A}_{\mathrm{FR}}\end{array}\right)$,
(7)
so that
p_{FR}${p}_{\mathrm{FR}}$ will satisfy
A_{FR}p_{FR} = 0${A}_{\mathrm{FR}}{p}_{\mathrm{FR}}=0$ and
Z_{A}^{T}
p_{FR} = 0${Z}_{A}^{\mathrm{T}}{p}_{\mathrm{FR}}=0$. By definition of the
TQ$TQ$ factorization,
Ā_{FR}${\bar{A}}_{\mathrm{FR}}$ automatically satisfies the following:
Ā_{FR}Q_{FR} = ( 0 T̄ )${\bar{A}}_{\mathrm{FR}}{Q}_{\mathrm{FR}}=\left(\begin{array}{cc}0& \bar{T}\end{array}\right)$,
where
T̄ = ( I 0 ; 0 T )$\bar{T}=\left(\begin{array}{cc}I& 0\\ 0& T\end{array}\right)$,
and hence the
TQ$TQ$ factorization of
(7) is available trivially from
T$T$ and
Q_{FR}${Q}_{\mathrm{FR}}$ without additional expense.
The matrix
Z_{A}${Z}_{A}$ is not kept fixed, since its role is purely to define an appropriate null space; the
TQ$TQ$ factorization can therefore be updated in the normal fashion as the iterations proceed. No work is required to ‘delete’ the artificial constraints associated with
Z_{A}${Z}_{A}$ when
Z_{R}^{T}
g_{FR} = 0${Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}=0$, since this simply involves repartitioning
Q_{FR}${Q}_{\mathrm{FR}}$. The ‘artificial’ multiplier vector associated with the rows of
Z_{A}^{T}
${Z}_{A}^{\mathrm{T}}$ is equal to
Z_{A}^{T}
g_{FR}${Z}_{A}^{\mathrm{T}}{g}_{\mathrm{FR}}$, and the multipliers corresponding to the rows of the ‘true’ working set are the multipliers that would be obtained if the artificial constraints were not present. If an artificial constraint is ‘deleted’ from the working set, an
A appears alongside the entry in the
Jdel column of the monitoring file output (see
Section [Description of Monitoring Information]).
The number of columns in
Z_{A}${Z}_{A}$ and
Z_{R}${Z}_{R}$, the Euclidean norm of
Z_{R}^{T}
g_{FR}${Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}$, and the condition estimator of
R$R$ appear in the monitoring file output as
Art,
Zr,
Norm Gz and
Cond Rz respectively (see
Section [Description of Monitoring Information]).
Under some circumstances, a different type of artificial constraint is used when solving a linear program. Although the algorithm of
nag_mip_iqp_dense (h02cb) does not usually perform simplex steps (in the traditional sense), there is one exception: a linear program with fewer general constraints than variables (i.e.,
m_{L} ≤ n${m}_{L}\le n$). (Use of the simplex method in this situation leads to savings in storage.) At the starting point, the ‘natural’ working set (the set of constraints exactly or nearly satisfied at the starting point) is augmented with a suitable number of ‘temporary’ bounds, each of which has the effect of temporarily fixing a variable at its current value. In subsequent iterations, a temporary bound is treated as a standard constraint until it is deleted from the working set, in which case it is never added again. If a temporary bound is ‘deleted’ from the working set, an
F (for ‘Fixed’) appears alongside the entry in the
Jdel column of the monitoring file output (see
Section [Description of Monitoring Information]).
Optional Parameters
Several optional parameters in nag_mip_iqp_dense (h02cb) define choices in the problem specification or the algorithm logic. In order to reduce the number of formal parameters of nag_mip_iqp_dense (h02cb) these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in
Section [Description of the Optional Parameters].
Optional parameters may be specified by calling
nag_mip_iqp_dense_optstr (h02cd) prior to a call to
nag_mip_iqp_dense (h02cb).
nag_mip_iqp_dense_optstr (h02cd) can be called to supply options directly, one call being necessary for each optional parameter. For example,
h02cd('Print Level = 5')
nag_mip_iqp_dense_optstr (h02cd) should be consulted for a full description of this method of supplying optional parameters.
All optional parameters not specified by you are set to their default values. Optional parameters specified by you are unaltered by nag_mip_iqp_dense (h02cb) (unless they define invalid values) and so remain in effect for subsequent calls unless altered by you.
Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
 the keywords, where the minimum abbreviation of each keyword is underlined (if no characters of an optional qualifier are underlined, the qualifier may be omitted);
 a parameter value,
where the letters a$a$, i$i$ and r$r$ denote options that take character, integer and real values respectively;
 the default value, where the symbol ε$\epsilon $ is a generic notation for machine precision (see nag_machine_precision (x02aj)).
Keywords and character values are case and white space insensitive.
Check Frequency i$i$  Default = 50$\text{}=50$
Every i$i$th iteration, a numerical test is made to see if the current solution x$x$ satisfies the constraints in the working set. If the largest residual of the constraints in the working set is judged to be too large, the current working set is refactorized and the variables are recomputed to satisfy the constraints more accurately. If i ≤ 0$i\le 0$, the default value is used.
Cold Start  Default
Warm Start
This option specifies how the initial working set is chosen. With a
Cold Start,
nag_mip_iqp_dense (h02cb) chooses the initial working set based on the values of the variables and constraints at the initial point. Broadly speaking, the initial working set will include equality constraints and bounds or inequality constraints that violate or ‘nearly’ satisfy their bounds (to within
Crash Tolerance).
With a
Warm Start, you must provide a valid definition of every element of the array
istate (see
Section [Parameters] for the definition of this array).
nag_mip_iqp_dense (h02cb) will override your specification of
istate if necessary, so that a poor choice of the working set will not cause a fatal error. For instance, any elements of
istate which are set to
− 2$-2$,
− 1$-1$ or 4$4$ will be reset to zero, as will any elements which are set to
3$3$ when the corresponding elements of
bl and
bu are not equal. A warm start will be advantageous if a good estimate of the initial working set is available – for example, when
nag_mip_iqp_dense (h02cb) is called repeatedly to solve related problems.
Crash Tolerance r$r$  Default = 0.01$\text{}=0.01$
This value is used in conjunction with the optional parameter
Cold Start (the default value) when
nag_mip_iqp_dense (h02cb) selects an initial working set. If
0 ≤ r ≤ 1$0\le r\le 1$, the initial working set will include (if possible) bounds or general inequality constraints that lie within
r$r$ of their bounds. In particular, a constraint of the form
a_{j}^{T}x ≥ l${a}_{j}^{\mathrm{T}}x\ge l$ will be included in the initial working set if
|a_{j}^{T}x − l| ≤ r(1 + |l|)$\left|{a}_{j}^{\mathrm{T}}x-l\right|\le r(1+\left|l\right|)$. If
r < 0$r<0$ or
r > 1$r>1$, the default value is used.
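The crash test above can be sketched directly. This is an illustration only, not NAG code; the helper name is hypothetical and only the lower-bound form of the test is shown.

```python
# Illustrative sketch (not NAG code) of the Cold Start crash test: a
# lower-bound constraint a_j^T x >= l joins the initial working set when
# |a_j^T x - l| <= r * (1 + |l|), where r is the Crash Tolerance.

def in_initial_working_set(a, x, l, r=0.01):
    ax = sum(ai * xi for ai, xi in zip(a, x))  # constraint value a_j^T x
    return abs(ax - l) <= r * (1.0 + abs(l))   # 'nearly satisfied' test
```

With the default r = 0.01, a constraint whose residual is within roughly one per cent of (1 + |l|) at the initial point is a candidate for the initial working set.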
Defaults This special keyword may be used to reset all optional parameters to their default values.
Expand Frequency i$i$  Default = 5$\text{}=5$
This option is part of an anticycling procedure designed to guarantee progress even on highly degenerate problems.
The strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of the optional parameter
Feasibility Tolerance is
δ$\delta $. Over a period of
i$i$ iterations, the feasibility tolerance actually used by
nag_mip_iqp_dense (h02cb) (i.e., the
working feasibility tolerance) increases from
0.5δ$0.5\delta $ to
δ$\delta $ (in steps of
0.5δ / i$0.5\delta /i$).
At certain stages the following ‘resetting procedure’ is used to remove constraint infeasibilities. First, all variables whose upper or lower bounds are in the working set are moved exactly onto their bounds. A count is kept of the number of nontrivial adjustments made. If the count is positive, iterative refinement is used to give variables that satisfy the working set to (essentially)
machine precision. Finally, the working feasibility tolerance is reinitialized to
0.5δ$0.5\delta $.
If a problem requires more than i$i$ iterations, the resetting procedure is invoked and a new cycle of i$i$ iterations is started with i$i$ incremented by 10$10$. (The decision to resume the feasibility phase or optimality phase is based on comparing any constraint infeasibilities with δ$\delta $.)
The resetting procedure is also invoked when nag_mip_iqp_dense (h02cb) reaches an apparently optimal, infeasible or unbounded solution, unless this situation has already occurred twice. If any nontrivial adjustments are made, iterations are continued.
If i ≤ 0$i\le 0$, the default value is used. If i ≥ 9999999$i\ge 9999999$, no anticycling procedure is invoked.
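The tolerance schedule described above can be sketched as follows. This is an illustration only, not NAG code; it shows only the linear growth of the working tolerance within one cycle, before the resetting procedure reinitializes it.

```python
# Illustrative sketch (not NAG code) of the working feasibility tolerance
# used by the anti-cycling procedure: over a cycle of i iterations it grows
# linearly from 0.5*delta to delta, where delta is the Feasibility Tolerance.

def working_tolerance(k, delta, i):
    """Tolerance in effect at iteration k (0 <= k <= i) of the cycle."""
    return 0.5 * delta + k * (0.5 * delta / i)
```

Because each iteration slightly enlarges the tolerance, a strictly positive step is always available, which is the key to the anti-cycling guarantee.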
Feasibility Phase Iteration Limit i_{1}${i}_{1}$  Default = max (50,5(n + m_{L}))$\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}(50,5(n+{m}_{L}))$
Optimality Phase Iteration Limit i_{2}${i}_{2}$  Default = max (50,5(n + m_{L}))$\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}(50,5(n+{m}_{L}))$
The scalars
i_{1}${i}_{1}$ and
i_{2}${i}_{2}$ specify the maximum number of iterations allowed in the feasibility and optimality phases.
Optimality Phase Iteration Limit is equivalent to
Iteration Limit. Setting
i_{1} = 0${i}_{1}=0$ and
Print Level > 0${\mathbf{Print\; Level}}>0$ means that the workspace needed will be computed and printed, but no iterations will be performed. If
i_{1} < 0${i}_{1}<0$ or
i_{2} < 0${i}_{2}<0$, the default value is used.
Feasibility Tolerance r$r$  Default = sqrt(ε)$\text{}=\sqrt{\epsilon}$
If r ≥ ε$r\ge \epsilon $, r$r$ defines the maximum acceptable absolute violation in each constraint at a ‘feasible’ point. For example, if the variables and the coefficients in the general constraints are of order unity, and the latter are correct to about 6$6$ decimal digits, it would be appropriate to specify r$r$ as 10^{ − 6}${10}^{-6}$. If 0 ≤ r < ε$0\le r<\epsilon $, the default value is used.
nag_mip_iqp_dense (h02cb) attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the
Minimum Sum of Infeasibilities can be used to find the minimum value of the sum. Let
Sinf be the corresponding sum of infeasibilities. If
Sinf is quite small, it may be appropriate to raise
r$r$ by a factor of
10$10$ or
100$100$. Otherwise, some error in the data should be suspected.
Note that a ‘feasible solution’ is a solution that satisfies the current constraints to within the tolerance r$r$.
Hessian Rows i$i$  Default = n$\text{}=n$
Note that this option does not apply to problems of type FP or LP.
This specifies m$m$, the number of rows of the Hessian matrix H$H$. The default value of m$m$ is n$n$, the number of variables of the problem.
If the problem is of type QP,
m$m$ will usually be
n$n$, the number of variables. However, a value of
m$m$ less than
n$n$ is appropriate for QP3 or QP4 if
h${\mathbf{h}}$ is an upper trapezoidal matrix with
m$m$ rows. Similarly,
m$m$ may be used to define the dimension of a leading block of nonzeros in the Hessian matrices of QP1 or QP2, in which case the last
n − m$nm$ rows and columns of
h${\mathbf{h}}$ are assumed to be zero. In the QP case,
m$m$ should not be greater than
n$n$; if it is, the last
m − n$mn$ rows of
h${\mathbf{h}}$ are ignored.
If i < 0$i<0$ or i > n$i>n$, the default value is used.
Infinite Bound Size r$r$  Default = 10^{20}$\text{}={10}^{20}$
If r > 0$r>0$, r$r$ defines the ‘infinite’ bound bigbnd$\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd$\mathit{bigbnd}$ will be regarded as + ∞$+\infty $ (and similarly any lower bound less than or equal to − bigbnd$-\mathit{bigbnd}$ will be regarded as − ∞$-\infty $). If r ≤ 0$r\le 0$, the default value is used.
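The interpretation of bounds relative to bigbnd can be sketched as follows. This is an illustration only, not NAG code; note that the example program above uses 1e25 for its infinite bounds, which exceeds the default bigbnd of 1e20.

```python
# Illustrative sketch (not NAG code) of how bounds are interpreted relative
# to bigbnd: entries at or beyond +/- bigbnd are treated as +/- infinity,
# i.e. the variable or constraint is unbounded on that side.

import math

def effective_bounds(bl, bu, bigbnd=1e20):
    lo = [-math.inf if l <= -bigbnd else l for l in bl]
    hi = [math.inf if u >= bigbnd else u for u in bu]
    return lo, hi
```

In the printed solution, an entry mapped to infinity here is the case reported as None in the Lower Bound or Upper Bound column.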
Infinite Step Size r$r$  Default = max (bigbnd,10^{20})$\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}(\mathit{bigbnd},{10}^{20})$
If r > 0$r>0$, r$r$ specifies the magnitude of the change in variables that will be considered a step to an unbounded solution. (Note that an unbounded solution can occur only when the Hessian is not positive definite.) If the change in x$x$ during an iteration would exceed the value of r$r$, the objective function is considered to be unbounded below in the feasible region. If r ≤ 0$r\le 0$, the default value is used.
Iteration Limit i$i$  Default = max (50,5(n + m_{L}))$\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}(50,5(n+{m}_{L}))$
Iters
Itns
List  Default
Nolist
Normally each optional parameter specification is printed as it is supplied. Optional parameter
Nolist may be used to suppress the printing and optional parameter
List may be used to restore printing.
Maximum Degrees of Freedom i$i$  Default = n$\text{}=n$
Note that this option does not apply to problems of type FP or LP.
This places a limit on the storage allocated for the triangular factor R$R$ of the reduced Hessian H_{R}${H}_{R}$. Ideally, i$i$ should be set slightly larger than the value of n_{R}${n}_{R}$ expected at the solution. It need not be larger than m_{n} + 1${m}_{{\mathbf{n}}}+1$, where m_{n}${m}_{{\mathbf{n}}}$ is the number of variables that appear nonlinearly in the quadratic objective function. For many problems it can be much smaller than m_{n}${m}_{{\mathbf{n}}}$.
For quadratic problems, a minimizer may lie on any number of constraints, so that
n_{R}${n}_{R}$ may vary between
1$1$ and
n$n$. The default value of
i$i$ is therefore the number of variables
n$n$. If
Hessian Rows m$m$ is specified, the default value of
i$i$ is the same number,
m$m$.
Minimum Sum of Infeasibilities  Default = NO$\text{}=\mathrm{NO}$
If no feasible point exists for the constraints, this option is used to control whether or not
nag_mip_iqp_dense (h02cb) will calculate a point that minimizes the constraint violations. If
Minimum Sum of Infeasibilities = NO${\mathbf{Minimum\; Sum\; of\; Infeasibilities}}=\mathrm{NO}$,
nag_mip_iqp_dense (h02cb) will terminate as soon as it is evident that no feasible point exists for the constraints. The final point will generally not be the point at which the sum of infeasibilities is minimized. If
Minimum Sum of Infeasibilities = YES${\mathbf{Minimum\; Sum\; of\; Infeasibilities}}=\mathrm{YES}$,
nag_mip_iqp_dense (h02cb) will continue until the sum of infeasibilities is minimized.
Monitoring File i$i$  Default = − 1$\text{}=-1$
If
i ≥ 0$i\ge 0$ and
Print Level ≥ 5${\mathbf{Print\; Level}}\ge 5$, monitoring information produced by
nag_mip_iqp_dense (h02cb) at every iteration is sent to a file with logical unit number
i$i$. If
i < 0$i<0$ and/or
Print Level < 5${\mathbf{Print\; Level}}<5$, no monitoring information is produced.
Optimality Tolerance r$r$ Default = ε^{0.8}$\text{}={\epsilon}^{0.8}$
If r ≥ ε$r\ge \epsilon $, r$r$ defines the tolerance used to determine if the bounds and general constraints have the right ‘sign’ for the solution to be judged to be optimal.
If 0 ≤ r < ε$0\le r<\epsilon $, the default value is used.
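The default value ε^{0.8}${\epsilon}^{0.8}$ is derived from the machine precision ε$\epsilon$. As a quick illustration of its magnitude (a Python sketch, not part of the toolbox):

```python
import sys

# Machine precision for IEEE double precision, about 2.22e-16.
eps = sys.float_info.epsilon

# The default Optimality Tolerance eps**0.8 is roughly 3e-13
# in double precision.
default_tol = eps ** 0.8
print(default_tol)
```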
Print Level i$i$ Default = 10$\text{}=10$
The value of
i$i$ controls the amount of printout produced by
nag_mip_iqp_dense (h02cb), as indicated below. A detailed description of the printed output is given in
Section [Description of the Printed Output] (summary output at each iteration and the final solution) and
Section [Description of Monitoring Information] (monitoring information at each iteration). If
i < 0$i<0$, the default value is used.
The following printout is sent to the current advisory message unit (as defined by
nag_file_set_unit_advisory (x04ab)):
i$i$ 
Output 
0$0$ 
No output. 
1$1$  The final solution only. 
5$5$  One line of summary output ( < 80$\text{}<80$ characters; see Section [Description of the Printed Output]) for each iteration (no printout of the final solution). 
≥ 10$\text{}\ge 10$  The final solution and one line of summary output for each iteration. 
The following printout is sent to the logical unit number defined by the
Monitoring File:
≥ i$\phantom{\ge}i$  Output 
< 5$\text{}<5$  No output. 
≥ 5$\text{}\ge 5$  One long line of output ( > 80$\text{}>80$ characters; see Section [Description of Monitoring Information]) for each iteration (no printout of the final solution). 
≥ 20$\text{}\ge 20$  At each iteration, the Lagrange multipliers, the variables x$x$, the constraint values Ax$Ax$ and the constraint status. 
≥ 30$\text{}\ge 30$  At each iteration, the diagonal elements of the upper triangular matrix T$T$ associated with the TQ$TQ$ factorization (3) (see Section [Definition of the Search Direction]) of the working set, and the diagonal elements of the upper triangular matrix R$R$. 
If
Print Level ≥ 5${\mathbf{Print\; Level}}\ge 5$ and the unit number defined by
Monitoring File is the same as that defined by
nag_file_set_unit_advisory (x04ab), then the summary output is suppressed.
Problem Type a$a$ Default = $=$ QP2
This option specifies the type of objective function to be minimized during the optimality phase. The following are the five optional keywords and the dimensions of the arrays that must be specified in order to define the objective function:
LP 
h not referenced, cvec(n)${\mathbf{cvec}}\left({\mathbf{n}}\right)$ required; 
QP1 
h(ldh, * )${\mathbf{h}}\left(\mathit{ldh},*\right)$ symmetric, cvec not referenced; 
QP2 
h(ldh, * )${\mathbf{h}}\left(\mathit{ldh},*\right)$ symmetric, cvec(n)${\mathbf{cvec}}\left({\mathbf{n}}\right)$ required; 
QP3 
h(ldh, * )${\mathbf{h}}\left(\mathit{ldh},*\right)$ upper trapezoidal, cvec not referenced; 
QP4 
h(ldh, * )${\mathbf{h}}\left(\mathit{ldh},*\right)$ upper trapezoidal, cvec(n)${\mathbf{cvec}}\left({\mathbf{n}}\right)$ required. 
For problems of type FP, the objective function is omitted and neither
h nor
cvec are referenced.
The following keywords are also acceptable. The minimum abbreviation of each keyword is underlined.
a$a$ 
Option 
Quadratic 
QP2 
Linear 
LP 
Feasible 
FP 
In addition, the keyword QP is equivalent to the default option QP2.
If
h = 0${\mathbf{h}}=0$, i.e., the objective function is purely linear, the efficiency of
nag_mip_iqp_dense (h02cb) may be increased by specifying
a$a$ as LP.
Rank Tolerance r$r$ Default = 100ε$\text{}=100\epsilon $
Note that this option does not apply to problems of type FP or LP.
This parameter enables you to control the condition number of the triangular factor
R$R$ (see
Section [Algorithmic Details]). If
ρ_{i}${\rho}_{i}$ denotes the function
ρ_{i} = max {|R_{11}|,|R_{22}|, … ,|R_{ii}|}${\rho}_{i}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\{|{R}_{11}|,|{R}_{22}|,\dots ,|{R}_{ii}|\}$, the dimension of
R$R$ is defined to be the smallest index
i$i$ such that
|R_{i + 1,i + 1}| ≤ sqrt(r)ρ_{i + 1}$|{R}_{i+1,i+1}|\le \sqrt{r}\,{\rho}_{i+1}$. If
r ≤ 0$r\le 0$, the default value is used.
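The rank rule above can be sketched outside the library: scan the diagonal of R$R$ in order and stop at the first element that is negligible relative to the largest diagonal seen so far. The function below is an illustrative reimplementation of that rule, not the routine's actual code:

```python
import math

def reduced_hessian_rank(r_diag, rank_tol):
    """Dimension of R per the Rank Tolerance rule: the smallest i such
    that |R[i+1,i+1]| <= sqrt(rank_tol) * rho_{i+1}, where rho_k is the
    largest |R[j,j]| over j = 1..k (1-based indices, as in the text)."""
    rho = 0.0
    for i, d in enumerate(r_diag):       # i is 0-based: diagonal i+1
        rho = max(rho, abs(d))           # rho_{i+1}
        if abs(d) <= math.sqrt(rank_tol) * rho:
            return i                     # the text's index i
    return len(r_diag)                   # no negligible diagonal: full rank
```

With the default tolerance 100ε, a diagonal must fall about seven orders of magnitude below the largest one before it is treated as negligible.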
Description of Monitoring Information
This section describes the long line of output (
> 80$\text{}>80$ characters) which forms part of the monitoring information produced by
nag_mip_iqp_dense (h02cb). (See also the description of the optional parameters
Monitoring File and
Print Level in
Section [Description of the Optional Parameters].) You can control the level of printed output.
To aid interpretation of the printed results, the following convention is used for numbering the constraints: indices 1$1$ through n$n$ refer to the bounds on the variables, and indices n + 1$n+1$ through n + m_{L}$n+{m}_{L}$ refer to the general constraints. When the status of a constraint changes, the index of the constraint is printed, along with the designation L (lower bound), U (upper bound), E (equality), F (temporarily fixed variable) or A (artificial constraint).
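This numbering convention can be captured in a few lines. The helper below (illustrative only, not part of the toolbox) maps a printed 1-based index back to the constraint it refers to:

```python
def classify_constraint(j, n, m_l):
    """Map a printed constraint index to its meaning: indices 1..n are
    bounds on the variables, n+1..n+m_L are the general constraints."""
    if 1 <= j <= n:
        return ('bound', j)           # bound on variable j
    if n < j <= n + m_l:
        return ('general', j - n)     # general constraint number j - n
    raise ValueError('constraint index out of range')
```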
When
Print Level ≥ 5${\mathbf{Print\; Level}}\ge 5$ and
Monitoring File ≥ 0${\mathbf{Monitoring\; File}}\ge 0$, the following line of output is produced at every iteration on the unit number specified by
Monitoring File. In all cases, the values of the quantities printed are those in effect
on
completion of the given iteration.
Itn 
is the iteration count.

Jdel 
is the index of the constraint deleted from the working set. If Jdel is zero, no constraint was deleted.

Jadd 
is the index of the constraint added to the working set. If Jadd is zero, no constraint was added.

Step 
is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.

Ninf 
is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.

Sinf/Objective 
is the value of the current objective function. If x$x$ is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x$x$ is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.

Bnd 
is the number of simple bound constraints in the current working set.

Lin 
is the number of general linear constraints in the current working set.

Art 
is the number of artificial constraints in the working set, i.e., the number of columns of Z_{A}${Z}_{A}$ (see Section [Choosing the Initial Working Set]).

Zr 
is the number of columns of Z_{R}${Z}_{R}$ (see Section [Definition of the Search Direction]). Zr is the dimension of the subspace in which the objective function is currently being minimized. The value of Zr is the number of variables minus the number of constraints in the working set; i.e., Zr = n − (Bnd + Lin + Art)$\mathtt{Zr}=n-(\mathtt{Bnd}+\mathtt{Lin}+\mathtt{Art})$. The value of n_{Z}${n}_{Z}$, the number of columns of Z$Z$ (see Section [Definition of the Search Direction]) can be calculated as n_{Z} = n − (Bnd + Lin)${n}_{Z}=n-(\mathtt{Bnd}+\mathtt{Lin})$. A zero value of n_{Z}${n}_{Z}$ implies that x$x$ lies at a vertex of the feasible region.

Norm Gz 
is
‖Z_{R}^{T}g_{FR}‖
$\Vert {Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}\Vert $, the Euclidean norm of the reduced gradient with respect to Z_{R}${Z}_{R}$ (see Sections [Definition of the Search Direction] and [Choosing the Initial Working Set]). During the optimality phase, this norm will be approximately zero after a unit step.

NOpt 
is the number of nonoptimal Lagrange multipliers at the current point. NOpt is not printed if the current x$x$ is infeasible or no multipliers have been calculated. At a minimizer, NOpt will be zero.

Min Lm 
is the value of the Lagrange multiplier associated with the deleted constraint. If Min Lm is negative, a lower bound constraint has been deleted; if Min Lm is positive, an upper bound constraint has been deleted. If no multipliers are calculated during a given iteration, Min Lm will be zero.

Cond T 
is a lower bound on the condition number of the working set.

Cond Rz 
is a lower bound on the condition number of the triangular factor R$R$ (the Cholesky factor of the current reduced Hessian; see Section [Definition of the Search Direction]). If the problem is specified to be of type LP, Cond Rz is not printed.

Rzz 
is the last diagonal element μ$\mu $ of the matrix D$D$ associated with the R^{T}DR${R}^{\mathrm{T}}DR$ factorization of the reduced Hessian H_{R}${H}_{R}$ (see Section [Definition of the Search Direction]). Rzz is only printed if H_{R}${H}_{R}$ is not positive definite (in which case μ ≠ 1$\mu \ne 1$). If the printed value of Rzz is small in absolute value, then H_{R}${H}_{R}$ is approximately singular. A negative value of Rzz implies that the objective function has negative curvature on the current working set.
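The accounting printed under Bnd, Lin, Art and Zr can be checked by hand. A small sketch (assumed names, not a toolbox function) of the relations Zr = n − (Bnd + Lin + Art) and n_{Z} = n − (Bnd + Lin):

```python
def working_set_counts(n, bnd, lin, art):
    """Relations among the printed working-set counts: Zr is the
    dimension of the current minimization subspace, and n_Z == 0
    means x lies at a vertex of the feasible region."""
    zr = n - (bnd + lin + art)
    n_z = n - (bnd + lin)
    return zr, n_z, n_z == 0
```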

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013