naginterfaces.library.opt.lsq_uncon_mod_deriv2_comp

naginterfaces.library.opt.lsq_uncon_mod_deriv2_comp(m, lsqfun, lsqhes, xtol, x, lsqmon=None, iprint=1, maxcal=None, eta=None, stepmx=100000.0, data=None, spiked_sorder='C')[source]

lsq_uncon_mod_deriv2_comp is a comprehensive modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First and second derivatives are required.

The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

For full information please refer to the NAG Library document for e04he

https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e04/e04hef.html

Parameters
m : int

The number of residuals, m, and the number of variables, n.

lsqfun : callable (iflag, fvec, fjac) = lsqfun(iflag, m, xc, data=None)

lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting an argument to cause lsq_uncon_mod_deriv2_comp to terminate immediately.)

Parameters
iflag : int

To lsqfun, iflag will be set to 2.

m : int

m, the number of residuals.

xc : float, ndarray, shape (n,)

The point x at which the values of the f_i and the ∂f_i/∂x_j are required.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
iflag : int

If it is not possible to evaluate the f_i(x) or their first derivatives at the point given in xc (or if it is wished to stop the calculations for any other reason), you should reset iflag to some negative number and return control to lsq_uncon_mod_deriv2_comp. lsq_uncon_mod_deriv2_comp will then terminate immediately, with errno set to your setting of iflag.

fvec : float, array-like, shape (m,)

Unless iflag is reset to a negative number, fvec[i-1] must contain the value of f_i at the point x, for i = 1, 2, …, m.

fjac : float, array-like, shape (m, n)

Unless iflag is reset to a negative number, fjac[i-1,j-1] must contain the value of ∂f_i/∂x_j at the point x, for i = 1, 2, …, m, for j = 1, 2, …, n.
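For illustration, here is a lsqfun with the documented signature for a hypothetical exponential model f_i(x) = x_1·exp(x_2·t_i) − y_i; the model and the data t_i, y_i are invented for this sketch and are not taken from the NAG example program:

```python
import math

# Invented data for the hypothetical model f_i(x) = x[0]*exp(x[1]*t_i) - y_i.
T = [0.0, 1.0, 2.0]
Y = [1.0, 2.0, 5.0]

def lsqfun(iflag, m, xc, data=None):
    """Return the residuals f_i(xc) and the m-by-n Jacobian fjac[i][j] = df_i/dx_j."""
    fvec = [xc[0]*math.exp(xc[1]*T[i]) - Y[i] for i in range(m)]
    fjac = [[math.exp(xc[1]*T[i]),                # df_i/dx_1
             xc[0]*T[i]*math.exp(xc[1]*T[i])]     # df_i/dx_2
            for i in range(m)]
    # To abort the minimization, reset iflag to a negative number before returning.
    return iflag, fvec, fjac
```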

lsqhes : callable (iflag, b) = lsqhes(iflag, fvec, xc, lb, data=None)

lsqhes must calculate the elements of the symmetric matrix

B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),

at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, there is the option of causing lsq_uncon_mod_deriv2_comp to terminate immediately.)

Parameters
iflag : int

Is set to a non-negative number.

fvec : float, ndarray, shape (m,)

The value of the residual f_i at the point x, for i = 1, 2, …, m, so that the values of the f_i can be used in the calculation of the elements of b.

xc : float, ndarray, shape (n,)

The point x at which the elements of B are to be evaluated.

lb : int

The length of the array b.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
iflag : int

If lsqhes resets iflag to some negative number, lsq_uncon_mod_deriv2_comp will terminate immediately, with errno set to your setting of iflag.

b : float, array-like, shape (lb,)

Unless iflag is reset to a negative number, b must contain the lower triangle of the matrix B(x), evaluated at the point x, stored by rows. (The upper triangle is not required because the matrix is symmetric.) More precisely, b[j(j-1)/2 + k - 1] must contain B_jk evaluated at the point x, for j = 1, 2, …, n, for k = 1, 2, …, j.
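For illustration, a lsqhes with the documented signature for a hypothetical model f_i(x) = x_1·exp(x_2·t_i) − y_i (invented for this sketch), showing how the lower triangle of B(x) = Σ f_i(x) G_i(x) is packed by rows into b:

```python
import math

T = [0.0, 1.0, 2.0]  # invented abscissae of the hypothetical model

def lsqhes(iflag, fvec, xc, lb, data=None):
    """Return b, the lower triangle of B = sum_i f_i*G_i stored by rows,
    where G_i is the Hessian of f_i(x) = x[0]*exp(x[1]*T[i]) - y_i."""
    n = len(xc)
    assert lb == n*(n + 1)//2
    # Second derivatives of f_i:  d2f/dx1dx1 = 0,
    # d2f/dx2dx1 = t*exp(x2*t),  d2f/dx2dx2 = x1*t**2*exp(x2*t).
    B = [[0.0]*n for _ in range(n)]
    for i, fi in enumerate(fvec):
        e = math.exp(xc[1]*T[i])
        B[1][0] += fi*T[i]*e
        B[1][1] += fi*xc[0]*T[i]**2*e
    # Pack the lower triangle by rows: b = [B11, B21, B22] for n = 2.
    b = [B[j][k] for j in range(n) for k in range(j + 1)]
    return iflag, b
```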

xtol : float

The accuracy in x to which the solution is required.

If x_true is the true value of x at the minimum, then x_sol, the estimated position before a normal exit, is such that

‖x_sol − x_true‖ < xtol × (1.0 + ‖x_true‖),

where ‖y‖ = sqrt(Σ_{j=1}^{n} y_j²).

For example, if the elements of x_sol are not much larger than 1.0 in modulus and if xtol = 1.0e−5, then x_sol is usually accurate to about five decimal places. (For further details see Accuracy.)

If F(x) and the variables are scaled roughly as described in Further Comments and ε is the machine precision, then a setting of order xtol = sqrt(ε) will usually be appropriate.

If xtol is set to 0.0 or some positive value less than 10ε, lsq_uncon_mod_deriv2_comp will use 10ε instead of xtol, since 10ε is probably the smallest reasonable setting.
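The floor on xtol described above can be sketched as follows; effective_xtol is a hypothetical helper for illustration, not part of the naginterfaces API:

```python
import sys

def effective_xtol(xtol):
    """The tolerance actually used: xtol, floored at 10*eps,
    where eps is the machine precision."""
    eps = sys.float_info.epsilon
    return max(xtol, 10.0*eps)
```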

x : float, array-like, shape (n,)

x[j-1] must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, …, n.

lsqmon : None or callable lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None), optional

Note: if this argument is None then a NAG-supplied facility will be used.

If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its arguments.

Parameters
xc : float, ndarray, shape (n,)

The coordinates of the current point x.

fvec : float, ndarray, shape (m,)

The values of the residuals f_i at the current point x.

fjac : float, ndarray, shape (m, n)

fjac[i-1,j-1] contains the value of ∂f_i/∂x_j at the current point x, for i = 1, 2, …, m, for j = 1, 2, …, n.

s : float, ndarray, shape (n,)

The singular values of the current Jacobian matrix. Thus s may be useful as information about the structure of your problem. (If iprint > 0, lsqmon is called at the initial point before the singular values have been calculated, so the elements of s are set to zero for the first call of lsqmon.)

igrade : int

lsq_uncon_mod_deriv2_comp estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and igrade gives its current value.

niter : int

The number of iterations which have been performed in lsq_uncon_mod_deriv2_comp.

nf : int

The number of times that lsqfun has been called so far. Thus nf gives the number of evaluations of the residuals and the Jacobian matrix.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.
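A minimal monitoring callback with the documented signature might simply print a progress line; the formatting here is purely illustrative:

```python
def lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None):
    """Print one progress line per call; must not modify any argument."""
    fsumsq = sum(fi*fi for fi in fvec)  # current sum of squares F(x)
    print(f"iter {niter:3d}  nf {nf:3d}  grade {igrade}  F = {fsumsq:.6e}")
```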

iprint : int, optional

Specifies the frequency with which lsqmon is to be called.

If iprint > 0, lsqmon is called once every iprint iterations and just before exit from lsq_uncon_mod_deriv2_comp.

If iprint = 0, lsqmon is just called at the final point.

If iprint < 0, lsqmon is not called at all.

iprint should normally be set to a small positive number.

maxcal : None or int, optional

Note: if this argument is None then a default value will be used, determined as follows: 50 × n.

This argument is present so as to enable you to limit the number of times that lsqfun is called by lsq_uncon_mod_deriv2_comp. There will be an error exit (see Exceptions) after maxcal calls of lsqfun.

eta : None or float, optional

Note: if this argument is None then a default value will be used, determined as follows: if n = 1: 0.0; otherwise: 0.5.

Every iteration of lsq_uncon_mod_deriv2_comp involves a linear minimization (i.e., minimization of F(x^(k) + α^(k) p^(k)) with respect to α^(k)). eta must lie in the range 0.0 ≤ eta < 1.0, and specifies how accurately these linear minimizations are to be performed. The minimum with respect to α^(k) will be located more accurately for small values of eta (say, 0.01) than for large values (say, 0.9).

Although accurate linear minimizations will generally reduce the number of iterations performed by lsq_uncon_mod_deriv2_comp, they will increase the number of calls of lsqfun made each iteration.

On balance it is usually more efficient to perform a low-accuracy minimization.

stepmx : float, optional

An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.)

lsq_uncon_mod_deriv2_comp will ensure that, for each iteration k,

Σ_{j=1}^{n} [x_j^(k) − x_j^(k−1)]² ≤ stepmx²,

where k is the iteration number.

Thus, if the problem has more than one solution, lsq_uncon_mod_deriv2_comp is most likely to find the one nearest to the starting point.

On difficult problems, a realistic choice can prevent the sequence of x^(k) entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of F(x).

However, an underestimate of stepmx can lead to inefficiency.

data : arbitrary, optional

User-communication data for callback functions.

spiked_sorder : str, optional

If fjac in lsqfun is spiked (i.e., has unit extent in all but one dimension, or has size 1), spiked_sorder selects the storage order to associate with it in the NAG Engine:

spiked_sorder = 'C'

row-major storage will be used;

spiked_sorder = 'F'

column-major storage will be used.

Returns
x : float, ndarray, shape (n,)

The final point x^(k). Thus, if no exception or warning is raised on exit, x[j-1] is the jth component of the estimated position of the minimum, for j = 1, 2, …, n.

fsumsq : float

The value of F(x), the sum of squares of the residuals f_i(x), at the final point given in x.

fvec : float, ndarray, shape (m,)

The value of the residual f_i(x) at the final point given in x, for i = 1, 2, …, m.

fjac : float, ndarray, shape (m, n)

The value of the first derivative ∂f_i/∂x_j evaluated at the final point given in x, for i = 1, 2, …, m, for j = 1, 2, …, n.

s : float, ndarray, shape (n,)

The singular values of the Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.

v : float, ndarray, shape (n, n)

The matrix V associated with the singular value decomposition

J = U S Vᵀ

of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of JᵀJ.

niter : int

The number of iterations which have been performed in lsq_uncon_mod_deriv2_comp.

nf : int

The number of times that the residuals and Jacobian matrix have been evaluated (i.e., the number of calls of lsqfun).
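The properties claimed for s and v can be checked with NumPy's SVD; the Jacobian values here are arbitrary and for illustration only:

```python
import numpy as np

# An arbitrary m-by-n "Jacobian" (m = 3, n = 2) for illustration.
J = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
U, s, Vt = np.linalg.svd(J, full_matrices=False)  # J = U @ diag(s) @ Vt
V = Vt.T  # columns of V are the right singular vectors

# V is the matrix of orthonormalized eigenvectors of J^T J,
# with eigenvalues s**2 (the squared singular values).
assert np.allclose(V.T @ V, np.eye(2))
assert np.allclose(J.T @ J @ V, V*s**2)
```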

Raises
NagValueError
(errno 1)

On entry, m = ⟨value⟩ and n = ⟨value⟩.

Constraint: m ≥ n.

(errno 1)

On entry, n = ⟨value⟩.

Constraint: n ≥ 1.

(errno 1)

On entry, xtol = ⟨value⟩.

Constraint: xtol ≥ 0.0.

(errno 1)

On entry, maxcal = ⟨value⟩.

Constraint: maxcal ≥ 1.

(errno 1)

On entry, stepmx = ⟨value⟩ and xtol = ⟨value⟩.

Constraint: stepmx ≥ xtol.

(errno 1)

On entry, eta = ⟨value⟩.

Constraint: 0.0 ≤ eta < 1.0.

(errno 2)

There have been maxcal calls to lsqfun.

(errno 4)

Failure in computing SVD of Jacobian matrix.

Warns
NagAlgorithmicWarning
(errno < 0)

User requested termination by setting iflag negative in lsqfun or lsqhes.

(errno 3)

The conditions for a minimum have not all been satisfied, but a lower point could not be found.

Notes

No equivalent traditional C interface for this routine exists in the NAG Library.

lsq_uncon_mod_deriv2_comp is essentially identical to the function LSQSDN in the NPL Algorithms Library. It is applicable to problems of the form:

Minimize F(x) = Σ_{i=1}^{m} [f_i(x)]²

where x = (x_1, x_2, …, x_n)ᵀ and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)

You must supply functions to calculate the values of the f_i(x) and their first and second derivatives at any point x.

From a starting point x^(1) supplied by you, the function generates a sequence of points x^(2), x^(3), …, which is intended to converge to a local minimum of F(x). The sequence of points is given by

x^(k+1) = x^(k) + α^(k) p^(k)

where the vector p^(k) is a direction of search, and α^(k) is chosen such that F(x^(k) + α^(k) p^(k)) is approximately a minimum with respect to α^(k).

The vector p^(k) used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then p^(k) is the Gauss–Newton direction; otherwise the second derivatives of the f_i(x) are taken into account.

The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton’s method.
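The Gauss–Newton direction mentioned above solves the normal equations (JᵀJ)p = −Jᵀf. A minimal sketch follows (not the NAG algorithm itself, which adds the line search controlled by eta and the second-derivative correction):

```python
import numpy as np

def gauss_newton_step(x, fvec, fjac):
    """One step x + p with the Gauss-Newton direction p solving (J^T J) p = -J^T f.
    Here alpha = 1; the NAG function instead chooses alpha by a line search."""
    J = np.asarray(fjac)
    f = np.asarray(fvec)
    p = np.linalg.solve(J.T @ J, -J.T @ f)
    return x + p

# Illustrative one-variable problem: residuals f_i(x) = x - y_i,
# whose sum of squares is minimized at the mean of the y_i.
y = np.array([1.0, 2.0, 3.0])
x = np.array([0.0])
x = gauss_newton_step(x, x[0] - y, np.ones((3, 1)))
```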

References

Gill, P E and Murray, W, 1978, Algorithms for the solution of the nonlinear least squares problem, SIAM J. Numer. Anal. (15), 977–992