NAG Toolbox: nag_opt_lsq_uncon_mod_deriv2_easy (e04hy)

 Contents

    1  Purpose
    2  Syntax
    3  Description
    4  References
    5  Parameters
    6  Error Indicators and Warnings
    7  Accuracy
    8  Further Comments
    9  Example

Purpose

nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) is an easy-to-use modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First and second derivatives are required.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

Syntax

[x, fsumsq, user, ifail] = e04hy(m, lsfun2, lshes2, x, 'n', n, 'user', user)
[x, fsumsq, user, ifail] = nag_opt_lsq_uncon_mod_deriv2_easy(m, lsfun2, lshes2, x, 'n', n, 'user', user)

Description

nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) is similar to the function LSSDN2 in the NPL Algorithms Library. It is applicable to problems of the form:
Minimize F(x) = Σ_{i=1}^{m} [f_i(x)]^2
where x = (x_1, x_2, ..., x_n)^T and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply a function to evaluate the residuals and their first derivatives at any point x, and a function to evaluate the elements of the second derivative term of the Hessian matrix of F(x).
Before attempting to minimize the sum of squares, the algorithm checks the user-supplied functions for consistency. Then, from a starting point supplied by you, a sequence of points is generated which is intended to converge to a local minimum of the sum of squares. These points are generated using estimates of the curvature of F(x).

References

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992

Parameters

Compulsory Input Parameters

1:     m – int64/int32/nag_int scalar
The number m of residuals, f_i(x), and the number n of variables, x_j.
Constraint: 1 ≤ n ≤ m.
2:     lsfun2 – function handle or string containing name of m-file
You must supply this function to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. It should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) (see the E04 Chapter Introduction); a minimal skeleton is sketched after the compulsory parameter descriptions below.
[fvec, fjac, user] = lsfun2(m, n, xc, ldfjac, user)

Input Parameters

1:     m – int64/int32/nag_int scalar
m, the number of residuals.
2:     n – int64/int32/nag_int scalar
n, the number of variables.
3:     xc(n) – double array
The point x at which the values of the f_i and the ∂f_i/∂x_j are required.
4:     ldfjac – int64/int32/nag_int scalar
The first dimension of the array fjac.
5:     user – Any MATLAB object
lsfun2 is called from nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) with the object supplied to nag_opt_lsq_uncon_mod_deriv2_easy (e04hy).

Output Parameters

1:     fvec(m) – double array
fvec(i) must be set to the value of f_i at the point x, for i = 1, 2, ..., m.
2:     fjac(ldfjac,n) – double array
fjac(i,j) must be set to the value of ∂f_i/∂x_j at the point x, for i = 1, 2, ..., m and j = 1, 2, ..., n.
3:     user – Any MATLAB object
3:     lshes2 – function handle or string containing name of m-file
You must supply this function to calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). It should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) (see the E04 Chapter Introduction).
[b, user] = lshes2(m, n, fvec, xc, lb, user)

Input Parameters

1:     m – int64/int32/nag_int scalar
m, the number of residuals.
2:     n – int64/int32/nag_int scalar
n, the number of variables.
3:     fvec(m) – double array
The value of the residual f_i at the point x, for i = 1, 2, ..., m, so that the values of the f_i can be used in the calculation of the elements of b.
4:     xc(n) – double array
The point x at which the elements of b are to be evaluated.
5:     lb – int64/int32/nag_int scalar
The length of the array b.
6:     user – Any MATLAB object
lshes2 is called from nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) with the object supplied to nag_opt_lsq_uncon_mod_deriv2_easy (e04hy).

Output Parameters

1:     b(lb) – double array
Must contain the lower triangle of the matrix B(x), evaluated at the point x, stored by rows. (The upper triangle is not required because the matrix is symmetric.) More precisely, b(j×(j-1)/2+k) must contain Σ_{i=1}^{m} f_i ∂²f_i/(∂x_j ∂x_k), evaluated at the point x, for j = 1, 2, ..., n and k = 1, 2, ..., j. For example, with n = 3 the packing is b(1) = B_11, b(2) = B_21, b(3) = B_22, b(4) = B_31, b(5) = B_32, b(6) = B_33.
2:     user – Any MATLAB object
4:     x(n) – double array
x(j) must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, ..., n. The function checks lsfun2 and lshes2 at the starting point and so is more likely to detect any error in your functions if the initial x(j) are nonzero and mutually distinct.
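As a rough sketch (not part of the formal specification), the two user-supplied functions for a hypothetical model f_i(x) = exp(x_1 t_i) + x_2 - y_i in n = 2 variables might look as follows; the function names, the data and the use of the optional user object to carry the data are all invented for illustration.

function [fvec, fjac, user] = lsfun2_sketch(m, n, xc, ldfjac, user)
  % Hypothetical residuals f_i(x) = exp(x_1*t_i) + x_2 - y_i and their Jacobian.
  t = user.t;  y = user.y;             % data vectors of length m
  e = exp(xc(1)*t);
  fvec = e + xc(2) - y;                % f_i(x)
  fjac = zeros(ldfjac, n);
  fjac(1:m, 1) = t.*e;                 % df_i/dx_1
  fjac(1:m, 2) = 1;                    % df_i/dx_2

function [b, user] = lshes2_sketch(m, n, fvec, xc, lb, user)
  % Lower triangle of B(x) = sum_i f_i(x)*G_i(x), stored by rows:
  % for n = 2, b(1) = B_11, b(2) = B_21, b(3) = B_22.
  t = user.t;
  e = exp(xc(1)*t);
  b = zeros(lb, 1);
  b(1) = sum(fvec.*(t.^2).*e);         % B_11; B_21 and B_22 vanish for this model

A corresponding call, with made-up data, might be

data.t = (1:5)'/5;                     % hypothetical data
data.y = exp(0.7*data.t) + 0.3;
x0     = [0.5; 1.0];                   % nonzero, mutually distinct starting guess
[x, fsumsq, data, ifail] = e04hy(int64(numel(data.t)), ...
                                 @lsfun2_sketch, @lshes2_sketch, x0, 'user', data);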

Optional Input Parameters

1:     n – int64/int32/nag_int scalar
Default: the dimension of the array x.
The number m of residuals, f_i(x), and the number n of variables, x_j.
Constraint: 1 ≤ n ≤ m.
2:     user – Any MATLAB object
user is not used by nag_opt_lsq_uncon_mod_deriv2_easy (e04hy), but is passed to lsfun2 and lshes2. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.

Output Parameters

1:     x(n) – double array
The lowest point found during the calculations. Thus, if ifail = 0 on exit, x(j) is the jth component of the position of the minimum.
2:     fsumsq – double scalar
The value of the sum of squares, F(x), corresponding to the final point stored in x.
3:     user – Any MATLAB object
4:     ifail – int64/int32/nag_int scalar
ifail=0 unless the function detects an error (see Error Indicators and Warnings).

Error Indicators and Warnings

Note: nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

   ifail=1
On entry, n < 1,
or m < n,
or lw < 8×n + 2×n×n + 2×m×n + 3×m, when n > 1,
or lw < 11 + 5×m, when n = 1.
   ifail=2
There have been 50×n calls of lsfun2, yet the algorithm does not seem to have converged. This may be due to an awkward function or to a poor starting point, so it is worth restarting nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) from the final point held in x.
W  ifail=3
The final point does not satisfy the conditions for acceptance as a minimum, but no lower point could be found.
   ifail=4
An auxiliary function has been unable to complete a singular value decomposition in a reasonable number of sub-iterations.
W  ifail=5
W  ifail=6
W  ifail=7
W  ifail=8
There is some doubt about whether the point x found by nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) is a minimum of F(x). The degree of confidence in the result decreases as ifail increases. Thus, when ifail=5, it is probable that the final x gives a good estimate of the position of a minimum, but when ifail=8 it is very unlikely that the function has found a minimum.
   ifail=9
It is very likely that you have made an error in forming the derivatives ∂f_i/∂x_j in lsfun2.
   ifail=10
It is very likely that you have made an error in forming the quantities B_jk in lshes2.
   ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
   ifail=-399
Your licence key may have expired or may not have been installed correctly.
   ifail=-999
Dynamic memory allocation failed.
If you are not satisfied with the result (e.g., because ifail lies between 3 and 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. Repeated failure may indicate some defect in the formulation of the problem.
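If warnings have been enabled via nag_issue_warnings (see above), one possible pattern for handling these soft failures is sketched below; the retry logic and the perturbation are illustrative only, and lsfun2, lshes2, m and x0 are assumed to have been set up beforehand (for instance as in the Example section).

nag_issue_warnings(true);              % report the W cases as warnings rather than errors
[x, fsumsq, user, ifail] = e04hy(m, @lsfun2, @lshes2, x0);
if ifail >= 3 && ifail <= 8
  % restart from a different point (not the point at which the failure occurred)
  x0new = x0 + 0.1*randn(size(x0));
  [x, fsumsq, user, ifail] = e04hy(m, @lsfun2, @lshes2, x0new);
end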

Accuracy

If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of t decimals, one would expect to get about t/2 - 1 decimals accuracy in the components of x and between t - 1 (if F(x) is of order 1 at the minimum) and 2t - 2 (if F(x) is close to zero at the minimum) decimals accuracy in F(x). For example, in standard double precision (t ≈ 16) this suggests roughly 7 correct decimals in the components of x.

Further Comments

The number of iterations required depends on the number of variables, the number of residuals and their behaviour, and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) varies, but for m ≫ n is approximately n×m^2 + O(n^3). In addition, each iteration makes at least one call of lsfun2 and some iterations may call lshes2. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in lsfun2 (and, to a lesser extent, in lshes2).
Ideally, the problem should be scaled so that the minimum value of the sum of squares is in the range (0, +1) and so that at points a unit distance away from the solution the sum of squares is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_deriv2_easy (e04hy) will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to nag_opt_lsq_uncon_covariance (e04yc), using information returned in segments of the workspace array w. See nag_opt_lsq_uncon_covariance (e04yc) for further details.

Example

This example finds least squares estimates of x1, x2 and x3 in the model
y = x_1 + t_1/(x_2 t_2 + x_3 t_3)
using the 15 sets of data given in the following table.
  y      t_1    t_2    t_3
 0.14    1.0   15.0    1.0
 0.18    2.0   14.0    2.0
 0.22    3.0   13.0    3.0
 0.25    4.0   12.0    4.0
 0.29    5.0   11.0    5.0
 0.32    6.0   10.0    6.0
 0.35    7.0    9.0    7.0
 0.39    8.0    8.0    8.0
 0.37    9.0    7.0    7.0
 0.58   10.0    6.0    6.0
 0.73   11.0    5.0    5.0
 0.96   12.0    4.0    4.0
 1.34   13.0    3.0    3.0
 2.10   14.0    2.0    2.0
 4.39   15.0    1.0    1.0
The program uses (0.5, 1.0, 1.5) as the initial guess at the position of the minimum.
function e04hy_example


fprintf('e04hy example results\n\n');

global y t;

% Model fitting data.
m = int64(15);
y = [ 0.14, 0.18, 0.22, 0.25, 0.29,...
      0.32, 0.35, 0.39, 0.37, 0.58,...
      0.73, 0.96, 1.34, 2.10, 4.39];
t = [ 1.0 15.0 1.0;
      2.0 14.0 2.0;
      3.0 13.0 3.0;
      4.0 12.0 4.0;
      5.0 11.0 5.0;
      6.0 10.0 6.0;
      7.0  9.0 7.0;
      8.0  8.0 8.0;
      9.0  7.0 7.0;
     10.0  6.0 6.0;
     11.0  5.0 5.0;
     12.0  4.0 4.0;
     13.0  3.0 3.0;
     14.0  2.0 2.0;
     15.0  1.0 1.0];

% Initial guess
n = 3;
x = [0.5;  1;  1.5];

[x, fsumsq, user, ifail] = e04hy(m, @lsfun2, @lshes2, x);

fprintf('Best fit model parameters are:\n');
for i = 1:n
  fprintf('        x_%d = %10.3f\n',i,x(i));
end
fprintf('\nSum of squares of residuals = %7.4f\n',fsumsq);



function [fvecc, fjacc, user] = lsfun2(m, n, xc, ljc, user)

  global y t;

  fvecc = zeros(m, 1);
  fjacc = zeros(ljc, n);

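  % Residuals f_i(x) = x_1 + t_i1/(x_2*t_i2 + x_3*t_i3) - y_i and their first derivatives.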
  for i = 1:m
   denom      = xc(2)*t(i,2) + xc(3)*t(i,3);
   fvecc(i)   = xc(1) + t(i,1)/denom - y(i);
   fjacc(i,1) = 1;
   dummy      = -1/(denom*denom);
   fjacc(i,2) = t(i,1)*t(i,2)*dummy;
   fjacc(i,3) = t(i,1)*t(i,3)*dummy;
  end

function [b, user] = lshes2(m, n, fvecc, xc, lb, user)

  global y t;

  b = zeros(lb, 1);

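  % Only the second derivatives with respect to x_2 and x_3 are nonzero, so in the
  % row-packed lower triangle of B only b(3) = B_22, b(5) = B_32 and b(6) = B_33
  % need to be accumulated; b(1), b(2) and b(4) remain zero.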
  sum22 = 0;
  sum32 = 0;
  sum33 = 0;
  for i = 1:m
    dummy = 2*t(i,1)/(xc(2)*t(i,2)+xc(3)*t(i,3))^3;
    sum22 = sum22 + fvecc(i)*dummy*t(i,2)^2;
    sum32 = sum32 + fvecc(i)*dummy*t(i,2)*t(i,3);
    sum33 = sum33 + fvecc(i)*dummy*t(i,3)^2;
  end
  b(3) = sum22;
  b(5) = sum32;
  b(6) = sum33;
e04hy example results

Best fit model parameters are:
        x_1 =      0.082
        x_2 =      1.133
        x_3 =      2.344

Sum of squares of residuals =  0.0082


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015