

# NAG Toolbox: nag_fit_2dcheb_lines (e02ca)

## Purpose

nag_fit_2dcheb_lines (e02ca) forms an approximation to the weighted, least squares Chebyshev series surface fit to data arbitrarily distributed on lines parallel to one independent coordinate axis.

## Syntax

[a, ifail] = e02ca(m, k, l, x, y, f, w, xmin, xmax, nux, nuy, 'n', n, 'inuxp1', inuxp1, 'inuyp1', inuyp1)
[a, ifail] = nag_fit_2dcheb_lines(m, k, l, x, y, f, w, xmin, xmax, nux, nuy, 'n', n, 'inuxp1', inuxp1, 'inuyp1', inuyp1)

## Description

nag_fit_2dcheb_lines (e02ca) determines a bivariate polynomial approximation of degree $k$ in $x$ and $l$ in $y$ to the set of data points $\left({x}_{\mathit{r},\mathit{s}},{y}_{\mathit{s}},{f}_{\mathit{r},\mathit{s}}\right)$, with weights ${w}_{\mathit{r},\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$ and $\mathit{r}=1,2,\dots ,{m}_{\mathit{s}}$. That is, the data points are on lines $y={y}_{s}$, but the $x$ values may be different on each line. The values of $k$ and $l$ are prescribed by you (for guidance on their choice, see Further Comments). The function is based on the method described in Sections 5 and 6 of Clenshaw and Hayes (1965).
The polynomial is represented in double Chebyshev series form with arguments $\stackrel{-}{x}$ and $\stackrel{-}{y}$. The arguments lie in the range $-1$ to $+1$ and are related to the original variables $x$ and $y$ by the transformations
 $\bar{x}=\frac{2x-x_{\mathrm{max}}-x_{\mathrm{min}}}{x_{\mathrm{max}}-x_{\mathrm{min}}}\quad\text{and}\quad \bar{y}=\frac{2y-y_{\mathrm{max}}-y_{\mathrm{min}}}{y_{\mathrm{max}}-y_{\mathrm{min}}}.$
Here ${y}_{\mathrm{max}}$ and ${y}_{\mathrm{min}}$ are set by the function to, respectively, the largest and smallest value of ${y}_{s}$, but ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are functions of $y$ prescribed by you (see Further Comments). For this function, only their values ${x}_{\mathrm{max}}^{\left(s\right)}$ and ${x}_{\mathrm{min}}^{\left(s\right)}$ at each $y={y}_{s}$ are required. For each $s=1,2,\dots ,n$, ${x}_{\mathrm{max}}^{\left(s\right)}$ must not be less than the largest ${x}_{r,s}$ on the line $y={y}_{s}$, and, similarly, ${x}_{\mathrm{min}}^{\left(s\right)}$ must not be greater than the smallest ${x}_{r,s}$.
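The normalizing transformation above is simple to apply directly. A minimal sketch in plain Python (illustrative only; `to_chebyshev_range` is not a Toolbox routine):

```python
def to_chebyshev_range(v, vmin, vmax):
    """Map v in [vmin, vmax] onto the Chebyshev argument range [-1, 1]."""
    return (2.0 * v - vmax - vmin) / (vmax - vmin)

# The endpoints map to -1 and +1, the midpoint to 0.
print(to_chebyshev_range(0.0, 0.0, 5.0))   # -1.0
print(to_chebyshev_range(5.0, 0.0, 5.0))   #  1.0
print(to_chebyshev_range(2.5, 0.0, 5.0))   #  0.0
```

This is the mapping applied per line, with $x_{\mathrm{min}}^{(s)}$ and $x_{\mathrm{max}}^{(s)}$ substituted for the fixed bounds.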
The double Chebyshev series can be written as
 $\sum_{i=0}^{k}\sum_{j=0}^{l}a_{ij}T_i\left(\bar{x}\right)T_j\left(\bar{y}\right)$
where ${T}_{i}\left(\stackrel{-}{x}\right)$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\stackrel{-}{x}$, and ${T}_{j}\left(\stackrel{-}{y}\right)$ is similarly defined. However, the standard convention, followed in this function, is that coefficients in the above expression which have either $i$ or $j$ zero are written as $\frac{1}{2}{a}_{ij}$, instead of simply ${a}_{ij}$, and the coefficient with both $i$ and $j$ equal to zero is written as $\frac{1}{4}{a}_{0,0}$. The series with coefficients output by the function should be summed using this convention. nag_fit_2dcheb_eval (e02cb) is available to compute values of the fitted function from these coefficients.
The function first obtains Chebyshev series coefficients ${c}_{s,\mathit{i}}$, for $\mathit{i}=0,1,\dots ,k$, of the weighted least squares polynomial curve fit of degree $k$ in $\stackrel{-}{x}$ to the data on each line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$, in turn, using an auxiliary function. The same function is then called $k+1$ times to fit ${c}_{\mathit{s},i}$, for $\mathit{s}=1,2,\dots ,n$, by a polynomial of degree $l$ in $\stackrel{-}{y}$, for each $i=0,1,\dots ,k$. The resulting coefficients are the required ${a}_{ij}$.
You can force the fit to contain a given polynomial factor. This allows for the surface fit to be constrained to have specified values and derivatives along the boundaries $x={x}_{\mathrm{min}}$, $x={x}_{\mathrm{max}}$, $y={y}_{\mathrm{min}}$ and $y={y}_{\mathrm{max}}$ or indeed along any lines $\stackrel{-}{x}=\text{}$ constant or $\stackrel{-}{y}=\text{}$ constant (see Section 8 of Clenshaw and Hayes (1965)).

## References

Clenshaw C W and Hayes J G (1965) Curve and surface fitting J. Inst. Math. Appl. 1 164–183
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London

## Parameters

### Compulsory Input Parameters

1:     $\mathrm{m}\left({\mathbf{n}}\right)$int64int32nag_int array
${\mathbf{m}}\left(\mathit{s}\right)$ must be set to ${m}_{\mathit{s}}$, the number of data $x$ values on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$.
Constraint: ${\mathbf{m}}\left(\mathit{s}\right)>0$, for $\mathit{s}=1,2,\dots ,{\mathbf{n}}$.
2:     $\mathrm{k}$int64int32nag_int scalar
$k$, the required degree of $x$ in the fit.
Constraint: for $s=1,2,\dots ,n$, ${\mathbf{inuxp1}}-1\le {\mathbf{k}}<\mathit{mdist}\left(s\right)+{\mathbf{inuxp1}}-1$, where $\mathit{mdist}\left(s\right)$ is the number of distinct $x$ values with nonzero weight on the line $y={y}_{s}$. See Further Comments.
3:     $\mathrm{l}$int64int32nag_int scalar
$l$, the required degree of $y$ in the fit.
Constraints:
• ${\mathbf{l}}\ge 0$;
• ${\mathbf{inuyp1}}-1\le {\mathbf{l}}<{\mathbf{n}}+{\mathbf{inuyp1}}-1$.
4:     $\mathrm{x}\left(\mathit{mtot}\right)$ – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot}\ge \sum _{\mathit{s}=1}^{{\mathbf{n}}}{\mathbf{m}}\left(\mathit{s}\right)$.
The $x$ values of the data points. The sequence must be
• all points on $y={y}_{1}$, followed by
• all points on $y={y}_{2}$, followed by
• $⋮$
• all points on $y={y}_{n}$.
Constraint: for each ${y}_{s}$, the $x$ values must be in nondecreasing order.
5:     $\mathrm{y}\left({\mathbf{n}}\right)$ – double array
${\mathbf{y}}\left(\mathit{s}\right)$ must contain the $y$ value of line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$, on which data is given.
Constraint: the ${y}_{s}$ values must be in strictly increasing order.
6:     $\mathrm{f}\left(\mathit{mtot}\right)$ – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot}\ge \sum _{\mathit{s}=1}^{{\mathbf{n}}}{\mathbf{m}}\left(\mathit{s}\right)$.
$f$, the data values of the dependent variable in the same sequence as the $x$ values.
7:     $\mathrm{w}\left(\mathit{mtot}\right)$ – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot}\ge \sum _{\mathit{s}=1}^{{\mathbf{n}}}{\mathbf{m}}\left(\mathit{s}\right)$.
The weights to be assigned to the data points, in the same sequence as the $x$ values. These weights should be calculated from estimates of the absolute accuracies of the ${f}_{r}$, expressed as standard deviations, probable errors or some other measure which is of the same dimensions as ${f}_{r}$. Specifically, each ${w}_{r}$ should be inversely proportional to the accuracy estimate of ${f}_{r}$. Often weights all equal to unity will be satisfactory. If a particular weight is zero, the corresponding data point is omitted from the fit.
8:     $\mathrm{xmin}\left({\mathbf{n}}\right)$ – double array
${\mathbf{xmin}}\left(\mathit{s}\right)$ must contain ${x}_{\mathrm{min}}^{\left(\mathit{s}\right)}$, the lower end of the range of $x$ on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$. It must not be greater than the lowest data value of $x$ on the line. Each ${x}_{\mathrm{min}}^{\left(s\right)}$ is scaled to $-1.0$ in the fit. (See also Further Comments.)
9:     $\mathrm{xmax}\left({\mathbf{n}}\right)$ – double array
${\mathbf{xmax}}\left(\mathit{s}\right)$ must contain ${x}_{\mathrm{max}}^{\left(\mathit{s}\right)}$, the upper end of the range of $x$ on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$. It must not be less than the highest data value of $x$ on the line. Each ${x}_{\mathrm{max}}^{\left(s\right)}$ is scaled to $+1.0$ in the fit. (See also Further Comments.)
Constraint: ${\mathbf{xmax}}\left(s\right)>{\mathbf{xmin}}\left(s\right)$.
10:   $\mathrm{nux}\left({\mathbf{inuxp1}}\right)$ – double array
${\mathbf{nux}}\left(\mathit{i}\right)$ must contain the coefficient of the Chebyshev polynomial of degree $\left(\mathit{i}-1\right)$ in $\stackrel{-}{x}$, in the Chebyshev series representation of the polynomial factor in $\stackrel{-}{x}$ which you require the fit to contain, for $\mathit{i}=1,2,\dots ,{\mathbf{inuxp1}}$. These coefficients are defined according to the standard convention of Description.
Constraint: ${\mathbf{nux}}\left({\mathbf{inuxp1}}\right)$ must be nonzero, unless ${\mathbf{inuxp1}}=1$, in which case nux is ignored.
11:   $\mathrm{nuy}\left({\mathbf{inuyp1}}\right)$ – double array
${\mathbf{nuy}}\left(\mathit{i}\right)$ must contain the coefficient of the Chebyshev polynomial of degree $\left(\mathit{i}-1\right)$ in $\stackrel{-}{y}$, in the Chebyshev series representation of the polynomial factor which you require the fit to contain, for $\mathit{i}=1,2,\dots ,{\mathbf{inuyp1}}$. These coefficients are defined according to the standard convention of Description.
Constraint: ${\mathbf{nuy}}\left({\mathbf{inuyp1}}\right)$ must be nonzero, unless ${\mathbf{inuyp1}}=1$, in which case nuy is ignored.

### Optional Input Parameters

1:     $\mathrm{n}$int64int32nag_int scalar
Default: the dimension of the arrays m, y, xmin, xmax. (An error is raised if these dimensions are not equal.)
The number of lines $y=\text{}$ constant on which data points are given.
Constraint: ${\mathbf{n}}>0$.
2:     $\mathrm{inuxp1}$int64int32nag_int scalar
Default: the dimension of the array nux.
$\mathit{inux}+1$, where $\mathit{inux}$ is the degree of a polynomial factor in $\stackrel{-}{x}$ which you require the fit to contain. (See Description, last paragraph.)
If this option is not required, inuxp1 should be set equal to $1$.
Constraint: $1\le {\mathbf{inuxp1}}\le {\mathbf{k}}+1$.
3:     $\mathrm{inuyp1}$int64int32nag_int scalar
Default: the dimension of the array nuy.
$\mathit{inuy}+1$, where $\mathit{inuy}$ is the degree of a polynomial factor in $\stackrel{-}{y}$ which you require the fit to contain. (See Description, last paragraph.) If this option is not required, inuyp1 should be set equal to $1$.

### Output Parameters

1:     $\mathrm{a}\left(\mathit{na}\right)$ – double array
$\mathit{na}=\left({\mathbf{k}}+1\right)×\left({\mathbf{l}}+1\right)$, the total number of coefficients in the fit.
Contains the Chebyshev coefficients of the fit. ${\mathbf{a}}\left(i×\left({\mathbf{l}}+1\right)+j+1\right)$ is the coefficient ${a}_{ij}$ of Description defined according to the standard convention. These coefficients are used by nag_fit_2dcheb_eval (e02cb) to calculate values of the fitted function.
2:     $\mathrm{ifail}$int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:
${\mathbf{ifail}}=1$
On entry,
• ${\mathbf{k}}<0$ or ${\mathbf{l}}<0$, or
• ${\mathbf{inuxp1}}<1$ or ${\mathbf{inuyp1}}<1$, or
• ${\mathbf{inuxp1}}>{\mathbf{k}}+1$, or
• ${\mathbf{inuyp1}}>{\mathbf{l}}+1$, or
• ${\mathbf{m}}\left(i\right)<{\mathbf{k}}-{\mathbf{inuxp1}}+2$ for some $i=1,2,\dots ,{\mathbf{n}}$, or
• ${\mathbf{n}}<{\mathbf{l}}-{\mathbf{inuyp1}}+2$, or
• na, nwork or mtot is too small.
${\mathbf{ifail}}=2$
${\mathbf{xmin}}\left(i\right)$ and ${\mathbf{xmax}}\left(i\right)$ do not span the data x values on ${\mathbf{y}}={\mathbf{y}}\left(i\right)$ for some $i=1,2,\dots ,{\mathbf{n}}$, possibly because ${\mathbf{xmin}}\left(i\right)\ge {\mathbf{xmax}}\left(i\right)$.
${\mathbf{ifail}}=3$
The data x values on ${\mathbf{y}}={\mathbf{y}}\left(i\right)$ are not nondecreasing for some $i=1,2,\dots ,{\mathbf{n}}$, or the ${\mathbf{y}}\left(i\right)$ themselves are not strictly increasing.
${\mathbf{ifail}}=4$
The number of distinct x values with nonzero weight on ${\mathbf{y}}={\mathbf{y}}\left(i\right)$ is less than ${\mathbf{k}}-{\mathbf{inuxp1}}+2$ for some $i=1,2,\dots ,{\mathbf{n}}$.
${\mathbf{ifail}}=5$
 On entry, ${\mathbf{nux}}\left({\mathbf{inuxp1}}\right)=0.0$ and ${\mathbf{inuxp1}}\ne 1$, or ${\mathbf{nuy}}\left({\mathbf{inuyp1}}\right)=0.0$ and ${\mathbf{inuyp1}}\ne 1$.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.

## Accuracy

No error analysis for this method has been published. Practical experience with the method, however, is generally extremely satisfactory.

## Further Comments

The time taken is approximately proportional to $k×\left(k×\mathit{mtot}+n×{l}^{2}\right)$.
The reason for allowing ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ (which are used to normalize the range of $x$) to vary with $y$ is that unsatisfactory fits can result if the highest (or lowest) data values of the normalized $x$ on each line $y={y}_{s}$ are not approximately the same. (For an explanation of this phenomenon, see page 176 of Clenshaw and Hayes (1965).) Commonly in practice, the lowest (for example) data values ${x}_{1,s}$, while not being approximately constant, do lie close to some smooth curve in the $\left(x,y\right)$ plane. Using values from this curve as the values of ${x}_{\mathrm{min}}$, different in general on each line, causes the lowest transformed data values ${\stackrel{-}{x}}_{1,s}$ to be approximately constant. Sometimes, appropriate curves for ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ will be clear from the context of the problem (they need not be polynomials). If this is not the case, suitable curves can often be obtained by fitting to the lowest data values ${x}_{1,s}$ and to the corresponding highest data values of $x$, low degree polynomials in $y$, using function nag_fit_1dcheb_arb (e02ad), and then shifting the two curves outwards by a small amount so that they just contain all the data between them. The complete curves are not in fact supplied to the present function, only their values at each ${y}_{s}$; and the values simply need to lie on smooth curves. More values on the complete curves will be required subsequently, when computing values of the fitted surface at arbitrary $y$ values.
Naturally, a satisfactory approximation to the surface underlying the data cannot be expected if the character of the surface is not adequately represented by the data. Also, as always with polynomials, the approximating function may exhibit unwanted oscillations (particularly near the ends of the ranges) if the degrees $k$ and $l$ are taken greater than certain values, generally unknown, but depending on the data. The total number of coefficients, $\left(k+1\right)×\left(l+1\right)$, should be significantly smaller than, say not more than half, the total number of data points. Similarly, $k+1$ should be significantly smaller than most (preferably all) the ${m}_{s}$, and $l+1$ significantly smaller than $n$. Closer spacing of the data near the ends of the $x$ and $y$ ranges is an advantage. In particular, if ${\stackrel{-}{y}}_{\mathit{s}}=-\mathrm{cos}\left(\pi \left(\mathit{s}-1\right)/\left(n-1\right)\right)$, for $\mathit{s}=1,2,\dots ,n$, and ${\stackrel{-}{x}}_{\mathit{r},s}=-\mathrm{cos}\left(\pi \left(\mathit{r}-1\right)/\left(m-1\right)\right)$, for $\mathit{r}=1,2,\dots ,m$ (thus ${m}_{s}=m$ for all $s$), then the values $k=m-1$ and $l=n-1$ (so that the polynomial passes exactly through all the data points) should not give unwanted oscillations. Other datasets should be similarly satisfactory if they are everywhere at least as closely spaced as the above cosine values with $m$ replaced by $k+1$ and $n$ by $l+1$ (more precisely, if for every $s$ the largest interval between consecutive values of $\mathrm{arccos}{\stackrel{-}{x}}_{\mathit{r},s}$, for $\mathit{r}=1,2,\dots ,m$, is not greater than $\pi /k$, and similarly for the ${\stackrel{-}{y}}_{s}$). The polynomial obtained should always be examined graphically before acceptance. Note that, for this purpose, it is not sufficient to plot the polynomial only at the data values of $x$ and $y$: intermediate values should also be plotted, preferably via a graphics facility.
Provided the data are adequate, and the surface underlying the data is of a form that can be represented by a polynomial of the chosen degrees, the function should produce a good approximation to this surface. It is not, however, the true least squares surface fit nor even a polynomial in $x$ and $y$, the original variables (see Section 6 of Clenshaw and Hayes (1965)), except in certain special cases. The most important of these is where the data values of $x$ are the same on each line $y={y}_{s}$ (i.e., the data points lie on a rectangular mesh in the $\left(x,y\right)$ plane), the weights of the data points are all equal, and ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are both constants (in this case they should be set to the largest and smallest data values of $x$, respectively).
If the dataset is such that it can be satisfactorily approximated by a polynomial of degrees ${k}^{\prime }$ and ${l}^{\prime }$, say, then if higher values are used for $k$ and $l$ in the function, all the coefficients ${a}_{ij}$ for $i>{k}^{\prime }$ or $j>{l}^{\prime }$ will take apparently random values within a range bounded by the size of the data errors, or rather less. (This behaviour of the Chebyshev coefficients, most readily observed if they are set out in a rectangular array, closely parallels that in curve-fitting, examples of which are given in Section 8 of Hayes (1970).) In practice, therefore, to establish suitable values of ${k}^{\prime }$ and ${l}^{\prime }$, you should first be seeking (within the limitations discussed above) values for $k$ and $l$ which are large enough to exhibit the behaviour described. Values for ${k}^{\prime }$ and ${l}^{\prime }$ should then be chosen as the smallest which do not exclude any coefficients significantly larger than the random ones. A polynomial of degrees ${k}^{\prime }$ and ${l}^{\prime }$ should then be fitted to the data.
If the option to force the fit to contain a given polynomial factor in $x$ is used and if zeros of the chosen factor coincide with data $x$ values on any line, then the effective number of data points on that line is reduced by the number of such coincidences. A similar consideration applies when forcing the $y$-direction. No account is taken of this by the function when testing that the degrees $k$ and $l$ have not been chosen too large.

## Example

This example reads data in the following order, using the notation of the argument list for nag_fit_2dcheb_lines (e02ca) above:
• $n$, $k$, $l$
• ${y}_{i}$, ${m}_{i}$, ${\mathit{xmin}}_{i}$, ${\mathit{xmax}}_{i}$, for $i=1,2,\dots ,n$
• ${x}_{i}$, ${f}_{i}$, ${w}_{i}$, for $i=1,2,\dots ,\mathit{mtot}$.
The data points are fitted using nag_fit_2dcheb_lines (e02ca), and then the fitting polynomial is evaluated at the data points using nag_fit_2dcheb_eval (e02cb).
The output is:
• the data points and their fitted values;
• the Chebyshev coefficients of the fit.
```function e02ca_example

fprintf('e02ca example results\n\n');

% Fit cubic in x and quadratic in y
k = int64(3);
l = int64(2);

% 4 lines of data, 1 for each of 4 values of y
n = 4;
m = int64([8 7 7 6]);
mtot = sum(m);
x = [0.1     1.0     1.6     2.1     3.3     3.9     4.2     4.9 ...
     0.1     1.1     1.9     2.7     3.2     4.1     4.5         ...
     0.5     1.1     1.3     2.2     2.9     3.5     3.9         ...
     1.7     2.0     2.4     2.7     3.1     3.5 ];
f = [1.01005 1.10517 1.17351 1.23368 1.39097 1.47698 1.52196 1.63232 ...
     2.02010 2.23256 2.41850 2.61993 2.75426 3.01364 3.13662         ...
     3.15381 3.34883 3.41649 3.73823 4.00928 4.25720 4.43094         ...
     5.92652 6.10701 6.35625 6.54982 6.81713 7.09534 ];
w =  ones(mtot, 1);

y    = [0   1    2    4  ];  ymin = min(y); ymax = max(y);
xmin = [0   0.1  0.4  1.6];
xmax = [5   4.5  4    3.5];

% No polynomial factor is forced on the fit: nux and nuy have length 1,
% so inuxp1 = inuyp1 = 1 by default and their values are ignored.
nux  = [0];
nuy  = nux;

% Compute surface fit
[a, ifail] = e02ca( ...
    m, k, l, x, y, f, w, xmin, xmax, nux, nuy);

fig1 = figure;
colors = {'Blue','Green','Red','Black'};
hold on;
% Evaluate fit
mlast = int64(0);
for i = 1:n
  mfirst = mlast + 1;
  mlast  = mlast + m(i);
  [fit, ifail] = e02cb( ...
      mfirst, k, l, x, xmin(i), xmax(i), y(i), ...
      ymin, ymax, a, 'mlast', mlast);
  mm = mfirst:mlast;
  mfit(mm) = fit(mm);

  fprintf('\nLine number %d, y = %7.2f\n',i,y(i));
  fprintf('       x          f         fit      residual\n');
  sol = [x(mm); f(mm); fit(mm)'; fit(mm)'-f(mm)];
  fprintf('%11.4f%11.4f%11.4f%11.2e\n', sol);
  plot(x(mm),fit(mm),'Color',colors{i});
end
plot(x,f,'*','Color','Magenta');
hold off
title('Least-squares bi-variate polynomial fit');
xlabel('x');
ylabel('p(x,y=0,1,2,4)')
legend('line 1: y = 0','line 2: y = 1','line 3: y = 2',...
'line 4: y = 4','data points','Location','NorthWest');

```
```e02ca example results

Line number 1, y =    0.00
x          f         fit      residual
0.1000     1.0100     1.0175   7.40e-03
1.0000     1.1052     1.1126   7.39e-03
1.6000     1.1735     1.1809   7.43e-03
2.1000     1.2337     1.2412   7.55e-03
3.3000     1.3910     1.3992   8.19e-03
3.9000     1.4770     1.4857   8.72e-03
4.2000     1.5220     1.5310   9.03e-03
4.9000     1.6323     1.6422   9.83e-03

Line number 2, y =    1.00
x          f         fit      residual
0.1000     2.0201     1.9987  -2.14e-02
1.1000     2.2326     2.2110  -2.16e-02
1.9000     2.4185     2.3962  -2.23e-02
2.7000     2.6199     2.5966  -2.34e-02
3.2000     2.7543     2.7299  -2.43e-02
4.1000     3.0136     2.9869  -2.68e-02
4.5000     3.1366     3.1084  -2.82e-02

Line number 3, y =    2.00
x          f         fit      residual
0.5000     3.1538     3.1700   1.62e-02
1.1000     3.3488     3.3648   1.60e-02
1.3000     3.4165     3.4325   1.60e-02
2.2000     3.7382     3.7549   1.66e-02
2.9000     4.0093     4.0272   1.79e-02
3.5000     4.2572     4.2769   1.97e-02
3.9000     4.4309     4.4521   2.12e-02

Line number 4, y =    4.00
x          f         fit      residual
1.7000     5.9265     5.9231  -3.42e-03
2.0000     6.1070     6.1036  -3.41e-03
2.4000     6.3563     6.3527  -3.50e-03
2.7000     6.5498     6.5462  -3.64e-03
3.1000     6.8171     6.8132  -3.98e-03
3.5000     7.0953     7.0909  -4.49e-03
```

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015