

# NAG Toolbox: nag_fit_2dcheb_lines (e02ca)

## Purpose

nag_fit_2dcheb_lines (e02ca) forms an approximation to the weighted least squares Chebyshev series surface fit to data arbitrarily distributed on lines parallel to one independent coordinate axis.

## Syntax

```
[a, ifail] = e02ca(m, k, l, x, y, f, w, xmin, xmax, nux, nuy, 'n', n, 'inuxp1', inuxp1, 'inuyp1', inuyp1)
[a, ifail] = nag_fit_2dcheb_lines(m, k, l, x, y, f, w, xmin, xmax, nux, nuy, 'n', n, 'inuxp1', inuxp1, 'inuyp1', inuyp1)
```

## Description

nag_fit_2dcheb_lines (e02ca) determines a bivariate polynomial approximation of degree $k$ in $x$ and $l$ in $y$ to the set of data points $(x_{r,s}, y_s, f_{r,s})$, with weights $w_{r,s}$, for $s = 1, 2, \dots, n$ and $r = 1, 2, \dots, m_s$. That is, the data points lie on lines $y = y_s$, but the $x$ values may be different on each line. The values of $k$ and $l$ are prescribed by you (for guidance on their choice, see Section [Further Comments]). The function is based on the method described in Sections 5 and 6 of Clenshaw and Hayes (1965).
The polynomial is represented in double Chebyshev series form with arguments $\bar{x}$ and $\bar{y}$. The arguments lie in the range $-1$ to $+1$ and are related to the original variables $x$ and $y$ by the transformations
$$\bar{x} = \frac{2x - (x_{\max} + x_{\min})}{x_{\max} - x_{\min}} \quad\text{and}\quad \bar{y} = \frac{2y - (y_{\max} + y_{\min})}{y_{\max} - y_{\min}}.$$
Here $y_{\max}$ and $y_{\min}$ are set by the function to, respectively, the largest and smallest values of $y_s$, but $x_{\max}$ and $x_{\min}$ are functions of $y$ prescribed by you (see Section [Further Comments]). For this function, only their values $x_{\max}^{(s)}$ and $x_{\min}^{(s)}$ at each $y = y_s$ are required. For each $s = 1, 2, \dots, n$, $x_{\max}^{(s)}$ must not be less than the largest $x_{r,s}$ on the line $y = y_s$, and, similarly, $x_{\min}^{(s)}$ must not be greater than the smallest $x_{r,s}$.
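These transformations are simple enough to sketch directly. Below is an illustrative Python version (Python and the helper name `normalize` are ours, not part of the NAG Toolbox interface):

```python
def normalize(t, t_min, t_max):
    """Map t in [t_min, t_max] onto the Chebyshev argument range [-1, 1]."""
    return (2.0 * t - (t_max + t_min)) / (t_max - t_min)

# The range endpoints map to -1 and +1, as required by the fit.
print(normalize(0.0, 0.0, 5.0))   # x = xmin  -> -1.0
print(normalize(5.0, 0.0, 5.0))   # x = xmax  -> +1.0
print(normalize(2.5, 0.0, 5.0))   # midpoint  ->  0.0
```

Because $x_{\min}$ and $x_{\max}$ may differ from line to line, this mapping is applied with per-line endpoints for $x$, but with the single global pair $(y_{\min}, y_{\max})$ for $y$.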
The double Chebyshev series can be written as
$$\sum_{i=0}^{k} \sum_{j=0}^{l} a_{ij} T_i(\bar{x}) T_j(\bar{y})$$
where $T_i(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\bar{x}$, and $T_j(\bar{y})$ is similarly defined. However, the standard convention, followed in this function, is that coefficients in the above expression which have either $i$ or $j$ zero are written as $\frac{1}{2}a_{ij}$, instead of simply $a_{ij}$, and the coefficient with both $i$ and $j$ equal to zero is written as $\frac{1}{4}a_{0,0}$. The series with coefficients output by the function should be summed using this convention. nag_fit_2dcheb_eval (e02cb) is available to compute values of the fitted function from these coefficients.
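The halving convention can be made concrete with a short sketch. The following illustrative Python code (Python and the helper names are ours, not part of the NAG Toolbox) evaluates a double Chebyshev series under this convention:

```python
def cheb_polys(t, deg):
    """Values T_0(t) .. T_deg(t) via the recurrence T_{n+1} = 2*t*T_n - T_{n-1}."""
    T = [1.0, t]
    for _ in range(2, deg + 1):
        T.append(2.0 * t * T[-1] - T[-2])
    return T[:deg + 1]

def eval_surface(a, xbar, ybar):
    """Sum a_ij * T_i(xbar) * T_j(ybar) over the (k+1) x (l+1) grid a, halving
    coefficients with i = 0 or j = 0 (so a_00 is quartered), per the convention."""
    k, l = len(a) - 1, len(a[0]) - 1
    Tx, Ty = cheb_polys(xbar, k), cheb_polys(ybar, l)
    total = 0.0
    for i in range(k + 1):
        for j in range(l + 1):
            c = a[i][j]
            if i == 0:
                c *= 0.5
            if j == 0:
                c *= 0.5
            total += c * Tx[i] * Ty[j]
    return total
```

For instance, with $a_{0,0} = 4$ and all other coefficients zero, the series is identically $1$: only the quartered constant term contributes.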
The function first obtains Chebyshev series coefficients $c_{s,i}$, for $i = 0, 1, \dots, k$, of the weighted least squares polynomial curve fit of degree $k$ in $\bar{x}$ to the data on each line $y = y_s$, for $s = 1, 2, \dots, n$, in turn, using an auxiliary function. The same function is then called $k + 1$ times to fit $c_{s,i}$, for $s = 1, 2, \dots, n$, by a polynomial of degree $l$ in $\bar{y}$, for each $i = 0, 1, \dots, k$. The resulting coefficients are the required $a_{ij}$.
You can force the fit to contain a given polynomial factor. This allows the surface fit to be constrained to have specified values and derivatives along the boundaries $x = x_{\min}$, $x = x_{\max}$, $y = y_{\min}$ and $y = y_{\max}$, or indeed along any lines $\bar{x} = \text{constant}$ or $\bar{y} = \text{constant}$ (see Section 8 of Clenshaw and Hayes (1965)).

## References

Clenshaw C W and Hayes J G (1965) Curve and surface fitting J. Inst. Math. Appl. 1 164–183
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London

## Parameters

### Compulsory Input Parameters

1:     m(n) – int64/int32/nag_int array
n, the dimension of the array, must satisfy the constraint $\mathbf{n} > 0$.
$\mathbf{m}(s)$ must be set to $m_s$, the number of data $x$ values on the line $y = y_s$, for $s = 1, 2, \dots, n$.
Constraint: $\mathbf{m}(s) > 0$, for $s = 1, 2, \dots, \mathbf{n}$.
2:     k – int64/int32/nag_int scalar
$k$, the required degree of $x$ in the fit.
Constraint: for $s = 1, 2, \dots, n$, $\mathbf{inuxp1} - 1 \le \mathbf{k} < \mathit{mdist}(s) + \mathbf{inuxp1} - 1$, where $\mathit{mdist}(s)$ is the number of distinct $x$ values with nonzero weight on the line $y = y_s$. See Section [Further Comments].
3:     l – int64/int32/nag_int scalar
$l$, the required degree of $y$ in the fit.
Constraints:
• $\mathbf{l} \ge 0$;
• $\mathbf{inuyp1} - 1 \le \mathbf{l} < \mathbf{n} + \mathbf{inuyp1} - 1$.
4:     x(mtot) – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot} \ge \sum_{s=1}^{\mathbf{n}} \mathbf{m}(s)$.
The $x$ values of the data points. The sequence must be
• all points on $y = y_1$, followed by
• all points on $y = y_2$, followed by
• ⋮
• all points on $y = y_n$.
Constraint: for each $y_s$, the $x$ values must be in nondecreasing order.
5:     y(n) – double array
n, the dimension of the array, must satisfy the constraint $\mathbf{n} > 0$.
$\mathbf{y}(s)$ must contain the $y$ value of the line $y = y_s$, for $s = 1, 2, \dots, n$, on which data is given.
Constraint: the $y_s$ values must be in strictly increasing order.
6:     f(mtot) – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot} \ge \sum_{s=1}^{\mathbf{n}} \mathbf{m}(s)$.
$f$, the data values of the dependent variable in the same sequence as the $x$ values.
7:     w(mtot) – double array
mtot, the dimension of the array, must satisfy the constraint $\mathit{mtot} \ge \sum_{s=1}^{\mathbf{n}} \mathbf{m}(s)$.
The weights to be assigned to the data points, in the same sequence as the $x$ values. These weights should be calculated from estimates of the absolute accuracies of the $f_r$, expressed as standard deviations, probable errors or some other measure which is of the same dimensions as $f_r$. Specifically, each $w_r$ should be inversely proportional to the accuracy estimate of $f_r$. Often weights all equal to unity will be satisfactory. If a particular weight is zero, the corresponding data point is omitted from the fit.
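As an illustration of the inverse-proportionality rule, here is a minimal Python sketch (the name `weights_from_sigmas` is ours, not part of the Toolbox) that turns standard-deviation estimates into weights:

```python
def weights_from_sigmas(sigmas):
    """Weights inversely proportional to the accuracy estimates (e.g. standard
    deviations) of the data values; a weight of zero would omit the point."""
    return [1.0 / s for s in sigmas]

# A point known twice as accurately receives twice the weight.
print(weights_from_sigmas([0.1, 0.2]))   # [10.0, 5.0]
```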
8:     xmin(n) – double array
n, the dimension of the array, must satisfy the constraint $\mathbf{n} > 0$.
$\mathbf{xmin}(s)$ must contain $x_{\min}^{(s)}$, the lower end of the range of $x$ on the line $y = y_s$, for $s = 1, 2, \dots, n$. It must not be greater than the lowest data value of $x$ on the line. Each $x_{\min}^{(s)}$ is scaled to $-1.0$ in the fit. (See also Section [Further Comments].)
9:     xmax(n) – double array
n, the dimension of the array, must satisfy the constraint $\mathbf{n} > 0$.
$\mathbf{xmax}(s)$ must contain $x_{\max}^{(s)}$, the upper end of the range of $x$ on the line $y = y_s$, for $s = 1, 2, \dots, n$. It must not be less than the highest data value of $x$ on the line. Each $x_{\max}^{(s)}$ is scaled to $+1.0$ in the fit. (See also Section [Further Comments].)
Constraint: $\mathbf{xmax}(s) > \mathbf{xmin}(s)$.
10:   nux(inuxp1) – double array
inuxp1, the dimension of the array, must satisfy the constraint $1 \le \mathbf{inuxp1} \le \mathbf{k} + 1$.
$\mathbf{nux}(i)$ must contain the coefficient of the Chebyshev polynomial of degree $(i - 1)$ in $\bar{x}$, in the Chebyshev series representation of the polynomial factor in $\bar{x}$ which you require the fit to contain, for $i = 1, 2, \dots, \mathbf{inuxp1}$. These coefficients are defined according to the standard convention of Section [Description].
Constraint: $\mathbf{nux}(\mathbf{inuxp1})$ must be nonzero, unless $\mathbf{inuxp1} = 1$, in which case nux is ignored.
11:   nuy(inuyp1) – double array
$\mathbf{nuy}(i)$ must contain the coefficient of the Chebyshev polynomial of degree $(i - 1)$ in $\bar{y}$, in the Chebyshev series representation of the polynomial factor which you require the fit to contain, for $i = 1, 2, \dots, \mathbf{inuyp1}$. These coefficients are defined according to the standard convention of Section [Description].
Constraint: $\mathbf{nuy}(\mathbf{inuyp1})$ must be nonzero, unless $\mathbf{inuyp1} = 1$, in which case nuy is ignored.

### Optional Input Parameters

1:     n – int64/int32/nag_int scalar
Default: The dimension of the arrays m, y, xmin, xmax. (An error is raised if these dimensions are not equal.)
The number of lines $y = \text{constant}$ on which data points are given.
Constraint: $\mathbf{n} > 0$.
2:     inuxp1 – int64/int32/nag_int scalar
Default: The dimension of the array nux.
$\mathit{inux} + 1$, where $\mathit{inux}$ is the degree of a polynomial factor in $\bar{x}$ which you require the fit to contain. (See Section [Description], last paragraph.)
If this option is not required, inuxp1 should be set equal to $1$.
Constraint: $1 \le \mathbf{inuxp1} \le \mathbf{k} + 1$.
3:     inuyp1 – int64/int32/nag_int scalar
Default: The dimension of the array nuy.
$\mathit{inuy} + 1$, where $\mathit{inuy}$ is the degree of a polynomial factor in $\bar{y}$ which you require the fit to contain. (See Section [Description], last paragraph.) If this option is not required, inuyp1 should be set equal to $1$.

### Input Parameters Omitted from the MATLAB Interface

`mtot`, `na`, `work`, `nwork`

### Output Parameters

1:     a(na) – double array
$\mathit{na} \ge (\mathbf{k} + 1) \times (\mathbf{l} + 1)$, the total number of coefficients in the fit.
Contains the Chebyshev coefficients of the fit. $\mathbf{a}(i \times (\mathbf{l} + 1) + j)$ is the coefficient $a_{ij}$ of Section [Description], defined according to the standard convention. These coefficients are used by nag_fit_2dcheb_eval (e02cb) to calculate values of the fitted function.
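The flat array thus stores the $(k+1)(l+1)$ coefficients row by row: all $j$ for $i = 0$ first, then $i = 1$, and so on. A sketch of unpacking it into a grid, in illustrative Python with 0-based indices (the helper `unpack_coeffs` is ours, not part of the Toolbox; the MATLAB array itself is 1-based):

```python
def unpack_coeffs(a, k, l):
    """Reshape the flat coefficient array into a (k+1) x (l+1) grid so that
    grid[i][j] is a_ij (0-based i, j; the flat index is i*(l+1) + j)."""
    assert len(a) == (k + 1) * (l + 1)
    return [[a[i * (l + 1) + j] for j in range(l + 1)] for i in range(k + 1)]

# With k = 3, l = 2 there are 12 coefficients; a_00 is the first entry.
grid = unpack_coeffs(list(range(12)), 3, 2)
print(grid[0][0], grid[1][2], grid[3][1])  # 0 5 10
```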
2:     ifail – int64/int32/nag_int scalar
$\mathbf{ifail} = 0$ unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = 1
On entry, $\mathbf{k} < 0$ or $\mathbf{l} < 0$, or $\mathbf{inuxp1} < 1$ or $\mathbf{inuyp1} < 1$, or $\mathbf{inuxp1} > \mathbf{k} + 1$, or $\mathbf{inuyp1} > \mathbf{l} + 1$, or $\mathbf{m}(i) < \mathbf{k} - \mathbf{inuxp1} + 2$ for some $i = 1, 2, \dots, \mathbf{n}$, or $\mathbf{n} < \mathbf{l} - \mathbf{inuyp1} + 2$, or `na` is too small, or `nwork` is too small, or `mtot` is too small.
ifail = 2
$\mathbf{xmin}(i)$ and $\mathbf{xmax}(i)$ do not span the data $x$ values on $y = \mathbf{y}(i)$ for some $i = 1, 2, \dots, \mathbf{n}$, possibly because $\mathbf{xmin}(i) \ge \mathbf{xmax}(i)$.
ifail = 3
The data $x$ values on $y = \mathbf{y}(i)$ are not nondecreasing for some $i = 1, 2, \dots, \mathbf{n}$, or the $\mathbf{y}(i)$ themselves are not strictly increasing.
ifail = 4
The number of distinct $x$ values with nonzero weight on $y = \mathbf{y}(i)$ is less than $\mathbf{k} - \mathbf{inuxp1} + 2$ for some $i = 1, 2, \dots, \mathbf{n}$.
ifail = 5
On entry, $\mathbf{nux}(\mathbf{inuxp1}) = 0.0$ and $\mathbf{inuxp1} \ne 1$, or $\mathbf{nuy}(\mathbf{inuyp1}) = 0.0$ and $\mathbf{inuyp1} \ne 1$.

## Accuracy

No error analysis for this method has been published. Practical experience with the method, however, is generally extremely satisfactory.

## Further Comments

The time taken is approximately proportional to $k \times (k \times \mathit{mtot} + n \times l^2)$.
The reason for allowing $x_{\max}$ and $x_{\min}$ (which are used to normalize the range of $x$) to vary with $y$ is that unsatisfactory fits can result if the highest (or lowest) data values of the normalized $x$ on each line $y = y_s$ are not approximately the same. (For an explanation of this phenomenon, see page 176 of Clenshaw and Hayes (1965).) Commonly in practice, the lowest (for example) data values $x_{1,s}$, while not being approximately constant, do lie close to some smooth curve in the $(x, y)$ plane. Using values from this curve as the values of $x_{\min}$, different in general on each line, causes the lowest transformed data values $\bar{x}_{1,s}$ to be approximately constant. Sometimes, appropriate curves for $x_{\max}$ and $x_{\min}$ will be clear from the context of the problem (they need not be polynomials). If this is not the case, suitable curves can often be obtained by fitting low-degree polynomials in $y$ to the lowest data values $x_{1,s}$ and to the corresponding highest data values of $x$, using function nag_fit_1dcheb_arb (e02ad), and then shifting the two curves outwards by a small amount so that they just contain all the data between them. The complete curves are not in fact supplied to the present function, only their values at each $y_s$; and the values simply need to lie on smooth curves. More values on the complete curves will be required subsequently, when computing values of the fitted surface at arbitrary $y$ values.
Naturally, a satisfactory approximation to the surface underlying the data cannot be expected if the character of the surface is not adequately represented by the data. Also, as always with polynomials, the approximating function may exhibit unwanted oscillations (particularly near the ends of the ranges) if the degrees $k$ and $l$ are taken greater than certain values, generally unknown but depending on the data. As a guide, the total number of coefficients, $(k + 1) \times (l + 1)$, should be significantly smaller than, say not more than half, the total number of data points. Similarly, $k + 1$ should be significantly smaller than most (preferably all) of the $m_s$, and $l + 1$ significantly smaller than $n$.

Closer spacing of the data near the ends of the $x$ and $y$ ranges is an advantage. In particular, if $\bar{y}_s = -\cos(\pi(s - 1)/(n - 1))$, for $s = 1, 2, \dots, n$, and $\bar{x}_{r,s} = -\cos(\pi(r - 1)/(m - 1))$, for $r = 1, 2, \dots, m$ (thus $m_s = m$ for all $s$), then the values $k = m - 1$ and $l = n - 1$ (so that the polynomial passes exactly through all the data points) should not give unwanted oscillations. Other datasets should be similarly satisfactory if they are everywhere at least as closely spaced as the above cosine values with $m$ replaced by $k + 1$ and $n$ by $l + 1$ (more precisely, if for every $s$ the largest interval between consecutive values of $\arccos \bar{x}_{r,s}$, for $r = 1, 2, \dots, m$, is not greater than $\pi/k$, and similarly for the $\bar{y}_s$). The polynomial obtained should always be examined graphically before acceptance. Note that, for this purpose, it is not sufficient to plot the polynomial only at the data values of $x$ and $y$: intermediate values should also be plotted, preferably via a graphics facility.
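The spacing criterion above is mechanical enough to check in code. Below is an illustrative Python sketch (the helper `spacing_ok` is ours, not part of the NAG Toolbox), assuming the normalized values $\bar{x}_{r,s}$ lie in $[-1, 1]$:

```python
import math

def spacing_ok(xbar, k):
    """True if the largest gap between consecutive values of arccos(xbar)
    is at most pi/k, i.e. the data are at least as closely spaced as the
    cosine points for a degree-k fit (small tolerance for rounding)."""
    angles = sorted(math.acos(t) for t in xbar)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    return max(gaps) <= math.pi / k + 1e-12

# The cosine points themselves pass for k = m - 1 but fail for a higher degree.
m_pts = 6
pts = [-math.cos(math.pi * r / (m_pts - 1)) for r in range(m_pts)]
print(spacing_ok(pts, m_pts - 1), spacing_ok(pts, m_pts + 1))  # True False
```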
Provided the data are adequate, and the surface underlying the data is of a form that can be represented by a polynomial of the chosen degrees, the function should produce a good approximation to this surface. It is not, however, the true least squares surface fit, nor even a polynomial in $x$ and $y$, the original variables (see Section 6 of Clenshaw and Hayes (1965)), except in certain special cases. The most important of these is where the data values of $x$ are the same on each line $y = y_s$ (i.e., the data points lie on a rectangular mesh in the $(x, y)$ plane), the weights of the data points are all equal, and $x_{\max}$ and $x_{\min}$ are both constants (in this case they should be set to the largest and smallest data values of $x$, respectively).
If the dataset is such that it can be satisfactorily approximated by a polynomial of degrees $k'$ and $l'$, say, then if higher values are used for $k$ and $l$ in the function, all the coefficients $a_{ij}$ for $i > k'$ or $j > l'$ will take apparently random values within a range bounded by the size of the data errors, or rather less. (This behaviour of the Chebyshev coefficients, most readily observed if they are set out in a rectangular array, closely parallels that in curve fitting, examples of which are given in Section 8 of Hayes (1970).) In practice, therefore, to establish suitable values of $k'$ and $l'$, you should first seek (within the limitations discussed above) values for $k$ and $l$ which are large enough to exhibit the behaviour described. Values for $k'$ and $l'$ should then be chosen as the smallest which do not exclude any coefficients significantly larger than the random ones. A polynomial of degrees $k'$ and $l'$ should then be fitted to the data.
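One way to apply this advice is to lay the coefficients out as a rectangular array and flag those at the noise level. An illustrative Python sketch (the helper `flag_noise` and the chosen noise level are ours, not part of the Toolbox), using the coefficient grid from the Example section, where $k = 3$ and $l = 2$:

```python
def flag_noise(grid, noise_level):
    """Mark coefficients whose magnitude is at or below the noise level with '.',
    so the block of significant coefficients in the top-left stands out."""
    return [['.' if abs(c) <= noise_level else '*' for c in row] for row in grid]

# Coefficient grid a_ij from the Example section, read row by row (i down, j across).
a = [[15.3482, 5.1507, 0.1014],
     [1.1472, 0.1442, -0.1046],
     [0.0490, -0.0031, -0.0070],
     [0.0015, -0.0003, -0.0002]]
for row in flag_noise(a, 0.01):
    print(''.join(row))
```

Here the last row is all at noise level, suggesting (on this crude criterion alone) that $k' = 2$ might suffice; the fit should still be examined graphically.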
If the option to force the fit to contain a given polynomial factor in $x$ is used, and if zeros of the chosen factor coincide with data $x$ values on any line, then the effective number of data points on that line is reduced by the number of such coincidences. A similar consideration applies when forcing a factor in the $y$-direction. No account is taken of this by the function when testing that the degrees $k$ and $l$ have not been chosen too large.

## Example

```function nag_fit_2dcheb_lines_example
% Data on n = 4 lines y = const, with 8, 7, 7 and 6 points respectively.
m = int64([8; 7; 7; 6]);
k = int64(3);                     % degree in x
l = int64(2);                     % degree in y
x = [0.1; 1;   1.6; 2.1; 3.3; 3.9; 4.2; 4.9; ...   % points on y = 0
     0.1; 1.1; 1.9; 2.7; 3.2; 4.1; 4.5; ...        % points on y = 1
     0.5; 1.1; 1.3; 2.2; 2.9; 3.5; 3.9; ...        % points on y = 2
     1.7; 2;   2.4; 2.7; 3.1; 3.5];                % points on y = 4
y = [0; 1; 2; 4];
f = [1.01005; 1.10517; 1.17351; 1.23368; 1.39097; 1.47698; 1.52196; 1.63232; ...
     2.0201;  2.23256; 2.4185;  2.61993; 2.75426; 3.01364; 3.13662; ...
     3.15381; 3.34883; 3.41649; 3.73823; 4.00928; 4.2572;  4.43094; ...
     5.92652; 6.10701; 6.35625; 6.54982; 6.81713; 7.09534];
w = ones(28, 1);                  % equal weights
xmin = [0; 0.1; 0.4; 1.6];        % range of x on each line ...
xmax = [5; 4.5; 4; 3.5];          % ... scaled to [-1, 1] in the fit
nux = [0];                        % no forced polynomial factor in x
nuy = [0];                        % no forced polynomial factor in y
[a, ifail] = nag_fit_2dcheb_lines(m, k, l, x, y, f, w, xmin, xmax, nux, nuy)
```
```

a =

15.3482
5.1507
0.1014
1.1472
0.1442
-0.1046
0.0490
-0.0031
-0.0070
0.0015
-0.0003
-0.0002

ifail =

0

```


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013