The function may be called by the names: e02dac, nag_fit_dim2_spline_panel or nag_2d_spline_fit_panel.
3 Description
e02dac determines a bicubic spline fit $s(x,y)$ to the set of data points $({x}_{r},{y}_{r},{f}_{r})$ with weights ${w}_{r}$, for $\mathit{r}=1,2,\dots ,m$. The two sets of internal knots of the spline, $\left\{\lambda \right\}$ and $\left\{\mu \right\}$, associated with the variables $x$ and $y$ respectively, are prescribed by you. These knots can be thought of as dividing the data region of the $(x,y)$ plane into panels (see Figure 1 in Section 5). A bicubic spline consists of a separate bicubic polynomial in each panel, the polynomials joining together with continuity up to the second derivative across the panel boundaries.
$s(x,y)$ has the property that $\Sigma $, the sum of squares of its weighted residuals ${\rho}_{r}$, for $\mathit{r}=1,2,\dots ,m$, where
$$ {\rho}_{r}={w}_{r}\left(s({x}_{r},{y}_{r})-{f}_{r}\right), \tag{1}$$
is as small as possible for a bicubic spline with the given knot sets. The function produces this minimized value of $\Sigma $ and the coefficients ${c}_{ij}$ in the B-spline representation of $s(x,y)$ – see Section 9. e02dec, e02dfc and e02dhc are available to compute values and derivatives of the fitted spline from the coefficients ${c}_{ij}$.
The least squares criterion is not always sufficient to determine the bicubic spline uniquely: there may be a whole family of splines which have the same minimum sum of squares. In these cases, the function selects from this family the spline for which the sum of squares of the coefficients ${c}_{ij}$ is smallest: in other words, the minimal least squares solution. This choice, although arbitrary, reduces the risk of unwanted fluctuations in the spline fit. The method employed involves forming a system of $m$ linear equations in the coefficients ${c}_{ij}$ and then computing its least squares solution, which will be the minimal least squares solution when appropriate. The basis of the method is described in Hayes and Halliday (1974). The matrix of the equation is formed using a recurrence relation for B-splines which is numerically stable (see Cox (1972) and de Boor (1972) – the former contains the more elementary derivation but, unlike de Boor (1972), does not cover the case of coincident knots). The least squares solution is also obtained in a stable manner by using orthogonal transformations, viz. a variant of Givens rotation (see Gentleman (1973)). This requires only one row of the matrix to be stored at a time. Advantage is taken of the stepped-band structure which the matrix possesses when the data points are suitably ordered, there being at most sixteen nonzero elements in any row because of the definition of B-splines. First the matrix is reduced to upper triangular form and then the diagonal elements of this triangle are examined in turn. When an element is encountered whose square, divided by the mean squared weight, is less than a threshold $\epsilon $, it is replaced by zero and the rest of the elements in its row are reduced to zero by rotations with the remaining rows. 
The rank of the system is taken to be the number of nonzero diagonal elements in the final triangle, and the nonzero rows of this triangle are used to compute the minimal least squares solution. If all the diagonal elements are nonzero, the rank is equal to the number of coefficients ${c}_{ij}$ and the solution obtained is the ordinary least squares solution, which is unique in this case.
4 References
Cox M G (1972) The numerical evaluation of B-splines J. Inst. Math. Appl. 10 134–149
de Boor C (1972) On calculating with B-splines J. Approx. Theory 6 50–62
Gentleman W M (1973) Least squares computations by Givens transformations without square roots J. Inst. Math. Appl. 12 329–336
Hayes J G and Halliday J (1974) The least squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103
On entry: the coordinates of the data points $({x}_{\mathit{r}},{y}_{\mathit{r}},{f}_{\mathit{r}})$, for $\mathit{r}=1,2,\dots ,m$. The order of the data points is immaterial, but see the array point.
On entry: the weight ${w}_{r}$ of the $r$th data point. It is important to note the definition of weight implied by the equation (1) in Section 3, since it is also common usage to define weight as the square of this weight. In this function, each ${w}_{r}$ should be chosen inversely proportional to the (absolute) accuracy of the corresponding ${f}_{r}$, as expressed, for example, by the standard deviation or probable error of the ${f}_{r}$. When the ${f}_{r}$ are all of the same accuracy, all the ${w}_{r}$ may be set equal to $1.0$.
Note: the dimension, dim, of the array point must be at least $\left({\mathbf{m}}+({\mathbf{spline}}\mathbf{.}\mathbf{nx}-7)\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-7)\right)$.
On entry: indexing information usually provided by e02zac which enables the data points to be accessed in the order which produces the advantageous matrix structure mentioned in Section 3. This order is such that, if the $(x,y)$ plane is thought of as being divided into rectangular panels by the two sets of knots, all data in a panel occur before data in succeeding panels, where the panels are numbered from bottom to top and then left to right with the usual arrangement of axes, as indicated in Figure 1.
Figure 1
A data point lying exactly on one or more panel sides is considered to be in the highest numbered panel adjacent to the point. e02zac should be called to obtain the array point, unless it is provided by other means.
Note: the dimension, dim, of the array dl must be at least $({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4)\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4)$.
On exit: gives the squares of the diagonal elements of the reduced triangular matrix, divided by the mean squared weight. It includes those elements, less than $\epsilon $, which are treated as zero (see Section 3).
8: $\mathbf{eps}$ – double Input
On entry: a threshold $\epsilon $ for determining the effective rank of the system of linear equations. The rank is determined as the number of elements of the array dl which are nonzero. An element of dl is regarded as zero if it is less than $\epsilon $. Machine precision is a suitable value for $\epsilon $ in most practical applications which have only $2$ or $3$ decimals accurate in data. If some coefficients of the fit prove to be very large compared with the data ordinates, this suggests that $\epsilon $ should be increased so as to decrease the rank. The array dl will give a guide to appropriate values of $\epsilon $ to achieve this, as well as to the choice of $\epsilon $ in other cases where some experimentation may be needed to determine a value which leads to a satisfactory fit.
9: $\mathbf{sigma}$ – double * Output
On exit: $\Sigma $, the weighted sum of squares of residuals. This is not computed from the individual residuals but from the right-hand sides of the orthogonally-transformed linear equations. For further details see page 97 of Hayes and Halliday (1974). The two methods of computation are theoretically equivalent, but the results may differ because of rounding error.
10: $\mathbf{rank}$ – Integer * Output
On exit: the rank of the system as determined by the value of the threshold $\epsilon $.
Pointer to structure of type Nag_2dSpline with the following members:
nx – Integer Input
On entry: $\mathbf{nx}$ must specify the total number of knots associated with the variable $x$. It is such that $\mathbf{nx}-8$ is the number of interior knots.
Constraint: $\mathbf{nx}\ge 8$.
lamda – double Input/Output
On entry: $\mathbf{lamda}\left[i+3\right]$ must contain the $i$th interior knot ${\lambda}_{\mathit{i}+4}$ associated with the variable $x$, for $\mathit{i}=1,2,\dots ,\mathbf{nx}-8$. The knots must be in nondecreasing order and lie strictly within the range covered by the data values of $x$. A knot is a value of $x$ at which the spline is allowed to be discontinuous in the third derivative with respect to $x$, though continuous up to the second derivative. This degree of continuity can be reduced, if you require, by the use of coincident knots, provided that no more than four knots are chosen to coincide at any point. Two, or three, coincident knots allow loss of continuity in, respectively, the second and first derivative with respect to $x$ at the value of $x$ at which they coincide. Four coincident knots split the spline surface into two independent parts. For the choice of knots see Section 9.
On exit: the interior knots $\mathbf{lamda}\left[4\right]$ to $\mathbf{lamda}\left[\mathbf{nx}-5\right]$ are unchanged, and the segments $\mathbf{lamda}\left[0\right]$ to $\mathbf{lamda}\left[3\right]$ and $\mathbf{lamda}\left[\mathbf{nx}-4\right]$ to $\mathbf{lamda}\left[\mathbf{nx}-1\right]$ contain additional (exterior) knots introduced by the function in order to define the full set of B-splines required. The four knots in the first segment are all set equal to the lowest data value of $x$ and the other four additional knots are all set equal to the highest value: there is experimental evidence that coincident end-knots are best for numerical accuracy. The complete array must be left undisturbed if e02dec or e02dfc is to be used subsequently.
ny – Integer Input
On entry: $\mathbf{ny}$ must specify the total number of knots associated with the variable $y$. It is such that $\mathbf{ny}-8$ is the number of interior knots.
Constraint: $\mathbf{ny}\ge 8$.
mu – double Input/Output
On entry: $\mathbf{mu}\left[i+3\right]$ must contain the $i$th interior knot ${\mu}_{i+4}$ associated with the variable $y$, for $i=1,2,\dots ,\mathbf{ny}-8$.
On exit: the same remarks apply to $\mathbf{mu}$ as to $\mathbf{lamda}$ above, with $\mathbf{ny}$ replacing $\mathbf{nx}$, and $y$ replacing $x$.
c – double Output
On exit: gives the coefficients of the fit. $\mathbf{c}\left[({\mathbf{ny}}-4)\times (i-1)+j-1\right]$ is the coefficient ${c}_{\mathit{i}\mathit{j}}$ of Sections 3 and 9, for $\mathit{i}=1,2,\dots ,\mathbf{nx}-4$ and $\mathit{j}=1,2,\dots ,\mathbf{ny}-4$. These coefficients are used by e02dec or e02dfc to calculate values of the fitted function.
In normal usage, the call to e02dac follows a call to e01dac, e02dcc or e02ddc, in which case, members of the structure spline will have been set up correctly for input to e02dac.
12: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument $\langle \mathit{\text{value}}\rangle $ had an illegal value.
NE_CONSTRAINT
Constraint: $\mathbf{nx}\ge 8$.
Constraint: $\mathbf{ny}\ge 8$.
NE_INT
On entry, ${\mathbf{m}}=\langle \mathit{\text{value}}\rangle $.
Constraint: ${\mathbf{m}}>1$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_KNOTS_COINCIDE
More than four knots coincide at a single point.
NE_KNOTS_CONS
At least one set of knots is not in nondecreasing order.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_PANEL_ORDER
Array point does not indicate the data points in panel order.
NE_WEIGHT_ZERO
All the weights are zero, or rank determined as zero.
7 Accuracy
The computation of the B-splines and reduction of the observation matrix to triangular form are both numerically stable.
8 Parallelism and Performance
e02dac is not threaded in any implementation.
9 Further Comments
The time taken is approximately proportional to the number of data points, $m$, and to ${(3\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4)+4)}^{2}$.
The B-spline representation of the bicubic spline is
$$ s(x,y)=\sum _{i}\sum _{j}{c}_{ij}{M}_{i}\left(x\right){N}_{j}\left(y\right), $$
summed over $i=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{nx}-4$ and over $j=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{ny}-4$. Here ${M}_{i}\left(x\right)$ and ${N}_{j}\left(y\right)$ denote normalized cubic B-splines, the former defined on the knots ${\lambda}_{i},{\lambda}_{i+1},\dots ,{\lambda}_{i+4}$ and the latter on the knots ${\mu}_{j},{\mu}_{j+1},\dots ,{\mu}_{j+4}$. For further details, see Hayes and Halliday (1974) for bicubic splines and de Boor (1972) for normalized B-splines.
The choice of the interior knots, which help to determine the spline's shape, must largely be a matter of trial and error. It is usually best to start with a small number of knots and, examining the fit at each stage, add a few knots at a time in places where the fit is particularly poor. In intervals of $x$ or $y$ where the surface represented by the data changes rapidly, in function value or derivatives, more knots will be needed than elsewhere. In some cases guidance can be obtained by analogy with the case of coincident knots: for example, just as three coincident knots can produce a discontinuity in slope, three close knots can produce rapid change in slope. Of course, such rapid changes in behaviour must be adequately represented by the data points, as indeed must the behaviour of the surface generally, if a satisfactory fit is to be achieved. When there is no rapid change in behaviour, equally-spaced knots will often suffice.
In all cases the fit should be examined graphically before it is accepted as satisfactory.
The fit obtained is not defined outside the rectangle
$$ {\lambda}_{4}\le x\le {\lambda}_{{\mathbf{nx}}-3}\text{,}\quad {\mu}_{4}\le y\le {\mu}_{{\mathbf{ny}}-3}\text{.} $$
The reason for taking the extreme data values of $x$ and $y$ for these four knots is that, as is usual in data fitting, the fit cannot be expected to give satisfactory values outside the data region. If, nevertheless, you require values over a larger rectangle, this can be achieved by augmenting the data with two artificial data points $(a,c,0)$ and $(b,d,0)$ with zero weight, where $a\le x\le b$, $c\le y\le d$ defines the enlarged rectangle. In the case when the data are adequate to make the least squares solution unique (${\mathbf{rank}}=({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4)\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4)$), this enlargement will not affect the fit over the original rectangle, except for possibly enlarged rounding errors, and will simply continue the bicubic polynomials in the panels bordering the rectangle out to the new boundaries: in other cases the fit will be affected. Even using the original rectangle there may be regions within it, particularly at its corners, which lie outside the data region and where, therefore, the fit will be unreliable. For example, if there is no data point in panel $1$ of Figure 1 in Section 5, the least squares criterion leaves the spline indeterminate in this panel: the minimal spline determined by the function in this case passes through the value zero at the point $({\lambda}_{4},{\mu}_{4})$.
10 Example
This example reads a value for $\epsilon $, and a set of data points, weights and knot positions. If there are more $y$ knots than $x$ knots, it interchanges the $x$ and $y$ axes. It calls e02zac to sort the data points into panel order, e02dac to fit a bicubic spline to them, and e02dec to evaluate the spline at the data points.
Finally it prints:
– the weighted sum of squares of residuals computed from the linear equations;
– the rank determined by e02dac;
– data points, fitted values and residuals in panel order;
– the weighted sum of squares of the residuals; and
– the coefficients of the spline fit.
The program is written to handle any number of datasets.
Note: the data supplied in this example is not typical of a realistic problem: the number of data points would normally be much larger (in which case the array dimensions would have to be increased); and the value of $\epsilon $ would normally be much smaller on most machines (see Section 5; the relatively large value of ${10}^{\mathrm{-6}}$ has been chosen in order to illustrate a minimal least squares solution when ${\mathbf{rank}}<({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4)\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4)$; in this example $({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4)\times ({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4)=24$).