NAG Library Function Document
nag_2d_spline_fit_panel (e02dac)
1 Purpose
nag_2d_spline_fit_panel (e02dac) forms a minimal, weighted least squares bicubic spline surface fit with prescribed knots to a given set of data points.
2 Specification
#include <nag.h> 
#include <nage02.h> 
void 
nag_2d_spline_fit_panel (Integer m,
const double x[],
const double y[],
const double f[],
const double w[],
const Integer point[],
double dl[],
double eps,
double *sigma,
Integer *rank,
Nag_2dSpline *spline,
NagError *fail) 

3 Description
nag_2d_spline_fit_panel (e02dac) determines a bicubic spline fit
$s\left(x,y\right)$ to the set of data points
$\left({x}_{r},{y}_{r},{f}_{r}\right)$ with weights
${w}_{r}$, for
$\mathit{r}=1,2,\dots ,m$. The two sets of internal knots of the spline,
$\left\{\lambda \right\}$ and
$\left\{\mu \right\}$, associated with the variables
$x$ and
$y$ respectively, are prescribed by you. These knots can be thought of as dividing the data region of the
$\left(x,y\right)$ plane into panels (see
Figure 1 in
Section 5). A bicubic spline consists of a separate bicubic polynomial in each panel, the polynomials joining together with continuity up to the second derivative across the panel boundaries.
$s\left(x,y\right)$ has the property that
$\Sigma $, the sum of squares of its weighted residuals
${\rho}_{r}$, for
$\mathit{r}=1,2,\dots ,m$, where
$$\rho_r=w_r\left(s\left(x_r,y_r\right)-f_r\right)\quad\text{and}\quad\Sigma =\sum_{r=1}^{m}\rho_r^{2},\tag{1}$$
is as small as possible for a bicubic spline with the given knot sets. The function produces this minimized value of
$\Sigma $ and the coefficients
${c}_{ij}$ in the B-spline representation of
$s\left(x,y\right)$ – see
Section 8.
nag_2d_spline_eval (e02dec),
nag_2d_spline_eval_rect (e02dfc) and
nag_2d_spline_deriv_rect (e02dhc) are available to compute values and derivatives of the fitted spline from the coefficients
${c}_{ij}$.
The least squares criterion is not always sufficient to determine the bicubic spline uniquely: there may be a whole family of splines which have the same minimum sum of squares. In these cases, the function selects from this family the spline for which the sum of squares of the coefficients
${c}_{ij}$ is smallest: in other words, the minimal least squares solution. This choice, although arbitrary, reduces the risk of unwanted fluctuations in the spline fit. The method employed involves forming a system of
$m$ linear equations in the coefficients
${c}_{ij}$ and then computing its least squares solution, which will be the minimal least squares solution when appropriate. The basis of the method is described in
Hayes and Halliday (1974). The matrix of the equation is formed using a recurrence relation for B-splines which is numerically stable (see
Cox (1972) and
de Boor (1972) – the former contains the more elementary derivation but, unlike
de Boor (1972), does not cover the case of coincident knots). The least squares solution is also obtained in a stable manner by using orthogonal transformations, viz. a variant of Givens rotation (see
Gentleman (1973)). This requires only one row of the matrix to be stored at a time. Advantage is taken of the stepped-band structure which the matrix possesses when the data points are suitably ordered, there being at most sixteen nonzero elements in any row because of the definition of B-splines. First the matrix is reduced to upper triangular form and then the diagonal elements of this triangle are examined in turn. When an element is encountered whose square, divided by the mean squared weight, is less than a threshold
$\epsilon $, it is replaced by zero and the rest of the elements in its row are reduced to zero by rotations with the remaining rows. The rank of the system is taken to be the number of nonzero diagonal elements in the final triangle, and the nonzero rows of this triangle are used to compute the minimal least squares solution. If all the diagonal elements are nonzero, the rank is equal to the number of coefficients
${c}_{ij}$ and the solution obtained is the ordinary least squares solution, which is unique in this case.
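The row-by-row reduction and rank thresholding described above can be sketched as follows. This is a minimal illustration using plain Givens rotations on a small dense system, not the stepped-band, square-root-free variant the function actually employs; all names in the sketch are hypothetical.

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>

#define NC 2                    /* number of coefficients (columns) */

/* Rotate one weighted observation row (row, rhs) into the upper
   triangle R and transformed right-hand side d. */
static void givens_update(double R[NC][NC], double d[NC],
                          double row[NC], double rhs)
{
    for (int k = 0; k < NC; k++) {
        if (row[k] == 0.0)
            continue;
        double r = hypot(R[k][k], row[k]);
        double c = R[k][k] / r, s = row[k] / r;
        R[k][k] = r;
        for (int j = k + 1; j < NC; j++) {
            double t = c * R[k][j] + s * row[j];
            row[j] = -s * R[k][j] + c * row[j];
            R[k][j] = t;
        }
        double t = c * d[k] + s * rhs;
        rhs = -s * d[k] + c * rhs;
        d[k] = t;
    }
}

int main(void)
{
    /* Fit f = c0 + c1*x in the least squares sense; each equation is
       scaled by its weight, so a row is w*(1, x) with rhs w*f. */
    double xs[] = {0.0, 1.0, 2.0}, fs[] = {1.0, 3.0, 5.0};
    double ws[] = {1.0, 1.0, 1.0};
    double R[NC][NC] = {{0.0}}, d[NC] = {0.0};
    double sumw2 = 0.0;
    int m = 3;

    for (int r = 0; r < m; r++) {
        double row[NC] = {ws[r], ws[r] * xs[r]};
        givens_update(R, d, row, ws[r] * fs[r]);
        sumw2 += ws[r] * ws[r];
    }

    /* Rank: squared diagonal elements divided by the mean squared
       weight, compared against a threshold eps (cf. the dl array). */
    double eps = 1e-12, meanw2 = sumw2 / m;
    int rank = 0;
    for (int k = 0; k < NC; k++)
        if (R[k][k] * R[k][k] / meanw2 >= eps)
            rank++;

    /* Back-substitute for the (here unique) least squares solution. */
    double coef[NC];
    coef[1] = d[1] / R[1][1];
    coef[0] = (d[0] - R[0][1] * coef[1]) / R[0][0];

    printf("rank = %d, c0 = %f, c1 = %f\n", rank, coef[0], coef[1]);
    assert(rank == 2);
    assert(fabs(coef[0] - 1.0) < 1e-9);
    assert(fabs(coef[1] - 2.0) < 1e-9);
    return 0;
}
```

Since the data here lie exactly on a line, the reduction reproduces the coefficients exactly; with rank-deficient data the zeroed-diagonal handling described above would come into play.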
4 References
Cox M G (1972) The numerical evaluation of B-splines J. Inst. Math. Appl. 10 134–149
de Boor C (1972) On calculating with B-splines J. Approx. Theory 6 50–62
Gentleman W M (1973) Least-squares computations by Givens transformations without square roots J. Inst. Math. Appl. 12 329–336
Hayes J G and Halliday J (1974) The least-squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103
5 Arguments
 1:
m – Integer Input
On entry:
$m$, the number of data points.
Constraint:
${\mathbf{m}}>1$.
 2:
x[m] – const double Input
 3:
y[m] – const double Input
 4:
f[m] – const double Input
On entry: the coordinates of the data point
$\left({x}_{\mathit{r}},{y}_{\mathit{r}},{f}_{\mathit{r}}\right)$, for
$\mathit{r}=1,2,\dots ,m$. The order of the data points is immaterial, but see the array
point.
 5:
w[m] – const double Input
On entry: the weight
${w}_{r}$ of the
$r$th data point. It is important to note the definition of weight implied by the equation
(1) in
Section 3, since it is also common usage to define weight as the square of this weight. In this function, each
${w}_{r}$ should be chosen inversely proportional to the (absolute) accuracy of the corresponding
${f}_{r}$, as expressed, for example, by the standard deviation or probable error of the
${f}_{r}$. When the
${f}_{r}$ are all of the same accuracy, all the
${w}_{r}$ may be set equal to
$1.0$.
 6:
point[$\mathit{dim}$] – const Integer Input

Note: the dimension,
dim, of the array
point
must be at least
$\left({\mathbf{m}}+\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}7\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}7\right)\right)$.
On entry: indexing information usually provided by
nag_2d_panel_sort (e02zac) which enables the data points to be accessed in the order which produces the advantageous matrix structure mentioned in
Section 3. This order is such that, if the
$\left(x,y\right)$ plane is thought of as being divided into rectangular panels by the two sets of knots, all data in a panel occur before data in succeeding panels, where the panels are numbered from bottom to top and then left to right with the usual arrangement of axes, as indicated in
Figure 1.
[Figure 1: the panels into which the two sets of knots divide the data region, numbered from bottom to top and then left to right]
A data point lying exactly on one or more panel sides is considered to be in the highest numbered panel adjacent to the point.
nag_2d_panel_sort (e02zac) should be called to obtain the array
point, unless it is provided by other means.
 7:
dl[$\mathit{dim}$] – double Output

Note: the dimension,
dim, of the array
dl
must be at least
$\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)$.
On exit: gives the squares of the diagonal elements of the reduced triangular matrix, divided by the mean squared weight. It includes those elements, less than
$\epsilon $, which are treated as zero (see
Section 3).
 8:
eps – double Input
On entry: a threshold
$\epsilon $ for determining the effective rank of the system of linear equations. The rank is determined as the number of elements of the array
dl which are nonzero. An element of
dl is regarded as zero if it is less than
$\epsilon $.
Machine precision is a suitable value for
$\epsilon $ in most practical applications, where the data are typically accurate to only
$2$ or
$3$ decimal places. If some coefficients of the fit prove to be very large compared with the data ordinates, this suggests that
$\epsilon $ should be increased so as to decrease the rank. The array
dl will give a guide to appropriate values of
$\epsilon $ to achieve this, as well as to the choice of
$\epsilon $ in other cases where some experimentation may be needed to determine a value which leads to a satisfactory fit.
 9:
sigma – double * Output
On exit:
$\Sigma $, the weighted sum of squares of residuals. This is not computed from the individual residuals but from the right-hand sides of the orthogonally-transformed linear equations. For further details see page 97 of
Hayes and Halliday (1974). The two methods of computation are theoretically equivalent, but the results may differ because of rounding error.
 10:
rank – Integer * Output
On exit: the rank of the system as determined by the value of the threshold
$\epsilon $.
 ${\mathbf{rank}}=\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)$
 The least squares solution is unique.
 ${\mathbf{rank}}\ne \left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)$
 The minimal least squares solution is computed.
 11:
spline – Nag_2dSpline *

Pointer to structure of type Nag_2dSpline with the following members:
 nx – Integer Input

On entry: $\mathbf{nx}$ must specify the total number of knots associated with the variable $x$. It is such that $\mathbf{nx}-8$ is the number of interior knots.
Constraint:
$\mathbf{nx}\ge 8$.
 lamda – double Input/Output

On entry:
$\mathbf{lamda}\left[i+3\right]$ must contain the
$i$th interior knot
${\lambda}_{\mathit{i}+4}$ associated with the variable
$x$, for
$\mathit{i}=1,2,\dots ,\mathbf{nx}-8$. The knots must be in nondecreasing order and lie strictly within the range covered by the data values of
$x$. A knot is a value of
$x$ at which the spline is allowed to be discontinuous in the third derivative with respect to
$x$, though continuous up to the second derivative. This degree of continuity can be reduced, if you require, by the use of coincident knots, provided that no more than four knots are chosen to coincide at any point. Two, or three, coincident knots allow loss of continuity in, respectively, the second and first derivative with respect to
$x$ at the value of
$x$ at which they coincide. Four coincident knots split the spline surface into two independent parts. For choice of knots see
Section 8.
On exit: the interior knots
$\mathbf{lamda}\left[4\right]$ to
$\mathbf{lamda}\left[\mathbf{nx}-5\right]$ are unchanged, and the segments
$\mathbf{lamda}\left[0\right]$ to $\mathbf{lamda}\left[3\right]$ and
$\mathbf{lamda}\left[\mathbf{nx}-4\right]$ to $\mathbf{lamda}\left[\mathbf{nx}-1\right]$ contain additional (exterior) knots introduced by the function in order to define the full set of B-splines required. The four knots in the first segment are all set equal to the lowest data value of
$x$ and the other four additional knots are all set equal to the highest value: there is experimental evidence that coincident end knots are best for numerical accuracy. The complete array must be left undisturbed if
nag_2d_spline_eval (e02dec) or
nag_2d_spline_eval_rect (e02dfc) is to be used subsequently.
 ny – Integer Input

On entry:
$\mathbf{ny}$ must specify the total number of knots associated with the variable
$y$.
It is such that $\mathbf{ny}-8$ is the number of interior knots.
Constraint:
$\mathbf{ny}\ge 8$.
 mu – double Input/Output

On entry: $\mathbf{mu}\left[i+3\right]$ must contain the $i$th interior knot ${\mu}_{i+4}$ associated with the variable $y$, for $i=1,2,\dots ,\mathbf{ny}-8$.
On exit: the same remarks apply to
$\mathbf{mu}$ as to
$\mathbf{lamda}$ above, with
y replacing
x, and
$y$ replacing
$x$.
 c – double Output

On exit: gives the coefficients of the fit.
$\mathbf{c}\left[\left(\mathbf{ny}-4\right)\times \left(i-1\right)+j-1\right]$ is the coefficient
${c}_{\mathit{i}\mathit{j}}$ of
Sections 3 and
8, for
$\mathit{i}=1,2,\dots ,\mathbf{nx}-4$ and
$\mathit{j}=1,2,\dots ,\mathbf{ny}-4$. These coefficients are used by
nag_2d_spline_eval (e02dec) or
nag_2d_spline_eval_rect (e02dfc) to calculate values of the fitted function.
In normal usage, the call to nag_2d_spline_fit_panel (e02dac) follows a call to
nag_2d_spline_interpolant (e01dac),
nag_2d_spline_fit_grid (e02dcc) or
nag_2d_spline_fit_scat (e02ddc), in which case, members of the structure
spline will have been set up correctly for input to nag_2d_spline_fit_panel (e02dac).
 12:
fail – NagError * Input/Output

The NAG error argument (see
Section 3.6 in the Essential Introduction).
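The panel numbering convention behind the point array (panels numbered from bottom to top and then left to right, with a point lying exactly on a panel side assigned to the highest numbered adjacent panel) can be illustrated with a small helper. This is a hypothetical sketch of the convention only; in practice nag_2d_panel_sort (e02zac) should be used to construct point.

```c
#include <assert.h>
#include <stdio.h>

/* Number of sorted interior knots <= v: a point lying exactly on a
   knot counts towards the higher-numbered side of that knot. */
static int count_le(const double *knots, int n, double v)
{
    int k = 0;
    while (k < n && knots[k] <= v)
        k++;
    return k;
}

/* 1-based panel number of (x, y): the interior knots lam (nlam of
   them) and mu (nmu of them) divide the data region into
   (nlam+1)*(nmu+1) panels, numbered from bottom to top and then
   left to right. */
static int panel_number(double x, double y,
                        const double *lam, int nlam,
                        const double *mu, int nmu)
{
    int col = count_le(lam, nlam, x);  /* 0-based column, left to right */
    int row = count_le(mu, nmu, y);    /* 0-based row, bottom to top    */
    return col * (nmu + 1) + row + 1;
}

int main(void)
{
    /* One interior knot in each direction gives four panels. */
    double lam[] = {1.0};
    double mu[]  = {1.0};

    int p1 = panel_number(0.5, 0.5, lam, 1, mu, 1);  /* bottom-left   */
    int p2 = panel_number(0.5, 1.5, lam, 1, mu, 1);  /* top-left      */
    int p3 = panel_number(1.5, 0.5, lam, 1, mu, 1);  /* bottom-right  */
    int p4 = panel_number(1.0, 1.0, lam, 1, mu, 1);  /* on both knots */

    printf("%d %d %d %d\n", p1, p2, p3, p4);
    assert(p1 == 1 && p2 == 2 && p3 == 3 && p4 == 4);
    return 0;
}
```

A point on the knot lines, such as (1.0, 1.0) here, lands in panel 4, the highest numbered panel adjacent to it, matching the rule stated for the point array.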
6 Error Indicators and Warnings
 NE_ALLOC_FAIL
Dynamic memory allocation failed.
 NE_BAD_PARAM
On entry, argument $⟨\mathit{value}⟩$ had an illegal value.
 NE_CONSTRAINT
On entry, $\mathbf{nx}=⟨\mathit{value}⟩$.
Constraint: $\mathbf{nx}\ge 8$.
On entry, $\mathbf{ny}=⟨\mathit{value}⟩$.
Constraint: $\mathbf{ny}\ge 8$.
 NE_INT
On entry, ${\mathbf{m}}=⟨\mathit{value}⟩$.
Constraint: ${\mathbf{m}}>1$.
 NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact
NAG for assistance.
 NE_KNOTS_COINCIDE
More than four knots coincide at a single point.
 NE_KNOTS_CONS
At least one set of knots is not in nondecreasing order.
 NE_PANEL_ORDER
Array
point does not indicate the data points in panel order.
 NE_WEIGHT_ZERO
All the weights are zero, or rank determined as zero.
7 Accuracy
The computation of the B-splines and reduction of the observation matrix to triangular form are both numerically stable.
8 Further Comments
The time taken is approximately proportional to the number of data points, $m$, and to ${\left(3\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)+4\right)}^{2}$.
The B-spline representation of the bicubic spline is
$$s\left(x,y\right)=\sum_{i}\sum_{j}{c}_{ij}{M}_{i}\left(x\right){N}_{j}\left(y\right)$$
summed over
$i=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{nx}-4$ and over
$j=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{ny}-4$. Here
${M}_{i}\left(x\right)$ and
${N}_{j}\left(y\right)$ denote normalized cubic B-splines, the former defined on the knots
${\lambda}_{i},{\lambda}_{i+1},\dots ,{\lambda}_{i+4}$ and the latter on the knots
${\mu}_{j},{\mu}_{j+1},\dots ,{\mu}_{j+4}$. For further details, see
Hayes and Halliday (1974) for bicubic splines and
de Boor (1972) for normalized B-splines.
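The representation above can be evaluated directly from the Cox–de Boor recurrence, using the coefficient layout described in Section 5. The sketch below is only an illustration of the formula, not a substitute for nag_2d_spline_eval (e02dec); with every coefficient set to 1 it exercises the partition-of-unity property of the normalized B-splines.

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Normalized B-spline of order k (degree k-1) on the knot vector t
   (0-based), by the Cox-de Boor recurrence, with the convention
   0/0 = 0 for coincident knots.  k = 4 gives cubic B-splines. */
static double bspline(const double *t, int i, int k, double x)
{
    if (k == 1)
        return (t[i] <= x && x < t[i + 1]) ? 1.0 : 0.0;
    double a = 0.0, b = 0.0;
    if (t[i + k - 1] > t[i])
        a = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(t, i, k - 1, x);
    if (t[i + k] > t[i + 1])
        b = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(t, i + 1, k - 1, x);
    return a + b;
}

/* s(x,y) = sum_i sum_j c_ij M_i(x) N_j(y), with c stored as
   c[(ny-4)*(i-1) + j-1] for i = 1..nx-4 and j = 1..ny-4. */
static double eval_bicubic(const double *lamda, int nx,
                           const double *mu, int ny,
                           const double *c, double x, double y)
{
    double s = 0.0;
    for (int i = 1; i <= nx - 4; i++)
        for (int j = 1; j <= ny - 4; j++)
            s += c[(ny - 4) * (i - 1) + j - 1]
                 * bspline(lamda, i - 1, 4, x)
                 * bspline(mu, j - 1, 4, y);
    return s;
}

int main(void)
{
    /* Four coincident end knots at each end plus one interior knot,
       so nx = ny = 9 and nx-4 = ny-4 = 5. */
    double lamda[] = {0, 0, 0, 0, 1, 2, 2, 2, 2};
    double mu[]    = {0, 0, 0, 0, 1, 2, 2, 2, 2};
    int nx = 9, ny = 9;

    /* With every c_ij = 1, the tensor-product B-splines sum to 1
       inside the data region (partition of unity). */
    double c[25];
    for (int k = 0; k < 25; k++)
        c[k] = 1.0;

    double s1 = eval_bicubic(lamda, nx, mu, ny, c, 0.5, 1.5);
    double s2 = eval_bicubic(lamda, nx, mu, ny, c, 1.5, 0.25);
    printf("s(0.5,1.5) = %.12f, s(1.5,0.25) = %.12f\n", s1, s2);
    assert(fabs(s1 - 1.0) < 1e-12);
    assert(fabs(s2 - 1.0) < 1e-12);
    return 0;
}
```

The recursive form is convenient for exposition; a production evaluator would instead use the stable iterative recurrence of Cox (1972) and de Boor (1972), evaluating only the (at most) sixteen tensor products that are nonzero at a given point.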
The choice of the interior knots, which help to determine the spline's shape, must largely be a matter of trial and error. It is usually best to start with a small number of knots and, examining the fit at each stage, add a few knots at a time in places where the fit is particularly poor. In intervals of $x$ or $y$ where the surface represented by the data changes rapidly, in function value or derivatives, more knots will be needed than elsewhere. In some cases guidance can be obtained by analogy with the case of coincident knots: for example, just as three coincident knots can produce a discontinuity in slope, three close knots can produce rapid change in slope. Of course, such rapid changes in behaviour must be adequately represented by the data points, as indeed must the behaviour of the surface generally, if a satisfactory fit is to be achieved. When there is no rapid change in behaviour, equally-spaced knots will often suffice.
In all cases the fit should be examined graphically before it is accepted as satisfactory.
The fit obtained is not defined outside the rectangle
$${\lambda}_{4}\le x\le {\lambda}_{\mathbf{nx}-3},\quad {\mu}_{4}\le y\le {\mu}_{\mathbf{ny}-3}.$$
The reason for taking the extreme data values of
$x$ and
$y$ for these four knots is that, as is usual in data fitting, the fit cannot be expected to give satisfactory values outside the data region. If, nevertheless, you require values over a larger rectangle, this can be achieved by augmenting the data with two artificial data points
$\left(a,c,0\right)$ and
$\left(b,d,0\right)$ with zero weight, where
$a\le x\le b$,
$c\le y\le d$ defines the enlarged rectangle. In the case when the data are adequate to make the least squares solution unique (
${\mathbf{rank}}=\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)$), this enlargement will not affect the fit over the original rectangle, except for possibly enlarged rounding errors, and will simply continue the bicubic polynomials in the panels bordering the rectangle out to the new boundaries: in other cases the fit will be affected. Even using the original rectangle there may be regions within it, particularly at its corners, which lie outside the data region and where, therefore, the fit will be unreliable. For example, if there is no data point in panel
$1$ of
Figure 1 in
Section 5, the least squares criterion leaves the spline indeterminate in this panel: the minimal spline determined by the function in this case passes through the value zero at the point
$\left({\lambda}_{4},{\mu}_{4}\right)$.
9 Example
This example reads a value for
$\epsilon $, and a set of data points, weights and knot positions. If there are more
$y$ knots than
$x$ knots, it interchanges the
$x$ and
$y$ axes. It calls
nag_2d_panel_sort (e02zac) to sort the data points into panel order, nag_2d_spline_fit_panel (e02dac) to fit a bicubic spline to them, and
nag_2d_spline_eval (e02dec) to evaluate the spline at the data points.
Finally it prints:
– 
the weighted sum of squares of residuals computed from the linear equations; 
– 
the rank determined by nag_2d_spline_fit_panel (e02dac); 
– 
data points, fitted values and residuals in panel order; 
– 
the weighted sum of squares of the residuals; and 
– 
the coefficients of the spline fit. 
The program is written to handle any number of datasets.
Note: the data supplied in this example is
not typical of a realistic problem: the number of data points would normally be much larger (in which case the array dimensions would have to be increased); and the value of
$\epsilon $ would normally be much smaller on most machines (see
Section 5; the relatively large value of
${10}^{-6}$ has been chosen in order to illustrate a minimal least squares solution when
${\mathbf{rank}}<\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)$; in this example
$\left({\mathbf{spline}}\mathbf{.}\mathbf{nx}-4\right)\times \left({\mathbf{spline}}\mathbf{.}\mathbf{ny}-4\right)=24$).
9.1 Program Text
Program Text (e02dace.c)
9.2 Program Data
Program Data (e02dace.d)
9.3 Program Results
Program Results (e02dace.r)