# NAG Toolbox Chapter Introduction: E02 — Curve and Surface Fitting

## Scope of the Chapter

The main aim of this chapter is to assist you in finding a function which approximates a set of data points. Typically the data contain random errors, as of experimental measurement, which need to be smoothed out. To seek an approximation to the data, it is first necessary to specify for the approximating function a mathematical form (a polynomial, for example) which contains a number of unspecified coefficients: the appropriate fitting function then derives for the coefficients the values which provide the best fit of that particular form. The chapter deals mainly with curve and surface fitting (i.e., fitting with functions of one and of two variables) when a polynomial or a cubic spline is used as the fitting function, since these cover the most common needs. However, fitting with other functions and/or more variables can be undertaken by means of general linear or nonlinear functions (some of which are contained in other chapters) depending on whether the coefficients in the function occur linearly or nonlinearly. Cases where a graph rather than a set of data points is given can be treated simply by first reading a suitable set of points from the graph.
The chapter also contains functions for evaluating, differentiating and integrating polynomial and spline curves and surfaces, once the numerical values of their coefficients have been determined.
There is, too, a function for computing a Padé approximant of a mathematical function (see Section [Padé Approximants]).

## Background to the Problems

### Preliminary Considerations

In the curve-fitting problems considered in this chapter, we have a dependent variable $y$ and an independent variable $x$, and we are given a set of data points $(x_r, y_r)$, for $r = 1, 2, \dots, m$. The preliminary matters to be considered in this section will, for simplicity, be discussed in this context of curve-fitting problems. In fact, however, these considerations apply equally well to surface and higher-dimensional problems. Indeed, the discussion presented carries over essentially as it stands if, for these cases, we interpret $x$ as a vector of several independent variables and correspondingly each $x_r$ as a vector containing the $r$th data value of each independent variable.
We wish, then, to approximate the set of data points as closely as possible with a specified function, $f(x)$ say, which is as smooth as possible: $f(x)$ may, for example, be a polynomial. The requirements of smoothness and closeness conflict, however, and a balance has to be struck between them. Most often, the smoothness requirement is met simply by limiting the number of coefficients allowed in the fitting function – for example, by restricting the degree in the case of a polynomial. Given a particular number of coefficients in the function in question, the fitting functions of this chapter determine the values of the coefficients such that the ‘distance’ of the function from the data points is as small as possible. The necessary balance is struck when you compare a selection of such fits having different numbers of coefficients. If the number of coefficients is too low, the approximation to the data will be poor. If the number is too high, the fit will be too close to the data, essentially following the random errors and tending to have unwanted fluctuations between the data points. Between these extremes, there is often a group of fits all similarly close to the data points and then, particularly when least squares polynomials are used, the choice is clear: it is the fit from this group having the smallest number of coefficients.
You are in effect minimizing the smoothness measure (i.e., the number of coefficients) subject to the distance from the data points being acceptably small. Some of the functions, however, do this task themselves. They use a different measure of smoothness (in each case one that is continuous) and minimize it subject to the distance being less than a threshold specified by you. This is a much more automatic process, requiring only some experimentation with the threshold.

#### Fitting criteria: norms

A measure of the above ‘distance’ between the set of data points and the function $f(x)$ is needed. The distance from a single data point $(x_r, y_r)$ to the function can simply be taken as

$$ \varepsilon_r = y_r - f(x_r), \tag{1} $$

and is called the residual of the point. (With this definition, the residual is regarded as a function of the coefficients contained in $f(x)$; however, the term is also used to mean the particular value of $\varepsilon_r$ which corresponds to the fitted values of the coefficients.) However, we need a measure of distance for the set of data points as a whole. Three different measures are used in the different functions (which measure to select, according to circumstances, is discussed later in this sub-section). With $\varepsilon_r$ defined in (1), these measures, or norms, are

$$ \sum_{r=1}^{m} |\varepsilon_r|, \tag{2} $$

$$ \sqrt{\sum_{r=1}^{m} \varepsilon_r^2}, \tag{3} $$

and

$$ \max_r |\varepsilon_r|, \tag{4} $$

respectively the $\ell_1$ norm, the $\ell_2$ norm and the $\ell_\infty$ norm.
Minimization of one or other of these norms usually provides the fitting criterion, the minimization being carried out with respect to the coefficients in the mathematical form used for $f(x)$: with respect to the $b_i$ for example if the mathematical form is the power series in (8) below. The fit which results from minimizing (2) is known as the $\ell_1$ fit, or the fit in the $\ell_1$ norm: that which results from minimizing (3) is the $\ell_2$ fit, the well-known least squares fit (minimizing (3) is equivalent to minimizing the square of (3), i.e., the sum of squares of residuals, and it is the latter which is used in practice), and that from minimizing (4) is the $\ell_\infty$, or minimax, fit.
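As an illustration (not part of the NAG Toolbox), the three norms (2)–(4) can be computed directly from the residuals (1); the data points and trial coefficients below are invented for the example.

```python
import numpy as np

# Hypothetical data: five points scattered about the line y = 2x + 1.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([1.1, 1.4, 2.1, 2.4, 3.1])

# A candidate fitting function f(x) = b0 + b1*x with trial coefficients.
b0, b1 = 1.0, 2.0
residuals = y - (b0 + b1 * x)          # epsilon_r = y_r - f(x_r), as in (1)

l1   = np.sum(np.abs(residuals))       # norm (2): sum of |eps_r|
l2   = np.sqrt(np.sum(residuals**2))   # norm (3): root-sum-of-squares
linf = np.max(np.abs(residuals))       # norm (4): maximum |eps_r|
```

Minimizing `l2**2` (the sum of squares) over `b0, b1` would give the least squares fit; minimizing `l1` or `linf` gives the $\ell_1$ and minimax fits respectively.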
Strictly speaking, implicit in the use of the above norms are the statistical assumptions that the random errors in the $y_r$ are independent of one another and that any errors in the $x_r$ are negligible by comparison. From this point of view, the use of the $\ell_2$ norm is appropriate when the random errors in the $y_r$ have a Normal distribution, and the $\ell_\infty$ norm is appropriate when they have a rectangular distribution, as when fitting a table of values rounded to a fixed number of decimal places. The $\ell_1$ norm is appropriate when the error distribution has its frequency function proportional to the negative exponential of the modulus of the normalized error – not a common situation.
However, it may be that you are indifferent to these statistical considerations, and simply seek a fit which can be assessed by inspection, perhaps visually from a graph of the results. In this event, the $\ell_1$ norm is particularly appropriate when the data are thought to contain some ‘wild’ points (since fitting in this norm tends to be unaffected by the presence of a small number of such points), though of course in simple situations you may prefer to identify and reject these points. The $\ell_\infty$ norm should be used only when the maximum residual is of particular concern, as may be the case for example when the data values have been obtained by accurate computation, as of a mathematical function. Generally, however, a function based on least squares should be preferred, as being computationally faster and usually providing more information on which to assess the results. In many problems the three fits will not differ significantly for practical purposes.
Some of the functions based on the $\ell_2$ norm do not minimize the norm itself but instead minimize some (intuitively acceptable) measure of smoothness subject to the norm being less than a user-specified threshold. These functions fit with cubic or bicubic splines (see (10) and (14) below) and the smoothing measures relate to the size of the discontinuities in their third derivatives. A much more automatic fitting procedure follows from this approach.

#### Weighting of data points

The use of the above norms also assumes that the data values $y_r$ are of equal (absolute) accuracy. Some of the functions enable an allowance to be made to take account of differing accuracies. The allowance takes the form of ‘weights’ applied to the $y$ values so that those values known to be more accurate have a greater influence on the fit than others. These weights, to be supplied by you, should be calculated from estimates of the absolute accuracies of the $y$ values, these estimates being expressed as standard deviations, probable errors or some other measure which has the same dimensions as $y$. Specifically, for each $y_r$ the corresponding weight $w_r$ should be inversely proportional to the accuracy estimate of $y_r$. For example, if the percentage accuracy is the same for all $y_r$, then the absolute accuracy of $y_r$ is proportional to $y_r$ (assuming $y_r$ to be positive, as it usually is in such cases) and so $w_r = K/y_r$, for $r = 1, 2, \dots, m$, for an arbitrary positive constant $K$. (This definition of weight is stressed because often weight is defined as the square of that used here.) The norms (2), (3) and (4) above are then replaced respectively by

$$ \sum_{r=1}^{m} |w_r \varepsilon_r|, \tag{5} $$

$$ \sqrt{\sum_{r=1}^{m} w_r^2 \varepsilon_r^2}, \tag{6} $$

and

$$ \max_r |w_r \varepsilon_r|. \tag{7} $$
Again it is the square of (6) which is used in practice rather than (6) itself.
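A small illustration (with invented data and accuracy estimates, not a NAG routine) of forming the weighted sum of squares, i.e., the square of norm (6):

```python
import numpy as np

# Data values whose accuracy estimates (here standard deviations) differ;
# the weights w_r are taken inversely proportional to those estimates.
y      = np.array([2.0, 4.1, 5.9, 8.2])
fitted = np.array([2.0, 4.0, 6.0, 8.0])   # f(x_r) for some trial fit
sigma  = np.array([0.1, 0.1, 0.2, 0.2])   # accuracy estimates for y_r
w      = 1.0 / sigma                      # w_r proportional to 1/sigma_r

residuals = y - fitted                    # epsilon_r, as in (1)
weighted_ss = np.sum((w * residuals)**2)  # square of norm (6)
```

Note that `w` here follows the convention stressed in the text: the weight itself is inversely proportional to the accuracy estimate, not its square.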

### Curve Fitting

When, as is commonly the case, the mathematical form of the fitting function is immaterial to the problem, polynomials and cubic splines are to be preferred because their simplicity and ease of handling confer substantial benefits. The cubic spline is the more versatile of the two. It consists of a number of cubic polynomial segments joined end to end with continuity in first and second derivatives at the joins. The third derivative at the joins is in general discontinuous. The $x$ values of the joins are called knots, or, more precisely, interior knots. Their number determines the number of coefficients in the spline, just as the degree determines the number of coefficients in a polynomial.

#### Representation of polynomials

Two different forms for representing a polynomial are used in different functions. One is the usual power-series form
$$ f(x) \equiv b_0 + b_1 x + b_2 x^2 + \cdots + b_k x^k. \tag{8} $$
The other is the Chebyshev series form
$$ f(x) \equiv \tfrac{1}{2} a_0 T_0(x) + a_1 T_1(x) + a_2 T_2(x) + \cdots + a_k T_k(x), \tag{9} $$
where $T_i(x)$ is the Chebyshev polynomial of the first kind of degree $i$ (see page 9 of Cox and Hayes (1973)), and where the range of $x$ has been normalized to run from $-1$ to $+1$. The use of either form leads theoretically to the same fitted polynomial, but in practice results may differ substantially because of the effects of rounding error. The Chebyshev form is to be preferred, since it leads to much better accuracy in general, both in the computation of the coefficients and in the subsequent evaluation of the fitted polynomial at specified points. This form also has other advantages: for example, since the later terms in (9) generally decrease much more rapidly from left to right than do those in (8), the situation is more often encountered where the last terms are negligible and it is obvious that the degree of the polynomial can be reduced (note that on the interval $-1 \le x \le 1$, $T_i(x)$ attains the value unity for all $i$ but never exceeds it, so that the coefficient $a_i$ gives directly the maximum value of the term containing it). If the power-series form is used it is most advisable to work with the variable $x$ normalized to the range $-1$ to $+1$, carrying out the normalization before entering the relevant function. This will often substantially improve computational accuracy.
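The Chebyshev series form (9) can be evaluated with NumPy's `chebyshev` module (shown as an illustration, not the NAG interface); note that `chebval` does not halve the first coefficient, so $a_0/2$ must be supplied explicitly. The coefficients below are invented.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Represents (1/2)*3.0*T0(x) + 1.5*T1(x) - 0.25*T2(x), in the form (9).
# chebval computes c0*T0 + c1*T1 + ..., so the leading entry is a0/2.
a = [0.5 * 3.0, 1.5, -0.25]

xs = np.linspace(-1.0, 1.0, 5)
vals = C.chebval(xs, a)

# Since |T_i(x)| <= 1 on [-1, 1], each coefficient bounds its own term,
# so the sum of |coefficients| bounds the polynomial on the interval.
bound = sum(abs(c) for c in a)
```

This bounding property is what makes negligible trailing coefficients easy to spot in the Chebyshev form.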

#### Representation of cubic splines

A cubic spline is represented in the form
$$ f(x) \equiv c_1 N_1(x) + c_2 N_2(x) + \cdots + c_p N_p(x), \tag{10} $$
where $N_i(x)$, for $i = 1, 2, \dots, p$, is a normalized cubic B-spline (see Hayes (1974)). This form, also, has advantages of computational speed and accuracy over alternative representations.
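A sketch of the representation (10) using SciPy's `BSpline` class rather than the NAG Toolbox (the basis is the same normalized B-spline basis, but the interface and knot conventions here are SciPy's); the knots and coefficients are invented.

```python
import numpy as np
from scipy.interpolate import BSpline

# A cubic spline (k = 3) on [0, 1] with one interior knot at 0.5.
# Each end knot is repeated k + 1 times, giving p = len(t) - k - 1 = 5
# coefficients c_i in the form (10).
k = 3
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
c = np.array([1.0, 2.0, 0.5, 2.0, 1.0])
spline = BSpline(t, c, k)

# f(x) = sum_i c_i * N_i(x); since the N_i are non-negative and sum to 1,
# the value is a convex combination of the coefficients.
value = float(spline(0.25))
```

The partition-of-unity property of the normalized basis means that setting every coefficient to 1 yields the constant function 1 on the interval.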

### Surface Fitting

There are now two independent variables, and we shall denote these by $x$ and $y$. The dependent variable, which was denoted by $y$ in the curve-fitting case, will now be denoted by $f$. (This is a rather different notation from that indicated for the general-dimensional problem in the first paragraph of Section [Preliminary Considerations], but it has some advantages in presentation.)
Again, in the absence of contrary indications in the particular application being considered, polynomials and splines are the approximating functions most commonly used.

#### Representation of bivariate polynomials

The type of bivariate polynomial currently considered in the chapter can be represented in either of the two forms
$$ f(x,y) \equiv \sum_{i=0}^{k} \sum_{j=0}^{\ell} b_{ij} x^i y^j, \tag{11} $$

and

$$ f(x,y) \equiv {\sum_{i=0}^{k}}' \, {\sum_{j=0}^{\ell}}' \, a_{ij} T_i(x) T_j(y), \tag{12} $$
where $T_i(x)$ is the Chebyshev polynomial of the first kind of degree $i$ in the argument $x$ (see page 9 of Cox and Hayes (1973)), and correspondingly for $T_j(y)$. The prime on the two summation signs, following standard convention, indicates that the first term in each sum is halved, as shown for one variable in equation (9). The two forms (11) and (12) are mathematically equivalent, but again the Chebyshev form is to be preferred on numerical grounds, as discussed in Section [Representation of polynomials].

#### Bicubic splines: definition and representation

The bicubic spline is defined over a rectangle $R$ in the $(x,y)$ plane, the sides of $R$ being parallel to the $x$ and $y$ axes. $R$ is divided into rectangular panels, again by lines parallel to the axes. Over each panel the bicubic spline is a bicubic polynomial, that is it takes the form

$$ \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij} x^i y^j. \tag{13} $$

Each of these polynomials joins the polynomials in adjacent panels with continuity up to the second derivative. The constant $x$ values of the dividing lines parallel to the $y$-axis form the set of interior knots for the variable $x$, corresponding precisely to the set of interior knots of a cubic spline. Similarly, the constant $y$ values of dividing lines parallel to the $x$-axis form the set of interior knots for the variable $y$. Instead of representing the bicubic spline in terms of the above set of bicubic polynomials, however, it is represented, for the sake of computational speed and accuracy, in the form

$$ f(x,y) = \sum_{i=1}^{p} \sum_{j=1}^{q} c_{ij} M_i(x) N_j(y), \tag{14} $$
where $M_i(x)$, for $i = 1, 2, \dots, p$, and $N_j(y)$, for $j = 1, 2, \dots, q$, are normalized B-splines (see Hayes and Halliday (1974) for further details of bicubic splines and Hayes (1974) for normalized B-splines).

### General Linear and Nonlinear Fitting Functions

We have indicated earlier that, unless the data-fitting application under consideration specifically requires some other type of fit, a polynomial or a spline is usually to be preferred. Special functions for these types, in one and in two variables, are provided in this chapter. When the application does specify some other form of fitting, however, it may be treated by a function which deals with a general linear function, or by one for a general nonlinear function, depending on whether the coefficients occur linearly or nonlinearly.
The general linear fitting function can be written in the form
$$ f(x) \equiv c_1 \phi_1(x) + c_2 \phi_2(x) + \cdots + c_p \phi_p(x), \tag{15} $$
where $x$ is a vector of one or more independent variables, and the $\phi_i$ are any given functions of these variables (though they must be linearly independent of one another if there is to be the possibility of a unique solution to the fitting problem). This is not intended to imply that each $\phi_i$ is necessarily a function of all the variables: we may have, for example, that each $\phi_i$ is a function of a different single variable, and even that one of the $\phi_i$ is a constant. All that is required is that a value of each $\phi_i(x)$ can be computed when a value of each independent variable is given.
When the fitting function $f(x)$ is not linear in its coefficients, no more specific representation is available in general than $f(x)$ itself. However, we shall find it helpful later on to indicate the fact that $f(x)$ contains a number of coefficients (to be determined by the fitting process) by using instead the notation $f(x; c)$, where $c$ denotes the vector of coefficients. An example of a nonlinear fitting function is
$$ f(x; c) \equiv c_1 + c_2 \exp(-c_4 x) + c_3 \exp(-c_5 x), \tag{16} $$
which is in one variable and contains five coefficients. Note that here, as elsewhere in this Chapter Introduction, we use the term ‘coefficients’ to include all the quantities whose values are to be determined by the fitting process, not just those which occur linearly. We may observe that it is only the presence of the coefficients $c_4$ and $c_5$ which makes the form (16) nonlinear. If the values of these two coefficients were known beforehand, (16) would instead be a linear function which, in terms of the general linear form (15), has $p = 3$ and
$$ \phi_1(x) \equiv 1, \quad \phi_2(x) \equiv \exp(-c_4 x), \quad \text{and} \quad \phi_3(x) \equiv \exp(-c_5 x). $$
We may note also that polynomials and splines, such as (9) and (14), are themselves linear in their coefficients. Thus if, when fitting with these functions, a suitable special function is not available (as when more than two independent variables are involved or when fitting in the $\ell_1$ norm), it is appropriate to use a function designed for a general linear function.
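A sketch of least squares fitting of the general linear form (15), using NumPy rather than a NAG function: the basis functions chosen here are the constant and the two exponentials of (16) with $c_4 = 1$ and $c_5 = 2$ fixed in advance, and the data are generated exactly for the illustration.

```python
import numpy as np

# Synthetic data generated exactly from f(x) = 1 + 2*exp(-x) + 3*exp(-2x).
x = np.linspace(0.0, 2.0, 20)
y = 1.0 + 2.0 * np.exp(-x) + 3.0 * np.exp(-2.0 * x)

# Design matrix: column i holds phi_i evaluated at every data point,
# with phi_1 = 1, phi_2 = exp(-x), phi_3 = exp(-2x), as in the text.
A = np.column_stack([np.ones_like(x), np.exp(-x), np.exp(-2.0 * x)])

# Minimize the sum of squares of residuals over the coefficients c_i.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With exact data the recovered coefficients reproduce the generating values; with noisy data the same call gives the $\ell_2$ fit.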

### Constrained Problems

So far, we have considered only fitting processes in which the values of the coefficients in the fitting function are determined by an unconstrained minimization of a particular norm. Some fitting problems, however, require that further restrictions be placed on the determination of the coefficient values. Sometimes these restrictions are contained explicitly in the formulation of the problem in the form of equalities or inequalities which the coefficients, or some function of them, must satisfy. For example, if the fitting function contains a term $A \exp(-kx)$, it may be required that $k \ge 0$. Often, however, the equality or inequality constraints relate to the value of the fitting function or its derivatives at specified values of the independent variable(s), but these too can be expressed in terms of the coefficients of the fitting function, and it is appropriate to do this if a general linear or nonlinear function is being used. For example, if the fitting function is that given in (10), the requirement that the first derivative of the function at $x = x_0$ be non-negative can be expressed as
$$ c_1 N_1'(x_0) + c_2 N_2'(x_0) + \cdots + c_p N_p'(x_0) \ge 0, \tag{17} $$
where the prime denotes differentiation with respect to $x$ and each derivative is evaluated at $x = x_0$. On the other hand, if the requirement had been that the derivative at $x = x_0$ be exactly zero, the inequality sign in (17) would be replaced by an equality.
Functions which provide a facility for minimizing the appropriate norm subject to such constraints are discussed in Section [Constraints].

### Padé Approximants

A Padé approximant to a function $f(x)$ is a rational function (ratio of two polynomials) whose Maclaurin series expansion is the same as that of $f(x)$ up to and including the term in $x^k$, where $k$ is the sum of the degrees of the numerator and denominator of the approximant. Padé approximation can be a useful technique when values of a function are to be obtained from its Maclaurin series but convergence of the series is unacceptably slow or even nonexistent.

## Recommendations on Choice and Use of Available Functions

### General

The choice of a function to treat a particular fitting problem will depend first of all on the fitting function and the norm to be used. Unless there is good reason to the contrary, the fitting function should be a polynomial or a cubic spline (in the appropriate number of variables) and the norm should be the $\ell_2$ norm (leading to the least squares fit). If some other function is to be used, the choice of function will depend on whether the function is nonlinear (in which case see Section [Nonlinear functions]) or linear in its coefficients (see Section [General linear functions]), and, in the latter case, on whether the $\ell_1$, $\ell_2$ or $\ell_\infty$ norm is to be used. The latter section is appropriate for polynomials and splines, too, if the $\ell_1$ or $\ell_\infty$ norm is preferred, with one exception: there is a special function for fitting polynomial curves in the unweighted $\ell_\infty$ norm (see Section [Minimax space polynomials]).
In the case of a polynomial or cubic spline, if there is only one independent variable, you should choose a spline (see Section [Cubic Spline Curves]) when the curve represented by the data is of complicated form, perhaps with several peaks and troughs. When the curve is of simple form, first try a polynomial (see Section [Polynomial Curves]) of low degree, say up to degree $5$ or $6$, and then a spline if the polynomial fails to provide a satisfactory fit. (Of course, if third-derivative discontinuities are unacceptable, a polynomial is the only choice.) If the problem is one of surface fitting, the polynomial function (see Section [Least squares polynomials]) should be tried first if the data arrangement happens to be appropriate, otherwise one of the spline functions (see Section [Automatic fitting with bicubic splines]). If the problem has more than two independent variables, it may be treated by the general linear function in Section [General linear functions], again using a polynomial in the first instance.
Another factor which affects the choice of function is the presence of constraints, as previously discussed in Section [Constrained Problems]. Indeed this factor is likely to be overriding at present, because of the limited number of functions which have the necessary facility. Consequently those functions have been grouped together for discussion in Section [Constraints].

#### Data considerations

A satisfactory fit cannot be expected by any means if the number and arrangement of the data points do not adequately represent the character of the underlying relationship: sharp changes in behaviour, in particular, such as sharp peaks, should be well covered. Data points should extend over the whole range of interest of the independent variable(s): extrapolation outside the data ranges is most unwise. Then, with polynomials, it is advantageous to have additional points near the ends of the ranges, to counteract the tendency of polynomials to develop fluctuations in these regions. When, with polynomial curves, you can precisely choose the $x$ values of the data, the special points defined in Section [Least squares polynomials: selected data points] should be selected. With polynomial surfaces, each of these same $x$ values should, where possible, be combined with each of a corresponding set of $y$ values (not necessarily with the same value of $n$), thus forming a rectangular grid of $(x,y)$ values. With splines the choice is less critical as long as the character of the relationship is adequately represented. All fits should be tested graphically before accepting them as satisfactory.
For this purpose it should be noted that it is not sufficient to plot the values of the fitted function only at the data values of the independent variable(s); at the least, its values at a similar number of intermediate points should also be plotted, as unwanted fluctuations may otherwise go undetected. Such fluctuations are the less likely to occur the lower the number of coefficients chosen in the fitting function. No firm guide can be given, but as a rough rule, at least initially, the number of coefficients should not exceed half the number of data points (points with equal or nearly equal values of the independent variable, or both independent variables in surface fitting, counting as a single point for this purpose). However, the situation may be such, particularly with a small number of data points, that a satisfactorily close fit to the data cannot be achieved without unwanted fluctuations occurring. In such cases, it is often possible to improve the situation by a transformation of one or more of the variables, as discussed in the next section: otherwise it will be necessary to provide extra data points. Further advice on curve-fitting is given in Cox and Hayes (1973) and, for polynomials only, in Hayes (1970). Much of the advice applies also to surface fitting; see also the function documents.

#### Transformation of variables

Before starting the fitting, consideration should be given to the choice of a good form in which to deal with each of the variables: often it will be satisfactory to use the variables as they stand, but sometimes the use of the logarithm, square root, or some other function of a variable will lead to a better-behaved relationship. This question is customarily taken into account in preparing graphs and tables of a relationship and the same considerations apply when curve or surface fitting. The practical context will often give a guide. In general, it is best to avoid having to deal with a relationship whose behaviour in one region is radically different from that in another. A steep rise at the left-hand end of a curve, for example, can often best be treated by curve-fitting in terms of $\log(x + c)$ with some suitable value of the constant $c$. A case when such a transformation gave substantial benefit is discussed in page 60 of Hayes (1970). According to the features exhibited in any particular case, transformation of either dependent variable or independent variable(s) or both may be beneficial. When there is a choice it is usually better to transform the independent variable(s): if the dependent variable is transformed, the weights attached to the data points must be adjusted. Thus (denoting the dependent variable by $y$, as in the notation for curves) if the $y_r$ to be fitted have been obtained by a transformation $y = g(Y)$ from original data values $Y_r$, with weights $W_r$, for $r = 1, 2, \dots, m$, we must take
$$ w_r = W_r / (dy/dY), \tag{18} $$
where the derivative is evaluated at $Y_r$. Strictly, the transformation of $Y$ and the adjustment of weights are valid only when the data errors in the $Y_r$ are small compared with the range spanned by the $Y_r$, but this is usually the case.
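As a concrete instance of (18) (with invented data, not tied to any NAG routine): for the transformation $y = \log Y$ we have $dy/dY = 1/Y$, so the adjusted weights are $w_r = W_r Y_r$.

```python
import numpy as np

# Original data values and their weights before transformation.
Y = np.array([1.0, 10.0, 100.0])
W = np.array([1.0, 1.0, 1.0])

y = np.log(Y)      # transformed data to be fitted
dydY = 1.0 / Y     # derivative of the transformation, evaluated at Y_r
w = W / dydY       # adjusted weights, as in (18): here w_r = W_r * Y_r
```

The larger $Y$ values receive larger weights, compensating for the compression that the logarithm applies to them.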

### Polynomial Curves

#### Least squares polynomials: arbitrary data points

nag_fit_1dcheb_arb (e02ad) fits to arbitrary data points, with arbitrary weights, polynomials of all degrees up to a maximum degree $k$ of your choice. If you are seeking only a low-degree polynomial, up to degree $5$ or $6$ say, $k = 10$ is an appropriate value, providing there are about $20$ data points or more. To assist in deciding the degree of polynomial which satisfactorily fits the data, the function provides the root-mean-square residual $s_i$ for all degrees $i = 0, 1, \dots, k$. In a satisfactory case, these $s_i$ will decrease steadily as $i$ increases and then settle down to a fairly constant value, as shown in the example
| $i$ | $s_i$ |
|----:|-------:|
| 0 | 3.5215 |
| 1 | 0.7708 |
| 2 | 0.1861 |
| 3 | 0.0820 |
| 4 | 0.0554 |
| 5 | 0.0251 |
| 6 | 0.0264 |
| 7 | 0.0280 |
| 8 | 0.0277 |
| 9 | 0.0297 |
| 10 | 0.0271 |
If the $s_i$ values settle down in this way, it indicates that the closest polynomial approximation justified by the data has been achieved. The degree which first gives the approximately constant value of $s_i$ (degree $5$ in the example) is the appropriate degree to select. (If you are prepared to accept a fit higher than sixth degree, you should simply find a high enough value of $k$ to enable the type of behaviour indicated by the example to be detected: thus you should seek values of $k$ for which at least $4$ or $5$ consecutive values of $s_i$ are approximately the same.) If the degree were allowed to go high enough, $s_i$ would, in most cases, eventually start to decrease again, indicating that the data points are being fitted too closely and that undesirable fluctuations are developing between the points. In some cases, particularly with a small number of data points, this final decrease is not distinguishable from the initial decrease in $s_i$. In such cases, you may seek an acceptable fit by examining the graphs of several of the polynomials obtained. Failing this, you may (a) seek a transformation of variables which improves the behaviour, (b) try fitting a spline, or (c) provide more data points. If data can be provided simply by drawing an approximating curve by hand and reading points from it, use the points discussed in Section [Least squares polynomials: selected data points].
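The settling-down behaviour can be reproduced in a few lines with NumPy's Chebyshev fitting routines (an illustration only; e02ad computes the $s_i$ for all degrees in a single call, and its precise residual normalization may differ). The quadratic-plus-noise data here are synthetic.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic data: a quadratic plus small random noise of size ~0.01.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 + x + x**2 + 0.01 * rng.standard_normal(x.size)

# Fit each degree in turn and record the root-mean-square residual s_i.
s = []
for deg in range(8):
    coef = C.chebfit(x, y, deg)
    resid = y - C.chebval(x, coef)
    s.append(np.sqrt(np.mean(resid**2)))
```

Here the $s_i$ drop sharply up to degree 2 and then settle near the noise level, so degree 2 would be selected.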

#### Least squares polynomials: selected data points

When you are at liberty to choose the $x$ values of data points, such as when the points are taken from a graph, it is most advantageous when fitting with polynomials to use the values $x_r = \cos(\pi r/n)$, for $r = 0, 1, \dots, n$, for some value of $n$, a suitable value for which is discussed at the end of this section. Note that these $x_r$ relate to the variable $x$ after it has been normalized so that its range of interest is $-1$ to $+1$. nag_fit_1dcheb_arb (e02ad) may then be used as in Section [Least squares polynomials: arbitrary data points] to seek a satisfactory fit. However, if the ordinate values are of equal weight, as would often be the case when they are read from a graph, nag_fit_1dcheb_glp (e02af) is to be preferred, as being simpler to use and faster. This latter function provides the coefficients $a_j$, for $j = 0, 1, \dots, n$, in the Chebyshev series form of the polynomial of degree $n$ which interpolates the data. In a satisfactory case, the later coefficients in this series, after some initial significant ones, will exhibit a random behaviour, some positive and some negative, with a size about that of the errors in the data or less. All these ‘random’ coefficients should be discarded, and the remaining (initial) terms of the series be taken as the approximating polynomial. This truncated polynomial is a least squares fit to the data, though with the point at each end of the range given half the weight of each of the other points. The following example illustrates a case in which degree $5$ or perhaps $6$ would be chosen for the approximating polynomial.
| j | aj |
|---:|------:|
| 0 | 9.315 |
| 1 | -8.030 |
| 2 | 0.303 |
| 3 | -1.483 |
| 4 | 0.256 |
| 5 | -0.386 |
| 6 | 0.076 |
| 7 | 0.022 |
| 8 | 0.014 |
| 9 | 0.005 |
| 10 | 0.011 |
| 11 | -0.040 |
| 12 | 0.017 |
| 13 | -0.054 |
| 14 | 0.010 |
| 15 | -0.034 |
| 16 | -0.001 |
Basically, the value of n$n$ used needs to be large enough to exhibit the type of behaviour illustrated in the above example. A value of 16$16$ is suggested as being satisfactory for very many practical problems, the required cosine values for this value of n$n$ being given on page 11 of Cox and Hayes (1973). If a satisfactory fit is not obtained, a spline fit should be tried, or, if you are prepared to accept a higher degree of polynomial, n$n$ should be increased: doubling n$n$ is an advantageous strategy, since the set of values cos(πr / 2n)$\mathrm{cos}\left(\pi \mathit{r}/2n\right)$, for r = 0,1,…,2n$\mathit{r}=0,1,\dots ,2n$, contains all the values of cos(πr / n)$\mathrm{cos}\left(\pi \mathit{r}/n\right)$, for r = 0,1,…,n$\mathit{r}=0,1,\dots ,n$, so that the old data set will be re-used in the new one. Thus, for example, increasing n$n$ from 16$16$ to 32$32$ will require only 16$16$ new data points, a smaller number than for any other increase of n$n$. If data points are particularly expensive to obtain, a smaller initial value than 16$16$ may be tried, provided you are satisfied that the number is adequate to reflect the character of the underlying relationship. Again, the number should be doubled if a satisfactory fit is not obtained.
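The behaviour of the coefficient tail can be reproduced with numpy as a stand-in for nag_fit_1dcheb_glp (e02af); the curve, the error level and the 0.05 significance threshold below are all invented for the illustration.

```python
import numpy as np

n = 16
x = np.cos(np.pi * np.arange(n + 1) / n)     # the recommended abscissae on [-1, 1]

rng = np.random.default_rng(1)
# Invented "graph readings": a smooth curve plus reading errors of about 0.01.
y = np.exp(x) * np.sin(2.0 * x) + 0.01 * rng.standard_normal(x.size)

# Degree-n polynomial interpolating the data, in Chebyshev series form.
coef = np.polynomial.chebyshev.chebfit(x, y, n)

# Discard the trailing 'random' coefficients: keep terms up to the last
# coefficient exceeding a threshold of the order of the data errors.
significant = np.abs(coef) > 0.05
truncated = coef[: int(significant.nonzero()[0].max()) + 1]
```

The truncated series is then the least squares fit described above (with half weight at the two end points).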

#### Minimax space polynomials

nag_fit_1dmmax (e02ac) determines the polynomial of given degree which is a minimax space fit to arbitrary data points with equal weights. (If unequal weights are required, the polynomial must be treated as a general linear function and fitted using nag_fit_glin_linf (e02gc).) To arrive at a satisfactory degree it will be necessary to try several different degrees and examine the results graphically. Initial guidance can be obtained from the value of the maximum residual: this will vary with the degree of the polynomial in very much the same way as does si${s}_{i}$ in least squares fitting, but it is much more expensive to investigate this behaviour in the same detail.
The algorithm uses the power-series form of the polynomial, so for numerical accuracy it is advisable to normalize the data range of x$x$ to [1,1]$\left[-1,1\right]$ before fitting.
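In code, this normalization is a single linear map; a minimal sketch with illustrative data:

```python
import numpy as np

x = np.array([3.0, 4.5, 7.0, 9.0, 12.0])    # raw data abscissae (illustrative)
a, b = x.min(), x.max()                      # range of interest [a, b]
x_norm = (2.0 * x - (a + b)) / (b - a)       # mapped linearly onto [-1, 1]
```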

### Cubic Spline Curves

#### Least squares cubic splines

nag_fit_1dspline_knots (e02ba) fits to arbitrary data points, with arbitrary weights, a cubic spline with interior knots specified by you. The choice of these knots so as to give an acceptable fit must largely be a matter of trial and error, though with a little experience a satisfactory choice can often be made after one or two trials. It is usually best to start with a small number of knots (too many will result in unwanted fluctuations in the fit, or even in there being no unique solution) and, examining the fit graphically at each stage, to add a few knots at a time at places where the fit is particularly poor. Moving the existing knots towards these places will also often improve the fit. In regions where the behaviour of the curve underlying the data is changing rapidly, closer knots will be needed than elsewhere. Otherwise, positioning is not usually very critical and equally-spaced knots are often satisfactory. See also the next section, however.
A useful feature of the function is that it can be used in applications which require the continuity to be less than the normal continuity of the cubic spline. For example, the fit may be required to have a discontinuous slope at some point in the range. This can be achieved by placing three coincident knots at the given point. Similarly a discontinuity in the second derivative at a point can be achieved by placing two knots there. Analogy with these discontinuous cases can provide guidance in more usual cases: for example, just as three coincident knots can produce a discontinuity in slope, so three close knots can produce a rapid change in slope. The closer the knots are, the more rapid can the change be.
An example set of data is given in Figure 1. It is a rather tricky set, because of the scarcity of data on the right, but it will serve to illustrate some of the above points and to show some of the dangers to be avoided. Three interior knots (indicated by the vertical lines at the top of the diagram) are chosen as a start. We see that the resulting curve is not steep enough in the middle and fluctuates at both ends, severely on the right. The spline is unable to cope with the shape and more knots are needed.
Figure 1
Figure 2
Figure 3
In Figure 2, three knots have been added in the centre, where the data shows a rapid change in behaviour, and one further out at each end, where the fit is poor. The fit is still poor, so a further knot is added in this region and, in Figure 3, disaster ensues in rather spectacular fashion.
The reason is that, at the right-hand end, the fits in Figures 1 and 2 have been interpreted as poor simply because of their fluctuations about the curve assumed to underlie the data. But the fitting process knows only about the data and nothing else about the underlying curve, so it is important to consider only closeness to the data when deciding goodness-of-fit.
Thus, in Figure 1, the curve fits the last two data points quite well compared with the fit elsewhere, so no knot should have been added in this region. In Figure 2, the curve goes exactly through the last two points, so a further knot is certainly not needed here.
Figure 4 shows what can be achieved without the extra knot on each of the flat regions. Remembering that within each knot interval the spline is a cubic polynomial, there is really no need to have more than one knot interval covering each flat region.
Figure 4
What we have, in fact, in Figures 2 and 3 is a case of too many knots (and so too many coefficients in the spline equation) for the number of data points. The warning in the second paragraph of Section [Preliminary Considerations] was that the fit will then be too close to the data, tending to have unwanted fluctuations between the data points. The warning applies locally for splines, in the sense that, in localities where there are plenty of data points, there can be a lot of knots, as long as there are few knots where there are few points, especially near the ends of the interval. In the present example, with so few data points on the right, just the one extra knot in Figure 2 is too many! The signs are clearly present, with the last two points fitted exactly (at least to graphical accuracy, and actually much closer than that) and fluctuations within the last two knot intervals (compare Figure 1, where only the final point is fitted exactly and one of the wobbles spans several data points).
The situation in Figure 3 is different. The fit, if computed exactly, would still pass through the last two data points, with even more violent fluctuations. However, the problem has become so ill-conditioned that all accuracy has been lost. Indeed, if the last interior knot were moved a tiny amount to the right, there would be no unique solution and an error message would have resulted. Near-singularity is, sadly, not picked up by the function, but can be spotted readily in a graph, as in Figure 3. B-spline coefficients becoming large, with alternating signs, is another indication. However, it is better to avoid such situations, firstly by providing, whenever possible, data adequately covering the range of interest, and secondly by placing knots only where there is a reasonable amount of data.
The example here could, in fact, have utilized from the start the observation made in the second paragraph of this section, that three close knots can produce a rapid change in slope. The example has two such rapid changes and so requires two sets of three close knots (in fact, the two sets can be so close that one knot can serve in both sets, so only five knots prove sufficient in Figure 4). It should be noted, however, that the rapid turn occurs within the range spanned by the three knots. This is the reason that the six knots in Figure 2 are not satisfactory as they do not quite span the two turns.
Some more examples to illustrate the choice of knots are given in Cox and Hayes (1973).

#### Automatic fitting with cubic splines

nag_fit_1dspline_auto (e02be) fits cubic splines to arbitrary data points with arbitrary weights but itself chooses the number and positions of the knots. You have to supply only a threshold for the sum of squares of residuals. The function first builds up a knot set by a series of trial fits in the 2${\ell }_{2}$ norm. Then, with the knot set decided, the final spline is computed to minimize a certain smoothing measure subject to satisfaction of the chosen threshold. Thus it is easier to use than nag_fit_1dspline_knots (e02ba) (see the previous section), requiring only some experimentation with this threshold. It should therefore be first choice unless you have a preference for the ordinary least squares fit or, for example, if you wish to experiment with knot positions, trying to keep their number down (nag_fit_1dspline_auto (e02be) aims only to be reasonably frugal with knots).
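A similar automatic facility, based on Dierckx's smoothing-spline algorithm, is available in scipy and can be used to experiment with the residual threshold; the data and the threshold here are invented:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 80)
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(x.size)

# s is the threshold on the sum of squared residuals; the routine chooses
# the number and positions of the knots itself.
threshold = x.size * 0.1**2
spl = UnivariateSpline(x, y, k=3, s=threshold)

rss = float(np.sum((spl(x) - y) ** 2))
n_interior_knots = spl.get_knots().size - 2
```

Setting the threshold near m times the squared data error, as here, is the usual starting point; a smaller value gives a closer but less smooth fit.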

### Polynomial and Spline Surfaces

#### Least squares polynomials

nag_fit_2dcheb_lines (e02ca) fits bivariate polynomials of the form (12), with k$k$ and $\ell$ specified by you, to data points in a particular, but commonly occurring, arrangement. This is such that, when the data points are plotted in the plane of the independent variables x$x$ and y$y$, they lie on lines parallel to the x$x$-axis. Arbitrary weights are allowed. The matter of choosing satisfactory values for k$k$ and $\ell$ is discussed in Section [Further Comments] in (e02ca).

#### Least squares bicubic splines

nag_fit_2dspline_panel (e02da) fits to arbitrary data points, with arbitrary weights, a bicubic spline with its two sets of interior knots specified by you. For choosing these knots, the advice given for cubic splines, in Section [Least squares cubic splines] above, applies here too (see also the next section, however). If changes in the behaviour of the surface underlying the data are more marked in the direction of one variable than of the other, more knots will be needed for the former variable than the latter. Note also that, in the surface case, the reduction in continuity caused by coincident knots will extend across the whole spline surface: for example, if three knots associated with the variable x$x$ are chosen to coincide at a value L$L$, the spline surface will have a discontinuous slope across the whole extent of the line x = L$x=L$.
With some sets of data and some choices of knots, the least squares bicubic spline will not be unique. This will not occur, with a reasonable choice of knots, if the rectangle R$R$ is well covered with data points: here R$R$ is defined as the smallest rectangle in the (x,y)$\left(x,y\right)$ plane, with sides parallel to the axes, which contains all the data points. Where the least squares solution is not unique, the minimal least squares solution is computed, namely that least squares solution which has the smallest value of the sum of squares of the B-spline coefficients cij${c}_{ij}$ (see the end of Section [Bicubic splines: definition and representation] above). This choice of least squares solution tends to minimize the risk of unwanted fluctuations in the fit. The fit will not be reliable, however, in regions where there are few or no data points.
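As an illustration (not the NAG routine itself), scipy's LSQBivariateSpline fits a bicubic spline with user-specified interior knots to scattered data; the surface, weights and knots below are invented:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

rng = np.random.default_rng(4)
# Scattered data covering the unit square well, with small noise.
x = rng.uniform(0.0, 1.0, 400)
y = rng.uniform(0.0, 1.0, 400)
z = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2)) + 0.01 * rng.standard_normal(400)

# Interior knots for each variable, specified by the user as in the curve case.
tx = [0.33, 0.67]
ty = [0.33, 0.67]
surf = LSQBivariateSpline(x, y, z, tx, ty, kx=3, ky=3)

max_err = float(np.max(np.abs(surf.ev(x, y) - z)))
```

With the data covering the rectangle well, as here, the least squares problem is well determined.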

#### Automatic fitting with bicubic splines

nag_fit_2dspline_sctr (e02dd) fits bicubic splines to arbitrary data points with arbitrary weights but chooses the knot sets itself. You have to supply only a threshold for the sum of squares of residuals. Just like the automatic curve-fitting function nag_fit_1dspline_auto (e02be) (see Section [Automatic fitting with cubic splines]), nag_fit_2dspline_sctr (e02dd) builds up the knot sets and finally fits a spline minimizing a smoothing measure subject to satisfaction of the threshold. Again, this easier-to-use function is normally to be preferred, at least in the first instance.
nag_fit_2dspline_grid (e02dc) is a very similar function to nag_fit_2dspline_sctr (e02dd) but deals with data points of equal weight which lie on a rectangular mesh in the (x,y)$\left(x,y\right)$ plane. This kind of data allows a very much faster computation and so is to be preferred when applicable. Substantial departures from equal weighting can be ignored if you are not concerned with statistical questions, though the quality of the fit will suffer if this is taken too far. In such cases, you should revert to nag_fit_2dspline_sctr (e02dd).
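For data on a rectangular mesh, scipy's RectBivariateSpline offers an analogous fast gridded computation; the smoothing factor s plays the role of the residual threshold, and the data are invented:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(5)
xg = np.linspace(0.0, 1.0, 25)
yg = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(xg, yg, indexing="ij")
Z = np.sin(np.pi * X) * np.cos(np.pi * Y) + 0.02 * rng.standard_normal(X.shape)

# The rectangular mesh permits a much faster computation; s again plays
# the role of the residual threshold.
threshold = Z.size * 0.02**2
surf = RectBivariateSpline(xg, yg, Z, s=threshold)

rss = float(np.sum((surf(xg, yg) - Z) ** 2))
```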

#### Large data sets

For fitting large sets of equally weighted, arbitrary data points, nag_fit_2dspline_ts_sctr (e02jd) provides a two-stage method in which local least squares polynomial fits are extended to form a C1${C}^{1}$ surface.

### General Linear and Nonlinear Fitting Functions

#### General linear functions

For the general linear function (15), functions are available for fitting in all three norms. The least squares functions (which are to be preferred unless there is good reason to use another norm – see Section [Fitting criteria: norms]) are in Chapters F04 and F08. The ${\ell }_{\infty }$ function is nag_fit_glin_linf (e02gc). Two functions for the 1${\ell }_{1}$ norm are provided, nag_fit_glin_l1sol (e02ga) and nag_fit_glinc_l1sol (e02gb). Of these two, the former should be tried in the first instance, since it will be satisfactory in most cases, has a much shorter code and is faster. nag_fit_glinc_l1sol (e02gb), however, uses a more stable computational algorithm and therefore may provide a solution when nag_fit_glin_l1sol (e02ga) fails to do so. It also provides a facility for imposing linear inequality constraints on the solution (see Section [Constraints]).
All the above functions are essentially linear algebra functions, and in considering their use we need to view the fitting process in a slightly different way from hitherto. Taking y$y$ to be the dependent variable and x$x$ the vector of independent variables, we have, as for equation (1) but with each xr${x}_{r}$ now a vector,
 εr = yr − f(xr),  r = 1,2, … ,m. $εr=yr-f(xr), r=1,2,…,m.$
Substituting for f(x)$f\left(x\right)$ the general linear form (15), we can write this as
 c1φ1(xr) + c2φ2(xr) + ⋯ + cpφp(xr) = yr − εr,  r = 1,2, … ,m. $c1ϕ1(xr)+c2ϕ2(xr)+⋯+cpϕp(xr)=yr-εr, r=1,2,…,m.$ (19)
Thus we have a system of linear equations in the coefficients cj${c}_{j}$. Usually, in writing these equations, the εr${\epsilon }_{r}$ are omitted and simply taken as implied. The system of equations is then described as an overdetermined system (since we must have mp$m\ge p$ if there is to be the possibility of a unique solution to our fitting problem), and the fitting process of computing the cj${c}_{j}$ to minimize one or other of the norms (2), (3) and (4) can be described, in relation to the system of equations, as solving the overdetermined system in that particular norm. In matrix notation, the system can be written as
 Φc = y, $Φc=y,$ (20)
where Φ$\Phi$ is the m$m$ by p$p$ matrix whose element in row r$\mathit{r}$ and column j$\mathit{j}$ is φj(xr)${\varphi }_{\mathit{j}}\left({x}_{\mathit{r}}\right)$, for r = 1,2,,m$\mathit{r}=1,2,\dots ,m$ and j = 1,2,,p$\mathit{j}=1,2,\dots ,p$. The vectors c$c$ and y$y$ respectively contain the coefficients cj${c}_{j}$ and the data values yr${y}_{r}$.
All the above functions, however, use the standard notation of linear algebra, the overdetermined system of equations being denoted by
 Ax = b. $Ax=b.$ (21)
(In fact, nag_linsys_real_gen_lsqsol (f04am) can deal with several right-hand sides simultaneously, and thus is concerned with a matrix of right-hand sides, denoted by B$B$, instead of the single vector b$b$, and correspondingly with a matrix X$X$ of solutions instead of the single vector x$x$.) The correspondence between this notation and that which we have used for the data-fitting problem (20) is therefore given by
 A ≡ Φ, x ≡ c, b ≡ y. $A≡Φ, x≡c, b≡y.$ (22)
Note that the norms used by these functions are the unweighted norms (2), (3) and (4). If you wish to apply weights to the data points, that is to use the norms (5), (6) or (7), the equivalences (22) should be replaced by
 A ≡ DΦ, x ≡ c, b ≡ Dy, $A≡DΦ, x≡c, b≡Dy,$
where D$D$ is a diagonal matrix with wr${w}_{r}$ as the r$\mathit{r}$th diagonal element. Here wr${w}_{\mathit{r}}$, for r = 1,2,,m$\mathit{r}=1,2,\dots ,m$, is the weight of the r$r$th data point as defined in Section [Weighting of data points].
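In Python, the correspondence (22), with the weights applied as a diagonal scaling, might be sketched as follows (the quadratic basis and the data are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 50
xr = np.linspace(0.0, 1.0, m)
# Invented data from a quadratic model with noise; weights equal here.
yr = 1.0 + 2.0 * xr - 0.5 * xr**2 + 0.05 * rng.standard_normal(m)
w = np.ones(m)

# Phi[r, j] = phi_j(x_r) for the illustrative basis {1, x, x^2}.
Phi = np.column_stack([np.ones(m), xr, xr**2])

# A = D Phi and b = D y with D = diag(w_r); solve the overdetermined
# system A x = b in the least squares sense.
A = w[:, None] * Phi
b = w * yr
c, resid_ss, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
```

The solution vector c contains the fitted coefficients c_j of the general linear form.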

#### Nonlinear functions

Functions for fitting with a nonlinear function in the 2${\ell }_{2}$ norm are provided in Chapter E04. Consult the E04 Chapter Introduction for the appropriate choice of function. Again, however, the notation adopted is different from that we have used for data fitting. In the latter, we denote the fitting function by f(x ; c)$f\left(x;c\right)$, where x$x$ is the vector of independent variables and c$c$ is the vector of coefficients, whose values are to be determined. The squared 2${\ell }_{2}$ norm, to be minimized with respect to the elements of c$c$, is then
 ∑r = 1m wr2[yr − f(xr ; c)]2 $∑r=1mwr2[yr-f(xr;c)]2$ (23)
where yr${y}_{r}$ is the r$r$th data value of the dependent variable, xr${x}_{r}$ is the vector containing the r$r$th values of the independent variables, and wr${w}_{r}$ is the corresponding weight as defined in Section [Weighting of data points].
On the other hand, in the nonlinear least squares functions of Chapter E04, the function to be minimized is denoted by
 ∑i = 1m fi2(x), $∑i=1mfi2(x),$ (24)
the minimization being carried out with respect to the elements of the vector x$x$. The correspondence between the two notations is given by
 x ≡ c​ and ​fi(x) ≡ wr[yr − f(xr ; c)],  i = r = 1,2, … ,m. $x≡c​ and ​fi(x)≡wr[yr-f(xr;c)], i=r=1,2,…,m.$
Note especially that the vector x$x$ of variables of the nonlinear least squares functions is the vector c$c$ of coefficients of the data-fitting problem, and in particular that, if the selected function requires derivatives of the fi(x)${f}_{i}\left(x\right)$ to be provided, these are derivatives of wr[yrf(xr ; c)]${w}_{r}\left[{y}_{r}-f\left({x}_{r};c\right)\right]$ with respect to the coefficients of the data-fitting problem.
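The notational mapping can be seen in scipy's nonlinear least squares interface, where the optimizer's variable vector is the coefficient vector c and each residual is wr[yr − f(xr ; c)]; the exponential model and the data are invented:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
xr = np.linspace(0.0, 4.0, 40)
wr = np.ones(xr.size)
# Invented data from the model f(x; c) = c0 * exp(-c1 * x), plus noise.
yr = 2.0 * np.exp(-0.7 * xr) + 0.01 * rng.standard_normal(xr.size)

# f_i(x) of (24) is w_r [y_r - f(x_r; c)] of (23): the optimizer's variable
# vector is the coefficient vector c of the data-fitting problem.
def residuals(c):
    return wr * (yr - c[0] * np.exp(-c[1] * xr))

sol = least_squares(residuals, x0=[1.0, 1.0])
c_fit = sol.x
```

If analytic derivatives are supplied, they are likewise derivatives of the weighted residuals with respect to the coefficients.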

### Constraints

At present, there are only a limited number of functions which fit subject to constraints. nag_fit_glinc_l1sol (e02gb) allows the imposition of linear inequality constraints (the inequality (17) for example) when fitting with the general linear function in the 1${\ell }_{1}$ norm. In addition, Chapter E04 contains a function, nag_opt_lsq_gencon_deriv (e04us), which can be used for fitting with a nonlinear function in the 2${\ell }_{2}$ norm subject to general equality or inequality constraints.
The remaining two constraint functions relate to fitting with polynomials in the 2${\ell }_{2}$ norm. nag_fit_1dcheb_con (e02ag) deals with polynomial curves and allows precise values of the fitting function and (if required) all its derivatives up to a given order to be prescribed at one or more values of the independent variable. The related surface-fitting nag_fit_2dcheb_lines (e02ca), designed for data on lines as discussed in Section [Least squares polynomials], has a feature which permits precise values of the function and its derivatives to be imposed all along one or more lines parallel to the x$x$ or y$y$ axes (see the function document for the relationship between these normalized variables and your original variables). In this case, however, the prescribed values cannot be supplied directly to the function: instead, you must provide modified data ordinates Fr,s${F}_{r,s}$ and polynomial factors γ1(x)${\gamma }_{1}\left(x\right)$ and γ2(x)${\gamma }_{2}\left(x\right)$, as defined on page 95 of Hayes (1970).

### Evaluation, Differentiation and Integration

Functions are available to evaluate, differentiate and integrate polynomials in Chebyshev series form and cubic or bicubic splines in B-spline form. These polynomials and splines may have been produced by the various fitting functions or, in the case of polynomials, from prior calls of the differentiation and integration functions themselves.
nag_fit_1dcheb_eval (e02ae) and nag_fit_1dcheb_eval2 (e02ak) evaluate polynomial curves: the latter has a longer parameter list but does not require you to normalize the values of the independent variable and can accept coefficients which are not stored in contiguous locations. nag_fit_2dcheb_eval (e02cb) evaluates polynomial surfaces, nag_fit_1dspline_eval (e02bb) cubic spline curves, nag_fit_2dspline_evalv (e02de) and nag_fit_2dspline_evalm (e02df) bicubic spline surfaces, and nag_fit_2dspline_derivm (e02dh) bicubic spline surfaces with derivatives.
Differentiation and integration of polynomial curves are carried out by nag_fit_1dcheb_deriv (e02ah) and nag_fit_1dcheb_integ (e02aj) respectively. The results are provided in Chebyshev series form and so repeated differentiation and integration are catered for. Values of the derivative or integral can then be computed using the appropriate evaluation function. Polynomial surfaces can be treated by a sequence of calls of one or other of the same two functions, differentiating or integrating the form (12) piece by piece. For example, if, for some given value of j$j$, the coefficients aij${a}_{\mathit{i}j}$, for i = 0,1,,k$\mathit{i}=0,1,\dots ,k$, are supplied to nag_fit_1dcheb_deriv (e02ah), we obtain coefficients aij${\stackrel{-}{a}}_{\mathit{i}j}$ say, for i = 0,1,,k1$\mathit{i}=0,1,\dots ,k-1$, which are the coefficients in the derivative with respect to x$x$ of the polynomial
 ∑′i = 0k aij Ti(x). $∑′i=0kaijTi(x).$
If this is repeated for all values of j$j$, we obtain all the coefficients in the derivative of the surface with respect to x$x$, namely
 ∑′i = 0k − 1 ∑′j = 0ℓ a̅ij Ti(x) Tj(y). $∑′i=0k-1∑′j=0ℓa̅ijTi(x)Tj(y).$ (25)
The derivative of (12), or of (25), with respect to y$y$ can be obtained in a corresponding manner. In the latter case, for example, for each value of i$i$ in turn we supply the coefficients ai0,ai1,ai2,${\stackrel{-}{a}}_{i0},{\stackrel{-}{a}}_{i1},{\stackrel{-}{a}}_{i2},\dots \text{}$, to the function. Values of the resulting polynomials, such as (25), can subsequently be computed using nag_fit_2dcheb_eval (e02cb). It is important, however, to note one exception: the process described will not give valid results for differentiating or integrating a surface with respect to y$y$ if the normalization of x$x$ was made dependent upon y$y$, an option which is available in the fitting function nag_fit_2dcheb_lines (e02ca).
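Repeated differentiation and integration of a Chebyshev series can be sketched with numpy's Chebyshev utilities; note that numpy's convention does not halve the leading coefficient, unlike the ∑′ convention used here, and the series below is invented:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A curve in Chebyshev series form: 1*T0 + 0.5*T1 - 0.25*T2 + 0.125*T3.
c = np.array([1.0, 0.5, -0.25, 0.125])

dc = C.chebder(c)        # coefficients of the derivative, again in Chebyshev form
ic = C.chebint(c)        # coefficients of an indefinite integral

# Evaluate the derivative at a point and compare with a central difference.
xp = 0.3
deriv = float(C.chebval(xp, dc))
h = 1.0e-6
fd = float((C.chebval(xp + h, c) - C.chebval(xp - h, c)) / (2.0 * h))
```

Because the results are again Chebyshev series, the operations can be repeated, exactly as described for the NAG functions above.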
For splines the differentiation and integration functions provided are of a different nature from those for polynomials. nag_fit_1dspline_deriv (e02bc) and nag_fit_1dspline_deriv_vector (e02bf) provide values of a cubic spline curve together with its first three derivatives (the rest, of course, are zero) at a given value, or vector of values, of x$x$. nag_fit_1dspline_integ (e02bd) computes the value of the definite integral of a cubic spline over its whole range. Again the functions can be applied to surfaces, this time of the form (14). For example, if, for each value of j$j$ in turn, the coefficients cij${c}_{\mathit{i}j}$, for i = 1,2,,p$\mathit{i}=1,2,\dots ,p$, are supplied to nag_fit_1dspline_deriv (e02bc) with x = x0$x={x}_{0}$ and on each occasion we select from the output the value of the second derivative, dj${d}_{j}$ say, and if the whole set of dj${d}_{j}$ are then supplied to the same function with x = y0$x={y}_{0}$, the output will contain all the values at (x0,y0)$\left({x}_{0},{y}_{0}\right)$ of
 (∂2f)/( ∂ x2)  and   (∂r + 2f)/( ∂ x2 ∂ yr),  r = 1,2,3. $∂2f ∂x2 and ∂r+2f ∂x2∂yr , r=1,2,3.$
Equally, if after each of the first p$p$ calls of nag_fit_1dspline_deriv (e02bc) we had selected the function value (nag_fit_1dspline_eval (e02bb) would also provide this) instead of the second derivative and we had supplied these values to nag_fit_1dspline_integ (e02bd), the result obtained would have been the value of
 ∫AB f(x0,y) dy, $∫ABf(x0,y)dy,$
where A$A$ and B$B$ are the end points of the y$y$ interval over which the spline was defined.
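For splines, scipy provides the corresponding derivative and definite-integral operations on the B-spline form; the cubic test data below are invented, and a not-a-knot cubic spline reproduces cubic data exactly:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 2.0, 9)
y = x**3 - x              # cubic data, reproduced exactly by a cubic spline

spl = make_interp_spline(x, y, k=3)     # cubic spline in B-spline form

d2 = float(spl.derivative(2)(1.0))      # second derivative at x = 1
total = float(spl.integrate(0.0, 2.0))  # definite integral over the whole range
```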

### Padé Approximants

Given two non-negative integers ℓ$\ell$ and m$m$, and the coefficients in the Maclaurin series expansion of a function up to degree ℓ + m$\ell +m$, nag_fit_pade_app (e02ra) calculates the Padé approximant of degree ℓ$\ell$ in the numerator and degree m$m$ in the denominator. See Sections [Description] and [Example] in (e02ra) for further advice.
nag_fit_pade_eval (e02rb) is provided to compute values of the Padé approximant, once it has been obtained.
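scipy offers a comparable pair of operations: computing a Padé approximant from Maclaurin coefficients and then evaluating it. Here the [2/2] approximant of exp(x), so ℓ = m = 2:

```python
import numpy as np
from scipy.interpolate import pade

# Maclaurin coefficients of exp(x) up to degree l + m = 4.
an = [1.0, 1.0, 1.0 / 2.0, 1.0 / 6.0, 1.0 / 24.0]

# [2/2] Pade approximant: degree-2 numerator p over degree-2 denominator q.
p, q = pade(an, 2)

approx = float(p(0.5) / q(0.5))
exact = float(np.exp(0.5))
```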

## Decision Trees

Note: these Decision Trees are concerned with unconstrained fitting; for constrained fitting, consult Section [Constraints].

### Tree 1

- Do you have reason to choose a type of fitting function other than polynomials or splines?
  - Yes: Are the fitting functions nonlinear?
    - Yes: see the least squares functions in Chapter E04.
    - No: Do you prefer the ℓ2 norm?
      - Yes: see the least squares functions in Chapters F04 and F08.
      - No: Do you prefer the ℓ1 norm?
        - Yes: nag_fit_glin_l1sol (e02ga).
        - No: use the ℓ∞ norm: nag_fit_glin_linf (e02gc).
  - No: Does the problem have more than two independent variables?
    - Yes: Do you have reason to prefer a norm other than the ℓ2 norm?
      - Yes: Do you prefer the ℓ1 norm?
        - Yes: nag_fit_glin_l1sol (e02ga).
        - No: use the ℓ∞ norm: nag_fit_glin_linf (e02gc).
      - No: see the least squares functions in Chapters F04 and F08.
    - No: Do you wish to use the ℓ1 norm?
      - Yes: nag_fit_glin_l1sol (e02ga).
      - No: Is there just one independent variable (curve fitting)?
        - Yes: continue on Tree 2.
        - No: Does the data have equal weights?
          - Yes: Does the data lie on a rectangular mesh?
            - Yes: nag_fit_2dspline_grid (e02dc).
            - No: nag_fit_2dspline_ts_sctr (e02jd).
          - No: Are the data on lines? (see Section [Least squares polynomials])
            - Yes: nag_fit_2dcheb_lines (e02ca).
            - No: nag_fit_2dspline_sctr (e02dd) or nag_fit_2dspline_panel (e02da).

### Tree 2

- Do you wish to use the ℓ∞ norm?
  - Yes: Do you prefer splines?
    - Yes: nag_fit_glin_linf (e02gc) (using a spline basis).
    - No: Do the data points have different weights?
      - Yes: nag_fit_glin_linf (e02gc) (using a polynomial basis).
      - No: nag_fit_1dmmax (e02ac).
  - No: Do you prefer polynomials?
    - Yes: Are the x values of the data points different from the values defined in Section [Least squares polynomials: selected data points], and can they not be chosen to be so?
      - Yes: nag_fit_1dcheb_arb (e02ad).
      - No: choose the x values defined in Section [Least squares polynomials: selected data points]. Do the data points have different weights?
        - Yes: nag_fit_1dcheb_arb (e02ad).
        - No: nag_fit_1dcheb_glp (e02af).
    - No: Do you wish to use the ℓ2 norm?
      - Yes: nag_fit_1dspline_knots (e02ba).
      - No: nag_fit_1dspline_auto (e02be).

## Functionality Index

 Automatic fitting,
 with cubic splines nag_fit_1dspline_auto (e02be)
 Automatic knot placement,
 with bicubic splines,
 data on rectangular mesh nag_fit_2dspline_grid (e02dc)
 Data on lines nag_fit_2dcheb_lines (e02ca)
 Data on rectangular mesh nag_fit_2dspline_grid (e02dc)
 Differentiation,
 of bicubic splines nag_fit_2dspline_derivm (e02dh)
 of cubic splines nag_fit_1dspline_deriv (e02bc)
 of polynomials nag_fit_1dcheb_deriv (e02ah)
 Evaluation,
 at vector of points,
 of C1 scattered fit nag_fit_2dspline_ts_evalv (e02je)
 of bicubic splines at vector of points nag_fit_2dspline_evalv (e02de)
 of bicubic splines on mesh nag_fit_2dspline_evalm (e02df)
 of cubic splines nag_fit_1dspline_eval (e02bb)
 of cubic splines and derivatives nag_fit_1dspline_deriv (e02bc)
 of cubic splines and optionally derivatives at a vector of points nag_fit_1dspline_deriv_vector (e02bf)
 of definite integral of cubic splines nag_fit_1dspline_integ (e02bd)
 of polynomials,
 in one variable nag_fit_1dcheb_eval2 (e02ak)
 in one variable (simple interface) nag_fit_1dcheb_eval (e02ae)
 in two variables nag_fit_2dcheb_eval (e02cb)
 on mesh,
 of C1 scattered fit nag_fit_2dspline_ts_evalm (e02jf)
 Integration,
 of cubic splines (definite integral) nag_fit_1dspline_integ (e02bd)
 of polynomials nag_fit_1dcheb_integ (e02aj)
 l1 fit,
 with constraints nag_fit_glinc_l1sol (e02gb)
 with general linear function nag_fit_glin_l1sol (e02ga)
 Least squares curve fit,
 with cubic splines nag_fit_1dspline_knots (e02ba)
 with polynomials,
 selected data points nag_fit_1dcheb_glp (e02af)
 with constraints nag_fit_1dcheb_con (e02ag)
 Least squares surface fit,
 with bicubic splines nag_fit_2dspline_panel (e02da)
 with polynomials nag_fit_2dcheb_lines (e02ca)
 Minimax space fit,
 with general linear function nag_fit_glin_linf (e02gc)
 with polynomials in one variable (deprecated) nag_fit_1dmmax (e02ac)
 Scattered data fit,
 bicubic spline nag_fit_2dspline_sctr (e02dd)
 C1 spline nag_fit_2dspline_ts_sctr (e02jd)
 Service functions,
 general option getting function nag_fit_opt_get (e02zl)
 general option setting function nag_fit_opt_set (e02zk)
 Sorting nag_fit_2dspline_sort (e02za)

## References

Baker G A (1975) Essentials of Padé Approximants Academic Press, New York
Cox M G and Hayes J G (1973) Curve fitting: a guide and suite of algorithms for the non-specialist user NPL Report NAC26 National Physical Laboratory
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London
Hayes J G (1974) Numerical methods for curve and surface fitting Bull. Inst. Math. Appl. 10 144–152
Hayes J G and Halliday J (1974) The least squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103