NAG Library Manual

# NAG Library Function Document: nag_1d_cheb_fit (e02adc)

## 1  Purpose

nag_1d_cheb_fit (e02adc) computes weighted least squares polynomial approximations to an arbitrary set of data points.

## 2  Specification

 #include <nag.h>
 #include <nage02.h>

 void nag_1d_cheb_fit (Integer m, Integer kplus1, Integer tda, const double x[], const double y[], const double w[], double a[], double s[], NagError *fail)

## 3  Description

nag_1d_cheb_fit (e02adc) determines least squares polynomial approximations of degrees $0,1,\dots ,k$ to the set of data points $\left({x}_{\mathit{r}},{y}_{\mathit{r}}\right)$ with weights ${w}_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,m$.
The approximation of degree $i$ has the property that it minimizes ${\sigma }_{i}$, the sum of squares of the weighted residuals ${\epsilon }_{r}$, where
 $\epsilon_{r} = w_{r}\left(y_{r} - f_{r}\right)$
and ${f}_{r}$ is the value of the polynomial of degree $i$ at the $r$th data point.
Each polynomial is represented in Chebyshev series form with normalized argument $\stackrel{-}{x}$. This argument lies in the range $-1$ to $+1$ and is related to the original variable $x$ by the linear transformation
 $\bar{x} = \frac{2x - x_{\mathrm{max}} - x_{\mathrm{min}}}{x_{\mathrm{max}} - x_{\mathrm{min}}}.$
Here ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are respectively the largest and smallest values of ${x}_{r}$. The polynomial approximation of degree $i$ is represented as
 $\tfrac{1}{2}a_{i+1,1}T_{0}\left(\bar{x}\right) + a_{i+1,2}T_{1}\left(\bar{x}\right) + a_{i+1,3}T_{2}\left(\bar{x}\right) + \cdots + a_{i+1,i+1}T_{i}\left(\bar{x}\right),$
where ${T}_{j}\left(\stackrel{-}{x}\right)$ is the Chebyshev polynomial of the first kind of degree $j$ with argument $\stackrel{-}{x}$.
For $i=0,1,\dots ,k$, the function produces the values of ${a}_{i+1,\mathit{j}+1}$, for $\mathit{j}=0,1,\dots ,i$, together with the value of the root mean square residual ${s}_{i}=\sqrt{{\sigma }_{i}/\left(m-i-1\right)}$. In the case $m=i+1$ the function sets the value of ${s}_{i}$ to zero.
The method employed is due to Forsythe (1957) and is based upon the generation of a set of polynomials orthogonal with respect to summation over the normalized dataset. The extensions due to Clenshaw (1960) to represent these polynomials as well as the approximating polynomials in their Chebyshev series forms are incorporated. The modifications suggested by Reinsch and Gentleman (Gentleman (1969)) to the method originally employed by Clenshaw for evaluating the orthogonal polynomials from their Chebyshev series representations are used to give greater numerical stability.
For further details of the algorithm and its use see Cox (1974), Cox and Hayes (1973).
Subsequent evaluation of the Chebyshev series representations of the polynomial approximations should be carried out using nag_1d_cheb_eval (e02aec).

## 4  References
Clenshaw C W (1960) Curve fitting with a digital computer Comput. J. 2 170–173
Cox M G (1974) A data-fitting package for the non-specialist user Software for Numerical Mathematics (ed D J Evans) Academic Press
Cox M G and Hayes J G (1973) Curve fitting: a guide and suite of algorithms for the non-specialist user NPL Report NAC26 National Physical Laboratory
Forsythe G E (1957) Generation and use of orthogonal polynomials for data fitting with a digital computer J. Soc. Indust. Appl. Math. 5 74–88
Gentleman W M (1969) An error analysis of Goertzel's (Watt's) method for computing Fourier coefficients Comput. J. 12 160–165
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London

## 5  Arguments

1:    $\mathbf{m}$ – Integer *Input*
On entry: the number $m$ of data points.
Constraint: ${\mathbf{m}}\ge \mathit{mdist}\ge 2$, where $\mathit{mdist}$ is the number of distinct $x$ values in the data.
2:    $\mathbf{kplus1}$ – Integer *Input*
On entry: $k+1$, where $k$ is the maximum degree required.
Constraint: $0<{\mathbf{kplus1}}\le \mathit{mdist}$, where $\mathit{mdist}$ is the number of distinct $x$ values in the data.
3:    $\mathbf{tda}$ – Integer *Input*
On entry: the stride separating matrix column elements in the array a.
Constraint: ${\mathbf{tda}}\ge {\mathbf{kplus1}}$.
4:    $\mathbf{x}\left[{\mathbf{m}}\right]$ – const double *Input*
On entry: the values ${x}_{\mathit{r}}$ of the independent variable, for $\mathit{r}=1,2,\dots ,m$.
Constraint: the values must be supplied in nondecreasing order with ${\mathbf{x}}\left[m-1\right]>{\mathbf{x}}\left[0\right]$.
5:    $\mathbf{y}\left[{\mathbf{m}}\right]$ – const double *Input*
On entry: the values ${y}_{\mathit{r}}$ of the dependent variable, for $\mathit{r}=1,2,\dots ,m$.
6:    $\mathbf{w}\left[{\mathbf{m}}\right]$ – const double *Input*
On entry: the set of weights, ${w}_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,m$. For advice on the choice of weights, see the e02 Chapter Introduction.
Constraint: ${\mathbf{w}}\left[\mathit{r}\right]>0.0$, for $\mathit{r}=0,1,\dots ,{\mathbf{m}}-1$.
7:    $\mathbf{a}\left[{\mathbf{kplus1}}×{\mathbf{tda}}\right]$ – double *Output*
On exit: the coefficients of ${T}_{\mathit{j}}\left(\stackrel{-}{x}\right)$ in the approximating polynomial of degree $\mathit{i}$. ${\mathbf{a}}\left[\mathit{i}×{\mathbf{tda}}+\mathit{j}\right]$ contains the coefficient ${a}_{\mathit{i}+1,\mathit{j}+1}$, for $\mathit{i}=0,1,\dots ,k$ and $\mathit{j}=0,1,\dots ,\mathit{i}$.
8:    $\mathbf{s}\left[{\mathbf{kplus1}}\right]$ – double *Output*
On exit: ${\mathbf{s}}\left[\mathit{i}\right]$ contains the root mean square residual ${s}_{\mathit{i}}$, for $\mathit{i}=0,1,\dots ,k$, as described in Section 3. For the interpretation of the values of the ${s}_{\mathit{i}}$ and their use in selecting an appropriate degree, see the e02 Chapter Introduction.
9:    $\mathbf{fail}$ – NagError * *Input/Output*
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

**NE_2_INT_ARG_GT**
On entry, ${\mathbf{kplus1}}=〈\mathit{\text{value}}〉$ while the number of distinct $x$ values, $\mathit{mdist}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{kplus1}}\le \mathit{mdist}$.
**NE_2_INT_ARG_LT**
On entry, ${\mathbf{tda}}=〈\mathit{\text{value}}〉$ while ${\mathbf{kplus1}}=〈\mathit{\text{value}}〉$.
The arguments must satisfy ${\mathbf{tda}}\ge {\mathbf{kplus1}}$.
**NE_ALLOC_FAIL**
Dynamic memory allocation failed.
**NE_INT_ARG_LT**
On entry, kplus1 must not be less than 1: ${\mathbf{kplus1}}=〈\mathit{\text{value}}〉$.
**NE_NO_NORMALISATION**
On entry, all the values ${\mathbf{x}}\left[\mathit{r}\right]$, for $\mathit{r}=0,1,\dots ,{\mathbf{m}}-1$, are the same.
**NE_NOT_NON_DECREASING**
On entry, the sequence ${\mathbf{x}}\left[r\right]$, $r=0,1,\dots ,{\mathbf{m}}-1$ is not in nondecreasing order.
**NE_WEIGHTS_NOT_POSITIVE**
On entry, the weights are not strictly positive: ${\mathbf{w}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.

## 7  Accuracy

No error analysis of the method has been published. In practice, however, the accuracy obtained is generally extremely satisfactory.

## 8  Parallelism and Performance

Not applicable.

## 9  Further Comments

The time taken by nag_1d_cheb_fit (e02adc) is approximately proportional to $m\left(k+1\right)\left(k+11\right)$.
The approximating polynomials may exhibit undesirable oscillations (particularly near the ends of the range) if the maximum degree $k$ exceeds a critical value which depends on the number of data points $m$ and their relative positions. As a rough guide, for equally spaced data, this critical value is about $2×\sqrt{m}$. For further details see page 60 of Hayes (1970).

## 10  Example

Determine weighted least squares polynomial approximations of degrees 0, 1, 2 and 3 to a set of 11 prescribed data points. For the approximation of degree 3, tabulate the data and the corresponding values of the approximating polynomial, together with the residual errors, and also the values of the approximating polynomial at points half-way between each pair of adjacent data points.
The example program supplied is written in a general form that will enable polynomial approximations of degrees $0,1,\dots ,k$ to be obtained to $m$ data points, with arbitrary positive weights, and the approximation of degree $k$ to be tabulated. nag_1d_cheb_eval (e02aec) is used to evaluate the approximating polynomial. The program is self-starting in that any number of datasets can be supplied.