# NAG CL Interface: e02agc (dim1_cheb_con)


## 1 Purpose

e02agc computes constrained weighted least squares polynomial approximations in Chebyshev series form to an arbitrary set of data points. The values of the approximations and any number of their derivatives can be specified at selected points.

## 2 Specification

 #include <nag.h>
 void e02agc (Nag_OrderType order, Integer m, Integer k, double xmin, double xmax, const double x[], const double y[], const double w[], Integer mf, const double xf[], const double yf[], const Integer p[], double a[], double s[], Integer *n, double wrk[], NagError *fail)
The function may be called by the names: e02agc, nag_fit_dim1_cheb_con or nag_1d_cheb_fit_constr.

## 3 Description

e02agc determines least squares polynomial approximations of degrees up to $k$ to the set of data points $\left({x}_{\mathit{r}},{y}_{\mathit{r}}\right)$ with weights ${w}_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,m$. The value of $k$, the maximum degree required, is to be prescribed by you. At each of the values ${xf}_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,mf$, of the independent variable $x$, the approximations and their derivatives up to order ${p}_{\mathit{r}}$ are constrained to have one of the values ${yf}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,\mathit{n}$, specified by you, where $\mathit{n}=mf+\sum _{r=1}^{mf}{p}_{r}$.
The approximation of degree $i$ has the property that, subject to the imposed constraints, it minimizes ${\sigma }_{i}$, the sum of the squares of the weighted residuals ${\epsilon }_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,m$, where
 $\epsilon_r = w_r \left( y_r - f_i\left( x_r \right) \right)$
and ${f}_{i}\left({x}_{r}\right)$ is the value of the polynomial approximation of degree $i$ at the $r$th data point.
Each polynomial is represented in Chebyshev series form with normalized argument $\overline{x}$. This argument lies in the range $-1$ to $+1$ and is related to the original variable $x$ by the linear transformation
 $\bar{x} = \frac{2x - \left( x_{\max} + x_{\min} \right)}{x_{\max} - x_{\min}}$
where ${x}_{\mathrm{min}}$ and ${x}_{\mathrm{max}}$, specified by you, are respectively the lower and upper end points of the interval of $x$ over which the polynomials are to be defined.
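The normalization can be sketched in a few lines of C (a minimal illustration only; the helper name `to_normalized` is hypothetical and is not part of the NAG interface):

```c
#include <assert.h>

/* Map x in [xmin, xmax] to the normalized argument xbar in [-1, 1],
   using the linear transformation defined above. */
static double to_normalized(double x, double xmin, double xmax)
{
    return (2.0 * x - (xmax + xmin)) / (xmax - xmin);
}
```

With $x_{\min} = 0$ and $x_{\max} = 4$, the two end points map to $-1$ and $+1$ and the midpoint to $0$.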
The polynomial approximation of degree $i$ can be written as
 $\tfrac{1}{2} a_{i,0} + a_{i,1} T_1\left( \bar{x} \right) + \cdots + a_{i,j} T_j\left( \bar{x} \right) + \cdots + a_{i,i} T_i\left( \bar{x} \right)$
where ${T}_{j}\left(\overline{x}\right)$ is the Chebyshev polynomial of the first kind of degree $j$ with argument $\overline{x}$. For $i=\mathit{n},\mathit{n}+1,\dots ,k$, the function produces the values of the coefficients ${a}_{i\mathit{j}}$, for $\mathit{j}=0,1,\dots ,i$, together with the value of the root mean square residual,
 $S_i = \left( \frac{\sigma_i}{m' + n - i - 1} \right)^{1/2},$
where ${m}^{\prime }$ is the number of data points with nonzero weight.
Values of the approximations may subsequently be computed using e02aec or e02akc.
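For illustration, a series of this form can be summed stably with the Clenshaw recurrence; the sketch below is a self-contained hypothetical helper (`cheb_eval` is not the NAG routine e02aec or e02akc) and shows, in particular, the halving of the leading coefficient:

```c
#include <assert.h>

/* Evaluate (1/2)a[0] + a[1]T_1(xbar) + ... + a[degree]T_degree(xbar)
   by the Clenshaw recurrence. xbar must lie in [-1, 1]. */
static double cheb_eval(const double a[], int degree, double xbar)
{
    double b1 = 0.0, b2 = 0.0; /* running values b_{k+1}, b_{k+2} */
    for (int k = degree; k >= 1; k--) {
        double b = a[k] + 2.0 * xbar * b1 - b2;
        b2 = b1;
        b1 = b;
    }
    return 0.5 * a[0] + xbar * b1 - b2;
}
```

For example, with coefficients $\{0, 0, 1\}$ the series reduces to $T_2(\bar{x}) = 2\bar{x}^2 - 1$.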
First e02agc determines a polynomial $\mu \left(\overline{x}\right)$, of degree $\mathit{n}-1$, which satisfies the given constraints, and a polynomial $\nu \left(\overline{x}\right)$, of degree $\mathit{n}$, which has value (or derivative) zero wherever a constrained value (or derivative) is specified. It then fits ${y}_{\mathit{r}}-\mu \left({x}_{\mathit{r}}\right)$, for $\mathit{r}=1,2,\dots ,m$, with polynomials of the required degree in $\overline{x}$ each with factor $\nu \left(\overline{x}\right)$. Finally the coefficients of $\mu \left(\overline{x}\right)$ are added to the coefficients of these fits to give the coefficients of the constrained polynomial approximations to the data points $\left({x}_{\mathit{r}},{y}_{\mathit{r}}\right)$, for $\mathit{r}=1,2,\dots ,m$. The method employed is given in Hayes (1970): it is an extension of Forsythe's orthogonal polynomials method (see Forsythe (1957)) as modified by Clenshaw (see Clenshaw (1960)).
## 4 References

Clenshaw C W (1960) Curve fitting with a digital computer Comput. J. 2 170–173
Forsythe G E (1957) Generation and use of orthogonal polynomials for data fitting with a digital computer J. Soc. Indust. Appl. Math. 5 74–88
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London

## 5 Arguments

1: $\mathbf{order}$ – Nag_OrderType Input
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.1.3 in the Introduction to the NAG Library CL Interface for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or $\mathrm{Nag_ColMajor}$.
2: $\mathbf{m}$ – Integer Input
On entry: $m$, the number of data points to be fitted.
Constraint: ${\mathbf{m}}\ge 1$.
3: $\mathbf{k}$ – Integer Input
On entry: $k$, the maximum degree required.
Constraint: $\mathit{n}\le {\mathbf{k}}\le {m}^{\prime \prime }+\mathit{n}-1$ where $\mathit{n}$ is the total number of constraints and ${m}^{\prime \prime }$ is the number of data points with nonzero weights and distinct abscissae which do not coincide with any of the ${{\mathbf{xf}}}_{r}$.
4: $\mathbf{xmin}$ – double Input
5: $\mathbf{xmax}$ – double Input
On entry: the lower and upper end points, respectively, of the interval $\left[{x}_{\mathrm{min}},{x}_{\mathrm{max}}\right]$. Unless there are specific reasons to the contrary, it is recommended that xmin and xmax be set respectively to the lowest and highest value among the ${x}_{r}$ and ${xf}_{r}$. This avoids the danger of extrapolation provided there is a constraint point or data point with nonzero weight at each end point.
Constraint: ${\mathbf{xmax}}>{\mathbf{xmin}}$.
6: $\mathbf{x}\left[{\mathbf{m}}\right]$ – const double Input
On entry: ${\mathbf{x}}\left[\mathit{r}-1\right]$ must contain the value ${x}_{\mathit{r}}$ of the independent variable at the $\mathit{r}$th data point, for $\mathit{r}=1,2,\dots ,m$.
Constraint: the ${\mathbf{x}}\left[r-1\right]$ must be in nondecreasing order and satisfy ${\mathbf{xmin}}\le {\mathbf{x}}\left[r-1\right]\le {\mathbf{xmax}}$.
7: $\mathbf{y}\left[{\mathbf{m}}\right]$ – const double Input
On entry: ${\mathbf{y}}\left[\mathit{r}-1\right]$ must contain ${y}_{\mathit{r}}$, the value of the dependent variable at the $\mathit{r}$th data point, for $\mathit{r}=1,2,\dots ,m$.
8: $\mathbf{w}\left[{\mathbf{m}}\right]$ – const double Input
On entry: ${\mathbf{w}}\left[\mathit{r}-1\right]$ must contain the weight ${w}_{\mathit{r}}$ to be applied to the data point ${x}_{\mathit{r}}$, for $\mathit{r}=1,2,\dots ,m$. For advice on the choice of weights see the E02 Chapter Introduction. Negative weights are treated as positive. A zero weight causes the corresponding data point to be ignored. Zero weight should be given to any data point whose $x$ and $y$ values both coincide with those of a constraint (otherwise the denominators involved in the root mean square residuals ${S}_{i}$ will be slightly in error).
9: $\mathbf{mf}$ – Integer Input
On entry: $mf$, the number of values of the independent variable at which a constraint is specified.
Constraint: ${\mathbf{mf}}\ge 1$.
10: $\mathbf{xf}\left[{\mathbf{mf}}\right]$ – const double Input
On entry: ${\mathbf{xf}}\left[\mathit{r}-1\right]$ must contain ${xf}_{\mathit{r}}$, the value of the independent variable at which a constraint is specified, for $\mathit{r}=1,2,\dots ,{\mathbf{mf}}$.
Constraint: these values need not be ordered but must be distinct and satisfy ${\mathbf{xmin}}\le {\mathbf{xf}}\left[r-1\right]\le {\mathbf{xmax}}$.
11: $\mathbf{yf}\left[\mathit{dim}\right]$ – const double Input
Note: the dimension, dim, of the array yf must be at least $\left({\mathbf{mf}}+\sum _{\mathit{i}=0}^{{\mathbf{mf}}-1}{\mathbf{p}}\left[\mathit{i}\right]\right)$.
On entry: the values which the approximating polynomials and their derivatives are required to take at the points specified in xf. For each value of ${\mathbf{xf}}\left[\mathit{r}-1\right]$, yf contains in successive elements the required value of the approximation, its first derivative, second derivative, $\dots ,{p}_{\mathit{r}}$th derivative, for $\mathit{r}=1,2,\dots ,mf$. Thus the value, ${yf}_{s}$, which the $d$th derivative of each approximation ($d=0$ referring to the approximation itself) is required to take at the point ${\mathbf{xf}}\left[r-1\right]$ must be contained in ${\mathbf{yf}}\left[s-1\right]$, where
 $s = r + d + p_1 + p_2 + \cdots + p_{r-1},$
with $d=0,1,\dots ,{p}_{r}$ and $r=1,2,\dots ,mf$. ($d$ is used here for the derivative order to avoid a clash with the argument k, the maximum degree.) The derivatives are with respect to the independent variable $x$.
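The indexing rule can be written out directly in C; the sketch below (`yf_index` is a hypothetical helper, not part of the library) returns the zero-based position in yf of the derivative value of order $d$ ($d = 0$ for the function value itself) at the $r$th constraint point:

```c
#include <assert.h>

/* Zero-based position in yf[] of the d-th derivative value
   (d = 0 is the function value) at constraint point r (1-based):
   s = r + d + p[0] + ... + p[r-2], stored at yf[s-1]. */
static int yf_index(const int p[], int r, int d)
{
    int s = r + d;
    for (int t = 0; t < r - 1; t++)
        s += p[t];
    return s - 1;
}
```

With the layout of the example in Section 10 (mf = 2, p = {1, 0}), this places the value and first derivative at the first constraint point in yf[0] and yf[1], and the value at the second constraint point in yf[2].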
12: $\mathbf{p}\left[{\mathbf{mf}}\right]$ – const Integer Input
On entry: ${\mathbf{p}}\left[\mathit{r}-1\right]$ must contain ${p}_{\mathit{r}}$, the order of the highest-order derivative specified at ${\mathbf{xf}}\left[\mathit{r}-1\right]$, for $\mathit{r}=1,2,\dots ,mf$. ${p}_{r}=0$ implies that the value of the approximation at ${\mathbf{xf}}\left[r-1\right]$ is specified, but not that of any derivative.
Constraint: ${\mathbf{p}}\left[\mathit{r}-1\right]\ge 0$, for $\mathit{r}=1,2,\dots ,{\mathbf{mf}}$.
13: $\mathbf{a}\left[\mathit{dim}\right]$ – double Output
Note: the dimension, dim, of the array a must be at least $\left({\mathbf{k}}+1\right)×\left({\mathbf{k}}+1\right)$.
Where ${\mathbf{A}}\left(i,j\right)$ appears in this document, it refers to the array element
• ${\mathbf{a}}\left[\left(j-1\right)×\left({\mathbf{k}}+1\right)+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{a}}\left[\left(i-1\right)×\left({\mathbf{k}}+1\right)+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On exit: ${\mathbf{A}}\left(\mathit{i}+1,\mathit{j}+1\right)$ contains the coefficient ${a}_{\mathit{i}\mathit{j}}$ in the approximating polynomial of degree $\mathit{i}$, for $\mathit{i}=\mathit{n},\dots ,k$ and $\mathit{j}=0,1,\dots ,\mathit{i}$.
14: $\mathbf{s}\left[{\mathbf{k}}+1\right]$ – double Output
On exit: ${\mathbf{s}}\left[\mathit{i}\right]$ contains ${S}_{\mathit{i}}$, for $\mathit{i}=\mathit{n},\dots ,k$, the root mean square residual corresponding to the approximating polynomial of degree $i$. In the case where the number of data points with nonzero weight is equal to $i+1-\mathit{n}$, ${S}_{i}$ is indeterminate: the function sets it to zero. For the interpretation of the values of ${S}_{i}$ and their use in selecting an appropriate degree, see Section 3.1 in the E02 Chapter Introduction.
15: $\mathbf{n}$ – Integer * Output
On exit: contains the total number of constraint conditions imposed: ${\mathbf{n}}={\mathbf{mf}}+{p}_{1}+{p}_{2}+\cdots +{p}_{{\mathbf{mf}}}$.
16: $\mathbf{wrk}\left[\mathit{dim}\right]$ – double Output
On exit: contains weighted residuals of the highest degree of fit determined $\left(k\right)$. The residual at ${x}_{\mathit{r}}$ is in element $2\left(\mathit{n}+1\right)+3\left(m+\mathit{k}+1\right)+\mathit{r}$, for $\mathit{r}=1,2,\dots ,m$. The rest of the array is used as workspace.
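Assuming the element count in the description is 1-based (consistent with $r=1,2,\dots ,m$), the corresponding zero-based C index can be sketched as follows (`wrk_residual_index` is a hypothetical helper, not part of the library):

```c
#include <assert.h>

/* Zero-based index into wrk[] of the weighted residual at x_r,
   taking "element 2(n+1) + 3(m+k+1) + r" in the text as 1-based
   (an assumption; r itself is 1-based). */
static int wrk_residual_index(int n, int m, int k, int r)
{
    return 2 * (n + 1) + 3 * (m + k + 1) + r - 1;
}
```

For the data of the example in Section 10 (n = 3 constraints, m = 5 points, k = 4), the first residual would then sit at wrk[38].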
17: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value.
NE_CONSTRAINT
Constraint: $\mathit{n}\le {\mathbf{k}}\le {m}^{\prime \prime }+\mathit{n}-1$ where $\mathit{n}$ is the total number of constraints and ${m}^{\prime \prime }$ is the number of data points with nonzero weights and distinct abscissae which do not coincide with any of the ${{\mathbf{xf}}}_{r}$.
NE_ILL_CONDITIONED
The polynomials $\mu \left(x\right)$ and/or $\nu \left(x\right)$ cannot be found. The problem is too ill-conditioned. This may occur when the constraint points are very close together, or large in number, or when an attempt is made to constrain high-order derivatives.
NE_INT
On entry, ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{mf}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{mf}}\ge 1$.
NE_INT_3
On entry, ${\mathbf{k}}+1>{m}^{\prime \prime }+{\mathbf{n}}$, where ${m}^{\prime \prime }$ is the number of data points with nonzero weight and distinct abscissae different from all xf, and n is the total number of constraints: ${\mathbf{k}}+1=⟨\mathit{\text{value}}⟩$, ${m}^{\prime \prime }=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_NOT_MONOTONIC
On entry, $\mathit{i}=⟨\mathit{\text{value}}⟩$, ${\mathbf{x}}\left[\mathit{i}-1\right]=⟨\mathit{\text{value}}⟩$ and ${\mathbf{x}}\left[\mathit{i}-2\right]=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{x}}\left[\mathit{i}-1\right]\ge {\mathbf{x}}\left[\mathit{i}-2\right]$.
NE_REAL_2
On entry, ${\mathbf{xmin}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{xmax}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{xmin}}<{\mathbf{xmax}}$.
NE_REAL_ARRAY
On entry, $\mathit{I}=⟨\mathit{\text{value}}⟩$, ${\mathbf{xf}}\left[\mathit{I}-1\right]=⟨\mathit{\text{value}}⟩$, $\mathit{J}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{xf}}\left[\mathit{J}-1\right]=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{xf}}\left[\mathit{I}-1\right]\ne {\mathbf{xf}}\left[\mathit{J}-1\right]$.
On entry, ${\mathbf{xf}}\left[\mathit{I}-1\right]$ lies outside interval $\left[{\mathbf{xmin}},{\mathbf{xmax}}\right]$: $\mathit{I}=⟨\mathit{\text{value}}⟩$, ${\mathbf{xf}}\left[\mathit{I}-1\right]=⟨\mathit{\text{value}}⟩$, ${\mathbf{xmin}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{xmax}}=⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{x}}\left[\mathit{I}-1\right]$ lies outside interval $\left[{\mathbf{xmin}},{\mathbf{xmax}}\right]$: $\mathit{I}=⟨\mathit{\text{value}}⟩$, ${\mathbf{x}}\left[\mathit{I}-1\right]=⟨\mathit{\text{value}}⟩$, ${\mathbf{xmin}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{xmax}}=⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{x}}\left[\mathit{I}-1\right]$ lies outside interval $\left[{\mathbf{xmin}},{\mathbf{xmax}}\right]$ for some $\mathit{I}$.

## 7 Accuracy

No complete error analysis exists for either the interpolating algorithm or the approximating algorithm. However, considerable experience with the approximating algorithm shows that it is generally extremely satisfactory. Moreover, the moderate numbers of low-order constraints typical of data-fitting applications are unlikely to cause difficulty for the interpolating algorithm.

## 8 Parallelism and Performance

e02agc is not threaded in any implementation.

## 9 Further Comments

The time taken to form the interpolating polynomial is approximately proportional to ${\mathit{n}}^{3}$, and that to form the approximating polynomials is very approximately proportional to $m\left(k+1\right)\left(k+1-\mathit{n}\right)$.
To carry out a least squares polynomial fit without constraints, use e02adc. To carry out polynomial interpolation only, use e01aec.

## 10 Example

This example reads data in the following order, using the notation of the argument list above:
• mf
• ${\mathbf{p}}\left[\mathit{i}-1\right]$, ${\mathbf{xf}}\left[\mathit{i}-1\right]$, Y-value and derivative values (if any) at ${\mathbf{xf}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{mf}}$
• m
• ${\mathbf{x}}\left[\mathit{i}-1\right]$, ${\mathbf{y}}\left[\mathit{i}-1\right]$, ${\mathbf{w}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$
• k, xmin, xmax
The output is:
• the root mean square residual for each degree from $\mathit{n}$ to $k$;
• the Chebyshev coefficients for the fit of degree $k$;
• the data points, and the fitted values and residuals for the fit of degree $k$.
The program is written in a generalized form which will read any number of datasets.
The dataset supplied specifies $5$ data points in the interval $\left[0.0,4.0\right]$ with unit weights, to which are to be fitted polynomials, $p$, of degrees up to $4$, subject to the $3$ constraints:
• $p\left(0.0\right)=1.0\text{, }{p}^{\prime }\left(0.0\right)=-2.0\text{, }p\left(4.0\right)=9.0\text{.}$

### 10.1 Program Text

Program Text (e02agce.c)

### 10.2 Program Data

Program Data (e02agce.d)

### 10.3 Program Results

Program Results (e02agce.r)