# NAG Toolbox: nag_fit_1dcheb_con (e02ag)

## Purpose

nag_fit_1dcheb_con (e02ag) computes constrained weighted least squares polynomial approximations in Chebyshev series form to an arbitrary set of data points. The values of the approximations and any number of their derivatives can be specified at selected points.

## Syntax

```
[a, s, np1, wrk, ifail] = e02ag(k, xmin, xmax, x, y, w, xf, yf, ip, lwrk, 'm', m, 'mf', mf)
[a, s, np1, wrk, ifail] = nag_fit_1dcheb_con(k, xmin, xmax, x, y, w, xf, yf, ip, lwrk, 'm', m, 'mf', mf)
```

## Description

nag_fit_1dcheb_con (e02ag) determines least squares polynomial approximations of degrees up to $k$ to the set of data points $(x_r, y_r)$ with weights $w_r$, for $r = 1, 2, \ldots, m$. The value of $k$, the maximum degree required, is prescribed by you. At each of the values $xf_r$, for $r = 1, 2, \ldots, mf$, of the independent variable $x$, the approximations and their derivatives up to order $p_r$ are constrained to have one of the values $yf_s$, for $s = 1, 2, \ldots, n$, specified by you, where $n = mf + \sum_{r=1}^{mf} p_r$.
The approximation of degree $i$ has the property that, subject to the imposed constraints, it minimizes $\sigma_i$, the sum of the squares of the weighted residuals $\varepsilon_r$, for $r = 1, 2, \ldots, m$, where
$$\varepsilon_r = w_r \left( y_r - f_i(x_r) \right)$$
and $f_i(x_r)$ is the value of the polynomial approximation of degree $i$ at the $r$th data point.
Each polynomial is represented in Chebyshev series form with normalized argument $\bar{x}$. This argument lies in the range $-1$ to $+1$ and is related to the original variable $x$ by the linear transformation
$$\bar{x} = \frac{2x - (x_{\mathrm{max}} + x_{\mathrm{min}})}{x_{\mathrm{max}} - x_{\mathrm{min}}}$$
where $x_{\mathrm{min}}$ and $x_{\mathrm{max}}$, specified by you, are respectively the lower and upper end points of the interval of $x$ over which the polynomials are to be defined.
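This normalization is simple to reproduce; a minimal Python illustration, independent of the NAG Toolbox (the function name here is hypothetical):

```python
def normalized_argument(x, xmin, xmax):
    """Map x in [xmin, xmax] to the Chebyshev argument xbar in [-1, 1]."""
    return (2.0 * x - (xmax + xmin)) / (xmax - xmin)

# The end points map to -1 and +1, and the midpoint maps to 0:
print(normalized_argument(0.0, 0.0, 4.0))  # -1.0
print(normalized_argument(4.0, 0.0, 4.0))  # 1.0
print(normalized_argument(2.0, 0.0, 4.0))  # 0.0
```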
The polynomial approximation of degree $i$ can be written as
$$\tfrac{1}{2} a_{i,0} + a_{i,1} T_1(\bar{x}) + \cdots + a_{i,j} T_j(\bar{x}) + \cdots + a_{i,i} T_i(\bar{x})$$
where $T_j(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $j$ with argument $\bar{x}$. For $i = n, n+1, \ldots, k$, the function produces the values of the coefficients $a_{i,j}$, for $j = 0, 1, \ldots, i$, together with the value of the root mean square residual,
$$S_i = \sqrt{\frac{\sigma_i}{m' + n - i - 1}},$$
where $m'$ is the number of data points with nonzero weight.
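The series form above, with its half-weighted leading coefficient, can be evaluated with the three-term recurrence $T_{j+1}(\bar{x}) = 2\bar{x}\,T_j(\bar{x}) - T_{j-1}(\bar{x})$. In practice nag_fit_1dcheb_eval (e02ae) does this for you; the following Python sketch merely illustrates the convention and is not the routine's implementation:

```python
def eval_cheb_series(a, xbar):
    """Evaluate (1/2)*a[0] + a[1]*T1(xbar) + ... + a[n]*Tn(xbar)
    using the recurrence T_{j+1} = 2*xbar*T_j - T_{j-1}."""
    total = 0.5 * a[0]          # leading coefficient carries the factor 1/2
    t_prev, t_cur = 1.0, xbar   # T0(xbar), T1(xbar)
    for j in range(1, len(a)):
        total += a[j] * t_cur
        t_prev, t_cur = t_cur, 2.0 * xbar * t_cur - t_prev
    return total

# Sanity checks: a constant series [2] evaluates to 1 (because of the 1/2),
# and [0, 0, 1] reproduces T2(xbar) = 2*xbar**2 - 1.
print(eval_cheb_series([2.0], 0.3))            # 1.0
print(eval_cheb_series([0.0, 0.0, 1.0], 0.5))  # -0.5
```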
Values of the approximations may subsequently be computed using nag_fit_1dcheb_eval (e02ae) or nag_fit_1dcheb_eval2 (e02ak).
First, nag_fit_1dcheb_con (e02ag) determines a polynomial $\mu(\bar{x})$, of degree $n-1$, which satisfies the given constraints, and a polynomial $\nu(\bar{x})$, of degree $n$, which has value (or derivative) zero wherever a constrained value (or derivative) is specified. It then fits $y_r - \mu(\bar{x}_r)$, for $r = 1, 2, \ldots, m$, with polynomials of the required degree in $\bar{x}$, each having $\nu(\bar{x})$ as a factor. Finally, the coefficients of $\mu(\bar{x})$ are added to the coefficients of these fits to give the coefficients of the constrained polynomial approximations to the data points $(x_r, y_r)$, for $r = 1, 2, \ldots, m$. The method employed is given in Hayes (1970): it is an extension of Forsythe's orthogonal polynomials method (see Forsythe (1957)) as modified by Clenshaw (see Clenshaw (1960)).
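A polynomial with the properties required of $\nu(\bar{x})$ is, up to normalization, the product $\prod_r (\bar{x} - \bar{xf}_r)^{p_r+1}$, whose degree is $\sum_r (p_r + 1) = mf + \sum_r p_r = n$. A Python sketch of this construction (illustrative only, not the routine's internal representation):

```python
def nu(x, xf, ip):
    """Product-form polynomial with a zero of order ip[r]+1 at each xf[r].
    Its value and first ip[r] derivatives vanish at xf[r]; the total degree
    is n = len(xf) + sum(ip)."""
    result = 1.0
    for xfr, pr in zip(xf, ip):
        result *= (x - xfr) ** (pr + 1)
    return result

# For the constraints in the Example section (value and slope fixed at x = 0,
# value fixed at x = 4): nu(x) = x**2 * (x - 4), of degree n = 3.
print(nu(2.0, [0.0, 4.0], [1, 0]))  # 2**2 * (2 - 4) = -8.0
```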

## References

Clenshaw C W (1960) Curve fitting with a digital computer *Comput. J.* **2** 170–173
Forsythe G E (1957) Generation and use of orthogonal polynomials for data fitting with a digital computer *J. Soc. Indust. Appl. Math.* **5** 74–88
Hayes J G (ed.) (1970) *Numerical Approximation to Functions and Data* Athlone Press, London

## Parameters

### Compulsory Input Parameters

1:     k – int64/int32/nag_int scalar
$k$, the maximum degree required.
Constraint: $n \le k \le m'' + n - 1$, where $n$ is the total number of constraints and $m''$ is the number of data points with nonzero weights and distinct abscissae which do not coincide with any of the $xf_r$.
2:     xmin – double scalar
3:     xmax – double scalar
The lower and upper end points, respectively, of the interval $[x_{\mathrm{min}}, x_{\mathrm{max}}]$. Unless there are specific reasons to the contrary, it is recommended that xmin and xmax be set respectively to the lowest and highest values among the $x_r$ and $xf_r$. This avoids the danger of extrapolation provided there is a constraint point or data point with nonzero weight at each end point.
Constraint: xmax > xmin.
4:     x(m) – double array
m, the dimension of the array, must satisfy the constraint $m \ge 1$.
x(r) must contain the value $x_r$ of the independent variable at the $r$th data point, for $r = 1, 2, \ldots, m$.
Constraint: the x(r) must be in nondecreasing order and satisfy xmin ≤ x(r) ≤ xmax.
5:     y(m) – double array
m, the dimension of the array, must satisfy the constraint $m \ge 1$.
y(r) must contain $y_r$, the value of the dependent variable at the $r$th data point, for $r = 1, 2, \ldots, m$.
6:     w(m) – double array
m, the dimension of the array, must satisfy the constraint $m \ge 1$.
w(r) must contain the weight $w_r$ to be applied to the data point $x_r$, for $r = 1, 2, \ldots, m$. For advice on the choice of weights see the E02 Chapter Introduction. Negative weights are treated as positive. A zero weight causes the corresponding data point to be ignored. Zero weight should be given to any data point whose $x$ and $y$ values both coincide with those of a constraint (otherwise the denominators involved in the root mean square residuals $S_i$ will be slightly in error).
7:     xf(mf) – double array
mf, the dimension of the array, must satisfy the constraint $mf \ge 1$.
xf(r) must contain $xf_r$, the value of the independent variable at which a constraint is specified, for $r = 1, 2, \ldots, mf$.
Constraint: these values need not be ordered but must be distinct and satisfy xmin ≤ xf(r) ≤ xmax.
8:     yf(lyf) – double array
lyf, the dimension of the array, must satisfy the constraint $lyf \ge mf + \sum_{i=1}^{mf} ip(i)$.
The values which the approximating polynomials and their derivatives are required to take at the points specified in xf. For each value of xf(r), yf contains in successive elements the required value of the approximation, its first derivative, second derivative, …, $p_r$th derivative, for $r = 1, 2, \ldots, mf$. Thus the value, $yf_s$, which the $k$th derivative of each approximation ($k = 0$ referring to the approximation itself) is required to take at the point xf(r) must be contained in yf(s), where
$$s = r + k + p_1 + p_2 + \cdots + p_{r-1},$$
with $k = 0, 1, \ldots, p_r$ and $r = 1, 2, \ldots, mf$. The derivatives are with respect to the independent variable $x$.
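The index formula can be made concrete with a small helper; the following Python sketch (hypothetical name, 1-based values to match the documentation) returns the position $s$ in yf holding the $k$th-derivative constraint at xf(r):

```python
def yf_index(r, k, ip):
    """1-based position in yf of the k-th derivative value at xf(r):
    s = r + k + p_1 + p_2 + ... + p_{r-1}."""
    return r + k + sum(ip[: r - 1])

# For the Example section, ip = [1, 0], so yf holds
# [value at xf(1), 1st derivative at xf(1), value at xf(2)]:
print(yf_index(1, 0, [1, 0]))  # 1
print(yf_index(1, 1, [1, 0]))  # 2
print(yf_index(2, 0, [1, 0]))  # 3
```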
9:     ip(mf) – int64int32nag_int array
mf, the dimension of the array, must satisfy the constraint $mf \ge 1$.
ip(r) must contain $p_r$, the order of the highest-order derivative specified at xf(r), for $r = 1, 2, \ldots, mf$. $p_r = 0$ implies that the value of the approximation at xf(r) is specified, but not that of any derivative.
Constraint: ip(r) ≥ 0, for $r = 1, 2, \ldots, mf$.
10:   lwrk – int64int32nag_int scalar
The dimension of the array wrk as declared in the (sub)program from which nag_fit_1dcheb_con (e02ag) is called.
Constraint: $lwrk \ge \max(4m + 3 \cdot kplus1,\; 8n + 5 \cdot ipmax + mf + 10) + 2n + 2$, where $kplus1 = k + 1$, $ipmax = \max_r ip(r)$ for $r = 1, 2, \ldots, mf$, and $n$ is the total number of constraint conditions.
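As an illustration, the minimum workspace for the data in the Example section ($m = 5$, $k = 4$, $mf = 2$, ip = [1, 0], so $kplus1 = 5$, $n = 3$, $ipmax = 1$) can be evaluated directly. This Python sketch just restates the documented bound; it is not the routine's actual allocation logic:

```python
def min_lwrk(m, k, mf, ip):
    """Lower bound on lwrk from the documented constraint."""
    kplus1 = k + 1
    n = mf + sum(ip)   # total number of constraint conditions
    ipmax = max(ip)
    return max(4 * m + 3 * kplus1, 8 * n + 5 * ipmax + mf + 10) + 2 * n + 2

print(min_lwrk(5, 4, 2, [1, 0]))  # 49, so lwrk = 200 in the Example is ample
```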

### Optional Input Parameters

1:     m – int64/int32/nag_int scalar
Default: the dimension of the arrays x, y, w. (An error is raised if these dimensions are not equal.)
$m$, the number of data points to be fitted.
Constraint: $m \ge 1$.
2:     mf – int64/int32/nag_int scalar
Default: the dimension of the arrays xf, ip. (An error is raised if these dimensions are not equal.)
$mf$, the number of values of the independent variable at which a constraint is specified.
Constraint: $mf \ge 1$.

### Input Parameters Omitted from the MATLAB Interface

kplus1, lda, lyf, np1, iwrk, liwrk

### Output Parameters

1:     a(lda,kplus1) – double array
$kplus1 = k + 1$ and $lda \ge kplus1$.
a(i+1, j+1) contains the coefficient $a_{ij}$ in the approximating polynomial of degree $i$, for $i = n, \ldots, k$ and $j = 0, 1, \ldots, i$.
2:     s(kplus1) – double array
$kplus1 = k + 1$.
s(i+1) contains $S_i$, for $i = n, \ldots, k$, the root mean square residual corresponding to the approximating polynomial of degree $i$. In the case where the number of data points with nonzero weight is equal to $i + 1 - n$, $S_i$ is indeterminate: the function sets it to zero. For the interpretation of the values of $S_i$ and their use in selecting an appropriate degree, see Section [General] in the E02 Chapter Introduction.
3:     np1 – int64/int32/nag_int scalar
$n + 1$, where $n$ is the total number of constraint conditions imposed: $n = mf + p_1 + p_2 + \cdots + p_{mf}$.
4:     wrk(lwrk) – double array
Contains the weighted residuals of the highest degree of fit determined ($k$). The residual at $x_r$ is in element $2(n+1) + 3(m+k+1) + r$, for $r = 1, 2, \ldots, m$. The rest of the array is used as workspace.
5:     ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = 1
On entry, $m < 1$, or $kplus1 < n + 1$, or $lda < kplus1$, or $mf < 1$, or $lyf < n$, or lwrk is too small (see Section [Parameters]), or $liwrk < 2 \times mf + 2$.
(Here $n$ is the total number of constraint conditions.)
ifail = 2
ip(r) < 0 for some $r = 1, 2, \ldots, mf$.
ifail = 3
xmin ≥ xmax, or xf(r) is not in the interval xmin to xmax for some $r = 1, 2, \ldots, mf$, or the xf(r) are not distinct.
ifail = 4
x(r) is not in the interval xmin to xmax for some $r = 1, 2, \ldots, m$.
ifail = 5
x(r) < x(r−1) for some $r = 2, 3, \ldots, m$.
ifail = 6
$kplus1 > m'' + n$, where $m''$ is the number of data points with nonzero weight and distinct abscissae which do not coincide with any xf(r). Thus there is no unique solution.
ifail = 7
The polynomials $\mu(\bar{x})$ and/or $\nu(\bar{x})$ cannot be determined. The problem supplied is too ill-conditioned. This may occur when the constraint points are very close together, or large in number, or when an attempt is made to constrain high-order derivatives.

## Accuracy

No complete error analysis exists for either the interpolating or the approximating algorithm. However, considerable experience with the approximating algorithm shows that it is generally extremely satisfactory. Also, the moderate number of low-order constraints typical of data fitting applications is unlikely to cause difficulty with the interpolating function.

## Further Comments

The time taken to form the interpolating polynomial is approximately proportional to $n^3$, and that to form the approximating polynomials is very approximately proportional to $m(k+1)(k+1-n)$.
To carry out a least squares polynomial fit without constraints, use nag_fit_1dcheb_arb (e02ad). To carry out polynomial interpolation only, use nag_interp_1d_cheb (e01ae).

## Example

```
function nag_fit_1dcheb_con_example
k = int64(4);
xmin = 0;
xmax = 4;
x = [0.5; 1; 2; 2.5; 3];
y = [0.03; -0.75; -1; -0.1; 1.75];
w = [1; 1; 1; 1; 1];
xf = [0; 4];
yf = [1; -2; 9];          % value and first derivative at xf(1), value at xf(2)
ip = [int64(1); 0];
lwrk = int64(200);
[a, s, np1, wrk, ifail] = nag_fit_1dcheb_con(k, xmin, xmax, x, y, w, xf, yf, ip, lwrk)
```
```

a =

0         0         0         0         0
0         0         0         0         0
0         0         0         0         0
3.9980    3.4995    3.0010    0.5005         0
3.9980    3.4995    3.0010    0.5005   -0.0000

s =

0
0
0
0.0025
0.0029

np1 =

4

wrk =

6.0000
4.0000
2.0000
0
-1.0000
-0.2500
0.5000
0.2500
-0.2200
-0.7500
-2.0000
-2.3500
-2.2500
-0.7500
0.8632
-0.5000
2.2096
0
1.8923
0.2500
-0.1262
0.5000
-2.3711
-2.0776
-1.5269
-0.8471
-0.1077
-0.5131
-1.5269
-0.1529
-0.9462
1.0000
1.5269
1.9443
2.4006
2.8146
2.4006
3.8531
-0.0010
0.0008
0.0020
-0.0039
0.0022
0
0.2500
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

ifail =

0

```