
# NAG Library Routine Document: G02CGF

Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

G02CGF performs a multiple linear regression on a set of variables whose means, sums of squares and cross-products of deviations from means, and Pearson product-moment correlation coefficients are given.

## 2  Specification

```fortran
SUBROUTINE G02CGF (N, K1, K, XBAR, SSP, LDSSP, R, LDR, RESULT, COEF,   &
                   LDCOEF, CON, RINV, LDRINV, C, LDC, WKZ, LDWKZ, IFAIL)
INTEGER            N, K1, K, LDSSP, LDR, LDCOEF, LDRINV, LDC, LDWKZ, IFAIL
REAL (KIND=nag_wp) XBAR(K1), SSP(LDSSP,K1), R(LDR,K1), RESULT(13),     &
                   COEF(LDCOEF,3), CON(3), RINV(LDRINV,K), C(LDC,K),   &
                   WKZ(LDWKZ,K)
```

## 3  Description

G02CGF fits a curve of the form
 $y = a + b_1 x_1 + b_2 x_2 + \cdots + b_k x_k$
to the data points
 $\left(x_{11}, x_{21}, \dots, x_{k1}, y_1\right), \left(x_{12}, x_{22}, \dots, x_{k2}, y_2\right), \dots, \left(x_{1n}, x_{2n}, \dots, x_{kn}, y_n\right)$
such that
 $y_i = a + b_1 x_{1i} + b_2 x_{2i} + \cdots + b_k x_{ki} + e_i, \quad i = 1, 2, \dots, n.$
The routine calculates the regression coefficients, ${b}_{1},{b}_{2},\dots ,{b}_{k}$, the regression constant, $a$, and various other statistical quantities by minimizing
 $\sum_{i=1}^{n} e_i^2.$
The actual data values $\left({x}_{1i},{x}_{2i},\dots ,{x}_{ki},{y}_{i}\right)$ are not provided as input to the routine. Instead, input consists of:
(i) the number of cases, $n$, on which the regression is based;
(ii) the total number of variables, dependent and independent, in the regression, $\left(k+1\right)$;
(iii) the number of independent variables in the regression, $k$;
(iv) the means of all $k+1$ variables in the regression, both the independent variables $\left({x}_{1},{x}_{2},\dots ,{x}_{k}\right)$ and the dependent variable $\left(y\right)$, which is the $\left(k+1\right)$th variable: i.e., ${\stackrel{-}{x}}_{1},{\stackrel{-}{x}}_{2},\dots ,{\stackrel{-}{x}}_{k},\stackrel{-}{y}$;
(v) the $\left(k+1\right)$ by $\left(k+1\right)$ matrix [${S}_{ij}$] of sums of squares and cross-products of deviations from means of all the variables in the regression; the terms involving the dependent variable, $y$, appear in the $\left(k+1\right)$th row and column;
(vi) the $\left(k+1\right)$ by $\left(k+1\right)$ matrix [${R}_{ij}$] of the Pearson product-moment correlation coefficients for all the variables in the regression; the correlations involving the dependent variable, $y$, appear in the $\left(k+1\right)$th row and column.
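The summary statistics in (iv)-(vi) can be built from raw data before calling the routine. The following is a minimal pure-Python sketch (not NAG code; `summary_stats` is an illustrative name, and the dependent variable is stored as the last column, matching the convention above):

```python
# Build the G02CGF-style summary inputs from raw data: the means, the
# (k+1) x (k+1) matrix S of sums of squares and cross-products of
# deviations from means, and the Pearson correlation matrix R.
# The dependent variable y is the last ((k+1)th) column of each row.

def summary_stats(data):
    """data: list of n rows, each [x1, ..., xk, y]."""
    n = len(data)
    m = len(data[0])                      # m = k + 1 variables
    xbar = [sum(row[j] for row in data) / n for j in range(m)]
    # S[i][j] = sum over cases of (v_i - mean_i) * (v_j - mean_j)
    S = [[sum((row[i] - xbar[i]) * (row[j] - xbar[j]) for row in data)
          for j in range(m)] for i in range(m)]
    # Pearson correlations: R[i][j] = S[i][j] / sqrt(S[i][i] * S[j][j])
    R = [[S[i][j] / (S[i][i] * S[j][j]) ** 0.5 for j in range(m)]
         for i in range(m)]
    return xbar, S, R
```

Note that no column of constants is added for the regression constant; as Section 8 explains, G02CGF computes $a$ itself and a 'dummy variable' of constants would make the correlation matrix singular.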
The quantities calculated are:
(a) The inverse of the $k$ by $k$ partition of the matrix of correlation coefficients, [${R}_{ij}$], involving only the independent variables. The inverse is obtained using an accurate method which assumes that this sub-matrix is positive definite.
(b) The modified inverse matrix, $C=\left[{c}_{ij}\right]$, where
 $c_{ij} = \frac{R_{ij} \, r_{ij}}{S_{ij}}, \quad i, j = 1, 2, \dots, k,$
where ${r}_{ij}$ is the $\left(i,j\right)$th element of the inverse matrix of [${R}_{ij}$] as described in (a) above. Each element of $C$ is thus the corresponding element of the matrix of correlation coefficients multiplied by the corresponding element of the inverse of this matrix, divided by the corresponding element of the matrix of sums of squares and cross-products of deviations from means.
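A short simplification (using only the definition of the correlation coefficients, ${R}_{ij}={S}_{ij}/\sqrt{{S}_{ii}{S}_{jj}}$) may make the structure of $C$ clearer:
 $c_{ij} = \frac{R_{ij} \, r_{ij}}{S_{ij}} = \frac{r_{ij}}{\sqrt{S_{ii} \, S_{jj}}},$
so $C$ is precisely the inverse of the $k$ by $k$ sums-of-squares-and-cross-products partition for the independent variables, which is why step (c) below recovers the usual least-squares coefficients.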
(c) The regression coefficients:
 $b_i = \sum_{j=1}^{k} c_{ij} S_{j\left(k+1\right)}, \quad i = 1, 2, \dots, k,$
where ${S}_{j\left(k+1\right)}$ is the sum of cross-products of deviations from means for the independent variable ${x}_{j}$ and the dependent variable $y$.
(d) The sum of squares attributable to the regression, $SSR$, the sum of squares of deviations about the regression, $SSD$, and the total sum of squares, $SST$:
• $SST={S}_{\left(k+1\right)\left(k+1\right)}$, the sum of squares of deviations from the mean for the dependent variable, $y$;
• $SSR = \sum_{j=1}^{k} b_j S_{j\left(k+1\right)}$;
• $SSD = SST - SSR$.
(e) The degrees of freedom attributable to the regression, $DFR$, the degrees of freedom of deviations about the regression, $DFD$, and the total degrees of freedom, $DFT$:
 $DFR=k; DFD=n-k-1; DFT=n-1.$
(f) The mean square attributable to the regression, $MSR$, and the mean square of deviations about the regression, $MSD$:
 $MSR=SSR/DFR; MSD=SSD/DFD.$
(g) The $F$ values for the analysis of variance:
 $F=MSR/MSD.$
(h) The standard error estimate:
 $s = \sqrt{MSD}.$
(i) The coefficient of multiple correlation, $R$, the coefficient of multiple determination, ${R}^{2}$, and the coefficient of multiple determination corrected for the degrees of freedom, ${\stackrel{-}{R}}^{2}$;
 $R = \sqrt{1 - \frac{SSD}{SST}}; \quad R^2 = 1 - \frac{SSD}{SST}; \quad \bar{R}^2 = 1 - \frac{SSD \times DFT}{SST \times DFD}.$
(j) The standard error of the regression coefficients:
 $se\left(b_i\right) = \sqrt{MSD \times c_{ii}}, \quad i = 1, 2, \dots, k.$
(k) The $t$ values for the regression coefficients:
 $t\left(b_i\right) = \frac{b_i}{se\left(b_i\right)}, \quad i = 1, 2, \dots, k.$
(l) The regression constant, $a$, its standard error, $se\left(a\right)$, and its $t$ value, $t\left(a\right)$:
 $a = \bar{y} - \sum_{i=1}^{k} b_i \bar{x}_i; \quad se\left(a\right) = \sqrt{MSD \times \left(\frac{1}{n} + \sum_{i=1}^{k} \sum_{j=1}^{k} \bar{x}_i c_{ij} \bar{x}_j\right)}; \quad t\left(a\right) = \frac{a}{se\left(a\right)}.$
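The calculations (a)-(l) can be followed end to end in a pure-Python sketch (not NAG code; all names are illustrative, and a plain Gauss-Jordan inverse stands in for the accurate positive-definite inversion, via F04ABF, that the routine actually uses):

```python
# Reproduce steps (a)-(l) from the same summary inputs G02CGF takes:
# means xbar, (k+1) x (k+1) sums-of-squares matrix S, correlation
# matrix R, and number of cases n (dependent variable last).
import math

def invert(A):
    """Gauss-Jordan inverse with partial pivoting (small matrices only)."""
    k = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(k)]
         for i, row in enumerate(A)]
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(k):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[k:] for row in M]

def regress_from_summary(xbar, S, R, n):
    k = len(xbar) - 1                     # last variable is y
    # (a) inverse of the k x k partition of R; (b) modified inverse C
    rinv = invert([row[:k] for row in R[:k]])
    C = [[R[i][j] * rinv[i][j] / S[i][j] for j in range(k)]
         for i in range(k)]
    # (c) regression coefficients b_i = sum_j c_ij * S_{j(k+1)}
    b = [sum(C[i][j] * S[j][k] for j in range(k)) for i in range(k)]
    # (d)-(f) sums of squares, degrees of freedom, mean squares
    SST = S[k][k]
    SSR = sum(b[j] * S[j][k] for j in range(k))
    SSD = SST - SSR
    DFR, DFD, DFT = k, n - k - 1, n - 1
    MSR, MSD = SSR / DFR, SSD / DFD
    # (g)-(i) F value, standard error estimate, multiple correlation
    R2 = 1.0 - SSD / SST
    stats = {"b": b, "SSR": SSR, "SSD": SSD, "SST": SST,
             "F": MSR / MSD, "s": math.sqrt(MSD),
             "R": math.sqrt(R2), "R2": R2,
             "R2adj": 1.0 - (SSD * DFT) / (SST * DFD)}
    # (j)-(k) standard errors and t values of the coefficients
    stats["se_b"] = [math.sqrt(MSD * C[i][i]) for i in range(k)]
    stats["t_b"] = [b[i] / se for i, se in enumerate(stats["se_b"])]
    # (l) regression constant, its standard error and its t value
    a = xbar[k] - sum(b[i] * xbar[i] for i in range(k))
    se_a = math.sqrt(MSD * (1.0 / n + sum(
        xbar[i] * C[i][j] * xbar[j] for i in range(k) for j in range(k))))
    stats.update({"a": a, "se_a": se_a, "t_a": a / se_a})
    return stats
```

The sketch makes no attempt at the refinement or conditioning checks of the library routine; it simply mirrors the formulas above in execution order.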

## 4  References

Draper N R and Smith H (1985) Applied Regression Analysis (2nd Edition) Wiley

## 5  Parameters

1:     N – INTEGER Input
On entry: the number of cases $n$, used in calculating the sums of squares and cross-products and correlation coefficients.
2:     K1 – INTEGER Input
On entry: the total number of variables, independent and dependent, $\left(k+1\right)$, in the regression.
Constraint: $2\le {\mathbf{K1}}<{\mathbf{N}}$.
3:     K – INTEGER Input
On entry: the number of independent variables $k$ in the regression.
Constraint: ${\mathbf{K}}={\mathbf{K1}}-1$.
4:     XBAR(K1) – REAL (KIND=nag_wp) array Input
On entry: ${\mathbf{XBAR}}\left(\mathit{i}\right)$ must be set to ${\stackrel{-}{x}}_{\mathit{i}}$, the mean value of the $\mathit{i}$th variable, for $\mathit{i}=1,2,\dots ,k+1$; the mean of the dependent variable must be contained in ${\mathbf{XBAR}}\left(k+1\right)$.
5:     SSP(LDSSP,K1) – REAL (KIND=nag_wp) array Input
On entry: ${\mathbf{SSP}}\left(\mathit{i},\mathit{j}\right)$ must be set to ${S}_{\mathit{i}\mathit{j}}$, the sum of cross-products of deviations from means for the $\mathit{i}$th and $\mathit{j}$th variables, for $\mathit{i}=1,2,\dots ,k+1$ and $\mathit{j}=1,2,\dots ,k+1$; terms involving the dependent variable appear in row $k+1$ and column $k+1$.
6:     LDSSP – INTEGER Input
On entry: the first dimension of the array SSP as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDSSP}}\ge {\mathbf{K1}}$.
7:     R(LDR,K1) – REAL (KIND=nag_wp) array Input
On entry: ${\mathbf{R}}\left(\mathit{i},\mathit{j}\right)$ must be set to ${R}_{\mathit{i}\mathit{j}}$, the Pearson product-moment correlation coefficient for the $\mathit{i}$th and $\mathit{j}$th variables, for $\mathit{i}=1,2,\dots ,k+1$ and $\mathit{j}=1,2,\dots ,k+1$; terms involving the dependent variable appear in row $k+1$ and column $k+1$.
8:     LDR – INTEGER Input
On entry: the first dimension of the array R as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDR}}\ge {\mathbf{K1}}$.
9:     RESULT($13$) – REAL (KIND=nag_wp) array Output
On exit: the following information:
• ${\mathbf{RESULT}}\left(1\right)$: $SSR$, the sum of squares attributable to the regression;
• ${\mathbf{RESULT}}\left(2\right)$: $DFR$, the degrees of freedom attributable to the regression;
• ${\mathbf{RESULT}}\left(3\right)$: $MSR$, the mean square attributable to the regression;
• ${\mathbf{RESULT}}\left(4\right)$: $F$, the $F$ value for the analysis of variance;
• ${\mathbf{RESULT}}\left(5\right)$: $SSD$, the sum of squares of deviations about the regression;
• ${\mathbf{RESULT}}\left(6\right)$: $DFD$, the degrees of freedom of deviations about the regression;
• ${\mathbf{RESULT}}\left(7\right)$: $MSD$, the mean square of deviations about the regression;
• ${\mathbf{RESULT}}\left(8\right)$: $SST$, the total sum of squares;
• ${\mathbf{RESULT}}\left(9\right)$: $DFT$, the total degrees of freedom;
• ${\mathbf{RESULT}}\left(10\right)$: $s$, the standard error estimate;
• ${\mathbf{RESULT}}\left(11\right)$: $R$, the coefficient of multiple correlation;
• ${\mathbf{RESULT}}\left(12\right)$: ${R}^{2}$, the coefficient of multiple determination;
• ${\mathbf{RESULT}}\left(13\right)$: ${\stackrel{-}{R}}^{2}$, the coefficient of multiple determination corrected for the degrees of freedom.
10:   COEF(LDCOEF,$3$) – REAL (KIND=nag_wp) array Output
On exit: for $i=1,2,\dots ,k$, the following information:
${\mathbf{COEF}}\left(i,1\right)$
${b}_{i}$, the regression coefficient for the $i$th variable.
${\mathbf{COEF}}\left(i,2\right)$
$se\left({b}_{i}\right)$, the standard error of the regression coefficient for the $i$th variable.
${\mathbf{COEF}}\left(i,3\right)$
$t\left({b}_{i}\right)$, the $t$ value of the regression coefficient for the $i$th variable.
11:   LDCOEF – INTEGER Input
On entry: the first dimension of the array COEF as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDCOEF}}\ge {\mathbf{K}}$.
12:   CON($3$) – REAL (KIND=nag_wp) array Output
On exit: the following information:
• ${\mathbf{CON}}\left(1\right)$: $a$, the regression constant;
• ${\mathbf{CON}}\left(2\right)$: $se\left(a\right)$, the standard error of the regression constant;
• ${\mathbf{CON}}\left(3\right)$: $t\left(a\right)$, the $t$ value for the regression constant.
13:   RINV(LDRINV,K) – REAL (KIND=nag_wp) array Output
On exit: the inverse of the matrix of correlation coefficients for the independent variables; that is, the inverse of the matrix consisting of the first $k$ rows and columns of R.
14:   LDRINV – INTEGER Input
On entry: the first dimension of the array RINV as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDRINV}}\ge {\mathbf{K}}$.
15:   C(LDC,K) – REAL (KIND=nag_wp) array Output
On exit: the modified inverse matrix, where
• ${\mathbf{C}}\left(\mathit{i},\mathit{j}\right)={\mathbf{R}}\left(\mathit{i},\mathit{j}\right)×{\mathbf{RINV}}\left(\mathit{i},\mathit{j}\right)/{\mathbf{SSP}}\left(\mathit{i},\mathit{j}\right)$, for $\mathit{i}=1,2,\dots ,k$ and $\mathit{j}=1,2,\dots ,k$.
16:   LDC – INTEGER Input
On entry: the first dimension of the array C as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDC}}\ge {\mathbf{K}}$.
17:   WKZ(LDWKZ,K) – REAL (KIND=nag_wp) array Workspace
18:   LDWKZ – INTEGER Input
On entry: the first dimension of the array WKZ as declared in the (sub)program from which G02CGF is called.
Constraint: ${\mathbf{LDWKZ}}\ge {\mathbf{K}}$.
19:   IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to $0$, $-1\text{​ or ​}1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{​ or ​}1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{​ or ​}\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
${\mathbf{IFAIL}}=1$
 On entry, ${\mathbf{K1}}<2$.
${\mathbf{IFAIL}}=2$
 On entry, ${\mathbf{K1}}\ne \left({\mathbf{K}}+1\right)$.
${\mathbf{IFAIL}}=3$
 On entry, ${\mathbf{N}}\le {\mathbf{K1}}$.
${\mathbf{IFAIL}}=4$
 On entry, ${\mathbf{LDSSP}}<{\mathbf{K1}}$, or ${\mathbf{LDR}}<{\mathbf{K1}}$, or ${\mathbf{LDCOEF}}<{\mathbf{K}}$, or ${\mathbf{LDRINV}}<{\mathbf{K}}$, or ${\mathbf{LDC}}<{\mathbf{K}}$, or ${\mathbf{LDWKZ}}<{\mathbf{K}}$.
${\mathbf{IFAIL}}=5$
The $k$ by $k$ partition of the matrix $R$ which is to be inverted is not positive definite.
${\mathbf{IFAIL}}=6$
The refinement following the actual inversion fails, indicating that the $k$ by $k$ partition of the matrix $R$, which is to be inverted, is ill-conditioned. The use of G02DAF, which employs a different numerical technique, may avoid this difficulty (an extra ‘variable’ representing the constant term must be introduced for G02DAF).
${\mathbf{IFAIL}}=7$
Unexpected error in F04ABF.

## 7  Accuracy

The accuracy of any regression routine is almost entirely dependent on the accuracy of the matrix inversion method used. In G02CGF, it is the matrix of correlation coefficients, rather than that of the sums of squares and cross-products of deviations from means, that is inverted; this means that all terms in the matrix for inversion are of a similar order, which reduces the scope for computational error. For details on absolute accuracy, consult the relevant section of the document describing the inversion routine used, F04ABF. G02DAF uses a different method, based on F04AMF, which may well prove more reliable numerically; however, G02DAF does not handle missing values, nor does it provide the same output as this routine. (In particular, it is necessary to include the constant in the regression equation explicitly, as another 'variable'.)
If, in calculating $F$, $t\left(a\right)$, or any of the $t\left({b}_{i}\right)$  (see Section 3), the numbers involved are such that the result would be outside the range of numbers which can be stored by the machine, then the answer is set to the largest quantity which can be stored as a real variable, by means of a call to X02ALF.
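The capping behaviour described above can be sketched in Python (illustrative only; `safe_ratio` is a hypothetical helper, not a NAG routine, and `sys.float_info.max` stands in for the value X02ALF returns):

```python
# Guarded division: if the quotient would exceed the largest
# representable real, return that largest value (with the sign of
# the true quotient) instead of overflowing.
import sys

def safe_ratio(num, den):
    """Return num/den, capped at the largest storable real."""
    big = sys.float_info.max
    if den == 0.0 or abs(num) > abs(den) * big:
        sign = 1.0 if (num >= 0.0) == (den >= 0.0) else -1.0
        return sign * big
    return num / den
```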

## 8  Further Comments

The time taken by G02CGF depends on $k$.
This routine assumes that the matrix of correlation coefficients for the independent variables in the regression is positive definite; it fails if this is not the case.
This correlation matrix will in fact be positive definite whenever the correlation matrix and the sums of squares and cross-products (of deviations from means) matrix have been formed either without regard to missing values, or by eliminating completely any cases involving missing values for any variable. If, however, these matrices are formed by eliminating cases with missing values from only those calculations involving the variables for which the values are missing, no such statement can be made, and the correlation matrix may or may not be positive definite. You should be aware of the possible dangers of using correlation matrices formed in this way (see the G02 Chapter Introduction), but if you nevertheless wish to carry out regression using such matrices, this routine is capable of handling their inversion provided they are positive definite.
If a matrix is positive definite, its subsequent re-organisation by either G02CEF or G02CFF will not affect this property, and the new matrix can safely be used in this routine. Thus correlation matrices produced by any of G02BAF, G02BBF, G02BGF or G02BHF, even if subsequently modified by either G02CEF or G02CFF, can be handled by this routine.
It should be noted that in forming the sums of squares and cross-products matrix and the correlation matrix a column of constants should not be added to the data as an additional ‘variable’ in order to obtain a constant term in the regression. This routine automatically calculates the regression constant, $a$, and any attempt to insert such a ‘dummy variable’ is likely to cause the routine to fail.
It should also be noted that the routine requires the dependent variable to be the last of the $k+1$ variables whose statistics are provided as input to the routine. If this variable is not correctly positioned in the original data, the means, standard deviations, sums of squares and cross-products of deviations from means, and correlation coefficients can be manipulated by using G02CEF or G02CFF to reorder the variables as necessary.

## 9  Example

This example reads in the means, sums of squares and cross-products of deviations from means, and correlation coefficients for three variables. A multiple linear regression is then performed with the third and final variable as the dependent variable. Finally the results are printed.

### 9.1  Program Text

Program Text (g02cgfe.f90)

### 9.2  Program Data

Program Data (g02cgfe.d)

### 9.3  Program Results

Program Results (g02cgfe.r)