
F05 — Orthogonalization

This chapter is concerned with the orthogonalization of vectors in a finite dimensional space.

Let ${a}_{1},{a}_{2},\dots ,{a}_{n}$ be a set of $n$ linearly independent vectors in $m$-dimensional space, with $m\ge n$.

We wish to construct a set of $n$ vectors ${q}_{1},{q}_{2},\dots ,{q}_{n}$ such that:

- the vectors $\left\{{q}_{i}\right\}$ form an orthonormal set; that is, ${q}_{i}^{\mathrm{T}}{q}_{j}=0$ for $i\ne j$, and ${\Vert {q}_{i}\Vert}_{2}=1$;
- each ${a}_{i}$ is linearly dependent on the set $\left\{{q}_{i}\right\}$.

The classical Gram–Schmidt orthogonalization process is described in many textbooks; see for example Chapter 5 of Golub and Van Loan (1996).

It constructs the orthonormal set progressively. Suppose it has computed orthonormal vectors ${q}_{1},{q}_{2},\dots ,{q}_{k}$ which orthogonalise the first $k$ vectors ${a}_{1},{a}_{2},\dots ,{a}_{k}$. It then uses ${a}_{k+1}$ to compute ${q}_{k+1}$ as follows:

$$\begin{array}{lll}{z}_{k+1}& =& {a}_{k+1}-\sum _{i=1}^{k}\left({q}_{i}^{\mathrm{T}}{a}_{k+1}\right){q}_{i}\\ {q}_{k+1}& =& {z}_{k+1}/{\Vert {z}_{k+1}\Vert}_{2}\text{.}\end{array}$$

In finite precision computation, this process can result in a set of vectors $\left\{{q}_{i}\right\}$ which are far from being orthogonal. This is caused by ${\Vert {z}_{k+1}\Vert}_{2}$ being small compared with ${\Vert {a}_{k+1}\Vert}_{2}$. If this situation is detected, it can be remedied by reorthogonalising the computed ${q}_{k+1}$ against ${q}_{1},{q}_{2},\dots ,{q}_{k}$, that is, repeating the process with the computed ${q}_{k+1}$ in place of ${a}_{k+1}$. See Daniel et al. (1976).
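The update above, together with the reorthogonalization remedy, can be sketched as follows. This is an illustrative Python/NumPy sketch, not the toolbox routine itself; the function name `gram_schmidt` and the fixed two-pass strategy are assumptions for the example.

```python
import numpy as np

def gram_schmidt(a, reorthogonalize=True):
    """Orthogonalise the columns of a (m x n, m >= n) progressively.

    Each new column a_{k+1} has its components along q_1..q_k subtracted
    and is then normalised.  If reorthogonalize is set, the projection
    step is repeated once with the computed vector, which is usually
    enough to restore orthogonality lost to rounding.
    """
    m, n = a.shape
    q = np.zeros((m, n))
    for k in range(n):
        z = a[:, k].astype(float).copy()
        passes = 2 if reorthogonalize else 1
        for _ in range(passes):
            # z <- z - sum_i (q_i^T z) q_i over the columns computed so far
            z -= q[:, :k] @ (q[:, :k].T @ z)
        q[:, k] = z / np.linalg.norm(z)
    return q
```

For well-conditioned input a single pass already gives near-orthonormal columns; the second pass matters when the ${a}_{i}$ are close to linear dependence.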

An alternative approach to orthogonalising a set of vectors is based on the $QR$ factorization (see the F08 Chapter Introduction), which is usually performed by Householder's method. See Chapter 5 of Golub and Van Loan (1996).

Let $A$ be the $m$ by $n$ matrix whose columns are the $n$ vectors to be orthogonalised. The $QR$ factorization gives

$$A=QR$$

where $R$ is an $n$ by $n$ upper triangular matrix and $Q$ is an $m$ by $n$ matrix whose columns are the required orthonormal set.

Moreover, for any $k$ such that $1\le k\le n$, the first $k$ columns of $Q$ are an orthonormal basis for the space spanned by the first $k$ columns of $A$.
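Both properties of the factorization can be checked numerically. A brief Python/NumPy illustration (NumPy's reduced `qr` is Householder-based; the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))               # m = 6 vectors of dimension... columns to orthogonalise

# Reduced QR: Q is 6 x 4 with orthonormal columns, R is 4 x 4 upper triangular
Q, R = np.linalg.qr(A, mode="reduced")

# The columns of Q are orthonormal:
assert np.allclose(Q.T @ Q, np.eye(4))

# Since R is upper triangular, A[:, :k] = Q[:, :k] @ R[:k, :k], so the
# first k columns of A lie in the span of the first k columns of Q:
k = 2
assert np.allclose(Q[:, :k] @ R[:k, :k], A[:, :k])
```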

Householder's method requires approximately twice as much work as the Gram–Schmidt method, provided that no re-orthogonalization is required in the latter. However, it has satisfactory numerical properties and yields vectors which are close to orthogonality even when the original vectors ${a}_{i}$ are close to being linearly dependent.

The single function in this chapter, nag_orthog_real_gram_schmidt (f05aa), uses the Gram–Schmidt method, with re-orthogonalization to ensure that the computed vectors are close to being exactly orthogonal. This method is only available for real vectors.

To apply Householder's method, you must use functions in Chapter F08:

- for real vectors: nag_lapack_dgeqrf (f08ae), followed by nag_lapack_dorgqr (f08af)
- for complex vectors: nag_lapack_zgeqrf (f08as), followed by nag_lapack_zungqr (f08at)

The example programs for nag_lapack_dgeqrf (f08ae) or nag_lapack_zgeqrf (f08as) illustrate the necessary calls to these functions.
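The same two-step pattern (factorize, then form $Q$ explicitly) can be sketched in Python via SciPy's LAPACK wrappers, which expose `dgeqrf` and `dorgqr`, the underlying LAPACK routines corresponding to f08ae and f08af. This is an illustrative sketch, not a substitute for the toolbox example programs:

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))             # real vectors to orthogonalise, as columns

# Step 1: Householder QR factorization (analogue of f08ae).
# R is stored in the upper triangle of qr; the reflectors sit below it.
qr, tau, work, info = lapack.dgeqrf(A)
assert info == 0

# Step 2: accumulate the orthogonal factor Q, 5 x 3 (analogue of f08af).
Q, work, info = lapack.dorgqr(qr, tau)
assert info == 0

R = np.triu(qr[:3, :])                      # n x n upper triangular factor
assert np.allclose(Q @ R, A)                # A = QR recovered
```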

Daniel J W, Gragg W B, Kaufman L and Stewart G W (1976) Reorthogonalization and stable algorithms for updating the Gram–Schmidt $QR$ factorization *Math. Comput.* **30** 772–795

Golub G H and Van Loan C F (1996) *Matrix Computations* (3rd Edition) Johns Hopkins University Press, Baltimore

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013