This chapter is concerned with the orthogonalization of vectors in a finite-dimensional space.
2 Background to the Problems
Let $a_1, a_2, \ldots, a_k$ be a set of $k$ linearly independent vectors in $n$-dimensional space; $k \le n$.
We wish to construct a set of $k$ vectors $q_1, q_2, \ldots, q_k$ such that:
–the vectors $q_i$ form an orthonormal set; that is,
$q_i^T q_j = 0$ for $i \ne j$, and $q_i^T q_i = 1$;
–each $q_i$ is linearly dependent on the set $a_1, a_2, \ldots, a_i$.
The classical Gram–Schmidt orthogonalization process is described in many textbooks; see for example Chapter 5 of Golub and Van Loan (1996).
It constructs the orthonormal set progressively. Suppose it has computed $i$ orthonormal vectors $q_1, q_2, \ldots, q_i$ which orthogonalise the first $i$ vectors $a_1, a_2, \ldots, a_i$. It then uses $a_{i+1}$ to compute $q_{i+1}$ as follows:
$$z = a_{i+1} - \sum_{j=1}^{i} (q_j^T a_{i+1}) q_j, \qquad q_{i+1} = \frac{z}{\|z\|_2}.$$
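The step above can be sketched in code. The following is an illustrative implementation (the function name `classical_gram_schmidt` is our own, not from this chapter), assuming the columns of `A` are linearly independent:

```python
import numpy as np

def classical_gram_schmidt(A):
    """Orthonormalise the columns of an n x k matrix A of linearly
    independent vectors, using the classical Gram-Schmidt process."""
    n, k = A.shape
    Q = np.zeros((n, k))
    for i in range(k):
        # Subtract from a_{i+1} its components along q_1, ..., q_i.
        z = A[:, i].copy()
        for j in range(i):
            z -= (Q[:, j] @ A[:, i]) * Q[:, j]
        # Normalise z to obtain the next orthonormal vector q_{i+1}.
        Q[:, i] = z / np.linalg.norm(z)
    return Q
```

For well-conditioned input, `classical_gram_schmidt(A).T @ classical_gram_schmidt(A)` is close to the identity matrix.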
In finite precision computation, this process can result in a set of vectors which are far from being orthogonal. This loss of orthogonality is caused by $\|z\|_2$ being small compared with $\|a_{i+1}\|_2$. If this situation is detected, it can be remedied by reorthogonalising the computed $q_{i+1}$ against $q_1, q_2, \ldots, q_i$, that is, repeating the process with the computed $z$ instead of $a_{i+1}$. See Daniel et al. (1976).
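The re-orthogonalisation remedy can be sketched as follows. This is an illustrative variant, not the chapter's own implementation; the function name and the threshold `alpha` are assumptions for the sketch:

```python
import numpy as np

def gram_schmidt_reorth(A, alpha=0.5):
    """Gram-Schmidt with one re-orthogonalisation pass whenever the
    projected vector z is small relative to the original column,
    which signals cancellation (alpha is an illustrative threshold)."""
    n, k = A.shape
    Q = np.zeros((n, k))
    for i in range(k):
        z = A[:, i] - Q[:, :i] @ (Q[:, :i].T @ A[:, i])
        # If significant cancellation occurred, orthogonalise the
        # computed z once more against q_1, ..., q_i.
        if np.linalg.norm(z) < alpha * np.linalg.norm(A[:, i]):
            z = z - Q[:, :i] @ (Q[:, :i].T @ z)
        Q[:, i] = z / np.linalg.norm(z)
    return Q
```

With nearly dependent columns the extra pass restores orthogonality that the plain process would lose.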
Let $A$ be the $n$ by $k$ matrix whose columns are the $k$ vectors to be orthogonalised. The $QR$ factorization gives
$$A = QR,$$
where $R$ is a $k$ by $k$ upper triangular matrix and $Q$ is an $n$ by $k$ matrix, whose columns are the required orthonormal set.
Moreover, for any $p$ such that $1 \le p \le k$, the first $p$ columns of $Q$ are an orthonormal basis for the space spanned by the first $p$ columns of $A$.
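These properties can be checked with a general-purpose QR routine; the sketch below uses `numpy.linalg.qr` as a stand-in for the routines this chapter discusses (NumPy's QR is Householder-based):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # n = 6, k = 4; columns independent in practice

Q, R = np.linalg.qr(A)            # A = Q R; Q is 6 x 4, R is 4 x 4

assert np.allclose(Q @ R, A)            # the factorization reproduces A
assert np.allclose(Q.T @ Q, np.eye(4))  # columns of Q are orthonormal
assert np.allclose(R, np.triu(R))       # R is upper triangular

# Leading-columns property: because R is upper triangular,
# A[:, :p] = Q[:, :p] @ R[:p, :p], so the first p columns of Q
# span the same space as the first p columns of A.
p = 2
assert np.allclose(A[:, :p], Q[:, :p] @ R[:p, :p])
```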
Householder's method requires twice as much work as the Gram–Schmidt method, provided that no re-orthogonalization is required in the latter. However, it has satisfactory numerical properties and yields vectors which are close to orthogonal even when the original vectors are close to being linearly dependent.
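This contrast can be demonstrated on a classic ill-conditioned example (a Läuchli-style matrix; this experiment is our own illustration, not from the chapter). Classical Gram–Schmidt without re-orthogonalisation loses orthogonality badly, while a Householder-based QR does not:

```python
import numpy as np

def cgs(A):
    # Plain classical Gram-Schmidt, no re-orthogonalisation.
    Q = np.zeros_like(A)
    for i in range(A.shape[1]):
        z = A[:, i] - Q[:, :i] @ (Q[:, :i].T @ A[:, i])
        Q[:, i] = z / np.linalg.norm(z)
    return Q

# Columns that are nearly linearly dependent.
eps = 1e-10
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

Q_gs = cgs(A)
Q_h, _ = np.linalg.qr(A)          # Householder-based QR

err_gs = np.linalg.norm(Q_gs.T @ Q_gs - np.eye(3))
err_h = np.linalg.norm(Q_h.T @ Q_h - np.eye(3))
# err_h is at rounding-error level; err_gs is of order one.
```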
3 Recommendations on Choice and Use of Available Routines
The single routine in this chapter, f05aaf, uses the Gram–Schmidt method, with re-orthogonalization to ensure that the computed vectors are close to being exactly orthogonal. This method is only available for real vectors.
To apply Householder's method, you must use routines in Chapter F08: