NAG Toolbox Chapter Introduction

F04 — Simultaneous Linear Equations

Scope of the Chapter

This chapter is concerned with the solution of the matrix equation AX = B, where B may be a single vector or a matrix of multiple right-hand sides. The matrix A may be real, complex, symmetric, Hermitian, positive definite, positive definite Toeplitz or banded. It may also be rectangular, in which case a least squares solution is obtained.
Much of the functionality of this chapter has been superseded by functions from Chapters F07 and F08 (LAPACK routines) as those chapters have grown to include driver and expert driver functions.
For a general introduction to sparse systems of equations, see the F11 Chapter Introduction, which provides functions for large sparse systems. Some functions for sparse problems are also included in this chapter; they are described in Section [Sparse Matrix Functions].

Background to the Problems

A set of linear equations may be written in the form
Ax = b
where the known matrix A, with real or complex coefficients, is of size m by n (m rows and n columns), the known right-hand vector b has m components (m rows and one column), and the required solution vector x has n components (n rows and one column). There may also be p vectors bᵢ, for i = 1, 2, …, p, on the right-hand side and the equations may then be written as
AX = B,
the required matrix X having as its p columns the solutions of Axᵢ = bᵢ, for i = 1, 2, …, p. Some functions deal with the latter case, but for clarity only the case p = 1 is discussed here.
The most common problem, the determination of the unique solution of Ax = b, occurs when m = n and A is not singular, that is rank(A) = n. This is discussed in Section [Unique Solution of Ax = b] below. The next most common problem, discussed in Section [The Least Squares Solution of Ax ≃ b, m > n, rank(A) = n] below, is the determination of the least squares solution of Ax ≃ b required when m > n and rank(A) = n, i.e., the determination of the vector x which minimizes the norm of the residual vector r = b − Ax. All other cases are rank deficient, and they are treated in Section [Rank-deficient Cases].

Unique Solution of Ax = b

Most functions in this chapter solve this particular problem. The computation starts with the triangular decomposition A = PLU, where L and U are respectively lower and upper triangular matrices and P is a permutation matrix, chosen so as to ensure that the decomposition is numerically stable. The solution is then obtained by solving in succession the simpler equations
Ly = Pᵀb
Ux = y
the first by forward-substitution and the second by back-substitution.
If A is real symmetric and positive definite, U = Lᵀ, while if A is complex Hermitian and positive definite, U = Lᴴ; in both these cases P is the identity matrix (i.e., no permutations are necessary). In all other cases either U or L has unit diagonal elements.
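The following sketch shows the same factorize-then-substitute pattern in NumPy/SciPy terms; it illustrates the mathematics above, not the NAG interfaces themselves, and the random test data are assumed for the example:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))      # a general real square matrix
    b = rng.standard_normal(5)

    lu, piv = lu_factor(A)               # A = PLU, with pivoting for stability
    x = lu_solve((lu, piv), b)           # Ly = P^T b (forward), then Ux = y (back)

    print(np.allclose(A @ x, b))         # True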
Due to rounding errors the computed ‘solution’ x₀, say, is only an approximation to the true solution x. This approximation will sometimes be satisfactory, agreeing with x to several figures, but if the problem is ill-conditioned then x and x₀ may have few or even no figures in common, and at this stage there is no means of estimating the ‘accuracy’ of x₀.
There are three possible approaches to estimating the accuracy of a computed solution.
One way to do so, and to ‘correct’ x₀ when this is meaningful (see next paragraph), involves computing the residual vector r = b − Ax₀ in extended precision arithmetic, and obtaining a correction vector d by solving PLUd = r. The new approximate solution x₀ + d is usually more accurate and the correcting process is repeated until (a) further corrections are negligible or (b) they show no further decrease.
It must be emphasized that the ‘true’ solution x may not be meaningful, that is correct to all figures quoted, if the elements of A and b are known with certainty only to say p figures, where p is less than full precision. The first correction vector d will then give some useful information about the number of figures in the ‘solution’ which probably remain unchanged with respect to maximum possible uncertainties in the coefficients.
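A minimal sketch of this correction loop, assuming NumPy's longdouble type as a stand-in for the extended precision arithmetic mentioned above:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def refined_solve(A, b, max_iter=5):
        lu_piv = lu_factor(A)            # factorize once; reuse PLU for every correction
        x = lu_solve(lu_piv, b)
        for _ in range(max_iter):
            # r = b - A x0, accumulated in higher precision
            r = np.asarray(b, np.longdouble) - np.asarray(A, np.longdouble) @ x
            d = lu_solve(lu_piv, r.astype(np.float64))   # solve PLU d = r
            x = x + d
            if np.linalg.norm(d) <= np.finfo(float).eps * np.linalg.norm(x):
                break                    # (a) further corrections are negligible
        return x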
An alternative approach to assessing the accuracy of the solution is to compute or estimate the condition number of A, defined as
κ(A) = ‖A‖ · ‖A⁻¹‖.
Roughly speaking, errors or uncertainties in A or b may be amplified in the solution by a factor κ(A). Thus, for example, if the data in A and b are only accurate to 5 digits and κ(A) ≈ 10³, then the solution cannot be guaranteed to have more than 2 correct digits. If κ(A) ≈ 10⁵, the solution may have no meaningful digits.
To be more precise, suppose that
Ax = b  and  (A + δA)(x + δx) = b + δb.
Here δA and δb represent perturbations to the matrices A and b which cause a perturbation δx in the solution. We can define measures of the relative sizes of the perturbations in A, b and x as
ρA = ‖δA‖/‖A‖,  ρb = ‖δb‖/‖b‖  and  ρx = ‖δx‖/‖x‖  respectively.
Then
ρx ≤ (κ(A)/(1 − κ(A)ρA)) (ρA + ρb)
provided that κ(A)ρA < 1. Often κ(A)ρA ≪ 1 and then the bound effectively simplifies to
ρx ≤ κ(A)(ρA + ρb).
Hence, if we know κ(A), ρA and ρb, we can compute a bound on the relative errors in the solution. Note that ρA, ρb and ρx are defined in terms of the norms of A, b and x. If A, b or x contains elements of widely differing magnitude, then ρA, ρb and ρx will be dominated by the errors in the larger elements, and ρx will give no information about the relative accuracy of smaller elements of x.
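For example, a small sketch of this bound in NumPy, with assumed uncertainty levels ρA = ρb = 10⁻⁵ and an assumed test matrix:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    kappa = np.linalg.cond(A)            # ||A|| ||A^-1|| in the 2-norm
    rho_A = rho_b = 1e-5                 # assumed relative data uncertainties
    if kappa * rho_A < 1:
        rho_x = kappa / (1 - kappa * rho_A) * (rho_A + rho_b)
        print(f"relative error in x is at most about {rho_x:.1e}")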
A third way to obtain useful information about the accuracy of a solution is to solve two sets of equations, one with the given coefficients, which are assumed to be known with certainty to p figures, and one with the coefficients rounded to (p − 1) figures, and to count the number of figures to which the two solutions agree. In ill-conditioned problems this can be surprisingly small and even zero.

The Least Squares Solution of Ax ≃ b, m > n, rank(A) = n

The least squares solution is the vector x̂ which minimizes the sum of the squares of the residuals,
S = (b − Ax̂)ᵀ(b − Ax̂) = ‖b − Ax̂‖₂².
The solution is obtained in two steps.
(a) Householder transformations are used to reduce A to ‘simpler form’ via the equation QA = R, where R has the appearance
    ( R̂ )
    ( 0 )
with R̂ a nonsingular upper triangular n by n matrix and 0 a zero matrix of size (m − n) by n. Similar operations convert b to Qb = c, where
    c = ( c₁ )
        ( c₂ )
with c₁ having n rows and c₂ having (m − n) rows.
(b) The required least squares solution is obtained by back-substitution in the equation
R̂x̂ = c₁.
Again due to rounding errors the computed x̂₀ is only an approximation to the required x̂, but as in Section [Unique Solution of Ax = b], this can be improved by ‘iterative refinement’. The first correction d is the solution of the least squares problem
Ad = b − Ax̂₀ = r
and since the matrix A is unchanged, this computation takes less time than that of the original x̂₀. The process can be repeated until further corrections are (a) negligible or (b) show no further decrease.
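In NumPy/SciPy terms, steps (a) and (b) look as follows; this is an illustrative sketch with assumed test data (note that NumPy's convention A = QR corresponds to the text's QA = R with Q replaced by its transpose):

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(1)
    m, n = 8, 3
    A = rng.standard_normal((m, n))      # full column rank, m > n
    b = rng.standard_normal(m)

    Q, R = qr(A, mode='economic')        # R is the n-by-n triangle R^
    c1 = Q.T @ b                         # the first n rows of Qb = c
    x = solve_triangular(R, c1)          # back-substitution in R^ x^ = c1

    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True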

Rank-deficient Cases

If, in the least squares problem just discussed, rank(A) < n, then a least squares solution exists but it is not unique. In this situation it is usual to ask for the least squares solution ‘of minimal length’, i.e., the vector x which minimizes ‖x‖₂, among all those x for which ‖b − Ax‖₂ is a minimum.
This can be computed from the Singular Value Decomposition (SVD) of A, in which A is factorized as
A = QDPᵀ
where Q is an m by n matrix with orthonormal columns, P is an n by n orthogonal matrix and D is an n by n diagonal matrix. The diagonal elements of D are called the ‘singular values’ of A; they are non-negative and can be arranged in decreasing order of magnitude:
d₁ ≥ d₂ ≥ ⋯ ≥ dₙ ≥ 0.
The columns of Q and P are called respectively the left and right singular vectors of A. If the singular values dᵣ₊₁, …, dₙ are zero or negligible, but dᵣ is not negligible, then the rank of A is taken to be r (see also Section [The Rank of a Matrix]) and the minimal length least squares solution of Ax ≃ b is given by
x̂ = PD†Qᵀb
where D† is the diagonal matrix with diagonal elements d₁⁻¹, d₂⁻¹, …, dᵣ⁻¹, 0, …, 0.
The SVD may also be used to find solutions to the homogeneous system of equations Ax = 0, where A is m by n. Such solutions exist if and only if rank(A) < n, and are given by
x = ∑ᵢ₌ᵣ₊₁ⁿ αᵢpᵢ
where the αᵢ are arbitrary numbers and the pᵢ are the columns of P which correspond to negligible elements of D.
The general solution to the rank-deficient least squares problem is given by x̂ + x, where x̂ is the minimal length least squares solution and x is any solution of the homogeneous system of equations Ax = 0.
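A sketch of the minimal length solution and the homogeneous solutions via NumPy's SVD; the negligibility threshold tol is an assumed parameter (see the next section on rank):

    import numpy as np

    def min_length_lstsq(A, b, tol=1e-10):
        Q, d, Pt = np.linalg.svd(A, full_matrices=False)   # A = Q diag(d) P^T
        keep = d > tol * d[0]            # singular values regarded as nonzero
        d_dag = np.zeros_like(d)
        d_dag[keep] = 1.0 / d[keep]
        x_hat = Pt.T @ (d_dag * (Q.T @ b))                 # x^ = P D-dagger Q^T b
        null_basis = Pt.T[:, ~keep]      # columns p_i spanning the homogeneous solutions
        return x_hat, null_basis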

The Rank of a Matrix

In theory the rank is r if n − r elements of the diagonal matrix D of the singular value decomposition are exactly zero. In practice, due to rounding and/or experimental errors, some of these elements have very small values which usually can and should be treated as zero.
For example, the following 5 by 8 matrix has rank 3 in exact arithmetic:

    22  14  -1  -3   9   9   2   4
    10   7  13  -2   8   1  -6   5
     2  10  -1  13   1  -7   6   0
     3   0 -11  -2  -2   5   5  -2
     7   8   3   4   4  -1   1   2
On a computer with 7 decimal digits of precision the computed singular values were
3.5 × 10¹,   2.0 × 10¹,   2.0 × 10¹,   1.3 × 10⁻⁶,   5.5 × 10⁻⁷
and the rank would be correctly taken to be 3.
It is not, however, always certain that small computed singular values are really zero. With the 7 by 7 Hilbert matrix, for example, where aᵢⱼ = 1/(i + j − 1), the singular values are
1.7,  2.7 × 10⁻¹,  2.1 × 10⁻²,  1.0 × 10⁻³,  2.9 × 10⁻⁵,  4.9 × 10⁻⁷,  3.5 × 10⁻⁹.
Here there is no clear cut-off between small (i.e., negligible) singular values and larger ones. In fact, in exact arithmetic, the matrix is known to have full rank and none of its singular values is zero. On a computer with 7 decimal digits of precision, the matrix is effectively singular, but should its rank be taken to be 6, or 5, or 4?
It is therefore impossible to give an infallible rule, but generally the rank can be taken to be the number of singular values which are neither zero nor very small compared with other singular values. For example, if there is a sharp decrease in singular values from numbers of order unity to numbers of order 10⁻⁷, then the latter will almost certainly be zero on a machine in which 7 significant decimal figures is the maximum accuracy. Similarly, for a least squares problem in which the data are known to about four significant figures and the largest singular value is of order unity, a singular value of order 10⁻⁴ or less should almost certainly be regarded as zero.
It should be emphasized that rank determination and least squares solutions can be sensitive to the scaling of the matrix. If at all possible the units of measurement should be chosen so that the elements of the matrix have data errors of approximately equal magnitude.
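As a sketch of such a rule applied to the Hilbert example above, with a tolerance tied to an assumed 7-figure data accuracy:

    import numpy as np
    from scipy.linalg import hilbert

    d = np.linalg.svd(hilbert(7), compute_uv=False)   # 1.7 down to about 3.5e-9
    tol = 1e-7 * d[0]                # assumed: data good to about 7 figures
    rank = int(np.sum(d > tol))      # 6 with this choice of tolerance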

Generalized Linear Least Squares Problems

The simple type of linear least squares problem described in Section [The Least Squares Solution of Ax ≃ b, m > n, rank(A) = n] can be generalized in various ways.
  1. Linear least squares problems with equality constraints:
     find x to minimize S = ‖c − Ax‖₂²  subject to  Bx = d,
     where A is m by n and B is p by n, with p ≤ n ≤ m + p. The equations Bx = d may be regarded as a set of equality constraints on the problem of minimizing S. Alternatively the problem may be regarded as solving an overdetermined system of equations
         ( A ) x = ( c )
         ( B )     ( d )
     where some of the equations (those involving B) are to be solved exactly, and the others (those involving A) are to be solved in a least squares sense. The problem has a unique solution on the assumptions that B has full row rank p and the (m + p) by n matrix obtained by stacking A above B has full column rank n; a sketch of one solution method appears after this list. (For linear least squares problems with inequality constraints, refer to Chapter E04.)
  2. General Gauss–Markov linear model problems:
     minimize ‖y‖₂  subject to  d = Ax + By,
     where A is m by n and B is m by p, with n ≤ m ≤ n + p. When B = I, the problem reduces to an ordinary linear least squares problem. When B is square and nonsingular, it is equivalent to a weighted linear least squares problem:
     find x to minimize ‖B⁻¹(d − Ax)‖₂.
     The problem has a unique solution on the assumptions that A has full column rank n, and the matrix (A, B) has full row rank m.
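As an illustration of problem 1 (not the algorithm used by nag_lapack_dgglse (f08za)), the null-space method reduces the constrained problem to a triangular solve plus an unconstrained least squares problem; the helper below is hypothetical:

    import numpy as np

    def lse_nullspace(A, c, B, d):
        # Minimize ||c - Ax||_2 subject to Bx = d, assuming B (p-by-n) has full
        # row rank p and the matrix (A stacked above B) has full column rank n.
        p, n = B.shape
        Q, R = np.linalg.qr(B.T, mode='complete')   # B^T = Q [R1; 0]
        z1 = np.linalg.solve(R[:p, :p].T, d)        # constraint: R1^T z1 = d
        AQ = A @ Q
        # Unconstrained part: minimize ||(c - A Q1 z1) - A Q2 z2||_2 over z2
        z2 = np.linalg.lstsq(AQ[:, p:], c - AQ[:, :p] @ z1, rcond=None)[0]
        return Q[:, :p] @ z1 + Q[:, p:] @ z2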

Calculating the Inverse of a Matrix

The functions in this chapter can also be used to calculate the inverse of a square matrix A by solving the equation
AX = I
where I is the identity matrix. However, solving the equations AX = B by calculation of the inverse of the coefficient matrix A, i.e., by X = A⁻¹B, is definitely not recommended.
Similar remarks apply to the calculation of the pseudo-inverse of a singular or rectangular matrix.
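A short sketch of why: solving directly reuses one factorization and avoids the extra cost and rounding error of forming A⁻¹ (the test matrices here are assumed for illustration):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((100, 100))
    B = rng.standard_normal((100, 5))

    X_solve = np.linalg.solve(A, B)      # factorize once, then substitute: preferred
    X_inv = np.linalg.inv(A) @ B         # forms A^-1 explicitly: not recommended

    # The residual from the direct solve is typically the smaller of the two.
    print(np.linalg.norm(A @ X_solve - B), np.linalg.norm(A @ X_inv - B))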

Estimating the 1-norm of a Matrix

The 1-norm of a matrix A is defined to be
‖A‖₁ = max ∑ᵢ₌₁ᵐ |aᵢⱼ|,   1 ≤ j ≤ n,   (1)
i.e., the maximum absolute column sum.
Typically it is useful to calculate the condition number of a matrix with respect to the solution of linear equations, or inversion. The higher the condition number, the less accuracy might be expected from a numerical computation. A condition number for the solution of linear equations is ‖A‖ · ‖A⁻¹‖. Since this might be a relatively expensive computation, it often suffices to estimate the norm of each matrix.
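The exact 1-norm of a small matrix is cheap to form directly; it is ‖A⁻¹‖₁ that functions such as f04yd estimate without forming the inverse. A NumPy sketch of definition (1), with an assumed test matrix:

    import numpy as np

    A = np.array([[1.0, -2.0],
                  [3.0,  4.0]])
    norm1 = np.abs(A).sum(axis=0).max()   # ||A||_1 = 6.0, the maximum column sum
    assert norm1 == np.linalg.norm(A, 1)

    kappa1 = np.linalg.cond(A, 1)         # ||A||_1 ||A^-1||_1, here via an explicit inverse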

Recommendations on Choice and Use of Available Functions

See also Section [Recommendations on Choice and Use of Available Functions] in the F07 Chapter Introduction for recommendations on the choice of available functions from that chapter.

Black Box and General Purpose Functions

Most of the functions in this chapter are categorized either as Black Box functions or general purpose functions.
Black Box functions solve the equations Axᵢ = bᵢ, for i = 1, 2, …, p, in a single call with the matrix A and the right-hand sides, bᵢ, being supplied as data. These are the simplest functions to use and are suitable when all the right-hand sides are known in advance and do not occupy too much storage.
General purpose functions, in general, require a previous call to a function in Chapters F01, F03 or F07 to factorize the matrix AA. This factorization can then be used repeatedly to solve the equations for one or more right-hand sides which may be generated in the course of the computation. The Black Box functions simply call a factorization function and then a general purpose function to solve the equations.
The function nag_linsys_real_gen_sparse_lsqsol (f04qa), which uses an iterative method for sparse systems of equations, does not fit easily into this categorization, but is classified as a general purpose function in the decision trees and indexes.

Systems of Linear Equations

Most of the functions in this chapter solve linear equations Ax = b when A is n by n and a unique solution is expected (see Section [Unique Solution of Ax = b]). The matrix A may be ‘general’ real or complex, or may have special structure or properties, e.g., it may be banded, tridiagonal, almost block-diagonal, sparse, symmetric, Hermitian, positive definite (or various combinations of these).
It must be emphasized that it is a waste of computer time and space to use an inappropriate function, for example one for the complex case when the equations are real. It is also unsatisfactory to use the special functions for a positive definite matrix if this property is not known in advance.
Functions are given for calculating the approximate solution, that is solving the linear equations just once, and also for obtaining the accurate solution by successive iterative corrections of this first approximation using additional precision arithmetic, as described in Section [Unique Solution of Ax = b]. The latter, of course, are more costly in terms of time and storage, since each correction involves the solution of n sets of linear equations and since the original A and its LU decomposition must be stored together with the first and successively corrected approximations to the solution. In practice the storage requirements for the ‘corrected’ functions are about double those of the ‘approximate’ functions, though the extra computer time may not be prohibitive since the same matrix and the same LU decomposition is used in every linear equation solution.
A number of the Black Box functions in this chapter return estimates of the condition number and the forward error, along with the solution of the equations. But for those functions that do not return a condition estimate two functions are provided – nag_linsys_real_gen_norm_rcomm (f04yd) for real matrices, nag_linsys_complex_gen_norm_rcomm (f04zd) for complex matrices – which can return a cheap but reliable estimate of ‖A⁻¹‖₁, and hence an estimate of the condition number κ(A) (see Section [Unique Solution of Ax = b]). These functions can also be used in conjunction with most of the linear equation solving functions in Chapter F11: further advice is given in the function documents.
Other functions for solving linear equation systems, computing inverse matrices, and estimating condition numbers can be found in Chapter F07, which contains LAPACK software.

Linear Least Squares Problems

The majority of the functions for solving linear least squares problems are to be found in Chapter F08.
For the case described in Section [The Least Squares Solution of Ax ≃ b, m > n, rank(A) = n], when m ≥ n and a unique least squares solution is expected, there are two functions for a general real A, one of which (nag_linsys_real_gen_solve (f04jg)) computes a first approximation and the other (nag_linsys_real_gen_lsqsol (f04am)) computes iterative corrections. If it transpires that rank(A) < n, so that the least squares solution is not unique, then nag_linsys_real_gen_lsqsol (f04am) takes a failure exit, but nag_linsys_real_gen_solve (f04jg) proceeds to compute the minimal length solution by using the SVD (see below).
If AA is expected to be of less than full rank then one of the functions for calculating the minimal length solution may be used.
For m ≫ n the use of the SVD is not significantly more expensive than the use of functions based upon the QR factorization.
Problems with linear equality constraints can be solved by nag_lapack_dgglse (f08za) (for real data) or by nag_lapack_zgglse (f08zn) (for complex data), provided that the problems are of full rank. Problems with linear inequality constraints can be solved by nag_opt_lsq_lincon_solve (e04nc) in Chapter E04.
General Gauss–Markov linear model problems, as formulated in Section [Generalized Linear Least Squares Problems], can be solved by nag_lapack_dggglm (f08zb) (for real data) or by nag_lapack_zggglm (f08zp) (for complex data).

Sparse Matrix Functions

Functions specifically for sparse matrices are appropriate only when the number of nonzero elements is very small, less than, say, 10% of the n² elements of A, and the matrix does not have a relatively small bandwidth.
Chapter F11 contains functions for both the direct and iterative solution of real sparse linear systems. There are two functions in Chapter F04 for solving sparse linear equations (nag_linsys_real_sparse_fac_solve (f04ax) and nag_linsys_real_gen_sparse_lsqsol (f04qa)). nag_linsys_real_sparse_fac_solve (f04ax) utilizes a factorization of the matrix A obtained from nag_matop_real_gen_sparse_lu (f01br) or nag_matop_real_gen_sparse_lu_reuse (f01bs), while nag_linsys_real_gen_sparse_lsqsol (f04qa) uses an iterative technique and requires a user-supplied function to compute the matrix-vector products Ac and Aᵀc for any given vector c.
nag_linsys_real_gen_sparse_lsqsol (f04qa) solves sparse least squares problems by an iterative technique, and also allows the solution of damped (regularized) least squares problems (see the function document for details).
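For illustration, SciPy's lsqr solves the same kind of damped sparse least squares problem iteratively, requiring only the products Ac and Aᵀc; this is a sketch with assumed test data, not the NAG implementation:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(3)
    A = sparse_random(1000, 200, density=0.01, random_state=rng, format='csr')
    b = rng.standard_normal(1000)

    # Minimizes ||b - Ax||_2^2 + damp^2 ||x||_2^2 (damped/regularized least squares)
    x = lsqr(A, b, damp=1e-3)[0]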

Decision Trees

The name of the function (if any) that should be used to factorize the matrix A is given in brackets after the name of the function for solving the equations.

Tree 1: Black Box functions for unique solution of Ax = b (Real matrix)

Is A symmetric?
  yes: Is A positive definite?
    yes: Is A a band matrix?
      yes: Is A tridiagonal?
        yes: nag_linsys_real_posdef_tridiag_solve (f04bg) (see Note 1) or nag_lapack_dptsv (f07ja) or nag_lapack_dptsvx (f07jb) (see Note 2)
        no: nag_linsys_real_posdef_band_solve (f04bf) (see Note 1) or nag_lapack_dpbsv (f07ha) or nag_lapack_dpbsvx (f07hb) (see Note 2)
      no: Is A a Toeplitz matrix?
        yes: Are the equations the Yule–Walker equations?
          yes: nag_linsys_real_toeplitz_yule (f04fe)
          no: nag_linsys_real_toeplitz_solve (f04ff)
        no: Do you require an accurate solution using iterative refinement?
          yes: nag_linsys_real_posdef_solve_ref (f04ab) or nag_linsys_real_posdef_solve_1rhs (f04as) (see Note 3)
          no: Is one triangle of A stored as a linear array?
            yes: nag_linsys_real_posdef_packed_solve (f04be) (see Note 1) or nag_lapack_dppsv (f07ga) or nag_lapack_dppsvx (f07gb) (see Note 2)
            no: nag_linsys_real_posdef_solve (f04bd) (see Note 1) or nag_lapack_dposv (f07fa) or nag_lapack_dposvx (f07fb) (see Note 2)
    no: Is one triangle of A stored as a linear array?
      yes: nag_linsys_real_symm_packed_solve (f04bj) (see Note 1) or nag_lapack_dspsv (f07pa) or nag_lapack_dspsvx (f07pb) (see Note 2)
      no: nag_linsys_real_symm_solve (f04bh) (see Note 1) or nag_lapack_dsysv (f07ma) or nag_lapack_dsysvx (f07mb) (see Note 2)
  no: Is A a band matrix?
    yes: Is A tridiagonal?
      yes: nag_linsys_real_tridiag_solve (f04bc) (see Note 1) or nag_lapack_dgtsv (f07ca) or nag_lapack_dgtsvx (f07cb) (see Note 2)
      no: nag_linsys_real_band_solve (f04bb) (see Note 1) or nag_lapack_dgbsv (f07ba) or nag_lapack_dgbsvx (f07bb) (see Note 2)
    no: Do you require an accurate solution using iterative refinement?
      yes: nag_linsys_real_square_solve_ref (f04ae) or nag_linsys_real_square_solve_1rhs (f04at) (see Note 3)
      no: nag_linsys_real_square_solve (f04ba) (see Note 1) or nag_lapack_dgesv (f07aa) or nag_lapack_dgesvx (f07ab) (see Note 2)

Tree 2: Black Box functions for unique solution of Ax = b (Complex matrix)

Is A Hermitian?
  yes: Is A positive definite?
    yes: Is A a band matrix?
      yes: Is A tridiagonal?
        yes: nag_linsys_complex_posdef_tridiag_solve (f04cg) (see Note 1) or nag_lapack_zptsv (f07jn) or nag_lapack_zptsvx (f07jp) (see Note 2)
        no: nag_linsys_complex_posdef_band_solve (f04cf) (see Note 1) or nag_lapack_zpbsv (f07hn) or nag_lapack_zpbsvx (f07hp) (see Note 2)
      no: Is one triangle of A stored as a linear array?
        yes: nag_linsys_complex_posdef_packed_solve (f04ce) (see Note 1) or nag_lapack_zppsv (f07gn) or nag_lapack_zppsvx (f07gp) (see Note 2)
        no: nag_linsys_complex_posdef_solve (f04cd) (see Note 1) or nag_lapack_zposv (f07fn) or nag_lapack_zposvx (f07fp) (see Note 2)
    no: Is one triangle of A stored as a linear array?
      yes: nag_linsys_complex_herm_packed_solve (f04cj) (see Note 1) or nag_lapack_zhpsv (f07pn) or nag_lapack_zhpsvx (f07pp) (see Note 2)
      no: nag_linsys_complex_herm_solve (f04ch) (see Note 1) or nag_lapack_zhesv (f07mn) or nag_lapack_zhesvx (f07mp) (see Note 2)
  no: Is A symmetric?
    yes: Is one triangle of A stored as a linear array?
      yes: nag_linsys_complex_symm_packed_solve (f04dj) (see Note 1) or nag_lapack_zspsv (f07qn) or nag_lapack_zspsvx (f07qp) (see Note 2)
      no: nag_linsys_complex_symm_solve (f04dh) (see Note 1) or nag_lapack_zsysv (f07nn) or nag_lapack_zsysvx (f07np) (see Note 2)
    no: Is A a band matrix?
      yes: Is A tridiagonal?
        yes: nag_linsys_complex_tridiag_solve (f04cc) (see Note 1) or nag_lapack_zgtsv (f07cn) or nag_lapack_zgtsvx (f07cp) (see Note 2)
        no: nag_linsys_complex_band_solve (f04cb) (see Note 1) or nag_lapack_zgbsv (f07bn) or nag_lapack_zgbsvx (f07bp) (see Note 2)
      no: nag_linsys_complex_square_solve (f04ca) (see Note 1) or nag_lapack_zgesv (f07an) or nag_lapack_zgesvx (f07ap) (see Note 2)

Tree 3: General purpose functions for unique solution of Ax = b (Real matrix)

Is A a sparse matrix and not banded?
  yes: Chapter F11 or nag_linsys_real_sparse_fac_solve (f04ax) (nag_matop_real_gen_sparse_lu (f01br) or nag_matop_real_gen_sparse_lu_reuse (f01bs)) or nag_linsys_real_gen_sparse_lsqsol (f04qa)
  no: Is A symmetric?
    yes: Is A positive definite?
      yes: Is A a band matrix?
        yes: Is A tridiagonal?
          yes: nag_lapack_dpttrs (f07je) (nag_lapack_dpttrf (f07jd))
          no: Is A of variable band width?
            yes: nag_linsys_real_posdef_vband_solve (f04mc) (nag_matop_real_vband_posdef_fac (f01mc))
            no: nag_lapack_dpbtrs (f07he) (nag_lapack_dpbtrf (f07hd))
        no: Is A a Toeplitz matrix?
          yes: Are the equations the Yule–Walker equations?
            yes: nag_linsys_real_toeplitz_yule_update (f04me)
            no: nag_linsys_real_toeplitz_update (f04mf)
          no: Is one triangle of A stored as a linear array?
            yes: nag_lapack_dpptrs (f07ge) (nag_lapack_dpptrf (f07gd))
            no: nag_lapack_dpotrs (f07fe) (nag_lapack_dpotrf (f07fd))
      no: Is one triangle of A stored as a linear array?
        yes: nag_lapack_dsptrs (f07pe) (nag_lapack_dsptrf (f07pd))
        no: nag_lapack_dsytrs (f07me) (nag_lapack_dsytrf (f07md))
    no: Is A triangular?
      yes: Is A a band matrix?
        yes: nag_lapack_dtbtrs (f07ve)
        no: Is A stored as a linear array?
          yes: nag_lapack_dtptrs (f07ue)
          no: nag_lapack_dtrtrs (f07te)
      no: Is A a band matrix?
        yes: Is A tridiagonal?
          yes: nag_linsys_real_tridiag_fac_solve (f04le) (nag_matop_real_gen_tridiag_lu (f01le)) or nag_lapack_dgttrs (f07ce) (nag_lapack_dgttrf (f07cd))
          no: Is A almost block diagonal?
            yes: nag_linsys_real_blkdiag_fac_solve (f04lh) (nag_matop_real_gen_blkdiag_lu (f01lh))
            no: nag_lapack_dgbtrs (f07be) (nag_lapack_dgbtrf (f07bd))
        no: nag_lapack_dgetrs (f07ae) (nag_lapack_dgetrf (f07ad))

Tree 4: General purpose functions for unique solution of Ax = b (Complex matrix)

Is A a sparse matrix and not banded?
  yes: Chapter F11
  no: Is A Hermitian?
    yes: Is A positive definite?
      yes: Is A a band matrix?
        yes: Is A tridiagonal?
          yes: nag_lapack_zpttrs (f07js) (nag_lapack_zpttrf (f07jr))
          no: nag_lapack_zpbtrs (f07hs) (nag_lapack_zpbtrf (f07hr))
        no: Is one triangle of A stored as a linear array?
          yes: nag_lapack_zpptrs (f07gs) (nag_lapack_zpptrf (f07gr))
          no: nag_lapack_zpotrs (f07fs) (nag_lapack_zpotrf (f07fr))
      no: Is one triangle of A stored as a linear array?
        yes: nag_lapack_zhptrs (f07ps) (nag_lapack_zhptrf (f07pr))
        no: nag_lapack_zhetrs (f07ms) (nag_lapack_zhetrf (f07mr))
    no: Is A symmetric?
      yes: Is one triangle of A stored as a linear array?
        yes: nag_lapack_zsptrs (f07qs) (nag_lapack_zsptrf (f07qr))
        no: nag_lapack_zsytrs (f07ns) (nag_lapack_zsytrf (f07nr))
      no: Is A triangular?
        yes: Is A a band matrix?
          yes: nag_lapack_ztbtrs (f07vs)
          no: Is A stored as a linear array?
            yes: nag_lapack_ztptrs (f07us)
            no: nag_lapack_ztrtrs (f07ts)
        no: Is A a band matrix?
          yes: Is A tridiagonal?
            yes: nag_lapack_zgttrs (f07cs) (nag_lapack_zgttrf (f07cr))
            no: nag_lapack_zgbtrs (f07bs) (nag_lapack_zgbtrf (f07br))
          no: nag_lapack_zgetrs (f07as) (nag_lapack_zgetrf (f07ar))

Tree 5: General purpose functions for least squares and homogeneous equations (without constraints)

Is the problem Ax = 0?
  yes: nag_lapack_dgelss (f08ka)
  no: Is A sparse?
    yes: nag_linsys_real_gen_sparse_lsqsol (f04qa)
    no: Is rank(A) = n?
      yes: Are storage and time more important than accuracy?
        yes: nag_linsys_real_gen_solve (f04jg)
        no: nag_linsys_real_gen_lsqsol (f04am)
      no: Is m > n?
        yes: nag_linsys_real_gen_solve (f04jg) or nag_lapack_dgelss (f08ka)
        no: nag_lapack_dgelss (f08ka)
Note: there are also functions in Chapter F08 for solving least squares problems.
Note 1: also returns an estimate of the condition number and the forward error.
Note 2: also returns an estimate of the condition number, the forward error and the backward error. Requires additional workspace.
Note 3: for a single right-hand side only.

Functionality Index

Black Box functions, Ax = b, 
    complex general band matrix nag_linsys_complex_band_solve (f04cb)
    complex general matrix nag_linsys_complex_square_solve (f04ca)
    complex general tridiagonal matrix nag_linsys_complex_tridiag_solve (f04cc)
    complex Hermitian matrix, 
        packed matrix format nag_linsys_complex_herm_packed_solve (f04cj)
        standard matrix format nag_linsys_complex_herm_solve (f04ch)
    complex Hermitian positive definite band matrix nag_linsys_complex_posdef_band_solve (f04cf)
    complex Hermitian positive definite matrix, 
        packed matrix format nag_linsys_complex_posdef_packed_solve (f04ce)
        standard matrix format nag_linsys_complex_posdef_solve (f04cd)
    complex Hermitian positive definite tridiagonal matrix nag_linsys_complex_posdef_tridiag_solve (f04cg)
    complex symmetric matrix, 
        packed matrix format nag_linsys_complex_symm_packed_solve (f04dj)
        standard matrix format nag_linsys_complex_symm_solve (f04dh)
    real general band matrix nag_linsys_real_band_solve (f04bb)
    real general matrix, 
        multiple right-hand sides, 
            iterative refinement using additional precision nag_linsys_real_square_solve_ref (f04ae)
        multiple right-hand sides, standard precision nag_linsys_real_square_solve (f04ba)
        single right-hand side, 
            iterative refinement using additional precision nag_linsys_real_square_solve_1rhs (f04at)
    real general tridiagonal matrix nag_linsys_real_tridiag_solve (f04bc)
    real symmetric matrix, 
        packed matrix format nag_linsys_real_symm_packed_solve (f04bj)
        standard matrix format nag_linsys_real_symm_solve (f04bh)
    real symmetric positive definite band matrix nag_linsys_real_posdef_band_solve (f04bf)
    real symmetric positive definite matrix, 
        multiple right-hand sides, 
            iterative refinement using additional precision nag_linsys_real_posdef_solve_ref (f04ab)
        multiple right-hand sides, standard precision nag_linsys_real_posdef_solve (f04bd)
        packed matrix format nag_linsys_real_posdef_packed_solve (f04be)
        single right-hand side, 
            iterative refinement using additional precision nag_linsys_real_posdef_solve_1rhs (f04as)
    real symmetric positive definite Toeplitz matrix, 
        general right-hand side nag_linsys_real_toeplitz_solve (f04ff)
        Yule–Walker equations nag_linsys_real_toeplitz_yule (f04fe)
    real symmetric positive definite tridiagonal matrix nag_linsys_real_posdef_tridiag_solve (f04bg)
General Purpose functions, Ax = b, 
    real almost block-diagonal matrix nag_linsys_real_blkdiag_fac_solve (f04lh)
    real band symmetric positive definite matrix, variable bandwidth nag_linsys_real_posdef_vband_solve (f04mc)
    real sparse matrix, 
        direct method nag_linsys_real_sparse_fac_solve (f04ax)
        iterative method nag_linsys_real_gen_sparse_lsqsol (f04qa)
    real symmetric positive definite Toeplitz matrix, 
        general right-hand side, update solution nag_linsys_real_toeplitz_update (f04mf)
        Yule–Walker equations, update solution nag_linsys_real_toeplitz_yule_update (f04me)
    real tridiagonal matrix nag_linsys_real_tridiag_fac_solve (f04le)
Least squares and Homogeneous Equations, 
    real m by n matrix, 
        m ≥ n, rank = n or minimal solution nag_linsys_real_gen_solve (f04jg)
        rank = n, iterative refinement nag_linsys_real_gen_lsqsol (f04am)
    real sparse matrix nag_linsys_real_gen_sparse_lsqsol (f04qa)
Service Functions, 
    complex rectangular matrix, 
        norm and condition number estimation nag_linsys_complex_gen_norm_rcomm (f04zd)
    real matrix, 
        covariance matrix for linear least squares problems nag_linsys_real_gen_lsq_covmat (f04ya)
    real rectangular matrix, 
        norm and condition number estimation nag_linsys_real_gen_norm_rcomm (f04yd)



© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013