NAG Toolbox Chapter Introduction

F01 — Matrix Operations, Including Inversion

Scope of the Chapter

This chapter provides facilities for four types of problem:
(i) Matrix Inversion
(ii) Matrix Factorizations
(iii) Matrix Arithmetic and Manipulation
(iv) Matrix Functions
These problems are discussed separately in Section [Matrix Inversion], Section [Matrix Factorizations], Section [Matrix Arithmetic and Manipulation] and Section [Matrix Functions].

Background to the Problems

Matrix Inversion

(i) Nonsingular square matrices of order n.
If A, a square matrix of order n, is nonsingular (has rank n), then its inverse X exists and satisfies the equations AX = XA = I (the identity or unit matrix).
It is worth noting that if AX − I = R, so that R is the ‘residual’ matrix, then a bound on the relative error is given by ‖R‖, i.e.,
‖X − A^(-1)‖ / ‖A^(-1)‖ ≤ ‖R‖.
(ii) General real rectangular matrices.
A real matrix A has no inverse if it is square (n by n) and singular (has rank < n), or if it is of shape (m by n) with m ≠ n, but there is a Generalized or Pseudo-inverse A^+ which satisfies the equations
AA^+A = A,  A^+AA^+ = A^+,  (AA^+)^T = AA^+,  (A^+A)^T = A^+A
(which of course are also satisfied by the inverse X of A if A is square and nonsingular).
(a) if m ≥ n and rank(A) = n then A can be factorized using a QR factorization, given by
A = Q [R; 0],
where Q is an m by m orthogonal matrix and R is an n by n, nonsingular, upper triangular matrix. The pseudo-inverse of A is then given by
A^+ = R^(-1) Q~^T,
where Q~ consists of the first n columns of Q (an illustrative sketch of this case and of case (d) follows case (e) below).
(b) if m ≤ n and rank(A) = m then A can be factorized using an RQ factorization, given by
A = (R 0) Q^T,
where Q is an n by n orthogonal matrix and R is an m by m, nonsingular, upper triangular matrix. The pseudo-inverse of A is then given by
A^+ = Q~ R^(-1),
where Q~ consists of the first m columns of Q.
(c) if m ≥ n and rank(A) = r ≤ n then A can be factorized using a QR factorization, with column interchanges, as
A = Q [R; 0] P^T,
where Q is an m by m orthogonal matrix, R is an r by n upper trapezoidal matrix and P is an n by n permutation matrix. The pseudo-inverse of A is then given by
A^+ = P R^T (R R^T)^(-1) Q~^T,
where Q~ consists of the first r columns of Q.
(d) if rank(A) = r ≤ k = min(m,n), then A can be factorized as the singular value decomposition
A = U Σ V^T,
where U is an m by m orthogonal matrix, V is an n by n orthogonal matrix and Σ is an m by n diagonal matrix with non-negative diagonal elements σ. The first k columns of U and V are the left- and right-hand singular vectors of A respectively and the k diagonal elements of Σ are the singular values of A. Σ may be chosen so that
σ1 ≥ σ2 ≥ … ≥ σk ≥ 0
and in this case if rank(A) = r then
σ1 ≥ σ2 ≥ … ≥ σr > 0,  σr+1 = … = σk = 0.
If U~ and V~ consist of the first r columns of U and V respectively and Σ~ is an r by r diagonal matrix with diagonal elements σ1, σ2, …, σr then A is given by
A = U~ Σ~ V~^T
and the pseudo-inverse of A is given by
A^+ = V~ Σ~^(-1) U~^T.
Notice that
A^T A = V (Σ^T Σ) V^T
which is the classical eigenvalue (spectral) factorization of A^T A.
(e) if A is complex then the above relationships are still true if we use ‘unitary’ in place of ‘orthogonal’ and conjugate transpose in place of transpose. For example, the singular value decomposition of A is
A = U Σ V^H,
where U and V are unitary, V^H is the conjugate transpose of V and Σ is as in (d) above.
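The following MATLAB sketch is purely illustrative: it uses MATLAB built-ins (qr, svd, pinv) rather than the NAG functions, and the test matrix and rank tolerance are arbitrary choices. It forms the pseudo-inverse by the QR route of case (a) and the SVD route of case (d), and checks the defining equations.
% Illustrative only: MATLAB built-ins, not NAG routines; A and the tolerance are arbitrary.
A = [1 2; 3 4; 5 6];                          % m = 3, n = 2, full rank (case (a))
[Qtilde, R] = qr(A, 0);                       % economy-size QR: Qtilde is m by n, R is n by n
Aplus_qr = R \ Qtilde';                       % A^+ = R^(-1) Q~^T
[U, S, V] = svd(A);                           % case (d): A = U*S*V'
s = diag(S);
r = sum(s > max(size(A))*eps(max(s)));        % numerical rank from the singular values
Aplus_svd = V(:,1:r) * diag(1./s(1:r)) * U(:,1:r)';   % A^+ = V~ S~^(-1) U~^T
disp(norm(Aplus_qr - pinv(A)));               % both routes agree with pinv
disp(norm(Aplus_svd - pinv(A)));
disp(norm(A*Aplus_svd*A - A));                % first defining equation holds
disp(norm(Aplus_svd*A*Aplus_svd - Aplus_svd));   % second defining equation holds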

Matrix Factorizations

The functions in this section perform matrix factorizations which are required for the solution of systems of linear equations with various special structures. A few functions which perform associated computations are also included.
Other functions for matrix factorizations are to be found in Chapters F07, F08 and F11.
This section also contains a few functions associated with eigenvalue problems (see Chapter F02). (Historical note: this section used to contain many more such functions, but they have now been superseded by functions in Chapter F08.)

Matrix Arithmetic and Manipulation

The intention of functions in this section (sub-chapters F01C, F01V and F01Z) is to cater for some of the commonly occurring operations in matrix manipulation, e.g., transposing a matrix or adding part of one matrix to another, and for conversion between different storage formats, e.g., conversion between rectangular band matrix storage and packed band matrix storage. For vector or matrix-vector or matrix-matrix operations refer to Chapter F16.

Matrix Functions

Given a square matrix A, the matrix function f(A) is a matrix with the same dimensions as A which provides a generalization of the scalar function f.
If A has a full set of eigenvectors V then A can be factorized as
A = V D V^(-1),
where D is the diagonal matrix whose diagonal elements, di, are the eigenvalues of A. f(A) is given by
f(A) = V f(D) V^(-1),
where f(D) is the diagonal matrix whose ith diagonal element is f(di).
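As a minimal illustration (MATLAB built-ins, not NAG routines; the diagonalizable matrix A and the choice f = exp are arbitrary), the eigendecomposition formula can be checked against expm:
A = [4 1; 2 3];                      % arbitrary diagonalizable real matrix (eigenvalues 2 and 5)
[V, D] = eig(A);
fA = V * diag(exp(diag(D))) / V;     % f(A) = V f(D) V^(-1) with f = exp
disp(norm(fA - expm(A)));            % agrees with expm(A) to rounding error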
In general, A may not have a full set of eigenvectors. The matrix function can then be defined via a Cauchy integral. For an n by n matrix A,
f(A) = (1/(2πi)) ∫Γ f(z) (zI − A)^(-1) dz,
where Γ is a closed contour surrounding the eigenvalues of A, and f is analytic within Γ.
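The Cauchy integral can also be evaluated numerically. The sketch below (MATLAB built-ins only; the circular contour, the number of quadrature points and the choice f = exp are arbitrary) applies the trapezoidal rule on a circle enclosing the eigenvalues of A:
A = [4 1; 2 3];  n = size(A, 1);     % eigenvalues 2 and 5
c = 3.5;  rho = 3;                   % circle centre and radius chosen so Gamma encloses both eigenvalues
N = 64;                              % quadrature points for the trapezoidal rule
F = zeros(n);
for k = 0:N-1
    z = c + rho*exp(2i*pi*k/N);      % point on the contour Gamma
    F = F + rho*exp(2i*pi*k/N) * exp(z) * inv(z*eye(n) - A);
end
F = real(F/N);                       % approximates (1/(2*pi*i)) * integral; real for real A
disp(norm(F - expm(A)));             % close to expm(A), since exp is analytic inside Gamma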
Algorithms for computing matrix functions are usually tailored to a specific function. Currently Chapter F01 contains routines for calculating the exponential, logarithm, sine, cosine, sinh and cosh of both real and complex matrices. In addition there are routines to compute a general function of real symmetric and complex Hermitian matrices and a general function of general real and complex matrices.
The condition number of a matrix function is a measure of its sensitivity to perturbations in the data. Chapter F01 contains functions for estimating the condition number of the matrix exponential, logarithm, sine, cosine, sinh or cosh for real or complex matrices. It also contains functions for estimating the condition number of a general function of a real or complex matrix.

Recommendations on Choice and Use of Available Functions

Matrix Inversion

Note:  before using any function for matrix inversion, consider carefully whether it is really needed.
Although the solution of a set of linear equations Ax = b can be written as x = A^(-1)b, the solution should never be computed by first inverting A and then computing A^(-1)b; the functions in Chapters F04 or F07 should always be used to solve such sets of equations directly; they are faster in execution, and numerically more stable and accurate. Similar remarks apply to the solution of least squares problems which again should be solved by using the functions in Chapters F04 and F08 rather than by computing a pseudo-inverse.
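For example (a minimal MATLAB sketch with built-ins; the matrix and right-hand side are arbitrary), the backslash operator solves the system directly, whereas forming the inverse first costs roughly three times the work and gives no better a residual:
n = 500;
A = randn(n) + n*eye(n);             % arbitrary well-conditioned test matrix
b = randn(n, 1);
x1 = A \ b;                          % solve Ax = b directly (preferred)
x2 = inv(A) * b;                     % via the explicit inverse (to be avoided)
disp(norm(A*x1 - b)/norm(b));
disp(norm(A*x2 - b)/norm(b));        % typically no smaller than the direct residual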
(a) Nonsingular square matrices of order n
This chapter describes techniques for inverting a general real matrix A and matrices which are positive definite (have all eigenvalues positive) and are either real and symmetric or complex and Hermitian. It is wasteful and uneconomical not to use the appropriate function when a matrix is known to have one of these special forms. A general function must be used when the matrix is not known to be positive definite. In most functions the inverse is computed by solving the linear equations Axi = ei, for i = 1, 2, …, n, where ei is the ith column of the identity matrix.
Functions are given for calculating the approximate inverse, that is solving the linear equations just once, and also for obtaining the accurate inverse by successive iterative corrections of this first approximation. The latter, of course, are more costly in terms of time and storage, since each correction involves the solution of n sets of linear equations and since the original A and its LU decomposition must be stored together with the first and successively corrected approximations to the inverse. In practice the storage requirements for the ‘corrected’ inverse functions are about double those of the ‘approximate’ inverse functions, though the extra computer time is not prohibitive since the same matrix and the same LU decomposition are used in every linear equation solution.
Despite the extra work of the ‘corrected’ inverse functions they are superior to the ‘approximate’ inverse functions. A correction provides a means of estimating the number of accurate figures in the inverse or the number of ‘meaningful’ figures relating to the degree of uncertainty in the coefficients of the matrix.
The residual matrix R = AX − I, where X is a computed inverse of A, conveys useful information. Firstly ‖R‖ is a bound on the relative error in X and secondly ‖R‖ < 1/2 guarantees the convergence of the iterative process in the ‘corrected’ inverse functions.
The decision trees for inversion show which functions in Chapter F04 and Chapter F07 should be used for the inversion of other special types of matrices not treated in the chapter.
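The following sketch (MATLAB built-ins, not the NAG functions; the test matrix is an arbitrary choice) illustrates the ‘approximate’ inverse, the residual bound and one iterative correction reusing the stored LU factors:
n = 100;
A = randn(n) + n*eye(n);             % arbitrary well-conditioned test matrix
[L, U, P] = lu(A);                   % factorize once; the factors are reused below
X = U \ (L \ (P*eye(n)));            % 'approximate' inverse: solve A*xi = ei for each column ei
R = A*X - eye(n);                    % residual matrix R = AX - I
disp(norm(R));                       % bound on the relative error in X; must be < 1/2 for the corrections to converge
X = X + U \ (L \ (P*(-R)));          % one correction: solve A*Delta = -R with the stored factors
disp(norm(A*X - eye(n)));            % residual after one correction; the NAG 'corrected' functions
                                     % accumulate R in additional precision to secure the improvement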
(b) General real rectangular matrices
For real matrices nag_lapack_dgeqrf (f08ae) and nag_matop_real_gen_rq (f01qj) return QR and RQ factorizations of A respectively and nag_lapack_dgeqp3 (f08bf) returns the QR factorization with column interchanges. The corresponding complex functions are nag_lapack_zgeqrf (f08as), nag_matop_complex_gen_rq (f01rj) and nag_lapack_zgeqp3 (f08bt) respectively. Functions are also provided to form the orthogonal matrices and transform by the orthogonal matrices following the use of the above functions. nag_matop_real_trapez_rq (f01qg) and nag_matop_complex_trapez_rq (f01rg) form the RQ factorization of an upper trapezoidal matrix for the real and complex cases respectively.
nag_matop_real_gen_pseudinv (f01bl) uses the QR factorization as described in Section [Matrix Inversion](ii)(a) and is the only function that explicitly returns a pseudo-inverse. If m ≥ n, then the function will calculate the pseudo-inverse A^+ of the matrix A. If m < n, then the n by m matrix A^T should be used. The function will calculate the pseudo-inverse Z = (A^T)^+ = (A^+)^T of A^T and the required pseudo-inverse will be Z^T. The function also attempts to calculate the rank, r, of the matrix given a tolerance to decide when elements can be regarded as zero. However, should this function fail due to an incorrect determination of the rank, the singular value decomposition method (described below) should be used.
nag_lapack_dgesvd (f08kb) and nag_lapack_zgesvd (f08kp) compute the singular value decomposition as described in Section [Background to the Problems] for real and complex matrices respectively. If A has rank r < k = min(m,n) then the k − r smallest singular values will be negligible and the pseudo-inverse of A can be obtained as A^+ = V Σ^(-1) U^T as described in Section [Background to the Problems]. If the rank of A is not known in advance it can be estimated from the singular values (see Section [The Rank of a Matrix] in the F04 Chapter Introduction). In the real case with m ≥ n, nag_eigen_real_gen_qu_svd (f02wd) provides details of the QR factorization or the singular value decomposition depending on whether or not A is of full rank and for some problems provides an attractive alternative to nag_lapack_dgesvd (f08kb). For large sparse matrices, leading terms in the singular value decomposition can be computed using functions from Chapter F12.
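A brief MATLAB illustration (built-ins only; pinv stands in for the NAG routines, and the matrix and tolerance are arbitrary) of the transpose identity used when m < n and of estimating the rank from the singular values:
A = [1 2 3; 2 4 6];                        % m = 2, n = 3 (m < n); rank 1 by construction
Z = pinv(A');                              % pseudo-inverse of the n by m matrix A^T
Aplus = Z';                                % (A^T)^+ = (A^+)^T, so transposing recovers A^+
s = svd(A);
r = sum(s > max(size(A))*eps(max(s)));     % estimated rank: negligible singular values are discarded
disp(r);                                   % 1 for this example
disp(norm(Aplus - pinv(A)));               % the transpose route gives the same pseudo-inverse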

Matrix Factorizations

Each of these functions serves a special purpose required for the solution of sets of simultaneous linear equations or the eigenvalue problem. For further details you should consult Sections [Recommendations on Choice and Use of Available Functions] or [Decision Trees] in the F02 Chapter Introduction or Sections [Recommendations on Choice and Use of Available Functions] or [Decision Trees] in the F04 Chapter Introduction.
nag_matop_real_gen_sparse_lu (f01br) and nag_matop_real_gen_sparse_lu_reuse (f01bs) are provided for factorizing general real sparse matrices. A more recent algorithm for the same problem is available through nag_sparse_direct_real_gen_lu (f11me). For factorizing real symmetric positive definite sparse matrices, see nag_sparse_real_symm_precon_ichol (f11ja). These functions should be used only when A is not banded and when the total number of nonzero elements is less than 10% of the total number of elements. In all other cases either the band functions or the general functions should be used.

Matrix Arithmetic and Manipulation

The functions in the F01C section are designed for the general handling of m by n matrices. Emphasis has been placed on flexibility in the parameter specifications and on avoiding, where possible, the use of internally declared arrays. They are therefore suited for use with large matrices of variable row and column dimensions. Functions are included for the addition and subtraction of sub-matrices of larger matrices, as well as the standard manipulations of full matrices. Those functions involving matrix multiplication may use additional-precision arithmetic for the accumulation of inner products.
The functions in the F01V (LAPACK) and F01Z section are designed to allow conversion between full storage format and one of the packed storage schemes required by some of the functions in Chapters F02, F04, F07 and F08.

NAG Names and LAPACK Names

Functions with NAG name beginning F01V may be called either by their NAG names or by their LAPACK names. When using the NAG Library, the double precision form of the LAPACK name must be used (beginning with D- or Z-).
References to Chapter F01 functions in the manual normally include the LAPACK double precision names, for example, nag_matop_dtrttf (f01ve).
The LAPACK function names follow a simple scheme (which is similar to that used for the BLAS). Most names have the structure XYYTZZ, where the components have the following meanings:
– the initial letter, X, indicates the data type (real or complex) and precision;
– the fourth letter, T, indicates that the function is performing a storage scheme transformation (conversion);
– the letters YY indicate the original storage scheme used to store a triangular part of the matrix A, while the letters ZZ indicate the target storage scheme of the conversion (YY cannot equal ZZ since this would do nothing).
For example, in nag_matop_dtrttf (f01ve) the LAPACK name DTRTTF decomposes as X = D (double precision real), YY = TR (full triangular storage), T (transformation) and ZZ = TF (Rectangular Full Packed storage).

Matrix Functions

nag_matop_real_gen_matrix_exp (f01ec) and nag_matop_complex_gen_matrix_exp (f01fc) compute the matrix exponential, e^A, of a real and complex square matrix A respectively. If estimates of the condition number of the matrix exponential are required then nag_matop_real_gen_matrix_cond_std (f01ja) and nag_matop_complex_gen_matrix_cond_std (f01ka) should be used.
nag_matop_real_symm_matrix_exp (f01ed) and nag_matop_complex_herm_matrix_exp (f01fd) compute the matrix exponential, e^A, of a real symmetric and complex Hermitian matrix respectively. If the matrix is real symmetric or complex Hermitian then it is recommended that nag_matop_real_symm_matrix_exp (f01ed) or nag_matop_complex_herm_matrix_exp (f01fd) be used as they are more efficient and, in general, more accurate than nag_matop_real_gen_matrix_exp (f01ec) and nag_matop_complex_gen_matrix_exp (f01fc).
nag_matop_real_gen_matrix_log (f01ej) and nag_matop_complex_gen_matrix_log (f01fj) compute the principal matrix logarithm, log(A), of a real and complex square matrix A respectively. If estimates of the condition number of the matrix logarithm are required then nag_matop_real_gen_matrix_cond_std (f01ja) and nag_matop_complex_gen_matrix_cond_std (f01ka) should be used.
nag_matop_real_gen_matrix_fun_std (f01ek) and nag_matop_complex_gen_matrix_fun_std (f01fk) compute the matrix exponential, sine, cosine, sinh or cosh of a real and complex square matrix A respectively. If the matrix exponential is required then it is recommended that nag_matop_real_gen_matrix_exp (f01ec) or nag_matop_complex_gen_matrix_exp (f01fc) be used as they are, in general, more accurate than nag_matop_real_gen_matrix_fun_std (f01ek) and nag_matop_complex_gen_matrix_fun_std (f01fk). If estimates of the condition number of the matrix function are required then nag_matop_real_gen_matrix_cond_std (f01ja) and nag_matop_complex_gen_matrix_cond_std (f01ka) should be used.
nag_matop_real_gen_matrix_fun_num (f01el) and nag_matop_real_gen_matrix_fun_usd (f01em) compute the matrix function, f(A), of a real square matrix. nag_matop_complex_gen_matrix_fun_num (f01fl) and nag_matop_complex_gen_matrix_fun_usd (f01fm) compute the matrix function of a complex square matrix. The derivatives of f are required for these computations. nag_matop_real_gen_matrix_fun_num (f01el) and nag_matop_complex_gen_matrix_fun_num (f01fl) use numerical differentiation to obtain the derivatives of f. nag_matop_real_gen_matrix_fun_usd (f01em) and nag_matop_complex_gen_matrix_fun_usd (f01fm) use derivatives you have supplied. If estimates of the condition number of the matrix function are required and you are supplying derivatives of f, then nag_matop_real_gen_matrix_cond_usd (f01jc) and nag_matop_complex_gen_matrix_cond_usd (f01kc) should be used. If estimates of the condition number are required but you are not supplying derivatives then nag_matop_real_gen_matrix_cond_num (f01jb) and nag_matop_complex_gen_matrix_cond_num (f01kb) should be used.
nag_matop_real_symm_matrix_fun (f01ef) and nag_matop_complex_herm_matrix_fun (f01ff) compute the matrix function, f(A), of a real symmetric and complex Hermitian matrix A respectively. If the matrix is real symmetric or complex Hermitian then it is recommended that nag_matop_real_symm_matrix_fun (f01ef) or nag_matop_complex_herm_matrix_fun (f01ff) be used as they are more efficient and, in general, more accurate than nag_matop_real_gen_matrix_fun_num (f01el), nag_matop_real_gen_matrix_fun_usd (f01em), nag_matop_complex_gen_matrix_fun_num (f01fl) and nag_matop_complex_gen_matrix_fun_usd (f01fm).
nag_matop_real_gen_matrix_actexp (f01ga) and nag_matop_complex_gen_matrix_actexp (f01ha) compute the matrix function e^(tA)B for explicitly stored dense real and complex matrices A and B respectively, while nag_matop_real_gen_matrix_actexp_rcomm (f01gb) and nag_matop_complex_gen_matrix_actexp_rcomm (f01hb) compute the same using reverse communication. In the latter case, control is returned to you. You should calculate any required matrix-matrix products and then call the function again.
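As a rough illustration of what these routines compute (MATLAB built-ins; t, A and B are arbitrary), the quantity is e^(tA)B, the action of the matrix exponential on B; forming expm(t*A) explicitly, as below, is generally more expensive than computing the action directly when only the product is needed:
t = 0.5;
A = randn(6);                        % arbitrary square matrix
B = randn(6, 2);                     % the matrix acted on by the exponential
F = expm(t*A) * B;                   % e^(tA)*B via the explicit exponential
Fd = expm((t + 1e-6)*A) * B;
disp(norm((Fd - F)/1e-6 - A*F));     % small: d/dt (e^(tA)B) = A e^(tA)B, checked by finite differences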

Decision Trees

The decision trees show the functions in this chapter and in Chapter F04, Chapter F07 and Chapter F08 that should be used for inverting matrices of various types. They also show which function should be used to calculate various matrix functions.
(i) Matrix Inversion:

Tree 1

Is A an n by n matrix of rank n?
    yes: Is A a real matrix?
        yes: see Tree 2
        no:  see Tree 3
    no: see Tree 4

Tree 2: Inverse of a real n by n matrix of full rank

Is A a band matrix?
    yes: See Note 1.
    no: Is A symmetric?
        yes: Is A positive definite?
            yes: Do you want guaranteed accuracy? (See Note 2)
                yes: nag_matop_real_symm_posdef_inv (f01ab)
                no: Is one triangle of A stored as a linear array?
                    yes: nag_lapack_dpptrf (f07gd) and nag_lapack_dpptri (f07gj)
                    no:  nag_matop_real_symm_posdef_inv_noref (f01ad) or nag_lapack_dpotrf (f07fd) and nag_lapack_dpotri (f07fj)
            no: Is one triangle of A stored as a linear array?
                yes: nag_lapack_dsptrf (f07pd) and nag_lapack_dsptri (f07pj)
                no:  nag_lapack_dsytrf (f07md) and nag_lapack_dsytri (f07mj)
        no: Is A triangular?
            yes: Is A stored as a linear array?
                yes: nag_lapack_dtptri (f07uj)
                no:  nag_lapack_dtrtri (f07tj)
            no: Do you want guaranteed accuracy? (See Note 2)
                yes: nag_linsys_real_square_solve_ref (f04ae)
                no:  nag_lapack_dgetrf (f07ad) and nag_lapack_dgetri (f07aj)

Tree 3: Inverse of a complex n by n matrix of full rank

Is A a band matrix?
    yes: See Note 1.
    no: Is A Hermitian?
        yes: Is A positive definite?
            yes: Is one triangle of A stored as a linear array?
                yes: nag_lapack_zpptrf (f07gr) and nag_lapack_zpptri (f07gw)
                no:  nag_lapack_zpotrf (f07fr) and nag_lapack_zpotri (f07fw)
            no: Is one triangle of A stored as a linear array?
                yes: nag_lapack_zhptrf (f07pr) and nag_lapack_zhptri (f07pw)
                no:  nag_lapack_zhetrf (f07mr) and nag_lapack_zhetri (f07mw)
        no: Is A symmetric?
            yes: Is one triangle of A stored as a linear array?
                yes: nag_lapack_zsptrf (f07qr) and nag_lapack_zsptri (f07qw)
                no:  nag_lapack_zsytrf (f07nr) and nag_lapack_zsytri (f07nw)
            no: Is A triangular?
                yes: Is A stored as a linear array?
                    yes: nag_lapack_ztptri (f07uw)
                    no:  nag_lapack_ztrtri (f07tw)
                no: nag_lapack_zgesv (f07an) or nag_lapack_zgetrf (f07ar) and nag_lapack_zgetri (f07aw)

Tree 4: Pseudo-inverses

Is A a complex matrix?
    yes: Is A of full rank?
        yes: Is A an m by n matrix with m < n?
            yes: nag_matop_complex_gen_rq (f01rj) and nag_matop_complex_gen_rq_formq (f01rk)
            no:  nag_lapack_zgeqrf (f08as) and nag_lapack_zunmqr (f08au) or nag_lapack_zungqr (f08at)
        no: nag_lapack_zgesvd (f08kp)
    no: Is A of full rank?
        yes: Is A an m by n matrix with m < n?
            yes: nag_matop_real_gen_rq (f01qj) and nag_matop_real_gen_rq_formq (f01qk)
            no:  nag_lapack_dgeqrf (f08ae) and nag_lapack_dormqr (f08ag) or nag_lapack_dorgqr (f08af)
        no: Is A an m by n matrix with m < n?
            yes: nag_lapack_dgesvd (f08kb)
            no: Is reliability more important than efficiency?
                yes: nag_lapack_dgesvd (f08kb)
                no:  nag_matop_real_gen_pseudinv (f01bl)
Note 1: the inverse of a band matrix A does not in general have the same shape as A, and no functions are provided specifically for finding such an inverse. The matrix must either be treated as a full matrix, or the equations AX = B must be solved, where B has been initialized to the identity matrix I. In the latter case, see the decision trees in Section [Decision Trees] in the F04 Chapter Introduction.
Note 2: by ‘guaranteed accuracy’ we mean that the accuracy of the inverse is improved by use of the iterative refinement technique using additional precision.
(ii) Matrix Factorizations: see the decision trees in Section [Decision Trees] in the F02 and F04 Chapter Introductions.
(iii) Matrix Arithmetic and Manipulation: not appropriate.
(iv) Matrix Functions:

Tree 5: Matrix functions f(A) of an n by n real matrix A

Is e^(tA)B required?
    yes: Is A stored in dense format?
        yes: nag_matop_real_gen_matrix_actexp (f01ga)
        no:  nag_matop_real_gen_matrix_actexp_rcomm (f01gb)
    no: Is A real symmetric?
        yes: Is e^A required?
            yes: nag_matop_real_symm_matrix_exp (f01ed)
            no:  nag_matop_real_symm_matrix_fun (f01ef)
        no: Is cos(A) or cosh(A) or sin(A) or sinh(A) required?
            yes: Is the condition number of the matrix function required?
                yes: nag_matop_real_gen_matrix_cond_std (f01ja)
                no:  nag_matop_real_gen_matrix_fun_std (f01ek)
            no: Is log(A) required?
                yes: Is the condition number of the matrix logarithm required?
                    yes: nag_matop_real_gen_matrix_cond_std (f01ja)
                    no:  nag_matop_real_gen_matrix_log (f01ej)
                no: Is exp(A) required?
                    yes: Is the condition number of the matrix exponential required?
                        yes: nag_matop_real_gen_matrix_cond_std (f01ja)
                        no:  nag_matop_real_gen_matrix_exp (f01ec)
                    no: f(A) will be computed. Will derivatives of f be supplied by the user?
                        yes: Is the condition number of the matrix function required?
                            yes: nag_matop_real_gen_matrix_cond_usd (f01jc)
                            no:  nag_matop_real_gen_matrix_fun_usd (f01em)
                        no: Is the condition number of the matrix function required?
                            yes: nag_matop_real_gen_matrix_cond_num (f01jb)
                            no:  nag_matop_real_gen_matrix_fun_num (f01el)

Tree 6: Matrix functions f(A) of an n by n complex matrix A

Is e^(tA)B required?
    yes: Is A stored in dense format?
        yes: nag_matop_complex_gen_matrix_actexp (f01ha)
        no:  nag_matop_complex_gen_matrix_actexp_rcomm (f01hb)
    no: Is A complex Hermitian?
        yes: Is e^A required?
            yes: nag_matop_complex_herm_matrix_exp (f01fd)
            no:  nag_matop_complex_herm_matrix_fun (f01ff)
        no: Is cos(A) or cosh(A) or sin(A) or sinh(A) required?
            yes: Is the condition number of the matrix function required?
                yes: nag_matop_complex_gen_matrix_cond_std (f01ka)
                no:  nag_matop_complex_gen_matrix_fun_std (f01fk)
            no: Is log(A) required?
                yes: Is the condition number of the matrix logarithm required?
                    yes: nag_matop_complex_gen_matrix_cond_std (f01ka)
                    no:  nag_matop_complex_gen_matrix_log (f01fj)
                no: Is exp(A) required?
                    yes: Is the condition number of the matrix exponential required?
                        yes: nag_matop_complex_gen_matrix_cond_std (f01ka)
                        no:  nag_matop_complex_gen_matrix_exp (f01fc)
                    no: f(A) will be computed. Will derivatives of f be supplied by the user?
                        yes: Is the condition number of the matrix function required?
                            yes: nag_matop_complex_gen_matrix_cond_usd (f01kc)
                            no:  nag_matop_complex_gen_matrix_fun_usd (f01fm)
                        no: Is the condition number of the matrix function required?
                            yes: nag_matop_complex_gen_matrix_cond_num (f01kb)
                            no:  nag_matop_complex_gen_matrix_fun_num (f01fl)

Functionality Index

Action of the matrix exponential on a complex matrix nag_matop_complex_gen_matrix_actexp (f01ha)
Action of the matrix exponential on a complex matrix (reverse communication) nag_matop_complex_gen_matrix_actexp_rcomm (f01hb)
Action of the matrix exponential on a real matrix nag_matop_real_gen_matrix_actexp (f01ga)
Action of the matrix exponential on a real matrix (reverse communication) nag_matop_real_gen_matrix_actexp_rcomm (f01gb)
Inversion (also see Chapter F07), 
    real m by n matrix, 
        pseudo-inverse nag_matop_real_gen_pseudinv (f01bl)
    real symmetric positive definite matrix, 
        accurate inverse nag_matop_real_symm_posdef_inv (f01ab)
        approximate inverse nag_matop_real_symm_posdef_inv_noref (f01ad)
Matrix Arithmetic and Manipulation, 
    matrix addition, 
        complex matrices nag_matop_complex_addsub (f01cw)
        real matrices nag_matop_real_addsub (f01ct)
    matrix multiplication nag_matop_real_gen_matmul (f01ck)
    matrix storage conversion, 
        full to packed triangular storage, 
            complex matrices nag_matop_ztrttp (f01vb)
            real matrices nag_matop_dtrttp (f01va)
        full to Rectangular Full Packed storage, 
            complex matrix nag_matop_ztrttf (f01vf)
            real matrix nag_matop_dtrttf (f01ve)
        packed band  ↔  rectangular storage, special provision for diagonal 
            complex matrices nag_matop_complex_band_pack (f01zd)
            real matrices nag_matop_real_band_pack (f01zc)
        packed triangular to full storage, 
            complex matrices nag_matop_ztpttr (f01vd)
            real matrices nag_matop_dtpttr (f01vc)
        packed triangular to Rectangular Full Packed storage, 
            complex matrices nag_matop_ztpttf (f01vk)
            real matrices nag_matop_dtpttf (f01vj)
        packed triangular  ↔  square storage, special provision for diagonal 
            complex matrices nag_matop_complex_tri_pack (f01zb)
            real matrices nag_matop_real_tri_pack (f01za)
        Rectangular Full Packed to full storage, 
            complex matrices nag_matop_ztfttr (f01vh)
            real matrices nag_matop_dtfttr (f01vg)
        Rectangular Full Packed to packed triangular storage, 
            complex matrices nag_matop_ztfttp (f01vm)
            real matrices nag_matop_dtfttp (f01vl)
    matrix subtraction, 
        complex matrices nag_matop_complex_addsub (f01cw)
        real matrices nag_matop_real_addsub (f01ct)
    matrix transpose nag_matop_real_gen_trans_inplace (f01cr)
Matrix function, 
    complex Hermitian n by n matrix, 
        matrix exponential nag_matop_complex_herm_matrix_exp (f01fd)
        matrix function nag_matop_complex_herm_matrix_fun (f01ff)
    complex n by n matrix, 
        condition number for a matrix exponential, logarithm, sine, cosine, sinh or cosh nag_matop_complex_gen_matrix_cond_std (f01ka)
        condition number for a matrix function, using numerical differentiation nag_matop_complex_gen_matrix_cond_num (f01kb)
        condition number for a matrix function, using user-supplied derivatives nag_matop_complex_gen_matrix_cond_usd (f01kc)
        matrix exponential nag_matop_complex_gen_matrix_exp (f01fc)
        matrix exponential, sine, cosine, sinh or cosh nag_matop_complex_gen_matrix_fun_std (f01fk)
        matrix function, using numerical differentiation nag_matop_complex_gen_matrix_fun_num (f01fl)
        matrix function, using user-supplied derivatives nag_matop_complex_gen_matrix_fun_usd (f01fm)
        matrix logarithm nag_matop_complex_gen_matrix_log (f01fj)
    real n by n matrix, 
        condition number for a matrix function, using numerical differentiation nag_matop_real_gen_matrix_cond_num (f01jb)
        condition number for a matrix function, using user-supplied derivatives nag_matop_real_gen_matrix_cond_usd (f01jc)
        condition number for the matrix exponential, logarithm, sine, cosine, sinh or cosh nag_matop_real_gen_matrix_cond_std (f01ja)
        matrix exponential nag_matop_real_gen_matrix_exp (f01ec)
        matrix exponential, sine, cosine, sinh or cosh nag_matop_real_gen_matrix_fun_std (f01ek)
        matrix function, using numerical differentiation nag_matop_real_gen_matrix_fun_num (f01el)
        matrix function, using user-supplied derivatives nag_matop_real_gen_matrix_fun_usd (f01em)
        matrix logarithm nag_matop_real_gen_matrix_log (f01ej)
    real symmetric n by n matrix, 
        matrix exponential nag_matop_real_symm_matrix_exp (f01ed)
        matrix function nag_matop_real_symm_matrix_fun (f01ef)
Matrix Transformations, 
    complex matrix, form unitary matrix nag_matop_complex_gen_rq_formq (f01rk)
    complex m by n (m ≤ n) matrix, 
        RQ factorization nag_matop_complex_gen_rq (f01rj)
    complex upper trapezoidal matrix, 
        RQ factorization nag_matop_complex_trapez_rq (f01rg)
    eigenproblem Ax = λBx, A, B banded, 
        reduction to standard symmetric problem nag_matop_real_symm_posdef_geneig (f01bv)
    real almost block-diagonal matrix, 
        LU factorization nag_matop_real_gen_blkdiag_lu (f01lh)
    real band symmetric positive definite matrix, 
        ULDL^T U^T factorization nag_matop_real_symm_posdef_fac (f01bu)
        variable bandwidth, LDL^T factorization nag_matop_real_vband_posdef_fac (f01mc)
    real matrix, 
        form orthogonal matrix nag_matop_real_gen_rq_formq (f01qk)
    real m by n (m ≤ n) matrix, 
        RQ factorization nag_matop_real_gen_rq (f01qj)
    real sparse matrix, 
        factorization nag_matop_real_gen_sparse_lu (f01br)
        factorization, known sparsity pattern nag_matop_real_gen_sparse_lu_reuse (f01bs)
    real upper trapezoidal matrix, 
        RQ factorization nag_matop_real_trapez_rq (f01qg)
    tridiagonal matrix, 
        LU factorization nag_matop_real_gen_tridiag_lu (f01le)



© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013