corrmat_nearest_rank computes the nearest correlation matrix, in the Frobenius norm, of maximum prescribed rank to a given square input matrix.
template <typename G, typename X>
void corrmat_nearest_rank(G &&g, const types::f77_integer rank, X &&x, double &f, double &rankerr, types::f77_integer &nsub, OptionalG02AK opt)
template <typename G, typename X>
void corrmat_nearest_rank(G &&g, const types::f77_integer rank, X &&x, double &f, double &rankerr, types::f77_integer &nsub)
corrmat_nearest_rank finds the nearest correlation matrix of maximum prescribed rank to an approximate correlation matrix in the Frobenius norm.
The solver is based on the Majorized Penalty Approach (MPA) proposed by Gao and Sun (2010). One of the key elements in this type of method is that the subproblems are similar to the nearest correlation matrix problem without rank constraint, and can be solved efficiently by g02aaf (no CPP interface). The total number of subproblems solved is controlled by the arguments maxit and maxits. The algorithm behaviour and solver accuracy can be modified by these and other input arguments. The default values for these arguments are chosen to work well in the general case but it is recommended that you tune them to your particular problem. For a detailed description of the algorithm see Section 11.
Bai S, Qi H–D and Xiu N (2015) Constrained best Euclidean distance embedding on a sphere: A matrix optimization approach SIAM J. Optim. 25(1) 439–467
Gao Y and Sun D (2010) A majorized penalty approach for calibrating rank constrained correlation matrix problems Technical report Department of Mathematics, National University of Singapore
Qi H–D and Yuan X (2014) Computing the nearest Euclidean distance matrix with low embedding dimensions Mathematical Programming 147(1–2) 351–389
1: g – double array Input/Output
On entry: G, the initial matrix.
On exit: a symmetric matrix (G + G^T)/2 with its diagonal elements set to 1.
On entry: specifies the maximum number of iterations for the penalty method, i.e., the maximum number of updates of the penalty parameter.
If maxit is less than or equal to zero, a default value is used.
The order of the matrix G.
6 Exceptions and Warnings
Errors or warnings detected by the function:
All errors and warnings have an associated numeric error code field, errorid, stored either as a member of the thrown exception object (see errorid) or as a member of opt.ifail, depending on how errors and warnings are being handled (see Error Handling for more details).
corrmat_nearest_rank is aimed at solving the rank-constrained nearest correlation matrix problem formulated as follows:

\[
\min_{X \in \mathbb{S}^n} \; \tfrac{1}{2}\|G - X\|^2 \quad \text{subject to} \quad X_{ii} = 1,\; i = 1,\dots,n, \quad \operatorname{rank}(X) \le r, \quad X \succeq 0, \tag{1}
\]

where G is the input approximate correlation matrix, r is the upper bound on the rank of the output nearest correlation matrix X, the constraint X ⪰ 0 requires all eigenvalues of X, in the space of n by n symmetric matrices S^n, to be non-negative (i.e., the matrix X must be positive semidefinite), and ‖·‖ denotes the Frobenius norm. Note that the rank constraint is given as an inequality: if the intrinsic rank of the input matrix is already less than or equal to r, the solver will compute a nearest correlation matrix without increasing the rank.
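The constraints of problem (1) can be illustrated numerically. The following sketch (illustrative only, not NAG code; the helper `eig2x2` is ours) checks unit diagonal, positive semidefiniteness and the rank bound for a 2 by 2 symmetric matrix, whose eigenvalues are available in closed form:

```python
# Illustrative sketch, not NAG code: feasibility check for problem (1)
# on a 2x2 example. eig2x2 is a hypothetical helper for this demo.
import math

def eig2x2(a, b, d):
    """Eigenvalues of [[a, b], [b, d]] in non-increasing order."""
    mean = (a + d) / 2.0
    delta = math.hypot((a - d) / 2.0, b)
    return mean + delta, mean - delta

# Candidate X = [[1, 1], [1, 1]]: unit diagonal, eigenvalues 2 and 0.
lam1, lam2 = eig2x2(1.0, 1.0, 1.0)
unit_diag = True                     # X_11 = X_22 = 1 by construction
psd = lam2 >= -1e-12                 # all eigenvalues non-negative
rank_le_1 = abs(lam2) <= 1e-12       # trailing eigenvalue is zero
print(unit_diag and psd and rank_le_1)  # True: feasible for r = 1
```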
This section contains a short description of the algorithm used in corrmat_nearest_rank which is based on the Majorized Penalty Approach (MPA) by Gao and Sun (2010). Further details on accuracy and stopping criterion are also included.
Let λ(X) = (λ1(X), λ2(X), …, λn(X))^T be the vector of the eigenvalues of X (arranged in non-increasing order) and let P be the corresponding matrix of orthonormal eigenvectors of X. The equivalent relationship for a positive semidefinite matrix X is as follows:

\[
\operatorname{rank}(X) \le r \iff \sum_{i=r+1}^{n} \lambda_i(X) = 0.
\]

Therefore, problem (1) can be equivalently written as

\[
\min_{X \in \mathbb{S}^n} \; \tfrac{1}{2}\|G - X\|^2 \quad \text{subject to} \quad X_{ii} = 1,\; i = 1,\dots,n, \quad \sum_{i=r+1}^{n} \lambda_i(X) = 0, \quad X \succeq 0. \tag{2}
\]

Introducing the penalty parameter c > 0, we can obtain the following penalized problem by taking a trade-off between the rank constraint and the least squares distance ½‖G − X‖²:

\[
\min_{X \in \mathbb{S}^n} \; \tfrac{1}{2}\|G - X\|^2 + c \sum_{i=r+1}^{n} \lambda_i(X) \quad \text{subject to} \quad X_{ii} = 1,\; i = 1,\dots,n, \quad X \succeq 0. \tag{3}
\]

Since the sum of all eigenvalues of X equals ⟨I, X⟩, where I is the identity matrix and ⟨·,·⟩ is the standard trace inner product in S^n, we can rewrite problem (3) as

\[
\min_{X \in \mathbb{S}^n} \; \tfrac{1}{2}\|G - X\|^2 + c\left(\langle I, X\rangle - \sum_{i=1}^{r} \lambda_i(X)\right) \quad \text{subject to} \quad X_{ii} = 1,\; i = 1,\dots,n, \quad X \succeq 0. \tag{4}
\]
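The identity used to pass from (3) to (4) — the trailing eigenvalue sum equals the trace minus the leading eigenvalue sum — can be checked numerically. This sketch is illustrative only (not NAG code; `eig2x2` is our helper for the 2 by 2 case):

```python
# Illustrative sketch, not NAG code: verify
#   sum_{i>r} lambda_i(X) = <I, X> - sum_{i<=r} lambda_i(X),
# since <I, X> = trace(X) is the sum of all eigenvalues.
import math

def eig2x2(a, b, d):
    """Eigenvalues of [[a, b], [b, d]] in non-increasing order."""
    mean = (a + d) / 2.0
    delta = math.hypot((a - d) / 2.0, b)
    return mean + delta, mean - delta

a, b, d = 1.0, 0.5, 1.0          # X = [[1, 0.5], [0.5, 1]], unit diagonal
lam = eig2x2(a, b, d)            # (1.5, 0.5)
trace = a + d                    # <I, X> = 2 (= n for a correlation matrix)
r = 1
tail = sum(lam[r:])              # trailing eigenvalue sum
head = sum(lam[:r])              # leading eigenvalue sum
print(abs(tail - (trace - head)) < 1e-12)  # True
```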
The penalty parameter c is updated according to the progress of rank reduction. The input argument maxit controls the maximum number of updates of c.
The penalized problem (4) is not equivalent to the original problem (1), and the relationship can be described as follows. If the rank of the minimizer X* of problem (4) is not larger than r, then X* is a global optimal solution to problem (1); otherwise, an ε-optimal solution to problem (1) is guaranteed, provided that the parameter c satisfies a certain bound. Please see Gao and Sun (2010) for more details.
The focus now is on solving the penalized problem (4). Since the term −∑_{i=1}^{r} λ_i(X) is nonsmooth and concave, we majorize it by the linear function defined by its subgradient. For given X^k (the current iteration) and U^k ∈ ∂(∑_{i=1}^{r} λ_i(X^k)), we have

\[
\sum_{i=1}^{r} \lambda_i(X) \ge \sum_{i=1}^{r} \lambda_i(X^k) + \langle U^k, X - X^k\rangle.
\]

Now, instead of solving the nonconvex problem (4), we solve the following convex model:

\[
\min_{X \in \mathbb{S}^n} \; \tfrac{1}{2}\|G - X\|^2 + c\left(\langle I, X\rangle - \sum_{i=1}^{r} \lambda_i(X^k) - \langle U^k, X - X^k\rangle\right) \quad \text{subject to} \quad X_{ii} = 1,\; i = 1,\dots,n, \quad X \succeq 0. \tag{5}
\]
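The majorization step can be seen concretely: the trailing eigenvalue sum is a concave function, so its linearization at X^k (with slope given by an outer-product of trailing eigenvectors) overestimates it everywhere. The following 2 by 2 check is an illustrative sketch, not NAG code; `eig2x2` and `smallest_eigvec` are our helpers:

```python
# Illustrative sketch, not NAG code: for n = 2, r = 1 the trailing sum
# p(X) = lambda_2(X) is concave, and p(X) <= p(X^k) + <U, X - X^k>
# where U = v v^T with v the eigenvector of the smallest eigenvalue.
import math

def eig2x2(a, b, d):
    mean = (a + d) / 2.0
    delta = math.hypot((a - d) / 2.0, b)
    return mean + delta, mean - delta

def smallest_eigvec(a, b, d):
    _, lam2 = eig2x2(a, b, d)
    vx, vy = b, lam2 - a
    if vx == 0.0 and vy == 0.0:
        vx, vy = 1.0, 0.0
    nrm = math.hypot(vx, vy)
    return vx / nrm, vy / nrm

def tail_sum(a, b, d):          # p(X) = lambda_2(X)
    return eig2x2(a, b, d)[1]

Xk = (1.0, 0.5, 1.0)            # (X_11, X_12, X_22) at the current iterate
X = (1.0, -0.3, 1.0)            # a trial point
vx, vy = smallest_eigvec(*Xk)
U = (vx * vx, vx * vy, vy * vy)
inner = (U[0] * (X[0] - Xk[0])
         + 2 * U[1] * (X[1] - Xk[1])   # off-diagonal entry counted twice
         + U[2] * (X[2] - Xk[2]))
lhs = tail_sum(*X)              # p at the trial point
rhs = tail_sum(*Xk) + inner     # linear majorant at the trial point
print(lhs <= rhs + 1e-12)       # True: the linearization overestimates p
```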
The framework combines ideas of a penalty method with majorization; it can be described as follows:
Majorized Penalty Algorithm (MPA)
1. Select a penalty parameter c > 0 and a feasible point X^0, set k := 0.
2. Solve the subproblem (5) to obtain X^{k+1}.
3. If rank(X^{k+1}) ≤ r, stop; otherwise, update the penalty parameter c, set k := k + 1 and go to step 2.
Let Y = G − c(I − U^k); the subproblem (5) is then a nearest correlation matrix problem with input Y, without the rank constraint, which can be solved efficiently by g02aaf (no CPP interface). The argument maxits controls the maximum number of iterations used in solving one problem (5) with fixed c.
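Why the subproblem reduces to a plain nearest correlation matrix problem can be verified numerically: adding a linear term c⟨U, X⟩ to the Frobenius objective only shifts the input matrix. The sketch below is illustrative, not NAG code, and U stands for a generic symmetric matrix (in the algorithm, the linearization matrix):

```python
# Illustrative sketch, not NAG code: with Y = G - c*U, the objectives
#   0.5*||G - X||^2 + c*<U, X>   and   0.5*||Y - X||^2
# differ only by a constant independent of X, so both have the same
# minimizer over any feasible set.
import random

random.seed(0)
n, c = 3, 2.0
rand_mat = lambda: [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
G, U = rand_mat(), rand_mat()
Y = [[G[i][j] - c * U[i][j] for j in range(n)] for i in range(n)]

def frob2(A, B):
    return sum((A[i][j] - B[i][j]) ** 2 for i in range(n) for j in range(n))

def inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

def obj_pen(X):   # penalized (linearized) objective
    return 0.5 * frob2(G, X) + c * inner(U, X)

def obj_ncm(X):   # nearest correlation matrix objective with input Y
    return 0.5 * frob2(Y, X)

X1, X2 = rand_mat(), rand_mat()
# The objective *differences* agree at arbitrary points:
diff_gap = (obj_pen(X1) - obj_pen(X2)) - (obj_ncm(X1) - obj_ncm(X2))
print(abs(diff_gap) < 1e-9)  # True
```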
The algorithm shown in Table 1 is stopped when all the stopping criteria are satisfied to the requested accuracy; these are:

\[
\frac{\|X^k - X^{k-1}\|_F}{\max(1, \|X^{k-1}\|_F)} \le \epsilon_1, \qquad \sum_{i=r+1}^{n} \lambda_i(X^k) \le \epsilon_2.
\]

Here ε1 and ε2 may be set by the arguments errtol and ranktol, respectively, in order to achieve various returned accuracies. The above quantity used to measure rank feasibility does not scale well with the magnitude of X. To rectify this drawback, we also build in a third stopping criterion to control the percentage of the first r eigenvalues of X out of all the eigenvalues:

\[
\frac{\sum_{i=1}^{r} \lambda_i(X^k)}{\sum_{i=1}^{n} \lambda_i(X^k)} \ge 1 - \epsilon_2.
\]
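The eigenvalue-share measure of rank feasibility can be computed as follows; this is an illustrative sketch, not NAG code (`eig2x2` is our helper), showing why it is scale-free: multiplying X by any positive factor leaves the ratio unchanged.

```python
# Illustrative sketch, not NAG code: scale-free rank measure, i.e. the
# share of the first r eigenvalues in the total eigenvalue sum.
import math

def eig2x2(a, b, d):
    mean = (a + d) / 2.0
    delta = math.hypot((a - d) / 2.0, b)
    return mean + delta, mean - delta

lam = eig2x2(1.0, 0.9, 1.0)     # (1.9, 0.1): nearly rank one
r = 1
share = sum(lam[:r]) / sum(lam)  # 0.95
print(share >= 1 - 0.1)          # True: passes with a tolerance of 0.1

# Scale invariance: scaling all eigenvalues leaves the share unchanged.
scaled = [10.0 * x for x in lam]
print(abs(sum(scaled[:r]) / sum(scaled) - share) < 1e-12)  # True
```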