# NAG CPP Interface: nagcpp::correg::corrmat_nearest_rank (g02ak)

## 1 Purpose

corrmat_nearest_rank computes the nearest correlation matrix of maximum prescribed rank, in the Frobenius norm, to a given square input matrix.

## 2 Specification

```cpp
#include "g02/nagcpp_g02ak.hpp"
```

```cpp
template <typename G, typename X>
void corrmat_nearest_rank(G &&g, const types::f77_integer rank, X &&x,
                          double &f, double &rankerr,
                          types::f77_integer &nsub, OptionalG02AK opt)
```

```cpp
template <typename G, typename X>
void corrmat_nearest_rank(G &&g, const types::f77_integer rank, X &&x,
                          double &f, double &rankerr,
                          types::f77_integer &nsub)
```

## 3 Description

corrmat_nearest_rank finds the nearest correlation matrix $X$ of maximum prescribed rank $r$ to an approximate correlation matrix $G$ in the Frobenius norm.
The solver is based on the Majorized Penalty Approach (MPA) proposed by Gao and Sun (2010). One of the key elements in this type of method is that the subproblems are similar to the nearest correlation matrix problem without rank constraint, and can be solved efficiently by g02aaf (no CPP interface). The total number of subproblems solved is controlled by the arguments maxit and maxits. The algorithm behaviour and solver accuracy can be modified by these and other input arguments. The default values for these arguments are chosen to work well in the general case but it is recommended that you tune them to your particular problem. For a detailed description of the algorithm see Section 11.

## 4 References

Bai S, Qi H–D and Xiu N (2015) Constrained best Euclidean distance embedding on a sphere: A matrix optimization approach SIAM J. Optim. 25(1) 439–467
Gao Y and Sun D (2010) A majorized penalty approach for calibrating rank constrained correlation matrix problems Technical report Department of Mathematics, National University of Singapore
Qi H–D and Yuan X (2014) Computing the nearest Euclidean distance matrix with low embedding dimensions Mathematical Programming 147(1–2) 351–389

## 5 Arguments

1: $\mathbf{g}\left({\mathbf{n}},{\mathbf{n}}\right)$ – double array – Input/Output
On entry: $\stackrel{~}{G}$, the initial matrix.
On exit: a symmetric matrix $G=\frac{1}{2}\left(\stackrel{~}{G}+{\stackrel{~}{G}}^{\mathrm{T}}\right)$ with diagonal elements set to $1.0$.
2: $\mathbf{rank}$ – types::f77_integer – Input
On entry: $r$, the upper bound for the rank of $X$.
Constraint: $0<{\mathbf{rank}}\le {\mathbf{n}}$.
3: $\mathbf{x}\left({\mathbf{n}},{\mathbf{n}}\right)$ – double array – Output
On exit: $X$, the nearest correlation matrix of rank $r$.
4: $\mathbf{f}$ – double – Output
On exit: the difference between $X$ and $G$ given by $\frac{1}{2}{‖X-G‖}_{F}^{2}$.
5: $\mathbf{rankerr}$ – double – Output
On exit: the rank error of $X$, defined as ${\sum }_{i=r+1}^{n}{\lambda }_{i}$, where ${\lambda }_{i}$ denotes the $i$th eigenvalue of $X$, with the eigenvalues sorted in non-increasing order.
6: $\mathbf{nsub}$ – types::f77_integer – Output
On exit: the total number of majorized problems that have been solved, i.e., the total number of calls to g02aaf (no CPP interface).
7: $\mathbf{opt}$ – OptionalG02AK – Input/Output
Optional parameter container, derived from Optional.
Container for:
errtol – double
This optional parameter may be set using the method OptionalG02AK::errtol and accessed via OptionalG02AK::get_errtol.
Default: $0.0$
On entry: the termination tolerance for the convergence measure of the objective function value.
If ${\mathbf{errtol}}\le 0.0$, then a value of $1.0E-5$ is used. See Section 11 for further details.
ranktol – double
This optional parameter may be set using the method OptionalG02AK::ranktol and accessed via OptionalG02AK::get_ranktol.
Default: $0.0$
On entry: the feasibility tolerance for the rank constraint.
If ${\mathbf{ranktol}}\le 0.0$, a default value is used. See Section 11 for further details.
maxits – types::f77_integer
This optional parameter may be set using the method OptionalG02AK::maxits and accessed via OptionalG02AK::get_maxits.
Default: $0$
On entry: specifies the maximum number of iterations used for the majorization approach to solve penalized problems at each level of penalty parameter.
If ${\mathbf{maxits}}\le 0$, then a value of $100$ is used.
maxit – types::f77_integer
This optional parameter may be set using the method OptionalG02AK::maxit and accessed via OptionalG02AK::get_maxit.
Default: $0$
On entry: specifies the maximum number of iterations for the penalty method, i.e., the maximum level of penalty parameter.
If ${\mathbf{maxit}}\le 0$, then a value of $200$ is used.

1: $\mathbf{n}$
The order of the matrix $G$.
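The On exit state of g described above can be sketched directly. The helper below is hypothetical (it is not part of the nagcpp API); it only illustrates the documented preprocessing $G=\frac{1}{2}(\stackrel{~}{G}+{\stackrel{~}{G}}^{\mathrm{T}})$ with unit diagonal:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper (not part of the NAG API): reproduce the documented
// preprocessing of argument g, i.e. G = (G~ + G~^T)/2 with diagonal set to 1.
std::vector<std::vector<double>>
symmetrize_unit_diag(std::vector<std::vector<double>> g) {
    const std::size_t n = g.size();
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i + 1; j < n; ++j) {
            const double avg = 0.5 * (g[i][j] + g[j][i]);
            g[i][j] = g[j][i] = avg;  // symmetrize each off-diagonal pair
        }
        g[i][i] = 1.0;  // diagonal elements forced to 1.0, as documented
    }
    return g;
}
```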

## 6 Exceptions and Warnings

Errors or warnings detected by the function:
All errors and warnings have an associated numeric error code field, errorid, stored either as a member of the thrown exception object (see errorid), or as a member of opt.ifail, depending on how errors and warnings are being handled (see Error Handling for more details).
Raises: ErrorException
$\mathbf{errorid}=1$
On entry, ${\mathbf{n}}=⟨\mathit{value}⟩$.
Constraint: ${\mathbf{n}}>0$.
$\mathbf{errorid}=4$
On entry, ${\mathbf{rank}}=⟨\mathit{value}⟩$ and ${\mathbf{n}}=⟨\mathit{value}⟩$.
Constraint: $0<{\mathbf{rank}}\le {\mathbf{n}}$.
$\mathbf{errorid}=5$
The majorized penalty approach failed to converge within maxit levels of penalty iterations.
$\mathbf{errorid}=10601$
On entry, argument $⟨\mathit{\text{value}}⟩$ must be a $⟨\mathit{\text{value}}⟩$ x $⟨\mathit{\text{value}}⟩$ array.
Supplied argument has $⟨\mathit{\text{value}}⟩$ dimensions.
$\mathbf{errorid}=10601$
On entry, argument $⟨\mathit{\text{value}}⟩$ must be a $⟨\mathit{\text{value}}⟩$ x $⟨\mathit{\text{value}}⟩$ array.
Supplied argument was a $⟨\mathit{\text{value}}⟩$ x $⟨\mathit{\text{value}}⟩$ array.
$\mathbf{errorid}=10601$
On entry, argument $⟨\mathit{\text{value}}⟩$ must be a $⟨\mathit{\text{value}}⟩$ x $⟨\mathit{\text{value}}⟩$ array.
Not all of the sizes for the supplied array could be ascertained.
$\mathbf{errorid}=10602$
On entry, the raw data component of $⟨\mathit{\text{value}}⟩$ is null.
$\mathbf{errorid}=10603$
On entry, unable to ascertain a value for $⟨\mathit{\text{value}}⟩$.
$\mathbf{errorid}=10604$
On entry, the data in $⟨\mathit{\text{value}}⟩$ is stored in $⟨\mathit{\text{value}}⟩$ Major Order.
The data was expected to be in $⟨\mathit{\text{value}}⟩$ Major Order.
$\mathbf{errorid}=-99$
An unexpected error has been triggered by this routine.
$\mathbf{errorid}=-399$
Your licence key may have expired or may not have been installed correctly.
$\mathbf{errorid}=-999$
Dynamic memory allocation failed.
Raises: WarningException
$\mathbf{errorid}=6$
Convergence is limited by machine precision. The objective function
value or rank is decreasing very slowly.
The array returned in x may still be of interest.

## 7 Accuracy

The returned accuracy is controlled by errtol and ranktol, and is limited by machine precision.

## 8 Parallelism and Performance

Please see the description for the underlying computational routine in this section of the FL Interface documentation.

## 9 Further Comments

Arrays are internally allocated by corrmat_nearest_rank. The total size of these arrays is $11\times n+3\times {n}^{2}+\mathrm{max}\left(2\times {n}^{2}+6\times n+1,\ 120+9\times n\right)$ real elements and $5\times n+3$ integer elements, where $n$ is the order of $G$.
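The workspace sizes quoted above can be written as simple functions of $n$; a quick sketch for estimating the memory footprint before a call:

```cpp
#include <algorithm>
#include <cstdint>

// Internal workspace of corrmat_nearest_rank as a function of the order n:
// 11n + 3n^2 + max(2n^2 + 6n + 1, 120 + 9n) real elements, 5n + 3 integers.
std::int64_t real_workspace_elements(std::int64_t n) {
    return 11 * n + 3 * n * n +
           std::max(2 * n * n + 6 * n + 1, 120 + 9 * n);
}

std::int64_t integer_workspace_elements(std::int64_t n) {
    return 5 * n + 3;
}
```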

## 10 Example

This example finds the nearest correlation matrix with target rank $2$ to:
 $G = \begin{pmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 2 \end{pmatrix}$
| Source File | Data | Results |
| --- | --- | --- |
| ex_g02ak.cpp | None | ex_g02ak.r |

## 11 Algorithmic Details

corrmat_nearest_rank is aimed at solving the rank constrained nearest correlation matrix problem formulated as follows:
 $\min_{X\in {𝕊}^{n}} \frac{1}{2}{‖X-G‖}^{2} \quad \text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n, \quad X⪰0, \quad \mathrm{rank}(X)\le r,$ (1)
where $G$ is the input approximate correlation matrix, $r$ is the upper bound on the rank of the output nearest correlation matrix $X$, ${𝕊}^{n}$ is the space of $n$ by $n$ symmetric matrices, the expression $X⪰0$ means that all eigenvalues of $X$ are non-negative (i.e., $X$ is positive semidefinite), and $‖·‖$ denotes the Frobenius norm. Note that the rank constraint is an inequality: if the intrinsic rank of the input matrix is already less than or equal to $r$, the solver calculates a nearest correlation matrix without increasing the rank.
This section contains a short description of the algorithm used in corrmat_nearest_rank which is based on the Majorized Penalty Approach (MPA) by Gao and Sun (2010). Further details on accuracy and stopping criterion are also included.

### 11.1 Penalty approach

Let $\lambda \left(X\right)$ be the vector of the eigenvalues of $X$ (arranged in the non-increasing order) and $P\in {ℝ}^{n×n}$ be the corresponding matrix of orthonormal eigenvectors of $X$. The equivalent relationship for a positive semidefinite matrix $X$ is as follows.
 $\mathrm{rank}(X)\le r \quad\Longleftrightarrow\quad \lambda_{r+1}(X)+\cdots+\lambda_{n}(X)=0.$
Therefore, problem (1) can be equivalently written as
 $\min_{X\in {𝕊}^{n}} f(X) := \frac{1}{2}{‖X-G‖}^{2} \quad \text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n, \quad X⪰0, \quad \lambda_{r+1}(X)+\cdots+\lambda_{n}(X)=0.$ (2)
Introducing a penalty parameter $c>0$, we obtain the following penalized problem, which trades off the rank constraint against the least-squares distance $f(X)$:
 $\min_{X\in {𝕊}^{n}} f(X) + c\left(\lambda_{r+1}(X)+\cdots+\lambda_{n}(X)\right) \quad \text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n, \quad X⪰0.$ (3)
Define
 $p(X) := \sum_{i=r+1}^{n}\lambda_{i}(X) = \langle I,X\rangle - \sum_{i=1}^{r}\lambda_{i}(X),$
where $I$ is the identity matrix, $⟨·,·⟩$ is the standard trace inner product in ${𝕊}^{n}$. We can rewrite problem (3) as
 $\min_{X\in {𝕊}^{n}} f(X) + c\,p(X) \quad \text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n, \quad X⪰0.$ (4)
The penalty parameter $c$ is updated according to the progress of rank reduction. The input argument maxit controls the maximum number of updates of $c$.
The penalized problem (4) is not equivalent to the original problem (1) and the relationship can be described as follows. If the rank of the minimizer ${X}_{c}^{*}$ to problem (4) is not larger than $r$, then ${X}_{c}^{*}$ is a global optimal solution to problem (1), otherwise an $\epsilon$-optimal solution to problem (1) is guaranteed given that parameter $c$ satisfies some bound constraint. Please see Gao and Sun (2010) for more details.
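The penalty term $p(X)=\sum_{i=r+1}^{n}\lambda_{i}(X)$, which is also the quantity returned in rankerr, can be made concrete with any symmetric eigensolver. The sketch below uses a plain cyclic Jacobi iteration, adequate for small illustrative matrices; it is not the eigensolver used inside the library:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Eigenvalues of a small symmetric matrix via cyclic Jacobi rotations,
// returned sorted in non-increasing order.
std::vector<double> sym_eigenvalues(std::vector<std::vector<double>> a) {
    const std::size_t n = a.size();
    for (int sweep = 0; sweep < 100; ++sweep) {
        double off = 0.0;  // squared Frobenius norm of the off-diagonal part
        for (std::size_t p = 0; p < n; ++p)
            for (std::size_t q = p + 1; q < n; ++q) off += a[p][q] * a[p][q];
        if (off < 1e-24) break;
        for (std::size_t p = 0; p < n; ++p)
            for (std::size_t q = p + 1; q < n; ++q) {
                if (std::fabs(a[p][q]) < 1e-15) continue;
                // Rotation angle chosen to zero out a[p][q].
                const double theta = (a[q][q] - a[p][p]) / (2.0 * a[p][q]);
                const double t = (theta >= 0.0 ? 1.0 : -1.0) /
                    (std::fabs(theta) + std::sqrt(theta * theta + 1.0));
                const double c = 1.0 / std::sqrt(t * t + 1.0);
                const double s = t * c;
                for (std::size_t k = 0; k < n; ++k) {  // A <- A J
                    const double akp = a[k][p], akq = a[k][q];
                    a[k][p] = c * akp - s * akq;
                    a[k][q] = s * akp + c * akq;
                }
                for (std::size_t k = 0; k < n; ++k) {  // A <- J^T A
                    const double apk = a[p][k], aqk = a[q][k];
                    a[p][k] = c * apk - s * aqk;
                    a[q][k] = s * apk + c * aqk;
                }
            }
    }
    std::vector<double> lam(n);
    for (std::size_t i = 0; i < n; ++i) lam[i] = a[i][i];
    std::sort(lam.begin(), lam.end(), std::greater<double>());
    return lam;
}

// p(X): sum of the n-r smallest eigenvalues. For a PSD matrix this is
// zero exactly when rank(X) <= r.
double rank_excess(const std::vector<std::vector<double>>& x, std::size_t r) {
    const std::vector<double> lam = sym_eigenvalues(x);
    double s = 0.0;
    for (std::size_t i = r; i < lam.size(); ++i) s += lam[i];
    return s;
}
```

For the matrix $G$ of Section 10 (eigenvalues $2-2\cos(k\pi/5)$), the excess beyond rank $2$ is $4-\sqrt{5}\approx 1.764$, which is why a rank-2 projection is a genuine constraint there.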

### 11.2 Majorization approach

The focus now is on solving the penalty problem (4). Since $p\left(X\right)$ is nonsmooth and concave, we majorize it by the linear function defined by its subgradient. For given ${X}^{k}$ (the current iteration) and ${U}^{k}\in \partial p\left({X}^{k}\right)$, we have
 $p(X) \le m_{k}^{p}(X) := p(X^{k}) + \langle U^{k}, X-X^{k}\rangle \quad \forall X.$
Now, instead of solving the nonconvex problem (4), we solve the following convex model:
 $\min_{X\in {𝕊}^{n}} f(X) + c\,m_{k}^{p}(X) \quad \text{s.t.}\quad X_{ii}=1,\ i=1,\dots,n, \quad X⪰0.$ (5)
The framework combines ideas of a penalty method with majorization and can be described as follows:
**Majorized Penalty Algorithm (MPA)**
1. Select a penalty parameter $c>0$ and a feasible point $X^{0}$; set $k:=0$.
2. Solve subproblem (5) to get $X^{k+1}={\mathrm{argmin}}_{X}\left\{f(X)+c\,m_{k}^{p}(X) \mid X_{ii}=1,\ i=1,\dots,n,\ X⪰0\right\}$.
3. If $X^{k+1}=X^{k}$, stop; otherwise, update the penalty parameter $c$, set $k:=k+1$ and go to step 2.
Let $\overline{G}=G-c{U}^{k}$; then subproblem (5) is a nearest correlation matrix problem with input $\overline{G}$ and no rank constraint, which can be solved efficiently by g02aaf (no CPP interface). The input maxits controls the maximum number of iterations used in solving one problem (5) with fixed $c$.
For the convergence result of the algorithm, see Gao and Sun (2010). For other implementations as well as parameter setting strategies, see Qi and Yuan (2014) and Bai et al. (2015).
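The majorization inequality can be checked numerically in a simplified diagonal setting, where the eigenvalues of $X$ are just its diagonal entries and a subgradient $U^{k}\in\partial p(X^{k})$ may be taken as the 0/1 diagonal matrix selecting the $n-r$ smallest entries of $X^{k}$. This toy sketch (not the production algorithm) illustrates $p(X)\le m_{k}^{p}(X)$, with equality at $X=X^{k}$:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// p(X) for a diagonal matrix given by its diagonal d: sum of the
// n-r smallest entries (eigenvalues of a diagonal matrix are its entries).
double p_diag(std::vector<double> d, std::size_t r) {
    std::sort(d.begin(), d.end(), std::greater<double>());
    return std::accumulate(d.begin() + r, d.end(), 0.0);
}

// Linear majorant m_k^p(X) = p(X^k) + <U^k, X - X^k>. With U^k selecting
// the n-r smallest entries of X^k, <U^k, X^k> = p(X^k), so the majorant
// reduces to <U^k, X>.
double majorant_diag(const std::vector<double>& xk,
                     const std::vector<double>& x, std::size_t r) {
    const std::size_t n = xk.size();
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    // order indices by decreasing value of the current iterate xk
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t i, std::size_t j) { return xk[i] > xk[j]; });
    double m = 0.0;
    for (std::size_t i = r; i < n; ++i) m += x[idx[i]];
    return m;
}
```

Any fixed selection of $n-r$ entries of $X$ sums to at least the sum of its $n-r$ smallest entries, which is exactly why the linearization majorizes $p$.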

### 11.3 Stopping criteria

The MPA algorithm above is stopped when all the stopping criteria are satisfied to the requested accuracy; these are:
 $\frac{\left|2f(X^{k})-2f(X^{k-1})\right|}{\max\left\{100,\ 2f(X^{k-1})\right\}} \le {\epsilon}_{1}, \quad \text{(relative precision)}$
 $\sum_{i=r+1}^{n}\lambda_{i}(X^{k}) \le {\epsilon}_{2}. \quad \text{(rank feasibility)}$
Here ${\epsilon }_{1}$ and ${\epsilon }_{2}$ may be set via the arguments errtol and ranktol, respectively, to control the returned accuracy. The quantity above used to measure rank feasibility does not scale well with the magnitude of ${X}^{k}$. To rectify this drawback, a third stopping criterion is built in to control the proportion of the spectrum captured by the first $r$ eigenvalues of ${X}^{k}$:
 $\sum_{i=1}^{r}\lambda_{i}(X^{k}) \Big/ \sum_{i=1}^{n}\lambda_{i}(X^{k}) \ge 90\%.$
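Assuming the eigenvalues $\lambda_{i}(X^{k})$ are available sorted in non-increasing order, the three tests above translate directly into code; ${\epsilon}_{1}$ and ${\epsilon}_{2}$ correspond to errtol and ranktol:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Relative-precision test on successive objective values f(X^k), f(X^{k-1}).
bool relative_precision_ok(double f_k, double f_km1, double eps1) {
    return std::fabs(2.0 * f_k - 2.0 * f_km1) /
               std::max(100.0, 2.0 * f_km1) <= eps1;
}

// Rank-feasibility test: the eigenvalue tail beyond index r must be small.
// lam must be sorted in non-increasing order.
bool rank_feasible(const std::vector<double>& lam, std::size_t r,
                   double eps2) {
    const double tail = std::accumulate(lam.begin() + r, lam.end(), 0.0);
    return tail <= eps2;
}

// Scale-invariant check: the first r eigenvalues must account for at
// least 90% of the whole spectrum.
bool leading_share_ok(const std::vector<double>& lam, std::size_t r) {
    const double head = std::accumulate(lam.begin(), lam.begin() + r, 0.0);
    const double total = std::accumulate(lam.begin(), lam.end(), 0.0);
    return head >= 0.9 * total;
}
```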