# Rate-optimal posterior contraction for sparse PCA

###### Abstract

Principal component analysis (PCA) is one of the most widely used statistical tools for recovering a low-rank structure from data. In high-dimensional settings, the leading eigenvector of the sample covariance can be nearly orthogonal to the true eigenvector. A sparse structure is then commonly assumed along with the low-rank structure. Recently, minimax estimation rates for sparse PCA were established under various settings. On the other hand, Bayesian methods are becoming more and more popular in high-dimensional estimation, but little work connects frequentist properties with Bayesian methodologies for high-dimensional data analysis. In this paper, we propose a prior for the sparse PCA problem and analyze its theoretical properties. The prior adapts to both sparsity and rank. The posterior distribution is shown to contract to the truth at the optimal minimax rate. In addition, a computationally efficient strategy for the rank-one case is discussed.

doi: 10.1214/14-AOS1268. The Annals of Statistics, 2015, Vol. 43, No. 2, 785–818.

Bayes sparse PCA

Chao Gao and Harrison H. Zhou

AMS subject classifications: 62H25, 62G05. Keywords: Principal component analysis, Bayesian estimation, posterior contraction.

## 1 Introduction

Principal component analysis is a classical statistical tool used to project data into a lower-dimensional space while maximizing the variance [Jolliffe (1986)]. When the sample size is small compared to the number of variables, Johnstone and Lu (2009) show that standard PCA may fail in the sense that the leading eigenvector of the sample covariance can be nearly orthogonal to the true eigenvector. Therefore, the recovery of principal components in the high-dimensional setting requires extra structural assumptions. Sparse PCA, which assumes that the leading eigenvectors or eigen-subspace depend on only a relatively small number of variables, arises in a wide range of applications. Estimation methods for sparse PCA problems are proposed in Zou, Hastie and Tibshirani (2006) and d'Aspremont et al. (2007). Amini and Wainwright (2009) and Ma (2013) obtain rates of convergence of sparse PCA methods under the spiked covariance model proposed in Johnstone and Lu (2009). Minimax rates for sparse PCA problems are established by Birnbaum et al. (2013), Cai, Ma and Wu (2013, 2014) and Vu and Lei (2013) under various settings.
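The failure of standard PCA described above can be seen in a small simulation (ours, not from the paper): with a single spike of unit strength and many more variables than samples, the top sample eigenvector carries almost no information about the truth. All numbers below (dimensions, spike strength, support size) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 2000, 1.0            # few samples, many variables

# True leading eigenvector: sparse, supported on 10 coordinates.
v = np.zeros(p)
v[:10] = 1.0 / np.sqrt(10)

# Sample X_i ~ N(0, I_p + lam * v v^T) via the latent representation.
u = rng.standard_normal(n)           # latent factors
Z = rng.standard_normal((n, p))      # isotropic noise
X = np.sqrt(lam) * np.outer(u, v) + Z

# Leading eigenvector of the sample covariance.
S = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(S)  # ascending order
v_hat = eigvecs[:, -1]

overlap = abs(v_hat @ v)
print(f"|<v_hat, v>| = {overlap:.3f}")  # small: nearly orthogonal to the truth
```

With these sizes the spike strength is far below the detection threshold, so the inner product between the estimated and true eigenvectors is close to zero.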

Bayesian methods have been very popular in high-dimensional estimation, but there is little work connecting frequentist properties and Bayesian methodologies for high-dimensional models. This paper serves as a bridge between the frequentist and Bayesian worlds by addressing the following question for high-dimensional PCA: Is it possible for a Bayes procedure to optimally recover the leading principal components, in the sense that the posterior distribution contracts to the truth at a minimax rate? The optimal posterior contraction rate immediately implies that the posterior mean attains the optimal convergence rate as a point estimator.

In this paper we consider a spiked covariance model with an unknown, growing rank. We propose a sparse prior on the covariance matrix with a spiked structure and show that the induced posterior distribution contracts to the truth at the optimal minimax rate. The assumptions are nearly identical to those in Vu and Lei (2013), where the rank of the principal space and the number of nonzero entries of each spike are both allowed to grow with the sample size, as long as the minimax rate converges to zero. In addition, we prove that the posterior distribution consistently estimates the rank. To the best of our knowledge, this is the first work in which a Bayes procedure adapts to both the sparsity and the rank.

There are two key ingredients in our approach. The first is the design of the prior. We propose a prior that imposes a spiked structure on a random covariance matrix, under which the spikes are sparse and mutually orthogonal. This leads to sufficient prior concentration together with the sparsity property. In addition, each spike has a bounded norm under the prior distribution, so that there is a fixed eigengap between the spikes and the noise, which eventually leads to consistent rank estimation. The second ingredient is the construction of appropriate tests in the proof of posterior contraction under the spectral and Frobenius norms. We first construct a test whose alternative hypothesis lies outside a neighborhood of the true covariance under the spectral norm. For the covariance matrices inside that neighborhood, we propose a delicate way to divide the region into many small pieces, on each of which a likelihood ratio test is applicable. A final test is then constructed by combining these small tests. The errors are controlled by carefully calculating the covering number under the metric used to measure the distance between subspaces.

The theoretical tools we use for this problem follow the recent line of developments in Bayesian nonparametrics pioneered by Barron (1988) and Barron, Schervish and Wasserman (1999), which generalize the testing theory of Le Cam (1973) and Schwartz (1965) to construct an exponentially consistent test on the essential support of a prior to prove posterior consistency. The idea was later extended by Ghosal, Ghosh and van der Vaart (2000) and Shen and Wasserman (2001) to prove rates of convergence of the posterior distribution. Compared to Bayesian nonparametrics, little work has been done on Bayesian high-dimensional estimation, especially in the sparse setting. Castillo and van der Vaart (2012) is the first work in this area; they prove rates of convergence in sparse vector estimation for a large class of priors.

The works most closely related to this paper are Banerjee and Ghosal (2014) and Pati et al. (2014). Banerjee and Ghosal (2014) study rates of convergence for Bayesian precision matrix estimation by considering a conjugate prior. But as discussed in Birnbaum et al. (2013), estimation of a sparse or bandable covariance/precision matrix is different from estimation of a sparse principal subspace, and the optimal rates of convergence can differ. Pati et al. (2014) study Bayesian covariance matrix estimation for a sparse factor model, which is similar to the spiked covariance model in the PCA problem. Instead of estimating the principal subspace as in the PCA problem, they consider estimating the whole covariance matrix. The posterior rate of convergence they obtain is not optimal, especially when the rank is allowed to grow with the sample size.

The paper is organized as follows. In Section 2, we introduce the sparse PCA problem and define the parameter space. In Section 3, we propose a prior and state the main result of the posterior convergence. Section 4 introduces an algorithm to compute the posterior mean in the rank-one case along with other discussions. All the proofs are presented in Section 5, with some technical results given in the supplementary material [Gao and Zhou (2015)].

## 2 The sparse PCA

Let X_1, …, X_n be i.i.d. observations from N(0, Σ), with Σ being a p × p covariance matrix with the spiked structure

Σ = λ_1 v_1 v_1^T + ⋯ + λ_r v_r v_r^T + I_p,

where λ_1 ≥ ⋯ ≥ λ_r > 0 and the v_j are orthonormal. It is easy to see that v_1, …, v_r are the first r eigenvectors of Σ, with corresponding eigenvalues λ_1 + 1, …, λ_r + 1. The remaining eigenvalues are all 1. The spiked covariance was proposed by Johnstone and Lu (2009) to model data with a sparse and low-rank structure. An equivalent representation of the data is

X_i = V Λ^{1/2} U_i + Z_i,  i = 1, …, n,  (1)

where U_i ~ N(0, I_r) and Z_i ~ N(0, I_p) are independent. The matrix V is defined as V = [v_1, …, v_r] and Λ = diag(λ_1, …, λ_r). In this latent variable representation, V Λ^{1/2} U_i models the signal part, which lives in an r-dimensional subspace, and Z_i is the noise part, which has the same variance in every direction. Since the r-dimensional subspace is determined by its projection matrix V V^T, the goal here is to recover the principal subspace by estimating its projection matrix under the Frobenius loss.
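As a quick sanity check on the eigenstructure just described, the following sketch (with arbitrary illustrative values for the dimension, rank and spike strengths) builds a spiked covariance and confirms that its leading eigenvalues are the spike strengths plus one, with the remaining eigenvalues equal to one.

```python
import numpy as np

p, r = 12, 3
lam = np.array([4.0, 3.0, 2.0])        # spike strengths (assumed values)

# Orthonormal sparse spikes: QR of a matrix supported on the first 6 rows,
# so the resulting columns of V are supported there too.
rng = np.random.default_rng(1)
A = np.zeros((p, r))
A[:6, :] = rng.standard_normal((6, r))
V, _ = np.linalg.qr(A)

Sigma = V @ np.diag(lam) @ V.T + np.eye(p)

eigvals = np.sort(np.linalg.eigvalsh(Sigma))[::-1]   # descending
print(eigvals[:4])   # leading eigenvalues: lam + 1, then 1's
```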

In a high-dimensional setting, extra structural assumptions are needed for consistent estimation. We assume that the first r eigenvectors are sparse, in the sense that each of them depends on only a few coordinates among the total number p. Define S_j = supp(v_j) for j = 1, …, r, the support of the jth eigenvector. We assume sparsity on each spike by bounding the cardinality of each support by s. The parameter space for the covariance matrix is

where the constant bounding the spiked eigenvalues λ_j is treated as known in this paper. The sparsity we consider matches the column sparsity in Vu and Lei (2013). We require both upper and lower bounds for the λ_j. The lower bound implies an eigengap, which leads to rank adaptation and subspace estimation, while the upper bound controls the spectral norm of Σ, which leads to estimation of the whole covariance matrix. Vu and Lei (2013) prove that under the assumptions

the minimax rate¹ of principal subspace estimation is

¹ The minimax rate is obtained by combining Theorem 3.5 and Corollary 3.2 in Vu and Lei (2013). The upper bound is a special case of their Corollary 3.2, because our parameter space is a subset of theirs. The lower bound holds by observing that the least favorable class in the proof of their Theorem 3.5 is a subset of our parameter space.

The goal of this paper is to prove an alternative result, adaptive Bayesian estimation, by designing an appropriate prior Π, such that

E_0 Π( ‖V V^T − V_0 V_0^T‖_F^2 > M ε_n^2 | X_1, …, X_n ) → 0,  (2)

where ε_n^2 is the minimax rate and M is a sufficiently large constant. The rate ε_n^2 tends to zero as n → ∞. The posterior contraction (2) leads to a risk bound for a point estimator. Let Ê denote the expectation under the posterior distribution Π(· | X_1, …, X_n), and consider the posterior mean of the subspace projection matrix, P̂ = Ê(V V^T). Its risk upper bound is given in the following proposition. We prove the proposition in the supplementary material [Gao and Zhou (2015)].

###### Proposition 2.1

Equation (2) implies

In this paper, the rate ε_n^2 in (2) is of order n^{−δ} for some δ > 0. Thus the exponentially small remainder term is negligible, the dominating term in the risk bound is of order ε_n^2, and the posterior mean is a rate-optimal point estimator.

The matrix P̂ may not itself be a projection matrix. However, it is still a valid estimator of the true projection matrix V_0 V_0^T. A projection matrix estimator can be obtained by projecting the posterior mean onto the space of projection matrices under the Frobenius norm. Denote this projection by P̃. Since the truth is itself a projection matrix, the triangle inequality gives ‖P̃ − V_0 V_0^T‖_F ≤ 2 ‖P̂ − V_0 V_0^T‖_F, so P̃ attains the same rate.
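The projection step can be sketched as follows. It is a standard fact (this is our illustration, not code from the paper) that the Frobenius-nearest projection matrix of a fixed rank r to a symmetric matrix is obtained by keeping its top r eigenvectors; the function name and the test matrix below are our own.

```python
import numpy as np

def nearest_projection(B: np.ndarray, r: int) -> np.ndarray:
    """Frobenius-nearest rank-r orthogonal projection matrix to symmetric B:
    keep the top-r eigenvectors of B (standard Ky Fan argument)."""
    eigvals, eigvecs = np.linalg.eigh(B)   # ascending eigenvalue order
    U = eigvecs[:, -r:]                    # top-r eigenvectors
    return U @ U.T

rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
B = (M + M.T) / 2                          # a symmetric "posterior mean"
P = nearest_projection(B, r=2)

print(np.allclose(P @ P, P), round(np.trace(P), 6))  # idempotent, trace = r
```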

### 2.1 Notation

In this paper, we use Σ = U U^T + I_p to denote a spiked covariance matrix, where U is a p × r matrix with orthogonal columns. We use S_j to denote the support of the jth column of U for each j. Define

Then the completed p × p matrix is unitary, and Σ has an alternative representation in terms of it. We use P_Σ to denote the probability or the expectation under the multivariate normal distribution N(0, Σ), and P_Σ^n to denote the n-fold product measure. The symbol P stands for a generic probability whose distribution will be made clear from the context. Correspondingly, we use a subscript 0 (e.g., Σ_0, V_0) to denote the true versions of these quantities.

For a matrix A, we use ‖A‖ to denote its spectral norm and ‖A‖_F the Frobenius norm. We also work with the space of p × r matrices with orthonormal columns, together with a distance between two such matrices defined up to a diagonal orthogonal matrix, since the columns are identified only up to sign. We omit the subscript in this distance whenever it is clear from the context. The number ε_n stands for the minimax rate throughout the paper.

## 3 The prior and the main results

We propose a prior from which we can sample a random covariance matrix with the structure Σ = U U^T + I_p, where U is a p × r matrix. The prior is described as follows:

(1) for each column k, we randomly choose the support S_k by letting the indicator of each coordinate j ∈ {1, …, p} follow an independent Bernoulli distribution with a small parameter;

(2) given the supports, we sample a matrix from the distribution Q to be specified below, and embed it into the rows indexed by the supports to form U. The matrix U (Figure 1) may contain some zero columns under the above sampling procedure. With slight abuse of notation, we gather the nonzero columns to form the matrix U, with S_k being the support of the kth column. Note that U can be identified with a smaller matrix whose rows are indexed by the union of the supports. After specifying the distribution Q, the number of nonzero columns is also the rank of U U^T.

The Bernoulli parameter is a fixed quantity in the prior, chosen so that the cardinality of each support is small with high probability under the prior distribution.

The number of columns sampled in the prior is an upper bound on the rank r. In this paper, we assume that the true rank grows no faster than the sparsity, so the range covered by the prior includes the range of the true rank.

We need to define an auxiliary distribution to help introduce Q. Let w follow a standard multivariate normal distribution and let t follow the uniform distribution on a fixed interval bounded away from zero. Then Q is defined to be the distribution of

u = t · w/‖w‖.  (3)

Now we are ready to specify the random matrix prior, which induces a distribution over the matrix U. For any vector v and any subset S, we use v_S to denote the restriction of v to the coordinates in S. We describe the prior through a sequential sampling procedure. If the first support is empty, we set the first spike to zero. Otherwise, we sample its nonzero part from Q and let

Suppose we have already obtained the first k − 1 spikes, and we sample the kth spike conditioning on them. The prior distribution of the kth spike depends on the previous spikes through their values on the index set S_k. For simplicity, denote

(4)

Define the index set in (4) accordingly. If S_k is empty, we set the kth spike to zero. Otherwise, let P be the projection matrix onto the orthogonal complement of the subspace spanned by the previous spikes restricted to this index set. There is a bijective linear isometry induced by this projection such that

Remember that a linear isometry preserves norms. We sample from Q, map the sample through the isometry, embed the result into the coordinates in S_k, and set all other coordinates to zero. Then we have specified the conditional distribution of the kth spike. Repeating this step, we obtain all the spikes, and the prior on the random covariance matrix is fully specified.

After collecting the nonzero spikes, we observe that the prior explicitly samples a spiked covariance matrix whose number of spikes equals the number of nonzero columns. The prior imposes orthogonality on the spikes, since each spike is sampled in the orthogonal complement of the space spanned by the previous ones. Therefore, the normalized spikes are eigenvectors of Σ. For each eigenvector, its support is contained in the corresponding S_k, whose cardinality is small under the prior distribution. Moreover, the first eigenvalues are all bounded from below and above, because the norm of each spike lies in a fixed interval under Q.
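A simplified sketch of the sampling scheme may help fix ideas. The Bernoulli parameter, the norm interval [1, 2], and the use of Gram–Schmidt in place of the exact isometry construction described above are all illustrative simplifications, not the paper's exact prior.

```python
import numpy as np

def sample_spiked_prior(p=50, r_bar=5, bern=0.1, rng=None):
    """Sketch of the prior: Bernoulli supports, spikes of bounded norm,
    successive spikes orthogonalized against earlier ones (Gram-Schmidt
    stands in for the isometry construction in the text)."""
    if rng is None:
        rng = np.random.default_rng()
    spikes = []
    for _ in range(r_bar):
        S = rng.random(p) < bern               # Bernoulli support
        if S.sum() == 0:
            continue                           # empty support: zero column
        w = np.zeros(p)
        w[S] = rng.standard_normal(S.sum())
        for v in spikes:                       # orthogonalize vs earlier spikes
            w -= (v @ w) / (v @ v) * v
        if np.linalg.norm(w) < 1e-12:
            continue
        t = rng.uniform(1.0, 2.0)              # spike norm in a fixed interval
        spikes.append(t * w / np.linalg.norm(w))
    U = np.column_stack(spikes) if spikes else np.zeros((p, 0))
    return np.eye(p) + U @ U.T                 # spiked covariance I + U U^T

Sigma = sample_spiked_prior(rng=np.random.default_rng(3))
print(Sigma.shape)
```

Because the spikes are exactly orthogonal with norms in [1, 2], every spiked eigenvalue of the sampled matrix lies in [2, 5], giving the fixed eigengap over the noise level 1.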

Given the data X_1, …, X_n, the posterior distribution is defined as

Π(B | X_1, …, X_n) = ∫_B ∏_{i=1}^n p_Σ(X_i) dΠ(Σ) / ∫ ∏_{i=1}^n p_Σ(X_i) dΠ(Σ),  (5)

for any measurable set B, where p_Σ denotes the density of N(0, Σ). The following theorem is the main result of this paper: the posterior distribution contracts to the truth at the optimal minimax rate.

###### Theorem 3.1

Assume , and for some constant . Then there exists , such that for any , we have

for some constant only depending on .

Note that we have obtained the optimal posterior contraction rate under a "mildly growing rank" regime, which is also assumed in Vu and Lei (2013) in order to match the upper and lower bounds for minimax estimation. The first assumption is a convenient but mild condition, common in high-dimensional statistics, for proving rates of convergence in expectation rather than with high probability; see Cai, Liu and Luo (2011), Paul and Johnstone (2012), etc. The posterior contraction result implies the same rate of convergence in expectation for a point estimator (Corollary 3.1), and thus we need such an assumption to hold. Additionally, we assume that the level of the rank is not above the level of sparsity. This assumption reflects the fact that V can only be identified up to a unitary transformation: V V^T = (V Q)(V Q)^T for any r × r unitary matrix Q, and each row of V Q may have as many as r nonzero entries, so row sparsity is only identifiable at the level of the rank.

As shown in Proposition 2.1, we can use the posterior mean as a point estimator to achieve the minimax optimal rate of convergence.

###### Corollary 3.1

The result follows from the fact that the remainder term in Proposition 2.1 is exponentially small; hence, it is dominated by the main term of order ε_n^2.

## 4 Discussion

In Section 4.1, we state a result on posterior contraction rate under the spectral norm. A computationally efficient algorithm is developed in Section 4.2 for the rank-one case. In Section 4.3, we discuss the possibility of using a simpler prior for sparse PCA.

### 4.1 Posterior convergence under spectral norm

In proving Theorem 3.1, some by-products serve as intermediate steps. The following theorem says that the posterior distribution concentrates on the true covariance matrix under the spectral norm, and that the subspace projection matrix concentrates on the true subspace projection matrix under the spectral norm. In addition, the posterior distribution consistently estimates the rank of the true subspace. The theorem holds under a slightly weaker set of assumptions than Theorem 3.1.

###### Theorem 4.1

Consider the same prior and rate as in Theorem 3.1. Assume , and for some constant . Then there exists , such that for any , we have

(6)

(7)

for some constant only depending on .

It is not practical to assume that the constant in the prior is known in Theorems 3.1 and 4.1. To weaken the assumption, we can replace the prior in (3) by sampling t from an interval whose upper end is a sequence that grows slowly to infinity with n. Then the conclusions of the two theorems still hold without knowing the constant.

The posterior rate of convergence (6) for estimating the whole covariance matrix under the spectral norm does not require the lower-bound assumption in the definition of the parameter space. To remove this assumption, we need a slightly different prior, with (3) modified by sampling t from an interval that is not bounded away from zero. However, such a modification may not lead to rank adaptation (7), due to the lack of an eigengap, which is critical for establishing the result in Theorem 3.1.

Results (6) and (7) together imply posterior convergence of the whole covariance matrix under the Frobenius norm. This is because, once the rank is controlled, the difference Σ − Σ_0 has rank bounded by a multiple of r, so its Frobenius norm is at most a multiple of √r times its spectral norm. Hence the Frobenius-norm convergence rate is √r times the spectral-norm rate.
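The rank argument behind this step can be spelled out explicitly. Assuming both matrices have at most r spikes (so the identity parts cancel in the difference):

```latex
\operatorname{rank}(\Sigma - \Sigma_0) \le 2r
\quad\Longrightarrow\quad
\|\Sigma - \Sigma_0\|_F^2
  = \sum_{j} \sigma_j(\Sigma - \Sigma_0)^2
  \le 2r\,\sigma_{\max}(\Sigma - \Sigma_0)^2
  = 2r\,\|\Sigma - \Sigma_0\|^2,
```

so the Frobenius norm exceeds the spectral norm by at most the factor √(2r).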

Pati et al. (2014) consider estimating the whole covariance matrix under the spectral norm in a sparse factor model. Under their assumptions, they obtain a posterior convergence rate that is slower than the rate we obtain under the same loss function.

Though an improvement over the result of Pati et al. (2014), whether our rate is the optimal rate of convergence for these two loss functions is still an open problem. To the best of our knowledge, the only minimax result addressing these loss functions for the sparse PCA problem is Cai, Ma and Wu (2014). However, they consider a different sparsity class, defined in terms of the number of nonzero rows of the eigenvector matrix.

Under the current setting, the results of Cai, Ma and Wu (2014) can be written as

Observe the relation that

Hence when the rank is bounded, the minimax rates for this class under both loss functions match, and we claim that the posterior convergence rate obtained in Theorem 4.1 is optimal in that regime. For a growing rank, it misses at most a factor depending on the rank.

### 4.2 A computational strategy of the rank-one case

Bayesian procedures using sparse priors are usually harder to compute because the sampling procedure needs to mix over all possible subsets. Castillo and van der Vaart (2012) develop an efficient algorithm for computing the exact posterior mean in the setting of Bayesian sparse vector estimation. They exploit the combinatorial nature of the posterior mean formula and show that it is sufficient to compute the coefficients of certain polynomials. In this section, we use their idea to develop an algorithm for computing an approximate posterior mean for the single spike model. In this rank-one case, there is no need for the prior to adapt to the rank, and we do not need the prior to constrain the norm of the eigenvector as in (3). Thus we use the following simple prior on the single spiked covariance:

(1) sample a cardinality according to a prior distribution supported on {0, 1, …, p};

(2) given the cardinality, sample a support S of that cardinality uniformly from all subsets of that size;

(3) given the support S, sample the entries of v on S independently from a normal distribution, set the remaining entries to zero, and let the covariance matrix be I + v v^T.

We choose the cardinality prior to decay geometrically, at a rate governed by a constant. We let ε_n denote the minimax rate when r = 1. The posterior distribution induced by the above prior has the following desired property:

###### Theorem 4.2

Assume and for some constant . Then there exists , such that for any , we have

for some constant only depending on .

Note that the loss function here is the squared ℓ_2 norm on the spike itself, which is stronger than the projection loss used in Theorem 3.1. The theorem above is proved in the supplementary material [Gao and Zhou (2015)]. We use the posterior mean to estimate the spike v.

We present a way of computing the posterior mean. In the rank-one situation, representation (1) can be written as

X_i = u_i v + Z_i,  i = 1, …, n,  (8)

with u_i following i.i.d. N(0, 1) and Z_i following i.i.d. N(0, I_p) for all i = 1, …, n. Representation (8) resembles the Gaussian sequence model considered in Castillo and van der Vaart (2012). Following their idea, the jth coordinate of the posterior mean can be written as

where φ denotes the standard normal density. By Fubini's theorem, we have

where, for each candidate support,

by the definition of the prior. In the same way,

Define

Then we may rewrite the numerator and the denominator as

The critical fact observed by Castillo and van der Vaart (2012) is that the sum over all supports of a fixed cardinality is the coefficient of the corresponding power of the polynomial

and similarly the other sum is a coefficient of the polynomial

For a given set of weights, the coefficients can be computed efficiently. In the Gaussian sequence model, there is no extra randomness from latent factors, and the posterior mean can be computed exactly by finding the coefficients of the above polynomials. In the PCA case, we propose an approximation by first drawing i.i.d. Monte Carlo samples of the latent factors from the standard normal distribution and then computing

(9)

(10)

One set of coefficients takes at most O(p²) steps to compute, by multiplying the linear factors sequentially. Thus the total computational complexity is polynomial in p and in the number of Monte Carlo draws used in (9) and (10).
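The combinatorial identity underlying the polynomial trick is that a sum over all supports of a fixed cardinality of a product of per-coordinate weights is an elementary symmetric polynomial, that is, a coefficient of the product of linear factors. The weights `a` below are arbitrary placeholders, not the actual posterior quantities; the sketch only verifies the identity and its sequential-convolution computation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
p = 8
a = rng.uniform(0.1, 2.0, size=p)   # placeholder per-coordinate weights

# Coefficients of prod_i (1 + a_i x) via sequential convolution: O(p^2).
# Coefficient lists are in ascending powers of x.
coef = np.array([1.0])
for ai in a:
    coef = np.convolve(coef, np.array([1.0, ai]))
# Now coef[k] = sum over supports S with |S| = k of prod_{i in S} a_i.

# Brute-force check against explicit subset sums.
for k in range(p + 1):
    brute = sum(np.prod(a[list(S)]) for S in combinations(range(p), k))
    assert np.isclose(coef[k], brute)

print("coefficients match subset sums for all cardinalities")
```

For small p the brute-force sum is feasible; the polynomial route is what makes the computation tractable for large p.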

The above strategy can be directly generalized to the multiple-rank case. However, it only works for the following prior, which does not allow rank adaptation. To be specific, we assume the rank r is known. Then the third step of the prior is modified as follows:

(3) given the support S, sample a matrix with r columns whose entries on the rows indexed by S are i.i.d. standard normal, and let the matrix V be defined by placing this matrix on those rows and zeros elsewhere.

The covariance matrix is then I + V V^T.

Note that instead of sampling an individual support for each column of V, we sample a common support for all columns. When the rank is of a smaller order than the sparsity, this is not a problem, because the common support is only moderately larger than the individual ones. The theoretical justification of this prior is stated in Theorem 4.3. Denote the jth row of V by V_j. Then the posterior mean has the formula

where, for each candidate support, we have

and a similar formula for the denominator. Note that the only difference from the rank-one case is that the inner product now involves r-dimensional latent factors, each an r-dimensional standard Gaussian vector. Thus we can apply the same Monte Carlo approximation as in (9) and (10), just as in the rank-one case.

In addition to our method, there are other methods proposed in the literature. A Gaussian shrinkage prior for Bayesian PCA has been developed by Bishop (1999a, 1999b) in the classical setting, but it is not appropriate for sparse PCA. More general shrinkage priors have been discussed in Polson and Scott (2011) and Bhattacharya et al. (2012) for high-dimensional mean vector estimation. One can extend this framework to sparse PCA and develop Gibbs sampling by taking advantage of the latent representation (1). We refer to Pati et al. (2014) and van der Pas, Kleijn and van der Vaart (2014) for theoretical justifications of shrinkage priors.

### 4.3 Further remarks on the prior

The prior we proposed in Section 3 on the random covariance matrix imposes orthogonality on the columns of U. The orthogonality constraint is convenient for creating an eigengap between the spikes and the noise, which leads to the rank adaptation (7). One may wonder whether a simpler prior, such as the one proposed in Section 4.2 without the orthogonality constraint, would also lead to the desired eigengap.

The answer is negative under the current proof technique. Let us consider the simplest case where the supports are known. When the rank is not known, it is necessary to sample it according to some prior distribution. Then, after sampling the rank, we only need to sample the submatrix of the spike matrix with rows in the known support; denote this submatrix by G and put i.i.d. Gaussian entries on it. The relevant spiked eigenvalues of the resulting covariance matrix are the squared singular values of G, so a positive eigengap requires control of the singular values of a Gaussian matrix. By nonasymptotic random matrix theory [Vershynin (2010)],

P( s_max(G) > √s + √r + t ) ≤ 2 exp(−t²/2),  (11)

for any t > 0, where s_max(G) denotes the largest singular value of the s × r matrix G. This Gaussian tail cannot decay faster than exp(−t²/2). For an eigengap to hold under the posterior distribution, the classical Bayesian nonparametric theory [see Barron (1999) and Castillo (2008)] requires a tail that is exponentially small in n ε_n². Hence the random matrix theory tail in (11) is not enough for our purpose, and the current proof technique does not lead to the desired posterior convergence for this simpler prior. One may consider enlarging the support in the prior distribution so that the tail probability in (11) becomes small enough. However, it can then be shown that the prior does not have sufficient mass around the truth.
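The tail in (11) is the standard Gaussian operator-norm bound [Vershynin (2010)]: for an s × r matrix G with i.i.d. standard normal entries, the largest singular value exceeds √s + √r + t with probability at most 2exp(−t²/2). A quick empirical check (our illustration, with arbitrary values of s, r and t):

```python
import numpy as np

rng = np.random.default_rng(5)
s, r, t = 100, 5, 2.0
threshold = np.sqrt(s) + np.sqrt(r) + t

trials = 200
exceed = 0
for _ in range(trials):
    G = rng.standard_normal((s, r))
    if np.linalg.norm(G, 2) > threshold:   # largest singular value
        exceed += 1

bound = 2 * np.exp(-t**2 / 2)
print(f"empirical exceedance {exceed / trials:.3f} vs bound {bound:.3f}")
```

In practice the empirical exceedance is far below the bound, but the bound itself decays only at the Gaussian rate in t, which is the obstruction discussed above.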

Nonetheless, if we assume that the rank r is known, then rank adaptation is not needed. In this case, the prior in Section 4.2 leads to the desired posterior rate of convergence. Recall that ε_n denotes the minimax rate.

###### Theorem 4.3

Assume , and for some constant . Then there exists