
      Sparsistency and rates of convergence in large covariance matrix estimation


          Abstract

This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori in the covariance matrix, its inverse or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order \((s_n\log p_n/n)^{1/2}\), where \(s_n\) is the number of nonzero elements, \(p_n\) is the size of the covariance matrix and \(n\) is the sample size. This explicitly spells out that the contribution of high-dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter \(\lambda_n\) goes to 0 are made explicit and compared across different penalties. As a result, for the \(L_1\)-penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements must be small: \(s_n'=O(p_n)\) at most, among \(O(p_n^2)\) parameters, for estimating a sparse covariance or correlation matrix, a sparse precision or inverse correlation matrix, or a sparse Cholesky factor, where \(s_n'\) is the number of nonzero off-diagonal elements. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
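To make the penalties concrete, here is a minimal numerical sketch (not code from the paper) of the three penalty functions the abstract compares: the \(L_1\) penalty, the hard-thresholding penalty and the SCAD penalty of Fan and Li, applied elementwise to an entry \(\theta\). The function names are illustrative, and \(a = 3.7\) is the conventional default suggested for SCAD.

```python
import numpy as np

def l1_penalty(theta, lam):
    """L1 (lasso) penalty: lam * |theta|; grows linearly, so large entries stay biased."""
    return lam * np.abs(theta)

def hard_thresholding_penalty(theta, lam):
    """Hard-thresholding penalty: lam^2 - (|theta| - lam)^2 for |theta| < lam, else lam^2."""
    t = np.abs(theta)
    return lam**2 - np.where(t < lam, (t - lam)**2, 0.0)

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty: L1-like near zero, quadratic in between, constant beyond a*lam."""
    t = np.abs(theta)
    linear = lam * t                                          # |theta| <= lam
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < |theta| <= a*lam
    flat = lam**2 * (a + 1) / 2                               # |theta| > a*lam
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, flat))
```

Because the SCAD and hard-thresholding penalties level off for large \(|\theta|\), large nonzero entries incur essentially no extra bias; this is the intuition behind the abstract's remark that these penalties avoid the \(s_n'=O(p_n)\) restriction needed for the \(L_1\)-penalty.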


Most cited references (18)


          Sparse inverse covariance estimation with the graphical lasso.

We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm, the graphical lasso, that is remarkably fast: it solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
(A minimal usage sketch of this estimator follows the reference list.)

            Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties


              The Adaptive Lasso and Its Oracle Properties

              Hui Zou (2006)
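The graphical lasso estimator described in the first reference above is implemented in scikit-learn; a minimal usage sketch on synthetic data might look like the following (the regularization value alpha = 0.1 is an arbitrary illustrative choice, not one taken from either paper).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))   # 200 synthetic observations of 20 variables

# alpha plays the role of the L1 tuning parameter lambda_n
model = GraphicalLasso(alpha=0.1).fit(X)

precision = model.precision_         # sparse estimate of the inverse covariance matrix
covariance = model.covariance_       # implied covariance estimate
off_diag = precision[~np.eye(20, dtype=bool)]
print("nonzero off-diagonal entries:", np.count_nonzero(np.abs(off_diag) > 1e-8))
```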

                Author and article information

Journal: Annals of Statistics 2009, Vol. 37, No. 6B, 4254-4278
Article type: Article
DOI: 10.1214/09-AOS720
arXiv: 0711.3933
Record ID: eff55e2a-db60-465c-be51-176f8e438d80
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
History: 25 November 2007; 2009-11-20
MSC classification: 62F12 (Primary), 62J07 (Secondary)
Report number: IMS-AOS-AOS720
Publication note: Published at http://dx.doi.org/10.1214/09-AOS720 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Subject classification: math.ST stat.TH
