
An Alternative Graphical Lasso Algorithm for Precision Matrices (2403.12357v1)

Published 19 Mar 2024 in stat.CO and stat.ML

Abstract: The Graphical Lasso (GLasso) algorithm is fast and widely used for estimating sparse precision matrices (Friedman et al., 2008). Its central role in the literature on high-dimensional covariance estimation rivals that of Lasso regression for sparse estimation of the mean vector. Some mysteries regarding its optimization target, convergence, positive-definiteness and performance have been unearthed, resolved and presented in Mazumder and Hastie (2011), leading to a new/improved (dual-primal) DP-GLasso. Using a new and slightly different reparametrization of the last column of a precision matrix, we show that the regularized normal log-likelihood naturally decouples into a sum of two easy-to-minimize convex functions, one of which is a Lasso regression problem. This decomposition is the key to developing a transparent, simple iterative block coordinate descent algorithm for computing the GLasso updates with performance comparable to DP-GLasso. In particular, our algorithm has the precision matrix as its optimization target right at the outset, and retains all the favorable properties of the DP-GLasso algorithm.
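The abstract's central observation is that, after reparametrizing the last column of the precision matrix, each block update reduces to a Lasso regression solved inside a block coordinate descent loop. As a rough illustration of the inner subroutine such schemes rely on, here is a minimal coordinate-descent Lasso solver; this is a generic sketch of the standard technique, not the authors' exact reparametrization or update, and the function names and toy data are invented for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # squared column norms ||X_j||^2
    r = y.astype(float).copy()      # residual y - X b (b starts at zero)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]     # remove coordinate j's contribution
            rho = X[:, j] @ r       # partial correlation with residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]     # restore with the updated coefficient
    return b

# Toy check with an orthonormal design: the Lasso solution is the
# soft-thresholded least-squares solution.
X = np.eye(3)
y = np.array([3.0, -0.5, 1.5])
beta = lasso_cd(X, y, lam=1.0)
# soft_threshold(y, 1.0) = [2.0, 0.0, 0.5]
```

In a GLasso-style algorithm, a solver of this kind is applied repeatedly, once per row/column block of the precision matrix, with `X` and `y` built from the current partitioned estimate rather than raw data.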

References (7)
  1. Alon, Uri, Naama Barkai, Daniel A. Notterman, Kenneth W. Gish, Suzanne E. Ybarra, Douglas Michael Mach, and Arnold J. Levine (1999), "Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays." Proceedings of the National Academy of Sciences of the United States of America, 96(12), 6745–6750.
  2. Banerjee, O., L. E. Ghaoui, and A. d'Aspremont (2008), "Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data." Journal of Machine Learning Research, 9, 485–516.
  3. Friedman, J., T. Hastie, and R. Tibshirani (2008), "Sparse inverse covariance estimation with the graphical lasso." Biostatistics, 9, 432–441.
  4. Mazumder, Rahul and Trevor Hastie (2012), "Exact covariance thresholding into connected components for large-scale graphical lasso." Journal of Machine Learning Research, 13, 781–794.
  5. Mazumder, Rahul and Trevor J. Hastie (2011), "The graphical lasso: New insights and alternatives." Electronic Journal of Statistics, 6, 2125–2149.
  6. Tseng, Paul (2001), "Convergence of a block coordinate descent method for nondifferentiable minimization." Journal of Optimization Theory and Applications, 109, 475–494.
  7. Wang, Hao (2014), "Coordinate descent algorithm for covariance graphical lasso." Statistics and Computing, 24, 521–529.
