L0 Sparse Inverse Covariance Estimation

(1408.0850)
Published Aug 5, 2014 in stat.ML

Abstract

Recently, there has been a focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex $l_1$ norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting "norm" is the non-convex $l_0$ penalty, but its lack of convexity has deterred its use in sparse maximum likelihood estimation. In this paper we consider non-convex $l_0$-penalized log-likelihood inverse covariance estimation and present a novel cyclic descent algorithm for its optimization. Convergence to a local minimizer is proved, which is highly non-trivial, and we demonstrate via simulations the reduced bias and superior quality of the $l_0$ penalty as compared to the $l_1$ penalty.
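For context, the problem described in the abstract typically takes the following form; this is a sketch based on the standard penalized Gaussian log-likelihood, and the paper's exact formulation and notation may differ. Given a sample covariance matrix $S$, the precision matrix $\Theta \succ 0$ is estimated by

$$\hat{\Theta} = \arg\min_{\Theta \succ 0} \; -\log\det\Theta + \mathrm{tr}(S\Theta) + \lambda \, \|\Theta\|_0,$$

where $\|\Theta\|_0$ counts the nonzero (typically off-diagonal) entries of $\Theta$ and $\lambda > 0$ controls the degree of sparsity. Replacing $\|\Theta\|_0$ with the convex $\|\Theta\|_1$ penalty recovers the familiar $l_1$-penalized (graphical-lasso-style) formulation that the paper compares against.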
