Abstract

We consider the problem of learning a sparse graph under Laplacian constrained Gaussian graphical models. This problem can be formulated as a penalized maximum likelihood estimation of the Laplacian constrained precision matrix. As in the classical graphical lasso problem, recent works have used $\ell_1$-norm regularization to promote sparsity in Laplacian constrained precision matrix estimation. However, we find that the widely used $\ell_1$-norm is not effective in imposing a sparse solution in this problem. Empirically, we observe that the number of nonzero graph weights grows as the regularization parameter increases. From a theoretical perspective, we prove that a large regularization parameter will surprisingly lead to a complete graph, i.e., a graph in which every pair of vertices is connected by an edge. To address this issue, we introduce a nonconvex sparsity penalty, and propose a new estimator obtained by solving a sequence of weighted $\ell_1$-norm penalized sub-problems. We establish non-asymptotic guarantees on both the optimization error and the statistical error, and prove that the proposed estimator recovers the edges correctly with high probability. To solve each sub-problem, we develop a projected gradient descent algorithm that enjoys a linear convergence rate. Finally, we propose an extension to learning disconnected graphs by imposing an additional rank constraint. We develop a numerical algorithm based on the alternating direction method of multipliers, and establish the convergence of its iterate sequence. Numerical experiments on synthetic and real-world data sets demonstrate the effectiveness of the proposed method.
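To make the abstract's recipe concrete, below is a minimal sketch of the idea: an outer loop of weighted $\ell_1$-norm sub-problems (here with the $\ell_1$ weights refreshed from the MCP derivative, one common choice of nonconvex penalty), each sub-problem solved by projected gradient descent onto the nonnegative orthant of edge weights. The Laplacian parameterization, the rank-one correction $L + J$ with $J = \mathbf{1}\mathbf{1}^\top/p$ (so the log-determinant is well defined for connected graphs), the fixed step size, and the iteration counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def laplacian_from_weights(w, p):
    """Map nonnegative edge weights w (one per pair i < j) to a graph Laplacian."""
    L = np.zeros((p, p))
    iu = np.triu_indices(p, 1)
    L[iu] = -w
    L += L.T
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

def estimate_laplacian(S, lam=0.1, gamma=2.0, n_outer=5, n_inner=200, step=1e-3):
    """Sketch of iteratively reweighted l1 estimation of a Laplacian
    constrained precision matrix. S is the sample covariance; lam and
    gamma are MCP penalty parameters (illustrative defaults)."""
    p = S.shape[0]
    J = np.ones((p, p)) / p           # rank correction: det(L + J) > 0 for connected graphs
    iu = np.triu_indices(p, 1)
    # For A symmetric, pair(A)[k] = A_ii + A_jj - 2*A_ij for the k-th pair (i, j):
    # the directional derivative of tr(A L(w)) in the edge weight w_ij.
    pair = lambda A: A[iu[0], iu[0]] + A[iu[1], iu[1]] - 2 * A[iu[0], iu[1]]

    w = np.full(iu[0].size, 1.0 / p)  # start from a connected (complete) graph
    lam_vec = np.full_like(w, lam)    # weights of the l1 penalty
    for _ in range(n_outer):
        # Inner loop: projected gradient descent on the weighted-l1 sub-problem
        #   minimize -log det(L(w) + J) + tr(S L(w)) + <lam_vec, w>  s.t. w >= 0.
        for _ in range(n_inner):
            Minv = np.linalg.inv(laplacian_from_weights(w, p) + J)
            grad = pair(S) - pair(Minv) + lam_vec
            w = np.maximum(w - step * grad, 0.0)   # projection onto w >= 0
        # Outer loop: refresh the l1 weights from the nonconvex penalty
        # (derivative of the MCP penalty, zero beyond gamma*lam).
        lam_vec = lam * np.maximum(0.0, 1.0 - w / (gamma * lam))
    return laplacian_from_weights(w, p)
```

Because the MCP derivative vanishes for large weights, strong edges stop being penalized across outer iterations while weak ones keep a penalty near $\lambda$; this is what lets the reweighted scheme produce genuinely sparse graphs where a plain $\ell_1$ penalty, as the abstract notes, does not.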
