Linear Convergence of Stochastic Iterative Greedy Algorithms with Sparse Constraints

arXiv:1407.0088
Published Jul 1, 2014 in math.NA, cs.IT, math.IT, and math.OC

Abstract

Motivated by recent work on stochastic gradient descent methods, we develop two stochastic variants of greedy algorithms for possibly non-convex optimization problems with sparsity constraints. We prove linear convergence in expectation to the solution within a specified tolerance. This generalized framework applies to problems such as sparse signal recovery in compressed sensing, low-rank matrix recovery, and covariance matrix estimation, giving methods with provable convergence guarantees that often outperform their deterministic counterparts. We also analyze settings in which gradients and projections can only be computed approximately, and prove that the methods are robust to these approximations. We include many numerical experiments that align with the theoretical analysis and demonstrate these improvements in several different settings.
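To make the abstract's idea concrete, here is a minimal sketch of one stochastic greedy variant in the spirit the title describes: a stochastic form of iterative hard thresholding applied to sparse linear regression. This is an illustration under stated assumptions, not the paper's exact algorithm; the function names, uniform block sampling, and fixed step size below are assumptions, and the paper's method additionally involves sampling probabilities and approximation tolerances that are omitted here.

```python
import numpy as np

def hard_threshold(x, k):
    """Project onto the sparsity constraint: keep the k largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argpartition(np.abs(x), -k)[-k:]
    out[keep] = x[keep]
    return out

def sto_iht(A, y, k, step=0.5, block_size=20, n_iters=3000, seed=0):
    """Stochastic iterative hard thresholding sketch for
    min_x (1/(2m))||Ax - y||^2  subject to  ||x||_0 <= k.

    Each iteration samples a random block of rows, takes a gradient step
    using only that block (an unbiased estimate of the full gradient),
    then hard-thresholds back onto the sparse constraint set.
    Hypothetical parameters: the step size and block size are illustrative
    and generally need tuning to the conditioning of A.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        rows = rng.choice(m, size=block_size, replace=False)
        Ab, yb = A[rows], y[rows]
        grad = Ab.T @ (Ab @ x - yb) / block_size  # stochastic gradient
        x = hard_threshold(x - step * grad, k)    # greedy projection step
    return x

# Illustrative usage: noiseless sparse recovery from Gaussian measurements.
rng = np.random.default_rng(1)
m, n, k = 200, 400, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = sto_iht(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The design point the abstract emphasizes is visible here: each iteration touches only a small block of the data rather than the full matrix, so the per-iteration cost is low while, per the paper's analysis, linear convergence in expectation is retained.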
