Emergent Mind

Sublinear Time, Approximate Model-based Sparse Recovery For All

(1203.4746)
Published Mar 21, 2012 in cs.IT and math.IT

Abstract

We describe a probabilistic, sublinear-runtime, measurement-optimal system for model-based sparse recovery problems using dimensionality-reducing, dense random matrices. Specifically, we obtain a linear sketch $u \in \mathbb{R}^M$ of a high-dimensional vector $x \in \mathbb{R}^N$ through a matrix $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$). We assume this vector can be well approximated by $K$ non-zero coefficients (i.e., it is $K$-sparse). In addition, the nonzero coefficients of $x$ can obey further structural constraints, such as matroid, totally unimodular, or knapsack constraints, which we dub model-based sparsity. We construct the dense measurement matrix using a probabilistic method so that it satisfies the so-called restricted isometry property in the $\ell_2$-norm. While recovery using such matrices is measurement-optimal, since they require the smallest sketch sizes $M = O(K \log(N/K))$, the existing algorithms require superlinear runtime $\Omega(N \log(N/K))$, with the exception of Porat and Strauss, whose algorithm runs in $O(\beta^5 \epsilon^{-3} K (N/K)^{1/\beta})$ time for $\beta \in \mathbb{Z}^+$ but provides only an $\ell_1/\ell_1$ approximation guarantee. In contrast, our approach features $O\big(\max \lbrace L K \log^{O(1)} N, ~L K^2 \log^2(N/K) \rbrace\big)$ complexity, where $L \in \mathbb{Z}^+$ is a design parameter independent of $N$; it requires a smaller sketch size, can accommodate model-based sparsity, and provides a stronger $\ell_2/\ell_1$ guarantee. Our system applies to "for all" sparse signals and is robust against bounded perturbations in the sketch $u$ as well as perturbations on $x$ itself.
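The sketch-and-recover pipeline the abstract describes can be illustrated with a toy example: a dense Gaussian matrix, which satisfies the $\ell_2$ restricted isometry property with high probability at the measurement-optimal sketch size $M = O(K \log(N/K))$, compresses a $K$-sparse vector, and a decoder recovers it from the sketch $u = \Phi x$. Note that the decoder below, orthogonal matching pursuit (OMP), is a generic textbook stand-in, not the paper's sublinear-time algorithm, and all problem sizes are illustrative values chosen for the demo:

```python
# Toy sparse-recovery pipeline: dense Gaussian sketching + OMP decoding.
# This illustrates the problem setup from the abstract; it is NOT the
# sublinear-time algorithm proposed in the paper.
import math
import random

random.seed(1)

N, K = 128, 3                                   # ambient dimension, sparsity
M = 4 * K * max(1, round(math.log(N / K)))      # sketch size M = O(K log(N/K))

# Dense Gaussian measurement matrix Phi (M x N), entries N(0, 1/M).
Phi = [[random.gauss(0.0, 1.0) / math.sqrt(M) for _ in range(N)] for _ in range(M)]

# A K-sparse ground-truth vector x with random support and signs.
support = random.sample(range(N), K)
x = [0.0] * N
for j in support:
    x[j] = random.choice([-1.0, 1.0]) * random.uniform(1.0, 2.0)

# Linear sketch u = Phi x.
u = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(M)]

def solve_normal_eqs(cols, u):
    """Least squares min ||A a - u||_2 via normal equations + Gaussian elimination."""
    k = len(cols)
    G = [[sum(cols[p][i] * cols[q][i] for i in range(M)) for q in range(k)]
         for p in range(k)]
    b = [sum(cols[p][i] * u[i] for i in range(M)) for p in range(k)]
    for p in range(k):                           # forward elimination, partial pivoting
        piv = max(range(p, k), key=lambda r: abs(G[r][p]))
        G[p], G[piv] = G[piv], G[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = G[r][p] / G[p][p]
            for c in range(p, k):
                G[r][c] -= f * G[p][c]
            b[r] -= f * b[p]
    a = [0.0] * k
    for p in reversed(range(k)):                 # back substitution
        a[p] = (b[p] - sum(G[p][c] * a[c] for c in range(p + 1, k))) / G[p][p]
    return a

def omp(Phi, u, K):
    """Orthogonal matching pursuit: greedily select the K columns of Phi
    most correlated with the current residual, re-fitting at each step."""
    S, r = [], u[:]
    for _ in range(K):
        j = max((j for j in range(N) if j not in S),
                key=lambda j: abs(sum(Phi[i][j] * r[i] for i in range(M))))
        S.append(j)
        cols = [[Phi[i][j] for i in range(M)] for j in S]
        a = solve_normal_eqs(cols, u)
        r = [u[i] - sum(a[p] * cols[p][i] for p in range(len(S))) for i in range(M)]
    xhat = [0.0] * N
    for p, j in enumerate(S):
        xhat[j] = a[p]
    return xhat

xhat = omp(Phi, u, K)
err = math.sqrt(sum((xhat[j] - x[j]) ** 2 for j in range(N)))
print("support recovered:",
      sorted(j for j in range(N) if abs(xhat[j]) > 1e-8) == sorted(support))
print("l2 error:", err)
```

In the noiseless, exactly-sparse regime shown here, greedy decoding succeeds with high probability once $M$ is on the order of $K \log(N/K)$; the paper's contribution is achieving comparable guarantees with runtime sublinear in $N$ while also handling structured (model-based) supports.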
