
The sample complexity of weighted sparse approximation

(1507.06736)
Published Jul 24, 2015 in math.NA, cs.CC, cs.IT, math.FA, math.IT, and stat.CO

Abstract

For Gaussian sampling matrices, we provide bounds on the minimal number of measurements $m$ required to achieve robust weighted sparse recovery guarantees in terms of how well a given prior model for the sparsity support aligns with the true underlying support. Our main contribution is that for a sparse vector ${\bf x} \in \mathbb{R}^N$ supported on an unknown set $\mathcal{S} \subset \{1, \dots, N\}$ with $|\mathcal{S}|\leq k$, if $\mathcal{S}$ has \emph{weighted cardinality} $\omega(\mathcal{S}) := \sum_{j \in \mathcal{S}} \omega_j^2$, and if the weights on $\mathcal{S}^c$ exhibit mild growth, $\omega_j^2 \geq \gamma \log(j/\omega(\mathcal{S}))$ for $j\in\mathcal{S}^c$ and $\gamma > 0$, then the sample complexity for sparse recovery via weighted $\ell_1$-minimization using weights $\omega_j$ is linear in the weighted sparsity level, and $m = \mathcal{O}(\omega(\mathcal{S})/\gamma)$. This main result generalizes several special cases, including a) the standard sparse recovery setting, where all weights $\omega_j \equiv 1$ and $m = \mathcal{O}\left(k\log\left(N/k\right)\right)$; b) the setting where the support is known a priori, and $m = \mathcal{O}(k)$; and c) the setting of sparse recovery with prior information, where $m$ depends on how well the weights are aligned with the support set $\mathcal{S}$. We further extend the results in case c) to the setting of additive noise. Our results are {\em nonuniform}; that is, they apply for a fixed support, unknown a priori, and the weights on $\mathcal{S}$ do not all have to be smaller than the weights on $\mathcal{S}^c$ for our recovery results to hold.
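The weighted $\ell_1$ program the abstract refers to, $\min_x \sum_j \omega_j |x_j|$ subject to $Ax = y$, can be solved as a linear program by splitting $x = x^+ - x^-$. The sketch below is illustrative only: the helper name `weighted_l1_recover`, the toy problem sizes, and the choice of smaller weights on the anticipated support are assumptions for the demo, not code or parameters from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_recover(A, y, w):
    """Solve min sum_j w_j |x_j| s.t. A x = y via the LP split x = x+ - x-."""
    m, N = A.shape
    c = np.concatenate([w, w])        # objective: w . x+  +  w . x-
    A_eq = np.hstack([A, -A])         # equality constraint: A (x+ - x-) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    xp, xm = res.x[:N], res.x[N:]
    return xp - xm

# Toy demo: a k-sparse signal, Gaussian measurements, and weights that
# reflect a (correct) prior guess of the support -- small w_j on S.
rng = np.random.default_rng(0)
N, m, k = 50, 20, 3
support = [2, 7, 11]
x = np.zeros(N)
x[support] = [1.5, -2.0, 0.8]
A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sampling matrix
y = A @ x
w = np.ones(N)
w[support] = 0.1                               # weight prior aligned with S
x_hat = weighted_l1_recover(A, y, w)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With weights aligned with the true support, recovery typically succeeds from fewer measurements than the unweighted $m = \mathcal{O}(k\log(N/k))$ regime would require, which is the qualitative phenomenon the paper's bounds quantify.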

