The sample complexity of weighted sparse approximation (1507.06736v5)
Abstract: For Gaussian sampling matrices, we provide bounds on the minimal number of measurements $m$ required to achieve robust weighted sparse recovery guarantees, in terms of how well a given prior model for the sparsity support aligns with the true underlying support. Our main contribution is that for a sparse vector ${\bf x} \in \mathbb{R}^N$ supported on an unknown set $\mathcal{S} \subset \{1, \dots, N\}$ with $|\mathcal{S}|\leq k$, if $\mathcal{S}$ has \emph{weighted cardinality} $\omega(\mathcal{S}) := \sum_{j \in \mathcal{S}} \omega_j^2$, and if the weights on $\mathcal{S}^c$ exhibit mild growth, $\omega_j^2 \geq \gamma \log(j/\omega(\mathcal{S}))$ for $j\in\mathcal{S}^c$ and $\gamma > 0$, then the sample complexity for sparse recovery via weighted $\ell_1$-minimization using weights $\omega_j$ is linear in the weighted sparsity level: $m = \mathcal{O}(\omega(\mathcal{S})/\gamma)$. This main result generalizes several special cases, including a) the standard sparse recovery setting, where all weights $\omega_j \equiv 1$ and $m = \mathcal{O}\left(k\log\left(N/k\right)\right)$; b) the setting where the support is known a priori, and $m = \mathcal{O}(k)$; and c) the setting of sparse recovery with prior information, where $m$ depends on how well the weights align with the support set $\mathcal{S}$. We further extend the results in case c) to the setting of additive noise. Our results are \emph{nonuniform}; that is, they apply for a fixed support, unknown a priori, and the weights on $\mathcal{S}$ need not all be smaller than the weights on $\mathcal{S}^c$ for our recovery results to hold.
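To make the recovery program concrete, below is a minimal sketch of weighted $\ell_1$-minimization from Gaussian measurements, assuming the convex solver cvxpy is available. The dimensions, measurement count, and weight choices are illustrative assumptions, not values from the paper; the weights merely mimic the abstract's growth condition $\omega_j^2 \geq \gamma \log(j/\omega(\mathcal{S}))$ off the support.

```python
# Sketch: weighted l1-minimization for sparse recovery (illustrative only).
# Assumes cvxpy is installed; all parameter values below are hypothetical.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

N, k = 200, 5                                # ambient dimension, sparsity
S = rng.choice(N, size=k, replace=False)     # unknown true support
x_true = np.zeros(N)
x_true[S] = rng.standard_normal(k)

# Illustrative weights: logarithmic growth off the (believed) support,
# small weights on indices the prior model favors.
omega = np.sqrt(np.log(np.arange(2, N + 2)))
omega[S] = 1.0                               # prior aligned with true support

m = 40                                       # number of Gaussian measurements
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x_true

# Weighted l1-minimization: minimize ||omega .* x||_1 subject to A x = y
x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(omega, x))),
                  [A @ x == y])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

Note that the uniform-weight special case a) falls out of this setup: with $\omega_j \equiv 1$, the growth condition forces $\gamma \lesssim 1/\log(N/k)$, recovering the familiar $m = \mathcal{O}(k\log(N/k))$ bound.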