Batch Sparse Recovery, or How to Leverage the Average Sparsity

(arXiv:1807.08478)
Published Jul 23, 2018 in cs.DS

Abstract

We introduce a \emph{batch} version of sparse recovery, where the goal is to report a sequence of vectors $A'_1,\ldots,A'_m \in \mathbb{R}^n$ that estimate unknown signals $A_1,\ldots,A_m \in \mathbb{R}^n$ using a few linear measurements, each involving exactly one signal vector, under an assumption of \emph{average sparsity}. More precisely, we want to have

$(1) \;\;\; \sum_{j \in [m]} \|A_j - A'_j\|_p^p \le C \cdot \min \Big\{ \sum_{j \in [m]} \|A_j - A^*_j\|_p^p \Big\}$

for predetermined constants $C \ge 1$ and $p$, where the minimum is over all $A^*_1,\ldots,A^*_m \in \mathbb{R}^n$ that are $k$-sparse on average. We assume $k$ is given as input, and ask for the minimal number of measurements required to satisfy $(1)$. The special case $m=1$ is known as stable sparse recovery and has been studied extensively. We resolve the question for $p=1$ up to polylogarithmic factors, by presenting a randomized adaptive scheme that performs $\tilde{O}(km)$ measurements and with high probability has output satisfying $(1)$, for arbitrarily small $C > 1$. Finally, we show that adaptivity is necessary for every non-trivial scheme.
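The benchmark on the right-hand side of $(1)$ has a simple closed form for the batch setting: the best approximation by signals that are $k$-sparse on average (at most $km$ nonzeros in total across the $m$ vectors) keeps the $km$ largest-magnitude entries over all signals and zeroes the rest, so the optimal error is the $\ell_p^p$ mass of the remaining entries. A minimal sketch of this benchmark computation (the function name `avg_sparse_benchmark` is ours, not from the paper):

```python
import numpy as np

def avg_sparse_benchmark(A, k, p=1):
    """Optimal value of the min in (1): error of the best approximation
    of the m signals (rows of A) by vectors that are k-sparse on average.

    The minimizer keeps the k*m largest-magnitude entries across the whole
    matrix and zeroes everything else; the error is the p-th-power mass
    of the discarded entries.
    """
    m, n = A.shape
    keep = k * m  # total nonzero budget under average sparsity k
    mags = np.abs(A).ravel()
    if keep >= mags.size:
        return 0.0  # budget covers every entry; zero error
    # Entries outside the top-keep by magnitude contribute to the error.
    discarded = np.sort(mags)[: mags.size - keep]
    return float(np.sum(discarded ** p))
```

For example, with two signals $(3,1,0)$ and $(0,2,0.5)$ and $k=1$, the budget is $2$ nonzeros: the optimal approximation keeps the entries $3$ and $2$, giving $\ell_1$ error $1 + 0.5 = 1.5$. Note that a single signal may use more than $k$ nonzeros as long as the average is respected, which is exactly what distinguishes this benchmark from running $m$ independent $k$-sparse recoveries.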
