Emergent Mind

Improved Algorithms for Adaptive Compressed Sensing

(1804.09673)
Published Apr 25, 2018 in cs.DS, cs.IT, and math.IT

Abstract

In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A^1 x, A^2 x, \ldots, A^m x$, where $A^i$ can be chosen based on the outcomes $A^1 x, \ldots, A^{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x-\hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x-x'\|_q$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for $p=q=2$ for $C = 1+\epsilon$ with $O((k/\epsilon) \log\log(n/k))$ measurements and $O(\log^*(k) \log\log(n))$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log(n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log(n))$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log(n \log(n/k)))$ measurements and an optimal $O(\log\log(n))$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log(n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \cdot \log^3(\frac{1}{\epsilon}))$ measurements.
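To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of what "adaptive" means: each measurement vector is chosen based on the outcomes of the previous ones. The toy example below recovers an exactly 1-sparse vector by binary-searching its support with adaptively chosen indicator vectors, using $O(\log n)$ measurements; the function name and the exact recovery strategy are illustrative assumptions, not from the paper.

```python
import numpy as np

def adaptive_one_sparse_recovery(x):
    """Toy illustration of adaptive linear measurements: recover an
    exactly 1-sparse vector x, observing only inner products <a, x>
    where each a is chosen based on previous measurement outcomes."""
    n = len(x)
    lo, hi = 0, n            # candidate interval [lo, hi) for the support
    measurements = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        a = np.zeros(n)
        a[lo:mid] = 1.0      # indicator of the left half of the interval
        y = a @ x            # one adaptive measurement
        measurements += 1
        if y != 0:
            hi = mid         # the heavy coordinate lies in the left half
        else:
            lo = mid         # otherwise it lies in the right half
    # One final measurement reads off the value at the located index.
    a = np.zeros(n)
    a[lo] = 1.0
    xhat = np.zeros(n)
    xhat[lo] = a @ x
    measurements += 1
    return xhat, measurements
```

Non-adaptive schemes must fix all measurement vectors in advance; the point of adaptivity, as the abstract's bounds show, is that reacting to earlier outcomes can reduce the measurement count from $O(k \log(n/k))$ to roughly $O(k \log\log(n/k))$ (via more sophisticated schemes than this bisection sketch).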
