L2/L2-foreach sparse recovery with low risk (1304.6232v1)
Abstract: In this paper, we consider the "foreach" sparse recovery problem with failure probability $p$. The goal is to design a distribution over $m \times N$ matrices $\Phi$ and a decoding algorithm $\mathcal{A}$ such that for every $\mathbf{x}\in\mathbb{R}^N$, the following error guarantee holds with probability at least $1-p$:
\[ \|\mathbf{x}-\mathcal{A}(\Phi\mathbf{x})\|_2 \le C\|\mathbf{x}-\mathbf{x}_k\|_2, \]
where $C$ is a constant (ideally arbitrarily close to 1) and $\mathbf{x}_k$ is the best $k$-sparse approximation of $\mathbf{x}$. Much of the sparse recovery and compressive sensing literature has focused on the case of either $p = 0$ or $p = \Omega(1)$. We initiate the study of this problem for the entire range of failure probabilities. Our two main results are as follows: (1) we prove a lower bound of $\Omega(k\log(N/k)+\log(1/p))$ on $m$, the number of measurements, for $2^{-\Theta(N)}\le p<1$; Cohen, Dahmen, and DeVore \cite{CDD2007:NearOptimall2l2} prove that this bound is tight. (2) We prove nearly matching upper bounds for \textit{sub-linear} time decoding; previous such results addressed only $p = \Omega(1)$. Our results and techniques lead to the following corollaries: (i) the first sub-linear time decoding $\ell_2/\ell_2$ "forall" sparse recovery system that requires only a $\log^{\gamma}N$ extra factor (for some $\gamma<1$) over the optimal $O(k\log(N/k))$ number of measurements, and (ii) extensions of the results of Gilbert et al. \cite{GHRSW12:SimpleSignals} to information-theoretically bounded adversaries.
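To make the guarantee concrete, the following is a minimal Python sketch of the quantities it involves: the best $k$-sparse approximation $\mathbf{x}_k$ (keeping the $k$ largest-magnitude entries) and the right-hand side $C\|\mathbf{x}-\mathbf{x}_k\|_2$ of the error bound. The Gaussian measurement matrix, the helper `best_k_sparse`, and the constant `C = 2.0` are illustrative assumptions for this sketch only, not the paper's construction or its decoder.

```python
import numpy as np

def best_k_sparse(x, k):
    """Best k-sparse approximation: keep the k largest-magnitude entries of x."""
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    xk[idx] = x[idx]
    return xk

# Illustrative parameters (not from the paper).
rng = np.random.default_rng(0)
N, k, m = 1000, 10, 200

# Signal = exactly k-sparse part plus a small dense tail.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
x += 0.01 * rng.normal(size=N)

Phi = rng.normal(size=(m, N)) / np.sqrt(m)  # assumed Gaussian measurement ensemble
y = Phi @ x                                 # the m linear measurements given to the decoder

x_k = best_k_sparse(x, k)
tail = np.linalg.norm(x - x_k)   # ||x - x_k||_2, the tail energy
C = 2.0                          # hypothetical constant in the guarantee
print(f"measurements: {m}, allowed decoding error C*||x - x_k||_2 = {C * tail:.4f}")
```

Any decoder $\mathcal{A}$ meeting the paper's "foreach" guarantee would, with probability at least $1-p$ over the draw of $\Phi$, return an estimate whose $\ell_2$ distance from `x` is at most the printed bound.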