Abstract

Compressed sensing deals with the reconstruction of sparse signals from a small number of linear measurements. One of the main challenges in compressed sensing is to find the support of a sparse signal. In the literature, several bounds on the scaling law of the number of measurements for successful support recovery have been derived, with the main focus on random Gaussian measurement matrices. In this paper, we investigate the noisy support recovery problem from an estimation-theoretic point of view, where no specific assumption is made on the underlying measurement matrix. The linear measurements are perturbed by additive white Gaussian noise. We define the output of a support estimator to be a set of position values in increasing order, and we take the error between the true and estimated supports to be the $\ell_2$-norm of their difference. On the one hand, this choice allows us to use the machinery behind the $\ell_2$-norm error metric; on the other hand, it converts support recovery into a more intuitive, geometric problem. First, by using the Hammersley-Chapman-Robbins (HCR) bound, we derive a fundamental lower bound on the performance of any \emph{unbiased} estimator of the support set. This lower bound provides us with necessary conditions on the number of measurements for reliable $\ell_2$-norm support recovery, which we specifically evaluate for uniform Gaussian measurement matrices. Then, we analyze the maximum likelihood estimator and derive conditions under which the HCR bound is achievable. This leads us to the number of measurements for the optimum decoder that is sufficient for reliable $\ell_2$-norm support recovery. Using this framework, we specifically evaluate sufficient conditions for uniform Gaussian measurement matrices.
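For context, the HCR bound the abstract invokes can be sketched in its standard general form (this is the textbook statement, not the paper's specific derivation): for any estimator $T(x)$ that is unbiased for a function $g(\theta)$ under the density $p(x;\theta)$,

$$
\operatorname{Var}_\theta(T) \;\ge\; \sup_{\theta' \neq \theta} \frac{\bigl(g(\theta') - g(\theta)\bigr)^2}{\mathbb{E}_\theta\!\left[\left(\dfrac{p(x;\theta')}{p(x;\theta)}\right)^2\right] - 1}.
$$

Unlike the Cramér-Rao bound, this requires no differentiability in $\theta$, which is why it is suited to discrete parameters such as support sets.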
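The error metric described above can be made concrete with a minimal sketch. Here the supports are represented, as in the abstract, as vectors of index positions in increasing order, and the error is the $\ell_2$-norm of their difference (the function name `support_error` is ours, for illustration only):

```python
import numpy as np

def support_error(true_support, est_support):
    """l2-norm distance between two supports of equal size,
    each written as a vector of positions in increasing order."""
    s = np.sort(np.asarray(true_support, dtype=float))
    s_hat = np.sort(np.asarray(est_support, dtype=float))
    # Elementwise difference of the sorted position vectors,
    # then the Euclidean norm of that difference vector.
    return np.linalg.norm(s - s_hat)

# Example: true support {2, 5, 9}, estimated support {2, 6, 9}.
# The difference vector is (0, -1, 0), so the error is 1.0.
print(support_error([2, 5, 9], [2, 6, 9]))
```

Representing a support as a sorted position vector is what turns the combinatorial recovery problem into the geometric one the abstract refers to: estimates that miss the true support by nearby indices incur a small $\ell_2$ penalty, while distant misses incur a large one.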
