Emergent Mind

Sharp MSE Bounds for Proximal Denoising

(1305.2714)
Published May 13, 2013 in cs.IT, math.IT, and math.OC

Abstract

Denoising is the problem of estimating a signal $x_0$ from its noisy observations $y=x_0+z$. In this paper, we focus on the "structured denoising problem", where the signal $x_0$ possesses a certain structure and $z$ has independent normally distributed entries with mean zero and variance $\sigma^2$. We employ a structure-inducing convex function $f(\cdot)$ and solve $\min_x\{\frac{1}{2}\|y-x\|_2^2+\sigma\lambda f(x)\}$ to estimate $x_0$, for some $\lambda>0$. Common choices for $f(\cdot)$ include the $\ell_1$ norm for sparse vectors, the $\ell_1-\ell_2$ norm for block-sparse signals, and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate $x^*$ is the normalized mean-squared error $\text{NMSE}(\sigma)=\frac{\mathbb{E}\|x^*-x_0\|_2^2}{\sigma^2}$. We show that the NMSE is maximized as $\sigma\rightarrow 0$ and we find the \emph{exact} worst-case NMSE, which has a simple geometric interpretation: the mean-squared distance of a standard normal vector to the $\lambda$-scaled subdifferential $\lambda\partial f(x_0)$. When $\lambda$ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem $\min_{f(x)\leq f(x_0)}\|y-x\|_2$. The paper also connects these results to the generalized LASSO problem, in which one solves $\min_{f(x)\leq f(x_0)}\|y-Ax\|_2$ to estimate $x_0$ from noisy linear observations $y=Ax_0+z$. We show that certain properties of the LASSO problem are closely related to those of the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a "phase transition" as a function of the number of observations. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between the denoising and linear inverse problems.
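For the $\ell_1$ case, the proximal estimator above has a well-known closed form: elementwise soft thresholding at level $\sigma\lambda$. The sketch below (illustrative only; the dimensions, sparsity level, and $\lambda$ are hypothetical choices, not parameters from the paper) solves the denoising program for a sparse $x_0$ and evaluates the NMSE defined in the abstract:

```python
import numpy as np

def soft_threshold(y, t):
    """Proximal operator of t*||.||_1: shrink each entry of y toward 0 by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def proximal_denoise_l1(y, sigma, lam):
    """Closed-form minimizer of 0.5*||y - x||_2^2 + sigma*lam*||x||_1."""
    return soft_threshold(y, sigma * lam)

rng = np.random.default_rng(0)
n, k, sigma, lam = 1000, 50, 0.1, 2.0    # illustrative parameters
x0 = np.zeros(n)
x0[:k] = rng.standard_normal(k)           # k-sparse signal x_0
z = rng.standard_normal(n)                # N(0, I) noise
y = x0 + sigma * z                        # noisy observation
x_star = proximal_denoise_l1(y, sigma, lam)
nmse = np.sum((x_star - x0) ** 2) / sigma ** 2
print(f"empirical NMSE at sigma={sigma}: {nmse:.1f}")
```

By the paper's result, the worst-case value of this NMSE (over $\sigma$) equals the mean-squared distance of a standard normal vector to $\lambda\partial\|x_0\|_1$; the simulation gives one empirical draw, not that worst-case quantity.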
