Simple Error Bounds for Regularized Noisy Linear Inverse Problems

(arXiv:1401.6578)
Published Jan 25, 2014 in math.OC, cs.IT, math.IT, math.ST, and stat.TH

Abstract

Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\{ \|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function that promotes the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed, and make no assumptions on the noise vector $\mathbf{z}$ other than that it is independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
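
As a concrete illustration (not taken from the paper), the following minimal Python sketch instantiates the setup for the sparse case $f=\|\cdot\|_1$: it solves the lasso variant above with cvxpy as a generic convex solver and Monte Carlo estimates the summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$. All dimensions, the value of $\lambda$, and the noise level are illustrative choices, and the exact high-probability bound relating the two printed quantities is in the paper.

```python
# Minimal sketch, assuming f = l1-norm (sparsity) and cvxpy as the solver.
# Dimensions, lambda, and noise level are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10   # ambient dimension, measurements, sparsity
lam = 2.0                # regularization weight (illustrative)

# k-sparse ground truth x0, Gaussian A, and noise z independent of A
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
z = 0.1 * rng.standard_normal(m)
y = A @ x0 + z

# Solve the lasso variant from the abstract:
#   min_x ||y - A x||_2 + lam * ||x||_1   (note: norm, not squared norm)
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1))).solve()
err = np.linalg.norm(x.value - x0)

# Monte Carlo estimate of delta(lam * subdiff ||x0||_1): the expected squared
# distance of a standard Gaussian g to the scaled subdifferential. For the
# l1-norm, the nearest point is lam*sign(x0_i) on the support, and off the
# support the coordinate distance is max(|g_i| - lam, 0).
g = rng.standard_normal((10000, n))
on = np.zeros(n, dtype=bool)
on[support] = True
d2 = ((g[:, on] - lam * np.sign(x0[on])) ** 2).sum(axis=1) \
     + (np.maximum(np.abs(g[:, ~on]) - lam, 0.0) ** 2).sum(axis=1)
delta = d2.mean()

print(f"estimation error ||x_hat - x0||_2 = {err:.3f}")
print(f"Gaussian squared distance delta   = {delta:.1f} (vs. m = {m})")
```

The closed-form distance used here follows because, for the $\ell_1$-norm, the scaled subdifferential $\lambda\partial\|\mathbf{x}_0\|_1$ is a product of the singletons $\{\lambda\,\mathrm{sign}(x_{0,i})\}$ on the support and the intervals $[-\lambda,\lambda]$ off it; the bound is informative in regimes where $\delta(\lambda\partial f(\mathbf{x}_0)) < m$.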
