Simple Error Bounds for Regularized Noisy Linear Inverse Problems (1401.6578v1)
Abstract: Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\{\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function aiming to promote the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed and make no assumptions on the noise vector $\mathbf{z}$, other than that it is independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
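To make the setup concrete, here is a minimal numerical sketch (not from the paper) of the estimator in the abstract for the sparse case $f=\|\cdot\|_1$, together with a Monte Carlo estimate of the summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$ computed as the expected squared distance of a standard Gaussian vector to the scaled $\ell_1$ subdifferential. It assumes CVXPY is installed; the dimensions, sparsity level, noise scale, and the choice $\lambda = 1$ are all illustrative, not values taken from the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper):
# ambient dimension n, number of measurements m, sparsity k.
n, m, k = 200, 100, 10

# Sparse ground truth x0 and measurement matrix A with i.i.d. N(0,1) entries.
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
z = 0.1 * rng.standard_normal(m)  # noise, drawn independently of A
y = A @ x0 + z

# The lasso variant from the abstract:
#   x_hat = argmin_x ||y - A x||_2 + lam * ||x||_1
lam = 1.0  # illustrative regularization weight
x = cp.Variable(n)
objective = cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm1(x))
cp.Problem(objective).solve()
x_hat = x.value

print("estimation error ||x_hat - x0||_2 =", np.linalg.norm(x_hat - x0))

# Monte Carlo estimate of delta(lam * subdiff f(x0)) for f = l1-norm:
#   E[ dist(g, lam * subdiff ||.||_1(x0))^2 ],  g ~ N(0, I_n).
# On the support, the subdifferential is the point lam * sign(x0_i);
# off the support, it is the interval [-lam, lam], so the squared
# distance is the soft-thresholding residual max(|g_i| - lam, 0)^2.
num_trials = 2000
on = x0 != 0
d2 = np.empty(num_trials)
for t in range(num_trials):
    g = rng.standard_normal(n)
    d2[t] = (np.sum((g[on] - lam * np.sign(x0[on])) ** 2)
             + np.sum(np.maximum(np.abs(g[~on]) - lam, 0.0) ** 2))
print("estimated delta =", d2.mean())
```

With such an estimate of $\delta(\lambda\partial f(\mathbf{x}_0))$ in hand, one can empirically compare the realized error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$ against any bound stated in terms of this summary parameter, which is how the paper's simulations are framed.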