Emergent Mind

The Squared-Error of Generalized LASSO: A Precise Analysis

(1311.0830)
Published Nov 4, 2013 in cs.IT, math.IT, math.OC, and stat.ML

Abstract

We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z \in \mathbb{R}^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure-inducing convex function $f(\cdot)$; for example, the $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide a sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$, respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Squared Error (NSE), defined as $\frac{\|x^*-x_0\|_2^2}{\sigma^2}$, as a function of the noise level $\sigma$, the number of observations $m$, and the structure of the signal. We show that the structure of the signal $x_0$ and the choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(\text{cone})$ and $D(\lambda)$, which are defined as the Gaussian squared-distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{\|y-Ax\|_2~\text{subject to}~f(x)\leq f(x_0)\}$. We prove that its worst-case NSE is achieved as $\sigma\rightarrow 0$ and concentrates around $\frac{D(\text{cone})}{m-D(\text{cone})}$. Secondly, we consider $\arg\min_{x}\{\|y-Ax\|_2+\lambda f(x)\}$ for some $\lambda\geq 0$. This time the NSE formula depends on the choice of $\lambda$ and is given by $\frac{D(\lambda)}{m-D(\lambda)}$. We then establish a mapping between this and the third estimator, $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+\lambda f(x)\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
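As a hypothetical numerical sketch (not the paper's code), the third estimator $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+\lambda f(x)\}$ with $f=\|\cdot\|_1$ can be solved by proximal gradient descent (ISTA), and the empirical NSE $\|x^*-x_0\|_2^2/\sigma^2$ measured directly. All dimensions, the sparsity level, and the heuristic choice of $\lambda$ below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical illustration: solve (1/2)||y - Ax||_2^2 + lam * ||x||_1
# via proximal gradient (ISTA) and report the empirical NSE.
rng = np.random.default_rng(0)
n, m, k = 200, 120, 10       # assumed ambient dim, measurements, sparsity
sigma = 0.1                  # assumed noise standard deviation

# k-sparse ground-truth signal x0
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# i.i.d. N(0, 1) measurement matrix and N(0, sigma^2) noise, as in the paper
A = rng.standard_normal((m, n))
y = A @ x0 + sigma * rng.standard_normal(m)

lam = 2.0 * sigma * np.sqrt(2 * np.log(n))   # a common heuristic, assumed here
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / ||A||_2^2 step size

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)                 # gradient of the quadratic term
    x = soft_threshold(x - step * grad, step * lam)

nse = np.sum((x - x0) ** 2) / sigma ** 2     # empirical normalized squared error
print(f"empirical NSE = {nse:.3f}")
```

Averaging this NSE over many draws of $A$ and $z$, and shrinking $\sigma$, is one way to probe how the empirical error compares with formulae of the form $D/(m-D)$ derived in the paper.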
