Asymptotically Exact Error Analysis for the Generalized $\ell_2^2$-LASSO (1502.06287v1)
Abstract: Given an unknown signal $\mathbf{x}_0\in\mathbb{R}^n$ and linear noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\sigma\mathbf{v}\in\mathbb{R}^m$, the generalized $\ell_2^2$-LASSO solves $\hat{\mathbf{x}}:=\arg\min_{\mathbf{x}}\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2 + \sigma\lambda f(\mathbf{x})$. Here, $f$ is a convex regularization function (e.g. the $\ell_1$-norm, the nuclear norm) aiming to promote the structure of $\mathbf{x}_0$ (e.g. sparse, low-rank), and $\lambda\geq 0$ is the regularizer parameter. A related optimization problem, though not as popular or well-known, is often referred to as the generalized $\ell_2$-LASSO; it takes the form $\hat{\mathbf{x}}:=\arg\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2 + \lambda f(\mathbf{x})$ and has been analyzed in [1]. [1] further made conjectures about the performance of the generalized $\ell_2^2$-LASSO. This paper establishes these conjectures rigorously. We measure performance with the normalized squared error $\mathrm{NSE}(\sigma):=\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2^2/\sigma^2$. Assuming the entries of $\mathbf{A}$ and $\mathbf{v}$ to be i.i.d. standard normal, we precisely characterize the "asymptotic NSE" $\mathrm{aNSE}:=\lim_{\sigma\rightarrow 0}\mathrm{NSE}(\sigma)$ when the problem dimensions $m,n$ tend to infinity in a proportional manner. The role of $\lambda$, $f$ and $\mathbf{x}_0$ is explicitly captured in the derived expression by means of a single geometric quantity, the Gaussian distance to the subdifferential. We conjecture that $\mathrm{aNSE} = \sup_{\sigma>0}\mathrm{NSE}(\sigma)$. We include detailed discussions on the interpretation of our result, make connections to relevant literature, and perform computational experiments that validate our theoretical findings.
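For concreteness, the following minimal Python sketch (not from the paper) instantiates the setup above with $f$ the $\ell_1$-norm and a sparse $\mathbf{x}_0$, solving the generalized $\ell_2^2$-LASSO by plain proximal gradient (ISTA) and printing $\mathrm{NSE}(\sigma)$ for decreasing $\sigma$; the dimensions, sparsity level, number of iterations, and $\lambda$ are illustrative assumptions, and the solver is a generic stand-in, not the authors' method. As $\sigma\rightarrow 0$ the printed values should stabilize near the aNSE characterized in the paper.

```python
# Illustrative sketch (assumed parameters, not the paper's code):
# estimate NSE(sigma) for the generalized l2^2-LASSO with f = l1-norm.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l22_lasso(A, y, sigma, lam, n_iter=2000):
    """Solve min_x 0.5 * ||y - A x||_2^2 + sigma * lam * ||x||_1 via ISTA."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, sigma * lam / L)
    return x

# Problem setup: sparse x0, i.i.d. standard normal A and v (as in the paper);
# m, n, sparsity k, and lam below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
m, n, k = 300, 600, 30
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
v = rng.standard_normal(m)
lam = 2.0

# NSE(sigma) = ||x_hat - x0||_2^2 / sigma^2 should approach its
# sigma -> 0 limit (the aNSE) as sigma shrinks.
for sigma in [1.0, 0.1, 0.01]:
    y = A @ x0 + sigma * v
    x_hat = l22_lasso(A, y, sigma, lam)
    print(f"sigma={sigma:5.2f}  NSE={np.sum((x_hat - x0) ** 2) / sigma ** 2:.3f}")
```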