
The LASSO with Non-linear Measurements is Equivalent to One With Linear Measurements (1506.02181v1)

Published 6 Jun 2015 in math.ST, cs.IT, math.IT, and stat.TH

Abstract: Consider estimating an unknown, but structured, signal $x_0 \in \mathbb{R}^n$ from $m$ measurements $y_i = g_i(a_i^T x_0)$, where the $a_i$'s are the rows of a known measurement matrix $A$, and $g$ is a (potentially unknown) nonlinear and random link function. Such measurement functions can arise in applications where the measurement device has nonlinearities and uncertainties. They can also arise by design; e.g., $g_i(x) = \text{sign}(x + z_i)$ corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger, and more recent work of Plan and Vershynin, we estimate $x_0$ by solving the Generalized LASSO for some regularization parameter $\lambda > 0$ and some (typically non-smooth) convex structure-inducing regularizer function. While this approach seems to naively ignore the nonlinear function $g$, both Brillinger (in the unconstrained case) and Plan and Vershynin have shown that, when the entries of $A$ are i.i.d. standard normal, this is a good estimator of $x_0$ up to a constant of proportionality $\mu$, which depends only on $g$. In this work, we considerably strengthen these results by obtaining explicit expressions for the squared error of the \emph{regularized} LASSO that are asymptotically \emph{precise} when $m$ and $n$ grow large. A main result is that the estimation performance of the Generalized LASSO with non-linear measurements is \emph{asymptotically the same} as that of one whose measurements are linear, $y_i = \mu a_i^T x_0 + \sigma z_i$, with $\mu = \mathbb{E}[\gamma g(\gamma)]$, $\sigma^2 = \mathbb{E}[(g(\gamma) - \mu\gamma)^2]$, and $\gamma$ standard normal. To the best of our knowledge, the derived expressions for the estimation performance are the first known precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements that minimizes the estimation error of the LASSO is the celebrated Lloyd-Max quantizer.
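As an illustration of the effective linear-model parameters defined in the abstract, the following sketch (not from the paper; the helper name `effective_params` is our own) estimates $\mu = \mathbb{E}[\gamma g(\gamma)]$ and $\sigma^2 = \mathbb{E}[(g(\gamma) - \mu\gamma)^2]$ by Monte Carlo for the 1-bit link $g = \text{sign}$, where the closed forms are $\mu = \sqrt{2/\pi}$ and $\sigma^2 = 1 - 2/\pi$:

```python
import numpy as np

def effective_params(g, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of mu = E[gamma * g(gamma)] and
    sigma^2 = E[(g(gamma) - mu * gamma)^2], gamma standard normal."""
    rng = np.random.default_rng(seed)
    gamma = rng.standard_normal(n_samples)
    mu = np.mean(gamma * g(gamma))
    sigma2 = np.mean((g(gamma) - mu * gamma) ** 2)
    return mu, sigma2

# 1-bit (sign) link: mu = sqrt(2/pi), sigma^2 = 1 - 2/pi
mu, sigma2 = effective_params(np.sign)
print(mu, np.sqrt(2 / np.pi))   # Monte Carlo estimate vs. closed form
print(sigma2, 1 - 2 / np.pi)
```

Per the paper's main result, the LASSO error with these nonlinear measurements asymptotically matches that of the linear model $y_i = \mu a_i^T x_0 + \sigma z_i$ with these values of $\mu$ and $\sigma$.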

Citations (114)
