
A framework to characterize performance of LASSO algorithms

(1303.7291)
Published Mar 29, 2013 in cs.IT, math.IT, math.OC, math.PR, math.ST, and stat.TH

Abstract

In this paper we consider solving \emph{noisy} under-determined systems of linear equations with sparse solutions. A noiseless equivalent attracted enormous attention in recent years, above all due to the work of \cite{CRT,CanRomTao06,DonohoPol}, where it was shown, in a statistical and large-dimensional context, that a sparse unknown vector (of sparsity proportional to the length of the vector) can be recovered from an under-determined system via a simple polynomial-time $\ell_1$-optimization algorithm. \cite{CanRomTao06} further established that even when the equations are \emph{noisy}, one can, through an SOCP noisy equivalent of $\ell_1$, obtain an approximate solution that is (in an $\ell_2$-norm sense) no further than a constant times the noise from the sparse unknown vector. In our recent works \cite{StojnicCSetam09,StojnicUpper10} we created a powerful mechanism that characterizes exactly the performance of $\ell_1$ optimization in the noiseless case (as shown in \cite{StojnicEquiv10}, and as must be the case, the results of \cite{StojnicCSetam09,StojnicUpper10} are in complete agreement with the corresponding exact ones from \cite{DonohoPol}). In this paper we design a mechanism, as powerful as those from \cite{StojnicCSetam09,StojnicUpper10}, that can handle the analysis of LASSO-type algorithms (and many others) that can be, and typically are, used for "solving" noisy under-determined systems. Using this mechanism we then, in a statistical context, compute the exact worst-case $\ell_2$-norm distance between the unknown sparse vector and the approximate one obtained through such a LASSO. The obtained results match the corresponding exact ones of \cite{BayMon10,DonMalMon10}. Moreover, as a by-product of our analysis framework, we recognize the existence of an SOCP-type algorithm that achieves the same performance.
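To make the setup concrete, below is a minimal numerical sketch of the kind of problem the abstract describes: a sparse vector observed through a noisy under-determined linear system $y = Ax + v$ and estimated with a LASSO-type program, after which the $\ell_2$ distance to the true vector is measured. The unconstrained LASSO formulation, the Gaussian data model, the problem sizes, and the regularization weight are illustrative assumptions for this sketch; the exact LASSO variant and parameter choices analyzed in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative (small) problem sizes; the paper's analysis is asymptotic, with the
# sparsity k and the number of measurements m proportional to the dimension n.
n, m, k = 200, 100, 10
rng = np.random.default_rng(0)

# Sparse unknown vector x_true and a Gaussian measurement matrix A.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Noisy under-determined measurements y = A x + v.
sigma = 0.05
v = sigma * rng.standard_normal(m)
y = A @ x_true + v

# LASSO estimate. scikit-learn's Lasso minimizes
#   (1 / (2 m)) * ||y - A x||_2^2 + alpha * ||x||_1;
# alpha = 0.01 is an arbitrary illustrative choice, not the paper's tuning.
x_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000).fit(A, y).coef_

# The quantity the paper characterizes (in a worst-case, statistical sense):
# the l2-norm distance between the LASSO estimate and the sparse unknown vector,
# compared here with the norm of the noise.
print("||x_hat - x_true||_2 =", np.linalg.norm(x_hat - x_true))
print("||v||_2              =", np.linalg.norm(v))
```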
