Emergent Mind

Towards a better compressed sensing

(1306.3801)
Published Jun 17, 2013 in cs.IT , math.IT , and math.OC

Abstract

In this paper we look at a well-known linear inverse problem that is one of the mathematical cornerstones of the compressed sensing field. The seminal works \cite{CRT,DOnoho06CS} considered $\ell_1$ optimization and its success in recovering sparse solutions of linear inverse problems. Moreover, \cite{CRT,DOnoho06CS} established for the first time in a statistical context that an unknown vector of linear sparsity can be recovered as the solution of an under-determined linear system through $\ell_1$ optimization. In \cite{DonohoPol,DonohoUnsigned} (and later in \cite{StojnicCSetam09,StojnicUpper10}) the precise values of the linear proportionality were established as well. While the typical $\ell_1$ optimization behavior has been essentially settled through the work of \cite{DonohoPol,DonohoUnsigned,StojnicCSetam09,StojnicUpper10}, in this paper we look at possible upgrades of $\ell_1$ optimization. Namely, we consider a couple of algorithms that turn out to be capable of recovering a substantially higher sparsity than $\ell_1$. However, these algorithms require a bit of "feedback" to work at full strength. This in turn translates the original problem of improving upon $\ell_1$ into the problem of designing algorithms that can provide the output needed to feed the $\ell_1$ upgrades considered in this paper.
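The baseline $\ell_1$ optimization the abstract builds on is basis pursuit: $\min \|x\|_1$ subject to $Ax = y$, where $A$ is a wide measurement matrix and $x$ is sparse. A minimal sketch of how this can be posed as a linear program (the split $x = u - v$ with $u, v \ge 0$, the dimensions, and the use of `scipy.optimize.linprog` are illustrative choices here, not details from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Under-determined system: m measurements of an n-dimensional k-sparse vector.
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit as an LP: write x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u + v); minimize sum(u + v) subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

For sparsity levels below the $\ell_1$ phase-transition threshold characterized in \cite{DonohoPol,StojnicCSetam09}, this program typically recovers $x$ exactly; the algorithms considered in the paper aim to push that threshold higher, given suitable "feedback".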
