
How to exploit prior information in low-complexity models

(1704.05397)
Published Apr 18, 2017 in cs.IT and math.IT

Abstract

Compressed sensing refers to extracting a low-dimensional structured signal of interest from incomplete random linear observations. A line of recent work has shown that, with extra prior information about the signal, one can recover the signal from far fewer observations. The general approach is to solve a weighted convex minimization problem, in which the convex function is chosen to promote the low-dimensional structure and the weights are chosen to reduce the number of measurements required. In this paper, we consider a generalized non-uniform model in which the structured signal falls into several partitions, with the entries of each partition having a fixed probability of belonging to the structure's support. Given these probabilities, and building on recent developments in conic integral geometry, we provide a method to choose the unique optimal weights for any general low-dimensional signal model. This class of low-dimensional signal models includes many popular examples, such as $\ell_1$ analysis (entry-wise sparsity in an arbitrary redundant dictionary), the $\ell_{1,2}$ norm (block sparsity), and the total variation semi-norm (for piece-wise constant signals). We show through precise analysis and simulations that the weighted convex optimization problem significantly outperforms its unweighted counterpart when the unique optimal weights are used.
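The setting described in the abstract can be sketched in code. Below is a minimal weighted $\ell_1$ recovery example in the spirit of the paper's non-uniform model: entries in one partition are known to contain the support with high probability, so they receive a smaller penalty weight. The solver (plain ISTA), the partition sizes, the weight values, and the problem dimensions are all illustrative assumptions, not the paper's optimal-weight formula.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 100

# Partition 1 (first 25 entries) is assumed to contain the support with
# high probability; partition 2 is the remaining 75 entries.
support = rng.choice(25, size=8, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.choice([-1.0, 1.0], size=8)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurements
y = A @ x_true                                # noiseless observations

# Smaller weight on the high-probability partition => weaker penalty there.
w = np.ones(n)
w[:25] = 0.2
lam = 1e-2

# ISTA for: min_x 0.5 * ||Ax - y||^2 + lam * sum_i w_i * |x_i|
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(5000):
    g = A.T @ (A @ x - y)      # gradient step on the least-squares term
    z = x - g / L
    thresh = lam * w / L
    x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # weighted soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

In this noiseless, well-conditioned instance the weighted penalty recovers the sparse signal accurately; the paper's contribution is the principled choice of such weights from the partition probabilities via conic integral geometry.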
