How to exploit prior information in low-complexity models (1704.05397v1)

Published 18 Apr 2017 in cs.IT and math.IT

Abstract: Compressed sensing refers to recovering a low-dimensional structured signal of interest from incomplete random linear observations. A line of recent work has shown that, with extra prior information about the signal, one can recover it from far fewer observations. The general approach is to solve a weighted convex minimization problem, in which the convex function is chosen to promote the low-dimensional structure and the weights are chosen to reduce the number of measurements the optimization problem requires. In this paper, we consider a generalized non-uniform model in which the structured signal falls into several partitions, with the entries of each partition having a definite probability of belonging to the structure's support. Given these probabilities, and building on recent developments in conic integral geometry, we provide a method to choose the unique optimal weights for any general low-dimensional signal model. This class of signal models includes many popular examples, such as $\ell_1$ analysis (entry-wise sparsity in an arbitrary redundant dictionary), the $\ell_{1,2}$ norm (block sparsity), and the total variation semi-norm (for piece-wise constant signals). We show through precise analysis and simulations that, with the unique optimal weights, the weighted convex optimization problem significantly outperforms its unweighted counterpart.
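The idea in the abstract can be illustrated with a minimal sketch: recover a sparse signal by weighted $\ell_1$ minimization, assigning smaller weights to a partition whose entries are more likely to be in the support. This is a generic ISTA (proximal-gradient) solver, not the paper's method; the partition layout, weight value (0.3), and regularization parameter are illustrative assumptions — the paper derives the optimal weights via conic integral geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not taken from the paper)
n, m, k = 200, 80, 10

# Sparse signal whose support lies entirely in the first partition
# (a two-partition instance of the paper's non-uniform model)
x_true = np.zeros(n)
support = rng.choice(n // 2, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian measurements b = A x_true
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Non-uniform prior: entries in the first half are more likely to be
# in the support, so they get a smaller weight. The value 0.3 is an
# arbitrary illustrative choice, not the paper's optimal weight.
w = np.ones(n)
w[: n // 2] = 0.3

# ISTA for: minimize 0.5 * ||A x - b||^2 + lam * sum_i w_i |x_i|
lam = 0.01
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b)
    z = x - grad / L
    # Weighted soft-thresholding (prox of the weighted l1 norm)
    x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Entries with larger weights are thresholded more aggressively, so encoding the prior as down-weighting the likely-support partition lets the same number of measurements yield a more accurate reconstruction than uniform weights.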

Citations (2)
