
New Low Rank Optimization Model and Convex Approach for Robust Spectral Compressed Sensing (2101.06433v1)

Published 16 Jan 2021 in cs.IT, eess.SP, and math.IT

Abstract: This paper investigates the recovery of an undamped spectrally sparse signal and its spectral components from a set of regularly spaced samples, within the framework of spectral compressed sensing and super-resolution. We show that existing Hankel-based optimization methods suffer from a fundamental limitation: they cannot exploit the prior knowledge that the signal is undamped. We propose a new low-rank optimization model, partially inspired by forward-backward processing for line spectral estimation, and show its capability to restrict the spectral poles to the unit circle. We present convex relaxation approaches based on this model and prove their accuracy and robustness to bounded and sparse noise. All our results generalize from 1-D to arbitrary-dimensional spectral compressed sensing. Numerical simulations corroborate our analysis and demonstrate the efficiency of our model, with improved accuracy and resolution compared to the state-of-the-art Hankel and atomic norm methods.
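The forward-backward idea the abstract alludes to can be illustrated with a small numerical sketch. For an undamped signal x[t] = sum_k c_k * exp(2j*pi*f_k*t), whose poles lie exactly on the unit circle, the conjugate-reversed ("backward") samples contain the same frequencies, so stacking the forward and backward Hankel matrices preserves the low rank; once damping is present, the backward poles differ from the forward ones and the stacked rank doubles. This is the structural property that lets a forward-backward model encode the undampedness prior. The Python/NumPy sketch below is illustrative only: the function names are ours, and it demonstrates the rank property rather than the paper's actual optimization model.

```python
import numpy as np

def build_hankel(x, n):
    """n x n Hankel matrix H with H[i, j] = x[i + j] (needs len(x) >= 2n - 1)."""
    return np.array([[x[i + j] for j in range(n)] for i in range(n)])

def forward_backward_stack(x, n):
    """Stack the Hankel matrix of x on top of the Hankel matrix of the
    conjugate-reversed samples: the classical forward-backward construction
    from line spectral estimation that the paper cites as inspiration."""
    forward = build_hankel(x, n)
    backward = build_hankel(np.conj(x[::-1]), n)
    return np.vstack([forward, backward])

# Undamped spectrally sparse signal: poles exactly on the unit circle.
freqs = np.array([0.10, 0.33, 0.62])
t = np.arange(15)                             # 15 regularly spaced samples
x = np.exp(2j * np.pi * np.outer(t, freqs)).sum(axis=1)

H = forward_backward_stack(x, 8)              # shape (16, 8)
print(np.linalg.matrix_rank(H, tol=1e-8))     # -> 3, the number of sinusoids

# With damping, the backward block no longer shares the forward block's
# frequencies, so the stacked rank doubles to 6:
x_damped = np.exp((-0.1 + 2j * np.pi * freqs) * t[:, None]).sum(axis=1)
print(np.linalg.matrix_rank(forward_backward_stack(x_damped, 8), tol=1e-8))
```

A convex recovery approach in the spirit of the paper would then minimize a nuclear-norm surrogate of the rank of such a stacked matrix subject to data-consistency constraints; the paper's precise model, relaxations, and guarantees differ in detail and extend to arbitrary dimensions.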

Citations (5)

