Emergent Mind

Optimal Quantized Compressed Sensing via Projected Gradient Descent

(2407.04951)
Published Jul 6, 2024 in cs.IT and math.IT

Abstract

This paper provides a unified treatment of the recovery of structured signals living in a star-shaped set from general quantized measurements $\mathcal{Q}(\mathbf{A}\mathbf{x}-\mathbf{\tau})$, where $\mathbf{A}$ is a sensing matrix, $\mathbf{\tau}$ is a vector of (possibly random) quantization thresholds, and $\mathcal{Q}$ denotes an $L$-level quantizer. The ideal estimator with consistent quantized measurements is optimal in some important instances but typically infeasible to compute. To this end, we study the projected gradient descent (PGD) algorithm with respect to the one-sided $\ell_1$-loss and identify the conditions under which PGD achieves the same error rate, up to logarithmic factors. For the multi-bit case, these conditions only ensure local convergence, and we further develop a complementary approach based on product embedding. When applied to popular models such as 1-bit compressed sensing with Gaussian $\mathbf{A}$ and zero $\mathbf{\tau}$, and the dithered 1-bit/multi-bit models with sub-Gaussian $\mathbf{A}$ and uniform dither $\mathbf{\tau}$, our unified treatment yields error rates that improve on or match the sharpest results in all instances. In particular, PGD achieves the information-theoretically optimal rate $\tilde{O}(\frac{k}{mL})$ for recovering $k$-sparse signals, and the rate $\tilde{O}((\frac{k}{mL})^{1/3})$ for effectively sparse signals. For 1-bit compressed sensing of sparse signals, our result recovers the optimality of normalized binary iterative hard thresholding (NBIHT) that was proved very recently.
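The PGD iteration described in the abstract can be sketched for the 1-bit sparse case: take a subgradient step on the one-sided $\ell_1$-loss $f(\mathbf{x}) = \frac{1}{m}\sum_i \max(0, -y_i(\langle\mathbf{a}_i,\mathbf{x}\rangle - \tau_i))$, then project onto the set of $k$-sparse vectors via hard thresholding. The sketch below is a minimal illustration under assumed choices (step size, iteration count, spectral-style initialization, and per-iteration renormalization to the unit sphere, since 1-bit measurements with zero thresholds carry no magnitude information); it is not the paper's exact algorithm or analysis.

```python
import numpy as np

def hard_threshold(v, k):
    """Project v onto the set of k-sparse vectors (keep the k largest-magnitude entries)."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def pgd_one_sided_l1(A, y, tau, k, step=1.0, iters=100):
    """PGD on the one-sided l1-loss for 1-bit measurements y = sign(A x - tau).

    Illustrative sketch: step size, iteration count, and unit-sphere
    renormalization are assumptions, not choices taken from the paper.
    """
    m, n = A.shape
    # Simple initialization: threshold the back-projection A^T y.
    x = hard_threshold(A.T @ y, k)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        # Measurements the current iterate gets wrong (sign inconsistency).
        mismatch = (y * (A @ x - tau)) < 0
        # A subgradient of f(x) = (1/m) sum_i max(0, -y_i (<a_i, x> - tau_i)).
        grad = -(A.T @ (y * mismatch)) / m
        # Gradient step, then projection onto k-sparse vectors.
        x = hard_threshold(x - step * grad, k)
        x /= np.linalg.norm(x)
    return x
```

With zero thresholds this iteration reduces to a binary-iterative-hard-thresholding-style update, consistent with the NBIHT connection the abstract mentions; the dithered models correspond to passing a random `tau`.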
