An Adaptive Sublinear-Time Block Sparse Fourier Transform

(1702.01286)
Published Feb 4, 2017 in cs.DS

Abstract

The problem of approximately computing the $k$ dominant Fourier coefficients of a vector $X$ quickly, and using few samples in time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with $O(k\log n\log (n/k))$ runtime [Hassanieh et al., STOC'12] and $O(k\log n)$ sample complexity [Indyk et al., FOCS'14]. These results are proved using non-adaptive algorithms, and the latter $O(k\log n)$ sample complexity result is essentially the best possible under the sparsity assumption alone. This paper revisits the sparse FFT problem with the added twist that the sparse coefficients approximately obey a $(k_0,k_1)$-block sparse model. In this model, signal frequencies are clustered in $k_0$ intervals with width $k_1$ in Fourier space, where $k = k_0 k_1$ is the total sparsity. Signals arising in applications are often well approximated by this model with $k_0 \ll k$. Our main result is the first sparse FFT algorithm for $(k_0, k_1)$-block sparse signals with the sample complexity of $O^*(k_0 k_1 + k_0\log(1+ k_0)\log n)$ at constant signal-to-noise ratios, and sublinear runtime. A similar sample complexity was previously achieved in the works on model-based compressive sensing using random Gaussian measurements, but used $\Omega(n)$ runtime. To the best of our knowledge, our result is the first sublinear-time algorithm for model-based compressed sensing, and the first sparse FFT result that goes below the $O(k\log n)$ sample complexity bound. Our algorithm crucially uses *adaptivity* to achieve the improved sample complexity bound, and we prove that adaptivity is in fact necessary if Fourier measurements are used: any non-adaptive algorithm must use $\Omega(k_0 k_1 \log \frac{n}{k_0 k_1})$ samples for the $(k_0,k_1)$-block sparse model, ruling out improvements over the vanilla sparsity assumption.
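To make the block-sparse model concrete, the following is a minimal NumPy sketch, not the paper's algorithm, with parameter values chosen purely for illustration. It constructs a $(k_0,k_1)$-block sparse spectrum whose support consists of $k_0$ disjoint intervals of width $k_1$, and numerically compares the abstract's $O^*(k_0 k_1 + k_0\log(1+k_0)\log n)$ sample bound against the vanilla $O(k\log n)$ bound, with constants and logarithm bases ignored.

```python
# Illustrative sketch of the (k0, k1)-block sparse model (not the paper's
# algorithm): the spectrum has k0 clusters, each an interval of width k1,
# so the total sparsity is k = k0 * k1. Parameter values are assumptions
# for illustration only; the final comparison drops constant factors.
import numpy as np

rng = np.random.default_rng(0)

n, k0, k1 = 2**20, 8, 64          # signal length, number of blocks, block width
k = k0 * k1                       # total sparsity

# Build a (k0, k1)-block sparse spectrum: k0 disjoint aligned intervals of width k1.
X_hat = np.zeros(n, dtype=complex)
starts = rng.choice(n // k1, size=k0, replace=False) * k1
for s in starts:
    X_hat[s:s + k1] = rng.standard_normal(k1) + 1j * rng.standard_normal(k1)

x = np.fft.ifft(X_hat)            # time-domain signal a sparse FFT would sample

# Compare the two sample-complexity expressions (unit constants, natural logs).
block_sparse_samples = k0 * k1 + k0 * np.log(1 + k0) * np.log(n)
vanilla_sparse_samples = k * np.log(n)
print(f"block-sparse bound   ~ {block_sparse_samples:.0f} samples")
print(f"vanilla-sparse bound ~ {vanilla_sparse_samples:.0f} samples")
```

For these assumed parameters ($n = 2^{20}$, $k_0 = 8$, $k_1 = 64$), the block-sparse expression comes out roughly an order of magnitude smaller than the vanilla $k\log n$ expression, which illustrates the kind of improvement the adaptive algorithm targets.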
