- The paper introduces a novel Gaussian shifted-truncated-gamma (G-STG) prior to effectively induce sparsity in Bayesian compressed sensing.
- It employs an iterative greedy algorithm to update hyperparameters, ensuring rapid convergence and enhanced signal recovery.
- Empirical results show improved accuracy and computational efficiency compared to traditional sparse Bayesian learning methods.
Bayesian Compressed Sensing With New Sparsity-Inducing Prior
Introduction
The paper under discussion introduces a novel sparsity-inducing prior within a Bayesian Compressed Sensing (BCS) framework. The central focus is the Gaussian shifted-truncated-gamma (G-STG) prior, an advancement over existing hierarchical priors used in Sparse Bayesian Learning (SBL). The proposed prior is designed to enhance the recovery of sparse signals from compressive measurements, overcoming limitations observed in conventional SBL methods. The approach is validated through extensive numerical simulations, which show superior recovery accuracy and computational efficiency compared with existing methods.

Figure 1: PDFs of the G-STG prior for (a) ϵ = 0.1 and varying τ, and (b) τ = 1×10⁻⁸ and varying ϵ, with η = 1.
Gaussian Shifted-Truncated-Gamma (G-STG) Prior
The G-STG prior generalizes previous approaches by parameterizing a sparsity-inducing prior through a two-layer hierarchical model. The first layer places a Gaussian distribution on the signal coefficients, while the second places a shifted-truncated-gamma (STG) distribution on their variances, controlled by shape (ϵ), rate (η), and threshold (τ) parameters. Varying these parameters gives the model the flexibility to induce different degrees of sparsity in signal recovery tasks.
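In symbols, and under the standard SBL convention that the second layer sits on the per-coefficient variances γᵢ, the hierarchy can be sketched as below. The shifted-gamma form and its normalization are assumptions made for illustration; the paper's exact formulation may differ.

```latex
% Two-layer G-STG hierarchy (structural sketch; shifted-gamma form assumed)
p(x_i \mid \gamma_i) = \mathcal{N}(x_i \mid 0, \gamma_i), \qquad
p(\gamma_i \mid \epsilon, \eta, \tau)
  = \frac{\eta^{\epsilon}}{\Gamma(\epsilon)}
    (\gamma_i - \tau)^{\epsilon - 1}\, e^{-\eta (\gamma_i - \tau)},
  \quad \gamma_i \ge \tau .
```

Under this reading, the threshold τ keeps every variance at least τ away from zero, which is consistent with the recommendation below to tie τ to the measurement noise level.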
Key Characteristics:
- Flexibility: By adjusting ϵ and τ, the prior can mimic other well-known priors and promote different degrees of sparsity (see the density sketch after this list).
- Nonconvex Optimization: When ϵ<1, the G-STG prior induces a nonconvex optimization problem, in line with nonconvex penalties known to improve sparse signal recovery.
- Parameter Tuning: The choice of τ is particularly critical and is recommended to be set in accordance with the noise level in the measurements.
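To make the flexibility concrete, the short Python sketch below evaluates the assumed shifted-gamma density from the sketch above for the parameter settings of Figure 1. The function `stg_pdf` and the density form are illustrative assumptions, not code or formulas from the paper.

```python
import numpy as np
from scipy.special import gammaln

def stg_pdf(gamma, eps, eta, tau):
    """Assumed shifted-truncated-gamma density on (tau, inf):
    (gamma - tau) ~ Gamma(shape=eps, rate=eta)."""
    gamma = np.asarray(gamma, dtype=float)
    out = np.zeros_like(gamma)
    m = gamma > tau                          # strict: density diverges at gamma = tau when eps < 1
    z = np.maximum(gamma[m] - tau, 1e-300)   # guard log(0) at the boundary
    log_pdf = eps * np.log(eta) - gammaln(eps) + (eps - 1.0) * np.log(z) - eta * z
    out[m] = np.exp(log_pdf)
    return out

# Figure 1(a): eps = 0.1 fixed, tau varied; Figure 1(b): tau = 1e-8 fixed, eps varied.
g = np.linspace(0.0, 5.0, 1001)
for tau in (1e-8, 0.1, 0.5):
    pdf = stg_pdf(g, eps=0.1, eta=1.0, tau=tau)
    print(f"tau = {tau:g}: max pdf on grid = {pdf.max():.3g}")
```

With ϵ < 1 the density diverges as γ approaches τ, so most prior mass sits just above the threshold; this is the mechanism by which small ϵ promotes sparsity.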
Sparse Bayesian Learning for Signal Recovery
The paper details an innovative iterative procedure for maximizing the modified likelihood function within the BCS framework. This involves decomposing the posterior distribution into terms that allow for efficient approximation through empirical Bayesian strategies. The evidence maximization approach is utilized for parameter estimation, further refined through a fast greedy algorithm for model support selection.
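In standard SBL notation, with measurements y = Φx + n and noise variance σ², evidence maximization targets the log marginal likelihood of the variance hyperparameters γ, and the G-STG layer contributes an additive log-hyperprior penalty. The expression below combines the standard SBL evidence with that penalty; it is a hedged reconstruction of the modified likelihood, not the paper's verbatim objective.

```latex
% SBL log-evidence plus the G-STG hyperprior term (reconstruction)
\mathcal{L}(\boldsymbol{\gamma}) =
  -\tfrac{1}{2}\,\mathbf{y}^{\mathsf T}\mathbf{C}^{-1}\mathbf{y}
  -\tfrac{1}{2}\log\lvert\mathbf{C}\rvert
  + \sum_{i}\log p(\gamma_i \mid \epsilon,\eta,\tau),
\qquad
\mathbf{C} = \sigma^{2}\mathbf{I}
  + \boldsymbol{\Phi}\,\operatorname{diag}(\boldsymbol{\gamma})\,\boldsymbol{\Phi}^{\mathsf T}.
```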
Theoretical Guarantees:
- Global and Local Maxima: The paper extends the theoretical results of SBL, showing that local maxima of the objective occur at sparse solutions and that the global maximum is attained at the maximally sparse solution.
- Improved Convergence: The proposed method uses an evidence-driven algorithm, ensuring stable and rapid convergence to high-quality signal estimates.
Algorithmic Implementation
This approach is implemented as an iterative algorithm that efficiently updates the hyperparameters and the signal estimate. Complexity is kept in check through strategic decomposition, avoiding the high-dimensional matrix inversions typical of conventional SBL approaches.
- Fast Greedy Algorithm: The core of the proposed method is a greedy pursuit strategy, akin to Orthogonal Matching Pursuit (OMP), but with an enhanced basis-selection criterion that leverages the properties of the G-STG prior (a simplified skeleton follows this list).
- Computational Efficiency: By iteratively updating only a subset of model parameters, the algorithm converges to a sparse solution faster than traditional methods.
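As a structural illustration of the greedy pursuit described above, the skeleton below follows the OMP template that the method is likened to: score candidate columns, add the best atom to the support, and refit. The correlation score used here is plain OMP; the paper's algorithm instead ranks candidates by an evidence criterion derived from the G-STG prior, which is not reproduced in this sketch.

```python
import numpy as np

def greedy_pursuit(Phi, y, max_atoms, tol=1e-6):
    """OMP-style greedy support selection (illustrative skeleton only).

    The paper's fast algorithm follows this add-one-atom-per-iteration
    structure but scores candidates with a G-STG evidence criterion
    rather than the raw correlation used below.
    """
    n, m = Phi.shape
    support = []
    residual = y.copy()
    x_hat = np.zeros(m)
    coef = np.zeros(0)
    for _ in range(max_atoms):
        scores = np.abs(Phi.T @ residual)   # score every column against the residual
        scores[support] = -np.inf           # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Refit coefficients on the enlarged support via least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:  # stop once the fit is good enough
            break
    x_hat[support] = coef
    return x_hat, support
```

Because only the selected columns enter the least-squares refit, each iteration avoids a full m×m inversion, consistent with the complexity management described above.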
Figure 2: Performance of the proposed algorithm with respect to different settings of τ with Gaussian and uniform spherical ensembles.
Empirical Validation
The proposed method was validated empirically against existing techniques such as Basis Pursuit, Reweighted ℓ1 Minimization, and other Bayesian approaches, using metrics including RMSE, recovered support size, iteration count, and CPU time (a small metrics sketch follows the list below).
- Simulation Results: Demonstrated improved recovery and faster computation for various synthetic and real signals, including one-dimensional signals and Mondrian images (Figure 3).
- Robustness to Noise: The tunable parameters allow the method to maintain high recovery accuracy across low SNR conditions, an improvement over fixed prior methods.
- Comparison with Existing Methods: The G-STG based method showed enhanced sparsity promotion with comparable or superior accuracy and faster execution times.
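For concreteness, the metrics cited above admit conventional definitions, sketched below; the thresholding rule for support size is an assumption rather than the paper's stated protocol.

```python
import numpy as np

def recovery_metrics(x_true, x_hat, support_tol=1e-6):
    """RMSE and recovered support size under conventional definitions."""
    rmse = np.sqrt(np.mean((x_true - x_hat) ** 2))           # root-mean-square error
    support_size = int(np.sum(np.abs(x_hat) > support_tol))  # nonzeros above tolerance
    return rmse, support_size
```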
Figure 4: Performance of the proposed algorithm with respect to different settings of ϵ.
Conclusion
The introduction of the G-STG prior represents a significant enhancement in the Bayesian treatment of compressed sensing problems, robustly promoting sparsity. Theoretical and empirical validations underscore its potential to outperform existing SBL frameworks, providing a solid foundation for further research on Bayesian methods for sparse signal recovery across domains. Future work could explore broader real-world datasets and develop hyperparameter-selection strategies that automate prior selection in diverse sensing environments.