
Abstract

The Transformer deep learning model has achieved remarkable success in hyperspectral image (HSI) restoration tasks by leveraging Spectral and Spatial Self-Attention (SA) mechanisms. However, applying these designs to remote sensing (RS) HSI restoration, which involves far more spectral bands than typical HSI (e.g., the ICVL dataset has 31 bands), is challenging due to the enormous computational complexity of Spectral and Spatial SA. To address this problem, we propose Hyper-Restormer, a lightweight and effective Transformer-based architecture for RS HSI restoration. First, we introduce a novel Lightweight Spectral-Spatial (LSS) Transformer Block that utilizes both Spectral and Spatial SA to capture long-range dependencies in the input feature maps. Additionally, we employ a novel Lightweight Locally-enhanced Feed-Forward Network (LLFF) to further enhance local contextual information. LSS Transformer Blocks then compose a Single-stage Lightweight Spectral-Spatial Transformer (SLSST), which exploits the low-rank property of RS HSI to decompose the feature maps into basis and abundance components, enabling Spectral and Spatial SA at low computational cost. Finally, the proposed Hyper-Restormer cascades several SLSSTs in a stepwise manner to progressively refine RS HSI restoration from coarse to fine. Extensive experiments on various RS HSI restoration tasks, including denoising, inpainting, and super-resolution, demonstrate that Hyper-Restormer outperforms other state-of-the-art methods.
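The low-rank decomposition at the heart of the SLSST can be illustrated with a minimal numpy sketch. This is only an analogy under stated assumptions: the paper learns its basis/abundance split end-to-end inside the network, whereas here a truncated SVD stands in for it; the function name and shapes are hypothetical, not the authors' implementation. The point it shows is why the decomposition cuts cost: attention over an r-dimensional basis (or r abundance maps) scales with r rather than with the full band count C.

```python
import numpy as np

def lowrank_decompose(X, rank):
    """Split a flattened HSI feature map X (bands x pixels) into a
    spectral basis (bands x rank) and abundance maps (rank x pixels)
    via truncated SVD. Illustrative stand-in for the paper's learned
    decomposition, not the actual method."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = U[:, :rank] * s[:rank]   # bands x rank: spectral signatures
    abundance = Vt[:rank, :]         # rank x pixels: per-pixel mixing weights
    return basis, abundance

# Toy RS HSI feature map: 128 bands over a 32x32 tile, built to be
# exactly rank-8, mimicking the low-rank structure of RS HSI.
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 8)) @ rng.standard_normal((8, 32 * 32))

basis, abundance = lowrank_decompose(X, rank=8)
X_rec = basis @ abundance
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
# rel_err is near zero: 8 components reproduce all 128 bands, so
# self-attention can operate on 8-dimensional factors instead of 128 bands.
```

Because real RS HSI is only approximately low-rank, a small rank keeps the reconstruction error modest while shrinking the attention matrices dramatically (roughly (C/r)^2 fewer entries for spectral SA).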
