CMAR-Net: Accurate Cross-Modal 3D SAR Reconstruction of Vehicle Targets with Sparse-Aspect Multi-Baseline Data (2406.04158v4)
Abstract: Sparse-aspect multi-baseline Synthetic Aperture Radar (SAR) three-dimensional (3D) tomography is a crucial remote sensing technique. Compared to full-aspect observation, it requires only a few observation aspects to achieve a sufficiently clear 3D scene reconstruction, providing a cost-effective alternative. In the past, compressive sensing (CS) was the mainstream approach for sparse 3D SAR imaging. Recently, deep learning (DL) has revolutionized this field through its powerful data-driven representation capabilities and efficient inference. However, existing DL methods primarily depend on high-resolution radar images to supervise the training of deep neural networks (DNNs). This unimodal approach precludes the incorporation of complementary information from other data sources, thereby limiting potential improvements in imaging performance. In this paper, we propose a Cross-Modal 3D-SAR Reconstruction Network (CMAR-Net) that enhances 3D SAR imaging by fusing heterogeneous information. Leveraging cross-modal supervision from 2D optical images and error transfer ensured by differentiable rendering, CMAR-Net achieves efficient training and reconstructs highly sparse-aspect multi-baseline SAR data into visually structured and accurate 3D images, particularly for vehicle targets. Extensive experiments on simulated and real-world datasets demonstrate that CMAR-Net significantly outperforms state-of-the-art sparse reconstruction algorithms based on CS and DL, with average improvements of 75.83% in PSNR and 47.85% in SSIM. Furthermore, our method eliminates the need for time-consuming full-aperture data preprocessing and relies solely on computer-rendered optical images, significantly reducing dataset construction costs. This work highlights the potential of cross-modal learning for multi-baseline SAR 3D imaging and introduces a novel framework for radar imaging research.
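
To make the cross-modal supervision concrete, the sketch below shows one plausible way a 2D optical loss can supervise a 3D reconstruction through a differentiable rendering step: the predicted 3D volume is projected to a 2D image by a differentiable operation, so the loss against an optical target backpropagates into the volume. The network `Toy3DReconstructor`, the function `project_to_2d`, the soft max-intensity projection, and all shapes are illustrative assumptions, not the actual CMAR-Net architecture or its renderer, which the abstract does not specify.

```python
# Minimal sketch of cross-modal supervision via a differentiable
# projection, assuming PyTorch. Everything here is a stand-in for
# illustration, not the paper's implementation.
import torch
import torch.nn as nn

class Toy3DReconstructor(nn.Module):
    """Hypothetical stand-in for CMAR-Net: maps a coarse sparse-aspect
    SAR volume to a refined 3D reflectivity volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def project_to_2d(volume, dim=2, temperature=10.0):
    """Differentiable renderer stand-in: a soft max-intensity
    projection along one spatial axis (logsumexp is a smooth,
    differentiable approximation of max), so gradients from a
    2D loss flow back into the 3D volume."""
    return torch.logsumexp(volume * temperature, dim=dim) / temperature

model = Toy3DReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: a coarse SAR input volume and a 2D optical target
# (in practice, a computer-rendered image of the vehicle).
sar_volume = torch.rand(4, 1, 32, 32, 32)      # (B, C, D, H, W)
optical_target = torch.rand(4, 1, 32, 32)      # (B, C, H, W)

pred_volume = model(sar_volume)                # (B, 1, D, H, W)
pred_image = project_to_2d(pred_volume, dim=2) # (B, 1, H, W)
loss = nn.functional.l1_loss(pred_image, optical_target)
loss.backward()   # 2D supervision reaches the 3D prediction
optimizer.step()
```

The key design point this sketch illustrates is that no 3D ground truth is ever needed: as long as the volume-to-image operation is differentiable, inexpensive rendered 2D images suffice as the training signal.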