Self-supervised SAR-optical Data Fusion and Land-cover Mapping using Sentinel-1/-2 Images (2103.05543v3)

Published 9 Mar 2021 in eess.IV

Abstract: The effective combination of the complementary information provided by the huge amount of unlabeled multi-sensor data (e.g., Synthetic Aperture Radar (SAR) and optical images) is a critical topic in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multi-view data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping. SAR and optical images are fused using a multi-view contrastive loss at the image level and the super-pixel level, following early, intermediate, and late fusion strategies individually. For the land-cover mapping task, we assign each pixel a land-cover class through the joint use of pre-trained features and the spectral information of the image itself. Experimental results show that the proposed approach achieves comparable accuracy while reducing the feature dimension with respect to the image-level contrastive learning method. Among the three fusion strategies, intermediate fusion achieves the best performance. Combining the pixel-level fusion approach with spectral indices leads to further improvements on the land-cover mapping task with respect to the image-level fusion approach, especially when few pseudo labels are available.
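
The central mechanism described in the abstract is a multi-view contrastive objective that pulls co-located SAR and optical representations together while pushing apart non-matching pairs. The sketch below illustrates that idea in PyTorch under stated assumptions: the two-branch encoder, the feature dimensions, and the NT-Xent-style symmetric loss are illustrative choices, not the paper's exact architecture or its super-pixel-level fusion implementation.

```python
# A minimal sketch of a SAR-optical multi-view contrastive setup (illustrative only).
# Encoder layout, channel counts, and loss form are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamEncoder(nn.Module):
    """Separate encoders for SAR and optical patches (intermediate-fusion style)."""

    def __init__(self, sar_channels=2, opt_channels=13, feat_dim=64):
        super().__init__()

        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )

        self.sar_branch = branch(sar_channels)
        self.opt_branch = branch(opt_channels)

    def forward(self, sar, opt):
        return self.sar_branch(sar), self.opt_branch(opt)


def multiview_contrastive_loss(z_sar, z_opt, temperature=0.1):
    """NT-Xent-style loss: co-located SAR/optical embeddings are positives,
    all other pairings within the batch act as negatives."""
    z_sar = F.normalize(z_sar, dim=1)
    z_opt = F.normalize(z_opt, dim=1)
    logits = z_sar @ z_opt.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_sar.size(0), device=z_sar.device)
    # Symmetric cross-entropy over both view directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Random tensors standing in for Sentinel-1 (e.g., VV/VH) and Sentinel-2 patches.
    model = TwoStreamEncoder()
    sar = torch.randn(8, 2, 32, 32)
    opt = torch.randn(8, 13, 32, 32)
    loss = multiview_contrastive_loss(*model(sar, opt))
    print(loss.item())
```

In this sketch the loss operates on whole-patch (image-level) embeddings; applying the same objective to super-pixel-level features, as the paper proposes, would replace the global pooling with per-super-pixel aggregation before the projection.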

Citations (1)
