Abstract

The task of UAV-view geo-localization is to estimate the location of a query satellite/drone image by matching it against a reference dataset of drone/satellite images. Although tremendous strides have been made in feature alignment between satellite and drone views, large inter-class and intra-class differences caused by changes in viewpoint, altitude, and lighting remain a major challenge. In this paper, a style-alignment-based dynamic observation method for UAV-view geo-localization is proposed to address these challenges from two perspectives: visual style transformation and surrounding noise control. Specifically, we introduce a style alignment strategy to transform the diverse visual styles of drone-view images into a unified satellite-image visual style. We then design a dynamic observation module that evaluates the spatial distribution of images by mimicking human observation habits. It is built around a hierarchical attention block (HAB) with a dual-square-ring stream structure, which reduces surrounding noise and geographical deformation. In addition, we propose a deconstruction loss that pushes apart features with different geo-tags and extracts knowledge from unmatched images through correlation calculation. Experimental results demonstrate the state-of-the-art performance of our model on benchmark datasets. In particular, on University-1652 our results surpass the previous best (FSRA) while requiring 2x fewer parameters. Code will be released at https://github.com/Xcco1/SA_DOM
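As a rough illustration of the deconstruction loss described in the abstract, the minimal sketch below penalizes high pairwise correlation between features of images whose geo-tags differ, so unmatched samples are pushed apart in feature space. The function name, margin hinge, and tensor shapes are assumptions made for illustration; the paper's exact formulation is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def deconstruction_loss(features: torch.Tensor, geo_tags: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """Hypothetical sketch: penalize correlated features of images with different geo-tags.

    features: (N, D) embedding batch; geo_tags: (N,) integer geo-tag labels.
    """
    # L2-normalize so the dot product acts as a correlation/cosine score.
    feats = F.normalize(features, dim=1)
    corr = feats @ feats.t()  # (N, N) pairwise correlation matrix

    # Mask selecting pairs whose geo-tags differ (unmatched image pairs).
    diff = (geo_tags.unsqueeze(0) != geo_tags.unsqueeze(1)).float()

    # Hinge: only correlations above the margin contribute to the loss.
    penalty = torch.clamp(corr - margin, min=0.0)
    return (penalty * diff).sum() / diff.sum().clamp(min=1.0)

# Usage example with random embeddings and geo-tags.
feats = torch.randn(8, 512)
tags = torch.randint(0, 4, (8,))
loss = deconstruction_loss(feats, tags)
```

In practice a loss of this kind would be combined with the usual classification or metric-learning objective on matched pairs; the snippet only covers the "push away unmatched geo-tags" term.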
