
Abstract

With the continuous development of underwater vision technology, ever more remote sensing images can be acquired. In underwater scenes, sonar sensors are currently the most effective remote perception devices, and the sonar images they capture provide rich environmental information. To analyze a given scene, we often need to merge sonar images taken at different times, at various sonar frequencies, and from distinct viewpoints. These conditions, however, introduce nonlinear intensity differences between the sonar images, which render traditional matching methods almost ineffective. This paper proposes a nonlinear-intensity sonar image matching method that combines local feature points with deep convolutional features. The method has two key advantages: (i) we generate data samples associated with local feature points based on a self-learning idea; (ii) we use a convolutional neural network (CNN) with a Siamese architecture to measure the similarity of local positions in a sonar image pair. Our method encapsulates the feature extraction and feature matching stages in a single model, directly learns the mapping from image patch pairs to matching labels, and accomplishes the matching task in a near end-to-end manner. Feature matching experiments are carried out on sonar images acquired by an autonomous underwater vehicle (AUV) in a real underwater environment. The results show that our method achieves better matching performance and strong robustness.
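The abstract does not include code, so the following is only an illustrative sketch of the Siamese matching idea it describes: two image patches pass through the *same* feature extractor, and a distance between their descriptors is converted into a match score. The fixed linear projection here is a hypothetical stand-in for the paper's learned CNN, and all names (`embed`, `match_score`) are assumptions, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared "embedding": a single fixed linear projection standing
# in for the paper's learned convolutional feature extractor.
W = rng.standard_normal((16, 64))  # maps a flattened 8x8 patch to a 16-dim descriptor

def embed(patch):
    """Map an 8x8 image patch to a descriptor using the shared weights."""
    return W @ patch.reshape(-1)

def match_score(p1, p2):
    """Siamese similarity: both patches go through the SAME embedding;
    the descriptor distance is squashed into a (0, 1] match score."""
    d = np.linalg.norm(embed(p1) - embed(p2))
    return 1.0 / (1.0 + d)  # higher means more likely a matching pair

# Toy patches: the "noisy" patch mimics the same location seen under a
# small intensity change; "other" mimics an unrelated location.
patch = rng.standard_normal((8, 8))
noisy = patch + 0.05 * rng.standard_normal((8, 8))
other = rng.standard_normal((8, 8))

assert match_score(patch, noisy) > match_score(patch, other)
```

In the paper's method the embedding weights would be trained end-to-end on patch pairs labeled match/non-match (the self-generated samples around local feature points), rather than fixed as in this sketch.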
