
Abstract

Image fusion aims to combine information from multiple source images into a single image with more comprehensive content. The main challenges for deep learning-based image fusion algorithms are the lack of a definitive ground truth and of a corresponding distance measure; current manually designed loss functions constrain the model's flexibility and its generalizability across unified fusion tasks. To overcome these limitations, we introduce ReFusion, a unified meta-learning-based image fusion framework that provides a learning paradigm for obtaining an optimal fusion loss for various fusion tasks by reconstructing the source images. In contrast to existing methods, ReFusion employs a parameterized loss function that the training framework adjusts dynamically according to the specific scenario and task. ReFusion consists of three components: a fusion module, a loss proposal module, and a source reconstruction module. To ensure that the fusion module maximally preserves the information of the source images, so that the source images can be reconstructed from the fused image, we adopt a meta-learning strategy that trains the loss proposal module with the reconstruction loss. The fusion module is updated with the fusion loss proposed by the loss proposal module. The alternating updates of the three modules mutually facilitate one another, so that an appropriate fusion loss is proposed for each task and satisfactory fusion results are obtained. Extensive experiments demonstrate that ReFusion adapts to various tasks, including infrared-visible, medical, multi-focus, and multi-exposure image fusion. The code will be released.
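The alternating three-module update described above can be illustrated with a minimal training-loop sketch. This is a hypothetical reconstruction from the abstract alone, not the authors' released code: the toy architectures, the per-pixel-weight parameterization of the proposed loss, and the one-step meta-gradient (implemented here with torch.func.functional_call) are all assumptions, and the learning rates are arbitrary.

```python
# Minimal sketch of a ReFusion-style alternating update, assuming toy
# stand-in modules; the real architectures, loss parameterization, and
# meta-gradient details are in the paper, not this illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class FusionModule(nn.Module):
    """Toy fusion network: maps two source images to one fused image."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, a, b):
        return torch.sigmoid(self.net(torch.cat([a, b], dim=1)))

class LossProposalModule(nn.Module):
    """Toy parameterized fusion loss: learned per-pixel source weights."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Conv2d(2 * ch, 2, kernel_size=3, padding=1)

    def forward(self, fused, a, b):
        w = torch.softmax(self.net(torch.cat([a, b], dim=1)), dim=1)
        return (w[:, :1] * (fused - a) ** 2
                + w[:, 1:] * (fused - b) ** 2).mean()

class ReconstructionModule(nn.Module):
    """Toy decoder: recovers both source images from the fused image."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Conv2d(ch, 2 * ch, kernel_size=3, padding=1)

    def forward(self, fused):
        out = torch.sigmoid(self.net(fused))
        return out[:, :1], out[:, 1:]

fusion, proposal, recon = FusionModule(), LossProposalModule(), ReconstructionModule()
opt_f = torch.optim.Adam(fusion.parameters(), lr=1e-4)
opt_p = torch.optim.Adam(proposal.parameters(), lr=1e-4)
opt_r = torch.optim.Adam(recon.parameters(), lr=1e-4)

def train_step(a, b, inner_lr=1e-4):
    # 1) Fusion module: update with the currently proposed fusion loss.
    opt_f.zero_grad()
    fusion_loss = proposal(fusion(a, b), a, b)
    fusion_loss.backward()
    opt_f.step()

    # 2) Reconstruction module: the sources should be recoverable from
    #    the fused image (information preservation).
    opt_r.zero_grad()
    ra, rb = recon(fusion(a, b).detach())
    rec_loss = F.mse_loss(ra, a) + F.mse_loss(rb, b)
    rec_loss.backward()
    opt_r.step()

    # 3) Loss proposal module (meta-step): take a differentiable one-step
    #    inner update of the fusion module under the proposed loss, score
    #    the result with the reconstruction loss, and backpropagate
    #    through the inner step into the proposal parameters.
    params = dict(fusion.named_parameters())
    inner_loss = proposal(functional_call(fusion, params, (a, b)), a, b)
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    updated = {k: v - inner_lr * g
               for (k, v), g in zip(params.items(), grads)}
    ra, rb = recon(functional_call(fusion, updated, (a, b)))
    meta_loss = F.mse_loss(ra, a) + F.mse_loss(rb, b)
    opt_p.zero_grad()
    meta_loss.backward()
    opt_p.step()

# Example: one step on random stand-in data.
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
train_step(a, b)
```

The essential point is step 3: the proposal module influences the reconstruction objective only through the fusion module's update, so its gradient must flow through that inner step, which is why the inner gradients are taken with create_graph=True.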
