
ReFusion: Learning Image Fusion from Reconstruction with Learnable Loss via Meta-Learning

Image fusion aims to combine information from multiple source images into a single image with more comprehensive information. The major challenges for deep learning-based image fusion algorithms are the lack of a definitive ground truth and of a corresponding distance measure; as a result, current manually designed loss functions limit the flexibility of the model and its generalizability across fusion tasks. To overcome these limitations, we introduce a unified image fusion framework based on meta-learning, named ReFusion, which provides a learning paradigm that obtains the optimal fusion loss for various fusion tasks by reconstructing the source images. In contrast to existing methods, ReFusion employs a parameterized loss function that is dynamically adjusted by the training framework according to the specific scenario and task. ReFusion consists of three modules: a fusion module, a loss proposal module, and a source reconstruction module. To ensure that the fusion module maximally preserves the information of the source images, so that they can be reconstructed from the fused image, we adopt a meta-learning strategy to train the loss proposal module using the reconstruction loss. The fusion module, in turn, is updated with the fusion loss proposed by the loss proposal module. The alternating updates of the three modules mutually reinforce one another, yielding an appropriate fusion loss for each task and satisfactory fusion results. Extensive experiments demonstrate that ReFusion is capable of adapting to various tasks, including infrared-visible, medical, multi-focus, and multi-exposure image fusion. The code will be released.
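To make the alternating, bi-level update concrete, below is a minimal PyTorch sketch of the training paradigm the abstract describes. The module architectures, the specific form of the parameterized fusion loss (here a learnable per-pixel weighting between two sources), the MAML-style single inner gradient step, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of ReFusion-style alternating updates (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

class FusionNet(nn.Module):        # fusion module: two sources -> one fused image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

class ReconNet(nn.Module):         # source reconstruction module: fused -> both sources
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1), nn.Sigmoid())
    def forward(self, f):
        return self.net(f).chunk(2, dim=1)

class LossProposal(nn.Module):     # loss proposal module: predicts per-pixel source weights
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))  # w in [0,1]: weight on source a

def fusion_loss(fused, a, b, w):
    # Hypothetical parameterized fusion loss: weighted fidelity to each source.
    return (w * (fused - a) ** 2 + (1 - w) * (fused - b) ** 2).mean()

fuse, recon, propose = FusionNet(), ReconNet(), LossProposal()
opt_fuse = torch.optim.Adam(fuse.parameters(), lr=1e-4)
opt_recon = torch.optim.Adam(recon.parameters(), lr=1e-4)
opt_prop = torch.optim.Adam(propose.parameters(), lr=1e-5)
inner_lr = 1e-3

a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # dummy source pair

# --- meta step: update the loss proposal module through a virtual fusion update ---
params = dict(fuse.named_parameters())
inner = fusion_loss(fuse(a, b), a, b, propose(a, b))
grads = torch.autograd.grad(inner, tuple(params.values()), create_graph=True)
updated = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
fused_new = functional_call(fuse, updated, (a, b))  # forward with virtually updated weights
ra, rb = recon(fused_new)
meta_loss = F.mse_loss(ra, a) + F.mse_loss(rb, b)  # reconstruction loss drives the proposal
opt_prop.zero_grad(); meta_loss.backward(); opt_prop.step()

# --- fusion step: update the fusion module with the (now fixed) proposed loss ---
loss_f = fusion_loss(fuse(a, b), a, b, propose(a, b).detach())
opt_fuse.zero_grad(); loss_f.backward(); opt_fuse.step()

# --- reconstruction step: update the reconstruction module on the fused output ---
ra, rb = recon(fuse(a, b).detach())
loss_r = F.mse_loss(ra, a) + F.mse_loss(rb, b)
opt_recon.zero_grad(); loss_r.backward(); opt_recon.step()
```

The `create_graph=True` inner step is the standard second-order meta-learning trick: the reconstruction loss is differentiated through the fusion module's virtual update, so its gradient reaches the loss proposal module, which only influences that update through the loss it proposed.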
