
SSPFusion: A Semantic Structure-Preserving Approach for Infrared and Visible Image Fusion (2309.14745v2)

Published 26 Sep 2023 in cs.CV

Abstract: Most existing learning-based infrared and visible image fusion (IVIF) methods introduce substantial redundant information into the fused images, e.g., blurring edges or rendering objects unrecognizable to detectors. To alleviate these issues, we propose a semantic structure-preserving approach for IVIF, namely SSPFusion. First, we design a Structural Feature Extractor (SFE) to extract the structural features of infrared and visible images. Then, we introduce a multi-scale Structure-Preserving Fusion (SPF) module to fuse the structural features of infrared and visible images while maintaining the consistency of semantic structures between the fused and source images. Owing to these two effective modules, our method is able to generate high-quality fused images from pairs of infrared and visible images, which can boost the performance of downstream computer-vision tasks. Experimental results on three benchmarks demonstrate that our method outperforms eight state-of-the-art image fusion methods in both qualitative and quantitative evaluations. The code for our method, along with additional comparison results, will be made available at: https://github.com/QiaoYang-CV/SSPFUSION.
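The abstract describes a two-stage pipeline: extract structural features from each modality (SFE), then fuse them at multiple scales while preserving semantic structure (SPF). As an illustration only, not the authors' code, a hand-crafted, single-scale analogue of structure-weighted fusion might look like the sketch below, using gradient magnitude as a crude stand-in for the learned SFE features; the function names and weighting scheme are assumptions for exposition.

```python
import numpy as np

def structural_map(img, eps=1e-8):
    """Gradient magnitude as a crude stand-in for the paper's learned
    Structural Feature Extractor (SFE)."""
    gy, gx = np.gradient(img.astype(np.float64))  # row- and column-wise gradients
    return np.hypot(gx, gy) + eps  # eps avoids a 0/0 weight in flat regions

def fuse_structure_weighted(ir, vis):
    """Fuse two registered grayscale images, weighting each pixel by the
    relative structural strength of its source. This is a single-scale,
    hand-crafted simplification; the paper's SPF module instead fuses
    learned features at multiple scales."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    w_ir = structural_map(ir)
    w_vis = structural_map(vis)
    w = w_ir / (w_ir + w_vis)      # per-pixel weight in (0, 1)
    return w * ir + (1.0 - w) * vis  # convex combination: stays in input range
```

Because each output pixel is a convex combination of the two inputs, the fused image never exceeds the dynamic range of its sources; the learned SPF module targets the same structure-preservation goal but in feature space rather than pixel space.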
