
Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement (2404.17400v2)

Published 26 Apr 2024 in cs.CV, cs.AI, and eess.IV

Abstract: Low-light remote sensing images generally feature high resolution and high spatial complexity, with surface features distributed continuously in space. This scene continuity gives rise to extensive long-range correlations in the spatial domain. Convolutional neural networks, which build long-distance modeling out of stacked local operations, struggle to establish such long-range correlations, while transformer-based methods that focus on global information incur high computational complexity when processing high-resolution remote sensing images. The Fourier transform, by contrast, computes global information without introducing a large number of parameters, enabling a network to capture the overall image structure and establish long-range correlations more efficiently. We therefore propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement. Specifically, the challenging task of low-light enhancement is divided into two more manageable sub-tasks: the first stage learns amplitude information to restore image brightness, and the second stage learns phase information to refine details. To facilitate information exchange between the two stages, we design an information fusion affine block that combines features from different stages and scales. Additionally, we construct two dark-light remote sensing datasets to address the current lack of benchmarks for dark-light remote sensing image enhancement. Extensive evaluations show that our method outperforms existing state-of-the-art methods. The code is available at https://github.com/iijjlk/DFFN.
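
For readers unfamiliar with the amplitude/phase split that motivates DFFN's two-stage design, the NumPy sketch below illustrates the standard Fourier decomposition the abstract refers to: scaling only the amplitude spectrum raises global brightness, while the untouched phase preserves scene structure. This is an illustrative sketch, not the authors' implementation; the function names and the toy image are hypothetical.

```python
# Illustrative sketch (not the authors' DFFN code): decompose an image
# into Fourier amplitude and phase, the two quantities the paper's two
# stages learn. Amplitude largely encodes global brightness/energy;
# phase encodes spatial structure.
import numpy as np

def fft_decompose(img: np.ndarray):
    """Return (amplitude, phase) of a 2-D image's Fourier spectrum."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def fft_compose(amplitude: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Rebuild a spatial image from an amplitude and phase spectrum."""
    spec = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))

# Toy demonstration: amplifying the amplitude alone (roughly stage 1's
# role) raises global luminance; the phase keeps the layout intact.
rng = np.random.default_rng(0)
low_light = rng.random((64, 64)) * 0.1        # dim synthetic "image"
amp, pha = fft_decompose(low_light)
enhanced = fft_compose(amp * 4.0, pha)        # scale amplitude only
print(low_light.mean(), enhanced.mean())      # mean brightness rises ~4x
```

In DFFN itself the amplitude and phase components are predicted by learned networks rather than scaled by a constant, but the decomposition above is why the two sub-tasks (brightness restoration, then detail refinement) can be separated cleanly.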
