Unsupervised Representation Learning for 3D MRI Super Resolution with Degradation Adaptation (2205.06891v5)

Published 13 May 2022 in eess.IV, cs.CV, and physics.med-ph

Abstract: High-resolution (HR) magnetic resonance imaging is critical in aiding doctors in their diagnoses and image-guided treatments. However, acquiring HR images can be time-consuming and costly. Consequently, deep learning-based super-resolution reconstruction (SRR) has emerged as a promising solution for generating super-resolution (SR) images from low-resolution (LR) images. Unfortunately, training such neural networks requires aligned authentic HR and LR image pairs, which are challenging to obtain due to patient movements during and between image acquisitions. While rigid movements of hard tissues can be corrected with image registration, aligning deformed soft tissues is complex, making it impractical to train neural networks with authentic HR and LR image pairs. Previous studies have focused on SRR using authentic HR images and down-sampled synthetic LR images. However, the difference in degradation representations between synthetic and authentic LR images suppresses the quality of SR images reconstructed from authentic LR images. To address this issue, we propose a novel Unsupervised Degradation Adaptation Network (UDEAN). Our network consists of a degradation learning network and an SRR network. The degradation learning network downsamples the HR images using the degradation representation learned from the misaligned or unpaired LR images. The SRR network then learns the mapping from the down-sampled HR images to the original ones. Experimental results show that our method outperforms state-of-the-art networks and is a promising solution to the challenges in clinical settings.
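
The abstract describes a two-network design: a degradation learning network that downsamples HR volumes using a degradation representation learned from misaligned or unpaired authentic LR volumes, and an SRR network trained to map those downsampled volumes back to the original HR volumes. The PyTorch sketch below only illustrates that two-stage training cycle; the layer counts, channel widths, and loss are illustrative assumptions, and the unpaired objective that steers the degradation network toward the authentic LR distribution is omitted, so this is a minimal sketch rather than the authors' UDEAN implementation.

```python
# Minimal sketch of the two-network idea from the abstract (degradation
# learning network + SRR network). All architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DegradationNet(nn.Module):
    """Maps HR volumes to synthetic LR volumes; in the paper this network is
    additionally guided by unpaired authentic LR data (not shown here)."""
    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, hr):
        feat = self.body(hr)
        # Downsample by the target factor to produce a synthetic LR volume.
        return F.interpolate(feat, scale_factor=1 / self.scale,
                             mode="trilinear", align_corners=False)


class SRRNet(nn.Module):
    """Reconstructs SR volumes from the (synthetic) LR volumes."""
    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, lr):
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode="trilinear", align_corners=False)
        return up + self.body(up)  # residual refinement of the upsampled input


if __name__ == "__main__":
    hr = torch.randn(1, 1, 32, 32, 32)   # toy HR volume (batch, channel, D, H, W)
    degrade, srr = DegradationNet(), SRRNet()
    lr_syn = degrade(hr)                  # learned downsampling of the HR volume
    sr = srr(lr_syn)                      # SR reconstruction from synthetic LR
    recon_loss = F.l1_loss(sr, hr)        # self-supervised cycle on HR volumes
    print(lr_syn.shape, sr.shape, recon_loss.item())
```

Because the SRR network only ever sees LR volumes produced by the learned degradation, at test time it can be applied directly to authentic LR volumes whose degradation that network has been trained to imitate; the omitted unpaired matching term is what makes that imitation faithful.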
