
Motion correction in MRI using deep learning and a novel hybrid loss function

(arXiv:2210.14156)
Published Oct 19, 2022 in eess.IV and cs.CV

Abstract

Purpose: To develop and evaluate a deep learning-based method (MC-Net) to suppress motion artifacts in brain magnetic resonance imaging (MRI).

Methods: MC-Net was derived from a U-Net combined with a two-stage multi-loss function. T1-weighted axial brain images contaminated with synthetic motion artifacts were used to train the network. Evaluation used simulated T1- and T2-weighted axial, coronal, and sagittal images unseen during training, as well as T1-weighted images with motion artifacts from real scans. Performance indices included the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and visual reading scores assigned by two clinical readers.

Results: MC-Net outperformed the other methods implemented, in terms of PSNR and SSIM, on the T1-weighted axial test set. MC-Net significantly improved the quality of all T1-weighted images (in all orientations, for simulated as well as real motion artifacts), on both quantitative measures and visual scores. However, MC-Net performed poorly on images of an untrained contrast (T2-weighted).

Conclusion: The proposed two-stage multi-loss MC-Net can effectively suppress motion artifacts in brain MRI without compromising image context. Given the efficiency of MC-Net (single-image processing time ~40 ms), it can potentially be used in real clinical settings. To facilitate further research, the code and trained model are available at https://github.com/MRIMoCo/DL_Motion_Correction.
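The abstract names the main ingredients (a U-Net backbone, a hybrid "two-stage multi-loss" training objective, and PSNR/SSIM as evaluation metrics) but not their exact composition; that is documented in the linked repository. The following is a minimal sketch, not the authors' implementation, of one common way to build such a hybrid objective: a weighted sum of a pixel-wise L1 term and an SSIM term, together with the standard scikit-image calls for the reported metrics. The weight alpha, the box-window SSIM, and the HybridLoss name are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, NOT the authors' released code (see the GitHub link above).
# Illustrates one plausible hybrid objective for an image-restoration U-Net:
# a weighted sum of pixel-wise L1 and (1 - SSIM). The weight `alpha`, the
# box-window SSIM, and the class name `HybridLoss` are assumptions.
import torch
import torch.nn.functional as F


def ssim(x: torch.Tensor, y: torch.Tensor, window: int = 11) -> torch.Tensor:
    """Mean SSIM for batches of single-channel images scaled to [0, 1]."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()


class HybridLoss(torch.nn.Module):
    """Weighted L1 + (1 - SSIM), a common hybrid for restoration networks."""

    def __init__(self, alpha: float = 0.8):  # hypothetical weighting
        super().__init__()
        self.alpha = alpha

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        l1 = F.l1_loss(pred, target)
        return (1.0 - self.alpha) * l1 + self.alpha * (1.0 - ssim(pred, target))


if __name__ == "__main__":
    # Toy check on random "images"; real inputs would be motion-corrupted and
    # motion-free T1-weighted slices.
    corrupted = torch.rand(2, 1, 256, 256)
    clean = torch.rand(2, 1, 256, 256)
    print("hybrid loss:", HybridLoss()(corrupted, clean).item())

    # The reported quantitative metrics (PSNR, SSIM) via scikit-image.
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    ref = clean[0, 0].numpy()
    out = corrupted[0, 0].numpy()
    print("PSNR:", peak_signal_noise_ratio(ref, out, data_range=1.0))
    print("SSIM:", structural_similarity(ref, out, data_range=1.0))
```

Pairing an SSIM term with L1 is a natural choice given that SSIM is one of the reported evaluation metrics; the paper's actual two-stage multi-loss schedule may combine different terms or stages, so treat this only as an orientation aid.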
