
Abstract

In-scanner motion degrades the quality of magnetic resonance imaging (MRI), thereby reducing its utility in the detection of clinically relevant abnormalities. We introduce a deep learning-based MRI artifact reduction model (DMAR) to localize and correct head motion artifacts in brain MRI scans. Our approach integrates recent advances in object detection and noise reduction in computer vision. Specifically, DMAR employs a two-stage approach: in the first stage, degraded regions are detected using a Single Shot MultiBox Detector (SSD), and in the second, the artifacts within the detected regions are reduced using a convolutional autoencoder (CAE). We further introduce a set of novel data augmentation techniques to address the high dimensionality of MRI images and the scarcity of available data, which allowed us to train the model on a large synthetic dataset of 225,000 images generated from 375 whole-brain T1-weighted MRI scans. DMAR visibly reduces image artifacts when applied both to synthetic test images and to 55 real-world motion-affected slices from 18 subjects in the multi-center Autism Brain Imaging Data Exchange (ABIDE) study. Quantitatively, depending on the level of degradation, our model achieves a 27.8%–48.1% reduction in RMSE and a 2.88–5.79 dB gain in PSNR on a 5,000-sample set of synthetic images. For real-world artifact-affected scans from ABIDE, our model reduced the variance of voxel intensity within artifact-affected brain regions (p = 0.014).
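
The two-stage design described above (SSD localization followed by CAE correction) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: the torchvision SSD, the score threshold, the patch size, and the autoencoder layout are all placeholders chosen for the example.

```python
# Illustrative sketch of a two-stage detect-then-correct pipeline.
# The torchvision SSD, score threshold, patch size, and autoencoder
# layout below are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ArtifactCAE(nn.Module):
    """Convolutional autoencoder mapping an artifact-affected patch to a
    corrected patch (layer sizes are placeholders)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reduce_artifacts(slice_2d, detector, cae, patch=64, score_thr=0.5):
    """Stage 1: localize degraded regions with the detector.
    Stage 2: correct each region with the autoencoder and paste it back."""
    corrected = slice_2d.clone()
    detector.eval()
    with torch.no_grad():
        # Torchvision detectors expect a 3-channel image with values in [0, 1].
        det_out = detector([slice_2d.expand(3, -1, -1)])[0]
        boxes = det_out["boxes"][det_out["scores"] > score_thr].round().long()
        for x1, y1, x2, y2 in boxes.tolist():
            if x2 - x1 < 2 or y2 - y1 < 2:
                continue  # skip degenerate boxes
            region = corrected[:, y1:y2, x1:x2].unsqueeze(0)  # (1, 1, h, w)
            cleaned = cae(F.interpolate(region, size=(patch, patch)))
            corrected[:, y1:y2, x1:x2] = F.interpolate(
                cleaned, size=(y2 - y1, x2 - x1)
            ).squeeze(0)
    return corrected


if __name__ == "__main__":
    # Untrained models and a random slice: demonstrates data flow only.
    ssd = torchvision.models.detection.ssd300_vgg16(
        weights=None, weights_backbone=None, num_classes=2
    )
    cae = ArtifactCAE()
    slice_2d = torch.rand(1, 256, 256)  # single-channel T1-weighted slice
    print(reduce_artifacts(slice_2d, ssd, cae).shape)
```

In DMAR the two stages are trained on the synthetic dataset described in the abstract; the untrained models here serve only to show how detection and correction compose.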
