Abstract

Purpose: To develop an open-source and easy-to-use segmentation model that can automatically and robustly segment most major anatomical structures in MR images independently of the MR sequence. Materials and Methods: In this study we extended the capabilities of TotalSegmentator to MR images. 298 MR scans and 227 CT scans were used to segment 59 anatomical structures (20 organs, 18 bones, 11 muscles, 7 vessels, 3 tissue types) relevant for use cases such as organ volumetry, disease characterization, and surgical planning. The MR and CT images were randomly sampled from routine clinical studies and thus represent a real-world dataset (different ages, pathologies, scanners, body parts, sequences, contrasts, echo times, repetition times, field strengths, slice thicknesses and sites). We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients (Dice) to evaluate the model's performance. Results: The model showed a Dice score of 0.824 (CI: 0.801, 0.842) on the test set, which included a wide range of clinical data with major pathologies. The model significantly outperformed two other publicly available segmentation models (Dice score, 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001). On the CT image test set of the original TotalSegmentator paper it almost matched the performance of the original TotalSegmentator (Dice score, 0.960 versus 0.970; p<0.001). Conclusion: Our proposed model extends the capabilities of TotalSegmentator to MR images. The annotated dataset (https://zenodo.org/doi/10.5281/zenodo.11367004) and open-source toolkit (https://www.github.com/wasserth/TotalSegmentator) are publicly available.

Overview

  • TotalSegmentator MRI addresses the challenge of manual segmentation in MRI by extending the TotalSegmentator framework to segment 59 anatomical structures automatically.

  • The model utilizes a combined dataset of 298 MRI and 227 CT scans, leveraging the nnU-Net framework to train an adaptive and robust segmentation model.

  • The model demonstrates superior performance compared to existing solutions, achieving a Dice score of 0.824 on the MRI test set and showing strong cross-modality robustness.

TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR Images

The paper "TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR Images" addresses a critical gap in the current state of automated medical image segmentation by extending the functionalities of the existing TotalSegmentator framework to Magnetic Resonance Imaging (MRI). This research seeks to alleviate the labor-intensive and error-prone process of manual MRI segmentation, enhancing the workflow in clinical and research environments.

Motivation and Context

Magnetic Resonance Imaging is paramount in medical diagnostics for its detailed imaging of the human body without ionizing radiation. However, manual segmentation is cumbersome and inconsistent due to variable interrater reliability. Existing automated segmentation tools such as nnU-Net have made strides, particularly in CT image segmentation, but the diversity of MRI protocols introduces additional complexity that these tools struggle to handle.

Data and Methods

The dataset for model training included 298 MRI scans and 227 CT scans, ensuring a rich variety of imaging parameters and anatomical diversity. This methodology leverages the robustness of the nnU-Net framework, known for its adaptive architectural and preprocessing configurations. By employing an iterative learning approach, the research team generated a comprehensive ground truth for 59 anatomical structures across various MRI sequences.

Experimental Results

The model's performance, evaluated using the Dice similarity coefficient (Dice), demonstrated robust segmentation capabilities. On the MRI test set, which encompassed diverse clinical data with major pathologies, the model achieved a Dice score of 0.824 [CI: 0.801, 0.842]. This performance significantly surpassed that of other publicly available models such as MRSegmentator and AMOS, which scored 0.762 and 0.542 respectively (p<0.001 for both comparisons).
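The Dice similarity coefficient used throughout the evaluation measures voxel-wise overlap between a predicted mask and the ground-truth mask. A minimal NumPy sketch of the metric (the toy masks below are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D "segmentations": 3 predicted foreground voxels, 2 in ground truth,
# 2 voxels overlapping -> Dice = 2*2 / (3+2) = 0.8
pred = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # 0.8
```

In practice the score is computed per structure and per case on the 3D label volumes and then aggregated, which is why confidence intervals can be reported alongside the mean.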

Moreover, when tested on CT images from the original TotalSegmentator dataset, the model nearly matched the performance of the original TotalSegmentator (Dice score 0.960 versus 0.970; p<0.001), underscoring its cross-modality robustness. Despite some observed failure cases due to lower image quality in MRI, especially in highly anisotropic images, the model maintained a credible level of accuracy and reliability.
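The p-values above come from paired comparisons of per-case Dice scores between two models on the same test cases. This summary does not name the specific statistical test used in the paper; the following is a hedged sketch of one common choice, a two-sided paired permutation test, run on synthetic scores (all numbers illustrative):

```python
import random

def paired_permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided p-value for mean(a - b) != 0, estimated by randomly
    flipping the sign of each paired difference."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    # Add-one correction so the estimate is never exactly zero
    return (hits + 1) / (n_perm + 1)

# Synthetic per-case Dice scores for two hypothetical models
model_a = [0.84, 0.81, 0.86, 0.79, 0.83, 0.85, 0.82, 0.80]
model_b = [0.77, 0.75, 0.80, 0.74, 0.76, 0.79, 0.75, 0.73]
p = paired_permutation_test(model_a, model_b)
print(p < 0.05)  # a consistent per-case advantage yields a small p-value
```

A Wilcoxon signed-rank test is another standard option for such paired, non-normal score distributions; either way the pairing matters, since the same cases are scored by both models.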

Implications and Future Directions

The practical implications of these results are manifold. Clinically, the TotalSegmentator MRI can considerably reduce radiologists' workload and enhance diagnostic accuracy through consistent and rapid segmentation. The model's ability to handle a broad spectrum of MRI sequences without sequence-specific tuning also elevates its adaptability in real-world scenarios.

Theoretically, these findings highlight the synergy of multi-modal training datasets (MRI and CT) in augmenting segmentation performance. The observed benefits of integrating CT scans into the training process suggest a promising direction for further improving model robustness across different imaging modalities.

Future research could expand this work by incorporating additional anatomical structures, refining ground truth annotations, and enlarging the training dataset to encompass even more diverse pathologies and imaging variations. Moreover, continued investigation into optimizing memory and computational efficiency will be crucial for widespread clinical integration.

Conclusion

The paper successfully extends the TotalSegmentator framework to MRI images, providing a versatile and high-performing tool for the automatic segmentation of 59 anatomical structures. This open-source model, backed by publicly available training data and resources, stands out for its ease of use, clinical relevance, and robust performance, setting a new benchmark for automated MRI image segmentation.

References

The full list of references used in the study can be found in the original paper. Key references include works on nnU-Net by Isensee et al., methodologies for MRI segmentation, and various clinical data collected from international repositories such as Imaging Data Commons and The Cancer Imaging Archive.
