Quantifying uncertainty in lung cancer segmentation with foundation models applied to mixed-domain datasets (2403.13113v3)
Abstract: Medical image foundation models have shown the ability to segment organs and tumors with minimal fine-tuning. These models are typically evaluated on task-specific in-distribution (ID) datasets. However, reliable performance on ID datasets does not guarantee robust generalization on out-of-distribution (OOD) datasets. Importantly, once deployed for clinical use, it is impractical to obtain "ground truth" delineations to assess ongoing performance drifts, especially when images fall into the OOD category due to different imaging protocols. Hence, we introduced a comprehensive set of computationally fast metrics to evaluate the performance of multiple foundation models (Swin UNETR, SimMIM, iBOT, SMIT) trained with self-supervised learning (SSL). All models were fine-tuned on identical datasets for lung tumor segmentation from computed tomography (CT) scans. The evaluation was performed on two public lung cancer datasets (LRAD: n = 140, 5Rater: n = 21) with different image acquisitions and tumor stages than the training data (a public resource of n = 317 stage III-IV lung cancers), and on a public non-cancer dataset containing volumetric CT scans of patients with pulmonary embolism (n = 120). All models produced similarly accurate tumor segmentations on the lung cancer testing datasets. SMIT produced the highest F1-score (LRAD: 0.60, 5Rater: 0.64) and lowest entropy (LRAD: 0.06, 5Rater: 0.12), indicating a higher tumor detection rate and more confident segmentations. On the OOD dataset, SMIT produced the fewest tumor misdetections, marked by a median volume occupancy of 5.67 cc compared with 9.97 cc for the next-best method, SimMIM. Our analysis shows that additional metrics such as entropy and volume occupancy may help better understand model performance on mixed-domain datasets.
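The paper does not spell out the metric implementations in the abstract, but the two ground-truth-free quantities it highlights can be sketched roughly as follows. Function names, the log base (bits), and the voxel-spacing convention are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mean_foreground_entropy(probs, eps=1e-8):
    """Mean binary predictive entropy of a voxelwise probability map.

    probs: array of foreground probabilities in [0, 1].
    Lower values indicate more confident segmentations.
    """
    p = np.clip(np.asarray(probs, dtype=np.float64), eps, 1.0 - eps)
    ent = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(ent.mean())

def segmented_volume_cc(mask, voxel_spacing_mm):
    """Total segmented volume in cubic centimeters (cc).

    mask: binary segmentation (any nonzero voxel counts as foreground).
    voxel_spacing_mm: per-axis voxel dimensions in millimeters, e.g. (dz, dy, dx).
    """
    voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
    return float(np.count_nonzero(mask)) * voxel_vol_mm3 / 1000.0
```

On a non-cancer (tumor-free) scan, any nonzero `segmented_volume_cc` of the predicted mask is a misdetection, which is presumably why a lower median volume occupancy reads as better OOD behavior.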