Quantifying uncertainty in lung cancer segmentation with foundation models applied to mixed-domain datasets (2403.13113v3)

Published 19 Mar 2024 in eess.IV and cs.CV

Abstract: Medical image foundation models have shown the ability to segment organs and tumors with minimal fine-tuning. These models are typically evaluated on task-specific in-distribution (ID) datasets. However, reliable performance on ID datasets does not guarantee robust generalization on out-of-distribution (OOD) datasets. Importantly, once deployed for clinical use, it is impractical to have 'ground truth' delineations to assess ongoing performance drifts, especially when images fall into the OOD category due to different imaging protocols. Hence, we introduced a comprehensive set of computationally fast metrics to evaluate the performance of multiple foundation models (Swin UNETR, SimMIM, iBOT, SMIT) trained with self-supervised learning (SSL). All models were fine-tuned on identical datasets for lung tumor segmentation from computed tomography (CT) scans. The evaluation was performed on two public lung cancer datasets (LRAD: n = 140, 5Rater: n = 21) with different image acquisitions and tumor stages than the training data (n = 317, a public resource with stage III-IV lung cancers), and on a public non-cancer dataset containing volumetric CT scans of patients with pulmonary embolism (n = 120). All models produced similarly accurate tumor segmentations on the lung cancer testing datasets. SMIT produced the highest F1-score (LRAD: 0.60, 5Rater: 0.64) and lowest entropy (LRAD: 0.06, 5Rater: 0.12), indicating a higher tumor detection rate and more confident segmentations. On the OOD dataset, SMIT misdetected the fewest tumors, marked by a median volume occupancy of 5.67 cc compared with 9.97 cc for the next-best method, SimMIM. Our analysis shows that additional metrics such as entropy and volume occupancy may help better understand model performance on mixed-domain datasets.
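
As an illustration of the kind of lightweight, ground-truth-free metrics the abstract describes, the sketch below computes the mean voxel-wise entropy of a foreground probability map and the volume occupancy (in cc) of a binary predicted mask. The function names, the binary-entropy formulation, and the voxel-spacing convention are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def voxelwise_entropy(prob, eps=1e-8):
    """Mean binary entropy (in nats) of a voxel-wise tumor probability map.

    prob : 3D array of foreground probabilities in [0, 1].
    Lower values suggest more confident segmentations.
    """
    p = np.clip(prob, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(entropy.mean())

def volume_occupancy_cc(mask, spacing_mm):
    """Predicted tumor volume in cubic centimetres.

    mask       : 3D binary array of predicted tumor voxels.
    spacing_mm : (dz, dy, dx) voxel spacing in millimetres.
    On a non-cancer (OOD) scan, any nonzero occupancy is a spurious detection.
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3

# Illustrative usage with synthetic inputs (a real pipeline would pass the
# model's probability map and the scan's voxel spacing from the CT header):
print(voxelwise_entropy(np.random.rand(64, 128, 128)))
print(volume_occupancy_cc(np.zeros((64, 128, 128), dtype=bool), (3.0, 1.0, 1.0)))
```

Because neither metric requires reference delineations, per-scan values like these could in principle be tracked over time after deployment to flag performance drift on OOD images, which is the monitoring scenario the abstract motivates.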

