Modular Deep Active Learning Framework for Image Annotation: A Technical Report for the Ophthalmo-AI Project (2403.15143v1)

Published 22 Mar 2024 in cs.CV and cs.AI

Abstract: Image annotation is one of the most essential tasks in medical imaging and disease diagnosis, both for guaranteeing proper treatment for patients and for tracking progress over the course of therapy. However, manually annotating large volumes of 2D and 3D imaging data can be extremely tedious. Deep Learning (DL) based segmentation algorithms have transformed this process by making it possible to automate image segmentation: by accurately segmenting medical images, they can greatly reduce the time and effort required for manual annotation. Moreover, by incorporating Active Learning (AL) methods, these segmentation algorithms can perform far more effectively with a smaller amount of ground-truth data. We introduce MedDeepCyleAL, an end-to-end framework implementing the complete AL cycle. It gives researchers the flexibility to choose the type of deep learning model they wish to employ, and it includes an annotation tool that supports the classification and segmentation of medical images. A user-friendly interface allows the AL and DL model settings to be altered through a configuration file, requiring no prior programming experience. While MedDeepCyleAL can be applied to any kind of image data, in this project we have applied it specifically to ophthalmology data.
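The AL cycle the abstract refers to (train on the labeled set, query the most informative unlabeled samples, have an annotator label them, retrain) can be sketched as a minimal uncertainty-sampling loop. This is an illustrative sketch only, not the MedDeepCyleAL API; the `train`, `predict_proba`, and `oracle` callables are hypothetical placeholders for a model-fitting routine, a predictive-distribution function, and the human annotator.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution; higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def active_learning_cycle(pool, labeled, train, predict_proba, oracle,
                          query_size, rounds):
    """Train on the labeled set, rank the unlabeled pool by predictive
    entropy, send the most uncertain samples to the annotator (oracle),
    and retrain -- one full pass per round."""
    model = train(labeled)
    for _ in range(rounds):
        if not pool:
            break
        # Most uncertain (highest-entropy) samples come first.
        pool.sort(key=lambda x: entropy(predict_proba(model, x)), reverse=True)
        queried, pool = pool[:query_size], pool[query_size:]
        labeled += [(x, oracle(x)) for x in queried]   # annotation step
        model = train(labeled)                         # retrain on the grown set
    return model, labeled, pool
```

In a framework like the one described, the acquisition function (here, plain predictive entropy), the query batch size, and the number of rounds would be the kind of settings exposed through the configuration file rather than hard-coded.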

