
Supporting Mitosis Detection AI Training with Inter-Observer Eye-Gaze Consistencies (2404.01656v1)

Published 2 Apr 2024 in cs.CV

Abstract: The expansion of AI in pathology tasks has intensified the demand for doctors' annotations in AI development. However, collecting high-quality annotations from doctors is costly and time-consuming, creating a bottleneck in AI progress. This study investigates eye-tracking as a cost-effective technology to collect doctors' behavioral data for AI training, with a focus on the pathology task of mitosis detection. One major challenge in using eye-gaze data is the low signal-to-noise ratio, which hinders the extraction of meaningful information. We tackled this by leveraging the properties of inter-observer eye-gaze consistencies and creating eye-gaze labels from consistent eye-fixations shared by a group of observers. Our study involved 14 non-medical participants, from whom we collected eye-gaze data and generated eye-gaze labels based on varying group sizes. We assessed the efficacy of such eye-gaze labels by training Convolutional Neural Networks (CNNs) and comparing their performance to those trained with ground truth annotations and a heuristic-based baseline. Results indicated that CNNs trained with our eye-gaze labels closely followed the performance of ground-truth-based CNNs, and significantly outperformed the baseline. Although primarily focused on mitosis, we envision that insights from this study can be generalized to other medical imaging tasks.
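The abstract describes deriving labels from eye-fixations that are consistent across a group of observers. The sketch below is a hypothetical illustration of that idea, not the paper's actual pipeline: a fixation point is kept as a label candidate only if at least `min_observers` observers fixated within `radius` pixels of it, and agreeing candidates are then merged into single consensus points. The function name, parameters, and the greedy merge step are all assumptions for illustration.

```python
import numpy as np

def consensus_gaze_labels(fixations_per_observer, radius=50.0, min_observers=3):
    """Keep fixation locations shared by at least `min_observers` observers.

    fixations_per_observer: list of (M_i, 2) arrays of (x, y) fixation points,
    one array per observer. Returns a (K, 2) array of consensus label points.
    """
    candidates = []
    for i, fixs in enumerate(fixations_per_observer):
        for point in fixs:
            # Count observers (including this one) with a fixation near `point`.
            agreeing = 1
            for j, other in enumerate(fixations_per_observer):
                if j == i or other.size == 0:
                    continue
                if np.min(np.linalg.norm(other - point, axis=1)) <= radius:
                    agreeing += 1
            if agreeing >= min_observers:
                candidates.append(point.astype(float))
    if not candidates:
        return np.empty((0, 2))
    # Greedily merge near-duplicate candidates into single consensus points.
    merged = []
    for p in candidates:
        for k, q in enumerate(merged):
            if np.linalg.norm(p - q) <= radius:
                merged[k] = (q + p) / 2.0
                break
        else:
            merged.append(p)
    return np.array(merged)
```

With three observers all fixating near the same mitotic figure, the shared fixation survives while an idiosyncratic fixation seen by only one observer is filtered out, which is one way low signal-to-noise gaze data could be denoised by inter-observer agreement.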

Authors (9)
  1. Hongyan Gu (9 papers)
  2. Zihan Yan (23 papers)
  3. Ayesha Alvi (1 paper)
  4. Brandon Day (2 papers)
  5. Chunxu Yang (4 papers)
  6. Zida Wu (5 papers)
  7. Shino Magaki (5 papers)
  8. Mohammad Haeri (12 papers)
  9. Xiang 'Anthony' Chen (31 papers)
Citations (1)
