Leveraging Medical Foundation Model Features in Graph Neural Network-Based Retrieval of Breast Histopathology Images (2405.04211v3)

Published 7 May 2024 in cs.CV

Abstract: Breast cancer is the most common cancer type in women worldwide. Early detection and appropriate treatment can significantly reduce its impact. While histopathology examinations play a vital role in rapid and accurate diagnosis, they often require experienced medical experts for proper recognition and cancer grading. Automated image retrieval systems have the potential to assist pathologists in identifying cancerous tissues, thereby accelerating the diagnostic process. Nevertheless, proposing an accurate image retrieval model is challenging due to considerable variability among the tissue and cell patterns in histological images. In this work, we leverage the features from foundation models in a novel attention-based adversarially regularized variational graph autoencoder model for breast histological image retrieval. Our results confirm the superior performance of models trained with foundation model features compared to those using pre-trained convolutional neural networks (up to 7.7% and 15.5% for mAP and mMV, respectively), with the pre-trained general-purpose self-supervised model for computational pathology (UNI) delivering the best overall performance. By evaluating two publicly available histology image datasets of breast cancer, our top-performing model, trained with UNI features, achieved average mAP/mMV scores of 96.7%/91.5% and 97.6%/94.2% for the BreakHis and BACH datasets, respectively. Our proposed retrieval model has the potential to be used in clinical settings to enhance diagnostic performance and ultimately benefit patients.
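
The retrieval pipeline outlined in the abstract can be illustrated with a brief sketch. The code below is an illustrative approximation, not the authors' model: it assumes each image is represented by a pre-extracted foundation-model feature vector (1024-dimensional, as for a ViT-L backbone such as UNI), connects images by a cosine k-NN graph, encodes the graph with a plain variational graph autoencoder, and ranks database images against a query by cosine similarity in the latent space. The attention mechanism and the adversarial regularizer of the paper's full model are omitted, and the dimensions, the value of k, and the training loop are placeholder choices.

```python
# Minimal sketch (assumptions, not the paper's exact model): pre-extracted
# foundation-model features -> k-NN graph -> variational graph autoencoder
# -> retrieval by cosine similarity in the latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_graph(features: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Build a symmetric, self-looped k-NN adjacency from cosine similarity."""
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.T
    topk = sim.topk(k + 1, dim=1).indices          # +1: each node is its own nearest neighbour
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    return ((adj + adj.T) > 0).float()             # symmetrise


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalisation D^{-1/2} A D^{-1/2} used by GCN layers."""
    d_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = A_hat H W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_hat):
        return adj_hat @ self.lin(x)


class VGAEEncoder(nn.Module):
    """Variational graph encoder producing a mean and log-std per node."""
    def __init__(self, in_dim, hid_dim=256, lat_dim=64):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc_mu = GCNLayer(hid_dim, lat_dim)
        self.gc_logstd = GCNLayer(hid_dim, lat_dim)

    def forward(self, x, adj_hat):
        h = F.relu(self.gc1(x, adj_hat))
        mu, logstd = self.gc_mu(h, adj_hat), self.gc_logstd(h, adj_hat)
        z = mu + torch.randn_like(mu) * logstd.exp()   # reparameterisation trick
        return z, mu, logstd


def vgae_loss(z, mu, logstd, adj):
    """Inner-product decoder reconstructs the adjacency; add a KL prior term."""
    recon = torch.sigmoid(z @ z.T)
    rec_loss = F.binary_cross_entropy(recon, adj)
    kl = -0.5 * torch.mean(1 + 2 * logstd - mu.pow(2) - (2 * logstd).exp())
    return rec_loss + kl


# --- toy usage: 200 images with placeholder 1024-d "foundation model" features ---
feats = torch.randn(200, 1024)                      # stand-in for real UNI features
adj = knn_graph(feats, k=8)
adj_hat = normalize_adj(adj)

enc = VGAEEncoder(in_dim=1024)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for _ in range(50):                                 # short demo training loop
    z, mu, logstd = enc(feats, adj_hat)
    loss = vgae_loss(z, mu, logstd, adj)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Retrieval: rank database images by cosine similarity to a query embedding.
with torch.no_grad():
    z, _, _ = enc(feats, adj_hat)
    z = F.normalize(z, dim=1)
    query = z[0]
    top10 = (z @ query).argsort(descending=True)[1:11]  # skip the query itself
```

In the paper's full model, a discriminator network would additionally push the latent distribution toward a prior (the adversarial regularization) and attention would weight neighbour contributions; both can be layered onto this skeleton without changing the retrieval step.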
