
Expert-Level Annotation Quality Achieved by Gamified Crowdsourcing for B-line Segmentation in Lung Ultrasound (2312.10198v1)

Published 15 Dec 2023 in cs.CY

Abstract: Accurate and scalable annotation of medical data is critical for the development of medical AI, but obtaining annotation time from medical experts is challenging. Gamified crowdsourcing has shown potential for obtaining highly accurate annotations of medical data at scale, and in this study we demonstrate the same for the segmentation of B-lines, an indicator of pulmonary congestion, in still frames from point-of-care lung ultrasound clips. We collected 21,154 annotations from 214 annotators over 2.5 days, and we demonstrate that the concordance of crowd consensus segmentations with reference standards exceeds that of individual experts with the same reference standards, both in B-line count (mean squared error 0.239 vs. 0.308, p<0.05) and in the spatial precision of B-line annotations (mean Dice-H score 0.755 vs. 0.643, p<0.05). These results suggest that expert-quality segmentations can be achieved using gamified crowdsourcing.
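
To make the reported concordance metrics concrete, here is a minimal sketch of how one might compute them. It implements the count mean squared error and a plain Dice coefficient between binary segmentation masks; the paper reports a "Dice-H" score, a variant whose exact definition is not given on this page, so `dice_score` below is an illustrative stand-in rather than the authors' implementation, and the example inputs are hypothetical.

```python
import numpy as np

def count_mse(pred_counts, ref_counts):
    """Mean squared error between predicted and reference B-line counts."""
    pred = np.asarray(pred_counts, dtype=float)
    ref = np.asarray(ref_counts, dtype=float)
    return float(np.mean((pred - ref) ** 2))

def dice_score(pred_mask, ref_mask, eps=1e-8):
    """Plain Dice overlap between two binary masks.

    Note: a stand-in for the paper's Dice-H score, whose exact
    definition may differ from the standard Dice coefficient.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Hypothetical example: crowd-consensus counts vs. a reference standard.
print(count_mse([2, 3, 1], [2, 4, 1]))           # 0.333...
print(dice_score(np.ones((4, 4)), np.eye(4)))    # 0.4
```

In the study's setting, the crowd consensus segmentation for each frame would be compared against the expert reference standard with metrics of this kind, and the same metrics computed for individual experts provide the comparison baseline.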

Authors (10)
  1. Mike Jin (3 papers)
  2. Varoon Bashyakarla (1 paper)
  3. Maria Alejandra Duran Mendicuti (2 papers)
  4. Stephen Hallisey (2 papers)
  5. Denie Bernier (3 papers)
  6. Joseph Stegeman (1 paper)
  7. Erik Duhaime (2 papers)
  8. Tina Kapur (23 papers)
  9. Nicole M Duggan (2 papers)
  10. Andrew J Goldsmith (2 papers)
