Abstract

This work analyzes the standard evaluation practices adopted by the research community when assessing chest x-ray classifiers, focusing in particular on the impact of class imbalance on such assessments. Our analysis adopts a comprehensive definition of model performance that covers not only discriminative performance but also model calibration, a topic that has received increasing attention from the machine learning community in recent years. First, we conduct a literature study of common scientific practices and confirm that: (1) even when dealing with highly imbalanced datasets, the community tends to use metrics that are dominated by the majority class; and (2) calibration studies for chest x-ray classifiers remain uncommon, despite their importance in the context of healthcare. Second, we perform a systematic experiment on two major chest x-ray datasets to explore the behavior of several performance metrics under different class ratios, showing that widely adopted metrics can conceal poor performance on the minority class. Finally, we recommend including complementary metrics to better reflect a system's performance in such scenarios. Our study indicates that current evaluation practices for chest x-ray computer-aided diagnosis systems may not reflect their performance in real clinical scenarios, and we suggest alternatives to improve this situation.
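The core claim, that metrics dominated by the majority class can hide minority-class failures, is easy to reproduce. Below is a minimal, self-contained Python sketch (not code from the paper; the ~1% prevalence and the Gaussian score distributions are illustrative assumptions) showing how accuracy and ROC-AUC can look reassuring on an imbalanced dataset while a precision-recall-based metric exposes the weakness.

```python
# Illustrative sketch only -- not the paper's code. The 1% prevalence and the
# Gaussian score distributions below are assumptions chosen for demonstration.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Simulate a rare finding: 100 positives among 10,000 studies (~1% prevalence).
n_pos, n_neg = 100, 9_900
y_true = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

# Classifier scores with only modest separation between the two classes.
y_score = np.concatenate([
    rng.normal(0.60, 0.20, n_pos),  # positives score somewhat higher
    rng.normal(0.40, 0.20, n_neg),
])

# A trivial "always negative" rule already reaches ~99% accuracy here,
# while recalling none of the minority (positive) cases.
print(f"Accuracy, always-negative baseline: {(y_true == 0).mean():.3f}")

# ROC-AUC conditions on each class separately, so it is insensitive to
# prevalence and looks respectable...
print(f"ROC-AUC: {roc_auc_score(y_true, y_score):.3f}")

# ...whereas average precision (area under the PR curve) mixes in the base
# rate and is far lower at ~1% prevalence.
print(f"PR-AUC (average precision): {average_precision_score(y_true, y_score):.3f}")
```

The design point: ROC-based metrics are invariant to class prevalence, while precision-based ones are not, which is exactly why PR-oriented metrics surface the minority-class behavior that ROC-oriented ones average away under heavy imbalance.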
