
Abstract

Recent advances in generative vision-language models (VLMs) have exciting potential implications for AI in radiology, yet VLMs are also known to produce hallucinations, nonsensical text, and other unwanted behaviors that can waste clinicians' time and cause patient harm. Drawing on recent work on direct preference optimization (DPO), we propose a simple method for modifying the behavior of pretrained VLMs performing radiology report generation by suppressing unwanted types of generations. We apply our method to the prevention of hallucinations of prior exams, addressing a long-established problem behavior in models performing chest X-ray report generation. Across our experiments, we find that DPO fine-tuning achieves a 3.2-4.8x reduction in lines hallucinating prior exams while maintaining model performance on clinical accuracy metrics. Our work is, to the best of our knowledge, the first to apply DPO to medical VLMs, providing a data- and compute-efficient way to suppress problem behaviors while maintaining overall clinical accuracy.

DPO fine-tuning reduces hallucinated references to prior exams while maintaining clinical accuracy; supervised fine-tuning (SFT) improves clinical accuracy but does not reduce the hallucinations.

Overview

  • The paper proposes and evaluates Direct Preference Optimization (DPO) methods to suppress hallucinated references to prior exams in chest X-ray (CXR) report generation by generative vision-language models (VLMs).

  • The study creates and validates new GPT-4-annotated subsets of the MIMIC-CXR dataset, in which references to prior exams are removed, for training, validation, and testing.

  • Experimental results show that the DPO methods significantly reduce hallucinations without degrading clinical accuracy, and in some cases even improve it, as confirmed through automated metrics and expert evaluations.

Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation

The paper "Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation" addresses the critical issue of hallucinations in generative vision-language models (VLMs) used for radiology report generation. The developed approach aims to mitigate unwanted generative behaviors, particularly the hallucination of prior exams, by employing Direct Preference Optimization (DPO) techniques.

Introduction

Generative VLMs hold significant promise for automating tasks in radiology, such as interpreting chest X-rays (CXRs) and generating accompanying textual reports. Nevertheless, these models are prone to hallucinations, generating nonsensical or erroneous text that can burden clinicians with additional verification work and potentially lead to patient harm. Existing countermeasures involve either modifying pre-training datasets or applying reinforcement learning from human or AI feedback (RLHF/RLAIF). Dataset modification, however, can be cost-prohibitive and time-consuming, while RLHF typically requires training a separate reward model. DPO, a recently proposed method derived from the RLHF objective, obviates the need for an explicit reward model, offering a simpler and potentially more stable way to fine-tune pretrained models to suppress unwanted behaviors.

Main Contributions

This paper introduces and validates several DPO methods tailored to minimize hallucinated references to prior exams in CXR report generation:

  1. DPO Methods: The study evaluates both standard and weighted DPO loss functions and finds that DPO can selectively remove unwanted behaviors while maintaining clinical accuracy.
  2. Annotated Datasets: New subsets of the MIMIC-CXR dataset annotated by GPT-4, which minimize references to prior exams, were created for training, validation, and testing.
  3. Application to Medical VLMs: This is the first application of DPO to medical VLMs, demonstrating an efficient means to suppress problematic behaviors without incurring substantial compute and data overheads.

Methodology

Direct Preference Optimization

DPO requires a pretrained model and a preference dataset in which each example consists of a prompt, a preferred response, and a dispreferred response. The loss increases the likelihood of preferred responses while decreasing the likelihood of dispreferred ones, relative to a frozen reference copy of the pretrained model. The weighted DPO variant introduces a hyperparameter $\gamma$ that adjusts the weight given to tokens unrelated to the unwanted behavior, as sketched below.
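For reference, the standard DPO objective (Rafailov et al., 2023), which the paper builds on, contrasts the fine-tuned policy $\pi_\theta$ against the frozen reference model $\pi_{\text{ref}}$:

$$\mathcal{L}_{\text{DPO}}(\theta) = -\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses to prompt $x$, $\beta$ controls how far the policy may drift from the reference model, and $\sigma$ is the logistic function. A minimal PyTorch sketch of this standard loss follows, assuming per-sequence log-probabilities have already been computed; the variable names are illustrative, and the paper's weighted $\gamma$ variant is not reproduced here:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over per-sequence log-probabilities.

    Each argument is a (batch,)-shaped tensor holding log pi(y | x)
    for the preferred (chosen) or dispreferred (rejected) response.
    """
    # Log-ratios of the fine-tuned policy vs. the frozen reference model
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Widen the margin between preferred and dispreferred responses
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```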

Dataset Creation

The datasets draw CXRs and reports from MIMIC-CXR. GPT-4 was used to annotate and edit the reports, removing references to prior exams: each line was labeled "none," "partial," or "all" according to how much of it referenced a prior exam, and flagged lines were then rewritten or removed as needed.
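As an illustration only, a labeling-and-editing pass over one report might look like the following sketch using the OpenAI chat API; the prompt wording, model identifier, and output handling are assumptions, not the paper's actual pipeline:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical instruction approximating the paper's annotation scheme
EDIT_PROMPT = (
    "For each line of the chest X-ray report below, decide whether 'none', "
    "'partial', or 'all' of the line refers to a prior exam. Rewrite "
    "'partial' lines to drop the reference, delete 'all' lines, and return "
    "the edited report."
)

def edit_report(report: str) -> str:
    """Ask GPT-4 to remove references to prior exams from one report."""
    response = client.chat.completions.create(
        model="gpt-4",  # the paper used GPT-4; the exact snapshot is an assumption
        messages=[
            {"role": "system", "content": EDIT_PROMPT},
            {"role": "user", "content": report},
        ],
    )
    return response.choices[0].message.content
```

Edited reports produced this way would naturally serve as preferred responses in the preference dataset DPO requires, with the unedited originals containing prior-exam references as the dispreferred ones.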

Experimental Results

Experiments compared the pretrained model, a model supervised fine-tuned on the edited reports, and three DPO-fine-tuned variants. The key findings include:

  • Reduction in Hallucinations: All DPO models significantly reduced hallucinated references to prior exams. The model trained with $\gamma = 0.5$ achieved the highest reduction, showing a 4.8x decrease in lines referencing prior exams.
  • Clinical Accuracy: While supervised fine-tuning improved clinical accuracy, it did not substantially reduce hallucinated prior exams. DPO, particularly with $\gamma = 1$ or $\gamma = 0$, maintained, and in some cases slightly improved, clinical accuracy as measured by specialized metrics such as RadCliQ-v1 and RadGraph-F1.
  • Human Evaluation: Expert evaluation corroborated the trend observed in automated metrics, confirming the reduction in hallucinated references and noting slight impacts on clinical accuracy.

Implications and Future Directions

This research demonstrates the potential of DPO for fine-tuning medical VLMs in compute- and data-efficient settings. The ability to suppress hallucinated references without degrading clinical accuracy could streamline radiology workflows and reduce the risks associated with AI-driven diagnostic support. Future work may refine the annotation strategies to improve preprocessing and explore additional clinical contexts for applying DPO. Benchmark datasets entirely free of references to prior exams would also provide a clearer evaluation framework for such models.

Conclusion

The application of DPO to CXR report generation shows substantial promise for mitigating undesirable model behaviors while preserving clinical accuracy. Through rigorous experimentation and expert validation, this work paves the way for more reliable and effective AI tools in medical imaging and reporting.
