Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation (2406.06496v2)

Published 10 Jun 2024 in cs.LG, cs.CL, and cs.CV

Abstract: Recent advances in generative vision-language models (VLMs) have exciting potential implications for AI in radiology, yet VLMs are also known to produce hallucinations, nonsensical text, and other unwanted behaviors that can waste clinicians' time and cause patient harm. Drawing on recent work on direct preference optimization (DPO), we propose a simple method for modifying the behavior of pretrained VLMs performing radiology report generation by suppressing unwanted types of generations. We apply our method to the prevention of hallucinations of prior exams, addressing a long-established problem behavior in models performing chest X-ray report generation. Across our experiments, we find that DPO fine-tuning achieves a 3.2-4.8x reduction in lines hallucinating prior exams while maintaining model performance on clinical accuracy metrics. Our work is, to the best of our knowledge, the first work to apply DPO to medical VLMs, providing a data- and compute-efficient way to suppress problem behaviors while maintaining overall clinical accuracy.

Summary

  • The paper demonstrates that DPO significantly reduces hallucinated prior exam references, achieving up to a 4.8x decrease in erroneous lines.
  • It introduces GPT-4 annotated subsets of the MIMIC-CXR dataset to fine-tune models without heavy data or compute overhead.
  • The method preserves clinical accuracy, validated by RadCliq-V1 and expert evaluations, highlighting its practical benefit in radiology reporting.

Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation

The paper "Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation" addresses the critical issue of hallucinations in generative vision-LLMs (VLMs) used for radiology report generation. The developed approach aims to mitigate unwanted generative behaviors, particularly the hallucination of prior exams, by employing Direct Preference Optimization (DPO) techniques.

Introduction

Generative VLMs hold significant promise for automating tasks in radiology, such as interpreting chest X-rays (CXRs) and generating the accompanying textual reports. Nevertheless, these models are prone to hallucinations, generating nonsensical or erroneous text, which can burden clinicians with additional verification work and potentially lead to patient harm. Existing strategies to counter this involve either modifying pre-training datasets or applying reinforcement learning from human or AI feedback (RLHF). However, dataset modification can be cost-prohibitive and time-consuming, whereas RLHF typically requires training an explicit reward model. DPO, a recent reformulation of the RLHF objective, removes the need for an explicit reward model and thus offers a simpler, potentially more stable way to fine-tune pretrained models to suppress unwanted behaviors.

Main Contributions

This paper introduces and validates DPO methods tailored to suppressing hallucinated references to prior exams in CXR report generation. Its main contributions are:

  1. DPO Methods: The paper evaluates both standard and weighted DPO loss functions and finds that DPO can selectively remove unwanted behaviors while maintaining clinical accuracy.
  2. Annotated Datasets: New subsets of the MIMIC-CXR dataset annotated by GPT-4, which minimize references to prior exams, were created for training, validation, and testing.
  3. Application to Medical VLMs: This is the first application of DPO to medical VLMs, demonstrating an efficient means to suppress problematic behaviors without incurring substantial compute and data overheads.

Methodology

Direct Preference Optimization

DPO requires a pretrained reference model and a preference dataset in which each example consists of a prompt, a preferred response, and a dispreferred response. The loss increases the probability of preferred responses while decreasing the probability of dispreferred ones. The weighted DPO variant introduces a hyperparameter $\gamma$ that adjusts the importance of tokens unrelated to the unwanted behavior.
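
For context, the standard DPO objective (from Rafailov et al.'s original formulation) over a preference dataset $\mathcal{D}$ of prompts $x$, preferred responses $y_w$, and dispreferred responses $y_l$ is shown below, where $\sigma$ is the logistic sigmoid, $\beta$ controls the strength of the implicit KL constraint, and $\pi_{\mathrm{ref}}$ is the frozen pretrained model. The paper's weighted variant additionally reweights per-token contributions via $\gamma$; that exact weighting is not reproduced here.

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\,\pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

In this setting, $x$ is the CXR image plus report-generation prompt, $y_w$ is a report free of prior-exam references, and $y_l$ is the corresponding report containing them.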

Dataset Creation

The datasets were built from CXRs and reports in MIMIC-CXR. GPT-4 was used to annotate and edit the reports so as to remove references to prior exams: each report line was labeled as "none," "partial," or "all" according to how much of it referenced a prior exam, and flagged lines were then rewritten or removed accordingly.
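
As an illustrative sketch only, not the authors' actual pipeline, the editing step described above could look roughly like the following; the prompt wording, model identifier, and function name are assumptions for demonstration.

```python
# Hypothetical sketch of GPT-4-based report editing; prompt text and model
# name are illustrative assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()

EDIT_PROMPT = (
    "For each line of the chest X-ray report below, label it 'none', "
    "'partial', or 'all' depending on how much of the line refers to a "
    "prior exam. Rewrite 'partial' lines with the prior-exam reference "
    "removed and drop 'all' lines entirely. Return only the edited report."
)

def edit_report(report_text: str, model: str = "gpt-4") -> str:
    """Return a version of the report with prior-exam references removed."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": EDIT_PROMPT},
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# The original report (with prior-exam references) can then serve as the
# dispreferred response and the edited report as the preferred response
# when assembling the DPO preference dataset.
```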

Experimental Results

The experiments compared the pretrained model, a model fine-tuned with supervised learning on the edited reports, and three DPO-fine-tuned variants. The key findings include:

  • Reduction in Hallucinations: All DPO models significantly reduced hallucinated references to prior exams. The model trained with $\gamma = 0.5$ achieved the largest reduction, a 4.8x decrease in lines referencing prior exams (a rough counting heuristic for this metric is sketched after this list).
  • Clinical Accuracy: While supervised fine-tuning improved clinical accuracy, it did not substantially reduce hallucinated prior exams. DPO, particularly with $\gamma = 1$ or $\gamma = 0$, maintained comparable or slightly improved clinical accuracy as measured by specialized metrics such as RadCliq-V1 and RadGraph-F1.
  • Human Evaluation: Expert evaluation corroborated the trend observed in automated metrics, confirming the reduction in hallucinated references and noting slight impacts on clinical accuracy.
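
The headline metric is the number of report lines that reference prior exams, which the paper determines with GPT-4 labels. A crude keyword-based approximation of that count, offered only as a sketch and not the paper's actual labeling procedure, might look like this:

```python
# Rough heuristic for counting lines that appear to reference prior exams.
# The keyword list is an assumption; the paper uses GPT-4 labeling instead.
import re

PRIOR_EXAM_PATTERNS = [
    r"\bprior\b", r"\bprevious\b", r"\bcompared to\b", r"\bcomparison\b",
    r"\binterval\b", r"\bagain\b", r"\bunchanged\b", r"\bstable\b",
]
PRIOR_EXAM_RE = re.compile("|".join(PRIOR_EXAM_PATTERNS), re.IGNORECASE)

def count_prior_exam_lines(report_text: str) -> int:
    """Count non-empty report lines that appear to reference a prior exam."""
    return sum(
        1
        for line in report_text.splitlines()
        if line.strip() and PRIOR_EXAM_RE.search(line)
    )

def reduction_factor(baseline_reports, finetuned_reports) -> float:
    """Fold-reduction in prior-exam lines (the paper reports 3.2-4.8x)."""
    baseline = sum(count_prior_exam_lines(r) for r in baseline_reports)
    finetuned = sum(count_prior_exam_lines(r) for r in finetuned_reports)
    return baseline / max(finetuned, 1)
```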

Implications and Future Directions

This research demonstrates the potential of DPO for fine-tuning medical VLMs within compute- and data-efficient paradigms. The ability to substantially suppress hallucinated references without degrading clinical accuracy could streamline radiology workflows and mitigate risks associated with AI-driven diagnostic support. Future work may refine the annotation strategies to further improve preprocessing and explore additional clinical contexts for applying DPO. Developing benchmark datasets entirely free of references to prior exams would also provide a cleaner evaluation framework for such models.

Conclusion

The application of DPO to CXR report generation shows substantial promise for mitigating undesirable model behaviors while preserving clinical accuracy. Through rigorous experimentation and expert validation, this work paves the way for more reliable and effective AI tools in medical imaging and reporting.
