Emergent Mind

Abstract

Background. LLMs hold promise for improving genetic variant literature review in clinical testing. We assessed Generative Pretrained Transformer 4's (GPT-4) performance, nondeterminism, and drift to inform its suitability for use in complex clinical processes. Methods. A 2-prompt process for classification of functional evidence was optimized using a development set of 45 articles. The prompts asked GPT-4 to supply all functional data present in an article related to a variant or indicate that no functional evidence is present. For articles indicated as containing functional evidence, a second prompt asked GPT-4 to classify the evidence into pathogenic, benign, or intermediate/inconclusive categories. A final test set of 72 manually classified articles was used to test performance. Results. Over a 2.5-month period (Dec 2023-Feb 2024), we observed substantial differences in intraday (nondeterminism) and across-day (drift) results, which lessened after 1/18/24. This variability is seen within and across models in the GPT-4 series, affecting different performance statistics to different degrees. Twenty runs after 1/18/24 identified articles containing functional evidence with 92.2% sensitivity, 95.6% positive predictive value (PPV) and 86.3% negative predictive value (NPV). The second prompt identified pathogenic functional evidence with 90.0% sensitivity, 74.0% PPV and 95.3% NPV, and benign evidence with 88.0% sensitivity, 76.6% PPV and 96.9% NPV. Conclusion. Nondeterminism and drift within LLMs must be assessed and monitored when introducing LLM-based functionality into clinical workflows. Failing to perform this assessment or to account for these challenges could lead to incorrect or missing information that is critical for patient care. The performance of our prompts appears adequate to assist in article prioritization but not in automated decision making.
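Nondeterminism of the kind described above can be quantified by repeating the same prompt several times and measuring how often pairs of runs agree. A minimal sketch of such an agreement check follows; the run labels in the usage example are illustrative, not results from the paper.

```python
from itertools import combinations

def pairwise_agreement(labels):
    """Fraction of run pairs that produced the same label.

    1.0 means fully deterministic output across runs; lower values
    indicate run-to-run variation (intraday nondeterminism). Comparing
    this statistic across days would similarly surface drift.
    """
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Illustrative: 4 repeated runs of the same prompt, one disagreeing.
runs = ["pathogenic", "pathogenic", "pathogenic", "benign"]
print(pairwise_agreement(runs))  # 3 matching pairs out of 6 -> 0.5
```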

Overview

  • GPT-4 can help classify scientific articles related to the pathogenicity of genetic variants, which is crucial in clinical settings.

  • The researchers used a two-step prompt sequence, optimized against a development set of 45 articles, and then tested it on an independent set of 72 article-variant pairs.

  • GPT-4 showed a high sensitivity and positive predictive value for detecting functional evidence supporting or refuting genetic variant pathogenicity.

  • The tool performed well in identifying benign evidence and has the potential to streamline the literature review process for geneticists.

  • While effective, the study emphasizes the need for human expertise and further research to expand GPT-4's capabilities and ensure accuracy in clinical interpretations.

Introduction

The emergence of Generative Pre-trained Transformer 4 (GPT-4) presents new possibilities within the medical field, particularly in the realm of variant pathogenicity classification. Aronson et al. delve into GPT-4's capability to classify scientific articles that potentially offer functional evidence pertinent to the pathogenicity of genetic variants—a critical component in clinical diagnosis and therapeutic decision-making.

Methodology

The researchers conducted a robust investigation using a two-step prompt sequence, honed through iteration against a development set. Their methodology encompassed refining the prompts against 45 articles, each paired with a single genetic variant, adjusting wording for optimal performance, and validating accuracy against an independently curated test set of 72 article-variant pairs. Articles were selected based on various criteria, such as availability and the extent of functional evidence. The study deliberately excluded supplementary materials, given the current limitations of GPT-4 in processing such data forms. To capture the varied terminology used across articles, nomenclature aliases for variants and gene symbols were incorporated into the prompts.
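The two-step workflow above can be sketched as a small routing function: prompt 1 extracts functional evidence (or reports none), and prompt 2 classifies any extracted evidence. The prompt wording, the `NO_EVIDENCE` sentinel, and the `ask_llm` callable below are placeholders for illustration, not the authors' actual prompts or API setup.

```python
# Sketch of the two-prompt classification workflow (assumed wording).
NO_EVIDENCE = "no functional evidence"

def build_step1_prompt(article_text, variant, aliases):
    # Aliases for the variant and gene symbol are listed so the model
    # can match alternative nomenclature used within the article.
    names = ", ".join([variant, *aliases])
    return (f"List all functional data in the article below relating to "
            f"any of these variant names: {names}. If none is present, "
            f"reply exactly '{NO_EVIDENCE}'.\n\n{article_text}")

def build_step2_prompt(evidence_summary):
    return ("Classify the following functional evidence as pathogenic, "
            f"benign, or intermediate/inconclusive:\n\n{evidence_summary}")

def classify_article(article_text, variant, aliases, ask_llm):
    """Run the two prompts in sequence; ask_llm(prompt) -> str."""
    step1 = ask_llm(build_step1_prompt(article_text, variant, aliases))
    if NO_EVIDENCE in step1.lower():
        return "no functional evidence"
    # Only articles with extracted evidence reach the second prompt.
    return ask_llm(build_step2_prompt(step1)).strip().lower()
```

Passing the model call in as `ask_llm` keeps the routing logic testable with canned responses, which is also convenient for the repeated-run monitoring the study recommends.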

Results

With considerable sensitivity and positive predictive value, GPT-4 effectively pinpointed the presence or absence of functional assays within the corpus of literature provided (92.2% sensitivity, 95.6% PPV). Specifically, the tool identified pathogenic evidence with 90.0% sensitivity and 74.0% PPV, and benign evidence with 88.0% sensitivity and 76.6% PPV. While intermediate and inconclusive assessments posed a challenge, the high negative predictive values across categories suggest potential utility in aiding geneticists by expediting the review process while minimizing the risk of misclassification.
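The metrics quoted above follow the standard confusion-matrix definitions. As a quick reference, they can be computed as follows; the counts in the usage example are illustrative, not taken from the paper.

```python
def clf_metrics(tp, fp, fn, tn):
    """Sensitivity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true positives recovered
    ppv = tp / (tp + fp)          # how trustworthy a positive call is
    npv = tn / (tn + fn)          # how trustworthy a negative call is
    return sensitivity, ppv, npv

# Illustrative counts only: 9 true positives, 1 false positive,
# 1 false negative, 9 true negatives.
print(clf_metrics(9, 1, 1, 9))  # -> (0.9, 0.9, 0.9)
```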

Discussion and Future Implications

The study highlights GPT-4's potential as a utility in the initial stages of variant literature review, streamlining the process by efficiently triaging articles needing closer examination. However, it underscores the necessity of human expertise, particularly for subtleties in evidence interpretation that AI presently cannot navigate autonomously. The authors anticipate future expansions to include additional data from figures and supplementary materials, enhancing the prompts' comprehensiveness and utility. GPT-4's applications could evolve to encompass broader evidence types, further economizing variant assessment workflows and thus increasing access to genetic testing. Despite its promise, they advise caution, stressing the importance of human oversight to safeguard against erroneous clinical interpretations. The paper serves as an encouragement for continued research to refine AI's role in genetics while understanding the accompanying risks and limitations.
