ChatQA: Surpassing GPT-4 on Conversational QA and RAG

(2401.10225)
Published Jan 18, 2024 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract

In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA). To enhance generation, we propose a two-stage instruction tuning method that significantly boosts the performance of RAG. For effective retrieval, we introduce a dense retriever optimized for conversational QA, which yields results comparable to the alternative state-of-the-art query rewriting models, while substantially reducing deployment costs. We also present the ChatRAG Bench, which encompasses ten datasets covering comprehensive evaluations on RAG, table-related QA, arithmetic calculations, and scenarios involving unanswerable questions. Our ChatQA-1.0-70B (score: 54.14), built on Llama2, a weaker foundation model than GPT-4, can slightly outperform GPT-4-0613 (score: 53.90) and GPT-4-Turbo-2024-04-09 (score: 54.03) on the ChatRAG Bench, without relying on any synthetic data from OpenAI GPT models. Notably, the Llama3-ChatQA-1.5-70B model surpasses the accuracy of GPT-4-Turbo-2024-04-09 by a margin. To advance research in this field, we have open-sourced the model weights, instruction tuning data, ChatRAG Bench, and retriever for the community: https://chatqa-project.github.io/.

Figure: A framework detailing the two-stage instruction tuning approach for ChatQA.

Overview

  • The ChatQA-70B model achieves GPT-4 level accuracy in conversational QA tasks using a two-stage instruction tuning approach.

  • The model handles conversations by understanding and integrating user-provided or retrieved context, and is tuned to decline answering when the context does not support an answer.

  • ChatQA's dense retriever is fine-tuned on multi-turn QA datasets, matching the performance of more costly LLM-based query rewriting models.

  • The ChatQA-70B model has been tested across ten diverse conversational QA datasets, showing performance superior or comparable to current industry-standard models.

  • This study presents a cost-effective approach to conversational QA that doesn't rely on expensive computational resources or synthetic data.

Introduction

The development of conversational question answering (QA) models has seen a significant leap with the advent of models like ChatGPT and its successors. These models hold great promise for real-world applications as they can engage with users conversationally, generate answers in a zero-shot manner, and process information beyond a language model’s typical context window. A pertinent challenge in this domain has been constructing a conversational QA model that matches the performance of cutting-edge models like GPT-4 while remaining cost-effective.

ChatQA Model Architecture

The presented study introduces ChatQA-70B, a white-box conversational QA model that achieves GPT-4 level accuracy through a two-stage instruction tuning method. The first stage applies supervised fine-tuning on a diverse blend of instruction-following and dialogue datasets. The second stage, context-enhanced instruction tuning, trains the model to understand and integrate user-provided or retrieved context in conversational QA, sharpening its performance on context-rich conversations. The study also advances the retrieval side of conversational QA by fine-tuning a dense retriever on a high-quality multi-turn QA dataset, matching the performance of LLM-based query rewriting models at a substantially lower deployment cost.
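To make the recipe concrete, below is a minimal sketch of how the two training stages could be sequenced. The `Stage` dataclass, the dataset names, and the `run_stage` helper are illustrative placeholders rather than the paper's actual training code; only the stage ordering and the kind of data in each stage follow the description above.

```python
# Sketch of the two-stage instruction tuning recipe (hypothetical helpers;
# dataset names paraphrase the paper's description, exact mixtures and
# hyperparameters are assumptions).
from dataclasses import dataclass, field


@dataclass
class Stage:
    name: str
    datasets: list[str] = field(default_factory=list)
    epochs: int = 1


STAGE1_SFT = Stage(
    name="stage1_supervised_finetuning",
    # Stage 1: diverse instruction-following and dialogue data.
    datasets=["instruction_following_mix", "dialogue_mix"],
)

STAGE2_CONTEXT_TUNING = Stage(
    name="stage2_context_enhanced_tuning",
    # Stage 2: blend of single-turn QA and multi-turn conversational QA with
    # grounding context, including unanswerable samples.
    datasets=["single_turn_qa", "multi_turn_conversational_qa", "unanswerable_qa"],
)


def run_stage(model_path: str, stage: Stage) -> str:
    """Placeholder for a standard SFT run; returns the new checkpoint path."""
    print(f"Fine-tuning {model_path} on {stage.datasets} ({stage.name})")
    return f"{model_path}-{stage.name}"


ckpt = "llama2-70b-base"
ckpt = run_stage(ckpt, STAGE1_SFT)             # stage 1: general instruction tuning
ckpt = run_stage(ckpt, STAGE2_CONTEXT_TUNING)  # stage 2: context-enhanced tuning
print("final checkpoint:", ckpt)
```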

Methodology

The two-stage instruction tuning comprises supervised fine-tuning on diverse, high-quality datasets, followed by tuning on a blend of single-turn and multi-turn conversational QA datasets. Robustness against hallucination is also addressed: in scenarios where the required information is not available in the context, the model is steered to respond that it cannot answer. This balance between answering supported questions and declining unsupported ones is crucial for reducing misinformation.
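The unanswerable-question handling can be illustrated with a small data-formatting sketch. The prompt template, the refusal string, and the `format_sample` helper below are assumptions for illustration; the paper's exact instruction format is not reproduced here.

```python
# Minimal sketch of context-grounded training samples, including the
# unanswerable case that maps to a fixed "cannot answer" reply.
UNANSWERABLE_REPLY = "Sorry. I cannot find the answer based on the context."


def format_sample(context: str, dialogue: list[tuple[str, str]],
                  question: str, answer: str | None) -> dict:
    """Build one instruction-tuning sample; answer=None marks it unanswerable."""
    history = "\n".join(f"{role}: {text}" for role, text in dialogue)
    prompt = (
        "System: Answer the question using only the given context. "
        "If the context does not contain the answer, say you cannot answer.\n\n"
        f"Context: {context}\n\n{history}\nUser: {question}\nAssistant:"
    )
    target = answer if answer is not None else UNANSWERABLE_REPLY
    return {"prompt": prompt, "target": target}


# Answerable example: the answer is present in the context.
print(format_sample(
    context="ChatQA-1.0-70B scores 54.14 on the ChatRAG Bench.",
    dialogue=[("User", "What benchmark does the paper introduce?"),
              ("Assistant", "The ChatRAG Bench.")],
    question="What score does ChatQA-1.0-70B get on it?",
    answer="54.14",
)["target"])

# Unanswerable example: the context says nothing about training cost.
print(format_sample(
    context="ChatQA-1.0-70B scores 54.14 on the ChatRAG Bench.",
    dialogue=[],
    question="How many GPU hours did training take?",
    answer=None,
)["target"])
```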

Retrieval Optimization

In addition to the focus on conversational abilities, the retrieval process is optimized for multi-turn conversational queries. This is achieved by fine-tuning a single-turn dense retriever on high-quality multi-turn conversational QA data, enabling it to retrieve relevant context directly from the dialogue history without an expensive standalone query rewriter. The retrieval-optimized ChatQA is then compared with current state-of-the-art solutions on ten conversational QA datasets, where it shows superior or comparable performance.
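The sketch below illustrates the idea of conversational dense retrieval: the dialogue history plus the current question is embedded as a single query, so no separate LLM-based query rewriter is needed. The `encode` function is a random-vector placeholder standing in for a real fine-tuned retriever's embeddings; none of these names are APIs from the paper.

```python
# Sketch of multi-turn dense retrieval: embed the whole conversation as the
# query and rank passages by similarity (placeholder encoder, assumed setup).
import numpy as np


def encode(texts: list[str]) -> np.ndarray:
    """Placeholder encoder: swap in a real dense retriever's embeddings."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    vecs = rng.normal(size=(len(texts), 128))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def retrieve(dialogue: list[tuple[str, str]], question: str,
             passages: list[str], top_k: int = 2) -> list[str]:
    # Concatenate the conversation so the retriever sees the full context;
    # this is what removes the need for a separate query-rewriting model.
    query = " ".join(f"{role}: {text}" for role, text in dialogue)
    query += f" user: {question}"
    q_emb = encode([query])   # shape (1, d)
    p_emb = encode(passages)  # shape (n, d)
    scores = (p_emb @ q_emb.T).ravel()
    order = np.argsort(-scores)[:top_k]
    return [passages[i] for i in order]


docs = ["ChatQA fine-tunes a dense retriever on multi-turn QA data.",
        "The ChatRAG Bench contains ten conversational QA datasets.",
        "Query rewriting uses an extra LLM call per turn."]
print(retrieve([("user", "How does ChatQA handle retrieval?"),
                ("assistant", "With a conversational dense retriever.")],
               "What data is it fine-tuned on?", docs))
```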

Conclusion

The results of the comprehensive evaluation illustrate that the introduced ChatQA-70B model outperforms or matches industry-standard models such as GPT-3.5-turbo and GPT-4. This performance is particularly notable given that the model does not rely on synthetic data from existing LLMs. Moreover, the study identifies cost efficiency in retrieval as a notable contribution, demonstrating similar or better performance without the extra computational expense of query rewriting. This work represents a milestone in conversational QA modeling and provides a promising direction for future research and practical applications.
