Emergent Mind

Abstract

Recent advances in large language models (LLMs) have led to significant improvements in translating natural language questions into SQL queries. While achieving high accuracy in SQL generation is crucial, little is known about how reliably these text-to-SQL models handle the diverse types of questions encountered in real-world deployment, including unanswerable ones. To explore this aspect, we introduce TrustSQL, a new benchmark designed to assess the reliability of text-to-SQL models in both single-database and cross-database settings. TrustSQL requires models to produce one of two outputs: 1) an SQL prediction, or 2) abstention from prediction, either because the generated SQL may be erroneous or because the question is unanswerable. For model evaluation, we explore modeling approaches specifically designed for this task: 1) pipeline approaches that optimize separate models for answerability detection, SQL generation, and error detection, and then integrate them; and 2) unified approaches that use a single model to solve the entire task. Experimental results with our new reliability score show that addressing this challenge spans many different areas of research and opens new avenues for model development. However, under varying penalty settings, none of the methods consistently surpasses the reliability score of a naive baseline that abstains from SQL prediction on all questions.
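The abstract does not spell out how the penalized reliability score is computed, so the following is only a minimal illustrative sketch. It assumes a per-question scoring rule in which correct SQL on an answerable question and abstention on an unanswerable question each earn +1, abstention on an answerable question earns 0, and any incorrect or inappropriate SQL prediction incurs a penalty of -c; the function name `reliability_score`, the `Example` fields, and the rule itself are assumptions for illustration, not the paper's definition.

```python
# Sketch of a penalized reliability score for text-to-SQL with abstention.
# NOTE: the scoring rule below is an assumption; the abstract only states that
# a model may either predict SQL or abstain, and that penalties vary.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Example:
    answerable: bool           # does the question have a valid SQL answer?
    prediction: Optional[str]  # predicted SQL, or None if the model abstained
    correct: bool = False      # execution-match result when prediction is not None


def reliability_score(examples: List[Example], penalty: float) -> float:
    """Average per-question score under the assumed rule:
    +1 for correct SQL on answerable questions and for abstaining on
    unanswerable ones; 0 for abstaining on answerable questions;
    -penalty for any incorrect or inappropriate SQL prediction."""
    total = 0.0
    for ex in examples:
        if ex.prediction is None:            # model abstained
            total += 1.0 if not ex.answerable else 0.0
        elif ex.answerable and ex.correct:   # correct SQL on an answerable question
            total += 1.0
        else:                                # wrong SQL, or SQL for an unanswerable question
            total -= penalty
    return total / len(examples)


if __name__ == "__main__":
    # Toy evaluation set: two answerable and two unanswerable questions.
    data = [
        Example(answerable=True, prediction="SELECT ...", correct=True),
        Example(answerable=True, prediction=None),
        Example(answerable=False, prediction=None),
        Example(answerable=False, prediction="SELECT ..."),
    ]
    for c in (0, 1, 10):
        print(f"penalty={c}: RS={reliability_score(data, c):.2f}")
```

Under this assumed rule, the naive abstain-on-everything baseline mentioned in the abstract scores +1 on every unanswerable question and 0 elsewhere, so its score is unaffected by the penalty, which is what makes it a surprisingly strong reference point.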
