Rethinking Tabular Data Understanding with Large Language Models

(2312.16702)
Published Dec 27, 2023 in cs.CL, cs.AI, cs.DB, and cs.LG

Abstract

LLMs have been shown to be capable of a variety of tasks, yet their ability to interpret and reason over tabular data remains underexplored. This study investigates tabular data understanding from three core perspectives: the robustness of LLMs to structural perturbations in tables, a comparative analysis of textual and symbolic reasoning over tables, and the potential to boost model performance by aggregating multiple reasoning pathways. We find that structural variance among tables presenting the same content causes a notable performance decline, particularly in symbolic reasoning tasks, which motivates a method for table structure normalization. Moreover, textual reasoning slightly edges out symbolic reasoning, and a detailed error analysis reveals that each exhibits different strengths depending on the task. Notably, aggregating the textual and symbolic reasoning pathways through a mix self-consistency mechanism achieves SOTA performance, with an accuracy of 73.6% on WikiTableQuestions, a substantial advance over previous table processing paradigms for LLMs.

Figure: Challenges LLMs encounter in understanding and interpreting table structures.

Overview

  • LLMs are not as capable in handling structured tabular data as they are with unstructured text.

  • LLMs face challenges with different table structures, particularly when they are transposed.

  • A new normalization method improves LLM robustness to changes in table structure.

  • Integrating multiple reasoning pathways including textual and symbolic reasoning with self-consistency improves LLM performance.

  • The study establishes new benchmarks in LLMs' abilities to understand and work with tabular data.

Tabular Data and LLMs

LLMs are currently less adept at handling structured tabular data than unstructured text. Challenges arise from differing table layouts, such as tables with headers in the first row (column tables) versus the first column (row tables), as well as tables requiring numerical operations.
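To make the two layouts concrete, here is a minimal illustration (the table contents are invented for this example, not taken from the paper): a column table keeps headers in the first row, and transposing it yields the equivalent row table.

```python
# A "column table": headers occupy the first row.
column_table = [
    ["Player", "Team", "Goals"],
    ["Ann",    "Red",  "3"],
    ["Bo",     "Blue", "5"],
]

# Transposing produces the "row table" layout of identical content:
# headers now occupy the first column.
row_table = [list(r) for r in zip(*column_table)]

for row in row_table:
    print(row)
# Each printed row now starts with a header cell, e.g. ['Player', 'Ann', 'Bo'].
```

The two variables hold exactly the same information, yet the study finds LLMs answer questions over them with markedly different accuracy.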

Robustness and Reasoning

LLMs tend to struggle when table structures are altered: different orientations of the same information significantly degrade performance, with transposed tables posing a particular challenge. A new method for table structure normalization (NORM) enhances LLM robustness to these structural changes. Textual reasoning is slightly ahead of symbolic reasoning in overall effectiveness, though each exhibits distinct advantages on specific tasks.
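The normalization idea can be sketched in a few lines. The paper's NORM method is LLM-driven; the sketch below instead uses a simple hand-written heuristic (headers are rarely numeric) purely to illustrate the goal of converting any orientation into a canonical column-table form.

```python
def looks_numeric(cell: str) -> bool:
    """Heuristic: does a cell parse as a number?"""
    try:
        float(cell.replace(",", ""))
        return True
    except ValueError:
        return False

def normalize_table(table):
    """Return the table in canonical column-table form (headers in row 0).

    Simplified stand-in for NORM: if the first row contains more numeric
    cells than the first column, the table is likely transposed (headers
    in the first column), so flip it back.
    """
    first_row_numeric = sum(looks_numeric(c) for c in table[0])
    first_col_numeric = sum(looks_numeric(r[0]) for r in table)
    if first_row_numeric > first_col_numeric:
        table = [list(r) for r in zip(*table)]  # transpose
    return table
```

For example, `normalize_table([["Year", "2001", "2002"], ["Goals", "3", "5"]])` flips the row table so that `["Year", "Goals"]` becomes the header row, while an already-normalized column table passes through unchanged.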

Performance Boost With Multiple Reasoning Aggregation

LLMs interpret tabular data more reliably when multiple reasoning pathways are integrated. One prominent method combines textual and symbolic reasoning with a self-consistency mechanism, achieving state-of-the-art performance on the WikiTableQuestions dataset with an accuracy of 73.6%.
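The aggregation step can be sketched as a majority vote over pooled samples. This is an illustrative simplification, not the paper's exact implementation: each list is assumed to hold answers sampled from one pathway (e.g., chain-of-thought text versus executed symbolic programs), and the most common answer across both pools wins.

```python
from collections import Counter

def mix_self_consistency(textual_answers, symbolic_answers):
    """Aggregate answers from two reasoning pathways by majority vote.

    Sketch of the mix self-consistency idea: pool the sampled answers
    from the textual and symbolic pathways, then return the answer that
    occurs most often across the combined pool.
    """
    pooled = list(textual_answers) + list(symbolic_answers)
    answer, _count = Counter(pooled).most_common(1)[0]
    return answer
```

For instance, `mix_self_consistency(["5", "5", "4"], ["5", "3"])` returns `"5"`, since it is the plurality answer across both pathways.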

Conclusion

This research outlines the difficulties LLMs face with tabular data and presents normalization strategies and reasoning pathway aggregation as effective solutions. The combination of textual and symbolic reasoning, enhanced by self-consistency, leads to significant advances over existing table processing frameworks, establishing new benchmarks in LLMs' abilities to understand and reason over tabular data.
