ChartCheck: Explainable Fact-Checking over Real-World Chart Images (2311.07453v2)

Published 13 Nov 2023 in cs.CL and cs.CV

Abstract: Whilst fact verification has attracted substantial interest in the natural language processing community, verifying misinforming statements against data visualizations such as charts has so far been overlooked. Charts are commonly used in the real-world to summarize and communicate key information, but they can also be easily misused to spread misinformation and promote certain agendas. In this paper, we introduce ChartCheck, a novel, large-scale dataset for explainable fact-checking against real-world charts, consisting of 1.7k charts and 10.5k human-written claims and explanations. We systematically evaluate ChartCheck using vision-language and chart-to-table models, and propose a baseline to the community. Finally, we study chart reasoning types and visual attributes that pose a challenge to these models.

Authors (6)
  1. Mubashara Akhtar (11 papers)
  2. Nikesh Subedi (1 paper)
  3. Vivek Gupta (75 papers)
  4. Sahar Tahmasebi (4 papers)
  5. Oana Cocarascu (14 papers)
  6. Elena Simperl (40 papers)
Citations (2)

Summary

  • The paper presents ChartCheck, a dataset of 1.7k charts paired with 10.5k human-written claims and explanations to advance explainable fact-checking in visual data.
  • It employs a four-step crowdsourcing pipeline to ensure high-quality, realistic claims and challenges models with complex reasoning tasks, including commonsense and comparison.
  • Evaluation reveals that vision-language models achieve only 73.8% accuracy compared to 95.7% human performance, underscoring the need for improved AI reasoning in chart analysis.

Explainable Fact-Checking in Chart Visualization: A Study of ChartCheck

The paper "ChartCheck: Explainable Fact-Checking over Real-World Chart Images" makes significant contributions to automated fact-checking, with a focus on data visualizations such as charts. In an era of rampant misinformation, chart analysis is critical because the visual medium can subtly mislead audiences. The paper introduces ChartCheck, a dataset devised specifically for explainable fact-checking against real-world chart images, comprising 1.7k charts alongside 10.5k human-written claims and explanations.

Overview of ChartCheck

ChartCheck addresses a gap left by previous fact-checking research, which has primarily focused on textual claims and lacked comprehensive resources for chart-based misinformation. The dataset is valuable because it covers a wide range of visualization types, including bar, pie, and line charts. It is designed to challenge existing models with realistic, complex scenarios in which claims must not only be verified but also explained thoroughly, to ensure transparency and reliability.

Methodological Framework and Evaluation

The researchers used a systematic approach to build and evaluate the dataset. A four-step crowdsourcing pipeline was implemented to ensure data quality: chart filtering, claim generation, explanation generation, and rigorous validation. These steps ensure that the dataset is not only large but also rich in quality and breadth.

The dataset was tested against state-of-the-art vision-language models (VLMs) and chart-to-table architectures. Despite advances in these models, the highest accuracy achieved was 73.8%, a significant gap from human performance at 95.7%. This underperformance underscores the complexity and subtlety of detecting chart-based misinformation, even for advanced AI models.
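The verification task reduces to predicting a verdict for each chart–claim pair, and the headline numbers above are simple label accuracy over those verdicts. The following Python sketch illustrates the metric with invented toy data; the label names and the evaluation harness are assumptions for illustration, not the paper's actual code.

```python
# Hypothetical sketch of scoring a ChartCheck-style verification run:
# each example pairs a chart with a claim, and the model predicts a
# verdict label. Labels and data below are illustrative only.

def accuracy(predictions, gold_labels):
    """Fraction of claims whose predicted verdict matches the gold verdict."""
    assert len(predictions) == len(gold_labels) and gold_labels
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Toy predictions against toy gold labels (3 of 4 correct -> 0.75).
gold = ["supported", "refuted", "supported", "refuted"]
model_preds = ["supported", "refuted", "refuted", "refuted"]

print(f"accuracy: {accuracy(model_preds, gold):.2f}")
```

On the real benchmark this same computation yields the 73.8% (model) versus 95.7% (human) figures reported above.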

Key Insights and Challenges

From the evaluation, several critical insights and challenges emerged:

  • Chart Complexity: Certain chart types posed more of a challenge, with pie and 3D pie charts being notably difficult for models due to their visual intricacies. This suggests the need for more sophisticated models that can engage with complex visual data.
  • Reasoning Types: Reasoning types such as "commonsense reasoning" and "comparison" were particularly challenging for models. This aligns with the notion that machine understanding of visual nuances requires improved integration of visual and textual information.
  • Model Performance: The experiments showed that vision-language models extract and interpret chart information better than chart-to-table pipelines, particularly when guided with reasoning strategies such as Chain-of-Thought (CoT) prompting.
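The CoT guidance mentioned in the last point amounts to structuring the prompt so the model reads values off the chart before committing to a verdict. The sketch below shows what such a prompt might look like; the template wording and function name are assumptions for illustration and do not reproduce the paper's exact prompts.

```python
# Minimal sketch of Chain-of-Thought-style prompting for chart
# fact-checking. The template text is a hypothetical illustration.

def build_cot_prompt(claim: str) -> str:
    """Build a CoT-style prompt for a vision-language model given a claim."""
    return (
        "You are given a chart image and a claim about it.\n"
        f"Claim: {claim}\n"
        "Think step by step: first read off the relevant values from the "
        "chart, then compare them to the claim, and finally answer "
        "'supported' or 'refuted' followed by a short explanation."
    )

prompt = build_cot_prompt("Sales in 2021 were higher than in 2020.")
print(prompt)
```

The prompt would be sent to the VLM together with the chart image; the intermediate "read then compare" steps are what CoT prompting adds over directly asking for a verdict.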

Implications and Future Directions

The implications of this work are significant for both practical applications and theoretical advancements in AI technologies focusing on misinformation. Practically, ChartCheck provides a benchmark for developers aiming to create systems that can accurately reason about and explain visual data misinterpretations. Theoretically, it challenges researchers to enhance machine learning models with capabilities akin to human cognition in interpreting visual datasets integrated with text.

Future directions include enhancing model architectures to combine visual and textual data interpretation more effectively, improving model training with reasoning-focused datasets, and expanding datasets to include a broader range of visualizations. Moreover, developing multilingual resources could ensure broader applicability across global misinformation contexts.

In sum, ChartCheck stands as a pioneering resource that opens new avenues for research into explainable AI and visual reasoning. By confronting models with the complexities of real-world chart data and the demands of explainable fact-checking, this paper underscores the ongoing challenges in AI's ability to accurately and transparently interpret and verify data visualizations.
