CLOMO: Counterfactual Logical Modification with Large Language Models (2311.17438v4)

Published 29 Nov 2023 in cs.CL and cs.AI

Abstract: In this study, we delve into the realm of counterfactual reasoning capabilities of LLMs. Our primary objective is to cultivate the counterfactual thought processes within LLMs and rigorously assess these processes for their validity. Specifically, we introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark. In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship. To effectively evaluate a generation model's counterfactual capabilities, we propose an innovative evaluation metric, the decomposed Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs instead of modeling the task as a multiple-choice problem. Analysis shows that the proposed automatic metric aligns well with human preference. Our experimental results show that while LLMs demonstrate a notable capacity for logical counterfactual thinking, there remains a discernible gap between their current abilities and human performance. Code and data are available at https://github.com/Eleanor-H/CLOMO.
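
To make the task format more concrete, the sketch below shows one way a CLOMO-style generate-then-evaluate loop could be wired up. The query_llm helper, the prompt wording, and the two-check decomposition of the Self-Evaluation Score are illustrative assumptions, not the authors' implementation; the linked repository contains the actual code and data.

```python
# Minimal sketch of the CLOMO setup described in the abstract.
# `query_llm` is a hypothetical stand-in for any chat-completion client;
# prompts and the SES decomposition below are assumptions for illustration.

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM call; plug in your own client."""
    raise NotImplementedError("connect this to an LLM API of your choice")

def clomo_modify(argument: str, new_premise: str, relation: str) -> str:
    """Ask the model to counterfactually rewrite the argument so that the
    given logical relation still holds under the new premise."""
    prompt = (
        f"Argument:\n{argument}\n\n"
        f"Counterfactual premise:\n{new_premise}\n\n"
        f"Rewrite the argument so that the premise remains a {relation} "
        f"of the conclusion. Output only the rewritten argument."
    )
    return query_llm(prompt)

def decomposed_self_eval(modified: str, new_premise: str, relation: str) -> float:
    """Score the generation with decomposed yes/no checks, in the spirit of
    the Self-Evaluation Score (SES): the mean of binary sub-judgments."""
    checks = [
        f"Is the following text a coherent argument?\n{modified}",
        f"In this argument, does the premise '{new_premise}' function as a "
        f"{relation} of the conclusion?\n{modified}",
    ]
    votes = [
        query_llm(f"{check}\nAnswer yes or no.").strip().lower().startswith("yes")
        for check in checks
    ]
    return sum(votes) / len(checks)
```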

Authors (9)
  1. Yinya Huang (22 papers)
  2. Ruixin Hong (10 papers)
  3. Hongming Zhang (111 papers)
  4. Wei Shao (95 papers)
  5. Zhicheng Yang (26 papers)
  6. Dong Yu (329 papers)
  7. Changshui Zhang (81 papers)
  8. Xiaodan Liang (318 papers)
  9. Linqi Song (93 papers)
Citations (5)
