
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations (2205.11822v2)

Published 24 May 2022 in cs.CL

Abstract: Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. However, these approaches are fundamentally bounded by the correctness of explanations, which themselves are often noisy and inconsistent. In this work, we develop Maieutic Prompting, which infers a correct answer to a question even from the noisy and inconsistent generations of LM. Maieutic Prompting induces a tree of explanations abductively (e.g. X is true, because ...) and recursively, then frames the inference as a satisfiability problem over these explanations and their logical relations. We test Maieutic Prompting for true/false QA on three challenging benchmarks that require complex commonsense reasoning. Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods, and as a fully unsupervised approach, performs competitively with supervised models. We also show that Maieutic Prompting improves robustness in inference while providing interpretable rationales.

Citations (174)

Summary

  • The paper presents Maieutic Prompting, a novel technique that uses recursive abductive reasoning to refine language models' inference processes.
  • It constructs explanation trees that assess belief and consistency, achieving up to a 20% accuracy boost on complex commonsense benchmarks.
  • Its robust approach effectively handles semantic perturbations, paving the way for reliable applications in neuro-symbolic AI research.

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

The paper "Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations" introduces an innovative method designed to address the inherent challenges faced by pre-trained LLMs (LMs) in producing consistent reasoning outputs. The primary focus is on Maieutic prompting, a technique leveraging abductive reasoning and recursive explanations to enhance the logical consistency and reliability of inference processes in LLMs.

Overview and Methodology

Maieutic prompting is inspired by the Socratic (maieutic) method, in which reasoning is refined by eliminating hypotheses that lead to contradictions. Rather than relying on the LM to produce clean explanations, it accepts that generated explanations are typically noisy and inconsistent and frames the final inference as a satisfiability problem over them. The method targets binary (true/false) question answering that requires complex commonsense reasoning.

The approach involves constructing a tree of explanations, each representing different hypotheses, using abductive logic. This tree is generated recursively, prompting the LM to rationalize both possible answers (True and False) and evaluate the logical integrity of each explanation. The correctness of these explanations is then assessed by testing their logical relations, forming constraints that guide the inference toward consistent outcomes.
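
To make the generation step concrete, the sketch below shows one way the recursive, two-sided ("True, because ... / False, because ...") expansion could be organized. This is a minimal sketch under stated assumptions: the `explain` callable (a wrapper around an LM prompt), the `Node` fields, and the fixed depth are illustrative, and the paper's additional checks, such as pruning propositions that already pass a logical-integrity test, are omitted.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Node:
    proposition: str           # statement whose truth the LM is asked to explain
    explanation_true: str      # "proposition is True, because ..." generation
    explanation_false: str     # "proposition is False, because ..." generation
    children: List["Node"] = field(default_factory=list)


def build_tree(proposition: str,
               explain: Callable[[str, bool], str],
               depth: int = 2) -> Node:
    """Recursively expand a proposition into a tree of abductive explanations."""
    # Prompt the LM to rationalize both possible answers for this proposition.
    e_true = explain(proposition, True)
    e_false = explain(proposition, False)
    node = Node(proposition, e_true, e_false)

    if depth > 1:
        # Each generated explanation becomes a proposition to be explained in
        # turn, which is what makes the prompting recursive ("maieutic").
        for child in (e_true, e_false):
            node.children.append(build_tree(child, explain, depth - 1))
    return node
```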

Key to the implementation are the concepts of belief and consistency. Belief captures the LM's confidence that a generated proposition is true, while consistency evaluates how logically coherent each explanation is with the rest of the tree. This dual evaluation makes the final inference more resistant to logical errors such as the LM affirming both a statement and its negation, or falsifying its own explanation.
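
To illustrate the satisfiability framing, here is a minimal sketch of the inference step posed as weighted MAX-SAT and solved by brute-force enumeration, which is adequate for small trees. The variable names, placeholder belief values, implication weights, and clause forms are illustrative assumptions; in the paper, unary weights come from the LM's belief in each proposition, pairwise weights from the consistency of their logical relations, and a dedicated MAX-SAT solver performs the optimization.

```python
from itertools import product

# Propositions: the question Q and two generated explanations E1, E2.
variables = ["Q", "E1", "E2"]

# Unary "belief" weights: the LM's estimated probability that each proposition
# is true (placeholder numbers).
belief = {"Q": 0.55, "E1": 0.90, "E2": 0.20}

# Pairwise "consistency" constraints: (antecedent, consequent, weight), read as
# "if antecedent is True, consequent should be True" -- e.g. E1 supports Q.
implications = [("E1", "Q", 0.8), ("E2", "Q", 0.1)]


def objective(assignment):
    """Total weight of satisfied belief and consistency clauses."""
    score = 0.0
    for v in variables:
        score += belief[v] if assignment[v] else (1.0 - belief[v])
    for antecedent, consequent, weight in implications:
        if (not assignment[antecedent]) or assignment[consequent]:
            score += weight  # the implication holds under this assignment
    return score


# Brute-force weighted MAX-SAT: pick the truth assignment with the highest score.
best = max((dict(zip(variables, bits))
            for bits in product([True, False], repeat=len(variables))),
           key=objective)
print(best["Q"])  # the truth value chosen for Q is the final answer
```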

Experimental Results

Maieutic prompting was tested on three challenging benchmarks that require intricate commonsense and factual reasoning: Com2Sense, CSQA 2.0, and CREAK. It achieved up to a 20% improvement in accuracy over state-of-the-art few-shot prompting methods and, as a fully unsupervised approach, performed competitively with supervised models that benefit from fine-tuning on labeled data.

Moreover, Maieutic prompting demonstrated increased robustness under semantic perturbations, maintaining consistent reasoning across questions that are semantically similar but logically opposed. This robustness indicates its potential for reliable application in diverse real-world scenarios, where subtleties in language can challenge model consistency.

Implications and Future Directions

The implications of Maieutic prompting extend into both practical applications and theoretical developments in AI. Practically, it offers a more reliable method for deploying LLMs in environments requiring complex inferential reasoning, such as legal document analysis, medical diagnosis, and strategic decision-making systems. Theoretically, it provides a foundation for further exploration into integrating symbolic reasoning with neural networks, potentially influencing the future design of neuro-symbolic architectures.

The paper suggests several directions for future research, including extending Maieutic prompting to broader task formats beyond binary true/false statements and improving the interaction and relationship modeling between different maieutic trees. These advancements could amplify its utility in handling multifaceted queries typical in real-world applications.

In summary, Maieutic prompting presents a promising evolution of reasoning capabilities in LMs, addressing existing limitations through a structured, logic-driven approach. Its strong performance as a fully unsupervised method underscores the growing importance of hybrid neuro-symbolic techniques in AI research.