
Abstract

Logical reasoning serves as a cornerstone of human cognition. Recently, LLMs have demonstrated promising progress on logical reasoning tasks. To improve this capability, recent studies have integrated LLMs with various symbolic solvers using diverse techniques and methodologies. While some combinations excel on specific datasets, others fall short, and it remains unclear whether the variance in performance stems from the methodology employed or from the specific symbolic solver used; there has been no consistent comparison of symbolic solvers and of how they influence LLMs' logical reasoning ability. We perform experiments on LLMs integrated with three symbolic solvers: Z3, Pyke, and Prover9, and compare their performance on three logical reasoning datasets: ProofWriter, PrOntoQA, and FOLIO. Our findings indicate that, when combined with LLMs, Pyke performs significantly worse than Prover9 and Z3. Z3's overall accuracy slightly surpasses that of Prover9, but Prover9 can successfully execute more questions.
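For context on the solver side of such a pipeline, below is a minimal sketch using the Python bindings of Z3 (the `z3-solver` package) on a hypothetical ProofWriter-style question ("Anne is kind. Kind people are nice. Is Anne nice?"). The premises, predicate names, and question are illustrative assumptions, not taken from the paper; in an LLM-plus-solver setup, the LLM would be responsible for producing this formalization from the natural-language problem, and the solver would check entailment.

```python
from z3 import Bool, Implies, Not, Solver, unsat

# Hypothetical ProofWriter-style example (illustrative only):
# Premises: "Anne is kind." and "If someone is kind, they are nice."
# Question: "Is Anne nice?"
kind_anne = Bool("kind_anne")
nice_anne = Bool("nice_anne")

solver = Solver()
solver.add(kind_anne)                      # premise: Anne is kind
solver.add(Implies(kind_anne, nice_anne))  # rule: kind -> nice

# Check entailment by refutation: the premises together with the
# negated conclusion must be unsatisfiable for the conclusion to follow.
solver.push()
solver.add(Not(nice_anne))
entailed = solver.check() == unsat
solver.pop()

print("Entailed:", entailed)  # True
```

Checking entailment by refutation (asserting the negated conclusion and testing for unsatisfiability) is the standard way to pose a yes/no deduction question to a satisfiability-based solver such as Z3; theorem provers such as Prover9 instead search for a proof of the conclusion directly.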
