Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions (2401.09395v6)

Published 17 Jan 2024 in cs.CL

Abstract: Recent advancements in LLMs have showcased striking results on existing logical reasoning benchmarks, with some models even surpassing human performance. However, the true depth of their competencies and robustness in reasoning tasks remains an open question. To this end, in this paper, we focus on two popular reasoning tasks: arithmetic reasoning and code generation. Particularly, we introduce (i) a general ontology of perturbations for math and coding questions, (ii) a semi-automatic method to apply these perturbations, and (iii) two datasets, GSMORE and HUMANEVAL-CORE, respectively, of perturbed math and coding problems to probe LLM capabilities in numeric reasoning and coding tasks. Through comprehensive evaluations of both closed-source and open-source LLMs, we show a significant performance drop across all the models against the perturbed questions, suggesting that the current LLMs lack robust problem solving skills and structured reasoning abilities in many areas, as defined by our ontology. We open-source the datasets and source codes at: https://github.com/declare-lab/LLM-ReasoningTest.
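
To make the abstract's methodology concrete, here is a minimal, hypothetical sketch of what an ontology-guided perturbation of a GSM8K-style math problem might look like. The perturbation names ("numerical substitution", "distractor insertion"), the seed problem, and the query_model stub are illustrative assumptions for this sketch, not the authors' actual ontology, datasets, or code (see the repository linked above for those).

# Sketch: perturb a templated math problem and check whether a model's
# answer still matches the recomputed gold answer.

def seed_problem(a=3, b=4, price=2):
    question = (f"Alice buys {a} apples and {b} oranges. "
                f"Each fruit costs ${price}. How much does she spend?")
    answer = (a + b) * price
    return question, answer

def numerical_substitution(a=7, b=9, price=5):
    # Same template, different operands; the gold answer must be recomputed.
    return seed_problem(a, b, price)

def distractor_insertion(a=3, b=4, price=2):
    # Adds irrelevant information; the gold answer is unchanged.
    q, ans = seed_problem(a, b, price)
    q = q.replace("How much", "Her brother owns 6 bananas. How much")
    return q, ans

def query_model(question: str) -> int:
    """Placeholder for an actual LLM call via whatever API client you use."""
    raise NotImplementedError

def evaluate(perturbations):
    results = {}
    for name, make in perturbations.items():
        question, gold = make()
        try:
            results[name] = (query_model(question) == gold)
        except NotImplementedError:
            results[name] = None  # model not wired up in this sketch
    return results

if __name__ == "__main__":
    perturbations = {
        "original": seed_problem,
        "numerical_substitution": numerical_substitution,
        "distractor_insertion": distractor_insertion,
    }
    for name, make in perturbations.items():
        q, a = make()
        print(f"[{name}] {q} -> gold answer: {a}")
    print(evaluate(perturbations))

Comparing accuracy on the original versus perturbed variants, as in the last step above, is the kind of robustness measurement the paper reports at scale across its perturbation ontology.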
