Reverse That Number! Decoding Order Matters in Arithmetic Learning (2403.05845v1)

Published 9 Mar 2024 in cs.CL and cs.AI

Abstract: Recent advances in pretraining have demonstrated that modern LLMs can effectively learn arithmetic operations. However, despite acknowledging the significance of digit order in arithmetic computation, current methodologies predominantly rely on sequential, step-by-step approaches for teaching LLMs arithmetic, leading to the conclusion that better performance requires ever more fine-grained intermediate steps. Diverging from this conventional path, our work introduces a novel strategy that not only reevaluates digit order by prioritizing output from the least significant digit, but also incorporates a step-by-step methodology to substantially reduce complexity. We have developed and applied this method in a comprehensive set of experiments. Compared to the previous state-of-the-art (SOTA) method, our findings reveal an overall improvement in accuracy while requiring only a third of the tokens typically used during training. To facilitate replication and further research, we have made our code and dataset publicly available at https://anonymous.4open.science/r/RAIT-9FB7/.
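The core idea, decoding the answer least-significant-digit first so that each emitted digit depends only on digits (and carries) already produced, can be illustrated with a minimal sketch. The Python snippet below is an illustration of the reversed-output idea only, not the paper's released code; the function name and the `a+b=` prompt format are hypothetical stand-ins for whatever data format the authors actually use, and it assumes non-negative integer operands.

```python
def reversed_addition_target(a: int, b: int) -> str:
    """Render a + b as a training string whose answer digits appear
    least-significant-digit first, matching the order in which carries
    are produced during grade-school addition.

    Hypothetical format for illustration; not the paper's dataset schema.
    """
    xs, ys = str(a)[::-1], str(b)[::-1]  # operand digits, LSD first
    digits, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        total = da + db + carry
        digits.append(str(total % 10))   # emit the current answer digit
        carry = total // 10              # propagate the carry leftward
    if carry:
        digits.append(str(carry))
    return f"{a}+{b}=" + "".join(digits)

if __name__ == "__main__":
    # 345 + 678 = 1023, emitted LSD-first as "3201".
    print(reversed_addition_target(345, 678))  # -> 345+678=3201
```

For example, `reversed_addition_target(345, 678)` yields `345+678=3201`; reading the answer digits right to left recovers 1023. Emitting digits in carry order means the model never has to "look ahead" for a carry when generating left to right, which is the intuition behind the reported accuracy and token-efficiency gains.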
