Scaling Relationship on Learning Mathematical Reasoning with Large Language Models (2308.01825v2)

Published 3 Aug 2023 in cs.CL

Abstract: Mathematical reasoning is a challenging task for LLMs, while the scaling relationship of it with respect to LLM capacity is under-explored. In this paper, we investigate how the pre-training loss, supervised data amount, and augmented data amount influence the reasoning performances of a supervised LLM. We find that pre-training loss is a better indicator of the model's performance than the model's parameter count. We apply supervised fine-tuning (SFT) with different amounts of supervised data and empirically find a log-linear relation between data amount and model performance, and we find better models improve less with enlarged supervised datasets. To augment more data samples for improving model performances without any human effort, we propose to apply Rejection sampling Fine-Tuning (RFT). RFT uses supervised models to generate and collect correct reasoning paths as augmented fine-tuning datasets. We find with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs. We also find RFT brings more improvement for less performant LLMs. Furthermore, we combine rejection samples from multiple models which push LLaMA-7B to an accuracy of 49.3% on GSM8K which outperforms the supervised fine-tuning (SFT) accuracy of 35.9% significantly.

Citations (122)

Summary

  • The paper identifies pre-training loss as a stronger indicator of LLM mathematical reasoning ability than model parameter count.
  • It introduces a Rejection Sampling Fine-Tuning (RFT) strategy that leverages generated reasoning paths to effectively augment fine-tuning data.
  • Combining rejection samples from multiple models lifts LLaMA-7B from 35.9% (SFT) to 49.3% on GSM8K, a gain of over 13 percentage points.

Introduction to the Scaling Relationship in Mathematical Reasoning

LLMs have become increasingly adept at tackling mathematical reasoning problems, a development of particular interest for both theoretical and applied AI research. The extent to which different factors influence an LLM's capacity for mathematical reasoning is a question of paramount importance for optimizing their training and deployment.

Pre-Training Loss as Performance Indicator

A crucial finding of the paper is that pre-training loss is a better indicator of an LLM's mathematical reasoning ability than the model's parameter count alone. This suggests that driving pre-training loss lower can be more effective for improving reasoning performance than simply increasing model size. Supervised fine-tuning (SFT) with varying amounts of supervised data reveals a log-linear relationship between data amount and performance, with diminishing gains for better pre-trained models (those with lower pre-training loss).
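
For illustration only (the notation here is ours, not the paper's), the reported trend can be summarized as a log-linear fit of accuracy against the supervised data amount n, with per-model constants a and b:

```latex
% A minimal sketch of the log-linear trend, assuming accuracy is modeled
% as a function of the SFT data amount n; a and b are fit per model.
\mathrm{acc}(n) \approx a \cdot \log n + b
```

Under this reading, models with lower pre-training loss start from a higher baseline but gain less from each enlargement of the supervised dataset.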

Rejection Sampling Fine-Tuning (RFT) Strategy

To improve model performance without additional human annotation, the researchers propose Rejection sampling Fine-Tuning (RFT). The strategy uses a supervised (SFT) model to sample candidate solutions, keeps only those with correct final answers, and uses the collected reasoning paths as an augmented fine-tuning dataset. RFT's gains depend on the number of distinct reasoning paths in the augmented data, which can be increased by sampling more candidates per question or by combining samples from multiple models. Notably, this method is not only computationally cheaper than extended pre-training but also brings marked improvements for less performant LLMs.
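
A minimal sketch of the RFT data-collection step, assuming a hypothetical `sft_model.generate` sampler and a simplified answer parser; the paper filters for distinct reasoning paths, which is approximated here by deduplicating on normalized text:

```python
import re

def extract_answer(path: str) -> str:
    """Hypothetical parser: take the last number in a sampled reasoning path."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", path)
    return numbers[-1] if numbers else ""

def collect_rft_data(sft_model, train_set, k=100, temperature=0.7):
    """Sample k reasoning paths per question; keep correct, distinct ones."""
    augmented = []
    for example in train_set:
        question, gold = example["question"], example["answer"]
        seen = set()
        for path in sft_model.generate(question, num_samples=k, temperature=temperature):
            # Rejection step: keep only paths whose final answer matches the reference.
            if extract_answer(path) != gold:
                continue
            # Keep only distinct reasoning paths (a stand-in for the paper's filtering).
            key = " ".join(path.split())
            if key not in seen:
                seen.add(key)
                augmented.append({"question": question, "reasoning": path})
    return augmented
```

The resulting dataset is then used for another round of supervised fine-tuning on the base model.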

Enhanced Performance with Combined Rejection Samples

Applying RFT with rejection samples combined from multiple models significantly improves LLM capabilities, lifting LLaMA-7B from 35.9% (SFT) to 49.3% on GSM8K, a gain of over 13 percentage points. This finding indicates that exposure to more diverse reasoning paths can improve generalization during reasoning.
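
Continuing the sketch above (again with hypothetical data structures), combining rejection samples from several models amounts to pooling their RFT datasets and keeping one copy of each distinct question/reasoning pair, so the fine-tuning set covers more distinct reasoning paths:

```python
def combine_rft_datasets(per_model_datasets):
    """Pool RFT samples from multiple models, deduplicating across models."""
    combined, seen = [], set()
    for dataset in per_model_datasets:
        for sample in dataset:
            key = (sample["question"], " ".join(sample["reasoning"].split()))
            if key not in seen:
                seen.add(key)
                combined.append(sample)
    return combined
```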

Implications and Future Directions

The paper’s insights into the factors influencing LLMs' mathematical reasoning abilities—specifically the impact of pre-training losses, the quantity of supervised data, and the amount of augmented reasoning paths—are poised to inform future LLM training strategies. Moreover, the relative ease and efficiency of RFT compared to extended pre-training underscore its potential as a key approach for enhancing LLM performance on mathematical reasoning tasks.

In conclusion, while the pursuit of more efficient and effective LLMs continues, it has become clear that optimizing pre-training processes and utilizing innovative fine-tuning approaches like RFT hold significant promise for improving LLM reasoning skills in mathematical domains.
