Emergent Mind

Common 7B Language Models Already Possess Strong Math Capabilities

(2403.04706)
Published Mar 7, 2024 in cs.CL and cs.AI

Abstract

Mathematical capabilities were previously believed to emerge in common language models only at a very large scale or require extensive math-related pre-training. This paper shows that the LLaMA-2 7B model with common pre-training already exhibits strong mathematical abilities, as evidenced by its impressive accuracy of 97.7% and 72.0% on the GSM8K and MATH benchmarks, respectively, when selecting the best response from 256 random generations. The primary issue with the current base model is the difficulty in consistently eliciting its inherent mathematical capabilities. Notably, the accuracy for the first answer drops to 49.5% and 7.9% on the GSM8K and MATH benchmarks, respectively. We find that simply scaling up the SFT data can significantly enhance the reliability of generating correct answers. However, the potential for extensive scaling is constrained by the scarcity of publicly available math questions. To overcome this limitation, we employ synthetic data, which proves to be nearly as effective as real data and shows no clear saturation when scaled up to approximately one million samples. This straightforward approach achieves an accuracy of 82.6% on GSM8K and 40.6% on MATH using LLaMA-2 7B models, surpassing previous models by 14.2% and 20.8%, respectively. We also provide insights into scaling behaviors across different reasoning complexities and error types.

The resulting model, Xwin-Math, ranks second only to GPT-4 in benchmark performance, showcasing strong generalization capabilities.

Overview

  • The paper demonstrates the inherent mathematical capabilities of the 7B parameter model LLaMA-2 7B, challenging the prevailing belief that only large-scale or math-specific models can achieve meaningful performance in mathematical reasoning.

  • It identifies the model's instability in consistently producing correct answers as the primary issue, and shows marked accuracy improvements on the GSM8K and MATH benchmarks through synthetic data scaling.

  • Through extensive experiments, the authors show that scaling up supervised fine-tuning (SFT) data with high-quality synthetic questions generated by GPT-4 Turbo significantly enhances the model's capabilities.

  • This study suggests a reevaluation of the need for extremely large models in domain-specific tasks and opens new avenues for leveraging synthetic data in various AI research domains.

Enhancing Mathematical Capabilities of 7B Language Models with Synthetic Data Scaling

Introduction

Emergent capabilities in language models, particularly in mathematical reasoning, have traditionally been associated with large-scale models exceeding tens of billions of parameters. Recent studies suggested that meaningful performance on mathematical benchmarks could only be achieved with such massive models, or with models specifically trained on extensive mathematical corpora. This paper challenges that notion by demonstrating the inherent mathematical capabilities of a comparatively small 7B parameter model, LLaMA-2 7B, without resorting to math-centric pre-training. The paper's central insight is that the fundamental issue with existing models is not a lack of capability but instability in consistently generating correct solutions. The authors propose a remedy based on synthetic data, showing that it remarkably enhances performance on two major mathematical benchmarks: GSM8K and MATH.

Understanding Mathematical Capabilities in LLaMA-2 7B

The authors' exploration begins with an analysis of the LLaMA-2 7B model's performance on the GSM8K and MATH benchmarks. They employ two metrics: Pass@N, the probability that at least one of N sampled answers is correct, and PassRatio@N, the fraction of the N sampled answers that are correct. These metrics expose an intriguing gap in the model's behavior: while the model exhibits high potential capability (Pass@256), its low PassRatio@256 shows that correct answers are produced inconsistently. Remarkably, when allowed to choose the best answer from 256 trials, the model's accuracy surpasses that of its contemporaries on GSM8K and is competitive on MATH.
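The paper does not spell out its estimators, but these two metrics can be computed in a standard way: Pass@N via the unbiased combinatorial estimator commonly used for best-of-N evaluation, and PassRatio@N as the plain fraction of correct samples. A minimal sketch, assuming we have already graded `num_correct` of `num_samples` generations per problem:

```python
from math import comb

def pass_at_n(num_samples: int, num_correct: int, n: int) -> float:
    """Unbiased estimate of Pass@N: the probability that at least one of
    n answers drawn (without replacement) from num_samples generations
    is correct."""
    if num_samples - num_correct < n:
        return 1.0  # every size-n draw must include a correct answer
    return 1.0 - comb(num_samples - num_correct, n) / comb(num_samples, n)

def pass_ratio_at_n(num_samples: int, num_correct: int) -> float:
    """PassRatio@N: the fraction of sampled answers that are correct,
    which approximates single-attempt accuracy."""
    return num_correct / num_samples

# Example: 256 generations for one problem, 64 of which are correct.
print(pass_at_n(256, 64, 1))      # 0.25 -- same as PassRatio
print(pass_at_n(256, 64, 256))    # 1.0 -- a correct answer certainly appears
print(pass_ratio_at_n(256, 64))   # 0.25
```

The gap the paper highlights is exactly the difference between `pass_at_n(..., 256)` (potential capability) and `pass_ratio_at_n(...)` (reliability).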

Synthetic Data Scaling to Mitigate Instability

The paper posits that the instability issue can be significantly mitigated by scaling supervised fine-tuning (SFT) data. This assertion is grounded in the observation that increasing SFT data yields linear or even super-linear improvements in accuracy, with no sign of saturation. Given the limited supply of publicly available real math questions, the authors turn to synthetic question generation using GPT-4 Turbo. This approach not only circumvents the scarcity of real questions but also proves nearly as effective, indicating the synthetic data's high quality and relevance.
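The described pipeline (seed with real questions, ask GPT-4 Turbo to write new ones, filter the results) can be sketched roughly as follows. The paper's actual prompts and filtering rules are not reproduced here; the function names, prompt wording, and normalization step are illustrative assumptions, and the actual model call is left out:

```python
def build_generation_prompt(seed_questions: list[str]) -> str:
    """Assemble a few-shot prompt asking a strong model (e.g. GPT-4 Turbo)
    to write a brand-new math word problem in the style of the seeds."""
    examples = "\n".join(f"- {q}" for q in seed_questions)
    return (
        "Here are some grade-school math problems:\n"
        f"{examples}\n"
        "Write one new problem of similar difficulty and style. "
        "Do not copy any of the examples."
    )

def dedupe(questions: list[str]) -> list[str]:
    """Drop exact duplicates under whitespace/case normalization -- a crude
    stand-in for the similarity filtering such pipelines typically apply."""
    seen, kept = set(), []
    for q in questions:
        key = " ".join(q.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(q)
    return kept
```

Each generated question would then be paired with a model-written, verified solution before entering the SFT set; repeating this loop is what lets the data scale toward the million-sample regime without exhausting real questions.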

The authors conduct extensive experiments, scaling SFT data up to approximately one million samples. These experiments illustrate that such scaling directly correlates with marked improvements in the model’s performance, achieving state-of-the-art accuracy on the GSM8K and MATH benchmarks with a 7B model. This outcome firmly establishes that the so-called instability issue can be substantially reduced through the strategic scaling of SFT data.

Implications and Future Directions

This study's implications extend beyond just improving mathematical abilities in language models. It provides a compelling argument against the necessity for extremely large models or specifically pre-trained models to achieve high performance in domain-specific tasks. Instead, it showcases the potential of leveraging synthetic data to uncover and enhance the capabilities of existing models.

Looking forward, the synthetic SFT data scaling approach opens new avenues for research and development across various domains, encouraging a reevaluation of how we perceive and unlock the potential of language models. With synthetic data proving to be a valuable resource for model training, future work might explore its application in other specialized areas beyond mathematics, promising further breakthroughs in AI research and applications.

In conclusion, this paper’s exploration into enhancing the mathematical capabilities of the LLaMA-2 7B model via synthetic data scaling not only challenges existing beliefs about model training and capabilities but also sets a precedent for future research in leveraging synthetic data to maximize the potential of language models across diverse domains.
