gzip Predicts Data-dependent Scaling Laws

(2405.16684)
Published May 26, 2024 in cs.CL and cs.LG

Abstract

Past work has established scaling laws that predict the performance of a neural language model (LM) as a function of its parameter count and the number of tokens it's trained on, enabling optimal allocation of a fixed compute budget. Are these scaling laws agnostic to training data as some prior work suggests? We generate training datasets of varying complexities by modulating the syntactic properties of a PCFG, finding that 1) scaling laws are sensitive to differences in data complexity and that 2) gzip, a compression algorithm, is an effective predictor of how data complexity impacts scaling properties. We propose a new data-dependent scaling law for LMs that accounts for the training data's gzip-compressibility; its compute-optimal frontier increases in dataset size preference (over parameter count preference) as training data becomes harder to compress.

Less compressible training data shifts compute-optimal allocation toward larger datasets; further comparison between code and natural-language scaling is needed.

Overview

  • The paper challenges the prevailing assumption that scaling laws for language models (LMs) are insensitive to the complexity of training data, introducing gzip-compressibility as a metric to quantify data complexity.

  • The researchers use Probabilistic Context-Free Grammars (PCFGs) to generate datasets of varying syntactic complexity, measure each dataset's gzip-compressibility as a proxy for entropy, and fit scaling laws across different model sizes and training steps.

  • Findings reveal that less compressible (more complex) data shifts the optimal compute allocation toward dataset size over model parameter count, and that accounting for this can yield meaningful compute savings when training large LMs.

The Influence of Data Complexity on Scaling Laws for Language Models

Introduction

The research explores the sensitivity of scaling laws for neural language models (LMs) to the complexity of training data. Traditionally, scaling laws have provided a framework for predicting the performance of LMs based on the model's parameter count and the number of training tokens. These laws guide optimal resource allocation within a fixed compute budget. However, the prevalent assumption has been that these scaling laws are largely agnostic to the specifics of the training data distribution. This paper challenges that assumption and introduces gzip-compressibility as a quantifiable metric of data complexity that influences scaling behavior.

Methodology

To probe the relationship between data complexity and scaling laws, the study uses Probabilistic Context-Free Grammars (PCFGs) to generate datasets of varying syntactic complexity. Six distinct datasets are created, each modulated by syntactic properties such as the number of production rules and non-terminals, giving a controlled setting in which data complexity can be adjusted systematically. Complexity is measured with gzip, a compression algorithm, whose compression ratio serves as a proxy for data entropy.
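
To make the setup concrete, the sketch below samples strings from a toy PCFG and reports their gzip compression ratio. The grammar, terminal vocabulary, and sample count here are illustrative stand-ins rather than the paper's actual grammars, which vary the number of non-terminals, terminals, and production rules per non-terminal; lower ratios indicate more redundant, easier-to-compress text.

```python
import gzip
import random

# A toy PCFG: nonterminal -> list of (expansion, probability).
# Hypothetical stand-in for the paper's generated grammars.
GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["det", "noun"], 0.6), (["det", "adj", "noun"], 0.4)],
    "VP": [(["verb", "NP"], 0.7), (["verb"], 0.3)],
}
TERMINALS = {
    "det": ["the", "a"],
    "noun": ["cat", "dog", "bird"],
    "adj": ["small", "grumpy"],
    "verb": ["sees", "chases"],
}

def sample(symbol="S"):
    """Recursively expand a symbol into a list of terminal tokens."""
    if symbol in TERMINALS:
        return [random.choice(TERMINALS[symbol])]
    expansions, probs = zip(*GRAMMAR[symbol])
    chosen = random.choices(expansions, weights=probs, k=1)[0]
    return [tok for child in chosen for tok in sample(child)]

def gzip_compressibility(text: str) -> float:
    """Compressed-to-raw byte ratio; lower means more compressible."""
    raw = text.encode("utf-8")
    return len(gzip.compress(raw)) / len(raw)

corpus = " ".join(" ".join(sample()) for _ in range(2000))
print(f"gzip ratio: {gzip_compressibility(corpus):.3f}")
```

Adding production rules, non-terminals, or terminals to such a grammar pushes the ratio up, which is the knob the paper turns to produce datasets of graded complexity.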

Language models of six different sizes, ranging from 4.4 million to 1.4 billion parameters, are trained on these datasets. Training durations span multiple orders of magnitude in steps, allowing the researchers to fit a scaling law to the results for each dataset and to examine how those laws shift as data complexity increases.
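
The fitting step can be sketched as follows, assuming a Chinchilla-style parametric loss L(N, D) = E + A/N^α + B/D^β is fitted per dataset by least squares; the observations and constants below are synthetic, chosen only so the example runs, and the paper's data-dependent law additionally lets the fitted terms vary with gzip-compressibility.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chinchilla-style parametric loss: L(N, D) = E + A / N^alpha + B / D^beta.
def loss(ND, E, A, alpha, B, beta):
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Hypothetical (params, tokens, final loss) observations for one dataset;
# the "observed" losses are generated from made-up ground-truth constants.
rng = np.random.default_rng(0)
N = np.array([4.4e6, 2.9e7, 1.6e8, 4.1e8, 9.0e8, 1.4e9])
D = np.array([1.0e8, 3.0e8, 1.0e9, 3.0e9, 1.0e10, 3.0e10])
L = loss((N, D), 1.7, 400.0, 0.34, 400.0, 0.28) + rng.normal(0, 0.01, 6)

popt, _ = curve_fit(loss, (N, D), L, p0=[1.5, 300.0, 0.3, 300.0, 0.3],
                    maxfev=20000)
E, A, alpha, B, beta = popt
print(f"fitted: E={E:.2f}, alpha={alpha:.3f}, beta={beta:.3f}")
```

Repeating this fit per PCFG dataset and then relating the fitted terms to each dataset's gzip ratio is, in outline, how a data-dependent law can be obtained.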

Numerical Results

A pivotal finding is that as training data becomes less compressible (i.e., more complex), the compute-optimal frontier shifts to favor dataset size over model parameter count, departing from the roughly equal scaling of parameters and tokens recommended by the Chinchilla scaling laws. Specifically, for high-complexity data, reaching comparable performance calls for a relatively larger dataset and fewer parameters. Tailoring the allocation to the data in this way can yield significant savings; the study estimates that an adjusted compute allocation could save roughly $278,000 worth of GPU hours when training a model like StarCoder.
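
The shift can be illustrated by solving for the compute-optimal split under the common C ≈ 6ND FLOPs approximation. The two parameter sets below are hypothetical stand-ins for laws fitted on an easily compressible versus a hard-to-compress dataset; the values are invented, so only the direction of the change in the tokens-per-parameter ratio is meaningful.

```python
def optimal_split(C, A, a, B, b):
    """Minimise E + A/N^a + B/D^b subject to C = 6*N*D (closed form)."""
    G = (a * A / (b * B)) ** (1.0 / (a + b))
    N_opt = G * (C / 6.0) ** (b / (a + b))
    D_opt = (C / 6.0) / N_opt
    return N_opt, D_opt

C = 1e21  # a fixed FLOPs budget
laws = {
    # (A, a, B, b): a larger exponent 'a' on the parameter term means returns
    # from adding parameters diminish faster at scale, pushing the optimum
    # toward tokens. These constants are illustrative, not fitted values.
    "more compressible data": (400.0, 0.32, 400.0, 0.34),
    "less compressible data": (400.0, 0.34, 400.0, 0.32),
}
for label, (A, a, B, b) in laws.items():
    N, D = optimal_split(C, A, a, B, b)
    print(f"{label}: N*={N:.2e} params, D*={D:.2e} tokens, "
          f"tokens/param={D/N:.2f}")
```

Under these toy constants the harder-to-compress dataset ends up with several times more tokens per parameter at the same budget, mirroring the qualitative shift the paper reports.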

Theoretical and Practical Implications

The theoretical implications point toward a broader principle where data complexity, quantified through gzip-compressibility, should inform scaling laws. This raises critical questions about the underlying mechanisms that cause different scaling behaviors across datasets of varying complexity. Practically, these findings are highly consequential for domains such as code generation, where datasets exhibit significantly different compressibility profiles compared to natural language.

Code, for instance, is generally more gzip-compressible than natural language, so the adjusted law prescribes a different allocation: relatively more parameters and fewer tokens. This insight has practical ramifications for resource allocation when training LMs, making the training of large models more efficient and cost-effective.
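
A crude way to check this intuition on one's own corpora is a per-file gzip ratio, as in the hypothetical script below (not from the paper); on realistic inputs, source files tend to come out lower (more compressible) than comparable amounts of prose, though the exact numbers depend heavily on the corpus.

```python
import gzip
import sys
from pathlib import Path

def gzip_ratio(path: str) -> float:
    """Compressed-to-raw size ratio of a file; lower means more compressible."""
    raw = Path(path).read_bytes()
    return len(gzip.compress(raw)) / len(raw)

if __name__ == "__main__":
    # e.g.  python gzip_ratio.py some_module.py some_chapter.txt
    for p in sys.argv[1:]:
        print(f"{p}: {gzip_ratio(p):.3f}")
```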

Future Directions

Future work could delve into theoretical explanations for why scaling laws are sensitive to data complexity. Investigations could leverage linguistics and information theory to demystify the relationship between entropy, syntactic complexity, and LM performance. Empirical validation on real-world datasets beyond PCFGs would further bolster the generalizability of these findings. Additionally, extending this approach to other modalities, such as images or audio, could reveal if and how these principles apply more broadly across AI domains.

Exploring alternative complexity metrics or compression algorithms might yield even finer adjustments to scaling laws for more effective training regimen design. Real-world applications could involve dynamic compute resource allocation based on real-time data complexity assessments, enhancing both training efficiency and model performance.
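
One low-cost starting point, assuming the standard-library compressors are acceptable stand-ins for alternative complexity metrics, is to compare how gzip, zlib, bz2, and lzma rank the same corpora; if their rankings disagree across datasets, the choice of compressor plausibly matters for how the scaling law should be adjusted.

```python
import bz2
import gzip
import lzma
import sys
import zlib
from pathlib import Path

# Several off-the-shelf compressors applied to the same file; each ratio is
# compressed size divided by raw size (lower = more compressible).
COMPRESSORS = {
    "gzip": gzip.compress,
    "zlib": zlib.compress,
    "bz2": bz2.compress,
    "lzma": lzma.compress,
}

def ratios(path: str) -> dict:
    raw = Path(path).read_bytes()
    return {name: round(len(fn(raw)) / len(raw), 3)
            for name, fn in COMPRESSORS.items()}

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, ratios(p))
```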

Conclusion

The paper makes a significant contribution by demonstrating that the prevailing assumption of data-agnostic scaling laws is insufficient. It positions gzip-compressibility as a practical measure for adjusting these laws to data complexity, potentially leading to more optimal and cost-effective training regimes, particularly in specialized contexts like code generation. This shift urges the AI research community to reconsider how it designs and allocates compute resources for training large-scale language models.
