
Scaling Laws for Linear Complexity Language Models (2406.16690v1)

Published 24 Jun 2024 in cs.CL

Abstract: Interest in linear complexity models for LLMs is on the rise, although their scaling capacity remains uncertain. In this study, we present scaling laws for linear complexity LLMs to establish a foundation for their scalability. Specifically, we examine the scaling behaviors of three efficient linear architectures: TNL, a linear attention model with data-independent decay; HGRN2, a linear RNN with data-dependent decay; and cosFormer2, a linear attention model without decay. We also include LLaMA as a softmax-attention baseline for comparison. Each model was trained in six sizes, ranging from 70M to 7B parameters, on a 300B-token corpus, and evaluated with a total of 1,376 intermediate checkpoints on validation loss and various downstream tasks, including commonsense reasoning and information retrieval and generation. The study reveals that existing linear complexity LLMs exhibit scaling capabilities similar to those of conventional transformer-based models, while also demonstrating superior linguistic proficiency and knowledge retention.
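The three linear architectures studied here share a recurrent formulation of linear attention and differ mainly in how they decay the running state. The sketch below is a minimal illustration of that distinction, not the authors' implementations: the function names and the specific decay choices (fixed scalar, sigmoid gate, no decay) are illustrative assumptions standing in for the TNL-, HGRN2-, and cosFormer2-style variants described in the abstract.

```python
# Minimal sketch (not the paper's code) of the recurrent view of linear attention:
#   S_t = decay_t * S_{t-1} + k_t v_t^T,   o_t = q_t^T S_t
# TNL-style: decay is a fixed, data-independent value.
# HGRN2-style: decay is computed from the input (data-dependent).
# cosFormer2-style: no decay (decay = 1).
import torch

def linear_attention_step(q_t, k_t, v_t, S, decay):
    """One recurrent step. q_t, k_t: (d_k,), v_t: (d_v,), S: (d_k, d_v)."""
    S = decay * S + torch.outer(k_t, v_t)  # decayed update of the key-value state
    o_t = q_t @ S                          # read out with the current query
    return o_t, S

def run_sequence(q, k, v, decay_fn):
    """q, k: (T, d_k), v: (T, d_v); decay_fn maps k_t to a decay factor."""
    d_k, d_v = k.shape[1], v.shape[1]
    S = torch.zeros(d_k, d_v)
    outputs = []
    for t in range(q.shape[0]):
        o_t, S = linear_attention_step(q[t], k[t], v[t], S, decay_fn(k[t]))
        outputs.append(o_t)
    return torch.stack(outputs)

# Hypothetical decay choices standing in for the three families:
data_independent = lambda k_t: 0.95                     # TNL-like fixed decay
data_dependent   = lambda k_t: torch.sigmoid(k_t).mean()  # HGRN2-like gated decay
no_decay         = lambda k_t: 1.0                      # cosFormer2-like, no decay
```

Because the state S has fixed size (d_k x d_v) regardless of sequence length, each step costs O(1), which is the source of the linear complexity contrasted with softmax attention in the baseline LLaMA architecture.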

Authors (6)
  1. Xuyang Shen (23 papers)
  2. Dong Li (429 papers)
  3. Ruitao Leng (4 papers)
  4. Zhen Qin (105 papers)
  5. Weigao Sun (19 papers)
  6. Yiran Zhong (75 papers)
Citations (1)
