LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models (2409.00509v2)
Abstract: LLMs face significant challenges in handling long-context tasks because of their limited effective context window size during pretraining, which restricts their ability to generalize over extended sequences. Meanwhile, extending the context window of LLMs through post-pretraining is highly resource-intensive. To address this, we introduce LongRecipe, an efficient training strategy for extending the context window of LLMs that includes impactful token analysis, position index transformation, and training optimization strategies. It simulates long-sequence inputs while maintaining training efficiency and significantly improves the model's understanding of long-range dependencies. Experiments on three types of LLMs show that LongRecipe can utilize long sequences while requiring only 30% of the target context window size, and reduces computational training resources by over 85% compared to full-sequence training. Furthermore, LongRecipe preserves the original LLM's capabilities on general tasks. Ultimately, we can extend the effective context window of open-source LLMs from 8k to 128k, achieving performance close to GPT-4 with just one day of dedicated training on a single GPU with 80 GB of memory. Our code is released at https://github.com/zhiyuanhubj/LongRecipe.
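To make the abstract's notion of position index transformation concrete, the sketch below is a minimal, hypothetical illustration of the general idea of simulating a long context with a short input: a short training sequence is assigned non-contiguous position ids that span the full target window, so the model is exposed to long-range relative distances without processing a full-length sequence. The function name `transform_position_ids` and all parameters here are illustrative assumptions and are not taken from the LongRecipe codebase.

```python
import torch

def transform_position_ids(seq_len: int, target_window: int, num_chunks: int = 2,
                           generator: torch.Generator | None = None) -> torch.Tensor:
    """Illustrative sketch (not the authors' implementation): map a short
    training sequence of `seq_len` tokens onto position indices spanning a
    larger `target_window`. The sequence is split into contiguous chunks;
    each chunk keeps consecutive positions internally, while random gaps push
    later chunks toward the far end of the target window."""
    assert seq_len <= target_window
    # Split the short sequence into roughly equal contiguous chunks.
    chunk_sizes = [seq_len // num_chunks] * num_chunks
    chunk_sizes[-1] += seq_len - sum(chunk_sizes)

    # Total slack that can be distributed as gaps before each chunk.
    slack = target_window - seq_len
    cuts = torch.sort(torch.randint(0, slack + 1, (num_chunks,), generator=generator)).values
    gaps = torch.diff(cuts, prepend=torch.zeros(1, dtype=cuts.dtype))

    position_ids, start = [], 0
    for size, gap in zip(chunk_sizes, gaps.tolist()):
        start += gap                                   # skip unused positions
        position_ids.append(torch.arange(start, start + size))
        start += size
    return torch.cat(position_ids)                     # shape (seq_len,), values in [0, target_window)


# Example: train on 2,048 tokens while simulating an 8,192-token window.
pos = transform_position_ids(seq_len=2048, target_window=8192)
print(pos.shape, int(pos.max()))
```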
- Zhiyuan Hu (30 papers)
- Yuliang Liu (82 papers)
- Jinman Zhao (20 papers)
- Suyuchen Wang (16 papers)
- Yan Wang (734 papers)
- Wei Shen (181 papers)
- Qing Gu (44 papers)
- Anh Tuan Luu (69 papers)
- See-Kiong Ng (103 papers)
- Zhiwei Jiang (24 papers)
- Bryan Hooi (159 papers)