Learning to Skip for Language Modeling (2311.15436v1)

Published 26 Nov 2023 in cs.CL

Abstract: Overparameterized large-scale LLMs show impressive generalization performance in in-context few-shot learning. However, most LLMs allocate the same amount of parameters or computation to each token, disregarding the complexity or importance of the input data. We argue that in LLM pretraining, a variable amount of computation should be assigned to different tokens, and this can be achieved efficiently via a simple routing mechanism. Unlike conventional early-stopping techniques, where tokens can exit only at early layers, we propose a more general method that dynamically skips the execution of a layer (or module) for any input token using a binary router. In an extensive evaluation across 24 NLP tasks, we demonstrate that the proposed method significantly improves 1-shot performance over competitive baselines at only a mild extra inference cost.
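The abstract describes a per-token binary router that decides whether each token passes through a given layer or skips it. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' implementation: the module, parameter names, and the straight-through gating trick are assumptions chosen to make the example self-contained and trainable.

```python
# Minimal sketch (assumed, not the paper's code) of per-token layer skipping:
# a lightweight binary router decides, for each token, whether the wrapped
# layer runs or the token is passed through unchanged.
import torch
import torch.nn as nn


class SkipRoutedLayer(nn.Module):
    def __init__(self, d_model: int, inner: nn.Module):
        super().__init__()
        self.inner = inner                   # the layer (or module) that may be skipped
        self.router = nn.Linear(d_model, 1)  # per-token binary routing logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        logits = self.router(x).squeeze(-1)  # (batch, seq_len)
        if self.training:
            # straight-through estimator: hard 0/1 decision in the forward pass,
            # sigmoid gradient in the backward pass, so the router stays trainable
            probs = torch.sigmoid(logits)
            hard = (probs > 0.5).float()
            gate = hard + probs - probs.detach()
        else:
            gate = (logits > 0).float()
        gate = gate.unsqueeze(-1)            # (batch, seq_len, 1)
        # tokens with gate == 1 run the layer; the rest are copied through
        return gate * self.inner(x) + (1.0 - gate) * x


# usage example with a feed-forward block standing in for a transformer sublayer
if __name__ == "__main__":
    d_model = 64
    ffn = nn.Sequential(nn.Linear(d_model, 256), nn.GELU(), nn.Linear(256, d_model))
    layer = SkipRoutedLayer(d_model, ffn)
    tokens = torch.randn(2, 10, d_model)
    print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```

Note that the masked form above still evaluates the layer for all tokens; a deployment aiming for real inference savings would gather only the routed tokens before calling the layer and scatter the results back.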

Citations (7)
