Transformers are Multi-State RNNs (2401.06104v2)
Abstract: Transformers are considered conceptually different from the previous generation of state-of-the-art NLP models - recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only transformers can in fact be conceptualized as unbounded multi-state RNNs - an RNN variant with unlimited hidden state size. We further show that transformers can be converted into $\textit{bounded}$ multi-state RNNs by fixing the size of their hidden state, effectively compressing their key-value cache. We introduce a novel, training-free compression policy - $\textbf{T}$oken $\textbf{O}$mission $\textbf{V}$ia $\textbf{A}$ttention (TOVA). Our experiments on four long-range tasks and several LLMs show that TOVA outperforms several baseline compression policies. In particular, our results are nearly on par with those of the full model while using, in some cases, only $\frac{1}{8}$ of the original cache size, which translates to 4.8X higher throughput. Our results shed light on the connection between transformers and RNNs and help mitigate one of LLMs' most painful computational bottlenecks - the size of their key-value cache. We publicly release our code at https://github.com/schwartz-lab-NLP/TOVA
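To make the "bounded multi-state RNN" view concrete, below is a minimal, hedged sketch of a TOVA-style fixed-size KV cache, assuming the policy evicts the cached token that receives the lowest attention weight from the current query once a fixed budget is exceeded. The class name `BoundedKVCache`, the single-head/single-sequence simplification, and the per-step eviction loop are illustrative assumptions, not the authors' implementation (see the linked repository for the official code).

```python
# Hedged sketch of a TOVA-style bounded KV cache (not the authors' code).
# Assumption: when the cache exceeds a fixed budget, the cached token whose
# key receives the lowest attention weight from the current query is evicted.
import torch


class BoundedKVCache:
    """Fixed-size key-value cache with attention-based eviction (sketch)."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self.keys = None    # (cached_len, d_head)
        self.values = None  # (cached_len, d_head)

    def step(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        """Append the new token's (k, v), attend with query q, then evict if over budget."""
        k, v = k.unsqueeze(0), v.unsqueeze(0)
        self.keys = k if self.keys is None else torch.cat([self.keys, k], dim=0)
        self.values = v if self.values is None else torch.cat([self.values, v], dim=0)

        # Standard scaled dot-product attention over the cached states.
        scores = (self.keys @ q) / (q.shape[-1] ** 0.5)   # (cached_len,)
        weights = torch.softmax(scores, dim=-1)
        out = weights @ self.values                        # (d_head,)

        # Eviction: drop the token with the lowest attention weight so the
        # cache never grows beyond max_size (the bounded multi-state RNN view).
        if self.keys.shape[0] > self.max_size:
            drop = int(weights.argmin())
            keep = [i for i in range(self.keys.shape[0]) if i != drop]
            self.keys = self.keys[keep]
            self.values = self.values[keep]
        return out


# Usage: stream 16 toy tokens through a cache capped at 4 states.
if __name__ == "__main__":
    torch.manual_seed(0)
    cache = BoundedKVCache(max_size=4)
    d = 8
    for _ in range(16):
        out = cache.step(torch.randn(d), torch.randn(d), torch.randn(d))
    print("cached states:", cache.keys.shape[0])  # -> 4
```

In this reading, an unbounded cache corresponds to an unbounded multi-state RNN, while capping `max_size` yields the bounded variant; the eviction rule is what distinguishes TOVA from baselines such as keeping only the most recent tokens.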