Mamba: Linear-Time Sequence Modeling with Selective State Spaces

(2312.00752)
Published Dec 1, 2023 in cs.LG and cs.AI

Abstract

Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.

Overview

  • The paper introduces Mamba, a neural network architecture that models long sequences efficiently without sacrificing performance.

  • Mamba uses selective state space models (SSMs) that handle information in a context-dependent manner, addressing the limitations of previous SSMs.

  • It avoids attention mechanisms and MLPs, using homogeneous network blocks and a hardware-aware algorithm for efficient computation.

  • Mamba outperforms Transformers in language modeling and offers practical benefits for long sequences with faster inference speed and memory efficiency.

  • Mamba's code and pre-trained checkpoints have been open-sourced, facilitating further research and application in large-scale foundational models.

In the paper titled "Mamba: Linear-Time Sequence Modeling with Selective State Spaces," researchers introduce a new neural network architecture that can efficiently handle long sequences of data without sacrificing performance. This architecture, named Mamba, addresses the computational inefficiency of the widely-used Transformer models, particularly for tasks requiring the processing of lengthy input sequences.

Transformers, despite their success and versatility, are known to scale poorly with sequence length because of the quadratic cost of self-attention. Various subquadratic alternatives have been proposed, but none has matched Transformers' effectiveness on important modalities such as language, largely because of their limited content-based reasoning capabilities.

The key innovation in Mamba is the incorporation of selective state space models (SSMs), which are dynamically parameterized by the input data. By allowing the SSM parameters to be functions of the input, the network can intelligently determine when to propagate or forget information based on the current token's context. This selective process addresses the primary weakness of existing SSMs in dealing with discrete data types, such as text or genomic information.
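
To make the selection mechanism concrete, the sketch below implements a selective SSM in recurrent mode with plain NumPy. It is not the authors' implementation: the random projection weights, the softplus step size, and the simplified discretization are illustrative stand-ins, and the point is only that B, C, and the step size Delta are recomputed from each token before the state update.

```python
import numpy as np

def selective_ssm(x, A, W_B, W_C, W_dt):
    """Recurrent-mode sketch of a selective SSM (single sequence, no batching).

    x    : (L, D)  input sequence of L tokens with D channels
    A    : (D, N)  fixed diagonal state matrix (negative entries for stability)
    W_B  : (D, N)  projection making B a function of the current token
    W_C  : (D, N)  projection making C a function of the current token
    W_dt : (D,)    projection producing the per-token step size Delta
    """
    L, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))                     # hidden state carried along the sequence
    y = np.zeros((L, D))
    for t in range(L):
        xt = x[t]                            # current token, shape (D,)
        # Selection: the SSM parameters depend on the current token
        dt = np.logaddexp(0.0, xt * W_dt)    # softplus keeps Delta positive, (D,)
        B = xt @ W_B                         # (N,)
        C = xt @ W_C                         # (N,)
        # Discretize: A_bar = exp(Delta * A), B_bar ~ Delta * B (simplified)
        A_bar = np.exp(dt[:, None] * A)      # (D, N)
        B_bar = dt[:, None] * B[None, :]     # (D, N)
        # Recurrence: each token chooses what to propagate and what to forget
        h = A_bar * h + B_bar * xt[:, None]
        y[t] = h @ C
    return y

# Toy usage with random, purely illustrative weights
rng = np.random.default_rng(0)
L, D, N = 16, 8, 4
A = -np.exp(rng.standard_normal((D, N)))     # negative entries keep the state stable
y = selective_ssm(rng.standard_normal((L, D)), A,
                  0.1 * rng.standard_normal((D, N)),
                  0.1 * rng.standard_normal((D, N)),
                  0.1 * rng.standard_normal(D))
print(y.shape)  # (16, 8)
```

With fixed, input-independent parameters this recurrence could be unrolled into a convolution; making the parameters per-token is exactly what forces the recurrent formulation that the hardware-aware algorithm described next is designed to accelerate.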

Mamba eschews the standard attention mechanisms and even the multi-layer perceptrons (MLPs) found in Transformer architectures. Instead, it integrates selective SSMs into an end-to-end architecture built from homogeneous network blocks. Because input-dependent parameters rule out the efficient convolutional mode of earlier SSMs, these blocks are computed in recurrent mode with a hardware-aware parallel scan that avoids materializing the expanded state in slow GPU memory.
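
The sketch below gives a rough picture of how one of these homogeneous blocks might be wired in PyTorch. The layer sizes are arbitrary and the selective SSM is left as a placeholder module, whereas the paper fuses the scan into a custom kernel; the structural point is only that each block combines a projection, a short causal convolution, the SSM, and a multiplicative gate, with no attention and no separate MLP sublayer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MambaBlockSketch(nn.Module):
    """Illustrative Mamba-style block: attention-free and without a separate MLP.

    The selective scan is abstracted behind `self.ssm` (a placeholder here);
    in the paper it is a fused, hardware-aware parallel scan.
    """
    def __init__(self, d_model: int, expand: int = 2, d_conv: int = 4):
        super().__init__()
        d_inner = expand * d_model
        self.norm = nn.LayerNorm(d_model)
        self.in_proj = nn.Linear(d_model, 2 * d_inner)        # main path + gate
        self.conv = nn.Conv1d(d_inner, d_inner, d_conv,
                              groups=d_inner, padding=d_conv - 1)
        self.ssm = nn.Identity()                               # stand-in for the selective SSM
        self.out_proj = nn.Linear(d_inner, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (batch, length, d_model)
        residual = x
        u, gate = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        # Depthwise causal convolution over the length dimension
        u = self.conv(u.transpose(1, 2))[..., : x.shape[1]].transpose(1, 2)
        u = self.ssm(F.silu(u))                                # selective state space layer
        y = u * F.silu(gate)                                   # gating takes the place of an MLP
        return self.out_proj(y) + residual

# A Mamba-style model is simply a stack of identical blocks like this one
block = MambaBlockSketch(d_model=64)
out = block(torch.randn(2, 128, 64))
print(out.shape)  # torch.Size([2, 128, 64])
```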

The performance of Mamba is benchmarked against state-of-the-art models across several modalities, including language, audio, and genomics. Notably, on language modeling the Mamba-3B model outperforms Transformers of the same size and matches Transformers with twice its parameter count, in both pretraining and downstream evaluation. Mamba is especially effective on long-sequence data, with performance that keeps improving on real data up to sequences a million elements long.

Moreover, Mamba offers fast inference (roughly 5x higher throughput than Transformers) and is memory efficient, scaling linearly with sequence length rather than caching attention over every past token. This makes Mamba a promising candidate for the backbone of future "foundation models," which are large-scale models pre-trained on extensive data and later fine-tuned for various specific tasks.
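
A back-of-the-envelope calculation illustrates why this matters at generation time: a Transformer caches keys and values for every past token, while a recurrent SSM carries only a fixed-size state per layer. The model dimensions below are hypothetical and chosen only to show the scaling trend; the 5x throughput figure comes from the paper's benchmarks, not from this arithmetic.

```python
# Illustrative memory accounting for autoregressive generation (fp16 = 2 bytes).
# The layer count and widths are hypothetical, chosen only to show the trend.
bytes_per_value = 2
n_layers, d_model, d_state, expand = 48, 2560, 16, 2

def transformer_kv_cache(seq_len):
    # Each layer caches keys and values for every past token.
    return n_layers * 2 * seq_len * d_model * bytes_per_value

def ssm_recurrent_state():
    # Each layer keeps one fixed-size state, independent of sequence length.
    return n_layers * (expand * d_model) * d_state * bytes_per_value

for L in (2_048, 65_536, 1_000_000):
    print(f"L={L:>9,}  KV cache: {transformer_kv_cache(L) / 2**30:8.2f} GiB"
          f"   SSM state: {ssm_recurrent_state() / 2**20:6.2f} MiB")
```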

The researchers have open-sourced Mamba's model code and pre-trained checkpoints, which will enable other researchers and practitioners to utilize and further develop this innovative architecture. Through extensive experimentation, Mamba demonstrates the feasibility of linear-time sequence modeling with remarkable effectiveness, potentially paving the way for its widespread adoption in applications requiring the processing of extended sequential data.
