
One Pass Streaming Algorithm for Super Long Token Attention Approximation in Sublinear Space

(2311.14652)
Published Nov 24, 2023 in cs.LG, cs.CL, and stat.ML

Abstract

Attention computation takes both $O(n^2)$ time and $O(n^2)$ space simultaneously, which makes deploying LLMs in streaming applications that involve long contexts require substantial computational resources. At its recent DevDay (Nov 6, 2023), OpenAI released a new model that supports 128K-long documents; in this paper, we focus on the memory-efficiency issue when the context length $n$ is much greater than 128K ($n \gg 2^d$). Considering a single-layer self-attention with Query, Key, and Value matrices $Q, K, V \in \mathbb{R}^{n \times d}$, the polynomial method approximates the attention output $T \in \mathbb{R}^{n \times d}$. It accomplishes this by constructing $U_1, U_2 \in \mathbb{R}^{n \times t}$ to expedite the computation of attention ${\sf Attn}(Q, K, V)$ within $n^{1+o(1)}$ time. Despite this, computing the approximated attention matrix $U_1 U_2^\top \in \mathbb{R}^{n \times n}$ still requires $O(n^2)$ space, leading to significant memory usage. In response to these challenges, we introduce a new algorithm that reads the data in only one pass, in a streaming fashion. This method uses sublinear space $o(n)$ to store three sketch matrices, alleviating the need to store $K$ and $V$ exactly. Notably, our algorithm exhibits exceptional memory-efficient performance on super-long token sequences: as the token length $n$ increases, our error guarantee diminishes while the memory usage remains nearly constant. This unique attribute underscores the potential of our technique for efficiently handling LLMs in streaming applications.
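To make the streaming, sublinear-space idea concrete, below is a minimal illustrative sketch, not the paper's actual algorithm or its error analysis: it shows how one pass over the (key, value) token stream can maintain small random-projection sketches in $O(md)$ space (with sketch size $m$ fixed, independent of the stream length $n$) instead of storing $K, V \in \mathbb{R}^{n \times d}$ exactly. All names (`StreamingKVSketch`, `sketch_dim`, the Gaussian sketch, and the linear-attention-style read-out) are hypothetical choices made for this example.

```python
# A toy one-pass streaming sketch of the key/value stream (illustrative only;
# the paper's sketches, read-out, and guarantees differ).
import numpy as np

class StreamingKVSketch:
    def __init__(self, d, sketch_dim, seed=0):
        self.d = d
        self.m = sketch_dim
        self.rng = np.random.default_rng(seed)
        # Sketch accumulators: O(m*d) space, independent of stream length n.
        self.K_sketch = np.zeros((self.m, d))
        self.V_sketch = np.zeros((self.m, d))

    def update(self, k_i, v_i):
        # One streaming update per token: draw a random projection column s_i
        # (Gaussian here for simplicity), fold the new key/value into the
        # sketches, then discard the exact k_i, v_i.
        s_i = self.rng.normal(size=self.m) / np.sqrt(self.m)
        self.K_sketch += np.outer(s_i, k_i)   # accumulates S @ K
        self.V_sketch += np.outer(s_i, v_i)   # accumulates S @ V

    def approx_kv_readout(self, q):
        # Unnormalized linear-attention-style estimate of q^T K^T V, computed
        # from the sketches alone: (S K q)^T (S V) ~= q^T K^T V in expectation.
        return (self.K_sketch @ q) @ self.V_sketch  # shape (d,)

# Usage: stream n tokens in a single pass; memory stays fixed at O(m*d).
d, n, m = 64, 10_000, 256
rng = np.random.default_rng(1)
sketch = StreamingKVSketch(d, sketch_dim=m)
for _ in range(n):
    sketch.update(rng.normal(size=d), rng.normal(size=d))
out = sketch.approx_kv_readout(rng.normal(size=d))
print(out.shape)  # (64,)
```

The point of the example is only the space behavior: the per-token update touches nothing of size $n$, so the memory footprint is governed by the sketch dimension $m$ rather than the token length, which is the regime the abstract targets.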
