Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute (2301.10448v2)
Abstract: Retrieval-augmented LLMs such as Fusion-in-Decoder are powerful, setting the state of the art on a variety of knowledge-intensive tasks. However, they are also expensive, due to the need to encode a large number of retrieved passages. Some work avoids this cost by pre-encoding a text corpus into a memory and retrieving dense representations directly. However, pre-encoding memory incurs a severe quality penalty as the memory representations are not conditioned on the current input. We propose LUMEN, a hybrid between these two extremes, pre-computing the majority of the retrieval representation and completing the encoding on the fly using a live encoder that is conditioned on the question and fine-tuned for the task. We show that LUMEN significantly outperforms pure memory on multiple question-answering tasks while being much cheaper than FiD, and outperforms both for any given compute budget. Moreover, the advantage of LUMEN over FiD increases with model size.
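The compute split the abstract describes can be illustrated with a minimal sketch, not the authors' implementation: a large frozen "memory" encoder runs once per corpus passage offline, and a small task-fine-tuned "live" encoder conditions each cached passage memory on the question at inference time, FiD-style. All module sizes, names, and the exact question/memory concatenation below are illustrative assumptions.

```python
# A minimal, illustrative sketch of the LUMEN-style compute split using
# toy PyTorch transformer encoders. Sizes, names, and the concatenation
# scheme are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

d_model, n_heads, passage_len = 64, 4, 32

def make_encoder(n_layers: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

# Large memory encoder: frozen, applied ONCE per passage, offline.
memory_encoder = make_encoder(n_layers=4).eval()
# Small live encoder: fine-tuned for the task, applied per question online.
live_encoder = make_encoder(n_layers=1)

corpus = torch.randn(100, passage_len, d_model)  # stand-in token embeddings

# Offline phase: pre-compute memory representations for the whole corpus.
with torch.no_grad():
    memory = memory_encoder(corpus)  # (100, passage_len, d_model)

def encode_on_the_fly(question: torch.Tensor, retrieved: list[int]) -> torch.Tensor:
    """Online phase: condition cached passage memories on the question.

    question: (q_len, d_model) token embeddings; retrieved: passage indices.
    """
    per_passage = []
    for i in retrieved:
        # Prepend question tokens to the cached memory and run only the
        # cheap live encoder -- the expensive layers were paid for offline.
        joint = torch.cat([question, memory[i]], dim=0).unsqueeze(0)
        per_passage.append(live_encoder(joint))
    # A FiD-style decoder (omitted here) would attend over the concatenation.
    return torch.cat(per_passage, dim=1)

q = torch.randn(8, d_model)  # toy question embedding
fused = encode_on_the_fly(q, retrieved=[3, 14, 59])
print(fused.shape)  # torch.Size([1, 120, 64]): 3 passages of length 8 + 32
```

The design point the sketch captures: most encoder FLOPs move into the one-time offline pass, while the live encoder stays small enough that per-question cost approaches pure-memory retrieval, yet the final representation is still conditioned on the question.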
- Michiel de Jong
- Yury Zemlyanskiy
- Nicholas FitzGerald
- Joshua Ainslie
- Sumit Sanghai
- Fei Sha
- William Cohen