ProPD: Dynamic Token Tree Pruning and Generation for LLM Parallel Decoding (2402.13485v1)
Abstract: Recent advancements in generative LLMs have significantly boosted performance on natural language processing tasks. However, their efficiency is hampered by the inherent limitations of autoregressive token generation. While parallel decoding with token tree verification, e.g., Medusa, has been proposed to improve decoding parallelism and efficiency, it often struggles to maintain contextual relationships due to its independent token prediction approach, and it incurs significant verification overhead, especially with large tree sizes and batch processing. In this paper, we propose ProPD, an efficient LLM parallel decoding framework based on dynamic token tree pruning and generation. ProPD features an advanced early pruning mechanism that efficiently eliminates unpromising token sequences to improve verification efficiency. Additionally, it introduces a dynamic token tree generation algorithm that balances the computation and parallelism of the verification phase in real time to maximize overall efficiency across different batch sizes, sequence lengths, and tasks. We evaluate ProPD across a diverse set of datasets, LLMs, and batch sizes and demonstrate that ProPD consistently outperforms existing decoding algorithms by 1.1-3.2x.
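The early-pruning idea sketched in the abstract can be illustrated with a toy example. The following is a minimal sketch, not ProPD's actual algorithm: each node in a candidate token tree carries the draft probability of its token, and any branch whose cumulative probability drops below a threshold is discarded before the expensive verification pass. The `Node`, `prune`, and `count` names and the threshold rule are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of early token-tree pruning (names and the
# threshold rule are assumptions, not ProPD's published algorithm).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    token: str
    prob: float                      # draft model's probability for this token
    children: List["Node"] = field(default_factory=list)


def prune(node: Node, threshold: float, cum: float = 1.0) -> Optional[Node]:
    """Keep only branches whose cumulative probability stays above
    `threshold`; prune the rest before verification."""
    cum *= node.prob
    if cum < threshold:
        return None                  # unpromising branch: skip verification
    kept = [c for c in (prune(ch, threshold, cum) for ch in node.children) if c]
    return Node(node.token, node.prob, kept)


def count(node: Optional[Node]) -> int:
    """Number of tokens that would still be sent to verification."""
    if node is None:
        return 0
    return 1 + sum(count(c) for c in node.children)


# A small candidate tree: "the" -> {"cat" -> {"sat", "ran"}, "dog"}.
tree = Node("the", 1.0, [
    Node("cat", 0.6, [Node("sat", 0.5), Node("ran", 0.1)]),
    Node("dog", 0.05),
])

pruned = prune(tree, threshold=0.2)
# "ran" (cum. 0.06) and "dog" (cum. 0.05) fall below 0.2 and are pruned,
# leaving "the" -> "cat" -> "sat": 3 of the original 5 nodes.
print(count(tree), count(pruned))
```

Shrinking the tree this way reduces the number of token sequences the verification step must score, which is where the paper reports the batch-size-dependent overhead arises.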
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding. arXiv preprint arXiv:2310.05424 (2023).
- Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901.
- Medusa: Simple framework for accelerating LLM generation with multiple decoding heads.
- Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318 (2023).
- Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749 (2019).
- How good are GPT models at machine translation? A comprehensive evaluation. arXiv preprint arXiv:2302.09210 (2023).
- RecycleGPT: An Autoregressive Language Model with Recyclable Module. arXiv:2308.03421 [cs.CL]
- Full stack optimization of transformer inference: a survey. arXiv preprint arXiv:2302.14017 (2023).
- Fast inference from transformers via speculative decoding. In International Conference on Machine Learning. PMLR, 19274–19286.
- SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification. arXiv preprint arXiv:2305.09781 (2023).
- MohamedRashad. 2023. ChatGPT-prompts. https://huggingface.co/datasets/MohamedRashad/ChatGPT-prompts
- Improving language understanding by generative pre-training. (2018).
- Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9.
- Accelerating Transformer Inference for Translation via Parallel Decoding. arXiv preprint arXiv:2305.10427 (2023).
- Benjamin Spector and Chris Re. 2023. Accelerating llm inference with staged speculative decoding. arXiv preprint arXiv:2308.04623 (2023).
- Blockwise parallel decoding for deep autoregressive models. Advances in Neural Information Processing Systems 31 (2018).
- Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
- LLMCad: Fast and Scalable On-device Large Language Model Inference. arXiv preprint arXiv:2309.04255 (2023).
- Conversational question answering: A survey. Knowledge and Information Systems 64, 12 (2022), 3151–3195.
- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv:2306.05685 [cs.CL]
Authors: Shuzhang Zhong, Zebin Yang, Meng Li, Ruihao Gong, Runsheng Wang, Ru Huang