
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge (2312.05693v2)

Published 9 Dec 2023 in cs.LG, cs.AI, and cs.CL

Abstract: Large Language Models (LLMs) stand out for their impressive performance in intricate language modeling tasks. However, their demanding computational and memory needs pose obstacles for broad use on edge devices. Quantization is then introduced to boost LLMs' on-device efficiency. Recent works show that 8-bit or lower weight quantization is feasible with minimal impact on end-to-end task performance, while the activation is still not quantized. On the other hand, mainstream commodity edge devices still struggle to execute these sub-8-bit quantized networks effectively. In this paper, we propose Agile-Quant, an activation-guided quantization framework for popular LLMs, and implement an end-to-end accelerator on multiple edge devices for faster inference. Considering the hardware profiling and activation analysis, we first introduce a basic activation quantization strategy to balance the trade-off of task performance and real inference speed. Then we leverage the activation-aware token pruning technique to reduce the outliers and the adverse impact on attentivity. Ultimately, we utilize the SIMD-based 4-bit multiplier and our efficient TRIP matrix multiplication to implement the accelerator for LLMs on the edge. We apply our framework on different scales of LLMs including LLaMA, OPT, and BLOOM with 4-bit or 8-bit for the activation and 4-bit for the weight quantization. Experiments show that Agile-Quant achieves simultaneous quantization of model weights and activations while maintaining task performance comparable to existing weight-only quantization methods. Moreover, in the 8- and 4-bit scenario, Agile-Quant achieves an on-device speedup of up to 2.55x compared to its FP16 counterparts across multiple edge devices, marking a pioneering advancement in this domain. Code: https://github.com/shawnricecake/agile-quant
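
Illustrative sketch: the abstract describes a pipeline that quantizes weights to 4 bits and activations to 8 (or 4) bits, prunes tokens based on activation statistics, and then runs integer matrix multiplication on edge hardware. The Python/NumPy sketch below is a generic illustration of that flow under assumed simplifications, not the paper's implementation: symmetric per-channel INT4 weights, symmetric per-token INT8 activations, and a crude activation-magnitude token-pruning heuristic. All function names and parameters are hypothetical; this is not Agile-Quant's TRIP kernel or its SIMD-based 4-bit multiplier.

```python
# Minimal W4A8 quantization + token-pruning sketch (illustrative only;
# not Agile-Quant's actual kernels or API).
import numpy as np

def quantize_weights_int4(w: np.ndarray):
    """Symmetric per-output-channel 4-bit weight quantization.

    w: (out_features, in_features) FP32 weight matrix.
    Returns an int8 array holding values in [-8, 7] plus per-channel scales.
    """
    max_abs = np.abs(w).max(axis=1, keepdims=True)        # (out, 1)
    scale = np.where(max_abs == 0, 1.0, max_abs / 7.0)    # avoid div-by-zero
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def quantize_activations_int8(x: np.ndarray):
    """Symmetric per-token 8-bit activation quantization.

    x: (num_tokens, hidden) FP32 activations.
    """
    max_abs = np.abs(x).max(axis=1, keepdims=True)        # (tokens, 1)
    scale = np.where(max_abs == 0, 1.0, max_abs / 127.0)
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def int_matmul_dequant(x_q, x_scale, w_q, w_scale):
    """Integer GEMM followed by rescaling back to FP32."""
    acc = x_q.astype(np.int32) @ w_q.T.astype(np.int32)   # (tokens, out)
    return acc.astype(np.float32) * x_scale * w_scale.T

def prune_tokens_by_activation(x: np.ndarray, keep_ratio: float = 0.8):
    """Keep the tokens with the largest mean absolute activation.

    A crude stand-in for activation-aware token pruning: tokens whose
    activations carry little magnitude are dropped before the quantized matmul.
    """
    scores = np.abs(x).mean(axis=1)
    k = max(1, int(round(keep_ratio * x.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])          # preserve token order
    return x[keep], keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(16, 64)).astype(np.float32)      # 16 tokens
    w = rng.normal(size=(32, 64)).astype(np.float32)      # 32 output dims

    x_kept, kept_idx = prune_tokens_by_activation(x, keep_ratio=0.75)
    w_q, w_s = quantize_weights_int4(w)
    x_q, x_s = quantize_activations_int8(x_kept)

    y_int = int_matmul_dequant(x_q, x_s, w_q, w_s)
    y_fp = x_kept @ w.T
    print("kept tokens:", kept_idx.shape[0])
    print("max abs error vs FP32:", np.abs(y_int - y_fp).max())
```

On a real edge deployment, the packed INT4 weights and INT8 activations would feed SIMD integer kernels rather than NumPy's int32 GEMM; the sketch only checks that the quantize-multiply-dequantize round trip stays close to the FP32 result.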
