Abstract

Six-bit quantization (FP6) can effectively reduce the size of LLMs and preserve the model quality consistently across varied applications. However, existing systems do not provide Tensor Core support for FP6 quantization and struggle to achieve practical performance improvements during LLM inference. It is challenging to support FP6 quantization on GPUs due to (1) unfriendly memory access of model weights with irregular bit-width and (2) high runtime overhead of weight de-quantization. To address these problems, we propose TC-FPx, the first full-stack GPU kernel design scheme with unified Tensor Core support of floating-point weights for various quantization bit-widths. We integrate the TC-FPx kernel into an existing inference system, providing new end-to-end support (called FP6-LLM) for quantized LLM inference, where better trade-offs between inference cost and model quality are achieved. Experiments show that FP6-LLM enables the inference of LLaMA-70b using only a single GPU, achieving 1.69x-2.65x higher normalized inference throughput than the FP16 baseline. The source code is publicly available at https://github.com/usyd-fsalab/fp6_llm.

Figure: comparative visualization of dual-kernel and unified-kernel designs for weight-only WxA16 quantization during LLM inference.

Overview

  • LLMs are crucial for natural language tasks but are limited by high memory requirements and computational costs.

  • FP6 offers a better balance between inference cost and model quality than traditional methods using larger data types like FP16.

  • The paper introduces TC-FPx, a full-stack GPU kernel design providing unified support for multiple quantization bit-widths.

  • FP6-LLM, utilizing TC-FPx, significantly improves normalized inference throughput for large models on a single GPU.

  • FP6-LLM's deployment demonstrates an effective algorithm-system co-design that can make LLMs more practical and widely accessible.

Introduction

LLMs have become central to numerous natural language processing tasks thanks to their ability to understand and generate human-like text. Deployment, however, remains a significant challenge, principally due to their extensive memory requirements and computational costs. Conventional methods frequently resort to larger-than-necessary data types such as FP16 for weight representation during inference, exacerbating these challenges. 6-bit quantization (FP6) has been gaining recognition as a promising alternative, given its potential to balance inference cost and model quality.

6-bit Quantization Challenges and FP6-Centric Solution

The paper observes that existing systems lack Tensor Core support for the FP6 data type and therefore struggle to deliver practical performance gains. Two main hurdles are the hardware-unfriendly memory access patterns caused by the irregular bit-width of model weights and the runtime overhead of weight de-quantization. To counter these challenges, the authors propose TC-FPx, a full-stack GPU kernel design scheme and the first to provide unified Tensor Core support across quantization bit-widths. By integrating this kernel into an existing inference system, yielding FP6-LLM, the authors enable end-to-end quantized LLM inference with a better trade-off between inference cost and model quality.
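To make the de-quantization hurdle concrete, the sketch below shows in scalar C++ what reconstructing a single FP6 weight involves. It assumes an e3m2 layout (1 sign bit, 3 exponent bits, 2 mantissa bits, exponent bias 3); the layout choice and the function name are illustrative assumptions, not the paper's kernel code, which performs the equivalent bit manipulations on packed registers inside the GPU kernel.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Illustrative scalar decoder for a 6-bit float, assuming an e3m2 layout
// (1 sign bit, 3 exponent bits, 2 mantissa bits; exponent bias 3).
// The real TC-FPx kernel performs equivalent bit-level reconstruction
// on packed 32-bit registers with SIMT-efficient instructions.
float fp6_e3m2_to_float(uint8_t fp6) {
    uint32_t sign     = (fp6 >> 5) & 0x1;
    uint32_t exponent = (fp6 >> 2) & 0x7;
    uint32_t mantissa =  fp6       & 0x3;

    float value;
    if (exponent == 0) {
        // Subnormal: no implicit leading 1, fixed exponent of 1 - bias.
        value = std::ldexp(mantissa / 4.0f, 1 - 3);
    } else {
        // Normal: implicit leading 1, stored exponent minus bias.
        value = std::ldexp(1.0f + mantissa / 4.0f,
                           static_cast<int>(exponent) - 3);
    }
    return sign ? -value : value;
}

int main() {
    // 0b010101: sign 0, exponent 5, mantissa 1 -> (1 + 0.25) * 2^(5-3) = 5.0
    std::printf("%f\n", fp6_e3m2_to_float(0b010101));
    return 0;
}
```

Performing this reconstruction for every weight of every matrix multiplication is what makes de-quantization overhead significant; the paper's approach reduces this cost by de-quantizing weights with SIMT-efficient operations inside the GPU kernel, just before the Tensor Core computation consumes them.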

FP6-LLM Empirical Advantages

Empirical benchmarks show that FP6-LLM, built on the TC-FPx design, can serve models such as LLaMA-70b on a single GPU, increasing normalized inference throughput by 1.69x-2.65x over the FP16 baseline. These gains stem from design choices such as ahead-of-time bit-level pre-packing and SIMT-efficient runtime de-quantization, which together sidestep the irregular memory accesses of sub-byte weights and reduce the computational overhead of de-quantization during inference.
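As a rough illustration of what ahead-of-time bit-level pre-packing means, the hypothetical host-side helper below concatenates 6-bit weights into dense 32-bit words so that the runtime kernel can issue aligned, word-sized loads instead of unaligned sub-byte accesses. The actual TC-FPx layout additionally reorders weights to match the register layout expected by Tensor Core fragments; that reordering is omitted here.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical ahead-of-time packer: concatenates 6-bit weights into a
// dense stream of 32-bit words. Packing is done once, offline, so the
// inference kernel only ever performs aligned 32-bit (or wider) loads.
std::vector<uint32_t> pack_fp6_weights(const std::vector<uint8_t>& fp6_weights) {
    std::vector<uint32_t> packed((fp6_weights.size() * 6 + 31) / 32, 0u);
    std::size_t bit_pos = 0;
    for (uint8_t w : fp6_weights) {
        std::size_t word   = bit_pos / 32;
        std::size_t offset = bit_pos % 32;
        uint64_t bits = static_cast<uint64_t>(w & 0x3Fu) << offset;
        packed[word] |= static_cast<uint32_t>(bits);
        if (offset > 26) {
            // The 6-bit value straddles a 32-bit word boundary.
            packed[word + 1] |= static_cast<uint32_t>(bits >> 32);
        }
        bit_pos += 6;
    }
    return packed;
}
```

Because packing happens once, before inference, its cost is amortized across all subsequent forward passes, while every runtime load stays word-aligned.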

Conclusion

The paper concludes that FP6-LLM can efficiently support LLM inference through algorithm-system co-design. By extending the GPU kernel infrastructure with FP6 support, it opens the door to wider adoption of quantization strategies. Consequently, FP6-LLM stands as a promising path for deploying large, computationally demanding LLMs more broadly, enhancing their practicality and accessibility.
