
TMPQ-DM: Joint Timestep Reduction and Quantization Precision Selection for Efficient Diffusion Models (2404.09532v1)

Published 15 Apr 2024 in cs.CV and cs.LG

Abstract: Diffusion models have emerged as preeminent contenders in the realm of generative models. Distinguished by their distinctive sequential generative processes, characterized by hundreds or even thousands of timesteps, diffusion models progressively reconstruct images from pure Gaussian noise, with each timestep requiring full inference of the entire model. However, the substantial computational demands inherent to these models present challenges for deployment; quantization is therefore widely used to lower bit-widths and reduce storage and compute overheads. Current quantization methodologies focus primarily on model-side optimization and disregard the temporal dimension, such as the length of the timestep sequence, allowing redundant timesteps to continue consuming computational resources and leaving substantial scope for accelerating the generative process. In this paper, we introduce TMPQ-DM, which jointly optimizes timestep reduction and quantization to achieve a superior performance-efficiency trade-off, addressing both temporal and model optimization aspects. For timestep reduction, we devise a non-uniform grouping scheme tailored to the non-uniform nature of the denoising process, thereby mitigating the combinatorial explosion of timestep choices. For quantization, we adopt a fine-grained layer-wise approach that allocates varying bit-widths to different layers based on their respective contributions to the final generative performance, rectifying the performance degradation observed in prior studies. To expedite the evaluation of fine-grained quantization, we further devise a super-network that serves as a precision solver by leveraging shared quantization results. These two design components are seamlessly integrated within our framework, enabling rapid joint exploration of the exponentially large decision space via a gradient-free evolutionary search algorithm.
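The joint search the abstract describes can be illustrated with a minimal sketch: a gradient-free evolutionary search over candidates that pair a per-group timestep budget with per-layer bit-widths. All names, group boundaries, and the fitness function below are illustrative assumptions, not the paper's implementation; in TMPQ-DM the fitness would be generative quality measured through the shared-weight super-network, with a compute-cost penalty.

```python
import random

NUM_GROUPS = 4                 # non-uniform timestep groups (assumed)
GROUP_BUDGETS = [1, 2, 4, 8]   # candidate timesteps kept per group (assumed)
BITS = [2, 4, 8]               # candidate bit-widths per layer (assumed)
NUM_LAYERS = 6                 # toy model depth

def random_candidate():
    """A candidate = timesteps kept in each group + a bit-width per layer."""
    steps = [random.choice(GROUP_BUDGETS) for _ in range(NUM_GROUPS)]
    bits = [random.choice(BITS) for _ in range(NUM_LAYERS)]
    return steps, bits

def fitness(cand):
    """Placeholder score: stands in for generative quality (e.g. FID via the
    super-network) minus a penalty on total compute."""
    steps, bits = cand
    quality = 0.5 * sum(steps) + 0.3 * sum(bits)  # proxy: more compute helps
    cost = 0.02 * sum(steps) * sum(bits)          # proxy: compute overhead
    return quality - cost

def mutate(cand):
    """Mutate either one group's budget or one layer's bit-width."""
    steps, bits = list(cand[0]), list(cand[1])
    if random.random() < 0.5:
        steps[random.randrange(NUM_GROUPS)] = random.choice(GROUP_BUDGETS)
    else:
        bits[random.randrange(NUM_LAYERS)] = random.choice(BITS)
    return steps, bits

def evolve(pop_size=20, generations=30, seed=0):
    """Keep the top half each generation and refill via mutation."""
    random.seed(seed)
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print("timesteps per group:", best[0], "bits per layer:", best[1])
```

Because both the timestep schedule and the bit-width assignment mutate within one candidate, the search explores the joint decision space directly rather than optimizing the two axes separately, which is the trade-off the framework targets.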
