AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution (2404.03296v1)

Published 4 Apr 2024 in cs.CV and eess.IV

Abstract: Although the image super-resolution (SR) problem has achieved unprecedented restoration accuracy with deep neural networks, its applications remain limited by substantial computational costs. Since different input images pose different restoration difficulties, adapting the computational cost to the input image, referred to as adaptive inference, has emerged as a promising way to compress SR networks. In particular, adapting the quantization bit-widths has successfully reduced inference and memory costs without sacrificing accuracy. However, despite the benefits of the resulting adaptive network, existing works rely on time-intensive quantization-aware training with full access to the original training pairs to learn the appropriate bit-allocation policies, which limits their ubiquitous usage. To this end, we introduce the first on-the-fly adaptive quantization framework, which accelerates the processing time from hours to seconds. We formulate the bit-allocation problem with only two bit-mapping modules: one that maps the input image to an image-wise bit adaptation factor, and one that obtains the layer-wise adaptation factors. These bit mappings are calibrated and fine-tuned using only a small number of calibration images. We achieve performance competitive with previous adaptive quantization methods, while accelerating the processing time by 2,000x. Code is available at https://github.com/Cheeun/AdaBM.
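The two-module formulation can be illustrated with a minimal sketch. This is not the authors' implementation: the complexity proxy, thresholds, factor combination, and all names below are illustrative assumptions; in AdaBM these mappings are calibrated and fine-tuned on a small set of calibration images.

```python
# Hypothetical sketch of on-the-fly adaptive bit mapping.
# All names and heuristics here are illustrative, not the paper's actual API.

BIT_CANDIDATES = (4, 6, 8)  # assumed set of supported bit-widths


def image_bit_factor(pixels, low=0.1, high=0.4):
    """Map an image to an image-wise adaptation factor in [0, 1].

    Uses a simple complexity proxy (mean absolute neighbor difference);
    the thresholds `low`/`high` stand in for values obtained from
    calibration images.
    """
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    complexity = sum(diffs) / max(len(diffs), 1)
    t = (complexity - low) / (high - low)
    return min(max(t, 0.0), 1.0)  # clamp to [0, 1]


def layer_bit_widths(img_factor, layer_factors):
    """Combine the image-wise factor with per-layer adaptation factors
    (also assumed calibrated) and snap each layer to a supported bit-width."""
    widths = []
    for lf in layer_factors:
        score = img_factor * lf  # higher score -> harder content -> more bits
        idx = min(int(score * len(BIT_CANDIDATES)), len(BIT_CANDIDATES) - 1)
        widths.append(BIT_CANDIDATES[idx])
    return widths


# Example: a flat patch is mapped to low bit-widths, a textured one to higher.
flat = [0.5] * 64
textured = [(i * 37 % 10) / 10 for i in range(64)]
layer_factors = [1.0, 0.8, 0.5]
print(layer_bit_widths(image_bit_factor(flat), layer_factors))      # [4, 4, 4]
print(layer_bit_widths(image_bit_factor(textured), layer_factors))
```

The point of the sketch is the structure, not the heuristic: once the image-wise and layer-wise mappings are cheap functions of calibrated statistics, bit allocation happens at inference time without any quantization-aware retraining.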

Authors (2)
  1. Cheeun Hong (6 papers)
  2. Kyoung Mu Lee (107 papers)
Citations (2)
