Block Pruning for Enhanced Efficiency in Convolutional Neural Networks (2312.16904v2)

Published 28 Dec 2023 in cs.CV

Abstract: This paper presents a novel approach to network pruning, targeting block pruning in deep neural networks for edge computing environments. Our method diverges from traditional techniques that utilize proxy metrics, instead employing a direct block removal strategy to assess the impact on classification accuracy. This direct approach allows for an accurate evaluation of each block's importance. We conducted extensive experiments on CIFAR-10, CIFAR-100, and ImageNet datasets using ResNet architectures. Our results demonstrate the efficacy of our method, particularly on large-scale datasets like ImageNet with ResNet50, where it excelled in reducing model size while retaining high accuracy, even when pruning a significant portion of the network. The findings underscore our method's capability in maintaining an optimal balance between model size and performance, especially in resource-constrained edge computing scenarios.
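
The direct block-removal strategy the abstract describes can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the paper's implementation: it assumes PyTorch and torchvision, replaces each shape-preserving ResNet Bottleneck block with nn.Identity(), and records the resulting drop in validation accuracy as that block's importance score. The names evaluate and score_blocks, and the identity-substitution detail, are illustrative assumptions.

# Hypothetical sketch of direct block-removal scoring (not the authors' code).
# Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
import torchvision


@torch.no_grad()
def evaluate(model: nn.Module, loader, device: str = "cpu") -> float:
    """Top-1 accuracy of `model` on the validation `loader`."""
    model.to(device).eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total


def score_blocks(model: nn.Module, loader, device: str = "cpu") -> dict:
    """Score each residual block by the accuracy lost when it is removed.

    Only blocks without a downsample branch preserve the tensor shape,
    so only those can be swapped for nn.Identity() directly.
    """
    baseline = evaluate(model, loader, device)
    candidates = [
        (name, m) for name, m in model.named_modules()
        if isinstance(m, torchvision.models.resnet.Bottleneck)
        and m.downsample is None
    ]
    scores = {}
    for name, block in candidates:
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name)
        setattr(parent, child_name, nn.Identity())   # remove the block
        scores[name] = baseline - evaluate(model, loader, device)
        setattr(parent, child_name, block)           # restore the block
    return scores  # smaller accuracy drop => better pruning candidate

Under these assumptions, one would call score_blocks on a pretrained ResNet50 (e.g. torchvision.models.resnet50(weights="IMAGENET1K_V1")) with an ImageNet validation loader, prune the blocks with the smallest accuracy drops, and fine-tune the shortened network to recover any lost accuracy.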
