
Abstract

Convolutional neural networks prevail in deep learning tasks, but they incur substantial computational cost when deployed on mobile devices. Network pruning is an effective model compression method for addressing this problem. This paper presents a novel structured network pruning method with auxiliary gating structures that assign importance marks to blocks in the backbone network, which serve as the pruning criterion. Block-wise pruning is then realized by the proposed voting strategy, in contrast to prevailing methods that prune a model at a finer granularity, such as channel-wise. We further develop a three-stage training schedule for the proposed architecture that incorporates knowledge distillation for better performance. Our experiments demonstrate that the method achieves state-of-the-art compression performance on classification tasks. In addition, our approach integrates synergistically with other pruning methods by providing pretrained models, achieving better performance than the unpruned model while reducing FLOPs by over 93%.
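The abstract only sketches the idea of attaching auxiliary gates to backbone blocks and reading their trained values as importance marks. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the class name GatedBlock, the sigmoid gate parameterization, and the simple thresholding rule (standing in for the paper's voting strategy) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): each backbone block is
# wrapped with an auxiliary gate whose learned value serves as the block's
# importance mark; low-importance blocks are then removed block-wise.
import torch
import torch.nn as nn


class GatedBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Auxiliary gate: a learnable scalar squashed to (0, 1); its trained
        # value is read off after training as the block's importance mark.
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def importance(self) -> float:
        return torch.sigmoid(self.gate_logit).item()

    def forward(self, x):
        g = torch.sigmoid(self.gate_logit)
        # Residual form: a gate near zero makes the block close to an identity
        # mapping, so dropping it later perturbs the network only slightly.
        return x + g * self.body(x)


def prune_blocks(blocks: nn.ModuleList, threshold: float = 0.5) -> nn.ModuleList:
    # Block-wise pruning: keep only blocks whose importance mark passes the
    # threshold (an assumed stand-in for the paper's voting strategy).
    return nn.ModuleList(b for b in blocks if b.importance() >= threshold)


if __name__ == "__main__":
    blocks = nn.ModuleList(GatedBlock(16) for _ in range(8))
    x = torch.randn(2, 16, 32, 32)
    for b in blocks:
        x = b(x)
    kept = prune_blocks(blocks)
    print(f"kept {len(kept)} of {len(blocks)} blocks")
```

In such a scheme the pruned backbone would typically be fine-tuned afterwards, e.g. with knowledge distillation from the unpruned model as the three-stage schedule in the abstract suggests.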
