Investigating Channel Pruning through Structural Redundancy Reduction -- A Statistical Study (1905.06498v3)

Published 16 May 2019 in cs.CV

Abstract: Most existing channel pruning methods formulate the pruning task from a perspective of inefficiency reduction, either iteratively ranking and removing the least important filters or finding the set of filters that minimizes some reconstruction error after pruning. In this work, we investigate channel pruning from a new perspective with statistical modeling. We hypothesize that the number of filters at a certain layer reflects the level of 'redundancy' in that layer and thus formulate the pruning problem from the aspect of redundancy reduction. Based on both theoretical analysis and empirical studies, we make an important discovery: randomly pruning filters from layers of high redundancy outperforms pruning the least important filters across all layers based on the state-of-the-art ranking criterion. These results advance our understanding of pruning and further corroborate the recent finding that the structure of the pruned model plays a key role in network efficiency as compared to the inherited weights.
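The contrast drawn in the abstract can be made concrete with a small sketch. The snippet below compares two ways of choosing which filters to remove: a ranking-based plan that scores every filter globally, and a plan that prunes filters at random but only from the layers with the most filters. Note the L1-norm ranking criterion and the use of per-layer filter count as a stand-in for "redundancy" are illustrative assumptions, not the paper's exact statistical model; the function names are hypothetical.

```python
# Sketch of two channel-pruning plans, under the assumptions stated above.
import torch
import torch.nn as nn


def global_l1_pruning_plan(convs, n_prune):
    """Rank all filters across layers by L1 norm; mark the smallest for removal."""
    scores = []  # (layer_idx, filter_idx, l1_norm)
    for li, conv in enumerate(convs):
        # Per-filter L1 norm over (in_channels, kH, kW).
        norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        scores += [(li, fi, float(n)) for fi, n in enumerate(norms)]
    scores.sort(key=lambda t: t[2])
    return [(li, fi) for li, fi, _ in scores[:n_prune]]


def random_high_redundancy_plan(convs, n_prune, seed=0):
    """Randomly pick filters to prune, drawing only from the layers with the
    most filters (filter count used here as a crude proxy for redundancy)."""
    g = torch.Generator().manual_seed(seed)
    order = sorted(range(len(convs)), key=lambda li: convs[li].out_channels, reverse=True)
    plan, remaining = [], n_prune
    for li in order:
        if remaining == 0:
            break
        # Cap removal so no layer loses more than half its filters.
        k = min(remaining, convs[li].out_channels // 2)
        picks = torch.randperm(convs[li].out_channels, generator=g)[:k]
        plan += [(li, int(fi)) for fi in picks]
        remaining -= k
    return plan


if __name__ == "__main__":
    convs = [nn.Conv2d(3, 64, 3), nn.Conv2d(64, 256, 3), nn.Conv2d(256, 128, 3)]
    print("ranking-based plan:       ", global_l1_pruning_plan(convs, 8))
    print("random/high-redundancy plan:", random_high_redundancy_plan(convs, 8))
```

The paper's reported finding is that the second kind of plan, which targets high-redundancy layers without scoring individual filters, can outperform the first; how redundancy is actually estimated is part of the statistical study itself.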

Authors (5)
  1. Chengcheng Li (13 papers)
  2. Zi Wang (120 papers)
  3. Dali Wang (12 papers)
  4. Xiangyang Wang (10 papers)
  5. Hairong Qi (41 papers)

Summary

We haven't generated a summary for this paper yet.