OptG: Optimizing Gradient-driven Criteria in Network Sparsity (2201.12826v4)

Published 30 Jan 2022 in cs.CV

Abstract: Network sparsity has gained popularity largely for its ability to reduce network complexity, and extensive studies have explored gradient-driven sparsity. Typically, these methods are built on the premise of weight independence, which contradicts the fact that weights mutually influence one another, so their performance leaves room for improvement. In this paper, we propose to optimize gradient-driven sparsity (OptG) by resolving this independence paradox. Our motivation comes from recent advances in supermask training, which show that high-performing sparse subnetworks can be located simply by updating mask values, without modifying any weight. We prove that supermask training accumulates the gradient-driven sparsity criteria for both removed and preserved weights, and that it can partly resolve the independence paradox. Consequently, OptG integrates supermask training into gradient-driven sparsity, and a novel supermask optimizer is further proposed to mitigate the independence paradox more comprehensively. Experiments show that OptG surpasses many existing state-of-the-art competitors, especially at ultra-high sparsity levels. Our code is available at https://github.com/zyxxmu/OptG.
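
To make the supermask-training idea the abstract builds on more concrete, here is a minimal PyTorch-style sketch (an illustration of the general technique, not the authors' OptG optimizer; see their repository for the actual method). Each frozen weight is paired with a learnable score; the forward pass binarizes the scores into a top-k mask, and a straight-through estimator lets gradients flow to the scores of both removed and preserved weights. All names here (`TopKMask`, `MaskedLinear`, `sparsity`) are illustrative assumptions.

```python
# Sketch of supermask training (illustrative; not the authors' OptG code).
# Weights stay frozen; only per-weight scores are trained. The top-k scores
# form a binary mask, and a straight-through estimator passes gradients to
# the scores of BOTH kept and removed weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMask(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, sparsity):
        # Keep the (1 - sparsity) fraction of weights with the largest scores.
        k = int((1.0 - sparsity) * scores.numel())
        mask = torch.zeros_like(scores)
        _, idx = torch.topk(scores.flatten(), k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through to all
        # scores, including those of currently removed weights.
        return grad_output, None


class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.01,
            requires_grad=False,  # weights are never updated
        )
        # Learnable mask scores, initialized from weight magnitudes.
        self.scores = nn.Parameter(self.weight.abs().clone())
        self.sparsity = sparsity

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.sparsity)
        return F.linear(x, self.weight * mask)
```

Training only `scores` with a standard optimizer searches over subnetworks of the frozen weights. The abstract's claim is that the gradient such training accumulates on each score coincides with the gradient-driven pruning criterion, for removed and preserved weights alike, which is what lets supermask training partly resolve the independence paradox.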

Citations (5)
