
Small Scale Data-Free Knowledge Distillation (2406.07876v1)

Published 12 Jun 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Data-free knowledge distillation is able to utilize the knowledge learned by a large teacher network to augment the training of a smaller student network without accessing the original training data, avoiding privacy, security, and proprietary risks in real applications. In this line of research, existing methods typically follow an inversion-and-distillation paradigm in which a generative adversarial network on-the-fly trained with the guidance of the pre-trained teacher network is used to synthesize a large-scale sample set for knowledge distillation. In this paper, we reexamine this common data-free knowledge distillation paradigm, showing that there is considerable room to improve the overall training efficiency through a lens of "small-scale inverted data for knowledge distillation". In light of three empirical observations indicating the importance of how to balance class distributions in terms of synthetic sample diversity and difficulty during both data inversion and distillation processes, we propose Small Scale Data-free Knowledge Distillation SSD-KD. In formulation, SSD-KD introduces a modulating function to balance synthetic samples and a priority sampling function to select proper samples, facilitated by a dynamic replay buffer and a reinforcement learning strategy. As a result, SSD-KD can perform distillation training conditioned on an extremely small scale of synthetic samples (e.g., 10X less than the original training data scale), making the overall training efficiency one or two orders of magnitude faster than many mainstream methods while retaining superior or competitive model performance, as demonstrated on popular image classification and semantic segmentation benchmarks. The code is available at https://github.com/OSVAI/SSD-KD.

Citations (4)

Summary

  • The paper introduces a novel SSD-KD framework that leverages small-scale synthetic samples for efficient, data-free knowledge distillation.
  • It balances sample diversity and difficulty via a modulating function and dynamic replay buffer guided by reinforcement learning.
  • SSD-KD achieves significant efficiency gains, with up to 3.92× faster training on CIFAR-10 and notable accuracy improvements on CIFAR-100 and NYUv2.

Small Scale Data-Free Knowledge Distillation

Introduction

The paper "Small Scale Data-Free Knowledge Distillation" (2406.07876) introduces an advanced method for efficient data-free knowledge distillation (D-KD). D-KD is crucial for deploying neural networks on resource-constrained devices without accessing the original training dataset, thus avoiding privacy and security concerns. Traditional D-KD methods synthesize large-scale samples using a generative adversarial network, guided by a pre-trained large teacher network. This paper proposes a paradigm shift by employing a strategy that leverages small-scale synthetic samples to accelerate training while maintaining high model performance. Figure 1

Figure 1: Comparison of knowledge distillation (KD) using original samples vs. synthetic samples, under the same training data scale: 5000 samples (10% of the CIFAR-10 training dataset size).
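
To make the inversion-and-distillation paradigm concrete, below is a minimal PyTorch-style sketch of a generic data-free KD loop, not the authors' SSD-KD implementation: a generator is trained on the fly against a frozen teacher, and the student is then distilled on the resulting synthetic batch. The `ToyGenerator` architecture and the inversion objective shown here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Illustrative generator mapping noise vectors to 32x32 RGB images."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def inversion_step(generator, teacher, opt_g, batch_size=64, z_dim=100):
    """Train the generator so the frozen teacher is confident on its samples."""
    z = torch.randn(batch_size, z_dim)
    x_syn = generator(z)
    logits_t = teacher(x_syn)
    # One common inversion objective: cross-entropy against the teacher's own
    # predictions, pushing synthetic images toward high-confidence regions.
    loss_g = F.cross_entropy(logits_t, logits_t.argmax(dim=1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return x_syn.detach()

def distillation_step(student, teacher, x_syn, opt_s, temperature=4.0):
    """Distill teacher predictions into the student on a synthetic batch."""
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x_syn) / temperature, dim=1)
    log_p_student = F.log_softmax(student(x_syn) / temperature, dim=1)
    loss_s = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    return loss_s.item()
```

In this generic paradigm the two steps alternate over many iterations, and most existing methods accumulate a large pool of synthetic samples; SSD-KD's contribution is to keep that pool very small and carefully balanced, as described next.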

Methodology

Small Scale Data-Free Knowledge Distillation (SSD-KD)

SSD-KD introduces a novel framework that focuses on using a significantly reduced dataset scale for synthetic samples. This approach is rooted in three key empirical observations:

  1. Synthetic Sample Superiority: Synthetic samples can perform better than original samples under a reduced data scale since they inherently reflect alternative views of the distribution learned by the teacher network.
  2. Balanced Class Distributions: Efficient D-KD requires balancing class distributions in terms of both diversity and difficulty of synthetic samples.
  3. Dynamic Replay Buffer: A dynamic buffer stores synthetic samples, regulated through reinforcement learning to prioritize and optimize sample selection.

SSD-KD utilizes a modulating function that adjusts the priority of each synthetic sample, facilitating more efficient training. This function combines a diversity-aware term and a difficulty-aware term, enabling a better balance during both the data inversion and distillation processes and thereby reducing redundancy and computational cost (Figure 2). A minimal sketch of this priority sampling scheme is given after the figure.

Figure 2: Difficulty distribution.
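
As a hedged illustration of how the dynamic replay buffer and priority sampling could fit together, the sketch below assigns each synthetic sample a priority from a modulating function that combines a diversity-aware term (down-weighting over-represented classes) and a difficulty-aware term (teacher prediction entropy). The specific functional forms, the mixing weight `alpha`, and the class bookkeeping are assumptions for illustration rather than the paper's exact formulation, and the reinforcement learning component is omitted.

```python
import torch
import torch.nn.functional as F

class PriorityReplayBuffer:
    """Illustrative small-scale buffer of synthetic samples with priority sampling."""

    def __init__(self, capacity=5000, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha        # balance between diversity and difficulty terms
        self.images = []          # synthetic images (tensors)
        self.labels = []          # teacher pseudo-labels (ints)
        self.priorities = []      # modulating-function values (floats)

    def _modulating_value(self, teacher_logits, pseudo_label):
        # Difficulty-aware term: teacher prediction entropy (higher = harder sample).
        probs = F.softmax(teacher_logits, dim=-1)
        difficulty = float(-(probs * probs.clamp_min(1e-8).log()).sum())
        # Diversity-aware term: down-weight classes already frequent in the buffer.
        diversity = 1.0 / (self.labels.count(pseudo_label) + 1)
        return self.alpha * diversity + (1.0 - self.alpha) * difficulty

    def add(self, image, teacher_logits):
        label = int(teacher_logits.argmax())
        priority = self._modulating_value(teacher_logits, label)
        if len(self.images) >= self.capacity:
            # Evict the lowest-priority sample to keep the buffer small and balanced.
            idx = min(range(len(self.priorities)), key=self.priorities.__getitem__)
            for buf in (self.images, self.labels, self.priorities):
                buf.pop(idx)
        self.images.append(image)
        self.labels.append(label)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Priority sampling: draw samples with probability proportional to priority.
        p = torch.tensor(self.priorities, dtype=torch.float)
        idx = torch.multinomial(p / p.sum(), min(batch_size, len(p)), replacement=False)
        images = torch.stack([self.images[i] for i in idx])
        labels = torch.tensor([self.labels[i] for i in idx])
        return images, labels
```

In such a scheme, distillation batches drawn via `sample` favor hard, under-represented classes, which is the intuition behind training on a buffer roughly 10× smaller than the original dataset without degrading student accuracy.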

Results and Comparisons

SSD-KD demonstrates substantial improvements in training efficiency while achieving model performance comparable to or better than existing methods.

Image Classification

Experiments on CIFAR-10 and CIFAR-100 illustrate SSD-KD's capabilities:

  • On CIFAR-10, SSD-KD achieves up to 3.92× training speed improvement over Fast2, with superior accuracy in most teacher-student configurations.
  • On CIFAR-100, substantial accuracy gains (up to 10.19%) are observed alongside similar efficiency improvements (Figure 3).

Figure 3: Comparison of optimization pipelines for existing adversarial D-KD methods, including both the conventional and fast families.

Semantic Segmentation

On NYUv2:

  • SSD-KD delivers state-of-the-art mIoU performance, with synthetic sample usage reduced drastically compared to methods like DFND and Fast10.
  • Training efficiency is confirmed by a total training time of 8.9 hours to produce a competitive model (Figure 4).

Figure 4: Performance comparison of SSD-KD under different synthetic data scales against the original training dataset size, in terms of top-1 classification accuracy (%) and overall training time cost (hours).

Ablation Study

SSD-KD's core components were ablated to evaluate their individual contributions. Both the priority sampling function and the modulating function (with its diversity and difficulty terms) prove integral to achieving balanced gains in model accuracy and training efficiency.

Conclusion

The proposed SSD-KD represents a significant step forward in D-KD by enabling efficient knowledge distillation with small-scale synthetic datasets. By balancing sample diversity and difficulty during training, SSD-KD achieves high accuracy with a fraction of the data typically required. The method holds potential for broad application in AI, particularly for tasks involving resource-constrained devices, and sets a new standard for efficient D-KD practices (Figure 5).

Figure 5: Visualization examples of synthetic image samples generated by Fast10 and our SSD-KD for the NYUv2 dataset.
