Distilling Effective Supervision from Severe Label Noise

(1910.00701)
Published Oct 1, 2019 in cs.LG, cs.CV, and stat.ML

Abstract

Collecting large-scale data with clean labels for supervised training of neural networks is practically challenging. Although noisy labels are usually cheap to acquire, existing methods suffer severely from label noise. This paper targets the challenge of robust training in high label-noise regimes. The key insight is to leverage a small trusted set to estimate exemplar weights and pseudo labels for noisy data, so that the noisy data can be reused for supervised training. We present a holistic framework to train deep neural networks in a way that is highly robust to label noise. Our method sets a new state of the art on various types of label noise and achieves excellent performance on large-scale datasets with real-world label noise. For instance, on CIFAR100 with a $40\%$ uniform noise ratio and only 10 trusted labeled examples per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, an error rate only $1.4\%$ higher than that of a neural network trained without label noise. Moreover, when the noise ratio increases to $80\%$, our method still maintains a high accuracy of $75.5{\pm}0.2\%$, compared to the previous best of $48.2\%$. Source code is available at https://github.com/google-research/google-research/tree/master/ieg
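
The abstract only sketches the recipe: estimate per-example weights for the noisy data by checking them against a small trusted set, generate pseudo labels, and train on the reweighted mixture. Below is a minimal PyTorch sketch of that generic recipe (a one-step "learning to reweight" lookahead plus simple pseudo-labeling), not the authors' IEG implementation; the toy linear model, batch sizes, lookahead step size, sharpening temperature, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy setup (illustrative assumptions): a linear classifier, one noisy batch,
# and a small trusted batch with clean labels.
num_classes, dim = 10, 32
W = torch.randn(dim, num_classes, requires_grad=True)

x_noisy = torch.randn(64, dim)                 # features whose labels may be wrong
y_noisy = torch.randint(0, num_classes, (64,))
x_trust = torch.randn(10, dim)                 # small trusted (clean) set
y_trust = torch.randint(0, num_classes, (10,))

# --- Step 1: exemplar weights via a one-step lookahead on the trusted set ---
eps = torch.zeros(x_noisy.size(0), requires_grad=True)   # candidate per-example weights
losses = F.cross_entropy(x_noisy @ W, y_noisy, reduction="none")
weighted_loss = (eps * losses).sum()

# Virtual SGD step on the model using the eps-weighted noisy loss.
grad_W, = torch.autograd.grad(weighted_loss, W, create_graph=True)
W_virtual = W - 0.1 * grad_W                   # 0.1 is an arbitrary lookahead step size

# Examples whose gradients help the trusted set (negative d(trusted loss)/d(eps)) get weight.
trust_loss = F.cross_entropy(x_trust @ W_virtual, y_trust)
grad_eps, = torch.autograd.grad(trust_loss, eps)
weights = torch.clamp(-grad_eps, min=0.0)
weights = weights / (weights.sum() + 1e-8)     # normalized exemplar weights

# --- Step 2: pseudo labels for the noisy batch ---
# Stand-in: temperature-sharpened current predictions; the paper's pipeline is richer.
with torch.no_grad():
    pseudo = ((x_noisy @ W) / 0.5).softmax(dim=-1)

# --- Step 3: supervised update reusing the noisy data ---
logits = x_noisy @ W
loss_noisy = (weights * F.cross_entropy(logits, y_noisy, reduction="none")).sum()
loss_pseudo = -(pseudo * logits.log_softmax(dim=-1)).sum(dim=-1).mean()
loss_trust = F.cross_entropy(x_trust @ W, y_trust)
loss = loss_noisy + 0.5 * loss_pseudo + loss_trust   # 0.5 is an illustrative loss weight
loss.backward()                                       # then apply an optimizer step to W
```

In this sketch, the lookahead makes an example's weight proportional to how well its gradient aligns with the trusted set's, which is one common way to distill supervision from a small clean set; the full framework described in the paper is more elaborate.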
