
Image recognition from raw labels collected without annotators

(1910.09055)
Published Oct 20, 2019 in cs.LG, cs.CV, and stat.ML

Abstract

Image classification problems are typically addressed by first collecting examples with candidate labels, second cleaning the candidate labels manually, and third training a deep neural network on the clean examples. The manual labeling step is often the most expensive one, as it requires workers to label millions of images. In this paper we propose to work without any explicitly labeled data by i) directly training the deep neural network on the noisy candidate labels, and ii) early stopping the training to avoid overfitting. With this procedure we exploit an intriguing property of standard overparameterized convolutional neural networks trained with (stochastic) gradient descent: clean labels are fitted faster than noisy ones. We consider two classification problems, a subset of ImageNet and CIFAR-10. For both, we construct large candidate datasets without any explicit human annotations that contain only 10%-50% correctly labeled examples per class. We show that training on the candidate examples and regularizing through early stopping gives higher test performance for both problems than training on the original, clean data. This is possible because the candidate datasets contain a huge number of clean examples and, as we show in this paper, the noise generated through the label collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels.
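
A minimal training sketch may make the recipe concrete. The code below is not the authors' implementation: it simulates the noisy candidate labels by randomly corrupting a fraction of CIFAR-10 training labels (a harsher kind of noise than real collection noise, as the abstract notes), it picks the early-stopping epoch with a patience rule on a small clean held-out split (one plausible criterion, and a simplification of the annotation-free setting), and the model and hyperparameters (ResNet-18, SGD, a 60% corruption rate) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the paper's recipe: train a
# standard overparameterized CNN directly on noisy candidate labels and
# regularize only through early stopping.
#
# Assumptions for illustration: noisy candidate labels are simulated by
# randomly corrupting CIFAR-10 training labels, the stopping epoch is chosen
# by a patience rule on a small clean held-out split, and the model and
# hyperparameters (ResNet-18, SGD, 60% corruption) are arbitrary choices.
import copy
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)

# Split off a small clean validation set, then corrupt ~60% of the remaining
# training labels so that only a minority of examples stay correctly labeled.
targets = torch.tensor(train_set.targets)
perm = torch.randperm(len(targets), generator=torch.Generator().manual_seed(0))
train_idx, val_idx = perm[:-5000], perm[-5000:]
flip = torch.rand(len(train_idx)) < 0.6
targets[train_idx[flip]] = torch.randint(0, 10, (int(flip.sum()),))
train_set.targets = targets.tolist()

train_loader = torch.utils.data.DataLoader(
    torch.utils.data.Subset(train_set, train_idx.tolist()),
    batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(
    torch.utils.data.Subset(train_set, val_idx.tolist()), batch_size=256)

model = torchvision.models.resnet18(num_classes=10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9,
                      weight_decay=5e-4)
loss_fn = nn.CrossEntropyLoss()

def val_accuracy():
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x.to(device)).argmax(1).cpu() == y).sum().item()
            total += y.numel()
    return correct / total

# Early stopping: clean labels are fitted before noisy ones, so validation
# accuracy peaks well before the network memorizes the mislabeled examples.
best_acc = 0.0
best_state = copy.deepcopy(model.state_dict())
patience, bad_epochs = 5, 0
for epoch in range(60):
    model.train()
    for x, y in train_loader:
        opt.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        opt.step()
    acc = val_accuracy()
    print(f"epoch {epoch}: val acc {acc:.3f}")
    if acc > best_acc:
        best_acc, best_state, bad_epochs = acc, copy.deepcopy(model.state_dict()), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break

model.load_state_dict(best_state)  # keep the weights from the best epoch
```

The effect being exploited is the one stated in the abstract: because clean examples are fitted before the mislabeled ones are memorized, held-out accuracy rises, peaks, and then degrades, so stopping near the peak approximately recovers the performance of training on clean data.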
