
GANs for learning from very high class conditional noisy labels (2010.09577v1)

Published 19 Oct 2020 in cs.LG

Abstract: We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification. It first generates a set of correctly labelled data points from noisy labelled data together with 0.1% or 1% clean labels, such that the generated and true (clean) labelled data distributions are close; the generated labelled data is then used to learn a good classifier. The mode collapse problem that arises while generating correct feature-label pairs, and the problem of a skewed feature-to-label dimension ratio ($\sim$ 784:1), are avoided by using a Wasserstein GAN and a simple change of data representation. A second WGAN with an information-theoretic flavour, built on top of the new representation, is also proposed. The major advantage of both schemes is their significant improvement over existing methods in the presence of very high CCN rates, without either estimating or cross-validating over the noise rates. We prove that the KL divergence between the clean and noisy distributions increases with the noise rate under the symmetric label noise model; this result can be extended to high CCN rates. This implies that our schemes perform well due to the adversarial nature of GANs. Further, the use of a generative approach (learning the clean joint distribution) while handling noise enables our schemes to outperform discriminative approaches such as GLC, LDMI and GCE, even when the classes are highly imbalanced. Using the Friedman F-test and the Nemenyi post-hoc test, we show that on high-dimensional binary-class synthetic, MNIST and Fashion MNIST datasets, our schemes outperform the existing methods and deliver consistent performance across noise rates.
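The abstract only sketches the method, so the following is a minimal, illustrative WGAN sketch over joint feature-label vectors. The label-replication trick (`LABEL_REP`), the network sizes, and the hyperparameters are assumptions chosen for illustration; they are not the paper's exact representation change or architecture.

```python
# Minimal sketch of a WGAN over joint (feature, label) vectors, loosely
# following the abstract's idea of generating clean labelled data.
# LABEL_REP, layer widths, and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

FEAT_DIM = 784    # e.g. flattened MNIST images
LABEL_REP = 32    # replicate the binary label to soften the ~784:1 ratio (assumed)
JOINT_DIM = FEAT_DIM + LABEL_REP
NOISE_DIM = 64

def to_joint(x, y):
    """Concatenate features with a replicated label column."""
    y_rep = y.view(-1, 1).repeat(1, LABEL_REP).float()
    return torch.cat([x, y_rep], dim=1)

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, JOINT_DIM),
)
critic = nn.Sequential(
    nn.Linear(JOINT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),    # no sigmoid: a Wasserstein critic outputs a raw score
)

g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def train_step(x_clean, y_clean, n_critic=5, clip=0.01):
    real = to_joint(x_clean, y_clean)
    for _ in range(n_critic):
        z = torch.randn(real.size(0), NOISE_DIM)
        fake = generator(z).detach()
        # Critic maximises E[f(real)] - E[f(fake)], so minimise the negative.
        c_loss = critic(fake).mean() - critic(real).mean()
        c_opt.zero_grad(); c_loss.backward(); c_opt.step()
        # Weight clipping enforces the Lipschitz constraint (original WGAN).
        for p in critic.parameters():
            p.data.clamp_(-clip, clip)
    z = torch.randn(real.size(0), NOISE_DIM)
    g_loss = -critic(generator(z)).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return c_loss.item(), g_loss.item()

# Example: one update on a toy batch standing in for the small clean subset.
x = torch.randn(64, FEAT_DIM)
y = torch.randint(0, 2, (64,))
print(train_step(x, y))
```

Once trained, generated joint vectors would be split back into features and a (thresholded) label column and used to fit an ordinary classifier, mirroring the abstract's generate-then-classify pipeline.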

Citations (1)
