
Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise (2112.02960v3)

Published 6 Dec 2021 in cs.LG

Abstract: Noisy labels damage the performance of deep networks. For robust learning, a prominent two-stage pipeline alternates between eliminating possibly incorrect labels and semi-supervised training. However, discarding part of the noisy labels could result in a loss of information, especially when the corruption depends on the data, e.g., class-dependent or instance-dependent noise. Moreover, from the training dynamics of DivideMix, a representative two-stage method, we identify the domination of confirmation bias: pseudo-labels fail to correct a considerable amount of noisy labels, and consequently the errors accumulate. To sufficiently exploit information from noisy labels and mitigate wrong corrections, we propose Robust Label Refurbishment (Robust LR), a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels. We show that our method successfully alleviates the damage of both label noise and confirmation bias. As a result, it achieves state-of-the-art performance across datasets and noise types, namely CIFAR under different levels of synthetic noise, and Mini-WebVision and ANIMAL-10N with real-world noise.
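The core idea of label refurbishment, as the abstract describes it, is to combine the given (possibly noisy) label with the model's pseudo-label instead of discarding suspect labels outright, weighting the two by an estimated confidence that the given label is clean. The sketch below illustrates this generic convex-combination form; the function name, shapes, and the scalar confidence weighting are illustrative assumptions, not the paper's exact Robust LR update.

```python
import numpy as np

def refurbish_labels(given_onehot, pred_probs, confidence):
    """Convex-combine given labels with model pseudo-labels.

    given_onehot : (N, C) one-hot (possibly noisy) labels
    pred_probs   : (N, C) model softmax predictions (pseudo-labels)
    confidence   : (N,) estimated probability that each given label is clean
    Returns (N, C) refurbished soft labels.
    """
    w = confidence[:, None]                      # broadcast to (N, 1)
    return w * given_onehot + (1.0 - w) * pred_probs

# Toy example: 2 samples, 3 classes.
given = np.array([[1.0, 0.0, 0.0],               # label the model agrees with
                  [0.0, 1.0, 0.0]])              # label the model disputes
preds = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.1, 0.8]])
conf = np.array([0.95, 0.2])                     # high vs. low clean-label confidence

refurbished = refurbish_labels(given, preds, conf)
```

For the low-confidence second sample, the refurbished label shifts toward the model's prediction (class 2), while the high-confidence first sample stays close to its given label; each row remains a valid probability distribution.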

Authors (6)
  1. Mingcai Chen (11 papers)
  2. Hao Cheng (190 papers)
  3. Yuntao Du (30 papers)
  4. Ming Xu (154 papers)
  5. Wenyu Jiang (13 papers)
  6. Chongjun Wang (27 papers)
Citations (20)
