Pseudo-label Correction for Instance-dependent Noise Using Teacher-student Framework (2311.14237v1)

Published 24 Nov 2023 in cs.LG and cs.CV

Abstract: The high capacity of deep learning models to learn complex patterns poses a significant challenge when confronted with label noise. The inability to differentiate clean and noisy labels ultimately results in poor generalization. We approach this problem by reassigning the label for each image using a new teacher-student framework termed P-LC (pseudo-label correction). Traditional teacher-student networks are composed of teacher and student classifiers for knowledge distillation. In our novel approach, we reconfigure the teacher network into a triple encoder, leveraging the triplet loss to establish a pseudo-label correction system. As the student generates pseudo labels for a set of given images, the teacher learns to choose between the initially assigned labels and the pseudo labels. Experiments on MNIST, Fashion-MNIST, and SVHN demonstrate P-LC's superior performance over existing state-of-the-art methods across all noise levels, most notably under high noise. In addition, we introduce a noise level estimation method to help assess model performance and inform the need for additional data cleaning procedures.
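
The paper's implementation is not reproduced on this page, so the PyTorch sketch below is only one plausible reading of the abstract: a teacher encoder trained with a triplet loss, plus a correction rule that keeps whichever of the originally assigned label or the student's pseudo label lies closer to its class prototype in the teacher's embedding space. The encoder architecture, the prototype-based decision rule, and all names here (`TripletEncoder`, `class_prototypes`, `correct_labels`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TripletEncoder(nn.Module):
    """Teacher encoder mapping images to a unit-norm embedding space.

    The architecture is a placeholder for 1-channel inputs (e.g. MNIST);
    the abstract does not specify the backbone.
    """

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)


def triplet_teacher_step(encoder, anchor, positive, negative, margin=0.2):
    """One teacher update: pull anchor toward the positive, push away from the negative."""
    za, zp, zn = encoder(anchor), encoder(positive), encoder(negative)
    return F.triplet_margin_loss(za, zp, zn, margin=margin)


@torch.no_grad()
def class_prototypes(encoder, images, labels, num_classes):
    """Mean embedding per class; a stand-in for however the paper anchors classes."""
    z = encoder(images)
    return torch.stack([z[labels == c].mean(0) for c in range(num_classes)])


@torch.no_grad()
def correct_labels(encoder, images, given_labels, pseudo_labels, prototypes):
    """Keep whichever label's class prototype lies closer in embedding space."""
    z = encoder(images)                                     # (B, D)
    d_given = (z - prototypes[given_labels]).norm(dim=1)    # distance under the given label
    d_pseudo = (z - prototypes[pseudo_labels]).norm(dim=1)  # distance under the pseudo label
    return torch.where(d_pseudo < d_given, pseudo_labels, given_labels)
```

In a full pipeline the pseudo labels would come from the student classifier's predictions and the prototypes would be refreshed as the teacher trains; both steps are omitted here, and this decision rule is an assumption rather than the paper's stated procedure.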

