Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It (2403.14715v2)
Abstract: Label smoothing (LS) is a popular regularisation method for training neural networks as it is effective in improving test accuracy and is simple to implement. Hard one-hot labels are smoothed by uniformly distributing probability mass to other classes, reducing overfitting. Prior work has suggested that in some cases LS can degrade selective classification (SC) -- where the aim is to reject misclassifications using a model's uncertainty. In this work, we first demonstrate empirically across an extended range of large-scale tasks and architectures that LS consistently degrades SC. We then address a gap in existing knowledge, providing an explanation for this behaviour by analysing logit-level gradients: LS degrades the uncertainty rank ordering of correct vs incorrect predictions by regularising the max logit more when a prediction is likely to be correct, and less when it is likely to be wrong. This elucidates previously reported experimental results where strong classifiers underperform in SC. We then demonstrate the empirical effectiveness of post-hoc logit normalisation for recovering lost SC performance caused by LS. Furthermore, linking back to our gradient analysis, we again provide an explanation for why such normalisation is effective.
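The two operations the abstract describes can be illustrated with a minimal NumPy sketch: uniform label smoothing of one-hot targets, and a maximum-softmax-probability confidence score with optional post-hoc logit normalisation (dividing the logit vector by its L_p norm before the softmax). The function names, the choice of L2 norm, and the temperature parameter `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Uniform label smoothing: remove eps of the probability mass from
    the one-hot target and spread it evenly over all classes."""
    one_hot = np.eye(num_classes)[y]
    return (1.0 - eps) * one_hot + eps / num_classes

def softmax(z, axis=-1):
    # Shift by the max for numerical stability.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def msp_confidence(logits, normalise=False, p=2, tau=1.0):
    """Maximum softmax probability as the selective-classification score.
    With normalise=True, logits are first divided by their L_p norm
    (a post-hoc logit normalisation; p and tau are assumed knobs)."""
    z = np.asarray(logits, dtype=float)
    if normalise:
        z = z / (tau * np.linalg.norm(z, ord=p, axis=-1, keepdims=True))
    return softmax(z).max(axis=-1)

# A prediction is then rejected when its confidence falls below a
# threshold chosen for a target coverage, e.g.:
def selective_predict(logits, threshold, **kw):
    conf = msp_confidence(logits, **kw)
    preds = np.asarray(logits).argmax(axis=-1)
    return preds, conf >= threshold  # (predicted class, accept mask)
```

Under this sketch, normalisation rescales each sample's logit vector before the confidence is computed, which changes the rank ordering of confidences across samples even though the predicted class is unchanged.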
Authors: Guoxuan Xia, Olivier Laurent, Gianni Franchi, Christos-Savvas Bouganis