EPA: Neural Collapse Inspired Robust Out-of-Distribution Detector (2401.01710v1)

Published 3 Jan 2024 in cs.LG and cs.CR

Abstract: Out-of-distribution (OOD) detection plays a crucial role in ensuring the security of neural networks. Existing works have leveraged the fact that In-distribution (ID) samples form a subspace in the feature space, achieving state-of-the-art (SOTA) performance. However, the comprehensive characteristics of the ID subspace remain under-explored. Recently, the discovery of Neural Collapse ($\mathcal{NC}$) sheds light on novel properties of the ID subspace. Leveraging insight from $\mathcal{NC}$, we observe that the Principal Angle between the features and the ID feature subspace forms a superior representation for measuring the likelihood of OOD. Building upon this observation, we propose a novel $\mathcal{NC}$-inspired OOD scoring function, named Entropy-enhanced Principal Angle (EPA), which integrates both the global characteristic of the ID subspace and its inner property. We experimentally compare EPA with various SOTA approaches, validating its superior performance and robustness across different network architectures and OOD datasets.
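The core idea of a principal-angle OOD score can be sketched as follows. This is a minimal illustration, not the paper's full EPA method: it covers only the principal-angle component (the angle between a test feature and a subspace spanned by the top principal directions of ID features), omitting the entropy enhancement. The choice of SVD for the basis and the number of principal directions are assumptions for illustration.

```python
import numpy as np

def principal_angle_score(feature, id_basis):
    """Angle between a feature vector and the ID subspace.

    id_basis: (d, k) matrix with orthonormal columns spanning the
    estimated ID feature subspace. A larger angle suggests the
    feature lies farther from the ID subspace, i.e. more likely OOD.
    """
    f = feature / np.linalg.norm(feature)
    proj = id_basis @ (id_basis.T @ f)          # orthogonal projection onto the subspace
    cos_theta = np.clip(np.linalg.norm(proj), 0.0, 1.0)
    return np.arccos(cos_theta)                 # angle in [0, pi/2]

# Build an orthonormal basis from (toy) ID training features via SVD.
rng = np.random.default_rng(0)
id_features = rng.normal(size=(1000, 64))        # stand-in for penultimate-layer features
centered = id_features - id_features.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:10].T                                # top-10 principal directions, shape (64, 10)

score = principal_angle_score(rng.normal(size=64), basis)
```

In practice the score would be thresholded (features with an angle above the threshold are flagged as OOD), and a vector lying entirely inside the subspace yields an angle of zero.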

