Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions (2305.05400v4)

Published 9 May 2023 in cs.LG, cs.CV, stat.ML, and cs.AI

Abstract: Robustness is a fundamental property of machine learning classifiers required to achieve safety and reliability. In the field of adversarial robustness of image classifiers, robustness is commonly defined as the stability of a model to all input changes within a p-norm distance. However, in the field of random corruption robustness, variations observed in the real world are used, while p-norm corruptions are rarely considered. This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers. We evaluate the model robustness against imperceptible random p-norm corruptions and propose a novel robustness metric. We empirically investigate whether robustness transfers across different p-norms and derive conclusions on which p-norm corruptions a model should be trained and evaluated. We find that training data augmentation with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes.
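The core operation the abstract describes, drawing an imperceptible corruption uniformly at random from an Lp-norm ball and adding it to an image, can be sketched as follows. This is a minimal illustration using the standard generalized-Gaussian construction for uniform sampling in Lp balls; the function names (`sample_lp_ball`, `lp_corrupt`) and the additive-corruption-with-clipping application are assumptions made here for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def sample_lp_ball(shape, p, eps, rng=None):
    """Draw one sample uniformly from the Lp ball of radius eps.

    Uses the generalized-Gaussian construction: draw coordinates with
    density proportional to exp(-|t|^p), normalize onto the unit Lp
    sphere, then scale by a radius with the correct volume distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    if np.isinf(p):
        # The L-infinity ball is a hypercube: coordinates are independent uniforms.
        return rng.uniform(-eps, eps, size=shape)
    d = int(np.prod(shape))
    # If G ~ Gamma(1/p, 1), then G^(1/p) with a random sign has density exp(-|t|^p).
    magnitudes = rng.gamma(1.0 / p, 1.0, size=d) ** (1.0 / p)
    z = magnitudes * rng.choice([-1.0, 1.0], size=d)
    z /= np.linalg.norm(z, ord=p)            # project onto the unit Lp sphere
    radius = eps * rng.uniform() ** (1.0 / d)  # radius for uniform volume in the ball
    return (radius * z).reshape(shape)

def lp_corrupt(image, p, eps, rng=None):
    """Additively corrupt a float image in [0, 1] and clip back to range."""
    noise = sample_lp_ball(image.shape, p, eps, rng)
    return np.clip(image + noise, 0.0, 1.0)
```

To mirror the combined-norm training augmentation the abstract reports as most effective, one could draw p per image or per batch from a small set of norms (for example {1, 2, inf}); the specific set and the per-batch sampling granularity are assumptions here, as the abstract only states that a combination of p-norm corruptions was used.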
