Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions (2305.05400v4)
Abstract: Robustness is a fundamental property of machine learning classifiers required to achieve safety and reliability. In the field of adversarial robustness of image classifiers, robustness is commonly defined as a model's stability under all input changes within a p-norm distance. In the field of random corruption robustness, by contrast, evaluations rely on variations observed in the real world, and p-norm corruptions are rarely considered. This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers. We evaluate model robustness against imperceptible random p-norm corruptions and propose a novel robustness metric. We empirically investigate whether robustness transfers across different p-norms and draw conclusions about which p-norm corruptions a model should be trained and evaluated on. We find that augmenting the training data with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes.
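To make the corruption procedure concrete, the sketch below shows one standard way to draw a random perturbation uniformly from an Lp ball of a given radius (the classical generalized-Gaussian construction) and apply it to an image. This is a minimal illustration, not the paper's implementation: the function name `sample_lp_ball`, the radius value, and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def sample_lp_ball(shape, p, radius, rng=None):
    """Draw a perturbation uniformly from the Lp ball of the given radius.

    Illustrative sketch of the generalized-Gaussian construction:
    sample coordinates with density proportional to exp(-|x|^p),
    normalize onto the unit Lp sphere, then rescale by
    radius * u**(1/n) with u ~ Uniform(0, 1), which yields a point
    uniformly distributed inside the ball. Assumes finite p > 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = int(np.prod(shape))
    # Generalized Gaussian magnitude via the Gamma distribution:
    # if g ~ Gamma(1/p), then g**(1/p) has density proportional to exp(-x^p).
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=n)
    x = rng.choice([-1.0, 1.0], size=n) * g ** (1.0 / p)
    x /= np.linalg.norm(x, ord=p)              # project onto the unit Lp sphere
    r = radius * rng.uniform() ** (1.0 / n)    # radial factor -> uniform in the ball
    return (r * x).reshape(shape)

# Example: corrupt a CIFAR-sized image with a small random L2 perturbation
# (placeholder data; radius chosen arbitrarily for illustration).
image = np.random.rand(3, 32, 32).astype(np.float32)   # input assumed in [0, 1]
delta = sample_lp_ball(image.shape, p=2, radius=0.5)
corrupted = np.clip(image + delta, 0.0, 1.0)
```

For p = infinity, this construction does not apply directly; a uniform sample from the L-infinity ball is instead obtained by drawing each coordinate independently from Uniform(-radius, radius).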