Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates (2402.17390v2)
Abstract: Machine-learning models demand periodic updates to improve their average accuracy, exploiting novel architectures and additional data. However, a newly updated model may commit mistakes the previous model did not make. Such misclassifications, referred to as negative flips, are experienced by users as a regression in performance. In this work, we show that this problem also affects robustness to adversarial examples, hindering the development of secure model-update practices. In particular, when a model is updated to improve its adversarial robustness, adversarial attacks that were previously ineffective on some inputs may become successful, causing a regression in the perceived security of the system. We propose a novel technique, named robustness-congruent adversarial training, to address this issue. It amounts to fine-tuning a model with adversarial training while constraining it to retain higher robustness on the samples for which no adversarial example was found before the update. We show that our algorithm and, more generally, learning with non-regression constraints provide a theoretically grounded framework for training consistent estimators. Our experiments on robust computer-vision models confirm that both accuracy and robustness, even if improved overall by a model update, can be affected by negative flips, and that our robustness-congruent adversarial training mitigates the problem, outperforming competing baseline methods.
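To make the idea concrete, below is a minimal sketch of how a non-regression penalty of this kind could be combined with standard PGD-based adversarial training in PyTorch. The function names (`pgd_attack`, `rcat_loss`), the hyperparameters (`eps`, `alpha`, `steps`, `beta`), and the particular masked-penalty form are illustrative assumptions for this sketch, not the paper's exact formulation or official code.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-inf adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()


def rcat_loss(new_model, old_model, x, y, eps=8 / 255, beta=1.0):
    """Adversarial-training loss plus a penalty discouraging robustness negative
    flips, i.e., losing robustness on samples the old model withstood (sketch)."""
    # Which samples was the old (pre-update) model robust on? In practice this
    # mask could be precomputed once before the update instead of per batch.
    x_adv_old = pgd_attack(old_model, x, y, eps=eps)
    with torch.no_grad():
        old_robust = old_model(x_adv_old).argmax(dim=1).eq(y)

    # Standard adversarial-training term for the new (updated) model.
    x_adv_new = pgd_attack(new_model, x, y, eps=eps)
    per_sample = F.cross_entropy(new_model(x_adv_new), y, reduction="none")
    adv_loss = per_sample.mean()

    # Non-regression penalty: upweight the adversarial loss on the subset the
    # old model was robust on, pushing the new model to retain that robustness.
    mask = old_robust.float()
    penalty = (per_sample * mask).sum() / mask.sum().clamp(min=1.0)
    return adv_loss + beta * penalty
```

In a training loop one would compute `loss = rcat_loss(new_model, old_model, x, y)` and back-propagate through `new_model` only, keeping `old_model` frozen as the pre-update reference; `beta` trades off overall adversarial robustness against the non-regression constraint.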