BrainWash: A Poisoning Attack to Forget in Continual Learning (2311.11995v3)

Published 20 Nov 2023 in cs.LG, cs.AI, and cs.CR

Abstract: Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning. Yet, a largely unexplored facet of this paradigm is its susceptibility to adversarial attacks, especially those aimed at inducing forgetting. In this paper, we introduce "BrainWash," a novel data poisoning method tailored to impose forgetting on a continual learner. By adding the BrainWash noise to the training data of a variety of continual learning baselines, we demonstrate that a trained continual learner can be induced to catastrophically forget its previously learned tasks, regardless of which of these baselines it uses. An important feature of our approach is that the attacker requires no access to previous tasks' data and is armed merely with the model's current parameters and the data belonging to the most recent task. Our extensive experiments highlight the efficacy of BrainWash, showcasing significant performance degradation across various regularization-based continual learning methods.
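The abstract describes a bilevel poisoning objective: perturb only the newest task's data so that the victim's own continual update drives it to forget earlier tasks. As a rough illustration of that shape, the PyTorch sketch below replaces the paper's full bilevel optimization with a first-order gradient-alignment heuristic; it is not the authors' implementation. The names model, x_new/y_new (the attacker-held current-task batch), and x_old/y_old (a stand-in for the old-task proxy the attacker would have to synthesize, since BrainWash assumes no access to prior data) are all hypothetical.

    import torch
    import torch.nn.functional as F

    def forgetting_poison(model, x_new, y_new, x_old, y_old,
                          eps=8 / 255, steps=100, lr=0.01):
        # Craft a bounded perturbation for the newest task's batch so that
        # ordinary training on (x_new + delta) moves the parameters in a
        # direction that raises the loss on the old-task proxy batch.
        params = [p for p in model.parameters() if p.requires_grad]

        # Forgetting direction: the ascent direction on the old-task loss.
        old_loss = F.cross_entropy(model(x_old), y_old)
        target = [(-t).detach()
                  for t in torch.autograd.grad(old_loss, params)]

        delta = torch.zeros_like(x_new, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            poisoned_loss = F.cross_entropy(model(x_new + delta), y_new)
            grads = torch.autograd.grad(poisoned_loss, params,
                                        create_graph=True)
            # Align the victim's training gradient with the forgetting
            # direction: an SGD step on the poison then ascends old_loss.
            align = sum(F.cosine_similarity(g.flatten(), t.flatten(), dim=0)
                        for g, t in zip(grads, target))
            opt.zero_grad()
            (-align).backward()          # maximize alignment
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the poison small and stealthy
        return delta.detach()

The actual attack optimizes through the learner's update more directly and targets regularization-based methods; this first-order sketch only conveys the structure of the objective, under the stated assumptions.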
