Localizing Catastrophic Forgetting in Neural Networks (1906.02568v1)
Published 6 Jun 2019 in cs.LG, cs.AI, cs.NE, and stat.ML
Abstract: Artificial neural networks (ANNs) suffer from catastrophic forgetting when trained on a sequence of tasks. While this phenomenon was studied in the past, recent research on it remains very limited. We propose a method for determining the contribution of individual parameters in an ANN to catastrophic forgetting. The method is used to analyze an ANN's response to three different continual learning scenarios.
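The abstract does not spell out how per-parameter contributions are determined. One simple attribution scheme (a hedged sketch, not necessarily the paper's actual method) is restoration-based: after sequential training on tasks A and B, restore each parameter individually to its post-task-A value and record how much the task-A loss recovers. The toy linear-regression setup, the `train`/`loss` helpers, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks": linear regressions with disjoint relevant dimensions.
X_a = rng.normal(size=(50, 4)); w_true_a = np.array([1.0, 2.0, 0.0, 0.0])
X_b = rng.normal(size=(50, 4)); w_true_b = np.array([0.0, 0.0, 3.0, -1.0])
y_a, y_b = X_a @ w_true_a, X_b @ w_true_b

def loss(w, X, y):
    # Mean-squared error of the linear model with weights w.
    return np.mean((X @ w - y) ** 2)

def train(w, X, y, lr=0.05, steps=500):
    # Plain gradient descent on the MSE objective.
    w = w.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w_after_a = train(np.zeros(4), X_a, y_a)  # parameters after task A
w_after_b = train(w_after_a, X_b, y_b)    # after task B: task A is forgotten

# Restoration-based attribution: restore one parameter at a time to its
# post-task-A value and measure how much the task-A loss drops.
base = loss(w_after_b, X_a, y_a)
contrib = []
for i in range(len(w_after_b)):
    w_probe = w_after_b.copy()
    w_probe[i] = w_after_a[i]
    contrib.append(base - loss(w_probe, X_a, y_a))
```

In this setup, the parameters that drifted furthest from their task-A values receive the largest contribution scores, localizing the forgetting to specific weights.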