FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning

(arXiv: 2211.13131)
Published Nov 23, 2022 in cs.CV, cs.AI, and cs.LG

Abstract

Exemplar-free class-incremental learning is very challenging due to the negative effect of catastrophic forgetting. A balance between stability and plasticity of the incremental process is needed to obtain good accuracy for past as well as new classes. Existing exemplar-free class-incremental methods focus either on successive fine-tuning of the model, thus favoring plasticity, or on using a feature extractor fixed after the initial incremental state, thus favoring stability. We introduce a method which combines a fixed feature extractor with a pseudo-feature generator to improve the stability-plasticity balance. The generator uses a simple yet effective geometric translation of new-class features to create representations of past classes, made of pseudo-features. This translation requires storing only the centroid representations of past classes to produce their pseudo-features. Actual features of new classes and pseudo-features of past classes are fed into a linear classifier which is trained incrementally to discriminate between all classes. The incremental process is much faster with the proposed method than with mainstream ones, which update the entire deep model. Experiments are performed on three challenging datasets under different incremental settings. A comparison with ten existing methods shows that our method outperforms the others in most cases.
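
As a rough illustration of the translation step described in the abstract, here is a minimal sketch in Python with NumPy and scikit-learn. All names, dimensions, and the random data are illustrative assumptions, and the paper's pairing of each past class with a similar new class is omitted:

```python
import numpy as np
from sklearn.svm import LinearSVC

def generate_pseudo_features(new_feats, new_centroid, past_centroid):
    """Geometric translation: shift new-class features so that their mean
    coincides with the stored centroid of a past class (f + mu_past - mu_new)."""
    return new_feats + (past_centroid - new_centroid)

# Toy data standing in for the output of a fixed feature extractor
# (dimensions and class counts are illustrative).
rng = np.random.default_rng(0)
dim = 512
new_feats = rng.normal(loc=1.0, size=(100, dim))    # features of one new class
new_centroid = new_feats.mean(axis=0)
past_centroid = rng.normal(loc=-1.0, size=dim)      # stored centroid of a past class

pseudo_past = generate_pseudo_features(new_feats, new_centroid, past_centroid)

# Train a linear classifier on actual new-class features and
# pseudo-features of the past class, as the abstract describes.
X = np.vstack([new_feats, pseudo_past])
y = np.array([1] * len(new_feats) + [0] * len(pseudo_past))
clf = LinearSVC().fit(X, y)
```

In the method itself, the features come from the extractor frozen after the initial incremental state, and the linear classifier is retrained at each step to discriminate between all classes seen so far.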
