Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting (2004.12651v1)

Published 27 Apr 2020 in cs.CL

Abstract: Deep pretrained language models have achieved great success via pretraining first and then fine-tuning. But such a sequential transfer learning paradigm often confronts the catastrophic forgetting problem and leads to sub-optimal performance. To fine-tune with less forgetting, we propose a recall and learn mechanism, which adopts the idea of multi-task learning and jointly learns pretraining tasks and downstream tasks. Specifically, we propose a Pretraining Simulation mechanism to recall the knowledge from pretraining tasks without data, and an Objective Shifting mechanism to focus the learning on downstream tasks gradually. Experiments show that our method achieves state-of-the-art performance on the GLUE benchmark. Our method also enables BERT-base to achieve better performance than direct fine-tuning of BERT-large. Further, we provide the open-source RecAdam optimizer, which integrates the proposed mechanisms into the Adam optimizer, to facilitate the NLP community.

Citations (197)

Summary

  • The paper presents a Recall and Learn mechanism that minimizes catastrophic forgetting by simulating pretraining objectives using a quadratic penalty.
  • It introduces objective shifting with an annealing coefficient, balancing retention of pretraining knowledge with learning new tasks.
  • Experiments on GLUE show that BERT-base gains +1.7% on average and ALBERT-xxlarge achieves state-of-the-art performance, especially on limited data.

Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting

The paper "Recall and Learn: Fine-tuning Deep Pretrained LLMs with Less Forgetting" by Chen et al. addresses a significant issue in the domain of NLP: catastrophic forgetting during the fine-tuning of deep pretrained LMs. This problem arises when models tuned for specific downstream tasks lose the knowledge acquired during their pretraining phase, often resulting in suboptimal performance.

Main Contributions

The authors introduce a novel approach to mitigate catastrophic forgetting by combining sequential transfer learning with multi-task learning principles. This methodology is encapsulated in their "Recall and Learn" mechanism, comprising two key components: Pretraining Simulation and Objective Shifting. Together, these mechanisms enable a model to simultaneously recall pretraining knowledge and adapt to new tasks, thus reducing the extent of forgetting.

  1. Pretraining Simulation: This technique approximates the pretraining objective with a quadratic penalty that keeps the fine-tuned parameters close to their pretrained values, allowing the model to recall pretraining knowledge without access to the pretraining data. Because the Fisher information matrix cannot be estimated without that data, it is replaced with a computationally tractable uniform approximation.
  2. Objective Shifting: An annealing coefficient dynamically balances the focus between retaining pretraining knowledge and learning the new task, so that optimization gradually shifts toward the downstream objective (see the formulation sketched below).
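
Concretely, the two mechanisms combine into a single annealed objective. The following is a paraphrase of the paper's formulation; θ* denotes the pretrained weights, γ, k, and t₀ are hyperparameters, and the exact notation may differ from the paper's:

```latex
% Annealed multi-task objective: "recall" penalty vs. downstream "learn" loss
\mathcal{L}(\theta; t) = \lambda(t)\,\mathcal{L}_{\mathrm{task}}(\theta)
  + \bigl(1 - \lambda(t)\bigr)\,\frac{\gamma}{2} \sum_i \bigl(\theta_i - \theta_i^{*}\bigr)^{2},
\qquad
\lambda(t) = \frac{1}{1 + \exp\bigl(-k\,(t - t_0)\bigr)}
```

Early in training λ(t) ≈ 0, so updates mostly pull parameters back toward θ* (recall); as t grows, λ(t) → 1 and the downstream loss dominates (learn).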

Additionally, the paper introduces the Recall Adam (RecAdam) optimizer, which integrates both mechanisms into the Adam optimizer. In the spirit of decoupled weight decay, RecAdam applies the quadratic penalty and the annealing coefficient directly to the parameter update rather than folding them into the adapted gradients, which supports more effective fine-tuning.
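
To make the decoupling concrete, here is a minimal, illustrative PyTorch sketch of a RecAdam-style update for a single parameter tensor. It is not the released optimizer; the function name and default hyperparameter values are chosen for this example only:

```python
import math

def recadam_style_step(param, pretrained_param, exp_avg, exp_avg_sq, step,
                       lr=1e-5, betas=(0.9, 0.999), eps=1e-8,
                       gamma=5000.0, k=0.5, t0=250):
    """Illustrative RecAdam-style update for one tensor (not the released code).

    Adam's moments are estimated from the task gradient alone; the quadratic
    pull toward the pretrained weights is applied as a separate, decoupled
    term, so it never passes through the adaptive denominator.
    """
    beta1, beta2 = betas
    grad = param.grad

    # Objective Shifting: lambda(t) anneals from ~0 (recall pretraining)
    # to ~1 (learn the downstream task) as training steps accumulate.
    lambda_t = 1.0 / (1.0 + math.exp(-k * (step - t0)))

    # Standard bias-corrected Adam moments on the task gradient.
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    denom = (exp_avg_sq / (1 - beta2 ** step)).sqrt().add_(eps)
    adapted = exp_avg / (1 - beta1 ** step) / denom

    # Decoupled update: annealed adapted gradient plus the Pretraining
    # Simulation penalty gradient, gamma * (theta - theta_pretrained).
    drift = param.data - pretrained_param
    param.data.add_(adapted, alpha=-lr * lambda_t)
    param.data.add_(drift, alpha=-lr * (1 - lambda_t) * gamma)
```

The key design choice mirrored here is that the penalty term is added to the parameter update directly, analogous to decoupled weight decay in AdamW, except that it decays toward the pretrained weights rather than toward zero.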

Empirical Results

The experiments, conducted using BERT-base and ALBERT-xxlarge models, demonstrate the efficacy of the proposed approach on the General Language Understanding Evaluation (GLUE) benchmark. Key findings include:

  • Significant performance improvements on 7 out of 8 GLUE tasks with BERT-base, particularly on datasets with limited labeled data, with an average gain of +1.7%.
  • With RecAdam, BERT-base matched, and on several tasks exceeded, the performance of directly fine-tuned BERT-large despite having far fewer parameters.
  • With ALBERT-xxlarge, the approach attained state-of-the-art results, improving over standard fine-tuning by +1.5% on average, with the largest gains again on tasks with smaller training sets.

Theoretical and Practical Implications

The proposed method holds substantial theoretical and practical implications for NLP:

  • Theory: Integrating multi-task learning principles into fine-tuning enriches the potential for models to generalize without sacrificing previously learned knowledge, and encourages further exploration of training paradigms where multi-task learning alleviates forgetting in sequential learning settings.
  • Practice: Plugging the RecAdam optimizer into existing pretrained language models improves performance on downstream tasks with limited labeled data and makes more efficient use of pretrained resources, which matters for real-world NLP deployments where labeled data is scarce (a toy integration sketch follows this list).
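
As a rough illustration of what such integration looks like, the step function sketched earlier can be driven per parameter by an ordinary training loop. Everything below (the model, data, and step count) is a placeholder, not the paper's experimental setup:

```python
import torch

# Hypothetical fine-tuning loop around recadam_style_step (sketched earlier).
model = torch.nn.Linear(8, 2)  # stand-in for a pretrained language model
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}
moments = {n: (torch.zeros_like(p), torch.zeros_like(p))
           for n, p in model.named_parameters()}

for step in range(1, 101):
    x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    model.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        m, v = moments[name]
        recadam_style_step(p, pretrained[name], m, v, step)
```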

Future Directions

Future research could refine the annealing schedule and the quadratic-penalty approximation to further enhance adaptation in language modeling tasks. Moreover, exploring similar paradigms across other domains and architectures, beyond NLP, could yield broader insight into managing catastrophic forgetting in artificial intelligence applications.

This work is a notable contribution towards optimizing the learning process of pretrained LLMs, striking a balance between retaining prior knowledge and adapting to new information, which is crucial for advancing performance in complex NLP tasks.
