
Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units (2106.04935v1)

Published 9 Jun 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Neural Transfer Learning (TL) is becoming ubiquitous in NLP, thanks to its high performance on many tasks, especially in low-resource scenarios. Notably, TL is widely used for neural domain adaptation to transfer valuable knowledge from high-resource to low-resource domains. In the standard fine-tuning scheme of TL, a model is initially pre-trained on a source domain and subsequently fine-tuned on a target domain; source and target domains are therefore trained with the same architecture. In this paper, we show through interpretation methods that such a scheme, despite its efficiency, suffers from a key limitation: although capable of adapting to new domains, pre-trained neurons struggle to learn certain patterns that are specific to the target domain. Moreover, we shed light on the hidden negative transfer that occurs despite the high relatedness between source and target domains, which may reduce the final gain brought by transfer learning. To address these problems, we propose to augment the pre-trained model with normalised, weighted and randomly initialised units that foster better adaptation while maintaining the valuable source knowledge. We show that our approach yields significant improvements over the standard fine-tuning scheme for neural domain adaptation from the news domain to the social media domain on four NLP tasks: part-of-speech tagging, chunking, named entity recognition and morphosyntactic tagging.
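
The abstract describes the method only at a high level. The PyTorch sketch below illustrates the general idea under stated assumptions: keep a pre-trained layer, add randomly initialised units alongside it, normalise both branches, and combine them with learnable weights before the task classifier. The layer sizes, the use of LayerNorm, the scalar gating scheme and the AugmentedLayer name are illustrative assumptions, not the paper's exact architecture.

    # Minimal sketch (assumed details): pre-trained units plus randomly
    # initialised units, each normalised and scaled by a learnable weight,
    # concatenated into one representation for the target-domain task.
    import torch
    import torch.nn as nn

    class AugmentedLayer(nn.Module):
        def __init__(self, pretrained_layer: nn.Linear, n_random_units: int):
            super().__init__()
            in_dim = pretrained_layer.in_features
            self.pretrained = pretrained_layer                 # layer carried over from the source model
            self.random = nn.Linear(in_dim, n_random_units)    # randomly initialised, target-only units
            self.norm_p = nn.LayerNorm(pretrained_layer.out_features)
            self.norm_r = nn.LayerNorm(n_random_units)
            # learnable scalar weights balancing source knowledge vs. new units
            self.w_p = nn.Parameter(torch.ones(1))
            self.w_r = nn.Parameter(torch.ones(1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h_p = self.w_p * self.norm_p(self.pretrained(x))
            h_r = self.w_r * self.norm_r(self.random(x))
            # concatenated representation feeds the task-specific classifier
            return torch.cat([h_p, h_r], dim=-1)

    # Usage: wrap a layer of the source-domain model before fine-tuning on the target domain.
    source_layer = nn.Linear(768, 256)              # stands in for a pre-trained layer
    layer = AugmentedLayer(source_layer, n_random_units=64)
    out = layer(torch.randn(4, 10, 768))            # shape: (batch, seq_len, 256 + 64)

During target-domain fine-tuning, both branches are trained jointly, so the random units are free to capture target-specific patterns while the pre-trained units retain source-domain knowledge.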

Authors (5)
  1. Sara Meftah (3 papers)
  2. Nasredine Semmar (6 papers)
  3. Youssef Tamaazousti (7 papers)
  4. Hassane Essafi (2 papers)
  5. Fatiha Sadat (5 papers)
Citations (3)
