
Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals (2210.11805v1)

Published 21 Oct 2022 in cs.CL

Abstract: For text classification tasks, finetuned LLMs perform remarkably well. Yet, they tend to rely on spurious patterns in training data, thus limiting their performance on out-of-distribution (OOD) test data. Among recent models aiming to avoid this spurious pattern problem, adding extra counterfactual samples to the training data has proven to be very effective. Yet, counterfactual data generation is costly since it relies on human annotation. Thus, we propose a novel solution that only requires annotation of a small fraction (e.g., 1%) of the original training data, and uses automatic generation of extra counterfactuals in an encoding vector space. We demonstrate the effectiveness of our approach in sentiment classification, using IMDb data for training and other sets for OOD tests (i.e., Amazon, SemEval and Yelp). We achieve noticeable accuracy improvements by adding only 1% manual counterfactuals: +3% compared to adding +100% in-distribution training samples, +1.3% compared to alternate counterfactual approaches.
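The abstract's core idea, generating extra counterfactuals automatically in an encoding vector space from only a handful of manually annotated pairs, can be illustrated with a minimal sketch. The function names and the simple average-displacement scheme below are assumptions for illustration only, not the paper's actual method:

```python
import numpy as np

def edit_direction(orig_enc, cf_enc):
    """Average displacement from original encodings to their
    manually written counterfactual encodings (the ~1% annotated pairs)."""
    return (cf_enc - orig_enc).mean(axis=0)

def synthesize_counterfactuals(encodings, direction):
    """Shift unannotated encodings along the learned edit direction
    to obtain synthetic counterfactual training vectors."""
    return encodings + direction

rng = np.random.default_rng(0)
dim = 8

# Toy data: a few annotated (original, counterfactual) encoding pairs,
# standing in for sentence encodings of reviews and their label flips.
orig = rng.normal(size=(5, dim))
cf = orig + np.ones(dim)  # toy counterfactuals: uniform shift of +1

d = edit_direction(orig, cf)

# Apply the direction to the remaining, unannotated encodings.
unannotated = rng.normal(size=(100, dim))
synthetic_cf = synthesize_counterfactuals(unannotated, d)
```

Under this toy setup the learned direction recovers the +1 shift exactly; with real sentence encodings the displacement would only approximate a sentiment flip, which is why the paper still anchors it in a small set of human-written counterfactuals.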

Authors (4)
  1. Maarten De Raedt (4 papers)
  2. Fréderic Godin (23 papers)
  3. Chris Develder (59 papers)
  4. Thomas Demeester (76 papers)
Citations (1)
