Understanding by Understanding Not: Modeling Negation in Language Models (2105.03519v1)
Abstract: Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly. To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus. By training BERT with the resulting combined objective we reduce the mean top-1 error rate to 4% on the negated LAMA dataset. We also see some improvements on the negated NLI benchmarks.
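The combined objective pairs the usual likelihood term with an unlikelihood term that penalizes probability mass placed on tokens that should not complete a negated sentence. The following is a minimal sketch of this idea on per-token probabilities; the function names and the mixing weight `alpha` are hypothetical, not taken from the paper.

```python
import math

def likelihood_loss(p_true: float) -> float:
    # Standard language-modeling term: maximize log p of the correct token.
    return -math.log(p_true)

def unlikelihood_loss(p_wrong: float) -> float:
    # Unlikelihood term: push down the probability of a token that should
    # NOT complete a negated sentence, i.e. maximize log(1 - p).
    return -math.log(1.0 - p_wrong)

def combined_loss(p_true: float, p_wrong: float, alpha: float = 1.0) -> float:
    # alpha is a hypothetical weight mixing the two terms.
    return likelihood_loss(p_true) + alpha * unlikelihood_loss(p_wrong)
```

Intuitively, the unlikelihood term vanishes as `p_wrong` approaches 0 and grows without bound as the model becomes confident in the wrong completion, so minimizing the combined loss both rewards the correct token and penalizes the negation-inconsistent one.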