Jointly Learning Clinical Entities and Relations with Contextual Language Models and Explicit Context (2102.11031v1)
Abstract: We hypothesize that explicit integration of contextual information into a Multi-task Learning framework would emphasize the significance of context for boosting performance in jointly learning Named Entity Recognition (NER) and Relation Extraction (RE). Our work supports this hypothesis by segmenting entities from their surrounding context and building contextual representations from each independent segment. This relation representation allows for a joint NER/RE system that achieves near state-of-the-art (SOTA) performance on both NER and RE tasks while surpassing the SOTA RE system at end-to-end NER & RE with an F1 of 49.07.
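The abstract describes forming a relation representation by encoding the entity spans and their surrounding context as independent segments. The sketch below illustrates one way such a segment-based representation could be assembled; it is not the authors' implementation, and the choice of encoder (bert-base-uncased), mean pooling, the five-segment layout, and the example offsets are all illustrative assumptions.

```python
# A minimal sketch of a segment-based relation representation, assuming a
# BERT-style encoder and mean pooling over each segment (illustrative only).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_segment(text: str) -> torch.Tensor:
    """Encode one segment independently and mean-pool its token embeddings."""
    if not text.strip():
        # Empty segments (e.g., no tokens between adjacent entities) map to zeros.
        return torch.zeros(encoder.config.hidden_size)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)

def relation_representation(sentence: str, e1: tuple, e2: tuple) -> torch.Tensor:
    """Split the sentence around two entity spans (character offsets) and
    concatenate the independently encoded segments into one relation vector."""
    (s1, t1), (s2, t2) = sorted([e1, e2])
    segments = [
        sentence[:s1],      # left context
        sentence[s1:t1],    # first entity
        sentence[t1:s2],    # context between the entities
        sentence[s2:t2],    # second entity
        sentence[t2:],      # right context
    ]
    return torch.cat([encode_segment(seg) for seg in segments])  # (5 * hidden,)

# Hypothetical clinical example: a drug-dosage relation.
sent = "Patient was started on metformin 500 mg twice daily."
drug = (23, 32)   # "metformin"
dose = (33, 39)   # "500 mg"
vec = relation_representation(sent, drug, dose)
print(vec.shape)  # torch.Size([3840]) for bert-base (5 * 768)
```

The concatenated vector would then feed a relation classifier; in a joint setup, the same encoder outputs could also drive the NER tagging head.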