A Survey of Knowledge Enhanced Pre-trained Models (2110.00269v5)
Abstract: Pre-trained language models learn informative word representations on large-scale text corpora through self-supervised learning and, after fine-tuning, achieve promising performance across NLP tasks. These models, however, suffer from poor robustness and a lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). These models demonstrate deep understanding and logical reasoning capabilities and introduce a degree of interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss the advancements in pre-trained language models and knowledge representation learning. We then systematically categorize existing KEPLMs from three different perspectives. Finally, we outline potential directions for future research on KEPLMs.
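To make "knowledge injection" concrete, below is a minimal PyTorch sketch of one common pattern covered by such surveys: projecting knowledge-graph entity embeddings into the encoder's hidden space and fusing them with the token representations of entity-linked tokens (in the spirit of ERNIE-style models). The module, names, and tensor shapes (`EntityFusionLayer`, `alignment`, the 768/100 dimensions) are illustrative assumptions, not an API defined by the paper.

```python
import torch
import torch.nn as nn

class EntityFusionLayer(nn.Module):
    """Illustrative knowledge-injection layer (entity-embedding fusion sketch)."""

    def __init__(self, hidden_dim: int = 768, entity_dim: int = 100):
        super().__init__()
        # Map knowledge-graph embedding space (e.g., TransE vectors) into the LM hidden space.
        self.entity_proj = nn.Linear(entity_dim, hidden_dim)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, token_states, entity_embs, alignment):
        # token_states: (batch, seq_len, hidden_dim) from the pre-trained encoder
        # entity_embs:  (batch, n_ents, entity_dim) pre-computed KG embeddings
        # alignment:    (batch, seq_len) index into entity_embs; -1 means no linked entity
        projected = self.entity_proj(entity_embs)          # (batch, n_ents, hidden_dim)
        mask = (alignment >= 0).unsqueeze(-1).float()      # 1.0 where a token is entity-linked
        safe_idx = alignment.clamp(min=0)                  # replace -1 with a valid dummy index
        gathered = torch.gather(
            projected, 1,
            safe_idx.unsqueeze(-1).expand(-1, -1, projected.size(-1)),
        )                                                  # (batch, seq_len, hidden_dim)
        # Add entity knowledge only at linked positions, then renormalize.
        return self.norm(token_states + mask * gathered)

# Toy usage with random tensors in place of real encoder outputs and KG embeddings.
layer = EntityFusionLayer()
tokens = torch.randn(2, 8, 768)
entities = torch.randn(2, 3, 100)
align = torch.tensor([[0, -1, -1, 1, -1, 2, -1, -1]] * 2)
print(layer(tokens, entities, align).shape)  # torch.Size([2, 8, 768])
```

This is only one of the injection strategies such a survey categorizes; others modify the pre-training objectives or retrieve knowledge at inference time rather than fusing embeddings.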