Unsupervised Natural Question Answering with a Small Model (1911.08340v1)
Abstract: The recent (February 2019) demonstration that huge language models such as GPT-2 can memorise the answers to factoid questions raises questions about the extent to which knowledge is embedded directly within these large models. This short paper describes an architecture through which much smaller models can also answer such questions, by making use of 'raw' external knowledge. The contribution of this work is that the methods presented here rely on unsupervised learning techniques, complementing the unsupervised training of the language model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.
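As a rough illustration of the general pattern the abstract describes (retrieve 'raw' external text, then condition a small language model on it rather than relying on knowledge memorised in its weights), the sketch below builds a toy pipeline. It is not the paper's actual architecture: the word-overlap retriever, the prompt format, and the use of Hugging Face's GPT-2 "small" checkpoint are all assumptions made for the example.

```python
# A minimal sketch of "small model + raw external knowledge" question
# answering. Illustrative only; the retrieval heuristic and prompt format
# are assumptions, not the method described in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Toy stand-in for a 'raw' external knowledge store (e.g. text passages).
KNOWLEDGE = [
    "Paris is the capital and most populous city of France.",
    "The Amazon is the largest river by discharge volume in the world.",
    "Marie Curie was the first person to win two Nobel Prizes.",
]

def retrieve(question: str) -> str:
    """Pick the passage with the most word overlap with the question,
    a crude unsupervised retrieval heuristic assumed for illustration."""
    q_words = set(question.lower().split())
    return max(KNOWLEDGE, key=lambda p: len(q_words & set(p.lower().split())))

def answer(question: str) -> str:
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # 124M-parameter model
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()
    passage = retrieve(question)
    # Condition the small LM on retrieved text instead of relying on
    # facts memorised in its parameters.
    prompt = f"Context: {passage}\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=8,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens, up to the first line break.
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
    return completion.strip().split("\n")[0]

if __name__ == "__main__":
    print(answer("What is the capital of France?"))
```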