Scalable Syntax-Aware Language Models Using Knowledge Distillation (1906.06438v1)

Published 14 Jun 2019 in cs.CL and cs.LG

Abstract: Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models. However, their computational complexity renders scaling difficult, and it remains an open question whether structural biases are still necessary when sequential models have access to ever larger amounts of training data. To answer this question, we introduce an efficient knowledge distillation (KD) technique that transfers knowledge from a syntactic language model trained on a small corpus to an LSTM language model, hence enabling the LSTM to develop a more structurally sensitive representation of the larger training data it learns from. On targeted syntactic evaluations, we find that, while sequential LSTMs perform much better than previously reported, our proposed technique substantially improves on this baseline, yielding a new state of the art. Our findings and analysis affirm the importance of structural biases, even in models that learn from large amounts of data.
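The distillation objective sketched in the abstract can be illustrated with a word-level KD loss: the student's cross-entropy against the syntactic teacher's soft next-word distribution, interpolated with the usual cross-entropy against the gold token. This is a minimal NumPy sketch under assumed conventions; the function name, shapes, and the interpolation weight `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def distillation_loss(student_logits, teacher_probs, gold_ids, alpha=0.5):
    """Per-token interpolated loss, averaged over positions:
    alpha * CE(teacher soft targets, student)
    + (1 - alpha) * CE(gold hard targets, student).

    student_logits: (T, V) student next-word logits
    teacher_probs:  (T, V) teacher next-word distributions (soft targets)
    gold_ids:       (T,)   gold next-word indices (hard targets)
    """
    logp = log_softmax(student_logits)                     # (T, V)
    ce_teacher = -(teacher_probs * logp).sum(axis=-1)      # soft-target CE
    ce_gold = -logp[np.arange(len(gold_ids)), gold_ids]    # hard-target CE
    return float((alpha * ce_teacher + (1 - alpha) * ce_gold).mean())
```

With `alpha = 0` this reduces to ordinary language-model training; with `alpha = 1` the student learns purely from the teacher's distributions, so the weight controls how strongly the syntactic teacher's structural biases are imposed on the sequential student.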

Citations (26)
