Neural Networks Compression for Language Modeling (1708.05963v1)
Abstract: In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks used in language modeling, are characterized by either high space complexity or substantial inference time. This problem is especially crucial for mobile applications, in which constant interaction with a remote server is undesirable. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor train decomposition for LSTM networks in terms of model size and suitability for fast inference.
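The abstract names four compression techniques without detailing them. The sketch below, which is illustrative and not the paper's exact procedure, shows two of them applied to a single dense weight matrix such as an LSTM's stacked gate weights: magnitude pruning and low-rank factorization via truncated SVD. The matrix shape, sparsity level, and rank are hypothetical and chosen only to demonstrate the parameter-count trade-off; the paper's experiments apply such methods to full LSTM language models trained on PTB.

```python
# Minimal sketch, assuming a generic dense weight matrix; shapes, sparsity,
# and rank below are hypothetical illustrations, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((650, 2600))  # e.g., an LSTM layer's stacked gate weights

# 1) Magnitude pruning: zero out weights whose absolute value falls below a
#    threshold chosen to reach a target sparsity (here 90%).
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print("nonzero fraction after pruning:", np.count_nonzero(W_pruned) / W_pruned.size)

# 2) Low-rank factorization: replace W by a product of two thin matrices
#    obtained from a truncated SVD, trading accuracy for fewer parameters.
rank = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # shape (650, rank)
B = Vt[:rank, :]             # shape (rank, 2600)
compression = (A.size + B.size) / W.size
print("parameter ratio after rank-%d factorization: %.3f" % (rank, compression))
print("relative reconstruction error:",
      np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

In practice, both techniques are followed by fine-tuning to recover accuracy; quantization and tensor train decomposition, also compared in the paper, reduce storage along different axes (bit width and tensor structure, respectively).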