Abstract

In low-latency or mobile applications, lower computational complexity, a smaller memory footprint, and better energy efficiency are desired. Many prior works address this need by removing redundant parameters. Parameter quantization replaces floating-point arithmetic with lower-precision fixed-point arithmetic, further reducing complexity. Training of quantized-weight neural networks typically starts from fully quantized weights, but quantization introduces random noise. To compensate for this noise during training, we propose quantizing some weights while keeping the others in floating-point precision. Since a deep neural network has many layers, we arrive at a fully quantized-weight network by starting from a single quantized layer and then quantizing more and more layers. We show that the order in which layers are quantized affects accuracy, and because the number of possible orders is large for deep networks, we propose a sensitivity pre-training to guide the layer quantization order. Recent work on weight binarization replaces weight-input matrix multiplications with additions; we apply the proposed iterative training to weight binarization. Our experiments cover fully connected and convolutional networks on the MNIST, CIFAR-10, and ImageNet datasets. We show empirically that, starting from partially binary weights instead of fully binary ones, training reaches fully binary-weight networks with better accuracy for larger and deeper networks. Binarizing layers in the forward order yields better accuracy, and sensitivity-guided layer binarization can improve it further. These improvements come at the cost of longer training time.
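The iterative scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the `binarize` sign rule, the `order` schedule, and the elided per-step retraining hook are our assumptions about the general shape of layer-by-layer binarization.

```python
def binarize(weights):
    """Map each weight to +1.0 or -1.0 by sign (zero mapped to +1.0)."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

def iterative_binarization(layers, order, retrain=None):
    """Binarize one layer at a time, following the given layer order.

    `layers` is a list of per-layer weight lists. After each layer is
    binarized, `retrain` (if given) is applied to the layers that are
    still in floating point, standing in for the retraining step that
    compensates for the newly introduced quantization noise.
    """
    layers = [list(layer) for layer in layers]
    binarized = set()
    for idx in order:
        layers[idx] = binarize(layers[idx])
        binarized.add(idx)
        if retrain is not None:
            # Retrain only the layers that remain in floating point.
            for j in range(len(layers)):
                if j not in binarized:
                    layers[j] = retrain(layers[j])
    return layers
```

Binarizing in the forward order, as the abstract recommends, corresponds to `order=range(len(layers))`; a sensitivity pre-training would instead supply a data-driven permutation for `order`.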
