
Distribution-Aware Binarization of Neural Networks for Sketch Recognition (1804.02941v1)

Published 9 Apr 2018 in cs.CV

Abstract: Deep neural networks are highly effective at a range of computational tasks. However, they tend to be computationally expensive, especially in vision-related problems, and also have large memory requirements. One of the most effective methods to achieve significant improvements in computational/spatial efficiency is to binarize the weights and activations in a network. However, naive binarization results in accuracy drops when applied to networks for most tasks. In this work, we present a highly generalized, distribution-aware approach to binarizing deep networks that allows us to retain the advantages of a binarized network, while reducing accuracy drops. We also develop efficient implementations for our proposed approach across different architectures. We present a theoretical analysis of the technique to show the effective representational power of the resulting layers, and explore the forms of data they model best. Experiments on popular datasets show that our technique offers better accuracies than naive binarization, while retaining the same benefits that binarization provides - with respect to run-time compression, reduction of computational costs, and power consumption.
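The abstract contrasts naive binarization with a distribution-aware variant. As a minimal illustrative sketch (not the paper's exact method), naive binarization keeps only the sign of each weight, while a distribution-aware scheme pairs the sign matrix with a scale derived from the weight distribution; the XNOR-Net-style closed form `alpha = mean(|W|)` is assumed here purely for illustration:

```python
import numpy as np

def binarize_naive(w):
    # Naive binarization: keep only the sign of each weight.
    return np.sign(w)

def binarize_scaled(w):
    # Distribution-aware sketch: pair the sign matrix with a scalar
    # alpha minimizing ||w - alpha * sign(w)||_F^2, whose closed form
    # is alpha = mean(|w|). (Assumed for illustration; the paper's
    # actual estimator may differ.)
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64))  # toy weight matrix

err_naive = np.linalg.norm(w - binarize_naive(w))
err_scaled = np.linalg.norm(w - binarize_scaled(w))
print(err_scaled < err_naive)  # the scaled variant reconstructs w better
```

Because the binary matrix `sign(w)` can still be stored with one bit per weight and the scale is a single float per layer, the compression and XNOR/popcount compute benefits of binarization are retained while reconstruction error drops.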

Authors (5)
  1. Ameya Prabhu (37 papers)
  2. Vishal Batchu (8 papers)
  3. Sri Aurobindo Munagala (2 papers)
  4. Rohit Gajawada (3 papers)
  5. Anoop Namboodiri (18 papers)
Citations (5)
