An exploration of parameter redundancy in deep networks with circulant projections (1502.03436v2)

Published 11 Feb 2015 in cs.CV

Abstract: We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.
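
The complexity claim rests on a standard identity: multiplying by a circulant matrix circ(r) is a circular convolution with its first column r, so circ(r) x = IFFT(FFT(r) ⊙ FFT(x)), which costs O(d log d) and stores only the d-vector r instead of a d×d matrix. Below is a minimal NumPy sketch of that identity; the function name `circulant_project` and the d=8 sanity check are illustrative assumptions, not code from the paper.

```python
import numpy as np

def circulant_project(x, r):
    """Apply circ(r) @ x in O(d log d) via the FFT.

    circ(r) has first column r, i.e. C[i, j] = r[(i - j) mod d],
    so C @ x is the circular convolution of r and x.
    For real r and x the result is real; we drop the ~0 imaginary part.
    """
    return np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)).real

# Sanity check against an explicit dense circulant matrix (hypothetical demo).
d = 8
rng = np.random.default_rng(0)
r = rng.standard_normal(d)  # the only stored parameters: O(d) space
x = rng.standard_normal(d)

# Build circ(r) explicitly: column j is r rolled down by j positions.
C = np.column_stack([np.roll(r, j) for j in range(d)])

assert np.allclose(C @ x, circulant_project(x, r))
```

The same trick extends to the backward pass, since the gradient with respect to r is also a circular correlation and can reuse the FFTs, which is consistent with the abstract's claim that gradient computation is efficient.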

Authors (6)
  1. Yu Cheng (355 papers)
  2. Felix X. Yu (20 papers)
  3. Rogerio S. Feris (9 papers)
  4. Sanjiv Kumar (123 papers)
  5. Alok Choudhary (23 papers)
  6. Shih-Fu Chang (131 papers)
Citations (48)