Linear Delay-cell Design for Low-energy Delay Multiplication and Accumulation

(arXiv:2007.13895)
Published Jul 27, 2020 in cs.ET and eess.SP

Abstract

Evaluating a practical deep neural network (DNN) involves thousands of multiply-and-accumulate (MAC) operations. To extend DNNs' superior inference capabilities to energy-constrained devices, architectures and circuits that minimize energy-per-MAC must be developed. In this respect, analog delay-based MAC is advantageous for reasons both extrinsic and intrinsic to the MAC implementation: (1) a lower fixed-point precision requirement for a DNN's evaluation, (2) better dynamic range than charge-based accumulation at smaller technology nodes, and (3) simpler analog-digital interfacing. Implementing DNNs using delay-based MAC requires mixed-signal delay multipliers that accept digitally stored weights and analog voltages as arguments. To this end, a novel, linearly tunable delay cell is proposed, wherein the delay is realized through the steady discharge of an inverted MOS capacitor (C) from an initial charge that depends linearly on the input voltage. The cell is analytically modeled, constraints for its functional validity are determined, and jitter models are developed. Multiple cells with scaled delays, one per bit of the digital argument, must be cascaded to form the multiplier. To realize this bit-wise delay scaling, a biasing circuit is proposed that generates sub-threshold gate voltages to scale C's discharging rate, thereby avoiding area-expensive transistor width scaling. For a 130 nm CMOS technology, the theoretical constraints and jitter limits are used to find the optimal design point and to quantify the jitter versus bits-per-multiplier trade-off. Schematic-level simulations show a worst-case energy consumption close to the state of the art, demonstrating the feasibility of the cell.
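
The delay-domain MAC described in the abstract can be illustrated with a simple behavioral model: each cascaded cell contributes a delay proportional to the analog input voltage, scaled by the binary weight of the bit it implements, so the total delay of the cascade encodes the product, and summing delays across inputs yields the accumulation. The sketch below is an idealized, first-order model under these assumptions; the function names, the unit delay `t_unit`, and the voltage normalization are illustrative and not taken from the paper, which realizes the bit-wise scaling with sub-threshold gate biasing of the capacitor discharge rather than this arithmetic abstraction.

```python
# Behavioral sketch (NOT the authors' circuit model): an idealized
# delay-based multiplier in which each cascaded cell contributes a delay
# proportional to the analog input voltage, scaled by the weight of the
# digital bit it implements. All names and constants are assumptions.

def delay_multiply(v_in, weight_bits, t_unit=1e-9):
    """Ideal delay of a multiplier built from cascaded delay cells.

    v_in        -- normalized analog input voltage in [0, 1] (assumed)
    weight_bits -- digital weight as a list of bits, MSB first
    t_unit      -- delay of the least-significant cell at v_in = 1 (assumed)
    """
    total_delay = 0.0
    n = len(weight_bits)
    for i, bit in enumerate(weight_bits):
        bit_weight = 2 ** (n - 1 - i)  # bit-wise delay scaling of each cell
        if bit:
            total_delay += bit_weight * v_in * t_unit
    return total_delay

def delay_mac(voltages, weights, t_unit=1e-9):
    """Accumulate products as a sum of delays (delay-domain MAC)."""
    return sum(delay_multiply(v, w, t_unit) for v, w in zip(voltages, weights))

# Example: two inputs with 3-bit weights (MSB first), weights 5 and 3.
if __name__ == "__main__":
    vs = [0.5, 0.25]
    ws = [[1, 0, 1], [0, 1, 1]]
    print(delay_mac(vs, ws))  # (5*0.5 + 3*0.25) * 1 ns = 3.25e-09 s
```

In this idealization the accumulated delay is exactly the dot product of voltages and weights in units of `t_unit`; the paper's analysis concerns how closely the real cell's discharge-based delay tracks this linear relationship and how jitter bounds the usable bits per multiplier.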
