
Abstract

Memristive devices hold promise to improve the scale and efficiency of machine learning and neuromorphic hardware, thanks to their compact size, low power consumption, and ability to perform matrix multiplications in constant time. However, on-chip training with memristor arrays still faces challenges, including device-to-device and cycle-to-cycle variations, switching non-linearity, and especially SET and RESET asymmetry. To combat device non-linearity and asymmetry, we propose programming memristors with neural networks that map desired conductance updates to the required pulse times. With our method, approximately 95% of devices can be programmed within a relative percentage difference of ±50% of the target conductance after just one attempt. Our approach substantially reduces memristor programming delays compared with traditional write-and-verify methods, making it well suited to on-chip training scenarios. Furthermore, the proposed neural network can itself be accelerated by memristor arrays upon deployment, assisting programming while reducing hardware overhead compared with previous work. This work contributes significantly to the practical application of memristors, particularly in reducing memristor programming delays, and points toward the future development of memristor-based machine learning accelerators.
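
As a rough illustration of the idea described above, the sketch below shows a small neural network that takes a device's current conductance and the desired conductance update and predicts the programming pulse time. The architecture, input features, units, and values are assumptions made for illustration; they do not reproduce the paper's actual network, training procedure, or device model.

```python
import torch
import torch.nn as nn

class PulseTimePredictor(nn.Module):
    """Hypothetical MLP mapping (current conductance, desired update) -> pulse time."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),    # inputs: [current conductance, desired delta]
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),    # output: required pulse time (illustrative units)
        )

    def forward(self, g_now: torch.Tensor, delta_g: torch.Tensor) -> torch.Tensor:
        x = torch.stack([g_now, delta_g], dim=-1)
        return self.net(x).squeeze(-1)

# One-shot programming step (no write-and-verify loop): the predicted pulse
# time would drive the SET/RESET pulse generator for the target device.
model = PulseTimePredictor()
g_now = torch.tensor([50.0])     # current conductance, e.g. in µS (illustrative)
g_target = torch.tensor([80.0])  # target conductance, e.g. in µS (illustrative)
pulse_time = model(g_now, g_target - g_now)
```

In practice such a predictor would be trained on measured device response data so that a single predicted pulse lands the conductance close to the target, which is what allows the reported one-attempt programming accuracy.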
