
Model of the Weak Reset Process in HfOx Resistive Memory for Deep Learning Frameworks (2107.06064v2)

Published 2 Jul 2021 in cs.LG and physics.app-ph

Abstract: The implementation of current deep learning training algorithms is power-hungry, owing to data transfer between memory and logic units. Oxide-based RRAMs are outstanding candidates to implement in-memory computing, which is less power-intensive. Their weak RESET regime is particularly attractive for learning, as it allows tuning the resistance of the devices with remarkable endurance. However, the resistive change behavior in this regime suffers from many fluctuations and is particularly challenging to model, especially in a way compatible with tools used for simulating deep learning. In this work, we present a model of the weak RESET process in hafnium oxide RRAM and integrate this model within the PyTorch deep learning framework. Validated against experiments on a hybrid CMOS/RRAM technology, our model reproduces both the noisy progressive behavior and the device-to-device (D2D) variability. We use this tool to train Binarized Neural Networks for the MNIST handwritten digit recognition task and the CIFAR-10 object classification task. We simulate our model with and without various aspects of device imperfections to understand their impact on the training process and identify that the D2D variability is the most detrimental aspect. The framework can be used in the same manner for other types of memories to identify the device imperfections that cause the most degradation, which can, in turn, be used to optimize the devices to reduce the impact of these imperfections.
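
To illustrate how a weak-RESET device model of this kind might be expressed in PyTorch, below is a minimal sketch of a conductance array whose devices decay under repeated weak RESET pulses, with cycle-to-cycle noise and device-to-device (D2D) variability. The class name, parameter names, and numerical values (WeakResetArraySketch, decay_mean, d2d_spread, etc.) are hypothetical illustrations, not the authors' calibrated model, which is defined and validated in the paper itself.

```python
import torch

class WeakResetArraySketch:
    """Hypothetical sketch of an RRAM crossbar under weak RESET pulses.

    Each device conductance decreases progressively with every pulse;
    the decay rate varies from device to device (D2D variability) and
    each pulse adds cycle-to-cycle noise. Not the paper's actual model.
    """

    def __init__(self, shape, g_init=100e-6, decay_mean=0.02,
                 d2d_spread=0.3, noise_std=0.05, g_min=1e-6):
        # Initial conductance of every device (siemens, illustrative value)
        self.g = torch.full(shape, g_init)
        # D2D variability: each device draws its own reset rate (lognormal spread)
        self.decay = decay_mean * torch.exp(d2d_spread * torch.randn(shape))
        self.noise_std = noise_std   # cycle-to-cycle fluctuation amplitude
        self.g_min = g_min           # lower bound on conductance

    def weak_reset(self, mask):
        """Apply one weak RESET pulse to the devices selected by a bool mask."""
        # Progressive, state-dependent conductance decrease
        step = self.decay * self.g
        # Pulse-to-pulse (cycle-to-cycle) noise on the step amplitude
        step = step * (1.0 + self.noise_std * torch.randn_like(step))
        self.g = torch.where(mask, (self.g - step).clamp(min=self.g_min), self.g)
        return self.g


# Usage example: pulse a 4x4 array ten times and watch the mean conductance drop
array = WeakResetArraySketch((4, 4))
select_all = torch.ones(4, 4, dtype=torch.bool)
for _ in range(10):
    g = array.weak_reset(select_all)
print(f"mean conductance after 10 pulses: {g.mean().item():.2e} S")
```

In a training simulation along the lines described in the abstract, such an array would back the synaptic weights of a Binarized Neural Network, with weak RESET pulses applied as part of the weight-update step so that device noise and D2D variability directly affect learning.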

Authors (9)
  1. Atreya Majumdar (4 papers)
  2. Marc Bocquet (46 papers)
  3. Tifenn Hirtzlin (21 papers)
  4. Axel Laborieux (12 papers)
  5. Jacques-Olivier Klein (15 papers)
  6. Etienne Nowak (8 papers)
  7. Elisa Vianello (26 papers)
  8. Jean-Michel Portal (15 papers)
  9. Damien Querlioz (62 papers)
Citations (3)
