A Technique for Write-endurance aware Management of Resistive RAM Last Level Caches (1311.0041v2)

Published 31 Oct 2013 in cs.AR

Abstract: Due to increasing cache sizes and the large leakage power of SRAM devices, conventional SRAM caches contribute significantly to processor power consumption. Recently, researchers have designed caches with non-volatile memory devices, since they provide high density, comparable read latency, and low leakage power dissipation. However, their high write latency may increase execution time and hence leakage energy consumption. Also, since their write endurance is limited, a conventional energy saving technique may further aggravate the problem of write variation, thus reducing their lifetime. In this paper, we present a cache energy saving technique for non-volatile caches that also attempts to improve their lifetime by distributing writes more evenly across the cache. Our technique uses dynamic cache reconfiguration to adjust the cache size to the program's requirements and turns off the remaining cache to save energy. Microarchitectural simulations using an x86-64 simulator, SPEC2006 benchmarks, and a resistive-RAM LLC (last-level cache) show that, over an 8MB baseline cache, our technique saves 17.55% of memory subsystem (last-level cache + main memory) energy and improves lifetime by 1.33X. Over the same resistive-RAM baseline, an SRAM cache of similar area with no cache reconfiguration leads to an energy loss of 186.13%.
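
To make the idea concrete, the sketch below shows one way a write-endurance aware reconfiguration scheme of this kind could be organized: an interval-based controller that grows or shrinks the set of powered-on cache ways based on the observed miss rate, and that picks which ways stay active using per-way write counts so that writes spread across the cache over time. This is a minimal illustrative sketch under assumed names, thresholds, and interval structure; it does not reproduce the paper's actual algorithm, profiling mechanism, or reconfiguration granularity.

```python
# Illustrative sketch of interval-based, write-endurance aware cache
# reconfiguration for an NVM last-level cache.  Class and parameter names
# (ReconfigurableNVMCache, INTERVAL miss-rate bounds, etc.) are assumptions
# for this example, not values or interfaces from the paper.

class ReconfigurableNVMCache:
    def __init__(self, num_ways=16):
        self.num_ways = num_ways
        self.active_ways = list(range(num_ways))   # ways currently powered on
        self.write_counts = [0] * num_ways         # lifetime writes seen per way

    def record_write(self, way):
        """Called by the cache model on every write that hits an active way."""
        self.write_counts[way] += 1

    def reconfigure(self, miss_rate, low=0.02, high=0.10):
        """At the end of each profiling interval, grow or shrink the active
        portion of the cache based on the observed miss rate, then choose
        which ways stay powered on so heavily written ways get to rest.
        (A real controller would also write back dirty data from any way it
        turns off; that step is omitted here.)"""
        target = len(self.active_ways)
        if miss_rate > high and target < self.num_ways:
            target += 1      # the program needs more cache capacity
        elif miss_rate < low and target > 1:
            target -= 1      # shrink the cache to save leakage energy

        # Wear-aware selection: keep the least-written ways powered on,
        # spreading writes across the cache over the program's lifetime.
        ranked = sorted(range(self.num_ways), key=lambda w: self.write_counts[w])
        self.active_ways = ranked[:target]
        return self.active_ways

if __name__ == "__main__":
    # Example interval: a few writes land on way 3, then the controller
    # reacts to a 12% miss rate by keeping one more way active.
    llc = ReconfigurableNVMCache(num_ways=16)
    for _ in range(5):
        llc.record_write(3)
    print(llc.reconfigure(miss_rate=0.12))
```

Shrinking the active portion saves the leakage of the powered-off ways, while the wear-aware way selection counteracts the write variation that a purely energy-driven policy would otherwise concentrate on the remaining active ways.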
