Energy-Efficient Runtime Adaptable L1 STT-RAM Cache Design (1904.09363v1)

Published 19 Apr 2019 in cs.AR, cs.DC, and cs.ET

Abstract: Much research has shown that applications have variable runtime cache requirements. In the context of the increasingly popular Spin-Transfer Torque RAM (STT-RAM) cache, the retention time, which defines how long the cache can retain a cache block in the absence of power, is one of the most important cache requirements that may vary for different applications. In this paper, we propose a Logically Adaptable Retention Time STT-RAM (LARS) cache that allows the retention time to be dynamically adapted to applications' runtime requirements. The LARS cache comprises multiple STT-RAM units with different retention times, with only one unit in use at a given time. LARS dynamically determines which STT-RAM unit to use during runtime, based on the executing application's needs. As an integral part of LARS, we also explore different algorithms to dynamically determine the best retention time under different cache design tradeoffs. Our experiments show that by adapting the retention time to different applications' requirements, the LARS cache can reduce average cache energy by 25.31% compared to prior work, with minimal overheads.
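To make the selection mechanism concrete, below is a minimal illustrative sketch of how a LARS-style retention-time selector might work. This is not the paper's implementation: the specific retention times, the epoch-based sampling, the `coverage_target` parameter, and the percentile heuristic are all assumptions introduced here for illustration. The abstract only specifies that one of several fixed-retention STT-RAM units is chosen at runtime based on the executing application's needs.

```python
# Illustrative sketch of LARS-style retention-time selection (assumed logic,
# not the paper's code). We assume four logical STT-RAM units with fixed
# retention times, only one active at a time. Each epoch, the selector samples
# cache-block lifetimes and switches to the lowest-retention (and hence
# lowest-write-energy) unit that still covers most observed lifetimes.

RETENTION_TIMES_US = [10, 100, 1000, 10000]  # hypothetical per-unit retention times

class LarsSelector:
    def __init__(self, coverage_target=0.95):
        self.coverage_target = coverage_target  # fraction of lifetimes to cover (assumed)
        self.active_unit = len(RETENTION_TIMES_US) - 1  # start on the longest retention
        self.lifetimes_us = []  # block lifetimes sampled during the current epoch

    def record_block_lifetime(self, lifetime_us):
        """Log how long a block lived, e.g. on eviction or rewrite."""
        self.lifetimes_us.append(lifetime_us)

    def end_epoch(self):
        """Pick the cheapest unit whose retention covers the target percentile."""
        if self.lifetimes_us:
            self.lifetimes_us.sort()
            idx = int(self.coverage_target * (len(self.lifetimes_us) - 1))
            needed = self.lifetimes_us[idx]  # lifetime at the coverage percentile
            for unit, retention in enumerate(RETENTION_TIMES_US):
                if retention >= needed:
                    self.active_unit = unit
                    break
            else:
                # No unit covers the observed lifetimes; fall back to the longest.
                self.active_unit = len(RETENTION_TIMES_US) - 1
            self.lifetimes_us.clear()
        return self.active_unit
```

Under this sketch, any block that outlives the chosen retention time would presumably have to be refreshed or written back before expiring, which is the kind of energy-versus-reliability tradeoff the paper's selection algorithms are described as weighing.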

Citations (26)