A Cache Energy Optimization Technique for STT-RAM Last Level Cache (1312.2207v2)

Published 8 Dec 2013 in cs.AR

Abstract: Last level caches (LLCs) occupy a large chip area, and their size is expected to grow further to offset the limitations of memory bandwidth and speed. Due to the high leakage power of SRAM devices, caches designed with SRAM consume a large amount of energy. To address this, the use of emerging technologies such as spin torque transfer RAM (STT-RAM), which have lower leakage power dissipation, has been investigated. However, the high write latency and write energy of STT-RAM may lead to large energy consumption, which presents challenges for its use. In this report, we propose a cache reconfiguration based technique for improving the energy efficiency of STT-RAM based LLCs. Our technique dynamically adjusts the active cache size to reduce the cache leakage energy consumption with minimal performance loss. We choose a suitable value of STT-RAM retention time to avoid refresh overhead and gain performance. Single-core simulations have been performed using SPEC2006 benchmarks and the Sniper x86-64 simulator. The results show that, compared to an STT-RAM LLC of similar area, an SRAM LLC incurs nearly 100% higher energy consumption and a 7.3% loss in performance, whereas our technique using an STT-RAM cache saves 21.8% energy and incurs only a 1.7% loss in performance.
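The central idea of the abstract, dynamically shrinking the active portion of the LLC so that leakage energy drops while performance stays within a bound, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it assumes a hypothetical epoch-based controller, way-granularity reconfiguration, and made-up energy constants, and simply picks the active-way count that minimizes an estimated epoch energy subject to a cap on the allowed miss-rate increase.

```python
# Illustrative sketch only (not the paper's exact technique): an epoch-based
# controller that decides how many ways of an STT-RAM LLC to keep powered,
# trading leakage savings against the extra energy of additional misses.
# All constants and the miss-rate model below are hypothetical placeholders.

EPOCH_CYCLES     = 5_000_000   # assumed reconfiguration interval (cycles)
LEAKAGE_PER_WAY  = 500_000.0   # relative leakage energy of one active way per epoch
MISS_PENALTY_E   = 50.0        # relative energy cost of servicing one extra LLC miss
MAX_EXTRA_MISSES = 0.02        # tolerated miss-rate increase (performance guard)

def choose_active_ways(total_ways, miss_rate_per_ways, accesses):
    """Return the active-way count with the lowest estimated epoch energy.

    miss_rate_per_ways[w] is the profiled or predicted LLC miss rate when
    only w ways are enabled; index 0 is unused.
    """
    baseline_misses = miss_rate_per_ways[total_ways] * accesses
    best_ways, best_energy = total_ways, float("inf")
    for w in range(1, total_ways + 1):
        misses = miss_rate_per_ways[w] * accesses
        # Skip configurations that would hurt performance beyond the guard band.
        if (misses - baseline_misses) / max(accesses, 1) > MAX_EXTRA_MISSES:
            continue
        leakage = w * LEAKAGE_PER_WAY          # leakage scales with active area
        miss_energy = misses * MISS_PENALTY_E  # energy of extra off-chip accesses
        energy = leakage + miss_energy
        if energy < best_energy:
            best_ways, best_energy = w, energy
    return best_ways

# Example: a 16-way LLC whose miss rate rises slowly as ways are disabled.
profile = [None] + [0.05 + 0.005 * (16 - w) for w in range(1, 17)]
print(choose_active_ways(16, profile, accesses=1_000_000))  # picks fewer than 16 ways
```

In this toy setting the controller powers down several ways because the leakage saved per disabled way outweighs the energy of the extra misses, while the miss-rate guard keeps the performance loss small; the paper's actual mechanism and retention-time selection should be taken from the report itself.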
