
Performance of spatial Multi-LRU caching under traffic with temporal locality (1606.09206v1)

Published 29 Jun 2016 in cs.NI, cs.IT, cs.PF, and math.IT

Abstract: In this work a novel family of decentralised caching policies for wireless networks is introduced, referred to as spatial multi-LRU. These improve cache-hit probability by exploiting multi-coverage. Two variations are proposed, the multi-LRU-One and -All, which differ in the number of replicas inserted in the covering edge-caches. The evaluation is done under spatial traffic that exhibits temporal locality, with varying content catalogue and dependent demands. The performance metric is hit probability and the policies are compared to (1) the single-LRU and (2) an upper bound for all centralised policies with periodic popularity updates. Numerical results show the multi-LRU policies outperform both comparison policies. The reason is their passive adaptability to popularity changes. Between the -One and -All variation, which one is preferable strongly depends on the available storage space and on traffic characteristics. The performance also depends on the popularity shape.
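As a rough illustration of the policy family described in the abstract, the sketch below shows how the multi-LRU-One and multi-LRU-All variants could be simulated. This is not the paper's reference implementation: the LRUCache class, the covering_caches list, and the choice of which single cache receives the replica in the -One variant (here, simply the first covering cache) are illustrative assumptions; the precise hit- and miss-update rules should be taken from the paper itself.

```python
from collections import OrderedDict


class LRUCache:
    """A single edge-cache with standard LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys kept in recency order, newest last

    def refresh(self, item):
        """Move an already-cached item to the most-recent position."""
        self.store.move_to_end(item)

    def insert(self, item):
        """Insert an item, evicting the least recently used one if full."""
        if item in self.store:
            self.store.move_to_end(item)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry
        self.store[item] = True


def multi_lru_request(covering_caches, item, variant="one"):
    """Serve one request seen by all edge-caches covering the user.

    variant="one": act on a single cache (refresh the cache holding the item
    on a hit, insert one replica on a miss).
    variant="all": act on every relevant cache (refresh all caches holding
    the item on a hit, insert a replica in every covering cache on a miss).
    Returns True on a cache hit, False on a miss.
    """
    if not covering_caches:
        return False  # user not covered by any edge-cache
    holders = [c for c in covering_caches if item in c.store]
    if holders:
        if variant == "one":
            holders[0].refresh(item)
        else:
            for c in holders:
                c.refresh(item)
        return True
    # Miss: the item is fetched from the core network, then cached.
    if variant == "one":
        covering_caches[0].insert(item)  # assumption: first covering cache
    else:
        for c in covering_caches:
            c.insert(item)
    return False
```

In a simulation along these lines, covering_caches would be the stations whose cells contain the request's location, and the hit probability studied in the paper corresponds to the fraction of requests for which such a routine returns True.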

Citations (1)
