Abstract

Caching systems have long been crucial for improving the performance of a wide variety of network and web-based applications. In such systems, end-to-end application performance heavily depends on the fraction of objects transferred from the cache, also known as the cache hit probability. Many caching policies have been proposed and implemented to improve the hit probability. In this work, we propose a new method to compute an upper bound on the hit probability for all non-anticipative caching policies, i.e., policies that have no knowledge of future requests. Our key insight is to order the objects according to the ratio of their Hazard Rate (HR) function values to their sizes and to place in the cache the objects with the largest ratios until the cache capacity is exhausted. Under some statistical assumptions, we prove that our proposed HR-to-size-ratio ordering computes the maximum achievable hit probability and thus serves as an upper bound for all non-anticipative caching policies. We derive closed-form expressions for the upper bound under some specific object request arrival processes. We also provide simulation results to validate its correctness and to compare it against state-of-the-art upper bounds, finding it to be tighter for a variety of object request arrival processes.
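To make the ordering concrete, the following is a minimal sketch (not the authors' code) of the HR-to-size-ratio bound in the special case of Poisson request arrivals, where each object's hazard rate is constant and equal to its request rate, so the ranking reduces to rate/size. The function name, rates, and sizes are illustrative assumptions; the general bound would re-evaluate each object's hazard rate at every request instant.

```python
def hr_upper_bound_poisson(rates, sizes, capacity):
    """Sketch of the HR-to-size-ratio upper bound under Poisson arrivals.

    With Poisson arrivals the hazard rate of object i is its constant
    request rate, so the HR/size ranking is static: fill the cache with
    the objects of largest rate/size ratio until capacity is exhausted,
    and the long-run hit probability is the fraction of the total
    request rate that targets cached objects.
    """
    order = sorted(range(len(rates)),
                   key=lambda i: rates[i] / sizes[i], reverse=True)
    cached, used = set(), 0
    for i in order:
        if used + sizes[i] <= capacity:
            cached.add(i)
            used += sizes[i]
    return sum(rates[i] for i in cached) / sum(rates)


# Illustrative example: Zipf-like popularities, unit-size objects,
# and a cache that holds 10 of 100 objects.
n = 100
rates = [1.0 / (k + 1) for k in range(n)]
sizes = [1] * n
print(round(hr_upper_bound_poisson(rates, sizes, capacity=10), 4))
```

For non-Poisson (e.g., renewal) request processes, the same greedy rule would be applied at each request time using the current hazard-rate values, which is what makes the bound tighter than static popularity-based bounds.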
