Abstract

Caching is an efficient way to reduce network traffic congestion during peak hours by storing some content at the users' local caches. For the shared-link network with end-user-caches, Maddah-Ali and Niesen proposed a two-phase coded caching strategy. In practice, users may communicate with the server through intermediate relays. This paper studies the tradeoff between the memory size $M$ and the network load $R$ for networks where a server with $N$ files is connected to $H$ relays (without caches), which in turn are connected to $K$ users equipped with caches of $M$ files. When each user is connected to a different subset of $r$ relays, i.e., $K = \binom{H}{r}$, the system is referred to as a {\it combination network with end-user-caches}. In this work, converse bounds are derived for the practically motivated case of {\it uncoded} cache contents, that is, bits of the various files are directly pushed into the user caches without any coding. In this case, once the cache contents and the user demands are known, the problem reduces to a general index coding problem. This paper shows that relying on a well-known "acyclic index coding converse bound" results in converse bounds that are not tight for combination networks with end-user-caches. A novel converse bound that leverages the network topology is proposed, which is the tightest converse bound known to date. As a result of independent interest, an inequality that generalizes the well-known sub-modularity of entropy is derived. Several novel caching schemes are proposed, based on the Maddah-Ali and Niesen cache placement. The proposed schemes are proved: (i) to be (order) optimal for some $(N,M,H,r)$ parameter regimes under the constraint of uncoded cache placement, and (ii) to outperform the state-of-the-art schemes in numerical evaluations.
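To make the topology and the baseline tradeoff concrete, below is a minimal Python sketch (not from the paper): it enumerates the $K = \binom{H}{r}$ users of a combination network as the distinct $r$-subsets of relays, and evaluates the classical Maddah-Ali-Niesen (MAN) load $R = \binom{K}{t+1}/\binom{K}{t}$ at integer corner points $t = KM/N$. The function names are illustrative, and the load formula is for the original shared-link setting that the MAN placement was designed for, not for the combination-network schemes proposed in this paper.

```python
from itertools import combinations
from math import comb

def combination_network_users(H, r):
    """Each user is connected to a distinct r-subset of the H relays,
    so there are K = C(H, r) users in total."""
    return list(combinations(range(H), r))

def man_load(N, K, M):
    """Shared-link MAN memory-load tradeoff at an integer corner
    point t = K*M/N: load R = C(K, t+1) / C(K, t) = (K-t)/(t+1)."""
    t = K * M // N  # assumes K*M/N is an integer (a corner point)
    return comb(K, t + 1) / comb(K, t)

# Example: H = 4 relays, r = 2 gives K = C(4, 2) = 6 users.
users = combination_network_users(H=4, r=2)
print(users)       # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(users))  # 6

# MAN corner point with N = 6 files and caches of M = 2 files:
# t = 6*2/6 = 2, so R = C(6, 3)/C(6, 2) = 20/15 = 4/3.
print(man_load(N=6, K=6, M=2))  # 1.333...
```

The enumeration makes explicit why the topology matters: two users sharing $r-1$ relays overlap heavily in the links they can be served over, which is the structure the paper's topology-aware converse bound and delivery schemes exploit.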

