The Throughput-Outage Tradeoff of Wireless One-Hop Caching Networks

(1312.2637)
Published Dec 10, 2013 in cs.IT and math.IT

Abstract

We consider a wireless device-to-device (D2D) network where the nodes have pre-cached information from a library of available files. Nodes request files at random. If the requested file is not in the on-board cache, then it is downloaded from some neighboring node via one-hop "local" communication. An outage event occurs when a requested file is not found in the neighborhood of the requesting node, or when the network admission control policy decides not to serve the request. We characterize the optimal throughput-outage tradeoff in terms of tight scaling laws for various regimes of the system parameters, when both the number of nodes and the number of files in the library grow to infinity. Our analysis is based on the Gupta and Kumar {\em protocol model} for the underlying D2D wireless network, widely used in the literature on capacity scaling laws of wireless networks without caching. Our results show that the combination of D2D spectrum reuse and caching at the user nodes yields a per-user throughput independent of the number of users, for any fixed outage probability in $(0,1)$. This implies that the D2D caching network is "scalable": even as the number of users increases, each user achieves constant throughput. This behavior is very different from the classical Gupta and Kumar result on ad-hoc wireless networks, for which the per-user throughput vanishes as the number of users increases. Furthermore, we show that the user throughput is directly proportional to the ratio of cached information to the whole file library size. Therefore, we can conclude that D2D caching networks can turn "memory" into "bandwidth" (i.e., doubling the on-board cache memory on the user devices yields a 100\% increase in user throughput).
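As a hedged restatement of the scaling claims above, using notation that does not appear in the abstract itself (let $n$ denote the number of nodes, $m$ the number of files in the library, and $M$ the number of files cached per node), the results can be read as a per-user throughput of order $\Theta\!\left(\frac{M}{m}\right)$ for any fixed outage probability in $(0,1)$, independent of $n$. Under this reading, doubling the per-node cache size $M$ doubles the per-user throughput, which is the "memory into bandwidth" effect the abstract describes; the exact regimes of $n$, $m$, and $M$ for which the tight scaling laws hold are characterized in the paper itself.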
