Wireless Multihop Device-to-Device Caching Networks (1511.02574v1)

Published 9 Nov 2015 in cs.IT and math.IT

Abstract: We consider a wireless device-to-device (D2D) network where $n$ nodes are uniformly distributed at random over the network area. We let each node with storage capacity $M$ cache files from a library of size $m \geq M$. Each node in the network requests a file from the library independently at random, according to a popularity distribution, and is served by other nodes having the requested file in their local cache via (possibly) multihop transmissions. Under the classical "protocol model" of wireless networks, we characterize the optimal per-node capacity scaling law for a broad class of heavy-tailed popularity distributions including Zipf distributions with exponent less than one. In the parameter regimes of interest, we show that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of $\Theta(\sqrt{M/m})$, which is constant with $n$, thus yielding throughput scalability with the network size. Furthermore, the multihop capacity scaling can be significantly better than for the case of single-hop caching networks, for which the per-node capacity is $\Theta(M/m)$. The multihop capacity scaling law can be further improved for a Zipf distribution with exponent larger than some threshold $> 1$, by using a decentralized random caching uniformly across a subset of most popular files in the library. Namely, ignoring a subset of less popular files (i.e., effectively reducing the size of the library) can significantly improve the throughput scaling while guaranteeing that all nodes will be served with high probability as $n$ increases.

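The decentralized random caching strategies described in the abstract are simple to state: each node independently fills its cache of size $M$ uniformly at random, either over the whole $m$-file library or, for steep Zipf popularity, over only the most popular subset. The sketch below is not from the paper; it is a minimal Python illustration of those two placement rules plus a Zipf request model, and all function names and parameter values are assumptions chosen for the example.

```python
import random

def uniform_random_cache(m, M, rng=random):
    """Decentralized placement: a node independently caches M distinct files
    chosen uniformly at random from the full library of m files."""
    return set(rng.sample(range(m), M))

def truncated_random_cache(m, M, m_tilde, rng=random):
    """Variant for Zipf exponent above the threshold: cache uniformly at random
    over only the m_tilde most popular files, ignoring the rest of the library."""
    assert M <= m_tilde <= m
    return set(rng.sample(range(m_tilde), M))

def zipf_request(m, alpha, rng=random):
    """Draw one file request (index 0 = most popular) from a Zipf popularity
    distribution with exponent alpha over an m-file library."""
    weights = [1.0 / (i + 1) ** alpha for i in range(m)]
    return rng.choices(range(m), weights=weights, k=1)[0]

if __name__ == "__main__":
    # Illustrative parameters only: n nodes, library size m, per-node cache M.
    n, m, M = 1000, 200, 10
    caches = [uniform_random_cache(m, M) for _ in range(n)]
    requests = [zipf_request(m, alpha=0.6) for _ in range(n)]
    # Fraction of requests for which some node in the network holds the file,
    # i.e. the request can in principle be served via (multihop) D2D delivery.
    served = sum(any(r in cache for cache in caches) for r in requests) / n
    print(f"fraction of requests cached somewhere in the network: {served:.3f}")
```

With many nodes and uniform placement, almost every requested file is cached by some node with high probability, which is the regime in which the paper's $\Theta(\sqrt{M/m})$ multihop per-node capacity scaling applies; the routing and scheduling analysis under the protocol model is not captured by this sketch.
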
Citations (83)
