Multi-Transmitter Coded Caching with Secure Delivery over Linear Networks -- Extended Version (2211.14672v1)

Published 26 Nov 2022 in cs.IT and math.IT

Abstract: In this paper, we consider multiple cache-enabled end-users connected to multiple transmitters through a linear network. We also prevent a totally passive eavesdropper, who sniffs the packets during the delivery phase, from obtaining any information about the original files. Three secure centralized multi-transmitter coded caching scenarios, namely secure multi-transmitter coded caching, secure multi-transmitter coded caching with reduced subpacketization, and secure multi-transmitter coded caching with reduced feedback, are considered, and closed-form expressions for the coding delay and the secret shared key storage are provided. As our security guarantee, we show that the delivery phase does not reveal any information to the eavesdropper, using the mutual information metric. Moreover, we investigate the secure decentralized multi-transmitter coded caching scenario, in which there is no cooperation between the clients and transmitters during the cache content placement phase, and study its performance compared to the centralized scheme. We analyze the system's performance in terms of coding delay and guarantee the security of the presented schemes using the mutual information metric. Numerical evaluations verify that security incurs a negligible cost in terms of memory usage when the numbers of files and users are scaled up, in both centralized and decentralized scenarios. We also show numerically that, as the number of files and users increases, the secure coding delay of the centralized and decentralized schemes becomes asymptotically equal.
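To make the delivery-phase security guarantee concrete, the following is a minimal toy sketch, not the paper's actual scheme: a coded-caching XOR multicast is one-time-padded with a secret shared key that only the legitimate caches hold, so the transmitted signal is statistically independent of the files and the mutual information seen by a passive eavesdropper is zero. The two-user, single-subfile setup, the subfile length, and all variable names below are illustrative assumptions; the paper's subpacketization, key storage, and multi-transmitter precoding are not reproduced here.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical toy parameters (not from the paper): 2 users, each requesting
# one subfile that the other user already caches, plus a shared one-time-pad
# key stored in both caches but unknown to the eavesdropper.
SUBFILE_LEN = 16
W_A = secrets.token_bytes(SUBFILE_LEN)   # subfile wanted by user 1, cached by user 2
W_B = secrets.token_bytes(SUBFILE_LEN)   # subfile wanted by user 2, cached by user 1
key = secrets.token_bytes(SUBFILE_LEN)   # secret shared key placed in both caches

# Classic coded-caching multicast: W_A xor W_B serves both users at once.
# Secure delivery additionally one-time-pads the multicast with the key.
X_secure = xor_bytes(xor_bytes(W_A, W_B), key)

# Legitimate users peel off the key and their cached subfile.
assert xor_bytes(xor_bytes(X_secure, key), W_B) == W_A   # user 1 recovers W_A
assert xor_bytes(xor_bytes(X_secure, key), W_A) == W_B   # user 2 recovers W_B

# Because `key` is uniform and independent of the files, X_secure is itself
# uniformly distributed, i.e. I(X_secure; W_A, W_B) = 0 -- the mutual-information
# style guarantee the abstract refers to, here in its simplest one-time-pad form.
```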
