
Capacity Approaching Coding for Low Noise Interactive Quantum Communication, Part I: Large Alphabets (2001.02818v1)

Published 9 Jan 2020 in quant-ph, cs.CC, cs.DS, cs.IT, and math.IT

Abstract: We consider the problem of implementing two-party interactive quantum communication over noisy channels, a necessary endeavor if we wish to fully reap quantum advantages for communication. For an arbitrary protocol with $n$ messages, designed for a noiseless qudit channel over a $\mathrm{poly}(n)$ size alphabet, our main result is a simulation method that fails with probability less than $2^{-\Theta(n\epsilon)}$ and uses a qudit channel over the same alphabet $n\left(1+\Theta \left(\sqrt{\epsilon}\right)\right)$ times, of which an $\epsilon$ fraction can be corrupted adversarially. The simulation is thus capacity achieving to leading order, and we conjecture that it is optimal up to a constant factor in the $\sqrt{\epsilon}$ term. Furthermore, the simulation is in a model that does not require pre-shared resources such as randomness or entanglement between the communicating parties. Our work improves over the best previously known quantum result where the overhead is a non-explicit large constant [Brassard et al., FOCS'14] for low $\epsilon$.
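
To unpack the capacity claim: writing the overhead term as $c\sqrt{\epsilon}$ for a constant $c > 0$ (the abstract only guarantees $\Theta(\sqrt{\epsilon})$, so any concrete $c$ is an illustrative assumption), the effective rate of the simulation is

$$R(\epsilon) \;=\; \frac{n}{n\left(1 + c\sqrt{\epsilon}\right)} \;=\; \frac{1}{1 + c\sqrt{\epsilon}} \;=\; 1 - c\sqrt{\epsilon} + O(\epsilon),$$

which tends to the noiseless rate $1$ as $\epsilon \to 0$; this is the sense in which the simulation is capacity achieving to leading order, with a rate loss of only $\Theta(\sqrt{\epsilon})$. For instance, taking $c = 1$ and $\epsilon = 0.01$ would give $R \approx 1/1.1 \approx 0.91$.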

Citations (3)