Complexity and Second Moment of the Mathematical Theory of Communication (2107.06420v1)

Published 13 Jul 2021 in cs.IT and math.IT

Abstract: The performance of an error correcting code is evaluated by its error probability, rate, and en/decoding complexity. The performance of a series of codes is evaluated by, as the block lengths approach infinity, whether their error probabilities decay to zero, whether their rates converge to capacity, and whether their growth in complexities stays under control. Over any discrete memoryless channel, I build codes such that: (1) their error probabilities and rates scale like random codes; and (2) their en/decoding complexities scale like polar codes. Quantitatively, for any constants $p,r>0$ s.t. $p+2r<1$, I construct a series of codes with block length $N$ approaching infinity, error probability $\exp(-N^p)$, rate $N^{-r}$ less than the capacity, and en/decoding complexity $O(N\log N)$ per block. Over any discrete memoryless channel, I also build codes such that: (1) they achieve capacity rapidly; and (2) their en/decoding complexities outperform all known codes over non-BEC channels. Quantitatively, for any constants $t,r>0$ s.t. $2r<1$, I construct a series of codes with block length $N$ approaching infinity, error probability $\exp(-(\log N)^t)$, rate $N^{-r}$ less than the capacity, and en/decoding complexity $O(N\log(\log N))$ per block. The two aforementioned results are built upon two pillars: a versatile framework that generates codes on the basis of channel polarization, and a calculus-probability machinery that evaluates the performances of codes. The framework that generates codes and the machinery that evaluates codes can be extended to many other scenarios in network information theory. To name a few: lossless compression, lossy compression, Slepian-Wolf, Wyner-Ziv, multiple access channel, wiretap channel, and broadcast channel. In each scenario, the adapted notions of error probability and rate approach their limits at the same paces as specified above.
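
To make the quantitative claims concrete, here is a worked instance of the first result (the parameter values below are illustrative choices, not taken from the paper):

$p = \tfrac{1}{2},\ r = \tfrac{1}{5} \;\Rightarrow\; p + 2r = \tfrac{9}{10} < 1,$

so the construction applies and yields a code series with error probability $\exp(-N^{1/2})$, rate $N^{-1/5}$ below capacity, and en/decoding complexity $O(N\log N)$ per block. At block length $N = 2^{20}$, this gives error probability $\exp(-2^{10}) = \exp(-1024) \approx 10^{-445}$ and a rate gap of $2^{-4} = 0.0625$.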

Citations (3)

Summary

We haven't generated a summary for this paper yet.

Open Questions

We haven't generated a list of open questions mentioned in this paper yet.

Continue Learning

We haven't generated follow-up questions for this paper yet.

Authors (1)
