
An Upper Bound on the Reliability Function of the DMC (2209.00968v1)

Published 2 Sep 2022 in cs.IT and math.IT

Abstract: We derive a new upper bound on the reliability function for channel coding over discrete memoryless channels. Our bounding technique relies on two main elements: (i) adding an auxiliary genie-receiver that reveals to the original receiver a list of codewords, including the transmitted one, which satisfy a certain type property, and (ii) partitioning (most of) the list into subsets of codewords that satisfy a certain pairwise-symmetry property, which facilitates lower bounding the average error probability by the pairwise error probability within a subset. We compare the obtained bound to the Shannon-Gallager-Berlekamp straight-line bound, the sphere-packing bound, and an amended version of Blahut's bound. Our bound is shown to be at least as tight as all three aforementioned bounds at all rates, and in some cases strictly tighter in a certain range of low rates. Our derivation is performed in a unified manner which is valid for any rate, as well as for a wide class of additive decoding metrics, whenever the corresponding zero-error capacity is zero. We further present a relatively simple function that may be regarded as an approximation to the reliability function in some cases. We also present a dual form of the bound, and discuss a looser bound of a simpler form, which is analyzed for the case of the binary symmetric channel with maximum likelihood decoding.
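
For context on one of the classical baselines the paper compares against, the sketch below evaluates the standard sphere-packing exponent of a binary symmetric channel under maximum-likelihood decoding, E_sp(R) = D(delta || p) with delta in [p, 1/2] solving h2(delta) = 1 - R and capacity C = 1 - h2(p). This is textbook background rather than the paper's new bound; the function names, the bisection routine, and the example parameters are illustrative choices, not taken from the paper.

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def d_binary(a, b):
    """Binary KL divergence D(a || b) in bits."""
    return a * math.log2(a / b) + (1.0 - a) * math.log2((1.0 - a) / (1.0 - b))

def sphere_packing_exponent_bsc(R, p, iters=80):
    """Sphere-packing exponent of BSC(p), p < 1/2, at rate R (bits/channel use):
    E_sp(R) = D(delta || p), where delta in [p, 1/2] solves h2(delta) = 1 - R.
    Defined here for 0 < R <= C with C = 1 - h2(p)."""
    C = 1.0 - h2(p)
    if not (0.0 < R <= C):
        raise ValueError("R must lie in (0, C]")
    lo, hi = p, 0.5                    # h2 is increasing on [p, 1/2]
    for _ in range(iters):             # bisection for h2(delta) = 1 - R
        mid = 0.5 * (lo + hi)
        if h2(mid) < 1.0 - R:
            lo = mid
        else:
            hi = mid
    delta = 0.5 * (lo + hi)
    return d_binary(delta, p)

if __name__ == "__main__":
    p = 0.1                            # crossover probability (illustrative)
    print(f"Capacity C = {1.0 - h2(p):.4f} bits/use")
    for R in (0.10, 0.30, 0.50):
        print(f"R = {R:.2f}  E_sp(R) = {sphere_packing_exponent_bsc(R, p):.4f}")
```

Above the critical rate, E_sp(R) is known to coincide with the reliability function of the BSC, which is why comparisons such as the one in the abstract focus on the low-rate regime where the various upper bounds can differ.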

Citations (1)


Authors (1)
