
Mutual Information-Maximizing Quantized Belief Propagation Decoding of Regular LDPC Codes (arXiv:1904.06666v5)

Published 14 Apr 2019 in cs.IT and math.IT

Abstract: In this paper, we propose a class of finite alphabet iterative decoders (FAIDs), called mutual information-maximizing quantized belief propagation (MIM-QBP) decoders, for decoding regular low-density parity-check (LDPC) codes. Our decoder follows the reconstruction-calculation-quantization (RCQ) decoding architecture that is widely used in FAIDs. We present the first complete and systematic design framework for the RCQ parameters, and prove that our design, given sufficient precision at the node updates, maximizes the mutual information between coded bits and exchanged messages. Simulation results show that the MIM-QBP decoder consistently and considerably outperforms the state-of-the-art mutual information-maximizing FAIDs that adopt two-input single-output lookup tables for decoding. Furthermore, with only 3 bits per exchanged message, the MIM-QBP decoder outperforms the floating-point belief propagation decoder in the high signal-to-noise ratio region when tested on high-rate LDPC codes with maximum iteration counts of 10 and 30.
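
The RCQ architecture described in the abstract lends itself to a compact sketch. Below is a minimal, illustrative 3-bit variable-node update, assuming a hypothetical reconstruction table (PHI) and quantizer thresholds (THRESHOLDS); these placeholder values are not the paper's MIM-QBP parameters, which are derived systematically to maximize the mutual information between coded bits and messages.

```python
import numpy as np

# Placeholder RCQ parameters for a 3-bit (8-level) message alphabet.
# These values are assumptions for illustration only; the MIM-QBP
# design framework computes optimized tables and thresholds.
PHI = np.array([-3.5, -2.0, -1.0, -0.3, 0.3, 1.0, 2.0, 3.5])   # (R) reconstruction values
THRESHOLDS = np.array([-2.5, -1.5, -0.6, 0.0, 0.6, 1.5, 2.5])  # (Q) quantizer boundaries

def rcq_variable_node_update(channel_msg: int, incoming_msgs) -> int:
    """One RCQ-style variable-node update on quantized message indices."""
    # (R) Reconstruction: map 3-bit message indices to real-valued surrogates.
    # (C) Calculation: combine the channel message and incoming
    #     check-to-variable messages by summation in the real domain.
    total = PHI[channel_msg] + sum(PHI[m] for m in incoming_msgs)
    # (Q) Quantization: threshold the result back to a 3-bit index in 0..7.
    return int(np.searchsorted(THRESHOLDS, total))

# Example: combine a channel message with two check-to-variable messages.
print(rcq_variable_node_update(5, [6, 2]))  # -> 6 with the placeholder tables
```

Because the calculation step works on reconstructed real values rather than on lookup tables over message pairs, the node update can take all incoming messages at once; this is the structural difference from the two-input single-output lookup-table FAIDs that the abstract compares against.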

Citations (7)
