A Reconstruction-Computation-Quantization (RCQ) Approach to Node Operations in LDPC Decoding (2005.07259v1)

Published 14 May 2020 in eess.SP, cs.IT, and math.IT

Abstract: In this paper, we propose a finite-precision decoding method that features the three steps of Reconstruction, Computation, and Quantization (RCQ). Unlike Mutual-Information-Maximization Quantized Belief Propagation (MIM-QBP), RCQ can approximate either belief propagation or Min-Sum decoding. One problem faced by the MIM-QBP decoder is that it does not work well when the fraction of degree-2 variable nodes is large. However, a large fraction of degree-2 variable nodes is sometimes necessary for a fast encoding structure, as seen in the IEEE 802.11 standard and the DVB-S2 standard. In contrast, the proposed RCQ decoder may be applied to any off-the-shelf LDPC code, including those with a large fraction of degree-2 variable nodes. Our simulations show that a 4-bit Min-Sum RCQ decoder delivers frame error rate (FER) performance within 0.1 dB of full-precision belief propagation (BP) for the IEEE 802.11 standard LDPC code in the low-SNR region. The RCQ decoder actually outperforms full-precision BP in the high-SNR region because it overcomes elementary trapping sets that create an error floor under BP decoding. This paper also introduces Hierarchical Dynamic Quantization (HDQ) to design the non-uniform quantizers required by RCQ decoders. HDQ is a low-complexity design technique that is slightly sub-optimal. Simulation results comparing HDQ and an optimal quantizer on the symmetric binary-input memoryless additive white Gaussian noise channel show a loss in mutual information between these two quantizers of less than $10^{-6}$ bits, which is negligible for practical applications.
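
To make the three RCQ steps concrete, here is a minimal Python sketch of one check-node update in a 4-bit Min-Sum RCQ decoder. The reconstruction table `RECON` and the mid-point thresholds are illustrative placeholders, not the paper's designed values (in the paper these mappings are designed per iteration, e.g., via HDQ).

```python
import numpy as np

# Hypothetical 4-bit reconstruction table (16 LLR-like values) and the
# mid-point quantization thresholds derived from it. These numbers are
# placeholders for illustration only.
RECON = np.array([-7.0, -5.0, -3.5, -2.4, -1.5, -0.8, -0.3, -0.05,
                   0.05, 0.3, 0.8, 1.5, 2.4, 3.5, 5.0, 7.0])
THRESH = 0.5 * (RECON[:-1] + RECON[1:])

def rcq_check_node(labels):
    """One check-node update of a Min-Sum RCQ decoder.

    labels: integer array of incoming 4-bit message labels (0..15).
    Returns one outgoing 4-bit label per incident edge.
    """
    # Reconstruction: map low-precision labels to higher-precision values.
    vals = RECON[labels]
    signs = np.sign(vals)
    mags = np.abs(vals)

    out = np.empty_like(labels)
    for i in range(len(labels)):
        # Computation: Min-Sum update excluding the target edge i.
        others = np.delete(np.arange(len(labels)), i)
        msg = np.prod(signs[others]) * mags[others].min()
        # Quantization: map the high-precision result back to 4 bits.
        out[i] = np.searchsorted(THRESH, msg)
    return out

# Example: four incoming edge messages, labels in 0..15.
print(rcq_check_node(np.array([2, 14, 7, 9])))
```

The point of the structure is that messages cross the interconnect as short labels, while the arithmetic inside the node runs at higher precision; only the reconstruction and quantization mappings change between a BP-like and a Min-Sum-like decoder.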
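The HDQ design step can likewise be sketched. The function below performs a greedy, one-threshold-at-a-time mutual-information maximization over a finely discretized binary-input channel. It captures the spirit of HDQ's sequential threshold selection, but it does not reproduce the exact hierarchical (bit-by-bit) ordering described in the paper, so treat it as an assumption-laden illustration rather than the authors' algorithm.

```python
import numpy as np

def mutual_information(p_joint):
    """I(X;Z) in bits for a joint pmf over (channel input X, quantizer output Z)."""
    px = p_joint.sum(axis=1, keepdims=True)
    pz = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return np.sum(p_joint[mask] * np.log2(p_joint[mask] / (px @ pz)[mask]))

def hdq_like_thresholds(p_y_given_x, n_levels):
    """Greedy non-uniform quantizer design over a finely discretized channel.

    p_y_given_x: (2, N) conditional pmf of a fine-grained channel output,
                 sorted by LLR; equiprobable inputs are assumed.
    Returns the indices of the n_levels - 1 chosen cut points.
    """
    N = p_y_given_x.shape[1]
    cuts = []
    for _ in range(n_levels - 1):
        best_mi, best_c = -1.0, None
        for c in range(1, N):
            if c in cuts:
                continue
            edges = [0] + sorted(cuts + [c]) + [N]
            # Joint pmf of (input, quantization bin) under a uniform prior.
            joint = np.array([[0.5 * p_y_given_x[x, a:b].sum()
                               for a, b in zip(edges[:-1], edges[1:])]
                              for x in range(2)])
            mi = mutual_information(joint)
            if mi > best_mi:
                best_mi, best_c = mi, c
        cuts.append(best_c)
    return sorted(cuts)

# Example: quantize a discretized BI-AWGN channel output to 8 levels (3 bits).
y = np.linspace(-4, 4, 200)
sigma = 0.8
lik = np.array([np.exp(-(y - s) ** 2 / (2 * sigma ** 2)) for s in (+1.0, -1.0)])
p = lik / lik.sum(axis=1, keepdims=True)   # conditional pmf per input
print(hdq_like_thresholds(p, n_levels=8))
```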

Citations (10)
