
Memory Efficient Mutual Information-Maximizing Quantized Min-Sum Decoding for Rate-Compatible LDPC Codes (2201.06071v1)

Published 16 Jan 2022 in cs.IT and math.IT

Abstract: In this letter, we propose a two-stage design method to construct a memory-efficient mutual information-maximizing quantized min-sum (MIM-QMS) decoder for rate-compatible low-density parity-check (LDPC) codes. We first develop a modified density evolution to design a unique set of lookup tables (LUTs) that can be used for rate-compatible LDPC codes. The constructed LUTs are optimized based on their discrepancy values and a merge function to reduce the memory requirement. Numerical results show that the proposed rate-compatible MIM-QMS decoder can reduce the memory requirement for decoding by up to 94.92% compared to the benchmark rate-compatible LUT-based decoder, with generally faster convergence speed. In addition, the proposed decoder can approach the performance of the floating-point belief propagation decoder within 0.15 dB.
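
The abstract states that the constructed LUTs are optimized using their discrepancy values and a merge function so that a single set of tables can serve rate-compatible decoding with less memory. The paper's actual discrepancy measure and merge function are not given here, so the Python sketch below is only a hypothetical illustration of the general idea: compute a pairwise discrepancy between candidate LUTs and let tables that are close enough share one stored entry. The function names, the elementwise-mismatch discrepancy, and the threshold value are all assumptions, not the authors' definitions.

```python
import numpy as np

def lut_discrepancy(lut_a: np.ndarray, lut_b: np.ndarray) -> float:
    """Toy discrepancy: fraction of quantized inputs mapped to different outputs.
    (Placeholder for the paper's discrepancy value, which is defined differently.)"""
    return float(np.mean(lut_a != lut_b))

def merge_luts(luts: list[np.ndarray], threshold: float) -> list[np.ndarray]:
    """Greedily merge LUTs whose pairwise discrepancy is below `threshold`,
    so that several iterations / code rates can share one table and save memory."""
    merged: list[np.ndarray] = []
    for lut in luts:
        for rep in merged:
            if lut_discrepancy(lut, rep) <= threshold:
                # Close enough: reuse the already-stored table, store nothing new.
                break
        else:
            merged.append(lut)  # No close match found: keep this LUT as a new entry.
    return merged

# Example: 8 per-iteration LUTs over 16 quantized inputs with 4-bit outputs.
rng = np.random.default_rng(0)
base = rng.integers(0, 16, size=16)
luts = [np.clip(base + rng.integers(-1, 2, size=16), 0, 15) for _ in range(8)]
print(f"{len(luts)} LUTs reduced to {len(merge_luts(luts, threshold=0.3))}")
```

In this toy setup the memory saving simply comes from storing fewer distinct tables; the paper's method additionally designs the LUTs themselves via a modified density evolution, which is not modeled in this sketch.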

Citations (4)
