A 588 Gbps LDPC Decoder Based on Finite-Alphabet Message Passing

(1703.05769)
Published Mar 16, 2017 in cs.AR, cs.IT, and math.IT

Abstract

An ultra-high throughput low-density parity check (LDPC) decoder with an unrolled full-parallel architecture is proposed, which achieves the highest decoding throughput among LDPC decoders reported in the literature. The decoder benefits from a serial message-transfer approach between the decoding stages to alleviate the well-known routing congestion problem in parallel LDPC decoders. Furthermore, a finite-alphabet message passing algorithm is employed to replace the variable node update rule of the standard min-sum decoder with look-up tables, which are designed to maximize the mutual information between decoding messages. The proposed algorithm results in an architecture with reduced message bit-widths, leading to significantly higher decoding throughput and lower area than a min-sum decoder with the same serial message-transfer architecture. Both the standard min-sum reference decoder and the proposed finite-alphabet decoder are placed and routed using a custom pseudo-hierarchical backend design strategy to further alleviate routing congestion and to handle the large design. Post-layout results show that the finite-alphabet decoder with the serial message-transfer architecture achieves a throughput of 588 Gbps with an area of 16.2 mm² and dissipates an average of 22.7 pJ per decoded bit in a 28 nm FD-SOI library. Compared to the reference min-sum decoder, this corresponds to a 3.1 times smaller area and 2 times better energy efficiency.
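To make the algorithmic idea in the abstract concrete, the sketch below shows one min-sum message-passing step on a toy parity-check matrix, alongside a variable-node update expressed as a table lookup over small integer message labels instead of an LLR sum. This is a minimal illustration only: the parity-check matrix, function names, and LUT contents are hypothetical, and the mutual-information-maximizing design of the tables described in the paper is not reproduced here.

```python
# Minimal sketch (not the paper's architecture): min-sum message updates on a
# toy code, plus a finite-alphabet variable-node update driven by a lookup
# table. The LUT contents are assumed to be designed offline (e.g., to
# maximize mutual information, as in the paper); here they are placeholders.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix, for
              [0, 1, 1, 0, 1, 0],   # illustration only
              [1, 0, 1, 0, 0, 1]])

def check_node_update(vn_msgs):
    """Min-sum check-node rule: each outgoing message takes the sign product
    and minimum magnitude of the *other* incoming variable-node messages."""
    out = np.zeros_like(vn_msgs, dtype=float)
    for e in range(len(vn_msgs)):
        others = np.delete(vn_msgs, e)
        out[e] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def vn_update_minsum(llr, cn_msgs):
    """Standard min-sum variable-node rule: channel LLR plus the other
    incoming check-node messages (extrinsic sum)."""
    total = llr + np.sum(cn_msgs)
    return np.array([total - m for m in cn_msgs])

def vn_update_lut(llr_label, cn_labels, lut):
    """Finite-alphabet variant: the extrinsic sum is replaced by a lookup on
    small integer message labels, so the datapath carries fewer bits per
    message. `lut` maps (channel label, other check labels) -> output label."""
    return [lut[(llr_label, tuple(np.delete(cn_labels, e)))]
            for e in range(len(cn_labels))]

# Example: variable-to-check LLRs on the edges of one check node.
vn_msgs = np.array([+0.8, -1.5, +0.3])
print(check_node_update(vn_msgs))   # -> [-0.3  0.3 -0.8]
```

In this reading, the throughput and area gains reported in the abstract come from the narrower message labels: an unrolled full-parallel decoder routes every message on dedicated wires, so shrinking the per-message bit-width directly reduces wiring and register area along the pipeline.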
