Punctured Low-Bias Codes Behave Like Random Linear Codes

(arXiv:2109.11725)

Published Sep 24, 2021 in cs.CC, cs.IT, math.CO, and math.IT

Abstract

Random linear codes are a workhorse in coding theory, and are used to show the existence of codes with the best known or even near-optimal trade-offs in many noise models. However, they have little structure besides linearity, and are not amenable to tractable error-correction algorithms. In this work, we prove a general derandomization result applicable to random linear codes. Namely, in settings where the coding-theoretic property of interest is "local" (in the sense of forbidding certain bad configurations involving few vectors -- code distance and list-decodability being notable examples), one can replace random linear codes (RLCs) with a significantly derandomized variant with essentially no loss in parameters. Specifically, instead of randomly sampling coordinates of the (long) Hadamard code (which is an equivalent way to describe RLCs), one can randomly sample coordinates of any code with low bias. Over large alphabets, the low bias requirement can be weakened to just large distance. Furthermore, large distance suffices even with a small alphabet in order to match the current best known bounds for RLC list-decodability. In particular, by virtue of our result, all current (and future) achievability bounds for list-decodability of random linear codes extend automatically to random puncturings of any low-bias (or large alphabet) "mother" code. We also show that our punctured codes emulate the behavior of RLCs on stochastic channels, thus giving a derandomization of RLCs in the context of achieving Shannon capacity as well. Thus, we have a randomness-efficient way to sample codes achieving capacity in both worst-case and stochastic settings that can further inherit algebraic or other algorithmically useful structural properties of the mother code.
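To make the central construction concrete, here is a minimal NumPy sketch of random puncturing, under stated assumptions: the function names (hadamard_generator, random_puncturing, bias) are our own, and sampling coordinates uniformly with replacement is one standard formulation of random puncturing rather than necessarily the paper's exact model. The sketch illustrates the abstract's equivalence: sampling n random coordinates of the Hadamard code yields a uniformly random k-by-n generator matrix, i.e., a random linear code, and the same sampling step applies verbatim to any low-bias mother code.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_generator(k):
    # Columns are all 2^k vectors of F_2^k, so every nonzero codeword is
    # perfectly balanced (bias 0); enumerating 2^k columns is feasible
    # only for small k in a demo like this.
    return ((np.arange(2**k)[None, :] >> np.arange(k)[:, None]) & 1).astype(np.int64)

def random_puncturing(G, n, rng):
    # Sample n coordinates (columns of the generator matrix) uniformly at
    # random with replacement -- one common formulation of random puncturing.
    cols = rng.integers(0, G.shape[1], size=n)
    return G[:, cols]

def bias(v):
    # Bias of a binary vector v of length n: |E[(-1)^{v_i}]| = |1 - 2*wt(v)/n|.
    return abs(1.0 - 2.0 * v.mean())

k, n = 8, 32
G_mother = hadamard_generator(k)               # 0-biased "mother" code
G_punct = random_puncturing(G_mother, n, rng)  # distributed exactly as an RLC generator

m = rng.integers(0, 2, size=k)                 # random message
c = (m @ G_punct) % 2                          # its codeword in the punctured code
print(c, bias(c))
```

Replacing hadamard_generator with the generator matrix of any other low-bias (or, over large alphabets, merely large-distance) code leaves random_puncturing unchanged; the derandomization in the paper consists in showing that this substitution preserves the local properties of interest, such as distance and list-decodability.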
