Bounds for list-decoding and list-recovery of random linear codes (2004.13247v2)
Abstract: A family of error-correcting codes is list-decodable from error fraction $p$ if, for every code in the family, the number of codewords in any Hamming ball of fractional radius $p$ is less than some integer $L$ that is independent of the code length. It is said to be list-recoverable for input list size $\ell$ if, for every sufficiently large subset of codewords (of size $L$ or more), there is a coordinate where the codewords take more than $\ell$ values. In either case, the parameter $L$ is called the "list size". The capacity, i.e., the largest possible rate for these notions as the list size $L \to \infty$, is known to be $1 - h_q(p)$ for list-decoding and $1 - \log_q \ell$ for list-recovery, where $q$ is the alphabet size of the code family. In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show that the following claims hold with high probability over the choice of the code (below, $\epsilon > 0$ is the gap to capacity).
(1) A random linear code of rate $1 - \log_q(\ell) - \epsilon$ requires list size $L \ge \ell^{\Omega(1/\epsilon)}$ for list-recovery from input list size $\ell$. This is in surprising contrast to completely random codes, where $L = O(\ell/\epsilon)$ suffices with high probability.
(2) A random linear code of rate $1 - h_q(p) - \epsilon$ requires list size $L \ge \lfloor h_q(p)/\epsilon + 0.99 \rfloor$ for list-decoding from error fraction $p$, when $\epsilon$ is sufficiently small.
(3) A random binary linear code of rate $1 - h_2(p) - \epsilon$ is list-decodable from average error fraction $p$ with list size $L \le \lfloor h_2(p)/\epsilon \rfloor + 2$.
The second and third results together precisely pin down the list size of binary random linear codes, for both list-decoding and average-radius list-decoding, to three possible values.
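To make the rate and list-size expressions concrete, here is a minimal Python sketch that evaluates the $q$-ary entropy $h_q(p)$, the two capacities, and the list-size bounds from results (2) and (3) at sample parameters. The parameter choices ($p = 0.1$, $\epsilon = 0.01$, $q = 16$, $\ell = 4$) are illustrative assumptions, not values taken from the paper.

```python
import math


def h_q(p: float, q: int) -> float:
    """q-ary entropy h_q(p) = p log_q(q-1) - p log_q(p) - (1-p) log_q(1-p), for 0 < p < 1."""
    return (p * math.log(q - 1, q)
            - p * math.log(p, q)
            - (1 - p) * math.log(1 - p, q))


# --- Binary list-decoding (results 2 and 3); illustrative parameters. ---
p, eps = 0.1, 0.01

rate = 1 - h_q(p, 2) - eps                    # rate at gap eps below capacity 1 - h_2(p)
L_lower = math.floor(h_q(p, 2) / eps + 0.99)  # result (2): L >= floor(h_2(p)/eps + 0.99)
L_upper = math.floor(h_q(p, 2) / eps) + 2     # result (3): L <= floor(h_2(p)/eps) + 2

print(f"binary list-decoding: rate {rate:.4f}, list size in [{L_lower}, {L_upper}]")

# --- List-recovery (result 1); capacity 1 - log_q(ell) needs q > ell to be positive. ---
q, ell = 16, 4
lr_capacity = 1 - math.log(ell, q)
print(f"list-recovery capacity (q={q}, ell={ell}): {lr_capacity:.2f}")
# Result (1): at rate lr_capacity - eps, random linear codes need
# L >= ell^{Omega(1/eps)}, whereas completely random codes get by with L = O(ell/eps).
```

With these sample numbers, $h_2(0.1) \approx 0.469$, so the bounds of results (2) and (3) give $47 \le L \le 48$, illustrating how the two results sandwich the list size into a narrow range.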