Bounds on the Feedback Capacity of the $(d,\infty)$-RLL Input-Constrained Binary Erasure Channel (2101.08638v3)

Published 21 Jan 2021 in cs.IT and math.IT

Abstract: The paper considers the input-constrained binary erasure channel (BEC) with causal, noiseless feedback. The channel input sequence respects the $(d,\infty)$-runlength limited (RLL) constraint, i.e., any pair of successive $1$s must be separated by at least $d$ $0$s. We derive upper and lower bounds on the feedback capacity of this channel, for all $d\geq 1$, given by: $\max\limits_{\delta \in [0,\frac{1}{d+1}]}R(\delta) \leq C^{\text{fb}}_{(d,\infty)}(\epsilon) \leq \max\limits_{\delta \in [0,\frac{1}{1+d\epsilon}]}R(\delta)$, where the function $R(\delta) = \frac{h_b(\delta)}{d\delta + \frac{1}{1-\epsilon}}$, with $\epsilon\in [0,1]$ denoting the channel erasure probability, and $h_b(\cdot)$ being the binary entropy function. We note that our bounds are tight for the case when $d=1$ (see Sabag et al. (2016)), and, in addition, we demonstrate that for the case when $d=2$, the feedback capacity is equal to the capacity with non-causal knowledge of erasures, for $\epsilon \in [0,1-\frac{1}{2\log(3/2)}]$. For $d>1$, our bounds differ from the non-causal capacities (which serve as upper bounds on the feedback capacity) derived in Peled et al. (2019) only in the domains of maximization. The approach in this paper follows Sabag et al. (2017), deriving single-letter bounds on the feedback capacity based on output distributions supported on a finite $Q$-graph, which is a directed graph with edges labelled by output symbols.
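
As a quick numerical illustration of the stated bounds, the following Python sketch grid-searches $R(\delta)$ over the two maximization intervals from the abstract. The function names and the grid-search approach are ours, not from the paper, and the sketch assumes $\epsilon \in [0,1)$ so that $\frac{1}{1-\epsilon}$ stays finite.

```python
import numpy as np

def h_b(x):
    """Binary entropy function in bits, with h_b(0) = h_b(1) = 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = (x > 0) & (x < 1)
    out[m] = -x[m] * np.log2(x[m]) - (1 - x[m]) * np.log2(1 - x[m])
    return out

def R(delta, d, eps):
    """Rate function R(delta) = h_b(delta) / (d*delta + 1/(1 - eps))."""
    return h_b(delta) / (d * delta + 1.0 / (1.0 - eps))

def capacity_bounds(d, eps, n=200_001):
    """Grid maximization of R over the two intervals in the bounds."""
    lower = R(np.linspace(0.0, 1.0 / (d + 1), n), d, eps).max()
    upper = R(np.linspace(0.0, 1.0 / (1.0 + d * eps), n), d, eps).max()
    return lower, upper

# d = 1, eps = 0.5: the two values agree, consistent with the paper's
# tightness claim for d = 1.
print(capacity_bounds(1, 0.5))
# d = 2, eps = 0.5: the maximization domains differ, and a gap between
# the bounds can appear.
print(capacity_bounds(2, 0.5))
```

Since $[0,\frac{1}{d+1}] \subseteq [0,\frac{1}{1+d\epsilon}]$ for $\epsilon \in [0,1]$, the two bounds coincide exactly when the maximizer of $R$ over the larger interval already lies in the smaller one, which is what the sketch exhibits numerically at $d=1$.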
