Arimoto-Rényi Conditional Entropy and Bayesian $M$-ary Hypothesis Testing

(arXiv:1701.01974)
Published Jan 8, 2017 in cs.IT, math.IT, math.PR, math.ST, and stat.TH

Abstract

This paper gives upper and lower bounds on the minimum error probability of Bayesian $M$-ary hypothesis testing in terms of the Arimoto-Rényi conditional entropy of an arbitrary order $\alpha$. The improved tightness of these bounds over their specialized versions with the Shannon conditional entropy ($\alpha=1$) is demonstrated. In particular, in the case where $M$ is finite, we show how to generalize Fano's inequality under both the conventional and list-decision settings. As a counterpart to the generalized Fano's inequality, allowing $M$ to be infinite, a lower bound on the Arimoto-Rényi conditional entropy is derived as a function of the minimum error probability. Explicit upper and lower bounds on the minimum error probability are obtained as a function of the Arimoto-Rényi conditional entropy for both positive and negative $\alpha$. Furthermore, we give upper bounds on the minimum error probability as functions of the Rényi divergence. In the setup of discrete memoryless channels, we analyze the exponentially vanishing decay of the Arimoto-Rényi conditional entropy of the transmitted codeword given the channel output when averaged over a random coding ensemble.
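
The abstract centers on two quantities: the Arimoto-Rényi conditional entropy $H_\alpha(X|Y)$ and the minimum error probability $\epsilon_{X|Y}$ of Bayesian $M$-ary hypothesis testing. Below is a minimal numerical sketch, not taken from the paper (the function names and the toy joint distribution are illustrative), that computes both quantities under Arimoto's standard definition and checks two standard facts consistent with the abstract: $H_\alpha(X|Y) \to -\log(1-\epsilon_{X|Y})$ as $\alpha \to \infty$, and the classical ($\alpha=1$) Fano inequality $H(X|Y) \le h(\epsilon_{X|Y}) + \epsilon_{X|Y}\log(M-1)$.

```python
import numpy as np

def arimoto_renyi_cond_entropy(p_xy, alpha):
    """Arimoto-Renyi conditional entropy H_alpha(X|Y) in nats,
    for alpha > 0, alpha != 1 (alpha -> 1 recovers Shannon's H(X|Y)).

    p_xy: 2-D joint pmf with rows indexed by x and columns by y.
    """
    # Arimoto's definition:
    # H_alpha(X|Y) = alpha/(1-alpha) * log sum_y ( sum_x P_XY(x,y)^alpha )^(1/alpha)
    per_y = (p_xy ** alpha).sum(axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log(per_y.sum())

def min_error_probability(p_xy):
    """Minimum Bayesian error probability: the MAP rule guesses
    argmax_x P_XY(x, y) for each observed y."""
    return 1.0 - p_xy.max(axis=0).sum()

# Illustrative joint pmf over (X, Y): M = 3 hypotheses, binary observation.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
M = p_xy.shape[0]

eps = min_error_probability(p_xy)                   # 0.45 for this pmf

# As alpha -> infinity, H_alpha(X|Y) -> -log(1 - eps).
print(arimoto_renyi_cond_entropy(p_xy, 200.0), -np.log(1.0 - eps))

# Classical Fano at alpha = 1: H(X|Y) <= h(eps) + eps * log(M - 1).
h1 = arimoto_renyi_cond_entropy(p_xy, 1.0 + 1e-6)   # alpha -> 1 limit
h_binary = -eps * np.log(eps) - (1.0 - eps) * np.log(1.0 - eps)
print(h1, h_binary + eps * np.log(M - 1))           # ~0.964 <= ~1.000 (nats)
```

Varying $\alpha$ in the sketch shows how $H_\alpha(X|Y)$ interpolates between the Shannon case and the $\alpha \to \infty$ limit, where the conditional entropy determines $\epsilon_{X|Y}$ exactly; the paper's bounds make this relation quantitative for intermediate (and negative) orders.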
