
Optimal Guessing under Nonextensive Framework and associated Moment Bounds

(arXiv:1905.07729)
Published May 19, 2019 in cs.IT, cond-mat.stat-mech, and math.IT

Abstract

We consider the problem of guessing the realization of a random variable under the more general Tsallis non-extensive entropic framework rather than the classical Maxwell-Boltzmann-Gibbs-Shannon framework. We treat both the conditional guessing problem, where some related side information is available, and the unconditional one, where no such side information exists. For both problems, we derive non-extensive moment bounds on the required number of guesses, using the $q$-normalized expectation in place of the usual (linear) expectation to define the non-extensive moments. These moment bounds turn out to be functions of the logarithmic norm entropy, a recently developed two-parameter generalization of the Rényi entropy, and hence provide an information-theoretic interpretation of that measure. We also consider the case of an uncertain source distribution and derive non-extensive moment bounds for the corresponding mismatched guessing function. Interestingly, these mismatched bounds are linked with an important family of robust statistical divergences known as the relative $(\alpha,\beta)$-entropies; a similar link is established between optimal mismatched guessing and the extremes of these relative entropy measures.
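For intuition, here is a minimal numerical sketch (not from the paper) of the basic objects the abstract refers to: the optimal guessing function $G(x)$, which ranks outcomes in decreasing order of probability (as in the classical Massey/Arikan setup), and a $q$-normalized moment of $G$. The normalization $\langle A \rangle_q = \sum_x p(x)^q A(x) / \sum_x p(x)^q$ is the standard Tsallis normalized $q$-expectation and is assumed here to match the paper's usage; the Rényi entropy is included only as the familiar one-parameter special case. The paper's logarithmic norm entropy and its exact bound constants are not reproduced.

```python
import numpy as np

def optimal_guessing_function(p):
    """Rank outcomes in decreasing probability order: G(x) = 1 for the
    most likely outcome, 2 for the next most likely, and so on."""
    order = np.argsort(-p)              # indices sorted by decreasing p(x)
    G = np.empty_like(order)
    G[order] = np.arange(1, len(p) + 1)
    return G

def q_normalized_moment(p, G, rho, q):
    """q-normalized (Tsallis-style) expectation of G(X)^rho, assuming the
    standard normalization: sum_x p(x)^q G(x)^rho / sum_x p(x)^q."""
    w = p ** q
    return np.sum(w * G.astype(float) ** rho) / np.sum(w)

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (in nats), for alpha != 1."""
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Example: a skewed 4-point source
p = np.array([0.5, 0.25, 0.15, 0.10])
G = optimal_guessing_function(p)
print("guessing ranks:", G)                                   # [1 2 3 4]
print("<G^1.0>_q at q=0.8:", q_normalized_moment(p, G, rho=1.0, q=0.8))
print("Renyi entropy (alpha=0.5):", renyi_entropy(p, 0.5))
```

Guessing in decreasing probability order minimizes every such moment of the number of guesses, which is why the bounds in the paper are stated for this ranking.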
