
On the Optimal Boolean Function for Prediction under Quadratic Loss (1607.02381v1)

Published 8 Jul 2016 in cs.IT and math.IT

Abstract: Suppose $Y^{n}$ is obtained by observing a uniform Bernoulli random vector $X^{n}$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^{n}$ and a Boolean function $\mathsf{b}(X^{n})$ could be, and conjectured that the maximum is attained by a dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in a sequential prediction of $Y^{n}$ under logarithmic loss, given $\mathsf{b}(X^{n})$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function: the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise dictator outperforms majority. We conjecture that for quadratic loss, there is no single sequence of Boolean functions that is simultaneously (asymptotically) optimal at all noise levels.
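To make the setup concrete, the sequential quadratic prediction cost can be computed exactly by brute-force enumeration for small $n$: the optimal predictor of each bit $Y_i$ given the past $Y^{i-1}$ and $\mathsf{b}(X^{n})$ is the conditional mean, so the cost is a sum of conditional variances. The following sketch is not from the paper; the names `prediction_cost`, `dictator`, and `majority` and the parameter `delta` (the BSC crossover probability) are illustrative choices.

```python
# Brute-force sequential quadratic prediction cost (illustrative sketch).
from itertools import product

def prediction_cost(b, n, delta):
    """Expected cumulative quadratic loss of sequentially predicting Y^n,
    where the predictor observes b(X^n) and the past Y^{i-1}; X^n is
    uniform on {0,1}^n and Y^n is X^n passed through a BSC(delta)."""
    # Joint distribution of (Y^n, b(X^n)), marginalized over X^n.
    joint = {}
    for x in product((0, 1), repeat=n):
        bval = b(x)
        for y in product((0, 1), repeat=n):
            flips = sum(xi != yi for xi, yi in zip(x, y))
            p = 0.5 ** n * delta ** flips * (1 - delta) ** (n - flips)
            joint[y, bval] = joint.get((y, bval), 0.0) + p
    cost = 0.0
    for i in range(n):
        # Group by the predictor's information: (y_1, ..., y_{i-1}, b(X^n)).
        groups = {}
        for (y, bval), p in joint.items():
            key = (y[:i], bval)
            tot, ones = groups.get(key, (0.0, 0.0))
            groups[key] = (tot + p, ones + p * y[i])
        for tot, ones in groups.values():
            if tot > 0.0:
                q = ones / tot             # optimal (mean) prediction of Y_i
                cost += tot * q * (1 - q)  # adds E[Var(Y_i | past, b(X^n))]
    return cost

dictator = lambda x: x[0]
majority = lambda x: int(sum(x) > len(x) / 2)  # odd n, so no ties

# Noiseless sanity check: given B = X_1, the remaining n-1 bits of Y^n = X^n
# are fair coins, so the dictator's cost is exactly (n - 1) / 4.
```

Comparing `prediction_cost(majority, n, delta)` against `prediction_cost(dictator, n, delta)` over a range of `delta` probes the weak-noise/strong-noise crossover the abstract describes; the enumeration is exponential in $n$, so this is only practical for small $n$.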

Citations (8)
