Bandit Phase Retrieval

(2106.01660)
Published Jun 3, 2021 in stat.ML, cs.LG, math.ST, stat.ME, and stat.TH

Abstract

We study a bandit version of phase retrieval where the learner chooses actions $(A_t)_{t=1}^n$ in the $d$-dimensional unit ball and the expected reward is $\langle A_t, \theta_\star\rangle^2$, where $\theta_\star \in \mathbb{R}^d$ is an unknown parameter vector. We prove that the minimax cumulative regret in this problem is $\tilde{\Theta}(d \sqrt{n})$, which improves on the best known bounds by a factor of $\sqrt{d}$. We also show that the minimax simple regret is $\tilde{\Theta}(d / \sqrt{n})$ and that this is only achievable by an adaptive algorithm. Our analysis shows that an apparently convincing heuristic for guessing lower bounds can be misleading and that uniform bounds on the information ratio for information-directed sampling are not sufficient for optimal regret.
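To make the problem setup concrete, the following is a minimal sketch of the bandit phase retrieval environment described above: a hidden unit vector $\theta_\star$, actions in the unit ball, and noisy observations of the quadratic reward $\langle a, \theta_\star\rangle^2$. The noise level, the fixed action schedule, and all variable names are illustrative assumptions, not part of the paper; the paper's optimal algorithms are adaptive, whereas the baseline below is a deliberately naive non-adaptive one.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 1000  # dimension and horizon (hypothetical values)

# Hidden parameter theta_star in R^d, normalized to unit length.
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

def expected_reward(a):
    """Expected reward <a, theta_star>^2 for an action a in the unit ball."""
    return float(np.dot(a, theta_star) ** 2)

# Naive non-adaptive baseline (for illustration only): cycle through the
# standard basis vectors, then commit to the coordinate with the highest
# total observed reward. Observations are the expected reward plus
# Gaussian noise (noise scale is an assumption).
totals = np.zeros(d)
for t in range(n):
    i = t % d
    a = np.eye(d)[i]
    totals[i] += expected_reward(a) + rng.normal(scale=0.1)
best = int(np.argmax(totals))
```

Playing the best action found this way suffers the simple-regret gap the paper quantifies: the abstract notes that the minimax simple regret $\tilde{\Theta}(d/\sqrt{n})$ is only achievable by an adaptive algorithm, which a fixed schedule like this cannot match.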
