Better Best of Both Worlds Bounds for Bandits with Switching Costs (2206.03098v2)
Abstract: We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer, Seldin and Cesa-Bianchi, 2021. We introduce a surprisingly simple and effective algorithm that simultaneously achieves minimax optimal regret bound of $\mathcal{O}(T^{2/3})$ in the oblivious adversarial setting and a bound of $\mathcal{O}(\min\{\log(T)/\Delta^2,\, T^{2/3}\})$ in the stochastically-constrained regime, both with (unit) switching costs, where $\Delta$ is the gap between the arms. In the stochastically constrained case, our bound improves over previous results due to Rouyer et al., that achieved regret of $\mathcal{O}(T^{1/3}/\Delta)$. We accompany our results with a lower bound showing that, in general, $\tilde{\Omega}(\min\{1/\Delta^2,\, T^{2/3}\})$ regret is unavoidable in the stochastically-constrained case for algorithms with $\mathcal{O}(T^{2/3})$ worst-case regret.
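For readability, the guarantees stated in the abstract can be written in display form as follows (this merely restates the abstract's bounds; the regret notation $R_T$ for the expected regret including switching costs is an assumption introduced here, not notation from the paper):
$$
R_T^{\mathrm{adversarial}} = \mathcal{O}\!\left(T^{2/3}\right),
\qquad
R_T^{\mathrm{stochastic}} = \mathcal{O}\!\left(\min\left\{\frac{\log T}{\Delta^2},\; T^{2/3}\right\}\right),
$$
while the accompanying lower bound states that any algorithm with $\mathcal{O}(T^{2/3})$ worst-case regret must, in general, suffer
$$
R_T^{\mathrm{stochastic}} = \tilde{\Omega}\!\left(\min\left\{\frac{1}{\Delta^2},\; T^{2/3}\right\}\right)
$$
in the stochastically-constrained case.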