Thompson Sampling for Combinatorial Network Optimization in Unknown Environments

(1907.04201)
Published Jul 7, 2019 in cs.LG and stat.ML

Abstract

Influence maximization, adaptive routing, and dynamic spectrum allocation all require choosing the right action from a large set of alternatives. Thanks to advances in combinatorial optimization, these and many similar problems can be solved efficiently when the stochasticity of the environment is known. In this paper, we take this one step further and focus on combinatorial optimization in unknown environments. We consider a very general learning framework called combinatorial multi-armed bandit with probabilistically triggered arms and a very powerful Bayesian algorithm called Combinatorial Thompson Sampling (CTS). Under the semi-bandit feedback model, and assuming access to an oracle but no prior knowledge of the expected base arm outcomes, we show that when the expected reward is Lipschitz continuous in the expected base arm outcomes, CTS achieves $O(\sum_{i=1}^{m}\log T/(p_i\Delta_i))$ regret and $O(\max\{\mathbb{E}[m\sqrt{T\log T/p^*}],\mathbb{E}[m^2/p^*]\})$ Bayesian regret, where $m$ denotes the number of base arms, $p_i$ and $\Delta_i$ denote the minimum non-zero triggering probability and the minimum suboptimality gap of base arm $i$ respectively, $T$ denotes the time horizon, and $p^*$ denotes the overall minimum non-zero triggering probability. We also show that when the expected reward satisfies triggering probability modulated Lipschitz continuity, CTS achieves $O(\max\{m\sqrt{T\log T},m^2\})$ Bayesian regret, and when triggering probabilities are non-zero for all base arms, CTS achieves $O((1/p^*)\log(1/p^*))$ regret, independent of the time horizon. Finally, we numerically compare CTS with algorithms based on upper confidence bounds in several networking problems and show that CTS outperforms these algorithms by at least an order of magnitude in the majority of cases.
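As a rough illustration of the CTS loop the abstract refers to, here is a minimal sketch assuming Bernoulli base arm outcomes with Beta posteriors and semi-bandit feedback on triggered arms. The `oracle` and `play` callables and all names below are hypothetical placeholders, not the paper's actual interface:

```python
import numpy as np

def combinatorial_thompson_sampling(num_arms, oracle, play, horizon, rng=None):
    """Sketch of CTS with Beta posteriors over Bernoulli base-arm means.

    oracle(theta) -> super arm (iterable of base arm indices) that the
                     combinatorial solver deems best under sampled means theta.
    play(super_arm) -> dict {arm: outcome in {0, 1}} for every base arm
                       triggered by the chosen super arm (semi-bandit feedback
                       with probabilistic triggering).
    """
    rng = rng or np.random.default_rng()
    alpha = np.ones(num_arms)  # Beta prior: pseudo-count of successes
    beta = np.ones(num_arms)   # Beta prior: pseudo-count of failures

    for _ in range(horizon):
        # 1. Sample a vector of expected base arm outcomes from the posterior.
        theta = rng.beta(alpha, beta)
        # 2. Ask the combinatorial oracle for the best super arm under theta.
        super_arm = oracle(theta)
        # 3. Play it and observe the outcomes of all triggered base arms.
        feedback = play(super_arm)
        # 4. Update the posterior of every triggered base arm.
        for arm, outcome in feedback.items():
            alpha[arm] += outcome
            beta[arm] += 1 - outcome
    return alpha, beta
```

For an influence maximization instance, for example, `oracle` could be a greedy seed-selection routine run on the sampled edge activation probabilities, and `play` could return the observed activations of the edges triggered by the resulting cascade.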
