
Abstract

We consider a decentralized stochastic multi-armed bandit problem with multiple players and varying communication probabilities between players. Each player decides which arm to pull without cooperation, aiming to maximize his or her own reward, but informs his or her neighbors at the end of every turn about the arm pulled and the reward received. Neighbors are determined by an Erdős–Rényi graph that is regenerated at the beginning of every turn. We consider i.i.d. rewards generated by Bernoulli distributions and assume that players are unaware of the arms' probability distributions and their mean values. In case of a collision, we assume that only one randomly chosen player among those involved receives the reward, while the others receive zero reward. We study the effect of connectivity, i.e., the degree of communication between players, on the cumulative regret using the well-known algorithms UCB1, epsilon-Greedy, and Thompson Sampling.
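The setting described above can be illustrated with a short simulation sketch. The code below is not the authors' implementation; it is a minimal Python sketch of the model under stated assumptions: the arm means, the numbers of players and arms, the horizon, the edge probability, the forced one-sample initialization, and the regret definition (sum over players of the best mean minus realized rewards) are all illustrative choices, and only UCB1 is shown out of the three algorithms studied.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                      # number of arms (illustrative)
N = 4                      # number of players (illustrative)
T = 10_000                 # horizon (illustrative)
p_edge = 0.3               # Erdős–Rényi edge probability ("connectivity")
mu = rng.uniform(0.1, 0.9, size=K)   # unknown Bernoulli means

# Each player keeps its own pull counts and reward sums per arm.
# Forced initialization: one sample per arm so UCB1 is well defined.
counts = np.ones((N, K))
sums = rng.binomial(1, mu, size=(N, K)).astype(float)
regret = 0.0

for t in range(1, T + 1):
    # Each player picks an arm with UCB1 on its locally available data.
    means = sums / counts
    ucb = means + np.sqrt(2.0 * np.log(t) / counts)
    choices = ucb.argmax(axis=1)

    # Realize rewards; on a collision only one randomly chosen player is paid.
    rewards = np.zeros(N)
    for arm in np.unique(choices):
        pullers = np.flatnonzero(choices == arm)
        winner = rng.choice(pullers)
        rewards[winner] = rng.binomial(1, mu[arm])

    regret += N * mu.max() - rewards.sum()

    # Redraw the Erdős–Rényi communication graph for this turn.
    adj = rng.random((N, N)) < p_edge
    adj = np.triu(adj, 1)
    adj = adj | adj.T
    np.fill_diagonal(adj, True)    # every player always observes its own pull

    # Each player updates with its own pull and its current neighbors' pulls.
    for i in range(N):
        for j in np.flatnonzero(adj[i]):
            counts[i, choices[j]] += 1
            sums[i, choices[j]] += rewards[j]

print(f"connectivity p={p_edge}: cumulative regret ~ {regret:.1f}")
```

Sweeping `p_edge` from 0 (no communication) toward 1 (full communication) in such a sketch is one way to probe the connectivity effect the abstract refers to; the epsilon-Greedy and Thompson Sampling variants differ only in how `choices` is computed.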
