Learning and Efficiency in Games with Dynamic Population

(1505.00391)
Published May 3, 2015 in cs.GT

Abstract

We study the quality of outcomes in repeated games when the population of players is dynamically changing and participants use learning algorithms to adapt to the changing environment. Game theory classically considers Nash equilibria of one-shot games, while in practice many games are played repeatedly, and in such games players often use algorithmic tools to learn to play in the given environment. Most previous work on learning in repeated games assumes that the population playing the game is static over time. We analyze the efficiency of repeated games in dynamically changing environments, motivated by application domains such as Internet ad-auctions and packet routing. We prove that, in many classes of games, if players choose their strategies in a way that guarantees low adaptive regret, then high social welfare is ensured, even under very frequent changes. In fact, in large markets learning players achieve asymptotically optimal social welfare despite high turnover. Previous work showed only that high welfare is guaranteed for learning outcomes in static environments. Our work extends these results to the more realistic setting in which participation evolves drastically over time.
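The abstract's key condition is low adaptive regret: regret measured over every contiguous interval of rounds, not just the whole horizon, which is what makes the guarantee robust to a changing population. As a minimal sketch (assuming the standard interval-regret definition; the function name and brute-force evaluation are illustrative, not from the paper), the quantity can be computed for a finished play sequence as:

```python
def adaptive_regret(alg_losses, action_losses):
    """Adaptive regret of a play sequence: the maximum, over all
    contiguous intervals [s, t], of the gap between the algorithm's
    cumulative loss and that of the best fixed action on the interval.

    alg_losses: per-round losses incurred by the algorithm.
    action_losses: per-round lists of losses, one entry per action.
    (Brute-force O(T^2 * n) evaluation, for illustration only.)
    """
    T = len(alg_losses)
    n = len(action_losses[0])
    worst = 0.0
    for s in range(T):
        alg_sum = 0.0
        act_sums = [0.0] * n
        for t in range(s, T):
            alg_sum += alg_losses[t]
            for a in range(n):
                act_sums[a] += action_losses[t][a]
            worst = max(worst, alg_sum - min(act_sums))
    return worst

# Environment favors action 0 for two rounds, then action 1;
# an algorithm stuck on action 1 has zero *static* regret over the
# whole horizon, but adaptive regret 2 on the first interval.
print(adaptive_regret([1, 1, 0, 0],
                      [[0, 1], [0, 1], [1, 0], [1, 0]]))
```

The example shows why adaptive regret is the right yardstick under churn: against the full horizon the two actions tie, so static regret hides the fact that the algorithm was badly off during the first phase.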

