Average submodularity of maximizing anticoordination in network games

(2207.00379)
Published Jul 1, 2022 in math.OC and cs.GT

Abstract

We consider the control of decentralized learning dynamics for agents in an anti-coordination network game. In the anti-coordination network game, there is a preferred action in the absence of neighbors' actions, and the utility an agent receives from the preferred action decreases as more of its neighbors select the preferred action, potentially causing the agent to select a less desirable action. The decentralized dynamics, which are based on the iterated elimination of dominated strategies, converge for the considered game. Given a convergent action profile, we measure anti-coordination by the number of edges in the underlying graph with at least one endpoint agent not taking the preferred action. The maximum anti-coordination (MAC) problem seeks an optimal set of agents to control under a finite budget so that this anti-coordination measure is maximized at the convergence of the dynamics. We show that the MAC objective is submodular in expectation on dense bipartite networks for any realization of the utility constants in the population. Using this result, we obtain a performance guarantee for the greedy agent selection algorithm for MAC. Finally, we provide a computational study showing the effectiveness of greedy node selection strategies for solving MAC on general bipartite networks.
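
To make the greedy selection idea concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's method: the one-way threshold cascade standing in for the decentralized dynamics, the integer thresholds, and all function names are assumptions made for illustration. Only two ingredients follow the abstract directly: the objective that counts edges with at least one endpoint off the preferred action, and the budgeted greedy rule whose guarantee rests on (average) submodularity.

```python
# Hypothetical sketch of greedy agent selection for a MAC-style objective.
# Assumptions (not from the paper): a simple one-way threshold cascade as a
# stand-in for the iterated-elimination dynamics, integer node labels, and
# a plain dict-of-sets adjacency for an undirected graph.

def converged_profile(graph, controlled, thresholds):
    """Simplified stand-in for the decentralized dynamics: controlled agents
    never play the preferred action; an uncontrolled agent abandons it once
    more than thresholds[v] of its neighbors currently play it, and never
    readopts it. Iterate until no agent changes (termination is immediate
    because switches go one way only)."""
    preferred = {v: v not in controlled for v in graph}
    changed = True
    while changed:
        changed = False
        for v, nbrs in graph.items():
            if preferred[v] and sum(preferred[u] for u in nbrs) > thresholds[v]:
                preferred[v] = False
                changed = True
    return preferred

def anti_coordination(graph, preferred):
    """MAC-style objective: number of edges with at least one endpoint not
    taking the preferred action (each undirected edge counted once)."""
    return sum(1 for v, nbrs in graph.items() for u in nbrs
               if u < v and not (preferred[u] and preferred[v]))

def greedy_mac(graph, thresholds, budget):
    """Greedy agent selection: repeatedly control the agent with the largest
    marginal gain in the objective at convergence of the (toy) dynamics."""
    selected = set()
    for _ in range(budget):
        base = anti_coordination(graph, converged_profile(graph, selected, thresholds))
        gains = {v: anti_coordination(graph, converged_profile(graph, selected | {v}, thresholds)) - base
                 for v in graph if v not in selected}
        selected.add(max(gains, key=gains.get))
    return selected

# Tiny bipartite example: nodes 0-3 on one side, 4-7 on the other.
graph = {0: {4, 5}, 1: {4, 6}, 2: {5, 7}, 3: {6, 7},
         4: {0, 1}, 5: {0, 2}, 6: {1, 3}, 7: {2, 3}}
thresholds = {v: 2 for v in graph}  # illustrative values, not from the paper
print(greedy_mac(graph, thresholds, budget=2))
```

In this sketch the marginal gain of a candidate agent is evaluated by rerunning the toy dynamics from scratch; it is the submodularity-in-expectation result quoted in the abstract that justifies trusting such one-step-at-a-time greedy choices with a constant-factor guarantee.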
