Abstract

In this paper, we address the challenge of Nash equilibrium (NE) seeking in non-cooperative convex games with partial-decision information. We propose a distributed algorithm in which each agent refines its strategy through projected-gradient steps and an averaging procedure. Each agent uses estimates of competitors' actions obtained solely from local interactions with its neighbors over a directed communication network. Unlike previous approaches that rely on (strong) monotonicity assumptions, this work establishes convergence to a NE under a diagonal dominance property of the pseudo-gradient mapping, which can be checked locally by the agents. Furthermore, this condition is physically interpretable and relevant to many applications, as it expresses that each agent's objective function is primarily influenced by its own strategic decisions rather than by the actions of its competitors. By virtue of a novel block-infinity-norm convergence argument, we provide explicit bounds on the constant step size that are independent of the communication structure and can be computed in a fully decentralized way. Numerical simulations on a power control problem in optical networks validate the algorithm's effectiveness.
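The abstract describes a consensus-plus-projected-gradient template under partial-decision information: every agent maintains an estimate of the full joint action, averages it with its neighbors' estimates over the directed network, and takes a projected-gradient step on its own coordinate only. The Python sketch below illustrates that general template under stated assumptions; it is not the paper's exact algorithm, and the function name `ne_seeking`, the weight matrix `W`, the gradient and projection callables, the scalar-action simplification, and the fixed step size `alpha` are all illustrative choices.

```python
# A minimal sketch (not the authors' exact scheme) of consensus-based NE seeking
# with partial-decision information: each agent keeps an estimate of ALL agents'
# actions, mixes it with in-neighbors' estimates, and takes a projected-gradient
# step on its own coordinate only. All names and parameters here are assumptions.

import numpy as np

def ne_seeking(W, grads, project, x0, alpha=0.05, iters=2000):
    """
    W       : (N, N) row-stochastic weight matrix of the directed communication graph
    grads   : list of N callables; grads[i](x) returns d f_i / d x_i at joint action x
    project : list of N callables; project[i](v) projects v onto agent i's action set
    x0      : (N,) initial joint action (one scalar action per agent, for simplicity)
    alpha   : constant step size (assumed small enough for convergence)
    """
    N = len(grads)
    # est[i] is agent i's local estimate of the full joint action vector
    est = np.tile(np.asarray(x0, dtype=float), (N, 1))
    for _ in range(iters):
        # 1) averaging step: each agent mixes the estimates of its in-neighbors
        est = W @ est
        # 2) projected-gradient step on the agent's OWN action, using its estimate
        for i in range(N):
            est[i, i] = project[i](est[i, i] - alpha * grads[i](est[i]))
    # read off each agent's own action as the candidate Nash equilibrium
    return np.array([est[i, i] for i in range(N)])

if __name__ == "__main__":
    # Toy 3-agent quadratic game: f_i(x) = (x_i - a_i)^2 + 0.1 * x_i * sum_{j != i} x_j,
    # whose pseudo-gradient is diagonally dominant (self-curvature 2 > 2 * 0.1 per row).
    a = np.array([1.0, 2.0, 3.0])
    grads = [lambda x, i=i: 2.0 * (x[i] - a[i]) + 0.1 * (x.sum() - x[i]) for i in range(3)]
    project = [lambda v: np.clip(v, 0.0, 10.0)] * 3           # box constraint [0, 10]
    W = np.array([[0.5, 0.5, 0.0],                             # row-stochastic weights of a
                  [0.0, 0.5, 0.5],                             # strongly connected digraph
                  [0.5, 0.0, 0.5]])
    print(ne_seeking(W, grads, project, x0=np.zeros(3)))
```

In the toy game above, each cost's self-curvature dominates its cross-couplings, which is the flavor of diagonal dominance invoked in the abstract; a sufficiently small constant step size then drives the iterates toward the equilibrium. The paper's stated contribution is an explicit, locally computable bound on such a step size that does not depend on the communication structure.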
