Emergent Mind

Abstract

Applying deep reinforcement learning to multi-agent systems introduces additional challenges. When many agents are involved, a central open problem is how to achieve sufficient collaboration among diverse agents. To address this problem, we model agent interaction in terms of neighborhoods and propose a multi-agent reinforcement learning (MARL) algorithm based on the actor-critic method. The algorithm adaptively constructs a hypergraph structure representing agent interactions and then performs information extraction and representation learning through hypergraph convolution networks, leading to effective cooperation. Based on two different hypergraph generation methods, we present two variants: Actor Hypergraph Convolutional Critic Network (HGAC) and Actor Attention Hypergraph Critic Network (ATT-HGAC). Experiments under different settings demonstrate the advantages of our approach over existing methods.
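The hypergraph convolution the abstract refers to can be sketched as follows. This is a generic NumPy illustration of the standard hypergraph convolution operator with symmetric degree normalization, not the authors' exact architecture; the incidence matrix `H`, hyperedge weights, and feature dimensions are illustrative assumptions, and in HGAC/ATT-HGAC the hypergraph structure itself would be produced adaptively rather than fixed by hand.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution layer:
        X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta

    X      : (n_agents, d_in) agent feature matrix
    H      : (n_agents, n_edges) incidence matrix; H[i, e] = 1 if
             agent i belongs to hyperedge (neighborhood) e
    Theta  : (d_in, d_out) learnable weight matrix
    edge_w : optional (n_edges,) hyperedge weights (default: all ones)

    Assumes every agent lies in at least one hyperedge and every
    hyperedge is non-empty (so the degree matrices are invertible).
    """
    n_agents, n_edges = H.shape
    w = np.ones(n_edges) if edge_w is None else np.asarray(edge_w, dtype=float)

    dv = H @ w              # node degrees (weighted)
    de = H.sum(axis=0)      # hyperedge degrees

    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    W = np.diag(w)

    # Aggregate agent features into hyperedges, then scatter back to agents,
    # with symmetric normalization, followed by a linear transform.
    return Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta

# Toy example: 3 agents, 2 overlapping neighborhoods (agent 1 is in both).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
Theta = np.eye(2)
out = hypergraph_conv(X, H, Theta)
print(out.shape)  # (3, 2): one mixed feature vector per agent
```

Because agent 1 sits in both hyperedges, its output row blends information from all three agents, which is the neighborhood-level information sharing the abstract relies on for cooperation.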
