Abstract

Most existing studies on large-scale multi-agent coordination learn discrete policies for agents with a finite set of choices. They rarely select actions directly from continuous action spaces, which would provide more precise control, and this makes them unsuitable for more complex tasks. To address the control problem in large-scale multi-agent systems with continuous action spaces, we propose a novel multi-agent reinforcement learning (MARL) coordination control method that derives stable continuous policies. By optimizing policies with maximum entropy learning, agents explore more effectively during execution and achieve excellent performance after training. We also employ hierarchical graph attention networks (HGAT) and gated recurrent units (GRU) to improve the scalability and transferability of our method. Experiments show that our method consistently outperforms all baselines on large-scale multi-agent cooperative reconnaissance tasks.
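The abstract describes maximum entropy learning for continuous policies but gives no code. As a rough illustration only, here is a minimal PyTorch sketch of a maximum-entropy actor update in the style of soft actor-critic; every name below (PolicyNet, actor_loss, q_net, the temperature alpha) is an assumption for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a maximum-entropy update for a continuous policy,
# in the spirit of soft actor-critic. Names and shapes are illustrative
# assumptions, not the paper's implementation.

class PolicyNet(nn.Module):
    """Gaussian policy over a continuous action space."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        # Reparameterized sample keeps the update differentiable.
        action = dist.rsample()
        log_prob = dist.log_prob(action).sum(-1)
        return action, log_prob

def actor_loss(policy, q_net, obs, alpha=0.2):
    """Maximize Q(s, a) + alpha * H(pi(.|s)) by minimizing the negation."""
    action, log_prob = policy(obs)
    # q_net is assumed to score concatenated observation-action pairs.
    q = q_net(torch.cat([obs, action], dim=-1)).squeeze(-1)
    # The entropy bonus (alpha * log_prob term) pushes the policy toward
    # high-value yet high-entropy actions, encouraging exploration.
    return (alpha * log_prob - q).mean()
```

The entropy term in this objective is what corresponds to the abstract's claim that agents "explore more effectively during execution"; how the paper combines this with its HGAT and GRU components is not specified here.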
