
Abstract

We study the distributed synthesis of policies for multi-agent systems to perform spatial-temporal tasks. We formalize the synthesis problem as a factored Markov decision process subject to graph temporal logic specifications. The transition function and the task of each agent depend only on the agent itself and its neighboring agents. In this work, we develop a new distributed synthesis method that improves scalability and runtime by two orders of magnitude compared to our prior work. The method decomposes the problem into a set of smaller problems, one for each agent, by leveraging the structure in the model and in the specifications. We show that the running time of the method is linear in the number of agents, and that the size of each agent's problem is exponential only in the number of its neighbors, which is typically much smaller than the total number of agents. We demonstrate the applicability of the method in case studies on disease control, urban security, and search and rescue. The numerical examples show that the method scales to hundreds of agents with hundreds of states per agent and handles significantly larger state spaces than our prior work.
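The abstract does not include the algorithmic details, so the following is a minimal, hypothetical sketch of the kind of decomposition it describes: each agent solves a subproblem over the product of its own and its neighbors' local state spaces, so the per-agent problem grows exponentially only with the neighborhood size while the outer loop is linear in the number of agents. Standard value iteration over a generic reward stands in for the paper's graph-temporal-logic synthesis step, and all names here (local_value_iteration, decompose_and_solve, the transition/reward callbacks) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: per-agent decomposition of a factored MDP over an agent
# graph. The GTL objective is replaced by a generic reward for simplicity.
from itertools import product


def local_value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-6):
    """Plain value iteration on one agent's local (neighborhood) product MDP.

    transition(s, a) -> dict mapping next_state to probability
    reward(s, a, s2) -> float
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V


def decompose_and_solve(agents, neighbors, local_states, actions, transition, reward):
    """Solve one subproblem per agent; total work grows linearly with len(agents)."""
    results = {}
    for i in agents:                       # one subproblem per agent
        scope = [i] + list(neighbors[i])   # the agent plus its neighbors
        # Joint local state space: exponential only in |scope|, not in len(agents).
        states = list(product(*(local_states[j] for j in scope)))
        V = local_value_iteration(
            states,
            actions[i],
            lambda s, a, i=i: transition(i, s, a),
            lambda s, a, s2, i=i: reward(i, s, a, s2),
        )
        results[i] = V                     # local value function (stands in for a policy)
    return results
```

In this sketch, the caller supplies the per-agent transition and reward callbacks; the key point it illustrates is the claimed complexity split, with the state enumeration confined to each agent's neighborhood and the agent loop running once per agent.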
