SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models

Published 18 Sep 2023 in cs.RO | (2309.10062v2)

Abstract: In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM (Smart Multi-Agent Robot Task Planning using LLMs) harnesses the power of LLMs to convert high-level task instructions into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-LLM/.

Citations (49)

Summary

  • The paper introduces SMART-LLM to translate high-level instructions into executable multi-robot task plans.
  • The framework employs LLMs for task decomposition, coalition formation, and optimal robot task allocation with notable performance metrics.
  • Real-world trials demonstrate the system's robust execution and potential for dynamic multi-agent task planning.

SMART-LLM: Smart Multi-Agent Robot Task Planning using LLMs

The paper "SMART-LLM: Smart Multi-Agent Robot Task Planning using LLMs" (2309.10062) introduces a novel framework designed to leverage LLMs for facilitating task planning in multi-agent robotic systems. This work aims to translate high-level natural language instructions into executable task plans for multi-robot teams, focusing on task decomposition, coalition formation, task allocation, and execution.

Introduction

Recent advancements in multi-robot systems have showcased their ability to significantly enhance efficiency across various domains. However, these systems pose challenges such as coordinating heterogeneous robots with diverse skill sets. This paper addresses these challenges by proposing a framework, SMART-LLM, which uses LLMs to break down complex natural language instructions into sub-tasks, form coalitions among robots, and allocate tasks effectively, enabling coherent execution by robot teams (Figure 1).

Figure 1: An overview of SMART-LLM showing the task planning stages using LLMs for multi-agent systems.

Methodology

The proposed methodology of SMART-LLM is divided into four main stages, each crucial for planning tasks in multi-robot environments.

Task Decomposition

Task decomposition involves converting a high-level task instruction into a set of structured sub-tasks that can be individually managed and executed. SMART-LLM uses pre-defined robot skills and environmental details to inform the LLM of possible decompositions. These decompositions are framed as Pythonic scripts, which enhance the LLM's ability to generate executable plans.
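To make the Pythonic sub-task format concrete, here is a minimal sketch. The skill primitives (`go_to`, `pick_up`, `put_on`) and the example instruction are hypothetical stand-ins for the paper's pre-defined robot skills, not its actual API:

```python
# Actions recorded by the stubbed skill primitives.
actions = []

def go_to(obj):
    actions.append(f"go_to({obj})")

def pick_up(obj):
    actions.append(f"pick_up({obj})")

def put_on(obj, receptacle):
    actions.append(f"put_on({obj}, {receptacle})")

# High-level instruction: "Put the apple in the fridge."
# The LLM decomposes it into one function per independent sub-task:
def subtask_fetch_apple():
    go_to("apple")
    pick_up("apple")
    go_to("fridge")
    put_on("apple", "fridge")

subtask_fetch_apple()
```

Framing sub-tasks as plain Python functions lets later stages treat each one as a schedulable unit and lets the LLM stay within a familiar code-generation idiom.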

Coalition Formation

Once tasks are broken down, SMART-LLM assesses which robots, based on their skills, should collaborate to accomplish each sub-task. This coalition formation step is critical for ensuring that tasks exceeding the capability of a single robot are handled effectively by a team. The LLM is prompted with examples of robot skills and environmental contexts, allowing it to generate a comprehensive coalition policy (Figure 2).

Figure 2: System overview of SMART-LLM showing the four core stages from task decomposition to task execution.
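The coalition logic the LLM is prompted to produce can be sketched as skill-set matching. The robot names and skill labels below are illustrative assumptions, and the greedy team-building step is one plausible policy, not the paper's exact procedure:

```python
# Hypothetical robot skill inventory.
robots = {
    "robot1": {"navigate", "pick", "place"},
    "robot2": {"navigate", "open"},
}

def form_coalition(required, robots):
    """Pick robot(s) whose combined skills cover a sub-task's requirements."""
    # Prefer a single robot whose skill set covers the sub-task outright...
    for name, skills in robots.items():
        if required <= skills:
            return [name]
    # ...otherwise greedily assemble a team that jointly covers it.
    team, covered = [], set()
    for name, skills in robots.items():
        if skills & (required - covered):  # robot contributes a missing skill
            team.append(name)
            covered |= skills
            if required <= covered:
                return team
    return None  # no feasible coalition with the available robots
```

For example, a sub-task requiring both `pick` and `open` exceeds either robot alone, so the policy returns the two-robot team.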

Task Allocation

In the task allocation phase, the LLM assigns individual robots or robot groups to each sub-task using the coalition policy derived earlier. This stage generates executable code by determining the optimal assignment of robots to tasks, ensuring efficient resource utilization and parallel execution where feasible.
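One way to realize this assignment is sketched below, assuming the goal is to give each sub-task a feasible coalition while keeping concurrently running sub-tasks on disjoint robots. The sub-task and robot names are hypothetical, and the greedy strategy is an illustrative simplification:

```python
def allocate(subtask_coalitions):
    """subtask_coalitions: dict mapping sub-task name -> list of feasible
    coalitions (frozensets of robot names) from coalition formation."""
    busy, plan = set(), {}
    for task, options in subtask_coalitions.items():
        for team in options:
            if not (team & busy):    # robots still free: can run in parallel
                plan[task] = team
                busy |= team
                break
        else:
            plan[task] = options[0]  # no free team: fall back to sequential
    return plan

plan = allocate({
    "fetch_apple": [frozenset({"robot1"})],
    "open_fridge": [frozenset({"robot1"}), frozenset({"robot2"})],
})
```

Here `open_fridge` avoids the already-busy `robot1` and goes to `robot2`, so both sub-tasks can proceed in parallel.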

Task Execution

Finally, the allocated plans are executed by the robots. The LLM-generated Python code calls upon pre-programmed APIs that trigger the robots' low-level functionalities, thus closing the loop from planning to action.
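The execution stage can be sketched as threads running each coalition's sub-task against stubbed robot APIs. The API names below are placeholders for the pre-programmed low-level interfaces the paper describes, not its actual function signatures:

```python
import threading

executed, lock = [], threading.Lock()

def robot_api(robot, action):
    # Stub for a pre-programmed low-level robot call.
    with lock:
        executed.append((robot, action))

def run_subtask(robot, subtask):
    robot_api(robot, f"start:{subtask}")
    robot_api(robot, f"done:{subtask}")

# Assignments produced by the allocation stage (illustrative).
assignments = [("robot1", "fetch_apple"), ("robot2", "open_fridge")]
threads = [threading.Thread(target=run_subtask, args=a) for a in assignments]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running one thread per assignment mirrors how independent sub-tasks can execute concurrently once they are mapped to disjoint robots.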

Results

The robustness of SMART-LLM was evaluated using a new benchmark dataset constructed within the AI2-THOR simulation environment. The dataset spans task categories ranging from elemental to complex, providing a comprehensive test bed for the framework. Performance was evaluated with four metrics: Success Rate (SR), Task Completion Rate (TCR), Goal Condition Recall (GCR), and Robot Utilization (RU).
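A hedged reconstruction of how such metrics might be computed is shown below; the paper's exact definitions may differ, and the per-trial fields are assumptions. Each trial here records whether the plan ran to completion, how many goal conditions were met out of the total, and how many robots were used:

```python
def evaluate(trials):
    """Aggregate plausible SR/TCR/GCR/RU scores over a list of trials."""
    n = len(trials)
    tcr = sum(t["completed"] for t in trials) / n             # Task Completion Rate
    sr = sum(t["completed"] and t["met"] == t["total"]
             for t in trials) / n                             # Success Rate
    gcr = sum(t["met"] / t["total"] for t in trials) / n      # Goal Condition Recall
    ru = sum(t["used"] / t["available"] for t in trials) / n  # Robot Utilization
    return {"SR": sr, "TCR": tcr, "GCR": gcr, "RU": ru}

trials = [
    {"completed": True, "met": 3, "total": 3, "used": 2, "available": 2},
    {"completed": True, "met": 1, "total": 2, "used": 1, "available": 2},
]
scores = evaluate(trials)
```

In this toy example both plans complete (TCR of 1.0), but only the first satisfies every goal condition, so SR is 0.5.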

SMART-LLM exhibited strong results, especially in complex task settings requiring sophisticated coalition formation and task allocation decisions. Several LLM backbones, including GPT-3.5, GPT-4, Llama2, and Claude-3, were tested, with GPT-4 and Claude-3 demonstrating superior logical reasoning capabilities.

Real-World Application

In real-world trials, SMART-LLM was applied to tasks such as visibility coverage using heterogeneous robot teams. The system successfully generated task plans that optimally assigned available robots to tasks, reflecting its applicability in practical settings (Figure 3).

Figure 3: Real-robot experiment displaying effective task planning and execution in a patrolling scenario.

Conclusion

SMART-LLM represents a significant advancement in multi-agent robotic task planning, leveraging the reasoning power of LLMs to automatically translate abstract task instructions into coherent plans. This approach highlights the potential for integrating LLMs into robotics and suggests pathways for future enhancements, such as incorporating dynamic task redistributions and exploring agent-specific LLM configurations for even greater adaptability. Further studies could focus on extending the framework's capabilities to include real-time adaptation of task plans in dynamic environments.
