Abstract

Mixed aerial and ground robot teams, which include both unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), are widely used for disaster rescue, social security, precision agriculture, and military missions. However, team capability and the corresponding configuration vary because robots differ in motion speed, sensing range, reachable area, and resilience to dynamic environments. Because the robots within a team are heterogeneous and differ in resilience, it is challenging to perform a task while balancing reasonable task allocation against maximum utilization of robot capabilities. To address this challenge for effective mixed ground and aerial teaming, this paper develops a novel teaming method, proficiency-aware multi-agent deep reinforcement learning (Mix-RL), which guides ground and aerial cooperation by considering the best alignment between robot capabilities, task requirements, and environment conditions. Mix-RL exploits robot capabilities to the greatest extent possible while remaining aware of how well those capabilities adapt to task requirements and environment conditions. Mix-RL's effectiveness in guiding mixed teaming was validated on a social-security task: criminal vehicle tracking.
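The abstract does not spell out how capability-task alignment enters training. One possible reading, offered here purely as an assumption and not as the paper's actual formulation, is that the alignment between a robot's capability vector and a task's requirement vector shapes each agent's reward in the multi-agent learner. The minimal sketch below illustrates that idea; the names (ROBOT_CAPABILITIES, alignment_score, shaped_reward, beta), the feature choices, and the environment penalty are all hypothetical.

```python
import numpy as np

# Hypothetical capability descriptors for illustration only; the abstract
# does not publish the feature set actually used by Mix-RL.
# Features (assumed): [speed, endurance, field of view], each in [0, 1].
ROBOT_CAPABILITIES = {
    "ugv": np.array([0.4, 0.9, 0.2]),
    "uav": np.array([0.9, 0.3, 0.8]),
}


def alignment_score(capability, requirement, env_penalty=0.0):
    """Cosine-style alignment between a robot's capability vector and a
    task-requirement vector, discounted by an environment penalty in [0, 1]."""
    sim = float(capability @ requirement) / (
        np.linalg.norm(capability) * np.linalg.norm(requirement) + 1e-8
    )
    return sim * (1.0 - env_penalty)


def shaped_reward(task_reward, capability, requirement, env_penalty, beta=0.5):
    """Per-agent reward = raw task reward + beta * capability-task alignment.
    beta trades off task success against proficiency-aware assignment."""
    return task_reward + beta * alignment_score(capability, requirement, env_penalty)


if __name__ == "__main__":
    # Tracking a fast-moving vehicle favors speed and field of view, so under
    # this toy requirement vector the UAV receives the larger shaped reward
    # for the same raw task outcome.
    tracking_requirement = np.array([0.8, 0.2, 0.9])
    for name, cap in ROBOT_CAPABILITIES.items():
        r = shaped_reward(task_reward=1.0, capability=cap,
                          requirement=tracking_requirement, env_penalty=0.1)
        print(f"{name}: shaped reward = {r:.3f}")
```

In such a scheme the shaped reward would be consumed by whatever multi-agent policy-gradient or value-based learner the team uses; the paper itself should be consulted for how Mix-RL actually couples proficiency awareness to the learning objective.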
