
Abstract

Robots are good at performing repetitive tasks in modern manufacturing industries. However, robot motions are mostly planned and preprogrammed, with a notable lack of adaptivity to task changes. Even for slightly changed tasks, the whole system must be reprogrammed by robotics experts. It is therefore highly desirable to have a flexible motion planning method with which robots can adapt to specific task changes in unstructured environments, such as production systems or warehouses, with only minimal guidance from non-expert personnel. In this paper, we propose a user-guided motion planning algorithm combined with reinforcement learning (RL) that enables robots to automatically generate motion plans for new tasks by learning from a few kinesthetic human demonstrations. To achieve adaptive motion plans for a specific application environment, e.g., desk assembly or warehouse loading/unloading, a library is built by abstracting features of commonly demonstrated tasks. We define a semantic similarity between features in the library and features of a new task, which is then used to construct the reward function in RL. The RL policy automatically generates a motion plan for a new task if it determines that the new task's constraints can be satisfied with the current library; otherwise, it requests additional human demonstrations. Multiple experiments conducted on common tasks and scenarios demonstrate that the proposed user-guided, RL-assisted motion planning method is effective.
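The abstract's core mechanism — scoring a new task's features against a demonstration library and either reusing the library or requesting a new demonstration — can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the cosine-similarity measure, the feature-vector encoding, and the `threshold` parameter are all hypothetical stand-ins for the paper's semantic similarity definition.

```python
import numpy as np

def semantic_similarity(task_feature, library_feature):
    # Hypothetical similarity: cosine similarity between feature vectors.
    # The paper defines its own semantic similarity; this is a placeholder.
    a = np.asarray(task_feature, dtype=float)
    b = np.asarray(library_feature, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate_task(task_feature, library, threshold=0.8):
    """Score a new task against the demonstration library.

    Returns (best_similarity, needs_demonstration). The best similarity
    would feed the RL reward; if it falls below the (assumed) threshold,
    an additional human demonstration is requested instead.
    """
    sims = [semantic_similarity(task_feature, f) for f in library]
    best = max(sims)
    return best, best < threshold
```

For example, a task feature identical to a library entry scores 1.0 and triggers no demonstration request, while an orthogonal feature scores 0.0 and would prompt the user for a new kinesthetic demonstration.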
