Exploiting Ergonomic Priors in Human-to-Robot Task Transfer (2003.00544v1)
Abstract: In recent years, there has been a marked shift towards versatile, autonomous robots, driven by means of intuitively teaching robots task-oriented behaviour by demonstration. In this paper, a method based on programming by demonstration is proposed to learn null space policies from constrained motion data. The main advantages of this approach are the generalisation of a task by retargeting a system's redundancy, and the capability to fully replace one system with another of differing link number and lengths while still accurately repeating a task subject to the same constraints. The effectiveness of the method has been demonstrated in a 3-link simulation and in a real-world experiment using a human subject as the demonstrator, and is verified through task reproduction on a 7DoF physical robot. In simulation, the method is accurate with as few as five data points, producing errors below 10^-14. The approach is shown to outperform the current state-of-the-art approach in a simulated 3DoF robot manipulator control problem where motions are reproduced using learnt constraints. Retargeting of a system's null space component is also demonstrated in a task where controlling how redundancy is resolved allows for obstacle avoidance. Finally, the approach is verified in a real-world experiment using demonstrations from a human subject, where the learnt task space trajectory is transferred onto a 7DoF physical robot of a different embodiment.
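To make the redundancy-retargeting idea concrete, the sketch below shows the standard null space projection used in redundancy resolution, which is the structure the abstract's "null space policies" act on. This is a minimal illustration, not the paper's learning algorithm: the planar 3-link arm, the function names, and the secondary posture policy `q_dot_null` are all assumptions introduced here for demonstration.

```python
import numpy as np

def jacobian_planar(q, link_lengths):
    """Analytical Jacobian of the end-effector position for a planar n-link arm."""
    n = len(q)
    J = np.zeros((2, n))
    cum = np.cumsum(q)  # absolute link angles
    for i in range(n):
        # Column i: derivative of (x, y) with respect to joint i
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(cum[i:]))
    return J

def resolve_redundancy(q, x_dot, q_dot_null, link_lengths):
    """Task-space tracking with a secondary (null space) policy.

    q_dot = J^+ x_dot + (I - J^+ J) q_dot_null
    The second term moves the joints without disturbing the end-effector;
    this is the component a learnt null space policy can retarget, e.g.
    for obstacle avoidance or transfer to a different embodiment.
    """
    J = jacobian_planar(q, link_lengths)
    J_pinv = np.linalg.pinv(J)
    N = np.eye(len(q)) - J_pinv @ J  # null space projector
    return J_pinv @ x_dot + N @ q_dot_null

# Hypothetical example: 3-link planar arm tracking an end-effector velocity
# while a secondary policy pushes joint 2 towards zero in the null space.
L = np.array([1.0, 1.0, 1.0])
q = np.array([0.3, 0.6, -0.4])
x_dot = np.array([0.05, 0.0])             # desired end-effector velocity
q_dot_null = np.array([0.0, -q[1], 0.0])  # assumed secondary posture preference

q_dot = resolve_redundancy(q, x_dot, q_dot_null, L)
print("joint velocities:", q_dot)
print("end-effector velocity check:", jacobian_planar(q, L) @ q_dot)  # ~= x_dot
```

Because the null space term is projected through (I - J^+ J), changing `q_dot_null` alters the arm's posture but leaves the task space motion unchanged, which is why such policies can be swapped or relearnt when transferring a task to a robot with different kinematics.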