Controlling Assistive Robots with Learned Latent Actions

arXiv:1909.09674
Published Sep 20, 2019 in cs.RO

Abstract

Assistive robotic arms enable users with physical disabilities to perform everyday tasks without relying on a caregiver. Unfortunately, the very dexterity that makes these arms useful also makes them challenging to teleoperate: the robot has more degrees-of-freedom than the human can directly coordinate with a handheld joystick. Our insight is that we can make assistive robots easier for humans to control by leveraging latent actions. Latent actions provide a low-dimensional embedding of high-dimensional robot behavior: for example, one latent dimension might guide the assistive arm along a pouring motion. In this paper, we design a teleoperation algorithm for assistive robots that learns latent actions from task demonstrations. We formulate the controllability, consistency, and scaling properties that user-friendly latent actions should have, and evaluate how different low-dimensional embeddings capture these properties. Finally, we conduct two user studies on a robotic arm to compare our latent action approach to both state-of-the-art shared autonomy baselines and a teleoperation strategy currently used by assistive arms. Participants completed assistive eating and cooking tasks more efficiently when leveraging our latent actions, and also subjectively reported that latent actions made the task easier to perform. The video accompanying this paper can be found at: https://youtu.be/wjnhrzugBj4.
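The core technical idea in the abstract — decoding a low-dimensional latent action, conditioned on the robot's current state, into a high-dimensional joint command — can be sketched as a state-conditioned autoencoder trained on task demonstrations. The sketch below is illustrative only: the network sizes, the 2-D latent space (matching a 2-axis joystick), the 7-DoF arm, and all function names are assumptions, not details confirmed by the abstract.

```python
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    """Sketch of a conditional autoencoder for latent actions.

    Assumed setup: a 7-DoF arm whose high-dimensional actions (joint
    velocities) are embedded into a 2-D latent space so that a 2-axis
    joystick can drive them.
    """
    def __init__(self, state_dim=7, action_dim=7, latent_dim=2, hidden=64):
        super().__init__()
        # Encoder: (state, high-DoF action) -> low-dimensional latent action z
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder: (state, latent action z) -> reconstructed high-DoF action
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        recon = self.decoder(torch.cat([state, z], dim=-1))
        return recon, z

def train(model, demos, epochs=100, lr=1e-3):
    """Fit the embedding by reconstructing demonstrated (state, action) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for state, action in demos:  # batches drawn from task demonstrations
            recon, _ = model(state, action)
            opt.zero_grad()
            loss_fn(recon, action).backward()
            opt.step()

# At teleoperation time, the joystick input is treated directly as the
# latent action z; the decoder maps (current state, z) to a joint command.
def joystick_to_command(model, state, joystick_xy):
    with torch.no_grad():
        return model.decoder(torch.cat([state, joystick_xy], dim=-1))
```

Under this (assumed) formulation, the paper's controllability, consistency, and scaling properties become constraints on the decoder: small joystick inputs should produce small, predictable, and proportionally scaled changes in the arm's motion.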
