Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment
arXiv:2007.09053
Published Jul 13, 2020 in cs.RO, cs.AI, and cs.CL
Abstract
We present a new interface for controlling a navigation robot in novel environments using coordinated gesture and language. We use a TurtleBot3 robot with a LIDAR and a camera, an embodied simulation of what the robot has encountered while exploring, and a cross-platform bridge facilitating generic communication. A human partner can deliver instructions to the robot using spoken English and gestures relative to the simulated environment, to guide the robot through navigation tasks.