
Abstract

Tactile sensors provide information that can be used to learn and execute manipulation tasks. Different tasks, however, may require different levels of sensory information, which in turn likely affects learning rates and performance. This paper evaluates the role of tactile information in autonomous learning of manipulation with a simulated 3-finger tendon-driven hand. We compare the ability of the same learning algorithm (Proximal Policy Optimization, PPO) to learn two manipulation tasks (rolling a ball about the horizontal axis with and without rotational stiffness) with three levels of tactile sensing: no sensing, 1D normal force, and 3D force vector. Surprisingly, and contrary to recent work on manipulation, adding 1D force-sensing did not always improve learning rates compared to no sensing, likely depending on whether normal force is relevant to the task. Nonetheless, even though 3D force-sensing increases the dimensionality of the sensory input, which would in general hamper algorithm convergence, it resulted in faster learning rates and better performance. We conclude that, in general, sensory input is useful to learning only when it is relevant to the task, as is the case for 3D force-sensing in in-hand manipulation against gravity. Moreover, the utility of 3D force-sensing can even offset the added computational cost of learning with higher-dimensional sensory input.
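The three sensing levels differ only in how much of each fingertip's contact force enters the policy's observation vector. A minimal sketch of that idea is below; the finger count matches the paper's 3-finger hand, but the proprioceptive dimensionality, the function names, and the convention that the normal force is the third force component are all illustrative assumptions, not details from the paper.

```python
import numpy as np

N_FINGERS = 3   # simulated 3-finger tendon-driven hand (from the abstract)
N_JOINTS = 9    # hypothetical proprioceptive dimensions; not stated in the paper


def obs_dim(tactile_mode):
    """Observation dimensionality for each tactile-sensing level.

    "none" -> proprioception only
    "1d"   -> adds one normal-force scalar per fingertip
    "3d"   -> adds a full 3D force vector per fingertip
    """
    tactile = {"none": 0, "1d": 1, "3d": 3}[tactile_mode]
    return N_JOINTS + N_FINGERS * tactile


def make_obs(joint_angles, fingertip_forces, tactile_mode):
    """Assemble the flat observation vector a PPO policy would consume."""
    parts = [np.asarray(joint_angles, dtype=float)]
    for f in fingertip_forces:            # one 3D contact force per finger
        f = np.asarray(f, dtype=float)
        if tactile_mode == "1d":
            parts.append(f[2:3])          # keep only the (assumed) normal component
        elif tactile_mode == "3d":
            parts.append(f)               # full 3D force vector
    return np.concatenate(parts)
```

Under these assumptions the observation grows from 9 to 12 to 18 dimensions across the three conditions, which makes concrete the paper's point: 3D sensing enlarges the input the algorithm must learn over, yet still yielded faster learning when the extra components were task-relevant.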
