Grasp Learning by Sampling from Demonstration

(1611.06366)
Published Nov 19, 2016 in cs.RO

Abstract

Robotic grasping traditionally relies on object features or shape information, both for learning new grasps and for applying grasps already learned. We argue, however, that such a strong reliance on geometric object information makes grasping and grasp learning difficult in cluttered environments with high uncertainty, where reasonable object models are not available. In this paper we therefore investigate model-free stochastic optimization for grasp learning. Our proposed method requires only a handful of user-demonstrated grasps and an initial prior in the form of a rough sketch of the object's grasp affordance density; it needs no geometric knowledge of the object beyond its pose. Our experiments demonstrate the promising applicability of the proposed learning method.
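
The abstract describes the approach only at a high level. As an illustration of the general idea, the sketch below (not the paper's exact algorithm) seeds a Gaussian kernel density estimate over object-relative grasp positions from a few demonstrations and refines it with a simple sample-evaluate-reweight loop, a basic form of model-free stochastic optimization. The helper `execute_and_score`, the demonstration values, and the bandwidth are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch only: a weighted Gaussian KDE serves as the grasp
# affordance density, seeded from user demonstrations and refined by
# sampling candidates, scoring them, and reweighting the density.

rng = np.random.default_rng(0)

def kde_sample(grasps, weights, bandwidth, n):
    """Draw n grasp candidates from a weighted Gaussian KDE."""
    idx = rng.choice(len(grasps), size=n, p=weights / weights.sum())
    return grasps[idx] + rng.normal(scale=bandwidth, size=(n, grasps.shape[1]))

def execute_and_score(grasp):
    """Hypothetical stand-in for executing a grasp (on a robot or in
    simulation) and returning a success score in [0, 1]."""
    target = np.array([0.05, 0.0, 0.12])  # assumed good grasp point
    return float(np.exp(-20.0 * np.linalg.norm(grasp - target) ** 2))

# A handful of user-demonstrated grasps (object-relative positions, metres).
demos = np.array([
    [0.04, 0.01, 0.10],
    [0.06, -0.02, 0.11],
    [0.05, 0.00, 0.13],
])
weights = np.ones(len(demos))
bandwidth = 0.02

for it in range(10):
    candidates = kde_sample(demos, weights, bandwidth, n=20)
    scores = np.array([execute_and_score(g) for g in candidates])
    # Scored samples become the new weighted support of the density.
    demos, weights = candidates, scores + 1e-6
    print(f"iter {it}: best score {scores.max():.3f}")
```

Note that the actual method operates on full grasp poses relative to the object; the position-only state here is a simplification for readability.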
