Abstract

Vision-based grasp detection is an important research direction in robotics. However, limitations of the rectangle metric used to evaluate detected grasp rectangles allow false-positive grasps to pass, causing failures in real-world robot grasping tasks. In this paper, we propose a novel generative convolutional neural network model to improve the accuracy and robustness of robot grasp detection in real-world scenes. First, a Gaussian-based guided training method encodes the quality of the grasp point and grasp angle in the grasp pose, highlighting the highest-quality grasp position and angle and reducing the generation of false-positive grasps. Second, deformable convolution is used to extract the shape features of the object, guiding the subsequent network toward the grasp position. Furthermore, a global-local feature fusion method is introduced to efficiently obtain finer features during the feature reconstruction stage, allowing the network to focus on the features of the grasped objects. Our method achieves accuracies of 99.0% and 95.9% on the Cornell Grasping Dataset and the Jacquard Dataset, respectively. Finally, the proposed method is validated in a real-world robot grasping scenario.
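
The Gaussian-based encoding mentioned above can be illustrated with a minimal sketch: the annotated grasp point becomes the peak of a 2-D Gaussian in the quality map, so confidence decays smoothly away from the best grasp position. The function name, map size, and the choice of `sigma` below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_quality_map(h, w, cx, cy, sigma=3.0):
    """Encode a grasp point (cx, cy) as a 2-D Gaussian on an h x w map.

    The annotated grasp point carries quality 1.0 and quality decays
    with distance, so training targets emphasize the best grasp position
    instead of treating the whole rectangle as equally graspable.
    (Sketch only; sigma is an assumed hyperparameter.)
    """
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# Example: a 224x224 quality target peaked at pixel (row=100, col=120)
q = gaussian_quality_map(224, 224, cx=120, cy=100)
assert q.max() == 1.0  # peak sits exactly at the annotated grasp point
```

The same idea extends to the angle branch: instead of a hard one-hot angle label, nearby pixels can regress toward the annotated angle weighted by this Gaussian quality, which is one plausible way to realize the "guided training" the abstract describes.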
