
Toward a Plug-and-Play Vision-Based Grasping Module for Robotics (2310.04349v2)

Published 6 Oct 2023 in cs.RO and cs.LG

Abstract: Despite recent advancements in AI for robotics, grasping remains a partially solved challenge, hindered by the lack of benchmarks and reproducibility constraints. This paper introduces a vision-based grasping framework that can easily be transferred across multiple manipulators. Leveraging Quality-Diversity (QD) algorithms, the framework generates diverse repertoires of open-loop grasping trajectories, enhancing adaptability while maintaining a diversity of grasps. This framework addresses two main issues: the lack of an off-the-shelf vision module for detecting object pose and the generalization of QD trajectories to the whole robot operational space. The proposed solution combines multiple vision modules for 6DoF object detection and tracking while rigidly transforming QD-generated trajectories into the object frame. Experiments on a Franka Research 3 arm and a UR5 arm with a SIH Schunk hand demonstrate comparable performance when the real scene aligns with the simulation used for grasp generation. This work represents a significant stride toward building a reliable vision-based grasping module transferable to new platforms, while being adaptable to diverse scenarios without further training iterations.
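
The central operation the abstract describes, rigidly transforming QD-generated open-loop trajectories into the frame of the detected object, amounts to composing homogeneous transforms: each waypoint stored in the object frame is pre-multiplied by the object pose estimated by the vision module. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the (position, quaternion) waypoint layout, and the example poses are assumptions.

```python
import numpy as np

def pose_to_matrix(position, quaternion):
    """Build a 4x4 homogeneous transform from a position and an (x, y, z, w) quaternion."""
    x, y, z, w = quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

def transform_trajectory(waypoints_obj, T_base_obj):
    """Map end-effector waypoints expressed in the object frame into the robot base frame.

    waypoints_obj: list of (position, quaternion) pairs in the object frame,
                   e.g. a QD-generated open-loop grasping trajectory.
    T_base_obj:    4x4 pose of the object in the robot base frame, as estimated
                   by the 6DoF detection/tracking module (assumed interface).
    """
    waypoints_base = []
    for position, quaternion in waypoints_obj:
        T_obj_ee = pose_to_matrix(position, quaternion)
        T_base_ee = T_base_obj @ T_obj_ee  # rigid transform into the base frame
        waypoints_base.append(T_base_ee)
    return waypoints_base

# Hypothetical example: object detected 40 cm in front of the robot, identity orientation.
T_base_obj = pose_to_matrix([0.4, 0.0, 0.1], [0.0, 0.0, 0.0, 1.0])
trajectory_obj = [
    ([0.0, 0.0, 0.15], [0.0, 0.0, 0.0, 1.0]),  # pre-grasp waypoint above the object
    ([0.0, 0.0, 0.02], [0.0, 0.0, 0.0, 1.0]),  # grasp waypoint at the object
]
trajectory_base = transform_trajectory(trajectory_obj, T_base_obj)
```

Because the transform is applied to the whole trajectory rather than a single grasp pose, the repertoire generated in simulation can be replayed anywhere in the robot's operational space once the object pose is known, without retraining.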

Citations (3)
