Sim-Grasp: Learning 6-DOF Grasp Policies for Cluttered Environments Using a Synthetic Benchmark (2405.00841v2)

Published 1 May 2024 in cs.RO and cs.AI

Abstract: In this paper, we present Sim-Grasp, a robust 6-DOF two-finger grasping system that integrates advanced LLMs for enhanced object manipulation in cluttered environments. We introduce the Sim-Grasp-Dataset, which includes 1,550 objects across 500 scenarios with 7.9 million annotated labels, and develop Sim-GraspNet to generate grasp poses from point clouds. The Sim-Grasp-Policies achieve grasping success rates of 97.14% for single objects and 87.43% and 83.33% for mixed clutter scenarios of Levels 1-2 and Levels 3-4 objects, respectively. By incorporating LLMs for target identification through text and box prompts, Sim-Grasp enables both object-agnostic and target picking, pushing the boundaries of intelligent robotic systems.
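
The abstract compresses a two-stage pipeline: Sim-GraspNet proposes and scores 6-DOF grasp poses directly from a scene point cloud, and a language model grounds a text or box prompt to decide which object the grasp should target, enabling both object-agnostic and targeted picking. The sketch below is a minimal illustration of that dataflow only, not the authors' implementation: `propose_grasps`, `ground_target`, and the dummy scoring heuristic are hypothetical placeholders standing in for the learned network and the prompt-grounding model.

```python
import numpy as np
from typing import Optional

# Hypothetical stand-in for Sim-GraspNet: score candidate grasp centers on a
# scene point cloud. A real system would run a learned point-cloud network;
# here candidates are ranked by proximity to the cloud centroid purely so the
# sketch executes end to end.
def propose_grasps(points: np.ndarray, num_candidates: int = 64):
    centroid = points.mean(axis=0)
    idx = np.random.choice(len(points), size=num_candidates, replace=False)
    positions = points[idx]                       # grasp centers (x, y, z)
    approach = centroid - positions               # approach directions
    approach /= np.linalg.norm(approach, axis=1, keepdims=True) + 1e-9
    scores = -np.linalg.norm(positions - centroid, axis=1)  # dummy quality
    # A full 6-DOF pose would also carry the gripper's in-plane rotation.
    return positions, approach, scores

# Hypothetical stand-in for prompt grounding: the paper drives this step with
# language/vision models taking text or box prompts; here we return a fixed
# axis-aligned box so the target-picking path is exercised.
def ground_target(prompt: str, points: np.ndarray) -> np.ndarray:
    lo, hi = points.min(axis=0), points.max(axis=0)
    return np.stack([lo, lo + 0.5 * (hi - lo)])   # box corners, shape (2, 3)

def pick(points: np.ndarray, prompt: Optional[str] = None):
    positions, approach, scores = propose_grasps(points)
    if prompt is not None:                        # target picking
        box = ground_target(prompt, points)
        inside = np.all((positions >= box[0]) & (positions <= box[1]), axis=1)
        scores = np.where(inside, scores, -np.inf)
    best = int(np.argmax(scores))                 # object-agnostic otherwise
    return positions[best], approach[best]

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)               # synthetic scene point cloud
    pos, vec = pick(cloud, prompt="the red mug")
    print("grasp center:", pos, "approach:", vec)
```

The real system ranks grasps with a network trained on the 7.9 million annotated labels of the Sim-Grasp-Dataset; the sketch only preserves the interface: point cloud in, a ranked 6-DOF grasp out, optionally restricted to a prompted target region.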
