Real-Time Simultaneous Localization and Mapping with LiDAR intensity (2301.09257v2)

Published 23 Jan 2023 in cs.CV and cs.RO

Abstract: We propose a novel real-time LiDAR intensity-image-based simultaneous localization and mapping method that addresses the geometric degeneracy problem in unstructured environments. Traditional LiDAR-based front-end odometry relies mostly on geometric features such as points, lines, and planes; a lack of these features in the environment can cause the entire odometry system to fail. To avoid this problem, we extract feature points from the LiDAR-generated point cloud that match features identified in LiDAR intensity images. We then use the extracted feature points to perform scan registration and estimate the robot's ego-motion. For the back-end, we jointly optimize the distances between corresponding feature points and the point-to-plane distances for planes identified in the map. In addition, we use the features extracted from intensity images to detect loop-closure candidates among previous scans and perform pose graph optimization. Our experiments show that our method runs in real time with high accuracy and works well under illumination changes and in low-texture, unstructured environments.
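The back-end objective described in the abstract combines point-to-point residuals between matched intensity-image feature points with point-to-plane residuals against planes identified in the map. A minimal sketch of such a joint cost is shown below; the function names, weights, and single-plane simplification are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def point_to_point_residuals(src, dst):
    # Euclidean distance between each pair of matched feature points
    # src, dst: (N, 3) arrays of corresponding 3D points
    return np.linalg.norm(src - dst, axis=1)

def point_to_plane_residuals(pts, plane_pt, plane_n):
    # Signed distance of each point to a plane given by a point on the
    # plane and its normal (normalized here for safety)
    n = plane_n / np.linalg.norm(plane_n)
    return (pts - plane_pt) @ n

def joint_cost(src, dst, pts, plane_pt, plane_n, w_feat=1.0, w_plane=1.0):
    # Weighted sum of squared residuals: feature correspondences plus
    # point-to-plane terms, mirroring the joint back-end objective
    r_feat = point_to_point_residuals(src, dst)
    r_plane = point_to_plane_residuals(pts, plane_pt, plane_n)
    return w_feat * np.sum(r_feat**2) + w_plane * np.sum(r_plane**2)
```

In the paper's pipeline this kind of cost would be a function of the sensor pose (applied to `src` and `pts`) and minimized with a nonlinear least-squares solver; the sketch above only evaluates the residuals for fixed point sets.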

Authors (2)
  1. Wenqiang Du
  2. Giovanni Beltrame
Citations (7)

