
Change of Scenery: Unsupervised LiDAR Change Detection for Mobile Robots (2309.10924v2)

Published 19 Sep 2023 in cs.RO

Abstract: This paper presents a fully unsupervised deep change detection approach for mobile robots with 3D LiDAR. In unstructured environments, it is infeasible to define a closed set of semantic classes. Instead, semantic segmentation is reformulated as binary change detection. We develop a neural network, RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to detect scene changes with respect to the map. Using a novel loss function, existing point-cloud semantic segmentation networks can be trained to perform change detection without any labels or assumptions about local semantics. We demonstrate the performance of this approach on data from challenging terrains; mean intersection over union (mIoU) scores range between 67.4% and 82.2% depending on the amount of environmental structure. This outperforms the geometric baseline used in all experiments. The neural network runs faster than 10 Hz and is integrated into a robot's autonomy stack to allow safe navigation around obstacles that intersect the planned path. In addition, a novel method for the rapid automated acquisition of per-point ground-truth labels is described. Covering changed parts of the scene with retroreflective materials and applying a threshold filter to the intensity channel of the LiDAR allows for quantitative evaluation of the change detector.
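The abstract does not specify the geometric baseline it outperforms; a common baseline of this kind flags scan points that lie far from every point in the prior map. Below is a minimal sketch under that assumption, taking the map and scan as N×3 NumPy arrays; the function name and the 0.5 m threshold are illustrative, not parameters from the paper.

```python
# Hedged sketch of a nearest-neighbor geometric baseline for change
# detection: a scan point is "changed" if no map point lies nearby.
# detect_changes and the 0.5 m threshold are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def detect_changes(map_points: np.ndarray,
                   scan_points: np.ndarray,
                   dist_thresh: float = 0.5) -> np.ndarray:
    """Return a boolean change mask over scan_points."""
    tree = cKDTree(map_points)               # spatial index over the map
    dists, _ = tree.query(scan_points, k=1)  # nearest-map-point distance
    return dists > dist_thresh               # True where the scene changed
```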

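The evaluation protocol described at the end of the abstract (retroreflective covers plus an intensity threshold) maps directly to a per-point labeling rule. A minimal sketch, assuming a per-point intensity array; the helper names and the threshold value are hypothetical, not taken from the paper.

```python
# Hedged sketch of the retroreflective ground-truth labeling: points on
# retroreflective material return high intensity, so thresholding the
# LiDAR intensity channel yields per-point change labels. The threshold
# of 200.0 is an illustrative assumption, not the paper's value.
import numpy as np

def label_from_intensity(intensity: np.ndarray,
                         thresh: float = 200.0) -> np.ndarray:
    """Per-point ground-truth change labels from LiDAR intensity."""
    return intensity > thresh

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, gt).sum() / union
```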
