Real-time Neural Dense Elevation Mapping for Urban Terrain with Uncertainty Estimations (2208.03467v2)

Published 6 Aug 2022 in cs.RO

Abstract: Accurate terrain information is essential for improving the performance of various downstream tasks on complex terrain, especially the locomotion and navigation of legged robots. We present a novel framework for neural urban terrain reconstruction with uncertainty estimation. It generates dense robot-centric elevation maps online from sparse LiDAR observations. We design a novel pre-processing and point-feature representation approach that ensures high robustness and computational efficiency when integrating multiple point cloud frames. A Bayesian-GAN model then recovers detailed terrain structures while simultaneously providing pixel-wise reconstruction uncertainty. We evaluate the proposed pipeline through extensive simulation and real-world experiments. It demonstrates efficient, high-quality terrain reconstruction in real time on a mobile platform, which further benefits the downstream tasks of legged robots. (See https://kin-zhang.github.io/ndem/ for more details.)
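
The two stages the abstract describes — accumulating sparse LiDAR returns into a robot-centric 2.5D grid, then densifying it with a Bayesian generator that also reports per-pixel uncertainty — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the max-height rasterization, the `rasterize_elevation`/`mc_uncertainty` names, and the Monte-Carlo treatment of a stochastic generator are all assumptions standing in for the paper's point-feature pipeline and Bayesian-GAN.

```python
import numpy as np

def rasterize_elevation(points, grid_size=128, resolution=0.05):
    """Accumulate a robot-centric point cloud (N, 3) into a sparse 2.5D
    elevation grid. Cells with no LiDAR return stay NaN ("holes" the
    neural model is later asked to fill)."""
    half = grid_size * resolution / 2.0
    elev = np.full((grid_size, grid_size), np.nan, dtype=np.float32)
    # Keep only points inside the robot-centric square window.
    inside = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts = points[inside]
    ix = np.clip(((pts[:, 0] + half) / resolution).astype(int), 0, grid_size - 1)
    iy = np.clip(((pts[:, 1] + half) / resolution).astype(int), 0, grid_size - 1)
    # Max-height aggregation per cell; np.fmax ignores the NaN initializer.
    np.fmax.at(elev, (iy, ix), pts[:, 2].astype(np.float32))
    return elev

def mc_uncertainty(generator, sparse_map, n_samples=8):
    """Pixel-wise mean and standard deviation over repeated passes of a
    stochastic (e.g. dropout-enabled) generator: the std is a simple
    Monte-Carlo proxy for reconstruction uncertainty."""
    samples = np.stack([generator(sparse_map) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

A toy usage, with a fake point cloud and a fake stochastic generator standing in for a dropout-enabled inpainting network:

```python
rng = np.random.default_rng(0)
points = rng.uniform(-3.0, 3.0, size=(5000, 3))   # stand-in for one LiDAR sweep
sparse = rasterize_elevation(points)

# Stand-in generator: fills holes with 0 and adds noise, mimicking
# repeated stochastic forward passes of the reconstruction network.
fake_gen = lambda m: np.nan_to_num(m) + 0.01 * rng.standard_normal(m.shape)
dense, sigma = mc_uncertainty(fake_gen, sparse)
```

The per-pixel `sigma` plays the role of the reconstruction-uncertainty channel: downstream planners can treat high-variance cells as untrustworthy terrain.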

