
Off the Radar: Uncertainty-Aware Radar Place Recognition with Introspective Querying and Map Maintenance (2306.12556v1)

Published 21 Jun 2023 in cs.RO

Abstract: Localisation with Frequency-Modulated Continuous-Wave (FMCW) radar has gained increasing interest due to its inherent resistance to challenging environments. However, complex artefacts of the radar measurement process require appropriate uncertainty estimation to ensure the safe and reliable application of this promising sensor modality. In this work, we propose a multi-session map management system which constructs the best maps for subsequent localisation based on learned variance properties in an embedding space. Using the same variance properties, we also propose a new way to introspectively reject localisation queries that are likely to be incorrect. For this, we apply robust noise-aware metric learning, which both leverages the short-timescale variability of radar data along a driven path (for data augmentation) and predicts the downstream uncertainty in metric-space-based place recognition. We demonstrate the effectiveness of our method through extensive cross-validated tests on the Oxford Radar RobotCar and MulRan datasets. In these, we outperform the current state-of-the-art in radar place recognition and other uncertainty-aware methods when using only single nearest-neighbour queries. We also show consistent performance increases when rejecting queries based on uncertainty in a difficult test environment, which we did not observe for a competing uncertainty-aware place recognition system.
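
The abstract describes two uses of the learned embedding variance: rejecting localisation queries that are likely to be wrong, and deciding which map entries to retain. The sketch below is a rough Python illustration of the first idea only, introspective rejection ahead of a single nearest-neighbour lookup; the function name `query_place`, the scalar `query_variance` input, and the threshold value are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def query_place(map_embeddings, query_embedding, query_variance, var_threshold=0.5):
    """Single nearest-neighbour place recognition with introspective rejection.

    map_embeddings : (N, D) array of embeddings for the maintained map.
    query_embedding: (D,) embedding of the live radar scan.
    query_variance : scalar predicted embedding uncertainty for the query
                     (e.g. the mean of a per-dimension variance head).
    var_threshold  : hypothetical rejection threshold; in practice it would be
                     tuned on held-out data.
    """
    # Introspective rejection: refuse to localise when the network reports
    # high embedding uncertainty for this query.
    if query_variance > var_threshold:
        return None  # caller can defer localisation or fall back to odometry

    # Otherwise answer with the single nearest neighbour in embedding space.
    dists = np.linalg.norm(map_embeddings - query_embedding, axis=1)
    return int(np.argmin(dists))

# Purely illustrative usage with random embeddings:
rng = np.random.default_rng(0)
map_emb = rng.normal(size=(100, 128))
query = rng.normal(size=128)
match_index = query_place(map_emb, query, query_variance=0.2)
```

A confident query returns a map index; an uncertain one returns None, which mirrors the abstract's point that rejecting high-uncertainty queries improves downstream performance.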
