
Online Targetless Radar-Camera Extrinsic Calibration Based on the Common Features of Radar and Camera (2309.00787v2)

Published 2 Sep 2023 in cs.RO, cs.SY, eess.IV, eess.SP, and eess.SY

Abstract: Sensor fusion is essential for autonomous driving and autonomous robots, and radar-camera fusion systems have gained popularity due to the complementary sensing capabilities of the two sensors. However, accurate calibration between them is crucial for effective fusion and overall system performance. Calibration involves intrinsic and extrinsic calibration, with the latter being particularly important for accurate sensor fusion. Unfortunately, many target-based calibration methods require complex operating procedures and well-designed experimental conditions, making their results difficult to reproduce. To address this issue, we introduce a novel approach that leverages deep learning to extract a common feature from raw radar data (i.e., Range-Doppler-Angle data) and camera images. Rather than representing these common features explicitly, our method uses them implicitly to match identical objects across the two data sources. This feature-based matching enables an online targetless calibration method between the radar and camera systems, from which the extrinsic transformation matrix is estimated. To enhance the accuracy and robustness of the calibration, we apply RANSAC and the Levenberg-Marquardt (LM) nonlinear optimization algorithm when deriving the matrix. Our real-world experiments demonstrate the effectiveness and accuracy of the proposed method.
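The RANSAC-plus-refinement pipeline described in the abstract can be illustrated with a simplified sketch. The example below is not the paper's implementation: it estimates a 2D rigid transform (rotation plus translation) between matched point sets, standing in for the full radar-to-camera extrinsic matrix, and the inlier refit uses a closed-form least-squares solve in place of LM optimization. All function names and parameters are illustrative.

```python
import math
import random

def fit_rigid_2d(src, dst):
    """Closed-form least-squares 2D rigid transform (theta, tx, ty) mapping src -> dst."""
    n = len(src)
    cs = (sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cd = (sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n)
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ux, uy = sx - cs[0], sy - cs[1]   # centered source point
        vx, vy = dx - cd[0], dy - cd[1]   # centered destination point
        a += ux * vx + uy * vy            # accumulates cos(theta) component
        b += ux * vy - uy * vx            # accumulates sin(theta) component
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, tx, ty

def residual(model, p, q):
    """Distance between the transformed point p and its match q."""
    theta, tx, ty = model
    c, s = math.cos(theta), math.sin(theta)
    px = c * p[0] - s * p[1] + tx
    py = s * p[0] + c * p[1] + ty
    return math.hypot(px - q[0], py - q[1])

def ransac_rigid_2d(src, dst, iters=200, thresh=0.1, seed=0):
    """RANSAC over matched pairs: sample a minimal set, fit, score inliers, refit."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)  # minimal sample for a 2D rigid fit
        model = fit_rigid_2d([src[i], src[j]], [dst[i], dst[j]])
        inliers = [k for k in range(len(src))
                   if residual(model, src[k], dst[k]) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on all inliers; the paper would refine this estimate with LM instead.
    refined = fit_rigid_2d([src[k] for k in best_inliers],
                           [dst[k] for k in best_inliers])
    return refined, best_inliers
```

Given matched detections with one gross mismatch, the RANSAC loop rejects the outlier and the refit recovers the true transform; in the paper, the matched pairs come from the learned common feature and the model is the full extrinsic matrix rather than a 2D rigid transform.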

References (17)
  1. J. Domhof, J. F. Kooij, and D. M. Gavrila, “An extrinsic calibration tool for radar, camera and lidar,” in 2019 International Conference on Robotics and Automation (ICRA), pp. 8107–8113, IEEE, 2019.
  2. C. Schöller, M. Schnettler, A. Krämmer, G. Hinz, M. Bakovic, M. Güzet, and A. Knoll, “Targetless rotational auto-calibration of radar and camera for intelligent transportation systems,” in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 3934–3941, IEEE, 2019.
  3. E. Wise, J. Peršić, C. Grebe, I. Petrović, and J. Kelly, “A continuous-time approach for 3d radar-to-camera extrinsic calibration,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 13164–13170, IEEE, 2021.
  4. E. Wise, Q. Cheng, and J. Kelly, “Spatiotemporal calibration of 3d mm-wavelength radar-camera pairs,” arXiv preprint arXiv:2211.01871, 2022.
  5. J. Peršić, L. Petrović, I. Marković, and I. Petrović, “Online multi-sensor calibration based on moving object tracking,” Advanced Robotics, vol. 35, no. 3-4, pp. 130–140, 2021.
  6. L. Heng, “Automatic targetless extrinsic calibration of multiple 3d lidars and radars,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10669–10675, IEEE, 2020.
  7. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “Yolov4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
  8. X. Li, Y. Liu, V. Lakshminarasimhan, H. Cao, F. Zhang, and A. Knoll, “Globally optimal robust radar calibration in intelligent transportation systems,” IEEE Transactions on Intelligent Transportation Systems, 2023.
  9. A. Bhattacharya and R. Vaughan, “Deep learning radar design for breathing and fall detection,” IEEE Sensors Journal, vol. 20, no. 9, pp. 5072–5085, 2020.
  10. K. Patel, K. Rambach, T. Visentin, D. Rusev, M. Pfeiffer, and B. Yang, “Deep learning-based object classification on automotive radar spectra,” in 2019 IEEE Radar Conference (RadarConf), pp. 1–6, IEEE, 2019.
  11. L. Wang, J. Tang, and Q. Liao, “A study on radar target detection based on deep neural networks,” IEEE Sensors Letters, vol. 3, no. 3, pp. 1–4, 2019.
  12. Y. Wang, Z. Jiang, Y. Li, J.-N. Hwang, G. Xing, and H. Liu, “Rodnet: A real-time radar object detection network cross-supervised by camera-radar fused object 3d localization,” IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 4, pp. 954–967, 2021.
  13. A. Zhang, F. E. Nowruzi, and R. Laganiere, “Raddet: Range-azimuth-doppler based radar object detection for dynamic road users,” in 2021 18th Conference on Robots and Vision (CRV), pp. 95–102, IEEE, 2021.
  14. Y. Song, Z. Xie, X. Wang, and Y. Zou, “Ms-yolo: Object detection based on yolov5 optimized fusion millimeter-wave radar and machine vision,” IEEE Sensors Journal, vol. 22, no. 15, pp. 15435–15447, 2022.
  15. T.-Y. Huang, M.-C. Lee, C.-H. Yang, and T.-S. Lee, “Yolo-ore: A deep learning-aided object recognition approach for radar systems,” IEEE Transactions on Vehicular Technology, 2022.
  16. E. Marchand, H. Uchiyama, and F. Spindler, “Pose estimation for augmented reality: a hands-on survey,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 12, pp. 2633–2651, 2015.
  17. M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
Authors (2)
  1. Lei Cheng
  2. Siyang Cao
Citations (3)
