PeLiCal: Targetless Extrinsic Calibration via Penetrating Lines for RGB-D Cameras with Limited Co-visibility (2404.13949v2)
Abstract: RGB-D cameras are crucial in robotic perception, given their ability to produce images augmented with depth data. However, their limited field of view (FOV) often requires multiple cameras to cover a broader area. In multi-camera RGB-D setups, the cameras are typically arranged with minimal overlap so that spatial coverage is maximized with as few cameras as possible, which makes extrinsic calibration more difficult. Existing methods for extrinsic calibration either require dedicated calibration targets or depend heavily on the accuracy of camera motion estimation. To address these issues, we present PeLiCal, a novel line-based calibration approach for RGB-D camera systems with limited overlap. Our method leverages long line features from the surrounding environment and filters out outliers with a novel convergence voting algorithm, achieving targetless, real-time, and outlier-robust calibration compared to existing methods. We open-source our implementation at https://github.com/joomeok/PeLiCal.git.
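The abstract describes filtering outlier line correspondences by consensus before estimating the relative pose. The following is a minimal illustrative sketch of that general idea, not the paper's actual convergence voting algorithm: corresponding 3D line *directions* observed by two cameras are aligned with a least-squares (Kabsch/SVD) rotation fit, and correspondences whose angular residual exceeds a threshold lose their "vote" and are excluded on the next refit. All function names and parameters here are hypothetical.

```python
# Hedged sketch: consensus-based rejection of outlier line correspondences.
# This is NOT PeLiCal's algorithm, only an illustration of voting-style
# filtering on line directions between two camera frames.
import numpy as np


def fit_rotation(d_a, d_b):
    """Least-squares rotation R with R @ d_a[i] ~ d_b[i] (Kabsch/SVD).

    d_a, d_b: (N, 3) arrays of unit line directions in frames A and B.
    """
    H = d_a.T @ d_b                      # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps the result a proper rotation (det = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ S @ U.T


def vote_inliers(d_a, d_b, thresh_deg=5.0, iters=5):
    """Iteratively refit R on the current inlier set; correspondences whose
    angular residual exceeds thresh_deg are voted out as outliers."""
    inliers = np.ones(len(d_a), dtype=bool)
    R = np.eye(3)
    for _ in range(iters):
        R = fit_rotation(d_a[inliers], d_b[inliers])
        cosang = np.clip(np.sum((d_a @ R.T) * d_b, axis=1), -1.0, 1.0)
        residual_deg = np.degrees(np.arccos(cosang))
        inliers = residual_deg < thresh_deg
    return R, inliers
```

A real system would additionally recover translation (e.g. from point-to-line constraints with depth) and would track when the inlier set has converged; this sketch only shows the rotation-and-voting core.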
- Jaeho Shin
- Seungsang Yun
- Ayoung Kim