Abstract

Hand-eye calibration, a fundamental task in vision-based robotic systems, aims to estimate the transformation matrix between the coordinate frame of the camera and the robot flange. Most approaches to hand-eye calibration rely on external markers or human assistance. We propose Look at Robot Base Once (LRBO), a novel methodology that addresses the hand-eye calibration problem without external calibration objects or human support, relying instead on the robot base itself. Using point clouds of the robot base, a transformation matrix from the coordinate frame of the camera to the robot base is established as I=AXB. To this end, we exploit learning-based 3D detection and registration algorithms to estimate the location and orientation of the robot base. The robustness and accuracy of the method are quantified through ground-truth-based evaluation, and the accuracy is compared with that of other 3D vision-based calibration methods. To assess the feasibility of our methodology, we carried out experiments with a low-cost structured-light scanner across varying joint configurations and multiple experiment groups. According to the experimental results, the proposed hand-eye calibration method achieves a translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees. Additionally, the 3D reconstruction experiments show a rotation error of 0.994 degrees and a position error of 1.697 mm. Moreover, our method can potentially be completed in 1 second, making it the fastest among comparable 3D hand-eye calibration methods. Code is released at github.com/leihui6/LRBO.
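
To make the I=AXB formulation concrete, the minimal sketch below (not taken from the paper's released code; the frame conventions are assumptions, since the abstract does not spell them out) recovers the camera-to-flange transform X in closed form once A, here assumed to be the transform produced by registering the robot-base point cloud in the camera frame, and B, assumed to come from the robot's forward kinematics, are available, and evaluates the translation/rotation deviation metrics quoted in the abstract.

```python
# Minimal sketch, assuming:
#   A : 4x4 homogeneous transform estimated by detecting and registering the
#       robot base in the camera's point cloud (placeholder for the paper's
#       learning-based 3D detection and registration step).
#   B : 4x4 homogeneous transform reported by the robot's forward kinematics.
# With the abstract's relation I = A @ X @ B, the hand-eye transform follows
# in closed form: X = inv(A) @ inv(B).
import numpy as np

def solve_hand_eye(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Solve X from I = A @ X @ B, where all inputs are 4x4 homogeneous transforms."""
    return np.linalg.inv(A) @ np.linalg.inv(B)

def calibration_deviation(X_est: np.ndarray, X_gt: np.ndarray):
    """Deviation metrics as quoted in the abstract: translation (same unit as
    the input transforms, e.g. mm) and rotation (degrees)."""
    trans_dev = np.linalg.norm(X_est[:3, 3] - X_gt[:3, 3])
    R_rel = X_est[:3, :3].T @ X_gt[:3, :3]                     # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_dev = np.degrees(np.arccos(cos_angle))
    return trans_dev, rot_dev
```

Under this reading, a single registration of the robot base (one "look") plus the robot's own kinematics is enough to determine X, which is consistent with the abstract's claim that no external calibration object or human support is needed.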
