3D-3D Self-Calibration of Sensors Using Point Cloud Data
SAE Technical Paper 2021-01-0086, published 2021-04-06

Self-calibration of sensors has become essential in the era of self-driving cars: reducing sensor errors increases the reliability of the decisions made by autonomous systems. Various methods are under investigation, but traditional methods still prevail; they depend heavily on human experts and expensive equipment and consume significant amounts of labor and time. Recent techniques proposed for extrinsic calibration of Autonomous Vehicles (AVs) mostly rely on 2D camera images and depth maps to calibrate the 3D LiDAR points. While most methods work with the whole frame, some use the objects in the frame to perform the calibration. To the best of our knowledge, the majority of these self-calibration methods rely on actual or falsified ground-truth values.
We propose a 3D-3D point-cloud-based continuous self-calibration approach that uses one or more objects identified in the sensor frames to cross-calibrate sensors without relying on initial calibration parameters or ground-truth values. Because multiple sensors view the same scene, common features or objects within the scene can be used to calibrate one sensor node with respect to another. Our approach relies on point cloud data (PCD) generated by at least two sensors to cross-calibrate the miscalibrated sensor with respect to the calibrated one. In this paper, we demonstrate that the method can be applied either to essential feature points extracted from the object PCD (i.e., the object centroids) or to the whole object PCD. We then optimize a cost function to obtain the extrinsic calibration parameters. Using multiple consecutive frames, our method also handles pose correction, a problem similar to calibration, without any ground-truth values from the sensors. Compared with other methods, our method performs calibration with low rotation and translation errors. It has been tested on the publicly available KITTI dataset, which is widely used to assess various self-driving problems (object detection, segmentation, tracking, depth prediction, etc.). The method's simplicity enables on-the-fly sensor calibration with a high level of accuracy.
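The centroid variant reduces to a standard rigid-alignment problem: given matched object centroids from the two sensors, the rotation and translation minimizing the least-squares cost have a closed-form solution via the Kabsch (SVD) method. The following sketch in Python with NumPy is a minimal illustration under that assumption; the function name and the premise that centroid correspondences are already established across sensors are illustrative assumptions, not details from the paper.

    import numpy as np

    def estimate_extrinsics(src, dst):
        """Estimate R, t minimizing sum_i ||R @ src[i] + t - dst[i]||^2.

        src, dst: (N, 3) arrays of matched object centroids from the
        miscalibrated and reference sensors, respectively (N >= 3).
        """
        # Center both point sets on their means.
        src_mean = src.mean(axis=0)
        dst_mean = dst.mean(axis=0)
        src_c = src - src_mean
        dst_c = dst - dst_mean

        # Kabsch: SVD of the 3x3 cross-covariance matrix.
        H = src_c.T @ dst_c
        U, _, Vt = np.linalg.svd(H)

        # Guard against a reflection (determinant -1) in the solution.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

        t = dst_mean - R @ src_mean
        return R, t

    # Example: recover a known 5-degree yaw and a fixed translation.
    rng = np.random.default_rng(0)
    src = rng.standard_normal((10, 3))
    a = np.radians(5.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    R, t = estimate_extrinsics(src, src @ R_true.T + t_true)

The whole-object-PCD variant minimizes the same cost over all object points, which would typically require an iterative (ICP-style) solver rather than this one-shot solution.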

We also recommend:

Journal Article: Physics-Based Simulation Solutions for Testing Performance of Sensors and Perception Algorithm under Adverse Weather Conditions (12-05-04-0024)
Technical Paper: Creating 3D Virtual Driving Environments for Simulation-Aided Development of Autonomous Driving and Active Safety (2017-01-0107)
Technical Paper: Training of Neural Networks with Automated Labeling of Simulated Sensor Data (2019-01-0120)
