Emergent Mind

Abstract

Autonomous navigation is a key requirement for virtually every real-world application of mobile robots. Besides high-accuracy state estimation, a suitable and globally consistent representation of the 3D environment is indispensable. We present a fully tightly-coupled LiDAR-Visual-Inertial SLAM system and 3D mapping framework that applies local submapping strategies to achieve scalability to large-scale environments. A novel, correspondence-free and inherently probabilistic formulation of LiDAR residuals is introduced, expressed only in terms of the occupancy fields and their respective gradients. These residuals can be added to a factor graph optimisation problem, either as frame-to-map factors for the live estimates or as map-to-map factors aligning the submaps with respect to one another. Experimental validation demonstrates that the approach achieves state-of-the-art pose accuracy and furthermore produces globally consistent volumetric occupancy submaps which can be directly used in downstream tasks such as navigation or exploration.

Overview

  • This paper introduces a LiDAR-Visual-Inertial SLAM system that produces accurate, globally consistent volumetric occupancy maps for autonomous navigation.

  • The system tightly couples LiDAR, visual, and inertial data to improve localisation accuracy while building occupancy maps in real time.

  • A submapping strategy keeps the mapping process scalable to large-scale environments while maintaining global consistency and improving the robustness of the SLAM system.

  • Evaluated on the HILTI 2022 SLAM Challenge, the system demonstrates competitive localisation accuracy and produces maps directly usable for navigation tasks.

Tightly-Coupled LiDAR-Visual-Inertial SLAM and Large-Scale Volumetric Occupancy Mapping

Introduction

In autonomous navigation, precise localisation is essential, but so is an accurate representation of the 3D environment. Traditional SLAM (Simultaneous Localisation and Mapping) systems that fuse different sensory inputs, such as stereo vision, Inertial Measurement Units (IMUs), and Light Detection and Ranging (LiDAR) sensors, have shown promise in achieving accurate localisation. However, most current systems represent the 3D world in formats not immediately suitable for navigation and exploration tasks, which require knowledge of free space. This paper presents a novel approach that integrates LiDAR, visual, and inertial data in a tightly-coupled SLAM system. The system produces globally consistent volumetric occupancy maps, improving both localisation accuracy and the practical utility of the generated maps for robotic navigation.

System Overview

The core innovation lies in the fusion of LiDAR, visual, and inertial measurements in a tightly-coupled SLAM system that also incorporates volumetric occupancy mapping. The system leverages LiDAR data not only to improve localisation accuracy but also to update the occupancy map of the environment in real time. A key contribution is a novel formulation of LiDAR residuals based on occupancy fields and their gradients, which allows LiDAR data to be added to the factor graph optimisation without expensive data-association steps.
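The occupancy-based residual idea can be illustrated with a minimal sketch. Assuming a differentiable occupancy (or log-odds) field and its spatial gradient are available, each LiDAR endpoint transformed into the map frame yields a scalar residual directly from the field value, with the Jacobian given by the field gradient via the chain rule, and no nearest-neighbour search is needed. The function names and the toy planar field below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lidar_residual(occ, grad_occ, T_WS, p_S, occ_surface=0.0):
    """Correspondence-free LiDAR residual: evaluate the occupancy field
    at the measured endpoint transformed into the world frame.
    Returns the scalar residual and its Jacobian w.r.t. the world point."""
    p_W = T_WS[:3, :3] @ p_S + T_WS[:3, 3]   # transform endpoint into map frame
    r = occ(p_W) - occ_surface               # deviation from the surface level set
    J_pW = grad_occ(p_W)                     # 1x3 Jacobian via the field gradient
    return r, J_pW

# Toy stand-in field: the "surface" is the plane z = 0, field grows with z.
occ = lambda p: p[2]
grad_occ = lambda p: np.array([0.0, 0.0, 1.0])

T = np.eye(4)
T[2, 3] = 0.1                                # sensor lifted 10 cm above the plane
r, J = lidar_residual(occ, grad_occ, T, np.array([1.0, 0.0, 0.0]))
# r is the 0.1 offset from the surface; J points along the field gradient.
```

In a factor graph, such residuals would be stacked per LiDAR point and chained with the pose Jacobian of the transform, so the field itself plays the role that explicit point-to-plane correspondences play in ICP-style pipelines.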

Mapping Approach

The mapping module employs a submapping strategy to remain scalable in large-scale environments, dividing the map into local submaps that are individually consistent. These submaps are then globally aligned and integrated into the SLAM system through novel frame-to-map and map-to-map optimisation factors. This strategy maintains the global consistency of the map and also improves the robustness and accuracy of the SLAM system by exploiting the volumetric information in the optimisation.
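A map-to-map alignment factor of the kind described above can be sketched as a least-squares cost: points sampled near one submap's surface are transformed by the relative submap pose and scored by the other submap's occupancy field, so overlapping submaps are pulled into agreement. All names here are hypothetical and the fields are toy stand-ins; the paper's actual factors and field representation are not reproduced.

```python
import numpy as np

def map_to_map_cost(occ_B, T_BA, pts_A, occ_surface=0.0):
    """Sum-of-squares alignment cost between two submaps:
    surface samples drawn from submap A should land on submap B's
    surface level set after applying the relative transform T_BA."""
    R, t = T_BA[:3, :3], T_BA[:3, 3]
    cost = 0.0
    for p_A in pts_A:
        p_B = R @ p_A + t                    # express A's sample in B's frame
        cost += (occ_B(p_B) - occ_surface) ** 2
    return cost

# Toy setup: both submaps observe the plane z = 0; A's samples lie on it.
occ_B = lambda p: p[2]
samples_A = [np.array([x, y, 0.0]) for x in (0.0, 1.0) for y in (0.0, 1.0)]

aligned = map_to_map_cost(occ_B, np.eye(4), samples_A)   # zero when consistent
T_off = np.eye(4)
T_off[2, 3] = 0.05                                       # 5 cm vertical drift
drifted = map_to_map_cost(occ_B, T_off, samples_A)       # positive cost
```

Minimising such a cost over the submap poses, jointly with the frame-to-map and inertial terms in the factor graph, is what keeps the collection of locally consistent submaps globally consistent.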

Experimental Results

The system was comprehensively evaluated on the HILTI 2022 SLAM Challenge, showing competitive localisation accuracy against state-of-the-art methods. Additionally, qualitative evaluation of the occupancy maps demonstrates their consistency and utility for navigation tasks. The system runs efficiently in real time, and its runtime can be tuned further through parameters matched to the processing capabilities of the deployment platform.

Conclusion and Future Work

This work introduces a state-of-the-art approach for tightly-coupled LiDAR-Visual-Inertial SLAM, capable of producing accurate, globally consistent volumetric maps. Future developments will focus on refining the uncertainty model for LiDAR measurements, enhancing robustness to difficult scenarios where visual tracking may fail, and expanding the framework to support autonomous exploration and navigation through dynamically generated submaps. This research represents a significant step forward in realizing fully autonomous robotic systems capable of navigating and understanding complex 3D environments in real-time.

Implications

The presented system has broad implications for the development of autonomous robotic navigation. By providing highly accurate localisation and a detailed, navigable map of the environment, robots can operate more effectively in complex, unstructured settings. This capability is crucial for a wide range of applications, including search and rescue operations in disaster-stricken areas, autonomous exploration in unknown territories, and sophisticated navigation tasks in industrial automation.
