GVINS: Tightly Coupled GNSS-Visual-Inertial Fusion for Smooth and Consistent State Estimation (2103.07899v3)

Published 14 Mar 2021 in cs.RO

Abstract: Visual-Inertial odometry (VIO) is known to suffer from drifting especially over long-term runs. In this paper, we present GVINS, a non-linear optimization based system that tightly fuses GNSS raw measurements with visual and inertial information for real-time and drift-free state estimation. Our system aims to provide accurate global 6-DoF estimation under complex indoor-outdoor environment where GNSS signals may be intermittent or even totally unavailable. To connect global measurements with local states, a coarse-to-fine initialization procedure is proposed to efficiently calibrate the transformation online and initialize GNSS states from only a short window of measurements. The GNSS code pseudorange and Doppler shift measurements, along with visual and inertial information, are then modelled and used to constrain the system states in a factor graph framework. For complex and GNSS-unfriendly areas, the degenerate cases are discussed and carefully handled to ensure robustness. Thanks to the tightly-coupled multi-sensor approach and system design, our system fully exploits the merits of three types of sensors and is capable to seamlessly cope with the transition between indoor and outdoor environments, where satellites are lost and reacquired. We extensively evaluate the proposed system by both simulation and real-world experiments, and the result demonstrates that our system substantially eliminates the drift of VIO and preserves the local accuracy in spite of noisy GNSS measurements. The challenging indoor-outdoor and urban driving experiments verify the availability and robustness of GVINS in complex environments. In addition, experiments also show that our system can gain from even a single satellite while conventional GNSS algorithms need four at least.

Citations (181)

Summary

  • The paper presents a novel tightly-coupled fusion method that integrates GNSS, visual, and inertial measurements to achieve drift-free state estimation.
  • It employs a coarse-to-fine initialization and factor graph optimization framework to ensure robust performance even with limited satellite visibility.
  • Experiments in a 22.9 km urban driving scenario show a horizontal RMSE of 4.51 m, and indoor-outdoor tests demonstrate smooth transitions as satellites are lost and reacquired.

Overview of GVINS: Tightly Coupled GNSS-Visual-Inertial Fusion for Smooth and Consistent State Estimation

The paper presents GVINS, a tightly coupled GNSS-visual-inertial fusion system designed for real-time, drift-free state estimation, addressing the long-term drift that standalone Visual-Inertial Odometry (VIO) systems accumulate in complex environments. The work augments conventional Visual-Inertial Navigation (VIN) systems with Global Navigation Satellite System (GNSS) raw measurements, providing globally referenced localization even across indoor-outdoor transitions where GNSS signals are intermittent or temporarily unavailable.

Key Methodologies

The proposed system tightly couples GNSS, visual, and inertial measurements within a non-linear optimization framework. This fusion mitigates the inherent drift of VIN systems by anchoring the local estimate with the global constraints provided by GNSS raw measurements.

  1. Coarse-to-Fine Initialization: The paper introduces a coarse-to-fine initialization procedure that calibrates the transformation between the local VIN frame and the global GNSS frame online and initializes the GNSS-related states from only a short window of measurements, which is vital when satellite signals are only partially available.
  2. Factor Graph Framework: GNSS code pseudorange and Doppler shift measurements are integrated with visual and inertial factors in a probabilistic factor graph, allowing joint optimization of all system states (a minimal sketch of such GNSS factors follows this list). This formulation handles GNSS-unfriendly conditions and remains robust as satellites are lost and reacquired.
  3. Handling Degeneracy: The system carefully manages degenerate cases, such as pure rotational movements and insufficient satellite visibility, ensuring robustness and continuity in varying conditions.
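
To make the factor-graph formulation above more concrete, the following is a minimal, self-contained sketch of how GNSS code pseudorange and Doppler (range-rate) residuals can be stacked with a local prior from the visual-inertial sliding window and solved jointly. It is not the authors' implementation: GVINS optimizes a full factor graph that also includes visual reprojection and IMU preintegration factors and handles per-constellation clock terms. Here scipy.optimize.least_squares stands in for that solver, and all satellite states, measurements, noise levels, and prior values are synthetic placeholders.

```python
# Minimal sketch (NOT the GVINS implementation): jointly fit GNSS pseudorange and
# Doppler range-rate residuals together with a Gaussian prior that stands in for
# the visual-inertial sliding window. All numerical values are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

rng_gen = np.random.default_rng(0)

# Synthetic satellite states: ECEF positions [m] and velocities [m/s].
sat_pos = np.array([[15600e3,  7540e3, 20140e3],
                    [18760e3,  2750e3, 18610e3],
                    [17610e3, 14630e3, 13480e3],
                    [19170e3,   610e3, 18390e3]])
sat_vel = np.array([[ 1000.0, -2000.0,   500.0],
                    [-1500.0,  1000.0,   800.0],
                    [  500.0,  2500.0,  -700.0],
                    [ 2000.0,  -500.0,  1200.0]])

# Ground-truth receiver state, used only to synthesize measurements.
p_true = np.array([-2.2e6, 4.4e6, 3.8e6])   # ECEF position [m]
v_true = np.array([5.0, -3.0, 1.0])         # ECEF velocity [m/s]
b_true, bd_true = 3.0e4, 30.0               # clock bias [m], clock drift [m/s]

los = sat_pos - p_true
dist = np.linalg.norm(los, axis=1)
unit = los / dist[:, None]
pr_meas = dist + b_true + rng_gen.normal(0.0, 3.0, 4)                    # pseudoranges [m]
rr_meas = (np.sum(unit * (sat_vel - v_true), axis=1) + bd_true
           + rng_gen.normal(0.0, 0.5, 4))                                # range rates [m/s]

# Prior from the local visual-inertial window (placeholder values and sigmas).
p_prior = p_true + rng_gen.normal(0.0, 2.0, 3)
v_prior = v_true + rng_gen.normal(0.0, 0.2, 3)
SIG_PR, SIG_RR, SIG_P, SIG_V = 3.0, 0.5, 2.0, 0.2

def residuals(x):
    """State x = [position (3), velocity (3), clock bias [m], clock drift [m/s]]."""
    p, v, b, bd = x[:3], x[3:6], x[6], x[7]
    los = sat_pos - p
    dist = np.linalg.norm(los, axis=1)
    unit = los / dist[:, None]
    r_pr = (dist + b - pr_meas) / SIG_PR                                    # pseudorange factors
    r_rr = (np.sum(unit * (sat_vel - v), axis=1) + bd - rr_meas) / SIG_RR   # Doppler factors
    r_vio = np.concatenate([(p - p_prior) / SIG_P, (v - v_prior) / SIG_V])  # VIO prior factor
    return np.concatenate([r_pr, r_rr, r_vio])

x0 = np.concatenate([p_prior, v_prior, [0.0, 0.0]])  # start from the VIO prior
sol = least_squares(residuals, x0)
print("position error [m]:", np.linalg.norm(sol.x[:3] - p_true))
print("clock bias estimate [m]:", sol.x[6])
```

In the actual system these residuals appear as factors attached to the sliding-window states alongside visual reprojection and IMU preintegration factors, with the local-to-global transformation calibrated online as described in the initialization step; the sketch above only illustrates how raw GNSS measurements constrain position, velocity, and receiver clock terms.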

Numerical Results and Claims

The paper reports extensive evaluations in both simulation and real-world experiments. Notably, in an urban driving scenario spanning 22.9 km, the system maintained a horizontal RMSE of 4.51 m, substantially outperforming estimators that rely solely on visual and inertial data. The experiments also show that GVINS benefits from fewer satellites than standalone GNSS positioning requires: a conventional receiver must solve for three position components plus a clock bias and therefore needs at least four satellites, whereas the tightly coupled visual-inertial constraints allow even a single satellite to contribute useful information. The indoor-outdoor experiments further confirm robust behavior as satellites are lost and reacquired.

Implications and Future Work

Practically, GVINS addresses the limitations of stand-alone VIN and GNSS systems by offering a seamless transition across different environments, essential for applications like autonomous navigation in urban canyons or mixed environments. Theoretically, this tightly integrated approach contributes to the ongoing convergence of SLAM and localization techniques, potentially informing new strategies for sensor fusion in AI-driven navigation.

Future research could explore the integration of other sensor modalities and further refinement of initialization procedures. Moreover, developing methodologies to minimize absolute positioning errors through advancements in GNSS data processing techniques, such as Precise Point Positioning (PPP), could enhance distributed localization tasks in collaborative and swarm robotics.

In conclusion, GVINS stands as a significant advancement in GNSS-Visual-Inertial state estimation, contributing both to the practical deployment of autonomous systems in GNSS-challenging environments and the theoretical development of multi-sensor fusion frameworks. This work paves the way for future investigations that could expand its applicability and improve its accuracy in diverse operational contexts.