RL-PGO: Reinforcement Learning-based Planar Pose-Graph Optimization

(2202.13221)
Published Feb 26, 2022 in cs.RO and cs.AI

Abstract

The objective of pose SLAM, or pose-graph optimization (PGO), is to estimate the trajectory of a robot given odometric and loop-closure constraints. State-of-the-art iterative approaches typically linearize a non-convex objective function and then repeatedly solve a set of normal equations. Furthermore, these methods may converge to a local minimum, yielding sub-optimal results. In this work, we present, to the best of our knowledge, the first Deep Reinforcement Learning (DRL) based environment and agent for 2D pose-graph optimization. We demonstrate that the pose-graph optimization problem can be modeled as a partially observable Markov decision process and evaluate performance on real-world and synthetic datasets. The proposed agent outperforms the state-of-the-art solver g2o on challenging instances where traditional nonlinear least-squares techniques may fail or converge to unsatisfactory solutions. Experimental results indicate that iterative solvers bootstrapped with the proposed approach produce significantly higher-quality estimates. We believe that reinforcement learning-based PGO is a promising avenue for accelerating research towards globally optimal algorithms. Thus, our work paves the way to new optimization strategies in the 2D pose SLAM domain.
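For concreteness, the sketch below illustrates the classical nonlinear least-squares formulation of 2D PGO that the abstract contrasts with the learned agent; it is not the paper's RL method. Poses are parameterized as (x, y, theta), each edge carries a measured relative pose, and the residual compares that measurement with the relative pose predicted from the current estimates. The toy four-pose loop and the use of SciPy's least_squares solver are illustrative assumptions.

```python
# Minimal sketch of classical 2D pose-graph optimization (the iterative
# least-squares baseline the abstract contrasts with the RL agent).
# Poses are (x, y, theta); each edge carries a measured relative pose.
# The toy graph and the SciPy solver choice are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def edge_residual(xi, xj, meas):
    """Residual between the relative pose predicted from xi, xj and the measurement."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    # Express the translation in frame i and compare with the measured edge.
    pred = np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])
    err = pred - meas
    err[2] = wrap(err[2])
    return err

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [edge_residual(poses[i], poses[j], m) for i, j, m in edges]
    res.append(poses[0])  # anchor the first pose to remove the gauge freedom
    return np.concatenate(res)

# Toy graph: three odometry edges and one loop closure (i, j, measured [dx, dy, dtheta]).
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),  # loop closure
]
x0 = np.zeros(4 * 3)  # deliberately poor initial guess: all poses at the origin
sol = least_squares(residuals, x0, args=(edges,))
print(sol.x.reshape(-1, 3))
```

Iterative solvers such as g2o operate on residuals of this form, repeatedly linearizing them around the current estimate and solving the resulting normal equations, which is why a poor initialization can leave them stuck in a local minimum.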
