Abstract

Sampling-based motion planners perform exceptionally well in robotic applications that operate in high-dimensional spaces. However, most existing approaches constrain the planning workspace to trees rooted at fixed locations, do not adaptively reason about strategy in narrow passages, and ignore valuable local structural information. In this paper, we propose Rapidly-exploring Random Forest (RRF) -- a generalised multi-tree motion planner that combines the rapid exploration of tree-based methods with an adaptively learned Bayesian local sampling strategy deployed in regions deemed to be bottlenecks. Local sampling exploits the local connectivity of the space via Markov chain random sampling, whose proposal distribution is updated sequentially in a Bayesian manner to learn the local structure from past observations. The tree selection problem is formulated as a multi-armed bandit problem, which efficiently allocates resources to the most promising tree to accelerate planning runtime. RRF learns which regions are difficult for tree extension and adaptively deploys local sampling in those regions to maximise the benefit of exploiting local structure. We provide rigorous proofs of completeness and optimal convergence guarantees, and we experimentally demonstrate that RRF*'s adaptive multi-tree approach enables it to perform well across a wide range of problems.
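The abstract does not give implementation details of the bandit-based tree selection, so the following is only a minimal sketch of how selecting among trees could be treated as a multi-armed bandit with a UCB1 rule. The names `TreeArm` and `select_tree`, and the use of extension success as the reward signal, are assumptions for illustration and are not taken from the paper.

```python
import math
import random


class TreeArm:
    """One tree in the forest, treated as a bandit arm (hypothetical helper)."""

    def __init__(self, name):
        self.name = name
        self.pulls = 0           # number of times this tree was selected
        self.total_reward = 0.0  # cumulative extension-success reward

    def update(self, reward):
        self.pulls += 1
        self.total_reward += reward


def select_tree(arms, total_pulls, c=1.4):
    """UCB1 selection: balance trees whose extensions succeed often
    (exploitation) against trees that have been tried rarely (exploration)."""

    def ucb(arm):
        if arm.pulls == 0:
            return float("inf")  # try every tree at least once
        mean = arm.total_reward / arm.pulls
        return mean + c * math.sqrt(math.log(total_pulls + 1) / arm.pulls)

    return max(arms, key=ucb)


if __name__ == "__main__":
    # Toy usage: three trees with different (unknown) extension success rates.
    success_rate = {"start": 0.7, "goal": 0.5, "bottleneck": 0.2}
    arms = [TreeArm(name) for name in success_rate]
    for t in range(200):
        arm = select_tree(arms, total_pulls=t)
        reward = 1.0 if random.random() < success_rate[arm.name] else 0.0
        arm.update(reward)
    for arm in arms:
        print(arm.name, arm.pulls)
```

In this toy run the selector gradually concentrates its pulls on the trees whose extensions succeed most often, which mirrors the abstract's stated goal of allocating planning effort to the most promising tree.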
