
RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes

(2405.04714)
Published May 7, 2024 in cs.RO , cs.AI , and cs.LG

Abstract

Reinforcement learning provides an appealing framework for robotic control due to its ability to learn expressive policies purely through real-world interaction. However, this requires addressing real-world constraints and avoiding catastrophic failures during training, which might severely impede both learning progress and the performance of the final policy. In many robotics settings, this amounts to avoiding certain "unsafe" states. The high-speed off-road driving task represents a particularly challenging instantiation of this problem: a high-return policy should drive as aggressively and as quickly as possible, which often requires getting close to the edge of the set of "safe" states, and therefore places a particular burden on the method to avoid frequent failures. To both learn highly performant policies and avoid excessive failures, we propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum. Furthermore, we show that our risk-sensitive objective automatically avoids out-of-distribution states when equipped with an estimator for epistemic uncertainty. We implement our algorithm on a small-scale rally car and show that it is capable of learning high-speed policies for a real-world off-road driving task. We show that our method greatly reduces the number of safety violations during the training process, and actually leads to higher-performance policies in both driving and non-driving simulation environments with similar challenges.

Figure: RACER's components: a distributional critic for epistemic uncertainty, a risk-sensitive actor, and adaptive action limits.

Overview

  • RACER is a reinforcement learning framework designed for high-speed off-road driving that emphasizes safety by using risk-sensitive control and adaptive action limits.

  • The framework uses Conditional Value at Risk (CVaR) to focus on worst-case scenarios and employs distributional critics to handle both aleatoric and epistemic uncertainties, ensuring cautious training and better performance.

  • In both real-world tests and simulations, RACER achieves higher speeds and fewer training failures than traditional RL methods.

Reinforcement Learning for Safer High-Speed Driving: An Overview of RACER

Introduction

High-speed off-road driving with reinforcement learning (RL) poses unique challenges: driving quickly over uneven terrain means operating near the edge of the set of "safe" states, where crashes are easy to trigger. RACER addresses this with a framework that combines risk-sensitive control and an adaptive action space curriculum to learn high-speed driving policies efficiently and safely.

Core Components of RACER

Let’s break down the core components of RACER and how they contribute to its effectiveness:

Risk-Sensitive Actor-Critic Objective

Traditional RL methods optimize expected returns, which can be risky when training directly in real-world environments. RACER, however, leverages Conditional Value at Risk (CVaR) to prioritize safety:

  • CVaR: Instead of optimizing only the expected return, CVaR optimizes the average return over the worst-case fraction of outcomes. This keeps the policy conservative in uncertain conditions and reduces the likelihood of catastrophic failures during training (see the sketch below).
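
A minimal sketch of how a CVaR objective can be estimated from sampled returns, for example the quantiles produced by a distributional critic. The risk level `alpha` and the helper name `cvar` are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of returns.

    `returns` can be any array of sampled returns (e.g. the quantiles or
    atoms predicted by a distributional critic); `alpha` is the risk level.
    """
    returns = np.sort(np.asarray(returns))          # ascending: worst outcomes first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst-case tail
    return returns[:k].mean()                       # average of the worst k returns

# A risk-neutral agent would see a mean return of 3.5 here, but the CVaR_0.1
# objective focuses on the rare crash-like outcome at -10.
samples = np.array([-10.0, 2.0, 3.0, 4.0, 5.0, 5.0, 6.0, 6.0, 7.0, 7.0])
print(samples.mean())      # 3.5
print(cvar(samples, 0.1))  # -10.0
```

Optimizing this tail average rather than the mean is what pushes the actor away from actions whose return distribution has a heavy low-end tail.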

Distributional Critics

RACER’s critic models the full return distribution, addressing both aleatoric (stochasticity in the environment) and epistemic (uncertainty due to lack of data) uncertainties:

  • Ensembled Critics: Several independently initialized and trained networks each predict the return distribution; disagreement among them signals epistemic uncertainty, i.e. states and actions the training data does not cover well.
  • Explicit Entropy Maximization: The critics are additionally trained to predict high-entropy (maximally uncertain) return distributions for out-of-distribution actions, which keeps the agent cautious about scenarios it has not yet experienced (a minimal sketch follows this list).
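
The sketch below shows one way an ensemble of quantile critics can expose epistemic uncertainty as disagreement between members. It is a simplified illustration in PyTorch; the class name `QuantileCritic`, the network sizes, and the disagreement measure are assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class QuantileCritic(nn.Module):
    """A small critic that predicts `n_quantiles` quantiles of the return."""
    def __init__(self, obs_dim, act_dim, n_quantiles=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_quantiles),
        )

    def forward(self, obs, act):
        # Output shape: (batch, n_quantiles), one value per return quantile.
        return self.net(torch.cat([obs, act], dim=-1))

# Independently initialized critics: where training data is plentiful they
# converge to similar predictions; where it is scarce they disagree.
ensemble = [QuantileCritic(obs_dim=8, act_dim=2) for _ in range(5)]

def epistemic_disagreement(obs, act):
    """Std. deviation of the critics' mean return predictions: a simple
    proxy for epistemic uncertainty at (obs, act)."""
    means = torch.stack([c(obs, act).mean(dim=-1) for c in ensemble])
    return means.std(dim=0)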

Adaptive Action Limits

RACER uses adaptive action limits that start with cautious actions and gradually expand as the agent becomes more confident:

  • Soft-Clip Mechanism: Actions are smoothly squashed into a restricted, safe subset of the action space. As the critics become more confident that wider actions are safe, the limits expand.
  • This adaptive curriculum keeps exploration cautious and risky actions rare during early training, then progressively raises the performance ceiling (see the sketch after this list).
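
Below is a minimal sketch of a soft clip whose limits widen over training; the tanh-based squashing, the expansion schedule, and the `confident` flag are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def soft_clip(action, limit):
    """Smoothly squash `action` into (-limit, limit) with a tanh.

    Unlike a hard clip, actions near or beyond the boundary still differ
    slightly, so gradients and action ordering are preserved.
    """
    return limit * np.tanh(action / limit)

def expand_limit(limit, confident, rate=0.05, limit_max=1.0):
    """Widen the action limit only when the critics report low epistemic
    uncertainty, summarized here by the boolean `confident`."""
    return min(limit_max, limit + rate) if confident else limit

# Early in training, aggressive throttle commands are squashed to a safe range.
limit = 0.3
print(soft_clip(np.array([2.0]), limit))     # ~0.30: far from full throttle
# Once the critics' uncertainty drops, the limit grows toward its maximum.
limit = expand_limit(limit, confident=True)  # 0.35
```

One appeal of this design is that the curriculum needs no hand-crafted stages: the effective action range simply grows as the critics' confidence does.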

Strong Numerical Results

RACER showcases impressive numerical results:

  • In real-world tests with a tenth-scale autonomous vehicle, RACER achieved speeds over 10% higher while reducing training failures by more than half.
  • Simulation studies showed similar trends, with RACER outperforming traditional methods like SAC (Soft Actor-Critic) and even other risk-sensitive variants in terms of final policy performance and reduced training failures.

Practical and Theoretical Implications

Practical Implications

  • Real-World Safety: By reducing the number of failures during training, RACER makes RL more viable for real-world applications, especially in safety-critical domains like autonomous driving.
  • Performance and Efficiency: RACER's ability to learn high-speed, high-performance policies with fewer setbacks means more efficient training and less wear and tear on physical robots.

Theoretical Implications

  • Handling Epistemic Uncertainty: RACER demonstrates a novel approach to incorporating epistemic uncertainty into RL, providing a framework that can be extended to other domains where safety during training is critical.
  • Adaptive Risk Sensitivity: The combination of CVaR with adaptive action limits shows that risk-sensitive objectives can be pragmatically integrated into robotic control, leading to safer and more robust policies.

Future Developments

The promising results of RACER open avenues for further research and improvements:

  • Extending RACER to Other Domains: Applying RACER to other high-risk tasks, like aerial drones or underwater robots, could yield insights into generalizing this approach.
  • Improving Adaptive Mechanisms: Refining how action limits are adjusted could lead to even safer and more efficient training pipelines.
  • Hybrid Models: Combining model-free and model-based approaches using RACER’s framework might balance exploration and safety even better.

Ultimately, RACER represents a significant step forward in safe reinforcement learning, providing a blueprint for future research in making RL robust and practical for real-world, high-risk applications.
