RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes (2405.04714v1)
Abstract: Reinforcement learning provides an appealing framework for robotic control due to its ability to learn expressive policies purely through real-world interaction. However, this requires addressing real-world constraints and avoiding catastrophic failures during training; such failures can severely impede both learning progress and the performance of the final policy. In many robotics settings, this amounts to avoiding certain "unsafe" states. The high-speed off-road driving task represents a particularly challenging instantiation of this problem: a high-return policy should drive as aggressively and as quickly as possible, which often means operating near the edge of the set of "safe" states, and therefore places a particular burden on the method to avoid frequent failures. To learn highly performant policies while avoiding excessive failures, we propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum. Furthermore, we show that our risk-sensitive objective automatically avoids out-of-distribution states when equipped with an estimator for epistemic uncertainty. We implement our algorithm on a small-scale rally car and show that it is capable of learning high-speed policies for a real-world off-road driving task. We show that our method greatly reduces the number of safety violations during training and leads to higher-performance policies in both driving and non-driving simulation environments posing similar challenges.
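The abstract's central mechanism, a risk-sensitive objective that becomes pessimistic under epistemic uncertainty, can be illustrated with a short sketch. The abstract does not specify the risk measure or uncertainty estimator, so the following assumes a lower-tail CVaR objective computed over an ensemble of Q estimates (a standard way to capture epistemic uncertainty via ensemble disagreement); `cvar_lower` and the toy values are hypothetical, not the paper's implementation.

```python
import numpy as np

def cvar_lower(values, alpha=0.1):
    """Lower-tail conditional value-at-risk: the mean of the worst
    alpha-fraction of values. Applied to an ensemble of Q estimates,
    disagreement between members widens the lower tail, so the
    objective scores uncertain state-actions pessimistically."""
    values = np.sort(np.asarray(values, dtype=float))
    k = max(1, int(np.ceil(alpha * len(values))))  # worst-k members
    return values[:k].mean()

# Toy Q(s, a) estimates from five independently trained critics.
q_in_distribution  = [4.8, 5.0, 5.1, 4.9, 5.0]   # critics agree
q_out_distribution = [9.0, 1.0, 6.5, -2.0, 4.0]  # critics disagree

print(cvar_lower(q_in_distribution))   # ~4.8: close to the mean
print(cvar_lower(q_out_distribution))  # -2.0: heavily penalized
```

Because ensemble members disagree on out-of-distribution inputs, the lower-tail objective rates those state-action pairs far below their mean estimate, which is one way a risk-sensitive objective can "automatically avoid" out-of-distribution states as the abstract describes.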