Abstract

Connected and Automated Vehicles (CAVs), in particular those with multiple power sources, have the potential to significantly reduce fuel consumption and travel time in real-world driving conditions. Specifically, the Eco-driving problem seeks to design optimal speed and power usage profiles, based on look-ahead information from connectivity and advanced mapping features, that minimize fuel consumption over a given itinerary. In this work, the Eco-driving problem is formulated as a Partially Observable Markov Decision Process (POMDP), which is then solved with a state-of-the-art Deep Reinforcement Learning (DRL) actor-critic algorithm, Proximal Policy Optimization (PPO). An Eco-driving simulation environment is developed for training and evaluation purposes. To benchmark the performance of the DRL controller, a baseline controller representing the human driver, a trajectory optimization algorithm, and the wait-and-see deterministic optimal solution are presented. With minimal onboard computational requirements and a comparable travel time, the DRL controller reduces fuel consumption by more than 17% compared to the baseline controller by modulating the vehicle velocity over the route and performing energy-efficient approaches and departures at signalized intersections, outperforming the more computationally demanding trajectory optimization method.
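The abstract names the key ingredients (a POMDP eco-driving environment solved with the PPO actor-critic algorithm) without implementation detail. As a rough illustration only, the sketch below shows how such a setup might be wired together using the Gymnasium environment interface and the PPO implementation from Stable-Baselines3. The `EcoDrivingEnv` class, its observation fields, its toy dynamics, and its fuel-plus-time reward are hypothetical stand-ins, not the paper's actual simulation environment or reward function.

```python
# Minimal sketch, NOT the paper's implementation: a hypothetical
# eco-driving environment with look-ahead observations, trained with
# the off-the-shelf PPO actor-critic from Stable-Baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class EcoDrivingEnv(gym.Env):
    """Hypothetical POMDP-style eco-driving environment.

    Observation (normalized, assumed): [vehicle speed, battery SOC,
    distance to next signal, remaining green time].
    Action: continuous acceleration / power-split command.
    Reward: negative fuel use minus a travel-time penalty.
    """

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(0.0, 1.0, size=4).astype(np.float32)
        return self.state, {}

    def step(self, action):
        # Placeholder dynamics: a real model would propagate vehicle speed,
        # powertrain state, and traffic-signal phases here.
        accel = float(action[0])
        fuel = 0.1 * accel ** 2      # stand-in quadratic fuel model
        time_penalty = 0.01          # constant per-step travel-time cost
        reward = -(fuel + time_penalty)
        self.state = np.clip(self.state + 0.01 * accel, 0.0, 1.0).astype(np.float32)
        terminated = False           # a real env would end at the destination
        truncated = False
        return self.state, reward, terminated, truncated, {}


env = EcoDrivingEnv()
model = PPO("MlpPolicy", env, verbose=0)  # actor-critic PPO, as in the abstract
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```

In this framing, partial observability enters through the limited look-ahead window (only the next signal's state is visible), which is one plausible way to realize the POMDP structure the abstract describes.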
