Abstract

Automated vehicles are gradually entering people's daily lives, aiming to provide a comfortable driving experience. However, generic, user-agnostic automated vehicles have limited ability to accommodate different users' driving styles; this limitation not only reduces user satisfaction but also raises safety concerns. Learning from user demonstrations offers direct insight into a driver's preferences, but those preferences are difficult to infer from limited data. In this study, we use a model-free inverse reinforcement learning method to characterize drivers in the car-following scenario from a naturalistic driving dataset, and show that this method can represent user preferences through reward functions. To predict the driving styles of drivers with limited data, we fit Gaussian Mixture Models and compute the similarity of a specific driver to each cluster of drivers. We then design a personalized adaptive cruise control (P-ACC) system based on a partially observable Markov decision process (POMDP) model, integrating the learned reward function to mimic each driver's style while constraining the relative distance to ensure driving safety. Driving-style prediction achieves 85.7% accuracy using data from fewer than 10 car-following events, and model-based experimental driving trajectories demonstrate that the P-ACC system provides a personalized driving experience.
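The cluster-similarity idea in the abstract can be sketched in a few lines: model each driving style as a Gaussian over a car-following feature, then compute the posterior probability of each style from a handful of observed events. This is a minimal illustrative sketch, not the paper's implementation; the cluster names, the 1-D headway feature, and all parameter values below are assumptions for demonstration.

```python
import numpy as np

# Hypothetical clusters of driving styles, each a Gaussian over mean time
# headway in seconds: (prior weight, mean, std). Values are illustrative only.
CLUSTERS = {
    "aggressive":   (0.3, 1.0, 0.3),
    "moderate":     (0.5, 1.8, 0.4),
    "conservative": (0.2, 2.8, 0.5),
}

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def style_posterior(events):
    """Posterior probability of each style given a few car-following events."""
    events = np.asarray(events, dtype=float)
    # Joint log-likelihood of all events under each cluster, plus log prior.
    log_scores = {
        name: np.log(w) + np.sum(gaussian_logpdf(events, mu, sigma))
        for name, (w, mu, sigma) in CLUSTERS.items()
    }
    # Normalize with the log-sum-exp trick for numerical stability.
    m = max(log_scores.values())
    unnorm = {k: np.exp(v - m) for k, v in log_scores.items()}
    z = sum(unnorm.values())
    return {k: p / z for k, p in unnorm.items()}

# Fewer than 10 observed events from a new driver (headways in seconds):
posterior = style_posterior([1.9, 2.1, 1.7, 2.0, 1.8])
print(max(posterior, key=posterior.get))  # prints "moderate"
```

With only a few events, the posterior concentrates quickly when the clusters are well separated, which is consistent with the abstract's claim that fewer than 10 car-following events suffice for accurate style prediction.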
