
Anti-Jerk On-Ramp Merging Using Deep Reinforcement Learning (1909.12967v3)

Published 27 Sep 2019 in eess.SY and cs.SY

Abstract: Deep Reinforcement Learning (DRL) is used here for decentralized decision-making and longitudinal control for high-speed on-ramp merging. The DRL environment state includes the states of five vehicles: the merging vehicle, along with two preceding and two following vehicles when the merging vehicle is on, or is projected onto, the main road. The control action is the acceleration of the merging vehicle. Deep Deterministic Policy Gradient (DDPG) is used as the DRL algorithm to train the agent to output continuous control actions. We investigated the relationship between collision avoidance for safety and jerk minimization for passenger comfort in the multi-objective reward function by obtaining the Pareto front. We found that, with a small jerk penalty in the multi-objective reward function, the vehicle jerk could be reduced by 73% compared with no jerk penalty while the collision rate was maintained at zero. Regardless of the jerk penalty, the merging vehicle exhibited decision-making strategies such as merging ahead of or behind a main-road vehicle.
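The multi-objective reward described in the abstract combines a collision penalty with a jerk penalty. A minimal sketch of such a reward term is shown below; the functional form, weights, and the finite-difference jerk estimate are illustrative assumptions, not the paper's published formulation.

```python
def jerk_from_accel(a_curr: float, a_prev: float, dt: float) -> float:
    """Estimate jerk (rate of change of acceleration) by finite difference.
    This discretization is an assumption for illustration."""
    return (a_curr - a_prev) / dt


def merging_reward(collision: bool, jerk: float, w_jerk: float = 0.1) -> float:
    """Multi-objective reward sketch: a large penalty on collision for
    safety, plus a squared-jerk penalty for passenger comfort.
    The weight w_jerk trades comfort against other objectives; setting
    w_jerk = 0 recovers the no-jerk-penalty baseline. All constants here
    are hypothetical."""
    if collision:
        return -100.0  # terminal safety penalty (illustrative magnitude)
    return -w_jerk * jerk ** 2  # comfort term
```

Sweeping `w_jerk` and recording the resulting collision rate and average jerk is one way to trace the safety-comfort Pareto front the abstract refers to.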

Citations (2)
