
Validation of human benchmark models for Automated Driving System approval: How competent and careful are they really? (2406.09493v1)

Published 13 Jun 2024 in eess.SY and cs.SY

Abstract: Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) are expected to improve comfort, productivity and, most importantly, safety for all road users. To ensure that the systems are safe, rules and regulations describing the systems' approval and validation procedures are in effect in Europe. The UNECE Regulation 157 (R157) is one of those. Annex 3 of R157 describes two driver models, representing the performance of a "competent and careful" driver, which can be used as benchmarks to determine whether, in certain situations, a crash would be preventable by a human driver. However, these models have not been validated against human behavior in real safety-critical events. Therefore, this study uses counterfactual simulation to assess the performance of the two models when applied to 38 safety-critical cut-in near-crashes from the SHRP2 naturalistic driving study. The results show that the two computational models performed rather differently from the human drivers: one model showed a generally delayed braking reaction compared to the human drivers, causing crashes in three of the original near-crashes. The other model demonstrated, in general, brake onsets substantially earlier than the human drivers, possibly being overly sensitive to lateral perturbations. That is, the first model does not seem to behave as the competent and careful driver it is supposed to represent, while the second seems to be overly careful. Overall, our results show that, if models are to be included in regulations, they need to be substantially improved. We argue that achieving this will require better validation across the scenario types that the models are intended to cover (e.g., cut-in conflicts), a process which should include applying the models counterfactually to near-crashes and validating them against several different safety related metrics.
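The abstract's core idea — replaying a recorded near-crash counterfactually with a benchmark driver model and checking whether its brake onset would have avoided the collision — can be illustrated with a minimal kinematic sketch. This is an assumption-laden toy (constant-deceleration point-mass model, constant lead-vehicle speed, made-up parameter values), not the R157 Annex 3 models or the paper's actual simulation pipeline:

```python
# Illustrative counterfactual brake-onset check.
# Assumptions (not from the paper): point-mass kinematics, lead vehicle
# holds constant speed, ego applies constant deceleration after a fixed
# reaction time. A negative return value means the gap closes fully,
# i.e. a collision in this simplified model.

def min_gap_after_brake(gap0, v_ego, v_lead, reaction_time, decel):
    """Smallest gap to a slower lead vehicle when the ego driver brakes
    with constant `decel` (m/s^2, positive) after `reaction_time` (s).

    gap0: initial gap (m); v_ego, v_lead: speeds (m/s).
    """
    dv = v_ego - v_lead
    if dv <= 0:
        return gap0  # gap is not closing; no conflict
    # Reaction phase: gap shrinks at the closing speed dv.
    gap = gap0 - dv * reaction_time
    if gap <= 0:
        return gap  # collision before braking even starts
    # Braking phase: additional closure until speeds match is dv^2 / (2*decel).
    return gap - dv * dv / (2.0 * decel)

# Same cut-in conflict, two hypothetical brake onsets:
early = min_gap_after_brake(20.0, 25.0, 15.0, 0.75, 6.0)  # crash avoided
late = min_gap_after_brake(20.0, 25.0, 15.0, 1.50, 6.0)   # gap goes negative
```

Varying only `reaction_time` shows how a delayed model brake onset can turn an avoided near-crash into a simulated crash, and an early onset the reverse — the qualitative contrast the study reports between the two R157 benchmark models.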
