Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving (2210.09539v1)

Published 18 Oct 2022 in cs.RO, cs.AI, and cs.LG

Abstract: We demonstrate the first large-scale application of model-based generative adversarial imitation learning (MGAIL) to the task of dense urban self-driving. We augment standard MGAIL using a hierarchical model to enable generalization to arbitrary goal routes, and measure performance using a closed-loop evaluation framework with simulated interactive agents. We train policies from expert trajectories collected from real vehicles driving over 100,000 miles in San Francisco, and demonstrate a steerable policy that can navigate robustly even in a zero-shot setting, generalizing to synthetic scenarios with novel goals that never occurred in real-world driving. We also demonstrate the importance of mixing closed-loop MGAIL losses with open-loop behavior cloning losses, and show our best policy approaches the performance of the expert. We evaluate our imitative model in both average and challenging scenarios, and show how it can serve as a useful prior to plan successful trajectories.
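
The abstract's point about mixing closed-loop MGAIL losses with open-loop behavior cloning losses can be pictured with a minimal sketch. Everything below (`policy`, `discriminator`, `dynamics`, `expert_batch`, `bc_weight`, `horizon`, and the rollout structure) is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: combining a closed-loop MGAIL-style adversarial loss
# with an open-loop behavior-cloning (BC) loss. All names here are assumed
# for illustration and are not taken from the paper's code.

def mixed_loss(policy, discriminator, dynamics, expert_batch,
               bc_weight=0.5, horizon=10):
    """Return a combined MGAIL + BC training loss for the policy."""
    # Closed-loop term: roll the policy forward through a differentiable
    # dynamics model and have the discriminator score the rollout as
    # expert-like; gradients flow back through the dynamics model.
    state = expert_batch["initial_state"]          # (B, state_dim)
    adv_terms = []
    for _ in range(horizon):
        action = policy(state)
        adv_terms.append(-torch.log(discriminator(state, action) + 1e-8))
        state = dynamics(state, action)
    adv_loss = torch.cat(adv_terms).mean()

    # Open-loop term: standard behavior cloning on logged expert
    # (state, action) pairs, with no rollout and no compounding model error.
    pred_actions = policy(expert_batch["states"])
    bc_loss = F.mse_loss(pred_actions, expert_batch["actions"])

    # Mix the two objectives. The abstract reports this mixture is important;
    # the specific weighting used here is an assumption for illustration.
    return adv_loss + bc_weight * bc_loss
```

Intuitively, the closed-loop term teaches the policy to recover from its own mistakes during rollouts, while the open-loop BC term anchors it to the expert data; the abstract's finding is that neither alone suffices.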

Authors (17)
  1. Eli Bronstein (7 papers)
  2. Mark Palatucci (2 papers)
  3. Dominik Notz (2 papers)
  4. Brandyn White (7 papers)
  5. Alex Kuefler (8 papers)
  6. Yiren Lu (17 papers)
  7. Supratik Paul (7 papers)
  8. Payam Nikdel (8 papers)
  9. Paul Mougin (6 papers)
  10. Hongge Chen (20 papers)
  11. Justin Fu (20 papers)
  12. Austin Abrams (2 papers)
  13. Punit Shah (3 papers)
  14. Evan Racah (12 papers)
  15. Benjamin Frenkel (1 paper)
  16. Shimon Whiteson (122 papers)
  17. Dragomir Anguelov (73 papers)
Citations (47)
