Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing (2010.08844v2)

Published 17 Oct 2020 in cs.CV, cs.AI, and cs.LG

Abstract: There is considerable evidence that deep neural networks are vulnerable to adversarial perturbations applied directly to their digital inputs. However, it remains an open question whether this translates to vulnerabilities in real systems. For example, an attack on self-driving cars would in practice entail modifying the driving environment, which then impacts the video inputs to the car's controller, thereby indirectly leading to incorrect driving decisions. Such attacks require accounting for system dynamics and tracking viewpoint changes. We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment using a differentiable approximation for the mapping from environmental modifications (rectangles on the road) to the corresponding video inputs to the controller neural network. Given the parameters of the rectangles, our proposed differentiable mapping composites them onto pre-recorded video streams of the original environment, accounting for geometric and color variations. Moreover, we propose a multiple trajectory sampling approach that enables our attacks to be robust to a car's self-correcting behavior. When combined with a neural network-based controller, our approach allows the design of adversarial modifications through end-to-end gradient-based optimization. Using the Carla autonomous driving simulator, we show that our approach is significantly more scalable and far more effective at identifying autonomous vehicle vulnerabilities in simulation experiments than a state-of-the-art approach based on Bayesian Optimization.

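The abstract only sketches the method at a high level, so the following is a minimal, hypothetical illustration of the core idea rather than the authors' implementation: parameterize a rectangle on the road, composite it differentiably onto pre-recorded frames, and optimize the rectangle's parameters by gradient descent so that a neural controller's steering output is driven away from its nominal value, averaged over several sampled trajectories. The controller architecture, frame data, soft-mask compositing, and loss are all stand-ins; the paper's mapping additionally accounts for the recorded camera geometry and color variation.

```python
# Hypothetical sketch of differentiable rectangle compositing + gradient attack.
# All components (controller, frames, mask, loss) are placeholder assumptions.
import torch
import torch.nn as nn

H, W = 66, 200                      # assumed frame size
T = 8                               # frames per sampled trajectory
K = 3                               # number of sampled trajectories

def soft_rect_mask(cx, cy, w, h, sharp=50.0):
    """Differentiable 'soft' rectangle mask in normalized image coordinates."""
    ys = torch.linspace(0, 1, H).view(H, 1)
    xs = torch.linspace(0, 1, W).view(1, W)
    mx = torch.sigmoid(sharp * (w / 2 - (xs - cx).abs()))
    my = torch.sigmoid(sharp * (h / 2 - (ys - cy).abs()))
    return mx * my                  # (H, W): ~1 inside the rectangle, ~0 outside

def composite(frame, params, color):
    """Alpha-blend the rectangle onto one frame (geometric warping omitted)."""
    cx, cy, w, h = params
    m = soft_rect_mask(cx, cy, w, h).unsqueeze(0)       # (1, H, W)
    return (1 - m) * frame + m * color.view(3, 1, 1)

# Placeholder controller standing in for the trained driving network.
controller = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),                     # outputs a steering value
)

# Pre-recorded video streams from K sampled trajectories (random stand-ins here).
trajectories = [torch.rand(T, 3, H, W) for _ in range(K)]

# Attack variables: rectangle center and size (normalized) plus its RGB color.
params = torch.tensor([0.5, 0.7, 0.3, 0.1], requires_grad=True)
color = torch.tensor([0.1, 0.1, 0.1], requires_grad=True)
opt = torch.optim.Adam([params, color], lr=0.01)

for step in range(200):
    opt.zero_grad()
    loss = 0.0
    for frames in trajectories:                         # robustness via trajectory sampling
        for t in range(T):
            attacked = composite(frames[t], params, color).unsqueeze(0)
            steer = controller(attacked)
            loss = loss - steer.abs().mean()            # push steering away from nominal (0)
    loss.backward()
    opt.step()
    with torch.no_grad():                               # keep the rectangle physically plausible
        params.clamp_(0.05, 0.95)
        color.clamp_(0.0, 1.0)
```

Because every step from rectangle parameters to controller output is differentiable, the whole attack reduces to end-to-end gradient-based optimization, which is what lets the approach scale beyond black-box search methods such as Bayesian Optimization.
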
Citations (11)
