Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing

17 Oct 2020 · Jinghan Yang, Adith Boloor, Ayan Chakrabarti, Xuan Zhang, Yevgeniy Vorobeychik

There is considerable evidence that deep neural networks are vulnerable to adversarial perturbations applied directly to their digital inputs. However, it remains an open question whether this translates to vulnerabilities in real systems. For example, an attack on self-driving cars would in practice entail modifying the driving environment, which then impacts the video inputs to the car's controller, thereby indirectly leading to incorrect driving decisions. Such attacks require accounting for system dynamics and tracking viewpoint changes. We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment using a differentiable approximation for the mapping from environmental modifications (rectangles on the road) to the corresponding video inputs to the controller neural network. Given the parameters of the rectangles, our proposed differentiable mapping composites them onto pre-recorded video streams of the original environment, accounting for geometric and color variations. Moreover, we propose a multiple trajectory sampling approach that enables our attacks to be robust to a car's self-correcting behavior. When combined with a neural network-based controller, our approach allows the design of adversarial modifications through end-to-end gradient-based optimization. Using the Carla autonomous driving simulator, we show that our approach is significantly more scalable and far more effective at identifying autonomous vehicle vulnerabilities in simulation experiments than a state-of-the-art approach based on Bayesian Optimization.
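To make the core idea concrete, below is a minimal PyTorch sketch (not the paper's code) of how a differentiable compositing step and gradient-based optimization of rectangle parameters might look. The function and class names (`composite_rectangle`, `ToyController`) are hypothetical, the controller is a stand-in, and the rectangle is blended in image space with a fixed position across frames; the actual method additionally warps the rectangles per frame to account for viewpoint changes and samples multiple trajectories to handle the car's self-correction.

```python
# Hypothetical sketch: soft-edged rectangle compositing that stays
# differentiable w.r.t. the rectangle parameters, plus a gradient step
# on those parameters through a toy steering controller.
import torch
import torch.nn as nn

def composite_rectangle(frame, params, sharpness=50.0):
    """Blend a colored rectangle onto `frame` (C,H,W) using a soft mask,
    so gradients flow back to position, size, and color."""
    cx, cy, w, h, r, g, b = params
    _, H, W = frame.shape
    ys = torch.linspace(0.0, 1.0, H).view(H, 1).expand(H, W)
    xs = torch.linspace(0.0, 1.0, W).view(1, W).expand(H, W)
    # Soft indicator of lying inside the rectangle along each axis.
    inside_x = torch.sigmoid(sharpness * (w / 2 - (xs - cx).abs()))
    inside_y = torch.sigmoid(sharpness * (h / 2 - (ys - cy).abs()))
    mask = (inside_x * inside_y).unsqueeze(0)          # (1,H,W)
    color = torch.stack([r, g, b]).view(3, 1, 1)       # (3,1,1)
    return (1 - mask) * frame + mask * color

class ToyController(nn.Module):
    """Stand-in for the driving controller: image -> steering output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)

# Pre-recorded frames from the benign trajectory (random stand-ins here).
frames = torch.rand(4, 3, 128, 256)
controller = ToyController()

# Rectangle parameters: center (cx, cy), size (w, h), color (r, g, b) in [0,1].
params = torch.tensor([0.5, 0.7, 0.3, 0.1, 0.2, 0.2, 0.2], requires_grad=True)
opt = torch.optim.Adam([params], lr=1e-2)

for step in range(5):
    opt.zero_grad()
    # Composite the rectangle onto every recorded frame; the attack loss
    # here simply rewards steering that deviates from zero.
    steering = torch.stack([controller(composite_rectangle(f, params)[None])
                            for f in frames])
    loss = -steering.abs().mean()
    loss.backward()
    opt.step()
    print(f"step {step}: loss={loss.item():.4f}")
```

The key design point this illustrates is that the compositing is an ordinary differentiable tensor operation, so the rectangle parameters can be optimized end-to-end with the same gradient machinery used to train the controller, rather than by black-box search.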
