Adversarial Deep Learning for Autonomous Driving
ABOUT THE PROJECT
At a glance
Deep learning has become the state-of-the-art approach in many areas, including vision, speech recognition, and natural language processing, and has enabled a wide range of applications. One important and appealing application domain is self-driving cars. For example, deep learning techniques can help a self-driving car understand its environment, such as traffic signs and surrounding objects, from images taken by on-board cameras, or even provide end-to-end control of the vehicle.
Recent research, however, has discovered that deep learning systems can be easily fooled. For example, an adversary can readily construct adversarial examples for deep image classification networks: inputs modified only slightly, yet interpreted wildly differently by the network. Moreover, in our recent work, we showed that such attacks can succeed even without access to the details of the neural network model, i.e., in a black-box setting. Such security issues can severely hinder the application of deep learning to safety-critical systems such as self-driving cars.
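To make the idea of a "slightly modified" input concrete, the sketch below applies the fast gradient sign method (FGSM), one standard way to construct adversarial examples, to a toy logistic-regression "network". This is purely illustrative and is not the attack studied in this project: the weights, bias, input, and perturbation budget are all made-up values chosen so that a small, bounded perturbation flips the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM on a logistic-regression classifier.

    For cross-entropy loss, the gradient of the loss w.r.t. the
    input x is (p - y_true) * w, where p = sigmoid(w @ x + b).
    FGSM takes one step of size eps in the sign of that gradient,
    so the perturbation is bounded by eps in the max norm.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Illustrative fixed "model" and input (all values assumed).
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, -1.0, 1.0])   # clean input, confidently class 1
y = 1.0

p_clean = sigmoid(w @ x + b)     # high probability for class 1
x_adv = fgsm_perturb(x, w, b, y, eps=1.5)
p_adv = sigmoid(w @ x_adv + b)   # prediction flips to class 0
```

The key point the sketch illustrates is that each input coordinate moves by at most eps, yet the classifier's decision changes; for deep image networks, the same construction yields perturbations that are imperceptible to humans.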
While most adversarial deep learning studies have focused on image classification, it is important to examine whether networks for other tasks are equally vulnerable. In particular, recent research has demonstrated that it is possible to build a visual question answering (VQA) system that can understand an image and answer questions about it in natural language. Other works extend this idea to not only provide an answer, but also to explain why the system chose that answer. From this perspective, an adversary who wants to fool such a system must fool both the QA engine and the explanation engine, which may be a more challenging goal.

In this proposal, we explore both attacks and defenses to deepen the understanding of the security issues of deep neural networks, with the goal of providing a feasible defense under a reasonable threat model, so as to mitigate attacks on security-sensitive applications such as autonomous vehicles.