Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar
and
Ajmal Mian
arXiv e-Print archive - 2018 via Local arXiv
Keywords:
cs.CV
First published: 2018/01/02 Abstract: Deep learning is at the heart of the current rise of machine learning and
artificial intelligence. In the field of Computer Vision, it has become the
workhorse for applications ranging from self-driving cars to surveillance and
security. Whereas deep neural networks have demonstrated phenomenal success
(often beyond human capabilities) in solving complex problems, recent studies
show that they are vulnerable to adversarial attacks in the form of subtle
perturbations to inputs that lead a model to predict incorrect outputs. For
images, such perturbations are often too small to be perceptible, yet they
completely fool the deep learning models. Adversarial attacks pose a serious
threat to the success of deep learning in practice. This fact has led to a
large influx of contributions in this direction. This article presents the
first comprehensive survey on adversarial attacks on deep learning in Computer
Vision. We review the works that design adversarial attacks, analyze the
existence of such attacks and propose defenses against them. To emphasize that
adversarial attacks are possible in practical conditions, we separately review
the contributions that evaluate adversarial attacks in the real-world
scenarios. Finally, we draw on the literature to provide a broader outlook of
the research direction.
Akhtar and Mian present a comprehensive survey of adversarial attacks on deep neural networks and of defenses against them, specifically in computer vision. Although published on arXiv in January 2018 (and probably written before August 2017), the survey covers recent attacks and defenses. For example, Table 1 gives an overview of attacks on deep neural networks, categorized by the attacker's knowledge, the target, and the perturbation measure. The authors also assign each attack a strength measure in the form of a 1-to-5 star rating. Personally, however, I view this rating critically: many of the attacks have not been studied extensively (across a wide variety of defense mechanisms, tasks, and datasets). Compared to the related survey [1], the overview is slightly less detailed; the attacks, for example, are described in less mathematical detail, and the categorization in Table 1 is less comprehensive.
https://i.imgur.com/cdAcivj.png
Table 1: Overview of the discussed attacks on deep neural networks.
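To make the kind of attack being categorized concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the white-box attacks covered in the survey. The PyTorch code and the `model`, `x`, `y`, and `epsilon` names are my own illustrative assumptions, not the authors' implementation.

```python
# Minimal FGSM sketch (illustrative only); assumes a classifier `model`,
# an input batch `x` in [0, 1], and integer labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with a single signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clamp back to the valid image range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Applied to a correctly classified image, a small epsilon (e.g. 0.03 for inputs scaled to [0, 1]) is typically enough to flip the prediction while the perturbation remains nearly imperceptible, which is exactly the threat the abstract describes.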
[1] Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, Xiaolin Li:
Adversarial Examples: Attacks and Defenses for Deep Learning. CoRR abs/1712.07107 (2017)
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).