Adversarial Examples: Attacks and Defenses for Deep Learning
Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li
arXiv e-Print archive - 2017
Keywords:
cs.LG, cs.CR, cs.CV, stat.ML
First published: 2017/12/19
Abstract: With rapid progress and great successes in a wide spectrum of applications,
deep learning is being applied in many safety-critical environments. However,
deep neural networks have been recently found vulnerable to well-designed input
samples, called *adversarial examples*. Adversarial examples are
imperceptible to humans but can easily fool deep neural networks in the
testing/deploying stage. The vulnerability to adversarial examples becomes one
of the major risks for applying deep neural networks in safety-critical
scenarios. Therefore, the attacks and defenses on adversarial examples draw
great attention.
In this paper, we review recent findings on adversarial examples against deep
neural networks, summarize the methods for generating adversarial examples, and
propose a taxonomy of these methods. Under the taxonomy, applications and
countermeasures for adversarial examples are investigated. We further elaborate
on adversarial examples and explore the challenges and the potential solutions.
Yuan et al. present a comprehensive survey of attacks, defenses, and studies regarding the robustness and security of deep neural networks. Published on arXiv in December 2017, it covers the most recent attacks and defenses. For example, Table 1 lists all known attacks; Yuan et al. categorize them according to the level of knowledge required by the adversary, whether they are targeted or non-targeted, the type of optimization involved (e.g., iterative or one-shot), and the perturbation measure employed. As a result, Table 1 gives a solid overview of state-of-the-art attacks. Similarly, Table 2 gives an overview of the applications reported so far. Only for defenses is a comparable overview table missing. Still, the authors discuss (to my knowledge) all relevant defense strategies and comment on the performance reported for them in the literature.
https://i.imgur.com/3KpoYWr.png
Table 1: An overview of state-of-the-art attacks on deep neural networks.
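To make the taxonomy concrete, here is a minimal sketch (mine, not from the paper) of the fast gradient sign method (FGSM), one of the one-step, white-box, non-targeted, $L_\infty$-bounded attacks listed in Table 1. The PyTorch formulation, the epsilon value, and the assumption that inputs lie in [0, 1] are illustrative choices, not details taken from the survey.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb each input pixel by epsilon in the direction
    of the sign of the loss gradient (white-box, non-targeted, L-infinity)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Assumes inputs are normalized to [0, 1]; clamping keeps them valid images.
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterative attacks in Table 1 (e.g., BIM/PGD-style methods) essentially repeat this gradient step several times with a smaller step size, projecting back into the epsilon-ball after each step.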
https://i.imgur.com/4eq6Tzm.png
Table 2: An overview of applications of some of the attacks in Table 1.