Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini
and
David Wagner
arXiv e-Print archive - 2017
Keywords:
cs.LG, cs.CR, cs.CV
First published: 2017/05/20
Abstract: Neural networks are known to be vulnerable to adversarial examples: inputs
that are close to natural inputs but classified incorrectly. In order to better
understand the space of adversarial examples, we survey ten recent proposals
that are designed for detection and compare their efficacy. We show that all
can be defeated by constructing new loss functions. We conclude that
adversarial examples are significantly harder to detect than previously
appreciated, and the properties believed to be intrinsic to adversarial
examples are in fact not. Finally, we propose several simple guidelines for
evaluating future proposed defenses.
Carlini and Wagner study the effectiveness of adversarial example detectors as a defense strategy and show that most of them can be bypassed easily, either by existing attacks or by attacks whose loss function is adapted to the detector. Specifically, they consider a set of adversarial example detection schemes, including neural networks as detectors and statistical tests. After extensive experiments, the authors provide a set of lessons which include:
- Randomization (e.g. dropout at test time) is by far the most effective defense.
- Defenses seem to be dataset-specific; there is a discrepancy between defenses that work well on MNIST and those that work well on CIFAR.
- Detection neural networks can easily be bypassed (see the sketch after this list).
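
The "new loss functions" from the abstract amount to making the attack aware of the detector. Below is a minimal sketch of that idea, assuming hypothetical PyTorch modules `classifier` (class logits) and `detector` (a scalar score that is high for inputs flagged as adversarial); the paper's concrete constructions differ for each defense, so this only illustrates the general principle.

```python
import torch
import torch.nn.functional as F

def detector_aware_attack(x, y_true, classifier, detector,
                          steps=100, step_size=0.01, lam=1.0):
    """Perturb x so that the classifier is fooled while the detector's
    "adversarial" score stays low (white-box sketch, not the paper's
    exact construction)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = classifier(x_adv)
        det_score = detector(x_adv).mean()
        # Minimizing this loss (i) pushes the prediction away from the
        # true label and (ii) pushes the detector score down.
        loss = -F.cross_entropy(logits, y_true) + lam * det_score
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step_size * grad.sign()).clamp(0, 1)
        x_adv = x_adv.detach().requires_grad_(True)
    return x_adv.detach()
```

The weight `lam` trades off fooling the classifier against evading the detector; the point of the paper is that such a joint objective is usually enough to defeat detection networks.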
Additionally, they provide a set of recommendations for future work:
- When developing defense mechanisms, we always need to consider strong white-box attacks (i.e. attackers that are fully informed about the defense mechanism).
- Reporting accuracy alone is not meaningful; instead, false positive and false negative rates should be reported (a small sketch of such an evaluation follows this list).
- Simple datasets such as MNIST and CIFAR are not enough for evaluation.
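
As a small illustration of the second recommendation, a detector evaluation should report both error types at a chosen operating threshold. The score arrays and threshold below are hypothetical placeholders, not results from the paper.

```python
import numpy as np

def detector_error_rates(scores_clean, scores_adv, threshold):
    """False positive rate: clean inputs wrongly flagged as adversarial.
    False negative rate: adversarial inputs the detector misses."""
    fpr = np.mean(np.asarray(scores_clean) > threshold)
    fnr = np.mean(np.asarray(scores_adv) <= threshold)
    return fpr, fnr

# Hypothetical detector scores for clean and adversarial inputs.
fpr, fnr = detector_error_rates(
    scores_clean=np.random.randn(1000),
    scores_adv=np.random.randn(1000) + 1.5,
    threshold=0.5,
)
print(f"FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

A detector with a high detection rate can still be useless in practice if its false positive rate is large, which is exactly why accuracy-only reporting is misleading.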
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).