First published: 2016/07/08

Abstract: Most existing machine learning classifiers are highly vulnerable to
adversarial examples. An adversarial example is a sample of input data which
has been modified very slightly in a way that is intended to cause a machine
learning classifier to misclassify it. In many cases, these modifications can
be so subtle that a human observer does not even notice the modification at
all, yet the classifier still makes a mistake. Adversarial examples pose
security concerns because they could be used to perform an attack on machine
learning systems, even if the adversary has no access to the underlying model.
Up to now, all previous work has assumed a threat model in which the adversary
can feed data directly into the machine learning classifier. This is not always
the case for systems operating in the physical world, for example those using
signals from cameras and other sensors as input. This paper shows
that even in such physical world scenarios, machine learning systems are
vulnerable to adversarial examples. We demonstrate this by feeding adversarial
images obtained from a cell-phone camera to an ImageNet Inception classifier and
measuring the classification accuracy of the system. We find that a large
fraction of adversarial examples are classified incorrectly even when perceived
through the camera.
Kurakin et al. demonstrate that adversarial examples are also a concern in the physical world. Specifically, adversarial examples are crafted digitally, printed, and then photographed again with a cell-phone camera to see whether the classification network, running on a smartphone, still misclassifies them. In many cases, the adversarial examples still fool the network even after this print-and-photograph loop.
![Figure 1: Illustration of the experimental setup.](https://i.imgur.com/tYCKv79.png)
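To make the digital crafting step concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the attacks evaluated in the paper. This is not the authors' code: it assumes PyTorch/torchvision with a pretrained Inception v3 as a stand-in for the paper's ImageNet classifier, and the preprocessing and the epsilon value are illustrative choices.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained Inception v3 as a stand-in for the paper's ImageNet classifier (assumption).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.eval()

# Preprocessing kept in [0, 1] pixel space; normalization is applied inside the forward pass
# so the adversarial perturbation is computed on the image itself.
preprocess = T.Compose([T.Resize(342), T.CenterCrop(299), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])


def fgsm_attack(image_path: str, epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft one untargeted FGSM example: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(normalize(x))
    label = logits.argmax(dim=1)  # use the model's own prediction as the label
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()

    # Single signed-gradient step, clipped back to the valid image range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    return x_adv
```

In the paper's setup, such an image would then be printed, re-photographed with a phone camera, and fed back to the classifier to check whether the misclassification survives the physical transformation.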
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).