First published: 2017/10/24 Abstract: Recent research has revealed that the output of Deep Neural Networks (DNNs)
can be easily altered by adding relatively small perturbations to the input
vector. In this paper, we analyze an attack in an extremely limited scenario
where only one pixel can be modified. To this end, we propose a novel method for
generating one-pixel adversarial perturbations based on differential evolution.
The method requires less adversarial information than comparable attacks and can
fool a wider variety of networks.
The results show that 70.97% of the natural images can be perturbed to at least
one target class by modifying just one pixel with 97.47% confidence on average.
Thus, the proposed attack offers a different take on adversarial machine
learning in an extremely limited scenario, showing that current DNNs are
vulnerable even to such low-dimensional attacks.
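The core idea can be sketched in a few lines: a candidate perturbation is a five-tuple (x, y, r, g, b), and standard differential evolution (mutation by scaled difference vectors, greedy selection) searches for the tuple that most reduces the classifier's confidence. The sketch below is a minimal illustration, not the paper's implementation: `predict` is a toy stand-in for a real DNN, and the image size, population size, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8  # toy image size (assumption; the paper attacks e.g. 32x32 CIFAR-10 images)
image = rng.random((H, W, 3))
weights = rng.standard_normal((H, W, 3))

def predict(img):
    """Toy confidence score for the 'true' class; a stand-in for a DNN."""
    return float(np.tanh((img * weights).sum()))

def apply_pixel(img, cand):
    """Overwrite one pixel of a copy of img with the candidate's (r, g, b)."""
    x, y, r, g, b = cand
    out = img.copy()
    out[int(round(x)) % H, int(round(y)) % W] = np.clip([r, g, b], 0.0, 1.0)
    return out

def one_pixel_attack(img, pop_size=20, iters=50, F=0.5):
    """Differential evolution over candidates (x, y, r, g, b), minimizing confidence."""
    lo = np.array([0, 0, 0, 0, 0], dtype=float)
    hi = np.array([H - 1, W - 1, 1, 1, 1], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, 5))
    fit = np.array([predict(apply_pixel(img, c)) for c in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct random members with scale F.
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            trial = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            f = predict(apply_pixel(img, trial))
            if f < fit[i]:  # greedy selection: keep the trial only if it improves
                pop[i], fit[i] = trial, f
    best = pop[np.argmin(fit)]
    return best, float(fit.min())

cand, conf = one_pixel_attack(image)
```

Each candidate encodes only the pixel position and its new color, so the search space is five-dimensional regardless of image size; this is why differential evolution, which needs no gradient access, suffices and why the attack counts as black-box.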