Spatially Transformed Adversarial Examples
Xiao, Chaowei
and
Zhu, Jun-Yan
and
Li, Bo
and
He, Warren
and
Liu, Mingyan
and
Song, Dawn
arXiv e-Print archive - 2018 via Local Bibsonomy
Keywords:
dblp
Xiao et al. propose adversarial examples based on spatial transformations. This work is closely related to the adversarial deformations of [1]. In particular, a per-pixel deformation flow field is optimized to cause misclassification, and the size of the perturbation is measured directly on the flow field rather than in image space. Examples on MNIST are shown in Figure 1; it can clearly be seen that most pixels are moved individually and no smoothness is enforced. The authors also show that commonly used defense mechanisms are largely ineffective against these attacks. Unfortunately, and in contrast to [1], they do not consider adversarial training on their own spatial transformations as a defense.
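The two ingredients of such an attack can be illustrated with a minimal NumPy sketch: a per-pixel flow field that warps the image via bilinear interpolation, and a regularizer measured on the flow field itself (here a simple total-variation-style penalty on neighboring flow vectors; the exact loss used in the paper may differ). The function names and the toy setup are my own for illustration; a real attack would optimize the flow against a classifier's loss.

```python
import numpy as np

def warp(image, flow):
    """Warp a grayscale image by a per-pixel flow field.

    image: (H, W) array; flow: (2, H, W) array, where flow[0] is the
    vertical and flow[1] the horizontal displacement. Output pixel (i, j)
    samples the input at (i + flow[0, i, j], j + flow[1, i, j]) using
    bilinear interpolation, clipped to the image borders.
    """
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[0], 0, H - 1)
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = sy - y0
    wx = sx - x0
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0]
            + wy * wx * image[y1, x1])

def flow_penalty(flow):
    """Smoothness penalty computed on the flow field itself:
    sum of squared differences between neighboring flow vectors.
    Zero for any constant flow (a pure translation)."""
    dy = flow[:, 1:, :] - flow[:, :-1, :]
    dx = flow[:, :, 1:] - flow[:, :, :-1]
    return float((dy ** 2).sum() + (dx ** 2).sum())

# Toy example: a zero flow leaves the image unchanged and costs nothing.
img = np.arange(16, dtype=float).reshape(4, 4)
identity_flow = np.zeros((2, 4, 4))
assert np.allclose(warp(img, identity_flow), img)
assert flow_penalty(identity_flow) == 0.0
```

Measuring the perturbation on the flow (rather than on pixel intensities) is what makes the attack spatial: a large uniform translation is "cheap" under `flow_penalty`, while the erratic per-pixel displacements visible in Figure 1 would be penalized if such a smoothness term were weighted strongly.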
https://i.imgur.com/uDfttMU.png
Figure 1: Examples of the computed adversarial examples/transformations on MNIST for three different models. Note that these are targeted attacks.
[1] R. Alaifari, G. S. Alberti, T. Gauksson. ADef: an Iterative Algorithm to Construct Adversarial Deformations. arXiv, abs/1804.07729v2, 2018.
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).