A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry
arXiv e-Print archive - 2017
Keywords:
cs.LG, cs.CV, cs.NE, stat.ML
First published: 2017/12/07

Abstract: We show that simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs. This is in sharp contrast to previous work that relied on more complicated optimization approaches that are unlikely to appear outside of a truly adversarial setting. Moreover, fooling rotations and translations are easy to find and require only a few black-box queries to the target model. Overall, our findings emphasize the need for designing robust classifiers even in natural, benign contexts.
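The black-box attack the abstract alludes to amounts to a small search over rotation and translation parameters, querying the target model until the prediction flips. A minimal sketch of such a grid search, assuming a black-box `model` callable that maps an image array to a predicted label; the function name, parameter ranges, and grid size are illustrative choices, not the paper's exact attack configuration:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def find_fooling_transform(model, image, true_label,
                           max_angle=30.0, max_shift=3.0, grid=5):
    """Grid search for a rotation + translation that flips the prediction.

    `model` is treated as a black box mapping an image array (H, W[, C])
    to a predicted class label. Ranges and grid size are illustrative.
    """
    angles = np.linspace(-max_angle, max_angle, grid)
    shifts = np.linspace(-max_shift, max_shift, grid)
    for angle in angles:
        for dy in shifts:
            for dx in shifts:
                # Rotate about the image center in the spatial plane,
                # keeping the original shape; vacated pixels become 0.
                candidate = rotate(image, angle, axes=(1, 0),
                                   reshape=False, order=1)
                # Translate spatially; extra dims (e.g. channels) untouched.
                offsets = (dy, dx) + (0,) * (image.ndim - 2)
                candidate = shift(candidate, offsets, order=1)
                # One black-box query per candidate transformation.
                if model(candidate) != true_label:
                    return angle, dy, dx  # fooling transform found
    return None  # no fooling transform on this grid
```

Even this naive grid issues only on the order of a hundred queries, which matches the paper's point that such transformations are cheap to find compared to gradient-based pixel perturbations.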
Engstrom et al. demonstrate that spatial transformations such as translations and rotations can be used to generate adversarial examples. Personally, however, I think the paper does not address the question of where adversarial perturbations "end" and generalization issues "start". For large translations and rotations, the failure is clearly one of generalization; small ones could also be interpreted as adversarial perturbations, especially when they are computed with the intention of fooling the network. Still, the distinction is not clear ...
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).