A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry
arXiv e-Print archive, 2017
Keywords:
cs.LG, cs.CV, cs.NE, stat.ML
First published: 2017/12/07
Abstract: We show that simple transformations, namely translations and rotations alone,
are sufficient to fool neural network-based vision models on a significant
fraction of inputs. This is in sharp contrast to previous work that relied on
more complicated optimization approaches that are unlikely to appear outside of
a truly adversarial setting. Moreover, such fooling rotations and translations are
easy to find and require only a few black-box queries to the target model.
Overall, our findings emphasize the need for designing robust classifiers even
in natural, benign contexts.
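
To make the "few black-box queries" claim concrete, here is a minimal random-search sketch in PyTorch: it samples a handful of rotation/translation pairs and keeps the first one that flips the model's prediction. The model handle, the query budget, and the transformation ranges (roughly ±30° and ±3 px) are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import torch
import torchvision.transforms.functional as TF

def find_fooling_transform(model, image, label, num_queries=10,
                           max_rot=30.0, max_trans=3):
    """Random black-box search for a rotation + translation that changes
    the model's prediction on `image` (a (C, H, W) tensor).
    Budget and ranges are assumptions for illustration."""
    model.eval()
    for _ in range(num_queries):
        # Sample a rotation angle (degrees) and integer pixel shifts.
        angle = (torch.rand(1).item() * 2 - 1) * max_rot
        dx = torch.randint(-max_trans, max_trans + 1, (1,)).item()
        dy = torch.randint(-max_trans, max_trans + 1, (1,)).item()
        transformed = TF.affine(image, angle=angle, translate=[dx, dy],
                                scale=1.0, shear=[0.0])
        # One forward pass per candidate: the only access the attack needs.
        with torch.no_grad():
            pred = model(transformed.unsqueeze(0)).argmax(dim=1).item()
        if pred != label:
            return angle, dx, dy  # fooling transformation found
    return None  # no fooling transform within the query budget
```

Each candidate costs a single forward pass and uses no gradient information, which is what makes the attack black-box and cheap compared to the optimization-based methods the abstract contrasts against.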