Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S. Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow
arXiv e-Print archive, 2018
Keywords: cs.CV, 68T45, I.2.6
First published: 2018/01/09
Abstract: State-of-the-art computer vision models have been shown to be vulnerable to
small adversarial perturbations of the input. In other words, most images in
the data distribution are both correctly classified by the model and are very
close to a visually similar misclassified image. Despite substantial research
interest, the cause of the phenomenon is still poorly understood and remains
unsolved. We hypothesize that this counterintuitive behavior is a naturally
occurring result of the high-dimensional geometry of the data manifold. As a
first step towards exploring this hypothesis, we study a simple synthetic
dataset: classifying between two concentric high-dimensional spheres. For
this dataset we show a fundamental tradeoff between the amount of test error
and the average distance to the nearest error. In particular, we prove that any
model which misclassifies a small constant fraction of a sphere will be
vulnerable to adversarial perturbations of size $O(1/\sqrt{d})$. Surprisingly,
when we train several different architectures on this dataset, all of their
error sets naturally approach this theoretical bound. Our theory thus implies
that the vulnerability of neural networks to small adversarial perturbations is
a logical consequence of the amount of test error observed. We hope that our
theoretical analysis of this very simple case will point the way forward to
explore how the geometry of complex real-world data sets leads to adversarial
examples.
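
To make the $O(1/\sqrt{d})$ claim concrete, here is a minimal numerical sketch (not the authors' code). It assumes the data manifold is the unit sphere and stands in for a model's error set with an idealized spherical cap $\{x : x_1 \ge t\}$ of measure 1%; the error rate, sample counts, and variable names are illustrative choices, not details from the paper.

```python
# Minimal sketch (assumptions, not the paper's setup): the "error set" is a
# spherical cap {x on S^{d-1} : x_1 >= t} covering ~1% of the unit sphere,
# standing in for the region a model misclassifies. Because the cap depends
# only on the first coordinate, we only need to sample x_1 of a uniform point
# on the sphere: x_1 = g_1 / ||g|| for a standard Gaussian vector g.
import numpy as np

rng = np.random.default_rng(0)
error_rate = 0.01      # hypothetical fraction of the sphere that is misclassified
n_samples = 200_000    # Monte Carlo samples per dimension

for d in (10, 100, 1_000, 10_000):
    g1 = rng.standard_normal(n_samples)
    rest_sq = rng.chisquare(d - 1, size=n_samples)   # squared norm of the other d-1 coords
    x1 = g1 / np.sqrt(g1 ** 2 + rest_sq)             # first coordinate of a uniform sphere point

    # Pick the cap threshold t so that P(x_1 >= t) is roughly error_rate.
    t = np.quantile(x1, 1.0 - error_rate)

    # Euclidean distance from each correctly classified point (x_1 < t) to the
    # nearest point of the cap, which lies on the boundary circle {y_1 = t}.
    ok = x1 < t
    dist = np.sqrt((x1[ok] - t) ** 2
                   + (np.sqrt(1.0 - x1[ok] ** 2) - np.sqrt(1.0 - t ** 2)) ** 2)

    med = np.median(dist)
    print(f"d={d:6d}  median distance to nearest error={med:.4f}  "
          f"median * sqrt(d)={med * np.sqrt(d):.3f}")
```

For a fixed error rate, the product of the median distance and $\sqrt{d}$ stays roughly constant while the raw distance shrinks, which is the $O(1/\sqrt{d})$ scaling the abstract refers to. The trained networks studied in the paper are of course not literal caps; this sketch is only meant to convey the geometric intuition behind the tradeoff.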