Towards the first adversarially robust neural network model on MNIST
Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
arXiv e-Print archive - 2018
Keywords:
cs.CV
First published: 2018/05/23
Abstract: Despite much effort, deep neural networks remain highly susceptible to tiny
input perturbations and even for MNIST, one of the most common toy datasets in
computer vision, no neural network model exists for which adversarial
perturbations are large and make semantic sense to humans. We show that even
the widely recognized and by far most successful defense by Madry et al. (1)
overfits to the L-infinity metric (it is highly susceptible to L2 and L0
perturbations), (2) classifies unrecognizable images with high certainty, (3)
performs not much better than simple input binarization, and (4) features
adversarial perturbations that make little sense to humans. These results
suggest that MNIST is far from being solved in terms of adversarial robustness.
We present a novel robust classification model that performs analysis by
synthesis using learned class-conditional data distributions. We derive bounds
on its robustness and go to great lengths to empirically evaluate our model
using maximally effective adversarial attacks by (a) applying decision-based,
score-based, gradient-based and transfer-based attacks for several different Lp
norms, (b) designing a new attack that exploits the structure of our
defended model, and (c) devising a novel decision-based attack that seeks to
minimize the number of perturbed pixels (L0). The results suggest that our
approach yields state-of-the-art robustness on MNIST against L0, L2 and
L-infinity perturbations, and we demonstrate that most adversarial examples are
strongly perturbed towards the perceptual boundary between the original and the
adversarial class.
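
The "analysis by synthesis" classifier described in the abstract scores an input under learned class-conditional data distributions and predicts the class whose generative model explains the input best. Below is a minimal sketch of that idea, assuming one small VAE per class in PyTorch; the class and function names, the architecture, and the single amortized-encoder ELBO pass are assumptions of this sketch (the paper additionally optimizes over the latent code at inference time, which is omitted here).

```python
# Hedged sketch: classify an MNIST-shaped input by the class whose
# class-conditional generative model (here: a tiny per-class VAE) assigns it
# the highest evidence lower bound (ELBO). Architecture is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassConditionalVAE(nn.Module):
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784))

    def elbo(self, x: torch.Tensor) -> torch.Tensor:
        """Per-sample ELBO (higher = input is better explained by this class)."""
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon_logits = self.dec(z)
        # Bernoulli reconstruction log-likelihood of the flattened image.
        rec_ll = -F.binary_cross_entropy_with_logits(
            recon_logits, x.flatten(1), reduction="none").sum(dim=1)
        # KL divergence between the approximate posterior and a unit Gaussian.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)
        return rec_ll - kl


def analysis_by_synthesis_predict(vaes, x):
    """Predict the class whose generative model synthesizes x best."""
    scores = torch.stack([vae.elbo(x) for vae in vaes], dim=1)  # (batch, classes)
    return scores.argmax(dim=1)


if __name__ == "__main__":
    vaes = [ClassConditionalVAE() for _ in range(10)]  # untrained, shape check only
    x = torch.rand(4, 1, 28, 28)
    print(analysis_by_synthesis_predict(vaes, x))
```

The design choice the abstract highlights is that the decision depends on how well each class can generate the input rather than on a discriminative boundary, which is what allows the robustness bounds derived in the paper.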