Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh
arXiv e-Print archive - 2017
Keywords:
cs.LG, cs.CR, stat.ML
First published: 2017/12/02

Abstract: Recent studies have revealed the vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defensive algorithm called Random Self-Ensemble (RSE) by combining two important concepts: ${\bf randomness}$ and ${\bf ensemble}$. To protect a targeted model, RSE adds random noise layers to the neural network to defend against state-of-the-art gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models $f_\epsilon$ without any additional memory overhead, and that the proposed training procedure based on noisy stochastic gradient descent ensures the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real datasets. For instance, on CIFAR-10 with a VGG network (which has $92\%$ accuracy without any attack), under the state-of-the-art C&W attack within a certain distortion tolerance, the accuracy of an unprotected model drops to less than $10\%$ and the best previous defense technique retains only $48\%$ accuracy, while our method still has $86\%$ prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.
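
The abstract's description of RSE (noise layers inserted into the network, test-time averaging over random noise draws) suggests the rough shape below. This is a minimal sketch assuming a PyTorch-style model; the names NoiseLayer, NoisyBlock, rse_predict, the noise standard deviation std, and the ensemble size n_ensemble are illustrative choices, not the authors' released code.

import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    # Adds zero-mean Gaussian noise at every forward pass (training and inference).
    def __init__(self, std):
        super().__init__()
        self.std = std

    def forward(self, x):
        return x + self.std * torch.randn_like(x)

class NoisyBlock(nn.Module):
    # Convolutional block with a noise layer in front; stacking such blocks
    # gives a "noisy" version of a VGG-style network (hypothetical layout).
    def __init__(self, in_ch, out_ch, std):
        super().__init__()
        self.body = nn.Sequential(
            NoiseLayer(std),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def rse_predict(model, x, n_ensemble=10):
    # Test-time self-ensemble: average softmax outputs over several forward
    # passes, each drawing fresh noise in every NoiseLayer.
    model.eval()
    avg = None
    for _ in range(n_ensemble):
        p = torch.softmax(model(x), dim=1)
        avg = p if avg is None else avg + p
    return avg / n_ensemble

In this sketch, training the noisy network with ordinary SGD corresponds to the noisy stochastic gradient descent the abstract mentions, and averaging predictions over fresh noise draws at test time approximates ensembling many noisy models $f_\epsilon$ while storing only a single set of weights.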