Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh
arXiv e-Print archive - 2017
Keywords:
cs.LG, cs.CR, stat.ML
First published: 2017/12/02

Abstract: Recent studies have revealed the vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defensive algorithm called Random Self-Ensemble (RSE) by combining two important concepts: ${\bf randomness}$ and ${\bf ensemble}$. To protect a targeted model, RSE adds random noise layers to the neural network to prevent state-of-the-art gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models $f_\epsilon$ without any additional memory overhead, and that the proposed training procedure based on noisy stochastic gradient descent ensures the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real datasets. For instance, on CIFAR-10 with a VGG network (which has $92\%$ accuracy without any attack), under the state-of-the-art C&W attack within a certain distortion tolerance, the accuracy of the unprotected model drops to less than $10\%$ and the best previous defense technique achieves $48\%$ accuracy, while our method still has $86\%$ prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.
Liu et al. propose randomizing neural networks, implicitly learning an ensemble of models, to defend against adversarial attacks. In particular, they introduce Gaussian noise layers before regular convolutional layers. The noise can be seen as an additional parameter of the model. During training, noise is randomly added on every forward pass. During testing, the model is evaluated on a single test input using multiple random noise vectors; this essentially corresponds to an ensemble of different models (parameterized by the different noise vectors).
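Since the mechanism is simple, a minimal PyTorch-style sketch may help; the class names, the noise scale `sigma`, and the ensemble size are illustrative assumptions on my part, not the authors' exact configuration:

```python
# Minimal sketch of RSE-style noise injection; sigma and n_samples are
# illustrative hyper-parameters, not the paper's exact settings.
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise to its input on every forward pass
    (the noise stays on at test time as well)."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x)

class NoisyConvBlock(nn.Module):
    """A noise layer placed before a regular convolution, as described for RSE."""
    def __init__(self, in_ch, out_ch, sigma=0.1):
        super().__init__()
        self.noise = NoiseLayer(sigma)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(self.noise(x)))

@torch.no_grad()
def ensemble_predict(model, x, n_samples=10):
    """Average softmax outputs over several random noise draws at test time."""
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0).argmax(dim=-1)
```

Because each forward pass draws fresh noise, repeatedly evaluating the same model on the same input and averaging the outputs acts like an ensemble without storing multiple sets of weights.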
Mathematically, the authors provide two interesting interpretations. First, they argue that training essentially minimizes an upper bound on the (noisy) inference loss. Second, they show that their approach is equivalent to Lipschitz regularization [1].
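Concretely, writing $f_\epsilon$ for the network with noise $\epsilon$, the test-time ensemble predicts with $\mathbb{E}_\epsilon[f_\epsilon(x)]$, while training minimizes $\mathbb{E}_\epsilon[\ell(f_\epsilon(x), y)]$; if the loss $\ell$ is convex in the network output, Jensen's inequality gives (in my notation, not the paper's exact statement)

$$\ell\big(\mathbb{E}_\epsilon[f_\epsilon(x)],\, y\big) \;\leq\; \mathbb{E}_\epsilon\big[\ell(f_\epsilon(x), y)\big],$$

so the training objective upper-bounds the loss of the ensembled prediction.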
[1] M. Hein, M. Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. arXiv:1705.08475, 2017.
Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).