Adversarial Vulnerability of Neural Networks Increases With Input Dimension
Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
arXiv e-Print archive - 2018 via Local arXiv
Keywords:
stat.ML, cs.CV, cs.LG, 68T45, I.2.6
First published: 2018/02/05
Abstract: Over the past four years, neural networks have proven vulnerable to
adversarial images: targeted but imperceptible image perturbations lead to
drastically different predictions. We show that adversarial vulnerability
increases with the gradients of the training objective when seen as a function
of the inputs. For most current network architectures, we prove that the
$\ell_1$-norm of these gradients grows as the square root of the input-size.
These nets therefore become increasingly vulnerable with growing image size.
Over the course of our analysis we rediscover and generalize
double-backpropagation, a technique that penalizes large gradients in the loss
surface to reduce adversarial vulnerability and increase generalization
performance. We show that this regularization-scheme is equivalent at first
order to training with adversarial noise. Our proofs rely on the network's
weight-distribution at initialization, but extensive experiments confirm all
conclusions after training.
Simon-Gabriel et al. study the robustness of neural networks with respect to the input dimensionality. Their main hypothesis is that the vulnerability of neural networks to adversarial perturbations increases with the input dimension. To support this hypothesis, they provide a theoretical analysis as well as experiments.
The general idea of robustness is that small perturbations $\delta$ of the input $x$ only result in small variations $\delta \mathcal{L}$ of the loss:
$\delta \mathcal{L} = \max_{\|\delta\| \leq \epsilon} |\mathcal{L}(x + \delta) - \mathcal{L}(x)| \approx \max_{\|\delta\| \leq \epsilon} |\partial_x \mathcal{L} \cdot \delta| = \epsilon \|\partial_x \mathcal{L}\|_*$
where the approximation is a first-order Taylor expansion and $\|\cdot\|_*$ denotes the dual norm of $\|\cdot\|$. As a result, the vulnerability of a network can be quantified by $\epsilon\,\mathbb{E}_x \|\partial_x \mathcal{L}\|_*$. A natural regularizer to increase robustness (i.e. decrease vulnerability) is therefore $\epsilon \|\partial_x \mathcal{L}\|_*$, which is similar to the regularizer proposed in [1].
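To make this regularizer concrete, here is a minimal PyTorch sketch of penalizing the input-gradient norm (the double-backpropagation idea rediscovered in the paper). The function name, `eps`, and the choice of norm `q` are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, eps=0.1, q=2):
    """Cross-entropy loss plus eps * ||d loss / d x||_q, averaged over the batch.

    A minimal sketch of gradient-norm regularization (double backpropagation);
    `eps` and the norm order `q` (the dual of the attack norm) are
    illustrative choices, not the paper's exact hyperparameters.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty term itself can be backpropagated
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.flatten(1).norm(p=q, dim=1).mean()
    return loss + eps * penalty
```

At first order, training with this penalty is equivalent to training with adversarial noise of size $\epsilon$, which is why the paper treats the two as interchangeable regularization schemes.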
The remainder of the paper studies this gradient norm $\|\partial_x \mathcal{L}\|_*$ as a function of the input dimension $d$. Specifically, the authors show that it increases monotonically with $d$; for most common architectures, the $\ell_1$ norm of the input gradients grows as $\sqrt{d}$. I refer to the paper for the exact theorems and proofs. The analysis assumes untrained networks that have merely been initialized; however, experiments confirm that the conclusions also hold in realistic settings after training, e.g. on ImageNet. The sketch below illustrates the kind of measurement involved.
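The following sketch, assuming a toy randomly initialized MLP rather than the paper's architectures, estimates the average $\ell_1$ norm of the input gradient at initialization for several input sizes $d$; per the paper's claim, this quantity should grow roughly like $\sqrt{d}$.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_grad_l1_norm(d, n_samples=256, width=512, n_classes=10):
    """Estimate E_x ||d loss / d x||_1 for a freshly initialized MLP
    with input dimension d (toy architecture, for illustration only)."""
    model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, n_classes))
    x = torch.randn(n_samples, d, requires_grad=True)
    y = torch.randint(0, n_classes, (n_samples,))
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.abs().sum(dim=1).mean().item()

# Per the paper's claim, quadrupling d should roughly double the gradient norm.
for d in [256, 1024, 4096]:
    print(d, mean_grad_l1_norm(d))
```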
[1] Matthias Hein, Maksym Andriushchenko:
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation. NIPS 2017: 2263-2273
Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).