Sensitivity and Generalization in Neural Networks: an Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington and Jascha Sohl-Dickstein
arXiv e-Print archive - 2018
Keywords:
stat.ML, cs.AI, cs.LG, cs.NE
First published: 2018/02/23
Abstract: In practice it is often found that large over-parameterized neural networks
generalize better than their smaller counterparts, an observation that appears
to conflict with classical notions of function complexity, which typically
favor smaller models. In this work, we investigate this tension between
complexity and generalization through an extensive empirical exploration of two
natural metrics of complexity related to sensitivity to input perturbations.
Our experiments survey thousands of models with various fully-connected
architectures, optimizers, and other hyper-parameters, as well as four
different image classification datasets.
We find that trained neural networks are more robust to input perturbations
in the vicinity of the training data manifold, as measured by the norm of the
input-output Jacobian of the network, and that it correlates well with
generalization. We further establish that factors associated with poor
generalization -- such as full-batch training or using random labels --
correspond to lower robustness, while factors associated with good
generalization -- such as data augmentation and ReLU non-linearities -- give
rise to more robust functions. Finally, we demonstrate how the input-output
Jacobian norm can be predictive of generalization at the level of individual
test points.
Novak et al. study the relationship between neural network sensitivity and generalization. Here, sensitivity is measured either as the Frobenius norm of the input-output Jacobian of the network’s predicted class probabilities (which does not depend on the true label) or via a coding scheme of the activations. The latter is intended to quantify transitions between the linear regions of the piece-wise linear model: each ReLU unit is assigned $0$ or $1$ depending on whether it is inactive or active. Along a path between two or more input examples, the number of changes in this code estimates how many linear regions have been “traversed”. Both metrics are illustrated in Figure 1, showing that they are low close to training and test examples and within regions of the same class, and high otherwise. The second metric is also illustrated in Figure 2. The authors show that both metrics correlate with the generalization gap, suggesting that the sensitivity of the network and its generalization performance are inherently connected.
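The Jacobian-norm metric is easy to sketch. The snippet below is a minimal illustration, not the authors' code: it assumes a toy fully-connected ReLU classifier `net` with made-up parameter shapes and computes the Frobenius norm of the Jacobian of the predicted probabilities with respect to a single input point.

```python
# Minimal sketch of the Jacobian sensitivity metric (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

def net(params, x):
    """Toy fully-connected ReLU classifier returning class probabilities."""
    w1, b1, w2, b2 = params
    h = jax.nn.relu(x @ w1 + b1)
    return jax.nn.softmax(h @ w2 + b2)

def jacobian_frobenius_norm(params, x):
    """||J||_F with J_ij = d p_i / d x_j; independent of the true label."""
    jac = jax.jacobian(net, argnums=1)(params, x)  # shape (num_classes, input_dim)
    return jnp.sqrt(jnp.sum(jac ** 2))

# Example with random (untrained) parameters and a random "test point".
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (784, 128)) * 0.05, jnp.zeros(128),
          jax.random.normal(k2, (128, 10)) * 0.05, jnp.zeros(10))
x = jax.random.normal(k3, (784,))
print(jacobian_frobenius_norm(params, x))
```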
https://i.imgur.com/iRt3ADe.jpg
Figure 1: For a network trained on MNIST, illustrations of a possible trajectory (left) and the corresponding sensitivity metrics (middle and right). I refer to the paper for details.
https://i.imgur.com/0G8su3K.jpg
Figure 2: Linear regions for a random 2-dimensional slice of the pre-logit space before and after training.
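The transition metric can be approximated in a similarly compact way. The sketch below is a simplified assumption rather than the paper's implementation: it binarizes the hidden ReLU activation pattern at points sampled along a straight line between two inputs (the paper uses elliptical trajectories through data points) and counts how many units flip between consecutive samples, a proxy for the number of linear-region boundaries crossed. Parameter shapes match the toy model above.

```python
# Simplified sketch of the activation-coding / transition metric (illustrative only).
import jax
import jax.numpy as jnp

def activation_code(params, x):
    """Binary code of the hidden ReLU units (1 if active, 0 otherwise)."""
    w1, b1, _, _ = params
    return (x @ w1 + b1 > 0).astype(jnp.int32)

def transitions_along_path(params, x_start, x_end, num_steps=256):
    """Count hidden-unit flips along a straight line from x_start to x_end."""
    ts = jnp.linspace(0.0, 1.0, num_steps)
    points = (1 - ts)[:, None] * x_start[None, :] + ts[:, None] * x_end[None, :]
    codes = jax.vmap(lambda p: activation_code(params, p))(points)
    return jnp.sum(jnp.abs(jnp.diff(codes, axis=0)))

# Usage with illustrative random parameters of the same shapes as above.
key = jax.random.PRNGKey(1)
k1, k2, k3, k4 = jax.random.split(key, 4)
params = (jax.random.normal(k1, (784, 128)) * 0.05, jnp.zeros(128),
          jax.random.normal(k2, (128, 10)) * 0.05, jnp.zeros(10))
x0, x1 = jax.random.normal(k3, (784,)), jax.random.normal(k4, (784,))
print(transitions_along_path(params, x0, x1))
```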
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).