Sensitivity and Generalization in Neural Networks: an Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
arXiv e-Print archive, 2018
Keywords:
stat.ML, cs.AI, cs.LG, cs.NE
First published: 2018/02/23
Abstract:
In practice it is often found that large over-parameterized neural networks
generalize better than their smaller counterparts, an observation that appears
to conflict with classical notions of function complexity, which typically
favor smaller models. In this work, we investigate this tension between
complexity and generalization through an extensive empirical exploration of two
natural metrics of complexity related to sensitivity to input perturbations.
Our experiments survey thousands of models with various fully-connected
architectures, optimizers, and other hyper-parameters, as well as four
different image classification datasets.
We find that trained neural networks are more robust to input perturbations
in the vicinity of the training data manifold, as measured by the norm of the
input-output Jacobian of the network, and that this robustness correlates well
with generalization. We further establish that factors associated with poor
generalization, such as full-batch training or using random labels,
correspond to lower robustness, while factors associated with good
generalization, such as data augmentation and ReLU non-linearities, give
rise to more robust functions. Finally, we demonstrate how the input-output
Jacobian norm can be predictive of generalization at the level of individual
test points.
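
The central quantity in the abstract, the norm of the input-output Jacobian, is straightforward to estimate at a single input point with automatic differentiation. Below is a minimal sketch (not code from the paper; the network architecture, initialization, and function names are illustrative assumptions) of computing the Frobenius norm of the Jacobian of a small fully-connected ReLU network in JAX:

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Fully-connected ReLU network; depth and width here are illustrative only.
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b  # class logits

def jacobian_frobenius_norm(params, x):
    # J has shape (num_classes, input_dim): d(logits) / d(input), evaluated at x.
    J = jax.jacobian(lambda inp: mlp(params, inp))(x)
    return jnp.sqrt(jnp.sum(J ** 2))

# Toy usage with random weights; in the paper's setting one would average this
# quantity over trained networks and over test points near the data manifold.
key = jax.random.PRNGKey(0)
sizes = [784, 256, 256, 10]
params = []
for m, n in zip(sizes[:-1], sizes[1:]):
    key, wk = jax.random.split(key)
    params.append((jax.random.normal(wk, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
x = jax.random.normal(key, (784,))
print(jacobian_frobenius_norm(params, x))
```

Evaluating this norm at individual test points, rather than as a single global statistic, is what lets the paper use it as a per-point predictor of generalization.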