Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals
arXiv e-Print archive - 2016
Keywords: cs.LG
First published: 2016/11/10
Abstract: Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction showing
that simple depth two neural networks already have perfect finite sample
expressivity as soon as the number of parameters exceeds the number of data
points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models.
_Objective:_ Theoretical and empirical study of deep neural networks, their expressivity, and the role of regularization in generalization.
## Results:
The key findings of the article are:
### A. Deep neural networks easily fit random labels.
This holds whether the labels are randomized, the images are replaced with raw noise, or any mixture of the two; a sketch of the randomization experiment follows the list below.
1. The effective capacity of neural networks is sufficient for memorizing the entire data set.
2. Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.
3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.
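Here is a minimal sketch of the label-randomization experiment, assuming PyTorch and torchvision are available. The small CNN is only a stand-in for the Inception/AlexNet-style architectures used in the paper, and all hyperparameters are illustrative.

```python
# Minimal sketch of the label-randomization experiment: train a small CNN on
# CIFAR-10 with every label replaced by a uniformly random class.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                          transform=T.ToTensor())

# Corruption: replace every label with a random class; the images stay intact.
g = torch.Generator().manual_seed(0)
train_set.targets = torch.randint(0, 10, (len(train_set),), generator=g).tolist()

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 10),
)

loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):          # with enough epochs, training accuracy approaches 100%
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```

The same loop with the original labels, or with the images replaced by Gaussian noise, reproduces the in-between cases the paper studies.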
### B. Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error.
By explicit regularization the authors mean techniques such as weight decay, dropout, and data augmentation; batch normalization and early stopping are discussed separately as implicit regularizers.
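For concreteness, a hedged sketch of where each of these knobs enters a PyTorch training setup; the flags, layer sizes, and values are hypothetical and only meant to show what gets switched on or off in the paper's ablations.

```python
# Illustrative toggles for the explicit regularizers: weight decay, dropout,
# and data augmentation (names and values are illustrative, not the paper's).
import torch
import torch.nn as nn
import torchvision.transforms as T

use_weight_decay, use_dropout, use_augmentation = True, True, True

# Dropout enters through the model definition.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Dropout(p=0.5) if use_dropout else nn.Identity(),
    nn.Linear(64 * 32 * 32, 10),
)

# Weight decay enters through the optimizer.
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                      weight_decay=5e-4 if use_weight_decay else 0.0)

# Data augmentation enters through the input pipeline.
transform = T.Compose(
    ([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip()] if use_augmentation else [])
    + [T.ToTensor()]
)
```

The paper's observation is that random-label fitting persists with these turned on, and reasonable generalization on true labels persists with them turned off.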
### C. Generically large neural networks can express any labeling of the training data.
More formally, a very simple two-layer ReLU network with `p = 2n + d` parameters can express any labeling of any sample of size `n` in `d` dimensions.
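A NumPy sketch in the spirit of the paper's construction: project the `n` points onto a direction `a` (`d` parameters), place `n` biases between the sorted projections, and solve a triangular system for the `n` output weights, for `2n + d` parameters in total.

```python
# Depth-2 ReLU interpolation sketch: c(x) = sum_j w_j * relu(<a, x> - b_j)
# with d + n + n = 2n + d parameters fits arbitrary labels on n distinct points.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))      # any sample of n points in d dimensions
y = rng.standard_normal(n)           # any labels

a = rng.standard_normal(d)           # random direction: projections distinct a.s.
z = X @ a
order = np.argsort(z)
X, y, z = X[order], y[order], z[order]

b = np.empty(n)                      # biases interleaved with the sorted projections
b[0] = z[0] - 1.0
b[1:] = (z[:-1] + z[1:]) / 2.0

A = np.maximum(z[:, None] - b[None, :], 0.0)   # lower triangular, positive diagonal
w = np.linalg.solve(A, y)                      # exact interpolation of the labels

pred = np.maximum((X @ a)[:, None] - b[None, :], 0.0) @ w
assert np.allclose(pred, y)
```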
### D. The optimization algorithm itself is implicitly regularizing the solution.
SGD acts as an implicit regularizer: in the linear case it converges to the minimum-norm solution, and models trained with SGD inherit analogous properties.
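The linear, overparameterized case makes this concrete: started from zero, every SGD update is a multiple of some data point, so the iterate stays in the row space of the data and ends up at the minimum-ℓ2-norm interpolant. A minimal NumPy sketch (step size and iteration count are arbitrary choices):

```python
# SGD on an overparameterized linear least-squares problem (n < d), started at zero,
# converges to the minimum-l2-norm interpolant, i.e. pinv(X) @ y.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)
lr = 0.005
for _ in range(50000):
    i = rng.integers(n)                      # pick one sample
    w -= lr * (X[i] @ w - y[i]) * X[i]       # stochastic gradient step on (x_i, y_i)

w_min_norm = np.linalg.pinv(X) @ y           # minimum-norm solution
print(np.linalg.norm(w - w_min_norm))        # should be close to zero
```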