[link]
This paper asks what, and how, CNNs actually learn, given that they usually have more trainable parameters than the data points they are trained on. When the authors write "deep neural networks", they mean Inception V3, AlexNet and MLPs.

## Key contributions

* Deep neural networks easily fit random labels, achieving a training error of 0 and a test error no better than random guessing, as expected. $\Rightarrow$ These architectures can simply brute-force memorize the training data.
* Deep neural networks fit random images (e.g. Gaussian noise) with 0 training error. The authors conclude that VC dimension, Rademacher complexity and uniform stability are poor explanations for the generalization capabilities of neural networks.
* The authors give a construction for a 2-layer network with $p = 2n + d$ parameters - where $n$ is the number of samples and $d$ is the dimension of each sample - which can fit any labeling (finite-sample expressivity). See section 4.

## What I learned

* Any measure $m$ of the generalization capability of classifiers $H$ should take the fraction of corrupted labels ($p_c \in [0, 1]$, where $p_c = 0$ is a perfect labeling and $p_c = 1$ is totally random) into account: if $p_c = 1$, then $m$ should be 0 as well, since it is impossible to learn anything meaningful from totally random labels.
* We seem to have built models which work well on image data in general, not only on "natural" / meaningful images as we thought.

## Funny

> deep neural nets remain mysterious for many reasons

> Note that this is not exactly simple as the kernel matrix requires 30GB to store in memory. Nonetheless, this system can be solved in under 3 minutes on a commodity workstation with 24 cores and 256 GB of RAM with a conventional LAPACK call.

## See also

* [Deep Nets Don't Learn Via Memorization](https://openreview.net/pdf?id=rJv6ZgHYg)
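A minimal sketch of the randomization test from the first key contribution, assuming a small over-parameterized MLP on synthetic data rather than Inception/AlexNet on real images; all names and hyperparameters here are illustrative, not from the paper's code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for the experiment: n points with purely random labels.
n, d, k = 256, 64, 10
X = torch.randn(n, d)
y = torch.randint(0, k, (n,))          # labels carry no information about X

# Over-parameterized MLP: ~38k weights for only 256 training points.
model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, k))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3000):               # full-batch training
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

train_acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on random labels: {train_acc:.2f}")  # expected to approach 1.00
```

Nothing about the random labels can generalize, so any test accuracy stays at chance level; the point is only that the training error can be driven to zero.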
[link]
The authors investigate the generalisation properties of several well-known image recognition networks.

![](https://i.imgur.com/km0mrVs.png)

They show that these networks are able to overfit to the training set with 100% accuracy even if the labels on the images are random, or if the pixels are randomly generated. Regularisation, such as weight decay and dropout, doesn't stop overfitting as much as expected, still resulting in ~90% accuracy on random training data. They then argue that these models likely make use of massive memorization, in combination with learning low-complexity patterns, in order to perform well on these tasks.
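A rough sketch of the two data corruptions described above (random labels and randomly generated pixels); the function names are illustrative, not from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_labels(y, num_classes, p=1.0):
    """Replace each label with a uniformly random class with probability p
    (p=1.0 gives the fully random-label setting)."""
    y = y.copy()
    mask = rng.random(len(y)) < p
    y[mask] = rng.integers(0, num_classes, size=int(mask.sum()))
    return y

def gaussian_pixels(x):
    """Replace every image with Gaussian noise matching the dataset's
    overall mean and standard deviation (the 'random pixels' setting)."""
    return rng.normal(loc=x.mean(), scale=x.std(), size=x.shape)

# Example with dummy data shaped like CIFAR-10 (50k 32x32 RGB images).
images = rng.random((50000, 32, 32, 3)).astype(np.float32)
labels = rng.integers(0, 10, size=50000)
noisy_labels = randomize_labels(labels, num_classes=10, p=1.0)
noise_images = gaussian_pixels(images)
```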
[link]
_Objective:_ Theoretical study of deep neural networks, their expressivity and regularization.

## Results:

The key findings of the article are:

### A. Deep neural networks easily fit random labels.

This holds when randomizing labels, when replacing images with raw noise, and in all situations in between.

1. The effective capacity of neural networks is sufficient for memorizing the entire data set.
2. Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.
3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.

### B. Explicit regularization may improve generalization performance, but is neither necessary nor by itself sufficient for controlling generalization error.

By explicit regularization they mean batch normalisation, weight decay, dropout, data augmentation, etc.

### C. Generically large neural networks can express any labeling of the training data.

More formally, a very simple two-layer ReLU network with `p = 2n + d` parameters can express any labeling of any sample of size `n` in `d` dimensions (see the construction sketched below).

### D. The optimization algorithm itself is implicitly regularizing the solution.

SGD acts as an implicit regularizer, and its properties are inherited by models that were trained using SGD.
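A NumPy sketch of the idea behind result C, assuming the standard construction for this kind of finite-sample expressivity lemma: project all inputs onto one random direction, interleave the `n` ReLU biases with the projected values so the hidden-activation matrix is a (row-permuted) triangular matrix, and solve a linear system for the `n` output weights. Function names are illustrative.

```python
import numpy as np

def fit_two_layer_relu(X, y, seed=0):
    """Build a two-layer ReLU network with exactly 2n + d parameters
    (a in R^d, b in R^n, w in R^n) that fits the labels y on X exactly."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    a = rng.normal(size=d)                 # d parameters: shared input projection
    z = X @ a                              # distinct with probability 1 for generic X
    z_sorted = np.sort(z)
    b = np.empty(n)                        # n biases interleaved with the sorted projections
    b[0] = z_sorted[0] - 1.0
    b[1:] = (z_sorted[:-1] + z_sorted[1:]) / 2.0
    H = np.maximum(z[:, None] - b[None, :], 0.0)   # (n, n), row-permuted lower-triangular
    w = np.linalg.solve(H, y)              # n output weights giving an exact fit
    return a, b, w

def predict(a, b, w, X):
    return np.maximum((X @ a)[:, None] - b[None, :], 0.0) @ w

# Demo: 50 random points in 10 dimensions with arbitrary real-valued labels.
X = np.random.default_rng(1).normal(size=(50, 10))
y = np.random.default_rng(2).normal(size=50)
a, b, w = fit_two_layer_relu(X, y)
print(np.max(np.abs(predict(a, b, w, X) - y)))     # ~0 up to numerical error
```

The parameter count is `d` (projection) + `n` (biases) + `n` (output weights) = `2n + d`, matching the statement in C.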
[link]
## Summary

The broad goal of this paper is to understand how a neural network learns the underlying distribution of the input data, and which properties of the network describe its generalization power. Previous literature tries to use statistical measures like Rademacher complexity, uniform stability and VC dimension to explain the generalization error of a model. These methods explain generalization in terms of the number of parameters in the model along with the applied regularization. The experiments in Section 2 of the paper show that the learning capacity of a CNN cannot be sufficiently explained by traditional statistical learning theory. Even the effect of different regularization strategies in CNNs is shown to be potentially unrelated to the generalization error, which contradicts the theory behind VC dimension. The experiments show that the model reaches zero training error on random labels and on inputs with varying amounts of Gaussian noise. When the authors gradually increase the noise in the inputs, the generalization error gradually increases while the training error still reaches zero. The authors conclude that big networks are able to completely memorize the entire dataset.

## Personal Thoughts

1) Firstly, we need a new theory to explain why and how a CNN memorizes the inputs and generalizes to new data. Since the paper shows that regularization doesn't have much effect on generalization for big networks, maybe the network is actually memorizing the whole input space. But the memorization is very strategic, in the sense that only the inputs (e.g. noise) where no simple underlying features are found are completely memorized, unlike inputs with a stronger signal where patterns can be found. This may explain the discrepancy in the number of training steps between 'true labels' and noisy inputs in [Figure 1a]; see the sketch after the references. My very general understanding of the Information Bottleneck Hypothesis [4] is that networks compress noisy input data as much as possible while preserving important information. A network takes more time to compress noise than strong signals in images. This may give some intuition about the learning process taking place.

2) A CNN is highly non-linear, has millions of parameters and a very complex loss landscape. There might be multiple minima, and we need a theory to explain which of these minima gives the best generalization. Unfortunately, the workings of SGD are still a black box and very difficult to characterize. There are many interesting phenomena, like adversarial attacks, the effect of the optimizer on the weights found (Daniel Jiwoong et al., 2016) and the actual understanding of non-linearity in CNNs (Ian J. Goodfellow et al., 2015), that all point to gaps in our overall understanding of very high dimensional manifolds. This requires rigorous experimentation to study and understand the effects of the network architecture, the optimizer and the actual input (Nitish Shirish et al., 2017) on generalization independently.

## References

1. Im, Daniel Jiwoong et al. "An empirical analysis of the optimization of deep network loss surfaces." arXiv preprint (2016).
2. Goodfellow, Ian J. and Oriol Vinyals. "Qualitatively characterizing neural network optimization problems." arXiv abs/1412.6544 (2015).
3. Keskar, Nitish Shirish et al. "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima." arXiv abs/1609.04836 (2017).
4. https://www.youtube.com/watch?v=XL07WEc2TRI
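As a rough illustration of the training-step discrepancy mentioned in Personal Thoughts (1), a minimal sketch assuming a toy setup with a small MLP and a synthetic linear "teacher" in place of real image data; names and hyperparameters are illustrative, not from the paper:

```python
import torch
import torch.nn as nn

def steps_to_memorize(X, y, max_steps=10000):
    """Train a small MLP (full batch) until it classifies (X, y) perfectly;
    return the number of steps that took."""
    model = nn.Sequential(nn.Linear(X.shape[1], 512), nn.ReLU(), nn.Linear(512, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(1, max_steps + 1):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        if (model(X).argmax(dim=1) == y).all():
            return step
    return max_steps

torch.manual_seed(0)
n, d = 512, 64
X = torch.randn(n, d)
teacher = nn.Linear(d, 10)                      # fixed "true" labelling function
y_true = teacher(X).argmax(dim=1).detach()      # structured, learnable labels
y_rand = torch.randint(0, 10, (n,))             # purely random labels

# Random labels typically need more steps, but both runs are expected to
# reach zero training error in this over-parameterized setting.
print("steps to fit true labels:  ", steps_to_memorize(X, y_true))
print("steps to fit random labels:", steps_to_memorize(X, y_rand))
```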