On the importance of single directions for generalization
Ari S. Morcos, David G. T. Barrett, Neil C. Rabinowitz, Matthew Botvinick
arXiv e-Print archive - 2018
Keywords:
stat.ML, cs.AI, cs.LG, cs.NE
First published: 2018/03/19
Abstract: Despite their ability to memorize large datasets, deep neural networks often
achieve good generalization performance. However, the differences between the
learned solutions of networks which generalize and those which do not remain
unclear. Additionally, the tuning properties of single directions (defined as
the activation of a single unit or some linear combination of units in response
to some input) have been highlighted, but their importance has not been
evaluated. Here, we connect these lines of inquiry to demonstrate that a
network's reliance on single directions is a good predictor of its
generalization performance, across networks trained on datasets with different
fractions of corrupted labels, across ensembles of networks trained on datasets
with unmodified labels, across different hyperparameters, and over the course
of training. While dropout only regularizes this quantity up to a point, batch
normalization implicitly discourages single direction reliance, in part by
decreasing the class selectivity of individual units. Finally, we find that
class selectivity is a poor predictor of task importance, suggesting not only
that networks which generalize well minimize their dependence on individual
units by reducing their selectivity, but also that individually selective units
may not be necessary for strong network performance.
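To make the two central quantities concrete, below is a minimal sketch (not the authors' code) of how one might probe a network's reliance on single directions by ablating individual units and how a class selectivity index can be computed from class-conditional mean activations. The model, layer, and data loader are hypothetical placeholders, and the exact ablation and selectivity definitions here are assumptions for illustration rather than a reproduction of the paper's experimental setup.

```python
# Sketch of (1) single-unit ablation to probe reliance on single directions and
# (2) a class selectivity index from class-conditional mean activations.
# Assumes a PyTorch classifier; all names below are illustrative placeholders.

import torch


def accuracy(model, loader, device="cpu"):
    """Plain top-1 accuracy over a data loader."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)


def ablate_unit(module, unit_idx):
    """Forward hook that zeroes one unit (one 'direction') in a layer's output."""
    def hook(_module, _inputs, output):
        output = output.clone()
        output[:, unit_idx] = 0.0  # assumes the unit/channel axis is dim 1
        return output
    return module.register_forward_hook(hook)


def ablation_curve(model, layer, loader, n_units, device="cpu"):
    """Accuracy after ablating each unit individually; large drops from single
    ablations indicate stronger reliance on single directions."""
    accs = []
    for i in range(n_units):
        handle = ablate_unit(layer, i)
        accs.append(accuracy(model, loader, device))
        handle.remove()
    return accs


def class_selectivity(mean_acts, eps=1e-12):
    """Selectivity index per unit from class-conditional mean activations.

    mean_acts: tensor of shape (n_classes, n_units), the mean activation of
    each unit for each class. Returns (mu_max - mu_rest) / (mu_max + mu_rest),
    where mu_rest averages over all non-preferred classes; values near 1 mean
    a unit responds almost exclusively to one class, near 0 means no preference.
    """
    n_classes = mean_acts.shape[0]
    mu_max, _ = mean_acts.max(dim=0)
    mu_rest = (mean_acts.sum(dim=0) - mu_max) / (n_classes - 1)
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)
```

Under this sketch, the paper's observations would correspond to a flatter ablation curve (accuracy degrades gracefully as units are removed) for networks that generalize well, and to the ablation-induced accuracy drop of a unit being only weakly related to its selectivity index.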