Visualizing the Loss Landscape of Neural Nets
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein
arXiv e-Print archive, 2017
Keywords:
cs.LG, cs.CV, stat.ML
First published: 2017/12/28

Abstract: Neural network training relies on our ability to find "good" minimizers of
highly non-convex loss functions. It is well-known that certain network
architecture designs (e.g., skip connections) produce loss functions that train
easier, and well-chosen training parameters (batch size, learning rate,
optimizer) produce minimizers that generalize better. However, the reasons for
these differences, and their effects on the underlying loss landscape, are not
well understood. In this paper, we explore the structure of neural loss
functions, and the effect of loss landscapes on generalization, using a range
of visualization methods. First, we introduce a simple "filter normalization"
method that helps us visualize loss function curvature and make meaningful
side-by-side comparisons between loss functions. Then, using a variety of
visualizations, we explore how network architecture affects the loss landscape,
and how training parameters affect the shape of minimizers.
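
The core visualization idea is to plot the loss along random directions in weight space after rescaling those directions filter by filter. The sketch below illustrates this with a minimal 1D filter-normalized interpolation in PyTorch; it assumes a trained `model`, a loss `criterion`, and a data batch `(x, y)`, all of which are placeholder names for illustration and not code from the paper.

```python
# Minimal sketch of filter-normalized 1D loss interpolation (assumed names:
# model, criterion, x, y). Draw a random Gaussian direction, rescale each
# filter to match the norm of the corresponding trained filter, then
# evaluate the loss along theta + alpha * d.
import torch

def filter_normalized_direction(model):
    """Random direction with each filter scaled to the norm of the
    corresponding filter in the trained weights."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv / linear weights: normalize per output filter
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:            # biases, BN parameters: match the whole-vector norm
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction

def loss_along_direction(model, criterion, x, y, direction, alphas):
    """Evaluate the loss at theta + alpha * d for each alpha, then restore theta."""
    theta = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            for p, t, d in zip(model.parameters(), theta, direction):
                p.copy_(t + alpha * d)
            losses.append(criterion(model(x), y).item())
        for p, t in zip(model.parameters(), theta):
            p.copy_(t)  # restore the original weights
    return losses
```

A 2D surface, as used for the paper's contour plots, follows the same pattern with two independent filter-normalized directions and a grid of (alpha, beta) coefficients.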