Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, Geoffrey Hinton
arXiv e-Print archive - 2017
Keywords:
cs.NE, cs.LG
First published: 2017/01/23
Abstract: We systematically explore regularizing neural networks by penalizing low
entropy output distributions. We show that penalizing low entropy output
distributions, which has been shown to improve exploration in reinforcement
learning, acts as a strong regularizer in supervised learning. Furthermore, we
connect a maximum entropy based confidence penalty to label smoothing through
the direction of the KL divergence. We exhaustively evaluate the proposed
confidence penalty and label smoothing on 6 common benchmarks: image
classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine
translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ).
We find that both label smoothing and the confidence penalty improve
state-of-the-art models across benchmarks without modifying existing
hyperparameters, suggesting the wide applicability of these regularizers.
Pereyra et al. propose an entropy regularizer for penalizing over-confident predictions of deep neural networks. Specifically, given the predicted distribution $p_\theta(y|x)$ over labels $y$ for an input $x$ and network parameters $\theta$, the regularizer
$-\beta \max(0, \Gamma - H(p_\theta(y|x)))$
is added to the log-likelihood objective. Here, $H$ denotes the entropy, and the hyper-parameters $\beta$ and $\Gamma$ weight the regularizer and limit its influence to output distributions whose entropy falls below the threshold $\Gamma$. In experiments, this regularizer yielded slightly improved performance on MNIST and Cifar-10.
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).