Learning Confidence for Out-of-Distribution Detection in Neural Networks
Terrance DeVries and Graham W. Taylor
arXiv e-Print archive, 2018
Keywords:
stat.ML, cs.LG
First published: 2018/02/13
Abstract: Modern neural networks are very powerful predictive models, but they are
often incapable of recognizing when their predictions may be wrong. Closely
related to this is the task of out-of-distribution detection, where a network
must determine whether or not an input is outside of the set on which it is
expected to safely perform. To jointly address these issues, we propose a
method of learning confidence estimates for neural networks that is simple to
implement and produces intuitively interpretable outputs. We demonstrate that
on the task of out-of-distribution detection, our technique surpasses recently
proposed techniques which construct confidence based on the network's output
distribution, without requiring any additional labels or access to
out-of-distribution examples. Additionally, we address the problem of
calibrating out-of-distribution detectors, where we demonstrate that
misclassified in-distribution examples can be used as a proxy for
out-of-distribution examples.
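
The abstract does not spell out the training mechanism, so the following is only a minimal PyTorch sketch of one way a learned confidence estimate could be attached to a classifier: a second scalar head outputs a confidence c, the predicted class distribution is interpolated toward the one-hot target in proportion to (1 - c), and a -log(c) penalty discourages the network from always claiming low confidence. The ConfidenceNet module, the confidence_loss function, and the weight lam are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    """Classifier with an extra scalar confidence head (illustrative)."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, num_classes)  # class logits
        self.confidence = nn.Linear(hidden, 1)            # scalar confidence

    def forward(self, x):
        h = self.encoder(x)
        logits = self.classifier(h)
        conf = torch.sigmoid(self.confidence(h))          # c in (0, 1)
        return logits, conf

def confidence_loss(logits, conf, targets, lam=0.1):
    """Hypothetical objective: mix the predicted distribution with the
    one-hot target in proportion to (1 - c), then penalize low confidence."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, probs.size(1)).float()
    mixed = conf * probs + (1.0 - conf) * onehot
    nll = F.nll_loss(torch.log(mixed + 1e-12), targets)
    penalty = -torch.log(conf + 1e-12).mean()
    return nll + lam * penalty

if __name__ == "__main__":
    model = ConfidenceNet()
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    logits, conf = model(x)
    loss = confidence_loss(logits, conf, y)
    loss.backward()
    # At test time, thresholding `conf` gives a simple OOD score.
    print(f"loss={loss.item():.4f}, mean confidence={conf.mean().item():.3f}")

At evaluation time, thresholding the confidence output yields an out-of-distribution detector; following the abstract, misclassified in-distribution examples can stand in for out-of-distribution data when tuning that threshold.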