Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
arXiv e-Print archive, 2017
Keywords:
stat.ML, cs.LG
First published: 2017/10/19

Abstract: Bayesian neural networks with latent variables (BNNs+LVs) are scalable and
flexible probabilistic models: They account for uncertainty in the estimation
of the network weights and, by making use of latent variables, they can capture
complex noise patterns in the data. In this work, we show how to separate these
two forms of uncertainty for decision-making purposes. This decomposition
allows us to successfully identify informative points for active learning of
functions with heteroskedastic and bimodal noise. We also demonstrate how this
decomposition allows us to define a novel risk-sensitive reinforcement learning
criterion to identify policies that balance expected cost, model-bias and noise
aversion.
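The separation of weight (epistemic) and noise (aleatoric) uncertainty described in the abstract can be illustrated with the law of total variance: the predictive variance splits into the expected noise variance under the weight posterior plus the variance of the per-weight predictive means. Below is a minimal sketch with synthetic stand-ins for posterior predictive samples (the array shapes and variable names are illustrative, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: W weight samples drawn from the posterior, each
# producing N predictive samples of y at a single input x (e.g. by
# sampling the latent noise variable).
W, N = 50, 200
per_weight_means = rng.normal(0.0, 1.0, size=W)          # stand-in E[y | w]
preds = per_weight_means[:, None] + rng.normal(0.0, 0.5, size=(W, N))

# Law of total variance: Var[y] = E_w[Var[y | w]] + Var_w[E[y | w]]
aleatoric = preds.var(axis=1).mean()   # expected within-weight variance (noise)
epistemic = preds.mean(axis=1).var()   # variance of per-weight means (model uncertainty)
total = preds.reshape(-1).var()        # variance of the pooled predictive samples

# With equal group sizes and population variances (ddof=0), the
# decomposition is exact.
assert np.isclose(total, aleatoric + epistemic)
```

In active learning, points with high epistemic but low aleatoric uncertainty are the informative ones; in risk-sensitive RL, the two terms can be penalized separately, which is the intuition behind the criterion mentioned above.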