First published: 2016/05/27 Abstract: Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations.
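The invertible transformations referenced in the abstract are built from affine coupling layers. As a minimal sketch (using NumPy, with toy stand-in functions for the learned scale and translation networks s and t, which in the paper are deep convolutional networks), one layer leaves part of the input unchanged and affinely transforms the rest, so both the inverse and the log-determinant of the Jacobian are exact and cheap:

```python
import numpy as np

def affine_coupling_forward(x, s_fn, t_fn, d):
    """One affine coupling layer: y[:d] = x[:d],
    y[d:] = x[d:] * exp(s(x[:d])) + t(x[:d])."""
    x1, x2 = x[:d], x[d:]
    s, t = s_fn(x1), t_fn(x1)
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    log_det = np.sum(s)  # Jacobian is triangular, so log|det| = sum of s
    return y, log_det

def affine_coupling_inverse(y, s_fn, t_fn, d):
    """Exact inverse; note s_fn and t_fn never need to be inverted."""
    y1, y2 = y[:d], y[d:]
    s, t = s_fn(y1), t_fn(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

# Toy stand-ins for the learned networks (hypothetical, for illustration only).
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
s_fn = lambda x1: np.tanh(W_s @ x1)  # bounded scale for numerical stability
t_fn = lambda x1: W_t @ x1

x = rng.normal(size=4)
y, log_det = affine_coupling_forward(x, s_fn, t_fn, d=2)
x_rec = affine_coupling_inverse(y, s_fn, t_fn, d=2)
assert np.allclose(x, x_rec)  # invertibility holds exactly
```

Stacking such layers (alternating which coordinates pass through unchanged) and summing their log-determinants is what yields the exact log-likelihood, sampling, and inference the abstract claims.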
Just a brief comment about your summary:
This is indeed excellent work, but contrary to what you seem to say, the basic ideas behind this framework were already present in previous work, notably Laurent Dinh et al.'s earlier and closely related model, dubbed NICE (arXiv 2014, ICLR 2015). Real NVP extends the building blocks and incorporates more recent techniques (batch normalization and residual networks) in a way that delivers impressive performance, but your review made it sound as if the basic framework were completely new. NICE itself builds on a long series of attempts to exploit the change of variables formula for density estimation using neural networks, including in my thesis and in ICA...
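For reference, the change of variables formula mentioned above is the common foundation of this line of work: given an invertible, differentiable map $f$ taking data $x$ to latents $z = f(x)$ with a simple prior $p_Z$, the exact data density is

```latex
\log p_X(x) \;=\; \log p_Z\big(f(x)\big) \;+\; \log\left|\det \frac{\partial f(x)}{\partial x}\right|
```

NICE and real NVP differ mainly in how $f$ is parameterized so that the Jacobian determinant stays tractable.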