First published: 2018/05/24

Abstract: In this paper, we describe the "implicit autoencoder" (IAE), a generative
autoencoder in which both the generative path and the recognition path are
parametrized by implicit distributions. We use two generative adversarial
networks to define the reconstruction and the regularization cost functions of
the implicit autoencoder, and derive the learning rules based on
maximum-likelihood learning. Using implicit distributions allows us to learn
more expressive posterior and conditional likelihood distributions for the
autoencoder. Learning an expressive conditional likelihood distribution enables
the latent code to capture only the abstract, high-level information of the
data, while the remaining information is captured by the implicit conditional
likelihood distribution. For example, we show that implicit autoencoders can
disentangle global and local information, and perform deterministic or
stochastic reconstructions of images. We further show that implicit
autoencoders can disentangle discrete underlying factors of variation from the
continuous factors in an unsupervised fashion, and perform clustering and
semi-supervised learning.
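
To make the two-GAN setup concrete, below is a minimal PyTorch-style sketch of the losses for one training step: an encoder and a decoder that each consume an auxiliary noise input (making both the recognition and generative paths implicit distributions), a reconstruction discriminator that defines the adversarial reconstruction cost, and a regularization discriminator that matches the aggregated posterior to the prior. All module names, layer sizes, and the exact pairings fed to each discriminator are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of an implicit autoencoder (IAE). Sizes, module names,
# and the discriminator input pairings are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(sizes):
    # Small fully connected network with ReLU on the hidden layers.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

x_dim, z_dim, eps_dim, n_dim, h = 784, 8, 8, 8, 256

# Recognition path: implicit posterior q(z|x), a deterministic network
# of the data x and an auxiliary noise source eps.
encoder = mlp([x_dim + eps_dim, h, z_dim])

# Generative path: implicit conditional likelihood p(x|z), a deterministic
# network of the code z and a second noise source n, so residual (local)
# information can be carried by n rather than by z.
decoder = mlp([z_dim + n_dim, h, x_dim])

# Reconstruction GAN: its discriminator compares data pairs (x, z)
# against reconstruction pairs (x_hat, z).
d_recon = mlp([x_dim + z_dim, h, 1])

# Regularization GAN: its discriminator compares codes drawn from the
# aggregated posterior q(z) against codes drawn from the prior p(z).
d_reg = mlp([z_dim, h, 1])

bce = nn.BCEWithLogitsLoss()

def iae_losses(x):
    b = x.size(0)
    z = encoder(torch.cat([x, torch.randn(b, eps_dim)], dim=1))
    x_hat = decoder(torch.cat([z, torch.randn(b, n_dim)], dim=1))

    # Discriminator targets: real pairs/codes -> 1, generated -> 0.
    real_pair = torch.cat([x, z], dim=1)
    fake_pair = torch.cat([x_hat, z], dim=1)
    d_recon_loss = (bce(d_recon(real_pair.detach()), torch.ones(b, 1)) +
                    bce(d_recon(fake_pair.detach()), torch.zeros(b, 1)))

    z_prior = torch.randn(b, z_dim)
    d_reg_loss = (bce(d_reg(z_prior), torch.ones(b, 1)) +
                  bce(d_reg(z.detach()), torch.zeros(b, 1)))

    # Autoencoder loss: fool both discriminators (adversarial
    # reconstruction cost + adversarial regularization cost).
    ae_loss = (bce(d_recon(fake_pair), torch.ones(b, 1)) +
               bce(d_reg(z), torch.ones(b, 1)))
    return d_recon_loss, d_reg_loss, ae_loss
```

In practice the three losses would be minimized with separate optimizers, alternating discriminator and autoencoder updates as in standard GAN training; conditioning the reconstruction discriminator on z is what lets the code keep only global information while the decoder noise absorbs the rest.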