First published: 2018/10/01

Abstract: In spite of remarkable progress in deep latent variable generative modeling,
training still remains a challenge due to a combination of optimization and
generalization issues. In practice, heuristics such as hand-crafted annealing of KL terms are often used to achieve the
desired results, but such solutions are not robust to changes in model
architecture or dataset. The best settings can often vary dramatically from one
problem to another, which requires expensive parameter sweeps for each new case. Here we build on the idea of training VAEs with additional
constraints as a way to control their behaviour. We first present a detailed
theoretical analysis of constrained VAEs, expanding our understanding of how
these models work. We then introduce and analyze a practical algorithm termed
Generalized ELBO with Constrained Optimization (GECO). The main advantage of
GECO for the machine learning practitioner is a more intuitive, yet principled,
process of tuning the loss. This involves defining a set of constraints,
which typically have an explicit relation to the desired model performance, in
contrast to tweaking abstract hyper-parameters which implicitly affect the
model behaviour. Encouraging experimental results on several standard datasets
indicate that GECO is a very robust and effective tool to balance
reconstruction and compression constraints.
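
As a rough illustration (not code from the paper), the sketch below shows one way such a constrained update could look in PyTorch: the model is trained to minimize the KL term subject to a squared-error reconstruction constraint with tolerance kappa, while a Lagrange multiplier is adapted multiplicatively. The function name geco_step, the state dictionary, the moving-average rate alpha, and the multiplier rate nu are all illustrative assumptions, not the paper's exact specification.

    import torch

    def geco_step(x, x_hat, kl, state, kappa=0.1, alpha=0.99, nu=1e-2):
        # Constraint C <= 0: mean squared reconstruction error must not
        # exceed the tolerance kappa**2 chosen by the practitioner.
        constraint = ((x - x_hat) ** 2).mean() - kappa ** 2
        # Smooth the noisy minibatch constraint with a moving average.
        c = constraint.detach()
        if state["c_ma"] is None:
            state["c_ma"] = c
        else:
            state["c_ma"] = alpha * state["c_ma"] + (1 - alpha) * c
        # Lagrangian: minimize KL subject to the reconstruction constraint.
        loss = kl + state["lambda"] * constraint
        # Multiplicative ascent on the multiplier: it grows while the
        # constraint is violated (positive) and shrinks once satisfied.
        state["lambda"] = state["lambda"] * torch.exp(nu * state["c_ma"])
        return loss

    # Usage: state = {"lambda": torch.tensor(1.0), "c_ma": None}; the
    # returned loss is minimized with the usual optimizer step.

The practitioner-facing knob here is kappa, a reconstruction tolerance with a direct interpretation in data space, rather than an abstract weight on the KL term.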