Least Squares Generative Adversarial Networks
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, Stephen Paul Smolley
arXiv e-Print archive, 2016
Keywords: cs.CV
First published: 2016/11/13

Abstract: Unsupervised learning with generative adversarial networks (GANs) has proven
hugely successful. Regular GANs formulate the discriminator as a classifier
with the sigmoid cross-entropy loss function. However, we found that this loss
function may lead to the vanishing gradients problem during the learning
process. To overcome such a problem, we propose in this paper the Least Squares
Generative Adversarial Networks (LSGANs) which adopt the least squares loss
function for the discriminator. We show that minimizing the LSGAN objective
is equivalent to minimizing the Pearson $\chi^2$ divergence. LSGANs offer two
benefits over regular GANs. First, LSGANs generate higher-quality images than
regular GANs. Second, LSGANs are more stable during the learning process. We
evaluate LSGANs on five scene datasets, and the
experimental results show that the images generated by LSGANs are of better
quality than the ones generated by regular GANs. We also conduct two comparison
experiments between LSGANs and regular GANs to illustrate the stability of
LSGANs.
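The least squares loss the abstract describes can be sketched in a few lines. This is a minimal illustration, not code from the paper: the function names are invented here, and the label coding (fake target 0, real target 1, and generator target 1) is one common choice for the paper's a-b-c parameterization.

```python
def mean(xs):
    """Arithmetic mean of a non-empty list of floats."""
    return sum(xs) / len(xs)

def d_loss_lsgan(d_real, d_fake, a=0.0, b=1.0):
    """Least squares discriminator loss: push D's outputs on real samples
    toward the real label b and its outputs on fake samples toward the
    fake label a (illustrative a=0, b=1 coding)."""
    return (0.5 * mean([(x - b) ** 2 for x in d_real])
            + 0.5 * mean([(x - a) ** 2 for x in d_fake]))

def g_loss_lsgan(d_fake, c=1.0):
    """Least squares generator loss: push D's outputs on generated
    samples toward the value c the generator wants (here c=1)."""
    return 0.5 * mean([(x - c) ** 2 for x in d_fake])

# A generated sample that D confidently scores as fake (output 0.0) still
# produces a large quadratic generator loss with a non-zero gradient, unlike
# a saturated sigmoid cross-entropy loss.
print(d_loss_lsgan([1.0, 1.0], [0.0, 0.0]))  # perfect discriminator: 0.0
print(g_loss_lsgan([0.0]))                   # confidently-fake sample: 0.5
```

The quadratic penalty is what distinguishes LSGAN from the regular GAN: samples far from the target on either side contribute a gradient proportional to their distance, which is the intuition behind the improved training stability the abstract claims.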