First published: 2017/04/13 Abstract: Generative adversarial networks (GANs) are highly effective unsupervised
learning frameworks that can generate very sharp data, even for data such as
images with complex, highly multimodal distributions. However, GANs are known
to be very hard to train, suffering from problems such as mode collapse and
disturbing visual artifacts. Batch normalization (BN) techniques have been
introduced to address this training difficulty. However, although BN accelerates
training at first, our experiments show that its use can be unstable and can
degrade the quality of the trained model. The evaluation
of BN and numerous other recent schemes for improving GAN training is hindered
by the lack of an effective objective quality measure for GAN models. To
address these issues, we first introduce a weight normalization (WN) approach
for GAN training that significantly improves the stability, efficiency, and
quality of the generated samples. To allow a methodical evaluation, we also
introduce a new objective measure, based on squared Euclidean reconstruction
error, that assesses training performance in terms of speed, stability, and
quality of generated samples. Our experiments indicate that training with WN
is generally superior to BN for GANs. We provide statistical evidence on
commonly used datasets (CelebA, LSUN, and CIFAR-10) that WN achieves 10% lower
mean squared reconstruction loss and significantly better qualitative results
than BN.
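
The abstract does not spell out the WN formulation. As a point of reference, below is a minimal sketch of the standard weight normalization reparameterization w = g · v / ||v|| (Salimans & Kingma, 2016), which this line of work builds on; the `WNLinear` layer and its initialization are illustrative assumptions, not the authors' exact GAN-specific variant.

```python
# Minimal sketch of a weight-normalized linear layer, assuming the standard
# reparameterization w = g * v / ||v|| (Salimans & Kingma, 2016). This is a
# generic illustration, not necessarily the paper's exact WN variant for GANs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WNLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # v carries the direction of each weight vector; g its length.
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.g = nn.Parameter(torch.ones(out_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Normalize each row of v to unit norm, then rescale by the learned g,
        # decoupling the scale of each weight vector from its direction.
        w = self.g.unsqueeze(1) * F.normalize(self.v, dim=1)
        return F.linear(x, w, self.bias)
```

PyTorch also ships `torch.nn.utils.weight_norm`, which applies this same reparameterization to an existing layer; unlike BN, the normalization depends only on the weights, not on batch statistics, which is the usual argument for its greater stability.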
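The reconstruction-error measure is likewise only named, not specified, in the abstract. A common instantiation, sketched here under that assumption, optimizes a per-image latent code z to minimize ||G(z) - x||^2 over held-out images and reports the mean final error; the function name, optimizer, step count, and learning rate are all illustrative choices, not the paper's protocol.

```python
# Hedged sketch of a squared-Euclidean reconstruction metric for a trained
# generator G: fit a latent code per test image by gradient descent and
# report the mean final squared error. Adam, 500 steps, and lr=0.05 are
# assumptions; the expected shape of z also depends on the generator.
import torch

def reconstruction_error(G, images, z_dim, steps=500, lr=0.05):
    G.eval()
    z = torch.randn(images.size(0), z_dim).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Per-image squared Euclidean distance, averaged over the batch.
        loss = ((G(z) - images) ** 2).flatten(1).sum(dim=1).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((G(z) - images) ** 2).flatten(1).sum(dim=1).mean().item()
```

Under this reading, a lower value means the generator's range covers the test images more closely, which is how the "10% lower mean squared loss" comparison between WN and BN models can be interpreted.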