First published: 2017/04/25 (7 years ago) Abstract: We study unsupervised learning by developing introspective generative
modeling (IGM) that attains a generator using progressively learned deep
convolutional neural networks. The generator is itself a discriminator, capable
of introspection: being able to self-evaluate the difference between its
generated samples and the given training data. When followed by repeated
discriminative learning, desirable properties of modern discriminative
classifiers are directly inherited by the generator. IGM learns a cascade of
CNN classifiers using a synthesis-by-classification algorithm. In the
experiments, we observe encouraging results on a number of applications
including texture modeling, artistic style transferring, face modeling, and
semi-supervised learning.
This work takes a different approach from the GAN model \cite{1406.2661}. In the traditional GAN setup, a neural network is trained to up-sample random noise in a single feed-forward pass to generate samples from the data distribution.
This work instead iteratively perturbs an image of random noise, similar to Artistic Style Transfer \cite{1508.06576}. The image is perturbed so as to fool a set of discriminators. The set of discriminators is obtained by training them in sequence, starting from random noise, up to some maximum step $t$:
1. First, a discriminator is trained to discriminate between the true data and random noise.
2. Images are then perturbed using gradients that aim to fool the discriminator, and are added to the training set as negative ("pseudo-negative") examples.
3. The next discriminator is trained on the true data + random noise + fake images from the previous steps.
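The loop above can be sketched with a toy stand-in. This is a minimal illustration, not the paper's CNN architecture: a logistic-regression "discriminator" on small vectors plays the role of each CNN classifier, and the `template`, step counts, and learning rates are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_discriminator(pos, neg, steps=200, lr=0.5):
    """Logistic-regression stand-in for a CNN classifier:
    label 1 = real data, label 0 = negatives (noise + earlier fakes)."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)   # full-batch gradient descent
        b -= lr * np.mean(p - y)
    return w, b

def synthesize(w, b, x, steps=100, lr=0.1):
    """Gradient-ascend the classifier score w.r.t. the input itself,
    starting from noise, to produce a sample that fools the classifier."""
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        x = x + lr * (1.0 - p) * w         # d/dx of log sigmoid(x @ w + b)
    return x

# toy "data distribution": points near a fixed template vector
template = np.array([1.0, -1.0, 0.5, 2.0])
real = template + 0.05 * rng.standard_normal((64, 4))

negatives = rng.standard_normal((64, 4))   # stage 0: pure random noise
cascade = []
for stage in range(3):
    w, b = train_discriminator(real, negatives)
    cascade.append((w, b))
    fakes = np.array([synthesize(w, b, rng.standard_normal(4))
                      for _ in range(32)])
    negatives = np.vstack([negatives, fakes])  # pseudo-negatives join the pool
```

Each stage's fakes are harder negatives than the last, which is what forces the cascade of classifiers to keep tightening around the real data.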
The images generated at each step are shown below:
https://i.imgur.com/kp575s8.png
After training, the model generates a sample by starting from random noise and applying gradient updates under each trained discriminator in turn. Only the weights of the discriminators need to be stored for this.
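The sampling pass can be sketched as follows. Again this is a hypothetical miniature: each stored stage is just a linear weight/bias pair standing in for a trained CNN discriminator, with hand-picked values for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def refine(x, w, b, steps=200, lr=0.2):
    """One sampling stage: gradient-ascend log p(real | x) w.r.t. x."""
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        x = x + lr * (1.0 - p) * w   # gradient of log sigmoid(x @ w + b)
    return x

# hypothetical stored cascade: one (w, b) per trained discriminator stage;
# note that only these weights are kept, not any generated samples
cascade = [(np.array([1.0, -1.0]), 0.0),
           (np.array([0.5,  0.5]), -0.2)]

x = np.random.default_rng(1).standard_normal(2)  # start from random noise
for w, b in cascade:                             # pass through every stage
    x = refine(x, w, b)
```

After the final stage, `x` scores as "real" under the last discriminator, which is the stopping criterion for the sampling pass.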
Poster from ICCV2017:
https://i.imgur.com/vYSSdZx.png