Generate To Adapt: Aligning Domains using Generative Adversarial Networks
Sankaranarayanan, Swami
and
Balaji, Yogesh
and
Castillo, Carlos D.
and
Chellappa, Rama
arXiv e-Print archive - 2017 via Local Bibsonomy
Keywords:
dblp
_Objective:_ Use a GAN to learn an embedding that is invariant to domain shift.
_Dataset:_ [MNIST](http://yann.lecun.com/exdb/mnist/), [SVHN](http://ufldl.stanford.edu/housenumbers/), USPS, [OFFICE](https://cs.stanford.edu/%7Ejhoffman/domainadapt/) and [CFP](http://mukh.com/).
## Architecture:
The overall network is composed of four sub-networks (a minimal code sketch follows the list):
1. `F`, the feature embedding network, which takes as input an image from either the source or target dataset and generates a feature vector.
2. `C`, the classifier network, which predicts the label from the embedding when the image comes from the source dataset.
3. `G`, the generator network, which learns to generate an image similar to the source dataset from an image embedding produced by `F` and a random noise vector.
4. `D`, the discriminator network, which tries to tell whether an image comes from the source dataset or from the generator.
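For concreteness, here is a minimal PyTorch sketch of the four sub-networks. All sizes (32x32 RGB inputs, a 128-d embedding, 100-d noise, 10 classes) and layer choices are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class F(nn.Module):  # feature embedding network (shared by both domains)
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, emb_dim),
        )
    def forward(self, x):
        return self.net(x)

class C(nn.Module):  # classifier on source embeddings
    def __init__(self, emb_dim=128, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(emb_dim, n_classes)
    def forward(self, f):
        return self.fc(f)

class G(nn.Module):  # generator: embedding + noise -> source-like image
    def __init__(self, emb_dim=128, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + noise_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, f, z):
        return self.net(torch.cat([f, z], dim=1))

class D(nn.Module):  # discriminator: real source image vs. generated image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 1),  # real/fake logit
        )
    def forward(self, x):
        return self.net(x)
```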
`G` and `D` play a minimax game where `D` tries to classify the generated samples as fake and `G` tries to fool `D` by producing examples that are as realistic as possible.
The scheme for training the network is the following:
[![screen shot 2017-04-14 at 5 50 22 pm](https://cloud.githubusercontent.com/assets/17261080/25048122/f2a648b6-213a-11e7-93bd-954981bd3838.png)](https://cloud.githubusercontent.com/assets/17261080/25048122/f2a648b6-213a-11e7-93bd-954981bd3838.png)
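To make the training scheme concrete, below is a simplified single training step reusing the classes from the sketch above. The optimizers, loss weighting, and the exact form of the alignment term are assumptions; the full objective in the paper is richer than what is shown here:

```python
import torch
import torch.nn as nn

f_net, c_net, g_net, d_net = F(), C(), G(), D()
opt_fc = torch.optim.Adam(
    list(f_net.parameters()) + list(c_net.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(g_net.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d_net.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, noise_dim=100):
    b = x_src.size(0)
    z = torch.randn(b, noise_dim)

    # D step: real source images vs. images generated from source embeddings.
    with torch.no_grad():
        fake = g_net(f_net(x_src), z)
    d_loss = (bce(d_net(x_src), torch.ones(b, 1))
              + bce(d_net(fake), torch.zeros(b, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # G step: fool D with images generated from (detached) source embeddings.
    fake = g_net(f_net(x_src).detach(), z)
    g_loss = bce(d_net(fake), torch.ones(b, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # F/C step: classify source images correctly, and update F so that
    # images generated from *target* embeddings look source-like to D.
    cls_loss = ce(c_net(f_net(x_src)), y_src)
    z_tgt = torch.randn(x_tgt.size(0), noise_dim)
    align_loss = bce(d_net(g_net(f_net(x_tgt), z_tgt)),
                     torch.ones(x_tgt.size(0), 1))
    fc_loss = cls_loss + align_loss  # equal weighting is an assumption
    opt_fc.zero_grad()
    fc_loss.backward()  # grads also reach G/D, but only F/C are stepped
    opt_fc.step()
    return d_loss.item(), g_loss.item(), fc_loss.item()
```

The intended effect of the last step is that the adversarial signal flowing back through `G` and `D` pushes target embeddings toward the source distribution, which is what makes the embedding domain-invariant.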
## Results:
Very interesting: the generated images are only a side-product, but the overall approach appears to be state-of-the-art at the time of writing (the paper was published one week ago).