Learning from Simulated and Unsupervised Images through Adversarial Training
Shrivastava, Ashish; Pfister, Tomas; Tuzel, Oncel; Susskind, Josh; Wang, Wenda; Webb, Russell
arXiv e-Print archive - 2016 via Local Bibsonomy
Keywords: dblp
Problem
--------------
Refine synthetic (simulator-generated) images so that they look realistic while preserving their annotations
https://machinelearning.apple.com/images/journals/gan/real_synt_refined_gaze.png
Approach
--------------
* Generative adversarial networks
Contributions
----------
1. **Refiner** - a fully convolutional network (FCN) that refines a simulated image into a realistic-looking image
2. **Adversarial + self-regularization loss**
   * **Adversarial loss** term = a CNN discriminator that classifies whether an image is refined or real
   * **Self-regularization** term = L1 distance between the refiner's output and the simulated input image. The distance can be computed either in pixel space or in feature space (to preserve, for example, the gaze direction).
https://i.imgur.com/I4KxCzT.png
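A minimal NumPy sketch of the combined refiner objective: an adversarial term that rewards fooling the discriminator plus the λ-weighted L1 self-regularization term. The λ value, the epsilon, and the convention that the discriminator outputs the probability of an image being refined are illustrative assumptions, not values from the paper.

```python
import numpy as np

def refiner_loss(refined, synthetic, d_probs, lam=0.1):
    """Sketch of the refiner's training loss.

    refined   : batch of refiner outputs
    synthetic : the simulated input images (same shape)
    d_probs   : discriminator's probability that each refined image is fake
    lam       : illustrative weight on the self-regularization term
    """
    eps = 1e-8
    # Adversarial term: push the discriminator to call refined images real.
    adv = -np.mean(np.log(1.0 - d_probs + eps))
    # Self-regularization: per-pixel L1 distance to the simulated input,
    # which keeps the annotation (e.g. gaze direction) intact.
    reg = np.mean(np.abs(refined - synthetic))
    return adv + lam * reg
```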
Datasets
------------
* grayscale eye images
* depth sensor hand images
Technical Contributions
-------------------------------
1. **Local adversarial loss** - the discriminator is applied to image patches rather than the whole image, producing multiple local "realness" scores
https://machinelearning.apple.com/images/journals/gan/local-d.png
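A sketch of how the local loss can be computed, assuming a fully convolutional discriminator that already emits an H×W map of per-patch probabilities: the loss is just the cross-entropy averaged over all patches (the epsilon is an illustrative numerical guard).

```python
import numpy as np

def local_adversarial_loss(prob_map, is_refined):
    """Average cross-entropy over a per-patch probability map.

    prob_map   : H x W map of the discriminator's probability that each
                 patch is refined (fake)
    is_refined : True if the input image was refined, False if real
    """
    eps = 1e-8
    if is_refined:
        # Each refined patch should be classified as refined.
        return -np.mean(np.log(prob_map + eps))
    # Each real patch should be classified as real.
    return -np.mean(np.log(1.0 - prob_map + eps))
```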
2. **Discriminator with history** - a buffer of previously refined images is mixed into the discriminator's minibatches, preventing the refiner from re-introducing artifacts the discriminator has already learned to detect.
https://machinelearning.apple.com/images/journals/gan/history.gif
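A sketch of such a history buffer: half of each discriminator minibatch is drawn from previously refined images, and the buffer is then updated with the current batch. The capacity and the replace-all update policy here are simplifying assumptions for illustration.

```python
import numpy as np

class ImageHistoryBuffer:
    """Buffer of previously refined images for discriminator training."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.images = []
        self.rng = np.random.default_rng(seed)

    def sample_minibatch(self, current_refined):
        """Return a minibatch: half fresh refined images, half from history."""
        b = len(current_refined)
        half = b // 2
        if len(self.images) >= half:
            idx = self.rng.choice(len(self.images), size=half, replace=False)
            batch = list(current_refined[: b - half]) + [self.images[i] for i in idx]
        else:
            # Not enough history yet: use only the fresh images.
            batch = list(current_refined)
        # Update the buffer with the newly refined images (simplified policy:
        # append until full, then overwrite random entries).
        for img in current_refined:
            if len(self.images) < self.capacity:
                self.images.append(img)
            else:
                self.images[self.rng.integers(self.capacity)] = img
        return batch
```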