[link]
_Objective:_ Replace the usual GAN loss with a softmax cross-entropy loss to stabilize GAN training.

_Dataset:_ [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)

## Inner working:

This is linked to recent work such as WGAN or Loss-Sensitive GAN, which focuses on objective functions with non-vanishing gradients to avoid the situation where the discriminator `D` becomes too good and the gradient vanishes.

They first introduce two targets, one for the discriminator `D` and one for the generator `G`:

![](https://cloud.githubusercontent.com/assets/17261080/25347232/767049bc-291a-11e7-906e-c19a92bb7431.png)

![](https://cloud.githubusercontent.com/assets/17261080/25347233/7670ff60-291a-11e7-974f-83eb9269d238.png)

And then the two new losses:

![](https://cloud.githubusercontent.com/assets/17261080/25347275/a303aa0a-291a-11e7-86b4-abd42c83d4a8.png)

![](https://cloud.githubusercontent.com/assets/17261080/25347276/a307bc6c-291a-11e7-98b3-cbd7182090cd.png)

## Architecture:

They use the DCGAN architecture and simply change the loss, while removing batch normalization and the other empirical techniques usually needed to stabilize training; they show that Softmax GAN training remains robust even without them.
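Since the equation images above may not render, here is a minimal PyTorch sketch of the two losses under my reading of the paper (not the authors' code): with `B = B+ ∪ B-` the joint batch of real and generated samples, the discriminator target puts mass `1/|B+|` on each real sample and 0 on fakes, the generator target puts `1/|B|` on every sample, and both losses are the softmax cross-entropy against `p(x) = exp(-D(x)) / Z_B`. The names `softmax_gan_losses`, `d_real`, and `d_fake` are mine.

```python
# A minimal sketch of the Softmax GAN losses, assuming the formulation
# described above: D outputs an unnormalized score D(x), and a softmax over
# the joint batch B = B+ (real) ∪ B- (generated) defines
# p(x) = exp(-D(x)) / Z_B with Z_B = sum_{x' in B} exp(-D(x')).
import torch


def softmax_gan_losses(d_real: torch.Tensor, d_fake: torch.Tensor):
    """d_real: D(x) for the real batch B+, shape [m].
    d_fake: D(G(z)) for the generated batch B-, shape [n]."""
    m, n = d_real.numel(), d_fake.numel()
    # log Z_B, the log-partition function over the joint batch.
    log_z = torch.logsumexp(torch.cat([-d_real, -d_fake]), dim=0)
    # Discriminator target: mass 1/|B+| on each real sample, 0 on fakes,
    # so the cross-entropy reduces to mean(D(x+)) + log Z_B.
    loss_d = d_real.mean() + log_z
    # Generator target: uniform mass 1/|B| on every sample in B,
    # so the cross-entropy reduces to mean(D(x) over B) + log Z_B.
    loss_g = (d_real.sum() + d_fake.sum()) / (m + n) + log_z
    return loss_d, loss_g
```

In a training loop, `D` would be updated to minimize `loss_d` and `G` to minimize `loss_g` (gradients reaching `G` through `d_fake = D(G(z))`); because the generator target is a uniform distribution over the whole batch rather than a point mass, the gradient does not vanish when `D` becomes too good, which is the stabilization effect the paper targets.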