Lee et al. propose a variant of adversarial training in which a generator is trained simultaneously to generate adversarial perturbations. This approach follows the idea that it is possible to “learn” how to generate adversarial perturbations (as in Poursaeed et al., cited below). In this case, the authors use the gradient of the classifier with respect to the input as a hint for the generator. Generator and classifier are then trained in an adversarial setting (analogously to generative adversarial networks); see the paper for details.
Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie. Generative Adversarial Perturbations. arXiv, abs/1712.02328, 2017.
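The gradient-as-hint scheme can be illustrated with a toy NumPy sketch (everything here is an illustrative assumption, not the authors' architecture): a logistic-regression classifier supplies its input gradient as the hint, a small linear generator maps that hint to a bounded perturbation, and the two are updated in alternation, the classifier to minimize its loss on perturbed inputs and the generator to maximize it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2); b = 0.0          # logistic-regression "classifier"
G = rng.normal(0, 0.1, (2, 2))    # linear "generator" acting on the gradient hint
eps, lr = 0.3, 0.1                # perturbation budget and step size (arbitrary)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Gradient hint: d(loss)/d(input) for the current classifier.
    p = sigmoid(X @ w + b)
    hint = (p - y)[:, None] * w[None, :]           # shape (n, 2)

    # Generator turns the hint into a bounded perturbation.
    pre = hint @ G.T
    delta = eps * np.tanh(pre)
    Xp = X + delta

    # Classifier loss gradient on perturbed inputs.
    pp = sigmoid(Xp @ w + b)
    err = pp - y                                   # d(loss)/d(logit)

    # Classifier step: gradient descent on the adversarial loss.
    w -= lr * (Xp.T @ err) / len(y)
    b -= lr * err.mean()

    # Generator step: gradient ascent on the same loss.
    dL_dXp = err[:, None] * w[None, :]
    dL_dpre = dL_dXp * eps * (1 - np.tanh(pre) ** 2)
    G += lr * (dL_dpre.T @ hint) / len(y)

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Because the perturbation is bounded by `eps` through the `tanh`, the classifier still recovers the separating boundary while being trained against the strongest perturbations the generator has learned to produce.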
Single cells behave in complex and sometimes seemingly random ways. We have applied Generative Adversarial Networks (GANs), a form of artificial intelligence, to make sense of the way genes are controlled in skin cells. A GAN involves two separate neural networks, a ‘generator’ and a ‘discriminator’. The generator simulates cells and the discriminator tries to tell the difference between the fake data created by the generator and data from real cells. As they compete against each other they improve at their tasks and provide new insights into the way cells behave.
Our neural network approach allows us to understand the relationship between different genes and how this contributes to cell behaviour. One of the networks, the generator, is responsible for simulating cells and we use this to predict how genes are controlled under different conditions, effectively simulating what would previously have been laborious and painstaking experiments. GANs also make it possible to compare data from multiple labs produced in different conditions, opening up the opportunity to answer new questions.
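The generator/discriminator game described above can be sketched in a deliberately minimal NumPy example (all choices here are hypothetical for illustration: one-dimensional "expression" values, linear networks, and hand-derived gradients, none of which reflect the authors' actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real cells": toy 1-D expression values drawn around a true mean.
real_mu = 3.0
n, dim, z_dim, lr = 64, 1, 1, 0.02

wg = rng.normal(0, 0.1, (z_dim, dim)); bg = np.zeros(dim)   # generator
wd = rng.normal(0, 0.1, (dim,)); bd = 0.0                   # discriminator

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):
    real = rng.normal(real_mu, 1.0, (n, dim))
    z = rng.normal(0, 1, (n, z_dim))
    fake = z @ wg + bg                       # generator "simulates cells"

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    dr = sigmoid(real @ wd + bd)
    df = sigmoid(fake @ wd + bd)
    wd += lr * (real.T @ (1 - dr) - fake.T @ df).ravel() / n
    bd += lr * ((1 - dr).mean() - df.mean())

    # Generator step: push d(fake) toward 1 (non-saturating GAN loss).
    df = sigmoid(fake @ wd + bd)
    dL_dfake = (1 - df)[:, None] * wd[None, :]
    wg += lr * (z.T @ dL_dfake) / n
    bg += lr * dL_dfake.mean(axis=0)

fake_mean = (rng.normal(0, 1, (1000, z_dim)) @ wg + bg).mean()
print(f"generated mean {fake_mean:.2f} vs real mean {real_mu}")
```

As the two networks compete, the generator's samples are driven toward the statistics of the real data, which is the property that makes the trained generator usable as a simulator of cells under different conditions.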