Playing the Game of Universal Adversarial Perturbations
Julien Perolat, Mateusz Malinowski, Bilal Piot, Olivier Pietquin
arXiv e-Print archive, 2018
Keywords:
cs.LG, cs.CV, stat.ML
First published: 2018/09/20
Abstract: We study the problem of learning classifiers robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. In this new formulation, both players simultaneously play the same game, where one player chooses a classifier that minimizes a classification loss whilst the other player creates an adversarial perturbation that increases the same loss when applied to every sample in the training set. By observing that performing a classification (respectively creating adversarial samples) is the best response to the other player, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically show the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets -- CIFAR10, CIFAR100 and ImageNet.
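As a rough sketch, the zero-sum game from the abstract can be written as follows; the notation (classifier $f_\theta$, loss $\ell$, perturbation budget $\epsilon$) is assumed shorthand and not taken verbatim from the paper:

$$\min_{\theta} \; \max_{\|\delta\| \le \epsilon} \; \frac{1}{N} \sum_{i=1}^{N} \ell\big(f_\theta(x_i + \delta),\, y_i\big)$$

Here a single perturbation $\delta$ is shared across all $N$ training samples, in contrast to per-sample adversarial training.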
Pérolat et al. propose a game-theoretic variant of adversarial training against universal adversarial perturbations. In each outer iteration, the model is first trained for a fixed number of steps on the current training set. Afterwards, a universal perturbation that fools the current network is computed and applied to the images, and the resulting adversarial examples are added to the training set. In the next iteration, the network is then trained on this enlarged training set, which includes the adversarial examples. Overall, the network is thus trained on a sequence of universal adversarial perturbations corresponding to earlier versions of itself; the loop is sketched below.
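The following Python sketch illustrates this alternating loop. It is a minimal illustration under assumed choices (a toy linear model, random stand-in data, and $L_\infty$-projected gradient ascent to find the universal perturbation), not the authors' implementation:

```python
# Sketch of the loop described above: alternate between training the
# classifier and finding a single universal perturbation that fools it.
# Model, data, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for an image classifier and a dataset (assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(256, 3, 32, 32)   # CIFAR10-sized inputs in [0, 1]
labels = torch.randint(0, 10, (256,))

train_x, train_y = images.clone(), labels.clone()  # growing training set
opt = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon, pert_steps, pert_lr = 8 / 255, 40, 0.01

for outer in range(5):
    # 1) Train the classifier for a fixed number of steps on the current set.
    for _ in range(100):
        idx = torch.randint(0, train_x.size(0), (64,))
        loss = F.cross_entropy(model(train_x[idx]), train_y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # 2) Find one universal perturbation delta, shared across all samples,
    #    that maximizes the loss (projected gradient ascent, L-inf ball).
    delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
    for _ in range(pert_steps):
        adv_loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels)
        grad, = torch.autograd.grad(adv_loss, delta)
        with torch.no_grad():
            delta += pert_lr * grad.sign()   # ascend on the loss
            delta.clamp_(-epsilon, epsilon)  # project back into the budget

    # 3) Add the resulting universal adversarial examples to the training set,
    #    so the next round trains against perturbations of earlier models.
    adv_images = (images + delta.detach()).clamp(0, 1)
    train_x = torch.cat([train_x, adv_images])
    train_y = torch.cat([train_y, labels])
```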
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).