Out-distribution training confers robustness to deep neural networks
Abbasi, Mahdieh and Gagné, Christian
arXiv e-Print archive - 2018 via Local Bibsonomy
Keywords:
dblp
Abbasi and Gagné propose explicit but natural out-distribution training as a defense against adversarial examples. Specifically, as illustrated on the toy dataset in Figure 1, they argue that networks commonly produce high-confidence predictions in regions that are clearly outside of the data manifold (i.e., the training data distribution). As a mitigation strategy, the authors propose to explicitly train on out-of-distribution data, allowing the network to additionally classify such data as “dustbin” data. On MNIST, for example, this data comes from NotMNIST, a dataset of letters A-J; on CIFAR-10, this data could be CIFAR-100. Experiments show that this out-of-distribution training allows networks to identify adversarial examples as “dustbin” and thus improves robustness.
https://i.imgur.com/nUSDZay.png
Figure 1: Illustration of a naive model versus an augmented model, i.e., trained on out-of-distribution data, on a toy dataset (left) and on MNIST (right).
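The augmented training itself is conceptually simple: the classifier gets one extra output (the “dustbin” class), and out-of-distribution batches are mixed into each training step under that label. Below is a minimal PyTorch-style sketch of this idea; the model, the two data loaders, the class count, and the loader behavior are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Assumed setup (hypothetical names): `model` outputs K+1 logits (K in-distribution
# classes plus one extra "dustbin" class), `in_dist_loader` yields (x, y) batches
# from the original training set (e.g., MNIST), and `out_dist_loader` yields image
# batches from a natural out-of-distribution source (e.g., NotMNIST).

NUM_CLASSES = 10        # in-distribution classes
DUSTBIN = NUM_CLASSES   # index of the extra dustbin class

def train_epoch(model, in_dist_loader, out_dist_loader, optimizer, device="cpu"):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for (x_in, y_in), x_out in zip(in_dist_loader, out_dist_loader):
        # All out-of-distribution samples are assigned the dustbin label.
        y_out = torch.full((x_out.size(0),), DUSTBIN, dtype=torch.long)

        x = torch.cat([x_in, x_out]).to(device)
        y = torch.cat([y_in, y_out]).to(device)

        optimizer.zero_grad()
        loss = criterion(model(x), y)  # standard cross-entropy over K+1 classes
        loss.backward()
        optimizer.step()
```

At test time, any input whose predicted class is the dustbin index is rejected; the paper's claim is that many adversarial examples end up in this rejected set.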
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).