Adversarial Examples Are a Natural Consequence of Test Error in Noise
Nic Ford, Justin Gilmer, Nicholas Carlini, Ekin Dogus Cubuk
arXiv e-Print archive - 2019 via Local Bibsonomy
Keywords: dblp
Ford et al. show that the existence of adversarial examples can be directly linked to test error on noise and other types of random corruption. Additionally, obtaining models robust to random corruptions is difficult, and even adversarially robust models might not be robust against these corruptions. Furthermore, many “defenses” against adversarial examples perform poorly on random corruptions – showing that some defenses do not actually produce robust models but merely make gradient-based attacks harder (gradient masking).
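As a rough illustration of the evaluation this implies, the sketch below estimates test accuracy under additive Gaussian noise as a simple stand-in for the random corruptions discussed; a defense whose accuracy collapses under moderate noise is suspect. The names `model`, `test_loader`, and the noise scales are hypothetical, not from the paper:

```python
import torch

@torch.no_grad()
def accuracy_under_gaussian_noise(model, loader, sigma, device="cpu"):
    """Estimate test accuracy when inputs are perturbed by i.i.d.
    Gaussian noise with standard deviation `sigma` (inputs assumed
    to lie in [0, 1])."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Add random corruption and keep pixels in the valid range.
        x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
        pred = model(x_noisy).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: sweep noise scales and compare against clean
# accuracy (sigma = 0.0); a steep drop indicates poor corruption
# robustness even if gradient-based attacks appear to fail.
# for sigma in (0.0, 0.05, 0.1, 0.2):
#     print(sigma, accuracy_under_gaussian_noise(model, test_loader, sigma))
```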
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).