First published: 2017/10/30

Abstract: The Madry Lab recently hosted a competition designed to test the robustness
of their adversarially trained MNIST model. Attacks were constrained to perturb
each pixel of the input image by a scaled maximal $L_\infty$ distortion
$\epsilon = 0.3$. This discourages the use of attacks which are not optimized
on the $L_\infty$ distortion metric. Our experimental results demonstrate that
by relaxing the $L_\infty$ constraint of the competition, the elastic-net
attack to deep neural networks (EAD) can generate transferable adversarial
examples which, despite their high average $L_\infty$ distortion, have minimal
visual distortion. These results call into question the use of $L_\infty$ as a
sole measure for visual distortion, and further demonstrate the power of EAD at
generating robust adversarial examples.
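For context, the elastic-net attack (EAD) referenced above finds an adversarial example $x'$ for an input $x$ by minimizing an elastic-net regularized objective. The following is a rough sketch of the formulation in Chen et al. [3] (notation mine; see the paper for the precise definition):

$$\min_{x'} \; c \cdot f(x', t) + \beta \, \|x' - x\|_1 + \|x' - x\|_2^2 \quad \text{s.t.} \quad x' \in [0, 1]^p,$$

where $f$ is an attack loss encouraging classification as target $t$ (as in the Carlini-Wagner attacks [2]) and $\beta$ trades off $L_1$ against $L_2$ distortion. Since the $L_\infty$ norm is not penalized at all, EAD's examples can exhibit high $L_\infty$ distortion while remaining visually close to the original image.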
Sharma and Chen provide an experimental comparison of different state-of-the-art attacks against the adversarial training defense by Madry et al. [1]. They consider several attacks, including the Carlini-Wagner attacks [2] and the elastic-net attack (EAD) [3], as well as projected gradient descent (PGD) [1]. Their experimental finding – that the defense by Madry et al. can be broken by increasing the allowed perturbation size (i.e., $\epsilon$) – should not be surprising: an adversarially trained network will only defend reliably against the kind of attacks it was trained against.
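To make the role of the perturbation budget explicit, here is a minimal sketch of the $L_\infty$-constrained PGD attack used for adversarial training by Madry et al. [1], assuming a PyTorch classifier `model` and a batch of images `x` with labels `y` scaled to $[0, 1]$ (these names and the hyperparameters are illustrative, not taken from the authors' code):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """L_inf-constrained PGD: every pixel may change by at most eps."""
    # Random start inside the L_inf ball of radius eps around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend along the gradient sign, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1)  # keep pixels in the valid range

    return x_adv.detach()
```

Relaxing the competition's constraint simply corresponds to increasing `eps`; EAD, in contrast, never optimizes the $L_\infty$ distortion in the first place, which is why its adversarial examples exceed the budget while still looking largely unchanged.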
[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), 39–57, 2017.
[3] P.Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.J. Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114, 2017.
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).