Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
Sascha Saralajew, Lars Holdijk, Maike Rees and Thomas Villmann
arXiv e-Print archive, 2019
Keywords:
cs.LG, cs.AI, cs.CV, stat.ML
First published: 2019/02/01

Abstract: Adversarial attacks and the development of (deep) neural networks robust
against them are currently two widely researched topics. The robustness of
Learning Vector Quantization (LVQ) models against adversarial attacks has
however not yet been studied to the same extent. We therefore present an
extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix
LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized
LVQ and Generalized Tangent LVQ have a high base robustness, on par with the
current state-of-the-art in robust neural network methods. In contrast to this,
Generalized Matrix LVQ shows a high susceptibility to adversarial attacks,
scoring consistently behind all other models. Additionally, our numerical
evaluation indicates that increasing the number of prototypes per class
improves the robustness of the models.
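To make the evaluated model family concrete, here is a minimal sketch of the nearest-prototype classification rule and the relative distance cost term that Generalized LVQ is built on. The function names and the NumPy-based implementation are illustrative, not taken from the paper's code.

```python
import numpy as np

def glvq_mu(x, prototypes, proto_labels, y):
    """Relative distance difference used in the GLVQ cost function.

    mu = (d_plus - d_minus) / (d_plus + d_minus), where d_plus is the
    squared Euclidean distance from x to the closest prototype carrying
    the correct label y, and d_minus the distance to the closest
    prototype of any other class. mu < 0 means x is classified correctly;
    its magnitude reflects the margin to the decision boundary.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)
    d_plus = d[proto_labels == y].min()
    d_minus = d[proto_labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

def predict(x, prototypes, proto_labels):
    """Winner-takes-all: assign the label of the nearest prototype."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    return proto_labels[np.argmin(d)]
```

Because the decision depends only on distances to a small set of prototypes, an adversarial perturbation must move the input across a piecewise-linear prototype boundary, which is one intuition for the base robustness reported above; adding prototypes per class (as the paper's evaluation suggests) refines these boundaries.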