Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey
arXiv e-Print archive - 2018
Keywords: cs.LG, cs.CR, cs.CV
First published: 2018/01/08
Abstract: Deep Neural Networks (DNNs) have recently been shown to be vulnerable to
adversarial examples, which are carefully crafted instances that can mislead
DNNs into making errors during prediction. To better understand such attacks, a
characterization is needed of the properties of regions (the so-called
'adversarial subspaces') in which adversarial examples lie. We tackle this
challenge by characterizing the dimensional properties of adversarial regions,
via the use of Local Intrinsic Dimensionality (LID). LID assesses the
space-filling capability of the region surrounding a reference example, based
on the distribution of distances from the example to its neighbors. We first
explain how adversarial perturbation can affect the LID characteristic of
adversarial regions, and then show empirically that LID characteristics can
distinguish adversarial examples generated using state-of-the-art attacks. As a
proof-of-concept, we show that a
potential application of LID is the detection of adversarial examples, and
preliminary results show that it outperforms several state-of-the-art detection
measures by large margins for the five attack strategies considered in
this paper across three benchmark datasets. Our analysis of the LID
characteristic of adversarial regions not only motivates new directions for
effective adversarial defense, but also opens up more challenges for developing
new attacks to better understand the vulnerabilities of DNNs.
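
To make the core quantity concrete, here is a minimal sketch of the maximum-likelihood (Hill) LID estimator that this line of work builds on. It is not the paper's released code: the function name `lid_mle`, the choice of Euclidean distance, and the use of a random mini-batch as the reference set are illustrative assumptions.

```python
import numpy as np

def lid_mle(x, reference_batch, k=20):
    """Estimate the Local Intrinsic Dimensionality of `x` from its k nearest
    neighbors in `reference_batch`, via the MLE (Hill) estimator:
        LID_hat(x) = -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    where r_i is the distance from x to its i-th nearest neighbor and r_k is
    the distance to the k-th (farthest kept) neighbor."""
    dists = np.linalg.norm(reference_batch - x, axis=1)  # distances to the batch
    dists = np.sort(dists)
    dists = dists[dists > 0][:k]   # drop the zero self-distance if x is in the batch
    r_k = dists[-1]                # k-th nearest-neighbor distance
    return -1.0 / np.mean(np.log(dists / r_k))

# Toy usage with random data standing in for a mini-batch of layer activations.
rng = np.random.default_rng(0)
batch = rng.normal(size=(128, 64))
print(lid_mle(batch[0], batch, k=20))
```

In the paper's proof-of-concept detector, LID estimates of this kind, computed at multiple layers of the network, serve as features for separating adversarial inputs from clean ones, with adversarial examples tending to exhibit markedly higher LID.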