Low Frequency Adversarial Perturbation
Chuan Guo, Jared S. Frank, and Kilian Q. Weinberger
arXiv e-Print archive, 2018
Keywords:
cs.CV
First published: 2018/09/24
Abstract: Adversarial images aim to change a target model's decision by minimally
perturbing a target image. In the black-box setting, the absence of gradient
information often renders this search problem costly in terms of query
complexity. In this paper we propose to restrict the search for adversarial
images to a low frequency domain. This approach is readily compatible with many
existing black-box attack frameworks and consistently reduces their query cost
by 2 to 4 times. Further, we can circumvent image transformation defenses even
when both the model and the defense strategy are unknown. Finally, we
demonstrate the efficacy of this technique by fooling the Google Cloud Vision
platform with an unprecedentedly low number of model queries.
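The core idea of searching only in a low frequency domain can be sketched with the discrete cosine transform: sample random perturbations over only the top-left (low-frequency) block of DCT coefficients, then map them back to pixel space with an inverse DCT. The sketch below is illustrative only, not the paper's implementation; the function names and the frequency ratio are assumptions.

```python
import numpy as np

def idct_matrix(n):
    # Orthonormal inverse DCT-II (i.e. DCT-III) basis matrix of size n x n.
    u = np.arange(n)
    m = np.arange(n)[:, None]
    B = np.cos(np.pi * u * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    B[:, 0] = np.sqrt(1.0 / n)  # constant basis vector gets weight sqrt(1/n)
    return B

def low_freq_perturbation(h, w, ratio=0.25, rng=None):
    # Sample Gaussian noise only in the low-frequency DCT coefficients
    # (the top-left ratio*h x ratio*w block), then inverse-transform to
    # pixel space. The search dimension shrinks by a factor of 1/ratio^2.
    rng = np.random.default_rng() if rng is None else rng
    coeffs = np.zeros((h, w))
    kh, kw = max(1, int(h * ratio)), max(1, int(w * ratio))
    coeffs[:kh, :kw] = rng.standard_normal((kh, kw))
    return idct_matrix(h) @ coeffs @ idct_matrix(w).T

# A perturbation of this form can be fed to any black-box attack that
# proposes random directions, restricting its search to smooth images.
delta = low_freq_perturbation(224, 224, ratio=0.25)
```

With `ratio=0.25`, the attack searches a space 16 times smaller than the full pixel space, which is one way such a restriction can lower query complexity.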