NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
Jiajun Lu, Hussein Sibai, Evan Fabry, and David Forsyth
arXiv e-Print archive, 2017
Keywords:
cs.CV, cs.AI, cs.CR
First published: 2017/07/12
Abstract: It has been shown that most machine learning algorithms are susceptible to
adversarial perturbations. Slightly perturbing an image in a carefully chosen
direction in the image space may cause a trained neural network model to
misclassify it. Recently, it was shown that physical adversarial examples
exist: printing perturbed images and then taking pictures of them would still
result in misclassification. This raises security and safety concerns.
However, these experiments ignore a crucial property of physical objects: the
camera can view objects from different distances and at different angles. In
this paper, we show experiments that suggest that current constructions of
physical adversarial examples do not disrupt object detection from a moving
platform. Instead, a trained neural network correctly classifies most pictures
of a perturbed image taken from different distances and angles. We
believe this is because the adversarial property of the perturbation is
sensitive to the scale at which the perturbed picture is viewed, so (for
example) an autonomous car will misclassify a stop sign only from a small range
of distances.
Our work raises an important question: can one construct examples that are
adversarial for many or most viewing conditions? If so, the construction should
offer very significant insights into the internal representation of patterns by
deep networks. If not, there is a good prospect that adversarial examples can
be reduced to a curiosity with little practical impact.
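
The abstract does not spell out how the perturbations are constructed or how the distance experiments are run, but the scale-sensitivity claim is easy to probe in a toy setting. The sketch below is an illustrative assumption, not the authors' pipeline: it crafts a one-step FGSM perturbation against an off-the-shelf ResNet-18 classifier and re-classifies the perturbed image after downscaling and upscaling it, a crude proxy for photographing the printed image from farther away. The model choice, epsilon, input file stop_sign.jpg, and scale list are all hypothetical.

```python
# Minimal sketch (not the authors' method): FGSM perturbation plus a crude
# multi-scale classification check. Requires torchvision >= 0.13.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def fgsm(x, label, eps=0.03):
    """One-step FGSM: move x in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(normalize(x)), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def predict(x):
    return model(normalize(x)).argmax(dim=1)

# "stop_sign.jpg" is a placeholder input image, not data from the paper.
img = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0).to(device)
label = predict(img)
adv = fgsm(img, label)

# Simulate different viewing distances by downscaling the adversarial image,
# then upscaling it back to the network's input resolution before classifying.
for scale in [1.0, 0.75, 0.5, 0.33, 0.25]:
    size = max(32, int(224 * scale))
    resized = F.interpolate(adv, size=size, mode="bilinear", align_corners=False)
    back = F.interpolate(resized, size=224, mode="bilinear", align_corners=False)
    print(f"scale {scale:.2f}: predicted class {predict(back).item()} "
          f"(original {label.item()})")
```

If the paper's scale-sensitivity argument holds, the adversarial label in such a setup should revert to the original class once the image is shrunk enough, i.e. the perturbation fools the classifier only over a narrow range of apparent sizes.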