Fully-Convolutional Siamese Networks for Object Tracking
Luca Bertinetto, Jack Valmadre, João F. Henriques, Andrea Vedaldi and Philip H. S. Torr
arXiv e-Print archive - 2016
Keywords:
cs.CV
First published: 2016/06/30
Abstract: The problem of arbitrary object tracking has traditionally been tackled by
learning a model of the object's appearance exclusively online, using as sole
training data the video itself. Despite the success of these methods, their
online-only approach inherently limits the richness of the model they can
learn. Recently, several attempts have been made to exploit the expressive
power of deep convolutional networks. However, when the object to track is not
known beforehand, it is necessary to perform Stochastic Gradient Descent online
to adapt the weights of the network, severely compromising the speed of the
system. In this paper we equip a basic tracking algorithm with a novel
fully-convolutional Siamese network trained end-to-end on the ILSVRC15 video
object detection dataset. Our tracker operates at frame-rates beyond real-time
and, despite its extreme simplicity, achieves state-of-the-art performance in
the VOT2015 benchmark.
Summary:
This paper proposes an approach for computing a correlation score between a query (exemplar) image and every sub-window of a larger search image. Because the Siamese network they describe is fully convolutional, the scores for all sub-windows of the search image are obtained in a single forward pass of the network. For each video, the features of the object being tracked are computed once and reused for the entire duration of the video when computing correlations.
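To make the mechanism concrete, here is a minimal PyTorch sketch of that cross-correlation step. The embedding network `embed`, the crop sizes, and the variable names are illustrative assumptions rather than the paper's exact architecture; only the idea of sliding the exemplar's feature map over the search image's feature map follows the paper.

```python
# Minimal sketch of the fully-convolutional Siamese scoring step.
# Assumptions: `embed` is a stand-in backbone (not the paper's exact
# AlexNet-style network); crop sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical embedding network, shared by both branches.
embed = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.ReLU(),
    nn.Conv2d(96, 256, kernel_size=5), nn.ReLU(),
    nn.Conv2d(256, 128, kernel_size=3),
)

z = torch.randn(1, 3, 127, 127)   # exemplar (query) crop of the object
x = torch.randn(1, 3, 255, 255)   # larger search region in the current frame

# Compute the exemplar features once per video and reuse them every frame.
kernel = embed(z)                  # (1, C, h, w)

# One forward pass over the search region; cross-correlating via conv2d
# scores every translated sub-window of x against z simultaneously.
search_feat = embed(x)             # (1, C, H, W)
score_map = F.conv2d(search_feat, kernel)   # (1, 1, H-h+1, W-w+1)

# The peak of the score map gives the displacement of the target.
peak = torch.nonzero(score_map[0, 0] == score_map.max())
```

Because the exemplar features act as the convolution kernel, evaluating a new frame costs only one backbone pass plus one correlation, which is what lets the tracker run faster than real time.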
My take:
This is in the same spirit as the GOTURN tracker. Although the fully-convolutional design provides translation invariance, that is not clearly an advantage over regressing bounding boxes directly, as GOTURN does. Also, the results are not directly comparable, since the two trackers were trained on different datasets.