End-to-End Deep Learning for Person Search
Tong Xiao, Shuang Li, Bochao Wang, Liang Lin, Xiaogang Wang
arXiv e-Print archive - 2016 via Local arXiv
Keywords:
cs.CV
First published: 2016/04/07
Abstract: Existing person re-identification (re-id) benchmarks and algorithms mainly
focus on matching cropped pedestrian images between queries and candidates.
However, this setup differs from real-world scenarios, where annotations of
pedestrian bounding boxes are unavailable and the target person must be
found in whole scene images. To close this gap, we investigate how to localize and
match query persons in scene images without relying on annotated
candidate boxes. Instead of breaking the problem into two separate
tasks, pedestrian detection and person re-id, we propose an end-to-end deep
learning framework that handles both jointly. A random sampling softmax loss
is proposed to effectively train the model under the supervision of sparse and
unbalanced labels. Moreover, existing benchmarks are small in scale,
and their samples are collected from a few fixed camera views with low scene
diversity. To address this issue, we collect a large-scale and
scene-diversified person search dataset, which contains 18,184 images, 8,432
persons, and 99,809 annotated bounding
boxes (dataset: http://www.ee.cuhk.edu.hk/~xgwang/PS/dataset.html). We
evaluate our approach and other baselines on the proposed dataset, and study
the influence of various factors. Experiments show that our method achieves the
best results.
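The random sampling softmax loss mentioned above can be illustrated with a minimal sketch: instead of normalizing over all identity classes (many of which are sparsely labeled), the softmax is computed over the target class plus a random subset of negatives. The function name, signature, and NumPy implementation below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_sampling_softmax_loss(logits, target, num_sampled, rng=None):
    """Cross-entropy over the target class plus a random subset of
    negative classes, instead of the full (large, unbalanced) class set.

    logits      : (C,) array of scores over all C identity classes
    target      : index of the ground-truth identity
    num_sampled : number of negative classes to draw at random
    """
    rng = rng or np.random.default_rng()
    C = logits.shape[0]
    # Candidate negatives: every class except the target.
    negatives = np.setdiff1d(np.arange(C), [target])
    sampled = rng.choice(negatives, size=num_sampled, replace=False)
    idx = np.concatenate(([target], sampled))   # target sits at position 0
    z = logits[idx] - logits[idx].max()         # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())     # log-softmax over the subset
    return -log_probs[0]                        # NLL of the target class
```

With uniform logits the loss reduces to log(num_sampled + 1), and it shrinks toward zero as the target logit dominates the sampled negatives; resampling negatives each step keeps the per-step cost independent of the total number of identities.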