Transferring Face Verification Nets To Pain and Expression Regression
Feng Wang, Xiang Xiang, Chang Liu, Trac D. Tran, Austin Reiter, Gregory D. Hager, Harry Quon, Jian Cheng, Alan L. Yuille
arXiv e-Print archive, 2017
Keywords:
cs.CV, cs.AI, cs.LG, cs.MM
First published: 2017/02/22
Abstract: Limited annotated data is available for the research of estimating facial
expression intensities, which makes the training of deep networks for automated
expression assessment very challenging. Fortunately, fine-tuning from a
data-extensive pre-trained domain such as face verification can alleviate the
problem. In this paper, we propose a transferred network that fine-tunes a
state-of-the-art face verification network using expression-intensity labeled
data with a regression layer. In this way, the expression regression task can
benefit from the rich feature representations trained on a huge amount of data
for face verification. The proposed transferred deep regressor is applied in
estimating the intensity of facial action units (2017 EmotioNet Challenge) and
in particular pain intensity estimation (UNBC-McMaster Shoulder-Pain dataset).
It won second place in the challenge and achieves state-of-the-art
performance on the Shoulder-Pain dataset. For Shoulder-Pain in particular,
which suffers from an imbalance across pain levels, a new weighted
evaluation metric is proposed.
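The transfer recipe the abstract describes (keep the rich pre-trained features, train only a new regression layer on the scarce expression labels) can be illustrated in miniature. Everything below is a stand-in: the "network" is a fixed random projection instead of a real face verification net, and the data is synthetic, purely to show the mechanics of fitting a regression head on frozen features:

```python
import random

random.seed(0)

RAW_DIM, FEAT_DIM = 4, 8

# Stand-in for a frozen, pre-trained feature extractor; a real system
# would use the face verification net's penultimate layer instead of
# this fixed random linear projection.
PROJ = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(RAW_DIM)]

def extract_features(x):
    # Frozen during fine-tuning: only the new head below is updated.
    return [sum(x[i] * PROJ[i][j] for i in range(RAW_DIM))
            for j in range(FEAT_DIM)]

# New regression head (a single linear layer), trained from scratch.
w = [0.0] * FEAT_DIM
b = 0.0

# Toy "expression intensity" data: the label is a linear function of the
# raw input, standing in for intensity-annotated frames.
inputs = [[random.random() for _ in range(RAW_DIM)] for _ in range(50)]
labels = [1.25 * sum(x) for x in inputs]  # intensities roughly in [0, 5]

lr = 0.01
for _ in range(500):
    for x, y in zip(inputs, labels):
        f = extract_features(x)
        err = sum(wi * fi for wi, fi in zip(w, f)) + b - y
        # Gradient step on the head only; PROJ never changes.
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
        b -= lr * err

mae = sum(abs(sum(wi * fi for wi, fi in zip(w, extract_features(x))) + b - y)
          for x, y in zip(inputs, labels)) / len(inputs)
print(round(mae, 4))
```

Because the label here is linear in the input, the head fits it almost exactly; the point is only that the extractor's weights are never touched, which is what lets a small labeled set drive the adaptation.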
I like the idea proposed in this paper: training on a label-rich domain and transferring the representation to a label-limited domain. I would like to see it extended beyond faces, for example to transferring object attributes.
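The abstract does not spell out the weighted metric, but one plausible form, which is my guess rather than the paper's definition, weights each sample's error by the inverse frequency of its true pain level, so that rare high-pain frames are not drowned out by the dominant neutral frames:

```python
from collections import Counter

def weighted_mae(y_true, y_pred):
    """MAE with each sample weighted by the inverse frequency of its true
    label. This inverse-frequency weighting is an illustrative assumption,
    not the metric defined in the paper."""
    counts = Counter(y_true)
    weights = [1.0 / counts[y] for y in y_true]
    total = sum(wt * abs(t - p) for wt, t, p in zip(weights, y_true, y_pred))
    return total / sum(weights)

# Imbalanced toy labels: mostly pain level 0, a few frames at level 3.
y_true = [0, 0, 0, 0, 0, 0, 3, 3]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0]  # a predictor that always says "no pain"
print(weighted_mae(y_true, y_pred))  # → 1.5, while plain MAE is only 0.75
```

Under plain MAE the always-zero predictor looks deceptively good on such data; any weighting of this kind makes the failure on the rare pain levels visible.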