Skeleton-aided Articulated Motion Generation
Yichao Yan, Jingwei Xu, Bingbing Ni, Xiaokang Yang
arXiv e-Print archive - 2017
Keywords:
cs.CV
First published: 2017/07/04
Abstract: This work makes the first attempt to generate an articulated human motion
sequence from a single image. On the one hand, we utilize paired inputs,
including human skeleton information as a motion embedding and a single human
image as an appearance reference, to generate novel motion frames based on the
conditional GAN infrastructure. On the other hand, a triplet loss is employed
to pursue appearance smoothness between consecutive frames. As the proposed
framework is capable of jointly exploiting the image appearance space and the
articulated/kinematic motion space, it generates realistic articulated motion
sequences, in contrast to most previous video generation methods, which yield
blurred motion effects. We test our model on two human action datasets,
KTH and Human3.6M, and the proposed framework generates very
promising results on both datasets.
Problem
---------------
Video generation of human motion given:
1. Single appearance reference image
2. Skeleton motion sequence
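
As a rough mental model, the generator consumes both conditioning signals at once. Below is a minimal sketch assuming a PyTorch-style conditional generator; the class name, layer sizes, and tensor shapes are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SkeletonConditionedGenerator(nn.Module):
    """Hypothetical conditional generator: fuses a single appearance
    reference image with one skeleton frame to produce a motion frame."""
    def __init__(self, img_ch=3, skel_ch=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # Encode the channel-wise concatenation of image + skeleton.
            nn.Conv2d(img_ch + skel_ch, feat, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # Decode back to image resolution.
            nn.ConvTranspose2d(feat, img_ch, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, ref_image, skeleton_map):
        # Conditional-GAN-style conditioning: concatenate the appearance
        # reference and the skeleton embedding along the channel axis.
        return self.net(torch.cat([ref_image, skeleton_map], dim=1))

G = SkeletonConditionedGenerator()
ref = torch.randn(1, 3, 64, 64)    # single appearance reference image
skel = torch.randn(1, 1, 64, 64)   # one frame of the skeleton sequence
frame = G(ref, skel)               # -> (1, 3, 64, 64) generated frame
```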
Datasets
-----------
* KTH - grayscale human actions
* Human3.6M - color multiview human actions
Approach
---------------
Conditional GANs.
The authors try both a Stack GAN and a Siamese GAN.
The latter provides better results.
https://preview.ibb.co/ighxQQ/Skeleton_aided_Articulated_Motion_Generation.png
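
The triplet loss mentioned in the abstract enforces appearance smoothness between consecutive frames. Here is a minimal sketch of one plausible setup, assuming consecutive generated frames form the anchor/positive pair and a temporally distant frame serves as the negative; the function name, feature source, and margin are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def appearance_smoothness_loss(feat_t, feat_t_plus_1, feat_far, margin=1.0):
    """Hypothetical triplet loss for temporal appearance smoothness:
    features of consecutive frames (anchor feat_t, positive feat_t_plus_1)
    should lie closer together than features of a distant frame (feat_far)."""
    return F.triplet_margin_loss(feat_t, feat_t_plus_1, feat_far, margin=margin)

# Example with random frame features (e.g., from a shared Siamese encoder).
f_t, f_t1, f_far = (torch.randn(4, 128) for _ in range(3))
loss = appearance_smoothness_loss(f_t, f_t1, f_far)
```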
Questions
----------------
Isn't using a full sequence of human skeleton motion considered more than a "hint"?