Auto-Conditioned LSTM Network for Extended Complex Human Motion Synthesis
Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, Hao Li
arXiv e-Print archive, 2017
Keywords: cs.LG
First published: 2017/07/17
Abstract: We present a real-time method for synthesizing highly complex human motions
using a novel LSTM network training regime we call the auto-conditioned LSTM
(acLSTM). Recently, researchers have attempted to synthesize new motion by
using autoregressive techniques, but existing methods tend to freeze or diverge
after a couple of seconds due to an accumulation of errors that are fed back
into the network. Furthermore, such methods have only been shown to be reliable
for relatively simple human motions, such as walking or running. In contrast,
our approach can synthesize arbitrary motions with highly complex styles,
including dances or martial arts in addition to locomotion. The acLSTM is able
to accomplish this by explicitly accommodating for autoregressive noise
accumulation during training. Furthermore, the structure of the acLSTM is
modular and compatible with any other recurrent network architecture, and is
usable for tasks other than motion. Our work is the first to our knowledge that
demonstrates the ability to generate over 18,000 continuous frames (300
seconds) of new complex human motion w.r.t. different styles.
Problem
----------
Motion prediction
Dataset
----------
CMU
Approach
--------------
auto-conditioned LSTM (acLSTM) - an LSTM trained so that ground-truth input frames are fed in at only a fraction of the timesteps; at the remaining timesteps the network is conditioned on its own previous outputs (a little bit like keyframes). The loss is still computed on all outputs, so the network learns during training to recover from its own accumulated prediction error. A minimal training sketch follows the figure link below.
https://image.ibb.co/nimSs5/acLSTM.png
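To make the training regime concrete, here is a minimal PyTorch sketch of the auto-conditioning idea. This is an illustrative reimplementation, not the authors' code: the class name `AcLSTM`, the sizes `pose_dim=54` and `hidden_dim=128`, the block lengths, and the toy data are all assumptions made for the example; the paper interleaves ground-truth and self-generated frames in fixed-length blocks in the same spirit.

```python
# Sketch of auto-conditioned LSTM training (illustrative, not the paper's code).
import torch
import torch.nn as nn

class AcLSTM(nn.Module):
    """Single-layer LSTM that predicts the next pose from the current one."""
    def __init__(self, pose_dim=54, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTMCell(pose_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, seq, ground_truth_len=5, condition_len=5):
        """seq: (batch, time, pose_dim) ground-truth poses.

        Inputs alternate between ground-truth frames and the network's own
        previous predictions, in blocks of ground_truth_len / condition_len
        steps; the loss is later taken over every output.
        """
        batch, time, _ = seq.shape
        h = seq.new_zeros(batch, self.lstm.hidden_size)
        c = seq.new_zeros(batch, self.lstm.hidden_size)
        prev_pred = seq[:, 0]
        preds = []
        period = ground_truth_len + condition_len
        for t in range(time - 1):
            # Feed ground truth for the first part of each period,
            # then feed the model's own output back in (auto-conditioning).
            use_gt = (t % period) < ground_truth_len
            x = seq[:, t] if use_gt else prev_pred
            h, c = self.lstm(x, (h, c))
            prev_pred = self.out(h)
            preds.append(prev_pred)
        return torch.stack(preds, dim=1)  # predictions for frames 1..time-1

# Toy usage: random data standing in for CMU mocap pose vectors.
model = AcLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
seq = torch.randn(8, 100, 54)                    # (batch, frames, joint features)
pred = model(seq)
loss = nn.functional.mse_loss(pred, seq[:, 1:])  # loss on ALL outputs
loss.backward()
optim.step()
```

With `condition_len=0` this reduces to ordinary teacher-forced training; the paper's point is that a nonzero conditioning length exposes the network to its own drift during training, which is what keeps long autoregressive rollouts from freezing or diverging.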
Video
--------
https://www.youtube.com/watch?v=AWlpNeOzMig