Professor Forcing: A New Algorithm for Training Recurrent Networks
Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville, Yoshua Bengio
arXiv e-Print archive, 2016
Keywords: stat.ML, cs.LG
First published: 2016/10/27

Abstract: The Teacher Forcing algorithm trains recurrent networks by supplying observed
sequence values as inputs during training and using the network's own
one-step-ahead predictions to do multi-step sampling. We introduce the
Professor Forcing algorithm, which uses adversarial domain adaptation to
encourage the dynamics of the recurrent network to be the same when training
the network and when sampling from the network over multiple time steps. We
apply Professor Forcing to language modeling, vocal synthesis on raw waveforms,
handwriting generation, and image generation. Empirically we find that
Professor Forcing acts as a regularizer, improving test likelihood on
character-level Penn Treebank and sequential MNIST. We also find that the model
qualitatively improves samples, especially when sampling for a large number of
time steps. This is supported by human evaluation of sample quality. Trade-offs
between Professor Forcing and Scheduled Sampling are discussed. We produce
t-SNE visualizations showing that Professor Forcing successfully makes the
dynamics of the network during training and sampling more similar.
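To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as the abstract describes it: the same generator RNN is run twice, once teacher-forced and once free-running, and a discriminator is trained to tell the two hidden-state trajectories apart while the generator is additionally trained to fool it on the free-running pass. The toy task, GRU cell, network sizes, mean-pooled discriminator, and unit loss weighting are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of Professor Forcing, assuming a toy random-token task.
# All sizes and the discriminator architecture are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HID, T, BATCH = 20, 64, 12, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRUCell(HID, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, x, teacher_forcing=True):
        # Returns per-step logits and the hidden-state trajectory; the
        # trajectory is the "behavior" the discriminator inspects.
        h = x.new_zeros(x.size(0), HID, dtype=torch.float)
        inp = x[:, 0]
        logits, states = [], []
        for t in range(1, x.size(1)):
            h = self.rnn(self.emb(inp), h)
            step = self.out(h)
            logits.append(step)
            states.append(h)
            # Teacher forcing feeds the observed next token; free-running
            # feeds the model's own sample back in.
            if teacher_forcing:
                inp = x[:, t]
            else:
                inp = torch.distributions.Categorical(logits=step).sample()
        return torch.stack(logits, 1), torch.stack(states, 1)

gen = Generator()
# Discriminator scores a hidden-state trajectory (here: per-step MLP,
# mean-pooled over time, an assumed simplification).
disc = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(), nn.Linear(HID, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

def d_score(states):
    return disc(states).mean(dim=(1, 2))

for step in range(200):
    x = torch.randint(VOCAB, (BATCH, T))  # toy data
    logits_tf, h_tf = gen(x, teacher_forcing=True)
    _, h_fr = gen(x, teacher_forcing=False)

    # Discriminator: teacher-forced trajectories are labeled 1,
    # free-running trajectories 0.
    d_loss = (F.binary_cross_entropy_with_logits(d_score(h_tf.detach()),
                                                 torch.ones(BATCH))
              + F.binary_cross_entropy_with_logits(d_score(h_fr.detach()),
                                                   torch.zeros(BATCH)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: the usual NLL under teacher forcing, plus an adversarial
    # term pushing free-running dynamics to look teacher-forced.
    nll = F.cross_entropy(logits_tf.reshape(-1, VOCAB), x[:, 1:].reshape(-1))
    adv = F.binary_cross_entropy_with_logits(d_score(h_fr), torch.ones(BATCH))
    g_loss = nll + adv
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key design point the sketch tries to capture is that the adversarial loss acts on hidden-state dynamics rather than on output tokens, which is what lets the training signal reach the free-running network even though sampling itself is non-differentiable.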