Learning Online Alignments with Continuous Rewards Policy Gradient
Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever
arXiv e-Print archive, 2016
Keywords: cs.LG, cs.CL
First published: 2016/08/03
Abstract: Sequence-to-sequence models with soft attention had significant success in
machine translation, speech recognition, and question answering. Though capable
and easy to use, they require that the entirety of the input sequence is
available at the beginning of inference, an assumption that is not valid for
instantaneous translation and speech recognition. To address this problem, we
present a new method for solving sequence-to-sequence problems using hard
online alignments instead of soft offline alignments. The online alignments
model is able to start producing outputs without the need to first process the
entire input sequence. A highly accurate online sequence-to-sequence model is
useful because it can be used to build an accurate voice-based instantaneous
translator. Our model uses hard binary stochastic decisions to select the
timesteps at which outputs will be produced. The model is trained to produce
these stochastic decisions using a standard policy gradient method. In our
experiments, we show that this model achieves encouraging performance on TIMIT
and Wall Street Journal (WSJ) speech recognition datasets.
TLDR; The authors use policy gradients to train a "hard" attention mechanism in an RNN that decides, at each timestep, whether to emit an output or not. The resulting model is online: it does not need to see the complete input sequence before making predictions, as soft attention does. The authors evaluate their model on the TIMIT and Wall Street Journal (WSJ) speech recognition tasks, where it achieves performance comparable to standard sequence-to-sequence models.
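To make the emit/no-emit idea concrete, here is a minimal PyTorch-style sketch of an online decision loop: the encoder consumes one frame at a time and samples a hard Bernoulli "emit now?" decision from its current state. The names (`OnlineAligner`, `emit_head`, `token_head`) and the single-layer LSTM cell are assumptions for illustration, not the authors' actual architecture (which, per the notes below, uses Grid LSTMs).

```python
import torch
import torch.nn as nn

class OnlineAligner(nn.Module):
    """Processes input frames strictly left to right and decides, at every
    timestep, whether to emit an output token (a hard Bernoulli decision)."""

    def __init__(self, input_dim, hidden_dim, vocab_size):
        super().__init__()
        self.rnn = nn.LSTMCell(input_dim, hidden_dim)
        self.emit_head = nn.Linear(hidden_dim, 1)             # policy: p(emit | state)
        self.token_head = nn.Linear(hidden_dim, vocab_size)   # token logits when emitting

    def forward(self, frames):
        # frames: (T, input_dim) -- a single utterance, seen one frame at a time
        h = frames.new_zeros(1, self.rnn.hidden_size)
        c = frames.new_zeros(1, self.rnn.hidden_size)
        emit_log_probs, emit_probs, token_logits, emitted_at = [], [], [], []
        for t, x in enumerate(frames):
            h, c = self.rnn(x.unsqueeze(0), (h, c))
            p = torch.sigmoid(self.emit_head(h)).squeeze()    # emission probability
            emit = torch.bernoulli(p)                         # hard binary stochastic decision
            log_p = emit * torch.log(p + 1e-8) + (1 - emit) * torch.log(1 - p + 1e-8)
            emit_probs.append(p)
            emit_log_probs.append(log_p)
            if emit.item() == 1:                              # produce an output right now
                token_logits.append(self.token_head(h).squeeze(0))
                emitted_at.append(t)
        return torch.stack(emit_log_probs), torch.stack(emit_probs), token_logits, emitted_at

# Example usage with made-up dimensions (40-dim filterbank frames, 62 output symbols):
model = OnlineAligner(input_dim=40, hidden_dim=256, vocab_size=62)
logps, probs, logits, positions = model(torch.randn(120, 40))
```

Because the emission decisions are sampled rather than differentiable, the emission policy itself has to be trained with a policy gradient method rather than plain backpropagation, which is where the baseline and entropy tricks below come in.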
#### Notes:
- Entropy regularization and a reward baseline were critical for getting the model to learn (see the sketch after this list)
- Neat trick: Increase dropout as training progresses
- Grid LSTMs outperformed standard LSTMs
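The first note corresponds to the two standard stabilizers of a REINFORCE objective: subtracting a baseline to reduce gradient variance, and adding an entropy bonus so the emission policy keeps exploring. Below is a hedged sketch of that loss, assuming a single sequence-level reward (e.g. negative edit distance) and a scalar baseline; the paper's actual training signal assigns reward per decision, so this is only the simplest form of the idea.

```python
import torch

def policy_gradient_loss(emit_log_probs, emit_probs, reward, baseline, entropy_weight=0.01):
    """REINFORCE-style loss with a baseline and an entropy bonus (illustrative).

    emit_log_probs: (T,) log-probs of the sampled emit/no-emit decisions
    emit_probs:     (T,) Bernoulli emission probabilities p_t
    reward:         scalar reward for the decoded sequence (e.g. negative edit distance)
    baseline:       scalar estimate of the expected reward (e.g. a running average)
    """
    advantage = reward - baseline                        # baseline reduces gradient variance
    reinforce = -(advantage * emit_log_probs).sum()      # score-function (REINFORCE) estimator
    entropy = -(emit_probs * torch.log(emit_probs + 1e-8)
                + (1 - emit_probs) * torch.log(1 - emit_probs + 1e-8)).sum()
    return reinforce - entropy_weight * entropy          # entropy bonus discourages premature collapse
```

Without the baseline the advantage term is just the raw reward, so every sampled decision gets pushed in the same direction regardless of whether it helped; without the entropy term the emission probabilities tend to saturate early at 0 or 1, which matches the note that both were critical for learning.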