Globally Normalized Transition-Based Neural Networks
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins
arXiv e-Print archive, 2016
Keywords: cs.CL, cs.LG, cs.NE
First published: 2016/03/19

Abstract: We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.
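To make the local/global distinction concrete, here is a minimal toy sketch (my own illustration, not code from the paper) contrasting the two schemes. A locally normalized model applies a softmax over actions at every step and multiplies the per-step probabilities; a globally normalized model sums raw transition scores along the whole sequence and divides by a single partition function over all complete sequences. All names, the score table, and the start-action convention below are assumptions for illustration.

```python
import itertools
import numpy as np

# Toy setup (hypothetical, not from the paper): 2 actions, 3 steps, and
# transition scores s(a_t | a_{t-1}) that depend on the previous action.
# History dependence is what makes the two normalization schemes differ.
rng = np.random.default_rng(0)
A, T = 2, 3
scores = rng.normal(size=(T, A, A))  # scores[t, prev_action, action]
START = 0  # assume a fixed start action for simplicity

def seq_score(seq):
    """Sum of transition scores along one complete action sequence."""
    total, prev = 0.0, START
    for t, a in enumerate(seq):
        total += scores[t, prev, a]
        prev = a
    return total

def local_prob(seq):
    """Locally normalized: softmax over actions at each step, then multiply.
    Each step's probabilities sum to 1 no matter how bad that step's scores
    are, which is the root of the label bias problem."""
    p, prev = 1.0, START
    for t, a in enumerate(seq):
        step = np.exp(scores[t, prev]) / np.exp(scores[t, prev]).sum()
        p *= step[a]
        prev = a
    return p

def global_prob(seq):
    """Globally normalized (CRF-style): one partition function over all
    complete sequences, so entire paths compete with each other directly."""
    z = sum(np.exp(seq_score(s))
            for s in itertools.product(range(A), repeat=T))
    return np.exp(seq_score(seq)) / z

for seq in itertools.product(range(A), repeat=T):
    print(seq, f"local={local_prob(seq):.4f}", f"global={global_prob(seq):.4f}")
```

The two distributions coincide only when the per-step normalizers are constant across histories; in general they differ, and the global model can assign low total mass to a path through a "bad" state, whereas the local model is forced to renormalize that state's outgoing actions to sum to 1. This is the expressivity gap the abstract refers to.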