Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee and Luke Zettlemoyer
arXiv e-Print archive, 2018
Keywords: cs.CL
First published: 2018/02/15
Abstract: We introduce a new type of deep contextualized word representation that
models both (1) complex characteristics of word use (e.g., syntax and
semantics), and (2) how these uses vary across linguistic contexts (i.e., to
model polysemy). Our word vectors are learned functions of the internal states
of a deep bidirectional language model (biLM), which is pre-trained on a large
text corpus. We show that these representations can be easily added to existing
models and significantly improve the state of the art across six challenging
NLP problems, including question answering, textual entailment and sentiment
analysis. We also present an analysis showing that exposing the deep internals
of the pre-trained network is crucial, allowing downstream models to mix
different types of semi-supervision signals.
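
The core mechanism behind "exposing the deep internals" is a task-specific scalar mix of the biLM's layer activations: the paper combines the hidden states h_{k,j} of each layer j with softmax-normalized weights s_j and a scaling factor gamma, giving ELMo_k = gamma * sum_j s_j * h_{k,j}. Below is a minimal PyTorch sketch of that scalar mix, not the authors' implementation; the layer count, tensor shapes, and module name are chosen purely for illustration.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted combination of biLM layer activations,
    in the spirit of ELMo_k = gamma * sum_j s_j * h_{k,j}."""

    def __init__(self, num_layers: int):
        super().__init__()
        # one learnable scalar per biLM layer (including the token embedding layer)
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        # global scaling factor gamma
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_activations: torch.Tensor) -> torch.Tensor:
        # layer_activations: (num_layers, batch, seq_len, hidden_dim)
        s = torch.softmax(self.layer_weights, dim=0)            # softmax-normalized weights s_j
        mixed = (s.view(-1, 1, 1, 1) * layer_activations).sum(dim=0)
        return self.gamma * mixed                               # scaled, mixed representation

# Hypothetical usage: a 2-layer biLM plus its embedding layer (3 layers total).
mix = ScalarMix(num_layers=3)
fake_bilm_states = torch.randn(3, 8, 20, 1024)   # stand-in for pre-trained biLM outputs
elmo_vectors = mix(fake_bilm_states)             # (8, 20, 1024), concatenated into a downstream model
```

Because the mix is learned per downstream task, each task can weight syntax-heavy lower layers and semantics-heavy upper layers differently, which is what the paper's analysis credits for the gains over using only the top biLM layer.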