Learned in Translation: Contextualized Word Vectors
Bryan McCann, James Bradbury, Caiming Xiong, Richard Socher
arXiv e-Print archive, 2017
Keywords:
cs.CL, cs.AI, cs.LG
First published: 2017/08/01
Abstract: Computer vision has benefited from initializing multiple deep layers with
weights pretrained on large supervised training sets like ImageNet. Natural
language processing (NLP) typically sees initialization of only the lowest
layer of deep models with pretrained word vectors. In this paper, we use a deep
LSTM encoder from an attentional sequence-to-sequence model trained for machine
translation (MT) to contextualize word vectors. We show that adding these
context vectors (CoVe) improves performance over using only unsupervised word
and character vectors on a wide variety of common NLP tasks: sentiment analysis
(SST, IMDb), question classification (TREC), entailment (SNLI), and question
answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe
improves performance of our baseline models to the state of the art.
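To make the idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of how CoVe-style features are produced: pretrained word vectors are passed through a bidirectional LSTM encoder taken from a machine translation model, and the encoder outputs are concatenated with the original word vectors before being fed to a task-specific model. The class name `CoVeEncoder`, the layer sizes, and the random stand-in for GloVe lookups are assumptions for illustration; the paper describes a 2-layer BiLSTM over 300-d GloVe trained as the encoder of an attentional sequence-to-sequence MT system.

```python
# Minimal sketch: CoVe-style contextualized word vectors.
# Assumes a 2-layer bidirectional LSTM encoder whose weights would come from
# a pretrained attentional seq2seq MT model (weight loading elided).
import torch
import torch.nn as nn

class CoVeEncoder(nn.Module):
    def __init__(self, glove_dim=300, hidden_dim=300):
        super().__init__()
        # In the paper, this encoder is reused from an English-to-German
        # machine translation model; here it is freshly initialized.
        self.encoder = nn.LSTM(glove_dim, hidden_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        # self.encoder.load_state_dict(...)  # pretrained MT weights would go here

    def forward(self, glove_vectors):
        # glove_vectors: (batch, seq_len, glove_dim)
        cove, _ = self.encoder(glove_vectors)        # (batch, seq_len, 2 * hidden_dim)
        # Downstream task models consume the concatenation [GloVe; CoVe].
        return torch.cat([glove_vectors, cove], dim=-1)

# Usage: look up GloVe vectors for a batch of sentences, then contextualize.
encoder = CoVeEncoder()
glove = torch.randn(4, 12, 300)   # stand-in for looked-up GloVe embeddings
features = encoder(glove)         # shape: (4, 12, 900)
```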