Enriching Word Vectors with Subword Information
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov
arXiv e-Print archive, 2016
Keywords:
cs.CL, cs.LG
First published: 2016/07/15
Abstract: Continuous word representations, trained on large unlabeled corpora, are
useful for many natural language processing tasks. Popular models that learn
such representations ignore the morphology of words by assigning a distinct
vector to each word. This is a limitation, especially for languages with large
vocabularies and many rare words. In this paper, we propose a new approach
based on the skipgram model, where each word is represented as a bag of
character $n$-grams. A vector representation is associated with each character
$n$-gram; words are represented as the sum of these representations. Our
method is fast, allowing models to be trained on large corpora quickly, and allows us
to compute word representations for words that did not appear in the training
data. We evaluate our word representations on nine different languages, on both
word similarity and analogy tasks. By comparing with recently proposed
morphological word representations, we show that our vectors achieve
state-of-the-art performance on these tasks.
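The core idea described in the abstract, building a word vector by summing the vectors of its character $n$-grams, can be sketched roughly as below. This is a minimal illustration rather than the paper's implementation: `ngram_vectors` is a hypothetical lookup table of pretrained $n$-gram vectors, while the 3-to-6 $n$-gram range, the `<` and `>` boundary markers, and the inclusion of the whole word as an extra feature follow the setup described in the paper.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams from a word wrapped in boundary markers."""
    wrapped = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.add(wrapped[i:i + n])
    # The wrapped word itself is also kept as a special feature.
    grams.add(wrapped)
    return grams

def word_vector(word, ngram_vectors, dim=300):
    """Sum the vectors of all n-grams found in the (hypothetical) lookup table.

    This also yields a representation for words unseen during training,
    provided some of their n-grams were observed."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vectors:
            vec += ngram_vectors[g]
    return vec
```

In this sketch, an out-of-vocabulary word such as a rare inflected form still receives a non-zero vector because it shares character $n$-grams (stems, prefixes, suffixes) with words seen in training, which is the property the abstract highlights for morphologically rich languages.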