A Comparison of Word Embeddings for the Biomedical Natural Language Processing
Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul Kingsbury, and Hongfang Liu
arXiv e-Print archive, 2018
Keywords: cs.IR
First published: 2018/02/01
Abstract: Neural word embeddings have been widely used in biomedical Natural Language
Processing (NLP) applications as they provide vector representations of words
capturing the semantic properties of words and the linguistic relationship
between words. Many biomedical applications use different textual resources
(e.g., Wikipedia and biomedical articles) to train word embeddings and apply
these word embeddings to downstream biomedical applications. However, there has
been little work on evaluating the word embeddings trained from these
resources. In this study, we provide an empirical evaluation of word embeddings
trained from four different resources, namely clinical notes, biomedical
publications, Wikipedia, and news. We performed the evaluation qualitatively
and quantitatively. In qualitative evaluation, we manually inspected the five most
similar medical words to a given set of target medical words, and then analyzed
the word embeddings through visualization. In
quantitative evaluation, we conducted both intrinsic and extrinsic evaluation.
Based on the evaluation results, we can draw the following conclusions. First,
the word embeddings trained on EHR and PubMed capture the semantics of medical
terms better, find more relevant similar medical terms, and are closer to human
experts' judgments than those trained on GloVe and Google News. Second, there does not
exist a consistent global ranking of word embedding quality for downstream
biomedical NLP applications. However, adding word embeddings as extra features
will improve results on most downstream tasks. Finally, the word embeddings
trained on biomedical domain corpora do not necessarily perform better than
those trained on general domain corpora for every downstream biomedical NLP task.
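The paper does not name a particular toolkit for the nearest-neighbor inspection; as a minimal sketch, assuming pre-trained vectors saved in word2vec text format (the file name and target terms below are illustrative, not the paper's), gensim can list the five most similar words for each target medical term like this:

    # Minimal sketch of the qualitative evaluation step: print the five most
    # similar words to each target medical term under a given embedding.
    # Assumes gensim and a word2vec-format vector file (file name is hypothetical).
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format("ehr_embeddings.txt", binary=False)

    for target in ["diabetes", "aspirin", "pneumonia"]:  # illustrative targets
        if target in vectors.key_to_index:
            neighbors = vectors.most_similar(target, topn=5)
            print(target, "->", [(word, round(score, 3)) for word, score in neighbors])
        else:
            print(target, "-> not in vocabulary")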
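The observation that adding word embeddings as extra features helps most downstream tasks can be illustrated with a small, hypothetical sketch (not the paper's exact setup): average the word vectors of the input tokens and concatenate the result with whatever hand-crafted features the task already uses. The embedding dimensionality (200 here) is an assumption and must match the trained vectors.

    # Hypothetical sketch: augmenting hand-crafted features with an averaged
    # word-embedding representation of the input tokens (illustrative only).
    import numpy as np

    def embedding_features(tokens, vectors, dim=200):
        # Average the embeddings of in-vocabulary tokens; zeros if none are found.
        found = [vectors[t] for t in tokens if t in vectors.key_to_index]
        return np.mean(found, axis=0) if found else np.zeros(dim)

    def augmented_features(handcrafted, tokens, vectors, dim=200):
        # Concatenate the existing task features with the embedding-based features.
        return np.concatenate([np.asarray(handcrafted),
                               embedding_features(tokens, vectors, dim)])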