Lexical Features in Coreference Resolution: To be Used With Caution
Nafise Sadat Moosavi
and
Michael Strube
arXiv e-Print archive - 2017 via Local arXiv
Keywords:
cs.CL
First published: 2017/04/22
Abstract: Lexical features are a major source of information in state-of-the-art coreference resolvers. Lexical features implicitly model some of the linguistic phenomena at a fine granularity level. They are especially useful for representing the context of mentions. In this paper we investigate a drawback of using many lexical features in state-of-the-art coreference resolvers. We show that if coreference resolvers mainly rely on lexical features, they can hardly generalize to unseen domains. Furthermore, we show that the current coreference resolution evaluation is clearly flawed by only evaluating on a specific split of a specific dataset in which there is a notable overlap between the training, development and test sets.
Kind of a response to, and deeper dive into, the Durrett/Klein "easy victories" paper. Suggests that the lexical features it relied on (the "easy victories") are very prone to overfitting. The authors first show that several state-of-the-art systems that use lexical features, trained on CoNLL data, perform poorly on WikiCoref, which was annotated using the same guidelines. Meanwhile, the rule-based Stanford sieve system performs about the same on both.
Then they show that a high percentage of the gold-standard linked head-word pairs in the test set were already seen in the training set, while among the system's erroneous links a much lower percentage of head-word pairs were seen in training, implying that lexical features mostly let the model memorize which specific head words can be linked (a rough sketch of this overlap analysis is below).
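The overlap analysis amounts to counting how many test-set head-word pairs also occur as linked pairs in training. A minimal sketch, assuming you have already extracted (antecedent head, anaphor head) pairs from the gold chains; the load_head_pairs() helper and the file names are hypothetical, not from the paper:

```python
# Sketch of the head-word-pair overlap analysis described above.
# Assumes pairs were pre-extracted to TSV files, one
# "antecedent_head<TAB>anaphor_head" pair per line (hypothetical format).

def load_head_pairs(path):
    """Hypothetical loader for pre-extracted head-word pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            ante, ana = line.rstrip("\n").split("\t")
            pairs.append((ante.lower(), ana.lower()))
    return pairs

def seen_ratio(test_pairs, train_pairs):
    """Fraction of test head-word pairs already observed in training."""
    if not test_pairs:
        return 0.0
    train_set = set(train_pairs)
    seen = sum(1 for p in test_pairs if p in train_set)
    return seen / len(test_pairs)

if __name__ == "__main__":
    train = load_head_pairs("conll_train_head_pairs.tsv")  # hypothetical file
    correct = load_head_pairs("test_correct_links.tsv")    # links the system got right
    errors = load_head_pairs("test_error_links.tsv")       # links the system got wrong
    print("seen in train (correct links):", seen_ratio(correct, train))
    print("seen in train (error links):  ", seen_ratio(errors, train))
```

If the "seen" ratio is much higher for correct links than for erroneous ones, that is the memorization effect the paper points to.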
They suggest developing more robust features, including using word embeddings rather than exact surface forms as lexical features and using lexical representations only to encode context, and, on the evaluation side, using test sets drawn from different domains than the training set (a small embedding-substitution sketch follows).
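As a rough illustration of the embedding suggestion, one could replace an exact-string head-word feature with a pretrained vector. This is only a sketch of that idea, not the paper's implementation; the gensim loading call is real, but the embedding file name is an assumption:

```python
# Sketch: dense embedding of the mention head instead of a one-hot lexical feature.
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical pretrained vectors in word2vec text format (e.g. converted GloVe).
vectors = KeyedVectors.load_word2vec_format("embeddings.txt", binary=False)

def head_word_feature(head_word):
    """Return the embedding of the mention head, or a zero vector if OOV."""
    word = head_word.lower()
    if word in vectors:
        return vectors[word]
    return np.zeros(vectors.vector_size)  # out-of-vocabulary fallback
```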