What do Neural Machine Translation Models Learn about Morphology?
Belinkov, Yonatan and Durrani, Nadir and Dalvi, Fahim and Sajjad, Hassan and Glass, James R.
Association for Computational Linguistics - 2017 via Local Bibsonomy
Keywords: dblp
This paper attempts to open up the black box of neural machine translation models and inspect what their representations capture, specifically with respect to morphology. The technique is to train word-based and character-based seq2seq models on multiple source-target language pairs of varying morphological complexity, then set the target side aside and focus on what the encoder has learned about the source language. Once the encoder is trained, it is used to produce feature representations for external classification tasks that directly evaluate morphology and part-of-speech information. (Contrast this with methods that, for example, inspect activation patterns of individual neurons in a trained model.)
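Roughly, the probing setup looks like the sketch below. This is not the authors' code; the PyTorch encoder, dimensions, and tag-set size are made up for illustration. The idea is simply to freeze the trained encoder, run source sentences through it, and train only a small classifier on the resulting hidden states.

```python
import torch
import torch.nn as nn

# Stand-in for the trained NMT encoder (sizes here are hypothetical).
encoder = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
for p in encoder.parameters():
    p.requires_grad = False  # the encoder is frozen; only the probe is trained

num_tags = 17                          # e.g. a POS tag set
classifier = nn.Linear(128, num_tags)  # simple probe over encoder states
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 8 source sentences of 20 tokens, already embedded, with gold tags.
embeddings = torch.randn(8, 20, 64)
gold_tags = torch.randint(0, num_tags, (8, 20))

states, _ = encoder(embeddings)        # (8, 20, 128) per-token representations
logits = classifier(states)            # (8, 20, num_tags)
loss = loss_fn(logits.reshape(-1, num_tags), gold_tags.reshape(-1))
loss.backward()
optimizer.step()
```

The probe's tagging accuracy is then read as a measure of how much morphological and POS information the frozen representations encode.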
The first experiment shows that representations learned from character-based models are superior for POS tagging in the source language, and the gap is larger for morphologically rich languages like Arabic. The same result holds for morphological tagging. The gap is especially large for infrequent words, presumably because the system can simply memorize morphological information for frequent words. They also show that the accuracy gains come from getting previously unseen words right (for both POS and morphology prediction), and that the biggest improvement is in predicting plural and determined noun categories. Next, they show that in a deeper network the middle layer (of three) provides the best representations for predicting POS and morphological information; the authors suggest the higher layers focus more on semantics or other higher-level abstractions.
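The layer-wise comparison can be pictured with a similar sketch, again hypothetical rather than the paper's implementation: stack the encoder layers so each one's per-token states are accessible, then train a separate probe per layer and compare their tagging accuracy.

```python
import torch
import torch.nn as nn

# Three single-layer LSTMs stacked by hand so that each layer's per-token
# states can be probed separately (nn.LSTM with num_layers=3 only returns the
# top layer's outputs). Dimensions are illustrative.
layers = nn.ModuleList([
    nn.LSTM(input_size=64 if i == 0 else 128, hidden_size=128, batch_first=True)
    for i in range(3)
])
for p in layers.parameters():
    p.requires_grad = False  # encoder stays frozen

embeddings = torch.randn(8, 20, 64)    # dummy embedded source batch
per_layer_states = []
x = embeddings
for lstm in layers:
    x, _ = lstm(x)
    per_layer_states.append(x)         # each entry: (8, 20, 128)

# One independent probe per layer; comparing their accuracy is the layer-wise
# analysis, where the paper finds the middle layer best for POS/morphology.
probes = nn.ModuleList([nn.Linear(128, 17) for _ in per_layer_states])
```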
Overall, this work empirically confirms some conventional wisdom: character-level representations handle unseen words better because they can capture morphology.