Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, Jason Weston
arXiv e-Print archive, 2015
Keywords:
cs.CL, cs.LG
First published: 2015/11/21

Abstract: A long-term goal of machine learning is to build intelligent conversational
agents. One recent popular approach is to train end-to-end models on a large
amount of real dialog transcripts between humans (Sordoni et al., 2015; Vinyals
& Le, 2015; Shang et al., 2015). However, this approach leaves many questions
unanswered, as the precise successes and shortcomings of each model are hard
to assess. A contrasting recent proposal is the bAbI tasks
(Weston et al., 2015b), synthetic datasets that measure the ability of
learning machines at various reasoning tasks over toy language. Unfortunately,
those tests are very small and hence may encourage methods that do not scale.
In this work, we propose a suite of new tasks of a much larger scale that
attempt to bridge the gap between the two regimes. Choosing the domain of
movies, we provide tasks that test the ability of models to answer factual
questions (utilizing OMDB), provide personalization (utilizing MovieLens),
carry out short conversations about the two, and finally perform on natural
dialogs from Reddit. We provide a dataset covering 75k movie entities with
3.5M training examples. We present results of various models on these tasks
and evaluate their performance.