Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov
arXiv e-Print archive, 2015
Keywords: cs.AI, cs.CL, stat.ML
First published: 2015/02/19

Abstract: One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction, and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems cannot currently solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.