Natural Language Comprehension with the EpiReader
arXiv e-Print archive - 2016
First published: 2016/06/07. Abstract: We present the EpiReader, a novel model for machine comprehension of text.
Machine comprehension of unstructured, real-world text is a major research goal
for natural language processing. Current tests of machine comprehension pose
questions whose answers can be inferred from some supporting text, and evaluate
a model's response to the questions. The EpiReader is an end-to-end neural
model comprising two components: the first component proposes a small set of
candidate answers after comparing a question to its supporting text, and the
second component formulates hypotheses using the proposed candidates and the
question, then reranks the hypotheses based on their estimated concordance with
the supporting text. We present experiments demonstrating that the EpiReader
sets a new state-of-the-art on the CNN and Children's Book Test machine
comprehension benchmarks, outperforming previous neural models by a significant margin.
TLDR; The authors propose the "EpiReader" model for Question Answering / Machine Comprehension. The model consists of two modules: an Extractor that selects answer candidates (single words) using a Pointer Network, and a Reasoner that ranks these candidates by estimating textual entailment. The model is trained end-to-end and works on cloze-style questions. The authors evaluate the model on the CBT and CNN datasets, where they beat the Attention Sum Reader and MemNN architectures.
- In most architectures, the correct answer is among the top 5 candidates ~95% of the time.
- Soft attention is a problem in many architectures; a way to do hard attention is needed.
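The two-stage Extractor/Reasoner pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the encoder that produces per-token pointer scores and the convolutional entailment network are replaced by hypothetical inputs (`word_scores` and `entailment_score`), and the combination weight `alpha` is an assumed hyperparameter.

```python
import math

def extract_candidates(word_scores, passage_words, k=5):
    """Hypothetical stand-in for the Extractor: turn per-token pointer
    scores (assumed to come from an encoder, not shown here) into the
    top-k candidate answer words."""
    # Numerically stable softmax over passage positions.
    m = max(word_scores)
    exps = [math.exp(s - m) for s in word_scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Attention-sum step: pool pointer probability over repeated word types.
    totals = {}
    for w, p in zip(passage_words, probs):
        totals[w] = totals.get(w, 0.0) + p
    return sorted(totals.items(), key=lambda kv: -kv[1])[:k]

def rerank(candidates, entailment_score, alpha=0.5):
    """Hypothetical stand-in for the Reasoner: form a hypothesis from each
    candidate and combine the Extractor probability with an entailment
    estimate (entailment_score(word) -> [0, 1])."""
    scored = [(w, alpha * p + (1 - alpha) * entailment_score(w))
              for w, p in candidates]
    return max(scored, key=lambda t: t[1])[0]

# Toy usage: a six-token passage, pointer scores favoring "cat",
# and a dummy entailment model that only accepts "cat".
words = ["the", "cat", "sat", "on", "the", "mat"]
scores = [1.0, 2.0, 0.0, 0.0, 1.0, 0.5]
candidates = extract_candidates(scores, words, k=2)
answer = rerank(candidates, lambda w: 1.0 if w == "cat" else 0.0)
```

Note how the attention-sum pooling gives the repeated word "the" the combined mass of both its positions, which is exactly the behavior the top-5 observation above depends on: candidate quality comes from pooled pointer probability, and the Reasoner only has to rerank a short list.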