ServiceNow Research

Coarse-to-Fine Question Answering for Long Documents


We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim a document, identify relevant parts, and carefully read those parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences with a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer alone using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a new dataset, while speeding up the model by 3.5x-6.7x.
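The core idea in the abstract — a cheap coarse model selects a sentence, an expensive fine model answers from it, and the selector is trained with reinforcement learning because the sentence choice is a latent variable supervised only by the final answer — can be sketched as follows. This is a hypothetical minimal illustration, not the authors' code: the scorer is a bare softmax over per-sentence scores, and `fine_model_reward` is a stand-in for the RNN answer module.

```python
# Hedged sketch of coarse-to-fine QA training with REINFORCE.
# Assumptions (not from the paper): identity sentence features, a
# stubbed fine model that returns reward 1 iff the selected sentence
# supports the correct answer, and a running-average reward baseline.
import numpy as np

rng = np.random.default_rng(0)

n_sentences = 3
correct = 2                      # index of the answer-bearing sentence
w = np.zeros(n_sentences)        # coarse scorer: one score per sentence
baseline, lr = 0.0, 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fine_model_reward(i):
    # Stand-in for the expensive RNN answer module: reward is 1 only
    # when the sampled sentence actually yields the correct answer.
    return 1.0 if i == correct else 0.0

for step in range(500):
    p = softmax(w)
    i = rng.choice(n_sentences, p=p)     # sample the latent sentence
    r = fine_model_reward(i)
    # REINFORCE: grad of log p_i w.r.t. scores is onehot(i) - p;
    # subtracting a baseline from the reward reduces variance.
    grad = -p
    grad[i] += 1.0
    w += lr * (r - baseline) * grad
    baseline = 0.9 * baseline + 0.1 * r  # running-average baseline

p = softmax(w)
print("selected sentence:", p.argmax())
```

After training, the coarse scorer concentrates probability on the answer-bearing sentence, so at inference time the RNN only needs to read the few sentences the scorer keeps — which is where the reported speedup comes from.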

Annual Meeting of the Association for Computational Linguistics (ACL)
Alexandre Lacoste
Research Scientist at Human Decision Support, Montreal, QC, Canada.