ServiceNow AI Research
Tags
Natural Language Processing
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive …
Sandeep Subramanian
,
Raymond Li
,
Jonathan Pilault
,
Christopher Pal
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
PDF
Cite
On the impressive performance of randomly weighted encoders in summarization tasks
In this work, we investigate the performance of untrained randomly initialized encoders in a general class of sequence to sequence …
Jonathan Pilault
,
Jaehong Park
,
Christopher Pal
Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
PDF
Cite
Investigating Trust Factors in Human-Robot Shared Control: Implicit Gender Bias Around Robot Voice
This paper explores the impact of warnings, audio feedback, and gender on human-robot trust in the context of autonomous driving and …
Alexander Wong
,
Anqi Xu
,
Gregory Dudek
Conference on Computer and Robotic Vision (CRV), 2019.
PDF
Cite
BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and …
Maxime Chevalier-Boisvert
,
Dzmitry Bahdanau
,
Salem Lahlou
,
Lucas Willems
,
Chitwan Saharia
,
Thien Huu Nguyen
,
Yoshua Bengio
International Conference on Learning Representations (ICLR), 2019.
PDF
Cite
Code
Towards Deep Conversational Recommendations
There has been growing interest in using neural networks and deep learning techniques to create dialogue systems. Conversational …
Raymond Li
,
Samira Ebrahimi Kahou
,
Hannes Schulz
,
Vincent Michalski
,
Laurent Charlin
,
Christopher Pal
Conference on Neural Information Processing Systems (NeurIPS), 2018.
PDF
Cite
Code
Towards Text Generation with Adversarially Learned Neural Outlines
Recent progress in deep generative models has been fueled by two paradigms: autoregressive and adversarial models. We propose a …
Sandeep Subramanian
,
Sai Rajeswar Mudumba
,
Alessandro Sordoni
,
Adam Trischler
,
Aaron Courville
,
Christopher Pal
Conference on Neural Information Processing Systems (NeurIPS), 2018.
PDF
Cite
Adversarially-Trained Normalized Noisy-Feature Auto-Encoder for Text Generation
This article proposes Adversarially-Trained Normalized Noisy-Feature Auto-Encoder (ATNNFAE) for byte-level text generation. An ATNNFAE …
Xiang Zhang
,
Yann LeCun
arXiv, 2018.
PDF
Cite
LTL and Beyond: Formal Languages for Reward Function Specification in Reinforcement Learning
In Reinforcement Learning (RL), an agent is guided by the rewards it receives from the reward function. Unfortunately, it may take many …
Alberto Camacho
,
Rodrigo Toro Icarte
,
Toryn Q. Klassen
,
Richard Valenzano
,
Sheila A. McIlraith
International Joint Conference on Artificial Intelligence (IJCAI), 2018.
PDF
Cite
Code
Strong Baselines for Simple Question Answering over Knowledge Graphs with and without Neural Networks
We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of …
Salman Mohammed
,
Peng Shi
,
Jimmy Lin
North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
PDF
Cite
Accurate Supervised and Semi-Supervised Machine Reading for Long Documents
We introduce a hierarchical architecture for machine reading capable of extracting precise information from long documents. The model …
Daniel Hewlett
,
Llion Jones
,
Alexandre Lacoste
,
Izzeddin Gur
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
PDF
Cite