ServiceNow Research
Transformers
Does entity abstraction help generative Transformers reason?
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks …
Nicolas Gontier, Siva Reddy, Christopher Pal
Transactions on Machine Learning Research, 2022.
PDF · Cite
TACTiS: Transformer-Attentional Copulas for Time Series
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance. …
Alexandre Drouin, Étienne Marcotte, Nicolas Chapados
International Conference on Machine Learning (ICML), 2022.
PDF · Cite · Code · Model Card
Latent Variable Sequential Set Transformers for Joint Multi-Agent Motion Prediction
Robust multi-agent trajectory prediction is essential for the safe control of robotic systems. A major challenge is to efficiently …
Roger Girgis, Florian Golemo, Felipe Codevilla, Martin Weiss, Jim Aldon D'Souza, Samira Ebrahimi Kahou, Felix Heide, Christopher Pal
International Conference on Learning Representations (ICLR), 2022.
PDF · Cite · Code
Systematic Generalization with Edge Transformers
Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art …
Leon Bergen, Timothy J. O'Donnell, Dzmitry Bahdanau
Conference on Neural Information Processing Systems (NeurIPS), 2021.
PDF · Cite
Measuring Systematic Generalization in Neural Proof Generation with Transformers
We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge …
Nicolas Gontier, Koustuv Sinha, Siva Reddy, Christopher Pal
Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Cite · Code
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
We present a method for producing abstractive summaries of long documents that exceed several thousand words via neural abstractive …
Sandeep Subramanian, Raymond Li, Jonathan Pilault, Christopher Pal
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
PDF · Cite