ServiceNow Research

Transformers

Does entity abstraction help generative Transformers reason?
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks …
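To make the idea of entity type abstraction concrete, here is a minimal sketch of one way entity mentions can be abstracted to their types before feeding text to a model. It assumes spaCy's pretrained English NER pipeline ("en_core_web_sm") and a simple bracket-tag scheme; both are illustrative choices, not the mechanism used in the paper.

```python
# Minimal sketch: replace named-entity mentions with their type labels.
# Assumes spaCy's pretrained "en_core_web_sm" model (illustrative choice);
# the paper's actual abstraction mechanism may differ.
import spacy

nlp = spacy.load("en_core_web_sm")

def abstract_entities(text: str) -> str:
    """Replace each entity mention with a bracketed type tag."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        out.append(text[last:ent.start_char])
        out.append(f"[{ent.label_}]")
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(abstract_entities("Marie Curie was born in Warsaw in 1867."))
# typically -> "[PERSON] was born in [GPE] in [DATE]."
```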
TACTiS: Transformer-Attentional Copulas for Time Series
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance. …
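As background on the copula ingredient in the title, the sketch below shows a classical Gaussian copula: the dependence structure comes from correlated normals, while the marginal distributions are imposed separately through inverse CDFs. This illustrates the copula idea only; TACTiS itself learns the copula with attention rather than fixing it in closed form.

```python
# Minimal Gaussian-copula sketch: one dependence structure, arbitrary
# marginals. Background illustration only, not the paper's method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Dependence structure: correlated standard normals.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)

# The copula step: map to uniforms via the standard normal CDF.
u = stats.norm.cdf(z)

# Impose arbitrary marginals via inverse CDFs: exponential and Student-t.
x1 = stats.expon.ppf(u[:, 0], scale=2.0)
x2 = stats.t.ppf(u[:, 1], df=4)

# The rank correlation reflects the copula, not the marginals.
rho, _ = stats.spearmanr(x1, x2)
print(rho)
```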
Latent Variable Sequential Set Transformers for Joint Multi-Agent Motion Prediction
Robust multi-agent trajectory prediction is essential for the safe control of robotic systems. A major challenge is to efficiently …
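The "set" ingredient in the title can be illustrated with a small sketch: attention over a set of agent feature vectors, with no positional encoding, is permutation-equivariant, so predictions do not depend on an arbitrary agent ordering. The PyTorch module below shows only that property; the paper's model adds latent variables and sequential decoding on top.

```python
# Minimal sketch of permutation-equivariant attention over a set of
# agents (PyTorch). Illustrates the "set" aspect only.
import torch
import torch.nn as nn

class AgentSetAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, agents: torch.Tensor) -> torch.Tensor:
        # agents: (batch, num_agents, dim); each agent attends to all others.
        mixed, _ = self.attn(agents, agents, agents)
        return self.norm(agents + mixed)

layer = AgentSetAttention()
x = torch.randn(2, 5, 64)   # 2 scenes, 5 agents each
perm = torch.randperm(5)
# Reordering the agents reorders the output identically.
assert torch.allclose(layer(x)[:, perm], layer(x[:, perm]), atol=1e-5)
```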
Systematic Generalization with Edge Transformers
Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art …
Measuring Systematic Generalization in Neural Proof Generation with Transformers
We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge …
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive …
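For a sense of the practical problem, the sketch below summarizes a document longer than a model's context window by chunking, summarizing each chunk, then condensing the partial summaries. This is a generic workaround using the Hugging Face transformers pipeline with an arbitrary public checkpoint, not the paper's extractive-then-abstractive method.

```python
# Minimal chunk-then-summarize sketch for long documents.
# Generic workaround only; the model name is an illustrative choice.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_long(text: str, chunk_words: int = 400) -> str:
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partial = [summarizer(c, max_length=80, min_length=20)[0]["summary_text"]
               for c in chunks]
    # A second pass condenses the per-chunk summaries into one.
    return summarizer(" ".join(partial), max_length=120,
                      min_length=40)[0]["summary_text"]
```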