Bridging the Gap Between Target Networks and Functional Regularization
Target networks are at the core of recent success in Reinforcement Learning. They stabilize the training by using old parameters to …
Using Confounded Data in Latent Model-Based Reinforcement Learning
In the presence of confounding, naively using off-the-shelf offline reinforcement learning (RL) algorithms leads to sub-optimal …
Knowledge Hypergraph Embedding Meets Relational Algebra
Embedding-based methods for reasoning in knowledge hypergraphs learn a representation for each entity and relation. Current methods do …
Towards Learning to Imitate from a Single Video Demonstration
Agents that can learn to imitate given video observation – without direct access to state or action information – are more …
Workflow discovery in low data regimes
Text-based dialogues are now widely used to solve real-world problems. In cases where solution strategies are already known, they can …
Advancing ethics review practices in AI research
The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI …
Does entity abstraction help generative Transformers reason?
We study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on four NLP tasks …
The Stack: 3 TB of permissively licensed source code
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI) – not only for natural …
A Closer Look at Embedding Propagation for Manifold Smoothing
Supervised training of neural networks requires a large amount of manually annotated data and the resulting networks tend to be …