ServiceNow Research

Implicit Offline Reinforcement Learning via Supervised Learning

Abstract

Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset of varied behaviors. It is as simple as supervised learning and Behavior Cloning (BC), but additionally takes advantage of return information. On BC tasks, implicit models have been shown to match or outperform explicit ones. Despite the benefits of using implicit models to learn robotic skills via BC, Offline RL via Supervised Learning algorithms have been limited to explicit models. We show how implicit models can leverage return information and match or outperform explicit algorithms at acquiring robotic skills from fixed datasets. Furthermore, we show how closely related our implicit methods are to other popular RL via Supervised Learning algorithms.
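The combination the abstract describes, an implicit model (a policy defined by minimizing an energy over candidate actions) trained with a return-weighted supervised loss, can be illustrated with a toy sketch. This is not the paper's implementation: the quadratic energy, the exponential return weighting, and the synthetic dataset are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline dataset of varied behaviors:
# half "expert" trajectories (a = 2s, return 1), half "random" (a = -s, return 0).
s = rng.uniform(-1.0, 1.0, size=200)
expert = np.arange(200) < 100
a = np.where(expert, 2.0 * s, -s)
R = expert.astype(float)

# Return-weighted supervised objective (illustrative assumption): minimize
#   sum_i exp(R_i / beta) * E_w(s_i, a_i)
# with a simple quadratic energy E_w(s, a) = (a - w*s)^2.
# Unlike plain BC, samples with higher return get exponentially more weight.
beta = 0.1
wgt = np.exp(R / beta)
# Closed-form minimizer of the weighted quadratic loss in w.
w = np.sum(wgt * s * a) / np.sum(wgt * s * s)

def implicit_policy(state, candidates):
    """Implicit policy: return the candidate action with the lowest energy,
    rather than outputting an action directly (as an explicit policy would)."""
    energies = (candidates - w * state) ** 2
    return candidates[np.argmin(energies)]

cands = np.linspace(-3.0, 3.0, 61)
action = implicit_policy(0.5, cands)
```

Because the low-return behavior is down-weighted by `exp(R / beta)`, the learned energy parameter `w` lands near the expert's `w = 2`, and the argmin-over-candidates policy recovers the expert action despite the mixed-quality dataset.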

Publication
Workshop at the Conference on Neural Information Processing Systems (NeurIPS)
Alexandre Piche
Research Scientist

Research Scientist on the Human Decision Support team in Montreal, QC, Canada.

Rafael Pardinas
Applied Research Scientist

Applied Research Scientist on the Human Decision Support team in London, UK.

David Vazquez
Manager of Research Programs

Manager of Research Programs in Research Management in Montreal, QC, Canada.

Christopher Pal
Distinguished Scientist

Distinguished Scientist on the Low Data Learning team in Montreal, QC, Canada.