ServiceNow Research

Reinforced Imitation in Heterogeneous Action Space

Abstract

Imitation learning is an effective alternative for learning a policy when the reward function is sparse. In this paper, we consider a challenging setting in which the agent and the expert act in different action spaces. We assume that the agent has access to a sparse reward function and to state-only expert observations. We propose a method that gradually balances the imitation learning cost against the reinforcement learning objective. In addition, the method adapts the agent's policy by either mimicking the expert's behavior or maximizing the sparse reward. We show, through navigation scenarios, that (i) an agent can efficiently leverage sparse rewards to outperform standard state-only imitation learning, (ii) it can learn a policy even when its actions differ from the expert's, and (iii) the agent's performance is not bounded by that of the expert, owing to the optimized use of sparse rewards.
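The abstract does not spell out how the balance between the two objectives is scheduled. As a minimal sketch only, assuming a linear annealing schedule (the function name blended_loss, the parameter anneal_steps, and the schedule itself are illustrative assumptions, not details taken from the paper), the combined objective could look like:

```python
def blended_loss(il_loss: float, rl_loss: float,
                 step: int, anneal_steps: int) -> float:
    """Combine a state-only imitation loss with an RL loss.

    alpha moves from 0 (pure imitation) toward 1 (pure RL) as training
    progresses, so the agent first mimics the expert's state
    trajectories and later focuses on maximizing the sparse reward.
    """
    alpha = min(1.0, step / anneal_steps)  # linear annealing (assumed)
    return (1.0 - alpha) * il_loss + alpha * rl_loss


# Early in training the imitation term dominates ...
print(blended_loss(il_loss=0.8, rl_loss=2.0, step=100, anneal_steps=10_000))
# ... while late in training the reinforcement term dominates.
print(blended_loss(il_loss=0.8, rl_loss=2.0, step=9_000, anneal_steps=10_000))
```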

Publication
Workshop at the Conference on Neural Information Processing Systems (NeurIPS)
Yoshua Bengio
Research Advisor, Human Decision Support, Montreal, QC, Canada