ServiceNow Research

Advice-Based Exploration in Model-Based Reinforcement Learning


Convergence to an optimal policy using model-based reinforcement learning can require significant exploration of the environment. In some settings such exploration is costly or even impossible, for example when no simulator is available or when the state space is prohibitively large. In this paper we examine the use of advice to guide the search for an optimal policy. To this end we propose a rich language for providing advice to a reinforcement learning agent. Unlike constraints, which can eliminate optimal policies, advice guides exploration while preserving the guarantee of convergence to an optimal policy. Experimental results on deterministic grid worlds demonstrate the potential for good advice to reduce the amount of exploration required to learn a satisficing or optimal policy, while maintaining robustness in the face of incomplete or misleading advice.
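To illustrate the general idea of advice-guided exploration that preserves convergence, the following is a minimal sketch, not the paper's method or advice language: tabular Q-learning (model-free, used here for brevity in place of the paper's model-based setting) on a small deterministic grid world, where hypothetical advice ("prefer right/down") adds a bonus to suggested actions during action selection. The bonus decays with each state's visit count, so asymptotically the agent acts greedily on its learned values and the usual tabular convergence argument is unaffected. The grid layout, rewards, and bonus schedule are all illustrative assumptions.

```python
import random

SIZE = 5                                         # 5x5 grid, start (0,0)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # right, left, down, up
GOAL = (SIZE - 1, SIZE - 1)

def step(state, action):
    """Deterministic transition: move, clipped to the grid bounds."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else -0.04), nxt == GOAL

# Hypothetical advice: "prefer moving right or down" (toward the goal).
ADVICE = {(0, 1), (1, 0)}

def train(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}       # (state, action) -> learned value
    visits = {}  # state -> visit count, drives the decaying advice bonus
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):                     # episode step limit
            visits[state] = visits.get(state, 0) + 1
            bonus = 1.0 / visits[state]          # vanishes per state
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                # Advice adds a vanishing tie-breaking bonus, so it guides
                # early exploration but cannot overrule converged values.
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0)
                             + (bonus if a in ADVICE else 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            target = reward + (0.0 if done else gamma * best_next)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (target - old)
            state = nxt
            if done:
                break
    return Q

def greedy_rollout(Q, max_steps=50):
    """Follow the learned greedy policy (advice bonus off) from the start."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        state, _, done = step(state, action)
        path.append(state)
        if done:
            break
    return path
```

Because the advice enters only through a decaying selection bonus, misleading advice slows but does not prevent learning, while good advice steers early episodes toward the goal and cuts wasted exploration.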

Canadian Conference on AI