Automated curriculum generation for Policy Gradients from Demonstrations (1912.00444v1)
Published 1 Dec 2019 in cs.LG, cs.AI, and stat.ML
Abstract: In this paper, we present a technique that improves the process of training an agent via reinforcement learning (RL) for instruction following. We develop a training curriculum that uses a small number of expert demonstrations and trains the agent in a manner that parallels one of the ways humans learn complex tasks, i.e., by starting from the goal and working backwards. We test our method on the BabyAI platform and show an improvement in sample efficiency on some of its tasks compared to a proximal policy optimization (PPO) baseline.
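The backward-from-the-goal curriculum described in the abstract can be illustrated with a small sketch: early in training the agent is reset to states near the end of an expert demonstration (close to the goal), and the start point is moved progressively backwards as training advances. This is a minimal illustration under stated assumptions, not the paper's implementation; `sample_start_index`, `num_stages`, and the `env.reset_to` hook are hypothetical names introduced here.

```python
import random

def sample_start_index(demo_length: int, stage: int, num_stages: int) -> int:
    """Pick a starting index along an expert demonstration.

    Early curriculum stages start near the goal (the end of the demo);
    later stages allow start points further back, so the agent faces
    progressively longer versions of the task.
    """
    # Fraction of the demonstration the agent must complete at this stage.
    frac = (stage + 1) / num_stages
    earliest = max(0, int(demo_length * (1.0 - frac)))
    # Sample uniformly between the earliest allowed index and the goal.
    return random.randint(earliest, demo_length - 1)


# Hypothetical usage: `demo` is one expert trajectory (a list of saved
# environment states), and `env.reset_to(state)` is an assumed hook for
# restoring a saved state -- not a standard BabyAI/Gym API.
#
# for stage in range(num_stages):
#     start = sample_start_index(len(demo), stage, num_stages)
#     obs = env.reset_to(demo[start])
#     ...train the PPO agent on this shortened task...
```

The design choice here is that shortening the task horizon early on makes reward signals much easier to reach, which is the intuition behind the sample-efficiency gains the paper reports over a plain PPO baseline.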