Learning Rational Subgoals from Demonstrations and Instructions (2303.05487v1)
Abstract: We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals. At the core of our framework is a collection of rational subgoals (RSGs), which are essentially binary classifiers over the environmental states. RSGs can be learned from weakly-annotated data, in the form of unsegmented demonstration trajectories paired with abstract task descriptions composed of terms initially unknown to the agent (e.g., collect-wood then craft-boat then go-across-river). Our framework also discovers dependencies between RSGs, e.g., that collect-wood is a helpful subgoal for the task craft-boat. Given a goal description, the learned subgoals and the derived dependencies facilitate off-the-shelf planning algorithms, such as A* and RRT, by setting helpful subgoals as waypoints for the planner, which significantly improves planning-time efficiency.
- Zhezheng Luo (4 papers)
- Jiayuan Mao (55 papers)
- Jiajun Wu (249 papers)
- Tomás Lozano-Pérez (85 papers)
- Joshua B. Tenenbaum (257 papers)
- Leslie Pack Kaelbling (94 papers)
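
The abstract describes RSGs as binary classifiers over environmental states that serve as waypoints for off-the-shelf planners. Below is a minimal, hypothetical Python sketch of that waypoint decomposition: a long-horizon search is broken into short segments, each terminating at a state accepted by the next subgoal classifier. All names (`astar`, `plan_with_subgoals`, the `collect_wood`/`craft_boat`/`cross_river` lambdas) are illustrative assumptions, not the paper's actual implementation or API, and the hand-written classifiers merely stand in for the RSGs the framework would learn from demonstrations.

```python
import heapq
from typing import Callable, Iterable, List, Optional, Tuple

State = Tuple[int, int]  # e.g., an (x, y) cell in a toy grid world


def astar(start: State,
          goal_test: Callable[[State], bool],
          neighbors: Callable[[State], Iterable[State]],
          heuristic: Callable[[State], float] = lambda s: 0.0) -> Optional[List[State]]:
    """Plain A* (uniform-cost search when the heuristic is 0) to any state
    satisfying goal_test."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heapq.heappush(
                    frontier,
                    (cost + 1.0 + heuristic(nxt), cost + 1.0, nxt, path + [nxt]))
    return None


def plan_with_subgoals(start: State,
                       rsg_sequence: List[Callable[[State], bool]],
                       neighbors: Callable[[State], Iterable[State]]) -> Optional[List[State]]:
    """Chain short searches: each subgoal classifier in the sequence is treated
    as a waypoint goal, and the last one is the final goal itself."""
    path: List[State] = [start]
    current = start
    for rsg in rsg_sequence:
        # A subgoal-specific heuristic could be plugged in here.
        segment = astar(current, rsg, neighbors)
        if segment is None:
            return None
        path.extend(segment[1:])
        current = segment[-1]
    return path


if __name__ == "__main__":
    # Toy 10x10 grid; the lambdas below are hand-written stand-ins for the
    # binary classifiers the framework would learn from demonstrations.
    def neighbors(s: State) -> Iterable[State]:
        x, y = s
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < 10 and 0 <= ny < 10:
                yield (nx, ny)

    collect_wood = lambda s: s == (3, 2)   # stand-in for "collect-wood"
    craft_boat = lambda s: s == (6, 5)     # stand-in for "craft-boat"
    cross_river = lambda s: s == (9, 9)    # stand-in for "go-across-river"

    plan = plan_with_subgoals((0, 0), [collect_wood, craft_boat, cross_river], neighbors)
    print(plan)
```

The design point the sketch tries to convey is the one stated in the abstract: with subgoals as waypoints, each individual search segment is short, so the planner's effort grows with the number of subgoals rather than with the full horizon of the task.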