Aligning Human Intent from Imperfect Demonstrations

(2312.11194)
Published Dec 18, 2023 in cs.RO

Abstract

Standard imitation learning usually assumes that demonstrations are drawn from an optimal policy distribution. In the real world, however, human demonstrations may exhibit nearly random behavior, and collecting high-quality human datasets is costly. This requires robots to learn from imperfect demonstrations and thereby acquire behavioral policies that align with human intent. Prior work uses confidence scores to extract useful information from imperfect demonstrations, but relies on access to ground-truth rewards or active human supervision. In this paper, we propose a dynamics-based method to obtain fine-grained confidence scores for data without either of these requirements. We develop a generalized confidence-based imitation learning framework called Confidence-based Inverse soft-Q Learning (CIQL), which can employ different policy learning methods by changing objective functions. Experimental results show that our confidence evaluation method can increase the success rate of the original algorithm by $40.3\%$, which is $13.5\%$ higher than simply filtering out noisy demonstrations.
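
The core idea is to weight each demonstration transition by a confidence score inside an inverse soft-Q (IQ-Learn-style) objective, so low-quality transitions contribute less than they would under plain imitation. The sketch below illustrates this for a discrete-action task; the network shape, the soft-value form, and the specific weighting shown are assumptions for illustration, not the paper's exact CIQL objective, and the `conf` scores are taken as given inputs rather than computed by the paper's dynamics-based method.

```python
import torch
import torch.nn as nn

class SoftQ(nn.Module):
    """Q-network for a discrete-action task (illustrative architecture)."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # Q(s, .) over all actions

def soft_value(q, alpha=1.0):
    # Soft Bellman value: V(s) = alpha * logsumexp(Q(s, .) / alpha)
    return alpha * torch.logsumexp(q / alpha, dim=-1)

def confidence_weighted_iq_loss(qnet, batch, gamma=0.99, alpha=1.0):
    """One confidence-weighted inverse soft-Q step (sketch only).

    batch: dict of tensors from imperfect demonstrations with keys
    obs, act, next_obs, done, conf; conf holds per-transition
    confidence scores in [0, 1].
    """
    q_all = qnet(batch["obs"])                        # Q(s, .)
    q_sa = q_all.gather(1, batch["act"].long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        v_next = soft_value(qnet(batch["next_obs"]), alpha)
    v_next = (1.0 - batch["done"]) * v_next
    # Implicit reward recovered by inverse soft-Q:
    # r(s, a) = Q(s, a) - gamma * V(s')
    implicit_reward = q_sa - gamma * v_next
    v = soft_value(q_all, alpha)
    # Down-weight low-confidence transitions in the demonstration term;
    # the second term keeps soft Bellman consistency on the same batch.
    return -(batch["conf"] * implicit_reward).mean() + (v - gamma * v_next).mean()
```

Setting all confidence scores to 1 recovers an unweighted inverse soft-Q loss, while binarizing them corresponds to the noise-filtering baseline the abstract compares against.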
