Multi-Task Trust Transfer for Human-Robot Interaction

arXiv:1807.01866
Published Jul 5, 2018 in cs.RO and cs.HC

Abstract

Trust is essential in shaping human interactions with one another and with robots. This paper discusses how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
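The abstract describes differentiable trust models built on latent task representations, where trust learned on one task carries over to similar tasks. As a rough, hypothetical illustration of that idea only (not the authors' actual models; the task names, embedding dimension, logistic scoring function, and gradient-style update below are all invented for the sketch), trust can be scored from a shared latent task embedding and a per-user trust state, so that an observed outcome on one task shifts predicted trust on nearby tasks:

```python
import numpy as np

# Hypothetical sketch of trust transfer via latent task representations.
# Not the paper's implementation: embeddings are random here, and the
# update rule is a simple logistic-regression-style step.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

TASK_DIM = 3
tasks = {
    "pick_bottle":   rng.normal(size=TASK_DIM),  # household manipulation
    "pick_glass":    rng.normal(size=TASK_DIM),
    "parallel_park": rng.normal(size=TASK_DIM),  # driving domain
}
# Make the two household tasks close in latent space so trust transfers.
tasks["pick_glass"] = tasks["pick_bottle"] + 0.1 * rng.normal(size=TASK_DIM)

def predict_trust(trust_state, task_vec):
    """Probability that the user trusts the robot to succeed on this task."""
    return sigmoid(trust_state @ task_vec)

def update_trust(trust_state, task_vec, success, lr=0.5):
    """Shift the user's trust state after observing one task outcome."""
    y = 1.0 if success else 0.0
    p = predict_trust(trust_state, task_vec)
    return trust_state + lr * (y - p) * task_vec

trust = np.zeros(TASK_DIM)  # neutral prior trust state for one user
print("before:", {k: round(float(predict_trust(trust, v)), 2) for k, v in tasks.items()})

trust = update_trust(trust, tasks["pick_bottle"], success=True)  # robot succeeds
print("after :", {k: round(float(predict_trust(trust, v)), 2) for k, v in tasks.items()})
# Trust rises most for pick_bottle and the nearby pick_glass, illustrating
# transfer across tasks through the shared latent representation.
```

In this toy setup, the single observed success on "pick_bottle" raises predicted trust for the similar "pick_glass" task far more than for the unrelated "parallel_park" task, which is the qualitative behavior the paper's models are designed to capture and learn from data.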
