
Abstract

The role of robots is expanding from tool to collaborator. Socially assistive robots (SARs) are an example of collaborative robots that assist humans in the real world. As robots enter our social sphere, unforeseen risks arise during human-robot interaction (HRI), because everyday human space is full of uncertainties. Risk introduces an element of trust, so understanding human trust in the robot is imperative for initiating and maintaining interactions with robots over time. While many scholars have investigated the issue of human-robot trust, a significant portion of that discussion is rooted in the human-automation interaction literature. As robots are no longer mere instruments but social agents that co-exist with humans, we need a new lens to investigate the longitudinal, dynamic nature of trust in HRI. In this position paper, we contend that focusing on the dynamic nature of trust as a new line of inquiry will help us better design trustworthy robots.

