
Shared Autonomy via Deep Reinforcement Learning (1802.01744v2)

Published 6 Feb 2018 in cs.LG, cs.HC, and cs.RO

Abstract: In shared autonomy, user input is combined with semi-autonomous control to achieve a common goal. The goal is often unknown ex-ante, so prior work enables agents to infer the goal from user input and assist with the task. Such methods tend to assume some combination of knowledge of the dynamics of the environment, the user's policy given their goal, and the set of possible goals the user might target, which limits their application to real-world scenarios. We propose a deep reinforcement learning framework for model-free shared autonomy that lifts these assumptions. We use human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action values, with task reward as the only form of supervision. This approach poses the challenge of following user commands closely enough to provide the user with real-time action feedback and thereby ensure high-quality user input, but also deviating from the user's actions when they are suboptimal. We balance these two needs by discarding actions whose values fall below some threshold, then selecting the remaining action closest to the user's input. Controlled studies with users (n = 12) and synthetic pilots playing a video game, and a pilot study with users (n = 4) flying a real quadrotor, demonstrate the ability of our algorithm to assist users with real-time control tasks in which the agent cannot directly access the user's private information through observations, but receives a reward signal and user input that both depend on the user's intent. The agent learns to assist the user without access to this private information, implicitly inferring it from the user's input. This paper is a proof of concept that illustrates the potential for deep reinforcement learning to enable flexible and practical assistive systems.

Citations (167)

Summary

  • The paper introduces a model-free deep reinforcement learning framework for shared autonomy that optimizes performance without relying on predefined user goals.
  • It employs an action feasibility threshold to selectively blend human input with autonomous control in dynamic environments.
  • Empirical studies with human and synthetic pilots in a video game, and with human pilots flying a real quadrotor, demonstrate higher task success rates and fewer catastrophic failures.

Shared Autonomy via Deep Reinforcement Learning

Shared autonomy combines human input with semi-autonomous control to achieve a common objective, often without prior knowledge of the user's specific goal. The paper "Shared Autonomy via Deep Reinforcement Learning" by Siddharth Reddy et al. introduces a model-free deep reinforcement learning approach that lifts limitations of prior methods, namely the need for predefined knowledge of environmental dynamics, user policies, or the set of possible goals.

Overview

The proposed framework uses human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observations and user inputs to action values. Crucially, the task reward is the only form of supervision, which creates a tension: the agent must follow the user's commands closely enough to provide real-time action feedback, yet deviate from them when they are suboptimal. The algorithm balances these needs with an action feasibility threshold: actions whose values fall below the threshold are discarded, and among the remaining actions, the one closest to the user's input is executed.
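The selection rule described above can be sketched for a discrete action space as follows. This is an illustrative reading of the abstract's description, not the paper's exact formulation: the `alpha` parameterization of the threshold and the use of index distance as the measure of "closest to the user's input" are assumptions made for the sketch.

```python
import numpy as np

def assisted_action(q_values, user_action, alpha=0.5):
    """Discard actions whose Q-value falls below a threshold, then
    return the remaining action closest to the user's input.

    q_values    : 1-D array of learned action values for the current state
    user_action : index of the discrete action suggested by the user
    alpha       : tolerance in [0, 1] (hypothetical parameterization):
                  0 keeps only value-maximizing actions, 1 keeps all actions
    """
    q_max, q_min = q_values.max(), q_values.min()
    # Value floor: actions below this are considered infeasible.
    threshold = q_max - alpha * (q_max - q_min)
    feasible = np.flatnonzero(q_values >= threshold)
    # Among feasible actions, pick the one nearest the user's choice
    # (index distance stands in for a task-appropriate action metric).
    return int(feasible[np.argmin(np.abs(feasible - user_action))])
```

With `alpha = 0` the agent acts fully autonomously; with `alpha = 1` it defers entirely to the user, so the parameter trades off optimality against adherence to user input.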

The methodology is validated empirically through controlled studies with synthetic pilots and human users (n = 12) playing a video game, and a pilot study with users (n = 4) flying a real quadrotor. The results confirm that the algorithm enhances user performance even though the agent has no direct access to user-specific intent information.

Key Results

The experiments show that the framework significantly increased success rates in both simulated and real-world tasks while reducing catastrophic failures. In both the Lunar Lander game and the quadrotor control task, pilot-copilot teams achieved higher task success rates than pilots acting alone.

Implications and Future Directions

Practically, this research informs the development of adaptive robotic systems capable of assisting in real-time, dynamic environments. Theoretically, it underscores the potential of deep reinforcement learning to build shared-autonomy systems that do not rely on fixed models of user intent, paving the way for flexible human-machine collaboration.

Looking forward, incorporating more sophisticated memory mechanisms into these frameworks could improve intent inference, addressing a current limitation tied to user adaptability. Ensuring that user inputs do not unduly degrade overall system performance also remains an open question for further investigation.

In summary, this paper contributes a substantive advancement towards building practical assistive systems via deep reinforcement learning, effectively balancing the need for autonomy with user-centric adaptability and feedback utilization. The results present an optimistic outlook for shared autonomy applications across diverse sectors, suggesting pathways for future exploration within the sphere of intelligent systems.

