Residual Policy Learning (1812.06298v2)

Published 15 Dec 2018 in cs.RO and cs.LG

Abstract: We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements. We study RPL in six challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. For initial controllers, we consider both hand-designed policies and model-predictive controllers with known or learned transition models. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently. Video and code at https://k-r-allen.github.io/residual-policy-learning/.

Citations (163)

Summary

  • The paper presents Residual Policy Learning (RPL), which augments initial control policies with a learned residual function using deep reinforcement learning.
  • RPL uses a burn-in phase to train the critic before the actor, computes policy gradients through the learned residual, and thereby enables policy gradient methods to work with nondifferentiable controllers.
  • Experimental results show that RPL converges faster and outperforms both hand-designed policies and learning-from-scratch methods in challenging robotic manipulation tasks.

Residual Policy Learning

Introduction

The paper presents Residual Policy Learning (RPL), a technique designed to enhance nondifferentiable control policies using model-free deep reinforcement learning (RL). The central thesis is that RPL can significantly improve the performance of initially effective but imperfect controllers, especially in complex robotic tasks where learning from scratch is data-inefficient or infeasible.

Methodology

The methodology underlying RPL is to learn a residual function $f_\theta(s)$ that adjusts the output of an existing policy $\pi(s)$, yielding the augmented policy $\pi_\theta(s) = \pi(s) + f_\theta(s)$. Because the initial policy contributes no gradient, policy updates during RL are computed with respect to the residual alone, which makes policy gradient methods applicable even when the initial policy is nondifferentiable.
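
A minimal sketch of this composition is given below. The class and argument names (`ResidualPolicy`, `initial_controller`, `residual_net`) are illustrative rather than taken from the paper's code, and both components are stubbed with plain functions.

```python
class ResidualPolicy:
    """Illustrative sketch: the learned residual f_theta is added to the output of a
    fixed, possibly nondifferentiable initial controller pi (names are hypothetical)."""

    def __init__(self, initial_controller, residual_net):
        self.pi = initial_controller   # black-box controller: hand-designed or MPC
        self.f_theta = residual_net    # learned network; only this part receives gradients

    def act(self, state):
        base_action = self.pi(state)       # no gradient flows through the initial controller
        correction = self.f_theta(state)   # residual trained with model-free RL
        return base_action + correction    # pi_theta(s) = pi(s) + f_theta(s)


# Toy usage with stand-ins for the controller and the residual network:
policy = ResidualPolicy(initial_controller=lambda s: 0.5, residual_net=lambda s: -0.1)
print(policy.act(state=None))  # 0.4
```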

RPL operates within a standard MDP framework and builds on actor-critic deep RL algorithms such as DDPG. A "burn-in" period ensures stability by training the critic before the actor is updated, minimizing the risk of degrading performance when starting from a strong initial policy (Figure 1).
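
The following sketch illustrates that schedule under the assumption of a DDPG-style agent exposing separate critic and actor updates; `agent`, `replay_buffer`, the method names, and the burn-in length are hypothetical placeholders, not the authors' implementation.

```python
def train_rpl(agent, replay_buffer, total_steps, burn_in_steps=10_000, batch_size=256):
    """Burn-in schedule sketch: the critic is fit for `burn_in_steps` updates before any
    actor (residual) update, so an untrained critic cannot push the policy away from a
    strong initial controller."""
    for step in range(total_steps):
        batch = replay_buffer.sample(batch_size)
        agent.update_critic(batch)        # always fit Q(s, a) to observed returns
        if step >= burn_in_steps:
            agent.update_actor(batch)     # only afterwards update the residual via the critic
```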

Figure 1: (a) A simulated Fetch robot must use a hook to move a block to a target (red sphere). A hand-designed policy can accomplish this task perfectly. (b) The same policy often fails in a more difficult task where the block is replaced by a complex object and the table top contains large "bumps." Residual Policy Learning (RPL) augments the policy $\pi$ with a residual $f_\theta$, which can learn to accomplish the latter task.

Experimental Evaluation

Experiments were conducted on six robotic manipulation tasks in MuJoCo to assess RPL's effectiveness. The tasks cover partial observability, sensor noise, model misspecification, and controller miscalibration, while the initial controllers range from hand-designed policies to model-predictive controllers with known or learned transition models.

Baselines and Comparative Analysis

RPL's performance was compared to several baselines: the initial policy executed without learning, model-free RL from scratch using DDPG and HER, and an "Expert Explore" baseline in which exploration is driven by the initial policy's actions (Figure 2).
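
One way to read the Expert Explore mechanism described above is as an action-selection rule that occasionally executes the initial controller during exploration. The sketch below is an assumption about that behavior, not the authors' implementation; the function name and `expert_prob` parameter are illustrative.

```python
import random

def expert_explore_action(learned_policy, initial_controller, state, expert_prob=0.1):
    """Hypothetical reading of the "Expert Explore" baseline: with some probability,
    explore by executing the initial controller's action instead of the learned policy's."""
    if random.random() < expert_prob:
        return initial_controller(state)   # explore with the initial policy's action
    return learned_policy(state)           # otherwise act with the policy learned from scratch
```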

Figure 2: Illustration of the original ReactivePush policy and RPL on the SlipperyPush task. RPL learns to correct the faulty policy within 1 million simulator steps, while RL from scratch remains ineffective.

The analysis demonstrated that RPL generally converges faster and with fewer samples than learning from scratch. Notably, under significant changes such as model miscalibration or sensor noise (e.g., the SlipperyPush and NoisyHook tasks), RPL showed superior data efficiency and improved outcomes beyond the initial controller's capabilities (Figure 3).

Figure 3: RPL and baseline results for the Push, SlipperyPush, and PickAndPlace tasks highlight how RPL sustains superior performance compared to initial policies and DDPG from scratch.

Implications and Future Directions

RPL offers a promising avenue for bridging the gap between classical control and RL. It leverages existing control policies and enhances them with adaptive, model-free learning, making tractable complex, long-horizon tasks for which hand-tuning a controller to perfection is impractical and RL from scratch fails.

The applicability of RPL extends beyond robotic manipulation, potentially benefiting any domain where robust but imperfect controllers are available. Future research could systematically explore such domains and refine RPL's integration with other RL architectures and model-based strategies.

Conclusion

Residual Policy Learning presents a scalable method for leveraging existing control strategies to obtain stronger robotic control policies. The experimental evaluation underscores the benefit of combining domain-specific controllers with model-free RL, offering a compelling approach to nuanced control problems in robotics and leaving ample room for further refinement of policy learning techniques.
