Multi-task Reinforcement Learning in Reproducing Kernel Hilbert Spaces via Cross-learning (2008.11895v1)

Published 27 Aug 2020 in eess.SY and cs.SY

Abstract: Reinforcement learning (RL) is a framework for optimizing a control policy using rewards that are revealed by the system in response to a control action. In its standard form, RL involves a single agent that uses its policy to accomplish a specific task. These methods require large amounts of reward samples to achieve good performance, and may not generalize well when the task is modified, even if the new task is related. In this paper we are interested in a collaborative scheme in which multiple agents with different tasks optimize their policies jointly. To this end, we introduce cross-learning, in which agents tackling related tasks have their policies constrained to be close to one another. Two properties make our new approach attractive: (i) it produces a multi-task central policy that can be used as a starting point to adapt quickly to any of the tasks trained for, in a situation where the agent does not know which task it is currently facing, and (ii) as in meta-learning, it adapts to environments related to but different from those seen during training. We focus on continuous policies belonging to reproducing kernel Hilbert spaces, for which we bound the distance between the task-specific policies and the cross-learned policy. To solve the resulting optimization problem, we resort to a projected policy gradient algorithm and prove that it converges to a near-optimal solution with high probability. We evaluate our methodology with a navigation example in which agents can move through environments with obstacles of multiple shapes and avoid obstacles not trained for.
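The core mechanism described in the abstract can be illustrated with a minimal sketch: several task-specific policies are updated by policy gradient, a central policy is formed from them, and each task policy is projected back into an epsilon-ball around the central one. This is not the paper's algorithm; it is a hypothetical toy instantiation in which each "task" is a one-step quadratic-reward problem, policies are finite-dimensional weight vectors (a stand-in for the paper's RKHS policies), and the central policy is simply the average of the task policies.

```python
import numpy as np

def cross_learning(targets, epsilon=0.5, lr=0.1, steps=200):
    """Projected policy gradient with a cross-learning-style constraint.

    targets : list of 1-D arrays, one per task; the reward for task t is
              -||theta_t - targets[t]||^2, so the unconstrained optimum
              of task t is targets[t] (a deliberately simple surrogate).
    epsilon : maximum allowed distance between a task policy and the
              central (average) policy.
    Returns the task policies and the central policy.
    """
    thetas = [np.zeros_like(t, dtype=float) for t in targets]
    for _ in range(steps):
        # Independent policy-gradient steps (exact gradients in this toy).
        for i, tgt in enumerate(targets):
            thetas[i] += lr * 2.0 * (tgt - thetas[i])
        # Central policy: average of the task-specific policies.
        center = np.mean(thetas, axis=0)
        # Project each task policy onto the epsilon-ball around the center.
        for i in range(len(thetas)):
            d = thetas[i] - center
            n = np.linalg.norm(d)
            if n > epsilon:
                thetas[i] = center + epsilon * d / n
    return thetas, np.mean(thetas, axis=0)

if __name__ == "__main__":
    # Two related tasks with nearby optima.
    thetas, center = cross_learning([np.array([1.0, 0.0]),
                                     np.array([0.0, 1.0])])
    for th in thetas:
        assert np.linalg.norm(th - center) <= 0.5 + 1e-9
```

In this sketch the constraint is enforced by projection after each gradient step, which mirrors the projected-gradient flavor of the method: the task policies stay within epsilon of the central policy, and the central policy lands between the two task optima, so it serves as a reasonable warm start for either task.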

Citations (6)