
Addressing the issue of stochastic environments and local decision-making in multi-objective reinforcement learning (2211.08669v1)

Published 16 Nov 2022 in cs.LG and cs.AI

Abstract: Multi-objective reinforcement learning (MORL) is a relatively new field that builds on conventional reinforcement learning (RL) to solve multi-objective problems. One common algorithm extends scalar-valued Q-learning by using vector Q-values in combination with a utility function that captures the user's preferences for action selection. This study follows on from prior work and focuses on the factors that influence how frequently value-based MORL Q-learning algorithms learn the optimal policy for an environment with stochastic state transitions, in scenarios where the goal is to maximise the Scalarised Expected Return (SER) - that is, to maximise the average outcome over multiple runs rather than the outcome within each individual episode. The interaction between stochastic environments and MORL Q-learning algorithms is analysed on variants of a simple multi-objective Markov decision process (MOMDP), the Space Traders problem. The empirical evaluations show that a well-designed reward signal can improve the performance of the original baseline algorithm, but is still not sufficient for more general environments. A variant of MORL Q-learning incorporating global statistics is shown to outperform the baseline method on the original Space Traders problem, but remains below 100 percent effectiveness in finding the desired SER-optimal policy at the end of training. Option learning, on the other hand, is guaranteed to converge to the desired SER-optimal policy, but does not scale up to more complex real-life problems. The main contribution of this thesis is to identify the extent to which noisy Q-value estimates impact the ability to learn optimal policies under the combination of stochastic environments, non-linear utility, and a constant learning rate.
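As context for the baseline algorithm the abstract describes, the sketch below shows vector-valued Q-learning with utility-based action selection. Under SER the aim is to maximise u(E[sum of rewards]), the utility of the expected return, rather than the expected utility of each episode's return. Everything in the sketch (the environment interface, the multiplicative utility, and the hyperparameter values) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Minimal sketch of baseline multi-objective Q-learning: one Q-vector
# per (state, action) pair, scalarised by a utility function only at
# action-selection time. The env.reset()/env.step() interface, the
# utility function, and all hyperparameters are assumptions.

def utility(q_vec):
    # Hypothetical non-linear utility over a two-objective value vector.
    return q_vec[0] * q_vec[1]

def mo_q_learning(env, n_states, n_actions, n_objectives,
                  episodes=20000, alpha=0.01, gamma=1.0, epsilon=0.1,
                  seed=0):
    rng = np.random.default_rng(seed)
    # Vector Q-values: one entry per objective.
    Q = np.zeros((n_states, n_actions, n_objectives))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                # Greedy step: scalarise each action's Q-vector with the
                # utility function and pick the best.
                a = max(range(n_actions), key=lambda k: utility(Q[s, k]))
            s_next, r_vec, done = env.step(a)  # r_vec is a reward vector
            a_next = max(range(n_actions), key=lambda k: utility(Q[s_next, k]))
            # Vector TD(0) update with a constant learning rate; with a
            # non-linear utility and stochastic transitions, the noise in
            # these bootstrapped estimates is what the paper analyses.
            target = np.asarray(r_vec) + (0.0 if done else gamma) * Q[s_next, a_next]
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

Note that greedily scalarising per-state Q-vectors with a non-linear utility is not guaranteed to recover the SER-optimal policy, which is precisely why the paper examines variants with shaped rewards and global statistics.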

Citations (2)

