
Faster Reinforcement Learning by Freezing Slow States (2301.00922v2)

Published 3 Jan 2023 in cs.AI, cs.LG, cs.SY, math.OC, and eess.SY

Abstract: We study infinite horizon Markov decision processes (MDPs) with "fast-slow" structure, where some state variables evolve rapidly ("fast states") while others change more gradually ("slow states"). Such structure is common in real-world problems where sequential decisions need to be made at high frequencies over long horizons, where slowly evolving information also influences optimal decisions. Examples include inventory control under slowly changing demand, or dynamic pricing with gradually shifting consumer behavior. Modeling the problem at the natural decision frequency leads to MDPs with discount factors close to one, making them computationally challenging. We propose a novel approximation strategy that "freezes" slow states during a phase of lower-level planning, solving finite-horizon MDPs conditioned on a fixed slow state, and then applying value iteration to an auxiliary upper-level MDP that evolves on a slower timescale. Freezing states for short periods of time leads to easier-to-solve lower-level problems, while a slower upper-level timescale allows for a more favorable discount factor. On the theoretical side, we analyze the regret incurred by our frozen-state approach, which leads to simple insights on how to trade off computational budget versus regret. Empirically, we demonstrate that frozen-state methods produce high-quality policies with significantly less computation, and we show that simply omitting slow states is often a poor heuristic.
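
To make the two-timescale idea concrete, here is a minimal, illustrative sketch on a toy tabular MDP. The state-space sizes, random dynamics, and the way the upper-level continuation value is handed to the lower level are hypothetical simplifications and not the paper's exact construction: the lower level runs finite-horizon backward induction over the fast state with the slow state frozen, and the upper level runs value iteration on the slow timescale, where discounting over the T-step freezing horizon yields an effective discount factor of gamma**T.

```python
import numpy as np

# Minimal, hypothetical sketch of the frozen-state idea on a toy tabular MDP.
# Sizes, dynamics, and the hand-off between levels are illustrative assumptions,
# not the paper's exact algorithm.

rng = np.random.default_rng(0)
nS, nF, nA = 3, 4, 2          # slow states, fast states, actions
gamma, T = 0.99, 10           # per-step discount, freezing horizon

# Hypothetical dynamics and rewards:
#   P_fast[s, a, f, f'] : fast-state transition with the slow state frozen at s
#   P_slow[s, s']       : slow-state transition over one upper-level period
#   R[s, f, a]          : per-step reward
P_fast = rng.random((nS, nA, nF, nF))
P_fast /= P_fast.sum(axis=-1, keepdims=True)
P_slow = rng.random((nS, nS))
P_slow /= P_slow.sum(axis=-1, keepdims=True)
R = rng.random((nS, nF, nA))

def lower_level(s, continuation):
    """Finite-horizon backward induction over fast states, slow state frozen at s.
    `continuation[f]` is the upper-level value collected after T steps; the
    recursion discounts it by gamma**T automatically."""
    V = continuation.copy()
    for _ in range(T):
        # Q[f, a] = R[s, f, a] + gamma * E[V(f') | f, a]
        Q = R[s] + gamma * np.einsum('afg,g->fa', P_fast[s], V)
        V = Q.max(axis=1)
    return V

# Upper-level value iteration on the slow timescale; the effective discount
# gamma**T < gamma makes this a much stronger contraction than the original MDP's.
V_up = np.zeros((nS, nF))
for _ in range(500):
    V_new = np.empty_like(V_up)
    for s in range(nS):
        cont = np.einsum('t,tf->f', P_slow[s], V_up)  # expected value after the slow transition
        V_new[s] = lower_level(s, cont)
    if np.max(np.abs(V_new - V_up)) < 1e-8:
        V_up = V_new
        break
    V_up = V_new

print("Approximate frozen-state value of (slow=0, fast=0):", V_up[0, 0])
```

Because each upper-level backup only contracts with factor gamma**T rather than gamma, far fewer upper-level iterations are needed than when solving the original MDP at its natural decision frequency, which is the computational trade-off the paper analyzes.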
