Natural Actor-Critic Converges Globally for Hierarchical Linear Quadratic Regulator (1912.06875v2)
Abstract: Multi-agent reinforcement learning has been successfully applied to a number of challenging problems. Despite these empirical successes, theoretical understanding of different algorithms is lacking, primarily due to the curse of dimensionality caused by the exponential growth of the state-action space with the number of agents. We study the fundamental problem of the multi-agent linear quadratic regulator (LQR) in a setting where the agents are partially exchangeable. In this setting, we develop a hierarchical actor-critic algorithm whose computational complexity is independent of the total number of agents, and prove its global linear convergence to the optimal policy. As LQRs are often used to approximate general dynamical systems, this paper provides an important step towards a better understanding of general hierarchical mean-field multi-agent reinforcement learning.
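For intuition on the "natural actor-critic for LQR" ingredient named in the title, the sketch below shows a standard exact natural policy gradient step for a single-agent LQR with linear policy u = -Kx, of the form K <- K - 2*eta*E_K known from the LQR policy-gradient literature. It is an illustrative assumption, not the paper's hierarchical multi-agent algorithm; the dynamics matrices, step size, and simple fixed-point Lyapunov solver are all hypothetical choices for a toy example.

```python
# Minimal sketch: exact natural policy gradient for a single-agent LQR.
# This is NOT the paper's hierarchical algorithm; it only illustrates the
# single-agent natural-gradient update such methods typically build on.
import numpy as np


def solve_lyapunov(M, S, iters=500):
    """Solve P = S + M^T P M by fixed-point iteration (assumes rho(M) < 1)."""
    P = np.zeros_like(S)
    for _ in range(iters):
        P = S + M.T @ P @ M
    return P


def natural_pg_step(K, A, B, Q, R, eta=0.05):
    """One exact natural policy gradient step for the policy u = -K x."""
    A_cl = A - B @ K                                 # closed-loop dynamics
    P_K = solve_lyapunov(A_cl, Q + K.T @ R @ K)      # value matrix of policy K
    E_K = (R + B.T @ P_K @ B) @ K - B.T @ P_K @ A    # "gradient core" of J(K)
    # Natural gradient preconditions by the state covariance Sigma_K,
    # which cancels it out of grad J(K) = 2 E_K Sigma_K.
    return K - 2 * eta * E_K


# Toy example (hypothetical matrices): 2-dim state, 1-dim input.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K = np.zeros((1, 2))                                 # stabilizing initial policy
for _ in range(50):
    K = natural_pg_step(K, A, B, Q, R)
print("K after 50 natural PG steps:", K)
```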