
Efficient Replay Memory Architectures in Multi-Agent Reinforcement Learning for Traffic Congestion Control (2407.16034v1)

Published 22 Jul 2024 in eess.SY and cs.SY

Abstract: Episodic control, inspired by the role of episodic memory in the human brain, has been shown to improve the sample efficiency of model-free reinforcement learning by reusing high-return past experiences. However, the memory growth of episodic control is undesirable in large-scale multi-agent problems such as vehicle traffic management. This paper proposes a novel replay memory architecture, called Dual-Memory Integrated Learning, that augments multi-agent reinforcement learning methods for congestion control via adaptive light signal scheduling. Our dual-memory architecture mimics two core capabilities of human decision-making. First, it relies on diverse types of memory (semantic and episodic, short-term and long-term) to remember high-return states that occur often in the network and to filter out states that do not. Second, it employs equivalence classes to group together similar state-action pairs that can be controlled using the same action (i.e., light signal sequence). Theoretical analyses establish memory growth bounds, and simulation experiments on several intersection networks showcase improved congestion performance (e.g., vehicle throughput) from our method.
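The abstract does not specify the authors' implementation, but the two ideas it names (a short-term/long-term dual memory and equivalence-class grouping of state-action pairs) can be sketched in a few lines. The following minimal Python sketch is an illustration under assumptions: names such as DualMemory, equivalence_key, and promote_threshold are hypothetical, and coarse discretization stands in for whatever similarity grouping the paper actually uses.

```python
# Minimal sketch (not the paper's implementation) of a dual-memory replay
# buffer with equivalence-class grouping. All names and thresholds here are
# illustrative assumptions.
from collections import defaultdict, deque


def equivalence_key(state, action, resolution=1.0):
    """Map similar state-action pairs to one key via coarse discretization.

    This is a stand-in for the paper's grouping of state-action pairs that
    can be controlled with the same light signal sequence.
    """
    return (tuple(round(s / resolution) for s in state), action)


class DualMemory:
    """Short-term episodic buffer plus long-term semantic store.

    Transitions whose equivalence class is visited often enough are promoted
    from the bounded short-term buffer to long-term memory; rare classes are
    simply evicted as the deque rolls over, keeping memory growth bounded.
    """

    def __init__(self, short_capacity=1000, promote_threshold=5):
        self.short_term = deque(maxlen=short_capacity)  # episodic, recent
        self.long_term = {}                             # semantic, per class
        self.counts = defaultdict(int)                  # visit frequency
        self.promote_threshold = promote_threshold

    def add(self, state, action, ret):
        key = equivalence_key(state, action)
        self.short_term.append((key, ret))
        self.counts[key] += 1
        # Promote frequently visited, high-return classes to long-term memory.
        if self.counts[key] >= self.promote_threshold:
            best = self.long_term.get(key, float("-inf"))
            self.long_term[key] = max(best, ret)

    def lookup(self, state, action):
        """Return the best stored return for this equivalence class, if any."""
        return self.long_term.get(equivalence_key(state, action))


# Usage: store a transition, then query a nearby state in the same class.
memory = DualMemory(promote_threshold=1)
memory.add(state=(3.2, 7.9), action=1, ret=12.5)
print(memory.lookup(state=(3.4, 8.1), action=1))  # 12.5: same equivalence class
```

The design choice worth noting is that the long-term store is keyed by equivalence class rather than by raw state, which is what lets memory stay bounded even as the number of observed traffic states grows.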
