
Learning To Play Atari Games Using Dueling Q-Learning and Hebbian Plasticity (2405.13960v1)

Published 22 May 2024 in cs.AI and cs.LG

Abstract: In this work, an advanced deep reinforcement learning architecture is used to train neural network agents to play Atari games. Given only the raw game pixels, the action space, and reward information, the system can train agents to play any Atari game. First, the system uses deep Q-networks and dueling Q-networks to train efficient agents, the same techniques DeepMind used to train agents that beat human players at Atari games. As an extension, plastic neural networks are used as agents, and their feasibility in this scenario is analyzed. The plasticity implementation is based on backpropagation and the Hebbian update rule. Plastic neural networks offer features such as lifelong learning after the initial training, which makes them well suited to adaptive learning environments. As a new analysis of plasticity in this context, this work may provide valuable insights and direction for future work.
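
The abstract names two concrete techniques: the dueling Q-network decomposition, Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'), and Hebbian plasticity trained with backpropagation. As a rough illustration only, here is a minimal PyTorch sketch of both pieces, in the style of differentiable plasticity (learned base weights plus a learned coefficient modulating a running Hebbian trace). The class names, layer sizes, eta value, and tanh nonlinearity are assumptions made for the example, not details taken from the paper.

```python
# Illustrative sketch of the two techniques named in the abstract; this is
# NOT the paper's implementation. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn


class DuelingHead(nn.Module):
    """Dueling Q-network head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, in_features: int, num_actions: int):
        super().__init__()
        self.value = nn.Linear(in_features, 1)                 # state-value stream
        self.advantage = nn.Linear(in_features, num_actions)   # advantage stream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.value(x)                       # (batch, 1)
        a = self.advantage(x)                   # (batch, num_actions)
        # Subtracting the mean advantage makes V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


class PlasticLinear(nn.Module):
    """Linear layer with a Hebbian trace: the effective weight is
    w + alpha * hebb, where w and alpha are learned by backprop and
    hebb keeps updating from pre-/post-synaptic activity at run time."""

    def __init__(self, in_features: int, out_features: int, eta: float = 0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.alpha = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.eta = eta  # Hebbian trace learning rate (assumed value)

    def forward(self, x: torch.Tensor, hebb: torch.Tensor):
        # x: (batch, in); hebb: (batch, out, in)
        effective = self.w + self.alpha * hebb              # (batch, out, in)
        y = torch.tanh(torch.einsum('boi,bi->bo', effective, x))
        # Hebbian update rule: decay the old trace, then add the outer
        # product of post-synaptic (y) and pre-synaptic (x) activity.
        hebb = (1.0 - self.eta) * hebb + self.eta * torch.einsum('bo,bi->boi', y, x)
        return y, hebb


if __name__ == "__main__":
    # Toy forward pass; 512 features and 6 actions are arbitrary choices.
    plastic = PlasticLinear(512, 512)
    head = DuelingHead(in_features=512, num_actions=6)
    x = torch.randn(32, 512)
    hebb = torch.zeros(32, 512, 512)    # fresh trace, e.g. per episode
    h, hebb = plastic(x, hebb)
    q = head(h)                         # (32, 6) Q-values
```

At inference time the Hebbian trace keeps updating even though w and alpha are frozen, which is the "lifelong learning after the initial training" property the abstract highlights.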



