
Learning Visual Tracking and Reaching with Deep Reinforcement Learning on a UR10e Robotic Arm (2308.14652v1)

Published 28 Aug 2023 in cs.AI and cs.RO

Abstract: As technology progresses, industrial and scientific robots are increasingly being used in diverse settings. In many cases, however, programming a robot to perform these tasks is technically complex and costly. To maximize the utility of robots in industrial and scientific settings, they require the ability to quickly shift from one task to another. Reinforcement learning algorithms provide the potential to enable robots to learn optimal solutions to complete new tasks without directly reprogramming them. The current state-of-the-art in reinforcement learning, however, generally relies on fast simulations and parallelization to achieve optimal performance; these are often not possible in robotics applications. Thus, a significant amount of research is required to facilitate the efficient and safe training and deployment of industrial and scientific reinforcement learning robots. This technical report outlines our initial research into the application of deep reinforcement learning on an industrial UR10e robot. The report describes the reinforcement learning environments created to facilitate policy learning with the UR10e, a robotic arm from Universal Robots, and presents our initial results in training deep Q-learning and proximal policy optimization agents on the developed reinforcement learning environments. Our results show that proximal policy optimization learns a better, more stable policy with less data than deep Q-learning. The corresponding code for this work is available at https://github.com/cbellinger27/bendRL_reacher_tracker
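The report's core comparison trains deep Q-learning and proximal policy optimization agents on custom reaching and tracking environments for the UR10e. As a rough illustration of how such a setup is typically wired together, the sketch below shows a minimal Gymnasium-style environment skeleton and a PPO training call via stable-baselines3. The action space, observation layout, reward shaping, and termination threshold are hypothetical placeholders, not the paper's actual configuration; the real environments are in the linked repository.

```python
# Minimal sketch of a reach-style RL environment plus a PPO training
# call. All environment details (spaces, dynamics, reward, threshold)
# are illustrative assumptions, not the paper's actual setup.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO


class ReacherSketchEnv(gym.Env):
    """Toy stand-in for a UR10e reaching task (hypothetical)."""

    def __init__(self):
        super().__init__()
        # Assumption: a small discrete set of joint-motion commands.
        self.action_space = spaces.Discrete(7)
        # Assumption: normalized joint angles plus target position.
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-1.0, 1.0, size=8).astype(np.float32)
        return self.state, {}

    def step(self, action):
        # Placeholder dynamics: the chosen action nudges one state
        # dimension; on hardware this would instead command the UR10e
        # and read the new state back from joint sensors and the camera.
        nudge = np.zeros(8, dtype=np.float32)
        nudge[int(action)] = 0.05 * np.sign(self.state[int(action)])
        self.state = np.clip(self.state - nudge, -1.0, 1.0)
        distance = float(np.linalg.norm(self.state[:3]))  # proxy for arm-to-target distance
        reward = -distance            # dense shaping toward the target
        terminated = distance < 0.05  # assumed "reached" threshold
        return self.state, reward, terminated, False, {}


env = ReacherSketchEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
```

Swapping PPO for stable_baselines3.DQN yields the analogous value-based baseline; the report finds that PPO reaches a better, more stable policy with less data than deep Q-learning.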
