
Deep Reinforcement Learning for 2D Physics-Based Object Manipulation in Clutter (2312.04570v1)

Published 14 Nov 2023 in cs.RO

Abstract: Deep Reinforcement Learning (DRL) is a quickly evolving research field rooted in operations research and behavioural psychology, with potential applications across various domains, including robotics. This thesis delineates the background of modern Reinforcement Learning (RL), starting with the framework constituted by Markov decision processes, Markov properties, goals and rewards, agent-environment interactions, and policies. We explain the main types of algorithms commonly used in RL, including value-based, policy-gradient, and actor-critic methods, with special emphasis on DQN, A2C, and PPO. We then give a short literature review of some widely adopted frameworks for implementing RL algorithms and environments. Subsequently, we present the Bidimensional Gripper Environment (BGE), a virtual simulator we developed on top of the Pymunk physics engine to analyse top-down bidimensional object manipulation. The methodology section frames our agent-environment interaction as a Markov decision process so that we can apply our RL algorithms. We list various goal-formulation strategies, including reward shaping and curriculum learning, and apply several observation-preprocessing steps to reduce the required computational workload. In the experimental phase, we run through a series of scenarios of increasing difficulty: we start with a simple static scenario and gradually increase the amount of stochasticity. Whenever the agents struggle to learn, we counteract this by increasing the degree of reward shaping and curriculum learning. These experiments demonstrate the substantial limitations and pitfalls of model-free algorithms under changing dynamics. In conclusion, we summarise our findings and remarks, then outline potential future work to improve our methodology and possibly extend it to real-world systems.
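Since the abstract only sketches how BGE frames top-down manipulation as an MDP with a shaped reward, the following minimal Pymunk sketch illustrates the general pattern. It is not the thesis's actual BGE implementation: the class name, goal position, force scale, success threshold, and the distance-based shaped reward are all illustrative assumptions chosen to mirror the reward-shaping strategy the abstract describes.

```python
import random

import pymunk

GOAL = pymunk.Vec2d(400.0, 300.0)  # assumed goal position, not from the thesis


class PushEnv:
    """Toy top-down pushing task: drive a gripper to push a box to GOAL."""

    def __init__(self):
        self.space = pymunk.Space()
        self.space.gravity = (0, 0)  # top-down view: no in-plane gravity

        # Gripper: a dynamic circular body the agent drives with planar forces.
        self.gripper = pymunk.Body(1.0, pymunk.moment_for_circle(1.0, 0, 10))
        self.gripper.position = (100, 100)
        self.space.add(self.gripper, pymunk.Circle(self.gripper, 10))

        # Object to be pushed toward the goal.
        mass, size = 0.5, (20, 20)
        self.box = pymunk.Body(mass, pymunk.moment_for_box(mass, size))
        self.box.position = (200, 200)
        self.space.add(self.box, pymunk.Poly.create_box(self.box, size))

    def observe(self):
        # Low-dimensional observation: positions and velocities of both bodies.
        return (*self.gripper.position, *self.gripper.velocity,
                *self.box.position, *self.box.velocity)

    def step(self, action):
        # action = (fx, fy): planar force applied at the gripper's centre.
        self.gripper.apply_force_at_local_point(action, (0, 0))
        self.space.step(1 / 60.0)  # advance the physics by one frame

        # Shaped reward: negative box-to-goal distance, giving a denser
        # learning signal than a sparse success bonus alone.
        dist = self.box.position.get_distance(GOAL)
        reward = -dist / 100.0
        done = dist < 15.0  # assumed success threshold
        return self.observe(), reward, done


# Example rollout with random planar forces.
env = PushEnv()
for _ in range(5):
    obs, reward, done = env.step((random.uniform(-50, 50),
                                  random.uniform(-50, 50)))
```

A model-free DRL agent such as DQN, A2C, or PPO would interact with this loop by repeatedly choosing a force action, observing the returned state and shaped reward, and updating its policy accordingly.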

