Abstract

Deep Reinforcement Learning (DRL) is a quickly evolving research field rooted in operations research and behavioural psychology, with potential applications extending across various domains, including robotics. This thesis delineates the background of modern Reinforcement Learning (RL), starting with the framework constituted by Markov decision processes: the Markov property, goals and rewards, agent-environment interaction, and policies. We explain the main types of algorithms commonly used in RL, including value-based, policy-gradient, and actor-critic methods, with a special emphasis on DQN, A2C, and PPO. We then give a short literature review of some widely adopted frameworks for implementing RL algorithms and environments. Subsequently, we present the Bidimensional Gripper Environment (BGE), a virtual simulator we developed on top of the Pymunk physics engine to analyse top-down bidimensional object manipulation. The methodology section frames our agent-environment interaction as a Markov decision process, so that we can apply our RL algorithms. We list various goal-formulation strategies, including reward shaping and curriculum learning, and we apply several observation-preprocessing steps to reduce the computational workload. In the experimental phase, we run through a series of scenarios of increasing difficulty, starting from a simple static scenario and gradually increasing the amount of stochasticity. Whenever the agents struggle to learn, we counteract this by increasing the degree of reward shaping and curriculum learning. These experiments demonstrate the substantial limitations and pitfalls of model-free algorithms under changing dynamics. In conclusion, we summarise our findings and remarks, and we outline potential future work to improve our methodology and possibly extend it to real-world systems.
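To make the formulation concrete, the sketch below shows how a top-down manipulation task like BGE could be cast as a Markov decision process behind the standard Gymnasium interface, so that off-the-shelf implementations such as Stable-Baselines3's PPO can be trained on it. This is a minimal illustrative sketch, not the thesis's actual implementation: the class name, the 64x64 grayscale observation, the planar velocity action, and the placeholder dynamics are all assumptions.

```python
# Illustrative sketch only: "BidimensionalGripperEnv" is a hypothetical
# stand-in for the thesis's BGE simulator, cast as a Markov decision process
# behind the standard Gymnasium interface.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO


class BidimensionalGripperEnv(gym.Env):
    """Top-down 2D manipulation task exposed through the Gymnasium MDP API."""

    def __init__(self):
        # Preprocessed observation: a coarse 64x64 grayscale view of the scene.
        self.observation_space = spaces.Box(0, 255, shape=(64, 64, 1), dtype=np.uint8)
        # Continuous gripper command, e.g. a planar (dx, dy) velocity.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros((64, 64, 1), dtype=np.uint8)  # placeholder initial observation
        return obs, {}

    def step(self, action):
        # A real implementation would advance the Pymunk simulation here.
        obs = np.zeros((64, 64, 1), dtype=np.uint8)   # placeholder next observation
        reward = 0.0        # shaped reward terms (e.g. distance to goal) would go here
        terminated = False  # True once the manipulation goal is reached
        truncated = False   # True when an episode time limit is hit
        return obs, reward, terminated, truncated, {}


# With the MDP interface in place, an off-the-shelf PPO agent can be trained:
model = PPO("CnnPolicy", BidimensionalGripperEnv(), verbose=1)
model.learn(total_timesteps=100_000)
```

Wrapping the simulator in the standard `Env` interface is what lets the same agent code run unchanged across scenarios of increasing difficulty: only `reset` and `step` encode the environment's dynamics and reward.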
