Mobile Robot Planner with Low-cost Cameras Using Deep Reinforcement Learning (2012.11160v1)

Published 21 Dec 2020 in cs.RO and cs.LG

Abstract: This study develops a robot mobility policy based on deep reinforcement learning. Because conventional robotic navigation methods depend on accurate map reconstruction and require high-end sensors, learning-based methods, and deep reinforcement learning in particular, are a promising direction. The problem is modeled as a Markov Decision Process (MDP) in which the agent is a mobile robot: its observation of the environment comes from input sensors such as laser rangefinders or cameras, and its objective is to navigate to a goal without collision. Many deep learning methods already solve this problem; however, bringing robots to market also demands low-cost mass production. This work therefore constructs a pseudo laser-findings system by predicting a depth matrix directly from a single camera image, while retaining stable performance. Experimental results show the approach is directly comparable with methods that use high-priced sensors.
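The page provides no code, but the core idea, converting a depth map predicted from a single camera image into pseudo laser findings, can be sketched in a few lines. The sketch below is illustrative only and assumes details not stated in the abstract: a dense H x W depth map in meters, a thin band of rows around the image center standing in for the 2D scan plane, and image columns treated as approximately evenly spaced in viewing angle. The function name `depth_to_pseudo_laser` and all parameters are hypothetical.

```python
import numpy as np

def depth_to_pseudo_laser(depth, n_beams=36, band=10):
    """Sketch: turn a predicted depth map (H x W, meters) into pseudo
    laser range findings.

    Assumptions (not from the paper): rows around the image center
    stand in for a 2D laser's scan plane, and image columns are
    treated as approximately evenly spaced in viewing angle.
    """
    h, _ = depth.shape
    # A thin horizontal band around the optical center approximates
    # the scan plane of a 2D laser rangefinder.
    scan_band = depth[h // 2 - band // 2 : h // 2 + band // 2, :]
    # Split the columns into angular bins; the nearest depth in each
    # bin plays the role of the laser return for that beam.
    return np.array([b.min() for b in np.array_split(scan_band, n_beams, axis=1)])

# Toy usage: a 120x160 depth map reading 5 m everywhere except an
# obstacle 1 m ahead in the center of the view.
depth = np.full((120, 160), 5.0)
depth[:, 70:90] = 1.0
print(depth_to_pseudo_laser(depth))  # central beams read ~1.0, the rest ~5.0
```

A range vector like this can then stand in for the laser-based observation in the MDP state, which is what lets a policy designed for an expensive laser rangefinder run from a single low-cost camera.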

Citations (1)

