Learning Real-World Robot Policies by Dreaming (1805.07813v4)

Published 20 May 2018 in cs.RO, cs.CV, and stat.ML

Abstract: Learning to control robots directly from images is a primary challenge in robotics. Many existing reinforcement learning approaches require iteratively obtaining millions of robot samples to learn a policy, which can take significant time. In this paper, we focus on learning a realistic world model that captures the dynamics of scene changes conditioned on robot actions. Our dreaming model can emulate samples equivalent to a sequence of images from the actual environment by learning an action-conditioned future representation/scene regressor. This allows the agent to learn action policies (i.e., visuomotor policies) by interacting with the dreaming model rather than the real world. We experimentally confirm that our dreaming model enables robots to learn policies that transfer to the real world.
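To make the idea concrete, below is a minimal sketch of an action-conditioned next-state regressor and a "dreamed" rollout used to train a policy without real-robot interaction. The class names, dimensions, and MLP architecture are illustrative assumptions, not the paper's actual model (which operates on learned image representations).

```python
# Minimal sketch, assuming a latent scene representation has already been learned.
# All names, dimensions, and architectures here are hypothetical placeholders.
import torch
import torch.nn as nn

class DreamModel(nn.Module):
    """Regresses the next latent scene representation from the current one and an action."""
    def __init__(self, latent_dim=64, action_dim=4, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, latent, action):
        return self.net(torch.cat([latent, action], dim=-1))

def dream_rollout(model, policy, z0, horizon=10):
    """Generate an imagined trajectory by repeatedly applying the learned dynamics."""
    z, trajectory = z0, []
    for _ in range(horizon):
        a = policy(z)        # the policy acts on the latent state
        z = model(z, a)      # dreamed next state; no real-world samples are collected
        trajectory.append((z, a))
    return trajectory

# Usage: roll out an imagined trajectory from an initial latent state.
model = DreamModel()
policy = nn.Sequential(nn.Linear(64, 4), nn.Tanh())  # placeholder visuomotor policy head
trajectory = dream_rollout(model, policy, torch.zeros(1, 64))
```

The key design point the abstract describes is that policy optimization consumes these imagined transitions instead of millions of real robot samples, so the expensive real-world interaction is limited to collecting data for fitting the world model.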

Authors (3)
  1. AJ Piergiovanni (40 papers)
  2. Alan Wu (9 papers)
  3. Michael S. Ryoo (75 papers)
Citations (30)
