
Learning latent state representation for speeding up exploration (1905.12621v1)

Published 27 May 2019 in cs.LG and stat.ML

Abstract: Exploration is an extremely challenging problem in reinforcement learning, especially in high-dimensional state and action spaces and when only sparse rewards are available. Effective representations can indicate which components of the state are task relevant and thus reduce the dimensionality of the space to explore. In this work, we take a representation learning viewpoint on exploration, utilizing prior experience to learn effective latent representations, which can subsequently indicate which regions to explore. Prior experience on separate but related tasks helps learn representations of the state that are effective at predicting instantaneous rewards. These learned representations can then be used with an entropy-based exploration method to perform exploration effectively in high-dimensional spaces by lowering the dimensionality of the search space. We show the benefits of this representation for meta-exploration in a simulated object pushing environment.
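The abstract outlines a two-stage recipe: first, fit a low-dimensional latent encoding of the state by regressing instantaneous rewards on prior, related tasks; second, drive exploration on a new task with an entropy-style novelty signal computed in that latent space. The PyTorch sketch below illustrates the general idea only; the module names (`LatentEncoder`, `RewardHead`), the histogram-based entropy bonus, and all hyperparameters are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch (not the authors' code): learn a latent encoder on prior tasks
# by predicting instantaneous rewards, then use the latent space to shape
# exploration. All names and the bonus estimator are illustrative assumptions.
import torch
import torch.nn as nn


class LatentEncoder(nn.Module):
    """Maps raw states to a low-dimensional latent code z = f(s)."""
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class RewardHead(nn.Module):
    """Predicts the instantaneous reward from the latent code."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.net = nn.Linear(latent_dim, 1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).squeeze(-1)


def pretrain_on_prior_tasks(encoder, head, batches, epochs=10, lr=1e-3):
    """Fit encoder + reward head so the latent keeps only reward-relevant state."""
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for states, rewards in batches:       # (N, state_dim), (N,)
            opt.zero_grad()
            loss = loss_fn(head(encoder(states)), rewards)
            loss.backward()
            opt.step()


def entropy_bonus(encoder, states, bins=16):
    """Crude entropy-style novelty signal: histogram visited latent codes per
    dimension and reward visits to low-density bins (an assumption, not the
    paper's exact estimator)."""
    with torch.no_grad():
        z = encoder(states)                   # (N, latent_dim)
    bonus = torch.zeros(z.shape[0])
    for d in range(z.shape[1]):
        col = z[:, d]
        hist = torch.histc(col, bins=bins)    # counts per bin over the data range
        edges = torch.linspace(col.min().item(), col.max().item(), bins + 1)
        idx = torch.bucketize(col, edges[1:-1])
        density = hist[idx] / z.shape[0]
        bonus += -torch.log(density + 1e-8)   # rarer latent regions => larger bonus
    return bonus / z.shape[1]
```

The bonus would typically be added to the task reward during exploration on the new task, so the agent seeks out under-visited regions of the reward-relevant latent space rather than the full state space.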

Authors (4)
  1. Giulia Vezzani (12 papers)
  2. Abhishek Gupta (226 papers)
  3. Lorenzo Natale (68 papers)
  4. Pieter Abbeel (372 papers)
Citations (25)
