Learning active tactile perception through belief-space control (2312.00215v1)

Published 30 Nov 2023 in cs.RO and cs.AI

Abstract: Robots operating in an open world will encounter novel objects with unknown physical properties, such as mass, friction, or size. These robots will need to sense these properties through interaction prior to performing downstream tasks with the objects. We propose a method that autonomously learns tactile exploration policies by developing a generative world model that is leveraged to 1) estimate the object's physical parameters using a differentiable Bayesian filtering algorithm and 2) develop an exploration policy using an information-gathering model predictive controller. We evaluate our method on three simulated tasks where the goal is to estimate a desired object property (mass, height or toppling height) through physical interaction. We find that our method is able to discover policies that efficiently gather information about the desired property in an intuitive manner. Finally, we validate our method on a real robot system for the height estimation task, where our method is able to successfully learn and execute an information-gathering policy from scratch.

Summary

  • The paper presents an innovative framework that enables robots to autonomously learn tactile exploration strategies via belief-space control.
  • It employs a generative world model with differentiable Bayesian filtering to predict object dynamics based on sensor interactions.
  • Experimental results demonstrate improved data efficiency and accuracy over deep reinforcement learning baselines in property estimation tasks.

Introduction

The paper introduces a framework for teaching robots to infer the physical properties of novel objects without any prior knowledge of them. The aim is to equip robots with the ability to actively perceive properties such as mass, friction, and size through tactile exploration, much as humans handle unfamiliar objects to learn about them.

Methodology

The framework consists of two main components. First, a generative world model predicts how an object will respond to the robot's actions. This model feeds a differentiable Bayesian filtering algorithm that infers the object's dynamical properties from a sequence of sensor readings gathered across physical interactions.
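The filtering component maintains a belief over the unknown physical parameter and updates it after each sensed interaction. A minimal sketch of one such update over a discretized parameter grid is shown below; the function names, the grid, and the Gaussian sensor-noise model are illustrative assumptions, not the paper's implementation, which uses a learned generative world model inside a differentiable filter:

```python
import numpy as np

def bayes_update(belief, param_grid, obs, predict_obs, obs_noise_std):
    """One Bayesian filtering step over a discretized physical parameter.

    belief        : prior probability for each candidate parameter value
    predict_obs   : maps a parameter value to the expected sensor reading
                    (stands in for the learned generative world model)
    obs_noise_std : assumed Gaussian sensor-noise standard deviation
    """
    pred = np.array([predict_obs(p) for p in param_grid])
    # Gaussian log-likelihood of the observation under each hypothesis.
    log_lik = -0.5 * ((obs - pred) / obs_noise_std) ** 2
    log_post = np.log(belief + 1e-12) + log_lik
    post = np.exp(log_post - log_post.max())   # numerically stable
    return post / post.sum()

# Toy usage: infer mass from a force reading f = m * a with a = 9.81.
masses = np.linspace(0.5, 2.0, 16)             # candidate masses (kg)
belief = np.full(len(masses), 1.0 / len(masses))
belief = bayes_update(belief, masses, obs=9.81,
                      predict_obs=lambda m: m * 9.81, obs_noise_std=0.5)
```

Each interaction sharpens the belief around parameter values whose predicted observations match what the sensors actually reported.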

Second, an exploration policy is derived through an information-gathering model predictive controller. The policy produces different motions depending on which property is being estimated. Importantly, the robot discovers these exploration strategies autonomously rather than relying on pre-programmed or human-engineered motions.
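The controller's guiding idea is to pick the action expected to shrink uncertainty in the belief the most. A toy one-step sketch of this, greedy expected-entropy minimization over a discrete action set, is given below; all names and the force model are illustrative assumptions rather than the paper's MPC formulation:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def bayes_update(belief, grid, obs, predict_obs, noise_std):
    # Same discretized Bayes step as before, with Gaussian sensor noise.
    pred = np.array([predict_obs(q) for q in grid])
    log_post = np.log(belief + 1e-12) - 0.5 * ((obs - pred) / noise_std) ** 2
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def select_action(belief, grid, actions, predict_obs, noise_std):
    """Pick the action with the lowest expected posterior entropy."""
    best_action, best_h = None, np.inf
    for a in actions:
        # Average the posterior entropy over parameter hypotheses,
        # weighted by the current belief in each hypothesis.
        h = sum(w * entropy(bayes_update(belief, grid, predict_obs(p, a),
                                         lambda q: predict_obs(q, a),
                                         noise_std))
                for w, p in zip(belief, grid))
        if h < best_h:
            best_action, best_h = a, h
    return best_action

# Toy usage: pushing harder (larger acceleration) makes force readings
# f = m * a more discriminative relative to the fixed sensor noise.
masses = np.linspace(0.5, 2.0, 16)
prior = np.full(len(masses), 1.0 / len(masses))
chosen = select_action(prior, masses, actions=[0.1, 5.0],
                       predict_obs=lambda m, a: m * a, noise_std=0.5)
```

Here the harder push wins because it spreads the hypothesized force readings farther apart than the sensor noise, which is the same intuition behind the emergent pushing and poking behaviors the paper reports.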

Experimental Procedures

The method was evaluated in simulation and on a real robot system. Three tasks tested its ability to autonomously learn and refine exploration strategies: estimating an object's mass, height, and minimum toppling height through interactions such as pushing and poking. The height-estimation task was additionally validated on physical hardware.

A comparative analysis against a deep reinforcement learning baseline showed that the proposed method yields better data efficiency and more accurate property estimates, underlining its potential for robots that must physically interrogate their environments.

Conclusion

The paper marks a notable step forward in robot perception. By demonstrating that robots can independently learn how to manipulate objects to gather task-relevant information, the framework stands to improve the adaptability of robotic systems in unstructured environments. Its successful application in simulated tasks and on a real robot system supports the viability of this active perception approach.