Multimodal VAE Active Inference Controller

(2103.04412)
Published Mar 7, 2021 in cs.RO and cs.AI

Abstract

Active inference, a theoretical construct inspired by brain processing, is a promising alternative approach to controlling artificial agents. However, current methods do not yet scale to high-dimensional inputs in continuous control. Here we present a novel active inference torque controller for industrial arms that maintains the adaptive characteristics of previous proprioceptive approaches while also enabling large-scale multimodal integration (e.g., raw images). We extended our previous mathematical formulation by including multimodal state representation learning using a linearly coupled multimodal variational autoencoder. We evaluated our model on a simulated 7DOF Franka Emika Panda robot arm and compared its behavior with a previous active inference baseline and the Panda's built-in optimized controller. Results showed improved tracking and control in goal-directed reaching, owing to the increased representational power, as well as high robustness to noise and adaptability to changes in environmental conditions and robot parameters, without relearning the generative models or retuning parameters.
