Deep Curiosity Loops in Social Environments

(1806.03645)
Published Jun 10, 2018 in cs.NE

Abstract

Inspired by infants' intrinsic motivation to learn, which values informative sensory channels contingent on their immediate social environment, we developed a deep curiosity loop (DCL) architecture. The DCL is composed of a learner, which attempts to learn a forward model of the agent's state-action transitions, and a novel reinforcement-learning (RL) component, namely an Action-Convolution Deep Q-Network, which uses the learner's prediction error as its reward. The agent's environment consists of visual social scenes drawn from sitcom video streams; hence both the learner and the RL component are constructed as deep convolutional neural networks. The learner learns to predict the zeroth-order dynamics of the visual scenes, yielding intrinsic rewards proportional to changes within the agent's social environment. The sources of these socially informative changes in the sitcom are predominantly motions of faces and hands, leading to unsupervised, curiosity-based learning of social-interaction features: face and hand detection is represented by the value function, and social-interaction optical flow by the policy. Our results suggest that face and hand detection are emergent properties of curiosity-based learning embedded in social environments.
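To make the loop concrete, here is a minimal sketch in PyTorch, assuming 64x64 grayscale frames and a discrete action set. The class names (ForwardModel, QNetwork), network shapes, and the curiosity_step helper are illustrative choices, not the authors' implementation; in particular, the paper's action-convolution mechanism is replaced by a plain convolutional DQN for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardModel(nn.Module):
    """Learner: predicts the next frame from the current frame and action."""
    def __init__(self, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.action_embed = nn.Embedding(n_actions, 32)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame, action):
        h = self.encoder(frame)                          # (B, 32, H/4, W/4)
        a = self.action_embed(action)[:, :, None, None]  # (B, 32, 1, 1)
        return self.decoder(h + a)                       # predicted next frame

class QNetwork(nn.Module):
    """RL component: a plain convolutional DQN standing in for the
    paper's Action-Convolution Deep Q-Network."""
    def __init__(self, n_actions, frame_hw=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * (frame_hw // 4) ** 2, n_actions)

    def forward(self, frame):
        return self.head(self.features(frame))

def curiosity_step(fm, q, fm_opt, q_opt, frame, action, next_frame, gamma=0.99):
    """One iteration of the loop: train the learner on the observed
    transition, then feed its prediction error to the DQN as reward."""
    # 1) Learner update: minimise next-frame prediction error.
    pred = fm(frame, action)
    err = F.mse_loss(pred, next_frame, reduction="none").mean(dim=(1, 2, 3))
    fm_loss = err.mean()
    fm_opt.zero_grad(); fm_loss.backward(); fm_opt.step()

    # 2) Intrinsic reward: the (detached) per-sample prediction error,
    #    so the agent is rewarded where the scene changes unpredictably.
    reward = err.detach()

    # 3) Standard one-step Q-learning update with the intrinsic reward.
    q_sa = q(frame).gather(1, action[:, None]).squeeze(1)
    with torch.no_grad():
        target = reward + gamma * q(next_frame).max(1).values
    q_loss = F.mse_loss(q_sa, target)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
```

The action semantics are left abstract in this sketch; the paper's claim is that, driven only by this prediction-error reward, value and policy come to concentrate on where faces and hands move on screen.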
