Emergent Mind

Abstract

In this paper, we present a data-driven approach to generating realistic steering behaviors for virtual crowds in crowd simulation. We take advantage of both rule-based and data-driven models by applying interaction patterns discovered from crowd videos. Unlike existing example-based models, in which current states are matched directly to states extracted from crowd videos, our approach adopts a hierarchical mechanism to generate the steering behaviors of agents. First, each agent is classified into one of the interaction patterns that are automatically discovered from crowd videos before simulation. Then the best-matching action is selected from the associated interaction pattern to generate the agent's steering behavior. In this way, agents avoid the simple state matching of traditional example-based approaches, can perform a wider variety of steering behaviors, and better mimic the cognitive process of pedestrians. Simulation results on scenarios with different crowd densities and main motion directions demonstrate that our approach outperforms two state-of-the-art simulation models in terms of prediction accuracy. Moreover, our approach is efficient enough to run at interactive rates in real-time simulation.
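The two-step mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the pattern representation (a centroid plus example state–action pairs), the nearest-neighbor matching, and all names (`classify_pattern`, `select_action`) are assumptions for the sake of the example; the paper discovers its interaction patterns automatically from crowd videos.

```python
import math

def dist(a, b):
    # Euclidean distance between two state vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_pattern(state, patterns):
    # Step 1: assign the agent's current state to the nearest
    # interaction pattern, here represented by a centroid
    # (a simplifying assumption for this sketch).
    return min(patterns, key=lambda p: dist(state, p["centroid"]))

def select_action(state, pattern):
    # Step 2: within the chosen pattern, pick the example whose state
    # best matches the agent's state, and return its recorded action
    # (e.g., a steering velocity).
    best = min(pattern["examples"], key=lambda ex: dist(state, ex["state"]))
    return best["action"]

# Toy data: two hypothetical patterns with example (state, action) pairs.
patterns = [
    {"centroid": (0.0, 0.0),
     "examples": [{"state": (0.1, 0.0), "action": (1.0, 0.0)}]},
    {"centroid": (5.0, 5.0),
     "examples": [{"state": (5.2, 4.9), "action": (0.0, 1.0)}]},
]

agent_state = (4.8, 5.1)
pattern = classify_pattern(agent_state, patterns)
steering = select_action(agent_state, pattern)
print(steering)  # → (0.0, 1.0)
```

Because matching happens only within the agent's assigned pattern rather than against the whole example database, the per-agent action lookup stays cheap, which is consistent with the interactive rates the abstract reports.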

