
Jointly Learning Agent and Lane Information for Multimodal Trajectory Prediction

(2111.13350)
Published Nov 26, 2021 in cs.LG, cs.CV, and cs.RO

Abstract

Predicting the plausible future trajectories of nearby agents is a core challenge for the safety of Autonomous Vehicles, and it mainly depends on two external cues: the dynamic neighboring agents and the static scene context. Recent approaches have made great progress in characterizing the two cues separately. However, they ignore the correlation between the two cues, and most of them struggle to achieve map-adaptive prediction. In this paper, we use lanes as scene data and propose a staged network that Jointly learns Agent and Lane information for Multimodal Trajectory Prediction (JAL-MTP). JAL-MTP uses a Social to Lane (S2L) module to jointly represent the static lanes and the dynamic motion of neighboring agents as instance-level lanes, a Recurrent Lane Attention (RLA) mechanism that uses the instance-level lanes to predict map-adaptive future trajectories, and two selectors to identify typical and reasonable trajectories. Experiments conducted on the public Argoverse dataset demonstrate that JAL-MTP significantly outperforms existing models both quantitatively and qualitatively.
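The abstract outlines a staged design: S2L fuses dynamic agent motion into static lane features, RLA decodes map-adaptive multimodal futures by recurrently attending over those instance-level lanes, and selectors score the candidate modes. Below is a minimal PyTorch sketch of how such a pipeline could be wired together; every shape, layer choice, and hyperparameter here (GRU encoders, attention heads, 6 modes, a 30-step horizon) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class S2L(nn.Module):
    """Social to Lane (sketch): fuse neighboring-agent motion into each
    lane's features, yielding 'instance-level' lane representations."""

    def __init__(self, d=64):
        super().__init__()
        self.agent_enc = nn.GRU(input_size=2, hidden_size=d, batch_first=True)
        self.lane_enc = nn.Linear(2, d)  # per-waypoint lane embedding (assumed)
        self.fuse = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, agent_hist, lanes):
        # agent_hist: (B, N, T, 2) past xy of N agents; lanes: (B, M, L, 2) waypoints
        B, N = agent_hist.shape[:2]
        _, h = self.agent_enc(agent_hist.flatten(0, 1))   # h: (1, B*N, d)
        agent_feat = h.squeeze(0).view(B, N, -1)          # (B, N, d)
        lane_feat = self.lane_enc(lanes).mean(dim=2)      # (B, M, d), pooled waypoints
        # each lane queries the dynamic agents around it
        fused, _ = self.fuse(lane_feat, agent_feat, agent_feat)
        return agent_feat, fused                          # fused = instance-level lanes


class RLA(nn.Module):
    """Recurrent Lane Attention (sketch): roll the future out step by step,
    re-attending to the instance-level lanes at every step."""

    def __init__(self, d=64, horizon=30, modes=6):
        super().__init__()
        self.horizon = horizon
        self.mode_emb = nn.Embedding(modes, d)            # one latent per mode
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.cell = nn.GRUCell(d, d)
        self.out = nn.Linear(d, 2)

    def forward(self, target_feat, lane_feat):
        # target_feat: (B, d) target-agent encoding; lane_feat: (B, M, d)
        preds = []
        for k in range(self.mode_emb.num_embeddings):
            h = target_feat + self.mode_emb.weight[k]     # condition on mode k
            steps = []
            for _ in range(self.horizon):
                ctx, _ = self.attn(h.unsqueeze(1), lane_feat, lane_feat)
                h = self.cell(ctx.squeeze(1), h)
                steps.append(self.out(h))                 # (B, 2) next xy
            preds.append(torch.stack(steps, dim=1))       # (B, horizon, 2)
        return torch.stack(preds, dim=1)                  # (B, modes, horizon, 2)


class Selector(nn.Module):
    """Score candidate trajectories; two such heads could implement the
    'typical' and 'reasonable' selection mentioned in the abstract."""

    def __init__(self, horizon=30):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(horizon * 2, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, trajs):                             # (B, K, horizon, 2)
        return self.score(trajs.flatten(2)).squeeze(-1)   # (B, K) mode logits


# Toy forward pass: 2 scenes, 4 agents (index 0 = target), 8 candidate lanes.
s2l, rla, sel = S2L(), RLA(), Selector()
agents = torch.randn(2, 4, 20, 2)
lanes = torch.randn(2, 8, 10, 2)
agent_feat, lane_feat = s2l(agents, lanes)
trajs = rla(agent_feat[:, 0], lane_feat)                  # (2, 6, 30, 2)
scores = sel(trajs)                                       # (2, 6)
```

The map-adaptive ingredient in this reading is that the decoder re-attends to the lane features at every step, so each predicted trajectory can track whatever lane topology the map provides rather than a fixed set of anchors.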
