Multimodal Fusion Using Deep Learning Applied to Driver's Referencing of Outside-Vehicle Objects (2107.12167v1)

Published 26 Jul 2021 in cs.HC, cs.CV, and cs.LG

Abstract: There is growing interest in more intelligent, natural user interaction with the car. Hand gestures and speech are already being applied for driver-car interaction, and multimodal approaches are showing promise in the automotive industry. In this paper, we utilize deep learning to build a multimodal fusion network for referencing objects outside the vehicle. We use features from gaze, head pose, and finger pointing simultaneously to precisely predict the referenced objects in different car poses. We demonstrate the practical limitations of each modality when used for a natural form of referencing, specifically inside the car. As our results show, we overcome the modality-specific limitations, to a large extent, by the addition of other modalities. This work highlights the importance of multimodal sensing, especially when moving towards natural user interaction. Furthermore, our user-based analysis shows noteworthy differences in recognition of user behavior depending upon the vehicle pose.
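The abstract describes fusing gaze, head pose, and finger-pointing features to predict which outside-vehicle object the driver is referencing. The paper does not publish its architecture here, but the idea can be sketched as an early-fusion classifier: concatenate the per-modality feature vectors and pass them through a small network that scores candidate objects. The feature dimensions, layer sizes, and weights below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Hypothetical feature sizes (assumptions, not from the paper):
# each modality reduced to a small direction/pose vector.
GAZE_DIM, HEAD_DIM, POINT_DIM = 3, 3, 3
HIDDEN, N_OBJECTS = 16, 8  # candidate outside-vehicle objects

rng = np.random.default_rng(0)

# Toy weights for a one-hidden-layer fusion network (random, untrained).
W1 = rng.standard_normal((GAZE_DIM + HEAD_DIM + POINT_DIM, HIDDEN))
W2 = rng.standard_normal((HIDDEN, N_OBJECTS))

def fuse_and_predict(gaze, head_pose, pointing):
    """Early fusion: concatenate per-modality features, then score objects."""
    x = np.concatenate([gaze, head_pose, pointing])
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    logits = h @ W2
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()                 # probability per candidate object

# Example call with dummy modality features.
probs = fuse_and_predict(rng.standard_normal(GAZE_DIM),
                         rng.standard_normal(HEAD_DIM),
                         rng.standard_normal(POINT_DIM))
print("predicted object index:", int(probs.argmax()))
```

A design note: early fusion like this lets the network compensate when one modality is unreliable (e.g. pointing occluded by the steering wheel), which matches the paper's finding that adding modalities overcomes modality-specific limitations.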

Authors (5)
  1. Abdul Rafey Aftab (4 papers)
  2. Michael von der Beeck (3 papers)
  3. Steven Rohrhirsch (1 paper)
  4. Benoit Diotte (1 paper)
  5. Michael Feld (12 papers)
Citations (10)
