Leveraging Large (Visual) Language Models for Robot 3D Scene Understanding (2209.05629v2)

Published 12 Sep 2022 in cs.RO, cs.CL, cs.CV, and cs.LG

Abstract: Abstract semantic 3D scene understanding is a problem of critical importance in robotics. As robots still lack the common-sense knowledge about household objects and locations of an average human, we investigate the use of pre-trained LLMs to impart common sense for scene understanding. We introduce and compare a wide range of scene classification paradigms that leverage language only (zero-shot, embedding-based, and structured-language) or vision and language (zero-shot and fine-tuned). We find that the best approaches in both categories yield $\sim 70\%$ room classification accuracy, exceeding the performance of pure-vision and graph classifiers. We also find such methods demonstrate notable generalization and transfer capabilities stemming from their use of language.
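One of the language-only paradigms the abstract mentions is embedding-based scene classification: embed a description of the objects observed in a room, embed each candidate room label, and pick the label whose embedding is most similar. The sketch below illustrates only that matching idea with a toy bag-of-words embedding and hand-written label descriptions; the paper's actual approach uses pretrained language-model embeddings, and all names and label texts here are illustrative assumptions, not the authors' pipeline.

```python
import math
from collections import Counter


def embed(text):
    # Toy stand-in for a pretrained LM encoder (assumption):
    # an L2-normalized bag-of-words vector over the text's tokens.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}


def cosine(a, b):
    # Cosine similarity of two sparse vectors stored as dicts.
    return sum(v * b.get(w, 0.0) for w, v in a.items())


def classify_room(object_names, room_descriptions):
    # Embed the observed object list, then return the room label
    # whose (hand-written, illustrative) description embeds closest.
    query = embed("room containing " + " ".join(object_names))
    return max(
        room_descriptions,
        key=lambda label: cosine(query, embed(room_descriptions[label])),
    )


# Illustrative label descriptions (assumed, not from the paper).
ROOMS = {
    "kitchen": "a kitchen containing a stove sink refrigerator oven",
    "bathroom": "a bathroom containing a toilet shower sink bathtub",
    "bedroom": "a bedroom containing a bed pillow dresser closet",
}

print(classify_room(["stove", "refrigerator"], ROOMS))  # kitchen
```

With a real pretrained encoder in place of `embed`, the same nearest-label scheme lets the classifier generalize to object vocabularies never seen at training time, which is the transfer property the abstract highlights.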

Authors (4)
  1. William Chen (49 papers)
  2. Siyi Hu (21 papers)
  3. Rajat Talak (26 papers)
  4. Luca Carlone (109 papers)
