
Exploring Depth Contribution for Camouflaged Object Detection (2106.13217v3)

Published 24 Jun 2021 in cs.CV

Abstract: Camouflaged object detection (COD) aims to segment camouflaged objects hiding in the environment, which is challenging due to the similar appearance of camouflaged objects and their surroundings. Research in biology suggests depth can provide useful object localization cues for camouflaged object discovery. In this paper, we study the depth contribution for camouflaged object detection, where the depth maps are generated with existing monocular depth estimation (MDE) methods. Due to the domain gap between the MDE dataset and our COD dataset, the generated depth maps are not accurate enough to be used directly. We therefore introduce two solutions to prevent the noisy depth maps from dominating the training process. Firstly, we present an auxiliary depth estimation branch ("ADE"), aiming to regress the depth maps. We find that "ADE" is especially necessary for our "generated depth" scenario. Secondly, we introduce a multi-modal confidence-aware loss function via a generative adversarial network to weigh the contribution of depth for camouflaged object detection. Our extensive experiments on various camouflaged object detection datasets show that the existing "sensor depth" based RGB-D segmentation techniques work poorly with "generated depth", and that our proposed two solutions work cooperatively, achieving effective exploration of the depth contribution for camouflaged object detection.
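The abstract describes two ingredients: an auxiliary depth estimation branch ("ADE") that regresses the MDE-generated depth, and a confidence-aware loss that down-weights depth supervision where the generated depth is unreliable. The following is a minimal sketch of that general idea, not the authors' implementation: the toy encoder, head names, channel sizes, confidence regularizer, and loss weighting are illustrative assumptions (the paper obtains its confidence via a generative adversarial network, which is omitted here for brevity).

```python
# Minimal sketch (illustrative, not the paper's code): an RGB encoder with a
# camouflage-segmentation head, an auxiliary depth-regression head ("ADE"),
# and a per-pixel confidence map that down-weights the depth loss where the
# generated depth is likely noisy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CODWithAuxDepth(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        # toy shared encoder standing in for a real backbone (e.g. a ResNet)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(feat_ch, 1, 1)    # camouflage-map logits
        self.depth_head = nn.Conv2d(feat_ch, 1, 1)  # auxiliary depth regression
        self.conf_head = nn.Conv2d(feat_ch, 1, 1)   # per-pixel depth confidence

    def forward(self, rgb):
        f = self.encoder(rgb)
        return (self.seg_head(f),
                self.depth_head(f),
                torch.sigmoid(self.conf_head(f)))

def confidence_weighted_loss(seg_logit, depth_pred, conf, gt_mask, gen_depth, lam=0.3):
    """Segmentation loss plus a depth loss weighted by the confidence map,
    so that noisy MDE-generated depth does not dominate training."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logit, gt_mask)
    depth_err = torch.abs(depth_pred - gen_depth)
    # low-confidence pixels contribute less to the depth term;
    # the -log(conf) term discourages collapsing confidence to zero
    depth_loss = (conf * depth_err).mean() - 0.05 * torch.log(conf + 1e-6).mean()
    return seg_loss + lam * depth_loss

# usage with random tensors standing in for a training batch
model = CODWithAuxDepth()
rgb = torch.randn(2, 3, 64, 64)
gt_mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
gen_depth = torch.rand(2, 1, 64, 64)  # depth from an off-the-shelf MDE model
seg_logit, depth_pred, conf = model(rgb)
loss = confidence_weighted_loss(seg_logit, depth_pred, conf, gt_mask, gen_depth)
loss.backward()
```

The design choice being illustrated is that depth acts only as an auxiliary signal: the segmentation loss is always applied, while the depth term is scaled per pixel by a learned confidence, matching the abstract's goal of weighing the depth contribution rather than trusting generated depth outright.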

Authors (6)
  1. Mochu Xiang (9 papers)
  2. Jing Zhang (731 papers)
  3. Yunqiu Lv (8 papers)
  4. Aixuan Li (11 papers)
  5. Yiran Zhong (75 papers)
  6. Yuchao Dai (123 papers)
Citations (10)
