
Casting Geometric Constraints in Semantic Segmentation as Semi-Supervised Learning

(arXiv:1904.12534)
Published Apr 29, 2019 in cs.CV, cs.AI, and cs.LG

Abstract

We propose a simple yet effective method to learn to segment new indoor scenes from video frames. State-of-the-art methods trained on one dataset, even one as large as SUNRGB-D, can perform poorly when applied to images outside that dataset because of dataset bias, a common phenomenon in computer vision. To make semantic segmentation more useful in practice, one can exploit geometric constraints. Our main contribution is to show that these constraints can be conveniently cast as semi-supervised terms that enforce the same class to be predicted for the projections of the same 3D location in different images. This is interesting because it lets us exploit general existing techniques for semi-supervised learning to incorporate the constraints efficiently. We show that this approach can efficiently and accurately learn to segment target sequences from ScanNet and our own target sequences using only annotations from SUNRGB-D and the geometric relations between the video frames of the target sequences.
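As a rough illustration of how such a geometric constraint can be written as a semi-supervised loss term, the sketch below penalizes disagreement between the class distributions predicted at corresponding pixels of two frames of an unlabeled target sequence, where the correspondences come from the known geometry (e.g. depth and relative camera pose). The function name, the symmetric-KL choice, and the way correspondences are passed in are our own assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def geometric_consistency_loss(logits_a, logits_b, coords_a, coords_b, eps=1e-8):
    """Illustrative semi-supervised consistency term (not the paper's code).

    logits_a, logits_b: (1, C, H, W) segmentation logits for two frames of an
        unlabeled target sequence.
    coords_a, coords_b: (N, 2) long tensors of (x, y) pixel coordinates of N
        points at which the same 3D locations project into the two frames,
        obtained from the sequence geometry (e.g. depth + relative pose).
    """
    probs_a = F.softmax(logits_a, dim=1)[0]  # (C, H, W)
    probs_b = F.softmax(logits_b, dim=1)[0]

    # Class distributions at the corresponding pixels, one row per point.
    pa = probs_a[:, coords_a[:, 1], coords_a[:, 0]].t().clamp(min=eps)  # (N, C)
    pb = probs_b[:, coords_b[:, 1], coords_b[:, 0]].t().clamp(min=eps)  # (N, C)

    # Symmetric KL divergence: the two frames should predict the same class
    # for the projections of the same 3D location.
    return 0.5 * (F.kl_div(pa.log(), pb, reduction="batchmean")
                  + F.kl_div(pb.log(), pa, reduction="batchmean"))
```

In a training setup like the one the abstract describes, a term of this kind would presumably be added, with some scalar weight, to the supervised cross-entropy loss computed on the SUNRGB-D annotations, so that the unlabeled target frames play the role of the unlabeled data in a standard semi-supervised scheme.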
