Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision

(arXiv:2402.08960)
Published Feb 14, 2024 in cs.CV and cs.AI

Abstract

Contemporary cutting-edge open-vocabulary segmentation approaches commonly rely on image-mask-text triplets, yet this restricted annotation is labour-intensive and encounters scalability hurdles in complex real-world scenarios. Although some methods have been proposed to reduce the annotation cost with only text supervision, the incompleteness of supervision severely limits the versatility and performance. In this paper, we liberate the strict correspondence between masks and texts by using independent image-mask and image-text pairs, which can each be collected easily. With this unpaired mask-text supervision, we propose a new weakly-supervised open-vocabulary segmentation framework (Uni-OVSeg) that leverages confident pairs of mask predictions and entities in text descriptions. Using the independent image-mask and image-text pairs, we predict a set of binary masks and associate them with entities by resorting to the CLIP embedding space. However, the inherent noise in the correspondence between masks and entities poses a significant challenge when obtaining reliable pairs. In light of this, we advocate using a large vision-language model (LVLM) to refine text descriptions and devise a multi-scale ensemble to stabilise the matching between masks and entities. Compared to text-only weakly-supervised methods, our Uni-OVSeg achieves a substantial improvement of 15.5% mIoU on the ADE20K dataset, and even surpasses fully-supervised methods on the challenging PASCAL Context-459 dataset.

Overview

  • Uni-OVSeg introduces a weakly-supervised framework for open-vocabulary segmentation, significantly reducing data annotation costs by using unpaired image-mask and image-text pairs.

  • It addresses the limitations of existing methods by eliminating the need for paired image-mask-text annotations, offering a scalable solution for complex, real-world datasets.

  • Uni-OVSeg employs techniques such as mask generation from independent image-mask pairs, mask-text alignment in the CLIP embedding space, and zero-shot assignment of open-vocabulary categories to predicted masks.

  • The framework outperforms existing weakly-supervised methods on benchmark datasets like ADE20K and PASCAL Context-459, demonstrating its potential to improve vision perception systems across various applications.

Uni-OVSeg: Enhancing Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision

Introduction to Open-Vocabulary Segmentation

The landscape of object segmentation in images, particularly open-vocabulary segmentation, has been a focus of intense research efforts due to its potential to dramatically improve the flexibility and applicability of computer vision systems. Unlike traditional segmentation methods that rely on a limited, predefined vocabulary, open-vocabulary segmentation aspires to identify and categorize objects across an unrestricted range of categories, regardless of whether these categories were seen during the model's training phase. This innovation could transform capabilities across various domains, from improving autonomous vehicle navigation to advancing medical diagnostics.

The Limitation of Existing Methods

Current state-of-the-art methods predominantly supervise their models with image-mask-text triplets. While effective, the need for such detailed annotations introduces significant labor costs, making the approach hard to scale to the complex, diverse datasets encountered in real-world scenarios. Although some advances reduce annotation costs by relying solely on text supervision, these approaches fall short in performance because they cannot capture fine spatial detail or reliably differentiate between distinct instances of the same semantic class.

Uni-OVSeg: A Novel Framework

This paper introduces Uni-OVSeg, a weakly-supervised framework for open-vocabulary segmentation that addresses the aforementioned limitations by eliminating the need for paired image-mask-text annotations. Instead, Uni-OVSeg operates with unpaired image-mask and image-text pairs, which are far easier to collect. In doing so, it substantially cuts the cost of data annotation without compromising segmentation quality.

Technical Innovations of Uni-OVSeg

  • Mask Generation: Independent image-mask pairs are used to predict a set of binary masks, which are then associated with entities drawn from the text descriptions of unpaired image-text pairs.
  • Mask-Text Alignment: To establish reliable correspondences between masks and text descriptions, Uni-OVSeg matches mask embeddings to entity embeddings in the CLIP embedding space, using an LVLM to refine the text descriptions and a multi-scale ensemble to stabilize the matching despite the inherent noise in the correspondence (see the sketch after this list).
  • Open-Vocabulary Segmentation: At inference, the target dataset's category names are embedded and assigned to the predicted masks in a zero-shot manner, enabling segmentation over an unrestricted vocabulary.
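To make the alignment step concrete, here is a minimal sketch of how mask-entity matching in a shared CLIP embedding space with a multi-scale ensemble could look. It is an illustration under stated assumptions, not the paper's implementation: the function name, tensor shapes, and the simple similarity-averaging ensemble are hypothetical, and it presumes per-mask visual embeddings and entity text embeddings have already been extracted with a CLIP-style encoder.

```python
import torch
import torch.nn.functional as F

def match_masks_to_entities(mask_embeds, entity_embeds):
    """Match predicted masks to text entities via CLIP-space similarity.

    mask_embeds:   list of (M, D) tensors, one per image scale
                   (per-mask visual embeddings in CLIP space).
    entity_embeds: (E, D) tensor of CLIP text embeddings for entities
                   parsed from the unpaired captions.
    Returns an (M, E) similarity matrix and the best entity per mask.
    """
    entity_embeds = F.normalize(entity_embeds, dim=-1)
    # Multi-scale ensemble: average the cosine-similarity matrices
    # computed at each scale to damp the noise of any single matching.
    sims = [F.normalize(m, dim=-1) @ entity_embeds.T for m in mask_embeds]
    scores = torch.stack(sims).mean(dim=0)   # (M, E)
    return scores, scores.argmax(dim=-1)     # entity index per mask

# Toy usage with random embeddings (3 scales, 5 masks, 8 entities).
scores, labels = match_masks_to_entities(
    mask_embeds=[torch.randn(5, 512) for _ in range(3)],
    entity_embeds=torch.randn(8, 512),
)
```

At training time, the highest-scoring mask-entity pairs (computed over the LVLM-refined captions) would serve as the pseudo supervision; at inference, the same similarity doubles as a zero-shot classifier by embedding the target dataset's category names in place of caption entities.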

Performance and Contributions

Uni-OVSeg notably outperforms previously established weakly-supervised methods across several benchmark datasets, improving on text-only weakly-supervised methods by 15.5% mIoU on ADE20K and even surpassing fully-supervised methods on the challenging PASCAL Context-459 dataset. These gains are attributed to its effective alignment of mask-wise embeddings with entity embeddings, and to its handling of the inherent noise in mask-text correspondences through LVLM-refined descriptions and multi-scale ensembling.

Broader Implications

The development of Uni-OVSeg represents a significant step toward efficient and scalable open-vocabulary segmentation. By reducing the dependency on labor-intensive annotations while improving segmentation performance, Uni-OVSeg paves the way for more capable and accessible vision perception systems. Such advances have implications for a wide array of applications, including, but not limited to, autonomous driving, content filtering, and assistive technologies, further highlighting the potential of weakly-supervised learning paradigms in advancing the field.

Looking Forward

The research encourages future work on further reducing the annotation burden and on improving the robustness and adaptability of segmentation models to unseen categories. Looking ahead, the methods and insights presented by Uni-OVSeg are likely to inspire continued innovation toward practical vision-based AI systems that can navigate the complexity of the real world with greater ease and accuracy.
