Abstract

CLIP and the Segment Anything Model (SAM) are two remarkable vision foundation models (VFMs). SAM excels in segmentation tasks across diverse domains, while CLIP is renowned for its zero-shot recognition capabilities. This paper presents an in-depth exploration of integrating these two models into a unified framework. Specifically, we introduce Open-Vocabulary SAM, a SAM-inspired model designed for simultaneous interactive segmentation and recognition, leveraging two unique knowledge transfer modules: SAM2CLIP and CLIP2SAM. The former adapts SAM's knowledge into CLIP via distillation and learnable transformer adapters, while the latter transfers CLIP's knowledge into SAM, enhancing its recognition capabilities. Extensive experiments on various datasets and detectors show the effectiveness of Open-Vocabulary SAM in both segmentation and recognition tasks, significantly outperforming the naive baselines that simply combine SAM and CLIP. Furthermore, aided by training on image classification data, our method can segment and recognize approximately 22,000 classes.

Figure: Open-Vocabulary SAM training, with the encoder as teacher, students aligning knowledge, and joint segmentation and classification.

Overview

  • Introduces Open-Vocabulary SAM, integrating SAM's segmentation and CLIP's recognition abilities.

  • Presents two knowledge transfer modules, SAM2CLIP and CLIP2SAM, for enhanced encoder-decoder synergy.

  • Demonstrates over 20% improvement in recognizing unseen objects, along with stronger segmentation on benchmark datasets.

  • Enables interactive segmentation and recognition of approximately 22,000 object classes.

  • Discusses practical applications in various fields and suggests future research directions.

Introduction

Vision foundation models (VFMs) have proliferated rapidly, most notably CLIP and the Segment Anything Model (SAM). These models have driven great strides in computer vision, with SAM becoming a pivotal tool for segmentation tasks and CLIP standing out for its striking zero-shot recognition capabilities. However, each model has limitations when operating in isolation: SAM lacks recognition ability, and CLIP struggles with dense predictions. To address these shortcomings, this paper introduces Open-Vocabulary SAM, a framework that fuses the functionality of SAM and CLIP, enhancing both segmentation and recognition across a vast range of classes.

Knowledge Transfer Modules

The paper details two novel knowledge transfer modules central to this integration: SAM2CLIP and CLIP2SAM. The SAM2CLIP module enables the transfer of knowledge from SAM to CLIP using a distillation process and transformer-like adapters, allowing for knowledge alignment without modifying the robust CLIP encoder. Meanwhile, CLIP2SAM applies the reverse, transferring knowledge from CLIP to SAM to augment the model’s recognition capabilities while maintaining effective segmentation. These modules work synergistically in a unified encoder-decoder framework, substantially outperforming baseline models that naively combine SAM and CLIP without considering their architectural differences and knowledge compatibility.
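To make the SAM2CLIP direction more concrete, the sketch below shows one plausible form of feature distillation, assuming PyTorch: a lightweight transformer adapter maps frozen CLIP patch tokens toward SAM image-encoder features, which serve as the teacher signal. The module names, dimensions, and loss choice are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SAM2CLIPAdapter(nn.Module):
    """Illustrative adapter: aligns frozen CLIP patch tokens to SAM encoder features."""
    def __init__(self, clip_dim=1024, sam_dim=256, num_layers=2, num_heads=8):
        super().__init__()
        self.proj = nn.Linear(clip_dim, sam_dim)  # project CLIP width to SAM width
        layer = nn.TransformerEncoderLayer(d_model=sam_dim, nhead=num_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clip_tokens):
        # clip_tokens: (B, N, clip_dim) patch tokens from the frozen CLIP encoder
        return self.blocks(self.proj(clip_tokens))

def distillation_loss(adapted_clip_feats, sam_feats):
    # Regress the SAM encoder features (the "teacher") from adapted CLIP features
    return nn.functional.mse_loss(adapted_clip_feats, sam_feats)

# Toy usage with random tensors standing in for real encoder outputs
B, N = 2, 196
clip_tokens = torch.randn(B, N, 1024)  # frozen CLIP ViT patch tokens (student input)
sam_tokens = torch.randn(B, N, 256)    # SAM image-encoder features (teacher target)

adapter = SAM2CLIPAdapter()
loss = distillation_loss(adapter(clip_tokens), sam_tokens)
loss.backward()
```

Only the adapter is trained in this sketch; the CLIP encoder stays frozen, which mirrors the paper's goal of aligning knowledge without modifying the robust CLIP backbone.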

Experiments and Results

Extensive experiments across a spectrum of datasets, including COCO and LVIS, demonstrate the superior performance of Open-Vocabulary SAM. The method shows over a 20% improvement in recognizing previously unseen objects on the LVIS dataset and enhanced segmentation and classification performance on the COCO dataset. The key is joint training with both segmentation masks and label annotations, which creates a synergy between SAM's and CLIP's functionalities. This combination allows Open-Vocabulary SAM to interactively segment and recognize approximately 22,000 classes, a significant increase over its predecessors.
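As a rough illustration of the recognition side, the snippet below shows how pooled per-mask features could be scored against CLIP text embeddings of class prompts to obtain open-vocabulary labels. The function name, feature dimension, and temperature are hypothetical placeholders rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def classify_regions(region_feats, text_embeds, temperature=0.01):
    """Assign open-vocabulary labels by cosine similarity to CLIP text embeddings.

    region_feats: (num_masks, D) features pooled for each predicted mask
    text_embeds:  (num_classes, D) CLIP text embeddings of class-name prompts
    """
    region_feats = F.normalize(region_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = region_feats @ text_embeds.t() / temperature
    return logits.argmax(dim=-1), logits.softmax(dim=-1)

# Toy usage: 3 predicted masks scored against 22,000 class prompts
region_feats = torch.randn(3, 512)
text_embeds = torch.randn(22_000, 512)
labels, probs = classify_regions(region_feats, text_embeds)
```

Because the class set is defined only by the text prompts, extending the vocabulary amounts to encoding more class names, which is how training with large image classification datasets scales recognition to tens of thousands of categories.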

Implications and Future Directions

Open-Vocabulary SAM presents a robust architecture with practical applications in image analysis, including interactive segmentation such as that used in autonomous driving or medical imaging. By effectively segmenting and recognizing a wide range of objects, the model sets the stage for more accurate and efficient image annotation tools. Its open-vocabulary capabilities also enable use in fields requiring domain-specific recognition, from wildlife conservation to smart city surveillance, by learning from a vast and varied dataset. While this study marks a leap forward, it also opens avenues for further research to fine-tune models for specific domains and expand their interactive capabilities.
