Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP
Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda and Diana Marculescu
arXiv e-Print archive, 2022
Keywords: cs.CV, cs.LG
Abstract: Open-vocabulary semantic segmentation aims to segment an image into semantic
regions according to text descriptions, which may not have been seen during
training. Recent two-stage methods first generate class-agnostic mask proposals
and then leverage pre-trained vision-language models, e.g., CLIP, to classify
masked regions. We identify the performance bottleneck of this paradigm to be
the pre-trained CLIP model, since it does not perform well on masked images. To
address this, we propose to finetune CLIP on a collection of masked image
regions and their corresponding text descriptions. We collect training data by
mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to
match masked image regions to nouns in the image captions. Compared with more precise, manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find that our noisy but diverse dataset better retains CLIP's generalization ability. Along with finetuning the entire model, we
utilize the "blank" areas in masked images using a method we dub mask prompt
tuning. Experiments demonstrate mask prompt tuning brings significant
improvement without modifying any weights of CLIP, and it can further improve a
fully finetuned model. In particular, when trained on COCO and evaluated on
ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the
previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of the supervised specialist models of 2017, without dataset-specific adaptations.
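
The second stage of the two-stage paradigm can be made concrete with a small sketch. The snippet below is not the authors' code; it only illustrates classifying an externally produced, class-agnostic mask proposal with an off-the-shelf CLIP model (OpenAI's `clip` package). The function name `classify_masked_region` and the prompt template are illustrative choices, and the mask proposals are assumed to come from a separate segmenter such as MaskFormer.

```python
import numpy as np
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def classify_masked_region(image: Image.Image, mask: np.ndarray, class_names):
    """Zero out pixels outside a binary mask, then score the masked image against text prompts."""
    img = np.array(image.convert("RGB"))
    img[~mask.astype(bool)] = 0                    # "blank" background, as in masked CLIP inputs
    masked = preprocess(Image.fromarray(img)).unsqueeze(0).to(device)
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(masked)
        txt_feat = model.encode_text(prompts)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)
    return probs.squeeze(0)                        # one probability per class name for this mask
```

It is exactly this step that the abstract identifies as the bottleneck: an off-the-shelf CLIP is not trained on images with large blank regions, which is what motivates finetuning on mined masked-region/noun pairs.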
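
The "mask prompt tuning" idea can also be sketched at a high level: the masked-out ("blank") regions contribute uninformative zero patches, so one can replace the tokens of fully blank patches with learnable prompt vectors and train only those vectors while CLIP's weights stay frozen. The module below is a minimal illustration of that idea under assumed shapes; the class name `MaskPromptedPatchEmbed`, the argument names, and the assumption that `patch_embed` returns tokens of shape `(B, num_patches, embed_dim)` are hypothetical rather than taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class MaskPromptedPatchEmbed(nn.Module):
    """Replace tokens of fully masked-out patches with learnable mask prompts."""

    def __init__(self, patch_embed: nn.Module, num_patches: int, embed_dim: int):
        super().__init__()
        self.patch_embed = patch_embed            # frozen CLIP patch projection (assumed)
        self.mask_prompts = nn.Parameter(torch.zeros(num_patches, embed_dim))
        nn.init.normal_(self.mask_prompts, std=0.02)

    def forward(self, masked_image: torch.Tensor, patch_is_blank: torch.Tensor) -> torch.Tensor:
        # masked_image: (B, 3, H, W) with background pixels zeroed out
        # patch_is_blank: (B, num_patches) bool, True where a patch lies entirely outside the mask
        tokens = self.patch_embed(masked_image)   # assumed shape (B, num_patches, embed_dim)
        prompts = self.mask_prompts.unsqueeze(0).expand_as(tokens)
        return torch.where(patch_is_blank.unsqueeze(-1), prompts, tokens)
```

Because only `mask_prompts` would receive gradients in this sketch, it is consistent with the abstract's claim that mask prompt tuning brings improvement without modifying any CLIP weights, and that it can be combined with full finetuning.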