OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling

NeurIPS 2024
Linhui Xiao · Xiaoshan Yang · Fang Peng · Yaowei Wang · Changsheng Xu


A comparison between the OneRef model and mainstream REC/RES architectures.

This repository is the official PyTorch implementation of the paper OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling (Publication, Github Code, HuggingFace model), which is an advanced version of our preliminary works HiVG (Publication, Paper, Code) and CLIP-VG (Publication, Paper, Code).

If you have any questions, please feel free to open an issue or contact me by email: [email protected]. Any kind of discussion is welcome!

Please leave a STAR ⭐ if you like this project!

News

  • :fire: Update on 2025/07/30: All of the code and models have been released!

    :exclamation: During the code tidying process, some bugs may have been introduced due to changes in variable names. If any issues occur, please raise them on the issue page, and I will try to resolve them promptly.

  • :fire: Update on 2024/12/28: We conducted a survey of visual grounding over the past decade, entitled "Towards Visual Grounding: A Survey" (Paper, Project). Comments are welcome!

  • :fire: Update on 2024/10/10: Our grounding work OneRef (Paper, Code, Model) has been accepted by the top conference NeurIPS 2024!

  • Update on 2024/07/16: Our grounding work HiVG (Publication, Paper, Code) has been accepted by the top conference ACM MM 2024!

  • Update on 2023/9/25: Our grounding work CLIP-VG (Paper, Code) has been accepted by the top journal IEEE Transactions on Multimedia (2023)!

Citation

If you find our work helpful for your research, please consider citing the following BibTeX entry.

@inproceedings{xiao2024oneref,
  title={OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling},
  author={Xiao, Linhui and Yang, Xiaoshan and Peng, Fang and Wang, Yaowei and Xu, Changsheng},
  booktitle={Proceedings of the 38th International Conference on Neural Information Processing Systems},
  year={2024}
}

Links: ArXiv, NeurIPS 2024

TODO

All the code and models for this paper have been released!

  • Release all the checkpoints.
  • Release the full model code, training and inference code.

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Contacts
  5. Acknowledgments

Highlight

  • (i) We pioneer the application of mask modeling to referring tasks by introducing a novel paradigm called mask referring modeling, which effectively models the referential relation between vision and language.
  • (ii) Diverging from previous works, we propose a remarkably concise one-tower framework for grounding and referring segmentation in a unified modality-shared feature space. Our model eliminates the commonly used modality interaction modules, modality fusion en-/decoders, and special grounding tokens.
  • (iii) We extensively validate the effectiveness of OneRef on three referring tasks across five datasets. Our method consistently surpasses existing approaches and achieves SoTA performance across several settings, providing valuable new insights for future grounding and referring segmentation research.

Introduction

Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works heavily rely on bulky Transformer-based fusion en-/decoders and a variety of early-stage interaction technologies. Simultaneously, current mask visual language modeling (MVLM) fails to capture the nuanced referential relationship between image and text in referring tasks. In this paper, we propose OneRef, a minimalist referring framework built on the modality-shared one-tower transformer that unifies the visual and linguistic feature spaces. To model the referential relationship, we introduce a novel MVLM paradigm called Mask Referring Modeling (MRefM), which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules reconstruct not only modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that is aware of the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual language feature space and incorporating MRefM's ability to model referential relations, our approach enables direct regression of the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research.

For more details, please refer to our paper.

Usage

Dependencies

  • Python 3.9.10
  • PyTorch 2.0.1
  • timm 0.6.13
  • Check requirements.txt for other dependencies.

Our environment is aligned with BEiT-3. In addition, our model is easy to deploy in a variety of environments and has been successfully tested on multiple PyTorch versions.
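
For reference, a minimal environment setup might look like the following. The environment name and the torchvision version are illustrative; pick the CUDA build of PyTorch that matches your hardware.

conda create -n oneref python=3.9 -y
conda activate oneref
pip install torch==2.0.1 torchvision==0.15.2    # match the PyTorch version listed above
pip install timm==0.6.13
pip install -r requirements.txt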

Image Data Preparation

1. You can download the images from their original sources and place them in a folder on your disk, such as $/path_to_image_data:

  • MS COCO 2014 (for RefCOCO, RefCOCO+, RefCOCOg dataset, almost 13.0GB)

  • ReferItGame

  • Flickr30K Entities

    We provide a script to download the mscoco2014 dataset; you just need to run it in a terminal with the following command:

    bash download_mscoco2014.sh
    

    Or you can also follow the data preparation of TransVG, which can be found in GETTING_STARTED.md.

Only the image data in these datasets is used, and it is readily available in similar visual grounding repositories such as TransVG. Finally, the $/path_to_image_data folder will have the following structure:

|-- image_data
   |-- Flickr30k
      |-- flickr30k-images
   |-- other
      |-- images
        |-- mscoco
            |-- images
                |-- train2014
   |-- referit
      |-- images
  • $/path_to_image_data/image_data/Flickr30k/flickr30k-images/: Image data for the Flickr30K dataset; please download it from this link (fill in the form and download the images).
  • $/path_to_image_data/image_data/other/images/: Image data for RefCOCO/RefCOCO+/RefCOCOg, i.e., mscoco2014.
  • $/path_to_image_data/image_data/referit/images/: Image data for ReferItGame.
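
If you prefer to fetch the mscoco2014 images manually instead of using the script, the following is a minimal sketch. It assumes the standard COCO download mirror; replace the IMAGE_ROOT path with your own $/path_to_image_data.

# Minimal sketch: download the COCO train2014 images into the layout shown above.
IMAGE_ROOT=/path_to_image_data/image_data
mkdir -p ${IMAGE_ROOT}/other/images/mscoco/images
cd ${IMAGE_ROOT}/other/images/mscoco/images
wget -c http://images.cocodataset.org/zips/train2014.zip
unzip -q train2014.zip && rm train2014.zip    # yields the train2014/ folder (~13 GB)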

Text-Box Annotations

The labels in the fully supervised scenario are consistent with previous works such as CLIP-VG.

:star: As we need to conduct pre-training with mixed datasets, we have shuffled the order of the datasets and unified some of the dataset formats. You need to download our text annotation files from the HuggingFace homepage.

Fully supervised setting

Datasets: RefCOCO, RefCOCO+, RefCOCOg-g, RefCOCOg-u, ReferIt, Flickr, mixup_with_refc, mixup_with_refc_referit
URL / size: all six datasets in a single download, ~400.0 MB

* mixup_with_refc denotes the mixture of the training data from RefCOCO/+/g-umd (without gref), which is used in the RES task. mixup_with_refc_referit denotes the mixture of the training data from RefCOCO/+/g (without gref) and ReferItGame, which is used in the REC task. The val and test splits of both mixtures use the val and testA files from RefCOCOg. Note that the training data in RefCOCOg-g (i.e., gref) suffers from data leakage.

Download the above annotations to a disk directory such as $/path_to_split; it will then have a directory structure similar to the following:

|-- /single_dataset
    ├── flickr
    │   ├── flickr_test.pth
    │   ├── flickr_train.pth
    │   └── flickr_val.pth
    ├── gref
    │   ├── gref_train.pth
    │   └── gref_val.pth
    ├── gref_umd
    │   ├── gref_umd_test.pth
    │   ├── gref_umd_train.pth
    │   └── gref_umd_val.pth
    ├── referit
    │   ├── referit_test.pth
    │   ├── referit_train.pth
    │   └── referit_val.pth
    ├── unc
    │   ├── unc_testA.pth
    │   ├── unc_testB.pth
    │   ├── unc_train.pth
    │   └── unc_val.pth
    └── unc+
        ├── unc+_testA.pth
        ├── unc+_testB.pth
        ├── unc+_train.pth
        └── unc+_val.pth
|-- /mixup_with_refc
    ├── mixup
    │   ├── mixup_test.pth
    │   ├── mixup_train.pth
    │   └── mixup_val.pth
|-- /mixup_with_refc_referit
    ├── mixup
    │   ├── mixup_test.pth
    │   ├── mixup_train.pth
    │   └── mixup_val.pth
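
After downloading, you can quickly verify that a split file loads correctly. This is only a sanity check; the path below is an example, and the record format follows the CLIP-VG-style annotations.

# Load one split file and print its container type and sample count.
python -c "import torch; d = torch.load('/path_to_split/single_dataset/unc/unc_val.pth'); print(type(d), len(d))"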

Pre-trained Checkpoints

The checkpoints include the Base and Large models under both the single-dataset fine-tuning setting and the dataset-mixed grounding pre-training setting, for both the REC and RES tasks.

It should be noted that OneRef involves 29 models with a total size of 125 GB, all of which have been open-sourced. These models should reproduce the results in the paper. If any model fails to reproduce the results or raises errors, please contact us promptly via email or by opening an issue, and we will check and upload the correct model; such failures could stem from upload errors or disk corruption. After all, we trained nearly a hundred models over the course of this work.


HuggingFace: All the models are publicly available on the OneRef HuggingFace homepage, where you can freely download them.

REC task: Single-dataset fine-tuning checkpoints download

Datasets: RefCOCO, RefCOCO+, RefCOCOg-u, ReferIt, Flickr
Base model (Google Drive): rec_single_dataset_finetuning_base.zip (covers all five datasets), ~9.0 GB
Base model (Hugging Face): rec_single_dataset_finetuning_base.zip (covers all five datasets), ~9.0 GB
Large model: finetuning_large_unc (~8.0 GB), finetuning_large_unc+ (~8.0 GB), finetuning_large_gref_umd (~8.0 GB), finetuning_large_referit (~8.0 GB), finetuning_large_flickr (~8.0 GB)

REC task: Mixup grounding pre-training checkpoints download

Datasets: Mixup (RefCOCO/+/g), ReferIt, Flickr
Base model: rec_mixup_grounding_pretraining_base.zip, ~6.0 GB
Large model: mixup_pretraining_large_unc+g (~8.0 GB), mixup_pretraining_large_referit (~8.0 GB), mixup_pretraining_large_flickr (~8.0 GB)

REC task: Ultimate performance prediction in our Grounding Survey paper

Datasets: Mixup (RefCOCO/+/g)
Base model: rec_mixup_grounding_ultimate_performance_base.zip, ~6.0 GB
Large model: rec_mixup_grounding_ultimate_performance_large, ~8.0 GB

RES task: Single-dataset fine-tuning checkpoints download

Datasets: RefCOCO, RefCOCO+, RefCOCOg-u
Base model: res_single_dataset_finetuning_base.zip, ~6.0 GB
Large model: finetuning_large_unc (~8.0 GB), finetuning_large_unc+ (~8.0 GB), finetuning_large_gref_umd (~8.0 GB)

RES task: Mixup grounding pre-training checkpoints download

Datasets: Mixup (RefCOCO/+/g)
Base model: res_mixup_pretraining_base.zip, ~1.0 GB
Large model: res_mixup_pretraining_large, ~2.0 GB

After downloading these checkpoints, save them in the following directory structure, which lets you train and test on all five datasets at once with a single script (see the evaluation loop sketch after the directory trees below).

|-- /finetuning_checkpoints (base or large model, rec or res task)
    ├── flickr
    │   └── best_checkpoint.pth
    ├── gref_umd
    │   └── best_checkpoint.pth
    ├── referit
    │   └── best_checkpoint.pth
    ├── unc
    │   └── best_checkpoint.pth
    └── unc+
        └── best_checkpoint.pth

|-- /mixup_grounding_pretraining (base or large model, rec or res task)
    └── mixup
        └── best_checkpoint.pth
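
As referenced above, a simple loop around the evaluation command can sweep all five fine-tuned checkpoints in this layout. The flags mirror the evaluation example shown in the next section; the GPU ids and output paths are placeholders.

# Sketch: evaluate every fine-tuned checkpoint in the finetuning_checkpoints layout above.
for DATASET in unc unc+ gref_umd referit flickr; do
    CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 28888 --use_env eval.py \
        --num_workers 2 --batch_size 128 --dataset ${DATASET} --imsize 224 --max_query_len 77 \
        --data_root $/path_to_image_data --split_root $/path_to_split \
        --eval_model /finetuning_checkpoints/${DATASET}/best_checkpoint.pth \
        --eval_set val --output_dir $/path_to_output/${DATASET};
done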

MRefM pretrained backbone checkpoints download

We propose the multimodal Mask Referring Modeling (MRefM) paradigm to enhance the model's referring comprehension ability. Since MRefM aims to improve general referring comprehension through pre-training, its performance gain mainly shows up under the mixed pre-training setting. In our experiments, MRefM pre-training for the REC task is carried out on a mixture of the RefCOCO/+/g (abbreviated as RefC) and ReferIt datasets. To ensure a fair comparison, MRefM pre-training for the RES task is carried out on a mixture of the RefC datasets only.

For MRefM pre-training, the base model took 15 hours on 32 NVIDIA A100 GPUs, while the large model took 50 hours on the same number of GPUs. We provide the MRefM pre-trained checkpoints below; all models are available on the HuggingFace page.

MRefM model for REC (pre-training data: RefC + ReferIt)
Base model: rec_mrefm_base_patch16_384, ~2 GB
Large model: rec_mrefm_large_patch16_384, ~7 GB

MRefM model for RES (pre-training data: RefC)
Base model: res_mrefm_base_patch16_384, ~2 GB
Large model: res_mrefm_large_patch16_384, ~7 GB

Original BEiT-3 checkpoints download

In order to facilitate the reproducibility of the MRefM pre-training results and to achieve transferability in non-MRefM settings, we also provide the original BEiT-3 model as follows. You can download it from the table below or from the BEiT-3 official repository.

BEiT-3 original model checkpoints:
Sentencepiece model (tokenizer): sp3 Sentencepiece model, 1 MB
MIM VQ-KD model: vqkd model, 438 MB
BEiT-3 Base model: beit3_base_indomain_patch16_224, 554 MB
BEiT-3 Large model: beit3_large_indomain_patch16_224, 1.5 GB

REC and RES Transfer Training and Evaluation

As shown below, we have provided complete evaluation, training, and pre-training scripts in the train_and_eval_script.

train_and_eval_script
├── eval_rec_mixup_grounding_pretraining_base.sh
├── eval_rec_mixup_grounding_pretraining_large.sh
├── eval_rec_single_dataset_finetuning_base.sh
├── eval_rec_single_dataset_finetuning_large.sh
├── eval_res_mixup_grounding_pretraining_base.sh
├── eval_res_mixup_grounding_pretraining_large.sh
├── eval_res_single_dataset_finetuning_base.sh
├── eval_res_single_dataset_finetuning_large.sh
├── MRefM_pretraining
│   ├── rec_mrefm_pretraining_base.sh
│   ├── rec_mrefm_pretraining_large.sh
│   ├── res_mrefm_pretraining_base.sh
│   └── res_mrefm_pretraining_large.sh
├── submit_for_multi_node_pretraining
│   ├── get_master_ip.sh
│   ├── master_ip.sh
│   └── train_and_eval_for_multi_node.sh
├── train_rec_mixup_grounding_pretraining_base.sh
├── train_rec_mixup_grounding_pretraining_large.sh
├── train_rec_single_dataset_finetuning_base.sh
├── train_rec_single_dataset_finetuning_large.sh
├── train_res_mixup_grounding_pretraining_base.sh
├── train_res_mixup_grounding_pretraining_large.sh
├── train_res_single_dataset_finetuning_base.sh
└── train_res_single_dataset_finetuning_large.sh

You only need to modify the corresponding paths (change $/path_to_split, $/path_to_image_data, and $/path_to_output to your own directories) and then execute the corresponding scripts with the bash command to test and train the relevant models; a substitution sketch is shown below.
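
For instance, if the scripts contain the same placeholder paths used in this README (an assumption; please check the scripts first), a quick substitution looks like this, with the target directories being examples only:

# Replace the placeholder paths in the provided scripts with your own directories.
sed -i 's|\$/path_to_image_data|/data/oneref/image_data|g' train_and_eval_script/*.sh
sed -i 's|\$/path_to_split|/data/oneref/split|g' train_and_eval_script/*.sh
sed -i 's|\$/path_to_output|/data/oneref/output|g' train_and_eval_script/*.sh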

  1. Training on RefCOCO in the single-dataset fine-tuning setting.

    CUDA_VISIBLE_DEVICES=3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=5 --master_port 28887 --use_env train_clip_vg.py --num_workers 32 --epochs 120 --batch_size 64 --lr 0.00025  --lr_scheduler cosine --aug_crop --aug_scale --aug_translate    --imsize 224 --max_query_len 77  --sup_type full --dataset unc      --data_root $/path_to_image_data --split_root $/path_to_split --output_dir $/path_to_output/output_v01/unc;
    

    Please refer to train_and_eval_script/train_rec_single_dataset_finetuning_base.sh for training commands on other datasets.

  2. Evaluation on RefCOCO.

    CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=6 --master_port 28888 --use_env eval.py --num_workers 2 --batch_size 128    --dataset unc      --imsize 224 --max_query_len 77 --data_root $/path_to_image_data --split_root $/path_to_split --eval_model $/path_to_output/output_v01/unc/best_checkpoint.pth      --eval_set val    --output_dir $/path_to_output/output_v01/unc;
    

    Please refer to train_and_eval_script/eval_rec_single_dataset_finetuning_base.sh for evaluation commands on other splits or datasets.

  3. We strongly recommend using the bash scripts when training or testing on different datasets and splits, which will significantly reduce the manual workload, for example:

    bash train_and_eval_script/train_rec_single_dataset_finetuning_base.sh
    

It should be noted that, due to the limited number of data samples in the single-dataset setting, MRefM did not yield significant improvements in performance. To streamline the training process and facilitate the reproducibility of our work, we provide a training process without MRefM pre-training specifically for the single-dataset scenario.

MRefM Pre-training

1. Single-node Pre-training

Single-node pre-training requires only one multi-GPU server; you just need to run the following command. This training is not much different from the fine-tuning training.

CUDA_VISIBLE_DEVICES=3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=5 --master_port 28887 --use_env train_clip_vg.py --num_workers 32 --epochs 120 --batch_size 64 --lr 0.00025  --lr_scheduler cosine --aug_crop --aug_scale --aug_translate    --imsize 224 --max_query_len 77  --sup_type full --dataset unc      --data_root $/path_to_image_data --split_root $/path_to_split --output_dir $/path_to_output/output_v01/unc;

Or using the bash command as follows:

bash train_and_eval_script/MRefM_pretraining/rec_mrefm_pretraining_base.sh

2. Multi-node Pre-training

Multi-node pre-training requires multiple multi-GPU servers. You need to use the scripts in the train_and_eval_script/submit_for_multi_node_pretraining directory to launch the process on each server; for detailed operations, you can refer to the relevant PyTorch distributed-training tutorials. A generic launch sketch is given below.
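
For reference, a two-node launch with torch.distributed.launch typically looks like the following. This is only a generic sketch, not the exact content of the provided submit scripts; the master address, port, GPU count per node, and paths are placeholders, and the training arguments are copied from the single-node command above.

# Run this on every node; set NODE_RANK=0 on the master node and NODE_RANK=1 on the second node.
MASTER_ADDR=10.0.0.1      # IP of the master node, e.g., obtained via get_master_ip.sh
MASTER_PORT=28887
NODE_RANK=0
python -m torch.distributed.launch --nnodes=2 --node_rank=${NODE_RANK} --nproc_per_node=8 \
    --master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} --use_env train_clip_vg.py \
    --num_workers 32 --epochs 120 --batch_size 64 --lr 0.00025 --lr_scheduler cosine \
    --aug_crop --aug_scale --aug_translate --imsize 224 --max_query_len 77 --sup_type full \
    --dataset unc --data_root $/path_to_image_data --split_root $/path_to_split \
    --output_dir $/path_to_output/output_v01/unc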

Results

1. REC task

REC Single-dataset Fine-tuning SoTA Result Table
REC Dataset-mixed Pretraining SoTA Result Table

2. RES task

RES Single-dataset Fine-tuning and Dataset-mixed Pretraining SoTA Result Table (mIoU)
RES Single-dataset Fine-tuning and Dataset-mixed Pretraining SoTA Result Table (oIoU)

3. Our model also has significant energy efficiency advantages.

Comparison of the computational cost on the REC task.

Methods

An illustration of our multimodal Mask Referring Modeling (MRefM) paradigm, which includes referring-aware mask image modeling and referring-aware mask language modeling.

An illustration of the referring-based grounding and segmentation transfer.

Illustrations of random masking (MAE) [27], block-wise masking (BEiT) [4], and our referring-aware dynamic masking. α denotes the overall masking ratio, while β and γ denote the masking ratios outside and within the referred region, respectively.

Visualization

Qualitative results on the RefCOCO-val dataset.

Qualitative results on the RefCOCO+-val dataset.

Qualitative results on the RefCOCOg-val dataset.

Each example shows two different query texts. From left to right: the original input image, the ground truth with box and segmentation mask (in green), the RES prediction of OneRef (in cyan), the REC prediction of OneRef (in cyan), and the cross-modal feature.

Contacts

Email: [email protected]. Any kind of discussion is welcome!

Acknowledgement

Our model is related to BEiT-3 and MAE. Thanks for their great work!

We also thank the great previous work including TransVG, DETR, CLIP, CLIP-VG, etc.

Thanks Microsoft for their awesome models.

Star History

Star History Chart
