jcwang0602 nielsr HF Staff committed on
Commit 03c7f36 · verified · 1 Parent(s): 8b8a8cf

Add model card content and `transformers` library tag (#1)


- Add model card content and `transformers` library tag (9c066b71750a7cb200369c3dad0d326ecbe5cbcd)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +196 -4
README.md CHANGED
@@ -1,5 +1,6 @@
  ---
- license: mit
+ base_model:
+ - OpenGVLab/InternVL2_5-8B
  datasets:
  - lmms-lab/RefCOCO
  - lmms-lab/RefCOCOplus
@@ -7,10 +8,9 @@ datasets:
  - qixiangbupt/grefcoco
  language:
  - en
+ license: mit
  metrics:
  - accuracy
- base_model:
- - OpenGVLab/InternVL2_5-8B
  pipeline_tag: image-segmentation
  tags:
  - Visual Grounding
@@ -18,4 +18,196 @@ tags:
  - Generalized Referring Expression Segmentation
  - Referring Expression Comprehension
  new_version: jcwang0602/MLLMSeg_InternVL2_5_8B_RES
- ---
+ library_name: transformers
+ ---
+
+ # MLLMSeg: Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder
+
+ This repository contains the `MLLMSeg_InternVL2_5_8B_RES` model presented in the paper [Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder](https://huggingface.co/papers/2508.04107).
+
+ **Abstract:**
+ Referring Expression Segmentation (RES) aims to segment image regions specified by referring expressions and has become popular with the rise of multimodal large language models (MLLMs). While MLLMs excel at semantic understanding, their token-generation paradigm struggles with pixel-level dense prediction. Existing RES methods either couple MLLMs with the parameter-heavy Segment Anything Model (SAM) and its 632M network parameters, or adopt SAM-free lightweight pipelines that sacrifice accuracy. To address the trade-off between performance and cost, we propose MLLMSeg, a novel framework that fully exploits the visual detail features already encoded by the MLLM vision encoder, without introducing an extra visual encoder. We further propose a detail-enhanced and semantic-consistent feature fusion module (DSFF) that fully integrates the detail-related visual features with the semantic-related features output by the large language model (LLM) of the MLLM. Finally, we establish a light-weight mask decoder with only 34M network parameters that optimally leverages detailed spatial features from the visual encoder and semantic features from the LLM to achieve precise mask prediction. Extensive experiments demonstrate that our method generally surpasses both SAM-based and SAM-free competitors, striking a better balance between performance and cost.
+
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/method.png" width="800">
+
+ **Code:** Find the official implementation and full details on GitHub: [https://github.com/jcwang0602/MLLMSeg](https://github.com/jcwang0602/MLLMSeg)
+ **ArXiv:** [http://arxiv.org/abs/2508.04107](http://arxiv.org/abs/2508.04107)
+
+ ---
+
+ ## Quick Start
+
+ This section explains how to run inference with our pre-trained models.
+
+ **Notes:** Our models accept images of any size as input. Model outputs are normalized to relative coordinates in the 0-1000 range (either a center point or a bounding box defined by its top-left and bottom-right corners). For visualization, remember to convert these relative coordinates back to the original image dimensions, as sketched below.
+
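+ For reference, the conversion is just a rescale by the original image size. The helper below is an illustrative sketch (it is not part of the released code) and assumes a point prediction `(x, y)` in the 0-1000 range:
+
+ ```python
+ from PIL import Image
+
+ def to_pixels(rel_xy, image_path):
+     """Map an (x, y) point from the model's 0-1000 relative space to pixel coordinates."""
+     width, height = Image.open(image_path).size
+     x_rel, y_rel = rel_xy
+     return x_rel / 1000 * width, y_rel / 1000 * height
+
+ # For a bounding box, apply the same scaling to (x1, y1) and (x2, y2) separately.
+ ```
+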
+ ### Installation
+
+ First, install the `transformers` library and the other dependencies specified by the original repository:
+
+ ```bash
+ conda create -n mllmseg python==3.10.18 -y
+ conda activate mllmseg
+ pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118
+ # If you run into problems while installing datasets, install pyarrow first:
+ # conda install -c conda-forge pyarrow
+ pip install -r requirements.txt
+ pip install flash-attn==2.3.6 --no-build-isolation  # Note: a GPU is required to build flash-attn
+ ```
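+
+ Optionally, you can sanity-check the environment before loading the 8B checkpoint; this snippet is a convenience and not part of the original repository:
+
+ ```python
+ import importlib.util
+
+ import torch
+
+ # The inference example below calls .cuda(), so a CUDA-capable GPU is required.
+ print("torch:", torch.__version__)                  # expected: 2.5.1
+ print("CUDA available:", torch.cuda.is_available())
+ print("flash-attn installed:", importlib.util.find_spec("flash_attn") is not None)
+ ```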
+
+ ### Inference Example
+
+ ```python
+ import numpy as np
+ import torch
+ import torchvision.transforms as T
+ from PIL import Image
+ from torchvision.transforms.functional import InterpolationMode
+ from transformers import AutoModel, AutoTokenizer
+ import os
+
+ IMAGENET_MEAN = (0.485, 0.456, 0.406)
+ IMAGENET_STD = (0.229, 0.224, 0.225)
+
+ def build_transform(input_size):
+     MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
+     transform = T.Compose([
+         T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
+         T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
+         T.ToTensor(),
+         T.Normalize(mean=MEAN, std=STD)
+     ])
+     return transform
+
+ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
+     best_ratio_diff = float('inf')
+     best_ratio = (1, 1)
+     area = width * height
+     for ratio in target_ratios:
+         target_aspect_ratio = ratio[0] / ratio[1]
+         ratio_diff = abs(aspect_ratio - target_aspect_ratio)
+         if ratio_diff < best_ratio_diff:
+             best_ratio_diff = ratio_diff
+             best_ratio = ratio
+         elif ratio_diff == best_ratio_diff:
+             if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
+                 best_ratio = ratio
+     return best_ratio
+
+ def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
+     orig_width, orig_height = image.size
+     aspect_ratio = orig_width / orig_height
+
+     # enumerate candidate tiling grids (i x j tiles) within the block budget
+     target_ratios = set(
+         (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
+         i * j <= max_num and i * j >= min_num)
+     target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
+
+     # find the closest aspect ratio to the target
+     target_aspect_ratio = find_closest_aspect_ratio(
+         aspect_ratio, target_ratios, orig_width, orig_height, image_size)
+
+     # calculate the target width and height
+     target_width = image_size * target_aspect_ratio[0]
+     target_height = image_size * target_aspect_ratio[1]
+     blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
+
+     # resize the image and split it into image_size x image_size tiles
+     resized_img = image.resize((target_width, target_height))
+     processed_images = []
+     for i in range(blocks):
+         box = (
+             (i % (target_width // image_size)) * image_size,
+             (i // (target_width // image_size)) * image_size,
+             ((i % (target_width // image_size)) + 1) * image_size,
+             ((i // (target_width // image_size)) + 1) * image_size
+         )
+         split_img = resized_img.crop(box)
+         processed_images.append(split_img)
+     assert len(processed_images) == blocks
+     if use_thumbnail and len(processed_images) != 1:
+         thumbnail_img = image.resize((image_size, image_size))
+         processed_images.append(thumbnail_img)
+     return processed_images
+
+ def load_image(image_file, input_size=448, max_num=12):
+     image = Image.open(image_file).convert('RGB')
+     transform = build_transform(input_size=input_size)
+     images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
+     pixel_values = [transform(image) for image in images]
+     pixel_values = torch.stack(pixel_values)
+     return pixel_values
+
+ path = 'jcwang0602/MLLMSeg_InternVL2_5_8B_RES'  # or 'jcwang0602/MLLMSeg_InternVL2_5_8B_GRES'
+ model = AutoModel.from_pretrained(
+     path,
+     torch_dtype=torch.bfloat16,
+     low_cpu_mem_usage=True,
+     trust_remote_code=True).eval().cuda()
+ tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
+
+ # Example image. Download one, e.g., from the GitHub repo's assets:
+ # https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/examples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png
+ # and save it to a local path such as 'examples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png',
+ # or replace the path below with your own image.
+ image_path = './examples/images/web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png'
+ if not os.path.exists(image_path):
+     print(f"Warning: Image not found at {image_path}. Please download it from the GitHub repo.")
+     # Fallback for demonstration if the image is not present
+     dummy_image_data = np.zeros((1024, 768, 3), dtype=np.uint8)
+     dummy_image = Image.fromarray(dummy_image_data)
+     pixel_values = build_transform(input_size=448)(dummy_image).unsqueeze(0).to(torch.bfloat16).cuda()
+ else:
+     pixel_values = load_image(image_path, max_num=6).to(torch.bfloat16).cuda()
+
+ generation_config = dict(max_new_tokens=1024, do_sample=True)
+
+ question = "In the screenshot of this web page, please give me the coordinates of the element I want to click on according to my instructions (with point).\n\"'Champions League' link\""
+ response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
+ print(f'User: {question}\nAssistant: {response}')
+ ```
+
+ ---
+
+ ## Performance Metrics
+
+ ### Referring Expression Segmentation
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/tab_res.png" width="800">
+
+ ### Referring Expression Comprehension
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/tab_rec.png" width="800">
+
+ ### Generalized Referring Expression Segmentation
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/tab_gres.png" width="800">
+
+ ---
+
+ ## Visualization
+ ### Referring Expression Segmentation
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/res.png" width="800">
+
+ ### Referring Expression Comprehension
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/rec.png" width="800">
+
+ ### Generalized Referring Expression Segmentation
+ <img src="https://raw.githubusercontent.com/jcwang0602/MLLMSeg/main/assets/gres.png" width="800">
+
+ ---
+
+ ## Citation
+ If our work is useful for your research, please consider citing:
+ ```bibtex
+ @misc{wang2025unlockingpotentialmllmsreferring,
+       title={Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder},
+       author={Jingchao Wang and Zhijian Wu and Dingjiang Huang and Yefeng Zheng and Hong Wang},
+       year={2025},
+       eprint={2508.04107},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2508.04107},
+ }
+ ```