tc-mb committed
Commit ec15ed2 · verified · 1 Parent(s): 4d84d8b

Initial commit: MiniCPM-V-4_5 model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,5 +1,267 @@
1
  ---
2
- license: other
3
- license_name: minicpm-model-license
4
- license_link: LICENSE
5
  ---
1
  ---
2
+ pipeline_tag: image-text-to-text
3
+ datasets:
4
+ - openbmb/RLAIF-V-Dataset
5
+ library_name: transformers
6
+ language:
7
+ - multilingual
8
+ tags:
9
+ - minicpm-v
10
+ - vision
11
+ - ocr
12
+ - multi-image
13
+ - video
14
+ - custom_code
15
  ---
16
+
17
+ <h1>A GPT-4o Level MLLM for Single Image, Multi Image and Video Understanding on Your Phone</h1>
18
+
19
+ [GitHub](https://github.com/OpenBMB/MiniCPM-o) | [Demo](http://101.126.42.235:30910/)
20
+
21
+
22
+
23
+ ## MiniCPM-V 4.5
24
+
25
+ **MiniCPM-V 4.5** is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include:
26
+
27
+ - 🔥 **State-of-the-art Vision-Language Capability.**
28
+ MiniCPM-V 4.5 achieves an average score of 77.2 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B** for vision-language capabilities, making it the most performant MLLM under 30B parameters.
29
+
30
+ - 🎬 **Efficient High Refresh Rate and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 achieves a 96x compression rate for video tokens, where six 448x448 video frames are jointly compressed into 64 video tokens (normally 1,536 tokens for most MLLMs). This means the model can perceive significantly more video frames without increasing the LLM inference cost, bringing efficient, state-of-the-art high-refresh-rate (up to 10 FPS) video understanding and long video understanding on Video-MME, LVBench, MLVU, MotionBench, FavorBench, etc. (see the token-budget sketch after this feature list).
31
+
32
+ - ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking, for efficient high-frequency usage with competitive performance, and deep thinking, for more complex problem solving. To cover the efficiency/performance trade-offs of different user scenarios, the fast/deep thinking mode can be switched in a highly controllable fashion.
33
+
34
+ - 💪 **Strong OCR, Document Parsing and Others.**
35
+ Based on the [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x fewer visual tokens than most MLLMs. The model achieves **leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5**. It also achieves state-of-the-art PDF document parsing performance on OmniDocBench among general MLLMs. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o-latest on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.
36
+
37
+
38
+ - 💫 **Easy Usage.**
39
+ MiniCPM-V 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/Support-MiniCPM-V-4.5/docs/multimodal/minicpmv4.5.md) and [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4), [GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) and [AWQ](https://github.com/tc-mb/AutoAWQ) format quantized models in 16 sizes, (3) [SGLang](https://github.com/tc-mb/sglang/tree/main) and [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [Transformers](https://github.com/tc-mb/transformers/tree/main) and [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), (6) optimized [local iOS app](https://github.com/tc-mb/MiniCPM-o-demo-iOS) on iPhone and iPad, and (7) online web demo on [server](http://101.126.42.235:30910/). See our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) for full usage examples!
40
+
41
+
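+ The 96x compression figure above can be reproduced from the values shipped in this repo's `config.json` (`patch_size: 14`, `query_num: 64`) together with the 448x448 frame size. The following is a minimal back-of-the-envelope sketch, not part of the model code; the constants are assumptions taken from those config values:
+
+ ```python
+ # Token-budget arithmetic behind the "96x compression" claim (illustrative only).
+ PATCH_SIZE = 14          # patch_size in config.json
+ FRAME_SIZE = 448         # video frame resolution used above
+ QUERY_TOKENS = 64        # query_num in config.json: tokens per packed frame group
+ FRAMES_PER_GROUP = 6     # frames jointly compressed by the 3D-Resampler
+
+ patches_per_frame = (FRAME_SIZE // PATCH_SIZE) ** 2      # 32 * 32 = 1024 vision patches
+ raw_patches = FRAMES_PER_GROUP * patches_per_frame       # 6 * 1024 = 6144 patches
+ compression = raw_patches / QUERY_TOKENS                 # 6144 / 64 = 96x
+ print(f"{raw_patches} patches -> {QUERY_TOKENS} tokens ({compression:.0f}x)")
+ ```
+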
42
+ ### Evaluation
43
+
44
+ <div align="center">
45
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/radar_minicpm_v45.png" width="60%">
46
+ </div>
47
+ <div align="center">
48
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv_4_5_evaluation_results.jpg" width="100%">
49
+ </div>
50
+
51
+ ### Examples
52
+
53
+ <div align="center">
54
+ <a href="https://youtu.be/SCtimvC3Qfk"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/MiniCPM-V%204.5-8.26_img.jpeg" width="70%"></a>
55
+ </div>
56
+
57
+ <div style="display: flex; flex-direction: column; align-items: center;">
58
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case1.png" alt="en_case1" style="margin-bottom: 5px;">
59
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case2.png" alt="en_case2" style="margin-bottom: 5px;">
60
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case3.jpeg" alt="en_case3" style="margin-bottom: 5px;">
61
+ </div>
62
+
63
+ We deploy MiniCPM-V 4.5 on an iPad M4 with our [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS). The demo video is a raw screen recording without editing.
64
+
65
+ <div align="center">
66
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
67
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_cot.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
68
+ </div>
69
+
70
+ <div align="center">
71
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
72
+ <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_travel.gif" width="45%" style="display: inline-block; margin: 0 10px;"/>
73
+ </div>
74
+
75
+
76
+ ## Usage
77
+
78
+ ```python
79
+ import torch
80
+ from PIL import Image
81
+ from transformers import AutoModel, AutoTokenizer
82
+
83
+ torch.manual_seed(100)
84
+
85
+ model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
86
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
87
+ model = model.eval().cuda()
88
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
89
+
90
+ image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB')
91
+
92
+ enable_thinking=False # If `enable_thinking=True`, the long-thinking mode is enabled.
93
+
94
+ # First round chat
95
+ question = "What is the landform in the picture?"
96
+ msgs = [{'role': 'user', 'content': [image, question]}]
97
+
98
+ answer = model.chat(
99
+ msgs=msgs,
100
+ tokenizer=tokenizer,
101
+ enable_thinking=enable_thinking
102
+ )
103
+ print(answer)
104
+
105
+ # Second round chat, pass history context of multi-turn conversation
106
+ msgs.append({"role": "assistant", "content": [answer]})
107
+ msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]})
108
+
109
+ answer = model.chat(
110
+ msgs=msgs,
111
+ tokenizer=tokenizer
112
+ )
113
+ print(answer)
114
+ ```
115
+
116
+ You will get the following output:
117
+
118
+ ```shell
119
+ # round1
120
+ The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here. These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion.
121
+
122
+ This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views.
123
+
124
+ # round2
125
+ When traveling to a karst landscape like this, here are some important tips:
126
+
127
+ 1. Wear comfortable shoes: The terrain can be uneven and hilly.
128
+ 2. Bring water and snacks for energy during hikes or boat rides.
129
+ 3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots.
130
+ 4. Respect local customs and nature regulations by not littering or disturbing wildlife.
131
+
132
+ By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains.
133
+ ```
134
+
135
+
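+ The `enable_thinking` flag in the snippet above is the switch for the hybrid fast/deep thinking mode described earlier. A minimal sketch of turning deep thinking on for a harder follow-up, reusing the `model`, `tokenizer`, and `image` objects defined above (the question here is just an illustrative example):
+
+ ```python
+ # Deep-thinking mode: same chat() API, only enable_thinking changes.
+ question = "Estimate how long a boat tour through this landscape might take, and explain your reasoning."
+ msgs = [{'role': 'user', 'content': [image, question]}]
+
+ answer = model.chat(
+     msgs=msgs,
+     tokenizer=tokenizer,
+     enable_thinking=True  # switch from fast thinking to deep thinking for this request
+ )
+ print(answer)
+ ```
+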
136
+ #### Chat with Video
137
+ <summary> Click to view Python code for running MiniCPM-V-4_5 with video input and the 3D-Resampler. </summary>
138
+
139
+ ```python
140
+ ## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids.
141
+ # To achieve this, you need to organize your video data into two corresponding sequences:
142
+ # frames: List[Image]
143
+ # temporal_ids: List[List[Int]].
144
+
145
+ import torch
146
+ from PIL import Image
147
+ from transformers import AutoModel, AutoTokenizer
148
+ from decord import VideoReader, cpu # pip install decord
149
+ from scipy.spatial import cKDTree
150
+ import numpy as np
151
+ import math
152
+
153
+ model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6
154
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
155
+ model = model.eval().cuda()
156
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6
157
+
158
+ MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING.
159
+ MAX_NUM_PACKING=3 # Maximum packing number of video frames. Valid range: 1-6.
160
+ TIME_SCALE = 0.1
161
+
162
+ def map_to_nearest_scale(values, scale):
163
+ tree = cKDTree(np.asarray(scale)[:, None])
164
+ _, indices = tree.query(np.asarray(values)[:, None])
165
+ return np.asarray(scale)[indices]
166
+
167
+
168
+ def group_array(arr, size):
169
+ return [arr[i:i+size] for i in range(0, len(arr), size)]
170
+
171
+ def encode_video(video_path, choose_fps=3, force_packing=None):
172
+ def uniform_sample(l, n):
173
+ gap = len(l) / n
174
+ idxs = [int(i * gap + gap / 2) for i in range(n)]
175
+ return [l[i] for i in idxs]
176
+ vr = VideoReader(video_path, ctx=cpu(0))
177
+ fps = vr.get_avg_fps()
178
+ video_duration = len(vr) / fps
179
+
180
+ if choose_fps * int(video_duration) <= MAX_NUM_FRAMES:
181
+ packing_nums = 1
182
+ choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration))
183
+
184
+ else:
185
+ packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES)
186
+ if packing_nums <= MAX_NUM_PACKING:
187
+ choose_frames = round(video_duration * choose_fps)
188
+ else:
189
+ choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING)
190
+ packing_nums = MAX_NUM_PACKING
191
+
192
+ frame_idx = [i for i in range(0, len(vr))]
193
+ frame_idx = np.array(uniform_sample(frame_idx, choose_frames))
194
+
195
+ if force_packing:
196
+ packing_nums = min(force_packing, MAX_NUM_PACKING)
197
+
198
+ print(video_path, ' duration:', video_duration)
199
+ print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}')
200
+
201
+ frames = vr.get_batch(frame_idx).asnumpy()
202
+
203
+ frame_idx_ts = frame_idx / fps
204
+ scale = np.arange(0, video_duration, TIME_SCALE)
205
+
206
+ frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE
207
+ frame_ts_id = frame_ts_id.astype(np.int32)
208
+
209
+ assert len(frames) == len(frame_ts_id)
210
+
211
+ frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames]
212
+ frame_ts_id_group = group_array(frame_ts_id, packing_nums)
213
+
214
+ return frames, frame_ts_id_group
215
+
216
+
217
+ video_path="video_test.mp4"
218
+ fps = 5 # fps for video
219
+ force_packing = None # Set force_packing to force 3D packing on; otherwise, encode_video dynamically chooses the packing number based on the video duration.
220
+ frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing)
221
+
222
+ question = "Describe the video"
223
+ msgs = [
224
+ {'role': 'user', 'content': frames + [question]},
225
+ ]
226
+
227
+
228
+ answer = model.chat(
229
+ msgs=msgs,
230
+ tokenizer=tokenizer,
231
+ use_image_id=False,
232
+ max_slice_nums=1,
233
+ temporal_ids=frame_ts_id_group
234
+ )
235
+ print(answer)
236
+ ```
237
+
238
+
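+ To make the packing behaviour concrete: for a hypothetical 60-second clip sampled at `fps=5` with the defaults above (`MAX_NUM_FRAMES=180`, `MAX_NUM_PACKING=3`), `encode_video` takes the packing branch. The arithmetic below simply restates that branch, assuming 64 tokens per packed group (`query_num` in `config.json`) and `max_slice_nums=1` as passed to `chat`:
+
+ ```python
+ import math
+
+ # Hypothetical 60 s video sampled at choose_fps = 5: 5 * 60 = 300 > MAX_NUM_FRAMES (180),
+ # so encode_video packs several frames per temporal group.
+ packing_nums = math.ceil(60 * 5 / 180)             # = 2, which is <= MAX_NUM_PACKING (3)
+ choose_frames = round(60 * 5)                      # = 300 sampled frames
+ groups = math.ceil(choose_frames / packing_nums)   # = 150 temporal groups
+ video_tokens = groups * 64                         # 64 tokens per group -> 9600 video tokens
+ print(packing_nums, choose_frames, groups, video_tokens)
+ ```
+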
239
+ ## License
240
+ #### Model License
241
+ * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
242
+ * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM-o/blob/main/MiniCPM%20Model%20License.md).
243
+ * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 4.5 weights are also available for free commercial use.
244
+
245
+
246
+ #### Statement
247
+ * As an LMM, MiniCPM-V 4.5 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-V 4.5 does not represent the views and positions of the model developers.
248
+ * We will not be liable for any problems arising from the use of the MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.
249
+
250
+ ## Key Techniques and Other Multimodal Projects
251
+
252
+ 👏 Welcome to explore key techniques of MiniCPM-V 4.5 and other multimodal projects of our team:
253
+
254
+ [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
255
+
256
+ ## Citation
257
+
258
+ If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
259
+
260
+ ```bib
261
+ @article{yao2024minicpm,
262
+ title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
263
+ author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
264
+ journal={Nature Communications},
+ volume={16},
+ pages={5509},
265
+ year={2025}
266
+ }
267
+ ```
added_tokens.json ADDED
@@ -0,0 +1,107 @@
1
+ {
2
+ "</box>": 151674,
3
+ "</image>": 151670,
4
+ "</image_id>": 151682,
5
+ "</point>": 151678,
6
+ "</quad>": 151676,
7
+ "</ref>": 151672,
8
+ "</slice>": 151680,
9
+ "</think>": 151668,
10
+ "</tool_call>": 151658,
11
+ "</tool_response>": 151666,
12
+ "</unit>": 151684,
13
+ "<box>": 151673,
14
+ "<image>": 151669,
15
+ "<image_id>": 151681,
16
+ "<point>": 151677,
17
+ "<quad>": 151675,
18
+ "<ref>": 151671,
19
+ "<slice>": 151679,
20
+ "<think>": 151667,
21
+ "<tool_call>": 151657,
22
+ "<tool_response>": 151665,
23
+ "<unit>": 151683,
24
+ "<|box_end|>": 151649,
25
+ "<|box_start|>": 151648,
26
+ "<|endoftext|>": 151643,
27
+ "<|file_sep|>": 151664,
28
+ "<|fim_middle|>": 151660,
29
+ "<|fim_pad|>": 151662,
30
+ "<|fim_prefix|>": 151659,
31
+ "<|fim_suffix|>": 151661,
32
+ "<|im_end|>": 151645,
33
+ "<|im_start|>": 151644,
34
+ "<|image_pad|>": 151655,
35
+ "<|object_ref_end|>": 151647,
36
+ "<|object_ref_start|>": 151646,
37
+ "<|quad_end|>": 151651,
38
+ "<|quad_start|>": 151650,
39
+ "<|repo_name|>": 151663,
40
+ "<|reserved_0|>": 151685,
41
+ "<|reserved_10|>": 151695,
42
+ "<|reserved_11|>": 151696,
43
+ "<|reserved_12|>": 151697,
44
+ "<|reserved_13|>": 151698,
45
+ "<|reserved_14|>": 151699,
46
+ "<|reserved_15|>": 151700,
47
+ "<|reserved_16|>": 151701,
48
+ "<|reserved_17|>": 151702,
49
+ "<|reserved_18|>": 151703,
50
+ "<|reserved_19|>": 151704,
51
+ "<|reserved_1|>": 151686,
52
+ "<|reserved_20|>": 151705,
53
+ "<|reserved_21|>": 151706,
54
+ "<|reserved_22|>": 151707,
55
+ "<|reserved_23|>": 151708,
56
+ "<|reserved_24|>": 151709,
57
+ "<|reserved_25|>": 151710,
58
+ "<|reserved_26|>": 151711,
59
+ "<|reserved_27|>": 151712,
60
+ "<|reserved_28|>": 151713,
61
+ "<|reserved_29|>": 151714,
62
+ "<|reserved_2|>": 151687,
63
+ "<|reserved_30|>": 151715,
64
+ "<|reserved_31|>": 151716,
65
+ "<|reserved_32|>": 151717,
66
+ "<|reserved_33|>": 151718,
67
+ "<|reserved_34|>": 151719,
68
+ "<|reserved_35|>": 151720,
69
+ "<|reserved_36|>": 151721,
70
+ "<|reserved_37|>": 151722,
71
+ "<|reserved_38|>": 151723,
72
+ "<|reserved_39|>": 151724,
73
+ "<|reserved_3|>": 151688,
74
+ "<|reserved_40|>": 151725,
75
+ "<|reserved_41|>": 151726,
76
+ "<|reserved_42|>": 151727,
77
+ "<|reserved_43|>": 151728,
78
+ "<|reserved_44|>": 151729,
79
+ "<|reserved_45|>": 151730,
80
+ "<|reserved_46|>": 151731,
81
+ "<|reserved_47|>": 151732,
82
+ "<|reserved_48|>": 151733,
83
+ "<|reserved_49|>": 151734,
84
+ "<|reserved_4|>": 151689,
85
+ "<|reserved_50|>": 151735,
86
+ "<|reserved_51|>": 151736,
87
+ "<|reserved_52|>": 151737,
88
+ "<|reserved_53|>": 151738,
89
+ "<|reserved_54|>": 151739,
90
+ "<|reserved_55|>": 151740,
91
+ "<|reserved_56|>": 151741,
92
+ "<|reserved_57|>": 151742,
93
+ "<|reserved_58|>": 151743,
94
+ "<|reserved_59|>": 151744,
95
+ "<|reserved_5|>": 151690,
96
+ "<|reserved_60|>": 151745,
97
+ "<|reserved_61|>": 151746,
98
+ "<|reserved_62|>": 151747,
99
+ "<|reserved_6|>": 151691,
100
+ "<|reserved_7|>": 151692,
101
+ "<|reserved_8|>": 151693,
102
+ "<|reserved_9|>": 151694,
103
+ "<|video_pad|>": 151656,
104
+ "<|vision_end|>": 151653,
105
+ "<|vision_pad|>": 151654,
106
+ "<|vision_start|>": 151652
107
+ }
config.json ADDED
@@ -0,0 +1,66 @@
1
+ {
2
+ "_name_or_path": "openbmb/MiniCPM-V-4_5",
3
+ "version": 4.5,
4
+ "architectures": [
5
+ "MiniCPMV"
6
+ ],
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_minicpm.MiniCPMVConfig",
9
+ "AutoModel": "modeling_minicpmv.MiniCPMV",
10
+ "AutoModelForCausalLM": "modeling_minicpmv.MiniCPMV"
11
+ },
12
+ "attention_bias": false,
13
+ "attention_dropout": 0.0,
14
+ "bos_token_id": 151643,
15
+ "eos_token_id": 151645,
16
+ "head_dim": 128,
17
+ "hidden_act": "silu",
18
+ "hidden_size": 4096,
19
+ "initializer_range": 0.02,
20
+ "intermediate_size": 12288,
21
+ "max_position_embeddings": 40960,
22
+ "max_window_layers": 36,
23
+ "num_attention_heads": 32,
24
+ "num_hidden_layers": 36,
25
+ "num_key_value_heads": 8,
26
+ "rms_norm_eps": 1e-06,
27
+ "rope_scaling": null,
28
+ "rope_theta": 1000000,
29
+ "sliding_window": null,
30
+ "tie_word_embeddings": false,
31
+ "torch_dtype": "bfloat16",
32
+ "transformers_version": "4.51.0",
33
+ "use_cache": true,
34
+ "use_sliding_window": false,
35
+ "vocab_size": 151748,
36
+ "batch_vision_input": true,
37
+ "batch_3d_resampler": true,
38
+ "drop_vision_last_layer": false,
39
+ "image_size": 448,
40
+ "model_type": "minicpmv",
41
+ "patch_size": 14,
42
+ "quantization_config": {
43
+ "bits": 4,
44
+ "group_size": 128,
45
+ "modules_to_not_convert": null,
46
+ "quant_method": "awq",
47
+ "version": "gemm",
48
+ "zero_point": true
49
+ },
50
+ "query_num": 64,
51
+ "slice_config": {
52
+ "max_slice_nums": 9,
53
+ "patch_size": 14,
54
+ "model_type": "minicpmv"
55
+ },
56
+ "slice_mode": true,
57
+ "vision_config": {
58
+ "hidden_size": 1152,
59
+ "image_size": 980,
60
+ "intermediate_size": 4304,
61
+ "model_type": "siglip",
62
+ "num_attention_heads": 16,
63
+ "num_hidden_layers": 27,
64
+ "patch_size": 14
65
+ }
66
+ }
configuration_minicpm.py ADDED
@@ -0,0 +1,102 @@
1
+ # coding=utf-8
2
+ """ MiniCPMV model configuration"""
3
+
4
+ import os
5
+ from typing import Union
6
+
7
+ from transformers.utils import logging
8
+ from transformers import Qwen3Config, PretrainedConfig
9
+ from .modeling_navit_siglip import SiglipVisionConfig
10
+
11
+ logger = logging.get_logger(__name__)
12
+
13
+
14
+ class MiniCPMVSliceConfig(PretrainedConfig):
15
+ model_type = "minicpmv"
16
+
17
+ def __init__(
18
+ self,
19
+ patch_size=14,
20
+ max_slice_nums=9,
21
+ scale_resolution=448,
22
+ **kwargs,
23
+ ):
24
+ super().__init__(**kwargs)
25
+ self.patch_size = patch_size
26
+ self.max_slice_nums = max_slice_nums
27
+ self.scale_resolution = scale_resolution
28
+
29
+ @classmethod
30
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
31
+ cls._set_token_in_kwargs(kwargs)
32
+
33
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
34
+
35
+ if config_dict.get("model_type") == "minicpmv":
36
+ config_dict = config_dict["slice_config"]
37
+
38
+ if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
39
+ logger.warning(
40
+ f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
41
+ f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
42
+ )
43
+
44
+ return cls.from_dict(config_dict, **kwargs)
45
+
46
+
47
+
48
+ class MiniCPMVConfig(Qwen3Config):
49
+ model_type = "minicpmv"
50
+ keys_to_ignore_at_inference = ["past_key_values"]
51
+
52
+ default_vision_config = {
53
+ "hidden_size": 1152,
54
+ "image_size": 980,
55
+ "intermediate_size": 4304,
56
+ "model_type": "siglip",
57
+ "num_attention_heads": 16,
58
+ "num_hidden_layers": 27,
59
+ "patch_size": 14,
60
+ }
61
+
62
+ def __init__(
63
+ self,
64
+ use_cache=True,
65
+ query_num=64,
66
+ image_size=448,
67
+ drop_vision_last_layer=True,
68
+ batch_vision_input=True,
69
+ slice_config=None,
70
+ vision_config=None,
71
+ use_image_id=True,
72
+ vision_batch_size=16,
73
+ batch_3d_resampler=True,
74
+ **kwargs,
75
+ ):
76
+ self.use_cache = use_cache
77
+ self.query_num = query_num
78
+ self.image_size = image_size
79
+ self.drop_vision_last_layer = drop_vision_last_layer
80
+ self.batch_vision_input = batch_vision_input
81
+ self.use_image_id = use_image_id
82
+ self.vision_batch_size = vision_batch_size
83
+ self.batch_3d_resampler = batch_3d_resampler
84
+
85
+ if slice_config is None:
86
+ self.slice_config = MiniCPMVSliceConfig(max_slice_nums=1)
87
+ else:
88
+ self.slice_config = MiniCPMVSliceConfig(**slice_config)
89
+ self.slice_mode = True
90
+
91
+ # same as HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit add tgt_sizes
92
+ if vision_config is None:
93
+ self.vision_config = SiglipVisionConfig(**self.default_vision_config)
94
+ #logger.info("vision_config is None, using default vision config")
95
+ elif isinstance(vision_config, dict):
96
+ self.vision_config = SiglipVisionConfig(**vision_config)
97
+ elif isinstance(vision_config, SiglipVisionConfig):
98
+ self.vision_config = vision_config
99
+
100
+ self.patch_size = self.vision_config.patch_size
101
+
102
+ super().__init__(**kwargs)
generation_config.json ADDED
@@ -0,0 +1,14 @@
1
+ {
2
+ "bos_token_id": 151643,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 151645,
6
+ 151643
7
+ ],
8
+ "pad_token_id": 151643,
9
+ "temperature": 0.6,
10
+ "top_k": 20,
11
+ "top_p": 0.95,
12
+ "chat_template_kwargs": {"enable_thinking": false},
13
+ "transformers_version": "4.51.0"
14
+ }
image_processing_minicpmv.py ADDED
@@ -0,0 +1,501 @@
1
+ from typing import Optional, Union, Dict, Any, List
2
+ from itertools import chain
3
+
4
+ import torch
5
+ import math
6
+ import PIL.Image
7
+ import PIL.ImageSequence
8
+ import numpy as np
9
+ import PIL
10
+ from PIL import Image
11
+
12
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
13
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
14
+ from transformers import AutoImageProcessor
15
+ from transformers.image_transforms import to_channel_dimension_format
16
+ from transformers.image_utils import (
17
+ ImageInput,
18
+ make_list_of_images,
19
+ valid_images,
20
+ is_torch_tensor,
21
+ is_batched,
22
+ to_numpy_array,
23
+ infer_channel_dimension_format,
24
+ ChannelDimension
25
+ )
26
+
27
+
28
+ def recursive_converter(converter, value):
29
+ if isinstance(value, list):
30
+ new_value = []
31
+ for v in value:
32
+ new_value += [recursive_converter(converter, v)]
33
+ return new_value
34
+ else:
35
+ return converter(value)
36
+
37
+ def list_depth(lst):
38
+ if not isinstance(lst, list) and not isinstance(lst, np.ndarray):
39
+ return 0
40
+ # if not lst: # 空列表
41
+ # return 1
42
+ return 1 + max(list_depth(item) for item in lst)
43
+
44
+ class MiniCPMVBatchFeature(BatchFeature):
45
+ r"""
46
+ Extend from BatchFeature for supporting various image size
47
+ """
48
+ def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None):
49
+ super().__init__(data)
50
+ self.convert_to_tensors(tensor_type=tensor_type)
51
+
52
+ def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]] = None):
53
+ if tensor_type is None:
54
+ return self
55
+
56
+ is_tensor, as_tensor = self._get_is_as_tensor_fns(tensor_type)
57
+
58
+ def converter(value):
59
+ try:
60
+ if not is_tensor(value):
61
+ tensor = as_tensor(value)
62
+ return tensor
63
+ except: # noqa E722
64
+ if key == "overflowing_values":
65
+ raise ValueError("Unable to create tensor returning overflowing values of different lengths. ")
66
+ raise ValueError(
67
+ "Unable to create tensor, you should probably activate padding "
68
+ "with 'padding=True' to have batched tensors with the same length."
69
+ )
70
+
71
+
72
+ for key, value in self.items():
73
+ self[key] = recursive_converter(converter, value)
74
+ return self
75
+
76
+ def to(self, *args, **kwargs) -> "MiniCPMVBatchFeature":
77
+ requires_backends(self, ["torch"])
78
+ import torch
79
+
80
+ def cast_tensor(v):
81
+ # check if v is a floating point
82
+ if torch.is_floating_point(v):
83
+ # cast and send to device
84
+ return v.to(*args, **kwargs)
85
+ elif device is not None:
86
+ return v.to(device=device)
87
+ else:
88
+ return v
89
+
90
+ new_data = {}
91
+ device = kwargs.get("device")
92
+ # Check if the args are a device or a dtype
93
+ if device is None and len(args) > 0:
94
+ # device should be always the first argument
95
+ arg = args[0]
96
+ if is_torch_dtype(arg):
97
+ # The first argument is a dtype
98
+ pass
99
+ elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int):
100
+ device = arg
101
+ else:
102
+ # it's something else
103
+ raise ValueError(f"Attempting to cast a BatchFeature to type {str(arg)}. This is not supported.")
104
+ # We cast only floating point tensors to avoid issues with tokenizers casting `LongTensor` to `FloatTensor`
105
+ for k, v in self.items():
106
+ new_data[k] = recursive_converter(cast_tensor, v)
107
+ self.data = new_data
108
+ return self
109
+
110
+
111
+ class MiniCPMVImageProcessor(BaseImageProcessor):
112
+ model_input_names = ["pixel_values"]
113
+
114
+ def __init__(
115
+ self,
116
+ max_slice_nums=9,
117
+ scale_resolution=448,
118
+ patch_size=14,
119
+ **kwargs):
120
+ super().__init__(**kwargs)
121
+ self.max_slice_nums = max_slice_nums
122
+ self.scale_resolution = scale_resolution
123
+ self.patch_size = patch_size
124
+ self.use_image_id = kwargs.pop("use_image_id", False)
125
+ self.image_feature_size = kwargs.pop("image_feature_size", 64)
126
+ self.im_start_token = kwargs.pop("im_start", "<image>")
127
+ self.im_end_token = kwargs.pop("im_end", "</image>")
128
+ self.slice_start_token = kwargs.pop("slice_start", "<slice>")
129
+ self.slice_end_token = kwargs.pop("slice_end", "</slice>")
130
+ self.unk_token = kwargs.pop("unk", "<unk>")
131
+ self.im_id_start = kwargs.pop("im_id_start", "<image_id>")
132
+ self.im_id_end = kwargs.pop("im_id_end", "</image_id>")
133
+ self.slice_mode = kwargs.pop("slice_mode", True)
134
+ self.mean = np.array(kwargs.pop("norm_mean", [0.5, 0.5, 0.5]))
135
+ self.std = np.array(kwargs.pop("norm_std", [0.5, 0.5, 0.5]))
136
+ self.version = kwargs.pop("version", 2.0)
137
+
138
+ def ensure_divide(self, length, patch_size):
139
+ return max(round(length / patch_size) * patch_size, patch_size)
140
+
141
+ def find_best_resize(self,
142
+ original_size,
143
+ scale_resolution,
144
+ patch_size,
145
+ allow_upscale=False):
146
+ width, height = original_size
147
+ if (width * height >
148
+ scale_resolution * scale_resolution) or allow_upscale:
149
+ r = width / height
150
+ height = int(scale_resolution / math.sqrt(r))
151
+ width = int(height * r)
152
+ best_width = self.ensure_divide(width, patch_size)
153
+ best_height = self.ensure_divide(height, patch_size)
154
+ return (best_width, best_height)
155
+
156
+ def get_refine_size(self,
157
+ original_size,
158
+ grid,
159
+ scale_resolution,
160
+ patch_size,
161
+ allow_upscale=False):
162
+ width, height = original_size
163
+ grid_x, grid_y = grid
164
+
165
+ refine_width = self.ensure_divide(width, grid_x)
166
+ refine_height = self.ensure_divide(height, grid_y)
167
+
168
+ grid_width = refine_width / grid_x
169
+ grid_height = refine_height / grid_y
170
+
171
+ best_grid_size = self.find_best_resize((grid_width, grid_height),
172
+ scale_resolution,
173
+ patch_size,
174
+ allow_upscale=allow_upscale)
175
+ refine_size = (best_grid_size[0] * grid_x, best_grid_size[1] * grid_y)
176
+ return refine_size
177
+
178
+ def split_to_patches(self, image, grid):
179
+ patches = []
180
+ width, height = image.size
181
+ grid_x = int(width / grid[0])
182
+ grid_y = int(height / grid[1])
183
+ for i in range(0, height, grid_y):
184
+ images = []
185
+ for j in range(0, width, grid_x):
186
+ box = (j, i, j + grid_x, i + grid_y)
187
+ patch = image.crop(box)
188
+ images.append(patch)
189
+ patches.append(images)
190
+ return patches
191
+
192
+ def slice_image(
193
+ self, image, max_slice_nums=9, scale_resolution=448, patch_size=14, never_split=False
194
+ ):
195
+ original_size = image.size
196
+ source_image = None
197
+ best_grid = self.get_sliced_grid(original_size, max_slice_nums, never_split)
198
+ patches = []
199
+
200
+ if best_grid is None:
201
+ # dont need to slice, upsample
202
+ best_size = self.find_best_resize(
203
+ original_size, scale_resolution, patch_size, allow_upscale=True
204
+ )
205
+ source_image = image.resize(best_size, resample=Image.Resampling.BICUBIC)
206
+ else:
207
+ # source image, down-sampling and ensure divided by patch_size
208
+ best_resize = self.find_best_resize(original_size, scale_resolution, patch_size)
209
+ source_image = image.copy().resize(best_resize, resample=Image.Resampling.BICUBIC)
210
+ refine_size = self.get_refine_size(
211
+ original_size, best_grid, scale_resolution, patch_size, allow_upscale=True
212
+ )
213
+ refine_image = image.resize(refine_size, resample=Image.Resampling.BICUBIC)
214
+ patches = self.split_to_patches(refine_image, best_grid)
215
+
216
+ return source_image, patches, best_grid
217
+
218
+ def get_grid_placeholder(self, grid):
219
+ if grid is None:
220
+ return ""
221
+ slice_image_placeholder = (
222
+ self.slice_start_token
223
+ + self.unk_token * self.image_feature_size
224
+ + self.slice_end_token
225
+ )
226
+
227
+ cols = grid[0]
228
+ rows = grid[1]
229
+ slices = []
230
+ for i in range(rows):
231
+ lines = []
232
+ for j in range(cols):
233
+ lines.append(slice_image_placeholder)
234
+ slices.append("".join(lines))
235
+
236
+ slice_placeholder = "\n".join(slices)
237
+ return slice_placeholder
238
+
239
+ def get_image_id_placeholder(self, idx=0):
240
+ return f"{self.im_id_start}{idx}{self.im_id_end}"
241
+
242
+ def get_sliced_images(self, image, max_slice_nums=None):
243
+ slice_images = []
244
+
245
+ if not self.slice_mode:
246
+ return [image]
247
+
248
+ max_slice_nums = self.max_slice_nums if max_slice_nums is None else int(max_slice_nums)
249
+ assert max_slice_nums > 0
250
+ source_image, patches, sliced_grid = self.slice_image(
251
+ image,
252
+ max_slice_nums, # default: 9
253
+ self.scale_resolution, # default: 448
254
+ self.patch_size # default: 14
255
+ )
256
+
257
+ slice_images.append(source_image)
258
+ if len(patches) > 0:
259
+ for i in range(len(patches)):
260
+ for j in range(len(patches[0])):
261
+ slice_images.append(patches[i][j])
262
+ return slice_images
263
+
264
+ def get_sliced_grid(self, image_size, max_slice_nums, nerver_split=False):
265
+ original_width, original_height = image_size
266
+ log_ratio = math.log(original_width / original_height)
267
+ ratio = original_width * original_height / (self.scale_resolution * self.scale_resolution)
268
+ multiple = min(math.ceil(ratio), max_slice_nums)
269
+ if multiple <= 1 or nerver_split:
270
+ return None
271
+ candidate_split_grids_nums = []
272
+ for i in [multiple - 1, multiple, multiple + 1]:
273
+ if i == 1 or i > max_slice_nums:
274
+ continue
275
+ candidate_split_grids_nums.append(i)
276
+
277
+ candidate_grids = []
278
+ for split_grids_nums in candidate_split_grids_nums:
279
+ m = 1
280
+ while m <= split_grids_nums:
281
+ if split_grids_nums % m == 0:
282
+ candidate_grids.append([m, split_grids_nums // m])
283
+ m += 1
284
+
285
+ best_grid = [1, 1]
286
+ min_error = float("inf")
287
+ for grid in candidate_grids:
288
+ error = abs(log_ratio - math.log(grid[0] / grid[1]))
289
+ if error < min_error:
290
+ best_grid = grid
291
+ min_error = error
292
+
293
+ return best_grid
294
+
295
+ def get_slice_image_placeholder(self, image_size, image_idx=0, max_slice_nums=None, use_image_id=None):
296
+ max_slice_nums = self.max_slice_nums if max_slice_nums is None else int(max_slice_nums)
297
+ assert max_slice_nums > 0
298
+ grid = self.get_sliced_grid(image_size=image_size, max_slice_nums=max_slice_nums)
299
+
300
+ image_placeholder = (
301
+ self.im_start_token
302
+ + self.unk_token * self.image_feature_size
303
+ + self.im_end_token
304
+ )
305
+ use_image_id = self.use_image_id if use_image_id is None else bool(use_image_id)
306
+ if use_image_id:
307
+ final_placeholder = self.get_image_id_placeholder(image_idx) + image_placeholder
308
+ else:
309
+ final_placeholder = image_placeholder
310
+
311
+ if self.slice_mode:
312
+ final_placeholder = final_placeholder + self.get_grid_placeholder(grid=grid)
313
+ return final_placeholder
314
+
315
+ def to_pil_image(self, image, rescale=None) -> PIL.Image.Image:
316
+ """
317
+ Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if
318
+ needed.
319
+
320
+ Args:
321
+ image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor`):
322
+ The image to convert to the PIL Image format.
323
+ rescale (`bool`, *optional*):
324
+ Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will
325
+ default to `True` if the image type is a floating type, `False` otherwise.
326
+ """
327
+ if isinstance(image, PIL.Image.Image):
328
+ return image
329
+ if is_torch_tensor(image):
330
+ image = image.numpy()
331
+
332
+ if isinstance(image, np.ndarray):
333
+ if rescale is None:
334
+ # rescale default to the array being of floating type.
335
+ rescale = isinstance(image.flat[0], np.floating)
336
+ # If the channel as been moved to first dim, we put it back at the end.
337
+ if image.ndim == 3 and image.shape[0] in [1, 3]:
338
+ image = image.transpose(1, 2, 0)
339
+ if rescale:
340
+ image = image * 255
341
+ image = image.astype(np.uint8)
342
+ return PIL.Image.fromarray(image)
343
+ return image
344
+
345
+ def reshape_by_patch(self, image):
346
+ """
347
+ :param image: shape [3, H, W]
348
+ :param patch_size:
349
+ :return: [3, patch_size, HW/patch_size]
350
+ """
351
+ image = torch.from_numpy(image)
352
+ patch_size = self.patch_size
353
+ patches = torch.nn.functional.unfold(
354
+ image,
355
+ (patch_size, patch_size),
356
+ stride=(patch_size, patch_size)
357
+ )
358
+
359
+ patches = patches.reshape(image.size(0), patch_size, patch_size, -1)
360
+ patches = patches.permute(0, 1, 3, 2).reshape(image.size(0), patch_size, -1)
361
+ return patches.numpy()
362
+
363
+ def preprocess(
364
+ self,
365
+ images: Union[Image.Image, List[Image.Image], List[List[Image.Image]]],
366
+ do_pad: Optional[bool] = True, # TODO: add pad for MiniCPM-Llama3-V-2_5
367
+ max_slice_nums: int = None,
368
+ temporal_ids: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
369
+ return_tensors: Optional[Union[str, TensorType]] = None,
370
+ **kwargs
371
+ ) -> MiniCPMVBatchFeature:
372
+ if isinstance(images, Image.Image):
373
+ images_list = [[images]]
374
+ elif isinstance(images[0], Image.Image):
375
+ images_list = [images]
376
+ else:
377
+ images_list = images
378
+
379
+ if temporal_ids is not None:
380
+ if list_depth(temporal_ids) == 2:
381
+ temporal_ids = [temporal_ids]
382
+
383
+ new_images_list = []
384
+ image_sizes_list = []
385
+ tgt_sizes_list = []
386
+ temporal_ids_list = []
387
+ skip_image_idx_list = []
388
+
389
+ for batch_idx, _images in enumerate(images_list):
390
+ if _images is None or len(_images) == 0:
391
+ new_images_list.append([])
392
+ image_sizes_list.append([])
393
+ tgt_sizes_list.append([])
394
+ temporal_ids_list.append([])
395
+ skip_image_idx_list.append([])
396
+ continue
397
+ if not valid_images(_images):
398
+ raise ValueError(
399
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
400
+ "torch.Tensor, tf.Tensor or jax.ndarray."
401
+ )
402
+
403
+ _images = [self.to_pil_image(image).convert("RGB") for image in _images]
404
+ input_data_format = infer_channel_dimension_format(np.array(_images[0]))
405
+
406
+ new_images = []
407
+ image_sizes = [image.size for image in _images]
408
+ tgt_sizes = []
409
+ tp_ids = []
410
+ skip_image_idx = []
411
+
412
+ # for image in _images:
413
+ # image_patches = self.get_sliced_images(image, max_slice_nums)
414
+ # image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
415
+ # image_patches = [
416
+ # self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
417
+ # for image in image_patches
418
+ # ]
419
+ # image_patches = [
420
+ # to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
421
+ # for image in image_patches
422
+ # ]
423
+ # for slice_image in image_patches:
424
+ # new_images.append(self.reshape_by_patch(slice_image))
425
+ # tgt_sizes.append(np.array((slice_image.shape[1] // self.patch_size, slice_image.shape[2] // self.patch_size)))
426
+
427
+ if temporal_ids is None:
428
+ # no temporal ids
429
+ for image in _images:
430
+ image_patches = self.get_sliced_images(image, max_slice_nums)
431
+ image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
432
+ image_patches = [
433
+ self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
434
+ for image in image_patches
435
+ ]
436
+ image_patches = [
437
+ to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
438
+ for image in image_patches
439
+ ]
440
+ for slice_image in image_patches:
441
+ new_images.append(self.reshape_by_patch(slice_image))
442
+ tgt_sizes.append(np.array((slice_image.shape[1] // self.patch_size, slice_image.shape[2] // self.patch_size)))
443
+
444
+ tp_ids.extend([[-1]] * len(image_patches))
445
+ else:
446
+ temporal_ids_flatten = list(chain.from_iterable(temporal_ids[batch_idx]))
447
+ assert len(temporal_ids_flatten) == len(_images)
448
+ frame_groups = []
449
+ s = 0
450
+ for group in temporal_ids[batch_idx]:
451
+ frame_groups.append(_images[s:s+len(group)])
452
+ s += len(group)
453
+
454
+ skip_start = 0
455
+ for frame_group, tp_id in zip(frame_groups, temporal_ids[batch_idx]):
456
+ image_patches_group = []
457
+ for frame in frame_group:
458
+ image_patches = self.get_sliced_images(frame, max_slice_nums)
459
+ image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
460
+ image_patches = [
461
+ self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
462
+ for image in image_patches
463
+ ]
464
+ image_patches = [
465
+ to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
466
+ for image in image_patches
467
+ ]
468
+ image_patches_group.append(image_patches)
469
+
470
+ group_cnt = len(image_patches_group[0])
471
+ for gidx in range(group_cnt):
472
+ group_images = [s[gidx] for s in image_patches_group]
473
+ tgt_sizes.extend([np.array((i.shape[1] // self.patch_size, i.shape[2] // self.patch_size)) for i in group_images])
474
+
475
+ group_images = [self.reshape_by_patch(i) for i in group_images]
476
+ new_images.extend(group_images)
477
+ tp_ids.append(tp_id)
478
+ skip_image_idx.extend(list(range(skip_start + 1, skip_start + len(frame_group))))
479
+ skip_start += len(frame_group)
480
+
481
+ if tgt_sizes:
482
+ tgt_sizes = np.vstack(tgt_sizes)
483
+
484
+ new_images_list.append(new_images)
485
+ image_sizes_list.append(image_sizes)
486
+ tgt_sizes_list.append(tgt_sizes)
487
+ temporal_ids_list.append(tp_ids)
488
+ skip_image_idx_list.append(skip_image_idx)
489
+
490
+ data = {
491
+ "pixel_values": new_images_list,
492
+ "image_sizes": image_sizes_list,
493
+ "tgt_sizes": tgt_sizes_list,
494
+ "temporal_ids": temporal_ids_list,
495
+ "skip_image_idx": skip_image_idx_list
496
+ }
497
+
498
+
499
+ return MiniCPMVBatchFeature(data=data, tensor_type=return_tensors)
500
+
501
+ AutoImageProcessor.register("MiniCPMVImageProcessor", MiniCPMVImageProcessor)
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0cab2060d439c1ec86787778fac83ba8a5f76735208218ad257382f064aa7e90
3
+ size 4852385280
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c8e628772e2b4500d018c4cfb2ce05854b816906eceda74c3a9014165f06538
3
+ size 2256571832
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_minicpmv.py ADDED
@@ -0,0 +1,461 @@
1
+ import math
2
+ from typing import List, Optional
3
+ import json
4
+ import torch
5
+ import torchvision
6
+
7
+ from threading import Thread
8
+ from copy import deepcopy
9
+ from PIL import Image
10
+ from transformers import AutoProcessor, Qwen3PreTrainedModel, Qwen3ForCausalLM, TextIteratorStreamer
11
+
12
+ from .configuration_minicpm import MiniCPMVConfig
13
+ from .modeling_navit_siglip import SiglipVisionTransformer
14
+ from .resampler import Resampler
15
+
16
+
17
+
18
+ class MiniCPMVPreTrainedModel(Qwen3PreTrainedModel):
19
+ config_class = MiniCPMVConfig
20
+
21
+
22
+ class MiniCPMV(MiniCPMVPreTrainedModel):
23
+ def __init__(self, config):
24
+ super().__init__(config)
25
+ self.llm = Qwen3ForCausalLM(config)
26
+ self.vpm = self.init_vision_module()
27
+ self.vision_dim = self.vpm.embed_dim
28
+ self.embed_dim = self.llm.config.hidden_size
29
+ self.resampler = self.init_resampler(self.embed_dim, self.vision_dim)
30
+ self.processor = None
31
+
32
+ self.terminators = ['<|im_end|>', '<|endoftext|>']
33
+
34
+ def init_vision_module(self):
35
+ # same as HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit add tgt_sizes
36
+ if self.config._attn_implementation == 'flash_attention_2':
37
+ self.config.vision_config._attn_implementation = 'flash_attention_2'
38
+ else:
39
+ # not suport sdpa
40
+ self.config.vision_config._attn_implementation = 'eager'
41
+ model = SiglipVisionTransformer(self.config.vision_config)
42
+ if self.config.drop_vision_last_layer:
43
+ model.encoder.layers = model.encoder.layers[:-1]
44
+
45
+ setattr(model, 'embed_dim', model.embeddings.embed_dim)
46
+ setattr(model, 'patch_size', model.embeddings.patch_size)
47
+
48
+ return model
49
+
50
+ def init_resampler(self, embed_dim, vision_dim):
51
+ return Resampler(
52
+ num_queries=self.config.query_num,
53
+ embed_dim=embed_dim,
54
+ num_heads=embed_dim // 128,
55
+ kv_dim=vision_dim,
56
+ adaptive=True,
57
+ batch_infer=self.config.batch_3d_resampler
58
+ )
59
+
60
+ def get_input_embeddings(self):
61
+ return self.llm.get_input_embeddings()
62
+
63
+ def set_input_embeddings(self, value):
64
+ self.llm.embed_tokens = value
65
+
66
+ def get_output_embeddings(self):
67
+ return self.llm.lm_head
68
+
69
+ def set_output_embeddings(self, new_embeddings):
70
+ self.llm.lm_head = new_embeddings
71
+
72
+ def set_decoder(self, decoder):
73
+ self.llm = decoder
74
+
75
+ def get_decoder(self):
76
+ return self.llm
77
+
78
+ def get_vllm_embedding(self, data):
79
+ if 'vision_hidden_states' not in data:
80
+ dtype = self.llm.model.embed_tokens.weight.dtype
81
+ device = self.llm.model.embed_tokens.weight.device
82
+ tgt_sizes = data['tgt_sizes']
83
+ pixel_values_list = data['pixel_values']
84
+ temporal_ids = data.get('temporal_ids', None)
85
+ vision_hidden_states = []
86
+ all_pixel_values = []
87
+ img_cnt = []
88
+ all_temporal_ids = None
89
+
90
+ for pixel_values in pixel_values_list:
91
+ img_cnt.append(len(pixel_values))
92
+ all_pixel_values.extend([i.flatten(end_dim=1).permute(1, 0) for i in pixel_values])
93
+
94
+ if temporal_ids is not None:
95
+ all_temporal_ids = []
96
+ for t in temporal_ids:
97
+ all_temporal_ids.extend(t)
98
+
99
+ # exist image
100
+ if all_pixel_values:
101
+ tgt_sizes = [tgt_size for tgt_size in tgt_sizes if isinstance(tgt_size, torch.Tensor)]
102
+ tgt_sizes = torch.vstack(tgt_sizes).type(torch.int32)
103
+
104
+ max_patches = torch.max(tgt_sizes[:, 0] * tgt_sizes[:, 1])
105
+
106
+ all_pixel_values = torch.nn.utils.rnn.pad_sequence(all_pixel_values, batch_first=True,
107
+ padding_value=0.0)
108
+ B, L, _ = all_pixel_values.shape
109
+ all_pixel_values = all_pixel_values.permute(0, 2, 1).reshape(B, 3, -1, L)
110
+
111
+ patch_attn_mask = torch.zeros((B, 1, max_patches), dtype=torch.bool, device=device)
112
+ for i in range(B):
113
+ patch_attn_mask[i, 0, :tgt_sizes[i][0] * tgt_sizes[i][1]] = True
114
+
115
+ vision_batch_size = self.config.vision_batch_size
116
+ all_pixel_values = all_pixel_values.type(dtype)
117
+ if B > vision_batch_size:
118
+ hs = []
119
+ for i in range(0, B, vision_batch_size):
120
+ start_idx = i
121
+ end_idx = i + vision_batch_size
122
+ tmp_hs = self.vpm(all_pixel_values[start_idx:end_idx], patch_attention_mask=patch_attn_mask[start_idx:end_idx], tgt_sizes=tgt_sizes[start_idx:end_idx]).last_hidden_state
123
+ hs.append(tmp_hs)
124
+ vision_embedding = torch.cat(hs, dim=0)
125
+ else:
126
+ vision_embedding = self.vpm(all_pixel_values, patch_attention_mask=patch_attn_mask, tgt_sizes=tgt_sizes).last_hidden_state
127
+ vision_embedding = self.resampler(vision_embedding, tgt_sizes, all_temporal_ids)
128
+
129
+ start = 0
130
+ for pixel_values in pixel_values_list:
131
+ img_cnt = len(pixel_values)
132
+ if img_cnt > 0:
133
+ vision_hidden_states.append(vision_embedding[start: start + img_cnt])
134
+ start += img_cnt
135
+ else:
136
+ vision_hidden_states.append([])
137
+ else: # no image
138
+ if self.training:
139
+ dummy_image = torch.zeros(
140
+ (1, 3, 224, 224),
141
+ device=device, dtype=dtype
142
+ )
143
+ tgt_sizes = torch.Tensor([[(224 // self.config.patch_size), math.ceil(224 / self.config.patch_size)]]).type(torch.int32)
144
+ dummy_feature = self.resampler(self.vpm(dummy_image).last_hidden_state, tgt_sizes)
145
+ else:
146
+ dummy_feature = []
147
+ for _ in range(len(pixel_values_list)):
148
+ vision_hidden_states.append(dummy_feature)
149
+
150
+ else:
151
+ vision_hidden_states = data['vision_hidden_states']
152
+
153
+ if hasattr(self.llm.config, 'scale_emb'):
154
+ vllm_embedding = self.llm.model.embed_tokens(data['input_ids']) * self.llm.config.scale_emb
155
+ else:
156
+ vllm_embedding = self.llm.model.embed_tokens(data['input_ids'])
157
+
158
+ vision_hidden_states = [i.type(vllm_embedding.dtype) if isinstance(
159
+ i, torch.Tensor) else i for i in vision_hidden_states]
160
+
161
+ bs = len(data['input_ids'])
162
+ device = vllm_embedding.device
163
+ embed_dim = vllm_embedding.shape[-1]
164
+
165
+ updated_vllm_embedding = torch.empty_like(vllm_embedding)
166
+
167
+ for i in range(bs):
168
+ cur_vs_hs = vision_hidden_states[i]
169
+ cur_vllm_emb = vllm_embedding[i]
170
+
171
+ if len(cur_vs_hs) == 0:
172
+ updated_vllm_embedding[i] = cur_vllm_emb
173
+ continue
174
+
175
+ cur_image_bound = data['image_bound'][i]
176
+
177
+ if len(cur_image_bound) > 0:
178
+ image_indices = torch.cat([
179
+ torch.arange(r[0], r[1], dtype=torch.long)
180
+ for r in cur_image_bound
181
+ ]).to(device)
182
+
183
+ indices_expanded = image_indices.view(-1, 1).expand(-1, embed_dim)
184
+ vision_features = cur_vs_hs.view(-1, embed_dim)
185
+
186
+ updated_emb = cur_vllm_emb.clone()
187
+ updated_emb.scatter_(0, indices_expanded, vision_features)
188
+ updated_vllm_embedding[i] = updated_emb
189
+ elif self.training:
190
+ if isinstance(cur_vs_hs, torch.Tensor) and cur_vs_hs.numel() > 0:
191
+ dummy_gradient_term = cur_vs_hs.sum() * 0.0
192
+ updated_vllm_embedding[i] = cur_vllm_emb + dummy_gradient_term
193
+ else:
194
+ updated_vllm_embedding[i] = cur_vllm_emb
195
+ else:
196
+ updated_vllm_embedding[i] = cur_vllm_emb
197
+
198
+ vllm_embedding = updated_vllm_embedding
199
+
200
+ return vllm_embedding, vision_hidden_states
201
+
202
+
203
+ def forward(self, data, **kwargs):
204
+ vllm_embedding, vision_hidden_states = self.get_vllm_embedding(data)
205
+
206
+ position_ids = data["position_ids"]
207
+ if position_ids.dtype != torch.int64:
208
+ position_ids = position_ids.long()
209
+
210
+ # compatible with llama factory
211
+ for key in ["input_ids", "inputs_embeds", "position_ids"]:
212
+ if key in kwargs:
213
+ del kwargs[key]
214
+
215
+ return self.llm(
216
+ input_ids=None,
217
+ position_ids=position_ids,
218
+ inputs_embeds=vllm_embedding,
219
+ **kwargs
220
+ )
221
+
222
+ def _decode(self, inputs_embeds, tokenizer, attention_mask, decode_text=False, **kwargs):
223
+ terminators = [tokenizer.convert_tokens_to_ids(i) for i in self.terminators]
224
+ output = self.llm.generate(
225
+ inputs_embeds=inputs_embeds,
226
+ pad_token_id=0,
227
+ eos_token_id=terminators,
228
+ attention_mask=attention_mask,
229
+ **kwargs
230
+ )
231
+ if decode_text:
232
+ return self._decode_text(output, tokenizer)
233
+ return output
234
+
235
+ def _decode_stream(self, inputs_embeds, tokenizer, **kwargs):
236
+ terminators = [tokenizer.convert_tokens_to_ids(i) for i in self.terminators]
237
+ streamer = TextIteratorStreamer(tokenizer=tokenizer)
238
+ generation_kwargs = {
239
+ 'inputs_embeds': inputs_embeds,
240
+ 'pad_token_id': 0,
241
+ 'eos_token_id': terminators,
242
+ 'streamer': streamer
243
+ }
244
+ generation_kwargs.update(kwargs)
245
+
246
+ thread = Thread(target=self.llm.generate, kwargs=generation_kwargs)
247
+ thread.start()
248
+
249
+ return streamer
250
+
251
+ def _decode_text(self, result_ids, tokenizer):
252
+ terminators = [tokenizer.convert_tokens_to_ids(i) for i in self.terminators]
253
+ result_text = []
254
+ for result in result_ids:
255
+ result = result[result != 0]
256
+ if result[0] == tokenizer.bos_id:
257
+ result = result[1:]
258
+ if result[-1] in terminators:
259
+ result = result[:-1]
260
+ result_text.append(tokenizer.decode(result).strip())
261
+ return result_text
262
+
263
+ def generate(
264
+ self,
265
+ input_ids=None,
266
+ pixel_values=None,
267
+ tgt_sizes=None,
268
+ image_bound=None,
269
+ temporal_ids=None,
270
+ attention_mask=None,
271
+ tokenizer=None,
272
+ vision_hidden_states=None,
273
+ return_vision_hidden_states=False,
274
+ stream=False,
275
+ decode_text=False,
276
+ **kwargs
277
+ ):
278
+ assert input_ids is not None
279
+ assert len(input_ids) == len(pixel_values)
280
+
281
+ model_inputs = {
282
+ "input_ids": input_ids,
283
+ "image_bound": image_bound,
284
+ "temporal_ids": temporal_ids,
285
+ }
286
+
287
+ if vision_hidden_states is None:
288
+ model_inputs["pixel_values"] = pixel_values
289
+ model_inputs['tgt_sizes'] = tgt_sizes
290
+ else:
291
+ model_inputs["vision_hidden_states"] = vision_hidden_states
292
+
293
+ with torch.inference_mode():
294
+ (
295
+ model_inputs["inputs_embeds"],
296
+ vision_hidden_states,
297
+ ) = self.get_vllm_embedding(model_inputs)
298
+
299
+ if stream:
300
+ result = self._decode_stream(model_inputs["inputs_embeds"], tokenizer, **kwargs)
301
+ else:
302
+ result = self._decode(model_inputs["inputs_embeds"], tokenizer, attention_mask, decode_text=decode_text, **kwargs)
303
+
304
+ if return_vision_hidden_states:
305
+ return result, vision_hidden_states
306
+
307
+ return result
308
+
309
+ def chat(
310
+ self,
311
+ image=None,
312
+ msgs=None,
313
+ tokenizer=None,
314
+ processor=None,
315
+ vision_hidden_states=None,
316
+ max_new_tokens=2048,
317
+ min_new_tokens=0,
318
+ sampling=True,
319
+ max_inp_length=16384,
320
+ system_prompt='',
321
+ stream=False,
322
+ max_slice_nums=None,
323
+ use_image_id=None,
324
+ temporal_ids=None,
325
+ enable_thinking=False,
326
+ **kwargs
327
+ ):
328
+ if isinstance(msgs[0], list):
329
+ batched = True
330
+ else:
331
+ batched = False
332
+ msgs_list = msgs
333
+ images_list = image
334
+
335
+ if batched is False:
336
+ images_list, msgs_list = [images_list], [msgs_list]
337
+ else:
338
+ assert images_list is None, "Please include images directly in msgs when using batch inference."
339
+ images_list = [None] * len(msgs_list)
340
+ assert len(images_list) == len(msgs_list), "The batch dim of images_list and msgs_list should be the same."
341
+
342
+ if processor is None:
343
+ if self.processor is None:
344
+ self.processor = AutoProcessor.from_pretrained(self.config._name_or_path, trust_remote_code=True)
345
+ processor = self.processor
346
+
347
+ assert self.config.query_num == processor.image_processor.image_feature_size, "These two values should be the same. Check `config.json` and `preprocessor_config.json`."
348
+ assert self.config.patch_size == processor.image_processor.patch_size, "These two values should be the same. Check `config.json` and `preprocessor_config.json`."
349
+ assert self.config.use_image_id == processor.image_processor.use_image_id, "These two values should be the same. Check `config.json` and `preprocessor_config.json`."
350
+ assert self.config.slice_config.max_slice_nums == processor.image_processor.max_slice_nums, "These two values should be the same. Check `config.json` and `preprocessor_config.json`."
351
+ assert self.config.slice_mode == processor.image_processor.slice_mode, "These two values should be the same. Check `config.json` and `preprocessor_config.json`."
352
+
353
+
354
+ prompts_lists = []
355
+ input_images_lists = []
356
+ for image, msgs in zip(images_list, msgs_list):
357
+ if isinstance(msgs, str):
358
+ msgs = json.loads(msgs)
359
+ copy_msgs = deepcopy(msgs)
360
+
361
+ assert len(msgs) > 0, "msgs is empty"
362
+ assert sampling or not stream, "if using stream mode, make sure sampling=True"
363
+
364
+ if image is not None and isinstance(copy_msgs[0]["content"], str):
365
+ copy_msgs[0]["content"] = [image, copy_msgs[0]["content"]]
366
+
367
+ images = []
368
+ for i, msg in enumerate(copy_msgs):
369
+ role = msg["role"]
370
+ content = msg["content"]
371
+ assert role in ["user", "assistant"]
372
+ if i == 0:
373
+ assert role == "user", "The role of first msg should be user"
374
+ if isinstance(content, str):
375
+ content = [content]
376
+ cur_msgs = []
377
+ for c in content:
378
+ if isinstance(c, Image.Image):
379
+ images.append(c)
380
+ cur_msgs.append("(<image>./</image>)")
381
+ elif isinstance(c, str):
382
+ cur_msgs.append(c)
383
+ msg["content"] = "\n".join(cur_msgs)
384
+
385
+ if system_prompt:
386
+ sys_msg = {'role': 'system', 'content': system_prompt}
387
+ copy_msgs = [sys_msg] + copy_msgs
388
+
389
+
390
+ prompts_lists.append(processor.tokenizer.apply_chat_template(copy_msgs, tokenize=False, add_generation_prompt=True, enable_thinking=enable_thinking))
391
+ input_images_lists.append(images)
392
+
393
+ if enable_thinking:
394
+ prefill_answer = '<think>\n'
395
+ else:
396
+ prefill_answer = ''
397
+
398
+ inputs = processor(
399
+ prompts_lists,
400
+ input_images_lists,
401
+ max_slice_nums=max_slice_nums,
402
+ use_image_id=use_image_id,
403
+ temporal_ids=temporal_ids,
404
+ return_tensors="pt",
405
+ max_length=max_inp_length
406
+ ).to(self.device)
407
+
408
+ if sampling:
409
+ generation_config = {
410
+ "temperature": 0.7,
411
+ "do_sample": True,
412
+ }
413
+ if not enable_thinking:
414
+ generation_config.update(
415
+ {
416
+ "top_p": 0.8,
417
+ "top_k": 100,
418
+ "repetition_penalty": 1.03
419
+ }
420
+ )
421
+ else:
422
+ generation_config = {
423
+ "num_beams": 3,
424
+ "repetition_penalty": 1.2,
425
+ }
426
+
427
+ if min_new_tokens > 0:
428
+ generation_config['min_new_tokens'] = min_new_tokens
429
+
430
+ generation_config.update(
431
+ (k, kwargs[k]) for k in generation_config.keys() & kwargs.keys()
432
+ )
433
+
434
+ inputs.pop("image_sizes")
435
+ with torch.inference_mode():
436
+ res = self.generate(
437
+ **inputs,
438
+ tokenizer=tokenizer,
439
+ max_new_tokens=max_new_tokens,
440
+ vision_hidden_states=vision_hidden_states,
441
+ stream=stream,
442
+ decode_text=True,
443
+ **generation_config
444
+ )
445
+
446
+ if stream:
447
+ def stream_gen():
448
+ for text in prefill_answer:
449
+ yield text
450
+ for text in res:
451
+ for term in self.terminators:
452
+ text = text.replace(term, '')
453
+ yield text
454
+ return stream_gen()
455
+
456
+ else:
457
+ if batched:
458
+ answer = [prefill_answer + i if prefill_answer else i for i in res]
459
+ else:
460
+ answer = prefill_answer + res[0] if prefill_answer else res[0]
461
+ return answer
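For reference, a minimal usage sketch of the `chat()` interface defined above. The repo id, image path, dtype and device placement are illustrative assumptions, not part of this commit:

```python
# Minimal sketch of calling chat(); repo id, image path, dtype and device are assumptions.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-V-4_5"  # placeholder repo id
model = AutoModel.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": [image, "Describe this image."]}]

# enable_thinking=True switches to deep thinking and prefixes the answer with "<think>\n"
answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer, enable_thinking=False)
print(answer)
```

For batch inference, pass a list of message lists and keep `image=None`, as enforced by the asserts at the top of `chat()`.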
modeling_navit_siglip.py ADDED
@@ -0,0 +1,937 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 Google AI and The HuggingFace Team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """ PyTorch Siglip model. """
16
+ # Copied from HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit, with tgt_sizes support added
17
+
18
+
19
+ import os
20
+ import math
21
+ import warnings
22
+ from dataclasses import dataclass
23
+ from typing import Any, Optional, Tuple, Union
24
+
25
+ import numpy as np
26
+ import torch
27
+ import torch.nn.functional as F
28
+ import torch.utils.checkpoint
29
+ from torch import nn
30
+ from torch.nn.init import _calculate_fan_in_and_fan_out
31
+
32
+ from transformers.activations import ACT2FN
33
+ from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask
34
+ from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling
35
+ from transformers.modeling_utils import PreTrainedModel
36
+ from transformers.configuration_utils import PretrainedConfig
37
+ from transformers.utils import (
38
+ ModelOutput,
39
+ add_start_docstrings,
40
+ add_start_docstrings_to_model_forward,
41
+ is_flash_attn_2_available,
42
+ logging,
43
+ replace_return_docstrings,
44
+ )
45
+ from transformers.utils import logging
46
+
47
+ logger = logging.get_logger(__name__)
48
+
49
+ class SiglipVisionConfig(PretrainedConfig):
50
+ r"""
51
+ This is the configuration class to store the configuration of a [`SiglipVisionModel`]. It is used to instantiate a
52
+ Siglip vision encoder according to the specified arguments, defining the model architecture. Instantiating a
53
+ configuration with the defaults will yield a similar configuration to that of the vision encoder of the Siglip
54
+ [google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) architecture.
55
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
56
+ documentation from [`PretrainedConfig`] for more information.
57
+ Args:
58
+ hidden_size (`int`, *optional*, defaults to 768):
59
+ Dimensionality of the encoder layers and the pooler layer.
60
+ intermediate_size (`int`, *optional*, defaults to 3072):
61
+ Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
62
+ num_hidden_layers (`int`, *optional*, defaults to 12):
63
+ Number of hidden layers in the Transformer encoder.
64
+ num_attention_heads (`int`, *optional*, defaults to 12):
65
+ Number of attention heads for each attention layer in the Transformer encoder.
66
+ num_channels (`int`, *optional*, defaults to 3):
67
+ Number of channels in the input images.
68
+ image_size (`int`, *optional*, defaults to 224):
69
+ The size (resolution) of each image.
70
+ patch_size (`int`, *optional*, defaults to 16):
71
+ The size (resolution) of each patch.
72
+ hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
73
+ The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
74
+ `"relu"`, `"selu"` and `"gelu_new"` ``"quick_gelu"` are supported.
75
+ layer_norm_eps (`float`, *optional*, defaults to 1e-06):
76
+ The epsilon used by the layer normalization layers.
77
+ attention_dropout (`float`, *optional*, defaults to 0.0):
78
+ The dropout ratio for the attention probabilities.
79
+ Example:
80
+ ```python
81
+ >>> from transformers import SiglipVisionConfig, SiglipVisionModel
82
+ >>> # Initializing a SiglipVisionConfig with google/siglip-base-patch16-224 style configuration
83
+ >>> configuration = SiglipVisionConfig()
84
+ >>> # Initializing a SiglipVisionModel (with random weights) from the google/siglip-base-patch16-224 style configuration
85
+ >>> model = SiglipVisionModel(configuration)
86
+ >>> # Accessing the model configuration
87
+ >>> configuration = model.config
88
+ ```"""
89
+
90
+ model_type = "siglip_vision_model"
91
+
92
+ def __init__(
93
+ self,
94
+ hidden_size=768,
95
+ intermediate_size=3072,
96
+ num_hidden_layers=12,
97
+ num_attention_heads=12,
98
+ num_channels=3,
99
+ image_size=224,
100
+ patch_size=16,
101
+ hidden_act="gelu_pytorch_tanh",
102
+ layer_norm_eps=1e-6,
103
+ attention_dropout=0.0,
104
+ **kwargs,
105
+ ):
106
+ super().__init__(**kwargs)
107
+
108
+ self.hidden_size = hidden_size
109
+ self.intermediate_size = intermediate_size
110
+ self.num_hidden_layers = num_hidden_layers
111
+ self.num_attention_heads = num_attention_heads
112
+ self.num_channels = num_channels
113
+ self.patch_size = patch_size
114
+ self.image_size = image_size
115
+ self.attention_dropout = attention_dropout
116
+ self.layer_norm_eps = layer_norm_eps
117
+ self.hidden_act = hidden_act
118
+
119
+ @classmethod
120
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
121
+ cls._set_token_in_kwargs(kwargs)
122
+
123
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
124
+
125
+ # get the vision config dict if we are loading from SiglipConfig
126
+ if config_dict.get("model_type") == "siglip":
127
+ config_dict = config_dict["vision_config"]
128
+
129
+ if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
130
+ logger.warning(
131
+ f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
132
+ f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
133
+ )
134
+
135
+ return cls.from_dict(config_dict, **kwargs)
136
+
137
+
138
+ _CHECKPOINT_FOR_DOC = "google/siglip-base-patch16-224"
139
+
140
+ SIGLIP_PRETRAINED_MODEL_ARCHIVE_LIST = [
141
+ "google/siglip-base-patch16-224",
142
+ # See all SigLIP models at https://huggingface.co/models?filter=siglip
143
+ ]
144
+
145
+ if is_flash_attn_2_available():
146
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
147
+ from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
148
+
149
+
150
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
151
+ def _get_unpad_data(attention_mask):
152
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
153
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
154
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
155
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
156
+ return (
157
+ indices,
158
+ cu_seqlens,
159
+ max_seqlen_in_batch,
160
+ )
161
+
162
+
163
+ def _trunc_normal_(tensor, mean, std, a, b):
164
+ # Cut & paste from PyTorch official master until it's in a few official releases - RW
165
+ # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
166
+ def norm_cdf(x):
167
+ # Computes standard normal cumulative distribution function
168
+ return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0
169
+
170
+ if (mean < a - 2 * std) or (mean > b + 2 * std):
171
+ warnings.warn(
172
+ "mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
173
+ "The distribution of values may be incorrect.",
174
+ stacklevel=2,
175
+ )
176
+
177
+ # Values are generated by using a truncated uniform distribution and
178
+ # then using the inverse CDF for the normal distribution.
179
+ # Get upper and lower cdf values
180
+ l = norm_cdf((a - mean) / std)
181
+ u = norm_cdf((b - mean) / std)
182
+
183
+ # Uniformly fill tensor with values from [l, u], then translate to
184
+ # [2l-1, 2u-1].
185
+ tensor.uniform_(2 * l - 1, 2 * u - 1)
186
+
187
+ # Use inverse cdf transform for normal distribution to get truncated
188
+ # standard normal
189
+ if tensor.dtype in [torch.float16, torch.bfloat16]:
190
+ # The `erfinv_` op is not (yet?) defined in float16+cpu, bfloat16+gpu
191
+ og_dtype = tensor.dtype
192
+ tensor = tensor.to(torch.float32)
193
+ tensor.erfinv_()
194
+ tensor = tensor.to(og_dtype)
195
+ else:
196
+ tensor.erfinv_()
197
+
198
+ # Transform to proper mean, std
199
+ tensor.mul_(std * math.sqrt(2.0))
200
+ tensor.add_(mean)
201
+
202
+ # Clamp to ensure it's in the proper range
203
+ if tensor.dtype == torch.float16:
204
+ # The `clamp_` op is not (yet?) defined in float16+cpu
205
+ tensor = tensor.to(torch.float32)
206
+ tensor.clamp_(min=a, max=b)
207
+ tensor = tensor.to(torch.float16)
208
+ else:
209
+ tensor.clamp_(min=a, max=b)
210
+
211
+
212
+ def trunc_normal_tf_(
213
+ tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0
214
+ ) -> torch.Tensor:
215
+ """Fills the input Tensor with values drawn from a truncated
216
+ normal distribution. The values are effectively drawn from the
217
+ normal distribution :math:`\\mathcal{N}(\\text{mean}, \\text{std}^2)`
218
+ with values outside :math:`[a, b]` redrawn until they are within
219
+ the bounds. The method used for generating the random values works
220
+ best when :math:`a \\leq \\text{mean} \\leq b`.
221
+ NOTE: this 'tf' variant behaves closer to Tensorflow / JAX impl where the
222
+ bounds [a, b] are applied when sampling the normal distribution with mean=0, std=1.0
223
+ and the result is subsquently scaled and shifted by the mean and std args.
224
+ Args:
225
+ tensor: an n-dimensional `torch.Tensor`
226
+ mean: the mean of the normal distribution
227
+ std: the standard deviation of the normal distribution
228
+ a: the minimum cutoff value
229
+ b: the maximum cutoff value
230
+ """
231
+ with torch.no_grad():
232
+ _trunc_normal_(tensor, 0, 1.0, a, b)
233
+ tensor.mul_(std).add_(mean)
234
+
235
+
236
+ def variance_scaling_(tensor, scale=1.0, mode="fan_in", distribution="normal"):
237
+ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
238
+ if mode == "fan_in":
239
+ denom = fan_in
240
+ elif mode == "fan_out":
241
+ denom = fan_out
242
+ elif mode == "fan_avg":
243
+ denom = (fan_in + fan_out) / 2
244
+
245
+ variance = scale / denom
246
+
247
+ if distribution == "truncated_normal":
248
+ # constant is stddev of standard normal truncated to (-2, 2)
249
+ trunc_normal_tf_(tensor, std=math.sqrt(variance) / 0.87962566103423978)
250
+ elif distribution == "normal":
251
+ with torch.no_grad():
252
+ tensor.normal_(std=math.sqrt(variance))
253
+ elif distribution == "uniform":
254
+ bound = math.sqrt(3 * variance)
255
+ with torch.no_grad():
256
+ tensor.uniform_(-bound, bound)
257
+ else:
258
+ raise ValueError(f"invalid distribution {distribution}")
259
+
260
+
261
+ def lecun_normal_(tensor):
262
+ variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal")
263
+
264
+
265
+ def default_flax_embed_init(tensor):
266
+ variance_scaling_(tensor, mode="fan_in", distribution="normal")
267
+
268
+
269
+ @dataclass
270
+ # Copied from transformers.models.clip.modeling_clip.CLIPVisionModelOutput with CLIP->Siglip
271
+ class SiglipVisionModelOutput(ModelOutput):
272
+ """
273
+ Base class for vision model's outputs that also contains image embeddings of the pooling of the last hidden states.
274
+ Args:
275
+ image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)`, *optional*, returned when model is initialized with `with_projection=True`):
276
+ The image embeddings obtained by applying the projection layer to the pooler_output.
277
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
278
+ Sequence of hidden-states at the output of the last layer of the model.
279
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
280
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
281
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
282
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
283
+ attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
284
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
285
+ sequence_length)`.
286
+ Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
287
+ heads.
288
+ """
289
+
290
+ image_embeds: Optional[torch.FloatTensor] = None
291
+ last_hidden_state: torch.FloatTensor = None
292
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
293
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
294
+
295
+
296
+ class SiglipVisionEmbeddings(nn.Module):
297
+ def __init__(self, config: SiglipVisionConfig):
298
+ super().__init__()
299
+ self.config = config
300
+ self.embed_dim = config.hidden_size
301
+ self.image_size = config.image_size
302
+ self.patch_size = config.patch_size
303
+
304
+ self.patch_embedding = nn.Conv2d(
305
+ in_channels=config.num_channels,
306
+ out_channels=self.embed_dim,
307
+ kernel_size=self.patch_size,
308
+ stride=self.patch_size,
309
+ padding="valid",
310
+ )
311
+
312
+ self.num_patches_per_side = self.image_size // self.patch_size
313
+ self.num_patches = self.num_patches_per_side**2
314
+ self.num_positions = self.num_patches
315
+ self.position_embedding = nn.Embedding(self.num_positions, self.embed_dim)
316
+
317
+ def forward(self, pixel_values: torch.FloatTensor, patch_attention_mask: torch.BoolTensor, tgt_sizes: Optional[torch.IntTensor]=None) -> torch.Tensor:
318
+ batch_size = pixel_values.size(0)
319
+
320
+ patch_embeds = self.patch_embedding(pixel_values)
321
+ embeddings = patch_embeds.flatten(2).transpose(1, 2)
322
+
323
+ max_im_h, max_im_w = pixel_values.size(2), pixel_values.size(3)
324
+ max_nb_patches_h, max_nb_patches_w = max_im_h // self.patch_size, max_im_w // self.patch_size
325
+ boundaries = torch.arange(1 / self.num_patches_per_side, 1.0, 1 / self.num_patches_per_side)
326
+ position_ids = torch.full(
327
+ size=(
328
+ batch_size,
329
+ max_nb_patches_h * max_nb_patches_w,
330
+ ),
331
+ fill_value=0,
332
+ )
333
+
334
+ for batch_idx, p_attn_mask in enumerate(patch_attention_mask):
335
+ if tgt_sizes is not None:
336
+ nb_patches_h = tgt_sizes[batch_idx][0]
337
+ nb_patches_w = tgt_sizes[batch_idx][1]
338
+ else:
339
+ nb_patches_h = p_attn_mask[:, 0].sum()
340
+ nb_patches_w = p_attn_mask[0].sum()
341
+
342
+ fractional_coords_h = torch.arange(0, 1 - 1e-6, 1 / nb_patches_h)
343
+ fractional_coords_w = torch.arange(0, 1 - 1e-6, 1 / nb_patches_w)
344
+
345
+ bucket_coords_h = torch.bucketize(fractional_coords_h, boundaries, right=True)
346
+ bucket_coords_w = torch.bucketize(fractional_coords_w, boundaries, right=True)
347
+
348
+ pos_ids = (bucket_coords_h[:, None] * self.num_patches_per_side + bucket_coords_w).flatten()
349
+ position_ids[batch_idx][p_attn_mask.view(-1).cpu()] = pos_ids
350
+
351
+ position_ids = position_ids.to(self.position_embedding.weight.device)
352
+
353
+ embeddings = embeddings + self.position_embedding(position_ids)
354
+ return embeddings
355
+
356
+
357
+ class SiglipAttention(nn.Module):
358
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
359
+
360
+ # Copied from transformers.models.clip.modeling_clip.CLIPAttention.__init__
361
+ def __init__(self, config):
362
+ super().__init__()
363
+ self.config = config
364
+ self.embed_dim = config.hidden_size
365
+ self.num_heads = config.num_attention_heads
366
+ self.head_dim = self.embed_dim // self.num_heads
367
+ if self.head_dim * self.num_heads != self.embed_dim:
368
+ raise ValueError(
369
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
370
+ f" {self.num_heads})."
371
+ )
372
+ self.scale = self.head_dim**-0.5
373
+ self.dropout = config.attention_dropout
374
+
375
+ self.k_proj = nn.Linear(self.embed_dim, self.embed_dim)
376
+ self.v_proj = nn.Linear(self.embed_dim, self.embed_dim)
377
+ self.q_proj = nn.Linear(self.embed_dim, self.embed_dim)
378
+ self.out_proj = nn.Linear(self.embed_dim, self.embed_dim)
379
+
380
+ def forward(
381
+ self,
382
+ hidden_states: torch.Tensor,
383
+ attention_mask: Optional[torch.Tensor] = None,
384
+ output_attentions: Optional[bool] = False,
385
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
386
+ """Input shape: Batch x Time x Channel"""
387
+
388
+ batch_size, q_len, _ = hidden_states.size()
389
+
390
+ query_states = self.q_proj(hidden_states)
391
+ key_states = self.k_proj(hidden_states)
392
+ value_states = self.v_proj(hidden_states)
393
+
394
+ query_states = query_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
395
+ key_states = key_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
396
+ value_states = value_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
397
+
398
+ k_v_seq_len = key_states.shape[-2]
399
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) * self.scale
400
+
401
+ if attn_weights.size() != (batch_size, self.num_heads, q_len, k_v_seq_len):
402
+ raise ValueError(
403
+ f"Attention weights should be of size {(batch_size, self.num_heads, q_len, k_v_seq_len)}, but is"
404
+ f" {attn_weights.size()}"
405
+ )
406
+
407
+ if attention_mask is not None:
408
+ if attention_mask.size() != (batch_size, 1, q_len, k_v_seq_len):
409
+ raise ValueError(
410
+ f"Attention mask should be of size {(batch_size, 1, q_len, k_v_seq_len)}, but is {attention_mask.size()}"
411
+ )
412
+ attn_weights = attn_weights + attention_mask
413
+
414
+ # upcast attention to fp32
415
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
416
+ attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
417
+ attn_output = torch.matmul(attn_weights, value_states)
418
+
419
+ if attn_output.size() != (batch_size, self.num_heads, q_len, self.head_dim):
420
+ raise ValueError(
421
+ f"`attn_output` should be of size {(batch_size, self.num_heads, q_len, self.head_dim)}, but is"
422
+ f" {attn_output.size()}"
423
+ )
424
+
425
+ attn_output = attn_output.transpose(1, 2).contiguous()
426
+ attn_output = attn_output.reshape(batch_size, q_len, self.embed_dim)
427
+
428
+ attn_output = self.out_proj(attn_output)
429
+
430
+ return attn_output, attn_weights
431
+
432
+
433
+ class SiglipFlashAttention2(SiglipAttention):
434
+ """
435
+ Llama flash attention module. This module inherits from `LlamaAttention` as the weights of the module stays
436
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
437
+ flash attention and deal with padding tokens in case the input contains any of them.
438
+ """
439
+
440
+ def __init__(self, *args, **kwargs):
441
+ super().__init__(*args, **kwargs)
442
+ self.is_causal = False # Hack to make sure we don't use a causal mask
443
+
444
+ def forward(
445
+ self,
446
+ hidden_states: torch.Tensor,
447
+ attention_mask: Optional[torch.LongTensor] = None,
448
+ position_ids: Optional[torch.LongTensor] = None,
449
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
450
+ output_attentions: bool = False,
451
+ use_cache: bool = False,
452
+ **kwargs,
453
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
454
+ output_attentions = False
455
+
456
+ bsz, q_len, _ = hidden_states.size()
457
+
458
+ query_states = self.q_proj(hidden_states)
459
+ key_states = self.k_proj(hidden_states)
460
+ value_states = self.v_proj(hidden_states)
461
+
462
+ # Flash attention requires the input to have the shape
463
+ # batch_size x seq_length x head_dim x hidden_dim
464
+ # therefore we just need to keep the original shape
465
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
466
+ key_states = key_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
467
+ value_states = value_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
468
+
469
+ kv_seq_len = key_states.shape[-2]
470
+ if past_key_value is not None:
471
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
472
+ # cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
473
+ # query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
474
+
475
+ # if past_key_value is not None:
476
+ # cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
477
+ # key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
478
+
479
+ # TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
480
+ # to be able to avoid many of these transpose/reshape/view.
481
+ query_states = query_states.transpose(1, 2)
482
+ key_states = key_states.transpose(1, 2)
483
+ value_states = value_states.transpose(1, 2)
484
+
485
+ dropout_rate = self.dropout if self.training else 0.0
486
+
487
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
488
+ # therefore the input hidden states gets silently casted in float32. Hence, we need
489
+ # cast them back in the correct dtype just to be sure everything works as expected.
490
+ # This might slowdown training & inference so it is recommended to not cast the LayerNorms
491
+ # in fp32. (LlamaRMSNorm handles it correctly)
492
+
493
+ input_dtype = query_states.dtype
494
+ if input_dtype == torch.float32:
495
+ if torch.is_autocast_enabled():
496
+ target_dtype = torch.get_autocast_gpu_dtype()
497
+ # Handle the case where the model is quantized
498
+ elif hasattr(self.config, "_pre_quantization_dtype"):
499
+ target_dtype = self.config._pre_quantization_dtype
500
+ else:
501
+ target_dtype = self.q_proj.weight.dtype
502
+
503
+ logger.warning_once(
504
+ "The input hidden states seems to be silently casted in float32, this might be related to the fact"
505
+ " you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
506
+ f" {target_dtype}."
507
+ )
508
+
509
+ query_states = query_states.to(target_dtype)
510
+ key_states = key_states.to(target_dtype)
511
+ value_states = value_states.to(target_dtype)
512
+
513
+ attn_output = self._flash_attention_forward(
514
+ query_states, key_states, value_states, attention_mask, q_len, dropout=dropout_rate
515
+ )
516
+
517
+ attn_output = attn_output.reshape(bsz, q_len, self.embed_dim).contiguous()
518
+ attn_output = self.out_proj(attn_output)
519
+
520
+ if not output_attentions:
521
+ attn_weights = None
522
+
523
+ return attn_output, attn_weights
524
+
525
+ def _flash_attention_forward(
526
+ self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
527
+ ):
528
+ """
529
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token
530
+ first unpad the input, then computes the attention scores and pad the final attention scores.
531
+ Args:
532
+ query_states (`torch.Tensor`):
533
+ Input query states to be passed to Flash Attention API
534
+ key_states (`torch.Tensor`):
535
+ Input key states to be passed to Flash Attention API
536
+ value_states (`torch.Tensor`):
537
+ Input value states to be passed to Flash Attention API
538
+ attention_mask (`torch.Tensor`):
539
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
540
+ position of padding tokens and 1 for the position of non-padding tokens.
541
+ dropout (`int`, *optional*):
542
+ Attention dropout
543
+ softmax_scale (`float`, *optional*):
544
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
545
+ """
546
+
547
+ # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
548
+ causal = self.is_causal and query_length != 1
549
+
550
+ # Contains at least one padding token in the sequence
551
+ if attention_mask is not None:
552
+ batch_size = query_states.shape[0]
553
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
554
+ query_states, key_states, value_states, attention_mask, query_length
555
+ )
556
+
557
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
558
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
559
+
560
+ attn_output_unpad = flash_attn_varlen_func(
561
+ query_states,
562
+ key_states,
563
+ value_states,
564
+ cu_seqlens_q=cu_seqlens_q,
565
+ cu_seqlens_k=cu_seqlens_k,
566
+ max_seqlen_q=max_seqlen_in_batch_q,
567
+ max_seqlen_k=max_seqlen_in_batch_k,
568
+ dropout_p=dropout,
569
+ softmax_scale=softmax_scale,
570
+ causal=causal,
571
+ )
572
+
573
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
574
+ else:
575
+ attn_output = flash_attn_func(
576
+ query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
577
+ )
578
+
579
+ return attn_output
580
+
581
+ def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
582
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
583
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
584
+
585
+ key_layer = index_first_axis(
586
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
587
+ )
588
+ value_layer = index_first_axis(
589
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
590
+ )
591
+ if query_length == kv_seq_len:
592
+ query_layer = index_first_axis(
593
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
594
+ )
595
+ cu_seqlens_q = cu_seqlens_k
596
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
597
+ indices_q = indices_k
598
+ elif query_length == 1:
599
+ max_seqlen_in_batch_q = 1
600
+ cu_seqlens_q = torch.arange(
601
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
602
+ ) # There is a memcpy here, that is very bad.
603
+ indices_q = cu_seqlens_q[:-1]
604
+ query_layer = query_layer.squeeze(1)
605
+ else:
606
+ # The -q_len: slice assumes left padding.
607
+ attention_mask = attention_mask[:, -query_length:]
608
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
609
+
610
+ return (
611
+ query_layer,
612
+ key_layer,
613
+ value_layer,
614
+ indices_q,
615
+ (cu_seqlens_q, cu_seqlens_k),
616
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
617
+ )
618
+
619
+
620
+ # Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->Siglip
621
+ class SiglipMLP(nn.Module):
622
+ def __init__(self, config):
623
+ super().__init__()
624
+ self.config = config
625
+ self.activation_fn = ACT2FN[config.hidden_act]
626
+ self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
627
+ self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
628
+
629
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
630
+ hidden_states = self.fc1(hidden_states)
631
+ hidden_states = self.activation_fn(hidden_states)
632
+ hidden_states = self.fc2(hidden_states)
633
+ return hidden_states
634
+
635
+
636
+ # Copied from transformers.models.clip.modeling_clip.CLIPEncoderLayer with CLIP->Siglip
637
+ class SiglipEncoderLayer(nn.Module):
638
+ def __init__(self, config: SiglipVisionConfig):
639
+ super().__init__()
640
+ self.embed_dim = config.hidden_size
641
+ self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
642
+ self.self_attn = (
643
+ SiglipAttention(config)
644
+ if not self._use_flash_attention_2
645
+ else SiglipFlashAttention2(config)
646
+ )
647
+ self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
648
+ self.mlp = SiglipMLP(config)
649
+ self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
650
+
651
+ def forward(
652
+ self,
653
+ hidden_states: torch.Tensor,
654
+ attention_mask: torch.Tensor,
655
+ output_attentions: Optional[bool] = False,
656
+ ) -> Tuple[torch.FloatTensor]:
657
+ """
658
+ Args:
659
+ hidden_states (`torch.FloatTensor`):
660
+ Input to the layer of shape `(batch, seq_len, embed_dim)`.
661
+ attention_mask (`torch.FloatTensor`):
662
+ Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negative values.
663
+ output_attentions (`bool`, *optional*, defaults to `False`):
664
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
665
+ returned tensors for more detail.
666
+ """
667
+ residual = hidden_states
668
+
669
+ hidden_states = self.layer_norm1(hidden_states)
670
+ hidden_states, attn_weights = self.self_attn(
671
+ hidden_states=hidden_states,
672
+ attention_mask=attention_mask,
673
+ output_attentions=output_attentions,
674
+ )
675
+ hidden_states = residual + hidden_states
676
+
677
+ residual = hidden_states
678
+ hidden_states = self.layer_norm2(hidden_states)
679
+ hidden_states = self.mlp(hidden_states)
680
+ hidden_states = residual + hidden_states
681
+
682
+ outputs = (hidden_states,)
683
+
684
+ if output_attentions:
685
+ outputs += (attn_weights,)
686
+
687
+ return outputs
688
+
689
+
690
+ class SiglipPreTrainedModel(PreTrainedModel):
691
+ """
692
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
693
+ models.
694
+ """
695
+
696
+ config_class = SiglipVisionConfig
697
+ base_model_prefix = "siglip"
698
+ supports_gradient_checkpointing = True
699
+
700
+ def _init_weights(self, module):
701
+ """Initialize the weights"""
702
+
703
+ if isinstance(module, SiglipVisionEmbeddings):
704
+ width = self.config.hidden_size
705
+ nn.init.normal_(module.position_embedding.weight, std=1 / np.sqrt(width))
706
+ elif isinstance(module, nn.Embedding):
707
+ default_flax_embed_init(module.weight)
708
+ elif isinstance(module, SiglipAttention):
709
+ nn.init.normal_(module.q_proj.weight)
710
+ nn.init.normal_(module.k_proj.weight)
711
+ nn.init.normal_(module.v_proj.weight)
712
+ nn.init.normal_(module.out_proj.weight)
713
+ nn.init.zeros_(module.q_proj.bias)
714
+ nn.init.zeros_(module.k_proj.bias)
715
+ nn.init.zeros_(module.v_proj.bias)
716
+ nn.init.zeros_(module.out_proj.bias)
717
+ elif isinstance(module, SiglipMLP):
718
+ nn.init.normal_(module.fc1.weight)
719
+ nn.init.normal_(module.fc2.weight)
720
+ nn.init.normal_(module.fc1.bias, std=1e-6)
721
+ nn.init.normal_(module.fc2.bias, std=1e-6)
722
+ elif isinstance(module, (nn.Linear, nn.Conv2d)):
723
+ lecun_normal_(module.weight)
724
+ if module.bias is not None:
725
+ nn.init.zeros_(module.bias)
726
+ elif isinstance(module, nn.LayerNorm):
727
+ module.bias.data.zero_()
728
+ module.weight.data.fill_(1.0)
729
+
730
+
731
+ SIGLIP_START_DOCSTRING = r"""
732
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
733
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
734
+ etc.)
735
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
736
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
737
+ and behavior.
738
+ Parameters:
739
+ config ([`SiglipVisionConfig`]): Model configuration class with all the parameters of the model.
740
+ Initializing with a config file does not load the weights associated with the model, only the
741
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
742
+ """
743
+
744
+
745
+ SIGLIP_VISION_INPUTS_DOCSTRING = r"""
746
+ Args:
747
+ pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
748
+ Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
749
+ [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details.
750
+ output_attentions (`bool`, *optional*):
751
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
752
+ tensors for more detail.
753
+ output_hidden_states (`bool`, *optional*):
754
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
755
+ more detail.
756
+ return_dict (`bool`, *optional*):
757
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
758
+ """
759
+
760
+
761
+ # Copied from transformers.models.clip.modeling_clip.CLIPEncoder with CLIP->Siglip
762
+ class SiglipEncoder(nn.Module):
763
+ """
764
+ Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
765
+ [`SiglipEncoderLayer`].
766
+ Args:
767
+ config: SiglipConfig
768
+ """
769
+
770
+ def __init__(self, config: SiglipVisionConfig):
771
+ super().__init__()
772
+ self.config = config
773
+ self.layers = nn.ModuleList([SiglipEncoderLayer(config) for _ in range(config.num_hidden_layers)])
774
+ self.gradient_checkpointing = False
775
+
776
+ # Ignore copy
777
+ def forward(
778
+ self,
779
+ inputs_embeds,
780
+ attention_mask: Optional[torch.Tensor] = None,
781
+ output_attentions: Optional[bool] = None,
782
+ output_hidden_states: Optional[bool] = None,
783
+ return_dict: Optional[bool] = None,
784
+ ) -> Union[Tuple, BaseModelOutput]:
785
+ r"""
786
+ Args:
787
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
788
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
789
+ This is useful if you want more control over how to convert `input_ids` indices into associated vectors
790
+ than the model's internal embedding lookup matrix.
791
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
792
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
793
+ - 1 for tokens that are **not masked**,
794
+ - 0 for tokens that are **masked**.
795
+ [What are attention masks?](../glossary#attention-mask)
796
+ output_attentions (`bool`, *optional*):
797
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
798
+ returned tensors for more detail.
799
+ output_hidden_states (`bool`, *optional*):
800
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
801
+ for more detail.
802
+ return_dict (`bool`, *optional*):
803
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
804
+ """
805
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
806
+ output_hidden_states = (
807
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
808
+ )
809
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
810
+
811
+ encoder_states = () if output_hidden_states else None
812
+ all_attentions = () if output_attentions else None
813
+
814
+ hidden_states = inputs_embeds
815
+ for encoder_layer in self.layers:
816
+ if output_hidden_states:
817
+ encoder_states = encoder_states + (hidden_states,)
818
+ if self.gradient_checkpointing and self.training:
819
+ layer_outputs = self._gradient_checkpointing_func(
820
+ encoder_layer.__call__,
821
+ hidden_states,
822
+ attention_mask,
823
+ output_attentions,
824
+ )
825
+ else:
826
+ layer_outputs = encoder_layer(
827
+ hidden_states,
828
+ attention_mask,
829
+ output_attentions=output_attentions,
830
+ )
831
+
832
+ hidden_states = layer_outputs[0]
833
+
834
+ if output_attentions:
835
+ all_attentions = all_attentions + (layer_outputs[1],)
836
+
837
+ if output_hidden_states:
838
+ encoder_states = encoder_states + (hidden_states,)
839
+
840
+ if not return_dict:
841
+ return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
842
+ return BaseModelOutput(
843
+ last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
844
+ )
845
+
846
+ @add_start_docstrings(
847
+ """The vision model from SigLIP without any head or projection on top.""",
848
+ SIGLIP_START_DOCSTRING
849
+ )
850
+ class SiglipVisionTransformer(SiglipPreTrainedModel):
851
+ config_class = SiglipVisionConfig
852
+ main_input_name = "pixel_values"
853
+ _supports_flash_attn_2 = True
854
+
855
+ def __init__(self, config: SiglipVisionConfig):
856
+ super().__init__(config)
857
+ self.config = config
858
+ embed_dim = config.hidden_size
859
+
860
+ self.embeddings = SiglipVisionEmbeddings(config)
861
+ self.encoder = SiglipEncoder(config)
862
+ self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
863
+ self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
864
+
865
+ # Initialize weights and apply final processing
866
+ self.post_init()
867
+
868
+ def get_input_embeddings(self) -> nn.Module:
869
+ return self.embeddings.patch_embedding
870
+
871
+ @add_start_docstrings_to_model_forward(SIGLIP_VISION_INPUTS_DOCSTRING)
872
+ @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=SiglipVisionConfig)
873
+ def forward(
874
+ self,
875
+ pixel_values,
876
+ patch_attention_mask: Optional[torch.BoolTensor] = None,
877
+ tgt_sizes: Optional[torch.IntTensor] = None,
878
+ output_attentions: Optional[bool] = None,
879
+ output_hidden_states: Optional[bool] = None,
880
+ return_dict: Optional[bool] = None,
881
+ ) -> Union[Tuple, BaseModelOutputWithPooling]:
882
+ r"""
883
+ Returns:
884
+ """
885
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
886
+ output_hidden_states = (
887
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
888
+ )
889
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
890
+
891
+ batch_size = pixel_values.size(0)
892
+ if patch_attention_mask is None:
893
+ patch_attention_mask = torch.ones(
894
+ size=(
895
+ batch_size,
896
+ pixel_values.size(2) // self.config.patch_size,
897
+ pixel_values.size(3) // self.config.patch_size,
898
+ ),
899
+ dtype=torch.bool,
900
+ device=pixel_values.device,
901
+ )
902
+
903
+ hidden_states = self.embeddings(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask, tgt_sizes=tgt_sizes)
904
+
905
+ patch_attention_mask = patch_attention_mask.view(batch_size, -1)
906
+ # The call to `_upad_input` in `_flash_attention_forward` is expensive
907
+ # So when the `patch_attention_mask` is full of 1s (i.e. attending to the whole sequence),
908
+ # avoiding passing the attention_mask, which is equivalent to attending to the full sequence
909
+ if not torch.any(~patch_attention_mask):
910
+ attention_mask = None
911
+ else:
912
+ attention_mask = (
913
+ _prepare_4d_attention_mask(patch_attention_mask, hidden_states.dtype)
914
+ if not self._use_flash_attention_2
915
+ else patch_attention_mask
916
+ )
917
+
918
+ encoder_outputs = self.encoder(
919
+ inputs_embeds=hidden_states,
920
+ attention_mask=attention_mask,
921
+ output_attentions=output_attentions,
922
+ output_hidden_states=output_hidden_states,
923
+ return_dict=return_dict,
924
+ )
925
+
926
+ last_hidden_state = encoder_outputs[0]
927
+ last_hidden_state = self.post_layernorm(last_hidden_state)
928
+
929
+ if not return_dict:
930
+ return (last_hidden_state, None) + encoder_outputs[1:]
931
+
932
+ return BaseModelOutputWithPooling(
933
+ last_hidden_state=last_hidden_state,
934
+ pooler_output=None,
935
+ hidden_states=encoder_outputs.hidden_states,
936
+ attentions=encoder_outputs.attentions,
937
+ )
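The vision tower above can also be exercised on its own. The sketch below assumes this file is importable locally and uses an illustrative 224x224 configuration; in the full model it is instead driven through `get_vllm_embedding` with per-image `tgt_sizes` and a patch attention mask.

```python
# Standalone sketch of SiglipVisionTransformer; config values and shapes are illustrative.
import torch
from modeling_navit_siglip import SiglipVisionConfig, SiglipVisionTransformer

config = SiglipVisionConfig(hidden_size=768, image_size=224, patch_size=16)
vision_model = SiglipVisionTransformer(config).eval()

pixel_values = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
with torch.no_grad():
    # patch_attention_mask defaults to all ones and tgt_sizes is optional
    out = vision_model(pixel_values)
print(out.last_hidden_state.shape)  # torch.Size([1, 196, 768]) -> (224 // 16) ** 2 patches
```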
preprocessor_config.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "image_processor_type": "MiniCPMVImageProcessor",
3
+ "auto_map": {
4
+ "AutoProcessor": "processing_minicpmv.MiniCPMVProcessor",
5
+ "AutoImageProcessor": "image_processing_minicpmv.MiniCPMVImageProcessor"
6
+ },
7
+ "processor_class": "MiniCPMVProcessor",
8
+ "max_slice_nums": 9,
9
+ "scale_resolution": 448,
10
+ "patch_size": 14,
11
+ "use_image_id": true,
12
+ "image_feature_size": 64,
13
+ "im_start": "<image>",
14
+ "im_end": "</image>",
15
+ "slice_start": "<slice>",
16
+ "slice_end": "</slice>",
17
+ "unk": "<unk>",
18
+ "im_id_start": "<image_id>",
19
+ "im_id_end": "</image_id>",
20
+ "slice_mode": true,
21
+ "norm_mean": [0.5, 0.5, 0.5],
22
+ "norm_std": [0.5, 0.5, 0.5],
23
+ "version": 2.6
24
+ }
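Several of these fields are cross-checked against `config.json` by the asserts in `chat()` above. A quick way to verify a checkout stays consistent (the repo id is a placeholder):

```python
# Consistency check between preprocessor_config.json and config.json; repo id is a placeholder.
from transformers import AutoConfig, AutoProcessor

model_id = "openbmb/MiniCPM-V-4_5"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

assert config.patch_size == processor.image_processor.patch_size          # 14
assert config.query_num == processor.image_processor.image_feature_size   # 64
assert config.slice_mode == processor.image_processor.slice_mode          # true
```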
processing_minicpmv.py ADDED
@@ -0,0 +1,255 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Processor class for MiniCPMV.
17
+ """
18
+
19
+ from typing import List, Optional, Union, Dict, Any
20
+ import torch
21
+ import re
22
+
23
+ from transformers.image_processing_utils import BatchFeature
24
+ from transformers.image_utils import ImageInput
25
+ from transformers.processing_utils import ProcessorMixin
26
+ from transformers.tokenization_utils_base import PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
27
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
28
+
29
+ from .image_processing_minicpmv import MiniCPMVBatchFeature
30
+
31
+
32
+ class MiniCPMVProcessor(ProcessorMixin):
33
+ r"""
34
+ Constructs a MiniCPMV processor which wraps a MiniCPMV image processor and a MiniCPMV tokenizer into a single processor.
35
+
36
+ [`MiniCPMVProcessor`] offers all the functionalities of [`MiniCPMVImageProcessor`] and [`LlamaTokenizerWrapper`]. See the
37
+ [`~MiniCPMVProcessor.__call__`] and [`~MiniCPMVProcessor.decode`] for more information.
38
+
39
+ Args:
40
+ image_processor ([`MiniCPMVImageProcessor`], *optional*):
41
+ The image processor is a required input.
42
+ tokenizer ([`LlamaTokenizerWrapper`], *optional*):
43
+ The tokenizer is a required input.
44
+ """
45
+ attributes = ["image_processor", "tokenizer"]
46
+ image_processor_class = "AutoImageProcessor"
47
+ tokenizer_class = "AutoTokenizer"
48
+
49
+ def __init__(self, image_processor=None, tokenizer=None):
50
+ super().__init__(image_processor, tokenizer)
51
+ self.version = image_processor.version
52
+
53
+ def __call__(
54
+ self,
55
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
56
+ images: ImageInput = None,
57
+ max_length: Optional[int] = None,
58
+ do_pad: Optional[bool] = True,
59
+ max_slice_nums: int = None,
60
+ use_image_id: bool = None,
61
+ temporal_ids: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
62
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
63
+ **kwargs
64
+ ) -> MiniCPMVBatchFeature:
65
+
66
+ if images is not None:
67
+ # image_inputs = self.image_processor(images, do_pad=do_pad, max_slice_nums=max_slice_nums, return_tensors=return_tensors)
68
+ image_inputs = self.image_processor(images, do_pad=do_pad, max_slice_nums=max_slice_nums, temporal_ids=temporal_ids, return_tensors=return_tensors)
69
+ # return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, **kwargs)
70
+ return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, temporal_ids=temporal_ids, **kwargs)
71
+
72
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Llama
73
+ def batch_decode(self, *args, **kwargs):
74
+ """
75
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
76
+ refer to the docstring of this method for more information.
77
+ """
78
+ output_ids = args[0]
79
+ result_text = []
80
+ for result in output_ids:
81
+ result = result[result != 0]
82
+            if result[0] == self.tokenizer.bos_id:
+                result = result[1:]
+            if result[-1] == self.tokenizer.eos_id:
+                result = result[:-1]
+            result_text.append(self.tokenizer.decode(result, *args[1:], **kwargs).strip())
+        return result_text
+        # return self.tokenizer.batch_decode(*args, **kwargs)
+
+    # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Llama
+    def decode(self, *args, **kwargs):
+        """
+        This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
+        the docstring of this method for more information.
+        """
+        result = args[0]
+        result = result[result != 0]
+        if result[0] == self.tokenizer.bos_id:
+            result = result[1:]
+        if result[-1] == self.tokenizer.eos_id or (hasattr(self.tokenizer, "eot_id") and result[-1] == self.tokenizer.eot_id):
+            result = result[:-1]
+        return self.tokenizer.decode(result, *args[1:], **kwargs).strip()
+
+    def _convert(
+        self, input_str, max_inp_length: Optional[int] = None
+    ):
+        if self.version > 2.5 or not getattr(self.tokenizer, "add_bos_token", False):
+            input_ids = self.tokenizer.encode(input_str)
+        else:
+            input_ids = [self.tokenizer.bos_id] + self.tokenizer.encode(input_str)
+        if max_inp_length is not None:
+            input_ids = input_ids[:max_inp_length]
+        input_ids = torch.tensor(input_ids, dtype=torch.int32)
+
+        start_cond = (input_ids == self.tokenizer.im_start_id) | (input_ids == self.tokenizer.slice_start_id)
+        end_cond = (input_ids == self.tokenizer.im_end_id) | (input_ids == self.tokenizer.slice_end_id)
+
+        image_start_tokens = torch.where(start_cond)[0]
+        image_start_tokens += 1
+        image_end_tokens = torch.where(end_cond)[0]
+
+        valid_image_nums = max(len(image_start_tokens), len(image_end_tokens))
+
+        image_bounds = torch.hstack(
+            [
+                image_start_tokens[:valid_image_nums].unsqueeze(-1),
+                image_end_tokens[:valid_image_nums].unsqueeze(-1),
+            ]
+        )
+        return input_ids, image_bounds
+
+    def _convert_images_texts_to_inputs(
+        self,
+        images,
+        texts: Union[str, List[str]],
+        truncation=None,
+        max_length=None,
+        max_slice_nums=None,
+        use_image_id=None,
+        return_tensors=None,
+        **kwargs
+    ):
+        if images is None or not len(images):
+            model_inputs = self.tokenizer(texts, return_tensors=return_tensors, truncation=truncation, max_length=max_length, **kwargs)
+            return MiniCPMVBatchFeature(data={**model_inputs})
+
+        pattern = "(<image>./</image>)"
+        # images, image_sizes, tgt_sizes = images["pixel_values"], images["image_sizes"], images["tgt_sizes"]
+        images, image_sizes, tgt_sizes, temporal_ids, skip_image_idx = images["pixel_values"], images["image_sizes"], images["tgt_sizes"], images["temporal_ids"], images["skip_image_idx"]
+
+        if isinstance(texts, str):
+            texts = [texts]
+        input_ids_list = []
+        image_bounds_list = []
+        for index, (text, skip_idx) in enumerate(zip(texts, skip_image_idx)):
+            image_tags = re.findall(pattern, text)
+            assert len(image_tags) == len(image_sizes[index])
+            text_chunks = text.split(pattern)
+            final_text = ""
+
+            for i in range(len(image_tags)):
+                if i in skip_idx:
+                    image_placeholder = ''
+                    text_chunk = text_chunks[i].strip()
+
+                else:
+                    image_placeholder = self.image_processor.get_slice_image_placeholder(
+                        image_sizes[index][i],
+                        i,
+                        max_slice_nums,
+                        use_image_id
+                    )
+                    text_chunk = text_chunks[i]
+
+                final_text = final_text + text_chunk + image_placeholder
+
+            final_text += text_chunks[-1]
+
+            input_ids, image_bounds = self._convert(final_text, max_length)
+            input_ids_list.append(input_ids)
+            image_bounds_list.append(image_bounds)
+        padded_input_ids, padding_lengths = self.pad(
+            input_ids_list,
+            padding_side="left"
+        )
+        for i, length in enumerate(padding_lengths):
+            image_bounds_list[i] = image_bounds_list[i] + length
+        attention_mask = padded_input_ids.ne(0)
+
+        return MiniCPMVBatchFeature(data={
+            "input_ids": padded_input_ids,
+            "attention_mask": attention_mask,
+            "pixel_values": images,
+            "image_sizes": image_sizes,
+            "image_bound": image_bounds_list,
+            "tgt_sizes": tgt_sizes,
+            "temporal_ids": temporal_ids
+        })
+
+    @property
+    # Copied from transformers.models.clip.processing_clip.CLIPProcessor.model_input_names
+    def model_input_names(self):
+        tokenizer_input_names = self.tokenizer.model_input_names
+        image_processor_input_names = self.image_processor.model_input_names
+        return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
+
+
+    def pad(self, inputs, max_length=None, padding_value=0, padding_side="left"):
+        items = []
+        if isinstance(inputs[0], list):
+            assert isinstance(inputs[0][0], torch.Tensor)
+            for it in inputs:
+                for tr in it:
+                    items.append(tr)
+        else:
+            assert isinstance(inputs[0], torch.Tensor)
+            items = inputs
+
+        batch_size = len(items)
+        shape = items[0].shape
+        dim = len(shape)
+        assert dim <= 2
+        if max_length is None:
+            max_length = 0
+        max_length = max(max_length, max(item.shape[-1] for item in items))
+        min_length = min(item.shape[-1] for item in items)
+        dtype = items[0].dtype
+
+        if dim == 0:
+            return torch.stack([item for item in items], dim=0), [0]
+        elif dim == 1:
+            if max_length == min_length:
+                return torch.stack([item for item in items], dim=0), [0] * batch_size
+            tensor = torch.zeros((batch_size, max_length), dtype=dtype) + padding_value
+        else:
+            tensor = (
+                torch.zeros((batch_size, max_length, shape[-1]), dtype=dtype)
+                + padding_value
+            )
+
+        padding_length = []
+        for i, item in enumerate(items):
+            if dim == 1:
+                if padding_side == "left":
+                    tensor[i, -len(item) :] = item.clone()
+                else:
+                    tensor[i, : len(item)] = item.clone()
+            elif dim == 2:
+                if padding_side == "left":
+                    tensor[i, -len(item) :, :] = item.clone()
+                else:
+                    tensor[i, : len(item), :] = item.clone()
+            padding_length.append(tensor.shape[-1] - len(item))
+
+        return tensor, padding_length
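
The `pad` helper above left-pads every `input_ids` tensor to the batch maximum and reports how much padding each row received, and `_convert_images_texts_to_inputs` then shifts each `image_bound` by that amount so the bounds still point at the same image tokens. A toy, standalone sketch of that bookkeeping (hypothetical tensors, not real processor inputs):

```python
import torch

# Toy illustration of the left-padding scheme used by the processor above
# (hypothetical values): shorter sequences are left-padded with 0, and each
# image bound is shifted by the padding length so it still indexes the same span.
seqs = [torch.tensor([11, 12, 13, 14, 15]), torch.tensor([21, 22, 23])]
bounds = [torch.tensor([[1, 3]]), torch.tensor([[0, 2]])]

max_len = max(len(s) for s in seqs)
padded = torch.zeros((len(seqs), max_len), dtype=torch.long)
for i, s in enumerate(seqs):
    pad_len = max_len - len(s)
    padded[i, pad_len:] = s          # left padding, as in pad(..., padding_side="left")
    bounds[i] = bounds[i] + pad_len  # same shift as image_bounds_list[i] + length

print(padded.tolist())     # [[11, 12, 13, 14, 15], [0, 0, 21, 22, 23]]
print(bounds[1].tolist())  # [[2, 4]]
```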
resampler.py ADDED
@@ -0,0 +1,309 @@
+from functools import partial
+from itertools import chain
+from typing import Optional, Tuple, List
+import numpy as np
+
+import torch
+from torch import nn
+from torch.nn.init import trunc_normal_
+
+from transformers.integrations import is_deepspeed_zero3_enabled
+
+def get_2d_sincos_pos_embed(embed_dim, image_size):
+    """
+    image_size: image_size or (image_height, image_width)
+    return:
+    pos_embed: [image_height, image_width, embed_dim]
+    """
+    if isinstance(image_size, int):
+        grid_h_size, grid_w_size = image_size, image_size
+    else:
+        grid_h_size, grid_w_size = image_size[0], image_size[1]
+
+    grid_h = np.arange(grid_h_size, dtype=np.float32)
+    grid_w = np.arange(grid_w_size, dtype=np.float32)
+    grid = np.meshgrid(grid_w, grid_h)  # here w goes first
+    grid = np.stack(grid, axis=0)
+
+    pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
+    return pos_embed
+
+
+def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
+    assert embed_dim % 2 == 0
+
+    # use half of dimensions to encode grid_h
+    emb_h = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[0])  # (H, W, D/2)
+    emb_w = get_1d_sincos_pos_embed_from_grid_new(embed_dim // 2, grid[1])  # (H, W, D/2)
+
+    emb = np.concatenate([emb_h, emb_w], axis=-1)  # (H, W, D)
+    return emb
+
+
+def get_1d_sincos_pos_embed_from_grid_new(embed_dim, pos):
+    """
+    embed_dim: output dimension for each position
+    pos: a list of positions to be encoded: size (H, W)
+    out: (H, W, D)
+    """
+    assert embed_dim % 2 == 0
+    omega = np.arange(embed_dim // 2, dtype=np.float32)
+    omega /= embed_dim / 2.
+    omega = 1. / 10000 ** omega  # (D/2,)
+
+    out = np.einsum('hw,d->hwd', pos, omega)  # (H, W, D/2), outer product
+
+    emb_sin = np.sin(out)  # (H, W, D/2)
+    emb_cos = np.cos(out)  # (H, W, D/2)
+
+    emb = np.concatenate([emb_sin, emb_cos], axis=-1)  # (H, W, D)
+    return emb
+
+def get_1d_sincos_pos_embed_from_temporal_size(embed_dim, pos):
+    """
+    embed_dim: output dimension for each position
+    pos: a list of positions to be encoded: size (M,)
+    out: (M, D)
+    """
+    assert embed_dim % 2 == 0
+    omega = np.arange(embed_dim // 2, dtype=np.float32)
+    omega /= embed_dim / 2.
+    omega = 1. / 10000**omega  # (D/2,)
+
+    pos = pos.reshape(-1)  # (M,)
+    out = np.einsum('m,d->md', pos, omega)  # (M, D/2), outer product
+
+    emb_sin = np.sin(out)  # (M, D/2)
+    emb_cos = np.cos(out)  # (M, D/2)
+
+    emb = np.concatenate([emb_sin, emb_cos], axis=1)  # (M, D)
+    return emb
+
+
+class Resampler(nn.Module):
+    """
+    A 2D perceiver-resampler network with one cross attention layers by
+        given learnable queries and 2d sincos pos_emb
+    Outputs:
+        A tensor with the shape of (batch_size, num_queries, embed_dim)
+    """
+
+    def __init__(
+        self,
+        num_queries,
+        embed_dim,
+        num_heads,
+        kv_dim=None,
+        norm_layer=partial(nn.LayerNorm, eps=1e-6),
+        adaptive=False,
+        max_size=(70, 70),
+        max_temporal_size=72000,
+        batch_infer=False
+    ):
+        super().__init__()
+        self.num_queries = num_queries
+        self.embed_dim = embed_dim
+        self.num_heads = num_heads
+        self.adaptive = adaptive
+        self.max_size = max_size
+        self.max_temporal_size = max_temporal_size
+        self.batch_infer = batch_infer
+
+        self.query = nn.Parameter(torch.zeros(self.num_queries, embed_dim))
+        trunc_normal_(self.query, std=.02)
+
+        if kv_dim is not None and kv_dim != embed_dim:
+            self.kv_proj = nn.Linear(kv_dim, embed_dim, bias=False)
+        else:
+            self.kv_proj = nn.Identity()
+
+        self.attn = nn.MultiheadAttention(embed_dim, num_heads)
+        self.ln_q = norm_layer(embed_dim)
+        self.ln_kv = norm_layer(embed_dim)
+
+        self.ln_post = norm_layer(embed_dim)
+        self.proj = nn.Parameter((embed_dim ** -0.5) * torch.randn(embed_dim, embed_dim))
+
+        self._set_2d_pos_cache(self.max_size)
+        self._set_temporal_pos_cache(self.max_temporal_size)
+        self.apply(self._init_weights)
+
+    def _set_2d_pos_cache(self, max_size, device='cpu'):
+        if is_deepspeed_zero3_enabled():
+            device='cuda'
+        pos_embed = torch.from_numpy(get_2d_sincos_pos_embed(self.embed_dim, max_size)).float().to(device)
+        self.register_buffer("pos_embed", pos_embed, persistent=False)
+
+    def _adjust_pos_cache(self, tgt_sizes, device):
+        max_h = torch.max(tgt_sizes[:, 0])
+        max_w = torch.max(tgt_sizes[:, 1])
+        if max_h > self.max_size[0] or max_w > self.max_size[1]:
+            self.max_size = [max(max_h, self.max_size[0]), max(max_w, self.max_size[1])]
+            self._set_2d_pos_cache(self.max_size, device)
+
+    def _set_temporal_pos_cache(self, max_temporal_size, device='cpu'):
+        temporal_size = np.arange(max_temporal_size, dtype=np.float32)
+        pos_embed = torch.from_numpy(get_1d_sincos_pos_embed_from_temporal_size(self.embed_dim, temporal_size)).float().to(device)
+        self.register_buffer("temporal_pos_embed", pos_embed, persistent=False)
+
+    def _adjust_temporal_pos_cache(self, max_temporal_size, device):
+        if max_temporal_size > self.max_temporal_size:
+            self.max_temporal_size = max_temporal_size
+            self._set_temporal_pos_cache(self.max_temporal_size, device)
+
+    def _init_weights(self, m):
+        if isinstance(m, nn.Linear):
+            trunc_normal_(m.weight, std=.02)
+            if isinstance(m, nn.Linear) and m.bias is not None:
+                nn.init.constant_(m.bias, 0)
+        elif isinstance(m, nn.LayerNorm):
+            nn.init.constant_(m.bias, 0)
+            nn.init.constant_(m.weight, 1.0)
+
+    def forward(self, x, tgt_sizes=None, temporal_ids=None):
+        assert x.shape[0] == tgt_sizes.shape[0]
+        bs = x.shape[0]
+
+        device = x.device
+        dtype = x.dtype
+
+        patch_len = tgt_sizes[:, 0] * tgt_sizes[:, 1]
+
+        self._adjust_pos_cache(tgt_sizes, device=device)
+
+        temporal_pos_emb = False
+        temporal_ids_flatten = None
+        if temporal_ids is not None:
+            # example: [[-1], [-1], [2, 6, 9]]
+            temporal_ids_flatten = list(chain.from_iterable(temporal_ids))
+            max_temporal_size = max(temporal_ids_flatten) + 1
+            if max_temporal_size > -1:
+                temporal_pos_emb = True
+                if max_temporal_size > self.max_temporal_size:
+                    self._adjust_temporal_pos_cache(max_temporal_size, device)
+
+
+        max_patch_len = torch.max(patch_len)
+        key_padding_mask = torch.zeros((bs, max_patch_len), dtype=torch.bool, device=device)
+
+        pos_embed = []
+        for i in range(bs):
+            tgt_h, tgt_w = tgt_sizes[i]
+            pos_embed.append(self.pos_embed[:tgt_h, :tgt_w, :].reshape((tgt_h * tgt_w, -1)).to(dtype))  # patches * D
+            key_padding_mask[i, patch_len[i]:] = True
+
+        pos_embed = torch.nn.utils.rnn.pad_sequence(
+            pos_embed, batch_first=True, padding_value=0.0).permute(1, 0, 2)  # BLD => L * B * D
+
+        x = self.kv_proj(x)  # B * L * D
+        x = self.ln_kv(x).permute(1, 0, 2)  # L * B * D
+
+        q = self.ln_q(self.query)  # Q * D
+
+        pos_embed_2d = []
+        pos_embed_temporal = []
+        for i in range(bs):
+            tgt_h, tgt_w = tgt_sizes[i]
+            if temporal_pos_emb:
+                if temporal_ids_flatten[i] == -1:
+                    pos_embed_temporal.append(torch.zeros(self.embed_dim, dtype=dtype, device=device))
+                else:
+                    pos_embed_temporal.append(self.temporal_pos_embed[temporal_ids_flatten[i]].to(dtype))  # D
+
+            pos_embed_2d.append(self.pos_embed[:tgt_h, :tgt_w, :].reshape((tgt_h * tgt_w, -1)).to(dtype))  # patches * D
+            key_padding_mask[i, patch_len[i]:] = True
+
+        pos_embed_2d = torch.nn.utils.rnn.pad_sequence(
+            pos_embed_2d, batch_first=True, padding_value=0.0).permute(1, 0, 2)  # BLD => L * B * D
+
+        v = x
+        k = x + pos_embed_2d
+
+        if self.batch_infer:
+            out = self.batch_attn_forward(q, k, v, pos_embed_temporal, temporal_ids, key_padding_mask)
+        else:  # save gpu memory
+            out = self.foreach_attn_forward(q, k, v, pos_embed_temporal, temporal_ids, key_padding_mask)
+
+        # out: Q * B * D
+        x = out.permute(1, 0, 2)  # B * Q * D
+
+        x = self.ln_post(x)
+        x = x @ self.proj
+        return x
+
+
+    def _repeat(self, query, N: int):
+        return query.unsqueeze(1).repeat(1, N, 1)
+
+
+    def batch_attn_forward(self, q, k, v, pos_embed_temporal, temporal_ids, key_padding_mask):
+        bs = k.shape[0]
+
+        if pos_embed_temporal:
+            # fold the temporal dimension
+            # temporal embedding
+            k += torch.stack(pos_embed_temporal, dim=0)
+            bs = len(temporal_ids)
+            merge_k = []
+            merge_v = []
+            merge_key_padding_mask = []
+
+            start = 0
+            for tp in temporal_ids:
+                end = start + len(tp)
+                # # L * (end-start) * D -> (end-start) * L * D -> 1 * L*(end-start) * D
+                merge_k.append(k[:, start: end, :].permute(1, 0, 2).reshape(-1, self.embed_dim))
+                merge_v.append(v[:, start: end, :].permute(1, 0, 2).reshape(-1, self.embed_dim))
+                merge_key_padding_mask.append(key_padding_mask[start: end, :].reshape(-1, 1))
+
+                start = end
+
+            k = torch.nn.utils.rnn.pad_sequence(merge_k, batch_first=True, padding_value=0.0).permute(1, 0, 2)  # L*(end-start)
+            v = torch.nn.utils.rnn.pad_sequence(merge_v, batch_first=True, padding_value=0.0).permute(1, 0, 2)  # L*(end-start)
+            key_padding_mask = torch.nn.utils.rnn.pad_sequence(merge_key_padding_mask, batch_first=True, padding_value=True).squeeze(-1)
+
+        out = self.attn(
+            self._repeat(q, bs),  # Q * B * D
+            k,  # L * B * D + L * B * D
+            v,
+            key_padding_mask=key_padding_mask)[0]
+
+        return out
+
+
+    def foreach_attn_forward(self, q, k, v, pos_embed_temporal, temporal_ids, key_padding_mask):
+        bs = k.shape[0]
+
+        if pos_embed_temporal:
+            k += torch.stack(pos_embed_temporal, dim=0)
+            # bs = len(temporal_ids)
+            out_list = []
+
+            start = 0
+            for tp in temporal_ids:
+                end = start + len(tp)
+                # process each sequence without padding
+                curr_k = k[:, start:end, :].reshape(-1, self.embed_dim)
+                curr_v = v[:, start:end, :].reshape(-1, self.embed_dim)
+                curr_key_padding_mask = key_padding_mask[start: end, :].reshape(-1)
+                curr_out = self.attn(
+                    q,
+                    curr_k,
+                    curr_v,
+                    key_padding_mask=curr_key_padding_mask,
+                )[0]
+
+                out_list.append(curr_out)
+                start = end
+
+            # merge the results of all sequences
+            out = torch.stack(out_list, dim=1)
+
+        else:
+            out = self.attn(
+                self._repeat(q, bs),  # Q * B * D
+                k,  # L * B * D + L * B * D
+                v,
+                key_padding_mask=key_padding_mask)[0]
+
+        return out
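
For orientation, here is a minimal, self-contained sketch of exercising the resampler in isolation. The sizes are illustrative only (the shipped model uses a much larger embedding dimension and query count), `resampler.py` above is assumed to be importable, and `temporal_ids` entries of `-1` mark frames/images without a temporal position, matching the example comment inside `forward`.

```python
import torch
from resampler import Resampler  # assumes the resampler.py above is on the Python path

# Hypothetical small configuration, just to illustrate tensor shapes.
resampler = Resampler(num_queries=4, embed_dim=64, num_heads=4,
                      kv_dim=32, max_temporal_size=16)

# Two "images": 2x3 and 2x2 patch grids, padded to the longer length (6 patches).
tgt_sizes = torch.tensor([[2, 3], [2, 2]], dtype=torch.int32)
x = torch.randn(2, 6, 32)    # B x max_patches x kv_dim
temporal_ids = [[-1], [-1]]  # -1 marks still images (no temporal position)

with torch.no_grad():
    out = resampler(x, tgt_sizes=tgt_sizes, temporal_ids=temporal_ids)
print(out.shape)  # torch.Size([2, 4, 64]) -> B x num_queries x embed_dim
```

Each input, regardless of how many visual patches it has, is compressed into the same fixed number of query tokens, which is what keeps the LLM-side sequence length constant.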
special_tokens_map.json ADDED
@@ -0,0 +1,578 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ {
4
+ "content": "<unk>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false
9
+ },
10
+ {
11
+ "content": "<image>",
12
+ "lstrip": false,
13
+ "normalized": false,
14
+ "rstrip": false,
15
+ "single_word": false
16
+ },
17
+ {
18
+ "content": "</image>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ {
25
+ "content": "<ref>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ },
31
+ {
32
+ "content": "</ref>",
33
+ "lstrip": false,
34
+ "normalized": false,
35
+ "rstrip": false,
36
+ "single_word": false
37
+ },
38
+ {
39
+ "content": "<box>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": false,
43
+ "single_word": false
44
+ },
45
+ {
46
+ "content": "</box>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false
51
+ },
52
+ {
53
+ "content": "<quad>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false
58
+ },
59
+ {
60
+ "content": "</quad>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false
65
+ },
66
+ {
67
+ "content": "<point>",
68
+ "lstrip": false,
69
+ "normalized": false,
70
+ "rstrip": false,
71
+ "single_word": false
72
+ },
73
+ {
74
+ "content": "</point>",
75
+ "lstrip": false,
76
+ "normalized": false,
77
+ "rstrip": false,
78
+ "single_word": false
79
+ },
80
+ {
81
+ "content": "<slice>",
82
+ "lstrip": false,
83
+ "normalized": false,
84
+ "rstrip": false,
85
+ "single_word": false
86
+ },
87
+ {
88
+ "content": "</slice>",
89
+ "lstrip": false,
90
+ "normalized": false,
91
+ "rstrip": false,
92
+ "single_word": false
93
+ },
94
+ {
95
+ "content": "<image_id>",
96
+ "lstrip": false,
97
+ "normalized": false,
98
+ "rstrip": false,
99
+ "single_word": false
100
+ },
101
+ {
102
+ "content": "</image_id>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false
107
+ },
108
+ {
109
+ "content": "<unit>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false
114
+ },
115
+ {
116
+ "content": "</unit>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false
121
+ },
122
+ {
123
+ "content": "<|reserved_0|>",
124
+ "lstrip": false,
125
+ "normalized": false,
126
+ "rstrip": false,
127
+ "single_word": false
128
+ },
129
+ {
130
+ "content": "<|reserved_1|>",
131
+ "lstrip": false,
132
+ "normalized": false,
133
+ "rstrip": false,
134
+ "single_word": false
135
+ },
136
+ {
137
+ "content": "<|reserved_2|>",
138
+ "lstrip": false,
139
+ "normalized": false,
140
+ "rstrip": false,
141
+ "single_word": false
142
+ },
143
+ {
144
+ "content": "<|reserved_3|>",
145
+ "lstrip": false,
146
+ "normalized": false,
147
+ "rstrip": false,
148
+ "single_word": false
149
+ },
150
+ {
151
+ "content": "<|reserved_4|>",
152
+ "lstrip": false,
153
+ "normalized": false,
154
+ "rstrip": false,
155
+ "single_word": false
156
+ },
157
+ {
158
+ "content": "<|reserved_5|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false
163
+ },
164
+ {
165
+ "content": "<|reserved_6|>",
166
+ "lstrip": false,
167
+ "normalized": false,
168
+ "rstrip": false,
169
+ "single_word": false
170
+ },
171
+ {
172
+ "content": "<|reserved_7|>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false
177
+ },
178
+ {
179
+ "content": "<|reserved_8|>",
180
+ "lstrip": false,
181
+ "normalized": false,
182
+ "rstrip": false,
183
+ "single_word": false
184
+ },
185
+ {
186
+ "content": "<|reserved_9|>",
187
+ "lstrip": false,
188
+ "normalized": false,
189
+ "rstrip": false,
190
+ "single_word": false
191
+ },
192
+ {
193
+ "content": "<|reserved_10|>",
194
+ "lstrip": false,
195
+ "normalized": false,
196
+ "rstrip": false,
197
+ "single_word": false
198
+ },
199
+ {
200
+ "content": "<|reserved_11|>",
201
+ "lstrip": false,
202
+ "normalized": false,
203
+ "rstrip": false,
204
+ "single_word": false
205
+ },
206
+ {
207
+ "content": "<|reserved_12|>",
208
+ "lstrip": false,
209
+ "normalized": false,
210
+ "rstrip": false,
211
+ "single_word": false
212
+ },
213
+ {
214
+ "content": "<|reserved_13|>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false
219
+ },
220
+ {
221
+ "content": "<|reserved_14|>",
222
+ "lstrip": false,
223
+ "normalized": false,
224
+ "rstrip": false,
225
+ "single_word": false
226
+ },
227
+ {
228
+ "content": "<|reserved_15|>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false
233
+ },
234
+ {
235
+ "content": "<|reserved_16|>",
236
+ "lstrip": false,
237
+ "normalized": false,
238
+ "rstrip": false,
239
+ "single_word": false
240
+ },
241
+ {
242
+ "content": "<|reserved_17|>",
243
+ "lstrip": false,
244
+ "normalized": false,
245
+ "rstrip": false,
246
+ "single_word": false
247
+ },
248
+ {
249
+ "content": "<|reserved_18|>",
250
+ "lstrip": false,
251
+ "normalized": false,
252
+ "rstrip": false,
253
+ "single_word": false
254
+ },
255
+ {
256
+ "content": "<|reserved_19|>",
257
+ "lstrip": false,
258
+ "normalized": false,
259
+ "rstrip": false,
260
+ "single_word": false
261
+ },
262
+ {
263
+ "content": "<|reserved_20|>",
264
+ "lstrip": false,
265
+ "normalized": false,
266
+ "rstrip": false,
267
+ "single_word": false
268
+ },
269
+ {
270
+ "content": "<|reserved_21|>",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false
275
+ },
276
+ {
277
+ "content": "<|reserved_22|>",
278
+ "lstrip": false,
279
+ "normalized": false,
280
+ "rstrip": false,
281
+ "single_word": false
282
+ },
283
+ {
284
+ "content": "<|reserved_23|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false
289
+ },
290
+ {
291
+ "content": "<|reserved_24|>",
292
+ "lstrip": false,
293
+ "normalized": false,
294
+ "rstrip": false,
295
+ "single_word": false
296
+ },
297
+ {
298
+ "content": "<|reserved_25|>",
299
+ "lstrip": false,
300
+ "normalized": false,
301
+ "rstrip": false,
302
+ "single_word": false
303
+ },
304
+ {
305
+ "content": "<|reserved_26|>",
306
+ "lstrip": false,
307
+ "normalized": false,
308
+ "rstrip": false,
309
+ "single_word": false
310
+ },
311
+ {
312
+ "content": "<|reserved_27|>",
313
+ "lstrip": false,
314
+ "normalized": false,
315
+ "rstrip": false,
316
+ "single_word": false
317
+ },
318
+ {
319
+ "content": "<|reserved_28|>",
320
+ "lstrip": false,
321
+ "normalized": false,
322
+ "rstrip": false,
323
+ "single_word": false
324
+ },
325
+ {
326
+ "content": "<|reserved_29|>",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": false,
330
+ "single_word": false
331
+ },
332
+ {
333
+ "content": "<|reserved_30|>",
334
+ "lstrip": false,
335
+ "normalized": false,
336
+ "rstrip": false,
337
+ "single_word": false
338
+ },
339
+ {
340
+ "content": "<|reserved_31|>",
341
+ "lstrip": false,
342
+ "normalized": false,
343
+ "rstrip": false,
344
+ "single_word": false
345
+ },
346
+ {
347
+ "content": "<|reserved_32|>",
348
+ "lstrip": false,
349
+ "normalized": false,
350
+ "rstrip": false,
351
+ "single_word": false
352
+ },
353
+ {
354
+ "content": "<|reserved_33|>",
355
+ "lstrip": false,
356
+ "normalized": false,
357
+ "rstrip": false,
358
+ "single_word": false
359
+ },
360
+ {
361
+ "content": "<|reserved_34|>",
362
+ "lstrip": false,
363
+ "normalized": false,
364
+ "rstrip": false,
365
+ "single_word": false
366
+ },
367
+ {
368
+ "content": "<|reserved_35|>",
369
+ "lstrip": false,
370
+ "normalized": false,
371
+ "rstrip": false,
372
+ "single_word": false
373
+ },
374
+ {
375
+ "content": "<|reserved_36|>",
376
+ "lstrip": false,
377
+ "normalized": false,
378
+ "rstrip": false,
379
+ "single_word": false
380
+ },
381
+ {
382
+ "content": "<|reserved_37|>",
383
+ "lstrip": false,
384
+ "normalized": false,
385
+ "rstrip": false,
386
+ "single_word": false
387
+ },
388
+ {
389
+ "content": "<|reserved_38|>",
390
+ "lstrip": false,
391
+ "normalized": false,
392
+ "rstrip": false,
393
+ "single_word": false
394
+ },
395
+ {
396
+ "content": "<|reserved_39|>",
397
+ "lstrip": false,
398
+ "normalized": false,
399
+ "rstrip": false,
400
+ "single_word": false
401
+ },
402
+ {
403
+ "content": "<|reserved_40|>",
404
+ "lstrip": false,
405
+ "normalized": false,
406
+ "rstrip": false,
407
+ "single_word": false
408
+ },
409
+ {
410
+ "content": "<|reserved_41|>",
411
+ "lstrip": false,
412
+ "normalized": false,
413
+ "rstrip": false,
414
+ "single_word": false
415
+ },
416
+ {
417
+ "content": "<|reserved_42|>",
418
+ "lstrip": false,
419
+ "normalized": false,
420
+ "rstrip": false,
421
+ "single_word": false
422
+ },
423
+ {
424
+ "content": "<|reserved_43|>",
425
+ "lstrip": false,
426
+ "normalized": false,
427
+ "rstrip": false,
428
+ "single_word": false
429
+ },
430
+ {
431
+ "content": "<|reserved_44|>",
432
+ "lstrip": false,
433
+ "normalized": false,
434
+ "rstrip": false,
435
+ "single_word": false
436
+ },
437
+ {
438
+ "content": "<|reserved_45|>",
439
+ "lstrip": false,
440
+ "normalized": false,
441
+ "rstrip": false,
442
+ "single_word": false
443
+ },
444
+ {
445
+ "content": "<|reserved_46|>",
446
+ "lstrip": false,
447
+ "normalized": false,
448
+ "rstrip": false,
449
+ "single_word": false
450
+ },
451
+ {
452
+ "content": "<|reserved_47|>",
453
+ "lstrip": false,
454
+ "normalized": false,
455
+ "rstrip": false,
456
+ "single_word": false
457
+ },
458
+ {
459
+ "content": "<|reserved_48|>",
460
+ "lstrip": false,
461
+ "normalized": false,
462
+ "rstrip": false,
463
+ "single_word": false
464
+ },
465
+ {
466
+ "content": "<|reserved_49|>",
467
+ "lstrip": false,
468
+ "normalized": false,
469
+ "rstrip": false,
470
+ "single_word": false
471
+ },
472
+ {
473
+ "content": "<|reserved_50|>",
474
+ "lstrip": false,
475
+ "normalized": false,
476
+ "rstrip": false,
477
+ "single_word": false
478
+ },
479
+ {
480
+ "content": "<|reserved_51|>",
481
+ "lstrip": false,
482
+ "normalized": false,
483
+ "rstrip": false,
484
+ "single_word": false
485
+ },
486
+ {
487
+ "content": "<|reserved_52|>",
488
+ "lstrip": false,
489
+ "normalized": false,
490
+ "rstrip": false,
491
+ "single_word": false
492
+ },
493
+ {
494
+ "content": "<|reserved_53|>",
495
+ "lstrip": false,
496
+ "normalized": false,
497
+ "rstrip": false,
498
+ "single_word": false
499
+ },
500
+ {
501
+ "content": "<|reserved_54|>",
502
+ "lstrip": false,
503
+ "normalized": false,
504
+ "rstrip": false,
505
+ "single_word": false
506
+ },
507
+ {
508
+ "content": "<|reserved_55|>",
509
+ "lstrip": false,
510
+ "normalized": false,
511
+ "rstrip": false,
512
+ "single_word": false
513
+ },
514
+ {
515
+ "content": "<|reserved_56|>",
516
+ "lstrip": false,
517
+ "normalized": false,
518
+ "rstrip": false,
519
+ "single_word": false
520
+ },
521
+ {
522
+ "content": "<|reserved_57|>",
523
+ "lstrip": false,
524
+ "normalized": false,
525
+ "rstrip": false,
526
+ "single_word": false
527
+ },
528
+ {
529
+ "content": "<|reserved_58|>",
530
+ "lstrip": false,
531
+ "normalized": false,
532
+ "rstrip": false,
533
+ "single_word": false
534
+ },
535
+ {
536
+ "content": "<|reserved_59|>",
537
+ "lstrip": false,
538
+ "normalized": false,
539
+ "rstrip": false,
540
+ "single_word": false
541
+ },
542
+ {
543
+ "content": "<|reserved_60|>",
544
+ "lstrip": false,
545
+ "normalized": false,
546
+ "rstrip": false,
547
+ "single_word": false
548
+ },
549
+ {
550
+ "content": "<|reserved_61|>",
551
+ "lstrip": false,
552
+ "normalized": false,
553
+ "rstrip": false,
554
+ "single_word": false
555
+ },
556
+ {
557
+ "content": "<|reserved_62|>",
558
+ "lstrip": false,
559
+ "normalized": false,
560
+ "rstrip": false,
561
+ "single_word": false
562
+ }
563
+ ],
564
+ "eos_token": {
565
+ "content": "<|im_end|>",
566
+ "lstrip": false,
567
+ "normalized": false,
568
+ "rstrip": false,
569
+ "single_word": false
570
+ },
571
+ "pad_token": {
572
+ "content": "<|endoftext|>",
573
+ "lstrip": false,
574
+ "normalized": false,
575
+ "rstrip": false,
576
+ "single_word": false
577
+ }
578
+ }
tokenization_minicpmv_fast.py ADDED
@@ -0,0 +1,66 @@
+from transformers import Qwen2TokenizerFast
+
+
+class MiniCPMVTokenizerFast(Qwen2TokenizerFast):
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.im_start = "<image>"
+        self.im_end = "</image>"
+        self.ref_start = "<ref>"
+        self.ref_end = "</ref>"
+        self.box_start = "<box>"
+        self.box_end = "</box>"
+        self.quad_start = "<quad>"
+        self.quad_end = "</quad>"
+        self.slice_start = "<slice>"
+        self.slice_end = "</slice>"
+        self.im_id_start = "<image_id>"
+        self.im_id_end = "</image_id>"
+
+    @property
+    def eos_id(self):
+        return self.eos_token_id
+
+    @property
+    def bos_id(self):
+        return self.bos_token_id
+
+    @property
+    def unk_id(self):
+        return self.unk_token_id
+
+    @property
+    def im_start_id(self):
+        return self.convert_tokens_to_ids(self.im_start)
+
+    @property
+    def im_end_id(self):
+        return self.convert_tokens_to_ids(self.im_end)
+
+    @property
+    def slice_start_id(self):
+        return self.convert_tokens_to_ids(self.slice_start)
+
+    @property
+    def slice_end_id(self):
+        return self.convert_tokens_to_ids(self.slice_end)
+
+    @property
+    def im_id_start_id(self):
+        return self.convert_tokens_to_ids(self.im_id_start)
+
+    @property
+    def im_id_end_id(self):
+        return self.convert_tokens_to_ids(self.im_id_end)
+
+    @property
+    def newline_id(self):
+        return self.convert_tokens_to_ids('\n')
+
+    @staticmethod
+    def escape(text: str) -> str:
+        return text
+
+    @staticmethod
+    def unescape(text: str) -> str:
+        return text
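
Together with the `auto_map` entry in `tokenizer_config.json` below, this subclass is what `AutoTokenizer` returns when the repository is loaded with remote code enabled. A hedged sketch (the repo id is assumed to be the one this commit belongs to):

```python
from transformers import AutoTokenizer

# Assumed repo id; trust_remote_code is needed so auto_map resolves to
# MiniCPMVTokenizerFast defined in tokenization_minicpmv_fast.py above.
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-4_5", trust_remote_code=True)

print(type(tokenizer).__name__)                    # MiniCPMVTokenizerFast
print(tokenizer.im_start_id, tokenizer.im_end_id)  # ids of <image> / </image>
print(tokenizer.slice_start_id, tokenizer.slice_end_id)
```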
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5a94a2c3913b8aa2175fffb5fd6cf4301958f323d06475bfd91037c13bdd74b
+size 11437868
tokenizer_config.json ADDED
@@ -0,0 +1,953 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "128244": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151643": {
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151644": {
22
+ "content": "<|im_start|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151645": {
30
+ "content": "<|im_end|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151646": {
38
+ "content": "<|object_ref_start|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151647": {
46
+ "content": "<|object_ref_end|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151648": {
54
+ "content": "<|box_start|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151649": {
62
+ "content": "<|box_end|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151650": {
70
+ "content": "<|quad_start|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151651": {
78
+ "content": "<|quad_end|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151652": {
86
+ "content": "<|vision_start|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151653": {
94
+ "content": "<|vision_end|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151654": {
102
+ "content": "<|vision_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151655": {
110
+ "content": "<|image_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151656": {
118
+ "content": "<|video_pad|>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": true
124
+ },
125
+ "151657": {
126
+ "content": "<tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151658": {
134
+ "content": "</tool_call>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151659": {
142
+ "content": "<|fim_prefix|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151660": {
150
+ "content": "<|fim_middle|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151661": {
158
+ "content": "<|fim_suffix|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151662": {
166
+ "content": "<|fim_pad|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151663": {
174
+ "content": "<|repo_name|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151664": {
182
+ "content": "<|file_sep|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151665": {
190
+ "content": "<tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151666": {
198
+ "content": "</tool_response>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151667": {
206
+ "content": "<think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ },
213
+ "151668": {
214
+ "content": "</think>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": false
220
+ },
221
+ "151669": {
222
+ "content": "<image>",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": true
228
+ },
229
+ "151670": {
230
+ "content": "</image>",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": true
236
+ },
237
+ "151671": {
238
+ "content": "<ref>",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": true
244
+ },
245
+ "151672": {
246
+ "content": "</ref>",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": true
252
+ },
253
+ "151673": {
254
+ "content": "<box>",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": true
260
+ },
261
+ "151674": {
262
+ "content": "</box>",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": true
268
+ },
269
+ "151675": {
270
+ "content": "<quad>",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": true
276
+ },
277
+ "151676": {
278
+ "content": "</quad>",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": true
284
+ },
285
+ "151677": {
286
+ "content": "<point>",
287
+ "lstrip": false,
288
+ "normalized": false,
289
+ "rstrip": false,
290
+ "single_word": false,
291
+ "special": true
292
+ },
293
+ "151678": {
294
+ "content": "</point>",
295
+ "lstrip": false,
296
+ "normalized": false,
297
+ "rstrip": false,
298
+ "single_word": false,
299
+ "special": true
300
+ },
301
+ "151679": {
302
+ "content": "<slice>",
303
+ "lstrip": false,
304
+ "normalized": false,
305
+ "rstrip": false,
306
+ "single_word": false,
307
+ "special": true
308
+ },
309
+ "151680": {
310
+ "content": "</slice>",
311
+ "lstrip": false,
312
+ "normalized": false,
313
+ "rstrip": false,
314
+ "single_word": false,
315
+ "special": true
316
+ },
317
+ "151681": {
318
+ "content": "<image_id>",
319
+ "lstrip": false,
320
+ "normalized": false,
321
+ "rstrip": false,
322
+ "single_word": false,
323
+ "special": true
324
+ },
325
+ "151682": {
326
+ "content": "</image_id>",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": false,
330
+ "single_word": false,
331
+ "special": true
332
+ },
333
+ "151683": {
334
+ "content": "<unit>",
335
+ "lstrip": false,
336
+ "normalized": false,
337
+ "rstrip": false,
338
+ "single_word": false,
339
+ "special": true
340
+ },
341
+ "151684": {
342
+ "content": "</unit>",
343
+ "lstrip": false,
344
+ "normalized": false,
345
+ "rstrip": false,
346
+ "single_word": false,
347
+ "special": true
348
+ },
349
+ "151685": {
350
+ "content": "<|reserved_0|>",
351
+ "lstrip": false,
352
+ "normalized": false,
353
+ "rstrip": false,
354
+ "single_word": false,
355
+ "special": true
356
+ },
357
+ "151686": {
358
+ "content": "<|reserved_1|>",
359
+ "lstrip": false,
360
+ "normalized": false,
361
+ "rstrip": false,
362
+ "single_word": false,
363
+ "special": true
364
+ },
365
+ "151687": {
366
+ "content": "<|reserved_2|>",
367
+ "lstrip": false,
368
+ "normalized": false,
369
+ "rstrip": false,
370
+ "single_word": false,
371
+ "special": true
372
+ },
373
+ "151688": {
374
+ "content": "<|reserved_3|>",
375
+ "lstrip": false,
376
+ "normalized": false,
377
+ "rstrip": false,
378
+ "single_word": false,
379
+ "special": true
380
+ },
381
+ "151689": {
382
+ "content": "<|reserved_4|>",
383
+ "lstrip": false,
384
+ "normalized": false,
385
+ "rstrip": false,
386
+ "single_word": false,
387
+ "special": true
388
+ },
389
+ "151690": {
390
+ "content": "<|reserved_5|>",
391
+ "lstrip": false,
392
+ "normalized": false,
393
+ "rstrip": false,
394
+ "single_word": false,
395
+ "special": true
396
+ },
397
+ "151691": {
398
+ "content": "<|reserved_6|>",
399
+ "lstrip": false,
400
+ "normalized": false,
401
+ "rstrip": false,
402
+ "single_word": false,
403
+ "special": true
404
+ },
405
+ "151692": {
406
+ "content": "<|reserved_7|>",
407
+ "lstrip": false,
408
+ "normalized": false,
409
+ "rstrip": false,
410
+ "single_word": false,
411
+ "special": true
412
+ },
413
+ "151693": {
414
+ "content": "<|reserved_8|>",
415
+ "lstrip": false,
416
+ "normalized": false,
417
+ "rstrip": false,
418
+ "single_word": false,
419
+ "special": true
420
+ },
421
+ "151694": {
422
+ "content": "<|reserved_9|>",
423
+ "lstrip": false,
424
+ "normalized": false,
425
+ "rstrip": false,
426
+ "single_word": false,
427
+ "special": true
428
+ },
429
+ "151695": {
430
+ "content": "<|reserved_10|>",
431
+ "lstrip": false,
432
+ "normalized": false,
433
+ "rstrip": false,
434
+ "single_word": false,
435
+ "special": true
436
+ },
437
+ "151696": {
438
+ "content": "<|reserved_11|>",
439
+ "lstrip": false,
440
+ "normalized": false,
441
+ "rstrip": false,
442
+ "single_word": false,
443
+ "special": true
444
+ },
445
+ "151697": {
446
+ "content": "<|reserved_12|>",
447
+ "lstrip": false,
448
+ "normalized": false,
449
+ "rstrip": false,
450
+ "single_word": false,
451
+ "special": true
452
+ },
453
+ "151698": {
454
+ "content": "<|reserved_13|>",
455
+ "lstrip": false,
456
+ "normalized": false,
457
+ "rstrip": false,
458
+ "single_word": false,
459
+ "special": true
460
+ },
461
+ "151699": {
462
+ "content": "<|reserved_14|>",
463
+ "lstrip": false,
464
+ "normalized": false,
465
+ "rstrip": false,
466
+ "single_word": false,
467
+ "special": true
468
+ },
469
+ "151700": {
470
+ "content": "<|reserved_15|>",
471
+ "lstrip": false,
472
+ "normalized": false,
473
+ "rstrip": false,
474
+ "single_word": false,
475
+ "special": true
476
+ },
477
+ "151701": {
478
+ "content": "<|reserved_16|>",
479
+ "lstrip": false,
480
+ "normalized": false,
481
+ "rstrip": false,
482
+ "single_word": false,
483
+ "special": true
484
+ },
485
+ "151702": {
486
+ "content": "<|reserved_17|>",
487
+ "lstrip": false,
488
+ "normalized": false,
489
+ "rstrip": false,
490
+ "single_word": false,
491
+ "special": true
492
+ },
493
+ "151703": {
494
+ "content": "<|reserved_18|>",
495
+ "lstrip": false,
496
+ "normalized": false,
497
+ "rstrip": false,
498
+ "single_word": false,
499
+ "special": true
500
+ },
501
+ "151704": {
502
+ "content": "<|reserved_19|>",
503
+ "lstrip": false,
504
+ "normalized": false,
505
+ "rstrip": false,
506
+ "single_word": false,
507
+ "special": true
508
+ },
509
+ "151705": {
510
+ "content": "<|reserved_20|>",
511
+ "lstrip": false,
512
+ "normalized": false,
513
+ "rstrip": false,
514
+ "single_word": false,
515
+ "special": true
516
+ },
517
+ "151706": {
518
+ "content": "<|reserved_21|>",
519
+ "lstrip": false,
520
+ "normalized": false,
521
+ "rstrip": false,
522
+ "single_word": false,
523
+ "special": true
524
+ },
525
+ "151707": {
526
+ "content": "<|reserved_22|>",
527
+ "lstrip": false,
528
+ "normalized": false,
529
+ "rstrip": false,
530
+ "single_word": false,
531
+ "special": true
532
+ },
533
+ "151708": {
534
+ "content": "<|reserved_23|>",
535
+ "lstrip": false,
536
+ "normalized": false,
537
+ "rstrip": false,
538
+ "single_word": false,
539
+ "special": true
540
+ },
541
+ "151709": {
542
+ "content": "<|reserved_24|>",
543
+ "lstrip": false,
544
+ "normalized": false,
545
+ "rstrip": false,
546
+ "single_word": false,
547
+ "special": true
548
+ },
549
+ "151710": {
550
+ "content": "<|reserved_25|>",
551
+ "lstrip": false,
552
+ "normalized": false,
553
+ "rstrip": false,
554
+ "single_word": false,
555
+ "special": true
556
+ },
557
+ "151711": {
558
+ "content": "<|reserved_26|>",
559
+ "lstrip": false,
560
+ "normalized": false,
561
+ "rstrip": false,
562
+ "single_word": false,
563
+ "special": true
564
+ },
565
+ "151712": {
566
+ "content": "<|reserved_27|>",
567
+ "lstrip": false,
568
+ "normalized": false,
569
+ "rstrip": false,
570
+ "single_word": false,
571
+ "special": true
572
+ },
573
+ "151713": {
574
+ "content": "<|reserved_28|>",
575
+ "lstrip": false,
576
+ "normalized": false,
577
+ "rstrip": false,
578
+ "single_word": false,
579
+ "special": true
580
+ },
581
+ "151714": {
582
+ "content": "<|reserved_29|>",
583
+ "lstrip": false,
584
+ "normalized": false,
585
+ "rstrip": false,
586
+ "single_word": false,
587
+ "special": true
588
+ },
589
+ "151715": {
590
+ "content": "<|reserved_30|>",
591
+ "lstrip": false,
592
+ "normalized": false,
593
+ "rstrip": false,
594
+ "single_word": false,
595
+ "special": true
596
+ },
597
+ "151716": {
598
+ "content": "<|reserved_31|>",
599
+ "lstrip": false,
600
+ "normalized": false,
601
+ "rstrip": false,
602
+ "single_word": false,
603
+ "special": true
604
+ },
605
+ "151717": {
606
+ "content": "<|reserved_32|>",
607
+ "lstrip": false,
608
+ "normalized": false,
609
+ "rstrip": false,
610
+ "single_word": false,
611
+ "special": true
612
+ },
613
+ "151718": {
614
+ "content": "<|reserved_33|>",
615
+ "lstrip": false,
616
+ "normalized": false,
617
+ "rstrip": false,
618
+ "single_word": false,
619
+ "special": true
620
+ },
621
+ "151719": {
622
+ "content": "<|reserved_34|>",
623
+ "lstrip": false,
624
+ "normalized": false,
625
+ "rstrip": false,
626
+ "single_word": false,
627
+ "special": true
628
+ },
629
+ "151720": {
630
+ "content": "<|reserved_35|>",
631
+ "lstrip": false,
632
+ "normalized": false,
633
+ "rstrip": false,
634
+ "single_word": false,
635
+ "special": true
636
+ },
637
+ "151721": {
638
+ "content": "<|reserved_36|>",
639
+ "lstrip": false,
640
+ "normalized": false,
641
+ "rstrip": false,
642
+ "single_word": false,
643
+ "special": true
644
+ },
645
+ "151722": {
646
+ "content": "<|reserved_37|>",
647
+ "lstrip": false,
648
+ "normalized": false,
649
+ "rstrip": false,
650
+ "single_word": false,
651
+ "special": true
652
+ },
653
+ "151723": {
654
+ "content": "<|reserved_38|>",
655
+ "lstrip": false,
656
+ "normalized": false,
657
+ "rstrip": false,
658
+ "single_word": false,
659
+ "special": true
660
+ },
661
+ "151724": {
662
+ "content": "<|reserved_39|>",
663
+ "lstrip": false,
664
+ "normalized": false,
665
+ "rstrip": false,
666
+ "single_word": false,
667
+ "special": true
668
+ },
669
+ "151725": {
670
+ "content": "<|reserved_40|>",
671
+ "lstrip": false,
672
+ "normalized": false,
673
+ "rstrip": false,
674
+ "single_word": false,
675
+ "special": true
676
+ },
677
+ "151726": {
678
+ "content": "<|reserved_41|>",
679
+ "lstrip": false,
680
+ "normalized": false,
681
+ "rstrip": false,
682
+ "single_word": false,
683
+ "special": true
684
+ },
685
+ "151727": {
686
+ "content": "<|reserved_42|>",
687
+ "lstrip": false,
688
+ "normalized": false,
689
+ "rstrip": false,
690
+ "single_word": false,
691
+ "special": true
692
+ },
693
+ "151728": {
694
+ "content": "<|reserved_43|>",
695
+ "lstrip": false,
696
+ "normalized": false,
697
+ "rstrip": false,
698
+ "single_word": false,
699
+ "special": true
700
+ },
701
+ "151729": {
702
+ "content": "<|reserved_44|>",
703
+ "lstrip": false,
704
+ "normalized": false,
705
+ "rstrip": false,
706
+ "single_word": false,
707
+ "special": true
708
+ },
709
+ "151730": {
710
+ "content": "<|reserved_45|>",
711
+ "lstrip": false,
712
+ "normalized": false,
713
+ "rstrip": false,
714
+ "single_word": false,
715
+ "special": true
716
+ },
717
+ "151731": {
718
+ "content": "<|reserved_46|>",
719
+ "lstrip": false,
720
+ "normalized": false,
721
+ "rstrip": false,
722
+ "single_word": false,
723
+ "special": true
724
+ },
725
+ "151732": {
726
+ "content": "<|reserved_47|>",
727
+ "lstrip": false,
728
+ "normalized": false,
729
+ "rstrip": false,
730
+ "single_word": false,
731
+ "special": true
732
+ },
733
+ "151733": {
734
+ "content": "<|reserved_48|>",
735
+ "lstrip": false,
736
+ "normalized": false,
737
+ "rstrip": false,
738
+ "single_word": false,
739
+ "special": true
740
+ },
741
+ "151734": {
742
+ "content": "<|reserved_49|>",
743
+ "lstrip": false,
744
+ "normalized": false,
745
+ "rstrip": false,
746
+ "single_word": false,
747
+ "special": true
748
+ },
749
+ "151735": {
750
+ "content": "<|reserved_50|>",
751
+ "lstrip": false,
752
+ "normalized": false,
753
+ "rstrip": false,
754
+ "single_word": false,
755
+ "special": true
756
+ },
757
+ "151736": {
758
+ "content": "<|reserved_51|>",
759
+ "lstrip": false,
760
+ "normalized": false,
761
+ "rstrip": false,
762
+ "single_word": false,
763
+ "special": true
764
+ },
765
+ "151737": {
766
+ "content": "<|reserved_52|>",
767
+ "lstrip": false,
768
+ "normalized": false,
769
+ "rstrip": false,
770
+ "single_word": false,
771
+ "special": true
772
+ },
773
+ "151738": {
774
+ "content": "<|reserved_53|>",
775
+ "lstrip": false,
776
+ "normalized": false,
777
+ "rstrip": false,
778
+ "single_word": false,
779
+ "special": true
780
+ },
781
+ "151739": {
782
+ "content": "<|reserved_54|>",
783
+ "lstrip": false,
784
+ "normalized": false,
785
+ "rstrip": false,
786
+ "single_word": false,
787
+ "special": true
788
+ },
789
+ "151740": {
790
+ "content": "<|reserved_55|>",
791
+ "lstrip": false,
792
+ "normalized": false,
793
+ "rstrip": false,
794
+ "single_word": false,
795
+ "special": true
796
+ },
797
+ "151741": {
798
+ "content": "<|reserved_56|>",
799
+ "lstrip": false,
800
+ "normalized": false,
801
+ "rstrip": false,
802
+ "single_word": false,
803
+ "special": true
804
+ },
805
+ "151742": {
806
+ "content": "<|reserved_57|>",
807
+ "lstrip": false,
808
+ "normalized": false,
809
+ "rstrip": false,
810
+ "single_word": false,
811
+ "special": true
812
+ },
813
+ "151743": {
814
+ "content": "<|reserved_58|>",
815
+ "lstrip": false,
816
+ "normalized": false,
817
+ "rstrip": false,
818
+ "single_word": false,
819
+ "special": true
820
+ },
821
+ "151744": {
822
+ "content": "<|reserved_59|>",
823
+ "lstrip": false,
824
+ "normalized": false,
825
+ "rstrip": false,
826
+ "single_word": false,
827
+ "special": true
828
+ },
829
+ "151745": {
830
+ "content": "<|reserved_60|>",
831
+ "lstrip": false,
832
+ "normalized": false,
833
+ "rstrip": false,
834
+ "single_word": false,
835
+ "special": true
836
+ },
837
+ "151746": {
838
+ "content": "<|reserved_61|>",
839
+ "lstrip": false,
840
+ "normalized": false,
841
+ "rstrip": false,
842
+ "single_word": false,
843
+ "special": true
844
+ },
845
+ "151747": {
846
+ "content": "<|reserved_62|>",
847
+ "lstrip": false,
848
+ "normalized": false,
849
+ "rstrip": false,
850
+ "single_word": false,
851
+ "special": true
852
+ }
853
+ },
854
+ "additional_special_tokens": [
855
+ "<unk>",
856
+ "<image>",
857
+ "</image>",
858
+ "<ref>",
859
+ "</ref>",
860
+ "<box>",
861
+ "</box>",
862
+ "<quad>",
863
+ "</quad>",
864
+ "<point>",
865
+ "</point>",
866
+ "<slice>",
867
+ "</slice>",
868
+ "<image_id>",
869
+ "</image_id>",
870
+ "<unit>",
871
+ "</unit>",
872
+ "<|reserved_0|>",
873
+ "<|reserved_1|>",
874
+ "<|reserved_2|>",
875
+ "<|reserved_3|>",
876
+ "<|reserved_4|>",
877
+ "<|reserved_5|>",
878
+ "<|reserved_6|>",
879
+ "<|reserved_7|>",
880
+ "<|reserved_8|>",
881
+ "<|reserved_9|>",
882
+ "<|reserved_10|>",
883
+ "<|reserved_11|>",
884
+ "<|reserved_12|>",
885
+ "<|reserved_13|>",
886
+ "<|reserved_14|>",
887
+ "<|reserved_15|>",
888
+ "<|reserved_16|>",
889
+ "<|reserved_17|>",
890
+ "<|reserved_18|>",
891
+ "<|reserved_19|>",
892
+ "<|reserved_20|>",
893
+ "<|reserved_21|>",
894
+ "<|reserved_22|>",
895
+ "<|reserved_23|>",
896
+ "<|reserved_24|>",
897
+ "<|reserved_25|>",
898
+ "<|reserved_26|>",
899
+ "<|reserved_27|>",
900
+ "<|reserved_28|>",
901
+ "<|reserved_29|>",
902
+ "<|reserved_30|>",
903
+ "<|reserved_31|>",
904
+ "<|reserved_32|>",
905
+ "<|reserved_33|>",
906
+ "<|reserved_34|>",
907
+ "<|reserved_35|>",
908
+ "<|reserved_36|>",
909
+ "<|reserved_37|>",
910
+ "<|reserved_38|>",
911
+ "<|reserved_39|>",
912
+ "<|reserved_40|>",
913
+ "<|reserved_41|>",
914
+ "<|reserved_42|>",
915
+ "<|reserved_43|>",
916
+ "<|reserved_44|>",
917
+ "<|reserved_45|>",
918
+ "<|reserved_46|>",
919
+ "<|reserved_47|>",
920
+ "<|reserved_48|>",
921
+ "<|reserved_49|>",
922
+ "<|reserved_50|>",
923
+ "<|reserved_51|>",
924
+ "<|reserved_52|>",
925
+ "<|reserved_53|>",
926
+ "<|reserved_54|>",
927
+ "<|reserved_55|>",
928
+ "<|reserved_56|>",
929
+ "<|reserved_57|>",
930
+ "<|reserved_58|>",
931
+ "<|reserved_59|>",
932
+ "<|reserved_60|>",
933
+ "<|reserved_61|>",
934
+ "<|reserved_62|>"
935
+ ],
936
+ "bos_token": "<|im_start|>",
937
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and 
enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n {%- if enable_thinking is defined and enable_thinking is true %}\n {{- '<think>\\n' }}\n {%- endif %}\n{%- endif %}",
938
+ "clean_up_tokenization_spaces": false,
939
+ "eos_token": "<|im_end|>",
940
+ "errors": "replace",
941
+ "extra_special_tokens": {},
942
+ "model_max_length": 131072,
943
+ "pad_token": "<|endoftext|>",
944
+ "split_special_tokens": false,
945
+ "unk_token": "<unk>",
946
+ "auto_map": {
947
+ "AutoTokenizer": [
948
+ "tokenization_minicpmv_fast.MiniCPMVTokenizerFast",
949
+ null
950
+ ]
951
+ },
952
+ "tokenizer_class": "MiniCPMVTokenizerFast"
953
+ }
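
The `chat_template` above implements the fast/deep thinking switch through an `enable_thinking` variable: when it is defined and false, an empty `<think>\n\n</think>` block is pre-filled after the assistant header, and when it is defined and true the prompt ends inside an open `<think>` tag. A sketch of driving it from `apply_chat_template` (extra keyword arguments are forwarded into the Jinja template; the repo id is assumed):

```python
from transformers import AutoTokenizer

# Assumed repo id; the text placeholder mirrors the "(<image>./</image>)" pattern
# used by the processor, purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-4_5", trust_remote_code=True)
messages = [{"role": "user", "content": "(<image>./</image>)\nWhat is in the image?"}]

fast = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
deep = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
print(fast.endswith("<think>\n\n</think>\n\n"))  # fast thinking: empty think block pre-filled
print(deep.endswith("<think>\n"))                # deep thinking: generation starts inside <think>
```

Leaving `enable_thinking` unset produces a plain assistant prompt with neither prefix, so the model itself decides whether to open a `<think>` block.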
vocab.json ADDED
The diff for this file is too large to render. See raw diff