Update README.md
README.md CHANGED
@@ -5,5 +5,96 @@ datasets:
 language:
 - en
 base_model:
-- Qwen/Qwen2-VL-
----
+- Qwen/Qwen2-VL-7B-Instruct
+---

The rest of the hunk adds the new model card body:
# lmms-lab/Qwen2-VL-7B-GRPO-8k

## Model Summary

This is a 7B-parameter model trained with GRPO on a curated 8k-example [dataset](https://huggingface.co/datasets/lmms-lab/multimodal-open-r1-8k-verified).
- **Repository:** [EvolvingLMMs-Lab/open-r1-multimodal](https://github.com/EvolvingLMMs-Lab/open-r1-multimodal)
- **Languages:** English, Chinese
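For background, GRPO samples a group of completions per prompt, scores each with a verifiable reward, and normalizes the rewards within the group into relative advantages. The sketch below illustrates only that normalization step; `group_relative_advantages` and the reward values are hypothetical for exposition, and the actual training code lives in the linked repository.

```python
# Illustrative sketch of GRPO's group-relative advantage normalization.
# NOTE: this is a hypothetical helper, not the repository's API.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # rewards: shape (G,), one scalar reward per sampled completion of a prompt.
    # Advantage = reward standardized within its group of G samples.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 4 sampled completions, reward 1.0 when the <answer> is verified correct.
print(group_relative_advantages(torch.tensor([1.0, 0.0, 0.0, 1.0])))
```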
### Generation

Generation with this model works the same as with the original `Qwen/Qwen2-VL-7B-Instruct`: simply change the model id passed to `from_pretrained` and everything else works unchanged.
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

SYSTEM_PROMPT = (
    "A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant "
    "first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning "
    "process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., "
    "<think> reasoning process here </think><answer> answer here </answer>"
)

# Default: load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "lmms-lab/Qwen2-VL-7B-GRPO-8k", torch_dtype="auto", device_map="auto"
)

# Default processor
processor = AutoProcessor.from_pretrained("lmms-lab/Qwen2-VL-7B-GRPO-8k")

# The default range for the number of visual tokens per image is 4-16384. You can set
# min_pixels and max_pixels according to your needs, such as a token count range of
# 256-1280, to balance speed and memory usage.
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "lmms-lab/Qwen2-VL-7B-GRPO-8k", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate, then strip the prompt tokens from the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
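Since the system prompt asks for `<think> ... </think><answer> ... </answer>` output, it is often convenient to split the decoded text into the reasoning trace and the final answer. A minimal sketch continuing from the snippet above, assuming the model follows that tag format (`extract_think_answer` is an illustrative helper, not a library API):

```python
# Split a decoded completion into its <think> and <answer> parts.
# Assumes the tag format from SYSTEM_PROMPT; returns None for a missing tag.
import re

def extract_think_answer(text: str):
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

reasoning, answer = extract_think_answer(output_text[0])
print(answer)
```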
# Training

## Model
- **Architecture:** Qwen/Qwen2-VL-7B-Instruct
- **Initialized Model:** Qwen/Qwen2-VL-7B-Instruct
- **Data:** lmms-lab/multimodal-open-r1-8k-verified
- **Precision:** bfloat16
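Given the bfloat16 training precision, one option is to load the checkpoint explicitly in that dtype rather than relying on `torch_dtype="auto"` (which typically resolves to the same thing via the checkpoint config). A minimal sketch:

```python
# Load the checkpoint explicitly in bfloat16, matching the training precision.
import torch
from transformers import Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "lmms-lab/Qwen2-VL-7B-GRPO-8k", torch_dtype=torch.bfloat16, device_map="auto"
)
```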