---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-14B
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-14B-Instruct-1M
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- Azure99/Blossom-V6-14B
- arcee-ai/Virtuoso-Small-v2
pipeline_tag: text-generation
tags:
- merge
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e174e202fa032de4143324/CfIE4_oZgpNsNZyurjO7D.png)
# Qwen2.5-14B-1M-YOYO-V3
This time, I'm not only releasing the model but also sharing some model-merging insights that may be even more valuable than the model itself.

Let’s start by looking at the initial merge configuration (YAML):
```yaml
merge_method: model_stock  
base_model: Qwen/Qwen2.5-14B  
models:  
  - model: Qwen/Qwen2.5-14B-Instruct  
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
dtype: bfloat16
```
At first glance, nothing looks wrong. In practice, however, models merged this way occasionally exhibit **uncontrollable outputs**, likely due to the large discrepancy between the instruction-tuned models and the base model.
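
For intuition about what `model_stock` is doing here: it interpolates between the average of the fine-tuned weights and the base weights, with a ratio derived from the angle between the fine-tuned deltas. Below is a toy per-tensor sketch of that idea (my simplification of the Model Stock paper's constant-angle approximation, not mergekit's actual implementation):

```python
import numpy as np

def model_stock(base, finetuned):
    """Toy per-tensor Model Stock merge.

    base: 1-D weight vector of the base model
    finetuned: list of N weight vectors fine-tuned from `base`
    """
    deltas = [w - base for w in finetuned]
    n = len(deltas)
    # Average pairwise cosine between fine-tuned deltas (the paper assumes a
    # roughly constant angle between any two fine-tuned models).
    cos_vals = []
    for i in range(n):
        for j in range(i + 1, n):
            cos_vals.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cos_vals))
    # Interpolation ratio toward the fine-tuned average: the more the
    # fine-tuned models agree (cos -> 1), the further we move from the base.
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = sum(finetuned) / n
    return t * w_avg + (1 - t) * base
```

Note how disagreement between the fine-tuned models pulls the result back toward the base: with nearly orthogonal deltas, `t` approaches 0 and the merge barely moves, which is consistent with the instability described above.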

To address this, I first tried adding a fine-tuned model that diverges less from the base, such as **Virtuoso-Small-v2**, directly into the merge.

This gave rise to [Qwen2.5-14B-YOYO-latest-V2](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-latest-V2).
```yaml
merge_method: model_stock  
base_model: Qwen/Qwen2.5-14B  
models:  
  - model: Qwen/Qwen2.5-14B-Instruct  
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
  - model: arcee-ai/Virtuoso-Small-v2  
dtype: bfloat16
name: Qwen2.5-14B-YOYO-latest-V2
```
Although this resolved the uncontrollable-output issue, the model still lacked stability.

Through practical experimentation, I found that first merging **"high-divergence"** models (significantly different from the base) into **"low-divergence"** models (closer to the base) using the [DELLA](https://arxiv.org/abs/2406.11617) method, then applying the [Model Stock](https://arxiv.org/abs/2403.19522) method, ultimately produces a model that is not only more stable but also achieves better performance.

## Key models used:
**1. Low-divergence, high-performance models:**

   - Virtuoso-Small-v2
   - Blossom-V6-14B

**2. High-divergence, instruction-focused models:**

   - Qwen2.5-14B-Instruct
   - Qwen2.5-14B-Instruct-1M

## DELLA Merge Configuration:
```yaml
models:  
  - model: Qwen/Qwen2.5-14B-Instruct  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: arcee-ai/Virtuoso-Small-v2  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: bfloat16  
tokenizer_source: base  
name: Qwen2.5-14B-YOYO-della1
```
```yaml
models:  
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: arcee-ai/Virtuoso-Small-v2  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: bfloat16  
tokenizer_source: base  
name: Qwen2.5-14B-YOYO-della2
```
```yaml
models:  
  - model: Qwen/Qwen2.5-14B-Instruct  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Azure99/Blossom-V6-14B  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: bfloat16  
tokenizer_source: base  
name: Qwen2.5-14B-YOYO-della3
```
```yaml
models:  
  - model: Qwen/Qwen2.5-14B-Instruct-1M  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Azure99/Blossom-V6-14B  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: bfloat16  
tokenizer_source: base  
name: Qwen2.5-14B-YOYO-della4
```
This approach yielded four variants:  
- `Qwen2.5-14B-YOYO-della1`  
- `Qwen2.5-14B-YOYO-della2`  
- `Qwen2.5-14B-YOYO-della3`  
- `Qwen2.5-14B-YOYO-della4`
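
Conceptually, each DELLA step above operates on the delta (fine-tuned minus base) weights: deltas are stochastically pruned with keep-probability tied to their magnitude, survivors are rescaled so the expected delta is unchanged, and the result is added back to the base scaled by `lambda`. Here is a toy per-tensor sketch using the configs' `density`, `weight`, and `lambda` parameters (a simplified illustration, not mergekit's implementation; the magnitude-proportional keep-probability here only approximates DELLA's rank-based MAGPRUNE):

```python
import numpy as np

def della_step(base, tuned, density=1.0, weight=1.0, lam=0.9, seed=0):
    """Toy single-donor DELLA merge step."""
    rng = np.random.default_rng(seed)
    delta = (tuned - base) * weight
    if density < 1.0:
        mag = np.abs(delta)
        # Keep-probabilities averaging ~density, biased toward large magnitudes.
        p_keep = np.clip(density * mag * mag.size / (mag.sum() + 1e-12), 0.0, 1.0)
        mask = rng.random(mag.size) < p_keep
        # Rescale survivors so the expected delta is preserved.
        delta = np.where(mask, delta / np.maximum(p_keep, 1e-12), 0.0)
    return base + lam * delta
```

With `density: 1` and `weight: 1`, as in the configs above, no pruning happens and the step reduces to `base + 0.9 * delta`, i.e. a slightly dampened copy of the donor's changes onto the low-divergence base.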

## Base Model:
To enhance the base model's roleplay and creative-writing capabilities, I applied the same strategy:
```yaml
models:  
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2  
    parameters:  
      density: 1  
      weight: 1  
      lambda: 0.9  
merge_method: della  
base_model: Qwen/Qwen2.5-14B  
parameters:  
  density: 1  
  weight: 1  
  lambda: 0.9  
  normalize: true  
  int8_mask: true  
dtype: bfloat16  
tokenizer_source: base  
name: EVA-Qwen2.5-14B-base
```
Next, I extended the context length using the SCE method:
```yaml
merge_method: sce  
models:  
  - model: EVA-Qwen2.5-14B-base  
base_model: Qwen/Qwen2.5-14B-Instruct-1M  
parameters:  
  select_topk: 1  
dtype: bfloat16  
tokenizer_source: base  
normalize: true  
int8_mask: true  
name: Qwen2.5-14B-pro
```
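
As I understand SCE, it selects the top-`select_topk` fraction of positions by cross-donor variance, weights each donor by its squared-delta energy, and drops sign-conflicting elements. A toy sketch of that reading (my rough approximation, not mergekit's implementation; with a single donor and `select_topk: 1`, as above, it essentially grafts the donor's delta onto the new base):

```python
import numpy as np

def sce_merge(base, donors, select_topk=1.0):
    """Toy SCE-style merge: select by variance, weight by energy, erase conflicts."""
    deltas = np.stack([d - base for d in donors])          # shape (n_donors, dim)
    if select_topk < 1.0:
        # Select: keep only positions with the highest cross-donor variance.
        var = deltas.var(axis=0)
        k = max(1, int(round(select_topk * var.size)))
        cutoff = np.sort(var)[-k]
        deltas = np.where(var >= cutoff, deltas, 0.0)
    # Calculate: per-donor coefficients from squared-delta energy.
    energy = (deltas ** 2).sum(axis=1)
    coeff = energy / (energy.sum() + 1e-12)
    merged_delta = (coeff[:, None] * deltas).sum(axis=0)
    # Erase: zero positions whose merged sign conflicts with the dominant sign.
    sign_mass = np.sign((np.sign(deltas) * deltas ** 2).sum(axis=0))
    merged_delta = np.where(np.sign(merged_delta) == sign_mass, merged_delta, 0.0)
    return base + merged_delta
```

Using `Qwen2.5-14B-Instruct-1M` as the base here is what carries its long-context behavior into the resulting `Qwen2.5-14B-pro`.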
## Final Merge Step:
```yaml
merge_method: model_stock  
base_model: Qwen2.5-14B-pro  
models:  
  - model: Qwen2.5-14B-YOYO-della1  
  - model: Qwen2.5-14B-YOYO-della2  
  - model: Qwen2.5-14B-YOYO-della3  
  - model: Qwen2.5-14B-YOYO-della4  
dtype: bfloat16  
tokenizer_source: base  
int8_mask: true  
normalize: true  
name: Qwen2.5-14B-1M-YOYO-V3
```
I hope this helps!