---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- unsloth/Mistral-Small-24B-Base-2501
- unsloth/Mistral-Small-24B-Instruct-2501
- trashpanda-org/MS-24B-Instruct-Mullein-v0
- trashpanda-org/Llama3-24B-Mullein-v1
- ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
- TheDrummer/Cydonia-24B-v2
- estrogen/MS2501-24b-Ink-apollo-ep2
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- ToastyPigeon/ms3-roselily-rp-v2
- PocketDoc/Dans-DangerousWinds-V1.1.1-24b
- ReadyArt/Forgotten-Safeword-24B-V2.2
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- Undi95/MistralThinker-e2
- lemonilia/Mistral-Small-3-Reasoner-s1
- arcee-ai/Arcee-Blitz
- SicariusSicariiStuff/Redemption_Wind_24B
---

***

## Tantum

> Everything is edible if you are brave enough.

### Overview

It's kind of hard to judge a 24B model after using a 70B for a while. From some tests, I think it might be better than my ms-22B and qwen-32B merges.

It has some prose, some character adherence, and... `<think>` tags! It will consistently think if you add a `<think>` tag as a prefill, though I think it will obviously not think as well as an actual reasoning-model distill.
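
If you want to see that outside a frontend, here's a minimal transformers sketch of the prefill trick. The repo id is my best guess at this card's id and the sampler settings are placeholders, so treat it as a sketch rather than the canonical way to run the model:

```python
# Minimal sketch of the <think> prefill trick. The repo id and sampler values
# are assumptions; any backend that lets you prefill the reply works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nohobby/MS3-Tantum-24B-v0.1"  # assumption: adjust to wherever you got the weights

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Describe the tavern as {{char}}."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>"  # the prefill: the model now writes its reasoning before the reply

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=600, do_sample=True, temperature=0.8)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```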

People also like RP-Whole (RP-Broth). You can find it [here](https://huggingface.co/d-rang-d/MS3-RP-Broth-24B).

**Settings:**

Samplers: [Weird preset](https://files.catbox.moe/ccwmca.json) | [Forgotten-Safeword preset](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)

Prompt format: Mistral-V7 (?)

ChatML and Llama3 give better results, imo. In the case of ChatML that makes some sense, since Dans-PersonalityEngine and Redemption-Wind were trained on it. But Llama3? No clue.
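
Rather than guessing, you can also print what the bundled chat template actually produces; a small sketch (repo id is again an assumption):

```python
# Print the prompt string the merged tokenizer's chat template emits, instead of
# guessing between Mistral-V7 / ChatML / Llama3 by hand.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Nohobby/MS3-Tantum-24B-v0.1")  # assumption

messages = [
    {"role": "system", "content": "You are {{char}}."},  # some Mistral templates fold this into the first user turn
    {"role": "user", "content": "Hi!"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```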

I use [this](https://files.catbox.moe/daluze.json) lorebook for all chats instead of a system prompt for Mistral models.

### Quants

[Static](https://huggingface.co/mradermacher/MS3-Tantum-24B-v0.1-GGUF) | [Imatrix](https://huggingface.co/mradermacher/MS3-Tantum-24B-v0.1-i1-GGUF)
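
If you only want a single quant instead of the whole repo, something like this works; the filename pattern is an assumption, so check the repo's file list first:

```python
# Grab just one GGUF quant from the static repo. The Q4_K_M pattern is an
# assumption about the repo's naming; check the file list for exact names.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mradermacher/MS3-Tantum-24B-v0.1-GGUF",
    allow_patterns=["*Q4_K_M*"],
    local_dir="./gguf",
)
```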

***

## Merge Details

### Merging steps
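
For reference: every recipe below except Tantumv00 (more on that one later) is a mergekit config, runnable with the `mergekit-yaml` CLI or from Python. A rough sketch of the Python route, assuming a recent mergekit and a recipe saved as `config.yml`:

```python
# Run a saved recipe with mergekit's Python API; the CLI equivalent is
# `mergekit-yaml config.yml ./output-model`. Paths here are illustrative.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./output-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```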

## MS3-test-Merge-1

```yaml
models:
  - model: unsloth/Mistral-Small-24B-Base-2501
  # "+" grafts a LoRA onto the checkpoint before merging
  - model: unsloth/Mistral-Small-24B-Instruct-2501+ToastyPigeon/new-ms-rp-test-ws
    parameters:
      select_topk:
        - value: [0.05, 0.03, 0.02, 0.02, 0.01]
  - model: unsloth/Mistral-Small-24B-Instruct-2501+estrogen/MS2501-24b-Ink-ep2-adpt
    parameters:
      select_topk: 0.1
  - model: trashpanda-org/MS-24B-Instruct-Mullein-v0
    parameters:
      select_topk: 0.4
base_model: unsloth/Mistral-Small-24B-Base-2501
merge_method: sce
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: bfloat16
tokenizer_source: base
```

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.55
base_model: Step1  # the SCE merge from the block above
models:
  - model: unsloth/Mistral-Small-24B-Instruct-2501
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: Step1
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```

Some early MS3 merge. Not really worth using on its own. Just added it for fun.

## RP-half1

```yaml
models:
  - model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
    parameters:
      weight: 0.2
      density: 0.7
  - model: trashpanda-org/Llama3-24B-Mullein-v1
    parameters:
      weight: 0.2
      density: 0.7
  - model: TheDrummer/Cydonia-24B-v2
    parameters:
      weight: 0.2
      density: 0.7
merge_method: della_linear
base_model: Nohobby/MS3-test-Merge-1
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: bfloat16
tokenizer:
  source: base
```

## RP-half2

```yaml
base_model: Nohobby/MS3-test-Merge-1
parameters:
  epsilon: 0.05
  lambda: 0.9
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer:
  source: base
merge_method: della
models:
  - model: estrogen/MS2501-24b-Ink-apollo-ep2
    parameters:
      weight: [0.1, -0.01, 0.1, -0.02, 0.1]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
    parameters:
      weight: [0.02, -0.01, 0.02, -0.02, 0.01]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: ToastyPigeon/ms3-roselily-rp-v2
    parameters:
      weight: [0.01, -0.02, 0.02, -0.025, 0.01]
      density: [0.45, 0.65, 0.45, 0.65, 0.45]
  - model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
    parameters:
      weight: [0.1, -0.01, 0.1, -0.02, 0.1]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
```

## RP-whole

```yaml
base_model: ReadyArt/Forgotten-Safeword-24B-V2.2
merge_method: model_stock
dtype: bfloat16
models:
  - model: mergekit-community/MS3-RP-half1
  - model: mergekit-community/MS3-RP-RP-half2
```

## INT

```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer:
  source: base
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
models:
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.55
      weight: 1
  - model: Undi95/MistralThinker-e2
    parameters:
      density: 0.55
      weight: 1
  - model: d-rang-d/ignore_MS3-Reasoner-mergekit
    parameters:
      density: 0.55
      weight: 1
  - model: arcee-ai/Arcee-Blitz
    parameters:
      density: 0.55
      weight: 1
```

## Tantumv00

```yaml
output_base_model: "SicariusSicariiStuff/Redemption_Wind_24B"
output_dtype: "bfloat16"
finetune_merge:
  - { "model": "mergekit-community/MS3-INT", "base": "unsloth/Mistral-Small-24B-Instruct-2501", "alpha": 1.0, "is_input": true }
  - { "model": "mergekit-community/MS-RP-whole", "base": "unsloth/Mistral-Small-24B-Instruct-2501", "alpha": 0.7, "is_output": true }
output_dir: "output_model"
device: "cpu"
clean_cache: false
cache_dir: "cache"
storage_dir: "storage"
```

Doesn't look like a mergekit recipe, right? Well, it's not. It's for a standalone merge tool: https://github.com/54rt1n/shardmerge

If you want to use it for something non-Qwen, you can replace index.py with [this](https://files.catbox.moe/bgxmuz.py) and writer.py with [that](https://files.catbox.moe/ewww39.py). A much better solution is possible, ofc, but I'm a dumdum and can't code. The creator knows about this issue and will fix it... someday, I guess.

You should also know that this thing is *really* slow: it took me 5 hours to cram three 24B models together.

## Tantumv01

```yaml
dtype: bfloat16
tokenizer:
  source: unsloth/Mistral-Small-24B-Instruct-2501
merge_method: della_linear
parameters:
  density: 0.55
base_model: d-rang-d/MS3-megamerge
models:
  - model: unsloth/Mistral-Small-24B-Instruct-2501
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: d-rang-d/MS3-megamerge
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```