---
base_model:
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- Sao10K/Sensualize-Mixtral-bf16
- jondurbin/bagel-dpo-8x7b-v0.2
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v4
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I merged several models I was familiar with because I had leftover credits for merging. As expected, the results are not good; please do not use this model.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base.
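For intuition, here is a minimal per-tensor sketch of what DARE-TIES does, written in PyTorch. Each fine-tuned model's delta from the shared base is randomly sparsified and rescaled (DARE), a majority sign is elected per parameter, and only the deltas agreeing with that sign are averaged back into the base (TIES). This is an illustrative sketch only, not mergekit's actual implementation; the function name and exact normalization are my own.

```python
import torch

def dare_ties(base, tuned, densities, weights):
    """Toy per-tensor DARE-TIES merge (illustrative; mergekit differs in detail)."""
    deltas = []
    for t, d, w in zip(tuned, densities, weights):
        delta = t - base                              # "task vector" relative to the base
        keep = torch.bernoulli(torch.full_like(delta, d))
        deltas.append(w * delta * keep / d)           # DARE: random drop + 1/density rescale
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))          # TIES: per-parameter majority sign
    agree = (torch.sign(stacked) == elected).float()  # drop deltas that fight the majority
    shape = (-1,) + (1,) * base.dim()
    norm = (agree * torch.tensor(weights).view(shape)).sum(dim=0)
    return base + (stacked * agree).sum(dim=0) / norm.clamp_min(1e-8)

# Toy usage with random tensors standing in for one weight matrix
base = torch.zeros(8, 8)
tuned = [base + 0.1 * torch.randn_like(base) for _ in range(3)]
merged = dare_ties(base, tuned, densities=[0.75, 0.6, 0.5], weights=[0.7, 0.1, 0.25])
```

The `density` values in the configuration below play the role of `d` here, and `weight` the role of `w`.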
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora) (the LoRA is folded into the base before merging; see the sketch after this list)
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
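The fourth entry is not a standalone checkpoint: mergekit first bakes the LoRA adapter into the Mixtral base and then treats the result as an ordinary merge source. Outside of mergekit, the same combination can be produced with peft's `merge_and_unload`, roughly as below (a sketch, not part of the merge recipe; loading requires enough memory for the full Mixtral weights).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach and fold in the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype="auto"
)
lora = PeftModel.from_pretrained(base, "Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora")
model = lora.merge_and_unload()  # returns a plain model with the adapter merged in
```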
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: mistralai/Mixtral-8x7B-v0.1
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
    parameters:
      density: 0.75
      weight: 0.7
  - layer_range: [0, 32]
    model:
      model:
        path: cognitivecomputations/dolphin-2.7-mixtral-8x7b
    parameters:
      density: 0.6
      weight: 0.1
  - layer_range: [0, 32]
    model:
      model:
        path: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.1
  - layer_range: [0, 32]
    model:
      lora:
        path: Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
      model:
        path: mistralai/Mixtral-8x7B-v0.1
    parameters:
      density: 0.5
      weight: 0.25
  - layer_range: [0, 32]
    model:
      model:
        path: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.2
  - layer_range: [0, 32]
    model:
      model:
        path: mistralai/Mixtral-8x7B-v0.1
```
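To reproduce a merge like this, the YAML can be fed to mergekit either through the `mergekit-yaml` CLI or through its Python API, roughly as below. The file and output paths are placeholders, and the `MergeOptions` fields follow mergekit's README at the time of writing; they may differ in your mergekit version.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above
with open("maid-yuzu-v4.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./maid-yuzu-v4",              # where the merged weights are written
    options=MergeOptions(
        cuda=torch.cuda.is_available(),     # use a GPU if one is available
        copy_tokenizer=True,                # carry the base tokenizer into the output
        lora_merge_cache="/tmp/lora",       # scratch space for baking in the LoRA
    ),
)
```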