---
base_model:
- ReadyArt/Forgotten-Safeword-24B-V2.2
- ReadyArt/Forgotten-Safeword-24B-V2.0
- trashpanda-org/MS-24B-Mullein-v1-lora
- ReadyArt/Forgotten-Abomination-24B-V2.2
- OddTheGreat/Apparatus_24B
- Darkknight535/WinterEngine-24B-Instruct
- allura-org/MS3-24B-Roselily-Creative
- TroyDoesAI/BlackSheep-24B
- Nohobby/MS3-Tantum-24B-v0.1
- TheDrummer/Cydonia-24B-v2.1
library_name: transformers
tags:
- mergekit
- merge
---
# Chat Template
Mistral Instruct (V7-Tekken)
```
{{ if .System }}[SYSTEM_PROMPT]{{ .System }}[/SYSTEM_PROMPT]{{ end }}{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]{{ end }}{{ .Response }}</s>
```
ChatML
```
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
```
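For back-ends that take a raw prompt string rather than a chat messages list, the ChatML layout above can be rendered directly. The snippet below is a minimal sketch of that rendering; the helper name and example strings are illustrative and not part of this repo.
```python
# Minimal sketch: render the ChatML template above as a plain prompt string.
# Function name and example strings are illustrative, not part of this repo.

def chatml_prompt(system=None, prompt=None):
    """Optional system turn, optional user turn, then an open assistant turn."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    if prompt:
        parts.append(f"<|im_start|>user\n{prompt}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(chatml_prompt("You are a creative writing assistant.", "Write an opening line."))
```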
# GGUF
* Q6_K quant - [Sorawiz/MistralSmall-Creative-24B-Q6_K-GGUF](https://huggingface.co/Sorawiz/MistralSmall-Creative-24B-Q6_K-GGUF)
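One way to run the quant is with llama-cpp-python, which can fetch the GGUF straight from the repo linked above. This is a minimal sketch, assuming the file name in that repo contains "Q6_K" (check the repo's file list) and that llama-cpp-python is installed.
```python
# Minimal sketch: load the Q6_K GGUF from the Hub and run one chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Sorawiz/MistralSmall-Creative-24B-Q6_K-GGUF",
    filename="*Q6_K*.gguf",  # assumed naming pattern; adjust to the actual file
    n_ctx=8192,              # context window; raise or lower to fit your hardware
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Write an opening line."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```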
# MERGE
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method.
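Roughly, Model Stock averages the fine-tuned weights and then pulls that average back toward the base weights, with the interpolation ratio derived from how aligned the fine-tuned deltas are. The snippet below is a toy per-tensor sketch of that idea, not mergekit's implementation.
```python
# Toy per-tensor sketch of Model Stock: merged = t * mean(tuned) + (1 - t) * base,
# where t comes from the average pairwise cosine similarity of the tuned deltas.
import itertools
import numpy as np

def model_stock_tensor(base, tuned):
    deltas = [w - base for w in tuned]
    cos = np.mean([
        float(np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b)))
        for a, b in itertools.combinations(deltas, 2)
    ])
    n = len(tuned)
    t = n * cos / (1.0 + (n - 1) * cos)  # interpolation ratio from the paper
    return t * np.mean(tuned, axis=0) + (1.0 - t) * base

# Random matrices stand in for real model tensors.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
tuned = [base + 0.1 * rng.normal(size=(4, 4)) for _ in range(3)]
print(model_stock_tensor(base, tuned).shape)
```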
### Models Merged
The following models were included in the merge:
* [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2)
* [ReadyArt/Forgotten-Safeword-24B-V2.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.0) + [trashpanda-org/MS-24B-Mullein-v1-lora](https://huggingface.co/trashpanda-org/MS-24B-Mullein-v1-lora)
* [ReadyArt/Forgotten-Abomination-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Abomination-24B-V2.2)
* [OddTheGreat/Apparatus_24B](https://huggingface.co/OddTheGreat/Apparatus_24B)
* [Darkknight535/WinterEngine-24B-Instruct](https://huggingface.co/Darkknight535/WinterEngine-24B-Instruct)
* [allura-org/MS3-24B-Roselily-Creative](https://huggingface.co/allura-org/MS3-24B-Roselily-Creative)
* [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B)
* [Nohobby/MS3-Tantum-24B-v0.1](https://huggingface.co/Nohobby/MS3-Tantum-24B-v0.1)
* [TheDrummer/Cydonia-24B-v2.1](https://huggingface.co/TheDrummer/Cydonia-24B-v2.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Sorawiz/MS-Creative-24B-Test-A
merge_method: dare_ties
base_model: ReadyArt/Forgotten-Safeword-24B-V2.2
models:
  - model: ReadyArt/Forgotten-Safeword-24B-V2.2
    parameters:
      weight: 0.05
  - model: ReadyArt/Forgotten-Abomination-24B-V2.2
    parameters:
      weight: 0.20
  - model: OddTheGreat/Apparatus_24B
    parameters:
      weight: 0.20
  - model: Darkknight535/WinterEngine-24B-Instruct
    parameters:
      weight: 0.15
  - model: ReadyArt/Forgotten-Safeword-24B-V2.0+trashpanda-org/MS-24B-Mullein-v1-lora
    parameters:
      weight: 0.15
  - model: allura-org/MS3-24B-Roselily-Creative
    parameters:
      weight: 0.15
  - model: TroyDoesAI/BlackSheep-24B
    parameters:
      weight: 0.10
parameters:
  density: 0.79
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/MS-Creative-24B-Test-B
models:
  - model: ReadyArt/Forgotten-Abomination-24B-V2.2
  - model: OddTheGreat/Apparatus_24B
    parameters:
      density: 1.00
      weight: 1.00
  - model: TroyDoesAI/BlackSheep-24B
    parameters:
      density: 1.00
      weight: 1.00
  - model: Darkknight535/WinterEngine-24B-Instruct
    parameters:
      density: 1.00
      weight: 1.00
  - model: allura-org/MS3-24B-Roselily-Creative
    parameters:
      density: 0.70
      weight: 0.50
  - model: Nohobby/MS3-Tantum-24B-v0.1
    parameters:
      density: 0.70
      weight: 0.50
merge_method: ties
base_model: ReadyArt/Forgotten-Abomination-24B-V2.2
parameters:
  normalize: true
dtype: float32
---
models:
  - model: Sorawiz/MS-Creative-24B-Test-A
  - model: Sorawiz/MS-Creative-24B-Test-B
merge_method: model_stock
base_model: TheDrummer/Cydonia-24B-v2.1
parameters:
  filter_wise: false
dtype: float32
```
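Each YAML document above is a separate mergekit pass. The sketch below drives the three stages with the `mergekit-yaml` CLI; the config file name and output directories are placeholders, and the final stage's references to Sorawiz/MS-Creative-24B-Test-A and Sorawiz/MS-Creative-24B-Test-B must resolve, either on the Hub or by editing that last document to point at the local stage outputs.
```python
# Minimal sketch: split the multi-document config and run each stage with mergekit.
import subprocess

import yaml

with open("merge-config.yaml", "r", encoding="utf-8") as fp:  # assumed filename
    stages = list(yaml.safe_load_all(fp))

for i, stage in enumerate(stages):
    name = stage.get("name", "final").split("/")[-1]
    stage_file = f"./stage-{i}-{name}.yaml"
    out_dir = f"./stage-{i}-{name}"
    with open(stage_file, "w", encoding="utf-8") as fp:
        yaml.safe_dump(stage, fp, sort_keys=False)
    # mergekit-yaml <config> <output-dir>; add --cuda if a GPU is available
    subprocess.run(["mergekit-yaml", stage_file, out_dir], check=True)
```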