---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/Infinitely-Laydiculous-9B
library_name: transformers
tags:
- mergekit
- merge
---
![example](https://i.imgur.com/6iMB3Cp.png)
# Description
This is the first merge I have ever attempted. It seems to be working, or at the very least does not appear to be broken :) The main idea behind it is to serve as an experiment and to help me learn how to do merges and quants.
[GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Infinitely-Kunodiculous-9B-GGUF-IQ-Imatrix)
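The quantized builds linked above can be loaded with [llama.cpp](https://github.com/ggerganov/llama.cpp) or its Python bindings. A minimal sketch using `llama-cpp-python` follows; the quant filename is a placeholder, so substitute an actual file from the GGUF repo:
```python
from llama_cpp import Llama

# Placeholder filename: download one of the actual quants from the
# GGUF / IQ / Imatrix repo linked above and point model_path at it.
llm = Llama(
    model_path="Infinitely-Kunodiculous-9B-Q4_K_M-imat.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

out = llm("Write the opening scene of a heist story.\n", max_tokens=256)
print(out["choices"][0]["text"])
```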
# Infinitely-Kunodiculous-9B
This model is intended for role-playing and storywriting purposes.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
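For the full-precision weights, a standard `transformers` setup should work. A minimal sketch, assuming this card's repo id and enough VRAM for a 9B model in float16 (roughly 18 GB):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ABX-AI/Infinitely-Kunodiculous-9B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```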
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which concatenates layer slices from the source models into a single deeper model instead of averaging their weights.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Nitral-AI/Infinitely-Laydiculous-9B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Infinitely-Laydiculous-9B
        layer_range: [0, 20]
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
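To reproduce the merge, save the config as `config.yaml` and run `mergekit-yaml config.yaml ./output-model-directory`. As a quick sanity check (a sketch, assuming the merged weights load under this card's repo id), the two 20-layer slices should stack into a 40-layer model:
```python
from transformers import AutoConfig

# Passthrough stacks layers [0, 20) of Infinitely-Laydiculous-9B on top of
# layers [12, 32) of Kunoichi-DPO-v2-7B, so 20 + 20 = 40 hidden layers (~9B).
config = AutoConfig.from_pretrained("ABX-AI/Infinitely-Kunodiculous-9B")
assert config.num_hidden_layers == 40
```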