Ppoyaa committed on commit 52bfddd · verified · 1 parent: 6fb2f04

Update README.md

Files changed (1): README.md (+0 −42)
README.md CHANGED
@@ -1,9 +1,4 @@
  ---
- base_model:
- - ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2
- - Ppoyaa/LuminRP-7B-128k-v0.5
- - mlabonne/AlphaMonarch-7B
- - Nitral-AI/Kunocchini-7b-128k-test
  library_name: transformers
  tags:
  - mergekit
@@ -14,40 +9,3 @@ tags:

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2](https://huggingface.co/ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2)
- * [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
- * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: Ppoyaa/LuminRP-7B-128k-v0.5
-     parameters:
-       density: 0.85
-       weight: 0.4
-   - model: mlabonne/AlphaMonarch-7B
-     parameters:
-       density: 0.85
-       weight: 0.2
-   - model: ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2
-     parameters:
-       density: 0.85
-       weight: 0.4
- merge_method: ties
- base_model: Nitral-AI/Kunocchini-7b-128k-test
- parameters:
-   normalize: false
-   int8_mask: true
- dtype: bfloat16
- ```
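For context on the section removed above: the linked TIES method merges models by trimming each fine-tuned model's task vector to its highest-magnitude entries, electing a per-parameter sign, and combining only the contributions that agree with that sign. Below is a minimal, illustrative PyTorch sketch of those steps on a single weight tensor; `trim` and `ties_merge` are hypothetical names for this sketch, and mergekit's real implementation (typically invoked with its `mergekit-yaml` CLI on the config above) additionally handles full checkpoints, the `int8_mask` option, and dtype casting.

```python
# Illustrative sketch of TIES (https://arxiv.org/abs/2306.01708) on one tensor.
# Function names are hypothetical; this is not mergekit's actual code.
import torch

def trim(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Keep only the top-`density` fraction of entries by magnitude,
    # zeroing the rest (density: 0.85 in the config above).
    k = max(1, int(delta.numel() * density))
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               weights: list[float], density: float = 0.85) -> torch.Tensor:
    # Task vectors: each fine-tuned tensor's difference from the base,
    # trimmed and scaled by its merge weight.
    deltas = torch.stack([trim(ft - base, density) * w
                          for ft, w in zip(finetuned, weights)])
    # Elect a per-parameter sign from the weighted sum of task vectors.
    elected = deltas.sum(dim=0).sign()
    # Disjoint merge: sum only contributions whose sign agrees with the
    # elected sign; with `normalize: false` the sum is not re-scaled.
    agree = deltas.sign() == elected
    return base + (deltas * agree).sum(dim=0)

# Hypothetical usage with one weight tensor per model, mirroring the YAML
# weights (0.4, 0.2, 0.4) and density (0.85):
# merged = ties_merge(base_w, [luminrp_w, alphamonarch_w, nyanade_w],
#                     weights=[0.4, 0.2, 0.4], density=0.85)
```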
 
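The retained front matter still declares `library_name: transformers`, so the merged model would load through the standard transformers API. A minimal loading sketch, assuming the merged weights are on the Hub; the repo id below is a placeholder, not the actual repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Ppoyaa/merged-model"  # placeholder repo id, not the real one
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# torch.bfloat16 matches the `dtype: bfloat16` used in the merge config.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```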