icefog72 committed
Commit caa1b26 · verified · 1 Parent(s): 49666ac

Update README.md

Files changed (1)
  1. README.md +50 -48
README.md CHANGED
@@ -1,48 +1,50 @@
 ---
-base_model: []
+license: cc-by-nc-4.0
 library_name: transformers
 tags:
-- mergekit
-- merge
-
+- alpaca
+- mistral
+- not-for-all-audiences
+- nsfw
+- exl2
 ---
-# Ice0.70-25.01-RP
+# IceNalyvkaRP-7b-4.2bpw-exl2 (Ice0.70-25.01-RP)
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the SLERP merge method.
 
 ### Models Merged
 
 The following models were included in the merge:
-* H:\FModels\Ice0.69-25.01-RP
-* H:\FModels\Ice0.68-25.01-RP
+* Ice0.69-25.01-RP
+* Ice0.68-25.01-RP
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 slices:
 - sources:
-  - model: H:\FModels\Ice0.69-25.01-RP
+  - model: Ice0.69-25.01-RP
     layer_range: [0, 32]
-  - model: H:\FModels\Ice0.68-25.01-RP
+  - model: Ice0.68-25.01-RP
     layer_range: [0, 32]
 
 merge_method: slerp
-base_model: H:\FModels\Ice0.68-25.01-RP
+base_model: Ice0.68-25.01-RP
 parameters:
   t:
     - filter: self_attn
       value: [0, 0.5, 0.3, 0.7, 1]
     - filter: mlp
       value: [1, 0.5, 0.7, 0.3, 0]
     - value: 0.5 # fallback for rest of tensors
 dtype: bfloat16
 
 
 ```
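
For readers unfamiliar with the SLERP method named in the config, the sketch below shows the core idea: each pair of weight tensors is interpolated along the great-circle arc between them rather than along a straight line, and the interpolation factor `t` varies by layer according to the gradient lists (e.g. `[0, 0.5, 0.3, 0.7, 1]` for `self_attn`). This is an illustrative NumPy sketch, not mergekit's actual implementation; the function names `slerp` and `t_for_layer` are mine, and the assumption that mergekit expands the `t` list by piecewise-linear interpolation across the layer range should be checked against the mergekit docs.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Compute the angle between the tensors using unit-norm copies.
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly colinear tensors: plain linear interpolation is numerically safer.
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    # Weight each endpoint so the result sweeps along the arc between them.
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

def t_for_layer(layer: int, n_layers: int, anchors: list) -> float:
    """Map a gradient list like [0, 0.5, 0.3, 0.7, 1] to a per-layer t value
    by piecewise-linear interpolation (an assumption about how mergekit
    expands the `t` list; check the mergekit docs for the exact rule)."""
    xs = np.linspace(0.0, 1.0, num=len(anchors))
    return float(np.interp(layer / (n_layers - 1), xs, anchors))
```

Under this reading of the config, layer 0's attention weights come entirely from the `base_model` (`t = 0`), layer 31's entirely from the other model (`t = 1`), and the MLP gradient runs in the opposite direction, so the merge blends the two parents differently across depth and tensor type.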