---
base_model:
- princeton-nlp/gemma-2-9b-it-SimPO
- TheDrummer/Gemmasutra-9B-v1
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- sillytavern
- gemma2
language:
- en
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Ellaria-9B-GGUF
This is a quantized version of [tannedbum/Ellaria-9B](https://huggingface.co/tannedbum/Ellaria-9B), created using llama.cpp.
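
A minimal sketch of loading one of these quants with the llama-cpp-python bindings; the filename pattern is an assumption, so pick an actual file from this repo's file list:

```python
# Sketch: load a GGUF quant from this repo with llama-cpp-python
# (requires huggingface_hub). The filename pattern is an assumption;
# check the repo's file list for the quant you want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Ellaria-9B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=8192,               # Gemma 2 supports up to 8k context
)
```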

# Original Model Card

Same reliable approach as before. A good RP model and a suitable dose of SimPO are a match made in heaven.

## SillyTavern

### Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
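
Outside SillyTavern, most of these samplers map directly onto llama-cpp-python's completion parameters. A hedged sketch, reusing the `llm` object from above; `smooth_factor`/`smooth_curve` are SillyTavern's smooth sampling and have no direct equivalent in this API, so they are omitted:

```python
# Rough llama-cpp-python equivalent of the preset above.
# smooth_factor / smooth_curve (smooth sampling) are left out:
# this API has no direct counterpart for them.
out = llm.create_completion(
    prompt="<start_of_turn>user\nHi there!<end_of_turn>\n<start_of_turn>model\n",
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,
    max_tokens=256,
)
print(out["choices"][0]["text"])
```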

### Advanced Formatting

Important: use the Context & Instruct presets for Gemma, available [here](https://huggingface.co/tannedbum/ST-Presets/tree/main).

Instruct Mode: Enabled

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
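
For intuition, a minimal NumPy sketch of what SLERP does to a pair of weight tensors; mergekit's real implementation additionally handles per-parameter filters, gradients over layers, and numerical edge cases:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns a (the base), t=1 returns b."""
    a_flat = a.ravel().astype(np.float64)
    b_flat = b.ravel().astype(np.float64)
    # Angle between the two weight vectors
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    # Walk along the arc between the two directions
    mixed = (np.sin((1.0 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)
```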

### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [TheDrummer/Gemmasutra-9B-v1](https://huggingface.co/TheDrummer/Gemmasutra-9B-v1)

### Configuration

The following YAML configuration was used to produce this model. Roughly, `t` controls how far each weight is interpolated from the base model toward gemma-2-9b-it-SimPO: the lists define per-layer-block gradients for the attention and MLP weights, and 0.4 is the default for everything else:
```yaml
slices:
  - sources:
      - model: TheDrummer/Gemmasutra-9B-v1
        layer_range: [0, 42]
      - model: princeton-nlp/gemma-2-9b-it-SimPO
        layer_range: [0, 42]
merge_method: slerp
base_model: TheDrummer/Gemmasutra-9B-v1
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
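
To reproduce a merge like this, the YAML can be saved to a file and run through mergekit. A sketch using mergekit's Python API; the config and output paths are placeholders:

```python
# Sketch: run a mergekit config like the one above via its Python API.
# "ellaria-slerp.yaml" and "./Ellaria-9B" are placeholder paths.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("ellaria-slerp.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Ellaria-9B",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base tokenizer into the output
    ),
)
```

mergekit also ships a `mergekit-yaml` CLI that does the same thing in one command.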

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum