v000000 committed on
Commit 0e53f4d · verified · 1 Parent(s): a5197d0

Update README.md

Files changed (1)
  1. README.md +64 -8

README.md CHANGED
@@ -11,20 +11,61 @@ tags:
  This model was converted to GGUF format from [`v000000/SwallowMaid-8B-L3-SPPO-abliterated`](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) using llama.cpp
  Refer to the [original model card](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) for more details on the model.

- # SwallowMaid-8B-Llama-3-SPPO-abliterated
+ ---
+ base_model:
+ - x0000001/mergekit-task_arithmetic-vlehhex
+ - grimjim/Llama-3-Instruct-abliteration-LoRA-8B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ <!DOCTYPE html>
+ <style>
+
+ h1 {
+ color: #801ffa; /* Purple base color */
+ font-size: 1.25em; /* Larger font size */
+ text-align: left; /* Left alignment */
+ text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Shadow effect */
+ background: linear-gradient(90deg, #801ffa, #e9a8fb); /* Gradient background */
+ -webkit-background-clip: text; /* Clipping the background to text */
+ -webkit-text-fill-color: transparent; /* Making the text transparent */
+ }

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/q2wphxeSprbFi56E_pzug.png)
+ a {
+ color: #801ffa; /* Purple base color */
+ font-size: 1.25em; /* Larger font size */
+ text-align: left; /* Left alignment */
+ text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Shadow effect */
+ background: linear-gradient(90deg, #801ffa, #e9a8fb); /* Gradient background */
+ -webkit-background-clip: text; /* Clipping the background to text */
+ -webkit-text-fill-color: transparent; /* Making the text transparent */
+ }
+ </style>
+ <html lang="en">
+ <head>
+ </head>
+ <body>
+ <h1>SwallowMaid-8B-Llama-3-SPPO-abliterated</h1>
+ An excellent "Llama-3-Instruct-8B-SPPO-Iter3", fully uncensored, with 35% RP-Mix infused to add roleplay capability and prose while attempting to preserve the qualities of Meta's Llama-3-Instruct finetune.

- # merge
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/0vhS2LvbcQm6dwaFkC_HK.png)
+
+ # <a>Quants</a>
+ * [GGUF Q8_0](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated-Q8_0-GGUF)
+
+ # <h1>merge</h1>

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method
+ # <h1>Merge Details</h1>
+ # <h1>Merge Method</h1>

  This model was merged using a multi-step merge method.

- ### Models Merged
+ # <h1>Models Merged</h1>

  The following models were included in the merge:
  * [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
@@ -34,7 +75,7 @@ The following models were included in the merge:
  * [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1)
  * [Nitral-AI/Hathor_Respawn-L3-8B-v0.8](https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8)

- ### Configuration
+ # <h1>Configuration</h1>

  The following YAML configuration was used to produce this model:

@@ -77,4 +118,19 @@ models:
  weight: 0.15
  merge_method: linear
  dtype: float32
- ```
+ ```
+
+ # <h1>Prompt Template:</h1>
+ ```bash
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+ {output}<|eot_id|>
+
+ ```
+
+ </body>
+ </html>
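
For reference, the linear-merge configuration shown in the diff can be applied with mergekit, linked above. The sketch below assumes mergekit is installed (`pip install mergekit`), that its `mergekit-yaml` CLI entry point is available, and that the YAML from the README has been saved locally as `merge-config.yaml`; the file and output paths are placeholders, not values from the card.

```python
# Hedged sketch: re-running the merge configuration from this commit with mergekit.
# "merge-config.yaml" and "./SwallowMaid-8B-merged" are placeholder paths.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",            # mergekit's CLI entry point for YAML configs
        "merge-config.yaml",        # the linear-merge configuration from the README
        "./SwallowMaid-8B-merged",  # output directory for the merged weights
    ],
    check=True,
)
```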
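The Q8_0 GGUF linked under "Quants" can be fetched and loaded with `huggingface_hub` and `llama-cpp-python`. A minimal sketch follows; the GGUF filename inside the repo is not stated in this README, so the name below is a guess to replace with the actual file listing, and `n_ctx`/`n_gpu_layers` are illustrative defaults rather than values from the card.

```python
# Hedged sketch: downloading and loading the Q8_0 GGUF quant.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="v000000/SwallowMaid-8B-L3-SPPO-abliterated-Q8_0-GGUF",
    filename="swallowmaid-8b-l3-sppo-abliterated-q8_0.gguf",  # placeholder filename
)

# Context size and GPU offload are illustrative, not taken from the model card.
llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)
```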
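The prompt template added in this commit is the standard Llama-3 Instruct format, with `{output}` being the model's generation. A small sketch of applying it with llama-cpp-python, assuming `llm` is the `Llama` instance from the loading sketch above; the system prompt, user message, and sampling settings are illustrative only.

```python
# Minimal sketch of the Llama-3 Instruct prompt template from the card.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful roleplay assistant.", "Introduce yourself.")
output = llm(prompt, max_tokens=256, stop=["<|eot_id|>"], temperature=0.8)
print(output["choices"][0]["text"])
```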