SicariusSicariiStuff committed on
Commit c77170f · verified · 1 Parent(s): 3f95738

Update README.md

Files changed (1)
  1. README.md +14 -8
README.md CHANGED
@@ -147,6 +147,19 @@ Sicarius
 - (Can still be decent for merges, fairly uncensored): [LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
 - Roleplay merge example: [LLAMA-3_8B_Unaligned_Alpha_RP_Soup](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
 
+---
+
+## TL;DR
+
+This model is based on several different models, as well as an abliterated model, which after days of fine-tuning at different LoRA R values are probably no longer even recognizable. That intermediate checkpoint is published as <b>SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha</b>, while this model is now fully fine-tuned instead of just a very deep LoRA.
+The full fine-tuning is performed on the full LLAMA-3 8K context, used not only for stacking several different prompts into a total length of 8K but also for single prompts that span the whole context. The training data contains a lot of highly cleaned, highest-quality story writing, and some RP.
+
+Of course, a massive and deep uncensoring protocol is used, along with giving the model some sass and personality! A lot of effort was poured into this work to ensure the model is not compromised by the deep uncensoring protocol. The goal is to create a model that is highly creative, serving as a writing assistant and co-editor with some roleplay abilities, while still being fairly intelligent, as much as an 8B model can be.
+
+The most important aspect of this work is making it fresh, trained on datasets that have never been used in any other model, giving it a truly unique vibe.
+
+---
+
 # Model instruction template: (Can use either ChatML or Llama-3)
 # ChatML
 ```
@@ -251,16 +264,9 @@ You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
 
 </details>
 
-# Model Details
-
-<details>
-<summary>This was based on several different models, as well as an abliterated model, which after days of fine-tuning at different LoRA R values are probably no longer even recognizable. That intermediate checkpoint is published as <b>SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha</b>, while this model is now fully fine-tuned instead of just a very deep LoRA.</summary>
-The full fine-tuning is performed on the full LLAMA-3 8K context, used not only for stacking several different prompts into a total length of 8K but also for single prompts that span the whole context. The training data contains a lot of highly cleaned, highest-quality story writing, and some RP.
+---
 
-Of course, a massive and deep uncensoring protocol is used, along with giving the model some sass and personality! A lot of effort was poured into this work to ensure the model is not compromised by the deep uncensoring protocol. The goal is to create a model that is highly creative, serving as a writing assistant and co-editor with some roleplay abilities, while still being fairly intelligent, as much as an 8B model can be.
 
-The most important aspect of this work is making it fresh, trained on datasets that have never been used in any other model, giving it a truly unique vibe.
-</details>
 
 ## Available quantizations:
 
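The diff truncates the README's actual template block (the code fence at the end of the first hunk opens it). For reference, a standard ChatML layout built around the system line quoted in the second hunk header would look like the sketch below; the user and assistant turns are illustrative placeholders, not the card's verbatim template:

```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

The Llama-3 alternative mentioned in the heading uses the instruct format's <|start_header_id|>role<|end_header_id|> and <|eot_id|> markers instead of the <|im_start|>/<|im_end|> pair.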
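The TL;DR contrasts "a very deep LoRA" at different R values with the full fine-tune this model received. For readers unfamiliar with the knob being described, here is a minimal sketch of how the LoRA rank is set with Hugging Face PEFT; the ranks, alpha heuristic, and target modules are illustrative assumptions, not the settings actually used for this model:

```python
# Minimal sketch (Hugging Face PEFT). All values below are illustrative
# assumptions, not the configuration used to train this model.
from peft import LoraConfig

# A higher rank r means larger low-rank update matrices, i.e. more trainable
# parameters and a "deeper" adaptation; full fine-tuning removes this
# bottleneck entirely by updating every weight.
for rank in (8, 64, 256):
    config = LoraConfig(
        r=rank,                               # LoRA rank (the "R value")
        lora_alpha=2 * rank,                  # common heuristic: scale alpha with r
        target_modules=["q_proj", "v_proj"],  # typical Llama attention projections
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    print(config.r, config.lora_alpha)
```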