DavidAU committed (verified)
Commit 53f37f1 · Parent(s): 669fb68

Update README.md

Files changed (1): README.md +3 -3

README.md CHANGED
@@ -32,14 +32,14 @@ pipeline_tag: text-generation
 
 (quants uploading, examples to be added)
 
-<H2>Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-15B-gguf</H2>
+<H2>Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-gguf</H2>
 
 <img src="qwen-tiny.jpg" style="float:right; width:300px; height:300px; padding:5px;">
 
 This is a Qwen2.5 MOE (Mixture of Experts) model comprised of TWO Qwen 2.5 Deepseek (Censored/Normal AND Uncensored) 7B models
-creating a 15B model with the "Abliterated" (Uncensored) version of Deepseek Qwen 2.5 7B "in charge" so to speak.
+creating a 19B model with the "Abliterated" (Uncensored) version of Deepseek Qwen 2.5 7B "in charge" so to speak.
 
-The model is just over 15B because of the unique "shared expert" (roughly 2.5 models here) used in Qwen MOEs.
+The model is just over 19B because of the unique "shared expert" (roughly 2.5 models here) used in Qwen MOEs.
 
 The oddball configuration yields interesting "thinking/reasoning" which is stronger than either 7B model on its own.
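
The README notes that quants are still uploading and that usage examples are to be added. As a placeholder, here is a minimal sketch of how a GGUF quant of this model could be run with llama-cpp-python; the quant filename, context size, and sampler settings are assumptions, not something specified in this commit.

```python
# Minimal sketch (not from the commit): loading a GGUF quant of
# Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B with llama-cpp-python.
# The filename below is hypothetical -- use whichever quant file the repo
# actually provides once the uploads finish.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-Q4_K_M.gguf",  # assumed name
    n_ctx=4096,        # context window; raise or lower to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for CPU-only
    verbose=False,
)

# Qwen2.5 GGUFs typically carry a ChatML chat template in their metadata,
# which llama-cpp-python applies automatically for chat completions.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain why the sky is blue, step by step."}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```

Any other GGUF-capable runner (llama.cpp, LM Studio, and the like) should work the same way once the actual quant files are published.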