Envoid committed · Commit 1dce4e0 · verified · 1 Parent(s): 7783172

Update README.md

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -1,3 +1,20 @@
  ---
  license: cc-by-nc-4.0
  ---
+ # Caution: this model may be unpredictable
+ ![](https://files.catbox.moe/y8nv86.jpg)
+ ## Mixtral-Instruct-ITR (Interpolative Training Regression)
+
+ We have to go back, edition.
+
+ For this model I took what I learned in the making of Cat-8x7B, went back to the very beginning, and SLERP merged [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) onto [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) (a sketch of this kind of mergekit SLERP config is shown after the diff).
+
+ While the results aren't perfect, the model feels more creative and less overcooked than Mixtral Instruct is often accused of being.
+
+ The hope is that this has also left the model much more receptive to additional finetuning. I am interested to see what comes of it, so please feel free to download it and have fun.
+
+ Apologies for the small shard size (I keep forgetting to change the mergekit config back).
+
+ ## The model is a lot less likely to refuse certain requests in this state:
+
+ so if you are going to apply additional finetuning to the model, you may need to bolster its alignment depending on your use case.
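
For reference, a merge like the one described in the README above is typically driven by a mergekit SLERP config along the lines of the sketch below. The actual recipe for Mixtral-Instruct-ITR is not included in this commit, so the layer_range, t value, and dtype here are illustrative assumptions rather than the author's settings.

```yaml
# Illustrative mergekit SLERP recipe (assumed values, not the published config)
slices:
  - sources:
      - model: mistralai/Mixtral-8x7B-v0.1            # base model
        layer_range: [0, 32]
      - model: mistralai/Mixtral-8x7B-Instruct-v0.1   # instruct-tuned model
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mixtral-8x7B-v0.1
parameters:
  t:
    - value: 0.5   # interpolation factor: 0.0 = pure base, 1.0 = pure Instruct
dtype: bfloat16
```

A recipe like this is run with mergekit's `mergekit-yaml` command. The shard size the author apologizes for is an output setting applied at merge time (to my understanding `mergekit-yaml` has an `--out-shard-size` option for it, but check the flag against the mergekit version in use) rather than part of the recipe itself.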