---
license: cc-by-nc-4.0
---
|
# Caution: this model may be unpredictable
|
 |
|
## Mixtral-Instruct-ITR (Interpolative Training Regression) |
|
|
|
"We have to go back" edition.
|
|
|
For this model I took what I learned in the making of Cat-8x7B and went back to the very beginning, SLERP-merging [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) onto [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
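For anyone curious about reproducing or tweaking the merge, here is a minimal sketch of what a mergekit SLERP config for it could look like. The layer range matches Mixtral's 32 transformer layers, but the interpolation weight `t` and the dtype below are illustrative assumptions, not the exact values used for this model:

```yaml
# Hypothetical mergekit SLERP config sketch - not the exact config used for this model.
slices:
  - sources:
      - model: mistralai/Mixtral-8x7B-v0.1
        layer_range: [0, 32]
      - model: mistralai/Mixtral-8x7B-Instruct-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mixtral-8x7B-v0.1
parameters:
  t: 0.5  # assumed midpoint: 0 = pure base model, 1 = pure Instruct
dtype: bfloat16  # assumption; match whatever precision you merge in
```

mergekit also accepts per-filter `t` values (e.g. different weights for `self_attn` and `mlp`) if a flat interpolation turns out to be too blunt for your purposes.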
|
|
|
While the results aren't perfect, the model feels more creative and less overcooked than Mixtral Instruct is often accused of being.
|
|
|
The hope is that this also leaves the model much more receptive to additional finetuning. I am interested to see what comes of it, so please feel free to download it and have fun.
|
|
|
Apologies for the small shard size (I keep forgetting to change the mergekit config back).
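(For what it's worth, shard size is set at merge time rather than baked into the weights; mergekit's CLI takes an output shard size option, something like `mergekit-yaml config.yml ./output --out-shard-size 5B`. The exact flag name is from memory, so check `mergekit-yaml --help` for your version.)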
|
|
|
## The model is a lot less likely to refuse certain requests in this state
|
|
|
If you are going to apply additional finetuning to the model, you may need to bolster its alignment depending on your use case.