This was an experiment. I computed the weight delta between mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated and meta-llama/Llama-3.1-8B-Instruct and applied it to the layers that ICTNLP/Llama-3.1-8B-Omni shares with the base model.

The intention was to see whether the Omni model could inherit the abliterated behavior. The result (this model) is coherent, but it is not fully uncensored; the most likely reason lies in how the Omni model was trained.
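
For reference, here is a minimal sketch of the delta-transplant idea. It makes two assumptions that are not guaranteed by the real checkpoints: that the Omni weights have been consolidated into a single safetensors file whose keys match the Llama key names (the actual checkpoint is sharded and may prefix its keys for the speech modules), and that the file paths shown are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM
from safetensors.torch import load_file, save_file

# Load the instruct and abliterated models and take their state dicts.
# This needs roughly 16 GB of RAM per model in bf16.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
).state_dict()
abliterated = AutoModelForCausalLM.from_pretrained(
    "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated", torch_dtype=torch.bfloat16
).state_dict()

# Assumption: the Omni checkpoint has been merged into one file with
# Llama-compatible key names. The path is a placeholder.
omni_sd = load_file("llama-3.1-8b-omni/model.safetensors")

# Apply the abliteration delta only to tensors the models have in common;
# the Omni model's extra speech components have no counterpart and are
# left untouched.
for key, omni_tensor in omni_sd.items():
    if key in base and base[key].shape == omni_tensor.shape:
        delta = abliterated[key].float() - base[key].float()
        omni_sd[key] = (omni_tensor.float() + delta).to(torch.bfloat16)

save_file(omni_sd, "llama-3.1-8b-omni-abliterated/model.safetensors")
```

The delta is computed in fp32 to avoid bf16 rounding error accumulating across the subtraction and addition, then cast back to bf16 for storage.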
