---
base_model: byroneverson/gemma-2-27b-it-abliterated
language:
- en
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- gemma
- gemma-2
- chat
- it
- abliterated
- llama-cpp
- gguf-my-repo
- gguf
- quant
- quantized
---
[QuantFactory](https://hf.co/QuantFactory)

# QuantFactory/gemma-2-27b-it-abliterated-GGUF

This is a quantized version of [byroneverson/gemma-2-27b-it-abliterated](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) created using llama.cpp.

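A minimal usage sketch with llama-cpp-python, assuming one of the GGUF files from this repo has been downloaded locally; the quant filename and the parameter values below are placeholders, so substitute the quantization level you actually fetched.

```python
# Hypothetical usage sketch: loading a GGUF from this repo with llama-cpp-python.
# The filename is illustrative; use whichever quant level you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-27b-it-abliterated.Q4_K_M.gguf",
    n_ctx=4096,      # context window; raise it if you have the RAM
    n_gpu_layers=0,  # CPU-only; set -1 to offload all layers on a GPU build
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what abliteration means for an LLM."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```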
# Original Model Card

# gemma-2-27b-it-abliterated

## Now accepting abliteration requests. If you would like to see a model abliterated, follow me and leave me a message with a model link.

This is a new approach for abliterating models using the CPU only. I was able to abliterate this model with free Kaggle processing and no accelerator.
1. Obtain the refusal direction vector using a quantized model with llama.cpp (llama-cpp-python and ggml-python).
2. Orthogonalize each .safetensors file directly from the original repo and upload it to a new repo, one file at a time (see the sketch after this list).

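To make step 2 concrete, here is a minimal sketch (not the author's exact notebook code) of projecting the refusal direction out of the matrices that write into the residual stream, one .safetensors shard at a time. It assumes the direction from step 1 was saved as `refusal_dir.pt`, that the shard uses the standard Hugging Face Gemma 2 tensor names, and that the shard filename is a placeholder; the notebook linked below remains the authoritative reference for exactly which tensors get modified.

```python
# A minimal sketch under the assumptions stated above; file names are illustrative.
import torch
from safetensors.torch import load_file, save_file

refusal = torch.load("refusal_dir.pt").to(torch.float32)  # shape: (hidden_size,)
refusal = refusal / refusal.norm()                         # unit-length refusal direction

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the refusal-direction component from a matrix whose output
    rows live in the residual stream: W' = (I - r r^T) W."""
    w = weight.to(torch.float32)
    w = w - torch.outer(direction, direction @ w)  # rank-1 update instead of forming I - r r^T
    return w.to(weight.dtype)

shard = "model-00001-of-00024.safetensors"  # placeholder: repeat for each shard, one at a time
tensors = load_file(shard)
for name in tensors:
    # Only matrices that write back into the residual stream are touched here.
    if name.endswith(("self_attn.o_proj.weight", "mlp.down_proj.weight")):
        tensors[name] = orthogonalize(tensors[name], refusal)
save_file(tensors, shard, metadata={"format": "pt"})
```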
Check out the <a href="https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated/blob/main/abliterate-gemma-2-27b-it.ipynb">Jupyter notebook</a> for details of how this model was abliterated from gemma-2-27b-it.

 |
|
|
|
|