Tags: Transformers · GGUF · Inference Endpoints · conversational


QuantFactory/Mistral-Nemo-Prism-12B-v7-GGUF

This is a quantized version of nbeerbower/Mistral-Nemo-Prism-12B-v7, created using llama.cpp.
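GGUF quantization works block-wise: groups of weights share one scale factor, and each weight is stored as a small integer. The sketch below is a simplified, pure-Python illustration of this idea in the spirit of llama.cpp's Q8_0 format; it is not llama.cpp's actual code, and the block size and rounding details are assumptions.

```python
def quantize_q8_block(values):
    """Quantize one block of floats to int8-range integers with a
    shared scale, loosely mimicking GGUF's Q8_0 block format."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    quants = [round(v / scale) for v in values]
    return scale, quants

def dequantize_q8_block(scale, quants):
    """Recover approximate floats from the shared scale and integers."""
    return [scale * q for q in quants]
```

Lower-bit formats (2- to 6-bit) follow the same principle with fewer quantization levels per block, trading accuracy for size.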

Original Model Card


🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mistral-Nemo-Prism-12B-v7

Mahou-1.5-mistral-nemo-12B-lorablated, fine-tuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

Method

ORPO-tuned on 8x A40 GPUs for 10 epochs.

For this version, beta was increased to 2.
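ORPO combines the usual supervised loss with a beta-weighted odds-ratio term that pushes chosen completions above rejected ones; raising beta to 2 strengthens that preference term. A minimal sketch in plain Python (the probabilities and NLL value are illustrative placeholders, not numbers from this training run):

```python
import math

def odds(p):
    # odds(p) = p / (1 - p), for a sequence probability p in (0, 1)
    return p / (1.0 - p)

def orpo_loss(nll_chosen, p_chosen, p_rejected, beta=2.0):
    """Sketch of the ORPO objective: SFT negative log-likelihood on the
    chosen completion plus beta times an odds-ratio penalty."""
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    # -log(sigmoid(x)) written stably as log(1 + exp(-x))
    l_or = math.log1p(math.exp(-log_odds_ratio))
    return nll_chosen + beta * l_or
```

When the model already prefers the chosen completion (p_chosen > p_rejected), the odds-ratio penalty shrinks; with beta = 2 that penalty counts twice as much as with the common default of smaller beta values.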

In conclusion, LoRA alone does not seem able to fully remove language issues that are deeply embedded in the model.

Downloads last month: 82
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
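A rough way to compare these variants is to estimate file size as parameter count times bits per weight; the small overhead factor below for block scales and metadata is a guess, not a measured value.

```python
def gguf_size_gb(n_params, bits_per_weight, overhead=1.05):
    """Rough GGUF file size in GB: params * bits / 8 bytes,
    plus ~5% assumed overhead for block scales and metadata."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead
```

For this 12.2B-parameter model, a 4-bit quant comes out to roughly 6-7 GB, versus roughly double that at 8-bit.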
