QuantFactory/mistral-nemo-cc-12B-GGUF
This is a quantized version of nbeerbower/mistral-nemo-cc-12B, created using llama.cpp.
Original Model Card
mistral-nemo-cc-12B
nbeerbower/mistral-nemo-gutenberg-12B-v3 finetuned on flammenai/casual-conversation-DPO.
This is an experimental finetune that formats the conversation data sequentially with ChatML.
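The exact preprocessing used for this finetune is not published; as an illustrative sketch, the standard ChatML template renders each turn between `<|im_start|>` and `<|im_end|>` markers, concatenated sequentially:

```python
def to_chatml(turns):
    """Render a list of {role, content} dicts as a ChatML string.

    Hypothetical helper for illustration only -- not the author's
    actual preprocessing code.
    """
    parts = []
    for turn in turns:
        parts.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>")
    return "\n".join(parts) + "\n"


conversation = [
    {"role": "user", "content": "Hey, how was your weekend?"},
    {"role": "assistant", "content": "Pretty relaxing, thanks for asking!"},
]
print(to_chatml(conversation))
```

A DPO dataset like flammenai/casual-conversation-DPO would contain chosen and rejected responses; formatting each side with a template like this yields the sequential conversation strings the description refers to.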
Method
Finetuned using an A100 on Google Colab for 3 epochs.
Model tree for QuantFactory/mistral-nemo-cc-12B-GGUF
- Base model: intervitens/mini-magnum-12b-v1.1
- Finetuned from: nbeerbower/mistral-nemo-gutenberg-12B-v3