Tags: Transformers · GGUF · Inference Endpoints · conversational

QuantFactory/mistral-nemo-cc-12B-GGUF

This is a quantized version of nbeerbower/mistral-nemo-cc-12B, created using llama.cpp.
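To run one of the quantized files locally, one option is llama-cpp-python. The sketch below is a minimal example, not part of the original card; the Q4_K_M filename is an assumption, so check the repository's file list for the exact names.

```python
# Minimal sketch: download a GGUF file from the repo and run a chat completion.
# The filename below is assumed -- verify it against the repo's file listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/mistral-nemo-cc-12B-GGUF",
    filename="mistral-nemo-cc-12B.Q4_K_M.gguf",  # assumed quantization file
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hi, how's your day going?"}]
)
print(out["choices"][0]["message"]["content"])
```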

Original Model Card

mistral-nemo-cc-12B

nbeerbower/mistral-nemo-gutenberg-12B-v3 finetuned on flammenai/casual-conversation-DPO.

This is an experimental finetune that formats the conversation data sequentially with ChatML.
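For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. The snippet below is only a rough illustration of that serialization, not the exact preprocessing script used for this finetune.

```python
# Illustrative ChatML serialization of a short conversation.
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model generates the reply.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

print(to_chatml([
    {"role": "system", "content": "You are a friendly conversational partner."},
    {"role": "user", "content": "How was your weekend?"},
]))
```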

Method

Finetuned using an A100 on Google Colab for 3 epochs.

Reference: Fine-tune Llama 3 with ORPO
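In the spirit of the ORPO guide referenced above, a preference-tuning run with TRL might look roughly like the sketch below. This is not the author's training script: the trainer choice, hyperparameters (other than the 3 epochs stated in the card), and the dataset's prompt/chosen/rejected column layout are all assumptions.

```python
# Hedged sketch of ORPO-style preference fine-tuning with TRL (assumptions noted).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/mistral-nemo-gutenberg-12B-v3"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes the dataset provides prompt/chosen/rejected columns.
dataset = load_dataset("flammenai/casual-conversation-DPO", split="train")

config = ORPOConfig(
    output_dir="mistral-nemo-cc-12B",
    num_train_epochs=3,              # stated in the card
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=8,   # assumed
    learning_rate=5e-6,              # assumed
    max_length=2048,                 # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions name this argument `tokenizer`
)
trainer.train()
```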

Downloads last month: 69
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
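Since the repo ships several quantization levels, it can help to list the available GGUF files before picking one. A quick sketch with huggingface_hub (the exact filenames are not given in the card):

```python
# List the GGUF files actually present in the repository.
from huggingface_hub import list_repo_files

for f in list_repo_files("QuantFactory/mistral-nemo-cc-12B-GGUF"):
    if f.endswith(".gguf"):
        print(f)
```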


Model tree for QuantFactory/mistral-nemo-cc-12B-GGUF: quantized from nbeerbower/mistral-nemo-cc-12B

Dataset used to train QuantFactory/mistral-nemo-cc-12B-GGUF: flammenai/casual-conversation-DPO