This model is a general capability upgrade to Mistral-7B, trained on open-source data to improve multilingual ability, overall knowledge, extended communication, and technical skill.

It is recommended primarily as a stronger-than-Mistral-7B baseline for additional finetuning, not for direct production deployment as a chat model. The user accepts full responsibility for all outputs.
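As a minimal sketch of how the checkpoint could be loaded with the transformers library for evaluation or as a starting point for further finetuning (the prompt and generation settings below are illustrative assumptions, not a prescribed chat format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adonlee/Mistral_7B_SFT_DPO_v0"

# Load the tokenizer and the FP16 weights; device_map="auto" places the model on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt; check the tokenizer's chat template for the expected conversational format.
prompt = "Explain the difference between SFT and DPO in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```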

Model size: 7.24B parameters (FP16, safetensors)
