
This model was released as part of the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.
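
Below is a minimal sketch of loading this checkpoint with the Hugging Face transformers library. It assumes the tokenizer ships a chat template; the prompt and generation settings are illustrative, not values prescribed by the preprint.

```python
# Minimal example of loading the released checkpoint with transformers.
# Generation settings below are illustrative assumptions, not values
# from the SimPO preprint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Mistral-7B-Base-SFT-CPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint weights are stored in BF16
    device_map="auto",
)

# Assumes the tokenizer provides a chat template for formatting the prompt.
messages = [{"role": "user", "content": "What is preference optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```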

Model size: 7.24B params · Tensor type: BF16 · Format: Safetensors
