Quantization made by Richard Erkhov.
tulu-2-dpo-13b-ExPO - GGUF
- Model creator: https://huggingface.co/chujiezheng/
- Original model: https://huggingface.co/chujiezheng/tulu-2-dpo-13b-ExPO/
| Name | Quant method | Size |
|---|---|---|
| tulu-2-dpo-13b-ExPO.Q2_K.gguf | Q2_K | 4.52GB |
| tulu-2-dpo-13b-ExPO.IQ3_XS.gguf | IQ3_XS | 4.99GB |
| tulu-2-dpo-13b-ExPO.IQ3_S.gguf | IQ3_S | 5.27GB |
| tulu-2-dpo-13b-ExPO.Q3_K_S.gguf | Q3_K_S | 5.27GB |
| tulu-2-dpo-13b-ExPO.IQ3_M.gguf | IQ3_M | 5.57GB |
| tulu-2-dpo-13b-ExPO.Q3_K.gguf | Q3_K | 5.9GB |
| tulu-2-dpo-13b-ExPO.Q3_K_M.gguf | Q3_K_M | 5.9GB |
| tulu-2-dpo-13b-ExPO.Q3_K_L.gguf | Q3_K_L | 6.45GB |
| tulu-2-dpo-13b-ExPO.IQ4_XS.gguf | IQ4_XS | 6.54GB |
| tulu-2-dpo-13b-ExPO.Q4_0.gguf | Q4_0 | 6.86GB |
| tulu-2-dpo-13b-ExPO.IQ4_NL.gguf | IQ4_NL | 6.9GB |
| tulu-2-dpo-13b-ExPO.Q4_K_S.gguf | Q4_K_S | 6.91GB |
| tulu-2-dpo-13b-ExPO.Q4_K.gguf | Q4_K | 7.33GB |
| tulu-2-dpo-13b-ExPO.Q4_K_M.gguf | Q4_K_M | 7.33GB |
| tulu-2-dpo-13b-ExPO.Q4_1.gguf | Q4_1 | 7.61GB |
| tulu-2-dpo-13b-ExPO.Q5_0.gguf | Q5_0 | 8.36GB |
| tulu-2-dpo-13b-ExPO.Q5_K_S.gguf | Q5_K_S | 8.36GB |
| tulu-2-dpo-13b-ExPO.Q5_K.gguf | Q5_K | 8.6GB |
| tulu-2-dpo-13b-ExPO.Q5_K_M.gguf | Q5_K_M | 8.6GB |
| tulu-2-dpo-13b-ExPO.Q5_1.gguf | Q5_1 | 9.1GB |
| tulu-2-dpo-13b-ExPO.Q6_K.gguf | Q6_K | 9.95GB |
| tulu-2-dpo-13b-ExPO.Q8_0.gguf | Q8_0 | 12.88GB |
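When choosing a file from the table above, a common rule of thumb is to take the largest quantization that fits your available (V)RAM with some headroom left for the KV cache and runtime overhead. A minimal sketch of that selection logic (the quant names and sizes are copied from the table; the helper name and the 20% headroom factor are assumptions, not part of this repo):

```python
from typing import Optional

# Quant sizes in GB, copied from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 4.52, "IQ3_XS": 4.99, "IQ3_S": 5.27, "Q3_K_S": 5.27,
    "IQ3_M": 5.57, "Q3_K": 5.9, "Q3_K_M": 5.9, "Q3_K_L": 6.45,
    "IQ4_XS": 6.54, "Q4_0": 6.86, "IQ4_NL": 6.9, "Q4_K_S": 6.91,
    "Q4_K": 7.33, "Q4_K_M": 7.33, "Q4_1": 7.61, "Q5_0": 8.36,
    "Q5_K_S": 8.36, "Q5_K": 8.6, "Q5_K_M": 8.6, "Q5_1": 9.1,
    "Q6_K": 9.95, "Q8_0": 12.88,
}

def pick_quant(mem_gb: float, headroom: float = 0.2) -> Optional[str]:
    """Return the largest quant whose file fits in mem_gb minus headroom.

    The 20% default headroom (for KV cache and overhead) is an assumed
    rule of thumb, not a measured requirement.
    """
    budget = mem_gb * (1.0 - headroom)
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    return max(fitting)[1] if fitting else None
```

For example, a 12GB GPU leaves a 9.6GB budget under this rule, so `pick_quant(12)` selects `Q5_1` (9.1GB); a 4GB card fits none of these files and returns `None`.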
Original model description:

license: other
license_name: ai2-impact-license-low-risk
license_link: https://allenai.org/impact-license
language:
  - en
tulu-2-dpo-13b-ExPO
The extrapolated (ExPO) model based on allenai/tulu-2-dpo-13b and allenai/tulu-2-13b, as in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.
Specifically, we obtain this model by extrapolating (alpha = 0.5) from the weights of the SFT and DPO/RLHF checkpoints, achieving stronger alignment with human preferences.
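The extrapolation step described above is a per-parameter linear combination of the two checkpoints: theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft). A minimal sketch (the function name and the flat-list weights are illustrative only; in practice this is applied to every tensor in the SFT and DPO state dicts):

```python
# Illustrative sketch of ExPO weight extrapolation, per the description above:
#   theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)
#              = (1 + alpha) * theta_dpo - alpha * theta_sft
# Flat lists stand in for the per-tensor state dicts of real checkpoints.
def expo_extrapolate(sft_weights, dpo_weights, alpha=0.5):
    return [(1 + alpha) * d - alpha * s
            for s, d in zip(sft_weights, dpo_weights)]
```

With alpha = 0.5, a parameter that moved from 0.0 (SFT) to 1.0 (DPO) is pushed further along the same direction, to 1.5.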
Evaluation Results
Evaluation results on the AlpacaEval 2.0 benchmark (you can find the evaluation outputs on the official GitHub repo):
| | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
|---|---|---|---|---|
| HuggingFaceH4/zephyr-7b-alpha | 6.7% | 10.0% | 10.6% | 13.6% |
| HuggingFaceH4/zephyr-7b-beta | 10.2% | 13.2% | 11.1% | 14.0% |
| berkeley-nest/Starling-LM-7B-alpha | 15.0% | 18.3% | 18.2% | 19.5% |
| Nexusflow/Starling-LM-7B-beta | 26.6% | 25.8% | 29.6% | 26.4% |
| snorkelai/Snorkel-Mistral-PairRM | 24.7% | 24.0% | 28.8% | 26.4% |
| RLHFlow/LLaMA3-iterative-DPO-final | 29.2% | 36.0% | 32.7% | 37.8% |
| internlm/internlm2-chat-1.8b | 3.8% | 4.0% | 5.2% | 4.3% |
| internlm/internlm2-chat-7b | 20.5% | 18.3% | 28.1% | 22.7% |
| internlm/internlm2-chat-20b | 36.1% | 24.9% | 46.2% | 27.2% |
| allenai/tulu-2-dpo-7b | 8.5% | 10.2% | 11.5% | 11.7% |
| allenai/tulu-2-dpo-13b | 11.2% | 15.5% | 15.6% | 17.6% |
| allenai/tulu-2-dpo-70b | 15.4% | 21.2% | 23.0% | 25.7% |
Evaluation results on the MT-Bench benchmark (you can find the evaluation outputs on the official GitHub repo):
| | Original | + ExPO |
|---|---|---|
| HuggingFaceH4/zephyr-7b-alpha | 6.85 | 6.87 |
| HuggingFaceH4/zephyr-7b-beta | 7.02 | 7.06 |
| berkeley-nest/Starling-LM-7B-alpha | 7.82 | 7.91 |
| Nexusflow/Starling-LM-7B-beta | 8.10 | 8.18 |
| snorkelai/Snorkel-Mistral-PairRM | 7.63 | 7.69 |
| RLHFlow/LLaMA3-iterative-DPO-final | 8.08 | 8.45 |
| internlm/internlm2-chat-1.8b | 5.17 | 5.26 |
| internlm/internlm2-chat-7b | 7.72 | 7.80 |
| internlm/internlm2-chat-20b | 8.13 | 8.26 |
| allenai/tulu-2-dpo-7b | 6.35 | 6.38 |
| allenai/tulu-2-dpo-13b | 7.00 | 7.26 |
| allenai/tulu-2-dpo-70b | 7.79 | 8.03 |