
Qwen2.5-Gutenberg-Doppel-14B

Qwen/Qwen2.5-14B-Instruct finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Method

ORPO-tuned on 4x NVIDIA A40 GPUs for 3 epochs.

Thank you @ParasiticRogue for sponsoring.
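
Below is a minimal sketch of what the training setup described above could look like with TRL's ORPOTrainer. Only "ORPO, 4x A40, 3 epochs" and the two datasets come from this card; the hyperparameters (beta, learning rate, batch sizes) and the column handling are illustrative assumptions, not the author's actual script.

```python
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Qwen/Qwen2.5-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Both DPO-style datasets are assumed to expose prompt/chosen/rejected
# columns, the preference format ORPOTrainer consumes.
ds1 = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")
ds2 = load_dataset("nbeerbower/gutenberg2-dpo", split="train")
cols = ["prompt", "chosen", "rejected"]
train_dataset = concatenate_datasets(
    [ds1.select_columns(cols), ds2.select_columns(cols)]
)

config = ORPOConfig(
    output_dir="Qwen2.5-Gutenberg-Doppel-14B",
    num_train_epochs=3,              # from the card
    beta=0.1,                        # assumed ORPO beta
    learning_rate=5e-6,              # assumed
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=8,   # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # tokenizer= in older trl versions
)
trainer.train()
```

ORPO folds the preference objective into the supervised loss with an odds-ratio penalty, so unlike DPO it needs no separate reference model, which keeps the memory footprint of a 14B run manageable on four A40s.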

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 32.30 |
| IFEval (0-shot) | 80.91 |
| BBH (3-shot) | 48.24 |
| MATH Lvl 5 (4-shot) | 0.00 |
| GPQA (0-shot) | 11.07 |
| MuSR (0-shot) | 10.02 |
| MMLU-PRO (5-shot) | 43.57 |

Model tree for async0x42/Qwen2.5-Gutenberg-Doppel-14B-exl2_5.0bpw

Base model: Qwen/Qwen2.5-14B. This repository is an ExLlamaV2 (EXL2) quantization at 5.0 bits per weight.
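
A minimal sketch of loading this 5.0 bpw EXL2 quant with the exllamav2 library is shown below. The local path and prompt are placeholders, and API details may differ slightly across exllamav2 versions.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Qwen2.5-Gutenberg-Doppel-14B-exl2_5.0bpw"  # local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(
    prompt="Write the opening paragraph of a gothic novel.",
    max_new_tokens=200,
))
```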

Datasets used to train async0x42/Qwen2.5-Gutenberg-Doppel-14B-exl2_5.0bpw: jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
