Active filters: dpo
lewtun/tmp-dpo • Text Generation • Updated • 134
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-DPO-D2a-orca • Text Generation • Updated • 129
CharlesLi/OpenELM-1_1B-DPO-full-self-improve • Text Generation • Updated • 134
QinLiuNLP/llama3-sudo-dpo-instruct-5epochs-jxkey
dmariko/SmolLM-360M-Instruct-dpo-16k
dmariko/SmolLM-1.7B-Instruct-dpo-15k • Updated • 15
dmariko/SmolLM-1.7B-Instruct-dpo-16k • Updated • 10
QinLiuNLP/llama3-sudo-dpo-instruct-100epochs-jxkey • Updated • 170
DUAL-GPO/phi-2-dpo-chatml-lora-40k-60k-v2-i2
vincentlinzhu/dspv1_dpo_dspfmt_medium
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-DPO-D2a-distilabel-math-preference • Text Generation • Updated • 128
vincentlinzhu/dspv1_dpo_llemmafmt_medium
DUAL-GPO/phi-2-dpo-chatml-lora-0k-20k-i2 • Updated
LBK95/Llama-2-7b-hf-DPO-LookAhead3_FullEval_TTree1.4_TLoop0.7_TEval0.2_Filter0.2_V1.0
Huertas97/smollm-gec-sftt-dpo • Text Generation • Updated • 131
SameedHussain/gemma-2-2b-it-Flight-Multi-Turn-V2-DPO • Text Generation • Updated • 131
Siddartha10/outputs_dpo • Text Generation • Updated • 131
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-DPO-D2a-HuggingFaceH4-ultrafeedback_binarized-Xlarge • Text Generation • Updated • 7
CharlesLi/OpenELM-1_1B-DPO-full-llama-improve-openelm • Text Generation • Updated • 133
maxmyn/c4ai-takehome-model-dpo • Text Generation • Updated • 188
CharlesLi/OpenELM-1_1B-DPO-full-max-4-reward • Text Generation • Updated • 4
CharlesLi/OpenELM-1_1B-DPO-full-max-12-reward • Text Generation • Updated • 103
DUAL-GPO/phi-2-ipo-chatml-lora-i1
DUAL-GPO/phi-2-ipo-chatml-lora-10k-30k-i1 • Updated
DUAL-GPO/phi-2-ipo-chatml-lora-20k-40k-i1
DUAL-GPO/phi-2-ipo-chatml-lora-30k-50k-i1 • Updated
rasyosef/phi-2-apo
LBK95/Llama-2-7b-hf-DPO-LookAhead3_FullEval_TTree1.4_TLoop0.7_TEval0.2_Filter0.2_V2.0
coscotuff/SLFT_Trials_2 • Text Generation • Updated • 89
preethu19/tiny-chatbot-dpo • Updated