NoManDeRY committed
Commit e4c4012 · verified · 1 Parent(s): 70d6248

Update README.md

Files changed (1): README.md (+2 −0)
README.md CHANGED
@@ -16,6 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Llama-3-dpo-5e-7-SFTed-paged_adamw_32bit-1.0
 
+This is a model released from the preprint: [DPO-Shift: Shifting the Distribution of Direct Preference Optimization](https://arxiv.org/abs/2502.07599). Please refer to our [repository](https://github.com/Meaquadddd/DPO-Shift) for more details.
+
 This model is a fine-tuned version of [princeton-nlp/Llama-3-Base-8B-SFT](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT) on the HuggingFaceH4/ultrafeedback_binarized dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.5487
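
For reference, a minimal sketch of loading this checkpoint with the Hugging Face `transformers` library. The repo id `NoManDeRY/Llama-3-dpo-5e-7-SFTed-paged_adamw_32bit-1.0` is an assumption inferred from the committer name and model name, not something stated in the diff; adjust it to the actual repository path.

```python
# Minimal loading sketch; the repo id below is an assumption, not taken from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NoManDeRY/Llama-3-dpo-5e-7-SFTed-paged_adamw_32bit-1.0"  # assumed namespace

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Simple generation check to confirm the checkpoint loads and responds.
prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```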