qgallouedec (HF staff) committed
Commit 9a5e8f8 · verified · 1 Parent(s): 55cb2ff

End of training

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -22,13 +22,13 @@ from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
 generator = pipeline("text-generation", model="qgallouedec/dpo-qwen2", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=500)[0]
-print(output["generated_text"][1]["content"])
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/ce70egbk)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/90tpt217)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
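The diff above changes how the assistant reply is extracted: with the default `return_full_text=True` and chat-style input, the pipeline's `generated_text` holds the whole conversation (hence the old `[1]["content"]` indexing), while with `return_full_text=False` it is just the completion string. A minimal sketch of the two assumed output shapes, using hypothetical sample text rather than a real model run:

```python
question = "Which would you choose and why?"

# Assumed shape with return_full_text=True (the old snippet): generated_text
# is the full message list, so the assistant reply is message index 1.
full = {"generated_text": [
    {"role": "user", "content": question},
    {"role": "assistant", "content": "I would visit the future."},  # hypothetical reply
]}
reply_old = full["generated_text"][1]["content"]

# Assumed shape with return_full_text=False (the new snippet): generated_text
# is only the completion string, printable directly.
trimmed = {"generated_text": "I would visit the future."}
reply_new = trimmed["generated_text"]

assert reply_old == reply_new
```

Both paths recover the same reply text; the new form simply avoids indexing into the conversation structure.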
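The DPO method referenced in the README optimizes a preference loss over (chosen, rejected) completion pairs. A minimal sketch of the per-example loss from the cited paper, with hypothetical log-probability values and a stable `-log(sigmoid(x))` computation (this illustrates the formula, not the trained model's actual implementation):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy log-ratio - ref log-ratio))."""
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    logits = beta * (pi_logratio - ref_logratio)
    # Numerically stable -log(sigmoid(logits))
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))

# When policy and reference agree, the loss sits at log 2; preferring the
# chosen completion more than the reference does lowers it.
neutral = dpo_loss(-5.0, -5.0, -5.0, -5.0)
improved = dpo_loss(-4.0, -6.0, -5.0, -5.0)
```

The `beta` coefficient controls how strongly the policy is pulled away from the reference model; 0.1 here is an illustrative default, not necessarily the value used for this run.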