siqi00 and nielsr (HF staff) committed

Commit 8ae247a · verified · 1 parent: 24a9403

Add pipeline tag, link to code and paper (#1)


- Add pipeline tag, link to code and paper (9ed7049e49cccf9a11e9c2c2f9e45e3c0210164b)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
1. README.md (+43 -7)
README.md CHANGED
@@ -1,23 +1,23 @@
 ---
+base_model: mistralai/Mistral-7B-v0.1
+datasets:
+- siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320
 library_name: transformers
 license: apache-2.0
-base_model: mistralai/Mistral-7B-v0.1
 tags:
 - alignment-handbook
 - generated_from_trainer
-datasets:
-- siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320
+pipeline_tag: text-generation
 model-index:
 - name: mistral-feedbuhcp2-dft-lr2e-6-tau1.0-u_init0-s2-e2-gamma0.85
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # Mistral-7B-DFT
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320 dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320 dataset. It was fine-tuned as part of the paper [Discriminative Finetuning of Generative Large Language Models without Reward Models and Preference Data](https://arxiv.org/abs/2502.18679).
+
+The code is available at https://github.com/PenGuln/DFT.
 
 ### Training hyperparameters
 
@@ -42,3 +42,39 @@ The following hyperparameters were used during training:
 - Pytorch 2.1.2+cu121
 - Datasets 3.0.1
 - Tokenizers 0.20.1
+
+### Usage Example
+
+The model can be used for text generation tasks. A basic example using the `transformers` library is shown below:
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
+import torch
+
+model_id = "siqi00/Mistral-7B-DFT"
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(
+    model_id, torch_dtype=torch.bfloat16, device_map="auto"
+)
+
+prompt = "What is the capital of France?"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+# do_sample=True is required for temperature to take effect;
+# passing **inputs forwards the attention mask along with the input ids.
+generation_config = GenerationConfig(max_new_tokens=20, do_sample=True, temperature=0.7)
+outputs = model.generate(**inputs, generation_config=generation_config)
+generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(generated_text)
+```
+
+Remember to install the necessary libraries (`pip install transformers torch accelerate`; `accelerate` is required for `device_map="auto"`) and adjust parameters such as `temperature` and `max_new_tokens` to control generation.
+
+## Citation
+
+```bibtex
+@misc{guo2025discriminativefinetuninggenerativelarge,
+  title={Discriminative Finetuning of Generative Large Language Models without Reward Models and Preference Data},
+  author={Siqi Guo and Ilgee Hong and Vicente Balmaseda and Tuo Zhao and Tianbao Yang},
+  year={2025},
+  eprint={2502.18679},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2502.18679},
+}
+```
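
A possible follow-up to the card's usage example: the card carries the `alignment-handbook` tag and the dataset name mentions chat prompts, so querying the model through the tokenizer's chat template may match the training format better than raw completion. A minimal sketch, assuming the uploaded tokenizer ships a chat template (not verified here); the prompt text is only illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "siqi00/Mistral-7B-DFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumption: the checkpoint ships a chat template. If it does not,
# fall back to the plain-prompt example from the model card above.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```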