siqi00 and nielsr (HF staff) committed
Commit 66fce59 · verified · 1 parent: 13cf725

Improve model card: Add pipeline tag, link to paper and code (#1)


- Improve model card: Add pipeline tag, link to paper and code (f8fa94f3c4d91e67e9f39f499fbd9bb5782571ee)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -1,12 +1,13 @@
 ---
+base_model: mistralai/Mistral-7B-v0.1
+datasets:
+- siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320
 library_name: transformers
 license: apache-2.0
-base_model: mistralai/Mistral-7B-v0.1
+pipeline_tag: text-generation
 tags:
 - alignment-handbook
 - generated_from_trainer
-datasets:
-- siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320
 model-index:
 - name: mistral-feedbuhcp2-dft-lr2e-6-tau0.3-u_init0-s2-e2-gamma0.90-rf
   results: []
@@ -19,6 +20,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the siqi00/mistral_ultrafeedback_unhelpful_chatprompt_0.7_1.0_50_320 dataset.
 
+This model was trained using Discriminative Fine-tuning (DFT), as described in the paper [Discriminative Finetuning of Generative Large Language Models without Reward Models and Preference Data](https://arxiv.org/abs/2502.18679). The code is available at [PenGuln/DFT](https://github.com/PenGuln/DFT).
+
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -41,4 +45,4 @@ The following hyperparameters were used during training:
 - Transformers 4.45.2
 - Pytorch 2.1.2+cu121
 - Datasets 3.0.1
-- Tokenizers 0.20.1
+- Tokenizers 0.20.1
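
Since the change adds `pipeline_tag: text-generation`, a minimal usage sketch is included below for reference. The repo id is an assumption inferred from the commit author and the model-index name, not something stated in the diff; adjust it to the actual checkpoint location.

```python
# Minimal sketch: loading this checkpoint with the text-generation pipeline
# implied by the new `pipeline_tag`. The repo id is assumed from the commit
# author (siqi00) and the model-index name; it may differ from the real one.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="siqi00/mistral-feedbuhcp2-dft-lr2e-6-tau0.3-u_init0-s2-e2-gamma0.90-rf",
)

output = generator("Explain what discriminative fine-tuning is.", max_new_tokens=64)
print(output[0]["generated_text"])
```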