Commit 8776904 (verified) by nbeerbower, parent 15e2a07: Update README.md

Files changed: README.md (+47, −0)
### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 GPUs for 2 epochs.

QLoRA config:
```
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Compute dtype for the 4-bit base weights; assumed bfloat16 here,
# consistent with bf16=True in the training config below.
torch_dtype = torch.bfloat16

# QLoRA config: load the base model in 4-bit NF4 with double quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config: rank-16 adapters on every attention and MLP projection
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj'],
)
```

Training config:
```
from trl import ORPOConfig

orpo_args = ORPOConfig(
    run_name=new_model,  # new_model holds the fine-tuned model's name, defined earlier in the script
    learning_rate=8e-6,
    lr_scheduler_type="linear",
    max_length=4096,
    max_prompt_length=2048,
    max_completion_length=2048,
    beta=0.1,  # weight of the ORPO odds-ratio (preference) term in the loss
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=1,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,  # evaluate every 20% of training
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
    gradient_checkpointing=True,
)
```
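
For context, a minimal sketch of how these two configs plug into `trl`'s `ORPOTrainer`, following the approach in the linked blog post. The `base_model` and dataset identifiers below are placeholders (the actual ones are listed in this card's metadata), and the dataset is assumed to provide `prompt`/`chosen`/`rejected` preference columns:
```
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import prepare_model_for_kbit_training
from trl import ORPOTrainer

# Placeholders: the real base model and preference dataset are the ones
# named in this model card, not specified in this sketch.
base_model = "base-model-id"
new_model = "new-model-name"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,  # 4-bit QLoRA config from above
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Assumed to yield "train"/"test" splits with prompt/chosen/rejected columns
dataset = load_dataset("preference-dataset-id")

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,           # training config from above
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    peft_config=peft_config,  # LoRA adapters from above
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model(new_model)
```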