Update README.md
README.md CHANGED
@@ -14,12 +14,12 @@ Finetune base model [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-
 ## Model Details
 
 Here are the training parameters:
 
-- base_model 'TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T'
-- yahma/alpaca-cleaned
-- lora_r 16
-- lora_alpha 16
-- lora_dropout 0.05
-- lora_target_modules '[q_proj, k_proj, v_proj, o_proj]'
+- base_model = 'TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T'
+- dataset = yahma/alpaca-cleaned
+- lora_r = 16
+- lora_alpha = 16
+- lora_dropout = 0.05
+- lora_target_modules = '[q_proj, k_proj, v_proj, o_proj]'
 
 
 Training took 6-7 hours on a single A5000 GPU (several issues arose when trying to use multiple GPUs).
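
The commit records only the hyperparameters, not the training script itself. As a rough illustration, here is a minimal sketch of how these values could map onto a LoRA setup with the Hugging Face `transformers`/`peft`/`datasets` stack; the actual code used to train this model may differ.

```python
# Minimal sketch (assumed stack: transformers + peft + datasets) wiring the
# README's hyperparameters into a LoRA config. Not the exact script from this repo.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# LoRA settings taken from the parameter list above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable

# Instruction-tuning dataset named in the README.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
```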