---
datasets:
- vicgalle/alpaca-gpt4
library_name: peft
license: apache-2.0
tags:
- gptj-6b
- code
- instruct
- instruct-alpaca
- code-alpaca
- alpaca-instruct
- alpaca
- gpt4
---

We finetuned GPT-J-6B on the Code-Alpaca-Instruct dataset (vicgalle/alpaca-gpt4) for 10 epochs, or ~50,000 steps, using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

The dataset is the unfiltered version of vicgalle/alpaca-gpt4.

The finetuning completed in 7 hours and cost us only `$25` for the entire run!
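
#### Usage

A minimal sketch of loading the resulting LoRA adapter for inference with `peft` and `transformers`. The adapter repo id below is a placeholder (substitute this model's actual path); the base model resolves to EleutherAI/gpt-j-6b, and the prompt follows the standard Alpaca instruct format.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id -- point this at the actual adapter repository.
adapter_id = "your-username/gptj-6b-code-alpaca"

# AutoPeftModelForCausalLM reads the adapter config, downloads the
# GPT-J-6B base model, and attaches the LoRA weights in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")

# Alpaca-style prompt, matching the instruct format of the training data.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```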

#### Hyperparameters & Run details:

- Model Path: EleutherAI/gpt-j-6b
- Dataset: vicgalle/alpaca-gpt4
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training 90% / Validation 10%
- Gradient accumulation steps: 1
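
For reference, a roughly equivalent local run with `peft` and `transformers` might look like the sketch below. Only the learning rate, epoch count, data split, and gradient accumulation come from the run details above; the LoRA settings (`r`, `lora_alpha`, target modules), batch size, and sequence length are illustrative assumptions, since the no-code finetuner does not expose them here.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "EleutherAI/gpt-j-6b"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token

# Assumed LoRA settings -- the actual adapter config is not published here.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# 90/10 train/validation split, matching the run details above.
splits = load_dataset("vicgalle/alpaca-gpt4",
                      split="train").train_test_split(test_size=0.1)

def tokenize(row):
    # Standard Alpaca columns: instruction / input / output.
    text = (f"### Instruction:\n{row['instruction']}\n\n"
            f"### Input:\n{row['input']}\n\n"
            f"### Response:\n{row['output']}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = splits.map(tokenize, remove_columns=splits["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gptj-6b-alpaca-gpt4-lora",  # placeholder output path
        learning_rate=3e-4,                     # 0.0003, as above
        num_train_epochs=5,
        gradient_accumulation_steps=1,
        per_device_train_batch_size=1,          # assumed; not reported
        fp16=True,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    # mlm=False makes the collator copy input_ids into labels for causal LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```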