Commit 732a0ac · Update README.md
Parent(s): d9791fd

README.md CHANGED
@@ -1,5 +1,9 @@
 ---
 license: mit
+datasets:
+- Nebulous/gpt4all_pruned
+- sahil2801/CodeAlpaca-20k
+- yahma/alpaca-cleaned
 ---
 
 This repo contains a low-rank adapter for LLaMA-7b fit on `Nebulous/gpt4all_pruned`, `sahil2801/CodeAlpaca-20k`, `yahma/alpaca-cleaned` and some datasets part of the OpenAssistant project.
@@ -12,5 +16,4 @@ This version of the weights was trained with the following hyperparameters:
 - Max Length: 2048
 - Learning rate: 4e-6
 - Lora _r_: 16
-- Lora target modules: q_proj, k_proj, v_proj, o_proj
-
+- Lora target modules: q_proj, k_proj, v_proj, o_proj
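
For context, the hyperparameters listed in the second hunk map onto a `peft` `LoraConfig` roughly as sketched below. Only `r = 16` and the four target modules come from the README itself; `task_type` is an assumption, and the Max Length (2048) and learning rate (4e-6) belong to the tokenizer and optimizer setup rather than to the LoRA config.

```python
# A minimal sketch, not the authors' actual training script.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,  # "Lora _r_: 16" from the README
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # from the README
    task_type="CAUSAL_LM",  # assumption: a causal-LM adapter for LLaMA-7b
)
# Max Length (2048) would be applied at tokenization time and the
# learning rate (4e-6) in the optimizer; neither is a LoraConfig field.
```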
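
Similarly, a hedged sketch of consuming such an adapter at inference time with `peft`; the base-model id and adapter path below are hypothetical placeholders, since the commit does not name them.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "path/to/llama-7b-hf"     # placeholder: a LLaMA-7b checkpoint
ADAPTER_PATH = "path/to/this-adapter"  # placeholder: this repo's adapter weights

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# PeftModel reads the saved adapter_config.json, so the LoRA settings
# do not need to be restated when loading.
model = PeftModel.from_pretrained(base, ADAPTER_PATH)
model.eval()
```

Because `PeftModel.from_pretrained` restores the adapter configuration from the saved files, the loading side stays identical even if the training hyperparameters above change in a future commit.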