Update README.md
README.md (CHANGED)
@@ -18,18 +18,20 @@ Who needs em, we all have em, they're just like us. Unusable models, compute opt
The B, C, and D classes are derived from LLaMA's tokens-per-parameter ratios: LLaMA 65B is nearly Chinchilla-optimal, trained at roughly 21 tokens per parameter, and descending down the LLaMA model sizes for the same training sets gives the higher ratios that define these classes.

+A-class models are in bold for (my) easier readability.
+
| Model Name | Parameters | Class | Ratio | Tokens | Batch Size (Tokens) | Training Loss |
| --- | --- | --- | --- | --- | --- | --- |
-| GerbilLab/Gerbil-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
+| **GerbilLab/Gerbil-A-3.3m** | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
| GerbilLab/Gerbil-B-3.3m | 3.3m | B-Class | 42 | 126M | 65.5k | 6.0822 |
| GerbilLab/Gerbil-C-3.3m | 3.3m | C-Class | 76 | 228M | 65.5k | 5.7934 |
| GerbilLab/Gerbil-D-3.3m | 3.3m | D-Class | 142 | 426M | 65.5k | coming soon |
-| GerbilLab/Gerbil-A-6.7m | 6.7m | A-Class | 20 | 134M | 131k | 6.0741 |
+| **GerbilLab/Gerbil-A-6.7m** | 6.7m | A-Class | 20 | 134M | 131k | 6.0741 |
| GerbilLab/Gerbil-B-6.7m | 6.7m | B-Class | 42 | 281M | 131k | 5.5132 |
| GerbilLab/Gerbil-C-6.7m | 6.7m | C-Class | 76 | 509M | 131k | 5.1098 |
| GerbilLab/Gerbil-D-6.7m | 6.7m | D-Class | 142 | 951M | 131k | 4.8186 |
-| GerbilLab/Gerbil-A-15m | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
-| GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262k | 4.0487 |
+| **GerbilLab/Gerbil-A-15m** | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
+| **GerbilLab/Gerbil-A-32m** | 32m | A-Class | 20 | 640M | 262k | 4.0487 |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/GerbilBlender-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.622 |
| GerbilLab/GerbilBlender-A-6.7m | 6.7m | A-Class | 20 | 134M | 131k | coming soon |
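For reference, a minimal sketch of the arithmetic behind the Ratio and Tokens columns: each class's token budget is roughly the class ratio multiplied by the parameter count. The `CLASS_RATIOS` dict and `token_budget` helper are illustrative names, not code from this repo.

```python
# Illustrative only: approximate the Tokens column as Ratio x Parameters.
# Class ratios are taken from the table above (A=20, B=42, C=76, D=142).
CLASS_RATIOS = {"A": 20, "B": 42, "C": 76, "D": 142}

def token_budget(params_millions: float, model_class: str) -> float:
    """Approximate training-token budget, in millions of tokens."""
    return CLASS_RATIOS[model_class] * params_millions

for params in (3.3, 6.7, 15, 32):
    budgets = {c: round(token_budget(params, c)) for c in CLASS_RATIOS}
    print(f"{params}m params -> tokens (M): {budgets}")
# e.g. 6.7m params, D-Class -> ~951M tokens, matching the table; the smallest
# models are rounded a little lower (the 3.3m A-Class row lists 60M, not 66M).
```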