Update README.md
The B, C, and D classes are derived from LLaMA's tokens-to-parameters ratio: LLaMA 65B is nearly Chinchilla-optimal, trained on roughly 21 tokens per parameter (i.e., 21 x the parameter count in millions, in millions of tokens). Scaling that ratio down across the smaller model sizes and their training sets gives these classes.
| Model Name | Parameters | Class | Ratio | Tokens | Batch Size (Tokens) | Training Loss |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/Gerbil-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
| GerbilLab/GerbilBlender-A-15m | 15m | A-Class | 20 | 280M | 131k | coming soon |
| GerbilLab/GerbilBlender-A-32m | 32m | A-Class | 20 | 640M | 262k | coming soon |
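The token budgets in the table follow the class ratio directly: training tokens are roughly ratio x parameters. Here is a minimal sketch of that arithmetic; the `token_budget` helper is purely illustrative (not GerbilLab code), and the sizes and ratios are taken from the rows above.

```
# Rough check that training tokens ~ ratio * parameters for the A-Class rows above.
def token_budget(params_millions: float, ratio: float) -> float:
    """Training-token budget in millions of tokens."""
    return ratio * params_millions

for name, params_m, ratio in [
    ("Gerbil-A-3.3m", 3.3, 20),       # table lists 60M tokens
    ("GerbilBlender-A-15m", 15, 20),  # table lists 280M tokens
    ("GerbilBlender-A-32m", 32, 20),  # table lists 640M tokens
]:
    print(f"{name}: ~{token_budget(params_m, ratio):.0f}M tokens")
# Prints ~66M, ~300M, ~640M, in the same ballpark as the listed budgets.
```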
"Blender" models are trained equally in fill-in-the-middle, causal modelling, and masked language modelling tasks. Special tokens for these models include:
```
'<fitm_start>', '<multiple_tok_mask>', '<fitm_result>', '<causal>', '<mlm_start>', '<single_tok_mask>', '<mlm_end>'

# Example fill-in-the-middle
'<fitm_start> this is an <multiple_tok_mask> for fill-in-the-middle <fitm_result> example text <|endoftext|>'

# Example causal language modelling
'<causal> this is an example text for causal language modelling <|endoftext|>'

# Example masked language modelling
'<mlm_start> this is an <single_tok_mask> text for masked language modelling <mlm_end> example <|endoftext|>'
```
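As a concrete illustration of these formats, below is a minimal Python sketch that wraps a raw text sample into each of the three training formats. The helper names (`make_causal`, `make_fitm`, `make_mlm`), the whitespace-level masking, and the span selection are assumptions made for illustration; the actual GerbilLab preprocessing may differ.

```
import random

# Special tokens from the list above; everything else in this snippet
# (function names, whitespace "tokenization", span selection) is assumed
# for illustration and is not GerbilLab's actual preprocessing code.
FITM_START, MULTI_MASK, FITM_RESULT = "<fitm_start>", "<multiple_tok_mask>", "<fitm_result>"
CAUSAL = "<causal>"
MLM_START, SINGLE_MASK, MLM_END = "<mlm_start>", "<single_tok_mask>", "<mlm_end>"
EOT = "<|endoftext|>"

def make_causal(text: str) -> str:
    # Plain left-to-right modelling: just prepend the task token.
    return f"{CAUSAL} {text} {EOT}"

def make_fitm(text: str, rng: random.Random) -> str:
    # Replace a contiguous span of words with <multiple_tok_mask>
    # and append the removed span after <fitm_result>.
    words = text.split()
    i = rng.randrange(len(words))
    j = rng.randrange(i + 1, len(words) + 1)
    masked = " ".join(words[:i] + [MULTI_MASK] + words[j:])
    return f"{FITM_START} {masked} {FITM_RESULT} {' '.join(words[i:j])} {EOT}"

def make_mlm(text: str, rng: random.Random) -> str:
    # Replace a single word with <single_tok_mask>; the answer follows <mlm_end>.
    words = text.split()
    i = rng.randrange(len(words))
    masked = " ".join(words[:i] + [SINGLE_MASK] + words[i + 1:])
    return f"{MLM_START} {masked} {MLM_END} {words[i]} {EOT}"

rng = random.Random(0)
sample = "this is an example text for blender style training"
print(make_causal(sample))
print(make_fitm(sample, rng))
print(make_mlm(sample, rng))
```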
The only application where I can imagine these being useful in the slightest is warm-starting very small encoder-decoder models or fitting a new scaling law that takes smaller models into account. Every model was trained on a single GPU: an RTX 2060, an RTX 3060, or a T4.