Update README.md
README.md
@@ -18,10 +18,10 @@ We generated these checkpoints after the original training of our OLMo 2 1B model
 
 ### A Note on these Checkpoints
 These checkpoints use the same architecture and starting checkpoint as the official OLMo 2 1B, but they aren’t identical to the original run due to the non-deterministic nature of LLM training environments. Performance may differ slightly. If you're interested in comparing these checkpoints to our original OLMo 2 1B, you can compare the checkpoints that are present in both repositories:
-- stage1-step0-tokens0B -- This official OLMo 2 1B checkpoint is loaded in as the starting point for these checkpoints
-- stage1-step10000-tokens21B
-- stage1-step20000-tokens42B
-- stage1-step30000-tokens63B
+- `stage1-step0-tokens0B` -- This official OLMo 2 1B checkpoint is loaded in as the starting point for these checkpoints
+- `stage1-step10000-tokens21B`
+- `stage1-step20000-tokens42B`
+- `stage1-step30000-tokens63B`
 
 ## Inference
 You can access these checkpoints using the standard Hugging Face Transformers library:
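The inference snippet itself sits below this hunk and is not shown in the diff. As a minimal sketch of loading one of the checkpoints listed above, assuming they are published on the Hugging Face Hub as revisions (branches) named after each checkpoint — the repo ID below is a placeholder, not confirmed by this diff:

```python
# Minimal sketch (not the repo's verbatim snippet): load one of the
# intermediate checkpoints via its Hub revision. "allenai/OLMo-2-0425-1B"
# is a placeholder repo ID; substitute the repository this README belongs to.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/OLMo-2-0425-1B"       # placeholder: the Hub repo hosting these checkpoints
revision = "stage1-step10000-tokens21B"  # any checkpoint name from the list above

tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision)

inputs = tokenizer("Language modeling is ", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Passing `revision=` pins both the tokenizer and the weights to that checkpoint's branch, so the same code loads any step in the list by swapping the revision string.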