leo-pekelis-gradient
committed on
Update README.md
README.md
CHANGED
@@ -18,7 +18,7 @@ Gradient incorporates your data to deploy autonomous assistants that power criti
 For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
 
 This model extends LLama-3 8B's context length from 8k to 4194K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai).
 
-It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. For this stage, we trained on
+It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. For this stage, we trained on 201M tokens, and 1.6B tokens total for all stages, which is ~ 0.01% of Llama-3's original pre-training data.
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/01_d4UYPE47EHlFGyaG9X.png)
 
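The "adjusting RoPE theta" step the diff refers to can be sketched as follows. This is a minimal illustration of NTK-aware theta scaling, a common way to grow rotary-embedding wavelengths for a longer context window; the model card only states that theta was adjusted, so the formula, function name, and the Llama-3 base values below are assumptions, not Gradient's exact recipe.

```python
def scaled_rope_theta(base_theta: float, orig_ctx: int, new_ctx: int, head_dim: int) -> float:
    """NTK-aware RoPE scaling (illustrative, not Gradient's exact schedule).

    Raises the rotary base theta so that position frequencies at the new,
    longer context length interpolate rather than extrapolate.
    """
    scale = new_ctx / orig_ctx
    # Exponent d/(d-2) comes from the NTK-aware interpolation derivation.
    return base_theta * scale ** (head_dim / (head_dim - 2))


# Assumed values: Llama-3 8B ships with rope_theta=500000, head_dim=128,
# and an 8k context; 4194k is the extended length named in the README.
new_theta = scaled_rope_theta(500_000.0, 8_192, 4_194_304, 128)
```

A config-level change like `config.rope_theta = new_theta` before continued training is the usual way such a value is applied, but the exact mechanics here are hypothetical.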