Awan LLM committed
Update README.md
README.md
CHANGED
@@ -14,6 +14,9 @@ In terms of reasoning and intelligence, this model is probably worse than the OG
 
 Will soon have quants uploaded here on HF and have it up on our site https://awanllm.com for anyone to try.
 
+OpenLLM Benchmark:
+
+
 
 Training:
 - 4096 sequence length, while the base model is 8192 sequence length. From testing it still performs the same 8192 context just fine.
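The training note in the hunk above (4096-token sequences on an 8192-context base model) implies the usual pack-and-chunk preprocessing step, where tokenized documents are concatenated and split into fixed-length training sequences. A minimal sketch of that idea — the function name and the toy token lists are illustrative, not Awan LLM's actual pipeline:

```python
def pack_into_sequences(tokenized_docs, seq_len=4096):
    """Concatenate tokenized documents into one stream and split it into
    fixed-length training sequences, dropping the final partial chunk.
    (A common packing strategy; the real training pipeline is not published.)"""
    stream = [tok for doc in tokenized_docs for tok in doc]
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]

# Toy example with seq_len=4 instead of 4096:
docs = [[1, 2, 3], [4, 5, 6, 7], [8, 9]]
batches = pack_into_sequences(docs, seq_len=4)
# → [[1, 2, 3, 4], [5, 6, 7, 8]]  (trailing token 9 is dropped)
```

Training at 4096 while inferring at 8192 works here because the base model's positional encoding already covers 8192 positions; the shorter training window only limits what the fine-tune itself saw.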