Add files using upload-large-folder tool
README.md CHANGED
@@ -17,7 +17,6 @@ base_model:
 - meta-llama/Llama-4-Scout-17B-16E
 tags:
 - facebook
-- unsloth
 - meta
 - pytorch
 - llama
@@ -98,12 +97,7 @@ extra_gated_heading: "Please be sure to provide your full legal name, date of bi
 license: other
 license_name: llama4
 ---
-
-<p style="margin-bottom: 0; margin-top: 0;">
-<strong>See <a href="https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2">our collection</a> for versions of Llama 4 including 4-bit & 16-bit formats.</strong>
-</p>
-</div>
-</div>
+

 ## Model Information

@@ -275,7 +269,7 @@ In this section, we report the results for Llama 4 relative to our previous mode
 | Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 |
 | | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
 | Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
-| Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/
+| Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
 | | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
 | Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
 | Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |