Update README.md
<div style="display: flex; justify-content: center; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/_yAulJQ1lcL56Dvqnxr4W.png" width="75%" />
</div>

## MegaMath Delivers with High Quality

During development, we ran extensive experiments to find the best practices for text extraction, deduplication, fastText classifier training, and more. Models trained on MegaMath data outperform those trained on existing open datasets.

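The near-duplicate detection used in such deduplication pipelines can be sketched with a minimal MinHash example. This is a generic illustration of the technique in plain Python, not MegaMath's actual deduplication code; the function names, shingle size, and hash count are all hypothetical choices for the sketch:

```python
import hashlib

def shingles(text, n=5):
    """Break a document into word n-grams ("shingles") for fuzzy matching."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(text, num_hashes=64):
    """Summarize a document's shingle set by its minimum hash value under
    num_hashes seeded hash functions. Two signatures agree on roughly the
    same fraction of positions as the documents' Jaccard similarity."""
    return tuple(
        min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        )
        for seed in range(num_hashes)
    )

def estimated_similarity(a, b):
    """Estimate Jaccard similarity as the fraction of matching positions."""
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

doc_a = "the quadratic formula gives the roots of any quadratic equation"
doc_b = "the quadratic formula gives the roots of any quadratic equations"
doc_c = "group theory studies algebraic structures known as groups"

# Near-duplicates share most shingles, so their estimate is high;
# unrelated documents share none, so theirs is (near) zero.
print(estimated_similarity(doc_a, doc_b))  # high, close to the true Jaccard of 5/7
print(estimated_similarity(doc_a, doc_c))  # close to 0.0
```

At scale, pipelines bucket these signatures with locality-sensitive hashing so that only likely duplicates are ever compared pairwise, rather than computing all-pairs similarity as above.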
We also release two proof-of-concept models, based on [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) and [Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
Training Llama-3.2-1B and Llama-3.2-3B on MegaMath brings a roughly 15%–20% performance boost on 10 downstream benchmarks, demonstrating its high data quality.

<div style="display: flex; justify-content: center; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/EIReQ8TIbyn1V3JfsEKiL.png" width="50%" />
</div>

## Detailed Statistics