Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ size_categories:
 ---
 # Dataset Card for Recycling-The-Web Synthetic Data
 
-We release ~44B tokens of high-quality, filtered synthetic texts obtained via our REcycling the Web with guIded REwrite (REWIRE) approach.
+We release ~44B tokens of high-quality, model-filtered synthetic texts obtained via our [REcycling the Web with guIded REwrite (REWIRE)](https://arxiv.org/abs/2506.04689) approach.
 The generation process involves taking all documents that are of moderate quality (i.e., having passed some rule-based filters),
 using an LLM (Llama-3.3-70B-Instruct) to identify the purpose of the text content, and then asking the LLM to come up with an improved document conditioned on chain-of-thought reasoning.
 Our approach specifically targets the vast quantity of low-quality documents that are somewhat informative but still not considered high-quality by existing filters.
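To make the rewriting flow in the hunk above more concrete, here is a minimal two-step sketch (state the document's purpose, then rewrite conditioned on chain-of-thought reasoning). The `generate` helper and the prompt wording are assumptions for illustration only; they are not the prompts or serving stack actually used for REWIRE.

```python
# Illustrative sketch of the purpose-then-rewrite flow described in the card.
# `generate(messages) -> str` is a hypothetical chat-completion helper
# (e.g., wrapping Llama-3.3-70B-Instruct); the prompts are NOT the ones used
# to produce this dataset.
from typing import Callable, Dict, List

Message = Dict[str, str]

def rewire_rewrite(document: str, generate: Callable[[List[Message]], str]) -> str:
    # Step 1: have the model state what the original page is trying to do.
    purpose = generate([
        {"role": "user",
         "content": "Read the web document below and state its purpose "
                    "in one or two sentences.\n\n" + document},
    ])

    # Step 2: ask for an improved document, conditioned on explicit
    # chain-of-thought reasoning about how to better serve that purpose.
    rewritten = generate([
        {"role": "user",
         "content": "Stated purpose: " + purpose + "\n"
                    "First reason step by step about what the document is "
                    "missing or does poorly, then write an improved version "
                    "that fulfills the purpose.\n\n" + document},
    ])
    return rewritten
```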
@@ -50,6 +50,9 @@ The data is intended for pre-training large language models (LLMs), and designed
 
 The data was obtained by using an LLM to rewrite low-quality documents that have been discarded by quality filters, in order to make them useful for pre-training.
 This helps increase the token availability and address the impending data bottleneck, as the growth of public human-generated texts has been lagging behind the increase in model capacity and training token budget.
+Across different model scales, we find that mixing our synthetic data and high-quality web data consistently outperforms training on only the latter.
+
+
 
 ### Source Data
 
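The added sentence above refers to mixing this synthetic data with high-quality web data for pre-training. Below is a rough sketch of one way to interleave the two sources with Hugging Face `datasets`; the repository IDs and the 50/50 ratio are placeholders, not the mixing recipe used in the paper.

```python
# Sketch of mixing synthetic and web data streams for pre-training.
# Dataset IDs and the mixing ratio below are placeholders.
from datasets import load_dataset, interleave_datasets

synthetic = load_dataset("path/to/rewire-synthetic-data", split="train", streaming=True)  # placeholder ID
web = load_dataset("path/to/high-quality-web-data", split="train", streaming=True)        # placeholder ID

mixed = interleave_datasets(
    [synthetic, web],
    probabilities=[0.5, 0.5],           # placeholder mixing ratio
    seed=0,
    stopping_strategy="all_exhausted",  # keep drawing until both sources are seen
)
```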
@@ -73,6 +76,7 @@ The negative data are random samples selected from our rewriting generations.
 
 This data release contains _only_ the rewritten outputs that have been filtered, i.e. those in the top 10% of the generations based on fastText scores.
 
+
 
 #### Personal and Sensitive Information
 
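The hunk above notes that only the top 10% of generations by fastText score were kept. A minimal sketch of such a filter is shown below; the classifier file, label name, and positive-class convention are assumptions for illustration, not the actual artifacts used for this release.

```python
# Hedged sketch of a "keep the top 10% by fastText quality score" filter.
# The model path and label name are placeholders, not the classifier used here.
import fasttext
import numpy as np

model = fasttext.load_model("quality_classifier.bin")  # hypothetical classifier file

def quality_score(text: str) -> float:
    # fastText predicts on a single line, so collapse newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    prob = float(probs[0])
    # Assume a binary classifier with a positive "__label__hq" class.
    return prob if labels[0] == "__label__hq" else 1.0 - prob

def keep_top_decile(docs: list[str]) -> list[str]:
    scores = np.array([quality_score(d) for d in docs])
    threshold = np.quantile(scores, 0.9)  # 90th percentile -> keep top 10%
    return [d for d, s in zip(docs, scores) if s >= threshold]
```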