---
tags:
- guided_rewriting
size_categories:
- 10B<n<100B
---
# Dataset Card for Recycling-The-Web Synthetic Data

We release ~44B tokens of high-quality, filtered synthetic texts obtained via our REcycling the Web with guIded REwrite (REWIRE) approach.
The generation process takes all documents of moderate quality (i.e., those that have passed some rule-based filters),
uses an LLM (Llama-3.3-70B-Instruct) to identify the purpose of the text content, and then asks the LLM to produce an improved document conditioned on chain-of-thought reasoning.
Our approach specifically targets the vast quantity of low-quality documents that are somewhat informative but still not considered high-quality by existing filters.
We use the LLM's knowledge and reasoning capabilities to recycle these discarded documents and add them back to the training pool.

## Dataset Details

### Dataset Description

- **Curated by:** Thao Nguyen
- **Language(s):** Mostly English texts
- **License:** CC-BY-NC

The texts are outputs of Llama 3.3 and subject to the Llama 3.3 license (https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE).
If you use the data to create, train, fine-tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.
Third-party content pulled from other locations is subject to its own licenses, and you may have other legal obligations or restrictions that govern your use of that content.

### Dataset Sources

- **Paper:** [Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models](https://arxiv.org/abs/2506.04689), COLM 2025

## Uses

### Direct Use

The data is intended for pre-training large language models (LLMs), and designed with the goal of _complementing_ existing web-scraped texts.
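
For reference, below is a minimal sketch of streaming the data with the Hugging Face `datasets` library. The repository ID and the `"text"` field name are placeholders not confirmed by this card; check the dataset files on the Hub for the actual values.

```python
# Minimal sketch: stream the dataset with the Hugging Face `datasets` library.
# NOTE: the repository ID and the "text" field name are assumptions made for
# illustration; substitute the actual Hub repo ID and inspect a record first.
from datasets import load_dataset

REPO_ID = "<org>/<this-dataset>"  # placeholder, not the confirmed repo ID

# Streaming avoids materializing ~44B tokens on disk before training starts.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for example in ds.take(3):
    print(example.get("text", example))  # field name is an assumption
```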

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The data was obtained by using an LLM to rewrite low-quality documents that had been discarded by quality filters, in order to make them useful for pre-training.
This helps increase token availability and address the impending data bottleneck, as the growth of public human-generated texts has been lagging behind the increase in model capacity and training token budgets.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

We first gathered raw web documents from DCLM-RefinedWeb (Li et al., 2024): Common Crawl data that has passed the rule-based quality filters from RefinedWeb (Penedo et al., 2023)
(e.g., the repetition filter, page-length filter, and URL filter) and global deduplication, but has not gone through model-based filtering.

We then prompted Llama-3.3-70B-Instruct (Grattafiori et al., 2024) to perform chain-of-thought reasoning on the original web documents,
such as identifying the task or purpose of the text and reasoning about the steps needed to achieve that purpose, before generating an improved version of the documents.
Refer to our paper for the full prompt we used. We applied this rewriting process to all documents in the starting pool (DCLM-RefinedWeb).
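
To make the pipeline concrete, here is an illustrative sketch of the guided-rewriting step. The prompt wording, serving setup (an OpenAI-compatible endpoint, e.g. served via vLLM), and decoding parameters below are placeholders rather than the configuration used for this release; refer to the paper for the actual prompt.

```python
# Illustrative sketch of the guided-rewriting step, NOT the exact prompt or
# inference stack used to build this dataset (see the paper for the full prompt).
# Assumes an OpenAI-compatible chat endpoint serving Llama-3.3-70B-Instruct.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder endpoint

GUIDED_REWRITE_PROMPT = (
    "Read the web document below. First, identify its task or purpose. "
    "Then reason step by step about what would make the document achieve "
    "that purpose well. Finally, write an improved version of the document.\n\n"
    "Document:\n{document}"
)

def rewrite(document: str) -> str:
    """Ask the model to reason about the document's purpose, then produce a rewrite."""
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct",
        messages=[{"role": "user", "content": GUIDED_REWRITE_PROMPT.format(document=document)}],
        temperature=0.7,   # decoding parameters are illustrative
        max_tokens=4096,
    )
    return response.choices[0].message.content
```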

To control the quality of the generations, we further performed model-based filtering.
Following DCLM (Li et al., 2024), we trained a fastText classifier on 400K documents split evenly between positive and negative classes.
The positive data is the same as that used in DCLM, which includes synthetic instruction data from OpenHermes 2.5 (OH-2.5) (Teknium, 2023) and high-scoring posts from the r/ExplainLikeImFive (ELI5) subreddit.
The negative data are random samples drawn from our rewritten generations.

This data release contains _only_ the rewritten outputs that passed this filter, i.e., those in the top 10% of the generations by fastText score.
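
As an illustration of this filtering step (not the exact released pipeline), the sketch below scores documents with a fastText classifier and keeps the top 10% by score; the classifier path and label name are assumptions.

```python
# Sketch of DCLM-style model-based filtering: score each rewritten document
# with a fastText quality classifier and keep the top 10% of the pool.
# The classifier path and the "__label__hq" label name are assumptions.
import fasttext
import numpy as np

model = fasttext.load_model("rewire_quality_classifier.bin")  # placeholder path

def quality_score(text: str) -> float:
    """Probability assigned to the positive (high-quality) class."""
    labels, probs = model.predict(text.replace("\n", " "), k=2)  # fastText rejects newlines
    return dict(zip(labels, probs)).get("__label__hq", 0.0)

def keep_top_fraction(docs: list[str], fraction: float = 0.10) -> list[str]:
    """Keep documents whose scores fall in the top `fraction` of all scores."""
    scores = np.array([quality_score(d) for d in docs])
    cutoff = np.quantile(scores, 1.0 - fraction)
    return [d for d, s in zip(docs, scores) if s >= cutoff]
```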

#### Personal and Sensitive Information

We sourced the raw documents (on which we performed rewriting) from DCLM (Li et al., 2024), which states that "none \[of the documents\] have had any special treatment for PII and sensitive content to preserve representativeness of the raw data". Therefore, it is strongly recommended that this dataset be used only for research purposes. We encourage additional studies on filtering the released data, both to preserve privacy and to discard any potentially biased or harmful content, before training downstream models.

## Bias, Risks, and Limitations

As with most synthetic data approaches, there are always risks of hallucinations in the generations, as well as risks of overfitting to the biases of the model used for data refinement.

## Citation

If you use data from Recycling the Web, please cite it with the following BibTeX entry:
```
@article{nguyen2025recycling,
  title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models},
  author={Nguyen, Thao and Li, Yang and Golovneva, Olga and Zettlemoyer, Luke and Oh, Sewoong and Schmidt, Ludwig and Li, Xian},
  journal={arXiv preprint arXiv:2506.04689},
  year={2025}
}
```

## Dataset Card Contact

Thao Nguyen ([email protected])