---
license: cc-by-nc-4.0
language:
- en
tags:
- synthetic_data
- LLM_pretraining
- guided_rewriting
size_categories:
- 10B<n<100B
---
# Dataset Card for Recycling-The-Web Synthetic Data

We release 44.4B tokens of high-quality, model-filtered synthetic texts obtained via our [REcycling the Web with guIded REwrite (REWIRE)](https://arxiv.org/abs/2506.04689) approach.
The generation process involves taking all documents that are of moderate quality (i.e., having passed some rule-based filters), 
using an LLM (Llama-3.3-70B-Instruct) to identify the purpose of the text content, and then asking the LLM to come up with an improved document conditioned on chain-of-thought reasoning. 
Our approach specifically targets the vast quantity of low-quality documents that are somewhat informative but still not considered high-quality by existing filters. 
We use the LLM's knowledge and reasoning capabilities to recycle these discarded documents and add them back to the training pool.

## Dataset Details

### Dataset Description

Curated by: Thao Nguyen

Language(s): Mostly English texts

License: CC BY-NC 4.0

The texts are outputs of Llama 3.3 and subject to the Llama 3.3 license (https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE).
If you use the data to create, train, fine-tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.
Third-party content pulled from other locations is subject to its own licenses, and you may have other legal obligations or restrictions that govern your use of that content.


### Dataset Sources

Paper: [Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models](https://arxiv.org/abs/2506.04689), COLM 2025

## Uses


### Direct Use

The data is intended for pre-training large language models (LLMs), and designed with the goal of _complementing_ existing web-scraped texts.
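
As an illustration, the release can be streamed with the Hugging Face `datasets` library. The sketch below is a minimal example; the split name and text field are assumptions and may differ from the actual file layout.

```python
# Minimal sketch: stream the corpus for pre-training with the `datasets`
# library. The split name ("train") and text field ("text") are assumptions.
from datasets import load_dataset

ds = load_dataset("facebook/recycling_the_web", split="train", streaming=True)

for example in ds:
    document = example["text"]  # one rewritten web document
    # ... tokenize `document` and mix it with high-quality web data ...
    break
```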


## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The data was obtained by using an LLM to rewrite low-quality documents that have been discarded by quality filters, in order to make them useful for pre-training. 
This helps increase token availability and address the impending data bottleneck, as the growth of public human-generated text has been lagging behind increases in model capacity and training token budgets.
Across different model scales, we find that mixing our synthetic data and high-quality web data consistently outperforms training on only the latter.

<img src="https://huggingface.co/datasets/facebook/recycling_the_web/resolve/main/main_figure.png" alt="Summary of performance improvement" width="350px">

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

We first gathered raw web documents from DCLM-RefinedWeb (Li et al., 2024): Common Crawl data that has passed the rule-based quality filters from RefinedWeb (Penedo et al., 2023) 
(e.g., the repetition, page-length, and URL filters) and global deduplication, but has not gone through model-based filtering.
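
For intuition, a toy sketch of the kind of rule-based heuristics involved is shown below; the thresholds are placeholders, not the actual values used by RefinedWeb or DCLM.

```python
# Toy illustration of RefinedWeb-style rule-based filtering (page length and
# line repetition). Thresholds are placeholders, not the actual values.
def passes_rule_based_filters(text: str) -> bool:
    words = text.split()
    if not (50 <= len(words) <= 100_000):    # page-length filter (placeholder bounds)
        return False

    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if lines:
        duplicate_fraction = 1.0 - len(set(lines)) / len(lines)
        if duplicate_fraction > 0.3:         # repetition filter (placeholder threshold)
            return False
    return True
```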

We then prompted Llama-3.3-70B-Instruct (Grattafiori et al., 2024) to perform chain-of-thought reasoning on the original web documents, 
such as identifying the task or purpose of the text and reasoning about the steps needed to achieve that purpose, before generating an improved version of each document. 
Refer to our paper for the full prompt we used. We applied this rewriting process to all documents in the starting pool (DCLM-RefinedWeb).
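
The snippet below is a simplified sketch of this rewriting step using the `transformers` text-generation pipeline; the prompt wording and generation settings are illustrative stand-ins, not the exact prompt from the paper.

```python
# Simplified sketch of guided rewriting with Llama-3.3-70B-Instruct.
# The prompt below is an illustrative stand-in; see the paper for the real one.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    device_map="auto",
)

def rewrite(document: str) -> str:
    messages = [{
        "role": "user",
        "content": (
            "Read the web document below.\n"
            "1. Identify the task or purpose of the text.\n"
            "2. Reason step by step about what is needed to achieve that purpose.\n"
            "3. Then write an improved version of the document.\n\n"
            f"Document:\n{document}"
        ),
    }]
    output = generator(messages, max_new_tokens=2048, do_sample=False)
    # The response contains both the chain-of-thought and the rewritten text;
    # the rewritten portion is extracted downstream.
    return output[0]["generated_text"][-1]["content"]
```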

To control the quality of the generations, we further performed model-based filtering using a fastText classifier, following DCLM (Li et al., 2024).

This data release contains _only_ the rewritten outputs that have passed this filter, i.e., those in the top 10% of the generations based on fastText scores.
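
A minimal sketch of this filtering step is shown below; the classifier path and label name are assumptions, and the actual classifier follows DCLM (Li et al., 2024).

```python
# Minimal sketch of DCLM-style quality filtering with fastText: score every
# rewritten document and keep the top 10%. Model path and label are assumptions.
import fasttext
import numpy as np

classifier = fasttext.load_model("dclm_fasttext_quality.bin")  # hypothetical path

def quality_score(text: str) -> float:
    labels, probs = classifier.predict(text.replace("\n", " "))
    # Assume the positive (high-quality) label is "__label__hq".
    return probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]

def keep_top_fraction(documents, fraction=0.10):
    scores = np.array([quality_score(doc) for doc in documents])
    cutoff = np.quantile(scores, 1.0 - fraction)
    return [doc for doc, score in zip(documents, scores) if score >= cutoff]
```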

<img src="https://huggingface.co/datasets/facebook/recycling_the_web/resolve/main/REWIRE_pipeline.png" alt="REWIRE pipeline" width="800px">

## Citation

If you use data from Recycling the Web, please cite it with the following BibTeX entry:
```
@article{nguyen2025recycling,
  title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models},
  author={Nguyen, Thao and Li, Yang and Golovneva, Olga and Zettlemoyer, Luke and Oh, Sewoong and Schmidt, Ludwig and Li, Xian},
  journal={arXiv preprint arXiv:2506.04689},
  year={2025}
}
```

## Dataset Card Contact

Thao Nguyen ([email protected])