---
license: cc-by-nc-4.0
language:
  - en
tags:
  - synthetic_data
  - LLM_pretraining
  - guided_rewriting
size_categories:
  - 10B<n<100B
---

# Dataset Card for Recycling-The-Web Synthetic Data

We release 44.4B tokens of high-quality, model-filtered synthetic text obtained via our REcycling the Web with guIded REwrite (REWIRE) approach. The generation process takes all documents of moderate quality (i.e., those that have passed some rule-based filters), uses an LLM (Llama-3.3-70B-Instruct) to identify the purpose of the text, and then asks the LLM to produce an improved document conditioned on chain-of-thought reasoning. Our approach specifically targets the vast quantity of low-quality documents that are somewhat informative but still not considered high-quality by existing filters. We use the LLM's knowledge and reasoning capabilities to recycle these discarded documents and add them back to the training pool.

## Dataset Details

### Dataset Description

- **Curated by:** Thao Nguyen
- **Language(s):** Mostly English texts
- **License:** CC BY-NC 4.0

The texts are outputs of Llama 3.3 and subject to the [Llama 3.3 license](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE). If you use the data to create, train, fine-tune, or otherwise improve an AI model that is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. Third-party content pulled from other locations is subject to its own licenses, and you may have other legal obligations or restrictions that govern your use of that content.

### Dataset Sources

- **Paper:** [Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models](https://arxiv.org/abs/2506.04689), COLM 2025

## Uses

### Direct Use

The data is intended for pre-training large language models (LLMs) and is designed to complement existing web-scraped texts.
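
As a minimal loading sketch (the Hugging Face repo id and the `text` column name below are assumptions, not confirmed by this card; check the repository's file layout for the actual configuration), the data can be streamed with the `datasets` library:

```python
# Minimal loading sketch. Assumptions: the repo id and the "text" column
# are illustrative placeholders; verify them against the actual dataset files.
from datasets import load_dataset

# Streaming avoids materializing the full 44.4B-token corpus on disk.
ds = load_dataset("facebook/recycling_the_web", split="train", streaming=True)

for example in ds.take(3):
    # Each record is expected to contain the rewritten document text.
    print(example.get("text", example))
```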

## Dataset Creation

### Curation Rationale

The data was obtained by using an LLM to rewrite low-quality documents that had been discarded by quality filters, in order to make them useful for pre-training. This helps increase token availability and addresses the impending data bottleneck, as the growth of public human-generated text has been lagging behind the increase in model capacity and training token budgets. Across different model scales, we find that mixing our synthetic data with high-quality web data consistently outperforms training on only the latter.
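
As an illustrative sketch of such a mixture (the repo ids, the 50/50 ratio, and the streaming setup are assumptions for illustration; the paper reports the actual data proportions), the two sources could be interleaved with the `datasets` library:

```python
# Illustrative sketch of mixing REWIRE synthetic data with high-quality web data.
# The repo ids and the mixing ratio below are assumptions, not the paper's setup.
from datasets import load_dataset, interleave_datasets

synthetic = load_dataset("facebook/recycling_the_web", split="train", streaming=True)
web = load_dataset("mlfoundations/dclm-baseline-1.0", split="train", streaming=True)

mixed = interleave_datasets(
    [synthetic, web],
    probabilities=[0.5, 0.5],  # assumed ratio; tune per token budget
    seed=42,
)
```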

*(Figure: summary of performance improvement.)*

### Source Data

#### Data Collection and Processing

We first gathered raw web documents from DCLM-RefinedWeb (Li et al., 2024): Common Crawl data that has passed the rule-based quality filters from RefinedWeb (Penedo et al., 2023) (e.g., the repetition, page-length, and URL filters) and global deduplication, but has not gone through model-based filtering.
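
For intuition, here is a simplified sketch of rule-based heuristics in the spirit of those filters; the thresholds and blocklist are illustrative placeholders, not the settings used by RefinedWeb or DCLM:

```python
# Simplified sketch of rule-based filtering heuristics (repetition, page length,
# URL). All thresholds and the blocklist are illustrative assumptions.
from collections import Counter
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-spam.com"}  # placeholder blocklist

def passes_rule_based_filters(text: str, url: str) -> bool:
    words = text.split()
    # Page-length filter: discard very short or extremely long pages.
    if not (50 <= len(words) <= 100_000):
        return False
    # Repetition filter: discard pages dominated by a single repeated line.
    lines = [line for line in text.splitlines() if line.strip()]
    if lines:
        most_common_count = Counter(lines).most_common(1)[0][1]
        if most_common_count / len(lines) > 0.3:
            return False
    # URL filter: discard pages from blocklisted domains.
    if urlparse(url).netloc in BLOCKED_DOMAINS:
        return False
    return True
```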

We then prompted Llama-3.3-70B-Instruct (Grattafiori et al., 2024) to perform chain-of-thought reasoning over each original web document, such as identifying the task or purpose of the text and reasoning about the steps needed to achieve that purpose, before generating an improved version of the document. Refer to our paper for the full prompt we used. We applied this rewriting process to all documents in the starting pool (DCLM-RefinedWeb).
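
The following is a rough sketch of this rewriting step, assuming Llama-3.3-70B-Instruct is served behind an OpenAI-compatible endpoint (e.g., via vLLM). The prompt shown is a loose paraphrase for illustration only; the exact prompt is given in the paper.

```python
# Sketch of the guided-rewriting call. The endpoint, sampling parameters, and
# prompt wording are assumptions; see the paper for the actual prompt.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

REWRITE_PROMPT = (
    "You are given a web document. First, identify the task or purpose of the "
    "text and reason step by step about what a higher-quality version would "
    "need. Then write an improved version of the document.\n\nDocument:\n{doc}"
)

def rewrite(document: str) -> str:
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct",
        messages=[{"role": "user", "content": REWRITE_PROMPT.format(doc=document)}],
        temperature=0.7,
        max_tokens=2048,
    )
    return response.choices[0].message.content
```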

To control the quality of the generations, we further performed model-based filtering using a fastText classifier, following DCLM (Li et al., 2024).

This data release contains only the rewritten outputs that passed this filtering step, i.e., those in the top 10% of generations by fastText score.
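
A minimal sketch of this filtering step is shown below; the classifier path, label name, and placeholder documents are assumptions for illustration (DCLM describes the actual classifier and thresholding):

```python
# Sketch of model-based filtering with a fastText quality classifier. The model
# path and "__label__hq" label are assumed placeholders, not the DCLM artifacts.
import fasttext
import numpy as np

model = fasttext.load_model("quality_classifier.bin")  # placeholder path

def quality_score(text: str) -> float:
    # fastText expects single-line input; return the high-quality probability.
    labels, probs = model.predict(text.replace("\n", " "))
    return probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]

rewritten_docs = ["rewritten document one...", "rewritten document two..."]  # placeholders

scores = np.array([quality_score(doc) for doc in rewritten_docs])
threshold = np.quantile(scores, 0.90)  # keep the top 10% by score
kept = [doc for doc, s in zip(rewritten_docs, scores) if s >= threshold]
```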

*(Figure: overview of the REWIRE pipeline.)*

## Citation

If you use data from Recycling the Web, please cite it with the following BibTeX entry:

```bibtex
@article{nguyen2025recycling,
  title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models},
  author={Nguyen, Thao and Li, Yang and Golovneva, Olga and Zettlemoyer, Luke and Oh, Sewoong and Schmidt, Ludwig and Li, Xian},
  journal={arXiv preprint arXiv:2506.04689},
  year={2025}
}
```

## Dataset Card Contact

Thao Nguyen ([email protected])