---
license: cc-by-nc-4.0
language:
- en
tags:
- synthetic_data
- LLM_pretraining
- guided_rewriting
size_categories:
- 10B<n<100B
---

# Dataset Card for Recycling the Web

The data was obtained by using an LLM to rewrite low-quality documents that had been discarded by quality filters, in order to make them useful for pre-training. This increases token availability and helps address the impending data bottleneck, as the growth of public human-generated text has been lagging behind the increase in model capacity and training token budgets. Across different model scales, we find that mixing our synthetic data with high-quality web data consistently outperforms training on only the latter.

*Figure: summary of performance improvement.*

### Source Data

#### Data Collection and Processing

We first gathered raw web documents from DCLM-RefinedWeb (Li et al., 2024): Common Crawl data that has passed the rule-based quality filters from RefinedWeb (Penedo et al., 2023) (e.g., the repetition filter, page length filter, URL filter, etc.) and global deduplication, but has not gone through model-based filtering.

We then prompted Llama-3.3-70B-Instruct (Grattafiori et al., 2024) to perform chain-of-thought reasoning on the original web documents, such as identifying the task or purpose of the text and reasoning about the steps needed to achieve that purpose, before generating an improved version of each document. Refer to our paper for the full prompt we used. We applied this rewriting process to all documents in the starting pool (DCLM-RefinedWeb).

To control the quality of the generations, we further performed model-based filtering using a fastText classifier, following DCLM (Li et al., 2024). This data release contains _only_ the rewritten outputs that passed this filter, i.e., those in the top 10% of generations by fastText score.

*Figure: REWIRE pipeline.*

## Citation

If you use data from Recycling the Web, please cite it with the following BibTeX entry:

```
@article{nguyen2025recycling,
  title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models},
  author={Nguyen, Thao and Li, Yang and Golovneva, Olga and Zettlemoyer, Luke and Oh, Sewoong and Schmidt, Ludwig and Li, Xian},
  journal={arXiv preprint arXiv:2506.04689},
  year={2025}
}
```

## Dataset Card Contact

Thao Nguyen (thaottn@cs.washington.edu)
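
For illustration, the DCLM-style model-based filtering step described above (scoring rewritten documents with a fastText quality classifier and keeping the top 10%) can be sketched as follows. This is a minimal sketch, not the released pipeline: the classifier file name, the `__label__hq` label, and the toy documents are assumptions.

```python
# Minimal sketch of DCLM-style model-based filtering (illustrative only).
# Assumptions: a fastText quality classifier saved locally as
# "quality_classifier.bin" whose high-quality label is "__label__hq";
# the actual classifier and thresholds used for this release may differ.
import fasttext
import numpy as np

model = fasttext.load_model("quality_classifier.bin")  # assumed path

def quality_score(text: str) -> float:
    """Return the classifier's probability that `text` is high quality."""
    # fastText expects a single line of text; predict() returns (labels, probabilities).
    labels, probs = model.predict(text.replace("\n", " "), k=2)
    return dict(zip(labels, probs)).get("__label__hq", 0.0)  # assumed label name

# Toy examples standing in for rewritten documents.
rewritten_docs = [
    "A rewritten web document explaining quadratic equations step by step.",
    "A rewritten web document consisting of unrelated keywords and boilerplate.",
]

scores = np.array([quality_score(doc) for doc in rewritten_docs])

# Keep only the top 10% of documents by classifier score,
# mirroring the filtering described above.
threshold = np.quantile(scores, 0.90)
kept = [doc for doc, s in zip(rewritten_docs, scores) if s >= threshold]
print(f"Kept {len(kept)} of {len(rewritten_docs)} documents")
```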