# Dataset Card for Recycling-The-Web Synthetic Data

We release 44.4B tokens of high-quality, model-filtered synthetic texts obtained via our [REcycling the Web with guIded REwrite (REWIRE)](https://arxiv.org/abs/2506.04689) approach.
The generation process involves taking all documents of moderate quality (i.e., those that have passed some rule-based filters),
using an LLM (Llama-3.3-70B-Instruct) to identify the purpose of the text content, and then asking the LLM to come up with an improved document conditioned on chain-of-thought reasoning.
Our approach specifically targets the vast quantity of low-quality documents that are somewhat informative but still not considered high-quality by existing filters.
We use the LLM’s knowledge and reasoning capabilities to recycle these discarded documents.

### Dataset Description

- **Curated by:** Thao Nguyen
- **Language(s):** Mostly English texts
- **License:** CC BY-NC

The texts are outputs of Llama 3.3 and are subject to the [Llama 3.3 license](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE).
If you use the data to create, train, fine-tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.
Third-party content pulled from other locations is subject to its own licenses.

### Dataset Sources

- **Paper:** [Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models](https://arxiv.org/abs/2506.04689), COLM 2025

## Uses
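As a minimal usage sketch, the data can be loaded with the 🤗 `datasets` library. The repo id below is taken from the pipeline-image link later in this card; the split and column names are assumptions, so the snippet simply inspects whatever fields are present.

```python
# Minimal sketch: stream examples rather than downloading the full
# 44.4B-token release up front. The repo id matches the image link in
# this card; the split and column names should be verified on the Hub.
from datasets import load_dataset

ds = load_dataset("facebook/recycling_the_web", split="train", streaming=True)

first = next(iter(ds))
print(sorted(first.keys()))  # inspect the available fields
```
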
We then prompted Llama-3.3-70B-Instruct (Grattafiori et al., 2024) to perform chain-of-thought reasoning,
such as identifying the task or purpose of the text and reasoning about the steps needed to achieve that purpose, before generating an improved version of each document.
Refer to our paper for the full prompt we used; an illustrative sketch of this step is shown below. We applied this rewriting process to all documents in the starting pool (DCLM-RefinedWeb).
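The following is only an illustrative sketch of the guided-rewrite step, not the paper's actual configuration: the prompt wording, generation settings, and use of the `transformers` chat pipeline are all assumptions.

```python
# Illustrative sketch of the guided-rewrite step. The prompt below is a
# placeholder paraphrase; the real prompt is given in the REWIRE paper,
# and max_new_tokens is an arbitrary choice.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.3-70B-Instruct")

def rewire(document: str) -> str:
    messages = [{
        "role": "user",
        "content": (
            "First, identify the purpose of the following web document. "
            "Then reason step by step about what a high-quality document "
            "serving that purpose would contain, and finally write an "
            "improved version of the document.\n\n" + document
        ),
    }]
    out = generator(messages, max_new_tokens=2048)
    # The chat pipeline returns the full conversation; take the reply.
    return out[0]["generated_text"][-1]["content"]
```
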
To control the quality of the generations, we further performed model-based filtering.
Following DCLM (Li et al., 2024), we trained a fastText classifier on 400K documents split evenly between positive and negative classes.
The positive data is the same as that used in DCLM, which includes synthetic instruction data from OpenHermes 2.5 (Teknium, 2023) (OH-2.5) and high-scoring posts from the r/ExplainLikeImFive (ELI5) subreddit.
The negative data are random samples drawn from our rewritten outputs.

This data release contains _only_ the rewritten outputs that survived this filtering, i.e., those in the top 10% of the generations by fastText score.
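A minimal sketch of this kind of fastText quality filter follows; the training-file path, hyperparameters, and label names are placeholders rather than the DCLM settings.

```python
# Minimal sketch of the fastText quality filter described above.
# "train.txt" holds one document per line, labeled __label__hq
# (OH-2.5 / ELI5 positives) or __label__lq (random rewritten outputs).
# Hyperparameters are placeholders, not the DCLM settings.
import fasttext
import numpy as np

model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)

def hq_score(text: str) -> float:
    # fastText predicts on a single line, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]

documents = ["...rewritten output 1...", "...rewritten output 2..."]
scores = [hq_score(d) for d in documents]
cutoff = np.quantile(scores, 0.9)  # keep only the top 10% by score
kept = [d for d, s in zip(documents, scores) if s >= cutoff]
```
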
![REWIRE pipeline](https://huggingface.co/datasets/facebook/recycling_the_web/resolve/main/REWIRE_pipeline.png)

#### Personal and Sensitive Information

We sourced the raw documents (on which we performed rewriting) from DCLM (Li et al., 2024), which states that "none [of the documents] have had any special treatment for PII and sensitive content to preserve representativeness of the raw data". It is therefore strongly recommended that this dataset be used only for research purposes. We encourage additional studies on filtering the released data, both to preserve privacy and to discard any potentially biased or harmful content, before training downstream models.

## Bias, Risks, and Limitations

As with most synthetic data approaches, there are always risks of hallucination in the generations, as well as risks of overfitting to the biases of the model used for data refinement.

## Citation

If you use data from Recycling the Web, please cite with the following BibTeX entry:
 