Update README.md
README.md CHANGED
@@ -193,6 +193,9 @@ This dataset consists of 120 million texts (approximately 89.3B tokens) filtered
 - **small_tokens**: Data composed solely of texts with 512 tokens or fewer
 - **small_tokens_cleaned**: Data from small_tokens with Web-specific text noise removed
 
+[For the introduction article in Japanese, click here.](https://secon.dev/entry/2025/02/20/100000-fineweb-2-edu-japanese/)
+
+
 ## Background on Dataset Creation
 
 [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (English only) was created for deduplicating web data and extracting high-quality text. In addition, [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) extracts high-quality text for educational purposes, enabling efficient learning with fewer tokens.
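For context on the `small_tokens` subset described in the diff above, here is a minimal sketch of how a 512-token cutoff could be applied with the Hugging Face `datasets` and `transformers` libraries. The dataset id and tokenizer used below are placeholders for illustration, not the ones the dataset authors actually used.

```python
# Hypothetical sketch: keep only texts of 512 tokens or fewer, in the
# spirit of the "small_tokens" subset described in the README diff.
# The dataset id and tokenizer are placeholders, not confirmed choices.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
ds = load_dataset("user/some-web-corpus", split="train")  # placeholder id

def is_small(example):
    # Count tokens without special tokens; keep texts at or under 512.
    n_tokens = len(tokenizer.encode(example["text"], add_special_tokens=False))
    return n_tokens <= 512

small_tokens = ds.filter(is_small)
print(f"kept {len(small_tokens)} of {len(ds)} texts")
```

The actual token counts would depend on the tokenizer chosen, so the resulting subset differs if a different tokenizer is substituted.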