---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: metadata
      struct:
        - name: date
          dtype: timestamp[us]
        - name: dump
          dtype: string
        - name: file_path
          dtype: string
        - name: int_score
          dtype: int64
        - name: language
          dtype: string
        - name: language_score
          dtype: float64
        - name: score
          dtype: float64
        - name: token_count
          dtype: int64
        - name: url
          dtype: string
  splits:
    - name: train
      num_bytes: 3479493957.7581863
      num_examples: 1001536
  download_size: 1983898608
  dataset_size: 3479493957.7581863
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

## Qwark Corpus

1 billion of the highest-quality tokens on the internet, based on the `fineweb-edu-dedup` subset of [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).

Filtering process:

| Step | Description | Rows Remaining |
|------|-------------|----------------|
| 1. Stream the dataset until 1M samples have been selected | Keep only items without strange characters | 1,000,000 |
| 2. Remove overlong items | Drop items exceeding 50,000 characters | 998,420 |
| 3. Combine with a selection of 4,000 TED transcripts | Add educational TED talk transcripts to the dataset | 1,002,425 |
| 4. Re-filter for strange characters | Remove items containing strange characters after merging | 1,002,425 |
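
The pipeline above can be reproduced roughly as follows. This is a minimal sketch, assuming a particular "strange character" definition (anything outside printable ASCII plus common whitespace), since the card does not specify the actual predicate; the TED transcript source is likewise unspecified, so that step appears as a hypothetical `load_ted_transcripts()` helper.

```python
import re

from datasets import load_dataset

# Assumed definition of a "strange character": anything outside
# printable ASCII and common whitespace. The card does not specify
# the actual character set used.
STRANGE = re.compile(r"[^\x20-\x7E\t\n\r]")

def is_clean(text: str) -> bool:
    """Return True if the text contains no strange characters."""
    return STRANGE.search(text) is None

# Step 1: stream fineweb-edu-dedup and keep the first 1M clean samples
# (held in memory here for simplicity).
stream = load_dataset(
    "HuggingFaceTB/smollm-corpus",
    "fineweb-edu-dedup",
    split="train",
    streaming=True,
)
samples = []
for item in stream:
    if is_clean(item["text"]):
        samples.append(item)
        if len(samples) == 1_000_000:
            break

# Step 2: drop items longer than 50,000 characters.
samples = [s for s in samples if len(s["text"]) <= 50_000]

# Step 3: merge in the selected TED transcripts.
# `load_ted_transcripts` is a hypothetical helper; the card does not
# name the transcript source.
# samples.extend(load_ted_transcripts())

# Step 4: re-apply the strange-character filter to the merged set.
samples = [s for s in samples if is_clean(s["text"])]
```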
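
Once published, the corpus loads like any Hugging Face dataset. The repository id below is a placeholder, since the card does not state where the data is hosted.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the repo that hosts this card.
ds = load_dataset("your-username/qwark-corpus", split="train")

print(ds[0]["id"], ds[0]["metadata"]["token_count"])
```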