# Essential Web v1.0 - 10M Token Sample

Approximately 10,000,000 tokens sampled from Essential Web v1.0.
## Dataset Info

- **Target**: 10,000,000 tokens
- **Actual**: ~10,999,800 tokens (estimated)
- **Source**: [EssentialAI/essential-web-v1.0](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0)
## Schema

This sample preserves ALL columns from the original dataset, including:

- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- All other original columns
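To double-check which columns are present in your local copy, you can inspect the dataset's features directly. A minimal sketch (the column list above remains the authoritative schema):

```python
from datasets import load_dataset

# Load the sample and list every preserved column.
dataset = load_dataset("sumuks/essential-web-v1.0-sample-10M", split="train")
print(dataset.column_names)  # expected to include 'id', 'text', 'metadata', 'quality_signals', 'eai_taxonomy', 'pid', ...
print(dataset.features)      # full feature types for each column
```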
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-10M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access the quality signals
print(example['quality_signals'])

# Access the taxonomy labels
print(example['eai_taxonomy'])
```
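For quick inspection without downloading every shard up front, the standard `datasets` streaming mode also works. A minimal sketch:

```python
from datasets import load_dataset

# Stream rows lazily instead of materializing the full sample on disk.
streamed = load_dataset(
    "sumuks/essential-web-v1.0-sample-10M",
    split="train",
    streaming=True,
)

# Peek at the first few documents and their quality signals.
for i, example in enumerate(streamed):
    print(example["id"], example["text"][:80])
    print(example["quality_signals"])
    if i >= 2:
        break
```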
## File Structure

The dataset is split across multiple parquet files in the `data/` directory:

- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.

The Hugging Face `datasets` library automatically loads all parts as a single dataset.
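If you only need a subset of the shards, you can point `load_dataset` at specific parquet files via `data_files`. A minimal sketch (the exact shard name follows the pattern above; adjust it to whatever files exist in the repo):

```python
from datasets import load_dataset

# Load only the first shard; widen the pattern to pull in more parts.
subset = load_dataset(
    "sumuks/essential-web-v1.0-sample-10M",
    data_files="data/part-00000.parquet",
    split="train",
)
print(len(subset))
```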
## Sampling Method

- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row
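The ~600 tokens-per-row estimate implies roughly how many rows were needed to hit the 10M-token target. A back-of-the-envelope sketch (the constants mirror the numbers above; this is illustrative arithmetic, not the dataset's actual sampling code):

```python
# Rough sizing: how many rows does a 10M-token target imply
# if each row averages ~600 tokens?
TARGET_TOKENS = 10_000_000
TOKENS_PER_ROW = 600  # estimate quoted above

rows_needed = TARGET_TOKENS // TOKENS_PER_ROW + 1
estimated_tokens = rows_needed * TOKENS_PER_ROW
print(rows_needed, estimated_tokens)  # ~16,667 rows, ~10.0M tokens
```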