Add 10M token sample with all columns
README.md
ADDED
# Essential Web v1.0 - 10M Token Sample
Approximately 10,000,000 tokens sampled from Essential Web v1.0.

## Dataset Info

- **Target**: 10,000,000 tokens
- **Actual**: ~10,999,800 tokens (estimated)
- **Source**: [EssentialAI/essential-web-v1.0](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0)

## Schema
This sample preserves ALL columns from the original dataset, including:

- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- And all other original columns (see the schema-inspection sketch below)

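A quick way to confirm which columns are present is to pull a single row. This is a minimal sketch, assuming the repository name used in the Usage section below; it streams the data so the full sample is not downloaded just to look at the schema:

```python
from datasets import load_dataset

# Stream the dataset so one row can be inspected without downloading all shards.
ds = load_dataset("sumuks/essential-web-v1.0-sample-10M", split="train", streaming=True)

first_row = next(iter(ds))
# Expected to include the columns listed above (plus any others carried over).
print(sorted(first_row.keys()))  # e.g. ['eai_taxonomy', 'id', 'metadata', 'pid', 'quality_signals', 'text', ...]
```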
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-10M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
```
## File Structure

The dataset is split across multiple Parquet files in the `data/` directory:

- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.

The Hugging Face `datasets` library automatically loads all parts as a single dataset; loading an individual shard is sketched below.

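If only part of the sample is needed, `load_dataset` can also be pointed at individual shards via `data_files`. This is a sketch that assumes the layout shown in this commit, where each `part-*.parquet` entry is a directory holding one or more Parquet files:

```python
from datasets import load_dataset

# Default behaviour: every shard under data/ is loaded as one "train" split.
full = load_dataset("sumuks/essential-web-v1.0-sample-10M", split="train")

# Load just the first shard; the glob assumes part-00000.parquet is a directory
# of Parquet files, as in this commit.
first_shard = load_dataset(
    "sumuks/essential-web-v1.0-sample-10M",
    data_files="data/part-00000.parquet/*.parquet",
    split="train",
)
print(len(full), len(first_shard))
```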
## Sampling Method

- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row (the row-count arithmetic is sketched below)

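The ~600 tokens-per-row figure is what turns the 10M-token target into a row budget. A rough sketch of that arithmetic, where the constant is the estimate stated above rather than a measured value:

```python
# Back-of-the-envelope sizing: rows needed to reach the token target,
# using the ~600 tokens-per-row estimate quoted above.
TARGET_TOKENS = 10_000_000
TOKENS_PER_ROW = 600  # stated estimate, not a per-document measurement

rows_needed = TARGET_TOKENS // TOKENS_PER_ROW
print(rows_needed)  # -> 16666, i.e. roughly 16.7k rows
```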
data/part-00000.parquet/2b6cd273-5f8a-4e81-a3c9-de1548f2f780-0.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c50134664654eb618c6b643d64d7b5f19c8b6bd622cbdc4a26471d40a6636e38
size 62907198