---
license: mit
language:
- en
---
# Ettin Mid-training Data
Phase 2 of 3: Higher-quality filtered data with context extension (250B tokens) used for mid-training of Ettin models.
This dataset contains the mid-training phase data used to train all Ettin encoder and decoder models. This phase focuses on higher-quality filtered data and context length extension to 8K tokens. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.
## Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| DCLM (Dolmino) | 175.5 | 70.4% | High-quality filtered web crawl |
| Starcoder | 38.4 | 15.4% | Code repositories and files |
| Math (Dolmino) | 10.4 | 4.2% | Mathematical content (filtered) |
| PeS2o | 8.3 | 3.3% | Scientific papers |
| Reddit | 6.2 | 2.5% | Social discussion threads |
| Arxiv | 4.1 | 1.6% | Academic preprints |
| StackExchange (Dolmino) | 2.7 | 1.1% | Q&A forums (filtered) |
| Tulu Flan | 2.4 | 1.0% | Instruction-following data |
| Books | 0.8 | 0.3% | Literature and reference books |
| Wikipedia | 0.5 | 0.2% | Encyclopedia articles |
| Total | 249.3 | 100.0% | Quality-focused mixture |
## Key Changes from Pre-training
### Data Quality Improvements
- Filtered DCLM: Using Dolmino-filtered version instead of raw DCLM
- Enhanced Math: Dolmino-filtered mathematical content
- Curated StackExchange: Higher-quality Q&A content
- Removed Noisy Sources: Dropped CC Head, CC News, and general StackExchange
### Technical Improvements
- Context Extension: Increased from 1K to 8K token sequences
- RoPE Updates: Modified positional encoding for longer context
- Learning Schedule: Inverse square root decay from the peak learning rate (see the sketch below)
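
The learning-rate schedule can be summarized with a small helper. This is a minimal sketch, assuming a linear warmup to the peak learning rate; the actual warmup length and peak values for the Ettin runs are defined in the ModernBERT training configs, not here.

```python
import math

def inv_sqrt_lr(step: int, peak_lr: float, warmup_steps: int) -> float:
    """Minimal sketch of an inverse square root decay from the peak LR.

    Assumption: a linear warmup precedes the decay; the real warmup length
    and peak LR come from the ModernBERT training configs.
    """
    if step < warmup_steps:
        # Linear warmup toward the peak learning rate
        return peak_lr * step / max(1, warmup_steps)
    # After warmup, decay proportionally to 1/sqrt(step)
    return peak_lr * math.sqrt(warmup_steps / step)
```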
## Usage
For pre-training, see the ModernBERT repository: https://github.com/AnswerDotAI/ModernBERT
### Direct Access
```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True
)

# Access samples (note: these will be longer sequences)
for sample in dataset:
    text = sample['text']  # Up to 8K tokens
    # Process your data...
```
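
To feed these longer sequences to a model, the streaming dataset can be wrapped in a standard PyTorch `DataLoader` with a tokenizing collate function. This is only a sketch, not the training pipeline from the paper; the tokenizer checkpoint (`jhu-clsp/ettin-encoder-150m`) and batch size below are placeholders to replace with your own.

```python
from streaming import StreamingDataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

# Placeholder checkpoint: swap in whichever Ettin model you are working with
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-150m")

dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True,
)

def collate(batch):
    # Tokenize raw text up to the 8K-token context used in this phase
    return tokenizer(
        [sample['text'] for sample in batch],
        max_length=8192,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )

loader = DataLoader(dataset, batch_size=8, collate_fn=collate)
for batch in loader:
    input_ids = batch['input_ids']  # shape: (batch_size, seq_len <= 8192)
    break
```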
## Structure
Each folder contains filtered, higher-quality data sources in MDS format:
- `arxiv/` - Academic papers from ArXiv
- `books/` - Literature and reference books
- `dclm_dolmino/` - Dolmino-filtered web crawl data (primary source)
- `math_dolmino/` - Filtered mathematical content
- `pes2o/` - Scientific papers
- `reddit/` - Reddit discussion threads
- `stackexchange_dolmino/` - Filtered StackExchange Q&A
- `starcoder/` - Code from GitHub repositories
- `tulu_flan/` - Instruction-following examples
- `wikipedia/` - Wikipedia articles
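
Because each folder is a separate MDS stream, individual sources can also be loaded or re-mixed on their own. A minimal sketch, assuming the remote layout mirrors the folder names above; the two sources and their proportions are illustrative, not the ratios used for the actual mid-training run.

```python
from streaming import Stream, StreamingDataset

base = 'https://huggingface.co/datasets/jhu-clsp/ettin-extension-data'

# Illustrative mixture of two sources; proportions are placeholders.
streams = [
    Stream(remote=f'{base}/dclm_dolmino', local='/tmp/ettin/dclm_dolmino', proportion=0.9),
    Stream(remote=f'{base}/starcoder', local='/tmp/ettin/starcoder', proportion=0.1),
]

dataset = StreamingDataset(streams=streams, shuffle=True)
```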
## Related Resources
- Models: Ettin Model Suite (17M-1B parameters)
- Phase 1: Pre-training Data (1.7T tokens)
- Phase 3: Decay Phase Data (50B tokens)
- Training Order: Batch-level Data Order
- Paper: [Seq vs Seq: An Open Suite of Paired Encoders and Decoders](https://arxiv.org/abs/2507.11412)
- Code: GitHub Repository
## Citation
```bibtex
@misc{weller2025seqvsseqopen,
      title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
      author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2507.11412},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11412},
}
```