---
license: mit
language:
- en
---
# Ettin Mid-training Data
[License: MIT](https://opensource.org/licenses/MIT)
[Paper](https://arxiv.org/abs/2507.11412)
[Models](https://huggingface.co/jhu-clsp)
[Code](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)
> **Phase 2 of 3**: Higher-quality filtered data with context extension (250B tokens) used for mid-training of Ettin models.
This dataset contains the mid-training phase data used to train all [Ettin encoder and decoder models](https://huggingface.co/collections/jhu-clsp/encoders-vs-decoders-the-ettin-suite-686303e16142257eed8e6aeb). This phase focuses on **higher-quality filtered data** and **context length extension to 8K tokens**. The data is provided in **MDS format** ready for use with [Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).
## 📊 Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|:------------|:-----------|:-----------|:------------|
| DCLM (Dolmino) | 175.5 | 70.4% | High-quality filtered web crawl |
| Starcoder | 38.4 | 15.4% | Code repositories and files |
| Math (Dolmino) | 10.4 | 4.2% | Mathematical content (filtered) |
| PeS2o | 8.3 | 3.3% | Scientific papers |
| Reddit | 6.2 | 2.5% | Social discussion threads |
| Arxiv | 4.1 | 1.6% | Academic preprints |
| StackExchange (Dolmino) | 2.7 | 1.1% | Q&A forums (filtered) |
| Tulu Flan | 2.4 | 1.0% | Instruction-following data |
| Books | 0.8 | 0.3% | Literature and reference books |
| Wikipedia | 0.5 | 0.2% | Encyclopedia articles |
| **Total** | **249.3** | **100.0%** | Quality-focused mixture |
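The percentages above can be read as relative sampling weights over the source folders listed under Structure. As a minimal illustration only (not the released training configuration), they could be expressed as weighted streams for the `streaming` library; the base URL and local cache paths below are placeholders you would adapt to your own setup:

```python
from streaming import Stream, StreamingDataset

# Illustrative sampling proportions copied from the table above.
# Folder names match the "Structure" section of this card.
BASE = 'https://huggingface.co/datasets/jhu-clsp/ettin-extension-data'  # placeholder remote
proportions = {
    'dclm_dolmino': 0.704,
    'starcoder': 0.154,
    'math_dolmino': 0.042,
    'pes2o': 0.033,
    'reddit': 0.025,
    'arxiv': 0.016,
    'stackexchange_dolmino': 0.011,
    'tulu_flan': 0.010,
    'books': 0.003,
    'wikipedia': 0.002,
}

# One Stream per source, weighted by its share of the mixture.
streams = [
    Stream(remote=f'{BASE}/{name}', local=f'/tmp/ettin-mid/{name}', proportion=p)
    for name, p in proportions.items()
]
dataset = StreamingDataset(streams=streams, shuffle=True)
```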
## 🔧 Key Changes from Pre-training
### Data Quality Improvements
- **Filtered DCLM**: Using Dolmino-filtered version instead of raw DCLM
- **Enhanced Math**: Dolmino-filtered mathematical content
- **Curated StackExchange**: Higher-quality Q&A content
- **Removed Noisy Sources**: Dropped CC Head, CC News, and general StackExchange
### Technical Improvements
- **Context Extension**: Increased from 1K to 8K token sequences
- **RoPE Updates**: Modified positional encoding for longer context
- **Learning Schedule**: Inverse square root decay from the peak learning rate (see the sketch below)
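As a rough illustration of the schedule change (not the exact training configuration, which is defined in the ModernBERT/Ettin training repos), an inverse square root decay from the peak learning rate can be written as:

```python
import math

def inv_sqrt_lr(step: int, peak_lr: float, decay_start: int) -> float:
    """Hold the peak LR until `decay_start`, then decay as 1/sqrt(step).

    Illustrative form only; the actual warmup length, peak LR, and decay
    constants used for Ettin mid-training are set in the training configs.
    """
    if step <= decay_start:
        return peak_lr
    return peak_lr * math.sqrt(decay_start / step)

# Example: the LR halves once the step count reaches 4x the decay start.
print(inv_sqrt_lr(step=40_000, peak_lr=3e-4, decay_start=10_000))  # 1.5e-04
```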
## 🚀 Usage
For pre-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
### Direct Access
```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True,
)

# Access samples (note: these will be longer sequences than in pre-training)
for sample in dataset:
    text = sample['text']  # up to 8K tokens
    # Process your data...
```
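Because `StreamingDataset` is a PyTorch `IterableDataset`, it can also be iterated in batches. A minimal sketch using the `streaming` library's `StreamingDataLoader` (the batch size is illustrative, not part of the released configs):

```python
from streaming import StreamingDataset, StreamingDataLoader

dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-extension-data',
    local='/tmp/ettin-extension-data',
    shuffle=True,
)

# StreamingDataLoader is a drop-in replacement for torch's DataLoader that
# also tracks the position within the stream for resumable training.
loader = StreamingDataLoader(dataset, batch_size=8)
for batch in loader:
    texts = batch['text']  # list of raw documents, up to 8K tokens each
    # tokenize / collate as needed...
    break
```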
## 📁 Structure
Each folder contains one filtered, higher-quality data source in MDS format (a per-source loading sketch follows the list):
- `arxiv/` - Academic papers from ArXiv
- `books/` - Literature and reference books
- `dclm_dolmino/` - Dolmino-filtered web crawl data (primary source)
- `math_dolmino/` - Filtered mathematical content
- `pes2o/` - Scientific papers
- `reddit/` - Reddit discussion threads
- `stackexchange_dolmino/` - Filtered StackExchange Q&A
- `starcoder/` - Code from GitHub repositories
- `tulu_flan/` - Instruction-following examples
- `wikipedia/` - Wikipedia articles
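If only one source is needed (for example, code), a single subfolder can be fetched and loaded on its own. A sketch using `huggingface_hub` to download just the `starcoder/` shards; the choice of folder and the read pattern are illustrative:

```python
from huggingface_hub import snapshot_download
from streaming import StreamingDataset

# Download only the starcoder/ MDS shards from the dataset repo.
local_root = snapshot_download(
    repo_id='jhu-clsp/ettin-extension-data',
    repo_type='dataset',
    allow_patterns=['starcoder/*'],
)

# Point StreamingDataset at the local copy; no remote is needed.
code_data = StreamingDataset(local=f'{local_root}/starcoder', shuffle=False)
print(code_data[0]['text'][:200])
```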
## 🔗 Related Resources
- **Models**: [Ettin Model Suite](https://huggingface.co/collections/jhu-clsp/encoders-vs-decoders-the-ettin-suite-686303e16142257eed8e6aeb) (17M-1B parameters)
- **Phase 1**: [Pre-training Data](https://huggingface.co/datasets/jhu-clsp/ettin-pretraining-data) (1.7T tokens)
- **Phase 3**: [Decay Phase Data](https://huggingface.co/datasets/jhu-clsp/ettin-decay-data) (50B tokens)
- **Training Order**: [Batch-level Data Order](https://huggingface.co/datasets/jhu-clsp/ettin-data-order)
- **Paper**: [Arxiv link](https://arxiv.org/abs/2507.11412)
- **Code**: [GitHub Repository](https://github.com/jhu-clsp/ettin-encoder-vs-decoder)
## Citation
```bibtex
@misc{weller2025seqvsseqopen,
  title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
  author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
  year={2025},
  eprint={2507.11412},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.11412},
}
``` |