nielsr (HF Staff) committed on
Commit 9040036 · verified · 1 Parent(s): 144d2b0

Improve dataset card for DCLM-10B-Qwen2-binidx with paper, code, and usage


This PR enhances the dataset card for `DCLM-10B-Qwen2-binidx` by:
- Updating metadata with `text-generation` as the task category and adding relevant tags (`linear-attention`, `distillation`, `language-modeling`, `llm`, `rwkv`, `qwen`).
- Linking the dataset to its associated paper, [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
- Including a link to the official RADLADS GitHub repository, which contains the training code and further details.
- Providing clear sample usage instructions for downloading the dataset files.
- Adding the BibTeX citation for the associated paper.

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
@@ -1,3 +1,45 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ tags:
+ - linear-attention
+ - distillation
+ - language-modeling
+ - llm
+ - rwkv
+ - qwen
+ ---
+
+ This repository contains the **DCLM-10B-Qwen2-binidx** dataset, a large-scale text corpus used as training data for the [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005) protocol.
+
+ RADLADS proposes a method for rapidly converting traditional softmax attention transformers into efficient linear attention decoder models. This dataset is crucial for the distillation process, enabling the conversion of large language models like Qwen2.5 into linear attention variants with minimal training tokens while maintaining high inference quality.
+
+ For more details on the RADLADS project, including the training code and converted models, please refer to the official GitHub repository:
+ [https://github.com/recursal/RADLADS](https://github.com/recursal/RADLADS)
+
+ ### Sample Usage
+
+ You can download the `dclm-10B.idx` and `dclm-10B.bin` dataset files using `wget` as follows:
+
+ ```bash
+ mkdir -p data
+ wget --continue -O data/dclm-10B.idx https://huggingface.co/datasets/recursal/DCLM-10B-Qwen2-binidx/resolve/main/dclm-10B.idx?download=true
+ wget --continue -O data/dclm-10B.bin https://huggingface.co/datasets/recursal/DCLM-10B-Qwen2-binidx/resolve/main/dclm-10B.bin?download=true
+ ```
+
+ ### Citation
+
+ If you use this dataset or find the RADLADS work valuable, please consider citing the associated paper:
+
+ ```bibtex
+ @misc{goldstein2025radladsrapidattentiondistillation,
+   title={RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale},
+   author={Daniel Goldstein and Eric Alcaide and Janna Lu and Eugene Cheah},
+   year={2025},
+   eprint={2505.03005},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2505.03005},
+ }
+ ```
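
The committed card shows `wget` commands; as a hedged alternative, the same two files can also be fetched programmatically with `huggingface_hub`. A minimal sketch, assuming the repository ID and filenames from the card above and a local `data/` directory:

```python
# Minimal sketch: download the two binidx files with huggingface_hub instead of wget.
# Repo ID and filenames are taken from the card above; local_dir="data" is an assumption.
from huggingface_hub import hf_hub_download

for filename in ("dclm-10B.idx", "dclm-10B.bin"):
    path = hf_hub_download(
        repo_id="recursal/DCLM-10B-Qwen2-binidx",
        repo_type="dataset",
        filename=filename,
        local_dir="data",
    )
    print(f"Downloaded {filename} -> {path}")
```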