Enhance model card with detailed description, key highlights, and usage example (#2)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale

<div align="center">
<img src="https://github.com/recursal/RADLADS/raw/main/assets/radlads_process.png" height=63 alt="RADLADS Conversion Process" />
</div>

**RADLADS** (Rapid Attention Distillation to Linear Attention Decoders at Scale) introduces a protocol for rapidly converting softmax attention transformers into linear attention decoder models. The conversion requires only 350-700 million tokens of distillation, less than 0.005% of the token count used to train the original teacher models.
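
As a purely illustrative sketch of what the distillation step above looks like (this is not the actual RADLADS training objective; the real training pipeline and hyperparameters live in the GitHub repository linked below), one standard formulation minimizes the KL divergence between the teacher's and student's next-token distributions:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=1.0):
    """Hypothetical logit-distillation loss: KL(teacher || student).

    Illustrative helper only; see the RADLADS repo for the real recipe.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean over all positions; scaling by T^2 is conventional for KD
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```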

This repository provides **RADRWKV7Qwen2.5-7B**, a linear attention model converted from Qwen2.5-7B. Models converted with RADLADS maintain quality remarkably close to the original teacher transformer while achieving state-of-the-art downstream performance among linear attention models of their size, and their constant per-token inference cost makes decoding significantly faster, especially at long context.

**Paper:** [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005)

**GitHub Repository:** [recursal/RADLADS](https://github.com/recursal/RADLADS)

## ✨ Key Highlights

* **Efficient & Cost-Effective Conversion**: Converts large softmax attention transformers to linear attention models with minimal additional training tokens; converting a 72B model costs less than $2,000 USD.
* **Quality Preservation**: Converted models maintain quality remarkably close to that of the original teacher transformers.
* **State-of-the-Art Performance**: The converted models achieve state-of-the-art downstream performance across standard benchmarks for linear attention models of their size.
* **Faster Inference**: Leverages linear attention for constant-time inference per token, significantly boosting decoding speed (see the sketch after this list).
* **New Architectures**: Introduces new RWKV-variant architectures (RAD-RWKV6 and RAD-RWKV7) optimized for linear attention.
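
To make "constant-time inference per token" concrete, here is a minimal sketch of a generic kernelized linear-attention decode step. This illustrates linear attention in general, not the actual RAD-RWKV7 state update, which has its own kernel and decay terms; the point is that the recurrent state has a fixed size, so each new token costs the same regardless of how long the sequence already is.

```python
import torch

def linear_attention_step(S, z, q, k, v):
    """One decode step of generic normalized linear attention.

    S: (d_k, d_v) running key-value state; z: (d_k,) running key sum.
    Illustrative only; RAD-RWKV7 uses a different state update.
    """
    S = S + torch.outer(k, v)         # fold the new key/value into the state
    z = z + k                         # accumulate keys for normalization
    out = (q @ S) / (q @ z + 1e-6)    # read out with the current query
    return S, z, out
```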

## How to use

This model is compatible with the Hugging Face `transformers` library. Because the architecture includes custom components, pass `trust_remote_code=True` when loading the model and tokenizer.

### Text Generation

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "recursal/RADRWKV7Qwen2.5-7B"  # this model

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # or torch.float16 if bfloat16 is unsupported
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)

# Prepare input
text = "The quick brown fox jumps over the lazy"
input_ids = tokenizer.encode(text, return_tensors="pt").to(model.device)

# Generate text
generated_ids = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=False,  # set to True for sampling, then adjust temperature/top_p
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # or a dedicated pad token if available
)

# Decode output
output_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(f"Input: {text}\nGenerated: {output_text}")
```
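
For interactive use, you can also stream tokens to stdout as they are generated with `transformers`' built-in `TextStreamer`. This snippet reuses `model`, `tokenizer`, and `input_ids` from the example above; the sampling settings are illustrative defaults, not tuned recommendations for this checkpoint.

```python
from transformers import TextStreamer

# Print tokens as they are produced, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    streamer=streamer,
)
```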

## Citation

If you use this model or find our work valuable, please consider citing the RADLADS paper:

```bibtex
@misc{goldstein2025radladsrapidattentiondistillation,
      title={RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale},
      author={Daniel Goldstein and Eric Alcaide and Janna Lu and Eugene Cheah},
      year={2025},
      eprint={2505.03005},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03005},
}
```