OpenMOSE and nielsr (HF Staff) committed · Commit f67cade (verified) · 1 parent: 80432ef

Enhance model card: Add metadata, paper/code links, and Transformers usage (#1)

- Enhance model card: Add metadata, paper/code links, and Transformers usage (b5bdfb3f74c076186fff4cc18bb6c0df19930130)

Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+102 -43)

README.md (updated content):
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation
- causal-lm
- linear-attention
- rwkv
- reka
- knowledge-distillation
- multilingual
language:
- mul
---

# HRWKV7-Reka-Flash3-Preview

<div align="center">

> It's still far from perfect,
> but I hope you'll bear with me as I continue this journey. :)

## Paper and Project Details

This model is part of the research presented in the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).

The main codebase for the RADLADS project can be found at [https://github.com/recursal/RADLADS-paper](https://github.com/recursal/RADLADS-paper).

### Model Description

HRWKV7-Reka-Flash3-Preview is an experimental hybrid-architecture model that combines RWKV v7's linear attention mechanism with Group Query Attention (GQA) layers. Built on the Reka Flash 3 21B foundation, it replaces most Transformer attention blocks with RWKV blocks while strategically retaining a few GQA layers to improve performance on specific tasks.

- **Developed by:** OpenMOSE
- **Model type:** Hybrid Linear-Attention Language Model
- **Language(s):** Multilingual (inherited from Reka Flash 3 21B)
- **License:** Apache-2.0
- **Base Model:** [Reka Flash 3 21B](https://huggingface.co/RekaAI/reka-flash-3)
- **Year:** 2025

### Architecture Specifications

- **Architecture:** RWKV v7-based "hxa079" architecture + Group Query Attention hybrid
- **Total Layers:** 44 (L44D6144); see the layout sketch below
  - 38 RWKV layers (with RoPE)
  - 6 GQA layers (no RoPE, no position embeddings)
- **Hidden Dimension:** 6144
- **Training Context Window:** 4096 tokens
- **Inference Context Window:** 32768+
- **Training Strategy:** Knowledge distillation following the RADLADS method
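
To make the layer mix concrete, here is a minimal Python sketch of the 44-layer stack with 38 RWKV v7 layers and 6 GQA layers. The card does not state where the 6 GQA layers sit, so the even spacing below is a hypothetical illustration only.

```python
# Minimal sketch of the hybrid stack described above (illustration only).
# Grounded in the card: 44 layers total, 38 RWKV v7 + 6 GQA, hidden size 6144.
# NOT grounded: the positions of the 6 GQA layers; the even spacing is a guess.

NUM_LAYERS = 44
HIDDEN_SIZE = 6144
NUM_GQA_LAYERS = 6

# Hypothetical placement: spread the GQA layers evenly across the depth.
gqa_layer_ids = {round(i * (NUM_LAYERS - 1) / (NUM_GQA_LAYERS - 1)) for i in range(NUM_GQA_LAYERS)}

layer_types = ["gqa" if i in gqa_layer_ids else "rwkv7" for i in range(NUM_LAYERS)]

assert layer_types.count("gqa") == 6 and layer_types.count("rwkv7") == 38
print(f"hidden={HIDDEN_SIZE}, layers={NUM_LAYERS}:", layer_types)
```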
 
## Technical Innovation

The model implements several key improvements over standard RWKV architectures (a schematic sketch follows this list):

1. **Token Shift Removal**: The token-shift mixing with the previous token's hidden state is removed so that the teacher model's weights can be inherited more effectively.
2. **GroupNorm Removal**: Removing GroupNorm helps with training stability issues.
3. **k_first Introduction**: Experimentally adds a residual path that carries the key (k) computed in layer 0 into later layers.
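
A schematic PyTorch sketch of these three changes is below. It illustrates the ideas only and is not the hxa079 implementation (the real kernels live in the training repository linked further down); in particular, the `k_first_mix` parameter name and the blending formula are assumptions for the example.

```python
import torch
import torch.nn as nn

class SimplifiedHxa079Mix(nn.Module):
    """Schematic illustration of the three changes above; not the real hxa079 kernel."""

    def __init__(self, dim: int, layer_id: int):
        super().__init__()
        self.layer_id = layer_id
        self.receptance = nn.Linear(dim, dim, bias=False)
        self.key = nn.Linear(dim, dim, bias=False)
        self.value = nn.Linear(dim, dim, bias=False)
        self.output = nn.Linear(dim, dim, bias=False)
        self.k_first_mix = nn.Parameter(torch.zeros(dim))  # hypothetical parameter name

    def forward(self, x, k_first=None):
        # 1) Token shift removal: x is used as-is, with no mixing toward the previous token.
        r, k, v = self.receptance(x), self.key(x), self.value(x)

        # 3) k_first: layer 0 publishes its key; later layers blend their key toward it.
        if self.layer_id == 0:
            k_first = k
        else:
            k = k + self.k_first_mix * (k_first - k)

        # ... the RWKV-7 recurrent state update over (r, k, v) is omitted here ...
        out = torch.sigmoid(r) * v  # placeholder for the omitted state update

        # 2) GroupNorm removal: no GroupNorm is applied before the output projection.
        return self.output(out), k_first
```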
 
### Hybrid Design Benefits

- **Linear Attention Inference**: RWKV blocks give O(1) memory per token during inference, and the hybrid design reduces the KV cache to roughly 1/7 of a full-GQA model, since only 6 of the 44 layers keep a KV cache (a back-of-the-envelope calculation follows this list).
- **Enhanced Needle Tasks**: Strategic placement of the GQA layers significantly improves performance on needle-in-a-haystack retrieval tasks, addressing a known limitation of pure linear-attention models.
- **Implicit Position Encoding**: Interestingly, the model performs better when RoPE (Rotary Position Embedding) is not applied to the GQA layers, suggesting that the RWKV blocks provide implicit positional encoding.
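
A back-of-the-envelope check of the 1/7 figure: only the layer counts come from this card, while the KV-head count, head dimension, and dtype below are placeholders, so the absolute sizes are illustrative only.

```python
# Rough KV-cache comparison behind the ~1/7 claim above.
# From the card: 44 layers total, 6 of them GQA (RWKV layers keep no KV cache).
# Placeholders (NOT from the card): number of KV heads, head dimension, dtype size.

num_layers     = 44
gqa_layers     = 6
num_kv_heads   = 8        # placeholder
head_dim       = 128      # placeholder
bytes_per_elem = 2        # bf16
context_len    = 32768    # inference context window from the card

def kv_cache_bytes(layers_with_cache: int) -> int:
    # K and V per layer: 2 * heads * head_dim * context_len * bytes
    return layers_with_cache * 2 * num_kv_heads * head_dim * context_len * bytes_per_elem

full_gqa = kv_cache_bytes(num_layers)
hybrid = kv_cache_bytes(gqa_layers)
print(f"full GQA: {full_gqa / 2**30:.2f} GiB, hybrid: {hybrid / 2**30:.2f} GiB, "
      f"ratio: {hybrid / full_gqa:.3f} (roughly 1/7)")
```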
 
## Intended Use

This is an **experimental research model** designed to explore hybrid architectures that combine linear and quadratic attention mechanisms. It is intended for:

- Research into efficient attention mechanisms
- Benchmarking hybrid-architecture performance
- Exploring the limitations of linear attention and possible solutions
- Academic and industrial R&D purposes

## Limitations

- **Experimental Status**: This model is at an experimental stage and may exhibit unexpected behaviors
- **Context Window**: Limited to 4096 tokens during training, though the RWKV architecture theoretically supports longer sequences
- **Performance Variability**: As a hybrid model, performance may vary significantly across task types

## Training Details

- **Training Context Window:** 4096 tokens
- **Training Hardware:** 1x AMD MI300X (about 68 hours)
- **Training Strategy:** 8-bit quantized MLP, frozen embeddings/MLP/head, DeepSpeed ZeRO Stage 1 (a sketch of the recipe follows this list)
- **Base Model Initialization:** Weights initialized from Reka Flash 3 21B
- **Architecture Conversion:** Transformer attention blocks systematically replaced with RWKV blocks, except for the 6 strategically placed GQA layers
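
A minimal sketch of what this recipe could look like in plain PyTorch, assuming Hugging Face-style parameter names (`embed_tokens`, `mlp`, `lm_head`) and a simple logit-distillation loss; the actual RADLADS pipeline used here (see the training code linked below) is more involved.

```python
import torch
import torch.nn.functional as F

def freeze_inherited_weights(model: torch.nn.Module) -> None:
    """Freeze embeddings, MLPs and the LM head; only the new mixing blocks stay trainable."""
    frozen_fragments = ("embed_tokens", "mlp", "lm_head")  # assumed parameter name fragments
    for name, param in model.named_parameters():
        if any(fragment in name for fragment in frozen_fragments):
            param.requires_grad_(False)

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary, averaged over the batch."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Schematic training step: the frozen teacher (Reka Flash 3) runs without gradients,
# and only the student's new RWKV/GQA mixing blocks receive updates.
# with torch.no_grad():
#     teacher_logits = teacher(input_ids).logits
# loss = distillation_loss(student(input_ids).logits, teacher_logits)
# loss.backward(); optimizer.step()
```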
 
## Evaluation

Performance evaluation is ongoing. The model shows promising results in:

- Maintaining base-model capabilities while achieving linear-attention efficiency
- Significantly improved needle-in-a-haystack performance compared to pure RWKV architectures
- Competitive performance on standard language-modeling benchmarks

## Usage with Hugging Face Transformers

This model can be loaded with the `transformers` library. Make sure `transformers` is installed (`pip install transformers`) and pass `trust_remote_code=True` when loading, since the model uses a custom architecture.

```python
from transformers import pipeline, AutoTokenizer
import torch

model_name = "OpenMOSE/HRWKV7-Reka-Flash3-Preview"  # replace with the actual model ID if different

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
pipe = pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # or torch.float16, depending on your GPU and the model precision
    device_map="auto",
    trust_remote_code=True,
)

text = "The quick brown fox jumps over the lazy "
result = pipe(text, max_new_tokens=20, do_sample=True, top_p=0.9, temperature=0.7)[0]["generated_text"]
print(result)
```
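
If you prefer to work with the model object directly, the same load can be done with `AutoModelForCausalLM`. This mirrors the pipeline example above and makes the same assumptions about the model ID and dtype.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "OpenMOSE/HRWKV7-Reka-Flash3-Preview"  # same assumption as above

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # required for the custom hybrid architecture
)

inputs = tokenizer("The quick brown fox jumps over the lazy ", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```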
## Run with RWKV-Infer (as provided by the original authors)

- RWKV-Infer now supports hxa079

```bash
curl http://127.0.0.1:9000/loadmodel -X POST -H "Content-Type: application/json" -d '{"model_filename":"/home/client/Projects/llm/hxa079-reka-flash3-stage2-hybrid.pth","model_viewname":"RWKV HXA079 L38T6 Reka Flash3","model_strategy":"int8","adapter_filename":"","adapter_mode":"", "template":"rekaflash3", "endtoken":"\n <sep>","default_temperature":"0.2", "default_top_p":"0.3", "rope_theta":"8000000.0", "rms_norm_eps":"1e-5"}'
```
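
For reference, the same `/loadmodel` request can be sent from Python. This is simply a translation of the curl command above (the endpoint and payload are taken from it), not a documented RWKV-Infer client API.

```python
# Mirrors the curl call above; assumes a local RWKV-Infer server on port 9000.
import requests

payload = {
    "model_filename": "/home/client/Projects/llm/hxa079-reka-flash3-stage2-hybrid.pth",
    "model_viewname": "RWKV HXA079 L38T6 Reka Flash3",
    "model_strategy": "int8",
    "adapter_filename": "",
    "adapter_mode": "",
    "template": "rekaflash3",
    "endtoken": "\n <sep>",
    "default_temperature": "0.2",
    "default_top_p": "0.3",
    "rope_theta": "8000000.0",
    "rms_norm_eps": "1e-5",
}

response = requests.post("http://127.0.0.1:9000/loadmodel", json=payload, timeout=60)
response.raise_for_status()
print(response.text)
```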
## Thank you for the big help :)

- SmerkyG, whose RADLADS work (https://arxiv.org/abs/2505.03005) inspired this model

## Training Code

- https://github.com/OpenMOSE/RWKVInside (still buggy)

## Model Card Contact

OpenMOSE - 2025

---

*Note: This is an experimental model. Performance characteristics and behaviors may differ from both pure RWKV and standard Transformer architectures. Users should thoroughly evaluate the model for their specific use cases.*

## Citation

If you use this code or find our work valuable, please consider citing RADLADS:

```bibtex
@misc{goldstein2025radladsrapidattentiondistillation,
      title={RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale},
      author={Daniel Goldstein and Eric Alcaide and Janna Lu and Eugene Cheah},
      year={2025},
      eprint={2505.03005},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03005},
}
```