Update README.md
README.md CHANGED
```diff
@@ -12,7 +12,7 @@ license: apache-2.0
 
 ### Model Description
 
-
+RWKV-Reka-3.1 Flash is an RNN hybrid architecture model that combines RWKV v7's linear attention mechanism with Group Query Attention (GQA) layers. Built upon the Reka-flash3.1 21B foundation, this model replaces most Transformer attention blocks with RWKV blocks while strategically maintaining some GQA layers to enhance performance on specific tasks.
 
 - **Developed by:** OpenMOSE
 - **Model type:** Hybrid Linear-Attention Language Model
```
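The interleaving the new description refers to (mostly RWKV blocks, with a few GQA layers retained at selected depths) can be sketched roughly as below. This is an illustrative PyTorch sketch under stated assumptions, not the repository's actual code: the block classes are trivial stand-ins, and the `gqa_layers` indices are hypothetical, since the README does not say which layers keep GQA.

```python
# Minimal sketch of a hybrid RWKV/GQA layer stack.
# Assumptions: layer count, hidden size, and which layers stay GQA are placeholders.
import torch
import torch.nn as nn


class RWKVBlock(nn.Module):
    """Stand-in for an RWKV v7 (linear-attention) block."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(x)


class GQABlock(nn.Module):
    """Stand-in for a Group Query Attention block.

    A real GQA layer would share a smaller number of key/value heads across
    the query heads; here plain MultiheadAttention is used as a placeholder.
    """
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)
        return x + out


def build_hybrid_stack(n_layers=48, d_model=1024, gqa_layers=(11, 23, 35, 47)):
    """Replace most attention blocks with RWKV blocks, keeping GQA at a few depths.

    `gqa_layers` is a hypothetical choice of retained attention layers.
    """
    return nn.ModuleList(
        GQABlock(d_model) if i in gqa_layers else RWKVBlock(d_model)
        for i in range(n_layers)
    )


if __name__ == "__main__":
    stack = build_hybrid_stack()
    x = torch.randn(1, 16, 1024)          # (batch, sequence, hidden)
    for block in stack:
        x = block(x)
    print(x.shape)                         # torch.Size([1, 16, 1024])
```

The design point the sketch illustrates is that only the layer selection changes: every position in the stack takes the same residual input/output shape, so RWKV blocks can drop in for attention blocks wherever full attention is not needed.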