---
license: apache-2.0
---
# HRWKV7-Reka-Flash3-Preview

<div align="center">
<img src="./hxa079.png" style="border-radius: 15px; width: 60%; height: 60%; object-fit: cover; box-shadow: 10px 10px 20px rgba(0, 0, 0, 0.5); border: 2px solid white;" alt="PRWKV" />
</div>

### Model Description
HRWKV7-Reka-Flash3-Preview is an experimental hybrid-architecture model that combines RWKV v7's linear attention mechanism with Group Query Attention (GQA) layers. Built on the Reka-flash3 21B foundation, it replaces most of the Transformer attention blocks with RWKV blocks while strategically retaining some GQA layers to improve performance on specific tasks.
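As a rough illustration of this hybrid layout, the sketch below interleaves placeholder RWKV-style blocks with GQA attention layers at a few chosen depths. It is a minimal sketch under assumed settings, not the actual HRWKV7 or Reka-flash3 implementation: the dimensions, layer count, head counts, and the `gqa_layers` indices are hypothetical placeholders, and `RWKVBlockStub` merely stands in for a real RWKV v7 time-mix block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GQABlock(nn.Module):
    """Grouped Query Attention: many query heads share a smaller set of KV heads."""

    def __init__(self, dim: int, n_heads: int, n_kv_heads: int):
        super().__init__()
        assert dim % n_heads == 0 and n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Expand KV heads so each group of query heads attends to its shared KV head.
        rep = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # quadratic in t
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))


class RWKVBlockStub(nn.Module):
    """Placeholder for an RWKV v7 time-mix block (linear attention, recurrent state)."""

    def __init__(self, dim: int):
        super().__init__()
        self.mix = nn.Linear(dim, dim, bias=False)  # stands in for the real recurrence

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mix(x)


class HybridStack(nn.Module):
    """Mostly RWKV blocks, with a handful of layers kept as GQA (hypothetical schedule)."""

    def __init__(self, dim: int, n_layers: int, gqa_layers: tuple, n_heads: int, n_kv_heads: int):
        super().__init__()
        self.layers = nn.ModuleList([
            GQABlock(dim, n_heads, n_kv_heads) if i in gqa_layers else RWKVBlockStub(dim)
            for i in range(n_layers)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = x + layer(x)  # residual connection around each mixing block
        return x


# Toy configuration (illustrative numbers only, not the real model config).
model = HybridStack(dim=256, n_layers=8, gqa_layers=(2, 5), n_heads=8, n_kv_heads=2)
y = model(torch.randn(1, 16, 256))  # (batch, seq_len, dim) -> same shape
```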