Update README.md
README.md CHANGED
@@ -15,13 +15,12 @@ EdgeRunner-Tactical-7B is a 7 billion parameter language model that delivers pow
 
 ## Highlights
 
-- 7 billion parameters
-- SOTA performance
-- Initialized from Qwen2-Instruct
--
-- Competitive performance with
--
-
+- 7 billion parameters that balance power and efficiency
+- SOTA performance within the 7B model range
+- Initialized from Qwen2-Instruct, leveraging prior advancements
+- Self-Play Preference Optimization (SPPO) applied for continuous training and alignment
+- Competitive performance on several benchmarks with Meta’s Llama-3-70B, Mixtral 8x7B, and Yi 34B
+- Context length of 128K tokens, ideal for extensive conversations and large-scale text tasks
 
 ## Quickstart
 
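The hunk ends at the Quickstart heading without showing its body. Since the model is initialized from Qwen2-Instruct, the standard Hugging Face transformers chat workflow should apply; below is a minimal sketch, assuming a repository id of edgerunner-ai/EdgeRunner-Tactical-7B (not confirmed by this diff) and that transformers, torch, and accelerate are installed.

```python
# Minimal quickstart sketch for a Qwen2-derived chat model.
# The repo id below is an assumption, not taken from this diff;
# substitute the actual Hugging Face repository if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "edgerunner-ai/EdgeRunner-Tactical-7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places layers across devices
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what a 128K-token context window enables."},
]
# Qwen2-style instruct models ship a chat template in the tokenizer config.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For prompts approaching the advertised 128K-token context, the same code applies, but memory use grows with sequence length, so a quantized checkpoint or a multi-GPU `device_map` may be needed in practice.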