coltonm committed · verified
Commit d49c02b · 1 Parent(s): 60b4153

Update README.md

Files changed (1): README.md (+6 −7)
README.md CHANGED
@@ -15,13 +15,12 @@ EdgeRunner-Tactical-7B is a 7 billion parameter language model that delivers pow
 
  ## Highlights
 
- - 7 billion parameters
- - SOTA performance for its size
- - Initialized from Qwen2-Instruct
- - Applied Self-Play Preference Optimization ([SPPO](https://arxiv.org/abs/2405.00675)) for continuous training on Qwen2-Instruct
- - Competitive performance with Mixtral-8x7B, Meta Llama-3-70B on several benchmarks.
- - Supports a context length of 128K tokens, making it ideal for tasks requiring many conversation turns or working with large amounts of text
-
+ - 7 billion parameters that balance power and efficiency
+ - SOTA performance within the 7B model range
+ - Initialized from Qwen2-Instruct, leveraging prior advancements
+ - Self-Play Preference Optimization (SPPO) applied for continuous training and alignment
+ - Competitive performance on several benchmarks with Meta’s Llama-3-70B, Mixtral 8x7B, and Yi 34B
+ - Context length of 128K tokens, ideal for extensive conversations and large-scale text tasks
 
  ## Quickstart
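The README's Quickstart section (unchanged by this commit, so not shown in the diff) covers basic usage. As a minimal sketch of what loading and prompting a Qwen2-derived chat model like this typically looks like with standard `transformers` APIs — the Hub repo ID below is an assumption, not taken from this diff:

```python
# Minimal usage sketch, not the README's actual Quickstart code.
# Assumes the model is published on the Hugging Face Hub; the repo ID
# "edgerunner-ai/EdgeRunner-Tactical-7B" is a guess — substitute the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "edgerunner-ai/EdgeRunner-Tactical-7B"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 7B model fits on one GPU
    device_map="auto",
)

# Qwen2-derived instruct models ship a chat template, so we format the
# prompt with apply_chat_template rather than raw strings.
messages = [{"role": "user", "content": "Summarize the rules of chess in three sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```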