Update README.md
README.md CHANGED
@@ -32,7 +32,7 @@ OpenLLM represents the first open-source effort to build a LLM specialized for R
 <!-- Provide the basic links for the model. -->

 - **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
-- **Paper:** https://arxiv.org/abs/
+- **Paper:** https://arxiv.org/abs/2406.18266

 ## Intended Use

@@ -74,19 +74,19 @@ print(tokenizer.decode(outputs[0]))

 | Model | Average | ARC | MMLU |Winogrande|HellaSwag | GSM8k |TruthfulQA|
 |--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
-| Llama-2-7b
+| Llama-2-7b-chat | 38.03 | 37.95 | 27.22 | 59.29 | 57.22 | 2.53 | 44.00 |
+|*RoLlama2-7b-Instruct*|***45.71***|***43.66***|***39.70***|***70.34*** | *57.36* |***18.78***| *44.44* |
+|RoLlama2-7b-Chat | 43.82 | 41.92 | 37.29 | 66.68 | **57.91**| 13.47 | **45.65**|

 ## MT-Bench

 | Model | Average | 1st turn | 2nd turn |
 |--------------------|:--------:|:--------:|:--------:|
-| Llama-2-7b-chat | 1.
-|*RoLlama2-7b-Instruct*| ***
-|RoLlama2-7b-Chat |
+| Llama-2-7b-chat | 1.21 | 1.68 | 0.74 |
+|*RoLlama2-7b-Instruct*| ***3.70***|***4.74***| *2.66* |
+|RoLlama2-7b-Chat | TBC | TBC | TBC |
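The Average columns in the committed tables appear to be plain arithmetic means: over the six task scores (ARC, MMLU, Winogrande, HellaSwag, GSM8k, TruthfulQA) in the benchmark table, and over the two turn scores in the MT-Bench table. A minimal sketch to sanity-check that, with the numbers copied from this diff (the RoLlama2-7b-Chat MT-Bench row is still TBC, so it is omitted):

```python
# Sanity-check the Average columns of the README's benchmark tables.
# Assumption: Average = arithmetic mean of the per-task / per-turn scores.

benchmarks = {
    # model: (reported average, [ARC, MMLU, Winogrande, HellaSwag, GSM8k, TruthfulQA])
    "Llama-2-7b-chat":      (38.03, [37.95, 27.22, 59.29, 57.22, 2.53, 44.00]),
    "RoLlama2-7b-Instruct": (45.71, [43.66, 39.70, 70.34, 57.36, 18.78, 44.44]),
    "RoLlama2-7b-Chat":     (43.82, [41.92, 37.29, 66.68, 57.91, 13.47, 45.65]),
}

mt_bench = {
    # model: (reported average, [1st turn, 2nd turn])
    "Llama-2-7b-chat":      (1.21, [1.68, 0.74]),
    "RoLlama2-7b-Instruct": (3.70, [4.74, 2.66]),
}

def check(table):
    for model, (reported, scores) in table.items():
        mean = sum(scores) / len(scores)
        # Allow half a unit in the last reported decimal place (rounding).
        assert abs(mean - reported) < 0.006, (model, mean, reported)
        print(f"{model}: mean={mean:.2f}, reported={reported:.2f}")

check(benchmarks)
check(mt_bench)
```

Every reported average matches the mean of its row to two decimals, which supports reading the Average column as an unweighted mean rather than a weighted aggregate.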