Update README.md
README.md
CHANGED
@@ -38,7 +38,7 @@ The training process was geared towards simulating verbal exchanges between doct
 The model got the **TMMLU+** (0 shot) performance using [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) with default settings.
 
 |Details on TMMLU+ (0 shot):<br/>Model | Base Model | STEM | Social Science | Humanities | Other | AVG |
-
+|-----------------------------------------------------|:---------------------:|-----------------|----------------|------------|---------|---------|
 | Taiwan-inquiry_7B_v2.0 |Breeze-7B-Instruct-v1_0| 36.46 | 43.94 | 35.68 | 38.21 | 39.38 |
 | Taiwan-inquiry_7B_v1.1 |Taiwan-inquiry_7B_v1.0 | 36.46 | 43.94 | 35.68 | 38.21 | 39.38 |
 | Taiwan-inquiry_7B_v1.0 |Taiwan-LLM-7B-v2.1-chat| 36.46 | 43.94 | 35.68 | 38.21 | 39.38 |
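The README line above says the scores came from lm-evaluation-harness with default settings. A minimal sketch of such a run is below; the task group name `tmmluplus` and the model path placeholder are assumptions (check `lm-eval --tasks list` in your installed version for the exact task name):

```shell
# Hedged sketch: reproducing a 0-shot TMMLU+ evaluation with
# EleutherAI/lm-evaluation-harness at default settings.
# <model_repo_or_path> is a placeholder, not a confirmed repo id.
pip install lm-eval

lm_eval --model hf \
    --model_args pretrained=<model_repo_or_path> \
    --tasks tmmluplus \
    --num_fewshot 0 \
    --batch_size auto
```

The harness prints per-subject accuracies that correspond to the STEM / Social Science / Humanities / Other columns in the table.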