yintongl committed · Commit 809c657 (verified) · 1 parent: da7c2dc

Update README.md

Files changed (1)
  1. README.md +12 -12
README.md CHANGED
@@ -37,19 +37,19 @@ Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git
 lm_eval --model hf --model_args pretrained="Intel/opt-13b-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu --batch_size 32
 ```
 
-| Metric | BF16 | INT4 |
+| Metric | FP16 | INT4 |
 | -------------- | ------ | ------ |
-| Avg. | 0. | 0.5021 |
-| mmlu | 0. | 0.2456 |
-| lambada_openai | 0. | 0.6949 |
-| hellaswag | 0. | 0.5177 |
-| winogrande | 0. | 0.6448 |
-| piqa | 0. | 0.7573 |
-| truthfulqa_mc1 | 0. | 0.2056 |
-| openbookqa | 0. | 0.2780 |
-| boolq | 0. | 0.6801 |
-| arc_easy | 0. | 0.6717 |
-| arc_challenge | 0. | 0.3251 |
+| Avg. | 0.4988 | 0.5021 |
+| mmlu | 0.2455 | 0.2456 |
+| lambada_openai | 0.6858 | 0.6949 |
+| hellaswag | 0.5247 | 0.5177 |
+| winogrande | 0.6472 | 0.6448 |
+| piqa | 0.7590 | 0.7573 |
+| truthfulqa_mc1 | 0.1971 | 0.2056 |
+| openbookqa | 0.2680 | 0.2780 |
+| boolq | 0.6587 | 0.6801 |
+| arc_easy | 0.6713 | 0.6717 |
+| arc_challenge | 0.3294 | 0.3251 |
 
 
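
The Avg. row appears to be the unweighted mean of the ten task accuracies. A quick sanity check in Python on the values from the updated table (the INT4 mean reproduces exactly; the FP16 mean comes out at 0.4987 versus the reported 0.4988, presumably per-task rounding):

```python
# Per-task accuracies copied from the updated table above.
fp16 = [0.2455, 0.6858, 0.5247, 0.6472, 0.7590, 0.1971, 0.2680, 0.6587, 0.6713, 0.3294]
int4 = [0.2456, 0.6949, 0.5177, 0.6448, 0.7573, 0.2056, 0.2780, 0.6801, 0.6717, 0.3251]

# Unweighted mean across the ten tasks.
print(f"FP16 avg: {sum(fp16) / len(fp16):.4f}")  # 0.4987 (table reports 0.4988)
print(f"INT4 avg: {sum(int4) / len(int4):.4f}")  # 0.5021 (matches the table)
```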
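
The lm_eval command above loads the checkpoint through AutoGPTQ (autogptq=True, gptq_use_triton=True). For plain inference outside the harness, a minimal sketch along the same lines, assuming auto-gptq and transformers are installed; the prompt and generation settings are only illustrative:

```python
# Illustrative sketch: load the INT4 GPTQ checkpoint and generate a few tokens,
# mirroring the autogptq=True / gptq_use_triton=True flags passed to lm_eval.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Intel/opt-13b-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_triton=True)

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```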