TwT-6 committed
Commit a506710 · verified · 1 Parent(s): 3444284

Upload 5 files

README (5).md ADDED
@@ -0,0 +1,93 @@
---
dataset_info:
- config_name: vision_bench_0701
  features:
  - name: question_id
    dtype: string
  - name: instruction
    dtype: string
  - name: image
    dtype: image
  - name: language
    dtype: string
  splits:
  - name: test
    num_bytes: 1654009592.0
    num_examples: 500
  download_size: 1653981819
  dataset_size: 1654009592.0
- config_name: vision_bench_0617
  features:
  - name: question_id
    dtype: string
  - name: instruction
    dtype: string
  - name: image
    dtype: image
  - name: language
    dtype: string
  splits:
  - name: test
    num_bytes: 1193682526.0
    num_examples: 500
  download_size: 1193578497
  dataset_size: 1193682526.0
configs:
- config_name: vision_bench_0701
  data_files:
  - split: test
    path: vision_bench_0701/test-*
- config_name: vision_bench_0617
  data_files:
  - split: test
    path: vision_bench_0617/test-*
---

# WildVision-Bench

We provide two versions of the WildVision-Bench data:
- `vision_bench_0617`: the 500 selected examples that best simulate the vision-arena Elo ranking; this is the same data used in the paper.
- `vision_bench_0701`: a further set of 500 examples obtained through NSFW filtering and manual curation. The leaderboard for this version is still in preparation.

## Evaluation

Please refer to our [Github](https://github.com/WildVision-AI/WildVision-Bench) for evaluation instructions.

If you want to evaluate your own model, please use the `vision_bench_0617` config so that its performance can be fairly compared with the other models on the leaderboard below.

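The test split can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the repo id `WildVision/wildvision-bench` is an assumption here, so substitute this repository's actual id if it differs.

```python
# Minimal sketch: load the 0617 evaluation split and inspect one example.
# NOTE: the repo id below is an assumption; replace it with this dataset's actual id.
from datasets import load_dataset

bench = load_dataset("WildVision/wildvision-bench", "vision_bench_0617", split="test")

example = bench[0]
print(example["question_id"], example["language"])  # string fields
print(example["instruction"])                       # the user instruction
example["image"].save("example_0.png")              # `image` is decoded to a PIL image
```
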
## Leaderboard (`vision_bench_0617`)

| Model | Score | 95% CI | Win Rate | Reward | Much Better | Better | Tie | Worse | Much Worse | Avg Tokens |
| :------------------------------: | :---: | :---------: | :------: | :----: | :---------: | :----: | :---: | :---: | :--------: | :--------: |
| gpt-4o | 89.15 | (-1.9, 1.5) | 80.6% | 56.4 | 255.0 | 148.0 | 14.0 | 72.0 | 11.0 | 142 |
| gpt-4-vision-preview | 79.78 | (-2.9, 2.2) | 71.8% | 39.4 | 182.0 | 177.0 | 22.0 | 91.0 | 28.0 | 138 |
| Reka-Flash | 64.65 | (-2.6, 2.7) | 58.8% | 18.9 | 135.0 | 159.0 | 28.0 | 116.0 | 62.0 | 168 |
| claude-3-opus-20240229 | 62.03 | (-3.7, 2.8) | 53.0% | 13.5 | 103.0 | 162.0 | 48.0 | 141.0 | 46.0 | 105 |
| yi-vl-plus | 55.05 | (-3.4, 2.3) | 52.8% | 7.2 | 98.0 | 166.0 | 29.0 | 124.0 | 83.0 | 140 |
| liuhaotian/llava-v1.6-34b | 51.89 | (-3.4, 3.8) | 49.2% | 2.5 | 90.0 | 156.0 | 26.0 | 145.0 | 83.0 | 153 |
| claude-3-sonnet-20240229 | 50.0 | (0.0, 0.0) | 0.2% | 0.1 | 0.0 | 1.0 | 499.0 | 0.0 | 0.0 | 114 |
| claude-3-haiku-20240307 | 37.83 | (-2.6, 2.8) | 30.6% | -16.5 | 54.0 | 99.0 | 47.0 | 228.0 | 72.0 | 89 |
| gemini-pro-vision | 35.57 | (-3.0, 3.2) | 32.6% | -21.0 | 80.0 | 83.0 | 27.0 | 167.0 | 143.0 | 68 |
| liuhaotian/llava-v1.6-vicuna-13b | 33.87 | (-2.9, 3.3) | 33.8% | -21.4 | 62.0 | 107.0 | 25.0 | 167.0 | 139.0 | 136 |
| deepseek-ai/deepseek-vl-7b-chat | 33.61 | (-3.3, 3.0) | 35.6% | -21.2 | 59.0 | 119.0 | 17.0 | 161.0 | 144.0 | 116 |
| THUDM/cogvlm-chat-hf | 32.01 | (-2.2, 3.0) | 30.6% | -26.4 | 75.0 | 78.0 | 15.0 | 172.0 | 160.0 | 61 |
| liuhaotian/llava-v1.6-vicuna-7b | 26.41 | (-3.3, 3.1) | 27.0% | -31.4 | 45.0 | 90.0 | 36.0 | 164.0 | 165.0 | 130 |
| idefics2-8b-chatty | 23.96 | (-2.2, 2.4) | 26.4% | -35.8 | 44.0 | 88.0 | 19.0 | 164.0 | 185.0 | 135 |
| Qwen/Qwen-VL-Chat | 18.08 | (-1.9, 2.2) | 19.6% | -47.9 | 42.0 | 56.0 | 15.0 | 155.0 | 232.0 | 69 |
| llava-1.5-7b-hf | 15.5 | (-2.4, 2.4) | 18.0% | -47.8 | 28.0 | 62.0 | 25.0 | 174.0 | 211.0 | 185 |
| liuhaotian/llava-v1.5-13b | 14.43 | (-1.7, 1.6) | 16.8% | -52.5 | 28.0 | 56.0 | 19.0 | 157.0 | 240.0 | 91 |
| BAAI/Bunny-v1_0-3B | 12.98 | (-2.0, 2.1) | 16.6% | -54.4 | 23.0 | 60.0 | 10.0 | 164.0 | 243.0 | 72 |
| openbmb/MiniCPM-V | 11.95 | (-2.4, 2.1) | 13.6% | -57.5 | 25.0 | 43.0 | 16.0 | 164.0 | 252.0 | 86 |
| bczhou/tiny-llava-v1-hf | 8.3 | (-1.6, 1.2) | 11.0% | -66.2 | 16.0 | 39.0 | 15.0 | 127.0 | 303.0 | 72 |
| unum-cloud/uform-gen2-qwen-500m | 7.81 | (-1.3, 1.7) | 10.8% | -68.5 | 16.0 | 38.0 | 11.0 | 115.0 | 320.0 | 92 |

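For reference, the Win Rate and Reward columns follow directly from the five judgment counts (each model has 500 judgments; the near-all-tie `claude-3-sonnet-20240229` row suggests it serves as the comparison baseline): wins are `Much Better + Better`, and Reward weights the verdicts as +100 / +50 / 0 / -50 / -100. The sketch below is illustrative rather than the official scoring script, but it reproduces the table's figures.

```python
# Illustrative only (not the official scoring script): derive Win Rate and
# Reward from the judgment counts, using per-example weights +100/+50/0/-50/-100.
def win_rate_and_reward(much_better, better, tie, worse, much_worse):
    n = much_better + better + tie + worse + much_worse  # 500 examples per model
    win_rate = 100.0 * (much_better + better) / n
    reward = (100 * much_better + 50 * better - 50 * worse - 100 * much_worse) / n
    return win_rate, reward

# gpt-4o row: 255 / 148 / 14 / 72 / 11  ->  (80.6, 56.4), matching the table
print(win_rate_and_reward(255, 148, 14, 72, 11))
```
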
## Citation
```
@article{lu2024wildvision,
  title={WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences},
  author={Lu, Yujie and Jiang, Dongfu and Chen, Wenhu and Wang, William Yang and Choi, Yejin and Lin, Bill Yuchen},
  journal={arXiv preprint arXiv:2406.11069},
  year={2024}
}
```

vision_bench_0701/test-00000-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6bea1c829ce2c5a8c2355fc49ca14e6b78dc4be31685de1c3fe2f8f91e1d570
size 382836478
vision_bench_0701/test-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:690df2c7c840ba30b3062b49116c3c2f7b3eedf718a35864d2883d0a569718e6
size 523797223
vision_bench_0701/test-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72f6652e0380e20e4cf6c1002b0b8ba02939be7cd4dbd559ba642f7ab7b1f86e
size 290377772
vision_bench_0701/test-00003-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf03902847737b7c73166502c69c785309e3836efb9ecfa300c8d1f28232b078
size 456970346