Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ SFT Dataset: [OmniAlign-V](https://huggingface.co/datasets/PhoenixZ/OmniAlign-V)
 
 DPO Dataset: [OmniAlign-V-DPO](https://huggingface.co/datasets/PhoenixZ/OmniAlign-V-DPO),
 
-MM-AlignBench: [
+MM-AlignBench: [VLMEvalkit](https://github.com/open-compass/VLMEvalKit), [Huggingface](https://huggingface.co/datasets/PhoenixZ/MM-AlignBench)
 
 Checkpoints: [LLaVANext-OA-7B](https://huggingface.co/PhoenixZ/LLaVANext-OmniAlign-7B), [LLaVANext-OA-32B](https://huggingface.co/PhoenixZ/LLaVANext-OmniAlign-32B), [LLaVANext-OA-32B-DPO](https://huggingface.co/PhoenixZ/LLaVANext-OmniAlign-32B-DPO)
 
@@ -25,9 +25,9 @@ By integrating OmniAlign-V-DPO datasets in DPO stage, we can further improve the
 
 | Model                | Win Rate | Reward | Better+ | Better | Tie | Worse | Worse+ |
 |----------------------|----------|--------|---------|--------|-----|-------|--------|
-| Claude3.5V-Sonnet    | 84.9     | +51.4  | 70      | 144    |
+| Claude3.5V-Sonnet    | 84.9     | +51.4  | 70      | 144    | 13  | 25    | 0      |
 | GPT-4o               | 81.3     | +49.0  | 81      | 124    | 12  | 31    | 4      |
-| GPT-4V               | 82.5     | +46.0  | 57      |
+| GPT-4V               | 82.5     | +46.0  | 57      | 151    | 12  | 31    | 1      |
 | GeminiFlash1.5-002   | 77.0     | +39.1  | 56      | 138    | 14  | 35    | 9      |
 | LLaVANext-OA-32B-DPO | 74.2     | +36.9  | 49      | 138    | 20  | 40    | 5      |
 | Qwen2VL-72B          | 61.5     | +21.6  | 43      | 112    | 15  | 75    | 7      |
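The Win Rate and Reward columns in the table above appear to be derived from the per-verdict counts: Win Rate as the share of Better+ and Better verdicts, and Reward as a weighted average assigning +100/+50/0/-50/-100 to Better+/Better/Tie/Worse/Worse+. This weighting is an inference from the published numbers (it reproduces every completed row), not something stated in the diff itself; a minimal sketch under that assumption:

```python
# Hypothetical reconstruction of the MM-AlignBench summary metrics from the
# per-verdict counts. The +100/+50/0/-50/-100 weighting is an assumption
# inferred from the table's numbers, not taken from the benchmark's source.

def summarize(better_plus, better, tie, worse, worse_plus):
    total = better_plus + better + tie + worse + worse_plus
    # Win Rate: fraction of samples judged Better+ or Better, as a percentage.
    win_rate = 100.0 * (better_plus + better) / total
    # Reward: weighted average of verdicts, in [-100, +100].
    reward = (100 * better_plus + 50 * better - 50 * worse - 100 * worse_plus) / total
    return round(win_rate, 1), round(reward, 1)

# Claude3.5V-Sonnet row (70 / 144 / 13 / 25 / 0) reproduces (84.9, 51.4).
print(summarize(70, 144, 13, 25, 0))
```

The same function reproduces the GPT-4o row (81, 124, 12, 31, 4) as (81.3, 49.0), which is what suggests this weighting scheme.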